Dataset columns: text (string, 454 to 608k chars), url (string, 17 to 896 chars), dump (string, 9 to 15 chars), source (string, 1 class), word_count (int64, 101 to 114k), flesch_reading_ease (float64, 50 to 104)
I have apache logs that look like the following. How do I process this?

    10.10.6.49 - - [26/Jan/2017:17:15:25 -0800] "POST /thrift/service/MyApiService/ HTTP/1.1" 200 4605 "" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"

Hi Dale, This is a good question! You would do this by writing an Import UDF. We have several examples of Import UDFs that you can search for on this discussion group. But let me take a stab at this. Here is an example UDF for this in Python:

    import json
    import io
    import apache_log_parser

    def parseAccessLog(fullPath, inStream):
        # combined log format: host, logname, user, time, request, status, bytes, referer, user-agent
        line_parser = apache_log_parser.make_parser("%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"")
        for line in inStream.readlines():
            try:
                logData = line_parser(line)
            except Exception:
                continue  # we recommend you do some processing here and generate an ICV row instead
            yield logData

Notice the use of yield above. Yield is a Python feature. It returns a generator which can be used by the consumer of the UDF, which in this case is the Xcalar Compute Environment (XCE). Each line maps to a dictionary instance that is then converted to a generator using yield.

If you need to test your UDF with a main driver, make sure you use a stream generator. Here is example code for this:

    inStream = io.open("access_log")
    gen = parseAccessLog("", inStream)
    for record in gen:
        print "Printing record..."
        for fieldName, fieldValue in record.iteritems():
            print "field {}: value {}".format(fieldName, fieldValue)

Once you write your UDF, you can test it using Xcalar Design (XD). Use the Point to Data Source feature and make sure your browser points to the dataset which needs to be streamed. Choose the JSON format for conversion. Once you have this UDF tested in a command-line shell, you can paste it into the UDF editor in Xcalar and upload it. Using the Import Data Source you can perform the import. Hope this helps!

Cheers, Manoj

To add to Manoj's great reply, I'd like to highlight how Xcalar Design 1.3.1 streamlines the workflow for creating an Import UDF in Jupyter Notebook for a custom-format data source file, like the semi-structured apache access_log @dschaefer wanted to parse. Here are the steps you'd take today: rather than using Python through a command-line shell to debug, and manually copying your UDF into place, the new process shepherds you into Jupyter and back into the process of creating a table from your data source file. Try it out, and give us your feedback.

Best wishes, Mark
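For anyone who wants to sanity-check the parsing logic outside of Xcalar first, here is a minimal sketch. It assumes the apache_log_parser package is installed; the exact field names in the returned dictionary come from that library and may vary between versions:

    import apache_log_parser

    # Same combined-log format string as the UDF above
    parser = apache_log_parser.make_parser(
        '%h %l %u %t "%r" %>s %O "%{Referer}i" "%{User-Agent}i"')

    sample = ('10.10.6.49 - - [26/Jan/2017:17:15:25 -0800] '
              '"POST /thrift/service/MyApiService/ HTTP/1.1" 200 4605 "" '
              '"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
              '(KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"')

    record = parser(sample)
    # Dump every field the library extracted, to confirm the format string matches the data
    for key in sorted(record):
        print("{}: {}".format(key, record[key]))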
https://discourse.xcalar.com/t/white-check-mark-how-do-i-process-semi-structured-apache-access-log/359
CC-MAIN-2019-13
refinedweb
437
75
NAME
itcl::class — create a class of objects

SYNOPSIS
itcl::class className {
    inherit baseClass ?baseClass...?
    constructor args ?init? body
    destructor body
    method name ?args? ?body?
    proc name ?args? ?body?
    variable varName ?init? ?config?
    common varName ?init?
    public command ?arg arg ...?
    protected command ?arg arg ...?
    private command ?arg arg ...?
    set varName ?value?
    array option ?arg arg ...?
}
className objName ?arg arg ...?
objName method ?arg arg ...?
className::proc ?arg arg ...?

DESCRIPTION
The fundamental construct in [incr Tcl] is the class definition. Each class acts as a template for actual objects that can be created. The class itself is a namespace which contains things common to all objects. Each object has its own unique bundle of data which contains instances of the "variables" defined in the class definition. Each object also has a built-in variable named "this", which contains the name of the object. Classes can also have "common" data members that are shared by all objects in a class.

Two types of functions can be included in the class definition. "Methods" are functions which operate on a specific object, and therefore have access to both "variables" and "common" data members. "Procs" are ordinary procedures in the class namespace, and have access only to "common" data members. A class can only be defined once, although the bodies of class methods and procs can be defined again and again for interactive debugging. See the body and configbody commands for details.

Each namespace can have its own collection of objects and classes. The list of classes available in the current context can be queried using the "itcl::find classes" command, and the list of objects, with the "itcl::find objects" command. A class can be deleted using the "delete class" command. Individual objects can be deleted using the "delete object" command.

CLASS DEFINITIONS
- class className definition
- Provides the definition for a class named className. If the class className already exists, or if a command called className exists in the current namespace context, this command returns an error. If the class definition is successfully parsed, className becomes a command in the current context, handling the creation of objects for this class. The class definition is evaluated as a series of Tcl statements that define elements within the class. The following class definition commands are recognized:
- inherit baseClass ?baseClass...?
- Causes the current class to inherit characteristics from one or more base classes. Classes must have been defined by a previous class command, or must be available to the auto-loading facility (see "AUTO-LOADING" below). A single class definition can contain no more than one inherit command. The order of baseClass names in the inherit list affects the name resolution for class members. When the same member name appears in two or more base classes, the base class that appears first in the inherit list takes precedence. For example, if classes "Foo" and "Bar" both contain the member "x", and if another class inherits them with "inherit Foo Bar", then the name "x" refers to "Foo::x".
- constructor args ?init? body
- Declares the constructor for the class. The optional init code fragment is used to invoke base class constructors that require arguments. Variables in the args specification can be accessed in the init code fragment, and passed to base class constructors. 
After evaluating the init statement, any base class constructors that have not been executed are invoked automatically without arguments. This ensures that all base classes are fully constructed before the constructor body is executed. By default, this scheme causes constructors to be invoked in order from least- to most-specific. This is exactly the opposite of the order that classes are reported by the "info heritage" command.
- destructor body
- Declares the destructor for the class. When an object is destroyed, destructors in the class hierarchy are invoked in order from most- to least-specific. This is the order that the classes are reported by the "info heritage" command.
- method name ?args? ?body?
- Declares a method called name. Within the body of another class method, a method can be invoked like any other command, simply by using its name. Outside of the class context, the method name must be prefaced by an object name, which provides the context for the data that it manipulates. Methods in a base class that are redefined in the current class, or hidden by another base class, can be qualified using the "className::method" syntax.
- proc name ?args? ?body?
- Declares a proc called name. A proc is an ordinary procedure within the class namespace. Unlike a method, a proc is invoked without referring to a specific object, and has access only to "common" data members. Within the body of another class method or proc, a proc can be invoked like any other command, simply by using its name. In any other namespace context, the proc is invoked using a qualified name like "className::proc". Procs in a base class that are redefined in the current class, or hidden by another base class, can also be accessed via their qualified name.
- variable varName ?init? ?config?
- Defines an object-specific variable named varName. All object-specific variables are automatically available in class methods; they need not be declared with anything like the global command. If the optional init string is specified, it is used as the initial value of the variable when an object is created. If the optional config code fragment is specified, it is invoked whenever the variable is modified via the built-in configure method. The config code can also be specified outside of the class definition using the configbody command.
- common varName ?init?
- Declares a common variable named varName. Common variables reside in the class namespace and are shared by all objects belonging to the class. They are just like global variables, except that they need not be declared with the usual global command. They are automatically visible in all class methods and procs. Once declared, a common variable can also be set with ordinary set and array commands inside the class definition. This allows common data members to be initialized as arrays.

CLASS USAGE
Once a class has been defined, the class name can be used as a command to create new objects belonging to the class.
- className objName ?args...?
- Creates a new object in class className with the name objName. Remaining arguments are passed to the constructor of the most-specific class, which in turn passes arguments to base class constructors. If objName contains the string "#auto", that string is replaced with an automatically generated name of the form className<number>, where the className part is modified to start with a lowercase letter.
- objName configure ?option? ?value option value...?
- Queries or modifies public variables for object objName. When a public variable is modified, any "config" code associated with it is executed in the context of the class where it was defined. If the "config" code generates an error, the variable is set back to its previous value, and the configure method returns an error.
- objName isa className
- Returns non-zero if the given className can be found in the object's heritage, and zero otherwise.
- objName info option ?args...?
- Returns information related to a particular object named objName, or to its class definition. The option parameter includes the following things, as well as the options recognized by the usual Tcl "info" command:
- objName info class
- Returns the name of the most-specific class for object objName.
- objName info inherit
- Returns the list of base classes as they were defined in the "inherit" command, or an empty string if this class has no base classes.
- objName info heritage
- Returns the current class name and the entire list of base classes in the order that they are traversed for member lookup and object destruction.
- objName info function ?cmdName? ?-protection? ?-type? ?-name? ?-args? ?-body? 
- With no arguments, this command returns a list of all class methods and procs.

CHAINING METHODS/PROCS
Sometimes a base class has a method or proc that is redefined with the same name in a derived class. This is a way of making the derived class handle the same operations as the base class, but with its own specialized behavior. For example, suppose we have a Toaster class that looks like this:

    itcl::class Toaster {
        variable crumbs 0
        method toast {nslices} {
            if {$crumbs > 50} {
                error "== FIRE! FIRE! =="
            }
            set crumbs [expr $crumbs+4*$nslices]
        }
        method clean {} {
            set crumbs 0
        }
    }

We might create another class like SmartToaster that redefines the "toast" method. If we want to access the base class method, we can qualify it with the base class name, to avoid ambiguity:

    itcl::class SmartToaster {
        inherit Toaster
        method toast {nslices} {
            if {$crumbs > 40} {
                clean
            }
            return [Toaster::toast $nslices]
        }
    }

Instead of hard-coding the base class name, we can use the "chain" command like this:

    itcl::class SmartToaster {
        inherit Toaster
        method toast {nslices} {
            if {$crumbs > 40} {
                clean
            }
            return [chain $nslices]
        }
    }

The chain command searches through the class hierarchy for a slightly more generic (base class) implementation of a method or proc, and invokes it with the specified arguments. It starts at the current class context and searches through base classes in the order that they are reported by the "info heritage" command. If another implementation is not found, this command does nothing and returns the null string.

AUTO-LOADING
Class definitions need not be loaded explicitly; they can be loaded as needed by the usual Tcl auto-loading facility. Each directory containing class definition files should have an accompanying "tclIndex" file. Each line in this file identifies a Tcl procedure or [incr Tcl] class definition and the file where the definition can be found. For example, suppose a directory contains the definitions for classes "Toaster" and "SmartToaster". Then the "tclIndex" file for this directory would look like:

    # Tcl autoload index file, version 2.0 for [incr Tcl]
    set auto_index(::Toaster) "source $dir/Toaster.itcl"
    set auto_index(::SmartToaster) "source $dir/SmartToaster.itcl"

When the directory is included in the "auto_path", classes will be auto-loaded as needed when used in an application.

C PROCEDURES
C procedures can be integrated into an [incr Tcl] class definition to implement methods and procs. [incr Tcl] makes this possible by automatically setting up the context before executing the C procedure. This scheme provides a natural migration path for code development. Classes can be developed quickly using Tcl code to implement the bodies. An entire application can be built and tested. When necessary, individual bodies can be implemented with C code to improve performance.
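As a quick illustration of the definition and usage syntax described above, here is a small, self-contained [incr Tcl] session. The class and object names are made up for the example, and it assumes the Itcl package is available:

    package require Itcl

    itcl::class Counter {
        variable count 0     ;# object-specific data member
        common created 0     ;# shared by all Counter objects

        constructor {start} {
            set count $start
            incr created
        }
        method bump {{by 1}} { incr count $by }
        method value {} { return $count }
        proc total {} { return $created }
    }

    Counter c1 10            ;# create an object named c1
    c1 bump
    c1 bump 5
    puts [c1 value]          ;# prints 16
    puts [Counter::total]    ;# a proc invoked via its qualified name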
http://docs.activestate.com/activetcl/8.6/tcl/ItclCmd/class.html
CC-MAIN-2018-43
refinedweb
1,512
53.1
Daniel Pozmanter 2005-02-19
Optionally.

Christoph Zwerschke 2005-02-20
I know I'm nagging, but honestly I think separators based on indentation are not a good idea, because separators are comments and Python does not care about the indentation of comments either. Consider this example:

    def f(x):
    ..#---Label 1
    ..print 1
    #---Label 2
    ..print 2
    ..#---Label 3
    ..print 3

Label 2 clearly belongs inside f(x) between 1 and 3, regardless of its indentation. But DrPython prints it after 1 and 3, which is confusing. My suggestion is that a label should be treated as if it had the same indentation as the following real (non-comment) Python statement.

Another issue with separators: I think a dotted horizontal line in the middle would be a better icon than empty space. Or a very light grey hash (#) symbol. With the empty space, the horizontal line at the left side looks as if it is ending "nowhere."

Daniel Pozmanter 2005-02-21
The problem with a dotted horizontal line is that it may not line up as well on different platforms.

Onto the parsing: the problem is that I'd need to completely change how the sourcebrowser parses to do things the way you want. I actually had labels parsed that way last time, but since drpy only looks at defs, classes and imports, the returns are not picked up (as you noticed in your last note on sourcebrowser behaviour). The question is, if I rewrite the parsing code, will it be drastically slower? I am not at all inclined to completely rewrite the sourcebrowser just for labels if there is a significant speed price to pay. Perhaps using the block code to find the start and end of each block, and then locating labels within each block, would be best.... Any suggestions?

Christoph Zwerschke 2005-02-21
I don't think the labels will slow down parsing if you do it correctly. Maybe I will have a look into it next week if I find some time.

Hmmm. Well, it would if I searched for all blocks of code. However, if I only change the code so that the block end search is done when, and only when, a label is encountered, then the speed penalty is only paid by people using labels. Basically, the idea would be to try the previous block, and if the end of that block is after the position of the label, then add the label at the same indentation as the previous item. If not, try the item before that (and so on). I think this will work. Or maybe it will just be a mess.

After all that, I got it working perfectly :). It is always the simplest solution that seems to work (and be initially hidden). This will be in the next release.
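A sketch of the heuristic Christoph proposed, treating a label as having the indentation of the next real statement. This is illustrative only and not DrPython's actual sourcebrowser code; real parsing would also have to cope with strings and continuation lines:

    def label_indent(lines, label_lineno):
        """Return the indentation a '#---' label should be treated as having:
        the indentation of the next real (non-comment, non-blank) statement."""
        for line in lines[label_lineno + 1:]:
            stripped = line.lstrip()
            if not stripped or stripped.startswith("#"):
                continue  # skip blank lines and other comments
            return len(line) - len(stripped)
        # No statement follows; fall back to the label's own indentation
        own = lines[label_lineno]
        return len(own) - len(own.lstrip())

    source = [
        "def f(x):",
        "    #---Label 1",
        "    print 1",
        "#---Label 2",
        "    print 2",
        "    #---Label 3",
        "    print 3",
    ]
    print(label_indent(source, 3))  # Label 2 -> 4, same as the following print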
http://sourceforge.net/p/drpython/discussion/283802/thread/faefbb4e/
CC-MAIN-2014-35
refinedweb
473
71.14
Now that we have created a game model, as well as some 3D assets to display, we can tie them together and have something to interact with. We will create a couple more scripts, one to serve as the view component on the board and one to serve as the game controller. By the end of this lesson, you should be able to play a sample single player experience of Tic Tac Toe. Board - Create a new project folder named “View” as a subfolder of the “Scripts” folder - Create a new C# script named “Board” in the “View” folder - Select the “Board” prefab in the Project pane - Add the “Board” component to the prefab - Open the script for editing and replace the template code with the following: using UnityEngine; using UnityEngine.EventSystems; using System.Collections; using TicTacToe; public class Board : MonoBehaviour, IPointerClickHandler { // Add Code Here } Notice that I imported the “EventSystems” namespace so that our “Board” class could implement the “IPointerClickHandler” interface. Remember that we added a Physics Raycaster component to the camera, so taps or clicks on the collider for the board will now tie into Unity’s event systems much like a click on a canvas button. public const string SquareClickedNotification = "Board.SquareClickedNotification"; Our board view will use the notification system to notify listeners anytime it has been clicked on. This notification will include an argument which indicates the index of a square that was clicked. [SerializeField] SetPooler xPooler; [SerializeField] SetPooler oPooler; These are convenient references to the poolers on the same “Board” prefab. You wont actually be able to connect the references at this time, because our script wont be able to compile until we implement the interface. public void Show (int index, Mark mark) { SetPooler pooler = mark == Mark.X ? xPooler : oPooler; GameObject instance = pooler.Dequeue().gameObject; int x = index % 3; int z = index / 3; instance.transform.localPosition = new Vector3( x + 0.5f, 0, z + 0.5f ); instance.SetActive(true); } When our “model” (the TicTacToe class) sends a notification that a mark has been placed on the screen, a “controller” class should handle the notification and tell a “view” to be updated and show what has happened. Our “Board” script is operating as the “view” and this public method will be how the controller will tell the view what to show and when to show it. Using the passed “mark” value I can grab one of my two poolers according to the one that matches. Then I can dequeue a new mark and position it based on where the position of the given index would appear. Finally, because the pooler system provides instances in a disabled state, I must activate the object so it will become visible. Note that I could have the view “listen” to the model notification itself, and sometimes I do, but the use of a controller allows for greater flexibility. For example, if I had an A.I. then it might create its own copy of the game, and I wouldn’t want notifications from that “dummy” game to cause the view to update. Of course, my notification system is powerful enough that the board could choose to only listen to notifications from a particular game instance, but even still there might be special rules for when or how to show something. As one example, maybe you want to animate the “winning” move appearing in a special way. The controller would be an ideal place to handle this logic. 
public void Clear () { xPooler.EnqueueAll(); oPooler.EnqueueAll(); } Of course we will need a way to clear the board when a new game begins, so this method allows both poolers to reclaim their pooled instances. void IPointerClickHandler.OnPointerClick (PointerEventData eventData) { Vector3 pos = eventData.pointerCurrentRaycast.worldPosition; int x = Mathf.FloorToInt(pos.x); int z = Mathf.FloorToInt(pos.z); if (x < 0 || z < 0 || x > 2 || z > 2) return; int index = z * 3 + x; this.PostNotification(SquareClickedNotification, index); } When I made the board model, I created it with a 3 unit square surface. I also positioned it so that its lower left corner would appear at the origin of the scene’s coordinate space. This allows me to use the world position of the click and determine which square on the board was targeted. For example, if I click near the right edge of the board I might get an “x” position of around “2.9…” which when floored is “2”. This value is exactly what I want because arrays are zero-based, so the indices I would care about would be 0, 1 and 2. As a precaution I abort the method early if something has allowed me to click on the board and get a coordinate that is out of bounds. Otherwise, I calculate the index and then post a notification so that a game controller knows I attempted input. Now that the script is complete, don’t forget to head back to Unity and connect the pooler references. If you connect them on the prefab in the Project pane, then it will automatically update the instance in our scene. Don’t forget you will need to save the project in order to save the changes to the prefab. Game Controller We’ve done a fair amount of work and haven’t gotten to really see or test anything. Let’s create a quick little demo of what the game might be like if it were only a single player game. - Create a new Empty GameObject called “Game Controller” in the scene - For organization sake, I decided to parent the “Board” and “Canvas”, to the “Game Controller”. Sometimes it can be convenient to be able to search in a hierarchy from one to the other, but it’s also nice to be able to collapse things in the scene hierarchy pane so you can find what you need more quickly. Note that I will NOT create a prefab out of this hierarchy because Unity doesn’t understand nested prefabs - Create a new project folder named “Controller” as a subfolder of the “Scripts” folder - Create a new C# script named “GameController” in the “Controller” folder - Add the “GameController” component to the “Game Controller” GameObject - Open the script for editing and replace the template code with the following: using UnityEngine; using System.Collections; using TicTacToe; public class GameController : MonoBehaviour { // Add Code Here } There is nothing special about this class – just a normal MonoBehaviour which will be used to connect our “model” and “view” together. public Game game = new Game(); public Board board; This class will handle the creation of the game “model” and will also maintain a reference to the board “view”. 
void OnEnable () { this.AddObserver(OnBoardSquareClicked, Board.SquareClickedNotification); this.AddObserver(OnDidBeginGame, Game.DidBeginGameNotification); this.AddObserver(OnDidMarkSquare, Game.DidMarkSquareNotification); } void OnDisable () { this.RemoveObserver(OnBoardSquareClicked, Board.SquareClickedNotification); this.RemoveObserver(OnDidBeginGame, Game.DidBeginGameNotification); this.RemoveObserver(OnDidMarkSquare, Game.DidMarkSquareNotification); } Here I show how to register and unregister for a subset of the notifications that we will need to implement the final version of this project. These few are enough to show a simple demo for now though. void Start () { board = GetComponentInChildren<Board>(); game.Reset(); } Since the “Start” method runs after “OnEnable” I will know that I have already registered for all of the relevent notifications. When I tell the game to “Reset” so that it starts a new game, my notification observer method will be ready to respond to it. void OnBoardSquareClicked (object sender, object args) { if (game.control == TicTacToe.Mark.None) game.Reset(); else game.Place((int)args); } When the board posts a notification that I clicked on it, I will want to do one of two things. If the game has already ended, then I will use that input as the trigger to start a new game. Otherwise, I will tell the game to attempt to take a turn based on the input. Note that I am not updating the board “view” at this time, because it is possible that the attempted move is invalid. For example, a user may have clicked on a square which was already occupied. By simply waiting for the notification that a move was actually made I don’t have to add any duplicate validation logic. void OnDidBeginGame (object sender, object args) { board.Clear(); } Here I make sure that the board “view” stays in sync with the game “model” by clearing the marks whenever a new game begins. void OnDidMarkSquare (object sender, object args) { int index = (int)args; Mark mark = game.board[index]; board.Show(index, mark); } In this method I grab the index of the square that was updated, and then use that index to figure out which mark was placed. Using those two bits of data, I can tell the view all it will need to show the current state of the game. Demo Time Now would be a good time to save the scene and project. Then go ahead and run the scene and see that single-player TicTacToe was quite easily accomplished! You play both sides of the match, just keep clicking on empty squares until a game ends, and optionally click again to reset and play another round. Next we just have to figure out how to play over a network – that’s the real challenge of this project. Summary In this lesson we began by making the board interactive. We used Unity’s EventSystems to register for clicks on the board, and were able to determine the square based on the position data it provided. We also connected it to the poolers so that it could correctly mark the squares with an appropriate prefab. Once we completed the code to drive the board, we made a sample implementation of a Game Controller which observed events from the game model and updated the view accordingly. We have a fully playable game, but only locally. We’ll start working toward multiplayer over a network next! Don’t forget that if you get stuck on something, you can always check the repository for a working version here. 14 thoughts on “Turn Based Multiplayer – Part 3” How do you connect the Poolers like it mentions before the Game Controller section? 
The SetPooler is a component which can be added to your objects via the Inspector pane in Unity. If you don’t have it, go back and check the “Project Setup” step in Part 1. That lesson also discusses how to configure the poolers. All you need to do in this lesson is connect a reference to the ones that were created in that first lesson. For the Game Controller script. It’s worth noting that the code-blocks on this page are incorrect. This tripped me up until I tracked it down. On this page the code-blocks show: void OnEnable () { this.Add); } void OnDisable () { this.Remove); } In the repository you have the corrected version, adding “Game” to the TicTacToe namespace path: void OnEnable () { this.AddObserver(OnBoardSquareClicked, Board.SquareClickedNotification); this.AddObserver(OnDidBeginGame, TicTacToe.Game.DidBeginGameNotification); this.AddObserver(OnDidMarkSquare, TicTacToe.Game.DidMarkSquareNotification); } void OnDisable () { this.RemoveObserver(OnBoardSquareClicked, Board.SquareClickedNotification); this.AddObserver(OnDidBeginGame, TicTacToe.Game.DidBeginGameNotification); this.AddObserver(OnDidMarkSquare, TicTacToe.GameDidMarkSquareNotification); } Thank you for the tutorial, it’s awesome! Thanks for pointing that out. I notice another problem now that you mention it. In the “OnDisable” method the last two statements should be “RemoveObserver” calls instead of “AddObserver” calls. You never updated it… Thanks for the reminder – it’s updated now. Ive done everything the same as you till now and when I execute the game it gives me this error “NullReferenceException: Object reference not set to an instance of an object GameController.OnDidBeginGame (System.Object sender, System.Object args) (at Assets/Scripts/Controller/GameController.cs:38) NotificationCenter.PostNotification (System.String notificationName, System.Object sender, System.Object e) (at Assets/Scripts/Common/NotificationCenter/NotificationCenter.cs:181) NotificationCenter.PostNotification (System.String notificationName, System.Object sender) (at Assets/Scripts/Common/NotificationCenter/NotificationCenter.cs:149) NotificationExtensions.PostNotification (System.Object obj, System.String notificationName) (at Assets/Scripts/Common/NotificationCenter/NotificationExtensions.cs:10) TicTacToe.Game.Reset () (at Assets/Scripts/Model/Game.cs:53) GameController.Start () (at Assets/Scripts/Controller/GameController.cs:27)” I don’t really get the error so i would like if you could help me please This is called a stack trace and is something you should take the time to understand. It tells you the order of events that led to something unexpected. The most recent line (at the top) is where a problem occurred. A NullReferenceException means that you tried to do something with a reference to an object, but instead your reference was pointing to nothing (null). They clarify that with the bit that says “Object reference not set to an instance of an object”. The log even contains the full path to the scripts, method names and even line numbers of where the chain of events happened, so it is very helpful if you learn it. To begin with, I would go look at the “GameController” script at line 38. What objects are referenced there? Your code may have been put in a different order than mine, but if it is the “OnDidBeginGame” method then I am guessing the statement causing a problem is “board.Clear();”, and if so, then your “board” reference probably hasn’t been assigned. 
You should look at the Inspector pane for the Game Controller scene object in Unity and verify that you have connected everything. I have everything in the same order as you and I have the board inside the “Board” Thing on the script in the inspector it just gives me that error when begining the game and the same on line 44 when I try to play those are the only errors so far and ive followed you till now also do I references the prefab or the instance of it in the gamecontroller ? It should be referring to the instance in the scene. Still not working and I can’t seem to get it but thank you nonetheless Most of the time that someone has encountered a Null-ref while following along with my projects, it has had something to do with an incorrect setup on the Unity side. In order to help you figure out “what” is the problem, you can put a Debug.Log statement just before the line that is causing the issue. Something like “Debug.Log(board == null);” If that still doesn’t help, don’t forget that you can download a working copy from the repo and compare it against your own. I had a similar issue… in my case I forgot to parent the board to GameController, so even I connected board in the GameController inspector, the Start method got Board component from GameController’s children, and it was null. Very nice tutorial, thank you!
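The null-reference discussion above comes down to the "board" field never being assigned. A small defensive variant of the GameController's Start method (a sketch, not part of the original tutorial) makes the failure obvious in the Console instead of blowing up later inside OnDidBeginGame:

    void Start ()
    {
        board = GetComponentInChildren<Board>();
        if (board == null)
            Debug.LogError("GameController: no Board component found under '" + name + "'. " +
                           "Did you parent the Board to the Game Controller?");
        game.Reset();
    }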
http://theliquidfire.com/2016/05/05/turn-based-multiplayer-part-3/
CC-MAIN-2020-45
refinedweb
2,460
53.81
The first part of this introduction will discuss two very useful functions: enumerate and zip:

    enumerate(iterable[, start=0])
    zip(iterable_1, iterable_2,...)

enumerate is a special case of zip. enumerate takes any iterable (list, tuple, string, set) as an argument and returns an iterator of 2-tuples, pairing each element of the original sequence with a consecutive integer starting from an arbitrarily chosen starting point. For example:

    # List of last 5 closing prices for American Express:
    AXP = [59.48, 59.15, 60.69, 60.76, 59.63]

    # Associate `AXP` closing prices with an index starting at 0:
    enum_AXP = enumerate(AXP)
    print("0-indexed starting point:")
    for i in enum_AXP:
        print(i)

    # You can start the enumeration at 1 by passing `start=1` to enumerate:
    enum_AXP = enumerate(AXP, start=1)
    print("1-indexed starting point:")
    for i in enum_AXP:
        print(i)

zip is a generalization of enumerate. Whereas enumerate associates the elements of an iterable with a consecutive stream of integers, zip associates elements position-wise from any number of iterables, returning a stream of n-tuples:

    peril = ['BLD_WATR', 'BLD_FIRE', 'CONT_WTHR', 'GL_AIPI', 'EB']
    base = [.0534, .0349, .0368, .0743, .0030]
    expos = [700000, 700000, 240000, 375.95, 940000]

    # calling `zip` on peril, base and expos will result in
    # a 3-tuple iterator, where each item can be returned in
    # any iteration scheme (for loop, list comp, etc...):
    policy = zip(peril, base, expos)
    for peril in policy:
        print(peril)

A list of n-tuples (like policy above) can be "unpacked" into n separate iterables by calling the zip function, but preceding the iterable name with *. The following extracts the 3 original sequences from the list of 3-tuples created above:

    policy = [('BLD_WATR', 0.0534, 700000),('BLD_FIRE', 0.0349, 700000),
              ('CONT_WTHR', 0.0368, 240000),('GL_AIPI', 0.0743, 375.95),
              ('EB', 0.003, 940000)]

    # unpacking each member list:
    peril, base, expos = zip(*policy)
    print(peril)
    print(base)
    print(expos)

Note that when using zip, the resulting stream of n-tuples will only be as long as the shortest iterable passed to the function. zip works best when dealing with sequences of equal length. If the sequences in question are of unequal length, the itertools module offers the zip_longest function, which takes a fillvalue argument to replace missing values:

    import itertools

    s1 = [1, 2, 3]
    s2 = ['a', 'b']
    s3 = [100, 200, 300, 400]

    result = itertools.zip_longest(s1, s2, s3, fillvalue="N/A")
    for r in result:
        print(r)

Python Dictionaries

Dictionaries are unordered key-value pairs. Items are stored and fetched by key as opposed to positional offset. The dict data structure is a mutable mapping, and values may be altered in place. dicts in Python are associative arrays (hash tables), allowing for fast lookups independent of the number of data elements. dicts are variable length, heterogeneous and arbitrarily nestable. Dictionary key-value pairs are not stored by index, and therefore the order in which dict elements are returned is not guaranteed. Keys need not always be strings: any immutable object can be used as a dict key (for example, a tuple). 
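A brief illustration of the properties just listed (mutable in place, heterogeneous values, nestable, tuple keys), using made-up values:

    config = {
        'retries': 3,                      # int value
        'hosts': ['db1', 'db2'],           # list value (dicts nest arbitrarily)
        ('us', 'east'): 'primary region',  # tuple key (immutable, so allowed)
    }
    config['retries'] = 5                  # values may be altered in place
    print(config['hosts'][0], config[('us', 'east')], config['retries'])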
One way to initialize dicts is by associating key-value pairs separated by a colon and surrounded by {}:

    >>> all_pols = {'00001':564.32, '00002':1123.09, '00003':3427.65, '00004':876.38}

dict elements are retrieved using the dictionary name along with the key enclosed in brackets:

    >>> all_pols['00001']
    564.32
    >>> all_pols['00002']
    1123.09
    >>> all_pols['00003']
    3427.65
    >>> all_pols['00004']
    876.38

To iterate over a dict's key-value pairs, it is only necessary to reference the dictionary name as the iterable in the for-loop:

    >>> all_pols = {'00001':564.32, '00002':1123.09, '00003':3427.65, '00004':876.38}
    >>> for item in all_pols:
            print(item, all_pols[item])

    '00003' 3427.65
    '00001' 564.32
    '00002' 1123.09
    '00004' 876.38

In Python 2, calling dict.keys() or dict.values() returned a list of the dict's keys or values. In Python 3, calling dict.keys() or dict.values() returns a dictionary view, which does not replicate the contents of the underlying dictionary, but still reflects immediately any changes to the underlying data structure. In addition, the dictionary view can be used as the iterable in any iteration scheme. If it is necessary to obtain an actual list of the dict's keys or values (or both), simply wrap the dictionary view in a call to list:

    # create list from dict keys:
    dict_keys_list = list(all_pols.keys())

    # create list from dict values:
    dict_values_list = list(all_pols.values())

    # create a list of tuple pairs from dict key-values:
    dict_pairs = list(all_pols.items())

    # display each list. Note that order is arbitrary:
    print("List of all_pols keys : ", dict_keys_list)
    print("list of all_pols values : ", dict_values_list)
    print("list of all_pols (key,value) tuples: ", dict_pairs)

The comprehension syntax can also be used to generate dicts. Key-value pairs are separated by a colon, with the initialization statement surrounded by {}. The dict comprehension is useful in situations where two sequences have a relationship, and referring to the sequence elements by name instead of index offset makes more sense:

    # associate a list of state names with a list of state capitals
    # using a dict comprehension:
    states = ['VT', 'TN', 'NH', 'VA']
    capitals = ['Montpelier','Nashville', 'Concord', 'Richmond']
    state_dict = {i:j for i,j in zip(states, capitals)}

Similar to list comprehensions, dict comprehensions are commonly used for filtering or subsetting existing, larger dicts:

    # depths is a dict of maximum depths in ft.:
    depths = {'Indian':25938,'Atlantic':27490,'Pacific':35797,'Artic':17880}

    # subset depths to return a dict of oceans with max depth > 20,000 ft.:
    depths_gt_20k = {i:j for (i,j) in depths.items() if j > 20000}
    print('depths subset: ', depths_gt_20k)

    # to extract only the ocean names, discarding the depths, use a list comprehension:
    oceans = [ocean for ocean in depths if depths[ocean] > 20000]
    print('Oceans with max depth>20000: ', oceans)

dict values can be any valid Python object. 
For example, we can create a dict of lists of mathematicians by country of origin:

    by_country = {
        'German' : ['Gauss', 'Riemann', 'Hilbert', 'Weierstrass', 'Cantor'],
        'French' : ['Pascal', 'Fermat', 'Lagrange', 'Cauchy'],
        'British': ['Newton', 'Hamilton', 'Hardy']
        }

    # obtain reference to first element of `German` mathematician list:
    first = by_country['German'][0]

    # obtain reference to last element of `German` mathematician list:
    last = by_country['German'][-1]

    # determine the length of each key's associated list:
    for country in by_country:
        iterlen = len(by_country[country])
        print("Country: {} | List Length: {}".format(country, iterlen))

Independent dicts can be combined into a single dict using update:

    # average distance from sun in AU (1 AU ~ 93,000,000 miles)
    inner_planets = {'Mercury':.387, 'Venus':.722, 'Earth':1.000, 'Mars':1.520}
    outer_planets = {'Jupiter':5.20, 'Saturn':9.58, 'Uranus':19.20, 'Neptune':30.10}

    inner_planets.update(outer_planets)
    planets = inner_planets
    print(planets)

Generally, if a key is requested and no such key exists in the dict, an error will be thrown. This can be avoided by using the dict.get(key[, default]) method. Set a default argument to return if a key is requested and it doesn't exist:

    # Behavior without using `dict.get`:
    print(planets['Pluto'])    # would throw an error

    # Behavior using `dict.get`:
    print(planets.get('Pluto', 'No longer a planet'))    # prevents error from being thrown

Any immutable datatype can be used for a dictionary key. Tuples can be used when more than a single field is necessary to differentiate items in the input dataset. For example, if premium is calculated at the peril level, differentiating by policy number alone is not sufficient. Instead, we can use a combination of policy number, location and peril in the form of a tuple as the key, with the associated peril-level premium as the value:

    peril_level = {
        ('00001', 1, 'BLD_FIRE') : 122.31,
        ('00001', 1, 'CONT_FIRE'): 97.64,
        ('00001', 1, 'GL_PREMOP'): 147.77,
        ('00001', 1, 'BLD_WATR') : 73.19,
        ('00001', 1, 'BLD_WTHR') : 123.41,
        ('00002', 1, 'BLD_FIRE') : 432.52,
        ('00002', 1, 'CONT_FIRE'): 400.15,
        ('00002', 1, 'GL_PREMOP'): 100.01,
        ('00002', 1, 'BLD_WATR') : 63.84,
        ('00002', 1, 'BLD_WTHR') : 126.57,
        }

    # print the premium associated with key ('00001',1,'BLD_FIRE'):
    peril_level[('00001',1,'BLD_FIRE')]

Here are some additional methods made available to Python's dict type:

    planets = {
        'Mercury': .387, 'Venus' : .722, 'Earth' : 1.000, 'Mars' : 1.520,
        'Jupiter': 5.20, 'Saturn' : 9.58, 'Uranus' : 19.20, 'Neptune': 30.10
        }

    # return the number of key-value pairs (the `length` of the dict):
    len(planets)             # returns `8`; same as `len(planets.keys())`

    'Mars' in planets        # returns `True`
    'Pluto' in planets       # returns `False`

    planets.popitem()        # remove and return an arbitrary key-value pair
    planets.pop('Jupiter')   # remove `Jupiter` from planets
    planets.clear()          # removes all key-value pairs from dict
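One thing the tuple-keyed layout above makes easy is rolling peril-level premium back up to the policy level. A short sketch using dict.get with a default, operating on the peril_level dict defined earlier:

    policy_totals = {}
    for (policy, location, peril), premium in peril_level.items():
        policy_totals[policy] = policy_totals.get(policy, 0.0) + premium

    # roughly {'00001': 564.32, '00002': 1123.09}, floating-point rounding aside
    print(policy_totals)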
http://www.jtrive.com/python-data-structures-and-iteration.html
CC-MAIN-2020-16
refinedweb
1,416
56.96
Developing an IoT device: humidity sensor for plants using Arduino

This is the first of a collection of posts that will share experiences developing projects with the Internet of Things (IoT), using different methods and modules in many applications, testing and exploring ways to take advantage of the different approaches, with real cases that bring these technologies into our everyday lives.

What about the real project?

I will start by applying IoT in an area that I like a lot, and that was the subject of a thesis in my unfinished master's degree in informatics. I will talk about the Internet of Things for a little indoor garden measured with sensors.

Why is this project Internet of Things?

The Internet of things (IoT) is the network of devices, vehicles, and home appliances that contain electronics, software, actuators, and connectivity which allows these things to connect, interact and exchange data (source: Wikipedia)

This is a project that I have been after for a long time, with a lot of attempts to end up with a good-quality soil moisture measurement for my plants. There are already a lot of projects around that take care of this in different ways, and it is perfectly fine to look at other ones, like those on Instructables.

There are some consolidated projects on the market. A lot of projects have taken advantage of crowdfunding platforms to try to bring these IoT products for indoor gardens to life. Others have already been released and are available on Amazon. One of those is the AeroGarden.

So we can see there are a lot of different designs, interfaces and marketing around these new gadgets, but many of these examples of IoT devices on the market are not something that really adds value for us in the long term. Considering this, each IoT project that you take on needs, like any other software project, a really good design, usability, security and accessibility to be useful for your target users. So it is really important to prototype and test in order to build something that will be sustainable.

When we think about an automated indoor garden, there are a lot of variables involved, and when we consider the biological world as our context, it is really essential to run experiments for a while in different situations. In this post I will focus on the sensors and show different possibilities around their use in the real world to validate this project. I will use a WiFi connection to connect to a server and send data about the humidity to an e-mail address (I could tweet as well) at a 1-hour interval. We could then use this in different situations, so it is a great start for getting a feel for how the sensors behave in a real environment.

Why develop projects with IoT? What do I need to know to start?

If you are new to this IoT world, a warning: you can become addicted, and you will wish to reinvent many devices that you see around. It is now common to buy components with support for different languages like Python, Lua and JavaScript. That makes it easier to develop prototypes for new devices, but these new ways of developing bring new complexities into previously simple structures. With a lot of options and projects, we sometimes insist on recreating failed projects from scratch, so it is always good to have good references before you start. 
Another important consideration about the IoT prototype you decide to build: before you proceed to soldering components, think about how to build it at large scale. There are a lot of variables to be considered here, like the price of the components and their behavior in the real world, because not everything will work as expected. So it is important to test the prototype in different situations and for real. I think it is the same with software development, but when we are dealing with hardware there are new dimensions and variables to be considered, and the software is embedded in the hardware, so this can change everything.

How to develop an IoT project

I will use this project as an example to show a possible architecture, and you can use it as a reference to create a project in the same field as well as in other contexts.

Which components will I use?

Arduino

We need two components for this: the humidity/moisture sensor and the little NodeMCU, a powerful microcontroller with WiFi integrated.

Sensor

The sensor sends a voltage through the soil to measure humidity. When the values increase, it indicates that the soil is dry.

Advantages
- You will not need to develop with the Arduino language; you can develop using Lua or a Python variant called MicroPython
- MicroUSB as the power source

Disadvantages
- For some reason, it is not supported on a 5GHz network

How to connect the components

Schematic view between an Arduino NodeMCU and a sensor

Part 1: Developing the embedded software with MicroPython

I decided to use this microcontroller with Python, and there is a little initial effort to set it up. We have to change the firmware, downloading it and setting up the initial system from scratch, which is different from relying on the embedded Lua platform. But once you upload the firmware you can upload files, run them on the device, and access the libraries that make it possible to handle the WiFi connection.

These firmware setup steps are not required if you wish to develop using Lua, which is the system embedded from the factory in this model of Arduino, the ESP8266.

If you want to build a system on MicroPython and customize your device, you need to follow these steps:

- Install the required drivers on macOS, so that your operating system will recognize the MicroUSB connection
- Download the firmware
- Install esptool
- Erase the flash and reset the device:
    esptool.py --port /dev/cu.wchusbserial1420 erase_flash
- Upload the new firmware:
    esptool.py --port /dev/cu.wchusbserial1420 --baud 460800 write_flash --flash_size=detect 0 ~/Downloads/esp8266-20180511-v1.9.4.bin
- Connect to the device:
    picocom /dev/cu.wchusbserial1420 -b115200

Connecting the sensor

You can now read the sensor values by doing:

    >>> from machine import ADC
    >>> adc = ADC(0)
    >>> adc.read()
    402

MicroPython code

Part 2: Developing the server that will receive the requests

We need a server that will receive the requests, so for this purpose I created an open source project called Ahorta, which will receive the data sent by the sensor over the WiFi connection. See the project on Github.

Basically, it is a NodeJS stack with Sendgrid, hosted on Heroku, to send e-mail about the values that the sensor reads. Our server works as follows:

- The server exposes a REST API with endpoints that receive the sensor's humidity value
- It then triggers the e-mail, using the Heroku stack with Sendgrid (this could easily be any other integration) 
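To make the device side of this flow concrete, here is a minimal MicroPython sketch of the hourly loop described above. The WiFi credentials, server URL and token are placeholders, and it assumes the urequests module is available in your firmware build (it ships with many ESP8266 MicroPython builds, but not all):

    import time
    import network
    import urequests
    from machine import ADC

    SSID = "my-network"                       # placeholder
    PASSWORD = "my-password"                  # placeholder
    SERVER = "http://example.com/readings"    # placeholder endpoint
    TOKEN = "secret-token"                    # must match the token the server expects

    def connect():
        wlan = network.WLAN(network.STA_IF)
        wlan.active(True)
        if not wlan.isconnected():
            wlan.connect(SSID, PASSWORD)
            while not wlan.isconnected():
                time.sleep(1)

    adc = ADC(0)                 # the moisture sensor is wired to A0
    connect()
    while True:
        reading = adc.read()     # higher values mean drier soil on this sensor
        urequests.post(SERVER, json={"humidity": reading, "token": TOKEN}).close()
        time.sleep(3600)         # once per hour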
Security

For security reasons, we need a token that must be present on the request. In our example, an environment variable on the device holds the key, and it has to match the key on the server, which is set through an environment variable as well. This token is stored on the device, the source of the request, and the server validates it so that it does not handle unknown requests.

Result

As a result of these scripts and the Node server, we send an e-mail every hour to the destination e-mail address.

Important to know

- This microcontroller only supports the 2.4GHz network band; luckily I have both frequencies in my home network
- Use the best cables you have, because I had a lot of issues with faulty cables and it was hard to find the source of the problem
- The connection speed should work at 115200 (my device has instructions written on the chip saying to use 9600 when flashing)

Conclusion

I hope this post was helpful for understanding real-world IoT projects. With these devices we can build great projects with different architectures for home automation.

Are you interested in contributing to this project? Follow the project on Github. You can subscribe to learn about opportunities to work on these projects and receive bounties for completed issues.
https://alexanmtz.medium.com/developing-a-iot-device-humidity-sensor-for-plants-using-arduino-aa1e69faa047
CC-MAIN-2022-33
refinedweb
1,420
53.95
I’ve been on a bit of a Web Applications kick for the last few weeks – I thought it was about time to take a step back and get some infrastructure work done as well. One of the tasks on my plate was the configuration of a lab environment within Microsoft Azure. I have a requirement to be running a number of Windows Server 2012 R2 machines that are linked to an Active Directory domain. I can then quickly install whatever software I want on them and start using them. I don’t care where the servers are – they could be on a Hyper-V box (which is what I would normally do). Today I’m going to do them in Azure. This article will kick things off by creating my first machine in Windows Azure plus all the other bits that I need as well. To start with you need to download and install the Azure PowerShell SDK. This is your basic Web Platform Installer, so read the license, agree to the terms and install as normal. You will also need an Azure Subscription – if you haven’t signed up for a free trial yet, why not? One thing I did find was that I needed to totally log out and back in again in order for the Azure PowerShell cmdlets to be recognized. I’m not sure if that is normal or not but it’s a reasonable thing to expect. To create a connection to the Azure environment, use: Add-AzureAccount The command will prompt you via a GUI pop-up to log in – use the same credentials that you use to access Azure. Everything else can be done in the context of this authenticated session. I don’t want to embed credentials into my setup script and it’s not something I do on a regular basis, so this is perfectly fine with me. I also want my machines to communicate over a virtual network on the back-end. Since Virtual Networks are per-subscription, not per-service, it makes sense to set these up ahead of time. This can be done in the portal (and I recommend that way). However if you insist on doing it on the command line then you need an XML document describing your subscriptions network topology. You can then apply this configuration with PowerShell: Set-AzureVNetConfig -ConfigurationFile $tempXmlFile A good plan is to set up your network in the portal and then export it. You can do the export with the following PowerShell: Get-AzureVNetConfig -ExportToFile "C:\temp\MyLab-Network.xml" Creating a Virtual Machine In order to create a virtual machine with PowerShell I need to go through the following steps: - Create an Affinity Group (if it doesn’t exist) - Create a Storage Account (if it doesn’t exist) - Create a Cloud Service (if it doesn’t exist) - Re-connect to the subscription with the storage account - Create the Virtual Machine Before I get started I need the following information: - Location: Where am I going to run my environment? You can see the selections when you drop down the Location field creating a VM in the Portal. - Affinity Group: I get to choose this but it has to be unique within my subscription. - Virtual Network: This comes from my network setup described above. - Storage Account Name: I generate this but it has to be unique within Azure. - Cloud Service Name: I generate this but it has to be unique within Azure. Since I don’t want to re-create everything every time I’m going to store the storage account name and cloud service name for later. I’m creating a Create-Lab.ps1 script to hold all this. 
This is how it starts:

    $VerbosePreference = "Continue"

    $Location = "West US"
    $AffinityGroup = "Lab"
    $VNet = "Lab-Network"
    $Subnet = "Subnet-1"
    $IPNetwork = "10.1.1"

    $StorageAccountFile = "$(Get-Location)\StorageAccount.txt"
    $StorageAccount = "lab" + ([guid]::NewGuid()).ToString().Substring(24)
    if (Test-Path $StorageAccountFile) {
        $StorageAccount = Get-Content $StorageAccountFile
    }

    $CloudServiceFile = "$(Get-Location)\CloudService.txt"
    $CloudService = "lab" + ([guid]::NewGuid()).ToString().Substring(24)
    if (Test-Path $CloudServiceFile) {
        $CloudService = Get-Content $CloudServiceFile
    }

An Affinity Group is a collection of resources (like storage, cloud services, virtual machines in our case) that you want to be close to one another. It allows the Azure Fabric Controller (which is the thing you interact with to create your environment) to more intelligently locate the resources you want. The Affinity Group Name is your choice, but you will get an error if you try to add it twice, so I've done a little test:

    ###
    ### Create the Affinity Group
    ###
    $AffinityGroupExists = Get-AzureAffinityGroup -Name $AffinityGroup -ErrorAction SilentlyContinue
    if (!$AffinityGroupExists) {
        Write-Verbose "[Create-Lab]:: Affinity Group $AffinityGroup does not exist... Creating."
        New-AzureAffinityGroup -Name $AffinityGroup -Location $Location
    } else {
        Write-Verbose "[Create-Lab]:: Affinity Group $AffinityGroup already exists"
    }

Once I have the affinity group created I can move on to the storage account and cloud service. These need to be created once as well. Since they have DNS names within the Azure namespace they need to be unique. What I've done above is create a name based on a globally unique ID. There is still a chance of a collision, so I prepended a specific string – in this case "lab". Choose your own string to make it more unique to you if you bump into problems. (Side note: I wish Azure would allow GUIDs here – it would make it much easier to create unique names).

    ###
    ### Create the Storage Account
    ###
    $StorageAccountExists = Get-AzureStorageAccount -StorageAccountName $StorageAccount -ErrorAction SilentlyContinue
    if (!$StorageAccountExists) {
        Write-Verbose "[Create-Lab]:: Storage Account $StorageAccount does not exist... Creating."
        New-AzureStorageAccount -StorageAccountName $StorageAccount -AffinityGroup $AffinityGroup
        $StorageAccount | Out-File $StorageAccountFile   # remember the generated name for later runs
    } else {
        Write-Verbose "[Create-Lab]:: Storage Account $StorageAccount already exists"
    }

    ###
    ### Create a Cloud Service
    ###
    $CloudServiceExists = Get-AzureService -ServiceName $CloudService -ErrorAction SilentlyContinue
    if (!$CloudServiceExists) {
        Write-Verbose "[Create-Lab]:: Cloud Service $CloudService does not exist... Creating."
        New-AzureService -ServiceName $CloudService -AffinityGroup $AffinityGroup
        $CloudService | Out-File $CloudServiceFile       # remember the generated name for later runs
    } else {
        Write-Verbose "[Create-Lab]:: Cloud Service $CloudService already exists"
    }

Note I am storing the names I have chosen into text files. This allows me to re-use the names later on, but it also allows me to do other things in automation with other scripts. Now that I have created all my base services it's time to create a virtual machine. My first machine is going to be special – it will be my domain controller. I need some more information here:

- The Image Name that I'm going to base my VM on
- The Instance Size, or how many resources I want to give it
- The Admin Username and password for accessing it afterwards

The Image Name is probably the most complex here. 
You can use Get-AzureVMImage to get a full list of the images available to you – both on the marketplace and the ones you have uploaded. I’m going to use the latest Windows Server 2012 R2 Datacenter build that they have provided. To do that I need to do some filtering, like this: $Image = Get-AzureVMImage | ? ImageFamily -eq "Windows Server 2012 R2 Datacenter" | Sort PublishedDate | Select -Last 1 You can get a list of the instance sizes using the following: Get-AzureRoleSize | ? SupportedByVirtualMachines -eq $true | Select InstanceSize,Cores,MemoryInMb For my purposes (a domain controller), a Medium or Basic_A3 instance size seems perfect. I want this to have a static IP address so here is my eventual configuration: ### ### Create the Domain Controller VM ### $Image = Get-AzureVMImage | ? ImageFamily -eq "Windows Server 2012 R2 Datacenter" | Sort PublishedDate | Select -Last 1 $AdminUsername = "itadmin" $AdminPassword = "P@ssw0rd" # Set the VM Size and Image $DC = New-AzureVMConfig -Name LAB-DC ` -ImageName $Image.ImageName ` -InstanceSize Medium -HostCaching ReadWrite # Add credentials for logging in Add-AzureProvisioningConfig -VM $DC -Windows ` -AdminUsername $AdminUsername ` -Password $Adminpassword # Set Networking Information Set-AzureSubnet -VM $DC -SubnetNames $Subnet Set-AzureStaticVNetIP -VM $DC -IPAddress "$IPNetwork.2" # Create the VM New-AzureVM -ServiceName $CloudService -VNetName $VNet -VM $DC The first few IP addresses in the C-class range I constructed are needed for the platform. I started my addressing at 10.1.1.11 to be sure to give Azure enough room. Since a domain controller has to have a static IP address, this is really needed. By default you’ll get a PowerShell endpoint and an RDP endpoint created. You can find out where these endpoints are using PowerShell too: Get-AzureEndpoint -VM (Get-AzureVM | ? Name -eq "LAB-DC") So now you have the cloud service name (which you created and it’s in CloudService.txt) – add .cloudapp.net to that to get the computername, and the port number (from the endpoint command above). You need to go and import the certificate before using Enter-PSSession to connect using the AdminUsername and password you created. You can find out all about this process from this blog post. Note that you can also create additional machines on your lab network in Azure in the same way – they just need a different IP address and name.
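As a sketch of that last step, connecting over PowerShell remoting usually looks something like the following once the management certificate has been imported. Get-AzureWinRMUri is part of the same classic Azure module and saves assembling the host name and port by hand; the names below match the LAB-DC machine and itadmin account created above:

    # Read back the cloud service name we stored earlier
    $CloudService = Get-Content ".\CloudService.txt"

    # Build the WinRM connection URI for the VM and open a remote session
    $uri  = Get-AzureWinRMUri -ServiceName $CloudService -Name "LAB-DC"
    $cred = Get-Credential -UserName "itadmin" -Message "Lab admin password"
    Enter-PSSession -ConnectionUri $uri -Credential $cred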
https://shellmonger.com/2015/03/22/setting-up-a-test-lab-in-azure/
CC-MAIN-2017-51
refinedweb
1,494
52.9
Face embedding calculation from Java

Hi, is there a way to calculate face embedding from the Java port of OpenCV? I have browsed the JavaDoc but I cannot find it. Thanks in advance.

updated 2019-10-22 04:11:30 -0500

you're lucky, here's a complete example:

import org.opencv.core.*;
import org.opencv.dnn.*;
import org.opencv.imgcodecs.*;
import org.opencv.imgproc.*;

public class FaceRecognition {

    public static Mat process(Net net, Mat img) {
        Mat inputBlob = Dnn.blobFromImage(img, 1./255, new Size(96,96), new Scalar(0,0,0), true, false);
        net.setInput(inputBlob);
        return net.forward().clone();
    }

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Net net = Dnn.readNetFromTorch("openface.nn4.small2.v1.t7");
        Mat feature1 = process(net, Imgcodecs.imread("Abdullah_Gul_0004.jpg"));
        Mat feature2 = process(net, Imgcodecs.imread("Abdullah_Gul_0007.jpg"));
        double dist = Core.norm(feature1, feature2);
        if (dist < 0.6) System.out.println("SAME !");
    }
}

Hi. I keep having strange results. I am running the above code verbatim (no changes). I am using OpenCV 4.1.1. I downloaded the Java libraries from Sourceforge and I am running under Windows 10 64 bit. I downloaded the model from here: I downloaded test images from here: When I run the code, it gives me a distance of 0.9325596751685123 between the two faces. Clearly the embedding is not working :(

updated 2019-10-22 10:02:10 -0500

Thanks a lot for the quick reply! However, let me add a note for beginners like me :) When I ran the above code, I got the following error: cannot open <openface.nn4.small2.v1.t7> This is because, after installing OpenCV for Java as explained here, you need to download the model separately from here. Then, you need to pass the full file name for the downloaded model in: Dnn.readNetFromTorch( [fullFileNameHere] );

uhhhmm -- maybe you found something ! code was adapted from the js sample but it looks like the js code is working on RGBA, not BGR

Asked: 2019-10-21 23:26:25 -0500 Seen: 52 times Last updated: Oct 22
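One general thing to check when distances come out larger than expected (offered as a diagnostic sketch, not as a confirmed fix for the result reported above) is whether the two embeddings are unit-length before they are compared, since a fixed threshold like 0.6 only makes sense on a consistent scale:

// Hypothetical diagnostic snippet; assumes feature1/feature2 come from process() above.
Mat norm1 = new Mat();
Mat norm2 = new Mat();
Core.normalize(feature1, norm1, 1.0, 0.0, Core.NORM_L2);   // scale each embedding to unit L2 norm
Core.normalize(feature2, norm2, 1.0, 0.0, Core.NORM_L2);

double rawDist  = Core.norm(feature1, feature2);   // distance between raw network outputs
double unitDist = Core.norm(norm1, norm2);         // distance between unit-length embeddings
double cosine   = norm1.dot(norm2);                // cosine similarity of the two faces

System.out.println("raw=" + rawDist + " unit=" + unitDist + " cos=" + cosine);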
https://answers.opencv.org/question/220163/face-embedding-calculation-from-java/?sort=latest
CC-MAIN-2019-51
refinedweb
370
60.51
From: Jonathan Turkanis (technews_at_[hidden])
Date: 2005-02-25 16:24:02

Gennadiy Rozental wrote:
> "Jonathan Turkanis" :
>>?

Good point. I did some poking around, and what seems to be happening is that random_shuffle is defined in namespace _STL but is brought into namespace std by a using directive:

namespace std { using namespace _STLPORT_STD }

where _STLPORT_STD is #define'd as _STL. I might be wrong, because I'm unfamiliar with the structure of the STLPort source, but it looks like this is what is happening. It therefore looks like the problem is an incompatibility between STLPort 4.5 and the Borland C runtime library.

>>?

I'm not sure, but I don't know the exact versions of STLPort which have shipped with Borland 5.x.

>> Should I commit?
> Ok.

All right, done.

>> Jonathan

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2005/02/80962.php
CC-MAIN-2021-04
refinedweb
154
76.82
NAME

assert - insert program diagnostics

SYNOPSIS

#include <assert.h>

void assert(int expression);

DESCRIPTION

The assert() macro inserts diagnostics into programs. When it is executed, if expression is false (that is, compares equal to 0), assert() writes information about the particular call that failed (including the text of the argument, the name of the source file and the source file line number - the latter are respectively the values of the preprocessing macros __FILE__ and __LINE__) on stderr and calls abort().

Forcing a definition of the name NDEBUG, either from the compiler command line or with the preprocessor control statement #define NDEBUG ahead of the #include <assert.h> statement, will stop assertions from being compiled into the program.

RETURN VALUE

The assert() macro returns no value.

ERRORS

No errors are defined.

EXAMPLES

None.

APPLICATION USAGE

None.

FUTURE DIRECTIONS

None.

SEE ALSO

abort(), stderr, <assert.h>.

CHANGE HISTORY

Derived from Issue 1 of the SVID.
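The page gives no example; a minimal C illustration of the behaviour it describes (my own sketch, not part of the specification text):

/* Compile normally to get the diagnostic, or with -DNDEBUG to compile
   the assertion out entirely. */
#include <assert.h>
#include <stdio.h>

static int divide(int a, int b)
{
    assert(b != 0);            /* aborts with file/line info if b == 0 */
    return a / b;
}

int main(void)
{
    printf("%d\n", divide(10, 2));
    printf("%d\n", divide(10, 0));  /* triggers the assertion unless NDEBUG is defined */
    return 0;
}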
http://pubs.opengroup.org/onlinepubs/7908799/xsh/assert.html
crawl-003
refinedweb
138
56.35
Solution for Programming Exercise 7.2

THIS PAGE DISCUSSES ONE POSSIBLE SOLUTION to the following exercise from this on-line Java textbook:

Discussion

The applet contains five components. There are several ways to lay them out. A GridLayout with five rows certainly won't work, because the TextArea should be taller than the other components. One possible layout is to use a GridLayout with two rows. The TextArea would occupy the first row. The bottom half would contain a Panel that holds the other four components. (A GridLayout with two columns and one row would also work, if you wanted an applet that was wider and not so tall. You could put the TextArea in the left half and the other components in a Panel in the right half.) However, I decided to use a BorderLayout. The TextArea occupies the Center position, and the South position is occupied by a Panel that contains the other components. The Panel uses a GridLayout with four rows. In a BorderLayout, the height of the component in the South position is the preferred height of that component. It is just as tall as it wants to be. The TextArea gets any left-over space. So, it will be just as large as it can be without crowding the other components.

Once the choice is made, writing the init() method is not hard. The actionPerformed() method will be called when the user clicks the button. Since there is only one possible source for the ActionEvent, I don't even bother checking where the event came from. This method just has to get the text from the TextArea, do the counting, and set the labels.

The only interesting part is counting the words. Back in Exercise 3.4, words such as "can't", that contain an apostrophe, were counted as two words. This time around, let's handle this special case. Two letters with an apostrophe between them should be counted as part of the same word. The algorithm for counting words is still

wordCt = 0
for each character in the string:
    if the character is the first character of a word:
        Add 1 to wordCt

but testing whether a given character is the first character in a word has gotten a little more complicated. To make the test easier, I use a boolean variable, startOfWord. The value of this variable is set to true if the character is the start of a word and to false if not. That is, the algorithm becomes:

wordCt = 0
for each character in the string:
    Let startOfWord be true if at start of word, false otherwise
    if startOfWord is true:
        Add 1 to wordCt

The use of a "flag variable" like startOfWord can simplify the calculation of a complicated boolean condition. The value is computed as a series of tests. The first test checks whether the character in position i is a letter. If it is not, then we know that it can't be the start of a word, so startOfWord is false. If it is a letter, it might be the start of a word, so we go on to make additional tests. Note that if we get to the other tests at all, we already know that the character in position i is a letter. And so on. This style of "cascading tests" is very useful. In each test, we already have all the information from the previous tests. Note that the cascade effect works only with "else if". Using "if" in place of "else if" in the preceding code would not give the right answer. (You should be sure to understand why this is so.)

The Solution

/* In this applet, the user types some text in a TextArea and presses a button. The applet computes and displays the number of lines in the text, the number of words in the text, and the number of characters in the text.
A word is defined to be a sequence of letters, except that an apostrophe with a letter on each side of it is considered to be a letter. (Thus "can't" is one word, not two.) */ import java.awt.*; import java.awt.event.*; import java.applet.*; public class TextCounterApplet extends Applet implements ActionListener { TextArea textInput; // For the user's input text. Label lineCountLabel; // For displaying the number of lines. Label wordCountLabel; // For displaying the number of words. Label charCountLabel; // For displaying the number of chars. public void init() { setBackground(Color.darkGray); /* Create the text input area and make sure it has a white background. */ textInput = new TextArea(); textInput.setBackground(Color.white); /* Create a panel to hold the button and three display labels. These will be laid out in a GridLayout with 4 rows and 1 column. */ Panel south = new Panel(); south.setLayout( new GridLayout(4,1,2,2) ); /* Create the button, set the applet to listen for clicks on the button, and add it to the panel. */ Button countButton = new Button("Process the Text"); countButton.setBackground(Color.lightGray); countButton.addActionListener(this); south.add(countButton); /* Create each of the labels, set their colors, and add them to the panel. */ lineCountLabel = new Label(" Number of lines:"); lineCountLabel.setBackground(Color.white); lineCountLabel.setForeground(Color.blue); south.add(lineCountLabel); wordCountLabel = new Label(" Number of words:"); wordCountLabel.setBackground(Color.white); wordCountLabel.setForeground(Color.blue); south.add(wordCountLabel); charCountLabel = new Label(" Number of chars:"); charCountLabel.setBackground(Color.white); charCountLabel.setForeground(Color.blue); south.add(charCountLabel); /* Use a BorderLayout on the applet. The text area occupies the Center position. The panel that holds the button and labels is in the South position. Note that the text area will be sized to fill the space that is left after the panel is assigned its preferred height. */ setLayout( new BorderLayout(2,2) ); add(textInput, BorderLayout.CENTER); add(south, BorderLayout.SOUTH); } // end init(); public Insets getInsets() { // Leave a 2-pixel border around the edges of the applet. return new Insets(2,2,2,2); } public void actionPerformed(ActionEvent evt) { // Respond when the user clicks on the button by getting // the text from the text input area, counting the number // of chars, words, and lines that it contains, and // setting the labels to display the data. String text; // The user's input from the text area. int charCt, wordCt, lineCt; // Char, word, and line counts. text = textInput.getText(); charCt = text.length(); // The number of characters in the // text is just its length. /* Compute the wordCt by counting the number of characters in the text that lie at the beginning of a word. The beginning of a word is a letter such that the preceding character is not a letter. This is complicated by two things: If the letter is the first character in the text, then it is the beginning of a word. If the letter is preceded by an apostrophe, and the apostrophe is preceded by a letter, than its not the first character in a word. */ wordCt = 0; for (int i = 0; i < charCt; i++) {. if (startOfWord) wordCt++; } /* The number of lines is just one plus the number of times the end of line character, '\n', occurs in the text. */ lineCt = 1; for (int i = 0; i < charCt; i++) { if (text.charAt(i) == '\n') lineCt++; } /* Set the labels to display the data. 
*/ lineCountLabel.setText(" Number of Lines: " + lineCt); wordCountLabel.setText(" Number of Words: " + wordCt); charCountLabel.setText(" Number of Chars: " + charCt); } // end actionPerformed() } // end class TextCounterApplet [ Exercises | Chapter Index | Main Index ]
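Written out as code, the cascading test for startOfWord described in the discussion might look like this (a sketch based on the rules above, not necessarily the textbook's exact lines):

boolean startOfWord;  // Is character i the start of a word?

if ( ! Character.isLetter(text.charAt(i)) )
    startOfWord = false;                 // not a letter, so not the start of a word
else if (i == 0)
    startOfWord = true;                  // a letter at the very start of the text
else if ( Character.isLetter(text.charAt(i-1)) )
    startOfWord = false;                 // previous character is a letter, so this continues a word
else if ( text.charAt(i-1) == '\'' && i > 1 && Character.isLetter(text.charAt(i-2)) )
    startOfWord = false;                 // apostrophe between letters, as in "can't"
else
    startOfWord = true;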
http://math.hws.edu/eck/cs124/javanotes3/c7/ex-7-2-answer.html
CC-MAIN-2017-47
refinedweb
1,234
65.93
tickr 0.6.1-1 source package in Debian Changelog tickr (0.6.1-1) unstable; urgency=low * Add: 'quick setup' thing (in tickr_quicksetup.c) which is launched at program startup if config file doesn't exist. * Little improvements in layout of 'feed picker win' and 'preferences win'. * Fix a segfault that happens when trying to export params and no config file exists yet. * Make several windows that should not be resized by user, unresizable. * Fix Launchpad bug #1007346: When 'window always-on-top' is disabled, 'visible on all user desktops' stops working. * If mouse wheel scrolling applies to speed (or feed), then Ctrl + mouse wheel scrolling applies to feed (or speed.) * No real code changes in libetm, only in comments, so no need for a new version number. * Update tickr_helptext.c and tickr.1 (man page.) * Add new cli option 'no-ui' (similar to 'instance-id') used by new IF_UI_ALLOWED macro and remove all #if USE_GUI occurences. * In tickr_list.c, free listfname before using it. Fixed by swapping 2 lines: warning(FALSE, 4, "Can't save URL list ", listfname, ...); l_str_free(listfname); * Use/add #define FONT_MAXLEN 68 ARBITRARY_TASKBAR_HEIGHT 25 to replace a few 'magic' numeric values. * Rename: rss_title/description(_delimiter) -> item_title/description(_delimiter) then add new param: feed_title(_delimiter). Now we have: feed title / item title / item description. * Use table in resource properties window. * Fix a bug in f_list_load_from_file() in tickr_list.c which uncorrectly retrieves any feed title string containing TITLE_TAG_CHAR when TITLE_TAG_CHAR has not been removed from string first, for instance: 'NYT > World' -> ' World'. * New param: disable left-click. * Add 'check for updates' feature. * Launch 'import OPML file' if feed list doesn't exist. * Remove code changing get_params()->disable_popups value in START/END_PAUSE_TICKER_WHILE_OPENING macros which prevents this setting to be saved and add START/END_PAUSE_TICKER_ENABLE_POPUPS_WHILE_OPENING new macros. Which ones to use depends on context. * Move: #ifdef G_OS_WIN32 extern FILE *stdout_fp, *stderr_fp; #endif from *.c into tickr.h. * Default always-on-top setting changed to 'n' (so that tickr is not intrusive by default.) -- Emmanuel Thomas-Maurin <email address hidden> Mon, 04 Jun 2012 14:23:24 +0200 Upload details - Uploaded by: - Emmanuel Thomas-Maurin on 2012-06-06 - Original maintainer: - Emmanuel Thomas-Maurin - Architectures: - any - Section: - net - Urgency: - Low Urgency See full publishing history Publishing Builds Downloads Available diffs - diff from 0.6.0-2 to 0.6.1-1 (90.7 KiB) No changes file available.
https://launchpad.net/debian/+source/tickr/0.6.1-1
CC-MAIN-2019-22
refinedweb
398
60.01
Antoine Levy Lambert wrote:
> the import task in 1.8.x supports importing resources which can be
> URLs. So maybe the main build file cannot be http based, but the meat
> of the build file can. So maybe you can have a stub of a build file on
> the file system ?

The scenario for this is that we frequently have to bootstrap our deployment scheme on machines with not much more than a JVM[1], but which can "call home". Being able to pull in the environment from the mother ship means things are easier to script.

I did not know the import task now knows URLs, which could mean that the functionality I need could be satisfied simply by downloading the remote build.xml file to a temporary local file (with basedir set to the current directory), executing it, and then deleting it when the JVM exits.

[1] I'd also like an easy ant deployment scheme on such a machine, but that is an issue for another day.

> to build the optional tasks, you need to download the dependencies
> using ant -f fetch.xml ... (there are additional parameters). This can
> put the libraries in lib/optional where the bootstrap process uses
> them. I think currently NetRexx is the only commercial dependency
> which cannot be fetched that way, at least for my JDK 1.5 on Mac which
> includes JMF and JAI in the extensions.

Thank you! I now have a compiling tree. I'll have a closer look :)

--
Thorbjørn Ravn Andersen  "...plus... Tubular Bells!"

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
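A local stub along the lines Antoine describes might look something like the following; this is purely illustrative, the URL and target name are placeholders, and the exact way a URL resource is handed to <import> in Ant 1.8 should be checked against the import task documentation before relying on it:

<?xml version="1.0"?>
<!-- Hypothetical stub build file kept on the machine; the meat of the
     build is pulled in over HTTP from the "mother ship". -->
<project name="bootstrap-stub" default="deploy" basedir=".">
    <import>
        <url url="http://mothership.example.com/build/common-build.xml"/>
    </import>

    <!-- local overrides, if any, go here -->
</project>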
http://mail-archives.eu.apache.org/mod_mbox/ant-dev/201001.mbox/%[email protected]%3E
CC-MAIN-2020-10
refinedweb
275
73.98
Originally posted by Michael Ernest: Consider that main() and test() are both static methods in class A. Since they both get initialized at class-loading time, they're visible to each other. main() just happens to be the static method that the JVM looks for when a class is invoked by name. So when main() calls test(), it only has to look up a method in its own namespace. We don't typically spend much time thinking about this behavior in Java because it doesn't involve dynamic object behavior, but it is in fact expected behavior. You can't call a static method from an object without a reference, but one static method can call another that way if they're in the same class.
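A minimal example of what the poster is describing (my own sketch, not taken from the thread):

// One static method calling another in the same class, no object reference needed.
public class A {

    static void test() {
        System.out.println("test() called");
    }

    public static void main(String[] args) {
        test();        // resolved within class A itself; equivalent to A.test()
    }
}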
http://www.coderanch.com/t/235648/java-programmer-SCJP/certification/basic-concept
CC-MAIN-2015-40
refinedweb
123
68.6
On 07/15/2011 04:28 AM, Pádraig Brady wrote:
> On 15/07/11 08:50, Paul Eggert wrote:
>> On 07/14/11 17:25, Pádraig Brady wrote:
>>> I'm not sure about defining these to 0 in gnulib.
>>> That will silently ignore the intent of a program on certain platforms.
>
> Absolutely. What I was getting at was that it's probably better to leave
> the following to the app too:
>
> #ifndef SA_RESETHAND
> # define SA_RESETHAND 0
> /* Now the app writer knows they need to handle this case */
> #endif

Can't the gnulib sigaction module be taught to fake SA_RESETHAND, by wrapping the user's signal-handler inside a gnulib shim, where the shim resets signals correctly before calling the user's handler? Granted, it's not trivial, but I think that it is still within the realm of technical possibilities. At which point, SA_RESETHAND would always be non-zero once you use the gnulib <signal.h>.

>> On NonStop, if you invoke signal(), it uses the SA_RESETHAND semantics
>> (POSIX allows this). Conversely, if you invoke sigaction(), NonStop
>> always behaves as if SA_RESTART and SA_RESETHAND are zero, i.e., it
>> doesn't support either feature with sigaction.
>
> Thanks for checking that.

Seems like there's some room for improvement in the gnulib sigaction module then.

--
Eric Blake   address@hidden   +1-801-349-2682
Libvirt virtualization library

signature.asc
Description: OpenPGP digital signature
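The shim Eric describes is never spelled out in the thread; very roughly, the idea might look like this (a sketch only, ignoring the signal-safety, locking and per-flag bookkeeping a real gnulib module would need):

/* Rough sketch of emulating SA_RESETHAND on a platform whose sigaction()
   ignores it: remember the user's handler, install a wrapper, and have the
   wrapper restore SIG_DFL before chaining to the user's handler. */
#include <signal.h>

static void (*user_handler[NSIG])(int);   /* NSIG is not strictly portable; real code would size this differently */

static void resethand_shim(int sig)
{
    signal(sig, SIG_DFL);            /* emulate the "reset to default" part */
    if (user_handler[sig])
        user_handler[sig](sig);      /* then run the handler the app installed */
}

int install_with_resethand(int sig, void (*handler)(int))
{
    struct sigaction act;
    user_handler[sig] = handler;
    sigemptyset(&act.sa_mask);
    act.sa_flags = 0;                /* platform ignores SA_RESETHAND anyway */
    act.sa_handler = resethand_shim;
    return sigaction(sig, &act, NULL);
}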
http://lists.gnu.org/archive/html/bug-coreutils/2011-07/msg00083.html
CC-MAIN-2015-27
refinedweb
229
62.78
- Tutoriais - Survival Shooter tutorial - Player Character Player Character Verificado com a versão: 4.6 - Dificuldade: Principiante This is part 2 of 10 of the Survival Shooter tutorial, in which you will setup and code the player character, as well as create an animator state machine. Player Character Principiante Survival Shooter tutorial Transcrições - 00:03 - 00:05 So what we're going to do is setup our player now - 00:05 - 00:07 which is going to get him moving and jazzing and - 00:07 - 00:09 beboping and doing all that stuff. - 00:09 - 00:11 What we're going to do is we're going to go ahead and locate - 00:11 - 00:13 the player model, which is going to be located - 00:13 - 00:15 in Models folder, and we're going to - 00:15 - 00:17 expand that and we're going to look in the Characters folder - 00:17 - 00:19 and we should see a player, - 00:19 - 00:21 there we go, Player, it should be right there. - 00:21 - 00:23 So what we're going to do is we're going to go ahead and - 00:23 - 00:25 drag that either in to the scene or in to - 00:25 - 00:27 the hierarchy view. - 00:27 - 00:29 So we'll just click and drag and drop and there he is. - 00:31 - 00:35 There he is, looking pretty swag. - 00:35 - 00:39 And so obviously if we just click and drag him to the scene he might - 00:39 - 00:40 go where ever, you know, so we need to - 00:40 - 00:42 place him at a specific spot so we're going to be sure that - 00:42 - 00:45 the position is at (0, 0, 0). - 00:46 - 00:48 If it is then you're golden, if it's not then - 00:48 - 00:50 go ahead and set it to (0, 0, 0) - 00:50 - 00:52 by typing (0, 0, 0) or again we - 00:52 - 00:54 have this cog or gear icon - 00:54 - 00:56 and you can just click reset or reset position, - 00:56 - 00:59 which will put it where ever we want it to be on the origin. - 00:59 - 01:01 Now we need specific things to - 01:01 - 01:03 interact with the player in a very special way. - 01:03 - 01:05 And so we want to ensure that we have properly - 01:05 - 01:07 tagged our player so certain things will know - 01:07 - 01:09 when they've come in contact with the player. - 01:09 - 01:13 To do that we have this Tag drop down - 01:13 - 01:15 here in the inspector and it currently says Untagged. - 01:15 - 01:17 And what we're going to do is we're going to click that drop down - 01:17 - 01:19 and we are going to select Player. - 01:19 - 01:21 Thus tagging our player - 01:21 - 01:23 as the player so that the player will - 01:23 - 01:25 behave appropriately when they are playing. - 01:27 - 01:28 Alright, fantastic. - 01:28 - 01:30 Now what we need to do is - 01:30 - 01:32 we're going to go ahead and create - 01:32 - 01:34 an animation controller for our player, - 01:34 - 01:36 which is going to allow us to - 01:36 - 01:38 have all the neat little - 01:38 - 01:40 animation effects of moving and turning and - 01:40 - 01:41 doing all that stuff, - 01:41 - 01:43 and I'm still zoomed in here. - 01:43 - 01:45 So what we want to do is we want to go ahead and - 01:45 - 01:47 locate the Animation folder - 01:47 - 01:47 which we'll just click on right here. - 01:47 - 01:50 Currently it is in fact empty and we are going to make - 01:50 - 01:53 an animator controller. - 01:53 - 01:55 Shall we just talk a little bit about the - 01:55 - 01:57 animations that we've setup first? - 01:57 - 01:58 Great idea! - 01:58 - 02:01 Cool. So the model that we've just dragged in - 02:02 - 02:06 is the player model and what you can do is drag up - 02:07 - 02:10 the preview window in the lower right of the editor. 
- 02:10 - 02:13 So this model has been animated - 02:13 - 02:15 for you, so when you bring in an FPX - 02:15 - 02:17 in to Unity, or a Maya file - 02:17 - 02:19 or a 3DS Max, whatever package you use, - 02:20 - 02:22 When you bring that in you can - 02:22 - 02:24 store animations within it or if you have - 02:24 - 02:26 a humanoid, a biped-type model - 02:26 - 02:29 you can retarget those animations on to - 02:29 - 02:31 similar type of models. - 02:32 - 02:34 This particular thing is rigged - 02:34 - 02:36 in what we call a generic manner. - 02:36 - 02:39 So if I look at my rig you can see generic up at the top. - 02:41 - 02:43 And what that means is that it's a very specific - 02:43 - 02:46 bone structure because our character is going to move like this. - 02:47 - 02:49 It's going to be kind of hopping around. - 02:49 - 02:52 So he's not got two arms, two legs as a biped would. - 02:52 - 02:54 So the animations that are stored in this - 02:55 - 02:58 particular model that we've given you - 02:58 - 03:01 are a move, an idle and a death. - 03:01 - 03:03 So what we're going to do now is to create - 03:03 - 03:05 a state machine called the Animator Controller - 03:05 - 03:07 to control when these three different - 03:07 - 03:10 animations are going to be played back. - 03:11 - 03:13 If you're looking in the inspector here - 03:13 - 03:15 I highly encourage you not to - 03:15 - 03:16 touch anything. - 03:16 - 03:18 Because these changes change how Unity - 03:18 - 03:21 imports the stuff and if you just start clicking - 03:21 - 03:23 and a little box comes up and says 'would you like to apply?' - 03:23 - 03:24 and you're like 'yeah, apply for what?' - 03:24 - 03:26 and then all of a sudden your stuff is not going to - 03:26 - 03:28 work later, so just look but don't touch. - 03:28 - 03:30 So we're doing this just to - 03:30 - 03:32 show your where the animation's going to be - 03:32 - 03:34 coming from and we're going to go ahead and - 03:34 - 03:36 actually implement it now. - 03:36 - 03:38 We're going to go back to this animation folder - 03:38 - 03:40 and again it's still empty so - 03:40 - 03:42 no magic there yet. - 03:42 - 03:44 And we're going to go ahead and add what's called - 03:44 - 03:45 an animator controller. - 03:45 - 03:48 Animator controllers are part of the mecan animation - 03:48 - 03:51 system, they're basically a finite state machine that just - 03:51 - 03:53 blends animations and does all this neat stuff, - 03:53 - 03:55 it's actually magic, we hired wizards. - 03:55 - 03:57 And what we're going to do is we're going to go ahead and - 03:57 - 03:59 create this asset that's going to allow us to do that. - 03:59 - 04:01 So I'm going to right click here inside - 04:01 - 04:05 this animation folder in the project view here - 04:05 - 04:07 and I am going to select - 04:07 - 04:09 Create and then Animator Controller. - 04:09 - 04:12 Not Animation, not Animator Override Controller - 04:12 - 04:14 but Animator Controller. - 04:14 - 04:16 And I'm going to get this name here and I'm going to name it - 04:16 - 04:21 Player AC for Animator Controller. - 04:21 - 04:22 There we go. 
- 04:22 - 04:24 And so once we have that what we're - 04:24 - 04:26 going to do is we are going to - 04:26 - 04:28 actually just click and drag that on - 04:28 - 04:30 to the player, which is going to apply this animator - 04:30 - 04:32 controller to the player, - 04:32 - 04:34 I'm not going to put it under the player - 04:34 - 04:36 I'm going to put it on the player and you'll know the difference - 04:36 - 04:39 because Player itself will become highlighted. - 04:39 - 04:41 And I'll let go. - 04:41 - 04:43 I can confirm I did this correctly be clicking on the player - 04:43 - 04:46 and I see here Player AC - 04:46 - 04:48 is now listed in the animator - 04:48 - 04:50 component, Player AC, right there. - 04:50 - 04:52 That's how I know I've done it correctly. - 04:54 - 04:57 Now let's go ahead and look at this animator controller, - 04:57 - 04:58 now that we have it created, - 04:58 - 05:01 and to do that I can go to Window - 05:01 - 05:03 and then select Animator - 05:03 - 05:05 but easier is to - 05:05 - 05:07 see my Player AC and just double click on it, - 05:07 - 05:10 which is automatically going to open up the animator window. - 05:10 - 05:12 If it's not docked here - 05:12 - 05:14 you can just dock it by clicking the - 05:14 - 05:17 tab, dragging it and docking it in to the window. - 05:17 - 05:19 So as we were looking at earlier - 05:19 - 05:21 when Will clicked on the player - 05:21 - 05:23 FPX model and we looked in the inspector and we saw - 05:23 - 05:25 those animations that were baked in to those settings. - 05:25 - 05:28 Those actually create assets that we're able to use. - 05:28 - 05:30 So if I go back to the Model's - 05:30 - 05:33 character's folder right here. - 05:34 - 05:37 and again look at my player.fpx. - 05:37 - 05:39 If I were to expand that I would - 05:39 - 05:41 see the assets that make up - 05:41 - 05:43 this imported model - 05:43 - 05:45 and so I'm going to go ahead and click this model here - 05:45 - 05:48 and what I see is I see these three animations - 05:48 - 05:50 death, idle and move. - 05:50 - 05:52 So not only were the animations imported - 05:52 - 05:54 as part of the model but we actually have them assets - 05:54 - 05:56 that we can utilise. - 05:56 - 05:58 And so what I'm going to do now is I'm actually - 05:58 - 06:00 going to just take each of these in turn - 06:00 - 06:02 and move them in to my - 06:02 - 06:04 animator window, I'm just going to click and drag them, - 06:04 - 06:06 and drop them - 06:06 - 06:09 I'm just going in order here so I'm going to do death, - 06:09 - 06:11 idle and move, the order's not going to matter because - 06:11 - 06:13 we're going to reorder them in just a moment. - 06:14 - 06:16 So when you have this done right - 06:16 - 06:18 you'll see that you have this Any state - 06:18 - 06:21 and a Death state and an Idle state and a Move state. - 06:21 - 06:23 And so we'll notice that the death state is - 06:23 - 06:25 orange, or which ever state you drag in first - 06:25 - 06:26 is orange, right? - 06:26 - 06:28 Orange means that it's the default state. - 06:28 - 06:31 Obviously I do not want death as the default state - 06:31 - 06:33 It's a sad game but it's not that sad. - 06:33 - 06:35 And so what I want to do is - 06:35 - 06:37 I am going to set a different state - 06:37 - 06:39 as our default state, so I am going to right click - 06:39 - 06:42 on Idle and I'm going to click - 06:42 - 06:44 Set As Default. - 06:46 - 06:49 So that will then turn orange. 
- 06:49 - 06:51 And I can reorder by clicking and dragging and moving - 06:51 - 06:53 things around, if you just happened to be very particular - 06:53 - 06:55 about how things look you can start - 06:55 - 06:57 building yourself some geometric structures if you really - 06:57 - 06:58 feel like it, whatever. - 06:58 - 07:01 Anyway, now we have idle as the - 07:01 - 07:03 default state and so what we need to do - 07:03 - 07:05 is we need to - 07:05 - 07:07 tell the animator - 07:07 - 07:09 when we want to transition from one - 07:09 - 07:11 state to another state. - 07:11 - 07:13 So we've got these three states, idle, move and death. - 07:13 - 07:16 And they're self-contained animations, they can play and all that stuff but - 07:16 - 07:18 the logic of when we go from one - 07:18 - 07:20 to the next isn't currently in here. - 07:20 - 07:22 So what we need to do is we need to create something - 07:22 - 07:24 called a parameter which is basically going to control - 07:24 - 07:26 when something happens, - 07:26 - 07:28 when something changes state. - 07:28 - 07:30 And we access those parameters right here - 07:30 - 07:32 with this little Parameters button - 07:32 - 07:33 it's very cleverly named, - 07:33 - 07:36 and we have this little + icon right there. - 07:36 - 07:38 What we want to do is we want to create a parameter - 07:38 - 07:39 so the first parameter we're going to create, - 07:39 - 07:41 when we click this we're going to see some options - 07:41 - 07:43 we can create a float, as in a floating point number, - 07:43 - 07:46 int is an integer, bool is a boolean - 07:46 - 07:49 or trigger, we want to create a boolean - 07:49 - 07:52 and we're going to call this IsWalking. - 07:52 - 07:54 It's very important that you spell it right - 07:55 - 07:56 like I just did not. - 07:56 - 07:58 There we go, IsWalking, - 07:58 - 07:59 and then enter. - 07:59 - 08:01 It has to be capital I - 08:01 - 08:05 lowercase s, capital W alking. - 08:05 - 08:07 alright, it's important that it's spelt - 08:07 - 08:09 right because we have scripts that you're going to use - 08:09 - 08:11 that reference it by name - 08:11 - 08:14 so if you named it something else, like Skeletor, - 08:14 - 08:16 when the time comes it's not going to work - 08:16 - 08:20 you'll just have to open up your code and change it - 08:20 - 08:21 which you really don't want to do. - 08:21 - 08:23 So we have this IsWalking which is a boolean which - 08:23 - 08:25 means it can be either true or false - 08:25 - 08:27 and that true or false will dictate whether - 08:27 - 08:29 he is walking. - 08:29 - 08:31 I'm going to create another one now so again I'm going to click - 08:31 - 08:33 this little + icon and I'm going to select - 08:33 - 08:35 Trigger, and a trigger is like a boolean. - 08:35 - 08:37 It can have a state True or False - 08:37 - 08:39 but unlike a boolean the moment we set it to true - 08:39 - 08:41 it sets it back to false. - 08:41 - 08:43 Which is useful when I just want to - 08:43 - 08:46 trigger something to happen one time and then it resets itself. - 08:46 - 08:48 So we're going to select a trigger here - 08:48 - 08:50 and this trigger is going to be - 08:50 - 08:52 called Die. - 08:53 - 08:55 We listened to a lot of heavy metal while making this. - 08:55 - 08:57 So die is a trigger that's going to - 08:57 - 08:59 be triggered when the player - 08:59 - 09:00 actually dies. 
- 09:00 - 09:02 Now what we want to do is - 09:02 - 09:05 we have these parameters which is - 09:05 - 09:07 great but now we need to tell - 09:07 - 09:10 mechanim and the animator specifically how to use - 09:10 - 09:12 those parameters, we need to set up some logic - 09:12 - 09:14 to move us from one to the next. - 09:23 - 09:25 So what we want to do is we want to setup - 09:25 - 09:28 transitions that basically says - 09:28 - 09:31 'this state is capable of going to that state' - 09:31 - 09:33 and 'this state is capable of going to that state'. - 09:33 - 09:35 We set up these transitions explicitly - 09:35 - 09:37 so that we can't accidentally transition from - 09:37 - 09:40 say Dead to Jump, - 09:40 - 09:42 because that doesn't make a whole lot of sense. - 09:42 - 09:44 So what we want to do is create these - 09:44 - 09:46 transitions which basically are one-way bridges - 09:46 - 09:48 that define logically how - 09:48 - 09:50 we go from one state to another. - 09:50 - 09:52 So what I'm going to do is I'm going to right click on Idle - 09:52 - 09:54 and I'm going to select Make Transition. - 09:54 - 09:57 And when I click Make Transition I get this - 09:57 - 09:59 rodeo lasso thingy - 09:59 - 10:01 which is fun to play with by itself, - 10:01 - 10:02 but we're here for business, - 10:02 - 10:04 so what I'm going to do is once I've confirmed that - 10:04 - 10:06 there's this white line attached to my mouse - 10:06 - 10:08 I'm going to move my mouse over the Move - 10:08 - 10:10 animator state and I'm just going to click - 10:10 - 10:12 and it's going to connect the two. - 10:12 - 10:15 So now what we have is a transition from idle to move. - 10:15 - 10:17 See, there you go in case you didn't believe me. - 10:17 - 10:19 And so this is basically saying that we - 10:19 - 10:22 can go from idle to move. - 10:22 - 10:24 So what I'm going to do now is in order to - 10:24 - 10:26 actually dictate when that happens - 10:26 - 10:28 I have to click on the transition. - 10:31 - 10:32 Too close. - 10:32 - 10:33 There we go. - 10:33 - 10:36 I'm going to click and you'll see it turn blue - 10:36 - 10:39 and so now I have selected - 10:39 - 10:40 the transition between the two. - 10:40 - 10:42 And once I've done that over in the inspector view we get - 10:42 - 10:45 the properties of this transition - 10:45 - 10:48 and what I'm interested in is - 10:49 - 10:51 is down on the bottom we have these conditions. - 10:51 - 10:54 What is the condition of this transition? - 10:54 - 10:56 And so what I'm going to do is - 10:56 - 10:58 click this drop down that says Exit Time - 10:58 - 11:00 and I'm going to select IsWalking - 11:01 - 11:02 and I'm going to leave that as true. - 11:02 - 11:04 So the way you read this is you are transitioning - 11:04 - 11:06 from idle to move when IsWalking - 11:06 - 11:08 equals true. - 11:10 - 11:12 So now what I'm going to do - 11:12 - 11:15 is I'm just going to do the exact opposite, - 11:15 - 11:18 so I'm going to right click on move - 11:18 - 11:21 make transition, get my fun rodeo arrow, - 11:21 - 11:22 now I'm going to click on idle, - 11:22 - 11:24 so this is stating that I am able to move - 11:24 - 11:27 from the move state back to the idle state. - 11:27 - 11:29 and I'm sure that you can all guess what - 11:29 - 11:31 the condition for that is going to be but we'll - 11:31 - 11:34 go through it anyway, I'm going to select Conditions - 11:35 - 11:37 IsWalking equals false. - 11:38 - 11:40 So if the player is walking we play the walking animation. 
- 11:40 - 11:42 If the player is not walking we don't, - 11:42 - 11:44 we play the idle animation. - 11:45 - 11:47 Finally what I want to do is - 11:47 - 11:50 I am going to create one more transition, - 11:50 - 11:52 this time we want the player to be able to die. - 11:52 - 11:54 So to do that we're not going - 11:54 - 11:57 to transition from idle or move or death - 11:57 - 11:59 because we can theoretically die - 11:59 - 12:01 in any state depending on whichever animation is - 12:01 - 12:02 currently going on. - 12:02 - 12:06 So we are going to use the Any state. - 12:06 - 12:08 And by that I mean the actually green any state, - 12:08 - 12:11 I don't mean you're going to transition from any state you want. - 12:11 - 12:13 So the one that says Any state. - 12:13 - 12:15 So I'm going to right click on that and - 12:15 - 12:17 I only have one option and that's Make Transition - 12:17 - 12:19 and I am going to transition from - 12:19 - 12:22 Any state to Death, so no matter what state - 12:22 - 12:24 you're currently in if you die, you die. - 12:25 - 12:27 I'm going to select that transition, - 12:27 - 12:29 you'll see it turn blue - 12:29 - 12:31 and I'm going to set my condition to be - 12:33 - 12:34 Die. - 12:34 - 12:36 Now since it's a trigger I don't need to - 12:36 - 12:39 specify true, false or a value like that, - 12:39 - 12:40 it's just if Die get's triggered - 12:40 - 12:44 we immediately switch in to the death animation. - 12:44 - 12:47 The next thing we need to do with this player character - 12:47 - 12:50 is to give him a physical presence in the scene. - 12:50 - 12:52 So if you change from your animator window - 12:53 - 12:56 back to the scene view using the tab at the top - 12:56 - 12:59 then you can select the player. - 12:59 - 13:01 We're going to add a rigidbody. - 13:01 - 13:04 A rigidbody component is something that allows - 13:04 - 13:06 the physics engine of Unity to be applied to the object. - 13:08 - 13:10 We are going to click Add Component - 13:10 - 13:11 and choose Physics - 13:12 - 13:14 and Rigidbody. - 13:14 - 13:18 It's important that you don't select Physics 2D - 13:18 - 13:21 Rigidbody 2D, we're doing 3D physics - 13:21 - 13:22 so just physics. - 13:23 - 13:26 So then what we're going to do, because we don't want - 13:26 - 13:29 this to slow down over time - 13:29 - 13:31 we're going to set the drag and angular drag - 13:31 - 13:33 to infinity. - 13:33 - 13:35 So you can do this actually by just typing - 13:35 - 13:39 INF in the drag box and hitting return. - 13:39 - 13:41 If you've done it correctly you'll see a capitalised - 13:41 - 13:43 infinity. - 13:44 - 13:46 We do want to use gravity, we want to keep the player - 13:46 - 13:50 firmly grounded, despite this being a dream sequence. - 13:50 - 13:52 And we do want to use - 13:52 - 13:54 constraints, so basically we're going to use - 13:54 - 13:56 constraints to ensure that the player - 13:56 - 14:00 character doesn't fall forward or fall over any - 14:00 - 14:02 different directions, and we also want - 14:02 - 14:05 to freeze the Y position so he doesn't sink through the floor - 14:05 - 14:08 or jump up or anything like that. - 14:08 - 14:11 In Freeze Position we're going to choose the Y coordinate - 14:11 - 14:16 and Freeze Rotation X and Z or Zee. - 14:16 - 14:19 You'll have to expand constraints if it's not currently expanded. - 14:19 - 14:21 Yep, sorry, you need to expand constraints, - 14:21 - 14:23 if you can't see those properties you should see them - 14:23 - 14:24 at the bottom. 
- 14:24 - 14:27 Next we need to make sure things can actually - 14:27 - 14:29 bump in to the character because currently - 14:29 - 14:31 he reacts to physics - 14:31 - 14:33 but if he doesn't bump in to anything - 14:33 - 14:34 nothing's going to happen. - 14:34 - 14:36 So if you go to Add Component - 14:37 - 14:39 and then Physics - 14:39 - 14:40 and then there's a capsule collider along with - 14:40 - 14:42 all the other colliders there. - 14:42 - 14:44 We need to give it specific settings - 14:44 - 14:48 so we make sure that the capsule covers the player. - 14:48 - 14:50 So if you change the centre - 14:50 - 14:52 to 0.2 in the X - 14:53 - 14:56 0.6 in Y and leave the Z as 0 - 14:56 - 14:58 or Zee a 0. - 14:58 - 15:00 We also need to change the height - 15:00 - 15:04 to 1.2, so what that's going to mean is that - 15:04 - 15:06 this sort of leaning over character thing here - 15:06 - 15:08 he's got a little bit of an X offset so that the - 15:08 - 15:10 capsules going to cover him nicely. - 15:10 - 15:13 Next we need the player to make - 15:13 - 15:15 a little yelping sound when he gets hurt. - 15:16 - 15:18 So we're going to add an audio source - 15:18 - 15:20 component, so if you go to Add Component - 15:20 - 15:23 Audio, and then find Audio Source - 15:24 - 15:26 and we're going to change - 15:26 - 15:28 the audio clip from None, using the little - 15:28 - 15:30 circle select by it - 15:31 - 15:33 to Player Hurt. - 15:34 - 15:36 When it opens the context sensitive menu - 15:36 - 15:38 you can see Player Hurt there. - 15:38 - 15:40 One thing about the selection window is - 15:40 - 15:42 that you can double click to close - 15:42 - 15:44 it at the same time, if you want to assign something - 15:44 - 15:46 you can open up the circle select, double click - 15:46 - 15:48 will also close the window. - 15:48 - 15:50 Once other thing I'm going to quickly do because we've got these - 15:50 - 15:53 settings done is I'm going to close the rigidbody - 15:53 - 15:55 or collapse it up using the arrow next - 15:55 - 15:58 to it just so that we can see the ones below it. - 15:58 - 16:01 We do not want the player to make a yelp sound - 16:01 - 16:03 on awake, so we're going to uncheck Play On Awake - 16:03 - 16:06 You'll know that if you haven't unchecked that - 16:06 - 16:08 when you start the game it'll go 'oh'. - 16:08 - 16:11 If it does that go back to this - 16:11 - 16:13 and uncheck Play On Awake. - 16:13 - 16:16 So we're going to find the Player Movement script - 16:16 - 16:19 and that's in the Scripts - Player folder. - 16:19 - 16:22 So if you look in the project panel, find Scripts, expand that - 16:22 - 16:24 and then click on Player, you'll see on the right - 16:24 - 16:27 hand side the different player scripts that we've got. - 16:27 - 16:29 We've got an empty class called - 16:29 - 16:31 Player Movement and we're going to go through - 16:31 - 16:32 making that now. - 16:32 - 16:34 You can assign a script a number of different ways. - 16:34 - 16:36 A script is just a component - 16:36 - 16:38 like any other component. - 16:38 - 16:40 But what we're going to do is we're going to drag and drop - 16:40 - 16:42 for this one, so I'm going to grab - 16:42 - 16:44 my Player Movement script, I'm going to drag it - 16:44 - 16:48 up and drop if on to the player game object. - 16:48 - 16:50 When that's done you should see that it appears - 16:50 - 16:53 as a component at the bottom of the list. 
- 16:53 - 16:55 Be sure to save your scene before we go on to this next part - 16:55 - 16:57 too because we're in the beta software. - 16:59 - 17:02 So first of al double click it for opening and - 17:02 - 17:04 you should get mono develop to come up. - 17:04 - 17:06 There's a number of different ways to open a script. - 17:06 - 17:08 You can either double click the icon - 17:08 - 17:10 in the project panel, you can - 17:10 - 17:12 click open with it selected - 17:12 - 17:14 at the top of the inspector. - 17:14 - 17:16 Or one final way is to, when it's - 17:16 - 17:18 applied as a component - 17:18 - 17:20 you can actually click the cog icon and choose - 17:20 - 17:21 Edit Script. - 17:22 - 17:24 If you give it a moment it's going to open up that - 17:24 - 17:26 script editor for Unity which is - 17:26 - 17:28 Mono Develop. - 17:28 - 17:30 All of this is going to be done within - 17:30 - 17:32 the class, so you notice there's a pair - 17:32 - 17:36 of early braces, or brackets? - 17:36 - 17:38 What we're going to start off by doing is - 17:38 - 17:40 making some variables - 17:40 - 17:43 that we can adjust throughout the course of class. - 17:43 - 17:46 We're always going to start off with our public ones at the top. - 17:46 - 17:48 and then underneath with private ones. - 17:48 - 17:51 So we'll start off with a public float - 17:51 - 17:53 called Speed, which is obviously - 17:53 - 17:56 going to control how fast the player is. - 17:56 - 17:59 I'm going to give it a default value of 6. - 17:59 - 18:01 So the F on the end there is just - 18:01 - 18:03 saying that this is a floating point variable. - 18:04 - 18:07 Now we move on to the private variables. - 18:07 - 18:09 So I could go ahead and type - 18:09 - 18:12 vector3 because vector2 is the first thing it suggests - 18:12 - 18:14 I could accidentally press return and get that - 18:14 - 18:17 but I don't, I want a vector3 type of variable. - 18:17 - 18:19 So these variables we're declaring - 18:19 - 18:21 the type of them first, if you're familiar with Java Script - 18:21 - 18:23 you'll remember that it does it the other way round. - 18:23 - 18:25 So in C# we say the type of variable first - 18:25 - 18:27 and then the name. - 18:27 - 18:29 So this one's going to be called Movement and - 18:29 - 18:31 we're going to use that to store - 18:31 - 18:34 the movement that we want to apply to the player. - 18:34 - 18:36 The next one we're going to have a reference - 18:36 - 18:38 to the animator component - 18:38 - 18:39 and we'll call that Anim. - 18:40 - 18:43 We're also going to have a reference to - 18:43 - 18:44 the rigidbody component - 18:45 - 18:48 and we'll call that Player Rigidbody. - 18:50 - 18:53 So the next one is an integer - 18:53 - 18:55 called Floor Mask, now I'm going to explain this one. - 18:55 - 18:57 If you remember earlier we made that floor - 18:57 - 18:59 quad, that big square that we just left - 18:59 - 19:01 on the floor, that's the thing that we want to raycast - 19:01 - 19:04 in to and the way we tell our raycast - 19:04 - 19:06 that we only want to hit that floor is we use - 19:06 - 19:09 a layermask, which is stored as an integer. - 19:09 - 19:11 If I hit save right now, basically we designed - 19:11 - 19:13 this project on a PC, - 19:13 - 19:15 we've opened it on a Mac, - 19:15 - 19:17 files have different line endings, it doesn't matter - 19:17 - 19:19 at all, just hit Convert, everything will be fine. 
- 19:19 - 19:21 So the last private variable that we're going to make - 19:21 - 19:23 is a float - 19:23 - 19:26 called camRayLength. - 19:26 - 19:28 That's going to be the length of the ray - 19:28 - 19:30 that we cast from the camera, - 19:30 - 19:32 and we're going to give that a value of 100. - 19:32 - 19:34 The next thing that we're going to do is setup - 19:34 - 19:36 those references. - 19:36 - 19:38 We're going to setup the references - 19:38 - 19:40 in the awake function - 19:42 - 19:44 Some of you will be familiar with - 19:44 - 19:47 the start function, awake is very similar - 19:47 - 19:49 but it gets called regardless of whether - 19:49 - 19:51 the script is enabled or not. - 19:51 - 19:54 So it's good for setting up references and things like that. - 19:55 - 19:57 We're going to start off by setting - 19:57 - 19:59 up our floor mask - 19:59 - 20:03 and we're going to use LayerMask.GetMask - 20:03 - 20:05 so then we can parse in a string - 20:05 - 20:07 which is just a word or a sentence - 20:08 - 20:10 of the layer that we're going to get. - 20:10 - 20:13 So we put the floor quad - 20:13 - 20:17 on the floor layer so we're going to get mask from floor. - 20:18 - 20:21 Then we're going to use GetComponent to - 20:21 - 20:25 get the references to the animator, like that. - 20:26 - 20:28 You'll notice a little angled brackets there, - 20:28 - 20:31 that's just denoting the type of what we're looking for. - 20:31 - 20:33 In this case we're looking for an animator so we write . - 20:33 - 20:36 If we were looking for a rigidbody we'd write and so on. - 20:39 - 20:41 So we write here. - 20:42 - 20:45 And that's all we need to do to setup the references. - 20:46 - 20:48 Next what we're going to do is - 20:48 - 20:51 put a call to fixed update. - 20:51 - 20:53 Fixed update is a function - 20:53 - 20:56 that Unity will automatically call on it's scripts - 20:57 - 21:00 that fires every physics update. - 21:01 - 21:05 Unity runs on a number of updates. - 21:05 - 21:07 The normal update system that you're familiar with - 21:07 - 21:09 runs along with rendering. - 21:09 - 21:12 The fixed update runs with physics. - 21:12 - 21:14 So since we're moving a physics character - 21:14 - 21:16 he's got a rigidbody attached we're going to - 21:16 - 21:18 use fixed update to move him. - 21:18 - 21:20 So what we're going to do is get the input - 21:20 - 21:23 from the horizontal and vertical axis, - 21:23 - 21:25 but we're not going to get the standard input, - 21:25 - 21:27 we're going to get the raw input. - 21:27 - 21:31 So whereas a normal axis would have values - 21:31 - 21:33 varying between -1 and 1 - 21:33 - 21:36 the raw axis will only have - 21:36 - 21:39 a value of -1, 0 or 1. - 21:40 - 21:44 It won't have any variation in between those. - 21:44 - 21:46 So what that means is that - 21:46 - 21:48 when we're controlling the character, rather than him - 21:48 - 21:51 slowly accelerating towards it's full speed - 21:51 - 21:53 he's going to immediately - 21:53 - 21:55 snap to full speed, which will give us a much - 21:55 - 21:56 more responsive feel. - 21:56 - 21:59 Just so you know, an axis is actually input. - 21:59 - 22:02 So these axis, horizontal - 22:02 - 22:05 vertical, jump, fire1, fire2, - 22:05 - 22:07 those are defaults within Unity, - 22:07 - 22:09 so if you're wondering what horizontal is it's - 22:09 - 22:11 already in there. - 22:13 - 22:16 So the horizontal axis basically maps - 22:16 - 22:18 to the A and D keys - 22:18 - 22:20 as well as the left and right arrow keys. 
- 22:20 - 22:24 Either of those manipulate the horizontal axis. - 22:24 - 22:27 Pressing the A key gives me a value of -1 - 22:27 - 22:30 in the horizontal axis, pressing the D key - 22:30 - 22:33 gives me a value of 1 in the horizontal axis. - 22:37 - 22:41 Vertical is another axis, it's the W and S keys - 22:41 - 22:43 as well as the up and down arrows. - 22:43 - 22:45 Fire1 is your left mouse button - 22:45 - 22:47 or right control, so on and so forth. - 22:47 - 22:49 If you're curious you can go in the input manager, - 22:49 - 22:51 I think Will had that up there a second ago - 22:51 - 22:54 and that's where these axis exist and how we set them up. - 22:54 - 22:56 But the ones we're all using here are defaults - 22:56 - 22:58 so there's nothing special that you need - 22:58 - 22:59 to do to get those. - 22:59 - 23:03 Go ahead and put in the vertical axis as well, - 23:03 - 23:05 we're storing that as a float called V, - 23:05 - 23:07 so these are private variables within - 23:07 - 23:09 fixed update so we can use - 23:09 - 23:12 them within specifically fixed update. - 23:12 - 23:14 So remember that the variables that we created earlier - 23:14 - 23:16 are outside of these functions so we can use - 23:16 - 23:18 them within any of the functions in this script. - 23:20 - 23:22 Fixed update and awake are automatically - 23:22 - 23:26 called by Unity, they're mono-behaviour functions. - 23:26 - 23:28 But we can also make our own functions - 23:28 - 23:32 that we can call within those fixed update and awake. - 23:32 - 23:34 What we're going to do now is create a - 23:34 - 23:37 move function that we can call in fixed update later - 23:37 - 23:38 to move our character. - 23:38 - 23:40 We're basically going to split up the actual - 23:40 - 23:42 operations of those player movement script - 23:42 - 23:45 in to movement, turning and animation. - 23:45 - 23:46 We're going to put them in to separate functions - 23:46 - 23:48 to keep them all modular. - 23:48 - 23:50 If you make a Move function - 23:50 - 23:52 that's going to have 2 parameters, - 23:52 - 23:54 a float called H - 23:55 - 23:56 and a float called V - 23:56 - 23:58 and unsurprisingly those are going to be the input - 23:58 - 24:00 that we'll parse in to this function when we call it. - 24:00 - 24:02 We have this movement vector that we stored - 24:02 - 24:04 earlier and we want to set the value - 24:04 - 24:08 of that, so if you type movement.set - 24:08 - 24:10 then you get the chance to put each of the - 24:10 - 24:13 X, Y and Z components of those - 24:13 - 24:16 so we're going to use H for it's X component. - 24:16 - 24:18 We don't want any vertical components so we're - 24:18 - 24:19 going to put that as 0. - 24:19 - 24:22 We're going to use an F because it's a floating point. - 24:23 - 24:26 And then V for the Z component, so what that means is - 24:26 - 24:28 X and Z are flat along the ground. - 24:29 - 24:31 The horizontal and vertical movement that we give it - 24:31 - 24:34 will translate to lateral movement in the game. - 24:34 - 24:36 Now we've set that we have a bit of a - 24:36 - 24:40 problem because if you move - 24:40 - 24:43 just in the Z axis or just in the X axis - 24:43 - 24:45 then you've got a value of 1, - 24:45 - 24:47 like a size of 1 for that vector. - 24:47 - 24:49 However, if you use both then - 24:49 - 24:51 the length of the vector is different, - 24:51 - 24:53 it's 1.4, - 24:53 - 24:55 so we need to change that so that you don't - 24:55 - 24:57 get an advantage by moving diagonally. 
- 24:57 - 25:00 What we want to do is normalise that. - 25:00 - 25:02 So what that means is it's going to take - 25:02 - 25:04 a direction that we have - 25:04 - 25:06 but it's going to make sure that the size is always 1. - 25:06 - 25:08 Effectively make sure that the player moves at the same speed - 25:08 - 25:11 regardless of which key combination you use. - 25:11 - 25:13 We don't want it to move at a speed of 1, we want it to - 25:13 - 25:15 move at our speed, so we're going to times - 25:15 - 25:18 that by our speed variable that we stored. - 25:18 - 25:21 Also, this is called fixed update. - 25:21 - 25:23 So we don't want it to move at - 25:23 - 25:25 6 units per fixed update, - 25:25 - 25:28 it would move 6 units every 50th of a second - 25:28 - 25:30 and we wouldn't see our player ever again, - 25:30 - 25:32 so instead we're going to change it so that it's - 25:32 - 25:34 per seconds and the way we do that is by - 25:34 - 25:36 multiplying by Time.DeltaTime. - 25:36 - 25:40 DeltaTime is the time between each update call. - 25:40 - 25:42 So if you're moving it by that much - 25:42 - 25:44 per 50th of a second - 25:44 - 25:47 over the course of 50 50ths of a second - 25:47 - 25:49 it is going to move 6 units. - 25:49 - 25:51 So finally in this function the last thing we need to do is - 25:51 - 25:53 apply that movement to the player. - 25:53 - 25:55 So we're going to do that using a rigidbody function - 25:55 - 25:57 called MovePosition. - 25:57 - 25:59 So MovePosition moves a rigidbody - 25:59 - 26:01 to a position in world space. - 26:01 - 26:03 So we need to move it relative - 26:03 - 26:06 to the position that the character currently is. - 26:06 - 26:08 We need to add our movement - 26:08 - 26:12 to the player's position, the transform.position + movement. - 26:12 - 26:14 So it's going to be it's current positions plus - 26:14 - 26:16 this input that we've given him to move him - 26:16 - 26:17 slightly further along. - 26:17 - 26:19 The next thing that we're going to do is look at the - 26:19 - 26:20 turning of the character. - 26:20 - 26:23 Again we're going to make a new function. - 26:23 - 26:25 This time we're going to call it Turning. - 26:25 - 26:29 We don't require any parameters for this. - 26:30 - 26:32 Because the direction the character - 26:32 - 26:34 is facing is based on the mouse input - 26:34 - 26:36 rather than the input that we've already stored. - 26:36 - 26:38 The first thing that we're going to do is create - 26:38 - 26:40 a ray that we cast - 26:40 - 26:42 from the camera in to the scene. - 26:42 - 26:44 Let's have a little look at how that actually works first. - 26:44 - 26:46 If you have a look at this diagram we have - 26:46 - 26:48 a representation of the camera, - 26:48 - 26:53 the screen and the level plus the floor quad around the level. - 26:53 - 26:55 So that box that you're looking at there - 26:55 - 26:57 is effectively your level, - 26:57 - 26:59 the floor quad is around that - 26:59 - 27:01 and the camera, if you think of the camera as - 27:01 - 27:03 something that's looking from where you're - 27:03 - 27:05 looking at the game on the screen - 27:05 - 27:07 forward on to the game level - 27:07 - 27:11 you cast a ray, a single invisible line from that point - 27:11 - 27:14 to the floor quad to get a particular position back. - 27:14 - 27:16 And we want to use that because we want the - 27:16 - 27:18 character to turn and face the - 27:18 - 27:20 point of wherever the camera's looking. 
- 27:20 - 27:22 So when you move the mouse around in the game - 27:22 - 27:24 he's going to turn around and face that position - 27:24 - 27:26 so that you can turn around and also shoot - 27:26 - 27:28 in a particular direction. - 27:28 - 27:30 So we see this end bracket on line 40, - 27:30 - 27:32 make sure that's there. - 27:32 - 27:34 It's a real quick gotcha that everyone always does is they - 27:34 - 27:37 move their functions down outside of the class - 27:37 - 27:39 by accident, so if you have - 27:39 - 27:41 avoid Turning with your open and closed brackets - 27:41 - 27:44 and you don't have another bracket immediately after that - 27:44 - 27:45 you've missed one. - 27:45 - 27:47 And don't just add another one - 27:47 - 27:48 because then you're going to have one in the wrong spot - 27:48 - 27:50 and another one in the wrong spot. - 27:50 - 27:52 Instead just move that function back - 27:52 - 27:54 up so that it is inside the class. - 27:54 - 27:56 The first thing we're going to do in this function is - 27:56 - 27:57 we're going to create a ray. - 27:58 - 28:00 So we'll call that camRay, - 28:00 - 28:02 a ray coming from the camera. - 28:02 - 28:04 And we're going to use a function of - 28:04 - 28:06 the camera, the main camera called - 28:06 - 28:08 ScrenPointToRay. - 28:08 - 28:10 So what that's going to do is take a point on - 28:10 - 28:12 the screen and cast a ray - 28:12 - 28:15 from that point forwards in to the scene. - 28:15 - 28:17 So the point that we're going to give it - 28:17 - 28:19 is the mouse position. - 28:20 - 28:22 So it's always going to find the point - 28:22 - 28:23 underneath the mouse if you imagine. - 28:23 - 28:25 So if you're looking at the game, - 28:25 - 28:27 there's a mouse on your screen, the point underneath that - 28:27 - 28:29 mouse is the point it's going to find - 28:29 - 28:31 if that hits the floor quad. - 28:31 - 28:33 We need to get information back when we - 28:33 - 28:35 do this raycast and in order to get - 28:35 - 28:37 information back from this raycast - 28:37 - 28:39 we need a RaycastHit variable, - 28:39 - 28:41 so that's what we're creating here. - 28:41 - 28:43 The next stage is to actually - 28:43 - 28:44 cast the ray that we've created. - 28:44 - 28:47 So we created this imaginary invisible line - 28:47 - 28:49 and now we need to actually perform the action - 28:49 - 28:51 of casting the ray - 28:51 - 28:53 so that it can hit something. - 28:53 - 28:55 A raycast function will return - 28:55 - 28:57 true if it has hit something - 28:57 - 29:00 and it will return false if it hasn't - 29:00 - 29:03 So we're going to put that inside and If statement. - 29:03 - 29:05 And if it has hit something - 29:05 - 29:07 then the code within the If statement - 29:07 - 29:08 will be carried out. - 29:08 - 29:10 If it hasn't hit anything then the code within - 29:10 - 29:12 the If statement won't be carried out - 29:12 - 29:13 so we'll just skip out of this function. - 29:13 - 29:16 That's what that little If at the start there is for. - 29:16 - 29:18 So we need to give this a number of parameters. - 29:18 - 29:20 Let's start off with the ray itself, that's the - 29:20 - 29:23 positions and directions of the cast that we're going to have. - 29:23 - 29:25 We need to use an Out variable - 29:25 - 29:27 for the floor hit, so Out means - 29:27 - 29:29 that we're going to get information out of - 29:29 - 29:31 this function and we're going to store it - 29:31 - 29:33 in that floorHit variable. 
- 29:33 - 29:35 Next we need to give it a length, - 29:35 - 29:37 so how far are we going to do this raycast for?. - 29:37 - 29:40 And that's the variable camRayLength that we store earlier. - 29:40 - 29:43 And finally we want to make sure that this raycast is - 29:43 - 29:46 only trying to hit things on the floor layer. - 29:46 - 29:48 That's that floor mask that we created earlier. - 29:48 - 29:51 Remember that since this is an If statement - 29:51 - 29:53 and a function there should be - 29:53 - 29:56 two closed braces at the end there. - 29:58 - 30:04 So if you see you've got if (Physics.Raycast ( - 30:04 - 30:07 and then at the end we need to close both of those brackets again. - 30:07 - 30:12 Okay so we've got the open curly braces afterwards - 30:12 - 30:14 and this will be the code that's carried out - 30:14 - 30:16 if we've hit something, so we need to - 30:16 - 30:18 create a vector3 from the - 30:18 - 30:21 player to where the mouse has hit. - 30:21 - 30:23 And that's the floorHit.Point, - 30:23 - 30:26 so that's the point that it's hit the floor - 30:26 - 30:30 minus transform.position, that's the position of the player. - 30:30 - 30:33 We're going to apply this - 30:33 - 30:35 to the character to make him turn - 30:35 - 30:39 but we don't want him to sort of start leaning back - 30:39 - 30:41 so we need to make sure that the Y component - 30:41 - 30:43 of this vector is definitely 0. - 30:43 - 30:45 Now we can't set a player's rotation based on - 30:45 - 30:47 a vector so we need to change that - 30:47 - 30:49 from a vector in to the - 30:49 - 30:51 horrible word, quaternion. - 30:52 - 30:54 So quaternion basically speaking is - 30:54 - 30:55 a way of storing a rotation. - 30:55 - 30:57 We have a vector3 but we can't - 30:57 - 30:59 use that to store a rotation so we use a - 30:59 - 31:02 quaternion and we're going to create one called newRotation - 31:02 - 31:04 and quaternions are also a class - 31:04 - 31:06 that has a number of functions of - 31:06 - 31:08 which we're going to use one called LookRotation - 31:08 - 31:11 So what lookRotation does, the default for - 31:11 - 31:13 characters and cameras and things like that - 31:13 - 31:16 in Unity and in most 3D modelling - 31:16 - 31:19 is that the Z axis is their forward axis. - 31:19 - 31:23 So we want to made the playerToMouse vector - 31:23 - 31:25 the forward vector of the player. - 31:25 - 31:27 So that's all that this function is doing - 31:27 - 31:29 when we give it the playerToMouse. - 31:29 - 31:31 When we actually have to apply it so we're going to - 31:31 - 31:33 address the player rigidbody. - 31:33 - 31:35 I'm going to use the moveRotation function - 31:35 - 31:36 and since we don't want to give it an offset - 31:36 - 31:39 we're trying to give it a completely new rotation. - 31:40 - 31:42 We're just going to assign it like that, we don't need to do - 31:42 - 31:45 transform.rotation + newRotation, - 31:45 - 31:46 it's just this rotation. - 31:46 - 31:50 So that's our turning, the next and final function of this, - 31:50 - 31:52 we're going to look at the animation. - 31:53 - 31:55 After the turning we're going to make another function - 31:55 - 31:57 called Animating. - 31:57 - 32:01 Now we do need to give this the - 32:01 - 32:03 H and V parameters because - 32:03 - 32:05 whether or not the player is walking - 32:05 - 32:07 or idle is dependent on the input. - 32:08 - 32:10 So what we're going to do is we're going to create - 32:10 - 32:14 a boolean variable called Walking. 
- 32:14 - 32:16 And we need this to be true - 32:17 - 32:19 if either the H variable - 32:19 - 32:21 or the V variable has some value. - 32:21 - 32:23 If it's 0 then - 32:23 - 32:25 there's no input on that axis - 32:25 - 32:26 and we don't need to worry about it. - 32:26 - 32:29 But if either H or V has - 32:29 - 32:32 a value that's non-0 then - 32:32 - 32:34 it's true and the player is walking. - 32:34 - 32:38 So what that complicated bit of code there is doing - 32:39 - 32:42 is saying that first of all H - 32:42 - 32:44 is that not equal to 0? - 32:45 - 32:47 That will return either true of false - 32:47 - 32:51 depending on whether it's 0 or not 0. - 32:54 - 32:57 is V not equal to 0? - 32:57 - 32:59 What this is basically saying is 'hey, did we - 32:59 - 33:01 press the horizontal axis or did we - 33:01 - 33:03 press the vertical axis'? - 33:03 - 33:05 If we pressed either of those we're walking. - 33:05 - 33:06 If we didn't we're not. - 33:06 - 33:08 So the reason we're doing that is we want to - 33:08 - 33:10 parse this to our animator component - 33:10 - 33:13 so you'll remember we made a parameter called IsWalking. - 33:13 - 33:15 And the way that we set that is we say - 33:15 - 33:18 anim, so our reference to the animator component, - 33:18 - 33:20 Set.Bool, so we made a parameter which - 33:20 - 33:23 was a type bool, a boolean, true or false. - 33:23 - 33:25 And it was called IsWalking. - 33:25 - 33:27 So first to actually set this - 33:27 - 33:30 we tell it which one so the IsWalking parameter - 33:30 - 33:33 and then we give it a value, so we could here write true or false - 33:33 - 33:35 but we want to use Walking, the variable - 33:35 - 33:36 that we just made. - 33:36 - 33:40 So the very last thing that we need to do in this class - 33:40 - 33:42 is we need to put, so we need to put - 33:42 - 33:44 calls to those functions that we've made. - 33:44 - 33:47 Currently they're just functions sitting there by themselves. - 33:47 - 33:49 We need to make sure that those functions are - 33:49 - 33:52 actually being called an actually being used. - 33:52 - 33:55 These three functions aren't happening - 33:55 - 33:57 until we tell them to, we haven't actually - 33:57 - 33:59 called them anywhere in the script, they just - 33:59 - 34:01 exist and are waiting to happen. - 34:01 - 34:03 In fixed update that we created earlier - 34:03 - 34:05 we already got the input, we stored that. - 34:06 - 34:08 Now after we've got the input entirety - 34:08 - 34:11 we now need to call these functions that we've created. - 34:12 - 34:14 So we call them just by using their name. - 34:15 - 34:18 And then parsing in the values if they're required. - 34:18 - 34:20 So Move requires 2 floats, - 34:20 - 34:22 we've got our floats for input there - 34:22 - 34:24 and that's all that we need to do. - 34:24 - 34:26 So we need to say H and V - 34:26 - 34:28 semi-colon. - 34:28 - 34:32 Then we'll call Turning which doesn't take any parameters - 34:32 - 34:33 And finally we'll call - 34:34 - 34:36 Animati, which does. - 34:39 - 34:40 Like that. - 34:40 - 34:42 These are in fixed update, so they're being called - 34:42 - 34:44 every physics step. - 34:44 - 34:46 That is the end of that script so what I want you to do - 34:46 - 34:48 now is to go to File - Save - 34:48 - 34:50 and switch back to Unity. - 34:50 - 34:53 If anything is wrong you will see an error - 34:53 - 34:55 at the bottom of the screen in red. 
- 34:56 - 34:59 When you have no errors in your script, - 35:00 - 35:02 when you press play at the top of the editor - 35:03 - 35:05 you can move your character around with the arrow keys - 35:05 - 35:08 or W, A, S and D. - 35:08 - 35:10 The animation should function - 35:10 - 35:12 but the actual movement of the mouse should be a little bit - 35:12 - 35:14 tricky right now because we haven't moved the - 35:14 - 35:16 camera to the right point, so the camera isn't - 35:16 - 35:20 seeing the entirety of the floor quad yet - 35:20 - 35:21 and that can be a problem, but we're going to solve - 35:21 - 35:22 that in the next phase. - 35:22 - 35:24 So you should be able to move around. - 35:24 - 35:27 You'll note if I move the mouse up away from the floor quad - 35:27 - 35:29 that it doesn't rotate. - 35:29 - 35:31 As soon as I move it back over there the rayCast - 35:31 - 35:33 kicks back in and I can turn around - 35:34 - 35:36 and run around this this. - 35:40 - 35:42 Okay so we're at the end of phase 2. - 35:42 - 35:46 Phase 3 will be setting up the camera - 35:46 - 35:48 and we're going to have a quick look at the script that - 35:48 - 35:50 will make that camera work. PlayerMovement Code snippet using UnityEngine; public class PlayerMovement : MonoBehaviour { public float speed = 6f; // The speed that the player will move at. Vector3 movement; // The vector to store the direction of the player's movement. Animator anim; // Reference to the animator component. Rigidbody playerRigidbody; // Reference to the player's rigidbody. int floorMask; // A layer mask so that a ray can be cast just at gameobjects on the floor layer. float camRayLength = 100f; // The length of the ray from the camera into the scene. void Awake () { // Create a layer mask for the floor layer. floorMask = LayerMask.GetMask ("Floor"); // Set up references. anim = GetComponent <Animator> (); playerRigidbody = GetComponent <Rigidbody> (); } void FixedUpdate () { // Store the input axes. float h = Input.GetAxisRaw ("Horizontal"); float v = Input.GetAxisRaw ("Vertical"); // Move the player around the scene. Move (h, v); // Turn the player to face the mouse cursor. Turning (); // Animate the player. Animating (h, v); } void Move (float h, float v) { //); } void Turning () { // Create a ray from the mouse cursor on screen in the direction of the camera. Ray camRay = Camera.main.ScreenPointToRay (Input.mousePosition); // Create a RaycastHit variable to store information about what was hit by the ray. RaycastHit floorHit; // Perform the raycast and if it hits something on the floor layer... if(Physics.Raycast (camRay, out floorHit, camRayLength, floorMask)) { // Create a vector from the player to the point on the floor the raycast from the mouse hit. Vector3 playerToMouse = floorHit.point - transform.position; // Ensure the vector is entirely along the floor plane. playerToMouse.y = 0f; // Create a quaternion (rotation) based on looking down the vector from the player to the mouse. Quaternion newRotation = Quaternion.LookRotation (playerToMouse); // Set the player's rotation to this new rotation. playerRigidbody.MoveRotation (newRotation); } } void Animating (float h, float v) { // Create a boolean that is true if either of the input axes is non-zero. bool walking = h != 0f || v != 0f; // Tell the animator whether or not the player is walking. anim.SetBool ("IsWalking", walking); } } var speed : float = 6f; // The speed that the player will move at. 
private var movement : Vector3; // The vector to store the direction of the player's movement. private var anim : Animator; // Reference to the animator component. private var playerRigidbody : Rigidbody; // Reference to the player's rigidbody. private var floorMask : int; // A layer mask so that a ray can be cast just at gameobjects on the floor layer. private var camRayLength : float = 100f; // The length of the ray from the camera into the scene. function Awake () { // Create a layer mask for the floor layer. floorMask = LayerMask.GetMask ("Floor"); // Set up references. anim = GetComponent (Animator); playerRigidbody = GetComponent (Rigidbody); } function FixedUpdate () { // Store the input axes. var h : float = Input.GetAxisRaw ("Horizontal"); var v : float = Input.GetAxisRaw ("Vertical"); // Move the player around the scene. Move (h, v); // Turn the player to face the mouse cursor. Turning (); // Animate the player. Animating (h, v); } function Move (h : float, v : float) { //); } function Turning () { // Create a ray from the mouse cursor on screen in the direction of the camera. var camRay : Ray = Camera.main.ScreenPointToRay (Input.mousePosition); // Create a RaycastHit variable to store information about what was hit by the ray. var floorHit : RaycastHit; // Perform the raycast and if it hits something on the floor layer... if(Physics.Raycast (camRay, floorHit, camRayLength, floorMask)) { // Create a vector from the player to the point on the floor the raycast from the mouse hit. var playerToMouse : Vector3 = floorHit.point - transform.position; // Ensure the vector is entirely along the floor plane. playerToMouse.y = 0f; // Create a quaternion (rotation) based on looking down the vector from the player to the mouse. var newRotation : Quaternion = Quaternion.LookRotation (playerToMouse); // Set the player's rotation to this new rotation. playerRigidbody.MoveRotation (newRotation); } } function Animating (h : float, v : float) { // Create a boolean that is true if either of the input axes is non-zero. var walking : boolean = h != 0f || v != 0f; // Tell the animator whether or not the player is walking. anim.SetBool ("IsWalking", walking); } Tutoriais relacionados - Animator Scripting (Lição) - The Animator Component (Lição) - The Animator Controller (Lição) - Variables and Functions (Lição) - Rigidbodies (Lição) - Colliders (Lição)
https://unity3d.com/pt/learn/tutorials/projects/survival-shooter/player-character?playlist=17144
CC-MAIN-2019-35
refinedweb
11,482
72.09
Suppose we have a positive number n; we have to find the least number of perfect square numbers whose sum is equal to n. So if the number is 10, then the output is 2, as 10 = 9 + 1. To solve this, we will follow these steps −
- create an array dp of size n + 1, fill it with a large value (infinity), and set dp[0] := 0
- for each perfect square x = i*i not larger than n, and for each j from x to n, set dp[j] := min(dp[j], 1 + dp[j - x])
- return dp[n]
Let us see the following implementation to get a better understanding −
#include <bits/stdc++.h>
using namespace std;
const int INF = 1000000000; // effectively infinity for this problem
class Solution {
public:
   int solve(int n) {
      // dp[j] = least number of perfect squares that sum to j
      vector<int> dp(n + 1, INF);
      dp[0] = 0;
      for (int i = 1; i * i <= n; i++) {
         int x = i * i;
         for (int j = x; j <= n; j++) {
            dp[j] = min(dp[j], 1 + dp[j - x]);
         }
      }
      return dp[n];
   }
};
int main() {
   Solution ob;
   cout << ob.solve(10);
}
Input
10
Output
2
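As a quick sanity check (not part of the original write-up), the same solve routine can be exercised on a few more inputs; by Lagrange's four-square theorem the answer can never exceed 4. A driver like the one below, used in place of the small main above, would print 3, 2 and 1:
#include <iostream>
// assumes the Solution class from the listing above is in scope
int main() {
   Solution ob;
   std::cout << ob.solve(12) << "\n"; // 3, since 12 = 4 + 4 + 4
   std::cout << ob.solve(13) << "\n"; // 2, since 13 = 4 + 9
   std::cout << ob.solve(1)  << "\n"; // 1, since 1 is itself a perfect square
}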
https://www.tutorialspoint.com/program-to-count-number-of-perfect-squares-are-added-up-to-form-a-number-in-cplusplus
CC-MAIN-2021-43
refinedweb
126
68.5
Fitting a polynomial to a function at more points might not produce a better approximation. This is Faber’s theorem, something I wrote about the other day. If the function you’re interpolating is smooth, then interpolating at more points may or may not improve the fit of the interpolation, depending on where you put the points. The famous example of Runge shows that interpolating f(x) = 1 / (1 + x²) at more points can make the fit worse. When interpolating at 16 evenly spaced points, the behavior is wild at the ends of the interval. Here’s the Python code that produced the plot. import matplotlib.pyplot as plt from scipy import interpolate, linspace def cauchy(x): return (1 + x**2)**-1 n = 16 x = linspace(-5, 5, n) # points to interpolate at y = cauchy(x) f = interpolate.BarycentricInterpolator(x, y) xnew = linspace(-5, 5, 200) # points to plot at ynew = f(xnew) plt.plot(x, y, 'o', xnew, ynew, '-') plt.show() However, for smooth functions interpolating at more points does improve the fit if you interpolate at the roots of Chebyshev polynomials. As you interpolate at the roots of higher and higher degree Chebyshev polynomials, the interpolants converge to the function being interpolated. The plot below shows how interpolating at the roots of T16, the 16th Chebyshev polynomial, eliminates the bad behavior at the ends. To make this plot, we replaced x above with the roots of T16, rescaled from the interval [-1, 1] to the interval [-5, 5] to match the example above. x = [cos(pi*(2*k-1)/(2*n)) for k in range(1, n+1)] x = 5*array(x) What if the function we’re interpolating isn’t smooth? If the function has a step discontinuity, we can see Gibbs phenomena, similar to what we saw in the previous post. Here’s the result of interpolating the indicator function of the interval [-1, 1] at 100 Chebyshev points. We get the same “bat ears” as before. Related: Help with interpolation 5 thoughts on “Chebyshev interpolation” Every time I see the Runge interpolation example my first thought is that interpolating polynomials are being disturbed by the poles at +i and -i, even if you focus on the real line. Going to higher and higher degree will start to exhibit power series type problems, so once you are beyond |x|>1, beware. I’m curious if there is a less hand-wavy argument than the one I just gave that makes use of that fact. The problem isn’t the function 1/(1+x^2) per se. It’s the combination of that function and a uniformly distributed set of interpolating points. The poles at +/- i explain why a power series centered at zero have radius of convergence 1, but don’t see the connection between power series and evenly spaced interpolation points. There may be a connection, but I don’t see it. There is indeed a connection with the location of the complex poles of the function. Trefethen’s books [1,2] consider this type of problem and specifically the Runge example. Chapters 7 and 8 of [1] cover convergence. See also [3]. [1] [2] [3] Just a note: Chebyshev interpolation/quadrature/numerical differentiation is landing in Boost 1.66, and it’s close friend barycentric rational interpolation has also landed. Just hawkin’ my wares, man! @Jan Van lent: Thanks for recommending Trefethen’s book. Now I understand better how the location of singularities impacts convergence.
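For readers who want to reproduce the Chebyshev-node plot, the two snippets above need their imports pulled together; the fragment computing the nodes uses cos, pi and array without importing them. Here is one self-contained way to write it. Note that I pull linspace and array from NumPy rather than SciPy, since newer SciPy versions no longer re-export them; everything else matches the listings above.
from math import cos, pi

import matplotlib.pyplot as plt
from numpy import array, linspace
from scipy import interpolate

def cauchy(x):
    return (1 + x**2)**-1

n = 16
# Roots of the Chebyshev polynomial T_16 on [-1, 1], rescaled to [-5, 5].
x = array([cos(pi*(2*k - 1)/(2*n)) for k in range(1, n + 1)])
x = 5*x
y = cauchy(x)

f = interpolate.BarycentricInterpolator(x, y)

xnew = linspace(-5, 5, 200)  # points to plot at
plt.plot(x, y, 'o', xnew, f(xnew), '-')
plt.show()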
https://www.johndcook.com/blog/2017/11/06/chebyshev-interpolation/
CC-MAIN-2018-13
refinedweb
581
55.03
HPC scale-out NASer Panasas has done its traditional yearly system upgrade: the ActiveStor (AS) 20 array replaces the AS18, giving users more bang for their HPC buck.
This is the eighth ActiveStor generation, with the gen-six AS16 being introduced in July 2014, and the gen-seven AS18 arriving in July 2015.
The AS20 [ PDF ] provides more capacity, file system size and throughput, in terms of both IOPS and bandwidth.
The disk size jumps from 8TB in the AS18 to 10TB in the AS20, using HGST He10 helium drives. Per-appliance flash capacity increases from 2.4TB to 8TB. The maximum capacity per AS appliance increases from 122.4TB to 208TB.
Maximum bandwidth increases from 195GB/sec to 360GB/sec, maximum IOPS go from around 1.76 million to 2.6 million (thank you, gods of Xeon), and namespace size increases from about 20PB to 45PB.
It sounds like a substantial capacity and performance upgrade.
As before, the new AS20 runs the PanFS operating system, v6.1 in this case. PanFS v6.0 was introduced in June 2014, so 6.1 has been a while coming.
Unlike competitor DDN Storage, Panasas supplies a single scale-out NAS system and not a collection of systems covering multiple HPC software environments, object storage and burst buffer needs.
AS20s are orderable now and will start shipping this month. Get a configuration overview here. ®
http://126kr.com/article/8sx48647i3e
CC-MAIN-2017-13
refinedweb
234
68.16
Here are some frequently asked questions about Windows Forms and their answers.
If you have two controls bound to the same data source, and you do not want them to share the same position, then you must make sure that the BindingContext member of one control differs from the BindingContext member of the other control. If they have the same BindingContext, they will share the same position in the data source.
If you add a ComboBox and a DataGrid to a form, by default the BindingContext member of each of the two controls is set to the Form's BindingContext. Thus, the default behavior is for the DataGrid and ComboBox to share the same BindingContext, and that's why the selection in the ComboBox is synchronized with the current row of the DataGrid. If you do not want this behavior, create a new BindingContext member for at least one of the controls.
using System;
using System.Data;
using System.Windows.Forms;

public class Form1 : System.Windows.Forms.Form
{
    // ...
    private DataGrid dataGrid1;
    private ComboBox comboBox1;
    private DataTable dataTable1;

    private void Form1_Load( object sender, EventArgs e )
    {
        // dataGrid1 uses the Form's BindingContext
        dataGrid1.DataSource = dataTable1;

        // create a new BindingContext for the combobox
        comboBox1.BindingContext = new BindingContext();
        comboBox1.DataSource = dataTable1;
        comboBox1.DisplayMember = "Col1";
        comboBox1.ValueMember = "Col1";
    }
}
Contributed from George Shepherd's Windows Forms FAQ
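To see the effect, you can grab the two currency managers and move one of them. The sketch below is only illustrative; it assumes dataTable1 has already been created and filled, and that it runs after Form1_Load:
// Each BindingContext hands out its own manager for dataTable1,
// so moving one position does not move the other.
BindingManagerBase gridManager  = this.BindingContext[dataTable1];
BindingManagerBase comboManager = comboBox1.BindingContext[dataTable1];

gridManager.Position = 2;   // moves the DataGrid's current row to the third record
// comboManager.Position is unchanged, so the ComboBox selection stays put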
http://windowsclient.net/blogs/faqs/archive/2006/05/30/how-do-i-prevent-two-controls-bound-to-the-same-datatable-from-sharing-the-same-current-position.aspx
crawl-003
refinedweb
251
57.47
Hi, I';m not new to WPF but I'm finding getting the theming working on your controls like wading through thick mud. Is there a simple guide to get people started?I have the Ultimate edition for a project I'm working on and I can't even find pre-compiled theme DLLS after wading through the enormous number of directoried and dlls ... I took the DataGrid theme XAML dictionary and popped it into my own apps styles dll and now I cannot find the DLL that contains the CultureValueConverter that is missing (I have the DataPresenter DLL referenced). Searching the online help I can only find references to this object in relation to SilverLight. Serisouly guys, I love the controls and this is the 4th employer I've recommended them too but PLEASE, PLEASE reduce the searching and inbuilt obfuscation, it's getting just plain too hard to find stuff in the dirth of options available ... Where do I start? I should add here that I have subclassed the Controls so the type is actually not XamDataChart or XamDataGrid ... I have just created thin wrappers to encapsulate custom functionality. I've so far completely failed to get any Infragistics themes to apply to the controls. Not even sure which DLLs need to be referenced as Referenceing Forest theme and setting the theme manager to use this one does nothing at all. findjammer3:I should add here that I have subclassed the Controls so the type is actually not XamDataChart or XamDataGrid ... I have just created thin wrappers to encapsulate custom functionality. That makes a big difference since WPF uses the actual type as the key when searching for implicit/local styles - so your derived class' type - and therefore any styles we provide in our themes will not be picked up by WPF. You might want to review this post about common Theme/Style issues like this or other posts with similar questions like this one. OK, What would you suggest in this scenario? Is there some plain XAML Styles for the Grid I can use an update with my Type? findjammer3: OK, What would you suggest in this scenario? Is there some plain XAML Styles for the Grid I can use an update with my Type? You could take the xaml from the DefaulStyles folder and change them but that won't help with setting the Theme property since the xaml it uses is embedded in the assemblies. As suggested in the forum post I linked to, you could call SetResourceReference in the ctor of your wrapper class passing the appropriate type of ours that you are deriving from. Thanks for the help so far, appreciated. I've just tried the simple approach so far and no luck. I have this as my class declaration in code: public class MangoDynamicDataGrid : XamDataGrid And in the ctor for this I have placed: this.SetResourceReference(StyleProperty, typeof(XamDataGrid)); And the grids are still completely unstyled. So from what you have said the default style is embedded in the Infragistics.WPF4.DataPresenter.v11.2 dll and this SetResourceReference method should tell this control to use that default style. Are there any other DLLs I should be referencing in order to get this behaviour to work? Could creating a style with my type as the target type and then setting the based on property worth trying? I'm a bit confused as to why your solution doesn't work however. findjammer3:I've just tried the simple approach so far and no luck. What specifically isn't working? Maybe you can post a sample so I can see what you are seeing and see what is happening. 
findjammer3:So from what you have said the default style is embedded in the Infragistics.WPF4.DataPresenter.v11.2 dll and this SetResourceReference method should tell this control to use that default style. Just to be clear, that code tells WPF that the Style property is set to a DynamicResource to the type XamDataGrid. Since Styles are keyed by the Type this would get a style for XamDataGrid and use that for your control which should work. Maybe you're doing something like setting the DefaultStyleKey of your control to your type in which case you're telling WPF that you will be providing a default style for your type and unless you're doing that there will be no generic/fallback style. I'm not currently setting the default style key to anything at all. So I tried! DefaultStyleKey = typeof(XamDataGrid); This had no effect either. I tried each method SetResourceReferenec / DefaultStyleKey in turn and each independently, neither works. This is very confusing now, shall I try doing this in XAML instead of Code? Not sure if that should or will make any difference however ... I can't think of anything else to check / try now ... Setting this in XAML: Style="{DynamicResource {x:Type igDP:XamDataGrid}}" Doesn't work. And this: <Style TargetType="{x:Type uc:MangoDynamicDataGrid}" BasedOn="{DynamicResource {x:Type dp:XamDataGrid}}"> </Style> Is illegal ... so I'm completely lost now. Is there any other way to apply a style to a derived type? I can't believe I'm the first or only person to subclass the XamDataGrid. I've managed to get it to style but copying all the Generic styles into another dictionary and globally replacing the XamDataGrid type declaration with my own Type ... This is the only approach that seems to work for me ... I've managed to get it to style but copying all the Generic styles into another dictionary and globally replacing the XamDataGrid type declaration with my own Type ... You haven't provided me with a sample or a description of what exactly is not working so it is hard to say what is going on. I can say that both approaches (setting the dynamicresource in the xaml or calling setresourcereference in code) do allow WPF to pick up the styles for the control and allow theming to continue to work. I've attached a sample showing both approaches. With regards to BasedOn, you can only use that with a StaticResource - not a DynamicResource - in which case I believe it just picks up the generic style at the time it is created. The specific problem is that setting these properties / executing the methods aren't doing what you said they would. I'll try and get a sample together later to illustrate the problem. Do you have an email address I can send the sample to? Thanks for the demo app. I've looked it over and it describes exactly what I tried ... it didn't work in my application.
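For anyone following along later, here are the two approaches discussed in this thread pulled into one place. Treat this as a sketch rather than a verified fix; the igDP and uc prefixes stand for whatever XML namespaces you have mapped to the Infragistics DataPresenter assembly and to your own control's namespace.
// Code-behind approach: ask WPF to resolve the Style keyed by the base type,
// so the Infragistics theme dictionaries can still be picked up.
public class MangoDynamicDataGrid : XamDataGrid
{
    public MangoDynamicDataGrid()
    {
        SetResourceReference(StyleProperty, typeof(XamDataGrid));
    }
}

<!-- XAML approach: point the derived control at the base type's style directly. -->
<uc:MangoDynamicDataGrid Style="{DynamicResource {x:Type igDP:XamDataGrid}}" />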
http://www.infragistics.com/community/forums/p/66426/335926.aspx
CC-MAIN-2014-15
refinedweb
1,113
62.98
Generic to the subtype hierarchy of possible generic argument types. This means for example that List<Number> is not a supertype of List<Integer>. The following prominent example gives a good intuition why this kind of subtyping is prohibited: // assuming that such subtyping was possible ArrayList<Number> list = new ArrayList<Integer>(); // the next line would cause a ClassCastException // because Double is no subtype of Integer list.add(new Double(.1d)) Before discussing this in further detail, let us first think a little bit about types in general: types introduce redundancy to your program. When you define a variable to be of type Number, you make sure that this variable only references objects that know how to handle any method defined by Number such as Number.doubleValue. By doing so, you make sure that you can safely call doubleValue on any object that is currently represented by your variable and you do not longer need to keep track of the actual type of the variable’s referenced object. (As long as the reference is not null. The null reference is actually one of the few exceptions of Java’s strict type safety. Of course, the null “object” does not know how to handle any method call.) If you however tried to assign an object of type String to this Number-typed variable, the Java compiler would recognize that this object does in fact not understand the methods required by Number and would throw an error because it could otherwise not guarantee that a possible future call to for example doubleValue would be understood. However, if we lacked types in Java, the program would not change its functionality just by that. As long if we never made an errornous method call, a Java program without types would be equivalent. Viewed in this light, types are merely to prevent us developers of doing something stupid while taking away a little bit of our freedom. Additionally, types are a nice way of implicit documentary of your program. (Other programming languages such as Smalltalk do not know types and besides being anoying most of the time this can also have its benefits.) With this, let’s return to generics. By defining generic types you allow users of your generic class or interface to add some type safety to their code because they can restrain themselfs to only using your class or interface in a certain way. When you for example define a List to only contain Numbers by defining List<Number>, you advice the Java compiler to throw an error whenever you for example try to add a String-typed object into this list. Before Java generics, you simply had to trust that the list only contained Numbers. This could be especially painful, when you handed references of your collections to methods defined in third-party code or received collections from this code. With generics, you could assure that all elements in your List were of a certain supertype even at compile time. At the same time, by using generics you loose some type-safety within your generic class or interface. When you for example implement a generic List class MyList<T> extends ArrayList<T> { } you do not know the type of T within MyList and you have to expect that the type could be as unsophisticated as Object. This is why you can restrain your generic type to require some minimum type: class MyList<T extends Number> extends ArrayList<T> { double sum() { double sum = .0d; for(Number val : this) { sum += val.doubleValue(); } return sum; } } This allows you to asume that any object in MyList is a subtype of Number. 
That way, you gain some type safety within your generic class. Wildcards Wildcards are the Java equivalent to saying whatever type. Consequently, you are not allowed to use wildcards when instanciating a type, i.e. defining what concrete type some instance of a generic class should represent. A type instanciation occurs for example when instanciating an object as new ArrayList<Number> where you among other things implicitly call the type constructor of ArrayList which is contained in its class definition class ArrayList<T> implements List<T> { ... } with ArrayList<T> being a trivial type constructor with one single argument. Thus, neither within ArrayList’s type constructor definition (ArrayList<T>) nor in the call of this constructor (new ArrayList<Number>) you are allowed to use a wildcard. When you are however only referring to a type without instanciating a new object, you can use wildcards, such as in local variables. Therefore, the following definition is allowed: ArrayList<?> list; By defining this variable, you are creating a place holder for an ArrayList of any generic type. With this little restriction of the generic type however, you cannot add objects to the list via its reference by this variable. This is because you made such a general assumption of the generic type represented by the variable list that it would not be safe to add an object of for example type String, because the list beyond list could require objects of any other subtype of some type. In general this required type is unknown and there exists no object which is a subtype of any type and could be added safely. (The exception is the null reference which abrogates type checking. However, you should never add null to collections.) At the same time, all objects you get out of the list will be of type Object because this is the only safe asumption about a common supertype of al possible lists represented by this variable. For this reason, you can form more elaborate wildcards using the extends and super keywords: ArrayList<?> list = new ArrayList<List<?>>(); In this example, the requirement that the ArrayList must not be constructed by using a wildcard type is fullfilled because the wildcard is applied on the type argument and not on the constructed type itself. As for subtyping of generic classes, we can summarize that some generic type is a subtype of another type if the raw type is a subtype and if the generic types are all subtypes to each other. Because of this we can define List<? extends Number> list = new ArrayList<Integer>(); because the raw type ArrayList is a subtype of List and because the generic type Integer is a subtype of ? extends Number. Finally, be aware that a wildcard List<?> is a shortcut for List<? extends Object> since this is a commonly used type definition. If the generic type constructor does however enforce another lower type boundary as for example in class GenericClass<T extends Number> { } a variable GenericClass<?> would instead be a shortcut to GenericClass<? extends Number>. The get-and-put principle This observation leads us to the get-and-put principle. This principle is best explained by another famous example: class CopyClass { <T> void copy(List<T> from, List<T> to) { for(T item : from) to.add(item); } } This method definition is not very flexible. If you had some list List<Integer> you could not copy its contents to some List<Number> or even List<Object>. Therefore, the get-and-put principle states that you should always use lower-bounded wildcards (? 
extends) when you only read objects from a generic instance (via a return argument) and always use upper-bounded wildcards (? super) when you only provide arguments to a generic instance's methods. Therefore, a better implementation of CopyClass would look like this:
class CopyClass {
  <T> void copy(List<? extends T> from, List<? super T> to) {
    for(T item : from)
      to.add(item);
  }
}
Since you are only reading from one list and writing to the other list, the source and target lists no longer need to have exactly the same element type: any source whose elements are subtypes of T and any target that accepts supertypes of T will do. Unfortunately, this principle is easily forgotten, and you can even find classes in the Java core API that do not apply the get-and-put principle. (Note that the above method also describes a generic type constructor.)
Note that the types List<? extends T> and List<? super T> are both less specific than the requirement of List<T>. Also note that this kind of subtyping is already implicit for non-generic types. If you define a method that asks for a method parameter of type Number, you can automatically receive instances of any subtype, for example Integer. Nevertheless, it is always type safe to read this Integer object you received even when expecting the supertype Number. And since it is impossible to write back to this reference, i.e. you cannot overwrite the Integer object with for example an instance of Double, the Java language does not require you to waive your writing intention by declaring a method signature like void someMethod(<? extends Number> number). Similarly, when you promised to return an Integer from a method but the caller only requires a Number-typed object as a result, you can still return (write) any subtype from your method. And because you cannot read in a value from a hypothetical return variable, you do not have to waive these hypothetical reading rights by a wildcard when declaring a return type in your method signature.
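To make the gain in flexibility concrete, here is a small usage sketch. The class and variable names are made up for illustration, and it assumes the CopyClass from the listing above is in the same package; with the bounded wildcards, a List<Integer> source can be copied into a List<Number> or List<Object> destination, which the original copy(List<T>, List<T>) signature would reject.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class GetAndPutDemo {
  public static void main(String[] args) {
    List<Integer> source = Arrays.asList(1, 2, 3);
    List<Number> numbers = new ArrayList<>();
    List<Object> objects = new ArrayList<>();

    CopyClass copier = new CopyClass();
    // The compiler can infer T = Integer for both calls:
    // List<Integer> matches List<? extends Integer>, while
    // List<Number> and List<Object> both match List<? super Integer>.
    copier.copy(source, numbers);
    copier.copy(source, objects);

    System.out.println(numbers); // [1, 2, 3]
    System.out.println(objects); // [1, 2, 3]
  }
}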
http://www.javacodegeeks.com/2013/12/subtyping-in-java-generics.html
CC-MAIN-2015-06
refinedweb
1,505
60.65
Our Own Multi-Model Database (Part 3)
You've got your library, so now it's time to start turning it into a true multi-model database by wrapping a web server around it and testing its performance.
If you haven't read part one and part two, Neo4j's, and it's all open source, so you can see exactly how it was put together. I used the filter to check out a Java framework, looked at the first test, and started poking around. I never heard of revenj.jvm until then and it looked really interesting, assuming you already had a database you wanted to connect to, not for building a database. So, that one is out. A bunch more then run it in main. Let's go:
public class Server extends Jooby {
{
get("/", () -> "Hello World!");
}
public static void main(final String[] args) {
run(Server::new, args);
}
}
and non blocking fashion. HTTP IO operations run in an IO thread (a.k.a event loop) while the application logic (your code) always run in a worker thread. Ohhhhhh. OK. Let's up the user count from eight to something bigger, like say 80 users, and try again.
Now that's much better. We had 22,000 requests per second with a mean latency of 2 ms. It's not the almost 400,000;
}
private static GuancialeDB db;
{
onStart(() -> {
Config conf = require(Config.class);
GuancialeDB.init(conf.getInt("guanciale.max_nodes"),conf.getInt("guanciale.max_rels"));
db = GuancialeDB.getInstance();
});
// set up))
)
}
Almost as fast as not doing anything at all. That's all for now! The source code is on GitHub, as always. Next time we'll finish out the REST of the API, add some metrics, maybe add a Shell. We'll see, then on to persistence and replication.
Published at DZone with permission of Max De Marzi , DZone MVB. See the original article here.
https://dzone.com/articles/our-own-multi-model-database-part-3?fromrel=true
CC-MAIN-2019-22
refinedweb
354
67.35
Forum: @mkurek Thank you, yes I did try installing on Ubuntu 16.04 server using the PPA method. Kept getting python site-packages that were missing. Tried changing PYTHONHOME and PYTHONPATH in my ENV but didn't fix it. Couldn't find anyone else that posted about it other than generic Python import issues. Seems like Ralph has far too many "custom" settings tied to 14.04 to make its upgrade path simple enough for Production environments. It stinks that Docker is NOT an option for my employer. Note: I was able to install on 14.04 using my Ansible Play that fails on 16.04.
https://gitter.im/allegro/ralph?at=5930396ccb83ba6a411ede05
CC-MAIN-2019-51
refinedweb
108
78.25
- ellie-app: - Download file: json-placeholder.elm In this blog post I am going to walk through fetching data from a JSON API with Elm. When I started learning Elm this was a pain point for me. I have also been trying to introduce others to Elm and I haven't found enough complete examples to give to people. This post will use the latest Elm version 0.19.1. I will use the jsonplaceholder API and grab a list of posts from /posts. The JSON looks like this: [ { "userId": 1, "id": 1, "title": "some title", "body": "some body text" }, ... ] Records and Decoders Let's first deal with what our data will look like. We will need a post record. type alias Post = { userId : Int , id : Int , title : String , body : String } And then a list of posts. type alias Posts = List Post Now when we fetch this data from the API we will need to parse the JSON into something Elm will understand. We do this by decoding the JSON. This was really strange to me coming from JavaScript land, but once you get used to decoding your JSON you begin to see its benefits. In this example we are parsing the data fields into primitive types like Int and String. In other cases you might want to constrain your data even more and parse the data into an Elm type. For example, I could have an endpoint that turned on a light with a status field. { "status": "on" } { "status": "off" } The only two values state could be is On and Off. You could make a type and only accept those two values. type Status = On | Off Something like "status": "blue" would be invalid and you would get a nice error message and a safe way to deal with the error. Anyway, back to our simple decoder. We first need to install the Elm decoder package. elm install elm/json And now import the package in our Elm code so we can use it. import Json.Decode as D exposing (Decoder, field, int, string) Now we can define our decoder for the Post record we defined above. postDecoder : Decoder Post postDecoder = D.map4 Post (D.field "userId" D.int) (D.field "id" D.int) (D.field "title" D.string) (D.field "body" D.string) Since we will be fetching a list of these posts we need another decoder to decode the list. Fortunately decoders compose really nicely. postsDecoder : Decoder Posts postsDecoder = D.list postDecoder Define our app Model Here I am going to use the package krisajenkins/remotedata instead of using just the elm/http package like in the official Elm docs. The RemoteData packages provides some extra types and helpers on top of elm/http. elm install elm/http elm install krisajenkins/remotedata And again like before, import the things. import Http exposing (expectJson) import RemoteData exposing (RemoteData(..), WebData) And now we can define our application model. type alias Model = { posts : WebData Posts } initialModel : Model initialModel = { posts = Loading } Notice how our model is a WebData Posts instead of just Posts. In VueJS I would declare the data as undefined and then when I succeed in fetching my data set the value. I would then have to check that posts is not undefined when I attempt to use the data. Elm deals with this uncertainty in a different way. This is one of the reasons why Elm has no runtime errors. Also notice how the initial state of the model is set to Loading. This is a type provided by the RemoteData package. The definition looks like the following: type RemoteData e a = NotAsked | Loading | Failure e | Success a We initialize our app in the Loading state because we will fire off the request right when the application starts up. 
When we get to the view function later on we will have to deal with each of these cases NotAsked, Failure, and Success. Define the update Now we need to actually make the request. In Elm all side effects are dealt with in the update function. Read more about The Elm Architecture if you want to know more about how that works. You cannot just fire off an AJAX call anywhere in the code like in JavaScript. This might seem like a nuisance, but as your web app grows this constraint makes it easy to find where your data is coming and going in your app. type Msg = PostsResponse (WebData Posts) update : Msg -> Model -> ( Model, Cmd Msg ) update msg model = case msg of PostsResponse response -> ( { model | posts = response } , Cmd.none ) getPosts : Cmd Msg getPosts = Http.get { url = "" , expect = expectJson (RemoteData.fromResult >> PostsResponse) postsDecoder } Now somewhere in the code we need to fire off this getPosts command. We will do that in the app initialization. The init function takes an initial model and an initial command message. main : Program () Model Msg main = Browser.element { init = \_ -> ( initialModel, getPosts ) , view = view , update = update , subscriptions = subscriptions } View the posts Lastly we need to be able to view the posts should the request succeed. viewPost : Post -> Html msg viewPost post = div [ class "post" ] [ h2 [] [ text post.title ] , p [] [ text post.body ] ] viewPosts : List Post -> Html msg viewPosts posts = div [] (List.map viewPost posts) In order to display our list of posts, we need to account for all cases of the web request. These are the 4 cases of RemoteData that I mentioned above. view : Model -> Html msg view model = case model.posts of NotAsked -> div [] [ text "Initializing" ] Loading -> div [] [ text "Loading" ] Failure _ -> div [] [ text "Network Error" ] Success posts -> viewPosts posts And we're done. We have fetched a list of posts. P.S. I'm getting close to finishing my book, Elm Calculator book. I build a calculator from scratch using Elm. I go through setting up CSS, using Elm types effectively, deployment, and testing.
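One small gap if you try to compile the snippets above as-is: main refers to a subscriptions function that is never defined, and the view code uses Html helpers and Browser.element without showing those imports. A minimal way to fill that in (this app has no subscriptions, so Sub.none is enough) looks like this:
import Browser
import Html exposing (Html, div, h2, p, text)
import Html.Attributes exposing (class)


subscriptions : Model -> Sub Msg
subscriptions _ =
    Sub.none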
https://pianomanfrazier.com/post/elm-json-placeholder/
CC-MAIN-2020-24
refinedweb
972
74.79
To check whether an input year is a leap year or not in Java programming, you have to ask the user to enter the year and then check it against the leap year rules.
The following Java program asks the user to enter the year, checks whether it is a leap year or not, and then displays the result on the screen:
/* Java Program Example - Check Leap Year or Not */
import java.util.Scanner;

public class JavaProgram
{
    public static void main(String args[])
    {
        int yr;
        Scanner scan = new Scanner(System.in);

        System.out.print("Enter Year : ");
        yr = scan.nextInt();

        if((yr%4 == 0) && (yr%100 != 0))
        {
            // divisible by 4 but not by 100 (e.g. 2016)
            System.out.print("This is a Leap Year");
        }
        else if((yr%100 == 0) && (yr%400 == 0))
        {
            // century year that is also divisible by 400 (e.g. 2000)
            System.out.print("This is a Leap Year");
        }
        else if(yr%400 == 0)
        {
            System.out.print("This is a Leap Year");
        }
        else
        {
            System.out.print("This is not a Leap Year");
        }
    }
}
When the above Java program is compiled and executed, it will produce the following output.
Sample output for a leap year (input 2016):
Enter Year : 2016
This is a Leap Year
Sample output for a non-leap year (input 2017):
Enter Year : 2017
This is not a Leap Year
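As a side note on the design: the three leap-year branches above can be collapsed into one standard condition, and on Java 8+ the java.time API already provides the same check. The sketch below is just an alternative illustration (class and variable names are made up), not part of the original program:
import java.time.Year;
import java.util.Scanner;

public class LeapYearCompact
{
    public static void main(String[] args)
    {
        Scanner scan = new Scanner(System.in);
        System.out.print("Enter Year : ");
        int yr = scan.nextInt();

        // divisible by 4 and not by 100, or divisible by 400
        boolean isLeap = (yr % 4 == 0 && yr % 100 != 0) || (yr % 400 == 0);

        // the library call gives the same answer
        System.out.println(isLeap + " " + Year.isLeap(yr));
    }
}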
https://codescracker.com/java/program/java-program-check-leap-year.htm
CC-MAIN-2019-13
refinedweb
206
62.07
Details Description The following tests fail when running ant test on trunk 2.0 [junit] Running org.apache.nutch.api.TestAPI [junit] Tests run: 4, Failures: 1, Errors: 0, Time elapsed: 11.028 sec [junit] Test org.apache.nutch.api.TestAPI FAILED [junit] Running org.apache.nutch.crawl.TestGenerator [junit] Tests run: 4, Failures: 0, Errors: 4, Time elapsed: 0.478 sec [junit] Test org.apache.nutch.crawl.TestGenerator FAILED [junit] Running org.apache.nutch.crawl.TestInjector [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0.474 sec [junit] Test org.apache.nutch.crawl.TestInjector FAILED [junit] Running org.apache.nutch.fetcher.TestFetcher [junit] Tests run: 2, Failures: 0, Errors: 2, Time elapsed: 0.526 sec [junit] Test org.apache.nutch.fetcher.TestFetcher FAILED [junit] Running org.apache.nutch.storage.TestGoraStorage [junit] Tests run: 2, Failures: 0, Errors: 2, Time elapsed: 0.468 sec [junit] Test org.apache.nutch.storage.TestGoraStorage FAILED Activity Yes this one should be closed. The tests for nutchgora seem to work fine now. Close? Set and classify Everything works fine except for org.apache.nutch.api.TestAPI. That one still fails, sometimes. When I just ran these tests with "ant test" they all worked perfectly fine, but the runs after that simply fail. Cleaning the project with "ant clean" doesn't help. See corresponding mailing list discussion in link above. I have not yet looked into this test thouroughly, because it is part of Nutch that I am completely unfamiliar with. (The NutchServer API). I think it is best we close this issue, and start a new one that will deal with the this API and the test. Hi Ferdy. Have you noticed anything dodgy with this? Hi Ferdy. There has been almost no problems within the CI testing environment for a number of weeks/months. Any failures seem to have been down to the project building on Ubuntu slaves as oppose to Solaris slaves, the failures are a result of incorrect envars being specified. I've added some more functionality to the nutchgora build characteristics e.g. Publish JUnit test result report and publish Javadoc. So as agreed we will keep an eye on this. Reopening this issue as per our concerns. For the record, the Jenkins build area has been cleaned up and we now only maintain 3 builds; trunk, Nutchgora and a maven trunk. TestAPI is troublesome: As of the recent NUTCH-1135 commit, this summary is being closed out. My bad it was a local issue indeed. Hi Ferdy copy-generated-lib: test: [echo] Testing plugin: parse-tika [junit] Running org.apache.nutch.parse.tika.TestMSWordParser [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.349 sec [junit] Running org.apache.nutch.parse.tika.TestOOParser [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.409 sec [junit] Running org.apache.nutch.parse.tika.TestPdfParser [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.674 sec [junit] Running org.apache.nutch.parse.tika.TestRSSParser [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.698 sec [junit] Running org.apache.nutch.parse.tika.TestRTFParser [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.013 sec init: init-plugin: deps-jar: ... BUILD SUCCESSFUL Total time: 2 minutes 20 seconds Ti make sure I checkedout our most recent nutchgora and applied your interim testGoraStorage patch. Is there maybe some config that you have changed at your end? Are you aware of the fact that TestMSWordParser currently fails too? Or am I missing something? 
[echo] Testing plugin: parse-tika [junit] Running org.apache.nutch.parse.tika.TestMSWordParser [junit] Tests run: 2, Failures: 2, Errors: 0, Time elapsed: 6.6 sec [junit] Test org.apache.nutch.parse.tika.TestMSWordParser FAILED If it is broken, could you make a subtask? Thanks Ferdy. It was also my initial thoughts that this was maybe too simplistic a fix! If however you look here [1] you will see that Dogacan changed the import namespaces for the Gora imports. It would seem that over time we forgot to do this with hard-coded imports in Injector, Generator and Fetcher tests. Are there any objections to committing this as an interm fix before concentrating on NUTCH-896 ? [1] It seems like your patch is fine, at least as a temporary solution. Tests run fine. (Please see my notes for NUTCH-1135 in the corresponding issue.) I see NUTCH-896 as a rather separate issue. Sure it would be nice have a configurable backend in tests, but as sql is currently the default backend (also for building) I see it as no problem to have it hard-coded in the tests for now. To summarize: +1 to update on this issue. All tests as above e.g. all that extend AbstractNutchTest now pass successfully. The daemon TestGoraStorage is still giving us bother, and please bear in mind that none of this takes into consideration NUTCH-896. Is the patch I submitted for these tests deemed appropriate as a temporary fix? Or should my efforts be concentrated towards NUTCH-896 ? Thanks I have marked this as critical now as it is the 'only' thing preventing us from finally achieving a stable nightly build for the nutchgora branch. In an attempt to get this moving, I'm going to create subtasks for each test, this way we will be able to track reasonable progress on each potential blocker. Over a number of issues this seems to have been phased out/addressed as testing has been stable for some time. Thanks guys.
https://issues.apache.org/jira/browse/NUTCH-1081?focusedCommentId=13151073&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2015-11
refinedweb
927
68.57
2017-09-15 @floitschG Welcome to the Dart Language and Library Newsletter. In this (hopefully) recurring section, we will show some of the lesser known features of Dart. Dart's semantics introduces labels as follows: A label is an identifier followed by a colon. A labeled statement is a statement prefixed by a label L. A labeled case clause is a case clause within a switch statement (17.9) prefixed by a label L. The sole role of labels is to provide targets for the break(17.14) and continue(17.15) statements. Most of this functionality is similar to other languages, so most of the following sections might look familiar to readers. I believe, Dart's handling of continue in switch statements is relatively unique, so make sure you read that section. Labels are most often used as targets for break and continue inside loops. Say you have nested loops, and want to jump to break or continue to the outer loop. Without labels this wouldn't (easily) possible. The following example uses continue label to jump from the inner loop directly to the next iteration of the outer loop: /// Returns the inner list (of positive integers) with the smallest sum. List<int> smallestSumList(List<List<int>> lists) { var smallestSum =0xFFFFFFFF; // The lists are known to have smaller sums. var smallestList = null; outer: for (var innerList in lists) { var sum = 0; for (var element in innerList) { assert(element >= 0); sum += element; // No need to continue iterating over the inner list. Its sum is already // too high. if (sum > smallestSum) continue outer; // <===== continue to label. } smallestSum = sum; smallestList = innerList; } return smallestList; } This function runs through all lists, but stops adding up variables, as soon as the sum is too high. The same technique can be used to break out of an outer loop: var firstListWithNullValues = null; outer: for (var innerList in lists) { for (var element in innerList) { if (element == null) { firstListWithNullValues = innerList; break outer; // <====== break to label. } } } // Now continue the normal work-flow. if (firstListWithNullValues != null) { ... } Labels can also be used to break out of blocks. Say we want to treat an error condition uniformly, but have multiple conditions (potentially deeply nested) that reveal the error. Labels can help structure this code. void doSomethingWithA(A a) { errorChecks: { if (a.hasEntries) { for (var entry in a.entries) { if (entry is Bad) break errorChecks; // <===== break out of block. } } if (a.hasB) { var b = a.b; if (b.inSomeBadState) break errorChecks; // <===== break out of block. } // All looks good. use(a); return; } // Error case: print("something bad happened"); } A break to a block makes Dart continue with the statement just after the block. From a certain point of view, it's a structured goto, that is only allowed to jump to less-nested places that are after the current instruction. While statement labels are most useful on blocks, they are allowed on every statement. For example, foo: break foo; is a valid statement. Note that the loop continues from above can be implemented by wrapping the loop body into a labeled block and breaking out of it. That is, the following two loops are equivalent: // With continue. for (int i = 0; i < 10; i++) { if (i.isEven) continue; print(i); } // With break. for (int i = 0; i < 10; i++) { stmtLabel: { if (i.isEven) break stmtLabel; print(i); } } Labels can also be used inside switches. They allow programs to continue with another case clause. 
In its simplest form this can be used as a way to fall through to the next clause: void switchExample(int foo) { switch (foo) { case 0: print("foo is 0"); break; case 1: print("foo is 1"); continue shared; // Continue at the clause that is marked `shared`. shared: case 2: print("foo is either 1 or 2"); break; } } Interestingly, Dart does not require the target of the continue to be the clause that follows the current case clause. Any case clause with a label is a valid target. This means, that Dart's switch statements are effectively state machines. The following example demonstrates such an abuse, where the whole switch is really just used as a state machine. void runDog() { int age = 0; int hungry = 0; int tired = 0; bool seesSquirrel() => new Random().nextDouble() < 0.1; bool seesMailman() => new Random().nextDouble() < 0.1; switch (0) { start: case 0: print("dog has started"); continue doDogThings; sleep: case 1: // Never used. print("sleeping"); tired = 0; age++; // The inevitable... :( if (age > 20) break; // Wake up and do dog things. continue doDogThings; doDogThings: case 2: // Never used. if (hungry > 2) continue eat; if (tired > 3) continue sleep; if (seesSquirrel()) continue chase; if (seesMailman()) continue bark; continue play; chase: case 3: // Never used. print("chasing"); hungry++; tired++; continue doDogThings; eat: case 4: // Never used. print("eating"); hungry = 0; continue doDogThings; bark: case 5: // Never used. print("barking"); tired++; continue doDogThings; play: case 6: // Never used. print("playing"); tired++; hungry++; continue doDogThings; } } This function jumps from one switch clause to the next simulating the life of a dog. In Dart, labels are only allowed on case clauses, so I had to add some case lines that will never be reached. This feature is pretty cool, but it has been used extremely rarely. Because of the added complexity for our compilers, we have frequently discussed its removal. So far it has survived our scrutiny, but we might eventually simplify our specification and make users add a while(true) loop (with a label!) themselves. The dog example could be rewritten as follows: var state = 0; loop: while (true) switch (state) { case 0: print("dog has started"); state = 2; continue; case 1: // sleep. print("sleeping"); tired = 0; age++; // The inevitable... :( if (age > 20) break loop; // <===== break out of loop. // Wake up and do dog things. state = 2; continue; case 2: // doDogThings. if (hungry > 2) { state = 4; continue; } if (tired > 3) { state = 1; continue; } if (seesSquirrel()) { state = 3; continue; } ... If the state values were named constants this would be as readable as the original version, but wouldn't require the switch statement to support state machines. This section discusses our plans to make async functions start synchronously. This change is planned for Dart 2.0. The current Dart specification requires that async functions are delayed: If f is marked async (9), then a fresh instance (10.6.1) o implementing the built-in class Future is associated with the invocation and immediately returned to the caller. The body of f is scheduled for execution at some future time. For example: Future<int> foo(x) async { print(x); return x + 1; } main() { foo(499).then(print); print("after foo call"); } When this program is run, it emits the following output: after foo call 499 500 The specification doesn't explain what precisely “at some future time” means, but in practice async functions use scheduleMicrotask to start their body. 
There are some benefits to delaying the execution of async function bodies: asynckeyword made it easy to detect that a function would yield. This way asyncis mostly similar to await. However, this approach also comes with drawbacks: asyncbecause it introduces latency. asyncmodifier, which is an implementation detail and should not be seen as part of the signature. asyncfunctions cannot be used in many use-cases. When programs need to fetch data from the server they often use async. This makes sense: XMLHttpRequests are asynchronous, and waiting for them in an async function is the easiest way to deal with the corresponding futures. Often, programs start by fetching their resources as early as possible, so that work is done in parallel with the request. Some Googlers noticed big latency issues when using this approach. Because of the immediate yield of async functions, these requests weren't sent immediately, but the function was just bumped back in the microtask queue. Only later, when the microtask queue was finally executing the body, did it do the request. Often this delay was significant and noticeable. async Dart considers async an implementation detail. That is, as a user of an API it doesn‘t matter if a function body is implemented with async or without. As long as the function returns a Future it doesn’t matter how the body of the function is implemented. This is the reason for having the async keyword after the function‘s signature, and not as part of it. Since async is not part of the type / signature, users may override async functions with synchronous functions, use closures of either implementation approach interchangeably, or refactor functions from one async to non- async or the inverse. In general, Dart wants our users to see async functions similar to non- async functions (from a user’s point of view). Despite these efforts, we see users that take the async as part of the signature. Specifically, knowing that async immediately returns, is used as a part of the contract of a function. This is counter to how we envision async to be used: since async is not part of the signature / type, a user should be allowed to change the body from async to non- async. During readability reviews we have seen code where the authors clearly didn't expect the async function to yield. For example, we saw code like the following: class A { bool isDoingRequest = false; Future<String> doRequest(Uri link) async { isDoingRequest = true; return (await rpcCall()).data; } Future foo() async { if (!isDoingRequest) { var str = await doRequest(...); } } } In this example, some other function is testing for the value of the isDoingRequest. If that field is set to false, it invokes doRequest, which, in the first line, sets the value to true. However, because of the asynchronous start, the field is not set immediately, but only in the next microtask. This means that other calls to foo might still see the isDoingRequest as false and initiate more requests. This mistake can happen easily when switching from synchronous functions to async functions. The code is much easier to read, but the additional delay could introduce subtle bugs. Running synchronously also brings Dart in line with other languages, like Ecmascript. <footnote>C# also executes the body of async functions synchronously. However, C# doesn't guarantee, that await always yields. If a Task (the equivalent class for Future) is already completed, C# code may immediately continue running at the await point. 
</footnote> Switching to synchronous starts of async functions requires changes in the specification and in our tools. The tool changes are relatively small, since few code touches the async/ await functionality. A prototype CL for the VM and dart2js can be found here: The specification has already been updated with Running async functions synchronously is a subtle change that might break programs in unexpected ways. Most programs don't depend on the additional yield on purpose, but some may depend on it by accident. We are aware that this change has the potential to cause big headaches. Once the patch is complete we intend to roll it out behind a flag. This way, users can start experimenting without being forced to switch in one go. With a bit of luck, most programs just continue working (or the reason for failures is obvious). If necessary, a full program search-and-replace can also bring back the old behavior: // Before: Future foo() async { doSomething(); } Future bar() async => doSomething(); // After: Future foo() async { await null; doSomething(); } Future bar() async { await null; return doSomething(); } This transformation is purely syntactic, and preserves the old behavior if done at the same time as the switch to the new semantics. Note that a slightly more advanced transformation would pay attention not to return a void value in the bar case above. However, it would be probably easier to just fix those by hand. Depending on the feedback and our own experience of migrating Google's whole codebase, we could also add a temporary flag to our tools that maintains the old behavior.
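A tiny probe makes the difference between the two semantics observable. This is just an illustration, not part of any migration tooling:

```dart
bool bodyStarted = false;

Future<Null> probe() async {
  bodyStarted = true; // before the first await, so affected by the change
  await null;
}

void main() {
  probe();
  // Prints `false` under the current delayed-start semantics,
  // `true` once async function bodies start running synchronously.
  print(bodyStarted);
}
```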
https://dart.googlesource.com/sdk/+/3dda726bca7e979e1cab071bc6a4ea3860dbd0ea/docs/newsletter/20170915.md
Updates to Windows Azure (Mobile, Web Sites, SQL Data Sync, ACS, Media, Store) This available to start using immediately (note: some of the services are still in preview). Below are more details on them: Mobile Services Windows Azure Mobile Services now supports the ability to easily schedule background jobs (aka CRON jobs) that execute on a pre-set timer interval, and which can run independently of a device accessing the service (ensuring that you don’t block or slow-down any requests from your users). This job scheduler functionality allows you to perform a variety of useful scenarios without having to create or manage a separate VM. Some of the scenarios you can enable with it include: - Periodically purging old/duplicate data from your tables. - Periodically querying and aggregating data from external web service (tweets, RSS entries, location information) and cache it in your tables for subsequent use. - Periodically processing/resizing images submitted by users of your service. - Scheduling the sending of push notifications or SMS messages to your customers to ensure they arrive at the right time of the day. Using today’s release you can now easily register background tasks by navigating to the new Scheduler tab of your mobile service in the Windows Azure portal, and then by clicking the Create button: Doing this enables you to name a new job, and select how often you want to run it (note: you can also change the schedule later on): Once a job is created, you can drill into it and select the Script tab to author the server script you want to be executed on a recurring interval. As an example, the below script fetches Twitter updates about “red polos” and sends out a push notification from the mobile service: Once you’ve entered the script, you can save and then click the Run Once button to execute a trial run. The Run Once functionality lets you easily test your job script before you enable the job for recurring execution. To enable recurring execution of the job, click either the Enable button within the script-view or go back to the Scheduler tab, select the job, and click the Enable button to activate it. This new job scheduler capability makes it incredibly easy to integrate background work within your Mobile Service (without having to create or manage a separate VM to run it in). It can be used by all Mobile Service (even the free-tier level). The free-tier of Mobile Services includes support that allows you to run one background job every hour. If you upgrade your mobile service to have a reserved instance you can run up to 10 jobs every 15 minutes. Check-out the Windows Azure Mobile Services documentation for an even more in-depth job scheduler tutorial. Mobile Service Region Support in Europe Before today, our preview of Windows Azure Mobile Services was only supported in the US East and US West regions of Windows Azure. With this week’s update you can now also create Mobile Services in the North Europe region as well. As with every Windows Azure service, over time we will extend Mobile Services to all Windows Azure regions world-wide: Mobile Service Command Line Support Earlier this year we released a cross-platform Windows Azure Command Line Tool (a.k.a ‘azure’) that allows you to manage Windows Azure Web Sites, VMs and other services from the command line on Windows, Mac, and Linux. You can learn more about it here. Today we have released an update for this tool that adds support for Windows Azure Mobile Services as well. 
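A quick aside before the CLI walkthrough: the Twitter/push-notification job script mentioned in the scheduler section above only appears as a screenshot, so here is a rough sketch of what a scheduled job script can look like. It is patterned after the documented scheduler samples; the job name, the 'channels' table, its 'uri' column, and the toast text are all illustrative, and the exact push API available depends on how your service is configured:

```javascript
// A scheduled job named "checkFeeds" is implemented as a function with the
// same name. 'console', 'push', and 'tables' are objects provided by the
// Mobile Services server-side runtime.
function checkFeeds() {
    var channels = tables.getTable('channels');
    channels.read({
        success: function (results) {
            results.forEach(function (channel) {
                push.wns.sendToastText04(channel.uri, {
                    text1: 'New "red polos" tweets are waiting for you!'
                }, {
                    success: function () { console.log('Notified ' + channel.uri); },
                    error: function (err) { console.error('Push failed: ' + JSON.stringify(err)); }
                });
            });
        }
    });
}
```

With that, back to the command-line tool.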
To get started, install the ‘azure’ tool for Windows or Mac. If you haven’t used the tool before, you need to perform a one-time step and download/import the Windows Azure management credentials for your account. In a command prompt run the following command to download Windows Azure publish settings: > azure account download then import the .publishsettings file you just downloaded: > azure account import "C:\temp\my-credentials.publishsettings" Once you do this you’ll be able to access your Windows Azure subscription and perform operations against it entirely from the command-line. For example, using today’s update we can now create a new Window Azure Mobile Service entirely from the command-line (no portal interaction required at all): > azure mobile create scottgucli And with that simple command we now have a newly created Windows Azure Mobile Service! Creating new tables with the mobile service can likewise now be done entirely from the command-line: > azure mobile table create scottgucli products Many users have asked for the ability to directly upload scripts from their file system (without having to go through the portal) and the new CLI support makes this really easy. For example, to upload an “insert” script for the products table we just created above you can use the following command: > azure mobile script upload scottgucli table/products.insert -f c:\code\products.js Now this script will run every time a record is inserted into the products table. Web Sites With today’s release we have increased the scaling capabilities of our Windows Azure Web Sites service. Previously a web-site could only be scaled up to run across 3 shared instances or 3 reserved VMs. With today’s release we now support the ability for you to scale up a web-site to run across 6 shared instances (beyond which it is cheaper to switch to a reserved instance): With today’s release we also now support the ability for you to scale up a web-site to run across 10 reserved VM instances (where you are guaranteed to be the only customer using those VMs). You can use either Small, Medium, or Large sized VMs when doing this: This provides you the ability to dramatically scale up (or scale down) your web-site resources in only seconds. New Custom Create Workflow for Websites One of the other nice improvements with today’s release is a new Custom Create workflow for setting up web-sites that allows you to configure source control settings as part of the site creation (instead of having to do it after the fact). Choose the NEW->Website->Custom Create command to try this out: Selecting the “Publish from Source Control” checkbox above will display a new second-step with the custom-create wizard that allows you to configure either TFS based publishing: Or enable Git based publishing, and allow you to push either from a local repository or associate with a Git-based hosting provider: This makes it super easy to setup a new site with continuous delivery already enabled in only seconds. SQL Data Sync With today’s release you can also now leverage SQL Data Sync Services from within the new Windows Azure Management Portal. SQL Data Sync lets you synchronize data between multiple SQL databases. These SQL databases can span across your on-premises environment and the cloud, or across multiple cloud hosted databases (for example: within multiple Windows Azure regions around the world). It is a very powerful capability that can enable a variety of interesting scenarios. 
To use SQL Data Sync, go to the SQL Databases section of the new Windows Azure Management Portal and click on the new ADD SYNC command in the bottom tray of it: CREATING A SYNC AGENT If you plan on synchronizing with SQL Server databases that reside in your on-premises environment (or another cloud hosting provider besides Windows Azure) then you will first need to download, install and configure an agent with your SQL Server. Click the NEW SYNC AGENT command above to then associate this agent with Windows Azure: After you create an agent endpoint (or any Data Sync resource) you will see a new SYNC preview tab that appears when the “SQL Databases” extension is selected in the left navigation pane of the portal. The agent you just created will appear in the list within that tab. Click MANAGE KEY to generate and copy the key you’ll need to use when configuring the on-premises SQL Server Agent to connect to Windows Azure. CREATING A SYNC GROUP Once you have your databases ready, you can create a new Sync Group. A sync group defines references to databases that will be sync’d as well as the configuration that defines how and when the syncing will occur. Select the New Sync Group command in the portal and use the wizard to complete the following steps: 1. Give it a name. 2. Choose the hub database, and provide connection credentials. The hub database is the central database within the group and must be a SQL Database in Windows Azure: 3. Then add a reference database, and provide connection credentials. Think of reference databases as “spoke” or “member” databases that connect with the hub database. They can be any combination of on-premise SQL Servers or other Windows Azure SQL Databases. They can sync bi-directionally with the hub, to the hub, or from the hub: Once you’ve completed the above steps, your sync group is setup. DEFINING SYNC RULES You can now drill into the sync group to define your sync rules, manage the reference databases, add new reference databases, configure sync settings, and view historical sync logs. Note that you must define your sync rules via the SYNC RULES tab before syncing will begin. These rules determine the tables, columns, and even the rows that will be involved in the sync. You will want to choose the rules based on the needs of the application. CONFIGURING SYNC Once the sync rules have been defined, the sync group can be synchronized, either on demand using the SYNC button or on regular scheduled interval. Use the CONFIGURE tab to setup a sync schedule to have the synchronization happen automatically. Once setup, your databases will automatically synchronize. This can work either between your on-premises environment and the cloud, or between multiple cloud databases (for example: within multiple Windows Azure regions around the world). It is a very powerful capability that can enable a variety of compelling scenarios. Active Directory Access Control Management (ACS) Support With today’s release, you can also now create and manage Windows Azure Active Directory Access Control namespaces from the new Windows Azure Management Portal. Windows Azure AD Access Control is a service that lets you authenticate and authorize users for your web applications and services, while allowing authentication and authorization features to be factored out of your code. ACS works with standards-based identity providers, including enterprise directories such as Active Directory, and web identities such as Windows Live ID, Google, Yahoo!, and Facebook. 
For more information, see Windows Azure AD Access Control. Creating an ACS Namespace To create an Access Control namespace, click NEW in the Windows Azure Management Portal, and select APP SERVICES -> ACCESS CONTROL ->QUICK CREATE. In the namespace field, type the name of the Access Control namespace that you want to create: Managing a Namespace To manage an Access Control namespace, click on the new Active Directory node in the navigation bar on the left-hand side of the portal: Then, select the namespace you want to manage, and click the MANAGE button in the command bar at the bottom of the screen. This will open the ACS Management Portal. You can additionally move your ACS namespaces from one subscription to another via the CHANGE SUBSCRIPTION command in the bottom of the page. For more information about other recent enhancements that make Windows Azure AD more useful for developers, read Alex Simons’ blog post here, and Bill Hilf’s post here. Even more Active Directory functionality will be integrated within the Windows Azure Portal next year. Media Service Enhancements Today’s release contains a number of great enhancements to Windows Azure Media Services. This release includes investments to: - Job and Task management - Adding content from a Windows Azure storage account - Scaling your media services account to increase encoding concurrency Job and Task Management You can now track the progress and history of your media encoding jobs from within the Windows Azure Portal. The new JOBS tab within the Media Services extension lets you define custom queries for jobs by status, timeframe, or Job Id: The results of your queries will appear below the query editor. You can expand jobs to see information about the tasks that make up the job giving you deeper insight into each individual job and determine where things are progressing as planned or might require attention. You can also select a job within the list to either cancel the job or view details including errors related to the tasks. Adding Content from Windows Azure Storage Prior to this release, the only way to get content into your media services account was to write code using the Azure Media SDK, or upload a file within the portal from your local computer. Today we are introducing a new feature within the Windows Azure Portal that lets you choose existing media files from Windows Azure Blob Storage, thereby reducing your upload latency. From the CONTENT tab within the Media Services extension, you can select the UPLOAD command. Notice the file browser now offers a FROM STORAGE option in addition to the previous “FROM LOCAL” support we already shipped. Note that the 200 MB limit mentioned in the header only applies to the FROM LOCAL option. Files sizes in the tens of gigabytes are supported via the FROM STORAGE option: The FROM STORAGE option will open the storage browser; you will be able to select a media file from blob storage from here. Reserved Capacity for Media Service Jobs With today’s release we have also added a new capability that allows you to scale up the number of encoding tasks you are processing. By default, we only support having one active encoding task at a time. Using the new SCALE page, you can reserve encoding units that let you encode multiple tasks concurrently and have even more processing power reserved for your media workflows: Virtual Networking Simplifications Based on customer feedback, we have significantly simplified the virtual network creation workflow within the Windows Azure Portal. 
In an earlier release, we had introduced a “Quick Create” workflow that simplified the most common virtual networking scenarios. In this release, we have further simplified the “Custom Create” experience to make creating more advanced scenarios easier as well. Here’s a quick tour of the updates. In the Portal, Start with NEW -> NETWORKS -> CUSTOM CREATE This opens up the Create Virtual Network wizard, where you can fill in the name of the VNET and Affinity group, just as you do today. When you do this, you then be presented with a new experience – an updated page where we abstract away the complexities of CIDR: The experience is pretty simple. Need to add another address space? Click the add address space button. Need to add a subnet? Click the add subnet button. The best part is, we do the calculations for you and automatically provide you with the starting IP address. With a few clicks, soon it can be fully fleshed out and ready to use: Subscription Filtering Support within the Windows Azure Portal Many Windows Azure users have multiple Azure subscriptions that are used for different scenarios. Some users have one subscription for each department in their company; others have one for each environment (development, testing, and production). Some users even have one for each external customer that they work with. Regardless of which category you fall into, you might find that managing all of the resources across all subscriptions can be challenging, difficult to navigate, and can result in a sluggish experience. Starting today a new subscription filter UI now appears next to your user name in the top right-hand side of the Windows Azure portal to help filter your views by one or more subscriptions. Selecting it will display a drop-down list that enables you to quickly filter which subscriptions you want to see in the portal: By default, all of your subscriptions are selected and loaded into the portal. Using the dropdown, you can now pick and choose exactly which subscriptions you want to manage. This will provide a few noticeable benefits: - Resources associated with hidden subscriptions are filtered out of all the experiences across the portal. - Your filter selection roams with you across sessions and devices. If you set a filter on your laptop it will take affect when you return to your desktop or mobile device. - Users with many subscriptions will notice a significant performance improvement when loading only a small subset of their subscriptions since data for subscriptions that are filtered out is never loaded. Note: If you only have one subscription, this change doesn’t affect you one bit. Your experience remains unchanged. In fact, you’ll only see the subscription filter UI show-up if you have more than subscription. Windows Azure Store Now Available in More Countries This week we have also expanded the number of countries that the Windows Azure Store is available within (previously it was only available in the United States). The Windows Azure Store enables you to easily subscribe to services provided by 3rd party partners of ours – and have them automatically added to your Windows Azure bill. It is an incredibly cool capability – and one I’ll be blogging a lot more about shortly. In the mean-time, try it out by selecting the New->Store command within the portal and sign-up for one of the services in it today. Summary The above features are now available to start using immediately (note: some of the services are still in preview). 
If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.
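One last detail worth spelling out: the products.js file uploaded in the CLI walkthrough above is an ordinary table script. A minimal insert script might look like the following sketch (the validation rule and the extra column are just examples):

```javascript
// table/products.insert.js — runs on the service every time a record is
// inserted into the 'products' table.
function insert(item, user, request) {
    // Example server-side validation: reject items without a name.
    if (!item.name || item.name.length === 0) {
        request.respond(statusCodes.BAD_REQUEST, 'A product needs a name.');
        return;
    }
    item.createdAt = new Date();
    request.execute(); // continue with the normal insert
}
```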
https://weblogs.asp.net/scottgu/great-updates-to-windows-azure-mobile-services-web-sites-sql-data-sync-acs-media-more
Algorithm::LBFGS - Perl extension for L-BFGS use Algorithm::LBFGS; # create an L-BFGS optimizer my $o = Algorithm::LBFGS->new; # f(x) = (x1 - 1)^2 + (x2 + 2)^2 # grad f(x) = (2 * (x1 - 1), 2 * (x2 + 2)); my $eval_cb = sub { my $x = shift; my $f = ($x->[0] - 1) * ($x->[0] - 1) + ($x->[1] + 2) * ($x->[1] + 2); my $g = [ 2 * ($x->[0] - 1), 2 * ($x->[1] + 2) ]; return ($f, $g); }; my $x0 = [0.0, 0.0]; # initial point my $x = $o->fmin($eval_cb, $x0); # $x is supposed to be [ 1, -2 ]; L-BFGS (Limited-memory Broyden-Fletcher-Goldfarb-Shanno) is a quasi-Newton method for unconstrained optimization. This method is especially efficient on problems involving a large number of variables. Generally, it solves a problem described as following: min f(x), x = (x1, x2, ..., xn) Jorge Nocedal wrote a Fortran 77 version of this algorithm. And, Naoaki Okazaki rewrote it in pure C (liblbfgs). This module is a Perl port of Naoaki Okazaki's C version. new creates a L-BFGS optimizer with given parameters. my $o1 = new Algorithm::LBFGS(m => 5); my $o2 = new Algorithm::LBFGS(m => 3, eps => 1e-6); my $o3 = new Algorithm::LBFGS; If no parameter is specified explicitly, their default values are used. The parameter can be changed after the creation of the optimizer by "set_param". Also, they can be queryed by "get_param". Please refer to the "List of Parameters" for details about parameters. Query the value of a parameter. my $o = Algorithm::LBFGS->new; print $o->get_param('epsilon'); # 1e-5 Change the values of one or several parameters. my $o = Algorithm::LBFGS->new; $o->set_param(epsilon => 1e-6, m => 7); The prototype of "fmin" is like x = fmin(evaluation_cb, x0, progress_cb, user_data) As the name says, it finds a vector x which minimize the function f(x). "evaluation_cb" is a ref to the evaluation callback subroutine, "x0" is the initial point of the optimization algorithm, "progress_cb" (optional) is a ref to the progress callback subroutine, and "user_data" (optional) is a piece of extra data that client program want to pass to both "evaluation_cb" and "progress_cb". Client program can use "get_status" to find if any problem occured during the optimization after their calling "fmin". When the status is "LBFGS_OK", the returning value x (array ref) contains the optimized variables, otherwise, there may be some problems occured and the value in the returning x is undefined. The ref to the evaluation callback subroutine. The evaluation callback subroutine is supposed to calculate the function value and gradient vector at a specified point x. It is called automatically by "fmin" when an evaluation is needed. The client program need to make sure their evaluation callback subroutine has a prototype like (f, g) = evaluation_cb(x, step, user_data) x (array ref) is the current values of variables, step is the current step of the line search routine, "user_data" is the extra user data specified when calling "fmin". The evaluation callback subroutine is supposed to return both the function value f and the gradient vector g (array ref) at current x. The initial point of the optimization algorithm. The final result may depend on your choice of x0. NOTE: The content of x0 will be modified after calling "fmin". When the algorithm terminates successfully, the content of x0 will be replaced by the optimized variables, otherwise, the content of x0 is undefined. The ref to the progress callback subroutine. The progress callback subroutine is called by "fmin" at the end of each iteration, with information of current iteration. 
It is very useful for a client program to monitor the optimization progress. The client program need to make sure their progress callback subroutine has a prototype like s = progress_cb(x, g, fx, xnorm, gnorm, step, k, ls, user_data) x (array ref) is the current values of variables. g (array ref) is the current gradient vector. fx is the current function value. xnorm and gnorm is the L2 norm of x and g. step is the line-search step used for this iteration. k is the iteration count. ls is the number of evaluations in this iteration. "user_data" is the extra user data specified when calling "fmin". The progress callback subroutine is supposed to return an indicating value s for "fmin" to decide whether the optimization should continue or stop. fmin continues to the next iteration when s=0, otherwise, it terminates with status code "LBFGSERR_CANCELED". The client program can also pass string values to "progress_cb", which means it want to use a predefined progress callback subroutine. There are two predefined progress callback subroutines, 'verbose' and 'logging'. 'verbose' just prints out all information of each iteration, while 'logging' logs the same information in an array ref provided by "user_data". ... # print out the iterations fmin($eval_cb, $x0, 'verbose'); # log iterations information in the array ref $log my $log = []; fmin($eval_cb, $x0, 'logging', $log); use Data::Dumper; print Dumper $log; The extra user data. It will be sent to both "evaluation_cb" and "progress_cb". Get the status of previous call of "fmin". ... $o->fmin(...); # check the status if ($o->get_status eq 'LBFGS_OK') { ... } # print the status out print $o->get_status; The status code is a string, which could be one of those in the "List of Status Codes". This is a shortcut of saying "get_status" eq "LBFGS_OK". ... if ($o->fmin(...), $o->status_ok) { ... } The number of corrections to approximate the inverse hessian matrix. The L-BFGS algorithm. Epsilon for convergence test. This parameter determines the accuracy with which the solution is to be found. A minimization terminates when ||grad f(x)|| < epsilon * max(1, ||x||) where ||.|| denotes the Euclidean (L2) norm. The default value is 1e-5. The maximum number of iterations. The L-BFGS algorithm terminates an optimization process with "LBFGSERR_MAXIMUMITERATION" status code when the iteration count exceedes this parameter. Setting this parameter to zero continues an optimization process until a convergence or error. The default value is 0. The maximum number of trials for the line search. This parameter controls the number of function and gradients evaluations per iteration for the line search routine. The default value is 20.). The maximum step of the line search. The default value is 1e+20. This value need not be modified unless the exponents are too large for the machine being used, or unless the problem is extremely badly scaled (in which case the exponents should be increased). A parameter to control the accuracy of the line search routine. The default value is 1e-4. This parameter should be greater than zero and smaller than 0.5.. Coeefficient for the L1 norm of variables. This parameter should be set to zero for standard minimization problems. Setting this parameter to a positive value minimizes the objective function f(x) combined with the L1 norm |x| of the variables, f(x) + c|x|. This parameter is the coeefficient for the |x|, i.e., c. 
As the L1 norm |x| is not differentiable at zero, the module modifies the function and gradient evaluations from a client program suitably; a client program thus only has to return the function value f(x) and gradients grad f(x) as usual. The default value is zero. No error occurred. Unknown error. Logic error. Insufficient memory. The minimization process has been canceled. Invalid number of variables specified. Invalid number of variables (for SSE) specified. Invalid parameter "min_step" specified. Invalid parameter "max_step" specified. Invalid parameter "ftol" specified. Invalid parameter "gtol" specified. Invalid parameter "xtol" specified. Invalid parameter "max_linesearch" specified. Invalid parameter "orthantwise_c" specified. The line-search step went out of the interval of uncertainty. A logic error occurred; alternatively, the interval of uncertainty became too small. A rounding error occurred; alternatively, no line-search step satisfies the sufficient decrease and curvature conditions. The line-search step became smaller than "min_step". The line-search step became larger than "max_step". The line-search routine reaches the maximum number of evaluations. The algorithm routine reaches the maximum number of iterations. Relative width of the interval of uncertainty is at most "xtol". A logic error (negative line-search step) occurred. The current search direction increases the objective function value. Laye Suen, <[email protected]> This library is distributed under the term of the MIT license.
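To tie the callback and parameter descriptions together, here is a small end-to-end sketch. The objective function and parameter values are arbitrary, and only parameter names shown in this document are used:

```perl
use strict;
use warnings;
use Algorithm::LBFGS;

my $o = Algorithm::LBFGS->new(m => 7);
$o->set_param(epsilon => 1e-6);

# f(x) = sum_i (x_i - i)^2,  grad f(x)_i = 2 * (x_i - i)
my $eval_cb = sub {
    my ($x, $step, $user_data) = @_;
    my $f = 0;
    my @g;
    for my $i (0 .. $#$x) {
        my $d = $x->[$i] - $i;
        $f += $d * $d;
        push @g, 2 * $d;
    }
    return ($f, \@g);
};

# A custom progress callback: report each iteration and return 0 to continue.
my $progress_cb = sub {
    my ($x, $g, $fx, $xnorm, $gnorm, $step, $k, $ls, $user_data) = @_;
    print "iteration $k: f(x) = $fx\n";
    return 0;
};

my $x = $o->fmin($eval_cb, [0.0, 0.0, 0.0, 0.0], $progress_cb);
print "status: ", $o->get_status, "\n";
print "x = [@$x]\n" if $o->status_ok;
```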
http://search.cpan.org/~laye/Algorithm-LBFGS-0.16/lib/Algorithm/LBFGS.pm
While trying to maintain the mac wargames mailing list, I found out that if the mailing list grew above a certain size (about 50 people), PSI choked on the note and didn't delete it after sending it. Therefore, I got my one message, and everyone else got their 100 or so until I called PSI and had them kill the job.

Anyone have any ideas on how to get around this problem? Right now I'm using Majordomo.

---
Michael Rutman             | [email protected]
Cubist                     | makes me a NeXT programmer
Software Ventures          | maker of MicroPhone Pro
#include <std.disclaimer>  | really offensive political statement
http://www.greatcircle.com/list-managers/mhonarc/list-managers.199212/msg00037.html
Your First Plugin (Cross-Platform) This guide walks you through your first plugin that targets both Rhino for Windows and Rhino for Mac. It is presumed you already have all the necessary tools installed and are ready to go. If you are not there yet, see both Installing Tools (Windows) and Installing Tools (Mac). It is also helpful to have read and understood Your First Plugin (Windows) and Your First Plugin (Mac). Overview There are many ways to architect a cross-platform solution, depending on your plugin, your projects, and your preferences. In this guide we will employ a method called “cloned project files.” This method uses two very similar .csproj files - each with platform-specific dependencies - that share the same source files. The main goal is to illustrate a straightforward way of handling platform-specific needs while sharing the maximum amount of code. We will build on what we learned in the HelloRhinoCommon sample projects seen in Your First Plugin (Windows) and Your First Plugin (Mac)… HelloRhino.CrossPlatform Let’s begin by creating a new solution to contain both platforms’ projects. For the sake of illustration, we’ll begin on macOS in Visual Studio for Mac, but the reverse - on Windows in Visual Studio for Windows - should work. It is useful to work in a folder that is visible to both macOS and Windows and/or use a version control system like git that can be used on both platforms. File New - If you have not done so already, launch Visual Studio for Mac. - Navigate to File > New > Solution… - A New Project wizard should appear. In the left column, find the Other > Miscellaneous section. Under Generic, select the Blank Solution template… - Click the Next button. - You will now Configure your new solution. For the purposes of this guide, we will name our solution HelloRhino.CrossPlatform. Fill in the Solution Name field. Browse and select a location for this plugin that is visible to both macOS and Windows. - Click the Create button. Note: It is recommended that you create a .git repository for this demo. - A new blank solution called HelloRhino.CrossPlatform should open. Add Project Mac - As in the Your First Plugin (Mac) guide, let’s add a template project to the solution. Right-click the HelloRhino.CrossPlatform solution entry in the Solution panel. Navigate to Add > Add New Project…. - A New Project wizard should appear. In the left column, find the Other > Miscellaneous section. Under General, select the RhinoCommon Plug-In template… - Click the Next button. - You will now Configure your new project. Fill in the Project Name field…call it HelloRhino.Common. Browse and select the HelloRhino.CrossPlatform folder. Check the Create a project within the solution directory checkbox; we want the HelloRhino.Common.csproj to be created in the HelloRhino.Common folder. - In the Solution panel, right-click the HelloRhino.Common project and select Rename. Rename the project HelloRhino.Common.Mac. - Rename HelloRhino.CommonCommand.cs to HelloRhinoCommonCommand.cs. - Rename HelloRhino.CommonPlugin.cs to HelloRhinoCommonPlugin.cs. - Open HelloRhinoCommonPlugin.cs and remove the period ( .) 
between the HelloRhinoand the Common: public class HelloRhinoCommonPlugin : Rhino.PlugIns.PlugIn { ///<summary>Gets the only instance of the HelloRhinoCommonPlugin plugin.</summary> public static HelloRhinoCommonPlugin Instance { get; private set; } public HelloRhinoCommonPlugin() { Instance = this; } } Clone Project Windows - In Visual Studio for Windows, open the HelloRhino.CrossPlatform solution you created above. - You may get a warning about trusting this project. Click OK. - Expand the References section of the HelloRhino.Common.Mac project. Notice that Visual Studio for Windows cannot resolve the references to RhinoCommon, Rhino.UI and Eto. This is to be expected, as this is not the Windows plugin project file. - In the Solution Explorer, right-click the HelloRhino.Common.Mac project and select Unload Project from the drop-down menu. - In Windows File Explorer, navigate to the HelloRhino.CrossPlatform/HelloRhino.Common folder. We are going to “clone” (copy) the HelloRhino.Common.Mac.csproj as our template for the Windows .csproj. - Copy the HelloRhino.Common.Mac.csproj file and rename it HelloRhino.Common.Windows.csproj: - In Visual Studio for Windows, right-click on the HelloRhino.CrossPlatform solution in the Solution Explorer and navigate to Add > Existing Project… - Navigate to the HelloRhino.Common.Windows.csproj you just copied in Step 6 above. - In the Solution Explorer, expand the References section of HelloRhino.Common.Windows project. Because we copied (“cloned”) the .csproj file, these Mac-specific references are still present. Let’s delete those… - Shift Select the three references Eto**, *Rhino.UI, and RhinoCommon in the list. Right-click and Remove those references. - Right-click the References to Add new references. - Browse to RhinoCommon.dll. (Let’s presume we’re targeting Rhino for Windows 64-bit). The location of RhinoCommon.dll is C:\Program Files\Rhinoceros 5 (64-bit)\System\RhinoCommon.dll. Click OK. - In the Solution Explorer, select the RhinoCommon reference you just added to HelloRhino.Common.Windows. Open the Properties panel if it is not already visible (View > Properties Window). Make sure Copy Local is False: - In the Solution Explorer, right-click the HelloRhino.Common.Windows project and select Properties from the menu. - In the Build Events section, in the Post-build event command line: enter Copy "$(TargetPath)" "$(TargetDir)$(ProjectName).rhp" Erase "$(TargetPath)"… - In the Debug section, in the Start external program, browse to: C:\Program Files\Rhinoceros 5 (64-bit)\System\Rhino.exe… - Build and Start Debugging. You should get two build errors concerning Eto. Let’s take this opportunity to do create some… Platform Defines Though we are sharing cross-platform code, there will be inevitably be situations where you want to do one thing on Windows and another on Mac. Within shared code, the way to manage this is with platform-specific defines in your platform-targeted projects. Let’s define a symbol to get around the reference errors we just encountered. Windows - In the Solution Explorer, right-click the HelloRhino.Common.Windows project and select Properties from the menu. - In the Build section, switch the Configuration: to All Configurations and, in Conditional compilation symbols, create a define called ON_RUNTIME_WIN… - Open HelloRhinoCommonCommand.cs. In the usingsection at the top, create a conditional define around the Eto references like this: #if ON_RUNTIME_APPLE using Eto.Drawing; using Eto.Forms; #endif - Build and Start Debugging. 
Your build errors should disappear and debugging should begin. You can use the PluginManager to install your plugin and the HelloRhino command should work. When you are done testing out the command, stop the debugging session, close Visual Studio for Windows and switch back to… Mac - In Visual Studio for Mac, open the HelloRhino.CrossPlatform solution (if it is not already open). - Notice that HelloRhino.Common.Windows is now present in your solution (as expected). In the Solution panel, right-click the HelloRhino.Common.Windows project and select Unload… - In the Solution panel, double-click the HelloRhino.Common.Mac project to bring up its Project Options (Properties). - Navigate to the Build > Compiler section in the Project Options. Notice that the configuration is set to Debug. In the Define Symbols: add a ON_RUNTIME_APPLEafter DEBUG. You must separate these with a semi-colon. It should read: DEBUG;ON_RUNTIME_APPLE. Switch the Configuration to Release. Add the ON_RUNTIME_APPLEagain (Visual Studio for Mac lacks an “All Configurations” like Visual Studio for Windows)… - Click OK to close the Project Options dialog. - Build and Run. The HelloRhino command from the plugin should autocomplete and run. Common Code You have just created two platform-specific plugins using the same, shared code. In review, let’s take a look at the Solution architecture…simply open the HelloRhino.CrossPlatform folder using either Finder or Windows File Explorer… In summary… - The HelloRhino.CrossPlatform solution (.sln) contains the: - HelloRhino.Common folder, which contains the common code for the: - HelloRhino.Common.Mac project (.csproj) and the - HelloRhino.Common.Windows project (.csproj) The cloned projects have platform-specific defines and platform-specific references, but otherwise reference the same shared (common) code. Notice that the namespace of your common code is: namespace HelloRhino.Common This common (sometimes called “core”) namespace can be thought of as a shared library. This is where the “business logic” of your plugin should be stored. Even though it was possible - in this simple example - to share all of the code, in some real-life plugins, it may be necessary to separate out platform-specific projects that share the same Common core. For example, you might want to structure a project this way: - The YourPlugin.CrossPlatform solution (.sln) contains the: - YourPlugin.Common folder, which contains the common code for the: - YourPlugin.Common.Mac project (.csproj) and the - YourPlugin.Common.Windows project (.csproj), each of which reference: - YourCommonBusinessLogic.cs files in the YourPlugin.Common namespace. - YourPlugin.Mac folder, which contains the: - YourPlugin.Mac project (.csproj), which references YourPlugin.Common.Mac. - YourMacSpecific.cs files which do Mac-specific stuff. - YourPlugin.Windows folder, which contains the: - YourPlugin.Windows project (.csproj), which references *YourPlugin.Common.Windows. - YourWindowsSpecific.cs files which do Windows-specific stuff. This is merely one example; the architecture of the project depends on the needs of the plugin. It is even possible to share much of the user interface code using Eto…but this is a subject for another guide. Congratulations! You have just built your first RhinoCommon plugin sharing common code between Rhino for Mac and Rhino for Windows.
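If you later need behavior that genuinely differs per platform inside the shared code, the defines you just created are the tool for it. Below is a sketch of a hypothetical command (not part of the template; the command name and message text are made up) that branches on them. The Eto message box stands in for any Mac-specific UI call, and is safe here because only the Mac project references Eto:

```csharp
using Rhino;
using Rhino.Commands;
#if ON_RUNTIME_APPLE
using Eto.Forms;
#endif

namespace HelloRhino.Common
{
  public class HelloPlatformCommand : Command
  {
    public override string EnglishName
    {
      get { return "HelloPlatform"; }
    }

    protected override Result RunCommand(RhinoDoc doc, RunMode mode)
    {
#if ON_RUNTIME_APPLE
      // Mac-only path: Eto is referenced by the Mac project.
      MessageBox.Show("Hello from Rhino for Mac");
#elif ON_RUNTIME_WIN
      // Windows-only path: no Eto reference needed.
      RhinoApp.WriteLine("Hello from Rhino for Windows");
#endif
      return Result.Success;
    }
  }
}
```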
https://developer.rhino3d.com/5/guides/rhinocommon/your-first-plugin-crossplatform/
New Vulnerability Affects All Browsers 945 Jimmy writes "Secunia is reported about a new vulnerability, which affects all browsers. It allows a malicious web site to "hi-jack" pop-up windows, which could have been opened by e.g. a your bank or an online shop. Here is a demonstration of the vulnerability" Sniff, our little browser's all grown up... (Score:2, Insightful) Thank goodness we've found our first vulnerability in Firefox. Now we can move from the myth that free software is impervious to exploits, and into the reality that vulnerabilities are acknowleged and patched faster in most free software projects. Gentlemen, synchronize your watches. Will the Firefox team have a fix out before Microsoft even admits it's a bug? Re:Sniff, our little browser's all grown up... (Score:5, Insightful) Uh, who was saying that? This sounds scary (Score:5, Funny) Re:This sounds scary (Score:3, Interesting) "Firefox prevented this site from openning 619 popup windows. Click here for options" Is this Windows only or something? Re:This sounds scary (Score:5, Funny) Lynx support (Score:4, Funny) Safari vulnerable if 'pop-up-blocking' is off (Score:4, Informative) Re:Sniff, our little browser's all grown up... (Score:5, Insightful) Re:Sniff, our little browser's all grown up... (Score:3, Insightful) Re:Sniff, our little browser's all grown up... (Score:5, Informative) BTW Javascript has nothing to do with Java except the name. Not the first Firefox vulnerability (Score:5, Informative) As far as I can tell the problem is fixed in the latest Opera beta so they might be able to get it into a proper release pretty soon too. Re:Sniff, our little browser's all grown up... (Score:4, Funny) Re:Sniff, our little browser's all grown up... (Score:4, Insightful) Following their instructions on a barely-patched winXP, neither Firefox 1.0 (with or without popup blocking) or IE were vulnerable. Other people below have reported that it worked for them though, so since the browser (firefox at least) is the same, and it did/didn't work between them, how could it be a bug in firefox? Very strange indeed. NEVERMIND. (Score:4, Informative) So it works in Firefox per the instructions below for me, but Does not work in IE Still??? Crazy!!" Re:Sniff, our little browser's all grown up... (Score:3, Insightful) First? There have been plenty of other FireFox vulnerabilities in the past, however they have all been fixed extremely quickly once discovered (i.e. within a day or 2). All software has security holes in it, get over it - the difference is that the Mozilla Foundation have a habit of fixing them as soon as they find out about them whereas Microsoft have a habit of waiting for many months before bothering to fix them even if they are being active I don't get it (Score:2, Informative) I refreshed the page, and tried the link that said 'Without Pop-up Blocker'. It opened up the Citibank website, but it did not hijack my Citibank popup window. Same thing happened to me under IE6 (except I did not get the dialog when I clicked on the 'With Pop-up Blocker' link). Maybe it works under certain circumstances, but I couldn't reproduce it. Re:I don't get it (Score:2, Informative) Re:I don't get it (Score:5, Informative) And the exploit worked just 'fine' on my firefox 1.0. Re:I don't get it (Score:5, Informative) Re:I don't get it (Score:5, Informative) I hope this helps the vast masses of smart You know you've found a good exploit... 
(Score:4, Funny) Re:I don't get it (Score:4, Informative) The exploit worked for me on Firefox 1.0 on Windows 98 SE with pop-up blocking turned off, but the exploit didn't work for me when pop-up blocking was turned on. I think I've solved it. (Score:5, Informative) Middle-click to open citibank page in new tab YOU WILL NOT BE VULNERABLE. Left click and allow citibank page to open in new window YOU WILL BE VULNERABLE. At least, that's the behaviour I see on this box. Re:I think I've solved it. (Score:3, Interesting) Re:I don't get it (Score:4, Informative) Re:I don't get it (Score:5, Informative) under about:config, I have dom.disable_window_open_feature.location set to true. So every window must show the location (and because of it, I immediately could see the webpage I was at was not citibank.com). IT DOES WORK! (Score:5, Informative) Ok, here we go for those just playing." 5) See that it does work. That is all. Not all browsers (Score:2) Anyone else have a build of firefox that wasn't really fooled? All your typos... (Score:4, Funny) And in other news, Slashdot is reported all about a new grammatical error in the headlines. Reporting anyone? Re:All your typos... (Score:4, Funny) Not quite hijacking (Score:3, Interesting) But if I used the link from Secunia [secunia.com] to access Citybank, the Popup is then hijacked. So it seems like you need to access (click on a link to) your trusted site via an untrusted site to get hijacked? Here's how it works (Score:5, Insightful) So the attacker doesn't need you to click on anything, they just need you to have their site open -- with the timer going -- in another window. Also, the attacker needs to know in advance what name the victim site's pop-up is referenced by. A dynamically generated name could possibly defeat this attack, though the attacker could always crawl the DOM for a handle to the pop-up. Re:Here's how it works (Score:3, Insightful) I doubt it. If any browser allows you to look at the DOM of a page from a different site, that is a far greater security hole than what they are demonstrating. Re:Here's how it works (Score:3, Insightful) Evil site A helpfully offers a link that opens Good site B. If a user clicks the link and opens Good site B, Evil site A waits for the user to open a predictably named popup from Good site B, then reaches down through the DOM (using code on Evil site A) and alters the URL of the popup, bouncing you to their Evil popup. Big whoop -- this is permitted by Javascript's security model, you know -- the parent window "owns" the child window, thus it can access it and do weird things. Re:A quick workaround for FF 1.0 (Score:3, Informative) I would like to disable JavaScript entirely, but unfortunately that breaks too many pages. no problem here... (Score:4, Informative) Re:no problem here... (Score:4, Interesting) We haven't heard from any Konqueror users yet (and the modem in my Linux box is broken so I can't check it myself). Is the immunity a khtml thing or was it Apple? Re:no problem here... (Score:5, Informative) Re:no problem here... (Score:4, Funny) Re:no problem here... (Score:5, Informative) Happy. (Score:2) Well, that's one alert I'm safe from. Whew. Demo don't work (Score:3, Funny) It's called "Slashdotted" (Score:3, Funny) Re:It's called "Slashdotted" (Score:3, Funny) wait for 10,304,345 hits in the next five minutes as people post "x" in vulnerable "!X" is clear . . server goes down Profit! 
Safari test (Score:5, Informative) When I turned off the pop-up blocking feature, then when I tried the test, I did see a pop-up from the Secunia site instead of the Citibank text. Now that's a problem. Clearly, this is just another reason to block pop-up windows. Re:Safari test (Score:4, Insightful) I can confirm this works when the "Block Pop-up Windows" in the Safari menu is disabled, but not when the Blocking option is enabled. Rather than just a "me too", I went through the demonstration in reverse order of the previous poster (and was careful to refresh and follow the appropriate links) so I don't think this behavior is due to caching issues. While I do hope there will be a fix for this soon, IMHO, the more appropos fix is that secure sites should not EVER rely on popups. Works for me (Score:3, Informative) Re:Works for me (Score:3, Funny) Security through server meltdown? not irider (Score:3, Informative) All browsers?!? (Score:4, Funny) Re:All browsers?!? (Score:5, Funny) I just don't believe it. Anything -- even an exploit -- working in all browsers would be unprecedented! Lynx appears to be unaffected. Nyeh (Score:4, Informative) Of course it's a bug (Score:5, Insightful) Site A should be able to create and interact with a window named "popup". Site B should be able to create and interact with a window named "popup". This should happen without either site interfering, blocking or overwriting the other. They should simply be invisible to each other, existing in completely seperate little worlds. Re:Of course it's a bug (Score:5, Insightful) Re:Of course it's a bug (Score:3, Informative) Traditionally, windows weren't private to sites, but this is just a variation of the "cross-frame scripting" bugs that have been patched over time. Re:Of course it's a bug (Score:3, Insightful) Traditionally, windows weren't private to sites, but this is just a variation of the "cross-frame scripting" bugs that have been patched over time. A stupifyingly dumb design decision in the first place. The above poster's namespace comment is dead on, and there is obviously no choice but to implement per-site namespace properly. This design bug, however, is the fault of _all_ of us, for not reviewing the des Re:Of course it's a bug (Score:3, Informative) I did find this: Referring to windows and frames [netscape.com] from the Netscape JavaScript handbook. It says nothing about window names being private. So, pin this one on Netscape, and the lack of any formal open standard for what happens in a browser outside of the document. Not so bad... (Score:3) Using Opera 7.54 (Score:3, Informative) the link for disabled popup blockers doesnt open a popup when i have my popup blocker enabled (actually its just Proxomitron with custom filters) When I disable proxomitrion, it does what it says (opens the Secunia site instead of the citibank site) And with proxomitron disabled, the first method (for people running popup blockers) still does the same as it did the first time. Re:Using Opera 7.54 (Score:3) jack pot (Score:4, Funny) Re:jack pot (Score:3, Funny) Wow, did you get an email from Yassir Arafat's widow too? I'm still waiting for my cash transfer. Once again, why needless use of Javascript is BAD! 
(Score:4, Insightful)
If web masters would stop NEEDLESSLY using Javascript to do things like open new windows, and would use it ONLY when there is no way using HTML to accomplish the same goal, then people would not need to have Javascript active all the time, and the impact of exploits like this would be greatly reduced. If, instead of using <a href="#" onclick="foo"> or <a href="javascript(foo)"> type constructs, web designers would use <a target="_blank" href="something.html" onclick="javascript(stuff)"> type constructs, then if the user HAS Javascript active, the web master can micromanage the newly created window. If not, then the user STILL gets a new window, just not one that the web master can remove all the chrome from. Seriously - when was the last time you heard of an exploit that used straight HTML? All of the recent exploits in ALL browsers, IE included, have been in either Javascript or ActiveX, not in the core HTML rendering. There is a REASON for that.

Re:Once again, why needless use of Javascript is B (Score:5, Insightful)
Example: Sites that pop up their "main" window from their "entry tunnel." Exactly what justification do you have for thinking I still need to view your entry tunnel? Example: (as mentioned,) sites that use Javascript to open windows. Granted, this practice came around before Opera/Mozilla introduced us to the wonders of tabbed browsing, but what's the point of pulling up a "diversionary" window and forcing the user to close it? Afraid they might not understand the concept of the "back" button? Example: using flash/java/shockwave/etc to perform functions that could be handled in HTML, especially now that we have DHTML. I have trouble with understanding the argument "we will be more successful if we deny access to some percentage of the population." etc etc etc. IMHO, this is a symptom of the problem where people assume "everyone else thinks / acts / behaves in the same way I do."

Re:Once again, why needless use of Javascript is B (Score:3, Informative)

Re:Once again, why needless use of Javascript is B (Score:5, Informative)
Yup. Check out Ian Hickson's "Sending XHTML as text/html Considered Harmful" [hixie.ch] for a quick primer on what most sites that do XHTML are doing wrong. Check out Evan Goer's list of "X-Philes" [goer.org] for a list of the very few sites which get it right, and his purge of sites from that list [goer.org] for an indication of how easy it is to go wrong even after you've initially gotten it right. As for HTML generally not producing good markup and being "too loose", I hate to break it to you but XHTML 1.0 and HTML 4.01 are element-for-element identical; the only difference between the two is that one is an SGML application and one is an XML application. And when you serve XHTML 1.0 as "text/html" (e.g., when you do XHTML the way ESPN and others do) you don't gain any of the strictness benefits of XML. And the only thing XHTML 1.1 does on top of that is deprecate a couple more things and add modularization and ruby support, so I'm really not sure where all the "good markup" would come from in a transition to XHTML. Plus there's no reason to believe that serving XHTML 1.1 as "text/html" is conformant, so if you use 1.1 you either break the spec or you shut out IE. Likewise, switching to an XHTML DOCTYPE and using XML syntax doesn't magically confer accessibility on a page; it's just as easy to write a horrid, bloated, table-based images-for-everything page in XHTML as it is in HTML 4.01.
I suspect that you're making a common mistake among people who've just discovered web standards: you're confusing XHTML with good markup and best practices (check out Molly Holzschlag on what standards are and aren't [molly.com]). Anyway, it's quite possible to write beautiful, clean, accessible, semantically rich HTML 4.01 with separation of content from presentation; after all, it's got the same set of tags and attributes as XHTML 1.0, so if you can do it in one you can do it in the other just as easily. And when you consider that serving valid, well-formed XHTML according to the spec can be a nightmare at times, it's no surprise that even "gurus" of the standards world (e.g., Mark [diveintomark.org] Pilgrim [diveintomark.org], Anne [annevankesteren.nl] van [annevankesteren.nl] Kesteren [annevankesteren.nl]) have gone back to or recommended sticking with HTML 4.01 unless you really need one of the features gained by an XML-based HTML. And lest you continue to think I'm some sort of skeptic or enemy of web standards, well, every site I've built in the past three years (basically, since I discovered there was such a thing as a "web standard") has been valid, accessible, and CSS-based. I just know from experience that valid markup and stylesheets are one part of the equation, and there are an awful lot of those "best practices" that aren't ever published in a spec from the W3C or anyone else.

Re:Once again, why needless use of Javascript is B (Score:4, Informative)
With scripting, you can make iFrames draggable, closeable and behave and look just like regular windows, but they are, in essence, windows within a window and are tied closely to the current browser. There are reasons to have popups like, for example, color or date pickers (with a calendar). It is actually much easier to build a draggable DIV than a draggable iFrame, but the draggable DIV doesn't show up on top of certain HTML elements and hence becomes useless (even with an infinitely high z-index). By the way, you can get draggable iFrames to work in both MSIE and Mozilla. I just bought my iMac for testing but I'm pretty sure I can get it to work in the Mac versions too, as they all have the necessary language and DHTML components. All I can say though is that JavaScript and DHTML are definitely vendor dependent, and I don't care if you are Mozilla or Apple or Microsoft, they ALL have quirks and bugs that go outside of the specifications. In many ways, my high speed photoshop-style image scripting program (for use on web servers) was easier to write in C# than trying to figure out how to make things work across every browser out there! Anyways, programmer alert. I wouldn't depend on popups working in the future if your app depends on it. Make sure to use iFrames or have a non-popup-dependent way of doing the same thing!

Re:Once again, why needless use of Javascript is B (Score:3, Insightful)
Yup. It further demonstrates why any financial institution that requires you to enable javascript in order to use their website should be deemed incompetent.

Re:Once again, why needless use of Javascript is B (Score:3, Informative)
OK, let's try something easier. I've got a table with many rows where each row contains two sets of radio buttons. When one of the radio buttons in the first set is selected, you shouldn't select an answer in the second set. Thus, I use Javascript to disable the second set of radio buttons when that particular option is chosen. Care to tell me how to do that using regular HTML?

Re:Once again, why needless use of Javascript is B (Score:3)
There you go. You've just shown your ignorance. For simple web pages I would agree, but this vulnerability is for, and demonstrated in, a web application. As other posters have pointed out, you cannot get some features of an application without using Javascript. So, until the world starts using something like Webstart and downloadable, secure thick clients via the web, the browser is all that we have. Perhaps th...

Re:Once again, why needless use of Javascript is B (Score:4, Insightful)
Just what I want... a user posting 300 times before realizing that, yes, they must fill out the form. Think about something like Yahoo mail. I can go into a new message and if I forget to put in a To:, it will still post to the server and come back and say that I'm a moron. With JS verification, I would know instantly. Obviously client-side verification shouldn't be used for passwords, but checking that a form is at least completely filled out is very helpful, both as a designer and a web user. Client side verification is practically instant and does not burden the server with incomplete requests. Of course, client side verification does not exempt you from having to perform server side verification.

Re:Once again, why needless use of Javascript is B (Score:3, Informative)
This excellent article on ALA [alistapart.com] should answer any pending questions on the issue. BTW, the target attribute of anchors was dropped between XHTML 1.1 Transitional and XHTML 1...

Re:Once again, why needless use of Javascript is B (Score:5, Informative)
1. 'target' is certainly part of standard html. Just because it isn't defined initially by the A tag doesn't mean the A tag can't use it. 2. From- ... PS. Hey mods, if you don't know about a subject, don't mark a post 'informative' just because there's a link in it.

Re:Once again, why needless use of Javascript is B (Score:3, Informative)
In strict, frames and target= are deprecated.

Re:Once again, why needless use of Javascript is B (Score:3, Informative)
The "target" attribute still exists in the Transitional and Frameset versions of HTML 4.01 and XHTML 1.0. XHTML 1.1 does not have a Transitional or a Frameset version; however, it is a modularization of XHTML which means that the same functionality can be easily re-introduced. For example, Jacques Distler has produced a page using the "target" attribute [utexas.edu] which is valid against an extended XHTML 1.1 DTD. This is one of the major selling points of XML-based markup and ha...

Re:Once again, why needless use of Javascript is B (Score:3, Insightful)
Some little JavaScript projects I have done:

Bugzilla #273699 (Score:3, Informative)

Mozilla/Firefox Workaround (Score:5, Informative)
1. Enter about:config in the Location Bar.
2. Enter dom.disable_window_open_feature.location in the filter field.
3. Right-click (Ctrl+click on Mac OS) the preference option and choose Toggle (the value should change to true).
This issue is already being worked on bug 273699 [mozilla.org] (copy link location, paste) filed a few hours ago. As a side note, being able to see the bug fixing progress unfold is one of the many reasons why I love open source. I am able to learn so much from just seeing the process take place from start to finish, how it is reported, test cases created, problems that arise, insights into other parts of the system, who the people involved are, reviews, patches, etc.

Re:Mozilla/Firefox Workaround (Score:5, Informative)
From the page: "Note that, although the attack site can inject its own content, it cannot change the URL appearing in the Location Bar. Firefox and Mozilla have the ability to deny access to the Location Bar so all pop-up windows always have it."

Re:Mozilla/Firefox Workaround (Score:5, Insightful)
In general, it's always going to be possible, if you are browsing sketchy and secure sites at the same time, that the sketchy site might pop up some deceptive window, and if you are confused and can't see the URL bar, you might think it came from the secure site, with or without this specific injection issue. Which is why this workaround ought to be default behavior anyway (I HATE sites that try to hide my location bar and navigation toolbar, those bastards). Anyway, the point is, yes the issue should be fixed, but if you applied the workaround, it makes the exploit essentially worthless to an adversary.

Results for Slackware 10, Konqueror, Mozilla (Score:3, Informative)
Slackware 10, Konqueror, and Mozilla 1.7.3. Results with Konqueror: the popup did NOT point back at Secunia, it pointed at Citibank. Perhaps this is because I have Konqueror configured to open new windows in tabs and have "smart" popup blocking enabled. Would someone try and confirm this? If it is the issue, then we can block the vulnerability in Konqueror, at least. In Mozilla, the popup trick worked. Bad Mozilla! FYI

Re:UPDATE: Slackware 10, Konqueror, Mozilla 1.7.3 (Score:3, Interesting)
In Javascript, if (and only if) your web page opens a new window, it "owns" that window. In other words, you have access to the whole DOM in that window. You can step through the document object, alter things, and so forth. This is how things are supposed to work; it's what enables us to open new windows and interact with the user. For example, ma...

Firefox 1.0 (Score:3, Interesting)
If I middleclick on the test page and *force* firefox to open the site in a new tab, the exploit fails. I don't know enough to know if this is a limitation in the exploit or in how they've written the exploit, but it's odd and interesting.

in my opinion there is a simple fix for this (Score:3, Interesting)
Well, why not make a new rule in javascript that would disallow any javascript code to access any popups that aren't a direct child of the current instance of the browser. Basically what I mean is to have each window in its own namespace and have the child window share said namespace. (I think one would have to not allow grandparents to access it either, though.) So basically if two separate windows open a window with target="name" then 2 windows are opened, one for each instance, and they have nothing to do with each other.

As of right now... (Score:4, Funny)
And this is a version of Firefox I installed approximately two weeks ago.

Vulnerability? For dyslexic octopii, maybe (Score:3, Interesting)
This strikes me as about as dangerous as the post-SP2 "Warning! If you copy and paste shit files from the net and click a few boxes, YOU COULD GET SPYWARE!". For the record, I just nuked and reinstalled XP-Sp2 + hotfixes a few days ago (for once, not because it was fucked up, but my new raid0 array), so I have cherry IE6 and unextensioned-FireFox 1. I tried several variations of the convoluted instructions, and could get no explicitly dangerous behavior. Mozilla didn't bat an eye, and IE once popped up a box saying "The script is trying to close this window, do you want to let it?" If I let it, then it opened the Citibank site in the window again. Oooh, scary. I'm sure there may be some actual, dangerous vulnerability here somewhere. But I've gotten better instructions from the Japanese ASUS site, translated through Google.

just say no to javascript (Score:3, Interesting)
For firefox or opera just turn it on when you absolutely need it and never forget to turn it off right away when you are done. For IE make use of the security zones to implement javascript whitelisting. That's what I do because with firefox and opera I often don't remember to turn it off again until I start getting annoying popups or worse. Seems like more than half of these vulnerabilities that keep popping up make use of javascript. That last one with the online banking passwords was pretty scary and made me very glad that I browse with javascript off.

backwards on Firefox 1.0? (Score:3, Insightful)

Mixed risk (Score:3, Informative)
But I ran the tests, and here are my results:
Mac OSX 10.3.6 Safari 1.2.4 (v125.12) - Not affected according to test.
FireFox 1.0 (G4 optimized build) - Affected according to test
Camino 0.8.2+ - Affected according to test
All browsers have pop-up blocking enabled, and some sort of ad filtering (Pith Helmet, Ad Block, etc). Your mileage WILL vary.

So... (Score:3, Funny)

Re:It doesn't affect Safari (Score:5, Informative)
After you have clicked on the link, you have to refresh the Secunia page, then it will work. It's kinda strange, but I guess it is a vulnerability. Kinda like walking back and forth through a bad neighborhood while counting your cash. NarratorDan

Re:It doesn't affect Safari (Score:3, Insightful)
If so, then it's not "jumping through hoops", which makes Safari as vulnerable as any other browser.

Re:Doesn't work for me (Score:5, Informative)
In Internet Explorer I pressed "With popup-blocker" (Google Toolbar) and up came Citibank, then I pressed the Fraudulent E-Mail button, and up came Citibank's popup window. Only when I closed the popup window did the "This was hijacked" window appear (as if triggered by the window.onclose function), but that does not strike me as a gigantic security hole. Of course the issue in itself is scary, but I'm confident the Mozilla team will have a patch out in no time. This should probably serve as a reminder to webmasters out there that if you want users to trust content you provide in popup windows, e.g. for credit card payments, you should provide the address bar, and if the credit card processing takes place on another server, explain to the customer before he clicks "pay by creditcard" why the window will load from another server.

Re:Doesn't work for me (Score:3, Informative)
I did this, and Firefox 1.0 (linux) was vulnerable. The site wasn't clear that the first site wasn't the vulnerability, but links from a genuine site can be made vulnerable. Of course, ...

Re:Doesn't work for me (Score:3, Informative)
I'm also confident that this will be fixed soon, but it's also not really a big issue for me because I do mostly tabbed browsing. It is very rarely that I open a new site in a separate window anymore.

Re:Doesn't work for me (Score:5, Insightful)
I disagree. I think they have their moments. Such as displaying incidental information without interrupting the flow of something you're already doing (say, a help link in a wizard-style sequence of pages). Like everything else, popups are a tool which can be used or misused. Unfortunately they're mostly misused.

Re:I call bullshit!! (Score:5, Informative)
1) Send out a phishing expedition, asking people to log into their BofA account to update their account information. Make it look real official, and include a link that goes to "". The new window takes them to the real site, encrypted and everything.
2) Customers log in and check their mailing address, or whatever.
3) Some percentage of them will leave their windows open for more than 10 minutes, at which point BofA sends their standard pop-up window warning about account inactivity and logout.
4) Hijack the pop-up window and do Something Nefarious, like initiate a funds transfer.
Now, this isn't a perfect example. But there are an untold number of different sites out there who use pop-ups for perfectly reasonable applications, and it would be trivial for some phisher to get people to go to those sites using his link. The best thing to do is, for those sites who use pop-ups to communicate with their visitors, use some nonstandard form for naming those windows. Use the person's username, a random string, a DES hash with the first two characters of the day of the week as the salt and the time the page is first loaded as the string, whatever (no, don't use "whatever", that's just a figure of speech).

Another clue for webmasters (Score:3, Insightful)
It's incredibly sad that pretty much every bank I've ever used doesn't think I might like to know that I'm really talking to their server when I use their web interface.

Re:Firefox 1.0 seems fine (Score:3, Interesting)

Re:Vulnerability? (Score:4, Insightful)
Has happened before. Users may still have to click something, but they could easily be tricked into doing that. Most users aren't constantly vigilant and observant. If the compromised banner ad opened another window that looked like Citibank's site whilst you were using Citibank's site, you could fall for it - especially since Citibank does use pop-ups.
http://it.slashdot.org/story/04/12/09/0053205/new-vulnerability-affects-all-browsers?sbsrc=thisday
CC-MAIN-2015-18
refinedweb
5,351
70.73
The first thing we should consider is: what is the max product if we break a number N into two factors? I use a function to express this product: f = x(N-x). When x = N/2, we get the maximum of this function. However, factors should be integers. Thus the maximum is (N/2)*(N/2) when N is even, or (N-1)/2 * (N+1)/2 when N is odd. When the maximum of f is larger than N, we should do the break.
(N/2)*(N/2) >= N, then N >= 4
(N-1)/2 * (N+1)/2 >= N, then N >= 5
These two expressions mean that factors should be less than 4, otherwise we can do the break and get a better product. The factors in the last result should be 1, 2 or 3. Obviously, 1 should be abandoned. Thus, the factors of the perfect product should be 2 or 3. The reason why we should use 3 as many times as possible is that, for 6, 3 * 3 > 2 * 2 * 2. Thus, the optimal product should contain no more than two 2s.
Below is my accepted, O(N) solution.
public class Solution {
public int integerBreak(int n) {
if(n==2) return 1;
if(n==3) return 2;
int product = 1;
while(n>4){
product*=3;
n-=3;
}
product*=n;
return product;
}
}

4 * 4 is not the maximum when N is 8. "Thus the maximum is (N/2)*(N/2) when N is even."

I think a simpler math explanation is to say, "Prove that for all integers n > 4, ( ( n-3 ) * 3 ) > n". This gives reason to the 3, and why we may want to consider special cases for preceding numbers. Maybe not simpler, but a stronger explanation in my mind.

The optimal product should contain no more than two 2s. I think a good way to explain why we need to use 3 as much as possible is: assume a breakdown has 3 twos; those 3 twos have a product of 8, which is less than the product of 2 threes (we can improve it that way). So we can prove the breakdown can have at most 2 twos. Thus, the conclusion is we should use 3 as much as possible.

Comment to DaHa Song: This is to say that if you want to break once, breaking in this way can give you the maximum. However, we need to do the break several times to get the optimal product. If you always do the break as (N/2)*(N/2), it is a greedy algorithm, and as we know, greedy is not always optimal.

"(N-1)/2 * (N+1)/2, then N >= 5" should be "(N-1)/2 * (N+1)/2 >= N, then N >= 5", isn't it? The resulting equation would be N^2 - 4N - 1 >= 0, which has roots N >= 4.23 and N <= -0.23; because N is an integer, N must be N >= 5.

Well, a very nice explanation. But I think there is a little flaw in your derivation. "When the maximum of f is larger than N, we should do the break." Here, we should use ">" instead of ">=". Thus, it's OK when N is odd, but the condition should be "N > 4" when N is even. Following that change, we get different effective factors, which are 2 and 4. We have to take factor 4 into consideration. The reason why your solution happens to be correct is the fact that 4 = 2 + 2 and 4 = 2 * 2. Based on my derivation, we should think of a reason why we use as many 3s as possible while not 4. Here it is. We can consider addition. First the cases of equality:
For 7, 3 * 4 = 4 * 3, also 4 + 4 + 4 = 3 + 3 + 3 + 3
For 5, 2 * 3 = 3 * 2, also 3 + 3 = 2 + 2 + 2
Then we consider the inequality:
For 8, 4 * 4 < 3 * 3 * 2
For 6, 2 * 4 < 3 * 3 and 2 * 2 * 2 < 3 * 3
For 9, 10, 11, ...
I know that's not strict. But heuristically, that's why we choose as many factors of 3 as possible rather than other combinations of 2s or 4s. Following my derivation, the loop looks like "while(n > 4)". But through @Chuqi's derivation, that doesn't make sense.

Awesome! The core ideas of this problem are:
- all factors should be 2 or 3 (N > 4)
- 3 * 3 > 2 * 2 * 2.
@wxl163530 I think you could take 7 as an example. If the condition is n>4, the product would be 3*4 = 12, but if it is n>=4, it would be 3*3*1 = 9. Because it has been proved before that (N/2)*(N/2)>=N when N>=4, based on this, if you want to break 4, it should be 2*2 instead of 3*1. Moreover, even if you don't break 4 any further, 4 is the same as the product of its factors 2*2. So you don't need to break it to 3*1. Just my thoughts.

@ainiyouyou I see~, thanks for replying.

@wxl163530 take n=7 for example:
3 x 3 x 1 = 9, which occurs for while(n>=4)
vs
3 x 2 x 2 = 12, which occurs for while(n>4)
Hope this helps if you have not figured it out yet ;)
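If it helps to see that boundary condition concretely, here is a small comparison harness built on the same greedy idea as the accepted solution; the helper name breakWith and the boolean switch are made up just for this illustration. It prints 12 for n = 7 with the n > 4 loop and 9 with the n >= 4 variant.

import java.util.Arrays;

public class IntegerBreakDemo {
    // Same greedy idea as the accepted solution, but the loop condition is
    // selectable so the two variants discussed above can be compared.
    static int breakWith(int n, boolean stopAtFour) {
        if (n == 2) return 1;
        if (n == 3) return 2;
        int product = 1;
        while (stopAtFour ? n > 4 : n >= 4) {
            product *= 3;   // peel off a factor of 3
            n -= 3;
        }
        return product * n; // whatever is left (2, 3 or 4) multiplies in
    }

    public static void main(String[] args) {
        for (int n : Arrays.asList(7, 8, 10)) {
            System.out.println("n=" + n
                + "  while(n>4): "  + breakWith(n, true)
                + "  while(n>=4): " + breakWith(n, false));
        }
    }
}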
https://discuss.leetcode.com/topic/45341/a-simple-explanation-of-the-math-part-and-a-o-n-solution
CC-MAIN-2018-05
refinedweb
894
80.11
This is my last lab assignment for the semester. It was supposed to be due next week, but my instructor will be out next week, so he decided today that it would be due on Thursday. He did this because this assignment is harder than the one originally due on Thursday (which I already have completed, just my luck). I have the assignment pretty much done, but have one glitch and one lack-of-knowledge problem. 1. I have to display, after reading and manipulating the file, the number of records in the file. I can't find that operation in my book anywhere. If you could point me in the right direction I would appreciate it. 2. I will try to attach an output to show what I am talking about here, as well as the .txt file that I am reading from. Some of my records (the three with the shortest last names) have the info tabbed wrong. I have tried to correct this by modifying the .txt file, but no luck. And before I get too much grief, my instructor specified all of the names, except the first one, which is the name of the student doing the assignment. Code follows: Code://Travis Bryant //CPT-168-A01 //Payroll Sequential File #include<iostream> #include<string> #include<iomanip> #include<fstream> using namespace std; int main() { //declarations string fnam = "",lnam = "", ssn = ""; double hrswrkd = 0.0, parat = 0.0, grpay = 0.0, ntpay = 0.0, d_duk = 0.0; system("color f0"); //header cout<<"\t\t\t***********************************\n"; cout<<"\t\t\t* Travis Bryant *\n"; cout<<"\t\t\t* CPT-168-A0 *\n"; cout<<"\t\t\t* Payroll Sequential File *\n"; cout<<"\t\t\t***********************************\n\n"; ifstream infile; // Open payroll file infile.open("payroll.txt"); cout<<fixed<<setprecision(2); cout<<" SSN\t\tName\t\tHours\tRate\tGross\tDeductions\tNetpay\n"; cout<<" ____\t\t____________\t_____\t____\t_____\t__________\t______\n"; // priming read infile>>fnam>>lnam>>ssn>>hrswrkd>>parat; //begin while loop (priming read instruction) while(infile.eof() != true) { // calculate gross pay if (hrswrkd > 40) grpay = (40 * parat) + (hrswrkd - 40) * (1.5 * parat); else grpay = hrswrkd * parat; //calculate deductions d_duk = grpay * .10; //calculate net pay ntpay = grpay - d_duk; // Display output cout << " " << ssn.substr(7) << "\t\t" << fnam.substr(0,1) << ". " << lnam << "\t" << hrswrkd << "\t" << parat << "\t" << grpay << "\t" << d_duk << "\t\t"<< ntpay<< "\n"; // read next record infile>>fnam>>lnam>>ssn>>hrswrkd>>parat; } infile.close(); //display appreciation cout<<"\n\t\t\t\tTHANK YOU!!!\n\n"; system("pause"); return 0; }
http://cboard.cprogramming.com/cplusplus-programming/155940-sequential-file-problem-s.html
CC-MAIN-2014-42
refinedweb
417
67.45
The market trend for corn in China in 2017 is facing a depressing time, especially looking at the price development. China has now changed its corn policy from a stockpiling and supply-oriented market situation to one that is demand driven. Market analyst firm CCM has analysed the key trends for corn in 2016 and gives an outlook for the possible development of the corn market in 2017.

Source: Baidu

CCM's research reveals that both the planting area of corn and the total yield decreased in China in 2016. However, looking at the supply and demand situation for corn in 2017, demand is very likely to stay stable, and the domestic supply will therefore be enough to meet total demand. As a result, planting areas are going to be reduced once again, while the price of corn may even fall below the 2016 level.

2016

The total planting area of corn decreased by 3.11% in 2016, compared to 2015. As a result, the corn harvest also fell by 5 million tonnes year on year. The total yield was 219 million tonnes, according to CCM.

Looking at the demand for corn from feedstuffs and industry, both showed an increase in 2016. This trend can be explained by the immense rise of live pig farming in China, especially in the first half of 2016. Other factors that favoured the rise were falling imports of substitutes for corn, namely sorghum, barley, and DDGS. In numbers, demand for corn feedstuffs showed a growth of 2 million tonnes in 2016, while industrial demand for corn grew by more than 4 million tonnes, according to CCM.

The price of corn in China in 2016 was low compared to 2015. This was mainly due to the weakening supply and demand situation, combined with China's corn stockpiling strategy. However, even the low domestic corn prices were still higher than the import prices. The import of corn fell dramatically in China in 2016, namely by 42.5%, according to CCM's research. The biggest exporting nation, with a share of almost 90%, was Europe's corn field, Ukraine.

2017

CCM predicts that the supply-demand situation for corn in China will remain imbalanced in 2017, with a surplus of supply, even with a further decrease in planting area. The reason is the huge stockpile of corn, which will be used to meet demand as well.

CCM has identified four main reasons that are likely to lift corn demand in China in 2017. According to the research, China plans to offer export subsidies for deep-processing products. Especially in the northern part of China, subsidies for deep-processing enterprises are going to be implemented. Corn-based fuel is very likely to grow in 2017, which will raise demand for corn as the raw material. Finally, a rising live pig market will demand more corn, and with imports of substitutes decreasing and less wheat being used, little of that demand will be met by alternatives.

As mentioned before, the planting area of corn will be further decreased in 2017 as part of the restructuring plan for corn planting. The affected area will be about 666 thousand ha.

CCM predicts that the price of corn in China in 2017 will be even lower than it was in 2016. The final trend surely depends on the market and supply situation, but the changed strategy of China's government, which no longer buys corn itself and instead supports domestic enterprises in buying it, puts heavy pressure on the corn price in 2017.
What's more, according to Reuters, another plan of China's government to reduce its massive corn inventory is investment in the bioplastic industry, with corn as the raw material. The products that can be made from corn include bio-plastic commodities like bags and plates. Most of the corn inventory of China's government consists of poor-quality corn that is not fit for human consumption. It is therefore suited only for feed or bio-plastic production.
http://eshare.cnchemicals.com/publishing/home/2017/02/09/2156/ccm-corn-outlook-2017-in-china-down-in-planting-and-price.html
CC-MAIN-2017-43
refinedweb
703
59.94
A Beginning Programmer's Guide to Java
Java Programming Mysteries Explained for Those Learning to Program for the First Time, and for Experienced Programmers Just Learning Java
Mark A. Graybill

Java Programming Book

There's a book on programming in Java with Greenfoot coming out soon. You can see a preview at the book's site, along with resources for students and teachers.

This book isn't just for class use. Greenfoot is a great way to learn to use Java. You can learn the basics of programming with graphics and sound that are far easier to use than in standard Java. Greenfoot and BlueJ both illustrate basic concepts of Object Oriented programming in a way that makes them clearer than any other method I've seen. You can deal with objects as objects, see what their member methods and fields are, and use them outside the program itself.

Greenfoot Results

I've posted an entry on my general blog about a semester with Greenfoot.

While Greenfoot was not as smooth and easy to use for beginners as I could hope, it was still a very valuable tool for the class. Both it and BlueJ are excellent ways of being introduced to some of the basic concepts behind object-oriented programming. Objects, inheritance, members, etc. are all right there for you to see and interact with, which does a heck of a lot for making things clearer.

I'd like to see BlueJ's codepad come over into Greenfoot. The student I had in individual study using BlueJ and Objects First as a text got a lot out of the codepad.

Check out the post linked above. The students all got their games in a playable state and posted at the Greenfoot Gallery, which is a huge achievement (I did not expect 100% success on that count!) They're linked, as are the students' development blogs.

Review: Learning Java by Patrick Niemeyer and Jonathan Knudsen

Once upon a time there was a neat little book called Java in a Nutshell...

The first edition was pretty compact, true to the "Nutshell" name. Then Java grew, and the book grew with it. Now the current edition is huge, and that's even after they've moved out a lot of the examples and more detailed explanations to make room for everything else.

For those looking to learn Java, O'Reilly, the publishers of Java in a Nutshell, have a different book. It's appropriately named Learning Java. Like the original Java in a Nutshell, ... Learning Java ...

Final Grade: 75%, C

Pros:
Some wonderful example programs.
Very comprehensive.
No fluff.

Cons:
Crowds too much into too few stand-alone example programs.
Does a poor job of teaching object oriented concepts.
Many sections are rushed.

Recommendation: ...

Rotating an Image with Java

In an earlier lesson we learned to load an image file into Java and display it. We also scaled the image in the program given in that lesson. Now we're going to get just a tiny bit more sophisticated, and rotate the image, as promised in the lesson introducing Java2D.

The program uses the same image of Duke as the previous one. It needs to be in the same directory as your program.

Here's the program. More discussion of the program follows the listing.

// Import the basic graphics classes.
import java.awt.*;
import javax.swing.*;

/**
 * Simple program that loads, rotates and displays an image.
 * Uses the file Duke_Blocks.gif, which should be in
 * the same directory.
 *
 * @author MAG
 * @version 20Feb2009
 */

public class RotateImage extends JPanel{

  // Declare an Image object for us to use.
  Image image;

  // Create a constructor method
  public RotateImage(){
    super();
    // Load an image to play with.
    image = Toolkit.getDefaultToolkit().getImage("Duke_Blocks.gif");
  }

  public void paintComponent(Graphics g){
    Graphics2D g2d=(Graphics2D)g; // Create a Java2D version of g.
    g2d.translate(170, 0); // Translate the center of our coordinates.
    g2d.rotate(1); // Rotate the image by 1 radian.
    g2d.drawImage(image, 0, 0, 200, 200, this);
  }

  public static void main(String arg[]){
    JFrame frame = new JFrame("RotateImage");
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    frame.setSize(600,400);

    RotateImage panel = new RotateImage();
    frame.setContentPane(panel);
    frame.setVisible(true);
  }
}

Most of this program is the same as our previous one. There are a couple of significant changes, however.

One is the following line:

Graphics2D g2d=(Graphics2D)g;

By doing this, we "magically" make all the Graphics2D functions available for drawing to our window.

What we want to do is rotate our image. Since we don't want to have the image go outside our graphics area when we rotate it, I've moved our "point of origin" of our graphics area using

g2d.translate(170, 0);

This moves the center point around which the drawing area is rotated from the upper left corner of the window's display area to a point 170 pixels to the right of that.

Then we use the rotate() method of our Graphics2D object, which will cause everything drawn into it to be rotated:

g2d.rotate(1);

The amount we're rotating is 1 radian, or about 57 degrees, noted by the 1 inside the parentheses.

Then we use the drawImage method of our Graphics2D object to actually draw the image into our display area.

Once you have image files of your sprite in all its possible rotated angles, load all these images into different Image objects in a Java program (or better yet, an Image array or Collection), and change between them to show the image rotating.
Usually it's nothing worse than "casting" them as Java2D objects. That means you basically tell Java "pretend this object is the right sort of thing" in a way that works.<br /><br />So have a look at <a href="">some info</a> on Java2D, and if it looks confusing, remember it's not you that's the problem. Sun does an amazing job of presenting <b>great</b> things in <i>awful</i> ways by trying to say way too much to too many different audiences at once. <u>And</u> they like to start in the middle of the story.<br /><br />The <a href="">Java2D tutorials</a> do a decent job of showing things off without a lot of mind-numbing prose (well, not <i>too</i> much, anyway.)<br /><br />Next I'll post a short, simple program we can use that rotates our images.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill and Displaying an Image in JavaHere's a common task--loading an image file and displaying it in a window.<br /><br />I'll be using this program as a basis for further two dimensional graphics exercises, so give it a try and play around with it.<br /><br />First, we need to get a picture to use. I use a picture of Duke, the Java mascot, from <a href=""></a><br /><br /.<br /><br />The only somewhat tricky part is that we're using a Toolkit method to get the image. Take a look at the <a href="">Java API reference</a> to learn more about the Toolkit. (The Toolkit object is part of the java.awt package. So select "java.awt" in the top left frame of the API reference webpage to make finding Toolkit in the lower left frame easier.)<br /><br />In this program, we scale and position the image in our window. In future programs, we'll convert the image to a Java2D object (which is really easy) so that we can do other things, like rotate it and use it as a sprite.<br /><br />Here's the program:<br /><pre style="color: rgb(221, 221, 0);"> /* ShowImage.java<br /><br /> This program loads and displays an image from a file.<br /><br /> mag-13May2008<br /> updated 20Feb2009 by mag to incorporate suggestions<br /> by mazing and iofthestorm on digg.<br />*/<br /><br />// Import the basic graphics classes.<br />import java.awt.*;<br />import javax.swing.*;<br /><br />public class ShowImage extends JPanel{<br /> Image image; // Declare a name for our Image object.<br /><br /> // Create a constructor method<br /> public ShowImage(){<br /> super();<br /> // Load an image file into our Image object. This file has to be in the same<br /> // directory as ShowImage.class.<br /> image = Toolkit.getDefaultToolkit().getImage("Duke_Blocks.gif"); /><br /> // Draw our Image object.<br /> g.drawImage(image,50,10,200,200, this); // at location 50,10<br /> // 200 wide and high<br /> }<br /><br /> public static void main(String arg[]){<br /> JFrame frame = new JFrame("ShowImage");<br /> frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);<br /> frame.setSize(600,400);<br /><br /> ShowImage panel = new ShowImage();<br /> frame.setContentPane(panel); <br /> frame.setVisible(true); <br /> }<br />}<br /><br /><br /></pre><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill: Can Java Hear You?In the program given in <a href="">Simple Mouse Interaction in Java</a>.<br /><br />By implementing the MouseListener interface, we were able to talk to the mouse very easily. However, a MouseListener is just one of the many types of Listener available in Java.<br /><span style="font-weight: bold;"><br />What Sort of Listener Are You?</span><br /><br /.<br /><br />Whew! 
What a lot of work that would be! If we had to do that it'd almost be enough to drive you back to the command line. (Once upon a time you <span style="font-style: italic;">did</span> have to do all that, believe it or not. Thank goodness those days are gone.)<br /><br / <span style="font-style: italic;">once</span>.<br /><br /><span style="font-weight: bold;">What is a Listener?</span><br /><br /.<br /><br />If the click happens outside our program's window, an Event is generated, but our program isn't included in the recipients of the message. If our program doesn't have a Listener, the mouse click Event gets passed to our program but is ignored.<br /><br /.<br /><br />A MouseListener is a very generalized sort of Listener. An ActionListener, however, can be set up to listen for a specific event, like a click on a specific button. Take a look in the <a href="">Java API</a>.<br /><br /.<br /><br />I'll be posting a sample program soon, until then have a look at the <a href="">Sun Java Tutorials</a> on using buttons and other controls from Swing (Java's best set of graphical user interface objects.)<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill CSS with JavaI've not been posting very regularly since the start of the school year. That will change once my class starts doing Java. Whenever I teach Java I get <em>lots</em> of material for this blog. One question can make me realize how non-obvious a whole range of things are to new programmers.<br /><br />Right now the class is working on developing web pages, you can see their first efforts, severely constrained by the available class time at <a href=""></a>. CSS is one of the things we're working on, and this dovetails right into Java since you can use CSS with Java to style your user interfaces.<br /><br />There's a nice tutorial with sample code showing how it works at <a href="">Swing and CSS by Joshua Marinacci</a>.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill Java Signatures SimpleMethods have names. For example, <code>System.out.println()</code> is usually called "print line", though we'd probably say "system-out-print line" for it as I've written it here.<br /><br />Unfortunately, the word "name" doesn't have a technical meaning. And even if we gave it one, it might lead to misunderstanding when we're switching back between technical terms and nontechnical ones. So there's a technical term for name used for methods in Java: signatures.<br /><br />The way we've used "name" above refers to the same thing as the Java term "simple signature." The simple signature of a method is its text name, ignoring everything in the parentheses, its return type, and so on. So <code>println()</code> and <code>println(String)</code> have the same simple signature.<br /><br /.<br /><br />That's why you get a runtime error if you do the following:<br /><br /><code>public static void main(){<br />...</code><br /><br />The Java compiler will compile this just fine. 
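The promised ActionListener sample never shows up in this stretch of the feed, so here is a minimal sketch of the idea; the button label and the printed message are invented for illustration, and this is only one way the listener could be wired up.

import java.awt.event.*;
import javax.swing.*;

public class ButtonListener {
    public static void main(String[] arg) {
        JFrame frame = new JFrame("ButtonListener");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        JButton button = new JButton("Click Me"); // label is just an example
        // The ActionListener is registered on this one button, so it only
        // hears that button's clicks, not every mouse event in the window.
        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent event) {
                System.out.println("The button was clicked.");
            }
        });

        frame.getContentPane().add(button);
        frame.setSize(250, 100);
        frame.setVisible(true);
    }
}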
CSS with Java

I've not been posting very regularly since the start of the school year. That will change once my class starts doing Java. Whenever I teach Java I get lots of material for this blog. One question can make me realize how non-obvious a whole range of things are to new programmers.

Right now the class is working on developing web pages; you can see their first efforts, severely constrained by the available class time. CSS is one of the things we're working on, and this dovetails right into Java, since you can use CSS with Java to style your user interfaces.

There's a nice tutorial with sample code showing how it works at Swing and CSS by Joshua Marinacci.

Java Signatures Simple

Methods have names. For example, System.out.println() is usually called "print line", though we'd probably say "system-out-print line" for it as I've written it here.

Unfortunately, the word "name" doesn't have a technical meaning. And even if we gave it one, it might lead to misunderstanding when we're switching back and forth between technical terms and nontechnical ones. So there's a technical term for the name used for methods in Java: signatures.

The way we've used "name" above refers to the same thing as the Java term "simple signature." The simple signature of a method is its text name, ignoring everything in the parentheses, its return type, and so on. So println() and println(String) have the same simple signature.

That's why you get a runtime error if you do the following:

public static void main(){
...

The Java compiler will compile this just fine. It will create a .class file with a method that's got the signature main().

Then when you try to run it, the JRE will look for a method with the signature main(String[]), that is, a method with the text name main followed by an array of Strings as its parameter, and it won't find one. The same thing happens if you write this instead:

public static void main(String arg){
...

You've got a method whose signature is main(String) not main(String[]), that is, its parameter is a String, not an array of Strings.

An example of methods that have the same name but different signatures is the set of constructor methods for the Color class. You can create a new Color in any of several different ways:

Color(int r, g, b)
Color(int r, g, b, a)
Color(int rgb)
Color(float r, g, b)

and so on.

That means when you're calling a method, and the compiler spits it out and says it's not found, you need to make sure you've:
1. spelled the name right,
2. got the right type and number of parameters,
3. and imported the appropriate packages and classes.
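To see overloaded signatures in action, here is a small sketch; the class and method names are made up for this illustration. The two greet methods share the simple signature greet but have different full signatures, so the compiler picks one by the arguments you pass.

public class Signatures {
    // greet(String) -- one signature...
    static void greet(String name) {
        System.out.println("Hello, " + name);
    }

    // ...greet(String, int) -- same simple signature, different full signature.
    static void greet(String name, int times) {
        for (int i = 0; i < times; i++) {
            System.out.println("Hello, " + name);
        }
    }

    public static void main(String[] arg) {
        greet("Duke");      // calls greet(String)
        greet("Duke", 3);   // calls greet(String, int)
    }
}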
Java's Documentation: A Closer Look

I started discussing the place to look up Java's classes and methods in Java's Reference Manual and the place to look up the language's syntax and elements in The Java Language Manual. Here I'm going to take a deeper look at the documentation of Java's classes, interfaces, and so on in the Java API Reference.

At first glance the page that's presented to you by the API reference can be overwhelming. Too much information!

First let's look at what's in each panel. If you don't have the API reference open in another tab or window of your browser, open it now. I'll be referring to what you see.

In the upper left frame, click on "All Classes" at the top of the list to list all the classes, interfaces, enums, and exceptions in the API again.

In the main window, you'll see the big title "Class JFrame". If you ever need to know what you have to import, this is a good way to find out.

Beneath Class JFrame you'll see a list of other classes with little stairsteps leading down from one to another. These are the parent classes of JFrame, from the most basic object class in Java, java.lang.Object, to JFrame itself. JFrame inherits from all these classes. Every method and field that they have, JFrame has, too.

You can see the documentation for any of them by simply clicking on the class's name. But don't just yet, we're going to look at more of JFrame's documentation first.

I'm going to skip on down past the Nested Class Summary and list of nested classes inherited, too, and jump right into the Field Summary below them.

Fields are the variables and constants defined for the class. The first listed have been defined directly in this class. The listing tells you what they've been defined as, and their purpose. Following that is a list of ones inherited from each of the parent classes.

Having the whole document hyperlinked makes it far more useful than a paper document would be. There'd just be too much page-flipping.

If you want to have a copy right on your own machine, you can download the entire thing. The PDF is available from Sun for Java 6, or you can get it for different versions from links on the Java SE reference page.

That way you don't have to be on the network all the time, or held hostage to the speed of your connection to look up something.

Simple Mouse Interaction in Java

Building on the basic graphics we've done in Very Basic Java Graphics--3 Examples and A Most Basic Graphics App, we'll be adding mouse interaction in this example.

/* MousePanel.java

   This program adds the ability to respond to
   mouse events to the BasicPanel program.

   The mouse clicks are used to draw on the
   panel area using drawline instructions.

   mag-30Apr2008
*/

// Import the basic necessary classes.
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class MousePanel extends JPanel implements MouseListener{

  public MousePanel(){
    super();
    pointX=20;
    pointY=20;
    oldX=0;
    oldY=0;
    addMouseListener(this);
  }

  int pointX, pointY, oldX, oldY;

  public void paintComponent(Graphics g){
    // Draw a line from the prior mouse click to new one.
    g.drawLine(oldX,oldY,pointX,pointY);
  }

  public void mouseClicked(MouseEvent mouse){
    // Copy the last clicked location into the 'old' variables.
    oldX=pointX;
    oldY=pointY;
    // Get the location of the current mouse click.
    pointX = mouse.getX();
    pointY = mouse.getY();
    // Ask Swing to repaint the panel so the new line shows up.
    repaint();
  }

  // The rest of the MouseListener methods have to be present, but we don't use them.
  public void mouseEntered(MouseEvent mouse){ }
  public void mouseExited(MouseEvent mouse){ }
  public void mousePressed(MouseEvent mouse){ }
  public void mouseReleased(MouseEvent mouse){ }

  public static void main(String arg[]){
    JFrame frame = new JFrame("MousePanel");
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    frame.setSize(250,250);

    MousePanel panel = new MousePanel();
    frame.setContentPane(panel);
    frame.setVisible(true);
  }
}

To add mouse interaction, we're taking advantage of an interface from the java.awt.event package called MouseListener. When we implement MouseListener as part of our JPanel class, then call addMouseListener(this) as part of our MousePanel() constructor, our program is "wired up" to the mouse, so that our MousePanel gets mouse events from the JVM.

Implementing the MouseListener interface requires that we implement all the methods called for by the interface. We only use mouseClicked() in this example, however, so it's the only one that has code in its code block.

This program lets you draw by clicking the mouse at various places in the MousePanel's drawing area, drawing a line from the prior click location (or from top left for the first click) to the location of the current click.

Here's a little challenge for you:

Change the program so that it doesn't draw a line from upper left on the first click, but instead draws a dot in the location of the first click. Later clicks will then draw lines from the prior click's location.

Hint: If you set the initial oldX and oldY values to a value that isn't in the visible MousePanel area, you can tell if there haven't been any prior clicks.
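If you want to check your answer against one possible reading of that hint, here is a sketch that uses -1 as the "no clicks yet" marker; it is only one way to do it, and the class name and the dot size are arbitrary choices, not part of the original lesson.

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class FirstClickPanel extends JPanel implements MouseListener{

  // Start the coordinates outside the visible area, per the hint.
  int pointX=-1, pointY=-1, oldX=-1, oldY=-1;

  public FirstClickPanel(){
    super();
    addMouseListener(this);
  }

  public void paintComponent(Graphics g){
    if(pointX < 0) return;                     // nothing clicked yet
    if(oldX < 0){
      g.fillOval(pointX-2, pointY-2, 4, 4);    // first click: just a dot
    } else {
      g.drawLine(oldX, oldY, pointX, pointY);  // later clicks: a line
    }
  }

  public void mouseClicked(MouseEvent mouse){
    oldX=pointX;
    oldY=pointY;
    pointX=mouse.getX();
    pointY=mouse.getY();
    repaint();
  }

  public void mouseEntered(MouseEvent mouse){ }
  public void mouseExited(MouseEvent mouse){ }
  public void mousePressed(MouseEvent mouse){ }
  public void mouseReleased(MouseEvent mouse){ }

  public static void main(String arg[]){
    JFrame frame = new JFrame("FirstClickPanel");
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    frame.setSize(250,250);
    frame.setContentPane(new FirstClickPanel());
    frame.setVisible(true);
  }
}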
Reference Types: Names and Objects

When we learned about primitive variables we learned that they are a way of storing a simple value, like a number or a true-false state. We give them a name, and whenever we want to get that value we use that name. Whenever we want to change the value that's stored, we set the name equal to the new value:

int value; //Declare an integer variable named value.
value=100; // Store the number 100 in value.
System.out.println(value); // Print the number in value, "100"
value=20; // Put 20 in value. Over-writes the 100.
count=value; // Gets the number from value and puts
             // it in count.
             // Assumes 'count' was declared as an int before.

When we use objects, we're using a more complex kind of variable. We don't do it for the sake of complexity, but because adding some complexity here makes things a lot simpler elsewhere in our program. A lot simpler.

This is why there's a two step process in creating a Java variable. The first step, declaration, sets up a name we're going to use for a particular type of object. The second step, initialization, associates that name with an object--creating a new object if necessary.

Let's say I'm talking to a friend. I say, "I'm going to get a hamster and call it Wilmot."

"That's nice," says the friend. "Can I see Wilmot?"

"I don't have him yet."

"What color is Wilmot?"

"I don't know, I don't have a hamster yet."

Then I go to the pet store and pick up a hamster. I call him Wilmot, put him in a cage and take him home. I've now initialized the name Wilmot I declared to my friend earlier to point to a specific hamster.

Now when my friend says "Can I see Wilmot?" I can show him a hamster, and say "This is Wilmot." When he wants to know the color, there is an object to get the color of.

In Java terms, if I had a Hamster class, I could declare a name like this:

Hamster wilmot;

I now have a name for a Hamster object, but no Hamster. I can initialize the reference variable by getting a new Hamster object using the class's constructor, Hamster():

wilmot=new Hamster();

Now I have a Hamster object associated with the name. The name references the Hamster object that's been created. Hence the term "reference variable".

Now, if I create a new name and make it point to the same Hamster object, it becomes a new name for the same object:

Hamster fluffy;
fluffy=wilmot;

Now, if I make any changes to fluffy they also happen to wilmot. For example, if wilmot's size is 10cm, and I set fluffy's size to 12cm, then get wilmot's size, it will be 12cm.

This is different from what happens with primitive variables. If I do the same thing with a pair of primitive variables like this:

int value, count;
value=100; // Initialize value to 100;
count=value; // Set count equal to value.
count=count-20; // Subtract 20 from count.
                // We could have said 'count-=20;', too.
System.out.println("value= " + value); // Print value.
System.out.println("count= " + count); // Print count.

Then the output will be:

value= 100
count= 80

If we need to have each name apply to a different hamster, we need to go back to the pet store for another hamster. In Java, we need to use the constructor to get another Hamster object:

Hamster wilmot, fluffy;
wilmot=new Hamster();
fluffy=wilmot; // fluffy and wilmot are now both names
               // for the same Hamster object. Anything we do to fluffy
               // will also apply to wilmot, and vice versa.
fluffy=new Hamster();
// Now fluffy applies to a different Hamster object.
// We can make changes to fluffy and
// they won't affect wilmot. We now have two
// Hamster objects, each with its own name
// to refer to it.

If what we want is a separate copy of wilmot rather than just another name for him, we can use the clone() method:

Hamster wilmot, fluffy;
wilmot=new Hamster();
fluffy=wilmot.clone();

Now fluffy is a copy of wilmot, and we can make changes to fluffy without affecting wilmot. Note: whether a class has the clone() method varies from class to class. Normally, if the class implements the Cloneable interface then it has a clone() method that works this way.
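The Hamster class itself never appears in these posts. A bare-bones version that would let the examples above actually run might look like this; the size field, its numbers, and the no-argument constructor are invented purely for illustration.

public class Hamster implements Cloneable {
    int size = 10;   // size in cm, just for the demo

    public Hamster clone() {
        try {
            return (Hamster) super.clone();
        } catch (CloneNotSupportedException e) {
            return null; // can't happen: we implement Cloneable
        }
    }

    public static void main(String[] arg) {
        Hamster wilmot = new Hamster();
        Hamster fluffy = wilmot;          // two names, one hamster
        fluffy.size = 12;
        System.out.println(wilmot.size);  // prints 12 -- same object

        fluffy = wilmot.clone();          // now a separate copy
        fluffy.size = 8;
        System.out.println(wilmot.size);  // still 12 -- wilmot unaffected
    }
}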
Review: Beginning Programming with Java for Dummies by Barry Burd

The differences I have with this book are few, but significant. The first is that graphics are left until the very tail end of the book. As I've stated elsewhere, I'm a proponent of teaching graphics early. I feel they're a good motivator for new programmers, and they allow new programmers to visualize the effects of their program's flow control structures in a way that rows of numbers can't equal.

The second difference I have with this book's approach is the use of an integrated development environment (IDE) right from the start. I've also commented in a prior article why I think it's a good idea to at least be familiar with common command line operations before starting with the IDE. I feel that starting with an IDE right off glosses over some skills that are crucial to programmers at any skill level.

... Arachnophilia, but also using BlueJ for the latter part of the book.

... Greenfoot may convince me that objects and classes should be right up front now. However, I feel that starting with this book and then moving on to other resources like Greenfoot and Head First Java to develop a better understanding of Java and object oriented programming is a perfectly acceptable way to learn.

Even with its problems, I consider this book to be the best on the market for a non-programmer. It does not go as deeply into Java as Beginning Programming in Java for the Absolute Beginner, but it's an easier read and overall I like the structure better even though it covers less.

Final Grade: 90%, A

Pros:
Easy to read.
Good pacing of material.
Doesn't get boring.
A very good introduction to programming.

Cons:
Uses an IDE, and recommends a Windows-specific IDE (though any multiplatform IDE will work.)
Graphics are left until the very end.
Limited scope, but it's easy to go on from here.

Recommendation:
Get it if you're a non-programmer looking to learn how to program. You may also want to get the "next" Java book you plan on using at the same time, and start referring to it as you work through this one. You should consider downloading Greenfoot.

Creating a Java Variable: Two Steps

There are two steps to creating a variable; declaration and initialization.

Declaration is creating a name and saying what type of variable it names:

int count;
String name;
Scanner input;
JFrame frame;

Here we're saying "I hereby declare that I have a variable named count and that it will be used to name an integer variable. I also declare that a variable called name will be used to refer to a String object." And so on.

Now, these variables are only half done at this point. We've created names, but we haven't actually associated them with anything yet. It's like telling someone the names of your pets before you actually get them.

"I have a hamster named Wilmot and a cat called Tiger and a dog named Bowser."

"Where are they?"

"I don't have them yet."

At this point, telling someone to feed Wilmot wouldn't make much sense. You need a pet to go with the name. Similarly, asking your program to fetch data from your Scanner named input doesn't make sense until you have attached it to an actual instance of a Scanner. So let's do it:

input=new Scanner(System.in);

This is called initialization. We gave it an initial value. In this case, we created a new Scanner object and made input refer to it (using the = sign.)

We can initialize the rest, too:

count=0;
name="";
frame=new JFrame();

This initializes count to 0, name to an empty string, and frame to a new JFrame object.

Now we have things associated with the names. Now if we tell our program to get data from input and set name to point to it, it'll be able to do so, because now there's an actual Scanner connected with the name input:

name=input.next();

Shortcut: Two Steps in One

You can declare and initialize a variable in one statement. But you still have to do both.*

int count=0;
String name="";
Scanner input=new Scanner(System.in);
JFrame frame=new JFrame();

Notice that the constructor call has parentheses after it (marking it as a method, and the capitalized name being the name of a class tells you that it's a constructor method that makes a new one of those objects.)

We don't have to create a new object for initialization. If there's already an object of that type available, we can initialize a newly declared variable to that same object:

Scanner keyboard=input;

This declares a new Scanner called keyboard and initializes it to point to the same Scanner as input. In pet terms, this is like giving a nickname to your hamster Wilmot. You may have two names, say, Wilmot and Fluffy, but you've still got one hamster--he just goes by either name.

Hamster wilmot=new Hamster("Syrian", AGOUTI);
Hamster fluffy=wilmot;

If you want another object, you need to create one:

Hamster fred=new Hamster("Teddy Bear", BANDED);

Now you have a second hamster called fred.

Further reading.

*Actually, Java doesn't strictly require initialization all the time. But it's good programming practice to do so. It prevents a lot of bugs and nasty surprises in your programs.

Keyboard Input for Console Apps: Java's Scanner Class

A nice gift that came with Java 1.5 was the Scanner class. It's part of the java.util package. This class simplifies many common programming tasks. One is getting text from the keyboard.

Prior to Scanner, this took a couple of extra steps. With Scanner, it's a snap. Just as we use System.out to print text to the command line, we use System.in to receive input. Like System.out, System.in is a data stream object. In this case, an InputStream object.

import java.util.Scanner;

public class TrollTalk{
  public static void main(String arg[]){
    String name;
    // Wrap a Scanner around System.in to read from the keyboard.
    Scanner keyboard = new Scanner(System.in);
    System.out.println("Who goes there?");
    name = keyboard.nextLine();   // read a whole line of text
    System.out.println("Grrr! " + name + " may not cross my bridge!");
  }
}

Scanner also has methods for reading primitive types.

It can also be used to get information from files in the same way it gets information from the user's keyboard. But that's another lesson.
It will also match <a href="">parentheses, brackets, and braces</a> for you, making it easier to avoid--or find--errors in your program there.<br /><br /.<br /><br /.<br /><br /.<br /><br /.<br /><br />Arachnophilia is the IDE that I use with my high school classes, as I mention on <a href="">my other blog</a>. I use it myself, especially on my little ASUS Eee computer which has limited screen space and not a lot of storage space. It's not an IDE that's just good for classwork. It's good enough for real work.<br /><br />Give it a try.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill Java IDEs for Beginners: BlueJ<p><img src="" /></p><a href="">BlueJ</a> is an integrated development environment created especially for new programmers learning Java. It's built to be easy to use, but also to provide powerful features to assist new programmers learn object oriented programming without headaches.<br /><br />It shows a graphical representation of the classes and objects in a program. It also teaches the use of program comments for automatically generating documentation for the program. And this all happens easily without any hoops to jump through. It's just there, and it does its thing.<br /><br />It also gives quick and easy access to the <a href="">API documentation for Java</a>, making it easier to use and get familiar with. Because the same documentation format will be used for your own programs, how this format works will become clear really quickly.<br /><br />The BlueJ site has some <a href="">great tutorials</a> doing fun things with Java and BlueJ. There's nothing like doing interesting exercises to make learning easier.<br /><br />When you're ready to move onward and upward, BlueJ can be used with NetBeans, a considerably more sophisticated, and complex, IDE for Java. So BlueJ is not a classroom-only dead end.<br /><br />Learning to program has never been easier. The many resources for Java make it a great way to start programming. BlueJ is a big part of making Java a great place to start. Check it out.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill Start at the Command Line?Many.<br /><br /.<br /><br /.<br /><br /?)<br /><br />So don't let the command line put you off. Even if you've started with an IDE, go ahead and use the command line sometime to do the basic tasks of showing your source file's contents, compiling your source file, and running it.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill Recommendation: Java Programming for Kids, Parents and Grandparents by Yakov Fain<a href="">Java Programming for Kids, Parents, and Grandparents</a> <a href="">Sun</a>.<br /><br /.<br /><br />I haven't tested the book on any real students yet, but I probably will. Once I do I'll post a more in-depth article on this book. The writing style is light, occasionally a bit condescending, but the material itself is well done.<br /><br />Thumbs up to those resposible for writing and distributing <u>Java Programming for Kids, Parents and Grandparents</u>. A body of good material like this is what makes programming accessible to ordinary folks who might otherwise think you need to live like some sort of egghead monk to learn this stuff.<br /><br />The contrast between the approach in this ebook with the first edition of Java Programming for Dummies (the Aaron Walsh version, not the fine book by Brad Burd) is striking. 
Essentially, <span style="color: rgb(255, 0, 0);">the first Java for Dummies book told you</span> <span style="font-size:130%;"><span style="color: rgb(51, 255, 255);">"Java is hard, you're a Dummy. Since Java is too complicated for a Dummy like you, we're going to teach you JavaScript instead, even though you <i>thought</i> you bought a book on Java."<span style="color: rgb(0, 0, 0);font-size:100%;" >*</span></span></span><br /><br /><span style="font-weight: bold;">The Fain book didn't bother to tell me what I can't do.</span> It was too busy telling me what I <i>can</i> do. On top of that, Yakov Fain's book is a free ebook, so it cost me $25 <i>less</i> than the Dummies book. (Yeah, I still feel burned on that one. ;) )<br /><br /><span style="font-size:85%;">* <i>Dummies</i> books for about ten years, and only the excellence of many of their current books brought me back.</span><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill Java's Graphics: paintComponent() and repaint()Why?<br /><br />It seems like magic, doesn't it? You just sort of set it up and <i>somehow</i> it happens. Hopefully when and how we want it to. It seems like a vaguely disquieting form of magic, perhaps.<br /><br /.<br /><br /.<br /><br /.<br /><br /.<br /><br />The paintComponent() method redraws your graphics area. In our examples so far this has been a JPanel. There are two ways paintComponent() can get called. One is automatic, the other is up to you.<br /><br /.<br /><br />When these sort of things happen, the Java Runtime Environment calls the paint() method for your container. In our examples so far, our container has been a JFrame. It will be a "heavyweight" component (see the <a href="">JFrames</a>. <i>Shazam!</i> It's like magic!<br /><br /.<br /><br /!<br /><br /.<br /><br /().<br /><br />So, if you have new information you want depicted on screen, use repaint().<br /><br /.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill Basic Java Graphics: 3 ExamplesWe're going to build up a graphics app from something so basic it doesn't really work, like the example from <a href="">Java Graphics--Start with a JFrame</a> and building to something more sophisticated. In the JFrame article and in <a href="">A Most Basic Graphics App</a> I've avoided the use of comments in the code to keep it shorter. In these examples I'm going to include comments in the code to describe what's happening. So don't be scared of the length of these examples, most of what's there is just <a href="">comments</a>, which Java ignores. 
They're there just for humans.<br /><pre style="color: rgb(0, 153, 221); font-size: 11pt;">/* BasicFrame.java<br /><br />This is a really simple graphics program.<br />It opens a frame on the screen with a single<br />line drawn across it.<br /><br />It's not very polished, but it demonstrates<br />a graphical program as simply as possible.<br /><br />mag-27Apr2008<br />*/<br /><br />// Import the basic graphics classes.<br />import java.awt.*;<br />import javax.swing.*;<br /><br />public class BasicFrame extends JFrame{<br /><br />// Create a constructor method<br />public Basic /> // create an identifier named 'window' and<br /> // apply it to a new BasicFrame object<br /> // created using our constructor, above.<br /> BasicFrame frame = new BasicFrame();<br /><br /> // Use the setSize method that our BasicFrame<br /> // object inherited to make the frame<br /> // 200 pixels wide and high.<br /> frame.setSize(200,200);<br /> <br /> // Make the window show on the screen.<br /> frame.setVisible(true); <br />}<br />}</pre><br />This example draws a line on the JFrame. The window decorations take up some of our drawing space, so the title bar may cover some of our line. With some <a href="">JVMs</a> the background area of the JFrame won't be cleared before you draw. Also, when you close the window using the close button, the application doesn't shut down. For many applications on Mac OS X this is normal behavior, but on other OSes, and even many applications under OS X it's normal to expect an application to close down completely if you click the close button on the last open window (for applications that have multiple windows) or the close button on the main window for other applications.<br /><br />So here's our next example, where we take care of that last problem:<br /><pre style="color: rgb(0, 223, 127);">/* CloseFrame.java<br /><br />This is a really simple graphics program.<br />It opens a frame on the screen with a single<br />line drawn across it.<br /><br />We're starting to add a little bit of polish<br />here--we make the program close nicely when<br />the close box is clicked, rather than just<br />sort of hanging around half-dead.<br /><br />mag-28Apr2008<br />*/<br /><br />// Import the basic graphics classes.<br />import java.awt.*;<br />import javax.swing.*;<br /><br />public class CloseFrame extends JFrame{<br /><br />// Create a constructor method<br />public Close /> BasicFrame frame = new BasicFrame();<br /><br /> // This uses a constant EXIT_ON_CLOSE that's a member of JFrame.<br /> // The constant is passed to the setDefaultCloseOperation method<br /> // of our frame object, which is a CloseFrame object,<br /> //which inherits the method from its parent JFrame.<br /> // It makes the program exit (close completely) when we click<br /> // the close button.<br /> frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);<br /><br /> frame.setSize(200,200);<br /> frame.setVisible(true);<br />}<br />}</pre><br />The magic line that makes the application exit when we click the close button is the one that reads: <tt>frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);</tt>. We can tell that EXIT_ON_CLOSE is a type of variable called a "constant" because the name follows the convention of being written in all capital letters. So we're not YELLING when we say "EXIT_ON_CLOSE" we're using the value stored in EXIT_ON_CLOSE as part of the JFrame class.<br />A JFrame has this constant available because it <i>implements</i> the interface <a href="">WindowConstants</a>. 
By implementing this interface, in essence, the JFrame class of objects promise to do the right thing if one of these constants is passed to a method that handles it, in this case <a href="">setDefaultCloseOperation</a>.<br /><br />Our final example for this entry goes one <b>big</b> step further:<br /><pre style="color: rgb(221, 153, 0); font-size: 11pt;">/* BasicPanel.java<br /><br />This is a somewhat more sophisticated drawing program.<br />It uses a new child of JPanel as the drawing surface<br />for a JFrame, to avoid the problems with drawing<br />directly on a JFrame.<br /><br />A custom JPanel child class called BasicPanel is created<br />with its own paintComponent method, which contains our<br />drawing code.<br /><br />A generic JFrame is then created to hold the BasicPanel<br />object, the BasicPanel is created, made into the JPanel's<br />content pane, and our paintComponent method is called<br />automatically.<br /><br />mag-28Apr2008<br />*/<br /><br />// Import the basic graphics classes.<br />import java.awt.*;<br />import javax.swing.*;<br /><br />public class BasicPanel extends JPanel{<br /><br />// Create a constructor method<br />public BasicPanel(){<br /> super(); /> g.drawLine(10,10,150,150); // Draw a line from (10,10) to (150,150)<br />}<br /><br />public static void main(String arg[]){<br /> JFrame frame = new JFrame("BasicPanel");<br /> frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);<br /> frame.setSize(200,200);<br /><br /> // Create a new identifier for a BasicPanel called "panel",<br /> // then create a new BasicPanel object for it to refer to.<br /> BasicPanel panel = new BasicPanel();<br /><br /> // Make the panel object the content pane of the JFrame.<br /> // This puts it into the drawable area of frame, and now<br /> // we do all our drawing to panel, using paintComponent(), above.<br /> frame.setContentPane(panel);<br /> frame.setVisible(true);<br />}<br />}</pre><br />Here, we've added a JPanel inside the drawable area of our JFrame. This means that if we start <a href="">drawing</a> at 0,0 we'll actually be drawing somewhere we can see it, it won't be hidden by the window decorations.<br /><br />The other big change is the change of our drawing method's name from paint() to paintComponent(). This is because we changed from a JFrame to a JPanel. If you take a look at <a href="">JPanel in the API Specification</a> you'll see that it extends JComponent. This family of objects uses paintComponent() to manage drawing.<br /><br />This last example is a good starting point for any graphical operation. In future examples, I'll be extending it further to add additional features.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill in import Statements<pre style="color:#dddd00;font-size:11pt;">import java.awt.*;<br />import java.awt.event.*;<br />import javax.swing.*;<br />import javax.swing.event.*;</pre><br />These are some representative <tt>import</tt> statments. Each one has an asterisk (or "star") at the end of it. 
What does that mean?<br /><br /:<br /><pre style="color:#dddd00;font-size:11pt;">import java.awt.Frame;<br />import java.awt.Panel;<br />import java.awt.Component;<br />import java.awt.Color;<br />import java.awt.Dialog;<br />import java.awt.Dimension;<br />import java.awt.Graphics;<br />import java.awt.Image;</pre><br />Since star can take the place of any name, we can replace <i>all</i> of those import statements with one:<br /><pre style="color:#dddd00;font-size:11pt;">import java.awt.*;</pre><br />This imports <i>every</i> class in java.awt. It also imports all the interfaces, exceptions, and errors defined in java.awt. It makes absolutely everything in the java.awt package available to our program.<br /><br />OK, then why do we need to do this?<br /><pre style="color:#dddd00;font-size:11pt;">import java.awt.*;<br />import java.awt.event.*;</pre><br />If the * makes us get everything in java.awt, why do we need to then go an import java.awt.event.*?<br /><br />The reason is that java.awt.event is a different package than java.awt. The stuff in java.awt.event isn't actually <i>in</i> <i>in</i> java.awt, the way that System.out is in the class System.<br /><br />Just remember, every package with its own name is a totally different package. You can tell a class name from a package name by whether it is capitalized:<br /><pre style="color:#dddd00;font-size:11pt;">import java.awt.Color;</pre><br />Color is a class in the java.awt package. This is one of the reasons we follow the convention of using a capital letter at the start of a class's name. Whereas this:<br /><pre style="color:#dddd00;font-size:11pt;">import java.awt.color.*;</pre><br />is an import of the classes in the java.awt.color package. These classes are not in java.awt, so you can't import them using "import java.awt.*;"<br /><br /.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill LinesIn <a href="">A Most Basic Graphics App</a> we had the framework of a simple graphics application. We trimmed it down a little, too much really, in <a href="">Start With a JFrame</a>. The JFrame alone doesn't accomodate graphics as well as we would like. The "window decorations"--the title bar, the frame around the window, close button and so on--all cover part of our drawing area. Likewise, there are other problems that crop up on different JVMs.<br /><br />So we'll use the BasicPanel.java program from <a href="">A Most Basic Graphics App</a> to build on now.<br /><br />We're going to concern ourselves with the part of the program that does the actual drawing:<br /><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(10,10,150,150); // Draw a line from (10,10) to (150,150)<br /></pre><br />In particular, we're going to focus on those four numbers inside the parentheses. What are they? Where do they come from? How can we pick numbers that will let us draw what we want, and what are the limits on what we can put in there?<br /><br /.<br />:<br /><pre style="color: rgb(0, 223, 0); font-size: 11pt;">frame.setSize(200,200);</pre>This is setting the size of our JFrame (creatively named "frame") to 200 pixels wide and 200 pixels high. Since the JPanel we're drawing to is <i>inside</i> this JFrame, we know that the highest point we're going to <i>see</i>.<br /><br />We can find out exactly how large our drawing area is by using the <a href="">getClipBounds()</a> method of the <a href="">Graphics class</a>. 
Feel free to experiment with it, but for now let's get back to talking about the four numbers we use with drawLine and what they do. Just saying we have a high limit of 200 on those numbers is good enough for now.<br /><br />OK, so we can use numbers from 0 to somewhere short of 200 in any of those four locations and expect results.<br /><br /><b style="color: rgb(170, 170, 0);">A Pair of Pairs</b><br /><br /.<br /><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(10,10,150,150); // Draw a line from (10,10) to (150,150)</pre>Notice how in the comment I say we're drawing a line from (10, 10) to (150, 150). Notice the first two numbers in g.drawLine() are 10 and 10, and the second two numbers are 150 and 150. This Means Something. ;)<br /><br />The first pair marks the starting point for our line, the second pair marks the ending point.<br /><br />The first number in each pair says how far left or right we want that point. The second number in each pair says how far up or down we want that point.<br />.<br /><br />On up and down, 0 is all the way to the top. The higher the number, the lower we'll go. A number of about 185 or so will take us all the way to the bottom of our drawing area.<br /><br />If we want to draw a line all the way across our drawing area, instead of just partway across, we can replace<br /><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(10,10,150,150); // Draw a line from (10,10) to (150,150)<br /></pre>with<br /><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(0,0,200,200);</pre>This will actually "go over" the edge of our drawing area, but it won't matter. Open up BasicPanel in your editor, make this change, compile it and run it.<br /><br />Fine, we've mastered drawing a line from the top left to the bottom right. We're not exactly drawing starships yet, are we?<br /><br />What if we change the pairs of numbers around? Open up your editor and make this change to the drawLine() statement:<br /><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(200,200,0,0);<.<br /><br /><b style="color: rgb(170, 170, 0);">Horizontal Lines</b><br /><br />If we want a line to be horizontal, both the up-and-down numbers need to be the same. This means the second number in each pair, the second and fourth numbers. Add the following line to BasicPanel right after your other drawLine() statement:<br /><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(10,50,200,50);</pre> Compile and run, and now you have a horizontal line. Notice the second and fourth numbers are the same. If you want to move the horizontal line up or down, change the number in the second and fourth position:<br /><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(10,20,200,20); // A higher horizontal line<br /></pre><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(0,0,200,200); // A lower horizontal line</pre> The higher the number you put in the second and fourth positions the lower the horizontal line will be. Go ahead and add these lines to your program, one at a time, and compile and run.<br /><br /><b style="color: rgb(170, 170, 0);">Vertical Lines</b><br /><br />Now let's try vertical lines. 
For these, the first and third numbers (the left-right numbers) need to be the same:<br /><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(50,0,50,200); // A vertical line.</pre> Change the 50s to some other number to move the vertical line left or right.<br /><br /><b style="color: rgb(170, 170, 0);">Starting and Ending</b><br /><br /. ;)<br /><br />Open up BasicPanel and replace all the existing drawLine() statements with the following:<br /><br /><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(50,20,150,20); // Draw a horizontal line from (50,20) to (150,20)<br /></pre><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(150,20,150,150); // Draw a vertical line from (150,20) to (150,150)<br /></pre><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(150,150,20,150); // Draw a horizontal line from (150,150) to (20,150)<br /></pre><pre style="color: rgb(223, 0, 0); font-size: 11pt;">g.drawLine(20,150,50,20); // Draw a vertical line from (20,150) to (50,20)</pre> Notice how the points we use match up (look at the pairs in the comments.) The last point is the same as the first to close the box.<br /><br />Now, if we just wanted to draw boxes Java actually has a way to do this more efficiently. But what we're focusing on here is coordinates and how to use them.<br /><br /.<br /><br />Once you play around for a while, you'll get the hang of it. If you want a larger screen area to play with, enlarge the numbers in the setSize() statement. Try this for a start:<br /><pre style="color: rgb(0, 223, 0); font-size: 11pt;">frame.setSize(480,480);</pre.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill Java Virtual Machine: Adapter Cables for Your Computer's InsidesOne of the problems programmers have is that different computer systems all require different ways to program them. Each computer has its own way of doing graphics and sound, its own way to talk to the keyboard and mouse, and so on. So a program written for one system wouldn't run on another.<br /><br />We.<br /><br /.<br /><br />Back in the old days we could hook up hardware from different manufacturers sometimes using adapter cables. In other cases, we had adapters that not only connected from one connector to another, but also translated the electronic signals.<br /><br /:<br /><br /><center><img src="" alt="The JVM gives your program access to the host's native resources." /></center><br /.<br /><br /.<br /><br /.<br /><br /.)<br /><br />The JVM is such a useful tool that languages other than Java are being written to take advantage of it. <a href="">Jython</a>, for example, is a version of the python language that's written to use the JVM. It lets you write python programs that will run on any system with a JVM on it. <a href="groovy.codehaus.org">Groovy</a> is another such language. There's a <a href="">long list</a> of languages that run on the JVM in addition to Java.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill void Type<center><img src="" alt="'Pierre" /></center><br /><br /><span style="color: rgb(255, 204, 51); font-weight: bold;">void</span> can be the most confusing type. It's an odd word to start with. It's part of the long string of words that appear before the word <a href="">main</a> in the declaration of that method.<br /><br />Void means 'empty' or 'nothing'. 
In the case of a Java method, it means that that method returns nothing:<br /><pre style="color: rgb(0, 221, 221); font-size: 12px;"><br />public void drawTriangle(){ ...<br /></pre><br />This is as opposed to a method with another type, which returns an object of that type:<br /><pre style="color: rgb(0, 221, 221); font-size: 12px;"><br />public int sum(int a, b){<br /> return a + b;<br />}</pre><br />This returns an int object to the caller:<br /><pre style="color: rgb(0, 221, 221); font-size: 12px;"><br />int count1, count2, amt;<br />...<br />amt=sum(count1, count2);</pre><br /.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Mark A. Graybill
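To round off the void discussion above, here is a small, self-contained example (class and method names are my own illustration, not from the original posts) contrasting a void method, which only has a side effect, with an int-returning method whose result can be assigned:

public class VoidDemo {

    // void: performs an action (printing) and returns nothing.
    public static void greet(String name) {
        System.out.println("Hello, " + name + "!");
    }

    // int: computes a value and hands it back to the caller.
    public static int sum(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        greet("Wilmot");            // fine: we only want the side effect
        int amt = sum(40, 2);       // fine: we capture the returned int
        System.out.println("amt = " + amt);
        // int x = greet("Fluffy"); // would not compile: greet returns void
    }
}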
http://beginwithjava.blogspot.com/feeds/posts/default
crawl-002
refinedweb
10,880
65.12
need help with my project! ADT
Aditya Herlambang, Greenhorn
Joined: Feb 01, 2007   Posts: 11
posted Feb 01, 2007 18:30:00

Implement an ArraySet<E> class that implements the Set<E> interface below. It must allow any type of element in the collection. The add method must return true if the element was successfully added or false if that element already equals an element in the set. Because Set<E> extends Iterable<E>, your ArraySet<E> class will need to implement Iterator<E> iterator() and an inner class. Each method should run as specified in Big O notation. Include a unit test that makes assertions on all methods.

/**
 * This interface specifies the methods for a Set ADT. Since Set<E>
 * extends Iterable<E> you must also add the unseen method Iterator<E>
 * iterator() to the collection class.
 */
public interface Set<E> extends Iterable<E> {

    // Return true if there are 0 elements in the bag. O(1)
    public boolean isEmpty();

    // Add element to this Set only if element does not equal
    // another element already in this Set. Return false if
    // element is in this Set, in which case, this Set does not change. O(n)
    public boolean add(E element);

    // Return the number of elements in this set. O(1)
    public int size();

    // Return the array capacity (the array's length). O(1)
    public int capacity();

    // Return true if element is in this set, or false if it is not. O(n)
    public boolean contains(E element);

    // Remove element if it is in this set and return true,
    // otherwise return false. O(n)
    public boolean remove(E element);

    // Return the union of this set and other. O(n)
    public Set<E> union(Set<E> other);

    // Return the intersection of this set and other. O(n)
    public Set<E> intersection(Set<E> other);
}

Your ArraySet<E> class must have a constructor with two integer arguments. The first is the initial size of the 1D array. The second is the grow-by and shrink-by size in the add and remove methods respectively. Here is the beginning of the class.

public class ArraySet<E> implements Set<E> {

    private Object[] data;
    private int n;
    private int delta;

    // Construct an empty bag that can store any type of element. O(1)
    public ArraySet(int initialCapacity, int growShrinkSize) {
        data = new Object[initialCapacity];
        n = 0;
        delta = growShrinkSize;
    }

    // Add other methods here ...
}

When adding new elements, check to ensure there is room in the array data. If there is no unused array index, grow the array by delta elements. When removing existing elements, shrink the array by delta elements whenever there is room to add more than delta elements. In other words, leave no more than delta empty array locations at any time. The array must have from 0 through delta empty indexes at all times. The following assertions must pass.

This is a project that I have to develop on my own, but I don't even know where to start. I am using Eclipse to develop this code. Say I have a project called project 1: how many classes do I have to create inside it? I know one class must exist for the JUnit test; what else do I need? I am predicting that I will need two other classes: one for the Set interface that extends Iterable, and one for the ArraySet class that implements Set<E>. Am I right? I need further guidance to help me with this project. Thanks to those of you who are willing to help me out. Appreciate it.
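Not an attempt at the official assignment solution, but a minimal sketch (assuming the Set<E> interface above is in the same package) of how the backing array, add/contains, and the required iterator inner class could fit together; remove, union, intersection, shrinking, and the JUnit test are deliberately left as stubs:

import java.util.Iterator;

public class ArraySet<E> implements Set<E> {

    private Object[] data;   // backing storage
    private int n;           // number of elements actually stored
    private int delta;       // grow-by / shrink-by amount

    public ArraySet(int initialCapacity, int growShrinkSize) {
        data = new Object[initialCapacity];
        n = 0;
        delta = growShrinkSize;
    }

    public boolean isEmpty() { return n == 0; }

    public int size() { return n; }

    public int capacity() { return data.length; }

    // O(n): scan the used part of the array for an equal element
    public boolean contains(E element) {
        for (int i = 0; i < n; i++) {
            if (data[i].equals(element)) {
                return true;
            }
        }
        return false;
    }

    // O(n): reject duplicates, grow the array by delta when it is full
    public boolean add(E element) {
        if (contains(element)) {
            return false;
        }
        if (n == data.length) {
            Object[] bigger = new Object[data.length + delta];
            System.arraycopy(data, 0, bigger, 0, n);
            data = bigger;
        }
        data[n++] = element;
        return true;
    }

    // Required because Set<E> extends Iterable<E>
    public Iterator<E> iterator() {
        return new ArraySetIterator();
    }

    // Inner class that walks the used portion of the array
    private class ArraySetIterator implements Iterator<E> {
        private int index = 0;

        public boolean hasNext() { return index < n; }

        @SuppressWarnings("unchecked")
        public E next() { return (E) data[index++]; }

        public void remove() { throw new UnsupportedOperationException(); }
    }

    // Left for the assignment: remove (with shrinking), union, intersection.
    public boolean remove(E element) { throw new UnsupportedOperationException("TODO"); }
    public Set<E> union(Set<E> other) { throw new UnsupportedOperationException("TODO"); }
    public Set<E> intersection(Set<E> other) { throw new UnsupportedOperationException("TODO"); }
}

The remaining methods follow the same pattern: scan the used part of the array, and grow or shrink data by delta exactly as the assignment describes.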
http://www.coderanch.com/t/406033/java/java/project-ADT
CC-MAIN-2015-22
refinedweb
650
62.38
In this article, we are going to see how to use dstack.ai for creating ML apps. I have taken the loan prediction dataset. This particular loan prediction app helps predict whether the customer should be given a loan or not based on different parameters. It’s a classification based problem and gives the output as Yes/No(1/0). dstack.ai is a python framework to create AI and ML apps without writing any frontend code. Before we get started I am assuming that you have installed dstack server on your system and it is running. You can start the server by the following command… In this article, you will see how you can deploy your app on streamlit sharing. Before you read the article further, make sure you have an invite to streamlit sharing. To deploy your app on Streamlit Sharing you need to have a GitHub account. Because you will have to upload all your codes there and link that to your streamlit sharing account. Once you have uploaded your code to your GitHub. Go to this website: and click on Sign in with Github. Once you log in and give access to streamlit to your GitHub account you can see the… In this article, we will see how we can use Streamlit with Image Processing Techniques and Object Detection Algorithms. I am assuming that you have Streamlit installed and working on your system. If you do not know how to start with streamlit then you can refer to my previous article where I have explained it in brief. I have also deployed this streamlit app on Streamlit Sharing. You can have a look at it before reading the article further. Here is the link: Image Processing using Streamlit Let’s dive into the code now. First importing the necessary libraries import streamlit… As you embark on a new journey keep an open mind. You never what could be until you try, waste no opportunity. This is the final article I am writing about the Quarantine Days we have all spent together at our homes. If you haven’t read my previous articles, do have a look here & here. So, a lot of things went as planned and I am happy about it. The Holidays or the Quarantine days are now over for me, since I have joined an organisation as a full time employee. This changes so many things. … In this article, we are going to make a complete sign up & login page which will be connected to the AWS RDS and we will use MySql Workbench. Before we begin you must have Lets begin now !!! We will create our database first on RDS. The steps are: Before you submit an article on AWS Pocket, there are a couple of things you must follow. 1.) Medium’s Rules and Terms of Service. 2.) You are responsible for the content that you publish. Make sure it is citated properly and no copyright issues will arise. 3.) If you are using images in your article, then you need to cite the images too. Best suggested use for images will be unsplash. 4.) You have to make sure that you have the right to publish all the content in your article including images, videos, links, etc 5.)If any violation takes place… Like, Share and Subscribe to the channel- HackerShrine Recently, I came across an open source framework — Streamlit which is used to create data apps. So I spent some time on the documentation and did some data visualization on a Food Demand Forecasting Dataset. Streamlit’s open-source app framework is the easiest way for data scientists and machine learning engineers to create beautiful, performant apps in only a few hours! All in pure Python. All for free. 
To get started just type this command: pip install streamlit To check whether it was installed properly run the below command: … Pre-requisites: Understanding of Machine Learning using Python (sklearn) Basics of Django Basics of HTML,CSS In this article you will learn how to deploy Machine Learning (ML) models using Django. We will also discuss the ML Problem Statement which is the HR Analytics. I have taken this problem from Analytics Vidhya. A special thank you to them for providing such amazing problem statements. Now before we start, take a look at this website-HR Analytics. This is what we are going to make. I have deployed the website on Heroku. A simple and powerful way of converting your images to customizable format and change them as per your wish. Lets see how we can do this in Adobe Illustrator. Import the Image and Select it On the top bar you will see a button Image trace- click on the dropdown. You will find multiple options, but for now choose low fidelity option. I like to participate in Hackathons to test my skills and also to learn new skills. I find this way of learning quite effective. During my second year in college, I used to think that doing an online course on any of the platforms would be enough. But soon, I would forget what I learned in that course. That’s when I started exploring and came across many websites that post Hackathon problems. I participated in a few but I couldn’t solve the problems and did not understand where and how to start. … Machine Learning | AWS | Designer
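To make the "pip install streamlit" quick start above concrete, here is a minimal sketch of a Streamlit script; the file name app.py and the particular widgets are illustrative assumptions rather than code taken from any of the articles:

# app.py -- run with: streamlit run app.py
import streamlit as st

st.title("Hello, Streamlit")                  # page title
name = st.text_input("Your name", "")         # simple input widget
uploaded = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"])

if name:
    st.write("Welcome, " + name + "!")
if uploaded is not None:
    st.image(uploaded, caption="Your upload")  # preview the uploaded file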
https://aniket-wattamwar.medium.com/?source=user_following_list-------------------------------------
CC-MAIN-2021-25
refinedweb
902
73.17
Hi all. Trying to move on and create a simple Mutation operator for the string that was created and then edited in the first two functions. Basically a for loop which checks each element in the array Output. Then comes my attempt at random numbers. RandMax was defined as 100 so it is like 100%. Basically starts off with i think, initializing variable Mut1 to random number. If Mut1 > 80 { //Then mutate but now choose what to mutate to. If Mut2 > 50 { then Output[index] = 1 } else { Output[index] = 0 } as you can prb see better in the code i wanted it to check each element then produce a random number to see if it would get changed, the problem is this needs to be done for each loop, each time it loops the variable Mut1 and Mut2 need to be random/ differant then last time so to speak. Reinitialised with new numbers. This is where i think i'am going wrong and was wondering if anyone knew a better way of getting it to work. Thank you for any help.Thank you for any help.Code:#include <iostream> #include <stdio.h> #include <string> #define RAND_MAX 100 using namespace std; using std::string; using std::cout; using std::endl; using std::cin; void GetUserInput(string &Input) { cout << "Please enter Amino Acid Sequence \"H,P\": " << endl; cin >> Input; cin.get(); } void GetSetupString(string theInput, int (&Output)[50]) // To declare an Array as a reference from Main, syntax is differant then other // references, ie: Type (&ArryName)[Size], try finding that in any tutorial :) { int index; for (index = 0; index < theInput.length(); ++index) { switch (theInput[index]) { case 'H': Output[index] = 1; break; case 'P': Output[index] = 0; break; default: break; } cout << Output[index] ; } cin.get(); } int GetMutation (int (&Output)[50]) { int index; for ( index = 0; index < 49; ++index) { int Mut1; Mut1 = rand(); int Mut2; Mut2 = rand(); if (Mut1 > 80) { if ( Mut2 > 50) { Output[index] = 0; } else { Output[index] = 1; } } else { //do nothing } cout << Output[index] ; } cin.get(); } int main () { string Input; int Output[50]; GetUserInput(Input); GetSetupString(Input, Output); GetMutation(Output); }
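A note for anyone reading along: #define RAND_MAX 100 does not change the range of rand() -- RAND_MAX is the library's own constant describing that range -- so rand() > 80 will be true almost every time. One common way to get a fresh 0-99 value on every pass through the loop is to seed once with srand and then take rand() % 100; the sketch below is illustrative only, and its variable names are not taken from the original post:

#include <cstdlib>   // rand, srand
#include <ctime>     // time
#include <iostream>

int main()
{
    std::srand(static_cast<unsigned>(std::time(0)));  // seed once, not inside the loop

    int output[50] = {0};

    for (int index = 0; index < 50; ++index)
    {
        int mut1 = std::rand() % 100;   // fresh value 0-99 each iteration
        int mut2 = std::rand() % 100;   // independent second draw

        if (mut1 > 80)                  // roughly a 19% chance to mutate this element
        {
            output[index] = (mut2 > 50) ? 1 : 0;
        }
        std::cout << output[index];
    }
    std::cout << '\n';
    return 0;
}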
http://cboard.cprogramming.com/cplusplus-programming/80850-mutation-operator.html
CC-MAIN-2015-27
refinedweb
348
61.06
I am really a newbie to C++ and I've been reading the Sams Teach Yourself C++ in 21 Days book. There is one example program in the book demonstrating the use of function prototypes, and I've typed in the code exactly as it is in the book, but I got some errors. Firstly I got a syntax error before numeric constant in line 28, the one with the function definition. Then, I got another error " 'w' undeclared (first use this function) in line 30. Anyone can tell me what is wrong? Code: // Listing 5.1 - demonstrates the use of funtion prototypes #include <iostream> int Area(int length, int width); //function prototype int main() { using std::cout; using std::cin; int lengthOfYard; int widthOfYard; int areaOfYard; cout << "\nHow wide is your yard? "; cin >> widthOfYard; cout << "\nHow long is your yard? "; cin >> lengthOfYard; areaOfYard = Area(lengthOfYard, widthOfYard); cout << "\nYour yard is "; cout << areaOfYard; cout << " square feet\n\n"; return 0; } int Area(int 1, int w) { return 1 * w; }
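For readers hitting the same errors: both messages are consistent with a single typo -- the digit 1 where the letter l (for length) was intended. A digit is not a legal parameter name, hence the "syntax error before numeric constant", and once that declaration fails the compiler also complains about w. A corrected definition (using a longer name to sidestep the l/1 confusion) might look like this:

int Area(int length, int width)
{
    return length * width;   // matches the prototype near the top of the file
}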
https://cboard.cprogramming.com/cplusplus-programming/60858-function-prototype-definition-printable-thread.html
CC-MAIN-2018-05
refinedweb
169
70.43
The. If you are looking at where to purchase an SSL Certificate from, we recommend you start with The SSL Store. Some utilities, tools, or languages will generate the key pair and the CSR all in one command. Using python, this article will demonstrate first how to create a key pair and second how to create a CSR from the private key in the key pair. How to create an asymmetric public private key pair in Python This example of creating a key pair in python will use the RSA algorithm, but other asymmetric algorithms could also be used. - private key material is now stored in the key variable, ready to be passed as a parameter to the sign method in the following section describing how to create a CSR. How to create a CSR in Python This example will demonstrate how to programmatically create a CSR with information about our public key, about who we are, and what domains this requested SSL certificate will be used for. - Import required libraries from the cryptography module, including x509, NameOID, and hashes. - Build the CSR with the CertificateSigningRequestBuilderwith the information detailed in the paragraph above. - Add extensions to the CSR about the domains the certificate will be used for. - Sign the CSR with the private key created in the section above. from cryptography import x509 from cryptography.x509.oid import NameOID from cryptography.hazmat.primitives import hashes # Generate a CSR csr = x509.CertificateSigningRequestBuilder().subject_name(x509.Name([ # Provide various details about who we are."), ])).add_extension( x509.SubjectAlternativeName([ # Describe what sites we want this certificate for. x509.DNSName(u"example.com"), x509.DNSName(u""), ]), critical=False, # Sign the CSR with the private key. ).sign(key, hashes.SHA256()) Note that some CAs may not require that all SANs (Subject Alternative Names) be contained in the CSR, and may simply require that they be entered into a text field in the request form, along with the CSR. Conclusion This article has demonstrated how to use Python to create a CSR. Leave us a comment if you have any questions or would like to see more examples.
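The article refers to private key material already stored in a key variable; a minimal sketch of producing such a key with the same cryptography package (2048-bit RSA is an illustrative choice, not a requirement from the article) could look like this:

from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

# Generate an RSA private key; its public half is embedded in the CSR later.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Optionally write the private key to disk so it can be paired with the
# certificate the CA eventually issues.
with open("example.key", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),
    ))

Writing the key to a file is optional here, but the private key must be kept somewhere safe if the certificate issued from this CSR is ever to be used.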
https://www.misterpki.com/python-csr/
CC-MAIN-2022-21
refinedweb
350
62.17
Add a post-cache-load hook It would be practical if there was a way to detect that data was returned from the cache and then do some extra post-processing. My particular use case is caching SQLAlchemy ORM instances which needs to be merged into the session. Currently I am doing something like this: @region.cache_on_arguments() def _expensive_function(foo, bar): ... def expensive_function(foo, bar): result = _expensive_function(foo, bar) if result not in session: session.merge(result, load=False) return result This could be reworked as a decorator to be more generic and easier to read, but it still has to do this in-session check and possible merge on every invocation instead of only for cache hits. It would be nice to be able to do this: def _merge(obj): session.merge(obj, load=False) @region.cache_on_arguments(on_cache_hit=_merge) def _expensive_function(foo, bar): ... I'm going to have this same issue soon as well, but for the most part your own decorator could just be on the outside: we just lose the "invalidate" and "set_" functions that are placed on the decorator, is that the only issue ? Negligible performance aspect of running merge_into_session when the cache was not hit and merging isn't necessary I suppose. oh so you're saying, when the cache was not hit, the current Session is the one that's used to do the lookup? I see so here you're looking for a hook that only applies to cache-returned values. I'm thinking a service like that might want to be explicit about all three scenarios: wrap_cache_returned, wrap_creator_returned, wrap_returned. Exactly. I tend to have a single scoped session that I use for everything and I want to merge cache hits with that session. i had a similar problem, and came up with a bastardized solution. Storing into the cache: I turn the objects column values into a dict , I ignore the relations. Pulling form the cache: I turn the stashed value into a dict that is overloaded to allow for attributes. I turn the id-based relation columns into lazyloaded functions to pull other objects from the cache It's annoying to configure a separate set of objects, but the "dict with attributes" and lazyloaded functions give me almost the same interface as the SqlAlchemy objects. None of my "read" operations (templates, views, logic) needed to be changed; all my "write" happened with fresh sqlalchemy objects off the db. I dropped a version of this framework on github
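For readers who want the "reworked as a decorator" idea from the opening comment spelled out, a rough sketch could look like the following; region and session are assumed to exist in the caller's module exactly as in the original snippet, and this is not part of dogpile.cache itself:

import functools

def merge_into_session(fn):
    """Wrap a dogpile-cached function and merge any returned ORM
    instance into the current session before handing it back."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        if result is not None and result not in session:
            # merge() returns the in-session copy, so use its return value
            result = session.merge(result, load=False)
        return result
    return wrapper

@merge_into_session
@region.cache_on_arguments()
def expensive_function(foo, bar):
    ...

As the thread points out, this still runs the membership check on every call rather than only on cache hits, which is exactly the overhead the requested post-cache-load hook would avoid.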
https://bitbucket.org/zzzeek/dogpile.cache/issues/28/add-a-post-cache-load-hook
CC-MAIN-2017-30
refinedweb
416
61.16
In this shot, we will discuss how to remove all whitespaces from a string in Java. Here, we will take a string from the user as input and then we will remove all the spaces from it. We have two methods to remove all the whitespaces from the string. In Java, we can use the built-in replaceAll() method to remove all the whitespaces of the given string. In fact, we can replace any character of a string with another using this method. We can also remove whitespaces manually. To remove all the whitespaces, we have to traverse through the string once and append the characters other than whitespaces in a string buffer. Then, we convert that StringBuffer into a string using the built-in Java method toString(). To understand this better let’s look at the below code snippet. Input any text with whitespaces. For example: Educative is cool! import java.util.Scanner; class RemoveWhitespaces { public static void main(String[] args) { Scanner sc= new Scanner((System.in)); System.out.println("Enter the input String:-"); String input= sc.nextLine(); String ans1= input.replaceAll("\\s",""); //replace all whitespaces with blank String ans2=removeSpaces(input); System.out.println("Desired string using method-1:\t"+ans1); System.out.println("Desired string using method-2:\t"+ans2); } static String removeSpaces(String str){ StringBuffer st= new StringBuffer(); for(char c:str.toCharArray()){ if(c!=' ' && c!='\t'){ st.append(c); } } return st.toString(); } } Enter the input below In line 1, we imported the java.util.Scanner class to read input from the user. In line 7, inside the main function, we took the input string from the user by creating a Scanner object and stored that input in a String input. In line 8, we call the replaceAll() function, replace all the whitespaces with blank, and store the resultant string in the ans1 variable. In line 9, we call the removeSpaces() method and pass the input string as a parameter to it. This method will return the string without whitespaces and store the resultant string in a variable ans2. In lines 10 and 11, we print the output that we get by using method-1 and method-2 respectively to compare them. In lines 13 to 21, we defined the removeSpaces() function. Inside the function, we initialized a variable st of StringBuffer type and we run a for loop for the given string. If any character other than white space is found then append it to st. After ending the loop, we convert StringBuffer into a string using the toString() method and return it. At the input/output section we can see we get the same output for both the methods. In these ways, we can remove all whitespaces from a string in Java. RELATED TAGS CONTRIBUTOR View all Courses
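A small aside that is not part of the original shot: replaceAll takes a regular expression, so \\s+ removes runs of whitespace (spaces, tabs, newlines) in one pass, while the plain replace method only substitutes literal characters:

String input = "  Educative   is \t cool! ";
String noWhitespace = input.replaceAll("\\s+", "");  // "Educativeiscool!"
String spacesOnly = input.replace(" ", "");           // tabs and newlines survive this one
System.out.println(noWhitespace);
System.out.println(spacesOnly);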
https://www.educative.io/answers/how-to-remove-whitespaces-from-a-string-in-java
CC-MAIN-2022-33
refinedweb
464
66.54
I have been trying to get a low native plugin working on iOS but I never seem to get the UnitySetGraphicsDevice or UnityRenderEvent callbacks. I am using modified example code from here and am receiving the SetTimeFromUnity native call but not the two rendering related callbacks. I also confirmed that GL.IssuePluginEvent is being called in my unity project but UnityRenderEvent is never called in response. Has anyone had any luck getting the low-level interface working on iOS? Is there some sort of trick? The sample with the documentation implies it only works on Windows/Mac.

EDIT: Some clarification. I actually have native calls working fine. I was wanting info specifically on the low level rendering callbacks you can receive that were added in 3.5 (UnitySetGraphicsDevice or UnityRenderEvent callbacks) that are described in the link in my post. I need to make some native OpenGL draw calls outside of Unity and going forward using those callbacks is the only way to assure I get the callback on the rendering thread. Right now I can make the OpenGL draw calls on the same thread that I make native calls on but when multi-threading is added to iOS Unity in the future (as it currently works on Windows) I will need those low level graphics calls to work to ensure I do my drawing on the correct thread.

Did you solve this problem? I'm trying to build an iOS native plugin that will do some OpenGL rendering and I am concerned about not being on the render thread as well. I can't seem to find any complete examples for iOS.

Answer by Siney · Jul 30, 2016 at 10:04 PM

From Unity 4.5, you have to register the event. "iOS: Added support for render events (GL.IssuePluginEvent). Please note that you need to manually register them, as iOS do not support dynamic libraries. Check trampoline for UnityRegisterRenderingPlugin function."

For example:

#import "UnityAppController.h"

extern "C" void UnitySetGraphicsDevice(void* device, int deviceType, int eventType);
extern "C" void UnityRenderEvent(int marker);

@interface MyAppController : UnityAppController
{
}
- (void)shouldAttachRenderDelegate;
@end

@implementation MyAppController

- (void)shouldAttachRenderDelegate
{
    UnityRegisterRenderingPlugin(&UnitySetGraphicsDevice, &UnityRenderEvent);
}

@end

So you just have to do some editing of UnityAppController.mm after you build the project. If you do "Append" it will leave your edits.
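On the managed side, the render event still has to be issued from C#. A common pattern from that era of Unity (a hedged sketch, not code from this thread, using the single-integer overload of GL.IssuePluginEvent that existed back then) is to fire it once per frame after rendering finishes:

using System.Collections;
using UnityEngine;

public class PluginEventDriver : MonoBehaviour
{
    private const int kRenderEventId = 1;   // arbitrary marker handed to the native UnityRenderEvent

    IEnumerator Start()
    {
        while (true)
        {
            // Wait until the frame has finished rendering, then ask Unity to
            // invoke the registered native callback on the render thread.
            yield return new WaitForEndOfFrame();
            GL.IssuePluginEvent(kRenderEventId);
        }
    }
}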
https://answers.unity.com/questions/261124/low-level-native-plugin-on-ios.html
CC-MAIN-2020-34
refinedweb
432
57.06
Introduction to Unity Scripting In order to give life to the assets in the project, we need scripting. It’s the most fundamental part of the application you want to build using Unity Scripting. Scripts are used to write the logic of how the Game Objects should behave in the application. It can be used to create different AR, VR systems, Graphics effect, animation control, physics control, custom AI system, etc. So overall speaking, scripting is the heart and soul of the application. How to Use Unity Scripting? Unity 3D uses Monobehaviour to inherit from in its scripts. It supports C# natively. It is the most widely used scripting language in unity. However, we can write scripts in many other .NET languages if they can compile compatible DLL. So, for now, we will stick to the C# way of scripting. Steps of Creating Unity Scripting Let’s learn the steps of creating and using the scripts. 1. Creating the scripts - Navigate to the Project Right-click Create > C# Script. - Another way is by selecting Assets > Create > C# Scriptfrom the main menu. - A new script will be created, and it will prompt us to enter the new name. - Give a proper name to the script and hit enter. Our first script is created successfully. 2. Choosing the Script Editor - Unity uses Visual Studio as the default script editor. - To change the script editor, go to Edit> Preferences > External Tools. - Browse for the file location of the Script Editor. 3. Script Anatomy - Different features in Unity can control the GameObject behaviors called components. - Other than these components, we can implement this by using scripts if we need any specific feature. - When we double click on the script, the script editor will open. - The structure of the script will look like below. using System.Collections; using System.Collections.Generic; using UnityEngine; public class MyFirstScript: MonoBehaviour { // Start is called before the first frame update void Start() { } // Update is called once per frame void Update() { } } - All the scripts we write inside unity will be derived from the built-in class called Monobehaviour. - It’s like a blueprint of the new custom component that can change the Game Object behavior. - Each time it creates a new instance of the script object, you attach it as a component to the Game Object. - The class name of the script will be picked from the file name we gave to create the script. - To attach the script to the Game Object, make sure that the name of the class and the name of the file are the same. Or else it will give us a compilation error, and we won’t be able to attach this script to Game Object. - Here we can see that there are two functions that are created by default. They are Start and Update. - The start function is used to initialize the things inside the script before we use them. - The start function is called at the beginning of the Gameplay if the script is enabled in the inspector window of the Game Object. It is called only once per execution of the script. - The update function will be called every frame. Basically, this is needed if something to be handled in each and every frame. It can be movement, taking user input, triggering actions, etc. - We can write our custom functions, similar to the way Start and Update functions are written. 4. Attaching the script to the Game Object - Select the Game Object to which the script needs to be attached. - The first way is to directly drag and drop the script to the Inspector window of the Game Object. Here we will attach it to the Main Camera. 
- The second way is to click on the Add Component, start typing the script name, and select the script. - It will be attached to the Game Object, as we can see it in the inspector window. 5. What are these variables, functions, and classes? - Variables: Variables are like containers to hold value or references of the objects. According to the convention, the variable name will start with the lower case letter. - Functions: Functions are the snippet of code that uses the variables and the additional logic to change the behavior of the objects. They can modify these variables as well. Functions start with the Upper case letter. It’s a good practice to organize our code inside a function to increase readability. - Classes: Classes are the collection of variables and functions to create a template that defines the object property. The class name is taken from the file name we give at the time of creating the script. Usually, it starts from the Upper case letter. 6. Giving references to an inspector - The variables which are declared as public in the class will be exposed to the inspector, and we can assign any references or value to them. - Consider the below code: public class MyFirstScript : MonoBehaviour { public GameObject cameraObject; // Start is called before the first frame update void Start() { } } - Here camera object is the public variable that needs a reference. Go to the unity editor. Select the object, and if we look at the script component, we can see an empty space (None (Game Object)) like below for assigning select the Game Object from the hierarchy and drag and drop in the Camera Object box. 7. Access the components - There will be scenarios wherein we need to access different components attached to the Game Object. - Let’s take an example of accessing the Camera component of the Game Object. - Get Component; we will get the type Camera. void Start () { Camera cameraObject = GetComponent<Camera>(); Debug.Log("Camera Object is: " + cameraObj); } 8. Modify the component values - Modifying the component values is the next step to the above example. - The below code explains how we can modify the camera Field of View. - After we hit the play button in Editor, we can see that the camera Field of View will change from default value 60 to 90 in the inspector menu of the camera. public GameObject cameraObject; private int cameraFOV = 90; void Start () { Camera cameraObj = GetComponent<Camera>(); cameraObj.fieldOfView = cameraFOV; } Conclusion In this article, we have learned about Unity scripting, the need for the scripting, step by step to create the script and different script components, access the components through the script, and modify them.
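As a recap of sections 5 through 8, here is a small combined example (script, field, and function names are illustrative, not from the article) that exposes a public reference to the Inspector, fetches a component in Start, and changes one of its values through a custom function:

using UnityEngine;

public class CameraTuner : MonoBehaviour
{
    public GameObject cameraObject;   // assign in the Inspector by drag-and-drop
    private int cameraFOV = 90;

    void Start()
    {
        // Access the Camera component attached to the referenced Game Object.
        Camera cam = cameraObject.GetComponent<Camera>();
        SetFieldOfView(cam, cameraFOV);
    }

    // A custom function, written the same way as Start and Update.
    void SetFieldOfView(Camera cam, int fov)
    {
        cam.fieldOfView = fov;
        Debug.Log("Field of view is now: " + cam.fieldOfView);
    }
}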
https://www.educba.com/unity-scripting/?source=leftnav
CC-MAIN-2021-43
refinedweb
1,058
64.81
Save 37% off The Art of Data Usability with code fccbjorgvinsson at manning.com. Find the first part of this article here. Let’s start with a question we all ask ourselves at some point in our lives (if you haven’t already, you’re about to do it now); why does chocolate melt in your mouth but not in your hands? Maybe you know the answer thanks to your background. Maybe not, but you may have an idea why. Let’s talk through the process you’d follow to answer the chocolate question. You start with your idea, which may or may not be the answer to the question. Let’s say that your idea is that the melting point of chocolate lies somewhere between the temperature on the surface of hands and the temperature inside mouths. That sounds plausible. Next, you do some experiments, which may measure the melting points of various brands of chocolate and the temperatures of the surface of hands and inside of mouths. After collecting all temperatures and melting points you analyze the results and find out that the melting point of processed chocolate is around 34°C, which is slightly lower than the temperature inside a mouth (35-37°C) and just higher than the average temperature of the skin surface of hands at room temperature (32-33°C). Eureka! Your idea seems to have been correct, but chocolate still melts a little bit in your hands, which poses a new question. You can then test another idea (that surface temperature is raised above the melting point of chocolate by closing your hands or applying pressure). From this you get ideas like how to sugarcoat chocolate to raise the melting point and create a better-quality chocolate. This exact process of answering questions is quite old—most often referred to as the scientific method—and is the basis of a lot of academic research. You start with a question (the research question). You put forward an idea (a hypothesis). Next, you test your idea (with experiments). You analyze the results of the experiments (evaluation). Last, you let everyone know how it turned out (publication). Note: I added the last step, because although it’s not considered to be a part of the scientific method, it’s how results are established within the academic community. The scientific method is also the foundation of quality management, and there are complete quality management systems that revolve around how to set up and manage quality, based on an approach similar to the scientific method. All of these quality management systems, and quality management in general, are based on a management method usually known by its abbreviation: PDCA. The whole idea, which resembles the steps in the scientific method, is to repeat the following steps (also shown in figure 1) to reach and maintain quality: Plan your changes (hypothesize about what will improve quality) Do the changes (implement the hypothesis to experiment) Check your outcome (evaluate the results) Act on it (publish it to make it known) Figure 1. The Plan – Do – Check – Act quality cycle. The PDCA has been called a great many things over the years, but I just call it the quality cycle. To me, there is only one quality cycle, with some variations. It usually involves the steps plan, do, check, and act, but it can also be a variation of those steps. To better understand how to work with the quality cycle, we can walk through one iteration of the cycle for a simple project. You’ll follow a similar iterative process in some form whenever you work on quality. 
It doesn't matter if you're working on data quality or any other form of quality: this is the foundation. Imagine you're managing a dataset for UFO sightings around the world; you've been trying to improve the dataset and are just starting a new iteration of the quality cycle. We need something to work with, so let's start by generating a dataset we can use. We can create an example dataset using Python. Are you all set up to create a Python project? Great, let's continue. First, we create a directory to work from. We create it in the root of our home directory and call it art_of_data_usability. Type this into a terminal (can be bash or PowerShell or whatever you fancy):

$ cd                              ❶
$ mkdir art_of_data_usability     ❷
$ cd art_of_data_usability        ❸

❶ Typing cd and nothing else makes sure we are in our home directory (on bash).
❷ This command creates a directory called art_of_data_usability.
❸ Then we traverse into the newly created directory.

Next, we create a virtual environment. We don't actually need it to generate the dataset, but we're going to need it later in our example. Type this into the terminal:

$ python -m venv venv             ❶
$ source venv/bin/activate        ❷

❶ Creates a new virtual environment in the current directory. If your Python 3 executable is installed as python3, use that instead of python.
❷ Activates the virtual environment (this is for a bash shell; in PowerShell on Microsoft Windows you would run .\venv\Scripts\Activate.ps1, but before that you may have to allow running scripts in PowerShell).

Now we should have our environment ready and we can move on to create a small Python script to generate a csv file with a UFO sighting every day from 1956 through 2017 at a fixed location, always reported by the same made-up newslet (news outlet). The first few lines of the resulting csv file (called ufos.csv) will be these:

date,location,reporter
1956-01-01,Area-52,NEVADAta
1956-01-02,Area-52,NEVADAta
1956-01-03,Area-52,NEVADAta
1956-01-04,Area-52,NEVADAta

Open a file called generate_example_ufo_dataset.py and write the code in listing 1 into it, save, and close. This is our data-generating Python script.

Listing 1. generate_example_ufo_dataset.py

import csv                                         ❶
from datetime import date, timedelta

with open('ufos.csv', 'w') as ufos:                ❷
    ufo_csvfile = csv.writer(ufos)                 ❸
    headers = ['date', 'location', 'reporter']     ❹
    ufo_csvfile.writerow(headers)
    start_date = date(1956,1,1)                    ❺
    end_date = date(2018,1,1)
    days = (end_date-start_date).days
    location = 'Area-52'                           ❻
    reporter = 'NEVADAta'
    for day in range(days):                        ❼
        sighting_date = start_date + timedelta(days=day)            ❽
        ufo_csvfile.writerow([sighting_date, location, reporter])   ❾

❶ We import the libraries we need: csv and two components from datetime.
❷ Open a file (hardcoded file name: ufos.csv) for writing.
❸ Create a csv writer object for the file.
❹ We create three headers and write the header row to the file.
❺ We set the start and end dates, then we compute the amount of days in between.
❻ We hardcode the same location and reporting newslet for our file.
❼ Loop over the amount of days; this will create the sequence: 0,1,2,3,4,…
❽ Set the date for the sighting. If we're in the first iteration of our for loop (where day is 0), this will be equal to the start_date. If we're in the next iteration (where day is 1) this will be the day after the start_date, and so on.
❾ Write a row to our csv file with the sighting day, and the hardcoded location and reporting newslet.

Now you can run the data-generating script by running this command in the terminal:

(venv) $ python generate_example_ufo_dataset.py

This creates the desired csv file (called ufos.csv) in our working directory (which we created earlier as art_of_data_usability).
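If you want to sanity-check the generated file before moving on, a few extra lines of Python will do it. This is just an optional sketch, not one of the book's listings; it only assumes the ufos.csv file produced above:

import csv

with open('ufos.csv') as ufos:
    rows = list(csv.reader(ufos))

print(rows[0])        # the header row: ['date', 'location', 'reporter']
print(rows[1])        # the first sighting: ['1956-01-01', 'Area-52', 'NEVADAta']
print(len(rows) - 1)  # the number of sighting rows, one per day from 1956 through 2017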
Now that we have something to work with in our example, let’s move on to the quality cycle. Planning and designing metrics This phase of the quality cycle starts with an idea—an idea for an improvement, a new method, or something else. With that idea, you start planning and designing your change. You do this in four steps, those shown in figure 2. Figure 2. You plan the quality improvement and how you will measure it The first planning step is to define the objective of your idea. You base your work on a single quality attribute you want to improve; for example, the size of a dataset. There are, of course, many quality attributes to work on, but, for each iteration, you choose only one. It’s best to prioritize the quality attributes and pick the one at the top of the list. For our UFO dataset, there can be many different quality attributes you’d like to improve on (standardization of locations, completeness in the UFO sightings reports, the ability to aggregate), but for this example let’s say the highest-priority attribute now comes from your system administrators. You’ve collected so many sightings that the dataset file size is too large. Your system administrators have assigned you a quota of 640KB because they’re firm believers in the old (and incorrectly appropriated) Bill Gates quote that “640K ought to be enough for anybody.” Next, you propose a small change and write down the predicted outcome. It’s important to focus on a small change. There can be multiple viable changes but don’t try to do too many things in one go; if you want to reduce the size of a dataset and you think it might get smaller with a new data format, splitting the sightings up into years, and also by compressing it, pick one but not all of them. Tip: If you don’t know what contributed to the improvement, you risk institutionalizing unnecessary behavior. So, something might actually reduce quality, even if in your books it’s recorded as something that improves quality. The new data format may be worse than the old one but you don’t see that because, thanks to compression, the dataset is smaller. Or the compression may not become as good when you split the dataset into multiple files. For our example, let’s say we propose compression, and we think we might be able to reduce the file size sufficiently because we get so many reports from the same place that the compression can take advantage of that. You don’t implement the change at this stage, you just write down the change and its expected outcome beforehand. Doing this allows you to better focus your efforts to know what you’re doing and why. It also means you’ll spend less time in the analysis step, which could turn into a treasure hunt if you don’t plan properly, and treasure hunts are rarely productive. Writing down the expected outcome also allows you to design your metrics before you make the change, and that’s your next step. Designing your metrics before doing anything means you won’t end up in a position where you’ve made a change, but, when asked whether that change helped, can only answer, “I can’t say. It turns out we can’t really measure it.” It’s better to know, before you start, how you’ll compare the actual outcome to the expected outcome. Our goal is to get the file size to less than 640KB (but we hope to exceed those expectations). Another goal could be to aim for a specific size reduction where the metric would be to compare the original size against the outcome. This all depends on what we’re trying to achieve. 
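If it helps to make these planning steps concrete, here is one possible way to jot the objective, the proposed change, the expected outcome, and the metric down in code before touching the dataset. The field names and the helper below are purely illustrative; the quality cycle doesn't prescribe them:

quality_plan = {
    'attribute': 'dataset size',
    'objective': 'keep the UFO dataset under the 640KB quota',
    'proposed_change': 'compress the dataset with gzip (DEFLATE)',
    'expected_outcome': 'the compressed file is smaller than 640KB',
    'metric': 'file size in kilobytes, measured before and after the change',
}

def meets_objective(size_in_kilobytes, limit_in_kilobytes=640):
    # The comparison we will make in the check step of the cycle.
    return size_in_kilobytes < limit_in_kilobytes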
The last step is to write down the plan. It may seem like you don’t need it for our example, after all, you’re just going to compress the file, but this is good practice. Real-world projects will not be as simple and you need to document what you’re doing. If you do it correctly, there’s also a side-benefit I’ll point out to you, after I describe what should be in the plan, below: Who is involved in the iteration? What will they be doing? Where will they do it? When can you expect an outcome? These questions are pretty straightforward. It may seem weird to ask where the involved people will do what they’ll be doing, but sometimes a quality improvement isn’t performed at a desk, it may be somewhere in the field. If your quality attribute is understandability and the change you propose is giving a presentation to the target group, you won’t do that at your desk: you’ll have to think about a lecture hall, a meeting room, or some other place where you’ll give the presentation. The last question, about when you can expect an outcome, is important to think about both in terms of implementation and measurements. Gathering measurements for analysis can take a lot longer than the actual implementation. If you were changing a work process at a bakery, you can’t expect a great outcome in a single day; the bakers will have to get used to the new process. You let it run for maybe a month and then you can see if things improved. You need to know what period, the reference period, to compare your measurements against. That’s your plan—it doesn’t have to be complicated. If you’ve restricted yourself to one quality attribute and a small change, it should fit on a single piece of paper (something similar to the example in table 1). Table 1. Example of a simple quality plan Tip: Here’s the side-benefit I promised you. I like to use these as cards on a Kanban board (for more on Kanban see Kanban in Action). A Kanban board consists of swim lanes representing different stages like Planned, Doing, and Done. You put each card into the swim lanes to visually show the progress, then you move them around. When you start work on a planned change you move it to the doing stage. This allows you to quickly gauge the progress. If you’re doing this physically (with real cards you write on and stick to the wall), you can use both sides of the card; The front side would contain more important information like subject, attribute, people, and reference period, but the back side would have more details, like proposed change, expected outcome, metric, and locations. If you like this approach you can think about the Kanban board as the four different steps in the quality cycle and it’ll provide you with a good overview of your quality work: Planning Doing Analysis Done (baseline established) Whatever you use, you should create a simple template you can use for all your quality adventures, just to speed up your planning phase. Implementing controls and changes After setting up your plan, you just go ahead and do it. Doing the change involves one step, but at the same time you also document problems and begin analysis, as shown in figure 3. Figure 3. Carry out the plan while documenting and analyzing problems and behavior Even though implementing the change (do) is a one step process, carrying out the plan, it’s still not simple. What you have to do in this step depends on which iteration you’re on (you will probably do many iterations for each quality attribute). 
If this is the first iteration, you have to create your baseline, which describes the current level of the quality attribute. Before you implement any compression algorithms, you need to know the size of the uncompressed dataset. To do that, you design a test, normally referred to as a quality control. Even though I referred to a quality control as a test you design, it's not a simple yes-or-no test; rather, it's a way to gauge the quality level for comparisons. It leads to a yes or no answer when you ask yourself if you have the desired level of quality. The quality control for the size of a dataset is not "does the dataset have size X?", it's "what is the size of the dataset?" The latter allows you to examine the size after implementing a change. For our case, there are many ways to measure file size. We could just use what's available on the operating system (right click on the file, choose properties, and look at the size) or we could write a small program that does this for us. Let's go the hard way and write a small Python script to gauge the file size. We can use the same virtual environment we created when we generated the example dataset. If we don't have it activated, we can activate it by running the following command:

$ source venv/bin/activate

For this script, we'll use the fantastic Click library to quickly turn the code into a command-line script that can take a file name as an argument. Let's install Click into the virtual environment using Pip. At the command prompt in a terminal, run the following command:

(venv) $ pip install click

After running that install you should see a few lines, and one of them should say how successful your installation was, something along these lines:

Successfully installed click-6.7

Then we can write a small script to gauge file size. Create a file named get_ufo_dataset_size.py and add the code from listing 2 to it.

Listing 2. get_ufo_dataset_size.py

import os.path                                                    ❶
import click
import math

@click.command()                                                  ❷
@click.argument('filename')                                       ❸
def get_ufo_filesize(filename):
    size_in_bytes = os.path.getsize(filename)                     ❹
    size_in_kilobytes = math.ceil(size_in_bytes / 1024)           ❺
    print('Size is: {size}KB'.format(size=size_in_kilobytes))     ❻

if __name__ == '__main__':                                        ❼
    get_ufo_filesize()

❶ We import the three libraries we need: Click and two standard libraries.
❷ We create a command-line interface using Click.
❸ Our command-line interface should take one argument, called filename.
❹ We use os.path.getsize to get the file size in bytes.
❺ Because humans rarely talk about file sizes in bytes (except for small files) we convert it to kilobytes. Why do we divide it by 1024? Because the kilo in computers is 1024 (2 to the power of 10). We also round it up to get a nice number.
❻ Print out the file size as human readable text.
❼ Then we invoke our Click command by calling it when we run the Python file (this business with __name__ and '__main__' is a Python convention).

To run this script, we can just execute it and pass in the name of the file we want to know the size of (which is the Click filename argument). In our case that would be the ufos.csv file. Run the following command at the prompt:

(venv) $ python get_ufo_dataset_size.py ufos.csv

This should output the following text:

Size is: 642KB

This is our baseline: 642KB (actually, the baseline is the uncompressed data, but the size is the metric we're interested in). The file is clearly too large. The system administrators don't want it to surpass 640KB. Obviously, quality is lacking for the size attribute.
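Because later iterations will want to compare against this number, it can pay off to record every measurement somewhere instead of keeping it in your head. Here is a minimal, hypothetical way to do that; the measurements.csv file name is my own choice, not part of the example project:

import csv
import datetime

def record_measurement(attribute, value, log_path='measurements.csv'):
    # Append one line per measurement so each iteration of the cycle
    # can be compared with the earlier ones.
    with open(log_path, 'a', newline='') as log:
        csv.writer(log).writerow([datetime.date.today().isoformat(), attribute, value])

record_measurement('dataset size in KB', 642)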
The system administrators won't be happy. Let's implement the proposed change to see if we can improve the quality. Again we turn to Python to compress the data. We don't have to disturb work processes because we can just copy the contents of the file into a compressed file. Activate the virtual environment, if it isn't already activated, and create the file in listing 3 to compress the dataset.

Listing 3. gzip_ufo_dataset.py

import gzip                                             ❶
import shutil

with open('ufos.csv', 'rb') as ufos_csvfile:            ❷
    with gzip.open('ufos.csv.gz', 'wb') as ufos_gzipped:    ❸
        shutil.copyfileobj(ufos_csvfile, ufos_gzipped)      ❹

❶ Import the libraries we need. One of them, gzip, is the compression library we use. This library complies with our DEFLATE algorithm requirement, so this is all according to plan.
❷ Open the original csv file for reading. We're not using Click for this but instead hardcoding the file name for simplicity. If you want practice with Click this is a good script to convert.
❸ Use gzip to open a compressed version for writing. This will automatically compress everything we write to the file. Again we hardcode the file name for simplicity.
❹ Use the shutil library to copy the contents of the original file to the compressed version of the file.

To run this and generate a file called ufos.csv.gz (the compressed version of ufos.csv), we only have to run the following command at the prompt:

(venv) $ python gzip_ufo_dataset.py

That's what we will do in the do step of our cycle. Irrespective of how you implement the quality controls or the changes, you should always record all problems and unexpected incidents that occur during the implementation and reference periods. It makes the upcoming analysis of how well the implementation went much easier. In our example, we probably wouldn't run into a problem because the reference period is immediate, meaning we'll just do the change and analyze the output immediately. Also, we compressed the data into a different file to delay work process problems. This is more likely to be something you document during a longer reference period. There might still be incidents or concerns raised by something as simple as the compression. For example, we're using gzip, which works on only a single file, while regular zip archives can contain many files and use the same DEFLATE algorithm, which makes updating a file within a zip archive more complex. This might be something you flag if you started out with a zip archive instead of the gzip format. You should document the problems and behaviors in real time and you should start analysis of those incidents when they come up. It's best to gather the evidence during an event rather than sometime later. This is good to do for the metrics you have designed and implemented, and it's a very basic quality control that catches everything you didn't think about. Even with good intentions, you can't plan and design your metrics for all situations. If you make a habit of documenting the problems and incidents, you'll at least know something you can focus on improving in future iterations. By starting analysis of those incidents as soon as they happen, you're more likely to be able to collect the data you need while the problem happens instead of being stuck with a problem and no means of analyzing it later because the necessary data was never recorded.
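Since we copied the data into a new compressed file rather than touching the original, one cheap incident-catcher is to confirm that the copy still holds the same rows. This is an optional sketch, not one of the book's listings, and it only assumes the two files created above:

import csv
import gzip

with open('ufos.csv', newline='') as original:
    original_rows = sum(1 for _ in csv.reader(original))

with gzip.open('ufos.csv.gz', 'rt', newline='') as compressed:
    compressed_rows = sum(1 for _ in csv.reader(compressed))

if original_rows != compressed_rows:
    print('Row counts differ - document this as an incident!')
else:
    print('Compressed copy holds the same {} rows as the original'.format(original_rows))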
When you begin, this phase may take some time because you’re creating the quality controls, but in future iterations, once you’ve put all the metrics into play, you won’t have to spend time on this again. You actually should try to avoid changing quality controls as much as possible because that affects the comparability between the results of independent iterations. If you need to change a quality control, you must re-approach it as the first iteration and begin again by finding your baseline. Analyzing the implementation Armed with the expected outcomes from your plan and the actual outcome after making the changes, you should have an easy time checking the measurements and analyzing them against the outcome (figure 4). Figure 4. Analyze, compare, and summarize when checking the outcome By this time, you should have bulk of the work done: you’ve created a plan, you’ve carried out that plan, and you have your baseline. Now it’s time to check whether you’ve improved the quality. First, you must complete the analysis that you should have started during the implementation. It’s even possible that you’ve already finished your analysis, but sometimes that’s not possible until after the reference period (the time interval between the process start and when you expect the outcome). If your change was a new work process, you analyze individual incidents while the staff works according to the new process, but you should still continue to monitor for the remainder of the complete reference period. Afterward, you’ll be able to analyze all of the incidents as a whole. The analysis of individual incidents might reveal problems with wording of the work process steps, but when you analyze the incidents as a whole, you notice that the incidents rose only in the first few weeks while the staff was getting accustomed to the new process. Once they settled in, the new work process actually resulted in happier staff, more productivity, or whatever quality you were after. In our example, we can just run our little program to check the file size of the compressed dataset file with the following command (remember to activate your virtual environment): (venv) $ python get_ufo_dataset_size.py ufos.csv.gz The output, if you run it on the dataset immediately after compressing it, should be the following: Size is: 55KB This is not the only analysis you need to do. You’ll also have to check how fast the dataset grows. This is about exceeding the expectations. Even if you’re able to reduce the size, that won’t be good enough if we surpass the limit again in a couple of days. If we have more than 60 years of daily records stored in 55KB, we can expect that we won’t see much change (in kilobytes) by adding one more record. Let’s try to add 100 years, or a little bit more than 36,500 records, to see how that affects our compressed dataset. To do that, we need to modify our initial data-generating code to work with compressed files. Luckily, that’s a pretty simple change. Create a file called append_to_gzipped_ufo_dataset.py and add the code in listing 4. Listing 4. 
append_to_gzipped_ufo_dataset.py

import csv                                       ❶
import gzip
from datetime import date, timedelta

with gzip.open('ufos.csv.gz', 'at') as ufos:     ❷
    ufo_csvfile = csv.writer(ufos)               ❸
    start_date = date(2018,1,1)                  ❹
    end_date = date(2118,1,1)
    days = (end_date-start_date).days            ❺
    location = 'Area-52'                         ❻
    reporter = 'NEVADAta'
    for day in range(days):                      ❼
        sighting_date = start_date + timedelta(days=day)            ❽
        ufo_csvfile.writerow([sighting_date, location, reporter])   ❾

❶ We import the libraries we need: csv, gzip, and two components of datetime.
❷ We open the gzipped file with gzip, but we don't open it for writing, we open it for appending (the 'at' bit in the open call) because we want to add to the file, not overwrite it. This line is actually the only thing that's really different from our original data-generating example.
❸ We create a csv writer object out of our file so we can write our csv rows.
❹ We hardcode the start and end dates to cover 100 years.
❺ We'll loop over all of the days, so we need to count how many days there are in this period.
❻ We use the same location and reporter (hardcoded) because we want this to be similar to the last 60 years.
❼ Here we loop over the number of days. In essence this counts from zero up to the value (in our case days): 0,1,2,3,4,…
❽ Then to get each date, we take the start date and add the days counter to it (start_date plus 0, start_date plus 1, start_date plus 2, …).
❾ For that day we write (append) a row to our csv file.

To run this script and append data to our compressed csv file, run the following command:

(venv) $ python append_to_gzipped_ufo_dataset.py

Now let's see how large the file has become; run the following command:

(venv) $ python get_ufo_dataset_size.py ufos.csv.gz

The output should be as follows:

Size is: 143KB

That's an increase of 88KB, or an annual increase of around 0.88KB if the frequency and contents don't change. After you finish the analysis, you compare it with your predictions of the outcome and draw conclusions. Be honest and objective in your analysis. It's better to prove that the change didn't improve the status quo than to fake the results to show that you were right. Be proud of making mistakes. You learn a lot more from an honest analysis instead of building up a web of lies to maintain the façade that you're perfect. Remember this: you make mistakes to build up a career, but you hide your mistakes for a short-lived hobby. When planning the compression to reduce size, maybe you wrote down an expected outcome that stated that the file would be less than 640KB, but after implementing the compression, the dataset is smaller but still more than 640KB. That's still a valid comparison; it tells you that you've made progress but you need to look at other options, like different compression algorithms or a different data format. Also, if the compressed size had been 639KB you'd also have to point that out in the same way. You met the expected outcome, but you can obviously expect it to exceed 640KB soon. After the comparison, you should always write down the results and summarize your key takeaways. Is there a better compression algorithm? Why did the algorithm you used not meet your predictions? You have to summarize your findings because quality is a team effort. You might have learned from your mistakes or successes, but you want the whole team to learn from them. You might not be the one who implements a different compression algorithm (if the one you tried didn't work), so it's good to write down what made your choice not so good. Write it all down and make the reasons known. If not, the whole team is doomed to repeat your experiments and failures in the future, until someone else writes them down. Our example shows us that we greatly reduced the file size.
We went from 642KB to 55KB (less than 9% of the original size). Given a 0.88KB annual increase, file size won’t get back up to 640KB until after about 664 years. This is because there isn’t that much change per row in the dataset (only the date changes). If our dataset had been more random, we probably wouldn’t have achieved this amount of compression. So, for our example, we can definitely recommend compression of the dataset, which brings us to the last step in the quality cycle. Establishing a new baseline After analyzing the changes, you act if the output from making the changes is promising, and, if you do, your improvements become your new standard for going forward. This is a relatively simple last step in the PDCA cycle, as you can see in figure 5. Figure 5. Decide how you’re going to act and think about the next iteration Originally, this step of the PDCA cycle was excluded (when Shewhart proposed the cycle). Still, it’s important to include because it’s the culmination of your work. Everything you’ve worked on was done to allow you to make the decision of whether to adopt the change or not. If your analysis shows that the change actually contributed to higher quality, you can establish the change as your baseline for the future. The change is no longer an idea, it’s what you will use from now on (which is why honesty in your analysis is so important). If the change results in lower quality, it’s pretty obvious that it shouldn’t be adopted. In those cases, you don’t adopt it, which is still an action and a perfectly acceptable result. It’s still good to know that the change didn’t improve the quality (if you document it). Status quo or no change is slightly trickier to decide. If you’ve made a change and it has no effect, should you keep it or not? In those, cases you’ll just have to make an estimate about whether it will cost you more to adopt the change or to reverse it. It’s highly situational, but can also be very difficult to say. Most often, this comes down to cost or time. If it’s too costly or time-consuming to reject the change, keep it. For example, if you’ve installed a cheap motion sensor and it turns out that it never is triggered, it may cost you more to get an electrician to dismount it than the refund you’d get. Sometimes though, it’s worth spending the time and money to reject the change. If a software change doesn’t do anything, it may end up costing you more, in the long run, to keep the code change than it will to remove it, because more code is more likely to have bugs or confuse future programmers. With our UFO sighting dataset, we can safely say that we improved the quality of the dataset. We’re going to satisfy the system administrators for a good chunk of the next millennia. We can safely accept this and establish the baseline that we will use compression for the dataset. At this point, you’d swap out the datasets in production and start working with compressed datasets. In many cases you would have already implemented this into the work processes, but we were able to leverage the immediate reference period and a copy of the data to perform our analysis without disrupting work processes. I recommend that you try, where you can, to update work processes as part of the quality change, because then you’ll get a better feeling for the effects of the change. That depends on the context, however. 
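Once the compressed file is the established baseline, the day-to-day code that consumes the dataset only needs a small adjustment to read it. A rough sketch of what that might look like; the processing itself is left out because it depends on your own work processes:

import csv
import gzip

# After adopting the new baseline, readers open the compressed file instead
# of the plain csv; the rest of the code can stay the same.
with gzip.open('ufos.csv.gz', 'rt', newline='') as ufos:
    sightings = list(csv.DictReader(ufos))

print('{} sightings on record'.format(len(sightings)))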
You always learn something new in each iteration, and based on what you learn, you can think about the next cycle: Do you want to re-prioritize the user requirements for quality? Do you want to continue improvements in the same area (continue to focus on dataset size)? Do you want to move on to another quality attribute? Even if act is the last thing you do with an idea for a change, it’s also never the last stop because it’s a cycle. You just move on to the planning step of the next cycle. Quality improvement is a continuous process. Document everything You may have noticed that I encouraged you to write a lot down in each iteration of the PDCA cycle, including the following: Write down your plan Write down problems with your changes and unexpected behaviors Write down your key findings There are a lot of things you have to document, and for good reason, which I mentioned briefly: quality is a team effort. You work with your data users to identify the requirements, and you work with a team of people who work on the product or service to improve those requirements. They all need to know why you’re making a change, when you’re going to make it, and whether it paid off. Here’s the real juicy part: Quality management is a data project. Data and quality go hand in hand. Chances are you’re interested in the book that spawned this article because it’s about data. If quality is new to you, then fear not. It’s what you’ve been doing (working with data), it’s just framed differently. When working on quality, all you do is collect data about the current status, the impact of a change, the time and date when some unexpected behavior occurred, and so on. This data is then processed and transformed into information in the analysis step, so that it can be better understood by humans. The newly created information can be put into context with how you understand the situation to become knowledge, which allows you to decide to adopt or not. The goal of all this data collection and documentation is to manage the quality process and know whether you’re fulfilling the desired quality levels (or iteratively getting there). That’s the question you’re collecting data to answer. Management of the quality process is called quality assurance, and it’s what allows us to provide confidence that the desired quality is fulfilled. Quality assurance is not only about collecting and processing the data and documenting what you’re doing, but also other aspects of the quality process that need to be managed, like training people and selecting the right tools. Quality assurance and quality controls are two of four parts of the broader term, quality management. The other two components are quality planning and quality improvements. Quality planning is the input to quality assurance and controls. Quality improvements are the results of quality assurance and controls. I consider quality planning to be a part of quality assurance because it’s the first step in the process, the input, where you identify users, specify their needs, and analyze them. Out of that process, you create a plan for the quality you want to improve and the level of quality you’re after. Quality improvements are what you do when you measure the level of quality and see that it isn’t where you would like it to be. Then you make some changes to what you’re working on in an attempt to increase the quality, and send it through the quality assurance process again and hope for the best. 
Let’s recap these terms, because they’re important, especially if you are a data quality manager. These terms are basically what you’re responsible for: Quality planning: Identify what you’ll be doing and create a plan Quality assurance: Follow procedures to be able to show that you’re making progress on quality Quality controls: Create measurements that tell you where you are currently Quality improvements: Make changes to increase the quality, if the level isn’t satisfactory yet The difference of data quality Everything we’ve gone through can be applied to quality management in general, irrespective of the subject. If you’re working on organizational quality, you follow the same steps and the management principles as those responsible for managing quality for a brand of tea. The same can be said about data quality. Generally speaking, there is nothing really different about managing the quality of data from managing the quality of any other product or service. You follow the same steps to manage quality as a whole and go through iterations of same steps of the quality cycle. Well, it’s almost the same but there is one important distinction between data quality and most other quality subjects: Automation is easier. Data is usually digitized and stored on computers. Think back to our dataset size example. Comparing the size of a dataset can be automated. That’s different from the majority of the other subjects of quality management, where quality controls emphasize inspections and reviews over automation. For example, how would you gauge the quality of a magazine? You wouldn’t be able to automate that—you’d have to survey your readers. If you’re making fire escapes, you don’t automatically set buildings on fire to make sure your fire escape is of good quality. You run simulations manually: for example, with fire drills. Because data is digital, we’re in a much better position to automate quality. We can automatically gauge the data quality level. That doesn’t mean inspections and reviews aren’t used: if you want to gauge the quality level of the data clarity attribute, you’re going to have to survey your users, but in most cases data quality controls can be automated. Just be aware that our response to a lack of quality is not automated. We’re automating the bureaucracy of quality management so we can focus on the fun stuff. This means that we can focus on improvements and let automation take care of the quality controls and perhaps more importantly, the quality process as a whole. The well-known protest song by Bob Dylan, The times they are a-changing, consists of four verses that tell the listener how the time is changing. The song was actually written as an anthem of change thanks to time, and indeed, the world is constantly changing. We have to look constantly at where we are and adapt to changes over time. You rarely achieve quality and then just have that quality from then until eternity. It’s especially applicable to data quality because data is an abstraction of a constantly changing world. Put yourself into the shoes of someone tasked with counting the number of birds of a specific species. The quality attribute you have is completeness of the data, meaning that the requirement is to know the exact number of birds of that species in a particular region. It’s not enough to give just an estimate of the amount, you have to count them all. If, by some miracle, you are able to count all of the birds and record them, your victory dance won’t be a long one. 
Some of the birds die, new ones hatch, or they all migrate for the season. Unless you're tracking the number of dodos in the wild (or other extinct species), the quality of your count is constantly affected by the world, and you don't always control how the world affects your data. That's why it's important to set the data quality management up as a cycle. You follow the four steps, and then you rinse and repeat. You should always check the level of all quality attributes, even those that already reached the desired level of quality. This can of course, in most cases, be automated as well. A good way to ensure quality is to constantly check the status of it for the attributes you're interested in. Wouldn't it be a good idea to just run a program that periodically checks whether file size is OK, getting close to the limit, or has surpassed the limit? It's simple: we just write small scripts like check_ufo_filesize.py in listing 5 and run it from time to time.

Listing 5. check_ufo_filesize.py

import os.path                                                ❶
import math

sysadmin_limit_in_kilobytes = 640                             ❷

size_in_bytes = os.path.getsize('ufos.csv.gz')                ❸
size_in_kilobytes = math.ceil(size_in_bytes / 1024)

if size_in_kilobytes > sysadmin_limit_in_kilobytes:           ❹
    print('Critical: How dare you go over the limit!?', end=' ')          ❺
elif size_in_kilobytes > sysadmin_limit_in_kilobytes - 100:   ❻
    print('Warning: You are dangerously close to the limit', end=' ')
else:                                                         ❼
    print('All is OK: The sysadmins love you!', end=' ')

print('(Size is: {size}KB)'.format(size=size_in_kilobytes))   ❽

❶ We import the libraries we need; for this we only need the built-in os.path and math libraries.
❷ We set the file size limit to 640.
❸ We compute the size of the compressed file in kilobytes.
❹ If the file size is more than the limit we print out a message claiming that we've surpassed the limit. Our situation is now critical!
❺ For each of these print statements (messages) we end with a space ' ' instead of the default new-line character because we want the whole output to be in a single line, which is easier to read if you're periodically writing them on the screen.
❻ If we're getting close to the limit, in this case 100KB from it, we print out a warning statement. After a few hundred years, when our compressed dataset becomes 540KB, we'll be thankful for this warning message because it gives us a lot of time to improve the quality before we get to a critical state. The system administrators will never again see a file larger than 640KB.
❼ This should be our regular state. We're way under the limit and the sysadmins love us.
❽ Then, for completeness and to be informative, we print out the actual file size like before.

This is what we'll be doing to leverage the difference of data quality: writing the quality controls as scripts that run from time to time and monitor the quality level. All those scripts have to do is notify us when things are OK, when they are getting bad, and after they have gone bad. There's a whole suite of software out there to help you constantly check the quality level. We can hook our scripts into their systems to make use of all the different features they have. I like to re-purpose monitoring software that's used for system administration (probably because I know that very well). Monitoring software like Nagios, Zabbix, and others allow you to create your own little scripts that can serve as your quality controls.
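Most of these monitoring systems care less about the printed text than about the script's exit status; Nagios-style plugins, for example, conventionally exit with 0 for OK, 1 for a warning, and 2 for a critical state. One way check_ufo_filesize.py could be adapted to that convention is sketched below - the thresholds mirror listing 5, but the exit-code wiring is an assumption about your particular monitoring setup:

import math
import os.path
import sys

limit_in_kilobytes = 640
size_in_kilobytes = math.ceil(os.path.getsize('ufos.csv.gz') / 1024)

if size_in_kilobytes > limit_in_kilobytes:
    print('CRITICAL: dataset is {}KB'.format(size_in_kilobytes))
    sys.exit(2)   # critical alert
elif size_in_kilobytes > limit_in_kilobytes - 100:
    print('WARNING: dataset is {}KB'.format(size_in_kilobytes))
    sys.exit(1)   # warning
else:
    print('OK: dataset is {}KB'.format(size_in_kilobytes))
    sys.exit(0)   # all is well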
Monitoring software is designed to periodically run the scripts you want, based on configurations you determine, and notify staff when something bad has happened (usually through warnings and critical alerts). Setting up your work environment is all part of quality planning, but that’s a topic for another day. You’re hopefully wiser for having read this article, and, more importantly, more interested in data, data quality, and data usability. For more, download the free first chapter of The Art of Data Usability and see this slide deck on slideshare.net.
http://freecontent.manning.com/managing-quality/
CC-MAIN-2017-51
refinedweb
7,509
61.16
Feature #11151 (closed): Numeric#positive? and Numeric#negative?

Description

We just added Integer#positive? and Integer#negative? to Active Support. I was wondering if we could get that implemented in Ruby itself, and searched if it was already requested before to Ruby core. I found that it was requested in #5513, but rejected. Since they were requested with more methods, and I don't know Japanese well enough to see if there are technical reasons, I thought to request just these two methods again. The implementation would be something like:

def positive?
  self > 0
end

def negative?
  self < 0
end

And one of its use cases is filtering, like:

bunch_of_numbers.select(&:positive?)

If this feature is accepted I can work on a patch.

Updated by usa (Usaku NAKAMURA) almost 6 years ago
In #5113, matz said:
- We can use > 0 and < 0 for the purpose.
- Complex is Numeric, but we cannot define positive? and negative? for it.
The latter is just an appropriate comment, I think.

Updated by rafaelfranca (Rafael França) almost 6 years ago
Right. Thank you for the explanation. So maybe just to Fixnum and Float?

Updated by phluid61 (Matthew Kerwin) almost 6 years ago
On 14/05/2015, [email protected] [email protected] wrote:
> Issue #11151 has been updated by Rafael França.
> Right. Thank you for the explanation. So maybe just to Fixnum and Float?
You probably mean Integer and Float. And possibly also Rational. That, or add it to Numeric and have it raise in Complex.
-- Matthew Kerwin

Updated by rafaelfranca (Rafael França) almost 6 years ago
> You probably mean Integer and Float. And possibly also Rational.
Yeah. For what I could see, probably we'll just need to publish two functions that we are already using internally.

Updated by matz (Yukihiro Matsumoto) almost 6 years ago
Realistic use-case is written. Accepted. But it should recognize complex numbers (should raise exception).
Matz.

Updated by nobu (Nobuyoshi Nakada) almost 6 years ago
- Status changed from Open to Closed
Applied in changeset r50522.
numeric.c: Numeric#positive? and Numeric#negative?
https://bugs.ruby-lang.org/issues/11151
CC-MAIN-2021-21
refinedweb
344
60.41
Most discussion is on Typelevel Discord:

I'm just saying that you can't solve this:

def a = Stream(streamA1.drain, streamA2.drain).join
def b = Stream(streamB1.drain, streamB2.drain).join
def c = Stream(a.drain, b.drain).join

you can reduce the initial cost of join, but you can't make the above be the same as

def c = Stream(streamA1.drain, streamA2.drain, streamB1.drain, streamB2.drain).join

unless you make join cost 0 (allocate no new primitives per join)

joinDrain or something can be useful to reduce the cost of a single join in the first place, given the very common use case of just concurrently running things for their effect

I think go is terribly written compared to what it could be.

/** Parse a stream and return a single terminal ParseResult. */
def parse1[F[_], A](p: Parser[A]): Pipe[F, String, ParseResult[A]] = s => {
  def go(r: ParseResult[A])(s: Stream[F, String]): Pull[F, ParseResult[A], Unit] = {
    r match {
      case Partial(_) => s.pull.uncons1.flatMap{
        // Add String To Result If Stream Has More Values
        case Some((s, rest)) => go(r.feed(s))(rest)
        // Reached Stream Termination and Still Partial - Return the partial
        case None => Pull.output1(r)
      }
      case _ => Pull.output1(r)
    }
  }
  go(p.parse(""))(s).stream
}
https://gitter.im/functional-streams-for-scala/fs2?at=5bc0a85a435c2a518e95436c
CC-MAIN-2021-49
refinedweb
226
63.8
Hi all, I have a WinForm .NET 3.5 library (VS2008 project), which I would like to use in an MFC 7.1 application (VS2003 project). If I export the .NET library as a COM server and interop with the legacy MFC 7.1 application, will there be a .NET runtime compatibility issue? I'm really new to interop and COM, and want to know if it is feasible or not. If COM is the wrong direction, what would be the proper way? Thanks in advance! Best Regards, Wade

Hi, We have a vb6 application that calls a c# com interop. In our new version we have added a new function to the com interop, and now the vb6 app is failing with "Run-time error 430 class does not support Automation or does not support expected interface". The code looks like:

[ComVisible(true), Guid("682D4AF6-E607-3112-87B2-B7B1A58D2618")]
[ServiceContract]
public interface IMyInterface
{
    ...
    [OperationContract]
    string NewFunction();
}

[ComVisible(true), Guid("7E99EAC4-C7FA-4e0e-9996-F4500E17AF69")]
[ClassInterface(ClassInterfaceType.AutoDual)]
public class MyClass : IMyInterface
{
    ...
    public string NewFunction() {...}
}

When comparing the old and new IDL files, it shows that the Guids of the class are the same:

[ uuid(7E99EAC4-C7FA-4E0E-9996-F4500E17AF69), version(1.0), custom({0F21F359-AB84-41E8-9A78-36D110E6D2F9}, "MyCompany.MyServiceClient.MyClass") ]
coclass SecureAssets { [defaul
http://www.dotnetspark.com/links/60737-inquiry-on-net-com-interop-compatibility.aspx
CC-MAIN-2017-22
refinedweb
230
61.02
Hi all, I am fairly new to Python and looking for some basic help: how to use import statements in a proper way. Right now I use Python 2.7. If I move to 3.x, are there any conflicts with absolute imports? And also, what is the difference between absolute and relative imports?

An absolute {import, path, URL} tells you exactly how to get the thing you are after, usually by specifying every part:

import os, sys
from datetime import datetime
from my_package.module import some_function

Relative {imports, paths, URLs} are exactly what they say they are: they're relative to their current location. That is, if the directory structure changes or the file moves, these may break (because they no longer mean the same thing).

from .module_in_same_dir import some_function
from ..module_in_parent_dir import other_function

Hence, absolute imports are preferred for code that will be shared.
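On the Python 2.7 versus 3.x part of the question: Python 2 also allows implicit relative imports (a bare import of a sibling module inside a package), which Python 3 removed. If you want 2.7 code to behave the 3.x way before migrating, you can opt in per file. The package and module names below are placeholders, not anything from a real project:

# my_package/module.py, running on Python 2.7
from __future__ import absolute_import

# With the future import, a bare "import sibling_module" no longer silently
# picks up my_package/sibling_module.py; be explicit instead:
from . import sibling_module                    # explicit relative import
from my_package import sibling_module as sib    # absolute import, same on 2 and 3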
https://www.edureka.co/community/31815/what-is-absolute-import-in-python-and-how-is-it-used
CC-MAIN-2019-47
refinedweb
258
69.79
Registering an Application to a URL Protocol
Updated: April 2011

The About Asynchronous Pluggable Protocols article describes how to develop handlers for URL protocols. In some cases, it may be desirable to invoke another application to handle a custom protocol. To do so, register the existing application as a URL Protocol handler. The handler is registered under a key in HKEY_CLASSES_ROOT named for the new scheme, and that key must contain a string value named "URL Protocol" to identify it as declaring a custom protocol handler. Without this key, the handler application will not launch. The value should be an empty string. Keys should also be added for DefaultIcon and shell. The Default string value of the DefaultIcon key must be the file name to use as an icon for this new URL protocol. The shell key should contain an open\command subkey whose default string value is the command line used to launch the handler, with "%1" standing in for the URI of the clicked link. Starting with Internet Explorer 9, an application protocol handler can disable URL percent decoding by adding the UseOriginalUrlEncoding setting to the registration for the protocol. When this setting is set to (DWORD) 1, the command line is not percent decoded by Internet Explorer when passed to the protocol handler.

Security Issues: a link that uses the custom protocol can come from untrusted content such as a web page, so the handler application should treat all of its parameters as untrusted data and validate them. For more information, please see Writing Secure Code.

Example Protocol Handler

The following sample code contains a simple C# console application demonstrating one way to implement a protocol handler for the alert protocol.

using System;
using System.Collections.Generic;
using System.Text;

namespace Alert
{
    class Program
    {
        static string ProcessInput(string s)
        {
            // TODO Verify and validate the input
            // string as appropriate for your application.
            return s;
        }

        static void Main(string[] args)
        {
            Console.WriteLine("Alert.exe invoked with the following parameters.\r\n");
            Console.WriteLine("Raw command-line: \n\t" + Environment.CommandLine);
            Console.WriteLine("\n\nArguments:\n");
            foreach (string s in args)
            {
                Console.WriteLine("\t" + ProcessInput(s));
            }
            Console.WriteLine("\nPress any key to continue...");
            Console.ReadKey();
        }
    }
}

When invoked with the URL alert:"Hello%20World" (note extra quotes) from Internet Explorer, the program responds with a console window that echoes the raw command line and the decoded argument list.

- 2/24/2012 - phanduyson
- 10/6/2011 - Louis Thomas
Is there a way to write a protocol handler as a service? thanks - dave
- 5/18/2011 - DavidThi808
Q: Is there a way to set the Working Directory for the program in the registry when launched from url?
- 3/17/2011 - Ult1m4t3Snip3r
- 3/18/2011 - Ult1m4t3Snip3r
I have registered a URL protocol along with the application installation, and it works flawlessly on both IE and Firefox. The problem is when the application is not installed (and the URL protocol is not registered) on a computer, and the user clicks the URL link. I would like IE to open a JavaScript window, or a web page, or something else, to give the user a choice to install the application. Q: How to do that? A: Sadly, there's no way to do this directly (e.g. no window.navigator.supportsProtocol() method). You could register in the VersionVector and use Conditional Comments in IE to detect the presence of your handler.
- 1/17/2011 - EricLaw [MSFT]
A: All major browsers on Windows will pick up Application Protocols registered in this way.
- 1/17/2011 - EricLaw [MSFT]
- 9/8/2010 - Benjamin Eberlei
==================
Try converting the custom URL with tinyurl.com. It will return an http protocol URL and will forward to your custom URL (making it clickable in Outlook, etc). -Donavon
- 12/8/2009 - Vadim Rapp
- 1/7/2010 - i am donavon
- 8/28/2009 - Thomas Lee
So far this works great on IE7 using the "command" key; however, for some reason it does not work in IE6. I'm getting an "Invalid Syntax Error" when I click on the link from within the browser.
Everything works great if I type the URL in the "START > Run..." window. I'm not sure if the "DDEEXEC" key is required by IE6 though. If so, please can you provide an example?
- 8/14/2008 - MiguelGuzman
- 5/19/2009 - Thomas Lee
I have built a small tool for registering URL protocols: This tool will help you register URL protocols and launch a specific application when a URL is executed. This tool is free and open source.
- 7/8/2008 - kallelundberg
- 1/18/2009 - Stanley Roark
This works great except that an IE window is left open showing the URI that was invoked. Is there any way (e.g. a value in the registry, EditFlags perhaps) to make this window close or not appear in the first place? => The problem with the blank IE window may be related to the zone where the page that hosts the link is located. If you test an HTML page from the file system (file://...), try to host the page in IIS instead (so that it is http://...) - this worked for me.
- 7/11/2008 - Ian Horwill
http://msdn.microsoft.com/en-us/library/Aa767914.aspx
crawl-003
refinedweb
755
56.96
Put Your Pages and Views on Lockdown

As I'm sure you know, we developers are very particular people and we like to have things exactly our way. How else can you explain long-winded impassioned debates over curly brace placement? So it comes as no surprise that developers really care about what goes in (and behind) their .aspx files, whether they be pages in Web Forms or views in ASP.NET MVC. For example, some developers are adamant that a page should not include server side script blocks, while others don't want their views to contain Web Form controls. Wouldn't it be great if you could have your views reject such code constructs? Fortunately, ASP.NET is full of lesser known extensibility gems which can help in such situations, such as the PageParserFilter. MSDN describes this class as such:

Provides an abstract base class for a page parser filter that is used by the ASP.NET parser to determine whether an item is allowed in the page at parse time.

In other words, implementing this class allows you to go along for the ride as the page parser parses the .aspx file and gives you a chance to hook into that parsing. For example, here's a very simple filter which blocks any script tags with the runat="server" attribute set within a page.

using System;
using System.Web.UI;

public class MyPageParserFilter : PageParserFilter
{
    public override bool ProcessCodeConstruct(CodeConstructType codeType, string code)
    {
        if (codeType == CodeConstructType.ScriptTag)
        {
            throw new InvalidOperationException("Say NO to server script blocks!");
        }
        return base.ProcessCodeConstruct(codeType, code);
    }

    public override bool AllowCode
    {
        get { return true; }
    }

    public override bool AllowControl(Type controlType, ControlBuilder builder)
    {
        return true;
    }

    public override bool AllowBaseType(Type baseType)
    {
        return true;
    }

    public override bool AllowServerSideInclude(string includeVirtualPath)
    {
        return true;
    }

    public override bool AllowVirtualReference(string referenceVirtualPath, VirtualReferenceType referenceType)
    {
        return true;
    }

    public override int NumberOfControlsAllowed
    {
        get { return -1; }
    }

    public override int NumberOfDirectDependenciesAllowed
    {
        get { return -1; }
    }
}

Notice that we had to override some defaults for other properties we're not interested in, such as NumberOfControlsAllowed, or we'd get the default of 0, which is not what we want in this case. To apply this filter, just specify it in the <pages /> section of web.config like so:

<pages pageParserFilterType="Namespace.MyPageParserFilter, AssemblyName">

Applying a parse filter for Views in ASP.NET MVC is a bit trickier because it already has a parse filter registered, ViewTypeParserFilter, which handles part of the voodoo black magic in order to remove the need for code-behind in views when using a generic model type. Remember those particular developers I was talking about? Suppose we want to prevent developers from using server controls which make no sense in the context of an ASP.NET MVC view. Ideally, we could simply inherit from ViewTypeParserFilter and make our change so we don't lose the existing view functionality. That type is internal so we can't simply inherit it. Fortunately, what we can do is simply grab the ASP.NET MVC source code for that type, rename the type and namespace, and then change it to meet our needs. Once we're done, we can even share those changes with others. This is one of the benefits of having an open source license for ASP.NET MVC. WARNING: The fact that we implement a ViewTypeParserFilter is an implementation detail. 
The goal is that in the future, we wouldn't need this filter to provide the nice generic syntax. So what I'm about to show you might be made obsolete in the future and should be done at your own risk. It's definitely running with scissors. In my demo, I copied the following files to my project:

ViewTypeParserFilter
ViewTypeControlBuilder
ViewPageControlBuilder
ViewUserControlControlBuilder

I then created a new parser filter which inherits the ViewTypeParserFilter and overrode the AllowControl method like so:

public override bool AllowControl(Type controlType, ControlBuilder builder)
{
    return (controlType == typeof(HtmlHead)
        || controlType == typeof(HtmlTitle)
        || controlType == typeof(ContentPlaceHolder)
        || controlType == typeof(Content)
        || controlType == typeof(HtmlLink));
}

This will block adding any control except for those necessary in creating a typical view. You can imagine later adding some easy way of configuring that list in case you do later allow other controls. Once we've implemented this new filter, we can edit the Web.config file within the Views directory to set the parser filter to this one. This is a powerful tool for hooking into the parsing of a web page, so do be careful with it. As you might expect, I have a very simple demo of this feature here.
https://haacked.com/archive/2009/05/05/page-view-lockdown.aspx/
CC-MAIN-2020-29
refinedweb
759
52.8
05-10-2012 09:35 AM
Just joking about the alcoholism.... A couple of questions regarding Cascades.... Is there a way to have a custom Cascades view object where I use painting techniques to render my view, i.e. stroke, fill, drawPath etc..? So far it seems like all you can do is create a composite component of other prebuilt Cascades components. Secondly, is there a way to have a view and all of its subviews render to a single in-memory image? What I'd love to do is take some UI I created in QML and have some C++ code turn it into a texture for use with OpenGL so I can add some of my own 3D effects a la Core Animation. All replies greatly appreciated.
05-10-2012 09:40 AM
05-10-2012 09:44 AM
From my research, it seems as if you don't really use most of Qt's UI classes; you use the ones in the bb::cascades namespace. The one notable exception would be the WebView. I guess you could always use ForeignWindow to create an OpenGL ES based window and draw contents into it that way. Not as easy as an actual 2D Graphics API/Facade.
05-10-2012 09:46 AM
05-10-2012 02:02 PM
Regarding your second question... It's possible you could make your Cascades window invisible by either changing transparency or just moving it behind some other window. There is a screen API to read back the data from a window, so long as you have a handle to it. Whether you can get a handle to the main Cascades window or not, I am not sure. I would hope so. Cheers, Sean
05-12-2012 08:32 AM
Interesting topics indeed. BTW Qt 5 comes with a 3D package included.
05-12-2012 08:42 AM
05-15-2012 10:27 AM
Getting back to the original question... Check out the sample helloforeignwindow. This demonstrates using a foreign window to do your drawing. To share your drawing, your different views can have access to the same pixmaps or other objects. Does this answer your question?
05-15-2012 11:16 AM
Hi, At this point you don't have anything equivalent to a Canvas in Cascades. This means you can't perform operations like drawPath etc... Also, you can't use Qt UI and Cascades at the same time. If you want to draw an object, you could create your own custom Cascades component and scale/animate that. Alternatively, the WebView/Canvas will support drawing on top of Web Content - but that probably doesn't solve your problem. Of course, as was mentioned before, you can draw in the OpenGL window (though it looks like that wasn't your question either).
06-07-2012 10:24 AM
I agree with you. The presence of a custom painter library on top of an easy view/scene framework in Cascades is an important thing for many developers. QtGui has classes like QGraphicsView, QGraphicsScene and QGraphicsItem that make our life easier. Using OpenGL for this is like killing flies with gunshots!! A canvas and QPainter mechanism is required in Cascades, please.
http://supportforums.blackberry.com/t5/Native-Development/Cascades-Custom-Painting-amp-Alcoholism/m-p/1712767/highlight/true
CC-MAIN-2015-27
refinedweb
543
73.58
Can Large Scale NAT Save IPv4? 583 Julie188 writes "The sales pitch was that IPv6, with its zillions of new IP addresses, would eliminate the need for network address translation altogether. But Jeff Doyle, one of the guys who literally wrote the book on IPv6, suggests that not only will NAT be needed, but it will be needed to save IPv4 at the tipping point of IPv6 adoption. 'I.'" Re: Can Large Scale NAT Save IPv4? (Score:4, Insightful) Of course it could fit most people needs who, by the way, don't even know what having a unique IPv4 address means, forget about knowing what a fixed IP address is. My only concerns would be towards people hosting services, even if they only host a gaming server. Before getting a fixed IP address, I remember using services like dyndns before I setup my own private dyndns server on a fixed IP address server that I had access to. I could always reach my system even if it changed address every 6 hours on the first dialup provider I registered to back then. So yes, it could, my only concerns is that it may cause prices to have a unique address or a fixed address to rise. Part of the solution (Score:5, Insightful) Large scale or ISP wide NAT is part of the solution. It will not "save" IPv4, whatever that means. It will make it possible to transition to IPv6 and still access all the old sites, that have not yet made the transition. It is not really important that slashdot.org is still IPv4 only. You can access it just fine. And slashdot.org has no need to access you. You use IPv6 in all the cases where you wanted that nice static IPv4 address before: When running peer to peer software. Setting up your small hobby server. Using direct peer to peer VoIP. And so on. All the consumer ISPs will transition soon enough during the next few years. We will fairly quickly be able to assume consumers will in fact be able to access IPv6 only sites. For the next 10 years you can also assume consumers will be able to access IPv4 only sites - is anyone really surprised by that? If all your gaming friends got IPv6, playing on your private IPv6 only game server - what do you care that some backwards dialup only ISP, in a country you never heard of, still is IPv4 only? Port scanning posters; TOS server ban (Score:5, Interesting) slashdot.org has no need to access. You use IPv6 in all the cases where you wanted that nice static IPv4 address before: When running peer to peer software. Setting up your small hobby server. In other words, things that cable and phone companies don't really want customers on the residential plan doing in the first place, as explained in the terms of service. If all your gaming friends got IPv6, playing on your private IPv6 only game server By the time that happens in several years, you may have grown out of online gaming. Which of the current video game consoles supports IPv6? Re: (Score. So that's the cause of this behavior... thanks for the insight. Rubbish (Score:2) Let's think about this shall we. there are 64K port addresses if I am not mistaken. that's effectively two quads IF you used them optimally. for inside the nat there are only 3 quads x 3 prefixs (169,192, 10). SO that gives us a little bit more than 5.2 quads. But that assumes every nat in the stack does everything perfectly. Now you might isn't that 5.2 quads worth of addresses? No because each computer is going to be using multiple ports. So this won't work. 
it's a bandaid however that will delay the Re: (Score:3, Interesting) The way CGN works is to spread multiple users across the same IP address. So forget about dyndns. Also forget about google maps, because it runs through ports like water, and TCP requires a 90-second timeout before releasing a port. Basically, CGN is a hack to cushion the blow, but it doesn't eliminate the need to switch to IPv6. You will like CGN a lot less than you like your present NAT. A much better choice would be to go to NAT64. That way you get end-to-end connectivity for the hosts that do I Re: (Score:3, Insightful) Absolutely. I don't understand why do dual-stack and NAT44 instead of giving customers IPv6 and NAT64. I assume this is because the problem isn't just all those web servers on IPv4 addresses, but a significant number of end user applications that are not IPv6 aware. Unfortunately, if we allow them to avoid upgrading with NAT44 then we can confidently predict that apps won't get updated and you'll never be able to switch it off. It's human nature not to fix the problem until forced to. NOOOOOOO (Score:5, Insightful) Re: (Score:3) > Give us... Nowadays, not that many people give. It is also pretty rare that corporations give to their customer base. As well, it is rare that governments give since in the end we are paying for every dime they spend. So in the end, the most competitive solution will prevail. Read the cheapest one. If it is using a dual stack with natted IPv4 plus IPv6 well during the transition, this is what's going to happen. I would sure enjoy having IPv6 fully deployed right now but I have to be realist. Re: (Score:3, Insightful) Despite the efforts of ISPs and some institutions (heck even Comcast has an IPv6 pilot program) no significant number of end-users are going to turn on IPv6. Nothing will happen until someone with enough clout decides to put a new "must have killer app" or free content out there and only allow IPv6 access to it. Then consumers might demand there equipment, OS and ISP support it. There's no money in that, so I'm not holding my breath. You mean like ipv6porn ? (Score:5, Interesting) [ipv6porn.co.nz] is giving away free porn to anybody who can access it with an ipv6 address Re:You mean like ipv6porn ? (Score:4, Funny) And the rest of the internet is giving it away to anyone who can access it with an ipv4 address. Fail! Re: (Score:2) Methinks there is already plenty of free porn available on the IPv4 intertubez. :) Re: (Score:3, Interesting) Re: (Score:3) Yeah, like that, but something more exclusive. As to what, well, you can pretty much freeload porn and music (well, "radio") with a clear conscience, and with minimal actual risk and a small level of ethical flexibility, movies as well. I've also seen a HIT on mturk that pays out to establish a v6/v4 tunnel to a free provider. So at least there is something around to nudge the technically adept to get hooked up. Don't know how they are funded. Hey, how about a replacement for SMTP that is designed from the Re:NOOOOOOO (Score:4, Insightful) "Despite the efforts of ISPs and some institutions (heck even Comcast has an IPv6 pilot program) no significant number of end-users are going to turn on IPv6." Of course not, because that's not what end users do. End users will go IPv6 en masse as soon as the DSL "thingie" that their ISP installs on their homes and works magically to connect them to the intertubes goes IPv6. 
Re: (Score:3, Funny) Using "!=" in prose isn't grammatically acceptable in third-grade English class, FWIW. Re: (Score:2) Maybe using NAT for half a year and having the increased number of people calling support and the increased cost of having terribly stateful routers motivates the ISPs to push ipv6. Re:NOOOOOOO (Score:5, Insightful) I don't think non-networking guys really understand the harm that NAT/PAT/masq has done. I am talking economic damage. NAT has cost you money. It's cost you a LOT of money. It cost your company money. It cost everyone who uses computer an ASS LOAD OF MONEY totally wasted on a cheap hack to get around the fact that we needed a better addressing system. All the wasted software time which talented people worked for, and NAT is just a work-around. All the money wasted PAYING for above mentioned software, salaries, time. All of the needless hardware and software implementations related to NAT. Anyone who runs a large Cisco PIX/ASA platform can bemoan the number of statics needed between network interfaces. Think about the apps that had a really hard time working because of NAT. The games that could not peer-to-peer because both sides were behind NAT. Think about all of the companies that have multiple DNS views -- inside, and then public. That's a ton of extra work. Best thing of all that I look forward to in IPv6 is... the idiots that it will wring out of the IT/comp-sci sector. Idiot sysadmins that label their servers with IPv4 addresses, idiot programmers who won't learn IPv6 and will get the boot to the curb that they have long deserved. If you can't handle it, GTFO lamers. You don't need to know your workstation's IP address -- you need to know it's hostname and how to use DNS. I can't tell you the number of places I've worked at where people hard-code IP addresses into config files and the damage that it has caused, along with labeling servers/printers/whatever with their IPv4 address. Re: (Score:3, Informative) Mod parent up. If you've had to deal with any sort of reasonably larged sized network and NAT, everything he mentions above is a huge pain in the ass. Relying on NAT as a "firewall" is brain damaged anyway, and those who tihnk NAT needs not processing ability compared to a proper firewall are deluded. Every single packet needs to be looked up against the NAT state table, so even though you don't have any real firewall rules, processing is still going on. The "protection" that NAT provides can be replac Re:NOOOOOOO (Score:5, Interesting) Me too. I look forward to having no NAT and changing the IPs in my internal network every time I use a different ISP. "Hmm, my internet connection failed, better connect the backup one. OK, now this ISP gives me xxx:yyy:zzz:xxyz::0 IP, so I now have to go and change the addresses of all my PCs, since they won't be able to access the internet. If only there could be some way to keep the internal IPs constant..." Currently, the internal IPs of my computers do not depend on which ISP I am connected to. Re: (Score:3, Informative) Re:NOOOOOOO (Score:4, Informative) If you have carrier redundancy, the IP6 stack can/will have *both* sets of IPs active at once, and you decide which gets used outgoing at the router. IPv6 actually includes multi-homing, unlike IPv4.... 
Re:NOOOOOOO (Score:5, Insightful) Your rant would be more compelling if your list didn't consist of "software time", "software, salaries, time", "software" (yes, again), "time setting it up (as if setting up a proper firewall ruleset was any less cumbersome)", and "games". Yes, games. Economic damage indeed. Look, NAT isn't ideal. I'll grant that. IPv6 is right. But I'd like to point out something. If NAT is seriously as big a deal as you make it out to be, that's man-hours that kept someone employed. Software houses employ people to work in projects that need doing. Working around network realities/idiosyncrasies needs to be done. Remove those realities and the rampaging hordes you envision writing NAT code won't just get a memo saying "hey, we were going to have you work on this uber useful productive project but didn't because you were working on that NAT code but now that it's gone, you're a productive member of society again!" There's some hyperbole in my post, but the point is clear. At my office we have a phrase, "scripting yourself out of a job". There are a lot of repetitive tasks like new user creation that I'm often tempted to script to save myself (billable) time. Sadly, when everything I do is scripted, I'm not needed. Anyone can punch in values and routine tasks are out of my hands. All that's left is sitting around waiting for something to go wrong. I can't charge for that. That being said, there's an ethical fine line between predatory billing - which we don't ever do - and scripting myself out of a job. Point is the economic "impact" of NAT isn't something that's worth talking about. If anything it employ[s/ed] people. Re: (Score:3, Insightful) Re:NOOOOOOO (Score:4, Insightful) Seriously? With this logic, you would be against any sort of more efficient process ever developed. Re:NOOOOOOO (Score:4, Insightful) If NAT is seriously as big a deal as you make it out to be, that's man-hours that kept someone employed. Classic example of the broken window fallacy [wikipedia.org]. Are you really saying we should prefer one protocol over another because it employs more sysadmins and developers in activities that would otherwise be unnecessary? Continuing this line of reasoning, we should abolish protocols such as DHCP and require manual configuration of all machines. Re: (Score:3) Re: (Score:3, Interesting) Re:NOOOOOOO (Score:5, Informative) err windows xp does have ipv6 support but its not installed by default (in fact has had it since XP sp2) now it may not have all the bells and whistles of say Vistas support (if anything can be supported by Vista) but you should at least be able to get an IP and get online. Re:NOOOOOOO (Score:4, Informative) Re: (Score:3) Win/XP has fine IPv6 support except that it can only query DNS over IPv4 transport. That is, you can't run a pure IPv6 + Windows XP environment. XP doesn't support IPSEC on IPV6 either, which I believe is mandatory for a 'real' IPV6 implementation? Then again, its IPSEC support on IPV4 is pretty awful anyway. Re:NOOOOOOO (Score:5, Informative) Support for XP has stopped, it's an old OS. Windows XP is supported until 2014 [microsoft.com] if you keep up with service packs. Re: (Score:2) Re:NOOOOOOO (Score:5, Informative) Except for all the people still on XP, which has no native IPv6 support... Has too. You just need to enable it: [ipv6int.net] Re: (Score:3, Funny) Or - get off my internet. 
Re: (Score:3) The my internet comment was facetious, but in my experience most of the XP holdouts are clueless and by and large the major component of the botnets launching DOSes that I have to deal with day in day out. My main point is that technology moves on. The C64 died, the Amiga died, Windows 98 died, etc. Bringing the future of the internet to a halt because of some tight-arse fucktards who don't want to get off some antiquated insecure OS is going to be a net LOSE for everybody. Either get with the program, Re:NOOOOOOO (Score:4, Insightful) And NAT is the bomb. It is the best kind of firewall you can have - ie one that doesn't slow down your computer with bloatware. It really is not difficult to forward a router. No, it's not. The best kind of firewall you can have is a firewall -- which can also be done on your router device, so that it "doesn't slow down your computer with bloatware". The part I don't like about it though, is the addresses. How easy is it to remember 192.168.2.31 compared to 2001:0db8:ac10:fe01:0000:00000:00000:0000? If you don't like that address, why did you pick it? For a start, redundant zeros are redundant, so write 2001:db8:ac10:fe01::. Secondly, you are assigned a /48, meaning you can pick the rest of the bits freely. If you didn't want to remember it, why did you pick fe01 instead of, say, 0, letting you write 2001:db8:ac10::? And in case you hadn't noticed, 2001:db8:ac10:: is shorter than the IPv4 equivalent, where you have to remember both 192.168.2.31 and your external address, 192.0.2.172. What's the problem with IPv6 again? Re: (Score:3) Last time I checked, you need to enter the IP address of the DNS server. 8.8.8.8 (or even the IPs of the DNS of my ISP) is easy to remember, v6 addresses are not. Re: (Score:3, Insightful) IPv6 makes sense, they had RFC's up for a long time ppl could comment on. Many top level ppl in networking companies, and elsewhere hashed all this out and it was the best solution they could come up with. Something better is likely possible, but for now this is it and ppl need to get up to speed. [wikipedia.org] [ipv6.com] Hasn't it already? (Score:2, Insightful) For years we've heard predictions about how we'll run out of addresses "this year." Yet we haven't. I assume that's partly because my toaster doesn't have an IP, but it's also got to be because of NAT. Re:Hasn't it already? (Score:5, Funny) Re:Hasn't it already? (Score:5, Insightful) Joke aside, my network printers don't support IPv6, my 802.11 access point doesn't support IPv6, my SIP phone doesn't support IPv6, my ADSL modem/router doesn't support IPv6. Tell me again, how is this transition supposed to work if a good 50% of equipment doesn't support IPv6? Even if all these devices actually did support IPv6, why would I want them on publicly accessible IP addresses? The truth is, IPv6 hasn't taken off because really there is no huge need for it. Private networks (and there is gobs of IP space for those) are the norm, and in 90% of cases are more than acceptable with a device doing NAT to the rest of the world. There is nothing stopping people having both public and private IPs (like I have) for things that don't behave behind NAT. That is unless your ISP won't give you addresses.... Re: (Score:3, Insightful) There is nothing stopping people having both public and private IPs (like I have) for things that don't behave behind NAT. That is unless your ISP won't give you addresses.... And THAT is why you'll be needing IPv6. They won't have any addresses to give. 
Re:Hasn't it already? (Score:5, Insightful) Because they're globally unique. You'll never have a conflict of address when you start doing business with other entities with large networks or because the hotel just so happens to be using the same private addresses as a network you're trying to make a VPN connection to from your laptop. And just because they're public addresses doesn't mean they're publicly accessible. Re:Hasn't it already? (Score:4, Interesting) @CRC'99 all my equipment (except maybe the cable modem) support #ipv6. stop using #oldshit Ironically, if you want an IPv6 internet, the cable modem needs IPv6 support more than the other stuff he mentioned. Re: (Score:2) You have the X-307 model toaster too? With bluetooth capability so you can connect it to your headset and it can soothingly whisper to you when your toast is done, and is always connected to the main toast making database so that it always knows just how long to perfectly cook toast, bagels, english muffins, waffles, etc. and has a tri scanning array so it knows exactly what you put in it every time and can auto toast so you don't even have to push down on the button to start toasting, and then when its all done it gently rises the finished product up instead of shooting it up. Yah know, I could see a market for this. Re:Hasn't it already? (Score:4, Insightful) It has never been "this year", but it *will* be in the next two years, probably next year, at the Registry level. Existing ISPs already have their pools of addresses they can continue using for sometime longer until those are depleted, and yes, NAT has kept this from happening a lot sooner, but lets not make the mistake the US did with the metric system and keep an archaic and broken system in place when life is so much easier (after the transition anyhow) if we switch. Re:Hasn't it already? (Score:5, Informative) I don't know where you have been getting your predictions. It is pretty certain that IANA is going to run out of space [potaroo.net] about the middle of next year. We have 14 /8's left in the IANA free pool, we use up almost 2 /8's every month. Are you betting on the ipv4 space usage magically decreasing ( right when everyone will start freaking out about getting their last allocations )? Re:Hasn't it already? (Score:5, Funny) Are you betting on the ipv4 space usage magically decreasing ( right when everyone will start freaking out about getting their last allocations )? No no, there is always more to be found. That link of yours only show the _known_ reserves of addresses. They continue to find new fields of IP addresses and existing fields continue to find more than initially expected. This "peak IP" is never going to happen and you know it! Re:Hasn't it already? (Score:5, Funny) Haven't you heard? The IAB has known for decades that the default-free zone is continually making new IPv4 addresses as a natural function of the BGP protocol. The reason you've never heard about it is the evil telecom companies control the media and the NRO, and they don't want you to know the truth. It would probably be good, here (Score:2) to ask someone from Rosenet, in Thomasville GA, who have NATted *all their customers* for some years now. I expect they've learned all the necessary lessons. Re: (Score:3, Funny) You know there's probably a reason we haven't heard anything from them. :) Qwest does this in Omaha (Score:2, Interesting) If you're a Qwest customer in Omaha like my inlaws, you get a non-routable from the head end... 
and the last time I was there, they did not support VPN passthrough (although IIRC you could pay extra for a routable dynamic IP if you wanted VPN to work). Re: (Score:2) I've had a business package at my home for years. Yeah, it costs me a few more dollars per month but I've always gotten higher speeds, better technical support, more email accounts (back in the day) AND a static IP address. I could even host my own web/email servers if I wanted to and I did in the past. Offering a half-Internet package (Score:3, Insightful) Why should I have to pay *EXTRA* for the full internet, and competent support? Because the majority of people don't see the point of paying for the full Internet, and what little competition there is between cable and DSL forces the two to cut their rates to the point where they have to offer a half-Internet package. Useless investement (Score:5, Informative) Re: (Score:2) at work we use NAT behind a whole public class B and it work great. But as a customer I would not put up with it. I want to act as a server not only a dumb host. So please stop the carrier grade nating madness. I already need to either define a computer as DMZed or do port mapping, because of NAT. Just imagine the amount of head-scratching people will do when they find out there is another NAT in front of theirs preventing access to their subnet. If my ISP starts NATing, then its just confirmation that I nee Re: (Score:2) Of course, you might not be ABLE to switch carriers. If Time Warner were to put me behind NAT, I'd be pretty much screwed. I might be able to switch to some form of wireless connection, but that might not even be any better. In a lot of cases, carriers can do whatever they feel like. Re: (Score:2) Of course, you might not be ABLE to switch carriers. If Time Warner were to put me behind NAT, I'd be pretty much screwed. I might be able to switch to some form of wireless connection, but that might not even be any better. That would suck, though look on the bright side, in a worst case scenario you could probably get an IPv6 capable router and then tunnel to an IPv6 PoP. Its far from ideal, but at least you wouldn't be totally stuck on Time Warner's island. BTW Its worth noting that Comcast has already sta Re: (Score:2) IP6 tunnel broker. Done. Re: (Score:3, Insightful) YOU would not put up with it. But others would if it were cheeper. So the Internet will just be divided into the 0.01% of users who have real IP address, and the 99.99% average Joe. -paul Contradictory messages (Score:2, Flamebait) So the same guy advocated IPv6 and now it's IPv4 again? I'm dazzled! This sounds like what you hear during an election. Re: (Score:2) > I'm dazzled! Try reading the article. He's doing no such thing. P2P will be hard under Large Scale NAT (Score:4, Interesting) Most P2P protocols have at least some trouble working with local NAT. If it was implemented on a large scale there might be a few more problems, and it certainly gives ISP's (the ones running the NAT) more control over the traffic they route. I wonder how quickly the RIAA and friends will pick up on that and start pushing for NAT instead of IPv6... Large scale NAT is completely moronic. (Score:5, Insightful) There are only 65536 port numbers, so there is only so thin that you can spread a single IP address. Remember that some clients open many ports. There are also questions of reuse; you can't simply cram the 65536 space close to full. When a TCP connection terminates, you don't want to start reusing the port number right away. 
It's tricky. People are not going to be happy to be NAT ed. Will large scale NAT also come with large scale port forwarding? Large scale UPnP? What do you do about port number abuses? Dynamic DNS goes out the window. People can't have a quasi static IP any more with their own port 80, port 22, port 25 mail server or whatever. If I were to be NATed, I would not want to pay more than 5 dollars a month for such a crippled connection, regardless of bandwidth. So you will automatically have to sell the service to ten subscribers like me instead of just one to make the same revenue. As long as I can get non-NAT-ted service somewhere, than that is where I will be. NAT == CRIPPLED_INTERNET. Impose that next door. Next city. Next country. NIMBY: not in my backyard. And remember that if EVERYONE is NATted, then nobody can talk to anyone. Because you have to connect somewhere to use the Internet. That means resolving DNS to some IP address. To reach a DNS server you need an IP address. So the DNS server can't be NATed. That DNS server has to hand you the IP address of a host such as a web server. Are all web servers going to be NAT ed? That means they can't be all on port 80 any more. You are looking at redirects! There will have to be a port 80 service sitting on those NAT nodes, which will intercept web traffic, parse the HTTP request and forward to the appropriate node behind the NAT. Or else DNS will have to be re-architected so that it returns not only IP's but port numbers, so when you go to, it resolves to x.y.z.w:n, and the host x.y.z.w has port n forwarded to the right server. Good grief, and good luck with that. Re: (Score:3, Insightful) "There are only 65536 port numbers, so there is only so thin that you can spread a single IP address." But who says they have to do a one-to-many NAT? Why not have a pool of public addresses available for NAT. Say, 1 IP per every 50 customers, or even 1 per 25 customers? The point isn't necessarily that an ISP has to drop down to a single IP address for serving every single customer - but that instead of assigning 1 public IP per household/customer, they can get away with spreading it *thinner*. So, they setu Re: (Score:3, Insightful) but the web server would spawn a process on a higher numbered, unprivileged process for the actual traffic transfer. No. All traffic is exchanged over the HTTP connection initiated by the client, the server's source port for HTTP traffic is always port 80, or the port the client connected to. What happens, is (in the case of Apache); the web server initially starts up as root and binds port 80, then "changes user ID" to apache, after the port is already bound, to start its child processes. Since Re: (Score:3, Funny) But 99% of the people might not notice. They could give 99% of their customers NAT'ed service, and when someone calls and complains, apologize, and offer them a unique public ip for $500 extra per month, or if they upgrade to a "business class line" that permits them to have a dedicated static, addressable IP. FTFY no it can't. (Score:2) We have 3.7bn IPV4 addresses. That won't even cover 1 device per person, before even taking into account losses due to subnetting. The population is growing exponentially, and we should probably plan on the number of IP enabled devices growing even faster than that (higher number of devices per person). NAT, large scale or otherwise is only a band-aid delaying the inevitable. 
Its a horrible hack that breaks many protocols and causes all sorts of problems when you want to (say) join two previously priva Big NAT - sword cuts both ways - no need for IPv6? (Score:2, Interesting) The other side of big NATs is that they could make IPv6 unnecessary. With big NATs everybody could have private IPv4 space with the public IPv4 space being used to connect the private spaces. Protocols that don't like NATs are protocols that violate the principle of independence of protocol layers. Things like SIP and FTP are hard to NAT because they carry lower level addresses. Nobody cares about FTP any more but SIP is a security and implementation nightmare that is going to need to be re-designed from Re: (Score:2) We should have huge NATs connecting large private spaces together, with most people talking through multiple layers of NAT? FTP and SIP don't work because they "carry lower level addresses", like what, IP addresses? It's not like they use the MAC to connect. Are you insane? Trapped (Score:3) Hah. The only way this will work is if they make an extremely good IPv4/IPv6 NAT gateway. Except, if they make one that does a good job such that people are going IPv4->IPv6->IPv4 and everything basically works, then people will wonder why they don't just do an extremely good IPv4 NAT solution and go IPv4->IPv4 and drop the entire IPv6 part. NAT != Security (Score:2) In addition to using NAT to conserve IPv4 space it is still being sold as a more secure setup. NAT provides obscurity but not really security. A decent firewall is only going to allow what you configure it to allow. The only benefit I can think of is it may reduce the scope of subnet scans your network is subjected to. Then again, the bots/scripts are scanning em all anyway. Re: (Score:2) NAT does provide security : it shuts down a large number of attack vectors. It is not comprehensive but there is a significant difference in security profile between a device which is globally addressable vs a device which is only addressable on a local network and/or when it initiates a network link. A firewall is merely another means to shut down some of those attack vectors. The more unobtrusive security layers you have the better. NAT is perfect for home use and it is what I use. If I want a global IP, Work your way out (Score:2) Pirates rejoice (Score:5, Interesting) This would be great for pirates, who the hell would the MPAA and RIAA sue if everybody in one region shared a single IP#? Re: (Score:2) That sounds nice, but in practice you probably wouldn't be able to connect at all. At least one side must have a public IP address for P2P to work (with TCP), or at least be able to open incoming ports with something like UPnP. What do you think the odds are of ISPs letting customers reserve incoming ports? UDP-based NAT traversal may be possible with help from a public server. Either way, the AAs would still be able to identify individual users via a combination of port and public IP address. Re:Pirates rejoice (Score:4, Funny) NAT is good (Score:3, Insightful) Okay, let's assume that IPv4 no longer exists... 1. Is Comcast going to give me unlimited IPv6 addresses? How will that work through my router? Do I now need to announce every device to Comcast? I REALLY like the fact that I get a single IP address, and I can port forward and use NAT as I like. 2. NAT makes for a pretty good firewall. I have Linux and Mac machines, and consumer devices, behind my current NAT router. With NAT and SPI, I have it pretty good. 
I really only ever use an outbound firewall to detect phone-home stuff and malware (and with Linux and Mac, surprise, surprise, there's not a lot of the latter). Hey, I understand the need for IPv6. I guess I just don't want to lose what NAT offers. Re:NAT is good (Score:4, Informative) 1. Is Comcast going to give me unlimited IPv6 addresses? How will that work through my router? Do I now need to announce every device to Comcast? You get a subnet, and your router routes the whole subnet. Just like with IPv4, coincidentally. NAT makes for a pretty good firewall. I have Linux and Mac machines, and consumer devices, behind my current NAT router. With NAT and SPI, I have it pretty good. As opposed to having a firewall, instead of having a firewall? Hey, I understand the need for IPv6. I guess I just don't want to lose what NAT offers. Like what? Nothing what you stated had anything to do with NAT as such. Re:NAT is good (Score:4, Insightful) You're right. NAT makes a pretty good firewall. But you know what makes an even better firewall? A FIREWALL. NAT is a money maker!!! (Score:5, Insightful) ISPs are licking their chops for this. They want to roll out NAT for all default consumer grade ISP connections. It solves problems with scarcity, they profit from scarcity (want public IP? You pay extra for it), and it will jack with routing of P2P data and thus cut down on the leeches. It's a WIN-WIN-WIN for the Telco and cable companies. If you guys think IP6 will be adopted, just wait till they find huge money in artificial scarcity of IP4 blocks. There will be no where to run and escape it! Unless you pay that premium... Re:Fuck you. (Score:5, Insightful) Considering that ultimately they're using public resources to provide a service, I do think they owe us at least something in exchange for making profits using our right of way or airwaves. Re: (Score:3, Insightful) It isn't his responsibility, this is basically the same problem we've seen in the wireless space, the people who actually control access don't bother to upgrade until the last minute, if even then, and without somewhere else to take your business, it's not a realistic option. I've heard that Comcast has IPv6 around here, but going back to them is a non-starter. They're far worse than the other options. Unless the end us Re: (Score:2) Re: (Score:2) The ISP in my area was, in fact, created by people who overcome the limitations placed on them. Other ISPs wouldn't run high speed cable internet up to where I live, so a few people formed a cooperative and did it themselves. As for getting a new ISP, it isn't an option. There aren't any other ones here. Not that there would be a reason to switch - They're the best one around. They're a better ISP than the ISP they lease their backbone line f Re: (Score:2) Are you stupid? If the public gives them money for something, they most definitely owe us some service. There's no hypocrisy involved. It's basic economics, and it's a situation where the average American is getting fucked from their tax dollars being paid for no value returned. The problem is that our elected morons didn't set the requirements. They don't not owe us services, they just don't LEGALLY owe us anything. Big difference. Fucktard.. You can't get better evidence of the incompetence of government than this. 
There's a dwindling resource that will run out in just a couple of years, impacts practically every person in every OECD country, yet have you heard of even one government agency, in any country, that is mandating IPv6 for consumer grade gear to force the vendors to solve the problem before it becomes critical? Of course not! That would require foresight and competence. About the only IPv6 push I'm hearing is that for government tenders in the US, IPv6 support is required, but that does nothing to solve the problem of hundreds of millions of home routers that are IPv4 only. No government on Earth has even bothered to lift a finger to solve a well known, easily predicted problem with a ready and tested solution that would cost the government no money whatsoever (it's just legislation!). Given that, now picture the level of competence you'd get from the same bunch of idiots when tasked with solving much bigger issues like global warming, peak oil, or overpopulation. Issues like that won't be critical for decades, have no obvious solution, and all possible solutions are expected to cost trillions. I can only imagine the level of incompetence that will no doubt ensue... Re: (Score:3, Interesting) Practically all of them can support IPv6 with a simple firmware update, but I'm betting the vendors would rather sell you a new router than provide that update. Re: (Score:2) I don't think my DSL router/modem supports IPv6. It's not a problem. I just run it in bridge mode, and leave the PPPoE support to my PC. (I did this even before enabling 6to4, because the router has ridiculously small NAT tables.) Every existing DSL router should be capable of acting as a simple PPPoA-to-PPPoE bridge. This may not work for cable router/modems; I've never had the chance to configure one. Re: (Score:3, Informative) If you have the skills to set up IPv6 just for kicks I seriously doubt you are dealing with what we out here in the field run into in most folk's homes, which is CCC, or "Cheap Chinese Crap". Trendnet/Zonenet, linksys, hell pick any under $50 router and see how many updates are sitting there for it on its home page. my guess it'll be like the Trendnet that is looking at me right now, which is zip. And unless things have changed in the less than 6 months I looked at routers there were exactly squat when it c Re: (Score:2) Mostly because it's expensive, painful, and older versions of most operating systems don't properly support it. No one wants to deal with the dramas before they absolutely have to. That and there's the fact that as far as I can tell the one and only killer feature of IPv6 is a larger address space and having every item have a publicly addressable IP, which isn't a really huge selling point especially when you consider that while IPv4 addresses are easy to remember, IPv6 addresses are not. Most people don't w Re: (Score:2) The changes that businesses make tend to be the ones that either improve their profit margins immediately or the things that consumers demand. Ever notice how lately every store has to have air conditioning? It's not because it's profitable per se, it's because if you want to have customers they have to come into the store, and they won't come into you Yup, just crazy (Score:5, Insightful) Add to this how many more NAT workarounds we will need to have in software. We already have to deal with NAT busting solutions, now we will have to deal with double NAT busting solutions. 
Believe me, NAT was a workaround to a limitation and we shouldn't be using this workaround at any more levels than necessary. There is only so much duct tape you can use before it is time to just accept you will have to install the new solution. If IPv6 appears so hard, its because people keep on waiting for someone else to take the plunge. If you are an IT professional, then is should be your business to understand and embrace IPv6, whether that is in your network or in your software. If your issue is with your router not supporting IPv6, then make some noise to your router's manufacturer, install a third-party firmware or go with a company already offering an IPv6 capable router. Re: (Score:2) If your issue is with your router not supporting IPv6, [get a new router] And if the issue is with neither the cable company nor the phone company offering IPv6 service, what next step do you recommend? Re: (Score:3, Insightful) p2. The transition to IPv6 is probably going to need some NAT64 and DNS64 magick at some point. Not everybody is going to be well-served by running dual-stack hosts and networks. I've heard that some mobile broadband providers are looking at various kinds of NAT tricks to keep IPv4 marginally functional for legacy applications on IPv6-only networks without resorting to expensive tunnel encapsulation mechanisms. Have you actually done a count of the number of addressable devices IPv6 provides. There may well Re: (Score:2) As long as whatever solution is transparent to the application, then that's what will make the most sense. If the applications are intranet only, then they could probably exist in their own IPv4 subnet with little regards for what is happening beyond their island. If they need internet connectivity then, they will probably still be okay for the next few years since existing IPv4 addresses won't vanish, they simply won't be able allocated anymore - I assume such applications will continue speaking to the sam Re: (Score:2) "That's why we have IPv6, which can grow for at least another century before there might conceivably be a problem." But isn't there trillions of possible addresses in IPv6? I don't think would run out of those for a long, long time.
https://tech.slashdot.org/story/10/10/05/2334213/can-large-scale-nat-save-ipv4?sdsrc=prevbtmprev
CC-MAIN-2017-30
refinedweb
7,473
71.75
Turn HTML into equivalent Markdown-structured text.

# html2text

html2text is a Python script that converts a page of HTML into clean, easy-to-read plain ASCII text. Better yet, that ASCII also happens to be valid Markdown (a text-to-HTML format).

Usage: `html2text.py [(filename|url) [encoding]]`

Or you can use it from within Python:

```python
import html2text
print html2text.html2text("<p>Hello, world.</p>")
```

Or with some configuration options:

```python
import html2text
h = html2text.HTML2Text()
h.ignore_links = True
print h.handle("<p>Hello, <a href=''>world</a>!")
```

_Originally written by Aaron Swartz. This code is distributed under the GPLv3._

## How to install

html2text is available on PyPI:

```
$ pip install html2text
```

## How to run unit tests

```
python test/test_html2text.py -v
```
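The README only shows ignore_links; as a small extra illustration, here is a hedged sketch of a couple of other options on HTML2Text. The ignore_images and body_width attributes exist in html2text releases of this era, but whether this exact 2014.7.3 version exposes both is an assumption.

```python
import html2text

h = html2text.HTML2Text()
h.ignore_links = True    # drop hyperlinks from the output
h.ignore_images = True   # assumed attribute: skip <img> tags
h.body_width = 0         # assumed attribute: 0 disables hard line wrapping

print h.handle("<h1>Title</h1><p>Some <b>bold</b> text.</p>")
```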
https://pypi.org/project/html2text/2014.7.3/
CC-MAIN-2018-51
refinedweb
151
52.05
[ ] Mikhail Markov updated HARMONY-3148:
------------------------------------

    Attachment: H-3148_2.patch

Sorry Leo, I'd almost prepared the patch when I read your message :-). This is the refactored patch based on the original one. The changes are the following:

1) Platform-dependent part is moved to portlib (as Mark suggested).
2) In addition to the default method with no params, a method with 2 parameters is added to be able to customize the thresholds: used memory threshold (in %) and available memory threshold.
3) Fixed Windows version by adding the "stat.dwLength = sizeof (stat);" line.
4) Fixed Linux version :-) as sysinfo(), according to the docs, returns the structure containing various values in memory units starting from kernel version 2.3.23 for x86, so to get the proper value in bytes the expression "info.freeram * info.mem_unit" is used, and I have Linux with kernel version 2.6.5-7.151. !!!TODO!!! here: figure out the kernel version, as before version 2.3.23 the structure indeed returns all the values in bytes and there is no mem_unit struct member.
5) Changed thresholds to 95% & 64M of memory respectively - I've checked them both on WinXP & Linux - works fine and smoothly with 1G of RAM. Of course these are debatable values.
6) Don't think we need to add a regression test for this, as in the recently contributed reliability suite some tests fail due to this issue, and as this suite will be added to CC, we could track regressions in the future. But just in case, here is the standalone test I used to verify the fix:

```java
import java.nio.*;

public class Test {
    public static void main(String[] args) {
        final int CAPACITY = 0x2000000; //20M
        for (int i = 0; i < 100; i++) {
            ByteBuffer.allocateDirect(CAPACITY);
            System.out.println("Iteration: " + i);
        }
    }
}
```

Before the patch: WinXP: OOM after 41 iterations; Linux: OOM after 80 iterations. After the patch: works OK on both platforms.

Could you please review the patch? Thanks!

> [Classlib][nio] alloc many DirectByteBuffers may cause memory-out-error
> -----------------------------------------------------------------------
>
>         Key: HARMONY-3148
>         URL:
>     Project: Harmony
>  Issue Type: Bug
>  Components: Classlib
>    Reporter: Jimmy, Jing Lv
> Assigned To: Tim Ellison
>
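Not part of the patch itself, but to make points 3) and 4) concrete, here is a minimal C sketch of the kind of free-memory query being discussed: GlobalMemoryStatusEx on Windows (which fails unless dwLength is set first - the bug fixed in 3) and sysinfo on Linux (where freeram must be scaled by mem_unit on newer kernels - the issue described in 4). The function name and return convention are illustrative only, not the actual portlib code from the patch.

```c
#ifdef _WIN32
#include <windows.h>

unsigned long long get_free_physical_memory(void)
{
    MEMORYSTATUSEX stat;
    stat.dwLength = sizeof(stat);   /* must be set before calling GlobalMemoryStatusEx */
    if (!GlobalMemoryStatusEx(&stat))
        return 0;
    return stat.ullAvailPhys;
}
#else
#include <sys/sysinfo.h>

unsigned long long get_free_physical_memory(void)
{
    struct sysinfo info;
    if (sysinfo(&info) != 0)
        return 0;
    /* since kernel 2.3.23 the fields are expressed in units of mem_unit bytes */
    return (unsigned long long)info.freeram * info.mem_unit;
}
#endif
```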
http://mail-archives.apache.org/mod_mbox/harmony-commits/200705.mbox/%3C22935169.1179412637404.JavaMail.jira@brutus%3E
CC-MAIN-2015-22
refinedweb
350
61.87
load from url issue

```python
# coding: utf-8
import ui
import feedparser
import urllib2
import webbrowser

class MyView (object):
    def __init__(self):
        x, y = ui.get_screen_size()
        self.url_list = []
        url = ''
        self.feed = feedparser.parse(url)
        tblview = ui.TableView()
        tblview.name = 'AppShopper'
        tblview.data_source = self
        tblview.delegate = self
        self.segview = ui.SegmentedControl(frame=((x/2)-125, -45, 250, 30))
        self.segview.segments = ['New', 'Updates', 'Price Drop']
        tblview.add_subview(self.segview)
        naview = ui.NavigationView(tblview)
        naview.present(hide_title_bar=True)

    def tableview_number_of_rows(self, tableview, section):
        return len(self.feed['entries'])

    def tableview_cell_for_row(self, tableview, section, row):
        feed = self.feed
        html = feed['entries'][row]['summary_detail']['value']
        self.url_list.append(html[html.find('href')+6:html.find('iTunes')-2])
        title = feed['entries'][row]['title']
        thmburl = html[html.find('http', 0, 100):html.find('png', 0, 100)+3]
        beg = html.find('Price')
        end = html.find(',', beg, beg+50)
        price = html[beg+11:end]
        cell = ui.TableViewCell('subtitle')
        cell.text_label.number_of_lines = 0
        cell.text_label.font = ('<system-bold>', 12.0)
        cell.text_label.text = title
        thumb = ui.ImageView()
        ui.delay(thumb.load_from_url(thmburl), 0)
        cell.image_view.image = thumb.image
        cell.detail_text_label.text = price
        return cell

    def tableview_did_select(self, tableview, section, row):
        webbrowser.open(self.url_list[row])

    def scrollview_did_scroll(self, scrollview):
        segmentindex = self.segview.selected_index
        scry = scrollview.content_offset[1]
        if scrollview.tracking:
            if scry < -75:
                self.segview.enabled = True
                self.segview.y = scry+30
                if scry < -76 and scry > -85:
                    self.segview.selected_index = 0
                elif scry < -86 and scry > -95:
                    self.segview.selected_index = 1
                elif scry < -96 and scry > -105:
                    self.segview.selected_index = 2
            else:
                self.segview.y = -45
                self.segview.enabled = False
        else:
            pass

MyView()
```

It's probably a user error, but when the code is run, the images aren't loaded until the cells are scrolled out of view. Any help appreciated!

The issue, I think, is that the ui.delay is loading the image while the rest of the code continues... So the first time around, thumb.image doesn't exist yet. You might consider the ui.delay to call a function which loads, then sets the image, i.e. something along the lines of (starting where you had the delay):

```python
        def load_and_show_image():
            thumb.load_from_url(thumb.url)
            cell.image_view.image = thumb.image
        ui.in_background(load_and_show_image())
        cell.detail_text_label.text = price
        return cell
```

As an aside... This code will try to reload the image every time the cell scrolls into view... I suspect the images never actually change, so this is pretty wasteful of network resources, and on a slow connection you might experience lots of lag. A better option might be to spawn a background thread that fills out an instance variable list of Images, which is populated once, starting when the view is created... The cell-for-row function would probably still have a similar function as above, except instead of loading from a url, you'd load from the cached list. If the background image cache filler has not gotten to the particular cell yet, you would load from url as above, or maybe just keep polling until the Image loaded (and perhaps exiting early if the cell is no longer on_screen... Actually not sure what happens if you have a reference to a cell which scrolls out of view, does it get deleted?).
For the second option (keep polling) you might want to also have a way of pushing URLs to the top of the queue... Also, note that in your code, url_list will grow and grow and grow when you scroll forward and back... I think you would want to initialize it to be the length of feed['entries'], then assign directly based on row, rather than appending... or else parse the URLs one time during init(), rather than for each row. Given you've already parsed the feed, I think it would be pretty quick.

You could also consider using the BeautifulSoup module provided in Pythonista to simplify HTML parsing:

```python
import bs4

soup = bs4.BeautifulSoup(html)
app_url = soup.a['href']
thmburl = soup.img['src']
price = soup.find('b', text='Price:').next_sibling.strip().rstrip(',')
```
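To make the "fill a cached list of images in the background" suggestion concrete, here is a rough sketch. It is not from the thread: the ThumbCache name is invented, and it assumes Pythonista's ui.Image.from_data and the ui.in_background decorator behave as documented.

```python
import urllib2
import ui

class ThumbCache (object):
    # Downloads every thumbnail once, off the UI thread; cells read from self.images.
    def __init__(self, urls):
        self.urls = urls
        self.images = [None] * len(urls)
        self.fill()

    @ui.in_background
    def fill(self):
        for i, url in enumerate(self.urls):
            try:
                data = urllib2.urlopen(url).read()
                self.images[i] = ui.Image.from_data(data)
            except Exception:
                pass  # leave the slot as None; that cell simply shows no thumbnail
```

tableview_cell_for_row would then do something like `img = cache.images[row]` and only fall back to the load-from-url path when the slot is still None.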
https://forum.omz-software.com/topic/1385/load-from-url-issue
CC-MAIN-2021-31
refinedweb
664
53.58
For the past several months I have been working on the SignalR C++ Client. The first, alpha 1 version has just shipped on NuGet and because there isn’t any real documentation for it at the moment I decided to write a blog post showing how to get started with it. The SignalR C++ Client NuGet package contains Win32 and x64 bits for native desktop applications and is meant to be used with Visual Studio 2013. Since the SignalR C++ Client ships on NuGet adding it to a project is easy. After creating a C++ project (e.g. a console application) right click on the project node in the solution explorer and select the “Manage NuGet Packages” option. In the Manage NuGet Package window make sure to include prelease packages in your search by selecting “Include Prerelease” in the dropdown and enter “SignalR C++” in the search window. Finally click the “Install” button next to the Microsoft ASP.Net SignalR C++ Client package which will install the SignalR C++ Client (and its dependency – C++ Rest SDK) into your project. You can also install the package from the Package Manager Console – just open the package manager console (Tools → NuGet Package Manager → Package Manager Console) and type: Install-Package Microsoft.AspNet.SignalR.Client.Cpp.v120.WinDesktop –Pre The SignalR C++ Client relies heavily on asynchronous facilities provided by the C++ Rest SDK (codename Casablanca) which in turn extensively uses lambda functions introduced in C++ 11. Understanding both – asynchronous programming and lambda functions is crucial to being able to use the SignalR C++ Client effectively. I started a blog mini-series on asynchronous programming in C++ which I would recommend to read if you are not familiar with these concepts. The SignalR C++ Client supports both programming models available in SignalR – Persistent Connections and Hubs. Persistent Connections is just a simple way of exchanging data between the server and the client while Hubs enable RPC-like programming where it is possible to invoke a method on the server from the client and vice versa. In this post I will show how to use the SignalR C++ Client to handle both – Persistent Connections and Hubs. Before we can move to the client code we need to set up a server. Our server will have two endpoints – one for persistent connections and one for hubs. To make it easy we will use the chat server from the SignalR tutorial as a starting point and we will add a persistent connection endpoint to it. Just follow the steps in the tutorial to create the server. (Alternatively you can get the code from ths SignalR C++ Client GitHub repo – just clone the repo and open the samples_VS2013.sln file with Visual Studio 2013). Note that if you follow the steps you will end up installing the latest stable available NuGet packages into your project instead of the version used in the tutorial and therefore you need to update the index.html file to use the version of the jquery.signalR-x.x.x.min.js that was installed into your project instead of the one used in the tutorial – i.e. if you installed version 2.2.0 of SignalR you will need to change <script src="Scripts/jquery.signalR-2.0.3.min.js"></script> to <script src="Scripts/jquery.signalR-2.2.0.min.js"></script> You should also set up index.html as the start page by right-clicking on this file in the Solution Explorer and selecting “Set As Start Page”. After you complete all these steps you should be able to run the server (just press Ctrl+F5) and send and receive messages from the browser window that opens. 
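For reference, after following the tutorial the Startup class ends up looking roughly like the sketch below (the namespace and the OwinStartup attribute depend on what you named the project, so treat the exact names as assumptions); the next step adds one more line to its Configuration method.

```csharp
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(SignalRServer.Startup))]

namespace SignalRServer
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Maps SignalR hubs to the default "/signalr" path.
            app.MapSignalR();
        }
    }
}
```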
Now we need to add a Persistent Connection endpoint to our server. It's quite easy – we just need to add the following class to the project:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

namespace SignalRServer
{
    public class EchoConnection : PersistentConnection
    {
        protected override Task OnConnected(IRequest request, string connectionId)
        {
            return Connection.Send(connectionId, "Welcome!");
        }

        protected override Task OnReceived(IRequest request, string connectionId, string data)
        {
            return Connection.Broadcast(data);
        }
    }
}
```

and configure the server to treat requests sent to the /echo path as SignalR persistent connection requests which should be handled by the EchoConnection class. Adding the following line to the Configuration method in the Startup class will do the trick:

```csharp
app.MapSignalR<EchoConnection>("/echo");
```

Our server should now be ready to use, so we can start playing with the client.

Using Persistent Connections

Our EchoConnection sends the "Welcome!" string to the client when it connects and then broadcasts messages it receives from the client to all connected clients. On the client side, after the client connects successfully, we will wait for the user to enter a string which will be sent to the server. We also print any message the client receives from the server. To be notified about messages we need to register a callback which will be invoked whenever a message is received. Finally, if the user types ":q" (this is the command you want to remember when you try to git commit on a new box but forgot to configure the text editor git should use) the client will close the connection and exit. The code that does all of it is shown below (again, you can get the code from the SignalR-Client-Cpp repo – it is in the PersistentConnectionSample.cpp file):
Note that we know that connection started successfully because if connection.start() threw an exception this continuation would not run at all because it is a value based continuation (you can find more on how exceptions in C++ async work in my blog post on this very subject). Whenever a user enters a message we send the message to the server using the connection.send() function. Sending messages happens in the fire-and-forget manner but we still need to handle exceptions to prevent from crashes caused by unobserved exceptions. When the user enters “:q” we break the loop and move on to the next continuation which stops the connection. This continuation is interesting because it actually can be invoked in one more case. Note that this is a task based continuation so it will be invoked always – even if a previous task threw. As a result this continuation is also an exception handler for the task that starts the connection. Moving back to stopping the connection – stopping a connection can potentially throw so again we need to handle the exception to prevent from crashes. There are two more important things. One is that tasks are executed asynchronously so we have to block the main thread to prevent the program from exiting and terminating all the threads (see another blog post of mine for more details). In our case we can just use the task::get() function – it is sufficient, simple and works. The second important thing is related to how we capture the connection variable in one of the continuations. We capture it by reference. We can do that because we block the main thread and therefore we ensure that the reference we captured will be always valid. In general case however capturing local variables by reference will lead to undefined behavior and crashes if the function that started a task exited before the task completes (or is even started) since the variable will go out of scope and the captured reference will no longer be valid. Blocking a thread to wait for the task works but usually is not the best way to solve the problem. If you cannot ensure that the reference will be valid when a task runs you should consider capturing variables by value or, if it is not possible (like in the case of the connection and hub_connection instances) capture a std::shared_ptr (or std::weak_ptr). Regardless of how you capture your variables (maybe except for primitive values captured by value) you need to make sure you work with them in a thread safe way because you never know what thread a task is going to run on. Using Hubs The sample for hub connections shows how to connect and communicate with the SignalR sample chat server. 
The code looks like this (you can also find it on GitHub in the HubConnectionSample.cpp): void send_message(signalr::hub_proxy proxy, const utility::string_t& name, const utility::string_t& message) { web::json::value args{}; args[0] = web::json::value::string(name); args[1] = web::json::value(message); proxy.invoke<void>(U("send"), args) // fire and forget but we need to observe exceptions .then([](pplx::task<void> invoke_task) { try { invoke_task.get(); } catch (const std::exception &e) { ucout << U("Error while sending data: ") << e.what(); } }); } void chat(const utility::string_t& name) { signalr::hub_connection connection{U("")}; auto proxy = connection.create_hub_proxy(U("ChatHub")); proxy.on(U("broadcastMessage"), [](const web::json::value& m) { ucout << std::endl << m.at(0).as_string() << U(" wrote:") << m.at(1).as_string() << std::endl << U("Enter your message: "); }); connection.start() .then([proxy, name]() { ucout << U("Enter your message:"); for (;;) { utility::string_t message; std::getline(ucin, message); if (message == U(":q")) { break; } send_message(proxy, name, message); } }) // fine to capture by reference - we are blocking // so it is guaranteed to be valid .then([&connection]() { return connection.stop(); }) .then([](pplx::task<void> stop_task) { try { stop_task.get(); ucout << U("connection stopped successfully") << std::endl; } catch (const std::exception &e) { ucout << U("exception when starting or stopping connection: ") << e.what() << std::endl; } }).get(); } Let’s quickly go through the code. First we create a hub_connection instance which we then use to create a ChatHub proxy. Then we use the on function to set up a handler which will be invoked each time the server invokes the broadcastMessage client method. Note that the callback takes a json::value as a parameter which is an array containing parameters for the client method. If you ever used the SignalR .NET client then you probably notice it is different from what you are used to. The SignalR .NET Client does not expose JSON directly but uses reflection to convert the JSON array to typed values which are then passed to the On method as parameters. Since there isn’t reach reflection in C++ it is the responsibility of the user to interpret the parameters the SignalR C++ Client passes to the lambda in the on function. Once the hub proxy is set up we can start the connection and wait for the user to enter a message which we will then send to the server by invoking the server side send hub method. Server side hub methods are invoked with the hub_proxy::invoke function. This function has two flavors. One for methods that don’t return values: hub_proxy::invoke<void>(...), and one used to invoke non-void hub methods: hub_proxy::invoke<json::value>(...). If a server side hub method returns a value it will be returned to the user as a json:value and, again, it is up to the user to make sense out of it. There are also convenience overloads of hub_proxy::invoke function you can use to invoke a parameterless hub method which don’t take the arguments parameter. They will save you a couple lines of code required to create an empty JSON array. Long running server side methods can notify the client about their progress. If the client wants to receive these notifications it can provide a callback that should be invoked each time a progress message is received. The callback is passed as the last parameter of the hub_proxy::invoke functions and defaults to a lambda expression with an empty body. 
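To make the progress-callback parameter concrete, here is a minimal sketch (not taken from the samples above) of invoking a long-running hub method while printing its progress notifications. The method name doLongRunningWork and the argument value are invented for illustration, and the payload type handed to the progress lambda is assumed to be a web::json::value, in line with how the rest of the hub API shown above passes data:

// Sketch only: "doLongRunningWork" is a hypothetical server-side hub method.
// Assumes progress notifications arrive as web::json::value, like other hub payloads.
web::json::value args{};
args[0] = web::json::value::number(100);

proxy.invoke<web::json::value>(U("doLongRunningWork"), args,
    [](const web::json::value& progress)
    {
        // invoked each time the server reports progress for this invocation
        ucout << U("progress: ") << progress.serialize() << std::endl;
    })
    .then([](pplx::task<web::json::value> invoke_task)
    {
        try
        {
            auto result = invoke_task.get();
            ucout << U("result: ") << result.serialize() << std::endl;
        }
        catch (const std::exception& e)
        {
            ucout << U("error while invoking hub method: ") << e.what() << std::endl;
        }
    });

As in the earlier samples, the task-based continuation is there only to observe exceptions, so a transient network failure surfaces as a caught exception rather than an unobserved-exception crash.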
The sample chat server does not have any method sending progress messages but if you are interested you can check SignalR end to end tests that tests this scenario. That’s mostly it. The remaining code is just closing the connection and handling exceptions is done the same way as for Persistent Connection. One important thing worth noting is how we capture the hub_proxy instance in the lambda – we do it by value. hub_proxy type has semantics of std::shared_ptr where all copies point to a single implementation instance which won’t be deleted as long as there is at least one instance that has a reference to it. As a result if you capture a hub_proxy instance by value in a lambda it will be valid even if the original variable is not around anymore. (Not that this matters in this particular example since we are blocking the thread anyways so even if we captured the hub_proxy instance by reference everything would work since the original variable does not go out of scope until the connection is closed). Doing the right things There are a few things you need to be aware of when working with the SignalR C++ client. Prepare the connection before starting SignalR C++ Client uses callbacks to communicate with the user’s code. However you can only set these callbacks when the connection is in the disconnected state. Otherwise an exception will be thrown. Similarly, when using hubs you have to create hub proxies before you start the connection. Process messages fast or asynchronously The callbacks invoked when a message is received ( connection::set_message_received, hub_proxy::on) are invoked synchronously from the thread that receives messages. This is to ensure that the callbacks are invoked in the same order as the order the messages were received. The drawback is that the new messages won’t be received until the callback completes processing the current message. Therefore you need to process messages as quickly as possible or, if you don’t care about order, process messages asynchronously. Handle exceptions Any kind of network communication is susceptible to errors. Intermittent connection losses and timeouts may and will happen and they will result in exceptions. These exceptions have to be handled otherwise your app will crash due to an unobserved exception. To make exception handling easier the SignalR C++ Client follows a pattern where functions returning pplx::task<T> will not throw exceptions in case of errors but will instead return a faulted task. This saves the user from having to have an exception handler in addition to handling exceptions in a task based continuation. (You can read more about handling exceptions when using tasks here. Capture connection and hub_proxy instances correctly Copy constructors and copy assignment operators of the connection and hub_connection classes are intentionally deleted. This was done to prevent from capturing connection and hub_connection instances by value in lambda expressions (I also don’t think there is a clear answer as to what the operation of copying a connection should do). The issue with capturing connection and hub_connection instances by value is that because these classes use a std::shared_ptr pointing to the actual implementation it is possible to create a cycle which would prevent from destroying the connection instance (i.e. the destructor wouldn’t run). 
The cycle would result not only in a memory leak but would also keep the connection running after the variable went out of scope if the connection was not explicitly stopped – the destructor is responsible for stopping connections that were not stopped explicitly. The cycle would be created if a connection/hub_connection instance was captured by value in callbacks that are passed back to and stored in the connection or hub_connection instances i.e. callbacks passed to connection::set_message_received, set_reconnecting, set_reconnected, set_disconnected functions on both connection and hub_connection. While deleting copy constructors and copy assignment operators on connection and hub_connection classes makes it more difficult to create a cycle it does not make it impossible – you could still inadvertently create a cycle if you capture a std::shared_ptr<connection> or std::shared_ptr<hub_connection> by value. So, what to do? The safest way is to create a weak pointer ( std::weak_ptr) to the connection and capture this pointer. Then in the callback you will need call std::weak_ptr::lock() function to obtain a shared pointer to the connection. You need to check the return value of the std::weak_ptr::lock() function – it will return the nullptr if the instance it points to was destroyed (in which case you probably will want just to exit the callback). You could also try capturing the connection instance by reference by you will have to ensure that the reference is always valid when the callback is executed which sometimes might be hard to do. Note that capturing hub_proxy instances by value is fine. Hub_proxy is linked back to the connection using a weak pointer so capturing hub_proxy instances by value won’t create cycles. If you, however, try invoking a function on a hub_proxy instance that outlived its connection an exception will be thrown. Stop connection explicitly While the connection is being stopped when the instance goes out of scope it is recommended to explicitly stop connections. Stopping the connection explicitly has a few advantages: - Throwing an exception from the destructor in C++ results in undefined behavior. As a result the connection class destructor (or to be more accurate the connection_impl dtor) catches and swallows all the exceptions. When stopping connections explicitly all the exceptions are passed back to the user (in form of a faulted task) giving the user a chance to handle them the way they see it fit - Stop is an asynchronous operation but the destructor isn’t. Therefore when the connection is running when the destructor is called the destructor blocks and waits until the stop operation completes. Since the destructor runs in the current thread you might experience “unexpected” delays – “unexpected” because the delay will happen even though you didn’t invoke any function explicitly. Rather the destructor is just called automatically for you when the variable leaves the scope (or when you call delete on dynamically allocated instances). These delays could be especially annoying if the current thread the destructor is running in happens to be the UI thread - Relying on the destructor stopping the connection may result in the connection not being stopped at all. Internally the connection is using std::shared_ptrpointing to the actual implementation which will be destroyed only when no one is referencing it. 
In case of a bug where the connection implementation instance stores a callback that captures its connection instance a cycle is created and the reference count will never reach 0. In this situation the destructor will never be called which means that not only would memory be leaked but also that the connection would never be stopped Logging The SignalR C++ Client is able to log its activities. You can control what activities are being logged and how they are logged. To control what’s being logged pass a trace_level to the connection or hub_connection constructor. The default setting is to log all activities. To control how the activities are logged you need to create a class derived from the log_writer class and pass a std::shared_ptr pointing to your writer in the when creating a connection/hub_connection instance. Your implementation has to ensure that logging is thread safe as the SignalR C++ Client does not synchronize logging in any way. If you don’t provide you own log_writer the SignalR C++ Client will use the default implementation that uses the OutputDebugString function to log entries. This is especially useful when debugging the client with Visual Studio since entries logged this way will appear in the Visual Studio Output window. Limitations While the SignalR C++ Client is fully functional it contains a couple of limitations. Currently the client supports only the webSockets transport. It also does not support detecting stale connections using the heartbeat mechanism. (The way heartbeat works in other clients is that the server sends keep alive messages every few seconds and if the client misses a few of these keep alive messages it will consider the connection to be stale/dead and will try restarting the connection. The SignalR C++ Client currently ignores keep alive messages). Finally, the SignalR C++ Client does not support sending or receiving State information. Future This is only an alpha 1 release. I expect there will be some bugs that have not been caught so far and fixing them should be a priority. Another interesting exercise is to try to make the SignalR C++ Client work on other platforms – specifically on Linux and Mac OS. It should be possible because the SignalR C++ Client is built on top of C++ REST SDK which is cross platform. Finally – depending on the feedback – adding new transports (at least the longPolling transport) may be something worth looking at. Hi, great work! I’m looking forward to possible future ports to Linux (cpprestsdk being cross-platform as you said bodes well). Thanks for the kind words. Conincidentally, I was playing with this a little bit this weekend: Hi Pawel We’re very interested in your C++ SignalR library. Do you have a roadmap or even a planned release date? Would be great to also have long polling in the library… We could even contribute to the code base if necessary. Please mail me back whenever possible. I would like to get in touch with you. Thanks. Regards, Armin Hi Armin, The SignalR C++ Client project is owned by Microsoft. At this point there isn’t a firm release date (if you think about a traditional RTM). On the other hand the code is pretty stable even at the current state – we have not heard any complaints so far. Can you tell me more about your scenario where you need the long polling transport? Thanks, Pawel Hi Pawel Thank you for your quick response. It’s good to know that the current state seems to be stable. 
We need long polling because our (server- and client-) components also run on Windows 7 as self-hosted (OWIN) components. So long polling would just be a fallback for environments which don’t have web sockets support. Thanks, Armin Hi Pawel I’m sorry for bothering you again. Do you have any plans of adding long polling? Could we somehow contribute to the code base? Thanks. Regards, Armin Hi Armin, At the moment I am spending like 110% time at work working on ASP.NET 5. I would like to add long polling transport to SignalR C++ Client but due to the workload I can only do this as a hobby and there is only so much I can do in my free time. I will try to take a stab at it soon but can’t promise any timeline at this point. The SignalR C++ client is open source and you can definitely try to add this – either just for yourself or contribute back. Thanks, Pawel Hi Pawel, As explained by Armin we need long polling just as a fallback for operating systems which don’t have web socket support. We have C++ components that can also run on win7 … We’re very interested in the SignalR C++ client library, especially we would know the actual roadmap to fix the limitations and the planned release dates. The idea is to use C++ Rest SDK and SignalR C++ client together. We need similar features/behaviors of SignalR provided by ASP:NET library. Keep us informed Thanks in advance Domenico Hi Domenico, Thanks for sharing your scenario. I am thinking about adding the long polling transport but am not sure yet when this will be ready. What other features do you have in mind when you say “similar features/behaviors of SignalR provided by ASP.NET library”? Thanks, Pawel Hi Pavel, I haven’t specific features in mind, I’m just thinking if SignalR C++ client has other limitations compared with ASP.NET library. Consider that in meantime we wrapped the ASP.NET library to use SignalR in a C++ application. The prototypes seem to work but they add additional layers … The right and clean solution is to use a complete SignalR C++ client library with the support of the long polling transport. Keep us informed if you have an idea when it could be ready. Thanks, Domenico Hi, When I test locally my app, it works OK with local version of server. But when I publish it to internet, I receive this error in my client: “exception when starting or stopping connection: websockets not supported on the server and there is no fallback transport”. My server runs on Azure it’s c# app. AFAIR websockets are disabled by default on Azure. You need to turn it on in the portal. Pawel Moozzyk, Your C++ library seems to be what I need at the moment. Thank you for all of your work on this. One thing that I notice missing is support for Mac. Any plans for this or are you aware of a different fork which supports this? I figured I’d see if anyone else has already ported prior to trying myself. I’m lazy 🙂 I made some progress some time ago and was able to make the client compile on Mac but had some link errors due to boost mismatches. I can push it to a branch if you want to take a look. I have not had time to look into this in the past few months. I pushed the changes to this branch:. Note that you need to build casablanca (cpprestsdk) first. I’ll take a look at this to see if I can get it working. Thanks for your quick reply. Off the wall subject. Is there a BKM for how to get signalr installed on win2k12/IIS8 servers in the first place. I have no idea how to do that. Perhaps it’s obvious and I’m just not seeing it. 
I’m not seeing instructions on how to do this – even on the MS IIS site. You create an application (for instance as described here:) and then deploy this application to IIS the same way you would deploy any other web app. Hope this helps, Pawel Hi, I have already worked a Selfhost with C# and JS client’s, but now I need to use it on c++, I did the sample that you post on Github , but when I redirect to my Selfhost service doesn’t work, can you help me? … please..!!! Define “doesn’t work”. The tests in this very repo use a selfhost test server so it should just work. if you have a chance, take a look to my project. thanks. I think I am lost. Where is the code you are trying to run? I see you forked/cloned/added SignalR C++ client – what is the purpose of this? Why aren’t you using NuGet packages? this is my new repository . Actually, I have a huge project that work’s with SignalRService (console application) and execute many things, but I decided to make an isolated project that just makes a connection by Hubconnection (HubConnectionSample project) to SignalService project, that starts the service on “”. But the connection is not successful and I get an exception -> exception when starting or stopping connection: not a number <-. Then I create another project (HubConnnection c# project) to make a test and it works, make connection successfully. Thanks. The repository that I comment on my last Reply it’s clean and has the NuGet Packages, To Test, you have to Run the project SignalRService (c# Sample) and after HubConnectionSample (c++ Sample) So, the issue is that the server is using an ancient version of SignalR – it is using SignalR 1.x and SignalR C++ client can only work against 2.2.x version of the server. The exception you are seeing is thrown when the client is trying to parse the negotiate response and tries to read the “TransportConnectTimeout” as a number () but this property does not exist in older protocols. The funny part (or the real bug) is that the client checks the protocol version and throws a meaningful exception when the version is lower than expected but this check relies on being able to parse the negotiation response and parsing the negotiation response fails in this case. To fix this you need to update the server to- ideally – the latest version of SignalR 2.x (which is 2.2.2). thank you so much, I really appreciate your help! 🙂 , have a nice day. well, I think, that I’m no doing something well. I have an exception. -> exception when starting or stopping connection: not a number <- The C++ Client code that throw the exception. 😥 void chat(const utility::string_t& name) { signalr::hub_connection connection{ U("") }; auto proxy = connection.create_hub_proxy(U("messageReceivingHub")); proxy.on(U("addInvoke"), [](const web::json::value& m) { ucout << std::endl << m.at(0).as_string() << U(" wrote:") << m.at(1).as_string() << std::endl << U("Enter your message: "); }); connection.start() . wait . 
then([proxy, name]() { ucout << U("Enter your message:"); for (;;) { utility::string_t message; std::getline(ucin, message); if (message == U(":q")) { break; } send_message(proxy, name, message); } }) .then([&connection]() // fine to capture by reference - we are blocking so it is guaranteed to be valid { return connection.stop(); }) .then([](pplx::task stop_task) { try { stop_task.get(); ucout << U("connection stopped successfully") << std::endl; } catch (const std::exception &e) { ucout << U("exception when starting or stopping connection: ") << e.what() << std::endl; } }).get(); getchar(); } The C# client that works. static void HubConn(string _name, string _message) { try { var hubConnection = new HubConnection(""); IHubProxy HubProxy = hubConnection.CreateHubProxy("messageReceivingHub"); HubProxy.On("addInvoke", (name, message) => Console.WriteLine("Nombre: {0}, Mensage {1}", name, message)); hubConnection.Start().Wait(); HubProxy.Invoke("invoke", _name, _message).Wait(); Console.ReadLine(); } catch (Exception ex) { if (ex.InnerException.Message.Contains("refused")) Console.WriteLine("Error: Servicio de captura dactilar no disponible: " + ex.InnerException); else Console.WriteLine(ex.Message, "Error"); Console.ReadLine(); } } The code of Hub using System; using System.Linq; using System.Collections.Generic; using System.Threading.Tasks; using System.Configuration; using Microsoft.AspNet.SignalR; using Microsoft.AspNet.SignalR.Hubs; namespace SignalR.SelfHost.Server.Hubs { /// ///Clase que comunica el cliente web con el servicio de identitum local /// [HubName("messageReceivingHub")] public class messageReceivingHub : Hub { private void Response(string response) { Console.WriteLine(response); Clients.Caller.clientResponse(response); } public override Task OnConnected() { Console.WriteLine("Client connected: " + Context.ConnectionId); return base.OnConnected(); } public void invoke(string name, string message) { Boolean succes = false; try { succes = true; Console.WriteLine("Invoke, InitializeCatalogs OK"); Clients.Caller.addInvoke(name, message); } catch (Exception ex) { Console.WriteLine("Error en hub invoke: " + ex.Message); succes = false; } } } } Do you happen to know the granularity of the DefaultMessageBufferSize setting? I’m curious if it applies to the number of messages for a particular connection or all of the connections which a particular persistent connection or hub are serving? Put another way, does each connection get it’s own DefaultMessageBufferSize or is it shared between them? This is the size of a ring buffer storing messages per connection.
https://blog.3d-logic.com/2015/05/20/signalr-native-client/
CC-MAIN-2018-17
refinedweb
5,364
62.88
40267/how-to-find-files-and-skip-directories-in-os-listdir When I am using os.listdir I am ...READ MORE You probably want to use np.ravel_multi_index: import numpy as ...READ MORE It appears that a write() immediately following a read() on a ...READ MORE suppose you have a string with a ...READ MORE You can also use the random library's ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE Enumerate() method adds a counter to an ...READ MORE Open the command line and enter jupyter notebook ...READ MORE The global variable can be used in ...READ MORE
https://www.edureka.co/community/40267/how-to-find-files-and-skip-directories-in-os-listdir
CC-MAIN-2022-21
refinedweb
127
71.31
Github user NightOwl888 commented on the issue: Sure. I am currently working on, which I plan to merge into #179 (probably within the next few hours). About the only thing left to port is the [collation namespace](), a few missing tests and some obsolete functionality (I wasn't planning on doing these, so you are welcome to them). You might want to also check to make sure I didn't miss anything else. There are also currently 45 failing tests out of about 1400. Most of the failing tests are in the Synonym and Th namespaces if you want to have a go at them. Additionally, I haven't gone through the API to ensure the accessibility of fields and members is the same as it was in Java. Certain functionality, such as the Pattern namespace could use some more overloads to make them more .NET friendly (for example, passing regex's in as strings/regex options in addition to passing in pre-made Regex objects). I am sure there must be other tweaks to the API that need to be done, I was just giving this as an example. FYI - I am currently working on the Hunspell namespace - the rest is fair game. Just make your intentions clear so we don't duplicate. ---
http://mail-archives.eu.apache.org/mod_mbox/lucenenet-dev/201608.mbox/%[email protected]%3E
CC-MAIN-2019-35
refinedweb
214
70.84
Hello, please help me with the following: I am trying to open a file with a specific application using Java. I am working on a desktop application. The following code using java.awt.Desktop works for known extension types, but when trying to open a file with an unknown extension it throws an exception. I know this is expected to happen, because it's in the class documentation, but what I want to ask you is whether there is another way of popping up the Windows "open with" dialog through Java. I just don't want to leave unknown extension file types untreated. This is the code I use, and it works, just that it throws an exception for unknown file types: import java.awt.Desktop ... if (Desktop.isDesktopSupported()) { desktop = Desktop.getDesktop(); temp = new File(localPath); desktop.open(temp); } Thank you very much
https://www.daniweb.com/programming/software-development/threads/139035/open-unknown-file-types-through-java
CC-MAIN-2018-47
refinedweb
143
66.74
Compatibility - 2.0.0 and master5.35.25.15.04.2 - 2.0.0 and masteriOSmacOS(Intel)macOS(ARM)LinuxtvOSwatchOS Swift library for Data Visualization 📊 three rendering backends to generate plots: To encode the plots as PNG images it uses the lodepng library. SwiftPlot can also be used in Jupyter Notebooks, with Python interop support for Google Colab.. SwiftPlot is licensed under Apache 2.0. View license Add the library to your projects dependencies in the Package.swift file as shown below. dependencies: [ .package(url: "", from: "2.0.0")), ],. Add these lines to the first cell: %install-swiftpm-flags -Xcc -isystem/usr/include/freetype2 -Xswiftc -lfreetype %install '.package(url: "", from: "1.0.28")' Cryptor %install '.package(url: "", from: "2.0.0")' SwiftPlot AGGRenderer In order to display the generated plot in the notebook, add this line to a new cell: %include "EnableJupyterDisplay.swift" If you wish to display the generated plot in a Google Colab environment, add these lines to a new cell instead: import Python %include "EnableIPythonDisplay.swift" func display(base64EncodedPNG: String) { let displayImage = Python.import("IPython.display") let codecs = Python.import("codecs") let imageData = codecs.decode(Python.bytes(base64EncodedPNG, encoding: "utf8"), encoding: "base64") displayImage.Image(data: imageData, format: "png").display() } Note that because Google Colab doesn't natively support Swift libraries that produce rich output, we use Swift's Python interop as a workaround. For computers running MacOS or Windows, Docker instance is to easy to setup and use swift-jupyter. Please refer SwiftPlot_Docker_setup.md for setup instructions.. More tests can be found in the SwiftPlotTests folder.) import SwiftPlot import AGGRenderer let x:[Float] = [10,100,263,489] let y:[Float] = [10,120,500,800] let x1:[Float] = [100,200,361,672] let y1:[Float] = [150,250,628,800] var agg_renderer: AGGRenderer = AGGRenderer(). Built-in Colors can be found here. If you want to contribute to improve this library, please read our guidelines. Feel free to open an issue.
https://swiftpackageindex.com/KarthikRIyer/swiftplot
CC-MAIN-2021-10
refinedweb
321
53.37
var isPhone = function() { return (/android|iphone/i.test(navigator.userAgent.toLowerCase()) && !(/ipad/i.test(navigator.userAgent.toLowerCase()))); }; Please revisit your question on SO here. I have added an answer there. thank you tommy and thomson. I tried the same thing and it worked perfectly for me. But Now I have a "Cancel" button that should remove the popover as soon as clicked. How to dismiss the popover... Here is a sample code, hope it helps. var sbTpl = new Ext.XTemplate('<tpl for=".">', '<div class="sb-section-title">{title}</div>', '<div class="sb-section-score">{correct} of {total}... I am going through the same problem, I am expecting an XML response from the following request: REQUEST: ... I Couldn't make this work or use it. I have 20 list Items and all are supposed to have different background color. I tried so many things but could get the background color properly. By all means... @mitchellsimoens Thanks a lot for the response. I am sorry I could'nt update the status of this issue after finding the solution. My code was all correct but the problem was the Localhost... @doncoleman I have posted my problem on 3 different threads, but I din't get any solution or suggestion for that. I am facing the same problem and tried your suggestion but din't worked. With... Hello All, I am new to Sencha touch and am trying to load XML file data into a List. I tried around 10 examples but even the simplest din't worked. With all the experiments one thing I found in... There is an XML response that I have to parse and get JSON. <?xml version="1.0" encoding="utf-8"?> <string... Here is the simple XML file that I have to parse: And this is how I am doing it: in WOList.js My problem is I am not getting anything in the list. THis is the first time using...
https://www.sencha.com/forum/search.php?s=de741454149750b7d8cfb0fbb4b65e96&searchid=19740313
CC-MAIN-2017-43
refinedweb
343
69.18
With nilearn, is it possible to extract voxel coordinates from every voxel within an ROI image consisting of ones and zeros? If you have a nibabel image as img, you can do the following:

import numpy as np
import nibabel as nib

data = img.get_fdata()  # get image data as a numpy array
idx = np.where(data)  # find voxels that are not zeroes
ijk = np.vstack(idx).T  # list of arrays to (voxels, 3) array
xyz = nib.affines.apply_affine(img.affine, ijk)  # get mm coords

See this blog post for the mm to voxel and voxel to mm conversions with nibabel.
https://neurostars.org/t/extract-voxel-coordinates/7282
CC-MAIN-2022-21
refinedweb
103
64.2
The GetPositionList.srv file was not found. Tagged: ROS_MASTER_URI There is a GetPositionList in niryo_one_msgs / srv. When rpi_example_python_api.py is executed, the following error occurs. niryo@niryo-desktop:~/catkin_ws/src/niryo_one_python_api/examples$ python rpi_example_python_api.py Traceback (most recent call last): File "rpi_example_python_api.py", line 4, in <module> from niryo_one_python_api.niryo_one_api import * File "/home/niryo/catkin_ws/src/niryo_one_python_api/src/niryo_one_python_api/niryo_one_api.py", line 34, in <module> from niryo_one_msgs.srv import GetPositionList ImportError: cannot import name GetPositionList I want to know why an error occurs. Edouard Renard (Keymaster), July 10, 2018 at 4:51 pm, Post count: 133 I have two problems. 1. After catkin_make -j2, the following error occurred. Unable to register with master node [http://localhost:11311]: master may not be running. Will keep trying. Changing the localhost part did not change anything. 2. Robot's LED does not change from red to blue. I waited. I have been waiting a day. Is it related to the above error ?? Edouard Renard (Keymaster), July 12, 2018 at 3:15 pm, Post count: 133 Could you explain the exact steps you followed before you got this error ? Also, yes, for your question 2., the LED will not switch to blue, because it seems that the Niryo One ROS stack is not running in the first place. Could you also try to flash a new microSD card with the 1.1.0 Rpi Image (you can download it here), and see if everything is working well ?
https://niryo.com/forums/topic/the-getpositionlist-srv-file-was-not-found/
CC-MAIN-2019-04
refinedweb
253
61.83
#include <deal.II/dofs/function_map.h> This class declares a local typedef that denotes a mapping between a boundary indicator (see GlossBoundaryIndicator) that is used to describe what kind of boundary condition holds on a particular piece of the boundary, and the function describing the actual function that provides the boundary values on this part of the boundary. This type is required in many functions in the library where, for example, we need to know about the functions \(h_i(\mathbf x)\) used in boundary conditions \begin{align*} \mathbf n \cdot \nabla u = h_i \qquad \qquad \text{on}\ \Gamma_i\subset\partial\Omega. \end{align*} An example is the function KellyErrorEstimator::estimate() that allows us to provide a set of functions \(h_i\) for all those boundary indicators \(i\) for which the boundary condition is supposed to be of Neumann type. Of course, the same kind of principle can be applied to cases where we care about Dirichlet values, where one needs to provide a map from boundary indicator \(i\) to Dirichlet function \(h_i\) if the boundary conditions are given as \begin{align*} u = h_i \qquad \qquad \text{on}\ \Gamma_i\subset\partial\Omega. \end{align*} This is, for example, the case for the VectorTools::interpolate() functions. Tutorial programs step-6, step-7 and step-8 show examples of how to use function arguments of this type in situations where we actually have an empty map (i.e., we want to describe that no part of the boundary is a Neumann boundary). step-16 actually uses it in a case where one of the parts of the boundary uses a boundary indicator for which we want to use a function object. It seems odd at first to declare this typedef inside a class, rather than declaring a typedef at global scope. The reason is that C++ does not allow to define templated typedefs, where here in fact we want a typedef that depends on the space dimension. (Defining templated typedefs is something that is possible starting with the C++11 standard, but that wasn't possible within the C++98 standard in place when this programming pattern was conceived.) Definition at line 74 of file function_map.h. Declare the type as discussed above. Since we can't name it FunctionMap (as that would ambiguate a possible constructor of this class), name it in the fashion of the standard container local typedefs. Definition at line 81 of file function_map.h.
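As a short usage sketch (not part of the manual page above), the typedef is typically filled with pointers to Function objects keyed by boundary indicator and then passed to a function such as KellyErrorEstimator::estimate(); here neumann_bc, dof_handler, solution and triangulation are assumed to exist elsewhere in the program:

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/function_map.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/error_estimator.h>

// Map boundary indicator 1 to the Neumann boundary function h_1.
typename FunctionMap<dim>::type neumann_boundary;  // std::map<types::boundary_id, const Function<dim>*>
neumann_boundary[1] = &neumann_bc;

// Hand the map to the error estimator; passing an empty map instead
// (as in step-6) declares that no part of the boundary is of Neumann type.
Vector<float> estimated_error_per_cell (triangulation.n_active_cells());
KellyErrorEstimator<dim>::estimate (dof_handler,
                                    QGauss<dim-1>(3),
                                    neumann_boundary,
                                    solution,
                                    estimated_error_per_cell);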
https://dealii.org/8.5.0/doxygen/deal.II/structFunctionMap.html
CC-MAIN-2018-34
refinedweb
405
54.56
Could anybody help me to figure out what is wrong with this script. I am new to ruby and this script is supposed to make a playlist, which it does do, but when I try to use the playlist it is empty. Thanks.

def shuffle(arr)
  shuf = []
  while arr.length > 0
    rand_sel = rand(arr.length)
    curr_sel = 0
    new_arr = []
    arr.each do |x|
      if curr_sel == rand_sel
        shuf.push(x)
      else
        new_arr.push(x)
      end
      curr_sel = curr_sel + 1
    end
    arr = new_arr
  end
  shuf
end

songs = shuffle(Dir[' **/*.mp3'])

File.open "dat playlist.m3u", "w" do |x|
  songs.each do |mp3|
    x.write mp3 + "\n"
  end
end

puts "Das it mane. Das it"
https://www.ruby-forum.com/t/cant-seem-to-find-whats-worng/224100
CC-MAIN-2021-31
refinedweb
110
97.2
Ok, I added in another class to keep track of tile steps (sounds useless doesnt it ?:P) Anyways, when I tried to add it in, it gave me a weird error... It seems my compiler always gives me a new error thats totaly weird -,-.

Code:
Borland C++ 5.5.1 for Win32 Copyright (c) 1993, 2000 Borland
C:\MazeOfPain.cpp:
Error E2176 C:\KeeperClass.h 3: Too many types in declaration
*** 1 errors in Compile ***

Ok, the weird thing is, I have only declared threee classes. Keeper, Enemy, WeakWall. The codes for each (Starting from WeakWall and ending with Enemy.)

Code:
#include <windows.h>
#include <iostream>
#include <string.h>
#include <conio.h>
#include <fstream>
using namespace std;

class WeakWall
{
private:
    SHORT Strength;
    SHORT SteppedOn;
    SHORT x;
    SHORT y;
public:
    WeakWall() {Strength=1; SteppedOn=0; x=0; y=0;}
    WeakWall(SHORT str, SHORT Step, SHORT x_pos, SHORT y_pos) {Strength=str; SteppedOn=Step; x=x_pos; y=y_pos;}
    SHORT GetX() {return x;}
    SHORT GetY() {return y;}
    SHORT GetStr() {return Strength;}
    SHORT GetWStatus() {return SteppedOn;}
    void Reset() {Strength=1; SteppedOn=0; x=0; y=0;}
}

Code:
#include "WeakWall.h"

class Keeper
{
private:
    SHORT zzx[5];
public:
    Keeper() {zzx[0] = 0; zzx[1] = 0; zzx[2] = 0; zzx[3] = 0; zzx[4] = 0;}
    Keeper(SHORT z, SHORT zz, SHORT zzz, SHORT zzzz, SHORT b) {zzx[0] = z; zzx[1] = zz; zzx[2] = zzz; zzx[3] = zzzz; zzx[4] = b;}
    void Check(SHORT x) {zzx[x] = 1;}
    void Uncheck(SHORT x) {zzx[x] = 0;}
    void UncheckAll() {zzx[0] = 0; zzx[1] = 0; zzx[2] = 0; zzx[3] = 0; zzx[4] = 0;}
    SHORT IsChecked(SHORT x);
};

SHORT Keeper::IsChecked(SHORT x)
{
    if(zzx[x] == 1)
        return 1;
    return 0;
}

My compiler gives me no other errors (thankfully -,-) but It just seems to not like that I declared three classes. Though I dont know why it would mind that as I'm sure some people declare many more.

Code:
#include "KeeperClass.h"

#define Enemy1 '@'
#define Enemy2 '&'
#define Enemy3 '$'
#define Enemy4 '%'
#define No 10 //Movement Definitions.
#define Ea 15
#define So 20
#define We 25

HANDLE hOt = GetStdHandle(STD_OUTPUT_HANDLE);
HANDLE hIn = GetStdHandle(STD_INPUT_HANDLE);

class Enemy
{
private:
    char Ent;
    int ex_pos;
    int ey_pos;
    int tag;
    int mproc;
    SHORT alive;
public:
    Enemy() {Ent=0;ex_pos=0;ey_pos=0;tag=0;mproc=0;alive=0;}
    Enemy(char t, int ex, int ey, int etag, int moving, SHORT live) {t=Ent;ex=ex_pos;ey=ey_pos;etag=tag;moving=mproc;live=alive;}
    char GetType() {return Ent;}
    int GetX() {return ex_pos;}
    int GetY() {return ey_pos;}
    int GetTag() {return tag;}
    int GetMovement() {return mproc;} //40
    SHORT IsAlive() {return alive;}
    void SetType(char c) {Ent=c;}
    void SetX(int x) {ex_pos=x;}
    void SetY(int y) {ey_pos=y;}
    void SetTag(int Tag) {tag=Tag;}
    void SetMovement(int Move) {mproc=Move;}
    void SetLive(int live) {alive=live;}
};

So, what is this error? Wish it was more discriptive -,-.

[Edit] To avoid two posts, is it possible to create an array based on an int? IE: Or is their some way I can fool my compiler by using a define? #define x; ?

Code:
int y=7;
int x[y];
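One plausible diagnosis, judging only from the code shown (it is not confirmed anywhere in this thread): the WeakWall class definition ends with a closing brace but no semicolon, and in C++ a missing ; after a class body usually surfaces as a confusing error in whichever file includes that header next (here KeeperClass.h, which is exactly where Borland reports E2176). On the [Edit] question: in C++98 a raw array bound must be a compile-time constant, so int y=7; int x[y]; is not portable; use a const int (or an enum) for fixed sizes, or std::vector for sizes only known at run time. A minimal sketch:

// WeakWall.h -- note the semicolon after the class body
class WeakWall
{
    // ... members as posted ...
};   // <-- this semicolon is missing in the posted header

// Array whose size is a compile-time constant:
const int y = 7;
int x[y];            // fine, y is a constant expression

// Size only known at run time:
#include <vector>
int n = 7;
std::vector<int> x2(n);   // no #define tricks required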
https://cboard.cprogramming.com/cplusplus-programming/75235-compiler-class-bug-printable-thread.html
CC-MAIN-2017-04
refinedweb
586
60.65
Created on 2014-07-11 23:06 by hakril, last changed 2014-07-21 20:12 by vstinner. While playing with generators and `yield from` I found some inconsistency. list comprehension with yield(-from) would return a generator. generator comprehension would yield some None in the middle of the expected values. Examples:

l = ["abc", range(3)]
g1 = [(yield from i) for i in l]
print(g1)
<generator object <listcomp> at 0x7f5ebd58b690>
print(list(g1))
['a', 'b', 'c', 0, 1, 2]  # this result is super cool !

g2 = ((yield from i) for i in l)
print(g2)
<generator object <genexpr> at 0x7f5ebd58b6e0>
print(list(g2))
['a', 'b', 'c', None, 0, 1, 2, None]

For `g1`: it returns a generator because the listcomp contains a `yield from`. For `g2`: it appends None because it yields the return value of `yield from i`. It could be rewritten as:

def comp(x):
    for i in x:
        yield (yield from i)

There seem to be two issues here: > list comprehension with yield(-from) would return a generator. This is somewhat surprising, and I'm not sure it's expected/documented. The example you provided seems to behave reasonably, so I don't think we should change the behavior (unless there is some actual bug in a similar example). If anything, we could document this, but I'm not sure if it's worth doing it and where it could be added. > generator comprehension would yield some None in the middle > of the expected values. This also seems expected, and the behavior is consistent with the equivalent generator function. It's a side effect of the hidden closure that provides the new scope for the iteration variable - that's an ordinary function object, so using yield or yield from turns it into a generator function instead. Generator expressions are already generators, so using yield or yield from just adds more yield points beyond the implied ones. I've never figured out a good way to document it - it's a natural consequence of the comprehension's closure. An explicit mention in the reference docs for comprehensions may be worth adding. I found something else, I think it is worth mentioning. This side-effect allows to create generators that return other values than None. And CPython's behavior is not the same for all versions:

>>> {(yield i) : i for i in range(2)}
<generator object <dictcomp> at 0x7f0b98f41410>
>>> list(_)
# python3.(3,4,5)
[0, 1]
# python3.2
[0, 1, {None: 1}]
# python3.2 appends the generator's return value.

> For `g1`: it returns a generator because the listcomp contains a `yield from`. IMO it's a bug: [... for ... in ...] must create a list.
https://bugs.python.org/issue21964
CC-MAIN-2018-30
refinedweb
445
64
Last measured linear acceleration of a device in three-dimensional space. (Read Only)

using UnityEngine;

public class Example : MonoBehaviour
{
    // Move object using accelerometer
    float speed = 10.0f;

    void Update()
    {
        Vector3 dir = Vector3.zero;

        // we assume that device is held parallel to the ground
        // and Home button is in the right hand

        // remap device acceleration axis to game coordinates:
        // 1) XY plane of the device is mapped onto XZ plane
        // 2) rotated 90 degrees around Y axis
        dir.x = -Input.acceleration.y;
        dir.z = Input.acceleration.x;

        // clamp acceleration vector to unit sphere
        if (dir.sqrMagnitude > 1)
            dir.Normalize();

        // Make it move 10 meters per second instead of 10 meters per frame...
        dir *= Time.deltaTime;

        // Move object
        transform.Translate(dir * speed);
    }
}
https://docs.unity3d.com/2019.1/Documentation/ScriptReference/Input-acceleration.html
CC-MAIN-2020-10
refinedweb
131
51.14
One of the reasons for having multiple PI’s was to have one take over duties from an aging ITX based Linux box of reading from a USB Weather station and uploading the data to both it’s website and WeatherUnderground. Unfortunately this project got pushed forward when, last Thursday morning, the ITX box decided to die on me. I think it’s power supply finally gave out after 9 years of service. So this is the first part of a series on setting up a USB weather station onto a Raspberry PI. Part 1 – The Weather Station The weather station that I’ve been using since November 2010 is one I bought from Maplin. They usually sell for about £125 but a couple of times a year they reduce them down to between £50-£70. N96FY USB Wireless Weather Forecaster. At the time of writing this they’ve got it reduced to £69. It consists of the external sensors and an internal touch screen unit. All that’s needed is a USB A-A cable to connect the touch screen to the PI. Configuring the Raspberry PI First I started with a blank copy of raspian on a 4Gb SD Card. Booted the pi up and configured it to use the entire card. With raspian this is easy as you get prompted on the first boot to do it. At the same time I enabled the ssh server from that same menu. Installing the software Now this is based on the instructions over on dragontail.co.uk who got a Raspberry PI working with one of these weather stations. Those instructions are for the Debian based image. I’ve taken them and modified them to get working with raspian so there are a few minor changes. First we need to install some dependencies: pi@kell ~ $ sudo apt-get install git python-dev pi@kell ~ $ sudo apt-get install gnuplot python-simplejson python-tweepy python-paramiko Next we need to download and compile the latest versions of a few dependencies as the versions in the repos are not up to date. First is Cython: pi@kell ~ $ mkdir work pi@kell ~ $ cd work pi$kell ~/work $ wget pi$kell ~/work $ tar xzf Cython-0.16.tar.gz pi$kell ~/work $ cd Cython-0.16 pi$kell ~/work/Cython-0.16 $ sudo python setup.py install That will take a while… Next is libusb: pi$kell ~/work/Cython-0.16 $ cd ~/work pi$kell ~/work $ wget pi$kell ~/work $ tar xjf libusb-1.0.9.tar.bz2 pi$kell ~/work $ cd libusb-1.0.9 pi$kell ~/work/libusb-1.0.9 $ ./configure pi$kell ~/work/libusb-1.0.9 $ make pi$kell ~/work/libusb-1.0.9 $ sudo make install Next is cython hidapi: pi$kell ~/work/libusb-1.0.9 $ cd ~/work pi$kell ~/work $ git clone pi$kell ~/work $ cd cython-hidapi pi$kell ~/work/cython-hidapi $ vi setup.py Now we need to change the path to libusb from /usr/include/libusb-1.0 to /usr/local/include/libusb-1.0. Make that change then save. pi$kell ~/work/cython-hidapi $ sudo python setup.py install Now finally pywws: pi$kell ~/work/cython-hidapi $ cd ~/work pi$kell ~/work $ wget pi$kell ~/work $ tar xvzf pywws-12.05_r521.tar.gz pi$kell ~/work $ cd pywws-12.05_r521 pi$kell ~/work/pywws-12.05_r521 $ sudo python TestWeatherStation.py Thats the basic installation, for the rest I’d suggest reading the pywws documentation. The one thing I’d suggest is moving the installation out of your home directory, for me thats: pi$kell ~/work/pywws-12.05_r521 $ cd ~/work pi$kell ~/work $ sudo mv pywws-12.05_r521 /usr/local/weather In the next article I’ll cover configuring it to upload to Twitter, Weather Undergound, The Met Office & to a website. Thanks for this article Peter – I’ve only just discovered WeatherUnderground and my immediate thought was how to get the Pi to upload data. 
A visit to Maplins is in order :) I’ve not had chance to write the second part yet which covers the config but for weather underground its really simple. In fact when I did the upgrade to the pi the newer versions of pywws actually supports thee Met Office WOW system as well which I’ll also cover. Twitter is slightly trickier but once done works well also. The only thing not working right is uploading to a website, it works ok hourly but it should also be working every 15 minutes but it’s not. Hopefully I’ll get that next part done next week. Excellent article – managed to install and get working upto the point of data coming into pi with the command testweathersation– Data comes in but when I try to move to data it creates a weather.ini not a dat file – so no Data yet…. […] the first article I covered how to install a weather station on a Raspberry PI. In the second part I’ll cover […] Reblogged this on Gigable – Tech Blog. Check for the latest version of Cython – 0.17 was released on 1 Sep. Great article, can’t wait for part 2… :) Like ;-) If I get chance part 3 will be sometime next week Thank you! tried this on my pi (model B with 256mB & the 2012-09-18 rasbian wheezy image.) No matter what I do, I get an error when I do the following: ‘sudo python setup.py’ install from the ~/work/cython-hidapi directory. it appears to go well until building hid.o when I get a warning that a variable ânum_devâ is set but not used. Then when it is working with hid.so, I get: /usr/bin/ld cannot find -ludev collect2: ld returned 1 exit status error: command ‘gcc’ failed with exit status 1 Any help appreciated Should also say I’m working via ssh.. I’ve been meaning to look at udev as I knew that was behind the permissions problem but not got round to it. I’ll give that a go here as I’ve been doing some tweaking to the setup. I’ve now got it posting live (every few minutes) but twitter seems to have broken overnight . Once I’ve got that fixed I’ll finally post those changes. I had to run sudo apt-get install libudev-dev to overcome the cannon find -ludev error. Thanks for spotting that. I missed that change where the libusb source is hardcoded to the i386 architecture when I was copying it from my notes. I got it working earlier this morning using additional instructions given here. They gave a few extra steps to be done if there was an error at the stage I mentioned earlier. Is this essentially the same as you guys did? I’m a total newbie at linux stuff. I also had tried the cython 0.17 rather than 0.16, but that did not help. I finally got the TestWeatherStation module to work. No w to do battle with getting it operational as a weather station. Have any of you gotten jQuery to run on a pi? I’m currently running my station from my desktop & have a set of webpages that I’d like to offload to the pi. They use jQuery to do the plots & I’d rather not re do the site if possible. Personally I’ve not seen that site so need to take a look. As for jQuery, that runs in the browser (unless there’s a server version I’ve not known) so it should be useable on the pi as all it’ll be doing is serving it to the browser. Honestly, I’m simply using pywws as an uploader for Weather Underground, then monitoring the station from there. Works great. 
[…] The raspberry can be the perfect for a weather station, a nice project about this can be found on this Blog […] […] his blog, Peter Mount explains the necessary raspian changes and in the 2nd part he also covers uploading […] […] The RPi has since be used to power everything from home-made jukeboxes to baby monitors to miniature near-space crafts and digital weather-stations. […] When I got that slew of trackbacks this morning I was wondering what was going on. It seems it’s down to this article over on CNN It has a short list of uses for the RPi and links right back to here when it mentions ditial weather-stations. do you recommend any other weather stations? I don’t see N96FY USB Wireless Weather Forecaster on maplin anymore. Yes I’ve just spotted that the website doesn’t list them anymore, although I did see some in my local store just last week. The maplin units are a branded wh1081 unit so any which are based on that for the wh1080 would work – if pywws supports it then it should work. A quick search came up with amazon as another source: I’ve also seen similar units on eBay as well. Thank you. I’m in the States so I don’t have access to a local store. Amazon though is global :) Thank you again. Nice little project i can work on with my son. weather station and some Raspberry Pi. Peter For you and your readers Clas Ohlson sell the same weather station and it goes down To£46 from £86 around Christmas time. I purchased a second for spares because the transmitter unit tends to fail. Clas Ohlson also sell spares. Can’t wait for part 2, keep,up,the good work and I’ll be replacing man windows/Cumulus software combo as soon as possible. Mark Sorry for the typo, it goes down to £40… Thank you for your guide! But I have a question… This solution should works with WH1081PC station? (see the link:). I need to buy a station who works perfectly with raspberryPI… Thanks […] Une station météo. […] […] Une station météo. […] Hello Please can you help. First I should say that I am a complete novice with Linux. I have got as far as cython hidapi.When I type vi setup.py it seems to hang. I have noted comments above but I don’t know how to follow the solutions given. How do I change the library path as mentioned by Jordan and Jay R Also how to change the path to libusb. Step by step instruction on the section before the PYWWS section would be most helpful. With this help I will put my weather station on my web site. Thank you John This might be a silly question, but once this is all done how does the weather station communicate with the Pi? In this instance over USB. The control panel exposes all of the recorded data which pywws and other software then reads periodically. At some point I’m going to look at linking the pi direct to the outside hardware. Someone has already done that using a radio receiver attached to gpio bypassing the console. Guide to using Raspberry Pi with Met Office WOW site… As someone mentioned above, i also have one of the Maplin setups, but the TX unit failed. has anyone thought of connecting the sensors directly to gpio ? (it’s on my list of jobs, but swmbo keeps moving things above it !) Thanks Greg Someone did post a link earlier where with a circuit which does exactly that. It consists of a suitable radio receiver which picks up the signal from the external sensors. As each transmitter has a unique I’d you could then add additional stations. I have also thought about looking at 1-wire as most of the sensors use that and you can easily get them. 
Ignore the first part of my response (damn lurg) I misread tx for rx. The second part is OK though, take a look at 1-wire. When I connect the weather station to the raspberry via usb, it looses Connection to the sensors. Anyone else having this problem? I’ve not had that problem. How close is your pi to the receiver? If is close our between it and the sensors it could cause interference. The distance of the sensors & what’s between them and the receiver could also be a factor. Does the problem happen if you put both close to the sensors? […] בעזרתו, כמו לדוגמה שולחן הקפה האינטרקטיבי, מחשב נייד, תחנת מזג האוויר, שרת-מחשב ענן, ואף תחנת רדיו […] […] megoldás létezik, de a PYWSS állítólag működik is. Kell hozzá néhány library telepítés és egy pywws nevű Python script gyűjtemény (3), amit […] what kinda of information should I be seeing when I run sudo python /usr/local/weather/TestWeatherStation.py it seems to me like it just hangs. When I unplug and plug back in my weather station it crashes the program. It should output a hexdump of the raw data from the station. Take a look at part 3 here: If you aren’t getting anything it might be a permissions problem – I had that one at first. Hello, I installed successfully pywws on my Rasp. All is connected with Wifi dongle to my website Also, I had a solarcamera wifi (or GSM) to get regular picture. Now, I must admit the visual of the pywws on the website is not very attractive. Any idea to improve the graphic page and more visual ? thanks, Laurent Yes the graphs are pretty basic. You could look at the templates but I’d doubt you’ll get anything looking much better. As part of another project I’m looking at generating html5 graph’s of various data so when I’ve done that I’m gong to look at replacing them with something better looking. I’m trying to the do the same thing, connect my Rasp Pi to my weather station. I’ve dabbled in linux and pyuthon the past but I consider myself a newbie at it. Please forgive my simple question. My question is, can I just install pywws on the Pi and get it work without compiling Cython and the Dev environment that you outline at the beginning of your article? For the record, I’m running the B version of Pi and the latest Raspbian distro (NOOBs V1.1) The problem I have is my pi weatherperson is so old it’s pre-raspian. When twitter failed last week I was going to rebuild it from scratch but in the end just updating tweetie was enough. I’m not certain if you can bypass cython. I suppose if the current raspian has cython & it’s version is high enough then you should be OK. I really need to find the time to do the rebuild. Then not only will I get an up to date os but also IPv6, it’s the only pi I have that’s not got a global ip address. Thanks for the quick reply Peter. In the end I had to compile Cython and have been working through your fantastic tutorial! […] ports built in. A quick search revealed that Peter Mount has done all of the hard work already in a part 1 and part 2 tutorial. The tutorials are excellent, well written and unlike most Linux projects I […] […] Une station météo. […] […] Weather Station […] thank you verry much for this nice tutorial. i’m thinking about installing a windturbine at home to generate my own electrical energy, so it is usefull to log the wind data and analyze it. For this usage it is usefull to get one file with all the data in. Never programmed python before, but i was able to edit the Hourly.py to do this job. 
The generated File could be analyzed there: here tho code i’ve added to the python file: def main(argv=None): txtFiles = [] for root, dirnames, filenames in os.walk(“/usr/local/weather/data/raw”, topdown=True): for filename in filenames: if filename.endswith(‘.txt’): outfileName = os.path.join(root, filename) txtFiles.append (outfilename) outputFile = open(“/var/www/combined.txt”, “w”) for data in sorted(txtFiles): readFile = open(data, “r+”) fileData = readFile.read(); readFile.close(); outputFile.write(fileData); print data outputFile.close(); […] Install and configure pywws and dependencies as per this or more recently this. […] […] 天气预报站: … aspberry-pi-part-1/ […] […] 天气预报站 […] […] •statie meteo: […] Hi, I followed your tutorial step by step and everything is ok until cython-hidapi instal. I have changed the libusb-1.0 path and when I run the code “~/work/cython-hidapi $ sudo python setup.py install” I get this. >Traceback (most recent call last): > File “setup.py”, line 3, in > from Cython.Distutils import build_ext >ImportError: No module named Cython.Distutils What can I do to overcome this step? I don’t know much of programming… sorry I am installing this in a raspberry pi b+ Regards This tutorial is now a few years old, so my only advice is to research how the Cython package has changed over time. The error implies that Cython.Distutils has been renamed or moved. Yes it is but also how usb is supported in pywws had changed as well since then, mainly making usb more stable. My best advice is to go to the pywws site for the latest instructions or alternatively ask on the pywws Google group which is linked from there. […] online that have great tutorials. These projects range from an in-car computer, an arcade table, Weather station, and even a cloud server. What ever you choose to do with your Pi I’m sure that you’ll […] […] kann den Raspberry auch nutzen, um sich Hörbücher vorlesen zu lassen, oder als twitternde Wetter-Station, oder zur Feuchtigkeitskontrolle vernachlässigter Zimmerpflanzen, oder zur Verkehrzählung vor […]
http://blog.retep.org/2012/07/30/installing-a-usb-weather-station-on-a-raspberry-pi-part-1/?like=1&source=post_flair&_wpnonce=178c498074
CC-MAIN-2015-40
refinedweb
2,960
74.19
SICP Earlier this year I started reading SICP. SICP stands for Structure and Interpretation of Computer Programs, a computer science text originally used in introductory courses taught at the MIT in the eighties. It had been on my reading list since 2010, when Uncle Bob recommended it in one of his Clean Coder videos. SICP had been recommended again and again and I was very happy to find the time and energy to read it. SICP is one of the greatest books - if not the greatest book - I have ever read. It is fast paced and written in the style of all scientific books - too dense and hard to follow. Reading it is hard work. I used to like such books when I was studying applied mathematics many years ago - actually I was eating such books for breakfast ;-) The high density gives me new information fast. As I like to read books from cover to cover, I do not mind that the information is not accessible directly. Yes it is challenging to read. And it is very insightful. I appreciate most the skill of the authors to define abstractions. I am stunned again and again how they name their composed methods. The Scheme Programming Language SICP is strongly related to Scheme as it is one of the bibles of the Lisp/Scheme world. SICP uses Scheme for its code samples and exercises as Gerald Jay Sussman, one author of SICP, is also one of the creators of Scheme. Scheme is a version of Lisp and as such a divine language per se. It is a great language and unlike Lisp it follows a minimalist design philosophy. I really like its simplicity: There are no reserved words, not much syntax, just atoms, pairs and lists. (define atom? (lambda (x) (and (not (pair? x)) (not (null? x)))))Atoms are numbers, strings, booleans, characters and symbols. Symbols i.e. names usually represent other atoms or functions. Maybe you do not like all the extra parenthesis, but it is compact and uniform. Because of the uniformity of the S-expression, it is easy to create parsers, embedded languages and even full featured Schemes, e.g. in Python, Haskell or any language. (To be fair, the last link contains implementations of Lisp not Scheme in 73 languages.) History As I said before Scheme is based on Lisp. The Lisp programming language was created 60 years ago by John McCarthy. And Lisp is very special, it seems to transcend the utilitarian criteria used to judge other languages. In 1975 Scheme was created by Gerald Jay Sussman and Guy Lewis Steele. Watch this great talk by Guy Steele about issues associated with designing a programming language. One thing which confused me a lot when I started playing with Scheme were its versions. The Scheme language is standardised by IEEE and the de facto standard is called the Revised nReport on the Algorithmic Language Scheme (RnRS). The most widely implemented standard is R5RS from 1998. I use that mainly because the first Scheme implementation I downloaded was an R5RS compliant one. R5RS files use the .scmfilename extension. The newer R6RS standard, ratified in 2007, is controversial because it departs from the minimalist philosophy. It introduces modularity which is breaking everything. Being used to the strong compatibility of Java or at least the major compatibility of Python I did not expect versions to be incompatible at all. R6RS files use the .ss, .slsand .spsfilename extensions. The current standard is R7RS from 2013, a smaller version, defining a subset of the large R6RS version retaining the minimalism or earlier versions. 
(Almost) Complete List of Special Forms and Functions I did not read any tutorials or books on the Scheme language - I was exploring the language on my own. Still I wanted to know which library functions I could use. Similarly I used to list all classes available in each Java release. I searched for an exhaustive list of Scheme functions. There are many Schemes and I did not want to depend on any implementation specific extensions. In the end I decided to scrape the standards. Scheme makes a difference between forms and functions. Function calls evaluate all arguments before control is passed to the function body. If some expressions should not be evaluated - as in ifor switch(which is condin Scheme) - a special form is needed. Special forms evaluate their arguments lazily. For more information see Why is conda special form in Scheme. R5RS Forms and Functions I scraped the list of special forms and built-in functions from the R5RS HTML documents. The list is incomplete since I had to rely on formatting of the document. It misses defineand the abbreviations like 'and @, but looks pretty good (to me) otherwise. Browsing the 193 forms and functions gives an idea of built in data types, i.e. boolean char complex (number) exact (number) inexact (number) list number pair procedure rational (number) string symbol vectoras well as possible conversions between them available in any R5RS compliant Scheme. char->integer exact->inexact inexact->exact integer->char list->string list->vector number->string string->list string->number string->symbol symbol->string vector->list As mentioned earlier, R6RS is much larger than R5RS. It defines 630 forms and functions, most of them in the (new) libraries. The standard separates built-in forms and functions from the ones defined in libraries. (This is still very small, considering that the Java 8 core library contains 6000 public classes.) My list of forms and functions contains all, built-in and library alike. It looks complete, but I am not sure, I did not work with R6RS. From a quick glance R6RS adds bitwise and floating point operations as well as different types of byte vectors, enum-setand hashtable. When looking at my list now, I see that I should have scraped the names of the library modules as well. (I added that to my task list for 2019 ;-) Scheme Requests for Implementation Next to the RnRS standards, there are the SRFIs, a collection of concrete proposals and reference implementations. People implementing Scheme chose to implement SRFIs or not. Most Schemes support some of then, see Arthur A. Gleckler's report of SRFI support by Scheme implementations in 2018. Some Schemes have package managers which allow to download and instal packages. Usually some of those are SRFI implementations. I guess I also need to scrape these. Conclusion Scheme (Lisp) is so much fun, especially when you do not have to deliver anything. Most people I meet got in touch with Lisp during university but never followed up. Some are even afraid of it. Since 2016 I run Scheme coding sessions at every unconference or SoCraTes event I attend. I invite people to mob with me and I do all the typing. We always have a good time, and - after some warm up with Scheme - people really like it. They all enjoy the opportunity to dive into Lisp again. Will you?
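To make the special-form distinction made above concrete, here is a small R5RS sketch of my own (not from the original post): if "if" were an ordinary procedure, Scheme would evaluate both branches before entering it, which is exactly what a special form avoids.

    ; A procedure version of "if" evaluates all of its arguments first.
    (define (bad-if condition then-value else-value)
      (if condition then-value else-value))

    ; Fine: the real if skips (/ 1 x) when x is 0.
    (define (safe-inverse x)
      (if (zero? x) 0 (/ 1 x)))

    ; Broken: (/ 1 x) is evaluated before bad-if runs, so x = 0 signals an error.
    ; (define (broken-inverse x)
    ;   (bad-if (zero? x) 0 (/ 1 x)))

The same reasoning explains why cond, and, and or are special forms rather than library procedures.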
http://blog.code-cop.org/2018/
CC-MAIN-2021-39
refinedweb
1,162
65.32
In this article I show how to stop multiple instances of an application from running, using C# and VB.NET in Visual Studio .NET. You can use Visual Studio 2005/2008/2010; here I have used Visual Studio 2010. Let's start.

Open your Visual Studio 2010. Create a new project from the menu File -> New Project; a small window will open. From the installed templates select Visual C#, then on the right of this window select Windows Forms Application, give it a name and click the OK button to continue. The project is created in the Solution Explorer as shown in Figure 1.1.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;

namespace Mutex // namespace of your project
{
    static class Program
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main()
        {
            try

Create a project in VB.NET. Right-click on your project in Solution Explorer, select the Application tab, and under the Windows application framework properties check "Make single instance application", as shown in figure 1.2.

Mutex Class

Run Single instance vb.net.zip (63.33 kb) Run Single instance csharp.zip (46.59 kb)
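The C# listing above breaks off after the try statement, so here is a minimal self-contained sketch of the usual Mutex approach. The mutex name and the Form1 class are placeholders of mine; they are not taken from the article or its downloads.

    using System;
    using System.Threading;
    using System.Windows.Forms;

    static class Program
    {
        [STAThread]
        static void Main()
        {
            bool createdNew;
            // "MyAppSingleInstance" is an assumed name; pick something unique to your application.
            using (Mutex mutex = new Mutex(true, "MyAppSingleInstance", out createdNew))
            {
                if (!createdNew)
                {
                    MessageBox.Show("Another instance is already running.");
                    return;
                }
                Application.EnableVisualStyles();
                Application.Run(new Form1()); // Form1 stands in for your main form
            }
        }
    }

Creating the mutex with initiallyOwned set to true and checking createdNew avoids the race you would get by first testing for an existing mutex and then creating it.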
http://dotnetplace.com/post/Stop-Running-Multiple-Instances-of-Application-in-CsharpVBNET.aspx
CC-MAIN-2019-51
refinedweb
189
50.33
> tcpipstack.rar > TINY /* * tiny - user ftp built on tinytcp.c * * Written March 31, 1986 by Geoffrey Cooper * * Copyright (C) 1986, IMAGEN Corporation * "This code may be duplicated in whole or in part provided that [1] there * is no commercial gain involved in the duplication, and [2] that this * copyright notice is preserved on all copies. Any other duplication * requires written notice of the author." |===================================================================| | | |===================================================================| 940213 rr minor mods 940403 rr some bug fixes. receive side working. 940424 rr start working on the server side 940513 rr more coding on server side - may work, who knows 940529 rr simplifications and cleanup 940925 rr shorten some names 941012 rr split into 2 parts Notes: 940424: This is the CLIENT side of the FTP connection. Currently it is only capable of requesting that files be RETRieved. It cannot retrieve more than one file. The file logic needs beefing up. (rr) 940424: Names in this program often appear to have been chosen quite poorly - that is, they either are unintelligible, or don't reflect the true meaning of what they represent. I have been changing them slowly, but consider this to be a low priority. (rr) 940424: There is something funny about the kbhit stuff. It seems to sometimes wait for a key even if I haven't pressed one. (rr) There is also something wrong with receiving long multi-line responses such as to HELP, like the data gets overwritten or unterminated. 940513: file should be opened when socket opens. 940513: Need some work on the interactive client; it doesn't work. The server is coming along nicely. 940529: Do we need to do a new listen after a file is received? */ /* #define DEBUG_FTP */ #define RECEIVE_DATA_PORT 0x8080 /* This is working now. */ /* #define IMMEDIATE_OPEN */ #include "tinytcp.h" #include "fileio.h" #include /* for printf, etc */ #include /* for kbhit, etc. Amazingly, Hitech C has these entry points too! */ #include /* for strcpy, etc. */ /* the following are what these seem to be */ #define isina kbhit #define busyina getch #define busyouta putch /* Sockets */ struct tcp_Socket s_og_ctl, /* outgoing connection socket */ s_og_data, /* data socket */ s_ic_ctl, /* server control socket */ s_ic_data; /* server data socket */ /* Receive buffer for client side. */ char b_response[ 120 ]; /* response buffer */ int i_response; /* index into response buf */ /* Client output command buffer */ char b_c_command[ 82 ]; /* send buffer */ /* Server output buffer */ char b_s_response[ 128 ] = ""; /* server output buffer */ /* Server file transfer buffer, index, and length */ Byte b_s_data[ 1024 ]; /* server output buffer */ int n_s_sent, /* bytes sent of buffer */ n_s_left; /* bytes left in serv2 buffer */ /* file handle for retrieve */ char recv_filename[ 82 ]; FH p_recv_file = ( FH ) 0; /* file handle for server */ char send_filename[ 82 ]; FH p_send_file = ( FH ) 0; /* ----- prototypes ------------------------------------------------- */ static Void ftp_process_response P(( Void )); /* ----- ftp control handler ---------------------------------------- */ Void ftp_ctlHandler( s, dp, len ) struct tcp_Socket * s; Byte * dp; int len; { S8 Byte c, *bp, data[ 82 ]; S8 int i; #ifdef DEBUG_FTP printf( "in ftp_ctlHandler %08lx %08lx %d\n", s, dp, len ); /* 22904a58 22900a14 0 */ #endif /* 940403 comes in here with 2 pointers and 65 - a string. */ if( dp == 0 ) { tcp_Abort( &s_og_data ); return; } /* Received a message. 
What do I do with it? */ do { i = len; if( i > sizeof( data )) i = sizeof( data ); Move( dp, data, i ); len -= i; /* Look at the buffer that was received and try to interpret it as an FTP response */ bp = data; while( i-- > 0 ) { c = *bp++; if( c == '\r' ) continue; /* ignore the cr? */ if( c == '\n' ) { /* We've received the end of the response. */ b_response[ i_response ] = '\0'; /* Process the message received. */ ftp_process_response(); i_response = 0; } else if( i_response < ( sizeof( b_response ) - 1 )) { /* Not the end yet, keep storing. */ b_response[ i_response++ ] = c; } } } while( len > 0 ); } /* ----- ftp data handler ------------------------------------------- */ Void ftp_dataHandler( s, dp, len ) struct tcp_Socket * s; Byte * dp; int len; { #ifdef DEBUG_FTP printf( "in ftp_dataHandler %08lx %08lx %d\n", s, dp, len ); #endif /* When a file is retr'd, it comes in here. */ if( len <= 0 ) { /* 0 = EOF, -1 = close */ if( p_recv_file != ( FH ) 0 ) { my_close( p_recv_file ); p_recv_file = ( FH ) 0; } return; } /* if the file isn't open yet, try to open it */ if(( p_recv_file == ( FH ) 0 ) && ( strlen( recv_filename ))) { /* open the file for retrieve */ p_recv_file = my_open( recv_filename, MY_OPEN_WRITE ); if( p_recv_file == ( FH ) 0 ) { printf( "** Can't open file %s **", recv_filename ); recv_filename[ 0 ] = '\0'; } } /* write the data to the file */ if( p_recv_file != ( FH ) 0 ) { my_write( p_recv_file, dp, len ); dp += len; return; } /* as a last resort, dump it to the screen (Hm, to stdout?) */ while( len > 0 ) { if(( *dp < 32 ) || ( *dp > 126 )) printf( " %02x ", *dp ); else putchar( *dp ); ++dp; --len; } } /* ----- process response from distant end ftp ---------------------- */ static Void ftp_process_response( Void ) { printf( "< %s\n", b_response ); /* Look at the fourth byte of the response message. If there is a -, there is more to come. If there is a blank, this is the end. */ } /* end of tiny */
http://read.pudn.com/downloads59/sourcecode/internet/208079/tinytcp/code/TINYFTP.C__.htm
crawl-002
refinedweb
773
69.11
Hello, hello. So I am having some challenges with trying to set an array up with setting values at a position of the array. My array size is 10. *see code below* but I have some getter and setter value methods that I cant seem to get right. Any help would be much appreciated. Below I have the methods I have been trying to set right. The parameters have to be as are. Thanks! // class, private instance variables and default constructor public class StatsArray { private int size; //how big is the array private int[] stats; // an array of integers StatsArray() { this.size = 10; this.stats = new int[size] ; //instantiate the array called stats } // random generator to fill the array public void fillArray() { Random random = new Random(); for(int index = 0; index < stats.length; index++) stats[index] = random.nextInt(101); } // These are the methods I am trying to work with and need help on: I'm not too sure on how to set the value to the position in the int array. public void setValue(int position, int value) { for(int index = 0; index < stats.length; index++) position = this.stats[index] + value; } public int getValue(int position) { return this.stats[position]; }
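For comparison, a setter that writes a single position usually does not need a loop at all. This is just a sketch of one way the method could look, with an added bounds check that the original assignment brief does not mention:

    public void setValue(int position, int value) {
        // Guard against positions outside the array before assigning.
        if (position < 0 || position >= stats.length) {
            throw new IndexOutOfBoundsException("position " + position);
        }
        this.stats[position] = value;
    }

The getter shown in the question already follows the same pattern, reading stats[position] directly.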
http://www.javaprogrammingforums.com/whats-wrong-my-code/15002-needing-some-help-array-getter-setter-methods.html
CC-MAIN-2015-32
refinedweb
200
66.64
I want to create a simple java class, with a main method, but when I compile my code, I get this error message : Error: Main method not found in class errors.TestErrors, please define the main method as: public static void main(String[] args) package errors; public class TestErrors { public static void main(String[] args){ System.out.println("hello"); } } As said in my comments, looks like you've declared a String class among your own classes. To prove this, I've created a basic example: class String { } public class CarelessMain { public static void main(String[] args) { System.out.println("won't get printed"); } public static void main(java.lang.String[] args) { System.out.println("worked"); } } If you execute this code, it will print "worked" in the console. If you comment the second main method, the application will throw an error with this message (similar for your environment): Error: Main method not found in class edu.home.poc.component.CarelessMain, please define the main method as: public static void main(String[] args)
https://codedump.io/share/BSMXAwcfkB8b/1/main-method-not-found-even-if-i39ve-declared-it
CC-MAIN-2016-50
refinedweb
171
54.93
I am looking at how to format axis tick marks in matplotlib The link shows the following pieces of code def millions(x, pos): 'The two args are the value and tick position' return '$%1.1fM' % (x*1e-6) formatter = FuncFormatter(millions) In the line: formatter = FuncFormatter(millions) you are creating an instance of the FuncFormatter class, which is being initialised with the millions function. This is a class which matplotlib accepts as part of its api to format ticks. In the example, the formatter object is passed to the set_major_formatter method for the y axis so that ticks will be formatted with the millions function. You can see how this works in the matplotlib source code. The class is defined as follows: class FuncFormatter(Formatter): """ User defined function for formatting The function should take in two inputs (tick value *x* and position *pos*) and return a string """ def __init__(self, func): self.func = func def __call__(self, x, pos=None): 'Return the format for tick val *x* at position *pos*' return self.func(x, pos) So now the object stored in formatter will have an attribute func which points to the millions function. When matplotlib makes a call to the formatter object that you have passed it, it will pass the arguments (i.e. the values represented by the ticks) to the millions function which is pointed to by self.func Since the millions function only formats the ticks based on the x value and not on the position, the definition of millions only includes the pos argument as a dummy placeholder. It must do this in order for there not to be an error when self.func(x, pos) is called by matplotlib when it formats the ticks.
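As a rough usage sketch (the axis object and data here are made up, not part of the original question), the formatter is attached to an axis like this:

    import matplotlib.pyplot as plt
    from matplotlib.ticker import FuncFormatter

    def millions(x, pos):
        'The two args are the value and tick position'
        return '$%1.1fM' % (x * 1e-6)

    fig, ax = plt.subplots()
    ax.bar([0, 1, 2], [2.5e6, 1.1e7, 4.2e6])
    # Every y tick value is passed through millions() before being drawn.
    ax.yaxis.set_major_formatter(FuncFormatter(millions))
    plt.show()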
https://codedump.io/share/9p49rMcwSdNC/1/what-happened-to-the-parameters-when-the-function-was-called
CC-MAIN-2017-13
refinedweb
290
58.62
Opened 6 years ago Closed 2 years ago #10871 closed New feature (wontfix) Add input support to admin actions (with patch) Description I wanted to be able to set tags to a lot of objects at once in the admin site. The attached patch adds a takes_input attribute to an admin action (default False). If set to True, a 4th "input" argument will be passed, which you can use as such: def add_tag(modeladmin, request, queryset, input): for obj in queryset: Tag.objects.add_tag(obj, input) add_tag.takes_input = True I'd love some comments on it. Some TODO would be: - Hiding the input if no give action takes an input - js-disabling it if the current selected action doesn't take an input.. not too sure about that. Attachments (1) Change History (13) Changed 6 years ago by Adys comment:1 Changed 6 years ago by Will Hardy - Needs documentation unset - Needs tests unset - Patch needs improvement unset comment:2 Changed 6 years ago by Adys It's personally one of the things I'd expect to be able to do without an intermediate page. Per the docs, and intermediate page would be for more complex actions. Adding an input field on-demand, to me, sounds reasonable. comment:3 Changed 6 years ago by Alex - Triage Stage changed from Unreviewed to Design decision needed comment:4 Changed 6 years ago by kmike - Cc kmike84@… added comment:5 Changed 5 years ago by rctay I quote Version1.2Features: Admin-03 (Support for input arguments on admin actions.) - this would overly complicate the admin UI and dilute the purpose of the admin actions. Thought this would be relevant to this discussion. comment:6 Changed 4 years ago by SmileyChris - Severity set to Normal - Type set to New feature comment:7 Changed 4 years ago by jezdez - Version 1.1-beta-1 deleted comment:8 Changed 4 years ago by julien - UI/UX set comment:9 Changed 3 years ago by aaugustin - UI/UX unset Change UI/UX from NULL to False. comment:10 Changed 3 years ago by aaugustin - Easy pickings unset Change Easy pickings from NULL to False. comment:11 Changed 3 years ago by aaugustin - UI/UX set Revert accidental batch modification. comment:12 Changed 2 years ago by aaugustin - Resolution set to wontfix - Status changed from new to closed Comment 5 points out this was rejected in the past. To reverse that decision, please make your case on django-developers. Would I be right in saying that this can already be done using an action with an intermediate page? A change like this appears to complicate things, and doesn't cover the field (it only works where the required input is a single line of text).
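For reference, a rough sketch of the intermediate-page approach mentioned in the comments could look like the following. The template name and the Tag API are placeholders, not part of the attached patch, and a real action form would also need to re-post the selected object IDs and the action name.

    from django.shortcuts import render

    def add_tag(modeladmin, request, queryset):
        if 'apply' in request.POST:
            tag = request.POST.get('tag', '')
            for obj in queryset:
                Tag.objects.add_tag(obj, tag)
            return None  # fall back to the change list
        # First pass: show a form asking which tag to apply.
        # admin/add_tag.html must re-submit the selection and action fields.
        return render(request, 'admin/add_tag.html', {'objects': queryset})
    add_tag.short_description = "Add a tag to the selected items"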
https://code.djangoproject.com/ticket/10871
CC-MAIN-2015-32
refinedweb
457
60.45
Moose - A postmodern object system for Perl 5 version 2.0501. If you're new to Moose, the best place to start is the Moose::Manual docs, followed by the Moose::Cookbook. The intro will show you what Moose is, and how it makes Perl 5 OO better. The cookbook recipes on Moose basics will get you up to speed with many of Moose's features quickly. Once you have an idea of what Moose can do, you can use the API documentation to get more detail on features which interest you. The MooseX:: namespace is the official place to find Moose extensions. These extensions can be found on the CPAN. The easiest way to find them is to search for them (::), or to examine Task::Moose which aims to keep an up-to-date, easily installable list of Moose extensions. for an example. This will accept the name of a role which the value stored in this attribute is expected to have consumed. This marks the attribute as being required. This means a value must be supplied during class construction, or the attribute must be lazy and have either a default or a builder. Note that c<required> does not say anything about the attribute's value, which can be undef. This will tell the class to store the value of this attribute as a weakened reference. If an attribute is a weakened reference, it cannot also be coerced. can have a trigger on a read-only attribute. NOTE: Triggers will only fire when you assign to the attribute, either in the constructor, or using the writer. Default and built values will not cause the trigger to be fired..). DUCKTYPE With the duck type option, you pass a duck type object whose "interface" then becomes the list of methods to handle. The "interface" can be defined as the list of methods passed to duck_type to create a duck type object. For more information on duck_type please check Moose::Util::TypeConstraints. for more information. The value of this key is the default value which will initialize the attribute. NOTE: If the value is a simple scalar (string or number), then it can be just passed as is. However, if you wish to initialize it with a HASH or ARRAY ref, then you need to wrap that inside a CODE reference. See the default option docs in Class::MOP::Attribute for more information. Creates a method allowing you to clear the value. See the clearer option docs in Class::MOP::Attribute for more information. Creates a method to perform a basic test to see if a value has been set in the attribute. See the predicate option docs in Class::MOP::Attribute for more information. Note that the predicate will return true even for a weak_ref attribute whose value has expired. An arbitrary string that can be retrieved later by calling $attr->documentation. This is variation on the normal attribute. Note that you can only extend an attribute from either a superclass or a role, you cannot extend an attribute in a role that composes over an attribute from another role. Aside from where the attributes come from (one from superclass, the other from a role), this feature works exactly the same. This feature is restricted somewhat, so as to try and force at least some sanity into it. Most options work the same, but there are some exceptions: These options can be added, but cannot override a superclass definition. You are allowed to add additional traits to the traits definition. These traits will be composed into the attribute, but preexisting traits are not overridden, or removed. These three items are syntactic sugar for the before, after, and around method modifier features that Class::MOP provides. 
More information on these may be found in for for a class matching Moose::Meta::$type::Custom::Trait::$trait_name. The $type variable here will be one of Attribute or Class, depending on what the trait is being applied to. If a class with this long name exists, Moose checks to see if it has the method register_implementation. This method is expected to return the real class name of the trait. If there is no register_implementation method, it will fall back to using Moose::Meta::$type::Custom::Trait::$trait as the trait name. The lookup method for metaclasses is the same, except that it looks for a class matching Moose::Meta::$type::Custom::$metaclass_name. If all this is confusing, take a look at Moose::Cookbook::Meta::Labeled_AttributeTrait, which demonstrates how to create an attribute trait. To learn more about extending Moose, we recommend checking out the "Extending" recipes in the Moose::Cookbook, starting with Moose::Cookbook::Extending::ExtensionOverview, which provides an overview of all the different ways you might extend Moose. Moose::Exporter and Moose::Util::MetaRole are the modules which provide the majority of the extension functionality, so reading their documentation should also be helpful. Generally if you're writing an extension for Moose itself you'll want to put your extension in the MooseX:: namespace. This namespace is specifically for extensions that make Moose better or different in some fundamental way. It is traditionally not for a package that just happens to use Moose. This namespace follows from the examples of the LWPx:: and DBIx:: namespaces that perform the same function for LWP and DBI respectively. Metaclass compatibility is a thorny subject. You should start by reading the "About Metaclass compatibility" section in the Class::MOP docs. Moose will attempt to resolve a few cases of metaclass incompatibility when you set the superclasses for a class, in addition to the cases that Class::MOP handles. Moose tries to determine if the metaclasses only "differ by roles". This means that the parent and child's metaclass share a common ancestor in their respective hierarchies, and that the subclasses under the common ancestor are only different because of role applications. This case is actually fairly common when you mix and match various MooseX::* modules, many of which apply roles to the metaclass. If the parent and child do differ by roles, Moose replaces the metaclass in the child with a newly created metaclass. This metaclass is a subclass of the parent's metaclass which does all of the roles that the child's metaclass did before being replaced. Effectively, this means the new metaclass does all of the roles done by both the parent's and child's original metaclasses. Ultimately, this is all transparent to you except in the case of an unresolvable conflict. superand innercannot be used in the same method. However, they may be combined within the same class hierarchy; see t/basics). We offer both a mailing list and a very active IRC channel. The mailing list is. Please report any bugs to [email protected], or through the web interface at. You can also discuss feature requests or possible bugs on the Moose mailing list ([email protected]) or on IRC at irc://irc.perl.org/#moose. ([email protected]) or join us on IRC at irc://irc.perl.org/#moose to discuss. The Moose::Manual::Contributing has more detail about how and when you can contribute. There are only a few people with the rights to release a new version of Moose. 
The Moose Cabal are the people to go to with questions regarding the wider purview of Moose. They help maintain not just the code but the community as well. Stevan (stevan) Little <[email protected]> Jesse (doy) Luehrs <doy at tozt dot net> Yuval (nothingmuch) Kogman Shawn (sartak) Moore <[email protected]> Hans Dieter (confound) Pearcey <[email protected]> Chris (perigrin) Prather Florian Ragwitz <[email protected]> Dave (autarch) Rolsky <[email protected]>.
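Tying a few of the attribute options described earlier (predicate, clearer, documentation, and a reference default) together, a minimal class might look like this. The Point class is only an illustration of mine, not something from the Moose distribution:

    package Point;
    use Moose;

    has 'label' => (
        is            => 'rw',
        isa           => 'Str',
        predicate     => 'has_label',
        clearer       => 'clear_label',
        documentation => 'Human-readable name for this point',
    );

    has 'history' => (
        is      => 'ro',
        isa     => 'ArrayRef',
        default => sub { [] },   # references must be wrapped in a sub
    );

    __PACKAGE__->meta->make_immutable;
    1;

With this in place, $point->has_label reports whether a value has been set and $point->clear_label removes it, matching the predicate and clearer descriptions above.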
http://search.cpan.org/~doy/Moose-2.0501-TRIAL/lib/Moose.pm
CC-MAIN-2013-20
refinedweb
1,287
65.01
If you want to provide your users with a way of updating an application, you have a few different choices, some of which are: This article demonstrates the last method in the above list - providing your applications with a one click update or an automatic update facility without using ClickOnce. The update is done through a class and application which installs online updates and is illustrated through the UpdateMe application (included as a download with this article). UpdateMe Without too much difficulty, you should be able to use the class and update application, as shown via the UpdateMe application, to get automated updates out to the users of your software. update I have kept the UpdateMe application as simple as possible so that you will be able to make use of the code without needing to spend too much time in sifting through what will be relevant for your own application. To make use of the UpdateMe application, you will need to have the ability to store files online(read the next line though). Note: At the time of this article first being published, I have placed the relevant files (version and update) in a public Dropbox folder, pointed to by the default values in the UpdateMe textboxes- all things being impermanent, this folder may not be available for long so I suggest you use your own webspace. Tip: One advantage of using the class, methods and application described and included in this article is that you can store the version information and download files on a free file sharing site, such as Dropbox, and not have to spend a penny on hosting the update information or download yourself. Dropbox Please see the readme.txt document included in the download for how to set up the version and download file in your own webspace. This code originated from the TeboCam webcam security application I have written which is soon to be published as open source. The implementation takes form in the following three key components: update This article is principally about the update class and its use within the example UpdateMe application. The order of processing is as follows: UpdateMe It really is that simple (he says after spending the weekend tidying up the code...). Initially, when I wrote this code, someone asked "How do you update the update application?" - so back to the code I went - and the solution turned out to be quite simple: Any files relating to the update application are saved online with a prefix – the update class contains a method that simply renames these files to overwrite the update application. So with one button, it is possible to update your main application as well as the update application itself. There is more information relating to the method that handles this lower down in the article. There are two modes to the update: This application makes use of the Ionic zip open source library for unzipping files. I also make use of the methods, etc. that I have found through Google – if you find anything for which I need to give credit to others, for please do let me know. The update class contains the methods for interrogating the version file.The version file, which you will need to place online, takes the format of a pipe delimited file. The order of processing is: Getting the update availability information: The getUpdateInfo method downloads a pipe delimited file, splitting the columns, into a List object. 
One point to note is the line number to start reading the data from – this is a zero based index meaning the first line is 0.In the example below, we start reading from the second line - I prefer to have a header line as when I amend the version file I can know by looking at the file which information goes where. getUpdateInfo List info = update.getUpdateInfo(downloadsurl.Text, versionfilename.Text, Application.StartupPath + @"\", 1); As the result of reading the version file is a List object, returned by the getUpdateInfo method, you can place whatever you need to in this file for your update information.I suggest at first to keep it simple with the version, download URL and download filename being the only vital pieces of information required for an update. app|version|release date|url|file updateme|1.2|24/09/2011| One thing to note with keeping the information, in this version file online, is that it allows you to direct users' applications to pick up the new application from whichever location you desire. Tip - When running the UpdateMe application, make changes to the "This Version No." textbox to test if the update is picked up. With regards to installing the updates, two methods exist: installUpdateNow installUpdateRestart The installUpdateRestart method is the business end of the UpdateMe application - installing an update and restarting the new version of the UpdateMe application through the update application. This method starts the update application - passing parameters indicating which file to download and which process to start once the download of the file has completed. public static void installUpdates(string downloadsURL, string filename, string destinationFolder, string processToEnd, string postProcess, string startupCommand, string updater) { string cmdLn = ""; cmdLn += "|downloadFile|" + filename; cmdLn += "|URL|" + downloadsURL; cmdLn += "|destinationFolder|" + destinationFolder; cmdLn += "|processToEnd|" + processToEnd; cmdLn += "|postProcess|" + postProcess; cmdLn += "|command|" + @" / " + startupCommand; ProcessStartInfo startInfo = new ProcessStartInfo(); startInfo.FileName = updater; startInfo.Arguments = cmdLn; Process.Start(startInfo); } The webdata class is used by both the UpdateMe application and the update application.This class contains the methods for downloading data from the online location. One area I think worth pointing out is the use of the custom bytesDownloaded event.If you are fairly new to .NET, then custom events are something which I would strongly recommend using and understanding (that is understanding how to use them and why they are so handy). webdata bytesDownloaded To enable the progressbar in the update application to correctly display how far through the data download we are - we will need to be able to specify the bytes downloaded together with the total number of bytes in the download. We start by defining a delegate and a class that holds the arguments for the custom event - the class ByteArgs inherits from the EventArgs class. ByteArgs EventArgs public delegate void bytesDownloaded(ByteArgs e); public class ByteArgs : EventArgs { private int _downloaded; private int _total; public int downloaded { get { return _downloaded; } set { _downloaded = value; } } public int total { get { return _total; } set { _total = value; } } } When we start downloading the data, we create a new instance of the ByteArgs class and assign the downloaded and total values. 
One point to note when we call the event - is that we test if it is null first - if we do not test if the event is null and the event is not consumed, we will get a NullReferenceException.This is important because the update application consumes the bytesDownloaded event for the purpose of the progressbar whilst the UpdateMe application does not. null NullReferenceException //let us declare our downloaded bytes event args ByteArgs byteArgs = new ByteArgs(); byteArgs.downloaded = 0; byteArgs.total = dataLength; //we need to test for a null as if an event is not consumed, we will get an exception if (bytesDownloaded != null) bytesDownloaded(byteArgs); Earlier in this article, I mentioned the question - "How do you update the update application?" A method within the Update class called updateMe resolves this issue: Update updateMe update.updateMe(updaterPrefix, Application.StartupPath + @"\"); We simply pass the prefix of the update application files to this method together with their location - the prefix allows us to download these files while the update application is running without running into any file sharing/lock issues. Calling this method when UpdateMe starts up allows us to install any new files related to the update application. Tip: you will notice that the um.zip file included as a download with this article has files with the M1234_ prefix - these files will have their prefix removed and will replace or supplement any files in the application folder once the update has been published when running the UpdateMe application. M1234_ Initially, I used an HTML webpage to hold the update information – the code would inspect this page for updates. This was causing problems at times with connections failing - so I decided to place the update information in a pipe delimited online file that can be downloaded, for some reason the file download option was more reliable – 103 bytes of download (the version file should be around this size) is not going to use up too much bandwidth. One area that is not covered by this article is the issue of cleaning up legacy files from the application folder - this can potentially be handled when the updateMe method runs in your main application (what I mean by a legacy file is a file which was used by a previous version of your application and is no longer of use). Alternatively you can leave legacy files on the users' machines - I admit this is not ideal however with the risk of removing files the user deliberately placed in the application folder it may be better to allow these legacy files to persist. I really do hope that this will be of help to others and let me know what can be improved – please. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) private void moveFiles() { DirectoryInfo di = new DirectoryInfo(tempDownloadFolder); FileInfo[] files; LogFile.WriteToLog("Processing subfolders..."); // Do we have any subdirectory from the archive? 
foreach (DirectoryInfo sub in di.GetDirectories())
{
    if (!Directory.Exists(sub.Name))
    {
        Directory.CreateDirectory(sub.Name);
        LogFile.WriteToLog("Created the destination folder '" + sub.Name + "'");
    }
    files = sub.GetFiles(); // Get all the files from the subdirectory
    foreach (FileInfo f in files)
    {
        // If the file is not the downloaded file
        if (f.Name != downloadFile)
        {
            File.Copy(tempDownloadFolder + "\\" + sub.Name + "\\" + f.Name,
                      destinationFolder + "\\" + sub.Name + "\\" + f.Name, true);
            LogFile.WriteToLog("Moved file '" + f.Name + "' to subfolder '" + sub.Name + "'");
        }
    }
}
LogFile.WriteToLog("Processing root files...");
// Now get all files at root level
files = di.GetFiles();
foreach (FileInfo fi in files)
{
    if (fi.Name != downloadFile)
    {
        File.Copy(tempDownloadFolder + fi.Name, destinationFolder + fi.Name, true);
        LogFile.WriteToLog("Moved file '" + fi.Name + "' to root folder");
    }
}
}

class LogFile
{
    public static void WriteToLog(string strMessage)
    {
        string path = AppDomain.CurrentDomain.BaseDirectory + "\\LogFile.txt";
        //string path = @E:\Data\Sharp_Projects\AutoUpdater\source\UpdateMe\bin\Debug
        // This text is added only once to the file.
        if (!File.Exists(path))
        {
            // Create a file to write to.
            using (StreamWriter sw = File.CreateText(path))
            {
                sw.WriteLine(DateTime.Now.ToString("yyyyMMdd HH:mm:ss",
                    System.Globalization.CultureInfo.InvariantCulture) + " - " + strMessage);
            }
        }
        else
        {
            using (StreamWriter sw = File.AppendText(path))
            {
                sw.WriteLine(DateTime.Now.ToString("yyyyMMdd HH:mm:ss",
                    System.Globalization.CultureInfo.InvariantCulture) + " - " + strMessage);
            }
        }
    }
}

20131203 17:12:36 - Processing subfolders...
20131203 17:12:36 - Created the destination folder 'nouveau'
20131203 17:12:36 - Moved file 'test.txt' to subfolder 'nouveau'
20131203 17:12:36 - Moved file 'test2.txt' to subfolder 'nouveau'
20131203 17:12:36 - Processing root files...
20131203 17:12:36 - Moved file 'Ionic.Zip.pdb' to root folder
20131203 17:12:36 - Moved file 'UpdateMe.pdb' to root folder
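To round out the bytesDownloaded discussion earlier in the article, a consumer of the event could drive a progress bar roughly like this. The exact member names on the webdata class are assumptions of mine, since the article only shows the delegate and ByteArgs definitions:

    // Subscribe before starting the download (assumes webdata exposes a
    // static event of the bytesDownloaded delegate type).
    webdata.bytesDownloaded += OnBytesDownloaded;

    private void OnBytesDownloaded(ByteArgs e)
    {
        progressBar1.Maximum = e.total;
        progressBar1.Value = Math.Min(e.downloaded, e.total);
    }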
http://www.codeproject.com/Articles/265751/Application-Auto-update-via-Online-Files-in-Csharp?msg=4298676
CC-MAIN-2014-35
refinedweb
1,881
51.58
If you're anything like me, you probably learn a lot better by going through code rather than reading books. I'm happy to release the Foundations of Programming Learning Application - it's a complete solution meant to show what was covered in the Foundations series. It's a Visual Studio 2008 solution. You can download it here. It should require no configuration (my fingers are crossed on that one) and ought to just run out of the box. There are comments sprinkled all over to help explain things or provide some insight. No doubt there'll be typos, since I'm nothing without word. (you can grab the free ebook from:) What is it?It's a sample awards website - with categories and nominees. The root container is called a Round - a sample Round would be called 'The 2008 CodeBetter Awards'. A Round has a state (planning, annoucements, voting, winners) and a number of Categories (Best Blogger, Best Blog Post, Best Open Source Project, ...) with each categories having a Nominee (Title, Summary, Link, Author...). The website is using the ASP.NET MVC Preview 4 - I don't think you'll need to install anything extra as all the DLLs are included with the project. I'm using an SQL Lite database with a relative path to the file, so all should work as-is. Dummy data is already loaded. The web application mostly shows a read-only view of the data. There's also a sample console application that does more administrative stuff (it isn't interactive, it just runs through 4 steps or so). You can run the administrative portion over and over again - the first step is to clean itself up. The admin part basically adds a new round, with categories and nominees. Of course, there's a project full of unit tests as well. I tried to keep everything simple and straightforward (which is largely why I didn't want to build a whole web-based admin module and user registration and all that). Like most, I'm pretty new to ASP.NET MVC. Some might think my views have too much code, I think they have the perfect amount . There's extensive use of Lambdas, so if you have a hard time reading them, I hope my excessive examples will help illuminate them. [Advertisement] I'm excited to finally release the official, and completely free, Foundations of Programming EBook You've been kicked (a good thing) - Trackback from DotNetKicks.com Hmm, get an invalid zip file error after download. Using WinZip and IE. At client, so perhaps proxy issue? Yeah, it's that old IE thing that happened with previous CodeBetter downloads. Firefox grabs it properly. If you're having troubles downloading the file, I've also uploaded at: (thanks for letting me know about the problem jdn) The CodeBetter.Awards project didn't open with VS2008. it complained about unsupported project type. All the other projects opened ok. Looks like the ASP.NET MVC projects have their own ProjectTypeGuid. I changed the csproj files to use the default web application and it seems to work now even if you don't have the ASP.NET MVC Preview installed. You mind redownloading and testing it again DaRage? You mentioned in the comments of the repository that in more complex systems one might use repository per class, does that mean you also going to need a more than one type of IDataStore? I am battling to see the purpose of IDataStore, why not have an NHibernate or SqlLite Repository? Adriaan: Great questions. First, for such a small system, the benefits of the Repository class are certainly limited. 
The tradeoff between taking an example that's easy to learn versus one that demonstrates the full capabilities. I have seen multiple implementations that had an IXXXDataStore per domain, and I didn't like it. Unless it's a really big system, I think it's a code smell and you have a lot of duplication that can be rewritten. For such a system there are even more advanced patterns to help you out. The Repository and the DataStore serve two very distinct purposes (again, not all that obvious for such a slim application). For the Awards system, the API of one maps pretty closely to the API of another - so I see your confusion, why have both. I've worked on many projects where the repository does A LOT of things - caching, which I did here, interaction with web services, interaction with a legacy store, logging, and interaction with the data store. For the Awards site you could easily move the caching up into the domain or down into the NHibernate DataStore (especially given that NHibernate supports a 2nd level cache), and then remove the repository all together. But I think that before long, you'll start to see code that really isn't pertinent to the domain (it's more plumbing than anything) and that isn't tied to NHibernate or SQLLite. Karl that makes perfect sense, thank you. I am trying to imagine how else would you handle IDataStore other than IXXXDataStore? How big(amount methods) can you take IDataStore, surely one can't have all data acces methods in there for a large system or do you? What are the more advanced patterns you are referring to? Well, if you look @ the sample admin module, you'll see that we're saving a round, 2 categories and a number of nominees with a single line in our NHibernateStore (GetSession.Save(round)). So I think you really can deal with a lot in a single datastore. The pattern I was thinking of was the table data gateway (martinfowler.com/.../tableDataGateway.html). You can use it with an O/R mapper together. No problem Karl. Just FYI, from home, I have the same issue, the zip file is corrupt when I download with IE, but works from other browsers. CodeBetter isn't a download heavy site, but I remember this problem with CB a few years ago. I apologize that I don't have a URL to a previous post, as I dont know if the problem was fixable (I download hundreds of files a month so there's nothing inherently wrong with IE (well, about that anyway), it's something about CB's Community Server). To anyone else, if you have problems opening the sample app after downloading from IE, use a different browser. Pingback from ASP.NET MVC Archived Blog Posts, Page 1 Pingback from Dew Drop - July 19, 2008 | Alvin Ashcraft's Morning Dew It works now. Thanks karl. Pingback from Reflective Perspective - Chris Alcock » The Morning Brew #140 Pingback from Weekly Links #10 | GrantPalin.com Pingback from Foundations of Programming « Build and Deliver I have found this a very useful application to learn from. I am however unsure about a part of the round and other classes. In the round class you have the method "CategoriesChanged" and similar Changed methods in other classes. You state that is is required due to the ReadOnlyCollection acting as a cache so that it provides full List functionality while using IList for use by NHibernate which I can see the idea. 
But I cant seem to find a situation where it would actually be needed, from what I have tested and from reading the MSDN documentation I have the impression that the ReadOnlyCollection already reflects any changes made to the underlying list it wraps. If you could please correct me if I have missed something that makes this a valid method to use. @Steve: Wrote a test, looks like you're right. I continue to be amazed at how little I know, thanks...definitely good to know! Hi, Just read the book and looked at the sample application - all the things I have been hearing about in the last few years, very well explained, especially the concept of domain-driven-design. Looking at your sample, you have as little logic as possible in the ascx file. I am currently working on a complex set of business requirements using a domain-centric approach. I have my aspx pages made up of various ascx controls to keep things simple and for reasons of re-use. Retrieving data to be displayed for each control is either looked up by the ascx file via linq and/or is mapped to a class structure specific for the ascx file. Data retrieved from the domain to be displayed is mapped to an ascx file class structure. Does this sound like the right way of doing things, or would you not have an ascx file retrieve any data directly, and if so, why? (The only reason I can think of is that maybe the web server is not allowed to access a database directly, but via another machine). Regards, Rik Great book. Thank you! I'm having a strange error when running default.aspx for the first time -- it crashes on the namespace add of System.Web.Mvc in web.,config. I had not touched anything as far as Copy Local or anything. Added the References folder to the web site properties to no avail. Ditto for Adding an Import to Site.Master. At that point, it gets a method not found on System.Web.Mvc.RouteCollectionExtensions.IgnoreRoute, even though I'm now explicitly typing the routes input param to RegisterRoutes in Global.asax.cs as a System.Web.Routing.RouteCollection. Was this sample app built for one of the Previews? I have MVC Beta 1 installed. [email protected] Hello, First of all thank you for the sample and free e-book. They are simply great. Can you please also provide a winforms demo and asp.net demo (not asp.net mvc version) too? I believe many users (including me) would find it useful Thank you once again. Karl, thank you for all the effort you put into this. Without mentoring it is difficult to try and get a thorough understanding of the concepts promoted by the Agile community but this has certainly
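The ReadOnlyCollection point raised in the comments above is easy to verify with a few throwaway lines (this snippet is mine, not from the sample solution; ReadOnlyCollection<T> lives in System.Collections.ObjectModel):

    var categories = new List<string> { "Best Blogger", "Best Blog Post" };
    ReadOnlyCollection<string> view = categories.AsReadOnly();

    categories.Add("Best Open Source Project");
    Console.WriteLine(view.Count); // prints 3 - the wrapper tracks the live list

Because the wrapper is live, a cached ReadOnlyCollection does not need to be rebuilt when the underlying IList changes, which is what makes the extra Changed methods redundant.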
http://codebetter.com/blogs/karlseguin/archive/2008/07/18/foundations-of-programming-learning-application.aspx
crawl-002
refinedweb
1,680
73.17
Hello, I need to do a project in which the camera OV5640 Camera Board (B) is connected to the NodeMCU board in order to take pictures with the camera and then see them in a web page with the wifi function of the ESP8266MOD. Before writing this publication I tried to gather information, but I did not find anything similar in terms of connections and, most importantly, the sketch. My knowledge of Arduino is quite limited. I hope that one of you will help me carry this out. If anyone knew how to make the sketch and the connections, I would appreciate it if you would pass it on to me. Next, I show you a picture of the two modules I have. In general, it's much easier to buy a mass-produced consumer good than to prototype something. I'm sure there are a kajillion purchasables on the internet that will do exactly this. As stated, this is a sizeable project. This page states that the module will produced "compressed data". If that means JPEG or PNG, then this project just got a lot a notch easier. This seems applicable, but I get the impression that it is built for some sort of adapter board named 'ArduCAm' that sits between an arduino and the camera. This appears to have the information needed to talk to the camera without some sort of intermediary board. A programmer would be looking for an arduino library that will compile for the NodeMCU that can 'speak' that protocol. Otherwise, it's a big job. Maybe someone else knows more - I'm just a programmer that has never dealt with this hardware before. To me it looks like hours and hours of time (which is to say, thousands of dollars), and of course I'd have to purchase the modules myself to get any sort of result any time this century. If you want a webcam, get a webcam. If you want to learn programming, start with something much simpler. If you are building something commercially for profit share or otherwise have deep pockets, or if you can persuade someone that this is a charity gig, then all I can say is that this project is certainly do-able. Don't let my response dishearten you. There's every possibility that someone on this forum has already done exactly this project using exactly these components. Actually it sounds quite straightforward to me - indeed hoping the webcam can produce jpg/png images on request. NodeMCU requests image from the web cam, stores it in SPIFFS (up to 3 MB available on board - if need more then you need an external SD card), and then can transmit the image as is in a web site. Assuming there’s no need to do any changes to the image, there’s no need for decompression, and the image as received from the camera is the image used for the web page. Now the question: why use a NodeMCU for that? My web cam can do just that: connect to WiFi, have clients connect to it and get the latest image from the camera. Or get the complete video stream, of course. Though iirc the stills can be higher quality than the video stream. All in all it seems quite straightforward to do, the hardest part may be the communication between the camera and the NodeMCU. No idea how that works - I’ll have to look into the documentation of the camera. As it appears to be designed for use with a MCU that shouldn’t be too hard. There is an arducam using esp8266 issue on github : Should ask there, it looks they managed to make it work, at least for low resolution mode. I really have tried to find something that works for me on the internet and I have not found anything. Effectively the images can be jpg / png. 
The camera does not have Wi-Fi access, so I implemented the NodeMCU. I have found a program that could work, take a look, the problem is that I do not know how to connect the camera to the NodeMCU according to that program. I do not see any pin defined in the program, or so I think. // ArduCAM Mini demo (C)2016 Lee // web: // This program is a demo of how to use most of the functions // of the library with ArduCAM ESP8266 5MP camera. // This demo was made for ArduCAM ESP8266 OV5640 5MP Camera. // It can take photo and send to the Web. // It can take photo continuously as video streaming and send to the Web. // The demo sketch will do the following tasks: // 1. Set the camera to JEPG output mode. // 2. if server.on("/capture", HTTP_GET, serverCapture),it can take photo and send to the Web. // 3.if server.on("/stream", HTTP_GET, serverStream),it can take photo continuously as video //streaming and send to the Web. // This program requires the ArduCAM V4.0.0 (or later) library and ArduCAM ESP8266 5MP camera // and use Arduino IDE 1.5.8 compiler or above #include <ESP8266WiFi.h> #include <WiFiClient.h> #include <ESP8266WebServer.h> #include <ESP8266mDNS.h> #include <Wire.h> #include <ArduCAM.h> #include <SPI.h> #include "memorysaver.h" #if !(defined ESP8266 ) #error Please select the ArduCAM ESP8266 UNO board in the Tools/Board #endif //This demo can only work on OV5640_MINI_5MP_PLUS or ARDUCAM_SHIELD_V2 platform. #if !(defined (OV5640_MINI_5MP_PLUS)||(defined (ARDUCAM_SHIELD_V2) && defined (OV5640_CAM))) #error Please select the hardware platform and camera module in the ../libraries/ArduCAM/memorysaver.h file #endif // set GPIO16 as the slave select : const int CS = 16; //you can change the value of wifiType to select Station or AP mode. //Default is AP mode. 
int wifiType = 1; // 0:Station 1:AP //AP mode configuration //Default is arducam_esp8266.If you want,you can change the AP_aaid to your favorite name const char *AP_ssid = "arducam_esp8266"; //Default is no password.If you want to set password,put your password here const char *AP_password = ""; //Station mode you should put your ssid and password const char* ssid = "SSID"; // Put your SSID here const char* password = "PASSWORD"; // Put your PASSWORD here ESP8266WebServer server(80); ArduCAM myCAM(OV5640, CS); void start_capture(){ myCAM.clear_fifo_flag(); myCAM.start_capture(); } void camCapture(ArduCAM myCAM){ WiFiClient client = server.client(); size_t len = myCAM.read_fifo_length(); if (len >= MAX_FIFO_SIZE){ Serial.println("Over size."); return; }else if (len == 0 ){ Serial.println("Size is 0."); return; } myCAM.CS_LOW(); myCAM.set_fifo_burst(); #if !(defined (ARDUCAM_SHIELD_V2) && defined (OV5640_CAM)) SPI.transfer(0xFF); #endif if (!client.connected()) return; String response = "HTTP/1.1 200 OK\r\n"; response += "Content-Type: image/jpeg\r\n"; response += "Content-Length: " + String(len) + "(); } void serverCapture(){ start_capture(); Serial.println("CAM Capturing"); int total_time = 0; total_time = millis(); while (!myCAM.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK)); total_time = millis() - total_time; Serial.print("capture total_time used (in miliseconds):"); Serial.println(total_time, DEC); total_time = 0; Serial.println("CAM Capture Done!"); total_time = millis(); camCapture(myCAM); total_time = millis() - total_time; Serial.print("send total_time used (in miliseconds):"); Serial.println(total_time, DEC); Serial.println("CAM send Done!"); } void serverStream(){ WiFiClient client = server.client(); String response = "HTTP/1.1 200 OK\r\n"; response += "Content-Type: multipart/x-mixed-replace; boundary=frame\r\n\r\n"; server.sendContent(response); while (1){ start_capture(); while (!myCAM.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK)); size_t len = myCAM.read_fifo_length(); if (len >= MAX_FIFO_SIZE){ Serial.println("Over size."); continue; }else if (len == 0 ){ Serial.println("Size is 0."); continue; } myCAM.CS_LOW(); myCAM.set_fifo_burst(); #if !(defined (ARDUCAM_SHIELD_V2) && defined (OV5640_CAM)) SPI.transfer(0xFF); #endif if (!client.connected()) break; response = "--frame\r\n"; response += "Content-Type: image/jpeg(); if (!client.connected()) break; } } void handleNotFound(){ String message = "Server is running!\n\n"; message += "URI: "; message += server.uri(); message += "\nMethod: "; message += (server.method() == HTTP_GET)?"GET":"POST"; message += "\nArguments: "; message += server.args(); message += "\n"; server.send(200, "text/plain", message); if (server.hasArg("ql")){ int ql = server.arg("ql").toInt(); myCAM.OV5640_set_JPEG_size(ql); delay(1000); Serial.println("QL change to: " + server.arg("ql")); } } void setup() { uint8_t vid, pid; uint8_t temp; #if defined(__SAM3X8E__) Wire1.begin(); #else Wire.begin(); #endif Serial.begin(115200); Serial.println("ArduCAM Start!"); // set the CS as an output: pinMode(CS, OUTPUT); // initialize SPI: SPI.begin(); SPI.setFrequency(4000000); //4MHz //Check if the ArduCAM SPI bus is OK myCAM.write_reg(ARDUCHIP_TEST1, 0x55); temp = myCAM.read_reg(ARDUCHIP_TEST1); if (temp != 0x55){ Serial.println("SPI1 interface Error!"); while(1); } //Check if the camera module type is OV5640 myCAM.wrSensorReg16_8(0xff, 0x01); myCAM.rdSensorReg16_8(OV5640_CHIPID_HIGH, &vid); myCAM.rdSensorReg16_8(OV5640_CHIPID_LOW, &pid); if((vid != 0x56) || (pid != 0x40)){ 
Serial.println("Can't find OV5640 module!"); while(1); } else Serial.println("OV5640 detected."); //Change to JPEG capture mode and initialize the OV5642 module myCAM.set_format(JPEG); myCAM.InitCAM(); myCAM.write_reg(ARDUCHIP_TIM, VSYNC_LEVEL_MASK); //VSYNC is active HIGH myCAM.OV5642_set_JPEG_size(OV5640_320x240);delay(1000); if (wifiType == 0){ if(!strcmp(ssid,"SSID")){ Serial.println("Please set your SSID"); while(1); } if(!strcmp(password,"PASSWORD")){ Serial.println("Please set your PASSWORD"); while(1); } // Connect to WiFi network Serial.println(); Serial.println(); Serial.print("Connecting to "); Serial.println(ssid); WiFi.mode(WIFI_STA); WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.print("."); } Serial.println("WiFi connected"); Serial.println(""); Serial.println(WiFi.localIP()); }else if (wifiType == 1){ Serial.println(); Serial.println(); Serial.print("Share AP: "); Serial.println(AP_ssid); Serial.print("The password is: "); Serial.println(AP_password); WiFi.mode(WIFI_AP); WiFi.softAP(AP_ssid, AP_password); Serial.println(""); Serial.println(WiFi.softAPIP()); } // Start the server server.on("/capture", HTTP_GET, serverCapture); server.on("/stream", HTTP_GET, serverStream); server.onNotFound(handleNotFound); server.begin(); Serial.println("Server started"); } void loop() { server.handleClient(); }. I have opened the serial monitor with the program that I have gone through before and this is the message that I get: Soft WDT reset ctx: cont sp: 3fff1fb0 end: 3fff21c0 offset: 01b0 >>>stack>>> 3fff2160: 3fffdad0 3fff1164 3fff1164 40202a60 3fff2170: feefeffe feefeffe feefeffe feefeffe 3fff2180: feefeffe feefeffe feefeffe feefeffe 3fff2190: feefeffe feefeffe feefeffe 3fff1190 3fff21a0: 3fffdad0 00000000 3fff1188 40208220 3fff21b0: feefeffe feefeffe 3fff11a0 40100710 <<<stack<<< ets Jan 8 2013,rst cause:2, boot mode:(3,6) load 0x4010f000, len 1384, room 16 tail 8 chksum 0x2d csum 0x2d v614f7c32 ~ld ArduCAM Start! SPI1 interface Error! Soft WDT reset There's an ESP exception decoder out there, which integrates nicely with the IDE. It will give clues on what is going wrong. wvmarle:. but what are those pins, if my camera does not have those, as are not the sioc siod .... Really this is impossible to do. I have the camera module that less people use so there is not much information. I'm sure none of this forum has it. If it works with an Arduino you can normally make it work with a NodeMCU - but you'll have to study the data sheet in more detail (as said I just skimmed it) to understand how this camera communicates, and check the ArduCAM library for any changes (e.g. pin definitions) that need to be made to make it work with the NodeMCU. For your board: SOIC and SOID are probably Clock and Data for I2C. You may be able to use the parallel interface but you need 9 pins for that, out of the 11 you have available on a NodeMCU. For that reason I'd first try the I2C interface, though almost certainly the parallel one can be much faster.
https://forum.arduino.cc/t/connection-project-nodemcu-and-ov5640-camera-board-b/515065
CC-MAIN-2022-33
refinedweb
1,867
60.21
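A note on the ArduCAM / NodeMCU question above ("I do not see any pin defined in the program"): on the ESP8266 Arduino core most pins are implicit — SPI.begin() and Wire.begin() bind to fixed hardware pins, and only the chip-select is named in the sketch (const int CS = 16). The summary below is my own, assuming a standard NodeMCU silkscreen and the core's default I2C pins; it is not taken from the ArduCAM documentation.

// Hypothetical wiring summary for the sketch above (ESP8266 Arduino core).
// ArduCAM CS        -> GPIO16 (NodeMCU D0)   // const int CS = 16;
// ArduCAM SCK       -> GPIO14 (NodeMCU D5)   // hardware SPI clock
// ArduCAM MISO      -> GPIO12 (NodeMCU D6)   // hardware SPI MISO
// ArduCAM MOSI      -> GPIO13 (NodeMCU D7)   // hardware SPI MOSI
// ArduCAM SIOD/SDA  -> GPIO4  (NodeMCU D2)   // Wire.begin() default SDA
// ArduCAM SIOC/SCL  -> GPIO5  (NodeMCU D1)   // Wire.begin() default SCL
// VCC -> 3.3 V, GND -> GND

#include <Wire.h>
#include <SPI.h>

const int CS = 16;        // the only pin the original sketch names explicitly

void setup() {
  pinMode(CS, OUTPUT);    // chip select for the SPI/FIFO image bus
  Wire.begin();           // I2C (SIOC/SIOD) for the sensor registers
  SPI.begin();            // SPI for reading the JPEG FIFO
}

void loop() {}

If the SPI self-test keeps printing "SPI1 interface Error!" as in the serial log above, the CS/SCK/MISO/MOSI wiring and the 3.3 V supply are the first things to re-check.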
basically my GUI should ask the user to input a file name and then an output file name, then look for a search word inside the (notepad) text file.

GUI

public class GUI {
    //string variables to store file names
    String readFile,writeFile;

    //method to encrypt the given file by adding 5
    public void GUI(Scanner scan)
    {
        //reading input and output filenames
        System.out.println("Enter Input file name");
        readFile=scan.next();
        System.out.println("Enter Output file name");
        writeFile=scan.next();
        try
        {
            //opening files
            FileInputStream in=new FileInputStream(readFile);
            FileOutputStream out=new FileOutputStream(writeFile);
            //closing files
            in.close();
            out.close();
        }
        catch(Exception e)
        {
            //if any exception generates displays on console
            System.out.println(e.toString());
        }
    }

    public static void main(String[] args){
        //user has to enter input file name;
        Scanner scan = new Scanner(System.in);
        System.out.println("Enter input file name");
        System.out.println("Enter output file name");
    }
}

After that I need help with my other class that I attached. It should:
- Read the input file and count how many times the search word occurs.
- Copy the input file to the output file, putting each word from the input into the output txt.
- Output the # of words in the input, the # of lines in the input, and the # of times the search word occurs.
A word is a string of characters ending with a blank space.

This post has been edited by jon.kiparsky: 05 October 2012 - 10:34 AM
http://www.dreamincode.net/forums/topic/293612-java/page__pid__1711798__st__0
CC-MAIN-2016-22
refinedweb
229
54.93
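For the word / line / occurrence counting asked about in the post above, here is a rough sketch (class and variable names are mine, not from the assignment) of how the second class could read the input file with a Scanner, copy it to the output file, and count as it goes.

import java.io.File;
import java.io.PrintWriter;
import java.util.Scanner;

// Hypothetical helper: copy the input file to the output file while
// counting words, lines and occurrences of the search word.
public class WordCounter {
    public static void main(String[] args) throws Exception {
        Scanner console = new Scanner(System.in);
        System.out.println("Enter input file name");
        String inName = console.next();
        System.out.println("Enter output file name");
        String outName = console.next();
        System.out.println("Enter search word");
        String search = console.next();

        int words = 0, lines = 0, hits = 0;
        Scanner in = new Scanner(new File(inName));
        PrintWriter out = new PrintWriter(outName);
        while (in.hasNextLine()) {
            String line = in.nextLine();
            out.println(line);                    // copy input to output
            lines++;
            for (String w : line.split("\\s+")) { // a word ends with blank space
                if (w.isEmpty()) continue;
                words++;
                if (w.equals(search)) hits++;
            }
        }
        in.close();
        out.close();
        System.out.println(words + " words, " + lines + " lines, "
                + hits + " occurrences of \"" + search + "\"");
    }
}

The GUI class could collect the same three inputs from text fields or JOptionPane dialogs and then reuse this counting loop.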
NAME ASYNC_WAIT_CTX_new, ASYNC_WAIT_CTX_free, ASYNC_WAIT_CTX_set_wait_fd, ASYNC_WAIT_CTX_get_fd, ASYNC_WAIT_CTX_get_all_fds, ASYNC_WAIT_CTX_get_changed_fds, ASYNC_WAIT_CTX_clear_fd - functions to manage waiting for asynchronous jobs to complete SYNOPSIS #include <openssl/async.h> ASYNC_WAIT_CTX *ASYNC_WAIT_CTX_new(void); void ASYNC_WAIT_CTX_free(ASYNC_WAIT_CTX *ctx); int ASYNC_WAIT_CTX_set_wait_fd(ASYNC_WAIT_CTX *ctx, const void *key, OSSL_ASYNC_FD fd, void *custom_data, void (*cleanup)(ASYNC_WAIT_CTX *, const void *, OSSL_ASYNC_FD, void *)); int ASYNC_WAIT_CTX_get_fd(ASYNC_WAIT_CTX *ctx, const void *key, OSSL_ASYNC_FD *fd, void **custom_data); int ASYNC_WAIT_CTX_get_all_fds(ASYNC_WAIT_CTX *ctx, OSSL_ASYNC_FD *fd, size_t *numfds); int ASYNC_WAIT_CTX_get_changed_fds(ASYNC_WAIT_CTX *ctx, OSSL_ASYNC_FD *addfd, size_t *numaddfds, OSSL_ASYNC_FD *delfd, size_t *numdelfds); int ASYNC_WAIT_CTX_clear_fd(ASYNC_WAIT_CTX *ctx, const void *key); DESCRIPTION For an overview of how asynchronous operations are implemented in OpenSSL see ASYNC_start_job(3). An ASYNC_WAIT_CTX object represents an asynchronous "session", i.e. a related set of crypto operations. For example in SSL terms this would have a one-to-one correspondence with an SSL connection. Application code must create an ASYNC_WAIT_CTX using the ASYNC_WAIT_CTX_new() function prior to calling ASYNC_start_job() (see ASYNC_start_job(3)). When the job is started it is associated with the ASYNC_WAIT_CTX for the duration of that job. An ASYNC_WAIT_CTX should only be used for one ASYNC_JOB at any one time, but can be reused after an ASYNC_JOB has finished for a subsequent ASYNC_JOB. When the session is complete (e.g. the SSL connection is closed), application code cleans up with ASYNC_WAIT_CTX_free(). ASYNC_WAIT_CTXs can have "wait" file descriptors associated with them. Calling ASYNC_WAIT_CTX_get_all_fds() and passing in a pointer to an ASYNC_WAIT_CTX in the ctx parameter will return the wait file descriptors associated with that job in *fd. The number of file descriptors returned will be stored in *numfds. It is the caller's responsibility to ensure that sufficient memory has been allocated in *fd to receive all the file descriptors. Calling ASYNC_WAIT_CTX_get_all_fds() with a NULL fd value will return no file descriptors but will still populate *numfds. Therefore application code is typically expected to call this function twice: once to get the number of fds, and then again when sufficient memory has been allocated. If only one asynchronous engine is being used then normally this call will only ever return one fd. If multiple asynchronous engines are being used then more could be returned. The function ASYNC_WAIT_CTX_get_changed_fds() can be used to detect if any fds have changed since the last call time ASYNC_start_job() returned an ASYNC_PAUSE result (or since the ASYNC_WAIT_CTX was created if no ASYNC_PAUSE result has been received). The numaddfds and numdelfds parameters will be populated with the number of fds added or deleted respectively. *addfd and *delfd will be populated with the list of added and deleted fds respectively. Similarly to ASYNC_WAIT_CTX_get_all_fds() either of these can be NULL, but if they are not NULL then the caller is responsible for ensuring sufficient memory is allocated. Implementors of async aware code (e.g. engines) are encouraged to return a stable fd for the lifetime of the ASYNC_WAIT_CTX in order to reduce the "churn" of regularly changing fds - although no guarantees of this are provided to applications. 
Applications can wait for the file descriptor to be ready for "read" using a system function call such as select or poll (being ready for "read" indicates that the job should be resumed). If no file descriptor is made available then an application will have to periodically "poll" the job by attempting to restart it to see if it is ready to continue. Async aware code (e.g. engines) can get the current ASYNC_WAIT_CTX from the job via ASYNC_get_wait_ctx(3) and provide a file descriptor to use for waiting on by calling ASYNC_WAIT_CTX_set_wait_fd(). Typically this would be done by an engine immediately prior to calling ASYNC_pause_job() and not by end user code. An existing association with a file descriptor can be obtained using ASYNC_WAIT_CTX_get_fd() and cleared using ASYNC_WAIT_CTX_clear_fd(). Both of these functions requires a key value which is unique to the async aware code. This could be any unique value but a good candidate might be the ENGINE * for the engine. The custom_data parameter can be any value, and will be returned in a subsequent call to ASYNC_WAIT_CTX_get_fd(). The ASYNC_WAIT_CTX_set_wait_fd() function also expects a pointer to a "cleanup" routine. This can be NULL but if provided will automatically get called when the ASYNC_WAIT_CTX is freed, and gives the engine the opportunity to close the fd or any other resources. Note: The "cleanup" routine does not get called if the fd is cleared directly via a call to ASYNC_WAIT_CTX_clear_fd(). An example of typical usage might be an async capable engine. User code would initiate cryptographic operations. The engine would initiate those operations asynchronously and then call ASYNC_WAIT_CTX_set_wait_fd() followed by ASYNC_pause_job() to return control to the user code. The user code can then perform other tasks or wait for the job to be ready by calling "select" or other similar function on the wait file descriptor. The engine can signal to the user code that the job should be resumed by making the wait file descriptor "readable". Once resumed the engine should clear the wake signal on the wait file descriptor. RETURN VALUES ASYNC_WAIT_CTX_new() returns a pointer to the newly allocated ASYNC_WAIT_CTX or NULL on error. ASYNC_WAIT_CTX_set_wait_fd, ASYNC_WAIT_CTX_get_fd, ASYNC_WAIT_CTX_get_all_fds, ASYNC_WAIT_CTX_get_changed_fds and ASYNC_WAIT_CTX_clear_fd all return 1 on success or 0 on error. NOTES On Windows platforms the openssl/async.h header async.h. SEE ALSO crypto(7), ASYNC_start_job(3) HISTORY ASYNC_WAIT_CTX_new, ASYNC_WAIT_CTX_free, ASYNC_WAIT_CTX_set_wait_fd, ASYNC_WAIT_CTX_get_fd, ASYNC_WAIT_CTX_get_all_fds, ASYNC_WAIT_CTX_get_changed_fds, ASYNC_WAIT_CTX_clear_fd were first added to OpenSSL 1.1.0. Licensed under the OpenSSL license (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at.
https://www.openssl.org/docs/manmaster/man3/ASYNC_WAIT_CTX_get_changed_fds.html
CC-MAIN-2018-13
refinedweb
922
53.31
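To make the documented two-call pattern for ASYNC_WAIT_CTX_get_all_fds() concrete, here is an illustrative POSIX-only sketch (my own, not from the manual): the first call only reports the count, the second fills the allocated array, and select() then waits for a wait fd to become readable.

#include <stdlib.h>
#include <sys/select.h>
#include <openssl/async.h>

static int wait_for_job(ASYNC_WAIT_CTX *ctx)
{
    size_t numfds = 0, i;
    OSSL_ASYNC_FD *fds;
    fd_set readfds;
    int maxfd = -1;

    /* First call: fd == NULL, only *numfds is populated. */
    if (!ASYNC_WAIT_CTX_get_all_fds(ctx, NULL, &numfds) || numfds == 0)
        return 0;

    fds = malloc(numfds * sizeof(*fds));
    if (fds == NULL)
        return 0;

    /* Second call: now that sufficient memory has been allocated. */
    if (!ASYNC_WAIT_CTX_get_all_fds(ctx, fds, &numfds)) {
        free(fds);
        return 0;
    }

    FD_ZERO(&readfds);
    for (i = 0; i < numfds; i++) {
        FD_SET(fds[i], &readfds);
        if (fds[i] > maxfd)
            maxfd = fds[i];
    }
    free(fds);

    /* "Readable" means the paused ASYNC_JOB should be resumed. */
    return select(maxfd + 1, &readfds, NULL, NULL, NULL) > 0;
}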
cfix 1.2, which has been released today, introduces a number of new features, the most prominent being improved support for C++ and additional execution options. New C++ API To date, cfix has primarily focussed on C as the programming language to write unit tests in. Although C++ has always been supported, cfix has not made use of the additional capabilities C++ provides. With version 1.2, cfix makes C++ a first class citizen and introduces an additional API that leverages the benefits of C++ and allows writing test cases in a more convenient manner. Being implemented on top of the existing C API, the C++ API is not a replacement, but rather an addition to the existing API set. As the following example suggests, fixtures can now be written as classes, with test cases being implemented as methods: #include <cfixcc.h> class ExampleTest : public cfixcc::TestFixture { public: void TestOne() {} void TestTwo() {} }; CFIXCC_BEGIN_CLASS( ExampleTest ) CFIXCC_METHOD( TestOne ) CFIXCC_METHOD( TestTwo ) CFIXCC_END_CLASS() To learn more about the definition of fixtures, have a look at the respective TestFixture chapter in the cfix documentation. Regarding the implementation of test cases, cfix adds a new set of type-safe, template-driven assertions that, for instance, allow convenient equality checks: void TestOne() { const wchar_t* testString = L"test"; // // Use typesafe assertions... // CFIXCC_ASSERT_EQUALS( 1, 1 ); CFIXCC_ASSERT_EQUALS( L"test", testString ); CFIXCC_ASSERT_EQUALS( wcslen( testString ), ( size_t ) 4 ); // // ...log messages... // CFIX_LOG( L"Test string is %s", testString ); // // ...or use the existing "C" assertions. // CFIX_ASSERT( wcslen( testString ) == 4 ); CFIX_ASSERT_MESSAGE( testString[ 0 ] == 't', L"Test string should start with a 't'" ); } Again, have a look at the updated API reference for an overview of the new API additions. Customizing Test Runs Another important new feature is the addition of the new switches -fsf (Shortcut Fixture), -fsr (Shortcut Run), and -fss (Shortcut Run On Failing Setup). Using these switches allows you to specify how a test run should resume when a test case fails. When a test case fails, the default behavior of cfix is to report the failure, and resume at the next test case. By specifying -fsf, however, the remaining test cases of the same fixture will be skipped and execution resumes at the next fixture. With -fsr, cfix can be requirested to abort the entire run as soon as a single test case fails. What else is new in 1.2? - CFIX_ASSERT_MESSAGE - ANSI support for CFIX_LOG, CFIX_INCONCLUSIVE, and CFIX_ASSERT_MESSAGE (and the entire C++ API) - CfixPeGetValue and CfixPeSetValue - Kernel mode: Drivers do not need to link against aux_klib.lib any more - Before and After routines - Support for cl 13.00 and Visual Studio 2003 (in addition to 2005 and 2008) Download As always, cfix 1.2 is source and binary compatible to previous versions. The new MSI package and source code can now be downloaded on Sourceforge. cfix is open source and licensed under the GNU Lesser General Public License.
http://jpassing.com/2008/11/10/cfix-12-introduces-improved-c-support/
CC-MAIN-2015-11
refinedweb
476
52.7
Thanks Marco > this weird namespace issue is actually caused by a nested redefinition > of a prefix mapping; just look at what's before the <head> element in > the template. Indeed <jx:import seems to be the source of the bug... nice one > the imported file redefines the jx namespace within the > <page> element that originally defines it. You mean the definition xmlns:jx="" being done more than once is the cause to this... this is bizarre cause I use jx:import uri="direct path not resource" in which I also define jx and I don't have this. wouldn't it be linked with the fact it is a resource://... that is imported ? > solution would be to have JXT track the template's namespace mappings > and eat duplicate ones from imported templates. Is this a difficult hack or have you done this ? I haven't really touched any core generators (no need :-) but it's never to late Thanks for the help Tibor
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200503.mbox/%[email protected]%3E
CC-MAIN-2016-44
refinedweb
163
72.87
Hey there GameDevs, I'm having trouble grasping a few concepts about using binary triangle trees for representing terrain data. Lets say I'm working with a 17x17 square grid of vertices and i have a class as follows: public class Terrain { private int maxWidth; private int numVertices; private float[][] heightmap; public Terrain(int gridWidth) { // 16 assuming each grid cell's length and width = 1; this.maxWidth = gridWidth + 1; // 17 this.numVertices = this.maxWidth * this.maxWidth; // 289 System.out.println(this.numVertices); this.heightmap = new float[this.maxWidth][this.maxWidth]; // 17x17 for(int x = 0; x < this.maxWidth; x++) { for(int z = 0; z < this.maxWidth; z++) { this.heightmap[x][z] = 0; // Zero out height data; } } } } Bounds of terrain: The vertices would be contained in a vertex buffer from 0 - 289; Each 1x1 cell contains 2 triangles and total of 4 vertices / 6 indices. So what I am not grasping is how to represent this data using a BTT. The way I understand a BTT is BTT { data leftchild rightchild } Now I'm assuming the data variable in each tree would be an index buffer of the 3 indices for the triangle. This biggest thing i struggle with is how do I keep track of the left, right, and bottom neighbor of each triangle without being able to pass data by reference. Please let me know if you need anymore information from me, this is all i can think up right now. Edited by Divega, 17 October 2012 - 09:25 PM.
http://www.gamedev.net/topic/632965-binary-triangle-trees-and-terrain/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024
CC-MAIN-2016-36
refinedweb
251
72.05
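On the neighbour-tracking worry in the post above: Java object variables are references already, so a bintree node can simply hold its neighbours as fields, and an update made through one reference is seen by every other holder of that reference. The class below is a hypothetical sketch (field names and the vertex-index scheme are mine); it shows only the easy half of the bookkeeping — a full ROAM-style split would also relink the old neighbours back onto the new children.

public class BinTriNode {
    // indices into the 17x17 vertex grid (apex, left corner, right corner)
    public int apex, left, right;

    public BinTriNode leftChild, rightChild;
    public BinTriNode leftNeighbor, rightNeighbor, bottomNeighbor;

    public BinTriNode(int apex, int left, int right) {
        this.apex = apex;
        this.left = left;
        this.right = right;
    }

    // Split this triangle at the midpoint of its base edge.
    public void split(int midVertex) {
        leftChild  = new BinTriNode(midVertex, apex, left);
        rightChild = new BinTriNode(midVertex, right, apex);

        // the two children are each other's neighbours along the split edge
        leftChild.rightNeighbor = rightChild;
        rightChild.leftNeighbor = leftChild;

        // the parent's left/right neighbours become the children's bottom neighbours
        leftChild.bottomNeighbor  = leftNeighbor;
        rightChild.bottomNeighbor = rightNeighbor;
    }
}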
Welcome on aardvark-platform official chat Microsoftprefix. let controlSnake (kb:IKeyboard)= controller { let! move = (kb.IsDown Keys.Up %? V3d.OIO %. V3d.OOO) %+ (kb.IsDown Keys.Down %? -V3d.OIO %. V3d.OOO) %+ (kb.IsDown Keys.Left %? -V3d.IOO %. V3d.OOO) %+ (kb.IsDown Keys.Right %? V3d.IOO %. V3d.OOO) if move <> V3d.Zero then return (fun _ -> move ) } @dallinbeutler it seems Mod.integrate is extension from here AdaptiveFuncin ModExtensionif there is afunfrom base? @dallinbeutler afun @krauthaufen @haraldsteinlechner hey, what's purpose of <DebugType>full</DebugType>in rendering examples? examples are generated by new.fsx and for some reason in the template this was activated. that is why it is everywhere, but honsetly i cannot imagin why it should be necessary quick question. What's the difference between afun and AdaptiveFunc? I'm just trying to make a simple movement controller, and afun doesn't seem integrated with AdaptiveFunc or the Mod module. Should I be using the adaptive computation expression instead of controller? I'm both glad and sorry trapped into controller troubles. Glad since contribution seems to be needed. Sorry because you found one of the darked holes so early. In fact camera controllers have a long history for krauthaufen and me - and unfortunately we regularly escalate somewhat in that matter. in retrospecive we remember from our adaptivefunc stuff that it is very correct in terms of timing. What we also found is that it is rather hard to work with (maybe just because of chaos and missing docu). I just looked it up, people most often use direct construction of AdaptiveFunc using adaptive blocks. i think this is the simpliefied last version which is used the most. So in the end the whole controller thing in rendering might not be that bad if there was better documentation. Also, there could be potential in using a real FRP library. The first versions of those controllers appear in logs in early 2015. in later 2015 afun appears in late 2015. So to sum up, in practice people tend to use adaptiveFunc (which is a->a i thing) since it might be easier to work with. I would suggest going this way?
https://gitter.im/aardvark-platform/Lobby?at=5d484ccb475c0a0feb045e4a
CC-MAIN-2022-33
refinedweb
358
58.69
Provided by: allegro4-doc_4.4.2-4_all NAME xwin_set_window_name - Specify the window name and group (or class). Allegro game programming library. SYNOPSIS #include <allegro.h> void xwin_set_window_name(const char *name, const char *group); DESCRIPTION This function is only available under X. It lets you to specify the window name and group (or class). They are important because they allow the window manager to remember the window attributes (position, layer, etc). Note that the name and the title of the window are two different things: the title is what appears in the title bar of the window, but usually has no other effects on the behaviour of the application. SEE ALSO set_window_title(3alleg4)
http://manpages.ubuntu.com/manpages/trusty/man3/xwin_set_window_name.3alleg4.html
CC-MAIN-2019-18
refinedweb
112
51.14
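A small usage sketch for the call documented above (my own example, not from the manual): set the window name and class before set_gfx_mode() creates the window, and note that this is separate from the visible title set by set_window_title(). Depending on the Allegro 4 build the declaration may live in xalleg.h rather than allegro.h, hence the platform guard.

#include <allegro.h>

int main(void)
{
    allegro_init();
    install_keyboard();

#ifdef ALLEGRO_UNIX
    /* name + class, so the window manager can remember position etc. */
    xwin_set_window_name("mygame", "MyGame");
#endif

    set_window_title("My Game");   /* the visible title bar text */

    if (set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0) != 0)
        return 1;

    readkey();
    return 0;
}
END_OF_MAIN()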
Announcements foofightrMembers Content count1136 Joined Last visited Community Reputation130 Neutral About foofightr - RankContributor Virtual Aquarium Screensaver - give it a try! foofightr replied to spiralmonkey's topic in Your AnnouncementsThat's a nice screen saver. I didn't expect it to run on my old machine, but it did! There was one problem though: it tries to access the internet when it starts up. That's a definite no-no. C/C++: Defining classes within global functions foofightr replied to psyjax's topic in General and Gameplay ProgrammingIt's useful for writing functors. Virtual methods? foofightr replied to DvDmanDT's topic in General and Gameplay ProgrammingQuote:Original post by DvDmanDT Well, say I only have one class then. Everyone will do the same thing. So I don't need the virtual keyword.. So I can iterate through the list in x ms.. My question is, how much slower it would become if I just added that 'virtual' keyword? If the body of the virtual function is more than a few lines (and since it's an Update method, I assume it will be), the speed difference will be negligible. [C++] Please help debugging this Vector implementation foofightr replied to paulecoyote's topic in General and Gameplay ProgrammingI noticed that the vector's members are allocated on the heap. Might I suggest the following: namespace pde { template <class ValueType, unsigned int vectorWidth> class VectorTemplate { public: ValueType value[vectorWidth]; ... You save 4 bytes per vector, but more importantly you save all those small memory allocations, which can chew through (not to mention fragment) memory at an alarming rate. It also makes it impossible to have a memory leak. Replace D3DXCreateTextureFromFile foofightr replied to Halsafar's topic in Graphics and GPU ProgrammingThere's nothing stopping you from creating an empty texture, then locking it and writing arbitrary data to it that you read from an image file. That's probably all that D3DXCreateTextureFromFile does anyway. This should get you started: Create the texture Lock and write to it question: cost of dynamic cast? foofightr replied to kRogue's topic in General and Gameplay ProgrammingThe string version is not meant for speed (FYI it just uses a binary search to look up a TypeInfo, then passes it onto the regular IsA). Using strings as the lookup type lets you do things like letting the developers (yourself included) perform actions on certain types of objects in your game's hierarchy. You can type the class name into a dropdown console as an argument to a command, like "hide projectile" if you wanted to hide all 'Projectile' objects in your game level, or "count ammo" which might tell you how many 'Ammo' objects are in the level. I already conceded that RTTI is faster regardless. I'm going after functionality. question: cost of dynamic cast? foofightr replied to kRogue's topic in General and Gameplay ProgrammingQuote:Original post by ToohrVyk typeof is a O(1) operation. dynamic_cast<A>(b) will call a typeof-like function on A and b, and then determine if typeof(A) can be reached from typeof(b) in the type lattice of your program. As such, dynamic_cast is an O(n) operation, where n is the number of parents of the type of b. This is because in a hierarchy like: class A; class B : public A; class C : public B; class D; class E : public D, public C; you can use dynamic_cast to cast an object of type E to A, B, C, D and E (which means just as many checks in the worst case). 
I am currently working on a comparison between various RTTI methods, you can find it on this page. It's not complete yet, though, in particular I want to redo the entire benchmarking and add typeof comparison. That was an interesting article. I also tried my hand at a "better RTTI" a while back. It works like this: class TypeInfo { // info on a type (name and up to 4 parent pointers) }; class DynamicTypeObject { bool IsA (const TypeInfo& type); bool IsA (const char* type); // performs lookup of registered types virtual const TypeInfo* GetType () = 0; }; It would be used like this: class Item : virtual public DynamicTypeObject { static TypeInfo Type; const TypeInfo* GetType () { return &Type; } }; TypeInfo Item::Type("Item"); // in .cpp file class Potion : public Item { static TypeInfo Type; const TypeInfo* GetType () { return &Type; } }; TypeInfo Potion::Type("Potion", &Item::Type); // in .cpp file class Scroll : public Item { static TypeInfo Type; const TypeInfo* GetType () { return &Type; } }; TypeInfo Scroll::Type("Scroll", &Item::Type); // in .cpp file And finally you would use it like this: void Drink (Item* i) { // basic RTTI equivalent if (i->IsA(Potion::Type)) { // it's a potion // (this does not address multiple inheritance...) Potion* p = (Potion*)i; } // something RTTI could never dream of doing if (i->IsA("Potion")) { } } This still does not address multiple inheritance, but I have not really thought that far since I never needed to yet. I think a solution to that would not be so hard. The main advantage of this method seems to be that you only have to use as many TypeInfo objects as you actually need, instead of for every single class. But it occured to me that if I can be selective in this way, then why can't the compiler be as well? After all, if you don't use dynamic_cast anywhere, you don't need any RTTI (even if it's enabled). If you only use dynamic_cast<Potion*> then the compiler knows it only needs type info for Potion and its parent(s). Something like this sounds simple to optimize. Also, I benchmarked my implementation against RTTI and it was about 3 times slower (optimized VC++ 7). So unless I am doing something horribly wrong, then the "RTTI is slow" argument is out the window, as far as I'm concerned. Not that typecasting is going to take up much CPU % anyway, but thought I'd mention it anyway. So overall, RTTI doesn't seem so bad after all. It's fast and if my assumption is correct, is lightweight (or at least could be). I'm still using my own method though. Querying with strings, mmmmm. #define to template conversion foofightr replied to foofightr's topic in General and Gameplay ProgrammingQuote:Original post by silvermace i think you've misinterpreted the use of templates. the example you've provided would generate illogical code, eg: the compiler would generate the following code if you made a Whatever class specialised with the template parameter T = int would look like this:Whatever<int>::Function() { printf(int); } look into C++'s variable argument lists (MSDN page) I was thinking more like printf("int"). I shouldn't have used printf as an example - this is not about variable argument lists, it's about simple string representation of the template argument. If only #T worked... I was trying to find a way around template specialization, but I suppose I will need to have snippets like this all over the place: template<> void Whatever<int>::Function () { printf("int"); } template<> void Whatever<float>::Function () { printf("float"); } etc... 
#define to template conversion foofightr posted a topic in General and Gameplay Programming#define PRINT(a) printf(#a) So PRINT(hello) will be replaced by printf("hello"). Basic preprocessor stuff. Is there any such functionality for C++ templates? tempate <class T> class Whatever { void Function () { printf(T); // <---- ??? } }; Is there any way I can make this work? Smaller music format filesize? foofightr replied to Dookie's topic in Graphics and GPU ProgrammingMaybe this has been mentioned before, but if not it may be helpful... Be careful about event notifications with hardware sound buffers. I was using DirectSound to play OGG files streamed in using libvorbisfile (pretty much the same method described in this thread), and on my machine everything worked fine with events at 0.5 and 1 second. But on some other machines the events would fire dozens of times per second and mess up the audio really bad. Basically the timing was all screwed up. I tracked it down to the CreateBuffer() function in DirectSound, where I was letting it pick whether to use a hardware or software buffer. I forced it to use a software buffer and everything worked fine after that. I'm not sure whether this means event notification is hosed for all hardware buffers, or just for specific sound cards and/or drivers, but it caused me some grief so I thought I'd pass it on. Vector Bug foofightr replied to ViperG's topic in General and Gameplay ProgrammingAlso keep in mind that STL vectors don't free their own memory until they go out of scope. Calling Sensors.clear() only sets the size to 0. To force it to release its memory, there's a trick you can use: vector<sensor*>().swap(Sensors); It basically swaps its contents with a temporary vector, and when this temporary vector goes out of scope (right after that line of code) it releases the memory. Sensors now has size 0 with no memory allocated for it. - I meant for you to compile the code you originally posted (and defended), without fixing the glaring errors hoping nobody would notice... But if that's your game, then carry on I guess... - Quote:Original post by Nice Coder Quote:Original post by foofightr. I don't realise it, because it doesn't. Once you figure out how it works, you'll know the problem with what you've posted. From, Nice coder Well, here goes: x--; // subtracts 1 x | x >> 1; // does nothing x | x >> 2; // does nothing x | x >> 4; // does nothing x | x >> 8; // does nothing x | x >> 16; // does nothing x | x >> 32; // does nothing x++ // missing semi-colon (meant to add 1) x = x >> 1; // divide by 2 I dare you to compile it with any C/C++ compiler... - Quote:Original post by Nice Coder Quote:Original post by lucky_monkey that doesn't event work... sample results (ignoring the fact that you can't bitshift an integer 32 bits (as you may have been operation on a larger type for some reason)): x = 25 -> x = 12 x = 254 -> x = 127 x = 253 -> x = 126 maybe testing stuff before you post would be a good idea... Lets try an 8 bit version, shal we? x--; x | x >> 1; x | x >> 2; x | x >> 4; x | x >> 8; x++; x = x >> 1; Lets start x off as say 01101111 01101111 - 1 = 01101110 01101110 >> 1 = 00110111 01101110 | 00110111 = 01111111 ect. It won't do much as x | x >> 2, 4, or 8 = x currently. 01111111 + 1 = 10000000 10000000 >> 1 = 01000000 Which is the answer. Read before posting. From, Nice coder. sizeof(struct) or sizeof(pointer) foofightr replied to CyberSlag5k's topic in General and Gameplay ProgrammingI think what he means if that if you have this: class A { ... 
}; class B : public A { ... }; B* ptrB = new B; A* ptrA = ptrB; then sizeof(*ptrA) will return the size of A, not B, even though the pointer is pointing to an object B. The compiler checks the type at compile time, not runtime.
https://www.gamedev.net/profile/30-foofightr/
CC-MAIN-2017-34
refinedweb
1,853
61.16
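To make the last point in the profile above (sizeof works on the static type) runnable, here is a tiny self-contained demo of my own:

#include <iostream>

struct A { int x; };
struct B : A { int y[16]; };

int main()
{
    B* ptrB = new B;
    A* ptrA = ptrB;   // same object, viewed through an A*

    // sizeof is resolved at compile time from the pointer's declared type,
    // so the first line reports sizeof(A) even though *ptrA is really a B.
    std::cout << "sizeof(*ptrA) = " << sizeof(*ptrA) << "\n";
    std::cout << "sizeof(*ptrB) = " << sizeof(*ptrB) << "\n";

    delete ptrB;
    return 0;
}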
The development of a Twisted Web application should be orthogonal to its deployment. This means is that if you are developing a web application, it should be a resource with children, and internal links. Some of the children might use Nevow , some might be resources manually using .write , and so on. Regardless, the code should be in a Python module, or package, outside the web tree. You will probably want to test your application as you develop it. There are many ways to test, including dropping an .rpy which looks like: from mypackage import toplevel resource = toplevel.Resource(file="foo/bar", color="blue") into a directory, and then running: % twistd web --path=/directory You can also write a Python script like: #!/usr/bin/env python from twisted.web import server from twisted.internet import reactor, endpoints from mypackage import toplevel endpoint = endpoints.TCP4ServerEndpoint(reactor, 8080) endpoint.listen( server.Site(toplevel.Resource(file="foo/bar", color="blue"))) reactor.run() Which one of these development strategies you use is not terribly important, since (and this is the important part) deployment is orthogonal . Later, when you want users to actually use your code, you should worry about what to do – or rather, don’t. Users may have widely different needs. Some may want to run your code in a different process, so they’ll use distributed web (twisted.web.distrib ). Some may be using the twisted-web Debian package, and will drop in: % cat > /etc/local.d/99addmypackage.py from mypackage import toplevel default.putChild("mypackage", toplevel.Resource(file="foo/bar", color="blue")) ^D If you want to be friendly to your users, you can supply many examples in your package, like the above .rpy and the Debian-package drop-in. But the ultimate friendliness is to write a useful resource which does not have deployment assumptions built in. .rpyfiles)¶ Twisted Web is not PHP – it has better tools for organizing code Python modules and packages, so use them. In PHP, the only tool for organizing code is a web page, which leads to silly things like PHP pages full of functions that other pages import, and so on. If you were to write your code this way with Twisted Web, you would do web development using many .rpy files, all importing some Python module. This is a bad idea – it mashes deployment with development, and makes sure your users will be tied to the file-system. We have .rpy s because they are useful and necessary. But using them incorrectly leads to horribly unmaintainable applications. The best way to ensure you are using them correctly is to not use them at all, until you are on your final deployment stages. You should then find your .rpy files will be less than 10 lines, because you will not have more than 10 lines to write.
https://twistedmatrix.com/documents/current/web/howto/web-development.html
CC-MAIN-2019-30
refinedweb
472
66.64
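The text above keeps saying the application "should be a resource with children" but never shows one; below is a minimal sketch of what such a class might look like (the class names, the file/color parameters and the status child are my inventions, mirroring the toplevel.Resource(file=..., color=...) calls in the examples). Either deployment snippet above can then mount it — the resource itself knows nothing about ports, .rpy files or the Debian drop-in.

from twisted.web import resource

class TopLevel(resource.Resource):
    def __init__(self, file, color):
        resource.Resource.__init__(self)
        self.file = file
        self.color = color
        # a static child, registered up front
        self.putChild(b"status", StatusPage(self))

    def render_GET(self, request):
        return ("<html><body style='color:%s'>index</body></html>"
                % self.color).encode("utf-8")

class StatusPage(resource.Resource):
    isLeaf = True

    def __init__(self, parent):
        resource.Resource.__init__(self)
        self.parent = parent

    def render_GET(self, request):
        return ("serving %s" % self.parent.file).encode("utf-8")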
You are given an array where each element is the price of a stock on a given day. Choose one day to buy and a later day to sell so that the profit is maximized. Note that you cannot short a stock (sell a stock before you buy one).

Example 1:
Input: [8,3,7,2,8,9]
Output: 7
Explanation: The buying price should always be less than the selling price and the difference should be maximum. We take 2 as the buying price and sell at a price of 9.

Example 2:
Input: [6,4,3,2]
Output: 0
Explanation: The buying price should be less than the selling price, so no transaction is done.

Solution

The first thing to note about this problem is that to make a sale, we need to buy the stock first. If we draw these values in a simple graph, we will see that the problem is based on peaks (sell) and valleys (buy). Hence we have to choose the smallest valley first, then the highest later peak, for the profit to become maximum.

We can define two variables, one for the minimum price and a second for the maximum profit. While traversing through the array, we update both at each element. For each iteration there can be only two situations:

- If prices[i] < minimum price, then minimum price becomes prices[i].
- Else, maximum profit becomes the maximum of (maximum profit, prices[i] - minimum price).

After the iteration, return maximum profit.

Solution Implementation

#include <iostream>
#include <vector>
#include <algorithm>
#include <climits>
using namespace std;

int main()
{
    vector<int> vec = { 8,3,7,2,8,9 };
    int minPrice = INT_MAX;
    int maxProfit = 0;
    for(int i = 0; i < vec.size(); i++)
    {
        if (vec[i] < minPrice)
        {
            minPrice = vec[i];
        }
        else
        {
            maxProfit = max(maxProfit, vec[i] - minPrice);
        }
    }
    cout << "Maximum Profit = " << maxProfit << "\n";
    return (0);
}

Complexity Analysis
- Time complexity: O(n).
- Space complexity: O(1).
https://prepfortech.in/interview-topics/arrays/best-time-to-buy-and-sell-stocks
CC-MAIN-2021-17
refinedweb
288
53.81
Plotting the Analemma My SJAA planet-observing column for January is about the Analemma and the Equation of Time. The analemma is that funny figure-eight you see on world globes in the middle of the Pacific Ocean. Its shape is the shape traced out by the sun in the sky, if you mark its position at precisely the same time of day over the course of an entire year. The analemma has two components: the vertical component represents the sun's declination, how far north or south it is in our sky. The horizontal component represents the equation of time. The equation of time describes how the sun moves relatively faster or slower at different times of year. It, too, has two components: it's the sum of two sine waves, one representing how the earth speeds up and slows down as it moves in its elliptical orbit, the other a function the tilt (or "obliquity") of the earth's axis compared to its orbital plane, the ecliptic. The Wikipedia page for Equation of time includes a link to a lovely piece of R code by Thomas Steiner showing how the two components relate. It's labeled in German, but since the source is included, I was able to add English labels and use it for my article. But if you look at photos of real analemmas in the sky, they're always tilted. Shouldn't they be vertical? Why are they tilted, and how does the tilt vary with location? To find out, I wanted a program to calculate the analemma. Calculating analemmas in PyEphem The very useful astronomy Python package PyEphem makes it easy to calculate the position of any astronomical object for a specific location. Install it with: easy_install pyephem for Python 2, or easy_install ephem for Python 3. import ephem observer = ephem.city('San Francisco') sun = ephem.Sun() sun.compute(observer) print sun.alt, sun.az The alt and az are the altitude and azimuth of the sun right now. They're printed as strings: 25:23:16.6 203:49:35.6 but they're actually type 'ephem.Angle', so float(sun.alt) will give you a number in radians that you can use for calculations. Of course, you can specify any location, not just major cities. PyEphem doesn't know San Jose, so here's the approximate location of Houge Park where the San Jose Astronomical Association meets: observer = ephem.Observer() observer.name = "San Jose" observer.lon = '-121:56.8' observer.lat = '37:15.55' You can also specify elevation, barometric pressure and other parameters. So here's a simple analemma, calculating the sun's position at noon on the 15th of each month of 2011: for m in range(1, 13) : observer.date('2011/%d/15 12:00' % (m)) sun.compute(observer) I used a simple PyGTK window to plot sun.az and sun.alt, so once it was initialized, I drew the points like this: # Y scale is 45 degrees (PI/2), horizon to halfway to zenith: y = int(self.height - float(self.sun.alt) * self.height / math.pi) # So make X scale 45 degrees too, centered around due south. # Want az = PI to come out at x = width/2. x = int(float(self.sun.az) * self.width / math.pi / 2) # print self.sun.az, float(self.sun.az), float(self.sun.alt), x, y self.drawing_area.window.draw_arc(self.xgc, True, x, y, 4, 4, 0, 23040) So now you just need to calculate the sun's position at the same time of day but different dates spread throughout the year. And my 12-noon analemma came out almost vertical! Maybe the tilt I saw in analemma photos was just a function of taking the photo early in the morning or late in the afternoon? To find out, I calculated the analemma for 7:30am and 4:30pm, and sure enough, those were tilted. 
But wait -- notice my noon analemma was almost vertical -- but it wasn't exactly vertical. Why was it skewed at all? Time is always a problem As always with astronomy programs, time zones turned out to be the hardest part of the project. I tried to add other locations to my program and immediately ran into a problem. The ephem.Date class always uses UTC, and has no concept of converting to the observer's timezone. You can convert to the timezone of the person running the program with localtime, but that's not useful when you're trying to plot an analemma at local noon. At first, I was only calculating analemmas for my own location. So I set time to '20:00', that being the UTC for my local noon. And I got the image at right. It's an analemma, all right, and it's almost vertical. Almost ... but not quite. What was up? Well, I was calculating for 12 noon clock time -- but clock time isn't the same as mean solar time unless you're right in the middle of your time zone. You can calculate what your real localtime is (regardless of what politicians say your time zone should be) by using your longitude rather than your official time zone: date = '2011/%d/12 12:00' % (m) adjtime = ephem.date(ephem.date(date) \ - float(self.observer.lon) * 12 / math.pi * ephem.hour) observer.date = adjtime Maybe that needs a little explaining. I take the initial time string, like '2011/12/15 12:00', and convert it to an ephem.date. The number of hours I want to adjust is my longitude (in radians) times 12 divided by pi -- that's because if you go pi (180) degrees to the other side of the earth, you'll be 12 hours off. Finally, I have to multiply that by ephem.hour because ... um, because that's the way to add hours in PyEphem and they don't really document the internals of ephem.Date. Set the observer date to this adjusted time before calculating your analemma, and you get the much more vertical figure you see here. This also explains why the morning and evening analemmas weren't symmetrical in the previous run. This code is location independent, so now I can run my analemma program on a city name, or specify longitude and latitude. PyEphem turned out to be a great tool for exploring analemmas. But to really understand analemma shapes, I had more exploring to do. I'll write about that, and post my complete analemma program, in the next article. [ 20:54 Dec 29, 2011 More science/astro | permalink to this entry | comments ]
http://shallowsky.com/blog/tags/analemma/
CC-MAIN-2015-32
refinedweb
1,104
65.01
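The article above defers the complete program to the next post; purely as a convenience, here is a minimal sketch that stitches its own pieces together — the San Jose observer, the longitude-corrected noon, and the monthly sun positions — printing numbers instead of drawing them with PyGTK.

import math
import ephem

observer = ephem.Observer()
observer.name = "San Jose"
observer.lon = '-121:56.8'
observer.lat = '37:15.55'

sun = ephem.Sun()

for m in range(1, 13):
    date = '2011/%d/15 12:00' % m
    # shift clock noon to mean solar noon for this longitude
    adjtime = ephem.date(ephem.date(date)
                         - float(observer.lon) * 12 / math.pi * ephem.hour)
    observer.date = adjtime
    sun.compute(observer)
    print m, float(sun.az), float(sun.alt)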
Jan 04 2015 RPI to learn GUI programmingsummary: 0: this infolinks 1: intro HOW to learn GUI the hard way 2a: PYTHON 4 ways ( Tkinter, pygame, matplotlib, pygtk ) 2b: PYTHON WX 2ca: PYTHON QT4 2cb: QT5 update 2e: PYTHON QT5 2f: PYTHON QT5 QT QUICK (qml) 3: TCL Tk 4a: C# GTK 4b: C C++ GTK 4c: code blocks 4d: perl GTK 2 4e: java JDK 8 4f: Free Pascal / Lazarus 5a: RUBY 5a_update: for RUBY Shoes 5b: ICON 6: Python use gtk together with PYGAME, Tkinter 7: GUI design tools C _ GLADE _ GEANY more see desktop and menu system 10: get all the code examples 1: intro HOW to learn GUI the hard way while i play vkeybd i see that it exists as a executable in /usr/bin/ and as a TCL vkeybd.tcl in /usr/share/vkeybd, digging what that TCL means, i learn that it is a command language / programming language. on RPI TCL with TK rev 8.5 is installed, ( i find >130 .tcl files in the system ), looks like its a common linux language with TK GUI, but there is this thing with the prof. programmers, for them all other languages, except the one they use, is bad. as a linux beginner i am OPEN, but i ( and i see that question in the forum too ) want learn the full GUI way, means make a program what opens a window on the RPI desktop with keyboard and mouse operation, AND want be able to start it from the System MENU or desktop ICON. From forum was that bad example that a beginner asked about learning GUI programming and got the tip to install i was interested and did try it too. it is a terminal / command line thing only, so GUI would be still some way. but why i say it was a bad tip? is it that i do not like C#? i love it! most of the day the arduino IDE ( on PC or RPI ) is open, that is a C# cross compile tool for ATMEL AVR 8bit ATmega328 cpu. so: RPI is designed / promoted as a learning system for programming, so if someone buy it and want learn (GUI) programming, there must be easy tutorial to use any of the already installed languages. if that is not possible, you must question the raspberry pi foundation system concept. 2a: PYTHON 4 ways until now i used only PYTHON with the 3 flavours / (GUI) libs / ( what not easily can be mixed ) - 2a.aTkinter (example) - 2a.b Pygame (example) - 2a.c Matplotlib ( add installed ) (example) - 2a.d pygtk ( i just test, never used before ) and add i made the .desktop files to use them from RPI desktop; ( example ) here now the python GUI basics, now already included the pygtk version example also in the ZIP file with all examples. but let's start easy: you can use CLI ( even via putty nano xxx.py ) but as we anyhow want to do a python program what opens a window on desktop ( i call it a GUI program ) we should also work there! Menu/Programming/Python 2 opens the IDLE 2 ( python 2.7.3 ) could use also Python 3. Later, after understand the differences, you can learn how to write a program what runs in both environments ... in that window see the >>> prompt / interactive mode / rarely used, but very good for special checks. as this can not be a python course ( google for "python courses online" ) or start here but what we need is to start with editing code files. so you should make a sub dirctory for your python play files first and use from IDLE the file / new / save as / 'go new sub directory' / filename : myfirstpythonprogram.py but, what the hell, you could just save it at /home/pi/Desktop/.. you want see some examples first? look /home/pi/python_games/*.py if you are in the edit window use menu run / run module / ( or F5 ) must ok save first! you test your program! 
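if you just want something to save as myfirstpythonprogram.py for that first F5 test, a tiny file of my own invention is enough — it also prints which Python interpreter ran it, which becomes relevant below when python and python3 are compared.

# myfirstpythonprogram.py - save anywhere (e.g. /home/pi/Desktop/) and run with F5
import sys

print("hello from my first python program")
print("running on python %d.%d" % (sys.version_info[0], sys.version_info[1]))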
only when all problems solved you can skip the IDLE environment and just open a terminal and type python /home/pi/Desktop/myfirstpythonprogram.py what will use default python 2. for python 3 type python3 /home/pi/Desktop/myfirstpythonprogram.py you can select in context menu if you want open a *.py file per default at double click with leafpad editor or with IDLE there is a special thing if you want to use some RPI hardware like the GPIO pins from programming, you need root priv. so you must start with sudo python myfirst_GPIO_program.py in terminal and yes, still can use IDLE by sudo idle from a terminal in desktop, and the python shell window comes up in admin mode. 2a.a from python to GUI with python Tkinter if you loaded my examples you pls find /home/pi/GUI_test/python_test/python_tkinter_GUI.py i started also with python tkinter because i searched for a python what can give me a slider for operation see above example snap from a PID control...process control system i made. 2a.b later i see that actually with some easy code in pygame i could make my own ( and with that more flexible ) slider. pygame GUI /home/pi/GUI_test/python_test/python_pygame_GUI.py some, who anyway are interested in game programming will start there, others say "i never will do games" well, you don't need to, i used it for a easy button slider audio system tool and later for a test about synth and midi and keyboard 2a.c but when i catched some data from my arduino via the USB port ( by a python service program ) i needed a nice plot, that's what matplotlib is for ( and it comes with its own GUI window ) /home/pi/GUI_test/python_test/python_matplotlib_GUI.py now that will not start until you install something sudo apt-get install -y python-matplotlib i did the current trending for the ARDUINO �PCS and the PoorManScope 2a.d other than the Tkinter also can use the GTK, see /home/pi/GUI_test/python_test/python_pygtk_GUI.py the other python files in that python_test dir are explained later 2b: PYTHON WX additionally i want to test on one more python GUI, while i do i see while installing suggested packages: wx2.8-doc wx2.8-examples editra and in RPI forum sudo apt-get install python-wxgtk2.8 python-wxtools wx2.8-i18n libwxgtk2.8-dev sudo apt-get install python-wxgtk2.8 python-wxtools wx2.8-i18n libwxgtk2.8-dev libgtk2.0-dev but it worked anyhow for the small hello world example window. 2c: PYTHON QT4 on a new setup wheezy 05.05.2015 SD system ( using RPI1B hardware) i want test more GUI programming tools. 
start for PYQT from here find i would need this: sudo apt-get install libqt4-assistant libqt4-core \ libqt4-dbg libqt4-dbus libqt4-designer libqt4-dev \ libqt4-gui libqt4-help libqt4-network libqt4-opengl \ libqt4-opengl-dev libqt4-qt3support libqt4-script \ libqt4-sql libqt4-sql-ibase libqt4-sql-mysql libqt4-sql-odbc \ libqt4-sql-psql libqt4-sql-sqlite libqt4-sql-sqlite2 \ libqt4-svg libqt4-test libqt4-webkit libqt4-webkit-dbg \ libqt4-xml libqt4-xmlpatterns libqt4-xmlpatterns-dbg \ libqtcore4 libqtgui4 qt4-demos qt4-designer qt4-dev-tools \ qt4-doc qt4-doc-html qt4-qtconfig qtcreator i do only sudo apt-get install python-qt4 libqt4-dev python-qt4-dbg qt4-dev-tools 434 MB and now see in desktop menu / programming / QT4 assistant ( a help system ) designer ( widget form tool (like GLADE) ) linguist ( no idea whats that for ) but first: start only the IDLE python2 shell and type import PyQt4 looks ok first hello world example get from here other info here #!/usr/bin/env python import sys from PyQt4 import Qt # We instantiate a QApplication passing the arguments of the script to it: a = Qtt.QLabel("Hello, World") # ... and that it should be shown. hello.show() # Now we can start it. a.exec_() when i try same file from IDLE3 error: no module named PyQt4 in raspberry forum see: by DougieLawson � Fri Feb 27, 2015 4:04 pm other sources: sudo apt-get install python3-pyqt4 first i do sudo apt-get install python3-pyqt4 python3-pyqt4-dbg python3-sip ok, now python3 runs this example too. away from python check on QT what with QT creator a C++ IDE for desktop applications sudo apt-get install qtcreator ( 103 MB more ) see icon QT creator ( not QT4 ) so, that should be the IDE, but how to use it for python i have to find out, also how to combine it with the QT4 designer? without that there could be not much reason to use it instead of IDLE2 or IDLE3 more info forum and here and here sudo apt-get install python3-pyside ( 90MB more ) and see no new desktop menu entry?? here i not see what it is for, but to combine it with QTCreator see here and here 2cb: QT5 update 12/2016 i just updated my RPI3 ( remote on WIFI ) using new pixel desktop... when i happen to read about QT5 see also again a general intro at Wiki so i decided to give it an other try. first i check on my system ( mc / command / find file / *Qt*.* ) and find 77 files, most libQt5... in forum... i got the idea that Qt5 is in repository so i ignore all the old info pages about building it on RPI, one info say it needs 2 days. here i find the easy procedure sudo apt-get install qt5-default ( 45MB more ) (ver: 5.3.2 ) sudo apt-get install qtcreator ( 209MB more ) qtcreator (from terminal or desktop menu ) remotely from XRDP ( windows remote desktop ) not start qtcreator -noload Welcome gives despite error msgs a IDE window. 
and from HDMI / Desktop: but a new setup with the 2017-01-11-raspbian-jessie.img and not install XRDP i now try about VNC, what now is installed by default, -a- vncserver :1 does not ask for password any more -b- in sudo raspi-config / interfacing options / VNC enable / still not work after reboot, need via putty vncserver :1 again i check at file /usr/bin/raspi-config and find that VNC enable means systemctl enable vncserver-x11-serviced.service && systemctl start vncserver-x11-serviced.service && STATUS=enabled that does not the vncserver :1 check: usr/bin/vncserver is link to vncserver-virtual so possibly still need the old autostart: i make a file sudo chmod 755 /etc/init.d/vncserver sudo update-rc.d vncserver defaults ( now there is no answer on this anymore ) sudo reboot to check if VNC starts i install sudo apt-get install qt5-default sudo apt-get install qtcreator but the help system not work good, local doc missing ( like examples ) and also links to QT homepage ( like tutorials ) end at 404 error. ( and operation slow, even i use RPI3 ) installing additionally + Qt5base private development files + debugging symbols not work, need also sudo apt-get install qt5-doc qtbase5-examples qtbase5-doc-html to run example files / or start your own first program qt creator misses compiler! tools / options build and run / tab compiler ADD GCC path usr/bin/gcc OK just see a nice / but fast / tutorial video Introducing Qt Quick Controls in Qt 5.1 2e: PYTHON QT5 update 01/2017 now also experiment with PYTHON and QT5 i know, that is old style code, now can find lots of examples in internet using the self..... anyhow it runs, 2f: PYTHON QT5 QT QUICK (qml) sudo apt-get install python-pyqt5.qtquick sudo apt-get install python3-pyqt5.qtquick play with my first QML file 3: TCL Tk in a other forum info i read that RPI has installed minimum 12 languages ( also TCL mentioned ) lets get started to dig how to use it. in /usr/bin/ find wish ( wish8.5) and tclsh ( tclsh8.5) while wish already lets start you with a ready window frame, ( example ) tclsh is the tool to start your own GUI program so in a pi subdir i write a small file and start it with tclsh TCL_GUI.tcl pls go cd /usr/share/tcltk/tk8.5/demos/ and start tclsh widget and try some, you will see how powerful it is and what good examples are prepared like filemenu... , what i needed but not wanted to code by python pygame update 01/2017 TCLTk 8.5 is out, TCLTk 8.6 is installed for some tools only, the program i can not find anymore also the demos are gone. but after sudo apt-get install tcl tk tclsh works again: play more with install and got the demo files too (sudo apt-get install tk8.6-doc), but must change in widget file two times 8.5 to 8.6 ( or delete ) and all works again. ( working in XRDP window remote desktop terminal) sudo apt-get install tk8.6-doc cd /usr/share/tcltk/tk8.6/demos/ sudo nano widget tclsh widget for introduction to the language can start here 4a: C# GTK now, as i already do the install of C# and MONO i wanted to give it a try about GTK... oh, that was a long way for me!! but i not give up: -- add installation sudo apt-get update sudo apt-get upgrade sudo apt-get install mono-complete sudo apt-get install gtk-sharp2 -- how to start a C# GUI program using System; using Gtk; -- how to compile the exe mcs C_sharp_GUI.cs -pkg:gtk-sharp-2.0 -- how to start the exe mono C_sharp_GUI.exe this GUI start also by the included .desktop file. 
-+- so from the documentation that .EXE file should run on a windows PC, but when i try it did NOT. but my desktop PC is very old. -- GTK info / examples /usr/bin/monodoc go gnom libraries / Gtk -- design tool // not recommendable on small RPI sudo apt-get install monodevelop docu 4b: C C++ GTK i can not repeat it often enough, this is a beginner blog, a log of my learning work on RPI and i did not find a comprehensive GUI introduction on any of that languages, just check on many sites, fly over and read the commands they are using / i worry i am dead b4 i can read all that long text manuals / and also C++, i never learned / used it, in a downloaded project for the RPI camera i remember i did a "make" one time ( and it worked ), without knowing what i am doing. so today we want learn C++ in 3 easy steps: - 1 - type code, compile, run step 1 basics 1.1. create a source file: nano C_++_NOGUI.cpp content: // C_++_NOGUI.cpp #include <"iostream"> // do not use the " using namespace std; int main() { string hi = "Hello World"; cout << hi << endl; return 0; } 1.2. compile and make a executable C_++_NOGUI file g++ -std=c++0x C_++_NOGUI.cpp -oC_++_NOGUI 1.3. start ./C_++_NOGUI that works well! - 2 - make a makefile and use make 2.1 create Makefile nano Makefile , we use for start a project called mymain #__________________________________ # KLL adapt default makefile for small example project #This sample makefile has been setup for a project which contains the following files: # main.h, ap-main.c, ap-main.h, ap-gen.c, ap-gen.h Edit as necessary for your project #Change output_file_name.a below to your desired executible filename #Set all your object files (the object files of all the .c files in your project, e.g. # main.o my_sub_functions.o ) #OBJ = ap-main.o ap-gen.o OBJ = mymain.o #Set any dependant header files so that if they are edited they cause a complete re-compile (e.g. # main.h some_subfunctions.h some_definitions_file.h ), or leave blank #DEPS = main.h ap-main.h ap-gen.h DEPS = mymain.cpp #Any special libraries you are using in your project (e.g. # -lbcm2835 -lrt `pkg-config --libs gtk+-3.0` ), or leave blank #LIBS = -lbcm2835 -lrt LIBS = #Set any compiler flags you want to use (e.g. # -I/usr/include/somefolder `pkg-config --cflags gtk+-3.0` ), or leave blank #CFLAGS = -lrt CFLAGS = -std=c++0x #Set the compiler you are using ( gcc for C or g++ for C++ ) CC = g++ #Set the filename extensiton of your C files (e.g. .c or .cpp ) EXTENSION = .cpp #define a rule that applies to all files ending in the .o suffix, which says that the # .o file depends upon the .c version of the file and all the .h files included # in the DEPS macro. Compile each object file %.o: %$(EXTENSION) $(DEPS) $(CC) -c -o $@ $< $(CFLAGS) #Combine them into the output file #Set your desired exe output file name here mymain.a: $(OBJ) $(CC) -o $@ $^ $(CFLAGS) $(LIBS) #Cleanup .PHONY: clean clean: rm -f *.o *~ core *~ #__________________________________ pls note the rule for makefiles: there must be a TAB in front of commands: this applies here for the 3 lines $(CC) -c -o $@ $< $(CFLAGS) $(CC) -o $@ $^ $(CFLAGS) $(LIBS) rm -f *.o *~ core *~ only. without the TAB ( even with spaces ) it should not work! but here ( in HTML ) it not work even the TAB is in the blog source 2.2. we compile same source renamed to mymain.cpp by make 2.3. 
and start with ./mymain.a why in that example the extension .a was used for the resulting executable i not know, but i don't like it as in desktop / filemanager its associated as "AR archive" i think will use ".app" - 3 - try on GTK now starts the real thing: and i see good info here i could check with newly installed synaptic package manager and search libgtk that a libgtk-3-0 , libgtk-3-0-bin, libgtk-3-0-common already is installed, libgtk-3-0-dev and libgtk-3-0-doc not, so i follow that manual. sudo apt-get install libgtk-3-dev and with a example from here it runs, this one here not. i hope this log text and the examples to download help anybody, just a short remark on today learning: i actually do not know if i did a C program or a C++ and i wasted some hours to try to make the two texts ( program name and message text ) as string variables, like i did in the other examples. - 4 - there is a step 4, as option i want test a recommended IDE sudo apt-get install geany geany, geany-common suggested doc-base i open a project, copy in the above makefile and .cpp source and [Build] [Make] what i got is same as to type make in the terminal.. Ahmm, by the way, that IDE supports some languages.. 4c: Code Blocks update 23.8.2015, i see a other question how to use codeblocks i found it is a IDE for C++, pls see here and here i install with sudo apt-get install codeblocks 25MB comes up in Desktop menu/programming i opened a new project type console project, got a ready code "hello world" build and run it ( easy with [F9] ) for some next steps with coding C++ see this C Tutorial: Learn C in 20 Minutes. next i will do a GUI test too: code::blocks / File / new / project / Projects ( from templates ) GTK+project title: GTK_GUI_test /next / finish under Sources / main.c have a lots of code already build run [F9] : error with # include gtk/gtk.h fatal error file missing. because i used a new system for that test i need to run again sudo apt-get install libgtk-3-dev 61MB but same error / reboot / well, use the mc and search and find it /usr/include/gtk-3.0/gtk/gtk.h. under /project / build options / Compiler Settings / Other option / i see 'pkg-config gtk+-2.0 --cflags' what i think is part of the "make" file. So i change that option to 3.0 ok, now i get 45 error and 5 warnings ( should that be a progress? ) all about undefined reference, starting with the dialog line. i check again at the / project / build options / compiler settings / other options / OK but / project / build options / linker settings / other linker options / change 2.0 to 3.0 again when i tried to open a QT4 project directly run into a error, so what i need? i try: sudo apt-get install libqt4-dev qt4-dev-tools libqt4-declarative-fol derlistmodel libqt4-declarative-gestures libqt4-declarative-particles libqt4-dec larative-shaders qt4-qmlviewer firebird-dev libmysqlclient-dev libpq-dev libsqli te0-dev libsqlite3-dev unixodbc-dev qt4-doc-html 400MB ( 100MB doc alone ) and when i now start code::blocks QT4 project i can point to /usr/share/qt4 but still get error "can't locate library directory / this wizard can not continue" well yes, there is no library folder there?? what is the QT SDK?? /usr/include/qt4 also not work. see here but here sudo apt-get install qtcreator 86MB more also sudo apt-get install cmake kdelibs5-data subversion 27MB more, entry tutorial no, still looks like code:blocks with QT4 is a little bit more difficult. 4d: perl GTK2 when i read that some pros. 
argue how bad python and how good perl is in RPI forum, i wanted to take a look on that too. i have the problem that i work on a PLAY system what has seen many installs already, so its difficult to say, but it looks like perl is installed with GTK2 already, so i start here. ok, i was wrong, using a old SD card it "can't locate Gtk2.pm" so you need to install it: sudo apt-get install libgtk2-perl then it works 4e: java JDK 8 even i also never learned java, as i read about JDK 8 is installed in the new system? i check in terminal: java -version 1.8.0 ...?? javac -version 1.8.0 ...?? in case you have a older system / or anyway / do: sudo apt-get install oracle-java8-jdk a path info get here update-alternatives --list java shows me /usr/lib/jvm/jdk-8-oracle-arm-vfp-hflt/bin/java i start to dig about easy "Hello World" beginner examples. starting with the usual NOGUI version: nano java_NOGUI.java // The classic Hello World program! class HelloWorld_NOGUI { public static void main(String[] args) { //Write Hello World to the terminal window System.out.println("Hello World!"); } } compile it with javac java_NOGUI.java ls and execute that made class java HelloWorld_NOGUI ____________________________________________________ now we try GUI version, we will see if we need to install something... nano java_GUI.java import javax.swing.JOptionPane; class HelloWorld_GUI { public static void main(String[] args) { JOptionPane.showMessageDialog( null, "Hello World!" ); } } javac java_GUI.java ls and execute that new made class java HelloWorld_GUI ____________________________________________________ now as we have from a prior example ( C++ GTK ) already installed GEANY, i try to use it for a step further into java. i open a new project in GEANY: java180 and make a | file | new ( with template ) | main.java | file | save as | java_GUI.java 4f: Free Pascal / Lazarus but i see something else what got my interest: LAZARUS Link, Link, Link, Link, Link, Link, Link, Link, Link, first try the old version as the update procedure is above my head: sudo apt-get update sudo apt-get upgrade sudo apt-get install fpc here already BAD LUCK i think i am lucky the install was refused. i will try again, on a new or older system. sudo apt-get install lazarus not used. update 7/2015 looks like LAZARUS still got some friends out there on a new setup i try ( on a RPI1B ): ( for RPI2 see here ) to install free pascal and lazarus: sudo nano /etc/apt/sources.list add line deb wheezy-wsf main sudo apt-get install fpc ( 170MB more / 15min) sudo apt-get install lazarus (512MB more / 18min) and my Hello World with lazuras from here when i open lazarus in Desktop i get 5 windows. i do project / new project / application / OK and all 5 window freeze ... must reboot i had many problems to edit the label and window caption and change position and size by number input, ( resizing by graphic / mouse worked fine ) ( working headless problem?) for understanding, there was actually no coding required, i selected and positioned the label object, typed a label text /caption "hello world", typed the window caption (=filename), saved project as hello_world the executable is 15MB! what a mess, i deleted it from the zip, so also icon.... will not work more info possibly here 5.a RUBY in a terminal i type: ruby -v and see ruby 1.9.3.. so its installed already rails -v error or apt-cache search ruby | grep rails show packages i try: sudo apt-get install rails and now rails -v rails 2.3.14 i do according this, not sure i need it. 
sudo gem install execjs give execjs 2.5.2 and documentation sudo apt-get install nodejs actually a java runtime is available? and the test described there i also try: rails new peopledb get long list of creates cd peopledb that not work, but there is a new subdir "new" now cd new rails g scaffold people name:string group:string again get that create list?? rake db:migrate error aborted can not load such file sqlite3 rails s get the list again Grrrr! use browser well: that could not work now anyhow today not so lucky from rpi mama no info at all what it is and how to use it?? sudo apt-get install rubygems sudo gem install jekyll after some search i found: Jekyll is a simple, blog aware, static site generator. RPI1 hangs at cpu 100% and i wait after ? half hour.. error installing jekyll failed to build gem native extension so again not lucky to day but do not think i give up so easy: follow this here and this time go the long way stop here at step 4 , problem with keyring.... source ~/.rvm/scripts/rvm not try another one sudo gem install rails --include-dependencies error unable to resolve deendencies and at test again complain sqlite3?? after get key for rails sudo gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 and do sudo su follow this: curl -L get.rvm.io | bash -s stable --rails seems to run and get rvm 1.26.11 ruby 2.2.1 extract config compile install executable... ruby-gems 2.4.8 error fechting gems this was after about 2 hours, start again possibly need rvm reinstall ruby-2.2.1 but looks like he goes on with fetching some gem files building native extensions .... installing lots of documentation and i notice i not know how i call up documentation in linux? is there a general help system, or is it only the command line thing "command --help" and "man command" ? one of the many black holes in my not linux brain. but, as the online documentation is not that easy to access by RPI's slow browser, ( i only use the PC for this ) it would be good to know. RPI already hangs half hour at a rails 4.2.3 documentation, next morning ( min 4 hours more ) OK source /usr/local/rvm/scripts/rvm ruby -v 2.2.1 rails -v 4.2.3 gem install execjs ( 1 gem installed ) apt-get install nodejs ( ok ) exit ( of sudo su ) now do the test: rails new tutorials ( i deleted the old one first, test with sudo su and without but i get again a subdir new! delete, after close terminal ( putty ) and open again, ( no sudo su ) rails new tutorials i see now a: run bundle install ( 5 min ) bundle complete, bin/rake, bin/rails cd tutorials rails g scaffold Steps name:string form:string get create invoke list rake db:migrate create Steps... rails server (info run server -h shows options / stop with CTRL C... HTTP server start pid=xxxx port=3000/ there it hangs ( without prompt )) i start desktop, start browser ( i did not test from outside http:192.168.1.101:3000 ) close browser, [ctrl][c] in terminal long long way,but today is a better day! just to verify i try that thing from Raspberry Pi website again sudo gem install jekyll and again look at the dead black screen 25min later, same error for ruby GUI i want test Shoes, take a glimpse here but the installation? he use a RPI2 on ruby 1.9.3 the download page give me a 13.6MB shoes-3.2.23-gtk2-armhf.install , no idea what to do with it! for building it Link here they say : On Linux, you'll download a file ending with .run. Double-click this file and Shoes will start up. 
(You can also run this file from a prompt as if it was a shell script. In fact, it is a shell script!) so i make a mkdir /home/pi/Shoes/ and copy that install file from PC, just a try! in a desktop try double click but only the editor opens. so use a terminal cd Shoes ls chmod +x shoes-3.2.23-gtk2-armhf.install ./shoes-3.2.23-gtk2-armhf.install uncompressing says if Shoes not in menu must logout login ( i do a reboot anyhow) i make a nano /home/pi/GUI_test/ruby_shoes/shoes_GUI_test.rb start the shoes program / open APP / find my file / OK you might think that after last night i value that SD card with the ruby n rails installation well the shoes thing was fun, the ruby rails version thing not, i zip my GUI_test pack and upload to here so its available to you. i format SD and burn a raspbian ( still 2015.05.05 is the "newest" ) the SD was anyhow nearly full?? i do now always ( from PC) the cmdline.txt MOD [blank]ip=192.168.1.101 and boot headless. and why i write about that here? i try ruby -v 1.9.3 i copy the Shoes install again ... download the GUI_test again Ha! 5a: update for RUBY Shoes on 28.03.2018 restart on a updated RASPBIAN desktop on RPI3B board ( using putty / VNC remote from win7PC ) and find Shoes i do following: LOG: pi@RPI3:~/projects/ruby $ ruby -v ruby 2.3.3p222 (2016-11-21) [arm-linux-gnueabihf] pi@RPI3:~/projects/ruby $ rails -v bash: rails: command not found pi@RPI3:~/projects/ruby $ pi@RPI3:~/projects/ruby $ wget --2018-03-28 07:50:49-- Resolving shoes.mvmanila.com (shoes.mvmanila.com)... 208.113.218.222 Connecting to shoes.mvmanila.com (shoes.mvmanila.com)|208.113.218.222|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 12512356 (12M) [application/x-makeself] Saving to: ‘shoes-3.3.6-gtk3-armhf.install’ shoes-3.3.6-gtk3-armhf.i 100%[==================================>] 11.93M 2.99MB/s in 7.3s 2018-03-28 07:50:58 (1.63 MB/s) - ‘shoes-3.3.6-gtk3-armhf.install’ saved [12512356/12512356] pi@RPI3:~/projects/ruby $ ls shoes-3.3.6-gtk3-armhf.install pi@RPI3:~/projects/ruby $ chmod +x shoes-3.3.6-gtk3-armhf.install pi@RPI3:~/projects/ruby $ ./shoes-3.3.6-gtk3-armhf.install Verifying archive integrity... All good. Uncompressing shoes 100% Shoes has been copied to /home/pi/.shoes/walkabout. and menus created If you don't see Shoes in the menu, logout and login pi@RPI3:~/projects/ruby $ ____________________________________________________________________ # find desktop [menu][Programming]shoes walkabout # find desktop [menu][Education]shoes walkabout # find desktop [menu][Education]uninstall shoes walkabout ____________________________________________________________________ pi@RPI3:~/projects/ruby $ nano shoes_GUI_test.rb pi@RPI3:~/projects/ruby $ # if you not want to start it via desktop menu can use CLI pi@RPI3:~/projects/ruby $ alias cshoes="/home/pi/.shoes/walkabout/shoes" pi@RPI3:~/projects/ruby $ cshoes -w ~/projects/ruby/shoes_GUI_test.rb # use shoes walkabout [maintain][copy samples][select directory for a copy to] # and get lots of good stuff 5.b ICON a new friend from a german RPI forum thinks that a GUI programming introduction ( or overview type tutorial ) is worthless ( and i know my BLOG is even far from a tutorial ), because for each package and language need deep tutorials, and for searching about RPI languages he give me the LMGTFY, means everyone has to search and end up at the old dead links like it always happens to me. 
but, as he writes a 500 pages tutorial about a language called ICON he recommends that i included it here and link to his manuals. at first i had some bad time exactly about googling, you try and find ICON in google, very funny. what i have until now: german international forum intro, first 6 pages of that 500 manual, update from a forum thread and the official ICON homepage and wiki books and the Introduction. for a first start i fly over the Language reference and from there directly to the GUI book page 83. on the first look it seems that its a language what is very forgiving, like can use ";" or not, can write WAttrib("fg=white","bg=black") or Fg("white") Bg("black") but i also see array pointer start with 1, not with 0, [0] is the last member of a array, even -1, -2, is possible as last but one... that has its logic but if you are used to think in [0] .. length(array) UMPF. and my arduino #define here is $define ... now i try to setup it up on my old RPI1B ( how about RPI2? ) from the above manual i find in Tab. 1-1 ( i try 2 different pdf viewer ( on my win7 PC ) and both give me a hard time about COPY PASTE ( to putty ) ? i need to type that? ) mkdir /home/pi/Downloads cd /home/pi/Downloads wget tar -x -v -f icon-v951src.tgz mkdir /home/pi/icon9_51 mv -v -r iconv951src/* /home/pi/icon9_51 sudo apt-get install buildessential libx11dev libxtdev libxaw7dev cd /home/pi/icon9_51 dir ( DOS lover? ) i try: sudo apt-get update sudo apt-get install build-essential libx11-dev libxt-dev libxaw7-dev (21 MB more) wget ( 3MB 13sec) tar -xvf icon-v951src.tgz get a subdir icon-v951src rm icon-v951src.tgz delete the zip agian mv icon-v951src icon9_51 rename that dir cd /home/pi/icon9_51/config/ cp -v -r linux RaspberryPi cd RaspberryPi nano define.h and add line after UNIX #define RaspberryPi 1 ( no need ) nano Makedefs ( no need ) nano status cd /home/pi/icon9_51/ make X-Configure name=RaspberryPi make X-Configure=RaspberryPi ( 9 min ) nice line by line progress report make status=RaspberryPi make Icont make Samples make Test make Benchmark cd sudo reboot error in LPATH,IPATH,FPATH again about too many " " from the copy. ok again as there is no IDE included ( can use Geany ) there are no new desktop menu icons..., its just a compiler. first test direct: cd /home/pi/icon9_51/bin nano hello.icn procedure main() write(Hello World) end icont hello compile ok iconx hello run ok ( use also just hello ) nano hello_window.icn link graphics procedure main() WOpen("label=hello_window.icn") WWrite("Hello World") WDelay(10000) WClose() end ( that's missing in the manual ) icont hello_window compile ok now for the start we need a desktop window ( i use headless RDP ) first i # all the echo from the ICON path thing in the above /home/pi/.bashrc then i move the hello* out of /bin/ to my home/pi/GUI_test/icon_test/ and icont, iconx runs here well too. thanks @andreas p.s. put the SD in a RPI2 and icont and iconx run ok , but that not say that the build would run same, but not tested. 6: Python use gtk together with PYGAME, Tkinter somehow we get used to that a file open dialog can look different on every computer / operating system / program... but its not only a question of HMI, easy learning, also we don't want to take the developers freedom to build something new and better, but operation AND coding operation should be easy! but lets face it, modular thinking and object oriented programming was last millennium , now we do APPs. 
i had the question for a easy file open dialog when using python pygame ( due to the fact that my MIDI server from Arduino USB and the keyboard already works from pygame, i think i am stuck with pygame / i never say i liked it ) now for Tkinter ( good tip from Forum ) AND for gtk there is a ready dialog, i tested both and see the Tkinter one looks like the python IDLE file open, the gtk looks like the PCManFM file explorer. For pygame there are also projects ( also tip from forum ) working on GUI things, here but that might be a longer way ( how to install.. ) so i started with the existing libs, the gtk dialog works from gtk window, but the gtk call from inside pygame i got opened ( and file selected) but could not close it. some command / or program structure missing. frustrated i was thinking of a way, where i not have to combine 2 GUI versions and came up with the idea to do 2 separate programs, what can communicate by a RAM DISK File. so any program ( not only python pygame or python Tkinter ) can spawn ( subprocess ) "python fileopenwindow.py" a stand alone program, and after finished ( by window closed, cancel button, or any file select double click or open button ) the info can be fetched from the RAM DISK file. and for that i made a example for pygame and tkinter with a button [search] what does the spawn ( gtk file dialog pop up window ) and writes the result ( !open / you should never see because with subprocess main waits/, !error, !close, !cancel, /path/filename ). later a user will not know the difference / what GUI was used / and as we have a manual / mouse interaction there can not be any time delay questions for using a intermediate file. as the result also is available in the terminal with sysexit(filename) the file thing is not really needed, but catching it from that 50 lines of warnings is bad stuff. for me the add Tkinter example was only a hour, just to show you are free to choose the ready dialog. Actually as for the first GUI test i used also C# mono ( but that has gtk lib ) and TCL ( possibly use Tk ) more testing is possible. 7: GUI design tools i started with saying, raspberry pi is a programming learning tool and it is ready to start right away using python ( Tkinter, GTK, PyGame ) for learning GUI programming. there is always a next step: from working CLI nano making program code using desktop texteditor leafpad or already a IDE GEANY what is good regarding "projects" and is a good language editor but for making GUI programs there is a "upper" level, a GUI design tool. C _ GLADE _ GEANY example what i hear at the forum the logical step would be to install GLADE see here and here its all about windows, actions, widgets and their attributes, for the GTK ONLY. sudo apt-get install glade i try my window with a label "Hello World" and saved as *.glade ??? now what? here i see a easy "second" step. 
and i copy his example code in a GEANY project / there i store the .glade too and a copy of Makefile changed to the filename / switches
but a compile give a error about #include hmm, in Cpp it worked ( on this SD / system ), but well, its my first C program ever
sudo apt-get install libgtk-3-dev NO, that was installed already
i did not get the (Makefile) make running, but at CLI it worked:
gcc -Wall -g -o C_GLADE.app C_GLADE.c $(pkg-config --cflags gtk+-3.0) $(pkg-config --libs gtk+-3.0)
start with ./C_GLADE.app
more see desktop and menu system

10: get all the code examples
code from the download area
rev 0.1 incl 2a python Tkinter, pygame, matplotlib, gtk
rev 0.2 incl 2a test PY GTK and 3 TCL TK
rev 0.3 incl 4a C# GTK example
rev 0.4 incl 6 GTK fileopenwindow.py and pygame and Tkinter usage example
rev 0.5 incl 4b C++ GTK example
rev 0.6 incl 2b python WX example
rev 0.7 incl 4c perl GTK2 example
rev 0.8 incl 4d java example
rev 0.9 incl 7 C, GLADE, GEANY example
rev 1.0 update and check from new system ( download or mget )
rev 1.2 update to a tar.xz pack what is more easy on RPI and keeps all file settings. also recheck all desktop files regarding subdir.
rev 1.3 incl Lazarus, free pascal test ( no exe ) ( and the raspi-config icon from forum and the sulxterminal in /KLL)
rev 1.4 now the service_GUI is included in the same pack
rev 1.5 menu tree
rev 1.6 code blocks
rev 1.7 QT5 / pyQt5 / pyQT5_Quick
the *.desktop files i copy again to subdir /home/pi/GUI_test/KLL/, what could be moved / or symlinked to /home/pi/Desktop/...
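one more note to section 6 ( the file open dialog by a separate process ): the real fileopenwindow.py with the gtk dialog is in the download pack, so here only a small sketch of the idea, python3 style and with the Tkinter dialog instead / the file names and the RAM disk path /run/shm/pickresult.txt are only examples of mine, not the names used in the pack:

# pick_file.py -- stand-alone picker: show a file dialog, write the result to a RAM disk file
import tkinter as tk
from tkinter import filedialog

RESULT = "/run/shm/pickresult.txt"      # RAM disk file ( example path )

root = tk.Tk()
root.withdraw()                         # no main window, we only want the dialog
name = filedialog.askopenfilename()     # returns "" when the user cancels
with open(RESULT, "w") as f:
    f.write(name if name else "!cancel")

# caller.py -- any GUI program ( pygame, Tkinter, ... ) can spawn the picker and read the result
import subprocess

subprocess.run(["python3", "pick_file.py"])    # blocks until the dialog window is closed
with open("/run/shm/pickresult.txt") as f:
    picked = f.read().strip()                  # a path, or "!cancel"
print("user selected:", picked)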
http://kll.engineering-news.org/kllfusion01/articles.php?article_id=82
CC-MAIN-2019-35
refinedweb
6,897
70.33
System Administration Commands - Part 1 System Administration Commands - Part 2 System Administration Commands - Part 3 - configures ZFS file systems zfs [-?] zfs help subcommand | help property property |permission [-r] property=value filesystem|volume|snapshot ... zfs get [-r|-d depth][-Hp][-o all | field[,...]] [-s source[,...]] all | property[,...] filesystem|volume|snapshot ... zfs inherit [-rS] property filesystem|volume|snapshot ... zfs upgrade zfs upgrade [-v] zfs upgrade [-r] [-V version] -a | filesystem zfs userspace [-niHp] [-o field[,...]] [-sS field] ... [-t type [,...]] filesystem|snapshot zfs groupspace [-hniHp] [-o field[,...]] [-sS field] ... [-t type [,...]] filesystem|snapshot zfs mount zfs mount [-vO] [-o options] -a | filesystem zfs unmount [-f] -a | filesystem|mountpoint zfs share -a | filesystem zfs unshare -a filesystem|mountpoint zfs send [-Rbpv] [-[iI] snapshot] snapshot zfs send -r [-bcpv] [-[i] snapshot] snapshot zfs receive [-vnFu] [[-o property=value] | [-x property]] ... filesystem|volume|snapshot zfs receive [-vnFu] [[-o property=value] | [-x property]] ... [... zfs diff [-FHte] [-o field] ... snapshot snapshot|filesystem zfs diff -E [-FHt] [-o field] ... snapshot|filesystem can be mounted temporarily at a location other than the file systems's persistent mount point by specifying the -o mountpoint=value option to the zfs mount command. This is only permitted for file systems with non-legacy mount points. can only be mounted in the global zone by use of a temporary mountpoint property (see “Temporary Mount Point Properties”). The global administrator can forcibly clear the zoned property, though this should be done with extreme care. The global administrator should verify that all the mount points are acceptable before clearing the property.. This property is on if the snapshot has been marked for deferred destroy by using the zfs destroy -d command. Otherwise, the property is off. refreservation (through usedbyrefreservation) and the reservations of any descendent datasets (through usedbychildren).. Space accounted for by this property represents potential consumption by future writes, reserved in advance to prevent write allocation failures in this dataset. This can include unwritten data, space currently shared with snapshots, and compression savings for volumes (which may be lost when replaced with less compressible data). When allocations for later writes increase usedbydataset or usedbysnapshots, usedbyrefreservation will decrease accordingly.) This property is set to the number of user holds on this snapshot. User holds are set by using the zfs hold command. For volumes, specifies the block size of the volume. The blocksize cannot be changed once the volume has been written, so it should be set at volume creation time. The default blocksize for volumes is 8 KBs. Any power of 2 from 512 bytes to 1 MB is valid. This property can also be referred to by its shortened column name, volblock. The following native properties can be used to change the behavior of a ZFS dataset. Controls how an ACL is modified during chmod(2). A file system with an aclmode property of discard (the default) deletes all ACL entries that do not represent the mode of the file. An aclmode property of mask reduces user or group permissions. The permissions are reduced so (without an explict ACL set [by means of chmod(1)] between the mode changes). 
A file system with an aclmode property of passthrough indicates that no changes will be made to the ACL other than generating the necessary ACL entries to represent the new mode of the file or directory. all4, but this may change in future releases). The value off disables integrity checking on user data. Disabling checksums is NOT a recommended practice. Changing this property affects only newly-written data.. Changing this property affects only newly-written data.. Controls whether device nodes can be opened on this file system. The default value is on. Controls whether processes can be executed from within this file system. The default value is on.. Provides a hint to ZFS about handling of synchronous requests in this dataset. If logbias is set to latency (the default), ZFS uses the pool's log devices (if configured) to handle the requests at low latency. If logbias is set to throughput, ZFS does not use the configured pool log devices. Instead, ZFS optimizes synchronous operations for global pool throughput and efficient use of resources. default recordsize is 128 KB. The size specified must be a power of two greater than or equal to 512 and less than or equal to 1 MB. Changing the file system's recordsize affects only files created afterward; existing files and received data usedbydataset space is below this value, the dataset is treated as if it were taking up the amount of space specified by refreservation. The usedbyrefreservation figure represents this extra space, adding to the total used space charged to the dataset, and in turn consuming from the parent datasets' usage, quotas, and reservations. This protects the dataset from overcommitment of pool resources, by ensuring that space for future writes is reserved in advance. Space shared with snapshots can later be replaced with new data, and the snapshot represents a committment to keep both copies. If refreservation is set, usedbyrefreservation must be increased to the full size of refreservation when taking a new snapshot, accounting for this commitment. If there is insufficient space available to the dataset for this increase, snapshot creation will be denied.. Indicates whether the file system restricts users from giving away their files by means of chown(1) or the chown(2) system call. The default is to restrict chown. When rstchown is off then chown will act as if the user has the PRIV_FILE_CHOWN_SELF privilege.. Controls whether the file system is shared by using the Solaris SMB service, and what options are to be used. A file system with the sharesmb property set to off is managed through traditional tools such as share_nfs(1M). Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands.. Note - This SMB-related property is not fully functional in the Oracle Solaris 10 release because the Oracle Solaris SMB server is not supported in the Oracle Solaris 10 release. Controls whether the dataset is shared by using the NFS protocol. If the property is set to on, the zfs share command is invoked with no options. You can specify a comma-separated list of options as the contents of this property. See the EXAMPLES section. You can also share a ZFS file system by using the zfs share command. Setting the sharenfs property or using the zfs share command is preferred over using the legacy share(1M). Determines the degree to which file system transactions are synchronized. This property can be set when a dataset is created, or dynamically, and will take effect immediately. 
sync can have one of the following settings: The default option. Synchronous file system transactions are written to the intent log and then all devices written are flushed to ensure the data is stable (that is, not cached by device controllers). Every file system transaction would be written and flushed to stable storage. This setting should be used only where extreme caution is required, as there is a significant performance penalty. Synchronous requests are disabled. File system transactions commit to stable storage only on the next DMU transaction group commit, which might be after many seconds. This setting gives the highest performance. However, it is very dangerous as ZFS would be ignoring the synchronous transaction demands of applications such as databases or NFS. Furthermore, when this setting is in effect for the currently active root or /var filesystem, out-of-spec behavior, application data loss, and increased vulnerability to replay attacks can result. Administrators should only use this option only when these risks are understood. on-disk version of this file system, which is independent of the pool version. This property can only be set to later supported versions. See the zfs upgrade command. Specifies the logical size of the volume. By default, creating a volume establishes a refreservation that is a somewhat larger than the actual logical volume size, to account for ZFS metadata overhead. Any changes to volsize are reflected in an equivalent change to the refreservation. The volsize can only be set to a multiple of volblocksize, and cannot be zero. The refreservation is set on the volume ZFS Administration Guide. Note - This SMB-related property is not fully functional in the Oracle Solaris 10 release because the Oracle Solaris SMB server is not supported in the Oracle Solaris 10 release.. Note - This SMB-related property is not fully functional in the Oracle Solaris 10 release because the Oracle Solaris SMB server is not supported in the Oracle Solaris 10. Note - This SMB-related property is not fully functional in the Oracle Solaris 10 release because the Oracle Solaris SMB server is not supported in the Oracle Solaris 10 release. When a ZFS file system is mounted, either through the legacy mount(1M) command for legacy mounts or the zfs mount command, rstchown rstchown/norstchown. For properties other than mountpoint, if the properties are changed while the dataset is mounted, the new setting overrides any temporary settings. The mountpoint property cannot be changed while a temporary mountpoint property is in effect (that is, while the dataset is mounted at a temporary location).. The default swap device varies based on the amount of the system's physical memory Oracle Solaris ZFS Administration Guide All subcommands that modify state are logged persistently to the pool in their original form. Displays a help message. Displays zfs subcommand usage information. You can display help for a specific subcommand, property, or delegated permission. If you display help for a specific subcommand or property, the subcommand syntax or property value is displayed. Using zfs help without any arguments displays a complete list of zfs subcommands. all the non-existing parent datasets. Datasets created in this manner are automatically mounted according to the mountpoint property inherited from their parent. Any property specified on the command line using the -o option is ignored. If the target filesystem already exists, the operation completes successfully. or clones). 
Recursively destroy all children. Recursively destroy all dependents, including cloned file systems outside the target hierarchy.. The given snapshot is destroyed immediately if and only if the zfs destroy command without the -d option would have destroyed it. Such immediate destruction would occur, for example, if the snapshot had no clones and the user-initiated reference count were zero. If the snapshot does not qualify for immediate destruction, it is marked for deferred deletion. In this state, it exists as a usable, visible snapshot until both of the preconditions listed above are met, at which point it is destroyed. Defer snapshot deletion. Destroy (or mark for deferred deletion) all snapshots with this name in descendent file systems. Recursively destroy all dependents. Creates a snapshot with the given name. All previous modifications by successful system calls to the file system are part of the snapshot. zfs snap can be used as an alias for zfs. The -rR options do not recursively destroy the child snapshots of a recursive snapshot. Only the top-level recursive snapshot is destroyed by either of these options. To completely roll back a recursive snapshot, you must rollback the individual child snapshots. file system or volume already exists, the operation completes successfully. on) . The following fields are displayed, name,used,available,referenced,mountpoint. Used for scripting mode. Do not print headers and separate fields by a single tab instead of arbitrary white space. specifying the syntax: -o name,avail,used,usedsnap,usedds,usedrefreserv,\ usedchild -t filesystem,volume A property. The following aliases can be used in place of the type specifiers: fs (filesystem), snap (snapshot), and vol . Recursively apply the effective value of the setting throughout the subtree of child datasets. The effective value may be set or inherited, depending on the property.. Set of fields to display. One or more of: name,property,value,received,source Present multiple fields as a comma-separated list. The default value is: name,property,value,source The keyword all specifies all sources. A comma-separated list of sources to display. Those properties coming from a source other than those in this list are ignored. Each source must be one of the following: local,default,inherited,temporary,received. Revert to the received property value, if any. If the property does not have a received value, the behavior of zfs inherit -S is the same as zfs inherit without -S. If the property does have a received value, zfs inherit masks the received value with the inherited value until zfs inherit -S reverts to the received value. Identifies a file system version, which determines available file system features in the currently running software release. You can continue to use older file system versions, but some features might not be available. A file system can be upgraded by using the zfs upgrade -a command. You will not be able to access a file system of a later version on a system that runs an earlier software version. Displays ZFS file system versions that are supported by the current software. The current ZFS file system versions and all previously supported versions are displayed, along with an explanation of the features provided with each version. Upgrades file systems to a new, on-disk version. Upgrading a file system means that it will no longer be accessible on a system running an older software version. 
A zfs send stream that is generated from a new file system snapshot cannot be accessed on a system that runs an older software version.. Displays syntax help message and exit. Displays numeric ID instead of user/group name. Does not print headers, use tab-delimited output. Uses exact (parseable) numeric output. Displays only the specified fields from the following set, type,name,used,quota. The default is to display all fields. Sorts output by the specified field. The s and S flags may be specified multiple times to sort first by one field, then by another. The default is -s type -s name. Sorts by this field in reverse order. See -s. Displays only the specified types from the following set, all,posixuser,smbuser,posixgroup,smbgroup. The default is -t posixuser,smbuser The default can be changed to include group types. Translates, ZFS file systems that have the sharenfs or sharesmb property set. Sharing a file system with the NFS or SMB protocol means that the file system data is available over the network. ZFS file systems that have the sharenfs or sharesmb property set are automatically shared when a system is booted. Shares all ZFS file systems that have the sharenfs or sharesmb property set and according to the share property values. Shares the specified file system that has the sharenfs or sharesmb property set and according to the share property values. Unshares all ZFS file systems that have the sharenfs or sharesmb property set... Creates a self-contained stream. A self-contained stream is one that is not dependent on any datasets not included in the stream package. Valid only with the -r option. If used with the -i option, the stream is dependent on the snapshot specified as an argument to the that option. See the “ZFS Streams” section of the ZFS Administration Guide for details. Generates). Generates a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source snapshot may be specified as with the -i option. Generates a replication stream package that replicates the specified file system,. Generates a recursive stream package. A recursive stream package contains a series of full and/or incremental streams. When received, all properties and descendent file systems are preserved. Unlike with the replication stream packages generated with the -R flag, intermediate snapshots are not preserved unless the intermediate snapshot is the origin of a clone that is included in the stream. If the -i option is used in conjunction with the -r option, an incremental recursive stream is generated. The current values of properties as well as current snapshot and file system names are set when the stream is received. If the -F option is specified when this stream is received, snapshots and file systems that do not exist on the sending side are destroyed. The -I option cannot be used in conjunction with the -r option. When combined with the -c option, a self-contained recursive stream package is created. If both the -c and -i options are used, file systems and volumes that do not have the snapshot specified with the -i option are sent as self-contained streams. See the “ZFS Streams” section of the ZFS Administration Guide to understand how a recursive stream package differs from a replication stream package. Sends properties. Displays verbose information about the stream package generated. The format of the stream is committed. You. 
If -o property=value or -x property is specified, it applies to the effective value of the property throughout the entire subtree of replicated datasets. Effective property values may be set or inherited, depending on the property and whether the dataset is the topmost in the replicated subtree. Received properties are retained in spite of being overridden and may be restored with zfs inherit -rS or zfs send -Rb.. Uses all but the first element of the sent snapshot path (all but the pool name) to determine the name of the new snapshot as described in the paragraph above. Uses the last element of the sent snapshot path to determine the name of the new snapshot as described in the paragraph above. Forces a rollback of the file system to the most recent snapshot before performing the receive operation. If receiving an incremental replication stream (for example, one generated by zfs send -R -[iI]), destroy snapshots and file systems that do not exist on the sending side. Do not actually receive the stream. This can be useful in conjunction with the -v option to verify the name the receive operation would use. Sets the specified property as if the command zfs set property=value is invoked at the same time the received dataset is created from the non-incremental send stream or updated from the incremental send stream. Any editable ZFS property can also be set at receive time. Set-once properties bound to the received data, such as normalization and casesensitivity, cannot be set at receive time even when the datasets are newly created by zfs receive. Multiple -o options can be specified. An error results if the same property is specified in multiple -o or -x options. File system that is associated with the received stream is not mounted. Prints verbose information about the stream and the time required to perform the receive operation. can be set or inherited, depending on the property. In the case of an incremental update, -x leaves any existing local setting or explicit inheritance unchanged (since the received property is already overridden). All -o restrictions apply equally to -x. Delegates ZFS administration permissions for the specified file system to non-privileged users. devices exec logbias mountpoint nbmand normalization primarycache quota readonly recordsize refquota refreservation reservation rstchown secondarycache setuid shareiscsi sharenfs sharesmb snapdir sync utf8only version volblocksize volsize vscan xattr zoned. Adds a single reference, named with the tag argument, to the specified snapshot or snapshots. Each snapshot has its own tag namespace, and tags must be unique within that space. If a hold exists on a snapshot, attempts to destroy that snapshot by using the zfs destroy command return EBUSY. Specifies that a hold with the given tag is applied recursively to the snapshots of all descendent file systems. Lists all existing user references for the given snapshot or snapshots. Lists the holds that are set on the named descendent snapshots, in addition to listing the holds on the named snapshot. Removes a single reference, named with the tag argument, from the specified snapshot or snapshots. The tag must already exist for each snapshot. If a hold exists on a snapshot, attempts to destroy that snapshot by using the zfs destroy command return EBUSY. Recursively releases a hold with the given tag on the snapshots of all descendent file systems. Gives a high-level description of the differences between a snapshot and a descendent dataset. 
The descendent can be either a snapshot of the dataset or the current dataset. If a single snapshot is specified, then differences between that snapshot and the current dataset are given. For each file that has undergone a change between the original snapshot and the descendent, the type of change is described along with the name of the file. In the case of a rename, both the old and new names are shown. Whitespace characters, backslash characters, and other non-printable or non-7-bit ASCII characters found in file names are displayed as a backslash character followed by the three-digit octal representation of the byte value. If the -t option is specified, the first column of output from the command is the file's st_ctim value. For deleted files, this is the final st_ctim in the earlier snapshot. The type of change follows any timestamp displayed, and is described with a single character: Indicates the file was added in the later dataset. Indicates the file was removed in the later dataset. Indicates the file was modified in the later dataset. Indicates the file was renamed in the later dataset. If the -F option is specified, the next column of output is a single character describing the type of the file. The mappings are: regular file directory block device door FIFO symbolic link event portal socket If the modification involved a change in the link count of a non-directory file, the change is expressed as a delta within parentheses on the modification line. If the file was renamed, the old name is separated from the new with the string “->”. If the -H option is selected, easier-to-parse output is produced. Fields are separated by a single tab, and no arrow string (->) is placed between the old and new names of a rename. No guarantees are made on the spacing between fields of non -H output. If the -e option is selected, then all files added or modified between the two snapshots are enumerated and no deleted files are displayed. The change type always reports as + regardless of the type of modification. If the -E option is selected, then differences are given as if from an empty snapshot to the specified snapshot or dataset. If the -o field option is selected, then only selected fields are displayed. Each line starts with the standard fields requested by the -F and -t options, followed by the fields requested in successive -o options. As with the -H option, all fields are separated by a single tab. The allowable field names include: The number printed by ls -i for the file The number printed by ls -i for the file The file size as displayed by ls -s The number of links to the file The change in the number of links to the file The name of the file The name of the file before the rename, or — (hyphen) if the file was not renamed The owner name of the file as displayed by ls The group name of the file as displayed by ls The timestamp when the file's metadata was last modified The timestamp when the file was last modified The timestamp when a file was last accessed The timestamp when a file was created Unless they already have the {PRIV_SYS_CONFIG} or {PRIV_SYS_MOUNT} privilege, users must be granted the diff permission with zfs allow to use this subcommand. on. 30 GB for pool/home/bob. # zfs set quota=30G pool/home/bob Example 7 Listing ZFS Properties The following command lists all properties for pool/home/bob. 
# zfs get all pool/home/bob NAME PROPERTY VALUE SOURCE pool/home/bob type filesystem - pool/home/bob creation Wed Jun 6 15:16 2012 - pool/home/bob used 31K - pool/home/bob available 30.0G - pool/home/bob referenced 31K - pool/home/bob compressratio 1.00x - pool/home/bob mounted yes - pool/home/bob quota 30 discard default pool/home/bob aclinherit restricted default pool/home/bob canmount on default pool/home/bob shareiscsi off default pool/home/bob xattr on default pool/home/bob copies 1 default pool/home/bob version 5 - pool/home/bob utf8only off - pool/home/bob normalization none - pool/home/bob casesensitivity mixed - 31K - pool/home/bob usedbychildren 0 - pool/home/bob usedbyrefreservation 0 - pool/home/bob logbias latency default pool/home/bob sync standard default pool/home/bob rekeydate - default pool/home/bob rstchown 30 example shows how to share a file system with nosuid access. In this case, single quotes are not needed for the share property value because a single share option is included. # zfs set sharenfs=nosuid tank/shares The following commands show how to set sharenfs property options to enable rw access for a set of IP addresses and to enable root access for system neo on the tank/home file system. In this case, single quotes are required for multiple options. # zfs set sharenfs='[email protected]/16,root=neo' tank/home If you are using DNS for host name resolution, specify the fully qualified hostname. Example 17 18 20 Delegating Property Permissions on a ZFS Dataset The following example shows how 21 Example 22 Displaying ZFS Snapshot Differences The following command shows the output from a zfs diff command with the -F and -t options. # zfs diff -Ft myfiles@snap1 1269962501.206726811 M / /myfiles/ 1269962444.207369955 M F /myfiles/link_to_me (+1) 1269962499.207519034 R /myfiles/rename_me -> /myfiles/renamed 1269962431.813566720 - F /myfiles/delete_me 1269962518.666905544 + F /myfiles/new_file 1269962501.393099817 + | /myfiles/new_pipe The following exit values are returned: Successful completion. An error occurred. Invalid command line options were specified. See attributes(5) for descriptions of the following attributes: chown(1), pktool(1), ssh(1), mount(1M), share(1M), unshare(1M), zonecfg(1M), zpool(1M), chmod(2), chown Oracle Solaris ZFS Administration Guide. A file described as modified by the diff subcommand might have been modified in multiple ways. Any action that causes a change in the st_ctim (see stat(2)) is a basis for reporting a modification.
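As a usage illustration outside the manual text itself: the -H option described above (scripting mode, no headers, tab-separated fields) makes zfs output straightforward to consume from other tools. The sketch below is not from the manual; it assumes Python 3, a zfs command on the PATH, and sufficient privileges, and the column list is only an example.

# Run `zfs list -H -o name,used,available,mountpoint` and turn each
# tab-separated line into a dictionary keyed by the requested columns.
import subprocess

def zfs_list(columns=("name", "used", "available", "mountpoint")):
    out = subprocess.check_output(
        ["zfs", "list", "-H", "-o", ",".join(columns)], text=True)
    return [dict(zip(columns, line.split("\t")))
            for line in out.splitlines() if line.strip()]

for ds in zfs_list():
    print(ds["name"], "uses", ds["used"], "mounted at", ds["mountpoint"])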
http://docs.oracle.com/cd/E26505_01/html/816-5166/zfs-1m.html
CC-MAIN-2014-42
refinedweb
4,397
55.34
schtasks to run script at set time every weekday
By kawliga751, in AutoIt General Help and Support

Recommended Posts

Similar Content

- By ces1a
;This script will calculate the Nth weekday of any month. Just replace the numbers for $Year, $Month, $Week, and $Weekday with numbers of your choice,
;Some lines are not really needed but are there to allow testing and proofing. It is based on Excel formula.
#include <Date.au3>

Global $tmp, $Year = "2019", $Month = "2", $Week = 2, $WeekDay = 4
Global $aNth = StringSplit("First ,Second ,Third ,Fourth ,Fifth ", ",")

Func GetWeekDay($Week, $Year, $Month, $WeekDay)
    Local $LastDay = 8 - $WeekDay, $EndDay = $Week * 7 + 1
    $Month = $Month > 10 ? $Month : "0" & $Month
    Local $iWday = _DateToDayOfWeek($YEAR, $Month, $LastDay)
    Return $YEAR & "/" & $Month & "/" & $EndDay - $iWday
EndFunc

MsgBox(0, '', $aNth[$Week] & _DateDayOfWeek($WeekDay) & " of " & _DateToMonth($Month) & _
    " " & $Year & " is " & GetWeekDay($Week, $Year, $Month, $WeekDay))

GetNthWeekDay.au3
CC-MAIN-2019-13
refinedweb
140
53.75
std::strstreambuf::str Calls freeze(), then returns a copy of start pointer of the get area, std::streambuf::eback(). The start of the get area, for all writeable std::strstreambuf objects constructed through the interface provided by std::strstream, is also the start of the put area. [edit] Parameters (none) [edit] Return value A copy of eback(), which may be a null pointer. [edit] Notes This function is typically called through the std::strstream interface. The call to freeze() guarantees that the returned pointer remains valid until the next explicit call to freeze(false): otherwise (on a dynamic buffer) any output operation could trigger buffer reallocation which would invalidate the pointer. It also causes a memory leak in the destructor of std::strstreambuf, unless freeze(false) is called before the buffer (or, more commonly, the std::strstream that manages it) is destroyed. [edit] Example #include <strstream> #include <iostream> int main() { std::strstream dyn; // dynamically-allocated read/write buffer dyn << "Test: " << 1.23 << std::ends;"
http://en.cppreference.com/w/cpp/io/strstreambuf/str
CC-MAIN-2017-30
refinedweb
164
50.06
I’m knee deep in a WPF project i’m writing for my team. I had the need today to have an innerglow, and noticed that we seem to have forgotten this little BitmapEffect of goodness (which I’ll be promptly hassling Jon tomorrow for in the next revision). That being said, as stubborn as I am I decided to try my hand at producing my own InnerGlow effect. It was actually very simple procedure to replicate inside Expression Blend. Step 1. Define your shape. In this case I went with a rounded Rectangle, but you can use any shape you see fit and firstly you need to figure out your base shape. Note: Now it’s important here to note, that you need to think of this as being "Pancakes" in terms of the design composition. Don’t worry to much about pixel perfect precision inside Vector either. A illustrator friend of mine once gave me a basic rule to live with when it comes to Vector art: You can hide a lot of imperfections, as given you can zoom endlessly you get to the point were the naked eye can’t see it and move on.. Couldn’t agree more. Step 2. Simply copy the shape (CTRL+C) and hit Paste keys (CTRL+V). It should paste exactly on top of the previous shape and retain the exact x/y co-ordinates. Rename your layer to something suites you, I myself called it IcnInnerGlow Step 3. Selecting the layer you just copied, you now want to add a BitmapEffect called "Outerglow". This can be found via your Properties area, under the group called Appearance. You will need to click on the expander for this group in order to see the below. You should see something like this: Not really inspiring is it? Step 4. You now need to knockout the Fill and bump up the Border. So in this case, go to your Fill and click on the No Brush Tab. You will also need to ensure the Border’s Brush is set to a color or setting that you feel comfortable with. In this case, I chose a solid color and ensured the Opacity was 53%. The results should now look like this. Step 5 Now comes the real magic. You select your layer, and again hit CTRL+C and CTRL+V (duplicating your layer basically). With this new layer (it should increase the glow effect, make a note of that!) you now want to make this a clipping path. In that everything inside this shape is what you only want to make visible. The rest is something we want to discard. To do this, right-click on the newly duplicated layer, Choose Path, then Make Clipping Path. A new Modal window will now appear, asking which path you want to attach this clipping path to (think masking in Flash). Choose your "InnerGlow" shape, and you should now have the following result: Notice no bleeding beyond the shape’s actual borders? Now you have a uniform InnerGlow that matches your shape. It’s now up to you what you want to do from here on out, but overall this is a quick and fairly friction free way of achieving an InnerGlow using the basic BitmapEffects in place. I personally, adjusted the color to white, added another BitmapEffect (Blur) and tweaked the settings in a way that I ended up with the following visual effect. Doesn’t look like much does it? well keep in mind I’m Zoomed In here, when I zoom back out to 100%, this is the visual look I pulled off. Now to add some Gloss to the buttons..as we all know, there isn’t enough "Gloss" online today 🙂 Summary. Expression Blend isn’t meant to be a Adobe Photoshop replacement, it’s meant to essentially act as a bridge between the Designer Tools and Developer Tools. That being said, it has a fair amount of hidden power under the hood (when dealing with WPF). 
I’ve found that I can achieve a lot of the visual effects I’ve commonly used in Photoshop via Expression Blend (yes its weird I know). It requires more hand-crafting but overall visually once I’ve put it into place, I can re-use across a lot of controls throughout my WPF application. I think at times the Vector UI’s look absolutely crap and its dying of gradient brush overkill, so I’m hoping to show folks a few tricks here and there to produce visually appealing art. Here’s the best part of all. If you like the above, then cut and paste this XAML code into your application, and you’re off to the races. Yet another upside to using XAML, sharing art is as easy as pasting XML. <Rectangle Stroke="#87757575" StrokeDashCap="Square" StrokeEndLineCap="Flat" StrokeLineJoin="Miter" StrokeThickness="0.5" RadiusX="3" RadiusY="3" x:<Rectangle.BitmapEffect><DropShadowBitmapEffect Opacity="0.835" ShadowDepth="1" Softness="0.305"/></Rectangle.BitmapEffect><Rectangle.Fill><LinearGradientBrush StartPoint="0.538462,0.98077" EndPoint="0.538462,-0.576924"><GradientStop Color="#FF232323" Offset="0"/><GradientStop Color="#FF5D5B5B" Offset="1"/></LinearGradientBrush></Rectangle.Fill></Rectangle><Rectangle Stroke="#87757575" StrokeDashCap="Square" StrokeEndLineCap="Flat" StrokeLineJoin="Miter" StrokeThickness="0.5" RadiusX="3" RadiusY="3" x:<Rectangle.BitmapEffect><OuterGlowBitmapEffect GlowSize="2" GlowColor="#FFF9F9F9" Opacity="0.415"/></Rectangle.BitmapEffect></Rectangle><Rectangle Stroke="#87757575" StrokeDashCap="Square" StrokeEndLineCap="Flat" StrokeLineJoin="Miter" StrokeThickness="0.5" RadiusX="3" RadiusY="3" x:<Rectangle.BitmapEffect><BitmapEffectGroup><OuterGlowBitmapEffect GlowSize="2" GlowColor="#FFF9F9F9" Opacity="0.5"/></BitmapEffectGroup></Rectangle.BitmapEffect></Rectangle> Scott – thank you very much for that. i wanted to implement (nearly the same) that glow effect on a dark drop down button of mine in a personal project i’m working on and couldn’t get it to work. Now, as easy as ctrl+c | ctrl+v and i have the solution i need. Dang, if XAML isn’t just a beautiful invention. think you just saved me an ulcer buddy..thanks heaps for that! Dang! I can keep using photoshop! Great! Cos I just spend 3 hours trying to use EB as photoshop – which is isnt! Thanks for pointing that out. You just saved me an ulcer too. So now allI gotta do is work out how to give depth to a 3d object Thanks for this. I tried pasting this XAML into Kaxaml, but it complains that the namespace ‘d’ isn’t known. It’s used thus: d:IsHidden="True" Removing that attribute, I don’t see the result you’re showing in your screenshots. Is this a Expression-Blend-only features? What about us Visual Studio users? Thanks for the post. I had the same problem with the d:IsHidden="True" and found that replacing it with Visibility="Hidden" seemed to do the trick and it built just fine. (Looks great! Thanks for the tip.) thanks a lot. this is really smart.. is there a solution to make this effect resizable? – johannes. Thanks for sharing your tips, its tips like these that actually do make a difference to the individual readers of this blog. Thank you and well done. Why the HELL does this post redirect me to riagenic.com after 15 seconds? Nobody who does that kind of garbage has business being paid to make Internet content. Which version of blend is this? Because v4 does not have this option (anymore?). bitmapeffect are deprecated!
https://blogs.msdn.microsoft.com/msmossyblog/2008/09/14/how-to-make-innerglows-with-expression-blend/
CC-MAIN-2017-26
refinedweb
1,235
66.74
Send your own mobile push notifications.

Project description

SPONTIT :vibration_mode:

Send push notifications without your own app. :punch: Using the Spontit API and the Spontit app/webapp, you can send your own push notifications programmatically to Android, iOS, and desktop devices. You can send your own in less than 5 minutes. :sunglasses: :trophy: (Without touching Swift, Objective-C, Java, Xcode, Android Studio, or the App Store approval process... :dizzy_face:)

TL;DR :running:

- Sign up at spontit.com (you might need to click "Take me to the Desktop version."). Note down your username. It should be displayed in the top left.
- Get a secret key at spontit.com/secret_keys.
- Get the iPhone app or Android app. Sign in and allow notifications.
- Install the Python package and send your first push:

  pip install spontit

  from spontit import SpontitResource
  resource = SpontitResource(my_username, my_secret_key)
  response = resource.push("Hello!")

- You can customize the image of this notification on the website or iPhone app. You can push web content and can push to different topics (topic = subchannel). To push to others, have them follow your respective account (e.g. at spontit.com/my_username) and/or topic. Currently, we only support topic creation on the iPhone app.
- We are constantly working on expanding the functionality of Spontit. We GREATLY appreciate your input - feel free to add a feature request on our GitHub. :smiley:

About :information_source:

What are topics? Every user, by definition, is a main channel. Each user can create a topic. Topics are designed to act separately from your main channel. Users can follow topics without following your main channel, and can follow your main channel without following your topics. For example, my account user ID might be "elon_musk," but I might want to push about new SpaceX developments. I could push to the topic "spacex" and only those who follow the "spacex" topic would get pushed.

When you create a topic, you only need to specify the display name. We create a topic ID from this display name. You then use this ID to programmatically push to the topic. To get the mapping of display name to ID, see "Send Your First Push Notification" below. A notification pushed to a topic has an appearance defined independently from a notification pushed to a main channel (see "Push Notification UI Anatomy").

Creating Topics

Currently, creating topics is only supported through the GUI of the iOS Spontit app. We intend to expand this functionality to the website and the API. To create a topic, get the app, sign up, click the "+" in the top right, then click "Your Topics", and then click the "+" in the top right. Once you make a topic, you are NOT able to change its display name. This will likely NOT change for quite some time. Please keep this in mind when creating topics.

Getting Started :white_check_mark:

Make an Account

First, go to spontit.com or download the Spontit app. Create an account and get your user ID. To see your user ID in the app, tap the hamburger button. To see your user ID on the website, look at the top of the screen. You can change your user ID at any time here.

Generate a Secret Key

Once you have made an account, generate a secret key here. You might have to re-authenticate.

Push Notification UI Anatomy

You can change your user ID, first name, and last name at any time here. Above we see a push notification sent to a main channel. Here, "Josh Wolff" is the first and last name of the user. The call to action is the displayed text.
The image shown is the personal profile picture of the user. (You can change your profile image on the homepage of the website or on the iPhone app in the sidebar.) If the user opens the notification, they can open the attached link, if any. If they have an iPhone, they can forward the notification and share it through several other mediums.

Above we see a push notification to a topic. The user above could be managing this topic, but as you can see, it looks like its own account. "Dem 2020 Polls" is the display name, the non-bold text is the call to action, and the image is the image set for the topic. Currently, we only support setting topic profile images on the iOS app. To set an image, go to "Your Topics" and click the camera icon beside the respective topic.

Send Your First Push Notification :calling:

The Spontit API currently only supports Python 3.6+.

  pip install spontit

  from spontit import SpontitResource

Construct the resource with your credentials.

  spontit_resource = SpontitResource("my_user_id", "my_secret_key")

Push the notification to your main account.

  response = spontit_resource.push("My First Push Notification")

To specify a topic, specify a topic ID. The code below sends a push notification to the topic "mytopic," but it does not send a push notification to the main account. You can add more topics to the array. The topic must be created before pushing to it. To create a topic, see "Creating Topics" above.

  response = spontit_resource.push("My First Push Notification", to_topic_ids=["my_username/myowntopic"])

You will need to get the topic ID. For example, you might specify a topic named "My Own Topic", but the ID you must use for pushing is "my_username/myowntopic". To get a mapping of topic IDs to display names, do the following:

  response = spontit_resource.get_topic_id_to_display_name_mapping()

Specify Content on Your Website

To link content from the Internet, do the following:

  response = spontit_resource.push("My First Push Notification", link="", to_topic_ids=["mytopic"])

Limitations

Rate Limits

Each main channel and topic has an individual rate limit of 1 push per second. For example, you can push to your main account and two topics in the same second, but you cannot push 3 times to one topic in the same second. If you exceed the rate limit, we will specify this in the response returned.

Note on Our Development Priorities

We prioritize development of the iOS application over the website. If at any time we describe a feature and it does not seem to be on the website, it might only exist in the iOS application. Please email us at info {at} spontit {dot} io so that we can clarify this to you and other developers. Please feel free to email us any feature requests as well!
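The calls above are shown one at a time; as an illustration only (not part of the official README), a small script that ties them together and stays under the rate limit might look roughly like this. The credentials, topic IDs and link are placeholders, and since the exact shape of the returned response is not documented here, the script simply prints it rather than assuming any particular fields.

  import time

  from spontit import SpontitResource

  # Placeholder credentials; use your own username and a secret key from spontit.com/secret_keys.
  resource = SpontitResource("my_user_id", "my_secret_key")

  # The documented mapping call returns topic IDs mapped to display names; printing it
  # is an easy way to find the "my_username/mytopic" style IDs you can push to.
  print(resource.get_topic_id_to_display_name_mapping())

  # Placeholder topic IDs taken from that mapping.
  topic_ids = ["my_user_id/myowntopic", "my_user_id/anothertopic"]

  # Push the same announcement to each topic. Sleeping one second between pushes keeps
  # every channel under the documented limit of 1 push per second (strictly only needed
  # when pushing repeatedly to the same topic or main channel).
  for topic_id in topic_ids:
      response = resource.push(
          "New release is live!",                     # the call-to-action text
          link="https://example.com/release-notes",   # placeholder link to open
          to_topic_ids=[topic_id],
      )
      print(response)  # a rate-limit breach would be reported in this response
      time.sleep(1)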
https://pypi.org/project/spontit/
CC-MAIN-2020-10
refinedweb
1,084
66.74