fourvector (Member)
Content count: 6
Community Reputation: 120 (Neutral)
About fourvector
- Rank: Newbie
Looking for a pattern, objects registering to a manager
fourvector posted a topic in General and Gameplay Programming

I'm finding myself running into the same kind of problem over and over again, and I'm wondering if there's a pattern I should be using for it. It started with my rendering system. I'm using a component-based architecture, and naturally my GameObjects would possess renderable components, such as a sprite. In order to render these components in an appropriate order, I had made a renderer object which would sort the renderable components and call render() on them in order. So when I construct a GameObject that will have a renderable component, I pass a renderer to the constructor, and the renderable is then added to it. The rub is that I'd like to be able to clone my GameObject, so all my components have to be cloneable. I'd like any renderables that I clone to end up registered to the same renderer as the renderable they were cloned from. To do this, my renderables keep track of which renderers they've been added to, and when they clone themselves, they can add the cloned renderable to the appropriate renderers. It seemed like a good way of doing it at first, but now I'm beginning to wonder. I've now got an Actor component that needs to be registered to a Time object that manages which Actor takes their turn next (it's a turn-based game). By the same reasoning as used for the renderable, the Actor will have to keep track of which Time object it's been added to. But now I'm adding more code to the Actor, just to allow it to keep track of which Time object it was added to, than I've got describing the functionality of the Actor. I'm being tempted by singletons for these object managers, but they make me feel dirty. Has anyone encountered similar issues?
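A minimal Java sketch of the registration bookkeeping being described; all class and method names here are hypothetical, not taken from the poster's project:

[CODE]
import java.util.ArrayList;
import java.util.List;

interface Renderable {
    void render();
    Renderable cloneRenderable();
}

class Renderer {
    private final List<Renderable> items = new ArrayList<Renderable>();
    void register(Renderable r) { items.add(r); }
    void renderAll() { /* sort by depth, then call render() on each item */ }
}

class SpriteComponent implements Renderable {
    private final Renderer owner;            // the manager this component registered with

    SpriteComponent(Renderer owner) {
        this.owner = owner;
        owner.register(this);                // registration happens at construction
    }

    public void render() { /* draw the sprite */ }

    public SpriteComponent cloneRenderable() {
        return new SpriteComponent(owner);   // the clone re-registers with the same renderer
    }
}
[/CODE]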
Am I over engineering? A generalized Game Action framework
fourvector replied to fourvector's topic in General and Gameplay Programming

Hi ApochPiQ. I've taken your very last piece of advice and decided to do away with a generalized action framework. I've realized it's more important to write a specific piece of code to handle picking things up than a framework for doing EVERYTHING. At least for now; maybe when I've written some specific pieces of code for different actions I might more easily be able to recognize a common theme among the various actions.
Am I over engineering? A generalized Game Action framework
fourvector replied to fourvector's topic in General and Gameplay Programming

I'm still mulling over and playing with what you've said, ApochPiQ. I believe I understand and agree with what you've said about Data: turn CanPickUp into a piece of data instead of an interface, and you can move it around and use its functionality in a more versatile way. I also like the idea of having Nouns be a fundamental sort of object for storing these pieces of Data, because it closely mirrors the Entity-Component architecture. I'm a bit confused about the distinction between a capability and a verb. It seems to me that in defining a capability, what you're really doing is implicitly defining a verb. This is how I understand what you're saying: A capability has a perform() function, in which it does the thing which it is capable of doing, and a canPerform() function, in which it determines if it can do the thing it is capable of doing. In your example you include three capabilities: PickUp, ContainerCapability, and ItemCapability. In addition to their perform() and canPerform() functions, ContainerCapability and ItemCapability also have functions which are related to our understanding of how Containers and Items operate, such as Container.canFit(Item) or Item.getContainer(). I'll call functions of this type "the meat"; they are what make the actual game behave in a specific fashion, as opposed to "the bones", which are the scaffolding, like perform(). So capabilities hold the meat. This makes sense to me. After the meat has been defined in the capabilities, you can then construct a phrase list, with verbs being the linking up of two different capabilities. This linking defines the verb itself. I attempted to implement this structure, but got stuck at what exactly the Container.perform() function should do. It seems as though the functionality of this function had been preempted by the PickUp.perform() function. It seems to me that Container.perform() should do what PickUp.perform() does, namely, cause an item to be contained by it. Here is where I realized that Container, in this scheme, implicitly defines a verb, namely "contain", through its perform() function. I would assume then that Item.perform() would do the exact same thing as Container.perform(), only maybe in a sort of converse way. We could really rename container and item to CanContain and CanBeContained, and the perform function on both of them does the same thing. This implies that when we make our list of acceptable phrases, there's a natural verb that should be defined, namely contain, with the acceptable triplet being [CanContain contain CanBeContained], which, when executed, does this: CanContain.perform(CanContain, CanBeContained), or CanBeContained.perform(CanContain, CanBeContained). I suppose I'm stuck at this point. How might I define other verbs in my phrase list besides the natural verb implied by the perform() function? For instance, it makes sense that I should be able to remove an item from a container. But no combination of perform() calls from the two capabilities would implement that functionality. Do I create a CanRemove capability, with a perform() function that deals with the CanContain and CanBeContained capabilities, in a similar way to how the PickUp capability did before I subsumed it into my CanContain (Container) and CanBeContained (item) capabilities? Isn't this verging too closely on precisely what I was doing before? I don't mean to be difficult, but I feel like I'm running around in circles in my head.
As an aside, I've been giving some thought to the kinds of functions I'd like to call on a general subject-verb-object structure. There are three groups:
[CODE]
Group A
boolean CanVerbNow(verb, subject, object);
boolean CanBeVerbedNow(verb, object, subject);
void do(verb, subject, object);

Group B
boolean CanVerb(verb, subject);
boolean CanBeVerbed(verb, object);

Group C
set<Noun> getVerbed(verb, subject);
set<Noun> getVerbedBy(verb, object);
boolean isVerbed(verb, object);
boolean isVerbedBy(verb, subject);
[/CODE]
Group A contains the meat; these are the functions that actually care about what the verb DOES. Group B only really cares about what the verb does on a conceptual design level, but not on an actual functional programming level: these functions guarantee that the noun does these things, but they don't care what these things are. Group C doesn't care at all what the verb does, but instead only cares about semantic relationships. I've realized the Group C functions are actually questions about the topology of a directed graph, where nodes are nouns and verbs are edges. I think it's a very interesting observation with bearing on how I should engineer my code. A little bit of googling indicates that this problem is being thought about, but maybe not much for game programming.
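A minimal Java sketch of the directed-graph reading of Group C; the adjacency-map representation and all names are hypothetical, just one obvious way to store noun-verb-noun edges:

[CODE]
import java.util.*;

class RelationGraph<Noun, Verb> {
    // verb -> (subject -> objects that the subject "verbs")
    private final Map<Verb, Map<Noun, Set<Noun>>> edges = new HashMap<Verb, Map<Noun, Set<Noun>>>();

    void add(Noun subject, Verb verb, Noun object) {
        if (!edges.containsKey(verb)) edges.put(verb, new HashMap<Noun, Set<Noun>>());
        Map<Noun, Set<Noun>> bySubject = edges.get(verb);
        if (!bySubject.containsKey(subject)) bySubject.put(subject, new HashSet<Noun>());
        bySubject.get(subject).add(object);
    }

    // Group C style queries: pure topology, nothing about what the verb does.
    Set<Noun> getVerbed(Verb verb, Noun subject) {
        Map<Noun, Set<Noun>> bySubject = edges.get(verb);
        if (bySubject == null || !bySubject.containsKey(subject)) return Collections.emptySet();
        return bySubject.get(subject);
    }

    boolean isVerbedBy(Verb verb, Noun object, Noun subject) {
        return getVerbed(verb, subject).contains(object);
    }
}
[/CODE]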
Am I over engineering? A generalized Game Action framework
fourvector replied to fourvector's topic in General and Gameplay Programming

ApochPiQ, perhaps I am confused about what you mean by data. Data wouldn't have any functionality besides getters and setters and such. I would imagine here that the noun data would just be an enum: {CanPickUp, CanBePickedUp, etc.}. But then in the example you've given, we have a function like subject.CanReach(object). How are these functions defined? Additionally, does every Noun need to have a CanReach(object) function, or should only the ones that CanPickUp need it? I appreciate your help; I just feel like I must not be understanding something that you're trying to say.
Am I over engineering? A generalized Game Action framework
fourvector replied to fourvector's topic in General and Gameplay Programming

ApochPiQ, I considered implementing it as data, but I wanted different CanAct and CanBeActedUpon objects to be able to modify the Action. So for instance this is PickUp:
[CODE]
public class PickUp implements Action{
    CanPickUp subject;
    CanBePickedUp object;

    public PickUp(CanPickUp subject, CanBePickedUp object){
        this.object=object;
        this.subject=subject;
    }

    public void Act(){
        //if the subject can pick up the object
        if(subject.isInReach(object) && subject.add(object)){
            //remove the object from the region
            object.removeFromWorld();
        }
    }

    public String getName(){
        return "Pick Up "+object.getGameObject().name;
    }
}
[/CODE]
with the CanPickUp and CanBePickedUp interfaces having the functions implied here. Now different components can implement these interfaces in different ways. For instance a land mine, when removeFromWorld() is called on it, might, in addition to leaving the world, also do a test to see if it explodes, or something to that effect. In general it makes sense to me that different nouns should modify the action. With a data implementation, I'd have to construct some kind of huge case structure to account for different functionality.
Am I over engineering? A generalized Game Action framework
fourvector posted a topic in General and Gameplay Programming

I'm writing a roguelike in Java, using an entity-component architecture I put together. I want to develop a framework for defining game actions that entities can take, and can be taken on, for instance "pick up", "attack", "eat" etc. My design followed from the premise that the definition of every action can be thought of as a triplet consisting of a subject, verb, and object. So for instance, Hand - pick up - item. So I defined three interfaces: [CODE]Action[/CODE], [CODE]CanAct<type extends Action>[/CODE], [CODE]CanBeActedUpon<type extends Action>[/CODE]. Then what I would do would be to define a new action class, such as [CODE]Class PickUp implements Action{ PickUp(CanPickUp, CanBePickedUp){..} }[/CODE], with two interfaces [CODE]CanPickUp extends CanAct<PickUp>[/CODE] and [CODE]CanBePickedUp extends CanBeActedUpon<PickUp>[/CODE]:
[CODE]
public static CanAct<?> canActor(GameObject g, Class<? extends Action> actionType){
    for(Part p : g.allParts){
        if(p instanceof CanAct<?>){
            CanAct<?> CA=(CanAct<?>)p;
            if(CA.actionType()==actionType){
                return CA;
            }
        }
    }
    return null;
}
[/CODE]
Application Note
Document Number: AN4248
Rev. 4.0, 11/2015
Implementing a Tilt-Compensated
eCompass using Accelerometer and
Magnetometer Sensors
by: Talat Ozyagcilar
Applications Engineer
1 Introduction
This technical note provides the mathematics, reference
source code and guidance for engineers implementing a
tilt-compensated electronic compass (eCompass).
The eCompass uses a three-axis accelerometer and three-axis magnetometer. The accelerometer measures the
components of the earth's gravity and the magnetometer
measures the components of earth's magnetic field (the
geomagnetic field). Since both the accelerometer and
magnetometer are fixed on the Printed Circuit Board
(PCB), their readings change according to the orientation
of the PCB.
If the PCB remains flat, then the compass heading could
be computed from the arctangent of the ratio of the two
horizontal magnetic field components. Since, in general,
the PCB will have an arbitrary orientation, the compass
heading is a function of all three accelerometer readings
and all three magnetometer readings.
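For the flat case just described (pitch and roll both zero), the heading reduces to an arctangent of the two hard-iron-corrected horizontal magnetic components, consistent with the yaw calculation listed later in Section 7.1. The following is only a minimal floating-point sketch in C# (the production fixed-point routine is in Section 7, and the Hard-Iron offset V is covered in Section 5):

using System;

static class FlatCompass
{
    // Heading in degrees, clockwise from magnetic north, valid only when the PCB is flat.
    // Bpx, Bpy are the horizontal magnetometer readings; Vx, Vy are the Hard-Iron offsets.
    public static double FlatHeadingDeg(double Bpx, double Bpy, double Vx, double Vy)
    {
        return Math.Atan2(-(Bpy - Vy), Bpx - Vx) * 180.0 / Math.PI;  // range -180 to 180
    }
}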
The tilt-compensated eCompass algorithm actually
calculates all three angles (pitch, roll, and yaw or
compass heading) that define the PCB orientation. The
eCompass algorithms can therefore also be used to create
a 3D Pointer with the pointing direction defined by the
yaw and pitch angles.
Contents
1 Introduction ... 1
    1.1 Related Information ... 2
    1.2 Key Words ... 2
    1.3 Summary ... 2
2 Coordinate System and Package Alignment ... 3
3 Accelerometer and Magnetometer Outputs as a Function of Phone Orientation ... 5
4 Tilt-Compensation Algorithm ... 7
5 Estimation of the Hard-Iron Offset V ... 9
6 Visualization Using Experimental Data ... 10
7 Software Implementation ... 15
    7.1 eCompass C# Source Code ... 15
    7.2 Modulo Arithmetic Low Pass Filter for Angles C# Source Code ... 16
    7.3 Sine and Cosine Calculation C# Source Code ... 17
    7.4 ATAN2 Calculation C# Source Code ... 19
    7.5 ATAN Calculation C# Source Code ... 19
    7.6 Integer Division C# Source Code ... 20
The accuracy of an eCompass is highly dependent on the calculation and subtraction in software of stray
magnetic fields both within, and in the vicinity of, the magnetometer on the PCB. By convention, these
fields are divided into those that are fixed (termed Hard-Iron effects) and those that are induced by the
geomagnetic field (termed Soft-Iron effects). Any zero field offset in the magnetometer is normally
included with the PCB’s Hard-Iron effects and is calibrated at the same time.
This document describes a simple three-element model to compensate for Hard-Iron effects. This three-element model should suffice for many situations. Please contact your Freescale sales representative for
details of a full 10-element model which compensates for both Hard and Soft-Iron effects.
The C# language source code listed within this document contains cross-references to the equations used.
These listings contain all the code needed to return the yaw, pitch and roll angles from the magnetometer
and accelerometer sensor readings.
For convenience, the remainder of this document assumes that the eCompass will be implemented within
a mobile phone.
1.1 Related Information
C source code and additional documentation are available for download at.
1.2 Key Words
Accelerometer, Magnetometer, Tilt angles, eCompass, 3D Pointer, Tilt Compensation, Tilt Correction,
Hard Iron, Soft Iron, Geomagnetism
1.3 Summary
1. A tilt-compensated electronic compass (eCompass) is implemented using the combination of a
three-axis accelerometer and a three-axis magnetometer.
2. The accelerometer readings provide pitch and roll angle information which is used to correct the
magnetometer data. This allows for accurate calculation of the yaw or compass heading when the
eCompass is not held flat.
3. The pitch and roll angles are computed on the assumption that the accelerometer readings result
entirely from the eCompass orientation in the earth's gravitational field. The tilt-compensated
eCompass will not operate under freefall or low-g conditions at one extreme nor high-g
accelerations at the other.
4. A 3D Pointer can be implemented using the yaw (compass heading) and pitch angles from the
eCompass algorithms.
5. The magnetometer readings must be corrected for Hard-Iron and Soft-Iron effects.
6. A simple three-parameter Hard-Iron correction algorithm is described. Please contact your
Freescale sales representative for details of Freescale's complete 10-parameter Hard and Soft-Iron
correction algorithms.
7. Reference C# code is provided at the end of this document for the full tilt-compensated eCompass
with Hard-Iron compensation.
8. Demonstration eCompass platforms are available that show Freescale's latest sensors. Please
contact your Freescale sales representative for details.
but the x-axis Gx and z-axis Gz signals are inverted in sign. Similarly. Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors.and positive x-axes respectively. Coordinate System A positive yaw angle ψ is defined to be a clockwise rotation about the positive z-axis. (see Figure 1). but the y-axis signal should be set to Bx and the x-axis signal should be set to -By. 3 .0 Sensor Freescale Semiconductor. Different PCB layouts may have different orientations of the accelerometer and magnetometer packages and even the same PCB may be mounted in different orientations within the final product. a positive pitch angle θ and positive roll angle φ are defined as clockwise rotations about the positive y. the magnetometer output Bz is correct. Inc. the y-axis points to the right and the z-axis points downward. It is crucial that the accelerometer and magnetometer outputs are aligned with the phone coordinate system. in Figure 1. The x-axis of the phone is the eCompass pointing direction. Also in Figure 1. is correctly aligned. Rev. the accelerometer y-axis output Gy. East. Figure 1. Down) coordinate system to label axes on the mobile phone. 4.Coordinate System and Package Alignment 2 Coordinate System and Package Alignment This application note uses the industry standard “NED” (North. For example.
Repeat the measurements with the PCB y. Inc. and then against. it should be possible to find a maximum value of the measured x component of the magnetic field. Place the PCB flat on the table. the vertical component also points downward with the precise angle being dependent on location.Coordinate System and Package Alignment Once the package rotations and reflections are applied in software. In the northern hemisphere. a final check should be made while watching the raw accelerometer and magnetometer data from the PCB: 1. Gravitational and Magnetic Field Vectors Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors. the geomagnetic field which should result in maximum and minimum readings in the y. When the PCB x-axis is pointed northward and downward. Repeat once more with the x-axis pointing downwards and then upwards to check that the x-axis reports 1g and then -1g. It should also be possible to find a minimum value when the PCB is aligned in the reverse direction. Figure 2. The z-axis accelerometer should read +1g and the x and y axes negligible values. Repeat with the y-axis pointing downwards and then upwards to check that the y-axis reports 1g and then reports -1g. The horizontal component of the geomagnetic field always points to the magnetic north pole.and z-axes aligned first with.and then z-axes. Invert the PCB so that the z-axis points upwards and verify that the z-axis accelerometer now indicates -1g.0 4 Sensor Freescale Semiconductor. . 4. 2. Rev.
pitch and roll applied to a starting position with the phone flat and pointing northwards. δ is the angle of inclination of the geomagnetic field measured downwards from horizontal and varies over the earth's surface from -90° at the south magnetic pole. 1 cos δ Br = B 0 sin δ Eqn.kyoto-u.81 ms-2. The phone accelerometer. The accelerometer. There is no requirement to know the details of the geomagnetic field strength nor inclination angle in order for the eCompass software to function since these cancel in the angle calculations (see Equations 20. 4. Rev. 5 Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors.Accelerometer and Magnetometer Outputs as a Function of 3 Accelerometer and Magnetometer Outputs as a Function of Phone Orientation Any orientation of the phone can be modeled as resulting from rotations in yaw. Gr. Bp. readings measured after the three rotations Rz(ψ) then Ry(θ) and finally Rx(φ) are described by the equations: 0 G p = R x ( φ )R y ( θ )R z ( ψ )G r = R x ( φ )R y ( θ )R z ( ψ ) 0 g Eqn. Gp. through zero near the equator to +90° at the north magnetic pole. 21 and 22). and magnetometer. Inc. 4 The three rotation matrices referred to in Equations 3 and 4 are: 1 0 0 R x ( φ ) = 0 cos φ sin φ 0 – sin φ cos φ Eqn. and magnetometer. Detailed geomagnetic field maps are available from the World Data Center for Geomagnetism at. 5 . 3 cos δ B p = R x ( φ )R y ( θ )R z ( ψ )B r = R x ( φ )R y ( θ )R z ( ψ )B 0 sin δ Eqn.jp/igrf/. readings in this starting reference position are (see Figure 2): 0 G r = 0 g Eqn. Br. 2 The acceleration due to gravity is g = 9.ac.kugi. B is the geomagnetic field strength which varies over the earth's surface from a minimum of 22 μT over South America to a maximum of 67 μT south of Australia.0 Sensor Freescale Semiconductor.
Equation 8 does not model Soft-Iron effects. Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors.Accelerometer and Magnetometer Outputs as a Function of Phone Orientation cos θ 0 – sin θ Ry ( θ ) = 0 1 0 sin θ 0 cos θ Eqn. Rev.0 6 Sensor Freescale Semiconductor. Vy. which rotates with the phone PCB and is therefore independent of phone orientation. 6 cos ψ sin ψ 0 R z ( ψ ) = – sin ψ cos ψ 0 0 0 1 Eqn. Equation 4 ignores any stray magnetic fields from Hard and Soft-Iron effects. The standard way of modeling the Hard-Iron effect is as an additive magnetic vector. Inc. . V. 8 where Vx. and Vz. A tilt-compensated eCompass will give erroneous readings if it is subjected to any linear acceleration. 7 Equation 3 assumes that the phone is not undergoing any linear acceleration and that the accelerometer signal Gp is a function of gravity and the phone orientation only. Since any magnetometer sensor zero flux offset is also independent of phone orientation. Equation 4 then becomes: V cos δ cos δ x B p = R x ( φ )R y ( θ )R z ( ψ )B 0 + V = R x ( φ )R y ( θ )R z ( ψ )B 0 + V y sin δ sin δ V z Eqn. 4. are the components of the Hard-Iron vector. it simply adds to the PCB Hard-Iron component and is calibrated and removed at the same time. Please contact your Freescale sales representative for details of Freescale’s full Hard-Iron and Soft-Iron calibration model and calibration source code.
Expanding Equation 9 gives: cos θ 0 sin θ 0 1 0 – sin θ 0 cos θ 1 0 0 0 cos φ – sin φ 0 sin φ cos φ cos θ sin θ sin φ sin θ cos φ 0 cos φ – sin φ – sin θ cos θ sin φ cos θ cos φ G px 0 G py = 0 G pz g G px 0 G py = 0 G pz g Eqn. 11 The y component of Equation 11 defines the roll angle φ as: G py cos φ – G pz sin φ = 0 G py tan ( φ ) = ------- G pz Eqn. 16 Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors. 4. Inc. 9 contains the three components of gravity measured by the accelerometer. the magnetometer reading can be de-rotated to correct for the phone orientation using Equation 8: B cos δ Rz ( ψ ) 0 = B sin δ cos ψ sin ψ 0 – sin ψ cos ψ 0 0 0 1 B cos δ 0 = R y ( – θ )R x ( – φ ) ( B p – V ) B sin δ Eqn. 15 With the angles φ and θ known from the accelerometer. 12 Eqn. 7 . Rev. 10 Eqn.0 Sensor Freescale Semiconductor. 13 The x component of Equation 11 gives the pitch angle θ as: G px cos θ + G py sin θ sin φ + G pz sin θ cos φ = 0 Eqn.Tilt-Compensation Algorithm 4 Tilt-Compensation Algorithm The tilt-compensated eCompass algorithm first calculates the roll and pitch angles φ and θ from the accelerometer reading by pre-multiplying Equation 3 by the inverse roll and pitch rotation matrices giving: G px 0 0 R y ( – θ )R x ( – φ )G p = R y ( – θ )R x ( – φ ) G py = R z ( ψ ) 0 = 0 g g G pz where the vector G px G py G pz Eqn. 14 – G px - tan ( θ ) = --------------------------------------------- G py sin φ + G pz cos φ Eqn.
18 = B fx B fy B fz Eqn. pitch and yaw to the range -180° to 180°. 21 ( B pz – V z ) sin φ – ( B py – V y ) cos φ -------------------------------------------------------------------------------------------------------------------------------------------------- ( B px – V x ) cos θ + ( B py – V y ) sin θ sin φ + ( B pz – V z ) sin θ cos φ Eqn. 19 represent the components of the magnetometer sensor after correcting for the Hard-Iron offset and after de-rotating to the flat plane where θ = φ = 0. A further constraint is imposed on the pitch angle to limit it to the range -90° to 90°. 15 and 22 have an infinite number of solutions at multiples of 360°. Rev. Since Equations 13. 22 Equation 22 allows solution for the yaw angle ψ where ψ is computed relative to magnetic north. 17 Eqn. The yaw angle ψ is therefore the required tilt-compensated eCompass heading. Equations 13 and 22 are therefore computed with a software ATAN2 function (with output angle range -180° to 180°) and Equation 15 is computed with a software ATAN function (with output angle range -90° to 90°). The x and y components of Equation 19 give: –B tan ( ψ ) = ---------fy- = B fx cos ψ B cos δ = B fx Eqn. . Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors. This ensures only one unique solution exists for the compass.Tilt-Compensation Algorithm cos ψ B cos δ – sin ψ B cos δ B sin δ cos θ 0 sin θ 1 0 0 = 0 1 0 0 cos φ – sin φ – sin θ 0 cos θ 0 sin φ cos φ cos θ sin θ sin φ sin θ cos φ = 0 cos φ – sin φ – sin θ cos θ sin φ cos θ cos φ B px – V x B py – V y B pz – V z B px – V x B py – V y B pz – V z ( B px – V x ) cos θ + ( B py – V y ) sin θ sin φ + ( B pz – V z ) sin θ cos φ = ( B py – V y ) cos φ – ( B pz – V z ) sin φ – ( B px – V x ) sin θ + ( B py – V y ) cos θ sin φ + ( B pz – V z ) cos θ cos φ The vector B fx B fy B fz Eqn. 4. Inc.0 8 Sensor Freescale Semiconductor. 20 sin ψ B cos δ = – B fy Eqn. it is standard convention to restrict the solutions for roll. pitch and roll angles for any phone orientation.
5 Estimation of the Hard-Iron Offset V
Equation 22 assumes knowledge of the Hard-Iron offset V, which is a fixed magnetic offset adding to the true magnetometer sensor output. The Hard-Iron offset is the sum of any intrinsic zero field offset within the magnetometer sensor itself plus permanent magnetic fields within the PCB generated by magnetized ferromagnetic materials. It is common practice for magnetometer sensors to be supplied without zero field offset calibration since the standard Hard-Iron estimation algorithms will compute the sum of both the magnetometer sensor zero field offset and the PCB Hard-Iron offset. It is quite normal for the Hard-Iron offset to greatly exceed the geomagnetic field. Therefore an accurate Hard-Iron estimation and subtraction are required to avoid Equation 22 jamming and returning compass angles within a limited range only.
In the absence of any Hard-Iron effects, the locus of the magnetometer output under arbitrary phone orientation changes lies on the surface of a sphere in the space of Bpx, Bpy and Bpz with a radius equal to the magnitude of the geomagnetic field B. In the presence of Hard-Iron effects, the locus of the magnetic measurements is simply displaced by the Hard-Iron vector V so that the origin of the sphere is equal to the Hard-Iron offset Vx, Vy and Vz. The Hard-Iron offset V can then be computed by fitting the magnetometer measurements to the equation:

(Bp - V)^T (Bp - V) = B^2    (Eqn. 23)

The mathematics and algorithms for computing V using Equation 23 are documented in Freescale application note AN4246, "Calibrating an eCompass in the Presence of Hard and Soft-Iron Interference".
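As a rough bring-up alternative only, and not the Equation 23 sphere fit documented in AN4246: a much cruder estimate is the per-axis midpoint of the minimum and maximum magnetometer readings seen while the PCB is rotated through varied orientations. A C# sketch with hypothetical names:

static class CrudeHardIron
{
    // Per-axis running min/max; call Update for every magnetometer sample while
    // the board is rotated through a wide range of orientations.
    static short minX = short.MaxValue, maxX = short.MinValue;
    static short minY = short.MaxValue, maxY = short.MinValue;
    static short minZ = short.MaxValue, maxZ = short.MinValue;

    public static void Update(short Bpx, short Bpy, short Bpz)
    {
        if (Bpx < minX) minX = Bpx; if (Bpx > maxX) maxX = Bpx;
        if (Bpy < minY) minY = Bpy; if (Bpy > maxY) maxY = Bpy;
        if (Bpz < minZ) minZ = Bpz; if (Bpz > maxZ) maxZ = Bpz;
    }

    // Midpoint of the observed extremes approximates the Hard-Iron offset V.
    public static void Estimate(out short Vx, out short Vy, out short Vz)
    {
        Vx = (short)((minX + maxX) / 2);
        Vy = (short)((minY + maxY) / 2);
        Vz = (short)((minZ + maxZ) / 2);
    }
}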
6 Visualization Using Experimental Data
This section uses accelerometer and magnetometer measurements to visualize the transformations described mathematically in this document. Figures 3 and 4 show scatter plots of accelerometer and magnetometer data taken as an eCompass PCB is rotated in yaw, pitch and roll angles. Each accelerometer measurement is paired with a magnetometer measurement taken at the same time. As predicted by Equation 3, the accelerometer measurements in Figure 3 lie on the surface of a sphere with radius equal to the earth's gravitational field measured in mg. The slight deviation of the accelerometer measurements from the sphere is caused by handshake adding to the gravitational component during the measurement process.
Figure 3. Scatter plot of accelerometer readings taken over a variety of orientation angles.
Similarly, as predicted by Equation 8, the magnetometer measurements in Figure 4 lie on the surface of a sphere with radius equal to the geomagnetic field strength B centered at the Hard-Iron offset V.
Figure 4. Scatter plot of raw magnetometer readings taken in parallel with the accelerometer readings of Figure 3.
Figure 5 shows the magnetometer readings of Figure 4 after correction for the Hard-Iron offset using the simple algorithm of Equation 23. As predicted by Equation 8, these corrected readings Bp-V lie on the surface of a sphere with radius equal to the geomagnetic field strength B centered at the origin.
Figure 5. Scatter plot of calibrated magnetometer readings corrected for the Hard-Iron offset.
Figure 6 shows the accelerometer readings corrected for roll and pitch angles using Equation 9. The corrected measurements, defined as Ry(-θ)Rx(-φ)Gp, have zero x and y components and a z component approximately equal to 1g. The slight variation in the z component results simply from noise or handshake during the measurement process.
Figure 6. Scatter plot of accelerometer readings corrected for roll and pitch.
Figure 7 shows the magnetometer readings of Figure 3 further corrected for roll and pitch. As predicted by Equation 8, these measurements, defined as Ry(-θ)Rx(-φ)(Bp - V), have a circular distribution in the x and y axes and a constant z component equal to Bsinδ.
Figure 7. Scatter plot of calibrated magnetometer readings corrected for roll and pitch.
Int16 iGpy. iBpz: the three components of the magnetometer sensor */ /* iGpx. then the multiplier should be 4x to maximize the dynamic range.01° angular resolution within the word length of an Int16. if the accelerometer data is signed 14-bit with range -213 to 213-1. /* sine and cosine */ /* subtract the hard iron offset */ iBpx -= iVx. Global variables used by the function are listed immediately below. /* tilt-compensated e-Compass code */ public static void iecompass(Int16 iBpx. iGpz: the three components of the accelerometer sensor */ /* local variables */ Int16 iSin. 15 . Int16 iGpx. iVz. iThe. iBpy. The function calls the trigonometric functions iTrig and iHundredAtan2Deg. /* hard iron estimate */ static Int16 iVx. iCos. iBfy. Int16 iBpy. The accelerometer and magnetometer readings are assumed to fit within a signed 16-bit Int16 (since the most sensitive consumer accelerometers and magnetometers currently provide a maximum of 14 bits of data). Q15 fractional arithmetic is used for the sines and cosines where -32768 represents -1. Inc. Sines and Cosines are computed directly from ratios of the accelerometer and magnetometer data rather than from angles.0 Sensor Freescale Semiconductor. Angles are computed using a custom ATAN2 function which returns angles in degrees times 100 so 30° is output as 3000 decimal and -45° as -4500 decimal. Custom functions are provided in this document for all the trigonometric and numerical calculations required.00 and +32767 represents 0. /* magnetic field readings corrected for hard iron effects and PCB orientation */ static Int16 iBfx. however. /* see Eq 16 */ iBpy -= iVy. It is.Software Implementation 7 Software Implementation The reference C# code in this documentation uses integer operands only and makes no calls to any external mathematical libraries. Int16 iGpz) { /* stack variables */ /* iBpx. /* roll pitch and yaw angles computed by iecompass */ static Int16 iPhi. The trigonometric rotations used to rotate the sensor measurements use 16 x 16 bit multiplies into a 32-bit integer with the result returned as an Int16. This provides 0. 7.999. Int16 iBpz. The three angles computed should be low-pass filtered (see next section). 4. Rev. iVy. All calculations are performed on the raw Int16 data read from the sensors without any need to convert to physical units of ms-2 or μT. For example.1 eCompass C# Source Code The tilt-compensated eCompass function is listed below. iGpy. recommended that the user implement fixed multipliers to boost the accelerometer and magnetometer readings closer to the maximum range -32768 to +32767 to reduce quantization noise in the mathematical routines. iBfz. /* see Eq 16 */ Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors. iPsi. /* see Eq 16 */ iBpz -= iVz.
Software Implementation /* calculate current roll angle Phi */ iPhi = iHundredAtan2Deg(iGpy./* Eq 13 */ /* calculate sin and cosine of roll angle Phi */ iSin = iTrig(iGpy. ./* Eq 15 */ /* restrict pitch angle to range -90 to 90 degrees */ if (iThe > 9000) iThe = (Int16) (18000 . The code is written for filtering the yaw (compass) angle ψ but can also be used for the roll angle φ with changes to use iPhi instead of iPsi. 26 or equivalently in software: The time constant in samples is given by the reciprocal of the filter coefficient α. Int32 tmpAngle. /* Eq 13: cos = adjacent / hypotenuse */ /* de-rotate by roll angle Phi */ iBfy = (Int16)((iBpy * iCos . /* Eq 15: sin = opposite / hypotenuse */ iCos = iTrig(iGpz./* Eq 19: z component */ /* calculate current yaw = e-compass angle Psi */ iPsi = iHundredAtan2Deg((Int16)-iBfy./* Bpy*sin(Phi)+Bpz*cos(Phi)*/ iGpz = (Int16)((iGpy * iSin + iGpz * iCos) >> 15). Rev. iGpy). Inc. /* temporary angle*100 deg: range -36000 to 36000 */ /* low pass filtered angle*100 deg: range -18000 to 18000 */ Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors. iGpz). modulo 360°. iGpz). The filter has a single pole on the real axis at z = 1 .α and has transfer function H(z) given by: α - H ( z ) = ----------------------------- 1 – ( 1 – α ) z – 1 Eqn. Eqn.2 Modulo Arithmetic Low Pass Filter for Angles C# Source Code The code for a simple exponential low pass filter. for the output angles is listed below. iGpz).iThe). /* Eq 15: cos = adjacent / hypotenuse */ /* correct cosine if pitch not in range -90 to 90 degrees */ if (iCos < 0) iCos = (Int16)-iCos. static Int16 iLPPsi. The additional complexity of the code below is to implement the filter in modulo arithmetic so that a sample to sample angle change of 359° is correctly interpreted as a -1° change. /* Eq 13: sin = opposite / hypotenuse */ iCos = iTrig(iGpz./* Eq 19 y component */ iBpz = (Int16)((iBpy * iSin + iBpz * iCos) >> 15). 4. /* Eq 19: x component */ iBfz = (Int16)((-iBpx * iSin + iBpz * iCos) >> 15). iGpz). iBfx). 25 yn + = α * ( xn – yn ) ./* Eq 15 denominator */ /* calculate current pitch angle Theta */ iThe = iHundredAtan2Deg((Int16)-iGpx.0 16 Sensor Freescale Semiconductor. iGpx). /* de-rotate by pitch angle Theta */ iBfx = (Int16)((iBpx * iCos + iBpz * iSin) >> 15). /* Eq 22 */ } 7. if (iThe < -9000) iThe = (Int16) (-18000 .iThe). /* calculate sin and cosine of pitch angle Theta */ iSin = (Int16)-iTrig(iGpx. 24 The difference equation filtering the input series x[n] into output y[n] is given by: y [ n ] = ( 1 – α )y [ n – 1 ] + αx [ n ] Eqn.iBpz * iSin) >> 15).
29 The accuracy is determined by the threshold MINDELTATRIG.(Int32)iLPPsi.tmpAngle). /* store the correctly bounded low pass filtered angle */ iLPPsi = (Int16)tmpAngle. The setting for maximum accuracy is MINDELTATRIG = 1. if (tmpAngle < -9000) tmpAngle = (Int16) (-18000 . 4. if (tmpAngle > 18000) tmpAngle -= 36000. if (tmpAngle < -18000) tmpAngle += 36000. which is restricted to the range -90° to 90°.Software Implementation static UInt16 ANGLE_LPF. For the pitch angle θ. Inc. /* final step size for iTrig */ Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors. Rev. 17 . 28 The function uses a binary division algorithm to solve for r where: 2 2 2 r (x + y )= x 2 Eqn. /* check that the angle remains in -180 to 180 deg bounds */ if (tmpAngle > 18000) tmpAngle -= 36000. the final bounds check should be changed to: if (tmpAngle > 9000) tmpAngle = (Int16) (18000 . /* calculate the new low pass filtered angle */ tmpAngle = (Int32)iLPPsi + ((ANGLE_LPF * tmpAngle) >> 15). 7. 27 y cos θ = -------------------2 2 x +y Eqn.3 Sine and Cosine Calculation C# Source Code The function iTrig computes angle sines and cosines using the definitions: x θ y x sin θ = -------------------2 2 x +y Eqn.0 Sensor Freescale Semiconductor. const UInt16 MINDELTATRIG = 1. if (tmpAngle < -18000) tmpAngle += 36000.tmpAngle). /* low pass filter: set to 32768 / N for N samples averaging */ /* implement a modulo arithmetic exponential low pass filter on the yaw angle */ /* compute the change in angle modulo 360 degrees */ tmpAngle = (Int32)iPsi .
/* (ix * ix) + (iy * iy) */ Int16 ir.Software Implementation /* function to calculate ir = ix / sqrt(ix*ix+iy*iy) using binary division */ static Int16 iTrig(Int16 ix. 4. /* check for -32768 which is not handled correctly */ if (ix == -32768) ix = -32767. /* itmp=(ir+delta)^2*(ix*ix+iy*iy). /* scratch */ UInt32 ixsq. /* set as 2^14 = 0. Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors. algorithm assumes x is positive for convenience */ isignx = 1. iy: signed 16 bit integers representing sensor reading in range -32768 to 32767 */ function returns signed Int16 as signed fraction (ie +32767=0. Int16 iy) { UInt32 itmp. range 0 to 32767*32767 = 2^30 = 1073676289 */ itmp = (UInt32)((ir + idelta) * (ir + idelta)). . if (ix < 0) { ix = (Int16)-ix. /* store the sign for later use. boost ix and iy but keep below maximum signed 16 bit */ while ((ix < 16384) && (iy < 16384)) { ix = (Int16)(ix + ix).0000) */ algorithm solves for ir*ir*(ix*ix+iy*iy)=ix*ix */ /* correct for pathological case: ix==iy==0 */ if ((ix == 0) && (iy == 0)) ix = iy = 1.99997.5 */ ir = 0. Rev. idelta = 16384. Inc. /* ixsq=ix*ix: 0 to 32767^2 = 1073676289 */ ihypsq = (UInt32)(ixsq + iy * iy). } /* calculate ix*ix and the hypotenuse squared */ ixsq = (UInt32)(ix * ix). } /* for convenience in the boosting set iy to be positive as well as ix */ iy = (Int16)Math. if (iy == -32768) iy = -32767. 1 returned as signed Int16 */ Int16 idelta. /* result = ix / sqrt(ix*ix+iy*iy) range -1. /* to reduce quantization effects. iy = (Int16)(iy + iy). isignx = -1.Abs(iy).0 18 Sensor Freescale Semiconductor. -32768=-1. /* ix * ix */ Int16 isignx. /* storage for sign of x. algorithm assumes x >= 0 then corrects later */ UInt32 ihypsq. /* delta on candidate result dividing each stage by factor of 2 */ /* /* /* /* stack variables */ ix. range 0 to 2^31 = 2147221516 */ itmp = (itmp >> 15) * (ihypsq >> 15).5 */ /* loop over binary sub-division algorithm */ do { /* generate new candidate solution for ir and test if we are too high or too low */ /* itmp=(ir+delta)^2. /* ihypsq=(ix*ix+iy*iy) 0 to 2*32767*32767=2147352578 */ /* set result r to zero and binary search step to 16384 = 0.
Angle * 100 = -----15 45 75 2 X 2 X 2 X Eqn. else if ((ix <= 0) && (iy <= 0)) /* range -180 to -90 degrees */ iResult = (Int16)((Int16)-18000 + iHundredAtanDeg((Int16)-iy. ix)).ix) in deg for ix.0 to 0.9999695 in Q15 fractional arithmetic) outputting the angle in degrees * 100 in the range 0 to 9000 (0. Inc.+ -----. Int16 ix) { Int16 iResult. 19 .0 Sensor Freescale Semiconductor.Software Implementation if (itmp <= ixsq) ir += idelta.0°). 31 Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors.+ -----. /* last loop is performed for idelta=MINDELTATRIG */ /* correct the sign before returning */ return (Int16)(ir * isignx). else if ((ix <= 0) && (iy >= 0)) /* range 90 to 180 degrees */ iResult = (Int16)(18000 . idelta = (Int16)(idelta >> 1).5 ATAN Calculation C# Source Code The function iHundredAtanDeg computes the ATAN --Y- function for X and Y in the range 0 to 32767 X (interpreted as 0. } 7.0° to 90. if (iy == -32768) iy = -32767. /* angle in degrees times 100 */ /* check for -32768 which is not handled correctly */ if (ix == -32768) ix = -32767. 4. iy in range -32768 to 32767 */ static Int16 iHundredAtan2Deg(Int16 iy. /* divide by 2 using right shift one bit */ } while (idelta >= MINDELTATRIG). /* check for quadrants */ if ((ix >= 0) && (iy >= 0)) /* range 0 to 90 degrees */ iResult = iHundredAtanDeg(iy. For Y≤ X the output angle is in the range 0° to 45° and is computed using the polynomial approximation: K1 Y K2 Y 3 K3 Y 5 . ix).--. Rev.--.--. else /* ix >=0 and iy <= 0 giving range -90 to 0 degrees */ iResult = (Int16)(-iHundredAtanDeg((Int16)-iy. } 7.4 ATAN2 Calculation C# Source Code The function iHundredAtan2Deg is a wrapper function which implements the ATAN2 function by assigning the results of an ATAN function to the correct quadrant. 30 For Y > X.(Int16)iHundredAtanDeg(iy. return (iResult). (Int16)-ix)). (Int16)-ix)). /* calculates 100*atan2(iy/ix)=100*atan2(iy. The result is the angle in degrees times 100. the identity is used (valid in degrees for positive x): 1 atan ( x ) = 90 – atan --- x Eqn.
0 20 Sensor Freescale Semiconductor.05 deg max error */ = 5701. /* temporary variable */ /* check for pathological cases */ if ((ix == 0) && (iy == 0)) return (0). /* return a fraction in range 0. Int16 ix) { Int32 iAngle. iAngle += (iTmp >> 15) * (Int32) K2.0 to 90. iAngle = iAngle >> 15. to 1. iTmp = ((Int32) iRatio >> 5) * ((Int32) iRatio >> 5) * ((Int32) iRatio >> 5). /* fifth order const Int16 K1 const Int16 K2 const Int16 K3 of polynomial approximation giving 0. /* angle in degrees times 100 */ Int16 iRatio. Angle * 100 = 9000 – -----15 45 75 2 Y 2 Y 2 Y Eqn. = -1645.iAngle). return ((Int16) iAngle). . limit result to range 0 to 9000 equals 0. = 446. 32 K1. third and fifth order polynomial approximation */ iAngle = (Int32) K1 * (Int32) iRatio.+ ------. Inc.Software Implementation K1 X K2 X 3 K3 X 5 .+ ------. to 1. iy). /* return a fraction in range 0.0 degrees */ if (iAngle < 0) iAngle = 0. */ /* first. */ else iRatio = iDivide(ix. /* for tidiness. K2 and K3 were computed by brute force optimization to minimize the maximum error. Rev. 4.--. ix). } Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors. to 32767 = 0. if ((ix == 0) && (iy != 0)) return (9000). /* check if above 45 degrees */ if (iy > ix) iAngle = (Int16)(9000 . /* ratio of iy / ix or vice versa */ Int32 iTmp. iy positive in range 0 to 32767 */ static Int16 iHundredAtanDeg(Int16 iy. if (iAngle > 9000) iAngle = 9000.--. /* check for non-pathological cases */ if (iy <= ix) iRatio = iDivide(iy. to 32767 = 0. iTmp = (iTmp >> 20) * ((Int32) iRatio >> 5) * ((Int32) iRatio >> 5) iAngle += (iTmp >> 15) * (Int32) K3. /* calculates 100*atan(iy/ix) range 0 to 9000 for all ix.--.
Rev. /* divide by 2 using right shift one bit */ } while (idelta >= MINDELTADIV). } /* loop over binary sub-division algorithm solving for ir*ix = iy */ do { /* generate new candidate solution for ir and test if we are too high or too low */ itmp = (Int16)(ir + idelta). iy = (Int16)(iy + iy). the candidate solution */ itmp = (Int16)((itmp * ix) >> 15). non-zero and where the denominator is greater than the numerator.Software Implementation 7. idelta = 16384. /* final step size for iDivide */ /* function to calculate ir = iy / ix with iy <= ix. /* last loop is performed for idelta=MINDELTADIV */ return (ir). } Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors.9999695. idelta = (Int16)(idelta >> 1). The setting for maximum accuracy is MINDELTADIV = 1. /* set as 2^14 = 0.5 */ ir = 0. const UInt16 MINDELTADIV = 1. 33 using a binary division algorithm to solve for: rx = y Eqn. /* result = iy / ix range 0. 1. 34 The accuracy is determined by the threshold MINDELTADIV.5 */ /* to reduce quantization effects.0 Sensor Freescale Semiconductor. boost ix and iy to the maximum signed 16 bit value */ while ((ix < 16384) && (iy < 16384)) { ix = (Int16)(ix + ix).6 Integer Division C# Source Code The function iDivide is an accurate integer division function where it is given that both the numerator and denominator are non-negative. /* delta on candidate result dividing each stage by factor of 2 */ /* set result r to zero and binary search step to 16384 = 0.0 to 0.. Inc. /* scratch */ Int16 ir. and ix. iy both > 0 */ static Int16 iDivide(Int16 iy. Int16 ix) { Int16 itmp. if (itmp <= iy) ir += idelta. The result is in the range 0 decimal to 32767 decimal which is interpreted in Q15 fractional arithmetic as the range 0. The function solves for r where: y r = -- x Eqn. 4. /* itmp=ir+delta. 21 . returned in range 0 to 32767 */ Int16 idelta.
and specifically disclaims any and all liability. representation. & Tm. or guarantee regarding the suitability of its products for any particular purpose. Pat. Document Number: AN4248 Rev. All other product or service names are the property of their respective owners. U. Freescale sells products pursuant to standard terms and conditions of sale. licenses granted hereunder to design or fabricate any integrated circuits based on the Freescale reserves the right to make changes without further notice to any products herein.reg. Reg. Off.S. 2015 Freescale Semiconductor.com/support information in this document.. Freescale makes no warranty. 4.” must be validated for each customer application by customer’s technical experts. There are no express or implied copyright Web Support: freescale.How to Reach Us: Information in this document is provided solely to enable system and software Home Page: freescale. nor does Freescale assume any liability arising out of the application or use of any product or circuit.htm. including “typicals.0 11/2015 . Freescale does not convey any license under its patent rights nor the rights of others. including without limitation consequential or incidental damages. Inc.net/v2/webservices/Freescale/Docs/TermsandConditions. which can be found at the following address:. “Typical” parameters that may be provided in Freescale data sheets and/or specifications can and do vary in different applications. Freescale and the Freescale logo are trademarks of Freescale Semiconductor. and actual performance may vary over time. Inc. All operating parameters. © 2013.com implementers to use Freescale products. | https://www.scribd.com/document/326601370/AN4248 | CC-MAIN-2018-51 | refinedweb | 6,016 | 57.87 |
Hello,
it seems my problem is unsolvable:)
I would be grateful for any help.
That's what I have done:
* ran install.py file;
* as I understand, I can choose whether to use wkcgi.exe or mod_webkit.dll as the adapter (I use Apache2, so I go to the mod_webkit_2 directory).
well, I tried both of them: 1) copied wkcgi.exe file to Apache cgi-bin folder; 2) I copied mod_webkit.dll to Apache cgi-bin directory and, could You tell me if I'm right: after this in Apache/conf httpd.conf file I add "LoadModule webkit_module modules/mod_webkit.dll;"
and " <Location /WK>
WKServer localhost 8086
SetHandler webkit-handler
</Location> "
then I run AppServer.bat
when I write the address I see this page. When I change something in the Examples files, after refreshing I see the changes, so everything seems to be OK, because I did everything that was written in the WebKit Install Guide.
So now I can develop my application. I am new at it, so first I found one to try:
"from WebKit.Servlet import Servlet
class Hello(Servlet):
def respond(self, trans):
trans.response().write('Content-type: text/html\\n\\nHello, world!\\n')" I call this application aplikacija.py
If this application isn't good, could You show how a simple "Hello world" application should look?
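For what it's worth, the WebKit examples conventionally subclass Page rather than Servlet; the sketch below assumes Webware's WebKit.Page API (writeContent/writeln), so the exact names should be checked against the files in the Examples directory:

# Sketch assuming Webware's WebKit.Page API; save as e.g. Hello.py in the context directory.
from WebKit.Page import Page

class Hello(Page):

    def title(self):
        return 'Hello'

    def writeContent(self):
        # Page writes the surrounding HTML; this only fills in the body.
        self.writeln('Hello, world!')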
I create my working directory with MakeAppWorkDir in C:/we named aplikacija, so I got C:/we/aplikacija.
The question: do I have to add this context to the Contexts dictionary of Application.config? I don't understand if I need to, and if so, what address I should write; in the WebKit Install Guide it is written:
:"
Why is it written "absolute or relative to the WebKit directory" when the example shows a different address? As I see it, the path should be that of the working directory, so I write "C:/we/aplikacija". Am I correct? (I have doubts.)
and in a browser I write "" but the file is not found.
I guess I should run the AppServer.bat of my working directory. So which AppServer should I run: the one of WebKit or the one of my working directory? Or both of them?
The other question: do I have to copy any file from my created working directory to Apache or not?
Maybe I should do something with file launch.py, but when I run it, a window opens and closes itself.
The last question: does the application need to be compiled if I want to run it in a browser? And if so, how should it be done? I run python aplikacija.py at the Python command line but there are syntax errors.
I hope You won't get tired of my many questions :)
thank You very much
IOCTL(2) Linux Programmer's Manual IOCTL(2)
NAME
       ioctl - control device
SYNOPSIS
       #include <sys/ioctl.h>

       int ioctl(int fd, unsigned long request, ...);
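A short usage sketch, using the widely available TIOCGWINSZ terminal request as the example operation:

       #include <stdio.h>
       #include <sys/ioctl.h>
       #include <unistd.h>

       int
       main(void)
       {
           struct winsize ws;

           /* Ask the tty driver for the window size of standard output. */
           if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1) {
               perror("ioctl(TIOCGWINSZ)");
               return 1;
           }
           printf("rows=%d, cols=%d\n", ws.ws_row, ws.ws_col);
           return 0;
       }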
COLOPHON
       This page is part of release 4.16 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.

Linux                            2017-05-03                            IOCTL(2)
Pages that refer to this page: apropos(1), man(1), whatis(1), getsockopt(2), ioctl_xfs_scrub_metadata(2), open(2), perf_event_open(2), read(2), select_tut(2), socket(2), syscalls(2), timerfd_create(2), userfaultfd(2), write(2), errno(3), if_nameindex(3), if_nametoindex(3), openpty(3), dsp56k(4), fd(4), loop(4), lp(4), random(4), rtc(4), sd(4), smartpqi(4), st(4), tty(4), vcs(4), arp(7), capabilities(7), pipe(7), pty(7), signal(7), socket(7), tcp(7), termio(7), udp(7), unix(7)
This article covers the Color Chooser in Tkinter, a dialog that lets the user pick a color and returns the selection to your program. The askcolor function that opens the dialog lives in the tkinter.colorchooser module, so it has to be imported first.
from tkinter.colorchooser import askcolor
The syntax for the askcolor function is as follows. All parameters of this function are optional.
result = askcolor(title = "Tkinter Color Chooser")
The look of this Color Chooser can vary from operating system to operating system, but the general purpose and functionality remain the same.
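Both optional parameters can also be combined; the starting color below (red, written as a hex string) is just an illustrative value and not required:

result = askcolor(color = "#ff0000", title = "Tkinter Color Chooser")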
Color Chooser Example
Below is a standard example of the askcolor function, without the supporting tkinter window.
Any color you pick is returned as a tuple. There are two values contained in this tuple: the first is an RGB representation and the second is a hexadecimal value. We'll be needing the hexadecimal value for tkinter.
from tkinter.colorchooser import askcolor

result = askcolor(title = "Tkinter Color Chooser")
print(result)
print(result[0])
print(result[1])
We picked a random color using the code above. See its output below.
((92.359375, 116.453125, 228.890625), '#5c74e4')
(92.359375, 116.453125, 228.890625)
#5c74e4
Extra Tkinter Example
Here’s an extra example of the use of the Color Chooser’s askcolor function. This is the kind of example you’ll see in real life, where the user selects a color, and the font color on the GUI changes accordingly.
import tkinter as tk
from tkinter.colorchooser import askcolor

def callback():
    result = askcolor(title = "Tkinter Color Chooser")
    label.configure(fg = result[1])
    print(result[1])

root = tk.Tk()

tk.Button(root, text='Choose Color', command=callback).pack(pady=20)
label = tk.Label(root, text = "Color", fg = "black")
label.pack()

root.geometry('180x160')
tk.mainloop()
We’ve run the function, and picked the color blue from the color palette. You can now see that the color of the text in the label has been changed.
This marks the end of the Tkinter Color Chooser Article. Suggestions or contributions for CodersLegacy are more than welcome. Any questions can be directed to the comments section below.
To see other Tkinter-related tidbits, head over to the Python problem solving page and scroll down to the Tkinter section.
Visualforce in Salesforce Classic is comfortable — we get that. However, Salesforce Lightning is here and it is the future and Lightning Web Components are the foundational building blocks behind it all. Because Salesforce wants developers to be as successful as possible, we recommend making this journey in a two-step approach:
Today we will focus on tools and resources that Salesforce offers to support that journey.
With lightningStylesheets, you can easily style your Visualforce pages and most common Visualforce components. Simply add
lightningStylesheets="true" to your Visualforce page, and it will adopt the Lightning look and feel in Lightning while maintaining the Classic look and feel in Classic. This is a quick and simple way to ensure your Visualforce page looks consistent regardless of how your users are accessing it.
<apex:page lightningStylesheets="true">
Did you know that Salesforce provides a simple way to see all Visualforce pages accessed in the last 90 days in your org? The Lightning Experience Configuration Converter for Visualforce Pages scans your org and provides a useful summary of each Visualforce page — including information such as average daily page views, user profiles that accessed the page, locations where the page is used, and more. It displays this in a simple tabular format that is pre-sorted in order of most viewed pages, and also emails the information in a spreadsheet to be accessed offline.
More importantly, it highlights known areas of incompatibility and provides easy ways to make the tweaks needed to get those pages working great in Lightning. For any given page, simply click on Page Issues to see a list of all errors and warnings associated with that page.
Then click on the page name to directly go to the setup editor for that page. From here, easily find the line item that was called out and make simple adjustments as needed.
Within this tool, you can also apply Lightning Stylesheets with one click. Simply select any page and choose “Apply Lightning Experience Stylesheets.” Then “View Page” to see your newly styled page. You can also use this tool to remove Lightning Stylesheets with one click.
Before:
After:
Finally, use Live Controller to ensure your Visualforce page always displays the latest data. Because a Visualforce page in Lightning Experience may be rendered alongside other Lightning components displaying the same record, it is important that the data shown is consistent across all components. Live Controller is a brand new and standard Visualforce component that leverages Lightning Data Service to automatically re-render the Visualforce page content when it detects a change made by those components to the underlying record data.
The simplest way to get started with Live Controller is to include the
apex:liveController component to your Visualforce page.
<apex:page>
    <apex:liveController/>
</apex:page>
When using a custom controller, specify the records you want to track using the record’s attribute and then iterate over them. Note that you will need to provide a live controller with a manual reset for any data stored via custom controllers, which you can accomplish with the
action attribute (this reset is handled automatically in standard controllers).
<apex:page controller="customListController">
    <apex:liveController action="{!refresh}"/>
    <apex:dataList value="{!accounts}" var="acct">
        {!acct.Name}
    </apex:dataList>
</apex:page>
public class customListController {
    public ApexPages.StandardSetController setCon {
        get {
            if (setCon == null) {
                setCon = new ApexPages.StandardSetController(Database.getQueryLocator(
                    [SELECT Name FROM Account]));
            }
            return setCon;
        }
        set;
    }

    public List<Account> getAccounts() {
        return (List<Account>) setCon.getRecords();
    }

    public void refresh() {
        setCon = null;
    }
}
Lastly, use the
reRender attribute when you only want to refresh specific portions on the page. For example, when you have form inputs, refreshing the entire form while a user is entering data would cause them to lose their work. By specifying which section to re-render, you can display updates without modifying their changes. In general, we recommend specifying partial re-rendering to avoid losing any UI state changes.
<apex: <apex:liveController <apex:pageBlock ... </apex:page>
Salesforce is excited about Lightning Experience and wants to help you move over to Lightning as quickly and seamlessly as possible. This blog post discussed some key features that prepare Visualforce pages for the transition — try them out today!
To learn more about preparing Visualforce for Lightning Experience, check out this Trailhead module. For additional information on the available transition resources, head over to this link.
Grace Li is a Product Manager for Visualforce and various other aspects of the Lightning developer platform. Follow Grace on Twitter and share with her how you are using Visualforce. | https://developer.salesforce.com/blogs/2019/10/preparing-visualforce-for-lightning.html | CC-MAIN-2021-17 | refinedweb | 741 | 52.39 |
Created attachment 675977 [details]
thunderbird-im-xmpp-no-sasl.log
User Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:16.0) Gecko/20100101 Firefox/16.0
Build ID: 20121010231231
Steps to reproduce:
This is a fork of Bug 789745. The following case seems to not be covered by the detection mechanism implemented as a fix to that bug.
1. Created an XMPP account with chat.messagingengine.com:5222 (a server without SASL support).
2. Attempted to connect.
Server details:
[1]
[2]
Actual results:
The connection failed with "No authentication mechanism offered by the server".
The debug log is attached.
Expected results:
Thunderbird IM should have detected that this server does not support SASL and fallen back to legacy authentication.
Created attachment 676416 [details] [diff] [review]
WIP
This should be all we need to change to fix this. I haven't tested this at all, so not requesting review yet. Feedback welcome of course :).
The relevant specs are:
"If the receiving entity is capable of SASL negotiation, it MUST advertise one or more authentication mechanisms within a <mechanisms/> element qualified by the 'urn:ietf:params:xml:ns:xmpp-sasl' namespace in reply to the opening stream tag received from the initiating entity (if the opening stream tag included the 'version' attribute set to a value of at least "1.0")."
and
."
Created attachment 677591 [details] [diff] [review]
Patch v2
This was tested by aleth. And I also added a comment to clarify some code that made us frown when looking at it.
Created attachment 677593 [details] [diff] [review]
Patch v2
Same patch, with the additional comment added for real this time.
Comment on attachment 677593 [details] [diff] [review]
Patch v2
Thanks for fixing this Florian. Looks good to me!
Comment on attachment 677593 [details] [diff] [review]
Patch v2
[Approval Request Comment]
Regression caused by (bug #): Not really a regression, but this patch fixes an edge case that wasn't handled by the patch in bug 789745 that added support of non-SASL authentication to Thunderbird 17.
User impact if declined: impossible to login to some XMPP server, for example the fastmail server.
Testing completed (on c-c, etc.): I had someone with a fastmail account test the patch locally and confirm he can login with this patch applied.
Risk to taking this patch (and alternatives if risky): low, the patch is relatively straight forward.
comm-aurora:
comm-beta:
*** Bug 789868 has been marked as a duplicate of this bug. *** | https://bugzilla.mozilla.org/show_bug.cgi?id=806228 | CC-MAIN-2016-36 | refinedweb | 409 | 57.16 |
Details
Description.
Activity
Thanks for the feedback. Here is an update patch.
Looks like a useful addition.
Perhaps instead of adding a new field to SpecificDatumReader we can add a new method to SpecificData, since the base GenericData already has a field containing the desired value:
public SpecificData getSpecificData() { return (SpecificData)getData(); }
Then this can be used in SpecificDatumReader, as getSpecificData().getClass(...).
Also, I'd prefer if the classLoader field were defined nearer the top of the class, since it's used by both a constructor and the getClass() implementation. I'd place it just before the constructors.
Here is a patch the implements the improvement. Comments or suggestions appreciated.
I committed this. Thanks, Michael.
I made a few minor changes, javadoc mostly... | https://issues.apache.org/jira/browse/AVRO-873 | CC-MAIN-2014-15 | refinedweb | 123 | 51.04 |
Posted 03 Apr 2012
Link to this post
private static T FindChild<T>(DependencyObject parent) where T : DependencyObject
{
for (int i = 0; i < VisualTreeHelper.GetChildrenCount(parent); i++)
{
DependencyObject child = VisualTreeHelper.GetChild(parent, i);
if (child != null && child is T)
return (T)child;
else
{
T childOfChild = FindChild<T>(child);
if (childOfChild != null)
return childOfChild;
}
}
return null;
}
Posted 04 Apr 2012
Link to this post
I've tried to simulate the issue but with no avail. I've prepared a sample project with your code included (copy paste from your post). Could you please give it a try and change it to simulate the issue?
On a side note, I just wanted to encourage you to take advantage of the support ticketing system () in cases when you need a prompt response to urgent issues. This is the best way to reach our support staff and attach a | http://www.telerik.com/forums/radtabitem-object-not-found-until-it-is-selected-or-gets-focus | CC-MAIN-2017-13 | refinedweb | 145 | 64.81 |
= "[email protected]"; String to = "[email protected]"; String subject = "Test"; String message = "A test message"; SendMail sendMail = new SendMail(from, to, subject, message); sendMail.send(); } }
also read:
how a mail will be sent without any credentials. i mean there is no need of entering the gmail password?
It is only the example program to send the mail. We need not have the server to setup for sending the mails. Is it your doubts?
HI Sai,
There is no need for the password, basically it is for only sending the mail. Why do you need gmail password, you are not going to login to your gmail. The receiver will receive as from address as your gmail address. But, it is not sent from the GMail server (Using your gmail account).
Is it clear now?
Thanks,
Krishna
it cannot work with yahoo or rediffmail .com username pls help me
when i tried it i got run time exception
Exception in thread main java.lang.classformaterror : absent code attribute in methos is that is not native or abstract in class file javax/mail/internet/AddressException
at
java.lang.classloader.defineclass1<native method>
need help
it doesn’t work 4 me do i need setup 4 server ? help me plz
This doesnt work man.
well \n this \n is \n difficult \n to \n implement.
i found it and i did a tuto to send mail with java
When I run this program I am getting Exception in thread “main” java.lang.NoClassDefFoundError: com/sun/mail/util/LineInputStream…help me plz
@Rashmi what’s ur mail adress to send u the tuto
any one got the mail to inbox?? i am not getting mail …how to get the mail please tell me
Hello
Hello
but I do not see any authentication taking place in the code. This code will surely throw error
where am i placing the password still it cant word here is the code
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;;
/**
*
* @author Admin
*/
public class SendMailSSL {
public static void main(String[] args) throws MessagingException
{
String to = “[email protected]”;//Reciver Address.
Properties props = new Properties();
props.put(“mail.smtp.host”, “smtp.gmail.com”);
props.put(“mail.smtp.port”, “587”);
props.put(“mail.smtp.auth”, “true”);
props.put(“mail.smtp.starttls.enable”, “true”);
Session mailSession = Session.getDefaultInstance(props, new javax.mail.Authenticator()
{
protected PasswordAuthentication getPasswordAuthentication(String from)
{
return new PasswordAuthentication(from,”0000000000000000″);
}
});
Message message = new MimeMessage(mailSession);
try {
message.setFrom(new InternetAddress(“[email protected]”));//Sender Id.
} catch (AddressException ex) {
Logger.getLogger(SendMailSSL.class.getName()).log(Level.SEVERE, null, ex);
}
message.addRecipient(Message.RecipientType.TO, new InternetAddress(to));
message.setSubject(“Hello!”);
message.setText(“Testing from Java Application…….”);
// send message.
Transport.send(message);
System.out.println(“message sent successfully”);
}
}
What error you are getting?
i am getting an exception while running this code Exception in thread “main” java.lang.NoClassDefFoundError: com/sun/mail/util/SharedByteArrayInputStream
at mail.email.send(email.java:40)
at mail.SendMailText.main(SendMailText.java:15)….
What is the error?
Thanks for the post !! Really helpful.. Even this website also address something similar.. Have a look.. May help..
Thanks for the post !! Really helpful.. Even this website also address something similar.. Have a look.. May help..
Thank you!!
tell me hw to create installation setup of java software …..
Are you talking about installing JDK?
Send mail with attachment : | http://javabeat.net/sending-mail-from-java/ | CC-MAIN-2017-04 | refinedweb | 561 | 53.47 |
Created Date : 2009.10.
Language : C++
Tool : Visual Studio C++ 2008
Library & Utilized : Point Grey-FlyCapture, Triclops, OpenCV 2.1
Reference : PointGrey Bumblebee Reference,,
Etc. : STL
BumBleBee Stereo Camera Data Acquisition Source code.
This is Stereo Camera. The name is BumBleBee. This is product of PointGrey Company.
This camera is IEEE 1394 capble type.
This camera can obtain 3D cloud data rapidly and continously.
I need 2 library for using this camera(Triclops SDK, FlyCapture).
You can download these libs on the site(support).
You have to use my source after install libs. and you have to set path(To include directory, lib directory). and you also need opencv 2.1 lib.
I made the acquisition code as class. The class name is CSensorStereo2.
You can use this class like below source code.
The sequence is 'Open->GetData->Close'.
I did that 2D data save Iplimage in opencv and 3D depth data save as Txt file.
The source code is very easy to use ^^.
If you have any question, Plz give your comments to me.
Thank you.
source code is shared on Github
#include <stdio.h> #include "SensorStereo2.h" #include <time.h> void main() { int Width = 320; int Height = 240; CSensorStereo2 CSS2; CSS2.Initial(Width,Height); CSS2.Open(); cvNamedWindow("Reference"); char str[1000]; while(1) { //get 1 frame data(Image, Depth information) CSS2.GetData(); //Show Image cvShowImage("Reference",CSS2.ImgReference); //Save Depth sprintf(str,"./DepthData/%d_Depth.txt",time(0)); printf("%s\n", str); FILE * fp; fp = fopen(str,"w"); for(int i=0; i<Width; ++i) { for(int j=0; j<Height; ++j) { fprintf(fp,"%lf %lf %lf\n", CSS2.pDepth[i][j].x, CSS2.pDepth[i][j].y, CSS2.pDepth[i][j].z ); } } fclose(fp); if(cvWaitKey(1) >= 0 ) break; } cvDestroyWindow("Reference"); CSS2.Close(); }
Hi Mare,
I'm looking for your CSensorStereo2 header and C++ files. Could you share them with me?
I'm very intersting in using BumbleBee with Triclops and OpenCV ...
Thank you!
Greg
Hi Gregouze.
I have uploaded the source code on the Google-Doc.
However, somehow the link is not working propertly.
So I am going to send the file to your email.
Regards.
hello,
I would be intrested in using the opencv libraries with bumblebee and I think this code could be a good starting point. Can I please ask you to share the files with me. Thank you
Dear Valentin
I have sent the example source code to your email.
Please check the your email.
Thank you for visit my blog.
Hello
I am interested in using the disparities of bumblebee stereo camera in the real time .Can you please hep me in doing this
Thanx
Ok. What is your problem?
If you want the source code, I can help you easily.
Please give me your email address to my email([email protected]) or left comment on this post again.
I will send the source code by email.
Thank you for visit my blog.
Hi Mare,
I'm looking for your CSensorStereo2 header and C++ files too. Could you share them with me ([email protected])?
I'm very intersting in using BumbleBee with Triclops and OpenCV ...
Thank you!
KwangEun Ko.
Dear Kwang-Eun Ko
I have sent the source file to your email.
I sorry to reply too lately.
Please check your email.
Thank you for visiting my blog.
Hi Mare, thanks for sharing your code first.
I would like to show frames acquired from both cameras but i didn't manage it. You wrote:
cvShowImage("Reference",CSS2.ImgReference);
and can actually see just one image.
Which camera is this image coming from?
I think.. The reference image in the bumblebee is right camera.
But I cann't be sure. I don't remember.
But you can check. Cover the lense by your hand one by one.
And
If you want to get other camera, move to the "GetData" function in the CSensorStereo2 Class.
And find "triclopsGetImage16" function.
and change TriCam_REFERENCE to LEFT_IMAGE.
Now I cann't check source code. because there is not bumblebee in here.
Anyway, you can get wanted camera data by changing option "TriCam_REFERENCE", "LEFT_IMAGE", "RIGHT_IMAGE"
Thank you.
Unfortunately setting the option to TriCam_LEFT doesn't work, it returns a windows with generic red, blue and green pixel bands. I think it is caused by the imageType TriImg16_DISPARITY in the function triclopsGetImage16. I've tried also using the triclopsGetImage:
if ( triclopsGetImage( triclops, TriImg_RAW, TriCam_LEFT, &refImage ) != TriclopsErrorOk )
return false;
cvShowImage("Reference1",refImage.data); <-------..but this gives runtime error
Sorry.
I don't have bumblebee camera now.
To get bumblebee, I should visit to my university
Or I can ask to my junior in university.
Please wait..
Thank you.
I've solved by contacting the PTG Support
Thank you
Hi Anonymous,
I have same problem like your problem. I would like to show COLOR LEFT RAW IMAGE and COLOR LEFT RECTIFIED IMAGE from BumbleBee2 camera. But i didn't get COLOR LEFT IMAGEs. Please can you say me how to solve this problem?
I am looking forward your answer.
Thank you.
Hello Mare,
I'm very intersting in using BumbleBee with Triclops and OpenCV. Please, could you please share your source code with me? Please, can you help me for my Ph.D. thesis?
Thank you!
This is link address.
and You also download source code on the page.
Hi,
Thank you for your interested.
I downloaded and run it. But i created new empty project at VC++ 2012. And i copied your codes into my new empty project but i didn't run it. I have error that is "0xc0150002". I didn't understand. But When i compiled my new project, there is no compiler error and warning and it create an EXE file.
I hope, i have explained my problem.
i am looking forward your answers.
Thank you.
This comment has been removed by the author.
Hi I am a beginner in with Bumble bee, I was following your code. The code was very much helpful.
However I have the following doubt.
How to take the left and right images? I understand that the raw data comes as a 16 bit image. So after you have created 'ReferanceImgae', I tried to right bit shift the colorImage.blue, colorImage.red and colorImage.green by 8 bits and create the second Image. [SensorStereo2.cpp lines 296-298]
I tried this:
Left:
ImgReference->imageData[ i*ImgReference->widthStep+j*3+0] = (unsigned char) colorImage.blue[k];
ImgReference->imageData[ i*ImgReference->widthStep+j*3+1] = (unsigned char) colorImage.green[k];
ImgReference->imageData[ i*ImgReference->widthStep+j*3+2] = (unsigned char) colorImage.red[k];
Right:
ImgReference2->imageData[ i*ImgReference2->widthStep+j*3+0] = (unsigned char) (colorImage.blue[k]>>8);
ImgReference2->imageData[ i*ImgReference2->widthStep+j*3+1] = (unsigned char) (colorImage.green[k]>>8);
ImgReference2->imageData[ i*ImgReference2->widthStep+j*3+2] = (unsigned char) (colorImage.red[k]>>8);
But I was not able to see that. What I get is a black window.
What could have went wrong? Could you please help me?
Hi.Mike~~
I just doing the vision serving in using BumbleBee2 .
Do you have the CSensorStereo2 header and C++ files mentioned above?
I am just looking for that file~~
Can you send it to me?
My email is [email protected]
Thanks a lot!
Hi mike, now I don't have bumblebee camera, so I cannot test your code.
But if you wait about 1 month, I will buy bumblebee by 3 lens type,
Then I will update the source code more useful.
Thank you for visiting my blog.
Hello Mare..
nice work...
i'm also interested in using bumblebee cam and opencv..,
i try to run your code, but I get this: How to solve this error LNK2019?
thanks.. you can email me : [email protected]....
1>SensorStereo2.obj : error LNK2019: unresolved external symbol _triclopsSetSubpixelInterpolation referenced in function "public: bool __thiscall CSensorStereo2::Open(void)" (?Open@CSensorStereo2@@QAE_NXZ)
1>SensorStereo2.obj : error LNK2019: unresolved external symbol _triclopsSetUniquenessValidation referenced in function "public: bool __thiscall CSensorStereo2::Open(void)" (?Open@CSensorStereo2@@QAE_NXZ)
1>SensorStereo2.obj : error LNK2019: unresolved external symbol _triclopsSetTextureValidation referenced in function "public: bool __thiscall CSensorStereo2::Open(void)" (?Open@CSensorStereo2@@QAE_NXZ)
1>SensorStereo2.obj : error LNK2019: unresolved external symbol _triclopsSetDisparity referenced in function "public: bool __thiscall CSensorStereo2::Open(void)" (?Open@CSensorStereo2@@QAE_NXZ)
Hi Mare,
I'm looking for your CSensorStereo2 header and C++ files. Could you share them with me?
I'm doing the vision serving in using BumbleBee2 with Triclops and OpenCV ...
Thank you!
Here is entire source code.
Thank you. | http://study.marearts.com/2011/10/bumblebee-2d-3d-data-acquisition-source.html?showComment=1392305101761 | CC-MAIN-2019-51 | refinedweb | 1,421 | 70.7 |
i want Use the 2 text files boynames.txt and girlnames.txt Prompt the user for boy or girl and then for a letter from the alphabet. Open either boynames.txt or girlnames.txt and read each line in – when the name starts with the letter that the user specified then write the name and the number to an output file. And I want it output file an appropriate name such as boyJ.txt or girlA.txt. boyJ.txt will look something like this:
Jacob 29195
Joshua 24950
Joseph 21265
-------------------
i have a problem with the code , i want to input of the boy name that start from " J " like example above but it does not work. and it show all the name . so how can i write the code that input the frist letter of the name and it show all the name that start with that letter ?
so i hope anyone can help me thank.
thiis my code
import java.util.Scanner; import java.io.FileInputStream; import java.io.FileNotFoundException; public class demo { public static void main(String[] args) { Scanner keyboard = new Scanner(System.in); Scanner inputStream= null; String line = null; System.out.println("Please enter name:"); line = keyboard.nextLine( ); try { inputStream = new Scanner(new FileInputStream("boynames.txt")); } catch(FileNotFoundException e) { System.out.println("Error opening the file "); System.exit(0); } while(inputStream.hasNextLine()){ line=inputStream.nextLine(); System.out.println("boy name "+ line ); } inputStream.close( ); } } | https://www.daniweb.com/programming/software-development/threads/126279/i-some-help-with-java-code-please-thank | CC-MAIN-2016-44 | refinedweb | 237 | 70.7 |
Parent Directory
|
Revision Log
Fixed to create the table if an attribute experiences a major change (i.e. is new or gets a new type).
#!/usr/bin/perl -w package CustomAttributes; require Exporter; use ERDB; @ISA = qw(ERDB); use strict; use Tracer; use ERDBLoad; =head1 Custom SEED Attribute Manager =head2 Introduction The Custom SEED Attributes Manager allows the user to upload and retrieve custom data for SEED objects. It uses the B<ERDB> database system to store the attributes, which are implemented as multi-valued fields of ERDB entities. The full suite of ERDB retrieval capabilities is provided. In addition, custom methods are provided specific to this application. To get all the values of the attribute C<essential> in a specified B<Feature>, you would code my @values = $attrDB->GetAttributes([Feature => $fid], 'essential'); where I<$fid> contains the ID of the desired feature. Each attribute has an alternate index to allow searching for attributes by value. New attributes are introduced by updating the database definition at run-time. Attribute values are stored by uploading data from files. A web interface is provided for both these activities. =head2 FIG_Config Parameters The following configuration parameters are used to manage custom attributes. =over 4 =item attrDbms Type of database manager used: C<mysql> for MySQL or C<pg> for PostGres. =item attrDbName Name of the attribute database. =item attrHost Name of the host server for the database. If omitted, the current host is used. =item attrUser User name for logging in to the database. =item attrPass Password for logging in to the database. =item attrPort TCP/IP port for accessing the database. =item attrSock Socket name used to access the database. If omitted, the default socket will be used. =item attrDBD Fully-qualified file name for the database definition XML file. This file functions as data to the attribute management process, so if the data is moved, this file must go with it. =back The DBD file is critical, and must have reasonable contents before we can begin using the system. In the old system, attributes were only provided for Genomes and Features, so the initial XML file was the following. <Database> <Title>SEED Custom Attribute Database</Title> <Entities> <Entity name="Feature" keyType="id-string"> <Notes>A [i]feature[/i] is a part of the genome that is of special interest. Features may be spread across multiple contigs of a genome, but never across more than one genome. Features can be assigned to roles via spreadsheet cells, and are the targets of annotation.</Notes> </Entity> <Entity name="Genome" keyType="name-string"> <Notes>A [i]genome[/i] describes a particular individual organism's DNA.</Notes> </Entity> </Entities> </Database> It is not necessary to put any tables into the database; however, you should run AttrDBRefresh periodically to insure it has the correct Genomes and Features in it. When converting from the old system, use AttrDBRefresh -migrate to initialize the database and migrate the legacy data. You should only need to do that once. =head2 Implementation Note The L</Refresh> method reloads the entities in the database. If new entity types are added, that method will need to be adjusted accordingly. =head2 Public Methods =head3 new C<< my $attrDB = CustomAttributes->new($splitter); >> Construct a new CustomAttributes object. This object cannot be used to add or delete keys because that requires modifying the database design. To do that, you need to use the static L</StoreAttributeKey> or L</DeleteAttributeKey> methods. 
=over 4 =item splitter Value to be used to split attribute values into sections in the L</Fig Replacement Methods>. The default is a double colon C<::>. If you do not use the replacement methods, you do not need to worry about this parameter. =back =cut sub new { # Get the parameters. my ($class, $splitter) = @_; # Connect to the database. my $dbh = DBKernel->new($FIG_Config::attrDbms, $FIG_Config::attrDbName, $FIG_Config::attrUser, $FIG_Config::attrPass, $FIG_Config::attrPort, $FIG_Config::attrHost, $FIG_Config::attrSock); # Create the ERDB object. my $xmlFileName = $FIG_Config::attrDBD; my $retVal = ERDB::new($class, $dbh, $xmlFileName); # Store the splitter value. $retVal->{splitter} = (defined($splitter) ? $splitter : '::'); # Return the result. return $retVal; } =head3 StoreAttributeKey C<< my $attrDB = CustomAttributes::StoreAttributeKey($entityName, $attributeName, $type, $notes); >> Create or update an attribute for the database. This method will update the database definition XML, but it will not create the table. It will connect to the database so that the caller can upload the attribute values. =over 4 =item entityName Name of the entity containing the attribute. The entity must exist. =item attributeName Name of the attribute. It must be a valid ERDB field name, consisting entirely of letters, digits, and hyphens, with a letter at the beginning. If it does not exist already, it will be created. =item type Data type of the attribute. This must be a valid ERDB data type name. =item notes Descriptive notes about the attribute. It is presumed to be raw text, not HTML. =item RETURN Returns a Custom Attribute Database object if successful. If unsuccessful, an error will be thrown. =back =cut sub StoreAttributeKey { # Get the parameters. my ($entityName, $attributeName, $type, $notes) = @_; # Declare the return variable. my $retVal; # Get the data type hash. my %types = ERDB::GetDataTypes(); # Validate the initial input values. if (! ERDB::ValidateFieldName($attributeName)) { Confess("Invalid attribute name \"$attributeName\" specified."); } elsif (! $notes || length($notes) < 25) { Confess("Missing or incomplete description for $attributeName."); } elsif (! exists $types{$type}) { Confess("Invalid data type \"$type\" for $attributeName."); } # Our next step is to read in the XML for the database defintion. We # need to verify that the named entity exists. my $metadata = ERDB::ReadMetaXML($FIG_Config::attrDBD); my $entityHash = $metadata->{Entities}; if (! exists $entityHash->{$entityName}) { Confess("Entity $entityName not found."); } else { # Okay, we're ready to begin. Get the entity hash and the field hash. my $entityData = $entityHash->{$entityName}; my $fieldHash = ERDB::GetEntityFieldHash($metadata, $entityName); # Compare the old attribute data to the new data. my $bigChange = 1; if (exists $fieldHash->{$attributeName} && $fieldHash->{$attributeName}->{type} eq $type) { $bigChange = 0; } # Compute the attribute's relation name. my $relName = join("", $entityName, map { ucfirst $_ } split(/-|_/, $attributeName)); # Store the attribute's field data. Note the use of the "content" hash for # the notes. This is how the XML writer knows Notes is a text tag instead of # an attribute. $fieldHash->{$attributeName} = { type => $type, relation => $relName, Notes => { content => $notes } }; # Insure we have an index for this attribute. my $index = ERDB::FindIndexForEntity($metadata, $entityName, $attributeName); if (! 
defined($index)) { push @{$entityData->{Indexes}}, { IndexFields => [ { name => $attributeName, order => 'ascending' } ], Notes => "Alternate index provided for access by $attributeName." }; } # Write the XML back out. ERDB::WriteMetaXML($metadata, $FIG_Config::attrDBD); # Open a database with the new XML. $retVal = CustomAttributes->new(); # Create the table if there has been a significant change. if ($bigChange) { $retVal->CreateTable($relName); } } return $retVal; } =head3 Refresh C<< $attrDB->Refresh($fig); >> Refresh the primary entity tables from the FIG data store. This method basically drops and reloads the main tables of the custom attributes database. =over 4 =item fig FIG-like object that can be used to find genomes and features. =back =cut sub Refresh { # Get the parameters. my ($self, $fig) = @_; # Create load objects for the genomes and the features. my $loadGenome = ERDBLoad->new($self, 'Genome', $FIG_Config::temp); my $loadFeature = ERDBLoad->new($self, 'Feature', $FIG_Config::temp); # Get the genome list. my @genomes = $fig->genomes(); # Loop through the genomes. for my $genomeID (@genomes) { # Put this genome in the genome table. $loadGenome->Put($genomeID); Trace("Processing Genome $genomeID") if T(3); # Put its features into the feature table. Note we have to use a hash to # remove duplicates. my %featureList = map { $_ => 1 } $fig->all_features($genomeID); for my $fid (keys %featureList) { $loadFeature->Put($fid); } } # Get a variable for holding statistics objects. my $stats; # Finish the genome load. Trace("Loading Genome relation.") if T(2); $stats = $loadGenome->FinishAndLoad(); Trace("Genome table load statistics:\n" . $stats->Show()) if T(3); # Finish the feature load. Trace("Loading Feature relation.") if T(2); $stats = $loadFeature->FinishAndLoad(); Trace("Feature table load statistics:\n" . $stats->Show()) if T(3); } =head3 LoadAttributeKey C<< my $stats = $attrDB->LoadAttributeKey($entityName, $fieldName, $fh, $keyCol, $dataCol); >> Load the specified attribute from the specified file. The file should be a tab-delimited file with internal tab and new-line characters escaped. This is the typical TBL-style file used by most FIG applications. One of the columns in the input file must contain the appropriate key value and the other the corresponding attribute value. =over 4 =item entityName Name of the entity containing the attribute. =item fieldName Name of the actual attribute. =item fh Open file handle for the input file. =item keyCol Index (0-based) of the column containing the key field. The key field should contain the ID of an instance of the named entity. =item dataCol Index (0-based) of the column containing the data value field. =item RETURN Returns a statistics object for the load process. =back =cut sub LoadAttributeKey { # Get the parameters. my ($self, $entityName, $fieldName, $fh, $keyCol, $dataCol) = @_; # Create the return variable. my $retVal; # Insure the entity exists. my $found = grep { $_ eq $entityName } $self->GetEntityTypes(); if (! $found) { Confess("Entity \"$entityName\" not found in database."); } else { # Get the field structure for the named entity. my $fieldHash = $self->GetFieldTable($entityName); # Verify that the attribute exists. if (! exists $fieldHash->{$fieldName}) { Confess("Attribute key \"$fieldName\" does not exist in entity $entityName."); } else { # Create a loader for the specified attribute. We need the # relation name first. 
my $relName = $fieldHash->{$fieldName}->{relation}; my $loadAttribute = ERDBLoad->new($self, $relName, $FIG_Config::temp); # Loop through the input file. while (! eof $fh) { # Get the next line of the file. my @fields = Tracer::GetLine($fh); $loadAttribute->Add("lineIn"); # Now we need to validate the line. if ($#fields < $dataCol) { $loadAttribute->Add("shortLine"); } elsif (! $self->Exists($entityName, $fields[$keyCol])) { $loadAttribute->Add("badKey"); } else { # It's valid,so send it to the loader. $loadAttribute->Put($fields[$keyCol], $fields[$dataCol]); $loadAttribute->Add("lineUsed"); } } # Finish the load. $retVal = $loadAttribute->FinishAndLoad(); } } # Return the statistics. return $retVal; } =head3 DeleteAttributeKey C<< CustomAttributes::DeleteAttributeKey($entityName, $attributeName); >> Delete an attribute from the custom attributes database. =over 4 =item entityName Name of the entity possessing the attribute. =item attributeName Name of the attribute to delete. =back =cut sub DeleteAttributeKey { # Get the parameters. my ($entityName, $attributeName) = @_; # Read in the XML for the database defintion. We need to verify that # the named entity exists and it has the named attribute. my $metadata = ERDB::ReadMetaXML($FIG_Config::attrDBD); my $entityHash = $metadata->{Entities}; if (! exists $entityHash->{$entityName}) { Confess("Entity \"$entityName\" not found."); } else { # Get the field hash. my $fieldHash = ERDB::GetEntityFieldHash($metadata, $entityName); if (! exists $fieldHash->{$attributeName}) { Confess("Attribute key \"$attributeName\" not found in entity $entityName."); } else { # Get the attribute's relation name. my $relName = $fieldHash->{$attributeName}->{relation}; # Check for an index. my $indexIdx = ERDB::FindIndexForEntity($metadata, $entityName, $attributeName); if (defined($indexIdx)) { Trace("Index for $attributeName found at position $indexIdx for $entityName.") if T(3); delete $entityHash->{$entityName}->{Indexes}->[$indexIdx]; } # Delete the attribute from the field hash. Trace("Deleting attribute $attributeName from $entityName.") if T(3); delete $fieldHash->{$attributeName}; # Write the XML back out. ERDB::WriteMetaXML($metadata, $FIG_Config::attrDBD); # Insure the relation does not exist in the database. This requires connecting # since we may have to do a table drop. my $attrDB = CustomAttributes->new(); Trace("Dropping table $relName.") if T(3); $attrDB->DropRelation($relName); } } } =head3 ControlForm C<< my $formHtml = $attrDB->ControlForm($cgi, $name); >> Return a form that can be used to control the creation and modification of attributes. =over 4 =item cgi CGI query object used to create HTML. =item name Name to give to the form. This should be unique for the web page. =item RETURN Returns the HTML for a form that submits instructions to the C<Attributes.cgi> script for loading, creating, or deleting an attribute. =back =cut sub ControlForm { # Get the parameters. my ($self, $cgi, $name) = @_; # Declare the return list. my @retVal = (); # Start the form. We use multipart to support the upload control. push @retVal, $cgi->start_multipart_form(-name => $name); # We'll put the controls in a table. Nothing else ever seems to look nice. push @retVal, $cgi->start_table({ border => 2, cellpadding => 2 }); # The first row is for selecting the field name. 
push @retVal, $cgi->Tr($cgi->th("Select a Field"), $cgi->td($self->FieldMenu($cgi, 10, 'fieldName', 1, "document.$name.notes.value", "document.$name.dataType.value"))); # Now we set up a dropdown for the data types. The values will be the # data type names, and the labels will be the descriptions. my %types = ERDB::GetDataTypes(); my %labelMap = map { $_ => $types{$_}->{notes} } keys %types; my $typeMenu = $cgi->popup_menu(-name => 'dataType', -values => [sort keys %types], -labels => \%labelMap); push @retVal, $cgi->Tr($cgi->th("Data type"), $cgi->td($typeMenu)); # The next row is for the notes. push @retVal, $cgi->Tr($cgi->th("Description"), $cgi->td($cgi->textarea(-name => 'notes', -rows => 6, -columns => 80)) ); # Allow the user to specify a new field name. This is required if the # user has selected one of the "(new)" markers. push @retVal, $cgi->Tr($cgi->th("New Field Name"), $cgi->td($cgi->textfield(-name => 'newName', -size => 30)), ); # If the user wants to upload new values for the field, then we have # an upload file name and column indicators. push @retVal, $cgi->Tr($cgi->th("Upload Values"), $cgi->td($cgi->filefield(-name => 'newValueFile', -size => 20) . " Key " . $cgi->textfield(-name => 'keyCol', -size => 3, -default => 0) . " Value " . $cgi->textfield(-name => 'valueCol', -size => 3, -default => 1) ), ); # Now the three buttons: UPDATE, SHOW, and DELETE. push @retVal, $cgi->Tr($cgi->th(" "), $cgi->td({align => 'center'}, $cgi->submit(-name => 'Delete', -value => 'DELETE') . " " . $cgi->submit(-name => 'Store', -value => 'STORE') . " " . $cgi->submit(-name => 'Show', -value => 'SHOW') ) ); # Close the table and the form. push @retVal, $cgi->end_table(); push @retVal, $cgi->end_form(); # Return the assembled HTML. return join("\n", @retVal, ""); } =head3 FieldMenu C<< my $menuHtml = $attrDB->FieldMenu($cgi, $height, $name, $newFlag, $noteControl, $typeControl); >> Return the HTML for a menu to select an attribute field. The menu will be a standard SELECT/OPTION thing which is called "popup menu" in the CGI package, but actually looks like a list. The list will contain one selectable row per field, grouped by entity. =over 4 =item cgi CGI query object used to generate HTML. =item height Number of lines to display in the list. =item name Name to give to the menu. This is the name under which the value will appear when the form is submitted. =item newFlag (optional) If TRUE, then extra rows will be provided to allow the user to select a new attribute. In other words, the user can select an existing attribute, or can choose a C<(new)> marker to indicate a field to be created in the parent entity. =item noteControl (optional) If specified, the name of a variable for displaying the notes attached to the field. This must be in Javascript form ready for assignment. So, for example, if you have a variable called C<notes> that represents a paragraph element, you should code C<notes.innerHTML>. If it actually represents a form field you should code C<notes.value>. If an C<innerHTML> coding is used, the text will be HTML-escaped before it is copied in. Specifying this parameter generates Javascript for displaying the field description when a field is selected. =item typeControl (optional) If specified, the name of a variable for displaying the field's data type. Data types are a much more controlled vocabulary than notes, so there is no worry about HTML translation. Instead, the raw value is put into the specified variable. 
Otherwise, the same rules apply to this value that apply to I<$noteControl>. =item RETURN Returns the HTML to create a form field that can be used to select an attribute from the custom attributes system. =back =cut sub FieldMenu { # Get the parameters. my ($self, $cgi, $height, $name, $newFlag, $noteControl, $typeControl) = @_; # These next two hashes make everything happen. "entities" # maps each entity name to the list of values to be put into its # option group. "labels" maps each entity name to a map from values # to labels. my @entityNames = sort ($self->GetEntityTypes()); my %entities = map { $_ => [] } @entityNames; my %labels = map { $_ => { }} @entityNames; # Loop through the entities, adding the existing attributes. for my $entity (@entityNames) { # Get this entity's field table. my $fieldHash = $self->GetFieldTable($entity); # Get its field list in our local hashes. my $fieldList = $entities{$entity}; my $labelList = $labels{$entity}; # Add the NEW fields if we want them. if ($newFlag) { push @{$fieldList}, $entity; $labelList->{$entity} = "(new)"; } # Loop through the fields in the hash. We only keep the ones with a # secondary relation name. (In other words, the name of the relation # in which the field appears cannot be the same as the entity name.) for my $fieldName (sort keys %{$fieldHash}) { if ($fieldHash->{$fieldName}->{relation} ne $entity) { my $\n"; $retVal .= " function $changeName(fieldValue) {\n"; # The function only has a body if we have a notes control to store the description. if ($noteControl || $typeControl) { # Check to see if we're storing HTML or text into the note control. my $htmlMode = ($noteControl && $noteControl =~ /innerHTML$/); # We use a CASE statement based on the newly-selected field value. The # field description will be stored in the JavaScript variable "myText" # and the data type in "myType". Note the default data type is a normal # string, but the default notes is an empty string. $retVal .= " var myText = \"\";\n"; $retVal .= " var myType = \"string\";\n"; $retVal .= " switch (fieldValue) {\n"; # Loop through the entities. for my $entity (@entityNames) { # Get the entity's field hash. This has the notes in it. my $fieldHash = $self->GetFieldTable($entity); # Loop through the values we might see for this entity's fields. my $fields = $entities{$entity}; for my $value (@{$fields}) { # Only proceed if we have an existing field. if ($value =~ m!/(.+)$!) { # Get the field's hash element. my $element = $fieldHash->{$1}; # Generate this case. $retVal .= " case \"$value\" :\n"; # Here we either want to update the note display, the # type display, or both. if ($noteControl) { # Here we want the notes updated. my $notes = $element->{Notes}->{content}; # Insure it's in the proper form. if ($htmlMode) { $notes = ERDB::HTMLNote($notes); } # Escape it for use as a string literal. $notes =~ s/\n/\\n/g; $notes =~ s/"/\\"/g; $retVal .= " myText = \"$notes\";\n"; } if ($typeControl) { # Here we want the type updated. my $type = $element->{type}; $retVal .= " myType = \"$type\";\n"; } # Close this case. $retVal .= " break;\n"; } } } # Close the CASE statement and make the appropriate assignments. $retVal .= " }\n"; if ($noteControl) { $retVal .= " $noteControl = myText;\n"; } if ($typeControl) { $retVal .= " $typeControl = myType;\n"; } } # Terminate the change function. $retVal .= " }\n"; $retVal .= "</script>\n"; # Return the result. 
return $retVal; } =head3 MatchSqlPattern C<< my $matched = CustomAttributes: MigrateAttributes C<< CustomAttributes::MigrateAttributes($fig); >> Migrate all the attributes data from the specified FIG instance. This is a long, slow method used to convert the old attribute data to the new system. Only attribute keys that are not already in the database will be loaded, and only for entity instances current in the database. To get an accurate capture of the attributes in the given instance, you may want to clear the database and the DBD before starting and run L</Refresh> to populate the entities. =over 4 =item fig A FIG object that can be used to retrieve attributes for migration purposes. =back =cut sub MigrateAttributes { # Get the parameters. my ($fig) = @_; # Get a list of the objects to migrate. This requires connecting. Note we # will map each entity type to a file name. The file will contain a list # of the object's IDs so we can get to them when we're not connected to # the database. my $ca = CustomAttributes->new(); my %objects = map { $_ => "$FIG_Config::temp/$_.keys.tbl" } $ca->GetEntityTypes(); # Set up hash of the existing attribute keys for each entity type. my %oldKeys = (); # Finally, we have a hash that counts the IDs for each entity type. my %idCounts = map { $_ => 0 } keys %objects; # Loop through the list, creating key files to read back in. for my $entityType (keys %objects) { Trace("Retrieving keys for $entityType.") if T(2); # Create the key file. my $idFile = Open(undef, ">$objects{$entityType}"); # Loop through the keys. my @ids = $ca->GetFlat([$entityType], "", [], "$entityType(id)"); for my $id (@ids) { print $idFile "$id\n"; } close $idFile; # In addition to the key file, we must get a list of attributes already # in the database. This avoids a circularity problem that might occur if the $fig # object is retrieving from the custom attributes database already. my %fields = $ca->GetSecondaryFields($entityType); $oldKeys{$entityType} = \%fields; # Finally, we have the ID count. $idCounts{$entityType} = scalar @ids; } # Release the custom attributes database so we can add attributes. undef $ca; # Loop through the objects. for my $entityType (keys %objects) { # Get a hash of all the attributes already in this database. These are # left untouched. my $myOldKeys = $oldKeys{$entityType}; # Create a hash to control the load file names for each attribute key we find. my %keyHash = (); # Set up some counters so we can trace our progress. my ($totalIDs, $processedIDs, $keyCount, $valueCount) = ($idCounts{$entityType}, 0, 0, 0); # Open this object's ID file. Trace("Migrating data for $entityType. $totalIDs found.") if T(3); my $keysIn = Open(undef, "<$objects{$entityType}"); while (my $id = <$keysIn>) { # Remove the EOL characters. chomp $id; # Get this object's attributes. my @allData = $fig->get_attributes($id); Trace(scalar(@allData) . " attribute values found for $entityType($id).") if T(4); # Loop through the attribute values one at a time. for my $dataTuple (@allData) { # Get the key, value, and URL. We ignore the first element because that's the # object ID, and we already know the object ID. my (undef, $key, $value, $url) = @{$dataTuple}; # Remove the buggy "1" for $url. if ($url eq "1") { $url = undef; } # Only proceed if this is not an old key. if (! $myOldKeys->{$key}) { # See if we've run into this key before. if (! exists $keyHash{$key}) { # Here we need to create the attribute key in the database. 
StoreAttributeKey($entityType, $key, 'text', "Key migrated automatically from the FIG system. " . "Please replace these notes as soon as possible " . "with useful text." ); # Compute the attribute's load file name and open it for output. my $$fileName"); # Store the file name and handle. $keyHash{$key} = {h => $fh, name => $fileName}; # Count this key. $keyCount++; } # Smash the value and the URL together. if (defined($url) && length($url) > 0) { $value .= "::$url"; } # Write the attribute value to the load file. Tracer::PutLine($keyHash{$key}->{h}, [$id, $value]); $valueCount++; } } # Now we've finished all the attributes for this object. Count and trace it. $processedIDs++; if ($processedIDs % 500 == 0) { Trace("$processedIDs of $totalIDs ${entityType}s processed.") if T(3); Trace("$entityType has $keyCount keys and $valueCount values so far.") if T(3); } } # Now we've finished all the attributes for all objects of this type. Trace("$processedIDs ${entityType}s processed, with $keyCount keys and $valueCount values.") if T(2); # Loop through the files, loading the keys into the database. Trace("Connecting to database.") if T(2); my $objectCA = CustomAttributes->new(); Trace("Loading key files.") if T(2); for my $key (sort keys %keyHash) { # Close the key's load file. close $keyHash{$key}->{h}; # Reopen it for input. my $fileName = $keyHash{$key}->{name}; my $fh = Open(undef, "<$fileName"); Trace("Loading $key from $fileName.") if T(3); my $stats = $objectCA->LoadAttributeKey($entityType, $key, $fh, 0, 1); Trace("Statistics for $key of $entityType:\n" . $stats->Show()) if T(3); } # All the keys for this entity type are now loaded. Trace("Key files loaded for $entityType.") if T(2); } # All keys for all entity types are now loaded. Trace("Migration complete.") if T(2); } =head3 ComputeObjectTypeFromID C<< my ($entityName, $id) = CustomAttributes::ComputeObjectTypeFromID($objectID); >> This method will compute the entity type corresponding to a specified object ID. If the object ID begins with C<fig|>, it is presumed to be a feature ID. If it is all digits with a single period, it is presumed to by a genome ID. Otherwise, it must be a list reference. In this last case the first list element will be taken as the entity type and the second will be taken as the actual ID. =over 4 =item objectID Object ID to examine. =item RETURN Returns a 2-element list consisting of the entity type followed by the specified ID. =back =cut sub ComputeObjectTypeFromID { # Get the parameters. my ($objectID) = @_; # Declare the return variables. my ($entityName, $id); # Only proceed if the object ID is defined. If it's not, we'll be returning a # pair of undefs. if ($objectID) { if (ref $objectID eq 'ARRAY') { # Here we have the new-style list reference. Pull out its pieces. ($entityName, $id) = @{$objectID}; } else { # Here the ID is the outgoing ID, and we need to look at its structure # to determine the entity type. $id = $objectID; if ($objectID =~ /^\d+\.\d+/) { # Digits with a single period is a genome. $entityName = 'Genome'; } elsif ($objectID =~ /^fig\|/) { # The "fig|" prefix indicates a feature. $entityName = 'Feature'; } else { # Anything else is illegal! Confess("Invalid attribute ID specification \"$objectID\"."); } } } # Return the result. return ($entityName, $id); } =head2 FIG Method Replacements The following methods are used by B<FIG.pm> to replace the previous attribute functionality. Some of the old functionality is no longer present. 
Controlled vocabulary is no longer supported and there is no longer any searching by URL. Fortunately, neither of these capabilities were used in the old system. The methods here are the only ones supported by the B<RemoteCustomAttributes> object. The idea is that these methods represent attribute manipulation allowed by all users, while the others are only for privileged users with access to the attribute server. In the previous implementation, an attribute had a value and a URL. In the new implementation, there is only a value. In this implementation, each attribute has only a value. These methods will treat the value as a list with the individual elements separated by the value of the splitter parameter on the constructor (L</new>). The default is double colons C<::>. So, for example, an old-style keyword with a /value of C<essential> and a URL of C<> using the default splitter value would be stored as essential:: The best performance is achieved by searching for a particular key for a specified feature or genome. =head3 GetAttributes C<< my @attributeList = $attrDB->GetAttributes($objectID, $key, @valuePatterns); >> In the database, attribute values are sectioned into pieces using a splitter value specified in the constructor (L</new>). This is not a requirement of the attribute system as a whole, merely a convenience for the purpose of these methods. If you are using the static method calls instead of the object-based calls, the splitter will always be the default value of double colons (C<::>). If a value has multiple sections, each section is matched against the correspond criterion in the I<@valuePatterns> list. This method returns a series of tuples that match the specified criteria. Each tuple will contain an object ID, a key, and one or more values. The parameters to this method therefore correspond structurally to the values expected in each tuple. my @attributeList = GetAttributes('fig|100226.1.peg.1004', 'structure%', 1, 2); would return something like ['fig}100226.1.peg.1004', 'structure', 1, 2] ['fig}100226.1.peg.1004', 'structure1', 1, 2] ['fig}100226.1.peg.1004', 'structure2', 1, 2] ['fig}100226.1.peg.1004', 'structureA', 1, 2] Use of C<undef> in any position acts as a wild card (all values). In addition, the I<$key> and I<@valuePatterns> parameters can contain SQL pattern characters: C<%>, which matches any sequence of characters, and C<_>, which matches any single character. (You can use an escape sequence C<\%> or C<\_> to match an actual percent sign or underscore.) In addition to values in multiple sections, a single attribute key can have multiple values, so even my @attributeList = GetAttributes($peg, 'virulent'); which has no wildcard in the key or the object ID, may return multiple tuples. For reasons of backward compatability, we examine the structure of the object ID to determine the entity type. In that case the only two types allowed are C<Genome> and C<Feature>. An alternative method is to use a list reference, with the list consisting of an entity type name and the actual ID. Thus, the above example could equivalently be written as my @attributeList = GetAttributes([Feature => $peg], 'virulent'); The list-reference approach allows us to add attributes to other entity types in the future. Doing so, however, will require modifying the L</Refresh> method and updated the database design XML. The list-reference approach also allows for a more fault-tolerant approach to getting all objects with a particular attribute. 
my @attributeList = GetAttributes([Feature => undef], 'virulent'); will only return feature attributes, while my @attributeList = GetAttributes(undef, 'virulent'); could at some point in the future get you attributes for genomes or even subsystems as well as features. =over 4 =item objectID ID of the genome or feature whose attributes are desired. In general, an ID that starts with C<fig|> is treated as a feature ID, and an ID that is all digits with a single period is treated as a genome ID. For other entity types, use a list reference; in this case the first list element is the entity type and the second is the ID. A value of C<undef> or an empty string here will match all objects. =item key Attribute key name. Since attributes are stored as fields in the database with a field name equal to the key name, it is very fast to find a list of all the matching keys. Each key's values require a separate query, however, which may be a performance problem if the pattern matches a lot of keys. Wild cards are acceptable here, and a value of C<undef> or an empty string will match all attribute keys. =item valuePatterns List of the desired attribute values, section by section. If C<undef> or an empty string is specified, all values in that section will match. =item RETURN Returns a list of tuples. The first element in the tuple is an object ID, the second is an attribute key, and the remaining elements are the sections of the attribute value. All of the tuples will match the criteria set forth in the parameter list. =back =cut sub GetAttributes { # Get the parameters. my ($self, $objectID, $key, @valuePatterns) = @_; # Declare the return variable. my @retVal = (); # Determine the entity types for our search. my @objects = (); my ($actualObjectID, $computedType); if (! $objectID) { push @objects, $self->GetEntityTypes(); } else { ($computedType, $actualObjectID) = ComputeObjectTypeFromID($objectID); push @objects, $computedType; } # Loop through the entity types. for my $entityType (@objects) { # Now we need to find all the matching keys. The keys are actually stored in # our database object, so this process is fast. Note that our # MatchSqlPattern method my %secondaries = $self->GetSecondaryFields($entityType); my @fieldList = grep { MatchSqlPattern($_, $key) } keys %secondaries; # Now we figure out whether or not we need to filter by object. We will always # filter by key to a limited extent, so if we're filtering by object we need an # AND to join the object ID filter with the key filter. my $filter = ""; my @params = (); if (defined($actualObjectID)) { # Here the caller wants to filter on object ID. $filter = "$entityType(id) = ? AND "; push @params, $actualObjectID; } # It's time to begin making queries. We process one attribute key at a time, because # each attribute is actually a different field in the database. We know here that # all the keys we've collected are for the correct entity because we got them from # the DBD. That's a good thing, because an invalid key name will cause an SQL error. for my $key (@fieldList) { # Get all of the attribute values for this key. my @dataRows = $self->GetAll([$entityType], "$filter$entityType($key) IS NOT NULL", \@params, ["$entityType(id)", "$entityType($key)"]); # Process each value separately. We need to verify the values and reformat the # tuples. Note that GetAll will give us one row per matching object ID, # with the ID first followed by a list of the data values. 
This is very # different from the structure we'll be returning, which has one row # per value. for my $dataRow (@dataRows) { # Get the object ID and the list of values. my ($rowObjectID, @dataValues) = @{$dataRow}; # Loop through the values. There will be one result row per attribute value. for my $dataValue (@dataValues) { # Separate this value into sections. my @sections = split("::", $dataValue); # Loop through the value patterns, looking for a mismatch. Note that # since we're working through parallel arrays, we are using an index # loop. As soon as a match fails we stop checking. This means that # if the value pattern list is longer than the number of sections, # we will fail as soon as we run out of sections. my $match = 1; for (my $i = 0; $i <= $#valuePatterns && $match; $i++) { $match = MatchSqlPattern($sections[$i], $valuePatterns[$i]); } # If we match, we save this value in the output list. if ($match) { push @retVal, [$rowObjectID, $key, @sections]; } } # Here we've processed all the attribute values for the current object ID. } # Here we've processed all the rows returned by GetAll. In general, there will # be one row per object ID. } # Here we've processed all the matching attribute keys. } # Here we've processed all the entity types. That means @retVal has all the matching # results. return @retVal; } =head3 AddAttribute C<< $attrDB->AddAttribute($objectID, $key, @values); >> Add an attribute key/value pair to an object. This method cannot add a new key, merely add a value to an existing key. Use L</StoreAttributeKey> to create a new key. . The values are joined together with the splitter value before being stored as field values. This enables L</GetAttributes> to split them apart during retrieval. The splitter value defaults to double colons C<::>. =back =cut sub AddAttribute { # Get the parameters. my ($self, $objectID, $key, @values) = @_; # Don't allow undefs. if (! defined($objectID)) { Confess("No object ID specified for AddAttribute call."); } elsif (! defined($key)) { Confess("No attribute key specified for AddAttribute call."); } elsif (! @values) { Confess("No values specified in AddAttribute call for key $key."); } else { # Okay, now we have some reason to believe we can do this. Start by # computing the object type and ID. my ($entityName, $id) = ComputeObjectTypeFromID($objectID); # Form the values into a scalar. my $valueString = join($self->{splitter}, @values); # Insert the value. $self->InsertValue($id, "$entityName($key)", $valueString); } # Return a one. We do this for backward compatability. return 1; } =head3 DeleteAttribute C<< $attrDB->DeleteAttribute($objectID, $key, @values); >> Delete the specified attribute key/value combination from the database. The first form will connect to the database and release it. The second form uses the database connection contained in the object. . =back =cut sub DeleteAttribute { # Get the parameters. my ($self, $objectID, $key, @values) = @_; # Don't allow undefs. if (! defined($objectID)) { Confess("No object ID specified for DeleteAttribute call."); } elsif (! defined($key)) { Confess("No attribute key specified for DeleteAttribute call."); } elsif (! @values) { Confess("No values specified in DeleteAttribute call for key $key."); } else { # Now compute the object type and ID. my ($entityName, $id) = ComputeObjectTypeFromID($objectID); # Form the values into a scalar. my $valueString = join($self->{splitter}, @values); # Delete the value. $self->DeleteValue($entityName, $id, $key, $valueString); } # Return a one. 
    # This is for backward compatibility.
    return 1;
}

=head3 ChangeAttribute

C<< $attrDB->ChangeAttribute($objectID, $key, \@oldValues, \@newValues); >>

Change the value of an attribute key/value pair for an object.

=over 4

=item objectID

ID of the genome or feature whose attribute is to be changed.

=item oldValues

One or more values identifying the key/value pair to change.

=item newValues

One or more values to be put in place of the old values.

=back

=cut

sub ChangeAttribute {
    # Get the parameters.
    my ($self, $objectID, $key, $oldValues, $newValues) = @_;
    # Don't allow undefs.
    if (! defined($objectID)) {
        Confess("No object ID specified for ChangeAttribute call.");
    } elsif (! defined($key)) {
        Confess("No attribute key specified for ChangeAttribute call.");
    } elsif (! defined($oldValues) || ref $oldValues ne 'ARRAY') {
        Confess("No old values specified in ChangeAttribute call for key $key.");
    } elsif (! defined($newValues) || ref $newValues ne 'ARRAY') {
        Confess("No new values specified in ChangeAttribute call for key $key.");
    } else {
        # Okay, now we do the change as a delete/add.
        $self->DeleteAttribute($objectID, $key, @{$oldValues});
        $self->AddAttribute($objectID, $key, @{$newValues});
    }
    # Return a one. We do this for backward compatibility.
    return 1;
}

=head3 EraseAttribute

C<< $attrDB->EraseAttribute($entityName, $key); >>

Erase all values for the specified attribute key. This does not remove the key
from the database; it merely removes all the values.

=over 4

=item entityName

Name of the entity to which the key belongs. If undefined, all entities will be
examined for the desired key.

=item key

Key to erase.

=back

=cut

sub EraseAttribute {
    # Get the parameters.
    my ($self, $entityName, $key) = @_;
    # Determine the relevant entity types.
    my @objects = ();
    if (! $entityName) {
        push @objects, $self->GetEntityTypes();
    } else {
        push @objects, $entityName;
    }
    # Loop through the entity types.
    for my $entityType (@objects) {
        # Now check for this key in this entity.
        my %secondaries = $self->GetSecondaryFields($entityType);
        if (exists $secondaries{$key}) {
            # We found it, so delete all the values of the key. Note that we use the
            # loop variable here so the erase works even when no entity name was given.
            $self->DeleteValue($entityType, undef, $key);
        }
    }
    # Return a 1, for backward compatibility.
    return 1;
}

1;
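As a quick sketch of how these methods are typically called from client code: the feature ID and key below are made-up examples, and $attrDB stands for an already-constructed attribute-database object, as in the POD above.

    # Attach a value to an existing key, query it back, then remove it again.
    $attrDB->AddAttribute('fig|83333.1.peg.4', 'virulent', 'high');
    my @hits = $attrDB->GetAttributes('fig|83333.1.peg.4', 'virulent');
    $attrDB->DeleteAttribute('fig|83333.1.peg.4', 'virulent', 'high');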
Startup task activation for UWP apps is available in Insider builds from Build 16226 onwards, along with the corresponding SDK. In this post, we'll look at the code changes you need to make in your manifest and in your App class to handle the startup scenario, and how your app can work with the user to respect their choices.
Here’s a sample app, called TestStartup – the app offers a button to request enabling the startup behavior, and reports current status. Typically, you’d put this kind of option into a settings page of some kind in your app.
The first thing to note is that you must use the windows.startupTask Extension in your app manifest under the Extensions node, which is a child of the Application node. This is documented here. The same Extension declaration is used for both Desktop Bridge and regular UWP apps – but there are some differences.
- Desktop Bridge is only available on Desktop, so it uses a Desktop-specific XML namespace. The new UWP implementation is designed for use generally on UWP, so it uses a general UAP namespace (contract version 5) – although to be clear, it is currently still only actually available on Desktop.
- The Desktop Bridge EntryPoint must be “Windows.FullTrustApplication,” whereas for regular UWP it is the fully-qualified namespace name of your App class.
- Desktop Bridge apps can set the Enabled attribute to true, which means that the app will start at startup without the user having to manually enable it. Conversely, for regular UWP apps this attribute is ignored, and the feature is implicitly set to “disabled.” Instead, the user must first launch the app, and the app must request to be enabled for startup activation.
- For Desktop Bridge apps, multiple startupTask Extensions are permitted, each one can use a different Executable. Conversely, for regular UWP apps, you would have only one Executable and one startupTask Extension.
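Putting those rules together, a minimal declaration for a regular UWP app might look like the sketch below. The Executable and EntryPoint values are placeholders for your own project, the uap5 namespace prefix must be declared on the Package element (xmlns:uap5="http://schemas.microsoft.com/appx/manifest/uap/windows10/5"), and the TaskId matches the one used in the code further down:

<Extensions>
  <uap5:Extension Category="windows.startupTask"
                  Executable="TestStartup.exe"
                  EntryPoint="TestStartup.App">
    <uap5:StartupTask
      TaskId="MyStartupId"
      Enabled="false"
      DisplayName="TestStartup" />
  </uap5:Extension>
</Extensions>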
For both Desktop Bridge apps and regular UWP apps, the user is always in control, and can change the Enabled state of your startup app at any time via the Startup tab in Task Manager.
Also for both app types, the app must be launched at least once before the user can change the Disabled/Enabled state. This is potentially slightly confusing: if the user doesn’t launch the app and then tries to change the state to Enabled in Task Manager, this will seem to be set. However, if they then close Task Manager and re-open it, they will see that the state is still Disabled. What’s happening here is that Task Manager is correctly persisting the user’s choice of the Enabled state – but this won’t actually allow the app to be activated at startup unless and until the app is launched at least once first – hence the reason it is reported as Disabled.
In your UWP code, you can request to be enabled for startup. To do this, use the StartupTask.GetAsync method to initialize a StartupTask object (documented here) – passing in the TaskId you specified in the manifest – and then call the RequestEnableAsync method. In the test app, we’re doing this in the Click handler for the button. The return value from the request is the new (possibly unchanged) StartupTaskState.
async private void requestButton_Click(object sender, RoutedEventArgs e)
{
    StartupTask startupTask = await StartupTask.GetAsync("MyStartupId");
    switch (startupTask.State)
    {
        case StartupTaskState.Disabled:
            // Task is disabled but can be enabled.
            StartupTaskState newState = await startupTask.RequestEnableAsync();
            Debug.WriteLine("Request to enable startup, result = {0}", newState);
            break;
        case StartupTaskState.DisabledByUser:
            // Task is disabled and user must enable it manually.
            MessageDialog dialog = new MessageDialog(
                "I know you don't want this app to run " +
                "as soon as you sign in, but if you change your mind, " +
                "you can enable this in the Startup tab in Task Manager.",
                "TestStartup");
            await dialog.ShowAsync();
            break;
        case StartupTaskState.DisabledByPolicy:
            Debug.WriteLine(
                "Startup disabled by group policy, or not supported on this device");
            break;
        case StartupTaskState.Enabled:
            Debug.WriteLine("Startup is enabled.");
            break;
    }
}
Because Desktop Bridge apps have a Win32 component, they run with a lot more power than regular UWP apps generally. They can set their StartupTask(s) to be Enabled in the manifest and do not need to call the API. For regular UWP apps, the behavior is more constrained, specifically:
- The default is Disabled, so in the normal case, the user must run the app at least once explicitly – this gives the app the opportunity to request to be enabled.
- When the app calls RequestEnableAsync, this will show a user-prompt dialog for UWP apps (or if called from a UWP component in a Desktop Bridge app from the Windows 10 Fall Creators Update onwards).
- StartupTask includes a Disable method. If the state is Enabled, the app can use the API to set it to Disabled. If the app then subsequently requests to enable again, this will also trigger the user prompt.
- If the user disables (either via the user prompt, or via the Task Manager Startup tab), then the prompt is not shown again, regardless of any requests from the app. The app can of course devise its own user prompts, asking the user to make manual changes in Task Manager – but if the user has explicitly disabled your startup, you should probably respect their decision and stop pestering them. In the sample code above, the app is responding to DisabledByUser by popping its own message dialog – you can obviously do this if you want, but it should be emphasized that there’s a risk you’ll just annoy the user.
- If the feature is disabled by local admin or group policy, then the user prompt is not shown, and startup cannot be enabled. The existing StartupTaskState enum has been extended with a new value, DisabledByPolicy. When the app sees DisabledByPolicy, it should avoid re-requesting that their task be enabled, because the request will never be approved until the policy changes.
- Platforms other than Desktop that don’t support startup tasks also report DisabledByPolicy.
Where a request triggers a user-consent prompt (UWP apps only), the message includes the DisplayName you specified in your manifest. This prompt is not shown if the state is DisabledByUser or DisabledByPolicy.
If your app is enabled for startup activation, you should handle this case in your App class by overriding the OnActivated method. Check the IActivatedEventArgs.Kind to see if it is ActivationKind.StartupTask, and if so, cast the IActivatedEventArgs to a StartupTaskActivatedEventArgs. From this, you can retrieve the TaskId, should you need it. In this test app, we're simply passing on the ActivationKind as a string to MainPage.
protected override void OnActivated(IActivatedEventArgs args)
{
    Frame rootFrame = Window.Current.Content as Frame;
    if (rootFrame == null)
    {
        rootFrame = new Frame();
        Window.Current.Content = rootFrame;
    }

    string payload = string.Empty;
    if (args.Kind == ActivationKind.StartupTask)
    {
        var startupArgs = args as StartupTaskActivatedEventArgs;
        payload = ActivationKind.StartupTask.ToString();
    }

    rootFrame.Navigate(typeof(MainPage), payload);
    Window.Current.Activate();
}
Then, the MainPage OnNavigatedTo override tests this incoming string and uses it to report status in the UI.
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    string payload = e.Parameter as string;
    if (!string.IsNullOrEmpty(payload))
    {
        activationText.Text = payload;

        if (payload == "StartupTask")
        {
            requestButton.IsEnabled = false;
            requestResult.Text = "Enabled";
            SolidColorBrush brush = new SolidColorBrush(Colors.Gray);
            requestResult.Foreground = brush;
            requestPrompt.Foreground = brush;
        }
    }
}
Note that when your app starts at startup, it will start minimized in the taskbar. In this test app, when brought to normal window mode, the app reports the ActivationKind and StartupTaskState.
Using the windows.startupTask manifest Extension and the StartupTask.RequestEnableAsync API, your app can be configured to start at user log-in. This can be useful for apps which the user expects to use heavily, and the user has control over this – but it is still a feature that you should use carefully. You should not use the feature if you don’t reasonably expect the user to want it for your app – and you should avoid repeatedly prompting them once they’ve made their choice. The inclusion of a user-prompt puts the user firmly in control, which is an improvement over the older Win32 model. | https://blogs.windows.com/windowsdeveloper/2017/08/01/configure-app-start-log/ | CC-MAIN-2020-05 | refinedweb | 1,347 | 55.34 |
Blog Browser Format
Phil
Ringnalda. Even though I can hear Sam muttering digital
magpie in my ear... Phil, you say this like it is a bad
thing.
I believe that you and I have common tendencies when it
comes to exploration, but when it comes to choices, I find that I
have a tendency to pick the dull and boring ones.
As to the topic of blog browsers, I do have a number of thoughts. One set of thoughts is that the data being captured is not merely hierarchical, it is actually hierarchical faceted metadata. But mostly my thoughts are to the dull and boring topics of the file format itself.
For starters, my site is generated dynamically. This means that you can see any blog entry, day, month, or year in any of several formats. Here's August in rss2. June 11th in txt. Entries containing "Ringnalda" in esf. I could also slice by categories if I were to use that particular feature. You get the idea. So, for starters, I'd like some name other than simply ".xml" for the files.xml format... then I could enable it everywhere.
Now as to the file format itself, it appears tailored to blogging applications that statically render their content. What are the created and modified dates for each of the dynamically renderable slices I identified above? Should I calculate number of bytes in each in anticipation that it might need to be generated?
It is also not clear how one extends this format. If you look at my archives page, you can see that I have readily available a count of the number of entries. Might this be useful?
Unfortunately, I can see how this discussion will play out. Somebody will say that "files.xml is not a brilliant format. It is a compromise. It is for blog browsers. That's all it is for, for the 18,000th time." Then three months later they will say that it is the perfect format for some other application that none of us have thought of yet. And nobody will be clear as to what applications are out there using this format, let alone know what the impact will be of any change.
We've played this game before. Why not learn from the past?
All I am saying is: give this format a name. And a namespace. And specify from the beginning how (or even if) it can be extended. | http://www.intertwingly.net/blog/986.html | CC-MAIN-2014-15 | refinedweb | 408 | 77.13 |
NAME
acct - switch process accounting on or off
SYNOPSIS
#include <unistd.h>
int acct(const char *filename);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
acct():
Since glibc 2.21:
_DEFAULT_SOURCE
In glibc 2.19 and 2.20:
_DEFAULT_SOURCE || (_XOPEN_SOURCE && _XOPEN_SOURCE < 500)
Up to and including glibc 2.19:
_BSD_SOURCE || (_XOPEN_SOURCE && _XOPEN_SOURCE < 500)
DESCRIPTION
The acct() system call enables or disables process accounting. If called with the name of an existing file as its argument, accounting is turned on, and records for each terminating process are appended to filename as it terminates. An argument of NULL causes accounting to be turned off.
CONFORMING TO
SVr4, 4.3BSD (but not POSIX).
NOTES
No accounting is produced for programs running when a system crash occurs. In particular, nonterminating processes are never accounted for.
The structure of the records written to the accounting file is described in acct(5).
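EXAMPLES
A minimal program that switches accounting on (or off, when run without an argument) might look like the sketch below. It assumes the named accounting file already exists and that the caller has the required privilege (CAP_SYS_PACCT).

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
    /* Enable accounting if a filename was given; disable it otherwise. */
    const char *file = (argc > 1) ? argv[1] : NULL;

    if (acct(file) == -1) {
        perror("acct");
        exit(EXIT_FAILURE);
    }

    printf("Process accounting %s\n", file ? "enabled" : "disabled");
    exit(EXIT_SUCCESS);
}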
SEE ALSO
acct(5)
COLOPHON
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://manpages.debian.org/bullseye/manpages-dev/acct.2.en.html | CC-MAIN-2021-49 | refinedweb | 128 | 62.24 |
Proto REPL Charts is an Atom plugin that extends Proto REPL and allows you to display tables and graphs of results from executed Clojure Code.
Execute this in Proto REPL:
(proto-repl-charts.charts/line-chart
  "Trigonometry"
  {"sin" (map #(Math/sin %) (range 0.0 6.0 0.2))
   "cos" (map #(Math/cos %) (range 0.0 6.0 0.2))})
... and a line chart of the two series is displayed.
apm install proto-repl-charts, or go to your Atom settings, select "+ Install" and search for "proto-repl-charts".
Proto REPL Charts are invoked from Clojure code run in Proto REPL. A very small Clojure library, proto-repl-charts, defines a namespace
prc with functions for displaying different charts.
(Not necessary for self hosted REPL. The dependency is already available.)
Add proto-repl-charts to your dependencies in your project.clj file.
(Proto REPL comes with a default Clojure project. If you open a new Atom window and start a REPL, it will already have the proto-repl-charts dependency loaded and available.)
Use the functions in the proto-repl-charts.<chart-type-ns> namespace. See the examples below.
The chart functions are all of the form
(proto-repl-charts.<chart-type-ns>/<function-name> <tab-name> <series-map> <[options]>)
- chart-type-ns - one of canvas, charts, graph, or table
- function-name - the name of the function to invoke
- tab-name - the name of the chart to put in the Atom tab. Repeated execution will replace the chart in the tab with the matching name.
- series-map - should be a map of series names to a sequence of values for that series. For example {"alpha" [1 2 3], "beta" [2 4 5]} would display two series named alpha and beta with the values 1, 2, and 3 for alpha and 2, 4, and 5 for beta.
- options - an optional map of display options. The only option supported right now is :labels, which is a list of labels. The index of the label in the list corresponds to the index of the values in the series.
(let [input-values (range 0.0 6.0 0.5)]
  (proto-repl-charts.charts/line-chart
    "Trigonometry"
    {"sin" (map #(Math/sin %) input-values)
     "cos" (map #(Math/cos %) input-values)}
    {:labels input-values}))
(proto-repl-charts.charts/bar-chart
  "GDP_By_Year"
  {"2013" [16768 9469 4919 3731]
   "2014" [17418 10380 4616 3859]}
  {:labels ["US" "China" "Japan" "Germany"]})
(let [tlr (java.util.concurrent.ThreadLocalRandom/current)]
  (proto-repl-charts.charts/scatter-chart
    "Randoms"
    {:gaussian (repeatedly 200 #(.nextGaussian tlr))
     :uniform  (repeatedly 200 #(.nextDouble tlr))}))
Displays a custom chart in a tab with the given name. C3 is the charting library used. The chart config will be converted from Clojure to a JavaScript object and passed to C3. It can be any configuration data C3 supports. See C3 examples for more.
(proto-repl-charts.charts/custom-chart
  "Custom"
  {:data {:columns [["data1" 30 20 50 40 60 50]
                    ["data2" 200 130 90 240 130 220]
                    ["data3" 300 200 160 400 250 250]
                    ["data4" 200 130 90 240 130 220]
                    ["data5" 130 120 150 140 160 150]
                    ["data6" 90 70 20 50 60 120]]
          :type "bar"
          :types {:data3 "spline"
                  :data4 "line"
                  :data6 "area"}
          :groups [["data1" "data2"]]}})
Proto REPL Charts can display a table of data that can be sorted by individual columns. The row data can either be a sequence of sequences or a sequence of maps.
(proto-repl-charts.table/table
  "Users"
  [{:name "Jane"   :age 24 :favorite-color :blue}
   {:name "Matt"   :age 28 :favorite-color :red}
   {:name "Austin" :age 56 :favorite-color :green}
   {:name "Lisa"   :age 32 :favorite-color :green}
   {:name "Peter"  :age 32 :favorite-color :green}])
Graphs of networks of nodes and edges can be displayed using the
proto-repl-charts.graph/graph function.
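For example, a small sketch along these lines displays a simple network; treat the exact node and edge layout of the map as an assumption, since the shapes the function accepts may vary between versions:

(proto-repl-charts.graph/graph
  "A Graph"
  {:nodes [:a :b :c :d]
   :edges [[:a :b] [:a :c] [:b :d]]})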
Proto REPL Charts supports building more complex visualizations by drawing on an HTML canvas embedded within Atom using the
proto-repl-charts.canvas/draw function.
SWbemServicesEx.Put method
The Put method of the SWbemServicesEx object saves the object to the namespace bound to the object and returns an SWbemObjectPath object that contains the path of the object to which the data was written.
This method is called in the semisynchronous mode. For more information, see Calling a Method.
For an explanation of this syntax, see Document Conventions for the Scripting API.
Syntax
Parameters
- objWbemObject
Required. The new object to be put in the namespace. This can be either a newly created object or a modified object.
- iFlags [optional]
This parameter determines whether the call creates or updates the object. By default, the update succeeds only when the change does not cause any conflicts with child classes; you can rely on this default when adding a new property to a base class that was not previously mentioned in any of the child classes. If the class has instances, the update fails.
wbemChangeFlagUpdateForceMode (64 (0x40))
This flag forces updates of classes when conflicting child classes exist. For example, this flag forces an update if a class qualifier was defined in a child class, and the base class tries to add the same qualifier in conflict with the existing one. In the force mode, this conflict is resolved by deleting the conflicting qualifier in the child class. If the class has instances, the update fails.
The use of the force mode to update a static class results in the deletion of all instances of that class.
wbemChangeFlagCreateOnly (2 (0x2))
Used for creation only. The call fails if the class or instance already exists.
wbemChangeFlagUpdateOnly (1 (0x1))
Causes this call to only do an update. The class or instance must exist for the call to be successful.
wbemFlagReturnImmediately (16 (0x10))
Causes the call to return immediately.
wbemFlagReturnWhenComplete (0 (0x0))
Causes this call to block until the operation has completed. This flag calls the method in the synchronous mode.
wbemFlagUseAmendedQualifiers (131072 (0x20000))
Causes WMI to write class amendment data as well as the base class definition.
If the Put method fails, you can receive one of the following error codes.
- wbemErrAccessDenied - 2147749891 (0x80041003)
Current user does not have the permission for the operation.
- wbemErrIllegalNull - 2147749928 (0x80041028)
NULL was specified for a property that cannot be NULL. An example of such a property is one marked by a Key, Indexed, or Not_Null qualifier.
- wbemErrInvalidObject - 2147749908 (0x80041014)
The specified object is not valid.
- wbemErrInvalidParameter - 2147749896 (0x80041008)
One or more parameters passed to the call are not valid.
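As a rough VBScript sketch of the call shape, the method can be used to write a new class definition into a namespace. The namespace, class name, and property below are placeholders, and error handling is omitted:

' Connect to a namespace and build an empty class definition.
Set objServices = GetObject("winmgmts:\\.\root\default")
Set objClass = objServices.Get()                  ' empty class object
objClass.Path_.Class = "DemoClass"
objClass.Properties_.Add "Name", 8                ' 8 = wbemCimtypeString
objClass.Properties_("Name").Qualifiers_.Add "key", True

' Save the class into the namespace; Put returns an SWbemObjectPath.
Set objPath = objServices.Put(objClass)
WScript.Echo objPath.Path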
Host Identity Protocol for Linux
HIP authenticates and secures communication between two hosts. HIP authenticates hosts and establishes a symmetric key between them to secure the data communication. The data flow between the end hosts is encrypted by IPsec Encapsulating Security Payload (ESP) with the symmetric key set up by HIP. HIP introduces mechanisms, such as cryptographic puzzles, that protect HIP responders (servers) against DoS attacks. Applications simply need to use HITs instead of IP addresses. Application source code does not need to be modified.
HIP provides transparent mobility support for existing network applications. TCP connections are bound to HITs instead of IP addresses. HITs do not change for a given host. HITs are further mapped to IP addresses. When an IP address changes, new mappings between the HIT and the new IP address are formed. When a host moves to a new network and obtains a new IP address, the host informs its peers about its new IP address, and TCP connections are sustained.
WLAN access points and broadband modems employ NATs due to the lack of IPv4 addresses. However, you have to configure your NAT settings manually if you want to use P2P software or connect to your computer behind a NAT. It may even be impossible if your ISP employs a second NAT.
With HIP, hosts can address each other with HITs across private address realms of NATs. HIP makes use of two alternative NAT traversal technologies, ICE and Teredo, to traverse the NATs. Setting up a server behind a NAT using HIP does not require manual configuration of the NAT. The HIPL on-line manual infrahip.hiit.fi/hipl/manual/ch21.html describes the details.
The InfraHIP site offers free services for the HIP community. For example, you can register your HIT to the DNS or Distributed Hash Table (DHT). The site also offers free HIP forwarding services to assist in NAT traversal and locating mobile nodes.
The Host Identity Protocol architecture (Figure 1) defines a new namespace, the Host Identity namespace, which decouples the name and locator roles of IP addresses. With HIP, the transport layer operates on host identities instead of IP addresses as endpoint names. The host identity layer is between the transport layer and the network layer. The responsibility of the new layer is to translate identities to routable locators before a host transmits the packet. The reverse applies to incoming packets.
Figure 1. The Host Identity layer is located between the transport and network layers.
The actual Host Identity Protocol (HIP) is composed of a two round-trip, end-to-end Diffie-Hellman key-exchange protocol, called base exchange, mobility updates and some additional messages. The networking stack triggers the base exchange automatically when an application tries to connect to an HIT.
Figure 2. HIP Base Exchange
During a base exchange, a client (initiator) and a server (responder) authenticate each other with their public keys and create symmetric encryption keys for IPsec to encrypt the application's traffic. In addition, the initiator must solve a computational puzzle. The responder selects the difficulty of the puzzle according to its load. When the responder is busy or under DoS attack, the responder can increase the puzzle difficulty level to delay new connections.
We can describe this process as follows:
I --> DNS: lookup R
I <-- DNS: return R's address and HI/HIT
The initiator application connects to an HIT:
I1  I --> R  (Hi, Here is my I1, let's talk with HIP)
R1  R --> I  (OK, Here is my R1, solve this HIP puzzle)
I2  I --> R  (Computing, here is my counter I2)
R2  R --> I  (OK. Let's finish base exchange with my R2)
    I --> R  (ESP protected data)
    R --> I  (ESP protected data)
HIP provides a mechanism similar to base exchange to handle IP address changes. When a host detects a new IP address, it informs all its peers of the address change. The hosts adjust their IPsec security associations accordingly, and the applications running on the hosts continue sending data to each other as if nothing happened.
When two hosts are connected to each other using HIP and one of them moves, the mobile host tells its current location to the other. If both hosts move at the same time, they can lose contact with each other. In this case, an HIP rendezvous server assists the hosts. The rendezvous server has a fixed IP address and, therefore, it offers a stable contact point for mobile hosts. The rendezvous server relays only the first packet, and after the contact, the hosts can communicate with each other directly. HIP includes another similar service, called HIP Relay, that forwards all HIP packets to support NAT traversal.
Most of the time in my tests I mock out all external resources, e.g. file
systems, network I/O, databases, etc. I recently discovered the
#fixture_file_upload method that’s available in Rails tests. File upload was
one area I always mocked out because I didn’t even know how to do a file upload
in a functional test. With this helper method you can do regular state-based
testing with file uploads.
Say you got a User who's got an image alongside all the normal User attributes.
Schema:
users (id, email, password)
We’ll store the image on the file system:
class User < ActiveRecord::Base
  def image=(image)
    file_name = File.join IMAGES_PATH, image.original_filename
    File.open(file_name, 'wb') do |file|
      file.write image.read
    end
  end
end
Our
UsersController looks the same as any other simple CRUD controller.
class UsersController < ApplicationController
  def new
    @user = User.new
  end

  def create
    @user = User.new params[:user]
    if @user.save
      redirect_to user_path(@user)
    else
      render :action => :new
    end
  end
end
Now let’s take a look at our functional test.
def test_should_create_a_new_user_record_on_POST_to_create
  post :create, :user => {
    :email    => '[email protected]',
    :password => 'burritos',
    :image    => fixture_file_upload('images/boloco.gif', 'image/gif')
  }

  assert File.exists?(File.join(IMAGES_PATH, 'boloco.gif'))
  assert_response :redirect
  assert_redirected_to user_path(assigns(:user))
  assert_recognizes({ :controller => 'users', :action => 'create' },
                    :path => 'users', :method => :post)
end
The helper method
#fixture_file_upload will look for the image file relative
to ‘test/fixtures’. I decided to create an ‘images’ subdirectory there just to
keep it straightforward (as long as I don’t have an
Image model). The method
takes a filename and its MIME type. After the
POST I assert the file exists.
The
IMAGES_PATH constant is environment specific and defined in
‘config/environments/test.rb’ as:
IMAGES_PATH = File.join RAILS_ROOT, 'tmp', 'images'
You might want to throw the following line in your functional test’s
#setup
method to clean up your
IMAGES_PATH directory:
FileUtils.rm_rf Dir["#{IMAGES_PATH}/*"]
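For instance, the whole setup method could look something like this (FileUtils ships with Ruby's standard library and is normally already loaded by Rails):

def setup
  # wipe any images written by previous test runs
  FileUtils.rm_rf Dir["#{IMAGES_PATH}/*"]
end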
I don’t know if I’d ever write tests like this but mocking has been burning me lately a bunch so it’s good to know I can do state-based file upload testing. | https://robots.thoughtbot.com/muck-focking | CC-MAIN-2017-13 | refinedweb | 356 | 58.89 |
Spreading tasks among many workers can often accelerate their completion. For example, a single person, no matter how strong or fast, could never have built even the smallest of the Egyptian pyramids. However, by distributing the task among thousands of workers, the Egyptians were able to build these great wonders within a single lifetime.
Likewise, some computational problems, beyond the reach of all but the most powerful and expensive supercomputers, can be solved by a collection of modestly powerful and inexpensive "slave" computers. These slave computers are directed by a "master" coordinating computer. The idea of the master-slave relationship is central to a particular style of distributed computing.
Distributed computing has become very popular in recent years for a few other reasons as well. Many complex problems have large data sets associated with them. These data sets are not easy to move from computer to computer. Nevertheless, it might be advantageous to have many computers working on the problem. A powerful server could control that data and distribute the parts of data its clients need. Also, a distributed system is much more fault tolerant than a single computer that brings the operation to a halt if it fails [6].
There are often semantic debates about the new terms client and server. Generally, a server provides a service that is used by a client. One reading would be that the programs doing the work serve their coordinating program. In the example to follow, we consider coordinating the efforts to be the service, performed by a server analogous to the master pyramid architect. The clients are the slaves that do the work. Try to understand this subtle and potentially confusing point as more of the example is explained.
Java supports distributed computing in a variety of ways; one of the simplest is known as Remote Method Invocation (RMI) [1]. Using the RMI capability of Java, clients may execute remote method calls on a server using a transparent interface. RMI provides a way for Java programs to interact with classes and objects working on different virtual machines on other networked computers. From the perspective of the client, objects on the server may be referenced as if they were local.
In the above diagram, the necessary components of a program using the RMI system are represented as a magical connection between a client and its server. A client is supplied with the interface of methods available from the remote server, but all implementation is left to the server side. In the truest sense, the details of implementing the remote methods are abstracted from the client software.
What really connects the client and server of an RMI system is a layered connection, transparent to developers of most simple applications, operated by the RMI subsystem of the two virtual machines involved.
The stubs and skeletons provide additional abstraction for the Java developer, once an RMI system is established. The code may be written as if all methods and objects are local to the client. The remote reference layer handles reference variables to remote objects, using the Transport layer's TCP/IP connection to the server.
What happens when a client tries to execute a method
provided by the server? As mentioned above, the client has a stub of the
method provided by the server, which executes the actual
implementing class of that service. Any class whose objects are to be
passed over RMI connections must be marked as such by declaring that they
implement the
java.io.Serializable interface. All arguments
to a remote method must be either Java primitives or implement that
interface.
A client requests a reference to an object from the server using
the stub on the client side. The server gets the request from
a skeleton on the server side. Between the two is the remote
reference layer, which negotiates the requests by converting
objects into portable form across the network. This
conversion is called marshalling. If the data to
be transferred is neither primitive nor serializable, the
stub will throw a
MarshalException
[6]. The idea of serialization is mentioned
again in the example to follow.
KeyBlockChecker
Complete and operable code for this demonstration is available from the author's website, along with instructions [5].
To distribute the work of a large problem, the server coordinates the work of the clients. Clients periodically retrieve their assignments from the server using remote method calls. For this example, I'll examine a distributed computing solution to a problem of cryptanalysis. A message has been encrypted using a long integer key. For the sake of this example I assume it can only be discovered by brute force, checking all possible keys.
To implement the most efficient checking of all possible keys, a server
will assign a range of unchecked keys to a client, which will check its
assigned keys in parallel with all other clients. I want clients to
request an assignment from the server in the form of a block of keys. I
need a
KeyBlock class, which must be marked as
serializable.
public class KeyBlock
implements java.io.Serializable {
public int start, end;
public KeyBlock (int s, int e)
{
start = s;
end = e;
}
}
The API for Remote Method Invocation is part of the standard
Java Software Development Kit. With RMI, as with all other parts of
the Java API (except
java.lang) a class
must be imported using an
import declaration before
it can be used in a program. If you need a class and don't know what
package of the Java API it's in, try looking in the online documentation
[2].
The server will assign blocks of keys to clients by returning an object
from the
KeyBlockManager class upon their request.
A client side interface
is used to inform clients what services are
available and what to expect:
public interface KeyBlockManager
extends java.rmi.Remote {
public KeyBlock getNext ()
throws java.rmi.RemoteException;
}
On the server side, we implement a class that is executed
when the client makes a remote call. This class creates an instance
of the
KeyBlock class, and
returns it.
public class KeyBlockManagerImpl
extends java.rmi.server.UnicastRemoteObject
implements KeyBlockManager {
// Each client gets the next consecutive block of keys; the block size is arbitrary.
private static final int BLOCK_SIZE = 100000;
private int nextStartIndex = 0;
public KeyBlockManagerImpl ()
throws java.rmi.RemoteException {
super ();
}
// synchronized so concurrent clients receive distinct, consecutive blocks
public synchronized KeyBlock getNext ()
throws java.rmi.RemoteException {
KeyBlock kb = new KeyBlock (nextStartIndex, nextStartIndex + BLOCK_SIZE);
nextStartIndex += BLOCK_SIZE;
return kb;
}
}
The developers of the Java remote method system provided a concept called the RMI registry, which runs on a generic port, and informs clients which port has the server to respond to their specific requests [3]. This provides an additional level of abstraction for developers of servers and clients. A registry provides a reference to a client looking for its server. The client code need not include the port where the server is running if it can look up that port dynamically from a registry. What port the registry uses is usually not such a difficult question to answer. Port 1099 is considered a default port for an RMI registry.
Using the RMI registry, a client can obtain a reference to an object
that resides on a different computer (or maybe just a different virtual
machine on the same computer) and call its methods as if that object were
in the local virtual machine. Pay careful attention to the lines of code
where the server code calls
(
Naming.rebind)
to get a port assignment from the RMI registry. Notice also later
that the client looks up the server
(
Naming.lookup)
in the registry and then makes
a request for an object reference. This process of using the
rmiregistry
is very similar to what the
portmap
daemon does on a Unix system to facilitate Remote
Procedure Calls.
The server must coordinate requests for key block assignments from the
clients. Assume that key blocks are a standard size and are assigned
sequentially.
public class KeyBlockManagerServer {
private static final String RMI_URL =
"rmi://RMIRegistryServer:1099/KeyBlockManagerService";
public KeyBlockManagerServer () {
try {
KeyBlockManager kbm = new KeyBlockManagerImpl ();
Naming.rebind (RMI_URL, kbm);
}
catch (Exception e) {
e.printStackTrace (System.out);
}
}
public static void main (String [] args)
{
new KeyBlockManagerServer ();
}
}
The client finds the server from the registry, and then gets a reference
to a remote object that implements the
KeyBlockManager interface from the
RMI system. The client then enters a
loop by executing a remote call to the remote object. The call to the remote
object is handled identically to a call to a local object.
The method call is transferred to the server, which executes
the implementation and returns the assignment as a serialized object of
the
KeyBlock class.
public class KeyBlockClient {
private static final String RMI_URL =
"rmi://RMIRegistryServer/KeyBlockManagerService";
public static void main (String [] args) {
try {
KeyBlockManager kbm =
(KeyBlockManager) Naming.lookup (RMI_URL);
while (true) {
KeyBlock kb = kbm.getNext ();
checkBlock (kb);
}
}
catch (Exception e) {
e.printStackTrace (System.err);
}
}
}
To actually make this RMI system operational, the source code files
must be compiled into Java bytecode class files and distributed to the
appropriate places. As a Java programmer you are already familiar with the
javac tool to produce compiled bytecode class files.
Part of the standard Java Software Development Kit (SDK) is
another tool,
rmic, which produces the stub and skeleton
files necessary for the Remote Reference Layer mentioned earlier from the
source code of the implementation [4].
The command works like this:
% javac KeyBlockManagerImpl.java
% rmic KeyBlockManagerImpl
Which produced the following new .class files: KeyBlockManagerImpl_Stub.class and KeyBlockManagerImpl_Skel.class.
The respective class loaders must also have class files for the following available: both sides need the KeyBlockManager interface and the KeyBlock class; in addition, the server needs KeyBlockManagerServer, KeyBlockManagerImpl and the generated skeleton, while the client needs KeyBlockClient and the generated stub.
Note especially that clients must have available the class definition
of any return type of a remote method. In our primitive example, the client's
class loader must find
KeyBlock.
To start an RMI registry and bind the server to it:
% rmiregistry &
% java KeyBlockManagerServer &
This prepares two Java virtual machines to handle requests for
remote objects from clients.
The client makes a request to the registry to find the
appropriate server. Then it makes a request to its server
for a reference to an object that represents its task.
% java KeyBlockClient
There is another new technology emerging that may change the face of cross platform distributed and client-server computing. Developers are now working closely to make the processing much less Java-specific. One idea that has much potential is Common Object Request Broker Architecture (CORBA). As the name implies, CORBA is a standard for location transparency and language independence between creators and users of objects. A client would neither know nor care about where an object was or in what language it was implemented [7].
The way RMI is implemented now, with serialized object streams, is inherently too slow. You will have noticed a speed problem even in the trivial example above. A service distributed with CORBA will be faster and more universal than a similar service distributed with RMI. The main reason for the performance increase will be the Internet InterOperability Protocol (IIOP), which quickly creates very high speed connections between objects while minimizing overhead [7].
Hopefully you now have a glimpse of the power of distributed computing with remote calls, and will consider implementing solutions in that way. If you wish to download the full source code of a functional version of this distributed application, visit the author's web site [5]. Feel free to experiment with it. Enjoy!
rmiregistry Tool Documentation
rmic Tool Documentation
Custom JSON serialization in WCF REST 3.5
- Tuesday, May 20, 2008 9:42 PM
Hi all,
I was hoping to get some help with some custom JSON serialization in WCF REST services. Some of the types I am trying to return from my WCF REST service have strict JSON serialization standards they need to follow, which go beyond the scope of the customization offered by the DataContract. So, I have the JSON serialization logic implemented in a ToJSON(...) method on the type.
How can I return the output of the ToJSON(...) function instead of the default .net JSON serializer? I tried returning the custom serialized JSON as a string but that puts the JSON output within quotes. How can I leverage the extensibility of the WCF REST serialization framework to return the custom JSON generated by the function.
Thank You,
Vish
All Replies
- Wednesday, May 21, 2008 4:04 AM
You can use the WCF REST "raw" programming model to return exactly what you want. There is a post on that programming model that explains how to use it.
- Wednesday, May 21, 2008 5:22 AM
Hi Carlos,
Thank you that worked for me. I had actually tried returning a Stream earlier. But I was writing the string to the stream using a StreamWriter. And weirdly that does not work. The response is empty. Any ideas as to why? But anyway using the UTF8 encoder like you did in your application works fine for me right now. Thank you. I appreciate your time for the response.
Thank You,
Vish
- Wednesday, May 21, 2008 1:39 PM
One reason why the response might have been empty with the StreamWriter is that the writer hadn't flushed the data to the stream (sometimes it buffers some data for performance reasons). If you call StreamWriter.Flush it should write the data to the underlying stream.
Glad to have helped
- Wednesday, May 21, 2008 1:44 PM
Hi Carlos,
I did flush the stream. But still had no luck with the response. That was very weird. Does encoding have anything to do with it?
Thank You,
Vish
- Thursday, May 22, 2008 12:35 AM
Hi Carlos,
The problem I was having with using a StreamWriter to write to a MemoryStream and then returning it was: not only did I have to flush the StreamWriter, but I also had to reset the position on the MemoryStream to 0.
Thank You,
Vish
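In other words, the pattern that ends up working looks roughly like this (names are illustrative, and the usual System.IO, System.Text and System.ServiceModel.Web namespaces are assumed):

public Stream GetData()
{
    WebOperationContext.Current.OutgoingResponse.ContentType = "application/json";

    MemoryStream ms = new MemoryStream();
    StreamWriter writer = new StreamWriter(ms, Encoding.UTF8);
    writer.Write("{\"hello\":\"world\"}");
    writer.Flush();     // push buffered data into the MemoryStream
    ms.Position = 0;    // rewind so WCF reads from the beginning
    return ms;          // note: don't dispose the writer here, that would close the stream
}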
- Thursday, July 24, 2008 8:54 AM
Hi Carlos
In continuation to the above
Is it possible to prompt the user with a File Download Box on the browser using the approach above.
I am returning a Memory Stream from the Service. I am also setting the Content-Disposition and Content-Type headers in the WebOperationContext.
Still the File Download box is not prompted.
The same works from a normal ASP.Net page. I have checked the headers using Fiddler and they are the same.
Is there anything else I need to do to enable the File Download box prompt?
Thanks in advance
Regards
Vikas Manghani
- Thursday, July 24, 2008 6:57 PM
That should just work. I just used this simple .svc file, and it just worked. What is the problem you're seeing?
<%@ServiceHost language=c# Debug="true" Service="MyTest.Service" Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %>
namespace MyTest
{
using System;
using System.IO;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;
[ServiceContract(Namespace = "")]
public interface ITest
{
[OperationContract, WebGet]
Stream GetData();
}
[ServiceBehavior(IncludeExceptionDetailInFaults = true, Namespace = "")]
public class Service : ITest
{
public Stream GetData() {
WebOperationContext.Current.OutgoingResponse.ContentType = "application/json";
WebOperationContext.Current.OutgoingResponse.Headers.Add("Content-Disposition", "attachment; filename=MyFile.json");
string jsonResponse = "{\"a\":123,\"b\":[false, true, false],\"c\":{\"foo\":\"bar\"}}";
MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(jsonResponse));
return ms;
}
}
}
- Friday, July 25, 2008 6:19 AM
Hi Carlos
The problem I face is that I need to return a file to the user (similar to Response.TransmitFile(Filename) in ASP.Net). The user should see a prompt on the browser to download the file.
The code I am trying to use is this:
public Stream DownloadFile()
{
//Read File bytes into a byte array
byte[] bytes = ReadFile(filepath);
MemoryStream ms = new MemoryStream(bytes);
ms.Position = 0;
WebOperationContext.Current.OutgoingResponse.ContentType = "application/octet-stream";
WebOperationContext.Current.OutgoingResponse.Headers.Add("Content-Disposition", "attachment; filename=" + Path.GetFileName(filepath));
return ms;
}
This doesn't result in any prompt, though from what I read, the Content-Disposition header should force the browser to display a prompt.
Thanks again
Regards
Vikas
- Friday, July 25, 2008 9:15 AM
Hi Carlos
Just a small thing - when I submit an HTML form with Action= URL of WCF service, it works fine. The file gets downloaded.
But If I use xmlHttpRequest thus:
var url = "Service1.svc/DownloadFile";
xmlHttp.open("GET", url, true);
xmlHttp.send();
I am not able to get the download box to appear. So it seems that the problem is not with the service but the client i.e. Javascript code.
I even tried the same with an HTTPHandler and that too exhibited the same behavior. The ASHX Handler prompts a download box if it is invoked due to a form submit, but not when invoked using xmlHttpRequest.send(),
Thanks and regards
Vikas Manghani
- Saturday, July 26, 2008 5:08 AM
The behavior from the client of showing a Save As box when facing a Content-Disposition header is not mandatory (look at the RFC for more details). When a browser receives a response with that header, it will do what you expect and prompt with the save file dialog.
When you're using XmlHttpRequest, however, you're essentially taking control over the behavior of the client (as the client now is your Javascript code, which happens to be running inside the browser). The response will be handed over to the callback for xmlHttp, and it's the callback's responsibility to interpret it the way you intend.
- Monday, July 28, 2008 5:17 AM
Hi Carlos
You are right. I was thinking on similar lines and was hoping that there could be some way for xmlHttpRequest to hand over the response to the browser. I don't think it is possible though.
For now, I have used a dummy form submit to simulate the request.
My main concern was however the WCF service, which now works as expected
Thanks a lot for your help
Regards
Vikas
- Saturday, September 06, 2008 6:08 PM
Hi,
the Stream hack works fine for sending raw content to the client, but what is its conceptual reason?
In fact, returning a Stream breaks the business logic: a business operation has a prototype, and the technical implementation, here WCF, forces you to break this prototype to adapt to the technical constraints.
Wouldn't it have been easier and more flexible to extend "WebMessageFormat" with an "Any" value, for example, allowing the code to stay compliant with the business interface?
Is another way planned to obtain the same effect in a more natural fashion?
Thanks to all for your answers. | http://social.msdn.microsoft.com/forums/en-US/wcf/thread/765f1569-0422-4471-8ec2-1d03b2026771 | CC-MAIN-2013-20 | refinedweb | 1,145 | 64.91 |
UI elements in React are called components. A component defines the appearance (layout, style, motion) and the behavior of the UI element. Once a component is defined, it can be incorporated within other components to build a complete user interface.
React components derive from the templated base class React.Component<P, S>. P and S refer to props and state, two concepts that we will explore below. The most important method in a React component is the render method. The example below shows a minimal React component that simply renders some text.
class HelloWorld extends React.Component<void, void> {
    render() {
        return <div>Hello World</div>;
    }
}
This example uses the JSX angle bracket syntax. TypeScript 1.6 contains native support for this notation. Simply name your source file with a “tsx” file extension rather than “ts”.
Note that this component is emitting a “div” tag, which is valid only in browser environments. To make this into a ReactXP component, simply replace the “div” with a “RX.Text” tag.
class HelloWorld extends RX.Component<void, void> {
    render() {
        return <RX.Text>Hello World</RX.Text>;
    }
}
Also note that
RX.Component replaces
React.Component in the above example. ReactXP re-exports
React.Component as
RX.Component so your imports remain tidy, you don’t need to import
React specifically.
It’s convenient for parent components to customize child components by specifying parameters. React allows components to define a set of properties (or “props” for short). Some props are required, others are optional. Props can be simple values, objects, or even functions.
We will modify the Hello World example to introduce an optional “userName” prop. If specified, the component will render a hello message to the user. Methods within the component class can access the props using “this.props”.
interface HelloWorldProps {
    userName?: string; // Question mark indicates prop is optional
}

class HelloWorld extends RX.Component<HelloWorldProps, void> {
    render() {
        return (
            <RX.Text>
                { 'Hello ' + (this.props.userName || 'World') }
            </RX.Text>
        );
    }
}
The example above renders a string using default styles (font, size, color, etc.). You can override style defaults by specifying a “style” prop. In this example, we render bold text on a green background. Note that styles within React (and ReactXP) borrow heavily from CSS.
// By convention, styles are created statically and referenced
// through a private (not exported) _styles object.
const _styles = {
    container: RX.Styles.createViewStyle({
        backgroundColor: 'green'
    }),
    text: RX.Styles.createTextStyle({
        color: 'red',
        fontSize: 36, // Size in pixels
        fontWeight: 'bold'
    })
};

class HelloWorld extends RX.Component<void, void> {
    render() {
        return (
            <RX.View style={ _styles.container }>
                <RX.Text style={ _styles.text }>
                    Hello World
                </RX.Text>
            </RX.View>
        );
    }
}
For more details about style attributes, refer to the styles documentation or the documentation for each component.
React uses flexbox directives for component layout. These directives are specified along with styling information. A number of flexbox tutorials are available online. Here is one we especially recommend. Using flexbox directives, you can specify the primary layout direction (row or column), justification, alignment, and spacing.
React also adopts the notion of margin and padding from CSS. Margin is the amount of space around a component, and padding is the amount of space between the boundary of the component and its children.
Here is an example style that incorporates margin, padding and flexbox directives.
const _styles = {
    container: RX.Styles.createViewStyle({
        flexDirection: 'column',
        flexGrow: 1,
        flexShrink: 1,
        alignSelf: 'stretch',
        justifyContent: 'center',
        margin: 4,
        padding: 4,
        backgroundColor: 'green'
    })
};
For more details about layout directives, refer to the styles documentation.
Events, such as user gestures, key presses or mouse actions, are reported by way of event-handler callbacks that are specified as props. In this example, the component registers an onPress callback for a button.
class CancelButton extends RX.Component<void, void> {
    render() {
        return (
            <RX.Button onPress={ this._onPress }>
                Cancel
            </RX.Button>
        );
    }

    private _onPress = (e: RX.SyntheticEvent) => {
        e.stopPropagation();

        // Cancelation logic goes here.
    }
}
This example makes use of a TypeScript lambda function to bind the _onPress variable to the method instance at class creation time. It also demonstrates a few conventions (use of the variable name “e” to represent the event object and a method name beginning with an underscore to indicate that it’s private). It also demonstrates a best practice (calling the stopPropagation method to indicate that the event was handled).
As we saw in the examples above, a component’s appearance and behavior can change based on externally-provided props. It can also change based on its own internally-managed state. As a simple example, the visual style may change when a user mouses over the component.
React components can define a state object. When this object is updated through the use of the setState method, the component’s render method is automatically called. In the example below, we implement a simple stop light with two states. Depending on the current state, the light is drawn in red or green. A press or click toggles the state.
interface StopLightState {
    // Fields within a state object are usually defined as optional
    // (hence the question mark below) because calls to setState
    // typically update only a subset of the fields.
    isStopped?: boolean;
}

const _styles = {
    redButton: RX.Styles.createViewStyle({
        width: 30,
        height: 30,
        borderRadius: 15,
        backgroundColor: 'red'
    }),
    greenButton: RX.Styles.createViewStyle({
        width: 30,
        height: 30,
        borderRadius: 15,
        backgroundColor: 'green'
    })
};

class StopLight extends RX.Component<void, StopLightState> {
    getInitialState(): StopLightState {
        return { isStopped: true };
    }

    render() {
        // Choose the appropriate style for the current state.
        var buttonStyle = this.state.isStopped ?
            _styles.redButton : _styles.greenButton;

        return (
            <RX.Button style={ buttonStyle } onPress={ this._onToggleState } />
        );
    }

    private _onToggleState = (e: RX.MouseEvent) => {
        e.stopPropagation();

        // Flip the value of "isStopped" and re-render.
        this.setState({ isStopped: !this.state.isStopped });
    }
}
Component state can also be stored as instance variables defined by the class. However, if a piece of data is used by the render method, it is better to add it to the state object and update it through the use of a setState call. That way, the rendered component will always reflect the current state. | https://microsoft.github.io/reactxp/docs/react_concepts.html | CC-MAIN-2019-18 | refinedweb | 984 | 52.46 |
Once Upon A Time…
Once upon a time, there was a sysadmin who wanted to make sure her website was always online. However, she figured that she was pretty good at compiling, installing, and configuring software, but that her programming skills were a bit rusty.
Oh, sure, she remembered her days at university where she learned a bit of Java, and C++, and the cool mind-bending exercises in LISP, but today she felt like trying something new. She followed the installation instructions carefully, and then jumped in without waiting.
After fishing online for information, she decided to start with the following program:
"The website may or may not be online." println()
Saving it as
watchcorgi.ooc and running
rock -v watchcorgi sure produced a lot
of output. And - as a token of its appreciation, the compiler even left an executable
on the hard drive. What a promising relationship, she thought.
However, not one to be overly chatty, she decided that instead of having to type
rock watchcorgi every time she wanted to compile her new program, she was going
to write a usefile for it, and put them both together in a directory.
Name: Watch Corgi Description: Tells on the bad websites that go down Version: 0.1.0 SourcePath: source Main: watchcorgi
Saving it as
watchcorgi.use, she realized that, if she wanted her usefile to be
valid, she needed to move her ooc module into
source/watchcorgi.ooc. So then, her
folder hierarchy now looked like:
. ├── source │ └── watchcorgi.ooc └── watchcorgi.use
Now, all she had to do was type
rock to have her program compiled. If she felt
like reading a bit, all she had to do was
rock -v - it reminded her of the countless
hours spent installing packages on Gentoo.
The Great Illusion
However, that program was not quite as useful as she had hoped so far. While it was technically correct — the best kind of correct — it did not, in fact, bring any new information to the table.
That was not meant to last, though, as she quickly devised a devious scheme. Before
the era of watchcorgi, she was using
curl, a command-line utility, to check if the
website was still online. It looked a little something like this:
curl -I
(Of course, that wasn’t her website’s actual URL, which we have sneakingly substituted with something a tad more common, in order to protect our heroine’s privacy.)
Running that simple command was enough to let her know, with a simple look, whether the website was up and running or down in the ground — in which case prompt maintenance was needed as soon as humanly possible.
She decided that if she could run that command herself, there was no reason why her
program couldn’t do so as well. After a second documentation hunt, she quickly jotted
down a few more lines, replacing the contents of
source/watchcorgi.ooc with this:
import os/Process exitCode := Process new(["curl", "-I", ""]) execute() "Sir Curl exited with: #{exitCode}" println()
And sure enough, after a quick recompilation, she saw the expected result:
Sir Curl
exited with: 0. Curious, she disconnected from the internet, and tried launching
./watchcorgi again. This time, she saw:
Sir Curl exited with: 6.
“It’s just like it always is with Unix-like tools” she thought. “An exit code of 0
is a good sign, anything else… not so much. It sure is convenient to be able
to import another ooc module for almost everything. Apparently, this
Process class
takes an array with the command arguments. And this
execute method returns the
exit code. Neato!” And so it was.
Form Follows Function
She was starting to be happy with her program. For all intents and purposes, was doing its job, and it was doing its job well. However, she could not deny that her program could have put a little more effort in the presentation. Just because a program does not have a will of its own, doesn’t mean it’s okay for it to be rude.
“Time to get to work”, she said out loud, forgetting that it was past 2 in the morning, and that nobody could probably hear her - and even if they could, there was no certainty that they would agree. While she thought about that, her fingers had kept tapping on the keyboard. Her program now looked a little bit like that:
import os/[Process, Terminal] exitCode := Process new(["curl", "-I", ""]) execute() match (exitCode) { case 0 => "Everything is fine." println() case => Terminal setFgColor(Color red) "[ERROR] The website is down!" println() Terminal reset() }
It didn’t blink, and there were no 3D effects: disappointing maybe for a sci-fi fan like her little brother, but having alerts in red, and a human-readable message was fine enough for her.
While carefully proofreading her code to check if she hadn’t missed anything, she
thought about the syntax of the
match construct. “It’s pretty smart, in fact.
Cases are tested from top to bottom - the first that matches, wins. And a case with
no value simply matches everything”. It just made sense.
She was also particularly happy with the way she was able to import both
os/Process
and
os/Terminal from the same line. Sure, she could have written two different
import directives, but she had been promised a concise programming language and it
was about time it delivered.
Corgi Ever Watching
Now that the program was polite, our programmer felt good enough to take a small break. As she was looking out the window, waiting for her 3rd cup of nocturnal coffee to brew, it came to her: “Wait a minute… what good is my program if I have to keep running it manually?”
A quick sip out of her coffee cup finished to clear her mind completely. “I am going to need some sort of loop. And I think watchcorgi should shut up if everything is fine, and only complain if something goes wrong.”
As she looked at her timer, waiting for it to run out and allow her to go back to hacking (self-discipline is important, after all), she came to a second realization: that there were two main tasks in her program - the checking, and the notifying. Surely there must be some way to write that in a more modular way?
She decided to go for broke, and split her program into three different ooc
modules. She started with
source/watchcorgi/checker.ooc:
import os/[Process] Checker: class { url: String init: func (=url) /** * @return true if the url is reachable, false otherwise */ check: func -> Bool { 0 == Process new(["curl", "-I", url]) execute() } }
Then went on with
source/watchcorgi/notifier.ooc:
import os/[Terminal] Notifier: class { quiet: Bool init: func notify: func (online: Bool, url: String) { if (online) { if (quiet) return Terminal setFgColor(Color green) "[ OK ] %s is online." printfln(url) Terminal reset() } else { Terminal setFgColor(Color red) "[ERROR] %s is not reachable! You may panic now." printfln(url) Terminal reset() } } }
And finally, thought it was better to rewrite
source/watchcorgi.ooc from
scratch using those new modules:
import watchcorgi/[checker, notifier] import os/Time notifier := Notifier new() notifier quiet = true // only bother me if something goes wrong checker := Checker new("") while (true) { notifier notify(checker check(), checker url) Time sleepSec(5) }
There. All good. Not only was her program now constantly vigilant, checking for potential problems every five seconds, she felt that the various components were just as flexible as needed, small enough, and that it made the main program file short and sweet.
The Littlest Things
There was one area of the code she wasn’t entirely happy with - in the
notifier, she was using the same pattern twice. First
Terminal setFgColor,
then
String printfln, then
Terminal reset. She decided to extract that
pattern into a function instead, and added it to the end of the the
Notifier
class definition:
say: func (color: Color, message: String) { Terminal setFgColor(color) message println() Terminal reset() }
With that new neighbor, the notify function was happy to be reduced to:
notify: (online: Bool, url: String) { if (online) { if (quiet) return say(Color green, "[ OK ] %s is online" format(url)) } else { say(Color red, "[ERROR] %s is not reachable! You may panic now." \ format(url)) } }
While this was better, she wasn’t satisfied yet - calling
format like this
(she thought of it as a version of
printfln that returned the formatted string
instead of printing it) wasn’t particularly pretty.
Like with everything that bothered her, she decided to do something about it:
say: func (color: Color, message: String, args: ...) { Terminal setFgColor(color) message printfln(args) Terminal reset() } notify: func (online: Bool, url: String) { if (online) { if (quiet) return say(Color green, "[ OK ] %s is online", url) } else { say(Color red, "[ERROR] %s is not reachable! You may panic now.", url) } }
It was subtle, but for her, it made all the difference. Being able to relay any number of arguments like that? This language might actually be comfortable after all.
All Together Now
“So, that was nice. For the life of me, I can’t think of a single thing my program is missing.” Her eyes closed gently, and she leaned back, as if overwhelmed by bliss.
Wait. Her eyes, suddenly inquisitive, were perfectly open now. “What if I want to monitor several websites? Then I would need a config file so that I could modify the list of websites to monitor… and it would need to check them in parallel, so it doesn’t get stuck on any one of them.”
She decided she needed one more module:
source/watchcorgi/config.ooc:
import io/File import os/Env import structs/List import text/StringTokenizer DEFAULT_PATH := Env get("HOME") + "/.config/corgirc" Config: class { websites: List<String> init: func (path := DEFAULT_PATH) { content := File new(path) read() websites = content split('\n') \ map(|line| line trim("\t ")) \ filter(|line| !line empty?()) } }
Armed with that new weapon, checking multiple websites in parallel was just a matter of making threads behave. Since she didn’t have much experience in the domain, and the documentation seemed a little bit obscure, she decided to ask for help in the ooc discussion group
Almost immediately, a response sprung with numerous code examples she could use
as inspiration for her own endeavor. And so she embarked courageously,
rewriting
source/watchcorgi.ooc once again:
import watchcorgi/[config, checker, notifier] import os/Time import threading/Thread import structs/[ArrayList] threads := ArrayList<Thread> new() config := Config new() for (url in config websites) { threads add(Thread new(|| guard := Guard new(url, 5) guard run() )) } // start all the threads for (thread in threads) { thread start() } // wait for all threads to complete threads each(|thread| thread wait()) Guard: class { delay: Int checker: Checker notifier: Notifier init: func (url: String, =delay) { checker = Checker new(url) notifier = Notifier new() notifier quiet = true } run: func { while (true) { notifier notify(checker check(), checker url) Time sleepSec(delay) } } }
As she began to write down a list of websites to check in
~/.config/corgirc,
she started to list the new things she had learned during that last refactoring:
That classes can be used before they are defined - in order word, the order in which classes are defined does not matter!
That threads, while really old fashioned, were quite easy to use - all you had to do was create a new
Threadobject and pass a function that takes zero arguments.
That some functions are anonymous - and that they can be defined as an argument to a function call like this:
[1, 2, 3] reduce(|a, b| a + b)
That using a foreach, such as
for (element in iterable) { /* code */ }or using the each method, like so
iterable each(|element| /* code */ ), where pretty much equivalent.
When Features Creep
As magnificent as the program was, she couldn’t shake an eerie feeling. It seemed so perfect, so concise, so damn practical - what could possibly go wrong?
“Oh, right!” she whispered. The program assumes that the
curl command-line
utility is installed and in the
$PATH. While on most Linux distributions,
that’s a safe bet, it might not be there on OSX. Or, god forbid, on Windows.
But it was almost 6AM, and rays of sunlight would soon come and disturb the oh so peaceful (yet eventful) night of coding. Obviously, she could not afford to write her own HTTP library.
Sure, in theory, a simple usage of
net/TCPSocket from the SDK, writing
something like
HEAD / HTTP/1.0\r\n\r\n
..and seeing if you get a non-empty response, would suffice. But what about parsing empty, yet worrying responses, like an HTTP 404, or an HTTP 502? What about HTTP 1.1 and the Host header, essential when several websites are running on the same IP address? And most importantly, what about HTTPS, which runs on a different port, and what’s more, over SSL?
No, definitely, writing an HTTP library was not part of the plan. But maybe
there was something she could use… maybe curl existed also as a library. A
quick search for
ooc curl revealed the existence of
fasterthanlime/ooc-curl. Jackpot!
A quick clone and.. wait. She knew better. Why not use sam instead?
A simple
sam clone curl would suffice. Or, better yet, she could add the
dependency in the .use file, and run
sam get from the watchcorgi folder
afterwards.
Her .use file now looked a little bit like this:
Name: Watch Corgi Description: Multi-threaded website monitoring system Version: 0.2.0 SourcePath: source Main: watchcorgi Requires: curl
And sure enough, after
sam get, she saw the
ooc-curl folder appear in her
$OOC_LIBS directory. It was time to rewrite
source/watchcorgi/checker.ooc:
use curl import curl/Highlevel Checker: class { url: String init: func (=url) /** * @return true if the url is reachable, false otherwise */ check: func -> Bool { 200 == (HTTPRequest new(url). perform(). getResponseCode()) } }
This piece of code was one of her favorites yet. She had used one of the features she had just learned about - call chaining. “In fact”, she would later explain to a colleague, “you can think of the dot as a comma - it separates several method calls, but they all happen on the same object, sequentially”.
Recompiling the program after this change was exciting. There was no
configuration dialog to fill out. No complicated command-line option to add
when compiling. As a matter of fact, the single line added to the use file was
enough to make sam happy - and rock itself seemed pretty content with the
use
curl directive now sitting at the top of the checker module.
A simple
rock -v did the trick. And there she had it. The perfect website
monitoring system. At last. Oh, sure, part of her brain fully realized that
the impression of perfectness would fade out over the days, but as far as
discovering a new language goes, she thought this was a pretty good run.
There was just one thing left to do…
To Give Back
At this point, she felt that watchcorgi it was worth it to publish her program somewhere. Of course, all along, she had been keeping track of it using git. In this case, she was using GitHub as a host.
She decided to make it easy for other people who might want to use
watchcorgi, to get it. After a quick search, it quickly became evident that
the process itself was trivial. She just had to send a pull request to the
sam repository that added a formula for her new pet project.
So, after forking sam on GitHub, changing the origin of her sam repository,
she opened a new file in
$OOC_LIBS/sam/library/watchcorgi.yml, and wrote:
Origin:
And then, she submitted the pull request. The sun was rising. It felt warm. I think - she thought - I just might like it here. | https://ooc-lang.org/docs/tutorial/ | CC-MAIN-2017-47 | refinedweb | 2,648 | 61.67 |
03 June 2009 16:27 [Source: ICIS news]
TORONTO (ICIS news)--Austria’s paint, coatings and varnish industry saw orders collapse in the first few months of the year, and fears a further deterioration as markets have not yet bottomed out, an industry group said on Tuesday.
Some producers recorded a decline in orders of up to 50% this year, Gunther Berghofer, head of Osterreichische Lackindustrie, said.
“We have not yet reached the bottom, and if the trend continues further industry rationalisation measures will be inevitable,” Berghofer said.
Even though the industry’s production value rose by 6% in 2008, the downturn actually started in the second half of last year, the group, which is part of ?xml:namespace>
Exports, which had already fallen 4.5% to €224m ($320m) in 2008, continued to trend downwards in the first half of 2009, Lackindustrie said.
In addition to the bleak economic outlook, the industry was also burdened by complex regulations that hindered the sale of products containing biocides, it added.
The group was particularly critical of an “action plan for sustainable public procurement” drafted by the country’s environment ministry, it said.
It also warned of “over-regulating” products based nanomaterial technology.
Nanotechnology was huge opportunity for the coatings and paint sector that needed to be fully utilised.
“We must avoid over-regulation, because that would mean the end of technological progress,” Lackindustrie deputy head Hubert Culik | http://www.icis.com/Articles/2009/06/03/9222057/austrias-paint-coatings-industry-orders-collapse-group.html | CC-MAIN-2014-52 | refinedweb | 235 | 50.67 |
NetBeans IDE Dev (Build 200704021800)
1.6.0_01; Java HotSpot(TM) Client VM 1.6.0_01-b04
Linux version 2.6.17-10-generic running on i386
cs_CZ (nb); UTF-8
+ ruby 0.51
----------------------------------------------------------------
no special settings, just downloaded from UC ruby, did not change interpreter,
just out of the box experience. Probably you have more insight what is wrong?
Gem itself or gem installation?!
On windows was gem installed successfully (using of course mswin32)
This is output from gem manager:
Select which gem to install for your platform (java)
1. ruby-debug-base 0.9.1 (mswin32)
2. ruby-debug-base 0.9.1 (ruby)
3. Skip this gem
4. Cancel installation
2
> Building native extensions. This could take a while...
Error opening script file: extconf.rb (No such file or directory)
ERROR: While executing gem ... (Gem::Installer::ExtensionBuildError)
ERROR: Failed to build gem native extension.
ruby extconf.rb install ruby-debug-base --no-rdoc --no-ri --include-dependencies
--version 0.9.1
Gem files will remain installed in
/tmp/ud/jruby-0.9.8/lib/ruby/gems/1.8/gems/ruby-debug-base-0.9.1 for inspection.
Results logged to
/tmp/ud/jruby-0.9.8/lib/ruby/gems/1.8/gems/ruby-debug-base-0.9.1/ext/gem_make.out
I can confirm this behaviour. However, the issue not only affects
ruby-debug-base gem, but any gem that must build native extensions and thus call
extconf.rb. For example, mongrel, sqlite3-ruby, mysql, etc. They all generate
the same error (for me) as described below.
Using NetBeans M8 with ruby-hudson-689, on Linux (Ubuntu Edgy).
This is a show-stopper, at least on my machine, since NetBeans doesn't seem to
have a way to use gems already installed on the system, but must install its own
via the Ruby Gem Manager.
I suppose more people will try to get debugger working this way as Tomas did. So
it is also semi-stopper for the debugger in some cases.
Since users can't install many required gems it seems as stopper. Why ruby
interpreter is executed? Maybe, we should use embedded jruby instead system
ruby. When we try to run mentioned command with embedded jruby the installation
exits with error:
extconf.rb:2:in `require': no such file to load -- mkmf (LoadError)
from extconf.rb:2
according the page
seems that mkmf is missing in embedded JRuby.
It looks like you are trying to install this gem for JRuby, which doesn't work. It doesn't work on the
command line either - this isn't a Gem Manager issue.
It looks like the fast-debug gem requires native Ruby, so you'll have to switch to native ruby first if
you're going to do this. (The Gem Manager is tied to whichever Ruby installation you are using.)
sh-2.05b$ cd netbeans/work/nbbuild/netbeans/ruby1/jruby-0.9.8/
sh-2.05b$ JRUBY_HOME=`pwd`; export JRUBY_HOME
sh-2.05b$ cd bin
sh-2.05b$ PATH=`pwd`:$PATH; export PATH
sh-2.05b$ cd
sh-2.05b$ jruby $JRUBY_HOME/bin/gem install ruby-debug-base
Bulk updating Gem source index for:
Select which gem to install for your platform (java)
1. ruby-debug-base 0.9.1 (mswin32)
2. ruby-debug-base 0.9.1 (ruby)
3. ruby-debug-base 0.9 (ruby)
4. ruby-debug-base 0.9 (mswin32)
5. ruby-debug-base 0.8.1 (ruby)
6. ruby-debug-base 0.8.1 (mswin32)
7. ruby-debug-base 0.8 (ruby)
8. ruby-debug-base 0.8 (mswin32)
9. Skip this gem
10. Cancel installation
> 2
Building native extensions. This could take a while...
Error opening script file: extconf.rb (No such file or directory)
ERROR: While executing gem ... (Gem::Installer::ExtensionBuildError)
ERROR: Failed to build gem native extension.
ruby extconf.rb install ruby-debug-base
Gem files will remain installed in /Users/tor/netbeans/work/nbbuild/netbeans/ruby1/jruby-0.9.8/lib/
ruby/gems/1.8/gems/ruby-debug-base-0.9.1 for inspection.
Results logged to /Users/tor/netbeans/work/nbbuild/netbeans/ruby1/jruby-0.9.8/lib/ruby/gems/1.8/
gems/ruby-debug-base-0.9.1/ext/gem_make.out
I'm not really sure what ruby-debug-base is supposed to do, or how the debugger is requiring it -
Martin? Since it has native code it's hard to imagine that it would be used by JRuby.
I switched my Ruby interpreter to Ruby 1.8.5 (native ruby) and ran the Gem manager - the gem
installed successfully; here's the output from the progress window.
Select which gem to install for your platform (i686-darwin8.8.1)
1. ruby-debug-base 0.9.1 (mswin32)
2. ruby-debug-base 0.9.1 (ruby)
3. ruby-debug-base 0.9 (ruby)
4. ruby-debug-base 0.9 (mswin32)
5. ruby-debug-base 0.8.1 (mswin32)
6. ruby-debug-base 0.8.1 (ruby)
7. ruby-debug-base 0.8 (mswin32)
8. ruby-debug-base 0.8 (ruby)
9. Cancel installation
2
> Building native extensions. This could take a while...
ruby extconf.rb install ruby-debug-base --no-rdoc --no-ri --include-dependencies --version > 0
make
gcc -I. -I/Users/tor/dev/ruby/install/ruby-1.8.5/lib/ruby/1.8/i686-darwin8.8.1 -I/Users/tor/dev/
ruby/install/ruby-1.8.5/lib/ruby/1.8/i686-darwin8.8.1 -I. -fno-common -g -O2 -pipe -fno-common
-c ruby_debug.c
cc -dynamic -bundle -undefined suppress -flat_namespace -L"/Users/tor/dev/ruby/install/
ruby-1.8.5/lib" -o ruby_debug.bundle ruby_debug.o -ldl -lobjc
make install
/usr/bin/install -c -m 0755 ruby_debug.bundle /Users/tor/dev/ruby/install/ruby-1.8.5/lib/ruby/
gems/1.8/gems/ruby-debug-base-0.9.1/lib
make clean
Successfully installed ruby-debug-base-0.9.1
> I'm not really sure what ruby-debug-base is supposed to do, or how the
> debugger is requiring it - Martin?
It is for debugging CRuby only. The problem could be if the user wants to
install and sets the fast debugger immediately after the IDE boots up. I may
update the wiki to mention the problem being solved here (also there could be
special support for installing the fast debugger - i.e. user would just invoke
some action "Install Fast Debugger"....)
But this issue is not about the ruby-debug-base gem only. Is is just one from
many. IMHO user may not be aware of the fact that JRuby is set. (S)he just
starts up the IDE, reads some tutorial in which some gem with native extension
is required....
It would be the best if it is possible to recognize somehow such gems in advance
and give the user the blocker dialog "You must switch to CRuby..."
Tor is right, this is gem<->jruby compatibility issue. The new information for
me was that Gem manager reflects interpreter settings - i thought it was
hardcoded only for jruby:)
Therefore the solution is to switch in such a case to a native ruby and install
this gem to a native ruby gem repository, using e.g. gem manager, it works
perfectly, great!
Reassigning this issue to newly created 'ruby' component.
Changing target milestone of all resolved Ruby issues from TBD to 6.0 Beta 1 build.
as described above, not an issue. | https://netbeans.org/bugzilla/show_bug.cgi?id=99860 | CC-MAIN-2016-30 | refinedweb | 1,239 | 53.68 |
In this article, we discuss the Adapter design pattern, which is part of the book “Design Patterns: Elements of Reusable Object-Oriented Software” by Gamma et al. (also known as the Gang of Four).
We will discuss the motivation for this design pattern (including common misconceptions) and see a number of different ways in which this pattern is implemented in C#.
TL;DR Use extension methods for your mapping code.
Defining the Interface of a Class
The above book summaries the Adapter pattern thusly:
“Convert the interface of a class into another interface clients expect. Adapter lets classes work together that couldn’t otherwise because of incompatible interfaces.”
It is fundamental to interpret this description in the context in which it was written. The book was published in 1994, before Java and similar languages even existed (in fact, the examples are in C++). Back then, “the interface of a class” simply meant its public API, to which extent “client code” could use it. C++ has no formal interface construct as in C# and Java, and the degree of encapsulation is dictated as a result of what the class exposes publicly. Note that a class’s interface is not restricted to its public methods alone; C++ also offers other devices external to the class itself, and so does C# (extension methods, for instance).
Given today’s languages where interfaces are formal language constructs, such an interpretation has mostly been forgotten. For instance, the 2013 MSDN Magazine article “The Adapter Pattern in the .NET Framework” illustrates the Adapter design pattern in terms of C# interfaces, but it slightly misses the point. We can discuss the Adapter pattern without referring to C# interfaces at all, and instead focus on Data Transfer Objects (DTOs). DTOs are simply lightweight classes used to carry data, often between remote applications.
An Example Scenario
Let’s say we have this third party class with a single method:
public class PersonRepository { public void AddPerson(Person person) { // implementation goes here } }
The interface of this class consists of the single
AddPerson() method, but the Person class that it expects as a parameter is also part of that interface. There is no way that the third party could give us the PersonRepository (e.g. via a NuGet package) without also including the Person class.
It is often the case that we may have a different Person class of our own (we’ll call it OurPerson), but we can’t pass it to
AddPerson(), because it expects its own Person class:
public class Person // third party class { public string FullName { get; set; } public Person(string fullName) { this.FullName = fullName; } } public class OurPerson // our class { public string FirstName { get; set; } public string LastName { get; set; } public OurPerson(string firstName, string lastName) { FirstName = firstName; LastName = lastName; } }
Thus, we need a way to transform an OurPerson instance into a Person. For that, we need an Adapter. Next, we’ll go through a few ways in which we can implement this.
Constructor Adapter
One way of creating a Person from an OurPerson is to add a constructor in Person which handles the conversion:
public class Person // third party class { public string FullName { get; set; } public Person(string fullName) { this.FullName = fullName; } public Person(OurPerson ourPerson) { this.FullName = $"{ourPerson.FirstName} {ourPerson.LastName}"; } }
It is not hard to see why this is a really bad idea. It forces Person to have a direct dependency on OurPerson, so anyone shipping PersonRepository would now need to ship an additional class that may be domain-specific and might not belong in this context. Even worse, this is not possible to achieve if the Person class belongs to a third party and we are not able to modify it.
Wrapper
Another approach (which the aforementioned book describes) is to implement OurPerson in terms of Person. This can be done by subclassing, if the third party class allows it:
public class OurPerson : Person // our class { public string FirstName { get; set; } public string LastName { get; set; } public OurPerson(string firstName, string lastName) : base($"{firstName} {lastName}") { FirstName = firstName; LastName = lastName; } }
Where C# interfaces are involved, an alternative approach to inheritance is composition. OurPerson could contain a Person instance and expose the necessary methods and properties to implement its interface.
The disadvantage of either of these two approaches is that they make OurPerson dependent on Person, which is the opposite of the problem we have seen in the previous approach. This dependency would be carried to wherever OurPerson is used.
Especially when dealing with third party libraries, it is usually best to map their data onto our own objects. Any changes to the third party classes will thus have limited impact in our domain.
AutoMapper
A lot of people love to use AutoMapper for mapping DTOs. It is particularly useful if the source and destination classes are practically identical in terms of properties. If they’re not, you’ll have to do a fair amount of configuration to tell AutoMapper how to construct the destination properties from the data in the source.
Personally, I’m not a fan of AutoMapper, for the following reasons:
- The mapping occurs dynamically at runtime. Thus, as the DTOs evolve, you will potentially have runtime errors in production. I prefer to catch such issues at compile-time.
- Writing AutoMapper configuration can get very tedious and unreadable, often more so than it would take to map DTOs’ properties manually.
- You can’t do anything asynchronous in AutoMapper configuration. While this may sound bizarre, I’ve needed this in the past due to terrible DTOs provided by a third party provider.
Standalone Adapter
The aforementioned MSDN Magazine article simply uses a separate class to convert from source to destination. Applied to our example, this could look like this:
public class PersonAdapter { public Person ConvertToPerson(OurPerson person) { return new Person($"{person.FirstName} {person.LastName}"); } }
We may then imagine its usage as such:
var ourPerson = new OurPerson("Chuck", "Berry"); var adapter = new PersonAdapter(); var person = adapter.ConvertToPerson(ourPerson);
This approach is valid, and does not couple the DTOs together, but it is a little tedious in that you may have to create a lot of adapter classes, and subsequently create instances whenever you want to use them. You can mitigate this a little by making them static.
Extension Methods
A slight but very elegant improvement over the previous approach is to put mapping code into extension methods.
public static class OurPersonExtensions { public static Person ToPerson(this OurPerson person) { return new Person($"{person.FirstName} {person.LastName}"); } }
The usage is very clean, making it look like the conversion operation is part of the interface of the class:
var ourPerson = new OurPerson("Chuck", "Berry"); var person = ourPerson.ToPerson();
This works just as well if you’ve converting from a third party object onto your own class, since extension methods can be used to append functionality even to classes over which you have no control.
Summary
There are many ways to apply the Adapter design pattern in mapping DTOs. Extension methods are the best approach I’ve come across for C# because:
- They don’t couple the source and destination DTOs.
- The conversion occurs at compile time, so any repercussions of changes to the DTOs will be caught early.
- They can easily be used to append functionality even to third party classes for which the source code is not available.
The Adapter design pattern has little to do with interfaces as a formal OOP language construct. Instead, it deals with “the interface of a class”, which is embodied by whatever it exposes publicly. The Adapter design pattern provides a means to work with that interface by converting incompatible objects to ones that satisfy its contract. | http://gigi.nullneuron.net/gigilabs/tag/design-patterns/ | CC-MAIN-2019-51 | refinedweb | 1,275 | 50.67 |
User Tag List
Results 1 to 3 of 3
$var = <<<EOD fail after server move
all my uses ofPHP Code:
$var = <<<EOD
<html>here</html>
EOD;
any idea how i can fix this? i would hate to have to edit the all my files and replace EOD with quotes.
I know it's the EOD because once i do remove them, the scripts run fine.
eric.
Probably the nastiest gotcha is that there may also not be a carriage return (\r) at the end of the line, only a form feed, AKA newline (). Since Microsoft Windows uses the sequence \r as a line terminator, your heredoc may not work if you write your script in a Windows editor. However, most programming editors provide a way to save your files with a UNIX line terminator.
Originally Posted by BuschPHP Code:
$var = <<<EOD(NO Space)
<html>here</html>
EOD;(NO Space)
Bookmarks | http://www.sitepoint.com/forums/showthread.php?297420-Email-Problem&goto=nextoldest | CC-MAIN-2015-18 | refinedweb | 150 | 67.69 |
# Teaching folks to program 2019, a.k.a. in the search of an ideal program: Sequence

Hi, my name is Michael Kapelko. I'm a professional software developer. I'm fond of developing games and teaching folks to program.
**Preface**
Autumn 2019 was the third time I participated as one of the teachers in the course to teach 10-15-year-old folks to program. The course took place from mid. September to mid. December. Each Saturday, we were studying from 10 AM to 12 PM. More details about the structure of each class and the game itself can be found in [the 2018 article](https://habr.com/ru/post/438320/).
I have the following goals for conducting such courses:
* create a convenient tool to allow the creation of simple games, the tool interested folks of 10 years old or older can master;
* create a program to teach programming, the program interested folks of 10 years old or older can use themselves to create simple games.
**Game**

Memory is a simple game we create during the course. The goal of Memory is to find matching items on a playing field. More details, including game mechanics, can be found in [the 2018 article](https://habr.com/ru/post/438320/). You can play the created game in a web browser by clicking [this link](http://kornerr.ru/ekids2019).
**Tool**

When I was creating the tool, my guiding principle was **unpretentiousness** that manifests itself in the following:
1. work under any operating system
* development can be conducted under Linux, macOS, or Windows
* one can play the game on a PC, a tablet, or a smartphone
2. no need to configure anything: just open the link in a web browser and start working
3. no need for the Internet: work locally if you want, there's no back-end
4. the game is available to everyone
* if you place a file on GitHub Pages, just share the link
* if you send the file over Skype, just open the file locally
The tool is an integrated development environment (IDE) that is technically a single HTML file. This single file contains both IDE and a project under development (Memory game in our case). The tool looks pretty standard:
1. left area depicts the code of a selected module;
2. middle area contains buttons to restart, save the project and manage modules;
3. top right area contains result;
4. bottom right area lists modules belonging to both IDE and the project.
Since we only have a single HTML file, we should be able to run it in two modes:
1. replay
* default mode;
* just open the file;
2. editing
* append `?0` symbols in the address bar.
Web browser cache (IndexedDB) is used to keep changes temporarily. To save changes permanently, one has to download the file with the changes by clicking the corresponding button in the middle area.
**The first classes**
I prepared [80 lines of JavaScript code](http://kornerr.ru/ekids19?%D1%82%D0%B5%D1%85%D0%BD%D0%B8%D0%BA%D0%B0) for the first class and printed the code on paper. Each student received the paper and had to type the code into the tool. The typing exercise had the following goals:
1. find out the typing speed of students;
2. demonstrate API of the tool.
The typing speed turned out to be extremely low: ranging from 14 symbols per minute (a student managed to type only half of the code) to 39 symbols per minute. Since I used to type the code with the speed of 213 symbols per minute, I was shocked by the results and started to doubt we would be able to write the game in an hour by the end of the course.
We spent the second class to find typos in the code. I met typos that I have never seen in my life. I was shocked again: students had a hard time finding the typos even with the correct code on the paper in front of them. It's hard to imagine what would happen to the students' psyche if we were to pass a [brutal UX/UI test](https://cantunsee.space/) with questions like this:

Later I tried to decrease the code down to 10 lines, offered partially completed code so that students could find and fix errors. Nothing helped: students just couldn't comprehend anything as if they saw hieroglyphs instead of familiar letters.
**Successful seventh class**
The half of the course was over, and I haven't moved an inch. In another attempt to find a way to explain the code I rewrote the game one more time. Now with a module of an intriguing title `последовательность` (`sequence` in Russian).
To my surprise, the class had a stunning success: we got everything done before "the bell rang", and the students were burning with enthusiasm. The burning was so strong that we finished the class with a spontaneous brainstorm session where we came up with functionality to make the newly appeared game even better:

The lines in Russian read:
* timer;
* tutorial;
* sounds;
* the camera should be farther;
* randomize;
* hearts (meaning lives);
* randomize after a failed matching attempt;
* exploding spheres;
* levels with different number of spheres;
* background.
Let's look closer into the class.
**Board**
Previous classes were using "teachers work with each student individually" approach. After six classes we (two teachers) realized that diving into each student's specific typos/errors takes more time than teaching anything new.
Starting with the seventh class, we decided to hook everyone to the board, i.e., the board became a central place where all of us were working, a place for everyone to stand up, approach the board and write there. PCs became secondary, a place for students to copy the board contents to. This practice clearly indicated school boards exist for many reasons:
* every student is accustomed to receiving information from the board; students know what to observe;
* teacher's environment is at the board; it's now possible to explain single new item to everyone without diving into individual errors;
* fixing individual errors becomes faster because most of them stem from negligence, i.e., typos made while copying the board contents.
I'd like to highlight the fact that teachers work at the board together with students: a teacher sets direction; however, students stand up and come to the board themselves, write answers to the teacher's questions themselves. The benefits of such an approach are the following:
* students write with their own hands, i.e., they come up with a solution and implement it themselves, a teacher does not write for them;
* students stand up and come to the board, i.e., they move, which is good for health and drains unbridled energy that usually hampers discipline;
* students have to remember the code to copy it to the board;
* teachers have an opportunity to evaluate students' observation skills by seeing how easy (or hard) it is for them to remember and write the code on the board.
**Sequence**
`последовательность` module of the game looks like this:

The sequence allows to write an algorithm in the form of events and reactions:
* events (`начало` (`start`), `выбор` (`selection`), etc.) are lines without indentation;
* reactions (`настроить ThreeJS` (`configure ThreeJS`), `показать заставку` (`show splash screen`), etc.) are lines with indentation to signify their relation to events.
Thus, when starting the game (`начало` event) we configure ThreeJS (`настроить ThreeJS` reaction), show splash screen (`показать заставку` reaction), and so on.
The class had almost an empty `последовательность` module in the beginning; only events were present:

I have duplicated these same events onto the board, leaving free space to add reactions later during the class (I used GIMP to depict free space in the following image):

We were searching for reactions in `память.реакции` module (`memory.reactions`):

Each reaction of `последовательность` module is represented in `память.реакции` module by [constructor functions](https://learn.javascript.ru/constructor-new). For example, `проверить окончание` reaction (`check for ending`) has a uniquely corresponding `ПроверитьОкончание` function (`CheckForEnding`):
```
function ПроверитьОкончание(мир) // 1.
{
мир.состояние["скрыто сфер"] = 0; // 2.
this.исполнить = function() // 3.
{
мир.состояние["скрыто сфер"] += 2; // 4.
var скрыто = мир.состояние["скрыто сфер"]; // 5.
var сфер = мир.состояние["сферы"].length; // 6.
if (сфер == скрыто) // 7.
{
мир.события["конец"].уведомить(); // 8.
}
};
}
```
The same code in English would look like this:
```
function CheckForEnding(world) // 1.
{
world.state["spheres hidden"] = 0; // 2.
this.run = function() // 3.
{
world.state["spheres hidden"] += 2; // 4.
var hidden = world.state["spheres hidden"]; // 5.
var spheres = world.state["spheres"].length; // 6.
if (spheres == hidden) // 7.
{
world.events["ending"].report(); // 8.
}
};
}
```
Let's look closer:
1. The function accepts `world` (dictionary) that is used by functions to communicate with each other. `world` consists of three regions (dictionary keys):
* `state` contains variable data used for communication;
* `settings` contain constants to configure functions;
* `events` contain [publishers](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) to be able to subscribe functions to events.
2. An instance of this constructor function is created with `new` operator while parsing `последовательность` module. Practically, everything outside of `run` method is considered to be part of the constructor body. In our case, we create `spheres hidden` variable to count hidden spheres.
3. `run` method is executed each time an event is reported.
4. Since `check for ending` reaction is executed each time a user hides a pair of spheres, we increase `spheres hidden` counter by `2`.
5. Just a shorter alias for `spheres hidden` counter.
6. Count the number of spheres at the playing field.
7. Compare the number of spheres at the playing field with the number of hidden spheres.
8. Report `ending` event if they are equal, i.e., if all spheres were hidden.
Students took turns searching for functions in `память.реакции` module:
* a student looks for a function in the module (to simplify the process, I've split the functions with `// // // //` symbols);
* once a function is located, the student speaks the name of the function out loud and comes to the board;
* the student writes the name down on the board to the list of found functions (students may use any means to remember the names except teacher's hints).
Such an exercise also highlights who's actively tracking the functions and who's unable to find the next function when it's their turn.
Once the names of all functions have been written on the board, we were mapping reactions (functions) to events in a similar fashion:
* a teacher asks, for example, which of the listed functions is suitable for event `начало`
* if a student answers correctly, the student
+ comes to the board
+ writes the reaction under the related event
+ crosses corresponding function out of the listed functions
Once we have a more-or-less working set of reactions for an event it's time to transfer them from the board to student PCs. That way we managed to fill the board with reactions both on the board:


and in the tool:

**The following classes**
During the following classes, we were trying to create a new reaction and a corresponding constructor function. First, I tried to put a solution into heads quickly (providing complete lines of code); however, that didn't work. That's why we ended up with learning the following code, which took us several classes:
```
var кот = "9";
console.log(кот);
```
Unfortunately, these two lines of code were hard to explain: students were confused with the concept of variables and their values. This wasn't the only problem: the new function required the use of an array, which I failed to explain at all. There's a long road ahead of me before I'm able to explain variables and arrays to students.
Of course, by the end of the course we managed to complete the new function, however, I haven't seen understanding and subsequent faith in themselves, which usually manifests itself with a burning enthusiasm we saw in the seventh class.
**The last class**
The last class was not using the famous greeting circle at the beginning. Instead, I asked everyone (including myself) to tell what was good (+) and what was bad (-) during the course. Here's the table I got:

The same table in English would look like this:
| № | + | - |
| --- | --- | --- |
| 1 | Personalized ending screen | Touchpad? |
| 2 | Working on PC | Writing on the board |
| 3 | Explanation | Discipline |
| 4 | Flexible learning program | Sometimes unclear and uninteresting |
| 5 | There's a finished game | Learning program is too big |
| 6 | A detailed explanation of algorithms | Doing the same thing each time |
| 7 | Teamwork | Students of disparate skill level |
| 8 | Interesting / Difficult | Too early |
| 9 | Sequence | Half of the course |
Surprisingly enough, the folks didn't like to write on the board even though it greatly increased the efficiency of teaching. On the one hand, the "learning program was too big", on the other hand, we were "doing the same thing each time", i.e. repeating what we have learned before.
We were saving the game to GitHub from time to time. This was difficult, too: we were spending half an hour while students were authenticating. As always, nobody remembered their passwords (each time), others had to verify it's really them accessing GitHub account on a new device, which required access to e-mail, which sometimes belonged to parents (the folks had to call their parents).
Nonetheless, each student had its own version of the game by the end of the course with personalized beginning and ending screens:

**Conclusion**
On the one hand, we had significant success:
* the tool worked as unpretentiously as expected;
* the concept of sequences was easily understood.
On the other hand, we had an evident failure:
* the tool wasn't friendly to students without JavaScript knowledge, i.e., everyone;
* the teaching program has been stuck most of the time.
That's why I'll try to answer the following questions when teaching in 2020:
1. Will another language (Python, Lua) be simpler to explain and work with?
2. Is it possible to hide Git inside the tool so that one could save the game to [Git without leaving the tool](https://isomorphic-git.org/)?
3. Is it possible to create API as declarative as [SwiftUI](https://www.hackingwithswift.com/quick-start/swiftui/what-is-swiftui)?
4. How to explain variables and arrays?
I'll share answers to these and other questions next year ;)
 | https://habr.com/ru/post/488174/ | null | null | 2,595 | 63.29 |
I'm having trouble including a JAR file that adds a class that will let my main class send emails.
What I have done...
Updated the dependency in my POM file, as follows:
<dependency>
<groupId>EmailAPI</groupId>
<artifactId>EmailAPI</artifactId>
<version>1.0</version>
<scope>system</scope>
<systemPath>${basedir}\src\lib\EmailAPI.jar</systemPath>
</dependency>
Add the import (NetBeans automatically added this when I used the Email class, so it seems to know where to look...)
import me.nrubin29.emailapi.Email;
Call the class, directly using the structure provided
//send an email
new Email()
.withSMTPServer("smtp.gmail.com")
.withUsername("[email protected]")
.withPassword("xxxxxxx")
.withTo("[email protected]; [email protected]")
.withSubject("[RP] Server has started")
.withBody("This is the body!")
.send();
I can build fine, it all works out... but then when I try to run it (as a plugin to Minecraft), I get a NoClassDefFoundError, as shown here:
I don't understand what I'm missing here. Can anyone point me in the right direction?
You use
<scope>system</scope>. Is it available in the Minecraft enviroment? See Maven, Introduction to the Dependency Mechanism, Dependency Scope: "This scope is similar to
provided [...] " and under provided: "indicates you expect [...] a container to provide the dependency at runtime."
It means that you are missing the jar in your runtime environment. You might need to change the scope of your maven dependency to compile.
EmailAPI requires two JARs in order to run. I think they are activation and mail or something. I can look at the project but I think you might be missing them. | http://www.dlxedu.com/askdetail/3/49e311ed8cce7401b5f1d0ddab49b368.html | CC-MAIN-2019-04 | refinedweb | 261 | 60.82 |
17 August 2005 04:46 [Source: ICIS news]
SHANGHAI (CNI)--Shenhua Group has received verbal approval from the central government for its northern China coal-to-olefins (CTO) project, a source from the Chinese company told CNI on Wednesday.?xml:namespace>
The company is still waiting for the final approval documents from the government before starting on basic engineering design on the project, which will be at Baotou in inner Mongolia. ?xml:namespace>
The project includes a 1.8m tonne/year coal-based methanol plant and a methanol-to-olefins (MTO) unit, which can produce 600,000 tonne/year of olefins. A 100MW thermal power station, polyethylene (PE) and polypropylene (PP) facilities will also be built.
CNI was told earlier that the project would produce 300,000 tonne/year of PE, 310,000 tonne/year of PP, 94,000 tonne/year of butane, 37,000 tonne/year of heavy alkanes, 19,000 tonne/year of sulphur and 14,000 tonne/year of ethane and propane. However, the source said these capacities, which were outlined in the feasibility study, could be altered later.
Hongkong’s Kerry Group and Shanghai-listed Baotou Tomorrow Technology Co are potential partners for the project, which will be Shenhua’s second CTO project. | http://www.icis.com/Articles/2005/08/17/2009292/shenhua-receives-approval-for-inner-mongolia-cto-project.html | CC-MAIN-2014-42 | refinedweb | 207 | 57.3 |
Status: Last call for comments
The href, target and ping attributes affect what happens when users follow hyperlinks created using the a element.
text: Same as textContent.
The IDL attributes href, ping, target, rel, media, hreflang, and type must reflect the respective content attributes of the same name.
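For example, assuming a link such as the following (the id, URL, and link text are illustrative, not taken from this specification), a script could read the reflected values directly:
<a id="next-link" href="/chapters/2" rel="next">Next chapter</a>
<script>
 var link = document.getElementById("next-link");
 var url = link.href;    // reflects the href content attribute
 var rel = link.rel;     // reflects the rel content attribute
 var label = link.text;  // same as link.textContent
</script>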
If the datetime attribute is present, user agents must parse its value to obtain the date, time, and time-zone offset.
valueAsDate: Returns a Date object representing the specified date and time.
The valueAsDate IDL attribute must return either null or a new Date object initialised to the relevant value as defined by the following list:
When a Date object is to be returned, a new one must be constructed.
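For example, given a time element like the following (the id and date value are only illustrative), a script could obtain the parsed value through valueAsDate:
<p>Posted on <time id="pubdate" datetime="2009-10-09">last Friday</time>.</p>
<script>
 var t = document.getElementById("pubdate");
 var when = t.valueAsDate; // a Date object, or null if the value cannot be parsed
</script>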
<article>
 <h1>Small tasks</h1>
 <footer>Published <time pubdate>today</time>.</footer>
 <p>I put a bike bell on his bike.</p>
</article>
Here is the same thing but with the time included. Because the element is empty, it will be replaced in the rendering with a more readable version of the date and time given.
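Such markup could look like the following, with the machine-readable value carried in the datetime attribute of an empty time element (the datetime value shown is only illustrative):
<article>
 <h1>Small tasks</h1>
 <footer>Published <time pubdate datetime="2009-08-30T07:13Z"></time>.</footer>
 <p>I put a bike bell on his bike.</p>
</article>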
Status: Last call for comments
This section is non-normative.
Status: Last call for comments
The ins and del elements represent edits to the document.
The ins element
imgelement
Status: Last call for comments. ISSUE-30 (longdesc) and ISSUE-66 (image-analysis)
figcaption element, then the
contents of the first such
figcaption behaviour. downloaded, decoded, and found to be valid;: Last call for comments. ISSUE-31 (missing-alt) blocks progress to Last Call
The requirements for the
alt
attribute depend on what the image is intended to represent, as
described in the following sections.
Status: Last call for comments.
Status: Last call for comments>
Status: Last call for comments
A conformance checker must report the lack of an alt attribute as an error unless one of the conditions listed below applies:
The title attribute is present and has a non-empty value (as described above).
The img element is in a figure element that contains a figcaption element that contains content other than inter-element whitespace (as described above).
The img element is part of the only paragraph directly in its section, and is the only img element without an alt attribute in its section, and its section has an associated heading (as described above).
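For instance, under the second condition above, an image whose caption is given by a figcaption could omit the alt attribute (the file name and caption here are illustrative):
<figure>
 <img src="ceremony-0130.jpeg">
 <figcaption>The ceremony, as photographed by one of the guests.</figcaption>
</figure>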
The iframe element
Status: Last call for comments. ISSUE-100 (srcdoc) and ISSUE-103 (srcdoc-xml-escaping) block progress to Last Call.
The srcdoc attribute gives the content of the page that the nested browsing context is to contain.
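For example, a page could embed a short fragment of content directly in the attribute (the content shown is illustrative):
<iframe srcdoc="<p>Hello, <em>world</em>!"></iframe>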
When an iframe element is first inserted into a document, the user agent must create a nested browsing context, and then process the iframe attributes for the first time.
Whenever an iframe element with a nested browsing context has its srcdoc attribute set or changed, the user agent must process the iframe attributes.
Similarly, whenever an iframe element with a nested browsing context but with no srcdoc attribute specified has its src attribute set or changed, the user agent must process the iframe attributes.
When the user agent is to process the iframe attributes, it must run the first appropriate steps from the following list:
If the srcdoc attribute is specified
Navigate the element's browsing context to a resource whose Content-Type is text/html, whose URL is about:srcdoc, and whose data consists of the value of the attribute.
If the src attribute is specified but the srcdoc attribute is not
Resolve the value of the src attribute, relative to the iframe element.
If that is not successful, then jump to the empty step below.
If the resulting absolute URL is an ASCII case-insensitive match for the string "about:blank", and the user agent is processing this iframe's attributes for the first time, then jump to the empty step below. (In cases other than the first time, about:blank is loaded normally.)
Navigate the element's browsing context to the resulting absolute URL.
Empty: When the steps above require the user agent to jump to this step while processing this iframe's attributes for the first time, then the user agent must queue a task to fire a simple event named load at the iframe element. (After jumping to this step, the above steps are not resumed.)
Queue a task to fire a simple event named load at the iframe element.
Any navigation required of the user agent in the process the iframe attributes algorithm must be completed with the iframe element's document's browsing context as the source browsing context.
Furthermore, if the process the iframe attributes algorithm was invoked for the first time for this element (i.e. as a result of the element being inserted into a document), then any navigation required of the user agent in that algorithm must be completed with replacement enabled.
 <article>
  <footer> At <time pubdate>2009-08-21T23:44Z</time>, <a href="/users/cap">cap</a> writes: </footer>
  <iframe seamless sandbox="allow-same-origin" srcdoc="<p>did you get a cover picture yet?"></iframe>
 </article>
In such srcdoc values, quotation marks and ampersands, and a number of other characters, need to be escaped to ensure correctness.
When content loads in the iframe, the user agent must queue a task to fire a simple event named load at the iframe element. When content fails to load (e.g. due to a network error), then the user agent must queue a task to fire a simple event named error at the element instead.
The task source for these tasks is the DOM manipulation task source.
A load event is also fired at the iframe element when it is created if no other data is loaded in it.
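For example, a page embedding a frame could watch for these events from script (the id and URL are illustrative):
<iframe id="widget" src="/widget.html"></iframe>
<script>
 var frame = document.getElementById("widget");
 frame.addEventListener('load', function () {
   // the frame's content has finished loading
 }, false);
 frame.addEventListener('error', function () {
   // the frame's content failed to load
 }, false);
</script>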
While the sandbox attribute is specified, the iframe element's nested browsing context must have the flags given in the following list set. In addition, any browsing contexts nested within an iframe, either directly or indirectly, must have all the flags set on them as were set on the iframe's Document's browsing context when the iframe's Document was created.
The sandboxed origin browsing context flag, unless the sandbox attribute's value, when split on spaces, is found to have the allow-same-origin keyword set
This flag forces content into a unique origin, thus preventing it from accessing other content from the same origin.
This flag also prevents script from reading from or writing to the document.cookie IDL attribute, and blocks access to localStorage and openDatabase(). [WEBSTORAGE] [WEBSQL]
These flags only take effect when the
nested browsing context of the
iframe is
navigated. Removing them, or removing
the entire
sandbox
attribute, has no effect on an already-loaded page.
Status: Last call for comments.
If either:
the browsing context for which the embed element's Document is the active document had the sandboxed plugins browsing context flag set when that Document was created, or
the embed element's Document was parsed from a resource whose sniffed type as determined during navigation is text/html-sandboxed
...then the user agent must render the embed element in a manner that conveys that the plugin was disabled. The user agent may offer the user the option to override the sandbox and instantiate the plugin anyway; if the user invokes such an option, the user agent must act as if the conditions above did not apply.
An
embed element is said to be potentially active when the
following conditions are all met simultaneously:
The element is in a Document.
The element's Document is fully active.
The element has either a src attribute set or a type attribute set (or both).
The element is not in a Document whose browsing context had the sandboxed plugins browsing context flag set when the Document was created (unless this has been overridden as described above).
The element's Document was not parsed from a resource whose sniffed type as determined during navigation is text/html-sandboxed (unless this has been overridden as described above).
The element is not a descendant of an object element that is not showing its fallback content.
If the element has a src attribute set
The user agent must resolve
the value of the element's
src
attribute, relative to the element. If that is successful, the
user agent should fetch the resulting absolute
URL, from the element's browsing context scope
origin if it has one.
If the element has no src attribute set
The user agent should find and instantiate an appropriate
plugin based on the value of the
type attribute.
Whenever an
embed element that was potentially active stops being
potentially active, any
plugin that had been instantiated for that element must
be unloaded.
The
embed element is unaffected by the
CSS 'display' property. The selected plugin is instantiated even if
the element is hidden with a 'display:none' CSS style.
The
embed element has no fallback
content. If the user agent can't find a suitable plugin, then
the user agent must use a default plugin. (This default could be as
simple as saying "Unsupported Format".)
Any namespace-less attribute other than name and align is passed to the plugin as a parameter; the two exceptions are there to exclude legacy attributes that have side-effects beyond just sending parameters to the plugin.
Status: Last call for comments.
When the element is created, when it is popped off the
stack of open elements of an HTML parser
or XML parser, and subsequently whenever the element is
inserted into a
document or removed from a document; and whenever the element's
Document changes whether it is fully
active; and whenever an ancestor
object element
changes to or from showing its fallback content; and
whenever the element's
classid attribute is set,
changed, or removed; and, when its
classid attribute is not present,
whenever its
data attribute is
set, changed, or removed; and, when neither its
classid attribute nor its
data attribute are present, whenever
its
type attribute is set,
changed, or removed: the user agent must queue a task
to run the following steps to (re)determine what the
object element represents. The task source
for this task is the DOM
manipulation task source.
If the user has indicated a preference that this
object element's fallback content be
shown instead of the element's usual behavior, then jump to the
last step in the overall set of steps (fallback).
For example, a user could ask for the element's fallback content to be shown because that content uses a format that the user finds more accessible.
If the element has an ancestor media element, or
has an ancestor
object element that is not
showing its fallback content, or if the element is
not in a
Document
with a browsing context, or if the element's
Document is not fully active, or if the
element is still in the stack of open elements of an
HTML parser or XML parser, then jump to the last step in the overall set of steps (fallback). If the data attribute is present and its value is not the empty string, resolve the value of the data attribute relative to the element; if that fails, fire a simple event named error at the element, then jump to the last step in the overall set of steps (fallback).
Fetch the resulting absolute URL, from the element's browsing context scope origin if it has one. If the load failed (e.g. there was an HTTP 404 error,
there was a DNS error), fire a simple event named
error at the element, then jump
to the last step in the overall set of steps (fallback).
Determine the resource type, as follows:
Let the resource type be unknown.
Let the sniffed flag be false.
If the user agent is configured to strictly obey Content-Type headers for this resource, and the resource has associated Content-Type metadata, then let the resource type be the type specified in the resource's Content-Type metadata, and abort these substeps.
If there is a
type
attribute present on the
object element, and that
attribute's value is not a type that the user agent supports,
but it is a type that a plugin supports,
then let the resource type be the type
specified in that
type
attribute.
Otherwise, if the resource type is unknown, and the resource has associated Content-Type metadata, then let the resource type be the type specified in the resource's Content-Type metadata.
If this results in the resource type
being "
text/plain", then let the resource type be the result of applying the
rules for
distinguishing if a resource is text or binary to the
resource instead, and then set the sniffed
flag to true.
If the resource type is unknown or
"
application/octet-stream" at this point, then the following steps are invoked.
If the resource type is still unknown at this point, but the <path> component of the URL of the specified resource (after any redirects) matches a pattern that a plugin supports, then let resource type be the type that that plugin can handle.
For example, a plugin might say that it can
handle resources with <path>
components that end with the four character string "
.swf".
If the resource type is still unknown, and the sniffed flag is false, then change the resource type to instead be the sniffed type of the resource.
Otherwise, if the resource type is
still unknown, and the sniffed flag is
true, then change the resource
type back to
text/plain.
Handle the content as given by the first of the following cases that matches:
If plugins are being sandboxed, jump to the last step in the overall set of steps (fallback).
Otherwise, the user agent should use the plugin that supports resource type and pass the content of the resource to that plugin. If the plugin reports an error, then jump to the last step in the overall set of steps (fallback).
image/"
The
object element must be associated with a
newly created nested browsing context, if it does
not already have one.
If the URL of the given resource is not
about:blank, the element's nested browsing
context must then be navigated to that resource, with
replacement enabled, and with the
object element's document's browsing
context as the source browsing
context. (The
data attribute of the
object element doesn't get updated if the
browsing context gets further navigated to other
locations.)
If the URL of the given resource is
about:blank, then, instead, the user agent must
queue a task to fire a simple event
named
load at the
object element. named. If the element has an instantiated
plugin, then unload it.
When the algorithm above instantiates a
plugin, the user agent should pass to the
plugin parameters given by
param elements that are children of the
object element, in tree order. If the
plugin supports a scriptable interface, the
HTMLObjectElement object representing the element
should expose that interface. The
object element
represents the plugin. The
plugin is not a nested browsing
context.
If either:
the object element's Document's browsing context had the sandboxed plugins browsing context flag set when the Document was created, or
the object element's Document was parsed from a resource whose sniffed type as determined during navigation is text/html-sandboxed
...then the steps above must always act as if they had failed to find a plugin, even if one would otherwise have been used.
The above algorithm is independent of CSS properties (including 'display', 'overflow', and 'visibility'). For example, it runs even if the element is hidden with a 'display:none' CSS style, and does not run again if the element's visibility changes.
The
willValidate,
validity, and
validationMessage
attributes, and the
checkValidity() and
setCustomValidity()
methods, are part of the constraint validation API.
Status: Last call for comments. | http://www.w3.org/TR/2010/WD-html5-20100304/text-level-semantics.html | CC-MAIN-2017-09 | refinedweb | 2,486 | 51.07
The HTML5 video element allows us to create caption-enabled video pages. Captions/subtitles allow media content to reach different geographical locations. For example, if the media content is in English, it may need to be translated into French or German to reach an audience that speaks those languages. We can provide subtitles in our HTML5 video element in the user's preferred language.
To provide subtitles in HTML 5 video element, we will make use of WebVTT and TTML format files. These files are simple text files.
1. WebVTT - Web Video Text Tracks
2. TTML - Timed Text Markup Language
Introduction to WebVTT
WebVTT [Web Video Text Tracks] is a simple text file with the extension .vtt. This file can contain different types of information. For example -
- Subtitles - The translation of the speech/dialog based on time.
- Captions - It is similar to subtitles but it can include sound effects or other information of the media.
- Chapters - You can create chapters so that user can navigate through the video. For example, creating chapters based on a slide show of Power Point Presentation.
- Metadata - Information about the video which you can access using scripting languages.
WebVTT files can also be captured using script languages. These files can be created manually or you can create them using some authoring tools. The format of the file is as shown here:
The WebVTT file [.vtt], starts with WEBVTT. Then it includes the time which is from and to with the decoration/position of the text which you want to display on your media. The position is shown as A:middle, i.e. the center of the screen.
The next line is the dialog/speech to display on the media. These contents are known as "cues". Each "cue" starts with an ID, which is 1, 2 in our case, and must be separated from the next cue by a blank line. Time specification must be done in HH:MM:SS.mmm format. You can even style the cues using CSS. The above file is used for displaying French subtitles.
Introduction to TTML
TTML [Timed Text Markup Language] is a specification published by W3C. TTML is XML based language. TTML file includes the XML version and encoding type. The format of the file is shown here:
The root element in this language starts with <tt> with a namespace. Then you include <body> and <div> tags. In the <div> tag, we are including cues. The timing is specified with the begin and end in the paragraph tag.
Compared to the VTT, the TTML format looks little complex. But when you start using it, you get familiar with it.
Microsoft provides a simple tool called Caption maker. You can visit the URL to download this tool
Now let's start creating our demo which will display the video in HTML 5 <Video> element and subtitle the same in English and French.
To start with, create a blank ASP.NET Web application and add a HTML page. I have named my HTML page as "MathsVideo.html".
Once you add the page, add a folder with the name "Media" under your web application and paste the video which you want to display. For more information about HTML 5 Video element media format support, please visit the link given below -
I am using the MP4 format for this demonstration. Once you add the media file into our Media folder, right click the Media folder and add a text file with the extension .vtt. I have named my file as "MathTrickSubtitle1_EN.vtt" and used the code shown below -
WEBVTT 1 00:00:01.000 --> 00:00:03.000 A:middle Let's say you want to multiply 12 x 13 2 00:00:04.000 --> 00:00:07.000 A:middle For this one we will draw one line 3 00:00:08.000 --> 00:00:15.000 A:middle For the two we will leave a little bit of space and will draw two lines 4 00:00:16.000 --> 00:00:23.000 A:middle For the other numbers we will draw the lines to the other direction 5 00:00:24.000 --> 00:00:29.000 A:middle Now we will group together different lines and will count the dots 6 00:00:30.000 --> 00:00:33.000 A:middle Here we have six different dots 7 00:00:34.000 --> 00:00:37.000 A:middle In the middle, we have 5 different dots 8 00:00:38.000 --> 00:00:41.000 A:middle And on the other side we have one dot 9 00:00:42.000 --> 00:00:44.000 A:middle And that's the anser - 156
Now let's repeat the above steps to add French subtitles. I have named my file as "MathTrickSubtitle1_FR.vtt". The code is as shown below -
WEBVTT 1 00:00:01.000 --> 00:00:03.000 A:middle Disons que vous voulez multiplier 12 x 13 2 00:00:04.000 --> 00:00:07.000 A:middle Pour celui-ci nous allons tirer une ligne 3 00:00:08.000 --> 00:00:15.000 A:middle Pour les deux , nous allons laisser un peu d'espace et nous allons tracer deux lignes 4 00:00:16.000 --> 00:00:23.000 A:middle Pour les autres chiffres que nous allons tracer les lignes à l'autre direction 5 00:00:24.000 --> 00:00:29.000 A:middle Maintenant, nous allons regrouper les différentes lignes et comptera les points 6 00:00:30.000 --> 00:00:33.000 A:middle Ici, nous avons six points différents 7 00:00:34.000 --> 00:00:37.000 A:middle Au milieu , nous avons cinq points différents 8 00:00:38.000 --> 00:00:41.000 A:middle Et de l'autre côté nous avons un point 9 00:00:42.000 --> 00:00:44.000 A:middle Et ce est la anser - 156
I have used a translator to convert the English speech to French. That could be the reason, you may not see the correct word as per the speech.
Now open the Web.config file and write the code shown below. This code registers the MIME type for WebVTT and TTML. If you don't include this, you may not see the subtitles in our video.
<system.webServer> <staticContent> <remove fileExtension=".vtt" /> <mimeMap fileExtension=".vtt" mimeType="text/vtt" /> <remove fileExtension=".ttml" /> <mimeMap fileExtension=".ttml" mimeType="application/ttml+xml" /> </staticContent> </system.webServer>
Now we will add a <video> element in our HTML page. The code for HTML is shown below -
<!DOCTYPE html> <html lang="en"> <head> <title>Video With Subtitles</title> </head> <body> <video controls="controls" src="../Media/MathTrick1.mp4"> <track kind='subtitles' srclang='en' label='English' src='../Media/MathTrickSubtitle1_EN.vtt' default> <track kind='subtitles' srclang='fr' label='French' src='../Media/MathTrickSubtitle1_FR.vtt'> </video> </body> </html>
In the above code we are using <video> element and the <track> element inside the video element. The <track> element contains several attributes like - kind - You can set the value of this attribute like captions, chapters, descriptions, metadata and subtitles. We are using subtitles. srclang is set to en and fr respectively. The label and the path of the source file. We are also making English as a default subtitle. User can change the same as per his/her choice.
Now let's run the web page and see how your subtitles look like. The output is shown here:
Now if you change the language to French using the label shown above, you will see French subtitles. It is shown here:
Now let's test the same using a TTML [Timed Text Markup Language] file. Let's first add a text file with the extension .ttml into our Media folder. I have named it as "MathTrickSubtitle2_EN.ttml". Write the code shown below in the file -
<?xml version="1.0" encoding="UTF-8"?> <tt xmlns="" xml: <body> <div> <p begin="00:00:01.000" end="00:00:03.000">Let's say you want to multiply 12 x 13. </p> <p begin="00:00:04.000" end="00:00:07.000">For this one we will draw one line. </p> <p begin="00:00:08.000" end="00:00:15.000">For the two we will leave a little bit of space and will draw two lines. </p> <p begin="00:00:16.000" end="00:00:23.000">For the other numbers we will draw the lines to the other direction. </p> <p begin="00:00:24.000" end="00:00:29.000">Now we will group together different lines and will count the dots. </p> <p begin="00:00:30.000" end="00:00:33.000">Here we have six different dots. </p> <p begin="00:00:34.000" end="00:00:37.000">In the middle, we have 5 different dots. </p> <p begin="00:00:38.000" end="00:00:41.000">And on the other side we have one dot. </p> <p begin="00:00:42.000" end="00:00:44.000">And that's the anser - 156. </p> </div> </body> </tt>
Now let's add another text file with the .ttml extension into our Media folder. I have named the file as "MathTrickSubtitle2_FR.ttml". Write the code shown below in the file -
<?xml version="1.0" encoding="UTF-8"?> <tt xmlns="" xml: <body> <div> <p begin="00:00:01.000" end="00:00:03.000">Disons que vous voulez multiplier 12 x 13</p> <p begin="00:00:04.000" end="00:00:07.000">Pour celui-ci nous allons tirer une ligne</p> <p begin="00:00:08.000" end="00:00:15.000">Pour les deux , nous allons laisser un peu d'espace et nous allons tracer deux lignes</p> <p begin="00:00:16.000" end="00:00:23.000">Pour les autres chiffres que nous allons tracer les lignes à l'autre direction</p> <p begin="00:00:24.000" end="00:00:29.000">Maintenant, nous allons regrouper les différentes lignes et comptera les points</p> <p begin="00:00:30.000" end="00:00:33.000">Ici, nous avons six points différents</p> <p begin="00:00:34.000" end="00:00:37.000">Au milieu , nous avons cinq points différents</p> <p begin="00:00:38.000" end="00:00:41.000">Et de l'autre côté nous avons un point</p> <p begin="00:00:42.000" end="00:00:44.000">Et ce est la anser - 156</p> </div> </body> </tt>
Now it's time to change the source of the file into our HTML page. The code for HTML page is as shown below -
<!DOCTYPE html> <html lang="en"> <head> <title>Video With Subtitles</title> </head> <body> <video controls="controls" src="../Media/MathTrick1.mp4" height="400" width="500"> <track kind='subtitles' srclang='en' label='English' src='../Media/MathTrickSubtitle2_EN.ttml' default> <track kind='subtitles' srclang='fr' label='French' src='../Media/MathTrickSubtitle2_FR.ttml'> </video> </body> </html>
Now run your web page and see the output. It will be same as shown above. I have done a test of WebVTT which is supported by IE 10, Google Chrome 18, Firefox. TTML is supported by IE 10. But I didn't find the support for the same in Google Chrome 39.0 and Firefox 33.
Summary - In this article, we have seen how to display subtitles using HTML 5 video element. We have seen WebVTT and TTML file formats to create the subtitles and embed them using a <track> element.
Will you give this article a +1 ? Thanks in advance | https://www.devcurry.com/2015/05/html5-video-with-subtitles.html | CC-MAIN-2018-39 | refinedweb | 1,934 | 77.43 |
PySide QEvent post crash
Hello everyone,
I'm working on Linux Ubuntu 12.10, with PySide 1.1.1 and python 2.7
I have a problem when posting QEvent through a QStateMachine.
If I want it to work I have to keep a reference on the event, or it crashes.
I have set up a little sample code to illustrate my problem.
I would like to know if I am doing it wrong or if it is a known problem and if I should use the workaround (keeping a reference on the event) ?
@
#!/usr/bin/python
from __future__ import print_function
import sys
from PySide.QtCore import *
from PySide.QtGui import *
app = QApplication(sys.argv)
sm = QStateMachine()
init = QState(sm)
sm.setInitialState(init)
sm.start()
e = None
def no_crash():
global e
print("send an event that doesn't crash...")
e = QEvent(QEvent.Type(QEvent.registerEventType()))
sm.postEvent(e)
def crash():
print("and one that does...")
e = QEvent(QEvent.Type(QEvent.registerEventType()))
sm.postEvent(e)
QTimer.singleShot(2000, no_crash)
QTimer.singleShot(4000, crash)
sys.exit(app.exec_())
@
Thanks in advance for your help
Pierre
I initially ran into the same ugly crash when I was converting the 'Events, Transitions, and Guards' example from Qt Project's excellent State Machine Framework doc to PySide. Your global reference to the registered QEvent Type fixes the crash problem because it prevents python's automatic garbage collection from kicking in before you want it to.
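One way to make that robust is to keep the state machine, the registered event type and the last posted event alive as instance attributes of a long-lived object instead of module-level globals. A rough sketch (class and attribute names here are just illustrative, not from the original code):
@
import sys
from PySide.QtCore import QEvent, QState, QStateMachine, QTimer
from PySide.QtGui import QApplication, QWidget

class MainWindow(QWidget):
    def __init__(self):
        super(MainWindow, self).__init__()
        self.sm = QStateMachine()
        init = QState(self.sm)  # parented to the machine, so it stays alive
        self.sm.setInitialState(init)
        self.sm.start()
        # Registered event type, kept as an instance attribute
        self.my_event_type = QEvent.Type(QEvent.registerEventType())
        self.last_event = None

    def post_my_event(self):
        # Also keep a reference to the posted event, so it is not
        # garbage collected before the state machine processes it
        self.last_event = QEvent(self.my_event_type)
        self.sm.postEvent(self.last_event)

app = QApplication(sys.argv)
win = MainWindow()
QTimer.singleShot(2000, win.post_my_event)
sys.exit(app.exec_())
@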
In my final code, rather than using a global reference which is inelegant, I wrap all the functionality into a QWidget which is used as the main window of the app. That allows me to save the registered QEvent as an instance variable in the QWidget's class constructor. The idea being that python knows not to garbage collect instance variables as long as the class instance itself is being used. | https://forum.qt.io/topic/28238/pyside-qevent-post-crash | CC-MAIN-2018-05 | refinedweb | 311 | 66.13 |
#include <itkImageAdaptor.h>
Collaboration diagram for itk::ImageAdaptor< TImage, TAccessor >:
ImageAdaptors are templated over the ImageType and over a functor that will specify what part of the pixel can be accessed
The basic aspects of this class are the types it defines.
Image adaptors can be used as intermediate classes that allow the sending of an image to a filter, specifying what part of the image pixels the filter will act on.
The TAccessor class should implement the Get and Set methods as static methods. These two will specify how data can be put and get from parts of each pixel. It should define the types ExternalType and InternalType too.
Definition at line 47 of file itkImageAdaptor.h. | http://www.itk.org/Doxygen16/html/classitk_1_1ImageAdaptor.html | crawl-003 | refinedweb | 118 | 51.99 |
Ok, after many trials, I finally managed to install Ubuntu on my xps 9365 with the 500Gb Toshiba NVMe.
Not sure what finally caused it to work, I applied Dell drivers & windows updates, and finally the laptop accepted to boot correctly in AHCI mode. I could then choose to boot from USB and installed Ubuntu with UEFI enabled and secure boot disabled. My quick tests are quite satisfactory using Ubuntu 17.04 daily build. Wireless, bluetooth and touchscreen (including pressure sensitivity) work fine. Did not try to reboot Windows since, so not sure it still works under Windows but at least I can now enjoy Linux on my device, sweet!
I'm running Ubuntu 16.10 on the Dell XPS 13 9365 with BIOS 01.00.05
Drive 512 GB Toshiba NVMe
First thing I did was turn off Secure boot.
SATA Operation set to AHCI
Port Enablement M.2 PCIe SSD turned on
Everything so far is working except for wake up from suspend, once suspended you have to hard boot and hold the power button for a long time.
My boot times are about 20 seconds from press of the power button.
j-b-m - Good for you and thanks for the info. I still haven't been able to get my 9365 (XPS 13 2 in 1) with Toshiba 512GB NMVE pcie ssd to work when I change the SATA controller from Raid to SATA.
I notice when I change the SATA controlller in bios/efi from Raid to anything else, it boots super slow (almost appears to hang) even just trying to get back into bios/efi.
Any more specific ideas on what you (or anyone else) did to get this to work. I need to install Linux on this otherwise its a return (along with another 10 I ordered for my office).
Also, has anyone been able to boot from a MicroSD card in the computer? If I put a microsd card with a live os in the computer via a usb adapter it show's up in the bios/efi but the same card doesn't show up at all if I use the built in reader (though the card using the reader will show up after I've booted into an OS) its just at bios that it doesn't give me an option to boot from it.
thanks!
jon
yzfdude - what did you do to allow this to work? When I change my sata controller from 'raid' to 'sata' or 'none' the computer takes forever (5 -10 minutes) just to get past post then won't boot any OS (secure boot off other settings stock, bios version 1.0.5)
I never tried to use the Toshiba drive as PCIe. What I suggest you do is try to load Windows on the system to see if it will do that. The Bios update may be causing a problem for that drive as it seems to be for the 9360s. I am assuming you are working with a clean drive while installing since the Windows installs have the drive being used as SATA?
You may just want to configure it as SATA and leave the setting on RAID..
I also would not have an SD card in the reader. Until a driver is loaded for the reader it may have problems.
----------------------------------------------------------------------------------
XPS 2720, Inspiron 17 7779, Inspiron 15 7567, XPS 13 9365, Inspiron 1545, TB16 Dock
I will check my exact bios settings tonight and post it here for reference.
I have the exact same problem with the system seeming to hang during POST after setting to AHCI.
I tried with the installed BIOS (think it was .2) then upgraded to the latest (.5) with the same behaviour.
If this won't work and I'm unable to install Linux on this laptop I will ask for a return.
I don't understand this statement. If it won't work as PCIe (AHCI setting) why not just leave it set as RAID and use the drive in the SATA mode?
----------------------------------------------------------------------------------
XPS 2720, Inspiron 17 7779, Inspiron 15 7567, XPS 13 9365, Inspiron 1545, TB16 Dock
Please read up on previous posts. Linux doesn't see the NVMe drive in RAID mode hence cannot install to it.
So here is what I did and my current BIOS setup, on a Dell 9365 i7 with 500GB NVMe Toshiba drive.
First thing I did when receiving the laptop was upgrade BIOS to 01.00.05. I think I disabled Windows 10 fast boot (not sure of the exact name anymore) in windows settings. Shrinked Windows partition to leave a large empty space.
Then, disabled secure mode. I could boot from an Ubuntu live from an USB, but hard drive was not detected, so impossible to install.
I switched the SATA mode from RAID to AHCI (without booting to Windows), trying to boot from my USB key. I experienced a huge startup time (5 minutes to reach BIOS), unable to boot from USB. Tried all BIOS combinations I could think of without success.
Switched back the SATA mode to RAID and booted into Windows.
Applied all Windows updates and DELL updates (2 Dell updates applied)
Rebooted maybe twice. Altered the BIOS settings as described below, and suddently I could see my USB key as a boot option in the (F12) boot menu. I appeared under a name like UEFI USB device.
From this point, I could boot and install linux without issue (I installed in the empty space, leaving Windows
partition untouched).
Unfortunately, I did try so many combinations that I don't know what finally made it work. Feel free to ask if you have questions, glad if I can help.
My BIOS settings:
Advanced boot options:
Legacy ROM - unchecked
UEFI network stack - unchecked
Boot Sequence:
UEFI selected
Secure boot:
disabled (when disabling secure boot, it usually checks some options in "Advanced boot option" so better check everything twice).
In system configuration:
SATA operation:
AHCI
Misc. devices:
Camera - disabled (not sure it is related)
POST behavior:
Fastboot set to minimal | https://www.dell.com/community/Laptops-General-Read-Only/Dell-XPS-13-9365-Won-t-boot-USB-in-SATA-Mode-AHCI-Trying-to/td-p/5119108/page/2 | CC-MAIN-2019-35 | refinedweb | 1,014 | 79.6 |
.
Requirements
- Python 3
- pip3 for installation
Install
I recommend using the github repository, it’s more up-to-date.
Using setup.py
git clone cd pygmail python3 setup.py install
You can copy the gmailsend.py script to a folder in your $PATH, for example:
sudo cp bin/gmailsend.py /usr/local/bin/
Using pip:
pip3 install --upgrade git+
As far as I know pip will not take care of the script files in bin/.
Security
It uses smtp, that’s all. You can always check the source code, of course
Usage
From a interactive python shell or from any python file:
from pygmail2.Pygmail import Pygmail Pygmail().send_mail('[email protected]', 'hi, there', '<b>important stuff </b>')
Using the gmailsend.py in shell
gmailsend.py -h usage: A python script for sending emails from commandline using gmail [-h] [--subj SUBJ] [--body_file BODY_FILE] to_addr positional arguments: to_addr Recepient email address optional arguments: -h, --help show this help message and exit --subj SUBJ, -s SUBJ Subject of the email --body_file BODY_FILE, -b BODY_FILE
Write up your mail body in the shell in a interactive fashion:
gmailsend.py [email protected] -s hi_there
Pipe your mail body to the script:
echo “hi, how are you?” | gmailsend.py [email protected] -s hi_there
Load the mail body with a html file:
wget -O body.html gmailsend.py [email protected] -s hi_there -b body.html
| https://pypi.org/project/pygmail2/0.2/ | CC-MAIN-2022-33 | refinedweb | 261 | 65.32
Our Benefits
Social media farmville, the massively popular facebook game, lion monthly active users this week (source: appdata) facebook has lion users in total. e to the official facebook page of microsoft tag get exclusive content and interact with microsoft tag from facebook join facebook to create your own page or municate. Description renders an html tag this is monly used in conjunction with fb:intl to put translatable text in an attribute of an html tag. Oliver gassner - beratung, konzeption, schulung, vortr ge, artikel zu weblogs, wikis, dem social web im allgemeinen, web3d, selbstmanagement, bildung und kreativit t.
Facebook has made a number of changes recently, including the redesign and the revision of its advertising terms and all of it seems to point to facebook looking to ize its. working website xbox weekend warrior: rockin red dead redemption, epic facebook xbl logs, datel lawsuit, more. Tag is on facebook sign up for facebook to connect with tag tag is a voice connection to you talk live with your friends or leave each. Tag! is on facebook sign up for facebook to connect with tag!.
e to the official facebook page of tag heuer get exclusive content and interact with tag heuer from facebook join facebook to create your own page or municate with.
Tag - taylor assassins is on facebook sign up for facebook to connect with tag - taylor assassins. Bienvenue sur la page officielle de tag heuer sur facebook, sur laquelle vous trouverez du contenu exclusif et pourrez interagir avec tag heuer rejoignez facebook pour cr er.
Description the fql photo tag table query this table to return information about a photo tag to structure your query, use the table name (photo tag in this case) in the from clause.
Benvenuto nella pagina facebook ufficiale di facebook accedi a contenuti esclusivi ed interagisci con facebook direttamente su facebook iscriviti a facebook per creare la tua. Headquaters of the evil genius so i finally got around to getting facebook to automatically impoort my blog entries.
Farmville, the massively popular facebook game, lion monthly active users this week (source: appdata) facebook has lion users in total.
Tag your friends you can tag your friends using the same quick-and-easy interface found on facebook click on someone in your photo, then select from a list of.
Name rating posts fans installs last updated; farmville wall manager manages farmville wall posts; accepts bonuses, grabs bouquets, adopts mals, hatches eggs, and more. Status generator for works like facebook and twitter for those of us who can status generator what s it for? tag cloud. Update on monday, september: status tagging is now available to everyone on facebook one of the most popular features on facebook is tagging, which gives you the ability to. Compliance: all banners and ads including downloads, text links, banners are paid advertisers as puter news site we encourage everybody to post their products and services.
People read facebook status updates, not books use our tips, news and hacks to customise your facebook layouts, profile pictures, statuses, tagging pictures, login, photos, etc. Exploring the web robert scoble is an employee of rackspace, which can help you with all your hosting needs. Create notes tag friends have fun - hundreds of fun quizes & notes for myspace, facebook and bebo. Google trends is a great tool to get an overview on terms people are searching for with the largest search engine in the world it also shows interesting trends.
e to the official facebook page of tag get exclusive content and interact with tag right from facebook join facebook to create your own page or to start connecting with. Facebook is a social utility that connects people with friends and others who work, study and live around them people use facebook to keep up with friends, upload an unlimited.
Collegehumor is the best humor site on the watch funny videos, funny pictures, read funny articles, jokes, and see edy videos. Today, we are adding a new way to tag people and other things you re connected to on facebook in status updates and other posts from the publisher..
tag on for facebook
Who We Are
import export data england :: age of mythalogy-full setup :: giaoanbachkim com :: french verb conjugation worksheets :: tag on for facebook ::
| http://djiler.i8it.net/crysis-w23/stenednthor.html | crawl-003 | refinedweb | 711 | 55.17 |
munlock - reenable paging for some parts of memory
#include <sys/mman.h> int munlock(const void *addr, size_t len);
munlock reenables paging for the memory in the range starting at addr with length len bytes. All pages which contain a part of the specified memory range can after calling munlock be moved to external swap space again by the kernel..
On POSIX systems on which mlock and munlock are available, _POSIX_MEMLOCK_RANGE is defined in <unistd.h> and the value PAGESIZE from <limits.h> indicates the number of bytes per page.
On success, munlock returns zero. On error, -1 is returned, errno is set appropriately, and no changes are made to any locks in the address space of the process.
POSIX.1b, SVr4
mlock(2), mlockall(2), munlockall(2)
| http://wiki.wlug.org.nz/munlock(2)?action=PageInfo | CC-MAIN-2015-22 | refinedweb | 139 | 65.62
cart.ApplyDiscounts(_promotionEngine, new PromotionEngineSettings()) .Where(r => r.Status == FulfillmentStatus.Fulfilled) .Select(c => c.Promotion?.Coupon?.Code) .Where(i => i != null) .Any(c => c.Equals(couponCode, StringComparison.OrdinalIgnoreCase));
Hi!
It is actually my own Promotions.
In the start up I do this:
var promotionTypeHandler = context.Locate.Advanced.GetInstance<PromotionTypeHandler>(); promotionTypeHandler.DisableBuiltinPromotions();
And then I have made my own by inheriting the basic ones:
[ContentType(GUID = "95c36376-eadc-4d2d-9142-3e1f70814422", DisplayName = "Köp artiklar få rabatt på utvalda artiklar", GroupName = "entrypromotion")] [PromotionSettings(FixedRedemptionsPerOrder = 1)] [ImageUrl("Images/BuyQuantityGetItemDiscount.png")] public class BuyQuantityGetItemDiscountPromotion : BuyQuantityGetItemDiscount, IBasePromotion { [UIHint("image")] [Display(Order = 5, Name = "Bild")] public virtual ContentReference Image { get; set; } [Display(Order = 7, Name = "Beskrivning (lång)")] public virtual XhtmlString MainBody { get; set; } }
public interface IBasePromotion { XhtmlString MainBody { get; set; } ContentReference Image { get; set; } }
But I guess that should not matter?
If I run this in an inkognito browser:
var s = _promotionEngine.Run(cart, new PromotionEngineSettings(RequestFulfillmentStatus.All, false));
All promotions have the status NotFulfilled, but this line of code:
var discountPrices = _promotionEngine.GetDiscountPrices(variantLink, _currentMarket.GetCurrentMarket(), _currencyService.GetCurrentCurrency());
gives me discounted prices for the variantLink even though the promotion is not fulfilled. Should that be the case? I thought GetDiscountPrices() should give me prices only if the promotion is fulfilled? I have my own pricing service, maybe I have to check that the promotion of the discounted price is fulfilled before I show it?
And then I add one regular product and one free product, and then _promotionEngine.Run() gives me one fulfilled promotion.
I remove the regular one and still I get one fulfilled promotion until I remove the free one.
So the problem is that the free one should not be free unless there is one regular product in the cart?
Maybe I'm just missing some basics here?
Thanks!
/Kristoffer
With an empty cart:
_promotionEngine.Evaluate(variantLink, _currentMarket.GetCurrentMarket(), _currencyService.GetCurrentCurrency());
Gives me one Fulfilled RewardDescription, meanwhile this:
_promotionEngine.Run(cart, new PromotionEngineSettings(RequestFulfillmentStatus.All, false));
Gives me a list of RewardDescriptions where all item has the status NotFulfilled.
/Kristoffer
No, actually not.
The customer has a lot of older products they wanted to get rid of and if you buy something you are supposed to be able to add 2 of these items with the price 0. So the price on the items should be there regular price until you add something with a price.
I found one user error and that is that you could choose whatever from the whole catalog, including the free items, which of couse fulfills the promotion when I add one of the free ones.
So for testing purposes I have now changed it so that the user needs to buy one specific item and can then get items for free. And this is what happens, starting with an empty cart.
All free items have a price by default in the product listing. Correct.
I add the specific item in the cart but there is still a price for the free items in the listing, _promotionEngine.GetDiscountPrices() returns nothing for the free items. Not correct.
I add one free item to the cart, and in the checkout the price for the free item is 0, and the price is shown using lineItem.GetExtendedPrice(currency). Correct.
I remove the specific item, price is back in the cart for the free item. Correct.
Whether I have the specific item in the cart or not, these give me nothing:
var s = _promotionEngine.Evaluate(variantLink, _currentMarket.GetCurrentMarket(), _currencyService.GetCurrentCurrency(), RequestFulfillmentStatus.Fulfilled); var discountPrices = _promotionEngine.GetDiscountPrices(variantLink, _currentMarket.GetCurrentMarket(), _currencyService.GetCurrentCurrency());
why is
lineItem.GetExtendedPrice(currency)
giving me the discounted price not the two above?
Thanks!
/Kristoffer
Hi!
I still need some help to understand here:
I am a bit confused here how to use the promotion engine the correct way.
Thanks!
/Kristoffer
Hi!
I just tested this in Foundation, Commerce 13.21 and this must be a bug or I just totally misunderstand the functionality.
I'm using a "Buy Products for Discount from Other Selection" promotion.
1st case.
Buy at least 1 item of from these entries:
Fashion [catalog]
Include subcategories: Yes
Get these entries:
Morgan Sneaker [variant, SKU-42518256]
Morgan Sneaker [variant, Morgan-Sneaker_1]
At the following discount:
100% off
Price for Morgan Sneaker with an empty cart:
$0
Price in the cart with only one Morgan Sneaker in the cart:
$0
2nd case
Buy at least 1 item of from these entries:
Jodhpur Boot [variant, JODHPUR-BOOT_1]
Jodhpur Boot [variant, SKU-39813617]
Include subcategories: Yes
Get these entries:
Morgan Sneaker [variant, SKU-42518256]
Morgan Sneaker [variant, Morgan-Sneaker_1]
At the following discount:
100% off
Price for Morgan Sneaker with an empty cart:
$280, correct
Price for Morgan Sneaker with a Jodhpur Boot [variant, JODHPUR-BOOT_1] in the cart:
$280
Price for Morgan Sneaker in then cart with a Jodhpur Boot [variant, JODHPUR-BOOT_1] and a Morgan Sneaker in the cart:
$0
Both these examples are according to me wrong except $280 with an empty cart in the second case.
In the first case you can buy something for $0 even though the promotion says you have to buy at least one other thing.
You should not have to add an item to the cart to get the promotion price, how can you promote it in that case?
Maybe I'm on the wrong track here then please let me know how it should work, otherwise please look into this so that we can figure out how to use the promotion manager, thanks!
/Kristoffer
Hi!
I have a promotion where you can get products with 100% discount if you buy something else.
When I activate that campaign the products that you get for free gets the price 0 without that the criteria, "buy something else" eg adding another item to the cart, is fulfilled.
And if I add a regualar product and then a free product, the price is 0 which is correct, but when I remove the regular item from the cart the price is not updated for the free item?
What is the correct way to get the actuall current price? I mean the price should be 100 until I added a regular item to the cart and then get the price 0.
Is there a Epi standard way or do I need to check each product for active campaigns?
Thanks!
/Kristoffer | https://world.episerver.com/forum/developer-forum/Episerver-Commerce/Thread-Container/2020/11/discount-prices-are-shown-without-fulfilled-criteria/ | CC-MAIN-2020-50 | refinedweb | 1,055 | 51.58 |
(.+?[\\d\\,\\.]+)
Russia, Megafon: (Баланс \d+\.\d+руб\.) Italy, Wind: .*(\d+\.\d+).*euro.* Ewerything before letter б (from word "руб", Russian currency) and a dot after it: (.+?б\.) Vodafone: .+?([0-9\.]+EUR)
echo "1st number"
echo "name: %"
python /usr/lib/hildon-desktop/ussd-widget.py
python /usr/lib/hildon-desktop/ussd-widget.py 0
Comments
Size wrong
using it with PR1.2 (didn't use before) and the size is "wrong". the widget spans over a big rectangular area (to the right) and is transparent there (besides two or sometimes one vertical line that has the color of the widgets border). it is clickable in the transparent area.
As far as I know, all
Problem starting widget
Hi,
When trying to add widget to desktop nothing happens.
When tried to launch from terminal with: python /usr/lib/hildon-desktop/ussd-widget.py
I got the following messages:
Translation file for your language not found
IO error while reading config
Sintax error in USSD number.
What should I do?
I think, that this is related
I think, that this is related to this issue:...
So try rebooting your device. And search for widget on all desktops.
not working
i flashed my phone using the flasher, before flashing everything was ok ussd was working, but after the new installation i cannot see the ussd widget on desktop. i tried to restart, re-install the ussd widget and the OS but still i can't use it
any ideas?
I rebooted the device (for
I rebooted the device (for some other reason) and the widget appeared (even around 7 (how many times I tried "add widget") instances of it on my desktop :) )
So - the reboot was needed. Now it is working fine.
Thanks
Germany, Tschibo / O2:
Germany, Tschibo / O2: (.+?[\d\.]+)
What if I need to dial *556#
Thats all i need to do. dial *556# to check my credit level
and *555*123456789012# to load credit
i have loaded the widget and i see it on my desktop.
Im expecting to define or send some parameters to the widget so as to access these features from my network
Any advice......
--if we dont solve this , then we cant use our N900 in south africa during world cup ") ------
Tope
You can configure it as any
Tried the python commands and
Tried the python commands and the also dont work.:
import dbus
Import Error: No module named dbus
Please help.
i get thesame error message
i get thesame error message as above. can u help with detailed steps of how to resolve this.
Do you have the latest
There is unmet dependency,
There is unmet dependency, which I've forgot to add to package. You have to install it manually:
sudo gainroot
apt-get install python-dbus | http://kibergus.su/node/3 | CC-MAIN-2018-51 | refinedweb | 461 | 73.37 |
How to implement Minimum Edit Distance in Python
This Python tutorial helps you to understand what is minimum edit distance and how Python implements this algorithm. First, we will learn what is the minimum edit distance.
Definition :
Minimum Edit Distance gives you to the minimum number of operations required to change one string into another string. The operations involved are:-
- Insert
- Update
-
All the operations involve the same cost.
Example:-
Let’s say,
Input:
String 1 = ‘Cat’
String 2 = ‘Car’
Output: 1
The minimum number of operations required to change string 1 into string 2 is one. That is, to change the string 'Cat' into the string 'Car', we only need to update the letter 't' to 'r'.
Implementation of Minimum Edit Distance in Python
Source Code :
def edit_distance(str1, str2, a, b): string_matrix = [[0 for i in range(b+1)] for i in range(a+1)] for i in range(a+1): for j in range(b+1): if i == 0: string_matrix[i][j] = j # If first string is empty, insert all characters of second string into first. elif j == 0: string_matrix[i][j] = i # If second string is empty, remove all characters of first string. elif str1[i-1] == str2[j-1]: string_matrix[i][j] = string_matrix[i-1][j-1] # If last characters of two strings are same, nothing much to do. Ignore the last two characters and get the count of remaining strings. else: string_matrix[i][j] = 1 + min(string_matrix[i][j-1], # insert operation string_matrix[i-1][j], # remove operation string_matrix[i-1][j-1]) # replace operation return string_matrix[a][b] if __name__ == '__main__': str1 = 'Cats' str2 = 'Rats' print('No. of Operations required :',edit_distance(str1, str2, len(str1), len(str2))) str3 = 'Saturday' str4 = 'Sunday' print('No. of Operations required :',edit_distance(str3, str4, len(str3), len(str4)))
Output :
Case-1 :-
Input: str1 = 'Cats' str2 = 'Rats' No. of Operations required : 1
Case-2 :-
Input: str1 = 'Saturday' str2 = 'Sunday' No. of Operations required : 3
In Case-1, str1 =’Cats’ and str2 = ‘Rats’. To change ‘Cats’ into ‘Rats’, only one update operation is required. That means letter ‘C’ is replaced by letter ‘R’.
In Case-2, str3 = 'Saturday' and str4 = 'Sunday'. To change 'Saturday' to 'Sunday', three operations are required. That means the letters 'a' and 't' are deleted and 'r' is replaced by 'n'.
| https://www.codespeedy.com/minimum-edit-distance-in-python/ | CC-MAIN-2020-45 | refinedweb | 385 | 55.34
If you are here, you probably already know about, or want to learn, the debouncing practice used to improve web app performance.
Purpose of Debounce
Debouncing is the technique used to limit the number of times a function can be executed.
How does it work?
A debounce function waits until the function has stopped being called for a predefined amount of time, i.e. until the event firing becomes inactive, and only then fires.
Didn't get it? Sit tight, let's see what exactly the above statement means.
Debrief
Lets take an example of search bar in a e-commerce app.
Suppose a user wants to search for "school bag". The user starts typing letter by letter in the search bar, and after each letter an API call is made to fetch products for the current search text. In this example, 10 calls will be made from the browser to the server. Now think of the scenario where millions of users make the same kind of search, thereby making billions of API calls. Making a huge number of API calls at a time will definitely lead to slower performance.
Debouncing to the rescue.
lets mock this scenario , Lets create a search box on each key stroke it will call a getData Api , here we will not call an actual Api but lets console log a text.
Our HTML file
<!DOCTYPE html> <html> <head> <title>Parcel Sandbox</title> <meta charset="UTF-8" /> <script src="./src/index.js"></script> </head> <body> <div id="app"> <input type="text" id="userInput" /> </div> </body> </html>
our javascript file.
const inputBox = document.querySelector("#userInput"); function getData() { console.log("get Data api called "); } inputBox.onkeyup = getData;
the result:
Here you can see that normal execution makes a function call for each keyup event. If the function performs a heavy task like making an API call, this could become a costly operation with respect to server load and web app performance. Let's find a way to improve this using debouncing.
updated javascript code
const inputBox = document.querySelector("#userInput"); function getData() { console.log("get Data api called "); } const debounce = (fn, delay) => { let timer return (...args) => { clearTimeout(timer) timer = setTimeout(() => fn(...args), delay) } } const debouncedFunction = debounce(getData, 300); inputBox.addEventListener("keyup", () => { debouncedFunction(); });
(thanks to @lexlohr
for suggesting a straightforward implementation using modern javascript in the comment section).
The Result
The result is just wow!! We reduce so much load on the server and get a better performing web app.
Let's go through the code. A debounced function typically returns another function that wraps the original call in a setTimeout(). In the above code you might be wondering why we clear the timer with clearTimeout() first and then set it again with setTimeout(): this is what produces the delay, i.e. repeated calls keep clearing the timer, so the API call never happens until the gap between two calls is greater than the delay, which in this case is 300 milliseconds. So when a user starts typing, the function will only be called once the pause after the last keystroke exceeds the delay provided.
You might argue that what we achieved with debouncing can also be achieved with throttling. That wouldn't be wrong, but these two have some subtle differences and different use cases.
If you are wondering what throttling is, it is also a technique to reduce the number of times a function is called, but let's keep the differences and use cases for a different blog post.
Hope I made debouncing clear to you guys!! For any corrections or suggestions, please comment down below.
Till then Happy Javascripting ❤
Peace out ✌️
Discussion (7)
It's a bit more straight forward in modern JavaScript:
Indeed, and you can also make it a curried function, and even add types with JSDocs!
Thank you !!
, this is also a great suggestion.
thank you , this looks simpler.
I had to read the post twice to understand what you were saying. For clarity for other readers, the debouncedFunction fires on every keyup however the first thing that happens is to clear and reset the timer so the inner get data function does not get called until the outer function HAS NOT fired for 300ms.
I know you have said you will keep throttling for another post but the key difference is that throttling would fire the inner function immediately and then ignore any other calls during the timeout period.
Thanks for the feedback !. will definitely update the content to make it more understandable .
The title is a perfect explanation of what debounce is. "Debouncing in in javascript", you probably read that as "Debouncing in javascript" because your brain debounced the repeated "in". That is the gist of it, plain and simple. See repeating events that shouldn't be handled, wait until the last execution within a set timeframe and then act on it. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/ashishjshetty/ever-heard-of-debouncing-in-in-javascript-what-is-it-31o2 | CC-MAIN-2021-31 | refinedweb | 824 | 62.88 |
I tried to make a program that would take a txt file and respond to it. Found some tutorial and used the code from there with some modifications.
But when I run it, it just opens the dos with nothing in it.
Here's the code:
Code:
//This program should take a text file named VCC.txt and read it, put it to a string
//and then if it matches the given string, it should respond
//Includes
#include <fstream>
#include <iostream>
using namespace std;

int main()
{
    //The string that the text file will be read to
    char textfile[2000];
    //The file
    fstream file_op("Data\\VCC.txt", ios::in);
    //Get line and put it to char textfile
    file_op.getline(textfile, 2000);
    //If textfile is Hello, wich it is, it should respond with saying Hello back, but somehow this doesn't work
    if(textfile == "Hello")
    {
        cout << "Hello";
    }
    while(1) {}
}

Thanks | http://cboard.cprogramming.com/cplusplus-programming/80884-read-text-file-respond.html | CC-MAIN-2013-20 | refinedweb | 147 | 77.87
7 Third tutorial: Pollen markup & tag functions
Now you’re getting to the good stuff. In this tutorial, you’ll use Pollen to publish a multi-page article written in Pollen markup. You’ll learn about:
Adding tags & attributes with Pollen markup
Attaching behavior to tag functions
the "pollen.rkt" file
Using decode with Pollen markup
If you want the shortest possible introduction to Pollen, try the Quick tour.
7.1 Prerequisites
I’ll assume you’ve completed the second tutorial and that you understand the principles of Pollen authoring mode — creating source files, converting them to X-expressions, and then combining them with templates to make output files.
Because now it’s time to pick up the pace. You’ve learned how to do some handy things with Pollen. But we haven’t yet exploited the full fusion of writing environment and programming language. I promised you that The book is a program, right? So let’s do some programming.
7.2 Optional reading: Pollen markup vs. XML
You can skip this section if XML holds no interest. But Pollen markup evolved out of my attempt to come up with an alternative to XML that would be more usable for writing. So if you’re familiar with XML, the contrast may be helpful.
7.2.1 The XML problem
In the second tutorial, I argued that Markdown is a limiting format for authors. Why? Because Markdown is merely shorthand notation for HTML tags. As such, it has three problems: it’s not semantic, it only covers a limited subset of HTML tags, and it can’t be extended by an author.
These problems are partly limitations of HTML itself. And these limitations were meant to be cured by XML — the X stands for extensible. In principle, XML allows you to define whatever tags you like and use them in your document.
So why hasn’t XML taken over the world? In practice, XML promises more than it delivers. The reasons are apparent to any writer who’s attempted to use XML as an authoring format:
Verbose syntax. Unfortunately, XML relies on the same angle-bracket notation as HTML. If you think HTML source is hard to read, XML is worse. Since much of writing involves reading, this feature is also a major bug.
Validation overhead. Integral to XML is the concept of validation, which guarantees that a document meets certain formal criteria, usually defined in a schema. To get the full value from XML, you generally want to use validation. But doing so imposes a lot more work on you as an author, and removes much of the expressive potential of XML.
Masochistic document processing. I’m referring to XSLT, the preferred method of transforming XML documents. I know a little XSLT, so I’ll concede that there’s a method to its madness. But it’s still madness.
The nicest thing we could say about XML is that its intentions are good. It’s pointed toward the right goals. But its benefits are buried under atrocious ergonomics.
7.2.2 What Pollen markup does differently
Pollen markup can be seen as a way of reaping the benefits of XML without incurring the headaches. Like XML, Pollen markup allows you to freely tag your text. But unlike XML:
Simple syntax. Pollen markup follows the usual conventions of Pollen commands.
No structural validation. You can use any tags you want, in any order, and you needn’t define them ahead of time. Your document will still work.
Racket processing. Pollen markup tags can have behavior attached to them using Racket functions, either before you use them, or later.
7.2.3 “But I really need XML…”
You can have XML. There’s nothing wrong with using Pollen markup to generate XML files that can then be fed into an existing XML processing pipeline. In other words, using Pollen markup, you can treat XML as an output format rather than an input format.
In this tutorial, I’ll be rendering Pollen markup with an HTML template. But you could easily use the same workflow with an XML template and thus end up with XML files.
7.3 Writing with Pollen markup
Pollen markup is a free-form markup system that lets you add arbitrary tags and attributes to your text. By arbitrary, I mean that you needn’t constrain your tags to an existing specification (e.g., the tags permitted by HTML). You can — but that’s an option, not a requirement.
I like to think of Pollen markup as a way of capturing not just the text, but also my ideas about the text. Some of these are low-level ideas (“this text should be italicized”). Some are high-level ideas (“this text is the topic of the page”). Some are just notes to myself. In short, everything I know about the text becomes part of the text.
In so doing, Pollen markup becomes the source code of the book. Let’s try it out.
7.3.1 Creating a Pollen markup file
We’re going to use Pollen markup to make a source file that will ultimately become HTML. So consistent with the authoring-mode workflow we learned in the second tutorial, we’ll start with our desired output filename, "article.html", and then append the new Pollen markup suffix, which is ".pm".
In DrRacket, start a new file called "article.html.pm" like so (as usual, you can use any sample text you like):
Consistent with usual authoring-mode policy, when you run this file, you’ll get an X-expression that starts with root:
'(root "I want to attend RacketCon this year.")
Remember, even though the first line of the file is #lang pollen — same as the last tutorial — the new ".pm" suffix signals that Pollen should interpret the source as Pollen markup.
For instance, look what happens if you goof up and put Markdown source in a Pollen markup file, like so:
The Markdown syntax will be ignored, and pass through to the output:
'(root "I am **so** excited to attend __RacketCon__ this year.")
Restore the non-Markdown source, and let’s continue.
7.3.2 Tags & tag functions
Pollen markup uses the same Pollen command syntax that we first saw in Adding Pollen commands. Previously, we used this syntax to invoke functions like define and ->html. This consistency in syntax is deliberate, because Pollen markup is used to invoke a special kind of function called a tag function, which is a function that, by default, adds a tag to the text.
To see how this works, restore your "article.html.pm" file to its original state:
We can add any tag with Pollen markup, but for now, let’s start with an old favorite: em, which is used in HTML to add emphasis to text. We apply a tag by starting with the lozenge character (◊) followed by the tag name em, followed by the text in curly braces, like so:
Run this file in DrRacket and see the X-expression that results:
'(root "I want to attend " (em "RacketCon this year") ".")
You won’t be surprised to hear that you can nest tags within each other:
With the expected results:
'(root "I want to attend " (em "RacketCon " (strong "this") " year") ".")
7.3.3 Attributes
Attributes are like tags for tags. Each attribute is a key–value pair where the key is any name, and the value is a string. Anyone who’s seen HTML is familiar with them:
Here, class is an attribute for span that has value "author". And this is what it looks like as an X-expression:
'(span ((class "author")) "Prof. Leonard")
You can add any number of attributes to a tag (first as HTML, then as an X-expression):
'(span ((class "author")(id "primary")(living "true")) "Prof. Leonard")
In Pollen markup, attributes have the same logic, but a slightly different syntax. In keeping with the tag notation you just saw, the span tag is added in the usual way:
Then you have two options for adding attributes. The verbose way corresponds to how the attributes appear in the X-expression:
Each key–value pair is in parentheses, and then the list of pairs is within parentheses, with a quote (') at the front that signals that the text should be used literally.
But this is boring to type out, so Pollen also allows you to specify attributes with Racket-style keyword arguments:
In this form, each attribute name is prefixed with #:, indicating a keyword argument. As before, the attribute value is in quotation marks following the keyword name.
Both of these forms will produce the same X-expression:
'(span ((class "author")(id "primary")(living "true")) "Prof. Leonard")
Now that you know how to make tags and attributes, you might wonder whether Pollen markup can be used as a quick & dirty HTML-notation system. Sure — for a quick & dirty project, why not. Recall that X-expressions are just alternative notation for the standard angle-bracket notation used in HTML. So if you wanted HTML like this:
You could write it in Pollen markup like so:
◊div[#:class "red" #:style "font-size:150%"]{Important ◊em{News}}
And then convert it (using the ->html function) into the HTML above. Thus, the tags you already know (and love?) can be used in Pollen markup, but with fewer keystrokes and cruft.
Still, if Pollen markup were just an alternative notation system for HTML tags, it would be pretty boring. As I alluded above, that’s merely the simplest way to use it.
In the XML spirit, Pollen markup lets you use any tags you want. That’s considerably less boring.
7.3.4 Optional reading: What are custom tags good for?
XML jocks can skip this section, since you already know. But if you’ve been living in the Markdown / HTML lowlands, read on.
Tags, broadly speaking, are a means of annotating a text with extra information, which I’ll call metadata (using that term in its generic sense, not in any fiddly computery way). Metadata is the key tool that enables an author to write a book with the benefits of semantic markup and format independence.
7.3.4.1 Semantic markup
Semantic markup means adding metadata to text according to the meaning of the text, not merely its intended visual appearance. So rather than tagging RacketCon with an em tag, as we did above to indicate how the word should look, maybe we would tag it with an event tag, to indicate what kind of thing it is.
Semantic markup lets an author specify distinctions that would be ambiguous in pure visual terms, thereby capturing more meaning and intent. For instance, in books, italic styling is commonly applied to a number of unrelated types of information: emphasized words, movie titles, terms being used for the first time, headings, captions and labels, and so on. Under a non-semantic formatting scheme, perhaps one would tag them all em. But in semantic terms, one would tag them movie-title, first-use, heading, as appropriate.
This has two major benefits. First, by separating appearance and meaning, an author can manage the content of the book in useful ways. For instance, if every movie title were tagged as movie-title rather than italic, then it would be simple to generate a list of all movies mentioned in the book (for the author’s benefit) or a page index of movie references (for the reader’s benefit). But without that semantic tagging, a movie title couldn’t be distinguished from any other italicized text.
7.3.4.2 Format independence
The second benefit of custom tags is format independence, or the ability to change the rendering of the text to suit a particular device or context.
When a text is encrusted with format-specific visual tags — for instance, HTML tags — then the document markup is entangled with a single output format. If you only need one output format, fine.
But increasingly, book authors have been called upon to publish their work in multiple formats: paper and PDF, but also web, e-book, or other natively digital formats, that connect to devices with differing display capabilities.
Yes, I know that many of these formats are based on variants of HTML. But the HTML you can use in a desktop web browser is quite different from, say, the HTML you can use in a Kindle .mobi file. The .mobi file has other technical requirements too, like an .ncx and .opf file. So despite some genetic kinship, these HTML-ish formats are best understood as separate targets.
Using a display-driven model to manage this complexity is a terrible idea — as anyone who’s tried it can attest. Converting from one display-based file type to another — for instance, word processor to HTML, or HTML to PDF — is an exercise in frustration and drain-circling expectations.
This isn’t surprising. For a long time, text processing has been dominated by this display-driven model. Most word processors, like Microsoft Word and Pages, have been built around this model. It worked well enough in the era where most documents were eventually going to be printed on paper (or a paper simulator like PDF). HTML was a technical leap forward, but not a conceptual leap: it mostly represented the display options available in a web browser.
There’s a couple TeX fans at the back of the room, waving their arms. Yes, TeX got a lot of things right. In practice, however, it never became a core tool for electronic publishing (which, to be fair, didn’t exist when TeX was written). But plenty of ideas in Pollen have been lifted from TeX.
For a document to be format independent, two conditions have to be satisfied.
First, the document has to be readable by other programs, so they can handle the conversion of format-independent markup into a format-specific rendering (e.g., mapping semantic tags like movie-title onto visual tags like em). Most word-processor formats, like Word’s .docx, are bad for authoring because these formats are opaque and proprietary. We needn’t get into the political objections. As a practical matter, they’re inarguably restrictive — if you can’t get your data out of your file, you’re stuck.
Second, the document itself has to be represented in a way that’s independent of the particularities of any one format. For instance, HTML is a bad authoring format because it encourages authors to litter their text with HTML-isms like h1 and span. These have no meaning outside of HTML, and thus will always cause conversion problems. The same goes for Markdown, which is simply HTML in disguise.
The solution to the first condition is to use text-based markup rather than proprietary file types. The solution to the second condition is to let authors define custom tags for the document, rather than the other way around. Pollen markup incorporates both of these ideas.
7.3.5 Using custom tags
You can insert a custom tag using the same syntax as any other tag. Suppose you want to use an event tag to mark events. You would insert it like so:
This markup will turn into this X-expression:
'(root "I want to attend " (event "RacketCon") " this year.")
Which is equivalent to this HTML-ish markup:
In truth, Pollen doesn’t notice the differences among a custom tag, a standard HTML tag, or any other kind of tag. They’re all just markup tags. If you want to restrict yourself to a certain vocabulary of tags, you can. If you want to set up Pollen to enforce those restrictions, you can do that too. But by default, Pollen doesn’t impose restrictions like this. In general, you can pick any tag name you want, and it will work.
Don’t take my word for it. See what happens when you write this and run it:
One small but important exception to this rule. If you were wondering why I sometimes call them tag functions instead of just tags, it’s because under the hood, every tag is implemented as a function. The default behavior of this function is just to wrap the text in a tag with the given name.
The benefit of treating tags as functions will become evident later in this tutorial. But the cost of this approach is that tags occupy the same namespace as the other functions available in Pollen (and by extension, Racket). Meaning, if you try to use a tag name that’s already being used for an existing function, you’ll get an error.
For instance, suppose we try to use a custom tag called length:
When we run this file, we get an error:
length: contract violation;
expected: list?
given: "77km"
The problem is that Racket already has a function called length. Consistent with the usual rules of Pollen command notation, your command is interpreted as an attempt to invoke the length function, rather than apply a tag named length.
In practice, namespace clashes are rare. But if necessary, they’re easy to work around (for the simplest method, see Invoking tag functions).
7.3.6 Choosing custom tags
You just saw that using custom tags is easy. Choosing custom tags, on the other hand, is less science than art. As the author, it’s up to you. Some guidelines:
You’re never doing it wrong. I wanted to make sure you knew the case for semantic markup. But if your life would be easier just using HTML tags directly, go ahead.
Tag iteratively. Don’t worry about getting all your tags right the first time through. Just as you write and then rewrite, add the tags that seem right now, and change or augment them later, because …
Tags emerge from writing. It’s hopeless to try to specify all your tags in advance. As you write, you’ll learn things about the text, which will suggest new tags.
The best tag system is the one you’ll stick with. Tags aren’t free. It takes effort to insert them consistently. Don’t bother with an overambitious tag scheme that bores you more than it helps.
For boilerplate, tags are faster than text. If you find yourself repeatedly formatting certain text in a certain way — for instance, lists and tables — extract the content and wrap it in a tag that encapsulates the boilerplate.
And most important:
Tags are functions. As I mentioned above, every tag has a function behind it that uses the content of the tag as input. The default tag function just outputs the tag and its content. But you can replace this with any kind of function. So in practice, you can offload a lot of labor to tags.
As we’ll see in the next section, this is where your book truly becomes programmable.
7.4 Tags are functions
Don’t skip this section! It explains an essential Pollen concept.
If you’ve used HTML or XML, tags are just tags: things you type into the document that look the same going out as they did going in. Tags can be used to select document elements or assign styling (via CSS). But they don’t have any deeper effect on the document content.
That’s not so in Pollen. Under the hood, Pollen is just an alternate way of writing code in the Racket programming language. And tags, instead of being inert markers, are actually functions.
I think most of you know what a function is, but just to be safe — in programming, a function is a chunk of code that accepts some input, processes it, and then returns a value. Asking a function to process some data is known as calling the function.
Leading us to the Three Golden Rules of Pollen Tags:
Every Pollen tag calls a function with the same name.
The input values for that function are the attributes and elements of the tag.
The whole tag — tag name, attributes, and elements — is replaced with the return value of the called function.
Corollary to rule #3: because a tag represents a single X-expression, a tag function must also return a single X-expression. If you want to return multiple elements, you have to wrap them in a single X-expression.
Corollary to the corollary: you can use Pollen’s special splicing operator (@) as the tag of your return value to hoist its elements into the containing X-expression.
You’ve already seen the simplest kind of function in a Pollen document: the default tag function, which emulates the behavior of standard markup tags.
Let’s revisit an earlier example, now with the help of the Golden Rules:
What happens when you run this source? Working from the inside out, Pollen calls the tag function strong with the input "this". The result is (strong "this"). Then Pollen calls the tag function em with the three input values "RacketCon " (strong "this") " year", which yields (em "RacketCon " (strong "this") " year"). Finally, Pollen calls the tag function root with everything in the document, resulting in:
'(root "I want to attend " (em "RacketCon " (strong "this") " year") ".")
7.4.1 Attaching behavior to tags
Sometimes this default behavior will suffice. But other times, you’ll want to change the behavior of a tag. Why? Here are some useful examples of what you, as an author, can do with custom tag functions:
Automatically detect cross-references and add hyperlinks.
Pull in data from an external source.
Generate tables, figures, and other fiddly layout objects.
Change content based on given conditions.
Automatically detect line breaks, paragraphs, and lists.
Insert boilerplate text.
Anything annoying or repetitive.
Mathematical computations.
… and anything else you like to do with a programming language.
How do you change the behavior of a tag? Two steps:
Write a new function.
Give it the name of the tag.
Once you do this, this new behavior will automatically be invoked when you use the tag.
For example, let’s redefine the strong tag in our example above to simply print "BOOM":
When you run this file, you indeed get:
'(root "I want to attend " (em "RacketCon " "BOOM" " year"))
How does this work? Let’s look at our new function definition. As usual, we start with the lozenge character (◊) to denote a Pollen command. Then we use define to introduce a function definition. The name of the function comes next, which needs to match our tag name, strong. The expression (strong word) means “the name of this function is strong, and it takes a single word as input, which we’ll refer to as word.” Finally we have the return value, which is "BOOM".
This example defines the function with a Racket-style command. In this simple case, you could also use a Pollen-style command, e.g., ◊define[(strong word)]{BOOM}. But in general, defining functions with Racket-style commands is more flexible.
Let’s run this file again, but go back to the Golden Rules to understand what happens. Working from the inside out:
Pollen calls the function strong with the input "this" — same as before. But this time, the result of the strong function is not the X-expression (strong "this"), but simply "BOOM".
Then Pollen calls the function em with the three input values "RacketCon " "BOOM" " year". Because em is still a default tag function, it yields the X-expression (em "RacketCon " "BOOM" " year").
Finally, Pollen calls the root function with everything in the document.
The result:
'(root "I want to attend " (em "RacketCon " "BOOM" " year"))
This example is contrived, of course. But the basic idea — defining a function with the name of a tag — is the foundation of programmability in Pollen. If you get this, and the Golden Rules, you get everything.
7.5 Intermission
That was a lot of heavy material. But it also covered the most essential idea in Pollen: that every tag is a function. Congratulations on making it this far.
Experienced programmers might want to take a detour through Programming Pollen to understand more about what’s possible with tag functions.
The good news is that the rest of this tutorial will feel more relaxed, as we put these new principles to work.
Sorry that this tutorial is longer than the others, but truly — this is the stuff that makes Pollen different. If you’re not feeling enthusiastic by now, you should bail out.
Otherwise, get ready to rock.
7.6 Organizing functions
In the tag-function examples so far, we’ve defined each function within the source file where we used it. This is fine for quick little functions that are specific to a particular file.
But more often, you’ll want to use functions available in existing code libraries, and store your own functions so they can be available to other source files.
For now, we’re just invoking functions from within a Pollen markup file. But as you’ll see in the fourth tutorial, any function can be called from any kind of Pollen source file.
7.6.1 Using Racket’s function libraries
Any function in Racket’s extensive libraries can be used by loading the library with the require command. This will make its functions and values available in the current source file with the usual Pollen command syntax. For instance, suppose we want to use the value pi and function sinh from racket/math:
The result:
'(root "Pi is close to " "3.141592653589793" "." "\n" "The hyperbolic sine of pi is close to " "11.548739357257748" ".")
One caveat — you’re still in a Pollen markup file, so the return value of whatever function you call has to produce a string or an X-expression, so it can be merged into the document. That’s why we have number->string wrapping the numerical values. (This is similar to the restriction introduced in the first tutorial where functions used in preprocessor files had to produce text.)
If your functions produce incompatible results, you’ll get an error. For instance, look what happens when we remove number->string from the example above.
This will produce an error in DrRacket:
pollen markup error: in '(root "Pi is close to " 3.141592653589793 "." "\n" "The hyperbolic sine of pi is close to " 11.548739357257748 "."), 3.141592653589793 is not a valid element (must be txexpr, string, symbol, XML char, or cdata)
This code would not, however, produce an error if it were being run as a Pollen preprocessor file, because the prepreocessor automatically converts numbers to strings. If you’d like to verify this, change the suffix to .pp and run the file again.
7.6.2 Introducing "pollen.rkt"
Don’t skip this section! It explains an essential Pollen concept.
As you get more comfortable attaching behavior to tags using tag functions, you’ll likely want to create some functions that can be shared between multiple source files. The "pollen.rkt" file is a special file that is automatically imported by Pollen source files in the same directory (including subdirectories). So every function and value provided by "pollen.rkt" can be used in these Pollen files.
First, using "pollen.rkt" isn’t mandatory. Within a Pollen source file, you can always import functions and values with require (as seen in the previous section). "pollen.rkt" just makes it easier to propagate a set of common definitions to every Pollen source file in your project.
Second, notice from the ".rkt" suffix that "pollen.rkt" is a source file containing Racket code, not Pollen code. This is the default because while Pollen’s notation is more convenient for text-based source files, Racket’s notation is more convenient when you’re just dealing with code.
You can still use Pollen notation within a Racket source file. See pollen/mode.
Third, "pollen.rkt" always applies to Pollen source files in the same directory. But that’s the minimum scope for the file, not the maximum. Pollen source files nested in subdirectories will look for a "pollen.rkt" in their own directory first. But if they can’t find it, they’ll look in the parent directory, then the next parent directory, and so on. Thus, by default, a "pollen.rkt" in the root folder of a project will apply to all the source files in the project. But when you add a new "pollen.rkt" to a subdirectory, it will apply to all files in that subdirectory and below.
Though a subdirectory-specific "pollen.rkt" will supersede the one in the enclosing directory, you can still use (require "../pollen.rkt") to pull in definitions from above, and provide to propagate them into the current subdirectory. For instance, (provide (all-from-out "../pollen.rkt")) will re-export everything from the parent directory.
Let’s see how this works in practice. In the same directory as "article.html.pm", create a new "pollen.rkt" file as follows:
Here we use the define function (which we’ve seen before) to set author equal to "Trevor Goodchild". Note the final step: consistent with standard Racket rules, we have to explicitly provide the new value so that other files can see it (unlike Python, things you define in Racket are by default private, not public).
Then update good old "article.html.pm" to use our new author value:
Run this in DrRacket and you’ll get:
'(root "The author is " "Trevor Goodchild" ".")
Staying in the same dirctory, create a second Pollen source file:
Run this, and you’ll get:
'(root "The author is really " "Trevor Goodchild" "?")
That’s all there is to it. You see how the value provided by "pollen.rkt" is automatically available within both Pollen source files.
You can import functions, including tag functions, the same way. For instance, add a function for em:
We have a new bit of notation here. Notice that we defined our tag function as (em . elements) rather than (em word). The use of a dot before the last input argument makes it into a rest argument. This puts all the remaining input arguments — however many there are — into one list. In general, this is the best practice for tag functions, because you don’t usually know in advance how many elements will be passed to the function as input (for more about this, see The text body).
The txexpr function is a utility from the txexpr package (which is installed with Pollen). It builds a new X-expression from a tag, attribute list, and list of elements.
Then we use our new tag function in a source file:
With the expected results:
'(root "The " (extra-big "author") " is " (extra-big "Trevor Goodchild") ".")
By the way, if you just want to provide everything in "pollen.rkt", you can use the all-defined-out shorthand:
7.7 Decoding markup with a root tag function
As you’ve seen, the X-expression you get when you run a Pollen markup file always starts with a tag called root. You can attach a custom tag function to root the same way as any other tag — by creating a new function and calling it root.
For instance, you could do something simple, like change the name of the output X-expression:
Resulting in:
'(content "The " (code "root") " tag is now called " (code "content") ".")
Unlike other tags in your document, root contains the entire content of the document. So the function you attach to root can operate on everything.
For that reason, one of the most useful things you can do with a tag function attached to root is decoding the content of the page. By decoding, I mean any post-processing of content that happens after the tags within the page have been evaluated.
Decoding is a good way to automatically accomplish:
Detection of linebreaks, paragraphs, and list items based on whitespace.
Hyphenation.
Typographic optimizations, like smart quotes, dashes, and ligatures.
Gathering data for indexing or cross-referencing.
Any document enhancements a) that can be handled programmatically and b) that you’d prefer not to hard-code within your source files.
As an example, let’s take one of my favorites — linebreak and paragraph detection. In XML & HTML authoring, you have to insert every <br /> and <p> tag by hand. This is profoundly dull, clutters the source file, and makes editing a chore.
Instead, let’s make a decoder that allows us to denote a linebreak with a single newline in the source, and a paragraph break with a double newline. Here’s some sample content with single and double newlines:
Because we don’t yet have a decoder, these newlines just get passed through:
'(root "The first line of the 'first' paragraph." "\n" "And a new line." "\n" "\n" "The second paragraph --- isn't it great.")
When this X-expression is converted to HTML, the newlines will persist:
But in HTML, raw newlines are displayed as a single space. So if you view this file in the project server, you’ll see:
Not what we want.
So we need to make a decoder that will convert the newlines in our source into line breaks and paragraph breaks on the HTML output side. To do this, we use the decode-elements function, which provides hooks to process categories of content within the document.
Add a basic decode-elements to the source file like so:
Here, we’ll keep the tag name root, leave the attributes as empty, and pass through our decoded list of elements.
Racket jocks: you could also write this using quasiquote and unquote-splicing syntax as `(root ,@(decode-elements elements)). The txexpr package is just an alternate way of accomplishing the task.
If you run this file, what changes? Right — nothing. That’s because by default, decode-elements will let the content pass through unaltered.
We change this by giving decode-elements the name of a processing function and attaching it to the type of content we want to process. In this case, we’re in luck — the decode module already contains a decode-paragraphs function (that also detects linebreaks). We add this function using the keyword argument #:txexpr-elements-proc, which is short for “the function used to process the elements of a tagged X-expression”:
Now, when we run the file, the X-expression has changed to include two p tags and a br tag:
'(root (p "The first line of the 'first' paragraph." (br) "And a new line.") (p "The second paragraph --- isn't it great."))
That means when we convert to HTML, we’ll get the tags we want:
So when we view this in the project server, the linebreaks and paragraph breaks are displayed correctly:
And a new line.
The second paragraph --- isn't it great.
Of course, in practice you wouldn’t put your decoding function in a single source file. You’d make it available to all your source files by putting it in "pollen.rkt". So let’s do that now:
We’ll also restore the source of "article.html.pm" to its original, simplified state:
This time, "article.html.pm" will pull in the tag function for root from "pollen.rkt". Otherwise, the code hasn’t changed, so the result in the project server will be the same:
And a new line.
The second paragraph --- isn't it great.
But wait, those straight quotes look terrible. Also, three hyphens for an em dash? Barbaric.
Let’s upgrade our decoder to take of those. In pollen/misc/tutorial I’ve stashed the two functions we’ll need for the job: smart-quotes and smart-dashes.
This time, however, we’re going to attach them to another part of decode-elements. Smart-quote and smart-dash conversion only needs to look at the strings within the X-expression. So instead of attaching these functions to the #:txexpr-elements-proc argument of decode-elements, we’ll attach them to #:string-proc, which lets us specify a function to apply to strings:
Because #:string-proc only accepts one function (not two), we need to use compose1 to combine smart-quotes and smart-dashes into one function (compose1, from the Racket library, creates a new function that applies each function in its argument list, from right to left).
Now, if we run "article.html.pm" in DrRacket, we can see the effects of the new decoder functions. The quotes are curled, and the three hyphens become an em dash:
'(root (p "The first line of the ‘first’ paragraph." (br) "And a new line.") (p "The second paragraph—isn’t it great."))
And of course, this shows up in the project server too:
And a new line.
The second paragraph—isn’t it great.
By the way, decoding via the root tag is often most convenient, but you don’t have to do it that way. Decoding is just a special thing you can do inside any tag function. So you can make a decoder that only affects a certain tag on the page. Or you can make multiple decoders for different tags. The advantage of using a decoder with root is that it can affect all the content, and since it’s attached to the root node, it will always be the last tag function that gets called.
7.8 Putting it all together
For this final example, we’ll combine what we’ve learned in the first three tutorials. Though this project is still simple, it summarizes all the major concepts of Pollen.
It also provides a recipe you can adapt for your own projects, whether small or large. For instance, Butterick’s Practical Typography and Typography for Lawyers follow this core structure.
As we go through the ingredients, I’ll review the purpose of each. Save these files into a single project directory with the project server running.
7.8.1 The "pollen.rkt" file
This file provides functions that are automatically imported into Pollen source files in the same directory. It’s written in standard Racket. The "pollen.rkt" file is optional — without it, your tags will just be treated as default tag functions. But you’ll probably find it a convenient way to make tag functions available within your project, including a decode function attached to root.
Here, we’ll use the "pollen.rkt" we devised in the previous section to set up decoding for our source files:
7.8.2 The template
When you’re using Pollen authoring mode for your content — using either Markdown syntax, or Pollen markup — your source files will produce an X-expression. To convert this X-expression into a finished file, you need to use a template.
By default, when Pollen finds a source file called "filename.ext.pm" or "filename.ext.pmd", it will look for a template in your project directory called "template.ext", where ".ext" is the matching output extension.
In this project, we want to end up with HTML, so our source files will be called "filename.html.pm", and thus we need to make a "template.html". Let’s use a modified version of the one we made in the second tutorial. As we did then, let’s add the null extension to clearly indicate it’s an input file, so the whole name is "template.html.p":
7.8.3 The pagetree
A pagetree defines sequential and hierarchical relationships among a set of output files. The pagetree is used by the template to calculate navigational links (e.g., previous, next, up, etc.) A pagetree is optional — if you don’t need navigation in your project, you don’t need a pagetree.
But in this project, we do want navigation. So we’ll add an "index.ptree" file like so:
7.8.4 A CSS stylesheet using the preprocessor
Our template file above refers to a CSS file called "styles.css". When resolving linked files, the project server makes no distinction between static and dynamic files. If there’s a static file called "styles.css", it will use that.
Or, if you make a preprocessor source file called "styles.css.pp", it will be dynamically rendered into the requested "styles.css" file. The preprocessor will operate on any file with the ".pp" extension — so a preprocessor source called "filename.ext.pp" will be rendered into "filename.ext". (The corollary is that preprocessor functionality can be added to any kind of text-based file.)
Preprocessor source files, like authoring source files, get access to everything in "pollen.rkt", so you can share common functions and variables.
Let’s use an improved version of the dynamic CSS file we made in the first tutorial.
7.8.5 The content source files using Pollen markup
With the scaffolding in place, we need the content. Our pagetree contains three output files — "burial.html", "chess.html", and "sermon.html". We’re going to make these output files using Pollen markup. So we’ll create three source files and name them by adding the ".pm" source extension to each of the output names — thus "burial.html.pm", "chess.html.pm", and "sermon.html.pm", as follows (and with apologies to T. S. Eliot):
7.8.6 The result
Now visit the project server and view "burial.html", which should look something like this (the box will expand to fit your browser window):
Click the navigational links at the top to move between pages. I encourage you to change the source files, the style sheet, the template, or "pollen.rkt", and see how these changes immediately affect the page rendering in the project server. (You can also change the sequence of the pages in "index.ptree", but in that case, you’ll need to restart the project server to see the change.)
This page isn’t a miracle of web design. But it shows you in one example:
Pollen markup being decoded — paragraph breaks, linebreaks, smart quotes, smart dashes — with a decode function attached to the root node by "pollen.rkt".
A CSS file generated by the Pollen preprocessor that computes positions for CSS elements using numerical values set up with define, and mathematical conversions thereof.
Navigational links that appear and disappear as needed using conditional statements (when/splice) in "template.html.p", with the page sequence defined by "index.ptree" and the names of the links being pulled from the h1 tag of each source file using select.
7.9 Third tutorial complete
OK, that was a humongous tutorial. Congratulations on making it through.
But your reward is that you now understand all the core concepts of the Pollen publishing system, including the most important ones: the flexibility of Pollen markup, and the connection between tags and functions. | https://docs.racket-lang.org/pollen/third-tutorial.html | CC-MAIN-2018-34 | refinedweb | 7,115 | 73.17 |
Java console IO is done using class, System.in, java.util.Scanner, java.io.Console and java command line arguments, learn how to use these Classes and functions to work java with console.
We: Do you know Java IO ?
you: yes, I know about byte and character streams.
We: Oh, okay, do you know about reading and writing on console ?
You: we know about writing but not about the reading from console.
We: okay, In this tutorial we are going to introduce you with something more than print and println.
The Standard streams are the part of Java language specification to its core. Java uses three streams to interact with your command line interpreter. System.in, System.out and System.err. These are the streams which works with the console to provide you the command line interaction. You must have used System.out in our first ever program in Java, but you might not know about the System.in.
All three are defined as a part of PrintStream, which by nature is a Byte Stream but on a top level System.out and System.err have a character support but you will have to manually wrap the System.in into InputStreamReader to use the character stream properties. But the suggestion is to use the buffer properties so you should also wrap it in BufferedReader it would make the program more efficient.
Check out the following program, and Please pay attention to the program, there is another thing you will notice in the way we print the output.
import java.io.*; class ConsoleIO{ public static void main(String[] args) throws IOException{ BufferedReader is = new BufferedReader(new InputStreamReader(System.in)); System.out.print("Please enter any input: "); String s = is.readLine(); System.out.format("you entered: %s",s); } }
You see, first we have wrapped System.in inside InputstreamReader and this to BufferedReader, this way we can use the properties of character stream and buffer. The output will be like following.
Please enter any input: hello world you entered: hello world
Now you see, we have used print() which is an overloaded method that will output the argument passed to the console. There is another method println() which will print the output and move to the next line. The format work somewhat like the printf() function you might have used in 'C' programming language. Well if you don't know C not to worry. The format is used to format the string. you define the format specifier which is replaced by the corresponding argument.
%d for integer
%f for float
%s for string
%c for character etc.
well that is just one of the way in which you can interact with the console. There is more to it. Next we are going to demonstrate you the working of main(String[] args). After all, its not useless.
So to get you started, the String[] args is a string type array, which you can provide to the main method when you run the program. Check out this program. This type of parameter is known as command line arguments.
class ConsoleIO{ public static void main(String[] args){ if(args.length>0){ for(String s : args){ System.out.println(s); } } else{ System.out.println("You did not passed any command line arguments"); } } }
To demonstrate the working we have run it in two test cases, as you may see below.
$ javac ConsoleIO.java $ java ConsoleIO You did not passed any command line arguments $ java ConsoleIO hello world hello world
There we have used a special function of Java language, you can see it as overloaded for loop, in which you can Iterate over the arrays. When you want to pass some initial arguments to the program.
Beside the Command line argument there is yet another way to take the input, that is Console. you can find this class in java.io package. Check out the following program to see the working.
import java.io.Console; class ConsoleIO{ public static void main(String[] args){ Console c = System.console(); String s = c.readLine("Enter a string: "); System.out.println("the string you entered is: "+s); } }
$ java ConsoleIO Enter a string: hello world the string you entered is: hello world
This class also provide you with more methods which also include the Password input. you can check out complete details of the class using command
javap java.io.Console
Note: there is another class known as Scanner which is found in java.util.Scanner, you can also use this class to take input from console however it is very less use. For the efficient program and better execution time the First method, that is System.in is suggested. | http://www.examsmyantra.com/article/58/java/java-io-console-input-and-output | CC-MAIN-2019-09 | refinedweb | 777 | 66.84 |
Created on 2009-02-08 16:36 by mark.dickinson, last changed 2009-02-13 20:23 by rhettinger. This issue is now closed.
In the issue 5169 discussion, Antoine Pitrou suggested that for an object
x without a __hash__ method, id()/8 might be a better hash value than
id(), since dicts use the low order bits of the hash as initial key, and
the 3 lowest bits of an id() will always be zero.
Here's a patch.
Here are some timings for dict creation, created with the attached script.
They're not particularly scientific, but they at least show that this one-
line optimization can make a significant difference.
Typical results on my machine (OS X 10.5.6/Intel), 32-bit non-debug build
of the trunk (r69442): before
dict creation (selected): 1.95572495461
dict creation (shuffled): 1.98964595795
dict creation: 1.78589916229
and after:
dict creation (selected): 1.7055079937 # (14% speedup)
dict creation (shuffled): 1.5843398571 # (25% speedup)
dict creation: 1.32362794876 # (34% speedup)
BTW, all tests pass on my machine with this patch applied.
The code path for SIZEOF_LONG < SIZEOF_VOID_P could probably also
benefit from this optimization by casting the pointer to a size_t (this
will effect 64-bit Windows, where long is 32 bits but pointers are 64 bits).
(unfortunately it seems the 64-bit Windows buildbot has disappeared)
Benchmark results on my machine (64-bit Linux, gcc 4.3.2, AMD X2 3600+):
Before:
dict creation (selected): 5.09600687027
dict creation (shuffled): 5.66548895836
dict creation: 3.72823190689
After:
dict creation (selected): 4.57248306274 (10% speedup)
dict creation (shuffled): 4.81948494911 (15% speedup)
dict creation: 2.43905687332 (35% speedup)
I observe even greater speedups (15%/20%/37%) on set creation. Here is
the updated benchmark script.
Some comments, while I remember:
* the argument to _Py_HashPointer is not always divisible by 8. It's
called to create hashes for various objects, including methodobjects; see
the line:
y = _Py_HashPointer((void*)(a->m_ml->ml_meth));
in meth_hash in methodobject.c, for example; here ml_meth is a C function
pointer. I can't see how this could be a problem, though, especially as
is seems very unlikely that two function pointers could be less than 8
bytes apart.
* following from the above, it's also pretty unlikely that any two object
pointers will be less than 16 bytes apart, so it might be worth seeing if
performance with >>4 is noticeably any different from with >>3.
* we should make sure that the value returned by _Py_HashPointer isn't the
illegal hash value -1 (though it's difficult to see how it could be). One
safe way to do this that shouldn't cost any CPU cycles would be to cast to
unsigned long before the right shift, to be sure that the right shift
zero-extends instead of sign-extending, so that the result is guaranteed
nonnegative.
* It *would* be nice to do something about the SIZEOF_LONG < SIZEOF_VOID_P
case: the current conversion to a PyLong seems like quite an expensive way
to go. And I've just ordered a computer with 64-bit Windows installed...
Some tests on py3k (32-bit build):
>>> l = [object() for i in range(20)]
>>> [id(l[i+1]) - id(l[i]) for i in range(len(l)-1)]
[16, -96, 104, 8, 8, 8, 8, 8, -749528, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
>>> class C(object):
... __slots__ = ()
...
>>> l = [C() for i in range(20)]
>>> [id(l[i+1]) - id(l[i]) for i in range(len(l)-1)]
[-104, 24, 8, 8, 8, 8, 8, 8, 8, 8, 8, 16, 8, 8, 8, 8, 16, -8, 16]
>>> class C(object):
... __slots__ = ('x')
...
>>> l = [C() for i in range(20)]
>>> [id(l[i+1]) - id(l[i]) for i in range(len(l)-1)]
[432, 24, -384, 408, 24, 24, -480, 528, 24, 24, 24, 24, 48, -360, 504,
24, -480, 552, 24]
So, as soon as an user-defined type isn't totally trivial, it is
allocated in at least 24-byte memory units. Shifting by 4 shouldn't be
detrimental performance-wise, unless you allocate lots of purely empty
object() instances...).
Le mardi 10 février 2009 à 21:18 +0000, Antoine Pitrou a écrit :
>).
I have found the answer. The PyGC_Head forces its own alignment using a
"long double" dummy, which in 64-bit mode (Linux / gcc) wastes 8 bytes
between the end of the PyGC_Head and the PyObject itself.
(SIZEOF_LONG_DOUBLE is 16 in pyconfig.h)
On my 64-bit linux box there's nothing in the last 4 bits:
>>> [id(o)%16 for o in [object() for i in range]
And with a bit more complicated functions I can determine how much shift
gives us the lowest collision rate:
def a(size, shift):
return len(set((id(o) >> shift) % (size * 2) for o in [object() for
i in range(size)]))
def b(size):
return [a(size, shift) for shift in range(11)]
def c():
for i in range(1, 9):
size = 2**i
x = ', '.join('% 3s' % count for count in b(size))
print('% 3s: %s' % (size, x))
>>> c()
2: 1, 1, 1, 2, 2, 1, 1, 1, 2, 2, 2
4: 1, 1, 2, 3, 4, 3, 2, 4, 4, 3, 2
8: 1, 2, 4, 6, 6, 7, 8, 6, 4, 3, 2
16: 2, 4, 7, 9, 12, 13, 12, 8, 5, 3, 2
32: 4, 8, 14, 23, 30, 25, 19, 12, 7, 4, 2
64: 8, 16, 32, 55, 64, 38, 22, 13, 8, 4, 2
128: 16, 32, 64, 114, 128, 71, 39, 22, 12, 6, 3
256: 32, 64, 128, 242, 242, 123, 71, 38, 20, 10, 5
The fifth column (ie 4 bits of shift, a divide of 16) works the best.
Although it varies from run to run, probably more than half the results
in that column have no collisions at all.
.. although, if I replace object() with list() I get best results with a
shift of 6 bits. Replacing it with dict() is best with 8 bits.
We may want something more complicated.
Upon further inspection, although a shift of 4 (on a 64-bit linux box)
isn't perfect for dict, it's fairly close to it and well beyond random
hash values. Mixing things more is just gonna lower it towards random
values.
>>> c()
2: 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 2
4: 1, 1, 2, 3, 4, 3, 3, 2, 2, 2, 3
8: 1, 2, 4, 7, 8, 7, 5, 6, 7, 5, 5
16: 2, 4, 7, 11, 16, 15, 12, 14, 15, 9, 7
32: 3, 5, 10, 18, 31, 30, 30, 30, 31, 20, 12
64: 8, 14, 23, 36, 47, 54, 59, 59, 61, 37, 21
128: 16, 32, 58, 83, 118, 100, 110, 114, 126, 73, 41
256: 32, 64, 128, 195, 227, 197, 203, 240, 253, 150, 78
Instead of a shift, how about a rotate or byteswap in case the lower
bits ever become significant again in some build.
The alignment requirements (long double) make it impossible to have
anything in those bits.
Hypothetically, a custom allocator could lower the alignment
requirements to sizeof(void *). However, rotating to the high bits is
pointless as they're the least likely to be used — impossible in this
case, as only the 2 highest bits would contain anything, and for that
you'd need a dictionary with at least 2 billion entries on 32bit, which
is more than the 32bit address space. 64-bit is similar.
Note that mixing the bits back in, via XOR or similar, is actually more
likely to hurt than help. It's just like ints and strings, who's hash
values are very sequential, a simple shift tends to get us sequential
hashes. This gives us a far lower collision rate than a statistically
random hash.
[Adam Olsen]
> The alignment requirements (long double) make it impossible to have
> anything in those bits.
Not necessarily, since not all pointers passed to _Py_HashPointer come
from a PyObject_Malloc. _Py_HashPointer is used for function pointers
as well. For example, on 64-bit linux I get:
Python 2.7a0 (trunk:69516, Feb 11 2009, 10:43:51)
[GCC 4.2.1 (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from hashlib import sha224
>>> hash(sha224)
47100514454970
>>> hash(sha224) % 16
10
> for that you'd need a dictionary with at least 2 billion entries
> on 32bit,
<nitpick> If I recall correctly, the higher bits of the hash value also
get used in collision resolution: they're mixed in to the algorithm
that's used to produce the probing sequence when looking for an empty
slot. So the dictionary wouldn't necessarily have to be quite that
large for the top bits to come into play. </nitpick>
But I agree that mixing the bottom bits back in (via rotate, or xor, or
whatever) doesn't seem likely to help.
Le mercredi 11 février 2009 à 03:31 +0000, Adam Olsen a écrit :
>
> .. although, if I replace object() with list() I get best results with a
> shift of 6 bits. Replacing it with dict() is best with 8 bits.
But list() and dict() don't use id() for hash.
Here's an updated patch, that errs on the conservative side:
- rotate instead of shifting, as suggested by Raymond. This costs
very little, and I admit to feeling uncomfortable about the
possibility of just throwing bits away
- explicit check for -1
- special case for sizeof(void *) = 2*sizeof(long)
All tests pass with the patch applied. I've left the 'convert to
PyLong' code in as a safety net: it's used on platforms where
sizeof(void *) > sizeof(long) but sizeof(void *) != 2*sizeof(long). I
don't know of any such platforms in current use.
Sample timings on 64-bit linux (non-debug trunk build, Core 2 Duo).
before:
dict creation (selected): 1.18751096725
dict creation (shuffled): 1.21234202385
dict creation: 1.00831198692
set creation (selected): 0.869561910629
set creation (shuffled): 0.867420911789
set creation: 0.77153301239
and after:
dict creation (selected): 1.06817317009
dict creation (shuffled): 0.987659931183
dict creation: 0.662216901779
set creation (selected): 0.735805034637
set creation (shuffled): 0.659453868866
set creation: 0.445232152939
Antoine, I only meant list() and dict() to be an example of objects with
a larger allocation pattern. We get a substantial benefit from the
sequentially increasing memory addresses, and I wanted to make sure that
benefit wasn't lost on larger allocations than object().
Mark, I concede the point about rotating; I believe the cost on x86 is
the same regardless.
Why are you still only rotating 3 bits? My results were better with 4
bits, and that should be the sweet spot for the typical use cases.
Also, would the use of size_t make this code simpler? It should be the
size of the pointer even on windows.
I'm fine with rotating 4 bits instead of 3, especially if the timings look
good on 32-bit as well as 64-bit.
We should really benchmark dict lookups (and set membership tests) as well
as dict creation..
I'm *much* more comfortable with a byte-swap, rotation, or xoring-in
upper bits than with shifts that potentially destroy entropy.
Otherwise, your taxing apps that build giant sets/dicts and need all
distinguishing bits to avoid collision pile-ups.
> I'm *much* more comfortable with a byte-swap, rotation, or xoring-in
> upper bits than with shifts that potentially destroy entropy.
> Otherwise, your taxing apps that build giant sets/dicts and need all
> distinguishing bits to avoid collision pile-ups.
Would (id() >> 4) + (id() & 15) be ok?
> At four bits, you may be throwing away information and I don't think
> that's cool.
The current patch *does* do a rotation: it doesn't throw away any
information.
Here is an updated patch. It uses a 4-bit shift and an addition. We
should avoid the use of logical or, because it makes the outputs
non-uniformly distributed ('1' bits are more likely).
Here is an updated benchmark script, for both object() and an
user-defined class, and adding dict lookup, set lookup and set difference.
Set difference is massively faster: up to 60% faster.
Guys, let's be careful. Make sure that efforts to randomize lower bits
don't destroy information. Something like x |= x>>8 is reversible and
fast. Other fun looking transformations are not:
>>> len(set((x >> 4) + (x & 15) for x in range(10**6)))
62515
Ok, updated patch:
- uses a 4-bit rotate (not shift)
- avoids comparing an unsigned long to -1
- tries to streamline the win64 special path (but I can't test)
>.
On the contrary, the expected collision rate for a half-full dictionary
is about 21%, whereas I'm getting less than 5%. I'm taking advantage of
the sequentiality of addresses, just as int and str hashes do for their
values.
However, you're right that it's only one use case. Although creating a
burst of objects for a throw-away set may itself be common, it's
typically with int or str, and doing it with custom objects is
presumably fairly rare; certainly not a good microbenchmark for the rest
of the interpreter.
Creating a list of 100000 objects, then shuffling and picking a few
increases my collision rate back up to 21%. That should more accurately
reflect a long-running program using custom objects as keys in a dict.
That said, I still prefer the simplicity of a rotate. Adding an
arbitrary set of OR, XOR, or add makes me uneasy; I know enough to do
them wrong (reduce entropy), but not enough to do them right.
[Antoine]
> Ok, updated patch:
> - uses a 4-bit rotate (not shift)
> - avoids comparing an unsigned long to -1
> - tries to streamline the win64 special path (but I can't test)
pointer_hash4.patch looks fine to me. Still, I think it's worth
considering the simpler and faster: x |= x>>4. The latter doesn't
require any special-casing for various pointer sizes. It just works.
[Adam]
> Adding an arbitrary set of OR, XOR, or add makes me uneasy;
> I know enough to do them wrong (reduce entropy), but not
> enough to do them right.
It's easy enough to prove (just show that the function is reversible)
and easy enough to test:
assert len(set(ids)) == len(set(map(f, set(ids)))) # for any large
group of ids
"x |= x>>4"
Are you (Ray) sure you didn't mean
"x ^= x>>4" ?
Testing with a large set of ids is a good demonstration, but not proof.
Forming a set of *all* possible values within a certain range is proof.
However, XOR does work (OR definitely does not) — it's a 1-to-1
transformation (reversible as you say.)
Additionally, it still gives the unnaturally low collision rate when
using sequential addresses, so there's no objection there.
David, yes, I did mean x ^= x>>4;
How embarrassing.
> > - avoids comparing an unsigned long to -1
Just out of interest, why? The cast is unnecessary: there's no ambiguity
or undefinedness (the int -1 gets promoted to unsigned long, with
wraparound semantics), and neither gcc nor MSVC complains.
Other than that, the patch looks fine to me; x ^= x >> 4 would be fine
too. I really don't see that it makes much difference either way, since
both transformations are reversible and fast enough.
Mark:
> Just out of interest, why? The cast is unnecessary: there's no ambiguity
> or undefinedness (the int -1 gets promoted to unsigned long, with
> wraparound semantics), and neither gcc nor MSVC complains.
Well, I had memories of a weird signed/unsigned problem (issue4935) and
I wasn't sure whether it could raise its head in the present case or
not.
Raymond:
> The latter doesn't
> require any special-casing for various pointer sizes.
The special casing is just there so as to make all pointer bits
participate in the final hash (which is what the original implementation
does). Otherwise we could just unconditionally cast to unsigned long.
> Well, I had memories of a weird signed/unsigned problem (issue4935) and
> I wasn't sure whether it could raise its head in the present case or
> not.
I'm 99.9% sure that it's not a problem here. If it is a problem then it
needs to be fixed in long_hash in longobject.c as well, which uses
exactly the same code.
The relevant section of the C99 standard is 6.3.1.8, para. 1 (try
googling for 'n1256' if you don't have a copy of the standard). The
only really nasty cases are those of the form unsigned_int +
signed_long, or more generally,
low_rank_unsigned_integer binary_op higher_rank_signed_integer
where the type of the expression depends on the relative sizes (not just
ranks) of the integer types, giving potential portability problems. And
there are similar problems with the integer promotions (6.3.1.1, para. 2).
I guess it comes down to personal taste, but my own preference is to
leave out casts where the conversion they describe is already implied by
the C language rules, adding them back in to silence compiler warnings
if necessary. I find it reduces noise in the code and makes the
important casts more visible, but chacun à son goût.
> Other than that, the patch looks fine to me; x ^= x >> 4 would be fine
> too.
I've just tried x ^= x >> 4 and the speedup is smaller on our
microbenchmark (time_object_hash.py). I conjecture that trying to
maintain the sequentiality of adresses may have beneficial cache
locality effects. Should we care?
+1 for checking in pointer_hash4.patch, provided Raymond approves.
Consider it approved. Though I prefer you switch to x ^= x >> 4.
> Though I prefer you switch to x ^= x >> 4.
Okay, how about this one? Short and sweet. No loss of information
except when sizeof(void *) > sizeof(long) (unavoidable until someone
finds a way to fit all 64-bit pointers into a 32-bit integer type...)
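As a rough illustration of the two candidate transformations, here is a Python model (only a sketch for intuition; the actual patch is C code inside CPython, and the 4-bit rotation amount is assumed here to match the shift discussed above):

    BITS = 64
    MASK = (1 << BITS) - 1

    def hash_xor(p):
        return (p ^ (p >> 4)) & MASK

    def hash_rotate(p):
        # rotate right by 4: the low, usually-zero alignment bits move to the
        # top of the word instead of being discarded
        return ((p >> 4) | (p << (BITS - 4))) & MASK

    # sequential, 16-byte aligned "addresses"
    addrs = [0x7f0000000000 + 16 * i for i in range(8)]
    print([hex(hash_rotate(a) & 0xff) for a in addrs])   # low byte now varies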
> Okay, how about this one?
Apart from the misindentation (the file should use tabs not spaces),
have you run the benchmark script with it?
Antoine, x ^= x>>4 has a higher collision rate than just a rotate.
However, it's still lower than a statistically random hash.
If you modify the benchmark to randomly discard 90% of its contents this
should give you random addresses, reflecting a long-running program.
Here's the results I got (I used shift, too lazy to rotate):
XOR, sequential: 20.174627065692999
XOR, random: 30.460708379770004
shift, sequential: 19.148091554626003
shift, random: 30.495631933229998
original, sequential: 23.736469268799997
original, random: 33.536177158379999
Not massive, but still worth fixing the hash.
> Apart from the misindentation
Apologies. My fault for editing Python files while at work, with a
substandard editor configuration...
> have you run the benchmark script with it?
I have now. See attached file for 3 sets of results (original, xor
version, and rotate) on 64-bit Linux/Core 2 Duo.
Summary: rotate is uniformly and significantly faster than xor; xor is
uniformly and significantly faster than the unpatched version.
Ok, so the rotate version is really significantly faster (and, as Adam
pointed out, it's also theoretically better).
Antoine, please check in a patch of your choice. I think we've beaten
this issue to death already. :-)
pointer_hash5_rotate.patch has been committed to trunk and py3k. Thanks!
Sweet! | http://bugs.python.org/issue5186 | CC-MAIN-2017-04 | refinedweb | 3,252 | 72.16 |
An update to let you know what breaking changes are planned for CMS Core, as we have done the last few years we plan for one breaking change release of CMS per year. With that said, please note that this is preliminary and subject to change. More details will be provided when we get closer to a release.
For many projects this will just require a re-build of the solution assuming you have stayed up to date with the continuous release process and continuously fixed the obsolete warnings that shows up.
New NuGet packages EPiServer.CMS.AspNet and EPiServer.Framework.AspNet will be introduced which will contain all API's related to web development on the ASP.NET stack (both MVC and WebForms). Existing namespaces will be preserved to keep the upgrade as smooth as possible.
The goal is that the EPiServer.CMS.Core and EPiServer.Framework NuGet packages will be .NET Standard 2 compliant and will not contain any API's related to ASP.NET development, and that they in the future can be used stand-alone (without a web.config).
You might have noticed that we already started obsoleting methods that have dependencies on ASP.NET in core API's to prepare for this split. So for example the CreatePropertyControl-method in PropertyData will be removed since it has a dependency on WebForms (System.Web.UI.Control), but you can prepare for this change by registering controls on startup instead. The direct dependency on StructureMap will likewise be moved out of the core packages into a new EPiServer.ServiceLocation.StructureMap package, so that EPiServer.CMS.Core and EPiServer.Framework no longer carry dependencies on StructureMap.
The legacy features Dynamic Content and XForms will be removed from the platform and moved into separate NuGet packages as add-ons instead (e.g. EPiServer.DynamicContent and EPiServer.XForms), each with its own version number and breaking changes. As the platform progresses these features will become more limited over time, so we recommend migrating to Forms or Blocks wherever possible.
The base class PropertyList&lt;T&gt; is an API that has been in beta for a long time and has no official documentation, but we know some projects are using it despite its shortcomings. We plan to make a few breaking changes so the API can properly support these property types, document what they do and do not support, and then remove the beta stamp from this class.
As an example we are changing how properties are imported/exported by moving logic previously locked into PropertyData such as ToRawProperty to external services that can have their own dependencies and are easier to customize per property type. There will be a separate blog post with more details.
Performance improvements up to 50% when content is loaded from the database, the results vary depending on the size of content types and the data being loaded. Besides optimizations of the API the larger behavioural change is that the CreateWritableClone-method is now also used when loading data from the database.
We have a list of bugs that we have not been able to fix since they are considered breaking according to semantic versioning.
A separate blog post about breaking changes in the UI will be published later.
We will publish pre-release versions on our NuGet site as soon as we have something that can be tested.
good stuff coming ;)
Another thing to happen is that we will require net461 (since that is the .NET Framework version conforming to .NET Standard 2.0).
Also, to clarify the update process for most scenarios: update the EPiServer.CMS.Core package (as usual), but also add new dependencies on the packages EPiServer.CMS.AspNet and EPiServer.ServiceLocation.StructureMap.
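In practice that should look something like the following Package Manager Console commands (package names as above; exact versions omitted, so treat this as a sketch rather than exact upgrade instructions):

    Update-Package EPiServer.CMS.Core
    Install-Package EPiServer.CMS.AspNet
    Install-Package EPiServer.ServiceLocation.StructureMap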
what if you don't install them? what's planned behavior?
If you have a project that has some asp.net dependency (which is probably 99% of your projects, for example if you depend on an MVC Controller or a WebForms Page) and do not install the EPiServer.CMS.AspNet package, then it will give you a compilation error saying something like "type PageController is missing". Ideally it would have been nice if we could have used type forwarding, but it will not work since the dependencies between the packages go the other way around...
If you do not install EPiServer.ServiceLocation.StructureMap then you will get an error during initialization saying something like "Either must InitializationEngine be instantiated with a IServiceLocatorFactory or an assembly with ContainerFactoryAttribute be present"
Exciting! :D
Great info!
Do you have any hints on when in 2017 version 11 will come?
Nice!
We were looking for .Net Core compatibility! Great stuff | https://world.episerver.com/blogs/Per-Bjurstrom/Archive/2017/8/planned-breaking-changes-2017-cms-core/ | CC-MAIN-2020-16 | refinedweb | 742 | 56.66 |
SEE Part 3, Corporations, S Corporations, Fiduciaries, Estates, and Trusts
The flashcards below were created by user raja_rabbit on FreezingBlue Flashcards.
A domestic limited liability company that has two or more members (without making other elections) is generally treated as a corporation for federal income tax purposes.
FALSE.
A domestic LLC with at least two members that does not file Form 8832 is classified as a partnership for federal income tax purposes.
Whenever a shareholder (or group of shareholders) makes a Section 351 property exchange for stock in a corporation, a statement of all facts relevant to the exchange must be attached to the individual(s) tax returns as well as to the corporate tax return in the year of the exchange.
TRUE.
Both the corporation and any person involved in a nontaxable exchange of property for stock must attach to their income tax returns a complete statement of all facts pertinent to the exchange.
A calendar-year corporation that uses the accrual method of accounting may not deduct a charitable contribution paid March 10, 2005, for tax year 2004.
FALSE. An accrual-method corporation can elect to deduct a charitable contribution in the year the board of directors authorizes it, provided the contribution is actually paid by the 15th day of the 3rd month after the close of that tax year; a payment made March 10, 2005 falls within that window for the 2004 calendar year.
Alpha Corporation owns 75% of the voting stock of Sky Net, Inc. Alpha Corporation’s stock ownership in Sky Net, Inc. also represents 75% of the total value of the stock. Sky Net, Inc. is a member of a controlled group with Alpha Corporation as the common parent.
FALSE. A parent-subsidiary controlled group exists only when the parent owns stock possessing at least 80% of the total voting power or at least 80% of the total value of the subsidiary's stock; Alpha Corporation's 75% ownership does not meet that threshold.
Weal, Inc. had taxable income in 2003 of $10,000. Due to a downturn in its core business operations, Weal, Inc. expects to suffer a tax loss in 2004. Weal, Inc. must still make installment payments of estimated tax for the 2004 year.
FALSE.
Because Weal Inc. will have a tax loss for 2004 it DOES NOT have to make installment payments of estimated tax for the 2004 year because it owes less than $500 in estimated taxes.
If a corporate distribution to a shareholder exceeds earnings and profits (both current and accumulated) and exceeds the shareholder’s basis in the corporate stock, the shareholder has a gain from the sale or exchange of property.
TRUE.
As long as the corporation has sufficient earnings and profits, distributions are treated as taxable dividends. Amounts that are not considered dividends because of inadequate earnings and profits are treated as nontaxable returns of capital to the extent of the shareholder's basis in the stock; any amount in excess of basis (as in this case) is a gain from the sale or exchange of property, normally capital gain if the stock is a capital asset.
If a distribution gives cash or other property to some shareholders and gives stock shares that increase the percentage of interest in the corporation’s assets or earnings and profits to other shareholders, then the distribution of the stock is treated as if it were a distribution of property.
TRUE.
A distribution of stock is treated as a distribution of property when some shareholders receive cash or other property while other shareholders receive stock that increases their proportionate interests in the corporation's assets or earnings and profits, which is exactly the situation described here.
Only cash distributed as part of a corporate liquidation should be reported on a Form 1099-DIV.
FALSE.
In a corporate liquidation, all liquidating distributions to shareholders, both cash and other property, are reported on Form 1099-DIV, not only cash.
Gain or loss generally is recognized on a liquidating distribution of assets as if the corporation sold the assets to the distributee at fair market value.
TRUE.
Amounts received by shareholders in complete liquidations of a corporation are considered as payment in full for their stock.
Each shareholder recognizes gain or loss equal to the difference between NET FAIR MARKET VALUE of the property received (fair market value of the property received less any liabilities assumed or taken subject to by the shareholder) and the basis of the stock surrendered.
ABC Corporation was formed in 1996 and has always been an S corporation. ABC Corporation may be liable for the excess net passive income tax in 2004 if it has passive investment income for the tax year that is in excess of 25% of gross receipts and has taxable income at year-end.
FALSE.
The excess net passive income tax applies only to an S corporation that has accumulated earnings and profits carried over from years in which it was a C corporation and whose passive investment income exceeds 25% of gross receipts. Because ABC Corporation has always been an S corporation, it has no accumulated C corporation earnings and profits, so it cannot be liable for this tax.
S corporation elections are made for periods of five years, which may be renewed.
FALSE. An S corporation election remains in effect until it is revoked or terminated; it is not made for a fixed five-year period and does not need to be renewed.
If an S corporation discharges a debt that it owes one if its shareholders, and that shareholder is required to report the amount as income, then the shareholder may increase his/her basis in the stock of the S corporation by the amount reported in income.
TRUE.
Remember that the S corporation OWES the debt to the shareholder. If it discharges this debt and the shareholder is required to report the amount as income, the shareholder's basis in the stock increases by the amount reported.
An estate of a domestic decedent or a domestic trust that had no tax liability for the full 12-month 2003 tax year is not required to make estimated tax payments in 2004.
TRUE.
Generally, you must pay estimated tax if the estate is expected to owe, after subtracting any withholding and credits, at least $1,000 in tax for 2011. You will not, however, have to pay estimated tax if you expect the withholding and credits to be at least: 1) 90% of the tax to be shown on the 2011 return, or 2) 100% of the tax shown on the 2010 return (assuming the return covered all 12 months). Because the trust had no tax liability for the full 12-month 2003 year, its withholding and credits are automatically at least 100% of the tax shown for 2003, so no estimated payments are required for 2004.
Generally, in determining the taxable income for most taxpayers Internal Revenue Code section 469 limits the deduction of losses from passive activities to the amount of income derived from all passive activities. For an estate or trust however, losses from a passive activity owned by the estate or trust can be used to offset portfolio (interest, dividends, royalties, annuities, etc.) income of the estate or trust in determining taxable income.
FALSE.
Passive Activity rules- A passive activity is any trade or business activity in which the tax payer does not materially participate. To determine material participation, see Publication 925. Rental activities are passive activities regardless of the taxpayer’s participation, unless the taxpayer meet certain eligibility requirements. Individuals, ESTATES, and TRUSTS can offset passive activity losses ONLY against PASSIVE ACTIVITY INCOME. Passive activity losses or credits not allowed in one tax year can be carried forward to the next year. That means that losses from a passive activity owned by the estate or trust CANNOT be offset by portfolio income (interest, dividends, royalties, annuities, etc.)
If you are the beneficiary of an estate that must distribute all its income currently, you must report your share of the distributable net income whether or not you actually received it.
TRUE.
If you are the beneficiary of an estate that is required to distribute all its income currently, you must report your share of the distributable net income, whether or not you have actually received the distribution.
If the executor of an estate elects the use of an alternate valuation date and then changes his/her mind, he/she can use the date of death as the valuation date by amending the estate tax return (Form 706) within 1 year of the date of death.
FALSE.
Generally, if you must file Form 706, the return is due within 9 months after the date of the decedent's death. The alternate valuation election is made on that return and, once made, is irrevocable, so the executor cannot switch back to date-of-death values by filing an amended return.
If a husband and wife both agree to gift splitting for gift tax purposes, the liability for the entire gift tax of each spouse is joint and several.
TRUE
With gift splitting, each gift made by either spouse is treated as made one-half by each of them, and each spouse is jointly and severally liable for the entire gift tax of both spouses.
A gift of property directly to an individual may be subject to the generation-skipping transfer tax, even if it is not subject to the gift tax.
FALSE.
GSTs have three forms: direct skip, taxable distribution, and taxable termination. A DIRECT SKIP is a transfer made during your life or occurring at your death that is: (a) subject to the gift or estate tax, (b) of an interest in property, and (c) made to a skip person. A gift of property made directly to an individual is a direct skip, so it can be subject to the generation-skipping transfer tax only if it is also subject to the gift or estate tax.
A grantor type trust is a legal trust under applicable state law that is not recognized as a separate taxable entity for income tax purposes.
TRUE.
A grantor type trust is a legal trust under applicable state law that is NOT recognized as a separate (trust) taxable entity for income tax purposes.
Bob Moon forms Moon Enterprises LLC (Limited Liability Company) during the year. What form must Moon Enterprises LLC file in order to elect to be taxed as a C corporation?
A. Form 1065 (U. S. Partnership Tax Return)
B. Form 8832 (Entity Classification Election)
C. Form 1120 (U. S. Corporation Income Tax Return)
D. Form 7004 (Application for Extension of time to file for corporations)
The answer is B.
This is quite straightforward: Form 8832. An eligible entity uses Form 8832 to elect how it will be classified for federal tax purposes, as a corporation, a partnership, or an entity disregarded as separate from its owner.
ABC Corporation is dissolved on July 9, 2004. What is the due date, without extensions, for the filing of the final corporate income tax return?
A. March 15, 2005
B. December 31, 2004
C. October 15, 2004
D. October 9, 2004
The answer is C.
A corporation that has dissolved must generally file its final return by the 15th day of the 3rd month after the date it dissolved.
Croaker, Inc. is a taxable domestic corporation. Dana Corporation, a large manufacturing corporation, owns 15% of Croaker, Inc.'s outstanding stock. In 2004, Dana Corporation received $100,000 in dividends from Croaker, Inc. Dana Corporation received no other dividends in 2004. Dana Corporation may deduct, within certain limits, what percentage of the dividends received?
A. 15%
B. 70%
C. 80%
D. 100%
The answer is B.
Dividends-received deduction: a corporation that owns less than 20% of the distributing corporation's stock can deduct, within certain limits, 70% of the dividends received (80% if it owns 20% or more). Because Dana Corporation owns only 15% of Croaker, Inc., the 70% deduction applies.
York, Inc. directly owns stock of Ajax Corporation. To determine if Ajax Corporation is a member of a controlled group with York, Inc. as the common parent, York, Inc. must own at least what percentage of the voting
and total value of the Ajax Corporation stock?
A. 51%
B. 75%
C. 80%
D. 100%
The answer is (C).
To be the common parent of a parent-subsidiary controlled group, York, Inc. must own stock possessing at least 80% of the total voting power or at least 80% of the total value of the Ajax Corporation stock.
The Lux Corporation incurred $10,000 in start-up costs when it opened for business in 2004. What is the minimum period over which these expenses can be recovered?
A. 12 months
B. 36 months
C. 60 months
D. 120 months
The answer is C.
For business start-up and organizational costs paid or incurred before October 23, 2004, you can elect an amortization period of 60 months or more.
Corporations generally must make estimated tax payments if they expect their estimated tax (income tax less credits) to be equal to or more than:
A. $1
B. $500
C. $600
D. $1,000
The answer is B.
Corporations generally must make estimated tax payments if they expect their estimated tax( income tax less credits) to be equal to or more than $500 through installment payments.
A corporate payer of an individual shareholder dividend does not have the taxpayer identification number for that shareholder. What backup withholding percentage rate must the corporate payer use for this shareholder’s dividend
payments?
A. 15%
B. 28%
C. 35%
D. 39%
The answer is B.
You generally must apply backup withholding at a 28% rate (the rate in effect for 2004) to reportable payments, such as dividends, made to a payee who has not furnished a taxpayer identification number.
The board of directors of Walden Corporation authorized a year end distribution to its three shareholders. Each distribution would be equal in value but the shareholder could choose to receive the distribution in cash or corporate stock. If a shareholder chose to receive corporate stock, the distribution should be treated as:
A. A tax free distribution of stock
B. A distribution of property
C. A like-kind exchange
D. None of the above
The answer is B.
A distribution of stock is treated as a distribution of property if any shareholder can elect to receive cash or other property instead of stock. Because Walden Corporation's shareholders could choose between cash and stock, the stock received by a shareholder who chose it is treated as a distribution of property.
In 2000, Mark purchased 100 shares of Roman, Inc. for $10 per share. In 2004 Roman, Inc. completely liquidated and distributed $8,000 to Mark. Mark must report income from this distribution as:
A. Ordinary other income
B. Dividends
C. Capital gains
D. Return of capital
The answer is (C).
Liquidating distributions, sometimes called liquidating dividends, are distributions you receive during a partial or complete liquidation of a corporation. These distributions are, at least in part, one form of return of capital. They may be paid in one or more installments…Any liquidating distribution you receive is not taxable to you until you have recovered the basis of your stock, in this case ($1,000). After the basis of your stock has been reduced to zero, you must report the liquidating distribution as a capital gain. And so Mark must report income from this distribution as a capital gain
A fiduciary representing a dissolving corporation may file a request for prompt assessment of tax. Generally, this request reduces the time allowed for assessment to:
A. 12 months
B. 18 months
C. 24 months
D. 30 months
The answer is (B). A request for prompt assessment generally shortens the period in which the IRS may assess additional tax to 18 months from the date the request is received.
The basis of property you buy is usually its cost. In determining the acquisition basis in C corporation stock, a shareholder must know:
A. The amount paid in cash or property
B. The amount paid in cash and debt obligations
C. The value of provided services and debt obligations assumed
D. All of the above
The answer is (D).
The basis of property you buy is usually its cost. In determining the acquisition basis in C corporation stock, a shareholder must know:
· The amount paid in cash or property
· The amount paid in cash and debt obligations
· The value of provided services and debt obligations assumed
Which of the following conditions will prevent a corporation from qualifying as an S corporation?
A. The corporation has both common and preferred stock
B. The corporation has 70 shareholders
C. One shareholder is an estate
D. All of the above
The answer is (A).
In order to be an S corporation, the firm must have no more than 100 shareholders, be a domestic corporation, have only individuals, estates, certain trusts, and certain exempt organizations as shareholders, have no nonresident alien shareholders, and have only one class of stock (disregarding differences in voting rights); certain corporations, such as some financial institutions and insurance companies, are ineligible. Generally, a corporation is treated as having only one class of stock if all outstanding shares confer identical rights to distribution and liquidation proceeds. Because the corporation in (A) has both common and preferred stock, it has more than one class of stock and cannot qualify.
Which of the following statements regarding the built-in gains tax of an S corporation is true?
A. The built-in gains tax is treated as a loss sustained by the corporation during the same tax
year
B. S corporation built-in gains tax can be recognized only in the 10-year period beginning with the year the S election is made
C. S corporation built-in gains tax is passed through and paid at the shareholder level
D. None of the above
The answer is (B).
An S corporation may owe the tax if it has net recognized built-in gain during the applicable recognition period. The applicable recognition period is the 10-year period beginning:
· For an asset held when the S corporation was a C corporation, on the first day of the first tax year for which the corporation is an S corporation; or
· For an asset with a basis determined by reference to its basis (or the basis of any other property) in the hands of a C corporation, on the date the asset was acquired by the S corporation. So the answer regarding which is true on built-in gain taxes.
Which of the following items is not a separately stated item of a qualifying S corporation?
A. Interest income
B. Charitable contributions
C. Interest expense on business operating loans
D. Net long term capital gain
The answer is C. Interest expense on business operating loans is part of the corporation's ordinary business income computation on page 1 of Form 1120S, while interest income, charitable contributions, and net long-term capital gains are separately stated items passed through to the shareholders on Schedule K-1.
Which of the following statements regarding distributions from an S corporation is correct?
A. Property distributions are applied in a different manner than cash distributions
B. Absent an election, distributions are considered to come first from accumulated earnings and profits, if the corporation has accumulated earnings and profits from when it was a C corporation
C. A shareholder’s right to nontaxable distributions from previously taxed income may be transferred to another person
D. A distribution from the previously taxed income account is tax free to the extent of a shareholder’s basis in his/her stock in the corporation
The answer is (D).
Amounts that are not considered dividends because of inadequate earnings and profits are treated as nontaxable returns of capital to the extent of the shareholder's basis for the stock.
Pine Street Corporation is an S corporation. The Form
1120S for 2004 reflects a $3,500 ordinary loss. Mr.
Jones, the sole shareholder of Pine Street Corporation,
has a basis in the corporation at January 1, 2004, of
$1,500. Which of following statements is correct?
A. Mr. Jones may deduct a $3,500 loss on his
2004 return
B. Mr. Jones may deduct a $1,500 loss on his 2004 return and carry back a $2,000 loss to 2002
C. Mr. Jones may deduct a $1,500 loss on his 2004 return and carry forward a $2,000 loss indefinitely
D. Mr. Jones may deduct a $1,500 loss on his 2004 return and loses the remaining $2,000 loss
The answer is C. Chapter 12 pg. 24 of the book.
Each shareholder's distributive share of net losses may not exceed that shareholder's basis in the corporation. Any losses that exceed a shareholder's basis may be carried forward indefinitely to be used when the shareholder's basis is increased.
Which of the following statements regarding the termination of an S corporation election is true?
A. The election may be revoked with the consent of shareholders who, at the time the revocation is made, hold more than 50% of the number of issued and outstanding shares
B. The election may be revoked by the board of directors of the corporation only if they are not shareholders
C. The election terminates automatically if the corporation derives more than 25% of its gross receipts from passive investment income during the year
D. The election may be revoked by the Internal Revenue Service if there is a history of 10 years of operating losses
The answer is (A).
An S corporation election terminates if:
· The corporation is no longer a small business corporation as defined in section 1361(b). This kind of termination of an election is effective as of the day the corporation no longer meets the definition of a small business corporation.
· The corporation, for each of three consecutive tax years, (a) has accumulated earnings and profits and (b) derives more than 25% of its gross receipts from passive investment income as defined in section 1362(d)(3)(C). The election terminates on the first day of the tax year beginning after the third consecutive tax year, and the corporation must pay a tax for each year it has excess net passive income.
· The election is revoked. An election can be revoked only with the consent of shareholders who, at the time the revocation is made, hold more than 50% of the number of issued and outstanding shares of stock (including non-voting stock). The revocation can specify an effective revocation date that is on or after the day the revocation is filed. If no date is specified, the revocation is effective on the first day of the tax year if it is made by the 15th day of the 3rd month of that tax year, or on the first day of the following tax year if it is made later. And so the answer is (A).
Frank owned and operated a machine shop. He used the cash method of accounting. At the time of his death in 2004, Frank was owed $5,000 for work his shop had performed. This $5,000 amount was paid prior to Frank’s estate being settled. The sole beneficiary of the estate is Frank’s son Jim, but the $5,000 was not distributed to Jim before the settlement of Frank’s estate. The $5,000 must be included in the income of:
A. Frank’s final income tax return
B. Frank’s estate’s income tax return
C. The income tax return of beneficiary Jim
D. None of the above
The answer is (B)
If the decedent accounted for income under the cash method, only those items actually or constructively received before his death are included on the final return. The answer is (B) Frank’s estate’s income tax return
Snickers Trust did not file an estate tax return form 1041 for the 2003 year. At the beginning of 2004 Snickers Trust expects withholding and credits to be less than 90% of the tax reportable at year end. Snickers Trust must pay estimated income tax for 2004 if it expects to owe, after subtracting any withholding and credits, at least what amount?
A. $100
B. $600
C. $1,000
D. $2,500
The answer is (C).
Generally, an estate or trust must pay estimated income tax for 2012 if it expects to owe, after subtracting any withholding and credits, at least $1,000 in tax, and it expects the withholding and credits to be less than the smaller of:
· 1) 90% of the tax shown on the 2012 tax return, or 2) 100% of the tax shown on the 2011 tax return (110% of that amount if the estate's or trust's adjusted gross income on that return is more than $150,000 and less than 2/3 of gross income for 2011 or 2012 is from farming or fishing).
If an extension is not granted, when must Form 706 be filed to report estate and/or generation-skipping transfer tax.
A. By the 15th day of the fourth month following the date of death
B. Within 6 months after the date of death
C. Within 9 months after the date of death
D. Within 1 year of the date of death
The answer is (C).
For estate tax purposes, you may be required to file Form 706, United States Estate (and Generation-Skipping Transfer) Tax Return. Generally, if you must file Form 706, the return is due within 9 months after the date of the decedent’s death.
Which of the following statements concerning the deduction for estate taxes by individuals is true?
A. The deduction for estate tax can be claimed only for the same tax year in which the income in respect of a decedent must be included in the recipient’s income
B. Individuals may claim the deduction for estate tax whether or not they itemize deductions
C. The estate tax deduction is a miscellaneous itemized deduction subject to the 2% limitation
D. None of the above
The answer is (A).
Estate Tax Deduction-
Income a decedent had a right to receive is included in the decedent's gross estate and is subject to estate tax. This income in respect of a decedent is also taxed when received by the recipient (estate or beneficiary). However, an income tax deduction is allowed to the recipient for the estate tax paid on the income. The deduction for estate tax can only be claimed for the same tax year in which the income in respect of a decedent must be included in the recipient's income. (This also is true for income in respect of a prior decedent.) Individuals can claim this deduction ONLY as an itemized deduction on line 28 of Schedule A (Form 1040). This deduction is not subject to the 2% limit on miscellaneous itemized deductions. Estates can claim the deduction on the line provided for the deduction on Form 1041. For the alternative minimum tax computation, the deduction is not included as an itemized deduction that is an adjustment to taxable income.
Which of the following entities are required to file Form 709, United States Gift Tax Return?
A. An individual
B. An estate or trust
C. A corporation
D. All of the above
The answer is (A).
Only individuals are required to file gift tax returns. If a trust, estate, partnership, or corporation makes a gift, the individual beneficiaries, partners, or stockholders are considered donors and may be liable for the gift tax and GST taxes.
Which of the following statements regarding the annual exclusion for gift taxes is true?
A. The gift of a present interest to more than 1 donee as joint tenants qualifies for only 1 annual exclusion
B. A gift of a future interest cannot be excluded under the annual exclusion
C. The annual exclusion amount for 2004 is$12,000
D. None of the above
The answer is (B).
As for (A): a gift of a present interest to more than one donee as joint tenants is treated as a gift to each donee of his or her proportionate interest, so it can qualify for an annual exclusion for each donee, not just one.
As for (C): the annual exclusion for 2004 was $11,000, not $12,000 (it has since been raised; for 2011 it is $13,000).
Gifts of future interest cannot be excluded under the annual exclusion. A gift of a future interest is a gift that is limited so that its use, possession, or enjoyment will begin at some point in the future
As a general rule, a trust may qualify as a simple trust if:
A. The trust instrument requires that all income must be distributed currently
B. The trust does not distribute amounts allocated to the corpus of the trust
C. The trust has no provisions for charitable contributions
D. All of the above
The answer is (D).
· The trust instrument requires that all income must be distributed currently;
· The trust instrument does not provide that any amounts are to be paid, permanently set aside or used for charitable purposes; and
The trust does not distribute amounts allocated to the corpus of the trust
Amanda Jones and Calvin Johnson form Quail Corporationin 2004 by simultaneously making the following transfers.What is the amount of gain or loss to be reported on these transfers by Amanda and Calvin on their 2004 Federal income tax returns?
Amanda transfers property with an adjusted basis of $30,000 and a FMV of $60,000 and receives 50% of outstanding stock.
Calvin transfers property with an adjusted basis of $70,000 and a FMV of $60,000 and receives 50% of outstanding stocks.
A. Amanda reports a $30,000 gain and Calvin reports a $10,000 loss
B. Amanda reports a $0 gain and Calvin reports a $0 loss
C. Amanda reports a $30,000 gain and Calvin reports a $0 loss
D. Amanda reports a $0 gain and Calvin reports a $10,000 loss
The answer is (B).
After the transfers, Amanda and Calvin together own 100% of Quail Corporation, which satisfies the control requirement (at least 80%), and hence no gain or loss is recognized.
As a GROUP, Amanda and Calvin have control of the corporation, so Section 351 (deferral of gain or loss) applies.
Bob and John make the following transfers to Builders Corporation in return for 100% of the stock in the corporation.
Bob Transferred to Builders $100,000 to Builders. Builders transferred to Bob $10,000 land. And received 80% of Builder's stock.
John Transferred $30,000 property (basis of $10,000). Builders transferred to John $5,000 cash. John received 20% of Builder's stock.
What is the amount of gain Bob and John must recognize on the transfers?
A. Bob must recognize $10,000 gain and John must recognize $25,000 gain
B. Bob recognizes no gain and John recognizes $5,000 gain
C. Bob recognizes $10,000 gain and John recognizes $5,000 gain
D. Bob recognizes $10,000 gain and John recognizes $20,000 gain
The answer is (B).
Because the transferors as a group control the corporation (at least 80%), Section 351 applies. Bob contributed cash, so he has no realized gain to recognize even though he received $10,000 of land in addition to his stock. John recognizes a $5,000 gain on his transfer of property because, in addition to his 20% of the stock, he received $5,000 of cash (boot), and gain is recognized to the extent of the boot received (his realized gain of $20,000 exceeds the boot).
. Moreover John's basis in the stock is $10,000 because the adjusted basis carries over when an exchange between property and stock happens.
Warren purchased stock in 2002 for $10,00. In 2003 Warren sold this stock to his sister Gail for $8,000. In 2004 Gail sold this stock to an unrelated party for $11,000. How much gain must Gail recognize in 2004 on the sale of this stock?
A. $0
B. $1,000
C. $2,000
D. $3,000
The answer is (B).
When property is sold at a loss between related persons, the seller cannot deduct the loss.
The buyer's basis in the property is his or her cost (here, the $8,000 Gail paid). When the buyer later sells the property at a gain, the gain is recognized only to the extent it exceeds the previously disallowed loss: Gail's realized gain of $3,000 ($11,000 - $8,000) is reduced by Warren's $2,000 disallowed loss, so she recognizes a $1,000 gain.
Essex Corporation is a domestic corporation founded in 1998. Essex was originally authorized 100,000 shares with a per share value of $10. In 1998 Essex issued 50,000 shares and retained 50,000 shares. In 2004 the fair market value of an Essex share of stock equaled $100. During 2004 Essex hired a consulting firm to improve its data processing systems at a contracted cost of $20,000. The consulting work was completed in 2004 and the consulting firm agreed to accept 200 shares of Essex stock as payment of the contract. In 2004 Essex Corporation is required to report this transaction as:
A. $20,000 in ordinary other income
B. $2,000 in capital loss
C. $0 nontaxable exchange
D. $18,000 in capital gain
I would go with (C).
The reason is that the CONSULTING FIRM must report the 200 shares of Essex stock as ordinary income, as compensation for its SERVICES (200 shares @ $100 per share = $20,000).
But to ESSEX the transaction is an expense of the company that can be deducted or capitalized (in this case capitalized, because it adds to the value of the corporation); a corporation recognizes no gain or loss when it issues its own stock in exchange for services.
Pg. 2-5 in the book...Stock received for services is considered compensation for such services, and the shareholder must recognize ordinary income equal to the value of the stock received for the services rendered.
Brady Corporation of Cleveland, OH is a multi-national conglomerate. In 1986 Brady Corporation established and owned 100% of the stock of Toms, Inc. of Dayton, OH. Toms, Inc. was established for the purpose of manufacturing rubber gaskets, which Brady Corporation uses in many of its international operations. By the beginning of 2004, Brady Corporation had sold 30% of the outstanding Toms, Inc. stock. In July of 2004 Toms, Inc. declares a dividend and pays $100,000 to Brady Corporation. In 2004 Brady Corporation, subject to certain limits, takes what amount as a dividends received deduction?
A. $0
B. $70,000
C. $80,000
D. $100,000
The answer is (C).
REMEMBER THIS ABOUT DIVIDENDS BETWEEN CORPORATIONS!!!:
A corporation can deduct, with certain limits 70% of the dividends received if the corporation receiving the dividend owns less than 20% of the corporation distributing the dividend. If the corporation owns 20% or more of the distributing corporation’s stock, it can, subject to certain limits, deduct 80% of the dividends received.
In tax year 2004, Roberts Corporation made a charitable contribution to a qualified organization of $40,000 in cash plus a vehicle with a fair market value of $15,000. For tax year 2004 Roberts Corporation had $400,000 in total income, $100,000 in total expenses not including the above charitable contributions, and would have a reportable dividend received deduction of $50,000. How much of the charitable contribution can Roberts Corporation deduct for the 2004 tax year?
A. $15,000
B. $25,000
C. $40,000
D. $55,000
The answer is (A).
A corporation cannot deduct charitable contributions that exceed 10% of its taxable income for the tax year. Figure the taxable income for this purpose WITHOUT the following:
· The deduction for charitable contributions
· The dividends-received deduction
· The deduction allowed under section 249 of the Internal Revenue Code
· The domestic production activities deduction
· Any net operating loss carry back to the tax year
· Any capital loss carry back to the tax year.And so because the taxable income is (400,000 – 100,000 = 300,000) the limit on the deduction is 30,000 which means that Roberts Corporation cannot deduct the $40,000 but can deduct the $15,000 of the FMV of the vehicle it donated.
In tax year 2004, Sun Corporation had a $10,000 long term capital loss and a $5,000 short-term capital gain. In tax year 2000, Sun Corporation reported $1,000 in long-term capital gains and $4,000 in short-term capital gains. Sun Corporation reported no other capital gains or losses in any other tax year. How much net capital loss will be available for Sun Corporation to carry into tax year 2005?
A. $0
B. $1,000
C. $4,000
D. $5,000
The answer is (D).
A capital loss is carried to other years in the following order:
· 3 years prior to the loss year (for 2004) it was 2001 and no capital losses or gains
· 2 years prior to the loss year (2002) no capital losses or gains
· 1 year prior to the loss year (2003) no capital losses or gains
· Any loss remaining is carried forward for 5 years. The 2004 net capital loss is $10,000 - $5,000 = $5,000, and since there are no capital gains in 2001-2003 to absorb it, the entire $5,000 is carried forward to 2005.
As of December 31, 2003, Doyle, Inc. had incurred $6,000 in potential market feasibility costs, $3,600 in legal fees for setting up the corporation, $2,400 in advertising costs for the opening of the business, and $18,000 for the purchase of equipment. Doyle, Inc. began business operations on January 1, 2004. If Doyle, Inc. chooses to amortize its organizational and start-up expenses over the minimum 60-month period, how much can Doyle, Inc. deduct as an amortization expense in 2004?
A. $1,680
B. $1,920
C. $2,400
D. $6,000
The answer is (C). pg. 26 Publication 535 Business Expenses.
Start-up costs include amounts paid for the following:
An analysis or survey or potential markets, products, labor supply, transporation facilities, etc.
Advertisements for the opening of the business
Salaries and wages for employees who are being trained and their instructors
Travel and other necessary costs for securing prospective distributors, suppliers, or customers.
Salaries and fees for executives and consultants or for similar professional services.
Examples of organizational costs include:
The cost of temporary directors
The cost of organizational meetings
State incorporation fees
The cost of legal services.
That means the only expenses that can be amortized as start-up and organizational costs are the feasibility costs, legal fees and advertisement cost.
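A quick check of the arithmetic as a small Python sketch (figures taken from the question; illustrative only):

    startup_and_org = 6000 + 3600 + 2400   # feasibility study, legal fees, opening ads
    monthly = startup_and_org / 60         # elected minimum 60-month amortization period
    print(monthly * 12)                    # 12 months in 2004 -> 2400.0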
In 2004 Green, Inc. had gross receipts from sales of $500,000, dividends of $100,000 from a domestic corporation in which Green, Inc. owned 50% of the stock, and operating expenses of $800,000. What is the 2004 net operating loss for Green, Inc.?
A. $200,000
B. $280,000
C. $300,000
D. $330,000
The answer is (B).
Because Green, Inc. owns at least 20% of the distributing corporation's stock, it can deduct 80% of the dividends received ($80,000), and in computing a net operating loss the dividends-received deduction is allowed in full. Income of $600,000 ($500,000 sales + $100,000 dividends) less the $80,000 deduction leaves $520,000, and $520,000 - $800,000 of operating expenses = ($280,000) net operating loss.
Richard Crepe, M.D. owns 100% of the outstanding stock of Crepe Corporation. All of Crepe Corporation’s income and expenditures are derived from the medical services provided by Dr. Crepe. At the end of 2004 Crepe Corporation had $10,000 in reportable taxable income. How much federal income tax was Crepe Corporation required to pay for the 2004 year?
A. $1,500
B. $2,500
C. $3,400
D. $3,500
The answer is (D).
Because Crepe Corporation is a personal service corporation (substantially all of its activities are services in the field of health performed by its owner-employee), its taxable income is taxed at a flat 35% rate, so the corporation owes $3,500 on $10,000 of taxable income.
Maple Corporation had a net loss per its books for 2004 as follows:
Gross Sales........................$340,000
COGS................$150,000
Depreciation.......$60,000
Charitable Contr..$10,000
Salaries..............$130,000
Meals and Entertainment.....$20,000
Net Income (Loss) per books..($30,000)
Total per books....$340,000 $340,000
Maple Corporation uses an accelerated method of depreciation for tax purposes, but not for book purposes. Maple Corporation’s tax depreciation for 2004 will be $75,000. What is the taxable income for federal income tax purposes in 2004 for Maple Corporation?
A. $(5,000)
B. $(35,000)
C. $(25,000)
D. $(20,000)
The answer is (C).
Because Maple Corporation has a taxable loss, it CANNOT deduct any charitable contributions, since the charitable deduction is limited to 10% of taxable INCOME (and there is no taxable income here). In addition, only 50% of the $20,000 of meals and entertainment ($10,000) is deductible, and tax depreciation of $75,000 replaces the $60,000 of book depreciation.
And so the deductions add up to: 150,000 + 75,000 + 130,000 + 10,000 = $365,000
Net loss: 340,000 - 365,000 = ($25,000)
Rose Corporation is a calendar-year filing corporation that had accumulated earnings and profits at the end of 2003 of $5,000. At the end of 2004 Rose Corporation had current-year earnings and profits of $1,000. On December 31, 2004 Rose Corporation distributed to sole shareholder Paul Rose an automobile purchased for $10,000 with a fair market value of $8,000. Paul Rose assumed a liability on the automobile of $1,000. What amount of dividend paid to Paul Rose must Rose Corporation report as an ordinary dividend in Box 1a of Form 1099-DIV?
A. $6,000
B. $7,000
C. $8,000
D. $10,000
The answer is (A). pg. 3-16 in the book.
The amount of the distribution that the shareholder includes in his or her income, and that the corporation reports on Form 1099-DIV, is the value of any property received (FMV less liabilities assumed), TO THE EXTENT THE DISTRIBUTION IS OUT OF E&P.
And so, because Rose Corporation had total E&P of only $6,000, it can report only that much as an ordinary dividend paid to Paul Rose.
Charles Watson owns 100% of the outstanding shares of Watson Corporation. Charles Watson acquired these shares in 1998 for $5,000. Watson Corporation had total earnings and profits at the end of 2004 of $10,000. On December 31, 2004, Watson Corporation distributed $8,000 in cash and property with a fair market value of $7,000 to Charles Watson. In 2004 how much in capital gain must Charles Watson report from this distribution?
A. $0
B. $5,000
C. $10,000
D. $15,000
The answer is (A). pg. 3-3 of the book
As long as the corporation has sufficient earnings and profits, distributions are treated as taxable dividends. Amounts that are not considered dividends because of inadequate earnings and profits are treated as nontaxable returns of capital to the extent of the shareholder's basis in the stock; only the excess over basis is gain from the sale of the stock.
And so because Charles Watson got a total distribution of $15,000. $10,000 of that amount is treated as a taxable dividend and since he had a basis of $5,000, the remaining 5,000 is seen as a nontaxable return of capital and now Charles has an adjusted basis of $0 in Watson Corp. and no capital gain.
Hampshire, Inc., a calendar year taxpayer, had an accumulated earnings and profits balance at the beginning of 2004 of $20,000. During the 2004 year, Hampshire, Inc. distributed $30,000 to its sole individual shareholder. On December 31, 2004 Hampshire, Inc. reported taxable income of $50,000, federal income taxes of $7,500, and had tax exempt interest on municipal bonds of $2,500. What is Hampshire, Inc.’s accumulated earnings and profits balance at the beginning of 2005?
A. $15,000
B. $25,000
C. $30,000
D. $35,000
The answer is (D). pg. 3-5 in the book
The calculation of current and earnings and profits is as follows:
Current taxable income (or net operating loss)
+ Exempt and nondeferrable income
-Items not deductible in computing taxable income*
+Deductions not permitted in computing E&P
=Current earnings and profits (or deficit)
*Federal Income Taxes
Charitable Contributions
Expenses related to tax-exempt income
Premiums paid on key-person life insurance policies
Excess of capitall losses over capital gains
Related party losses and expenses
And so Current E&P = 50,000+2,500-7500-30,000 = 15,000
In order to find out ACCUMULATED E&P simply add 2004's current E&P to the accumulated E&P of 2003 and it is $35,000
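The same computation as a small Python sketch (figures from the question; illustrative only):

    accumulated_ep_start_2004 = 20000
    taxable_income = 50000
    tax_exempt_interest = 2500   # added back: exempt income increases E&P
    federal_income_tax = 7500    # reduces E&P even though not deductible for taxable income
    distributions = 30000        # distributions reduce E&P
    current_ep = taxable_income + tax_exempt_interest - federal_income_tax - distributions
    print(current_ep)                               # 15000
    print(accumulated_ep_start_2004 + current_ep)   # 35000 -> answer (D)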
Healey, Inc. owned a parcel of undeveloped land with an adjusted basis of $10,000, an attached liability of $4,000, and a fair market value of $15,000. In 2004 this land was distributed by Healey, Inc. to its sole shareholder who also assumed the liability. Healey, Inc. will recognize how much of a gain on this distribution?
A. $0
B. $1,000
C. $5,000
D. $10,000
The answer is (C). pg. 3-18
When property subjected to a liability is distributed, the corporation is relived of any obligation. In such case, the effect to the corporation is the same as if it had sold the property for cash equal to the liability and paid off the debt...If the liability does not exceed the property's fair market value, it is ignored for gain recognition purposes and the fair market value is used.
However, when the liability exceeds both the FMV and adjusted basis of the property, the corporation must recognize the gain equal to the excess of the liability over the adjusted basis.
Arnold acquired 10 shares of Klesco, Inc. stock in 2000 for $50 per share. Klesco, Inc. decided in 2004 to reacquire all of its outstanding stock, which it did for $200 per share. What amount of capital gain in 2004 must Arnold report on the redemption of his Klesco, Inc. stock?
A. $0
B. $500
C. $1,500
D. $2,000
The answer is (C).
This one is pretty simple:
Arnold's basis in the stock is $500 (10 shares @ $50 per share).
Klesco buys back its outstanding stock for $200 per share and buys back all 10 of Arnold's shares 200*10 = $2,000 (amount recongized by Arnold) - $500 (adjusted basis) = $1,500
Sarah contracted with Downing Corporation to perform engineering services in 2004. Her contract specified she would receive $100,000 for the services rendered. Upon completion of her contract, Sarah decided to accept a payment offer from Downing Corporation of $60,000 in cash and 1,000 shares of their stock. At the time she was paid, Downing Corporation stock was trading for $45 per share. If Sarah reported on her 2004 individual return the appropriate amount for her services, what would be her basis in her 1,000 shares of Downing Corporation stock?
A. $0
B. $40,000
C. $45,000
D. None of the above
The answer is (C). pg. 2-7 in the book
Because Sarah's only contribution to Downing Corporation was services, Sara is treated as simply receiving compensation in the form of property, and must report income equal to the value of the stock. In addition, Sarah's basis in the stock will be equal to the value reported as income.
Kevin, the 100% owner of an S corporation has an adjusted basis in stock before losses and deductions at the end of 2004 in the amount of $12,000. The 2004 corporate return shows a $20,000 ordinary loss and a $5,000 charitable contribution expense. What are the allowable losses and deductions Kevin may claim on his 2004 tax return?
A. $12,000 ordinary loss and $0 contribution expense
B. $7,000 ordinary loss and $5,000 contribution expense
C. $9,600 ordinary loss and $2,400 contribution expense
D. $12,000 ordinary loss and $5,000 contribution expense
The answer should be (C). pg. 12-24
Each shareholder's distributive share of losses and deductions may not exceed that shareholder's basis in the corporation, and when the combined losses and deductions ($20,000 + $5,000 = $25,000) exceed basis ($12,000), the allowed amount is allocated pro rata among the items: ordinary loss $20,000/$25,000 × $12,000 = $9,600, and charitable contribution $5,000/$25,000 × $12,000 = $2,400. The disallowed amounts may be carried forward indefinitely to be used when the shareholder's basis is increased.
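The pro-rata allocation as a small Python sketch (figures from the question; illustrative only):

    basis = 12000
    ordinary_loss = 20000
    charitable = 5000
    total = ordinary_loss + charitable
    allowed_ordinary = basis * ordinary_loss / total    # 9600.0
    allowed_charitable = basis * charitable / total     # 2400.0
    carryforward = total - basis                        # 13000, carried forward indefinitely
    print(allowed_ordinary, allowed_charitable, carryforward)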
John Smith died on March 30, 2004. From January 1, 2004 to March 30, 2004, $2,000 in medical bills had been paid by John. The following additional medical bills were incurred and paid by the executor out of John’s estate:
1) From March 31, 2004, to December 31, 2004, in the amount of $5,000.
2) From January 1, 2005, to March 30, 2005, in the amount of $5,000.
3) From March 31, 2005, to April 6, 2005, in the amount of $3,000. The executor of John’s estate may elect to deduct what amount of the medical expenses (subject to percentage limitations) on John’s final income tax return, Form 1040, if deductions are itemized.
A. $2,000
B. $7,000
C. $12,000
D. $15,000
The answer is (C). Medical expenses of the decedent that are paid out of the estate within one year after the date of death can, by election, be treated as paid by the decedent when incurred and deducted on the decedent's final Form 1040. So the expenses the executor can include on John Smith's final return are $2,000 + $5,000 + $5,000 = $12,000. The medical expenses paid from March 31, 2005 to April 6, 2005 fall outside the 1-year window and are not included on John's return but rather on the estate's.
An estate has distributable net income of $12,000 consisting of $6,000 in rents, $4,000 in dividends, and $2,000 in taxable interest. Rob and his three sisters are equal beneficiaries of this, their father’s estate. A stipulation allocates dividends first to Rob. The personal representative distributed the income under the provisions of the will. In what amount and what character is the distribution to Rob?
A. $0 rents, $4,000 dividend, and $0 taxable interest
B. $0 rents, $3,000 dividend, and $0 taxable interest
C. $1,500 rents, $1,000 dividend, and $500 taxable interest
D. $1,000 rents, $1,000 dividend, and $1,000 taxable interest
I would have gone with (C).
Because the will makes Rob and his three sisters equal beneficiaries, each is treated as receiving one-fourth of the distributable net income, and each share keeps the same proportionate character as the DNI (rents $1,500, dividends $1,000, taxable interest $500); an allocation of a particular class of income to one beneficiary is generally disregarded unless it has an economic effect independent of the tax consequences.
Harry, a single person, died in 2004. The executor does not elect the alternate valuation date. Given the following information, determine the value of Harry’s gross estate.
FMV at date of death
Certificate of
Deposit.......................$100,000
Mortgage Receivable
on sale or property.......$2,000,000
Paintings and
Collectibles..................$30,000
Household goods
and personal effects......$20,000
A. $2,600,000
B. $2,650,000
C. $2,620,000
D. $2,120,000
The answer is (B).
Sections 2033 through 2044 identify the various types or property that are includible in a decedent's gross estate. It requires the inclusion of any property interest owned by the decedent at date fo death.
That means all of Harry's property is included in the his gross estate.
Jack, a single individual, made the following gifts in 2004.
Payments directly to sister's qualifying college for tuition...........$15,000
Payment directly to sister's qualifying college for room and board.$25,000
Cash to nephew................$10,000
Cash to brother.................$30,000
What is the gross amount of gifts that Jack must include on his 2004 Form 709, United States Gift Tax Return?
A. $80,000
B. $40,000
C. $65,000
D. $55,000
The answer is (D). Publication 950
Read the question closely: it is asking for the GROSS AMOUNT of gifts, before the annual exclusion is applied to arrive at taxable gifts. The payment made directly to the qualifying college for tuition is excludable under the educational exclusion, so it is not a transfer subject to the gift tax, and the $10,000 cash gift to the nephew does not exceed the 2004 annual exclusion of $11,000, so it does not have to be reported on Form 709 at all. That leaves $25,000 + $30,000 = $55,000 of gifts to report.
George and Helen are husband and wfie. During 2004, George gave $30,000 to his brother and Helen gave $22,000 to her niece. George and Helen both agree to split the gifts they made during the year. What is the taxable amount of gifts, after the annual exclusion, each must report on Form 709?
A. George and Helen each have taxable gifts of $15,000
B. George has a taxable gift of $19,000 and Helen has a taxable gift of $11,000
C. George and Helen each have taxable gifts of $4,000
D. George has a taxable gift of $8,000 and Helen has a taxable gift of zero
The answer is (C).
Now remember that when a couple decides to gift split each gift that they give will be divided among both of them and liability is split as well after any amount over the annual exclusion. And remember that the annual exclusion is applied to EACH gift.
So the $22,000 gift is split amount George and Helen and each have a gift of $11,000 that they made and since it's under the annual exclusion the entire gift is nontaxable.
The $30,000 is split among George and Helen and each have $15,000 and because the annual exclusion in 2004 was $11,000 (right now it's $13,000) both George and Helen have a taxable gift of $4,000 ($15,000-$11,000)
The trust instrument for RJC Trust is silent as to the allocation of capital gains. In 2004 RJC Trust, a simple trust had taxable interest income of $4,000, capital gains of $3,000, paid a fiduciary fee of $625, and had tax exempt interest of $1,000. If the general rule to determine the allocation of the capital transaction is applied, what amount of taxable income is distributed to the beneficiaries in 2004?
A. $6,500
B. $6,375
C. $3,500
D. $3,375
The answer is (C). pg. 14-3 in the book.
One common difference between fiduciary accounting income and taxable income is the classification of fiduciary capital gains. Typically, capital gains represent an increase in the value of the principal of the fiduciary and are not available for distribution to income beneficiaries.
This means that the TOTAL TRUST INCOME does not include the capital gains of $3,000
Moreover, you need to figure out the the fiduciary fee that is taxable.
What you do is find the TOTAL TRUST INCOME ($5,000) and the TAXABLE TRUST INCOME ($4,000) and divide the taxable trust income by the total trust income to derive the percentage of the fiduciary fee that is deductible.
Hence $4,000 / $5,000 = 0.8, and 0.8 × $625 = $500 of the fiduciary fee is allocated to taxable income.
And so the amount of taxable income distributed to the beneficiaries is $4,000 - $500 = $3,500.
In 2004, Exeter Trust had taxable interest of $2,000, capital gains of $6,000, and a fiduciary fee of $1,000. The trust instrument allocates capital gains to income. At the end of 2004, the fiduciary retains $3,000 and distributes $4,000. What is the distributable net income (DNI) of Exeter Trust for 2004?
A. $4,000
B. $4,375
C. $7,000
D. $7,375
The answer is (C). Chapter 14
This particular trust instrument allocates capital gains to income and NOT to the principal or corpus of the trust. So that means it's included in the DNI
Remember that the DNI is separate from how much the trustee may actually distribute. Regardless of how much the fiduciary retains the DNI still remains $8,000($6,000+$2,000) - $1,000 = $7,000.
To go further because the trust has a DNI of $7,000, it can only claim a $4,000 distribution deduction and the retained $3,000 will be reported by and taxed to the Trust.
The Wilder Trust is a complex trust with a controlling instrument that specifically allocates capital transactions to the corpus of the trust. The instrument goes on to state that $2,000 will be set aside out of gross income for charitable purposes and that $10,000 in income is required to be distributed each year. At the end of 2004 the Wilder Trust had $20,000 in gross income, which included $5,000 in capital gains. If there was no other information to consider, what would the Wilder Trust’s income distribution deduction be for 2004?
A. $18,000
B. $13,000
C. $10,000
D. $5,000
The answer is (C). Chapter 14
This one is simple. Wilder Trust is a complex and it specifically states that $10,000 in income is distributed. The $10,000 is the distribution deduction.
In 2002 Thomas Hatch established the TWH Trust. TWH is a revocable trust. Thomas contributed cash, a significantstock portfolio and tax exempt bonds to this trustwhen he established it. In 2004 the TWH Trust had income consisting of $5,000 in taxable interest, $3,000 in ordinary dividends, and $2,000 in tax exempt interest. Thomas has never relinquished dominion and control of the TWH Trust. What amount of TWH Trust’s income is taxable to Thomas Hatch in 2004?
A. $10,000
B. $8,000
C. $5,000
D. $0
The answer is (B). Chapter 14
This one seems pretty self-explanatory. You are looking for the TAXABLE income for Thomas' trust. Simply add the taxable interest and ordinary dividends to come to $8,000. Do not include the tax-exempt interest.
John is the sole shareholder of Maple Corporation, aqualified S corporation. At January 1, 2004, John has a basis in Maple Corporation of $2,000. The corporation’s 2004 tax return shows the following:
Ordinary Income................$10,000
Interest Income.................$1,000
Nondeductible Expenses......$2,000
Real Estate rental
losses...............................$5,000
Section 179 deduction.........$1,500
Distributions to
Mr. Maple...........................$3,000
What is Jonn's basis in Maple Corporation at the end of 2004?
A. $0
B. $3,500
C. $4,500
D. $1,500
The answer is (D). pg. 12-26
Shareholder's basis in the S Corporation Stock includes initial basis or investment ($2,000)
PLUS:
1.basis of additional capital contributions
2.share of taxable income ($10,000+$1,000)
3.Share of nontaxable income and gains
4.gain recognized by the partner (when cash plus market value of noncash assets received exceed basis)
LESS:
1.Cash distributions received ($3,000)
2.market value of noncash distributions received
3.Share of net loss (real estate rental loss of $5,000)
4.Share of separately stated expenses, but not to exceed basis first in stock and second in debt due from the S corporation
5.Share of nondeductible expenses and losses, but not to exceed basis " " "
6.Dispositions of ownership interest.
$2,000 + 10,000 + 1,000 = 13,000
13,000-2,000 = 11,000
11,000-5,000 = 6,000
6,000-1500 = 4500
4500-3000 =
1500
XYZ Corporation is a qualified S corporation. In 2004, its books and records reflected the following transactions:
Business Income....................$500,000
Real Estate rental loss............$($20,000)
Interest income.....................$5,000
Salaries and wages.................$(50,000)
Depreciation
(without Section 179 expense)..$(40,000)
Section 179 expense................$(10,000)
Other business deductions.........$(300,000)
What is XYZ’s ordinary income (loss) to be reported on its 2004 Form 1120S?
A. $85,000
B. $110,000
C. $115,000
D. $105,000
The answer is (B).
Instructions to Form 1120S pg. 16(Section 179 deduction)
Do not include any Section 179 expense deduction on Form 1065 Line 14. This amount is not deducted by the corporation. Instead, it is passed through to the shareholders in box 11 of Schedule K-1
Also the corporation does not deduct real estate losses, it too is also separately stated on the Schedule K-1.
Moreover on page 12-19 in the book
Interest income is also a separately stated income because it is considered investment (portfolio) income and is not added to Combined Ordinary Income REMEMBER THIS!
So here's how it goes
:
500,000-50,000-40,000-300,000 = 110,000
Robert owns 100 shares of Oswald, Inc. stock he purchased in 1998 for $10 per share. The 100 shares that Robert owns represent all of the outstanding Oswald, Inc. stock. In 2004, Oswald, Inc. redeems 25 of Robert’s shares for $50 per share. Oswald, Inc. had earnings and profits in 2004 of $100,000. Robert must report what amount of capital gain from this 2004 redemption of his Oswald, Inc. stock?
A. $0
B. $1,000
C. $4,000
D. $5,000
The answer is (A). pg. 4-3 in the BOOK
A quick look at a redemption reveals that it has the same characteristics as an ordinary sale: the stock of the shareholder is exchanged for property of the corporation. In such case, the transaction normally would receive capital gain treatment. Upon closer scrutiny, however, the transaction may have an effect that more closely resembles a dividen than a sale. That this may be true is easily seen in the classic example in which a corporation redeems a portion of its SOLE shareholder's stock. Although the shareholder surrenders stock as part of the exchange, like a dividend distribution, the interest of the shareholder in corporate assets as well as the shareholder's control over corporate affairs is completely affected.
And in this case Robert still owns 100% of Oswald, Inc. outsanding stock and interest and the transaction is merely a dividend distribution and NOT a capital gain
In 1998 Adam purchased 100 shares of Call Corporation stock for $50 per share. During 2004 Call Corporation completely liquidated. After paying its liabilities, Call Corporation distributed to its shareholders $10,000 in cash and appreciated property sold for $90,000. Adam’s portion received a liquidating distribution from Call Corporation of $10,000. Adam must report what amount of capital gains income from this distribution?
A. $4,500
B. $5,000
C. $22,500
D. $25,000
The answer is (B). pg. 5-3 in the BOOK
Under the general liquidation rules prescribed by Section 331, amounts received by shareholders in complete liquidation of a corporation are considered as payment in full for their stock. Each shareholder recongizes gain or loss equal to the difference between the NET FMV of the property received and the basis of the stock surrendered.
In 2004 Omega, Inc. partially compensates employee Tom Jones with 100 shares of stock. Omega, Inc. stock is selling for $200 per share at the time Tom receives his shares. On December 31, 2004 Tom sells his 100 shares of Omega, Inc. stock for $300 each. How much of an employee compensation expense can Omega, Inc. deduct in 2004 for Tom’s 100 shares?
A. $0
B. $10,000
C. $20,000
D. $30,000
The answer is (C).
Omega, Inc. can only deduct $20,000 as an employee expense to Tom Jones because that was the FMV of the shares at the time Tom was compensated.
Gold Corporation distributes land with a fair market value of $25,000 to its sole shareholder Donna Gold, who assumes the mortgage on the land of $35,000. This land had an adjusted basis to Gold Corporation of $20,000. Gold Corporation must recognize how much of a gain on this distribution?
A. $5,000
B. $10,000
C. $15,000
D. $25,000
The answer is (C). pg. 3-18
Thus, when the liability exceeds both the fair market value and the basis of the property, the corporation must recognize the gain equal to the excess of the liability over the basis.
$35,000-$20,000 = $15,000
During the 2004 initial year of operations, Robert wholly owned a limited liability company (LLC) that manufactured air compressors that were sold to retail outlets within the United States. The LLC also owned an airplane that was leased to corporate clients. At the end of 2004, the LLC had net income from the manufacturing activity of $100,000, interest income of $5,000, dividend income of $10,000, and a net loss from the airplane leasing activity of $25,000. If Robert had no other items of income or loss in 2004, he should compute his tax liability on which amount?
A. $75,000
B. $85,000
C. $90,000
D. $115,000
The answer is (C).
Because you are computing Robert's tax liability and not the partnership's, include all items.
Waco, Inc. reported net capital gains as
follows:
Tax year 2000 at $6,000
Tax year 2002 at $8,000
Tax year 2003 at $1,000
In tax year 2004, Waco, Inc. had $40,000 in long-term capital losses and $25,000 in short-term capital gains. How much net capital loss will be available for Waco, Inc. to carry into tax year 2005?
A. $0
B. $6,000
C. $14,000
D. $15,000
The answer is (B).
Remember that capital losses can be offset by current-year capital gains and capital gains from the last 3 calendar years.
Author
raja_rabbit
ID
145791
Card Set
SEE Part 3, Corporations, S Corporations, Fiduciaries, Estates, and Trusts
Description
Q & A over SEE 2005 test
Updated
2012-04-10T00:03:40Z
Show Answers
Flashcards
Preview | https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=145791 | CC-MAIN-2020-50 | refinedweb | 10,281 | 63.09 |
By Mark Schmidt on February 13, 2016
Kentico’s Modules are very powerful. They are very extendable, and they simply make customizing Kentico a joy. Kentico also provides great documentation on how to extend the main element of a module, the UniGrid. This means you can add your own buttons, your own actions and more to the grid component. But there are not a lot of good examples on how to do it for the UniForm. That's where Kentico Extenders come to the rescue.
If you are not sure what I am talking about, check these links out:
The links above do a great job on showing how to extend the UniForm. Below, I will show you how to extend your Form. In this example, I am simply adding an additional button Run Compare to the Form.
Go through the normal steps on creating a new extender.
The screenshot below shows how we have this example setup.
using CMS.ExtendedControls;
using CMS.ExtendedControls.ActionsConfig;
using CMS.FormControls;
using CMS.Helpers;
using CMS.UIControls;
namespace BizStreamToolkit.Tools
{
public class CompareProjectNewEditExtender : ControlExtender<UIForm>
{
// This gives me access to the Object I am editing. Cast it to whatever ObjectType you are workig with.
public CompareProjectInfo Project
{
get
{
return Control.UIContext.EditedObject as CompareProjectInfo;
}
}
public override void OnInit()
{
InitHeaderActions();
}
private void InitHeaderActions()
{
// Only show the run compare button on the "EDIT" action (the project already exsits)
if (QueryHelper.GetString("action", "") == "edit")
{
string url = CompareHelper.GetUrlForRunCompare(Project.CompareProjectGUID);
AddHeaderButton(
Text: "Run Compare", // "BizStreamToolkit.Button.RunCompare",
Url: url);
}
}
private void AddHeaderButton(string Text, string Url, ButtonStyle ButtonStyle = ButtonStyle.Default)
{
var page = (CMSUIPage)Control.Page;
var buttonHeaderAction = new HeaderAction()
{
Text = ResHelper.GetString(Text),
RedirectUrl = Url,
ButtonStyle = ButtonStyle,
};
page.AddHeaderAction(buttonHeaderAction);
}
}
}
As you can see in the code above, using theCMSUIPage.AddHeaderAction method is the real key. This is the step that really allows you to inject your button and its functionality into the standard admin interface. It's not really that hard at all.
For those of you who are paying special attention to the screen shots and object names like CompareProjectInfo, you might have already guessed it, but this example is actually from our new product that we are working on as part of the BizStream Toolkit. If you are interested in making deployments easier for your Kentico projects please checko out, Compare for Kentico. The tool is in open beta right now and free for anyone to use. We'd love to know your feedback on it..
Enter your email address to subscribe to the BizStream Newsletter and receive updates by email. | https://www.bizstream.com/blog/february-2016/adding-a-custom-button-to-a-kentico-form-with-unif | CC-MAIN-2018-13 | refinedweb | 428 | 51.44 |
Section (7) pid_namespaces
Name
pid_namespaces — overview of Linux PID namespaces
DESCRIPTION
For an overview of namespaces, see namespaces(7)..
PIDs in a new PID namespace start at 1, somewhat like a standalone system, and calls to fork(2), vfork(2), or clone(2) will produce processes with PIDs that are unique within the namespace.
Use of PID namespaces requires a kernel that is configured
with the
CONFIG_PID_NS
option.).
If the init process of a PID namespace terminates, the
kernel terminates all of the processes in the namespace via
a
SIGKILL signal. This
behavior reflects the fact that the init process is
essential for the correct operation of a PID namespace. In
this case, a subsequent fork(2) into this PID
namespace fail with the error ENOMEM; it is not possible to create a
new process in a PID namespace whose init process has
terminated. Such scenarios can occur when, for example, a
process uses an open file descriptor for a
/proc/[pid]/ns/pid file corresponding to
a process that was in a namespace to setns(2) into that
namespace after the init process has terminated. Another
possible scenario can occur after a call to unshare(2): if the first
child subsequently created by a fork(2) terminates, then
subsequent calls to fork(2) fail with
ENOMEM.
Only signals for which the init process has established a signal handler can be sent to the init process by other members of the PID namespace. This restriction applies even to privileged processes, and prevents other members of the PID namespace from accidentally killing the init process.
Likewise, a process in an ancestor namespace
can—subject to the usual permission checks described
in kill(2)(emsend signals
to the init process of a child PID namespace only if the
init process has established a handler for that signal.
(Within the handler, the
siginfo_t
si_pid field described in
sigaction(2) will be
zero.)
SIGKILL or
SIGSTOP are treated
exceptionally: these signals are forcibly delivered when
sent from an ancestor PID namespace. Neither of these
signals can be caught by the init process, and so will
result in the usual actions associated with those signals
(respectively, terminating and stopping the process).
Starting with Linux 3.4, the reboot(2) system call causes a signal to be sent to the namespace init process. See reboot(2) for more details..
A process is visible to other processes in its PID namespace, and to the processes in each direct ancestor PID namespace going back to the root PID namespace. In this context, visible means that one process can be the target of operations by another process using system calls that specify a process ID. Conversely, the processes in a child PID namespace can_zsingle_quotesz_t see processes in the parent and further removed ancestor namespaces. More succinctly: a process can see (e.g., send signals with kill(2), set nice values with setpriority(2), etc.) only processes contained in its own PID namespace and in descendants of that namespace.
A process has one process ID in each of the layers of the PID namespace hierarchy in which is visible, and walking back though each direct ancestor namespace through to the root PID namespace. System calls that operate on process IDs always operate using the process ID that is visible in the PID namespace of the caller. A call to getpid(2) always returns the PID associated with the namespace in which the process was created.
Some processes in a PID namespace may have parents that are outside of the namespace. For example, the parent of the initial process in the namespace (i.e., the init(1) process with PID 1) is necessarily in another namespace. Likewise, the direct children of a process that uses setns(2) to cause its children to join a PID namespace are in a different PID namespace from the caller of setns(2). Calls to getppid(2) for such processes return 0._zsingle_quotesz_s
idea of its own PID (as reported by
getpid()), which would break many
applications and libraries.
To put things another way: a process_zsingle_quotesz_s PID namespace membership is determined when the process is created and cannot be changed thereafter. Among other things, this means that the parental relationship between processes mirrors the parental relationship between PID namespaces: the parent of a process is either in the same namespace or resides in the immediate parent PID namespace._zsingle_quotesz_s PID
namespace, rather than the init process in the child_zsingle_quotesz_s
own PID namespace.
Compatibility of CLONE_NEWPID with other CLONE_* flags
In current versions of Linux,
CLONE_NEWPID can_zsingle_quotesz.
After creating a new PID namespace, it is useful for the
child to change its root directory and mount a new procfs
instance at
/proc so that
tools such as ps(1) work correctly. If a
new mount namespace is simultaneously created by including
CLONE_NEWNS in the
flags argument of clone(2) or unshare(2), then it isn_zsingle_quotesz_t
necessary to change the root directory: a new procfs
instance can be mounted directly over
/proc.
From a shell, the command to mount
/proc is:
$ mount -t proc proc /proc
Calling readlink(2) on the path
/proc/self yields the process
ID of the caller in the PID namespace of the procfs mount
(i.e., the PID namespace of the process that mounted the
procfs). This can be useful for introspection purposes,
when a process wants to discover its PID in other
namespaces.
/proc files
/proc/sys/kernel/ns_last_pid(since Linux 3.3)
This file displays the last PID that was allocated in this PID namespace. When the next PID is allocated, the kernel will search for the lowest unallocated PID that is greater than this value, and when this file is subsequently read it will show that PID.
This file is writable by a process that has the
CAP_SYS_ADMINcapability inside its user namespace. This makes it possible to determine the PID that is allocated to the next process that is created inside this PID namespace.
EXAMPLE
See user_namespaces(7).
SEE ALSO
clone(2), reboot(2), setns(2), unshare(2), proc(5), capabilities(7), credentials(7), mount_namespaces(7), namespaces(7), user_namespaces(7), switch_root(8) | https://manpages.net/detail.php?name=pid_namespaces | CC-MAIN-2022-21 | refinedweb | 1,022 | 59.43 |
An association list is a list of tuples of keys to values. For example:
alist :: [(String,Double)]
alist = [("pi", 3.14159265), ("e", 2.71828183), ("phi", 1.61803398874)]
getConstant :: String -> Maybe Double
getConstant name = lookup name alist
lookupis a prelude function that returns the value (if existing) for the supplied key. Association lists are useful for small lists, but the lookup time is O(N) in the size of elements. Enter Map which provides the same key/value abstraction but with efficient lookup.
Data.Mapprovides operations for insertion, deletion and inspection of keys. For example:
-- Given an association list, make a map
Map.fromList aList
-- Insert a new key/value into the empty map
Map.insert "pi" 3.14159265 $ Map.empty
-- Is a key in a map?
Map.member key map
-- lookup and findWithDefault allow you to find values.
$ is the function application operator - it's right associative so it means less brackets.
Similarly,
Data.Setprovides a functional implementation of sets.
We can put these together and change the implementation of the anagrams to create a map of search key to a set of results. This is hideously inefficient initially, but once the data structure is built up finding anagrams should be a little quicker.
anagramList :: String -> IO (Map String (Set String))
anagramList file = do
filecontent <- readFile file
return (foldl (\x y -> Map.insertWith Set.union (stringToKey y) (Set.singleton y) x)
Map.empty
(filter validWord $ lines filecontent))
anagramsOf :: String -> IO ()
anagramsOf word = do
anagrams <- anagramList wordfile
putStrLn (show (Map.lookup (stringToKey word) anagrams)) | http://www.fatvat.co.uk/2009/08/some-haskell-data-structures.html | CC-MAIN-2020-05 | refinedweb | 253 | 60.72 |
On Wed, 5 Nov 1997 12:13:32 -0600 (CST), Klaus Weide <address@hidden> said: >Are you sure that a call to lynx_force_repaint() causes this? Which one? I know for sure that the culprit is LY_SLrefresh and not LY_SLclear because I changed LY_SLrefresh to PUBLIC void LY_SLrefresh NOARGS { #if 0 if (FullRefresh) { SLsmg_touch_lines(0, LYlines); } #endif FullRefresh = FALSE; SLsmg_refresh(); return; } as a quick hack to get around the problem. > However, with Unix curses, > the delwin() - refresh() sequence does not restore what was there > before the popup was invoked, and none of the tweaks I tried got that > to work, [...] > >so the full redraw would still be necessary for curses. Maybe not for >ncurses? It is not necessary for SLANG. --John ; ; To UNSUBSCRIBE: Send a mail message to address@hidden ; with "unsubscribe lynx-dev" (without the ; quotation marks) on a line by itself. ; | https://lists.gnu.org/archive/html/lynx-dev/1997-11/msg00074.html | CC-MAIN-2018-43 | refinedweb | 142 | 68.7 |
Complex Inline Styles
Within your editor, you may wish to provide a wide variety of inline style behavior that goes well beyond the bold/italic/underline basics. For instance, you may want to support variety with color, font families, font sizes, and more. Further, your desired styles may overlap or be mutually exclusive.
The Rich Editor and Colorful Editor examples demonstrate complex inline style behavior in action.
ModelModel
Within the Draft model, inline styles are represented at the character level,
using an immutable
OrderedSet to define the list of styles to be applied to
each character. These styles are identified by string. (See CharacterMetadata
for details.)
For example, consider the text "Hello world". The first six characters of
the string are represented by the empty set,
OrderedSet(). The final five
characters are represented by
OrderedSet.of('BOLD'). For convenience, we can
think of these
OrderedSet objects as arrays, though in reality we aggressively
reuse identical immutable objects.
In essence, our styles are:
[ [], // H [], // e ... ['BOLD'], // w ['BOLD'], // o // etc. ]
Overlapping StylesOverlapping Styles
Now let's say that we wish to make the middle range of characters italic as well: "He_llo wo_rld". This operation can be performed via the Modifier API.
The end result will accommodate the overlap by including
'ITALIC' in the
relevant
OrderedSet objects as well.
[ [], // H [], // e ['ITALIC'], // l ... ['BOLD', 'ITALIC'], // w ['BOLD', 'ITALIC'], // o ['BOLD'], // r // etc. ]
When determining how to render inline-styled text, Draft will identify
contiguous ranges of identically styled characters and render those characters
together in styled
span nodes.
Mapping a style string to CSSMapping a style string to CSS
By default,
Editor provides support for a basic list of inline styles:
'BOLD',
'ITALIC',
'UNDERLINE', and
'CODE'. These are mapped to simple CSS
style objects, which are used to apply styles to the relevant ranges.
For your editor, you may define custom style strings to include with these defaults, or you may override the default style objects for the basic styles.
Within your
Editor use case, you may provide the
customStyleMap prop
to define your style objects. (See
Colorful Editor
for a live example.)
For example, you may want to add a
'STRIKETHROUGH' style. To do so, define a
custom style map:
import {Editor} from 'draft-js'; const styleMap = { 'STRIKETHROUGH': { textDecoration: 'line-through', }, }; class MyEditor extends React.Component { // ... render() { return ( <Editor customStyleMap={styleMap} editorState={this.state.editorState} ... /> ); } }
When rendered, the
textDecoration: line-through style will be applied to all
character ranges with the
STRIKETHROUGH style. | https://draftjs.org/docs/advanced-topics-inline-styles.html | CC-MAIN-2018-34 | refinedweb | 412 | 57.06 |
Angela Schreiber wrote:
> ...
> so, i'd like to understand what is the goal of custom
> xml properties.
> ...
Well, the same as using XML instead of text, I guess. Such as putting
things like marked-up text into properties:
<D:prop><X:comment foo='bar' xmlns:This is
an <xhtml:em>important</xhtml:em> change.</X:comment></D:prop>
In JCR (and therefore Jackrabbit), this kind of structure would
preferably be stored in nodes, not properties, I assume (a design that I
don't necessarily like).
So what we *have* to do is to make sure that somebody wants to set a
property value as above, it will either work, or fail upon PROPPATCH.
For the latter, check whether the property XML element ("comment") has
element child nodes. If it does, reject the request.
If we want enable Jackrabbit to store things like that, we need to map
the WebDAV property to something over than a single-valued string. It
may be possible to use JCR child nodes, but I'm not sure how that fits
into the Jackrabbit WebDAV design.
An alternative would be to tunnel the value in a way that it as least
unlikely to be confused with other property values, such as a
multivalued string property:
comment[0] = "WebDAV XML property"
comment[1] = content serialized as XML, including containing element,
attributes and namespace decls. (*)
BR, Julian
(*) See <> | http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200802.mbox/%[email protected]%3E | CC-MAIN-2014-10 | refinedweb | 230 | 62.17 |
Space-Time Processing—Linux Style
In late 2001, we obtained the old DEC Alphas (described as boat anchors) and decided to see what we could do with them. First, we modified the built-in bootloader to get Red Hat 7.1 running. We made two major changes. First, we chose one master machine and loaded up its SCSI interface with six hard disks and a DVD-ROM. Five disks became a level-5 RAID array (using raidtools) to store simulation or experimental data, and the remaining disk is for booting and recovery.
The second change was to install the MPI on all machines. Although installation was fairly easy, the need to derive all IP addresses through DHCP caused problems. In the end, we negotiated a fixed IP address for the master machine, now called zion. We run startup scripts on other machines that log their IP addresses onto an SMB mount using RPC (Listing 1).
Listing 1. Script to store a cluster node's IP address on an SMB share.
#!/bin/sh # # Write my IP to an SMB share # #Mount central SMB share smbmount //foo/bar /mnt/bar \ -o username=me,password=mine >& \ /dev/null#Write IP #Grab IP address from ifconfig address/sbin/ifconfig | grep Bcast \ | sed 's/^.*addr://;s/Bcast.*//' > \ /mnt/bar/$HOSTNAME.ip
Once MPI was working, we downloaded the latest Octave source and patched it with Octave-MPI. Since then, Octave-MPI seems to have been taken over by Transient Research (see the on-line Resources section). We set up our MPI system with a script that gathers the IP addresses stored by the script above, pings them and builds an rhosts file. Once dynamic generation of rhosts is complete, we simply run Octave-MPI over 4+1 machines with:
recon -v lamboot mpirun -v -c4 octave-mpi
With the system running, the Octave processing load can be shared across the cluster. We find that Alphas have significantly faster floating-point performance than do Pentiums, but using Ethernet to pass MPI messages slows the cluster down. We haven't benchmarked the system, but a simulation that takes a couple of hours to complete on a 2GHz Compaq PC runs about 10% faster on our first cluster of four Alphas (300MHz–500MHz).
We found GNU-Octave to be an excellent tool for numerical simulations. When executed with the -traditional option, also known as -braindead, it runs most MATLAB scripts. In some cases, Octave provides better features than MATLAB does, although MATLAB has a better plotting capability than the default gnuplot engine Octave uses.
Some engineers prefer to develop on Windows, so we provide a Web interface to MPI-Octave. It uses a JavaScript telnet client served from Apache on zion and some back-end scripting. Scripts were adapted from an on-line MUD game engine. For Windows users, the telnet client script automatically mounts their shared Windows drives on the zion filesystem, runs Octave over telnet and sets up plotting capabilities so that plots are written to a directory on the Web server in PNG format for displaying with a browser. Another option lets users save plots directly to their shared drives.
For debugging, a large debug buffer in the FPGA is accessible to the ARM. We memory-mapped the FPGA with 32-bit high-speed asynchronous access to the ARM, with access in userland or kernel space. Unsurprisingly, in kernel space we use a character driver module accessed as a file. This bursts up to 1,024 data words at high speed and handles all signaling, although the sustained speed is not so good. Userland access is accomplished through the neat method of mmapping to the /dev/mem interface, as long as you remember to create /dev/mem on your embedded filesystem first (Listing 2).
Listing 2. writeport.c: a simple program that writes a 32-bit integer to a physical memory location.
#include <stdio.h> #include <fcntl.h> //needed for O_RDWR and O_SYNC #include <sys/mman.h> //needed for PROT_READ etc. #define GRAB_SIZE 1024UL #define GRAB_MASK (GRAB_SIZE - 1) int main(int argc, char **argv) { void *grab_base, *virt_addr; unsigned int md, read_result, writeval; off_t phys_addr = strtoul(argv[1], 0, 0); /*open memory interface*/ if((md = open("/dev/mem", O_RDWR | O_SYNC)) == -1) { printf("ERR - /dev/mem open failed\n"); exit(1); } /* Map one page to the physical address given*/ if(grab_base = mmap(0, GRAB_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, md, phys_addr & ~GRAB_MASK), grab_base == (void *) -1) { printf("ERROR: failed to map\n"); exit(1); } /*write to virtual memory that is now mapped to the requested physical address*/ *((unsigned long *)grab_base + (phys_addr & GRAB_MASK)) = strtoul(argv[2], 0, 0); /*close memory interface*/ if(munmap(grab_base, GRAB_SIZE) == -1) { printf("ERR - unmap failed\n"); exit(1); } close(md); }
These tools allow us to upload known test vectors, saved from Octave on zion, to the debug buffer. Under ARM control, we route the debug buffer output to the input of a block under test, run the system for a few clock cycles and capture the output back in the debug buffer. Analysing the result in Octave tells us if the block works.
We found visualisation to be an important factor. After our first milestone, we invited some people to see the system. They saw boxes humming away with a couple of green LEDs to indicate everything was working. We noted a distinct lack of enthusiasm at what we believe was a world-first demonstration of the technology, so we realised something more was required. To this end, we chose to display channel models as they adapted in near real time. A channel is the complex path from a transmitter to a receiver, including reflections, multiple paths, dispersion and so on. The system we built sent training symbols over the air to sound the channel before sending data. Sounding gives us a picture of the channel, which we decided to display. We used the FPGA debug buffer on the receiver, with an ARM script running periodically to execute a program to extract the channel data from the buffer, format it and save in an Octave-compatible .mat file on zion. Octave was run non-interactively on zion to read the channel data periodically, analyse it and generate four plots as PNG image files, which a zion Web server PHP page displayed as a visualisation updated every four seconds (Figure | http://www.linuxjournal.com/article/7386?page=0,1 | CC-MAIN-2016-36 | refinedweb | 1,053 | 59.23 |
django-random-filestorage 0.1.0
Django storage class that assigns random filenames to all stored files.
Django-random-filestorage is a Django storage class that assigns random filenames to all stored files.
If a user uploads a file named foo.txt, it will be stored as <60 random characters>.txt. In cases where you refer users to uploaded files or images directly, this will prevent them from finding other files, which they may not be authorised to see, by guessing the original names used by your users.
Documentation
The full documentation is at.
Security warning
Warning
Never use django-random-filestorage for cases where the uploaded files may contain links, such as PDF files. In that case, the secrecy of your URLs can be compromised by being leaked through the referer header, as Dropbox discovered in May:
Quickstart
Install django-random-filestorage:
pip install django-random-filestorage
Then use it in your entire Django project:
DEFAULT_FILE_STORAGE="randomfilestorage.storage.RandomFileSystemStorage"
Or, set it on a specific field:
from randomfilestorage.storage import RandomFileSystemStorage random_storage = RandomFileSystemStorage(location='/media/my_files') class Example(models.Model): my_file = FileField(storage=random_storage)
Why would you want to do this?
Let’s say you have an app that manages all ice cream recipes you sell in your shop. Some of your recipes contain secret ingredients, and are therefore only available to a small set of trusted users. We’ll look at two icecreams: strawberry, which has a fairly standard and non-secret recipe, and jellyfish, which is very secret.
The recipes are stored in PDFs, which are uploaded into a Django app that uses a FileField. As Django suggests, the media directory is directly accessible through nginx or some other simple web server. So a user which is authorised to see the strawberry recipe, will be sent to a PDF like. They will not see jellyfish in their list of recipes, as it’s too secret.
However, given that the user knows that you sell jellyfish too, they can simply find that recipe on! There are many cases where names of documents, with differing access levels, are in some way predictable. Dates are another predictable example. And filenames in FileFields are derived from the original filename the user chose.
By making these filenames random, the person who can access will not be able to guess that the jellyfish recipe is available on.
What issues are not resolved?
Once a user knows the random string that was used to name the file, they could provide the link to others. Then again, they could just as well download the file and provide it to others in some other way.
If you would like stricter control over who accesses certain files, you’ll have to prevent direct access to (part of) the media directory. You can serve those files through a Django view instead, but this comes at an additional performance cost. A more performant but more complex alternative is to use Apache sendfile or nginx X-accel.
- Downloads (All Versions):
- 1 downloads in the last day
- 146 downloads in the last week
- 655 downloads in the last month
- Author: Erik Romijn
- Keywords: django-random-filestorage
- License: BSD
- Categories
- Development Status :: 4 - Beta
- Framework :: Django
- Intended Audience :: Developers
- License :: OSI Approved :: BSD License
- Natural Language :: English
- Programming Language :: Python :: 2
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3
- Programming Language :: Python :: 3.3
- Package Index Owner: erikr
- DOAP record: django-random-filestorage-0.1.0.xml | https://pypi.python.org/pypi/django-random-filestorage/0.1.0 | CC-MAIN-2015-40 | refinedweb | 581 | 54.93 |
Dec 19, 2007 05:19 PM|BigTuna99|LINK
I've customized one of my pages not to use the templates. I've removed the FilterRepeater and just have one Dynamic Filter on the page. My question is, is there a way to order the contents of the drop down list that the Dynamic Filter creates? Do I need to create a partial class for that table and do something on the OnLoaded event? Thanks in advance.
Dec 20, 2007 02:05 AM|marcind|LINK
There currently is no way to customize this. For future releases, we are thinking of having an extra metadata attribute that would let you declare the column to sort on (plus asc/desc). Is there any extra functionality you would like to see in this area?
Thanks
Dec 20, 2007 02:55 PM|BigTuna99|LINK
Well, I started customizing the dynamic filter because I needed a human readable label next to the drop down list, not just the table name. It would be nice to have an attribute for every column and table name so that you could provide your own custom string
that would be displayed in the grid column headers and the filter labels etc.
Member
6 Points
Member
45 Points
Dec 20, 2007 03:24 PM|ron.westbrook|LINK
For some reason, after you rename the tables in your LINQ to SQL diagram, you have to close the diagram and reload it in order for the new names to take effect. The names change in the diagram in real time, however they are not seen programmatically untill you reload the diagram. The same thing happened to me. I'm afraid it may be similar to the cashing issue in another topic on this forum which we are currently waiting on a work around for.
Dec 20, 2007 08:16 PM|marcind|LINK
Thanks for the input everyone,
We already have plans to add extra metadata attributes to let you provide friendlier names for columns and filters, as well as better customization for sorting. They will come in particularily handy if you have an externally created data model that you can't edit yourself.
Member
2 Points
May 23, 2008 03:54 PM|caseywills|LINK
So I was wondering if this issue has been resolved yet.
I am trying to sort a automatically generated dropdown in an DynamicGridView, and would love to know if there is a metadata attribute I can use to do that.
Right now it is sorting by the column's ID field rather than its Name field.
thanks,
case
May 23, 2008 04:05 PM|sjnaughton|LINK
Marcin in the third post in this thread just said there is no way to do this currently (and I suspect in the verion that will RTM) to do the sort. However Marcing is also doing some samples see here one sample is Extending the FilterRepeater which may give us insite as to how we could do it our selves.
Dynamic Data FilterRepeater
May 23, 2008 04:27 PM|marcind|LINK
Hi Case,
first of all because you are referring to DynamicGridView it's probably the case that you are using the December '07 CTP. That version is quite outdated at this point and I would invite you to download either the .NET Framework 3.5 SP1 or the cutting edge version of Dynamic Data on Code Gallery.
Once you have the recent bits, you can use DisplayColumnAttribute.SortColumn to provide sorting information.
Nov 26, 2008 01:15 PM|tigra68|LINK
I know that this is an older posting. Has this been added yet? I really desparately need this functionality.
I have seen posts about work arounds, but am too much of a noobie to have a clue what they are talking about. I am just now learning VB, so people throwing code at me and saying "add it" doesn't help me much. I really need to know more specifics, like WHERE to add it.
If anyone has any suggestions or help they can provide, I sure would appreciate it.
I have tried this "DisplayColumnAttribute.SortColumn" and the link doesn't work...so tried searching for it, but doesn't
seem to be for the Filters...at least not from what I am understanding.
Thanks, Ann
Nov 26, 2008 01:56 PM|sjnaughton|LINK
Have a look at this I think it may be what you need:
[MetadataType(typeof(EmployeeMD))] [DisplayColumn("LastName","LastName",false)] public partial class Employee { public class EmployeeMD { public object EmployeeID { get; set; } [UIHint("MultilineText")] public object LastName { get; set; } public object Title { get; set; } public object TitleOfCourtesy { get; set; }
//... more fields here } }
the DisplayColumn takes 3 parameters
This affects the Filters and the ForeignKey_Edit filedTemplates.
Hope this helps [:D]
Dynamic Data filters Sorting
Nov 26, 2008 02:10 PM|tigra68|LINK
Again...WHERE does it go? I have no idea which file this would need to go in. I already have this information, but have tried to figure out where to put it and cannot determine it.
Do you put it into the ForeignKey_Edit Template? Do you put it in the Custom Template for the Table you want to use it on? Do you put it in the Default page? What about the pages that handle ALL of the Tables...does it go in that file?
Also, is 'EmployeeMD' the table name or is 'Employee' the table name? Whichever is not the table name...what is it? A column in the table? Or a new name?
I kind of get the feeling that the 'EmployeeMD' is the table and that the 'public objects' are the columns... Is that right?
Thanks, Ann
Nov 26, 2008 02:20 PM|sjnaughton|LINK
You create a partial class file (in the App_Code folder in a file based website) along with the Linq to SQL files. You create a class with the partial class that represent the entity in the Linq to SQL classes (Look in the <ModelName>.designer.cs file for the class names etc) and then add the metadata, also see these videos here: in the ASP.NET Dynamic Data section.
Hope this helps [:D]
Dynamic Data Metadata Partial Classes
Nov 26, 2008 03:05 PM|tigra68|LINK
Okay, I have done what I think you said to do, but am getting errors.
I created a new class file in the App_Code folder called "Series_Master_Best_Cost", which is the name of my table, and it is how the class is declared in the 'designer.vb' file in the App_Code folder.
I changed the parameters in 'DisplayColumn' to "User_Master", which is the column name for the Filter that I need sorted in alphabetical order.
It is complaining about 'DisplayColumn', saying "Declaration Expected". I made a declaration, but still didn't seem to like it.
I had to comment out 'Option Strict On' because it kept complaining about the parameters for 'DisplayColumn', but that still did not fix things.
Here is the error information I am getting:
Nov 26, 2008 03:31 PM|tigra68|LINK
Got the error to go away. Found some information on that showed a little different information, but still is not sorting my column. I am guessing it is now a matter of me not having the right information in each of the parameters, classes, or types.
This is my partial class file:
Imports Microsoft.VisualBasic
Imports System.Web.DynamicData
Imports System.ComponentModel
Imports System.ComponentModel.DataAnnotations
<MetadataType(GetType(Series_Master_Best_Cost))> _
Partial Public Class Series_Master_Best_Cost
<DisplayColumn("USER_MASTER", "USER_MASTER", False)> _
Public Class Series_Master_Best_Cost
End Class
End Class
Is there something else that I am missing?
I also tried 'Series_Master_Best_CostMD' in the MetadataType and Public Class (not Partial), which seemed to follow what others had done, but then it said that the type was not defined.
What is the difference between DisplayColumnAttribute and DisplayColumn...they seem to both be used in that page referenced above.
Anything else you can tell me?
Thanks, Ann
Nov 26, 2008 06:27 PM|sjnaughton|LINK
Imports Microsoft.VisualBasic
Imports System.Web.DynamicData
Imports System.ComponentModel
Imports System.ComponentModel.DataAnnotations
<MetadataType(GetType(Series_Master_Best_Cost))> _
<DisplayColumn("USER_MASTER", "USER_MASTER", False)> _
Partial Public Class Series_Master_Best_Cost
Public Class Series_Master_Best_Cost
End Class
I think it should be more like this the table level attributes on the partial class.
Dynamic Data Metadata Partial Classes
Nov 26, 2008 07:14 PM|tigra68|LINK
Has anyone else been able to get this to work? I have changed it as suggested (and closed the second class too), and it still does not sort the filtered list...or anything else for that matter...
There must be something other than this that I am not getting done.
While I am a novice, it seems to me that this is trying to sort the COLUMN "user_master", not the filtered dropdown list, but that isn't sorted either...
This wouldn't be such a big issue, but we have thousands of employees in our company and trying to find one of them in an unsorted list is not possible!
Any help would be great!
HAPPY THANKSGIVING TO ALL!!!
Thanks, Ann
Nov 26, 2008 08:59 PM|marcind|LINK
Maybe an example will help; let's assume that your model generates the folowing classes (sorry for using C# syntax):
public partial class Product { public Category MyCategory; } public partial class Category { public string Name; public int Code; }
Now say you are displaying your Products list. Since Product has a foreign key reference to Category, a filter in the form of a drop-down will be generated for the MyCategory column. I am assuming now that you want to customize the order in which the items appear in this dropdown. If you are trying to do something different, please clarify.
So to customize this ordering, you would decorate the Category class (since this is the type of the MyCategory property on Product) with DisplayColumnAttribute, initialized as follows:
[DisplayColumnAttribute("Name", "Code", true)] public partial class Category { }
This will cause Dynamic Data to display the values in the dropdown using the value of the Name property, but they will be ordered in descending order based on the Code property.
As was noted earlier, you would add it in code to your App_Code folder (if in a WebSite project) or just as a class anywhere inside of a Web Application project. Since this is a class-level attribute, you don't need to use MetadataTypeAttribute.
You asked about the difference between [DisplayColumnAttribute] and [DisplayColumn]. There is none. The compiler automatically infers that if you have [DisplayColumn] you actually mean [DisplayColumnAttribute]. Note that this behavior only applies to attributes when they are used to decorate code (and not ones instantiated in code).
Nov 26, 2008 11:15 PM|sjnaughton|LINK
Hi Tigra68 nI've just tried it and all seems to be working fine in c# and VB
here's the VB to show:
<MetadataType(GetType(Employee.EmployeeMD))> _ <DisplayColumn("LastName", "LastName", True)> _ Partial Public Class [Employee] Public Class [EmployeeMD] Public EmployeeID As Object Public LastName As Object Public FirstName As Object Public Title As Object Public TitleOfCourtesy As Object Public BirthDate As Object Public HireDate As Object Public Address As Object Public City As Object Public Region As Object Public PostalCode As Object Public Country As Object Public HomePhone As Object Public Extension As Object Public Photo As Object Public Notes As Object Public ReportsTo As Object Public PhotoPath As Object Public Employees As Object Public EmployeeTerritories As Object Public Orders As Object Public Employee As Object End Class End ClassHope this helps [:D]
Dynamic Data filters Sorting
Dec 01, 2008 07:17 PM|tigra68|LINK
Okay...gonna make one more go on this, and then I think I will have to give up. I am not seeming to get where I need to with this, and I am sure it is because I do not yet have enough knowledge to know what I am doing.
Here is what I have:
Three tables
1 - Series_Master (this holds my product numbers)
2 - Series_Master_Best_Cost (this holds data about pricing on my products)
3 - User_Master (this holds information on which user is associated with which ID number - this is the one that I am trying to get to sort when displaying the Series_Master_Best_Cost table)
When I click on Series_Master_Best_Costs on the default page, it takes me to my Series_Master_Best_Cost table, with two drop downs on the top.
1 - Series_Master
2 - User_Master
I really need to get rid of the first one as it does nothing for me, but for now, was ONLY trying to sort the User_Master list.
In my App_Code folder, I created a new .vb file called 'Series_Master_Best_Cost.vb' to place my code in (the code that has been provided here on this forum).
This is the contents of that file:
Imports Microsoft.VisualBasic
Imports System.Web.DynamicData
Imports System.ComponentModel
Imports System.ComponentModel.DataAnnotations
'<MetadataType(GetType(Series_Master_Best_Cost))> _
<DisplayColumnAttribute("USER_NAME", "USER_NAME", False)> _
Partial Public Class Series_Master_Best_Cost
Public Class Series_Master_Best_CostMD
Public Series_Master_Best_Cost_ID As Object
Public Series_Master_ID As Object
Public Best_Cost_Value As Object
Public Stack_Height As Object
Public Date_Entered As Object
Public Date_Modified As Object
Public Entered_By_ID As Object
Public Modified_By_ID As Object
End Class
End Class
*Note I have commented out the MetadataType line, but made no difference. I still do not get a sorted list.
In 'DisplayColumnAttribute', I attempted to use "User_Name", "User_Master_ID", "User_ID" and "Entered_By_ID"...none made a difference.
My USER_MASTER table has the following Columns:
1 - USER_MASTER_ID
2 - USER_NAME
3 - FIRST_NAME
4 - LAST_NAME
5 - USER_ID
Entered_By_ID in the Series_Master_Best_Cost table matches User_Master_ID in the User_Master table.
I thought at first that it might be that my file name needed to be "Series_Master_Best_Costs.vb", plural like the other places, which didn't really seem to be correct, but I tried it anyway and it did not make anything happen either.
Any further assistance would be greatly appreciated.
Sorry to seem so lost, but I am... :-)
Hope everyone had a good holiday!
Dec 01, 2008 10:11 PM|sjnaughton|LINK
Sorry you are struggling with this do you want me to send you a working example using VB and Northwind DB?
Dynamic Data filters Sorting
Dec 01, 2008 10:17 PM|tigra68|LINK
That might be helpful. I was hoping that I might have provided enough information to have discerned where I might have missed out. However, a working example would also be very helpful I think.
I sent you a private message with my email address so I wouldn't have to post in here.
Thanks, Ann
Dec 02, 2008 03:37 PM|tigra68|LINK
Oooo-Kay...now that I have been able to see the working version, I discovered the one thing that was NOT in the information on how to make this work...
In the "DataClasses.dbml" file, you have to create an association for the table of the dropdown that you want to sort.
In my case, since I was trying to make the "User_Name" be sorted in alphabetical order, I had to create an association on my "User_Master" table.
You actually select that table for both Child and Parent, and then select the field that you need sorted for each of those.
When it actually creates the Association and you go to look at the Child and Parent properties, it shows that the:
And in my new .vb file, I had to write it as the following:
Imports Microsoft.VisualBasic
Imports System.Web.DynamicData
Imports System.ComponentModel
Imports System.ComponentModel.DataAnnotations
<MetadataType(GetType(USER_MASTER.USER_MASTERMD))> _
<DisplayColumn("USER_NAME", "USER_NAME", False)> _
Partial Public Class USER_MASTER
Public Class USER_MASTERMD
Public USER_MASTER_ID As Object
Public USER_NAME As Object
Public FIRST_NAME As Object
Public LAST_NAME As Object
Public FULL_NAME As Object
Public FACILITY_ID As Object
Public TITLE_ID As Object
Public DEPARTMENT_ID As Object
Public ACTIVE As Object
Public ASSOCIATE_NUMBER As Object
Public ENTERED_BY_ID As Object
Public DATE_ENTERED As Object
Public USER_ID As Object
Public DATE_WRITE As Object
Public DATE_HIRE As Object
Public BIRTH_MONTH As Object
Public BIRTH_DAY As Object
Public DATE_TERMINATED As Object
Public PLANT_ID As Object
End Class
End Class
It is important to note that your partial class needs to be the table that holds the key that you want sorted, not the page of the table that you want it displayed on! While this may be obvious to some who are more experienced, for someone who is a bit more of a novice, it was not obvious...and even more so...it most certainly was not obvious that I needed to create that association in the "DataClasses.dbml" file. That association in conjunction with the partial class was the key to making this work!
Thank you so much for the help! Having a working example that I could look at and see what was going on in the actual files made all the difference in the world!!!
Ann
Dec 03, 2008 10:17 AM|sjnaughton|LINK
Are you saying that there was no relationship in you model?
Dynamic Data filters Sorting
Dec 03, 2008 11:58 PM|sjnaughton|LINK
Sorry Ann I'm not sure I understand as I've never had to add a relationship to make any of my models work [:(]
Dynamic Data filters Sorting
Dec 04, 2008 02:54 PM|tigra68|LINK
I do not know. What I do know is that the relationship that referred back to itself (on your example it was the Employee table) was not automatically created during this process. I had to manually add it to the "DataClasses.dbml" layout by creating a new association for my table. It was ONLY after adding this association that my sorting worked.
I wish you could include screen shots on here, as that would make it easier to explain.
If you go to the project you sent me and look at the "DataClasses.dbml" page, you will see a self-referencing association on your Employee table. That is the one I had to add to mine to make it work.
If you did not create it, I am unsure of how it got there. It was not automatically generated for my project.
Regardless, it was the key to making the sorting of the datalist work for me...and now it is PERFECT!!!
Thank you again for your assistance!
Ann
Dec 04, 2008 11:25 PM|sjnaughton|LINK
No problem it's good to know you got it working and it's not something wierd stoppint it. [:D]
Dynamic Data filters Sorting
29 replies
Last post Dec 04, 2008 11:25 PM by sjnaughton | http://forums.asp.net/p/1196276/2071196.aspx?Re+Dynamic+filter+order+by | CC-MAIN-2013-48 | refinedweb | 3,081 | 62.27 |
When we last left our heroes, they were blazing a trail of understanding through the vast space that we call organizational management (uggg….really need to work on that opener). Anyways….if you read part 1 of this (Understanding and Coding for OM Inheritance: Part 1 – Basics), then you should now have at least a “dangerous” knowledge of OM inheritance and how it all works.
Now, let us talk about how we would go about writing our own code to detect and read inherited values for such things as reporting, custom classes or HCM Processes and Forms generic services (a topic near and dear to my heart haha). I had actually looked around the “interwebz” (via Google searches of course) and did not find much information or help from others on doing this although I found many posts from people asking “how” to do it (hence the inspiration for sharing this knowledge now in this way).
Table T77S0
The first thing to understand is table T77S0 (yes, that is a zero on the end….there is another similarly named table with an “O” on the end, but that is not what we want! Great job with the naming there, SAP! haha). Table T77S0 is where all the “magic” happens. More to the point, it is our configuration table that controls OM inheritance.
At some point in our code, we want to read this table first to see which inheritance switches are active (a small read sketch follows at the end of this section). The important switches for us are as follows:
- INHIC : If this value is set to “X”, this means that positions will inherit “Company Code” from their related Org Unit (i.e., obligatory/mandatory inheritance of company code is active; also keep in mind that this means the assignment is forced and not an option to change). If this value is not “X”, it means that we can assign positions to different company codes than their related Org Unit. This is typically not set to “X” unless we want all positions to have the same company code (smaller organization maybe?).
- INHIH : Same idea as INHIC but for “Controlling Area”. If set to “X”, it is assigned/inherited from the related Org Unit. If not, then a position can be assigned a different controlling area than the one assigned to its related Org Unit.
- INHS : This is typically our most important switch to read, as it is more flexible in its inheritance (it says that inheritance is “possible” but not mandatory, unlike the other switches). If this switch is set to “X”, then our position can inherit any one or more of several Account Assignment features (Controlling Area, Company Code, Business Area and/or Personnel Area/Subarea).
(* keep in mind that although the documentation refers to “position” for inherited values, the same applies to “child” Org Units as well.)
You might notice I did not mention a setting for “Cost Center” in there. Because cost center inheritance has slightly different implications, as it ties into the FICO (Finance/Controlling) module as well, we will discuss how to handle that one later in this document.
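Before moving on, here is a minimal sketch of what reading these switches in our own code might look like. One hedge up front: T77S0 is a simple switch table keyed by group (GRPID) and semantic abbreviation (SEMID) with the value in GSVAL, and the group ID 'PPOM' used below is an assumption about where the inheritance switches typically live. Verify the exact entries in your own system (e.g., via SE16) before relying on this.

```abap
DATA: lv_inhs  TYPE t77s0-gsval,   " inheritance of acct. assignment features possible
      lv_inhic TYPE t77s0-gsval,   " mandatory company code inheritance
      lv_inhih TYPE t77s0-gsval.   " mandatory controlling area inheritance

" Assumption: the switches sit under group 'PPOM' - check T77S0 in your system.
SELECT SINGLE gsval FROM t77s0 INTO lv_inhs
  WHERE grpid = 'PPOM' AND semid = 'INHS'.

SELECT SINGLE gsval FROM t77s0 INTO lv_inhic
  WHERE grpid = 'PPOM' AND semid = 'INHIC'.

SELECT SINGLE gsval FROM t77s0 INTO lv_inhih
  WHERE grpid = 'PPOM' AND semid = 'INHIH'.

IF lv_inhs = 'X'.
  " Inheritance of account assignment features is possible, so a "missing"
  " infotype 1008 value may simply be inherited from a parent object.
ENDIF.
```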
Infotype 1008….or actually, “the case of the missing infotype 1008”
When trying to locate, say, the Personnel Area assigned to our position, the first thing we will often do is either use a standard function or select from infotype 1008 (table HRP1008) to find our entry.
However, if this value is inherited, we will not have an infotype 1008 entry for our object (position).
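In code, a quick way to detect this situation is simply to check whether the object has its own infotype 1008 record at all. Here is a minimal sketch; the plan version and position ID below are hard-coded only for illustration (derive the active plan version properly in real code):

```abap
DATA: lv_objid  TYPE hrp1008-objid VALUE '50001234',  " hypothetical position ID
      lv_exists TYPE hrp1008-objid.

" Does the position have its OWN account assignment record (IT1008) today?
SELECT SINGLE objid FROM hrp1008 INTO lv_exists
  WHERE plvar = '01'           " active plan version (assumption)
    AND otype = 'S'            " position
    AND objid = lv_objid
    AND istat = '1'            " active
    AND begda <= sy-datum
    AND endda >= sy-datum.

IF sy-subrc <> 0.
  " No own IT1008 entry -> the account assignment values are inherited,
  " and we need to ask the "source" object for them (see below).
ENDIF.
```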
If we look at our object (e.g., a position) in transaction PPOME, the account assignment details will show us that it is inheriting Cost Center, Company Code, Personnel Area and Personnel Subarea from a higher-level org unit. The logical thing to do then is to look “behind” PPOME and figure out how it presents this to us. I will save you some time…it is all in:
function group: RHOMDETAIL_APPL
Screen: 0504
We pretty much will follow what we see in the module read_and_inherit.
Shortly after the “config check”, you will notice a block that simply says: if we are looking at a position, let’s just make sure we actually allow for inheritance by checking our configuration (table T77S0) mentioned before. But then, a bit later, you will see…
CALL FUNCTION ‘RH_CHECK_ACC_INPUT’
This is where all the magic happens! This function will return any inherited controlling area (KOKRS), business area (GSBER), company code (BUKRS), personnel area (PERSA) and personnel subarea (BTRTL), as well as the object type and object ID of the object “who” they are inherited from.
If it is not immediately apparent, every returned parameter with “INH_” in front of it is the “inherited” value. So for example, if we look at personnel area (a hedged call sketch follows this list):
- INH_PERS_AREA: this is the inherited personnel area value
- INH_PERS_SUB_AREA: this is the inherited personnel subarea value (it comes along with the pers. area)
- INH_OTYPE_PERS_AREA: object type of the object from which we inherit the personnel area (for example, if we inherit from an org unit, this will be “O”)
- INH_OBJID_PERS_AREA: object ID of the object from which we inherit the personnel area (for example, this would be the org unit ID number)
- INH_BEGDA_PERS_AREA: “begin date” for the object that our object is inheriting from
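To make that concrete, here is a rough call sketch. A big hedge: the exporting parameters for personnel area/subarea are the ones listed above, but the importing parameter names I show (object type, object ID, date) are assumptions purely for illustration. Open RH_CHECK_ACC_INPUT in SE37 and check its real signature (it has further exporting parameters for company code, business area and controlling area as well).

```abap
DATA: lv_objid     TYPE hrobjid VALUE '50001234',  " hypothetical position ID
      lv_inh_persa TYPE persa,     " inherited personnel area
      lv_inh_btrtl TYPE btrtl,     " inherited personnel subarea
      lv_inh_otype TYPE otype,     " object type we inherit FROM (e.g. 'O')
      lv_inh_objid TYPE hrobjid.   " object ID we inherit from

" NOTE: the importing parameter names below are illustrative assumptions -
" verify the actual signature of RH_CHECK_ACC_INPUT in SE37 before using this.
CALL FUNCTION 'RH_CHECK_ACC_INPUT'
  EXPORTING
    otype               = 'S'            " our object type (assumption)
    objid               = lv_objid       " our object ID (assumption)
    begda               = sy-datum       " key date (assumption)
  IMPORTING
    inh_pers_area       = lv_inh_persa
    inh_pers_sub_area   = lv_inh_btrtl
    inh_otype_pers_area = lv_inh_otype
    inh_objid_pers_area = lv_inh_objid.

IF lv_inh_persa IS NOT INITIAL.
  " Our position inherits personnel area LV_INH_PERSA / subarea LV_INH_BTRTL
  " from the object LV_INH_OTYPE / LV_INH_OBJID (typically an org unit).
ENDIF.
```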
Inherited Cost Center
As with infotype 1008, if we want to find the cost center related to our position, we usually utilize a standard SAP function or select from infotype 1001 (relations) to find our “related” cost center (relation A011). But again, what do we do when this brings back nothing? That is a good sign it is being inherited from elsewhere. Again, as with Company Code/Controlling Area/Business Area/Pers. Area/Subarea, we can save ourselves a lot of time and headaches by not trying to “reinvent the wheel” and again, just follow the example/lead of transaction PPOME.
Within the module call to handle_main_cc , you see it first loads up an “input” structure that is just our object type and id (ex. “S” for position type and the position ID number) as follows:
And then, it simply calls standard SAP function RH_COSTCENTER_OF_OBJECT_GET.
(*note: function RH_COSTCENTER_OF_OBJECT_GET is actually really cool and powerful. We can pass over a table of input “objects” and it will find all the associated cost centers…and more…for us. Very useful one to remember!)
The returned structure maincc_tab will contain the cost center assigned to our object (whether direct relation or inherited). But how do we know which is which? Well, because this VERY cool return structure has a “flag” field called inherited that will be “X” if we inherit the cost center….and it also passed us the object type and object ID for where we inherit from!!!! VERY COOL!!!!
- SCLAS = K (cost center)
- SOBID = our cost center value in the format “(cost center)(controlling area)” or KOSTL(10)+KOKRS(4) so for example, we might have 0000124340US01
- INHERITED = flag (if “X” then it is inherited)
- INH_OTYPE = object type inherited from
- INH_OBJID = object ID of the object it inherits the cost center from
- POSITION_OTYPE = S (position)
- POSITION_OBJID = position object ID (redundant if the original object is a position)
Putting It All Together
So now we know where to find/read all these inherited values, how do we best put this new found knowledge to good use? …and by “good”, I mean easily reusable for ourselves and others (haha). Well, for me, I have a handy little class that deals strictly with finding inherited values for me. It is simply a matter of making “wrapper” methods around the previously mentioned standard functions and then we can use them as needed. For example,
The inputs to my public methods are actually all quite simple. I allow an object type, an object ID and an optional “begin date” and “end date” (which, if not passed, are set to today’s date and 12/31/9999 respectively). In this way, I can use this to find inheritance on any object (be it a position/object type “S” or org unit/object type “O”).
The private method GET_INHERITED_VALUES is a wrapper around the function RH_CHECK_ACC_INPUT along with a bit of other business logic in there (like checking the config table T77S0 flags as needed). Then I have other public methods that allow me to call that private method and only get the values I want (for example GET_INHERITED_PERSA only returns me the inherited personnel area and subarea). That way, I keep all the “fetch” code in my one private method so I only have to adjust there if needed, as opposed to having the same logic in each of my public methods all acting as wrappers themselves around the standard function. Lastly, the public method GET_INHERITED_COSTCENTER is itself just a wrapper around standard function RH_COSTCENTER_OF_OBJECT_GET, but I have additional code in there that makes sure we only return an inherited cost center (we do not care about others as we can handle that elsewhere…remember, our method has a specific purpose here…inherited cost center only! haha).
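As a rough illustration only (this is not the author's actual class; the method names simply follow the description above, and the parameter names and types are my assumptions using standard HR data elements), the definition part of such a wrapper class might look like:

CLASS zcl_om_inheritance DEFINITION.
  PUBLIC SECTION.
    METHODS get_inherited_persa
      IMPORTING iv_otype TYPE otype
                iv_objid TYPE hrobjid
                iv_begda TYPE begda OPTIONAL    " defaulted to today inside
                iv_endda TYPE endda OPTIONAL    " defaulted to 12/31/9999 inside
      EXPORTING ev_persa TYPE persa
                ev_btrtl TYPE btrtl.
    METHODS get_inherited_costcenter
      IMPORTING iv_otype TYPE otype
                iv_objid TYPE hrobjid
      EXPORTING ev_kostl TYPE kostl
                ev_kokrs TYPE kokrs.
  PRIVATE SECTION.
    " wraps RH_CHECK_ACC_INPUT plus the T77S0 switch checks
    METHODS get_inherited_values
      IMPORTING iv_otype TYPE otype
                iv_objid TYPE hrobjid.
ENDCLASS.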
With that, you now have your own easy way to read inheritance values while also considering configuration.
Thank you for this!
You are very welcomed!
Very interesting & well written. Thought I had a decent understanding of OM inheritance until I read this.. there’s always so much more than what the textbooks teach us!
Thanks Chris.
Everything you ever wanted to know about inheritance without asking!!! Thanks Chris!!!
Have a great weekend!!!
Thanks! Funny how this old blog has new life again….like a zombie….hey! It is HALLOWEEN weekend! hahahaha 😯
Chris,
Sorry, feel free to tell me that I might want to open a thread for this. Do you know if there is a way to code master benefit center in a tab under PPOME tcode in the same way you code your cost center under the IT1008 tab? (or just make the field appear 😉 ?? )
Cheers!
Hi Christopher, very useful content! Can you point a couple of functions/methods to get inherited relations like the Boss (A012)? is it mandatory to go through evaluation paths?
BR
Sérgio. | https://blogs.sap.com/2015/04/11/understanding-and-coding-for-om-inheritance-part-2-coding/ | CC-MAIN-2021-04 | refinedweb | 1,686 | 58.21 |
Created on 2010-05-17 18:50 by stutzbach, last changed 2014-05-26 08:02 by rhettinger. This issue is now closed.
The set() operators (__or__, __and__, __sub__, __xor__, and their in-place counterparts) require that the parameter also be an instance of set().
They're documented that way: "This precludes error-prone constructions like set('abc') & 'cbs' in favor of the more readable set('abc').intersection('cbs')."
However, an unintended consequence of this behavior is that they don't inter-operate with user-created types that derive from collections.Set.
That leads to oddities like this:
MySimpleSet() | set() # This works
set() | MySimpleSet() # Raises TypeError
(MySimpleSet is a minimal class derived from collections.Set for illustrative purposes -- see attached file)
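The attachment is not reproduced here; a minimal class along those lines (my own reconstruction, not necessarily the exact attached file) would be:

import collections

class MySimpleSet(collections.Set):
    "Bare minimum: just the three abstract methods the Set ABC requires."
    def __init__(self, iterable=()):
        self._items = frozenset(iterable)
    def __contains__(self, value):
        return value in self._items
    def __iter__(self):
        return iter(self._items)
    def __len__(self):
        return len(self._items)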
collections.Set's operators accept any iterable.
I'm not 100% certain what the correct behavior should be. Perhaps set's operators should be a bit more liberal and accept any collections.Set instance, while collections.Set's operators should be a bit more conservative. Perhaps not. It's a little subjective.
It seems to me that at minimum set() and collections.Set() should inter-operate and have the same behavior.
I should add:
I discovered the inconsistency while working on my sortedset class, which provides the same interface as set() but is also indexable like a list (e.g., S[0] always returns the minimum element, S[-1] returns the maximum element, etc.).
sortedset derives from collections.MutableSet, but it's challenging to precisely emulate set() when collections.MutableSet and set() don't work the same way. ;-)
Guido, do you have a recommendation?
No idea, I don't even know what collections.Set is. :-(
In my opinion, the set's operator should be a bit more liberal and accept any collections.Set instances. Given collections.Set is an ABC and isinstance(set, collections.Set) is True, the set methods should(strong recommended) follow all the generalized abstract semantic definition in the ABC. This according to PEP 3119:
""".
"""
The collections.Set defines __or__() as this (for example):
"""
def __or__(self, other):
if not isinstance(other, Iterable):
return NotImplemented
chain = (e for s in (self, other) for e in s)
return self._from_iterable(chain)
"""
which means the "|" operator should accept any iterable. So I think it's better to make set's methods more liberal.
Raymond, do you agree with Ray's analysis?
> The operator methods in setobject.c should be liberalized to accept
> instances of collections.Set as arguments.
Under this plan, set() and collections.Set will still have slightly different behavior. collections.Set will be more liberal and accept any iterable. Are you okay with that? I don't feel strongly about this point; I just want to make sure it's a conscious decision.
I do feel strongly that set and collections.Set should be able to inter-operate nicely and the proposal satisfies that requirement so I would be happy with it.
> To implement PyObject_IsInstance(other, collections.Set), there may
> be a bootstrap issue (with the C code being compiled and runnable
> before _abcoll.py is able to create the Set ABC). Alternatively,
> the code in setobject.c can lazily (at runtime) lookup
> collections.Set by name and cache it so that we only do one
> successful lookup per session.
I favor the lazy lookup approach.
Agreed. Ideally, the "PyObject_IsInstance(other, collections.Set)" logic would be abstracted out as much as possible so other parts of Python can make similar checks without needing tons of boilerplate code in every spot.
For what it's worth, I don't think we will find as many inconsistency issues with ABCs other than Set. Set has methods that take another Set and return a third Set. That forces different concrete implementations of the Set ABC to interact in a way that won't come up for a Sequence or Mapping.
(I suppose that Sequence.extend or MutableMapping.update are somewhat similar, but list.extend and dict.update are already very liberal in what they accept as a parameter.)
Rough cut at a first patch is attached.
Still thinking about whether Set operations should be accepting any iterable or whether they should be tightened to expect other Set instances. The API for set() came from set.py which was broadly discussed and widely exercised. Guido was insistent that non-sets be excluded from the operator interactions (list.__iadd__ being on his list of regrets). That was probably a good decision, but the Set API violated this norm and it did not include named methods like difference(), update(), and intersection() to handle the iterable cases.
Also, still thinking about whether the comparison operators should be making tight or loose checks.
Daniel, do you have time to work on this one?
If so, go ahead an make setobject.c accept any instance of collections.Set and make the corresponding change to the ABCs:
def __or__(self, other):
if not isinstance(other, Set):
return NotImplemented
chain = (e for s in (self, other) for e in s)
return self._from_iterable(chain)
The code in the attached prelim.patch has working C code isinstance(x, collections.Set), but the rest of the patch that applies is has not been tested. It needs to be applied very carefully and thoughtfully because:
* internally, the self and other can get swapped on a binary call
* we can't make *any* assumptions about "other" (that duplicates have actually been eliminated or that the elements are even hashable).
The most reliable thing to do for the case where PyAnySet(obj) is False but isinstance(obj, collections.Set) is true is to call the named method such as s.union(other) instead of continuing with s.__or__ which was designed only with real sets in mind.
Yes, I can take a stab at it.
No need to rush this for the beta. It's a bug fix and can go in at any time. The important thing is that we don't break the C code. The __ror__ magic method would still need to do the right thing and the C code needs to defend against the interpreter swapping self and other.
Would it be sufficient to:
1) Restrict collections.Set()'s operators to accept collection.Set but not arbitrary iterables, and
2) Fix Issue2226 and let set() | MySimpleSet() work via collections.Set.__ror__
Attached is a patch that implements this approach, nominally fixing both this and Issue2226.
This solutions seems much too simple in light of how long I've been thinking about these bugs. I suspect there are code hobgoblins waiting to ambush me. ;)
If the code were acting exactly as documented, I would consider this a feature request. But "require that the parameter also be an instance of set()" (from original message) is too limited.
>>> set() | frozenset()
set()
So 'set' in "their operator based counterparts require their arguments to be sets." (doc) seems to be meant to be more generic, in which case 'instance of collections.Set' seems reasonable. To be clear, the doc could be updated to "... sets, frozensets, and other instances of collections.Set."
"Both set and frozenset support set to set comparisons. " This includes comparisons between the two classes.
>>> set() == frozenset()
True
so perhaps comparisons should be extended also.
Review of set-with-Set.patch:
Looks good overall.
I agree that restricting operations to instances of Set rather than Iterable is correct.
Implementing "__rsub__" in terms of - (subtraction) means that infinite recursion is a possibility. It also creates an unnecessary temporary.
Could you just reverse the expression used in __sub__?
Would you add tests for comparisons; Set() == set(), etc.
These are probably tested implicitly in the rest of the test suite, but explicit tests would be good.
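For reference, "reversing the expression" would mean having __rsub__ mirror __sub__ directly instead of delegating to the - operator, roughly like this (a sketch of the idea, not the committed patch; Iterable, Set and _from_iterable are the names already used inside the ABC):

    def __rsub__(self, other):
        if not isinstance(other, Set):
            if not isinstance(other, Iterable):
                return NotImplemented
            other = self._from_iterable(other)
        # same generator expression as __sub__, with self and other swapped
        return self._from_iterable(value for value in other
                                   if value not in self)

That avoids both the recursion risk and the extra temporary.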
Heads up, Issue #16373.
Armin pointed out there that one nasty consequence of the remaining part of issue 2226 and this bug is making it much harder than it should be to use the ItemsView, KeysView and ValuesView from collections.abc to implement third party mappings that behave like the builtin dict.
Raymond, will you have a chance to look at this before 3.4rc1? Otherwise I'd like to take it.
I updated the patch to apply cleanly to the default branch. I also added several new test cases which uncovered issues with Daniel's previous patch.
Specifically:
- the reverse functions were not being tested properly (added a separate test to ensure they all return NotImplemented when appropriate)
- the checks in the in-place operands were not being tested, and were also too strict (added tests for their input checking, and also ensured they still accepted arbitrary iterables as input)
I've also reduced the target versions to just 3.4 - this will require a porting note in the What's New, since the inappropriate handling of arbitrary iterables in the ABC methods has been removed, which means that things that previously worked when they shouldn't (like accepting a list as the RHS of a binary set operator) will now throw TypeError.
Python 3.3:
>>> set() | list()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for |: 'set' and 'list'
>>> from test.test_collections import WithSet
>>> WithSet() | list()
<test.test_collections.WithSet object at 0x7f71ff2f6210>
After applying the attached patch:
>>> set() | list()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for |: 'set' and 'list'
>>> from test.test_collections import WithSet
>>> WithSet() | list()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for |: 'WithSet' and 'list'
I initially missed Mark's suggestion above to avoid the recursive subtraction operation in __rsub__. v2 of my patch includes that tweak.
I think set operations with iterable (but not set) should be tested.
This didn't make it into 3.4, and the comment about needing a porting note above still applies, so to 3.5 it goes.
Thanks for the patch update. I will look at it shortly.
Attaching a draft patch with tests.
Ah, interesting - I completely missed the comparison operators in my patch and tests. Your version looks good to me, though.
That looks like a patch against 2.7 - do you want to add 2.7 & 3.4 back to the list of target versions for the fix?
Adding tests for non-set iterables as suggested by Serhiy.
Added tests that include the pure python sets.Set(). Only the binary or/and/sub/xor methods are tested.
The comparison operators were designed to only interact with their own kind. A comment from Tim Peters explains the decision raise a TypeError instead of returning NotImplemented (it has unfortunate interactions with cmp()). At any rate, nothing good would come from changing that design decision now, so I'm leaving it alone to fade peacefully into oblivion.
New changeset 3615cdb3b86d by Raymond Hettinger in branch '2.7':
Issue 8743: Improve interoperability between sets and the collections.Set abstract base class.
New changeset cd8b5b5b6356 by Raymond Hettinger in branch '3.4':
Issue 8743: Improve interoperability between sets and the collections.Set abstract base class. | https://bugs.python.org/issue8743 | CC-MAIN-2019-26 | refinedweb | 1,845 | 59.5 |
Published: 25 Jun 2007
By: Imran Nathani.
To most of us, barcodes are a common sight. This is because today barcodes enable delivery and retail systems
to perform faster. When programming with older languages, implementing barcodes is difficult. It would need
either a set of computer graphics programmers with knowledge of different barcode standards, or an expensive
third party component. But things are different today. With the help of the .NET framework, we will perform the
same task in an ASP.NET page with just a few lines of code.
Barcodes are a pictorial representation of a particular set of information. This information is conveyed
through dark colored (normally black) lines of different widths, on any light (normally white) background. For
uniform implementation, we have few standards of barcodes, like EAN/UPC, Code 39, Code 128 and so on. Without
really understanding the theory behind these standards we just have to decide, as per our requirement, to which
standard we want to adhere. With the code presented in this article, any standard can be implemented.
If we search the Internet we can easily come across free font files which implement most barcode standards.
Using these font files, we can draw the information string on an image and set the font of that string to the
barcode font. This actually reduces the whole task of information bar-coding to simply information fonting. In
our example, we will use the file Barcodefont.ttf.
We are going to split the whole procedure into two parts. The first one involves generating the image. The
second one discusses image delivery. The image generation consists of drawing the barcode on a canvas and storing
it in a file. Regarding image delivery, we consider two options: downloading the image or displaying it directly
in a control.
Every font file (.ttf) has a typeface name. Since we need to use it in our code, let’s open the
font file and check the typeface name as shown in figure 1.
Also, we
need to decide a way to provide a unique name to each file that is generated by different users, in order to
avoid resource conflict. In the code, we use the following filename format:
[text to be bar-coded]_[long representation of DateTime.Now].jpg
Therefore, a conflict
would arise only if two users try to barcode the same string at the same instant (up to milliseconds).This
conflict would be resolved in the next millisecond.
To generate the image, we first need to import some namespaces related to drawing and file management. The code
for generating the image is reported in listing 1.
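Listing 1 is not reproduced here, but the generation step boils down to drawing the text with the barcode typeface onto a bitmap and saving it to disk. A C# sketch of such code follows; the typeface name "BarcodeFont", the image size and the folder are assumptions, so use the typeface name you found in figure 1:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

string text = "123456789";   // the information to be bar-coded
string fileName = Server.MapPath("~/" + text + "_" + DateTime.Now.Ticks + ".jpg");

using (Bitmap bmp = new Bitmap(400, 120))
using (Graphics g = Graphics.FromImage(bmp))
using (Font barcodeFont = new Font("BarcodeFont", 48))   // typeface name from the .ttf
{
    g.Clear(Color.White);
    g.DrawString(text, barcodeFont, Brushes.Black, 20, 20);
    bmp.Save(fileName, ImageFormat.Jpeg);
}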
Image delivery is about how we want our users to get the generated image. There can be multiple complex ways
for image delivery; involving hyperlinks, databases, zipping and so on. We will consider two options:
1) Displaying the image in a control
Here we would display the image directly in an image box on a different page. The path to access the image
would be that of an .aspx page with the text to be bar-coded appended to the query string, like
so:
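The original article shows the URL here; the idea is simply something like the following (the page and parameter names are made up for illustration):

<asp:Image ID="imgBarcode" ImageUrl="~/ShowBarcode.aspx?code=123456789" runat="server" />

where ShowBarcode.aspx reads Request.QueryString["code"], generates (or loads) the corresponding image and writes it back with Response.ContentType = "image/jpeg".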
2) Downloading the image.
In this case, we would directly send the file for download by the user. The user actually saves the image file
to their hard disk, as shown in the following code:
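The download listing is also missing from this copy of the article; the essential pattern is to push the saved file through the Response with a content-disposition header, roughly:

Response.Clear();
Response.ContentType = "image/jpeg";
Response.AddHeader("Content-Disposition", "attachment; filename=" + Path.GetFileName(fileName));
Response.WriteFile(fileName);
Response.Flush();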
Whichever image delivery option we choose, an image file is generated on the server. If this file is
required in the future, then the following clean up code is not required. However, if these files are not needed,
we can write the clean up code just before disposing the response object, like so:
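The original snippet is not shown here; the essential step is simply deleting the temporary file after it has been written to the response, for example:

if (File.Exists(fileName))
{
    File.Delete(fileName);   // remove the temporary barcode image
}
Response.End();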
A wise man once said "Knowledge is my weapon”. This is evident in this case, because of the price of third
party bar-coding components on the Internet. For more information on barcodes and barcode standards, check out
this
Wikipedia entry..
What is the fastest way to find unused enum members?
Commenting values out one by one won't work because I have almost 700 members and want to trim off a few unused ones.
I am not aware of any compiler warning, but you could possibly try with
the splint static analyzer tool. According to its documentation (emphasis mine):
Splint detects constants, functions, parameters, variables, types, enumerator members, and structure or union fields that are declared but never used.
As I checked, it works as intended. Here is example code:
#include <stdio.h>

enum Month { JAN, FEB, MAR };

int main()
{
    enum Month m1 = JAN;
    printf("%d\n", m1);
}
By running the
splint command, you will obtain the following messages:
main.c:3:19: Enum member FEB not used
  A member of an enum type is never used. (Use -enummemuse to inhibit warning)
main.c:3:24: Enum member MAR not used
If you have shell access, you can use crontab to schedule a recurring job.
Otherwise you can use a service like SetCronJob or EasyCron or similar to
invoke a script regularly.
Some hosters also provide similar functionalities in their administration
interface...
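For the crontab route mentioned above, an entry like the following (added with crontab -e; the interpreter, script path and interval are just examples) runs the job every 15 minutes:

*/15 * * * * /usr/bin/php /home/user/myscript.php >> /home/user/cron.log 2>&1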
I wrote the following init script for starting gradle applications at
system startup for redhat distros (centos/fedora etc).
You need to perform a few steps to tie it all together:
deploy your gradle application using gradle distZip onto your target server
create a configuration file /etc/my-service.conf
link the init script (see below) to the service name in
/etc/init.d/my-service
An example configuration file /etc/my-service.conf
username=someunixuser
serviceName=MyStandaloneServer
prog="/path/to/bin/MyStandaloneServer -a any -p params -y you -w want"
javaClass="some.java.MyStandaloneServer"
Note the path to the application from the distZip in the prog line.
You then link the init script to the actual service you want it to be run
as, e.g.
ln -s /path/to/gradle-init-start-stop /etc/init.d/my-service
What errors are you seeing? I would expect that you may have a PATH problem
here. Where is mysqldump? If it's in /usr/local/bin, then you probably want
to make that explicit, or set the default path in /etc/launchd.conf.
Have your CustomerJDBCTemplate implement InitializingBean.
afterPropertiesSet will get called once, right after all properties have
been set by Spring's BeanFactory.
For example:
public class CustomerJDBCTemplate implements CustomerDAO, InitializingBean
{
...
// ......other methods
public void afterPropertiesSet() throws Exception {
//do your initializing, or call your initializing methods
}
}
The file /etc/rc.local is a good candidate for local jobs, and it avoids
some of the complexity of using /etc/init.d/ and similar directories.
Just add a line to /etc/rc.local to launch your job.
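For example, a line like this (the path is hypothetical) placed before the final exit 0 starts the job in the background at boot:

/usr/local/bin/myjob.sh &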
From execve(2) on a fairly current Linux system:
[...]
I have not seen many scripts in the wild using the #!/bin/sh fi
Hopefully a login vbscript will work for you. Either append this to an
existing login script or save it as a ".vbs" file. Microsoft has some good
tutorials if you are unfamiliar with login scripts.
Set WshShell = CreateObject("Wscript.Shell") 'Create wshell object
WScript.Sleep(5000) 'Lets wait 5 seconds
WshShell.AppActivate "EXACT TITLE OF THE WINDOW YOU WANT" 'EDIT THIS LINE!
'The line above selects the window so we make sure the keystrokes are sent
to it
WshShell.SendKeys "%{F11}" 'Send ALT+F11
Wscript.Quit 'Quit the script.
The problem you are getting is that .First scripts can only use functions
in the base package, unless the packages have been explicitly loaded. So in
your case, you need to load
utils for the data function.
graphics for the par function.
Putting this together gives:
.First <- function() {
library(utils); library(graphics)
library(maps)
library(maptools)
library(mapdata)
map('worldHires',
c('UK', 'Ireland', 'Isle of Man','Isle of Wight'),
xlim=c(-11,3), ylim=c(49,60.9))
}
@ECHO OFF
SETLOCAL
FOR /f "tokens=1*" %%i IN (yourtextfilename.txt) DO (
IF /i %%i==%COMPUTERNAME% ECHO MSIEXEC /x install.msi key=%%j
)
This should do as you require - yourtextfilename.txt contains the data,
presumably on a shared drive; finds the line where the computername in
column 1 is the same as the computername returned by %computername% in the
target computer's environment.
(all case-insensitive EXCEPT %%i and %%j which must match and be the same
case)
Command simply ECHOed - remove the ECHO keyword after verification to
activate.
Try
su minecraft -c '/bin/bash /path/to/script/script.sh &'
The user should be the first argument to su.
You should use quotes and not ticks for the command argument (-c)
You may want to consider using su -l minecraft to have the script run in an
environment which would be similar to that if the user minecraft logged in
directly.
Give this a shot and let me know if it works..
Try passing it onto forever straight without using the export before:
exec PORT=80 MONGO_URL=mongodb://localhost27017/parties
ROOT_URL= forever start
bundle/main.js
It might have something to do with the permissions, when using a startup
script the user might be something that has its own variable scope to the
ones set.
Try creating the controller with Java/Groovy that is compiled and let it
get injected the Groovy 'script' as a dependency to do the actual work. I
seem to remember doing this before and it might be the annotations or the
way Spring loads controllers that makes the 'script' not work for you
properly.
Not the easiest thing to do, but it's not rocket science.
What I did was get the averge seconds in the sub query, then div the
seconds by 3600, which gets you hours 1).
To get the minutes 2), I subtract from the average seconds all the seconds
accounted for in the hours.
Then 3), to get the leftover seconds, I added together all the seconds
accounted for in the hours, all the seconds accounted for in the minutes
and then subtracted those seconds from the average seconds, which gives you
the leftover seconds.
The casting is necessary when you concat the numbers to text
cast(AvgSec/3600 as varchar(2)) + ':'                               --1) gets hours
+ cast(AvgSec/60 - ((AvgSec/3600) * 60) as varchar(2)) + ':'        --2) gets minutes
+ cast(AvgSec - (((AvgSec/3600) * 60) * 60)
              - ((AvgSec/60 - ((AvgSec/3600) * 60)) * 60)
       as varchar(2))                                               --3) gets leftover seconds
var timer = null;
function auto_reload()
{
window.location = '';
}
and in tab body of html, you write :
<body onload="timer = setTimeout('auto_reload()',180000);">
Edit : THIS IS ONLY RUN IN SINGLE TAB.
marc_s is right with his comment. What you need is a function like in this
answer.
You would then call it like this:
SELECT dbo.udfTimeSpanFromSeconds(SUM(DATEDIFF(SECOND, t.Paydate, t.DelDate))) AS
'HH:MM:SS'
FROM
Transaction_tbl t
WHERE
t.transactID in (24, 25)
group by
vtid
Consider async api: on the server handler immediately respond with HTTP 200
and some id, and there is another endpoint to check if insert command with
id finished.
Be aware that your code prone to sql injection attack. Better approach:
var params = req.body; // you may want to filter names explicitly here
connection.query('CALL filldates(?,?,?,?,...)', params, function(err,
result, fields){
if (err){
console.log("error is" + err);
return res.send(500, "fail");
}
else{
console.log("finsihed");
return res.send(200);
}
});
This would escape all dangerous chars on client before sending query, or
use prepared statements with mysql2 to send the query independent of
parameters:
var params = req.body; // you may want to filter names explicitly here
TL; DR
Use a SQL subquery. They are fast (especially if you can pass execution off
to the DBMS) and easy to maintain.
PROC SQL NOPRINT;
CREATE TABLE Work.TransWithCount AS
SELECT STORE_NUM
, TERMINAL
, TRANS_DT
, (
SELECT COUNT(*)
FROM Work.Trans AS T
WHERE T.STORE_NUM = P.STORE_NUM
AND T.TERMINAL = P.TERMINAL
AND T.TRANS_DT >= P.START_DT
AND T.TRANS_DT <= P.END_DT
) AS TRANS_COUNT
FROM Work.Trans AS P
;
QUIT;
A non-SQL approach is also possible, but it's much more complicated. See
the update below.
Main
I can't promise that thi
#!
Here is what you are trying to do (sorry, I changed variables names, yours
were not obvious):
$time = array(1,5);
$time_plus = '10s';
$time_plus = preg_replace('#\D#', '', $time_plus);
$minutes = $time [0] + (($time_plus + $time[1] >= 60) ? 1 : 0);
$seconds = ($time[1] + $time_plus) % 60;
echo $new_time = $minutes . ":" . $seconds;
Demo:.
Since time(NULL) returns the time in seconds from the epoch (usually the
Unix epoch) i.e. 00:00:00 UTC on 1 January 1970 (or 1970-01-01T00:00:00Z
ISO 8601):
#include <time.h>
#include <stdio.h>
time_t current_time;
time_t tenMinutesAgo;
int main() {
char* c_time_string;
current_time = time(NULL);
tenMinutesAgo = current_time - 10*60;//the time 10 minutes ago is 10*60
c_time_string = ctime(&tenMinutesAgo);//convert the time
tenMinutesAgo into a string format in the local time format
printf("The time 10 minutes ago in seconds from the epoch is: %i
", (int)tenMinutesAgo);
printf("The time 10 minutes ago from the epoch in the local time format
is: %s
", c_time_string);
return 0;
}
EDIT:
@PaulGriffiths makes a good point in that my solution isn't guarantee
You can check this out , and use Intent Services to run in the back ground.
android timer, For intent servicse check it out Intent Services
You're always returning the current time as the start time.
Change TimeStarted so that you can set the value, and then set it when your
program starts e.g.
public static DateTime TimeStarted { get; set; }
public static void Main(string[] args)
{
//set start time
Program.TimeStarted = DateTime.UtcNow;
}
Instead of maintaining your own start time, you could use the one for the
current process:
DateTime processStartedAt =
System.Diagnostics.Process.GetCurrentProcess().StartTime;
to get the total number of seconds since the program started you can use
the TotalSeconds property of TimeSpan.
TimeSpan sinceStarted = (DateTime.UtcNow - Program.TimeStarted);
double secondsRunning = sinceStarted.TotalSeconds;
If you want to create a message to display, use the properties on the TimeSpan.
I suggest you two ways:
1- Using "set_time_limit" function that kills script after specified
seconds and throws an error.
2- Define a variable before while loop and save time in that:
$time=time();
At the end of the while loop add this code:
if($time+TimeInSeconds<time()) exit();
These two ways can stop the script after some seconds.
I hope it will help.
If your trying to play with timers and stuffs with php, I suggest you
should play with it along with an AJAX script.
If you ever tried using jquery ajax, everything would be much easier for
you...
sounds like it is a problem with that version of Xcode. I am using 4.6.3,
and have never experienced this, (although the apps I work on are never
that big). You should definitely try Xcode 5, even if you cant submit to
the appstore, at least you'll be able to work on your app. Then you MIGHT
be able to open it in version 4.6.3, and submit it from there, not quite
sure it will work, but its worth a shot. Make sure you keep a copy of what
you have so far in case your Xcode 5 version won't work in 4.6.3, so you don't
lose your work. Hope this helps.
Your code is redundant. Why format a timestamp as a string, then convert
that string back to a timestamp?
Try
$now = time();
$ten_minutes = $now + (10 * 60);
$startDate = date('m-d-Y H:i:s', $now);
$endDate = date('m-d-Y H:i:s', $ten_minutes);
instead.
Basically, you want a cumulative sum of a flag saying that the column
exceeds the value @x.
It turns out that you can do this with some tricks using row_number().
Enumerate all the rows using row_number() (in time order). Then, enumerate
all the rows, partitioning by the flag. The difference between these will
be a constant that identifies a group of consecutive rows. Then, with
aggregation, we can get the longest length of consecutive values where the
flag is true (or false):
select seqnum - seqnum_flag, count(*)
from (select d.*,
row_number() over (order by dt) as seqnum,
row_number() over (partition by (case when val > @x then 1
else 0 end)
order by dt) as seqnum_flag
from tblData d
) d
where val > @x
group by seqnum - seqnum_flag
order by count(*) desc;
If you really have to keep the connection alive, run NOOP commands against
the server, so that the control connection remains alive.
Either do it manually:
client.sendNoOp();
Or set the client to do it at fixed interval rates:
client.setControlKeepAliveTimeout(300); // set timeout to 5 minutes
But if you can avoid wasting resources, logout and disconect after the
initial download phase, do your local processing and connect / login again
for uploading afterwards.
Do you have indexes on (e.g.) TerritoryId?
Also check:
and run SQL Server Profiler (under Tools-menu)
And instead of the WHERE clause you can use the INNER JOIN clause
It is not safe to rely on strtotime() as it is defined by the system's
timezone settings.
You should adjust date.timezone setting or use the
date_default_timezone_set()
for example:
date_default_timezone_set('America/New_York');
...
if($timenow - $chktime >= 1800) {
echo "Yay! It has been 30 minutes!";
} else {
echo "Wait! Its not been 30 minutes";
}
EDIT: add the remaining time
$time_diff = $timenow - $chktime;
if( $time_diff >= 1800) {
echo "Yay! It has been 30 minutes!";
} else {
$remaining = (1800 - $time_diff );
echo "Wait! Its not been 30 minutes
";
echo "please come back in ".date ( "i:s" , $remaining)." minutes";
}
strtotime() is a PHP function which you can read about here:
According to the manual, you could simply test with the string "now", or if
you prefer, "+2 minutes". I could find no evidence that this function
takes negative integers. I'm afraid I don't have time to test these
suggestions, but it should be easy to try. | http://www.w3hello.com/questions/Run-a-Script-5-Minutes-after-startup | CC-MAIN-2018-17 | refinedweb | 2,130 | 65.83 |
Hi Bucky.
I have a question on your video (Intermediat Java Tutorial - 19 Generic Return Types)
(I have copied the code below)
Basically I don't understand how the string "tots" is deemed the 'max' over "apples" and "chicken" using 'compareTo'.
What values is it using to compare one string against another ?
Thanks
John
import java.util.*;
public class bucky {
public static void main(String[] args) {
System.out.println(max(23,42,1));
System.out.println(max("apples","tots","chicken"));
} // end main method
public static <T extends Comparable<T>> T max(T a, T b, T c){
T m = a;
if(b.compareTo(a) > 0)
m = b;
if(c.compareTo(m) > 0)
m = c;
return m;
} // end generic method
} // end class bucky
The cmd module makes it easy to make command line interfaces in your programs.
cmd is different than OptParse, in that OptParse is a tool for making command line tools.
cmd, on the other hand, makes it so you can embed a command line within your program.
In these days of graphical user interfaces, a command line interpreter seems antique. I agree that GUIs are often more friendly (and in fact I'm happy to have something other than "ed" to create this document). But a command line interface can have several advantages:
portability almost any computer is able to drive a text terminal, so a command line interface can really run everywhere.
resources the CPU and memory cost of a command line interface is far lighter than a GUI library.
speed for advanced users, it's often faster to type a command than to dive into menus and windows.
development It is far faster to create a text oriented interface.
driving you can easily drive a text oriented program with the popen command. That means that the whole application can be tested automatically.
And even if you plan to create GUI software, it's often good to start with a text interface. This will allow you to focus on the applicative logic independently of the interface. This is often a good way to create modular software.
cmd module basics
The module defines only one class: the Cmd class. Creating a command line interpreter is done by sub-classing the cmd.Cmd class.
Creating a command
The main goal of an interpreter is to respond to commands. A command is the first part of a line of text entered at the interpreter prompt. This part is defined as the longest string of characters contained in the identchars member. By default identchars contains non accented letters, digits and the underscore symbol. The end of the line is the command's parameters.
Command handling is really easy: if you want to define the command spam, you only have to define the do_spam method in your derived class.
parameters
The do_xxx method should only take one extra parameter. This parameter corresponds to the part of the string entered by the user after the command name. The job of do_xxx is to parse this string and to find the command parameter's values. Python provides many helpful tools to parse this string, but this is quite out of the scope of this how-to.
errors
The interpreter uses the following format to signal errors:
*** <error description>: <additional parameters>
It's generally a good idea to use the same format for application errors.
return value
In the most common case: commands shouldn't return a value. The exception is when you want to exit the interpreter loop: any command that returns a true value stops the interpreter.
sample
The following function defines a command which takes two numerical arguments and prints the result of the addition:
def do_add(self,s):
    l = s.split()
    if len(l)!=2:
        print "*** invalid number of arguments"
        return
    try:
        l = [int(i) for i in l]
    except ValueError:
        print "*** arguments should be numbers"
        return
    print l[0]+l[1]
Now if you run the interpreter, you will have:
(Cmd) add 4
*** invalid number of arguments
(Cmd) add 5 4
9
Help support is another strength of the cmd module. You can provide documentation for the xxx command by defining the help_xxx method. For the add command, you could for example define:
def help_add(self):
    print 'add two integral numbers'
And then, in the interactive interpreter you will have:
(Cmd) help add
add two integral numbers
You can also define help for topics that are not related to commands:
def help_introduction(self):
    print 'introduction'
    print 'a good place for a tutorial'
The interpreter understands the ? character as a shortcut for the help command.
Completion
Completion is a very interesting feature: when the user presses the TAB key, the interpreter will try to complete the command or propose several alternatives. Completion will be available only if the computer supports the readline module. You can disable completion by passing the None value to the completekey attribute of the Cmd class constructor.
The interpreter is able to process completion for commands names, but for commands arguments you will have to help it. For the command xxx, this is done by defining a complete_xxx method. For example, if you have defined a color command, the completion method for this command could be:
_AVAILABLE_COLORS = ('blue', 'green', 'yellow', 'red', 'black')

def complete_color(self, text, line, begidx, endidx):
    return [i for i in _AVAILABLE_COLORS if i.startswith(text)]
The complete_xxx method takes four arguments:
text is the string we are matching against, all returned matches must begin with it
line is the current input line
begidx is the beginning index in the line of the text being matched
endidx is the end index in the line of the text being matched
It should return a list (possibly empty) of strings representing the possible completions. The arguments begidx and endidx are useful when completion depends on the position of the argument.
Starting the interpreter
Once you have defined your own interpreter class, the only thing left to do is to create an instance and to call the mainloop method:
interpreter = MyCmdInterpreter()
interpreter.cmdloop()
In python 2.1 and 2.2 (and possibly some older, as well as future releases?) mainloop() has been renamed to cmdloop()
Interface customization
The cmd module provides several hooks to change the behavior of the interpreter. You should note that your users won't necessarily thank you should you deviate from the standard behavior.
Empty lines
By default when an empty line is entered, the last command is repeated. You can change this behavior by overriding the emptyline method. For example to disable the repetition of the last command:
def emptyline(self):
    pass
Help summary
When the help command is called without arguments, it prints a summary of all the documentation topics:
(Cmd) help

Documented commands (type help <topic>):
========================================
EOF  add  exit  macro  shell  test

Miscellaneous help topics:
==========================
intro

Undocumented commands:
======================
line  help

(Cmd)
This summary is separated into three parts:
documented commands are commands which have help_xxx methods
miscellaneous help topics contain the help_xxx methods without do_xxx methods
undocumented commands contain the do_xxx methods without help_xxx methods
You can customize this screen with several data members:
self.ruler defines the character used to underline section titles
self.doc_header defines the title of the documented commands section
self.misc_header defines the title of the miscellaneous help topics section
self.undoc_header defines the title of the undocumented commands section
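For example (the strings below are arbitrary):

class my_cmd(cmd.Cmd):
    ruler = '-'
    doc_header = 'Commands with help available:'
    misc_header = 'Other help topics:'
    undoc_header = 'Commands still lacking help:'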
Introduction message
At startup, the interpreter prints the self.intro string. This string can be overridden via an optional argument to the cmdloop() method.
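For example:

interpreter = MyCmdInterpreter()
interpreter.cmdloop('Welcome.  Type help or ? to list the available commands.')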
Advanced material
Defaults handling
The default method can be overridden for handling commands for which there is no do_xxx method
The completedefault method may be overridden to intercept completion for commands that have no complete_xxx methods.
These methods have the same parameters as the do_xxx and complete_xxx methods.
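For instance, to catch unknown commands instead of printing the standard error message:

class forgiving_cmd(cmd.Cmd):
    def default(self, line):
        print '*** unknown syntax:', line
        print "(type 'help' for a list of valid commands)"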
Nested interpreters
If your program becomes complex, or if your data structure is hierarchical, it can be interesting to define nested interpreters (calling an interpreter inside an other interpreter). In that case, I like having a prompt like:
(Cmd) test (Cmd:Test) exit (Cmd)
You can do this by changing the prompt attribute of the nested interpreter:
def do_test(self, s):
    i = TestCmd()
    i.prompt = self.prompt[:-1]+':Test)'
    i.cmdloop()
Note that it can be a better practice to do this in the constructor of the nested interpreter.
Modal interaction
Sometimes, it can be useful to have a more directed, interactive session with the users. The Cmd class allows you to use the print and raw_input functions without any problems:
def do_hello(self, s):
    if s=='':
        s = raw_input('Your name please: ')
    print 'Hello',s
FIXME: How to change completion behavior of raw_input?
The interpreter loop
At the start of the interpreter loop the preloop method is called, and at the end of the loop the postloop method is called. These methods take no arguments, and return no values. The following shows how to make the interpreter more polite:
class polite_cmd(cmd.Cmd,object):
    def preloop(self):
        print 'Hello'
        super(polite_cmd,self).preloop()
    def postloop(self):
        print 'Goodbye'
        super(polite_cmd,self).postloop()
Command processing
When a command line is processed, several methods are called:
precmd method is called with the string corresponding to the line entered at the interpreter prompt as its argument, and returns a string which will be used as the parameter to the onecmd method.
onecmd takes the return value of precmd and returns a boolean value (True will stop the interpreter). This is this method which does the real work: extracting the command, finding the corresponding do_xxx method and calling it.
postcmd this method takes two parameters: the return value of the onecmd method and the string returned by precmd, and should return a true value to exit the interpreter loop.
The precmd and postcmd methods do nothing by default and are only intended as hooks for derived classes. In fact, with the Python 2.2 super method, they are useless because anything can be done by overriding the onecmd method, so you should probably avoid to use these two hooks:
class dollar_cmd(cmd.Cmd, object):
    def onecmd(self, line):
        ''' define $ as a shortcut for the dollar command and ask for
        confirmation when the interpreter exit'''
        if line[:1] == '$':
            line = 'dollar '+line[1:]
        r = super (dollar_cmd, self).onecmd(line)
        if r:
            r = raw_input('really exit ?(y/n):')=='y'
        return r
However, if you want to simulate an interpreter entry, you should call these three methods in the proper order. For example if you want to print the help message at startup:
interpreter = MyCmdInterpreter()
l = interpreter.precmd('help')
r = interpreter.onecmd(l)
r = interpreter.postcmd(r, l)
if not r:
    interpreter.mainloop()
This will prevent problems if you later want a class to inherit one which has modified the hooks.
Creating components
One other strengths of the cmd module is that it handles multiple inheritance. That means that you can create helper classes intended to provide additional features.
Shell access
import os

class shell_cmd(cmd.Cmd,object):
    def do_shell(self, s):
        os.system(s)
    def help_shell(self):
        print "execute shell commands"
By deriving from this class, you will be able to execute any shell command:
(Cmd) shell date
Thu Sep 9 08:57:14 CEST 2002
(Cmd) ! ls /usr/local/lib/python2.2/config
Makefile  Setup.config  config.c     install-sh      makesetup
Setup     Setup.local   config.c.in  libpython2.2.a  python.o
By the way, the cmd module understands the ! character as a shortcut for the shell command.
Exit
class exit_cmd(cmd.Cmd,object):
    def can_exit(self):
        return True
    def onecmd(self, line):
        r = super (exit_cmd, self).onecmd(line)
        if r and (self.can_exit() or
                  raw_input('exit anyway ? (yes/no):')=='yes'):
            return True
        return False
    def do_exit(self, s):
        return True
    def help_exit(self):
        print "Exit the interpreter."
        print "You can also use the Ctrl-D shortcut."
    do_EOF = do_exit
    help_EOF= help_exit
This class provides the exit command to abort the interpreter. You can protect exit by overriding the can_exit method.
Gluing all together
Now with a class that inherits both from exit_cmd and shell_cmd you will be able to define an interpreter that understands the shell and exit commands.
References
Example Code
listcmd.py - from Komodo Remote Debugger
Discussion
It would be cool if you could call these mini-command lines from "the" command line.
If there were some sort of default OptParse-like behavior, that would totally rock.
That way, I could pass in instructions, scripts, and get back the results on stdout.
Something like "-c" for Python, where you can pass in a line, and get back the result, without seeing the intro text.
-- LionKimbro 2006-03-06 19:06:56
The onecmd() function may be what you're after. Write your interpreter as you normally would. Then, in main, parse the command line for a '-c' option. If you find it, call onecmd() with the string following the '-c' as the parameter, else call cmdloop().
-- Mark Workman 2018-03-17 16:46:06 | https://wiki.python.org/moin/CmdModule | CC-MAIN-2018-13 | refinedweb | 2,036 | 54.12 |
You would like to provide users of your class with a copy method, or you would like to copy an object for which no copy method has been provided by the class.
Use the dclone( ) function from the standard Storable module.
use Storable qw(dclone);
use Carp;

sub copy {
    my $self = shift;
    croak "can't copy class $self" unless ref $self;
    my $copy = Storable::dclone($self);
    return $copy;
}
As described in Recipe 11.12, the Storable module's dclone function will recursively copy (virtually) any data structure. It works on objects, too, correctly giving you back new objects that are appropriately blessed. This assumes that the underlying types are SCALAR, ARRAY, HASH, or CODE refs. Things like GLOB and IO refs won't serialize.
Some classes already provide methods to copy their objects; others do not, not so much out of intent as out of neglect. Consider this:
sub UNIVERSAL::copy { my $self = shift; unless (ref $self) { require Carp; Carp::croak("can't copy class $self"); } require Storable; my $copy = Storable::dclone($self); return $copy; }
Now all objects can be copied, providing they're of the supported types. Classes that provide their own copy methods are unaffected, but any class that doesn't provide its own copy method will pick up this definition. We placed the require on Storable within the function call itself so that you load Storable only if you actually plan to use it. Likewise, we placed the one for Carp inside the test that will end up using it. By using require, we delay loading until the module is actually needed.
We also avoid use because it would import things into our current package. This could be antisocial. From the previous code snippet, you cannot determine what package you're even in. Just because we've declared a subroutine named copy to be in package UNIVERSAL doesn't mean that the code within that subroutine is in package UNIVERSAL. Rather, it's in whatever package we are currently compiling into.
Some folks would argue that we're being outrageously cavalier by interjecting a function into somebody else's namespace like thatespecially into all possible class namespaces, as it's in UNIVERSAL. Cavalier perhaps, but hardly outrageously so; after all, UNIVERSAL is there to be used. It's no holy namespace, sacrosanct against any change. Whether this ends up being a very stupid thing or a very clever thing is not up to Perl to decide, or prevent.
Recipe 11.12; Recipe 13.9; the documentation for the standard Storable modules; the section on Inheritance in the introduction to this chapter; the section on "UNIVERSAL: The Ultimate Ancestor Class" in Chapter 12 of Programming Perl | http://etutorials.org/Programming/Perl+tutorial/Chapter+13.+Classes+Objects+and+Ties/Recipe+13.7+Copy+Constructors/ | CC-MAIN-2016-44 | refinedweb | 448 | 62.07 |
Let's now take a look at simple manipulation of Active Directory objects using ADSI. We are using Active Directory as the primary target for these scripts, but the underlying concepts are the same for any supported ADSI namespace and automation language. All the scripts use GetObject to instantiate objects, assuming you are logged in already with an account that has administrator privileges; if you aren't, you need to use IADsOpenDSObject::OpenDSObject as shown earlier in the chapter.
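For readers who don't have that earlier chapter to hand, the alternative looks roughly like the following sketch (the server path, user name and password are placeholders; the final argument, 1, is ADS_SECURE_AUTHENTICATION):

Const ADS_SECURE_AUTHENTICATION = 1
Set objDSO = GetObject("LDAP:")
Set objContainer = objDSO.OpenDSObject("LDAP://dc=mycorp,dc=com", _
    "mycorp\administrator", "ThePassword", ADS_SECURE_AUTHENTICATION)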
The easiest way to show how to manipulate objects with ADSI is through a series of real-world examples, the sort of simple tasks that form the building blocks of everyday scripting. To that end, imagine that you want to perform the following tasks on the mycorp.com Active Directory forest:
Create an Organizational Unit called Sales.
Create two users in the Sales OU.
Iterate through the Sales OU and delete each user.
Delete the Organizational Unit.
This list of tasks is a great introduction to how ADSI works because we will reference some of the major interfaces using these examples.
The creation process for the Sales Organizational Unit is the same as for any object. First you need to get a pointer to the container in which you want to create the object. You do that using the following code:
Set objContainer = GetObject("LDAP://dc=mycorp,dc=com")
Since we are creating a container of other objects, rather than a leaf object, you can use the IADsContainer interface methods and properties. The IADsContainer::Create method is used to create a container object, as shown in the following code:
Set objSalesOU = objContainer.Create("organizationalUnit","ou=Sales")
Here we pass two arguments to IADsContainer::Create: the objectclass of the class of object you wish to create and the Relative Distinguished Name (RDN) of the object itself. We use the ou= prefix because the type of object is an Organizational Unit. Most other objects use the cn= prefix for the RDN.
The IADsContainer interface enables you to create, delete, and manage other Active Directory objects directly from a container. Think of it as the interface that allows you to manage the directory hierarchy. A second interface called IADs goes hand in hand with IADsContainer, but while IADsContainer works only on containers, IADs will work on any object.
To commit the object creation to Active Directory, we now have to call IADs::SetInfo:
objSalesOU.SetInfo
ADSI implements a caching mechanism in which object creation and modification are first written to an area of memory called the property cache on the client executing the script. Each object has its own property cache, and each cache has to be explicitly written out to Active Directory using IADs::SetInfo for any creations or modifications to be physically written to Active Directory. This may sound counterintuitive but in fact makes sense for a number of reasons, mostly involved with reducing network traffic. The property cache is discussed in more detail in Chapter 19.
Each object has a number of properties, some mandatory and some optional. Mandatory properties have to be defined during the creation of an object. They serve to uniquely identify the object from its other class members and are necessary to make the object usable in Active Directory. If you need to create an object with a large number of mandatory properties, it makes sense to write them all into a cache first and then commit them to Active Directory in one operation, rather than perform a sequence of SetInfo operations.
While the Organizational Unit example has no other mandatory properties, other objects do. User objects, for example, require sAMAccountName to be set before they can be written out successfully. In addition, you can also choose to set any of the optional properties before you use IADs::SetInfo.
Putting it all together, we have our first simple script that creates an OU:
Set objContainer = GetObject("LDAP://dc=mycorp,dc=com") Set objSalesOU = objContainer.Create("organizationalUnit", "ou=Sales") objSalesOU.SetInfo
We now will move to the second task of creating a couple user objects. Creating user objects is not much different from creating an OU in the previous task. We use the same IADsContainer::Create method again as in the following:
Set objUser1 = objSalesOU.Create("user", "cn=Sue Peace") objUser1.Put "sAMAccountName", "SueP" objUser1.SetInfo Set objUser2 = objSalesOU.Create("user", "cn=Keith Cooper") objUser2.Put "sAMAccountName", "KeithC" objUser2.SetInfo
The IADs::Put method is used here to set the SAM Account Name, a mandatory attribute that has no default value. The SAM Account Name is the name of the user as it would have appeared in previous versions of NT and is used to communicate with down-level NT domains and clients. It is still required because Active Directory supports accessing resources in down-level Windows NT domains, which use the SAM Account Name.
It is also worth pointing out that the IADs::SetInfo calls can be put at the end of the script if you want to. As long as they go in the right order (i.e., the OU must exist before the user objects within that OU exist), the following works:
Set objContainer = GetObject("LDAP://dc=mycorp,dc=com") Set objSalesOU = objContainer.Create("organizationalUnit", "ou=Sales") Set objUser1 = objSalesOU.Create("user", "cn=Sue Peace") objUser1.Put "sAMAccountName", "SueP" Set objUser2 = objSalesOU.Create("user", "cn=Keith Cooper") objUser2.Put "sAMAccountName", "KeithC" objSalesOU.SetInfo objUser1.SetInfo objUser2.SetInfo
This works because the property cache is the only thing being updated until the SetInfo call is issued. Since ADSI works against the property cache and not Active Directory directly, you could put off the SetInfo calls until the end of your scripts. There is no special benefit to doing scripts this way, and it can lead to confusion if you believe incorrectly that properties exist in the underlying service during later portions of the script. In addition, if you bunch up cache writes, and the server crashes, none of your writes will have gone through, which I suppose you could see as a good thing. However, we will not be using this method; we prefer to flush the cache as soon as feasible. Bunching caches to write at the end of a script encourages developers to neglect proper error checking and progress logging to a file from within scripts.
As you've seen, creating objects is a breeze with ADSI. Deleting objects is also very straightforward. Let's iterate through the Sales OU and deleting the two users we just created:
for each objUser in objSalesOU objUser.DeleteObject(0) Next
We used a For Each loop to enumerate over the objects in objSalesOU. The objUser variable will get set to a reference of each child object in the Sales OU. We then use IADsDeleteOps::DeleteObject method to delete the object. The value 0 must be passed in to DeleteObject, but it does not hold any special significance (it is reserved for later use).
The final step is to delete the Sales OU using the same method (IADsDeleteOps::DeleteObject) that we used to delete users:
objSalesOU.DeleteObject(0)
Set objSalesOU = Nothing
The IADsDeleteOps::DeleteObject method can delete all the objects within a container, so it wasn't really necessary for us to delete each user object individually. We could have instead used DeleteObject on the Sales OU to delete the OU and all child objects within the OU. This method should be used with care since a lot of objects can be wiped out by using DeleteObject on the wrong container. | http://etutorials.org/Server+Administration/Active+directory/Part+III+Scripting+Active+Directory+with+ADSI+ADO+and+WMI/Chapter+18.+Scripting+with+ADSI/18.4+Simple+Manipulation+of+ADSI+Objects/ | CC-MAIN-2019-09 | refinedweb | 1,253 | 54.42 |
Alsalam alikom wa ra7mat Allah wa barakatoh (i.e. May peace be upon you)
A couple of days ago, a colleague and I got together for a few hours to hack something for fun. The nature of the hack is beside the point; what's interesting for this post is that, for the first time, I got introduced to SignalR [Official Site].
I've heard about the library and the technology behind it (WebSockets) before, but this was the first time to experience it and I have to say, I love it!
The very basic scenario it addresses is this: traditionally, when you browse to a page, the only time the server can influence what's rendered on the client side is during that page request. Any additional content has to be served upon request. To design a chat web application, for example, your client code (JavaScript) would have to keep polling every few seconds to check whether new data is available. WebSockets came along to say this is ridiculous: let's have a persistent connection between client and server, so the server can send data whenever it sees appropriate. This makes things more efficient (no polling) and faster at delivering near real-time information.
Now, that of itself IS awesome – I thought – but wouldn't it be cooler if the server could ask the client for information too, not just send notifications/data? Sure it would!
For the purpose of keeping this post focused, I’ll assume minimum SignalR knowledge. If you think you don’t have that, I would strongly advise you to go through the walkthroughs here ().
Imagine we want to implement a GetFiles method. The purpose of this method is to get a list of files that exist on the client machine.
I start by creating a static method in the hub class (If you don’t know what Hub is, I strongly advise you to go through its walkthrough on the wiki link above).
public class FilesHub : Hub
{
    public static Task<IEnumerable<string>> GetFiles()
    {
    }
}
A couple of notes:
- The method is static, because it’ll be called from anywhere on the server side, not necessarily as part of a response to a SignalR request.
- The return type is Task because we will need some waiting mechanism to receive the response. If you don’t know what Task is, take a look at this ()
Now, the basic idea is: GetFiles will call BeginGetFiles on some client; as part of that call, the client cannot return data. The client will then compute the response (a list of file paths in this case) and call EndGetFiles on the server side. That call will complete the pending task for the async method. Sounds cryptic? Let's look at the code.
Server side
public class FilesHub : Hub
{
    // Pending GetFiles calls, keyed by operation id.
    private static Dictionary<string, TaskCompletionSource<IEnumerable<string>>> _getFilesTasks
        = new Dictionary<string, TaskCompletionSource<IEnumerable<string>>>();

    public static Task<IEnumerable<string>> GetFiles()
    {
        // Get a dynamic proxy to a connected client (a helper from this project).
        dynamic client = ClientLoadBalancer.Local.GetClient<FilesHub>();
        TaskCompletionSource<IEnumerable<string>> tcs = new TaskCompletionSource<IEnumerable<string>>();
        string taskId = Guid.NewGuid().ToString();
        _getFilesTasks[taskId] = tcs;

        try
        {
            // Ask the client to start computing its file list.
            client.BeginGetFiles(taskId);
        }
        catch (Exception ex)
        {
            tcs.TrySetException(ex);
            _getFilesTasks.Remove(taskId);
        }

        if (tcs.Task.Exception != null)
        {
            throw tcs.Task.Exception;
        }

        return tcs.Task;
    }

    public void EndGetFiles(string operationId, IEnumerable<string> result)
    {
        // Called by the client when it is done; completes the pending task.
        TaskCompletionSource<IEnumerable<string>> tocall;
        if (_getFilesTasks.TryGetValue(operationId, out tocall))
        {
            tocall.TrySetResult(result);
        }
    }
}
Client side
hub.On<string>("BeginGetFiles", (opId) =>
{
    Console.WriteLine("BeginGetFiles - opId: " + opId);
    ThreadPool.QueueUserWorkItem(new WaitCallback((state) =>
    {
        IEnumerable<string> result = null; // Calculate result here
        Console.WriteLine("Calling EndGetFiles - opId:" + opId);
        hub.Invoke<string>("EndGetFiles", opId, result).Wait();
        Console.WriteLine("Called EndGetFiles - opId:" + opId);
    }));
});
Now you can just call it like this
IEnumerable<string> files = await FilesHub.GetFiles();
Clean? Yeah, that's what I thought. In my opinion, SignalR should provide such functionality without the above hack. However, it sounds like this would be abusing the library: the main goal of the library is to send notifications (signals :)) from server to client in a push fashion. Even though I agree this would abuse the library, I think it's still a good trick and might come in handy at times.
Till next time, | https://blogs.msdn.microsoft.com/haythamalaa/2013/06/17/abusing-signalr-doing-good/ | CC-MAIN-2016-50 | refinedweb | 747 | 65.42 |
On at least a couple of occasions lately, I realized that I may need Python in the near future. While I have amassed some limited experience with the language over the years, I never spent the time to understand Pandas, its de-facto standard data-frame library.
Where does one start? For me it's usually with the data. Simple stuff, loading, wrangling, etc. Re-writing my little R6 helper class to load futures data looked like a perfect candidate.
There was some frustration, totally expected after years of experience with R. Some things were less intuitive; surprisingly, however, pretty much nothing was outright ugly.
And when it comes to code, I am not easy to please. The end result is available here.
Here is a little example how to use the code, although one can’t do much without the data, which I can’t distribute:
import pandas as pd
import instrumentdb as idb

def main():
    # Create the object for the database
    db = idb.CsiDb()

    # Load the data for three elements
    all = db.mload_bars(["HO2", "RB2", "CL2"])

    print(all['HO2'].head())
    print(all['RB2'].head())

    # Build an array of the closing prices for each series
    closes = []
    for ss in all.keys():
        closes.append(all[ss]['close'])

    # Create a single data frame using these series
    all_df = pd.concat(closes, join='inner', axis=1)
    all_df.columns = [xx.lower() for xx in all.keys()]
    print(all_df.tail())

    # That's the only line that would work without the data.
    print(db.future_list())

if __name__ == "__main__":
    main()
The structure of the database is available from Tradelib's source code (I am using the SQLite version for this test). To bootstrap (create) the database I use sqlite3.exe's read command, to which I pass data.sqlite.sql as a parameter. To be used via the CsiDb class, the database is configured using a TOML configuration file.
flavor = "SQLite"
db = "sqlite:///C:/Users/qmoron/Documents/csidata.sqlite"
bars_table = "csi_bars"
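As an aside (my addition, not from the original post), the bootstrap step mentioned above can also be done from Python's standard sqlite3 module instead of the sqlite3.exe shell; the file names below are the ones mentioned in the post and are otherwise assumed:

import sqlite3

# Roughly equivalent to: sqlite3.exe csidata.sqlite ".read data.sqlite.sql"
with open("data.sqlite.sql") as f, sqlite3.connect("csidata.sqlite") as conn:
    conn.executescript(f.read())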
Now a little rant: In the above code, I tried to create a module, instrumentdb, to keep the source code in it. This created some problems while developing the module. Apparently, once loaded, it's pretty hard to re-load a module properly within the same REPL interpreter. Coming from R, where I am used to re-loading files, or even packages, as my development goes, this seemed quite an obstacle. After struggling with the issue for a while, the best I was able to come up with is the above approach of using a full-blown "main" file to drive the execution and some tests. This is unlikely to scale (in the sense of using it in rapid REPL prototyping) – I am open to suggestions.
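One suggestion worth noting (mine, not the author's): Python's standard importlib can re-import an already-loaded module inside the same interpreter session, which addresses at least part of the reload problem. A minimal sketch, assuming the module lives in instrumentdb.py on the path:

import importlib
import instrumentdb as idb

# ... edit instrumentdb.py, then, in the same REPL session ...
idb = importlib.reload(idb)

# Objects created before the reload still use the old code, so recreate them.
db = idb.CsiDb()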
Today ‘System?
Keith Packard: X has been targeted at systems with high performance graphics processors for
a long time. SGI was one of the first members of the MIT X consortium and
shipped X11 on machines of that era (1988). Those machines looked a lot
like today's PCs — fast processors, faster graphics chips and a relatively
slow interconnect. The streaming nature of the X protocol provides for easy
optimizations that decouple graphics engine execution from protocol decoding.
And, as a window system X has done remarkably well; the open source nature
of the project permitted some friendly competition during early X11
development that improved the performance of basic windowing operations
(moving, resizing, creating, etc) so that they were more limited by the
graphics processor and less by the CPU. As performance has shifted towards
faster graphics processors, this has allowed the overall system performance
to scale along with those.
Where X has not done nearly as well is in following the lead of application
developers. When just getting pixels on the screen was a major endeavor, X
offered a reasonable match for application expectations. But, with machine
performance now permitting serious eye-candy, the window system has not
expanded to link application requirements with graphics card capabilities.
This has left X looking dated and shabby as applications either restrict
themselves to the capabilities of the core protocol or work around
these limitations by performing more and more rendering with the CPU in the
application’s address space.
Extending the core protocol with new rendering systems (like OpenGL and
Render) allows applications to connect to the vast performance offered by
the graphics card. The trick now will be to make them both pervasive
(especially OpenGL) and hardware accelerated (or at least optimize the
software implementation).
Rayiner Hashem: Jim Gettys mentioned in one of his presentations that a major change from W to X was a switch from structured to immediate mode graphics.
However, the recent push towards vector graphics seems to indicate a return of structured graphics systems. DisplayPDF and XAML, in particular, seem particularly well-suited to a structured API. Do you see the X protocol evolving (either directly or through extensions) to better support structured graphics?
Keith Packard: So far, immediate mode graphics seem to provide the performance and
capabilities necessary for modern graphics. We’ve already been through a
structured-vs-immediate graphics war in X when PHIGS lost out to OpenGL.
That taught us all some important lessons and we’ll have to see some
compelling evidence to counter those painful scars. Immediate graphics
are always going to be needed by applications which don’t fit the structured
model well, so the key is to make sure those are fast enough to avoid the
need to introduce a huge new pile of mechanism just for a few applications
which might run marginally faster.
Rayiner Hashem: What impact does the compositing abilities of the new X server have on memory usage? Are there any plans to implement a compression mechanism for idle window buffers to reduce the requirements?
Keith Packard: Oh, it’s pretty harsh. Every top level window has its complete contents
stored within the server while mapped, plus there are additional temporary
buffers needed to double-buffer screen updates.
If memory does become an issue, there are several possible directions to
explore:
+ Limit saved window contents to those within the screen boundary,
this will avoid huge memory usage for unusually large windows.
+ Discard idle window buffers, reallocating them when needed and
causing some display artifacts. Note that ‘idle’ doesn’t just mean
‘not being drawn to’, as overlying translucent effects require
saved window contents to repaint them, so the number of truely idle
windows in the system may be too small to justify any effort here.
+ Turning off redirection when memory is tight. One of the features
about building all of this mechanism on top of a window system which
does provide for direct-to-screen window display is that we can
automatically revert to that mode where necessary and keep running,
albeit with limited eye-candy.
One thing I have noticed is a sudden interest in video cards with *lots* of
memory. GL uses video memory mostly for simple things like textures for
which it is feasible to use AGP memory. However, Composite is busy drawing
to those off-screen areas, and it really won’t work well to try and move
those objects into AGP space. My current laptop used to have plenty of
video memory (4meg), but now I’m constantly thrashing things in and out of
that space trying to keep the display updated.
Preliminary Exposé-like functionality on the new X Server
Keith Packard: Having the clients hit the hardware directly or having the X server
do it for them doesn’t change the fundamental performance properties of the
system.
Where there is a difference is that X now uses an external compositing agent
to bring the various elements of the screen together for presentation, this
should provide for some very interesting possibilities in the future, but
does involve another context switch for each screen update. This will
introduce some additional latency, but the kernel folks keep making context
switches faster, so the hope that it’ll be fast enough. It’s really
important to keep in mind that this architecture is purely experimental in
many ways; it’s a very simple system that offers tremendous potential. If
we can make it work, we’ll be a long ways ahead of existing and planned
systems in other environments.
Because screen updates are periodic and not driven directly by graphics
operations, the overhead of compositing the screen is essentially fixed.
Performance of the system perceived by applications should be largely
unchanged by the introduction of the composting agent. Latency between
application action and the eventual presentation on the screen is the key,
and making sure that all of the graphics operations necessary for that are
as fast as possible seems like the best way to keep the system responsive.
Eugenia Loli-Queru: How does your implementation compare to that of Longhorn's new display system (based on available information so far)?
Keith Packard: As far as I can tell, Longhorn steals their architecture from OS X, DRI-like
rendering by applications (which Windows has had for quite some time) and
built-in window compositing rules to construct the final image.
Rayiner Hashem: What impact will the new server have on toolkits? Will they have to change to better take advantage of the performance characteristics of the new
design? In particular, should things like double-buffering be removed?
Keith Packard: There shouldn't be any changes required within toolkits, but the hope is
that enabling synchronous screen updates will encourage toolkit and window
manager developers to come up with some mechanism to cooperate so that the
current opaque resize mess can be eliminated.
Double buffering is a harder problem. While it’s true that window contents
are buffered off-screen, those contents can be called upon at any time to
reconstruct areas of the screen affected by window manipulation or
overlaying translucency. This means that applications can’t be assured that
their window contents won’t be displayed at any time. So, with the current
naïve implementation, double buffering is still needed to avoid transient
display of partially constructed window contents. Perhaps some mechanism
for synchronizing updates across overlaying windows can avoid some of this
extraneous data movement in the future.. Do you think that existing APIs like OpenGL could form a foundation for making fast Render and Cairo implementations available more quickly?
Keith Packard: Cairo is just a graphics API and relies on an underlying graphics engine to
perform the rendering operations. Back-ends for Render and GL have been
written along with the built-in software fall-back. Right now, the GL
back-end is many times faster than the Render one on existing X servers
because of the lack of Render acceleration.
Getting better Render acceleration into drivers has been slowed by the lack
of application demand for that functionality. With the introduction of
cairo as a complete 2D graphics library based on Render, the hope is that
application developers will start demanding better performance which should
drive X server developers to get things mapped directly to the hardware for
cases where GL isn’t available or appropriate.
Similarly, while a Composite-based environment could be implemented strictly
with core graphics, it becomes much more interesting when image composition
can be used as a part of the screen presentation. This is already driving
development of minimal Render acceleration within the X server project at
Freedesktop.org, I expect we’ll see the first servers with acceleration
matching what the sample compositing manager uses available from CVS in
the next couple of weeks.
A faster software implementation of Render would also be good to see. The
current code was written to complete the Render specification without a huge
focus on performance. Doing that is mostly a matter of sitting down and
figuring out which cases need acceleration and typing the appropriate code
into the X server. However, Render was really designed for hardware
acceleration; acceleration which should be able to outpace any software
implementation by a wide margin.
In addition, there has been a bit of talk on the [email protected]
mailing list about how to restructure the GL environment to make the X
server rely upon GL acceleration capabilities rather than having its own
acceleration code. For environments with efficient GL implementations,
X-specific acceleration code is redundant. That discussion is very nebulous
at this point, but it's certainly a promising direction for development.
Jim Gettys: This is not true. The first X implementation had a $20,000 external display plugged into a Unibus on a VAX with outboard processor and
bit-blit engine. Within 3 years, we went to completely dumb frame buffers.
Over X’s life time, the cycle of reincarnation has turned several times, round and round the wheel turns. The tradeoffs of hardware vs. software go back and forth.
As far as X's graphics goes, X mouldered for most of the decade of the '90s, and X11's graphics was arguably broken on day 1. The specification adopted forced both ugly and slow wide lines; we had run into the "lumpy line" problem that John Hobby had solved, but unfortunately, we were not aware of it in time and X was never fixed. AA and image compositing were just gleams in people's eyes when we designed X11. Arguably, X11's graphics has always been lame.
It is only Keith Packard’s work recently that has begun to bring it to where it needs to be.
Rob Pike and Russ Cox’s work on Plan 9 showed that adopting a Porter-Duff model of image compositing was now feasible. Having machines 100-1000x faster than what we had in 1986 helps a lot :-).
Overall, the current protocol has done well, as demonstrated by Gnome and KDE's development over 10 years after X11's design, though it is past time to replace the core graphics in X, which is what Render does.
Rayiner Hashem: You mentioned in one of your presentations that a major change from W to X was a switch from structured to immediate mode graphics. However, the recent push towards vector graphics seems to indicate a return of structured graphics systems. Display PDF and XAML, in particular, seem particularly well-suited to a structured API. Do you see the X protocol evolving (either directly or through extensions) to better support structured graphics?
Jim Gettys: That doesn’t mean that the window system should adopt structured graphics.
Generally, having the window system do structured graphics requires a duplication of data structures on the X server, using lots of memory and costing performance. The
organization of the display lists would almost always be incorrect for any serious application. No matter what you do, you need to let the application do what *it* wants, and it generally has a better idea how to represent its data that the window system can possibly have.
Rayiner Hashem: What impact does the compositing abilities of the new X server have on memory usage? Are there any plans to implement a compression mechanism for idle window buffers to reduce the requirements?
Jim Gettys: The jury is out: one idea we’ve toyed with is to encourage most applications to use 16bit deep windows as much as possible. This might often save memory over the current situation where windows are typically the depth of the screen (32 bits). The equation is complex, and not all for or against either the existing or new approach.
Anyone who wants to do a compression scheme of idle window buffers is very welcome to do so. Most windows compress *extremely* well. Some recent work on the migration of window contents to and from the display memory should make this much easier, if someone wants to implement this and see how well it works.?
Jim Gettys: No, we don’t see this as a bottleneck.
One of the *really* nice things about the approach that has been taken is that your eye candy's (drop shadows, etc.) cost is bounded by the update rate of the screen, which never needs to be higher than the frame rate (and is typically further reduced by only having to update the parts of the screen that have been modified). Other approaches often have the cost going up in proportion to the graphics updating, rather than the bounded behavior of this design, and take a constant fraction of your graphics performance.
Rayiner Hashem: Could this actually be a performance advantage, allowing the X server to take advantage of hardware acceleration in places Apple’s implementation can not?
Jim Gettys: Without knowing Apple’s implementation details it is impossible to tell.
Eugenia Loli-Queru: How does your implementation compare to that of Longhorn's new display system (based on available information so far)?
Jim Gettys: Too soon to tell. The X implementation is very new, and it is hard enough to keep up with what we’re doing, much less keep up with the
smoke and mirrors of Microsoft marketing ;-). Particularly sweet is that Keith says the new facilities saves code in the X server, rather than making it larger. That is always a good sign :-).
Rayiner Hashem: What impact will the new server have on toolkits?
Jim Gettys: None, unless they want to take advantage of similar compositing facilities internally.
Jim Gettys: Without understanding exactly what Raster thinks he’s measured, it is hard
to tell.
We need better driver support (more along the lines of DRI drivers) to allow the graphics hardware to draw into pixmaps in the X server to take advantage of their compositing hardware.
Some recent work allows for much easier migration of pixmaps to and from the frame buffer where the graphics accelerators can operate.
An early implementation Keith did showed a factor of 30 for hardware assist for image compositing, but it isn’t clear if the current software implementation is as optimal as it could be, so that number should be taken with a grain of salt. But fundamentally, the graphics engines have a lot more bandwidth and wires into VRAM than the CPU does into main memory.
Rayiner Hashem: Do you think that existing APIs like OpenGL could form a foundation for making fast Render and Cairo implementations available more quickly?
Jim Gettys: Understand that today’s X applications draw fundamentally differently than your parent’s X applications; we’ve found that a much simpler and narrower driver interface is sufficient for 2D graphics: 3D remains hard. The wide XFree86 driver interface is optimizing many graphics requests no longer used
by current GTK, Qt or Mozilla applications. For example, core text is now almost entirely unused: I now use only a single application that still uses the old core text primitives; everything else is AA text displayed by Render.
So to answer your question directly, yes we think that this approach will form a foundation for making fast Render and Cairo implementations.
The fully software based implementations we have now are fast enough for most applications, and will be with us for quite a while due to X’s use on embedded
platforms such as handhelds that lack hardware assist for compositing.
But we expect high performance implementations using graphics accelerators will be running over the next 6 months. The proof will be in the
pudding, now in the oven. Stay tuned :-).
David Zeuthen: First of all it might be good to give an overview of the direction HAL (“Hardware Abstraction Layer”) is going post the 0.1 release since a few key things have changed.
One major change is that HAL will not (initially at least, if ever) go into device configuration such as mounting a disk or loading a kernel driver.
Features like this really belong in separate subsystems. Having said that, HAL will certainly be useful when writing such things. For instance a volume manager, as proposed by Carlos Perelló Marín on the xdg-list, should (excluding the optical drive parts) be straightforward to write, insofar as such a program will just listen for D-BUS events from the HAL daemon when storage devices are added/removed, and mount/unmount them.
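As a rough illustration of that pattern (my sketch, not part of the interview), a minimal listener built on the Python D-BUS bindings could look like the following; the HAL interface and signal names (org.freedesktop.Hal.Manager, DeviceAdded/DeviceRemoved) and the use of a GLib main loop are assumptions for illustration only:

import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

def device_added(udi):
    # A real volume manager would inspect the device's HAL properties here
    # and mount it only if it turns out to be a storage volume.
    print("device added:", udi)

def device_removed(udi):
    # ... and unmount / clean up here.
    print("device removed:", udi)

DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()
bus.add_signal_receiver(device_added,
                        dbus_interface="org.freedesktop.Hal.Manager",
                        signal_name="DeviceAdded")
bus.add_signal_receiver(device_removed,
                        dbus_interface="org.freedesktop.Hal.Manager",
                        signal_name="DeviceRemoved")
GLib.MainLoop().run()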
Finally, the need for Free Device Information files (.fdi files) won’t be that big initially since most of the smart busses (USB, PCI) provide device class information that we can map to HAL device capabilities. However, some devices (like my Canon Digital IXUS v camera) just report the class / interface as proprietary so it is needed.
There are a lot of other reasons for supplying .fdi files though. First of all, some capabilities of a device that DE's are interested in are hard/impossible to guess. For example, people should be able to use a digital camera and mp3 player as a storage device, as many people already do. Second, having .fdi files gives the opportunity to fine tune the
names of the device and maybe even localize it into many languages. Third, we can advertise certain known bugs or deficiencies in the device for the libraries/servers using the device.
Rayiner Hashem: HAL seems to overlap in a lot of ways existing mechanisms like hotplug and kudzu. Will HAL interoperate with these projects or replace them entirely?
David Zeuthen: HAL might replace kudzu one day when we get more into device configuration. In the mean time both mechanisms can peacefully coexist.
For linux-hotplug, and udev for that matter, I'd say the goal is definitely to interoperate for a number of reasons; first of all linux-hotplug is already widely deployed and it works pretty well; second it may not be in a vendor's best interest to deploy HAL on an embedded device (though HAL will be lean and only depend on D-BUS)
because of resource issues. Finally, it’s too early for HAL to go into device configuration as noted above.
Rayiner Hashem: HAL is seperate from the underlying kernel mechanisms that handle the actual device management. Is there a chance, then, that information could get out of sync, with HAL having one hardware list and the kernel having another? If so, are there any mechanisms in place that would prevent this from happening, or allow the user to fix things manually?
David Zeuthen: There is always the possibility of this happening, but with the current design I'd say that the chances are slim. Upon invocation of the HAL
daemon all busses are probed (via a kernel interface) and devices are removed/added as appropriate using the linux-hotplug facilities.
There will be a set of tools shipped with HAL; one of them will wipe the entire device list and reprobe the devices. I do hope this will never be needed though 🙂
Eugenia Loli-Queru: Gnome/KDE are multiplatform DEs, but HAL for now is pretty tied to Linux. If HAL is to be part of Gnome/KDE, how easy/difficult would be to port it on BSDs or other Unices?
David Zeuthen: With the new architecture most of the HAL parts are OS agnostic; specifically the only Linux-specific parts are less than 2000 lines of C code for handling USB and PCI devices using the kernel 2.6 sysfs interface. It will probably
grow to 3-4k LOC when block devices are supported.
The insulation from the OS is important, not only for supporting FreeBSD, Solaris and other UNIX and UNIX-like systems, but more importantly, it allows OS’es that said DE’s run on to make drastic changes without affecting the DE’s. So, maybe we won’t get FreeBSD support for the next release of HAL, but anyone is able to add it when
they feel like it.
I'd like to add a few things on the road map for HAL. The next release (due in a few weeks, give or take) will be quite simple insofar as it basically just gives a list of devices. It will also require Linux Kernel 2.6, which may be a problem for some people (but they are free to write the Linux 2.4 parts; I already got USB support for 2.4).
Part of the release will also feature a GUI “Device Manager” to show the devices. Work-in-progress screenshots are here.
Post 0.2 (or 0.3 when it’s stable) I think it will be time to look into integrating HAL into existing device libraries such that programmers can basically just throw a HAL object and get the library to do the stuff; this will of course require buy-in from such projects as it adds D-BUS and, maybe, HAL as a dependency. Work on a volume manager will also be possible post 0.2.
It may be pretentious, but in time I’d also like to see existing display and audio servers use HAL. For instance, an X server could get the list of graphic cards (and monitors) from HAL and store settings in properties under it’s own namespace (FDOXserver.width etc.). This way it will be a lot easier to write configuration tools, especially since
D-BUS sports Python bindings instead of editing an arcane XFree86Config file.
There is a lot of crazy, and not so crazy, ideas we can start to explore when the basics are working: Security (only daddy can use daddy’s camera), Per-user settings (could store name of camera for display in GNOME/KDE), Network Transparency (plug an USB device into your X-terminal and use it on the computing server you connect to).
The Fedora developers are also looking into creating a hardware web site, see here, so the device manager could find .fdi files this way (of course this must be done in a distro/OS independent way).
Rayiner Hashem: How is KDE’s involvement in the freedesktop.org project?
Waldo Bastian: It could always be better, but I think there is a healthy interest in what freedesktop.org is doing, and with time that interest seems to be growing.
Rayiner Hashem: While it seems that there has been significant support for some things (the NETWM spec) there also seems to be a lot of friction in other places. This is particularly evident for things like the accessibility framework or glib that have a long GNOME history.
Waldo Bastian: I don’t see the friction actually. KDE is not thrilled to use glib but nobody at freedesktop.org is pushing glib. It has been considered to use it for some things at some point and the conclusion was that that wouldn’t be a good idea. The accessibility framework is a whole different story. KDE is working closely with Bill Haneman to get GNOME compatible accessibility support in KDE 4. Things are moving still a bit slow from our side, in part because we need to wait on Qt4 to get some of the needed support, but the future looks very good on that. TrollTech has made accessibility support a critical feature for the Qt4 release so we are very happy with their commitment to this. We will hopefully be able to show some demos in the near future.
Rayiner Hashem: What are the prospects for D-BUS on KDE? D-BUS overlaps a great deal with DCOP, but there seems to be a lot of resistance to the idea of replacing DCOP with D-BUS. If D-BUS is not replacing DCOP, are there any technical reasons you feel DCOP is better?
Waldo Bastian: D-BUS is pretty much inspired by DCOP and being able to replace DCOP with D-BUS is one of the design goals of D-BUS. Of course we need to look carefully how to integrate D-BUS in KDE, it will be a rather big change so it’s not something we are going to do in the KDE 3.x series. That said, with KDE 3.2
heading for release early next year, we will start talking more and more about KDE 4 and KDE 4 will be a good point to switch to D-BUS. Even though KDE 4 is a major release, it will still be important to keep compatibility with DCOP as much as possible, so that’s something that will need a lot of attention.
Rayiner Hashem: What do you think of Havoc Pennington’s idea to subsume more things into freedesktop.org like a unified MIME associations and a VFS framework? What inpact do you think KDE technologies like KIO will have in design of the resulting framework?
Waldo Bastian: I think those ideas are spot on. The unified MIME associations didn't make it in time for KDE 3.2, but I hope to get that implemented in the next KDE release. Sharing a VFS framework will be somewhat more difficult. Since the functionality that KIO offers is quite complex, it may not really be feasible
to fold that all in a common layer. What would be feasible is to take a basic subset of functionality common to both VFS and KIO and standardize an interface for that. The goal would then be to give applications the possibility to fall-back to the other technology with some degradation of service in case a specific scheme (e.g. http, ftp, ldap) is not available via the native framework. That would also be useful for third party applications that do not want to link against VFS or KIO.
Rayiner Hashem: A lot of the issues with the performance of X11 GUIs has been tracked down to applications that don’t properly use X. We’ve heard a lot about
what applications should do to increase the performance of the system (handling expose events better, etc). From the KDE side, what do you think the X server should do to make it easier to write fast applications?
Waldo Bastian: "Fast applications" is always a bit of a difficult term. Everyone wants fast applications but it's not always clear what that means in technical terms.
Delays or lag in rendering are often perceived as "slow", and a more aggressive approach to buffering in the server can help a lot in that area.
I myself noticed that the server-side font-handling tends to cause slow-down in the startup of KDE applications. Xft should have brought improvements there, although I haven’t looked into that recently.
Other KDE developers may have better examples.
Eugenia Loli-Queru: If Qt changes are required to conform with changes needed for interoperation with GTK+ or Java or other toolkits, is TrollTech keen on
complying? If KDE developers do the work required, is TrollTech keen on applying these patches to their default X11 tree?
Waldo Bastian: TrollTech is overall quite responsive to patches, whatever their nature, but in some cases it takes a bit longer than we would like to get them into a Qt release. That said, we have the same problem in KDE where we sometimes have patches sitting in our bug-database that take quite long before they get applied (Sorry BR62425!)
“So we need a registry of types documenting the type name and the format of the data transferred under that name. That’s it.”
Oh yeah it has been in Windows since around Windows 1.0 or 2.0 I don’t know but it has been at least 15 years.
This kind of article keeps me coming back to OSNews!
Anyway, like everyone else I'm excited by the progress spawned by fd.o, especially in the areas of HAL and the new X Server.
KDE is perhaps a little more separated from fd.o than GNOME, but I can assure readers that KDE is just as committed to producing an interoperable desktop as the other project members.
Now all we need to hope for is better vendor support for graphics hardware for the new (and old of course) X Server. From what I have read recently that’s one thing the OSS movement lacks.
Yeah, that's been in Windows since 2.0 (not 1.0). What difference does that make?
Maybe now you believe me that Apple’s system *does* use the hardware accelerator to draw into the window buffers <grin>
This is the best article I’ve read in a long time..well done Eugenia/Rayiner!
Some very good points raised, I look forward to what freedesktop.org has in store for us in the future.
Havoc didn’t understand anything about the autopackage project, that’s really a shame, because it really is a great project.
Also, autopackage will use native .debs or .rpms if they exist.
You need gail, libgail-gnome, at-spi, gnome-speech, festival, and gnopernicus for this to work. Haven’t tested it myself, but there you have the deps at least
I have gnopernicus and gnome-speech on my Slackware, I don’t know about the rest.
Actually, no I don't. I don't think that Keith Packard was stating that as known fact, but rather just offering a conjecture. Apple does have a DRI-like model, but only for OpenGL, not Quartz rendering. Quartz 2D rendering, unless something has changed drastically in Panther without Apple hyping it, is still done via the CPU. In fact, the big improvement in 10.2, besides Quartz Extreme (which we know is just the compositor), was hardware acceleration of window scrolling (essentially a bit-blit). Nowhere in Apple's literature is anything about hardware acceleration of Quartz 2D mentioned (and it really is rather complicated — you can't, for example, use the card's regular 2D line operations because they aren't anti-aliased). In fact, their SIGGRAPH presentation confirms that 2D rendering is done via the CPU.
Anyway, if the new X server gives us 2D via hardware, that will put it on par with Longhorn. I especially like the 6-month estimate that Jim Gettys gave
PS> One further clarification. Raster was measuring the performance of Render vs imlib for a number of compositing operations. While a simple composite was much faster via Render (thanks to NVIDIA's hardware acceleration), anything that involved scaling was much slower. My guess is that NVIDIA's acceleration only covers the simple case of non-scaled compositing. This would certainly make sense — since the primary user of Render (Xft, which uses it to compose AA text) doesn't use any scaling features.
One question that has really been bugging me is: when will the new X server be ready for the public, estimated time?
and
How long will the release cycle for KDE 3.3 be? (I really hope it is no longer than 6 months).
1. About a year, I would bet.
2. About six-eight months after the 3.2, I would think.
No one has hard numbers yet.
Havoc's statements are exactly right with RPM. Nothing wrong with the file, package, or format. It's the UI and implementation that is shoddy.
@Alex. Well, KDE 3.2 will take at least until late january from the looks of it, so I’d peg KDE 4.0 at about a year from now.
Anyone up for a build to get this out to the masses? Not CVS access, but some debs or rpms?
I love the talk about the new X server and the HAL project (and finally a real device manager, yay!)
Great article!
OK… um, could someone explain exactly what the new X server is trying to accomplish? I've heard performance and better graphics but I'm not really sure what that means.
[rb0.7]# festival
Festival Speech Synthesis System 1.4.3:release Jan 2003
For details type `(festival_warranty)’
festival> (tts “/usr/share/apps/LICENSES/GPL_V2” nil)
[… listen and enjoy …]
SIOD ERROR: control-c interrupt
closing a file left open: /usr/share/apps/LICENSES/GPL_V2
festival> (quit)
[rb0.7]#
The way you present it here, this text2speech is _not_ part of the desktop usability embedded automatically on all apps (which is what the question asks), you have to give it a text file to read.
BTW, my fedora has a festival driver app, but not “festival” itself.
Well, the mime type issues, even for DnD, isn’t as desperate as it sounds. Both KDE and GNOME understand MIME types and have their own MIME type databases; they also tend to use generally the same mime type settings for DnD, more so internally than between the various implementations. And therein lies the problem: each desktop has its own set of mimetype definitions. So if you use apps from just one desktop, or are selective in which apps you use, you have few to no problems.
What has been lacking is a MIME type database/registry and a set of standardized types for DnD (and by extension cut 'n paste) that everyone shares. FD.o is currently working on defining those standards. This is just like how FD.o has standardized other (meta)data registries and definitions, such as the .desktop file hierarchy or icon themes. This will lead to both better interop between the UNIX desktops as well as more consistency between individual apps, as we'll then have a common definition to point to when some app gets it wrong.
The UNIX desktop is coalescing (as opposed to fragmenting). Tomorrow's versions of KDE and GNOME will work even better together than today's versions. This seems to be the opposite direction from the one closed source systems generally go in, doesn't it?
Sorry, there is nothing like text2speech (my invention)
festival can do more than my example — not only read the text from a file. For example:
festival>help
[…]
Doing stuff
(SayText TEXT) Synthesize text, text should be surrounded by double quotes
(tts FILENAME nil) Say contexts of file, FILENAME should be surrounded by double quotes
(voice_rab_diphone) Select voice (Britsh Male)
(voice_ked_diphone) Select voice (American Male)
festival> (SayText "Great interview from OSNews")
festival>
I can’t comment how it is embedded in Gnome, KDE or wether it is present on Fedora (I use none of them)
I just presented the basic tool which does the work.
Hi
Yes. You are right. Gnome and KDE are converging. One significant area where I find no roadmap for convergence is KParts/Bonobo. I am happy that fd.o's work is being accepted by the free desktops without many ego clashes. End users must be able to use KDE or Gnome apps without worrying about any interoperability problems. That is the goal we are working towards.
Regards
Rahul
You need gail, libgail-gnome, at-spi, gnome-speech, festival, and gnopernicus for this to work. Haven’t tested it myself, but there you have the deps at least
I just got it to work on my FreeBSD 4.9 box. I needed to install on top of my relatively standard install :
festival-1.4.1_1, festlex-oald-1.4.1, festvox-kal16-1.4.0, and I upgraded to libaudiofile-0.2.4
possibly some other ports were installed along with this, didn’t keep a log.
Also after installation I needed to modify /usr/local/share/festival/lib/init.scm to select the correct audio routines (commenting out options for windows and linux) – this avoids those SIOD ERROR messages.
All that I am saying is that Linux claims to be ahead on the innovation front, but yet the simplest things that Windows has had for 15 years seem to escape Linux.
Today I remembered why I keep osnews on my bookmark bar: the interviews with people in the trenches. Thanks to both the interviewers and the interviewed for a great read and a great exposé of what's up in the Open Source desktop world. =)
It really seems we’ve turned a pretty important corner: up ’til now we’ve been largely playing catch-up and “clone the leader”. With enough work done to have created a viable desktop, we’re now able to stop and do some introspection. It’s visible in this article how the community is consciously addressing the issues of interop and consistency while also probing into new areas of functionality and capability. This bodes well and makes for exciting times.
This is SO much better than anything I've seen in a long time on OSNews. After seeing "review" after review of what writers do and don't like about every distribution, it's really nice to see something on such a wide variety of important topics. It's also nice because it's not just one person droning on subjectively. Really a nice article, and it doesn't make me think the site should have been named OSOpinions.com. More factual technology articles and less opinionated ones are the way to go.
So, mime-types then? What exactly is wrong with using mime.types like everything else? And these folks are supposed to lead us to linux desktop nirvana?
First of all, thanks to Eugenia and Rayiner for a great read. Let’s take a look at where autopackage was mentioned.
The argument that RPM is sufficient is worth examining. The theory goes that you can have a single RPM that works on all distros. That’s sometimes true, and in those cases, great. Of course it won’t integrate quite as nicely on Debian or Gentoo but tools such as alien do exist and using them is not such a hardship.
The problems come when it’s not possible to have a single RPM. Typically that’s because:
1. The metadata used by the distributions is different.
2. Different glibc versions are used.
3. Files and libraries need to be put/registered in different places.
(2) is kind of a red herring; there's nothing inherent in RPM that makes this a problem, but the RPM build tools don't really warn you about it or make it easy to avoid. Maybe in future apbuild will be more widely used, and this will become less of an issue (it will always be a compromise).
(1) and (3) can cause problems. RPM cannot have multiple sets of metadata, the format and tools don’t allow it and extending them to do so would be problematic. If distro A calls the Python runtime “python” and distro B calls it “python2” then you have a thorny problem.
Let’s not even go into the issues of Epochs and such.
(3) is not often so much of an issue, but it can be sometimes and RPM doesn’t make dealing with it easy. You could move them/munge them in post-install scripts, but then the file ownership metadata is wrong and so on.
These things cause the RPM culture to be one of “one RPM for one version of one distro”. Part of the hope is that a new installer framework will start with a new culture – one of binary portability. While it’s true that RPM works fine on Solaris, how many Solaris RPMs have you seen on the net? I haven’t seen any. I’m not interested in making a new package manager with a new format, I’m interested in making installers that work “anywhere” (restricted to linux/gnu).
There are other advantages to using autopackage I guess, none of them seriously important. You can mix source and binary installs more easily. I think the specfile format is nicer. I’m not interested in getting into a pissing match over features though, it’s not worth the effort.
The final thing to note is that autopackage is an installer framework, not a package manager. Yes, at the moment you have the whole "package remove" command thing going… in future I'd like to eliminate that stuff and go 100% with RPM/DPKG/portage integration. It's technically possible to have "rpm -e whatever" work fine with stuff not installed by RPM, so we might as well remain consistent and do it.
It’s been years and we still have RPMs being built and rebuilt constantly, with millions of different files for different distros – if it’s possible to unify that so the job only has to be done once, is that not worth a shot? I think it is. Maybe one day Havoc will agree with me
It's not a matter of just using mime types, but of agreeing on what mime types to assign to different data.
and in response to Aaron J. Seigo: mime types are registered with IANA (). There is already a standard.
Hi,
“All that I am saying is that Linux claims to be ahead on the innovation front, but yet the simplest things that Windows has had for 15 years seem to escape Linux.”
We can’t all be first, right?
Linux != XFree86.
But perhaps you could have roughed up the HAL guy a bit more; there were some things, like when you ask him how portable HAL is over other kernels and he says it's OS agnostic but doesn't really give any details. Also, if it's required to have a kernel interface, how are they going to port it to closed source and/or non-unix OSes?
Also if you’re going to ask them to compare against quartz you could also have asked them about fresco and directfb.
Another thing that comes to mind would be asking the gnome and kde guys when they are going to make their software independent from the graphics lib (for X haters of us 😉
But anyway, great interview, keep up the good work.
Great article. It’s great to see that such an all-star programming cast has rallied around FD.org.
I just bought a video card with 128MB of video RAM, so bring on the composition engine
What’s really neat is that it’s still the same ole’ X, so if you want to use your grandfather’s XFree86 3.3.6 you can, and all applications will work. But if you’ve got the hardware, go with the FD.org X Server. Wouldn’t it be ironic if by the time Longhorn hits the street composition engines would be considered old hat, “that’s so 2004, even Debian has had one for years”
This is one the very best articles I’ve read on OS News. Well done 😉
I’m perfectly aware of the IANA MIME type registry. KDE has a few MIME types of their own registered there. This isn’t a problem of saying “OK, if it is a MS Word file we agree to use
application/msword and expect the file to have a .doc extension.” It goes much deeper.
For instance: when we aren’t dealing with a file, but a chunk of data, what mimetypes do we use? e.g. If I drag and drop an image that is also a clickable link what MIME types should be used, and in which order should they be presented? If I drop it on a graphics app, it should probably display the image for editting. But if I drop it on a web browser, perhaps it should should load the URL in the link. In this case, both a link and an image MIME type should (IMHO, anyways =) be provided in the DnD information and that behaviour should be standardized. Furthermore, should all DnD’d graphics have both a raw bitmap type available (which all apps should be able to use, theoretically) as well as the actual graphic format it’s stored in (e.g. image/jpeg or image/png)? The questions aren’t that hard, but the answers need to be standardized so that we don’t have multiple answers that may all be slightly different and therefore a block to interop (which is the current situation).
Another example: when I have a MS Word document, which application should be launched? mime.types doesn’t provide that information (and shouldn’t). Currently if I want to open it in KWord I need to define that in each desktop environment I wish to use separately. That’s absurd. Or, when I use Open Office Writer and it looks to the MIME type associations for what to do with a PDF file (for instance), it certainly doesn’t look at the same information contained in my KDE settings. Ditto for Mozilla. Internally each is (generally) consistent, and they (generally) use the same IANA-defined MIME types (except for data types not registered with IANA), but between the various systems they aren’t consistent. IANA can’t fix that; mime.types won’t address it; ergo the FD.o MIME type standardization.
Nice article. Good work Eug and Ray.
I hope it becomes a trend.
I want to add my voice to those congratulating OSNews for this fine piece of journalism.
Me too.
Great job, and thanks for a fantastic article.
Excellent interview with extremely interesting technical insights… I want more !
Howdy
Hmmmm this has always seemed rather hard in a Linux distro, I mean I click on an HTML page in Mandrake 9.0 and get an HTML editor! I don't quite know why they did this when 99% of the time you would want to view it and not edit the thing *sigh*
Then the fun begins: you want to change the association, and good luck, as I've yet to make it work (maybe this has become more refined now, I'll buy a new distro soon)
It's little things like this that we should be fixing as well as big things like X.
@Anon E Moose
Well it's good to see my relatives are interested in this too :p
Just my two cents, thank you all for your time.
“All that I am saying is that Linux claims to be ahead on the innovation front,”
If the kernel's talking to you and making boasts about itself, I think it's won the claim to innovation. If you're talking about 'people' making that claim, no shit. I don't care what platform it is, people spouting off about how innovative they are almost inevitably overestimate both themselves and whatever they're talking about, no matter if we're speaking of Windows, Linux, or almost any other field in the world. I don't think I've seen anything in the computing field that I'd actually call innovative on any operating system since Clippy burst onto the scene.
“All that I am saying is that Linux claims to be ahead on the innovation front,”
Linux never claimed to be innovative. They’re just working on what they think they should. They’ve never bragged about “OMG look at us we’re so innovative!!!”
This in contrast to Microsoft, who does claim to be innovative but really isn’t.
“but yet the simplest things that Windows has had for 15 years seem to escape Linux.”
And in other news, the simplest things that Unix has had for 15 years seem to escape Windows.
All I can say is: so what? What matters is that it’s here NOW. Whining about how Windows has had it for x years won’t change a thing.
I seem to recall a while ago that there was a very public falling out between Keith P and the XFree86 team. I’m curious as to whether the changes/improvements to X being discussed here will be merged back into the main X tree, or if it will become a completely separate release…..
Gogs
It will be a separate release, but not a fork. The core of the new X Server (the server is called XWin I think, the core is called Kdrive) is rewritten by Keith, and other major parts are also re-written, so they are not forked off XFree86. In fact, there are some new parts now that XFree86 doesn't have at all. Other parts of XFree86 though, like drivers, some extensions, etc., will be forked off indeed.
Appears to be a complete rewrite actually. All this new stuff is based on kdrive, Keith Packard's own X server. I don't know what the plans for folding it back into XF86 are. All of this is still very experimental.
Any idea how fonts are going to be handled, i.e., will there have to be a rewrite of FreeType, Qt, GTK+ as well, or will kdrive be backwards compatible?
Let's see who can build the new Linux UI first.
1) Fonts will almost certainly be handled through Xft2 + the Render extension. There is no point rewriting freetype — it's damn good already.
2) There will be no rewriting of Qt or GTK+. If that was the case, this new server would never make it. KDrive is just another X server, it speaks the same X protocol, so all X apps will be compatible with it. However, toolkits will need to be modified to take better advantage of the new server’s functionality. For example, it would be nice to have GTK+ and Qt render through Cairo natively. According to the interview answers, some additional coordination will probably also be necessary to fix the opaque resize problem.
This article is linked at /.…
Contains a few (imo) interesting comments.
If you’re running Gentoo and would like to run/test check this out
At the bottom of the interview with Havoc Pennington, Eugenia notes that screenreader is greyed out in Fedora. I am running Fedora, and saw the same thing. It says that gnopernicus must be installed. Therefore,
yum install gnopernicus
and text-to-speech works.
While it’s cool, it’s also annoying as hell — talks too much, you could say. Does anyone know of another way to access the underlying text-to-speech software? It would be incredibly useful if I could paste text into a textbox and press read. (Like Simpletext on MacOS 7+.)
This is why I mentioned Mac OS X. The way OS X does it is far better than text2speech in a separate app. Speech can be triggered from the app itself. For example, in TextEdit, you can select a piece of text, right click on it, and select "start speaking". It feels more integrated with the OS, and doesn't make people with usability problems feel forgotten and forced to use a third party app.
Great article, very interesting in-depth information.
Shocking for me to notice that OS X doesn't use 2D hardware acceleration! Why is Apple throwing away the speed gain?
> Another thing that comes to mind would be asking the gnome
> and kde guys when are they going to make their software
> independent from the graphics lib (for X haters of us 😉
Whoosh.
That was the sound of the point of the article going over your head.
Eugina you need to “gok” package for the screen reader functionality to work. You distro should be able to resolve its dependencies for you. Good luck.
Ah silly me, I just realized you need gnome-speech, gnome-mag and gnopernicus. Sorry if this has already been mentioned.
Great article.
About the XServer/XFree86/Kdrive confusion:
Kdrive was initially a heavily modified version of XFree86 to allow it to run on low-memory devices such as handhelds. It uses as little memory as possible by performing a lot of calculations at runtime instead of storing it in memory. Due to the high load latencies (relative to clock speed) of modern hardware, this actually becomes a speed optimization in some cases. It has less code duplication with the kernel (although there’s still work to do in that area, but it needs sync with the kernel guys). The internals are also cleaned up and it’s supposed to be easier to work with and modify.
For these reasons, Keith Packard chose to base his new efforts on Kdrive rather than just copying the source tree from XFree86.org. At some point it was renamed to Xserver (not XWin, that is a just a website as the title at xwin.org says. And it looks pretty dead now, but fd.o is hosted there).
Xserver doesn’t support everything XFree86 does. Most of these features are useless anyway (like PIE, Ximage and a bunch of other obsolete stuff). The driver modules are gone too (Xserver is more like Xfree86 3.x with separate servers for each card). That’s not too bad since the graphics driver infrastructure in linux is in great need of an overhaul anyway.
X bloatedness was overrated to begin with, but with the new Xserver + the new X C bindings things will get even leaner.
Another nice fact is that the “unsnappines” of X is being solved, on two different fronts using two different approaches and we will end up with the benefits of both.
As XDirectFB shows, reducing the number of expose events significantly improves the feel of the desktop. Xserver will have that, and combined with the kernel CPU scheduling efforts by Andrew Morton, Nick Piggin, Con Calivas and others things will get supersnappy and ultrafast. Like running TWM on a supercomputer.
Am i the only one that thinks that Keith Packard looks like Steve Ballmer (minus the sweaty armpits and the monkey dance) ??
That is an incredible summation. Thank you very much.
Does the Xserver conform to standard X guidelines?
This is interesting because if not it might be difficult for the standard commercial unix folks trying to get some of the gnome stuff to compile around specific linux hooks and such.
It sounds very interesting and I really hope they get the card support rocking and distros start switching.
Speed is king.
Apple does use hardware acceleration, but in very limited cases. It uses bit-blit acceleration for stuff like scrolling, so the CPU isn’t stuck moving large blocks of pixels around. They also use OpenGL, of course, to composit those transparent windows together.
However, they don’t appear to use acceleration for stuff like line or polygon drawing. Current hardware has a sharp divide between 2D and 3D components. The 2D components traditionally used by Windows, MacOS, and X, don’t support anti-aliasing or alpha blending (transparency), or gradients or anything like that. Since OS X uses very high-quality vector graphics, with everything anti-aliased and transparent and whatnot, it can’t use the existing 2D acceleration, except for the aformentioned bit-blit functionality..
Longhorn is 3 years away, and I really do hope that the OSS community can throw out something just as good, UI wise.
First of all, great article!
How about gaming though. If some of the video RAM is taken, will this reduce the available video RAM when running a game? Or will it just automatically swap out the unused stuff when necessary until the game is closed and the window contents are required again? I also wonder where it would swap it to and how much of a performance penalty this would be if it happend during a game.
I hope Keith and freedesktop.org can figure out what Mike is trying to accomplish with Autopackage. It seem obvious from Keiths comments that he doesn’t get it. Installing packages was the first wall I hit when I first tried out linux. If we want people to take Linux seriously as a desktop system something like Autopackage is sorely needed. My hats off to Mike Hearn and all the contributers for all their great work. Keep It Coming!
Err it was Havoc who commented on autopackage not Keith. My bad!
>> Am i the only one that thinks that Keith Packard looks like Steve Ballmer (minus the sweaty armpits and the monkey dance) ??
You mean Jim Gettys?
Yeah, that’s what I thought!
I think GNOME and KDE should focus on kdrive more and more, make it difficult for even UNIX’s not to use it.
I am hoping this will become the best desktop and high performance workstation solution.
My reason is simple, make X.org and Xfree86.org irrelevant.
make X.org and Xfree86.org irrelevant.
Especially the X folks but I feel you on the Xfree thing too. People however emphasize X windows too much and ignore that everything from the application to the window manager back to the widget class itself can make things feel slow in the X world.
That is not ignoring the problems with Xfree86 at all. I just hope we do not get too wrapped up in this whole thing only to realize very small eye candy style gains and be left in a hardware support mode worse than before.
Ok, that being said I think we can only hope commercial *Nixes jump but you have to realize the whole network model is actually a part of X that is used in the corporate *Nix world.
The old stuff that people always complain of as cruft is commonly used out there in the old school corporate Unix world and has to be supported before the big boys will play ball.
With my display exported logged into a box over the lan I commonly brought up gui admin tools like Netbackup gui or the Veritas system tools..
Any idea if any of the major PC 3D vendors have any plans to support virtualized GPUs the way SGI did? This would seem to solve the multiple apps rendering issue…unless I’m thinking of something entirely different.
Yep, it would, but I have heard absolutely nothing about this. The nice thing, though, is that its more a matter of what software developers want rather than what hardware developers plan. If 3D accelerated desktops take of (with Longhorn), hardware manufactuers will put in support. The only catch is that we’d have to wait ’till 2006, and this seems like it will be ready before then!
> I love the talk about the new X server and the HAL project (and finally a real device manager, yay!)
me too, i agree!
virtualized GPUs ?
I am really getting tired of people bitching about linux software installation.
The average windows users have a way of messing up their win* installation by installing/uninstalling a bunch of crap. More than half of these softwares never really get uninstalled, and Win* uninstallers most of time leaves files and entries about the software in the registry.
With that said I am sure rpm/deb could be improved on:
Standardize labeling, versioning and dependency conventions for software. Some packagers label, organize, and place varied dependencies for the exact same piece of softwares they package.
1. We could adopt java packages name space convention to solve this problem.
org.gnome.* or org.kde.*
The label version and dependency (internal) must come
from the developers/organization that create/provide the softwares.
Lets solve the real problem here!!
no standards.
SGI systems have virtualized their GPUs for awhile now. Basically, a virtual GPU means that the real graphics hardware is fully abstracted from the application, and the OS has full control over managing the HW. Just as Windows and Linux have virtual memory and a virtual CPU (preemptive multitasking), IRIX machines have a virtual GPU. You can throw more graphics pipes into the virtual GPU pool, and automatically get increased performance. Also, the abstract nature of the interface makes it easy to share the GPU among many concurrently rendering applications, just as preemptive multitasking makes it easy to share the CPU among concurrent applications.
Congratulations to OSNews for this excellent article!
While I do think that some of the quesitons could have been a bit different, I really liked this article!
To make it short: This is why I really like to read OsNews! Keep it up!
cu Martin
Thanks for the info Rayiner.
It makes me wonder if nVIDIA or ATI have this function for their non average consumer GPU’s.
Standards: tried it, didn’t work out too well. There was little appetite for such a thing. While Red Hat were enthusastic, Debian were cautious and Gentoo were also enthusiastic until internal politiking ended it, getting standards for package metadata was compared to me in private to “asking for world peace”. There are archives on the net if you want all the gory details.
The autopackage approach doesn’t require everybody to suddenly relabel millions of packages, good though that would be. Modern dependency trees are huge and standardising them all is very hard indeed. Maybe somebody else can do this, I don’t know. If they can then best of luck to them. | https://www.osnews.com/story/5215/the-big-freedesktoporg-interview/ | CC-MAIN-2020-16 | refinedweb | 10,289 | 61.87 |
- 04 Jun, 2020 1 commit
- 09 Jul, 2019 2 commits.
- 06 Jun, 2019 1 commit
Motivation: The SSWG has identified a fast approaching reality of namespace clashes in SPM within the ecosystem and has proposed a rule on names that `NIORedis` no longer complies with. Modifications: All references to `NIORedis` have been switched to `RedisNIO` as this module name is unique (at least within GitHub's public repositories). The goals for this name are as follows: 1. To indicate that this is a Redis client library that is built with SwiftNIO 2. That it is a lower level library, as it directly exposes SwiftNIO as an implementation detail 2a. The idea being that a higher level library (`Redis`) will be used, and to "go one level deeper" in the stack, you append the "deeper" `NIO` postfix 3. It follows a naming pattern adopted by Vapor who has expressed their desire to adopt this library as their Redis implementation Result: A repository, package name, and module name that are unique across GitHub's public repositories that achives the goals outlined above.
- 01 May, 2019 1 commit | https://gitlab.com/Mordil/RediStack/-/commits/8b75ef7f0e3cb82c69f2e2b47ea8f49e7d4e2ca9/CONTRIBUTORS.txt | CC-MAIN-2022-21 | refinedweb | 185 | 59.33 |
Templates and Factory Functions at Namespace Scope
In the previous section, I argued that static member functions should be made non-members whenever that is possible, because that increases class encapsulation. I consider these two possible implementations for a factory function:
// the less encapsulated design class Widget { ... public: static Widget* make(/* params */); }; // the more encapsulated design namespace WidgetStuff { class Widget { ... }; Widget* make( /* params */ ); };
Andrew Koenig pointed out that the first design (where
make is static inside the class) enables one to write a template function that invokes
make without knowing the type of what is being made:
template<typename T> void doSomething( /* params */ ) { // invoke T's factory function T *pt = T::make( /* params */ ); ... }
This isn't possible with the namespace-based design, because there's no way from inside a template to identify the namespace in which a type parameter is located. That is, there's no way to figure out what
??? is in the pseudocode below:
template<typename T> void doSomething( /* params */ ) { // there's no way to know T's containing namespace! T *pt = ???::make( /* params */ ); ... }
For factory functions and similar functions which can be given uniform names, this means that maximal class encapsulation and maximal template utility are at odds. In such cases, you have to decide which is more important and cater to that. However, for static member functions with class-specific names, the template issue fails to arise, and encapsulation can again assume precedence.
Syntax Issues
If you're like many people with whom I've discussed this issue, you're likely to have reservations about the syntactic implications of my advice that non-friend non-member functions should be preferred to member functions, even if you buy my argument about encapsulation. For example, suppose a class
Wombat supports the functionality of both eating and sleeping. Further suppose that the eating functionality must be implemented as a member function, but the sleeping functionality could be implemented as a member or as a non-friend non-member function. If you follow my advice from above, you'd declare things like this:
class Wombat { public: void eat(double tonsToEat); ... }; void sleep(Wombat& w, double hoursToSnooze);
That would lead to a syntactic inconsistency for class clients, because for a
Wombat
w, they'd write
w.eat(.564);
to make it eat, but they would write
sleep(w, 2.57);
to make it sleep. Using only member functions, things would look much neater:
class Wombat { public: void eat(double tonsToEat); void sleep(double hoursToSnooze); ... }; w.eat(.564); w.sleep(2.57);
Ah, the uniformity of it all! But this uniformity is misleading, because there are more functions in the world than are dreamt of by your philosophy.
To put it bluntly, non-member functions happen. Let us continue with the
Wombat example. Suppose you write software to model these fetching creatures, and imagine that one of the things you frequently need your
Wombats to do is sleep for precisely half an hour. Clearly, you could litter your code with calls to
w.sleep(.5), but that would be a lot of
.5s to type, and at any rate, what if that magic value were to change? There are a number of ways to deal with this issue, but perhaps the simplest is to define a function that encapsulates the details of what you want to do. Assuming you're not the author of
Wombat, the function will necessarily have to be a non-member, and you'll have to call it as such:
// might be inline, but it doesn't matter void nap(Wombat& w) { w.sleep(.5); } Wombat w; ... nap(w);
And there you have it, your dreaded syntactic inconsistency. When you want to feed your wombats, you make member function calls, but when you want them to nap, you make non-member calls.
If you reflect a bit and are honest with yourself, you'll admit that you have this alleged inconsistency with all the nontrivial classes you use, because no class has every function desired by every client. Every client adds at least a few convenience functions of their own, and these functions are always non-members. C++ programers are used to this, and they think nothing of it. Some calls use member syntax, and some use non-member syntax. People just look up which syntax is appropriate for the functions they want to call, then they call them. Life goes on. It goes on especially in the STL portion of the Standard C++ library, where some algorithms are member functions (e.g.,
size), some are non-member functions (e.g.,
unique), and some are both (e.g.,
find). Nobody blinks. Not even you.
Interfaces and Packaging
Herb Sutter has explained that the "interface" to a class (roughly speaking, the functionality provided by the class) includes the non-member functions related to the class, and he's shown that the name lookup rules of C++ support this meaning of "interface." This is wonderful news for my "non-friend non-members are better than members" argument, because it means that the decision to make a class-related function a non-friend non-member instead of a member need not even change the interface to that class! Moreover, the liberation of the functions in a class's interface from the confines of the class definition leads to some wonderful packaging flexibility that would otherwise be unavailable. In particular, it means that the interface to a class may be split across multiple header files.
Suppose the author of the
Wombat class discovered that
Wombat clients often need a number of convenience functions related to eating, sleeping, and breeding. Such convenience functions are by definition not strictly necessary. The same functionality could be obtained via other (albeit more cumbersome) member function calls. As a result, and in accord with my advice in this article, each convenience function should be a non-friend non-member. But suppose the clients of the convenience functions for eating rarely needed the convenience functions for sleeping or breeding. And suppose the clients of the sleeping and breeding convenience functions also rarely needed the convenience functions for eating and, respectively, breeding and sleeping.
Rather than putting all
Wombat-related functions into a single header file, a preferable design would be to partition the
Wombat interface across four separate headers, one for core
Wombat functionality (primarily the class definition), and one each for convenience functions related to eating, sleeping, and breeding. Clients then include only the headers they need. The resulting software is not only clearer, it also contains fewer gratuitous compilation dependencies. This multiple-header approach was adopted for the standard library. The contents of namespace
std are spread across 50 different headers. Clients
#include the headers declaring the parts of the library they care about, and they ignore everything else. | http://www.drdobbs.com/cpp/how-non-member-functions-improve-encapsu/184401197?pgno=2 | CC-MAIN-2014-41 | refinedweb | 1,134 | 52.39 |
Created on 2017-11-07 23:15 by barry, last changed 2018-09-22 16:49 by xtreak.
Issue bpo-26182 added DeprecationWarnings for "import async" and "import await" since both of those pseudo-keywords were to become actual reserved keywords in Python 3.7. This latter has now happened, but the fix in bpo-26182 is incomplete. It does not trigger warnings on "from .async import foo".
base/
__init__.py
async.py
good.py
-----async.py
x = 1
-----good.py
from .async import x
$ python3.6 -W error::DeprecationWarning -c "import base.good"
$ python3.7 -c "import base.good"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/tmp/x1/base/good.py", line 1
from .async import x
^
SyntaxError: invalid syntax
$ cd base
$ python3.6 -W error::DeprecationWarning -c "import async"
DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
I agree that this deprecation approach is not very helpful because it does not indicate a recommended way to fix.
Yep, we all know that we will be forced to rename these parameters/variables everywhere and likely break few APIs due to it.
I am curious if there is any emerging pattern for new naming as I find really annoying if every python project using these keywords would endup picking different alternatives for their rename. | https://bugs.python.org/issue31973 | CC-MAIN-2018-51 | refinedweb | 224 | 67.55 |
Red Hat Bugzilla – Full Text Bug Listing
Created attachment 561826 [details]
Patch to 4.3-SNAPSHOT
Description of problem:
The information provided by tags cannot be used to deploy system specific content.
Version-Release number of selected component (if applicable):
4.3-SNAPSHOT
How reproducible:
Deploy a content bundle like
<?xml version="1.0"?>
<project name="test-bundle" default="main" xmlns:
<rhq:bundle
<rhq:deployment-unit
<rhq:url-file
</rhq:deployment-unit>
</rhq:bundle>
<target name="main" />
</project>
with the above mentioned properties file looking like
system.propA=foobar
system.filteredProp=@@rhq.tag.it.env@@
system.woNamespace=@@rhq.tag.mytag@@
should result in (with tags assigned to the platform it:env=dev and mytag=bar)
system.propA=foobar
system.filteredProp=dev
system.woNamespace=bar
Steps to Reproduce:
1. see above
2.
3.
Actual results:
No replacement is done.
Expected results:
If deploying a content bundle with a text file (e.g. properties file) using the pattern @@rhq.tag.[namespace].semantic@@ the token should be replaced by the value of the tag named "[namespace:]semantic".
If no tag value is available the token will remain unchanged in the property file.
Additional info:
Usage scenario: e. g. source files from a NFS share called /mnt/share/[dev|pre|prod]/application.properties depending on the value replaced with the above process
note: the tag format supported by RHQ is:
[namespace:][semantic=]name
notice the semantic is also optional. With the attached patch, I believe this means the literal string "null" will be used in the name of the replacement token if there is no semantic in your tag. examples:
for tag of "foo:bar": @@rhq.tag.foo.null@@ will be replaced with "bar"
for tag of "abc": @@rhq.tag.null@@ will be replaced with "abc"
After thinking about this, I do not think this is the proper behavior, at least in that second case where neither namespace nor semantic is specified, because if you have more than one tag that omits namespace and semantic specifiers, you won't be able to determine which one to replace the @@ token with. So if I have "foo" and "bar" as two tags on my resource, what does @@rhq.tag.null@@ get replaced with? Its indeterminant.
So, I think, at least in the case where only name is in the tag, we don't populate that in the ant properties. We could put them perhaps in a comma-separated list on @@rhq.tags.null@@ but I would suggest we just don't do anything for this yet, ignore name-only tags and wait for a use-case to crop up before we do anything with those.
As for the first case, its probably safe to do the same thing - ignore them. For example:
foo:bar
foo:abc
What does @@rhq.tag.foo.null@@ get replaced with? Again, its indeterminate. bar and abc are in the same namespace, but have no semantic to qualify them.
Thus, in both cases, I will ignore tags if semantic is not specified.
the attached patch isn't all that is needed - the unit tests in the ant-bundle plugin are failing. nothing major, we just have to make sure we create a mock resource object to avoid NPEs in the tests. I have also tested replacing tag replacement tokens and I tested the cases where semantic is null (namespace:name or name-only tags).
All enterprise/jar, common/ant and plugins/ant-bundle unit tests pass on my box.
I'll commit this shortly.
git commit master - c2c2f68
tweeked some things and added unit tests.
Thanks for the patch torben!
To Test:
1) Import a platform resource
2) Add a tag to it - provide a semantic in the tag. For example, tag names such as the following will work:
it:group=qa
organization=HR
3) Create an ant bundle that has a file with replacement tokens. To match the example above, such replacement tokens could be:
@@rhq.tag.it.group@@
@@rhq.tag.organization@@
4) Deploy the bundle to the platform resource that is tagged
5) After the bundle is deployed, confirm file has been realized with the appropriate tag names (to follow the examples above, @@rhq.tag.it.group@@ should have been replaced with the string "qa" and @@rhq.tag.organization@@ should have been replaced with "HR".
Bulk closing of items that are on_qa and in old RHQ releases, which are out for a long time and where the issue has not been re-opened since. | https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=790322 | CC-MAIN-2017-30 | refinedweb | 744 | 56.86 |
How to: Create a Load Test Plug-In
You can create a load test plug-in to run code at different times while the load test is running. To create a plug-in, you can expand upon the builtin functionality of the load test. To do this, you must create a class that inherits the ILoadTestPlugin interface. This class must implement the Initialize method of this interface. For more information, see ILoadTestPlugin.
To create a Load Test Plug-in using C#
Open a Test Project that contains a Web test.
For more information, see How to: Create a Test Project.
Add a Load test to the Test Project and configure it to run a Web test.
For more information, see How to: Launch the Load Test Wizard.
Add a C# Class Library Project to your test solution.
Add a reference to the Microsoft.VisualStudio.QualityTools.LoadTestFramework dll in the Class Library project.
In the class file located in the Class Library project, add a using statement for the Microsoft.VisualStudio.TestTools.LoadTesting namespace.
Implement the ILoadTestPlugin interface for the class created in the Class Library project. See the following Example section for a sample implementation.
In the Test Project, right-click and select Add Reference. From the Projects tab, select the Class Library Project. Click OK.
Open the Load test and select the top node of the Load test. Press F4 to display the Properties window. You can now set the Load Test Plug-in property by clicking the ellipsis (…). Select your class in the dialog box.
Example
The following code shows a Load Test Plug-in that runs custom code after a LoadTestFinished event occurs. If this code is run on a test rig and the test rig does not have a localhost SMTP service, the load test will remain in the "In progress" state as = ((LoadTest)sender)"); } } } }
There are eight events that are associated with a Load test that can be handled in the Load Test Plug-in to run custom code with the Load test. The following is a list of the events which provide access to different periods of the load test run: | http://msdn.microsoft.com/en-us/library/ms243153(VS.80).aspx | CC-MAIN-2014-10 | refinedweb | 357 | 65.42 |
Guest post by Zurab Murvanidze Microsoft Student Partner University College London
About me:
My name is Zurab Murvanidze, I am 1 st year computer science student at UCL. I love learning about technology and have deep interest in machine learning, data science, quantum computing and artificial intelligence.
During this academic year, I became winner of few competitions:
Porticode 2.0 Hackathon at UCL – 1 st place, ICHack18 – best mobile app, TechXLR8 London – Urban-X Startup accelerator prize. I like developing applications and games in my spare time and in this article would love to share my experience in ML.NET.
This article will cover basics of machine learning, will introduce you to ML.NET and teach you how to create and train machine learning models. It will also demonstrate how can we implement machine learning in ASP.NET Core Web Application.
I hope once you get familiar with this technology you will come up with many creative ideas to apply machine learning to different problems.
ML.NET is open-source framework that allows developers to easily implement their custom machine learning models. You are not required to have any background in machine learning as this article covers the basics, however it would be helpful if you are familiar with C# and other .NET libraries.
Before we start coding, we need to have an idea what are we trying to achieve, what data are we using and how can we interpret it to get desired result.
In our application, we will try to predict age of marine snails, Abalones. Traditional way to determine age of Abalone involves cutting their shell through the cone, staining it and counting the number of rings using microscope (number of rings corresponds to their age).
We are not going to use traditional method, firstly because we don’t have the microscope and secondly because we can use machine learning approach.
Method is pretty straightforward, we will take data set that contains information about physical measurements of the Abalones and train model that can learn and spot some correlations between Abalone’s physical development at different ages. Hence we will make predictions based on the trained ML model.
To get data set: download abalone.data file. Also check out abalone.names to find out some general information about data set.
Making Console Application to set up the model:Making Console Application to set up the model:
Open Visual Studio 2017, select New Project > .Net Core > Console Application
Press next, select “.Net Core 2.0” as a Target Framework and press next again. In the name field write Abalone_Age and Press Create.
We need to add ML.NET NuGet Package in our solution, so open solution explorer, right click on project Abalone_Age > Add NuGet Packages… > in search field type Microsoft.ML, select it and press add package.
To keep things organized, create folder to keep data, right click the project “ Abalon_Age” and select Add > New Folder , call it data . Now add abalone.data.txt in data folder.
Our console application is organized and ready for further development, but we need to know how do we use of the data we have got? As you have probably read in description of abalone.data it provides information about sex, length, diameter, height, whole weight, shucked weight, viscera weight, shell weight and age of the abalone. We need to create class that represents all of these features and then decide which of those features are needed to prediction age. We must also create separate class to indicate feature we are predicting, in our case it is Age .
In solution explorer right click the project, Add > New File… > Empty C# class and call it Abalone .
Class Abalone is the input data class that includes all the features provided and attribute Column specifies the indices of the columns in the data set.
AbaloneAgePrediction is class that represents predicted results, it has attribute ColumnName that indicates the column “Score” which is the special column in ML.NET. The model outputs predicted values into that column.Learning Pipeline:
Machine learning model has many parts that needs to be tied together in order to execute and produce desired output, this process of collecting everything and tying them together is called pipeline. From the definition it should be clear that pipeline is the most important factor in creating accurate machine learning model.
So after we created Abalone class and we know what features it has, we can start coding pipeline for our ML model.
Go to Proram.cs file and add additional “using” statements at the top of the page.
Now declare Train() function.
The code above tells compiler that our Prediction model is based on class Abalone , and it must output predicted value in AbaloneAgePrediction class, more specifically to the value in that class which corresponds to ColumnName “Score”.
Inside the function Train() we must define learning pipeline.
Pipeline where it gets data from, which features should be included/excluded in training process and many more. It can be complicated to understand in the beginning what each piece of code does in this function, therefore I will go through all of them to make sure everything is clear.
TextLoader function takes string as a parameter that indicates path to the abalone.data.txt So, we can define constant string _datapath as a variable in the code and pass it to TextLoader function as a parameter.
I will also define constant _modelpath which will indicate where to output the model after training. (we will use this later and make sure you define it)
.CreateFrom<Abalone>(separator: “,”) separator tells compiler how to separate columns from each other, in our data set, values are separated by comma therefore we have to indicate it.
ColumnCopier – when model is trained, values under Column Label are considered as correct values and as we want to predict Age we should copy Age column into Label Column.
CategoricalOneHotVectorizer – Values in Column Sex are M or F, however algorithm requires numeric values, therefore this function assigns them different numeric values and makes suitable for training model.
ColumnConcatenator – , this is the function in which we tell pipeline which features to include to predict Age of the Abalone. We must decide how relevant are the features for our calculation and based on that decide whether include it or not. Only those features will be included in learning process, whose names are declared in this function. As you can see I have excluded Shucked Weight, and Viscera Weight, even though they are absolutely relevant and will probably make calculation a bit more accurate, they are not easy to obtain. Our aim is to make predictions based on easily obtainable measures, so we can quickly check how old tha Abalone is. To make things even easier, we can exclude Sex feature, as it is not very intuitive to tell whether Abalone is male or female, and it doesn’t have massive influence on our prediction.
FastTreeRegressor() – finally we define the algorithm that we want to use in the learning process, I will not go in too much detail here but if you want to find out more about Tree Regression algorithms, you can read this article .Prediction model:
Once we are done with the pipeline, we can define prediction model and export it to the data folder.
To Execute our first machine learning application, we need Add using System.Threading.Tasks; at the top of the page and overwrite Main function with the code provided below.
This code creates machine learning model based on pipeline we have declared in Train function.
Now before building the solution, we need to go to project options > General > Language options and change C# language version from default to Version 7.1. Now you can press build and hopefully you have created your machine learning model without any errors.
You should get similar message on the console. You can now check data folder and see that Model.zip is added.
Now by adding few lines of code, we can start making predictions. Create new instance of the Abalone in Main function providing the body measurements, Declare
var prediction = model.Predict(abalone1);
now prediction variable contains the predicted value which can be printed out on the console using Console.WriteLine(prediction.Age);
Here is the full code for Program.cs
And as you build the application, it should display the predicted value as shown in picture.
Predicted value was supposed to be 9, but there is a little error which is expected. Firstly, because there are only 4177 instances of abalones, secondly in nature physical measurements might vary and do not directly related to the age. Therefore, we do not expect this model to be 100% accurate, however it is a good approximation.
Using exported model without further training:Using exported model without further training:
As our data set is not growing It is pointless to retrain model before every prediction. Instead we can re-use already trained model. This makes predictions faster and cost effective as they do not consume too much processing power. Downsides are that our model will not get any better over time.
When model is ready we can get rid of train function, and all other unnecessary code. (comment it out or just delete it) See example below:
This code is enough to predict models based on different values.Integrating Machine Learning model into ASP.NET CORE Web Application
:
I will cover this section briefly and will not go through into details as this blog is not directly related to ASP.NET Core. I just want to demonstrate how easy it is to integrate ML.NET into other .NET libraries.
let’s get started.
Open Visual Studio 2017, click on new solution > .Net Core > ASP.NET Core Web App .
Good thing about this library is that it allows coding back end of the web page in C# and easily integrates other .Net libraries such as ML.NET
Make sure you go to options as we did earlier and change C# Language Version from default to Version 7.1. Also add Classes Abalone.cs and ML.cs.
Copy code for Abalone class from previous project and paste it here, just make sure you edit the namespace after pasting code. For ML.cs code is similar to Program.cs from previous project, however we need to make few adjustments, code for ML.cs is provided below.
As you can see we got rid of Main function and we do not instantiate object of Class Abalone in ML class anymore, instead we pass it to function Run as parameter, from Index page. this is because we need to read values from Index Page, instantiate object of class Abalone there and display calculated value on the web page for user.
See HTML code of this page below.
Interesting part happens on back end of the page, when we get filled information from user,
We create new instance of Abalone, using inputs provided by user, then function Predict() is invoked which creates new instance of ML class which contains our machine learning model. Instance of Abalone is passed to function Run() and after prediction is made it gets displayed on index Page.
Purpose of the last part of the article was to demonstrate how easy it is to make use of ML.NET and implement it into your applications or websites.
Hopefully this blog was helpful to get the basics of machine learning with ML.NET, and If you built the app, please share it to your friends so they can find out age of the Abalones without killing them....... | https://techcommunity.microsoft.com/t5/educator-developer-blog/machine-learning-using-ml-net-and-its-integration-into-asp-net/ba-p/381329 | CC-MAIN-2021-49 | refinedweb | 1,935 | 63.19 |
The assignment operator (=) may be used on pointers of the same type. However, if the types of pointers (types of variables to which they point) are not same then we will have to do type casting of one of these to the type of the other to use assignment operator. However, the void pointer (void * ) can represent any pointer type. Thus, any type of pointer may be assigned to a void pointer. However, the reverse is not valid. A void pointer cannot be assigned to any other type of pointer without first converting the void pointer to that type.
However, while dereferencing a void pointer it has to be type cast because a void pointer is a pointer without a type. The compiler does not know the number of bytes occupied by the variable to which the void pointer is pointing to. Therefore, void pointers have to be cast to appropriate type before assigning. Program provides an illustration of void pointers.
#include <stdio.h>
void main()
{
int n = 30;
int* ptrn = &n;
float y = 10.7, *ptry= &y;
void *Pv ; //declaration of void pointers
Pv = ptrn; //assigning ptrn to pv
clrscr();
printf("*(int*)Pv = %d\n", *(int*)Pv); //dereferencing pv
Pv = ptry; //assigning ptry to pv
printf("*(float*)Pv= %f\n", *(float*)Pv);//dereferencing p | http://ecomputernotes.com/what-is-c/function-a-pointer/void-pointers | CC-MAIN-2019-39 | refinedweb | 215 | 71.14 |
These days you seem to hear a lot about building a web app in 20 minutes using framework X and language Y. The most exaggerated of these I read recently was (Re)writing Reddit in Lisp in 20 minutes and 100 lines. Here is a brief critique, without having watched the movie at all:
100 lines of code in 20 minutes is 300 lines an hour. Typically you would expect to be able to do 30-40 LOC/hour, so this is very good going. Here are a few explanations, all of which are possible, with varying degrees of plausibility (and I'm ignoring the possibility that the author was exaggerating in terms of time -- though he is clearly exaggerating with the use of the word 'rewrite', as other people have pointed out):
- The author is a super genius.
- He had half the code written in his head before he sat down to type.
- He knew the problem domain exceptionally well -- he wrote something similar in python (or lisp) the week before.
- He had a working installation of everything he needed, to which he could just add a few pages i.e. he had already done all the 'set up' phase.
- He was benefiting from the development honeymoon period (more below).
There are some other possibilities that probably don't apply to this case, but often contribute to the '20 minute web app' syndrome:
- Lots of copy and paste was involved
- You have a framework or piece of software that, out of the box, happens to provide a very large proportion of what you need.
The 'honeymoon' period of software development is the bit that comes after you have set your machine up etc, and you start coding from nothing. If you have nice frameworks or libraries, you can get fantastic levels of productivity at this point. There are various reasons, but I think you can summarise it by saying that you are doing zero 'maintainance' programming, and you have, at that point, a very small application that suffers from none of the problems of large applications.
For instance, achieving OAOO is very easy when you are writing the code for the first time. Even if you do produce duplication, it's very easy to keep track of, and you haven't yet suffered the pain of not obeying OAOO.
You also haven't had to worry about the finishing touches and tying up loose ends -- it makes no sense to do them yet, so you rightly ignore that for now -- but those finishing touches take time. Nor have you had to worry about deployment etc.
The problem is that none of the above will help the more normal programmer to maintain that kind of productivity on a larger application. The honeymoon period is soon over, and finding ways to remove duplication and write maintainable, flexible code with few defects becomes much harder. Even the techniques that may have made the initial development phase so easy and productive may turn out to be the bane of your life (such as copy-and-paste, and lots of the shortcuts and methodologies typically encouraged by PHP).
So, I'm going to present to you an account of an application that took a lot longer than 20 minutes to write. I intend to make it as balanced as possible.
What I've been doing for 10 months
Since September (September 2nd, to be exact, according to my Subversion logs), I've been working on a Django project in my spare time, and it's finally complete. I've kept a pretty complete log of my hours, split by activity, so I'm hoping it will be of some use to those trying to make realistic estimations of coding time.
The project is now live at. It is a website for a charity that runs outdoor Christian camps, which I've been involved in all my life (literally!). The website has all the details of the camps, camp sites, leaders etc, and a community of people who have been on the camps, based around a message board and photo gallery system. [Edit 2012: the site is now quite different in emphasis since this article in 2006, since the forums are rarely used now]
Until a few days ago, the website was running under PHP with a flatfile database. The new website is mainly a re-implementation of the existing functionality, but I have added quite a few things and tidied quite a lot up, and I didn't bother trying to salvage any of the existing PHP code by porting it -- I wrote everything from scratch.
Aims
My main aims were:
- to get rid of the ropey old PHP code (not to mention the flatfile database), and produce some clean, maintainable code.
- to make it easy to moderate the message boards. The main users of the website are 11 - 17 year olds, and the camps are strongly Christian in ethos and aims, so it's very important that the camp website is always a safe and fun place for the campers to interact.
- to make the website manageable by other people instead of just me, and hopefully write myself out of the picture.
- to add some fun new features.
- to generally increase usability.
- to get a reliable database, and a proper SQL one that would enable things like the 'stats roundups' I occasionally do.
- to do some test driven web-development.
Hours
I've spent a total of 240 hours on the project, or about 6 working weeks. That's quite a lot, and quite a bit more than you typically hear quoted for Django apps, but the project is probably fairly large compared to most of the ones you hear about:
It contains 22 database models, about 60 view functions (which vary massively in size -- a handful are straight generic views, some of the message board ones are moderately complex), 15 Atom feeds and a total of 56 template files. It also has 14 custom template tags and various other bits and pieces as you would expect in a project of that size.
Also, the figure given above totals all my activities, including:
- learning Django (and Python to some extent)
- data migration (a lot of it -- I was careful to ensure none of the old message boards were lost, and some of them go back 6 years to an even older system. I even managed to rewrite any embedded URLs in message board posts etc so that they are still correct)
- design mockups (I'm not much of a designer, but I'm the only person who is working on this), and then the actual designs in XHTML 1.0 strict, the highest quality stuff I've done so far in terms of semantic HTML and clean CSS.
- all the setup and deployment issues (setup was easy, but final deployment was harder because of another project I wrote that used the same database, and some of the same tables, but was deployed earlier, so I had to do a bit of a database merge).
- a fair amount of content editing
- oh, and writing the code (models, views and templates), testing and debugging, which accounted for about 75% of the time spent (perhaps a little less, it's not always easy to divide up the time correctly).
In terms of code there are about 2000 lines of template code, 6000 lines of Python, and 900 lines of migration scripts (done in Python). I know LOC aren't that accurate a measure of program size, but hopefully that's of some help.
Given those figures, it looks like I was reasonably productive -- averaging over the complete time that comes to about 37 LOC/hour, which is reasonable. I also have been careful to avoid cut-and-paste, which can be an easy way to get stuff done, (and add LOC), but also an easy way to leave an unmaintainable mess behind! The great design of Django, including things like template inheritance, and the power of the Python language makes it possible to really keep duplication to a minimum.
Some of the things that added to the development time were:
- trying to handle my existing data properly (which added a fair amount of special casing etc)
- changing Django APIs -- which meant sometimes I had to rewrite, and sometimes I avoided features that I knew were not stable, and went for something that might not have been optimal just to crack on.
- lack of ability in the design area
- deciding to change to Postgres part way through (though that was fairly trouble free)
- and probably a bit of perfectionism.
I should also point out that the framework is more mature than when I started! In fact, some of the things I coded have become part of Django -- the hours where I was consciously hacking on Django itself I was careful to log separately, but nevertheless some of the code I wrote for CCIW ended being generic and has made it's way into the core framework -- so that's code you won't have to write if you start with Django now.
On the other hand, there are some things which would increase a realistic estimate of the coding time. The main one is that for at least half of the code I was writing, especially the message board code, I had a very good idea of what I was doing, having implemented it once already, even if it was several years ago. It's very difficult to measure the effect of this -- although the python code I wrote bears very little resemblance superficially to the original PHP code, it is very likely that my subconscious knowledge of how it would work in general helped me a lot.
Analysis
I'm quite pleased with my results! I'm not sure if I can really give a less biased view. I normally find with programming that by the time I've finished a project, I'm already quite unhappy with the quality of the code, and I have a list of 'cleanup' TODOs, or even 'rewrite this large chunk of it' TODOs, which usually never get done. By the time a few years have passed, I'm downright ashamed. So far I don't feel this way about any of the code -- let's see how long that lasts!
The quality of the HTML is pleasing - the Django validator app I wrote (development time not included) made creating the entire site using only XHTML 1.0 strict really very easy -- a task that I used to think was quite a challenge. The only part that proved tricky was writing a bbcode parser that would accept anything the users can throw at it and always produce valid XHTML that matched what the user would expect to get.
In terms of visual design, I'm reasonably pleased -- though there are quite a few places that could do with a designer's eye. And as for end user experience, I can't really say yet. I've tried to slim down the interface and make the pages a bit simpler than they were before, but some new features, especially 'tagging', have added more things back in.
In terms of making the website easy for other people to manage, Django's admin has solved pretty much all of that. For the main models that other people will have to administer (which are details of the camps we run, camp sites they run at and people who run them), it's astonishing how well the automatic admin functionality caters for it -- it does a lot more than I would have managed to create if I were writing a custom admin interface manually.
I've also made the website self-maintaining as much as possible -- for instance, every year each camp that has just finished gets a new forum, and this now creates itself on first access. The website also has a concept of the 'next year', which depends on when camps finish etc, and the 'clock' for this ticks over automatically.
For moderation, everything now has a feed, so it is very easy to aggregate message board posts for the entire website, or for an individual user, and be aware of new topics etc. I discovered some nice patterns while doing the Atom feed work -- detailed below.
On a downside, I did very little test driven development. I came to the conclusion that Django's view functions are very difficult to test. They take complex objects and return complex objects, and their output is highly dependent on what is in the database, so you have to do a lot of set up first. The view functions themselves often do very little -- in fact some were just generic views, so testing them would just have been testing Django. Some do quite a lot however, but what exactly they do will depend on the validity of input and data in the database etc. I realised that unit tests are pretty inappropriate, but functional tests, using a tool like twill, would be perfect.
Unfortunately, after installing and playing around with twill, I never got around to writing tests with it, partly because of the pain of having to write setup code. I know that other Djangoers have done good work here, but with time constraints this was the first thing to go. I did, however, write tests the parts of the system which could be decoupled easily from the view functions -- in particular most of the bbcode parser was developed in a test driven manner, which worked very well.
Feeds
I didn't use Django's 'high level' feed framework, as it didn't fit very well, but the lower level one was just perfect for what I wanted. Feeds are all available at URLs with '?format=atom' appended to the normal page. To handle this, I've got mini-framework involving a handle_feed_request() function and base CCIWFeed class that inherits from feeds.Feed.
With these in place (25 lines of code), my class for generating the feed for new message board posts, for example, looks like this:
class PostFeed(CCIWFeed): template_name = 'posts' title = "CCIW message boards posts" ## This is called by CCIWFeed.items() def modify_query(self, query_set): return query_set.order_by('-posted_at')[:POST_FEED_MAX_ITEMS] def item_author_name(self, post): return post.posted_by_id def item_author_link(self, post): return add_domain(get_member_href(post.posted_by_id)) def item_pubdate(self, post): return post.posted_at
(Plus there are two templates to support this).
However, you can also get a PostFeed for a specific member -- i.e. all posts that were created by that member. The only thing that needs to change is the title, so the implementation is just the following:
def member_post_feed(member): """Returns a Feed class suitable for the posts of a specific member.""" class MemberPostFeed(PostFeed): title = "CCIW - Posts by %s" % member.user_name return MemberPostFeed
The view code has to call member_post_feed() with a specific member, and passes the generated class to the feed handling code. It doesn't require a special view -- it just requires two lines in the existing HTML view for a specific member's posts.
This pattern is repeated quite a number of times, and I think it is wonderfully elegant -- it's so easy to see what it is supposed to do, and using a class in the same way you use closures is so expressive.
Conclusion
I've enjoyed this project, and I'm very pleased with the result, but I am also happy to get it over with, as it has been dragging on in my spare time since September. The launch of the new website has been a bit of a non-event. The amount of traffic on the site varies enormously -- after the camps in the summer when everyone has just met up again, there is a massive surge in activity -- last August and September the very small active user base managed to create up to 280 posts a day. But right now, there is almost no activity and it's been like that for months. I have to tell myself that it hasn't been a wasted effort :-).
If anyone wants to play with it, you can use the forum at and log in using user name and password 'guest'. I'll probably clear out everything that 'guest' creates every week or so, so create whatever you like.
Finally, if anyone is interested in any of the code, I'd be happy to make it available. Most of it is probably quite specific to CCIW, and not very re-usable, but I made the tagging functionality very generic (I posted to django-devs about this already), and there may be other bits people would want to glean.
Postscript
For the language partisans: my reference to the Lisp article wasn't meant to imply that using Lisp for web development would't scale or would be worse than Django -- I have no idea, as I have never used Lisp. (By contrast, my references to PHP did come from experience and were meant to imply that Django is much better than PHP :-). My main point was that a small project really doesn't give you much idea about a bigger project -- the techniques used for a prototype or small project may or may not scale. I do know that Django was written by experienced developers who knew the pitfalls of web development, and wrote the framework specifically to avoid them, and to make the common things very easy, and I think they have succeeded admirably. | https://lukeplant.me.uk/blog/posts/a-django-website-that-took-a-lot-more-than-20-minutes/ | CC-MAIN-2017-13 | refinedweb | 2,931 | 63.12 |
Search results in MarkLogic Server are returned in relevance order; that is, the result that is most relevant to the cts:query expression in the search is the first item in the search return sequence, and the least relevant is the last. There are several tools available to control the relevance score associated with a search result item. This chapter describes the different methods available for calculating relevance.
When you perform a cts:search operation, MarkLogic Server produces a result set that includes items matching the cts:query expression and, for each matching item, a score. The score is a number calculated from statistical information, including the number of documents in the database, the frequency with which the search terms appear in the database, and the frequency with which the search terms appear in the document. The relevance of a returned search item is determined by comparing its score with the other scores in the result set; items with higher scores are deemed more relevant to the search. By default, search results are returned in relevance order, so changing the scores can change the order in which search results are returned.
As part of a cts:search expression, you can specify one of several methods for calculating the score, each of which uses a different formula in its score calculation. The available methods are described in the sections that follow.
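For example, a minimal sketch along the following lines selects a scoring method for a single search and reads each result's score with cts:score (the collection name and search term are illustrative, not part of any real application):

xquery version "1.0-ml";
(: Run a word query with an explicit scoring method and report
   each matching document's URI and score. :)
for $result in cts:search(fn:collection("articles"),
                          cts:word-query("dog"),
                          "score-logtf")[1 to 10]
return <hit uri="{xdmp:node-uri($result)}"
            score="{cts:score($result)}"/>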
You can use the relevance-trace option with cts:relevance-info to explore score calculations in detail. For details, see Exploring Relevance Score Computation.
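A sketch of that pattern, assuming the relevance-trace option is passed to cts:search alongside the scoring method (the query term is illustrative):

xquery version "1.0-ml";
(: Keep the per-result score breakdown and return it as XML
   for the first few matches. :)
let $results := cts:search(fn:doc(),
                           cts:word-query("dog"),
                           ("score-logtfidf", "relevance-trace"))
for $r in $results[1 to 3]
return cts:relevance-info($r)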
The logtfidf method of relevance calculation is the default, and is specified with the score-logtfidf option of cts:search. The logtfidf method takes into account both term frequency (how often a term occurs in a single fragment) and document frequency (how many documents contain the term) when calculating the score. Most search engines use a relevance formula derived from some computation involving term frequency and document frequency.
The logtfidf method (the default scoring method) uses the following formula to calculate relevance:
log(term frequency) * (inverse document frequency).
The
inverse document frequency is defined as:
log(1/df)
where
df (document frequency) is the number of documents in which the term occurs.
For most search-engine style relevance calculations, the
score-logtfidf method provides the most meaningful relevance scores. Inverse document frequency (IDF) provides a measurement of how 'information rich' a document is. For example, a search for 'the' or 'dog' would probably put more emphasis on the occurences of the term 'dog' than of the term 'the'.
The option
score-logtf for cts:search computes scores using the
logtf method, which does not take into account how many documents have the term. The
logtf method uses the following formula to calculate scores:
log(term frequency)
where.
When you use the
logtf method, scores are based entirely on how many times a document matches the search term, and does not take into account the 'information richness' of the search terms.
The option
score-simple on cts:search performs a simple term-match calculation to compute the scores. The
score-simple method gives a score of 8*weight for each matching term in the
cts:query expression, and then scales the score up by multiplying by 256. It does not matter how many times a given term matches (that is, the term frequency does not matter); each match contributes 8*weight to the score. For example, the following query (assume the default weight of 1) would give a score of 8*256=2048 for any fragment with one or more matches for 'hello', a score of 16*256=4096 for any fragment that also has one or more matches for 'goodbye', or a score of zero for fragments that have no matches for either term:
cts:or-query(("hello", "goodbye"))
Use this option if you want the scores to only reflect whether a document matches terms in the query, and you do not want the score to be relative to frequency or 'information-richness' of the term.
The option
score-random on cts:search computes a randomly-generated score for each search match. You can use this to randomly choose fragments matching a query. If you perform the same search multiple times using the
score-random option, you will get different ordering each time (because the scores are randomly generated at runtime for each search).
The scoring methods that take into account term frequency (
score-logtfidf and
score-logtf) will, by default, normalize the term frequency (how many search term matches there are for a document) based on the size of the document. The idea of this normalization is to take into account how frequent a term occurs in the document, relative to the other documents in the database. You can think of this is the density of terms in a document, as opposed to simply the frequency of the terms. The term frequency normalization makes a document that has, for example, 10 occurrences of the word
"dog" in a 10,000,000 word document have a lower relevance than a document that has 10 occurrences of the word
"dog" in a 100 words document. With the default term frequency normalization of
scaled-log, the smaller document would have a higher score (and therefore be more relevant to the search), because it has a greater 'term density' of the word
"dog". For most search applications, this behavior is desirable.
If you would like to change that behavior, you can set the
tf normalization option on the database configuration to lessen or eliminate the effects of the size of the matching document in the score calculation, which in turn would strengthen the effect of its term frequency (the number of matches in that document). The
unscaled-log option does no scaling based on document size, and the
scaled-log option (the default) does the maximum scaling of the document based on document size. Additionally, there are four intermediate settings,
weakest-scaled-log,
weakly-scaled-log,
moderately-scaled-log, and
strongly-scaled-log, which have increasing degrees of scaling in between none and the most scaling. If you change this setting in the database and
reindexer enable is set to
true, then the database will begin reindexing.
Scores are calculated based on index data, and therefore based on unfiltered searches. That has several implications to scores:
Because scores are based on fragments and unfiltered searches, index options will affect scores, and in some case will make the scores more 'accurate'; that is, base the scores on searches that return fewer false-positive results. For example, if you have
word positions enabled in the database configuration, searches for three or more term phrases will have fewer false-positive matches, thereby improving the accuracy of the scores.
For details on unfiltered searches and how you can tell if there are false-positive matches, see 'Using Unfiltered Searches for Fast Pagination' in the Query Performance and Tuning Guide.
Use a weight in a query sub-expression to either boost or lower the sub-expression contribution to the relevance score.
For example, you can specify weights for leaf-level
cts:query constructors, such as cts:word-query and cts:element-value-query; for details, see XQuery and XSLT Reference Guide. You can also specify weights in the equivalent Search API abstractions, such as the structured query constructs
value-query and
word-constraint-query, or when defining a word or value constraint in query options.
The default weight is 1.0. Use the following guidelines for choosing custom weights:
Scores are normalized, so a weight is not an absolute multiplier on the score. Instead, weights indicate how much terms from a given query sub-expression are weighted in comparison to other sub-expressions in the same expression. A weight of 2.0 doubles the contribution to the score for terms that match that query. Similarly, a weight of 0.5 halves the contribution to the score for terms that match that query. In some cases, the score reaches a maximum, so a weight of 2.0 and a weight of 20,000 can yield the same contribution to the score.
Adding weights is particularly useful if you have several components in a query expression, and you want matches for some parts of the expression to be weighted more heavily than other parts. For an example of this, see Increase the Score for some Terms, Decrease for Others.
If you have the
word positions indexing option enabled in your database, you can use the
distance-weight option to the leaf-level
cts:query constructors, and then all of the terms passed into that
cts:query constructors will consider the proximity of the terms to each other for the purposes of scoring. This proximity boosting will make documents with matches close together have higher scores. Because search results are sorted by score, it will have the effect of making documents having the search terms close together have higher relevance ranking. This section provides some examples that use the
distance-weight option along with explanations of the examples, and includes the following parts:
The distance weight is only applied to the matches for
cts:query constructors in which the
distance-weight occurs. For example, consider the following
cts:query constructor:
cts:word-query(("cat", "dog")), "distance-weight=3")
If one document has an instance of
"cat" very near
"dog", and another document has the same number of
"cat" and
"dog" terms, but they are not very near, then the one with the
"cat" near
"dog" will have a higher score.
For example, consider the following:
xquery version "1.0-ml"; (: make sure word positions are enabled in the database :) (: create 3 documents, then run two searches, one with distance-weight and one without, printing out the scores :) xdmp:document-insert("/2.xml", <p>The cat is pretty near a dog.</p>) ; xdmp:document-insert("/1.xml", <p>The cat dog is very near.</p>) ; xdmp:document-insert("/3.xml", <p>The cat is not very near the very large dog.</p>) ; for $x in (cts:search(fn:doc(), cts:word-query(("cat", "dog") , "distance-weight=3" ) ), cts:search(fn:doc(), cts:word-query(("cat", "dog") ) ) ) return element hit{attribute uri {xdmp:node-uri($x)}, attribute score {cts:score($x)}, attribute text{fn:string($x/p)}}
This returns the following results:
<hit uri="/1.xml" score="146" text="The cat dog is very near."/> <hit uri="/2.xml" score="140" text="The cat is pretty near a dog."/> <hit uri="/3.xml" score="135" text="The cat is not very near the very large dog."/> <hit uri="/3.xml" score="72" text="The cat is not very near the very large dog."/> <hit uri="/2.xml" score="72" text="The cat is pretty near a dog."/> <hit uri="/1.xml" score="72" text="The cat dog is very near."/>
Notice that the first three hits use the
distance-weight, and the ones with the terms closer together have higher scores, and thus rank higher in the search. The last three hits have the same score because they all have the same number of each term in the
cts:query and there is no proximity taken into account in the scores.
Because the
distance-weight option applies to the terms in individual
cts:query constructors, the terms are combined as an or-query (that is, any term match is a match for the query). Therefore, the example above would also return results for documents that contain
"cat" and not
"dog" and vice versa. If you want to have and-query semantics (that is, all terms must match for the query to match) and also have proximity boosting, you will have to construct a
cts:query that does an and of all of the terms in addition to the
cts:query with the
distance-weight option.
xquery version "1.0-ml"; cts:search(fn:doc(), cts:and-query(( cts:word-query("cat"), cts:word-query("dog"), cts:word-query(("cat", "dog") , "distance-weight=3" ) )) )
The difference between this query and the previous one is that the previous one would return a document that contained
"cat" but not
"dog" (or vice versa), and this one will only return documents containing both
"cat" and
"dog".
If you have a large corpus of documents and you expect to have many matches for your searches, then you might find you do not need to use the cts:and-query approach. The reason a large corpus has an effect is because document frequency is taken into account in the relevance calculation, as described in Understanding How Scores and Relevance are Calculated. You might find that the most relevant documents still float to the top of your search even without the cts:and-query. What you do will depend on your application requirements, your preferences, and your data.
Another technique that makes results with closer proximity have higher scores is to use cts:near-query. Searches that use the cts:near-query constructor will take proximity into account when calculating scores, as long as the
word positions index option is enabled in the database. Additionally, you can use the
distance-weight parameter to further boost the effect of proximity on scoring.
Because cts:near-query takes a
distance argument, you have to think about how near you want results to be in order for them to match. With the
distance parameter to
cts:near-query, there is a tradeoff between the size of the
distance and performance. The higher the number for the
distance, the more work MarkLogic Server does to resolve the query. For many queries, this amount of work might be very small, but for some complex queries it can be noticeable.
To construct a query that uses cts:near-query for proximity boosting, pass the
cts:query for your search as the first parameter to a cts:near-query, and optionally add a
distance-weight parameter to further boost the proximity. The cts:near-query matches will always take distance into account, but setting a
distance-weight will further boost the proximity weight. For example, consider how the following query, which uses the same data as the above examples, produces similar results:
xquery version "1.0-ml"; cts:search(fn:doc(), cts:near-query( cts:and-query(( cts:word-query("cat"), cts:word-query("dog") )), 1000, (), 3) )
This query uses a
distance of 1,000, therefore documents that have
"cat" and
"dog" that are more than 1,000 words apart are not included in its result. The size you use is dependent on your data and the performance characteristics of your searches. If you were more concerned about missing document where the matches are more than 1,000 words away, then you should raise that number; if you are seeing performance issues and want faster performance, and you are OK with missing results that are above the distance threshold (which are probably not relevant anyway), then you should make the number smaller. For databases with a large amount of documents, keep in mind that not returning the documents with words that are far apart from each other will probably result in very similar search results, especially for the most relevant hits (because the results with the matches far apart have low relevance scores compared to the ones that have matches close together).
You can use cts:boost-query to modify the relevance score of search results that match a secondary (or 'boosting') query. The following example returns results from all documents containing the term "dog", and assigns a higher score to results that also contain the term "cat". The relevance score of matches for the first query are boosted by matches for the second query.
cts:search(fn:doc(), cts:boost-query( cts:word-query("dog"), cts:word-query("cat")) )
As discussed in Understanding How Scores and Relevance are Calculated, many factors affect relevance score, so the exact quantitative effect of a boosting query on relevance score varies. However, the effect is always proportional to the weighting of the boosting query.
For example, suppose the database includes two documents,
/example/dogs.xml and
/example/llamas.xml that have the following contents:
/example/dogs.xml: <data>This is my dog. I do not have a cat.</data> /example/llamas.xml: <data>This is my llama. He likes to spit at dogs.</data>
Then an unboosted search for the word "dog" returns the following matches:
cts:search(fn:doc(), cts:word-query("dog")) <data>This is my dog. I do not have a cat.</data> <data>This is my llama. He likes to spit at dogs.</data>
Assume these matches have the same relevance score. If you repeat the search as a boost query with default weight, the first match has a score that is roughly double that of the 2nd match. (The actual score values do not matter, only their relative values.)
for $n in (cts:search(fn:doc(), cts:boost-query( cts:word-query("dog"), cts:word-query("cat")))) return fn:concat(fn:document-uri($n), " : ", cts:score($n)) ==> /example/dogs.xml : 22528 /example/llamas.xml : 11264
If you increase the weight on the boosting query to 10.0, the relevance score of the document containing both terms becomes roughly 10x that of the document that only contains
"dog".
for $n in (cts:search(fn:doc(), cts:boost-query( cts:word-query("dog"), cts:word-query("cat", (), 10.0)))) return cts:score($n) ==> /example/dogs.xml : 22528 /example/llamas.xml : 2048
If the primary (or 'matching') query returns no results, the boosting query is not evaluated. A boosting query is ignored in an XPath expression or any other context in which the score is zero or randomized.
The
BOOST string query operator allows equivalent boosting in string search; for details, see Query Components and Operators. The
boost-query structured query component also exposes the same functionality as cts:boost-query; for details, see boost-query.
By default, range queries do not influence relevance score. However, you can enable range and geospatial queries score contribution using the
score-function and
slope-factor options. This section covers the following topics:
By default, a range query makes no contribution to score. If you enable scoring for a given range query, it has the same impact as a word query. The contribution from a range query is just one of many factors influencing the overall score, especially in a complex query. As with any query, you can use weights to change the influence a range query has on score; for details, see Using Weights to Influence Scores.
The difference between a matching value and the reference value does not contribute directly to the score. A function is applied to the delta, with suitable scaling based on datatype, such that the resulting range is comparable to the term frequency (TF) contribution from a word query. You control the scaling using the slope factor of the function; for details, see Understanding Slope Factor.
The type of function (linear or reciprocal) determines whether values closest to or furthest from the reference value contribute more to the score. The reference value is the constraining value in the query. For example, if a range query expresses a constraint such as '> 5', then the reference value is 5. You cannot choose the function, but you can choose the type of function.
If a document contains multiple matching values, the highest contribution is used in the overall score computation.
Range query score contributions are useful in cases such as the following:
For examples of how to realize these use cases, see Range Query Scoring Examples.
Add the
score-function option to a range or geospatial query constructor to enable score contributions. You can also use the
slope-factor option to scale the contribution; for details, see Understanding Slope Factor.
For example, the following search boosts the score more for documents with high ratings (furthest from the reference value 0). Setting the slope factor to 10 decreases the range of values that make a distinct contribution and increases the difference between the amount of contribution.
(: Scoring for positive ratings in range 1 to 100 :) cts:search(doc(), cts:element-range-query(xs:QName("ratings"), ">", 0, ("score-function=linear","slope-factor=10")))
For examples of constructing a similar query with other MarkLogic Server APIs, see Range Query Scoring Examples.
You can set the value of
score-function to one of the following function types:
You can specify a score function and slope factor with the following XQuery query constructors, or the equivalent structured or QBE range query constructs.
In addition to specifying a score function for a range query, you can use the
slope-factor option to specify a multiplier on the slope of the scoring function applied to a range query. The slope factor affects how the range of differences between a matching value and the reference value affect the score contribution. You should experiment with your application to determine the best slope factor for a given range query. This section provides details to guide your experimentation.
The delta for a given range query match is the difference between the matching value and the reference value in a range query:
delta = reference_value - matching_value
For example, if a range query expresses 'greater than 5' and the matching value is 3, then the delta is 2. This delta is the basis of the score contribution for a given match, though it is not the actual score contribution.
Each possible delta value does not make a different score contribution because contribution is bucketed. The range of delta values is bounded by a min and max delta value, beyond which all deltas make the same contribution. The granularity represents the size of each bucket within that range. All deltas that fall in the same bucket make the same score contribution, so granularity determines the range of deltas that make a distinct score contribution.
The number of buckets does not change as you vary the slope factor, so changing the slope factor affects the min, max, and granularity of the score function.
The figure below shows the relationship between slope, minimum delta, maximum delta, and granularity for a linear score function.
A slope factor greater than 1 results in finer granularity, but a more narrow range of delta values. A slope factor less than 1 gives a coarser granularity, but a greater range of delta values. Doubling the slope factor with a linear function gives you half the range and half the granularity.
The minimum delta, maximum delta, and granularity for a given slope factor depend upon the type of the range index. The table below shows minimum delta, maximum delta, and granularity for each range index type with the default slope factor (1.0). The granularity is not linear for a reciprocal score function.
For example, the table contains the following information about range queries over
dateTime with the default slope factor:
Min delta: 1 minute Max delta: 30 days Granularity: ~2.6 hours
From this, you can deduce the following for a slope factor of 1.0:
In a
dateTime range query where the deltas are on the order of hours, the default slope factor provides a good spread of contributions. However, if you need to distinguish between deltas of a few minutes or seconds, you would increase the slope factor to provide a finer granularity. When you do this, the minimum and maximum delta values get closer together, so the overall range of distinguishable delta values becomes smaller.
Another way to look at slope factor is based on the target minimum or maximum delta. For example, if the default maximum delta for your datatype is 1024 and the range of 'interesting' delta values for your range query is only 1 to 100, you probably want to set slope-factor to 10, which lowers the maximum delta to 100 (
1024 div 10).
The performance impact of enabling range query score contributions depends on the nature of your query. The cost is highest for queries that return many matches and queries on strings.
The number of matches affects cost because the scoring calculation is performed for each match. The value type affects the cost because the score calculation is significantly more complex for string values.
Range query score contribution calculations are skipped (and therefore have no negative performance impact) if any of the following conditions apply:
score-functionoption is not set or is set to
zero.
score-logtfidfor
score-logtf.
This section contains examples that illustrate the use cases outlined in Use Cases for Range Query Score Contributions, plus examples of how to use the feature with additional APIs, such as structured query and QBE.
The following examples are included:
Boost the score of newer documents over similar older documents, where 'newness' is a function of
dateTime or another numeric element value. The following example boosts the score of recently published documents, where the publication date is stored in a
pubdate element:
cts:element-range-query( xs:QName("pubdate"), "<=", current-dateTime(), "score-function=reciprocal")
The example uses a reciprocal score function so that
pubdate values closest to 'now' contribute the most to the score. That is, the smallest deltas make the biggest contribution.
Boost the score based on how close some element value is to a reference value. The following example boost scores for documents containing prices closest to an ideal of $20, assuming the
price is an attribute of the
item element:
cts:element-attribute-range-query( xs:QName("item"), xs:QName("price"), ">=", 20.0, "score-function=reciprocal")
The example uses a reciprocal score function so that the smallest deltas between actual and ideal price ($20) make the highest contribution.
Boost the score based on how far away some element value is from a reference value. For example, boost scores for items with a price furthest below a maximum of $20:
cts:element-attribute-range-query( xs:QName("item"), xs:QName("price"), "<=", xs:decimal(20.0), ("score-function=linear","slope-factor=51.2"))
The example uses a linear function so that the largest deltas between the actual price and the maxiumum price ($20) make the highest contribution.
The slope factor is increased to bring the range of interesting delta values down. As shown in Understanding Slope Factor, the default maximum delta for
xs:decimal is 1024.0. However, in this example, the interesting deltas are all in the range of 0 to 20.0. To bring the upper bound down to ~20.0, we calculate the slope factor as follows:
slope-factor = 1024.0 / 20.0 = 51.2
Increasing the slope factor also reduces the granularity, so smaller price differences make different score contributions. With the default slope factor, the granularity is ~3.98, which is very coarse for a delta range of 0-20.0.
Boost the score based on geospatial distance. For example, find all hotels within 10 miles, boosting the scores for those closest to my current location:
cts:and-query(("hotel", cts:element-geospatial-query( xs:QName("pt"), cts:circle(10, $current-location), ("score-function=reciprocal", "slope-factor=10.0))))
The example uses a reciprocal score function so that points closest to the reference location (the smallest deltas) make the greatest score contribution.
The slope factor is increased because the range of interesting delta values is only 0 to 10 ('within 10 miles'). As shown in Understanding Slope Factor, the default maximum delta for a point is 100.0 miles. To bring the maximum delta down to 10.0, slope factor is computed as follows:
slope-factor = 100.0 / 10.0 = 10.0
The following example is a structured query containing a range query for ratings greater than zero, boosting the score more as the rating increases. Documents with a higher rating receive a higher range query score contribution.
For details, see Searching Using Structured Queries and the following interfaces:
The following example is a QBE that contains a range query for ratings greater than zero, boosting the score more as the rating increases. Documents with a higher rating receive a higher range query score contribution.
This query is suitable for use with the REST API
/qbe service or the Java API
RawQueryByExampleDefinition interface.
For details, see Searching Using Query By Example and the following interfaces:
Each document contains a quality value, and is set either at load time or with xdmp:document-set-quality. You can use the optional
$QualityWeight parameter to cts:search to force document quality to have an impact on scores. The scores are then determined by the following formula:
Score = Score + (QualityWeight * Quality)
The default of
QualityWeight is 1.0 and the default quality on a document is 0, so by default, documents without any quality set have no quality impact on score. Documents that do have quality set, however, will have impact on the scores by default (because the default
QualityWeight is 1, effectively boosting the score by the document quality).
If you want quality to have a smaller impact on the score, set the
QualityWeight between 0 and 1.0. If you want the quality to have no impact on the score, set the
QualityWeight to 0. If you want the quality to have a larger impact on raising the score, set the
QualityWeight to a number greater than 1.0. If you want the quality to have a negative effect on scores, set the
QualityWeight to a negative number or set document quality to a negative number.
If you set document quality to a negative number and if you set
QualityWeight to a negative number, it will boost the score with a positive number.
You can get the score for a result node by calling cts:score on that node. The score is a number, where higher numbers indicate higher relevance for that particular result set.
Similarly, you can get the confidence by calling cts:confidence on a result node. The confidence is a number (of type
xs:float) between 0.0 and 1.0. The confidence number does not include any quality settings that might be on the document. Confidence scores are calculated by first bounding the scores between 0 and 1.0, and then taking the square root of the bounded number.
As an alternate to cts:confidence, you can get the fitness by calling cts:fitness on a result node. The fitness is a number (of type
xs:float) between 0.0 and 1.0. The fitness number does not include any quality settings that might be on the document, and it does not use document frequency in the calculation. Therefore, cts:fitness returns a number indicating how well the returned node satisfies the query issued, which is subtly different from relevance, because it does not take into account other documents in the database.
When understanding the order an expression returns in, there are two main rules to consider:
cts:searchexpressions always return in relevance order (the most relevant to the least relevant).
A subtlety to note about these rules is that if a cts:search expression is followed by some XPath steps, it turns the expression into an XPath expression and the results are therefore returned in document order. For example, consider the following query:
cts:search(fn:doc(), "my search phrase")
This returns a relevance-ordered sequence of document nodes that contain the specified phrase. You can get the scores of each node by using cts:score. Things will change if you then add an XPath step to the expression as follows:
cts:search(fn:doc(), "my search phrase")//TITLE
This will now return a document-ordered sequence of
TITLE elements. Also, in order to compute the answer to this query, MarkLogic Server must first perform the search, and then reorder the search in document order to resolve the XPath expression. If you need to perform this type of query, it is usually more efficient (and often much more efficient) to use cts:contains in an XPath predicate as follows:
fn:doc()[cts:contains(., "my search phrase")]//TITLE
In most cases, this form of the query (all XPath expression) will be much more efficient than the previous form (with the XPath step after the cts:search expression). There might be some cases, however, where it might be less efficient, especially if the query is highly selective (does not match many fragments).
When you write queries as XPath expressions, MarkLogic Server does not compute scores, so if you need scores, you will need to use a cts:search expression. Also, if you need a query like the above examples but need the results in relevance order, then you can put the search in a
FLWOR expression as follows:
for $x in cts:search(fn:doc(), "my search phrase") return $x//TITLE
This is more efficient than the cts:search with an XPath step following it, and returns relevance-ranked and scored results.
You can use the
relevance-trace search option to explore how the relevance scores are computed for a query. For example, you can use this feature to explore the impact of varying query weight and document quality weight.
Collecting score computation information during a search is costly, so you should only use the
relevance-trace option when you intend to generate a score computation report from the collected trace.
When you use the
relevance-trace option on a search, MarkLogic Server collects detailed information about how the relevance score is computed. You can access the information in one of the following ways:
The following example generates a score computation report from the results of cts:search.
for $x in cts:search(fn:doc(), "example", "relevance-trace") return cts:relevance-info($x)
The resulting score computation report looks similar to the following:
<qry:relevance-info xmlns: <qry:score53248</qry:score> <qry:confidence0.462837</qry:confidence> <qry:fitness0.679113</qry:fitness> <qry:uri>/example.xml</qry:uri> <qry:path>fn:doc("/example.xml")</qry:path> <qry:term <qry:score208</qry:score> <qry:key>16979648098685758574</qry:key> <qry:annotation>word("example")</qry:annotation> </qry:term> </qry:relevance-info>
Each
qry:score element contains a
@formula describing the computation, and a
@computation showing the values plugged into the formula. The data in the
score element is the result of the computation. For example:
<qry:score 19712 </qry:score>
The following example generates a score computation report using the XQuery Search API:
xquery version "1.0-ml"; import module namespace <search-option>relevance-trace</search-option> </search:options> )
The query generates results similar to the following:
<search:response <search:snippet>...</search:snippet> <
qry:relevance-infoxmlns: <qry:score14336</qry:score> <qry:confidence0.749031</qry:confidence> <qry:fitness 0.749031 </qry:fitness> <qry:uri>/example.xml</qry:uri> <qry:path>fn:doc("/example.xml")</qry:path> <qry:term <qry:score56</qry:score> <qry:key>16979648098685758574</qry:key> <qry:annotation>word("example")</qry:annotation> </qry:term> </qry:relevance-info> </search:result> <search:qtext>example</search:qtext> ... </search:response>
The REST and Java APIs use the same query options as the above Search API example, and return a report in the same way, inside each
search:result.
This section lists several cts:search expressions that include weight and/or quality parameters. It includes the following examples:
The following search will make any documents that have a quality set (set either at load time or with xdmp:document-set-quality) give much higher scores than documents with no quality set.
cts:search(fn:doc(), cts:word-query("my phrase"), (), 3.0)
For any documents that have a quality set to a negative number less than -1.0, this search will have the effect of lowering the score drastically for matches on those documents.
The following search will boost the scores for documents that satisfy one query while decreasing the scores for documents that satisfy another query.
cts:search(fn:doc(), cts:and-query(( cts:word-query("alfa", (), 2.0), cts:word-query("lada", (), 0.5) )) )
This search will boost the scores for documents that contain the word
alfa while lowering the scores for document that contain the word
lada. For documents that contain both terms, the component of the score from the word
alfa is boosted while the component of the score from the word
lada is lowered. | http://docs.marklogic.com/guide/search-dev/relevance | CC-MAIN-2017-47 | refinedweb | 6,045 | 50.67 |
By default, I/O streams do not raise exceptions for errors. Instead, each stream keeps a mask of error bits called the I/O state. The state mask keeps track of formatting failures, end-of-file conditions, and miscellaneous error conditions. The ios_base class template defines several member functions for testing and modifying the state flags (rdstate, setstate, fail, etc.).
A common idiom is to read from an input stream until an input operation fails. Because this idiom is so common, the standard library makes it easy. Instead of calling rdstate and testing the state explicitly, you can simply treat the stream object as a Boolean value: true means the state is good, and false means the state has an error condition. Most I/O functions return the stream object, which makes the test even easier:
while (cin.get(c)) cout.put(c);
The basic_ios class overloads operator void* to return a non-null pointer if the state is good or a null pointer for any error condition. Similarly, it overloads operator! to return true for any error condition. (As explained later in this section, an end-of-file is not an error condition.) This latter test is often used in conditional statements:
if (! cout) throw("write error");
The state mask has three different error bits:
An unrecoverable error occurred. For example, an exception was thrown from a formatting facet, an I/O system call failed unexpectedly, and so on.
An end-of-file upon input.
An I/O operation failed to produce any input or output. For example, when reading an integer, if the next input character is a letter, no characters can be read from the stream, which results in an input failure.
The basic_ios conditional operators define "failure" as when badbit or failbit is set, but not when eofbit is set. To understand why, consider the following canonical input pattern. During a normal program run, the input stream's state is initially zero. After reading the last item from the input stream, eofbit is set in the state. At this time, the state does not indicate "failure," so the program continues by processing the last input item. The next time it tries to read from the input stream, no characters are read (because eofbit is set), which causes the input to fail, so the stream sets failbit. Now a test of the input stream returns false, indicating failure, which exits the input loop.
Sometimes, instead of testing for failure after each I/O operation, you may want to simplify your code. You can assume that every operation succeeds and arrange for the stream to throw an exception for any failure. In addition to the state mask, every stream has an exception mask, in which the bits in the exception mask correspond to the bits in the state mask. When the state mask changes, if any bit is set in both masks, the stream throws an ios_base::failure exception.
For example, suppose you set the exception mask to failbit | badbit. Using the canonical input pattern, after reading the last item from the input stream, eofbit is set in the state. At this time, rdstate( ) & exceptions( ) is still 0, so the program continues. The next time the program tries to read from the input stream, no characters are read, which causes the input to fail, and the stream sets failbit. Now rdstate( ) & exceptions( ) returns a nonzero value, so the stream throws ios_base::failure.
A stream often relies on other objects (especially locale facets) to parse input or format output. If one of these other objects throws an exception, the stream catches the exception and sets badbit. If badbit is set in the exceptions( ) mask, the original exception is rethrown.
When testing for I/O success, be sure to test for badbit as a special indicator of a serious failure. A simple test for ! cin does not distinguish between different reasons for failure: eofbit | failbit might signal a normal end-of-file, but failbit | badbit might tell you that there is something seriously wrong with the input stream (e.g., a disk error). One possibility, therefore, is to set badbit in the exceptions( ) mask so normal control flow deals with the normal situation of reading an end-of-file. However, more serious errors result in exceptions, as shown in Example 9-10.
#include <algorithm> #include <cstddef> #include <exception> #include <iostream> #include <map> #include <string> void print(const std::pair<std::string, std::size_t>& count) { std::cout << count.first << '\t' << count.second << '\n'; } int main( ) { using namespace std; try { string word; map<string, size_t> counts; cin.exceptions(ios_base::badbit); cout.exceptions(ios_base::badbit); while (cin >> word) ++counts[word]; for_each(counts.begin( ), counts.end( ), print); } catch(ios_base::failure& ex) { std::cerr << "I/O error: " << ex.what( ) << '\n'; return 1; } catch(exception& ex) { std::cerr << "Fatal error: " << ex.what( ) << '\n'; return 2; } catch(...) { std::cerr << "Total disaster.\n"; return 3; } } | http://etutorials.org/Programming/Programming+Cpp/Chapter+9.+Input+and+Output/9.6+Errors+and+Exceptions/ | CC-MAIN-2017-22 | refinedweb | 817 | 65.12 |
That's not really the case of using promisses - because the result is a result of many service calls - in rder to fill an array of car.Make
<td ng- <img ng-</span> </td> IsValid = (car: Car): boolean => { return (car.Title != null && car.Title != '' && car.Condition != null && car.StartDate < car.EndDate); } GetStatus = (car: Car): string => { if (!this.IsValid(car)) { return "Invalid"; } if (car.Make == null) { return ''; } for (var i = car.Make.length - 1; i >= 0; i--) { if (car.Make[i].ColourCode == 'G') { return car.Make[i].Name; } } return ''; }
car.Make[i] is being calculated on another method and is showing the result of service call. That's why I have car.Make == null this should be true if the call hasn't happened.
When I have more calls to GetStatus() function some of them returns '' as result always even after some time when the whole array Make is being calculated. | http://www.howtobuildsoftware.com/index.php/how-do/0NX/angularjs-typescript-angular-promise-calculate-function-after-function-execution-finished-data-result-exists-not-return-always-correct-result | CC-MAIN-2017-04 | refinedweb | 150 | 69.28 |
!
M. Gumblert
Ranch Hand
34
11
Threads
0
Cows
since Sep 15, 2018
Cows and Likes
Cows
Total received
0
In last 30 days
0
Total given
0
Likes
Total received
0
Received in last 30 days
0
Total given
0
Given in last 30 days
0
Forums and Threads
Scavenger Hunt
Ranch Hand Scavenger Hunt
Number Posts (34/100)
Number Threads Started (11/100)
Number Cows Received (0/5)
Number Likes Received (0/10)
Number Likes Granted (0/20)
Set bumper stickers in profile (0/3)
Report a post to the moderators (0/1)
Edit a wiki page (0/1)
Create a post with an image (1/2)
Greenhorn Scavenger Hunt
First Post
Number Posts (34/10)
Number Threads Started (11/10)
Number Likes Received (0/3)
Number Likes Granted (0/3)
Set bumper stickers in profile (0/1)
Set signature in profile
Set a watch on a thread
Save thread as a bookmark
Create a post with an image (1/1)
Recent posts by M. Gumblert
Nested loops, debugging.
show more
4 weeks ago
Jython/Python
Problems with lists
I was in a rush and completely forgot to answer you. I managed to correct my program using list comprehension. Is´s much more easier and is working perfectly fine now. I am not sure why you said my code is unreanable. As I said I didn´t want to rename the variables because I was afraid I might break the code. Anyway, thank you.
show more
4 weeks ago
Jython/Python
Problems with lists
cijela_lista_duplikati = jedna_lista1 + jedna_lista2 + jedna_lista3 + jedna_lista4 + jedna_lista5 + jedna_lista6 dupli = set() cijela_lista_bez_duplikata = [] for a, b in cijela_lista_duplikati: if not b in dupli: dupli.add(b) cijela_lista_bez_duplikata.append((a, b)) #2. vrsta r_inf_2= re.compile(r'.*nuti\b') r_pz_2 = re.compile(r'.*n(em|eš|e|emo|ete|u)\b') #3.a vrsta r_inf_3a= re.compile(r'.*(j|lj|nj|r)eti\b') r_pz_3a = re.compile(r'.*(im|iš|i|imo|ite|e)\b') #3.b vrsta r_inf_3b= re.compile(r'.*(č|ž|j|št|žd)ati\b') r_pz_3b = re.compile(r'.*(im|iš|i|imo|ite|e)\b') #4. vrsta r_inf_4= re.compile(r'.*iti\b') r_pz_4 = re.compile(r'.*(im|iš|i|imo|ite|e)\b') #5a vrsta r_inf_5a= re.compile(r'.*ati\b') r_pz_5a = re.compile(r'.*(am|aš|a|amo|ate|aju)\b') #5b vrsta r_inf_5b= re.compile(r'.*ati\b') r_pz_5b = re.compile(r'.*(đ|ž|č|š|nj|lj)(em|eš|e|emo|ete|u)\b') #5c vrsta r_inf_5c= re.compile(r'.*(r|v)ati\b') r_pz_5c = re.compile(r'.*(r|v)(em|eš|e|emo|ete|u)\b') #5d vrsta r_inf_5d= re.compile(r'.*([^o, e, i]v|j)ati\b') r_pz_5d = re.compile(r'.*j(em|eš|e|emo|ete|u)\b') #6. vrsta r_inf_6= re.compile(r'.*(o|e|i)vati\b') r_pz_6 = re.compile(r'.*uj(em|eš|e|emo|ete|u)\b') #2. vrsta soritanje glagoli_2 = [] for element in cijela_lista_bez_duplikata: mtch = r_inf_2.match(element[1]) and r_pz_2.match(element[0]) if mtch: glagoli_2.append(mtch.group()) cijela_lista_bez_duplikata.remove(element) print(glagoli_2) print(len(glagoli_2)) #3.a vrsta sortiranje glagoli_3a = [] for element in cijela_lista_bez_duplikata: mtch = r_inf_3a.match(element[1]) and r_pz_3a.match(element[0]) if mtch: glagoli_3a.append(mtch.group()) print(glagoli_3a) print(len(glagoli_3a)) #3.b vrsta sortiranje glagoli_3b = [] for element in cijela_lista_bez_duplikata: mtch = r_inf_3b.match(element[1]) and r_pz_3b.match(element[0]) if mtch: glagoli_3b.append(mtch.group()) cijela_lista_bez_duplikata.remove(element) print(glagoli_3b) print(len(glagoli_3b)) #4 vrsta sortiranje glagoli_4 = [] for element in cijela_lista_bez_duplikata: mtch = r_inf_4.match(element[1]) and r_pz_4.match(element[0]) if mtch: glagoli_4.append(mtch.group()) cijela_lista_bez_duplikata.remove(element) print(glagoli_4) print(len(glagoli_4)) #5a vrsta sortiranje glagoli_5a = [] for element in cijela_lista_bez_duplikata: mtch = r_inf_5a.match(element[1]) and r_pz_5a.match(element[0]) if mtch: glagoli_5a.append(mtch.group()) cijela_lista_bez_duplikata.remove(element) print(glagoli_5a) print(len(glagoli_5a)) #5b vrsta sortiranje glagoli_5b = [] for element in cijela_lista_bez_duplikata: mtch = r_inf_5b.match(element[1]) and r_pz_5b.match(element[0]) if mtch: glagoli_5b.append(mtch.group()) cijela_lista_bez_duplikata.remove(element) print(glagoli_5b) print(len(glagoli_5b)) #5c vrsta sortiranje glagoli_5c = [] for element in cijela_lista_bez_duplikata: mtch = r_inf_5c.match(element[1]) and r_pz_5c.match(element[0]) if mtch: glagoli_5c.append(mtch.group()) cijela_lista_bez_duplikata.remove(element) print(glagoli_5c) print(len(glagoli_5c)) #5d vrsta sortiranje glagoli_5d = [] for element in cijela_lista_bez_duplikata: mtch = r_inf_5d.match(element[1]) and r_pz_5d.match(element[0]) if mtch: glagoli_5d.append(mtch.group()) cijela_lista_bez_duplikata.remove(element) print(glagoli_5d) print(len(glagoli_5d)) #6. vrsta sortiranje glagoli_6 = [] for element in cijela_lista_bez_duplikata: mtch = r_inf_6.match(element[1]) and r_pz_6.match(element[0]) if mtch: glagoli_6.append(mtch.group()) cijela_lista_bez_duplikata.remove(element) print(glagoli_6) print(len(glagoli_6))
Hello. This is my code. It is kind of hard to explain, because this is just a segment of the program. The basic principle is that i have this list cijela_lista_bez_duplikata that has 2003 elements in it. I am using regex to sort the elements. The problem is that once i put "cijela_lista_bez_duplikata.remove(element)" on the end of each sorting the result changes. Which is very strange because regexis shouldn't overlap (There is a tiny overlap, but nothing signifacnt). I am "losing" 200 elements in 5a.
I am not sure if you guys understand what I am trying to do.
For reference, the strings in cijela_lista_bez_duplikata look like this: ('mislim', 'misliti'), ('znam', 'znati'), ('imam', 'imati'), ('vidim', 'vidjeti'), ('moram', 'morati'), ('mogu', 'moći'), ('želim', 'željeti')...
I didn't translate the code, because I would probably mess up everything.
show more
1 month ago
Jython/Python
Troubleshooting
@Travis Risner
Thank you for your long answer I will try to rewrite the code.
@Liutauras Vilda
Thank you for your answers and pointing out the problems with split(","). I know that there are some logical problems (like the 110kg thing). I didn't bother with that because those are some problems I will look into later.
show more
2 months ago
Jython/Python
Troubleshooting
Dear everyone,
thank you for your help. As most of you suggested I translated the code. Ignore the one I started the topic with. Here is the whole code. The idea is a really basic program that counts how much kcal you can eat a day. When you type in your meal in the format: "salad, 300" the program adds that to a list, and takes 300 from your daily kcal that is defined based on your weight. All the numbers are arbitrary. The problem remains the same:
kcal_limit = kcal_limit - int(meal[1])
IndexError: list index out of range
Here is the code:
kilograms = input("My weight is: ").strip().lower() kilograms = int(kilograms) kcal_limit = 0 if kilograms < 40: print("Eat more!!!") elif kilograms <60: kcal_limit = 2400 print("You can eat 2400 kcal a day") elif kilograms <80: kcal_limit = 2600 print("You can eat 2600 kcal a day") elif kilograms <100: kcal_limit = 3000 print("You can eat 3000 kcal a day") print("You can eat this ammount of kcal: " +str(kcal_limit)) print("Please type in your meals like this: \"name of the meal , kcal\". After you are done with uploading your meals. Type finished and press enter.") eaten = "" food_list = [] while eaten != "finished": eaten = input("Your meal: ") meal = eaten.split((",")) food_list.append(meal) kcal_limit = kcal_limit - int(meal[1]) #this is for checking the while loop: print(food_list) print(kcal_limit) print("Thank you! You can eat " + str(kcal_limit) + " kcal more today to stay fit.")
show more
2 months ago
Jython/Python
Troubleshooting
Oh sorry, I was translating the code for english speakers. It's the "food"
show more
2 months ago
Jython/Python
Troubleshooting
Hello everyone.
Could someone tell me why isn't my code working?
list_of_food = [] limit = 2600 while str(eaten) != "thatsit": eaten= input("I ate: ") food = eaten.split((",")) list_of_food.append(član) limit = limit - int(food[1]) print(list_of_food) print(limit)
So the program is like a conceptual fitness app. where the user types in the food he ate and its kcal and separates it with a ",". The kcal of the eaten food is then taken from the limit. Everything is stored in a list (for statistics or whatever). This goes on until the user says "thatsit". Then the it breaks out of the while loop and prints out the list and the remaining kcal for the day (this is shown, after every input).
Everything goes fine until I want to finish the input.
limit = limit - int(food[1])
IndexError: list index out of range
Also, would it be smarter to store the eaten food in touples instead of lists in a list.
show more
2 months ago
Jython/Python
Encoding-decoding
Okay, I am not sure what you are saying. Does this mean that the code is right? There shouldn't be any problems with the .txt file as well. What can I do?
show more
3 months ago
Jython/Python
Encoding-decoding
Hello, I am in the process of learning Python 3 for the purposes of NLP.
I am trying to work with a .txt that has non-ASCII characters. In the exercise I have to demonstrate the differences in the length of documents. My code looks like this
hr_tekst = open('hr.txt', "r").read().decode('utf-8') odsjecak = hr_tekst[:1000] dat_utf = open('hr_utf_1000.txt','w') dat_utf.write(odsjecak.encode('utf-8')) dat_utf.close() dat_iso = open('hr_iso_1000.txt','w') dat_iso.write(odsjecak.encode('iso-8859-2')) dat_iso.close() print (len(open('hr_utf_1000.txt').read())) print (len(open('hr_iso_1000.txt').read())) print (len(open('hr_utf_1000.txt').read().decode('utf-8'))) print (len(open('hr_iso_1000.txt').read().decode('iso-8859-2'))) print (open('hr_utf_1000.txt').read()==open('hr_iso_1000.txt').read()) print (open('hr_utf_1000.txt').read().decode('utf-8')==open('hr_iso_1000.txt').read().decode('iso-8859-2'))
I understand what the lines do, I checked the solved exercise, and It is the same as this, but for some reason it won´t compile.
I get the following error message:
Traceback (most recent call last):
File "C:...my folders...", line 56, in <module>
print (len(open('hr.txt').read()))
File "C:\Users\user\AppData\Local\Programs\Python\Python37-32\lib\encodings\cp1250.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 88422: character maps to <undefined>
Can someone help me out?
show more
3 months ago
Jython/Python
Troubleshooting exceptions
Thank you for your answers. The class without a method was really stupid of me. I know the names are kinda, as you said, goofy, but I just wanted to try out the very last thing i learnt.
As for the scanner I am going to study the thread you suggested.
Once again, appriciate the answers.
show more
7 months ago
Beginning Java
Troubleshooting exceptions
I do not exactly get either of the things you guys suggested. What should I provide a constructor to? As for the Scanner. Would it work the way I did it? If I make it a constant, how do I call it?
Am I just calling it like:
firstinput = KEYBOARD.nextInt();
secondinput = KEYBOARD.nextLine();
and so on?
show more
7 months ago
Beginning Java
Troubleshooting exceptions
Hello guys! I am trying to build a mini-program that allows is registering users. It asks for a username and password. The input is checked and if everything is correct the program ads the registered user to an ArrayList that holds Users
import java.util.*; class User{ //private static int AccountCount = 0; -not used yet private String name; private int password; public User(String name, int password){ System.out.println("A student object has been created"); } //not used yet public void setUser(String nm, String pw){ nm = name; pw = password; } } public class registration{ public static void main (String [] args){ //an ArrayList that holds the registered students ArrayList<User> UserInTheDatabase = new ArrayList <User>(); boolean SuccessfulRegistration = false; do{ try{ Scaner firstinput = new Scanner(System.in); System.out.println("Type in your username"); userNameInput = firstinput.nextLine(); Scaner secondinput = new Scanner(System.in); System.out.println("Type in your password, only digits allowed"); passwordInput = secondinput.nextInt(); checkTheRegistration(userNameInput, passwordInput); //add to ArrayList list.UserInTheDatabase(new User(userNameInput, passwordInput)); }catch(YouNeedANameException ne){ System.out.println("You have to have a username, it cannot be nothing -message from the catch block"); }catch(YouNeedAPasswordException pe){ System.out.println("You have to have a password, it cannot be 0 -message from the catch block"); }finally{ System.out.Println("this is finally"); } SuccessfulRegistration = true; System.out.Println("You passed the checker"); }while(SuccessfulRegistration = false); } static void checkTheRegistration(String userNameInput, int passwordInput) throws YouNeedANameException, YouNeedAPasswordException{ // I really think this can be done much much better, this way I am assuming the user will fail in the username and password naming. if(userNameInput.equals("null")){ throw new YouNeedANameException(); }else if(YouNeedAPasswordException == 0){ throw new YouNeedAPasswordException(); }else{ System.out.println("U passed the checker"); } } }
This is where I get the error messages:
class YouNeedANameException extends Exception{ System.out.println("You have to have a username, it cannot be nothing - this message is from the class"); } class YouNeedAPasswordException extends Exception{ System.out.println("You have to have a password, it cannot be 0 - this message is from the class"); }
I get the following error messages:
Registration.java:59: error: <identifier> expected
System.out.println("You have to have a username, it cannot be nothing - this message is from the class");
Registration.java:59: error: illegal start of type
System.out.println("You have to have a username, it cannot be nothing - this message is from the class");
Registration.java:63: error: <identifier> expected
System.out.println("You have to have a password, it cannot be 0 - this message is from the class");
Registration.java:63: error: illegal start of type
System.out.println("You have to have a password, it cannot be 0 - this message is from the class");
I am new to exceptions (and to java in general). Please correct every mistake I have. I know on some spots the logic isn't really at the top, but if I debug this I will work on that as well.
show more
7 months ago
Beginning Java
How to put put classes into a class
@Junilu Lacar
I was trying to simulate a situatuion where i try to pull infromation about a students perfomance on a course. The result should have been named grade. In my opinion it is not neccessary to make objects that hold references to other objects, but I was just wondering how is that possibble. I admit that perhaps this isn't the best representation, but I thought that it will be easier to ask a question providing a little context on what I am experimenting with.
Also I wanted to see the logic what is behind the "object that holds references to objects" is.
@everyone
Thank you for your answers.
show more
7 months ago
Beginning Java
How to put put classes into a class
So I am a bit of a struggle.
I want to make a class that is named results. It would hold the proffessor, the student and the result. The only problem is that proffessor and student are object with their own instances and methods. I want to put classes into a class.
I was wondering how is this possibble.
Should I try to put them into an ArrayList<Object> or what is the right way to do it?
show more
7 months ago
Beginning Java
Troubleshooting
it is from 148 to 153
show more
8 months ago
Beginning Java | https://coderanch.com/u/372757/M-Gumblert | CC-MAIN-2019-26 | refinedweb | 2,572 | 51.65 |
Hello!
I'm a beginner in programming, I tried to create a program.. an array and fill it with the ABC, I think it should be OK, and after that I try to list it, but it doesn't works well, I can only see: "Z Z Z Z Z"
What could be the problem?
Code Java:
public class NewClass55 { public static void main(String[] args) { int abcd=0; for (char k = 'A'; k <= 'Z'; k++) { abcd++; } char[] array = new char[abcd]; for (int j = 0; j < abcd; j++) { for (char i = 'A'; i <= 'Z'; i++) { array[j] = i; } } for (int j = 0; j < abcd; j++) { System.out.println(array[j]); } } }
Thanks in advance! | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/6861-array-listing-problem-printingthethread.html | CC-MAIN-2014-15 | refinedweb | 114 | 70.47 |
22 package org.jacorb.ir.gui.typesystem.remote;23 24 25 import org.jacorb.ir.gui.typesystem.*;26 import javax.swing.tree.*;27 /**28 * This class was generated by a SmartGuide.29 * 30 */31 public class IREnumMember extends IRNode {32 33 34 35 /**36 * IREnumMember constructor comment.37 */38 protected IREnumMember() {39 super();40 }41 /**42 * This method was created by a SmartGuide.43 * @param name java.lang.String44 */45 protected IREnumMember ( String name) {46 setName(name);47 }48 /**49 * This method was created by a SmartGuide.50 * @return java.lang.String51 */52 public static String nodeTypeName() {53 return "";54 }55 }56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/jacorb/ir/gui/typesystem/remote/IREnumMember.java.htm | CC-MAIN-2016-44 | refinedweb | 134 | 61.43 |
J2SE 5 introduced numerous features to the Java programming language. One of these features is autoboxing and unboxing, a feature that I use almost daily without even thinking about it. It is often convenient (especially when used with collections), but every once in a while it leads to some nasty surprises, "weirdness," and "madness." In this blog post, I look at a rare (but interesting to me) case of NoSuchMethodError resulting from mixing classes compiled with Java versions before autoboxing/unboxing with classes compiled with Java versions that include autoboxing/unboxing.
The next code listing shows a simple
Sum class that could have been written before J2SE 5. It has overloaded "add" methods that accept different primitive numeric data types and each instance of
Sum> simply adds all types of numbers provided to it via any of its overloaded "add" methods.
Sum.java (pre-J2SE 5 Version)
import java.util.ArrayList; public class Sum { private double sum = 0; public void add(short newShort) { sum += newShort; } public void add(int newInteger) { sum += newInteger; } public void add(long newLong) { sum += newLong; } public void add(float newFloat) { sum += newFloat; } public void add(double newDouble) { sum += newDouble; } public String toString() { return String.valueOf(sum); } }
Before unboxing was available, any clients of the above
Sum class would need to provide primitives to these "add" methods or, if they had reference equivalents of the primitives, would need to convert the references to their primitive counterparts before calling one of the "add" methods. The onus was on the client code to do this conversion from reference type to corresponding primitive type before calling these methods. Examples of how this might be accomplished are shown in the next code listing.
No Unboxing: Client Converting References to Primitives
private static String sumReferences( final Long longValue, final Integer intValue, final Short shortValue) { final Sum sum = new Sum(); if (longValue != null) { sum.add(longValue.longValue()); } if (intValue != null) { sum.add(intValue.intValue()); } if (shortValue != null) { sum.add(shortValue.shortValue()); } return sum.toString(); }
J2SE 5's autoboxing and unboxing feature was intended to address this extraneous effort required in a case like this. With unboxing, client code could call the above "add" methods with references types corresponding to the expected primitive types and the references would be automatically "unboxed" to the primitive form so that the appropriate "add" methods could be invoked. Section 5.1.8 ("Unboxing Conversion") of The Java Language Specification explains which primitives the supplied numeric reference types are converted to in unboxing andSection 5.1.7 ("Boxing Conversion") of that same specification lists the references types that are autoboxed from each primitive in autoboxing.
In this example, unboxing reduced effort on the client's part in terms of converting reference types to their corresponding primitive counterparts before calling
Sum's "add" methods, but it did not completely free the client from needing to process the number values before providing them. Because reference types can be null, it is possible for a client to provide a null reference to one of
Sum's "add" methods and, when Java attempts to automatically unbox that null to its corresponding primitive, a NullPointerException is thrown. The next code listing adapts that from above to indicate how the conversion of reference to primitive is no longer necessary on the client side but checking for null is still necessary to avoid the NullPointerException.
Unboxing Automatically Coverts Reference to Primitive: Still Must Check for Null
private static String sumReferences( final Long longValue, final Integer intValue, final Short shortValue) { final Sum sum = new Sum(); if (longValue != null) { sum.add(longValue); } if (intValue != null) { sum.add(intValue); } if (shortValue != null) { sum.add(shortValue); } return sum.toString(); }
Requiring client code to check their references for null before calling the "add" methods on
Sum may be something we want to avoid when designing our API. One way to remove that need is to change the "add" methods to explicitly accept the reference types rather than the primitive types. Then, the
Sum class could check for null before explicitly or implicitly (unboxing) dereferencing it. The revised
Sum class with this changed and more client-friendly API is shown next.
Sum Class with "add" Methods Expecting References Rather than Primitives
import java.util.ArrayList; public class Sum { private double sum = 0; public void add(Short newShort) { if (newShort != null) { sum += newShort; } } public void add(Integer newInteger) { if (newInteger != null) { sum += newInteger; } } public void add(Long newLong) { if (newLong != null) { sum += newLong; } } public void add(Float newFloat) { if (newFloat != null) { sum += newFloat; } } public void add(Double newDouble) { if (newDouble != null) { sum += newDouble; } } public String toString() { return String.valueOf(sum); } }
The revised
Sum class is more client-friendly because it allows the client to pass a reference to any of its "add" methods without concern for whether the passed-in reference is null or not. However, the change of the
Sumclass's API like this can lead to
NoSuchMethodErrors if either class involved (the client class or one of the versions of the
Sum class) is compiled with different versions of Java. In particular, if the client code uses primitives and is compiled with JDK 1.4 or earlier and the
Sum class is the latest version shown (expecting references instead of primitives) and is compiled with J2SE 5 or later, a
NoSuchMethodError like the following will be encountered (the "S" indicates it was the "add" method expecting a primitive
short and the "V" indicates that method returned
void).
Exception in thread "main" java.lang.NoSuchMethodError: Sum.add(S)V at Main.main(Main.java:9)
On the other hand, if the client is compiled with J2SE 5 or later and with primitive values being supplied to
Sum as in the first example (pre-unboxing) and the
Sum class is compiled in JDK 1.4 or earlier with "add" methods expecting primitives, a different version of the
NoSuchMethodError is encountered. Note that the
Short reference is cited here.
Exception in thread "main" java.lang.NoSuchMethodError: Sum.add(Ljava/lang/Short;)V at Main.main(Main.java:9)
There are several observations and reminders to Java developers that come from this.
- Classpaths are important:
- Java
.classfiles compiled with the same version of Java (same
-sourceand
-target) would have avoided the particular problem in this post.
- Classpaths should be as lean as possible to reduce/avoid possibility of getting stray "old" class definitions.
- Build "clean" targets and other build operations should be sure to clean past artifacts thoroughly and builds should rebuild all necessary application classes.
- Autoboxing and Unboxing are well-intentioned and often highly convenient, but can lead to surprising issues if not kept in mind to some degree. In this post, the need to still check for null (or know that the object is non-null) is necessary remains in situations when implicit dereferencing will take place as a result of unboxing.
- It's a matter of API style taste whether to allow clients to pass nulls and have the serving class check for null on their behalf. In an industrial application, I would have stated whether null was allowed or not for each "add" method parameter with
@paramin each method's Javadoc comment. In other situations, one might want to leave it the responsibility of the caller to ensure any passed-in reference is non-null and would be content throwing a
NullPointerExceptionif the caller did not obey that contract (which should also be specified in the method's Javadoc).
- Although we typically see
NoSuchMethodErrorwhen a method is completely removed or when we access an old class before that method was available or when a method's API has changed in terms of types or number of types. In a day when Java autoboxing and unboxing are largely taken for granted, it can be easy to think that changing a method from taking a primitive to taking the corresponding reference type won't affect anything, but even that change can lead to an exception if not all classes involved are built on a version of Java supporting autoboxing and unboxing.
- One way to determine the version of Java against which a particular
.classfile was compiled is to use javap -verbose and to look in the javap output for the "major version:". In the classes I used in my examples in this post (compiled against JDK 1.4 and Java SE 8), the "major version" entries were 48 and 52 respectively (the General Layout section of the Wikipedia entry on Java class file lists the major versions).
Fortunately, the issue demonstrated with examples and text in this post is not that common thanks to builds typically cleaning all artifacts and rebuilding code on a relatively continuous basis. However, there are cases where this could occur and one of the most likely such situations is when using an old JAR file accidentally because it lies in wait on the runtime classpath.
This story, "Autoboxing, Unboxing, and NoSuchMethodError" was originally published by marxsoftware.blogspot.com. | http://www.javaworld.com/article/2597545/java-language/autoboxing-unboxing-and-nosuchmethoderror.html | CC-MAIN-2016-26 | refinedweb | 1,490 | 50.26 |
:
// Use the class name as the name space by default. if ($namespace == '') { $className = is_object($class) ? get_class($class) : $class; $namespace = substr($className, 0, strrpos($className, '.')); }
Posted by old of Satoru Yoshida ([email protected]) on 2009-01-02T16:33:20.000+0000
Set component
Posted by Darby Felton (darby) on 2009-01-09T06:05:38.000+0000
I don't experience this problem with version 1.7.2 of Zend_Amf_Server. I can call setClass(), passing it an object with no namespace, and I get no such error. I think this issue may have been resolved.
Posted by Jurrien Stutterheim (norm2782) on 2009-01-09T06:28:15.000+0000
I'm actually using the latest trunk and I'm getting this error. Code to reproduce
Complete error:
Current code in Zend_Amf_Server (just did an SVN update):
Darby, do you have display_errors switched on? ;)
Posted by Jurrien Stutterheim (norm2782) on 2009-01-09T07:06:34.000+0000
Resolved in revision 13581
Posted by Wade Arnold (wadearnold) on 2009-01-09T07:13:52.000+0000
It does not look like a unit test was written for this. Please don't mark an item as resolved without a corresponding unit test that tests the new feature.
Posted by Wade Arnold (wadearnold) on 2009-01-09T07:16:17.000+0000
Also do either of you have a use case for this that you could help me understand so that I can use it in the documentation as to why you would pass an instantiated object through setClass(). Thanks for the code submission and making Zend Amf better for everyone! Really appreciate the help!
Posted by Jurrien Stutterheim (norm2782) on 2009-01-09T07:25:13.000+0000
Actually, I did add a unit test ; )…
The primary use case for this is that Zend_Server_Reflection supports reflecting on an object. Because of this, it's better to have Zend_Amf_Server support this as well, because otherwise it would be unexpected behavior.
Posted by Wade Arnold (wadearnold) on 2009-01-09T07:31:01.000+0000
Awesome thanks!
Posted by Darby Felton (darby) on 2009-01-09T10:36:36.000+0000
Sorry, I wasn't using latest trunk version but the version included with 1.7.2. This issue says that it affects 1.7.2, but I can't see that it does.
Posted by Andrea Montemaggio (klinamen) on 2009-03-25T02:17:41.000+0000
When the object passed as argument is an instance of a class whose constructor requires arguments, an instantiation error is raised on service call. I'm using 1.7.6 version of ZF. I've done some debug and found that in Zend/Amf/Server.php on line 165 (_dispatch method) the method's declaring class obtained by reflection is called on a NEW instance created with default construction and not on the object passed to setClass() as I expected. This behavior seems to raise the instantiation trouble I've mentioned; moreover, this behavior seems to be inconsistent w.r.t. the one observed, for example, in Zend/Json/Server component.
Posted by Andrea Montemaggio (klinamen) on 2009-03-25T02:26:23.000+0000
I'm sorry, the version of ZF I'm using is 1.7.4 and NOT 1.7.6 as I reported. | http://framework.zend.com/issues/browse/ZF-5393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel | CC-MAIN-2013-48 | refinedweb | 542 | 69.38 |
MinDCF problem in negative sets with outlier scores
When we have an outlier score in negative set, the selected threshold in
bob.measure.min_weighted_error_rate_threshold function is wrong. For example:
from bob.measure import min_weighted_error_rate_threshold, farfrr cost = 0.99 negatives = [-3, -2, -1, -0.5, 4] positives = [0.5, 3] th = min_weighted_error_rate_threshold(negatives, positives, cost, True) print("threshold: " + str(th)) far, frr = farfrr(negatives, positives, th) mindcf = (cost*far + (1-cost)*frr)*100 print ("minDCF : " + str(mindcf))
In this condition the output will be:
threshold: 0.0 minDCF : 19.8
minDCF can not be more than 1. In this condition a threshold higher than maximum score must be chosen. e.g., with threshold 5 minDCF will be 1. | https://gitlab.idiap.ch/bob/bob.measure/issues/59 | CC-MAIN-2020-05 | refinedweb | 117 | 53.88 |
Hi, I'm working on a project for school that will be using lwIP, and I need to familiarize myself with how it works. For now I'm just trying to write a simple test program. It doesn't even do anything yet, but I cannot for the life of me get it to compile:
#include "lwip\udp.h" void udp_packet_recv(void *arg, struct udp_pcb *pcb, struct pbuf *p, struct ip_addr *addr, u16_t port); main(){ // Create a new pcb. struct udp_pcb * pcb; pcb = udp_new(); if (pcb == NULL) return -1; // Bind to any IP on port 60,000. if (udp_bind(pcb, IP_ADDR_ANY, 60000) != ERR_OK) return -2; // On datagram receipt, call udp_packet_recv. udp_recv(pcb, udp_packet_recv, NULL); } void udp_packet_recv(void *arg, struct udp_pcb *pcb, struct pbuf *p, struct ip_addr *addr, u16_t port){ if (p != NULL){ // TODO: Case handling GET packet. // TODO: Case handling data receipt. pbuf_free(p); // Free pbuf when finished. } }Every time I try to compile this, I get the following errors:
C:\Users\ZACHAR~1\AppData\Local\Temp\cc1kZ7BP.o:client.c:(.text+0xf): undefined reference to `udp_new()' C:\Users\ZACHAR~1\AppData\Local\Temp\cc1kZ7BP.o:client.c:(.text+0x3d): undefined reference to `udp_bind(udp_pcb*, ip_addr*, unsigned short)' C:\Users\ZACHAR~1\AppData\Local\Temp\cc1kZ7BP.o:client.c:(.text+0x5d): undefined reference to `udp_packet_recv(void*, udp_pcb*, pbuf*, ip_addr*, unsigned short)' C:\Users\ZACHAR~1\AppData\Local\Temp\cc1kZ7BP.o:client.c:(.text+0x69): undefined reference to `udp_recv(udp_pcb*, void (*)(void*, udp_pcb*, pbuf*, ip_addr*, unsigned short), void*)' C:\Users\ZACHAR~1\AppData\Local\Temp\cc1kZ7BP.o:client.c:(.text+0x8e): undefined reference to `pbuf_free(pbuf*)' collect2: ld returned 1 exit statusI read something online about linking "lwip4", but have not been able to find anything about that file elsewhere. (I can't even figure out where I originally read that...) I'm using "lwip-win32-msvc-0.1", which is based on the CVS version of lwIP from 23-01-2002. The computer I am attempting to compile on is a Sony VPCF126FM, with no hardware modifications of any kind. I'm using MiniGW command line " g++ client.c -o client.exe" to compile. (Nothing fancy.) Can anybody help me out here? This has become a real sticking point.
Link Copied
I managed to find "lwip4.dsp" and "lwip4.dsw" in the "lwip-win32-msvc-0.1\proj\msvc6" directory. I'm not sure these are the right files though, and attempting to link them with the following commands has not worked.
-llwip4.dsp -llwip4.dsw -l lwip4.dsp -l lwip4.dsw -llwip4 -l lwip4I keep getting some variation of "ld.exe: cannot find -llwip4". | https://community.intel.com/t5/FPGA-Intellectual-Property/LwIP-issue-quot-undefined-reference-quot/td-p/55579 | CC-MAIN-2021-10 | refinedweb | 435 | 61.43 |
Pidgin should be able to turn on an away message when xscreensaver activates
Bug Description
I think gaim should have an option to turn on an away message when xscreensaver
activates. This would keep users from having to do one more thing before leaving
the computer. Also, when the screensaver is activated it's not possible to read
new messages, so I think this would be a sensible default.
I think it would be nice if gaim would put up an away message whenever
xscreensaver is active (on the same display of course) and the away message
turned off after xscreensaver is deactivated. That way the user doesn't have to
keep the idle time in two places (xscreensaver config and gaim config) and if
the user wants to leave the computer he/she can just activate the screensaver
(or in the case of a laptop, close the lid, which triggers an acpi event ->
lid.sh activates xscreensaver) and not have to put up an away message first.
oh the functionality is cool enough, and certainly has notable uses that idle
time does not adequately cover, for example manually blanking out the screen in
a single step. I wouldn't object to a patch, or better, a plugin, to do this.
But I don't see any of us (upstream) spending significant time trying to figure
out how to do it.
marking as upstream, patches are welcome
This could be done quite simply, even with only a couple lines of code.
In the xscreensaver lock/start screensaver function, check to see if
gaim/gaim-remote is installed. If it is, just run $ gaim-remote away.
#!/usr/bin/python
import os, dbus, gobject, dbus.glib
bus = dbus.SessionBus()
def onSessionIdleCh
if state:
else:
bus.add_
gobject.
Changed the package to Pidgin from Gaim.
Confirmed as an enhancement upstream. (http://
Thanks for your bug. Could you describe what is your issue? The preferences have
an option to be away after "... minutes" not using the computer or gaim | https://bugs.launchpad.net/ubuntu/+source/pidgin/+bug/23693 | CC-MAIN-2017-51 | refinedweb | 335 | 72.97 |
----------------------------------------------------------------------------- -- | -- Module : Control.Concurrent.STM.TSem -- Copyright : (c) The University of Glasgow 2012 -- License : BSD-style (see the file libraries/base/LICENSE) -- -- Maintainer : [email protected] -- Stability : experimental -- Portability : non-portable (requires STM) -- -- 'TSem': transactional semaphores. -- -- @since 2.4.2 ----------------------------------------------------------------------------- {-# LANGUAGE DeriveDataTypeable #-} module Control.Concurrent.STM.TSem ( TSem , newTSem , waitTSem , signalTSem , signalTSemN ) where import Control.Concurrent.STM import Control.Monad import Data.Typeable import Numeric.Natural -- | 2.4.2 newtype TSem = TSem (TVar Integer) deriving (Eq, Typeable) -- | Construct new 'TSem' with an initial counter value. -- -- A positive initial counter value denotes availability of -- units 'waitTSem' can acquire. -- -- The initial counter value can be negative which denotes a resource -- \"debt\" that requires a respective amount of 'signalTSem' -- operations to counter-balance. -- -- @since 2.4.2 newTSem :: Integer -> STM TSem newTSem i = fmap TSem (newTVar $! i) -- NOTE: we can't expose a good `TSem -> STM Int' operation as blocked -- 'waitTSem' aren't reliably reflected in a negative counter value. -- | Wait on 'TSem' (aka __P__ operation). -- -- This operation acquires a unit from the semaphore (i.e. decreases -- the internal counter) and blocks (via 'retry') if no units are -- available (i.e. if the counter is /not/ positive). -- -- @since 2.4.2 waitTSem :: TSem -> STM () waitTSem (TSem t) = do i <- readTVar t when (i <= 0) retry writeTVar t $! (i-1) -- Alternatively, the implementation could block (via 'retry') when -- the next increment would overflow, i.e. testing for 'maxBound' -- | Signal a 'TSem' (aka __V__ operation). -- -- This operation adds\/releases a unit back to the semaphore -- (i.e. increments the internal counter). -- -- @since 2.4.2 signalTSem :: TSem -> STM () signalTSem (TSem t) = do i <- readTVar t writeTVar t $! i+1 -- | Multi-signal a 'TSem' -- -- This operation adds\/releases multiple units back to the semaphore -- (i.e. increments the internal counter). -- -- > signalTSem == signalTSemN 1 -- -- @since 2.4.5 signalTSemN :: Natural -> TSem -> STM () signalTSemN 0 _ = return () signalTSemN 1 s = signalTSem s signalTSemN n (TSem t) = do i <- readTVar t writeTVar t $! i+(toInteger n) | https://downloads.haskell.org/ghc/8.10.7/docs/html/libraries/stm-2.5.0.1/src/Control-Concurrent-STM-TSem.html | CC-MAIN-2022-33 | refinedweb | 326 | 51.95 |
Although every computer language is suitable for data, some languages lend themselves especially well for working with certain types or sources of data, or processing the data in certain ways, and so are of particular use to the data scientist.
This is the sixth (This article)
- languages that interact directly with data using programming and scripting languages. This is an introductory article; I’ll follow it with practical examples of how to use each in another article.
My philosophy on information technology is this: “All computing is merely re-arranging data”. Every programming language or scripting interface deals with taking data as input, operating on that data, and returning or creating some output of the data. As such, almost any language is suitable for a data professional to learn and use with the data they need to compute. However, certain languages lend themselves especially well for working with certain types or sources of data, or processing the data in certain ways. In this article, I’ll explain a few of those and the situations I find that they fit best.
One final caveat - pretty much anything can be done with a given language. If there is a language you’re familiar with, you’re probably efficient with it and you should install and use that as your “go-to” language. Since this is a “lab” system, however, you should experiment with the languages I show here in addition to others that you find interesting. Experimentation is the entire point of building this particular system.
SQL
I’ll start with the Structured Query Language, or SQL. I separate out this language since it has attributes of higher-level languages such as functions and strong data-types, but lacks richer features such as graphical interface control, complete system functions, working with objects and the like. It’s also not technically a scripting language, as it is tied to a specific “engine” for interpretation of data calls.
That last point is important - SQL is an interpreted language (like scripting) meaning that there is a complete system needed to accept, change and execute and return the commands and their output. Higher-level languages are compiled into a binary package that is executed on a given architecture.
SQL is a declarative language paradigm, which means you write statements describing what you want, without coding how that will end up happening on the system. This is a very simple way of starting to work with data, and by layering these statements you can create very powerful constructs quickly and with just a little practice. SQL is also quite easy to read, if not a bit wordy for complex programs.
In most platforms SQL statements can be sent to the server to be executed, called “dynamic” SQL, or stored on the server and called with a function, stored procedure or other call, which is sometimes called “server-side” code. There are advantages and disadvantages to both, but in large part the syntax and processes are the same regardless. If you learn one, you can usually leverage most of that knowledge on the other.
I won’t spend a great deal of time explaining SQL in this article, since the readers of this series will no doubt be quite familiar with it. If you are new to SQL, there is a series of tutorials you can follow here that is quite useful:. The statements in the SQL language fall into categories:
- Data Definition Language (DDL): Used to create, alter and delete data objects such as tables, indexes and the like
- Data Manipulation Language (DML): Used to work with data, such as inserting, altering, and deleting or selecting data
- Data Control Language (DCL): Used to control the system, such as security elements
It’s important to point out that only a couple of very abstract systems use a “pure” (ANSI) form of SQL. SQL is actually quite limited in scope, and so each vendor that provides an engine uses a “dialect” of SQL. For SQL Server, which is the focus of this series, that dialect is “Transact-SQL” or T-SQL ().
As far as an installation of software to input code for SQL, I normally use the Integrated Development Environment (IDE) included with the engine that uses SQL. In SQL Server, there are several options such as SQL Server Management Studio () and SQL Server Data Tools (), which I’ll cover when I describe the Relational Database Management Systems installations in another article.
I also use the Notepad++ tool () I described in an earlier article, since it has syntax coloring and a few other features, but for the most part I use the included tools for their tight integration with the product. Oracle, DB/2 and other vendor products include IDE’s in their installations as well.
References:
For more general information on the SQL language, see this reference:
For the classes I teach in Transact-SQL, I typically use books from my friend Itzik Ben-Gan:
The primary verbs and nouns within the T-SQL language are here:
Programming Options for Interacting with Data
As I mentioned earlier, every programming language works with data in some form. In fact, the line between scripting and programming is quite blurred - Python, which I’ll describe in a moment, fits squarely in both camps. For the most part, however, most programming languages are compiled, meaning they are re-written into a binary format that runs on a particular computing architecture without any “engine” to run them. A scripting language, on the other hand, requires the engine of the scripting program to interpret the statements and then run them. Even this distinction has problems, however. Java is obviously a programming language, but requires the Java engine to run. Ditto for the .NET stack from Microsoft.
So what are the criteria that a data professional should use for choosing a programming interface to data, then? Actually, for your lab system you shouldn’t choose at all - but use this system for its intended purpose of experimenting with everything you can. On the other hand, there is only so much time in the day, so are there clear choices for starting with a particular language? There are - and I’ve divided my criteria into the following (you may have others). The language (scripting or not) needs to have the following characteristics:
- Data-centric, or at least data-friendly, particularly as it deals with data types
- Ability to read multiple types of data sources
- Supported and popular in the data science community
- Handles large data sets
- Some level of built-in visualizations
Using those criteria, I’ll start with a brief (very brief) introduction on how programming languages interface with data and then move on to a few choices I’ve made for my system. Once again, I won’t have time for examples in this article, but I’ll provide some in the articles that follow.
This is a conceptual section of this article, so stick with me for a moment. These concepts are important for the discussions that follow. Also, because I’m covering a lot of information in just a few paragraphs, I’ll lose some fidelity and exactness in the process. This isn’t intended to be a formal programming class, so feel free to add more detail in the comments if the explanations I give here strike you as incomplete, or worse, incorrect. I’ll provide several links in the References section below if you want more precise information.
Various Programming Languages using a Data Interface
Languages work with data by connecting to or opening a data source to take in data. That can be anything from a prompt waiting at a console for user input to reading a text file or connecting to a relational database engine.
I’ll use the Microsoft .NET suite of languages to frame this conversation, since the concepts diverge quickly based on the architecture you choose. In the series I’ll create on a Linux version of the data laboratory system I’ll cover other frameworks.
In higher-level languages that use Object-Oriented programming (also called Imperative programming), such as C#, whenever you work with data you create a “Class”. A class (as it relates to data) is nothing more than a definition of some data object, such as a pizza. The definition includes a Property of the object, such as the “type of pizza”. Here’s a very simple example of a Pizza class:
public class Pizza
{
public string typeOfPizza {get; set;}
}
With this simple definition, you can create a new Pizza by “calling” that class name in code and passing the parameters you want to insert (the set part) or read (the get part). That call does not change the Pizza class, it’s merely used to create a brand-new pizza (called an Instance) that you can name whatever you want. In essence, you use the Pizza class as a template to create a new Pizza (perhaps called Customer123Pizza) that has the value typeOfPizza=”The Works”. Pizza is still there, and now there’s another called CustomerPizza123 as well. You can work with CustomerPizza123 and change it in other ways as well, all without bothering the original Pizza.
As a data professional, you want to work with the data in some meaningful way - usually in a tabular layout with rows and columns. But you have a Pizza object sitting there that doesn’t lend itself well to a tabular format.
There are actually a couple of ways to handle the Object-Oriented to Tabular mismatch. In the .NET framework, you can use the ADO.NET libraries (Active Data Objects) for the change. With ADO.NET, you simply create a connection to the data you want, read the data in, work with it, and optionally write it back out again. There are connection types for everything from XML to relational database management systems.
You can also use Language Integrated Query (LINQ) to query the objects using set-based logic, because LINQ works with a hierarchy of objects - in effect, it creates a network of classes. LINQ is useful because you can work not only with relational or file or text data, but any object that has a LINQ interface. You can even join up the objects if there is some dependable key value between them.
Yet another method is to use Microsoft’s Entity Framework (EF), which is a model-based interface to data. EF, along with other products that work in a similar way, allows you to create your data model, and it will generate the programming classes for you.
You can use all of these methods, as well as others I have not described here, to work with data in your experiments. If it sounds complicated, don’t worry - it really isn’t. As you experiment on your system you’ll find that each has strengths and weaknesses for any given type of data or processing need. You should become familiar with all of them in a test environment so that you know when to use each.
References:
Object Oriented Programming Tutorial:
Working with ADO.NET:
Working with LINQ:
Working with Entity Framework:
Functional Programming and F#
Object-Oriented programming isn’t the only way to write code. Another major paradigm is Functional Programming. OO programming works (as the name implies) with objects, Functional Programming (also as the name implies) works with Functions. As an analogy, you can think of functions as machines that take input, work on it, and return output.
To over-simplify a bit, OO programming focuses on Objects and what you can do to them to change their state - how they currently exist, such as the type of pizza and so on. Functional programming is more concerned with what you want to do to the pizza - the transformation of the data into something else.
That simple difference means that functional programming works very well in data mining, data patterns analysis, data reduction to meaning and more. Also, because functions are less concerned with the data’s state, it lends itself very well to stateless architectures. That’s important because state is the bane of working with large sets of data. Let me explain by way of a very simple example - again, I’ll lose a bit of technical accuracy here but it’s useful to understand the concept.
Assume for a moment you’re at a store and the clerk begins to take your very large order - you’re buying lots of things for a party. The order is a bit complicated. Some things are taxed, you have coupons for other items, and so on. The object-oriented clerk would have to send out each item to another clerk to change the “state” of your order (the object) - the number, discount, and price of items. The final computation is done at the end. The state of the item is primary.
In functional programming, the clerk would take the items and pass them through computations that figure the discount, then on to the coupons, then on to the taxes and finally through the totals. The state isn’t as important, just the calculations. The other advantage here is that you don’t have to work with all of the data at once - you could have lots of clerks working with a smaller part of the data - which means you can scale to a very large set of data to work with by splitting it up.
Microsoft’s functional programming language is called F# (F-Sharp). It works with data in this way, but has two additional advantages: it can also use Object-Oriented paradigms, and has full access to all of the .NET libraries that work with everything from graphics, computations, to everything else that .NET provides.
I mention this language as one to experiment with because it doesn’t make you choose - you can write using OO concepts or Functional Programming, and even both in the same code set. It is designed from the outset to work with large sets of data, and has a wealth of documentation and training around it.
In fact, you don’t even have to install F# to try it out. If you navigate to you can write code in a browser, take training, and even work with live datasets online.
For the data professional, I recommend working through the Learn | Data Science online labs. It will quickly give you a feel for working with data in F#. From there you can download the Visual Studio Express (free) edition that contains F#:
After you install F# (a simple next, next finish process that I won’t detail here) you’ll want to locate the FSI.EXE binary on your lab system at a CMD prompt:
DIR C:\FSI.EXE /S
This is the F# Interactive prompt. Running this program allows you to work with F# in an immediate mode, similar to scripting. To program with a full Integrated Development Environment, use the Visual Studio installation.
References:
Main F# Site:
Why use F# over some other language?
... and ...
Reference set for F#:
Walkthrough of using F# with Open-Data providers:
F# for Data Mining:
Library for working with Data in F#:
Using LINQ with F#:
Scripting (Dynamic) Languages
Scripting languages are also useful for the data professional. I’ve already covered the installation of PowerShell, the Microsoft scripting language. Like F#, one of the advantages to working with data in PowerShell is that you have the full range of the .NET libraries. I’ll show examples of that in a future article.
Python
In addition to PowerShell, several other scripting candidates exist for working with data. But one stands out for the data professional: Python. Python was predominantly used in web-based applications, where the web page served as the user interface and Python performed the computation work. As time passed, Python became a very rich scripting language due to two primary factors: it’s easy to learn, and many people wrote “libraries” or pre-defined functions that made it quick to implement to solve a problem. Those two factors also made it the go-to language for today’s data scientists. It allowed them to quickly leverage other scientist’s work in a relatively powerful simple to learn language. That means it meets the criteria of a large, supported, community install base.
Python is not only a scripting language, but fits well with the OO programming paradigm. This power, along with the fact that it runs on multiple operating systems and is an open-source project cements its choice for the data laboratory system’s toolbox.
There are two primary versions of Python - 2 and Python 3. The 2.7 and earlier versions of Python didn’t work well in certain situations, but weren’t easily changed because so many libraries referenced it. A wholesale upgrade in version 3 brought Python up to modern standards, but also caused a fairly significant problem. Because so many libraries were written to version 2 and wouldn’t work in version 3, people were reluctant to switch to the new version. Python became a victim of its own success.
Many of the popular libraries are being ported to version 3, so the way I deal with the choice of version is to ensure the libraries I want to use are available for the version I can work with. The latest version is better; but sometimes the libraries require an earlier version.
The primary libraries I add in to Python are numpy and scipy. These two libraries contain an incredible array of scientific and numeric functions. In fact, the statistics functions included in these two packages have some advantages over using the R scripting language I discussed in the last article - although I still use both.
To install Python, visit, click the Downloads section and select the release you want. Once you’ve completed the installation, you can install numpy and scipy here:.
References:
I’ve found this to be a great systematic resource for learning Python:
You can also check out the tutorials here:
There are some interesting examples here on using Python with data:
Need more data or math libraries?
This is an interesting commentary on using Python for data scientists:
This is an intriguing data visualization program for Python:
In the next installment, I’ll cover programmatic methods and tools that I’ll work with on the laboratory. | https://www.simple-talk.com/cloud/data-science/data-science-laboratory-system---programming-and-scripting--languages/ | CC-MAIN-2015-48 | refinedweb | 3,075 | 58.62 |
This manual documents Guile version 2.0.11.
Copyright (C) 1996, 1997, 2000, 2001, 2002, 2003, 2004, 2005, 2009, 2010, 2011, 2012, 2013, 2014 Free Software Foundation, Inc.
This manual describes how to use Guile, GNU’s Ubiquitous Intelligent Language for Extensions. It relates particularly to Guile version 2.0.11.
The rest of this manual is organised into the following chapters:

Hello Scheme!
For readers new to Scheme, this chapter provides an introduction to the basic ideas of the Scheme language. This material would apply to any Scheme implementation and so does not make reference to anything Guile-specific.

Programming in Scheme
Provides an overview of programming in Scheme with Guile. It covers how to invoke the guile program from the command-line and how to write scripts in Scheme. It also introduces the extensions that Guile offers beyond standard Scheme.

Programming in C
Provides an overview of how to use Guile in a C program. It discusses the fundamental concepts that you need to understand to access the features of Guile, such as dynamic types and the garbage collector. It explains in a tutorial-like manner how to define new data types and functions for the use by Scheme programs.

API Reference
This part of the manual documents the Guile API in functionality-based groups with the Scheme and C interfaces presented side by side.

Guile Modules
Describes some important modules, distributed as part of the Guile distribution, that extend the functionality provided by the Guile Scheme core.

GOOPS
Describes GOOPS, an object oriented extension to Guile that provides classes, multiple inheritance and generic functions.
In examples and procedure descriptions and all other places where the evaluation of Scheme expressions is shown, we use some notation for denoting the output and evaluation results of expressions. For example, a result line introduced by ‘⇒’ denotes the value that the preceding expression evaluates to.
In its simplest form, Guile acts as an interactive interpreter for the
Scheme programming language, reading and evaluating Scheme expressions
the user enters from the terminal. Here is a sample interaction between
Guile and a user; the user’s input appears after the
$ and
scheme@(guile-user)> prompts:
$ guile
scheme@(guile-user)> (+ 1 2 3)                ; add some numbers
$1 = 6
scheme@(guile-user)> (define (factorial n)    ; define a function
                       (if (zero? n) 1 (* n (factorial (- n 1)))))
scheme@(guile-user)> (factorial 20)
$2 = 2432902008176640000
scheme@(guile-user)> (getpwnam "root")        ; look in /etc/passwd
$3 = #("root" "x" 0 0 "root" "/root" "/bin/bash")
scheme@(guile-user)> C-d
$
The Guile interpreter is available as an object library, to be linked into applications using Scheme as a configuration or extension language.
Here is simple-guile.c, source code for a program that will
produce a complete Guile interpreter. In addition to all usual
functions provided by Guile, it will also offer the function
my-hostname.
#include <stdlib.h>
#include <libguile.h>

static SCM
my_hostname (void)
{
  char *s = getenv ("HOSTNAME");
  if (s == NULL)
    return SCM_BOOL_F;
  else
    return scm_from_locale_string (s);
}

static void
inner_main (void *data, int argc, char **argv)
{
  scm_c_define_gsubr ("my-hostname", 0, 0, 0, my_hostname);
  scm_shell (argc, argv);
}

int
main (int argc, char **argv)
{
  scm_boot_guile (argc, argv, inner_main, 0);
  return 0; /* never reached */
}
When Guile is correctly installed on your system, the above program can be compiled and linked like this:
$ gcc -o simple-guile simple-guile.c \
    `pkg-config --cflags --libs guile-2.0`
When it is run, it behaves just like the
guile program except
that you can also call the new
my-hostname function.
$ ./simple-guile
scheme@(guile-user)> (+ 1 2 3)
$1 = 6
scheme@(guile-user)> (my-hostname)
"burns"
Guile has support for dividing a program into modules. By using modules, you can group related code together and manage the composition of complete programs from largely independent parts.
For more details on the module system beyond this introductory material, see Modules.
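In outline, using an existing module and writing a new one each come down to a single form: use-modules imports a module’s public bindings into the current module, and define-module (normally placed at the top of the module’s own file) creates one. The sketch below is purely illustrative; the (math utils) module name and the double procedure are invented for the example:

(use-modules (ice-9 popen))   ; import an existing module shipped with Guile

;; In the file math/utils.scm, somewhere on the load path:
(define-module (math utils)
  #:export (double))

(define (double x)
  (* x 2))

;; In a client program, or at the REPL:
(use-modules (math utils))
(double 21)
⇒ 42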
In addition to Scheme code you can also put things that are defined in C into a module.
You do this by writing a small Scheme file that defines the module and
calls
load-extension directly in the body of the module.
$ cat /usr/local/share/guile/site/math/bessel.scm
(define-module (math bessel)
  #:export (j0))
(load-extension "libguile-bessel" "init_bessel")
$ file /usr/local/lib/guile/2.0/extensions/libguile-bessel.so
… ELF 32-bit LSB shared object …
$ guile
scheme@(guile-user)> (use-modules (math bessel))
scheme@(guile-user)> (j0 2)
$1 = 0.223890779141236
See Modules and Extensions, for more information.
Subsequent sections look more closely at the define syntax that can be used when defining new procedures; at the set! syntax that helps with changing a single value in the depths of a compound data structure; and at using define other than at top level in a Scheme program, including a discussion of when it works to use define rather than set! to change the value of an existing variable.
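Scheme also provides a shorthand form of define for procedures: writing (define (name args …) body …) is equivalent to binding name to the corresponding lambda expression. For example (an illustrative definition):

(define (add a b)
  (+ a b))

;; …is equivalent to:

(define add
  (lambda (a b)
    (+ a b)))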
Prior to Guile 2.0, Guile provided an extension to
define syntax
that allowed you to nest the previous extension up to an arbitrary
depth. These are no longer provided by default, and instead have been
moved to Curried Definitions
.
The last call made by the any and every procedures (see SRFI-1 Searching) is also in tail position. It will be noted there are a lot of places which could potentially be tail calls, for instance the last call in a for-each, but only those explicitly described are guaranteed.
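For example, a loop whose self-call is in tail position runs in constant stack space, however many iterations it performs (an illustrative definition):

(define (count-down n)
  (if (zero? n)
      'done
      (count-down (- n 1))))   ; the recursive call is a tail call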
if and cond (see Conditionals) provide conditional evaluation of argument expressions depending on whether one or more conditions evaluate to “true” or “false”; case (see Conditionals) provides conditional evaluation that depends on the value of a dispatch expression.
We have seen how to create top level variables using the
define
syntax (see Definition). It is often useful to create variables
that are more limited in their scope, typically as part of a procedure
body. In Scheme, this is done using the
let syntax, or one of
its modified forms
let* and
letrec. These syntaxes are
described in full later in the manual (see Local Bindings).
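For example, given three side lengths a, b and c (the particular values below are only for illustration), Heron’s formula for the area of a triangle can be written with a let expression:

(define a 5)
(define b 6)
(define c 7)

(define area
  (let ((s (/ (+ a b c) 2)))
    (sqrt (* s (- s a) (- s b) (- s c)))))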
The effect of the
let expression is to create a new environment
and, within this environment, an association between the name
s
and a new location whose initial value is obtained by evaluating
(/ (+ a b c) 2). The expressions in the body of the
let,
namely
(sqrt (* s (- s a) (- s b) (- s c))), are then evaluated
in the context of the new environment, and the value of the last
expression evaluated becomes the value of the whole
let
expression, and therefore the value of the variable
area.
Consider a
let expression that doesn’t contain any
lambdas:
(let ((s (/ (+ a b c) 2)))
  (sqrt (* s (- s a) (- s b) (- s c))))
When the Scheme interpreter evaluates this, it

1. creates a new environment whose parent is the environment that was current when it encountered the let
2. creates a variable binding for s in the new environment, with value given by (/ (+ a b c) 2)
3. evaluates the expression in the body of the let in the context of the new local environment, and remembers the value V
4. continues evaluating the expression that contained the let, using the value V as the value of the let expression, in the context of the containing environment.
After the
let expression has been evaluated, the local
environment that was created is simply forgotten, and there is no longer
any way to access the binding that was created in this environment. If
the same code is evaluated again, it will follow the same steps again,
creating a second new local environment that has no connection with the
first, and then forgetting this one as well.
If the
let body contains a
lambda expression, however, the
local environment is not forgotten. Instead, it becomes
associated with the procedure that is created by the
lambda
expression, and is reinstated every time that that procedure is called.
In detail, this works as follows. Whenever the Scheme interpreter evaluates a lambda expression, to create a procedure object, it stores the current environment as part of the procedure definition.
The result is that the procedure body is always evaluated in the context of the environment that was current when the procedure was created.
This is what is meant by closure. The next few subsections present examples that explore the usefulness of this concept.
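As a small foretaste of those examples, here is an illustrative procedure, not taken from the following sections, whose returned lambda captures the environment in which it was created; each adder remembers its own n:

(define (make-adder n)
  (lambda (x) (+ x n)))

(define add3 (make-adder 3))

(add3 4)
⇒ 7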
A frequently used programming model for library code is to allow an application to register a callback function for the library to call when some particular event occurs. It is often useful for the application to make several such registrations using the same callback function, for example if several similar library events can be handled using the same application code, but the need then arises to distinguish the callback function calls that are associated with one callback registration from those that are associated with different callback registrations.
In languages without the ability to create functions dynamically, this
problem is usually solved by passing a
user_data parameter on the
registration call, and including the value of this parameter as one of
the parameters on the callback function. Here is an example of
declarations using this solution in C:
typedef void (event_handler_t) (int event_type,
                                void *user_data);

void register_callback (int event_type,
                        event_handler_t *handler,
                        void *user_data);
In Scheme, closure can be used to achieve the same functionality without
requiring the library code to store a
user-data for each callback registration:
;; In the library:
(define (register-callback event-type handler-proc)
  …)

;; In the application:
(define (make-handler event-type user-data)
  (lambda ()
    … <code referencing event-type and user-data> …))

(register-callback event-type
                   (make-handler event-type …))
As far as the library is concerned,
handler-proc is a procedure
with no arguments, and all the library has to do is call it when the
appropriate event occurs. From the application’s point of view, though,
the handler procedure has used closure to capture an environment that
includes all the context that the handler code needs —
event-type and
user-data — to handle the event
correctly.
The last example of closure in this chapter uses it to implement a simple object with its own state, in the style of object-oriented programming: an account that responds to messages such as deposit and withdraw. After creating a fresh account, whose balance starts at 0, a sample interaction looks like this:

(my-account 'withdraw 5)
⇒ -5
(my-account 'deposit 396)
⇒ 391
(my-account 'get-balance)
⇒ 391
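A definition that behaves this way can be written along the following lines; this is a sketch rather than necessarily the exact code of the original example. Each account closes over its own balance variable, and the returned dispatcher procedure selects one of the locally defined procedures according to the message it is given:

(define (make-account)
  (let ((balance 0))
    (define (get-balance)
      balance)
    (define (deposit amount)
      (set! balance (+ balance amount))
      balance)
    (define (withdraw amount)
      (deposit (- amount)))
    ;; The account itself is a procedure that dispatches on its
    ;; first argument, the "message".
    (lambda args
      (apply (case (car args)
               ((get-balance) get-balance)
               ((deposit) deposit)
               ((withdraw) withdraw)
               (else (error "Invalid method!")))
             (cdr args)))))

(define my-account (make-account))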
Guile’s core language is Scheme, which is specified and described in the series of reports known as RnRS. RnRS is shorthand for the Revised^n Report on the Algorithmic Language Scheme. Guile complies fully with R5RS (see Introduction in R5RS), and implements some aspects of R6RS.
Guile also has many extensions that go beyond these reports. Some of the areas where Guile extends R5RS include its module system, its POSIX and networking interfaces, and its foreign function interface.
Many features of Guile depend on and can be changed by information that the user provides either before or when Guile is started. Below is a description of what information to provide and how to provide it.
Here we describe Guile’s command-line processing in detail. Guile processes its arguments from left to right, recognizing the switches described below. For examples, see Scripting Examples.
script arg...
-s script arg...
By default, Guile will read a file named on the command line as a
script. Any command-line arguments arg... following script
become the script’s arguments; the
command-line function returns
a list of strings of the form
(script arg...).
It is possible to name a file using a leading hyphen, for example, -myfile.scm. In this case, the file name must be preceded by -s to tell Guile that a (script) file is being named.
Scripts are read and evaluated as Scheme source code just as the
load function would. After loading script, Guile exits.
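For instance, a two-line script and one way of running it might look like this (the file name and arguments are invented for the example):

$ cat simple.scm
(display (command-line))
(newline)
$ guile -s simple.scm a b c
(simple.scm a b c)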
-L directory
Add directory to the front of Guile’s module load path. The given
directories are searched in the order given on the command line and
before any directories in the
GUILE_LOAD_PATH environment
variable. Paths added here are not in effect during execution of
the user’s .guile file.
-C directory
Like -L, but adjusts the load path for compiled files.
-x extension
Add extension to the front of Guile’s load extension list
(see
%load-extensions). The specified extensions
are tried in the order given on the command line, and before the default
load extensions. Extensions added here are not in effect during
execution of the user’s .guile file.
-e function
Make function the entry point of the script: after the script file has been loaded, Guile calls function, passing it the command-line arguments.
The function is most often a simple symbol that names a function
that is defined in the script. It can also be of the form
(@
module-name symbol), and in that case, the symbol is
looked up in the module named module-name.
For compatibility with some versions of Guile 1.4, you can also use the
form
(symbol ...) (that is, a list of only symbols that doesn’t
start with
@), which is equivalent to
(@ (symbol ...)
main), or
(symbol ...) symbol (that is, a list of only symbols
followed by a symbol), which is equivalent to
(@ (symbol ...)
symbol). We recommend to use the equivalent forms directly since they
correspond to the
(@ ...) read syntax that can be used in
normal code (see Using Guile Modules). See also The Meta Switch.
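As a concrete illustration (the script and module names are invented), the plain and module-qualified forms of -e look like this:

guile -e main -s ./my-script.scm arg1 arg2
guile -e '(@ (my-module) main)' -s ./my-script.scm arg1 arg2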
--use-srfi=list
The option --use-srfi expects a comma-separated list of numbers,
each representing a SRFI module to be loaded into the interpreter
before evaluating a script file or starting the REPL. Additionally,
the feature identifier for the loaded SRFIs is recognized by
the procedure
cond-expand when this option is used.
Here is an example that loads the modules SRFI-8 (’receive’) and SRFI-13 (’string library’) before the GUILE interpreter is started:
guile --use-srfi=8,13
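Within such a session the corresponding feature identifiers can then be tested with cond-expand; for example (illustrative):

(cond-expand
  (srfi-8 (display "SRFI-8 is available\n"))
  (else   (display "SRFI-8 is missing\n")))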
--debug
Start with the debugging virtual machine (VM) engine. Using the debugging VM will enable support for VM hooks, which are needed for tracing, breakpoints, and accurate call counts when profiling. The debugging VM is slower than the regular VM, though, by about ten percent. See VM Hooks, for more information.
By default, the debugging VM engine is only used when entering an interactive session. When executing a script with -s or -c, the normal, faster VM is used by default.
--no-debug
Do not use the debugging VM engine, even when entering an interactive session.
Note that, despite the name, Guile running with --no-debug does support the usual debugging facilities, such as printing a detailed backtrace upon error. The only difference with --debug is lack of support for VM hooks and the facilities that build upon it (see above).
-q
Do not load the initialization file, .guile. This option only has an effect when running interactively; running scripts does not load the .guile file. See Init File.
--listen[=p]
While this program runs, listen on a local port or a path for REPL clients. If p starts with a number, it is assumed to be a local port on which to listen. If it starts with a forward slash, it is assumed to be a path to a UNIX domain socket on which to listen.
If p is not given, the default is local port 37146. If you look at it upside down, it almost spells “Guile”. If you have netcat installed, you should be able to nc localhost 37146 and get a Guile prompt. Alternately you can fire up Emacs and connect to the process; see Using Guile in Emacs for more details.
Note that opening a port allows anyone who can connect to that port—in the TCP case, any local user—to do anything Guile can do, as the user that the Guile process is running as. Do not use --listen on multi-user machines. Of course, if you do not pass --listen to Guile, no port will be opened.
That said, --listen is great for interactive debugging and development.
--auto-compile
Compile source files automatically (default behavior).
--fresh-auto-compile
Treat the auto-compilation cache as invalid, forcing recompilation.
--no-auto-compile
Disable automatic source file compilation.
--language=lang
For the remainder of the command line arguments, assume that files mentioned with -l and expressions passed with -c are written in lang. lang must be the name of one of the languages supported by the compiler (see Compiler Tower). When run interactively, set the REPL’s language to lang (see Using Guile Interactively).
The default language is scheme; other interesting values include elisp (for Emacs Lisp) and ecmascript.
The example below shows the evaluation of expressions in Scheme, Emacs Lisp, and ECMAScript:
guile -c "(apply + '(1 2))"
guile --language=elisp -c "(= (funcall (symbol-function '+) 1 2) 3)"
guile --language=ecmascript -c '(function (x) { return x * x; })(2);'
To load a file written in Scheme and one written in Emacs Lisp, and then start a Scheme REPL, type:
guile -l foo.scm --language=elisp -l foo.el --language=scheme
-h, --help
Display help on invoking Guile, and then exit.
-v, --version
Display the current version of Guile, and then exit.
/usr/local/bin/guile \ /u/jimb/ekko a b c
This is the usual behavior, prescribed by POSIX.
When Guile sees the \ argument followed by /u/jimb/ekko, it opens /u/jimb/ekko, parses the three arguments -e, main, and -s from it, and substitutes them for the \ switch. Thus, Guile’s command line now reads:
/usr/local/bin/guile -e main -s /u/jimb/ekko a b c
Guile then processes these switches, loads /u/jimb/ekko as Scheme source code, and, because of the -e main switch, calls the script’s main procedure with the command-line arguments:
(main "/u/jimb/ekko" "a" "b" "c")
When Guile sees the meta switch \, it parses the remaining command-line arguments out of the script file itself rather than from the actual command line.
When you start up Guile by typing just
guile, without a
-c argument or the name of a script to execute, you get an
interactive interpreter where you can enter Scheme expressions, and
Guile will evaluate them and print the results for you. Here are some
simple examples.
scheme@(guile-user)> (+ 3 4 5)
$1 = 12
scheme@(guile-user)> (display "Hello world!\n")
Hello world!
scheme@(guile-user)> (values 'a 'b)
$2 = a
$3 = b
This mode of use is called a REPL, which is short for “Read-Eval-Print Loop”, because the Guile interpreter first reads the expression that you have typed, then evaluates it, and then prints the result.
The prompt shows you what language and module you are in. In this case, the
current language is
scheme, and the current module is
(guile-user). See Other Languages, for more information on Guile’s
support for languages other than Scheme.
When run interactively, Guile will load a local initialization file from ~/.guile. This file should contain Scheme expressions for evaluation.
This facility lets the user customize their interactive Guile environment, pulling in extra modules or parameterizing the REPL implementation.
To run Guile without loading the init file, use the
-q
command-line option.
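For instance, a ~/.guile along the following lines is a common starting point (just a sketch, assuming Guile was built with Readline support; adapt it to taste):
;; ~/.guile
(use-modules (ice-9 readline)
             (ice-9 pretty-print))
(activate-readline)
With this in place, every interactive session starts with Readline line editing enabled and the pretty-print procedure available.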
Just as Readline helps you to reuse a previous input line, value
history allows you to use the result of a previous evaluation in
a new expression. When value history is enabled, each evaluation result
is automatically assigned to the next in the sequence of variables
$1,
$2, …. You can then use these variables in
subsequent expressions.
scheme@(guile-user)> (iota 10)
$1 = (0 1 2 3 4 5 6 7 8 9)
scheme@(guile-user)> (apply * (cdr $1))
$2 = 362880
scheme@(guile-user)> (sqrt $2)
$3 = 602.3952191045344
scheme@(guile-user)> (cons $2 $1)
$4 = (362880 0 1 2 3 4 5 6 7 8 9)
Value history is enabled by default, because Guile’s REPL imports the
(ice-9 history) module. Value history may be turned off or on within the REPL, using the options interface:
scheme@(guile-user)> ,option value-history #f
scheme@(guile-user)> 'foo
foo
scheme@(guile-user)> ,option value-history #t
scheme@(guile-user)> 'bar
$5 = bar
Note that previously recorded values are still accessible, even if value history
is off. In rare cases, these references to past computations can cause Guile to
use too much memory. One may clear these values, possibly enabling garbage
collection, via the
clear-value-history! procedure, described below.
The programmatic interface to value history is in a module:
(use-modules (ice-9 history))
Return true if value history is enabled, or false otherwise.
Turn on value history, if it was off.
Turn off value history, if it was on.
Clear the value history. If the stored values are not captured by some other data structure or closure, they may then be reclaimed by the garbage collector.
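For instance, a session that wants to release the memory held by old results can clear the history explicitly (a minimal sketch using the clear-value-history! procedure mentioned above):
(use-modules (ice-9 history))
(clear-value-history!)   ; $1, $2, ... no longer hold on to old results
After this, the previously recorded values can be reclaimed by the garbage collector, provided nothing else references them.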
When Guile starts interactively, it notifies the user that help can be
had by typing ‘,help’. Indeed,
help is a command, and a
particularly useful one, as it allows the user to discover the rest of
the commands.
Find bindings/modules/packages.
Show description/documentation.
Change languages.
Generate compiled code.
Compile a file.
Expand any macros in a form.
Run the optimizer on a piece of code and print the result.
Disassemble a compiled procedure.
Disassemble a file.
Inspect the result(s) of evaluating exp.
Pretty-print the result(s) of evaluating exp.
For example, if Guile 2.0 is installed on your system in /usr/, then (%site-dir) will be /usr/share/guile/site/2.0. Likewise, if Guile 2.0 is installed in /usr/, then (%site-ccache-dir) site packages will be /usr/lib/guile/2.0/site-ccache.
Note that a .go file will only be loaded in preference to a .scm file if it is newer. For that reason, you should install your Scheme files first, and your compiled files second. Finally, if Guile 2.0 is installed on your system in /usr/, then the extensions dir will be /usr/lib/guile/2.0/extensions.
To compile and link against Guile 2.0, use pkg-config:
pkg-config guile-2.0 --cflags
pkg-config guile-2.0 --libs
Here is simple-guile.c, source code for a
main and an
inner_main function that will produce a complete Guile
interpreter.
/* simple-guile.c --- Start Guile from C.  */

#include <libguile.h>

static void
inner_main (void *closure, int argc, char **argv)
{
  /* preparation */
  scm_shell (argc, argv);
  /* after exit */
}

int
main (int argc, char **argv)
{
  scm_boot_guile (argc, argv, inner_main, 0);
  return 0; /* never reached, see inner_main */
}
The
main function calls
scm_boot_guile to initialize
Guile, passing it
inner_main. Once
scm_boot_guile is
ready, it invokes
inner_main, which calls
scm_shell to
process the command-line arguments in the usual way.
Here is a Makefile which you can use to compile the example program. It
uses
pkg-config to learn about the necessary compiler and
linker flags.
# Use GCC, if you have it installed.
CC=gcc

# Tell the C compiler where to find <libguile.h>
CFLAGS=`pkg-config --cflags guile-2.0`

# Tell the linker what libraries to use and where to find them.
LIBS=`pkg-config --libs guile-2.0`

simple-guile: simple-guile.o
	${CC} simple-guile.o ${LIBS} -o simple-guile

simple-guile.o: simple-guile.c
	${CC} -c ${CFLAGS} simple-guile.c
If you are using the GNU Autoconf package to make your application more
portable, Autoconf will settle many of the details in the Makefile
automatically, making it much simpler and more portable; we recommend
using Autoconf with Guile. Here is a configure.ac file for
simple-guile that uses the standard
PKG_CHECK_MODULES
macro to check for Guile. Autoconf will process this file into a
configure script. We recommend invoking Autoconf via the
autoreconf utility.
AC_INIT(simple-guile.c)

# Find a C compiler.
AC_PROG_CC

# Check for Guile
PKG_CHECK_MODULES([GUILE], [guile-2.0])

# Generate a Makefile, based on the results.
AC_OUTPUT(Makefile)
Run
autoreconf -vif to generate
configure.
Here is a
Makefile.in template, from which the
configure
script produces a Makefile customized for the host system:
# The configure script fills in these values.
CC=@CC@
CFLAGS=@GUILE_CFLAGS@
LIBS=@GUILE_LIBS@

simple-guile: simple-guile.o
	${CC} simple-guile.o ${LIBS} -o simple-guile
simple-guile.o: simple-guile.c
	${CC} -c ${CFLAGS} simple-guile.c
The developer should use Autoconf to generate the configure script from the configure.ac template, and distribute configure with the application. Here’s how a user might go about building the application:
$ ls
Makefile.in  configure*  configure.ac  simple-guile.c
$ ./configure
checking for gcc... ccache gcc
checking whether ccache gcc accepts -g... yes
checking for ccache gcc option to accept ISO C89... none needed
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for GUILE... yes
configure: creating ./config.status
config.status: creating Makefile
$ make
[...]
$ ./simple-guile
guile> (+ 1 2 3)
6
guile> (getpwnam "jimb")
#("jimb" "83Z7d75W2tyJQ" 4008 10 "Jim Blandy"
  "/u/jimb" "/usr/local/bin/bash")
guile> (exit)
$
The previous section has briefly explained how to write programs that
make use of an embedded Guile interpreter. But sometimes, all you
want to do is make new primitive procedures and data types available
to the Scheme programmer. Writing a new version of
guile is
inconvenient in this case and it would in fact make the life of the
users of your new features needlessly hard.
For example, suppose that there is a program
guile-db that is a
version of Guile with additional features for accessing a database.
People who want to write Scheme programs that use these features would
have to use
guile-db instead of the usual
guile program.
Now suppose that there is also a program
guile-gtk that extends
Guile with access to the popular Gtk+ toolkit for graphical user
interfaces. People who want to write GUIs in Scheme would have to use
guile-gtk. Now, what happens when you want to write a Scheme
application that uses a GUI to let the user access a database? You
would have to write a third program that incorporates both the
database stuff and the GUI stuff. This might not be easy (because
guile-gtk might be a quite obscure program, say) and taking this
example further makes it easy to see that this approach can not work in
practice.
It would have been much better if both the database features and the GUI
feature had been provided as libraries that can just be linked with
guile. Guile makes it easy to do just this, and we encourage you
to make your extensions to Guile available as libraries whenever
possible.
You write the new primitive procedures and data types in the normal fashion, and link them into a shared library instead of into a stand-alone program. The shared library can then be loaded dynamically by Guile.
It is easy for Guile to remember all blocks of memory that it has allocated for use by Scheme values, but you need to help it with finding all Scheme values that are in use by C code.
You do this when writing a SMOB mark function, for example
(see Garbage Collecting Smobs). By calling this function, the
garbage collector learns about all references that your SMOB has to
other
SCM values.
Other references to
SCM objects, such as global variables of type
SCM or other random data structures in the heap that contain
fields of type
SCM, can be made visible to the garbage collector
by calling the functions
scm_gc_protect_object or
scm_permanent_object. You normally use these functions for long
lived objects such as a hash table that is stored in a global variable.
For temporary references in local variables or function arguments, using
these functions would be too expensive.
These references are handled differently: Local variables (and function
arguments) of type
SCM are automatically visible to the garbage
collector. This works because the collector scans the stack for
potential references to
SCM objects and considers all referenced
objects to be alive. The scanning considers each and every word of the
stack, regardless of what it is actually used for, and then decides
whether it could possibly be a reference to a
SCM object. Thus,
the scanning is guaranteed to find all actual references, but it might also find words that only accidentally look like references. These ‘false references’ are harmless: at worst, they keep alive some objects that could otherwise have been freed.
Normally, smobs can have one immediate word of data. This word
stores either a pointer to an additional memory block that holds the
real data, or it might hold the data itself when it fits. The word is
large enough for a
SCM value, a pointer to
void, or an
integer that fits into a
size_t or
ssize_t.
You can also create smobs that have two or three immediate words, and when these words suffice to store all data, it is more efficient to use these super-sized smobs instead of using a normal smob plus a memory block. See Double Smobs, for their discussion.
Guile provides functions for managing memory which are often helpful when implementing smobs. See Memory Blocks.
To retrieve the immediate word of a smob, you use the macro
SCM_SMOB_DATA. It can be set with
SCM_SET_SMOB_DATA.
The 16 extra bits can be accessed with
SCM_SMOB_FLAGS and
SCM_SET_SMOB_FLAGS.
The two macros
SCM_SMOB_DATA and
SCM_SET_SMOB_DATA treat
the immediate word as if it were of type
scm_t_bits, which is
an unsigned integer type large enough to hold a pointer to
void. Thus you can use these macros to store arbitrary
pointers in the smob word.
When you want to store a
SCM value directly in the immediate
word of a smob, you should use the macros
SCM_SMOB_OBJECT and
SCM_SET_SMOB_OBJECT to access it.
Creating a smob instance can be tricky when it consists of multiple steps that allocate memory, since an ill-timed garbage collection could see a partially initialized object. Assuming that image_tag contains a tag returned by scm_make_smob_type, here is how we could construct a smob whose immediate word contains a pointer to a freshly allocated struct image; the full definition of make_image appears in the complete example at the end of this chapter.
Once a smob has been released to the tender mercies of the Scheme
system, it must be prepared to survive garbage collection. In the
example above, all the memory associated with the smob is managed by the
garbage collector because we used the
scm_gc_ allocation
functions. Thus, no special care must be taken: the garbage collector
automatically scans them and reclaims any unused memory.
However, when data associated with a smob is managed in some other
way—e.g.,
malloc’d memory or file descriptors—it is possible
to specify a free function to release those resources when the
smob is reclaimed, and a mark function to mark Scheme objects
otherwise invisible to the garbage collector.
As described in more detail elsewhere (see Conservative GC), every object in the Scheme system has a mark bit, which the garbage collector uses to tell live objects from dead ones. When collection starts, every object’s mark bit is clear. The collector traces pointers through the heap, starting from objects known to be live, and sets the mark bit on each object it encounters. When it can find no more unmarked objects, the collector walks all objects, live and dead, frees those whose mark bits are still clear, and clears the mark bit on the others.
The two main portions of the collection are called the mark phase, during which the collector marks live objects, and the sweep phase, during which the collector frees all unmarked objects.
The mark bit of a smob lives in a special memory region. When the collector encounters a smob, it sets the smob’s mark bit, and uses the smob’s type tag to find the appropriate mark function for that smob. It then calls this mark function, passing it the smob as its only argument.
The mark function is responsible for marking any other Scheme
objects the smob refers to. If it does not do so, the objects’ mark
bits will still be clear when the collector begins to sweep, and the
collector will free them. If this occurs, it will probably break, or at
least confuse, any code operating on the smob; the smob’s
SCM
values will have become dangling references.
To mark an arbitrary Scheme object, the mark function calls
scm_gc_mark.
Thus, here is how we might write mark_image. Again, this is not needed in our example since we used the scm_gc_ allocation routines, so it is shown just for the sake of illustration; the full definition appears in the complete example below.
Note that, even though the image’s
update_func could be an
arbitrarily complex structure (representing a procedure and any values
enclosed in its environment),
scm_gc_mark will recurse as
necessary to mark all its components. Because
scm_gc_mark sets
an object’s mark bit before it recurses, it is not confused by
circular structures.
As an optimization, the collector will mark whatever value is returned by the mark function; this helps limit the depth of recursion during the mark phase. Here is how we might write free_image (again just for the sake of illustration, since our example does not need it thanks to the use of the scm_gc_ allocation routines):
size_t
free_image (SCM image_smob)
{
  struct image *image = (struct image *) SCM_SMOB_DATA (image_smob);

  scm_gc_free (image->pixels, image->width * image->height, "image pixels");
  scm_gc_free (image, sizeof (struct image), "image");

  return 0;
}
During the sweep phase, the garbage collector will clear the mark bits on all live objects. The code which implements a smob need not do this itself.
There is no way for smob code to be notified when collection is complete.
It is usually a good idea to minimize the amount of processing done during garbage collection; keep the mark and free functions very simple. Since collections occur at unpredictable times, it is easy for any unusual activity to interfere with normal code.
Here is the complete text of the implementation of the image datatype, as presented in the sections above. We also provide a definition for the smob’s print function, and make some objects and functions static, to clarify exactly what the surrounding code is using.
As mentioned above, you can find this code in the Guile distribution, in
doc/example-smob. That directory includes a makefile and a
suitable
main function, so you can build a complete interactive
Guile shell, extended with the datatypes described here.
/* file "image-type.c" */

#include <stdlib.h>
#include <string.h>
#include <libguile.h>

static scm_t_bits image_tag;

struct image
{
  int width, height;
  char *pixels;

  /* The name of this image */
  SCM name;

  /* A function to call when this image is modified, e.g., to update
     the screen, or SCM_BOOL_F if no action necessary */
  SCM update_func;
};

static SCM
make_image (SCM name, SCM s_width, SCM s_height)
{
  SCM smob;
  struct image *image;
  int width = scm_to_int (s_width);
  int height = scm_to_int (s_height);

  /* Allocate and initialize the memory block first, then create the
     smob, and only then allocate the pixel buffer.  */
  image = (struct image *) scm_gc_malloc (sizeof (struct image), "image");
  image->width = width;
  image->height = height;
  image->pixels = NULL;
  image->name = SCM_BOOL_F;
  image->update_func = SCM_BOOL_F;

  SCM_NEWSMOB (smob, image_tag, image);

  image->name = name;
  image->pixels = scm_gc_malloc (width * height, "image pixels");

  return smob;
}

static SCM
clear_image (SCM image_smob)
{
  struct image *image;

  scm_assert_smob_type (image_tag, image_smob);
  image = (struct image *) SCM_SMOB_DATA (image_smob);
  memset (image->pixels, 0, image->width * image->height);

  /* Invoke the image's update function, if any.  */
  if (scm_is_true (image->update_func))
    scm_call_0 (image->update_func);

  scm_remember_upto_here_1 (image_smob);
  return SCM_UNSPECIFIED;
}

static SCM
mark_image (SCM image_smob)
{
  /* Mark the image's name and update function. */
  struct image *image = (struct image *) SCM_SMOB_DATA (image_smob);

  scm_gc_mark (image->name);
  return image->update_func;
}

static size_t
free_image (SCM image_smob)
{
  struct image *image = (struct image *) SCM_SMOB_DATA (image_smob);

  scm_gc_free (image->pixels, image->width * image->height, "image pixels");
  scm_gc_free (image, sizeof (struct image), "image");

  return 0;
}

static int
print_image (SCM image_smob, SCM port, scm_print_state *pstate)
{
  struct image *image = (struct image *) SCM_SMOB_DATA (image_smob);

  scm_puts ("#<image ", port);
  scm_display (image->name, port);
  scm_puts (">", port);

  /* non-zero means success */
  return 1;
}

void
init_image_type (void)
{
  image_tag = scm_make_smob_type ("image", sizeof (struct image));
  scm_set_smob_mark (image_tag, mark_image);
  scm_set_smob_free (image_tag, free_image);
  scm_set_smob_print (image_tag, print_image);

  scm_c_define_gsubr ("clear-image", 1, 0, 0, clear_image);
  scm_c_define_gsubr ("make-image", 3, 0, 0, make_image);
}
Here is a sample build and interaction with the code from the example-smob directory, on the author’s machine:
zwingli:example-smob$ make CC=gcc
gcc `pkg-config --cflags guile-2.0` -c image-type.c -o image-type.o
gcc `pkg-config --cflags guile-2.0` -c myguile.c -o myguile.o
gcc image-type.o myguile.o `pkg-config --libs guile-2.0` -o myguile
zwingli:example-smob$ ./myguile
guile> make-image
#<primitive-procedure make-image>
guile> (define i (make-image "Whistler's Mother" 100 100))
guile> i
#<image Whistler's Mother>
guile> (clear-image i)
guile> (clear-image 4)
ERROR: In procedure clear-image in expression (clear-image 4):
ERROR: Wrong type (expecting image): 4
ABORT: (wrong-type-arg)

Type "(backtrace)" to get more information.
guile>
Autoconf, a part of the GNU build system, makes it easy for users to build your package. This section documents Guile’s Autoconf support.
As mentioned earlier in this chapter, Guile supports parallel
installation, and uses
pkg-config to let the user choose which
version of Guile they are interested in.
pkg-config has its own
set of Autoconf macros that are probably installed on most every
development system. The most useful of these macros is
PKG_CHECK_MODULES.
PKG_CHECK_MODULES([GUILE], [guile-2.0])
This example looks for Guile and sets the
GUILE_CFLAGS and
GUILE_LIBS variables accordingly, or prints an error and exits if
Guile was not found.
Guile comes with additional Autoconf macros providing more information,
installed as prefix/share/aclocal/guile.m4. Their names
all begin with
GUILE_.
This macro runs the
pkg-config tool to find development files
for an available version of Guile.
By default, this macro will search for the latest stable version of Guile (e.g. 2.0), falling back to the previous stable version (e.g. 1.8) if it is available. If no guile-VERSION.pc file is found, an error is signalled. The found version is stored in GUILE_EFFECTIVE_VERSION.
If
GUILE_PROGS was already invoked, this macro ensures that the
development files have the same effective version as the Guile
program.
GUILE_EFFECTIVE_VERSION is marked for substitution, as by
AC_SUBST.
This macro runs the
pkg-config tool to find out how to compile
and link programs against Guile. It sets four variables:
GUILE_CFLAGS, GUILE_LDFLAGS, GUILE_LIBS, and
GUILE_LTLIBS.
GUILE_CFLAGS: flags to pass to a C or C++ compiler to build code that
uses Guile header files. This is almost always just one or more
-I
flags.
GUILE_LDFLAGS: flags to pass to the compiler to link a program
against Guile. This includes
-lguile-VERSION for the
Guile library itself, and may also include one or more
-L flag
to tell the compiler where to find the libraries. But it does not
include flags that influence the program’s runtime search path for
libraries, and will therefore lead to a program that fails to start,
unless all necessary libraries are installed in a standard location
such as /usr/lib.
GUILE_LIBS and GUILE_LTLIBS: flags to pass to the compiler or to libtool, respectively, to link a program against Guile. They include flags that augment the program’s runtime search path for libraries, so that shared libraries will be found at the location where they were during linking, even in non-standard locations. GUILE_LIBS is to be used when linking the program directly with the compiler, whereas GUILE_LTLIBS is to be used when the linking is done through libtool.
This macro looks for programs
guile and
guild, setting
variables GUILE and GUILD to their paths, respectively.
If
guile is not found, signal an error.
By default, this macro will search for the latest stable version of Guile (e.g. 2.0). x.y or x.y.z versions can be specified. If an older version is found, the macro will signal an error.
The effective version of the found
guile is set to
GUILE_EFFECTIVE_VERSION. This macro ensures that the effective
version is compatible with the result of a previous invocation of
GUILE_FLAGS, if any.
As a legacy interface, it also looks for
guile-config and
guile-tools, setting GUILE_CONFIG and GUILE_TOOLS.
The variables are marked for substitution, as by AC_SUBST.
module is a list of symbols, like: (ice-9 common-list). modvar is the Guile Scheme variable to check.
Guile provides an application programming interface (API) to developers in two core languages: Scheme and C. This part of the manual contains reference documentation for all of the functionality that is available through both Scheme and C interfaces.
The following macros do two different things: when compiled normally,
they expand in one way; when processed during snarfing, they cause the
guile-snarf program to pick up some initialization code; see Function Snarfing.
The descriptions below use the term ‘normally’ to refer to the case
when the code is compiled normally, and ‘while snarfing’ when the code
is processed by
guile-snarf.
Normally,
SCM_SNARF_INIT expands to nothing; while snarfing, it
causes code to be included in the initialization action file,
followed by a semicolon.
This is the fundamental macro for snarfing initialization actions. The more specialized macros below use it internally.
Normally, this macro expands into
static const char s_c_name[] = scheme_name;
SCM c_name arglist
While snarfing, it causes
scm_c_define_gsubr (s_c_name, req, opt, var, c_name);
to be added to the initialization actions. Thus, you can use it to declare a C function named c_name that will be made available to Scheme with the name scheme_name.
Note that the arglist argument must have parentheses around it.
Normally, these macros expand into
static SCM c_name
or
SCM c_name
respectively. While snarfing, they both expand into the initialization code
c_name = scm_permanent_object (scm_from_locale_symbol (scheme_name));
Thus, you can use them to declare a static or global variable of type
SCM that will be initialized to the symbol named
scheme_name.
Normally, these macros expand into
static SCM c_name
or
SCM c_name
respectively. While snarfing, they both expand into the initialization code
c_name = scm_permanent_object (scm_c_make_keyword (scheme_name));
Thus, you can use them to declare a static or global variable of type
SCM that will be initialized to the keyword named
scheme_name.
These macros are equivalent to
SCM_VARIABLE_INIT and
SCM_GLOBAL_VARIABLE_INIT, respectively, with a value of
SCM_BOOL_F.
Normally, these macros expand into
static SCM c_name
or
SCM c_name
respectively. While snarfing, they both expand into the initialization code
c_name = scm_permanent_object (scm_c_define (scheme_name, value));
Thus, you can use them to declare a static or global C variable of type
SCM that will be initialized to the object representing the
Scheme variable named scheme_name in the current module. The
variable will be defined when it doesn’t already exist. It is always
set to value.
The two boolean values are
#t for true and
#f for false.
They can also be written as #true and #false, as per R7RS.
Return #t if x is #f, else return #f.
Return
#t if obj is either
#t or
#f, else
return
#f.
The
SCM representation of the Scheme object
#t.
The
SCM representation of the Scheme object
#f.
Return
0 if obj is
#f, else return
1.
Return
1 if obj is
#f, else return
0.
Return
1 if obj is either
#t or
#f, else
return
0.
Return
#f if val is
0, else return
#t.
Return
1 if val is
SCM_BOOL_T, return
0
when val is
SCM_BOOL_F, else signal a ‘wrong type’ error.
You should probably use
scm_is_true instead of this function
when you just want to test a
SCM value for trueness.
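A short illustration of these predicates, together with Scheme’s rule that only #f counts as false in conditions:
(not #f)          ⇒ #t
(not 0)           ⇒ #f
(boolean? #f)     ⇒ #t
(boolean? '())    ⇒ #f
(if '() 'yes 'no) ⇒ yes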
Return
#t if n is an odd number,
#f
otherwise.
Return
#t if n is an even number,
#f
otherwise.
Return the quotient or remainder from n divided by d. The quotient is rounded towards zero, and the remainder will have the same sign as n. In all cases quotient and remainder satisfy n = q*d + r.
(remainder 13 4) ⇒ 1
(remainder -13 4) ⇒ -1
See also
truncate-quotient,
truncate-remainder and
related operations in Arithmetic.
Return the remainder from n divided by d, with the same sign as d.
(modulo 13 4) ⇒ 1
(modulo -13 4) ⇒ 3
(modulo 13 -4) ⇒ -3
(modulo -13 -4) ⇒ -1
See also
floor-quotient,
floor-remainder and
related operations in Arithmetic.
Return the greatest common divisor of all arguments. If called without arguments, 0 is returned.
The C function
scm_gcd always takes two arguments, while the
Scheme function can take an arbitrary number.
Return the least common multiple of the arguments. If called without arguments, 1 is returned.
The C function
scm_lcm always takes two arguments, while the
Scheme function can take an arbitrary number.
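For example:
(gcd)         ⇒ 0
(gcd 16 -4)   ⇒ 4
(gcd 36 24 8) ⇒ 4
(lcm)         ⇒ 1
(lcm 4 6)     ⇒ 12
(lcm 3 4 5)   ⇒ 60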
Return n raised to the integer exponent k, modulo m.
(modulo-expt 2 3 5) ⇒ 3
Return two exact non-negative integers s and r such that k = s^2 + r and s^2 <= k < (s + 1)^2. An error is raised if k is not an exact non-negative integer.
(exact-integer-sqrt 10) ⇒ 3 and 1
The C comparison functions below always take two arguments, while the
Scheme functions can take an arbitrary number. Also keep in mind that
the C functions return one of the Scheme boolean values
SCM_BOOL_T or
SCM_BOOL_F which are both true as far as C
is concerned. Thus, always write
scm_is_true (scm_num_eq_p (x,
y)) when testing the two Scheme numbers
x and
y for
equality, for example.
Return
#t if all parameters are numerically equal.
Return
#t if the list of parameters is monotonically
increasing.
Return
#t if the list of parameters is monotonically
decreasing.
Return
#t if the list of parameters is monotonically
non-decreasing.
Return
#t if the list of parameters is monotonically
non-increasing.
Return
#t if z is an exact or inexact number equal to
zero.
Return
#t if x is an exact or inexact number greater than
zero.
Return
#t if x is an exact or inexact number less than
zero.
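For example, with several arguments:
(= 2 2.0)      ⇒ #t
(< 1 2 3)      ⇒ #t
(< 1 3 2)      ⇒ #f
(<= 1 1 2)     ⇒ #t
(zero? 0.0)    ⇒ #t
(positive? -5) ⇒ #f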
The C arithmetic functions below always take two arguments, while the Scheme functions can take an arbitrary number. When you need to invoke them with just one argument, for example to compute the equivalent of (- z), pass SCM_UNDEFINED as the second argument.
Return z + 1.
Return z - 1.
Return the absolute value of x.
x must be a number with zero imaginary part. To calculate the
magnitude of a complex number, use
magnitude instead.
Return the maximum of all parameter values.
Return the minimum of all parameter values.
Round the inexact number x towards zero.
Round the inexact number x to the nearest integer. When exactly halfway between two integers, round to the even one.
Round the number x towards minus infinity.
Round the number x towards infinity.
Like
scm_truncate_number or
scm_round_number,
respectively, but these functions take and return
double
values.
These procedures accept two real numbers x and y, where the
divisor y must be non-zero.
euclidean-quotient returns the
integer q and
euclidean-remainder returns the real number
r such that x = q*y + r and
0 <= r < |y|.
euclidean/ returns both q and
r, and is more efficient than computing each separately. Note
that when y > 0,
euclidean-quotient returns
floor(x/y), otherwise it returns
ceiling(x/y).
Note that these operators are equivalent to the R6RS operators
div,
mod, and
div-and-mod.
(euclidean-quotient 123 10) ⇒ 12
(euclidean-remainder 123 10) ⇒ 3
(euclidean/ 123 10) ⇒ 12 and 3
(euclidean/ 123 -10) ⇒ -12 and 3
(euclidean/ -123 10) ⇒ -13 and 7
(euclidean/ -123 -10) ⇒ 13 and 7
(euclidean/ -123.2 -63.5) ⇒ 2.0 and 3.8
(euclidean/ 16/3 -10/7) ⇒ -3 and 22/21
These procedures accept two real numbers x and y, where the
divisor y must be non-zero.
floor-quotient returns the
integer q and
floor-remainder returns the real number
r such that q = floor(x/y) and
x = q*y + r.
floor/ returns
both q and r, and is more efficient than computing each
separately. Note that r, if non-zero, will have the same sign
as y.
When x and y are integers,
floor-remainder is
equivalent to the R5RS integer-only operator
modulo.
(floor-quotient 123 10) ⇒ 12
(floor-remainder 123 10) ⇒ 3
(floor/ 123 10) ⇒ 12 and 3
(floor/ 123 -10) ⇒ -13 and -7
(floor/ -123 10) ⇒ -13 and 7
(floor/ -123 -10) ⇒ 12 and -3
(floor/ -123.2 -63.5) ⇒ 1.0 and -59.7
(floor/ 16/3 -10/7) ⇒ -4 and -8/21
These procedures accept two real numbers x and y, where the
divisor y must be non-zero.
ceiling-quotient returns the
integer q and
ceiling-remainder returns the real number
r such that q = ceiling(x/y) and
x = q*y + r.
ceiling/ returns
both q and r, and is more efficient than computing each
separately. Note that r, if non-zero, will have the opposite sign
of y.
(ceiling-quotient 123 10) ⇒ 13
(ceiling-remainder 123 10) ⇒ -7
(ceiling/ 123 10) ⇒ 13 and -7
(ceiling/ 123 -10) ⇒ -12 and 3
(ceiling/ -123 10) ⇒ -12 and -3
(ceiling/ -123 -10) ⇒ 13 and 7
(ceiling/ -123.2 -63.5) ⇒ 2.0 and 3.8
(ceiling/ 16/3 -10/7) ⇒ -3 and 22/21
These procedures accept two real numbers x and y, where the
divisor y must be non-zero.
truncate-quotient returns the
integer q and
truncate-remainder returns the real number
r such that q is x/y rounded toward zero,
and x = q*y + r.
truncate/ returns
both q and r, and is more efficient than computing each
separately. Note that r, if non-zero, will have the same sign
as x.
When x and y are integers, these operators are
equivalent to the R5RS integer-only operators
quotient and
remainder.
(truncate-quotient 123 10) ⇒ 12
(truncate-remainder 123 10) ⇒ 3
(truncate/ 123 10) ⇒ 12 and 3
(truncate/ 123 -10) ⇒ -12 and 3
(truncate/ -123 10) ⇒ -12 and -3
(truncate/ -123 -10) ⇒ 12 and -3
(truncate/ -123.2 -63.5) ⇒ 1.0 and -59.7
(truncate/ 16/3 -10/7) ⇒ -3 and 22/21
These procedures accept two real numbers x and y, where the
divisor y must be non-zero.
centered-quotient returns the
integer q and
centered-remainder returns the real number
r such that x = q*y + r and
-|y/2| <= r < |y/2|.
centered/
returns both q and r, and is more efficient than computing
each separately.
Note that
centered-quotient returns x/y
rounded to the nearest integer. When x/y lies
exactly half-way between two integers, the tie is broken according to
the sign of y. If y > 0, ties are rounded toward
positive infinity, otherwise they are rounded toward negative infinity.
This is a consequence of the requirement that
-|y/2| <= r < |y/2|.
Note that these operators are equivalent to the R6RS operators
div0,
mod0, and
div0-and-mod0.
(centered-quotient 123 10) ⇒ 12
(centered-remainder 123 10) ⇒ 3
(centered/ 123 10) ⇒ 12 and 3
(centered/ 123 -10) ⇒ -12 and 3
(centered/ -123 10) ⇒ -12 and -3
(centered/ -123 -10) ⇒ 12 and -3
(centered/ 125 10) ⇒ 13 and -5
(centered/ 127 10) ⇒ 13 and -3
(centered/ 135 10) ⇒ 14 and -5
(centered/ -123.2 -63.5) ⇒ 2.0 and 3.8
(centered/ 16/3 -10/7) ⇒ -4 and -8/21
These procedures accept two real numbers x and y, where the
divisor y must be non-zero.
round-quotient returns the
integer q and
round-remainder returns the real number
r such that x = q*y + r and
q is x/y rounded to the nearest integer,
with ties going to the nearest even integer.
round/
returns both q and r, and is more efficient than computing
each separately.
Note that
round/ and
centered/ are almost equivalent, but
their behavior differs when x/y lies exactly half-way
between two integers. In this case,
round/ chooses the nearest
even integer, whereas
centered/ chooses in such a way to satisfy
the constraint -|y/2| <= r < |y/2|, which
is stronger than the corresponding constraint for
round/,
-|y/2| <= r <= |y/2|. In particular,
when x and y are integers, the number of possible remainders
returned by
centered/ is |y|, whereas the number of
possible remainders returned by
round/ is |y|+1 when
y is even.
(round-quotient 123 10) ⇒ 12
(round-remainder 123 10) ⇒ 3
(round/ 123 10) ⇒ 12 and 3
(round/ 123 -10) ⇒ -12 and 3
(round/ -123 10) ⇒ -12 and -3
(round/ -123 -10) ⇒ 12 and -3
(round/ 125 10) ⇒ 12 and 5
(round/ 127 10) ⇒ 13 and -3
(round/ 135 10) ⇒ 14 and -5
(round/ -123.2 -63.5) ⇒ 2.0 and 3.8
(round/ 16/3 -10/7) ⇒ -4 and -8/21
The following procedures accept any kind of number as arguments, including complex numbers.
Return the square root of z. Of the two possible roots (positive and negative), the one with a positive real part is returned, or if that’s zero then a positive imaginary part. Thus,
(sqrt 9.0)       ⇒ 3.0
(sqrt -9.0)      ⇒ 0.0+3.0i
(sqrt 1.0+1.0i)  ⇒ 1.09868411346781+0.455089860562227i
(sqrt -1.0-1.0i) ⇒ 0.455089860562227-1.09868411346781i
Return z1 raised to the power of z2.
Return the sine of z.
Return the cosine of z.
Return the tangent of z.
Return the arcsine of z.
Return the arccosine of z.
Return the arctangent of z, or of y/x.
Return e to the power of z, where e is the base of natural logarithms (2.71828…).
Return the natural logarithm of z.
Return the base 10 logarithm of z.
Return the hyperbolic sine of z.
Return the hyperbolic cosine of z.
Return the hyperbolic tangent of z.
Return the hyperbolic arcsine of z.
Return the hyperbolic arccosine of z.
Return the hyperbolic arctangent of z.
The R7RS name for the “escape” character (code point U+001B) is
#\escape.
Return
#t if x is a character, else
#f.
Fundamentally, the character comparison operations below are numeric comparisons of the character’s code points.
Return
#t if the code point of x is equal to the code point
of y, else
#f.
Return
#t if the code point of x is less than the code
point of y, else
#f.
Return
#t if the code point of x is less than or equal
to the code point of y, else
#f.
Return
#t if the code point of x is greater than the
code point of y, else
#f.
Return
#t if the case-folded code point of x is the same
as the case-folded code point of y, else
#f.
Return
#t if the case-folded code point of x is less
than the case-folded code point of y, else
#f.
Return
#t if the case-folded code point of x is less
than or equal to the case-folded code point of y, else
#f.
Return
#t if the case-folded code point of x is greater
than the case-folded code point of y, else
#f.
Return
#t if the case-folded code point of x is greater
than or equal to the case-folded code point of y, else
#f.
Return
#t if chr is alphabetic, else
#f.
Return
#t if chr is numeric, else
#f.
Return
#t if chr is whitespace, else
#f.
Return
#t if chr is uppercase, else
#f.
Return
#t if chr is lowercase, else
#f.
Return
#t if chr is either uppercase or lowercase, else
#f.
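A few examples of the character comparisons and predicates above:
(char<? #\a #\b)            ⇒ #t
(char=? #\A #\a)            ⇒ #f
(char-ci=? #\A #\a)         ⇒ #t
(char-alphabetic? #\a)      ⇒ #t
(char-numeric? #\7)         ⇒ #t
(char-whitespace? #\space)  ⇒ #t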
Return a symbol giving the two-letter name of the Unicode general category assigned to chr, or #f if no named category is assigned.
Valid code points are in the ranges 0 to #xD7FF inclusive or #xE000 to #x10FFFF inclusive; the C type used to hold such a code point is a signed, 32-bit integer.
Character set cursors are a means for iterating over the members of a character set. After creating a character set cursor with
char-set-cursor, a cursor can be dereferenced with
char-set-ref, advanced to the next member with
char-set-cursor-next. Whether a cursor has passed past the last
element of the set can be checked with
end-of-char-set?.
Additionally, mapping and (un-)folding procedures for character sets are provided.
Return a cursor into the character set cs.
Return the character at the current cursor position
cursor in the character set cs. It is an error to
pass a cursor for which
end-of-char-set? returns true.
Advance the character set cursor cursor to the next
character in the character set cs. It is an error if the
cursor given satisfies
end-of-char-set?.
Return
#t if cursor has reached the end of a
character set,
#f otherwise.
Fold the procedure kons over the character set cs, initializing it with knil.
This is a fundamental constructor for character sets.
This is a fundamental constructor for character sets.
Apply proc to every character in the character set cs. The return value is not specified.
Map the procedure proc over every character in cs. proc must be a character -> character procedure.
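As a sketch of cursor-based iteration, the following helper collects the members of a set into a list; the name chars-of is ours, not part of Guile, and the order of the collected members is not specified:
(use-modules (srfi srfi-14))

(define (chars-of cs)
  (let loop ((cur (char-set-cursor cs)) (result '()))
    (if (end-of-char-set? cur)
        (reverse result)
        (loop (char-set-cursor-next cs cur)
              (cons (char-set-ref cs cur) result)))))

(chars-of (char-set #\a #\b #\c)) ⇒ (#\a #\b #\c)  ; in some order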
New character sets are produced with these procedures.
Return a newly allocated character set containing all characters in cs.
Return a character set containing all given characters.
Convert the character list list to a character set. If the character set base_cs is given, the characters in this set are also included in the result.
Convert the character list list to a character set. The characters are added to base_cs and base_cs is returned.
Convert the string str to a character set. If the character set base_cs is given, the characters in this set are also included in the result.
Convert the string str to a character set. The characters from the string are added to base_cs, and base_cs is returned.
Return a character set containing every character from cs so that it satisfies pred. If provided, the characters from base_cs are added to the result.
Return a character set containing every character from cs so that it satisfies pred. The characters are added to base_cs and base_cs is returned.
Coerces x into a char-set. x may be a string, character or char-set. A string is converted to the set of its constituent characters; a character is converted to a singleton set; a char-set is returned as-is.
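For example (char-set-contains? is described in the next section):
(define vowels (string->char-set "aeiou"))
(char-set-contains? vowels #\e)               ⇒ #t
(char-set-contains? (char-set #\a #\b) #\c)   ⇒ #f
(char-set-contains? (->char-set "xyz") #\y)   ⇒ #t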
Access the elements and other information of a character set with these procedures.
Returns an association list containing debugging information for cs. The association list has the following entries.
char-set
The char-set itself
len
The number of groups of contiguous code points the char-set contains
ranges
A list of lists where each sublist is a range of code points and their associated characters
The return value of this function cannot be relied upon to be consistent between versions of Guile and should not be used in code.
Return #t if the character ch is contained in the character set cs, or #f otherwise.
Return a true value if every character in the character set cs satisfies the predicate pred.
Return a true value if any character in the character set cs satisfies the predicate pred.
Character sets can be manipulated with the common set algebra operations, such as union, complement, intersection, etc. All of these procedures provide side-effecting variants, which modify their character set argument(s).
Add all character arguments to the first argument, which must be a character set.
Delete all character arguments from the first argument, which must be a character set.
Add all character arguments to the first argument, which must be a character set.
Delete all character arguments from the first argument, which must be a character set.
Return the complement of the character set cs.
Note that the complement of a character set is likely to contain many
reserved code points (code points that are not associated with
characters). It may be helpful to modify the output of
char-set-complement by computing its intersection with the set
of designated code points,
char-set:designated.
Return the union of all argument character sets.
Return the intersection of all argument character sets.
Return the difference of all argument character sets.
Return the exclusive-or of all argument character sets.
Return the difference and the intersection of all argument character sets.
Return the complement of the character set cs.
Return the union of all argument character sets.
Return the intersection of all argument character sets.
Return the difference of all argument character sets.
Return the exclusive-or of all argument character sets.
Return the difference and the intersection of all argument character sets.
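A brief sketch of the set algebra in use (char-set-contains? was described in the previous section):
(define digits (string->char-set "0123456789"))
(define hex    (char-set-union digits (string->char-set "abcdefABCDEF")))

(char-set-contains? hex #\E)                               ⇒ #t
(char-set-contains? (char-set-difference hex digits) #\7)  ⇒ #f
(char-set-contains? (char-set-complement digits) #\7)      ⇒ #f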
The read syntax for strings is an arbitrarily long sequence of
characters enclosed in double quotes (
").
Backslash is an escape character and can be used to insert the following
special characters.
\" and
\\ are R5RS standard,
\| is R7RS standard, the next seven are R6RS standard —
notice they follow C syntax — and the remaining four are Guile
extensions.
\\
Backslash character.
\"
Double quote character (an unescaped
" is otherwise the end
of the string).
\|
Vertical bar character.
\a
Bell character (ASCII 7).
\f
Formfeed character (ASCII 12).
\n
Newline character (ASCII 10).
\r
Carriage return character (ASCII 13).
\t
Tab character (ASCII 9).
\v
Vertical tab character (ASCII 11).
\b
Backspace character (ASCII 8).
\0
NUL character (ASCII 0).
\followed by newline (ASCII 10)
Nothing. This way if
\ is the last character in a line, the
string will continue with the first character from the next line,
without a line break.
If the
hungry-eol-escapes reader option is enabled, which is not
the case by default, leading whitespace on the next line is discarded.
"foo\
 bar"
⇒ "foo bar"

(read-enable 'hungry-eol-escapes)
"foo\
 bar"
⇒ "foobar"
\xHH
Character code given by two hexadecimal digits. For example
\x7f for an ASCII DEL (127).
\uHHHH
Character code given by four hexadecimal digits. For example
\u0100 for a capital A with macron (U+0100).
\UHHHHHH
Character code given by six hexadecimal digits. For example
\U010402.
The following are examples of string literals:
"foo"
"bar plonk"
"Hello World"
"\"Hi\", he said."
The three escape sequences
\xHH,
\uHHHH and
\UHHHHHH were
chosen to not break compatibility with code written for previous versions of
Guile. The R6RS specification suggests a different, incompatible syntax for hex
escapes:
\xHHHH; – a character code followed by one to eight hexadecimal
digits terminated with a semicolon. If this escape format is desired instead,
it can be enabled with the reader option
r6rs-hex-escapes.
(read-enable 'r6rs-hex-escapes)
For more on reader options, see Scheme Read.
The following procedures can be used to check whether a given string fulfills some specified property.
Return
#t if obj is a string, else
#f.
Returns
1 if obj is a string,
0 otherwise.
Return
#t if str’s length is zero, and
#f otherwise.
(string-null? "")  ⇒ #t
y                  ⇒ "foo"
(string-null? y)   ⇒ #f
Check if char_pred is true for any character in string s.
char_pred can be a character to check for any equal to that, or a character set (see Character Sets) to check for any in that set, or a predicate procedure to call.
For a procedure, calls
(char_pred c) are made
successively on the characters from start to end. If
char_pred returns true (ie. non-
#f),
string-any
stops and that return value is the return from
string-any. The
call on the last character (ie. at end-1), if that
point is reached, is a tail call.
If there are no characters in s (ie. start equals
end) then the return is
#f.
Check if char_pred is true for every character in string s.
char_pred can be a character to check for every character equal to that, or a character set (see Character Sets) to check for every character being in that set, or a predicate procedure to call.
For a procedure, calls
(char_pred c) are made
successively on the characters from start to end. If
char_pred returns
#f,
string-every stops and
returns
#f. The call on the last character (ie. at
end-1), if that point is reached, is a tail call and the
return from that call is the return from
string-every.
If there are no characters in s (ie. start equals
end) then the return is
#t.
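For example:
(string-any char-numeric? "level 42")          ⇒ #t
(string-any #\! "hello")                       ⇒ #f
(string-every char-alphabetic? "hello")        ⇒ #t
(string-every char-alphabetic? "hello there")  ⇒ #f
(string-every char-numeric? "")                ⇒ #t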
The procedures in this section are similar to the character ordering predicates (see Characters), but are defined on character sequences.
The first set is specified in R5RS and has names that end in
?.
The second set is specified in SRFI-13 and the names do not end in ?.
The predicates ending in
-ci ignore the character case
when comparing strings. For now, case-insensitive comparison is done
using the R5RS rules, where every lower-case character that has a
single character upper-case form is converted to uppercase before
comparison. See the (ice-9 i18n) module for locale-dependent string comparison.
Lexicographic equality predicate; return
#t if all strings are
the same length and contain the same characters in the same positions,
otherwise return
#f.
The procedure
string-ci=? treats upper and lower case
letters as though they were the same character, but
string=? treats upper and lower case as distinct
characters.
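For example:
(string=? "Hello" "hello")     ⇒ #f
(string-ci=? "Hello" "hello")  ⇒ #t
(string=? "foo" "foo" "foo")   ⇒ #t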
Lexicographic ordering predicate; return
#t if, for every pair of
consecutive string arguments str_i and str_i+1, str_i is
lexicographically less than str_i+1.
Lexicographic ordering predicate; return
#t if, for every pair of
consecutive string arguments str_i and str_i+1, str_i is
lexicographically less than or equal to str_i+1.
Lexicographic ordering predicate; return
#t if, for every pair of
consecutive string arguments str_i and str_i+1, str_i is
lexicographically greater than str_i+1.
Lexicographic ordering predicate; return
#t if, for every pair of
consecutive string arguments str_i and str_i+1, str_i is
lexicographically greater than or equal to str_i+1.
Case-insensitive string equality predicate; return
#t if
all strings are the same length and their component
characters match (ignoring case) at each position; otherwise
return
#f.
Case insensitive lexicographic ordering predicate; return
#t if,
for every pair of consecutive string arguments str_i and
str_i+1, str_i is lexicographically less than str_i+1
regardless of case.
Case insensitive lexicographic ordering predicate; return
#t if,
for every pair of consecutive string arguments str_i and
str_i+1, str_i is lexicographically less than or equal to
str_i+1 regardless of case.
Case insensitive lexicographic ordering predicate; return
#t if,
for every pair of consecutive string arguments str_i and
str_i+1, str_i is lexicographically greater than
str_i+1 regardless of case.
Case insensitive lexicographic ordering predicate; return
#t if,
for every pair of consecutive string arguments str_i and
str_i+1, str_i is lexicographically greater than or equal to
str_i+1 regardless of case.
Return
#f if s1 and s2 are not equal, a true
value otherwise.
Return
#f if s1 and s2 are equal, a true
value otherwise.
Return
#f if s1 is greater or equal to s2, a
true value otherwise.
Return
#f if s1 is less or equal to s2, a
true value otherwise.
Return
#f if s1 is greater to s2, a true
value otherwise.
Return
#f if s1 is less to s2, a true value
otherwise.
Return
#f if s1 and s2 are not equal, a true
value otherwise. The character comparison is done
case-insensitively.
Return
#f if s1 and s2 are equal, a true
value otherwise. The character comparison is done
case-insensitively.
Return
#f if s1 is greater or equal to s2, a
true value otherwise. The character comparison is done
case-insensitively.
Return
#f if s1 is less or equal to s2, a
true value otherwise. The character comparison is done
case-insensitively.
Return
#f if s1 is greater to s2, a true
value otherwise. The character comparison is done
case-insensitively.
Return
#f if s1 is less to s2, a true value
otherwise. The character comparison is done
case-insensitively.
Compute a hash value for s. The optional argument bound is a non-negative exact integer specifying the range of the hash function. A positive value restricts the return value to the range [0,bound).
Compute a hash value for s. The optional argument bound is a non-negative exact integer specifying the range of the hash function. A positive value restricts the return value to the range [0,bound).
Because the same visual appearance of an abstract Unicode character can
be obtained via multiple sequences of Unicode characters, even the
case-insensitive string comparison functions described above may return
#f when presented with strings containing different
representations of the same character. For example, the Unicode
character “LATIN SMALL LETTER S WITH DOT BELOW AND DOT ABOVE” can be
represented with a single character (U+1E69) or by the character “LATIN
SMALL LETTER S” (U+0073) followed by the combining marks “COMBINING
DOT BELOW” (U+0323) and “COMBINING DOT ABOVE” (U+0307).
For this reason, it is often desirable to ensure that the strings to be compared are using a mutually consistent representation for every character. The Unicode standard defines two methods of normalizing the contents of strings: Decomposition, which breaks composite characters into a set of constituent characters with an ordering defined by the Unicode Standard; and composition, which performs the converse.
There are two decomposition operations. “Canonical decomposition” produces character sequences that share the same visual appearance as the original characters, while “compatibility decomposition” produces ones whose visual appearances may differ from the originals but which represent the same abstract character.
These operations are encapsulated in the following set of normalization forms:
Characters are decomposed to their canonical forms.
Characters are decomposed to their compatibility forms.
Characters are decomposed to their canonical forms, then composed.
Characters are decomposed to their compatibility forms, then composed.
The functions below put their arguments into one of the forms described above.
Return the
NFD normalized form of s.
Return the
NFKD normalized form of s.
Return the
NFC normalized form of s.
Return the
NFKC normalized form of s.
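For example, the two representations mentioned above compare equal once they are normalized to the same form (a sketch; the \u escapes are the hex escapes described earlier in this chapter):
(define composed   "\u1E69")         ; single precomposed character
(define decomposed "s\u0323\u0307")  ; s plus two combining marks

(string=? composed decomposed)                         ⇒ #f
(string=? (string-normalize-nfc decomposed) composed)  ⇒ #t
(string=? (string-normalize-nfd composed) decomposed)  ⇒ #t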
Search through the string s from left to right, returning the index of the first occurrence of a character which satisfies char_pred (a character to compare against, a character set to test membership in, or a predicate procedure to call, as for string-any above). Return #f if no match is found.
Search through the string s from right to left, returning the index of the last occurrence of a character which satisfies char_pred (a character, character set, or predicate procedure, as above). Return #f if no match is found.
Return the length of the longest common prefix of the two strings.
Return the length of the longest common prefix of the two strings, ignoring character case.
Return the length of the longest common suffix of the two strings.
Return the length of the longest common suffix of the two strings, ignoring character case.
Is s1 a prefix of s2?
Is s1 a prefix of s2, ignoring character case?
Is s1 a suffix of s2?
Is s1 a suffix of s2, ignoring character case?
Search through the string s from right to left, returning the index of the last occurrence of a character which
Return
#f if no match is found.
Search through the string s from left to right, returning the index of the first occurrence of a character which
Search through the string s from right to left, returning the index of the last occurrence of a character which
Return the count of the number of characters in the string s which
Does string s1 contain string s2? Return the index in s1 where s2 occurs as a substring, or false. The optional start/end indices restrict the operation to the indicated substrings.
Does string s1 contain string s2? Return the index in s1 where s2 occurs as a substring, or false. The optional start/end indices restrict the operation to the indicated substrings. Character comparison is done case-insensitively.
Next: Reversing and Appending Strings, Previous: String Searching, Up: Strings [Contents][Index]..
Destructively titlecase every first character in a word in str.
Next: Mapping Folding and Unfolding, Previous: Alphabetic Case Mapping, Up: Strings [Contents][Index]
Reverse the string str. The optional arguments start and end delimit the region of str to operate on.
Reverse the string str in-place. The optional arguments start and end delimit the region of str to operate on. The return value is unspecified.
Return a newly allocated string whose characters form the concatenation of the given strings, arg ....
(let ((h "hello ")) (string-append h "world")) ⇒ "hello world"
Like
string-append, but the result may share memory
with the argument strings.
Append the elements (which must be strings) of ls together into a single string. Guaranteed to return a freshly allocated string.
Without optional arguments, this procedure is equivalent to
(string-concatenate (reverse ls))
If the optional argument final_string is specified, it is consed onto the beginning to ls before performing the list-reverse and string-concatenate operations. If end is given, only the characters of final_string up to index end are used.
Guaranteed to return a freshly allocated string.
Like
string-concatenate, but the result may share memory
with the strings in the list ls.
Like
string-concatenate-reverse, but the result may
share memory with the strings in the ls arguments.
Next: Miscellaneous String Operations, Previous: Reversing and Appending Strings, Up: Strings [Contents][Index]
proc is a char->char procedure, it is mapped over s. The order in which the procedure is applied to the string elements is not specified.
proc is a char->char procedure, it is mapped over s. The order in which the procedure is applied to the string elements is not specified. The string s is modified in-place, the return value is not specified.
proc is mapped over s in left-to-right order. The return value is not specified.
Call
(proc i) for each index i in s, from left to
right.
For example, to change characters to alternately upper and lower case,
(define str (string-copy "studly")) (string-for-each-index (lambda (i) (string-set! str i ((if (even? i) char-upcase char-downcase) (string-ref str i)))) str) str ⇒ "StUdLy"
Fold kons over the characters of s, with knil as the terminating element, from left to right. kons must expect two arguments: The actual character and the last result of kons’ application.
Fold kons over the characters of s, with knil as the terminating element, from right to left. kons must expect two arguments: The actual character and the last result of kons’ application.
(lambda (x) ).
Next: Representing Strings as Bytes, Previous: Mapping Folding and Unfolding, Up: Strings [Contents][Index] returnsize to.
Next:.
Previous:
Next: Symbols, Previous: Strings, Up: Simple Data Types [Contents][Index] R6RS I/O Ports).
Next:.
Next: and Integer Lists, Previous: Bytevector Manipulation, Up: Bytevectors [Contents][Index]
The contents of a bytevector can be interpreted as a sequence of integers of any given size, sign, and endianness.
(let ((bv (make-bytevector 4))) (bytevector-u8-set! bv 0 #x12) (bytevector-u8-set! bv 1 #x34) (bytevector-u8-set! bv 2 #x56) (bytevector-u8-set! bv 3 #x78) (map (lambda (number) (number->string number 16)) (list (bytevector-u8-ref bv 0) (bytevector-u16-ref bv 0 (endianness big)) (bytevector-u32-ref bv 0 (endianness little))))) ⇒ ("12" "1234" "78563412")
The most generic procedures to interpret bytevector contents as integers are described below.
Return the size-byte long unsigned integer at index index in bv, decoded according to endianness.
Return the size-byte long signed integer at index index in bv, decoded according to endianness.
Set the size-byte long unsigned integer at index to value, encoded according to endianness.
Set the size-byte long signed integer at index to value, encoded according to endianness.
The following procedures are similar to the ones above, but specialized to a given integer size:
Return the unsigned n-bit (signed) integer (where n is 8, 16, 32 or 64) from bv at index, decoded according to endianness.
Store value as an n-bit (signed) integer (where n is 8, 16, 32 or 64) in bv at index, encoded according to endianness.
Finally, a variant specialized for the host’s endianness is available
for each of these functions (with the exception of the
u8
accessors, for obvious reasons):
Return the unsigned n-bit (signed) integer (where n is 8, 16, 32 or 64) from bv at index, decoded according to the host’s native endianness.
Store value as an n-bit (signed) integer (where n is 8, 16, 32 or 64) in bv at index, encoded according to the host’s native endianness.
Next: Bytevectors as Floats, Previous: Bytevectors as Integers, Up: Bytevectors [Contents][Index] integers of size bytes representing the contents of bv, decoded according to endianness.
Return a list of signed integers of size bytes representing the contents of bv, decoded according to endianness.
Return a new bytevector containing the unsigned integers listed in lst and encoded on size bytes according to endianness.
Return a new bytevector containing the signed integers listed in lst and encoded on size bytes according to endianness.
Next: Bytevectors as Strings, Previous: Bytevectors and Integer Lists, Up: Bytevectors [Contents][Index]
Bytevector contents can also be accessed as IEEE-754 single- or double-precision floating point numbers (respectively 32 and 64-bit long) using the procedures described here.
Return the IEEE-754 single-precision floating point number from bv at index according to endianness.
Store real number value in bv at index according to endianness.
Specialized procedures are also available:
Return the IEEE-754 single-precision floating point number from bv at index according to the host’s native endianness.
Store real number value in bv at index according to the host’s native endianness.
Next: Bytevectors as Arrays, Previous: Bytevectors as Floats, Up: Bytevectors [Contents][Index]
Bytevector contents can also be interpreted as Unicode strings encoded in one of the most commonly available encoding formats. See Representing Strings as Bytes, for a more generic interface.
(utf8->string (u8-list->bytevector '(99 97 102 101))) ⇒ "cafe" (string->utf8 "café") ;; SMALL LATIN LETTER E WITH ACUTE ACCENT ⇒ #vu8(99 97 102 195 169)
Return a newly allocated bytevector that contains the UTF-8, UTF-16, or
UTF-32 (aka. UCS-4) encoding of str. For UTF-16 and UTF-32,
endianness should be the symbol
big or
little; when omitted,
it defaults to big endian.
Return a newly allocated string that contains from the UTF-8-, UTF-16-,
or UTF-32-decoded contents of bytevector utf. For UTF-16 and UTF-32,
endianness should be the symbol
big or
little; when omitted,
it defaults to big endian.
Next: Bytevectors as Uniform Vectors, Previous: Bytevectors as Strings, Up: Bytevectors [Contents][Index]
As an extension to the R6RS, Guile allows bytevectors to be manipulated with the array procedures (see Arrays). When using these APIs, bytes are accessed one at a time as 8-bit unsigned integers:
(define bv #vu8(0 1 2 3)) (array? bv) ⇒ #t (array-rank bv) ⇒ 1 (array-ref bv 2) ⇒ 2 ;; Note the different argument order on array-set!. (array-set! bv 77 2) (array-ref bv 2) ⇒ 77 (array-type bv) ⇒ vu8
Previous: Bytevectors as Arrays, Up: Bytevectors [Contents][Index]
Bytevectors may also be accessed with the SRFI-4 API. See SRFI-4 and Bytevectors, for more information.
Next: Variables, Previous: Symbol Data, Up: Symbols [Contents][Index].
Next:.
Next: Read Syntax, Previous: Symbol Primitives, Up: Symbols [Contents][Index], one for a symbol’s property list, and one
returns
.
Next: Symbol Uninterned, Previous: Symbol Props, Up: Symbols [Contents][Index] in The Revised^5 Report on Scheme)),.
Alternatively, if you enable the
r7rs-symbols read option (see
see Scheme Read), you can write arbitrary symbols using the same
notation used for strings, except delimited by vertical bars instead of
double quotes.
|foo bar| |\x3BB; is a greek lambda| |\| is a vertical bar|
Note that there’s also an
r7rs-symbols print option
(see Scheme Write). To enable the use of this notation, evaluate
one or both of the following expressions:
(read-enable 'r7rs-symbols) (print-enable 'r7rs-symbols)
Previous:.
Next:
:.
Next: Coding With Keywords, Up: Keywords [Contents][Index]
Keywords are useful in contexts where a program or procedure wants to be able to accept a large number of optional arguments without making its interface unmanageable.
To illustrate this, consider a hypothetical
make-window
procedure, which creates a new window on the screen for drawing into
using some graphical toolkit. There are many parameters that the caller
might like to specify, but which could also be sensibly defaulted, for
example:
If
make-window did not use keywords, the caller would have to
pass in a value for each possible argument, remembering the correct
argument order and using a special value to indicate the default value
for that argument:
(make-window 'default ;; Color depth 'default ;; Background color 800 ;; Width 100 ;; Height …) ;; More make-window arguments
With keywords, on the other hand, defaulted arguments are omitted, and non-default arguments are clearly tagged by the appropriate keyword. As a result, the invocation becomes much clearer:
(make-window #:width 800 #:height 100)
On the other hand, for a simpler procedure with few arguments, the use
of keywords would be a hindrance rather than a help. The primitive
procedure
cons, for example, would not be improved if it had to
be invoked as
(cons #:car x #:cdr y)
So the decision whether to use keywords or not is purely pragmatic: use them if they will clarify the procedure invocation at point of call.
Next: Keyword Read Syntax, Previous: Why Use Keywords?, Up: Keywords [Contents][Index]
If a procedure wants to support keywords, it should take a rest argument and then use whatever means is convenient to extract keywords and their corresponding arguments from the contents of that rest argument.
The following example illustrates the principle: the code for
make-window uses a helper procedure called
get-keyword-value to extract individual keyword arguments from
the rest argument.
(define (get-keyword-value args keyword default) (let ((kv (memq keyword args))) (if (and kv (>= (length kv) 2)) (cadr kv) default))) (define (make-window . args) (let ((depth (get-keyword-value args #:depth screen-depth)) (bg (get-keyword-value args #:bg "white")) (width (get-keyword-value args #:width 800)) (height (get-keyword-value args #:height 100)) …) …))
But you don’t need to write
get-keyword-value. The
(ice-9
optargs) module provides a set of powerful macros that you can use to
implement keyword-supporting procedures like this:
(use-modules (ice-9 optargs)) (define (make-window . args) (let-keywords args #f ((depth screen-depth) (bg "white") (width 800) (height 100)) ...))
Or, even more economically, like this:
(use-modules (ice-9 optargs)) (define* (make-window #:key (depth screen-depth) (bg "white") (width 800) (height 100)) ...)
For further details on
let-keywords,
define* and other
facilities provided by the
(ice-9 optargs) module, see
Optional Arguments.
To handle keyword arguments from procedures implemented in C,
use
scm_c_bind_keyword_arguments (see Keyword Procedures).
Next: Keyword Procedures, Previous: Coding With Keywords, Up: Keywords [Contents][Index]
Guile, by default, only recognizes a keyword syntax that is compatible
with R5RS. A token of the form
#:NAME, where
NAME has the
same syntax as a Scheme symbol (see Symbol Read Syntax), is the
external representation of the keyword named
NAME. Keyword
objects print using this syntax as well, so values containing keyword
objects can be read back into Guile. When used in an expression,
keywords are self-quoting objects.
If the
keyword read option is set to
'prefix, Guile also
recognizes the alternative read syntax
:NAME. Otherwise, tokens
of the form
:NAME are read as symbols, as required by R5RS.)
Previous: Keyword Read Syntax, Up: Keywords [Contents][Index]
Return
#t if the argument obj is a keyword, else
#f.
Return the symbol with the same name as keyword.
Return the keyword with the same name as symbol.
Equivalent to
scm_is_true (scm_keyword_p (obj)).
Equivalent to
scm_symbol_to_keyword (scm_from_locale_symbol
(name)) and
scm_symbol_to_keyword (scm_from_locale_symboln
(name, len)), respectively._keyword.
Equivalent to
scm_symbol_to_keyword (scm_from_latin1_symbol
(name)) and
scm_symbol_to_keyword (scm_from_utf8_symbol
(name)), respectively.
SCM_UNDEFINED)
Extract the specified keyword arguments from rest, which is not
modified. If the keyword argument keyword1 is present in
rest with an associated value, that value is stored in the
variable pointed to by argp1, otherwise the variable is left
unchanged. Similarly for the other keywords and argument pointers up to
keywordN and argpN. The argument list to
scm_c_bind_keyword_arguments must be terminated by
SCM_UNDEFINED.
Note that since the variables pointed to by argp1 through
argpN are left unchanged if the associated keyword argument is not
present, they should be initialized to their default values before
calling
scm_c_bind_keyword_arguments. Alternatively, you can
initialize them to
SCM_UNDEFINED before the call, and then use
SCM_UNBNDP after the call to see which ones were provided.
If an unrecognized keyword argument is present in rest and
flags does not contain
SCM_ALLOW_OTHER_KEYS, or if
non-keyword arguments are present and flags does not contain
SCM_ALLOW_NON_KEYWORD_ARGUMENTS, an exception is raised.
subr should be the name of the procedure receiving the keyword
arguments, for purposes of error reporting.
For example:
SCM k_delimiter; SCM k_grammar; SCM sym_infix; SCM my_string_join (SCM strings, SCM rest) { SCM delimiter = SCM_UNDEFINED; SCM grammar = sym_infix; scm_c_bind_keyword_arguments ("my-string-join", rest, 0, k_delimiter, &delimiter, k_grammar, &grammar, SCM_UNDEFINED); if (SCM_UNBNDP (delimiter)) delimiter = scm_from_utf8_string (" "); return scm_string_join (strings, delimiter, grammar); } void my_init () { k_delimiter = scm_from_utf8_keyword ("delimiter"); k_grammar = scm_from_utf8_keyword ("grammar"); sym_infix = scm_from_utf8_symbol ("infix"); scm_c_define_gsubr ("my-string-join", 1, 0, 1, my_string_join); }
Previous:.
Next: Smobs, Previous: Simple Data Types, Up: API Reference [Contents][Index]
This chapter describes Guile’s compound data types. By compound we mean that the primary purpose of these data types is to act as containers for other kinds of data (including other compound objects). For instance, a (non-uniform) vector with length 5 is a container that can hold five arbitrary Scheme objects.
The various kinds of container object differ from each other in how their memory is allocated, how they are indexed, and how particular values can be looked up within them.
Next: Lists, Up: Compound Data Types [Contents][Index] Expression Syntax. The correct way to try these examples is as follows.
'(1 . 2) ⇒ (1 . 2) '(foo . bar) ⇒ .
Return a newly allocated pair whose car is x and whose
cdr is y. The pair is guaranteed to be different (in the
sense of
eq?) from every previously existing object.
Return
#t if x is a pair; otherwise return
#f.
Return 1 when x is a pair; otherwise return 0. car of a pair, or the car of the cdr of a pair, etc., the procedures
called
caar,
cadr and so on are also predefined. However,
using these procedures is often detrimental to readability, and
error-prone. Thus, accessing the contents of a list is usually better
achieved using pattern matching techniques (see Pattern Matching).
Return the car or the cdr of pair, respectively.
These two macros are the fastest way to access the car or cdr of a pair; they can be thought of as compiling into a single memory reference.
These macros do no checking at all. The argument pair must be a valid pair.
These procedures are compositions of
car and
cdr, where
for example
caddr could be defined by
(define caddr (lambda (x) (car (cdr (cdr x)))))
cadr,
caddr and
cadddr pick out the second, third
or fourth elements of a list, respectively. SRFI-1 provides the same
under the names
second,
third and
fourth
(see SRFI-1 Selectors).
Stores value in the car field of pair. The value returned
by
set-car! is unspecified.
Stores value in the cdr field of pair. The value returned
by
set-cdr! is unspecified.
Next: Vectors, Previous: Pairs, Up: Compound Data Types [Contents][Index]
A very important data type in Scheme—as well as in all other Lisp dialects—is the data type list.6
This is the short definition of what a list is:
(),
Next:).
Next:.
Next: List Selection, Previous: List Predicates, Up: Lists [Contents][Index] ....
scm_list_n takes a variable number of arguments, terminated by
the special
SCM_UNDEFINED. That final
SCM_UNDEFINED is
not included in the list. None of elem … can
themselves be
SCM_UNDEFINED, or
scm_list_n will
terminate at that point.).
Next: Append/Reverse, Previous: List Constructors, Up: Lists [Contents][Index] and
list-cdr-ref are identical. It may help to
think of
list-cdr-ref as accessing the kth cdr of the list,
or returning the results of cdring k times down lst.
Copy the first k elements from lst into a new list, and return it.
Next: Searching, Previous: Append/Reverse, Up: Lists [Contents][Index]
The following procedures modify an existing list, either by changing elements of the list, or by changing the list structure itself.
Set the kth element of list to val.
Set the kth cdr of list to val.
Return a newly-created copy of lst with elements
eq? to item removed. This procedure mirrors
memq:
delq compares elements of lst against
item with
eq?.
Return a newly-created copy of lst with elements
eqv? to item removed. This procedure mirrors
memv:
delv compares elements of lst against
item with
eqv?.
Return a newly-created copy of lst with elements
equal? to item removed. This procedure mirrors
member:
delete compares elements of lst
against item with
equal?.
See also SRFI-1 which has an extended
delete (SRFI-1 Deleting), and also an
lset-difference which can delete
multiple items in one call (SRFI-1 Set Operations).
These procedures are destructive versions of
delq,
delv
and
delete: they modify the pointers in the existing lst
rather than creating a new list. Caveat evaluator: Like other
destructive list functions, these functions cannot modify the binding of
lst, and so cannot be used to delete the first element of
lst destructively.
Like
delq!, but only deletes the first occurrence of
item from lst. Tests for equality using
eq?. See also
delv1! and
delete1!.
Like
delv!, but only deletes the first occurrence of
item from lst. Tests for equality using
eqv?. See also
delq1! and
delete1!.
Like
delete!, but only deletes the first occurrence of
item from lst. Tests for equality using
equal?. See also
delq1! and
delv1!.
Return a list containing all elements from lst which satisfy the predicate pred. The elements in the result list have the same order as in lst. The order in which pred is applied to the list elements is not specified.
filter does not change lst, but the result may share a
tail with it.
filter! may modify lst to construct its
return.
Next: List Mapping, Previous: List Modification, Up: Lists [Contents][Index].
Return the first sublist of lst whose car is
eq?
eqv?
equal? to x where the sublists of lst are
the non-empty lists returned by
(list-tail lst
k) for k less than the length of lst. If
x does not occur in lst, then
#f (not the
empty list) is returned.
See also SRFI-1 which has an extended
member function
(SRFI-1 Searching).
Previous:).
Next:.
Next:. Like strings, vectors do not have to
be quoted.
The following are examples of the read syntax for vectors; where the first vector only contains numbers and the second three different object types: a string, a symbol and a number in hexadecimal notation.
#(1 2 3) #("Hello" foo #xdeadbeef)
Next: Accessing from C, Previous: Vector Creation, Up: Vectors [Contents][Index]
vector-length and
vector-ref return information about a
given vector, respectively its size and the elements that are contained
in the vector.
Return the number of elements in vector as an exact integer.
Return the number of elements in vec as a
size_t.
Return the contents of position k of vec. k must be a valid index of vec.
(vector-ref #(1 1 2 3 5 8 13 21) 5) ⇒ 8 (vector-ref #(1 1 2 3 5 8 13 21) (let ((i (round (* 2 (acos -1))))) (if (inexact? i) (inexact->exact i) i))) ⇒ 13
Return the contents of position k (a
size_t) of
vec.
A vector created by one of the dynamic vector constructor procedures (see Vector Creation) can be modified using the following procedures.
NOTE: According to R5RS, it is an error to use any of these procedures on a literally read vector, because such vectors should be considered as constants. Currently, however, Guile does not detect this error.
Store obj in position k of vec. k must be a valid index of vec. The value returned by ‘vector-set!’ is unspecified.
(let ((vec (vector 0 '(2 2 2 2) "Anna"))) (vector-set! vec 1 '("Sue" "Sue")) vec) ⇒ #(0 ("Sue" "Sue") "Anna")
Store obj in position k (a
size_t) of vec.
Store fill in every position of vec. The value
returned by
vector-fill! is unspecified.
Return a copy of vec.
Copy elements from vec1, positions start1 to end1, to vec2 starting at position start2. start1 and start2 are inclusive indices; end1 is exclusive.
vector-move-left! copies elements in leftmost order.
Therefore, in the case where vec1 and vec2 refer to the
same vector,
vector-move-left! is usually appropriate when
start1 is greater than start2.
Copy elements from vec1, positions start1 to end1, to vec2 starting at position start2. start1 and start2 are inclusive indices; end1 is exclusive.
vector-move-right! copies elements in rightmost order.
Therefore, in the case where vec1 and vec2 refer to the
same vector,
vector-move-right! is usually appropriate when
start1 is less than start2.
Next:);
Previous:.
Next:: VLists, Previous: Bit Vectors, Up: Compound Data Types [Contents][Index]
Arrays are a collection of cells organized into an arbitrary number of dimensions. Each cell can be accessed in constant time by supplying an index for each dimension.
In the current implementation, an array uses a vector of some kind for
the actual storage of its elements. Any kind of.
The array procedures are all polymorphic, treating strings, uniform numeric vectors, bytevectors, bit vectors and ordinary vectors as one dimensional arrays.
Next: 3x3 matrix with index ranges 0..2 and 0..2.
#u32(0 1 2)
is a uniform u8 array of rank 1.
#2u32@2@3((1 2) (2 3))
is a uniform u8))
Previous:.
Next: Record Overview, Previous: Arrays, Up: Compound Data Types [Contents][Index]
The
(ice-9 vlist) module provides an implementation of the VList
data structure designed by Phil Bagwell in 2002. VLists are immutable lists,
which can contain any Scheme object. They improve on standard Scheme linked
lists in several areas:
The idea behind VLists is to store vlist elements in increasingly large
contiguous blocks (implemented as vectors here). These blocks are linked to one
another using a pointer to the next block and an offset within that block. The
size of these blocks form a geometric series with ratio
block-growth-factor (2 by default).
The VList structure also serves as the basis for the VList-based hash lists or “vhashes”, an immutable dictionary type (see VHashes).
However, the current implementation in
(ice-9 vlist) has several
noteworthy shortcomings:
vlist-consmutates part of its internal structure, which makes it non-thread-safe. This could be fixed, but it would slow down
vlist-cons.
vlist-consalways allocates at least as much memory as
cons. Again, Phil Bagwell describes how to fix it, but that would require tuning the garbage collector in a way that may not be generally beneficial.
vlist-consis a Scheme procedure compiled to bytecode, and it does not compete with the straightforward C implementation of
cons, and with the fact that the VM has a special
consinstruction.
We hope to address these in the future.
The programming interface exported by
(ice-9 vlist) is defined below.
Most of it is the same as SRFI-1 with an added
vlist- prefix to function
names.
Return true if obj is a VList.
The empty VList. Note that it’s possible to create an empty VList not
eq? to
vlist-null; thus, callers should always use
vlist-null? when testing whether a VList is empty.
Return true if vlist is empty.
Return a new vlist with item as its head and vlist as its tail.
Return the head of vlist.
Return the tail of vlist.
A fluid that defines the growth factor of VList blocks, 2 by default.
The functions below provide the usual set of higher-level list operations.
Fold over vlist, calling proc for each element, as for SRFI-1
fold and
fold-right (see
fold).
Return the element at index index in vlist. This is typically a constant-time operation.
Return the length of vlist. This is typically logarithmic in the number of elements in vlist.
Return a new vlist whose content are those of vlist in reverse order.
Map proc over the elements of vlist and return a new vlist.
Call proc on each element of vlist. The result is unspecified.
Return a new vlist that does not contain the count first elements of vlist. This is typically a constant-time operation.
Return a new vlist that contains only the count first elements of vlist.
Return a new vlist containing all the elements from vlist that satisfy pred.
Return a new vlist corresponding to vlist without the elements equal? to x.
Return a new vlist, as for SRFI-1
unfold and
unfold-right
(see
unfold).
Append the given vlists and return the resulting vlist.
Return a new vlist whose contents correspond to lst.
Return a new list whose contents match those of vlist.
Next:).
Next: Records, Previous: Record Overview, Up: Compound: Structures, Previous: SRFI-9 Records, Up: Compound Data Types [Contents][Index]
A record type is a first class object representing a user-defined data type. A record is an instance of a record type.
Note that in many ways, this interface is too low-level for every-day use. Most uses of records are better served by SRFI-9 records. See SRFI-9 Records.
Return
#t if obj is a record of any type and
#f
otherwise.
Note that
record? may be true of any Scheme value; there is no
promise that records are disjoint with other Scheme types.
Create and return a new record-type descriptor.
type-name is a string naming the type. Currently it’s only used in the printed representation of records, and in diagnostics. field-names is a list of symbols naming the fields of a record of the type. Duplicates are not allowed among these symbols.
(make-record-type "employee" '(name age salary))
The optional print argument is a function used by
display,
write, etc, for printing a record of the new
type. It’s called as
(print record port) and should look
at record and write to port.
Return.
Return a procedure for testing membership in the type represented by rtd. The returned procedure accepts exactly one argument and returns a true value if the argument is a member of the indicated record type; it returns a false value otherwise.
Return.
Return modifier procedure is unspecified. The symbol
field-name must be a member of the list of field-names in the call
to
make-record-type that created the type represented by
rtd.
Return.
Return the type-name associated with the type represented by rtd. The
returned value is
eqv? to the type-name argument given in
the call to
make-record-type that created the type represented by
rtd.
Return a list of the symbols naming the fields in members of the type
represented by rtd. The returned value is
equal? to the
field-names argument given in the call to
make-record-type that
created the type represented by rtd.
Next: Dictionary Types, Previous: Records, Up: Compound Data Types [Contents][Index]
A structure is a first class data type which holds Scheme values
or C words in fields numbered 0 upwards. A vtable is a structure
that represents a structure type, giving field types and permissions,
and an optional print function for
write etc.
Structures are lower level than records (see Records). Usually, when you need to represent structured data, you just want to use records. But sometimes you need to implement new kinds of structured data abstractions, and for that purpose structures are useful. Indeed, records in Guile are implemented with structures.
Next:: Meta-Vtables, Previous: Structure Basics, Up: Structures [Contents][Index]
A vtable is itself a structure. It has a specific set of fields describing various aspects of its instances: the structures created from a vtable. Some of the fields are internal to Guile, some of them are part of the public interface, and there may be additional fields added on by the user.
Every vtable has a field for the layout of their instances, a field for the procedure used to print its instances, and a field for the name of the vtable itself. Access to the layout and printer is exposed directly via field indexes. Access to the vtable name is exposed via accessor procedures.
The field number of the layout specification in a vtable. The layout
specification is a symbol like
pwpw formed from the fields
string passed to
make-vtable, or created by
make-struct-layout (see Meta-Vtables).
(define v (make-vtable "pwpw" 0)) (struct-ref v vtable-index-layout) ⇒ pwpw
This field is read-only, since the layout of structures using a vtable cannot be changed.
The field number of the printer function. This field contains
#f
if>
Next:.
Previous: Vtable Example, Up: Structures [Contents][Index]
Guile’s structures have a facility whereby each instance of a vtable can contain a variable-length tail array of values. The length of the tail array is stored in the structure. This facility was originally intended to allow C code to expose raw C structures with word-sized tail arrays to Scheme.
However, the tail array facility is confusing and doesn’t work very
well. It is very rarely used, but it insinuates itself into all
invocations of
make-struct. For this reason the clumsily-named
make-struct/no-tail procedure can actually be more elegant in
actual use, because it doesn’t have a random
0 argument stuck in
the middle.
Tail arrays also inhibit optimization by allowing instances to affect their shapes. In the absence of tail arrays, all instances of a given vtable have the same number and kinds of fields. This uniformity can be exploited by the runtime and the optimizer. The presence of tail arrays make some of these optimizations more difficult.
Finally, the tail array facility is ad-hoc and does not compose with the rest of Guile. If a Guile user wants an array with user-specified length, it’s best to use a vector. It is more clear in the code, and the standard optimization techniques will do a good job with it.
That said, we should mention some details about the interface. A vtable
that has tail array has upper-case permission descriptors:
W,
R or
O, correspoding to tail arrays of writable,
read-only, or opaque elements. A tail array permission descriptor may
only appear in the last element of a vtable layout.
For exampple, ‘pW’ indicates a tail of writable Scheme-valued fields. The ‘pW’ field itself holds the tail size, and the tail fields come after it.
(define v (make-vtable "prpW")) ;; one fixed then a tail array (define s (make-struct v 6 "fixed field" 'x 'y)) (struct-ref s 0) ⇒ "fixed field" (struct-ref s 1) ⇒ 2 ;; tail size (struct-ref s 2) ⇒ x ;; tail array ... (struct-ref s 3) ⇒ y (struct-ref s 4) ⇒ #f
Next:: VHashes, Previous: Dictionary Types, Up: Compound Data Types [Contents][Index]
An association list is a conventional data structure that is often used
to implement simple key-value databases. It consists of a list of
entries in which each entry is a pair. The key of each entry is
the
car of the pair and the value of each entry is the
cdr.
ASSOCIATION LIST ::= '( (KEY1 . VALUE1) (KEY2 . VALUE2) (KEY3 . VALUE3) … )
Association lists are also known, for short, as alists.
The structure of an association list is just one example of the infinite
number of possible structures that can be built using pairs and lists.
As such, the keys and values in an association list can be manipulated
using the general list structure procedures
cons,
car,
cdr,
set-car!,
set-cdr! and so on. However,
because association lists are so useful, Guile also provides specific
procedures for manipulating them.
Next:.
Next: Retrieving Alist Entries, Previous: Alist Key Equality, Up: Association Lists [Contents][Index] ⇒ ((3 . "pay gas bill")) (set! task-list (acons 3 "tidy bedroom" task-list)) task-list ⇒ ( (( ⇒ ((..
Next: Removing Alist Entries, Previous: Adding or Setting Alist Entries, Up: Association Lists [Contents][Index]
assq,
assv and
assoc find the entry in an alist
for a given key, and return the
(key . value) pair.
assq-ref,
assv-ref and
assoc-ref do a similar
lookup, but return just the value.
Return the first entry in alist with the given key. The
return is the pair
(KEY . VALUE) from alist. If there’s
no matching entry the return is
#f.
assq compares keys with
eq?,
assv uses
eqv? and
assoc uses
equal?. See also SRFI-1
which has an extended
assoc (SRFI-1 Association Lists).
Return the value from the first entry in alist with the given
key, or
#f if there’s no such entry.
assq-ref compares keys with
eq?,
assv-ref uses
eqv? and
assoc-ref uses
equal?.
Notice these functions have the key argument last, like other
-ref functions, but this is opposite to what
assq
etc above use.
When the return is
#f it can be either key not found, or
an entry which happens to have value
#f in the
cdr. Use
assq etc above if you need to differentiate these cases.
Next: Sloppy Alist Functions, Previous: Retrieving Alist Entries, Up: Association Lists [Contents][Index].
Delete the first entry in alist associated with key, and return the resulting alist.
Next: Alist Example, Previous: Removing Alist Entries, Up: Association Lists [Contents][Index]
sloppy-assq,
sloppy-assv and
sloppy-assoc behave
like the corresponding non-
sloppy- procedures, except that they
return
#f when the specified association list is not well-formed,
where the non-
sloppy- versions would signal an error.
Specifically, there are two conditions for which the non-
sloppy-
procedures signal an error, which the
sloppy- procedures handle
instead by returning
#f. Firstly, if the specified alist as a
whole is not a proper list:
(assoc "mary" '((1 . 2) ("key" . "door") . "open sesame")) ⇒ ERROR: In procedure assoc in expression (assoc "mary" (quote #)): ERROR: Wrong type argument in position 2 (expecting association list): ((1 . 2) ("key" . "door") . "open sesame") (sloppy-assoc "mary" '((1 . 2) ("key" . "door") . "open sesame")) ⇒ #f
Secondly, if one of the entries in the specified alist is not a pair:
(assoc 2 '((1 . 1) 2 (3 . 9))) ⇒ ERROR: In procedure assoc in expression (assoc 2 (quote #)): ERROR: Wrong type argument in position 2 (expecting association list): ((1 . 1) 2 (3 . 9)) (sloppy-assoc 2 '((1 . 1) 2 (3 . 9))) ⇒ #f
Unless you are explicitly working with badly formed association lists,
it is much safer to use the non-
sloppy- procedures, because they
help to highlight coding and data errors that the
sloppy-
versions would silently cover up.
Behaves like
assq but does not do any error checking.
Recommended only for use in Guile internals.
Behaves like
assv but does not do any error checking.
Recommended only for use in Guile internals.
Behaves like
assoc but does not do any error checking.
Recommended only for use in Guile internals.
Previous: Sloppy Alist Functions, Up: Association Lists [Contents][Index]
Here is a longer example of how alists may be used in practice.
(define capitals '(("New York" . "Albany") ("Oregon" . "Salem") ("Florida" . "Miami"))) ;; What's the capital of Oregon? (assoc "Oregon" capitals) ⇒ ("Oregon" . "Salem") (assoc-ref capitals "Oregon") ⇒ "Salem" ;; We left out South Dakota. (set! capitals (assoc-set! capitals "South Dakota" "Pierre")) capitals ⇒ (("South Dakota" . "Pierre") ("New York" . "Albany") ("Oregon" . "Salem") ("Florida" . "Miami")) ;; And we got Florida wrong. (set! capitals (assoc-set! capitals "Florida" "Tallahassee")) capitals ⇒ (("South Dakota" . "Pierre") ("New York" . "Albany") ("Oregon" . "Salem") ("Florida" . "Tallahassee")) ;; After Oregon secedes, we can remove it. (set! capitals (assoc-remove! capitals "Oregon")) capitals ⇒ (("South Dakota" . "Pierre") ("New York" . "Albany") ("Florida" . "Tallahassee"))
Next: Hash Tables, Previous: Association Lists, Up: Compound Data Types [Contents][Index]
alist- true if obj is a vhash.
Return a new hash list based on vhash where key is associated with
value, using hash-proc to compute the hash of key.
vhash must be either
vlist-null or a vhash returned by a previous
call to
vhash-cons. hash-proc defaults to
hash (see
hash procedure). With
vhash-consq, the
hashq hash function is used; with
vhash-consv the
hashv
hash function is used.
All
vhash-cons calls as the hash
function; the last one uses
eqv? and
hashv.
Again the choice of hash-proc must be consistent with previous calls to
vhash-cons.
Fold over the key/value elements of vhash in the given direction,
with each call to proc having the form
(proc key value
result), where result is the result of the previous call to
proc and init the value of result for the first call
to proc.)
Return the vhash corresponding to alist, an association list, using
hash-proc to compute key hashes. When omitted, hash-proc defaults
to
hash. | https://www.gnu.org/software/guile/manual/guile.html | CC-MAIN-2016-22 | refinedweb | 18,745 | 55.74 |
#include <hallo.h> * Bill Allombert [Mon, Apr 10 2006, 11:57:48PM]: >. What about this one: as you pointed out the Modules submenu is not heavily used. What about just putting the module entries of each WM using them into separate ...-Modules submenus in the same hierarchy level as the WM entries? Imagine: -> IceWM -> Window Maker -> Foo WM -> Foo WM Modules \- Foo Background Setup |- Foo Gadget Setup |- Foo Other Module -> Other WM -> TWM I think that would be a good compromise. The Module submenu's location follows directly the Foo WM starting entry and is easy to find. And if only few WMs are adding modules there, it would not significantly increase the number of top menu entries. And who does really install more than a handful of WMs using "Modules" entries? Having some addigional *-Modules entries in the menu would not really hurt. Eduard. -- <McBulba> un nu? <lx_jakal> hunger! <retfie> durst! <lx_jakal> ja, das auch | https://lists.debian.org/debian-devel/2006/04/msg00219.html | CC-MAIN-2017-13 | refinedweb | 156 | 65.93 |
- NAME
- SYNOPSIS
- DESCRIPTION
- TRAITS
- METHODS
- DIAGNOSTICS
- SEE ALSO
- LIMITATIONS
- BUGS
- AUTHOR
- LICENSE AND COPYRIGHT
NAME
Archive::RPM - Work with a RPM
SYNOPSIS
use Archive::RPM; my $rpm = Archive::RPM->new('foo-1.2-1.noarch.rpm'); # RPM2 header functions... # other functions...
DESCRIPTION
Archive::RPM provides a more complete method of accessing an RPM's meta- and actual data. We access this information by leveraging RPM2 where we can, and by "exploding" the rpm (with rpm2cpio and cpio) when we need information we can't get through RPM2.
TRAITS
This package allows for the application of various "TraitFor" style traits through the with_traits() function, e.g.:
Archive::RPM->with_traits('Foo')->new(...);
By default, we look for traits in the "Archive::RPM::TraitsFor" namespace, though this can be overridden by prepending a "+" to the full package name of the trait.
METHODS
An object of this class represents an actual RPM, somewhere on the filesystem. We provide all the methods RPM2::Header does, as well as additional functions to manipulate/extract the rpm itself (but not to install).
Right now, our documentation is horrible. Please see RPM2 for the methods provided by "RPM2::Header", and the source for the other functions we have defined. We support all methods provided by RPM2::Header, except the "files" method (that's handled by other bits).
- new('file.rpm') | new(rpm => 'file.rpm', ...)
Creates a new Archive::RPM object. Note that the rpm parameter is required, and if it is the only one being passed the "rpm =>" may be omitted.
- rpm => 'filename'|Path::Class::File
Required. Takes either a filename or a Path::Class::File object pointing to the rpm.
- auto_cleanup => 0|1
Default is 1; if the rpm is extracted to the filesystem, purge this automatically.
- rpm
Returns a Path::Class::File object representing the rpm we're working with.
- extracted_to
Returns a Path::Class::Dir object representing where the rpm has been exploded to. If the rpm has not been exploded, it will be.
- has_been_extracted
Returns true if the rpm has been exploded; false if not.
- is_source_package | is_srpm | is_source
Returns true if this is a source rpm; false if not.
- has_files
True if this rpm contains any files. (Some, e.g. Fedora's "perl-core" package, are "meta-packages" and do not deliver files; they simply ensure a given set of dependencies exist on a system. Sort of like Task::* CPAN dists.)
- num_files
Returns the number of files delivered.
- grep_files
Grep over the array of files; e.g.
my ($spec) = $srpm->grep_files(sub { /\.spec$/ });
- map_files
-
- files
Returns an array of all the dir/files delievered by the rpm. Note that these are returned as Path::Class objects, and we use the directories and files present on the filesystem after exploding the rpm rather than the list described by the rpm itself.
- first_file
-
- last_file
-
- join_files
-
- num_changelog_entries
Returns the total number of changelog entries.
- changelog_entries
Returns an array of all the changelog entries.
- first_changelog_entry
Returns the first changelog entry; note that changelogs are stored in reverse chronological order. That is, the first changelog entry is the newest entry.
- last_changelog_entry
Returns the oldest changelog entry.
- get_changelog_entry(Int)
Get a specific changelog entry.
- map_changelog_entries
-
- find_changelog_entry
-
- grep_changelog_entries
-
DIAGNOSTICS
We tend to complain and die loudly on any errors.
SEE ALSO
LIMITATIONS
Our documentation and test suite is clearly lacking, sadly.
We also have to explode the rpm for anything more intense than simply looking at the header for info. While this isn't really a _horrible_ thing, it's annoying to have to, say, explode a 100MB ooffice rpm just to get a count of how many files there are in it.
We do the "exploding" using external rpm2cpio and cpio binaries. While we could have used Archive::Cpio to handle the cpio extraction, it seemed a touch overkill; as there does not appear to be a Perl library to handle the "rpm2cpio" part, we may as well use the cpio bin. (It's not like it's missing from many systems, anyways.
BUGS
All complex software has bugs lurking in it, and this module is no exception. If you find a bug please either email me, or (preferred) to this package's RT tracker at
[email protected].
Patches are welcome.
AUTHOR
Chris Weyl <[email protected]>
LICENSE AND COPYRIGHT | https://metacpan.org/pod/release/RSRCHBOY/Archive-RPM-0.07/lib/Archive/RPM.pm | CC-MAIN-2015-14 | refinedweb | 715 | 66.23 |
Hi,
I have developed an Entity Bean(CMP) with Custom Primary Key class (Single Column Key in Database).
Entity Bean : StudentBean
Primary Key Class : StudentPK
package: student
Data BAse Table: Student(rollNo*, name, clas,marks)
RollNo Number(2)
in ejb-jar XML document :
<prim-key-class>student.StudentPK</prim-key-class>
<primkey-field>rollno</primkey-field>
when I am generating Container classes EJBC compiler
is giving an Error : pk = bean.rollno can't covert integer to StudentPK...
what is wrong in this
ThanX in advance
Ramesh Raju
Entity Bean (CMP) - Primary Key (3 messages)
- Posted by: Ramesh Raju Mandapati
- Posted on: August 08 2000 11:23 EDT
Threaded Messages (3)
- Entity Bean (CMP) - Primary Key by Torfinn Aas on August 09 2000 03:04 EDT
- Entity Bean (CMP) - Primary Key by Saad Hamdan on August 10 2000 09:48 EDT
- Entity Bean (CMP) - Primary Key by Stephane Valseme on August 10 2000 07:15 EDT
Entity Bean (CMP) - Primary Key[ Go to top ]
Do you have a constructor for StudenPK with an integer parameter? Remember also to include a parameter-less constructor and a toString()-methode.
- Posted by: Torfinn Aas
- Posted on: August 09 2000 03:04 EDT
- in response to Ramesh Raju Mandapati
public class StudentPK implements Serializable {
public int snr;
public StudentPK(int a) {
this.snr = a;
}
public StudentPK()
}
public String toString() {
return "" + sBookingnr;
}
}
Regards
Torfinn
Entity Bean (CMP) - Primary Key[ Go to top ]
This is an example of the PK class in an entity bean CMP.
- Posted by: Saad Hamdan
- Posted on: August 10 2000 09:48 EDT
- in response to Torfinn Aas
I am not sure why the ejbc complain if we do not have the hashCode & equals methods.
If someone knows please let us know.
public class calssPK implements java.io.serializable {
public int intPK;
public classPK() { }
public classPK(int intPK) {
this.intPK = intPK;
}
public int hashCode() {return 1;}
public boolean equals (Object obj) { return false;}
}
Entity Bean (CMP) - Primary Key[ Go to top ]
Hi,
- Posted by: Stephane Valseme
- Posted on: August 10 2000 07:15 EDT
- in response to Ramesh Raju Mandapati
Are you using a Weblogic container ?
If yes I have exactly the same problem (only with WebLogic).
The file where you get the error is : xxxEOImpl.java which is generated...
Since I didn't find any example on the WebLogic site with custom primary key, I wonderer if this is simply possible with WebLogic ?!
Does anyone have any experience with custom primary key AND weblogic ???
MANY THANKS,
Stephane. | http://www.theserverside.com/discussions/thread.tss?thread_id=497 | CC-MAIN-2016-07 | refinedweb | 416 | 51.07 |
Hey guys, I created a program for my class called CountDiceRollsArray thats purpose is to roll a single die the specified number of times by the user and then display a table that shows the six face values of the die and how many times that face value was rolled within the specified number of times. I wrote the program and it works well. I was just wondering if someone out there that has experience with programming can tell me how I can write the code better or if any one of my variables can be made into an array or used more efficiently. I was wondering if it is "proper" java etiquette to call methods form other methods, like I did with my display() method or if thats frowned upon in the java community and I should try to call everything from the main. Here is my code. Thanks!
Code Java:
import java.util.Scanner; import java.util.Random; public class CountDiceRolls { public static void main(String[] args) { int numberOfDieRolls = rollADie(); title(); faceValue(numberOfDieRolls); } public static int rollADie() { Scanner input = new Scanner(System.in); System.out.print("How Many Times Would You Like To Roll A Die: "); int num = input.nextInt(); return num; } public static void title() { System.out.print("Face\tTimes\n"); } public static void faceValue(int numberOfDieRolls) { Random random = new Random(); int [] faceVal = new int [6]; for(int count = 0; count < numberOfDieRolls; count++) { int face = 1 + random.nextInt(6); if(face == 1) { faceVal[0]++; } else if(face == 2) { faceVal[1]++; } else if(face == 3) { faceVal[2]++; } else if(face == 4) { faceVal[3]++; } else if(face == 5) { faceVal[4]++; } else if(face == 6) { faceVal[5]++; } } display(faceVal); } public static void display(int [] faceVal) { System.out.print(" 1\t" + faceVal[0] + "\n 2\t" + faceVal[1] + "\n 3\t" + faceVal[2] + "\n 4\t" + faceVal[3] + "\n 5\t" + faceVal[4] + "\n 6\t" + faceVal[5]); } } | http://www.javaprogrammingforums.com/%20java-theory-questions/11794-how-make-code-more-efficient-printingthethread.html | CC-MAIN-2013-48 | refinedweb | 317 | 59.03 |
See also: IRC log
NW: Agenda recently updated -- accepted as posted
... Resolved that minutes of 16/7/07 are approved
DC: SW, have you created the "HTTP Redirections" issue?
SW: Wasn't aware we'd chosen a name, will go ahead ASAP with "HTTP Redirections"
<scribe> ACTION: SW to create new TAG issue called "HTTP Redirections" per minutes of 16/7/07 [recorded in]
<trackbot-ng> Created ACTION-9 - Create new TAG issue called "HTTP Redirections" per minutes of 16/7/07 [on Stuart Williams - due 2007-08-20].
<scribe> ACTION: Stuart to put up straw poll to try to find a new slot for this meeting [recorded in]
<trackbot-ng> Created ACTION-8 - Put up straw poll to try to find a new slot for this meeting [on Stuart Williams - due 2007-08-20].
RL withdraws his regrets
NM says he is at risk
We already have regrets from TBL
NW: Any suggestions?
<Stuart> (Member-only URI)
SW: I sent the above as a starter
... URI-based extensibility
... Web 2.0
... HTTP URIs rule
TVR: Challenge is how to raise these, or any topics, in a way that works for the TP
DC: Molly Holzschlag has asked for observer status at the HTML WG meeting, which is fine with me, and has made some suggestions for a TP session (see)
NW: Who is WaSP (Web Standards Project) -- relation with WHAT WG?
HST: They've been around for a long time
DC: They have pushed for a more aggressive attempt to support standards than W3C has pursued, in that W3C has a policy of not making public criticisms of its members if at all possible
... I also think we could talk about the relationship between rel='nofollow' and <marquee> -- if the latter is bad, why isn't the former?
NM: Wrt URI-based extensibility, is that where we talk about HTML 5 extensibility, or does that need a separate heading/slot/bullet?
TVR: Even if it is covered, the title doesn't communicate that
Various: The overall topic is a big one
NW: TP is a good place to talk about a big topic
TVR: Start with the smaller (HTML5) topic, and then enlarge
SW: The topic emerged from our call discussion about follow-your-nose, we had trouble articulating exactly what we wanted, so it seemed a good topic
<Noah> Now that I think about it: the need for distributed HTML 5 extensibility is the requirement. Applying URI-based extensibility in particular is the Web-compatible way of achieving such extensibility.
NW: So, do we have consensus? Should we address HTML5 extensibility?
<Rhys> +1 to the broader topic
HST: I thought we had consensus on the broader topic
NW: I'm happy to go to the broad topic
NM: Once you agree on distributed extensibility as a requirement for HTML5, you still have to agree mechanism, i.e. URI-based or not
DC: How much time do we have?
... I could imagine a whole conference on this topic
NM: I thought this was for the whole day
<Noah> Any interest in inviting Sam Ruby to discuss his views on HTML 5 extensibility?
NW: The message subject is "Any interest in a TAG-driven session during the 2007 W3C Tech Plenary Day?"
SW: Is this the subject that we have the most affinity for?
DC: 'We' isn't the point - I have a specific point I want to get across
NM: Molly was particularly concerned about Adobe's AIR and application construction in general, which would I guess point also to Microsoft's Silverlight
DC: The TP programme committee has a list of 20 topics to talk about, see (Member-only link)
<Noah> Speaking just for myself, I find some of the technical topics we're noodling on here to be more compelling than the rough list at 07-TechPlenAgenda.html
NW: Are we ready for the chair of the TAG to go back to Steve with an overview of what we've discussed, and see where it might fit in?
<scribe> ACTION: SW to discuss TAG slot at TP with Steve Bratt, informed by above discussion [recorded in]
<trackbot-ng> Created ACTION-10 - Discuss TAG slot at TP with Steve Bratt, informed by above discussion [on Stuart Williams - due 2007-08-20].
NW: Same question about the AC -- anything to say?
HST: It's half-a-day, I think we should duck
DC: We don't seem to be doing REC-track work -- if anybody cared I guess we'd hear about it. . .
<Noah> I agree with Norm, we haven't forgotten rec track work. What we have not done is produce 3 month heartbeats so identified.
HST: At Extreme last week, I got a lot of good response when I pointed people to the finding -- we could just make it a REC
NW: I just don't think our recent work has been REC-like, we're still looking for that
<DanC> Alternatives finding is dated 1 November 2006 )
NM: I think a lot of our recent pubs can be seen as a heartbeat
<DanC> (our last WD was )
NM: Most recent approved finding was Metadata finding, at beginning of the year
DC: We have to do a written report which summarises what we've done
... If anyone objects, we'll hear about it
<scribe> ACTION: SW to tell Steve Bratt that the TAG doesn't want a slot at the AC meeting [recorded in]
<trackbot-ng> Created ACTION-11 - Tell Steve Bratt that the TAG doesn't want a slot at the AC meeting [on Stuart Williams - due 2007-08-20].
NW: Who has not read : HST, DC, TVR
NM: It's not long, shall I walk through it?
NW: Go ahead then
NM: We've gotten some pushback from DO and Mark Baker
<DanC> (hmm... it's not clear that the quoted GPN is quoted, especially when the one that's not quoted looks the same)
[Scribe not trying to transcribe NM's walkthrough]
<DanC> [this bit about xml 1.1 is awfully relevant, and yet it's not in there? odd.]
<DanC> [perhaps the xml 1.1 versioning situation fits better in a separate item.]
<DanC> [the "ASCII doesn't have a version identifier" example that Noah often uses isn't in here. odd.]
HST: There's a shift between "provide for marking version" and "when to mark version" in your prose
... That seems to me to take us off track
NM: I think those are closely related -- if you would never want to mark the version, then the language shouldn't provide the mechanism for you to do so.
NM: If you can't spec. what the version indicator means, you shouldn't have it in your spec.
<DanC> [I hear Noah defending his position but not convincing HT. I think the article is interesting as is, and I'd like to see it go out signed "Noah, a TAG member" and let other TAG members respond with other articles or comments.]
NM: You will just be storing up trouble for the future, c.f. XML's version attribute
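[Scribe note, for illustration only: a minimal sketch of the consumer-side problem NM describes, using a hypothetical format with a version attribute. The behaviour on unknown versions is invented for the example, not taken from any spec discussed on the call.]

```python
# Hypothetical consumer of a format that carries a version indicator.
# The spec said only "documents carry version='1.0'", without saying
# what a processor should do with any other value.

def process(document: dict) -> str:
    version = document.get("version", "1.0")
    if version == "1.0":
        return "processed"  # the only case the spec described
    # Everything below is guesswork by the implementer, because the
    # spec never defined the meaning of other version values:
    #  - reject?  (then a later, compatible "1.1" revision breaks all old tools)
    #  - ignore?  (then the indicator carries no information at all)
    raise ValueError(f"unknown version {version!r}")

# A later, compatible revision of the format is rejected by old tools:
# process({"version": "1.1", "body": "..."})  -> ValueError
```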
HST: Brings us back to the metapoint as to what the status of TAG blog entries should be -- the settled will of the TAG as a group, or an opportunity for TAG members to discuss something using the blog medium?
NM: DO said something similar
SW: I think the value of a blog would be lost if it required consensus
TVR: I agree, we shouldn't turn TAG blog entries into mini-findings
NM: I think we should reserve the possibility that we do both, that is, we may sometimes want to publish something with consensus
NM: and that I can ask for telcon review on how to make the best posting I can, before I post it
<DanC> (sure, a little telcon preview is a good thing, from time to time)
<Stuart> +1 to DanC
<Norm> But not that I have to ask for it
<Rhys> +1 to the proposal on blog entries
NW: So if you remove "for the T A G" from the bottom, you can publish as and when you choose
NW: I'm also not sure that I'm unhappy with the existing WebArch GPN -- In NM's post, I'm not happy with the idea that the spec. author has to tell in advance whether there will ever be incompatible changes
TVR: Just to make sure that we're not just talking about version attributes alone as the way of indicating version
NM: We get to namespaces in the last section
TVR: What about DOCTYPE line as a version identifier, which is what the HTML WG are going back and forth about
SW: The questions raised by the article are closely related to the terminology and analysis in the Versioning finding we're working on
... I think Mark Baker's comments are along the same lines
NM: I thought his comments (about header metadata) were strictly-speaking out of scope, because the BPN is about in-band identifiers, but metadata is out-of-band
<DanC> (er... let's not leave posting mechanics in the someday pile, please.)
DC: Movable Type is weblog support software, in some ways the first one
... Karl Dubost installed it for W3C and interfaced it with our CVS system -- this is good, but does introduce a 15-minute delay
DC: It supports categories, so if you categorise things they get syndicated, and can then be grabbed by a web page
<DanC> (yes, analogous to )
NM: What about formatting: monospace, display, etc?
DC: Karl is pretty good with CSS, I don't know what would happen if I added my own. . .
NW: Summarising -- Karl's setup can easily be expanded to allow us to log in, author new entries, categorise them as "TAG" and feed our blog page
SW: Different from B2 Evolution -based blogs?
DC: Yes -- advantage is it's baked not fried -- that is, the HTML is constructed only once, not on demand from a SQL DB
TVR: I don't mind what the content management is as long as it doesn't get in the way by producing opaque URIs
DC: I think we will get good URIs from Movable Type, including year, month and words from the title
<Norm> -> http//
<DanC> (do I read this correctly? 4 objections to <> ?)
SW: That option was added late, so the no-entries may just be ones who never saw it
<Noah> Should we just do a prefer/live-with on the two surviving options right now on the phone?
<Norm> Yes, something like that.
<DanC> (I suspect is technically tied to b2evolution)
<Noah> +1
<Rhys> +1 to /tag/blog
<Norm> Proposed:
SW: DC, can you organise the top-level 'tag' directory that we will need?
DC: Not without approval from TimBL. . .
DO: We can resolve on /tag/blog, pending approval
NM: I don't think we're in a great rush
<Norm> No objections.
NW: OK, lets try /tag/blog
Resolved: To locate the TAG blog at
<scribe> ACTION: DC to try to reach TBL to get approval [recorded in]
<trackbot-ng> Created ACTION-12 - Try to reach TBL to get approval [on Dan Connolly - due 2007-08-20].
<Norm> Proposed title: TAG Lines
NW: Any objections to TAG Lines?
Resolved: The TAG blog will be called "TAG Lines", using the mechanisms established by Karl Dubost
<scribe> ACTION: DC to ask Karl Dubost if categories can be restricted to particular login lists [recorded in]
<trackbot-ng> Created ACTION-13 - Ask Karl Dubost if categories can be restricted to particular login lists [on Dan Connolly - due 2007-08-20].
NM: We should try to use the description of whatever category we pick to make clear that it's for TAG members only
DC: I'd be happy if Yves Lafon wrote something about caching for him to use the category 'web architecture'
HST: Then we shouldn't use 'web architecture' for the TAG blog
NM: So two categories then, 'web architecture' and 'TAG Member', and I should use both for my post?
DC, HST: Yes
RL: New draft coming soon, significantly changed, lots of new material
... available soon, then I'm mostly away until just before the f2f, so you all can review
SW: I think we've had problems if we try to discuss things that aren't public. . .
RL: My plan was to put it in public space, but not announce it publicly
DC: Hmmm, I'd rather it were just out there
RL: But I won't be available to answer comments
NM: I've found that just saying that works pretty well
RL: OK, I'll go ahead and do that
NW: Background: W3C servers get hammered by tools which don't cache schema documents which they request very frequently
TVR: I don't see that this is a TAG issue
HST: I think it's a TAG issue, but not restricted to schema docs -- the Web provides for caching: if you anticipate large volumes of traffic to one or more stable resources, whether from your own site or from sites which use your software, you should provide for caching
NM: More than that, I think our interest here follows from the advice to use only one URI for any given resource
DO: New issue or not, I think we should take it on, because it follows from our advice on avoiding multiple URIs
... We should follow through on the ramifications of our recommendations
<DanC> (it's also true that people shy away from http URIs because they fear their server will melt down.)
<DanC> name brainstorm... httpCaching... hotSpot...
NW: I hear consensus we should take this up. New issue, or attached to an existing one? If new, then what name?
<Norm> ...representationCaching...managingHotURIs...
SW: schemas only?
TVR: No
<Norm> ...scalabilityOfPopularURIs
<Norm> ...uriScalability
<Stuart> ...frequentlyAccessedResources
HST: schemas, stylesheets (XSLT, CSS, ...)
TVR: Images
HST: Lists of e.g. language codes
<DanC> (caching proxies aren't enough in some cases... in some cases, products that ship with URIs hardcoded might as well ship with a representation hardcoded, and only phone home once every 6 months.)
<Norm> Right. Web frameworks running on end-user-machines don't have caching proxies necessarily
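[Illustrative sketch, not from the discussion above: a minimal example of the well-behaved client being assumed here -- a tool that keeps a local copy of a frequently-fetched schema document and revalidates it with a conditional GET instead of re-downloading it on every run. Python standard library only; the URL and file names are made up for the example.]

    import json, os, urllib.request, urllib.error

    SCHEMA_URL = "http://example.org/schemas/order.xsd"   # hypothetical resource
    CACHE_BODY = "order.xsd.cache"                        # local copy of the body
    CACHE_META = "order.xsd.meta.json"                    # stored validators

    def fetch_schema():
        headers = {}
        if os.path.exists(CACHE_BODY) and os.path.exists(CACHE_META):
            with open(CACHE_META) as f:
                meta = json.load(f)
            # Send the validators back so the server can answer "304 Not Modified".
            if meta.get("etag"):
                headers["If-None-Match"] = meta["etag"]
            if meta.get("last_modified"):
                headers["If-Modified-Since"] = meta["last_modified"]
        req = urllib.request.Request(SCHEMA_URL, headers=headers)
        try:
            with urllib.request.urlopen(req) as resp:
                body = resp.read()
                with open(CACHE_BODY, "wb") as f:
                    f.write(body)
                with open(CACHE_META, "w") as f:
                    json.dump({"etag": resp.headers.get("ETag"),
                               "last_modified": resp.headers.get("Last-Modified")}, f)
                return body
        except urllib.error.HTTPError as e:
            if e.code == 304:             # not modified: reuse the cached copy
                with open(CACHE_BODY, "rb") as f:
                    return f.read()
            raise

[A real tool would also honour Cache-Control/Expires so that, within the freshness lifetime, it need not contact the server at all.]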
TVR: The architectural problem is in part that the publisher of a popular URI cannot in general ensure responsible caching by clients
NM: But we do expect providers of e.g. popular home pages to scale up their servers in keeping with demand
<DanC> ("economics aside" is not a tactic I want us to take. I want us to keep economics in mind.)
NM: So maybe the W3C is actually at fault here -- what's the division of responsibility between provider and consumer?
<Noah> FWIW: the reason I thought this discussion was useful was to >scope< the issue, which I think should go beyond caching
<Norm> ...scalabilityOfPopularResources
<Norm> scalabilityForPopularResources
<DanC> (I prefer URI to resource in this case, even though it's wrong. But oh well.)
<DanC> cachingBestPractices
<Norm> scalabilityOfURIAccess
<Noah> But yes, the fact that many providers can't afford to scale is a crucial (economic) piece of the puzzle.
<Norm> Proposed: The TAG accept a new issue, scalabilityOfURIAccess-58
<DanC> aye, and I don't care about the number
Resolved: the TAG accepts a new issue scalabilityOfURIAccess-58
NW: Someone willing to summarize this and send it to the list?
DC: NW, did Ted Guild's message cover the background?
NW: Not all. I'll take the action
<DanC>
NM: Do try to indicate that this isn't just about caching
NW: Will do
<scribe> ACTION: NW to announce the new scalabilityOfURIAccess-58 issue [recorded in]
<trackbot-ng> Created ACTION-14 - Announce the new scalabilityOfURIAccess-58 issue [on Norman Walsh - due 2007-08-20]. | http://www.w3.org/2007/08/13-tagmem-minutes.html | CC-MAIN-2014-15 | refinedweb | 2,629 | 59.87 |
Printed Processing sketch
I just printed the first STL file that I generated using a Processing sketch.
I used the unlekker library to export an STL file from a sketch that generates a simple 3D spiral, used Blender to add a socket, and printed it on my MakerBot. Skeinforge complained about some invalid triangles, but besides that it worked surprisingly well.
This is what it looks like in Blender:
And this is what the MakerBot made of it:
And this is the Processing sketch I used to generate the spiral:
import unlekker.data.*;

void setup() {
  size(300,300,P3D);
  noLoop();
}

void draw() {
  translate(width/2,height/2);
  background(0);
  fill(255);
  lights();
  noStroke();
  //stroke(255);

  // beginRaw records the geometry drawn below into guru.stl
  // (written to the sketch folder), ready for Skeinforge or Blender
  beginRaw("unlekker.data.STL","guru.stl");
  beginShape(QUAD_STRIP);
  for( int i = 0; i < 100; i++ ) {
    for( int a = 0; a < 36; a++ ) {
      // radius shrinks from 10 towards 0 as the spiral climbs
      float r = 10 - map(i,0,100,0,10);
      vertex( r * sin(radians(a*10)) + sin(radians(i*10)) * 10,
              -i*1,
              r * cos(radians(a*10)) + cos(radians(i*10)) * 10);
      int j = i+1;
      vertex( r * sin(radians(a*10)) + sin(radians(j*10)) * 10,
              -j*1,
              r * cos(radians(a*10)) + cos(radians(j*10)) * 10);
    }
  }
  endShape();
  endRaw();
}
See also:
Custom Cookie Cutters for the Makerbot
3D printed spider
3D printed elephant
little 3D figure
Hi Nikolaus,
Just wanted to let you know that we used this program as an example for a quick tutorial on Processing -> STL at our Maker:SF meetup here in Oakland, CA.
Thanks for creating this! I can't wait to try printing it out.
Cheers,
Anca. | http://www.local-guru.net/blog/2010/01/13/printed-processing-sketch | CC-MAIN-2017-39 | refinedweb | 259 | 53.14 |