Hello, Sorry for the response delay.
At some point, I realized I was in the middle of a really big refactoring, and I'm afraid it is very tied to the framework I use. However, I would like to make some observations about my findings. Apart from the DOM parser, one of the things I pursued was to replace constants with configuration, and to remove code that was automatically executed during file inclusion. E.g., namespaces are registered only if PHPTAL is configured for that, and then it registers only what is set in the config - and by default the config has the built-in namespaces :). I'd like to elaborate these a bit more as suggestions for PHPTAL, but first I have to dig up what I've done. Then, I'll try to extract the DOM parser into a pure PHPTAL installation, because mine is *too* changed now :(. regards, rodrigo moraes _______________________________________________ PHPTAL mailing list [email protected]
Source: https://www.mail-archive.com/[email protected]/msg00031.html
DSL for XML descriptors - Dan Allen, May 1, 2010 7:09 PM
I'd like to rekindle the discussion about the descriptors DSL (Re: ShrinkWrap - Descriptors). The descriptors DSL is a fluent API for producing standard XML-based configuration files such as:
- web.xml
- beans.xml
- ejb-jar.xml
- application.xml
- faces-config.xml
- persistence.xml
Of course, the design should be extensible so that non-standard descriptors can be introduced.
The discussion about creating these descriptors seems to have been abandoned, possibly because it's not linked to the ShrinkWrap space. An umbrella JIRA issue was never created, though there are JIRAs for each individual descriptor:
SHRINKWRAP-44 - web.xml
SHRINKWRAP-45 - application.xml
SHRINKWRAP-46 - ra.xml
SHRINKWRAP-115 - beans.xml
We are missing ejb-jar.xml and persistence.xml, and should probably add support for the JBoss AS-specific descriptors after we are done with the standard ones.
The other problem is that the pastebin links that Aslak posted seemed to have expired. Aslak, could you post the latest prototype code that you have? And preferably in a place it will not expire?
I consider this issue high priority for ease of use of Arquillian.
1. Re: DSL for XML descriptors - Andrew Rubinger, May 2, 2010 12:29 AM (in response to Dan Allen)
Absolutely agree with the feature as presented. We've had this in our sights pretty much since the inception of ShrinkWrap, though I've been pretty firm to hold off until the core APIs and impls are finalized as stable and released. Not to bite off more than we can chew, so to speak.
That said, I'd be interested in seeing an API proposal for a limited subset of just one of the above spec DSLs. We should likely start these in an extension module, maybe even outside the rest of the ShrinkWrap release cycle/process.
And I'd even like to see modules separating each concern: for each spec type we'll have the fluent DSL API bit, a schema binding layer, and XML>Object>XML converters of some form. Experience with the jboss-metadata project also shows that it's a good idea to have the API represented as interfaces and let some impl classes define the object model and associated XB annotations.
To sum up, yep we need to go here. Should be a priority. But I'd like to conserve our resources first to the ShrinkWrap API freeze (ie. 1.0.0-beta-1), user guide and documentation, and 1.0.0. From there I think porting in features like this make for excellent 1.1.0.
With an increasing number of projects consuming ShrinkWrap now, out of consideration for them a locked API has gotta come soon, or we'll make everyone mad despite our current "alpha" badge.
S,
ALR
2. Re: DSL for XML descriptors - Andrew Rubinger, May 2, 2010 12:37 AM (in response to Andrew Rubinger)
Ah, another note.
I'm not sure the DSL stuff should be a property of the archive. For instance:
webArchive.webXml().servletClass(MyServlet.class);
Because we already have methods to add files, resources (any Asset, really) as the web.xml. I still view archives as "virtual filesystems which contain resources at paths". I remember disagreeing with Aslak/Emmanuel about this some time back.
Instead.
S,
ALR
3. Re: DSL for XML descriptors - Dan Allen, May 2, 2010 1:01 PM (in response to Andrew Rubinger)
Andrew Rubinger wrote:
I'm absolutely in agreement with you. I think we have two parallel builders. We have the ShrinkWrap class + family that build archives. Then we have Descriptors + family that build descriptors. When the descriptor is ready, it can be popped into the archive.
I see an API emerging that strikes a balance between the two snippets above, in semantics and readability:
webArchive.descriptor(WebXml.class)
    .addServletClass(MyServlet.class, "/servletpath")
    .addContextParam("key", "value")
    .addAsWebXml()
    .addClass(...);
I'm uncertain how to end the descriptor DSL and get back to the archive DSL. I proposed "addAsWebXml". Perhaps someone can chime in there.
All I'm suggesting here is that the archive be aware of the general concept of descriptor building. Other than that, it has no direct tie to a descriptor (though perhaps there is a way to limit the types).
public interface Archive<T extends Archive<T>> extends Assignable {
    ...
    <D extends Descriptor> D descriptor(Class<D> clazz);
}

public interface WebXml extends Descriptor<WebXml> {
    Descriptor<?> addServletClass(Class<? extends Servlet> clazz, String... urlPattern);
    Descriptor<?> addContextParam(String key, String value);
    ...
    // this will call setWebXml() on WebArchive and return the archive
    Archive<? extends WebArchive> addAsWebXml();
}
Don't hurt me, I'm a newbie with designing DSLs. Hopefully you can take this and run with it.
4. Re: DSL for XML descriptors - Dan Allen, May 2, 2010 1:04 PM (in response to Dan Allen)
5. Re: DSL for XML descriptors - Andrew Rubinger, May 2, 2010 1:07 PM (in response to Dan Allen)
Dan Allen wrote:I'm uncertain how to end the descriptor DSL and get back to the archive DSL. I proposed "addAsWebXml". Perhaps someone can chime in there.
This is where I think the "Assignable" type comes in. From there we can go:
archive.as(WebXml.class).servlet(MyServlet.class, "/path")
    .as(WebArchive.class).addResource("myfacelet.xml");
Please excuse the poor DSL-ing. Just showing how we can assign as WebXML (which would probably be some bridge object able to both add the resource and give access to the Descriptor) and back again out to the archive view.
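The as(...) round trip described above can be sketched in plain Java. This is a toy illustration with invented names (WebArchiveView, WebXmlView, ArchiveState), not the real ShrinkWrap API: every view wraps the same underlying archive state, and as(...) simply re-wraps that state in another view, so edits made through one view are visible through all of them.

```java
import java.util.HashMap;
import java.util.Map;

public class AssignableSketch {

    /** Every view of the archive can be re-wrapped as another view. */
    interface Assignable {
        <T extends Assignable> T as(Class<T> viewClass);
    }

    /** Shared state: path -> content, standing in for the virtual filesystem. */
    static class ArchiveState {
        final Map<String, String> entries = new HashMap<>();
    }

    static abstract class ViewBase implements Assignable {
        final ArchiveState state;
        ViewBase(ArchiveState state) { this.state = state; }

        public <T extends Assignable> T as(Class<T> viewClass) {
            try {
                // Each view exposes an (ArchiveState) constructor; as() re-wraps
                // the same state rather than copying it.
                return viewClass.cast(
                        viewClass.getDeclaredConstructor(ArchiveState.class)
                                 .newInstance(state));
            } catch (ReflectiveOperationException e) {
                throw new IllegalArgumentException("No view: " + viewClass, e);
            }
        }
    }

    /** Plain archive view: add resources at paths. */
    static class WebArchiveView extends ViewBase {
        public WebArchiveView(ArchiveState s) { super(s); }
        WebArchiveView addResource(String path, String content) {
            state.entries.put(path, content);
            return this;
        }
    }

    /** Descriptor view: the bridge writes straight into the shared state. */
    static class WebXmlView extends ViewBase {
        public WebXmlView(ArchiveState s) { super(s); }
        WebXmlView servlet(String servletClass, String urlPattern) {
            state.entries.put("WEB-INF/web.xml",
                    "<servlet-class>" + servletClass + "</servlet-class>"
                    + "<url-pattern>" + urlPattern + "</url-pattern>");
            return this;
        }
    }

    public static void main(String[] args) {
        ArchiveState state = new ArchiveState();
        new WebArchiveView(state)
                .as(WebXmlView.class).servlet("com.acme.MyServlet", "/path")
                .as(WebArchiveView.class).addResource("myfacelet.xml", "<ui/>");
        System.out.println(state.entries.keySet());
    }
}
```

The bridge object Andrew mentions corresponds to WebXmlView here: it can both write the resource and hand back the archive view.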
S,
ALR
6. Re: DSL for XML descriptors - Andrew Rubinger, May 2, 2010 1:21 PM (in response to Dan Allen)
Dan Allen wrote:
Here again I think the saviour is "Assignable". It's basically the hook that gives us multiple inheritance by wrapping an underlying archive, and it gets specialized into any view.
For instance, the exporters are Assignable, but not Archives or Container types.
S,
ALR
7. Re: DSL for XML descriptors - Aslak Knutsen, May 3, 2010 7:03 AM (in response to Andrew Rubinger)
My first attempt (the expired link) handled the descriptors using the Assignable method. Here is my little summary of the issue:
- With the current extensions implementation you don't know what type of Archive you're being assigned from; you just get the pure Archive or a new wrapped Archive in another extension. This is not an issue with web.xml or ejb-jar.xml, but it is with beans.xml, which can live in both a WebArchive and a JavaArchive. You don't know where to place the descriptor: /META-INF or /WEB-INF?
- Using the Assignable method, it's not clear what happens when you do: archive.as(Descriptor).something().as(JavaArchive).setDescriptor(Asset) ?
- A Descriptor should have access to the Archive so it can add the classes alongside the descriptor definition of them, so you don't have to do: archive.addClass(servlet).addWebXmlServlet(servlet)
- The descriptor should be able to know where to add itself. We should try to avoid add(Descriptor, "/WEB-INF/web.xml").
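The last point above can be sketched as follows. The names (ArchiveKind, BeansXml, defaultPath) are invented for illustration and are not ShrinkWrap API: the descriptor itself, not the caller, reports its storage path for each archive flavour it supports.

```java
public class DescriptorPlacementSketch {

    // Hypothetical archive flavours, standing in for WebArchive / JavaArchive.
    enum ArchiveKind { WEB, JAR }

    interface Descriptor {
        String name();
        // The descriptor, not the caller, decides where it lives.
        String defaultPath(ArchiveKind kind);
    }

    static class BeansXml implements Descriptor {
        public String name() { return "beans.xml"; }
        public String defaultPath(ArchiveKind kind) {
            switch (kind) {
                case WEB: return "WEB-INF/" + name();
                case JAR: return "META-INF/" + name();
                default:  throw new IllegalArgumentException("Unsupported: " + kind);
            }
        }
    }

    public static void main(String[] args) {
        Descriptor d = new BeansXml();
        // beans.xml lands in a different place depending on the archive kind.
        System.out.println(d.defaultPath(ArchiveKind.WEB));
        System.out.println(d.defaultPath(ArchiveKind.JAR));
    }
}
```

With this shape, add(Descriptor, "/WEB-INF/web.xml") collapses to add(Descriptor), because the placement decision moves into the descriptor.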
8. Re: DSL for XML descriptors - Dan Allen, Jun 13, 2010 6:00 PM (in response to Andrew Rubinger)
I'm working on a formal prototype in the shrinkwrap-sandbox project on github (by formal, I mean more formal than using pastebin, so people can actually compile this stuff).
I'm using JAXB to generate the XML. So far, I've implemented the persistence and beans descriptors as well as a fluent API to create them. I'll merge in Aslak's web descriptor patch soon.
I've made the descriptor a ShrinkWrap Asset, rather than an Archive, which makes a heck of a lot more sense to me. Here's what it will look like:
PersistenceDef persistence = Descriptors.create(PersistenceDef.class)
    .persistenceUnit("test").transactionType(TransactionType.JTA)
        .provider(ProviderType.HIBERNATE)
        .jtaDataSource("java:/DefaultDS")
        .classes(User.class)
        .schemaGenerationMode(SchemaGenerationModeType.CREATE_DROP)
        .showSql()
        .formatSql()
        .property("hibernate.transaction.flush_before_completion", true)
    .persistenceUnit().name("another").transactionType(TransactionType.RESOURCE_LOCAL)
        .provider(ProviderType.ECLIPSE_LINK)
        .nonJtaDataSource("jdbc/__default")
        .excludeUnlistedClasses()
        .schemaGenerationMode(SchemaGenerationModeType.CREATE);

BeansDef beans = Descriptors.create(BeansDef.class)
    .interceptor(InterceptorSample.class)
    .interceptor(AnotherInterceptorExample.class)
    .decorator(DecoratorSample.class)
    .alternativeStereotype(StereotypeExample.class);

ShrinkWrap.create("test.jar", JavaArchive.class)
    .addManifestResource(persistence, "persistence.xml")
    .addManifestResource(beans, "beans.xml")
    .as(ZipExporter.class).exportZip(new File("/tmp/test.jar"), true);
Feel free to hackup the github project. This is by no means set in stone. What's important is that we have a place to collaborate on the API.
I consider this a very important issue. As people are beginning to use Arquillian more regularly, it's a huge hole to not be able to create the descriptors. Writing XML strings in Java is horrible and having to switch over to an XML file breaks the developer's flow in the test.
Plus, having this functionality will be a huge boost for the project. People are going to flip when they see it.
9. Re: DSL for XML descriptors - Andrew Rubinger, Jun 13, 2010 6:13 PM (in response to Dan Allen)
Dan Allen wrote:
I've made the descriptor a ShrinkWrap Asset, rather than an Archive, which makes a heck of a lot more sense to me. Here's what it will look like:
Fantastic.
S,
ALR
10. Re: DSL for XML descriptors - Aslak Knutsen, Jun 13, 2010 7:50 PM (in response to Dan Allen)
Very nice!
It needs to solve 2 more things, then it's perfect:
- You should only have to add Classes to the Descriptor and they should be added to the Archive as well
- You should be able to let the Descriptor pick it's storage location and name by default
A possible solution could be to have a common Descriptor interface and add an add(Descriptor) method to the Archive interface. On add(Descriptor), the Archive could give the Descriptor a callback identifying what it was added to.
public class ArchiveImpl implements Archive {
    public T add(Descriptor desc) {
        desc.populate(this);
    }
}

public class BeanXml implements Descriptor {
    public void populate(Archive<?> archive) {
        if (WebArchive.class.isInstance(archive)) {
            WebArchive.cast(archive).addWebResource(makeAsset(), name());
        } else if (JavaArchive.class.isInstance(archive)) {
            JavaArchive.cast(archive).addManifestResource(makeAsset(), name());
        } else {
            throw new IllegalArgumentException("Unsupported Archive: " + archive);
        }
        addClasses(ClassContainer.cast(archive));
    }

    private Asset makeAsset() {}
    private ArchivePath name() { return "beans.xml"; }
    private void addClasses(ClassContainer container) {
        container.addClasses(interceptors);
        container.addClasses(alternatives);
        ...
    }
}
BeanXml is more advanced than most descriptors since it can support two different Archive types.
If we want to avoid the Type checking and casting etc in the Descriptor, we could add some magic to the add(Descriptor) impl:
public class ArchiveImpl implements Archive {
    public T add(Descriptor desc) {
        // find the populate method matching this archive type..
        for (Method method : desc.getClass().getMethods()) {
            if (method.getName().equals("populate")
                    && method.getParameterTypes().length == 1
                    && method.getParameterTypes()[0].isInstance(this)) {
                method.invoke(desc, this);
            }
        }
    }
}

public class BeanXml implements Descriptor {
    public void populate(WebArchive archive) {
        archive.addWebResource(makeAsset(), name());
        addClasses(archive);
    }

    public void populate(JavaArchive archive) {
        archive.addManifestResource(makeAsset(), name());
        addClasses(archive);
    }

    private Asset makeAsset() {}
    private ArchivePath name() { return "beans.xml"; }
    private void addClasses(ClassContainer container) {
        container.addClasses(interceptors);
        container.addClasses(alternatives);
        ...
    }
}
That will give us the following..
ShrinkWrap.create(JavaArchive.class)
    .add(Descriptors.create(BeanXml.class)
        .enableInterceptor(Transactions.class))
    .addClass(BankAccount.class);
Nifty!
11. Re: DSL for XML descriptors - Aslak Knutsen, Jun 13, 2010 7:40 PM (in response to Aslak Knutsen)
and one more thing..
- A Descriptor should keep in sync with changes to the archive, add / remove etc
public class ApplicationXml extends Descriptor {
    public void populate(EnterpriseArchive archive) {
        archive.setApplicationXml(makeAsset());
        archive.registerListener(new UpdateApplicationXmlOnModuleChange());
    }
}

ShrinkWrap.create(EnterpriseArchive.class)
    .add(Descriptors.create(ApplicationXml.class)
        .something())
    .addModule(war);
The UpdateApplicationXmlOnModuleChange listener should then be able to pick up on the addModule call and register it with its descriptor metadata.
?
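The keep-in-sync idea above can be sketched with a plain observer pattern. ArchiveListener and ApplicationXmlSync are invented stand-ins, not the real API: the archive notifies registered listeners on every addModule call, and the descriptor's listener records the modules as they arrive.

```java
import java.util.ArrayList;
import java.util.List;

public class ArchiveListenerSketch {

    interface ArchiveListener {
        void onAdd(String path);
    }

    static class Archive {
        private final List<ArchiveListener> listeners = new ArrayList<>();

        Archive registerListener(ArchiveListener l) {
            listeners.add(l);
            return this;
        }

        Archive addModule(String path) {
            // Notify every listener so descriptors can stay in sync.
            for (ArchiveListener l : listeners) l.onAdd(path);
            return this;
        }
    }

    // Stand-in for UpdateApplicationXmlOnModuleChange: collects module entries
    // that an application.xml descriptor would later serialize.
    static class ApplicationXmlSync implements ArchiveListener {
        final List<String> modules = new ArrayList<>();

        public void onAdd(String path) {
            if (path.endsWith(".war") || path.endsWith(".jar")) modules.add(path);
        }
    }

    public static void main(String[] args) {
        Archive ear = new Archive();
        ApplicationXmlSync sync = new ApplicationXmlSync();
        ear.registerListener(sync);
        ear.addModule("my.war").addModule("util.jar");
        System.out.println(sync.modules); // prints [my.war, util.jar]
    }
}
```

The same mechanism would cover removals by adding an onRemove callback; the descriptor would drop the matching metadata entry.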
12. Re: DSL for XML descriptors - Andrew Rubinger, Jun 13, 2010 11:55 PM (in response to Dan Allen)
It occurs to me that we can achieve some greater separation.
For instance:
Dan's prototype here works fantastically as a standalone API without any ShrinkWrap references. We could actually spin this whole thing off into another project, which aims to generate spec DSLs using a method-chaining (ie. fluent) API. In other words, PersistenceDef wouldn't implement Asset.
Then some ShrinkWrap extension can come along and provide the integration necessary: adapt this into an Asset, put it in the default spec locations in the Archive (as Aslak mentions), listen on archive events and change the descriptor appropriately, etc.
S,
ALR
13. Re: DSL for XML descriptors - Aslak Knutsen, Jun 14, 2010 4:20 AM (in response to Andrew Rubinger)
DefWrap ?
14. Re: DSL for XML descriptors - Dan Allen, Jun 15, 2010 12:36 AM (in response to Andrew Rubinger)
I agree with both of you. Having to pick the location when there is a sensible default seems redundant. Also, I totally agree that this could be a reusable library! Implementing the Asset interface was just a convenience on the first go-around, but it would be easy enough to push this into an extension class.
Just to prove how generally useful this can be: imagine for a minute that you want to develop a web application but define your web.xml entirely in Java (the same goes for other descriptors). You simply implement a factory, not unlike Arquillian's @Deployment method.
public class WebAppFactory {
    @Descriptor("${project.build.directory}/${project.build.finalName}/WEB-INF/web.xml")
    public Asset create() {
        return Descriptors.create(WebAppDef.class)
            .displayName("My Webapp")
            .facesDevelopmentMode()
            .servlet("javax.faces.webapp.FacesServlet", new String[] { "*.jsf" })
            .sessionTimeout(3600);
    }
}
Then you use a Maven plugin to generate it into your output directory at the end of the compile phase (so it gets packaged or hot deployed as needed).
<plugin>
  <groupId>org.jboss.shrinkwrap</groupId>
  <artifactId>maven-descriptor-plugin</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <executions>
    <execution>
      <configuration>
        <!-- Remove configuration once annotation processing is implemented -->
        <factoryClass>com.acme.WebAppFactory</factoryClass>
        <target>${project.build.directory}/${project.build.finalName}/WEB-INF/web.xml</target>
      </configuration>
      <goals>
        <goal>generate</goal>
      </goals>
    </execution>
  </executions>
</plugin>
I've hacked up a very crude Maven plugin to implement this idea in the ShrinkWrap github sandbox.
maven-descriptor-plugin master tree
Imagine the possibilities.
This would be really nice for CDI in that you could activate interceptors, alternatives, etc. in Java code. Of course, it would be easy to tie factory methods into Maven profiles so that different descriptors are generated in different deployment scenarios.
Source: https://developer.jboss.org/message/540648
Kross
#include <childreninterface.h>
Detailed Description
Interface for managing Object collections.
The Manager as well as the Action class inherit this interface, allowing QObject instances to be attached to a global or a local context of related instances that should be published to the scripting code.
Definition at line 38 of file childreninterface.h.
Member Enumeration Documentation
Additional options that could be defined for a QObject instance. Definition at line 45 of file childreninterface.h.
Member Function Documentation
Add a QObject to the list of children. Definition at line 80 of file childreninterface.h.
Returns the map of options. Definition at line 118 of file childreninterface.h.
Other member functions are defined at lines 90, 97, 104 and 111 of file childreninterface.h.
The documentation for this class was generated from the following file:
Documentation copyright © 1996-2017 The KDE developers.
Generated on Sun Nov 12 2017 03:31:29 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006
Source: https://api.kde.org/4.x-api/kdelibs-apidocs/kross/html/classKross_1_1ChildrenInterface.html
Map 25: Partly supervised satellite image classification of the survey region above Lahic, based on a Landsat 7 ETM+ image (UTM zone 38). Legend: study sites; area proposed for protection; open soil and sparse vegetation cover; grass & scrublands; forest; gravel or dirt roads; secondary roads; rivers; scale bar 0-5 kilometres.
Potential Analysis for Further Nature Conservation in Azerbaijan
dramatically in the area. On the upper colline stage, high forests are still present, whereas they have vanished from the lower montane stage. Supposedly, the forests on the colline stage are used less intensively than those on the montane stages. High forests on the montane stage have been entirely replaced with intensively used simple coppice or coppice-with-standards. Recent hillside slides could be observed on this stage.

Many steep slopes on the lower montane stage show skeleton-rich soils and sparse forests or scrubland. It can be assumed that these slopes were formerly covered by dense forests, as can be seen on the Oguz transect. After past felling, the soils probably eroded and the forests could not regenerate.

Especially in the grasslands on the upper montane stage, only insular patches of forest remain. Still, they illustrate the potential for the mountains to be covered with forests. It can also be assumed that the timberline generally shifted downwards a few hundred metres due to long-term use.
Significance & protection: The area, in particular the upper treeline, is characterised by the abundance of fruit tree species. Also, near the timberline, some patches of the formerly widespread Persian Oak forests are preserved. However, in general, the forest that formerly occurred in the area has largely disappeared due to intensive use. For the Azerbaijani part of the Greater Caucasus, the situation, location and conditions of the Lahij region are unique.

Especially in the Lahij region it would be advisable to pay attention to sustainable use. The continuation of land use, albeit under certain pre-conditions, is advisable. The establishment of a protected area with IUCN category IV (Zakaznik in Azerbaijan) is recommended. Within the frame of this Zakaznik, participatory land use planning and guidance to establish sustainable value chains should be enabled. As part of the Zakaznik, core zones with the exclusion of any land use should be established, e.g. hard-to-reach, rocky scarps with little-used woodlands featuring a high biodiversity, or the investigated Persian Oak forest near Burovdal.
Fig. 33: Lahij: Cross section through the sequence of vegetation types on the slope.
Part Two Fact sheets of regions surveyed
The diverse ethnic composition of the population of the Lahij region, the holy Mountain Babadag (a popular destination for many pilgrims in summer), a growing but still rural tourism, and the historically grown cultural landscape make the Lahij region a preferred area for the development and implementation of regional development concepts. Detailed regional planning, development of a vision for the region, evaluation of the landscape and its natural products, integration of ecotourism and adaptation of these plans are promising and may lead to the establishment of a protected area, but do not necessarily need to do so.
Table 8: Overview of colline to upper montane vegetation types and characteristics of the Lahic transect

| Stage | Upper colline | Lower montane | Upper montane | Upper montane / subalpine |
|---|---|---|---|---|
| Altitude (m) | 800-1,000 | 1,000-1,400 | 1,500-1,800 | 1,800-2,000 |
| Dominating tree species | Georgian Oak, hornbeam | Georgian Oak, hornbeam | Persian Oak, apple, pear | Persian Oak |
| Other tree species | beech, ash, wild cherry, maple, pear, hawthorn, lime, walnut, White Poplar | beech, ash, wych elm, Service Tree, pear, White Poplar, walnut, hawthorn, hedge maple, Caucasian Maple | pear | hornbeam, birch, hedge maple, pear, wild cherry |
| Forest types on northern slopes | Georgian Oak-hornbeam forest | Georgian Oak-hornbeam forest | mostly grasslands, scrubland with Persian Oak & fruit trees | mostly grasslands, Persian Oak forest |
| Forest types on southern slopes | Georgian Oak forest | Georgian Oak-hornbeam forest or scrubland | mostly grasslands, Persian Oak scrubland | grasslands, no forests |
| Forest types in special habitats | beech forest in gorge | hornbeam forest with lime, yew, ash, White Poplar & walnut on steep northern slope with thin soil; oak-hornbeam forest on rocky southern slopes | | |
| Felling | high forest, felling of single trees, in some places simple coppice or coppice with standards | simple coppice and coppice with standards | cutting of thin wood for firewood | moderate to intensive felling in the remaining forest |
| Pasturing | intensive | hampered by dense coppice | hampered by dense fruit scrubland | intensive on the grasslands, moderate in the remaining forest |
| Consequences of use | thinning of forests, constrained rejuvenation | rejuvenation, scrub encroachment, erosion on steep slopes | promotion of fruit scrubland by felling and pasturing, groves only on northern slopes or colluvium | downwards shift of timberline, threat to the remaining forest |
| Impact of use | high | high | high | high to very high |
| Regeneration potential | relatively high | high | good rejuvenation within the scrubland, no regeneration on grassland | good rejuvenation within the scrubland or forest, no regeneration on grassland |
| Threat | high | high | moderate to high | high to very high |
Table 9: Fruit tree forest formations of the upper montane and their site conditions

| | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| Forest type | Persian Oak-fruit tree shrubbery forest with sea buckthorn | fruit tree shrubbery | sea buckthorn pasture shrubbery (floodplains) | Persian Oak shrubbery |
| Altitude (m a.s.l.) | 1720 | 1700 | 1540 | 1650 |
| Exposure | north | north | flat site | south |
| Relief | moderately to strongly inclined slope, sheltered relief site | moderately to very strongly inclined slopes, also smaller slope sections | gravel terrace, close to the river | strongly to very strongly inclined slope |
| Substrate | mountain loam, high skeleton grade | mountain loam | river gravel | mountain loam, high skeleton grade |
| Dominating tree species | oak, apple, plum | plum, apple, willow | sea buckthorn (as shrub) | Persian Oak (as shrub) |
| Other tree species | pear; shrubs: sea buckthorn, roses, whitethorn and others | pear; as shrubs: berberis, sea buckthorn, roses, whitethorn, whitebeam, bird cherry, field maple, Persian Oak and others | as shrubs: White Poplar, willows, berberis and others | apple, sea buckthorn, juniper, berberis, roses and others |
| Stand height | 8 m | 2-8 m | 2.5-4 m | 1-5 m |
| Wood stock (m³/ha) total | - | - | - | - |
| Intensity of use | moderate-high | moderate-high | moderate | very high |
| Use of wood | strong use of wood (coppice) | low at the moment? (previously strong felling for firewood) | | |
| Grazing | moderate-strong grazing | low-moderate (dense bushes resp. steep sites prevent grazing) | hardly any grazing, because of dense bushes, difficult to access | strong grazing |
| Development of population | limited regeneration of the oak; development of fruit shrubbery, which is relatively resistant against grazing (sea buckthorn, roses, juniper and others) | shrub layer is strongly developed; regeneration of fruit shrubbery (Persian Oak was eliminated through wood harvest) | distribution of sea buckthorn | grazing of the shrubbery; low increase of the Persian Oak, possibly due to location (anthropogenic timberline at the southern slope, lower than at the northern slope) |
| Rejuvenation of trees | moderate | moderate-good (fruit shrubbery) | low | moderate, grazed |
| Distribution | widespread | widespread-common | small-area spread | widespread-common |
| Degree of danger | moderate-high | moderate | moderate | moderate-high |
| Reasons of danger | intensification of wood felling and grazing, danger of erosion | intensification of wood felling and grazing, danger of erosion | | intensive felling combined with intensive grazing |
| Biodiversity (tree/shrub species) | 4 tree species, 9 shrub species | total: 15 | total: 10 | total: 14 |
| Closeness to nature | moderately semi-natural | moderately modified | semi-natural | semi-natural |
| Conservation value / priority | medium-high | medium | medium-high | medium-high |
2.5.3.5. Juniper heathlands south of Altiaghaj

Location: Juniperus excelsa grows in the valleys of the Chigilchay and the Gozluchay Rivers in the mountains south of Altiaghaj, between 830 and 1300 m a.s.l. Coloured badlands of cretaceous rocks on the slopes (Maxioaiiiv 1963) and up to one kilometre wide, braided floodplains filled with gravel are typical for this area. Steppes and heathlands are limited to moderate slopes, while plains are used as arable land. Here, however, agriculture is based on a rather less intensive approach, due to limitations in precipitation and water availability for irrigation.
Landscape characteristics: Sparse forests and scrublands of xerophilous tree species form the natural lower timberline in the eastern Greater Caucasus (Hixxixc 1972). These sites especially are strongly influenced by domestic animals, artificial burning and selective felling. As a result, they appear like steppes with single stress-adapted shrubs of Juniperus sp. and Rosaceae. It seems problematic to call the steppe-like formations sparse forests or scrublands. Given that their appearance and origin are comparable to European heathlands, in the following they will be referred to as heathlands.

The transition from Georgian Oak forest at the northern slopes to Juniper heathland farther south, sheltered from the Caspian influence, occurs as follows: South of the oak-covered ridge, a species-rich shrubland extends, with Pyrus salicifolia, Crataegus orientalis and Crataegus pentagyna, interspersed with flower-rich patchy meadows. The most elevated parts along this gradient are vegetated by a Stipa sp.-Inula aspera steppe with low-growing shrubs such as Rosa sp. and Juniperus sabina. On the southern slopes the herb-rich vegetation changes to tragacanthic vegetation of Astragalus sp., Juniperus sabina and J. communis. The habitat with tragacanthic vegetation, such as Astragalus sp.-Stipa sp. steppe, is comparable to habitat type 4090 of the European habitat directive (Endemic oro-Mediterranean heaths with gorse). It is characterised as heath of dry mountains of the Mediterranean and Irano-Turanian regions with low, cushion-forming, often spiny shrubs, such as Astragalus (etc.) and various composites and labiates. The FFH directive lists juniper heathlands as habitats of community interest (Council of the European Communities 1992).
Photo 45: Fruit tree formations at the upper montane stage, Lahic transect. (H. Gottschling)
In this region, herb-rich habitats with Hordeum bulbosum occur due to spring activity, which is caused by landslides. Little kettle-like lakes and wetlands occur in the higher reaches, some of them periodically falling dry in summer. For the most part, these eutrophic lakes do not accumulate peat (although some do), and their origin is mainly runoff water, collected in depressions created by landslides. Heading southwest, the conditions get drier and a low shrub vegetation with a loose herb layer grows on rubble-rich soil.

Several (temporarily flooded) riverbeds carry large amounts of gravel across the area in northwest-southeast direction, originating in the high Greater Caucasus. The water runoff is very low in summer and only intense for a short period following snowmelt or heavy rains in the high mountains.

In principle, Juniper heathlands and sparse forests can also be found at different altitudes of the Southern Caucasus (Piiiiixo 1954). Lowland types with Pistacia mutica occur around Lake Mingchevir and in the Tryanchay Reserve. They differ strongly from the submontane and montane type presented here, with numerous species of Rosaceae.

Map 26: Partly supervised satellite image classification of the Altiaghaj survey region. Based on a Landsat 7 image.
Climate: Maxioaiiiv (1963) characterises the climate in the region as moderately warm with dry summers and winters. The climate station of Shamakhi (749 m a.s.l.), about 25 km from the investigated area, measures the highest mean temperatures in July at 29.8 °C and the lowest mean temperatures in January at -3 °C. The annual precipitation here is about 600 mm, although this amount is dispersed very irregularly over the annual cycle.

Soil: Dark chestnut mountain soils, brown mountain forest soils, grey-brown mountain soils and thin mountain Chernozems occur in the investigated area (Maxioaiiiv 1oo).
Vegetation: a) Juniper heathland: Juniper shrubs of no more than four metres height are sparsely distributed on the slopes, with a density of rarely more than 40 plants per hectare. Especially at intersections with other scrubland types, many ligneous species such as Berberis vulgaris, Spiraea crenata and Viburnum lantana are intermixed. South of the village Gurbanchi Kocha, Juniperus excelsa forms a particular type of scrubland, together with Crataegus orientalis, Lonicera iberica and Pyrus salicifolia. On dry sites at higher altitudes, Pyrus salicifolia displaces the juniper. The best-preserved juniper heathlands can be found in a tributary valley to the Chigilchay River (N 40°47'; E 48°52'), where juniper shrubs are associated with Jasminum fruticans and Cytisus sp.

b) Adjacent vegetation: At altitudes of 1,300 m a.s.l., scrublands become more closed or change into hornbeam forests. Here, only small shrubs of Juniperus communis occur, while J. excelsa disappears completely. Below 830 m a.s.l., summer droughts prevent the existence of shrubs, except for Rhamnus pallasii on rock formations. Other sites at lower altitudes are occupied by different types of steppe. Only on a sea-exposed slope southwest of Shirvan in northern Gobustan does a small group of Juniperus excelsa grow at 450 m a.s.l.
c) Floodplains: Old remnants of poplar and willow trunks indicate that parts of the floodplains were covered by forests. Today most floodplains are treeless, with only small patches of scrubland. The latter are mostly composed of Salix purpurea, Hippophae rhamnoides and Clematis orientalis. Sometimes young stands of Ulmus minor, Fraxinus excelsior, Pyracantha oxycoccus and Ligustrum vulgare are intermixed. Frogs (Rana ridibunda) and tracks of Water Voles (Arvicola terrestris) were detected at the edge of small ponds.

Fig. 34: Altiaghaj: Cross section of the Alti-Agac region (eastern East Caucasus) at 800-1,400 m a.s.l., showing Quercus iberica-Carpinus betulus forest, Pyrus salicifolia-Crataegus spec. shrub, Stipa spec.-Inula aspera steppe, Astragalus spec.-Juniperus sabina steppe, Juniperus communis-Artemisia steppe, Astragalus spec.-Stipa spec. steppe and a wadi with Tamarix ramosissima and Paliurus spina-christi, between the Alada Silsilesi and Verefte Silsilesi ridges.
d) Special habitats are flat swamp-like lakes fed by runoff and/or groundwater year-round. Most of these lakes occur just south of Altiaghaj National Park on the southern slope of the main ridge. Most lakes are eutrophic, alkaline and fed by spring water, and sedimentation of clay gyttja or coarse detritus gyttja occurs. Peat accumulation is rather rare, due to the inconsistent availability of water over the annual cycle. Where peat does occur, the peat types are radicell peat mainly composed of sedges of low decomposition degrees, and in some cases Phragmites peat. Water level fluctuations, occasional desiccation and salinisation are frequent. During rainy periods in spring and autumn the inflow brings clay from the surrounding slopes, and the organic material is mixed with clastic material. Small reedbeds of Typha minima and Phragmites australis with species typical of carbonate-rich swamps, such as Juncus inflexus and Mentha longifolia, grow here. In most of the lakes the submerged vegetation is dominated by Batrachium trichophyllum, Chara vulgaris and Potamogeton pectinatus, indicating the eutrophic-calcareous conditions and a certain salinity.
Birds: About 150 wild bird species have been recorded in the Eastern Greater Caucasus region; 40 of them are listed in annex I of the directive 79/409/EEC and eight by the IUCN (3 NT, 3 VU, 2 EN). Eight species are included in the Azerbaijan Red Data Book and 60 are of special European conservation concern (6 x SPEC 1, 17 x SPEC 2, and 38 x SPEC 3).

The lower altitudes are dominated by semi-desert breeding communities, consisting of wheatears, Rock Sparrow, Lesser Grey Shrike and Chukars. In the steppes, Ortolan Bunting and several larks are common, along with Grey Partridges and Quail.

At the lakes, Great Reed Warbler, Little Grebe, Moorhen and Ruddy Shelduck breed, and the forests are very rich in passerines (tits, warblers, woodpeckers), accompanied by Scops Owl, Nightjar, and others.
Fig. 35: Cross section through the lake region close to Altiaghaj National Park (NE to SW, 1,000-1,300 m a.s.l.), showing shrubby Hordeum bulbosum meadow, secondary Hordeum murinum pasture, tragacanthic vegetation, a Sparganium erectum-Typha angustifolia landslide lake, a Schoenoplectus triqueter-Phragmites australis-Ranunculus trichophyllus karst lake, and a system of springs and rills with hydrophilic vegetation
Part Two Fact sheets of regions surveyed
The entire region shelters a high number of raptors, most of which breed in the forests. Among them are Short-toed, Booted, Imperial (VU) and Lesser Spotted Eagle and Levant Sparrowhawk. Montagu's Harriers breed in the steppes and valleys of the juniper woodland region. On migration, Lesser Kestrel (VU), Saker Falcon (EN), Pallid Harrier (NT), Greater Spotted (VU) and Steppe Eagle are also regular guests.

Poaching (mainly for Chukar), the disturbance and destruction of the lakes by cattle, and intensive tree cutting and grazing in the forests have been identified as the main threats.
Mammals: Due to the different habitat types in this region, the mammal fauna is very diverse and consists of 55 species, eight of which are listed in annex II and 20 in annex IV of the directive 92/43/EEC. The Azerbaijan Red Data Book includes three species and the IUCN lists three.

Of special conservation concern are Lynx lynx, Lutra lutra (NT) and Barbastella barbastellus (VU).

The region is very rich in bats (14 species) and rodents (21 species). Of special interest are the large carnivores,
Fig. 36: Cross section through a kettle hole in the Altiaghaj region (N to S, 0-70 m), showing a eutrophic-subneutral floating mat (Phragmites australis, Phalaris arundinacea, Carex riparia) over Phragmites radicell peat and unknown water depth, undergoing terrestrialisation and flanked by Pyrus salicifolia-Crataegus sp. shrub

Photo 46: Juniper heathland and Stipa sp. steppe of the Altiaghaj region (S. Schmidt)
which live in the dry Georgian Oak forests. They include Wolf (Canis lupus), Brown Bear (Ursus arctos), Wildcat (Felis silvestris) and Lynx (Lynx lynx). Their habitats have decreased during the last decades, thus the area is of high conservation concern for them. Also, Caucasian Red Deer (Cervus elaphus maral) still occurs in the area. Along larger streams, River Otters (Lutra lutra) have been found.

Several times during the investigation, hunters were observed and shots were heard in the region. Locals reported that there is hunting for Hares, Wolves and Brown Bears.
Amphibians and Reptiles: Four species of amphibians and 16 reptiles have been recorded in the Eastern Caucasus region. Four of them are listed in annex II and eight in annex IV of the directive 92/43/EEC. Testudo graeca is the only species included in the Azerbaijan Red Data Book. The IUCN lists the latter as vulnerable and Vipera ursinii as endangered.

Typical species of the lower and dry altitudes are Eremias velox and E. arguta. The steppes host Vipera ursinii and Macrovipera lebetina, as well as many Ophisaurus apodus and Testudo graeca. In the forests, Lacerta strigata is most abundant. Along water in the wadis, Natrix tessellata and Rana ridibunda are widespread, as they are around ponds all over the region, where Emys orbicularis and Bufo viridis are also very common.
Human influence: The examined sites are partly used as summer pastures for cattle and sheep. Near the villages, grazing occurs year-round. In part, the area is kept as a retention area and is reserved for the seasonal movements of livestock between summer and winter pastures. Around the small lakes described above, small but permanent houses can be found. The lakes with their reedbeds are grazed almost entirely.

Selective burning and cutting of juniper and other thorny shrubs takes place. Juniper recovers poorly or not at all from burning. It can be assumed that the current vegetation structure is the result of long-term selection processes through pasturing and wood-cutting, comparable to European heathlands with juniper.
Photo 47: Juniper sp. heathland of Altiaghaj (J. Peper)
Significance & protection: Due to its rich habitat structure, the eastern foothills of the Greater Caucasus shelter a very high faunal diversity. Among the species found here are many of conservation concern, especially several raptors and the larger carnivores. Large parts of the region are barely settled and offer an important refuge for large animals such as bear, lynx and wolf. This makes the region highly valuable for nature conservation.

The habitat described is peculiar to the Greater Caucasus. Although juniper sparse forest also occurs within the steep loam escarpments around Turianchay and south of Lake Ajinohur, here this formation is embedded in a semi-cultural landscape and might be supported by land use.
The natural extension of sparse juniper forest lies to the east, in Turkmenistan. There, the Kopetdag Mountains are to a large extent characterised by this habitat, and this is where it has its main natural extension, reflecting the ecological pre-conditions. However, the species composition is partly different, with Juniperus turcomanica dominating.
The area is an important link between the ecosystems of the eastern Greater Caucasus and the Gobustan area. It is not only used by herdsmen as a migration corridor between summer and winter pastures, but possibly also by wolves, vultures, and other migrating animals. It must be a main goal for the future to stop the persecution of mammals to ensure their survival in these easternmost forests, juniper shrublands and steppes of the Eastern Caucasus.
It seems appropriate to include and/or attach further parts of this unique region (the steppes, the dry river valleys and the juniper scrublands) into existing protection regimes. The designation of a Zakaznik, creating a National Park Region around Altiaghaj National Park, is therefore advised. As a result, land use management schemes need to be developed to preserve and protect this cultural landscape and foster protection measures. A strong connection to the national park administration will be beneficial for the immediate implementation.
Immediate actions to strengthen the conservation measures in the region would be:
- burning or cutting of juniper should be halted immediately,
- grazing should be managed extensively to prevent erosion,
- livestock numbers should be reduced in general to avoid damaging the grass layer, which promotes erosion and decreases soil fertility.
2.5.4. Lesser Caucasus Mountains

2.5.4.1. Mountain forests of the Shamkirchay valley

Location: The Shamkirchay valley is situated in the northern part of the Lesser Caucasus between the district towns of Dashkasan and Gadabay and stretches from the southwest to the northwest. It encompasses a sequence of steppes and fields in the lowlands, areas with fruit trees and scrubland at middle altitudes, and forests at higher altitudes. The forested part stretches about 45 km along the valley. The lower montane belt could not be reached by researchers during the project, so no data is available for it.
Landscape characteristics: In the Lesser Caucasus, the slopes are generally gentler than in the Greater Caucasus. The mountainous areas are more easily accessible and therefore more intensively used than in the Greater Caucasus. By contrast, because of its narrow entrance, the upper Shamkirchay valley is accessible only via passes from neighbouring valleys. Due to the clayey soils, during rainfall the tracks become so muddy that only caterpillar vehicles and special lorries can negotiate the passes. Therefore, there are still well-preserved mountain forests within the valley, which elsewhere in the Lesser Caucasus have mostly become degraded or been replaced with pastures or secondary steppe.

Photo 48: Crested Lark (Galerida cristata) (S. Schmidt)
Climate: The annual rainfall in the foothills (Ganja, 303 m a.s.l.) is about 300 mm. The highest mean temperatures occur in July (31.7 °C) and the lowest in January (-2.3 °C). From the lowest, the colline stage, to the highest, the sub-alpine stage, the climate undergoes a significant change. In the colline stage the weather is rather warm and dry; higher up the mountains it grows increasingly colder and wetter. This climatic sequence is reflected in the vegetation zones. The timberline at 1,800 to 1,900 m a.s.l. is determined by pasturing.
Soil: In the Shamkirchay valley, there is often only a shallow soil layer on the bedrock. On the upper colline stage, shallow colluvium, gravel and skeleton-rich soils over bedrock were found. Shallow and deep loamy soils, colluvium and gravel occur on the middle montane stage; loamy soils of medium depth and deep loams are found on the upper montane stage.
Vegetation: On the upper colline stage, forests concentrate near the rivers, where water is more freely available than on the slopes. These forests are made up of walnut, hackberry and various maple species. On the slopes, scrubland with pomegranate, Christ's Thorn and juniper can be found.
Map 27: Partly supervised satellite image classification of the Shamkirchay valley. Based on a Landsat 7 image.
Dense beech and beech-hornbeam forests occur on the middle montane stage. On northern slopes, beech dominates on deep soils, whereas hornbeam dominates on shallow soils. On southern slopes, Georgian Oak-hornbeam forests occur.

In level and slightly sloped areas, open pear-maple forests occur. In more intensively used areas, scrublands can be found with medlar, hawthorn, plum, apple and pear. The latter is a speciality of the used forests of the Lesser Caucasus. There are also wide areas with no forest vegetation at all.

On the upper montane stage, beech and hornbeam forests in natural condition, partially with Persian Oak, occur. In the Shamkirchay valley, there are still tall forests, whereas outside of it on the upper montane stage the forests have disappeared. The timberline is located at 1,800 to 1,900 m a.s.l. on northern slopes.
Birds: About 120 wild bird species have been recorded in the northern Lesser Caucasus region; 26 of them are listed in annex I of the directive 79/409/EEC and six by the IUCN (3 NT, 1 VU, 1 EN, 1 DD). Ten species are included in the Azerbaijan Red Data Book and 41 are of special European conservation concern (4 x SPEC 1, 9 x SPEC 2, and 31 x SPEC 3). However, due to poor accessibility, the general knowledge about the bird communities of the Lesser Caucasus is still insufficient.
The areas of remaining beech forest are inhabited by typical breeding bird communities of this habitat, including woodpeckers, tits, flycatchers, warblers, thrushes and finches. Also, several species of raptors breed here, among which Common Buzzard and Sparrowhawk are most abundant. Other species that could be observed include Honey Buzzard, Booted Eagle and Lesser Spotted Eagle.

Where the forest has been replaced by meadows, secondary steppe and shrubs, Red-backed Shrikes, Tree Pipits, Common Whitethroats, Lesser Whitethroats and Woodlarks are common, and Bee-eater, Cuckoo and Wryneck also occur here.

On the subalpine meadows, passerines such as Shore Lark, Water Pipit, Alpine Accentor and Northern Wheatear breed. Alpine Swifts and Crag Martins forage here and breed on steep cliffs, as do Caspian Snowcock, Peregrine Falcon, Golden Eagle and vultures. During investigations in September 2007, a high number of Griffon, Black (NT) and Bearded Vultures was recorded. As there are very high stocking rates of sheep in the region, they probably find enough carrion. Where shrubs of willows and birches remain above the treeline, the rare Caucasian Black Grouse may still occur in small numbers.

Fig. 37: Cross section through the Shamkirchay valley
Furthermore, the region is an important site for migrating raptors. In September 2007, many Lesser Kestrels (VU), Pallid Harriers (NT), Hobbies and Steppe Buzzards passed through.

Empty cartridges were found, and hunting probably concentrates on Chukars, Grey Partridges and Caspian Snowcocks. As a result of the high stocking rates, there is heavy overgrazing, which negatively influences the habitat quality for species of the subalpine meadows. The forest is subject to very heavy logging and intensive grazing, which leads to a decrease in forest species.
Mammals: Within this study, no mammals could be recorded within the Shamkirchay valley and its surroundings. Also, no literature data was available to the project team for this particular region.
Amphibians and Reptiles: Four species of amphibians and twelve reptiles were recorded in the northern part of the Lesser Caucasus region. Four of them are listed in annex IV of the directive 92/43/EEC.

The data on the herpetofauna of the Lesser Caucasus is rather poor. Among snakes, only Natrix natrix, N. tessellata and Coronella austriaca are known to occur here. Ophisaurus apodus was found in shrubs and open forests at lower altitudes. Although not many species of reptiles occur here, the region is very interesting due to the many different species and semi-species of lizards (Lacerta sp.). L. strigata and L. trilineata are common and mainly live in shrubs. L. derjugini and L. saxicola are very widespread in all warm and rocky habitats. The phylogenetic situation of the saxicola species complex has not yet been resolved in detail. There are many different sub- or semi-species and small parthenogenetic populations. In the investigated area, Lacerta rostombekovi, L. valentini, L. armeniaca and L. raddei have been recorded, but others also occur in this region (L. unisexualis, L. portschinskii, L. dahli).

Stage | Upper colline | Middle montane | Upper montane
Altitude in m | 600-800 | 1,300-1,600 | 1,700-1,900
Dominating tree species | shrubs: pomegranate, Christ's Thorn; trees: walnut, hackberry, maple | beech, hornbeam, pear, maple | beech, hornbeam
Other tree species | juniper, maple, mulberry, fig tree | Georgian Oak, lime, wych elm, Bird Cherry, hawthorn, pear, apple | Persian Oak, maple, ash, Wych Elm
Forest types on northern slopes | - | beech forest, beech-hornbeam forest | beech-hornbeam forest
Forest types on southern slopes | - | oak-hornbeam forest | hornbeam-beech-Persian Oak forest, no forest if intensively used
Forest types in special habitats | pomegranate-Christ's Thorn scrubland or Juniperus scrubland on skeleton-rich slopes; forests at the bottom of the slope or gallery forests with walnut, hackberry & maple | park-like forest with pear and maple, pastures with fruit scrubland | -
Felling | moderate to intensive | intensive, partially still moderate | very intensive
Pasturing | moderate to intensive | intensive, partially still moderate | very intensive
Consequences of use | - | closed forest areas are thinning out more and more | under intensive use, forest occurs only on northern slopes, mostly as coppice
Impact of use | high | moderate, in parts very high | very high
Regeneration potential | - | within the forest high, outside of forests very low | within the forest high, outside of forests very low
Threat | medium | very high | high

Table 10: Forest community characteristics along the Lesser Caucasus transect
Human influence: The Shamkirchay valley is used for pasturing in the colline areas, albeit not intensively. On the montane stages, pasturing and felling have only recently intensified. The forests are still in good condition, but will suffer severely if usage continues at the same rate. The dense forests are already thinning out. Summer camps of cattle herdsmen also exist on the montane stages.

Additionally, the forests of the upper montane stage are increasingly influenced by livestock from the pastures at higher altitudes.
Significance & protection: Due to their intensive use over the past decades, the forests of the Lesser Caucasus in general were often converted into park-like landscapes with pear and maple, open scrublands or grasslands. The entire mountain chain has been under industrial exploitation for about 200 years. Copper mines and the processing of minerals led to a sharp decline of the forest as early as the 19th century. Due to the conflict with Armenia and the occupation of about 20% of Azerbaijan's territory, the land use pressure in the Lesser Caucasus has again increased enormously over the last 20 years. Internally displaced persons, temporarily resettled in the area, largely depend on the use of natural resources such as wood and pastureland. As to the state of the forest located in the occupied territory, no neutral information is available. However, the protection of the remaining forests in the Lesser Caucasus is of urgent importance.
Compared to most of the other areas of the Lesser Caucasus, with more intensive use and only poorly maintained forests, the Shamkirchay valley still appears to be the most promising place to solve utilisation conflicts and protect some of the rare mountain forests. A State Nature Sanctuary already exists, covering parts of the Shamkirchay valley. However, the designation of its borders does not seem appropriate, nor is the protection regime sufficient. The entire valley with the remaining forest holds the potential for further protection, especially as utilisation pressure is very high and the remaining forest is the last remnant of the formerly widespread forest cover in this region. Due to their mostly shallow soils and rapid soil erosion in deforested areas, the Shamkirchay mountain forests are very susceptible to overuse; regeneration will be almost impossible after degradation.
The extension of the existing State Nature Sanctuary, and even its conversion to a Strict Nature Reserve with the forest line determining the borders, would be an important initiative. However, in order to be successful, alternative energy concepts for fuel need to be provided along with the extension of the protected area. Climatically suited as forest habitat, but depending on the soil quality, the Lesser Caucasus should become a focal area for reforestation and afforestation measures.
Hardly any information is available about the fauna of the Lesser Caucasus. In the accessible parts, monitoring of all groups is necessary to gain a clearer picture of the distribution and situation of the species. It is likely that Caucasian Red Deer, Bezoar Goat and Caucasian Black Grouse still occur in the Göygöl region, but there is no recent information about these species in other parts of the Lesser Caucasus.
Photo 49: Lesser Caucasus river valley (K. Gauger)
Map 28: Partly supervised satellite image classification of the Hirkanian Forest as well as the Zuvand upland survey region close to the Iranian border. Based on a Landsat 7 image.
2.5.5. Talish Mountains

2.5.5.1. Hirkanian forest

Location: The Talish Mountains are located in southern Azerbaijan. The border with Iran to the west and south, the Kura-Araz lowland to the north and the Caspian Sea to the east mark the boundaries of this area. The altitude ranges from -12 m in Lenkoran up to 2,400 m a.s.l. at the Iranian border. Stretching across the land like a green band, the total extension of the Hirkanian forest is about 1.9 million ha in Iran and about 0.1 million ha in Azerbaijan. With an extension of 21,435 ha in 2007, and now an area of approximately 38,000 ha, the Hirkan National Park (former Zapovednik) is embedded in the Talish Mountains and encompasses various stages of forest.
Landscape characteristics: The Talish Mountains were folded up primarily during the Tertiary. The relief was formed entirely by erosion processes, without any signs of glaciation. The Lenkoran plain and the adjacent terraces are the result of the abrasive-accumulating activity of the Caspian Sea, due to past sea level changes. The landscape rises slightly over several lower ridges up to the high mountain region near the Iranian border. Numerous rivers run down to the Caspian Sea, crossing the folded structures and cutting deep, narrow valleys. River dynamics are characterised by two discharge peaks: one in autumn and one in spring, after snowmelt. The highest peak is Mt. Qizyurdu at 2,455 m a.s.l.
Climate: The mountain chain of the Talish Mountains represents a natural barrier against the incursion of northern and north-eastern air masses. Rising humid air masses from the Caspian Sea favour cloud development. Annual precipitation exceeds 1,500 mm, with a peak in autumn (September to November) and a dry phase in June and July. The mean annual temperature is warm-temperate and ranges between 12 °C and 15 °C. The summers are warm, with average temperatures in the warmest month ranging between 24 and 26 °C. Depending on altitude, average winter temperatures range between -2 and 3 °C (Muui 2005).
Soil: The soils of the Talish Mountains are very heterogeneous. The most important soils are yellow soils (Ferralsols), yellow-brown soils and mountain brown forest soils (Cambisols), cinnamon-coloured forest soils (Chromic Cambisols), chestnut after-forest alkali soils, humus carbonate soils (Rendzinas) as well as podzol, gley and alluvial soils (MENR 2004).
Vegetation: The Caspian forests belong to the Hirkanian floral province of the Oriental Turanian region (Miusii et al. 1965). Others have described it as the Euxino-Hirkanian (sub-)province of the Euro-Siberian region (Zouaiy 1963, Scuioioii 1998). Together with the Elburz Mountains (Iran), the Talish is described as an autonomous area. The vegetation represents a relict of the arcto-tertiary forests and comprises, in comparison to other European deciduous forests, a very rich flora of woody and endemic species. Examples of endemic tree species are Parrotia persica, Gleditsia caspica, Albizzia julibrissin and Quercus castaneifolia. It is conspicuous that the flora almost entirely lacks coniferous trees; only a few Yew trees (Taxus baccata) are noteworthy. Due to the high humidity, epiphytes (mainly cryptogams) are abundant; bryophytes sometimes completely cover the tree trunks. In total, the very high number of 90 tree species, 211 shrub and semi-shrub species and about 1,500 vascular plant species occur, an indication of the importance of this forest.
The lowland forest, which has almost entirely disappeared, and the forest of the colline belt (up to 400 m) are mainly characterised by Ironwood (Parrotia persica), Zelkova (Zelkova carpinifolia), Date Plum (Diospyros lotus) and Chestnut-leaved Oak (Quercus castaneifolia). The latter occurs partly in combination with alder (Alnus glutinosa ssp. barbata), and along small creeks and riverbeds Caucasian Wingnut predominates. In addition, the occurrence of Albizzia julibrissin growing wild in this altitudinal belt deserves special mention.
Farther uphill, the colline to montane belt is dominated
by hornbeam (Carpinus betulus) and Chestnut-leaved
Oak (Quercus castaneifolia), partly interspersed with
Velvet Maple (Acer velutinum), Parrotia persica, Zelkova
carpinifolia, Date Plum and Gleditsia caspica.
The montane belt, starting at about 800 m a.s.l., is dominated by Oriental Beech (Fagus orientalis), which is accompanied by alder (Alnus subcordata) and maple (Acer velutinum), mainly on slopes with northern exposure. Several evergreen species such as Buxus hyrcana, Ilex spinigera, Ruscus hyrcanus, Danae racemosa and Hedera pastuchovii are typical of these forests. Southerly exposed slopes in this altitudinal belt are also covered with hornbeam (Carpinus betulus) and Chestnut-leaved Oak.
Persian Oak forms the upper treeline as the natural result of increasing aridity above the altitudinal belt still influenced by humid air rising from the Caspian Sea. Under anthropogenic influence, only fragments of these oak forests remain in the Talish Mountains. In most areas, hay meadows and low-quality pastures dominate the higher parts of the mountains.
Fauna: Similar to their flora, the Talish Mountains also boast an abundant fauna, with 200 species of vertebrates and countless invertebrates, among them many Tertiary relicts and endemics.
Birds: To a large extent, the avifauna of the lower Talish Mountains resembles that of any European broadleaved forest. About 83 species breed in the forest of Talish, among them Caspian Tit, Black Stork, Lesser Spotted Eagle and Ring-necked Pheasant. Many species known from the Greater and Lesser Caucasus occur here, as well as additional local specialities. In areas with old-growth forests, Booted and Lesser Spotted Eagles, Goshawk, Hobby, Honey Buzzard and Black Kite occur. A brood of Shikra was re-discovered here. Where Caucasian Wingnut, ash and maple flank the sides of river valleys at lower altitudes, Black Storks could be recorded. Lesser Spotted Woodpecker is rather scarce, and Black Woodpecker only occurs in old and undisturbed stands of beech and oak forests. These are also good sites for Stock Doves and Wood Pigeons as well as Tawny and Long-eared Owls. The Talish subspecies of the Pheasant has strongly declined due to poaching and is now very rare in dense thickets in the lower valleys. Most interesting among the songbirds is the Sombre Tit, which is an uncommon breeder along forest edges and in woods heavily devastated by tree-cutting and grazing (e.g. along side roads of the main Lenkoran-Lerik road). The flat lowland coastal strip at the foothills of the Hirkanian forest is inhabited by about 73 species of breeding birds. The entire species list is attached to the report.
Mammals: Mammalian diversity is mainly made up of small animals such as Caspian White-toothed Shrew, Lesser Horseshoe Bat or the endemic Hirkan Wood Mouse, which are all included in the IUCN or national Red Data Books. Common in the area are species such as Brown Bear, Lynx and Wildcat. The voices of Golden Jackal and Wolf can be heard all over the territory. While the Turanian Tiger became extinct only during the last century, a small number of Caucasian Leopards still inhabit the Hirkanian Forest. As the species is threatened by poachers, its protection is one of the most important conservation tasks in this region.

Photo 50: Hirkanian Forest degraded by timber logging, forest pasture and constant firewood collecting/cutting (S. Schmidt)
Amphibians & Reptiles: Amphibians are represented by nine species, five of which are listed in the Red Data Book. Among them, Triturus cristatus is listed as endangered in Azerbaijan. The herpetofauna of the Hirkan Forest is represented by 22 species, two of which, the Mediterranean tortoise (Testudo graeca) and the Aesculapian snake (Elaphe longissima), are listed in the Red Data Book of Azerbaijan.
Human influence: About 20 years ago, the forestry conservation system and the agricultural system were organised in several collective farms. Following independence, large-scale tree plantations were abandoned and are now densely covered by bracken fern (Pteridium aquilinum). Due to their location and accessibility, the forests of the Talish Mountains are used to a large extent. The various forms of utilisation (e.g. silvo-pasture, logging, fuel wood collection) lead to the formation of several typical degradation schemes in the region, depending on the intensity and type of utilisation. With a combination of the three forms of utilisation mentioned above, continuous growth and recovery of the forest can hardly be achieved. However, due to the lack of alternative income and a consequent subsistence economy, the population of the region often has no alternatives.
Only remote regions and inaccessible areas are spared from utilisation. Furthermore, the Hirkan National Park within its boundaries of 2007 is largely unused. With the extension of the National Park in 2008, areas that were under heavy utilisation or alteration, as well as several villages, became part of the park. The effect on the forest, particularly in those regions, is illustrated by the example in Fig. 38.
Here, the forest extension declined significantly or has been transformed from natural forest to forest stages of lower value over the years. Furthermore, roads and paths have been extended, with a negative impact on the forest condition along these roads.
As depicted in Fig. 39, the forest investigated (e.g., a patch of forest of approx. 26,300 ha around the village Gegiran) has been altered as follows:14

14 Forest degradation stages 1 & 2 represent natural and near-natural conditions; forest degradation stages 3, 4 & 5 represent increasing alteration up to severely degraded forest with few natural and unused trees remaining. For the overview, and due to the difficult classification of the latter three stages, these have been merged and are depicted together. For details see Annex I.5.
Photo 51: Pristine Hirkanian Forest (J. Etzold)
Potential Analysis for Further Nature Conservation in Azerbaijan
• the total amount of forest decreased by about 11% over seven years,
• over a 20-year period, about 9,000 ha of forest disappeared,
• natural and near-natural forest has been reduced by 23%,
• the stages of devastated, scrub-like and intensively used forest increased by approx. 15%,
• the extension of meadows also increased.
Based on this, the alteration of natural forest is obvious. However, it must be noted that despite the transformation from natural to intensively used forest, several ecosystem benefits can still be obtained.
Significance & protection: Due to its outstanding biodiversity, the conservation of the Hirkanian Forest is of great importance. Large parts of the Hirkanian forests are protected within Hirkan National Park. In 2008, parallel to the analysis of this survey and drawing from its report, the national park was extended and almost doubled in size. Now covering an area of about 38,000 ha, the national park extends much further to the north than before. A current map is not yet available. This fast action taken by the MENR is seen by the authors as a quick response to the situation in the region. Now, under the new designation, major parts of the Hirkanian Forest are protected. Nevertheless, the challenges are still enormous, as several villages are located within the park, especially in the new part, where they are of relevant size. Furthermore, the accessibility of the northern part of the national park is very good, and protection measures must be taken urgently. In addition, capacity as well as alternative fuel and energy concepts need to be established for the villages within the park and its buffer zone. If this fact is neglected, a thorough protection of the forest will not be achieved in due time.
Fig. 38: Comparison of forest extension between 1987, 2000 and 2007; based on Landsat 7 imagery
2.5.5.2. Xerophytic mountain region of
Zuvand
Location: The Zuvand region is an intra-mountainous depression in the Lesser Caucasus, extending along the Iranian border close to the district town of Lerik. This so-called Diabar depression is geomorphologically, climatically and in its species composition more similar to northern Iran and Nakhchivan than to the forested parts of the Talish Mountains. Almost closed off to the east by a high-rising mountain range, the Zuvand region extends at elevations between 1700 and 2582 m a.s.l. and forms the sheltered highest region of Azerbaijan's Talish Mountains.
To the west and south-west it borders Iran, to the east and north-east the forest area of the Lankaran Mountains. There are two administrative regions in this subarea: Lerik and Yardimli, both with good road connections to the Caspian lowland.
Landscape characteristics: A semi-desert-like vegetation with xerophytic sub-shrubs and large areas of open soil defines the character of the upper Talish Mountains. The ecosystem of this semi-arid habitat encompasses mountain steppes, phrygana (garrigue), pseudomacchia, plant formations on rocks and a few floodplain formations or even mires. Due to the peculiar climatic conditions, typically rich subalpine and alpine meadows do not occur in Zuvand, and the subnival and nival belts are also not reached. Those meadows that do occur have a steppe to grass-steppe character rather than a true meadow character. Green valleys with cottonwood forests, clear brooks and settlements contrast very sharply with the dry surrounding landscape. Along the tributaries of the Konjavuchay brook in the Lerik Rayon, a couple of villages are scattered in the valleys. Houses and stables are built on the upper terrace, surrounded by vegetable gardens and orchards with apricot, cherry and apple trees. Orchards frequently extend between the villages and are irrigated by small feeders. They are surrounded by rows of planted Populus nigra var. italica with narrow crowns, which are planted along the edges of the feeders to stabilise their walls. The feeders originate from small, simple dams above the plantations. Several creeks originate in the Zuvand region and form an intensive network, and there is a high density of little springs.
Climate: The climatic conditions of this area are similar to the climate of northern Iran and differ sharply from the climate of the Talish Mountain forest zone. The area is part of the climatic region of Azerbaijan characterised by dry semi-deserts and dry steppes with hot summers and cold winters.
Fig. 39: Spatial alteration/depletion of forest surrounding the village Gegiran, located in the Talish Mountains. Partly supervised classification, based on Landsat 7 imagery.
Compared to the lower
altitudes, there is a sharp differentiation, and continental climatic effects prevail. Under certain weather conditions, humid air masses and moisture from the Caspian Sea pass the rock escarpments surrounding the highland and cause atmospheric precipitation. The hottest month is July; the coldest are December and January, when the temperature is below 0 °C. Annual precipitation lies between 450 and 650 mm. It mostly rains in autumn and winter, thus little water is available during the vegetation period.
Soil: The substrate of the Zuvand region is in large parts of volcanogenic sedimentary origin. On the steep slopes and along the numerous rocky outcrops where tragacanthic vegetation occurs, soil is poorly developed. The light and initial soil formations have a rich skeleton and are subject to intensive erosion. Chestnut-coloured and light chestnut-coloured mountainous soil and mountain meadow soil occur in flat areas. Especially the meadow soils are rather productive and partly used for hay-making. The humus content is rather poor due to the climatic conditions.
Vegetation: Because of the low rainfall during the vegetation period, Zuvand's vegetation differs markedly from that of the rest of the Talish. Many of its plants have special peculiarities to withstand long periods of drought. Most species of milk vetch (Astragalus) and prickly thrift (Acantholimon) form compact, spiky cushions, while others, e.g. bulbous plants, grow only during the short, moist spring. On the slopes and rocky areas the typical Mediterranean-Anatolian and Levant xerophytic expression of vegetation can be found. Pseudomacchia/shiblyak (cf. Araxov et al. 2006) dominates the areas around the villages and forms an intermediate stage between forests with Fagus orientalis and Quercus macranthera and phrygana vegetation. The latter, with its low-growing thorny, cushion-like bushes, is prevalent in the higher, even drier reaches of Zuvand.
Phrygana vegetation/dry slopes: The phrygana vegetation type covers large parts of the mountainous, rocky zone at elevations between 1500 and 1800 m a.s.l. and dominates on southern/south-eastern slopes. This vegetation type is characterised by three groups of plants that all form thorny cushions. The dominating group among them are tragacanth bushes, predominantly Astragalus meyerii, A. persicus, A. pycnophyllus, and A. aureus. The second group of plants that form cushion-like formations is the Acantholimon group, with Acantholimon hohenackeri resembling a spiny tragacanth. The third group is the Onobrychis group, with Onobrychis cornuta being the dominant species. Although occurring naturally under certain conditions, the phrygana formations are supported by intensive land use and often form a secondary formation, particularly in response to heavy grazing. Rosa ibericus, Jasminum fruticans, and Berberis vulgaris are often associated with the various Astragalus communities. Phrygana formations are peculiar to the area but form transitional stages between forested shiblyak formations with Ilex hyrcana, Cotoneaster multiflora, Berberis densiflora, and various Prunus sp. and Rosa sp., and highland steppes.
Steppe vegetation: Various species of Artemisia and Allium (18 species) as well as Thymus trautvetteri, Carex humilis, Phleum phleoides, Stipa capillata and Festuca valesiaca (along with an abundance of other grasses) are typical for these meadows and sites with favourable conditions close to the Iranian border. At elevations between 1600 and 2500 m a.s.l. these formations, rich in geophytes, are prevalent.
Floodplain, creeks: Along the numerous small creeks and villages in the highlands of Zuvand, park-like stands of wild Populus nigra and Salix alba are in good condition and offer shade to species-rich wet meadows. Other willow species (Salix purpurea, S. x rubens, S. caprea), Hippophae rhamnoides and some Crataegus rhipidophylla also grow on the floodplains. They are associated with other riparian plant species such as Myricaria germanica, Calamagrostis pseudophragmites and Epilobium hirsutum. Wet meadows at the edge of the floodplains sometimes grow on peat layers up to 10 cm thick over loamy sands. They can be characterised as initial surface flow mires, because water seeps out at the footslopes and flows down through the meadows. Sedges (Carex distans, C. panicea, C. nigra), Festuca arundinacea and Eleocharis uniglumis are the species with the highest abundance.
Photo 52: Acantholimon spec. cushions at Zuvand (S. Schmidt)
Mires: Due to the limited availability of water as well as the limited water retention potential of the landscape, mires are not at all common in the Zuvand region. A spring mire at the foot of Mt. Krakend (38.77040 N; 48.27134 E), located at an altitude of 2090 m a.s.l., was investigated. The mire is located in the upper part of a smooth, north-west-inclining valley. Open rock formations of volcanogenic stone (Maxioaiiiv 1963) occur at the slopes, while the valley itself is filled with denuded clays and loams. Due to the regional climate, xerophytic tragacanthic Astragalus species cover the catchment of the mire. A wet meadow with an extension of approximately five hectares lies at the bottom of the valley. Festuca rubra, Vicia cracca, Anthriscus nemorosa and Papaver orientalis are the most dominant plants of this meadow community. The mire is situated in the middle of the meadow, has an extension of about 40 x 50 m and is divided into three small terraces. Groundwater reaches the top of the surface, which bulges 50 cm above its surroundings, but the steep edges consist of dry and degraded peat. Coring of the spring dome showed a maximum thickness of the peat layer of about 90 cm. A soil profile at the centre of the dome revealed a 50 cm thick peat layer free of carbonates. In the upper 5 cm, the peat is dark black and consists of brown mosses and sedge roots that are strongly decomposed. The lower horizon has a homogenous structure and is made up of brown, slightly decomposed radicell peat. Species occurring at the centre of the spring mire are, among others, Carex nigra, Poa palustris, Veronica anagallis-aquatica, Catabrosa aquatica, Blysmus compressus, and Agrostis stolonifera. Plants at the edge of the mire included Stellaria graminea, Rumex crispus, Bupleurum boissieri, Nepeta teucriifolia, Silene talyschensis, Alopecurus pratense, and Thalictrum minus.
Fig. 40: Cross section through Zuvand creek valley
Birds: About 150 wild bird species have been recorded in the Zuvand region; 34 of them are listed in Annex I of Directive 79/409/EEC and seven by the IUCN (4 NT, 1 VU, 2 EN). Nine species are included in the Azerbaijan Red Data Book and 56 are of special European conservation concern (4 x SPEC 1, 14 x SPEC 2, and 40 x SPEC 3).
The montane region of Zuvand hosts several passerine species that do not occur anywhere else in Azerbaijan (except in Nakhchivan). These are Bimaculated Lark, Pale Rock Sparrow, Grey-necked Bunting, White-throated Robin, Trumpeter Finch and Upcher's Warbler. Therefore the region is of high regional importance.
Several different habitat types can be found, among which rocky slopes, mountain semi-desert and narrow strips of floodplain are the most dominant. The rocks are inhabited by Chukar, Finsch's Wheatear, Eastern Rock Nuthatch, Grey-necked Bunting, Rock Thrush and Blue Rock Thrush. In the canyon south of Lerik, Golden Eagles have been found nesting on a steep cliff. Based on several breeding-season observations, Peregrine Falcons and Egyptian Vultures (EN) are also assumed to be nesting here.
In the semi-desert areas, Bimaculated Lark, Shore Lark, Woodlark and Tawny Pipit are the typical breeding bird species. Grey Partridges are rather rare in this open terrain.
Compared with the dry surroundings, the floodplains, gardens and orchards along the small rivers are much richer in birdlife and attractive to many species. Typical breeding birds are Syrian and Green Woodpecker, Scops Owl, Golden Oriole, Lesser Grey Shrike and several warblers and tits. During migration, high numbers of passerines of many species rest and feed in these oases. Most numerous are Red-breasted Flycatchers and Sylvia, Phylloscopus and Acrocephalus warblers.
Fig. 41: Cross section through creek valley of Zuvand
During four days in September 2007, high numbers of birds were observed passing through. Among them were more than 50 Lesser Kestrels (VU), 20 Pallid Harriers (NT), several Steppe Eagles as well as Sparrowhawks, Honey Buzzards, and others. The endangered Saker Falcon (EN) has also been recorded here.
Several times, empty cartridges were found, and the locals probably hunt for Chukars and Grey Partridges.
Mammals: 45 species of mammals have been recorded in the Zuvand area, five of which are listed in Annex II and eleven in Annex IV of Directive 92/43/EEC. The Azerbaijan Red Book lists four species and the IUCN lists two (1 NT, 1 VU).
Several bat species can be found in Zuvand, among them the vulnerable Barbastella barbastellus. The region is rich in rodents and also in large carnivores. According to reports by local scientists, the Striped Hyena has been observed several times in the last decade. It appears that this region is the only refuge for this species in the country, apart from the Mingachevir area in north-western Azerbaijan. Along the edges of the Zuvand region, Brown Bears, Wolves and Lynx occur down from the forest zone, while Red Fox, Golden Jackal and Badger are widespread all over the area.
Amphibians and Reptiles: Two species of amphibians and 18 species of reptiles have been recorded in Zuvand. One of them is listed in Annex II and seven are listed in Annex IV of Directive 92/43/EEC. The Azerbaijan Red Book includes three species and the IUCN lists one.
The rocky slopes and the semi-deserts are inhabited by several lizards, such as Eremias arguta, E. strauchi and Ophisops elegans, the agama species Agama ruderata and Stellio caucasica, and several snakes. Of special interest is the population of Testudo graeca (VU).
Along the rivers and in ponds in the entire region, Rana ridibunda and Bufo viridis are abundant, and Natrix natrix and N. tesselata can also be found.
Map 29: Proposed corridor/connection between Zuvand Zakaznik and Hirkan National Park
Human influence: The human population of Zuvand is not large; there are no more than 20 settlements, mainly placed along canyons on riverbanks. Animal husbandry is the main occupation of the locals. No protective measures are in place, and hunting is common. The area was designated as a Zakaznik, specifically to serve as a game reserve. The entire highland is used to some extent for grazing; cultivation and sustainable use of trees is typical for this region. However, contrary to practices in settlements of the forest belt at lower altitudes, the land use appears much less intensive and more sustainable.
Significance & protection: Due to its uniqueness among Azerbaijan's landscape complexes, and with its particular species composition, the entire region of the Zuvand uplands has a very high protection value.
Photo 53: Rock Sparrow (Petronia petronia) (H. Müller)
Furthermore, Zuvand lies on an important bird migration route for raptors. Many birds of prey use the canyon south of Lerik to pass the barrier of the Talish Mountain range. Among them are several species of international conservation concern.
If nothing else, at least its regional importance for many bird and reptile species that do not occur anywhere else in Azerbaijan makes this area worthy of protection. The current status, however, is somewhat contradictory to a protection regime, as the existing Zakaznik (covering 15,000 ha) was established as a game reserve.
The entire habitat has closer links to the Iranian highlands than to any other ecosystem in Azerbaijan, rendering the conservation of this natural heritage especially important. Tragacanthic vegetation also occurs in the Greater Caucasus and other regions of Azerbaijan. However, there it is often of secondary character and far less diverse. The extension and condition of Acantholimon is unique in the country, as is the geological background with its volcanic sediments.
Yet additional research is still necessary to fully assess the regional biodiversity, in particular the floristic diversity. In addition, general awareness of the natural singularity of this region needs to be raised among the inhabitants, especially when taking into account that the region might still hold a remnant population of the Striped Hyena.
A principal connection between the xerophytic habitats of Zuvand and the deciduous forest of the Hirkan National Park is strongly advised. A potential reserve could be close to the Iranian border. Land use conflicts can be expected to be negligible. The existing State Nature Sanctuary (IUCN IV) needs to be upgraded to a higher level of protection. However, as there is land use in the area and small settlements regularly occur, the establishment of a Zapovednik (IUCN cat. I) does not seem appropriate. As a solution, an extension of the Hirkan National Park is advisable. Certain areas of Zuvand should become a separate core zone, and a development zone should provide buffer functions and could harbour these small settlements.
PART THREE
Environmental policy and
legislative background in Azerbaijan
3.1. Is Azerbaijan ready for participation
in Europe's protected area network?
3.1.1. State organization and
structure
In 2002, after about ten years of independence, Azerbaijan continued to describe itself as a nation in transition to democracy. This process is still going on today. The basic political and legal parameters for the institutionalisation of democracy have been established and are being refined and enacted. This process involves dismantling institutions, revising laws and defining new ones to bolster an open, market-oriented society.
While the environment is protected by law and pollution is controlled by regulations, in reality concern for the environment has been secondary to economic development. It is therefore important that environmental legislation and management be given a higher priority to meet the future needs of Azerbaijan.
There are clear signs that priorities are changing and more attention is given to the environment: radical institutional change has brought about the MENR, which, despite serious obstacles, has been able to take the lead and push the environment higher on the list of national priorities for action. It is very important for the MENR as well as for Azerbaijan's environment not to lose that momentum. Of course, Azerbaijan's ambitions will be kept within the limits of its political and financial possibilities; yet these possibilities have to be reassessed, and they will most likely grow over time.
When implementing the recommendations given below, the MENR should use the potential of international organisations that have prepared many internationally sound studies and background analyses applicable to Azerbaijan as well. Also, the potential of cooperation under the "Environment for Europe" and other processes is not fully utilised.
Azerbaijan has been active in formulating policies for the environment, for sustainable development, and for fighting poverty and supporting economic development. Within the first two years of its establishment, the MENR prepared four national programmes, two of which were approved by Presidential Decree in February 2003, and their implementation has been discussed with other ministries. The other two programmes have been submitted to the Cabinet of Ministers. However, the relationship among these programmes and their relative priority is not always clear, and there is not yet a plan for their monitoring, review and revision. In addition, the Ministry of Ecology and Natural Resources is the main body that initiates environment-related activities. This is, however, impossible without good coordination among all government institutions, integration of the environment into other sectoral policies and plans, and provision of adequate funding. The environmental planning process would benefit from a more consolidated and rationalised framework that also addresses implementation.
3.1.2. Policy Start, Conception and Development 15

Between 1995 and 1998 the government of Azerbaijan made an initial attempt to formulate an environmental policy. At that time this was a positive approach, since in Azerbaijan priority is usually given to the exploitation of the country's oil and gas deposits, and to a much lesser extent to the discussion of the ecological consequences.

15 In the following we refer to and quote extensively from the UN ECE Environmental Performance Review Azerbaijan, Series No. 19, 2004. For an in-depth analysis it is recommended to consult this publication (attached).
Right after the inception of the National Environmental Action Plan (NEAP) of Azerbaijan, the first task was to describe the ecological situation. Thereafter it became apparent that the most urgent objectives were to overcome a whole range of technical disasters and chemical pollution affecting both the population and the biosphere. A closer look at the action plan shows that ecological fields of political interest were stressed very clearly in many cases:
• Loss of fertile agricultural land through erosion, salinisation, pollution with heavy metals and chemicals, and deteriorating irrigation systems,
• Threats to protected areas leading to a loss of biodiversity,
• Loss of forest cover, mainly in war-affected areas.
The NEAP puts forward a list of environmental priorities, setting 32 objectives grouped in five categories, including forestry, land and biodiversity.
Some of the NEAP policy elements have been achieved, particularly with regard to new legislation, but many of the implementing regulations and by-laws are still lacking. The main reasons are financial problems and the lack of clearly defined priorities.
The first NEAP of Azerbaijan had a very positive effect on the development of environmental and natural resource protection, thus proving the value of the NEAP as a policy instrument.16
Following this first political approach, a steady and consistent national and international policy development in Azerbaijan has become apparent.
3.1.2.1. National progress
The national progress in implementing nature conservation declined over the last two to three years. The implementation and adaptation of results gained in several workshops and seminars (in Azerbaijan as well as in Germany), supported by BMU, BfN, NATO, international NGOs as well as different universities, did not take place on a considerable scale and therefore has not proven sustainable.

16 As above: UN ECE Report Environmental Performance Review Azerbaijan. Most of the text is taken from there and substantially shortened.
A State Programme on Poverty Reduction and Economic Development for 2003-2005 was developed and put into effect; it is based on the assumption that economic development that upsets the environmental balance cannot be sustainable. It obliges the government to promote balanced growth and to bring about improvements in some of the key economic sectors: to improve the investment climate, to increase access to credit among businesses and entrepreneurs, to develop the infrastructure, to encourage small and medium enterprises, to develop the regions and agriculture, to improve the environment, to reform energy generation and distribution, and to promote tourism.
The list is large and ambitious. Although it closely approaches the NEAP in the fine-tuning of actions and in its language, it mentions the natural environment rather briefly and in general terms ("Improving the management of the country's natural resources").
The programme has been sharply criticised by the assessing units: "The Programme, however, fails to set priorities, nor does it provide an assessment of costs and benefits."17
Parallel to the above-mentioned programme, in 2002 the MENR prepared the National Programme on Environmentally Sustainable Socio-economic Development. It was approved within the National Programmes on Ecology (18 February 2003).
The programme covers the environmental aspects of the country's overall development strategy. It determines the main areas of sustainable development and includes a plan of action for 2003-2010 to address the initial phase of the resolution of the current problems. The programme is meant to cover a set of sustainable development issues, including, for instance, the environment, industry, agriculture and tourism, and education, science and culture. The programme should have been a good strategic document on the national level and a clear guide for the Ministry for these seven years.18
Furthermore, a National Programme for the Restoration and Expansion of Forests was developed in 2003. Along with the National Programme on Environmentally Sustainable Socio-economic Development, presidential decree 1152, "Approving the National Programmes on Ecology", also endorsed the National Programme for the Restoration and Expansion of Forests. It lists activities in ten subsectors, along with indicative data on implementation, responsible institutions, financial sources and performance indicators. This programme represents one of the few examples of a reasonably descriptive sector plan in Azerbaijan.

17 UN ECE Report Environmental Performance Review Azerbaijan.
18 Cf. UN ECE Report Environmental Performance Review Azerbaijan, with a slightly critical note as far as cost estimations and measures for financing are concerned.
More recently, two important policy papers deserve special mention among other policy announcements during the past few years: the National Capacity Self Assessment for Global Environment Management in Azerbaijan (Baku 2005) and the National Action Plan on Strengthening Capacity to Respond to Challenges of Biodiversity Conservation, Climate Change and Desertification/Land Degradation (2006-2015).
Both papers are open, clear, and stringent, and were developed with specific objectives in mind. The Self Assessment has the goal of examining the capacity question in terms of individual, institutional and systemic capacity in environmental management. The National Action Plan obviously refers to the diction of the above-mentioned assessment paper and determines in a rather concrete way the political priorities for the years ahead. As far as the action plan itself is concerned, it refers to the projects specifically and allocates the respective financial means. The National Action Plan places its priorities very decisively on the two broad fields of public information and forest management, whereas other activities are covered rather broadly and, as a result, not concretely enough. No clear measures are mentioned concerning the implementation of the action plan and the review of its results.
3.1.2.2. International progress
The most formal start of Azerbaijan's intention to become an international player is its Partnership and Cooperation Agreement (PCA) with the European Communities (1999).
The agreement proclaims the wish of the involved parties to establish close cooperation in the area of environmental protection, taking into account the interdependence existing between the parties in this field. The stated objectives are consistent with both the NEAP and the National Programme on Environmentally Sustainable Socio-economic Development (see above).
More importantly, paragraph 3 mentions, among others, two strategic objectives, namely:
• Improving laws to European Community standards;
• Developing strategies, particularly with regard to global and climatic issues and to sustainable development.
The first of these two defines the harmonisation of Azerbaijani environmental legislation with that of the European Union. If enforced properly, this clause will result in a far-reaching overhaul of the whole system of environmental protection in Azerbaijan, a long-term objective.19 This clause is dealt with in detail in the following chapters.
The second remarkable landmark for an international role of Azerbaijan was not a big political event, but the starting point of a Caucasian view on the common ecology: the "Biodiversity of the Caucasus Ecoregion, An Analysis of Biodiversity and Current Threats and Initial Investment Portfolio" (2001).20 There is no doubt that this cooperation of specialists from Armenia, Azerbaijan, Georgia and the Russian Federation under the leadership of the WWF has led to a new way of ecological thinking, and the high-ranked results of this cooperation can now be seen in the whole Caucasus region. This statement applies especially to the area-oriented approach of this study. All attempts to structure biodiversity and put nature protection into a regional and then sub-regional approach have their starting point here.
In principle, it can be said that Azerbaijan and the MENR were very active in fulfilling international requirements, documentation, and strategic planning in the first years of the ministry's existence. Unfortunately, the pace of international cooperation has slowed down within the last three years, although the MENR still seems very active.

19 UN ECE Report Environmental Performance Review Azerbaijan. Later, when we discuss a kind of concrete law harmonisation between the EU and Azerbaijan, we will see how important the politically binding attempt of both sides in 1999 was.
20 Editors: Krever et al. 2001
However, recent tendencies show back stepping in re-
gard to international relations (apart from economic),
which more or less (by incident) occured with the open-
ing of the BTC pipeline in 2006. Contradictionary,
while MENR expressed the self-capacity of Azerbaijan
to nnance nature conservation but the need for interna-
tional methodological support and knowledge transfer,
these last two points seem either a) not requested, b) not
accepted or at the end are not sustainable.
For example, the Caspian Environmental Programme is far from reaching its optimum output, the MENR still does not participate actively in the Caucasus Protected Area Trust Fund, the establishment of Samur-Yalama National Park under the Caucasus Initiative still has not taken place, and two CIM experts left the MENR after three and two years respectively without their resources being used sustainably.
Today, the environmental policy, its emphasis and direction, its ranking and decision-making process are hard to follow from the outside. Support from international development projects, initiatives or bilateral agreements is hardly asked for by the MENR.
3.1.3. Administrative law in Azerbaijan
No law and no administrative action can be handled substantively, correctly and democratically without binding rules for the processes, first during the administrative phase and later in court.
In the course of 2008, two laws were supposed to be enacted in Azerbaijan that will comprehensively alter the administrative law of this country and establish a new basis for the relationship between the state and its citizens:
• The Law on Administrative Procedure
  This law lays down the principles and the course of the administrative process by establishing concrete rules on the obligations of the state, represented by its administrative authorities, and the citizens' rights.
• The Law on Court Proceedings in Administrative Matters
  This law lays out the conditions for and the content of court proceedings against administrative measures.[21]
This field of law will be crucial for the development of the Republic of Azerbaijan into a democratic country governed by the rule of law.[22]
Currently, the procedural rules of the administrative authorities are laid down only fragmentarily in different provisions of various laws and in numerous normative legal acts. Since a uniform code does not exist, the provisions applied by the administrative authorities vary substantially.
Up to now, each government body independently defines the rules applicable to its relationship with the citizens. In most cases, these provisions do not deal with the obligations of these bodies, but rather those of the citizens. In addition, the rules are often changed. The instructions, decrees and other documents that set out these rules are hardly ever published, meaning that citizens cannot inform themselves about them. It is usually impossible for a citizen to obtain application forms or answers to questions on the telephone or by other means of communication. Rather, he must go to the administrative authority in person and hope that one of the employees is able and willing to answer his questions. These conditions contradict the principles of a state governed by the rule of law as laid down in the Constitution of Azerbaijan (see the preamble and Article 7 of the Constitution of Azerbaijan).[23]
To this end, the new administrative law first lays down the state's obligations towards its citizens and their rights against the state. It mirrors the principles set up in the Constitution and specifies them, thereby facilitating their practical application. For example, Article 25 of the Constitution of Azerbaijan lays down the principle of equal treatment in general terms, whereas Article 12 of the Law on Administrative Procedure sets out in detail the meaning of equal treatment, i.e., that administrative authorities must treat identical cases identically and different cases differently.

[21] In their publication "Introduction to the new administrative law of Azerbaijan" by HERRMANN & HYE-KNUDSEN, Baku 2006 (in Azeri, German and English), the authors spoke of a formal decision by the Parliament to be taken in 2007. By the time of writing this report, the two bills were nevertheless still pending, whilst being discussed in Parliament.
[22] The provisions of the Law on Administrative Procedure are discussed in detail in a commentary in the Azeri language written by KARIMOV and VALIYEVA (published by GTZ and CILC). The publication of a commentary on the Law on Court Proceedings in Administrative Matters is planned for 2007.
[23] HERRMANN & HYE-KNUDSEN 2006

Part Three: Environmental policy and legislative background in Azerbaijan
The new administrative laws (both the Law on Administrative Procedure and the Law on Court Proceedings in Administrative Matters) have a direct relationship to the law of the environment and especially to the law of nature protection. A whole slew of activities, such as the licensing process, potential for protesting, requesting information, and suing the government, must be put on a transparent and lawful basis.[24]
In the immediate future, however, it is crucial that Azerbaijan introduces substantive laws in the different areas of administration, for example construction and police laws, which set out specifically the rights and duties of the state and the citizens. The Law on Administrative Procedure merely regulates the formal conditions for an administrative process that is in accordance with rule-of-law principles. The law can, however, only be effective in connection with substantive provisions in the many areas in which administrative measures affect the citizens' lives. Otherwise, the provisions of the Law on Administrative Procedure will remain an empty shell without any practical benefit. The same holds true for the Law on Court Proceedings in Administrative Matters, for judicial review of administrative actions only makes sense if a substantive law is available to the judges as the legal standard that must be applied.[25]
3.1.4. Environmental law in Azerbaijan
Nature protection in Azerbaijan received a substantial basis in the law only after the reforms of 1992. Under the authority of the Soviet Union there was only a State Committee responsible for questions of ecology, nature and natural resources. This Committee was replaced by the Ministry of Ecology and Natural Resources.
The weak degree of organisation during the Soviet era illustrates the political rank of the policy segment "nature protection". This also applies to the norms in this field. Nature protection in a broader sense was only brought into norms in and for Azerbaijan towards the 1990s. It is remarkable for this period, but also typical for most of the former members of the Soviet Union, that substantive, systematic but also mostly very general questions were addressed in the law.

[24] For further information on (e.g.) the principles, the administrative processes, the administrative acts, the appeal and court proceedings see HERRMANN & HYE-KNUDSEN (2006), page 106 ff.
[25] HERRMANN & HYE-KNUDSEN (2006), page 29
As in other countries in transformation, a higher degree of detail can only be found when looking at decrees and other norms for carrying out the regulations. This still holds true for many existing regulations in Azerbaijan. There are a great number of norms (and laws) in Azerbaijan, most of them available on the MENR homepage by now. Others are very difficult to find and access.
In particular, the national legislation on the conservation of natural habitats and of wild flora and fauna consists of several laws, such as[26]:
• Law on the Protection of the Environment,
• Law on the Animal World,
• Law on Specially Protected Natural Areas and Sites,
• Law on Phytosanitary Control,
• The Forestry Code.
There are also a number of secondary legislative acts putting in concrete terms the general rules laid down in the above-mentioned laws, for instance:
• Resolutions of the Cabinet of Ministers on the statute of the Red Book,
• List of wild animals permitting natural and legal persons to keep and breed them in unfree or semi-free conditions and the requirements of their keeping, conservation and exploitation,
• Decree of the President of the Republic of Azerbaijan on Approval of the General Statute of the State Natural and Biosphere Preserves of the Republic of Azerbaijan, or the
• General Statute of National Parks of the Republic of Azerbaijan.
By analysing all available laws with relevance to ecology and nature conservation, the following conclusions can be drawn:
• Azerbaijan has steadily improved its system of environmental protection. The policy, legal and institutional framework that was inherited from the former Soviet Union was not designed to operate within a market economy.
• There is a high concentration of environmental laws at the first normative level in Azerbaijan. Much progress has been made, particularly in updating the environmental legal framework.
• The Azerbaijani norms very often have a high degree of generality; they are programmatic regulations rather than guidelines for action. In the context of the Constitution, a couple of rules show deficits.

[26] Compare the list of laws attached to this report.
• Other institutional reforms are on their way. In this regard, particular attention needs to be given to the organization and effectiveness of the implementation, especially within the inspecting authorities.
• Finally, a number of good policies for the environment, poverty prevention and sustainable development have been developed, but their relationship remains to be clarified.
Very often, the level of decrees gives the impression that individual cases are often only decided by the President's Office or the Council of Ministers.
Remarkably, especially in the field of the environment there is no planning by law. We did not find binding, superior planning or even landscape planning in the existing law.
Despite missing many practical components, the environmental law contains an environmental impact assessment; it forces applicants and officials to do substantial research and report all facts in the course of issuing permits to enterprises with an environmental impact.
Based on constitutional principles, such as:
• damage to the environment is forbidden, since the future of generations to come must be saved,
• everyone has the right to live in a healthy environment,
• the country owns all natural resources (e.g. oil, gas),
we find the following legal structure in Azerbaijan:
• Parliamentary legislation that establishes the state regulation of strictly protected natural areas, and the protection and use of the environment and of nature/biodiversity;
• Presidential decrees and orders and the resolutions of the Cabinet of Ministers that ensure the implementation of the major provisions of the laws;
• By-laws of the executive authorities (ministries and committees) that specify the activities to implement the laws;
• International agreements and conventions in the field of nature and biodiversity, to which the Republic is a signatory.[27]
3.1.5. Environmental Impact Assessment (EIA)
It is of central importance how far the protection status for a new protected area extends and what the consequences are. There are a number of instruments for managing conflicts in Europe. For the past couple of years they have been part of the FFH Guideline. Therefore, the FFH Guideline demands an impact assessment in this context.
The Scoreboard Report includes a brief remark on this important element of legislation: "The Azeri environmental legislation does not stipulate the requirements of plan and project assessment, in particular of the implications for the area in view of the area's conservation objectives."
An analysis of the existing legal environmental impact assessment procedures of Azerbaijan revealed the following regulations:
The Law on Environmental Protection of Azerbaijan defines ecological expertise as the identification of conformity of the environmental conditions with qualitative standards and ecological requirements in order to identify, prevent and forecast the possible negative impact of an economic activity on the environment and related consequences.
The State Ecological Expertise (SEE) applies to a very broad range of products and services, and even to their import (art. 52). The scope of SEE (art. 54) covers seven different applications. Of these, only three are fully consistent with the general concept of EIA. They are:
• Documentation relating to the development of new equipment, technologies, materials and substances, including those imported from abroad;
• Feasibility studies (calculations), construction projects (reconstruction, expansion, new technical equipment) and closing down of structures and facilities, environmental impact assessment (EIA) documentation;
• The evolution of environmental conditions as a result of economic activity or emergencies.

[27] Country Study on Biodiversity and First National Report of the Republic of Azerbaijan, Chapter 6, under 61
Neither this law nor any other legal document gives any threshold values for activities that would require (or be exempt from) SEE. The MENR is the responsible authority for SEE.
EIA, as a part of SEE, is in fact only required for development activities. However, the EIA legislation does not provide specific screening project categories. Consequently, all development proposals submitted to the relevant authorities for approval are subject to an EIA. The regional departments of the MENR receive applications and ensure that adequate information has been provided. Where an EIA is required, documentation is sent to the head office of the MENR for processing due to a lack of capacity in local offices. For projects requiring a full EIA, the MENR organizes and chairs a special scoping meeting of representatives of the applicant, invited experts and invited members of the public. There are no firm requirements on group composition; the MENR has access to a pool of experts and composes each commission based on case-specific considerations.
The MENR is responsible for verifying the accuracy and reliability of a proponent's monitoring results. If disagreement persists, the proponent has the option of taking the matter to the courts. Enforcement and compliance are the responsibility of the general inspection system.
In general, the EIA is established and works in Azerbaijan. However, the lack of screening categories and fixed scoping requirements is a problem.
There are also other problems. Azerbaijani legislation requires project documents and EIA studies to be coordinated with other relevant institutions, but does not specify the form, purpose and time frame of this coordination. In evaluating alternatives, only technological alternatives need to be considered.
The general public and non-governmental organizations have the right to organize public ecological reviews for proposed projects. So far, this right has not been used by any NGOs, possibly due to time and other resource constraints. Public participation is required for all stages of EIA and SEE.
As noted in the section on SEE and EIA, article 54 of the Law on Environmental Protection effectively calls for Strategic Environmental Assessment (SEA) without mentioning it explicitly. SEA has been formally adopted in few countries in the region, and it is even more rarely mandatory. In this regard, Azerbaijani legislation seems quite progressive, but the reality is less optimistic. The SEA requirement of the Law on Environmental Protection is not supported by any sub-normative acts defining the procedures for its application or mechanisms for close cooperation between the Ministry of Ecology and Natural Resources and other State planning institutions. Not surprisingly, there have been no SEA applications.
3.2. Europe and Azerbaijan: the environmental policy relation
In order to gain and maintain the EU's support, Azerbaijan needs to integrate European values. The country has begun the process of establishing democratic freedoms. The European Union is concerned over the lack of respect for democratic values, the rule of law and fundamental rights in the country. The EU, through the consultative bodies established under the PCA, has attached particular importance to holding free and fair elections, the pluralism of political parties, and freedom of the media. Addressing these three areas under the PCA may be the key for Azerbaijan to strengthen not only its democracy but also its legislative framework, its legal institutions and the degree of compliance by those in power with the law.[28]
In Azerbaijan, the Technical Assistance to the Commonwealth of Independent States (TACIS) in the period 2002-2006 focused on continued support for institutional, legal and administrative reforms as well as on support in addressing the social consequences of transition. TACIS also provided essential assistance to the implementation of Azerbaijan's Poverty Reduction Strategy launched in 2003. The new Country Strategy Paper (CSP) 2007-2013 covers EC financial assistance to Azerbaijan under the new European Neighbourhood and Partnership Instrument (ENPI). It is accompanied by a new ENPI National Indicative Programme (NIP) for 2007-2010 whose main priorities are: (1) Democratisation, rule of law and fundamental freedoms; (2) Socio-economic reforms and legal approximation to the EU; (3) Energy and transport. Azerbaijan also participates in different regional and thematic programmes under the ENPI, such as the European Instrument for Democracy and Human Rights.

[28] C.P.M. WATERS (editor), The State of Law in the South Caucasus, 2005
There is a strong impression that environmental issues entered into the entire process of approximation only at a very late stage. Nevertheless, nature protection and, more specifically, protected areas are covered by the agreement.
3.2.1. The Partnership and Cooperation Agreement (PCA)
It is evident that the Republic of Azerbaijan is in a close relationship with Europe. In this context it is not essential how the cooperation between the two entities is organised: as informal neighbours or as formal membership in the European Union (EU).
To strengthen the bond between the EU and Azerbaijan, the two parties signed a formal Partnership and Cooperation Agreement (PCA) in April 1996, which took effect at the beginning of July 1999.
As far as environment and nature as policy elements are concerned, the PCA sets the following policy: "The Republic of Azerbaijan should endeavour to ensure that its legislation will be gradually made compatible with that of the Community" (Art. 43 PCA).
The intended process is called "approximation" and expresses, up to a certain degree, the clear tendency towards more than a neighbourly relationship. It is to be decided by the contract parties (EU and Azerbaijan) what kind of relationship they are striving for in the future, after the implementation of the PCA. The PCA formed the basis for Azerbaijan becoming a member of the European Neighbourhood Policy.
3.2.2. European Neighbourhood Policy: Azerbaijan
After the European countries had tremendous internal problems with the formal opening of EU membership to Turkey and other former members of the Soviet Union in Eastern Europe (following Bulgaria and Romania), the EU instituted a new policy concerning the Eastern European countries.
In June 2004, and based on the PCA, Azerbaijan (together with Armenia and Georgia) was included in the European Neighbourhood Policy at its request and following a recommendation made by the European Commission. The Commission was invited to report on progress made by each country with regard to political and economic reforms.
As a consequence, the European Commission recommended a significant intensification of relations with Azerbaijan through the development of an Action Plan under the European Neighbourhood Policy (ENP). This recommendation is based on the Commission's published Country Report, which provides a comprehensive overview of the political and economic situation in Azerbaijan and the state of its bilateral relations with the European Union. The ENP goes beyond the existing Partnership and Cooperation Agreement to offer the prospect of an increasingly close relationship with the EU, involving a significant degree of economic integration and a deepening of political cooperation.
Key objectives for the action plan include, among others:
• Implementation of effective reforms in the field of rule of law (judiciary, law enforcement agencies);
• Progress in poverty reduction, sustainable development and environmental protection;
• Progress in conflict resolution and enhanced regional cooperation.
With regard to the first issue, which features most prominently in the action plan, a team of specialists produced a Draft National Plan of Legal Approximation. In that context a Scoreboard Report on Environment, Exploitation and Utilization of Natural Resources was prepared. Both reports have been published.[29]
3.2.2.1. The Scoreboard Report
Article 43 of the PCA points out: "The Republic of Azerbaijan should endeavor to ensure that its legislation will be gradually made compatible with that of the Community." The approximation extends, among other areas, to the environment and the exploration and utilisation of natural resources.
Protection of the environment is one of the major challenges facing Europe. Therefore, the main objectives of the EU policy within the fields of environment and exploitation and utilisation of natural resources are:
• Preserving, protecting and improving the quality of the environment,
• Protecting human health,
• Prudent and rational utilisation of natural resources,
• Promoting measures at an international level to deal with regional or worldwide environmental problems.
The Scoreboard Report takes this European policy into account and tries to compare it to the status quo in Azerbaijan. The paper was prepared by two experts with in-depth knowledge of the European legislation. In essence, the report shows a list of deficits of the Azerbaijani legislation, in general and in detail.
As far as the general legislation is concerned, the Scoreboard Report has three main concerns:
• the Law on Obtaining of Environmental Information,
• the Law on Protecting the Environment,
• legislation on Integrated Pollution Prevention and Control.[30]
The argumentation on specific legislation is concentrated on:

[29] 1. Draft National Programme of Legal Approximation, Legislation with EU acquis, funded by the EU, implemented by SOFREGO, 2006-2009, Baku, 2006. 2. MAMMADOV & APRUZZI: Environment, Exploitation and Utilization of Natural Resources, Scoreboard Paper on Approximation of Azerbaijani Legislation to EU Law, Baku 2004.
[30] The Integrated Pollution Prevention and Control is meant for the overall environment of Azerbaijan. Especially for the rather technical environment it is of the greatest importance. For our study, in most cases technical items nevertheless have no specific attraction.
• air pollution,
• waste management,
• chemical and industrial risks and biotechnology,
• nature protection, and
• noise management.
3.2.2.2. Nature Protection within the Scoreboard Report
Within the findings of the Scoreboard Report, the authors qualified the legal work on nature in Azerbaijan as a "low level of approximation" to the European legislation. Nevertheless, this level is actually higher than for most of the other described environmental fields, as most of these are either not approximated or only show a very low level of approximation. However, no detailed comparison of both norm complexes was conducted, and this was obviously not intended within this first analysis.
Nevertheless, the report presents a very good basic paper with suitable recommendations; the detailed work of comparing specific norms, law by law and paragraph by paragraph, has still to be done.
Azerbaijan has adopted several laws, decrees and resolutions in the field of environmental protection and the exploitation and utilisation of natural resources. Their analysis shows that in some cases the provisions are not in compliance with the relevant international and European rules. Moreover, some basic rules of environmental protection laid down by the European Union are not represented in the Azerbaijani legislation at all.[31]
In particular, the low level of approximation of the relevant Azerbaijani legislation to the Council Directives 79/409/EEC[32] and 92/43/EEC[33] must be emphasized.
It does not provide for some specific protection requirements provided for in these Directives, for instance, criteria for selecting sites eligible for identification as sites of national importance and designation as special areas of conservation, the prohibition of the disturbance of certain species, requirements of plan and project assessment, in particular of their implications for the area in view of the area's conservation objectives, etc.
Due to this, the recommendations of the Scoreboard Report read as follows[34]:

[31] MAMMADOV & APRUZZI 2004
[32] Council Directive 79/409/EEC of 2 April 1979 on the conservation of wild birds.
[33] Council Directive 92/43/EEC of 21 May 1992 on the conservation of natural habitats and of wild fauna and flora.
The recommendations below are provided with a view towards fostering the approximation process with the EC rules on the environmental protection and exploitation and utilisation of natural resources. The closest possible approximation to relevant EC rules is an indispensable and important condition for strengthening the economic links with the European Union, as stipulated in Article 43 of the PCA.
General legislation: Amending the Law on the Protection of Environment and adopting relevant secondary legislation concerning the assessment of the impact and effects of certain public and private projects on the environment, whose main goal is to ensure that the authority giving the primary consent for a particular project makes its decisions with an awareness of any likely significant effects on the environment. The amending provisions should lay out a procedure that must be followed for certain types of projects before they can receive approval. This procedure, known as Environmental Impact Assessment (EIA), is a means of drawing together, in a systematic way, an assessment of a project's expected significant environmental effects. This helps to ensure that the importance of the predicted effects, and the scope for reducing them, are properly understood by the public and the responsible authorities before a decision is made. Lists of project types that always require an EIA and project types that should require an EIA whenever they are likely to have significant effects on the environment need to be drawn up. All EIA procedure stages should be determined and specified as required. During the preparation of the above-mentioned amendments the requirements of Directive 85/337/EEC[35] should be taken into account.
Nature protection legislation: It is recommended to take the following measures in this field:

[34] The following paragraphs are quoted from MAMMADOV & APRUZZI 2004.
[35] Council Directive 85/337/EEC of 27 June 1985 on the assessment of the effects of certain public and private projects on the environment.
• To adopt the appropriate mandatory rule on the conservation of wild birds. In general, this rule should provide for the protection, management and regulation of all bird species naturally living in the wild within the territory of Azerbaijan, including the eggs of these birds, their nests and their habitats. Moreover, it should regulate the exploitation of these species. Special measures for the protection of habitats should be adopted for certain bird species and migratory species of birds. The rule has to regulate the specific measures for the protection of all bird species. The preparation of this rule should be guided by the requirements of Directive 79/409/EEC.[36]
• To amend the legislation on the conservation of natural habitats and of wild flora and fauna. As to the requirement of establishing special areas of conservation, it should define the criteria for selecting sites eligible for identification as sites of national importance and designation as special areas of conservation. In regard to the general system of protection of certain species of flora and fauna, the Azerbaijani environmental legislation should stipulate the prohibition of disturbance of these species, and requirements of plan and project assessment, in particular of their implications for the area in view of the area's conservation objectives. Moreover, the protection of animal species by Azerbaijani environmental law should be brought in line with the relevant requirements of Directive 92/43/EEC.[37] A stricter system of protection of plants should be established.
The preparation of these amendments should be guided by the requirements of the above-mentioned directive.
3.2.3. European nature protection networks for Azerbaijan
A first step, and yet a very practical approach to an approximation to European nature conservation standards and their implementation, as well as the implementation of the respective legislation, is participation in existing nature protection networks. Several networks are suit-

[36] Council Directive 79/409/EEC of 2 April 1979 on the conservation of wild birds.
[37] Council Directive 92/43/EEC of 21 May 1992 on the conservation of natural habitats and of wild fauna and flora.
Table 11: Comparison of EU Directive 79/409/EEC with Azerbaijani law, taken from the Scoreboard Report

EU Directive: Council Directive 79/409/EEC of 2 April 1979 on the conservation of wild birds.

Azerbaijani law comparable to the EU Directive:
• Law of the Republic of Azerbaijan on the Protection of Environment 678-IQ, dated 08.06.1999
• Law of the Republic of Azerbaijan on the Animal Kingdom 675-IQ, dated 04.06.1999
• Resolution of the Cabinet of Ministers of the Republic of Azerbaijan on Approval of some Legal Acts connected with the Animal World 117 of 13.07.2000
• Resolution of the Cabinet of Ministers of the Republic of Azerbaijan on Approval of the Statute of the Red Book of the Republic of Azerbaijan 125 of 15.07.2000
• Resolution of the Cabinet of Ministers of the Republic of Azerbaijan on Approval of the List of Wild Animals Permitting Natural and Legal Persons to Keep and Breed them in Unfree or Semifree Conditions and of the Requirements of their Keeping, Conservation and Exploitation 86, dated 01.05.2001, of 20.04.2004
• Resolution of the Cabinet of Ministers of the Republic of Azerbaijan on Approval of some Legal Acts connected with Hunting 147, dated 30.09.2004

Comparison:
This Directive, as well as its amending acts, seeks to protect, manage and regulate all bird species naturally living in the wild within the European territory of the Member States, including the eggs of these birds, their nests and their habitats, as well as to regulate the exploitation of these species. According to it, the Member States are required to preserve, maintain and re-establish the habitats of the said birds as such, because of their ecological value. These obligations exist even before any reduction is observed in the number of birds or any risk of a protected species becoming extinct has materialized.
The Azerbaijani law concerning animal conservation regulates the protection of wild birds in a very general manner. It does not provide for some specific protection requirements laid down in Directive 79/409/EEC, which are important for effective preservation of the wild birds.
According to Azerbaijani legislation, only those bird species gain a special level of protection which are included in the Red Book, i.e. species in danger of extinction and species considered rare. This does not fully correspond to the requirements of the Directive in connection with the European Court of Justice's Case C-335/90 (Commission of the European Communities v. Kingdom of Spain).
Thus, for more effective bird protection it is recommended to adopt a separate mandatory rule on the conservation of wild birds, taking into account the requirements of the above-mentioned EC Directive concerning the protection, management and regulation of all bird species naturally living in the wild, including the eggs of these birds, their nests and their habitats, the exploitation of these species, special measures for the protection of habitats for certain bird species and migratory species of birds, and specific measures for the protection of all bird species.
Potential Analysis for Further Nature Conservation in Azerbaijan
EU-Directive:
Council Directive 92/43/EEC of 21 May 1992 on the conservation of natural habitats and of wild fauna and flora

Azerbaijan law comparable to EU directive:
- Law of the Republic of Azerbaijan on the Animal World 675-IQ, dated 04.06.1999
- Resolution of the Cabinet of Ministers of the Republic of Azerbaijan on Approval of some Legal Acts connected with the Animal World 117, dated 13.07.2000
- Resolution of the Cabinet of Ministers of the Republic of Azerbaijan on Approval of the Statute of the Red Book of the Republic of Azerbaijan 125, dated 15.07.2000
- Phytosanitary Control 102-IIIQ, dated 12.05.2006, dated 20.04.2004
- Resolution of the Cabinet of Ministers of the Republic of Azerbaijan on Approval of some Legal Acts connected with the Hunting 147, dated 30.09.2004

The Directive establishes a European ecological network comprising special areas of conservation in accordance with the provisions of the Directive, and special protection areas classified pursuant to Directive 79/409/EEC on the conservation of wild birds. It provides lists of natural habitat types of Community interest, animal and plant species of Community interest, animal and plant species in need of particularly strict protection, etc. It provides for a general system of protection for certain species of flora and fauna.

Despite its wide range, Azerbaijani legislation does not comply fully with the requirements of the Directive. As to the requirement of establishing special areas of conservation, it does not define the criteria for selecting sites eligible for identification as sites of national importance and designation as special areas of conservation.

As regards the general system of protection of certain species of flora and fauna, the Azerbaijani environmental legislation does not stipulate the prohibition of the disturbance of certain species, or requirements of plan and project assessment, in particular of their implications for the area in view of the area's conservation objectives. Moreover, the protection of animal species by Azerbaijani environmental law is far from being in line with the relevant requirements of the Directive.

Therefore, it is recommended to amend the relevant legislative acts taking into account the provisions of Directive 92/43/EEC concerning special areas of conservation, in particular the criteria for selecting sites eligible for identification as sites of national importance and designation as special areas of conservation, the prohibition of the disturbance of certain species, and plan and project assessment, in particular their implications for the area in view of the area's conservation objectives, etc.

Table 12: Comparison of EU directive 92/43/EEC with Azerbaijan law, taken from the Scoreboard report
able, although their usefulness for the European part is somewhat questionable. However, participation at least raises the topic of approximation and may eventually lead to the most important nature network in Europe, the NATURA 2000 network.
3.2.3.1. Emerald Network
Similar to the engagement of the EU in the Caucasus, the Council of Europe, with a substantial tradition in nature protection (and often in time- and money-consuming competition with the EU), is working on the environmental cooperation between its member countries. The legal basis for cooperation within the Council of Europe is the Convention on the Conservation of European Wildlife and Natural Habitats, which came into effect on June 1st, 1982 (Bern Convention).
On the basis of this convention, the Emerald Network of Areas of Special Conservation Interest (ASCIs) was launched in 1999. The network aims to harmonise the policy in protected areas and to help accession states to adapt ecological networks to EU requirements. It is to be set up in each contracting state or observer state according to the Bern Convention. Besides the EU, this means a number of other European countries as well as countries in Northern Africa. Until now, 21 pilot projects for the implementation of the Emerald Network have been organized in European and African countries. With regard to the Caucasus Region, pilot projects were set up for Georgia, Armenia, Azerbaijan, Turkey and Russia.

In Azerbaijan, a pilot project was started in 2005. Within the project, Azerbaijan and the Council of Europe established a Group of Experts for the setting up of the Emerald Network of Areas of Special Conservation Interest. This group issued a report in February 2006. The report was produced by a team of representatives of the Ministry of Ecology and Natural Resources, scientists from WWF Azerbaijan and the National Academy of Sciences. Unfortunately, the results are on a rather broad scale and do not evaluate the natural potential for the establishment of a PA network in great detail.
The denomination of ASCIs has to follow a certain procedure and conform with the framework of biogeographic regions adopted by the Standing Committee to the Bern Convention in 1997. This framework applies to the Emerald Network and the Natura 2000 Network as well. In Azerbaijan, the Expert Group for setting up the Emerald Network identified the Alpine and the Steppic region (Expert Group 2006). However, this concentration on only two habitat types is seen as the major shortcoming of this approach.
The expert group located five areas on the map of Azerbaijan that correspond to the requirements of the Emerald Network. The findings explicitly built on the national legislative framework and identify 21 types of endangered natural habitats besides the potential Emerald Network areas. Unfortunately, the latter are not described in detail and are not linked spatially to the priority conservation areas. The chosen sites are roughly described in the report and species lists (albeit incomplete) are given.

Possible Emerald Network Areas:
- Zangezur-Daridag Alpine Region
- Mingchevir-Turyanchay Steppic Region
- Zakatala-Ilisu Alpine Region
- Shahdag Alpine Region
- Hirkan Alpine Region
In Georgia, Azerbaijan's western neighbour, a Pilot Project was launched in 2002 (Expert Group 2008) in order to start the implementation of the Emerald Network under the responsibility of the Ministry of Environment and in cooperation with the Noah's Ark Center for the Recovery of Endangered Species (NACRES). A second-phase pilot project started in 2004. Georgia is divided into three biogeographical regions: Alpine, Black Sea and Continental. The national authorities have proposed to add an Anatolian region. The process of identifying the species and habitats for the designation of ASCIs in Georgia revealed a lack of information and the need to obtain more recent and credible data.

In Armenia, a pilot project on the implementation of the Emerald Network was launched in 2007 (Expert Group 2008). Further funding under the framework of the European Neighbourhood Policy may be possible.
3.2.3.2. The Pan-European Ecological Network (PEEN)
Another initiative of the Council of Europe is the Pan-European Ecological Network, which aims at the linking of core areas for protection through the restoration or preservation of corridors.

In 1995, the 3rd Ministerial Conference "Environment for Europe" endorsed the Pan-European Biological and Landscape Diversity Strategy (PEBLDS) and its main proposal, the setting up of the Pan-European Ecological Network. This decision resulted from the adoption of the United Nations Convention on Biological Diversity at the Rio Earth Summit. The principal aim of the strategy is to find a consistent response to the decline of biological and landscape diversity in Europe and to ensure the sustainability of the natural environment. Altogether, 55 countries endorsed the Pan-European Biological and Landscape Diversity Strategy in 1996. In 2003, the 5th Ministerial Conference "Environment for Europe" agreed to halt the loss of biodiversity at all levels by 2010.

The PEEN seeks to conserve ecosystems, habitats, species, their genetic diversity and landscapes of European importance.

It also represents a tool for the conciliation of socio-economic activities and the preservation of biological and landscape diversity. In this context, it intends to integrate biodiversity conservation and sustainability into the activities of all sectors, to improve the information on and awareness of biodiversity, to increase the public participation in conservation actions, and to assure adequate funding to implement the strategy.
The main targets of the action plan for the PEEN are:

- to involve the conventions and international instruments in the establishment of the PEEN (NATURA 2000, Bern Convention Emerald Network, Ramsar Convention, Bonn Convention, World Heritage Convention, European Diploma sites, etc.)
- to ensure that by 2008 the PEEN will give guidance to major national, regional and international land use and planning policies
- to identify and reflect all the constitutive elements of PEEN, and also show them on maps by 2006
- to conserve all core areas by 2008

At present there are no indications, neither from the European side nor from the Azerbaijani side, of contributing to this network. The authors agree that this action does not necessarily have priority.
Map 30: Biogeographic regions of Natura 2000 and the Emerald Network (Arctic, Boreal, Atlantic, Continental, Alpine, Pannonian, Mediterranean, Macaronesian, Anatolian, Steppic and Black Sea regions; insets: Azores, Canary Islands and Madeira), source: EEA (2008)
PART FOUR
Conclusions and Recommendations
4.1. Gap-analysis for the extension of the protected area system
Nature in Azerbaijan faces a significant threat. This is due to the continuing construction boom, the absence of norms and in part a situation that allows open access. Independent of the extension of the protected area network, the conservation system needs to be strengthened. To reach this goal, an increase of environmental and in particular conservation awareness is urgently needed. As a consequence, in the authors' view the most important challenge in the near future is to build a long-term and successful communications network and programme dedicated to raising the national awareness of biodiversity, nature conservation and the environment.
Furthermore, a re-assessment of the cooperation between Azerbaijan and Germany, in particular (as far as the authors can overview) of the German support for nature conservation in Azerbaijan, is currently needed. The present stagnation urgently needs to be broken up, as preserving nature in this hotspot of biodiversity is of utmost importance. Therefore, the political dialogue between Azerbaijan and Germany should be intensified again and an active cooperation re-initiated. This political and diplomatic investment has high priority.

Before this clarification and the political re-commitment of both sides towards joint cooperation, financial investment is not recommended at present. Azerbaijan is still behind schedule in fulfilling the joint programmes it was committed to, e.g. the CPAF and the Caucasus Initiative.
Independent of this, the survey revealed that there still exists a good potential for the establishment of protected areas in Azerbaijan and for the extension of the existing PA network.
The authors consider the greater Gobustan region as one of the most important areas without any spatial protection at present that would be worthy of protection in the future. The complexity and variance of different natural features in this area warrant special protection. Geological peculiarities (among them most impressively the mud volcanoes), the Goitred Gazelle, several threatened bird species such as the Sociable Lapwing and plants such as Ophrys caucasica contribute to the widespread value of this area. Furthermore, the landscape gradient and a traditional land use system are additional reasons for recommending the area as a biosphere reserve.
A highly recommended and worthwhile project would be the bridging of the existing gap between Hirkan National Park and the existing Zuvand Zakaznik by connecting those two areas. At the same time, the status of Zuvand Zakaznik should be upgraded. Since this region is unlike any other area in the country, it is particularly worthy of protection. Linking the highly diverse Hirkanian forest with the semi-arid habitats in its neighbourhood is seen as a necessary approach to the protection of this ecosystem.
Currently, the existing PAs are often limited to one single ecological habitat type. For example, Shirvan National Park, located on the Caspian Sea, does not include any coastal strip. Ilisu Zapovednik, protecting the mountain forest of the Greater Caucasus, is not linked to the lowland forest of the alluvial gravel fans or even to the floodplain forest of the Alazan River, although this approach would follow the ecological succession and would include a much greater amount of biodiversity. The inclusion of the ecological gradient in the protection regime would thus protect the natural habitat gradients and therewith important ecological corridors.
From an ecological point of view, the authors recommend the following prioritisation to increase protective measures:

- Establishment of Gobustan Biosphere Reserve
- Upgrading Zuvand Zakaznik and connection to Hirkan National Park
- Protecting one of the alluvial gravel fans with its river dynamics and the specific forest community (Gakh or Oguz)
- Uniting several existing protected areas around Mingchevir Reservoir into one protected area and filling the gaps in between, in particular including the floodplain forests of Alazan, Iori and the Kura mouth in the reserve; establishment of one central administration and strengthening of protective measures
- Establishment of a coastal reserve, including the Kura River mouth on the Caspian Sea, the coastal waters and several islands
Although Göy Göl National Park has recently been established on the basis of Göy Göl Zapovednik, the protection regime in the Lesser Caucasus needs to be strengthened immediately. Apart from the occupied territories, where an assessment of the ecological conditions has not been possible, the Shamkirchay Zakaznik is one of the last strongholds of the once widely occurring forest in the region. However, this remaining part needs to be strictly protected and extended.

In general, the existing system of Zakazniks forms a good basis for the extension or upgrading of protected areas. However, many of these areas need to be re-assessed since they do not always represent an adequate protection status. Some even carry the status of a Zakaznik because they were designated as hunting reserves, e.g. Zuvand Zakaznik.
Slightly different priorities need to be set if human pressure on the ecosystem is seen as the driving factor behind the extension/establishment of further PAs. In particular the coastal region, the most intensely used and densely populated area in Azerbaijan, is under severe pressure due to the current construction boom. All coastal regions investigated in this study (Dvshi, Kura Delta, Islands of the Caspian) should be given high priority and attention. In addition, the proposed Samur Yalama National Park (an area that was not investigated during this project) should be established as soon as possible since human impact on this last remaining coastal forest is steadily increasing.
The existing categories for protected areas should be extended to include the Biosphere approach. A historic land use system with livestock raising as the dominant part of the agrarian sector and with seasonal movements between summer and winter pastures strongly depends on the availability and accessibility of land. At present, however, the grazing system in particular is not at all sustainable, and overgrazing is a serious threat to the country's environment. An integrated Biosphere concept, especially for the greater Gobustan region, might be a solution to achieve conservation as well as sustainable land use.
The project also showed that there is an urgent need for a scientifically based update of the information on many species, their occurrence and abundance. During the surveys, about 15 species of birds could be recorded for the very first time in the country. Also, despite month-long field surveys by an experienced team, several important species such as the Striped Hyena could not be recorded at all, indicating that these species have become extremely scarce or have disappeared altogether. From the faunistic point of view, the project only used mammals, amphibians and reptiles as well as birds as indicator species. Unfortunately, expert knowledge on any other group of species is practically unavailable in the country, and pure species inventories are rarely financed internationally.
However, as the latest species surveys date back to the 1980s and the national scientific body is largely under-staffed and under-equipped, the available data is to a large extent out of date, and species systematics has not been updated and connected to the international state of science for several decades. As a consequence, an investment into the scientific capacity in Azerbaijan is urgently needed. A young generation of scientists needs to be educated, trained and developed. If this matter is not brought into focus within the next ten years, the already existing gap between the available but already very old scientists and a missing successor generation will widen even more dramatically.
Independent of the establishment of further protected areas, there is a strong need for the enhancement of the protection regime in Azerbaijan. Shortcomings in biodiversity conservation that need to be addressed immediately include among others 38:
- very poor environmental public awareness of biodiversity conservation issues;
- a shortage of manuals, facilities and programmes to raise wider public awareness of biodiversity conservation in the educational system;
- a lack of regular national and regional workshops and training measures in this area;
- failure to systematically involve stakeholders in regionally and internationally organized training measures aimed at the exchange of experience;
- poor exchange of experience and information at all levels;
- poorly organised use and development of the database on biodiversity conservation at relevant institutions;
- limited opportunities for the assessment of the dynamics of change and the scale of biodiversity due to the failure of state environmental statistics to fully cover the biodiversity area;
- poor general coordination of activities in this area despite the fact that various areas of biodiversity conservation are covered by relevant state and national programmes adopted in the country;
- insufficient attention to social aspects of biodiversity conservation such as health, demographic trends, migration, etc. in programmes that are in preparation;
- making little use of findings for biodiversity conservation provided by research, which has been financially and technically weak in recent years;
- reluctant implementation of pilot projects among practical and scientific laboratories, impeding the practical application of scientific findings;
- limited activity of NGOs in the field of biodiversity protection.
38 According to the National Action Plan on Strengthening Capacity to Respond to Challenges of Biodiversity Conservation, Climate Change and Desertification / Land Degradation (2006-2015), Baku 2005
4.2. NATURA 2000 – at present feasible in Azerbaijan?
At some point in the near future a decision has to be made whether Azerbaijan will participate in the European NATURA 2000 network. The formal bilateral cooperation between the EU and Azerbaijan does exist, and initial instruments are available. The process of legal approximation does indicate the direction.

Natura 2000 sites are also intended to contribute significantly to the coherence of the protected area network and to the biodiversity in the biogeographic regions within the European Union. For animal species ranging over wide areas, sites of Community Importance correspond to the areas within the natural range of such species that present the necessary physical and biological factors essential to their survival and reproduction.
At least in part, Azerbaijan belongs to the NATURA 2000/Emerald Network biogeographic regions (see Map 30). However, this distinction focuses on the European part of Azerbaijan. In addition, many habitats in Azerbaijan also show Mediterranean and Anatolian influences, yet an in-depth comparison is still lacking. Nevertheless, the authors conclude that there is a partially high similarity with FFH habitat types and do not dismiss the approach entirely.

For a spatial approximation and eventual connection to the existing SPAs, pSCIs, SCIs and SACs of the European member states, much commitment is still required and the necessary preparatory work offers a continuing challenge. Nevertheless, as depicted in Map 31 "Natura 2000 sites in Europe", the existing NATURA 2000 sites already extend to the Black Sea and cover biomes that occur in Azerbaijan as well. A consequent extension is highly advisable and might even be forwarded without full membership ambitions.
Independent of any European legislation and programmes, it is fully understandable for a young nation and a country still in transition to develop its own principles, standards and guidelines. Azerbaijan has repeatedly declared its ambition to become a reliable partner to Europe and to use as a guideline or adopt European legislation 39. The country itself defines its position as a bridge between Europe and Asia, and a tendency towards international/European institutions such as the EU, NATO, OSCE, etc. is visible. The TACIS Indicative Programme for Azerbaijan (2004-2006) 40 enhances the support of the MENR and mentions "Approximation of legislation with EU standards and principles" as an indicator for the programme's success.

39 Stated by Ilham Aliyev at a personal meeting between MSF and the president of Azerbaijan, 15.02.2007 in Berlin, Germany
The objective of Priority Area 3 (Support for legislative and economic reforms in the transport, energy and environment sectors) and in particular sub-priority No. 3 (Environment) of the European Neighbourhood Partnership Instrument 2007-2015 (NIP) is the improvement of the country's legislative and administrative management of environmental challenges with regard to the EU's best practice and experience. Here, the approximation of AZE environmental standards to EU standards is again an indicator.
Despite the brief existence of the MENR, Azerbaijan has gained valuable experience in selecting and creating protected areas. Nevertheless, there exists a huge difference between creating and managing protected areas, specifically national parks. The general weakness in executing, implementing and managing specific areas is a visible handicap for good governance and, as a consequence, for matching the conditions of the most important contract partner, the EU. Nevertheless, in certain areas of public law (Environmental Impact Assessment, Freedom of Information, etc.) the country already has a basis. This means that even potentially weak instruments can be renewed and do not necessarily have to be compared directly with the EU standards. In the context of the EU's Neighbourhood Policy and the possible anticipation of environmental law, and more specifically the main FFH-Guideline of the EU, Azerbaijan's basis can be used.
It is up to the Azerbaijani Government and the EU to further organise (and finance) an in-depth study or individual smaller studies with the aim to compare norms and to prepare Azerbaijan in detail for a possible identity of perspectives.
At present, NATURA 2000 is not feasible. Despite the fact that legal approximation (including environment) is in progress in general, a tremendous amount of work remains to be done in other related fields as well (with e.g. law enforcement being one specific issue among others). However, the pre-feasibility study revealed and created good pre-conditions for the implementation of this process, with the mid-term target to participate in the NATURA 2000 network.

40 adopted by the EU on 22 May 2003
Some valuable aspects and already fulfilled pre-conditions that should enable the country to overcome any approximation process easily include:

- Azerbaijan is a member of the Council of Europe,
- Azerbaijan is a member of the European Neighbourhood Programme,
- Geographically, Azerbaijan belongs at least partly to Europe,
- Azerbaijan is a strategic partner for Europe,
- The scoreboard report analysed the current status of legal approximation, gaps were identified and recommendations given, and priorities were set for approximation,
- Within the PCA, Twinning instruments are available to support Azerbaijan in the process (until the end of 2006 the TACIS Programme was also the supporting programme for twinning projects; since 2007 responsibility has switched to the newly created ENPI; currently, Azerbaijan is running two twinning projects on economic matters with the BMWi),
- The EU is a respected soft power in the Caucasus 41,
- There is a biogeographical link to European habitats and the species composition shows partial similarities,
- Azerbaijan signed the RAMSAR and BERN Conventions,
- Azerbaijan hosts natural habitats which are of Community Importance (pSCIs) for inclusion in the EU's NATURA 2000 network,
- This report provides a basic comparison of Azerbaijan's habitats and species with the relevant FFH and other EU-document annexes.
On the other hand, aspects that challenge an approximation process at present include:

- Full comparison of habitats is still missing and their implementation is very complex and time-consuming;
- There are still significant differences between the environmental legislation in Azerbaijan and that of the EU;
- There is a biogeographical link to Central Asian habitats as well as to European habitats, and the species composition in part shows a great Central Asian and Turanian influence. Due to this, an amendment, modification or update of the FFH habitat list presents a great challenge;
- Azerbaijan's interest and commitment to participate in European nature conservation approaches is not always clear.

41 see: AHMADOVA 2006

Map 31: NATURA 2000 sites in Europe (Habitats Directive sites (pSCI, SCI, SAC), Birds Directive sites (SPA), and sites or parts of sites belonging to both directives; validity of NATURA 2000 data: release 7/2008 or before; map downsized and simplified; projection: Lambert's Azimuthal Equal Area; source: European Commission (2009))

A pragmatic and logical sequence of continuation of the approximation would be the following, with an approximated timeline of about 15 years:

a) A clear commitment of Azerbaijan's responsible authorities to support the approximation of AZ principles, legal basis and implementation with regard to EU standards and best practise examples.

b) Active participation in EU nature conservation related Twinning projects and bilateral cooperation.

c) Continuation to establish close ties with the Emerald network. This first step will lead to familiarisation of Azerbaijan with EU conservation standards and implementation as it develops guidelines for respective habitat protection.

d) Since the Council of Europe has a rather weak mandate for a general EU-Azerbaijan approximation and few instruments available compared to Emerald, a large-scale Twinning project should be implemented with main aspects such as:

- Revision of species lists, including a Red List update
- Establish a scientific working group of EU and Azerbaijani experts to map, assess and compare all AZE habitats with Annex I types of the FFH guideline, and develop recommendations for the respective update. Selection and assessment of SACs/pSCIs (Stage 1). (In Stage 1, each member state is required to submit a list of sites (proposed sites of Community importance or pSCIs) that meet the objectives and criteria set out in the Habitats Directive (Article 4 (1)). Suitable sites must be proposed for all natural habitat types listed in Annex I and for the species listed in Annex II.)
- Establish a working group of environmental law experts and focus on the relevant laws and directives for nature, habitats, and protected areas.

A respective project in Turkey revealed the necessity for both sides, the EU as well as its partner, to invest great efforts into the harmonisation of all aspects if an approximation to NATURA 2000 is envisioned. Not only would Azerbaijan have to adapt to EU standards, the EU would also have to adapt its current directives and species and habitat lists, etc. (Hauxi 2008).

In the end, it is largely up to Azerbaijan how fast and how seriously the process of approximation will develop. At any rate, a good basis is available, and methods and approaches have been established. The European Union reached out its hand, and various options exist. Now Azerbaijan has the opportunity, if it is interested in a close and tight cooperation with the EU, to take this chance. Although the expected duration of the entire process will be in the mid-term range, lasting about 15-20 years, it surely makes great sense from an ecological, habitat protection and nature conservation point of view.

* * *
Literature
AHMADOVA, N. (2006): Die Rolle Aserbaidschans in der Kaukasus- und Zentralasienpolitik der Europi-
schen Union. Dissertation im Fachbereich I/ Politikwissenschaft. Universitt Siegen.
ALLWORTH (1971). Nationalities of the Soviet East: Publications and Writing Systems A Bibliographical
directory and transliteration tables for Iranian and Turkic language publications. Publications and Writing Systems.
New York.
ATAMOV, V.V., CABBAROV, M., & GURBANOV, E. (2006): Te Pytosociological Characteristics of Ecosys-
tems of Mountain of Talish Region of Azerbaijan. Asian Journal of Plant Science. 5(5). 899-904.
BEKTASHI, L., CHERP, A. (2002), EIA in Azerbaijan, evolution and current state of Environmental Assess-
ment in Azerbaijan, Impact Assessment and Project Appraisal, Vol. 20 No.4, pp.31-42.
BIRDLIFE INTERNATIONAL (2007): IUCN Red List of Threatened Species. <>.
Downloaded on 26 March 2008.
BMZ (2006). BUNDESMINISTERIUM FÜR WIRTSCHAFTLICHE ZUSAMMENARBEIT UND ENTWICKLUNG. Referat Entwicklungspolitische Informations- und Bildungsarbeit (Ed.) BMZ Materialien 155 Naturschutz im Kaukasus. Mai 2006.
CASPIAN ENVIRONMENTAL PROGRAMME (2007): Regional action plan for protection of Caspian ha-
bitats. (accessed 16/01/07).
CASPIAN ENVIRONMENT PROGRAMME (2008): (accessed
20/02/2008).
CHEREPANOV S. K. (1995): Sosydistye rastenija Rossii i sopredelnich gosydarstv. Sankt Petersburg: Mir i
Semja 95.
CIA THE WORLD FACTBOOK (2008):
aj.html (accessed 04/02/2009).
COUNCIL OF THE EUROPEAN COMMUNITIES 1992. Council Directive 92/43/EEC (21 May 1992) on the conservation of natural habitats and of wild fauna and flora. Official Journal 206: 7-50.
CPAF (2008): (accessed 17/12/2008).
C.P.M. WATERS (editor) (2005): The State of Law in the South Caucasus, Euro-Asian Studies. Palgrave Macmillan.
DUMONT, H.J., 1998. The Caspian Lake: History, biota, structure, and function. Limnol. Oceanogr. 43.
EEA (EuropeanEnvironmentalAgency) (2008):.
asp?id=221 (accessed 17/12/2008)
ELLIOTT, M. (2004). Azerbaijan. Trailblazer Publications.
EMBASSY OF THE REPUBLIC OF AZERBAIJAN (2009):
26&pid=2&PHPSESSID=b99f7bb1f3ec07c2bdeeeb521d0bf57d (accessed 07/02/2009).
ENVIRONMENTAL MOVEMENT IN AZERBAIJAN (2008):-
tions.htm accessed (22/10/2008).
EUROPEAN COMMISSION (2009):-
27SPASCI_908.pdf (accessed 04/02/09).
EXPERT GROUP (Group of Experts for the setting up of the Emerald Network of Areas of Special Conserva-
tion Interest) (2006): Emerald Network Pilot Project in Azerbaijan. Report. 25 p.
EXPERT GROUP (Group of Experts for the setting up of the Emerald Network of Areas of Special Conser-
vation Interest) (2008):-
sem3_2008_en.pdf (accessed 15/12/2008).
GALLOWAY, W.E., (1975): Process framework for describing the morphologic and stratigraphic evolution of
deltaic depositional systems. In: Broussard, M.L. (Ed.), Deltas, Models of exploration. Houston Geological Soci-
ety.
GAUGER, K. (2007): Occurence, Ecology and Conservation of wintering Little Bustards (Tetrax tetrax) in
Azerbaijan. in: Archives of Nature Conservation and Landscape Research, Vol. 46, Nr. 2, p. 5-28.
GEODEZIJA KOMITET (1992): Azrbayzhanyn Bitki ruzhu Khritasi. (Soil Map of Azerbaijan) 1: 600.000.
Baku. (in Azerbaijan).
Potential Analysis for Further Nature Conservation in Azerbaijan
GOSKOMGEODESIYA (1993): Agroklimaticheskiy Atlas Aserbaydshanskoy Respubliki (Agroclimatological
Atlas of the Republic of Azerbaijan). Gosudarstvenniy Komitet Azerbaydschanskoy Respubliki po Geodesii i Kar-
tograni. Baku.
GROSSHEIM, A. A. (1936). Analis Flori Kavkasa. Baku, Aserbaidschanskogo filiala akademii nauk SSSR.
GULIYEV, F. (2009): Oil wealth, patrimonialism and the failure of democracy in Azerbaijan. Caucasus Ana-
lytical Digest No2. 2009.
HAUKE, DR. U. (2008). BfN: oral communication (08/12/2008).
HENNING, I. (1972). Die dreidimensionale Vegetationsanordnung in Kaukasien. Erdwissenschaftliche For-
schung 4: 182-204.
HERRMANN, T. & HYE-KNUDSEN, R. (2006): Introduction to the new administrative law of Azerbaijan
(in Azeri, German & English) available from:
HOOGENDOORN, R.M., BOELS, J.F., KROONENBERG, S.B., SIMMONS, M.D., ALIYEVA, E, BA-
BAZADEH, A.D., HUSEYNOV, D. (2005): Developent of the Kura delta, Azerbaijan; a record of Holocene
Caspian sea-level changes. Marine Geology.
IGNATOV, E.I., SOLOVIEVA, G.D. (2000): Geomorphology of Southern Azerbaijan and Coastal Respon-
se to Caspian Transgression, in Dynamic Earth Environments, Remote Sensing Observations from Shuttle-Mir
Missions, edited by K.P. Lulla & L.V. Dessinov, 268 p, John Wiley & Sons Inc: New York, Chichester, Weinheim,
Brisbane, Singapore, Toronto.
INAN, S., YALCIN, M.N., GULIEV, I.S., KULIEV, K., FEIZULLAYEV, A.A., (1997): Deep petroleum oc-
currences in the Lower Kura depression, South Caspian Basin, Azerbaijan: anorganic chemical and basin modelling
study. Mar. Pet. Geol. 14.
IUCN (2007): IUCN Red List of Threatened Species. (accessed 12/02/07).
KARIMOV, S. & VALIYEVA G. (2006): Law on Administrative Procedure. Published by GTZ and CILC.
available from:
KHANALIBAYLI, E. (2008):-
libayli.pdf (accessed 04/12/08.
KRAUSE, W. (1997): Charales (Charophyceae), Süßwasserflora von Mitteleuropa. Bd. 18. Jena, Stuttgart, Lübeck, Ulm: G. Fischer.
KREVER V. et al. (Eds.) (2001): Biodiversity of the Caucasus Ecoregion : an analysis of biodiversity and current
threats and initial investment portfolio, Signar and WWF, Moskva.
MAMEDALIEV, JU. G. (1963). Atlas Azerbaijanskoy SSR. Baku Moskau: Gosudarstwyennogo Geologit-
cheskogo Komiteta SSSR.
MAMMADOV, R. & APRUZZI, F. : Environment, Exploitation and Utilization of Natural Resources, Score-
board Paper on Approximation of Azerbaijani Legislation to EU Law, Baku 2004 available from:
index.php/legal-approximation.
MARCINEK, J. & ROSENKRANZ, E. (1996): Das Wasser der Erde: eine geographische Meeres- und Gews-
serkunde. Gotha: Perthes.
MEKHTIEV, N.N. (1966): Dynamics and morphology of the Western Coast of the southern Caspian, Baku,
Azerbaijan, Academy of Sciences of Azerbaijan, SSR.
MEUSEL, H., JÄGER, E., WEINERT, E. (1965): Vergleichende Chorologie der zentraleuropäischen Flora. Fischer Verlag Jena.
MIKHAILOV, V.N., KRAVTSOVA, V.I., MAGRITSKII, D.V., (2003): Hydrological and morphological pro-
cesses in the Kura River delta. Water Res. 30 (5).
MINISTRY OF ECOLOGY AND NATURAL RESOURCES OF THE AZERBAIJAN REPUBLIC (MENR)
(2004): Nomination of the Hirkan forests of Azerbaijan as UNESCO World Nature Heritage Site, unpublished
draft as of March 30, 2004.
MINISTRY OF ECOLOGY AND NATURAL RESOURCES OF THE AZERBAIJAN REPUBLIC (MENR)
(2006): National Action Plan on Strengthening Capacity to Respond to Challenges of Biodiversity Conservation,
Climate Change and Desertincation / Land Degradation (2006-2015), Baku 2005.
MINISTRY OF ECOLOGY AND NATURAL RESOURCES OF THE AZERBAIJAN REPUBLIC (MENR)
(2006): Azerbaijan Capacity Development and Sustainable Land Management Program Summary of Project Pro-
posal, MENR/UNDP/GEF.
MITCHELL, J., WESTAWAY, R., (1999): Chronology of Neogene and Quaternary uplift and magmatism in
the Caucasus: constraint from KAr dating of volcanism in Armenia. Tectonophysics 304.
MÜHR, B. (2005): Klimadiagramme weltweit. Available from:.
NASA Worldwind:
OGAR, N. (2001): Coastal plants, in: Caspian Environment Programme (CEP).
additional_info/habitat.pdf (accessed 16/01/2007).
PATRIKEEV, M. P., WILSON, M. (2000): Azerbaijan, in: HEATH, M. F.,EVANS, M. I., (2000): Important
Bird Areas in Europe: Priority sites for conservation. Vol 2. Cambridge, (BirdLife Conservation Series No. 8).
PATRIKEEV, M. P. (2004): The Birds of Azerbaijan. Sofia-Moscow (Pensoft).
PATRIKEYEV M.V. (1991): To spring-summer avifauna of Southeast Shirvan and adjacent areas, Materials of scientific-practical conference Fauna, population and ecology of North Caucasian Birds, Stavropol, (in Russian).
PLANTS GENETIC RESOURCES IN CENTRAL ASIA AND THE CAUCASUS.-
sity.org/aze/aze_climate.htm (accessed 13/01/ 2009).
PRILIPKO, L. I. (1954). Lessnaya Rastitelnost Azerbaijana. Baku: Isdatelstwo Akademi Nauk Azerbaijanskoy
SSR.
PRILIPKO, L. I. (1970): Rastitelny Pokrov Aserbaidschana (Vegetation of Azerbaijan). Baku (Elm).
RED BOOK OF AZERBAIJAN (1989). Senior editor Adygezalov B.M., State Comity of Nature Conservation
of Azerbaijan Republic SSR and Azerbaijan Academy of Science.
RUZGAR (2008) environmental organisation in Azerbaijan.
htm (accessed 15/10/2008).
SANDWITH T., SHINE C., HAMILTON L. AND SHEPPARD D. (2001): Transboundary Protected Areas
for Peace and Co-operation. IUCN, Gland, Switzerland and Cambridge
SCHMIDT, P. (2004): Bäume und Sträucher Kaukasiens. Teil III: Laubgehölze der Familien Ebenaceae (Ebenholzgewächse) bis Frankeniaceae (Frankeniengewächse). Mitt. Dtsch. Dendrol. Ges. 89, pp. 49-71.
SCHMIDT, S., GAUGER, K, AGAYEVA, N. (2008): Birdwatching in Azerbaijan, a Guide to Nature and
Landscape. Greifswald, Germany. Michael Succow Foundation.
SCHROEDER, F.-G. (1998): Lehrbuch der Pflanzengeographie. Wiesbaden: Quelle and Meyer.
SHELTON, N. (2001): Where to watch birds in Azerbaijan. Baku (Halal Print).
SILVEIRA, M.P., (2004): Environmental Performance Review # 19: Azerbaijan. UNITED NATIONS PUB-
LICATION.
SKWORZOV, G. A. (1978): Topographical map 1:100 000 K-39-99 Saisan. (General staff of the Soviet military).
SSC (2009) State Statistical Committee of the Republic of Azerbaijan: (accessed 31/01/09).
STRAUSS, A. (2005): Terrestrial vegetation and soil conditions of Ag-Gel National Park in Azerbaijan as a basis
for a possible reintroduction of the Goitered Gazelle (Gazella subguttutosa), Archive of Nature Conservation and
Landscape Research.
SVANTE, E., CORNELL, S., STARR, F. (2006): The Caucasus: A Challenge for Europe, Silk Road Paper,
Central Asia-Caucasus Institute and Silk Road Studies Program, Washington, D.C.
THIELE, A., SCHLFFEL M., ETZOLD, J., PEPER, J., SUCCOW, M. (2009). Mires and Peatlands of
Azerbaijan. 7 p. Telma.
THIELE, A., SCHMIDT, S., GAUGER, K. (2008): Biodiversity and Protection Value of Coastal Ecosystems
of Azerbaijan. Project Report. Michael Succow Foundation.
VOLOBUEV, V. R. (1953): Pochvy Azerbaidzhanskoy SSR (Soils of Azerbaijan): Baku (in Russian).
ZOHARY M. (1963): Bulletin of the research council of Israel, section D Botany. Supplement on the Geobotanical structure of Iran.
Digital Maps:-
caucausus
UNEP (2009): Maps & Graphics,-
sea-1840-2004
Further Internet resources:
AZERBAIJAN OFFICIAL WEBSITE (2007): (accessed 23/06/07).
ENVIRONMENTAL NEWS SERVICE:http:// (accessed 20/02/08).
(accessed 20/01/2009).
UNEP (2008): (accessed 20/02/08).
WHO COUNTRY COOPERATION STRATEGY (2006) accessed at:-
ration_strategy/ccsbrief_aze_en.pdf.
WORLD CLIMATE INDEX MAP (2009): (accessed 15/01/ 2009). accessed 21/07/2009 accessed 22/07/2009).
List of Maps
Map 1: Priority conservation areas of the Southern Caucasus, as defined in the Ecoregional
Conservation Plan. Source:-
are as-priority-conservation-areas-and-wildlife-corridors-in-the-caucausus 9
Map. 2: Topographical overview Azerbaijan, basis: Modis scene 2005 13
Map 3: Climatic regions in Azerbaijan, source: EMBASSY OF THE REPUBLIC
OF AZERBAIJAN (2009) 14
Map 4: Landscapes and climate zones of the Southern Caucasus. Source: 15
Map 5: Protected areas and investigation sites of the present project in Azerbaijan 19
Map 6: Pasture land in the Caucausus ecoregion. Source:-
land-in-the-caucausus-ecoregion 24
Map 7: Sheep and goats in the Caucasus ecoregion. source:-
and-goats-in-the-caucasus-ecoregion 25
Map 8: EU programme regions since 01.01.2007 BMWi 37
Map 9: Samur- Dvchi project area 44
Map 10: Te catchment area of the Kura River, source: UNEP, accessed 23.11.2009 48
Map 11: Partly supervised classification of Spot Satellite Image. Image taken in July 2007 52
Map 12: Gil Island 56
Map 13: Boyuk Tava Island 58
Map 14: Kickihk Tava Island 60
Map 15: Chigill Island. 61
Map 16: Babur Island. 63
Map 17: Supervised satellite image classincation of Sari Su region. Based on Landsat 7 image 65
Map 18: Supervised satellite image classincation of Gobustan region. Based on Landsat 7 image 74
Map 19: Geological transsect accross Gobustan region 75
Map 20: Supervised satellite image classincation of Mingchevir region. Based on Landsat 7 image 81
Map 21: Partly supervised satellite image classincation of mountain forest above Gakh.
Based on Landsat 7 image 92
Map 22: Partly supervised satellite image classincation of Gakh gravel fan. Based on Landsat 7 image 97
Map 23: Coarse overview of Gakh gravel fan and the distribution of forest communities 99
Map 24: Partly supervised satellite image classincation of mountain forest Oguz survey area.
Based on Landsat 7 image 104
Map 25: Partly supervised satellite image classincation of suryev region above Lahic.
Based on Landsat 7 image 107
Map 26: Partly supervised satellite image classincation of Altiahaj suryev region.
Based on Landsat 7 image 112
Map 27: Partly supervised satellite image classification of Smkirchay valley. Based on Landsat 7 image 118
Map 28: Partly supervised satellite image classincation of Hirkanian Forest as well as Zuvand upland
close to the Iranian border suryev region. Based on Landsat 7 image 122
Map 29: Proposed corridor/connection between Zuvand Zakaznik and Hirkani National Park 131
Map 30: Biogeographic regions of Natura 2000 and the Emerald Network, source: EEA (2008) 146
Map 31: Natura 2000 sites in Europe 151
List of Figures
Fig. 1: GDP by sector and labor force by occupation, source: CIA THE WORLD FACTBOOK (2008) 27
Fig. 2: Land use, source: KHANALIBAYLI (2008) 31
Fig. 3: Conventions on nature protection signed by Azerbaijan, source: ENVIRONMENTAL
MOVEMENT IN AZERBAIJAN (2008) 34
Fig. 4: Organigram of the Ministry of Ecology and Natural Resources 36
Fig. 5: Flow chart of funding flow Caucasus Protected Area Fund.
Source: 39
Fig. 6: Cross section Dvchi: Cross section of the dune complex with physiognomic landscape units and
predominating plant species 45
Fig 7: Water level fluctuation of the Caspian Sea from 1900 till 2000 48
Fig. 8: (A) Depositional environments of the modern Kura delta; (B) Location map including bathymetry
of the south-western Caspian Sea, major faults, syncline and anticline structures and oil and gas fields;
rectangle: location of Kura delta (HOOGENDOORN et al., 2005; INAN et al. 1997) 49
Fig. 9: Gil: Cross Section of the Island Gil 57
Fig. 10: Cross section through eastern part of island. 58
Fig. 11: Cross section through eastern part of island. 59
Fig. 12: Cross section through central part of Kichik Tava 61
Fig. 13: Cross section through Chigill Island. 62
Fig. 14: Cross section through Babur Island 64
Fig. 15: Cross section through central part of Sari Su 66
Fig. 16: Cross section through western part of Sari Su 67
Fig. 17: Cross section through eastern part of Sari Su 68
Fig. 18: Gobustan: Cross section from the Caspian Sea to Mount Gijki 76
Fig. 19: Gobustan 2: Cross section through the Cheyrankshchmz River valley 77
Fig. 20: General overview of vegetation types around Mingchevir reservoir 82
Fig. 21: Cross section through Akhbakhar Hills 83
Fig. 22: Cross section from Alazan River to Lake Ajinohur 85
Fig. 23: River inflow to Mingchevir Reservoir 1990 Source: NASA World Wind 86
Fig. 24: River inflow to Mingchevir Reservoir 2000 Source: NASA World Wind 86
Fig. 25: Gakh: Cross section through the sequence of forest types on the slope 93
Fig. 26: Longitudinal cross section of the gravel fan 100
Fig. 27: Cross section through upper fan 100
Fig. 28: Cross section through lower fan 100
Fig. 29: Cross section through riverbed, upper fan 101
Fig. 30: Cross section through riverbed, mid-fan 101
Fig. 31: Cross section through riverbed, lower fan 102
Fig 32: Oguz: Cross section through the sequence of forest types on the slope 105
Fig. 33: Lahij: Cross section through the sequence of vegetation types on the slope 108
Fig. 34: Altiaghaj: Cross section 113
Fig. 35: Cross section through lake region close to Altiaghaj National Park 114
Fig. 36: Cross section through kettle hole at Altiaghaj region 115
Fig. 37: Cross section through Smkirchay valley 119
Fig. 38: Comparison of forest extension between 1987, 2000 and 2007; Based on Landsat 7 imagery 126
Fig. 39: Spatial alteration/depletion of forest surrounding the village Gegiran, located in the Talish
mountains. Partly supervised classification, based on Landsat 7 imagery 127
Fig. 40: Cross section through creek valley of Zuvand 129
Fig 41: Cross section through Zuvand creek valley 130
List of Photos
Photo Cover: Hartmut Müller
Photo 1: Forest of the Greater Caucasus (J. Etzold) 15
Photo 2: Sub-alpine meadow (J. Etzold) 16
Photo 3: Juniper sparse forest (J. Peper) 16
Photo 4: Jeyranchöl steppe with Artemisia fragrans close to the Georgian border (H. Müller) 17
Photo 5: Semi desert of Gobustan (S. Schmidt) 17
Photo 6: Psammophytic ecosystem at Absheron National Park. (M. Langhammer) 18
Photo 7: Sociable lapwing (Vanellus gregarius) (P. Meister) 18
Photo 8: Gobustan rock engravings (N. Agayeva) 20
Photo 9: Degraded Hirkanian Forest (M.Rietschel) 23
Photo 10: Heavy grazing pressure on the Greater Caucasus summer pastures (H. Müller) 25
Photo 11: Severe pollution around Baku's oil production (S. Schmidt) 25
Photo 12: Building boom in Baku (S. Schmidt) 27
Photo 13: Entrance to Shirvan National Park (S. Schmidt) 30
Photo 14: Agricultural land use pattern Talish Mountain (S. Schmidt) 31
Photo 15: Black Headed Gull (Meister) 46
Photo 16: Bare shifting sand dunes. (J. Peper) 47
Photo 17: The Kura River delta in 1980, source: Ignatov & Solovieva 2000 49
Photo 18: The Kura River delta in 1990, source: Ignatov & Solovieva 2000 50
Photo 19: The Kura River Delta in 1996, source: Ignatov & Solovieva 2000 50
Photo 20: The Kura River Delta in 2007 51
Photo 21: Old fishing fleet at the mouth of the Kura (S. Schmidt) 53
Photo 22: Sturgeon sold at the roadside (H. Müller) 54
Photo 23: Coast line of Boyuk Tava. (N. Agayeva) 59
Photo 24: Coastline Kichik Tava. (N. Agayeva) 60
Photo 25: Chigill Island from the Sea. (N. Agayeva) 61
Photo 26: Babur Island (N. Agayeva) 63
Photo 27: Lake Sari Su. (S. Schmidt) 69
Photo 28: Glossy Ibis (Pleagdis falcinellus), (M. Meister) 69
Photo 29: Juvenile White-headed Duck (Oxyura leucocephala) (S. Schmidt) 70
Photo 30: European Roller (Coracias garrulus) (H. Müller) 71
Photo 31: Lake Sari Su (A. Thiele) 72
Photo 32: Mud volcanoes within semi-desert of Gobustan. (S. Schmidt) 75
Photo 33: Steppe and Mud volcanoes of Gobustan (S. Schmidt) 77
Photo 34: Grazing in spring. (S. Schmidt) 78
Photo 35: Caucasus Agama (S. Schmidt) 79
Photo 36: Juniper woodland at steep loam escarpments. (H. Müller) 82
Photo 37: Bozdagh Hills south of Mingechvir Reservoir (S. Schmidt) 84
Photo 38: Griffon Vultures (Gyps fulvus) (H. Müller) 85
Photo 39: Iori River Valley with remnants of poplar floodplain forest (S. Schmidt) 86
Photo 40: Demoiselle crane (Grus virgo) (S. Schmidt) 87
Photo 41: Mingchevir Reservoir (J. Etzold) 88
Photo 42: Mountain forest Gakh, Greater Caucasus (S. Schmidt) 96
Photo 43: Gakh gravel fan with White poplar floodplain forest (S. Schmidt) 98
Photo 44: Gakh gravel fan (S. Schmidt) 103
Photo 45: Fruit tree formations at upper montane, Lahic transect. (H. Gottschling) 111
Photo 46: Juniper heathland and Stipa spec steppe of Altiaghaj region (S. Schmidt) 115
Photo 47: Juniper spec. heathland of Altiaghaj (J. Peper) 116
Photo 48: Crested Lark (Galerida cristata) (S. Schmidt) 117
Photo 49: Lesser Caucasus River Valley (K.Gauger) 121
Photo 50: Hirkanian Forest degraded by timber logging, forest pasture and constant fire wood
collecting/cutting (S. Schmidt) 124
Photo 51: Pristine Hirkanian Forest (J. Etzold) 125
Photo 52: Acantholimon spec cushions at Zuvand (S. Schmidt) 128
Photo 53: Rock Sparrow (Petronia petronia) (H. Müller) 131
List of Tables
Table 1: Potential for transboundary cooperation of Azerbaijan and its respective neighbouring countries 21
Table 2: Vegetation Formations of Kura Delta according to partly supervised remote sensing classification 53
Table 3: Overview about all Islands investigated 56
Table 4: Overview of potential for further reserves in the forest area of Azerbaijan 90
Table 5: Comparison of habitat types of the European FFH-Guideline (Annex I)
with widespread forest types of the Eastern Great Caucasus and the Lesser
Caucasus in Azerbaijan 91
Table 6: Characterisation of forest communities along Gakh transect 94
Table 7: Forest characteristics Oguz transect forest communities 106
Table 8: Overview of colline to upper montane vegetation types and characteristics of Lahic transect 109
Table 9: Fruit tree forest formations of the upper montane and its site condition 110
Table 10: Forest community characteristics along the Lesser Caucasus transect 120
Table 11: Comparison of EU directive 79/409/EEC with
Azerbaijan law, taken from the Scoreboard report 143
Table 12: Comparison of EU directive 92/43/EEC with Azerbaijan law, taken from the Scoreboard report 144
In memoriam
Prof. Dr. Martin Uppenbrink
PROJECT LEADERSHIP AND MANAGEMENT
Dipl. Biol. Sebastian Schmidt
PROJECT SUPERVISION
Prof. Dr. Michael Succow, Prof. Dr. Martin Uppenbrink
EDITING
Sebastian Schmidt, Constanze Trltzsch and Hendrik Herlyn
PROPOSAL FOR QUOTING
Schmidt, S. & Uppenbrink, M. 2009: Potential Analysis for further Nature Conservation in Azerbaijan
Michael Succow Foundation, Greifswald. 164 p.
PROJECT DURATION
03/2006-04/2009
EDITING AND COMPILING
Sebastian Schmidt
MAPS
Stephan Busse, René Fronczek
Michael Succow Foundation
Ernst-Moritz-Arndt University Greifswald
Institute for Botany and Landscape Ecology
Grimmer Str. 88
17489 Greifswald
Germany
Tel.: +49 3834 7754623
[email protected]
PUBLISHER
Geozon Science Media
Post Office Box 3245
D-17462 Greifswald
Tel.: +49 3834 801480
[email protected]
LAYOUT
Michael-Succow-Stiftung
progress4
PRINT-EDITION
ISBN 978-3-941971-01-1
Printed on 100% recycled Paper in climate neutral Production.
ONLINE-EDITION
Download:,
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie;
detailed bibliographic data are available on the Internet at.
Creative Commons License 3.0
ISBN 978-3-941971-01-1
Here is a piece of Visual C++ code that seems very peculiar. For some strange reason, sorting the data miraculously makes the code almost six times faster.
#include <algorithm>
#include <ctime>
#include <iostream>

int main()
{
    // Generate data
    const unsigned arraySize = 32768;
    int data[arraySize];

    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // !!! With this, the next loop runs faster
    std::sort(data, data + arraySize);

    // Test
    clock_t start = clock();
    long long sum = 0;

    for (unsigned i = 0; i < 100000; ++i)
    {
        // Primary loop
        for (unsigned c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }

    double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

    std::cout << elapsedTime << std::endl;
    std::cout << "sum = " << sum << std::endl;
}
- Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds.
- With the sorted data, the code runs in 1.93 seconds.
Why?
The reason why the performance improves drastically when the data is sorted is that the branch prediction penalty is removed.
Now, if we look at the code
Now, if we look at the code:

    if (data[c] >= 128)
        sum += data[c];

it can be replaced with the branchless form:

    sum += data[c] >= 128 ? data[c] : 0;
While maintaining readability, we can check the speedup factor.
On an Intel Core i7-2600K @ 3.4GHz and Visual Studio 2010 Release Mode, the benchmark is (format copied from Mysticial):
x86
// Branch - Random
seconds = 8.885

// Branch - Sorted
seconds = 1.528

// Branchless - Random
seconds = 3.716

// Branchless - Sorted
seconds = 3.71
x64
// Branch - Random
seconds = 11.302

// Branch - Sorted
seconds = 1.830

// Branchless - Random
seconds = 2.736

// Branchless - Sorted
seconds = 2.737
The result is robust in multiple tests. Comparing the generated assembly of a branchy max1 and a branchless max2, we get:
:max1
movl %edi, -4(%rbp)
movl %esi, -8(%rbp)
movl -4(%rbp), %eax
cmpl -8(%rbp), %eax
jle .L2
movl -4(%rbp), %eax
movl %eax, -12(%rbp)
jmp .L4
.L2:
movl -8(%rbp), %eax
movl %eax, -12(%rbp)
.L4:
movl -12(%rbp), %eax
leave
ret

:max2
movl %edi, -4(%rbp)
movl %esi, -8(%rbp)
movl -4(%rbp), %eax
cmpl %eax, -8(%rbp)
cmovge -8(%rbp), %eax
leave
ret
max2 uses much less code due to the usage of the instruction cmovge. But the real gain is that max2 does not involve branch jumps, jmp, which would have a significant performance penalty if the predicted result is not right...
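To make the branch-vs-branchless distinction concrete, here is a minimal, self-contained C++ sketch of both variants of the summing loop. The function names sumBranch and sumBranchless are illustrative, not from the original post:

```cpp
#include <vector>

// Branchy variant: the CPU has to predict the outcome of the `if` on
// every iteration; with random data that prediction fails about half
// the time, which is what makes the unsorted case slow.
long long sumBranch(const std::vector<int>& data)
{
    long long sum = 0;
    for (int v : data)
        if (v >= 128)
            sum += v;
    return sum;
}

// Branchless variant: `-(v >= 128)` is 0 or -1 (all bits set), so the
// comparison becomes a data dependency instead of a conditional jump.
long long sumBranchless(const std::vector<int>& data)
{
    long long sum = 0;
    for (int v : data)
        sum += v & -(v >= 128);   // adds v if v >= 128, else adds 0
    return sum;
}
```

Both functions compute the same sum; only the branchless one keeps its running time independent of whether the data is sorted, which matches the benchmark numbers above.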
Stardust/Knowledge Base/Java API/IWorklistMonitor Example
< Stardust | Knowledge Base | Java API
The IWorklistMonitor service provider interface can be used to execute custom code when items are added to or removed from a workflow participant's worklist. Typically this SPI is used to implement push notification to users (UI alerts, email, ...) or other systems.
To use this interface you need to:
- Implement the IWorklistMonitor interface as shown below.
- Create a text file named org.eclipse.stardust.engine.core.spi.monitoring.IWorklistMonitor. The file contents need to be the fully qualified name of your implementation class, e.g. org.eclipse.stardust.example.WorklistInterceptor.
- Place the file into the META-INF/services folder of the jar that will contain your implementation class
Note: you can also try to create a separate jar file, that contains only the above folder and use it as described in Stardust Forum thread [1]
package org.eclipse.stardust.example;

import org.eclipse.stardust.engine.api.model.IParticipant;
import org.eclipse.stardust.engine.core.runtime.beans.IActivityInstance;
import org.eclipse.stardust.engine.core.spi.monitoring.IWorklistMonitor;

public class WorklistInterceptor implements IWorklistMonitor
{
    @Override
    public void addedToWorklist(IParticipant arg0, IActivityInstance arg1)
    {
        System.out.println("ADDED TO WORKLIST: " + arg0.getId()
            + " Activity OID: " + arg1.getOID());
    }

    @Override
    public void removedFromWorklist(IParticipant arg0, IActivityInstance arg1)
    {
        System.out.println("REMOVED FROM WORKLIST: " + arg0.getId()
            + " Activity OID: " + arg1.getOID());
    }
}
KDE 3.x Splash Screens 244 comments
so the installation works fine without errors?
if so, can u go to the splashscreen settings and launch the "test" mode? - Oct 12 2006
you need at least the devel packages for kdebase - Jul 06 2006
there you could be able to find the neccessary options. - Jul 03 2006
sorry for the bad described theme options file. - Jul 03 2006
but please send me both for testing or maybe i'll add an option for slow machines (would be fair)
thank you!
just contact me: [email protected]
(please use a BIG, SIGNIFICANT subject, my inbox is growing like never before :)) - Jan 16 2006
no special ideas. simply a nice funny tux could be the solution :)
and remember that its has NOT to be rectangluar. think of the tux with a translucent background or something.
i hope u can imagine what i mean. - Jan 14 2006
i'll do that this evening. - Jan 03 2006
make sure the theme contains a Background entry with only the filename given (without any path element) - Dec 08 2005
some adjustments to version 0.4.3 would be neccessary to make it stable.
but it would include MNG animations and progressbar support.
additionally i'm working on a style engine. at least, my job also takes time.
maybe you get it at xmas or as a present for 2k6 :) - Dec 05 2005
i forgot to disable the "announce update" option.
sorry for that - Dec 04 2005
./configure --with-qt-includes=/usr/include/qt3 (if this is our path)
and/or
install qt3 dev packages (if not already done)
- Sep 21 2005
There's a lot of crap in the current versions, so please be patient. - Aug 30 2005
btw: i've never tested to compile with qt4. tell me if it works - Aug 28 2005
have you looked at config.log? - Aug 22 2005
do you have dev files (kdebase-dev) installed?
if so, mail me your complete error log to [email protected]. so i can have a detailed look at it. - Aug 13 2005
deb ../project/experimental main
to your sources.list (as i did) and you'll get the latest kde 3.4.1 - Jul 18 2005
Bootsplash Various 18 comments
that would be great!
cya - Oct 03 2006
KDE 3.x Splash Screens 1 comment
here is what i get:
moodwrod@tibo (0) 01:20:29 PM 13:20:29
[~] superkaramba
^sys.path.insert(0, '/home/moodwrod/download/karamba/aero_aio.skz')
Traceback (most recent call last):
File "/home/moodwrod/download/karamba/aero_aio.skz/aero_aio.py", line 119, in ?
File "/home/moodwrod/download/karamba/aero_aio.skz/aero_aio.py", line 101, in __globalimport__
File "", line 2, in ?
File "/home/moodwrod/.aero_aio/ps_aio.py", line 1, in ?
import karamba, re, time, traceback, subprocess
ImportError: No module named subprocess
best regards,
christian - Jan 15 2006
Karamba & Superkaramba 66 comments
[lastname], [firstname]
so you split by " " this causes my contactlist only to show
maybe u can fix this :)
btw: clicking on a nick opens a chat window would be great, too
greets - Aug 15 2005
KDE 3.x Splash Screens 33 comments
Background = Background.jpg
in the theme file. this would set it to a fix file name.
sorry for the bad (not really existing documentation).
next version is coming soon. i was very busy the last time.
btw: great theme - Sep 23 2005
Various Stuff 24 comments
sometimes kicker needs a restart to show all the stuff as u expect, so just invoke:
dcop kicker kicker restart
on the commmand line
i have no theme at all, but i can make one for you. just email me. thats no problem
cya - Aug 29 2005
my windows look like every other windows with plastik windec and style. - Aug 28 2005
forgot to remove - Jun 30 2005
KDE 3.x Splash Screens 36 comments
look forward to moodin 0.4.3!
More effects will be available - Jul 26 2005
KDE 3.x Splash Screens 8 comments
apart from that i don't know if you use 0.4.1 or 0.4.2 of moodin engine but in 0.4.2 (or it depends on my current development version :)) the theme defaults fail.
well, the following two lines in Theme.rc fix that (only 0.4.2):
Background = Background.png
Label2 = ML:USER:loginname - Jul 22 2005
KDM3 Themes 53 comments
feel free to "moodify" the theme itself :)
greets - Jul 21 2005
last two updates, it was tricky to find out - Jul 21 2005
KDE 3.x Splash Screens 12 comments
KDE 3.x Splash Screens 20 comments
which theme are you using? is it the same with all themes? - Jul 04 2005
KDE 3.x Splash Screens 2 comments
i think i get it back some day.
just ask google for the sources -.-
- Nov 07 2008
|
https://www.pling.com/u/moodwrod/
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Mathcomp
Because C++ doesn't have math-like chained comparisons.
Observed Behaviour
C/C++ sucks like this:
int p = 121;

// warning: comparison of boolean constant with arithmetic constant (39) is always true.
// Not what we want!
if (-17 < p < 39) {
    cout << "foo";
} else {
    cout << "bar";
}
cout << endl;
In the above the order of evaluation is:
- (-17 < p) → bool, with the end result true.
- (true < 39) → bool, with the end result true (both because of integer promotion and because of bool comparison, so we're doubly struck here).
Expected Behaviour
To be able to write a chained comparison the way it's used in MATH.
What we can do:
int p = 121;

if (something?() < -17 < p < 39) {
    cout << "foo";
} else {
    cout << "bar";
}
cout << endl;
So here
mathcomp provides that something.
Usage
#include "mathcomp/mathcomp.hpp" // ... in code using mathcomp::mathcomp; int p= 121; if (mathcomp< -17 <= p < 39) { cout<< "foo"; } else { cout<< "bar"; }
Mathcomp supports left-to-right ordered chained comparisons, that is, the operators <, <= and ==.

Note the use of operator< at the beginning to activate chaining comparison. The operators <, <=, == and << can be used to activate chaining.
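For readers curious how such an activator can work at all, here is a minimal proxy-object sketch of the general technique. This is an illustration only, not mathcomp's actual implementation; the names Chain, Activator and chain are made up:

```cpp
// A tiny proxy type: it remembers whether every comparison so far held,
// and the right-hand value of the previous comparison.
struct Chain
{
    bool ok;
    long last;

    friend Chain operator<(Chain c, long rhs)  { return Chain{c.ok && c.last <  rhs, rhs}; }
    friend Chain operator<=(Chain c, long rhs) { return Chain{c.ok && c.last <= rhs, rhs}; }

    operator bool() const { return ok; }   // lets the chain be used in `if`
};

// The activator: `chain < x` seeds a Chain with x, so the following
// comparisons are evaluated left to right, math-style, because the
// relational operators are left-associative with equal precedence.
struct Activator {};
inline Chain operator<(Activator, long first) { return Chain{true, first}; }

static const Activator chain{};
```

With this in place, `chain < -17 <= p < 39` parses as `((chain < -17) <= p) < 39` and therefore evaluates (-17 <= p) && (p < 39), which is the math-style reading the README is after.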
License
mathcomp is licensed under the LGPL aka GNU Lesser General Public License. License verbatim is provided in /doc/tip/LICENSE.txt. Visit also for license details and for a rundown.
Using DevExpress Blazor UI Components With the ABP Framework
Hi, in this step by step article, I will show you how to integrate DevExpress blazor UI components into ABP Framework-based applications.
(A screenshot from the example application developed in this article)

Before starting development, we will create a solution named DevExpressSample (or whatever you want). We will create a new startup template with EF Core as a database provider and Blazor for UI framework by using ABP CLI:
abp new DevExpressSample -u blazor
Our project boilerplate will be ready after the download is finished. Then, we can open the solution in Visual Studio (or any other IDE) and run the DevExpressSample.DbMigrator to create the database and seed the initial data (which creates the admin user, admin role, permissions etc.)

After the database and initial data are created:

Run the DevExpressSample.HttpApi.Host to see our server side working and
Run the DevExpressSample.Blazor to see our UI working properly.
Default login credentials for admin: username is admin and password is 1q2w3E*
Install DevExpress
You can follow this documentation to install DevExpress packages into your computer.
Don't forget to add "DevExpress NuGet Feed" to your Nuget Package Sources.
Adding DevExpress NuGet Packages
Add the
DevExpress.Blazor NuGet package to the
DevExpressSample.Blazor project.
Install-Package DevExpress.Blazor
Register DevExpress Resources
Add the following line to the HEAD section of the wwwroot/index.html file within the DevExpressSample.Blazor project:
<head>
    <!--...-->
    <link href="_content/DevExpress.Blazor/dx-blazor.css" rel="stylesheet" />
</head>
In the DevExpressSampleBlazorModule class, call the AddDevExpressBlazor() method from your project's ConfigureServices() method:
public override void ConfigureServices(ServiceConfigurationContext context)
{
    var environment = context.Services.GetSingletonInstance<IWebAssemblyHostEnvironment>();
    var builder = context.Services.GetSingletonInstance<WebAssemblyHostBuilder>();

    // ...

    builder.Services.AddDevExpressBlazor();
}
Register the DevExpress.Blazor namespace in the _Imports.razor file:
@using DevExpress.Blazor
Result
The installation is done. You can now use any DevExpress Blazor UI component in your application:
Example: A Scheduler:
This example has been created by following this documentation.
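For a flavor of what using such a component looks like in a page, here is a minimal sketch (the page route, message field, and handler name are made up for this example; check the DevExpress documentation for the exact component parameters in your version):

```razor
@* Hypothetical minimal page using a DevExpress button component *@
@page "/devexpress-demo"

<DxButton Text="Say hello" Click="@OnClick" />
<p>@message</p>

@code {
    string message = "";
    void OnClick() => message = "Hello from DevExpress!";
}
```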
The Sample Application
We have created a sample application with Data Grid example.
The Source Code
You can download the source code from here.
The related files for this example are marked in the following screenshots.
Additional Notes
Data Storage
I've used an in-memory list to store data for this example, instead of a real database. Because it is not related to DevExpress usage. There is a
SampleDataService.cs file in
Data folder at
DevExpressSample.Application.Contracts project. All the data is stored here.
Conclusion
In this article, I've explained how to use DevExpress components in your application. ABP Framework is designed so that it can work with any UI library/framework.
Write a C program to find the distance between two points. As per the Pythagorean theorem, the distance between two points (x1, y1) and (x2, y2) is √((x2 − x1)² + (y2 − y1)²).
This example accepts two coordinates and prints the distance between them. We used the C library Math functions sqrt and pow to calculate the square root and power.
#include <stdio.h>
#include <math.h>

int main() {
    int x1, x2, y1, y2, dtn;

    printf("Enter the First Point Coordinates = ");
    scanf("%d %d", &x1, &y1);

    printf("Enter the Second Point Coordinates = ");
    scanf("%d %d", &x2, &y2);

    int x = pow((x2 - x1), 2);
    int y = pow((y2 - y1), 2);
    dtn = sqrt(x + y);

    printf("\nThe Distance Between Two Points = %d\n", dtn);
}
In this C program, the calcDis function accepts the coordinates of two points and returns the distance between those two points.
#include <stdio.h>
#include <math.h>

int calcDis(int x1, int x2, int y1, int y2) {
    return sqrt((pow((x2 - x1), 2)) + (pow((y2 - y1), 2)));
}

int main() {
    int x1, x2, y1, y2;

    printf("Enter the First Coordinates = ");
    scanf("%d %d", &x1, &y1);

    printf("Enter the Second Coordinates = ");
    scanf("%d %d", &x2, &y2);

    printf("\nThe Distance = %d\n", calcDis(x1, x2, y1, y2));
}
Enter the First Coordinates = 2 3
Enter the Second Coordinates = 9 11

The Distance = 10
Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 2.3.4
- Fix Version/s: None
- Component/s: None
- Labels: None
Description
A script has an annotation on a variable, for example:
@Input("some.input.reference")
def input
An AST Transformation adds a method that initializes the input reference and the declaration is transformed to:
def input = someMethod("some.input.reference")
The someMethod() method contains a staticMethodCall.
If several scripts that are instrumented this way are called after each other, then a GroovyBugError always occurs in the second script, see attachment.
If the script is called stand-alone then everything is fine.
The order in which the scripts are called doesn't matter.
I included a more detailed code example in the attachment.
It seems there is some kind of bug with static method calls in AST transformations.
Editors¶
The Editors group of SDK packages is a set of supported editors in nteract applications. These render within cell components and provide additional functionality such as autocomplete as well as advanced syntax highlighting.
Table of contents
/editor¶
The
@nteract/editor package contains components for rendering CodeMirror editors in our nteract applications. To see this package in action, view the source code for the nteract play application.
Examples of /editor¶
The example below shows how to use this package to create a simple code editor component.
Example:
import CodeMirrorEditor from "@nteract/editor";

<CodeMirrorEditor
  cellFocused
  editorFocused
  completion
  theme="light"
  id="just-a-cell"
  onFocusChange={() => {}}
  focusAbove={() => {}}
  focusBelow={() => {}}
  kernelStatus={"not connected"}
  options={{
    lineNumbers: true,
    extraKeys: {
      "Ctrl-Space": "autocomplete",
      "Ctrl-Enter": () => {},
      "Cmd-Enter": () => {}
    },
    cursorBlinkRate: 0,
    mode: "python"
  }}
  value={"import pandas as pd"}
  onChange={() => {}}
/>;
/monaco-editor¶
The
@nteract/monaco-editor package implements a React component with a Monaco-based code editor. To see this package in action, view the source code for rendering text files in the nteract-on-Jupyter application.
Examples of /monaco-editor¶
The example below shows how to use this package to render an editor for plain-text content.
Example:
import MonacoEditor from "@nteract/monaco-editor";

export default () => {
  return (
    <MonacoEditor
      id="foo"
      contentRef="bar"
      theme="vscode"
      language="plaintext"
      value={"These are some words in an editor."}
    />
  );
};
Documentation¶
Editor¶
The
@nteract/monaco-editor package provides the core functionality to render Monaco Editor as a React component. It also fetches code tab-completion items when running a notebook connected to a Jupyter kernel. To coordinate with notebook semantics, the package requires the following props in the
IMonacoProps interface:

- id - A unique identifier for the editor instance. In the notebook context, since every cell is tied to a single instance of the editor, id refers to the unique ID of the cell.
- contentRef - A unique identifier for the editor's host application. In the notebook context, contentRef provides a reference to the container element for the main notebook app component.
- theme - Theme for rendering the component (docs)
- language - Valid language ID of a supported language (eg: python, typescript, plaintext etc.) Refer to the Monaco Editor playground to add support for a language not yet supported.
nteract provides the minimum required props to instantiate the component and also support for a host of optional properties and handlers. See the code below for optional properties.
options - Specify a list of supported EditorOptions as key-value pairs when instantiating the component.
Important callbacks:
- onChange: (value: string, event?: any) => void - Contents of the editor are changed.
- onFocusChange: (focus: boolean) => void - The Editor Component loses or gains focus.
- onCursorPositionChange: (selection: monaco.ISelection | null) => void - Cursor position changes.
- onDidCreateEditor: (editor: monaco.editor.IStandaloneCodeEditor) => void - Created editor.
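Putting the required props and two of these callbacks together, a controlled-editor component might look like the following sketch (unverified: the ids and handlers are placeholders, and prop shapes follow the list above; adjust to the exact interface of your @nteract/monaco-editor version):

```tsx
import * as React from "react";
import MonacoEditor from "@nteract/monaco-editor";

// Sketch only: "cell-1" / "notebook-1" and the console.log handlers are placeholders.
export default () => {
  const [value, setValue] = React.useState("print('hello')");
  return (
    <MonacoEditor
      id="cell-1"
      contentRef="notebook-1"
      theme="vscode"
      language="python"
      value={value}
      onChange={(next: string) => setValue(next)}
      onFocusChange={(focused: boolean) => console.log("focused:", focused)}
    />
  );
};
```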
Completions¶
The package also adds the capability to retrieve code-completion items from a connected Jupyter kernel. Completions are language-specific token recommendations, offered when the user types or enumerates the attributes of a class/object. Common triggers are the dot operator and the tab completion key.
nteract has a default completion provider that works with the Jupyter kernel. nteract also supports custom completion providers for users registering their own language service.
The props below control completion behavior.

- enableCompletion - Boolean flag to enable/disable the behavior entirely.
- shouldRegisterDefaultCompletion - Boolean flag to enable/disable the default completion provider.
- onRegisterCompletionProvider?: (languageId: string) => void - Custom completion provider implementation for a Monaco Editor supported language.
Formatting¶
The following prop enables code formatting:

- onRegisterDocumentFormattingEditProvider?: (languageId: string) => void - Custom formatting provider implementation for a Monaco Editor supported language.
Performance tip¶
Enable completions in your app by running this package's language services on a Web worker separate from the UI thread. This provides a performance boost and ensures that the app doesn't stall UI updates while the editor is waiting for Jupyter kernel completions.

nteract uses the Monaco Editor webpack plugin to register and use the Monaco editor worker. View the Monaco Editor docs for more information on configuring the package and setting up other web workers.
Example:
To improve window-resizing performance, see the example below. Resizing the browser window recalculates the width of this component's container; the CSS overrides below make that recalculation cheaper.
.monaco-container .monaco-editor {
  width: inherit !important;
}

.monaco-container .monaco-editor .overflow-guard {
  width: inherit !important;
}

/* 26px is the left margin for .monaco-scrollable-element */
.monaco-container .monaco-editor .monaco-scrollable-element.editor-scrollable.vs {
  width: calc(100% - 26px) !important;
}
These style overrides for resize performance are also in the
@nteract/styles package.
Import the CSS in a top level React component with the code below.
import "@nteract/styles/monaco/overrides.css";
Similar to the western-style diet, we will again start by loading the diet and depleting components absorbed by the host. In this case we have no manual annotation for which components should be diluted, so we will use a generic human metabolic model to find those. The growth medium supplied here was created the following way:
Let's start by reading the diet, which was downloaded from the VMH site. Flux is in mmol/human/day. This has to be adjusted to 1 hour. Also, the VMH site has a bug where it will clip fluxes after 4 digits, so we will set values like 0.0000 to 0.0001.
import pandas as pd

medium = pd.read_csv("../data/vmh_high_fat_low_carb.tsv", index_col=False, sep="\t")
medium.columns = ["reaction", "flux"]
medium.reaction = medium.reaction.str.replace("(\[e\]$)|(\(e\)$)", "", regex=True)
medium.loc[medium.flux < 1e-4, "flux"] = 1e-4
medium.flux = medium.flux / 24
medium
91 rows × 2 columns
Now we will try to identify components that can be taken up by human cells.
To achieve this we will load the Recon3 human model. AGORA and Recon IDs are very similar so we should be able to match them. We just have to adjust the Recon3 ones a bit. We start by identifying all available exchanges in Recon3 and adjusting the IDs.
from cobra.io import read_sbml_model
import pandas as pd

recon3 = read_sbml_model("../data/Recon3D.xml.gz")
exchanges = pd.Series([r.id for r in recon3.exchanges])
exchanges = exchanges.str.replace("__", "_").str.replace("_e$", "", regex=True)
exchanges.head()
0 EX_5adtststerone 1 EX_5adtststerones 2 EX_5fthf 3 EX_5htrp 4 EX_5mthf dtype: object
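The same two-step ID adjustment can be checked on literal strings without pandas. The reaction IDs below are made-up examples in the Recon naming style, used only to illustrate the transformation:

```python
import re

def normalise(rid):
    """Mirror the pandas chain above: collapse '__' and strip a trailing '_e'."""
    rid = rid.replace("__", "_")
    return re.sub(r"_e$", "", rid)

print(normalise("EX_glc__D_e"))  # -> EX_glc_D
```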
Now we will check which ones we can find in our set and add in the dilution factors (again going with 1:10).
medium["dilution"] = 1.0
medium.loc[medium.reaction.isin(exchanges), "dilution"] = 0.1
medium.dilution.value_counts()
0.1 79 1.0 12 Name: dilution, dtype: int64
Okay, so 79/91 components can be absorbed by humans. We end by filling in the additional info.
medium["metabolite"] = medium.reaction.str.replace("^EX_", "", regex=True) + "_m"
medium["global_id"] = medium.reaction + "(e)"
medium["reaction"] = medium.reaction + "_m"
medium.loc[medium.flux < 1e-4, "flux"] = 1e-4
medium
91 rows × 5 columns
# !wget -O data/agora103_genus.qza
Now we will check for growth by running the growth medium against every single model.
from micom.workflows.db_media import check_db_medium

check = check_db_medium("../data/agora103_genus.qza", medium, threads=20)
check now includes the entire manifest plus two new columns: the growth rate and whether the models can grow.
check.can_grow.value_counts()
False 227 Name: can_grow, dtype: int64
Okay, nothing can grow. We are probably missing some important cofactor such as manganese or copper.
Let's complete the medium so that all taxa in Refseq can grow at a rate of at least 1e-4.
Sometimes you may start from a few components and will want to complete this skeleton medium to reach a certain minimum growth rate across all models in the database. This can be done with
complete_db_medium. We can minimize either the added total flux, mass, or presence of any atom. Since we want to build a low-carb diet here, we will minimize the presence of added carbon.
from micom.workflows.db_media import complete_db_medium

manifest, imports = complete_db_medium(
    "../data/agora103_genus.qza",
    medium,
    growth=0.001,
    threads=20,
    max_added_import=10,
    weights="C",
)
manifest.can_grow.value_counts()
True 227 Name: can_grow, dtype: int64
manifest is the amended manifest as before and
imports contains the used import fluxes for each model. A new column in the manifest also tells us how many imports were added.
manifest.added.describe()
count 227.000000 mean 6.678414 std 3.959711 min 1.000000 25% 3.000000 50% 7.000000 75% 9.000000 max 22.000000 Name: added, dtype: float64
So we added 7 metabolites on average (range 1 to 22).
From this we build up our new medium.
fluxes = imports.max()
fluxes = fluxes[(fluxes > 1e-6) | fluxes.index.isin(medium.reaction)]
completed = pd.DataFrame({
    "reaction": fluxes.index,
    "metabolite": fluxes.index.str.replace("^EX_", "", regex=True),
    "global_id": fluxes.index.str.replace("_m$", "(e)", regex=True),
    "flux": fluxes,
})
completed.shape
(122, 4)
Let's also export the medium as a Qiime 2 artifact, which can be read with
q2-micom or the normal micom package.
from qiime2 import Artifact

arti = Artifact.import_data("MicomMedium[Global]", completed)
arti.save("../media/vmh_high_fat_low_carb_agora.qza")
'../media/vmh_high_fat_low_carb_agora.qza'
check = check_db_medium("../data/agora103_genus.qza", completed, threads=20)
check.can_grow.value_counts()
True 227 Name: can_grow, dtype: int64
check.growth_rate.describe()
count 227.000000 mean 0.002235 std 0.001484 min 0.001000 25% 0.001000 50% 0.002030 75% 0.002564 max 0.006467 Name: growth_rate, dtype: float64
Running a
BackgroundScheduler and then letting the execution reach the end of the script
To demonstrate the latter case, a script like this will not work:
from apscheduler.schedulers.background import BackgroundScheduler

def myjob():
    print('hello')

scheduler = BackgroundScheduler()
scheduler.start()
scheduler.add_job(myjob, 'cron', hour=0)
The script above will exit right after calling
add_job(), so the scheduler will not have a chance to run the scheduled job.
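The failure mode can be reproduced with nothing but the standard library: a timer on a daemon thread behaves like the scheduler's worker thread, and the job only fires if the main thread stays alive long enough. The sleep below stands in for whatever keeps a real script running (for example a blocking scheduler or an input loop); the names are illustrative, not APScheduler API.

```python
import threading
import time

results = []

def myjob():
    results.append('hello')

# A daemon timer mimics BackgroundScheduler's worker thread: if the main
# thread exited immediately, the process would die before the timer fires.
t = threading.Timer(0.05, myjob)
t.daemon = True
t.start()

time.sleep(0.5)  # keep the "script" alive so the job gets a chance to run
```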
How can I use APScheduler with uWSGI?¶
uWSGI employs some tricks which disable the Global Interpreter Lock and with it, the use of threads, which are vital to the operation of APScheduler. To fix this, you need to re-enable the GIL using the
--enable-threads switch. See the uWSGI documentation for more details.
Also, assuming that you will run more than one worker process (as you typically would in production), you should also read the next section.
How do I use APScheduler in a web application?¶
First read through the previous section.
If you’re running Django, you may want to check out django_apscheduler. Note, however, that this is a third party library and APScheduler developers are not responsible for it.
Likewise, there is an unofficial extension called Flask-APScheduler which may or may not be useful when running APScheduler with Flask.
For Pyramid users, the pyramid_scheduler library may potentially be helpful.
Other than that, you pretty much run APScheduler normally, usually using
BackgroundScheduler. If you're running an asynchronous web framework like aiohttp, you probably want to use a different scheduler in order to take some advantage of the asynchronous nature of the framework.
Is there a graphical user interface for APScheduler?¶
No graphical interface is provided by the library itself. However, there are some third party implementations, but APScheduler developers are not responsible for them. Here is a potentially incomplete list:
@Generated(value="OracleSDKGenerator", comments="API Version: 20160918") public class ListDomainsRequest extends BmcRequest<Void>
Methods inherited from class BmcRequest:
getBody$, getInvocationCallback, getRetryConfiguration, setInvocationCallback, setRetryConfiguration, supportsExpect100Continue

Methods inherited from class java.lang.Object:
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
public ListDomainsRequest()
public String getCompartmentId()
The OCID of the compartment (remember that the tenancy is simply the root compartment).
public String getDisplayName()
The mutable display name of the identity domain.
public String getUrl()
The region-agnostic identity domain URL.
public String getHomeRegionUrl()
The region-specific identity domain URL.
public String getType()
The identity domain type.
public String getLicenseType()
The license type of the identity domain.
public Boolean getIsHiddenOnLogin()
Indicates whether or not the identity domain is visible at the sign-in screen.

public ListDomainsRequest.SortOrder getSortOrder()
The sort order to use, either ascending (
ASC) or descending (
DESC). The NAME sort order is case sensitive.
public String getOpcRequestId()
Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.
public Domain.LifecycleState getLifecycleState()
A filter to only return resources that match the given lifecycle state. The state value is case-insensitive.
public ListDomainsRequest.Builder toBuilder()
Return an instance of
ListDomainsRequest.Builder that allows you to modify request properties.

public static ListDomainsRequest.Builder builder()
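Request classes in the OCI Java SDK are typically constructed through this builder. A usage sketch, with placeholder values (the OCID and display name below are made up; only getters documented on this page are used):

```java
// Hypothetical usage sketch of the builder pattern for this request class.
ListDomainsRequest request = ListDomainsRequest.builder()
        .compartmentId("ocid1.tenancy.oc1..exampleuniqueID")
        .displayName("my-identity-domain")
        .build();

// request.getCompartmentId() and request.getDisplayName() now return the values above.
```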
This Notebook gives a brief introduction to Discrete Fourier Transform (DFT) and the Fast Fourier Transform (FFT). The radix-2 Cooley-Tukey FFT algorithm is implemented and toward the end the physical meaning is explained.
These concepts have a wide area of applications in many different areas in both physics and mathematics, such as signal processing, sound and image filtering, data compression, partial differential equations and multiplication of large integers.
Before reading this notebook one should have an idea of what a Fourier transform and a Fourier series is, but it is not nessecery.
We start by importing the needed packages.
import numpy as np
from numpy import random as rnd
import timeit
from scipy import fftpack as fft
Let $\vec x = [x_0,x_1,...,x_{n-1}]$ be a vector with $n$ complex (or real) elements. The DFT of $\vec x$ is the complex vector $\vec y = [y_0,y_1,...,y_{n-1}]$, where the elements are defined as $$y_k=\sum_{j=0}^{n-1}x_j\omega^{k\cdot j},$$ where $\omega = \exp(-2\pi i /n)$ ($i$ is the imaginary unit) [1].
def DFT(x):
    """ Calculates the one dimensional discrete Fourier transform of a vector.
    :x: double arr. The vector that is being transformed.
    :returns: double arr. The Fourier transform of x.
    """
    n = len(x)
    y = [0]*n
    omega = np.exp(-2.0j*np.pi/n)
    for k in range(0, n):
        y[k] = np.sum(x*omega**(np.arange(0, n)*k))
    return y
It is easy to realize that the inverse DFT is given by $$x_k = \sum_{j=0}^{n-1} y_j\omega^{k\cdot j},$$ where $\omega = \exp(2\pi i/n)$.
def inverseDFT(y):
    """ Calculates the inverse one dimensional discrete Fourier transform of a vector.
    :y: double arr. The vector that is being transformed.
    :returns: double arr. The inverse Fourier transform of y.
    """
    n = len(y)
    x = [0]*n
    omega = np.exp(2.0j*np.pi/n)
    for k in range(0, n):
        x[k] = np.sum(y*omega**(np.arange(0, n)*k))/float(n)
    return x
Let us try with a small example where we simply transform and inverse transform an arbitrary vector.
# Defining an array that is being transformed.
x = rnd.randint(8, size=8)
print('x =', x)

# The Fourier transform.
y = DFT(x)
print('y =', np.round(y, 2))

# The inverse Fourier transform.
x = inverseDFT(y)
print('x =', np.round(x, 2))
x = [7 4 7 5 0 7 0 2] y = [ 32.00+0.j 2.76-7.j 0.00-4.j 11.24+7.j -4.00-0.j 11.24-7.j -0.00+4.j 2.76+7.j] x = [ 7.-0.j 4.-0.j 7.-0.j 5.+0.j -0.+0.j 7.+0.j 0.+0.j 2.+0.j]
As you already might have noticed, this DFT-algorithm is quite inefficient. There are many subcalculations that are performed more than once, and as a consequence the complexity of this algorithm is $\mathcal O(n^2)$.
The FFT algorithms exploits symmetries and that many operations are similar. In this notebook we are going to discuss the Cooley–Tukey algorithm [2].
Assume that $N$ is composite. This means that $N=n_1\cdot n_2$, where $N$, $n_1$ and $n_2$ are integers. Rewrite the two indices as $$k=n_2k_1+k_2,$$ $$j = n_1j_2 + j_1,$$ where $k_{1,2} = 0,1,...,n_{1,2}-1$ and $j_{1,2} = 0,1,...,n_{1,2}-1$. If we insert these new indices into the DFT, some cross terms vanish, and the final result is $$y_{n_2k_1+k_2}=\sum_{j_1=0}^{n_1-1}\sum_{j_2=0}^{n_2-1}x_{n_1j_2+j_1}\exp\left[\frac{-2\pi i}{n_1n_2}(n_1j_2+j_1)(n_2k_1+k_2)\right]$$ $$=\sum_{j_1=0}^{n_1-1}\exp\left[-\frac{2\pi i}{n}j_1k_2\right]\left(\sum_{j_2=0}^{n_2-1}x_{n_1j_2+j_1}\exp\left[-\frac{2\pi i}{n_2}j_2k_2\right]\right)\exp\left[-\frac{2\pi i}{n_1}j_1k_1\right].$$ In this equation each inner sum is a DFT of size $n_2$ and each outer sum is a DFT of size $n_1$. This yields a recursive formula for computing the DFT, which is explained in more detail in [1] and [4]. For simplicity, let us use the radix-2 algorithm. The complexity of the FFT algorithm is $\mathcal O(n\log n)$, which makes it almost linear for large data sets!
def CooleyTukeyRadix2FFT(x):
    """ Calculates the one dimensional discrete Fourier transform of a vector
    using the radix-2 Cooley-Tukey FFT algorithm. The vector that is being
    transformed must have a power of 2 number of elements.
    :x: double arr. The vector that is being transformed.
    :returns: double arr. The Fourier transform of x.
    """
    # Check if n is a power of 2.
    if (len(x) & (len(x) - 1)):
        raise Exception("The number of elements in x has to be a power of 2!")

    # Recursive formula for calculating the FFT.
    def foo(x):
        n = len(x)
        if n == 1:
            y = x
        else:
            y2 = foo(x[0:n:2])
            y1 = foo(x[1:n + 1:2])
            d = np.exp(-2j*np.pi/n)**np.arange(0, n/2)
            y = np.append(y2 + d*y1, y2 - d*y1)
        return y

    return foo(x)


def inverseCooleyTukeyRadix2FFT(y):
    """ Calculates the one-dimensional inverse discrete Fourier transform of a
    vector using the radix-2 Cooley-Tukey FFT algorithm. The vector that is
    being transformed must have a power of 2 number of elements.
    Parameters:
        y: double arr. The vector that is being transformed.
    Returns:
        x: double arr. The inverse Fourier transform of y.
    """
    # Check if n is a power of 2.
    if (len(y) & (len(y) - 1)):
        raise Exception("The number of elements in y has to be a power of 2!")

    # Recursive formula for calculating the inverse FFT.
    def foo(y):
        n = len(y)
        if n == 1:
            x = y
        else:
            x2 = foo(y[0:n:2])
            x1 = foo(y[1:n + 1:2])
            d = np.exp(2j*np.pi/n)**np.arange(0, n/2)
            x = np.append(x2 + d*x1, x2 - d*x1)
        return x

    return foo(y)/len(y)
Let us try with a small example where we simply transform and inverse transform an arbitrary vector as before.
# Defining the array that is being transformed.
x = rnd.randint(10, size=8)
print('x =', x)

# The Fourier transform.
y = CooleyTukeyRadix2FFT(x)
print('y =', np.round(y, 2))

# The inverse Fourier transform.
x = inverseCooleyTukeyRadix2FFT(y)
print('x =', np.round(x, 2))
x = [4 6 9 9 4 1 9 2] y = [ 44.00+0.j -1.41-8.49j -10.00+4.j 1.41-8.49j 8.00+0.j 1.41+8.49j -10.00-4.j -1.41+8.49j] x = [ 4.+0.j 6.+0.j 9.+0.j 9.-0.j 4.+0.j 1.-0.j 9.+0.j 2.+0.j]
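The same butterfly structure can also be written as a dependency-free sketch in the plain standard library, which is handy for sanity checks against known transforms (the DFT of a unit impulse is all ones; the DFT of a constant vector is concentrated in the zero-frequency bin):

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])  # DFT of the even-indexed samples
    odd = fft_radix2(x[1::2])   # DFT of the odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

This mirrors the recursive structure used above, just without numpy's vectorized twiddle factors.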
To demonstrate the superiority of the FFT we calculate the Fourier transform of a lot bigger data set. Let us also compare with the fft function from scipy.fftpack.
x = rnd.rand(8192)

# Time the loop time for DFT, CooleyTukeyRadix2FFT and scipy.fftpack.fft.
%timeit y = DFT(x)
%timeit y = CooleyTukeyRadix2FFT(x)
%timeit y = fft.fft(x)
1 loop, best of 3: 17.3 s per loop 10 loops, best of 3: 94.6 ms per loop 10000 loops, best of 3: 82.7 µs per loop
The DFT maps a finite equally spaced sample sequence from its original domain to its frequency domain. In other words, a discrete time data set are transformed into a discrete frequency data set.
To illustrate this, we need to figure out what the DFT formula physically means. We start by rewriting it as $$x_k=\sum_{j=0}^{n-1}y_j\exp\left(2\pi i\frac{j}{n\Delta t}k\Delta t\right).$$ What the expression tells us is simply that $\vec x$ is a superposition of exponential functions with different frequencies $f_j = \frac{j}{n\Delta t}$ and amplitudes $y_j$. Therefore, we can view the magnitude of the amplitudes $|y_j|^2$ as a measure of the "weight of the frequency $f_j$" in $\vec x$!
Let $\vec j = (j_1,j_2,...,j_d)$ and $\vec k = (k_1,k_2,...,k_d)$ be $d$-dimensional vectors of indices from $\vec 0$ to $\vec n-1 = (n_1-1,n_2-1,...,n_d-1)$. Then, the $d$-dimensional DFT is given by $$y_\vec{k}=\sum_{\vec j=\vec 0}^{\vec n-1}x_\vec{j}\exp\left[-2\pi i\,\vec k\cdot\vec \xi\right],$$ where $\vec \xi$ is the elementwise division $(j_1/n_1,...,j_d/n_d)$ [4]. For example, the two dimensional DFT is given by $$y_{k_1,k_2}=\sum_{j_1=0}^{n_1-1}\sum_{j_2=0}^{n_2-1}x_{j_1,j_2}\exp\left[-2\pi i\left(\frac{ k_1j_1}{n_1}+\frac{k_2j_2}{n_2}\right)\right].$$
References:
[1] T. Sauer: Numerical Analysis, second edition, Pearson 2014
[2] James W. Cooley and John W. Tukey: An Algorithm for the Machine Calculation of Complex Fourier Series, Math. Comp. 19 (1965), p. 297-301
[3] Wikipedia:, 03.28.2016 (acquired: April 2016)
[4] Wikipedia:, 04.28.2016 (acquired: April 2016)
Sample query:

for i in range(len(foo)):
    print("f: ", foo[i], "; b: ", bar[i])
But that seems somewhat unpythonic to me. Is there a better way to do it?
How to iterate through two lists in parallel?

Answer #1:

Python 3: use zip():

for f, b in zip(foo, bar):
    print(f, b)

Python 2: use itertools.izip, which returns an iterator instead of building the whole list:

from itertools import izip
for f, b in izip(foo, bar):
    print(f, b)
Answer #2:
You want the
zip function.
for (f, b) in zip(foo, bar):
    print "f: ", f, "; b: ", b
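One caveat worth adding to this answer: zip() stops at the end of the shorter list. If the two lists can differ in length and the tail matters, itertools.zip_longest pads the shorter one instead (the fillvalue argument is optional and defaults to None):

```python
from itertools import zip_longest

foo = [1, 2, 3]
bar = ['a', 'b']

truncated = list(zip(foo, bar))                       # stops after bar ends
padded = list(zip_longest(foo, bar, fillvalue=None))  # pads bar with None

print(truncated)  # [(1, 'a'), (2, 'b')]
print(padded)     # [(1, 'a'), (2, 'b'), (3, None)]
```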
Answer #3:
I have compared the iteration performance of two identical lists when using Python 3.6's zip() function, Python's enumerate() function, a manual counter (see the count() function), an index-list, and a special scenario where the elements of one of the two lists (either foo or bar) may be used to index the other list. Their performances for printing and for creating a new list, respectively, were investigated using the timeit() function with 1000 repetitions. One of the Python scripts that I created to perform these investigations is given below. The sizes of the foo and bar lists ranged from 10 to 1,000,000 elements.
Results:
- For printing purposes: The performances of all the considered approaches were observed to be approximately similar to the zip() function, after factoring an accuracy tolerance of +/-5%. An exception occurred when the list size was smaller than 100 elements. In such a scenario, the index-list method was slightly slower than the zip() function while the enumerate() function was ~9% faster. The other methods yielded similar performance to the zip() function.
- For creating lists: Two types of list creation approaches were explored: using (a) the list.append() method and (b) list comprehension. After factoring an accuracy tolerance of +/-5%, for both of these approaches, the zip() function was found to perform faster than the enumerate() function, than using a list-index, and than using a manual counter. The performance gain by the zip() function in these comparisons can be 5% to 60%. Interestingly, using the element of foo to index bar can yield equivalent or faster performances (5% to 20%) than the zip() function.
Making sense of these results:
A programmer has to determine the amount of compute-time per operation that is meaningful or of significance.

For example, for printing purposes, if this time criterion is 1 second, i.e. 10**0 sec, then looking at the y-axis of the graph on the left at 1 sec and projecting it horizontally until it reaches the monomial curves, we see that list sizes of more than 144 elements will incur significant compute cost and significance to the programmer. That is, any performance gained by the approaches mentioned in this investigation for smaller list sizes will be insignificant to the programmer. The programmer will conclude that the performance of the zip() function to iterate print statements is similar to the other approaches.
Conclusion
Notable performance can be gained from using the zip() function to iterate through two lists in parallel during list creation. When iterating through two lists in parallel to print out their elements, the zip() function will yield similar performance as the enumerate() function, as using a manual counter variable, as using an index-list, and as the special scenario where the elements of one of the two lists (either foo or bar) may be used to index the other list.
The Python3.6 Script that was used to investigate list creation.
import timeit
import matplotlib.pyplot as plt
import numpy as np

def test_zip( foo, bar ):
    store = []
    for f, b in zip(foo, bar):
        #print(f, b)
        store.append( (f, b) )

def test_enumerate( foo, bar ):
    store = []
    for n, f in enumerate( foo ):
        #print(f, bar[n])
        store.append( (f, bar[n]) )

def test_count( foo, bar ):
    store = []
    count = 0
    for f in foo:
        #print(f, bar[count])
        store.append( (f, bar[count]) )
        count += 1

def test_indices( foo, bar, indices ):
    store = []
    for i in indices:
        #print(foo[i], bar[i])
        store.append( (foo[i], bar[i]) )

def test_existing_list_indices( foo, bar ):
    store = []
    for f in foo:
        #print(f, bar[f])
        store.append( (f, bar[f]) )

list_sizes = [ 10, 100, 1000, 10000, 100000, 1000000 ]

tz = []
te = []
tc = []
ti = []
tii = []
tcz = []
tce = []
tci = []
tcii = []

for a in list_sizes:
    foo = [ i for i in range(a) ]
    bar = [ i for i in range(a) ]
    indices = [ i for i in range(a) ]
    reps = 1000
    tz.append( timeit.timeit( 'test_zip( foo, bar )',
        'from __main__ import test_zip, foo, bar', number=reps ) )
    te.append( timeit.timeit( 'test_enumerate( foo, bar )',
        'from __main__ import test_enumerate, foo, bar', number=reps ) )
    tc.append( timeit.timeit( 'test_count( foo, bar )',
        'from __main__ import test_count, foo, bar', number=reps ) )
    ti.append( timeit.timeit( 'test_indices( foo, bar, indices )',
        'from __main__ import test_indices, foo, bar, indices', number=reps ) )
    tii.append( timeit.timeit( 'test_existing_list_indices( foo, bar )',
        'from __main__ import test_existing_list_indices, foo, bar', number=reps ) )
    tcz.append( timeit.timeit( '[(f, b) for f, b in zip(foo, bar)]',
        'from __main__ import foo, bar', number=reps ) )
    tce.append( timeit.timeit( '[(f, bar[n]) for n, f in enumerate( foo )]',
        'from __main__ import foo, bar', number=reps ) )
    tci.append( timeit.timeit( '[(foo[i], bar[i]) for i in indices ]',
        'from __main__ import foo, bar, indices', number=reps ) )
    tcii.append( timeit.timeit( '[(f, bar[f]) for f in foo ]',
        'from __main__ import foo, bar', number=reps ) )

print( f'te = {te}' )
print( f'ti = {ti}' )
print( f'tii = {tii}' )
print( f'tc = {tc}' )
print( f'tz = {tz}' )
print( f'tce = {tce}' )
print( f'tci = {tci}' )
print( f'tcii = {tcii}' )
print( f'tcz = {tcz}' )

fig, ax = plt.subplots( 2, 2 )

ax[0,0].plot( list_sizes, te, label='enumerate()', marker='.' )
ax[0,0].plot( list_sizes, ti, label='index-list', marker='.' )
ax[0,0].plot( list_sizes, tii, label='element of foo', marker='.' )
ax[0,0].plot( list_sizes, tc, label='count()', marker='.' )
ax[0,0].plot( list_sizes, tz, label='zip()', marker='.' )
ax[0,0].set_xscale('log')
ax[0,0].set_yscale('log')
ax[0,0].set_xlabel('List Size')
ax[0,0].set_ylabel('Time (s)')
ax[0,0].legend()
ax[0,0].grid( b=True, which='major', axis='both' )
ax[0,0].grid( b=True, which='minor', axis='both' )

ax[0,1].plot( list_sizes, np.array(te)/np.array(tz), label='enumerate()', marker='.' )
ax[0,1].plot( list_sizes, np.array(ti)/np.array(tz), label='index-list', marker='.' )
ax[0,1].plot( list_sizes, np.array(tii)/np.array(tz), label='element of foo', marker='.' )
ax[0,1].plot( list_sizes, np.array(tc)/np.array(tz), label='count()', marker='.' )
ax[0,1].set_xscale('log')
ax[0,1].set_xlabel('List Size')
ax[0,1].set_ylabel('Performances ( vs zip() function )')
ax[0,1].legend()
ax[0,1].grid( b=True, which='major', axis='both' )
ax[0,1].grid( b=True, which='minor', axis='both' )

ax[1,0].plot( list_sizes, tce, label='list comprehension using enumerate()', marker='.' )
ax[1,0].plot( list_sizes, tci, label='list comprehension using index-list()', marker='.' )
ax[1,0].plot( list_sizes, tcii, label='list comprehension using element of foo', marker='.' )
ax[1,0].plot( list_sizes, tcz, label='list comprehension using zip()', marker='.' )
ax[1,0].set_xscale('log')
ax[1,0].set_yscale('log')
ax[1,0].set_xlabel('List Size')
ax[1,0].set_ylabel('Time (s)')
ax[1,0].legend()
ax[1,0].grid( b=True, which='major', axis='both' )
ax[1,0].grid( b=True, which='minor', axis='both' )

ax[1,1].plot( list_sizes, np.array(tce)/np.array(tcz), label='enumerate()', marker='.' )
ax[1,1].plot( list_sizes, np.array(tci)/np.array(tcz), label='index-list', marker='.' )
ax[1,1].plot( list_sizes, np.array(tcii)/np.array(tcz), label='element of foo', marker='.' )
ax[1,1].set_xscale('log')
ax[1,1].set_xlabel('List Size')
ax[1,1].set_ylabel('Performances ( vs zip() function )')
ax[1,1].legend()
ax[1,1].grid( b=True, which='major', axis='both' )
ax[1,1].grid( b=True, which='minor', axis='both' )

plt.show()
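As a quick, self-contained sanity check of the core comparison above, here is a trimmed-down sketch (small lists and a low repetition count so it finishes fast; absolute timings will vary by machine):

```python
import timeit

foo = list(range(1000))
bar = list(range(1000))

# time the two most common idioms for pairing up two lists
t_zip = timeit.timeit(lambda: [(f, b) for f, b in zip(foo, bar)], number=100)
t_enum = timeit.timeit(lambda: [(f, bar[n]) for n, f in enumerate(foo)], number=100)

print(f'zip: {t_zip:.4f}s, enumerate: {t_enum:.4f}s')
```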
Answer #4:
You should use the ‘zip’ function. Here is an example of how your own zip function could look:
def custom_zip(seq1, seq2):
    it1 = iter(seq1)
    it2 = iter(seq2)
    while True:
        try:
            yield next(it1), next(it2)
        except StopIteration:
            # on Python 3.7+ (PEP 479), letting StopIteration escape a
            # generator raises RuntimeError, so return explicitly
            return
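A quick usage sketch of such a generator (self-contained; the explicit StopIteration catch makes it safe on Python 3.7+, where PEP 479 turns an escaping StopIteration into a RuntimeError):

```python
def custom_zip(seq1, seq2):
    # pair elements until the shorter sequence runs out
    it1 = iter(seq1)
    it2 = iter(seq2)
    while True:
        try:
            yield next(it1), next(it2)
        except StopIteration:
            return

pairs = list(custom_zip([1, 2, 3], "ab"))
print(pairs)  # [(1, 'a'), (2, 'b')]
```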
Iterating through two lists in parallel in Python - Answer #5:
5a.
You can bundle the nth elements into a tuple or list using comprehension, then pass them out with a generator function.
def iterate_multi(*lists):
    for i in range(min(map(len, lists))):
        yield tuple(l[i] for l in lists)

for l1, l2, l3 in iterate_multi([1,2,3], [4,5,6], [7,8,9]):
    print(str(l1) + "," + str(l2) + "," + str(l3))
5b.
Why can't we just use the index to iterate?
foo = ['a', 'b', 'c']
bar = [10, 20, 30]

for indx, itm in enumerate(foo):
    print(foo[indx], bar[indx])
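One related option not shown in the answers above: if the two lists can differ in length, itertools.zip_longest from the standard library pads the shorter one instead of truncating (the fillvalue choice here is just an example):

```python
from itertools import zip_longest

foo = ['a', 'b', 'c']
bar = [10, 20]

# zip() would stop after two pairs; zip_longest pads the shorter list
pairs = list(zip_longest(foo, bar, fillvalue=None))
print(pairs)  # [('a', 10), ('b', 20), ('c', None)]
```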
Hope you learned something from this post.
Follow Programming Articles for more!
https://programming-articles.com/how-to-iterate-through-two-lists-in-parallel-in-python-answered/
Taking Advantage of SOAP Extensibility
Let's take a look at how SkatesTown can use SOAP extensibility to its benefit. It turns out that SkatesTown's partners are demanding some type of proof that certain items are in SkatesTown's inventory. In particular, partners would like to have an e-mail record of any inventory checks they have performed.
Al Rosen got the idea to use SOAP extensibility in a way that allows the existing inventory check service implementation to be reused with no changes. SOAP inventory check requests will include a header whose element name is EMail belonging to the namespace. The value of the header will be a simple string containing the e-mail address to which the inventory check confirmation should be sent.
Service Requestor View
Service requestors will have to modify their clients to build a custom SOAP envelope that includes the EMail header. Listing 3.5 shows the necessary changes. The e-mail to send confirmations to is provided in the constructor.
Listing 3.5 Updated Inventory Check Client
package ch3.ex3;

import org.apache.axis.client.ServiceClient;
import org.apache.axis.message.SOAPEnvelope;
import org.apache.axis.message.SOAPHeader;
import org.apache.axis.message.RPCElement;
import org.apache.axis.message.RPCParam;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

/*
 * Inventory check web service client
 */
public class InventoryCheckClient {
    /**
     * Service URL
     */
    String url;

    /**
     * Email address to send confirmations to
     */
    String email;

    /**
     * Point a client at a given service URL
     */
    public InventoryCheckClient(String url, String email) {
        this.url = url;
        this.email = email;
    }

    /**
     * Invoke the inventory check web service
     */
    public boolean doCheck(String sku, int quantity) throws Exception {
        // Build the email header DOM element
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document doc = builder.newDocument();
        Element emailElem = doc.createElementNS(
            "", "EMail");
        emailElem.appendChild(doc.createTextNode(email));

        // Build the RPC request SOAP message
        SOAPEnvelope reqEnv = new SOAPEnvelope();
        reqEnv.addHeader(new SOAPHeader(emailElem));
        Object[] params = new Object[]{ sku, new Integer(quantity), };
        reqEnv.addBodyElement(new RPCElement("", "doCheck", params));

        // Invoke the inventory check web service
        ServiceClient call = new ServiceClient(url);
        SOAPEnvelope respEnv = call.invoke(reqEnv);

        // Retrieve the response
        RPCElement respRPC = (RPCElement)respEnv.getFirstBody();
        RPCParam result = (RPCParam)respRPC.getParams().get(0);
        return ((Boolean)result.getValue()).booleanValue();
    }
}
To set a header in Axis, you first need to build the DOM representation for the header. The code in the beginning of doCheck() does this. Then you need to manually construct the SOAP message that will be sent. This involves starting with a new SOAPEnvelope object, adding a SOAPHeader with the DOM element constructed earlier, and, finally, adding an RPCElement as the body of the message. At this point, you can use ServiceClient.invoke() to send the message.
When the call is made with a custom-built SOAP envelope, the return value of invoke() is also a SOAPEnvelope object. You need to pull the relevant data out of that envelope by getting the body of the response, which will be an RPCElement. The result of the operation will be the first RPCParam inside the RPC response. Knowing that doCheck() returns a boolean, you can get the value of the parameter and safely cast it to Boolean.
As you can see, the code is not trivial, but Axis does provide a number of convenience objects that make working with custom-built SOAP messages straightforward. Figure 3.5 shows a UML diagram with some of the key Axis objects related to SOAP messages.
Figure 3.5 Axis SOAP message objects.
Service Provider View
The situation on the side of the Axis-based service provider is a little more complicated because we can no longer use a simple JWS file for the service. JWS files are best used for simple and straightforward service implementations. Currently, it is not possible to indicate from a JWS file that a certain header (in this case the e-mail header) should be processed. Al Rosen implements three changes to enable this more sophisticated type of service:
He moves the service implementation from the JWS file to a simple Java class.
He writes a handler for the EMail header.
He extends the Axis service deployment descriptor with information about the service implementation and the header handler.
Moving the service implementation is as simple as saving InventoryCheck.jws as InventoryCheck.java in /WEB-INF/classes/com/skatestown/services. No further changes to the service implementation are necessary.
Building a handler for the EMail header is relatively simple, as Listing 3.6 shows. When the handler is invoked by Axis, it needs to find the SOAP message and lookup the EMail header using its namespace and name. If the header is present in the request message, the handler sends a confirmation e-mail of the inventory check. The implementation is complex because to produce a meaningful e-mail confirmation, the handler needs to see both the request data (SKU and quantity) and the result of the inventory check. The basic process involves the following steps:
Get the request or the response message using getRequestMessage() or getResponseMessage() on the Axis MessageContext object.
Get the SOAP envelope by calling getAsSOAPEnvelope().
Retrieve the first body of the envelope and cast it to an RPCElement because the body represents either an RPC request or an RPC response.
Get the parameters of the RPC element using getParams().
Extract parameters by their position and cast them to their appropriate type. As seen earlier in Listing 3.5, the response of an RPC is the first parameter in the response message body.
Listing 3.6 E-mail Header Handler
package com.skatestown.services;

import java.util.Vector;
import org.apache.axis.*;
import org.apache.axis.message.*;
import org.apache.axis.handlers.BasicHandler;
import org.apache.axis.encoding.SOAPTypeMappingRegistry;
import bws.BookUtil;
import com.skatestown.backend.EmailConfirmation;

/**
 * EMail header handler
 */
public class EMailHandler extends BasicHandler {
    /**
     * Utility method to retrieve RPC parameters
     * from a SOAP message.
     */
    private Object getParam(Vector params, int index) {
        return ((RPCParam)params.get(index)).getValue();
    }

    /**
     * Looks for the EMail header and sends an email
     * confirmation message based on the inventory check
     * request and the result of the inventory check
     */
    public void invoke(MessageContext msgContext) throws AxisFault {
        try {
            // Attempt to retrieve EMail header
            Message reqMsg = msgContext.getRequestMessage();
            SOAPEnvelope reqEnv = reqMsg.getAsSOAPEnvelope();
            SOAPHeader header = reqEnv.getHeaderByName(
                "", "EMail");
            if (header != null) {
                // Mark the header as having been processed
                header.setProcessed(true);

                // Get email address in header
                String email = (String)header.getValueAsType(
                    SOAPTypeMappingRegistry.XSD_STRING);

                // Retrieve request parameters: SKU & quantity
                RPCElement reqRPC = (RPCElement)reqEnv.getFirstBody();
                Vector params = reqRPC.getParams();
                String sku = (String)getParam(params, 0);
                Integer quantity = (Integer)getParam(params, 1);

                // Retrieve inventory check result
                Message respMsg = msgContext.getResponseMessage();
                SOAPEnvelope respEnv = respMsg.getAsSOAPEnvelope();
                RPCElement respRPC = (RPCElement)respEnv.getFirstBody();
                Boolean result = (Boolean)getParam(
                    respRPC.getParams(), 0);

                // Send confirmation email
                EmailConfirmation ec = new EmailConfirmation(
                    BookUtil.getResourcePath(msgContext,
                        "/resources/email.log"));
                ec.send(email, sku, quantity.intValue(),
                    result.booleanValue());
            }
        } catch (Exception e) {
            throw new AxisFault(e);
        }
    }

    /**
     * Required method of handlers. No-op in this case
     */
    public void undo(MessageContext msgContext) {
    }
}
It's simple code, but it does take a few lines because several layers need to be unwrapped to get to the RPC parameters. When all data has been retrieved, the handler calls the e-mail confirmation backend, which, in this example, logs e-mails "sent" to /resources/email.log.
Finally, adding deployment information about the new header handler and the inventory check service involves making a small change to the Axis Web services deployment descriptor. The book example deployment descriptor is in /resources/deploy.xml. Working with Axis deployment descriptors will be described in detail in Chapter 4.
Listing 3.7 shows the five lines of XML that need to be added. First, the e-mail handler is registered by associating a handler name with its Java class name. Following that is the description of the inventory check service. The service options identify the Java class name for the service and the method that implements the service functionality. The service element has two attributes. Pivot is an Axis term that specifies the type of service. In this case, the value is RPCDispatcher, which implies that InventoryCheck is an RPC service. The output attribute specifies the name of a handler that will be called after the service is invoked. Because the book examples don't rely on an e-mail server being present, instead of sending confirmation this class writes messages to a log file in /resources/email.log.
Listing 3.7 Deployment Descriptor for Inventory Check Service
<!-- Chapter 3 example 3 services -->
<handler name="Email" class="com.skatestown.services.EMailHandler"/>
<service name="InventoryCheck" pivot="RPCDispatcher" response="Email">
  <option name="className" value="com.skatestown.services.InventoryCheck"/>
  <option name="methodName" value="doCheck"/>
</service>
Putting the Service to the Test
With all these changes in place, we are ready to test the improved inventory check service. There is a simple JSP test harness in ch3/ex3/index.jsp that is modeled after the JSP test harness we used for the JWS-based inventory check service (see Figure 3.6).
Figure 3.6 Putting the enhanced inventory check Web service to the test.
SOAP on the Wire
With the help of TCPMon, we can see what SOAP messages are passing between the client and the Axis engine. We are only interested in seeing the request message because the response message will be identical to the one before the EMail header was added.
Here is the SOAP request message with the EMail header present:
POST /bws/services/InventoryCheck HTTP/1.0
Content-Length: 482
Host: localhost
Content-Type: text/xml; charset=utf-8
SOAPAction: "/doCheck"

<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope SOAP-ENV:
  <SOAP-ENV:Header>
    <e:EMail xmlns:
      [email protected]
    </e:EMail>
  </SOAP-ENV:Header>
  <SOAP-ENV:Body>
    <ns1:doCheck xmlns:
      <arg0 xsi:type="xsd:string">947-TI</arg0>
      <arg1 xsi:type="xsd:int">1</arg1>
    </ns1:doCheck>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
There are no surprises in the SOAP message. However, a couple of things have changed in the HTTP message. First, the target URL is /bws/services/InventoryCheck. This is a combination of two parts: the URL of the Axis servlet that listens for SOAP requests over HTTP (/bws/services) and the name of the service we want to invoke (InventoryCheck). Also, the SOAPAction header, which was previously empty, now contains the name of the method we want to invoke. The service name on the URL and the method name in SOAPAction are both hints to Axis about the service we want to invoke.
That's all there is to taking advantage of SOAP custom headers. The key message is one of simple yet flexible extensibility. Remember, the inventory check service implementation did not change at all!
https://www.informit.com/articles/article.aspx?p=26666&seqNum=7
Editor’s Note: This blog post was updated 7 August 2022 to include sections on why you should use mixins in Vue and the drawbacks of doing so.
If you are a Vue lover (like me) and are looking for a way to extend your Vue application, you’ve come to the right place. Vue mixins and directives are a powerful combination and a great way to add more reusable functions across parts of your application.
If you are from an object-oriented programming background, you’ll see Vue mixins as an imitation of parent classes. You will also see that directives are similar to helper functions.
If you do not have an OOP background, then think of mixins as a utility that you design to be shared by multiple people. If you are thinking about an office, it would be the photocopier. If you are thinking about a mall, it would be mall security. Basically, mixins are resources that multiple parts of your application share.
- Prerequisites
- What are Vue mixins?
- Why should I use mixins in Vue?
- Using mixins in Vue
- Global vs. local mixins
- Directives in Vue
- Filters in Vue
- Bringing it together
- Hardships with mixins
Prerequisites
Below are a few prerequisites that you’ll need before moving forward in this article.
- Knowledge of JavaScript
- You have, at the very least, built a Vue application. One with more than five components is a plus
- If you have shared the photocopier in the office, you can take a seat in front here
What are Vue mixins?
The Vue documentation has a really simple and straightforward explanation for what mixins are and how they work. According to the docs, mixins are a flexible way to distribute reusable functionalities for Vue components. A mixin object can contain any component options. When a component uses a mixin, all options in the mixin will be “mixed” into the component’s own options.
In simpler terms, it means that we can create a component with its data, methods, and life-cycle components, as well as have other components extend it. Now, this is different from using components inside other components where you can have a custom component with a name like
<vue-custom></vue-custom> inside of your template.
For example, we can build a normal Vue component to hold basic configurations for our app, such as:
- App name
- Greeter method
- Company name for copyright at the footer
Why should I use mixins in Vue?
Mixins allow us to reuse functionalities and logic within our Vue components. This means we can use the same logic or functionality in multiple components without having to manually rewrite the logic in each one.
This provides us a level of flexibility and allows us to reduce code duplication, making sure we abide by the popular DRY (Don’t Repeat Yourself) principle.
One other thing that makes mixins important is that they don’t have any effect outside the defined scope.
Using mixins in Vue
Let’s create a simple mixin:
<template>
  <div>
    <div>{{title}}</div>
    <div>{{copyright}}</div>
  </div>
</template>
<script>
export default {
  name: "HelloWorld",
  data() {
    return {
      title: 'Mixins are cool',
      copyright: 'All rights reserved. Product of super awesome people'
    };
  },
  created: function() {
    this.greetings();
  },
  methods: {
    greetings: function() {
      console.log('Howdy my good fellow!');
    }
  }
};
</script>
Interestingly, we can refactor the logic in this component with mixins. This comes in handy in cases where you need to repeat this exact logic in multiple components.
Let’s create a simple mixin in a
myMixin.js file within our project:
export const myMixin = {
  data() {
    return {
      title: 'Mixins are cool',
      copyright: 'All rights reserved. Product of super awesome people'
    };
  },
  created: function() {
    this.greetings();
  },
  methods: {
    greetings: function() {
      console.log('Howdy my good fellow!');
    }
  }
};
Okay, that’s as simple as it gets for a mixin. Now, if we use this in our component, you will see the magic in it.
And to use this, we can import it and do the following in our template:
<template>
  <div>
    <div>{{title}}</div>
    <div>{{copyright}}</div>
  </div>
</template>
<script>
import { myMixin } from "./myMixin";

export default {
  name: "HelloWorld",
  mixins: [myMixin]
};
</script>
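Conceptually, "mixed into the component's own options" can be pictured as an option merge. The plain-JavaScript sketch below is only an illustration; Vue's real merge strategy is more careful (lifecycle hooks from both sources are kept and called in order, and data objects are merged recursively):

```javascript
// Naive sketch of option merging: later sources (the component) win on conflict
function applyMixins(component, mixins) {
  const merged = {};
  for (const m of [...mixins, component]) {
    for (const key of Object.keys(m)) {
      merged[key] = m[key];
    }
  }
  return merged;
}

const myMixin = { data: () => ({ title: 'Mixins are cool' }) };
const component = { name: 'HelloWorld' };
const merged = applyMixins(component, [myMixin]);

console.log(merged.name);         // "HelloWorld"
console.log(merged.data().title); // "Mixins are cool"
```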
Global vs. local mixins
One important thing to note is that there are two types of mixins – global and local.
Local mixins are what we’ve explained above with the
myMixin.js file. The mixin is defined in an individual
.js file and imported for use within individual components in our Vue project.
On the other hand, global mixins allow us to do even more. Similar to local mixins, we also have our
myMixin.js file. This time, we import it directly into our
main.js file, making it globally available to all components within our project
For instance, once we’ve created our
myMixin.js file, we go to our
main.js file and import it as shown below:
import { createApp } from 'vue'
import App from './App.vue'
import { myMixin } from './myMixin'

const app = createApp(App);
app.mixin(myMixin);
app.mount('#app')
Now, any component within our Vue component can access the functionality in our mixin file without needing to import individually.
Directives in Vue
Directives are methods like
v-for that we can create to modify elements on our template. You know how
v-if hides a component if a condition is not met? How about if we underline a long sentence with a directive?
We can even change the text a little as a way to highlight it. We can have global directives that we register so that all of the components in our Vue application can use them. We also have local directives that are specific to that particular component. Awesome, right?
Let’s create a global directive in our
main.js now.
Register a global custom directive called
v-highlight:
// It's app.directive in Vue 3
Vue.directive('highlight', {
  // When the bound element is inserted into the DOM...
  // Use "mounted" instead of "inserted" in Vue 3
  inserted: function(el, binding) {
    // set the colour we added to the bind
    el.style.backgroundColor = binding.value ? binding.value : 'blue';
    el.style.fontStyle = 'italic';
    el.style.fontSize = '24px';
  }
});
Here, we’re changing the style of the element attached to this directive. Plus, if there’s a color attached as a
value to the directive, we set it as the background color. If not, we set the background color to
blue.
Now, if we use this directive, you should see that parts of the text have changed.
To use this, we can do the following in our template:
<template>
  <div>
    <p v-highlight>Hello There!</p>
    <p v-This is a red guy</p>
  </div>
</template>
Filters in Vue
This is another customization helper we will look at. Filters help us in many ways (you might get angry that you didn’t know about these earlier if this is your first time encountering them). We can define filters globally or locally, just like directives.
Filters can be used to apply common formatting to text or heavy filtration to an array or object. They are JavaScript functions, so we can define them to take as many arguments as possible. Also, we can chain them and use multiple filters as well. Cool, right?
Let’s define a simple filter to capitalize the first word of the body of text (this is really useful when displaying things like names supplied by your user):
Vue.filter('capitalize', function(value) {
  if (!value) return '';
  value = value.toString();
  return value.charAt(0).toUpperCase() + value.slice(1);
});
And to use this, we can do the following in our template:

<p>{{ title | capitalize }}</p>

Now, anywhere we use this filter, the first character will always be capitalized.
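Because the filter body is plain JavaScript, its logic can be sanity-checked outside Vue entirely; here is the same function extracted standalone:

```javascript
// Same logic as the Vue filter above, as a standalone function
function capitalize(value) {
  if (!value) return '';
  value = value.toString();
  return value.charAt(0).toUpperCase() + value.slice(1);
}

console.log(capitalize('mixins are cool')); // "Mixins are cool"
```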
N.B., Filters are still applicable in Vue 2 but have been deprecated in Vue 3.
Bringing it together
We are going to compose a simple Vue application using everything we’ve learned.
First, let’s define our mixin:
const myMixin = {
  data() {
    return {
      title: 'mixins are cool'
    };
  },
  created: function() {
    alert('Howdy my good fellow!');
  }
};
Then we define our directive in our
main.js:
Vue.directive('highlight', {
  inserted: function (el, binding) {
    el.style.color = binding.value ? binding.value : "blue"
    el.style.fontStyle = 'italic'
    el.style.fontSize = '24px'
  }
})
Now, let’s define our filter in our
main.js:
Vue.filter('capitalize', function (value) {
  if (!value) return ''
  value = value.toString()
  return value.charAt(0).toUpperCase() + value.slice(1)
})
Finally, the simple template to see if these things actually work:
<template>
  <div id="app">
    <p v-highlight>{{title | capitalize}}</p>
    <p v-This is a red guy</p>
    <p>{{'this is a plain small guy' | capitalize}}</p>
  </div>
</template>
<script>
export default {
  name: "HelloWorld",
  mixins: [myMixin]
};
</script>
And that’s it!
Hardships with mixins
Vue 3 now provides other means of sharing functionalities that provide a better developer experience, such as using composables.
Mixins are still as efficient while working with the Options API, but you might experience some drawbacks, including:
Naming conflicts
While working with mixins, there’s a higher chance of having conflicting names within our components. This might be a challenge, especially in cases where a new developer is inheriting a legacy codebase and they’re not exactly familiar with the existing property names within the mixin.
This can end up causing unwanted behaviors in our Vue app.
Difficult to understand and debug
Similar to the potential naming issues I’ve highlighted above, it’s somewhat stressful for a new developer to figure out mixins and how they might be affecting the components, especially if they’re dealing with global mixins.
In general, figuring out all the functionalities within the components can be difficult.
Conclusion
Everything we mentioned here comes in handy when building applications that are likely to grow in complexity. You want to define many reusable functions or format them in a way that can be reused across components, so you do not have to define the same thing over and over again.
Most importantly, you want to have a single source of truth. Dedicate one spot to make changes.
I’m excited by the thought of the cool things you can now build with these features. Please do share them with us!
https://blog.logrocket.com/how-use-mixins-custom-functions-vue/
Getting Svelte
Svelte is a JavaScript framework that has continued to intrigue me.
Svelte is a radical new approach to building user interfaces. Whereas traditional frameworks like React and Vue do the bulk of their work in the browser, Svelte shifts that work into a compile step that happens when you build your app.
An interesting prospect. If I need interactivity on a website I normally write plain old JavaScript with no framework. If the requirements are more demanding I’ll look towards React — but only if the thing is a full-on “web app”. There’s a large gap between nothing and React. I need to spend more time with Svelte but I’m starting to see it as a lighter option that fits into that gap.
As an experiment I’ve started to use both React & Svelte to build my website!
Hang on, it’s not as ghastly as it sounds…
Why React?
Personally I’ve found React to be a rather elegant solution at its core. By that I mean the way you define pure, reusable UI components. Once state and logic gets involved that’s another story. It can get ugly, fast.
Where I’ve found React to be most powerful is server-side rendering (SSR). For dbushell.com I use it as a templating language for static site generation. It’s almost perfect for that job. I find JSX quick and easy to code. A small gripe is the necessity for attributes like
className and
htmlFor (in place of
class and
for).
For a short time I was shipping the React library to the browser. I was originally doing full hydration with a router to avoid a refresh between page navigation. I dropped that in favour of a normal web experience. It wasn’t worth the cost. Now I’m just reviving individual components like my contact form.
import ContactForm from './contact-form.jsx'; ReactDOM.hydrate( <ContactForm />, document.querySelector('#contact-form') );
React is a lot of JavaScript to execute and has a noticeable impact on performance. I switched to Preact to save bytes. However, it still felt expensive for such basic interactivity. I’d usually just write some vanilla JavaScript in this scenario. I do for most client websites. For my personal site I figured it was an opportunity to try something new.
Why Svelte?
I think Svelte could fit into that gap between nothing and React. Instead of shipping React and a few of my components to the browser, I rewrote them in Svelte.
import ContactFrom from './contact.svelte'; const form = new ContactFrom({ target: document.querySelector('#contact-form') });
This cut my JavaScript bundle size in half — significantly more if I was using full fat React. My Svelte contact form replaces the React server render without attempting any hydration. I’ve been looking into Svelte hydration but it seems a little under-developed right now. A GitHub issue suggests:
[…] setting the innerHTML to ‘’ and doing client side render is faster and cleaner
Which is basically my approach.
In truth, my contact form interactivity is so simple using any framework is a bit much. The Svelte cost is a small one so I’ll keep it around for now. This experiment has given me better understanding and appreciation for what Svelte does. I’ll continue to play with it.
Svelte SSR
I was tempted to get Svelte SSR working for my entire website. It only took a couple of hours to rewrite my React components. Initial tests show a 10% faster build. Granted, both React and Svelte take under one second to build around 300 pages.
If Svelte proves to be as maintainable as React I might just drop the latter. Of course, it’s rather impractical and silly to maintain both React and Svelte components. Despite my love for JSX I have to admit that Svelte components may have the edge in readability.
Early Impressions of Svelte
For sprinkling interactive components onto a web page, Svelte’s overhead is minimal and feels more appropriate than React. The coding structure feels more natural too. I like the cleaner separation of JavaScript and HTML. JSX is perfect for templating but difficult to tame the more business logic gets involved. I have yet to dive deeper into state management in Svelte so I can’t compare further.
Svelte’s claim to compile to “framework-less vanilla JS” is a tiny bit disingenuous. For a start that paints the idea of a “framework” as being a bad thing. Secondly, Svelte actually does bundle a framework of sorts. It’s small but it exists. And that’s fine, because complexity has to go somewhere. Providing some code abstraction in a framework is a good thing. Svelte hits the sweet spot in this area. It does a lot of clever compile time stuff and makes React’s runtime framework look like a monolith.
My default is still vanilla JavaScript but if I need a little more front-end help I’ll be reaching for Svelte before I consider React. I didn’t think Svelte would work as well for static site generation but I’m close to making the switch.
Whilst I work more with Svelte I’m listening through the back catalogue of the Svelte Radio podcast. There I learnt about Tan Li Hau’s blog series on “Compile Svelte in your head” which has been a great resource for me.
https://dbushell.com/2020/09/07/getting-svelte-js/
Blocking Receiver Example
Shows how to use the synchronous API of
QSerialPort in a non-GUI thread.
Blocking Receiver shows how to create an application for a serial interface using
QSerialPort's synchronous API in a non-GUI thread.

class ReceiverThread(QThread):

    Q_OBJECT

    # public
    ReceiverThread = explicit(QObject parent = None)
    ~ReceiverThread()

    def startReceiver(portName, waitTimeout, response):

    # signals
    def request(s):
    def error(s):
    def timeout(s):

    # private
    def run():

    m_portName = QString()
    m_response = QString()
    m_waitTimeout = 0
    m_mutex = QMutex()
def startReceiver(self, portName, waitTimeout, response):
The startReceiver() function stores the serial port name, timeout and response data, and
QMutexLocker locks the mutex to protect these data. We then start the thread, unless it is already running.
wakeOne() will be discussed later.
def run(self):
    currentPortNameChanged = False
    m_mutex.lock()
The loop will continue waiting for request data:
Warning
The method waitForBytesWritten() should be used after each write() call for the blocking approach, because it processes all the I/O routines instead of Qt event-loop.
The timeout() signal is emitted if an error occurs when writing data.
After a successful write, the request() signal is emitted, containing the data received from the Sender application:
self.request.emit(request)
Next, the thread switches to reading the current parameters for the serial interface, because they can already have been updated, and run the loop from the beginning.
Running the Example
To run the example from Qt Creator, open the Welcome mode and select the example from Examples. For more information, visit Building and Running an Example.
See also
Terminal Example Blocking Sender Example
Example project @ code.qt.io
Source: https://doc-snapshots.qt.io/qtforpython-dev/overviews/qtserialport-blockingreceiver-example.html (CC-MAIN-2022-40, en, refinedweb)
Can you please take a look at the attached set of documents and
figure out why the first row in the second table disappears when
converting to PDF, and also why the values in the second row of the
same table have disappeared?
This is the code that processes the generated document prior to export:
NodeCollection runs = document.GetChildNodes(NodeType.Run, true);
Regex arabicMatchExpr = new Regex(@"\p{IsArabic}+");
foreach (Run run in runs)
{
Match arabicMatch = arabicMatchExpr.Match(run.Text);
if (arabicMatch.Success)
{
run.Font.Bidi = true;
}
else
{
Match dateMatch = dateExpr.Match(run.Text);
if (dateMatch.Success)
{
run.Font.Bidi = true;
run.Text = DateTime.Parse(run.Text).ToString("dd/MM/yyyy");
}
}
}
MemoryStream tempStream = new MemoryStream();
document.Save(tempStream, SaveFormat.AsposePdf);
Pdf pdfDoc = new Pdf();
pdfDoc.BindXML(tempStream, null);
pdfDoc.IsRightToLeft = true;
pdfDoc.Save(String.Format("{0}_{1:yyyyMMddHHmmss}.pdf", docType,
DateTime.Now), Aspose.Pdf.SaveType.OpenInAcrobat, httpResponse);
httpResponse.End();
Thanks,
Larkin
Hi Larkin,
I have tested the issue and I’m able to reproduce the same problem. I have logged it in our issue tracking system as PDFNET-7152. We will investigate this issue in detail and will keep you updated on the status of a correction. We apologize for your inconvenience.
I really need a fix for this as soon as possible. I did some debugging research and what I came up with is this:
The generated XML has a number of the text segments within the table cells enclosed in heading elements. I don’t know why this is done–perhaps this is how Word treats text runs within table cells, but in any case it is fine IF the text runs within the heading element are LTR. If the text runs are RTL (or perhaps it is the Unicode flag that causes the problem) then the cell renders as empty. As you can see in the documents I attached in my previous post, the entire first row of the second table is not missing; it has merely collapsed because all its cells are empty, since the values of every cell are Arabic characters.
To verify that this is the problem, generate the XML file from the test2.doc file and locate the following line:
الحوض
Change the element to a element, as shown below:
الحوض
Now call BindXml() using this modified XML file and you will see that in the resulting PDF the row now appears, and the value in the segment above displays correctly.
So now that I’ve done your leg work for you, can you fix the bug for me??
Hi Larkin,
Thank you very much for your help; the findings will help us resolve the issue. We'll let you know about the estimated time of the fix, as we get some information from our development team.
We appreciate your patience.
Regards,
Dear Larkin,
We appreciate your help. I have confirmed that your investigation of the issue is right. I think the bug should be fixed in about one week. We will try our best to make it sooner and send you an update once the fix is ready.
Best regards.
Source: https://forum.aspose.com/t/table-rows-cell-values-disappearing-when-converting-to-pdf/123629 (CC-MAIN-2022-40, en, refinedweb)
Optimize
There are two ways of using Oríon for optimization. One is using the command-line interface, which conveniently turns a simple script into a hyper-parameter optimization process at the level of the command line. The other is using the Python interface, which gives total control over the optimization pipeline.
- Sequential API:
- A simple method for local sequential optimisation.
- Service API:
- A simple method to get new parameters to try and report results in a distributed manner.
- Framework API:
- Total control over the hyper-parameter optimization pipeline.
Sequential API
Using the helper
orion.client.workon(),
you can optimize a function with a single line of code.
from orion.client import workon

def foo(x):
    return [dict(name='dummy_objective', type='objective', value=1)]

experiment = workon(foo, space=dict(x='uniform(-50,50)'))
The experiment object returned can be used to fetch the database of trials
and analyze the optimization process. Note that the storage for workon is
in-memory and requires no setup. This means however that
orion.client.workon()
cannot be used for parallel optimisation.
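Conceptually, workon() just loops over suggest/evaluate/observe. A plain-Python sketch of that loop (a hypothetical random search over the same uniform(-50,50) space, not Oríon's actual optimizer):

```python
import random

def foo(x):
    # Toy objective: minimize x squared.
    return x * x

def sequential_optimize(objective, low, high, max_trials=50, seed=0):
    """Naive sequential random search, standing in for workon()."""
    rng = random.Random(seed)
    best_x, best_value = None, float("inf")
    for _ in range(max_trials):
        x = rng.uniform(low, high)   # suggest a new trial
        value = objective(x)         # evaluate it
        if value < best_value:       # keep the best observation
            best_x, best_value = x, value
    return best_x, best_value

best_x, best_value = sequential_optimize(foo, -50, 50)
print(best_x, best_value)
```

Oríon replaces the naive uniform suggestion with smarter algorithms and records every trial in the experiment's storage.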
Service API
Experiments are created using the helper function
orion.client.create_experiment().
To distribute the hyper-parameter optimisation)
Source: https://orion.readthedocs.io/en/v0.1.8/user/api.html (CC-MAIN-2021-49, en, refinedweb)
public class JavaRecognizer extends LLkParser
Java 1.5 Recognizer
Run 'java Main [-showtree] directory-full-of-java-files'
[The -showtree option pops up a Swing frame that shows
the AST constructed from the parser.]
Source: http://docs.groovy-lang.org/docs/groovy-2.5.4/html/gapi/org/codehaus/groovy/antlr/java/JavaRecognizer.html (CC-MAIN-2021-49, en, refinedweb)
Multicrypt - multi-key encryption
Multicrypt is a library for multi-key reversible encryption. That is, it provides a simple and secure interface for encrypting a payload such that it can be decrypted by any one of a number of keys, and the payload can be shared with new keys by users with existing keys.
How It Works
Multicrypt uses an enveloped-data architecture whereby the payload is encrypted with a master key, and that master key is then encoded using each user's key.
This allows any user to decode the master key and, consequently, the payload, without having to know the master key directly.
The library is structured in such a way that discourages exposing the master key directly. In fact, your code should interact with the library, rather than the master key directly, which prevents exposing the master key at any point.
Getting Started
Here's a quick overview of how to use Multicrypt:
import { SharedValue } from 'multicrypt'

const value = 'value to be encrypted'
const keyOne = 'some key'
const keyTwo = 'some other key'

// Encode a new shared value using keyOne
const shared = await SharedValue.create<string>('key1', keyOne, value)

// Allow keyTwo to access the shared value:
await shared.addKey(keyOne, 'key2', keyTwo)

// Get the shared value:
const decodedValue = await shared.get(keyTwo) // => 'value to be encrypted'

// Set the shared value:
const encodedValue = await shared.set(keyTwo, 'override string')

// Remove "key1" from the shared value:
await shared.removeKey(keyTwo, 'key1')

// Serialize the shared value securely:
const serialized = shared.toJSON()
Source: https://code.garrettmills.dev/garrettmills/multicrypt/src/branch/master/README.md (CC-MAIN-2021-49, en, refinedweb)
Rename-Item
Renames an item in a PowerShell provider namespace.
Syntax
Rename-Item [-Path] <String> [-NewName] <String> [-Force] [-PassThru] [-Credential <PSCredential>] [-WhatIf] [-Confirm] [<CommonParameters>]
Rename-Item -LiteralPath <String> [-NewName] <String> [-Force] [-PassThru] [-Credential <PSCredential>] [-WhatIf] [-Confirm] [<CommonParameters>]
Description
The Rename-Item cmdlet changes the name of a specified item. This cmdlet does not affect the content of the item being renamed.
You can't use
Rename-Item to both rename and move an item. Specifically, you can't supply a path
for the value of the NewName parameter, unless the path is identical to the path specified in
the Path parameter. Otherwise, only a new name is permitted.
Rename-Item -Path "project.txt" -NewName "d:\archive\old-project.txt"

Rename-Item : can't rename because the target specified represents a path or device name.
At line:1 char:12
+ Rename-Item <<<< -path project.txt -NewName d:\archive\old-project.txt
+ CategoryInfo : InvalidArgument: (:) [Rename-Item],

PS> Move-Item -Path "project.txt" -Destination "d:\archive\old-project.txt"
This example attempts to rename the
project.txt file in the current directory to
old-project.txt
in the
D:\Archive directory. The result is the error shown in the output.
Use the
Move-Item cmdlet, instead.
Example 3: Rename a registry key
This example renames a registry key from Advertising to Marketing. When the command is complete, the key is renamed, but the registry entries in the key are unchanged.
Rename-Item -Path "HKLM:\Software\MyCompany\Advertising" -NewName "Marketing"
Example 4: Rename multiple files
This example renames all the
*.txt files in the current directory to
*.log.
Get-ChildItem *.txt Directory: C:\temp\files Mode LastWriteTime Length Name ---- ------------- ------ ---- -a---- 10/3/2019 7:47 AM 2918 Friday.TXT -a---- 10/3/2019 7:46 AM 2918 Monday.Txt -a---- 10/3/2019 7:47 AM 2918 Wednesday.txt Get-ChildItem *.txt | Rename-Item -NewName { $_.Name -replace '.txt','.log' } Get-ChildItem *.log Directory: C:\temp\files Mode LastWriteTime Length Name ---- ------------- ------ ---- -a---- 10/3/2019 7:47 AM 2918 Friday.log -a---- 10/3/2019 7:46 AM 2918 Monday.log -a---- 10/3/2019 7:47 AM 2918 Wednesday.log
The
Get-ChildItem cmdlet gets all the files in the current folder that have a
.txt file
extension then pipes them to
Rename-Item. The value of NewName is a script block that runs
before the value is submitted to the NewName parameter.
In the script block, the
$_ automatic variable represents each file object as it comes to the
command through the pipeline. The script block uses the
-replace operator to replace the file
extension of each file with
.log. Notice that matching using the
-replace operator is not case
sensitive.
Parameters
Prompts you for confirmation before running the cmdlet.
Note. For more information, see about_Providers.
Even using the Force parameter, the cmdlet can't override security restrictions.
Shows what would happen if the cmdlet runs. The cmdlet is not run.
Inputs.
Source: https://docs.microsoft.com/en-us/powershell/module/Microsoft.PowerShell.Management/rename-item?view=powershell-7 (CC-MAIN-2021-49, en, refinedweb)
Bala Tilak
Ranch Hand
150 posts, 76 threads, 2 cows, since Oct 07, 2008
Recent posts by Bala Tilak
How to map .pdf url extension in Struts2
Hi All,
I have a web application build using Struts2. For the website I am not using any extension ( .HTML or .do ) for pages. Its just mydaoman/pageName
Now I have usecase where user can download certain data as .pdf or .txt etc. I want to have a URL for something like mydomain/download/data.pdf and mydomain/download/data.txt etc
How can map .pdf in Struts2 ? I tried but its giving 404 error..
Regards,
Bala.
show more
1 month ago
Struts
Tomcat could not recognize the struts tld
I have a web application which is running fine in embedded tomcat in eclipse with below struts tld declaration in JSP
<!doctype html>
<%@ taglib prefix="s" uri="/struts-tags"%>
<html lang="en" dir="ltr">
<head>
.....
I had the struts2-core-2.5.26 jar in WEB-INF/lib and added it to the build path in Eclipse..
Now I uploaded the war to server VM, I just created the war file and copied it to tomcat WebApps root as ROOT. the pages which are not using struts tags are working fine.. but when I access the jsp pages which are using struts tags getting tld not find error..
show more
6 months ago
Struts
Hibernate running multiple queries on load
Thank you. So my question is for eager fetching...Hibernate running multiple queries...Like three queries for each game loading...One for game and one for days and another for bonus info. It's taking time. Is there a way we can load all at once (including sub entities) using joins ??
show more
6 months ago
Object Relational Mapping
Hibernate running multiple queries on load
Hi Tim Holloway,
Thanks for the reply. Yes its a big legacy code.. but I think EntityManager is just a wrapper around session if I am not wrong.
And I prefer XML to keep my java classes real POJOs without any annotations.
So using XML , is there any possibility that I can fetch the one to many in one query, I can't go with lazy load as I need these values to be in memory...
show more
6 months ago
Object Relational Mapping
Hibernate running multiple queries on load
Hi All, hope you are doing good.
I have an application where a Game is Class which contains some properties along with a set called daysPlayed; and a list called bonusInfo.. which are from different tables..
Here is the piece of code..
public class Game {
    private int id;
    private String name;
    private Set<Integer> daysOff;
    private List<BonusNumbers> bonusNum;
    // ..... remaining properties and getters and setters as usual ...
}
and here is the snippet of XML for mapping..
<property name="name" column="NAME" />
<set name="daysOff" table="DAYS_OFF" lazy="false">
    <key column="GAME_ID"></key>
    <element type="integer" column="DAY"></element>
</set>
<list name="bonusNum" table="" lazy="false">
    <key column="GAME_ID"></key>
    <index column="ORDER_ID"></index>
    <one-to-many class="BonusNumbers" />
</list>
</class>
<class name="BonusNumbers" table="BONUS_INFO">
    <id name="id" column="id">
        <generator class="native" />
    </id>
    <property name="bonus" column="BONUS" />
    <property name="bonusName" column="BONUS_NAME" />
    <property name="numOfBalls" column="NUM_OF_BALLS" />
</class>
When I load all the games using...
try {
    TypedQuery<Game> query = session.createQuery("from Game", Game.class);
    games = query.getResultList();
    session.getTransaction().commit();
} catch (Exception e) {
    // ...
}
For bonus and days off Hibernate running multiple queries like two queries per Game .. Can't I load all the data at once.. like using Joins?
Regards,
Bala.
show more
6 months ago
Object Relational Mapping
Upload a file to Java Web App running in Tomcat behind Apache
Yes. I have setter and getter for FileUpload. I restarted the Tomcat and Apache and it's started working now. Not sure how it was fixed.
Thanks a lot for your time and thought Tim.
show more
7 months ago
Tomcat
Upload a file to Java Web App running in Tomcat behind Apache
Sorry for the confusion I created.
The file upload working fine with the Tomcat server. I am getting the fileUpload object if I tested it on my local system right localhost:8080/mywebapp/uploadfile
or Even its successful If I directly access the Tomcat via 8080 on server like
.
But Its not working if I access the application via Apache 4 , that is is Apache 4 --> Tomcat 9 --> Struts. Means when I access the application like
the fileUpload is null.
public class UploadFileAction {
    private File fileUpload;

    public String execute() throws Exception {
        BufferedReader br = null;
        try {
            if (fileUpload != null) {
                br = new BufferedReader(new FileReader(fileUpload));
                String line;
                while ((line = br.readLine()) != null) {
                    // Do what you want to do.
                }
            } else {
                throw new Exception("File upload is null..");
            }
        } catch (Exception e) {
            return ActionForwardStrings.ERROR;
        } finally {
            if (br != null) {
                br.close();
            }
            if (reverse != null) {
                reverse.close();
            }
        }
        message = "successfully upload";
        return ActionForwardStrings.SUCCESS;
    }
}
show more
7 months ago
Tomcat
Upload a file to Java Web App running in Tomcat behind Apache
Thanks for the reply Tim. Yes I am logging.
I mean the File object is null when I access the application through Apache. All the other request parameters are ok.. only the File object is null.
show more
7 months ago
Tomcat
Upload a file to Java Web App running in Tomcat behind Apache
Hi,
I have a web appliction running in Tomcat (using Struts2). The Tomcat is running behind the Apache server.
I have a use case where I have to upload a file. The functionality is working fine if I access the application directly on Tomcat (IP address:8080). But when I access the application through Apache, the uploaded file is not available to tomcat.
Not sure where I have to fix this.
Thanks in Advance.
Bala.
show more
7 months ago
Tomcat
Java Framework for Forum Application
Thanks a lot for the reply @Tim Moores. It really helped me to move further..
I am able to build sample applications using both roller and jforum
Jforum suites my requirements for the Forum application and stated integrating with my existing Struts2 application.
As far as Roller is concerned, it's a huge multi-user, multi-blog application, which is heavyweight for my need. I am just looking for a single-blog application. Any other simple framework for a blogging application?
Thanks,
Bala.
show more
9 months ago
Other Application Frameworks
Java Framework for Forum Application
Hi All,
I have a simple Web application that is implemented using Struts2, Hibernate, and MYSQL.
The current application's features are just
- Retrieve the data from the Data provider (let's say the end of the day stock price) and store it in DB,
- Present the data to visitors
- Also maintaining sessions of User's who logged in to access advanced analytics.
Now I am looking to upgrade the Application with the below features. Along with the above features...
- A Blog ( is there any framework Record to avoid parsing of JSP every time. This to reduce the page load time compared to a competitor website which was implemented in PHP.
Please suggest to me if there are frameworks available that can integrate with current Struts 2.0? Or Should I move to PHP ( I have to learn ..:-( ) to get frameworks for the above features..
Thanks in advance. Stay Home Stay Safe.
Regards,
Bala.
show more
9 months ago
Other Application Frameworks
Need help with Choosing FrameWord in Java
Hi All,
I have a simple Web application which is implemented using Struts2, Hibernate and MYSQL.
The current application is features are just
- Retrieve the data from Data provider ( lets say end of the day stock price) and store it in DB,
- Present the data to visitors based
- Also maintaining sessions of User's who logged in to access Advanced anlytics.
Now I am looking to upgrade the Application with below features. Along with above fetaures...
- A Blog ( is there any frame work Records to avoid parsing of JSP every time. This to reduce the page load time compare to a competitor website which was implemented in PHP.
Please suggest me if there are frame works available if I migrate to Spring based application ? Or Should I move to PHP ( I have to learn ..:-( ) to get frameworks for above features..
Thanks in advance.. Stay Home Stay Safe..
Regards,
Bala.
show more
1 year ago
Struts
QuickCache for PHP whats for Java
Thanks for the reply. Yes , as per my research on Internet also , I think, caching frequently accessed data (which dont change often ) from DB is a better choice. So we can avoid DB access for this data.
Now the question is , lets say I have a set of records, the lottery results of a state which change once in a day. I cached these results in Java (as there are 1000s of page requests for the same results on my website.). Is there any Pattern/algorithm to manage this cached data to automatically refresh when the corresponding records updated in DB ?. I am maintaining last updated column in DB for each of this lottery entry. Either programatically or using existing Cache framework in Java.
Regards,
Bala.
show more
1 year ago
Struts
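Not part of the thread, but one common answer to the refresh question above can be sketched in plain Java (all names here are made up for illustration): cache the rendered page together with the lastUpdated value it was built from, and re-render only when the database reports a newer lastUpdated.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

/** Tiny sketch of a "refresh only when the source row is newer" cache. */
public class ResultCache {
    private static class Entry {
        final String html;
        final long builtFrom; // lastUpdated value the page was rendered from
        Entry(String html, long builtFrom) { this.html = html; this.builtFrom = builtFrom; }
    }

    private final Map<String, Entry> cache = new HashMap<>();

    /**
     * Returns the cached page for the key unless the DB's lastUpdated is
     * newer than what the cached page was built from.
     */
    public String get(String key, long lastUpdatedInDb, Supplier<String> render) {
        Entry e = cache.get(key);
        if (e == null || lastUpdatedInDb > e.builtFrom) {
            e = new Entry(render.get(), lastUpdatedInDb);
            cache.put(key, e);
        }
        return e.html;
    }

    public static void main(String[] args) {
        ResultCache cache = new ResultCache();
        String first = cache.get("state-x", 100L, () -> "page v1"); // rendered
        String again = cache.get("state-x", 100L, () -> "page v2"); // served from cache
        String fresh = cache.get("state-x", 200L, () -> "page v3"); // re-rendered
        System.out.println(first + " / " + again + " / " + fresh); // page v1 / page v1 / page v3
    }
}
```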
QuickCache for PHP whats for Java
Hi All,
I have a java web application. My application uses Struts2. My application generates the results of games (each state page have around 10 games and the results are published twice daily for 7 games and two are weekly).
Now I dont want to generate the same html (from JSP everytime) instead I want to Cache the generated HTML page (like PHP Caching) and only generate the new page when ever there is a change in the states data..
Is there any tool like QuickCache for PHP available for Java ?
Or Do I need to program it manually, If yes, can you suggest me the best method to implement this.
Thanks in advance,
Bala.
show more
1 year ago
Struts
Including JSP Fragment at different levels
${pageContext.request.contextPath} gives the webApp context. So this solved the issue for both Development environment as well as production. Thank you.
Example:
show more
1 year ago
Struts
Source: https://www.coderanch.com/u/183277/Bala-Tilak (CC-MAIN-2021-49, en, refinedweb)
OwnTech NGND driver Zephyr Module is in charge of driving the Neutral-Ground switch of the Power Converter.
How to use
To change the state of the Neutral-Ground switch using the NGND driver, the
ngnd.h file must be included:

#include "ngnd.h"

Then, the driver structure has to be obtained from Zephyr using the following function:

ngnd_driver = device_get_binding(NGND_LABEL);

Finally, the driver can be used to change the switch state using the following function:

ngnd_set(ngnd_driver, value);
With
value either 0 or 1.
Example
#include "ngnd.h"

static const struct device* ngnd_driver = NULL;

void init()
{
    ngnd_driver = device_get_binding(NGND_LABEL);
}

void enable_ngnd()
{
    ngnd_set(ngnd_driver, 1);
}
Technical information
This driver has an almost-Zephyr-style implementation. It does use the standard
DEVICE_DEFINE() macro that allows it to be loaded automatically if the module is enabled. However, it does not condition its loading to the
ngnd node presence in the device tree nor apply the board-agnostic layer used by Zephyr to allow generic driver.
Enabling or disabling the module
The module is loaded by default. The user explicitly has to set
CONFIG_OWNTECH_NGND_DRIVER=n in
prj.conf to prevent the module from loading. However, even if it is not used directly by your code, disabling this module may lead to the power converter not outputting any power.
Source: https://www.owntech.org/en/ngnd-driver/ (CC-MAIN-2021-49, en, refinedweb)
Hello
It seems it might be a good idea if it were possible to uninstall gnuradio properly.
I currently have two systems failing (hard) using the new build.
My gentoo box (configured using cmake in another thread)
gives me the error:
ImportError: libgruel-3.4.2git.so.0: cannot open shared object file: No such file or directory
Whenever I try
from gnuradio import digital
Funny part is: I never succeeded in installing 3.4.2, so I don’t blame it for not finding it.
I tried doing a manual ldconfig, but it didn’t seem to do the trick.
On an ubuntu machine (xubuntu to be specific) using the build-gnuradio script, most of the digital schemes fail due to the reallocation of packets to digital. This includes stuff that should be updated.
Is it possible that the python stuff does not get properly updated and is there any way to fix this?
Downgrading, by adding a "git checkout v3.4.2", makes the build run fine again.
On both systems the building of the system is without problems.
Source: https://www.ruby-forum.com/t/trouble-with-multiple-installs-or-how-i-learned-to-love-make-uninstall/213170 (CC-MAIN-2021-49, en, refinedweb)
Unlike for and while loops, which test the loop condition at the top of the loop, the do...while loop checks its condition at the bottom of the loop. A do...while loop is similar to a while loop, except that a do...while loop is guaranteed to execute at least one time.
The syntax of a do...while loop in C++ is −
do {
   statement(s);
} while( condition );
Notice that the conditional expression appears at the end of the loop, so the statement(s) in the loop execute once before the condition is tested.
If the condition is true, the flow of control jumps back up to do, and the statement(s) in the loop execute again. This process repeats until the given condition becomes false.
#include <iostream>
using namespace std;

int main () {
   // Local variable declaration:
   int a = 10;

   // do loop execution
   do {
      cout << "value of a: " << a << endl;
      a = a + 1;
   } while( a < 20 );

   return 0;
}
Source: https://www.tutorialspoint.com/cplusplus/cpp_do_while_loop.htm (CC-MAIN-2021-49, en, refinedweb)
When you teach people, you learn!
Look at things in a different way, from another perspective. And you become much more efficient!
The most efficient way to do an ifelse ala Excel:
In R, it would be:
df$X <- ifelse(df$A == "-1", -df$X, df$X)
In Python, it would be:
import numpy as np np.where(df['A'] < 1, -df['X'], df['X'])
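With made-up arrays (no DataFrame needed), the NumPy version can be run end to end:

```python
import numpy as np

A = np.array([-1, 2, -1, 5])
X = np.array([10, 20, 30, 40])

# Negate X wherever A < 1, keep it unchanged otherwise.
result = np.where(A < 1, -X, X)
print(result)  # [-10  20 -30  40]
```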
check the full post out at stackoverflow:
|
https://www.yinglinglow.com/blog/2018/08/30/ifelse-in-R
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
MLRun Functions Marketplace
Overview
In this tutorial we’ll demonstrate how to import a function from the marketplace into your own project, with some basic instructions on how to run the function and view its results.
Functions Marketplace
The MLRun marketplace has a wide range of functions that can be used for a variety of use cases. It includes functions for ETL, data preparation, training (ML and deep learning), serving, alerts and notifications, and more. Each function has a docstring that explains how to use it, and functions are associated with categories to make it easier to find the relevant one.
Functions can be easily imported into your project, helping users speed up their development cycle by reusing built-in code.
Searching for functions
The Marketplace is stored in this GitHub repo:
In the README file you can view the list of functions in the marketplace and their categories.
Setting the project configuration
The first step for each project is to set the project name and path:
from os import path, getenv
from mlrun import new_project

project_name = 'load-func'
project_path = path.abspath('conf')

project = new_project(project_name, project_path, init_git=True)

print(f'Project path: {project_path}\nProject name: {project_name}')
Set the artifacts path
The artifact path is the default path for saving all the artifacts that the functions generate:
from mlrun import run_local, mlconf, import_function, mount_v3io

# Target location for storing pipeline artifacts
artifact_path = path.abspath('jobs')

# MLRun DB path or API service URL
mlconf.dbpath = mlconf.dbpath or ''

print(f'Artifacts path: {artifact_path}\nMLRun DB path: {mlconf.dbpath}')
project.set_function
set_function updates or adds a function object to the project
set_function(func, name='', kind='', image=None, with_repo=None)
Parameters:
func – function object or spec/code url
name – name of the function (under the project)
kind – runtime kind, e.g. job, nuclio, spark, dask, mpijob (default: job)
image – docker image to be used, can also be specified in the function object/yaml
with_repo – add (clone) the current repo to the build source
Returns: project object
For more information see the
set_function API documentation.
View the function params
To view the parameters, run the function with .doc():
my_describe.doc()
function: describe
describe and visualizes dataset stats
default handler: summarize
entry points:
  summarize: Summarize a table
    context(MLClientCtx) - the function context, default=
    table(DataItem) - MLRun input pointing to pandas dataframe (csv/parquet file path), default=
    label_column(str) - ground truth column label, default=None
    class_labels(List[str]) - label for each class in tables and plots, default=[]
    plot_hist(bool) - (True) set this to False for large tables, default=True
    plots_dest(str) - destination folder of summary plots (relative to artifact_path), default=plots
    update_dataset - when the table is a registered dataset update the charts in-place, default=False
Running the function
Use the run method to run the function.
When working with functions, pay attention to the following:
Input vs params - for sending data items to a function, users should send it via “inputs” and not as params.
Working with artifacts - Artifacts from each run are stored in the artifact_path, which can be set globally through the environment variable (MLRUN_ARTIFACT_PATH) or through the config. If it is not already set, we can create a directory and use it in our runs. Using {{run.uid}} in the path creates a unique directory per run; when we use pipelines we can use the {{workflow.uid}} template option.
In this example we run the describe function. This function analyzes a dataset (in our case a csv file), generates html files (e.g. correlation, histogram) and saves them under the artifact path.
DATA_URL = ''

my_describe.run(name='describe',
                inputs={'table': DATA_URL},
                artifact_path=artifact_path)
Saving the artifacts in a unique folder for each run
out = mlconf.artifact_path or path.abspath('./data')

my_describe.run(name='describe',
                inputs={'table': DATA_URL},
                artifact_path=path.join(out, '{{run.uid}}'))
Viewing the jobs & the artifacts
There are a few options to view the outputs of the jobs we ran:
In Jupyter - the result of the job is displayed in Jupyter notebook. Note that when you click on the artifacts it displays its content in Jupyter.
UI - going to the MLRun UI, under the project name, you can view the job that ran as well as the artifacts it generated
Source: https://docs.mlrun.org/en/latest/runtimes/load-from-marketplace.html (CC-MAIN-2021-49, en, refinedweb)
Blazor Tutorial Create Component from Code Behind
You can create view markup and C# code logic in separate files when creating a Blazor component.
- Using the @inherits directive, you tell the Blazor compiler to derive the class generated from the Razor view from the class specified with this directive.
- The class specified with the @inherits directive must inherit from the BlazorComponent class, which provides all base functionality for the component.
Let's move the C# code logic from the Counter.cshtml file to a separate code-behind class.
using Microsoft.AspNetCore.Blazor.Components;

namespace BlazorApplication
{
    public class CounterClass : BlazorComponent
    {
        public int CurrentCount { get; set; }

        public void IncrementCount()
        {
            CurrentCount += 5;
        }
    }
}
The Counter.cshtml file will use the properties and methods from the code-behind class just by adding
@inherits CounterClass.
@page "/counter"
@inherits CounterClass

<h1>Counter</h1>

<p>Current count: @CurrentCount</p>

<button class="btn btn-primary" onclick="@IncrementCount">Click me</button>
The Blazor compiler generates a class for every view page with the same class name as the page name, so a specified base class cannot have the same name as the Razor view; otherwise, it would cause a compile-time error.
Now when the Click me button is pressed, the counter is incremented by 5.
Source: https://blazor-tutorial.net/create-component-from-code-behind (CC-MAIN-2021-49, en, refinedweb)
Subject: Re: [boost] [msm] eUML guard/action location
From: David Abrahams (dave_at_[hidden])
Date: 2010-01-09 17:34:39
At Mon, 14 Dec 2009 13:31:29 +0100,
Christophe Henry wrote:
>
> >Are all those parentheses used for optional arguments, or could we consider eliminating them?
>
> I'd love to eliminate them but don't see how. They actually are constructors.
> The complete expression would be something like:
>
> typedef BOOST_TYPEOF(build_stt((
> DestState() = CurrentState() + cool_event()[guard()]/action(),
> DestState2() = CurrentState() + cool_event()[guard2()]/action2()
> ) ) ) transition_table;
>
> So I'm really just pretending to pass on-the-fly instances of
> arguments from types DestState, CurrentState, etc. to evaluate the
> result with typeof after proto is done.
You could declare those instances at namespace scope, in the same way
most Proto terminals are declared, non?
--
Source: https://lists.boost.org/Archives/boost/2010/01/160713.php (CC-MAIN-2019-26, en, refinedweb)
allow you to run an app in the browser and on device without code or configuration changes
These characteristics are still core to ForceJS, but a lot of things have changed in the JavaScript world in the last 2 years. Modern JavaScript applications are now built with ECMAScript 6 (aka ECMAScript 2015) and beyond. The current versions of modern frameworks (such as React, Angular 2, and Ionic 2) are also built on top of ECMAScript Next.
To seamlessly support modern application development workflows, and naturally integrate with modern frameworks, I pushed a new and rearchitected version of ForceJS built on top of ECMAScript 6. The new version has the following characteristics:
- Modular architecture. Currently includes two modules: forcejs/oauth and forcejs/data
- Loaded as ECMAScript 6 modules
import OAuth from 'forcejs/oauth';
import Service from 'forcejs/data';
- Works naturally with modern JavaScript frameworks: React, Angular 2, Ionic 2, etc.
- Uses ECMAScript 6 promises instead of callbacks
- Supports singleton or named instances to accommodate apps that work with a single or multiple Salesforce instances
GitHub Repo
The new version of ForceJS is available in this GitHub repository
Quick Start
Follow the instructions below to set up a project and create a simple JavaScript (ECMAScript 6) application that shows a list of Salesforce contacts:
- Create a new directory for your project, navigate (cd) to that directory, and type the following command to initialize a project that uses the npm package manager (accept all the default values):
npm init
- Type the following command to install forcejs:
npm install forcejs --save-dev
- Type the following command to install the force-server development server:
npm install force-server --save-dev
- Type the following command to install Webpack and Babel:
npm install babel-core babel-loader babel-preset-es2015 webpack --save-dev
- Using your favorite editor, open package.json and modify the scripts section as follows:
"scripts": { "webpack": "webpack", "start": "force-server" },
- In your project’s root directory, create a file named webpack.config.js:
var path = require('path');
var webpack = require('webpack');

module.exports = {
    entry: './app.js',
    output: {
        filename: 'app.bundle.js'
    },
    module: {
        loaders: [
            {
                test: /\.js$/,
                loader: 'babel-loader',
                query: {
                    presets: ['es2015']
                }
            }
        ]
    },
    stats: {
        colors: true
    },
    devtool: 'source-map'
};
- In your project’s root directory, create a file named index.html:
<!DOCTYPE html>
<html>
<body>
    <h1>Forcejs Quick Start</h1>
    <ul id="contacts"></ul>
    <script src="app.bundle.js"></script>
</body>
</html>
- In your project’s root directory, create a file named app.js:
import OAuth from 'forcejs/oauth';
import Service from 'forcejs/data';

let oauth = OAuth.createInstance();
oauth.login()
    .then(oauthResult => {
        Service.createInstance(oauthResult);
        loadContacts();
    });

let loadContacts = () => {
    let service = Service.getInstance();
    service.query('select id, Name from contact LIMIT 50')
        .then(response => {
            let contacts = response.records;
            let html = '';
            contacts.forEach(contact => html = html + `<li>${contact.Name}</li>`);
            document.getElementById("contacts").innerHTML = html;
        });
}
- On the command line, type the following command to build your project:
npm run webpack
- Type the following command to start the app in a browser:
npm start
Authenticate in the OAuth login window. You should now see a list of contacts.
Feedback
Since this is a new version, I appreciate your feedback on the developer experience. Pull Requests welcome as well.
http://coenraets.org/blog/2016/11/new-version-of-forcejs-a-javascript-library-for-using-the-salesforce-apis-in-ecmascript-6-apps/
The QDesignerFormWindowInterface class allows you to query and manipulate form windows appearing in Qt Designer's workspace.
#include <QDesignerFormWindowInterface>
Inherits QWidget.
The QDesignerFormWindowInterface class allows you to query and manipulate form windows appearing in Qt Designer's workspace.
QDesignerFormWindowInterface provides information about the associated form window as well as allowing its properties to be altered.
Constructs a form window interface with the given parent and the specified window flags.
Destroys the form window interface.

This signal is emitted whenever the form's geometry changes.
Returns the grid spacing used by the form window.
See also setGrid().
Returns true if the form window offers the specified feature; otherwise returns false.
See also features().
Sets the default margin and spacing for the form's layout.
See also layoutDefault().
https://doc.qt.io/archives/4.3/qdesignerformwindowinterface.html
This article goes over how to create a custom Web Document Viewer handler. This allows you to provide custom image sources to the Web Document Viewer, whether it is a document loaded from a database, a procedurally generated image or simply a stream that doesn't map directly to a file in the relative web path.
The first step is to make sure you have the relevant references in your project. Make sure you have Atalasoft.dotImage.dll, Atalasoft.dotImage.Lib.dll, Atalasoft.dotImage.WebControls.dll, Atalasoft.dotImage.Lib.dll and Atalasoft.Shared.dll added to your project. You will also need any additional libraries you may be using.
The next step is to either create or include a "Generic Handler" (an .ashx file.) It can be either C# or VB.Net. I've attached pre-made handlers for both C# and VB.Net to this KB, so you can skip most of this and jump straight to grabbing those .ashx files.
Once you have that added to your project, open it up and update the definition of the class. Instead of Implementing the IHttpHandler interface we're going to inherit from Atalasoft.Imaging.WebControls. WebDocumentRequestHandler
(Please note that from here on out, you will need to replace our default " Handler " with whatever class name you have chosen. Also, I'm not going to assume you've added the relevant using or Imports statements to your handler and will include the full class name of all the classes we're using. Feel free to add them and save yourself some space.)
C#

public class Handler : WebDocumentRequestHandler

VB.Net

Public Class CustomWebDocumentViewer_VB
    Inherits WebDocumentRequestHandler
Then in your class constructor, add the handlers for the two events we're going to be working with, DocumentInfoRequested and ImageRequested .
C#

public CustomWebDocumentViewer()
{
    this.DocumentInfoRequested += new DocumentInfoRequestedEventHandler(CustomWebDocumentViewer_DocumentInfoRequested);
    this.ImageRequested += new ImageRequestedEventHandler(CustomWebDocumentViewer_ImageRequested);
}

VB.Net

Public Sub New()
    AddHandler Me.DocumentInfoRequested, AddressOf Me.CustomWebDocumentViewer_DocumentInfoRequested
    AddHandler Me.ImageRequested, AddressOf Me.CustomWebDocumentViewer_ImageRequested
End Sub
Okay, so all we have left to do is to implement those two events. Before we jump in to that, let me take a moment to explain what each event does. The DocumentInfoRequested event fires only once when .openUrl is called on the viewer or thumbnailer to build a basic document outline for the viewer. It gets the page count and page size before the viewer starts requesting pages. The ImageRequested event fires once per document page, and handles serving the page to the user.
C#

void CustomWebDocumentViewer_DocumentInfoRequested(object sender, DocumentInfoRequestedEventArgs e)
{
    // Tell the viewer what to expect before any pages are requested
    // (here: a single page at 800 x 600, as described below):
    e.PageCount = 1;
    e.PageSize = new System.Drawing.Size(800, 600);
}

void CustomWebDocumentViewer_ImageRequested(object sender, ImageRequestedEventArgs e)
{
    //e.FilePath   //Tells you what image is requested
    //e.FrameIndex //What page is requested (0-indexed)
    //e.Image      //Return the image to display to client here.
    e.Image = CodeHereToGetAtalaImageOfSpecificFrameDesired();
}
VB.Net

Public Sub CustomWebDocumentViewer_DocumentInfoRequested(ByVal sender As Object, ByVal e As Atalasoft.Imaging.WebControls.DocumentInfoRequestedEventArgs)
    ' Tell the viewer what to expect before any pages are requested
    ' (here: a single page at 800 x 600, as described below):
    e.PageCount = 1
    e.PageSize = New System.Drawing.Size(800, 600)
End Sub

Public Sub CustomWebDocumentViewer_ImageRequested(ByVal sender As Object, ByVal e As ImageRequestedEventArgs)
    'e.FilePath   'Tells you what image is requested
    'e.FrameIndex 'What page is requested (0-indexed)
    'e.Image      'Return the image to display to client here.
    e.Image = CodeHereToGetAtalaImageOfSpecificFrameDesired()
End Sub
In the DocumentInfoRequested we're telling the viewer to expect a single page at 800, 600 and in the ImageRequested simply serving a new 800, 600 red AtalaImage. You will need to pass in the required information to either get or make your image(s) in the FilePath field. It should be fairly easy with this to use your own image repository to serve to the Web Document Viewer.
The last step is to go in to your .aspx file where you have the WebDocumentViewer and change the _serverUrl to point to your new custom handler.
<script>
    var _docUrl = 'document12345';
    var _serverUrl = 'CustomWebDocumentViewer.ashx'; // You get the idea.
</script>
That _docUrl will be passed to your handler in the e.FilePath to both the DocumentInfoRequested and ImageRequested events. In your client-side javascript, you can change the document using _viewer.OpenUrl("newDocument") to change the currently loaded document, again whatever you pass in that Open will get passed directly as the e.FilePath
Example Handler with All Events
C#
using System; using System.Web; using Atalasoft.Imaging.Codec; using Atalasoft.Imaging.Codec.Pdf; using Atalasoft.Imaging.WebControls;
public class WebDocViewerHandler : WebDocumentRequestHandler {
RegisteredDecoders .Decoders.Add( new PdfDecoder () { Resolution = 200, RenderSettings = new RenderSettings () { AnnotationSettings = AnnotationRenderSettings .RenderNone } });
//// This adds the OfficeDecoder .. you need proper licensing and additional OfficeDecoder //// dependencies (perceptive filter dlls) //RegisteredDecoders.Decoders.Add(new OfficeDecoder() { Resolution = 200 }); }
//// *************************** BASE DOCUMENT VIEWING EVENTS *************************** //// these two events are the base events for the loading of pages //// the documentInfoRequested fires to ask for the number of pages and size of the first page (minimum requirements) //// then, each page needed (and only when it's needed - lazy loading) will fire an ImageRequested event //this.DocumentInfoRequested += new DocumentInfoRequestedEventHandler(WebDocViewerHandler_DocumentInfoRequested); //this.ImageRequested += new ImageRequestedEventHandler(WebDocViewerHandler_ImageRequested);
//// *************************** ADDITIONAL DOCUMENT VIEWING EVENTS *************************** //// These events are additional/optional events .. //this.AnnotationDataRequested += WebDocViewerHandler_AnnotationDataRequested; //this.PdfFormRequested += WebDocViewerHandler_PdfFormRequested;
//// this is the event that would be used to manually handle page text requests //// (if left unhandled, page text requested wil be handled by the default engine which will provide searchabel text for Searchable PDF and office files) //// Manually handling this event is for advanced use cases only //this.PageTextRequested += WebDocViewerHandler_PageTextRequested;
//// *************************** DOCUMENT SAVING EVENTS *************************** //this.DocumentSave += WebDocViewerHandler_DocumentSave; //this.DocumentStreamWritten += WebDocViewerHandler_DocumentStreamWritten; //this.AnnotationStreamWritten += WebDocViewerHandler_AnnotationStreamWritten;
//// This event is save related but conditional use - see comments in handler //this.ResolveDocumentUri += WebDocViewerHandler_ResolveDocumentUri; //this.ResolvePageUri += WebDocViewerHandler_ResolvePageUri;
//// *************************** OTHER EVENTS (usually ignored) *************************** //this.ReleaseDocumentStream += WebDocViewerHandler_ReleaseDocumentStream; //this.ReleasePageStream += WebDocViewerHandler_ReleasePageStream }
#region BASE DOCUMENT VIEWING EVENTS /// <summary> /// The whole DocumentOpen process works like this: /// On initial opening of the document, DocumentInfoRequested fires once for the document /// The AnnotationDataRequested event may also fire if coditions merit\ /// then the ImageRequested event will fire for subsequent pages.. /// /// This event fires when the first request to open the document comes in /// you can either set e.FilePath to a different value (useful for when the /// paths being opened simply alias to physial files somewhere on the server) /// or you can manually handle the request in qhich case you MUST set both e.PageSize and e.PageCount /// </summary> /// <param name="sender"></param> /// <param name="e"></param> void WebDocViewerHandler_DocumentInfoRequested( object sender, DocumentInfoRequestedEventArgs e) { ////THESE TWO MUST BE RESOLVED IF YOU ARE MANUALLY HANDLING //// set e.PageCount to the number of pages this document has //e.PageCount = SomeMethodThatGetsPageCountForDocumentRequested(e.FilePath); //// e.PageSize to the System.Drawing.Size you want pages forced into //e.PageSize = SomeMethodThatGetsSizeOfFirstPageOfDocument(e.FilePath); }
/// <summary> /// you can use e.FilePath and e.FrameIndex to work out what image to return /// set e.Image to an AtalaImage to send back the desired image /// </summary> /// <param name="sender"></param> /// <param name="e"></param> void WebDocViewerHandler_ImageRequested ( object sender, ImageRequestedEventArgs e) { //// NOTE: If you are using file-based viewing, but the incoming FilePath is an //// alias/ needs parsing but ends up just pointing to a valid file you can just set e.FilePath to the FULL PATH (on the server) or full UNC Path //// Example: //// Incoming request for "docId-foo" needs to resolve to C:\SomeDirectory\MyDoc.tif //e.FilePath = @"C:\SomeDirectory\MyDoc.tif"; //// or more likely: // e.FilePath = SomeMethodToReturnFullPathForIncomingRequest(e.FilePath); //// This is an approximation of what the default ImageRequested event would do if you didn't handle it manually //e.Image = new AtalaImage(HttpContext.Current.Request.MapPath(e.FilePath), e.FrameIndex, null); //// When you manually handle it, you need to pass an AtalaImage back to e.Image // e.Image = SomeMethodThatReturnsAtalaImage(e.FilePath, e.FrameIndex); //// but there's no reason that e.FilePath couldn't be a database PKID or similar.. //// your method would look up the record and get the data needed to construct and return a valid AtalaImage } #endregion BASE DOCUMENT VIEWING EVENTS
#region ADDITIONAL DOCUMENT VIEWING EVENTS /// <summary> /// When an OpenUrl that includes an AnnotationUri is called, the viewer default action is to go to the /// url specified ... which an XMP file containing all annotation layers (serialized LayerData[]) is read and loaded /// Manual handling of this event would be needed if one were to be loading annotations from a Byte[] or from a databasae, etc /// NOTE: this event WILL NOT FIRE if there was no annotationsUrl or a blank /null AnnotaitonsUrl was passed in the OpenUrl call /// </summary> /// <param name="sender"></param> /// <param name="e"></param> void WebDocViewerHandler_AnnotationDataRequested( object sender, AnnotationDataRequestedEventArgs e) { // READ-ONLY INPUT VALUE //e.FilePath = the passed in FilePath of where to find the annotations (when the OpenUrl is called, this will populate with the AnnotationsUrl value // WHAT YOU NEED TO HANDLE THIS EVENT // To successfully handle this event, you must populate the e.Layers //e.Layers = a LayerData[] containing ALL Annotation Layers for the whole document }
/// <summary> /// This event lets you intercept requests for page text to provide your own /// manually handling this is not recommended, but the hook is here /// </summary> /// <param name="sender"></param> /// <param name="e"></param> void WebDocViewerHandler_PageTextRequested( object sender, PageTextRequestedEventArgs e) { // You can use the e.FilePath and e.Index to know which document and page are being requested respectively // you can then go extract text/ get the text to the control you must ultimately set // e.Page = .. an Atalasoft.Imaging.WebControls.Text.Page object containing the page text and position data }
/// <summary> /// This event only fires if the allowforms: true was set and you have a license for the PdfDoc addon /// /// It is used to provide the PDF document needed by the form filling components /// If left unhandled it will still fire internally and will simply /// /// </summary> /// <param name="sender"></param> /// <param name="e"></param> void WebDocViewerHandler_PdfFormRequested( object sender, PdfFormRequestedEventArgs e) { //// READ ONLY VALUE //e.FilePath contains the original File path in the intial request use this to figure out which file/document/record to fetch //// Required if handling you must provide an Atalasoft.PdfDoc.Generating.PdfGeneratedDocument //// containing the PDF with the fillable form if you manually handle this event //FileStream fs = new System.IO.FileStream(e.FilePath, FileMode.Open, FileAccess.Read, FileShare.Read); //e.PdfDocument = new Atalasoft.PdfDoc.Generating.PdfGeneratedDocument(fs); // If you return NULL for e.PdfDocument then the system will treat the PDF as not being a fillable PDF form } #endregion ADDITIONAL DOCUMENT VIEWING EVENTS
#region DOCUMENT SAVING EVENTS /// <summary> /// This event fires initially on DocumentSave /// Document saving rundown: /// The DocumentSave fires first.. it gives you the chance to provide alternate streams for /// e.DocumentStream and e.AnnotationStream /// Then when the Document is written the DocumentStreamWritten event will fire.. e.DocumentStream will give oyu access to the /// document stream /// Then the AnnotationStreamWritten will fire (if there were annotations to save) and e.AnnotationStream will give you access /// to the annotation stream. /// </summary> /// <param name="sender"></param> /// <param name="e"></param> void WebDocViewerHandler_DocumentSave( object sender, DocumentSaveEventArgs e) {
/// <summary> /// e.AnnotationStream will contain the annotations that were written /// The DocumentStreamWritten will ALWAYS fire before the AnnotationStreamWritten /// even thought this event has e.DocumentStream.. it will always be null when /// AnnotationStreamWritten fires.. use the DocumentStreamWritten event to handle the DocumentStream /// </summary> /// <param name="sender"></param> /// <param name="e"></param> void WebDocViewerHandler_AnnotationStreamWritten( object sender, DocumentSaveEventArgs e) { //// You should rewind first, then you can store in a DB or similar //// EXAMPLE: passed an e.MemoryStream for writing to a DB: //MemoryStream annoStream = e.AnnotationStream as MemoryStream //if (annoStream != null) //{ // annoStream.Seek(0, SeekOrigin.Begin); // SomeMethodToStoreByteArrayToDb(annoStream.ToArray()); //}
//// NOTE: if you set e.AnnotationStream to a file stream in the DocumentSave you can skip handling //// this event and the system will take care of closing it. Y ou would only need to handle this event //// if you are doing something else with it other than letting it write to where you specified in that //// FileStream, such as post-processing or similar }
/// <summary> /// e.DocumentStream will contain the entire document being saved /// the DocumentStreamWritten will ALWAYS fire before the AnnotationStreamWritten /// Even thought this event has e.AnnotationStream.. it will always be null when /// DocumentStreamWritten fires.. use the AnnotationStreamWritten event to handle the AnnotationStream /// </summary> /// <param name="sender"></param> /// <param name="e"></param> void WebDocViewerHandler_DocumentStreamWritten( object sender, DocumentSaveEventArgs e) { //// You should rewind first, then you can store in a DB or similar //// EXAMPLE: passed am e.MemoryStream for writing to a DB:
//MemoryStream docStream = e.DocumentStream as MemoryStream //if (docStream != null) //{ // docStream.Seek(0, SeekOrigin.Begin); // SomeMethodToStoreByteArrayToDb(docStream.ToArray()); //}
//// NOTE: if you set e.DocumentStream to a file stream in the DocumentSave you can skip handling //// this event and the system will take care of closing it you would only need to handle this event //// if you are doing something else with it other than letting it write to where you specified in that //// FileStream such as post-processing or similar }
/// <summary> /// Fires when a source page stream is requested while performing save operation /// During document save it is necessary to get the source document pages to combine them into the destination stream. /// </summary> /// <param name="sender"></param> /// <param name="e"></param> void WebDocViewerHandler_ResolvePageUri( object sender, ResolvePageUriEventArgs e) { //// it is often best to leave this unhandled.. the ResolveDocumentUri is usually sufficient for situations where //// The time to use this event is if your original opened document is a //// c ombination of multiple different source streams/documents: it is for very advanced use cases } #endregion DOCUMENT SAVING EVENTS
#region OTHER EVENTS //// The events in this section are almost never used manually by customers.. //// They may have some use in extremely difficult/complex use cases, but for the most part should be left //// un-handled in your custom handler .. let the control use its defaults /// <summary> /// Fires when a document release stream occurs on document save. This event is raised only for streams that were provided in ResolveDocumentUri event. /// After document save operation it is necessary to release the obtained in ResolveDocumentUri event document streams. Fires once for each stream. /// </summary> /// <param name="sender"></param> /// <param name="e"></param> void WebDocViewerHandler_ReleaseDocumentStream( object sender, ResolveDocumentUriEventArgs e) { //// Usually just leaving this to the default works fine... //// e.DocumentUri contains the original request uri (path) for the document //// The default activity will pretty much be //e.DocumentStream.Close(); //e.DocumentStream.Dispose(); }
/// <summary> /// Fires when a page release stream occurs on document save. /// This event is raised only for streams that were provided in ResolvePageUri event. /// </summary> /// <param name="sender"></param> /// <param name="e"></param> void WebDocViewerHandler_ReleasePageStream( object sender, ResolvePageUriEventArgs e) { //// Consider leaving this event alone as the default built in handling works well //// e.DocumentPageIndex - The index of the page in the DocumentStream if it is a multi-page document. Default value is 0. //// e.DocumentUri - the original document location / path that was passed in //// e.SourcePageIndex - The requested index of the page in the document. //// e.DocumentStream - Stream used to override the default file system save while saving the document. //// Setting this property will take precedence over using the DocumentUri property.
//// Manually handling: if the original ResolvePageUri was used and you set e.DocumentUri to an alias value.. do that here //// if you provided a DocumentStream in ResolvePageUri, then you may need to call //e.DocumentStream.Close(); //e.DocumentStream.Dispose(); } #endregion OTHER EVENTS }
With the release of DotImage 11.0, we have added support for .NET Core web applications under .NET Framework.
.NET Core does not use generic handlers... however, you can still get at the WDV middleware to handle the same types of events.
Please see the following article for more: Q10443 - INFO: Changes Introduced in DotImage 11.0
http://www.atalasoft.com/KB/article.aspx?id=10347
Because of DEP (Data Execution Prevention), also known as NX,
shellcode that we inject onto the stack may not be executable.
This is where ROP comes in: we make the program jump into code that is already part of the program (or its libraries)
and have it do something similar to what the original shellcode would have done.
ASLR (Address Space Layout Randomisation)
makes the addresses of the libraries and the stack different on every run.
Disable canaries:  -fno-stack-protector
Disable ASLR:      sudo echo 0 > /proc/sys/kernel/randomize_va_space
Enable execstack:  -z execstack
While the program is running, you can use
sudo cat /proc/pid_nu/maps
to check whether the stack is executable (DEP)
and where things are mapped in memory on each run (ASLR).
ROP sample
Goal: make the program jump to a function that is normally never called.
#include <stdio.h>
#include <stdlib.h> /* system() */
#include <string.h> /* strcpy() */

void no_used()
{
    printf("Get the shell by ROP!\n");
    system("/bin/sh");
}

void danger(char* string)
{
    char buffer[100];
    strcpy(buffer, string);
}

int main(int argc, char** argv)
{
    danger(argv[1]);
    return 0;
}
Compile it (remember to disable the protections):
gcc -fno-stack-protector -g rop_sample.c
gdb a.out
gcc's stack-protector provides a randomized stack canary (a value placed before the return address; if that value has been changed, the program refuses to continue).
Find the address of the code we want to jump to:
(gdb) p no_used
$1 = {void ()} 0x8048444 <no_used>
Find the location of the return address we want to overwrite:
(gdb) disas danger
Dump of assembler code for function danger:
0x08048464 <+0>: push %ebp
0x08048465 <+1>: mov %esp,%ebp
0x08048467 <+3>: sub $0x88,%esp
0x0804846d <+9>: mov 0x8(%ebp),%eax
0x08048470 <+12>: mov %eax,0x4(%esp)
0x08048474 <+16>: lea -0x6c(%ebp),%eax
0x08048477 <+19>: mov %eax,(%esp)
0x0804847a <+22>: call 0x8048340 <strcpy@plt>
0x0804847f <+27>: leave
0x08048480 <+28>: ret
End of assembler dump.
Two important points can be seen above.
The first is the call 0x8048340 <strcpy@plt>;
the second is that the local buffer takes up 0x6c bytes (which we overwrite with AAAA…).
Reading the stack frame from high addresses down to low, the layout is: arg (string), return addr, old ebp, var (buffer).
var (buffer) starts at -0x6c(%ebp) and, since arrays are filled from low addresses toward high,
the overflow can climb upward and overwrite the old ebp and then the return address.
And so:
(gdb) set args "`python -c 'print "A"*0x6C + "BBBB" + "\x44\x84\x04\x08"'`"
(gdb) break 11
(gdb) r
and you can see that
the old ebp has been overwritten with BBBB (0x42424242)
and the return address with no_used (0x08048444):
(gdb) x/4x $ebp
0xbffff2a8: 0x42424242 0x08048444 0xbffff400 0x00000000
(gdb) i f
Stack level 0, frame at 0xbffff2b0:
eip = 0x804847f in danger (rop_sample.c:11); saved eip 0x8048444
called by frame at 0x4242424a
source language c.
Press c to continue execution and the program jumps where we want it:
Get the shell by ROP!
$
—————————————————————————————————————-
Background notes:
Return-oriented programming (ROP) is a computer security exploit technique
that allows an attacker to execute code in the presence of security defenses
such as non-executable memory and code signing.
An attacker gains control of the call stack to
hijack program control flow and then executes carefully chosen machine instruction sequences,
called “gadgets”.
Each gadget typically ends in a return instruction
and is located in a subroutine within the existing program
and/or shared library code. Chained together,
these gadgets allow an attacker to perform arbitrary operations
on a machine employing defenses that thwart simpler attacks.
Background
Stack smashing attacks
Return-oriented programming is an advanced version of a stack smashing attack (buffer overflow).
In a standard buffer overrun attack,
the attacker would simply write attack code (the “payload”)
onto the stack and then overwrite the return address
with the location of these newly written instructions.
Operating systems began to combat the exploitation of buffer overflow bugs
by marking the memory where data is written as non-executable,
a technique known as data execution prevention.
Return-into-library technique
These defenses made traditional buffer overflow vulnerabilities difficult
or impossible to exploit in the manner described above.
Since shared libraries, such as libc, often contain subroutines
for performing system calls and other functionality potentially
useful to an attacker, they are the most likely candidates for finding code
to assemble an attack.
In a return-into-library attack, an attacker hijacks program control flow
by exploiting a buffer overrun vulnerability, exactly as discussed above.
Instead of attempting to write an attack payload onto the stack,
the attacker instead chooses an available library function and overwrites
the return address with its entry location.
Borrowed code chunks
Shared library developers also began to remove or restrict library functions
that performed functions particularly useful to an attacker,
such as system call wrappers. As a result, return-into-library attacks
became much more difficult to successfully mount.
The next evolution came in the form of an attack that used
chunks of library functions, instead of entire functions themselves.
This technique looks for functions that contain instruction sequences
that pop values from the stack into registers.
Careful selection of these code sequences allows an attacker
to put suitable values into the proper registers
to perform a function call under the new calling convention.
Return-oriented programming
Return-oriented programming builds on the borrowed code chunks approach
and extends it to provide Turing-complete functionality to the attacker.
Put another way, return-oriented programming provides a fully functional “language”
that an attacker can use to make a compromised machine perform any operation desired.
References
https://hwchen18546.wordpress.com/2014/03/24/software-rop/
ES6 Features That Can't Be Ignored (part 1)
Now that we have some great starter projects for Angular 2 and React, it's easy to hit the ground running with ES6. I summed up the features that will change the way JavaScript Developers code. You can get a full Angular 2 environment here or you can go to es6fiddle or babeljs to test it online.
Let's get started!
Constants that are inconstant
Since JavaScript is Object Oriented, when you see 'constant', you can think that your object is going to stay the same forever.
Well not really.
This only concerns the binding (or a primitive value); if you want to make an object immutable, you need to use Object.freeze, for example:
const A = { name: 1 };
Object.freeze(A);
A.name = 2; // fail
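One detail worth adding (standard JavaScript behaviour, not something the article covers): Object.freeze is shallow, so nested objects stay mutable unless you freeze them too. A quick sketch:

```javascript
// Object.freeze only freezes the top level of the object.
const config = Object.freeze({
  port: 80,
  nested: { retries: 3 }
});

try {
  config.port = 8080; // ignored in sloppy mode, throws a TypeError in strict mode
} catch (e) {
  // strict mode: assignment to a frozen property throws
}

config.nested.retries = 5; // works: the nested object was never frozen

console.log(config.port);           // 80
console.log(config.nested.retries); // 5
```

Object.isFrozen lets you check which level you actually protected.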
You can't be half a gangster
Do you like CoffeeScript? Well they tried to get some inspiration there. What most likely went through their mind: "Guys, we might as well put some CoffeeScript like syntax in it since it's very compact and readable. We will not use curly brackets anymore".
let array1 = [1, 2, 3];
let sum = array1.map(value => value + 1); // 2,3,4
"Ahhh nevermind I already miss those lovely brackets, let's put them back.":
let array2 = [1, 2, 3];
array2.forEach(num => {
  if (num == 2) console.log("number 2 found");
});
All this fumble because of implicit return. If you only have one line of code, you can skip the brackets since the returned value can only be the result of this line, if not, you need to add them.
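One gotcha with implicit return (a standard arrow-function rule, added here for illustration): an implicitly returned object literal must be wrapped in parentheses, otherwise the braces are parsed as a function body:

```javascript
// Braces right after => are read as a function body, not an object literal:
const wrong = name => { name: name }; // "name:" is parsed as a label; returns undefined

// Wrapping the literal in parentheses forces an object:
const right = name => ({ name });

console.log(wrong("Ada")); // undefined
console.log(right("Ada")); // { name: 'Ada' }
```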
So. Quick test.
1. Almost CoffeeScript (ES6):
let doSomething = x => {
  const a = x + 1;
  const b = a + 2;
  return b;
};
doSomething(1);
2. CoffeeScript:
doSomething = (x) ->
  a = x + 1
  b = a + 2
  return b

doSomething 1
Which one do you want to bring to prom night? (Hint: Number 2).
The very useful ones
The spread operator
This feature might be very useful if you are working with Angular 2 and the change detection doesn't kick in when you update an array.
This operator is a quick way to realise a concatenation or splitting a string:
const array1 = [1, 2, 3];
const array2 = [...array1, 4, 5]; // [1, 2, 3, 4, 5]

const name = "Matthieu";
const chars = [...name]; // M,a,t,t,h,i,e,u
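To make the Angular 2 remark above concrete (a plain-JavaScript sketch, no Angular involved): reference-based change detection misses in-place mutation, while spreading into a new array produces a fresh reference that a reference check does notice:

```javascript
let items = [1, 2, 3];
const previous = items;

// In-place mutation keeps the same reference, so a reference
// comparison (items === previous) still reports "no change":
items.push(4);
console.log(items === previous); // true

// Spreading into a new array changes the reference:
items = [...items, 5];
console.log(items === previous); // false
console.log(items);              // [ 1, 2, 3, 4, 5 ]
```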
Template strings

let variableHere = "just a variable";
let a = `some text ${variableHere} other text`;
Export/Import
It's something that you must have encountered if you have read other articles about Angular 2. Here is an example:
export var a = 1;
export var cb = function(x, y) {
  console.log(x, y);
};
import { a, cb } from "somewhere/over/the/rainbow";
cb(a, a);

// Or
import * as importedModule from "somewhere/over/the/rainbow";
importedModule.cb(importedModule.a, importedModule.a);
Property Shorthand
This one is by far my favourite.
Before:
const obj = {
  variableWithAnnoyingLongName: variableWithAnnoyingLongName,
  anotherVariableWithAnnoyingLongName: anotherVariableWithAnnoyingLongName
};
Now with ES6:
const obj = {
  variableWithAnnoyingLongName,
  anotherVariableWithAnnoyingLongName
};
Combined with the new Function Signature feature it gets more powerful!
Function Signature
We can now assign the parameters of a function:
- By default ex: doSomething(a=1, b=2)
- By destructuring:
const arr1 = [1, 2, 3];
function destructure([a, b]) {
  console.log(a, b); // 1 2
}
destructure(arr1);
- By changing the arguments names ex: f({a: name, b: age})
- Or by selecting them with a shorthand ex: f({name, age}). So now we can do cool things like:
function doSomething({ age, name, id }) {
  console.log(age, id, name); // 26 2 undefined
}

const age = 26;
const id = 2;
doSomething({ age, id });
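The default-parameter form from the list above can be sketched in the same style (the function name here is made up for illustration):

```javascript
// Default parameters kick in when an argument is omitted or undefined
function greet(name = "world", punctuation = "!") {
  return `Hello ${name}${punctuation}`;
}

console.log(greet());           // "Hello world!"
console.log(greet("Matthieu")); // "Hello Matthieu!"
```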
The controversial one
Do you like Java? Well, ES6 is now Java. Period. Next.
Joke aside.
There are many features that have been stolen (ahem, implemented) from Java. The most obvious ones:
Classes:
class Animal {
  constructor(id, name) {
    this.id = id;
    this.name = name;
  }
}
Inheritance:
class Bird extends Animal {
  constructor(id, name, featherType) {
    super(id, name);
    this.featherType = featherType;
  }
}
Eric Elliott said about classes that they are "the marauding invasive species"; you can (must) have a look at his well-described article here.
If you are ready to fight for your own style, prepare to receive friendly-fire from many of your Angular 2 colleagues.
Conclusion
ES6 comes with new features that make JavaScript look like a different language.
Some features are really easy to use, like multi-line strings. Others will however require a little bit of time to master, like the use of constants.
The next post will be about generators, promises, symbols, destructuring assignment and many other features.
https://javascripttuts.com/es6-features-can-make-life-easy/
Resetting the Map
Introduction
While you can pretty much copy and paste the code provided here into your mod, it's best to read through it and gain an understanding of what's going on; to that end I'm going to explain each section of code as I deal with it. What I won't do is tell you where to find the files mentioned, it should be simple enough if you have the SDK open when reading this article.
The goal of this article is to enable a mod to restart a round by resetting all map entities to their default locations, the code provided is for the Source Only SDK (not the HL2MP SDK), it may still work in the HL2MP SDK but it has not been tested!
Note: I have had success with this. It can be seen in use in The Battlegrounds II. The code is available with each release or at svn://bgmod.com/code/ - Draco Houston, a BG2 coder.
There are a couple of things we need to do to restart a round, we need to get a copy of the map entity list, store a filter for entities we don't want reset and finally add some logic to gamerules that specifies what to do with all that stored information.
Creating the entity filter
As there is no built in filter class that is suitable for us to use we need to create our own. So, add two new empty files to your server project: mapfilter.cpp and mapfilter.h. - I recommend creating a separate mod folder for any new files you add to the solution and also adding a prefix to the file names to distinguish between valve code and mod code.
Mapfilter.h
#include "cbase.h"
#include "mapentities.h"
#include "UtlSortVector.h"

#ifndef CMAPENTITYFILTER_H
#define CMAPENTITYFILTER_H

typedef const char* strptr;

static bool StrLessThan( const strptr &src1, const strptr &src2, void *pCtx )
{
	if( strcmp(src1, src2) >= 0)
		return false;
	else
		return true;
}

class CMapEntityFilter : public IMapEntityFilter
{
public:
	// constructor
	CMapEntityFilter();
	// deconstructor
	~CMapEntityFilter();
	// used to check if we should reset an entity or not
	virtual bool ShouldCreateEntity( const char *pClassname );
	// creates the next entity in our stored list.
	virtual CBaseEntity* CreateNextEntity( const char *pClassname );
	// add an entity to our list
	void AddKeep( const char* );
private:
	// our list of entities to keep
	CUtlSortVector< const char* > *keepList;
};

#endif
MapFilter.cpp
#include "cbase.h"
#include "mapfilter.h"

// Constructor
CMapEntityFilter::CMapEntityFilter()
{
	keepList = new CUtlSortVector< const char *> (StrLessThan);
}

// Deconstructor
CMapEntityFilter::~CMapEntityFilter()
{
	delete keepList;
}

// [bool] ShouldCreateEntity [char]
// Purpose   : Used to check if the passed in entity is on our stored list
// Arguments : The classname of an entity
// Returns   : Boolean value - if we have it stored, we return false.
bool CMapEntityFilter::ShouldCreateEntity( const char *pClassname )
{
	// Check if the entity is in our keep list.
	if( keepList->Find( pClassname ) >= 0 )
		return false;
	else
		return true;
}

// [CBaseEntity] CreateNextEntity [char]
// Purpose   : Creates an entity
// Arguments : The classname of an entity
// Returns   : A pointer to the new entity
CBaseEntity* CMapEntityFilter::CreateNextEntity( const char *pClassname )
{
	return CreateEntityByName( pClassname );
}

// [void] AddKeep [char]
// Purpose   : Adds the passed in value to our list of items to keep
// Arguments : The class name of an entity
// Returns   : Void
void CMapEntityFilter::AddKeep( const char *sz )
{
	keepList->Insert( sz );
}
This code is based on code provided by stefaniii of the VERC forums, the original code and discussion can be found in the following post: Tutorial: Creating a Roundtimer (VERC forums) ().
Putting it all together
So, we have the building blocks in place it's time to make the magic happen! We need to make some changes to our gamerules, namely we want to add a RestartRound() function. So open up sdk_gamerules.h
// only include the mapfilter if we're in the server dll.
#ifdef GAME_DLL
#include "mapfilter.h"
#endif
Add the include and surrounding #if block to the top of the file, then using a similar #if block declare the restart round function and an instance of the filter class - like so:
#ifdef GAME_DLL
// restart round function declaration
void RestartRound(void);
// map entity filter instance.
CMapEntityFilter filter;
#endif
With that done save sdk_gamerules.h and open sdk_gamerules.cpp.
Somewhere sensible (I'd recommend the constructor) put the following:
filter.AddKeep("worldspawn");
filter.AddKeep("soundent");
filter.AddKeep("sdk_gamerules");
filter.AddKeep("scene_manager");
filter.AddKeep("predicted_viewmodel");
filter.AddKeep("sdk_team_manager");
filter.AddKeep("event_queue_saveload_proxy");
filter.AddKeep("player_manager");
filter.AddKeep("player");
filter.AddKeep("info_player_start");
filter.AddKeep("ai_network");

This creates our list of entities to keep. With further work your mod may require more of these to be added; simply use

filter.AddKeep("entity_name");

Though I'd hope that is fairly obvious by now!
Now for the core of the matter, the RestartRound() function!
// [void] RestartRound [void]
// Purpose   : Trigger the resetting of all map entities and the respawning of all players.
// Arguments : Void
// Returns   : Void
void CSDKGameRules::RestartRound()
{
	CBaseEntity *pEnt;
	CBaseEntity *tmpEnt;

	// find the first entity in the entity list
	pEnt = gEntList.FirstEnt();

	// as long as we've got a valid pointer, keep looping through the list
	while (pEnt != NULL)
	{
		if (filter.ShouldCreateEntity( pEnt->GetClassname() ))
		{
			// if we don't need to keep the entity, we remove it from the list
			tmpEnt = gEntList.NextEnt( pEnt );
			UTIL_Remove( pEnt );
			pEnt = tmpEnt;
		}
		else
		{
			// if we need to keep it, we move on to the next entity
			pEnt = gEntList.NextEnt( pEnt );
		}
	}

	// force the entities we've set to be removed to actually be removed
	gEntList.CleanupDeleteList();

	// with any unrequired entities removed, we use MapEntity_ParseAllEntities to reparse the map entities
	// this in effect causes them to spawn back to their normal position.
	MapEntity_ParseAllEntities( engine->GetMapEntitiesString(), &filter, true );

	// print a message to all clients telling them that the round is restarting
	UTIL_ClientPrintAll( HUD_PRINTCENTER, "Round restarting..." );

	// now we've got all our entities back in place and looking pretty, we need to respawn all the players
	for (int i = 1; i <= gpGlobals->maxClients; i++)
	{
		CBaseEntity *plr = UTIL_PlayerByIndex( i );
		if ( plr )
		{
			plr->Spawn();
		}
		else
		{
			break;
		}
	}

	roundRestart = false;
}
The comments should explain what this function does and how it does it - there is one thing to note, the function does not deal with client side temporary effects (debris, decals etc). I will not cover that in this tutorial right now but an update may be forthcoming in the long run.
There are likely better, more efficient ways of doing this, but I've found this method works without any hassle and very little effort. The code contained in this article works very well when used in conjunction with Creating a Roundtimer.
UPDATE : now uses engine->GetMapEntitiesString() instead of the custom cache of the entity list. Forced the entities removed with UTIL_Remove to actually be removed from the list as soon as possible.
https://developer.valvesoftware.com/w/index.php?title=Resetting_the_Map&oldid=172232
(Almost) out-of-the box logging
Project description
(Almost) out-of-the box logging. Frustum is a wrapper around the standard library's logging module, so you don't have to write the same boilerplate again.
Install:
pip install frustum
Usage:
from frustum import Frustum

# Initialize with verbosity from 1 to 5 (critical to info)
frustum = Frustum(verbosity=5, name='app')

# Register all the events that you want within frustum
frustum.register_event('setup', 'info', 'Frustum has been setup in {}')

# Now you can use the registered events in this way
frustum.log('setup', 'readme')

# The previous call would output:
# INFO:app:Frustum has been setup in readme
# into your stdout (as per standard logging configuration)
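For comparison, here is roughly the boilerplate that frustum wraps, using the standard logging module directly (the logger name 'app' mirrors the example above):

```python
import logging

# Plain stdlib equivalent of the frustum example above
logging.basicConfig(level=logging.INFO)
log = logging.getLogger('app')
log.info('Frustum has been setup in %s', 'readme')
# prints: INFO:app:Frustum has been setup in readme
```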
https://pypi.org/project/frustum/0.0.2/
Extensions let us modify Swift's data types to add new functionality in a really clean way – our new code is indistinguishable from existing code.
Let's start with an extension that adds one to an integer. Yes, I realize this is essentially just += 1, but we're starting simple. First, add this integer:

var myInt = 0

Now add this to the playground, just beneath the import UIKit statement:

extension Int {
    func plusOne() -> Int {
        return self + 1
    }
}
extension Int tells Swift that we want to add functionality to its Int struct. We could have used String, Array, or whatever instead, but Int is a nice easy one to start.
How the extension works will become clear once you use it. Put this line just below the end of the extension:
myInt.plusOne()
In the playground output you'll now see 0 for the first line and 1 for the second, so calling plusOne() has returned a number one higher than the number we called it on.
The extension has been added to all integers, so you can even call it like this:
5.plusOne()
When you do that, you'll see 6 in the output column.
Our extension adds 1 to its input number and returns it to the caller, but doesn't modify the original value. Try typing this:
var myInt = 10
myInt.plusOne()
myInt

Using a variable by itself tells the playground just to output its value, so in the output column you'll see 10, then 11, then 10 again. This is the original value, the return from the plusOne() method, and the original, unchanged value.
To push things a little further, let's modify the plusOne() method so that it doesn't return anything, instead modifying the instance itself – i.e., the input integer.
extension Int {
    func plusOne() {
        self += 1
    }
}
That removes the return value because we aren't returning anything now, and it uses the += operator to add one to self. But this won't work, and in fact Xcode will give you a wonderfully indecipherable error message: "Left side of mutating operator isn't mutable: 'self' is immutable"

What Xcode really means is that Swift doesn't let you modify self inside an extension by default. The reason is that we could call plusOne() using 5.plusOne(), and clearly you can't modify the number 5 to mean something else.
So, Swift forces you to declare the method mutating, meaning that it will change its input. Change your extension to this:

extension Int {
    mutating func plusOne() {
        self += 1
    }
}
…and now the error message will go away. Once you have declared a method as being mutating, Swift knows it will change values so it won't let you use it with constants. For example:

var myInt = 10
myInt.plusOne()

let otherInt = 10
otherInt.plusOne()
The first integer will be modified correctly, but the second will fail because Swift won't let you modify constants.
It's extremely common for developers to use extensions to add functionality to things. In some ways, extensions are similar to subclasses, so why use extensions at all?
The main reason is extensibility: extensions work across all data types, and they don't conflict when you have more than one. That is, we could make a Dog subclass that adds bark(), but what if we find some open source code that contains a doTricks() method? We would have to copy and paste it in to our subclass, or perhaps even subclass again.
With extensions you can have ten different pieces of functionality in ten different files – they can all modify the same type directly, and you don't need to subclass anything. A common naming scheme for naming your extension files is Type+Modifier.swift, for example String+RandomLetter.swift.
If you find yourself trimming whitespace off strings frequently, you'll probably get tired of using this monstrosity:
str = str.trimmingCharacters(in: .whitespacesAndNewlines)
…so why not just make an extension like this:
extension String {
    mutating func trim() {
        self = trimmingCharacters(in: .whitespacesAndNewlines)
    }
}
You can extend as much as you want, although it's good practice to keep differing functionality separated into individual files.
https://www.hackingwithswift.com/read/0/23/extensions
lag. This also gives great opportunity to run production scripts to flash new firmware or automate other tasks.
import sys, re
import subprocess
import time
import serial.tools.list_ports

def grep(regexp):
    for port, desc, hwid in serial.tools.list_ports.comports():
        if re.search(regexp, port, re.I) or re.search(regexp, desc) or re.search(regexp, hwid):
            yield port, desc, hwid

# Discover all COM ports and show them
port_list_initial = serial.tools.list_ports.comports()
for port, desc, hwid in port_list_initial:
    print('-', port, desc)

print('\nWaiting for new USB to SERIAL device to be plugged in...\n')

# Wait for new device to be plugged in and show it
while True:
    port_list_poll = serial.tools.list_ports.comports()
    for p in port_list_poll:
        if p not in port_list_initial:
            print('-', p)
            input('\nAll done, press enter to quit')
            sys.exit(1)
    else:
        time.sleep(0.5)  # Don't poll too often
Also code is uploaded to GitHub
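The core trick in the script above is diffing two snapshots of the port list. That part needs no hardware and can be sketched with plain lists (the port names below are made up):

```python
def new_items(initial, current):
    """Return items present in `current` but missing from `initial`."""
    return [item for item in current if item not in initial]

# Snapshot taken before plugging the device in, and one taken after
initial = ["COM1", "COM3"]
after_plug = ["COM1", "COM3", "COM7"]
print(new_items(initial, after_plug))  # ['COM7']
```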
https://www.kurokesu.com/main/2019/02/01/which-usb-to-com-port-is-the-most-recent-one/
Return the current error status of the decoder
#include <pps.h>

pps_decoder_error_t pps_decoder_status( pps_decoder_t *decoder,
                                        bool clear );
libpps
The function pps_decoder_status() returns the current error status of the decoder. If an error occurs during an attempt to extract data or push into objects, the decoder is set to an error state. Rather than check return codes after every operation, you can perform a series of operations and then check whether the entire set completed successfully.
The error status of the decoder. See pps_decoder_error_t.
QNX Neutrino
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.pps.developer/topic/api/pps_decoder_status.html
User talk:Leo
Translation group
Hello Leo,
I would like to form a group for the french translation! Are you interested? (cf :). You can contact me by email, answer on your discussion page, or answer on mine.
--Feng (talk) 06:35, 5 April 2017 (UTC)
Translation pages
Hi Leo,
You can join us on the #gentoo-wiki channel (IRC) to exchange ideas, ask questions, etc. In fact, I disagree with your translation of the term "handbook". Therefore, a common glossary should be created to avoid having different translations. I would have liked to create a common space for the translation activity. This space will improve the translation and the cooperation (share knowledge). Unfortunately, currently, I do not have much time available. I have already start a terminology page (& Vodinos ). The Russian translation team already has a common glossary (cf. the history). Temporarily, I suggest you learn how to use the wiki or improve your Wiki skills. If you want, you can also define and propose a structure for the common space. You can also expose your ideas for the Wiki on the main page.
--Feng (talk) 08:13, 8 April 2017 (UTC)
- Hi Feng,
- Thank you for comments about my translations. Please forgive my incorrect term for "handbook", I fixed it by looking for the other user's translation. My translations were wrong because I didn't understand what handbook really means. I'll take a look at terminology page.
- Thank you for your comments, and please forgive my errors and/or my ignorance about how the Gentoo Wiki works.
I suggest that you make a glossary grouping all the technical terms you have translated. Thus, our translation will be improved. --Feng (talk) 20:18, 18 April 2017 (UTC)
Translation notes
I have already translated (recently) the main page of the handbook. I'm going to update this translation. The last translation has been "cleared" because the English article has been updated. It would be better to translate the handbook after establishing an organization. A list of articles to be translated in priority order and the associated verification tasks, should be defined. For example, the handbook is an important set of articles that should be reviewed by several people (I have defined the "Role" status in the following table to achieve this). Currently, the organization of the French translation is not clearly defined and established. You can get a list of pages marked for translation on the page Special:PageTranslation. It would be great if you propose, on this talk page, several pages to translate and effectively translate one of these pages, then, I hope I will have enough time to suggest improvements or make comments. --Feng (talk) 13:03, 8 April 2017 (UTC)
- Hello,
- I'll take a look at this page, but how to tell you what pages I chose ? There is not any button "Talk" or "Discuss" on this page. Should I write it on User talk:Leo page ?
- Please forgive my (maybe stupid) questions, it's the first time I help for a translation project.
- --Leo (talk) 17:00, 8 April 2017 (UTC)
- You can write a comment on your talk page (the discussion is opened here), you can send me an email (watch on the sidebar) or talk with me on IRC (most frequently on #gentoo-wiki, sometimes on #gentoofr). --Feng (talk) 18:43, 8 April 2017 (UTC)
- Hi,
- Thank you for your answer, I'll first begin by translating Fontconfig page.--Leo (talk) 15:22, 10 April 2017 (UTC)
- Hello,
- As I finished to translate the fontconfig page, I'd like to translate the Awesome page. Is it okay with it ? --Leo (talk) 17:25, 18 April 2017 (UTC)
- I believe the article named "Awesome" can be an exercice if you try to improve the previous translations so I think it is okay. Your translation of the "fontconfig" article needs to be reviewed by someone else. I also recommend that you keep the term "Use Flag" as it is (cf. this thread). --Feng (talk) 20:08, 18 April 2017 (UTC)
- Hello, thank you for your message, and for your translation, I've consequently changed the "Use Flag" translation of Fontconfig. By the way, is there a way to tell to the translator what changes I have done ? --Leo (talk) 15:22, 22 April 2017 (UTC)
Gentoo Wiki Guidelines
Leo, you are allowed to create a page with content in the main namespace but you are not allowed to create an empty page. This type of "edition" should only occur in your userspace!!! Your contribution for the french translation is welcomed here: User:Feng/Gentoo Traduction! --Feng (talk) 18:01, 16 April 2017 (UTC)
- Hello Feng,
- Please forgive me, I didn't realize I was doing something wrong. Did you successfully delete it ? What should I write on User:Feng/Gentoo Traduction ? Is a formatted way to write in it ? --Leo (talk) 17:18, 18 April 2017 (UTC)
- The situation is a bit embarrassing because I do not know when the main page will be written and if it will be redirected at this location. We must follow the Wiki guidelines to edit the Wiki - Help:Contents! I will inquire for the page. --Feng (talk) 19:48, 18 April 2017 (UTC)
https://wiki.gentoo.org/wiki/User_talk:Leo
Build an InstantSearch Results Page
Scope of this tutorial
This tutorial will teach you how to implement a simple instant search refreshing the whole view “as you type” by using Algolia’s engine and the Swift API Client.
Note that as of July 2017, we have released a library to easily build instant search result pages with Algolia, called InstantSearch iOS. We highly recommend that you use InstantSearch iOS to build your search interfaces. You can get started with either the Getting started guide, the InstantSearch repo or Example apps.
Let’s see how it goes with a simple use-case: searching for movies.
Covered subjects include:
- results as you type
- highlighting the matched words
- infinite scrolling
Records
For this tutorial, we will use an index of movies but you could use any of your data. To import your own data into an Algolia index, please refer to our import guide.
Here is an extract of what our records look like:
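The snippet that originally followed did not survive extraction. A hypothetical record with the fields the tutorial relies on (a title and an image URL) might look like this; the exact field names are assumptions:

```json
{
  "title": "The Godfather",
  "year": 1972,
  "image": "https://example.org/posters/the-godfather.jpg",
  "objectID": "movie_1"
}
```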
Initialization
The whole application has been built in Swift with Xcode.
New project
The first thing we need to do is to create a new Swift project: iOS > Application > Single View Application.
Dependencies
The following code depends on the following third-party libraries:
- AFNetworking to easily load images asynchronously
- Algolia Search API Client for Swift to communicate with Algolia’s REST API.
- a simple extension of UILabel with highlighting support to highlight the matching words
You can install all the dependencies with CocoaPods. Add the following lines in a Podfile file:
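The Podfile contents were lost in extraction; based on the dependency list above, it would look roughly like this (the target name is hypothetical, and the pod names should be double-checked against each library's README):

```ruby
use_frameworks!

target 'MovieSearch' do
  pod 'AFNetworking'
  pod 'AlgoliaSearch-Client-Swift'
end
```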
And then type pod install in your command line and open the generated Xcode Workspace.
For the UILabel extension, just drag and drop the file in your project.
UI
Storyboard
Let's start building the UI. In the Storyboard, we remove the basic UI generated by Xcode and we drag-n-drop a Navigation Controller.
Then, we set the Style attribute of the Table View Cell to Right Detail and its Identifier to movieCell.
Search bar
Since iOS 8, UISearchDisplayController is deprecated and you should now use UISearchController. Unfortunately, at the time of writing, Interface Builder is not able to create the new UISearchController, so we must create it in code.
Let's remove ViewController.swift and create a new subclass of UITableViewController: File > New File > iOS > Source > Cocoa Touch Class.
In the new file that we have just created, we add the search bar initialization in the viewDidLoad method and a new property searchController.
Then, we have to implement two protocols: UISearchBarDelegate and UISearchResultsUpdating.
We have two more things to do in the Storyboard:
- select the Navigation Controller and check the attribute Is the initial View Controller,
- and set the Custom Class of the table view to our subclass of UITableViewController.
We can launch the application and see the result.
Search logic
Movie model
Let's first create a model that has the same structure as our records. In our case, we create MovieRecord in a new file.
Search movies
In the viewDidLoad method, we initialize the Algolia Search API Client. Don't forget to add import AlgoliaSearch at the beginning of the file.
We will store the results of the search query in an array.
All the logic goes inside the updateSearchResultsForSearchController method. We use a closure to asynchronously process the results once the API returns the matching hits. Inside the closure, we need to check that the result is newer than the result currently displayed, because we cannot ensure the ordering of the network calls/answers. We transform the resulting JSON hits into our movie model. Don't forget to add import SwiftyJSON too.
Display the matching movies
At this point, we have the result of the query saved in the movies array, but we don't display anything to the user yet. We can now implement the methods of UITableViewController so the controller will update the view. Add import AFNetworking at the beginning of the file so we can use AFNetworking to asynchronously load the images from their URL, embed them inside the UIImageView and cache them to avoid further reloading.
Infinite scroll
As you can see, we currently load 15 results per search. We now have to implement another method that will load the next page of the displayed query.
We should call this method inside tableView(tableView: UITableView, cellForRowAtIndexPath: NSIndexPath), just before the return.
See it in action
Check the source code on GitHub.
Filtering your results
There are multiple ways to filter your results using Algolia. You can filter by date, by numerical value and by tag. You can also use facets, which are a filter with the added benefits of being able to retrieve and display the values to filter by.
Filtering
There are several ways: you can filter by date, by numerical value, by tag, or by facet.

Note: when using the facet method with summed values, the resulting values are available in the facets_stats attribute of the JSON answer.
https://www.algolia.com/doc/guides/building-search-ui/getting-started/how-to/build-an-instant-search-results-with-swift/
I'm trying to implement requests retry in Python. It works like a charm with .get() requests, but a .post() request never retries, regardless of the status code. I'd like to use it with .post() requests.
My code:
from requests.packages.urllib3.util import Retry
from requests.adapters import HTTPAdapter
from requests import Session, exceptions

s = Session()
s.mount('http://', HTTPAdapter(max_retries=Retry(total=2, backoff_factor=1,
        status_forcelist=[500, 502, 503, 504, 521])))
r = s.get('')
r2 = s.post('')
So, the .get() requests do retry and the .post() ones do not.
What’s wrong?
In urllib3, POST is not allowed as a retried method by default (since it can cause multiple inserts). You can force it though:
Retry(total=3, allowed_methods=frozenset(['GET', 'POST']))
You can use tenacity:

pip install tenacity

And you can log before or after each attempt:

import sys
import logging

from tenacity import retry, stop_after_attempt, before_log

logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)
logger = logging.getLogger(__name__)

class MyException(Exception):
    pass

@retry(stop=stop_after_attempt(3), before=before_log(logger, logging.DEBUG))
def post_something():
    # post
    raise MyException("Fail")
The answers/resolutions are collected from stackoverflow, are licensed under cc by-sa 2.5 , cc by-sa 3.0 and cc by-sa 4.0 .
https://techstalking.com/programming/python/how-to-make-python-post-requests-to-retry/
I’m a Rubyist learning Python and I’m wondering if there is a convention in Python along the lines of the following.
In Ruby, methods that return booleans are supposed to always end in a ?. For example,

def palindrome?(string)
  # some code that tests whether string is a palindrome
end
The only existing SO question I can find speaking to this does not provide a definitive answer.
There is no standard naming convention specific to boolean-returning methods. However, PEP8 does have a guide for naming functions.
Function names should be lowercase, with words separated by underscores as necessary to improve readability.
Typically, people start a function with is (e.g. is_palindrome) to describe that a boolean value is returned.
I concur with @CarolChen on the principle of turning to PEP8, the Python Style Guide, for guidance. I will suggest however that “as necessary to improve readability” is in the eye of the beholder. For example, each of these functions are used in Python either as functions of the
str object or as builtin functions. These are as fundamental as it gets in the Python ecosystem and are good examples of a usage style focused on returning a boolean state AND having easily readable function names.
str. methods
isalnum() isalpha() isdecimal() isdigit() isidentifier() islower() isnumeric() isprintable() isspace() istitle() isupper()
builtin functions:
isinstance() issubclass()
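A quick illustration of how the is* style reads in practice, using the built-ins listed above:

```python
# Boolean-returning calls read like yes/no questions
print("abc123".isalnum())     # True
print("abc 123".isalnum())    # False (the space is not alphanumeric)
print(isinstance(42, int))    # True
print(issubclass(bool, int))  # True: bool is a subclass of int
```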
You can define it like:

def is_palindrome(variable):
    # your logic here, returning True or False; for example:
    return variable == variable[::-1]
https://techstalking.com/programming/python/python-boolean-methods-naming-convention/
FitNesse.SuiteAcceptanceTests.SuiteFixtureTests.SuiteGeneralFixtureSpec.TestBlankAndNullCells
If a cell contains "blank" or "null" then treat it as truly blank or truly null.
Lots of people have had trouble with blank cells. In Fit, blank cells are automatically filled with the value of the variable or function, and no check is performed. Unfortunately this means that there was no good test for truly null or truly blank fields. So these keywords were added to allow users to enter them.
public class NullAndBlankFixture extends ColumnFixture {
  public String nullString;
  public String blankString;

  public String nullString() { return null; }
  public String blankString() { return ""; }

  public boolean isNull() { return nullString == null; }
  public boolean isBlank() { return blankString.length() == 0; }
}
http://fitnesse.org/FitNesse.SuiteAcceptanceTests.SuiteFixtureTests.SuiteGeneralFixtureSpec.TestBlankAndNullCells
The $if statement has been enhanced to allow constant expression evaluation. To match the flavor of the basic $if statement, we have introduced two forms:
$ife expr1 == expr2   true if (expr1-expr2)/(1+abs(expr2)) < 1e-12
$ife expr             true if expr <> 0
The expressions follow the standard GAMS syntax and are explained in more detail below. The == form is convenient when expecting rounding errors. For example, we can write
$ife log2(16)^2 == 16 (true statement)
which is more convenient than having to write
$ife NOT round(log2(16)^2-16,10)  (true statement)
$ife round(log2(16)^2,10)=16      (true statement)
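The tolerance rule behind == can be illustrated outside GAMS. The sketch below mirrors the documented formula in plain Python (taking the absolute difference, which is the intended relative-difference test):

```python
import math

def ife_eq(expr1, expr2, tol=1e-12):
    """Mimic GAMS `$ife expr1 == expr2`: true when the relative difference is tiny."""
    return abs(expr1 - expr2) / (1 + abs(expr2)) < tol

print(ife_eq(math.log2(16) ** 2, 16))  # True, even in the presence of rounding
print(ife_eq(15.9, 16))                # False
```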
A new variant on the $if statement has been introduced. It follows the usual structures and allows appropriate nesting. The syntax for the condition is the same as for the $if statement. The $ifthen and $elseif have the same modifiers as the $if statement, namely I for case insensitive compare and E for constant expression evaluation. In the example below we will execute all blocks of such a statement.
$maxgoto 10
$set x a
$label two
$eval x 1
$label three
display 'x=%x%';
$ifthen %x% == 1
$eval x %x%+1
$elseif %x% == 2
$eval x %x%+1
$elseif %x% == 3
$eval x %x%+1
$elseif %x% == 4
$eval x %x%+1
$else
$set x done
$endif
$if NOT %x% == done $goto three
This is a bit contrived but illustrates some of the more subtle features. Anytime we use a looping construct via a $goto statement, we have to protect ourselves against the potential of an infinite loop. The number of times we jump back to a label is counted, and when the limit is reached, GAMS will issue an error. It is important to note that the %string% references are substituted only once.
Lengthy and nested $ifthen/$else structures can become difficult to debug. Tagging of the begin (the $ifthen) and the end (the $endif) can be helpful. For example, the next line will fail because the tags do not match:
$ifthen.one x == x
$endif.one
As with the $if statement, the statement on the line with the $ifthen style statements is optional. The following two statements give the same results:
$iftheni %type% == low $include abc
$elseifi %type% == med $include efg
$else $include xyz
$endif

$iftheni %type% == low
$include abc
$elseifi %type% == med
$include efg
$else
$include xyz
$endif
The statements directly following a $ifthen, $elseif, or $else on the same line can be a sequence of other dollar control statements or proper GAMS syntax. The statements directly following a $endif on the same line can only be other dollar control statements.
$ifthen.two c==c
display 'true for tag two';
$ifthen.three a==a
$log true for tag three
display ' then clause for tag three';
$ifthen.four x==x
display 'true for tag four';
$log true for tag four
$else
display ' else clause for tag four';
$endif.four $log endif four
$endif.three $log endif three
$endif.two $log endif two
This will produce a GAMS program like
1 display 'true for tag two'; 3 display ' then clause for tag three'; 4 display 'true for tag four';
with the following log output
--- Starting compilation true for tag three true for tag four endif four endif three endif two
The three new dollar control options $eval, $evalLocal and $evalGlobal are similar to $set, $setlocal and $setglobal. These statements assign values to 'environment variables' -- named strings to be substituted by the %var% reference. The $eval options interpret the argument as a constant expression (see below for more details) and encode the result as a string. For example:
$if NOT set d $eval d 5
$eval h '24*%d%'
$eval he '0'
$eval dd '0'
Sets d days  / day1*day%d%/
     h hours / hour1*hour%h% /
     dh(d,h) /
$label more
$eval dd '%dd%+1'
$eval hs '%he%+1'
$eval he %he%+24
day%dd%.hour%hs%*hour%he%
$ife %dd%<%d% $goto more
/;
will produce the expanded input source code below:
3 Sets d days / day1*day5/ 4 h hours / hour1*hour120 / 5 dh(d,h) / 7 day1.hour1*hour24 10 day2.hour25*hour48 13 day3.hour49*hour72 16 day4.hour73*hour96 19 day5.hour97*hour120 21 /;
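The compile-time loop above is in effect a small code generator. To make its arithmetic concrete, here is an illustrative runtime sketch in Python (the function name is invented for the example, and this is of course not GAMS code) that emits the same day/hour mapping lines:

```python
# Hypothetical sketch: what the compile-time $eval/$goto loop above computes.
# For d days of 24 hours each, emit one day.hour-range line per day.
def day_hour_mapping(d):
    lines = []
    he = 0                      # running "hour end" counter, like %he%
    for dd in range(1, d + 1):  # like %dd% counting up to %d%
        hs = he + 1             # hour start for this day
        he += 24                # hour end for this day
        lines.append("day%d.hour%d*hour%d" % (dd, hs, he))
    return lines

mapping = day_hour_mapping(5)
print(mapping[0])   # day1.hour1*hour24
print(mapping[-1])  # day5.hour97*hour120
```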
The syntax of constant expressions used in data statements and dollar control conditions follows GAMS syntax, but is restricted to scalar values and a subset of operators and functions, as summarized below:
OR XOR EQV IMP
AND
NOT
<  <=  =  <>  >=  >    (LT LE EQ NE GE GT)
+  -   (binary and unary)
*  /
^  **
abs ceil cos exp floor frac IfThen log log2 log10 max min mod PI power round sign sin sleep sqr sqrt tan trunc
When used in data statements, constant expressions have to be enclosed in a pair of square brackets [ ] or curly braces { }. Spaces can be used freely inside those brackets. When used in dollar control options like $eval or $if statements, brackets are not required; however, we may have to enclose the expression in quotes (single or double) if we want to embed spaces or continue with additional dollar control options on the same line. For example, when using $eval followed by another $ statement:
$eval x 3 / 5 * pi
$eval y "3/5*pi"
$eval z pi / 2
$eval a ' exp(1) / pi '
$set b anything goes here
$show

Level SetVal Type   Text
--------------------------------------------
    0 x      SCOPED 1.88495559215388
    0 y      SCOPED 1.88495559215388
    0 z      SCOPED 1.5707963267949
    0 a      SCOPED 0.865255979432265
    0 b      SCOPED anything goes here
As with other dollar control statements, without the quotes, the entire remaining line will be interpreted as one argument.
The $ife and $ifthene/$elseife statements have a related problem, inherited from mimicking the Windows bat and cmd scripting conventions: when a constant expression contains spaces, it has to be enclosed in quotes, as shown below.
$ife (2+3)>=(2+2)           display 'OK, no spaces';
$ife ' (2 + 3) >= (2 + 2) ' display 'now we need quotes';
Finally, here are some data examples:
Scalars x PI half       / {pi/2} /,
        e famous number / [ exp( 1 ) ] /;
Parameter y demo / USA.(high,low) [1/3], USA.medium {1/4} /;
Information about the licensing process is now available at compile and execution time.
Two new system environment variables, LicenseStatus and LicenseStatusText, complement the other license-related variables. In addition, two functions have been added to retrieve the licensing level and status. The use of these variables is demonstrated in the updated library model licememo. Here is an extract:
$set filename %gams.license%
$if '%filename%' == '' $set filename %gams.sysdir%gamslice.txt
if(%system.licensestatus%,
   put '**** Error Message: %system.licensestatustext%'
     / '**** License file : %filename%'
     / '**** System downgraded to demo mode'// );
If called with an incorrect license, the report may contain:
**** Error Message: could not open specified license file **** License file : garbage **** System downgraded to demo mode
The variable system.licensestatus returns zero if no error has been encountered by the licensing process. The variable system.licensestatustext contains the respective explanation of a licensing failure. The above example uses compile-time string substitutions, which are not updated when executing a precompiled work file.
Two new functions, LicenseLevel and LicenseStatus, provide this information at runtime as well.
Some special features for option files or other information files require writing completely expanded references of indexed identifiers like variables or equations. For example, the options for CPLEX indicator variables can now be written more compactly:
loop(lt(j,jj),
   put / 'indic ' seq.tn(j,jj) '$' y.tn(j,jj) yes
       / 'indic ' seq.tn(jj,j) '$' y.tn(j,jj) NO );
This will produce
indic seq('A','B')$y('A','B') YES indic seq('B','A')$y('A','B') NO indic seq('A','C')$y('A','C') YES indic seq('C','A')$y('A','C') NO indic seq('B','C')$y('B','C') YES indic seq('C','B')$y('B','C') NO
Besides the more compact GAMS code, it provides complete syntax checking at compile time.
New syntax has been added to extract more information about a controlling set. It is similar to the ord(i) function but uses the dot notation. The new suffixes are i.val (the numeric value of the label), i.len (the length of the label), i.off (the offset, starting at 0), and i.pos (the position, starting at 1).
The following example illustrates some of those new features:
set i / '-inf',1,12,24,'13.14',inf /;
parameter report;
report(i,'value')    = i.val;
report(i,'length')   = i.len;
report(i,'offset')   = i.off;
report(i,'position') = i.pos;
display report;
The display shows
---- 6 PARAMETER report value length offset position -inf -INF 4.000 1.000 1 1.000 1.000 1.000 2.000 12 12.000 2.000 2.000 3.000 24 24.000 2.000 3.000 4.000 13.14 13.140 5.000 4.000 5.000 inf +INF 3.000 5.000 6.000
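To make the four suffixes concrete, here is a hypothetical Python sketch (function and key names invented for illustration) that computes the same value/length/offset/position report for the labels of the set above:

```python
# Illustrative analogue of the GAMS set suffixes .val/.len/.off/.pos
labels = ['-inf', '1', '12', '24', '13.14', 'inf']

def suffixes(labels):
    report = {}
    for off, lab in enumerate(labels):
        try:
            val = float(lab)           # i.val: numeric value of the label
        except ValueError:
            val = float('nan')
        report[lab] = {
            'value': val,
            'length': len(lab),        # i.len: number of characters in the label
            'offset': off,             # i.off: offset, starting at 0
            'position': off + 1,       # i.pos: position, starting at 1
        }
    return report

r = suffixes(labels)
print(r['24'])  # {'value': 24.0, 'length': 2, 'offset': 3, 'position': 4}
```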
http://www.gams.com/docs/release/rel_cmex226.htm
David Mertz, Ph.D ([email protected]), Data Masseur, Gnosis Software, Inc.
01 Aug 2000
In the first installment of his new 'XML Matters' column -- and as part of his ongoing quest to create a more seamless integration between XML and Python -- David Mertz presents the xml_pickle module. Mertz discusses the design goals and decisions that went into xml_pickle and provides a list of likely uses.
What is XML? What is Python?
XML is a simplified dialect of the Standard Generalized Markup
Language (SGML). Many of you are familiar with SGML
via HTML. Both XML and HTML documents are composed of text interspersed with, and
structured by, markup tags in angle brackets. But XML
encompasses many systems of tags that allow XML documents to be
used for many purposes, including:
A set of tags can be created to capture any
sort of structured information you might want to represent,
which is why XML is growing in popularity as a common standard
for representing diverse information.
Python is a freely available, very high-level, interpreted
language developed by Guido van Rossum. It combines clear
syntax with powerful (but optional) object-oriented semantics.
Python is available for a range of computer platforms and offers strong portability
between platforms.
Introduction to the project
There are a number of techniques and tools for dealing with
XML documents in Python. (The Resources section provides links
to two developerWorks articles in which I discuss
general techniques.) But in an ideal environment all constructs fit
intuitively into their domain, and domains merge seamlessly.
When they do, programmers can wax poetic rather than merely
make it work.
I've begun a research project of creating a more
seamless and more natural integration between XML and Python.
In this article, and subsequent articles in this column, I'll
discuss some of the goals, decisions, and limitations of the
project; and hopefully provide you with a set
of useful modules and techniques. On the one hand,
having a wide range of native facilities in Python makes it
easier to represent a wide range of XML structures. On the other hand, the
range of native types and structures of Python makes for more cases to worry about in representing native Python objects in XML. As a result of
these asymmetries between XML and Python, the
project -- at least initially -- contains two separate modules: xml_pickle, for representing arbitrary Python objects in XML, and xml_objectify, for "native"
representation of XML documents as Python objects. We'll address xml_pickle in this article.
Part I: xml_pickle
Python's standard pickle module already provides a simple and
convenient method of serializing Python objects that is useful for persistent storage or transmission over a
network. In some cases, however, it is desirable to perform
serialization to a format with several properties not possessed
by pickle. Namely, a format that:
xml_pickle provides each of these
features while maintaining interface compatibility with
pickle. However, xml_pickle is not a general purpose
replacement for pickle since pickle retains several
advantages of its own such as faster operation (especially via
cPickle) and a far more compact object representation.
Using xml_pickle
Even though the interface of xml_pickle is mostly the same as that of pickle, it is worth illustrating the (quite simple) usage of xml_pickle for those who are not familiar with Python or pickle.
import xml_pickle                 # import the module

# declare some classes to hold some attributes
class MyClass1: pass
class MyClass2: pass

# create a class instance, and add some basic data members to it
o = MyClass1()
o.num = 37
o.str = "Hello World"
o.lst = [1, 3.5, 2, 4+7j]

# create an instance of a different class, add some members
o2 = MyClass2()
o2.tup = ("x", "y", "z")
o2.num = 2+2j
o2.dct = { "this": "that", "spam": "eggs", 3.14: "about PI" }

# add the second instance to the first instance container
o.obj = o2

# print an XML representation of the container instance
xml_string = xml_pickle.XML_Pickler(o).dumps()
print xml_string
Everything except the first line and the next-to-last line is
generic Python for working with object instances. It might be a
little contrived and a little simple, but essentially
everything you do with instance data members (including nesting instances as container data,
which is how most complex structures are built in Python) is contained in
the example above. Python programmers only need to make one method call to encode their objects as XML.
Of course, once you have "pickled" your objects, you'll want
to restore them later (or use them elsewhere). Supposing the above
few lines have already run, restoring the object representation
is as simple as:
XML_Pickler.dump()
Sample PyObjects.dtd document
Running the sample code above will produce a pretty good
example of the features of an xml_pickle representation of a
Python object. But the following example is a hand-coded
test case I've developed that has the
advantage of containing every XML structure, tag and attribute
allowed in the document type. The specific data is invented, but
it is not hard to imagine the application the data might belong
to.
<?xml version="1.0"?>
<!DOCTYPE PyObject SYSTEM "PyObjects.dtd">
<PyObject class="Automobile">
<attr name="doors" type="numeric" value="4" />
<attr name="make" type="string" value="Honda" />
<attr name="tow_hitch" type="None" />
<attr name="prev_owners" type="tuple">
<item type="string" value="Jane Smith" />
<item type="tuple">
<item type="string" value="John Doe" />
<item type="string" value="Betty Doe" />
</item>
<item type="string" value="Charles Ng" />
</attr>
<attr name="repairs" type="list">
<item type="string" value="June 1, 1999: Fixed radiator" />
<item type="PyObject" class="Swindle">
<attr name="date" type="string" value="July 1, 1999" />
<attr name="swindler" type="string" value="Ed's Auto" />
<attr name="purport" type="string" value="Fix A/C" />
</item>
</attr>
<attr name="options" type="dict">
<entry>
<key type="string" value="Cup Holders" />
<val type="numeric" value="4" />
</entry>
<entry>
<key type="string" value="Custom Wheels" />
<val type="string" value="Chrome Spoked" />
</entry>
</attr>
<attr name="engine" type="PyObject" class="Engine">
<attr name="cylinders" type="numeric" value="4" />
<attr name="manufacturer" type="string" value="Ford" />
</attr>
</PyObject>
Informally, it is not difficult to see the
structure of a PyObjects.dtd XML document. (A formal document type definition (DTD) is available in Resources.) But the DTD will
disambiguate any issues that are not immediately evident.
Looking at the sample XML document, you can see that the three
stated design goals of xml_pickle have been met:
All documents that conform to the DTD and only documents that conform to the DTD will be representations of valid Python objects.
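To make the format concrete, here is a minimal, hypothetical sketch in modern Python -- emphatically not the actual xml_pickle implementation -- that emits a flat object's simple attributes in the PyObject/attr style shown above, using only the standard library:

```python
# Minimal, hypothetical sketch of the PyObjects.dtd idea (not real xml_pickle).
# Only flat string/numeric/None attributes are handled here.
from xml.etree.ElementTree import Element, SubElement, tostring

class Engine:
    pass

def to_pyobject_xml(obj):
    root = Element('PyObject', {'class': obj.__class__.__name__})
    for name, value in sorted(vars(obj).items()):
        if isinstance(value, str):
            SubElement(root, 'attr',
                       {'name': name, 'type': 'string', 'value': value})
        elif isinstance(value, (int, float)):
            SubElement(root, 'attr',
                       {'name': name, 'type': 'numeric', 'value': str(value)})
        elif value is None:
            SubElement(root, 'attr', {'name': name, 'type': 'None'})
    return tostring(root, encoding='unicode')

e = Engine()
e.cylinders = 4
e.manufacturer = 'Ford'
print(to_pyobject_xml(e))
```

Nesting, lists, and dictionaries would of course need the recursive treatment the real module provides.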
Design features, caveats and limitations
Content model
The content models of Python and XML are simply different in
certain respects. One significant difference is that XML documents are inherently linear in form. Python
object attributes -- and also Python dictionaries -- have no
definitional order (although implementation details create
arbitrary ordering, such as of hashed keys). In this respect,
the Python object model is closer to the relational model;
rows of a relational table have no "natural" sequence, and
primary or secondary keys may or may not provide any meaningful
ordering on a table. The keys are always orderable by
comparison operators, but this order may be unrelated to the
semantics of the keys.
An XML document always lists its tag elements in a particular
order. The order may not be significant to a particular
application, but the XML document order is always present. The effect of the differing
significance of key order in Python and XML is that the XML
documents produced by xml_pickle are not guaranteed to
maintain element order through "pickle"/"unpickle" cycles. For
example, a hand-prepared PyObjects.dtd XML document, such as the
one above, may be "unpickled" into a Python object. If the
resultant object is then "pickled," the <attr> tags will most
likely occur in a different order than in the original
document. This is a feature, not a bug, but the fact should be
understood.
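Since attr order carries no meaning, a consumer that wants to compare two pickled documents should compare them order-insensitively. A sketch of one way to do that (illustrative only, not part of xml_pickle) is to reduce each document to a name-to-(type, value) map:

```python
# Order-insensitive comparison of two PyObject-style documents.
import xml.etree.ElementTree as ET

def attr_map(xml_text):
    root = ET.fromstring(xml_text)
    # One entry per <attr>; element order in the document no longer matters.
    return {a.get('name'): (a.get('type'), a.get('value'))
            for a in root.findall('attr')}

doc1 = '<PyObject class="Engine">' \
       '<attr name="cylinders" type="numeric" value="4"/>' \
       '<attr name="manufacturer" type="string" value="Ford"/></PyObject>'
doc2 = '<PyObject class="Engine">' \
       '<attr name="manufacturer" type="string" value="Ford"/>' \
       '<attr name="cylinders" type="numeric" value="4"/></PyObject>'

print(attr_map(doc1) == attr_map(doc2))  # True
```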
Limitations
Several known limitations occur in xml_pickle
as of the current version (0.2). One potentially serious flaw
is that no effort is made to trap cyclical references in
compound/container objects. If an object attribute refers back
to the container object (or some recursive version of this),
xml_pickle will exhaust the Python stack. Cyclical
references are likely to indicate a flaw in object design to
start with, but later versions of xml_pickle will certainly
attempt to deal with them more intelligently.
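One standard way such a trap could work -- purely a sketch, not the eventual xml_pickle behavior -- is to track the id() of every container already on the serialization path:

```python
# Hypothetical cycle trap: refuse to descend into an object already
# on the current serialization path.
def check_cycles(obj, _seen=None):
    seen = _seen or set()
    if id(obj) in seen:
        raise ValueError('cyclical reference detected')
    if hasattr(obj, '__dict__'):
        seen = seen | {id(obj)}          # extend the path, not the global set
        for value in vars(obj).values():
            check_cycles(value, seen)

class Node:
    pass

a, b = Node(), Node()
a.child = b
check_cycles(a)      # fine: no cycle yet
b.parent = a         # now a -> b -> a
try:
    check_cycles(a)
except ValueError as err:
    print(err)       # cyclical reference detected
```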
Another limitation is that the namespace of XML
attribute values (such as the "123" in <attr name="123">) is
larger than the namespace of valid Python variables and
instance members. Attributes created manually outside
the Python namespace will have the odd status of existing
in the .__dict__ magic attribute of an instance, but being
inaccessible by normal attribute syntax (e.g. "obj.123" is a
syntax error). This is only an issue where XML documents are
created or modified by means other than xml_pickle itself.
At this time, I simply haven't determined the best way of handling
this (somewhat obscure) issue.
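The mismatch is easy to demonstrate in plain Python (the class and attribute names here are invented for the example):

```python
# '123' is a legal XML attribute value but not a legal Python identifier,
# so it can live only in the instance __dict__.
class Holder:
    pass

h = Holder()
h.__dict__['123'] = 'created outside the Python namespace'

# h.123 would be a SyntaxError; the value is reachable only indirectly:
print(getattr(h, '123'))      # created outside the Python namespace
print('123' in h.__dict__)    # True
```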
A third limitation is that xml_pickle does not handle all attributes of Python objects. All the "usual" data members (strings,
numbers, dictionaries, etc.) are "pickled" well. But instance
methods, and class and function objects as attributes, are not
handled. As with pickle, methods are simply ignored in "pickling." If class or function objects exist as attributes,
an XMLPicklingError is raised. This is probably the correct
ultimate behavior, but a final decision has not been made.
Design choices
One genuine ambiguity in XML document design
is the choice of when to use tag attributes and when to use
subelements. Opinions on this design issue differ, and XML
programmers often feel strongly about their conflicting views.
This was probably the biggest issue in deciding the
xml_pickle document structure.
The general principle decided was that a thing that is
naturally "plural" should be represented by subelements. For
example, a Python list can contain as many items as you like,
and is therefore represented by a sequence of <item>
subelements. On the other side, a number is a singular thing
(the value might be more than 1, but there is only one thing
in it). In that case, it seemed much more logical to use an XML
attribute called "value." The really difficult case was identified with
Python strings. In a basic way, they are sequence
objects -- just like lists. But representing each character in a
string using a hypothetical tag would destroy the goal
of human readability, and make for enormous XML
representations. The decision was made to put strings in the
XML "value" attribute, just as with numbers. However, from an
aesthetic point of view, this is probably less desirable than
within a tag container, especially for multiline strings.
But this decision seemed more consistent since there was no
other "naked" #PCDATA in the specification.
In part because strings are stored in XML "value"
attributes -- but mostly to maintain the syntactical nature of the
XML document -- Python strings needed to be stored in a "safe"
form. There are a few unsafe things that could occur in Python
strings. The first type is the basic markup characters like
greater-than and less-than. A second type is the quote and
apostrophe characters that set off attributes. The third type
is questionable ASCII values, such as a null character. One
possibility considered was to encode the whole Python strings
in something like base64 encoding. This would make strings
"safe," but also completely unreadable to humans. The decision
was made to use a mixed approach. The basic XML characters are
escaped in the style of "&amp;", "&gt;", or "&quot;".
Questionable ASCII values are escaped in Python-style, such as
"\000". The combination makes for human-readable XML
representations, but requires a somewhat mixed approach to
decoding stored strings.
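A sketch of such a mixed scheme in Python -- illustrative only, not the exact escaping table xml_pickle uses -- might look like this:

```python
# Mixed escaping: XML markup characters become entities, questionable
# ASCII values become Python-style octal escapes, everything else stays
# readable.
def mixed_escape(s):
    out = []
    for ch in s:
        if ch == '&':
            out.append('&amp;')
        elif ch == '<':
            out.append('&lt;')
        elif ch == '>':
            out.append('&gt;')
        elif ch == '"':
            out.append('&quot;')
        elif ord(ch) < 32 and ch not in '\t\n\r':
            out.append('\\%03o' % ord(ch))   # e.g. a null byte becomes \000
        else:
            out.append(ch)
    return ''.join(out)

print(mixed_escape('a < b & c\x00'))  # a &lt; b &amp; c\000
```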
Anticipated uses
There are a number of things that xml_pickle is likely to be
good for, and some user feedback has indicated that it has
entered preliminary usage. Below are a few ideas.
- XML representations of Python objects may be indexed and
cataloged using existing XML-centric tools (not necessarily
written in Python). This provides a ready means of
indexing Python object databases (such as ZODB, PAOS, or
simply shelve).
- XML representations of Python objects could be restored as
objects of other OOP languages, especially ones having
a similar range of basic types. This is something that has yet to
be done. Much "heavier" protocols like CORBA, XML-RPC, and
SOAP have an overlapping purpose, but xml_pickle is pretty
"lightweight" as an object transport specification.
- Tools for printing and displaying XML documents can be used
to provide convenient human-readable representations of
Python objects via their XML intermediate form.
- Python objects can be manually "debugged" via their
XML representation using XML-specific editors, or simply
text editors. Once hand-modified objects are "unpickled,"
the effects of the edits on program operation can be
examined. This provides an additional option to other existing Python debuggers and wrappers.
Please send me your feedback if you develop additional uses for xml_pickle or see
enhancements that would open the module to additional uses.
Resources
- xml.dom
- pyxie
About the author
David Mertz wanted to call this column "Ex nihilo XML fit", if only for the alliteration; but he thinks his publisher shudders at the summoned imagery of a chthonic golem.
http://www.ibm.com/developerworks/library/xml-matters1/
It's been almost a year since I introduced you to Groovy with the article "Feeling Groovy" in the alt.lang.jre series. Since then, Groovy has matured quite a bit through a number of releases that have progressively addressed problems in the language implementation and feature requests from the developer community. Finally, Groovy took a gigantic leap this past April, with the formal release of a new parser aimed at standardizing the language as part of the JSR process.
In this month's installment of Practically Groovy, I'll celebrate the growth of Groovy by introducing you to the most important changes formalized by Groovy's nifty new parser; namely variable declarations and closures. Because I'll be comparing some of the new Groovy syntax to the classic syntax found in my first-ever article on Groovy, you may want to open up "Feeling Groovy" in a second browser window now.
Why change things?
If you've been following Groovy for any amount of time, whether you've been reading articles and blogs or writing code yourself, you may have gotten wind of one or two subtle issues with the language. When it came to clever operations such as object navigation, and particularly closures, Groovy suffered from occasional ambiguities and an arguably limiting syntax. Some months ago, as part of the JSR process, the Groovy team began working on resolving these issues. The solution, presented in April with the release of groovy-1.0-jsr-01, was an updated syntax and a new-syntax-savvy parser to standardize the language.
The good news is that the new syntax is chock full of enhancements to the language. The other good news is that it isn't that drastically different from the old. Like all of Groovy, the syntax was designed for a short learning curve and a big payoff.
Of course, the JSR-compliant parser has rendered some now "classic" syntax incompatible with the new Groovy. You can see this for yourself if you try running a code sample from an early article in this series with the new parser: It probably won't work! Now, this may seem a little strict -- especially for a language as freewheeling as Groovy -- but the point of the parser is to ensure the continued growth of Groovy as a standardized language for the Java platform. Think of it as a helpful tour guide to the new Groovy.
Hey, it's still Groovy!
Before getting too far into what's changed, I'll take a second to chat about what hasn't. First, the basic nature of dynamic typing hasn't changed. Explicit typing of variables (that is, declaring a variable as a String or Collection) is still optional. I'll discuss the one slight addition to this rule shortly.
Many will be relieved to know that semicolons are also still optional. Arguments were made for and against this syntactic leniency, but in the end the less-is-more crowd won the day. Bottom line: You are still free to use semicolons if you want to.
Collections have also stayed the same for the most part. You can still declare list-like collections using the array syntax and maps the same way you always have (that is,
the way you first learned in "Feeling Groovy"). Ranges, on the other hand, have changed slightly, as I'll soon demonstrate.
Finally, the Groovy additions to standard JDK classes haven't changed
a bit. Syntactic sugar and nifty APIs are intact, as in the case of normal-Java File types, which I'll show you later.
Variably variables
The rules on Groovy variables have probably taken the hardest hit with the
new JSR-compliant syntax. Classic Groovy was quite flexible (and indeed terse) when it came to variable declarations. With the new JSR Groovy, all variables
must be preceded with either the def keyword or a modifier such as private, protected, or public. Of course, you can always declare the variable type as well.
For example, when I introduced you to GroovyBeans in "Feeling Groovy," I defined a type called LavaLamp in that article's Listing 22. That class is no longer JSR compliant and will result in parser errors if you try to run it. Fortunately, migrating the class isn't hard: all I had to do was add the def keyword to the class's property declarations. The migrated bean is then used just as before:
myLamp = new LavaLamp()
myLamp.baseColor = "Silver"
myLamp.setLavaColor("Red")
println "My Lamp has a ${myLamp.baseColor} base"
println "My Lava is " + myLamp.getLavaColor()
Not so bad, right?
As described above, the def keyword is required for any variable that doesn't otherwise have a modifier. The fragment below is the tail of the toString method from the business-object example, which accumulates contact numbers in a numstr buffer:

   "first name: " + fname + " last name: " + lname +
   " age: " + age + " address: " + address +
   " contact numbers: " + numstr.toString()
  }
}
Recognize that code? It's borrowed from Listing 1 of "Stir some Groovy into your Java apps." In Listing 3, you can see the error message that will pop up if you try to run the code as is:
c:\dev\projects>groovy BusinessObjects.groovy
BusinessObjects.groovy: 13: The variable numstr is undefined in the current scope
@ line 13, column 4.
numstr = new StringBuffer()
^
1 Error
The solution, of course, is to add the def keyword to numstr in the
toString method. This rather deft solution is shown in Listing 4.
Closing in on closures
The syntax for closures has changed, but mostly only with regard to parameters. In classic Groovy, if you declared a parameter to your closure you had to use a | character for a separator. As you
probably know, | is also a bitwise operator in normal Java language; consequently, in Groovy, you couldn't use the | character unless you were in the context of a parameter declaration of a closure.
You saw classic Groovy parameter syntax for closures in Listing 21 of "Feeling Groovy," where I demonstrated iteration. As you'll recall, I utilized the find method on collections, which attempted to find the value 3. I passed in the parameter x, which represents the next value of the iterator (experienced Groovy developers will note that x is entirely optional and I could have referenced the implicit variable it). With JSR Groovy, I must drop the | and replace it with the Nice-ish -> separator, as shown in Listing 5 below:
[2, 4, 6, 8, 3].find { x ->
if (x == 3){
println "found ${x}"
}
}
Doesn't the newer closure syntax remind you of the Nice language's block syntax? If you are not familiar with the Nice language, check out Twice as Nice, another of my contributions to the alt.lang.jre series.
As I mentioned earlier, Groovy's JDK hasn't changed. But as you've just learned, closures have; therefore, the way you use those nifty APIs in Groovy's JDK have also changed, but just slightly. In Listing 6, you can see
how the changes impact Groovy IO; which is hardly at all:
import java.io.File
new File("maven.xml").eachLine{ line ->
println "read the following line -> " + line
}
Reworking filters
Now, I hate to make you skip around a lot, but remember how in "Ant Scripting with Groovy" I spent some time expounding on the power and utility of closures? Thankfully, much of what I did in the examples for that column is easy to rework for the new syntax: in each closure, I simply add the -> separator after the parameter declaration.
So far so good, don't you think? The new Groovy syntax is quite easy to pick up!
Changes to ranges
Groovy's range syntax has changed ever so slightly. In classic Groovy, you could get away with using the ... syntax to denote an exclusive upper bound. In JSR Groovy, you simply drop the last dot (.) and replace it with the intuitive < symbol.
Watch as I rework my range example from "Feeling Groovy" in Listing 8 below:
myRange = 29..<32
myInclusiveRange = 2..5
println myRange.size() // still prints 3
println myRange[0] // still prints 29
println myRange.contains(32) // still prints false
println myInclusiveRange.contains(5) // still prints true
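For comparison, if you happen to know Python: its built-in range behaves like Groovy's exclusive ..< form, and an inclusive range needs stop+1 (variable names below mirror the Groovy listing):

```python
# Python's range is exclusive at the upper bound, like Groovy's 29..<32;
# an inclusive range like Groovy's 2..5 needs stop+1.
my_range = list(range(29, 32))          # like 29..<32
my_inclusive_range = list(range(2, 6))  # like 2..5

print(len(my_range))              # 3
print(my_range[0])                # 29
print(32 in my_range)             # False
print(5 in my_inclusive_range)    # True
```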
Ambiguous, you say?
You may have noticed, while playing with Groovy, a subtle feature that lets you obtain a reference to a method and invoke that reference at will. Think of the method pointer as a short-hand convenience mechanism for invoking methods along an object graph. The interesting thing about method pointers is that their use can be an indication that the code violates the Law of Demeter.
"What's the Law of Demeter," you say? Using the motto Talk only to immediate friends, the Law of Demeter states that we should avoid invoking methods of an object that was returned by another object's method. For example,
if a Foo object exposed a Bar object's type, clients could access behavior of the Bar through the Foo. The result would be brittle code, because changes to one object would ripple through a graph.
A respected colleague wrote an excellent article entitled "The Paperboy, the Wallet, and the Law of Demeter" (see Resources). The examples in the article are written in the Java language; however, I've redefined them below using Groovy. In Listing 9, you can see how this code demonstrates the Law of Demeter -- and how it could be used to wreak havoc with people's wallets!
package com.vanward.groovy
import java.math.BigDecimal
class Customer {
@Property firstName
@Property lastName
@Property wallet
}
class Wallet {
@Property value;
def getTotalMoney() {
return value;
}
def setTotalMoney(newValue) {
value = newValue;
}
def addMoney(deposit) {
value = value.add(deposit)
}
def subtractMoney(debit) {
value = value.subtract(debit)
}
}
In Listing 9 there are two defined types -- a Customer and a Wallet. Notice how the Customer type exposes its own wallet instance. As previously stated, the code's naive exposures present issues. For example, what if I (as the original article's author did) added in an evil paperboy to ravage unsuspecting customer wallets? I've used Groovy's method pointers for just this nefarious purpose in Listing 10. Note how I am able to grab a reference to the subtractMoney method via an instance of Customer with Groovy's new & syntax for method pointers.
mymoney = victim.wallet.&subtractMoney
mymoney(new BigDecimal(2)) // "I want my 2 dollars!"
mymoney(new BigDecimal(25)) // "late fees!"
Now, don't get me wrong: Method pointers aren't meant for hacking into code or obtaining references to people's cash! Rather, a method pointer is a convenience mechanism. Method pointers are also great for reconnecting with your favorite 80s movies. They can't help you if you get those lovable cute furry things wet, though! In all seriousness, think of Groovy's println shortcut as an implicit method pointer to System.out.println.
If you were paying careful attention you will have noted that JSR Groovy requires me to use the new & syntax to create a pointer to the method subtractMoney. This addition, as you've probably guessed, clears up ambiguities in classic Groovy.
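For readers coming from Python, Groovy's method pointer corresponds closely to a bound method: grabbing wallet.subtract_money yields a callable tied to that particular wallet. The sketch below mirrors the wallet example, with names adapted to Python conventions:

```python
# A bound method as the Python analogue of Groovy's &-style method pointer.
class Wallet:
    def __init__(self, value):
        self.value = value
    def subtract_money(self, debit):
        self.value -= debit

wallet = Wallet(100)
my_money = wallet.subtract_money   # the "method pointer"
my_money(2)                        # "I want my 2 dollars!"
my_money(25)                       # "late fees!"
print(wallet.value)                # 73
```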
And here's something new!
It wouldn't be fun if there wasn't anything new in Groovy's JSR releases, would it? Thankfully, JSR Groovy has introduced the as keyword, which is a short-hand casting mechanism. This feature goes hand-in-hand with a new syntax for object creation, which makes it easy to create non-custom classes in Groovy with an array-like syntax. By non-custom, I mean classes found in the JDK such as Color, Point, File, etc.
In Listing 11, I've used the new syntax to create some simple types:
def nfile = ["c:/dev", "newfile.txt"] as File
def val = ["http", "", "/"] as URL
def ival = ["89.90"] as BigDecimal
println ival as Float
Note that I created a new File and URL, as well as a BigDecimal, using the short-hand syntax, and that I was able to cast the BigDecimal to a Float using as.
|
http://www.ibm.com/developerworks/java/library/j-pg07195.html
|
crawl-001
|
en
|
refinedweb
|
Bulk Actions
A Bulk Action is a Quick-Fix or a Context Action that can be applied to an element in the syntax tree, a single file, or all files in a folder, project or solution. This fix in scope mechanism is displayed as a menu item on the
Alt+Enter menu that can be selected to affect the single item, or expanded to show the per-file, folder, project or solution scope:
This mechanic is available for both quick-fix actions and context actions that are part of Code Cleanup.
In the case of a context action (such as ReSharper’s
VarToTypeAction, which converts the C#
var keyword to an explicit type), you can use the BulkCodeCleanupContextActionBuilder class to construct all the items as follows:
var cleanupProfile = BulkCodeCleanupActionBuilderBase.CreateProfile(profile =>
{
    profile.SetSetting(ReplaceByVar.USE_VAR_DESCRIPTOR, new ReplaceByVar.Options()
    {
        BehavourStyle = ReplaceByVar.BehavourStyle.CAN_CHANGE_TO_EXPLICIT,
        ForeachVariableStyle = ReplaceByVar.ForeachVariableStyle.ALWAYS_EXPLICIT,
        LocalVariableStyle = ReplaceByVar.LocalVariableStyle.ALWAYS_EXPLICIT
    });
});

var projectFile = myProvider.SourceFile.ToProjectFile();
Assertion.AssertNotNull(projectFile, "projectFile must not be null");

var builder = BulkCodeCleanupContextActionBuilder.CreateByPsiLanguage<CSharpLanguage>(
    cleanupProfile, "Use explicit type everywhere", projectFile.GetSolution(), this);

return builder.CreateBulkActions(projectFile,
    IntentionsAnchors.ContextActionsAnchor, IntentionsAnchors.ContextActionsAnchorPosition);

A quick-fix also needs an in-file action; it follows a common boilerplate and can be implemented using the same delegate passed in for folder, project and solution scope. You need just one class that looks something like this:
public class BulkQuickFixInFileWithCommonPsiTransaction : QuickFixBase
{
    private readonly IProjectFile myProjectFile;
    private readonly Action<IDocument, IPsiSourceFile, IProgressIndicator> myPsiTransactionAction;

    public BulkQuickFixInFileWithCommonPsiTransaction(IProjectFile projectFile,
        string actionText, Action<IDocument, IPsiSourceFile, IProgressIndicator> psiTransactionAction)
    {
        myProjectFile = projectFile;
        myPsiTransactionAction = psiTransactionAction;
        Text = actionText + " in file";
    }

    public override bool IsAvailable(IUserDataHolder cache)
    {
        return true;
    }

    public override string Text { get; private set; }

    public override Action<ITextControl> ExecutePsiTransaction(ISolution solution, IProgressIndicator progress)
    {
        var documentManager = solution.GetComponent<DocumentManager>();
        var document = documentManager.GetOrCreateDocument(myProjectFile);
        var psiSourceFile = myProjectFile.ToSourceFile();
        if (psiSourceFile == null)
            return null;
        myPsiTransactionAction(document, psiSourceFile, progress);
        return null;
    }
}

Create an instance of the BulkQuickFixInFileWithCommonPsiTransaction class defined above (make sure to add this class to your project), passing in an instance of the processFileAction delegate:
var inFileFix = new BulkQuickFixInFileWithCommonPsiTransaction(projectFile, RemoveUnusedDirectivesString, processFileAction);
Create a
BulkQuickFixWithCommonTransactionBuilder, which also happens to accept a predicate that you can use to filter out files you do not want to process:
var acceptProjectFilePredicate = BulkItentionsBuilderEx.CreateAcceptFilePredicateByPsiLanaguage<CSharpLanguage>(solution);
var builder = new BulkQuickFixWithCommonTransactionBuilder(this, inFileFix, solution,
    RemoveUnusedDirectivesString, processFileAction, acceptProjectFilePredicate);
Finally, use the builder to create all the actions in bulk:
return builder.CreateBulkActions(projectFile, IntentionsAnchors.QuickFixesAnchor, IntentionsAnchors.QuickFixesAnchorPosition);
And if your quick-fix operates on the whole file by default, and doesn’t operate only at the issue at the caret position (e.g. remove all redundant using statements):
- Implement the quick-fix. The ExecutePsiTransaction() method fixes the issue across the whole file
Prepare an action that processes a single file with your chosen logic. The example here is the same as above:
var acceptProjectFilePredicate = BulkItentionsBuilderEx.CreateAcceptFilePredicateByPsiLanaguage<CSharpLanguage>(solution);
var builder = new BulkQuickFixWithCommonTransactionBuilder(this, solution,
    RemoveUnusedDirectivesString, processFileAction, acceptProjectFilePredicate);
- Finally, use the builder to create all the actions in bulk:
return builder.CreateBulkActions(projectFile, IntentionsAnchors.QuickFixesAnchor, IntentionsAnchors.QuickFixesAnchorPosition);
|
https://www.jetbrains.com/help/resharper/sdk/Features/Actions/Bulk.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Most said that primitive in object are stored in Heap, however, I got different results from the following performance test:
public class Performance {
    long sum = 0;

    public static void main(String[] args) {
        // TODO Auto-generated method stub
        long startTime = System.currentTimeMillis();
        long pSum = 0;
        for (int i = 0; i < Integer.MAX_VALUE; i++) {
            pSum += i;
        }
        long endTime = System.currentTimeMillis();
        System.out.println("time of using primitive:" + Long.toString(endTime - startTime));
        System.out.println(pSum);

        long startTime1 = System.currentTimeMillis();
        Long Sum = 0L;
        for (int i = 0; i < Integer.MAX_VALUE; i++) {
            Sum += i;
        }
        long endTime1 = System.currentTimeMillis();
        System.out.println("time of using object:" + Long.toString(endTime1 - startTime1));
        System.out.println(Sum);

        Performance p = new Performance();
        long startTime2 = System.currentTimeMillis();
        for (int i = 0; i < Integer.MAX_VALUE; i++) {
            p.sum += i;
        }
        long endTime2 = System.currentTimeMillis();
        System.out.println("time of using primitive in object:" + Long.toString(endTime2 - startTime2));
        System.out.println(p.sum);
    }
}
The results look like this:
time of using primitive:1454
2305843005992468481
time of using object:23870
2305843005992468481
time of using primitive in object:1529
2305843005992468481
We can find the time of using primitive and using primitive in object are almost same. So I am confused if primitives in objects are stored in Heap. And why the time cost of using primitive and using primitive in object are almost same?
When you go
Long sum; ... sum += 1;
the JVM, in theory, allocates a new Long each time, because Longs are immutable. Now, a really smart compiler could do something smart here, but this explains why your time for the second loop is so much larger: it is allocating Integer.MAX_VALUE new Long objects. Yet another reason autoboxing is tricky.
The two other loops do not require allocating new Objects. One uses a primitive int, and in the other you can increment
Performance.sum without needing to allocate a new Performance each time. Accessing the primitive int on the stack or in the heap should be roughly equally fast, as shown.
Your timings have very little to do with heap vs. stack speed of access, but everything to do with allocating large numbers of Objects in loops.
As others have noted, micro benchmarks can be misleading.
|
http://ebanshi.cc/questions/3253769/primitive-in-object-heap-or-stack
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
When attempting to delete a feature dataset containing a topology, an error message is returned. The feature dataset is deleted as well.
Code:
import arcpy
arcpy.env.workspace = r"C:\data\test.gdb"
arcpy.Delete_management("majorrds")
This is a known issue. The software attempts to warn the user when a topology exists in the feature dataset, which must be deleted before deleting the feature dataset. However, since the topology is a part of the feature dataset itself, it is deleted and the error message is generated.
Use the 'try:' and 'except:' statements to bypass the error and move forward with the execution of the script.
Insert the following code. Be sure to adjust the file path of the location of the dataset.
Code:
import arcpy
arcpy.env.workspace = r"C:\data\test.gdb"
try:
arcpy.Delete_management("majorrds")
print "copied zoning"
except:
print "A warning was generated"
|
http://support.esri.com/en/technical-article/000011880
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
On Thu, Mar 12, 2009 at 5:13 PM, Alois Schlögl <address@hidden> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> The following improvements have been included in the NaN-toolbox.
>
> - - sumskipnan_mex.mex has been optimized for speed (minimizing cache
> missing, reducing loop overhead)
>
> - - a flag is set if some NaN occures in the data. The flag can be checked
> (and reset) with function FLAG_NANS_OCCURED(). This enables a flexible
> control on checks for NaN. (You can check after every call, or only at
> the end of your script).
>
> - - the performance of var, std, and meansq has been improved.
>
> A performance between the NaN-toolbox and corresponding standard octave
> functions (see script below) show the following results (time in [s]):
>
>   with NaN-tb   w/o NaN-tb      ratio
>       0.25884      3.56726   13.78183   mean(x,1)/nanmean(x,1)
>       0.36784      3.32899    9.05020   mean(x,2)/nanmean(x,2)
>       0.30019      6.62467   22.06789   std(x,0,1)
>       0.40114      2.23262    5.56561   std(x,0,2)
>       0.28681      6.40276   22.32407   var(x,0,1)
>       0.40269      2.18056    5.41505   var(x,0,2)
>       0.28175      4.05612   14.39598   meansq(x,1)
>       0.40703      4.19346   10.30248   meansq(x,2)
>       0.25930      0.19884    0.76683   sumskipnan(x,1)/sum(x,1)
>       0.30624      0.24179    0.78955   sumskipnan(x,2)/sum(x,2)
>
> A performance improvement by factors as high as 22 can be seen, and
> sumskipnan() is only about 25% slower than sum().
>
> Of course, sumskipnan could also improve the speed of functions like
> nanmean, nanstd, etc. Maybe you want to consider including sumskipnan in
> standard octave.
I repeated your experiment using current Octave tip (-O3 -march=native, Core 2 Duo @ 2.83GHz):

                mean(x,1)  mean(x,2)  std(x,0,1)  std(x,0,2)  var(x,0,1)  var(x,0,2)  meansq(x,1)  meansq(x,2)  sum(skipnan)(x,1)  sum(skipnan)(x,2)
tic-toc time     0.108911   0.132629    0.114568    0.163950    0.112384    0.163973     0.112379     0.163682           0.096581           0.101545
                 0.090389   0.091657    0.915853    0.955799    0.883821    0.921007     0.110276     0.114233           0.082247           0.089742
tic-toc ratio    0.82993    0.69108     7.99397     5.82982     7.86431     5.61683      0.98129      0.69790            0.85159            0.88376
cputime          0.108007   0.136008    0.112007    0.164011    0.112007    0.164010     0.116007     0.160010           0.100006           0.100007
                 0.088005   0.088005    0.900056    0.956060    0.884055    0.924058     0.092006     0.116007           0.080005           0.092006
cputime ratio    0.81481    0.64706     8.03571     5.82924     7.89285     5.63416      0.79311      0.72500            0.80000            0.92000

It can be seen that the penalty for skipping NaNs is mostly within 20-30%, smaller for column-oriented reductions. The speed-up factors 5 and 7 for std and var are caused by the single-sweep computation done in sumskipnan. This becomes apparent when less random data are supplied, and the NaN toolbox reverts to a backup algorithm (which is what Octave always does) - relative error at the order of 10^-4:

                mean(x,1)  mean(x,2)  std(x,0,1)  std(x,0,2)  var(x,0,1)  var(x,0,2)  meansq(x,1)  meansq(x,2)  sum(skipnan)(x,1)  sum(skipnan)(x,2)
tic-toc time     0.108613   0.132721    1.362765    1.500724    1.366353    1.499243     0.115758     0.163625           0.097873           0.102086
                 0.089788   0.089979    0.876386    0.914380    0.880742    0.913636     0.094084     0.091950           0.082200           0.089619
tic-toc ratio    0.82668    0.67796     0.64309     0.60929     0.64459     0.60940      0.81277      0.56196            0.83986            0.87788
cputime          0.108007   0.132008    1.364085    1.500094    1.368086    1.500093     0.116007     0.164011           0.096006           0.104006
                 0.092006   0.088005    0.876055    0.916057    0.880055    0.916057     0.092006     0.092006           0.084005           0.088005
cputime ratio    0.85185    0.66666     0.64223     0.61067     0.64327     0.61067      0.79311      0.56097            0.87500            0.84615

Here the std/var computations are slowed down by some 35-45%. This is less favorable, though certainly no disaster. I think the Octave statistics subcommunity should discuss what they would appreciate best.
Is anyone depending on the speed of std/var? Opinions about skipping NaNs? Given Octave's NA support, it may be better to just skip NAs, like R does. There were also suggestions to move the statistics functions completely out of Octave. Personally, I'd vote to retain just the stuff from statistics/base, because I sometimes use functions thereof despite not being a statistician.

regards

--
RNDr. Jaroslav Hajek
computing expert & GNU Octave developer
Aeronautical Research and Test Institute (VZLU)
Prague, Czech Republic
url:

n = 8e3;
randn("state", 123);
#x = randn(n);
x = 1 + randn(n) * 1e-4;
#k=1;
k=2;
load data
t=cputime();tic; m = mean(x,1); T(k,1)=toc;V(k,1)=cputime()-t;
t=cputime();tic; m = mean(x,2); T(k,2)=toc;V(k,2)=cputime()-t;
t=cputime();tic; m = std(x,0,1); T(k,3)=toc;V(k,3)=cputime()-t;
t=cputime();tic; m = std(x,0,2); T(k,4)=toc;V(k,4)=cputime()-t;
t=cputime();tic; m = var(x,0,1); T(k,5)=toc;V(k,5)=cputime()-t;
t=cputime();tic; m = var(x,0,2); T(k,6)=toc;V(k,6)=cputime()-t;
t=cputime();tic; m = meansq(x,1); T(k,7)=toc;V(k,7)=cputime()-t;
t=cputime();tic; m = meansq(x,2); T(k,8)=toc;V(k,8)=cputime()-t;
if (k == 1)
  t=cputime();tic; m = sumskipnan(x,1); T(k,9)=toc;V(k,9)=cputime()-t;
  t=cputime();tic; m = sumskipnan(x,2); T(k,10)=toc;V(k,10)=cputime()-t;
else
  t=cputime();tic; m = sum(x,1); T(k,9)=toc;V(k,9)=cputime()-t;
  t=cputime();tic; m = sum(x,2); T(k,10)=toc;V(k,10)=cputime()-t;
endif
save data T V
|
http://lists.gnu.org/archive/html/octave-maintainers/2009-03/msg00323.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
I am back with some more PInvoke Stuff. Recently I was working on a PInvoke issue which I found interesting.
I have a C++ dll which has a function whose signature is
int TestFunc(IN_STRUCT in_Params, RET_STRUCT * pret_Par).
I wanted to call this function from C#. The function has two arguments. The first argument is an input structure, which is filled in C# code and passed to C++. The second argument is an output structure, which is filled in C++ code and returned to C#.
Here are the C struct definitions and a function that needs to be marshaled
#include "stdafx.h"
#include <stdio.h>
#include "Objbase.h"
#include <malloc.h>
typedef struct IN_STRUCT
{
BYTE CMD_PType;
BYTE CMD_PTType;
BYTE CMD_OC;
BYTE CMD_Seq;
};
typedef struct RET_STRUCT
{
BYTE RET_OC;
BYTE RET_Seq;
BYTE RET_RetBytes;
char *str;
BYTE RET_PD[10];
};
extern "C" __declspec(dllexport) \
int TestFunc(IN_STRUCT in_Params, RET_STRUCT * pret_Par)
{
int iRet = 0;
pret_Par->RET_OC = in_Params.CMD_OC;
pret_Par->RET_Seq = in_Params.CMD_Seq;
pret_Par->RET_RetBytes = 6;
pret_Par->RET_PD[0] = 0;
pret_Par->RET_PD[1] = 10;
pret_Par->RET_PD[2] = 20;
pret_Par->RET_PD[3] = 30;
pret_Par->RET_PD[4] = 40;
pret_Par->RET_PD[5] = 50;
pret_Par->str = new char(30);
strcpy(pret_Par->str,"This is sample PInvoke app");
return iRet;
}
Managed Structure equivalent to Native Structure:
namespace ConsoleApplication1
{
class Program
{
//This structure will be filled up by C++ Test.dll and returned back
//with values to C# code.
[StructLayout(LayoutKind.Sequential)]
public struct RET_STRUCT
{
public byte RET_OC;
public byte RET_Seq;
public byte RET_RetBytes;
[MarshalAs(UnmanagedType.LPStr)]
public String RET_STR;
[MarshalAs(UnmanagedType.ByValArray, SizeConst = 10)]
public byte[] RET_PD;
};
//The values of this structure will be used to fill up IN_STRUCT and
//passed to C++
[StructLayout(LayoutKind.Sequential)]
public struct IN_STRUCT
{
public byte CMD_PT;
public byte CMD_PTType;
public byte CMD_OC;
public byte CMD_Seq;
};
//C++ dll containing the func
[DllImport("Test.dll")]
public static extern int TestFunc(IN_STRUCT i, ref RET_STRUCT r);
static void Main(string[] args)
{
IN_STRUCT cmd_params = new IN_STRUCT();
RET_STRUCT ret_Params = new RET_STRUCT();
//Fill up the cmd_params
cmd_params.CMD_OC = 0x02;
cmd_params.CMD_PTType = 0x00;
cmd_params.CMD_Seq = 1;
//Call the C++ function to fill ret_params
int iRet = TestFunc(cmd_params, ref ret_Params);
//Print out the returned values
Console.WriteLine("Returned Values\n");
Console.WriteLine(ret_Params.RET_OC + " " + ret_Params.RET_Seq +
" ");
for (int i = 0; i < ret_Params.RET_RetBytes; i++)
Console.WriteLine("\n" + ret_Params.RET_PD[i]);
Console.WriteLine(ret_Params.RET_STR);
Console.ReadLine();
}
}
}
After executing the code I was expecting a valid output. But I ended up with Access Violation. I used windbg to troubleshoot this issue.
I spawned exe from windbg and tried to see call stack.
0:000> kv
ChildEBP RetAddr Args to Child
002cec30 76fc5883 006515c8 00000001 00000000 ntdll!RtlpLowFragHeapFree+0x31 (FPO: [0,10,4])
002cec44 76b9c56f 000b0000 00000000 049a3a48 ntdll!RtlFreeHeap+0x101 (FPO: [3,0,4])
002cec58 7565dc2c 000b0000 00000000 049a3a50 KERNEL32!HeapFree+0x14 (FPO: [3,0,0])
002cec6c 7565dc53 7573e6f4 049a3a50 002cec88 ole32!CRetailMalloc_Free+0x1c (FPO: [2,0,0])
002cec7c 6c7e8410 049a3a50 002cec9c 6c8084bd ole32!CoTaskMemFree+0x13 (FPO: [1,0,0])
002cec88 6c8084bd 00109d34 00000001 00109d48 mscorwks!FieldMarshaler_StringAnsi::DestroyNativeImpl+0x16 (FPO: [1,0,0])
002cec9c 6c8088e5 00109d30 0065340c 1670b1d2 mscorwks!LayoutDestroyNative+0x3a (FPO: [2,0,0])
002cee8c 6c73539b 002cef58 00000000 1670b182 mscorwks!CleanupWorkList::Cleanup+0x2ea (FPO: [2,116,4])
002ceedc 001cad4c 002cef18 01020000 00109d30 mscorwks!NDirectSlimStubWorker2+0x120 (FPO: [1,12,4])
WARNING: Frame IP not in any known module. Following frames may be wrong.
002cefa4 6c7013a4 00700876 002cefd8 00000000 0x1cad4c
002cefe0 6c6f1b4c 010d2816 00000003 002cf070 mscorwks!PreStubWorker+0x141 (FPO: [1,13,4])
002ceff0 6c7021b1 002cf0c0 00000000 002cf090 mscorwks!CallDescrWorker+0x33
……….
0:000> da 049a3a50
049a3a50 "My Name is Jyoti Patel"
From the call stack it’s clear that InteropMarshaller ( NDirectSlimStubWorker2) is trying to deallocate string using CoTaskMemFree.
There are two solutions to this problem.
1. As deallocation is done using CoTaskMemFree, Allocate the memory using CoTaskMemAlloc.
Changing the line of code in C++ from
pret_Par->str = new char(30);
to
pret_Par->str = (char*)CoTaskMemAlloc(30);
resolved the issue.
(In this case, memory was allocated by new, and ends up being freed by CoTaskMemFree, and hence we see an AV.)
2. If you are not able to change the C++ code and you want to allocate memory using new/malloc, another solution is to use IntPtr and do custom marshalling by calling the corresponding methods from the Marshal class.
[StructLayout(LayoutKind.Sequential)]
public struct RET_STRUCT
{
public byte RET_OC;
public byte RET_Seq;
public byte RET_RetBytes;
public IntPtr RET_STR;
[MarshalAs(UnmanagedType.ByValArray, SizeConst = 10)]
public byte[] RET_PD;
};
……….
………
………
Console.WriteLine(Marshal.PtrToStringAnsi(ret_Params.RET_STR));
Jyoti Patel and Manish Jawa
Developer Support VC++ and C#
Is there a wee typo in the last case? To use Marshal.PtrToStringAnsi shouldn't the RET_STR field then be an IntPtr rather than the original String type?
Hi Alanjmcf,
It was a typo. Thanks for correcting.
Very timely post, which explained a crash I’ve been tearing my hair out – code that works on XP but crashes on Windows 7.
In my particular case, I had a native function that was returning a ‘const char *’ which I was trying to marshal (as LPSTR) into a string.
I can’t see any way to tell the marshalling mechanism that the result is supposed to be ‘const’ and thus it should not free it. Do I really need to do the IntPtr/Marshal.PtrToStringAnsi() approach?
Hi Jeff,
If I understand you correctly: if your C++ function is returning const char*, you can either use IntPtr and Marshal.PtrToStringAnsi(), or you can use StringBuilder.
extern "C" __declspec(dllexport) const char* ReturnString( )
{
char * ch = (char*)CoTaskMemAlloc(20);
strcpy(ch,"This is sample");
return ch;
}
You can use StringBuilder in the C# code, as strings are immutable.
[DllImport("Test.dll",CharSet = CharSet.Ansi)]
public static extern StringBuilder ReturnString();
StringBuilder str = ReturnString();
If I have not understood your question properly, please feel free to contact me at jpatel_at_microsoft.com
No, you’ve missed the point. Lets assume that my C++ DLL, which already serves other applications and thus cannot be changed, looks like this:
__declspec(dllexport) const char *message(int key)
{
switch(key) {
default: return "Bad Key";
case 1: return "Key 1";
case 2: return "Two two";
}
}
Its far more complicated than that, indexing into other data structures which will remain alive for the duration of the application. The point is that the caller is not supposed to free the result, its ‘const’. If I make that entry point allocate memory, it will leak when all the other existing callers use it.
It appears to be impossible to marshal this across to C# because there is no ‘const’ keyword anywhere in the MarshalAs() attribute, presumably because there is no concept of ‘deleting’ .NET objects, only letting them expire through garbage collection. Its only at the bridge between native and managed that explicit memory management becomes necessary, I understand that.
I have already used the IntPtr/PtrToStringAnsi approach successfully – I was asking whether there was some other aspect of the MarshalAs() attribute mechanism that I was unaware of that would allow me to tell the marshalling mechanism that it didn’t need to free the input LPStr
(Note also, I’m doing this in comments, rather than private email because I spent way too long googling to try to find any sort of discussion on this topic – I’d much rather leave a trail that others in the same boat can follow)
‘const’ means "don’t modify it", NOT "don’t free it". You aren’t looking for a way to specify constant-ness. When the marshaled type is a string, the contract is that the marshaler will free it.
@Scot:
Um, if freeing it doesn’t modify it, I’ll eat my hat. You can’t pass ‘const’ pointers into free(), you can’t use ‘delete’ on const pointers, etc. The only mechanism any C/C++ API has to ensure its results are not free’d is to declare the result as ‘const’. Its only in .NET where you can’t *ever* explicitly delete an object, and thus the distinction is blurred.
Its pointless to worry about the ‘can’t modify’ aspect of it, since by definition marshalling creates a completely different object; of course there would be no way that the original could be modified by manipulation of the .NET object.
Can you point me at any explicit statement made anywhere that explains that the marshalling API contract including the freeing of the input data? I haven’t seen any reference to that behaviour anywhere so far.
I’m quite happy to be wrong on this, but this article (specifically in point 1) seems to suggest that the marshalling interface has just "decided to use CoTaskMemFree()" and that it can "lead to a crash". That suggests to me that the marshalling API’s *dont* define that input strings will be freed; otherwise there would be no need to clarify here, just refer to the appropriate msdn page.
I would like to clarify a few things here. The marshalling interface has not "just decided to use CoTaskMemFree"; it definitely is a conscious decision which would have been made taking into account several things. I do not know the exact details of why the decision was taken as of now.
But I tend to disagree with you on the constness and liveliness part of it. Const modifies a type; that is, it restricts the ways in which an object can be used, rather than specifying how the constant is to be allocated/freed. The fact that a constant (or, for that matter, a static) is allocated in read-only storage by the VC++ compiler is an optimization made by the VC++ compiler and is not something mandated by the C++ standard.
Also, marshalling does not necessarily mean that a new object will always be created. It is a necessity in cases where the layout of the object is totally different, but if the layout is the same or similar, the marshaller is free to reuse the same memory location. Unless otherwise stated, the fact that the marshaller allocates a new object is an implementation detail and not part of the contract of the marshalling interface per se.
If it was a conscious decision, can someone please point me to the MSDN documentation that describes the behaviour that was apparently included by design? And can someone point out to me why it behaves differently on on .NET 2.0 vs 3.5. The exact same code works fine on 2.0, no debugger output, etc whereas it crashes almost immediately on 3.5 – at the least I should expect to see a release note somewhere explaining that a memory leak was fixed, at the expense of possible crashes.
Yes, const modifies a type and the type in question is ‘char’. ‘const char *’ is "pointer to constant characters" – you can change the value of the pointer itself, but not of what it points to. You explicitly cannot delete, or free "what it points to". This is irrespective of whether the value are in a read-only segment, or are just a pointer into my seperately maintained symbol table of objects which will remain live for the duration of the application. It makes no difference, you are not allowed to free that object. And the problem here is that there appears to be no way to tell the marshaller that fact.
I accept that marshalling may not create a new object – in fact, this is what I though the original problem was, that the String class was retaining a pointer to *my* character array, but that would again introduce undocumented behaviour, since Marshalling makes no claims about a limited life-span of the marshalled object, and there’s no way it could assume the array would remain alive once this function returns. Nevertheless, I have claimed nothing about whether Marshalling creates new objects or not.
What I asked is "where does it say that marshalling will discriminately free its input"? I say discriminately because it's *not* indiscriminate; it doesn't free everything, only strings. What happens if I tell it that a function returns a pointer to a struct which contains 3 reals (note, I do this currently, this is not speculation, I need this to work)? Is it documented that marshalling will also free that structure? (Note, I am not asking about structures that contain pointers to other fields, or other marshallable structures, just primitive data types.)
What I want to know is "where is the secret list of extra rules about what gets freed and what doesn’t"?
Actually, on the ‘pointer to struct containing reals’, I’ve double-checked my source code and I’m already using the IntPtr/Marshal.PtrToStructure() approach there so its not a problem for me.
This article explains the rules for memory management by the interop marshaller in general. Are you observing the difference between .NET 2.0 and .NET 3.5, or is it the difference between XP and Vista?
Thank you for that reference, it obviously is documented and it was my Googling skills in error.
From my reading of that page, its never going to be an issue for arbitrary structure pointers because the only way you can marshall them is as IntPtr which won’t be automatically free’d.
Its hard to tell whether I’m getting the problem because of the 2.0/3.5 difference or the XP/Vista/7 difference, because technically we are compiled against 2.0, but we don’t have that available on Vista or 7 and thus it automagically upgrades itself to using 3.5.
Our app wants to try and run regardless of which version of .NET is available (to as much an extent as that is possible), and we also run as a plugin inside AutoCAD which means that we can’t dictate the contents of a .config file to force a specific version of .NET
Once again, thanks for putting up with my annoying questions.
|
https://blogs.msdn.microsoft.com/dsvc/2009/06/22/troubleshooting-pinvoke-related-issues/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
brent-search 1.0.29
Brent's method for univariate function optimization
Brent’s method for univariate function optimization.
Example
from brent_search import brent

def func(x, s):
    return (x - s)**2 - 0.8

r = brent(lambda x: func(x, 0), -10, 10)
print(r)
The output should be
(0.0, -0.8, 6)
Install
The recommended way of installing it is via conda
conda install -c conda-forge brent-search
An alternative way would be via pip
pip install brent-search
Running the tests
After installation, you can test it
python -c "import brent_search; brent_search.test()"
as long as you have pytest.
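For intuition, Brent's method combines golden-section bracketing with inverse parabolic interpolation. The bracketing half of the idea can be sketched in a few lines of pure Python — this is an illustrative sketch, not the library's actual implementation, and `golden_section` is a name invented here:

```python
def golden_section(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b] by golden-section search.

    Brent's method accelerates this bracketing scheme with inverse
    parabolic interpolation steps when they are safe to take.
    """
    invphi = (5 ** 0.5 - 1) / 2  # 1/phi, the golden-ratio shrink factor
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):   # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:             # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    x = (a + b) / 2
    return x, f(x)

xmin, fmin = golden_section(lambda x: (x - 0.5) ** 2 - 0.8, -10, 10)
print(xmin, fmin)  # close to (0.5, -0.8)
```

Note that this naive version re-evaluates f at both probe points on every iteration; the real algorithm caches evaluations so only one new function call is needed per step.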
License
This project is licensed under the MIT License - see the License file for details.
- Author: Danilo Horta
- Keywords: search,line,brent
- License: MIT
- Platform: any
- Package Index Owner: dhorta
- DOAP record: brent-search-1.0.29.xml
|
https://pypi.python.org/pypi/brent-search/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
> > Frankly, I'm a bit confused by your post. Maybe I don't understand
> > what you're proposing?
>
> Modules are modules, right? That is, pickle.py and cPickle.so are both
> represented as module objects at runtime. A C extension can call
> PyModule_GetDict() on any module. If so, then any extension module can
> add names to the __dict__ of any Python module. The problem is that
> modules expose their representation at the C API level (namespace
> implemented as PyDictObject), so it's difficult to forbid things at the
> C level.

Oh sure. I don't think it's necessary to forbid things at the C API level
in the sense of making it impossible to do. We'll just document that C
code shouldn't do that. There's plenty that C code could do but shouldn't
because it breaks the world. I don't expect there will be much C in
violation of this prohibition.

--Guido van Rossum (home page:)
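The situation discussed in the thread is easy to demonstrate even from pure Python, since a module's namespace is just a dict and the C-level PyModule_GetDict() exposes that same object. A small illustration (using a throwaway module object rather than patching a real one):

```python
import types

# A stand-in module, so we don't mutate a real library module.
victim = types.ModuleType("victim")

# Any code holding a reference to the module can reach its namespace
# dict and inject names -- this is what PyModule_GetDict() permits
# from a C extension.
vars(victim)["injected"] = 42

print(victim.injected)  # 42
```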
|
https://mail.python.org/pipermail/python-dev/2003-March/034270.html
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
From Jenkins to Jenkins X
This is a tale of dailymotion’s journey from Jenkins to Jenkins X, the issues we had, and how we solved them.
Our context
At dailymotion, we strongly believe in devops best practices, and are heavily investing in Kubernetes. Part of our products are already deployed on Kubernetes, but not all of them. So, when the time came to migrate our ad-tech platform, we wanted to fully embrace “the Kubernetes way” — or cloud-native, to be buzzword-compliant! This meant redefining our whole CI/CD pipeline and moving away from static/permanent environments, in favour of dynamic on-demand environments. Our goal was to empower our developers, reduce our time to market and reduce our operation costs.
Our initial requirements for the new CI/CD platform were:
- avoid starting from scratch if possible: our developers are already used to using Jenkins and the declarative pipelines, and those are working just fine for our current needs
- target a public cloud infrastructure — Google Cloud Platform — and Kubernetes clusters
- be compatible with the gitops methodology — because we love version control, peer review, and automation
There are quite a few actors in the CI/CD ecosystem, but only one matched our needs, Jenkins X, based on Jenkins and Kubernetes, with native support for preview environments and gitops.
Jenkins on Kubernetes
The Jenkins X setup is fairly straightforward, and already well documented on their website. As we’re already using Google Kubernetes Engine (GKE), the
jx command-line tool created everything by itself, including the Kubernetes cluster. Cue the little wow effect here: obtaining a complete working system in a few minutes is quite impressive.
Jenkins X comes with lots of quickstarts and templates to add to the wow effect, however, at dailymotion we already have existing repositories with Jenkins pipelines that we’d like to re-use. So, we’ve decided to do things “the hard way”, and refactor our declarative pipelines to make them compatible with Jenkins X.
Actually, this part is not specific to Jenkins X, but to running Jenkins on Kubernetes, based on the Kubernetes plugin. If you are used to “classic” Jenkins, with static slaves running on bare metal or VMs, the main change here is that every build will be executed on its own short-lived custom pod. Each step of the pipeline can then specify on which container of the pod it should be executed. There are a few examples of pipelines in the plugin’s source code. Our “challenge” here was to define the granularity of our containers, and which tools they’d contain: enough containers so we can reuse their images between different pipelines, but not too many either to keep maintenance under control — we don’t want to spend our time rebuilding container images.
Previously, we used to run most of our pipelines steps in Docker containers and when we needed a custom one, we built it on-the-fly in the pipeline, just before running it. It was slower, but easier to maintain, because everything is defined in the source code. Upgrading the version of the Go runtime can be done in a single pull-request, for example. So, having to pre-build our container images sounded like adding more complexity to our existing setup. It also has a few advantages: less duplication between repositories, faster builds, and no more build errors because some third-party hosting platform is down.
Building images on Kubernetes
This brings us to an interesting topic these days: building container images in a Kubernetes cluster.
Jenkins X comes with a set of build packs that use “Docker in Docker” to build images from inside containers. But with new container runtimes coming, and Kubernetes pushing its Container Runtime Interface (CRI), we wanted to explore other options. Kaniko was the most mature solution, and matched our needs/stack. We were thrilled…
…until we hit two issues:
- the first one was a blocking issue for us: multi-stage builds didn’t work. Thanks to Google we quickly found that we were not the only ones affected, and that there was no fix or work-around yet. However, Kaniko is developed in Go, and we are Go developers, so… why not have a look at the source code? Turns out that once we understood the root cause of the issue, the fix was really easy. The Kaniko maintainers were helpful and quick to merge the fix, so one day later a fixed Kaniko image was already available.
- the second one was that we couldn’t build two different images using the same Kaniko container. This is because Jenkins isn’t quite using Kaniko the way it is meant to be used — because we need to start the container first, and then run the build later. This time, we found a workaround on Google: declaring as many Kaniko containers as we need to build images, but we didn’t like it. So back to the source code, and once again once we understood the root cause, the fix was easy.
We tested a few solutions to build our custom “tools” images for the CI pipelines; in the end, we chose to use a single repository, with one image —
Dockerfile — per branch. Because we are hosting our source code on Github, and using the Jenkins Github plugin to build our repositories, it can build all our branches and create new jobs for new branches on webhook events, which make it easy to manage. Each branch has its own
Jenkinsfile declarative pipeline, using Kaniko to build the image — and pushes it to our container registry. It’s great for quickly adding a new image, or editing an existing one, knowing that Jenkins will take care of everything.
The importance of declaring the requested resources
One of the major issues we encountered with our previous Jenkins platform came from the static slaves/executors, and the sometimes-long build queues during peak hours. Jenkins on Kubernetes makes it easy to solve this issue, mainly when running on a Kubernetes cluster that supports the cluster autoscaler. The cluster will simply add or remove nodes based on the current load. But this is based on the requested resources, not on the observed used resources. It means that it’s our job, as developers, to define, in our build pod templates, the requested resources in terms of CPU and memory. The Kubernetes scheduler will then use this information to find a matching node to run the pod — or it may decide to create a new one. This is great, because we no longer have long build queues. But instead we need to be careful in defining the right amount of resources we need, and updating them when we update our pipelines. As resources are defined at the container level, and not the pod level, it makes things a little more complex to handle. But we don’t care about limits, only requests. And a pod’s requests are just the sum of the requests of all its containers. So, we just write our resource requests for the whole pod on the first container — or on the
jnlp one — which is the default.
Here is an example of one of our Jenkinsfiles, showing how we can declare the requested resources:
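The pipeline embed from the original post did not survive extraction, so the following is a hedged sketch of what such a declarative pipeline can look like. The pod definition, container image, and resource values below are illustrative assumptions, not the actual production values:

```groovy
// Illustrative Jenkinsfile sketch — image names and resource values are assumptions.
pipeline {
  agent {
    kubernetes {
      label 'build-pod'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    resources:
      requests:
        # requests for the WHOLE pod, declared on the first (jnlp) container
        cpu: "1"
        memory: 2Gi
  - name: golang
    image: golang:1.11
    command: ['cat']
    tty: true
"""
    }
  }
  stages {
    stage('build') {
      steps {
        // each step runs in the container of the pod it names
        container('golang') {
          sh 'go build ./...'
        }
      }
    }
  }
}
```

Declaring the whole pod's requests on the `jnlp` container keeps the sum correct while leaving the other containers unconstrained.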
Preview environments on Jenkins X
Now that we have all our tools, and we’re able to build an image for our application, we’re ready for the next step: deploying it to a “preview environment”!
Jenkins X makes it easy to deploy preview environments, by reusing existing tools — mainly Helm, as long as you follow a few conventions, for example the names of the values for the image tag. It’s best to copy/paste from the Helm charts provided in the “packs”. If you are not familiar with Helm, it’s basically a package manager for Kubernetes applications. Each application is packaged as a “chart”, which can then be deployed as a “release” by using the
helm command-line tool.
The preview environment is deployed by using the
jx command-line tool, which takes care of deploying the Helm chart, and commenting on the Github pull-request with the URL of the exposed service. This is all very nice, and worked well for our first POC using plain http. But it’s 2018, nobody does http anymore. Let’s encrypt! Thanks to cert-manager, we can automatically get an SSL certificate for our new domain name when creating the ingress resource in Kubernetes. We tried to enable the
tls-acme flag in our setup — to do the binding with cert-manager — but it didn’t work. This gave us the opportunity to have a look at the source code of Jenkins X — which is developed in Go too. A little fix later we were all good, and we can now enjoy a secured preview environment with automatic certificates provided by let’s encrypt.
The other issue we had with the preview environments is related to the cleanup of said environments. A preview environment is created for each opened pull-request, and so should be deleted when the pull-request is merged or closed. This is handled by a Kubernetes Job setup by Jenkins X, which deletes the namespace used by the preview environment. The issue is that this job doesn’t delete the Helm release — so if you run
helm list for example, you will still see a big list of old preview environments. For this one, we decided to change the way we used Helm to deploy a preview environment. The Jenkins X team already wrote about these issues with Helm and Tiller — the server side component of Helm — and so we decided to use the
helmTemplate feature flag to use Helm as a templating rendering engine only, and process the resulting resources using
kubectl. That way, we don’t “pollute” our list of Helm releases with temporary preview environments.
Gitops applied to Jenkins X
At some point during our initial POC, we were happy enough with our setup and pipelines, and wanted to transform our POC platform into a production-ready platform. The first step was to install the SAML plugin to set up our Okta integration — to allow our internal users to log in. It worked well, and then a few days later, I noticed that our Okta integration was not there anymore. I was busy doing something else, so I just asked my colleague if he’d made some changes and moved on to something else. But when it happened a second time a few days later, I started investigating. The first thing I noticed was that the Jenkins pod had recently restarted. But we have persistent storage in place, and our jobs were still there, so it was time to take a closer look! It turns out that the Helm chart used to install Jenkins has a startup script that resets the Jenkins configuration from a Kubernetes
configmap. Of course, we can’t manage a Jenkins running in Kubernetes the same way we manage a Jenkins running on a VM!
So instead of manually editing the
configmap, we took at step back, and looked at the big picture. This
configmap is itself managed by the jenkins-x-platform, so upgrading the platform would reset our custom changes. We needed to store our “customization” somewhere safe and track our changes.
We could go the Jenkins X way, and use an umbrella chart to install/configure everything, but this method has a few drawbacks: it doesn’t support “secrets” — and we’ll have some sensitive values to store in our git repository — and it “hides” all the sub-charts. So, if we list all our installed Helm releases, we’ll only see one. But there are other tools based on Helm, which are more gitops-friendly. Helmfile is one of them, and it has native support for secrets, through the helm-secrets plugin, and sops. I won’t go into the details of our setup right now, but don’t worry, it will be the topic of my next blog post!
The migration
Another interesting part of our story is the actual migration from Jenkins to Jenkins X, and how we handled repositories with two build systems. At first, we set up our new Jenkins to build only the “jenkinsx” branches, and we updated the configuration of our old Jenkins to build everything except the “jenkinsx” branch. We planned to prepare our new pipelines in the “jenkinsx” branch, and merge it to make the move. For our initial POC it worked nicely, but when we started playing with preview environments, we had to create new PRs, and those PRs were not built on the new Jenkins, because of the branch restriction. So instead, we chose to build everything on both Jenkins instances, but use the
Jenkinsfile filename for the old Jenkins, and the
Jenkinsxfile filename for the new Jenkins. After the migration, we’ll update this configuration and rename the files, but it’s worth it, because it enables us to have a smooth transition between both systems, and each project can migrate on its own, without affecting the others.
Our destination
So, is Jenkins X ready for everybody? Let’s be honest: I don’t think so. Not all features and supported platforms — git hosting platforms or Kubernetes hosting platforms — are stable enough. But if you’re ready to invest enough time to dig in, and select the stable features and platforms that work for your use-cases, you’ll be able to improve your pipelines with everything required to do CI/CD and more. This will improve your time to market, reduce your costs, and if you’re serious about testing too, be confident about the quality of your software.
At the beginning, we said that this was the tale of our journey from Jenkins to Jenkins X. But our journey isn’t over, we are still traveling. Partly because our target is still moving: Jenkins X is still in heavy development, and it is itself on its own journey towards Serverless, using the Knative build road for the moment. Its destination is Cloud Native Jenkins. It’s not ready yet, but you can already have a preview of what it will look like.
Our journey also continues because we don’t want it to finish. Our current destination is not meant to be our final destination, but just a step in our continuous evolution. And this is the reason why we like Jenkins X: because it follows the same pattern. So, what are you waiting to embark on your own journey?
https://medium.com/dailymotion/from-jenkins-to-jenkins-x-604b6cde0ce3
What is a Vector?
Suppose we want to move from point A to point B, where point A is situated at (33.0, 35.0) and point B at (55.0, 45.0). Vector AB is then the difference between these two points: the x and y distance between them is (x2-x1, y2-y1), or (55.0-33.0, 45.0-35.0).
Why do we need to create a vector class?
A Vector module helps game developers perform various operations, for example moving an object from point A to point B, as well as finding the vector magnitude of that object. It is therefore always better to create a Vector module before we create our game.
Create a Vector2D class in python
The Vector class is just like any other module class, with methods that we can use to move an object or modify the properties of that object.
import math


class Vector2D(object):

    def __init__(self, x=0.0, y=0.0):
        self.x = x
        self.y = y

    def __str__(self):
        return "(%s, %s)" % (self.x, self.y)

    def __add__(self, rhs):
        return Vector2D(self.x + rhs.x, self.y + rhs.y)

    def __sub__(self, rhs):
        return Vector2D(self.x - rhs.x, self.y - rhs.y)

    def __mul__(self, scalar):
        return Vector2D(self.x * scalar, self.y * scalar)

    def __div__(self, scalar):
        return Vector2D(self.x / scalar, self.y / scalar)

    def __neg__(self):
        return Vector2D(-self.x, -self.y)

    def get_magnitude(self):
        return math.sqrt(self.x ** 2 + self.y ** 2)

    # find the unit vector
    def normalize(self):
        magnitude = self.get_magnitude()
        self.x /= magnitude
        self.y /= magnitude

    @classmethod
    def next_vector(cls, p):
        # p is the combined tuple (x1, y1, x2, y2) of two points
        return Vector2D(p[2] - p[0], p[3] - p[1])
A few of the methods above are overloaded methods that will be called when a Vector class instance performs certain operations; for example, the __div__, __mul__, __sub__ and __add__ methods will be called when we divide, multiply, subtract and add two vectors. The __neg__ method will be called if we want to point a vector in the opposite direction.
The __init__ method will be called the moment we initialize a Vector2D instance, and __str__ will be called when we print that object with the Python print function.
The get_magnitude method will return the magnitude of the vector, and the normalize method will divide the x and the y length of the vector by its magnitude.
Finally, next_vector will take in the combined value of two tuples and return a new Vector2D object.
Create a separate Python module with the below script.
from vector2d import Vector2D

if __name__ == "__main__":
    A = (10.0, 20.0)
    B = (30.0, 35.0)
    C = (15.0, 45.0)
    AB = Vector2D.next_vector(A + B)
    BC = Vector2D.next_vector(B + C)
    AC = AB + BC
    print(AC)
    AC = Vector2D.next_vector(A + C)
    print(AC)
If you run the above module, you can see that when you add up two vectors with AC = AB + BC, the overloaded __add__ method of vector AB is called, which then returns a new Vector2D object. AC = Vector2D.next_vector(A+C) produces the same outcome as AC = AB + BC when we print the vector with the print function. In this example the result is (5.0, 25.0).
The above Vector2D class will get you started; you can now include more methods in the Vector2D module for future expansion.
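Independently of the class above, the magnitude and normalization math can be sanity-checked with a minimal standalone sketch (the function names here are illustrative, not part of the module):

```python
import math

def magnitude(x, y):
    # length of the vector (x, y)
    return math.sqrt(x ** 2 + y ** 2)

def normalize(x, y):
    # scale the vector so its length becomes 1.0
    m = magnitude(x, y)
    return (x / m, y / m)

print(magnitude(3.0, 4.0))   # 5.0
print(normalize(3.0, 4.0))   # (0.6, 0.8)
```

A normalized vector keeps its direction but has magnitude 1.0, which is what you want before multiplying by a speed in a game loop.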
http://gamingdirectional.com/blog/2016/08/31/create-a-vector-class-in-pygame/
Hi there,
yesterday we patched our PI 7.0 from SP 09 to SP 11.
Until then we had a running scenario, containing an IDoc-sending SAP system. Since XI was patched, incoming messages from the IDoc adapter have a green flag in SXI_Monitor, but the queue status is red.
The Message Trace is as follows:
...
<Trace level="1" type="T">--start sender interface action determination</Trace>
<Trace level="1" type="T">select interface ORDERS.ORDERS05*</Trace>
<Trace level="1" type="T">select interface namespace urn:sap-com:document:sap:idoc:messages</Trace>
<Trace level="1" type="T">no interface found</Trace>
<Trace level="1" type="T">--start receiver interface action determination</Trace>
<Trace level="1" type="T">Loop 0000000001</Trace>
<Trace level="1" type="T">select interface *</Trace>
<Trace level="1" type="T">select interface namespace</Trace>
<Trace level="1" type="T">no interface found</Trace>
<Trace level="1" type="T">--no sender or receiver interface definition found</Trace>
<Trace level="1" type="T">Hence set action to DEL</Trace>
The inbound queue XBTI* reports status SYSFAIL with error text "An exception occurred that was not caught."
I've re-tested the scenario in the Integration Directory; each step is working!
Does anybody have any idea about this behavior? Help is, as always, highly appreciated. 😉
Thanks in advance!!!
Cheers,
Matthias
Hi,
Reprocess message, this is temporary issue.
Regards,
Gourav
---
Reward points if it helps you
Ohh!! Try following:
- Complete cache refresh.
- Reimport Idoc definition again
Your message is saying there is a problem with interface determination, so check that also. Deactivate/activate your scenario again.
https://answers.sap.com/questions/2180009/index.html
SNMP Alerts
The XAP
Alert interface exposes the XAP environment and the application’s health state. It allows users to register listeners on one or more alert types and receive notifications once an alert has been raised or has been resolved. You may use this framework to build a custom integration with a third party monitoring products to leverage the XAP alerting system.
A recommended approach for such integration would be to construct a listener that writes the chosen types of alerts into logger mechanism. Examples for such may be the log4j or the commons-logging frameworks.
The main advantage with this approach is the ability to use an extensive set of out-of-box log appenders that translates log messages into different protocols and APIs to be consumed by third party products.
Example
The
AlertLoggingGateway example project provided with the GigaSpaces distribution using an existing
Log4J Appender (SnmpTrapAppender) to convert log messages into SNMP traps, resulting in the alerts propagated to a third party network management solution.
AlertsLoggingGatway components
SnmpTrapTransmitter
The SnmpTrapTransmitter is a XAP PU responsible for the generic Alert-to-Log bridging. It does that by listening to all alerts listed in its alert filter file. Any incoming alerts are simply written to a commons-logging log. Notice that, being generic in nature, the SnmpTrapTransmitter can be reused without any changes in similar projects. SnmpTrapTransmitter exposes the following configuration parameters:
AlertFileFilter - the name of the Alert filter XML file used to filter the alerts to be logged.
loggerName - the name of the logger to be created.
group - the XAP group for which the Alert listener will be configured.
<bean id="SnmpTrapTransmitter" class="org.openspaces.example.alert.logging.snmp.SnmpTrapTransmitter">
    <property name="alertFileFilter" value="notify-alerts.xml" />
    <property name="loggerName" value="org.openspaces.example.alert.logging.AlertLoggingGateway" />
    <property name="group" value="group-name-here" />
</bean>
Note that if you implement your own variant for this class, for other types of alert interception, you will also have to override the
construct() method to register for alerts, the
destroy() method to cleanup the registration, and to create your own class implementing the
AlertTriggeredEventListener interface in which you will issue the logging calls:
public class SnmpTrapTransmitter {

    private Log logger;

    @PostConstruct
    public void construct() throws Exception {
        registerAlertTrapper();
    }

    @PreDestroy
    public void destroy() throws Exception {
        alertManager.getAlertTriggered().remove(atListener);
    }

    private void registerAlertTrapper() {
        atListener = new AlertTriggeredEventListener() {
            public void alertTriggered(Alert alert) {
                String loggRecord;
                loggRecord = alert.toString();
                logger.info(loggRecord);
            }
        };

        XmlAlertConfigurationParser cparse = new XmlAlertConfigurationParser(alertFileFilter);
        alertManager.configure(cparse.parse());
        alertManager.getAlertTriggered().add(atListener);
    }
}
SnmpTrapSender
The SnmpTrapSender is a utility class that implements the SnmpTrapAppender’s
SnmpTrapSenderFacade interface with an implementation that queues and asynchronously transmits Alerts as SNMP traps. The SNMP transmission method -
sendTrap() - uses snmp4j library as its underlying implementation.
public class SnmpTrapSender implements SnmpTrapSenderFacade {

    public void addTrapMessageVariable(String trapOID, String trapValue) {
        trapQueue.add(trapValue);
    }

    public void initialize(SNMPTrapAppender arg0) {
        trapQueue.clear();
        loadRunParams();
    }

    public void sendTrap() {
        String trapVal = trapQueue.removeFirst();
        PDUv1 trapPdu = (PDUv1) DefaultPDUFactory.createPDU(SnmpConstants.version1);
        trapPdu.setType(PDU.V1TRAP);
        // pack trapVal into trapPdu
        snmp.send(trapPdu, target);
    }
}
Logging
The commons-logging.properties file is a commons-logging configuration file which redirects its calls to a log4j logger. In our example this file contains the redirection of commons-logging to log4j, as the SNMP trapper we use is built on top of log4j. log4j.properties is a log4j configuration file which delegates log writes to the SNMPTrapAppender, resulting in SNMP traps.
log4j.rootCategory=INFO,TRAP_LOG
log4j.appender.TRAP_LOG=org.apache.log4j.ext.SNMPTrapAppender
log4j.appender.TRAP_LOG.ImplementationClassName=org.openspaces.example.alert.logging.snmp.SnmpTrapSender
log4j.appender.TRAP_LOG.ManagementHost=127.0.0.1
log4j.appender.TRAP_LOG.ManagementHostTrapListenPort=162
log4j.appender.TRAP_LOG.CommunityString=public
log4j.appender.TRAP_LOG.Threshold=INFO
log4j.appender.TRAP_LOG.layout=org.apache.log4j.PatternLayout
log4j.appender.TRAP_LOG.layout.ConversionPattern=%d,%p,%t,%c,%m%n
Running the Example
The example is located under
<XAP root>/tools/alert-integration. To run it you should do the following:
- Set the “group” value in the pu.xml file to your own XAP group. Optionally you may edit the function registerAlertTrapper() in SnmpTrapTransmitter.java to create your own Admin object in any way you see fit.
- Optionally edit the file notify-alerts.xml to set your own alerts and alert conditions that will be listened to by this example.
- Optionally edit log4j.properties to set the IP and port used by your SNMP server software (if any).
- If you do not have SNMP server software, you should download one for the sake of running and testing this example. The iReasoning MIB Browser, for example, provides good basic SNMP trap viewing capabilities with a free personal edition. Make sure you configure log4j.properties to use the same IP and port used by the server.
- Install XAP’s maven dependencies to the local maven repository by executing <XAP root>/tools/maven/installmavenrep.sh (or .bat).
- Build and pack the example project into a jar file. This can be done by executing the command “mvn” from the project’s root directory or performing an equivalent action within your UI. A successful build should result in the creation of the example jar file in target/AlertLoggingGateway.jar.
- If needed start XAP with at least one running LUS, GSM and GSC belonging to the XAP group declared in item #2.
- Deploy the example JAR into the GSC.
- If needed - perform XAP actions that will trigger one or more of the alerts the example is tuned to listen to. Creating a new GSCs is usually a good way for creating a multitude of different alerts.
- Start-up your SNMP server to intercept and view incoming traps. If you use MIB browser enter the Trap Receiver (Ctrl-I) and make sure it is configured to listen on the right IP and port.
External Dependencies
- log4j version >= 1.2.14
- snmpTrapAppender version >= 1.2.9
- snmp4j version >= 1.11.2
- For the example build process you should have Apache Maven installed. You may find it already part of the GigaSpaces folders under
\gigaspaces-xap\tools\maven.
https://docs.gigaspaces.com/xap/12.2/dev-java/snmp-connectivity-via-alert-logging-gateway.html
I've left you something really tasty for dessert – Boost's Generic Image Library (GIL), which allows you to manipulate images and not care much about image formats.
Let's do something simple and interesting with it; let's make a program that negates any picture.
This recipe requires basic knowledge of C++, templates, and
Boost.Variant. The example requires linking against the PNG library.
For simplicity, we'll be working with only PNG images.
#include <boost/gil/gil_all.hpp>
#include <boost/gil/extension/io/png_dynamic_io.hpp>
#include <string>
typedef boost::mpl::vector<
    boost::gil::gray8_image_t,
    ...
https://www.oreilly.com/library/view/boost-c-application/9781849514880/ch12s08.html
This is a guest post from Simon Grimm, Ionic Developer Expert and educator at the Ionic Academy. Simon also writes about Ionic frequently on his blog Devdactic.
In this tutorial we will look at the new navigation system inside Ionic Framework 4.0 to learn how to use the powerful CLI, to understand how your Ionic 3 logic can be converted to v4, and, finally, the best way to protect pages inside your app if you plan to use an authentication system at any point.
I am Number 4
There is so much to talk about if we want to cover all the Ionic 4 changes, but for today let’s just focus on one of the key aspects of your app: Navigation!
With Ionic 4, your app is using Angular, which already comes with some new additions itself. But now Ionic is also using the standard Angular Router in the background. This means, instead of pushing and popping around inside your app, we have to define paths that are aligned with our pages.
If you are new to this concept, it might look a little scary and you may think, “Why so much code, everything was so easy before…,” but trust me, by the end of this post, you’ll love the new routing and you’ll be ready to migrate your Ionic 3 apps to the new version.
For now, let’s start with a blank Ionic 4 app so we can implement some navigation concepts. Go ahead and run:
npm install -g ionic
ionic start goFourIt blank
This will create a new project which you can directly run with
ionic serve once you are inside that folder. It should bring up a blank app in your browser with just one page. The routing for this app lives inside app/app-routing.module.ts:
import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [
  { path: '', redirectTo: 'home', pathMatch: 'full' },
  { path: 'home', loadChildren: './home/home.module#HomePageModule' },
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }
<body>
  <app-root></app-root>
</body>
The only thing we display is an app-root, which is still not very clear. This app root is replaced by the first real HTML of our app, which is always inside the app/app.component.html:
<ion-app>
  <ion-router-outlet></ion-router-outlet>
</ion-app>

Now, you are hopefully ready to navigate the changes a bit better.
Adding New Pages with the CLI
Because a single page is not yet an app, we need more pages! To do so, you can use the Ionic CLI, which provides a wrapper for the Angular CLI. Right now, we could add 2 additional pages like this:
ionic g page pages/login ionic g page pages/dashboard ionic g page pages/details
These commands tell the CLI to generate (g) new pages at the paths pages/login, pages/dashboard and pages/details. It doesn’t matter that the folder ‘pages’ does not yet exist; the CLI will automatically create it for you.
There’s a whole lot more you can do with the CLI, just take a look at the documentation.
For now, let’s get back to our main goal of implementing navigation inside our app.
After creating pages with the CLI your app-routing.module.ts will automatically be changed, which may or may not help you in some cases. Right now, it also contains routing information for the three new pages we added with the according path of their module.
Changing Your Entry & Navigating (a.k.a Push & Pop)
One thing I often do with my apps is change the initial page to be a different component. To change this, we can simply remove the routing information for the home page, delete its folder, and change the redirect to point to the login page we generated earlier.
Once a user is logged in, the app should then display our dashboard page. The routing for this is fine, so far, and we can leave it like it is.
For the detail page we generated, we do want one addition: URL parameters. Say that you want to pass data from one page to another. To do this, we’d use URL parameters and specify a dynamic slug in the path. For this routing setup, we’ll add
:myid.
Your routing should now look like this inside your app-routing.module.ts:' } ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) export class AppRoutingModule { }
With previous Ionic versions you could also supply complete objects with a lot of information to another page, but with the new version this has changed. By using the URL routing, you can (and should) only pass something like an object's ID (to be used in an HTTP request) to the following page.
In order to get the values you need on that page later, simply use a service that holds your information, or makes an HTTP request and returns the right info for a given key at any time.
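A minimal sketch of such a data service could look like the following. In a real Ionic/Angular app this class would be decorated with @Injectable() and provided through Angular's dependency injection; here it is plain TypeScript so the idea stands alone, and the Item shape and method names are illustrative assumptions:

```typescript
// Hypothetical data service sketch: pages register full objects here and only
// pass the object's id through the URL; the target page looks the object up.
interface Item {
  id: string;
  name: string;
}

class DataService {
  private items = new Map<string, Item>();

  setItem(item: Item): void {
    this.items.set(item.id, item);
  }

  getItem(id: string): Item | undefined {
    return this.items.get(id);
  }
}

const service = new DataService();
service.setItem({ id: '42', name: 'The answer' });
const found = service.getItem('42');
console.log(found ? found.name : 'not found'); // The answer
```

The page that receives `myid` from the route would simply call `getItem(myid)` instead of expecting the whole object in the navigation state.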
All routing logic is officially in place, so now we only need to add a few buttons to our app that allow us to move around. Let’s start by adding a first button to our pages/login/login.page.html:
<ion-header>
  <ion-toolbar>
    <ion-title>Login</ion-title>
  </ion-toolbar>
</ion-header>

<ion-content padding>
  <ion-button expand="block" routerLink="/dashboard" routerDirection="root">
    Login
  </ion-button>
</ion-content>
We add a block button which has two important properties:
routerLink: The link/path that should be opened
routerDirection: Determines the animation that takes place when the page changes
After a login, you most certainly want to ditch your initial page and start again with the inside area as a new starting point. In that case, we can use the direction “root,” which looks like replacing the whole view.
If you want to animate forward or backward, you would use forward/back instead. This is what we can add now, because we are already able to move from login to our dashboard. So, let’s add two more buttons to the pages/dashboard/dashboard.page.html:
<ion-header>
  <ion-toolbar>
    <ion-title>Dashboard</ion-title>
  </ion-toolbar>
</ion-header>

<ion-content padding>
  <ion-button expand="block" routerLink="/details/42" routerDirection="forward">
    Details
  </ion-button>

  <ion-button expand="block" routerLink="/login" routerDirection="root">
    Logout
  </ion-button>
</ion-content>
This is the same procedure as before—both have the link, but the first button will bring us deeper into our app by going to the details page and using “42” as the ID.
The second button brings us back to the previous login page again by animating a complete exchange of pages.
You can see the difference of the animations below:
Of course, you can also dynamically add the ID for the details page or construct the link like this if you have a variable
foo inside your class:
<ion-button [routerLink]="['/details', foo]">Details</ion-button>
To wrap things up we need to somehow get the value we passed inside our details page, plus your users also need a way to go back from that page to the dashboard.
First things first, getting the value of the path is super easy. We can inject the
ActivatedRoute and grab the value inside our pages/details/details.page.ts like this:
import { Component, OnInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';

@Component({
  selector: 'app-details',
  templateUrl: './details.page.html',
  styleUrls: ['./details.page.scss'],
})
export class DetailsPage implements OnInit {

  myId = null;

  constructor(private activatedRoute: ActivatedRoute) { }

  ngOnInit() {
    this.myId = this.activatedRoute.snapshot.paramMap.get('myid');
  }
}
By doing this, we can get the value, which is part of the
paramMap of the current route. Now that we have stored the value, we can also show it inside the current view, plus add a button to the top bar that allows the user to navigate back.
With previous Ionic versions that back button was automatically added. Meaning, the button was there even if we didn’t want it and it was difficult to customize. But with the release of Ionic 4.0, we can control this by adding it ourselves. At the same time, we can also define a
defaultHref. This way, if we load our app on that specific page and have no app history, we can navigate back and still have our app function.
The markup for our pages/details/details.page.html looks now like this:
<ion-header>
  <ion-toolbar>
    <ion-buttons slot="start">
      <ion-back-button defaultHref="/dashboard"></ion-back-button>
    </ion-buttons>
    <ion-title>Details</ion-title>
  </ion-toolbar>
</ion-header>

<ion-content padding>
  My ID is: {{ myId }}
</ion-content>
As you can see, this back-button will now always bring us back to the dashboard, even if we don’t have any history at that point.
By now, the whole navigation setup in our app works pretty flawlessly, but what if we wanted to restrict some routes to only authenticated user? Let’s go ahead and add this.
Protecting Pages with Guards
When you deploy your Ionic app as a website, all URLs, right now, could be directly accessed by a user. But here’s an easy way to change it:
We can create something called guard that checks a condition and returns true/false, which allows users to access that page or not. You can generate a guard inside your project with the Ionic CLI:
ionic g guard guards/auth
This generates a new file with the standard guard structure of Angular. Let’s edit guards/auth.guard.ts and change it’s content to:
import { Injectable } from '@angular/core';
import { CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router';
import { Observable } from 'rxjs';

@Injectable({
  providedIn: 'root'
})
export class AuthGuard implements CanActivate {
  canActivate(
    next: ActivatedRouteSnapshot,
    state: RouterStateSnapshot): Observable<boolean> | Promise<boolean> | boolean {
    let userAuthenticated = false; // Get the current authentication state from a Service!

    if (userAuthenticated) {
      return true;
    } else {
      return false;
    }
  }
}
The guard only has the canActivate() method, in which you return a boolean indicating whether the page can be accessed. In this code, we simply return false, but a real guard would make an API call or check a token value.
By default this guard is not yet enabled, but now the circle closes as we come back to our initial app routing. So, open the app-routing.module.ts once again and change it to:
import { AuthGuard } from './guards/auth.guard';
// ... other imports and route entries as before ...

const routes: Routes = [
  // ...
  {
    path: 'details/:myid',
    loadChildren: '...',
    canActivate: [AuthGuard]
  }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }
You can add an array of these guards to your pages and check for different conditions, but the idea is the same: Can this route be activated by this user?
Because we set it to false without any further checks, right now we could not navigate from the dashboard to the details page.
There’s also more information on guards and resolver functions inside the Ionic Academy, so check it out here!
Where to Go From Here?
We’ve only touched on a few basic elements of the (new) Angular routing concepts that are now applied in Ionic 4, but I hope it was helpful. Now that you’ve walked through this process, this concept should be a lot easier to understand and manage given that your navigation is not scrambled across various pages inside your app.
Also, securing your app or resolving additional data before entering a page becomes a lot easier with the direct paths and the use of child routing groups.
If you’re interested in the concept of routing, or would like to see more explanations around features like the Tabs, Side Menu, or child routes and modals, consider becoming an Ionic Academy member, today!
When you join, you’ll gain access to countless resources that will help you learn everything Ionic, from in-depth training resources to video courses, plus support from an incredibly helpful community.
Until next time, happy coding!
https://blog.ionicframework.com/navigating-the-change-with-ionic-4-and-angular-router/
A ring-shaped linestring.
#include <geometries.h>
Linearring is a non-instantiable type in SQL. It is used to represent polygon rings.
The start and end point of the linestring must be the same (assumed, but not enforced, by the implementation).
Applies a hierarchical visitor to this geometry.
Implements gis::Linestring.
Implemented in gis::Cartesian_linearring, and gis::Geographic_linearring.
Creates a subclass of Linearring from a Coordinate_system.
https://dev.mysql.com/doc/dev/mysql-server/latest/classgis_1_1Linearring.html
Getting Started with Windows Forms FlowLayout
17 Nov 20214 minutes to read
This section explains how to add the FlowLayout control to a Windows Forms application and gives an overview of its basic functionality.
Assembly deployment
Refer to the Control Dependencies section to get the list of assemblies or details of NuGet package that needs to be added as a reference to use the control in any application.
Refer to this documentation to find more details about installing NuGet packages in a Windows Forms application.
Adding the FlowLayout control via designer
1) Create a new Windows Forms application via designer.
2) Add the FlowLayout control to an application by dragging it from the toolbox to design view. The following assembly will be added automatically:
- Syncfusion.Shared.Base
3) To add the form as a container control of the FlowLayout, click Yes in the popup that appears automatically when the FlowLayout is added.
Adding layout components
Child controls can be added to the layout by dragging them from the toolbox to the design view.
Adding the FlowLayout control via code
To add control manually in C#, follow the given steps:
1) Create a C# or VB application via Visual Studio.
2) Add the following required assembly reference to the project:
- Syncfusion.Shared.Base
3) Include the required namespace.
using Syncfusion.Windows.Forms.Tools;
Imports Syncfusion.Windows.Forms.Tools
4) Create a FlowLayout control instance, and then set its ContainerControl to the form.
FlowLayout flowLayout1 = new FlowLayout();
flowLayout1.ContainerControl = this;
Dim flowLayout1 As FlowLayout = New FlowLayout()
flowLayout1.ContainerControl = Me
Adding layout components
The child controls can be added to a layout by simply adding it to the form since the form is its container control.
ButtonAdv buttonAdv1 = new ButtonAdv();
ButtonAdv buttonAdv2 = new ButtonAdv();
ButtonAdv buttonAdv3 = new ButtonAdv();
ButtonAdv buttonAdv4 = new ButtonAdv();
buttonAdv1.Text = "buttonAdv1";
buttonAdv2.Text = "buttonAdv2";
buttonAdv3.Text = "buttonAdv3";
buttonAdv4.Text = "buttonAdv4";
this.Controls.Add(buttonAdv1);
this.Controls.Add(buttonAdv2);
this.Controls.Add(buttonAdv3);
this.Controls.Add(buttonAdv4);
Dim buttonAdv1 As ButtonAdv = New ButtonAdv()
Dim buttonAdv2 As ButtonAdv = New ButtonAdv()
Dim buttonAdv3 As ButtonAdv = New ButtonAdv()
Dim buttonAdv4 As ButtonAdv = New ButtonAdv()
buttonAdv1.Text = "buttonAdv1"
buttonAdv2.Text = "buttonAdv2"
buttonAdv3.Text = "buttonAdv3"
buttonAdv4.Text = "buttonAdv4"
Me.Controls.Add(buttonAdv1)
Me.Controls.Add(buttonAdv2)
Me.Controls.Add(buttonAdv3)
Me.Controls.Add(buttonAdv4)
Layout mode
To change the layout of child controls either horizontally or vertically, use the LayoutMode property.
- Horizontal
flowLayout1.LayoutMode = Syncfusion.Windows.Forms.Tools.FlowLayoutMode.Horizontal;
flowLayout1.LayoutMode = Syncfusion.Windows.Forms.Tools.FlowLayoutMode.Horizontal
- Vertical
flowLayout1.LayoutMode = Syncfusion.Windows.Forms.Tools.FlowLayoutMode.Vertical;
flowLayout1.LayoutMode = Syncfusion.Windows.Forms.Tools.FlowLayoutMode.Vertical
https://help.syncfusion.com/windowsforms/layoutmanagers/flowlayout/gettingstarted
1.2 What Is a Computer?
• Computer
  • Performs computations and makes logical decisions
  • Millions/billions of times faster than human beings
• Hardware: physical devices of a computer system
• Computer programs
  • Sets of instructions with which the computer processes data
• Software: programs that run on computers
1.3 Computer Organization Peripherals
1.7 Machine Languages, Assembly Languages and High-Level Languages
• Machine language
  • “Natural language” of a computer component
  • Machine dependent
• Assembly language
  • English-like abbreviations represent computer operations
  • Translator programs convert to machine language
• High-level language
  • Allows for writing more “English-like” instructions
  • Contains commonly used mathematical operations
  • Compiler converts to machine language
  • Interpreter
    • Executes high-level language programs without compilation

1.9 History of Java
• Objects
  • Reusable software components that model real-world items
• Java
  • Originally for intelligent consumer-electronic devices
  • Then used for creating Web pages with dynamic content
  • Now also used to:
    • Develop large-scale enterprise applications
    • Enhance WWW server functionality
    • Provide applications for consumer devices (cell phones, etc.)

1.10 Java Class Libraries
• Classes
  • Include methods that perform tasks
  • Return information after task completion
  • Used to build Java programs
  • A “blueprint” for creating (instantiating) objects
• Java provides class libraries
  • Known as Java APIs (Application Programming Interfaces)
Fig. 1.1 | Typical Java development environment.
1.13 Typical Java Development Environment (Shell Window)
1.13 Typical Java Development Environment (IDE)
Fig. 2.11 | Arithmetic operators.
• Integer division truncates the remainder
  • 7 / 5 evaluates to 1
• Remainder operator % returns the remainder
  • 7 % 5 evaluates to 2
2.7 Arithmetic (Cont.)
• Operator precedence
  • Some arithmetic operators act before others (i.e., multiplication before addition)
  • Use parentheses when needed
• Example: Find the average of three variables a, b and c
  • Do not use: a + b + c / 3
  • Use: ( a + b + c ) / 3

Fig. 2.12 | Precedence of arithmetic operators.
• Example: Find the average of three variables a, b and c
  • Do not use: a + b + c / 3
  • Use: ( a + b + c ) / 3
  • Use parentheses !!!
2.8 Decision Making: Equality and Relational Operators
• Condition
  • Expression that can be either true or false
• if statement
  • Simple version in this section, more detail later
  • If a condition is true, the body of the if statement is executed
  • Control always resumes after the if statement
• Conditions in if statements can be formed using equality or relational operators (next slide)
Figure 2.14 Decision Making: Equality and Relational Operators
Fig. 2.5 | Some common escape sequences.
import java.util.Scanner; // program uses class Scanner

2.5 Import Declarations
• Used by the compiler to identify and locate classes used in Java programs
• All import declarations must appear before the first class declaration in the file. Placing an import declaration inside a class declaration’s body or after a class declaration is a syntax error.
• Forgetting to include an import declaration for a class used in your program typically results in a compilation error containing a message such as “cannot resolve symbol.”

Notes on Import Declarations
• java.lang is implicitly imported into every program
• Default package
  • Contains classes compiled in the same directory
  • Implicitly imported into source code of other files in the directory
• Packages unnecessary if fully-qualified names are used
Review • Data Types (ch3)
Primitive Types vs. Reference Types
• Types in Java
  • Primitive
    • boolean, byte, char, short, int, long, float, double
    • The AP exam tests int, double, boolean
  • Reference (sometimes called nonprimitive types)
    • Objects
    • Default value of null
    • Used to invoke an object’s methods

Software Engineering Observation 3.4
• A variable’s declared type (e.g., int, double or some object) indicates whether the variable is of a primitive or a reference type.
• If a variable’s type is not one of the eight primitive types, then it is a reference type.
Pop quiz – Intro / Operators / data types AreviewQuizHardwareOperatorsDataTypes.doc
Review • Classes and Objects (ch3)
3.2 Classes, Objects, Methods and Instance Variables
• Class provides one or more methods
• Method represents a task in a program
  • Describes the mechanisms that actually perform its tasks
  • Hides from its user the complex tasks that it performs
• Method call tells a method to perform its task

3.2 Classes, Objects, Methods and Instance Variables (Cont.)
• Classes contain one or more attributes
  • Specified by instance variables
  • Carried with the object as it is used
  • In other words, the variable is part of the object when the object is “instantiated”

3.3 Declaring a Class
• Each class declaration that begins with keyword public must be stored in a file that has the same name as the class and ends with the .java file-name extension.
• Keyword public is an access modifier
• Class declarations include:
  • Access modifier
  • Keyword class
  • Pair of left and right braces

Instantiating an Object of a Class
• Java is extensible
  • We write classes (“blueprints”). If programmers want to use them in their application, they need to create an object.
  • Programmers can create or “instantiate” new objects from a particular class blueprint.
• Just like declaring variables, we need to declare objects in Java
  • Recall that reference data types include objects
• Class instance (object) creation expression
  • Keyword new
  • Then name of the class to create and parentheses

Method Declaration and Call
• Keyword public indicates the method is available to the public
• Keyword void indicates no return type
• Access modifier, return type, name of method and parentheses comprise the method header
• Calling a method
  • Object name, then dot separator (.)
  • Then method name and parentheses
• Method parameters
  • Additional information passed to a method, comma separated
  • Supplied in the method call with arguments

UML Class Diagrams
• Top compartment contains the name of the class
• Middle compartment contains the class’s attributes or instance variables
• Bottom compartment contains the class’s operations or methods
• Plus sign indicates public methods
3.5 Instance Variables, set Methods and get Methods
• Variables declared in the body of a method
  • Called local variables
  • Can only be used within that method
• Variables declared in a class declaration
  • Called fields or instance variables
  • Each object of the class has a separate instance of the variable

Access Modifiers public and private
• private keyword
  • Used for most instance variables
  • private variables and methods are accessible only to methods of the class in which they are declared
  • Declaring instance variables private is known as data hiding (the variable can’t be seen from an application that uses that object)
• Return type
  • Indicates the item returned by a method
  • Declared in the method header

Software Engineering Observation 3.3
• Precede every field and method declaration with an access modifier.
• As a rule of thumb, instance variables should be declared private and methods should be declared public.

Set (mutator) and get (accessor) methods
• private instance variables
  • Cannot be accessed directly by clients of the object
  • Use set methods to alter the value – called a mutator or “setter”
  • Use get methods to retrieve the value – called an accessor or “getter”
• Why do we have mutators and accessors?
  • So we don’t have to declare everything public (a security violation)
  • Allows controlled access. We may want the user to input a username and password when the application calls a getter.

3.7 Initializing Objects with Constructors
• Constructors
  • Initialize an object of a class
  • Java requires a constructor for every class
  • Java will provide a default no-argument constructor if none is provided
  • Called when keyword new is followed by the class name and parentheses
UML Diagram with Constructors
Pop quiz
• Questions:
  • Identify the access modifier for the GradeBook class.
  • Identify the name of the instance variable for the GradeBook class.
  • Identify the name of the constructor for the GradeBook class.
  • Identify the name of an accessor in the GradeBook class.
  • Identify the name of a mutator in the GradeBook class.
  • Draw a UML diagram for the GradeBook class.

public class GradeBook
{
    private String courseName;

    public GradeBook( String name )
    {
        courseName = name;
    }

    public void setCourseName( String name )
    {
        courseName = name;
    }

    public String getCourseName()
    {
        return courseName;
    }

    public void displayMessage()
    {
        System.out.printf( "Grade book for\n%s!\n", getCourseName() );
    }
} // end class GradeBook
Review • Control Structures (ch4)
4.4 Control Structures (Cont.)
• Selection statements
  • if statement
    • Single-selection statement
  • if…else statement
    • Double-selection statement
  • switch statement
    • Multiple-selection statement

4.4 Control Structures (Cont.)
• Repetition statements
  • Also known as looping statements
  • Repeatedly perform an action while the loop-continuation condition remains true
  • while statement
    • Performs the actions in its body zero or more times
  • do…while statement
    • Performs the actions in its body one or more times
  • for statement
    • Performs the actions in its body zero or more times

4.5 if Statements
• if statement (single selection)
  • Executes an action if the specified condition is true
• if…else statement (double selection)
  • Executes one action if the specified condition is true or a different action if the specified condition is false
• Conditional operator ( ?: )
  • Compact alternative
Good Programming Practice 4.4
• Always using braces in an if...else (or other) statement helps prevent their accidental omission, especially when adding statements to the if-part or the else-part at a later time.
• To avoid omitting one or both of the braces, some programmers type the beginning and ending braces of blocks before typing the individual statements within the braces.

4.7 while Repetition Statement
• while statement
  • Repeats an action while its loop-continuation condition remains true
• Use a counter variable to count the number of times a loop is iterated
• Or use sentinel-controlled repetition
  • Also known as indefinite repetition
  • Sentinel value also known as a signal, dummy, flag, or termination value
• Uses a merge symbol in its UML activity diagram

4.8 Casting
• Unary cast operator
  • Creates a temporary copy of its operand with a different data type
  • Example: (double) will create a temporary floating-point copy of its operand
• Converting values to lower types results in a compilation error, unless the programmer explicitly forces the conversion to occur
  • Place the desired data type in parentheses before the value
  • Example: (int) 4.5
• Promotion
  • Converting a value (e.g., int) to another data type (e.g., double) to perform a calculation
  • Values in an expression are promoted to the “highest” type in the expression (a temporary copy of the value is made)
Fig. 6.5 | Promotions allowed for primitive types.
4.11 Compound Assignment Operators
• Compound assignment operators
  • Example: c = c + 3; can be written as c += 3;
  • This statement adds 3 to the value in variable c and stores the result in variable c

4.12 Increment and Decrement Operators
• Unary increment and decrement operators
  • Unary increment operator (++) adds one to its operand
  • Unary decrement operator (--) subtracts one from its operand
Review • Control Structures-2 (ch5)
Fig. 5.3 | for repetition statement
5.3 for Repetition Statement (Cont.)

for ( initialization; loopContinuationCondition; increment )
    statement;

can usually be rewritten as:

initialization;
while ( loopContinuationCondition ) {
    statement;
    increment;
}
https://www.slideserve.com/joshua-justice/review
The general syntax to create a Swift dictionary is
var dict: Dictionary<Key, Value>. This code creates a mutable instance of the Dictionary type called dict. The declarations for what types the dictionary’s keys and values accept are inside the angle brackets (<>), denoted here by Key and Value.
The values stored in a dictionary can be of any type, just like the values in an array. The only type requirement for keys in a Swift Dictionary is that the type must be hashable. The basic concept is that each Key type must provide a mechanism to guarantee that its instances are unique. Swift’s basic types, such as String, Int, Float, Double, and Bool, are all hashable.
Before you begin typing code, let’s take a look at the different ways you can explicitly declare an instance of Dictionary:
var dict1: Dictionary<String, Int> var dict2: [String:Int]
Both options yield the same result: an uninitialized Dictionary whose keys are String instances and whose values are of type Int. The second example uses the dictionary literal syntax ([:]).
As with Swift’s other data types, you can also declare and initialize a dictionary in one line. In that case, you can explicitly declare the types of the keys and values or take advantage of type inference:
var companyZIPCode: [String:Int] = ["Big Nerd Ranch": 30307] var sameCompanyZIPCode = ["Big Nerd Ranch": 30307]
Again, these two options yield the same result: a dictionary initialized with a single key-value pair consisting of a String key, "Big Nerd Ranch", and an Int value, 30307.
It is useful to take advantage of Swift’s type-inference capabilities. Type inference creates code that is more concise but just as expressive. Accordingly, you will stick with type inference in this tutorial.
Time to create your own dictionary. Start with a new macOS playground called Dictionary. Declare a dictionary called movieRatings and use type inference to initialize it with some data.
Listing 10.1 Creating a dictionary
import Cocoa
var movieRatings = ["Tron": 4, "WarGames": 5, "Sneakers": 4]
(Since dictionaries are not ordered, the sidebar result may show the key-value pairs in a different order each time your code executes.)
You created a mutable dictionary to hold movie ratings using the Dictionary literal syntax. Its keys are instances of String and represent individual movies. These keys map onto values that are instances of Int that represent the ratings of the movies.
As an aside, just as you can create an array literal with no elements using [], you can create a dictionary with no keys or values using [:]. As with arrays, this syntax omits anything the compiler could use to infer the key and value types, so that information would have to be declared explicitly.
Accessing and Modifying Values
Now that you have a mutable dictionary, how do you work with it? You will want to read from and modify the dictionary. Begin by using count to get some useful information about your dictionary.
Listing 10.2 Using count

var movieRatings = ["Tron": 4, "WarGames": 5, "Sneakers": 4]
movieRatings.count    // 3
Now, read a value from the movieRatings dictionary.
Listing 10.3 Reading a value from the dictionary
var movieRatings = ["Tron": 4, "WarGames": 5, "Sneakers": 4]
movieRatings.count                      // 3
let tronRating = movieRatings["Tron"]   // 4
The brackets in movieRatings["Tron"] are the subscripting syntax you have seen before. But because dictionaries are not ordered, you do not use an index to find a particular value. Instead, you access values from a dictionary by supplying the key associated with the value you would like to retrieve. In the example above, you supply the key "Tron", so tronRating is set to 4 – the value associated with that key.
Option-click the tronRating instance to get more information (Figure 10.1).
Figure 10.1 Option-clicking tronRating
Xcode tells you that its type is Int?, but movieRatings has type [String: Int]. Why the discrepancy? When you subscript a Dictionary instance for a given key, the dictionary will return an optional matching the type of the Dictionary’s values. This is because the Dictionary type needs a way to tell you that the value you asked for is not present. For example, you have not rated Primer yet, so let primerRating = movieRatings["Primer"] would result in primerRating having type Int? and being set to nil.
A dictionary’s keys are constants: They cannot be mutated. The informal contract a dictionary makes is something like “Give me a value, and a key to store it by, and I’ll remember both. Come back with the key later, and I’ll look up its value for you.” If a key were able to mutate, that could break the dictionary’s ability to find its related value later.
But values can be mutated. Modify a value in your dictionary of movie ratings:
Listing 10.4 Modifying a value
...
movieRatings.count                      // 3
let tronRating = movieRatings["Tron"]   // 4
movieRatings["Sneakers"] = 5
movieRatings    // ["Sneakers": 5, "WarGames": 5, "Tron": 4]
As you can see, the value associated with the key "Sneakers" is now 5.
There is another useful way to update values associated with a dictionary’s keys: the updateValue(_:forKey:) method. It takes two arguments: The first, value, takes the new value. The second, forKey, specifies the key whose value you would like to change.
There is one small caveat: updateValue(_:forKey:) returns an optional, because the key may not exist in the dictionary. But that actually makes this method more useful, because it gives you a handle on the last value to which the key mapped, using optional binding. Let’s see this in action.
Listing 10.5 Updating a value
...
movieRatings["Sneakers"] = 5
movieRatings    // ["Sneakers": 5, "WarGames": 5, "Tron": 4]

let oldRating: Int? = movieRatings.updateValue(5, forKey: "Tron")    // 4
if let lastRating = oldRating, let currentRating = movieRatings["Tron"] {
    print("old rating: \(lastRating)")          // old rating: 4
    print("current rating: \(currentRating)")   // current rating: 5
}
Adding and Removing Values
Now that you have seen how to update a value, let’s look at how you can add or remove key-value pairs. Begin by adding a value.
Listing 10.6 Adding a value
...
if let lastRating = oldRating, let currentRating = movieRatings["Tron"] {
    print("old rating: \(lastRating)")          // old rating: 4
    print("current rating: \(currentRating)")   // current rating: 5
}
movieRatings["Hackers"] = 5
Here, you add a new key-value pair to your dictionary using the syntax movieRatings["Hackers"] = 5. You use the assignment operator to associate a value (in this case, 5) with the new key ("Hackers").
Next, remove the entry for Sneakers.
Listing 10.7 Removing a value
...
if let lastRating = oldRating, let currentRating = movieRatings["Tron"] { ... }
movieRatings["Hackers"] = 5
movieRatings.removeValue(forKey: "Sneakers")    // 5
The method removeValue(forKey:) takes a key as an argument and removes the key-value pair that matches what you provide. Now, movieRatings has no entry for Sneakers.
Additionally, this method returns the value the key was associated with, if the key is found and removed successfully. In the example above, you could have typed
let removedRating: Int? = movieRatings.removeValue(forKey: "Sneakers"). Because removeValue(forKey:) returns an optional of the type that was removed, removedRating would be an optional Int. Placing the old value in a variable or constant like this can be handy if you need to do something with the old value.
However, you do not have to assign the method’s return value to anything. If the key is found in the dictionary, then the key-value pair is removed whether or not you assign the old value to a variable.
You can also remove a key-value pair by setting a key’s value to nil.

Listing 10.8 Setting the key’s value to nil
...
if let lastRating = oldRating, let currentRating = movieRatings["Tron"] { ... }
movieRatings["Hackers"] = 5
movieRatings["Sneakers"] = nil    // nil
The result is essentially the same, but this strategy does not return the removed key’s value.
https://basiccodist.in/dictionary-in-swift/
On Mon, May 23, 2016 at 1:42 PM, T. C. <[email protected]> wrote:

I've written a paper based on Melissa O'Neill's randutils.hpp (described here), proposing a simple wrapper class template for URNGs with various convenience member functions and automatic nondeterministic seeding (for random number engines). Comments would be greatly appreciated.
The new random algorithms here (choose, pick, ...) should be made available as non-member functions that operate on URNGs, like sample and shuffle already are -- it doesn't seem reasonable for these to exist in the convenience wrapper but [not] for the general case.
With that change made, the value of a separate class here is diminished. Instead of

std::random_generator<std::mt19937> rng; // nondeterministically seeded
std::cout << rng.choose(some_list);

... how would you feel about

auto urng = std::seeded<std::mt19937>(); // returns a nondeterministically seeded std::mt19937
std::cout << choose(some_list, urng);
// assuming that unsigned int is 32 bits
std::random_device rd;
std::seed_seq sseq {rd(), rd(), rd(), rd(), rd(), rd(), rd(), rd()};
std::mt19937 rng(sseq); // seeded with 256 bits of entropy from random_device
std::random_device rd;
std::mt19937 rng(rd); // seeded with however much entropy it needs from the provided random_device
Richard Smith wrote:
> The new random algorithms here (choose, pick, ...) should be made available as non-member functions that operate on URNGs, like sample and shuffle already are -- it doesn't seem reasonable for these to exist in the convenience wrapper but [not] for the general case.
FWIW, sample doesn't exist in the standard yet, it's a proposed addition. When I wrote randutils it didn't exist. So the only stand-alone function in C++14 is shuffle, whose origins date back to random_shuffle (which predates the C++11 <random> library and thus has a different flavor). Thus pick, choose, uniform, variate, and sample all don't exist right now.
To me there is a bit of a fork in the road here, let's look at the two forks:
Option 1: Global functions, exemplified by the proposed sample algorithm, which take (optional) extra arguments to specify the engine. To simply things, there is a global state (actually per-thread) that holds some system-defined random engine (i.e., the randint proposal). But do you want a constellation of thematically-related global helper functions (i.e., seeded, uniform, variate, pick, choose, shuffle, and sample)?
Option 2: Random-generator objects, which tie together the <random> pieces but stay in the domain of object orientation. There is far less reliance on baked-in global state and only one new name in the std namespace.
The rest of the random number generation library is happy to use objects (e.g., the distributions, the engines, etc.), the second option retains the flavor of the existing library while making it easier to use.
> ... how would you feel about
>
> auto urng = std::seeded<std::mt19937>(); // returns a nondeterministically seeded std::mt19937
> std::cout << choose(some_list, urng);
I don't like it. I teach C++ (to smart people who already have some programming experience), and I don't want to have to explain this. If urng is a std::mt19937, it looks like a band-aid around a design flaw (why doesn't std::mt19937 know how to construct itself well in the first place? why does it need a helper function? why doesn't it look like the rest of the rng library?). And, did a std::mt19937 object get copied here?
The focus of this proposal is providing a tiny amount of glue to make all the wonderful features of <random> actually easy to use. There is pretty much no reason to provide pick as a helper function, or variate, or even sample, because they aren't really meaty *algorithms* at all, they're virtually one liners (choose is just a call to advance by an appropriate random amount, sample is just stable_partition with an appropriate random-based predicate. etc.).
But there are a very good reasons to make these things something a random-generator object can do, because it creates something cohesive.
> ForwardIterators can't be subtracted. `sample`'s description seems to rely on that. You probably meant `std::distance`.
Yes, my implementation actually uses std::distance; when Tim turned it into a proposal, I think he tried to simplify the description but this wasn't a valid simplification. Good catch, thanks.
Hi,

I've written a paper based on Melissa O'Neill's randutils.hpp (described here), proposing a simple wrapper class template for URNGs with various convenience member functions and automatic nondeterministic seeding (for random number engines):
Nicol Bolas wrote (across three messages):
> Be advised: there's already a proposal for improving the seeding of RNGs.
That's good to know. (One issue with encouraging std::random_device though is that std::random_device is not currently required to be in any way nondeterministic. It is allowed to produce the same output on every run of the program.)
> As for the idea, if you're interested in a do-everything RNG class, why bother making the random number engine a template parameter? Why expose the underlying engine at all?
Although it's not part of this proposal draft, in my original version I did provide some typedefs for common generators precisely so that people didn't need to specify an engine as a parameter.
> That seems like an over-complication of the idea. If the goal is to be as simple as Python, then just make it `random_generator`, and have implementations decide on what the underlying engine is.
>
> If you're serious about your random engine and distributions, about the specific details of seeding and so forth, you're not going to use this class. And if you just want some randomness, then you don't care about the details. You just want a quick, simple, braindead class that spits out randomness in various ways upon request.
The key idea is that it's easy enough for a beginner, but still handy for an expert, and provides a good stepping stone to the facilities that still exist. It's not a dumbing down, it's handy glue to avoid annoying and error-prone boilerplate, boilerplate that's confusing to beginners and a hassle for experts.
mt19937 rng(/*seed*/);
double x = std::normal_distribution<double>(17, 3)(rng);
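For contrast with the boilerplate above, the "one cohesive generator object" idea being argued for can be sketched in Python, whose standard library already bundles these conveniences on a single object. The class and method names below are invented for illustration; they are not taken from the C++ proposal.

```python
import random

# Illustrative sketch only: one generator object bundling the common
# conveniences (seeding, variates, pick, sample, shuffle), in the spirit
# of the proposed wrapper. Names here are invented, not from the proposal.
class RandomGenerator:
    def __init__(self, seed=None):
        # random.Random seeds itself nondeterministically when seed is
        # None, which is the ergonomic default being argued for.
        self._rng = random.Random(seed)

    def uniform(self, lo, hi):
        return self._rng.uniform(lo, hi)

    def variate_normal(self, mean, sd):
        # roughly analogous to variate<double, std::normal_distribution>
        return self._rng.normalvariate(mean, sd)

    def pick(self, seq):
        # one random element from a sequence
        return self._rng.choice(seq)

    def sample(self, population, k):
        # k distinct elements, like the sample algorithm under discussion
        return self._rng.sample(population, k)

    def shuffle(self, seq):
        self._rng.shuffle(seq)

rng = RandomGenerator(seed=42)
print(rng.pick(["heads", "tails"]))
print(len(rng.sample(range(10), 3)))
```

The point of the sketch is cohesion: every operation hangs off one object that owns the engine, rather than free functions that each take (and possibly copy) a URNG.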
On Mon, May 23, 2016 at 8:24 PM, M.E. O'Neill <[email protected]> wrote:
>
I'm seeing a problem with this code. std::normal_distribution
caches results, so does other distributions internally using it.
But here it looks like that a normal_distribution object will be
created for each run of `variate`, wasting both CPU cycles and
bits from the engine.
--
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
Have you not looked at Chapter 25 of the C++ Standard? That's how we do things in C++.
NAME
Tk_Main - main program for Tk-based applications

SYNOPSIS
#include <tk.h>

Tk_Main(argc, argv, appInitProc)

ARGUMENTS
int argc (in)
    Number of elements in argv.
char *argv[] (in)
    Array of strings containing command-line arguments.
Tcl_AppInitProc *appInitProc (in)
    Address of an application-specific initialization procedure. The value for this argument is usually Tcl_AppInit.

SEE ALSO
Tcl_DoOneEvent(3tk)

KEYWORDS
application-specific initialization, command-line arguments, main program
I'm trying to set the response status line, and failing.
def handler(req):
    raise apache.SERVER_RETURN, (400, "You must supply a foo")
This is with the mod_python on Ubuntu feisty (3.2.10). From the code in
apache.py, it sets req.status with the text message although the doc
says that is an integer field.
However the http client always receives standard text (eg 'Bad Request'
for code 400).
The PythonDebug option is also confusing. If I turn it on, then
responses go back as 200 OK with the body of tracebacks etc, whereas
with it off then you get 500 Internal Server Error with no traceback
type information.
It seems to me that if you want to return anything other than success
that Apache goes to great lengths to obliterate whatever it is that you
were trying to send back and replaces it with its own canned
information, hence the behaviour of PythonDebug. Is there any way
around this?
Roger
Opened 7 years ago
Closed 2 years ago
#25167 closed New feature (duplicate)
Provide option to disable ajax text/plain response in technical_500_response
Description
Line in question:
Ticket #10841 from a long time ago made a change to return text/plain responses unconditionally when the request was made via ajax. It would be handy if this special casing of ajax requests could be disabled with a setting or something.
I just spent the last 45 minutes tracking down why the debug error pages were not rendering with styles on a new project I was basing on an older one that did not exhibit the issue.
Turns out I hit this before and the approach I came up with in the past was to monkey patch the handler and forcefully set ajax to false. Otherwise it seems like there is a lot of code to copy and also warnings in the source about writing error handlers. So I'd rather not, but I need to display the error messages during development...
Below is a sample of the monkey patch. It would probably be better to move the condition outside of the function and disable it when not in DEBUG mode. I am going to do that now I guess, but I figure it was worthwhile to raise the issue.
from django.conf import settings
from django.core.handlers.base import BaseHandler

handle_uncaught_exception = BaseHandler.handle_uncaught_exception

def _handle_uncaught_exception_monkey_patch(self, request, resolver, exc_info):
    if settings.DEBUG:
        request.is_ajax = lambda: False
    return handle_uncaught_exception(self, request, resolver, exc_info)

BaseHandler.handle_uncaught_exception = _handle_uncaught_exception_monkey_patch
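The monkey-patch mechanics used in the ticket — save the original method, wrap it, reassign it on the class — can be demonstrated without Django. In this standalone sketch, `Handler` and its method are invented stand-ins, not Django classes:

```python
# Standalone sketch of the monkey-patch pattern from the ticket: keep a
# reference to the original method, wrap it, and reassign it on the class.
# Handler is an invented stand-in, not a Django class.
class Handler:
    def handle(self, request):
        return "text/plain" if request["is_ajax"] else "text/html"

_original_handle = Handler.handle  # save the original before patching

def _patched_handle(self, request):
    request["is_ajax"] = False  # force the non-ajax path, as the ticket
    return _original_handle(self, request)  # does when DEBUG is on

Handler.handle = _patched_handle

print(Handler().handle({"is_ajax": True}))  # the patch forces text/html
```

Because the patch runs before the original method inspects the request, even an ajax request takes the HTML branch, which is exactly the behavior the reporter wanted for debug error pages.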
I guess you use something like Chrome inspector which renders HTML just fine? It would be a nice feature, but I doubt adding a setting will be a popular solution.
There was an idea to replace settings.DEFAULT_EXCEPTION_REPORTER_FILTER with a DEFAULT_EXCEPTION_REPORTER setting:
Implementing that would likely make customizing this behavior easy.
1. What is bridging mode?
Decouple an abstraction from its implementation so that the two can vary independently.
Bridge Pattern: decouple abstraction and implementation so that they can change independently.
Another way to put it: when a class has two (or more) independently changing dimensions, we can let those dimensions be extended independently through composition.
It may sound profound. It doesn't matter. Let's explain it through examples.
2. Bridge mode definition
① Abstraction
Abstract role: its main responsibility is to define the role's behavior and hold a reference to the implementation role. This role is generally an abstract class.
② Implementor
Implementation role: an interface or abstract class that defines the role's required behaviors and properties.
③ RefinedAbstraction
Refined abstraction role: extends the abstract role and uses the referenced implementation role to refine the abstraction's behavior.
④ ConcreteImplementor
Concrete implementation role: implements the methods and properties defined by the interface or abstract class.
3. General code implementation of bridge mode
Implementor interface:
public interface Implementor {
    void doSomething();
}
Implementation class:
public class ConcreteImplementor1 implements Implementor {
    @Override
    public void doSomething() {
        // Specific business logic processing
    }
}
public class ConcreteImplementor2 implements Implementor {
    @Override
    public void doSomething() {
        // Specific business logic
    }
}
Two concrete implementors are defined here; there could be more.
Abstract role:
public abstract class Abstraction {
    // Holds a reference to the implementation role
    private Implementor implementor;

    public Abstraction(Implementor implementor) {
        this.implementor = implementor;
    }

    // Own behavior and attributes
    public void request() {
        this.implementor.doSomething();
    }

    // Get the implementation role
    public Implementor getImplementor() {
        return implementor;
    }
}
Refined abstraction role:
public class RefinedAbstraction extends Abstraction {
    // Pass the implementor through to the parent constructor
    public RefinedAbstraction(Implementor implementor) {
        super(implementor);
    }

    // Refine the behavior of the parent class
    @Override
    public void request() {
        super.request();
    }
}
Test:
public class BridgeClient {
    public static void main(String[] args) {
        // Define an implementation role
        Implementor implementor = new ConcreteImplementor1();
        // Define an abstract role
        Abstraction abstraction = new RefinedAbstraction(implementor);
        // Execute the method
        abstraction.request();
    }
}
Even if the implementation role has many variants, we simply pass the specific implementor into the constructor, and the code stays clear.
4. Classic example of bridge mode - JDBC
When we first used JDBC to connect to a database directly, we would write code like this:
Class.forName("com.mysql.cj.jdbc.Driver"); // Load and register the JDBC driver
String url = "jdbc:mysql://localhost:3306/sample_db?user=root&password=your_password";
Connection con = DriverManager.getConnection(url);
Statement stmt = con.createStatement();
String query = "select * from test";
ResultSet rs = stmt.executeQuery(query);
while (rs.next()) {
    rs.getString(1);
    rs.getInt(2);
}
If we want to replace MySQL database with Oracle database, we only need to replace com.mysql.cj.jdbc.Driver in the first line of code with oracle.jdbc.driver.OracleDriver.
This elegant way to achieve database switching is to use the bridge mode.
Let's first look at the Driver class:
package com.mysql.cj.jdbc;

import java.sql.DriverManager;
import java.sql.SQLException;

public class Driver extends NonRegisteringDriver implements java.sql.Driver {
    public Driver() throws SQLException {
    }

    static {
        try {
            DriverManager.registerDriver(new Driver());
        } catch (SQLException var1) {
            throw new RuntimeException("Can't register driver!");
        }
    }
}
The line Class.forName("com.mysql.cj.jdbc.Driver") does two things:
① It asks the JVM to find and load the specified Driver class.
② It executes the class's static initializer, which registers the MySQL driver with the DriverManager class.
Next, let's look at the DriverManager class:
public class DriverManager {
    private final static CopyOnWriteArrayList<DriverInfo> registeredDrivers
        = new CopyOnWriteArrayList<DriverInfo>();
    //...
    static {
        loadInitialDrivers();
        println("JDBC DriverManager initialized");
    }
    //...
    public static synchronized void registerDriver(java.sql.Driver driver)
            throws SQLException {
        if (driver != null) {
            registeredDrivers.addIfAbsent(new DriverInfo(driver));
        } else {
            throw new NullPointerException();
        }
    }

    public static Connection getConnection(String url, String user, String password)
            throws SQLException {
        java.util.Properties info = new java.util.Properties();
        if (user != null) {
            info.put("user", user);
        }
        if (password != null) {
            info.put("password", password);
        }
        return (getConnection(url, info, Reflection.getCallerClass()));
    }
    //...
}
After registering the specific driver implementation class (for example, com.mysql.cj.jdbc.Driver) with the DriverManager, all subsequent calls to the JDBC interface will be delegated to the specific driver implementation class for execution. The driver implementation classes all implement the same interface (java.sql.Driver), which is also the reason why the driver can be switched flexibly.
5. Advantages of bridging mode
① Separation of abstraction and implementation
This is the main characteristic of the bridge pattern. It was proposed to overcome the shortcomings of inheritance: the implementation is freed from abstract constraints and is no longer bound to a fixed abstraction level.
② Excellent extensibility
Look at our example. Want to add an implementation? No problem! Want to add an abstraction? Also no problem! As long as the exposed interface layer allows such changes, we have minimized the impact of change.
③ Details are transparent to clients
Clients don't care about implementation details, which the abstraction layer has already encapsulated through aggregation.
6. Bridging mode application scenario
① If a system needs to add more flexibility between the abstract and concrete roles of components, avoid establishing static inheritance relations between the two levels, and enable them to establish an association relationship at the abstract level through bridging mode.
② For those systems that do not want to use inheritance or the number of system classes increases sharply due to multi-level inheritance, the bridging mode is particularly suitable.
③ A class has two independently changing dimensions, and both dimensions need to be extended.
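Scenario ③ can be sketched briefly. The article's examples are Java, but the same structure reads compactly in Python; the class names below (shapes and colors) are invented for illustration:

```python
# Illustrative sketch of two independently changing dimensions (shape and
# color) bridged by composition; names are invented for this example.
class Color:  # Implementor role
    def fill(self):
        raise NotImplementedError

class Red(Color):
    def fill(self):
        return "red"

class Blue(Color):
    def fill(self):
        return "blue"

class Shape:  # Abstraction role: holds a reference to a Color
    def __init__(self, color):
        self.color = color

class Circle(Shape):  # Refined abstraction
    def draw(self):
        return "circle in " + self.color.fill()

class Square(Shape):
    def draw(self):
        return "square in " + self.color.fill()

# 2 shapes x 2 colors gives 4 combinations from only 4 classes; a pure
# inheritance design would need one subclass per combination.
print(Circle(Red()).draw())
print(Square(Blue()).draw())
```

Adding a third color or a third shape touches exactly one new class in one dimension, which is the "both dimensions need to be extended" case the scenario describes.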
#include <RubyDirectedTester.hh>

The class is defined at line 50 of RubyDirectedTester.hh; its constructor at line 56.

recvReqRetry()
    Called by the peer if sendTimingReq was called on this peer (causing recvTimingReq to be called on the peer) and was unsuccessful. Implements TimingRequestProtocol. Definition at line 63 of RubyDirectedTester.hh. References Port::name() and panic.

recvTimingResp()
    Receive a timing response from the peer. Implements TimingRequestProtocol. Definition at line 97 of RubyDirectedTester.cc. References Packet::getAddr(), RubyDirectedTester::hitCallback(), and tester.

tester
    Definition at line 53 of RubyDirectedTester.hh. Referenced by recvTimingResp().
Hello,
I'm searching for a way to extract all text elements from a matplotlib
figure including their positions, styles, alignments etc. I first
tried to write a custom backend and to fetch all the texts from the
"draw_text()" method of the renderer. In contrast to the documentation
"draw_text()" does not receive a matplotlib.text.Text instance with
all the necessary information but only a simple string and a
pre-layouted position.
So I found this "findobj" method to get all Text elements from a
figure in a list, which is exactly what I was looking for. However, I
get some weird duplicates for all the tick labels and I don't know how
to handle them.
This is a small example that uses findobj on the axis objects and
prints the texts.
import matplotlib
import pylab as p

p.plot([1,2,3])
p.xticks([1], ["tick"])
ax = p.gca()
fig = p.gcf()
p.draw()

def print_texts(artist):
    for t in artist.findobj(matplotlib.text.Text):
        if t.get_visible() and t.get_text():
            print " %s @ %s" % (t.get_text(), t.get_position())

print "X-Axis"
print_texts(ax.xaxis)
print "Y-Axis"
print_texts(ax.yaxis)
On all my matplotlib installations, all tick labels have duplicates
positioned at the end of the axis. Why? How to filter them out from a
list of Text elements? Their get_visible() attribute is True.
Another thing is that I first had to do a "draw()" call in order to
have the ticks generated/updated at all. How do I force an update of
the tick labels. Colorbar seems to have a "update_ticks()" method, but
I can't find something similar for the axis ticks.
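One way to sidestep both problems — this is a sketch under the assumption of a reasonably recent matplotlib, not the list's official answer — is to skip findobj for tick labels entirely and ask the axis for its current labels after forcing a draw:

```python
# Sketch: query tick labels through the axis API instead of findobj,
# which avoids the duplicate/template Text objects findobj also returns.
import matplotlib
matplotlib.use("Agg")  # headless backend, assumed for this example
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3])
ax.set_xticks([1])
ax.set_xticklabels(["tick"])
fig.canvas.draw()  # forces tick generation/layout, like p.draw() above

labels = [t.get_text() for t in ax.xaxis.get_ticklabels() if t.get_text()]
print(labels)
```

get_ticklabels() only reports the labels attached to the axis's current ticks, so the phantom copies at the end of the axis never appear, and the canvas draw() stands in for the update the original poster was forcing with p.draw().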
I spent the last couple of weeks writing sample code for ASP.NET 5/MVC 6 and I was surprised by the depth of the changes in the current beta release of ASP.NET 5. ASP.NET 5 is the most significant new release of ASP.NET in the history of the ASP.NET framework — it has been rewritten from the ground up.
In this blog post, I list what I consider to be the top 10 most significant changes in ASP.NET 5. This is a highly opinionated list. If other changes strike you as more significant, please describe the change in a comment.
1. ASP.NET on OSX and Linux
For the first time in the history of ASP.NET, you can run ASP.NET 5 applications on OSX and Linux. Let me repeat this. ASP.NET 5 apps can run on Windows, OSX, and Linux. This fact opens up ASP.NET to a whole new audience of developers and designers.
The traditional audience for ASP.NET is professional developers working in a corporation. Corporate customers are welded to their Windows machines.
Startups, in stark contrast, tend to use OSX/Linux. Whenever I attend a startup conference, the only machines that I see in the audience are Macbook Pros. These people are not the traditional users of ASP.NET.
Furthermore, designers and front-end developers – at least when they are outside the corporate prison – also tend to use Macbook Pros. Whenever I attend a jQuery conference, I see Macbook Pros everywhere (the following picture is from the jQuery blog).
Enabling ASP.NET 5 to run on Windows, OSX, and Linux changes everything. For the first time, all developers and designers can start building apps with ASP.NET 5. And, they can use their favorite development environments such as Sublime Text and WebStorm when working with ASP.NET apps (No Visual Studio required).
Take a look at the OmniSharp project to see how you can use editors such as Sublime Text, Atom, Emacs, and Brackets with ASP.NET 5:
2. No More Web Forms
I love ASP.NET Web Forms. I’ve spent hundreds – if not thousands – of hours of my life building Web Forms applications. However, it is finally time to say goodbye. ASP.NET Web Forms is not part of ASP.NET 5.
You can continue to build Web Forms apps in Visual Studio 2015 by targeting the .NET 4.6 framework. However, Web Forms apps cannot take advantage of any of the cool new features of ASP.NET 5 described in this list. If you don’t want to be left behind as history marches forward then it is finally time for you to rewrite your Web Forms app into ASP.NET MVC.
3. No More Visual Basic
It is also time to say goodbye to Visual Basic. ASP.NET 5 only supports C# and Visual Basic is left behind.
My hope is that this change won’t be too painful. I believe that there are only two people in the entire world who are building MVC apps in Visual Basic. It is time for both of you to stop it. There are good automatic converters for going from Visual Basic to C#:
4. Tag Helpers
Tag Helpers is the one feature that might have the biggest impact on the way that you create your views in an ASP.NET MVC application. Tag Helpers are a better alternative to using traditional MVC helpers.
Consider the following MVC view that contains a form for creating a new product:
@model MyProject.Models.Product

@using (Html.BeginForm())
{
    <div>
        @Html.LabelFor(m => m.Name, "Name:")
        @Html.TextBoxFor(m => m.Name)
    </div>
    <input type="submit" value="Create" />
}
In the view above, the Html.BeginForm(), Html.LabelFor(), and Html.TextBoxFor() helpers are used to create the form. These helpers would not be familiar to an HTML designer.
Here’s how the exact same form can be created by using Tag Helpers:
@model MyProject.Models.Product
@addtaghelper "Microsoft.AspNet.Mvc.TagHelpers"

<form asp-controller="Products" asp-action="Create">
    <div>
        <label asp-for="Name">Name:</label>
        <input asp-for="Name" />
    </div>
    <input type="submit" value="Save" />
</form>
Notice that this new version of the form contains only (what looks like) HTML elements. For example, the form contains an INPUT element instead of an Html.TextBoxFor() helper. A front-end designer would be fine with this page.
The only special things in this view are the asp- attributes, which extend the elements with server-side ASP.NET MVC functionality.
Damien Edwards put together an entire sample site that uses nothing but Tag Helpers here:
5. View Components
Goodbye subcontrollers and hello View Components!
In previous versions of ASP.NET MVC, you used the Html.Action() helper to invoke a subcontroller. For example, imagine that you want to display banner ads in multiple views. In that case, you would create a subcontroller that contained the logic for returning a particular banner advertisement and call the subcontroller by invoking Html.Action() from a view.
Subcontrollers – the Html.Action() helper — are not included in the current beta of MVC 6. Instead, MVC 6 includes an alternative technology called View Components.
Here’s how you can create a View Component that displays one of two banner advertisements depending on the time of day:
using Microsoft.AspNet.Mvc;
using System;

namespace Partials.Components
{
    public class BannerAd : ViewComponent
    {
        public IViewComponentResult Invoke()
        {
            var adText = "Buy more coffee!";
            if (DateTime.Now.Hour > 18)
            {
                adText = "Buy more warm milk!";
            }
            return View("_Advertisement", adText);
        }
    }
}
If the time is 6:00pm or earlier then the View Component returns a partial named _Advertisement with the advertisement text "Buy more coffee!". After 6:00pm the text changes to "Buy more warm milk!".
Here’s what the _Advertisement partial looks like:
@model string

<div style="border:2px solid green;padding:15px">
    @Model
</div>
Finally, here is how you can use the BannerAd View Component in an MVC view:
@Component.Invoke("BannerAd")
View Components are very similar to subcontrollers. However, subcontrollers were always a little odd. They were pretending to be controller actions but they were not really controller actions. View Components just seem more natural.
6. GruntJS, NPM, and Bower Support
Front-end development gets a lot of love in ASP.NET 5 through its support for GruntJS (and eventually Gulp).
GruntJS is a task runner that enables you to build front-end resources such as JavaScript and CSS files. For example, you can use GruntJS to concatenate and minify your JavaScript files whenever you perform a build in Visual Studio.
There are thousands of GruntJS plugins that enable you to do an amazing variety of different tasks (there are currently 4,334 plugins listed in the GruntJS plugin repository):
For example, there are plugins for running JavaScript unit tests, for validating the code quality of your JavaScript (jshint), compiling LESS and Sass files into CSS, compiling TypeScript into JavaScript, and minifying images.
In order to support GruntJS, Microsoft needed to support two new package managers (beyond NuGet). First, because GruntJS plugins are distributed as NPM packages, Microsoft added support for NPM packages.
Second, because many client-side resources – such as Twitter Bootstrap, jQuery, Polymer, and AngularJS – are distributed through Bower, Microsoft added support for Bower.
This means that you can run GruntJS using plugins from NPM and client resources from Bower.
7. Unified MVC and Web API Controllers
In previous versions of ASP.NET MVC, MVC controllers were different than Web API controllers. An MVC controller used the System.Web.MVC.Controller base class and a Web API controller used the System.Web.Http.ApiController base class.
In MVC 6, there is one and only one Controller class that is the base class for both MVC and Web API controllers. There is only the Microsoft.AspNet.Mvc.Controller class.
MVC 6 controllers return an IActionResult. When used as an MVC controller, the IActionResult might be a view. When used as a Web API controller, the IActionResult might be data (such as a list of products). The same controller might have actions that return both views and data.
In MVC 6, both MVC controllers and Web API controllers use the same routes. You can use either convention-based routes or attribute routes and they apply to all controllers in a project.
8. AngularJS
MVC 6 plays well with client-side frameworks such as AngularJS. You can interact with an MVC 6 controller from an AngularJS $resource using REST.
9. ASP.NET Dependency Injection Framework
ASP.NET 5 has built-in support for Dependency Injection and the Service Locator pattern. This means that you no longer need to rely on third-party Dependency Injection frameworks such as Ninject or AutoFac.
Imagine, for example, that you have created an IRepository interface and an EFRepository class that implements that interface. In that case, you can bind the EFRepository class to the IRepository interface in the ConfigureServices() method of the Startup.cs class like this:
services.AddTransient<IRepository, EFRepository>();
After you bind EFRepository and IRepository then you can use constructor dependency injection in your MVC controllers (and any other class) using code like this:
public class ProductsController : Controller
{
    private IRepository _repo;

    public ProductsController(IRepository repo)
    {
        _repo = repo;
    }
}
In the code above, the IRepository interface is passed to the constructor for the ProductsController. The built-in ASP.NET Dependency Injection framework passes EFRepository to the ProductsController because IRepository was bound to EFRepository.
You also can use the Service Locator pattern. Wherever you can access the HttpContext, you can access any registered services. For example, you can retrieve the EFRepository by using the following code inside of an MVC controller action:
var repo = this.Context.ApplicationServices.GetRequiredService<IRepository>();
10. xUnit.net
Goodbye Visual Studio Unit Testing Framework and hello xUnit.net!
In previous versions of ASP.NET MVC, the default testing framework was the Visual Studio Unit Testing Framework (sometimes called mstest). This framework uses the [TestClass] and [TestMethod] attributes to describe a unit test:
[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void TestAddNumbers()
    {
        // Arrange
        var calc = new Calculator();

        // Act
        var result = calc.AddNumbers(0, 0);

        // Assert
        Assert.AreEqual(0, result);
    }
}
ASP.NET 5 uses xUnit.net as its unit test framework. This framework uses the [Fact] attribute instead of the [TestMethod] attribute (and no [TestClass] attribute):
public class CalculatorTests
{
    [Fact]
    public void AddNumbers()
    {
        // Arrange
        var calculator = new Calculator();

        // Act
        var result = calculator.AddNumbers(1, 1);

        // Assert
        Assert.Equal(2, result);
    }
}
If you look at the source code for ASP.NET 5 then you’ll see that xUnit.net is used to test ASP.NET extensively. For example, the MVC repository contains unit tests written with xUnit.net. You can take a look at the MVC repository (and its unit tests) here:
ASP.NET uses a fork of xUnit.net that is located here:
Please revise the VB affirmation… If it isn't strictly correct, it can cause FUD.
I’m getting my information from the official aspnet repository: “ASP.NET 5 is C# only at this point and that will not change before we RTM. We plan to have extensibility points so other languages like VB, F#, etc can be added via the form of a support package or such.” See
“No More Visual Basic”… Wait! What? Is this official and no support planned in the future? I always assumed VB was coming later and C#-only was just how the CTP was.
@Stilgar – according to this issue from the aspnet repository “ASP.NET 5 is C# only at this point and that will not change before we RTM”. See
My understanding is that this answer says VB.NET support is coming in some form.
@Stilgar — Take a look at this SO answer “There are no plans to support VB in ASP.NET 5 in terms of compilation, project templates, and other tools…ASP.NET 5 has some in-progress support to enable non-C# compilers to be used, but there is still no official plans to support VB (you’d have to roll your own).”
See
Stephen is correct. VB is not part of ASP.NET future. End of story. OTOH, it will be around for a long time regardless.
If you read the GitHub question linked in that SO discussion, you’d see that it’s C# until RTM. That’s all. There has been no definitive statement about Microsoft shoving VB.NET out to pasture, and the part in this blog post was significantly misinforming I was also more than a little unprofessional and snide.
VB.NET’s usage rate is still considerably high, and many enterprise customers of Microsoft have significant investments in VB.NET. The numbers can be seen in Forrester studies, as well as various articles and surveys in places like InfoWorld. I really don’t see not being able to create an ASP.NET 5 project in VB.NET after RTM.
While it may be interesting to some on a meta level that fanboy flaming applies not only to Android vs. IOS, Mac vs. Windows, and Java vs. .NET, but that some even extend it to different .NET languages, despite being in the same development environment, frameworks, with nearly identical il code, etc… That said, those types of debates usually still end up very much a waste of time.
I’d wait until some definitive information from Microsoft, rather than a derogative aside misinterpreting a SO post referencing another post, etc…
Could have sworn that was originally
“… and it was also more than a little unprofessional and snide.”
rather than:
“… I was also more than a little unprofessional and snide.”
Oops, completely different meaning. 😉
I think some of the things are taken a bit out of context.
Like 10. xUnit.net. Yes, it is true that the ASP.NET team is using xUnit, but as stated here, they plan on supporting MSTest.
Also they are adding new features to web forms like they say:
But we can totally agree that people should move on.
Interesting statement on Web Forms support – that seems to conflict with the statement on the ASP.NET 5 web site, which reads: “You can continue developing Web Forms apps and have confidence that Web Forms is an essential part of the .NET web development platform. We remain focused on adding new features to Web Forms to improve the development experience and keep the technology up-to-date with web practices.”
Sounds like a bit of creative salesmanship to me. While they are adding features to WebForms, there is NO support for .Net 5.0 (and likely will never have it), so it’s more or less on the track to retirement. Imagine if a product that only works on .Net 2.0 kept getting features: I’d call it a dead/dying product even if people are still using it since the product can’t take advantage of new _language_ features.
Yes, Web Forms and VB are dead end technologies in the Microsoft world. It’s not like you couldn’t see it coming. OTOH, no one is making you upgrade.
I find it hard to believe they are going to drop support for webforms. One Application I support has 100s of webforms. We have two developers. It would takes two full years of work to convert it to MVC.
@Todd — There is no evidence that Microsoft is planning to drop product support for Web Forms or VB.NET. In fact, Microsoft is continuing to invest in Web Forms by adding new features. As I mention in the blog post above, you can continue to build Web Forms apps using Visual Studio 2015.
However, that being said, Web Forms and VB.NET are not part of ASP.NET 5. So you cannot take advantage of many of the new ASP.NET 5 features described in the blog post — such as OSX/Linux support or GruntJS support — in a Web Forms or VB.NET app.
The bottom line is that if I was forced to decide on the ASP.NET technology to use for building a new Web application today then I would use C#/MVC with ASP.NET 5.
See
Rob Conery said Jim Newkirk created xUnit.net with Brad Wilson; kzu switched to xUnit.net for moq … c# now leaving VB in the dust … it’s about time … nevertheless, i’m flabbergasted … please excuse me while i go and check a printed calendar … i need to assure myself that today is not April 1st
I expect that there will be a publicly available package before it goes gold “We plan to have extensibility points so other languages like VB, F#, etc can be added via the form of a support package or such.” there are just too many developed apps to leave it swinging going forward. As far a requirement to use MVC, I am sure that will change as well.
You are in denial. Web forms and VB are being left out of the new lean and mean .NET 5.0. It’s a subset and will never have those features. Sorry if that ruins your day but this was their intention and they have been saying this for over 2 years.
But Microsoft has obviously not been explaining it very clearly otherwise Stephen wouldn’t have needed to reference a github issue It would have been laid out loud and clear by Microsoft by someone like Scott Guthrie or Scott Hanselman in a clear fashion.
Thank God, no more VB crap!
I have written exactly zero lines of VB.NET but it always seemed like a fine language to me. I don’t see why anyone would consider it “crap”
The fact that you haven’t written a line of it, AT ALL, to me, speaks volumes of why people consider it “crap”. Maybe not “crap” but not worthy of dual support. I’ve seen resumes that say “No VB under any circumstances”.
If you had a brand new project, VB would not likely be first choice for language for a jillion reasons.
If the world labeled every language I have not used “crap” or not worthy of support there would be very few languages indeed. Whole industries would be in trouble because I never used the languages they were built upon 🙂
> For the first time in the history of ASP.NET, you can run ASP.NET 5 applications on OSX and Linux.
Nope, ASP.NET MVC app have been supported on Mono for a long time. I’m running two right now (and have for 2 or 3 years).
> GruntJS, NPM, and Bower Support
This is a Visual Studio feature, not an ASP.NET feature. Also Gulp is better than Grunt, and both are supported.
> No More Web Forms
Web forms are still supported. From the ASP.NET site:
> You can continue developing Web Forms apps and have confidence that Web Forms is an essential part of the .NET web development platform. We remain focused on adding new features to Web Forms to improve the development experience and keep the technology up-to-date with web practices.
@Daniel — good point about Mono. I should have written “For the first time, the very same ASP.NET framework that runs on Windows will run on OSX/Linux”. Mono, unlike the new ASP.NET 5, is a separate code base.
And, I am not saying that Web Forms won’t be supported — I’m just pointing out that Web Forms won’t be part of ASP.NET 5. So, for example, you won’t be able to run Web Forms on OSX/Linux using ASP.NET 5.
I’d like to see some type safety in tag helpers, because “form asp-controller=’Products’ asp-action=’Create’ … ” looks too brittle for refactoring and compiler checking
Is there any plans to support that?
Otherwise the only way I can think of is code generation with a global list of strings for controller names, plus the same list of actions for each.
Unless you’ve been precompiling your views they’ve never been particularly friendly towards refactoring or type checking. Just yesterday I refactored a property name on my model and forgot to update the view and it blew up at run-time.
No VB and no Web Forms are the right decision.
I hope MS stands their ground and doesn’t change their direction. As a former VB.NET developer I can truthfully say that learning C# was the right decision for me. I love the direction that Microsoft is going with ASP.NET, Visual Studio, Azure, C#, etc. etc.
This is a great time to be a developer.
I hope MS has plans for items that don’t work in MVC like report viewer control, etc. I love Mac and Linux integration, but am surprised about VB.
“Tag Helpers are a better alternative to using traditional MVC helpers.” – I disagree:
@glen – I notice that they recently added a feature to add prefixes to tags used by Tag Helpers. See — it appears that this feature is optional which should make everyone happy 🙂
Hi Stephen,
Could you please give me more resources to read about this:
“You can interact with an MVC 6 controller from an AngularJS $resource using REST” ..
@mohammad – take a look at the following blog post
How do we use Tag Helpers in place of EditorFor? These don’t seem like replacements for that.
‘you can run ASP.NET 5 applications on OSX’
Presumably you mean so long as a dev IDE is installed, or a client web server, unless you are talking about Cordova apps? Thank you in advance for any clarification.
@rod — ASP.NET Core 5.0 is truly cross-platform: you can run ASP.NET on OSX/Linux in the same way as you can on Windows. ASP.NET includes a web server named Kestrel that you can run from the command line and use to serve ASP.NET 5 Web applications. This is not dependent on Visual Studio. You can build ASP.NET apps using Sublime Text or (for that matter) TextEdit.
…well that is really good news indeed – kestrel it is, thank you. Hopefully kestrel might just run on demand so the user doesn’t have to start it up.
I’ve suggested a couple of things on Scott G’s site. I know open source is all the buzz for a lot of non-Windows devs but, having come back to .NET after several years (I’m a production man first, coder second) my head really spins over all this angular/grunt/bower stuff (the list goes on). I just wish it was all VS vNext. How about Anders writing a nice C# compiler that outputs all these JavaScript implementations so I don’t have to reach for the aspirins in the cupboard?
And finally a whacky idea. Back in 2000 when .NET was about to be launched, we marvelled at the simplicity of web + Windows (UI) apps and the prospect of common business and data layers. I hoped for the day when there was a single control set to go even further. What about rendering HTML and all this ASP.NET stuff for non-Windows devices whilst streaming XAML Universal Apps elsewhere i.e. finally one set of controls?
Thanks for listening. R
The majority of start-up developers using OSX/Linux are unlikely to use ASP.NET. Why? Clearly they opted not to purchase a Microsoft Surface or 3rd party machine running Windows OS. So, it is somewhat reasonable to say they are likely to use non-Microsoft web development technology as well e.g. PHP etc.
MVC has been around for decades. Why Microsoft did not go with ASP.NET MVC circa 2002 immediately was in hind sight a poor choice. Instead, they went with ASP.NET Web Forms and now millions of developers know it very well. However, these days ASP.NET MVC is getting all the love. Subsequently, many ASP.NET Web Forms developers have abandoned Microsoft in favour of other technologies that stay the course.
You say “it is finally time for you to rewrite your Web Forms app into ASP.NET MVC”.
So for the thousands of people, businesses and corporates that have spent a decade developing web forms apps, do you really mean throw it away and start from scratch?
Who can justify the time? Who pays for it? What real value does it provide? None. Sure for new projects MVC should be considered (except for say intranets). Sorry and no offence, but “rewrite your apps” is not the best advice.
GruntJS, NPM, and Bower. My prediction is they will disappear by 2020 in favour of something new. Cool kids get bored and move on to something different when the crowd copies them.
Microsoft should chart its own roadmap and stick to it. It is what true leaders do. Trying to be all things to all people is hard and usually fails.
I agree with a lot of your points. For the company I work for “rewrite your apps” is not on option for us anytime soon if at all. We have far too many web forms sites and a whole in-house framework built using web forms. We are starting to add MVC but there’s no easy quick way for us to switch our Web Forms to MVC.
We’ve been burned by Microsoft many times. I like a lot of what they’re doing now but man there are times that they really make things difficult. Naturally we are using VB.NET so if that’s not supported that’s a big deal. Not from a learning c# standpoint but just from a consistency perspective. One assembly in VB.NET and one in C#. Not that big a deal but not ideal.
100% agree! Microsoft completely lacks a consistent roadmap for web/server development. Web development is shifting from backend rendering to frontend rendering (JavaScript…). We are still in a time of transition. But frontend rendering will get much more complex, more logic will go to the front end, etc. As a result, abstraction is crucial. They need some way to build a typed GUI that is decoupled from JavaScript logic.
In my opinion the biggest failure was listening to the designer guys who want control over HTML formatting, like Razor views. HTML is just markup. No one cares if it looks ugly or whatever as long as you follow the specifications. Razor is the ugliest and worst view technology Microsoft ever developed. Even old-school MFC C++ is much more logical and easier to understand, because there is ABSTRACTION to solve problems. You always need abstraction to solve complex problems.
Microsoft needs to do:
– build JavaScript APIs for frontend development
– build a XAML parser that generates the markup and interacts with the JavaScript libraries
– commit to ONE communication API. I still don’t see why we need WebApi in the backend. WCF is much more powerful and easier to configure.
Stephen, congrats, you’ve found those two developers that use VB for ASP.NET! 😀 Great and witty post.
Vytautas you’re jumping to conclusions. (But I do see the humour in your comment)
I write code in C#.
Would anyone like to discuss why we should start using xUnit for unit testing instead of MSTest? People should discuss the advantages of xUnit and also the downsides of MSTest. Looking for information. Thanks.
No more VB and Web Forms? Hallelujah! Time to move on and expand your horizons 90’s developer guy.
Dear Stephen,
Being ASP.NET WebForms Developer (both C# and VB), Finally It’s time to completely learn MVC. 🙂
It’s time to upgrade skills (Classic ASP 3.0 -> ASP.NET WebForms -> ASP.NET MVC)
Please publish books for ASP.NET 5 and MVC 6 ASAP..!
This article explains how to use the Custom Data Annotations Attribute in MVC 4.
Data Annotations
Validation in MVC can be done using Data Annotations that are applied to both the client and server side.
Data Annotation attributes are used to validate the user inputs when posting the form. All the Data Annotation attributes like Required, Range are derived from the ValidationAttribute class that is an abstract class. The ValidationAttribute base class lives in the System.ComponentModel.DataAnnotations namespace.
This article is showing how to create a custom Data Annotations step by step:
Step 1
Create a New MVC 4 Application.
Step 2
Select Razor View Engine.
Step 3
Here I am showing Email validation on the Employee Registration Form.
Select the Models folder, then select Add New Item, and add a new class named CustomEmailValidator.cs.
Here we need to inherit from ValidationAttribute and override the IsValid method.
CustomEmailValidator.cs
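The listing for CustomEmailValidator.cs did not survive extraction here. A minimal sketch of what the validator described above might look like (the class name follows the article; the email regex is illustrative, not from the original):

```csharp
using System.ComponentModel.DataAnnotations;
using System.Text.RegularExpressions;

public class CustomEmailValidator : ValidationAttribute
{
    public override bool IsValid(object value)
    {
        var email = value as string;
        if (string.IsNullOrEmpty(email))
            return false;   // treat missing input as invalid
        // Illustrative pattern only; production code should use a stricter check
        return Regex.IsMatch(email, @"^[^@\s]+@[^@\s]+\.[^@\s]+$");
    }
}
```

A model property can then be annotated with [CustomEmailValidator] and MVC will call IsValid during model binding when the form is posted.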
Now run the application:
Interpretability: Model explainability in automated ML (preview)
In this article, you learn how to get explanations for automated machine learning (automated ML) models in Azure Machine Learning using the Python SDK. Automated ML helps you understand feature importance of the models that are generated.
All SDK versions after 1.0.85 set model_explainability=True by default. In SDK version 1.0.85 and earlier versions, users need to set model_explainability=True in the AutoMLConfig object in order to use model interpretability.
In this article, you learn how to:
- Perform interpretability during training for best model or any model.
- Enable visualizations to help you see patterns in data and explanations.
- Implement interpretability during inference or scoring.
Prerequisites
- Interpretability features. Run pip install azureml-interpret to get the necessary package.
- Knowledge of building automated ML experiments. For more information on how to use the Azure Machine Learning SDK, complete this regression model tutorial or see how to configure automated ML experiments.
Interpretability during training for the best model
Retrieve the explanation from the best_run, which includes explanations for both raw and engineered features.
Note
Interpretability (model explanation) is not available for the TCNForecaster model recommended by AutoML forecasting experiments.
Download the engineered feature importances from the best run
You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
from azureml.interpret import ExplanationClient

client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
print(engineered_explanations.get_feature_importance_dict())
Download the raw feature importances from the best run
You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run.
from azureml.interpret import ExplanationClient

client = ExplanationClient.from_run(best_run)
raw_explanations = client.download_model_explanation(raw=True)
print(raw_explanations.get_feature_importance_dict())
Interpretability during training for any model
When you compute model explanations and visualize them, you're not limited to an existing model explanation for an AutoML model. You can also get an explanation for your model with different test data. The steps in this section show you how to compute and visualize engineered feature importance based on your test data.
Retrieve any other AutoML model from training
automl_run, fitted_model = local_run.get_output(metric='accuracy')
Set up the model explanations
Use automl_setup_model_explanations to get the engineered and raw explanations. The fitted_model can generate the following items:
- Featured data from trained or test samples
- Engineered feature name lists
- Findable classes in your labeled column in classification scenarios
The automl_explainer_setup_obj contains all the structures from the list above.
from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations

automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train,
                                                             X_test=X_test, y=y_train,
                                                             task='classification')
Initialize the Mimic Explainer for feature importance
To generate an explanation for automated ML models, use the MimicWrapper class. You can initialize the MimicWrapper with these parameters:
- The explainer setup object
- Your workspace
- A surrogate model to explain the fitted_model (the automated ML model)
The MimicWrapper also takes the automl_run object where the engineered explanations will be uploaded.
from azureml.interpret import MimicWrapper

# Initialize the Mimic Explainer
explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator,
                         explainable_model=automl_explainer_setup_obj.surrogate_model,
                         init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run,
                         features=automl_explainer_setup_obj.engineered_feature_names,
                         feature_maps=[automl_explainer_setup_obj.feature_map],
                         classes=automl_explainer_setup_obj.classes,
                         explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params)
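The surrogate ("mimic") idea behind MimicWrapper is framework independent: fit a simple, interpretable model to the predictions of the opaque model, then read feature influence off the surrogate's parameters. A stdlib-only sketch of that idea (black_box and fit_linear_surrogate are illustrative names, not part of the azureml API):

```python
import random

def black_box(x1, x2):
    # Stand-in for an opaque fitted model: depends strongly on x1, weakly on x2
    return 3.0 * x1 + 0.5 * x2

def fit_linear_surrogate(samples):
    """Least-squares fit of y ~ w1*x1 + w2*x2 via the 2x2 normal equations."""
    s11 = sum(x1 * x1 for x1, _, _ in samples)
    s12 = sum(x1 * x2 for x1, x2, _ in samples)
    s22 = sum(x2 * x2 for _, x2, _ in samples)
    t1 = sum(x1 * y for x1, _, y in samples)
    t2 = sum(x2 * y for _, x2, y in samples)
    det = s11 * s22 - s12 * s12
    return (t1 * s22 - t2 * s12) / det, (t2 * s11 - t1 * s12) / det

random.seed(0)
points = [(random.random(), random.random()) for _ in range(200)]
samples = [(x1, x2, black_box(x1, x2)) for x1, x2 in points]
w1, w2 = fit_linear_surrogate(samples)
# For this noiseless linear target, least squares recovers the weights exactly,
# so the surrogate exposes x1 as the dominant feature
```

The surrogate's weights recover approximately 3.0 and 0.5, exposing the relative feature importance; MimicWrapper applies the same idea with richer surrogate models such as LGBMExplainableModel.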
Use Mimic Explainer for computing and visualizing engineered feature importance
You can call the explain() method in MimicWrapper with the transformed test samples to get the feature importance for the generated engineered features. You can also sign in to Azure Machine Learning studio to view the explanations dashboard visualization of the feature importance values of the engineered features generated by automated ML featurizers.
engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform)
print(engineered_explanations.get_feature_importance_dict())
For models trained with automated ML, you can get the best model using the get_output() method and compute explanations locally. You can visualize the explanation results with ExplanationDashboard from the raiwidgets package.
best_run, fitted_model = remote_run.get_output()

from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations

automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train,
                                                             X_test=X_test, y=y_train,
                                                             task='regression')

from interpret.ext.glassbox import LGBMExplainableModel
from azureml.interpret.mimic_wrapper import MimicWrapper

explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel,
                         init_dataset=automl_explainer_setup_obj.X_transform, run=best_run,
                         features=automl_explainer_setup_obj.engineered_feature_names,
                         feature_maps=[automl_explainer_setup_obj.feature_map],
                         classes=automl_explainer_setup_obj.classes)

Install the visualization support package first:

pip install interpret-community[visualization]

engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform)
print(engineered_explanations.get_feature_importance_dict())

from raiwidgets import ExplanationDashboard
ExplanationDashboard(engineered_explanations, automl_explainer_setup_obj.automl_estimator, datasetX=automl_explainer_setup_obj.X_test_transform)

raw_explanations = explainer.explain(['local', 'global'], get_raw=True,
                                     raw_feature_names=automl_explainer_setup_obj.raw_feature_names,
                                     eval_dataset=automl_explainer_setup_obj.X_test_transform)
print(raw_explanations.get_feature_importance_dict())

from raiwidgets import ExplanationDashboard
ExplanationDashboard(raw_explanations, automl_explainer_setup_obj.automl_pipeline, datasetX=automl_explainer_setup_obj.X_test_raw)
Use Mimic Explainer for computing and visualizing raw feature importance
You can call the explain() method in MimicWrapper with the transformed test samples to get the feature importance for the raw features. In the Machine Learning studio, you can view the dashboard visualization of the feature importance values of the raw features.
raw_explanations = explainer.explain(['local', 'global'], get_raw=True,
                                     raw_feature_names=automl_explainer_setup_obj.raw_feature_names,
                                     eval_dataset=automl_explainer_setup_obj.X_test_transform,
                                     raw_eval_dataset=automl_explainer_setup_obj.X_test_raw)
print(raw_explanations.get_feature_importance_dict())
Interpretability during inference
In this section, you learn how to operationalize an automated ML model with the explainer that was used to compute the explanations in the previous section.
Register the model and the scoring explainer
Use the TreeScoringExplainer to create the scoring explainer that'll compute the engineered feature importance values at inference time. You initialize the scoring explainer with the feature_map that was computed previously.
Save the scoring explainer, and then register the model and the scoring explainer with the Model Management Service. Run the following code:
from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer, save

# Initialize the ScoringExplainer
scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map])

# Pickle scoring explainer locally
save(scoring_explainer, exist_ok=True)

# Register trained automl model present in the 'outputs' folder in the artifacts
original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl')

# Register scoring explainer
automl_run.upload_file('scoring_explainer.pkl', 'scoring_explainer.pkl')
scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='scoring_explainer.pkl')
Create the conda dependencies for setting up the service
Next, create the necessary environment dependencies in the container for the deployed model. Please note that azureml-defaults with version >= 1.0.45 must be listed as a pip dependency, because it contains the functionality needed to host the model as a web service.
from azureml.core.conda_dependencies import CondaDependencies

azureml_pip_packages = [
    'azureml-interpret', 'azureml-train-automl', 'azureml-defaults'
]

myenv = CondaDependencies.create(conda_packages=['scikit-learn', 'pandas', 'numpy', 'py-xgboost<=0.80'],
                                 pip_packages=azureml_pip_packages,
                                 pin_sdk_version=True)

with open("myenv.yml", "w") as f:
    f.write(myenv.serialize_to_string())

with open("myenv.yml", "r") as f:
    print(f.read())
Create the scoring script
Write a script that loads your model and produces predictions and explanations based on a new batch of data.
%%writefile score.py
import joblib
import pandas as pd
from azureml.core.model import Model
from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations

def init():
    global automl_model
    global scoring_explainer
    # Retrieve the path to the model file using the model name
    # Assume original model is named automl_model
    automl_model_path = Model.get_model_path('automl_model')
    scoring_explainer_path = Model.get_model_path('scoring_explainer')
    automl_model = joblib.load(automl_model_path)
    scoring_explainer = joblib.load(scoring_explainer_path)

def run(raw_data):
    data = pd.read_json(raw_data, orient='records')
    # Make prediction
    predictions = automl_model.predict(data)
    # Setup for inferencing explanations
    automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification')
    # Retrieve model explanations for engineered explanations
    engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform)
    # Retrieve model explanations for raw explanations
    raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True)
    # You can return any data type as long as it is JSON-serializable
    return {'predictions': predictions.tolist(),
            'engineered_local_importance_values': engineered_local_importance_values,
            'raw_local_importance_values': raw_local_importance_values}
Deploy the service
Deploy the service using the conda file and the scoring file from the previous steps.
from azureml.core.webservice import Webservice
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model, InferenceConfig
from azureml.core.environment import Environment

aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1,
                                               tags={"data": "Bank Marketing", "method": "local_explanation"},
                                               description='Get local explanations for Bank marketing test data')

myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score_local_explain.py", environment=myenv)

# Use configs and models generated above
service = Model.deploy(ws, 'model-scoring', [scoring_explainer_model, original_model], inference_config, aciconfig)
service.wait_for_deployment(show_output=True)
Inference with test data
Run inference with some test data to see the predicted value from the AutoML model (currently supported only in the Azure Machine Learning SDK), and view the feature importances contributing to the predicted value.
if service.state == 'Healthy':
    # Serialize the first row of the test data into json
    X_test_json = X_test[:1].to_json(orient='records')
    print(X_test_json)
    # Call the service to get the predictions and the engineered explanations
    output = service.run(X_test_json)
    # Print the predicted value
    print(output['predictions'])
    # Print the engineered feature importances for the predicted value
    print(output['engineered_local_importance_values'])
    # Print the raw feature importances for the predicted value
    print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values']))
Visualize to discover patterns in data and explanations at training time
You can visualize the feature importance chart in your workspace in Azure Machine Learning studio. After your AutoML run is complete, select View model details to view a specific run. Select the Explanations tab to see the visualizations in the explanation dashboard.
For more information on the explanation dashboard visualizations and specific plots, please refer to the how-to doc on interpretability.
Next steps
For more information about how you can enable model explanations and feature importance in areas other than automated ML, see more techniques for model interpretability.
How To Beep In Python - 5 Simple Ways
In this tutorial I will show you 5 simple ways to generate a beeping sound in Python.
To generate a beeping sound in Python you have the following options:
- Use the bell character on the terminal
- Use AppKit to play MacOS system sounds
- Use winsound to play Windows system sounds
- Use pygame to play custom sound files
- Use simpleaudio to play custom sound files
- Use the beepy package
To put it in a more pythonic way: how to make your machine go PING! (HINT: check the end of the article if you don’t get the reference.)
1. Using The Bell On The Terminal
There is a so-called bell character that you can use to issue a warning on a terminal. It is a nonprintable control code character, with the ASCII character code of 0x07 (BEL). You can trigger it by simply sending it to the terminal (note the backslash):
print('\a')
This is probably the simplest way of sounding a beep, though it is not 100% guaranteed to work on all systems. Most UNIX-like operating systems like macOS and Linux will recognize it, but depending on the current settings the bell might be muted or represented as a flash on the screen (a visual bell).
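Note that print also appends a newline after the bell character; if you want to emit only the control code, you can write it to the stream directly. A small sketch (the helper name is mine):

```python
import sys

def terminal_bell(stream=None):
    """Emit the ASCII BEL control character (0x07) without a trailing newline."""
    stream = stream if stream is not None else sys.stdout
    stream.write("\a")   # "\a" is the same character as "\x07"
    stream.flush()
```

Taking the stream as a parameter also makes the helper easy to test without actually sounding the terminal.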
2. Using AppKit.NSBeep On MacOS
If you’re on a Mac, you can tap into the Objective-C libraries to generate a sound.
First you’ll need to install the PyObjC library:
pip install -U PyObjC
Then you can simply use the AppKit interface to ring the default system sound, like so:
import AppKit
AppKit.NSBeep()
3. Using winsound On Windows
On Windows operating systems you can use the winsound library. winsound needs no installation; it is a built-in module on Windows, so you should be able to access it by default. winsound has a handy Beep API; you can even choose the duration and the frequency of the beep. This is how you generate a 440Hz sound that lasts 500 milliseconds:
import winsound
winsound.Beep(440, 500)
You can also play different Windows system sound effects using the PlaySound method:
import winsound
winsound.PlaySound("SystemExclamation", winsound.SND_ALIAS)
The same API can be used to play custom sound files using the SND_FILENAME flag instead of SND_ALIAS:
import winsound
winsound.PlaySound("beep.wav", winsound.SND_FILENAME)
4. Playing Sound Files With pygame
Pygame is a modular Python library for developing video games. It provides a portable, cross-platform solution for a lot of video game and media related tasks, one of which is playing sound files.
To take advantage of this feature, first you’ll need to install pygame:
pip install pygame
Then you can simply use the mixer to play an arbitrary sound file:
from pygame import mixer

mixer.init()
sound = mixer.Sound("bell.wav")
sound.play()
Just like with the previous solution, you’ll need to provide your own sound file for this to work. This API supports OGG and WAV files.
5. Playing Sound Files With Simpleaudio
Simpleaudio is a cross-platform audio library for Python; you can use it to play audio files on Windows, OSX and Linux.
To install the simpleaudio package simply run:
pip install simpleaudio
Then use it to play the desired sound file:
import simpleaudio

wave_obj = simpleaudio.WaveObject.from_wave_file("bell.wav")
play_obj = wave_obj.play()
play_obj.wait_done()
6. Use Package Made For Cross-Platform Beeping - Beepy
If you want a ready-made solution you can check out the beepy package. Basically it’s a thin wrapper around simpleaudio that comes bundled with a few audio files.
As always, you can install it with pip:
pip install beepy
And then playing a beep sound is as simple as:
import beepy
beepy.beep(sound="ping")
Summary
As you can see there are several different ways to go about beeping in Python, but which one is the best?
If you just want a quick and dirty solution I’d recommend trying to sound the terminal bell. If you want something more fancy or robust I’d go with winsound on windows or AppKit on a Mac. If you need a cross-platform solution your best bet will be using simpleaudio or pygame, to get a custom sound file played.
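The platform-specific options above can also be folded into a single helper that picks a mechanism from sys.platform. A sketch (the strategy strings simply label the sections above; a real implementation would import and call the corresponding module):

```python
import sys

def pick_beep_strategy(platform=None):
    """Map a platform identifier to one of the beep mechanisms described above."""
    platform = platform or sys.platform
    if platform.startswith("win"):
        return "winsound"        # section 3: built-in on Windows
    if platform == "darwin":
        return "AppKit"          # section 2: macOS system sound
    return "terminal_bell"       # section 1: works on most UNIX-likes
```

Returning a label instead of playing a sound keeps the sketch silent and easy to test.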
Congratulations, now you’ll be able to turn your computer into “The Machine That Goes PING”.
SyndicationFeed.TryParseAttribute Method
Silverlight
Attempts to parse an attribute extension.
Namespace: System.ServiceModel.Syndication
Assembly: System.ServiceModel.Syndication (in System.ServiceModel.Syndication.dll)
Parameters
- name
- Type: System.String
The name of the element.
- ns
- Type: System.String
The namespace of the element.
- value
- Type: System.String
The attribute to parse.
- version
- Type: System.String
The syndication version to use when parsing.
Return Value
Type: System.Boolean
A value that specifies whether the attribute extension was parsed successfully.
Attribute extensions are custom attributes that are not defined by the Atom 1.0 or RSS 2.0 specifications. They are serialized as an attribute of the <feed> (for Atom 1.0) or <rss> (for RSS 2.0) element, which depends upon the syndication version being used. This method is an extension point that allows you to handle the deserialization of a custom attribute extension. To do this, you must derive a class from SyndicationFeed and override this method. This method is called for all unrecognized attribute extensions.
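As the remarks describe, handling a custom attribute extension means deriving from SyndicationFeed and overriding this method. A sketch, with an invented namespace URI and attribute name for illustration:

```csharp
using System.ServiceModel.Syndication;

public class MyFeed : SyndicationFeed
{
    // "rating" and the namespace URI below are illustrative only
    public string Rating { get; private set; }

    protected override bool TryParseAttribute(string name, string ns,
                                              string value, string version)
    {
        // Claim only the attribute we understand; defer everything else
        if (name == "rating" && ns == "http://example.com/feedext")
        {
            Rating = value;
            return true;    // parsed successfully
        }
        return base.TryParseAttribute(name, ns, value, version);
    }
}
```

Returning true tells the serializer the attribute was consumed; deferring to the base implementation leaves unrecognized attributes in the AttributeExtensions collection.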
For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers.
A suite of better cache tools for Django.
django-email-bandit is a Django email backend for hijacking email sending in a test environment.
Integration library for React/JSX and Django
Simple, flexible app for integrating static, unstructured content in a Django site
Django template filter application for sanitizing user submitted HTML
An interface to the Google Calculator
A command framework with a plugin architecture
A simple namespaced plugin facility
Board Of Directors And Marketing Department Commerce Essay
In the business world, profitability may be considered as one of important factors that a company is looking for because it can refer to its survival in the business in terms of capability to invest. It appears that marketing department might play a crucial role in increasing its sales by applying suitable marketing tools in order to encourage consumers to make purchase decisions on its products. However, Board of Directors does not realise an importance of marketing executives in the boardroom. They might prefer to be interested in important issues which would severely affect the company and also financial reports such as balance sheet, income statement, statement of cash flow. It is obvious that they might focus on operational and financial aspects more than others. This assignment will point out essentials of marketing function to Board of Directors, especially for the Chief Executive Officer. In addition, the reason why marketing should be focused as a strategic activity will be defined here.
A Relationship between Board of Directors and Marketing Department
The Board of Directors (BOD) is a group of persons elected by the shareholders of a company to take control of the overall business. Susan Shultz (2001) writes that the BOD should approve corporate strategy and evaluate major investments. Moreover, they should provide general guidance on the business and advise management on significant issues facing the corporation. Neville Bain (2008) also states that there are four main tasks which the BOD should pursue. Firstly, they establish the vision, mission and values of the company. Secondly, they decide the strategy and the structures. Thirdly, they delegate authority to management and monitor and evaluate the implementation of policies, strategies and business plans. Lastly, they account to shareholders and are responsible to stakeholders. The BOD comprises both executive and non-executive directors. Executive directors are chosen from executives within the company, while non-executive directors are chosen on the basis of previous experience related to the business (Johnson, Whittington & Scholes, 2011).
The Marketing Department is responsible for creating value for customers by understanding customer needs and building customer relationships in order to capture value from customers in return (Kotler & Armstrong, 2012). Marketing is positioned as one of the primary activities in Michael Porter's value chain within the company. Reinforced by both support activities and other primary activities such as operations, logistics and service, it can deliver value to consumers in order to produce higher-margin products.
Because the Board of Directors sits at the top of the organisation chart, its responsibility covers organisational management and the overall activities of the organisation as a whole, and each function in the organisation chart deserves its attention. In a company there are five key activities: general management, marketing, finance, accounting and operations. To operate the business smoothly, all of these activities must collaborate to achieve the organisational goal. The vision, mission, goal and objectives of the company should be clearly set by the top management team and communicated to all employees. Peter Drucker's Management By Objectives (MBO) is one approach that might be worthwhile for a company. Under MBO, employees are encouraged to set goals and objectives, so that all players in the organisation work towards the same goal. It helps an organisation plan, direct, control and assess its management systematically, and it may also indicate the performance of the organisation.
One of the important things the BOD is concerned with is financial performance and shareholders, and marketing activities are significantly engaged with both. Peter Doyle (2000) points out how marketing creates value and how Shareholder Value Analysis (SVA) can be applied as a powerful technique for developing and justifying marketing strategies. SVA is a financial measure that illustrates the contribution of marketing to the company's financial performance, and there is a fascinating relationship between the two. 'Without marketing, SVA is just another accounting tool that sacrifices long-term competitiveness for short-term profits' (Doyle, 2000: 310). On the other hand, SVA provides a framework for marketing to collaborate with other functions in the company more effectively. Graham Hooley et al. (2004) also describe the relationship between marketing and performance outcomes. A high level of market orientation enhances resources such as marketing assets and market innovation capabilities (Slater and Narver, 1995; Han et al., 1998 cited in Hooley et al., 2004). Well-developed marketing resources may lead to higher market performance through increases in customer satisfaction, customer loyalty, sales volume and market share, which in turn lead to financial performance (see Figure 1).
Figure 1: Marketing and performance outcomes — a market-oriented culture builds marketing resources (assets and capabilities), which drive marketing performance (customer satisfaction and loyalty, sales volume and market share), which in turn drives financial performance.
Source: Hooley, G., Saunders, J. & Piercy, N. (2004) Marketing Strategy and Competitive Positioning, pp. 21.
Jean Johnson et al. (2012) identify four key dimensions of a company's strategic orientation that affect its market orientation strategy, based on a cross-sectional quantitative study sampling 800 companies from various sectors including electronics, fabric, transportation and computer equipment. The first dimension is a company's aggressiveness, which refers to its efforts to gain market share eagerly. The second concerns how much the company contributes to future orientation, which mainly focuses on building sustainable competitive advantage. The third, marketing formalisation, is a systematic approach that helps decision makers tackle explicit issues, understand the market and enhance customer satisfaction. The last, risk proclivity, is an organisation's consistent pattern of taking or avoiding risk. The results reveal that a company's aggressiveness strongly affects market orientation. For large organisations, strategic orientation tends to be inconsistent with a market orientation owing to a lack of concentration on sustaining the existing market. However, risk-averse large companies tend to build up marketing formalisation in order to strengthen relationships with existing customers.
Roles of marketers in Board of Directors
Although marketing plays a crucial part in an organisation, there are issues which should be considered. Mark Davies & Barry Ardley (2012) point out that strong marketing leadership is central to the value of marketing. They also identify five factors involved in reducing the role of the marketing department in the board meeting. Firstly, according to Malcolm McDonald & Hugh Wilson (2004) and Jagdish Sheth & Rajendra Sisodia (2005), the role of the marketing department has been reduced to sales support, tactical promotional activities or marketing communication, while other managers have intervened in the core strategy that formerly belonged to marketing. Secondly, young marketing executives lack the team-working, decision-making and negotiation skills (Dacko, 2006) necessary for working effectively. Thirdly, marketers tend to be insensitive to environmental changes such as economic downturns, low-cost competitors and technological advances. Fourthly, the competency required for marketing is neglected: compared with accountants, who tend to follow accounting regulations tightly, marketers tend to be more flexible because of the nature of their responsibilities. Lastly, as a result of the lack of measurement models, marketing accountability is difficult to achieve.
As for marketers who want to be promoted to executive level, Roger Bennett (2009) states that they need to prepare themselves in terms of knowledge and problem-solving skills; his research draws on data from senior marketers in 209 registered companies in the food and beverages manufacturing sector. Additionally, emotional intelligence, social behaviour, and financial and general management knowledge distinguish candidates from others. Coral Ingley & Nicolas Van der Walt (2001) conducted research on 'The Strategic Board: the Changing Role of Directors in Developing and Maintaining Corporate Capability' by analysing the selection, evaluation and performance of boards of directors in New Zealand. It is clear that not only strategic vision and leadership but also individual competency are essential qualifications in director capability.
Marketers as strategists
It appears that the marketing department's strategic role in a company, especially on its executive board, might be reduced. Although the Board of Directors realises how important marketing is, it may not be convinced that marketing is highly relevant to the word 'strategy'.
According to Alfred D. Chandler, 'strategy is the determination of the long-run goals and objectives of an enterprise and the adoption of courses of action and the allocation of resources necessary for carrying out these goals' (Johnson et al., 2011: 4). Strategy comprises three levels that cascade from the top management level. Corporate strategy defines a company's overall direction in terms of its objectives, the acquisition and allocation of resources, and the coordination of all Strategic Business Units (SBUs). Business strategy focuses on a single line of business; it may deal with positioning the business against competitors and anticipating changes in the environment. Functional departments such as manufacturing, marketing and finance develop their own strategies in terms of resources, processes and people in order to implement both corporate and business strategies (see Figure 2).
Figure 2: Hierarchy of strategy
Source: Wheelen, T. & Hunger, D. (2004) cited in Johnson, G., Whittington, R. & Scholes, K. (2011) Exploring Strategy (9th edition), pp. 38.
As a part of an organisation, marketing has to cope with changes in the marketing environment such as new entrants, customer behaviour and technological change. In order to build and maintain the competitive advantage of a business, marketers should formulate marketing strategies that fit the environment and the capabilities of the company; these strategies should also reflect the vision, mission, goal and objectives of the company. Besides environmental analysis, analyses of customers, competition and the industry should be undertaken. The information derived from applying the PESTEL analysis and Porter's Five Forces model helps marketers define their strategic position in the market. They can then develop strategic options, in terms of marketing directions and the methods by which the strategy might be pursued, based on competitive advantages. Once all strategic alternatives have been evaluated and the chosen one implemented in the operating plan, a review of the strategy should not be neglected, through strategy control and metrics.
Enrique Claver et al. (2007) conducted a case study clarifying the relationship between environmental management and firm performance at a Spanish agricultural cooperative known as COATO. It has succeeded in broadening its agricultural activities to various products such as vegetables, fruit, honey, almonds and oil, and it has been recognised for outstanding achievement guaranteeing the quality and sustainability of the cooperative. Besides agricultural products, it offers services for members of the cooperative, for example financial services, technical support, a fuel station and training. According to the research findings, COATO examines environmental factors that would influence the business in order to formulate a corporate strategy to gain a competitive advantage. The research identified four factors affecting the organisation: environmental regulations, stakeholders, the Spanish cooperative sector, and complementary resources and capabilities. The last two factors involve environmental management. COATO found that many Spanish cooperatives lack technology and facilities for toxic and waste management, which reflects the obstacles to cooperatives' adaptation to environmental issues; the more good environmental management practices there are, the more positive their influence. This information helped the managers determine the strategic direction, which was eventually transformed into a 'proactive environmental strategy' comprising environmental management-prevention and a pioneering entry strategy. Its performance is evaluated on three aspects: environmental performance, economic performance and competitive advantage. In the end, the adoption of the proactive strategy positively affected both environmental and economic performance, and there is a high probability of gaining a competitive advantage through cost and differentiation.
Kyung Hoon Kim et al. (2011) studied the relationships between Sustainable Competitive Advantage (SCA), marketing strategy and employment brand equity in the health service industry in the Republic of Korea in 2009. The researchers believe that a marketing strategy established from the vision and objectives of an organisation can drive that organisation to become the market leader. The findings indicate that doctors working in hospitals agree that sustainable competitive advantage has a noteworthy influence on marketing strategy: the higher the level of SCA, the stronger the marketing strategy. In addition, marketing strategy has a significant influence on a hospital's image.
Michael Volkov et al. (2002) studied consumer behaviour and consumer complaint behaviour, focusing on advertising, in Australia. Consumer complaint behaviour here means a set of behavioural and non-behavioural responses to negative perceptions of advertisements. The research shows that there is a significant difference between complainants and non-complainants, and that advertising complainants tend to spread negative messages about products widely.
The three studies above are examples of implementing strategic marketing to improve the performance of organisations. COATO, a Spanish cooperative, analysed not only its macro and micro environments but also its 'internal customers', and came up with a 'proactive environmental strategy' that made it successful in the business. Doctors in hospitals in the Republic of Korea tend to agree that Sustainable Competitive Advantage influences the brand image of their hospitals. And the study by Michael Volkov et al. shows that consumers behave differently and may respond to the same marketing tools in different ways; marketers who possess this information can gain a competitive advantage by developing suitable marketing tools.
Conclusion
Marketing is one of the primary activities that drive a company towards its goal. A clear vision, mission, goal and objectives, developed by the board of directors at the corporate level, help the company create competitive advantage. Consequently, the board of directors and shareholders can be satisfied by stronger financial performance, especially profitability. The marketing department is responsible for monitoring the marketing environment and appraising the company's capability in order to create, evaluate, choose and implement the most appropriate marketing strategy. However, every function in a company should work collaboratively and strategically to reach the goal.
--------------------------------------------------------------------------
Reference List
Bain, N. (2008) The Effective Director: Building Individual and Board Success. London: Institute of Directors.
Bennett, R. (2009) 'Reaching the Board: Factors Facilitating the Progression of Marketing Executives to Senior Positions in British Companies', British Journal of Management [Online], 20 (1): 30-54.
Claver, E., López, M., Molina, J. & Tari, J. (2007) 'Environmental Management and Firm Performance: A Case Study', Journal of Environmental Management [Online], 84 (4): 606-619.
Dacko, S. (2006) 'Narrowing the Skills Gap for Marketers of the Future', Marketing Intelligence and Planning [Online], 24 (3): 283-295.
Davies, M. & Ardley, B. (2012) 'Denial at the Top Table: Status Attributions and Implications for Marketing', Journal of Strategic Marketing [Online], 20 (2): 113-126.
Doyle, P. (2000) 'Value-based Marketing', Journal of Strategic Marketing [Online], 8 (4): 299-311.
Hendry, K. & Kiel, G. (2004) 'The Role of the Board in Firm Strategy: Integrating Agency and Organisational Control Perspectives', Corporate Governance: An international Review [Online], 12 (4): 500-520.
Hooley, G., Saunders, J. & Piercy, N. (2004) Marketing Strategy and Competitive Positioning (3rd edition). London: Pearson Education Limited.
Ingley, C. & Van der Walt, N. (2001) 'The Strategic Board: the Changing Role of Directors in Developing and Maintaining Corporate Capability', Corporate Governance: An International Review [Online], 9 (3):174-185.
Johnson, G., Whittington, R. & Scholes, K. (2011) Exploring Strategy (9th edition). London: Pearson Education Limited.
Johnson, J., Martin, K. & Saini, A. (2012) 'The Role of a Firm's Strategic Orientation Dimensions in Determining Market Orientation', Industrial Marketing Management [Online], 41 (4): 715-724.
Kim, K., Jeon, B., Jung, H., Lu, W. & Jones, J. (2011) 'Effective employment Brand Equity through Sustainable Competitive Advantage', Journal of Business Research [Online], 64 (11): 1207-1211.
Kotler, P. & Armstrong, G. (2012) Principles of Marketing (14th edition). London: Pearson Education Limited.
Kumar, N. (2004) Marketing as Strategy: Understanding the CEO's Agenda for Driving Growth and Innovation. Massachusetts: Harvard Business Press.
McDonald, M. & Wilson, H. (2004) 'Marketing Existential Malpractice and an Etherised Discipline: A Soteriological Comment', Journal of Marketing Management [Online], 20 (3-4): 387-408.
Morgan, N., Vorhies, D. & Mason, C. (2009) 'Market Orientation, Marketing Capabilities, and Firm Performance', Strategic Management Journal [Online], 30 (8): 909-920.
Sheth, J. & Sisodia, R. (2005) 'A Dangerous Divergence: Marketing and Society', Journal of Public Policy and Marketing [Online], 24 (1): 160-162.
Shultz, S.F. (2001) The Board Book: Making your Corporate Board a Strategic Force in your Company's Success. New York: American Management Association.
Volkov, M., Harker, D. & Harker, M. (2002) 'Complaint Behaviour: A Study of the Differences between Complainants about Advertising in Australia and the Population at Large', The Journal of Consumer Marketing [Online], 19 (4): 319-330.
PART 2
'PERSUASIVE PAPER'
Introduction
Competitive advantage
Generic strategy for creating competitive advantage
Cost leadership
Differentiation
Focus
G.T. Lumpkin et al. (2002) thoroughly examine
Sustainable competitive advantage
Sustainable competitive advantage in a company
C. Marlena Fiol (2001) argues that organizational identity is a core competency leading to competitive advantage, by contextualizing and providing meaning to new adaptive behaviours. She suggests that organizational identity represents a relatively inimitable resource, leading to sustainability of the advantage.
TQM
Figure TQM process, activities, tacitness, complexity and sustainability of advantage
Source: Reed, R., et al. (2005) 'Total Quality Management and Sustainable Competitive Advantage', Journal of Quality Management, pp. 13.
Role of marketing in sustainable competitive advantage
Unique and valued products
Clear, tight definition of market targets
Enhanced customer linkages
Established brand and company credibility
Conclusion
--------------------------------------------------------------------------
You are to prepare a persuasive paper in which you attempt to convince your (hypothetical) manager to take a particular action related to a strategic marketing topic of your choice
Your paper will be related to issues we discuss in class
Sample topics might include:
- Market orientation is not the only important business orientation of a successful business
- Listening to the customer can harm dynamic innovation in the long-term.
- The important role of market driving behaviour versus market driven behaviour
You are to prepare a persuasive paper in which you attempt to convince your (hypothetical) manager to take a particular action related to a strategic marketing topic of your choice. Your paper will be related to issues we discuss in class. Sample topics include: why new research approaches should be used to understand consumer markets in addition to consumer survey approaches, or why market orientation is not the only important business orientation of a successful business, or why listening to the customer can harm dynamic innovation in the long-term (*NB* Please note this paper must not be on the role of marketing within corporate boards - as outlined in the 'Get the Evidence Brief above'). A good example of a topic could be the important role of market driving behaviour versus market driven behaviour, illustrating your paper with examples from the Virgin Galactic Space Project (). See the following quotations:
Harris and Cia (2002:172) explain that "a small number of leading theorists reminded the marketing academy that being 'market driven' was not necessarily the solution for, or indeed the practice of, every firm." The emergence of this research agenda went further, cautioning against the institutionalising of "overly close relationships", not least because this would "suppress innovation and harm firm performance" (Harris and Cia, 2002:173).
Keep in mind that many bosses, yours included, may have little professional management training and even less knowledge of social science (not having the benefits of university education and Birmingham University degree!). Criteria for evaluating your paper will be based on 1) how effectively you marshal the best evidence to make your case, 2) the extent you have made your case in a strong, truthful, and convincing manner, and 3) how well you use facts about the particular organization (again, real or hypothetical) to help you make your case.
Reference List
Amini, A., Darani, M., Afshani, M. & Amini, Z. (2012) 'Effectiveness of Marketing Strategies and Corporate Image on Brand Equity as a Sustainable Competitive Advantage', Interdisciplinary Journal of Contemporary Research in Business [Online], 4 (2): 192-205.
Cui, Y. & Jiao H. (2011) 'Dynamic Capabilities, Strategic Stakeholder Alliances and Sustainable Competitive Advantage: Evidence from China', Corporate Governance [Online], 11 (4): 315-398.
Culpan, R. (2008) 'The Role of Strategic Alliances in Gaining Sustainable Competitive Advantage for Firms', Management Revue [Online], 19 (1): 94-105.
De Lemos, T., Almeida, L., Betts, M. & Eaton, D. (2003) 'An Examination on the Sustainable Competitive Advantage of Private Finance Initiative Projects', Construction Innovation [Online], 3 (4): 249-259.
Emerald Group Publishing Limited (2010) 'Co-creation at Orange and Cisco Systems: Gaining Competitive Advantage and Sustainable Growth'. Strategic Direction, 26 (8): 23-25. Retrieved from EBSCO Database [Accessed on 20 November 2012].
Esper, T., Fugate, B. & Davis-Sramek, B. (2007) 'Logistics Learning Capability: Sustaining the Competitive Advantage Gained through Logistics Leverage', Journal of Business Logistics [Online], 28 (2): 57-81.
Finney, R., Campbell, N. & Orwig, R. (2004) 'From Strategy to Sustainable Competitive Advantage: Resource Management as the Missing Link', Marketing Management Journal [Online], 14 (1): 71-81.
Fiol, C.M. (2001) 'Revisiting an Identity-based View of Sustainable Competitive Advantage', Journal of Management [Online], 27 (6): 691-699.
Gupta, A. (2012) 'Sustainable Competitive Advantage in Service Operations: An Empirical Examination', Journal of Applied Business Research [Online], 28 (4): 735-742.
He, N. (2012) 'How to Maintain Sustainable Competitive Advantages-----Case Study on the Evolution of Organizational Strategic Management', International Journal of Business Administration [Online], 3 (5): 45-51.
Hoffman, N. (2000) 'An Examination of the "Sustainable Competitive Advantage" Concept: Past, Present, and Future', Academy of Marketing Science Review [Online], 4: 1-16.
Javalgi, R., Radulovich, L., Pendleton, G. & Scherer, R. (2005) 'Sustainable Advantage of Internet Firms: A Strategic Framework and Implications for Global Marketers', International Marketing Review [Online], 22 (6): 658-672.
Lubit, R. (2001) 'Tacit Knowledge and Knowledge Management: The Keys to Sustainable Competitive Advantage', Organizational Dynamics [Online], 29 (4): 164-178.
Lumpkin, G.T., Droege, S. & Dess, G. (2002) 'E-Commerce Strategies: Achieving Sustainable Competitive Advantage and Avoiding Pitfalls', Organizational Dynamics [Online], 30 (4): 325-340.
Matthews, J. & Shulman, A. (2005) 'Competitive Advantage in Public-sector Organizations: Explaining the Public Good/ Sustainable Competitive Advantage Paradox', Journal of Business Research [Online], 55 (2): 232-240.
Pfeffer, J. (2005) 'Producing Sustainable Competitive Advantage through the Effective Management of People', Academy of Management Executive [Online], 19 (4): 95-106.
Porter, M. & Kramer, M. (2006) Strategy and Society: The Link between Competitive Advantage and Corporate Social Responsibility. Available from: 2008/Mark-Kramer-Keynote/Strategy-Society.PDF [Accessed on 20 November 2012].
Powell, T. (2001) 'Competitive Advantage: Logical and Philosophical Considerations', Strategic Management Journal [Online], 22 (9): 875-888.
Reed, R., Lemak, D. & Mero, N. (2005) 'Total Quality Management and Sustainable Competitive Advantage', Journal of Quality Management [Online], 5 (1): 5-22.
Srivastava, R., Fahey, L. & Christensen, H.K (2001) 'The Resource-based View and Marketing: The Role of Market-based Assets in Gaining Competitive Advantage', Journal of Management [Online], 27 (6):777-802.
Van Niekerk, A. (2007) 'Strategic Management of Media Assets for Optimizing Market Communication Strategies, Obtaining a Sustainable Competitive Advantage and Maximizing Return on Investment: An Empirical Study', Journal of Digital Asset Management [Online], 3 (2): 89-98.
Vorhies, D. & Morgan, N. (2005) 'Benchmarking Marketing Capabilities for Sustainable Competitive Advantage', Journal of Marketing [Online], 69 (1): 80-94.
PART 3
THE 'LEARNING LOG'
'Contemporary Issues in Strategic Marketing' is one of the subjects I find most interesting at the University of Birmingham. After being rotated into a marketing division, I was confronted with many marketing issues which the management team wanted to know how to cope with. Although I have taken many marketing courses, in some cases it is difficult to apply those theories in practice because they seem hypothetical and unachievable. Professor Philip Kotler, a famous marketing guru, said 'Marketing takes a day to learn. Unfortunately it takes a lifetime to master.' I could not agree with this quotation more.
In week 1, 'if you don't know where you are going, any road will take you there', a quotation from Alice in Wonderland, was raised to introduce strategic marketing. It clearly illustrates the competitive marketing framework. In order to formulate any plan efficiently, strategists or planners should know what their organisation wants to achieve, through its mission, vision and objectives. If they create a plan without any goal, the result is vague management, which may lead to the failure of the organisation. Strategy is also important: 'Strategy is the long-term direction of an organisation' (Johnson, Whittington & Scholes, 2011: 3). It identifies how to allocate limited resources in a direction that creates a Sustainable Competitive Advantage (SCA). SCA is worthwhile for an organisation in the long run as long as it can maintain its competitive position in the market. Strategic management can be categorised into three hierarchical levels. Firstly, corporate strategy directs an organisation's overall business management, such as defining the mission, setting objectives and goals, and designing the business portfolio (Kotler & Armstrong, 2012). Secondly, business strategy is concerned with activities under a Strategic Business Unit (SBU), such as targeting a niche market and positioning. Lastly, functional strategy is developed by a functional department such as marketing, human resources or production in order to achieve the corporate goal; examples are the selection of distribution channels and lowering prices. All three levels of strategy must be consolidated for corporate success (West, Ford & Ibrahim, 2012). From my previous experience in a government agency, my executive board has emphasised that corporate strategy should be clear, must describe the direction of the organisation, and should be consistent with government policy.
Top and middle-level managers from all divisions hold annual meetings to integrate the SBUs' activities in line with the vision, mission and objectives of the organisation. Every primary and support activity in the value chain, such as operations, marketing and sales, and procurement, should work in synergy to achieve the organisational goal. Clearly, some divisions would like to set their own Key Performance Indicators (KPIs) as the first priority, and other divisions are affected by those actions, for example in conflicts between the marketing and operations divisions. Approximately one year ago, a key customer requested a consultancy project through the marketing division, which passed the request on to the operations division. The operations division refused it, preferring government projects that brought more reputation. The marketing division had to solve this problem by outsourcing to another company, which also affected the cost of the proposed project. What I have learnt from this case is that a well-planned strategy is worthless if staff at the SBU or functional level do not recognise and implement it to achieve the objectives of the organisation.
The activity which allowed students to think about the key issues a tea shop faces in its business was useful, because we could apply our knowledge of strategic marketing to analyse the case study and learn how to cope with this kind of situation in real working experience.
As a marketer in a government organisation, I have occasionally participated in strategic planning. SWOT, the value chain and Porter's Five Forces model are used to identify and analyse both the internal and external business environment in order to formulate a five-year strategic plan. As a result, the decision makers know the organisation's position in the market compared to competitors, and they can use this information to choose a directional strategy. Studying 'strategic fit', I realised that a successful company is able to balance its strategies, capabilities and environments efficiently. Strategic decisions at every level are made after this stage, for example on resource allocation, strategic orientation and market segmentation. Each strategic decision is translated into an action plan, with methods or tools to reach the goals, and the performance of each strategic level is monitored and evaluated at the end.
Another two challenges which my organisation should be concerned about are its late response to environmental change and how to build sustainable competitive advantage. Because the organisation is under the supervision of a minister appointed by the Prime Minister, there is a high probability that the minister will be rotated to another position or removed. Hence, the strategy formulated by the management teams is influenced by each new minister's policy, which can interrupt the action plan and lead to unachievable goals.
In conclusion, this module encourages me to think critically by building on my basic marketing concepts through a variety of issues, articles, case studies and video clips from Google Videos and YouTube. It is fruitful to apply knowledge not only from textbooks but also from the lecturer's experience to tackle real marketing issues. In my view, this course reviews and updates my comprehension of the marketing principles I learnt many years ago. I will apply these strategic thinking skills in my job as a marketer in order to help create a sustainable competitive advantage for the organisation.
QStereo Class Reference

#include <qstereo.h>

The QStereo class is a visualization widget for stereo camera devices.

Member functions:
- A constructor.
- A destructor.
- [inline, slot] Reset the stereo camera.
- [signal] Emitted when the size of camera images has been changed.
- [virtual, slot] Resize camera images.
- Connect a stereo camera device.
- [slot] Display an image on the screen.
- Capture a pair of images from a stereo camera and show them.
- Capture images from a stereo camera and show them periodically.
- [inline] Return the current stereo camera device.
- Stop capturing images from a stereo camera.

Protected attributes:
- Stereo camera device.
- Internal pixmap for double-buffering.
- Memory buffer to store a left PNM image.
- Memory buffer to store a right PNM image.
- Painter for drawing.
- Size of a PNM image.
- The height of camera images.
- The width of camera images.
http://robotics.usc.edu/~boyoon/bjlib/d2/dbb/classQStereo.html
#include <xti.h>
int t_optmgmt(int fd, const struct t_optmgmt *req, struct t_optmgmt *ret);
The t_optmgmt() function enables a transport user to retrieve, verify or negotiate protocol options with the transport provider. The argument fd identifies a transport endpoint.
The req and ret arguments point to a t_optmgmt structure containing the following members:
struct t_optmgmt {
    struct netbuf opt;
    t_scalar_t   flags;
};
The opt field identifies protocol options and the flags field is used to specify the action to take with those options.
The options are represented by a netbuf structure in a manner similar to the address in t_bind(3NSL). The argument req is used to request a specific action of the provider and to send options to the provider. The argument len specifies the number of bytes in the options, buf points to the options buffer, and maxlen has no meaning for the req argument. The transport provider may return options and flag values to the user through ret. For ret, maxlen specifies the maximum size of the options buffer and buf points to the buffer where the options are to be placed. If maxlen in ret is set to zero, no options values are returned. On return, len specifies the number of bytes of options returned. The value in maxlen has no meaning for the req argument, but must be set in the ret argument to specify the maximum number of bytes the options buffer can hold.
Each option in the options buffer is of the form struct t_opthdr possibly followed by an option value.
The level field of struct t_opthdr identifies the XTI level or a protocol of the transport provider. The name field identifies the option within the level, and len contains its total length; that is, the length of the option header t_opthdr plus the length of the option value. If t_optmgmt() is called with the action T_NEGOTIATE set, the status field of the returned options contains information about the success or failure of a negotiation.
Several options can be concatenated. The option user has, however, to ensure that each option's header and value part starts at a boundary appropriate for the architecture-specific alignment rules. The macros T_OPT_FIRSTHDR(nbp), T_OPT_NEXTHDR(nbp, tohp), and T_OPT_DATA(tohp) are provided for that purpose.
T_OPT_DATA(tohp) If the argument is a pointer to a t_opthdr structure, this macro returns an unsigned character pointer to the data associated with that t_opthdr.
T_OPT_NEXTHDR(nbp, tohp) If the first argument is a pointer to a netbuf structure associated with an option buffer and the second argument is a pointer to a t_opthdr structure within that option buffer, this macro returns a pointer to the next t_opthdr structure, or a null pointer if this t_opthdr is the last t_opthdr in the option buffer.
T_OPT_FIRSTHDR(nbp) If the argument is a pointer to a netbuf structure associated with an option buffer, this macro returns a pointer to the first t_opthdr structure in the associated option buffer, or a null pointer if there is no option buffer associated with this netbuf or if the associated option buffer is too small to accommodate even the first aligned option header.
T_OPT_FIRSTHDR is useful for finding an appropriately aligned start of the option buffer. T_OPT_NEXTHDR is useful for moving to the start of the next appropriately aligned option in the option buffer. Note that OPT_NEXTHDR is also available for backward compatibility requirements. T_OPT_DATA is useful for finding the start of the data part in the option buffer where the contents of its values start on an appropriately aligned boundary.
If the transport user specifies several options on input, all options must address the same level.
If any option in the options buffer does not indicate the same level as the first option, or the level specified is unsupported, then the t_optmgmt() request will fail with TBADOPT. If the error is detected, some options have possibly been successfully negotiated. The transport user can check the current status by calling t_optmgmt() with the T_CURRENT flag set.
The flags field of req must specify one of the following actions:
T_NEGOTIATE This action enables the transport user to negotiate option values.
The user specifies the options of interest and their values in the buffer specified by req→opt.buf and req→opt.len. The negotiated option values are returned in the buffer pointed to by ret→opt.buf. The status field of each returned option is set to indicate the result of the negotiation. The value is T_SUCCESS if the proposed value was negotiated, T_PARTSUCCESS if a degraded value was negotiated, T_FAILURE if the negotiation failed (according to the negotiation rules), T_NOTSUPPORT if the transport provider does not support this option or the user illegally requested negotiation of a privileged option, and T_READONLY if modification of a read-only option was requested. If the status is T_SUCCESS, T_FAILURE, T_NOTSUPPORT or T_READONLY, the returned option value is the same as the one requested on input.
The overall result of the negotiation is returned in ret→flags.
This field contains the worst single result, whereby the rating is done according to the order T_NOTSUPPORT, T_READONLY, T_FAILURE, T_PARTSUCCESS, T_SUCCESS. The value T_NOTSUPPORT is the worst result and T_SUCCESS is the best.
For each level, the option T_ALLOPT can be requested on input. No value is given with this option; only the t_opthdr part is specified. This input requests to negotiate all supported options of this level to their default values. The result is returned option by option in ret→opt.buf. Note that depending on the state of the transport endpoint, not all requests to negotiate the default value may be successful.
T_CHECK This action enables the user to verify whether the options specified in req are supported by the transport provider. If an option is specified with no option value (it consists only of a t_opthdr structure), the option is returned with its status field set to T_SUCCESS if it is supported, T_NOTSUPPORT if it is not or needs additional user privileges, and T_READONLY if it is read-only (in the current XTI state). No option value is returned.
If an option is specified with an option value, the status field of the returned option has the same value, as if the user had tried to negotiate this value with T_NEGOTIATE. If the status is T_SUCCESS, T_FAILURE, T_NOTSUPPORT or T_READONLY, the returned option value is the same as the one requested on input.
The overall result of the option checks is returned in ret→flags. This field contains the worst single result of the option checks, whereby the rating is the same as for T_NEGOTIATE .
Note that no negotiation takes place. All currently effective option values remain unchanged.
T_DEFAULT This action enables the transport user to retrieve the default option values. The user specifies the options of interest in req→opt.buf. The option values are irrelevant and will be ignored; it is sufficient to specify the t_opthdr part of an option only. The default values are then returned in ret→opt.buf. In this case, ret→opt.maxlen must be given at least the value info→options before the call. See t_getinfo(3NSL) and t_open(3NSL).
T_CURRENT This action enables the transport user to retrieve the currently effective option values. The user specifies the options of interest in req→opt.buf. The option values are irrelevant and will be ignored; it is sufficient to specify the t_opthdr part of an option only. The currently effective values are then returned in ret→opt.buf.
The option T_ALLOPT can only be used with t_optmgmt() and the actions T_NEGOTIATE, T_DEFAULT and T_CURRENT. It can be used with any supported level and addresses all supported options of this level. The option has no value; it consists of a t_opthdr only. Since in a t_optmgmt() call only options of one level may be addressed, this option should not be requested together with other options. The function returns as soon as this option has been processed.
Options are independently processed in the order they appear in the input option buffer. If an option is multiply input, it depends on the implementation whether it is multiply output or whether it is returned only once.
Transport providers may not be able to provide an interface capable of supporting T_NEGOTIATE and/or T_CHECK functionalities. When this is the case, the error TNOTSUPPORT is returned.
The function t_optmgmt() may block under various circumstances and depending on the implementation. The function will block, for instance, if the protocol addressed by the call resides on a separate controller. It may also block due to flow control constraints; that is, if data sent previously across this transport endpoint has not yet been fully processed. If the function is interrupted by a signal, the option negotiations that have been done so far may remain valid. The behavior of the function is not changed if O_NONBLOCK is set.
Upon successful completion, a value of 0 is returned. Otherwise, a value of -1 is returned and t_errno is set to indicate an error.
Valid states: all, apart from T_UNINIT.
On failure, t_errno is set to one of the following:
TBADF The specified file descriptor does not refer to a transport endpoint.
TBADFLAG An invalid flag was specified.
TBADOPT The specified options were in an incorrect format or contained illegal information.
TBUFOVFLW The number of bytes allowed for an incoming argument (maxlen) is greater than 0 but not sufficient to store the value of that argument. The information to be returned in ret will be discarded.
TNOTSUPPORT This action is not supported by the transport provider.
TPROTO This error indicates that a communication problem has been detected between XTI and the transport provider for which there is no other suitable XTI error. The t_errno value TPROTO can be set by the XTI interface but not by the TLI interface.
The t_errno values that this routine can return under different circumstances than its XTI counterpart are TACCES and TBUFOVFLW.
TACCES can be returned to indicate that the user does not have permission to negotiate the specified options.
TBUFOVFLW can be returned even when the maxlen field of the corresponding buffer has been set to zero.
The format of the options in an opt buffer is dictated by the transport provider. Unlike the XTI interface, the TLI interface does not fix the buffer format. The macros T_OPT_DATA, T_OPT_NEXTHDR, and T_OPT_FIRSTHDR described for XTI are not available for use by TLI interfaces.
The semantic meaning of various action values for the flags field of req differs between the TLI and XTI interfaces. TLI interface users should heed the following descriptions of the actions:
T_NEGOTIATE This action enables the user to negotiate the values of the options specified in req with the transport provider. The provider will evaluate the requested options and negotiate the values, returning the negotiated values through ret.
T_CHECK This action enables the user to verify whether the options specified in req are supported by the transport provider.
T_DEFAULT This action enables a user to retrieve the default options supported by the transport provider into the opt field of ret. In req, the len field of opt must be zero and the buf field may be NULL.
If issued as part of the connectionless mode service, t_optmgmt() may block due to flow control constraints. The function will not complete until the transport provider has processed all previously sent data units.
See attributes(5) for descriptions of the following attributes:
close(2), poll(2), select(3C), t_accept(3NSL), t_alloc(3NSL), t_bind(3NSL), t_close(3NSL), t_connect(3NSL), t_getinfo(3NSL), t_listen(3NSL), t_open(3NSL), t_rcv(3NSL), t_rcvconnect(3NSL), t_rcvudata(3NSL), t_snddis(3NSL), attributes(5)
http://backdrift.org/man/SunOS-5.10/man3nsl/t_optmgmt.3nsl.html
Introduction to C++/CLI Generics
Before version 2.0, the .NET framework supported the Universal Type Container Model, in which objects are stored in a uniform manner. The universal type container in the Common Type System is Object and all types are derived either directly or indirectly from it. Version 2.0 of the framework supports a second model, called the Type Parameter Model, in which the binding of type information associated with an object is delayed, information that can vary from one invocation to another being parameterized. C++ supports this parameterized model for implementing templates. .NET framework 2.0 brings something similar, and it's called generics.
C++/CLI supports both templates and generics for defining parameterized reference classes, value classes, interface classes, functions, and delegates. This article is focused on C++/CLI generics, presented in comparison with templates. Unfamiliarity with templates should not be a problem for understanding generics.
Parameterized List
Type parameters
Template type parameters are introduced either with the class or typename keyword (there is no syntactical difference):
template <class T> ref class tFoo1 { }; template <typename T> ref class tFoo2 { };
The same applies to generics:
generic <class T> ref class gFoo1 { }; generic <typename T> ref class gFoo2 { };
The type placeholder (T in this example) is to be replaced with a user-specified type argument. Instantiation syntax is the same, both for templates and generics:
// templates tFoo1<int>^ tfoo1; tFoo2<String^>^ tfoo2; // generics gFoo1<int>^ gfoo1; gFoo2<String^>^ gfoo2;
What is different is the time of the instantiation. Templates are instantiated at compile time, when the compiler constructs a new type by inserting the provided type into the type placeholder (in this example, there will be two types constructed from tFoo1, and two types constructed from tFoo2—one using type int and one using type String). Generics instantiation is done by the CLR at runtime, which constructs the type-specific instances by modifying the general syntax, depending on whether the type argument is a value or a reference type.
Templates support expressions, template parameters, and default parameter values. These are not supported by generics.
Non-type parameters and default parameter value
Templates enable you to supply a default value for both type and non-type parameters. Here is a template stack that specifies a default value for the size of the stack:
template <class T, int size = 128> ref class tStack { array<T>^ m_stack; int m_pointer; public: tStack() { m_stack = gcnew array<T>(size); m_pointer = -1; } }; tStack<int, 10>^ tiStack = gcnew tStack<int, 10>;
Non-type parameters are not supported by generics. An attempt to use one will raise an error:
generic<class T, int size> ref class gStack { };
To achieve the same thing with generics, you must pass the size to the stack constructor:
ref struct StackException: public System::Exception { System::String^ message; public: StackException(System::String^ msg):message(msg) {} System::String^ GetMessage() {return message;} }; generic<class T> ref class gStack { array<T>^ m_stack; int m_pointer; int m_size; public: gStack(int size) { m_size = size; m_stack = gcnew array<T>(size); m_pointer = -1; } void Push(T elem) { if(m_pointer < m_size-1) { m_stack[++m_pointer] = elem; } else throw gcnew StackException(gcnew System::String("Full stack")); } T Pop() { if(m_pointer >=0 ) { T elem = m_stack[m_pointer]; --m_pointer; return elem; } else throw gcnew StackException(gcnew System::String("Empty stack")); } bool IsEmpty() { return (m_pointer == -1); } }; gStack<String^>^ sStack = gcnew gStack<String^>(10);
The result of both code samples is a stack with space for 10 elements.
The default value also can be specified for the type parameters with templates:
template <class T = int, int size = 128> ref class tStack { };
A similar attempt with generics will raise errors:
generic<class T = int> // this is not allowed ref class gStack { };
Template Parameters
Templates support template parameters:
template <class T> ref class tFoo1 { }; template <template <class T> class Foo, class Type> ref class tFoo2 { Foo<Type> foo; }; tFoo2< tFoo1, int >^ tfoo2;
An attempt to construct a similar parameterized type with generics will raise errors:
generic <generic <class T> class Foo, class Type> // not allowed ref class gFoo2 { };
Constraints
Generally speaking, templates let you create parameterized types supporting an unlimited number of types. This holds true only as long as the parameterized type merely stores and retrieves objects of the parameter type (as in the stack example above, where objects of any type can be added to and removed from the stack). When you need to manipulate those objects, for instance by calling their methods, you introduce implicit constraints that limit the number of types that can be used with the template.
Given the template class Foo below,
template <class T> public ref class Foo { T object; public: Foo() { object.Init(); } };
by calling Init() for an object, you limit the instantiation of this class only to types that have a method called Init() (which can be called from this context). An attempt to use int, for example, to instantiate it will be flagged as an error:
Foo<int> foo;
A constraint is introduced in this case too:
template <class T> ref class Foo { T::X object; };
Class Foo can be parameterized only with types that contain an inner type (accessible from this context) called X.
All these constraints are implicit constraints, and templates support no formal syntax to describe them. Templates are bound at compile time, when the correctness of a template instantiation must be known. Generics are bound at runtime, but a compile-time mechanism is needed to check the validity of the runtime binding and prevent the program from building if the instantiation type does not match the specified prerequisites.
Look at this code before going further:
interface class IShape { public: void Draw(); }; ref class Circle: public IShape { Point m_center; double m_radix; public: Circle(Point^ pt, double radix):m_center(*pt), m_radix(radix) {} virtual void Draw() { System::Console::WriteLine("Drawing a circle at {0} with radix {1}", m_center.ToString(), m_radix); } }; ref class Brick { public: void Draw() { System::Console::WriteLine("Drawing a brick"); } };
Here, you have two reference types, Circle and Brick. They both provide a method called Draw() that takes no parameter and return void, but Circle implements the IShape interface. Now, assume you need a container for all these shapes, to which you can add elements and that provides a method that processes all contained shapes by drawing them.
generic<class T> ref class Aggregator { System::Collections::Generic::List<T>^ m_list; public: Aggregator() { m_list = gcnew System::Collections::Generic::List<T>; } void Add(T elem) { m_list->Add(elem); } void ProcessAll() { for each(T elem in m_list) { elem->Draw(); } } }; Aggregator<IShape^>^ agr = gcnew Aggregator<IShape^>; agr->Add(gcnew Circle(gcnew Point(0,0), 1)); agr->Add(gcnew Circle(gcnew Point(1,1), 2)); agr->Add(gcnew Circle(gcnew Point(2,2), 3)); agr->ProcessAll(); delete agr;
Compiling this code, there are several errors raised, saying that 'Draw' : is not a member of 'System::Object'. What happens is that, by default, with the lack of explicit constraint specifications, the compiler assumes T is of type System::Object, which doesn't have a method called Draw().
To address this problem, you introduce an explicit constraint, specifying that T can only be a type that implements the IShape interface. Constraints are introduced with the non-reserved word "where".
generic<class T> where T: IShape ref class Aggregator { };
Now, you can add shapes to the container, as long as they implement IShape. A container for bricks cannot be created as long as Brick does not implement the IShape interface, because Brick does not meet the constraint:
Aggregator<Brick^>^ agr1 = gcnew Aggregator<Brick^>;
If we change Brick to:
ref class Brick: public IShape { public: virtual void Draw() { System::Console::WriteLine("Drawing a brick"); } };
we can have a brick-only container,
Aggregator<Brick^>^ agr = gcnew Aggregator<Brick^>; agr->Add(gcnew Brick()); agr->Add(gcnew Brick()); agr->Add(gcnew Brick()); agr->ProcessAll();
or a container with both circles and bricks:
Aggregator<IShape^>^ agr = gcnew Aggregator<IShape^>; agr->Add(gcnew Circle(gcnew Point(0,0), 1)); agr->Add(gcnew Brick()); agr->Add(gcnew Circle(gcnew Point(1,1), 2)); agr->Add(gcnew Brick()); agr->ProcessAll();
The following applies to generic constraints:
For a type to be a constraint, it must be a managed, unsealed reference type or an interface. The classes System::Array, System::Delegate, System::Enum, and System::ValueType cannot appear in the constraint list.
public ref class A { }; generic <class T1, class T2> where T1: A // okay, reference type where T2: int // error, int is a value type public ref class Foo { };
You can specify at most one non-interface reference type in each constraint list (because multiple inheritance of classes is not supported in .NET); any other constraints must be interfaces.
public ref struct A { }; public ref class B { }; generic <class T1, class T2> where T1: A, B // error where T2: A, IComparable // ok public ref class Foo { };
The constraint clause can contain only one entry per each parameter.
generic <class T> where T: IShape where T: IComparable // error C3233: 'T': generic type parameter already constrained ref class Foo { };
Multiple constraints for a parameter type are separated by a comma.
generic <class T> where T: IShape, IComparable ref class Foo { };
Constraint type must be at least as accessible as the generic type of function.
ref struct A {}; generic <class T> where T: A // wrong, A is private public ref class Foo {};
Entries can follow any order, no matter the order of type parameters:
generic <class T1, class T2, class T3> where T2: IComparable where T1: ISerializable where T3: ICloneable public ref class Foo { };
Generic types can be used for constraint types too:
generic <class T> public ref class A {}; generic <class T1, class T2> where T1: A<T2> where T2: A<T1> public ref class Foo {};
Conclusions
This article is a brief introduction to .NET generics with C++/CLI. Further reading is recommended to achieve a comprehensive view of generics (issues such as template specialization and invoking generic functions were not addressed, and may follow in an upcoming article). Though at first glance templates and generics look the same, they actually are not. Templates are instantiated at compile time, generics at runtime. Templates support non-type parameters, default values, and template parameters, while generics do not. And although templates have no mechanism to explicitly specify constraints, generics provide a syntax for static verification, because instantiation is done at runtime, when it is too late to discover that a type cannot be bound.
http://www.codeguru.com/cpp/cpp/cpp_managed/general/article.php/c10881/Introduction-to-CCLI-Generics.htm
See also: IRC log
<DanC> ah... httpRange-14 is not on the agenda... I kinda expected it to continue from last week... ah... two weeks... 29 MAr
<DanC> "Next week's scribe will be Norm."
<Norm> Crap. Sorry. I'm on another call and can't get away for a few minutes
<noah> Norm: I will scribe for a few minutes until you show up.
<noah> Agenda at is approved
<noah> Telcon on 29 March
<noah> Possible regrets from Noah next 3 weeks
<DanC> 15 March 2005 Tag Teleconference
<noah> VQ: Here: everyone except Roy, Tim and Norm
<noah> scribe: Noah
<scribe> scribe: Noah Mendelsohn
<scribe> scribenick: noah
Henry will scribe on 29 March
RESOLUTION: minutes of 15 March at are approved
VQ: shall we move the minutes in date space?
DC: No, leave them where they are.
HT: That means they are in an attachment in an email archive, which makes searching hard. I needed that today. Is it not policy to have them in date space?
DC: They need to be linked from the tag home page
VQ: Right, I've been doing that.
HT: Well, it's easier to grep if you mirror date space, but I can write a better tool.
ED: I somewhat agree, I'd prefer to see them all in common place in date space, per year.
DC: You don't have to go through
list archives, they're all one click away.
... In any case, in general, I'd like them to be in a final resting place before we approve them.
<scribe> ACTION: Henry with help from Ed to draft proposal on where in date space to put minutes [recorded in]
<DanC> (re filing the minutes in CVS/datespace, all of us can send mail, but only some of us can do CVS, and when it goes bad, it tends to fall on the team contact, i.e. me)
NM: Scribe's question: should we unapprove the minutes of 15 March until they land in whatever is the best place?
Several: No, they're approved, leave them.
VQ: Amy van der Heil reports MIT can host
DC: 3 real days?
VQ: maybe last day will be short, but otherwise yes, full 3 days.
NM: Remember that TimBL will leave early on 15 June due to family birthday
<DanC> TAG action items
<DanC> (very handy so far, thanks, ht)
VQ: When you make flight plans, please let me know so we can schedule wrapup on last day
+1 Noah
See pending actions at
DO: Asked some questions about which are assigned to him.
NM: Yes, ACTION: Henry and David to draft initial finding on URNsAndRegistries-50 [recorded in]
DO: Yes working on it
... I also worked on terminology for extensibility and versioning.
... sent to Norm and Noah for early review
NM: Don't have it yet,
HT: I have finished ACTION: HT to review " Storing Data in Documents ..." [recorded in]
DC: right, and followup email is largely supporive
<DanC> action list Date: 2005/03/21 11:50:27
VQ: will update action list later today or tomorrow
<DanC> (re how long done action stay... a week or two, please)
Norm, Oasis announcement
VQ: Hmm, Norm's not here, let's skip it until he shows up.
We received a request at:
VQ: I would like to have some discussion on what to do with this issue.
Norm joins the call.
DO: A couple of comments: 1) This issue could use some authoritative comments
<Norm> DO: expresses concern that TAG is picking up a lot of issues but we aren't closing them very fast
<Vincent> no echo on my side
<Norm> DanC: I think you write an RFC and get consensus from the community
<Norm> NM: Some groups use a new header, some use a new method (WebDAV). These have different characteristics in the face of things like "must understand"
<Norm> NM: I think he's asking for good practice, clarity on who should do what and when
<DanC> (hmm... I still don't see an issue any smaller than "please predict the future for me")
<Norm> VQ: Agrees, it's a good practice request. Not clear who's supposed to do this, us or IETF, for example.
<Norm> VQ: Shouldn't we do something?
<Norm> DanC: No, we're not obliged to take on an issue or formally respond to every request
<Norm> DO: If the TAG is going to decline, we should at least say we decline.
<Norm> NM: +1
<Ed> +1
<Norm> DO: I'd prefer if we could provide a bit of rationale. I don't think we get an enormous number of requests such that we can't reply.
<DanC> (yes, it's polite to explicitly decline. but if you try to formalize that as a policy, you'll quickly get into denial-of-service, and "have we already declined that request?" stuff)
<Norm> NM: summarizes, asks if we're ready to decide
<Norm> NM: I'd be interested in the opinions of timbl and royf.
<Norm> NM: Two options? 1. reject or 2. pick up the issue and prioritize it later
<Norm> DanC: Putting it on the issues list is a commtiment to resolve it
<Norm> DO: Some issues that we took up were reduced in priority before the first webarch but those are being reexamined
<Norm> DO: Proposes that we defer talking about this issue until timbl and royf are present
<Norm> VQ: I'll draw their attention to the issue before next time
<Norm> VQ: Return to XRI.
<scribe> scribe: Norm Walsh
<DanC> "The public review starts today, 15 March 2005 and ends 14 April 2005."
<scribe> scribenick: norm
ht: Included it in new issue 50. Reinventions of URNs and registries.
NDW: That satisfies my expectations of what we would do with this
<Zakim> DanC, you wanted to express some patent concerns about reading it at all
DanC: XRIs have crossed my desk a
couple of times, but the package seems to be labeled "patent
incumbered" so I'm not inclined to read it at all
... their deadline is 14 Apr. HT, are you inclined to figure something out by 14 Apr?
ht: That seems unlikely
<DanC> (we had pretty close to a finding on .mobi; we had a web page with TAG endorsement)
ht: At the very least, should we say "uh, guys, would you like to talk to us about this before moving ahead?"
Ed: I'd be happy to review it and try to highlight some of the major issues
DanC suggests mailing comments to www-tag or tag or straight to them. Any of those is fine by me.
Ed agrees to read them and post some notes about it
ht suggests taking a quick glance at urnsAndRegistries-50
VQ: Does that address your concerns?
NDW: Marvelously.
<scribe> ACTION: Ed to review XRI and provide comments [recorded in]
DanC: I believe we closed it in Basel. There was some kickback but eventually it did stick.
<DanC>
DanC: I believe the issues list should be updated to record our decision to close the issue
VQ: I'll do that.
... Any other information about this issue?
ht: My memory is that the public believes that the TAG said you should use XLink, HTML WG pushed back, TAG said you should consider it, HTML WG went quiet.
My memory is that the HTML WG said even considering it was too strong, but we stood our ground.
<DanC> (well, yes, mark it closed, but note some outstanding dissent)
VQ: Any other business?
ht: I would be happy if we brainstorm on URNsAndRegistries-50
DanC: Countless discussions go
like this: I'll find some URN scheme or the equivalent, e.g.
doi: and urn:doi:
... They've gone so far as to deploy plugins for doi:.
... what the plugin does is lookup...
... So they own and operate a mapping from DOI to HTTP
... Ask these folks why not just use http? Why a separate scheme? One part of it is financial incentive for being at the center of one of these namespaces
... The other is that they don't trust DNS and HTTP.
... Engineers can't predict the future. I can't predict that DNS and HTTP will last forever.
... So they really do want their stuff to be looked up and they can't be talked out of it.
NM: They've got a mapping, the insurance they're getting is that if someone steals their DNS name, they can redirect to another.
DanC: Clearly they're creating
aliases here, which we've discouraged.
... The other folks don't want their stuff to be looked up.
... e.g., urn:googlesearch:, they don't do anything about grounding that in reality and they don't feel embarrased about it.
... But for some reason they don't want to promise that an address will persist for a long time.
... Consider urn:ietf:...
... How do you manage it? Well, we keep a website with all the names in it.
... Duh!
... So they have no mapping, but to actually manage the namespace...they use a webserver!
... I promised to renew that draft if someone would stand by me for the incoming barrage, but there have been no offers
ht: Two things I'd add:
apparently the IETF are now running a server that will lookup
those URNs.
... I haven't pursued it, but someone asserted it exists.
<DanC> A Registry of Assignments using Ubiquitous Technologies and Careful Policies
ht: The other example, the ITU
are looking at doing this (as is OASIS, i.e. XRI)
... Both of these guys say they'll be running servers, in the OASIS case it'll be a SAML server of some kind
... The part of the puzzle that I don't understand how to respond to is, the argument that "we need something abstract" something not as concrete as URLs
... We need something independent of specific locations.
... That sounds like broken record stuff to me, but I'm hoping to hear "oh, they don't understand such and such..."
DanC: I can replay a conversation where I convinced one person.
... The name of the XML spec was a subject of conversation.
... Do you feel bad that there's no URN for it? Answer: yes.
... Why? Because we want it to survive
... Redundancy is the key, putting something in a newspaper gets lots of copies.
... So the copy of the XML spec is all the web caches around the world provides that.
... So he says "gee, then maybe we should never have done that URN stuff"
... The way you make things valuable is by getting agreement that things are shared. So you can use a link instead of sending a 15 page spec.
... The way the binding between the names and what they mean is established is through protocols of some sort. HTTP is one example.
... it makes sense to make up new URI schemes for totally new communication patterns, but if it looks like DNS and HTTP, *use* DNS and HTTP.
<DanC> ( is unavailable at the moment, but records some relevant experience of Norm's)
ADJOURNED
What's the incantation to get rrsagent to make the log public?
<DanC> norm, do you want it to draft minutes?
Sure.
I'll take a look at cleaning those up as soon as I get a couple of other things off my plate
<DanC> not bad... noah knows how to drive it. ;-)
<DanC> hmm... it doesn't recognize Norm as scribe too...
<DanC> ScribeNick: Norm
This is scribe.perl Revision: 1.117 of Date: 2005/03/10 16:25:39
Guessing input format: RRSAgent_Text_Format (score 1.00)
Succeeded: s/[10]//
Found Scribe: Noah
Inferring ScribeNick: noah
Found Scribe: Noah Mendelsohn
Found ScribeNick: noah
Found Scribe: Norm Walsh
Found ScribeNick: norm
Found ScribeNick: Norm
WARNING: No scribe lines found matching ScribeNick pattern: <Norm> ...
Scribes: Noah, Noah Mendelsohn, Norm Walsh
ScribeNicks: noah, norm
Default Present: noah, DanC, [INRIA], Ht, Dave_Orchard, EdRice, Norm
Present: noah DanC [INRIA] Ht Dave_Orchard EdRice Norm
Regrets: TimBL RoyF
WARNING: No meeting chair found! You should specify the meeting chair like this: <dbooth> Chair: dbooth
Got date from IRC log name: 22 Mar 2005
People with action items: ed henry
WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.
[End of scribe.perl diagnostic output]
On Fri, Feb 18, 2011 at 02:08:04PM -0800, Robert Bradshaw wrote: > On Thu, Feb 17, 2011 at 8:38 PM, W. Trevor King <wking at drexel.edu> wrote: > > On Thu, Feb 17, 2011 at 3:53 PM, Robert Bradshaw wrote: > >> On Thu, Feb 17, 2011 at 3:12 PM, W. Trevor King wrote: > >>> On Thu, Feb 17, 2011 at 01:25:10PM -0800, Robert Bradshaw wrote: > >>>> On Thu, Feb 17, 2011 at 5:29 AM, W. Trevor King wrote: > >>>> > On Wed, Feb 16, 2011 at 03:55:19PM -0800, Robert Bradshaw wrote: > >>>> >> On Wed, Feb 16, 2011 at 8:17 AM, W. Trevor King wrote: > >>>> >> > What I'm missing is a way to bind the ModuleScope namespace to a name > >>>> >> > in expose.pyx so that commands like `dir(mylib)` and `getattr(mylib, > >>>> >> > name)` will work in expose.pyx. > >>>> >> > >>>> >> You have also hit into the thorny issue that .pxd files are used for > >>>> >> many things. They may be pure C library declarations with no Python > >>>> >> module backing, they may be declarations of (externally implemented) > >>>> >> Python modules (such as numpy.pxd), or they may be declarations for > >>>> >> Cython-implemented modules. > >>>> >> > >>>> >> Here's another idea, what if extern blocks could contain cpdef > >>>> >> declarations, which would automatically generate a Python-level > >>>> >> wrappers for the declared members (if possible, otherwise an error)? > >>>> > > >>>> > Ah, this sounds good! Of the three .pxd roles you list above, > >>>> > external Python modules (e.g. numpy) and Cython-implemented modules > >>>> > (e.g. matched .pxd/.pyx) both already have a presence in Python-space. > >>>> > What's missing is a way to give (where possible) declarations of > >>>> > external C libraries a Python presence. cpdef fills this hole nicely, > >>>> > since its whole purpose is to expose Python interfaces to > >>>> > C-based elements. 
> >>>> > >>>> In the case of external Python modules, I'm not so sure we want to > >>>> monkey-patch our stuff in > >>> > >>> I don't think any of the changes we are suggesting would require > >>> changes to existing code, so .pxd-s with external implementations > >>> wouldn't be affected unless they brough the changes upon themselves. > >> > >> Say, in numpy.pxd, I have > >> > >> cdef extern from "...": > >> cpdef struct obscure_internal_struct: > >> ... > >> > >> Do we add an "obscure_internal_struct" onto the (global) numpy module? > >> What if it conflicts with a (runtime) name? This is the issue I'm > >> bringing up. > > > > Defining a cpdef *and* a non-matching external implementation should > > raise a compile-time error. I agree that there is a useful > > distinction between external-C-library and external-Python-module .pxd > > wrappers. Perhaps your matching blank .py or .pyx file could serve as > > a marker that the .pxd file should be inflated into its own full > > fledged python module. I'm not even sure how you would go about > > adding attributes to the numpy module. When/how would the > > Cython-created attributes get added? > > Yes, this is exactly the issue. Ah, I'm retracting my agreement on the external-C-library and external-Python-module .pxd wrappers. There is no difference in how their .pxd files should be treated, and I now agree that .pxd files should not generate .so modules unless they have a paried .py/.pyx file. > >." This seems to be broken in Cython at the module level, since I can rebind a cdef-ed class but not a cpdef-ed method: $?) > >>> > >>> Compilation is an issue. I think that .pxd files should be able to be > >>> cythoned directly, since then they Cython can build any wrappers they > >>> request. If the file has a matching .pyx file, cythoning either one > >>> should compile both together, since they'll produce a single Python > >>> .so module. > >> > >> ... 
> > > > Under the mantra "explicit is better than implicit", we could have > > users add something like > > > > cdef module "modname" > > > > to any .pxd files that should be inflated into Python modules. .pxd > > files without such a tag would receive the current treatment, error on > > any cpdef, etc. The drawback of this approach is that it makes Cython > > more complicated, but if both behaviors are reasonable, there's > > probably no getting around that. > > The other drawback is that it subverts the usual filename <-> module > name convention that one usually expects. I've been convinced that the `cimport .pyx file` route is a better way to go.`? On Sat, Feb 19, 2011 at 10:24:05AM +0100, Stefan Behnel wrote: > Robert Bradshaw, 18.02.2011 23:08: > > On Thu, Feb 17, 2011 at 8:38 PM, W. Trevor King wrote: > >> On Thu, Feb 17, 2011 at 3:53 PM, Robert Bradshaw wrote: > >>> On Thu, Feb 17, 2011 at 3:12 PM, W. Trevor King wrote: > >>>>>> A side effect of this cpdef change would be that now even bare .pxd > >>>>>> files (no matching .pyx) would have a Python presence, > >>>>> > >>>>> Where would it live? Would we just create this module (in essence, > >>>>> acting as if there was an empty .pyx file sitting there as well)? On > >>>>> this note, it may be worth pursuing the idea of a "cython helper" > >>>>> module where common code and objects could live. > >>>> > >>>> I'm not sure exactly what you mean by "cython helper", but this sounds > >>>> like my 'bare .pyx can create a Python .so module idea above. > >>> > >>> I'm thinking of a place to put, e.g. the generator and bind-able > >>> function classes, which are now re-implemented in every module that > >>> uses them. I think there will be more cases like this in the future > >>> rather than less. C-level code could be #included and linked from > >>> "global" stores as well. However, that's somewhat tangential. 
> > If you generate more than one file from a .pyx, including files that are > shared between compiler runs (or even readily built as .so files), you'd > quickly end up in dependency hell. I disagree here, but I like your cimportable .pyx better, so it doesn't matter ;). > >>>>>> Unions don't really have a Python parallel, > >>>>> > >>>>> They can be a cdef class wrapping the union type. > >>>> > >>>> But I would think coercion would be difficult. Unions are usually (in > >>>> my limited experience) for "don't worry about the type, just make sure > >>>> it fits in X bytes". How would union->Python conversion work? > >>> > >>> There would be a wrapping type, e.g. > >>> > >>> cdef class MyUnion: > >>> cdef union_type value > > Wouldn't that have to be a pointer to the real thing instead? Do you mean `cdef union_type *value`? Why would the above version not work? The union type has a well defined size and a number of well defined interpretations, so I don't see the problem. > >>> with a bunch of setters/getters for the values, just like there are > >>> for structs. (In fact the same code would handle structs and unions). > >>> > >>> This is getting into the wrapper-generator territory, but I'm starting > >>> to think for simple things that might be worth it. > >> > >> I think that if Cython will automatically generate a wrapper for > >> > >> cdef public int x > >> > >> it should generate a wrapper for > >> > >> cdef struct X: cdef public int x > > > > Or > > > >? If safety with a new feature is a concern, a warning like "EXPERIMENTAL FEATURE" in the associated docs and compiler output should be sufficient. > >> There really aren't that metatypes in C, so it doesn't seem like a > >> slippery slope to me. Maybe I'm just missing something... > >> > >>>> Ok, I think we're pretty much agreed ;). I think that the next step > >>>> is to start working on implementations of: > >>>> > >>>> * Stand alone .pxd -> Python module > >>> > >>> I'm not sure we're agreed on this one. 
> > Same from here. To me, that doesn't make much sense for code that wraps a > library. And if it doesn't wrap a library, there isn't much benefit in > writing a stand-alone .pxd in the first place. A .pyx is much more explicit > and obvious in this case. Especially having some .pxd files that generate > .so files and others that don't will make this very ugly. > > I'd prefer adding support for cimporting from .pyx files instead, > potentially with an automated caching generation of corresponding .pxd > files (maybe as ".pxdg" files to make them easier to handle for users). > However, cyclic dependencies would be tricky to handle automatically then.. > >>>> * Extending class cdef/cdpef/public/readonly handling to cover enums, > >>>> stucts, and possibly unions. > >>> > >>> This seems like the best first step. > > +1 Working on it... > >>>> * I don't know how to handle things like dummy enums (perhaps by > >>>> requiring all cdef-ed enums to be named). > >>> > >>> All enums in C are named. > >> > >> But my Cython declaration (exposing a C `#define CONST_A 1`): > >> > >> cdef extern from 'mylib.h': > >> enum: CONST_A > >> > >> is not a named enum. > > > > Ah, yes. Maybe we require a name (that would only be used in Python space). > > ... require it for cpdef enums, you mean? > > OTOH, the "enum: NAME" scheme is ugly by itself. There should be a way to > declare external constants correctly. After all, we loose all type > information that way. I just saw that in math.pxd things like "M_PI" are > declared as plain "enum" instead of "const double" or something. The type > inferencer can't do anything with that. It might even draw the completely > wrong conclusions. Something like: [cdef|cpdef] extern [public|readonly] <type> <name> For example: cdef extern readonly double M_PI That would be nice, since the C compiler would (I think) raise an error when you try to use an invalid <type> for macro value. : <>
On 17 January 2012 at 09:29, Eric Dumazet <[email protected]> wrote:
> On Tuesday 17 January 2012 at 09:04 +0100, Štefan Gula wrote:
>> On 17 January 2012 at 05:47, Eric Dumazet <[email protected]> wrote:
>>>
>>> 2) You call ipgre_tap_bridge_fini() from ipgre_exit_net() and
>>> ipgre_init_net(), thats completely bogus if CONFIG_NET_NS=y
>>>
>>> Just remove the struct kmem_cache *ipgre_tap_bridge_cache
>>> and use instead kmalloc(sizeof(...))/kfree(ptr) instead.
>>>
>> As this is completely the same part of code from net/bridge/br_fdb.c,
>> can you give me a hint about how to change that, as I believe it should be
>> changed also there?
>
> Please dont copy code you dont understand :(
>
> bridge code is ok, but not yours, since you destroy the kmem_cache when
> a net namespace exits.
>
> Either you fix your code, either you change your memory allocations to
> mere kmalloc()/kfree() calls and dont care of a private kmem_cache you
> have to create and destroy.
>
> Since you ask a SLAB_HWCACHE_ALIGN, the SLUB allocator will anyway merge
> your kmem_cache with the standard one (kmalloc-64 I guess)
>

ok maybe I am getting it wrong, but I am a little bit stuck here. I recheck the original bridge code. The difference I recognize is that in the bridge code the functions:

br_fdb_init() and br_fdb_fini()

are called from the module init and module exit functions:

br_init and br_deinit

In my code they are called from the functions:

ipgre_init_net and ipgre_exit_net

instead of:

ipgre_init and ipgre_fini

To be honest I am not familiar enough with the kernel structure to see the difference at first sight. But I think that with your help it can be done easily. The main idea was to create a hash-table that is used to determine the destination IPv4 address (part of the entry structure). That hash-table should be different for each gretap interface - I think that's the reason why I put those init and fini calls inside ipgre_init_net and ipgre_exit_net. Am I right that the placement of these calls is correct or not? If not, where should those calls be placed?

On the other hand I have no idea how to substitute those two functions with the code that you are suggesting, kmalloc()/kfree(). I would be glad if you can help me here by providing an example of how to substitute those two functions with kmalloc/kfree for future usage (I am more a reverse-engineer learner type of person than a manual-reading one)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
This is your resource to discuss support topics with your peers, and learn from each other.
04-23-2010 06:31 AM
I've seen it mentioned a few times and I'd REALLY need to know this.
How can I place my resources in a different, updateable cod? (I need to read some text-only configuration files that control the application's features.)
I created an "appNameres.cod" file (contains config text files), and it installs to the phone.
It also appears in the "modules" listing on the device (listed as "appNameres").
However, when I use
String filename = "cod://appNameres/appconfig.cfg";
InputStream is = getClass().getResourceAsStream(filename);
It doesn't find the file I need.
Can anyone help me?
Solved! Go to Solution.
04-23-2010 07:18 AM
I have a simple Class in my separate resources COD that I pass what resource name I want and it returns the Stream to it. It might not be the way that you want it but it reduces the problems of making it work.
04-23-2010 07:37 AM
Interesting idea, but could I ask you to provide with a bit more info?
How do I call the class?
Say I have
class ResLoader {
    public static InputStream loadResource(String filename) {
        InputStream is = ResLoader.class.getResourceAsStream(filename);
        return is;
    }
}
this class.
HOW exactly do I use it (sorry for the caps but I am kind of desperate here).
04-23-2010 08:29 AM
Something like:
class Res {
    public static InputStream getResourceStream(String file) {
        return Res.class.getResourceAsStream(file);
    }
}
Just make sure you use a class in that COD file so it looks for resources in that COD file.
04-23-2010 09:30 AM - edited 04-23-2010 10:15 AM
Okay, I'm being borderline stupid here.
I added
import rimbbresloader;
...
InputStream is = rimbbresloader.loadResourceFile("appconfig.cfg");
...
And when I hit build on the main project, it dies with the error:
[javac] C:\Work\proj\work\fctmain.java:1973: cannot find symbol
[javac] symbol : variable rimbbresloader
[javac] location: class fctmain
[javac] InputStream is = rimbbresloader.loadResourceFile("appconfig.cfg");
[javac] ^
[javac] 1 error
Also I (kinda have to) use eclipse. How could I import the blackberry JDE-built library into eclipse?
PS: Before getting this job I only used NetBeans (a bit), Code::Blocks and Visual Studio, oh and notepad. Eclipse and ant are pretty much arcane magic to me.
04-23-2010 02:22 PM
I've finally managed to build the application, loaded the library resource *.cod to the simulator, loaded the app *.cod to the simulator but I get the following error : "Can't find entry point".
The application does not run.
04-23-2010 03:06 PM
@rcmaniac2: Okay! I've managed to make it run. And it loads the resources too.
Now, I have another issue, the library requires for permissions to be set to "Allow".
Otherwise it gives me "Interprocess Communication" error and quits.
Any points on that?
04-23-2010 07:15 PM
Do you have the resources COD compiled as a Library?
04-24-2010 01:10 AM
I think you're getting that error because accessing a COD from another COD requires the COD's to be signed, or have the permissions set to allow the same in your settings.
04-24-2010 07:37 AM
I have a COD that contains resources and a COD that contains code. I never have to sign the resource COD, and the COD that contains code only needed to be signed when I added functions that required signing, so I don't think you need to sign it in order to get it to work.
MemberInfo.MemberType Property
When overridden in a derived class, gets a MemberTypes value indicating the type of the member — method, constructor, event, and so on.
Assembly: mscorlib (in mscorlib.dll)
Property Value
Type: System.Reflection.MemberTypes
A MemberTypes value indicating the type of member.
Implements _MemberInfo.MemberType.
The following example displays the member name and type of a specified class.
using System;
using System.Reflection;

class Mymemberinfo
{
    public static int Main()
    {
        Console.WriteLine("\nReflection.MemberInfo");
        // Get the Type and MemberInfo.
        Type MyType = Type.GetType("System.Reflection.PropertyInfo");
        MemberInfo[] Mymemberinfoarray = MyType.GetMembers();
        // Get the MemberType method and display the elements.
        Console.Write("\nThere are {0} members in ", Mymemberinfoarray.GetLength(0));
        Console.Write("{0}.", MyType.FullName);
        for (int counter = 0; counter < Mymemberinfoarray.Length; counter++)
        {
            Console.Write("\n" + counter + ". " + Mymemberinfoarray[counter].Name
                + " Member type - " + Mymemberinfoarray[counter].MemberType.ToString());
        }
        return 0;
    }
}
/*
 * 21, 2004
 */
package org.mr.core.cmc;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import javax.management.*;

/**
 * Returns the memory status of the system.
 *
 * @author lital kasif
 */
public class GetMemStatJMX extends StandardMBean implements GetMemStatJMXMBean {
    public Log log;

    public GetMemStatJMX() throws NotCompliantMBeanException {
        super(GetMemStatJMXMBean.class);
        log = LogFactory.getLog("GetMemStatJMX");
    }

    /**
     * @return a String array with the mantaray's stats
     */
    public String[] getStatus() {
        long free = Runtime.getRuntime().freeMemory() / 1000;
        long memoryInVM = Runtime.getRuntime().totalMemory() / 1000;
        long maxMem = Runtime.getRuntime().maxMemory() / 1000;
        long used = memoryInVM - free;

        String[] value = {"Used memory = " + used + "K bytes",
                "Free memory in system " + free + "K bytes",
                "Total of memory in VM " + memoryInVM + "K bytes",
                "Max memory available for the VM " + maxMem + "K bytes"};
        return value;
    }

    protected String getDescription(MBeanInfo i_mBeanInfo) {
        return "returns the memory status of the system";
    }

    protected String getDescription(MBeanAttributeInfo i_mBeanAttributeInfo) {
        return "returns the memory status of the system";
    }
}
We use the following configuration file to control Log4j. It prints logging messages ending with a newline character.
# Define the root logger with appender file
log = c:
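The properties snippet above is cut off. For reference, a complete file in the same style might look like this; the appender name, log path, and pattern here are assumptions, not the tutorial's original file:

```properties
# Define the root logger with appender FILE
log = c:/log
log4j.rootLogger = DEBUG, FILE

# Define the FILE appender writing to ${log}/log.out
log4j.appender.FILE = org.apache.log4j.FileAppender
log4j.appender.FILE.File = ${log}/log.out

# %m prints the message and %n ends it with a newline character
log4j.appender.FILE.layout = org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.conversionPattern = %m%n
```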
The following Java class shows how to use the Log4j logging library.
import org.apache.log4j.Logger;

import java.io.*;
import java.sql.SQLException;
import java.util.*;

public class Main {
    // Get a Logger instance named after this class
    static Logger log = Logger.getLogger(Main.class.getName());

    public static void main(String[] args) throws IOException, SQLException {
        log.debug("Hello this is a debug message");
        log.info("Hello this is an info message");
    }
}
All the libraries should be available in CLASSPATH and the log4j.properties file should be available in PATH.
Introduction: Calculator Coded With Python
After learning a bit about the programming language Python, I thought that it would be neat to try to replicate some of the math that the Python shell does with a GUI. While I clearly did not match the shell's performance, my calculator adds a few helpful shortcuts such as the numbers "pi" and "e" as well as the trig functions sin(), cos() and tan().
You will need to keep both files in the zip file in the same folder for this to work. The code has a number of comments, but if you have any questions as to how anything works please let me know and I will be happy to show you!
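The attached files aren't reproduced here, but the math core of such a calculator can be sketched without the GUI. This is an illustrative sketch (the names ALLOWED and calculate are mine, not the Instructable's code): it evaluates an expression string while exposing only the shortcuts mentioned above - pi, e, sin(), cos() and tan():

```python
import math

# The only names the calculator exposes to expressions: the shortcuts
# mentioned above (pi, e) and the three trig functions.
ALLOWED = {
    "pi": math.pi,
    "e": math.e,
    "sin": math.sin,
    "cos": math.cos,
    "tan": math.tan,
}

def calculate(expression):
    """Evaluate a calculator expression using only the allowed names."""
    # An empty __builtins__ keeps eval() from reaching arbitrary functions.
    return eval(expression, {"__builtins__": {}}, ALLOWED)
```

A Tkinter button handler would then just call calculate() on the entry widget's text and display the result.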
Recommendations
We have a be nice policy.
Please be positive and constructive.
5 Comments
I have edited the code to fix the print issue but I keep on getting this error: Traceback (most recent call last):
File "/Downloads/Revised Calculator/Gui.py", line 67, in <module>
import tkFont
ModuleNotFoundError: No module named 'tkFont'
Here are the edited files.
Please edit your code. All of your print commands are missing parentheses.
amazing
what did you make this for? school?
I did, it was a short project.
I'm building a USB box with 2 buttons and a SparkFun Pro Micro. One button sends SPACE bar to the computer, and the other button sends ESC.
I copied the code from the SparkFun website, but the code has only one key written into it...
Can anyone help write the code to include the 2 buttons? Maybe we could use pin 10 for the ESC key.
The code I'm using is:
#include <Keyboard.h>
int buttonPin = 9; // Set a button to any pin
void setup()
{
pinMode(buttonPin, INPUT); // Set the button as an input
digitalWrite(buttonPin, HIGH); // Pull the button high
}
void loop()
{
if (digitalRead(buttonPin) == 0) // if the button goes low
{
Keyboard.write(' '); // send a ' ' to the computer via Keyboard HID
delay(1000); // delay so there aren't a kajillion z's
}
}
As you already noticed, I have no clue how this language works
Any help is highly appreciated!
Thanks!
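For reference, one way the posted sketch could be extended to two buttons, with the ESC button on pin 10 as suggested above. This is an illustrative sketch, not tested on the actual hardware: KEY_ESC comes from the Arduino Keyboard library, INPUT_PULLUP replaces the manual pull-up in the original, and the one-second delay is kept as a crude debounce:

```cpp
#include <Keyboard.h>

const int spacePin = 9;   // button wired to pin 9 -> sends SPACE
const int escPin   = 10;  // button wired to pin 10 -> sends ESC

void setup()
{
  pinMode(spacePin, INPUT_PULLUP);  // enable the internal pull-up resistors
  pinMode(escPin, INPUT_PULLUP);
  Keyboard.begin();
}

void loop()
{
  if (digitalRead(spacePin) == LOW) // button pressed (pulled to ground)
  {
    Keyboard.write(' ');            // send SPACE
    delay(1000);                    // crude debounce
  }
  if (digitalRead(escPin) == LOW)
  {
    Keyboard.write(KEY_ESC);        // send ESC
    delay(1000);
  }
}
```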
How to hide lines in excel programatically
By
kcvinu, in AutoIt General Help and Support
Recommended Posts
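The captured thread doesn't include the actual answer, but with AutoIt's Excel UDF hiding rows is normally done through the COM Range object exposed by the workbook. A minimal sketch; the workbook path and the row range 5:10 are hypothetical:

```autoit
#include <Excel.au3>

Local $oExcel = _Excel_Open()
Local $oBook = _Excel_BookOpen($oExcel, "C:\temp\test.xlsx") ; hypothetical path
; Hide rows 5 through 10 on the active sheet (set Hidden = False to unhide)
$oBook.ActiveSheet.Rows("5:10").Hidden = True
```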
Similar Content
- By Gowrisankar.
- By Simpel
Hi.
I try to figure out who is using an excel workbook which I can only open "read only". I use this code:
#include <Array.au3>
#include <Excel.au3>

Local $sFile = ; excel file with path on a network drive
Local $oExcel = _Excel_Open(True, True)
Local $oTabelle = _Excel_BookOpen($oExcel, $sFile)
Local $aUsers
If IsObj($oTabelle) Then
    $aUsers = $oTabelle.UserStatus
    _ArrayDisplay($aUsers)
EndIf

If I am the one allowed to write to the excel file (I'm the first one who opened it) then I will get an array with myself:
If my colleague opened the excel ...
- By robertocm
change linked image paths in excel 2007 Open XML Files with AutoIt and 7-zip:
#include <File.au3>

;Change this
Local $sFind = "C:\Users\MyUserName\Documents\MyImageFolder\My%20Image1.png"
Local $sReplace = "C:\Users\ANOTHERUSERNAME\Documents\AnotherImageFolder\My%20Image1.png"

Local Const $sMessage = "Directory to change excel image paths"
Local $sFileSelectFolder = FileSelectFolder($sMessage, "")
Local $sTempDir = @ScriptDir & "\testdir"

;Required 7-zip
Local $PathZipProgram = @ProgramFilesDir & "\7-Zip\"
If Not (FileExists($PathZipProgram & "\7z.exe")) Then
    MsgBox(16, "", "7z.exe not found in path " & $PathZipProgram)
    Exit
EndIf

;look for excel files in selected directory and all subdirectories
Local $SFileList = _FileListToArrayRec($sFileSelectFolder, "*.xls.;*.xlsm", $FLTAR_FILES, $FLTAR_RECUR, $FLTAR_NOSORT, $FLTAR_FULLPATH)
If Not @error Then
    For $i = 1 To $SFileList[0]
        DirRemove($sTempDir, 1)
        ;use x command to keep the folder structure, -aoa Overwrite All existing files without prompt, use -r to unzip the subfolders from the zip file
        RunWait('"' & $PathZipProgram & '7z.exe" x -aoa -r "' & $SFileList[$i] & '" -o"' & $sTempDir & '" -y', $PathZipProgram, @SW_HIDE)
        __ReplaceImagePaths($sTempDir, $sFind, $sReplace)
        RunWait('"' & $PathZipProgram & '7z.exe" a -r "' & $SFileList[$i] & '" "' & $sTempDir & '\*" -tzip -y', $PathZipProgram, @SW_HIDE)
    Next
Else
    MsgBox(16, "Error", "No files were found in the folder specified.")
EndIf
DirRemove($sTempDir, 1)

Func __ReplaceImagePaths($sTempDir, $sFind, $sReplace)
    ;List all files with .xml.rels extension in the directory \xl\drawings\_rels
    Local $aFileList = _FileListToArray($sTempDir & "\xl\drawings\_rels", "*.xml.rels", 1, True)
    If @error = 1 Then
        ;MsgBox (0, "", "Path was invalid")
        SplashTextOn("Title", "Path was invalid", -1, -1, -1, -1, 1, "", 24)
        Sleep(2000)
        SplashOff()
        Exit
    EndIf
    If @error = 4 Then
        ;MsgBox (0, "No files", "No files were found")
        SplashTextOn("Title", "No files were found", -1, -1, -1, -1, 1, "", 24)
        Sleep(2000)
        SplashOff()
        Exit
    EndIf
    Local $iRetval
    ;Loop through the array
    For $i = 1 To $aFileList[0]
        $iRetval = _ReplaceStringInFile($aFileList[$i], $sFind, $sReplace)
    Next
EndFunc
Some references:
Section 301.6109-1(d)(3)(ii) of the Procedure and Administration Regulations

Notice 2004-1

SECTION 1. PURPOSE

This notice addresses the requirements of section 301.6109-1(d)(3)(ii) of the regulations on Procedure and Administration, relating to applications for Individual Taxpayer Identification Numbers (ITINs). The Service has changed its ITIN application process. This notice confirms that taxpayers who comply with the new ITIN application process will be deemed to have satisfied the requirements in section 301.6109-1(d)(3)(ii) relating to the time for applying for an ITIN. This notice also solicits public comments regarding the changes to the ITIN application process.

SECTION 2. BACKGROUND

Section 6109(a)(1) generally provides that a person must furnish a taxpayer identifying number (TIN) on any return, statement, or other document required to be made under the Internal Revenue Code (Code). For taxpayers eligible to obtain a social security number (SSN), the SSN is the taxpayer's TIN. See section 6109(d); section 301.6109-1(d)(4). Taxpayers who are required under the Code to furnish a TIN, but who are not eligible for a SSN, must obtain an ITIN from the Service. See section 301.6109-1(d)(3)(ii). A taxpayer must apply for an ITIN on Form W-7, Application for the IRS Individual Taxpayer Identification Number.

SECTION 3. FORM W-7 AND ACCOMPANYING INSTRUCTIONS

The Service has revised Form W-7 and the accompanying instructions. In general, a taxpayer who must obtain an ITIN from the Service is required to attach the taxpayer's original, completed tax return for which the ITIN is needed, such as a Form 1040, to the Form W-7. There are, however, certain exceptions to the requirement that a completed return be filed with the Form W-7. These exceptions are described in detail in the instructions to the revised Form W-7. One of the exceptions applies to holders of financial accounts generating income subject to information reporting or withholding requirements. In these cases, an applicant for an ITIN must provide the IRS with evidence that the applicant had opened the account with the financial institution and that the applicant had an ownership interest in the account. The Treasury Department and the IRS will consider changes to the requirements of this exception if necessary to ensure the timely issuance of ITINs to holders of these types of financial accounts. In addition, financial institutions may participate in the IRS' acceptance agent program.

SECTION 4. CLARIFICATION OF REGULATORY REQUIREMENTS

Section 301.6109-1(d)(3)(ii) provides that any taxpayer who is required to furnish an ITIN must apply for an ITIN on Form W-7. The regulation further states that the application must be made far enough in advance of the taxpayer's first required use of the ITIN to permit the issuance of the ITIN in time for the taxpayer to comply with the required use (e.g., the timely filing of a tax return). This requirement was intended to prevent delays related to Code filing requirements. Under the Service's new ITIN application process, applicants, in general, are required to submit the Form W-7 with (and not in advance of) the original, completed tax return for which the ITIN is needed. Accordingly, taxpayers who comply with the Service's new ITIN application process will be deemed to have satisfied the requirements of section 301.6109-1(d)(3)(ii) with respect to the time for applying for an ITIN. The original, completed tax return and the Form W-7 must be filed with the IRS office specified in the instructions to the Form W-7 regardless of where the taxpayer might otherwise be required to file the tax return. The tax return will be processed in the same manner as if it were filed at the address specified in the tax return instructions. No separate filing of the tax return (e.g., a copy) with any other IRS office is requested or required.

Taxpayers are responsible for filing the original, completed tax return, with the Form W-7, by the due date applicable to the tax return for which the ITIN is needed (generally, April 15 of the year following the calendar year covered by the tax return). If a taxpayer requires an ITIN for an amended or delinquent return, then the Form W-7 must be submitted together with the return to the IRS office specified in the instructions accompanying the Form W-7.

SECTION 5. EFFECTIVE DATE

This notice is effective December 17, 2003.

SECTION 6. COMMENTS

The Service is committed to maintaining a dialogue with stakeholders on the ITIN application process, including Form W-7. Comments in response to this notice will be considered carefully by the Service in future revisions to the ITIN application process and Form W-7. The Service welcomes all comments and suggestions and is particularly interested in comments on the following matters:

1. How can Form W-7 and the instructions be simplified or clarified?

2. The instructions to Form W-7 provide four exceptions to the requirement that a completed tax return be attached to Form W-7. Should these exceptions be modified? Are additional exceptions needed?

3. ITIN applicants may submit a Form W-7 to an acceptance agent. The acceptance agent reviews the applicant's documentation and forwards the completed Form W-7 to the Service. What steps, if any, should the Service consider to improve the acceptance agent program?

Comments must be submitted by June 15, 2004. Comments may be submitted electronically to [email protected]. Alternatively, comments may be sent to CC:PA:LPD:PR (Notice 2004-1), Room 5203, Internal Revenue Service, P.O. Box 7604, Ben Franklin Station, Washington, DC 20044. Submissions may be hand delivered Monday through Friday between the hours of 8 a.m. and 4 p.m. to: CC:PA:LPD:PR (Notice 2004-1), Courier's Desk, Internal Revenue Service, 1111 Constitution Avenue, N.W., Washington, DC 20224.

SECTION 7. CONTACT INFORMATION

The principal author of this notice is Michael A. Skeen of the Office of Associate Chief Counsel (Procedure and Administration), Administrative Provisions and Judicial Practice Division. For further information regarding this notice, contact Michael A. Skeen on (202) 622-4910 (not a toll-free call).
https://www.scribd.com/document/536013/US-Internal-Revenue-Service-n-04-1
Opened 9 years ago
Last modified 3 years ago
#624 reopened enhancement
Add Latent BuildSlave for DRMAA supporting systems
Description (last modified by dustin)
Supplied are two modules:

drmaabuildslave - contains a basic latent buildslave which uses the DRMAA API (requires the drmaa python module)

sgebuildslave - a latent DRMAA buildslave, extended for Grid Engine - a popular open-source distribution system by Sun
Attachments (7)
Change History (35)
Changed 9 years ago by smackware
Changed 9 years ago by smackware
drmaabuildslave.py
comment:1 Changed 9 years ago by dustin
- Milestone changed from undecided to 0.8.+
smackware - can you provide some snippets of documentation that I can include in the manual?
comment:2 Changed 8 years ago by dustin
- Keywords drmaa grid sge removed
- Priority changed from trivial to major
comment:3 Changed 8 years ago by dustin
- Keywords virtualization added; latent removed
comment:4 Changed 7 years ago by dustin
As a reminder, hopefully we can get this documented and merged soon!
comment:5 Changed 6 years ago by dustin
- Resolution set to wontfix
- Status changed from new to closed
No response for quite a while -- feel free to re-open if there's further work on this.
comment:6 Changed 5 years ago by mvpel
My colleague has implemented DRMAA-based latent slaves in 0.8.4p2, and we're about to port it to 0.8.8 on Monday. He said it was very easy to implement, and it's working fine with Grid Engine now, and we'll be using it with HTCondor after the upgrade.
comment:7 Changed 5 years ago by dustin
Sounds good - do you want to re-open this?
comment:8 Changed 5 years ago by mvpel
Yeah, let's reopen, why not?
I got the attached code working with just a small bit of change to fix the lack of a delay and status-checking mechanism that caused the master to not wait for the slave to be scheduled and dispatched before giving up on it, and reporting that it failed to substantiate. I'll provide an updated file later.
I also adapted the sgebuildslave.py into an htcondorbuildslave.py, though my lack of familiarity with Python is tripping me up a bit - I need to figure out how to pass arguments to the buildslave_setup_command, or set environment variables, since I need to provide it with the slave name. I've got an ugly little hack in there at the moment.
For the slave names, I'm using "LatentSlave01" through "LatentSlave16" (we have several different builds), rather than host names (hence my need for a setup-command argument), since a given latent slave could wind up running on any of the exec hosts in the pool (we'll have 42 when finished), and it's preferable to avoid having to update the slave list every time an exec host is added or removed.
The slave is created fresh by the buildslave_setup_command script each time a latent slave starts. The setup command runs "buildslave create-slave" using the HTCondor-managed scratch directory, and then execs the buildslave in there. HTCondor takes care of deleting that directory when the job exits or is terminated. I also have a bit of code that creates the info/host file so you can tell which exec host the slave wound up on.
I've noticed that when the slave terminates, it's marked as "removed" in the HTCondor history. I'd prefer to have the slave shut itself down gracefully rather than being killed off through the scheduler, so that HTCondor will see it as "completed," rather than "removed."
I'm also trying to figure out if it's possible to have the slave do the checkout and build in the buildslave's HTCondor scratch directory, and then use the file transfer for anything that needs to go back to the master. The catch is that the master won't know the name of that directory, and in fact it won't be created at all until the slave starts up, so the master-side checkouts from buildbot.steps.source.svn.SVN may not play well. I'm not entirely clear on how the checkout mechanism works yet.
comment:9 Changed 5 years ago by mvpel
When creating the DRMAA session in the master.cfg, the reconfig doesn't work because the session was already established at startup. You have to do:
Session = drmaa.Session()
try:
    Session.initialize()
except drmaa.errors.AlreadyActiveSessionException:
    print "Using previously-initialized " + Session.contact + " DRMAA session"
comment:10 Changed 5 years ago by rutsky
- Cc rutsky.vladimir@… added
comment:11 Changed 5 years ago by mvpel
After some Python learning-curve issues, and a bit of tweaking and poking, it looks like we've got a fully-functional DRMAA latent build slave submitting to HTCondor. I'll give it overnight to make sure that the wheels don't fall off, but it appears to be in good shape. I'll provide the revised files and some instructions.
There's probably a better way to handle the job resource requirements than the hardcoding I'm doing; it'd be nice to be able to pass memory and disk space requirements in from the master.cfg.
Changed 5 years ago by mvpel
DRMAA Abstract Latent Build Slave
Changed 5 years ago by mvpel
DRMAA HTCondor Abstract Latent Build Slave
Changed 5 years ago by mvpel
Startup script for HTCondor latent build slave
Changed 5 years ago by mvpel
Sample master.cfg to create HTCondor latent slave instances
Changed 5 years ago by mvpel
Sample master.cfg to create HTCondor latent slave instances
comment:12 Changed 5 years ago by mvpel
This is what's working on our HTCondor pool. The sgebuildslave.py may also need some adjustment as well.
One caveat is that the twistd.log files for the buildslave are deleted when the slave terminates, along with the rest of the Condor scratch directory. There may be a way to transfer them back to the master by using Condor's output-transfer mechanisms, with transfer_output_remaps to differentiate the log files from the various slaves. However since the slave is killed in the above, rather than exiting on its own, that'll pose a problem - Condor won't transfer files back to the owner if a job is killed.
It appears that the build_wait_timeout=0 is not actually causing the slave to shut itself down when the build finishes as some of the docs imply, but rather causing the insubstantiate to be invoked by the master to force the slave to shut down. If the slave could be directed to simply exit after the build finishes... am I missing a step somewhere?
The run_buildslave script can translate the TERM signal to a HUP signal to initiate a graceful shutdown of the slave, but I don't think that'll be sufficient to get the automatic file transfer to occur. So probably the slave-start script would need to do it in the TERM trap.
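A minimal sketch of that signal translation in Python (command names and the slave directory are assumed, not the actual run_buildslave script):

```python
import signal
import subprocess
import sys


def run_slave(cmd):
    """Run the buildslave command and forward SIGTERM to it as SIGHUP.

    With --allow-shutdown=signal, a HUP asks the slave to shut down
    gracefully instead of being killed outright, which should let a
    Condor-initiated TERM end the job cleanly.
    """
    child = subprocess.Popen(cmd)

    def forward_term(signum, frame):
        # Translate the scheduler's TERM into a graceful-shutdown HUP.
        child.send_signal(signal.SIGHUP)

    signal.signal(signal.SIGTERM, forward_term)
    return child.wait()
```

A real wrapper would invoke something like `run_slave(["buildslave", "start", "--nodaemon", scratch_dir])` after creating the slave in the Condor scratch directory.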
comment:13 Changed 5 years ago by dustin
I don't think users will be terribly worried about twistd.log files. There's seldom much of interest in there.
Jc2k, can you take a look at these additions? mvpel, do you think you could turn this into a pull req so that we can include tests, docs, etc.?
comment:14 Changed 5 years ago by dustin
- Cc Jc2k added
comment:15 Changed 5 years ago by dustin
- Resolution wontfix deleted
- Status changed from closed to reopened
comment:16 Changed 5 years ago by mvpel
Thanks for the pointer - I've forked the Github repo, so I'll plan to convert things into a branch when I have some time this week. I found a typo or two in any case, and perhaps I'll use the exercise of converting run_buildslave into Python as an educational experience. Reaching for /bin/sh is a 30-year-old habit for me, and from what I've learned over the last couple of months Python seems pretty spiffy.
With some further research, I found the "kill_sig=SIGHUP" Condor directive, which results in a HUP signal being sent to the run_buildslave script instead of a TERM, so that should mean that the "trap" wouldn't be required since a HUP would propagate to the buildslave child, which would close out due to the --allow-shutdown=signal.
However, having the trap would allow the startup script to try to append the twistd.log file somewhere before exiting, or whatever else - but like you said perhaps that's not worth the effort.
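For reference, a hedged sketch of the submit description fragment involved (only kill_sig is taken from the discussion above; the other directive values are assumed):

```
# Hypothetical HTCondor submit description fragment for the latent slave.
# kill_sig makes scheduler termination deliver SIGHUP instead of SIGTERM,
# so the wrapper/buildslave can shut down gracefully.
universe    = vanilla
executable  = run_buildslave
kill_sig    = SIGHUP
queue
```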
And after reading up on Python function arguments, I'm going to turn the nativeSpecification pieces into default-value keyword arguments, so the creator of the HTCondor latent slave in master.cfg can adjust them as appropriate, and perhaps a way to sanity-check and accept arbitrary submit description directives - perhaps something as simple as a string list called "extra_submit_descriptions".
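A hypothetical sketch of what those default-value keyword arguments could look like (class and argument names are assumptions for illustration, not the actual buildbot API):

```python
# Hypothetical sketch: expose scheduler resource requests as
# default-value keyword arguments so master.cfg can tune them per
# latent slave, plus lightly sanity-checked arbitrary directives.
class SubmitDescription(object):
    def __init__(self,
                 request_memory=1024,   # MB
                 request_disk=2048,     # KB
                 request_cpus=1,
                 accounting_group=None,
                 extra_submit_description=()):
        self.directives = {
            "request_memory": request_memory,
            "request_disk": request_disk,
            "request_cpus": request_cpus,
        }
        if accounting_group is not None:
            self.directives["accounting_group"] = accounting_group
        # Accept arbitrary "key = value" directives, rejecting malformed ones.
        for line in extra_submit_description:
            key, sep, value = line.partition("=")
            if not sep or not key.strip():
                raise ValueError("malformed submit directive: %r" % line)
            self.directives[key.strip()] = value.strip()

    def render(self):
        # Render the directives in submit-description syntax.
        return "\n".join("%s = %s" % (k, v)
                         for k, v in sorted(self.directives.items()))
```

With something like this, a big builder's slaves could be created with `request_memory=8192, request_cpus=4` while small ones keep the defaults.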
comment:17 Changed 5 years ago by mvpel
First cut:
I fleshed out some documentation in the sample file as well, to help clarify what's going on and why.
Still have the gross hardcoded submit description directives, I'll deal with that later. I'll pull it, transfer it to my pool, and test it later this week or early next week, and do another commit to this branch as things progress.
comment:18 Changed 5 years ago by mvpel
It occurs to me - would the master get offended if the slave signals a graceful shutdown after the master had already called stop_instance()?
comment:19 Changed 5 years ago by Jc2k
I'm not sure what would happen in that case - I think I've always disabled graceful shutdown of slaves by the master.
One nice thing you can add to this branch is something like this:
from buildbot import config
try:
    from drmaa import drmaa
except ImportError:
    drmaa = None
And then in your __init__:
if not drmaa:
    config.error("The python module 'drmaa' is needed to use a %s" %
                 self.__class__.__name__)
Then when the user uses buildbot checkconfig they will get a helpful error message, rather than a python stack trace.
comment:20 Changed 5 years ago by mvpel
Great, thanks for that! I realized it's probably not necessary in htcondor.py to gripe about a missing buildbot.buildslave.drmaa, since it's a Buildbot internal component. Yes?
Here's the commit:
comment:21 Changed 5 years ago by mvpel
I just had an idle buildslave fail to shut down after a HUP, in spite of cheerfully logging that it would inform the master, so maybe we do need to stick with a TERM, or try a HUP first and then a TERM.
comment:22 Changed 5 years ago by mvpel
I've committed some updates I worked on last night in the wake of some testing with our Buildbot, as well as adding keyword arguments to allow the user to define certain aspects of the resource requests and set the accounting group and user. I also added the "extra_submit_description" for arbitrary directives, and improved the docstrings quite a bit.
With the ability to specify different resource requests for different latent buildslaves, you can set up big ones for larger builders by calling for more memory, disk space, and even CPUs, while having the smaller builders use a different set of latent buildslaves which request fewer resources from the scheduler.
comment:23 Changed 5 years ago by mvpel
I found what may be an issue in Enrico's code or possibly the HTCondor code, in that when jobs are sent to a remote scheduler's queue as a result of having the "SCHEDD_HOST" config value set to the remote machine's hostname, the job ID provided by DRMAA uses the local hostname instead of the remote:
DRMAA-provided job ID: buildbot_host.23456.0
Actual job ID: sched_host.23456.0
The master gets an invalid job ID exception when it tries to DRMAA-terminate the former. I can tell at least that the HUP signal is working well because the slave goes promptly and gracefully away when I condor_rm the job, and the master doesn't seem to mind seeing a shutdown message after termination and releaseLocks in the slightest.
After reverting to a queue located on the buildmaster's host, the DRMAA job ID is working properly to terminate the slaves. I've got a support case open with HTCondor about it to see whether it's in their DRMAA or DRMAA-Python.
comment:24 Changed 5 years ago by mvpel
Ok, it appears that when the master goes to terminate the latent slave, it does not want to hear anything further from that slave whatsoever, otherwise it thinks that the slave is withdrawing from participation in builds - does that sound correct? If the master says "slave wants to shut down," then it's not going to try to use that slave again? So maybe I do need to just kill -9 when the DRMAA terminate occurs?
comment:25 Changed 5 years ago by mvpel
Good news Monday morning - everything appears to be working smoothly with the code I have in place right now, so now it's just a matter of adding the additional features to allow user control over the scheduler parameters and we'll have a solid piece of code for latent slaves on HTCondor and eventually Grid Engine.
I rewrote the run-buildslave script in Python over the weekend, so I'll see how that goes when I bring it over. If anyone wants to give me some Python-newbie pointers as to style and syntax, I'd appreciate it:
comment:26 Changed 4 years ago by dustin
- Milestone changed from 0.8.+ to 0.9.+
Ticket retargeted after milestone closed
comment:27 Changed 3 years ago by Edemaster
Registering my interest on this feature. I'm starting to look at the code and get it running in my environment. So far, I've rebased the code onto nine here:
comment:28 Changed 3 years ago by Edemaster
- Cc grand.edgemaster@… added
sgebuildslave.py
http://trac.buildbot.net/ticket/624
QML program doesn't minimize properly
Hello,
I'm having a problem with a QML program I am making. It's a full screen program to which I've added a minimize button which calls QtQuick2ApplicationViewer.showMinimized() when clicked. This works fine on the development machine (Win7 x64) but when I deploy it on another computer, it doesn't minimize properly. What happens is that the program's screen doesn't go away but the interface doesn't respond. When I use alt+Tab none of the other open programs will be visible until I manage to guess how many times I have to hit alt+Tab to get back to the program, at which point it responds again. This leads me to believe that the program does "minimize", but something goes wrong so the screen doesn't go away.
I'm posting my main.cpp if that helps:
@#include <QApplication>
#include "qtquick2applicationviewer.h"
#include <QDesktopWidget>
#include <QQmlContext>
#include "externalfilestarter.h"
#include <QtOpenGL/QGLFormat>
#include <QMessageBox>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    if (!(QGLFormat::openGLVersionFlags() & QGLFormat::OpenGL_Version_2_0)) {
        QMessageBox::critical(0, "Unsupported OpenGL",
                              "This program requires OpenGL v2.0 or higher.\r\nPlease update your graphics card drivers.",
                              QMessageBox::Ok);
        return -4;
    }

    ExternalFileStarter extFile;
    QtQuick2ApplicationViewer viewer;
    viewer.rootContext()->setContextProperty("QtQuick2ApplicationViewer", &viewer);
    viewer.rootContext()->setContextProperty("extFile", &extFile);
    viewer.setMainQmlFile(QStringLiteral("qml/Design2014Papers/main.qml"));
    viewer.setResizeMode(QtQuick2ApplicationViewer::SizeRootObjectToView);
    viewer.setGeometry(app.desktop()->screenGeometry());
    viewer.showFullScreen();

    return app.exec();
}
@
I'll repeat that this problem only appears when I deploy it to other Win7 computers so it sounds like a deployment problem but I'm not sure how to pinpoint the problem. I'm using Qt 5.1 with MinGW.
Cheers,
Lucijan
I've tested this further and I can get the program to minimize in Qt 5.0.2 but not in 5.1.0. Everything is fine in Qt Creator, yet when I deploy 5.0.2 works, but 5.1.0 doesn't. Is this a bug then?
I have noticed that 5.0.2 requires d3dcompiler_43.dll but 5.1.0 doesn't, and the 5.1.0\mingw48_32\bin folder doesn't contain that file at all, and it's also missing libEGL.dll and GLESv2.dll which were present in 5.0.2. I found them in the Tools\QtCreator\bin folder.
I have a small program which shows what happens, but you will have to deploy it first:
@import QtQuick 2.0

Rectangle {
    width: 360
    height: 360

    Text {
        id: minText
        text: qsTr("Minimize")
        anchors.centerIn: parent

        MouseArea {
            anchors.fill: parent
            onClicked: {
                QtQuick2ApplicationViewer.showMinimized();
            }
        }
    }

    Text {
        id: quitText
        text: qsTr("Quit")
        anchors.left: minText.right
        anchors.leftMargin: 10
        anchors.verticalCenter: minText.verticalCenter

        MouseArea {
            anchors.fill: parent
            onClicked: {
                Qt.quit();
            }
        }
    }
}
@
Here's the main.cpp file:
@#include <QtGui/QGuiApplication>
#include "qtquick2applicationviewer.h"
#include <QQmlContext>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QtQuick2ApplicationViewer viewer;
    viewer.rootContext()->setContextProperty("QtQuick2ApplicationViewer", &viewer);
    viewer.setMainQmlFile(QStringLiteral("qml/MinimizeTest/main.qml"));
    viewer.showFullScreen();

    return app.exec();
}
@
This only happens if the program is set to showFullScreen(), but doesn't happen if showExpanded() is used.
So is this a bug in Qt or am I missing something?
I'm not sure this helps you, but did you try using QQmlApplicationEngine instead of QQuickView? Qt 5.1 is meant to use QQmlApplicationEngine for Qt Quick.
It looks like QQmlApplicationEngine is connected to Qt Quick Controls which I'm not using so it doesn't look like this solves my problem.
So, can anyone test the small program posted above? I would really like to know if it's something I've done or a problem with Qt.
https://forum.qt.io/topic/32421/qml-program-doesn-t-minimize-properly
Opened 5 years ago
Closed 4 years ago
#21466 closed Bug (invalid)
override_settings(LOGIN_URL=…) does not work when not first test
Description
Overriding LOGIN_URL in the tests does not work when another test is run beforehand.
from django.test import TestCase
from django.test.utils import override_settings
from django.core.urlresolvers import reverse
from django.conf import settings


class OverrideSettingsTest(TestCase):
    def test_a(self):
        """
        Toggle this test by commenting it out and see whether test_b() passes.
        """
        response = self.client.get(reverse("harmless-view"))
        self.assertEqual(response.status_code, 301)

    @override_settings(LOGIN_URL="/THIS_IS_FINE/")
    def test_b(self):
        # settings appear to be overridden as expected
        self.assertEqual(settings.LOGIN_URL, "/THIS_IS_FINE/")
        response = self.client.get(reverse("redirect-to-login"))
        # The following assertion fails only when test_a() is run.
        self.assertRedirects(response, "/THIS_IS_FINE/",
                             status_code=301,
                             target_status_code=404)

    def test_c(self):
        response = self.client.get(reverse("harmless-view"))
        self.assertEqual(response.status_code, 301)
.F.
======================================================================
FAIL: test_b (override_bug.tests.OverrideSettingsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "django/test/utils.py", line 224, in inner
    return test_func(*args, **kwargs)
  File "override_bug/override_bug/tests.py", line 24, in test_b
    target_status_code=404
  File "django/test/testcases.py", line 617, in assertRedirects
    (url, expected_url))
AssertionError: Response redirected to '', expected ''
----------------------------------------------------------------------
Ran 3 tests in 0.031s
Attachments (1)
Change History (6)
Changed 5 years ago by
comment:1 Changed 5 years ago by
I'm almost sure the problem is related to the way you are using settings.LOGIN_URL. In your sample project, it is used as a parameter of the as_view call in your URLConf; that means that it will be defined once, and for all requests, at import time.

You can work around this issue by subclassing RedirectView and overriding get_redirect_url() so that when settings.LOGIN_URL changes your view can take that change into account. I don't think we can do anything on Django's side.
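The import-time pitfall described above can be illustrated without Django at all (the settings object here is a hypothetical stand-in for django.conf.settings):

```python
# Hypothetical stand-in for django.conf.settings, to show why a value
# captured at import time ignores a later override_settings().
class FakeSettings(object):
    LOGIN_URL = "/accounts/login/"

settings = FakeSettings()

# Captured once, at "import time" -- like passing settings.LOGIN_URL
# into as_view() in the URLconf:
frozen_url = settings.LOGIN_URL

# Looked up lazily at "request time" -- like an overridden
# get_redirect_url():
def lazy_url():
    return settings.LOGIN_URL

settings.LOGIN_URL = "/THIS_IS_FINE/"    # what override_settings() does

assert frozen_url == "/accounts/login/"  # frozen value misses the override
assert lazy_url() == "/THIS_IS_FINE/"    # lazy lookup sees it
```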
comment:2 Changed 5 years ago by
Sorry for opening this ticket. You're right that this has nothing to do with Django.
I used django-braces' LoginRequiredMixin, which sets settings.LOGIN_URL at import time, and replicated this bug in my example project without further reflecting upon this issue.
comment:3 Changed 4 years ago by
I think I'm hitting the same issue as well. In my project, I have set a variable called REGISTRATION_ENABLED. However, when I try to override this setting in my tests, it's never read. Changing the setting beforehand in my settings.py makes the test pass, however. This is how it's used in my tests:

def test_auth(self):
    """
    Test that a user can register using the API, login and logout
    """
    # test registration workflow
    submit = {
        'username': 'Otto',
        'password': 'password',
        'first_name': 'first_name',
        'last_name': 'last_name',
        'email': '[email protected]',
        'is_superuser': False,
        'is_staff': False,
    }
    url = '/api/auth/register'
    response = self.client.post(url, json.dumps(submit),
                                content_type='application/json')
    self.assertEqual(response.status_code, 201)

    # test disabled registration
    with self.settings(REGISTRATION_ENABLED=False):
        submit['username'] = 'anothernewuser'
        response = self.client.post(url, json.dumps(submit),
                                    content_type='application/json')
        self.assertEqual(response.status_code, 403)
And the code block in my views:
class HasRegistrationAuth(permissions.BasePermission):
    """
    Checks to see if registration is enabled
    """
    def has_permission(self, request, view):
        return settings.REGISTRATION_ENABLED
Note that I'm using in my application.
comment:4 Changed 4 years ago by
apologies, I forgot to add the output of my tests!
$ ./manage.py test api Creating test database for alias 'default'... .......F.................................................... ====================================================================== FAIL: test_auth (api.tests.test_auth.AuthTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/api/tests/test_auth.py", line 64, in test_auth self.assertEqual(response.status_code, 403) AssertionError: 201 != 403 ---------------------------------------------------------------------- Ran 60 tests in 29.056s FAILED (failures=1) Destroying test database for alias 'default'...
When i modify the setting to
False in settings.py, the test passes without failure.
comment:5 Changed 4 years ago by
I think it's more likely that your use of override_settings() is invalid. Please be sure you've read the caveats in the documentation and ask questions using our support channels. If after doing those steps you still believe you've found a bug, please open a new ticket, thanks.
An example project which exhibits the bug (filetype: tar)
https://code.djangoproject.com/ticket/21466