#include <net_termlist.h>
Definition at line 37 of file net_termlist.h.
The term frequency.
This is the number of documents (in the network database) indexed by the term.
Definition at line 47 of file net_termlist.h.
Referenced by RemoteDatabase::open_allterms(), and RemoteDatabase::open_term_list().
The "name" of this term.
Definition at line 41 of file net_termlist.h.
Referenced by RemoteDatabase::open_allterms(), RemoteDatabase::open_metadata_keylist(), and RemoteDatabase::open_term_list().
The within-document-frequency of the term.
This information may not be available, in which case the field should have a value of 0.
Definition at line 54 of file net_termlist.h.
Referenced by RemoteDatabase::open_term_list().
I have a list of Java threads, obtained with:
top -H -p [java pid]
#include <sys/types.h>
#include <signal.h>
#include <stdio.h>
#include <linux/unistd.h>
#include <errno.h>
#include <unistd.h>
#include <pthread.h>
int main(void)
{
    /* BUG: pthread_kill() takes a pthread_t, a handle to a thread in the
       calling process -- not a thread ID read from top(1). On Linux/NPTL a
       pthread_t is effectively a pointer to a thread descriptor, so passing
       the arbitrary number 23242 makes glibc dereference a bogus address:
       that is the segmentation fault. */
    pthread_kill(23242, SIGKILL);
    return 0;
}
I compile with:
sudo gcc ckill.c -o comp -pthread
and when I run the resulting binary I get:
Segmentation fault (core dumped)
pthread_kill() can send a signal to a thread within the same process, and for that reason it takes a pthread_t as an argument, not a PID. So you can't send signals to some other Java process's threads the way you are doing.
If you want to send a signal to any process on the system, have a look at kill().
An overview paper describing ECMAScript 4 has been added to the ECMAScript site. It was recently announced on the mailing list:
I'm pleased to present you with an overview paper describing ES4 as the language currently stands. TG1 is no longer accepting proposals, we're working on the ES4 reference implementation, and we're expecting the standard to be finished in October 2008.
....
Chris, as a CE you can (and should) post items like this to the homepage. Use the link at the bottom of the FAQ. (I promoted the item manually, so there's no need to repost it).
Thanks for that. How can this be added to the JavaScript department so it gets listed there?
We really need a button for that, but it's not built into Drupal. I've updated the database manually.
Let me tell you that the grammar is rather annoying to parse (I've just spent the best part of last week attempting to write a parser) and that the specifications are somewhat hairy.
Still, at least on paper, JavaScript 2 looks impressively better than JavaScript 1, and becomes an interesting contender in the PL field.
I am wondering: what is its future? In browsers we'll always have vendor-specific dialects of it, and the legacy monster will never be slain. For embedded scripting languages, ECMAScript is surely not the only worthy competitor - Lua, some Lisp dialect, S-Lang, or the like. Not to mention the trend of allowing non-embedded languages with their own runtimes to be run from applications, e.g. Python or the Visual Basic family.
I can only say that there's strong support for incorporating JavaScript 2 in Firefox. Let's not forget that Firefox (and Thunderbird, Songbird, Nvu...) is in large part written *in* JavaScript itself and will benefit a lot from added static checks, generators and other features -- in addition to which, the work on JS2 is done in part by Firefox people and during the big renovation of Firefox's JavaScript interpreter.
With FF having iirc nearly 30% of the market in Europe and Oceania, and between 15% and 19% on the other continents, this means that JavaScript 2 will have a future. Hopefully, other non-Microsoft browsers will follow suit relatively soon. The existence of a (functional) reference implementation rather than an informal and unintelligible syntax-based specification will help both writing deployed implementations and prevent fragmentation. The fact that Adobe is joining forces with Firefox on the JavaScript front for Flash (and presumably Air) is also a positive point.
So, I'm rather positive for the moment.
I have a solution for the legacy monster, called ScreamingMonkey -- it's an Active Scripting engine for IE built around Tamarin, which as you may know is widely distributed.
You can read about ScreamingMonkey and related projects here. One of those projects is IronMonkey, which hopes to support optional (downloadable) Python and Ruby memory-safe implementations on Tamarin.
That brings up a good point. The default scripting language supported by browsers will remain JS. It's not going to become some other language for the foreseeable future. And the cost of supporting more languages is quite high, especially if they are not co-hosted on the same VM hosting JS. Consider these points against trying to wedge existing runtimes into browsers:
But worst of all, if you think it's hard to get browser vendors, including the "legacy monster", to agree on one language, consider how impossible it is to get them to agree on many languages. Unless, of course, one vendor dominates and forces everyone to reverse-engineer, or instead license, its code. I hope the hazards with that kind of monopoly are clear by now.
Some might argue that Firefox should take on all of the above problems in order to win developers and pressure other browser vendors. This is good strategy when done incrementally, by embracing and extending IE's de-facto standards in concert with other vendors and web developers in a truly-open standards body like the WHAT-WG. But it's suicidal to try to please everyone -- solving the above problems takes way too long and has a huge opportunity cost. You end up pleasing no one, and losing users because you didn't innovate asymmetrically, in user-facing features and better platform APIs apart from programming languages.
Firefox must remain small (it's still under 6MB), because people have to choose to download it, over various kinds of networks; we don't have many OEM bundling deals that make us the default browser. Same for Opera desktop and Safari on the iPhone, and of course the mobile web cannot tolerate N language runtimes, with O(N) or O(N^2) bridging costs, for N > 1. It's JS or bust for the mobile web, which I keep saying is the same as the web.
/be
What you say is true and points to the fact ("imo") that we are so very lucky that the people developing Javascript have, at least, "not bad taste". Nothing is perfect about Javascript but nothing is terribly offensive, either, and it is darn flexible.
-t
The problem with getting traction of other scripting languages on the browser is that the browser has no common virtual machine that can play host to the variety of languages. Sure you can try to plug a machine onto the browser, but that hasn't proven successful in the past. Part of the reason for the success of JavaScript is exactly because people don't want to be downloading programs that run locally.
Also, from what I see, a big push is in making JavaScript an intermediate language. For example, Ajax is mostly about using various languages like Java, C#, Python, PHP, etc. that auto-generate the JavaScript glue code. The generated code is not intended for human consumption. I'm left wondering whether this use of JavaScript as an intermediate language has influenced the design of ES4. That is, does ES4 make compiling to JavaScript easier and/or more efficient?
Supporting code generators targeted at JS has been at most a secondary goal. If only JS were never written or read by humans. But it is, in spite of crunchers and obfuscators, and that's still a huge virtue of JS and other web content languages -- the view-source advantage.
Nominal types, final classes and methods, let, let const, and other name binding forms that can't be overridden, all will help code generators produce more robust code that can be more readily optimized by browser JS engines.
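To make this concrete, here is a hedged sketch of the kind of output a code generator might emit, using the class and annotation syntax from the overview (the Point and distance names are invented for illustration, and details may still change before the standard is finished):

final class Point {
    var x: double;   // fixtures: fixed properties that cannot be deleted or retyped
    var y: double;
    function Point(px: double, py: double) { x = px; y = py; }
}

function distance(a: Point, b: Point): double {
    var dx = a.x - b.x;
    var dy = a.y - b.y;
    return Math.sqrt(dx * dx + dy * dy);
}

Because Point is final and its fields are fixtures, an engine can bind a.x and the distance call early instead of guarding against runtime redefinition -- which is exactly the robustness a code generator wants.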
On the [email protected] list, the topic of call/cc came up, as part of a concurrency thread, but also in part because some code generators (hi, Nicolas!) would love to have it in the target IL. See this message and the responses in its parent thread.
And don't forget tail calls!
some ES4 implementations are expected to exist in “hosted” environments, like the JVM or .NET. These environments do not always provide powerful control operators like first-class continuations or stack marks. Therefore ES4 is also precluded from incorporating such features.
Given that JS has closures, a "stack frame" can continue to exist indefinitely. That means that ES4 on the JVM or .NET can't use the native stack anyway, right? So why not have continuations?
Given that JS has closures, a "stack frame" can continue to exist indefinitely. That means that ES4 on the JVM or .NET can't use the native stack anyway, right?
You can have closures and a native stack. Closures pointing to stack frames is only one way to implement closures, and not the best way since it can lead to not being able to garbage collect items in the same stack frame that were not captured by the closure that would otherwise be garbage.
Not having continuations allows a language to fully interoperate with native APIs that have callbacks. With continuations you can't do that because you can't yield and resume control across native API stack frames. With continuations all of your native calls must be leaf functions unless you put some restrictions on use that require the programmer to know when he can or cannot call a continuation that might cross native stack frames.
The thread I cited in a reply just above tells why we are not imposing coroutines or call/cc on the standard. See this message, which nicely summarizes one goal in specifying generators but not coroutines or call/cc: greater interoperation through lowering the requirements bar a little. (We have other goals and reasons, too; see the thread.)
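The difference in implementation burden is easy to illustrate: a generator suspends only its own activation frame (a "shallow" continuation), so a hosted implementation never has to capture native or foreign stack frames -- which is precisely what full call/cc would demand. A hedged sketch in JS1.7/ES4-style syntax:

function countdown(n) {
    while (n > 0)
        yield n--;   // the presence of yield makes this a generator; it suspends just this frame
}

let g = countdown(3);
g.next();   // 3
g.next();   // 2
g.next();   // 1; one more next() would throw StopIteration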
I believe these archives have the bad taste of being private.
Sorry about that -- mailman did the right thing, yay.
As nabble.com and other sites host digests or archive mirrors, I think it's ok to just open this up. We never intended to keep the list private, and anyone can join. But any LtUers interested in JS2/ES4 and not on the list, join already! We crave feedback.
Just some thoughts. I'm not any sort of an expert in PL design, although I do code in JavaScript fairly often.
It was very interesting to see what ES4 is actually going to look like, given everything I've heard about the features that it's going to have.
As a check list, the things that are being added to ES all sound like very good things. However I can't shake the feeling that they've taken a small language and turned it into a big language, and I'm not sure that this is going to be to anyone's benefit in the long term.
I wonder if perhaps ES4 is making the same mistake as C++[1], in trying to add new features to a legacy language and ending up with something that has a lot of syntax and idiosyncrasies. In particular, despite what the document says, it isn't clear to me that the class-based OO and the prototype system really fit together particularly neatly, and I think that the let expressions are really a workaround for the fact that variables aren't block-scoped in previous versions.
It's a bit like there's two languages in there: the JavaScript we all know and quite like, and something new and interesting[2], but not obviously the same thing.
Still, I guess we won't really know if ES4 is any good until people have tried to actually program big[3] things in it.
[1] I believe Bjarne said somewhere that trying to be a superset of C was a mistake. I may be wrong. I'm sure you can find someone who's said that though!
[2] The structural types are cool; I've never seen them in an OO language, and I can see them being very useful. Shame they can't be recursively defined. Also interesting is having two-word keywords, like "this function".
[3] I've seen it often said that ES4 needed classes for making "programming in the large" easier. Has anyone come across evidence for this based on actual projects?
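Picking up footnote [2], a hedged sketch of what a structural type looks like in the overview's record-type syntax (the Point name is invented, and -- per the footnote -- the definition may not refer to itself):

type Point = { x: double, y: double };

function magnitude(p: Point): double {
    return Math.sqrt(p.x * p.x + p.y * p.y);
}

magnitude({ x: 3, y: 4 });   // any object with matching fields is acceptable; no class declaration needed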
We have indeed turned it into a bigger language. Some new features probably have more utility than others, but we've tried to remain use-case driven, and ES4 is a language for the next few years. Nobody can say what the situation will be like in a decade -- ES5, ES6, or something new.
As Brendan wrote, the legacy language is a fact to be dealt with, not something we can easily work around. Backward compatibility (compatibility with the web) is of extreme importance; in some cases ES4 is not compatible, but we've tried to minimize those and do only ones that we expected would be OK.
To address two concrete points:
The class-based system and the prototype system do in fact fit together in an OK way, but not without warts and subtleties (not all of which are completely worked out yet, I think). The main problem is working out how overriding a method in one hierarchy affects the other hierarchy, and not the least when it should and when it shouldn't.
The let expressions are not a work around for the lack of block scoping in previous editions; that was fixed by the let directives ("let is the new var"). Let expressions just provide even narrower scopes for bindings and are ready for macros, should those ever make an appearance. Let expressions emphasize the (quasi) functional flavor of the language. From the Date class:
intrinsic function getDay(): double
    let (t = timeval)
        isNaN(t) ? t : WeekDay(LocalTime(t));
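For contrast with the let expression above, a sketch of the let directive ("let is the new var"), which gives plain block scoping:

function f() {
    var v = 1;
    {
        let v = 2;   // a new binding, visible only inside this block
        // v is 2 here
    }
    // v is 1 here; a var declared in the block would have been hoisted
    // to function scope instead
}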
I'm sorry, I probably meant let directives, not let expressions.
Your comment led me into actually looking at the source code of the standard library. I completely see what you mean about let being the new var, and thank heavens for that!
With respect to your comment that the legacy language is a fact to be dealt with, I'm not entirely sure you got what I meant by my original comment.
Rather than saying that you should have thrown out the legacy language, I really meant that you'd done too much to it, fixing it where it wasn't really broken (and of course, compatibility means that you can't fix it where it really is, for example the binding of this).
This isn't really meant as a criticism at the moment, as I said, people (other than the working group) need to try to use it before we know if it's an improvement. Rather it is an impression, an aesthetic judgement, and I am quite prepared to find myself wrong on it - it would not be the first time.
I showed the document linked to one of my colleagues who said something along the lines of "They're trying to turn it into Java, it's disgusting"[1], and I know what he means (especially having now looked at some ES4 code). That said, if you need to run a Java in the browser (and I still don't quite see what the use case is, Firefox extensions maybe?), then a Java that has first-class lexically scoped functions, a more expressive type system, and some nice things like destructuring bind and list comprehensions doesn't sound like a bad Java to me.
[1] This particular colleague is of the bent that thinks that static typing and information hiding[2] are necessarily bad things. We both love Python, but he thinks that the fact that Python is dynamically typed is a feature, I just think that the fact they left out static typing has allowed them to make a beautiful language with no cruft that's there to keep the compiler happy. If you can add static typing to Python without detracting from the beauty of it, I'll think it's a good thing. Needless to say, my colleague hates Java.
[2] Yes, I pointed out that you can already do public and private variables in Javascript. And yes, we do put underscores in front of variables in Python to say "don't look at this unless you really want to". I guess he thinks that it shouldn't be a language feature.
Peter, your colleague sounds like many people I know. There is intense dislike of Java, and a knee-jerk response to seeing class in JS code. Of course, Python has classes, too, and both Python and ES4 are dynamic (ES4 by default). Types in ES4 are optional, and the strict mode is optional at the implementation's discretion.
What's more, as the overview points out, classes are baked into the standard ES3 library, and of course the DOM and many other libraries used by JS hackers.
Can any of these considerations help unjerk your colleague's knee? It would be interesting to hear more.
On the general question of evolve JS vs. do a new language: TG1 has many implementors who reckon we can't afford two disjoint implementations. Now there could be aggressive code-sharing, even with a "new JS" that is not backward-compatible. But we also want to re-use brainprint, allow for code migration and gradual typing, and otherwise evolve rather than supplant.
Wow, I'm talking to the creator of one of the world's most programmed languages!
I can't speak for my colleague (he's home sick today), but here's my (overly long) thoughts on these issues:
With regard to size and optionality:
I hope you won't mind me saying that ES4 is going to be a large language. By a large language I mean that there's a lot of details in the overview, a lot of keywords, and a lot of concepts.
I learnt JavaScript from the JavaScript 1.1 guide and reference from Netscape, at the age of about 14 or 15, not really knowing any programming apart from some very rusty BBC BASIC, and not enough C to do anything. I recall the JavaScript guide being pretty comprehensive, and pretty easy to understand. I wonder if a 14 year old will think the same of the Javascript 2 guide, regardless of how much is optional?
I guess that the language becomes more complex as you add features that are not easily defined in terms of the core language. Obviously static types have to add complexity, and, to be statically typed, so do classes (this trick won't give you the advantages you require).
I'm not saying that being a large language is a bad thing and I'm not saying that the additions made are in bad taste (some of them make me quite excited; keep borrowing from Python just as much as you want!). It's just interesting to compare ECMAScript's approach to complexity to, for example, Scheme's. Would it be fair to say that Scheme is successful (anything that's been around that long must be considered some sort of success) because of what it is: a very small and elegant Lisp, and to make it anything else would be to destroy it; whereas JavaScript is successful despite what it is (I hear this often), and change at the cost of complexity is worth it?
With regard to adding classes:
ECMAScript is (as far as I know) the best example of a prototype-based language currently in use, and so if the designers of the language are now moving away from that model (and I think it's clear that this is what is happening) towards classical inheritance, does that suggest that prototypical inheritance was a bad move in the first place? Is class-based inheritance superior? I'm assuming that the language designers aren't of the opinion that the mixture of both forms is the best option, but I may be wrong. I'm still interested in hearing where the pressure to add classes came from.
The argument that Python has classes too is a bit of a red herring. Python never tried prototype based inheritance, and it doesn't have to support the legacy of it. The argument that the DOM is implemented using classes is too. A JavaScript program (in the browser) only ever has to deal with DOM instances and factory functions, and I've never thought to myself "I wish I could subclass DIV". I'm not sure which other libraries you mean.
With respect to evolving JS versus doing a new language:
I'm not entirely sure which you've done! You can re-use brainprint (excellent expression btw) between C and C++, but good style in one language isn't good style in another. I wonder if perhaps what will happen is that library developers who write code with complex inheritance structures, and people who always wished that JavaScript was Java, will write in an "It's Java for the web browser" dialect, and people like me will just use the "old JS" dialect as a sort of "JavaScriptScript" to manipulate objects created from the libraries.
Urgh, I've got real actual work to do now :-(
Would it be fair to say that Scheme is successful (anything that's been around that long must be considered some sort of success) because of what it is: a very small and elegant Lisp, and to make it anything else would be to destroy it;
I'm aware that there was quite a furore around the amount of change that was involved in R6RS, I'm not a Scheme programmer and I don't know if the report authors got it right or not.
I hope you take my point though, with Scheme there is a principle that it's meant to be a small language, and while it has grown, that the cost of growth is something that the report authors will have had at the front of their minds.
I might be pointing out how much simpler it was before, but not to the end of telling the ES4 team that they've got it wrong.
I'm fully aware that I'm only an armchair language designer, but the cost of growth versus maintaining simplicity does seem like a valid topic for friendly debate.
No bad religion
It's true that older "JS2" or "ES4" drafts, led by Waldemar Horwat with Microsoft people on board (see JScript.NET), sent down a summary judgment that classes have vanquished prototypes in OOP -- not far from the truth in academic research, FWIW.
These days, the Ecma TG1 crowd is not dogmatic at all about classes being in any way better than prototypes for general OOP. We expect untyped code and prototype-based programming to be commonplace on the web, for a long time if not forever.
Why classes?
However, prototypes in JS have their weaknesses, which cause problems on the web, notably in Ajax libraries, which strive for good reasons to be compositional. JS's single-prototype system is flexible enough to support many OOP styles, but you have to totally order objects along the prototype chain, which sometimes bites back. And you still don't get fixtures: efficiently sealed instances that can't be tampered with.
Anything directly supporting high-integrity abstractions in the language will be novel, and it will look like classes, whatever the keyword names. It will entail declarative special forms for fixing the fields and methods produced by an object factory, detailing the construction protocol for the factory, and efficiently hiding member names.
The alternative involves closures for private variables, which means (a) idiomatic JS FP, not OOP, which is inconvenient and mysterious to most programmers; (b) unlikelihood of optimizing away closure costs in most implementations (think of browsers for cell phones).
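A hedged sketch of both alternatives -- the ES3 closure idiom for private state, and the class form proposed for ES4 (private-member syntax as in the overview; the Counter name is invented):

// ES3: privacy via closure capture -- idiomatic FP, and a fresh closure
// per method per instance
function makeCounter() {
    var count = 0;   // reachable only through the returned method
    return {
        next: function () { return ++count; }
    };
}

// ES4: a private fixture on a class -- no per-instance closure allocation
class Counter {
    private var count: int = 0;
    function next(): int { return ++count; }
}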
Other libraries
By other libraries than the DOM requiring classes and even interfaces to reflect in a first-class way, I mean toolkits including Adobe Flex for Flash, the WPF and Silverlight frameworks from Microsoft, the Java reflections in JS, etc.
DOM extensibility
Browser programmers absolutely do want to subclass DOM types, or more generally mock up and self-host DOM-like abstractions that play nice with the builtin DOM classes. Again this requires novelty in the language, and the canonical way to provide it is something very much like we've done: classes and interfaces.
This demand goes beyond the DOM, to general browser extensibility (see the Self-hosting sub-section in the overview). Browsers have a cliff below which lie ActiveX and the Netscape Plugin API. On the safe side of the cliff, crowding ever closer to the edge, are today's limited Ajax apps. Tomorrow, there should be no unreasonable limit to what you can do in downloaded JS2 -- there should be no cliff apart from the one wisely warning people away from binding to native platform DLLs.
Language "mood"
My point about Python having classes was more that a dynamic language can have some amount of classical OOP support and not be flamed. But I take your point. What this comes down to, more than arguments over the sufficiency of prototypes making classes redundant, is the "mood" of the language. And about that, people are very "fighty" right now.
I'm a pragmatist. I say choose your own mood, but if you need classes, they're there. Why should JS be gratuitously different for such use-cases? I have my own taste, but I won't confuse it for necessity and stubbornly dismiss arguments about the insufficiency of JS1, and say "work harder, more closures!"
Multi-paradigm already
A key phrase in the overview is "multi-paradigm". JS is not just Scheme, not just one style or way of doing OOP. This is true in both the language (a hybrid of Self and Scheme, with some brutal and even broken simplifications), and in its Ajax library ecosystem (Prototype feels like Ruby, MochiKit like Python, etc.).
Scheme
About size of Scheme vs. JS: Scheme is intentionally for teaching and research, so can (try to) stay small. If you talk to people who have used Scheme in industrial settings, you hear stories similar to those who have to find and use Ajax libraries, except that inside a Scheme shop "silo" one can make provision for the right compiler and libraries -- on the web, non-interoperation is not an option (see also how the spec must over-specify compared to Scheme or C).
For anything like the web, Scheme, like JS1, is too small. You have to procure or invent a lot of library code just to make the core language usable in real-world settings.
Bigger is better
So it's emphatically the case that TG1's majority, including me, believe that JS should grow. Its over-minimization has imposed a big complexity tax on JS programmers that cannot be paid merely by piles of Ajax libraries. And the end-game is hollowing out the browser, so it becomes a minimal and more stable runtime for a web-wide world of JS that can do what today only plugins now boast about.
All successful programming and natural languages get bigger over time. Eventually old forms (the subjunctive mood in English, sigh) dwindle away. With the Web this can take years or decades. The ES3-compatible subset will remain, and perhaps endure for most JS scripts (counting without coalescing copy-and-paste hacks that are shared widely and skew the statistics).
In this light, "JS1" certainly can continue to be the subject of primers and books for n00bs, so I wouldn't worry about that. But majority needs should not dictate majoritarian exclusivity: there are valid minority-share use-cases for JS that JS2 meets with optional types including classes.
Imagine
Imagine a world with browsers exposing fancy text, 2D and 3D graphics, multimedia, and other low-level APIs; with shared-nothing concurrency for background computation, and SQLite available to web apps. This vision is not far away[1], but these primitives need JS2 for programming in the large and performance scaling.
Without JS2, the plugin cliff looms close. With JS2, browser vendors can get out of the way, and JS developers can drive innovation. You won't have to wait a year for the new Firefox, or three to five years for a new IE, to see material progress.
[1] Microsoft would tell you this vision is called Silverlight, Adobe would say AIR, and both would lock you into a single-vendor solution. There's no reason all of these facilities should not be available in cross-browser open standards such as the WHAT-WG is working on right now. But notice how both Silverlight and AIR are sold not only with fancier text and graphics APIs -- they variously tout more, bigger, and "faster" programming languages than JS1. This is not (just) over-engineering or marketing.
Thanks for a very detailed and interesting answer. :-)
"unlikelihood of optimizing away closure costs in most implementations (think of browsers for cell phones)" - you mean ES4, a much larger language, will be easier for cell phones?
"Browser programmers absolutely do want to subclass DOM types" - they can extend Array already. Subclassing DOM types is a browser API issue, not a language issue.
"fancy text, 2D and 3D graphics, multimedia... these primitives need JS2 for programming in the large and performance scaling" - are we really out of ideas for optimizing JavaScript?
you mean ES4, a much larger language, will be easier for cell phones?
Yes. Don't be distracted by surface syntax, which is of course bigger. The runtime semantics (recall that strict mode is optional and no analyses are required by the spec) are not nearly as big, and much of the standard library is self-hosted. This design can be built on top of a modern, well-factored ES3 implementation, such as Opera's latest -- and Opera is part of TG1 and in favor of ES4.
Subclassing DOM types is a browser API issue, not a language issue.
Sorry, this begs the question of why should it be so. Thinking like this, based on artificial or historical divisions of labor in existing browsers, made worse by IE's monopoly stagnation of the web, is exactly why there's a plugin/ActiveX cliff, and why plugins like Silverlight and Flash still tempt developers away from the open web. But your assertion is not its own justification.
are we really out of ideas for optimizing JavaScript?
No, of course not -- but you will not find room in page-load time or memory for the kinds of lambda-lifting and whole-program compiler optimizations needed to make closures as efficient as class instances for integrity, private members, and inheritance. If JS had less mutability, it would be easier.
With runtime profiling or trace-based optimization you can do better, but closures have irreducible costs in JS, and compilers can't afford the code footprint for fancy analyses. This stubborn fact, combined with the idiomatic and frankly syntactically heavy/awkward costs of writing closures instead of classes, suggests that classes address a valid use-case not served by JS1.
Consider that the self-hosted built-ins, which are proving to perform better in modern VMs (see the footnote in the overview), cannot be expressed using closures.
Certainly, Mozilla and others will optimize JS closures. They simply are not the one-size-fits-all solution you seem to think they are. Not in JS, not in its common embeddings such as web browsers and the Flash Player.
Will ES4 allow me to play MP3s, videos and make OpenGL calls? In which browser/OS combinations? Under what security restrictions? That's the plugin cliff. Changing JavaScript is irrelevant. It is an API, API, API issue.
Plugins are not needed merely for "APIs", or Flash would not itself contain an ActionScript execution engine -- it could under your theory expose APIs to browser JS only (same for Silverlight).
Advanced rendering APIs such as 3D canvas are coming to browsers. Guess what happens next when one tries to program them at high rate, with too few native types, using JS1 only?
Web developers can't simply program arbitrary APIs from JS1, and win against proprietary platforms competing directly against the open web standards.
Security is of course critically important, and mostly orthogonal to choice of implementation language. Or do you trust compiled C++ code implicitly, just because it is packaged as a plugin? VM-hosted languages can be more thoroughly and flexibly secured, but that is a topic for another day.
Re your points: a) Flash isn't competing with JavaScript on script execution speed, it was actually slower before version 9. This didn't matter, and doesn't now. The people writing 3D engines in Flash aren't complaining about VM speed, they're complaining about rendering. b) I don't trust arbitrary plugins, I trust the Flash plugin because it's everywhere. I certainly wouldn't want arbitrary web pages to run at the same privilege level as Flash, including saving arbitrary amounts of data to my system.
And a little bit of my own perspective. I make online maps for a living. Maps can be written in JS, Flash, Java, SVG+JS, you name it. They're all pretty complex as webapps go, and all have major speed issues when you try to add interesting features. I should be your poster child, right? But making JavaScript run faster isn't even on my wishlist. JavaScript is okay. What's not okay is speed of DOM manipulation, HTML and vector repainting, DOM memory footprint, browser bugs etc. The ES4 proposal isn't fixing any of my problems with making online maps, but instead throws another heavy spec at browser developers to keep them busy. It's not just useless; it's actively harmful, like the 3 megabyte SVG spec.
My point about scaling is simple: a JS program calling OpenGL-ES methods on a 3D canvas will want the greater performance coming with JS2/ES4, and today will not compete with an AS3 (JITted) program in Flash 9.
Beware the hidden costs of JS1 mated to a C++ DOM. A lot of the overhead in method calling and memory use comes from the lack of a unified heap and type system. With JS2, more of the DOM can be self-hosted. This actually outperforms native DOM in many cases (same for the built-in classes).
When you look at a hierarchical instruction profile of Firefox, you see fairly flat distribution, not often an outlier to go fix. Yet major speed improvements are possible, and they're coming. The trick is to get the dynamic type conversion and method binding/dispatch out of the runtime, and trace the fast paths. This can be done with JS1 up to a point, but the DOM and other such interfaces have richer types than JS1 affords.
Browser bugs are always with us, but unifying memory management and type system machinery between JS and the DOM will reduce bug habitat.
I'll have more to say about this on my blog in the near future.
I try to move 30 map tiles (jpegs) simultaneously, and my frame rate drops. This is like 60 DOM calls per frame. How much of the slowness is due to method call overhead? I don't believe even 10%.
Firefox 2D canvas: I can draw a small line at 30 fps, and a big line at 2 fps. It's not method call overhead - the number of calls is the same. The true reasons: big spec, hasty implementation. CSS2, SVG, same story. Why will the ES4 story be different? The thinking certainly seems the same.
No no no no. Number of potential bugs is at least proportional to number of features. To fix bugs, stabilize features, don't grow them in a geometrical progression. Don't fix "will want", fix "want". Common sense.
I just got a comment from the lead interface developer of maps.yandex.ru, Russia's biggest online map. Translated from here:
Agree with the comments.
Instead of solving real problems, they make up new ones.
As for the introduction of classes and types, I'm afraid of it like fire. It seems we can lose the beautiful language JavaScript :(.
That's not an option. I can cite real DHTML benchmarks that do spend too much time in the glue between JS and the native code. Your benchmarks are real too. Acknowledging neither kind excludes the reality of the other, but you want fixes only for yours before any of the ones we track that do point to JS and its glue code can be fixed. Sorry, we have to take a broader view.
But here, I will make you a deal: file bugs on the two cases you mention (30 map tiles moved per frame, canvas scaling badly). Put relevant version, OS, and CPU info in the bugs. Cc: me on them. I'll help make sure they both get fixed.
As for "we can lose the beautiful language JavaScript", that's false. Nothing is lost, ES4 is a superset of ES3. Keep using what you like and esteeming its beauty.
I filed one of the bugs, but forgot to cc you. The bug number is 402690. The other one is trickier to distill, I'll get to it.
The "nothing is lost, just don't use the new features" argument is wrong. View-source/copy-paste becomes harder with typed code. Obfuscation, minification and other source-to-source transformations become harder with lots of new syntax. The number of people mixing and matching JavaScript will go down. The loss of simplicity is a real loss - see the transition from C to C++, eloquently described by yosefk:
a programming language is not exactly a tool. It is more accurately described, well, as a language. The key difference between tools and languages in the context of "blame" is choice. You probably don't choose to speak English - you do so in order to communicate with all the other people speaking English. When a bunch of people do something because other people do it, too, it's called "network effects". For example, if you want to work on a project for reasons having nothing to do with computer linguistics, and the project uses C++, you'll have to use C++, too. No choice.
I like to talk about a "utility" metric for code: the amount of useful work it does, divided by the number of convoluted, typed arguments you have to give it. Languages like Java encourage code with low "utility". JavaScript code typically has high "utility", because it has no classes.
The Web isn't moving from documents to apps. It's still 99% documents. Nobody's going to write LiveJournal in Silverlight until Google can search Silverlight. The "rich" 1% of the Web that ES4 is fighting for doesn't belong to "open standards" now, and never has.
ES4 does not mandate typing, and type annotations will not be used much for a while, if ever, in most small copy/paste-able scripts. But think: do you copy and paste minified gmail JS? No. But apps such as gmail are where ES4 can shine. Structural types for APIs, nominal types for optimized/frozen toolkits, untyped code elsewhere. ES4 is a best of several paradigms language.
It's hard to argue about the future, but ES4 is not aimed only at 1% of the web, or only at "rich" (internet applications, or whatever you meant). There is no one size that fits all, or even one size that fits 99%.
Good point about Ajax search engine indexing problems. Browser-personalized pages can't be indexed without local help. That is a possibility to explore, but it goes beyond the core language.
Yesterday I wrote that supercompiling JavaScript would be a fun hobby project. With ES4 this becomes a Big Pain, like obfuscation, minification or any other source-to-source transformation.
Popular technologies face pressure to grow. Java's generics, C# LINQ, Scheme R6RS, C++. CORBA, SOAP, SGML. It's sad to see JavaScript going the same way. It used to be the simplest and most powerful popular language, my favorite language.
You have to resist, people. Seriously: classes, metaclasses, virtual properties, annotations, tail calls, multimethods, nullable types, strong typing, interfaces, generics, packages, namespaces, pragmas, constants, destructuring bind, iterators and generators, array comprehensions?
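For what it's worth, a few of those features are small enough to show in a handful of lines. A hedged sketch of destructuring bind, generators, and array comprehensions in JS1.7/ES4-style syntax (range is defined here, not assumed to be built in):

function range(begin, end) {
    for (let i = begin; i < end; ++i)
        yield i;   // yield makes this a generator
}

let squares = [i * i for (i in range(0, 4))];   // comprehension: [0, 1, 4, 9]
let [first, second] = squares;                  // destructuring bind: first = 0, second = 1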
Instead of the 'web browser program', 'http', 'html', and a highly complex language like 'JS4', a simpler and potentially better solution would be to have an 'output viewer program', a structured binary transfer protocol, and an easy Lisp-like functional language where code can be executed at compile time as well as at run time.
Instead of having web servers send html pages through http, we should have application servers which send programs in binary form (not compiled code, but the source code represented in binary instead of text).
Instead of having html, we could have programs. If a programming language is structured appropriately, a program listing can appear as declarative as html is, but without its restrictions and without the need to hand computations off to external languages.
Instead of having web browsers, we should have output viewer programs which provide a graphical output viewport to downloaded programs.
Instead of having predefined GET/POST/etc operations, the language should define a simple, transparent and unified way to represent remote procedure calls.
Of course, the above conclusion is easy to make now, but it was hard 20 years ago. Developing such technological infrastructure is an evolutionary process, driven by needs.
But at some point, evolution must stop, rights and wrongs be recognized, and start over cleanly. It will benefit everyone in the long run.
Interestingly, I don't think this conclusion is really any easier now than 20 years ago. Researchers knew all the same stuff 20 years ago; it's not like they didn't know what networks could do. Surely people were working on such systems.
HTTP and HTML succeeded because they were not the kind of system you describe. Documents clearly were not programs. Even a non-programmer could see all the moving parts and get a good feel for what was going on.
The interface between client and server was dead simple, which created two very healthy, very competitive spaces, allowing evolution to do its thing. We've all benefitted, and I don't think stopping evolution is a good idea (never mind "possible").
HTML is programs (cf. <SCRIPT>). GET/POST is a nice general RPC mechanism, etc. My reaction was the exact opposite: everything he's asking for is essentially already there.
in spades.
Actually, that was my first impression too.
Then I realized that couldn't be what Achilleas meant. So I went back and re-read it. Now your guess is as good as mine, and I don't wish to put words in Achilleas's mouth, but I think he was asking for something different.
HTTP is RPC, sort of by definition--but not really. Or, you have to forget everything you know about RPC in practice in order to say that. When someone says RPC, they mean something like CORBA, SOAP, COM+, or Java RMI. (Or Alice ML, fine.) HTTP succeeded because it was not those things.
Some HTML documents are programs, but I imagine Achilleas is asking for something different: an elegant, general-purpose programming language (not JavaScript and certainly not JavaScript 2), in which paragraphs and hyperlinks are library features.
Now--this complicates my story, because people do actually want something like this now. But most people I talk to don't seem to care about the language not being JavaScript. They just want a language to write Google-Maps-like apps in. ECMAScript Edition 4 aims to be that language. I think it has very a good chance of success.
But my point above was just that someone could have invented that 20 years ago. They probably did. We got HTML instead for a reason. HTML was initially successful long before <script> and I think because it wasn't there at the time; see the last sentence of .
Some HTML documents are programs, but I imagine Achilleas is asking for something different: an elegant, general-purpose programming language (not JavaScript and certainly not JavaScript 2), in which paragraphs and hyperlinks are library features.
All the output of a page should be library features.
HTTP and HTML succeeded because they were not the kind of system you describe. Documents clearly were not programs. Even a non-programmer could see all the moving parts and get a good feel for what was going on.
It could also happen with documents being programs. The exact same html structure could be available to document writers, but it could have been programmable instead of hardcoded, giving the chance to people with better needs to evolve the standard without the need to embed other forms in it.
The interface between client and server was dead simple
I'm for simplicity, too. The model of downloading pages is good. The problem is what is inside those pages.
We've all benefitted, and I don't think stopping evolution is a good idea (never mind "possible").
I don't think so. Writing a web application today is a real struggle; creating a web application can take days, whereas the same functionality on the desktop can take hours to code.
Right, only programmers are a tiny minority among human beings.
To get off the ground back then, this hypothetical Web programming language would have had to be very heavily optimized for minimal friction when all you want to do is write text and links. Take that to its logical conclusion and you get wiki-markup. Take it just barely far enough to be an explosive success and the hottest thing on the planet, and you get HTML.
Writers aren't programmers. Generally speaking. Many are nerds. Few are programmers.
What made the Web grow? What made 14 kerjillion people download and install web servers, and write little "welcome to my home page" home pages? How many of those people were programmers? Have we forgotten what it was like?
Embedding a markup language in an expressive programming language is not hard. See Ocsigen for a statically typed embedding of XHTML in OCaml. The only problem with this approach is sensible error messages for non-developers.
No, the other problem is that (especially from a non-coder's point of view) the concrete syntax sucks elephants through gauze. By the time you've written a new one you've essentially reinvented HTML, complete with "code goes here" tags.
While I agree with this one I also can hardly imagine anything sucking more than the HTML/SGML/XML disaster of a concrete syntax that we are doomed with instead. And for ages to come.
From a basic user's point of view, not having to put quotes around text is a big win. You want a markup language rather than a textup language.
That aside, I pretty much agree - having some quick escape for tags and a couple of types of bracket (one for parameters, one for quoting/tag-around-this-code) is much nicer.
But if the language is elegant, the writers (who are not programmers) need not know that they are actually programming. The API that implemented the page output could be declarative in nature.
This comment is not aimed at anyone in particular. I'd like to suggest that if people want further discussion about reinventing the web, that someone start a new forum topic for it, and keep the current thread a little more focused on things more closely related to ES4. (Yes, I know the subjects are connected.)
But at some point, evolution must stop, rights and wrongs be recognized, and start over cleanly.
In biological systems, this "start over cleanly" utopia happens only with mass extinction events, and without recognition of rights and wrongs, and with tragic loss of information due to the slate-cleaning.
Hindsight is never perfect, and it does not tell you where to evolve next. Compatibility is important for the Web, even though it can be a pain to engineer. Over time, bad old forms die off, but you can no more predict when or which ones, than dictate better forms.
In this light, ES4 is indeed righting wrongs and providing cleaner ways to do things. That it cannot, and should not, remove older forms (many of which are just fine) does not make it unfit -- quite the opposite. Right now, JS developers have to struggle to use it "in the large", and many are wooed by bigger languages that can promise better programming-in-the-large support. But the switching costs from JS to these other languages and runtimes are high, higher than the cost of using ES4 (the cost of implementing and shipping ES4 can be borne by a handful of browser vendors, for the common good).
Someone starting from a green field can afford to pick C# on Silverlight and/or WPF on Windows, for example. Most people, whether for commercial reasons or not, prefer to maximize "reach" on the Web. This favors using browser-based standards, including ones not "based" in plugins. And of course, many people are not starting from zero in an empty field of web content.
Jason's right, the Web would never have happened with any 20-year roll-up of conventional wisdom, frozen into a programming language. The Web requires distributed extensibility, backward and forward compatibility, error corrections in browser parsers, and increasing programmability of core browser functionality over time.
No one group or individual will command or control the single lispy protocol envisioned here. Didn't Curl try such a "better is better" approach?
Well, nothing of what you said stops new and more efficient web standards from being created. They don't have to be compatible with old stuff, as long as the two (old and new) can run in parallel, and browser vendors could easily incorporate the new standards in their products for the common good, as you say.
And the new standards could be simpler and easier to implement. Actually, we don't need standards, we need a programming language. Any 'standard' can be covered if the browsers are programmable...
"We don't need $criticalHighLevelThing, we need $lowerLevelThing, any $highLevelThing can be implemented with it"
Some in the W3C believed this around the turn of the century. XHTML2, XForms, and SVG would replace the existing web with clean, well-formed XML content languages. It did not and will not happen. Learning why not is the beginning of wisdom.
Super-programmability of browsers (which I support), or lack of it, has little to do with the reasons.
The Web evolves incrementally, by short path innovations that degrade gracefully in older browsers. Browsers continue to live under footprint pressure, which makes having N >= 2 parallel implementations of anything a survival disadvantage. And web content authors write hypertext, so expect and deserve backward compatibility including standard error corrections (i.e., HTML, not XML).
In the Imagine closing of my long-winded reply to Peter Russell, I point to a future where ES4 and advanced APIs together allow browser innovation to take off in a larger world of downloadable code, instead of depending too much on browser vendors. That's my dream, and it is a real place, not a utopia. It depends on steady, evolutionary progress -- not "rip and replace" and "do things twice".
/be
JS4 is another huge language designed by a committee. Why not actually learn something from computer science? Huge languages aren't acceptable any more.
Instead, meta-programming and domain-specific languages (DSLs) that compile to a virtual machine are the best solution. Firefox needs to integrate a virtual machine that can efficiently and easily support most languages. It should offer support for closures, agents from Erlang, and TCO (tail-call optimization). A Lisp-like functional language would be perfect indeed. You could use the JVM of course, but it has bad support for non-OO languages. You could even integrate the Erlang virtual machine.
Even today many languages like Java (see GWT) use Javascript as a virtual machine and compile to it. That's a good idea, and Javascript should make it easier for developers who want to do this.
Today's most popular languages like C# and Java use a virtual machine. All languages that compile to the JVM or CLR are interoperable. Why can't Firefox and Javascript do the same? Reuse the JVM or design a web virtual machine (WVM) for browsers. Then everyone will use his favourite programming language for browser programming instead of Javascript.
Doesn't Silverlight already do this? Also Sun is going to release a small JRE optimized for the desktop. Bad news for Javascript.
JS2, not JS4.
Firefox is integrating the Tamarin VM and co-evolving it to support ES4 in Flash as well as Firefox.
We are supporting development of other VM-hosted languages. As noted at that link, we're also adding glue so that Tamarin can support JS2 in IE.
The web will support multiple languages interoperably some day, but not soon, and JS will remain the mandatory default language forever. If you bothered to read my comments earlier in this thread, you would have read the reasons for this, and perhaps come up with better arguments.
But your post is pie in the sky, I'm sorry to say. Erlang is not going to be embedded in all browsers, never mind in any one. The odds of a Java come-back in all browsers are low. The only realistic hope for an open VM commonly implemented and interoperating among browsers is Tamarin.
I've read your arguments before commenting of course. I've seen no good arguments against the upcoming consumer JVM from Sun or Silverlight from Microsoft. The new consumer JVM will be a very small download and applications will start very fast. There go technical arguments against Java in the browser.
Very few languages run on Tamarin, while many run on the CLR or JVM. How fast is Tamarin?
This is off topic so I won't argue about this any more. We will simply see which technology will win.
This site is not about technology, but about programming languages. Programming languages shouldn't grow by piling feature upon feature. Instead a language should offer facilities to allow meta-programming, so that users can declare DSLs that are able to interoperate with each other. This is the main point that I'm trying to make. Many of these facilities are discussed on this interesting web site. Programming language designers should not ignore these useful discussions.
Programming languages shouldn't grow by piling feature upon feature. Instead a language should offer facilities to allow meta-programming, so that users can declare DSLs that are able to interoperate with each other.
So instead of talking how technologies are getting simpler (smaller JVMs and Silverlight), I'm wondering what we are left with in terms of simple PLs and DSLs. Which programming languages do you have in mind? Or do you want the languages to get out of the way, and address the platform/VM directly?
Meta-programming for bootstrapping reasons, and for emulation of magic-in-ES3 native types, was a conscious design goal of ES4 (or JS2 -- we are equating the MIME types). But meta-objects for ES3 are not enough: users deserve convenient syntax, which without macros mean some growth in the surface language.
ES4 classes cannot be simulated by closures. ES3 allows mutation of any object, including a captured activation object.
Let's say a meta-object API was added to seal objects. That's still not enough, because classes also provide fixed properties (fixtures), with convenient syntax. Classes also provide instance-bound methods that cannot be hijacked. Simulating all of these using closures plus opt-in MOP hook calls requires a lot of source-level and runtime overhead.
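A hedged sketch of the gap being described (class syntax as in the overview; the Account name is invented):

// ES3 today: any method on any object can be replaced at runtime
var d = new Date();
d.getTime = function () { return 0; };   // silently hijacked

// ES4: instances of non-dynamic classes are sealed, and their members
// are fixtures
class Account {
    private var balance: double = 0;
    function deposit(v: double): void { balance += v; }
}
var a = new Account();
a.deposit = function () {};   // error: deposit is a fixture and cannot be overwritten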
In a perfect world, perhaps there would have been macros and a MOP long ago. In the real world, we're growing the language ahead of adding macros (which I think we will add, eventually). To deprive users of improvements, including usable syntax, would be wrong at this late date (eight years after ES3).
For "how fast is Tamarin", see Chris Oliver's blog. For one micro-benchmark (Tak), it's 4-5x slower than HotSpot, when last tested. Not shabby. The CLDC JVM or a successor won't be HotSpot-speed either.
But really, you are missing something else in my arguments that does not depend on "seeing who wins": Tamarin is already in Flash, so it's much more widely deployed than a similarly small and fast-enough JVM. Distribution is everything, and you have to buy it or already have it. Sun will be rolling a stone up a hill, but they are way behind Adobe.
A final point that I've made before, perhaps not here: JVMs don't "do JS first, fast, and compatibly" (Rhino is not a drop-in alternative to browser engines).
The time for a JVM to have provided high performance to JS was years ago, before Sun apparently denied Macromedia a license for a smallish JVM on which ActionScript (a JS variant) could be hosted. There's no going back now.
Can we tone down the rhetoric a bit? It will help people see the more informative points you are trying to make.
I meant "pie in the sky" in the nicest possibly way ;-).
The frustrating thing about this thread is the divergence between what people think might be possible in browsers, and the far more limited set of uplifts that can be engineered compatibly, and actually deployed to enough users to make a difference by inducing web content authors to target new languages.
Erlang, various JVMs, and the like are good to great pieces of work, but they are not going to get onto 90%+ of desktops any time soon. Mobile may be another matter for Java, but J2ME is apparently very fragmented by incompatibilities across phones and OSes.
I should add that Tamarin is only one of several advanced ES4 VMs in progress that I know about. Not all have clear paths to widespread distribution, but in total it looks like ES4 has a shot at being more widely supported on desktops than any other language. Like ES3 before it.
I wasn't replying to you (look at the indentation)!
Sorry for the misunderstanding.
It should offer support for closures, agents from Erlang and TCO (tail call optimization). A Lisp-like functional language would be perfect indeed.
Rewrite that as "It should offer [X]. [Y] would be perfect indeed," and every single programmer on the planet will fill in X and Y differently.
Taking my cue from Brendan, here's how I'd complete the sentence: "It should offer pie. Pie in the sky would be perfect indeed."
The important point being that I'd like my pie down here where I can actually eat it! Given a choice between unachievable perfection and achievable improvement I know which I'll take.
If you are interested in a closer look at the relationship between ES3 and the current draft ES4, here's a new document detailing the proposed incompatibilities with ES3. Backwards-compatibility is an extremely important issue to TG1, and always has been.
Also, we've released another build of the reference implementation, now with binaries for Windows and Linux -- I encourage anyone interested to download and try it out (keeping in mind this is still an early pre-release). | http://lambda-the-ultimate.org/node/2504 | crawl-002 | refinedweb | 9,849 | 62.68 |
@namespace url();
@-moz-document domain("instructables.com") {
    #div, .nav { background: url(""); }
}
I'm very confused, why it's only working with a background image and not with a background-color... Maybe somebody who is better in css than me (probably everybody) can try it.
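One likely explanation is specificity: the site's own background-color rule still wins, so a user style usually needs !important. Something along these lines might work (selector and color here are guesses):

@-moz-document domain("instructables.com") {
    .nav { background-color: #f0f0f0 !important; }
}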
Case in point.... Steve's son standing behind me as I type... "Whoahhhh. Why's it white! Looks weird."
Perhaps a greasemonkey script is in order. | http://www.instructables.com/community/Where-have-all-the-oranges-gone-Long-time-passin/ | CC-MAIN-2017-47 | refinedweb | 117 | 71.51 |
There are three types of UUIDs which uuidgen can generate: time-based UUIDs, random-based UUIDs, and hash-based UUIDs. By default uuidgen will generate a random-based UUID if a high-quality random number generator is present. Otherwise, it will choose a time-based UUID. It is possible to force the generation of one of these first two UUID types by using the --random or --time options.
The third type of UUID is generated with the --md5 or --sha1 options, followed by --namespace namespace and --name name. The namespace may either be a well-known UUID, or else an alias to one of the well-known UUIDs defined in RFC 4122, that is @dns, @url, @oid, or @x500. The name is an arbitrary string value. The generated UUID is the digest of the concatenation of the namespace UUID and the name value, hashed with the MD5 or SHA1 algorithms. It is, therefore, a predictable value which may be useful when UUIDs are being used as handles or nonces for more complex values or values which shouldn't be disclosed directly. See the RFC for more information. | https://man.linuxreviews.org/man1/uuidgen.1.html | CC-MAIN-2020-24 | refinedweb | 187 | 70.33 |
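For example, a name-based UUID for a DNS name can be generated like this (the name value is arbitrary):

uuidgen --sha1 --namespace @dns --name www.example.com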
almost? I just came across a counterexample in an old script. It splits a line and then removes the last character from one of the resulting values.
Well, chop is so simple and presumably fast, that it seems awkward to do without it. I suppose that substr is the general case, but I would have to look up the arguments in the docs, to tell it to locate from index -1 through the end, and replace with nothing. Replacing a trivial, easy-to-understand, and very efficient function with one that's rarely used (so I have to look it up) just doesn't sit well with me.
Obviously, I'd define my own sub chop that does this, just to keep the point of usage self-documenting.
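For instance, a rough drop-in replacement built on substr (untested sketch) might be:

# modifies its argument in place via the @_ alias, like the builtin
sub my_chop {
    my $last = substr $_[0], -1;   # grab the final character
    substr($_[0], -1, 1, '');      # remove it from the string
    return $last;                  # chop returns the removed character
}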
But, if people are going to do this, what's the point of removing it from the language? Maybe the string "class" can have members for friendly-named common things to do, even if they could all be expressed with regex's or substr's. Perhaps $s.chop() because we already know what it means, and more generally $s.chop($length) will efficiently delete the last n items (bytes, chars, glyphs, depending on the same criteria that affects a regex at that point) from the string.
—John
Good point, brother John
Uhm... maybe, instead of wiping away some functions, they could be made "optional" and imported via a use, like use Perl5 qw(chop);
I wonder what Perl6 people think about that...
Ciao!--bronto
UPDATE I'm just using chop right now after years! I have a string that matches /^\d+[MG]$/i, a disk quota given in Megs or Gigs, and I have to convert it to Kbytes:

if (defined $quota) {
    my $factor = chop $quota;
    my $softquota = $quota * $QuotaConversionFactor{uc($factor)};
    ...
Don't let it go away, please... :-(
--
# Another Perl edition of a song:
# The End, by The Beatles
END {
$you->take($love) eq $you->made($love) ;
}
I like the idea of calling it Perl5 or somesuch, and importing just the ones I need.
Personally, I haven't ever used chop except in the occasional obfu or golf.. having to look up the substr arguments wouldn't be any noteworthy pain since I just don't need chop that often. Actually, I need substr fairly frequently, so I wouldn't have to look it up at all. :-)
I suppose a few people might miss chop, but I won't be one of them. Just like some people have less use for substr than me. I do think that it is one of those things that you may as well keep because we've always had it, or just as well ditch because it's hardly much use.
So the question is, would it hurt to keep around? I think one argument in favour of ditching is that it's one less thing to advise beginners about.
I don't know. I see your point that this one specific operation will be a lot more awkward afterwards, and I concede that. What I'm not sure about is the significance of that argument. Count me as undecided, with a tendency towards ditching.
Makeshifts last the longest.
It's about the risk/reward ratio. Why go to all of the trouble to include a seldom used keyword that introduces so many bugs? If you need chop, there are plenty of ways to duplicate the functionality. Further, if you go to the trouble of duplicating that, it probably means that it's really what you need.
So many things are being added to Perl, it makes sense to remove items that are seldom used and prone to cause problems. I rarely see an instance of chop that isn't a bug (you should see all of the code reviews I've done on applicants lately!).
Cheers,
Ovid
Join the Perlmonks Setiathome Group or just click on the link and check out our stats.
I can well understand them getting rid of chop, although I do think the main problem is having it named so similarly to chomp.
When I was beginning Perl I came across chop and chomp, and whilst I could remember the fact that one wasn't fussy what it removed and one only removed end-of-line characters it took me a remarkable amount of time to learn which was which. Caused some nasty bugs, too.
Since I learnt the names properly I don't think I've ever touched chop for anything. So far you've only said you've been able to find one example, and people can do it with substr's (substr ($foo,-1) = '', admittedly messy), or a simple regexp $foo =~ s/.$//; which to me is perfectly readable. I really don't see the advantage of keeping chop paying off against the risk of having the confusing (and easily mis-typed) chomp/chop pair.
I also can't see many people are going to go and write their own version of chop, to be honest. It's a simple enough thing to 'just do' and the function call imposes a much higher overhead than the operation itself.
or a simple regexp $foo =~ s/.$//; which to me is perfectly readable
...but will not handle multi-byte characters. chop will.
~Particle *accelerates*
Which, fortunately, is why we've got the marvellous \X sequence: s/\X$// will do what you want.
--
Tommy
Too stupid to live.
Too stubborn to die.
Update Perhaps you meant multiple codepoints used to "compose" one glyph, rather than multiple bytes to form one codepoint. The former is what \X does. Perl5 regex only does the latter; Perl6 is said to do the former too (u0, u1, and u2 levels if memory serves).
...but will not handle multi-byte characters.
It will in Perl 6, and already does under the utf8 pragma in Perl 5.6+. Besides, as a Perl 5 regex, it doesn't make sense, for $ matches before a trailing \n.
- Yes, I reinvent wheels.
- Spam: Visit eurotraQ.
--
May the Source be with you.
You said you wanted to be around when I made a mistake; well, this could be it, sweetheart.
$str = 'abcde';
$str = substr $str, 0, (length($str) - 1);
print $str;
or, more compactly:

substr($str, -1, 1, '');
vessel becomes selves:
vessel => SLICE => 'ves' | 'sel'
       => SWAP  => 'sel' | 'ves'
       => NEW   => 'selves'

loyal becomes alloy:
loyal => SLICE => 'loy' | 'al'
      => SWAP  => 'al' | 'loy'
      => NEW   => 'alloy'

$word = chop($word) . $word;
-Blake
Why not provide all the same commands (perhaps as members) that treat a string semantically as a list of characters? Have push, pop, shift, and unshift. substr is like splice.
I read this earlier and for some reason, I haven't been able to shake it out of my head all day. I've finally settled on why. I would rather see chomp go.
It seems to me that chop is the more general function in that it'll remove any character. Furthermore, it has a more useful return value. Lastly, chomp could easily be replaced with a nice simple regex s|$/$||; which, in scalar context, would return a value about as useful as chomp's.
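That is, roughly (assuming the usual simple scalar $/):

chomp $line;          # removes a trailing $/
$line =~ s|$/$||;     # near-equivalent; the return values differ slightly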
Don't get me wrong. I understand the benefits of chomp (for cross platform code in particular) and I know chomp is ubiquitous in existing code. I don't really think chomp should be removed either.
It's just that, of the two, I like chop more. It's an old friend that I still fondly remember from the perl4 days before chomp.
If the standalone chop has to go, I like John's idea of making it a method in the string class. That said, I strongly agree that chop should be kept. I don't think it should be renamed. Without it, Perl will feel just a little less like Perl to me.
-sauoq
"My two cents aren't worth a dime.";
I think that for the very rare situations when chop is actually useful, s/.$// will suffice. All sorts of optimizations will make it fast. Something used this little does not need to occupy space in the core namespace (PHP has to be "better" for something).
Something used this little does not need to occupy space in the core namespace (PHP has te be "better" for something).
Well, PHP puts everything it can think of in the core namespace (and has no other) and they didn't get chop right, either.
;-)
— Arien
I've never needed to use chop(), so I not really fussed about its disappearance or not, but this does give me the opportunity to mention my great wish for Perl6.
Please let me treat my strings as arrays of char!
Yes, I know I could probably write a module, Maybe use Overload; (can it handle operators that are paired []?) or just split to an array and then join (or unpack & pack; or substr as Rvalue and Lvalue) etc., but it would be so useful sometimes to be able to say $string[n]... which, with Perl6's new treatment of sigils, would no longer be ambiguous.
Then chop (getting smoothly back on topic:) simply becomes $#string-- or whatever $# will be in the new money.
my $dna = 'gagagtatgcgattaatgcatattataaaaagcggcatgacggca';
for (1..10)
{
    my $base = chop($dna);
    print "$base\n";
}
Helgi Briem
Red Hat Bugzilla – Bug 57122
Internal error: Segmentation fault (program cpp0)
Last modified: 2007-04-18 12:38:34 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)
Description of problem:
gcc crashes with a segmentation fault when I try to compile a C source file with a certain error.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. command line: gcc -D'MD5Name(x)=#Broken##x' test.c
test.c:
#include <stdio.h>
int main ()
{
printf (MD5Name(SLAWA));
return 0;
}
Actual Results: gcc emits diagnostics:
<command line>: '#' is not followed by a macro parameter
gcc: Internal error: Segmentation fault (program cpp0)
Please submit a full bug report.
See <URL:> for instructions.
gcc also creates core dump in current directory
Expected Results: compile-time error
Additional info:
I will try to attach core dump and test.c files
Created attachment 39690 [details]
source file needed to reproduce bug
Created attachment 39691 [details]
Core dump
The ICE was fixed in gcc-2.96-100, though the warning is correct; such an argument is bogus.
If you want to stringify the whole thing (which I believe, because you're passing it to printf), then it should be
gcc -D'MD5Name(x)="Broken" #x' test.c
*** This bug has been marked as a duplicate of 54380 *** | https://bugzilla.redhat.com/show_bug.cgi?id=57122 | CC-MAIN-2017-13 | refinedweb | 223 | 57.37 |
An ALGORITHM is a step-by-step process for solving a problem. A problem is solved by using your internal knowledge and acquiring external knowledge. There are many different ways of solving a problem. Look around: your friends have different ways of earning their income, driving a car, scheduling their classes, and so on. Therefore, we all have our own algorithm to earn income or drive a car.
ALGORITHM DEVELOPMENT
When developing an algorithm, it's important to analyze all of the requirements involved with that part of the problem. This ensures that the algorithm takes into account all aspects of the problem. The design of a program is often revised many times before it is finalized.
Implementation is the process of writing the source code that will solve the problem. This is the act of making the design work.
Testing a program includes running it and making sure that it works. There are two types of errors: programming language errors, and logical errors, which are errors in the logic of the algorithm. The first kind is normally handled through the compiler of the language; after compiling the program, a list of errors and their descriptions is given. Checking the correctness of the logic of the program requires running it multiple times with various input data and carefully examining the results. Testing might include hand-tracing program code, in which the developer mentally plays the role of the computer to see where the program logic goes for a given set of input data. If the result matches the expected result then the logic is correct; otherwise, there are logical errors.
For example, to give directions to a new friend to come to your house, there are many algorithms. One is to drive from your friend's house to your house and write down the names of the streets and the mileage, to create a step-by-step procedure for driving from one location to another. Another algorithm might be to look at a map and develop such a step-by-step procedure. Yet another might be to ask someone you trust to write such an algorithm and pass it on to your friend. After all, if your friend can't find your house, the algorithm has a logical error. If your friend cannot read your handwriting, the algorithm has a language error.
Practice a:
Evaluate whether the following algorithm is correct. Find the programming language error and the logical error.
Direction from Bob's house on rout 45 to White Hall:
1. Make left from the parking lot.
2. Come to the first steppppsign.
3. Make left to rout 45.
4. Drive 1.2 mile on rout 45.
5. Make right to rout 480.
6. Drive .3 miles.
7. Make right to Higggg street.
8. Drive on High street for .5 miles.
9. White Hall is on your left hand side.
AUTOMATION PHASES
We use computer programs to do our job or to solve a problem. The user of the computer runs a computer program, which is called Software. A Software Engineer is the person that develops the software and is also known as a computer programmer. As we learn more about a programming language, it is also important to understand and develop a good design. After the design process is completed, the programmer writes the program in a programming language, following a set of syntax rules that represents the grammar of the language. The writing of computer instructions is called coding and the written program is known as source code or simply code.
Programming development consists of four phases:
I. Specification: Collecting the requirements.
II. Design: Developing the design using flowchart.
III. Algorithm: Writing the step-be-step instructions.
IV. Tracing: Testing the logic of the program.
I. Specification
Requirements often address the input data, output data, standards, formulas, formatting, and graphical interface. Normally these requirements are stored in a database called a Dictionary. Following is how we can store a variable specification.
For example, suppose we are writing a program to read employee information and to print a list of labels. The dictionary information is as follows:
II. Design, using Flowchart
Any system without a design has no guarantee of success. A civil engineer would never consider building a bridge without designing it first. Developing an automated system is no exception. Every automated system implements many programs. A program consists of one or more algorithms. A good programmer spends time thinking about the algorithms involved before writing any code.
An algorithm is often described using flow charting symbols.
What is a Flowchart: A flowchart shows the step-by-step process using pre-defined symbols. A flowchart provides enough structure to show how the code will operate without getting involved with any programming language.
A list of Flow Chart Symbols is as follows:
A terminal symbol is always the first and the last symbol in any flowchart. As the first symbol it contains the name of a module, and as the last it contains the word 'End'.
A processing symbol is used for equations.
III. Algorithm
As we discussed in chapter 3, every language has a list of built-in methods and a list of user-developed methods. We have learned about the System and JOptionPane classes. These two classes have many methods to talk to the user. Following are two more classes, the Format and String classes.
FORMATTING
By default, Java displays the variables in the format that matches the data type. Integer values, for example, are normally displayed in decimal form. To change the display format, we can use the methods in the NumberFormat class.
This class provides generic formatting for numbers. The principal methods of this class are getPercentInstance and getCurrencyInstance. Rather than instantiating an instance using the new operator, one calls a method of the class.

import java.text.NumberFormat;

NumberFormat getPercentInstance returns a NumberFormat object that represents a percentage format for the current locale.
NumberFormat getCurrencyInstance returns a NumberFormat object that represents a currency format for the current locale.
double cost = 10.55, discount = .10;
NumberFormat money = NumberFormat.getCurrencyInstance();
System.out.println("Cost = " + money.format(cost));
NumberFormat percent = NumberFormat.getPercentInstance();
System.out.println("Discount = " + percent.format(discount));
Practice a.
Mary Roman bought a new car. Write a program to ask Mary the style, color, price, and amount of down payment. Calculate what percent of the total price was the down payment. Call this variable percentDownPayment. Display the information, including percentDownPayment, in a JOptionPane window.
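A minimal sketch of one way to solve this practice problem (the class name, prompts and variable names here are assumptions, not part of the assignment):

import javax.swing.JOptionPane;
import java.text.NumberFormat;

public class carDemo {
    public static void main(String[] args) {
        String style = JOptionPane.showInputDialog("Please enter the style");
        String color = JOptionPane.showInputDialog("Please enter the color");
        double price = Double.parseDouble(JOptionPane.showInputDialog("Please enter the price"));
        double down = Double.parseDouble(JOptionPane.showInputDialog("Please enter the down payment"));
        // fraction of the total price paid up front
        double percentDownPayment = down / price;
        NumberFormat percent = NumberFormat.getPercentInstance();
        NumberFormat money = NumberFormat.getCurrencyInstance();
        JOptionPane.showMessageDialog(null,
            "Style = " + style
            + "\nColor = " + color
            + "\nPrice = " + money.format(price)
            + "\nDown payment = " + percent.format(percentDownPayment));
    }
}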
DECIMAL FORMAT
This class formats numeric output in defined patterns. It is instantiated using the new operator. Examples of the methods in this class are:

import java.text.DecimalFormat;

DecimalFormat(String pattern) - this method constructs a new decimal format. For example, "0.###" would instruct the program to return a number with at most three digits after the decimal point and at least a 0 to the left of the decimal.
double area = 10.55656554;
DecimalFormat twoDecimal = new DecimalFormat("0.##");
// display the area with two decimal digits.
System.out.println("Area = " + twoDecimal.format(area));
Practice b.
Write a program to read two decimal numbers. Calculate the quotient of the two numbers (divide the first number by the second). Display the numbers and the quotient both in the System and JOptionPane windows. Make the display message user friendly, such as the following:
First number =
Second number =
Quotient =
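One possible sketch (the class name and prompts are assumptions):

import javax.swing.JOptionPane;
import java.text.DecimalFormat;

public class quotientDemo {
    public static void main(String[] args) {
        double first = Double.parseDouble(JOptionPane.showInputDialog("Please enter the first number"));
        double second = Double.parseDouble(JOptionPane.showInputDialog("Please enter the second number"));
        DecimalFormat twoDecimal = new DecimalFormat("0.##");
        String result = "First number = " + first
            + "\nSecond number = " + second
            + "\nQuotient = " + twoDecimal.format(first / second);
        // display the information in both windows
        System.out.println(result);
        JOptionPane.showMessageDialog(null, result);
    }
}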
IV. Tracing
To test the program and validate the result, we need to trace the logic using real numbers. Suppose a program reads the radius of a circle and calculates the area and circumference. Following are the phases:
I. Specification: Read the radius of a circle. Calculate the area and circumference. Variable descriptions are as follows:
II. Design:
import javax.swing.*;
import java.text.DecimalFormat;
import java.math.*;

public class circleDemo {
    double radius, area, circumference;
    DecimalFormat twoDecimal = new DecimalFormat("0.##");

    public circleDemo()
    {
        String textRadius = JOptionPane.showInputDialog("Please enter a number for radius");
        radius = Double.parseDouble(textRadius);
        area = Math.PI * Math.pow(radius, 2);
        circumference = 2 * Math.PI * radius;
        // build result
        String result = "Radius = " + radius
            + "\nArea = " + twoDecimal.format(area)
            + "\nCircumference = " + twoDecimal.format(circumference);
        // display the information in the System window
        System.out.println("\n\t" + result);
        // display the information in the JOptionPane window
        JOptionPane.showMessageDialog(null, result);
    }

    public static void main(String[] args)
    {
        circleDemo application = new circleDemo();
    }
}
IV. Tracing
When we run the program, for a radius of 4 the area must come out as 50.27 and the circumference as 25.13 (to two decimal digits).
QUESTIONS AND PROJECTS
KEYWORDS
• Four phases of automation
• NumberFormat
• DecimalFormat
QUESTIONS
1. How can we truncate a decimal number to three decimal digits?
Project1: Jessica Kinzer has a designer clothing store called "Jessica Kinzer Designer Clothing." She keeps the sales by salesperson name and their first, second, third, and fourth quarter sales. She is interested in an automation system that can ask for a salesperson's name and their sales for each quarter. She would also like to see the total sales for that person. Develop the system and show the four phases of the automation.
An example of data:
Salesperson 1st Qtr 2nd Qtr 3rd Qtr 4th Qtr
Simon 50000 78000 99000 95000
Total for Simon = 322000
I. Specification
IV. Algorithm | http://www.algebra-online.com/tutorials-4/radical-inequalities/problem-solving-using.html | CC-MAIN-2014-42 | refinedweb | 1,492 | 51.24 |
Ruby implementations are a dime a dozen nowadays. There are already two implementations of Ruby for the JVM (JRuby and XRuby), and .NET is catching up as well. IronRuby has caused a lot of buzz in the past month, but until it's released in late July 2007, it's not known just how complete it'll be. A compliant Ruby parser is a big part of a Ruby implementation, and using Ruby.NET's parser surely saves the IronRuby team a lot of work.
Since the last release we have added support for interoperability with other .NET languages, so that components developed using other .NET languages can conveniently use classes implemented using Ruby.NET and vice versa. An example for this is shown with a Ruby class that's used in C# code. The Ruby class:
class Person
def init(name, age)
@name = name
@age = age
end
def print()
puts "#{@name} is #{@age}"
end
end
Person bruce = new Person();
bruce.init("Bruce", 42);
bruce.print();
We will soon be moving to a more traditional open source model of community contribution to our code base and will be calling for volunteer developers. If anyone has any experience in managing that kind of process, we'd be interested in your input. In light of recent doubts about IronRuby, and the fact that over the past year, many Ruby runtime developers have been hired to work on their projects (JRuby, XRuby, Rubinius, IronRuby), .NET Open Source developers with an interest in Ruby might want to look into this.
Blogging on App Engine, part 10: Recap
Posted by Nick Johnson | Filed under coding, app-engine, tech, ...
Blogging on App Engine, part 8: PubSubHubbub
Posted by Nick Johnson | Filed under tech, app-engine, coding, bloggart
This is part of a series of articles on writing a blogging system on App Engine. An overview of what we're building is here.
Notice anything different? That's right, this blog has now been migrated to Bloggart - after all, if I won't run it, why should I expect anyone else to? ;)
Migrating comments to Disqus
In the previous post, I promised I'd cover migrating comments in today's post. Unfortunately, doing so proved to be both more complicated and less interesting than anticipated, so I'm going to instead provide an overview of the required steps. If you're really determined to see all the nitty-gritty, you can examine the change yourself.
Import of comments to Disqus is through the Disqus API. The API uses a straightforward RESTful model, with requests URL-encoded as either GET or POST requests, and responses returned as JSON strings. To make our lives easier, we'll define a straightforward wrapper function to make Disqus API calls:
def disqus_request(method, request_type=urlfetch.GET, **kwargs):
    kwargs['api_version'] = '1.1'
    if request_type == urlfetch.GET:
        url = "" % (method, urllib.urlencode(kwargs))
        payload = None
    else:
        url = " ...
Blogging on App Engine, part 7: Migration
Posted by Nick Johnson | Filed under tech, app-engine, coding, bloggart
This is part of a series of articles on writing a blogging system on App Engine. An overview of what we're building is here.
We're finally going to tackle (at least part of) that big bugbear of blogging systems: Migrating from the old system to the new one. In this post, we'll cover the necessary pre-requisites, briefly cover the theory of importing from a blogging system hosted outside App Engine, then go over a practical example of migrating from Bloog (since that's what this blog is hosted on).
Regenerating posts
Before we can write migration or import scripts, we need to improve (again) our dependency regeneration code. One thing that's probably occurred to you if you've been following this series is that there's currently no easy way to regenerate all the resources when something global changes such as the theme or the configuration. One could simply call .publish() on each blog post, but that would result in regenerating the common resources, such as the index and tags pages, over and over again - potentially hundreds of times. The same applies to migration: We could publish each new post as we process it, but this ...
Opened 8 years ago
Closed 8 years ago
#13375 closed (duplicate)
django model validation error
Description
I'm having a problem with django svn 12995 that does not exist in 12852. It appears that model validation is broken...
Start a django project 'testme':
django-admin.py startproject testme
inside testme Start an app 'testapp':
manage.py startapp testapp
edit testme/settings.py and add to installed applications:
'django.contrib.comments', 'testapp',
inside testme/testapp/models.py, simply add:
from django.contrib.comments.models import Comment
Run anything that validates models (syncdb, shell, etc):
run manage.py shell
Will show the model validation error:
error: cannot import name Comment
This worked in svn 12852 but is broken in 12995
Since the comments framework was last changed before 12852, I believe the problem is in the model validation.
Looks like it happened in revision 12976
django-adminplus 0.1.5
Add new pages to the Django admin.
All AdminPlus does is allow you to add simple custom views (well, they can be as complex as you like!) without mucking about with hijacking URLs, and providing links to them right in the admin index.
Installing AdminPlus
Grab the package (it's pip-installable as django-adminplus), then register your custom views:

admin.site.register_view('somepath', my_view)

# And of course, this still works:
from someapp.models import MyModel
admin.site.register(MyModel)
Now my_view will be accessible at admin/somepath and there will be a link to it in the Custom Views section of the admin index.
register_view takes a 3rd, optional argument: a friendly name for display in the list of custom views. For example:
def my_view(request):
    """Does something fancy!"""

admin.site.register_view('somepath', my_view, 'My Fancy Admin View!')
All registered views are wrapped in admin.site.admin_view.
To follow this tutorial, it is recommended to install a compiler in your local environment and run the C++ code that will be discussed in this tutorial. Many Linux systems come preinstalled with the g++ compiler, and I will use the same in this tutorial. You can also use an IDE like the open-source Visual Studio Code provided by Microsoft, a cross-platform IDE for writing code efficiently. It also provides many functionalities that increase the speed of coding.
Calculating Factorial in C++
Factorial of a natural number n is the product of all the natural numbers up to and including n. The general formula for calculating the factorial of a number n is:
Fact(n) = n x (n-1) x (n-2) x … x 3 x 2 x 1
This formula can also be written as Fact(n) = n x Fact(n-1)
For example, if we want to calculate the factorial of 5, then it will be:
Fact(5) = 5 x 4 x 3 x 2 x 1 = 120
As a result, we get that the factorial of 5 is 120.
Note: The factorial of zero and of one is the same, i.e. 1. So, Fact(0) = Fact(1) = 1
We have learned how to calculate the factorial manually using multiplication. Now let us write C++ programs to implement the same functionality.
Calculating Factorial using loops
Here we will see how to calculate the factorial of a number using loops. We will use the for loop of C++ to iterate from the given number down to 1, multiplying a running product by each number. The below code shows how we can calculate the factorial of a number.
// including the required header files
#include <iostream>
using namespace std;

int main()
{
    int p = 1, num, i;
    cout << "[+] Enter the number : ";
    // inputting the number for which we need to calculate the factorial
    cin >> num;
    // calculating the factorial
    for (i = num; i > 0; i--)
    {
        p = p * i;
    }
    // displaying the factorial of the number to the console
    cout << "[+] The factorial of " << num << " is " << p << endl;
    return 0;
}
We first include all the required header files, and the C++ boilerplate code has been added. In the main function, we created three variables, viz. p, num and i, to store the factorial, the input of the user, and the loop variable, respectively. Next, we prompt the user to enter the number and then store the number in the num variable. Next, we use the for loop of C++ to iterate from the number down to 1, multiplying the running product p by each number. After the loop completes, we print the result of the multiplication, i.e., the factorial, to the console.
Calculating Factorial with Recursion
We have seen how we can calculate the factorial of a number by just using for loops, but there is also an easier way to perform the same operation, i.e., by using recursion. Recursion is the process in which a function is called in its own definition. Recursion is a popular technique in algorithm design, and it can be used to write larger programs in a simple way. But recursion can't be used everywhere, as it sometimes complicates the code and makes it harder to understand and write. So let us see how recursion helps us to calculate the factorial of a number.
Let us implement recursion to calculate the factorial of a number using the C++ programming language. See the below code for illustration.
// including the required header files
#include <iostream>
using namespace std;

// creating the fact() function where we will perform recursion
int fact(int n)
{
    if (n <= 1)
    {
        return n;
    }
    // performing recursion
    return n * fact(n - 1);
}

int main()
{
    int num;
    cout << "[+] Enter the number : ";
    cin >> num;
    // calling the fact() function and displaying the return value of it
    cout << "[+] The factorial of " << num << " is " << fact(num) << endl;
    return 0;
}
In the above code, we first include the required header files and the basic boilerplate of the C++ language. Next, we create a fact() function that accepts an integer as an argument and returns the factorial of the number. This function implements the recursion used to calculate the factorial. In the main() function, we prompt the user to enter the number for which we will calculate the factorial. Next, we call the fact() function, passing the number, and then display the function's return value. As a result, we will get the factorial of the number displayed in the console.
Conclusion
In this tutorial, we have seen how to calculate the factorial of a number. We use two methods to operate, i.e., by using the simple for loop and performing recursion. You may also want to see our tutorial on printing spiral matrices using C++. | https://www.codeunderscored.com/calculating-factorial-in-c/ | CC-MAIN-2022-21 | refinedweb | 854 | 58.21 |
/* I've actually located the segmentation fault location, I believe it is because I am trying to declare one array memory location equal to another when they are in separate memory segments. However, if this is the case I have no idea on how to correct it. Here is my code so far. I went through and commented out chunks at a time to isolate where the segmentation fault occurs (line 70). I will comment the line below causing it. */

#include <iostream>
#include <cctype>   // for toupper
using namespace std;

// INPUT - a char array
// OUTPUT - none
// RETURN - a char array
char * soundex(const char word[], char sound[]);

int main()
{
    char sound[40]; // so the house isn't fire bombed
    char word[40];  // user inputs their word here
    // Prompt user for an input
    cout << "Please input 10 words you would like to know the soundex code of" << endl;
    for (int c = 1; c <= 10; ++c)
    {
        cin >> word;
        cout << word << " " << soundex(word, sound) << endl;
    }
    return 0;
}

char * soundex(const char word[], char sound[])
{
    char word2[40]; // used to puttz around with
    for (int b = 0; b < 40; ++b) // copy word[] to word2[]
    {
        word2[b] = word[b];
        if (word[b] == '\0')
            break;
    }
    sound = "Z0000"; // initialize sound so any 0's are contained
    word2[0] = toupper(word[0]); // used to make the loop nicer
    for (int i = 1; i < 40; ++i)
    {
        if (word[i] == '\0') // if you reach a null character stop
            break;
        word2[i] = toupper(word[i]); // convert everything to a standard
        switch (word2[i]) /* convert word[] to the digits 0-6 depending on their
                             soundex coding - note because of the for loop this
                             only applies to word[1+] */
        {
            case 'B': case 'P': case 'F': case 'V':
                word2[i] = '1'; break;
            case 'C': case 'S': case 'K': case 'G':
            case 'J': case 'Q': case 'X': case 'Z':
                word2[i] = '2'; break;
            case 'D': case 'T':
                word2[i] = '3'; break;
            case 'L':
                word2[i] = '4'; break;
            case 'M': case 'N':
                word2[i] = '5'; break;
            case 'R':
                word2[i] = '6'; break;
            case '\0': // even though null won't make it this far...
                word2[i] = '\0'; /* I did this because I felt it might fix the
                                    segmentation fault. But it didn't */
            default:
                word2[i] = '0'; break;
        }
    }
    sound[0] = word2[0]; // set sound[0] the same as word2[0]
    /* The above line is the first instance of the segmentation fault occurring,
       it also happens below in the while loop, most likely for the same reason.
       However, as far as I know this is syntactically correct (as I used it in
       the above code to copy word[] into word2[]) and it is logically what I
       need to do. */
    int count = 1;    // used for the while loop as a counter
    int position = 1; // used for the index
    while (count <= 5)
    {
        /* As long as the char is not the same as the previous and is not a zero,
           assign it to sound. If a null is reached end. Also copies nulls before
           ending */
        if (word2[position] != word2[position-1] && word2[position] != '0')
        {
            sound[count] = word2[position];
            /* The segmentation fault most likely occurs here as well */
            ++count;
        }
        ++position;
        if (word[position] == '\0')
            break;
    }
    return sound;
}

/* Any help would be greatly appreciated! Right now I am lost as to how to fix
   this, I have tried a lot of different approaches including declaring a
   char x = word2[0] then sound[0] = x but everything I did had the same
   segmentation fault. I am hoping that it is not because I am being limited to
   too few memory spaces and am exceeding that... if that is the case I'm hoping
   there is a way around it */
»Consider filtered views
Both namespaces and ACL policies can filter Nomad objects from an operator's view. This can occasionally lead to confusion when a job with the same name is running in multiple namespaces. ACLs will filter the views down to objects that are accessible via the provided token (or the anonymous policy).
»Secure the UI
Depending on the size of your team and the details of your Nomad deployment, you may wish to control which features different internal users have access to. You can enforce this with Nomad's access control list (ACL) system.
Nomad starts with ACLs disabled by default, which means all features—read and write—are available to all users of the Web UI out of the box. Visit the Secure Nomad with Access Control collection to learn how to configure your Nomad cluster for ACLs.
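As a minimal sketch, an ACL policy granting read-only access to a single namespace looks like this (the namespace name is just an example):

namespace "default" {
  policy = "read"
}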
»Don't forget "as-code"
Although the Web UI lets users submit jobs in an ad hoc manner, Nomad was deliberately designed to declare jobs using a configuration language. It is recommended to treat your job definitions, like the rest of your infrastructure, as code.
By checking your job definition files into source control, you will always have a log of changes to assist in debugging issues, rolling back versions, and collaborating on changes using development best practices like code review.
section.container.element
Value type: list (block | wrapper); default: block
Select XSL-FO element name to contain sections.

Description

Selects the element name for outer container of each section. The choices are block (default) or wrapper. The fo: namespace prefix is added by the stylesheet to form the full element name. This element receives the section id attribute and the appropriate section level attribute-set.

Changing this parameter to wrapper is only necessary when producing multi-column output that contains page-wide spans. Using fo:wrapper avoids the nesting of fo:block elements that prevents spans from working (the standard says a span must be on a block that is a direct child of fo:flow).

If set to wrapper, the section attribute-sets only support properties that are inheritable. That's because there is no block to apply them to. Properties such as font-family are inheritable, but properties such as border are not.

Only some XSL-FO processors need to use this parameter. The Antenna House processor, for example, will handle spans in nested blocks without changing the element name. The RenderX XEP product and FOP follow the XSL-FO standard and need to use wrapper.
Simple real time visualisation of the execution of a Python program
heartrate
This library offers a simple real time visualisation of the execution of a Python program:
The numbers on the left are how many times each line has been hit. The bars show the lines that have been hit recently - longer bars mean more hits, lighter colours mean more recent.
Calls that are currently being executed are highlighted thanks to the executing library.
It also shows a live stacktrace:
Installation
pip install --user heartrate
Supports Python 3.5+.
Usage
import heartrate; heartrate.trace(browser=True)
This will:
- Start tracing your program
- Start a server in a thread
- Open a browser window displaying the visualisation of the file where trace() was called.
In the file view, the stacktrace is at the bottom. In the stacktrace, you can click on stack entries for files that are being traced to open the visualisation for that file at that line.
trace only traces the thread where it is called. To trace multiple threads, you must call it in each thread, with a different port each time.
Options
files determines which files get traced in addition to the one where trace was called. It must be a callable which accepts one argument: the path to a file, and returns True if the file should be traced. For convenience, a few functions are supplied for use, e.g.:

from heartrate import trace, files

trace(files=files.path_contains('my_app', 'my_library'))
The supplied functions are:
files.all: trace all files.
files.path_contains(*substrings): trace all files where the path contains any of the given substrings.
files.contains_regex(pattern): trace all files which contain the given regex in the file itself, so you can mark files to be traced in the source code, e.g. with a comment.
The default is to trace files containing the comment "# heartrate" (spaces optional).
If you're tracing multiple files, there are two ways to get to the pages with their visualisations:
- In the stacktrace, click on stack entries for files that are being traced. This will open the page and jump to the line in that stack entry.
- Go to the index page at the server root (e.g. http://127.0.0.1:9999/ with the defaults below; you can click on the logo in the top left corner) to see a list of traced files.
host: HTTP host for the server. To run a remote server accessible from anywhere, use '0.0.0.0'. Default '127.0.0.1'.
port: HTTP port for the server. Default 9999.
browser: if True, automatically opens a browser tab displaying the visualisation for the file where trace is called. False by default.
If you want to stop now, you can! Your old code lives in src/AppBundle, but it works! Over time, you can slowly migrate it directly into src/.
Or! We can keep going: take this final challenge head-on and move all our files at once! If you're not using PhpStorm... this will be a nightmare. Yep, this is one of those rare times when you really need to use it.
Open AppBundle.php. Then, right click on the AppBundle namespace and go to Refactor -> Move. The new namespace will be App. And below... yea! The target destination should be src/.
This says: change all AppBundle namespaces to App and move things into the src/ directory. Try it! On the big summary, click OK!
In addition to changing the namespace at the top of each file, PhpStorm is also searching for references to the namespaces and changing those too. Will it be perfect? Of course not! But the last pieces are pretty easy.
Woh! Yes! Everything is directly in src/. AppBundle is now empty, except for a fixtures.yml file. We're going to replace that file soon anyways.
Delete AppBundle! That felt amazing!
Let's do the same thing for the tests/ directory... even though we only have one file. Open DefaultControllerTest.php and Refactor -> Move its namespace. In Flex, the namespace should start with App\Tests. Then, press F2 to change the directory to tests/Controller.
Ok, Refactor! Nice! Now delete that AppBundle.
With those directories gone, open composer.json and find the autoload section. Remove both AppBundle parts.
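After removing those, the autoload section should boil down to the standard Flex mapping, roughly:

"autoload": {
    "psr-4": {
        "App\\": "src/"
    }
}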
So... will it work? Probably not - but let's try! Refresh! Ah!
The file ../src/AppBundle does not exist in config/services.yaml
Ah, that makes sense. Open that file: we're still trying to import services from the old directory. Delete those two sections. And, even though it doesn't matter, remove AppBundle from the exclude above.
In routes.yaml, we also have an import. Remove it! Why? Annotations are already being loaded from src/Controller. And now, that's where our controllers live!
Oh, and change AppBundle to App for the homepage route - I can now even Command+Click into that class. Love it!
Back in services.yaml, we still have a lot of AppBundle classes in here: PhpStorm is not smart enough to refactor YAML strings. But, the fix is easy: Find all AppBundle and replace with App.
Done! There is one last thing we need to undo: in config/packages/doctrine.yaml. Remove the AppBundle mapping we added.
So, what other AppBundle things haven't been updated yet? It's pretty easy to find out. At your terminal, run:
git grep AppBundle
Hey! Not too bad. And most of these are the same: calls to getRepository(). Start in security.yaml and do the same find and replace. You could do this for your entire project, but I'll play it safe.
Now, completely delete the AppBundle.php file: we're already not using that. Next is GenusAdminController. Open that class. But instead of replacing everything, which would work, search for AppBundle. Ah! It's a getRepository() call!
Our project has a lot of these... and... well... if you're lazy, there's a secret way to fix it! Just change the alias in doctrine.yaml from App to AppBundle. Cool... but let's do it the right way! Use Genus::class.
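Concretely, the change looks something like this (the query itself is illustrative, not from the project):

// before: string shortcut that depends on the AppBundle alias
$genus = $em->getRepository('AppBundle:Genus')->findAll();

// after: reference the class directly
use App\Entity\Genus;

$genus = $em->getRepository(Genus::class)->findAll();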
We have a few more in GenusController. Use SubFamily::class, User::class, Genus::class, GenusNote::class and GenusScientist::class.
Ok, back to the list! Ah, a few entities still have AppBundle. Start with Genus. The repositoryClass, of course! Change AppBundle to App. There's another reference down below on a relationship. Since all the entities live in the same directory, this can be shortened to just SubFamily.
Make the same change in GenusNote, SubFamily and User.
Almost done! Next is GenusFormType: open that and change the data_class to Genus::class.
Then, finally, LoginFormAuthenticator. Update AppBundle:User to User::class.
Phew! Search for AppBundle again:
git grep AppBundle
They're gone! So... ahh... let's try it! Refresh! Woh! An "Incomplete Class" error? Fix it by manually going to /logout. What was that? Well, because we changed the User class, the User object in the session couldn't be deserialized. On production, your users shouldn't get an error, but they will likely be logged out when you first deploy.
Go back to /admin/genus, then login with [email protected], password iliketurtles. Guys, we're done! We have a Symfony 4 app, built on the Flex directory structure, and with no references to AppBundle! And it was all done in a safe, gradual way.
To celebrate, I've added one last video with a few reasons to be thrilled that you've made it this far. | https://symfonycasts.com/screencast/symfony4-upgrade/bye-appbundle | CC-MAIN-2018-43 | refinedweb | 804 | 79.97 |
This module contains various constants used by Pygame. Its contents are automatically placed in the pygame module namespace. However, an application can use pygame.locals to include only the Pygame constants, with a 'from pygame.locals import *'.
Detailed descriptions of the various constants are found throughout the Pygame documentation. The pygame.display.set_mode() flags like HWSURFACE are found in the Display section. Event types are explained in the Event section. Keyboard K_ constants relating to the key attribute of a KEYDOWN or KEYUP event are listed in the Key section. Also found there are the various MOD_ key modifiers. Finally, TIMER_RESOLUTION is defined in Time.
Definition of Python Doubly linked List
Python also gives us the doubly linked list, which allows us to traverse in both directions; with a singly linked list we can only traverse in the forward direction, but with a doubly linked list we can move both forward and backward. Generally speaking, a doubly linked list node always contains three components: the data, a reference to the previous node, and a reference to the next node. This is not the case for a singly linked list. A doubly linked list also lets us traverse and search for elements in both directions. In the coming section of the tutorial, we will see how we can implement this linked data structure in Python, explained so that beginners can understand it better.
Syntax:
As we already know, it is used to store elements, but we do not have any specific syntax for it; we need to follow an algorithm to create it. For a better understanding, see the structure of the node class of a linked list below:
class Node:
def __init__(self, data):
self.item = data
self.nref = None
self.pref = None
As you can see, we are creating one node here, which contains the next and previous references inside the node along with the actual data. In the coming section, we will see the basic insertion operation; also, this is a generic class that anyone can use.
How Doubly linked list works in Python?
As we already know, a doubly linked list allows us to traverse in both directions, forward and backward. With a singly linked list we are only allowed to traverse in a single direction, forward, because a singly linked list does not contain a previous pointer inside it. A doubly linked list, by contrast, has two references inside each node, which maintain the addresses of the next and previous nodes. In this section we will see the basic flow of the doubly linked list with the help of a flow chart, and after that the steps needed to implement the linked list in Python. Let's get started:
1) The data inside the linked list is maintained in the form of nodes; a node is a data structure that is used to store the elements inside it.
2) We have used the term node, which in turn contains three components, as follows:
a) data: this represents the element that we want to store inside the linked data structure. This acts as the actual value of the element.
b) next reference: this contains the address of the next node, or, we can say, the reference to the next node. This is similar to what we have in a singly linked list, and with its help we can move in the forward direction through the list.
c) previous reference: this contains the address of the previous node, meaning the reference to the previous node, which helps us to search and traverse the list in the backward direction as well.
3) Now let’s take a look at the flow chart, which will help us to understand it better see below;
flow chart :
If you can see in the above chart, we have different nodes which in turn contain the inside variable to cerate and access the doubly linked list in Python.
Advantages: There are many advantages to using a doubly linked list in Python. We have already seen how it works; let's take a closer look at some of the advantages; see below.
1) By using it we can traverse in both directions, forward and backward, because it maintains two references inside each node.
2) Because we can traverse in both directions, insertion and searching become easier.
The disadvantage of using a doubly linked list: we have to maintain an extra pointer, or we can say an extra reference in memory, which holds the address of the previous node. So this is the one disadvantage, or extra work, in the case of a doubly linked list in Python.
Points to remember while using a doubly linked list in Python; see below.
- While creating it we need to have three things inside the node class, which are the data, the previous reference, and the next reference.
- We can add and remove elements from the linked list at either end. In the below example, we are just adding values to it, nothing more.
Examples
In the below program we are trying to create a doubly linked list in Python. Here we have defined a function to add elements to the doubly linked list, and we try to show them using the print method. This is just a basic example to show the working and implementation of a doubly linked list in Python for beginners.
Example #1
# create a class
class Node:
    def __init__(self, actualdata):
        self.actualdata = actualdata
        self.nextReference = None
        self.prevReference = None

class doubly_linked_list_demo:
    def __init__(self):
        self.head = None

    def addElement(self, NewVal):
        # new nodes are inserted at the head of the list
        NewNode = Node(NewVal)
        NewNode.nextReference = self.head
        if self.head is not None:
            self.head.prevReference = NewNode
        self.head = NewNode

    def showListElement(self, node):
        while node is not None:
            print(node.actualdata)
            node = node.nextReference

myList = doubly_linked_list_demo()
myList.addElement(10)
myList.addElement(20)
myList.addElement(30)
myList.addElement(40)
myList.addElement(50)
myList.addElement(60)
myList.showListElement(myList.head)
Output:
60
50
40
30
20
10
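The previous references are what make backward traversal possible; here is a small sketch of such a method for the class above (not part of the original example):

def showListBackward(self, node):
    # first walk forward to the tail of the list
    while node is not None and node.nextReference is not None:
        node = node.nextReference
    # then follow the previous references back to the head
    while node is not None:
        print(node.actualdata)
        node = node.prevReference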
Conclusion
As we have seen in this tutorial how the doubly linked list works in Python, we also gain an advantage when searching for elements. It is very easy to use and implement; we just need to understand its internal working first.
Q3osc
q3osc is a heavily modified version of the ioquake3 gaming engine featuring an integrated oscpack implementation of Open Sound Control for bi-directional communication between a game server and a multi-channel ChucK audio server. By leveraging ioquake3’s robust physics engine and multiplayer network code with oscpack’s fully-featured OSC specification, game clients and previously unintelligent in-game weapon projectiles can be repurposed as behavior-driven independent OSC-emitting virtual sound-sources spatialized within a multi-channel audio environment for real-time networked performance.
The most up-to-date downloads and information can currently be found on.
q3osc is an update of the manner in which the quake3 gaming engine can be used to export player locations and entity movements and actions outside of the q3 server via OSC. While q3osc is working from a fresh ioquake3 codebase, the inspiration came from Julian Oliver's excellent Q3APD project, which unfortunately makes use of the string-based FUDI protocol instead of a more flexible proper OSC protocol.
After using Q3APD for the 8-channel work maps & legends, it became apparent that while the mod was great, the idea could be improved and further explored, especially in terms of using OSC instead of FUDI and in terms of additional player gestures and data-points being exported from quake3 to an external audio engine. Since Q3APD used the string-based FUDI UDP implementation, rather than a full-blown standards-based OSC implementation, only PD could reasonably be used as the recipient of Q3APD outgoing data-streams. Since there are other excellent languages to be used, OSC is a better choice.
With q3osc, the goal is to use a fully-featured OSC implementation like oscpack to not only recreate the basic user-coordinate tracking from Q3APD, but to also expand the scope of usable in-game parameters to include missile objects and other actionable items and events in the game world. Using OSC, we can implement audio engines built in any OSC-capable audio software, such as ChucK, Max/MSP, SuperCollider or PD.
By adding behavioral controls to in-game entities like plasma and bfg-bolts (both of which have interesting visual attributes), visual in-game behaviors like bouncing and attraction/homing both to self and other in-game entities, we can create audio gestures which tightly follow the visual gestures.
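On the receiving side, a ChucK patch subscribes to these messages; a minimal receiver sketch might look like this (the port and address pattern are placeholders, not the mod's actual message spec):

// minimal ChucK OSC receiver sketch
OscRecv recv;
7000 => recv.port;
recv.listen();
recv.event( "/projectile/pos, i f f f" ) @=> OscEvent oe;

while( true )
{
    oe => now;  // wait for an incoming message
    while( oe.nextMsg() != 0 )
    {
        oe.getInt() => int id;     // projectile id
        oe.getFloat() => float x;
        oe.getFloat() => float y;
        oe.getFloat() => float z;
        // map coordinates to synthesis / spatialization parameters here
    }
}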
Contents
- 1 works
- 2 source
- 3 media
- 4 files
- 5 publications/lectures
- 6 Contributors
- 7 client install process
- 8 current status
- 8.1 feature list
- 8.2 enhancements
- 8.3 update history
- 8.3.1 8/07/08
- 8.3.2 6/17/08
- 8.3.3 6/16/08
- 8.3.4 6/15/08
- 8.3.5 6/13/08
- 8.3.6 6/03/08
- 8.3.7 5/29/08
- 8.3.8 5/28/08
- 8.3.9 5/25/08
- 8.3.10 5/24/08
- 8.3.11 5/12/08
- 8.3.12 5/11/08
- 8.3.13 5/05/08
- 8.3.14 4/19/08
- 8.3.15 4/18/08
- 8.3.16 4/16/08
- 8.3.17 4/11/08
- 8.3.18 4/5/08
- 8.3.19 3/27/08
- 8.3.20 2/27/08
- 8.3.21 2/27/08
- 8.3.22 2/17/08
- 8.3.23 2/16/08
- 8.3.24 2/13/08
- 8.3.25 2/12/08
- 8.3.26 2/06/08
- 8.3.27 1/30/08
- 8.3.28 1/28/08
- 8.3.29 1/27/08
- 8.3.30 1/26/08
- 8.3.31 1/25/08
- 8.3.32 1/24/08
- 8.3.33 1/21/08
- 8.3.34 1/20/08
- 8.3.35 1/20/08
- 8.3.36 1/19/08
- 8.3.37 1/18/08
- 8.3.38 1/16/08
- 8.3.39 1/13/08
- 8.3.40 1/11/08
- 8.3.41 1/09/08
- 8.3.42 1/08/08
- 8.3.43 1/03/08
- 8.3.44 12/29/07
- 8.3.45 12/07
- 8.4 references
- 8.5 port forwarding
- 8.6 dev shortcuts
- 8.7 things to think about
- 8.8 Subversion Repository Setup
- 8.9 bundle data: 1/13/08
- 8.10 current work paths
- 8.11 licensing
- 8.12 Unsolicited user-feedback
- 8.13 Development/Community Links
works
source
This source archive contains everything needed to compile q3osc excluding of course standard and required C and C++ development libraries. As current development is done in KDE (Fedora Core 8/PlanetCCRMA) using KDevelop, the ccrma.kdevelop file in the top-level ccrma-kdevelop directory can be opened in KDevelop and should be buildable from the start. Builds make use of a tweaked version of the ioquake3/OpenArena Makefile with added changes in a second Makefile.local - running build from KDevelop will call these Makefiles by default.
- Download Linux Source: Initial Release (4.6.08) - q3osc_4.6.08.tar.bz2, ~291MB
- Download Mac OS X (10.5.2) Source: Initial OS X Release (6.16.08) - q3osc-OSX_6.16.08.zip, ~559MB
media
- nous sommes tous Fernando... by the Stanford Laptop Orchestra
- demo movie: q3osc-demo1.mov (~340mb) [no-sound]
- random screenshots: ~rob/q3osc/images (.tga, .jpg)
files
- new map: quintet_dome.bsp
- ChucK stereo-demo script: q3osc_balls2_stereo.ck
- ChucK demo script: q3osc_balls.ck
- test map "space6a": .bsp, .aas
- more devmaps: devmaps.zip
publications/lectures
- draft of ICMC 2008 paper (submitted): q3osc or: How I Learned to Stop Worrying and Love the Game
- Hamilton, R., "Maps and Legends: FPS-Based Interfaces For Composition and Immersive Performance" In Proceedings of the International Computer Music Association Conference, Copenhagen, Denmark, 2007.
Contributors
- Rob Hamilton
- Ge Wang
- Dave Kerr
client install process
Just for clarification, these are the instructions for configuring client installations of q3osc, suitable for running on each platform against a server (Linux for now, OS X coming soon) running q3osc.
ALL OS's
- install custom q3config.cfg file from here:
Mac OS X
NOTE: Currently, this is the procedure for just installing the vanilla OpenArena mod with the q3osc config. This setup is suitable for running as a client connecting to a Linux-based q3osc server, but cannot create or host q3osc games.
- download OpenArena 0.7.0 binary for Mac OS X (OpenArena0.7.0j2.dmg): [~260 MB]
- download OpenArena .0.7.1 patch (OA071-PATCH.ZIP): [~11 MB]
- running OpenArena will create a directory in ~/Library/Application Support/OpenArena which contains by default the baseoa directory, itself containing a default q3config.cfg file. Copy the q3config.cfg file above here.
- copy the maps directory and textures directory to ~/Library/Application Support/OpenArena/ as well
- MacBook video settings:
/r_mode -1 /r_customwidth 1280 /r_customheight 756 /vid_restart
- MacBookPro video settings:
/r_mode -1 /r_customwidth 1432 /r_customheight 870 /vid_restart
Windows
- .cfg files are located in C:\Documents and Settings\rob\Application Data\OpenArena\baseoa
- set Shortcut Target settings to: "G:\Program Files\openarena-0.7.0\openarena.exe" +set sv_pure 0 +set cg_draw2d 0 +set cg_drawGun 0 [replace path with valid path]
Linux/client and dev install
- when OpenArena is run, ~/.openarena is created with ~/.openarena/baseoa and ~/.openarena/ccrma created when dev-code is run
- maps go into ~/.openarena/baseoa/maps (maps dir will need to be created)
- for Development need OpenAL on Linux: yum install openal-devel
- load project into kDevelop, build will call the custom Makefiles
SLOrk installs
- bash script testing on shabushabu /Users/slork/slork/users/rob/q3osc/
- download installer package here: q3osc-slork-installer.zip (~271.6 MB)
- create dir /Users/slork/slork/users/rob and unpack the zip there, so that there is a /Users/slork/slork/users/rob/q3osc/ dir with all the necessary components
- run ./install.sh and follow the instructions.
/r_mode -1 /r_customwidth 1280 /r_customheight 756 /vid_restart
current status
feature list
enhancements
- change rate of weapons fire (plasma, bfg) dynamically: bg_pmove.c ~1667; addTime = g_plasma_rate;
- lock projectiles on 100% vertical or horizontal planes even after bounce
- track projectiles per client and allow clients to selectively or en masse destroy projectiles
- fix bot errors: trying to switch to weapons which are no longer in the array
dedicated server support
OSC input functionality
assignable clientID projectile tracking
individual control of projectiles (speed, xyz)
changeable color for each user for plasma and bfg
use gethostbyname() to get ips for slork stations
update history
8/07/08
initial OSC input patches in place. q3osc can receive simple OSC messages and apply their values in real time:
- /g_gravity <float>
- /g_speed <float>
- /g_homing_speed <float>
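(Illustrative sketch, not part of the original wiki entry: a ChucK client can drive these parameters with OscSend; the host and port below are assumptions and must match the server's OSC input settings.)

// minimal ChucK sketch: send a q3osc control message
OscSend xmit;
xmit.setHost( "localhost", 7000 );

// /g_gravity <float>
xmit.startMsg( "/g_gravity, f" );
400.0 => xmit.addFloat;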
working on linking for OSX:
dev builds: q3osc-8.7.08r9.zip and q3osc-8.7.08r9lite.zip
6/17/08
changes for OS X build:
- Modified Makefiles
- cleaned up unix/unix_net.c
- cleaned up osc/oscpack/ip and osc dirs
- copy baseoa into build/release-darwin-i386/
- mv baseq3 to ccrma (change this in make?)
- copy libSDL-1.2.0.dylib to release-darwin-i386 dir
6/16/08
- Mac OS X source q3osc-OSX_6.16.08.zip added to downloads section. Version is compiled against OS X 10.5.2 for intel; not tested against ppc (shouldn't work) and not bundled as a .app (still command line driven)
6/15/08
To build Mac OS X .app bundle:
- copy customized ioquake3.i386 into ioquake3.app/Contents/MacOS/ and rename ioquake3.ub
- make ccrma/ dir in Contents/MacOS/ and put in cgamei386.dylib, qagamei386.dylib, and uii386.dylib from darwin build dir
- how to make /q3osc appear as a mod? (need to first change from /ccrma to /q3osc in the makefile build script)
6/13/08
- Thanks to Dave Kerr, the OS X server linker issues have been weeded out. Will be posted soon...
6/03/08
- performance of "nous sommes tous Fernando..." for 16 slork-stations, with the Stanford Laptop Orchestra, outside in the Knoll courtyard at CCRMA.
5/29/08
- idiot-proofing changes:
- g_client.c changes to set plasma and bfg as initial weapons with unlimited ammo
- g_cmds.c edits to disable all "give" commands.
- addition of osc_broadcast cvar for future use of OSC broadcasting
- change osc_projectile to default to 1, and to pull from CVAR_ARCHIVE
5/28/08
- demo of "fernando" for documentary film-makers with 6 slork-stations and projection
5/25/08
- q3osc featured on ioquake3.org; interview with Khalsa in irc chat (#ioquake3 on freenode).
5/24/08
- First concert performance using q3osc with "nous sommes tous Fernando..." for 5 laptops and four performers. Performers were Chryssie Nanou, Ge Wang, Michael Berger and Juan-Cristobel Castillo with Rob Hamilton acting as virtual camera operator.
5/12/08
- all slork stations updated with quintet.ck and q3osc
5/11/08
- for quintet, added 4 custom .cfg files, one per client, switchable via F1-F4 to set keyboard's 1-5 to control respectively gradations of Plasma-speed, Homing-speed, gravity and client-speed. Files are pspeed.cfg, hspeed.cfg, gspeed.cfg, gravity.cfg
5/05/08
- added new devmap "quintet_dome.bsp" (see files section)
- added client currentClient.hostname = osc_client_hostname<n>.string; in g_active.c for clients 1-20 (may be a better way of doing this in the future, as clientnum is derived from joining order)
- added "slork_switch" game command-line flag: 0, OSC all routed to projectile/client hostnames; 1, OSC routed to osc_client_hostname<n>
4/19/08
- trying compiling for OS X 10.5 Intel again:
- in /unix/unix_net.c, fixed typo in "if (ioctl(interfaceSocket, OSIOCGIFADDR, (caddr_t)&ifr) < 0) {" to "if (ioctl(interfaceSocket, SIOCGIFADDR, (caddr_t)&ifr) < 0) {"
- added "USE_CODEC_VORBIS=0" to Makefile.local to kill Ogg
- change Makefile references from "i686-apple-darwin8" to "i686-apple-darwin9"
- stuck on something having to do with the C wrappers around the oscpack code? oscpack compiles fine on its own but with the wrappers, throws errors; maybe need different compiler flags?:
Undefined symbols: "_read$UNIX2003", referenced from: SocketReceiveMultiplexer::Implementation::Run() in UdpSocket.o "_EndMessage", referenced from: _EndMessage$non_lazy_ptr in osc.o "_write$UNIX2003", referenced from: SocketReceiveMultiplexer::Implementation::AsynchronousBreak() in UdpSocket.o "_EndBundle", referenced from: _EndBundle$non_lazy_ptr in osc.o "_select$UNIX2003", referenced from: SocketReceiveMultiplexer::Implementation::Run() in UdpSocket.o "_close$UNIX2003", referenced from: UdpSocket::Implementation::~Implementation()in UdpSocket.o SocketReceiveMultiplexer::Implementation::~Implementation()in UdpSocket.o SocketReceiveMultiplexer::Implementation::~Implementation()in UdpSocket.o ld: symbol(s) not found collect2: ld returned 1 exit status make[1]: *** [build/release-darwin-i386/baseq3/qagamei386.dylib] Error 1 make: *** [build_release] Error 2
4/18/08
- Added installer for SLOrk stations for Mac Client
4/16/08
- commented out CG_AddPlayerWeapons method in cg_weapons.c to make weapons not visible for every avatar
- changed this and instead commented out the call for CG_AddPlayerWeapons at the end of CG_AddViewWeapon in cg_weapons.c
4/11/08
- q3osc presented at the ANET II Summit meeting on networked audio at the Banff Centre in Alberta, Canada. Here is a link to the slides I used in the talk: ANET II Slides
4/5/08
- run dedicated server with +set dedicated 1
- rcon commands via terminal work from client
- CCRMA port range open and external clients can connect with no problems
3/27/08
custom video config for Macbook Pro (Fed 8):
/r_customheight 870 /r_customwidth 1434
2/27/08
- should add offset to user position which would move player bounding box +- XYZ from user-center
2/27/08
- added demo video 4
2/17/08
- cg_railTrailTime: can rail-trails be made to persist and turned into "strings" when crossed?
2/16/08
- Jeff Smith's 16-chnl example
- recorded demo with sound
2/13/08
- q3osc presentation to Stanford Composition Seminar
2/12/08
- upgraded chuck demo script with Stereo panning (Y-axis) and more robust shred destruction (with dedicated events per shred)
2/06/08
- q3osc presentation to the Bay Area Computer Music Technology Group
- initial demo ChucK script posted: sonifies projectile bounces with reverb'd and enveloped sineOSCs
- chuck --loop
- chuck + q3osc_balls.ck
- chuck ^
1/30/08
- interesting to note, all modifications made on weapon entities seem to be handled entirely on the server side, to such an extent that a clean default installation of OpenArena on a ppc Mac can interface perfectly with the server running q3osc-tweaked code. Wow. It just got even easier.
1/28/08
- seems like the upper bound on renderable projectiles on the screen at any given time is ~253, tracked via new object ids sent via OSC to ChucK. Projectile IDs start at 73, not 0.
1/27/08
- modified durations on CG_BubbleTrail le->endTime to extend bubbles... funny looking but mostly useless... eats up a lot of entities.
- increased MAX_ENTITIES in tr_types.h from 1023 to 100000; not sure how this is affecting things, may be more complicated.
1/26/08
- modified cg_weapons.c to turn on "bubble trails" for certain weapons
- added g_parent_homing_only flag to make homing projectiles only follow parent (bind j homingparentonly)
1/25/08
- created q3osc.ck ChucK main processes/class-models
- made "/<classname" descriptor for each osc projectile message and "/player" for osc client messages.
1/24/08
- added sendOSCmessage and sendOSCbundle switched by osc_bundle boolean
- added cg_plasma_trail_length to cg_local.h et al.
1/21/08
- added osc_bundle checks to determine whether bundles or single multi-field osc messages are sent
- split sendOSCMessage_projectile into sendOSCmessage_projectile... and sendOSCbundle_projectile... using osc_bundle boolean as switch between methods
- added methods and check for boolean into g_missile code
1/20/08
- added first demo movie: q3osc-demo1.mov (~340mb)
1/20/08
- using cg_oldPlasma 0
1/19/08
- added g_plasma_persist, and g_homing_persist and used in checks on G_HomingMissle to trigger expiration of all typed entities
- added g_plasma_homing_persist, g_bfg_homing_persist and logic track them in G_HomingMissle
- added bindable "plasmahomingpersist", "bfghomingpersist"
1/18/08
- added osc_send_projectile, osc_send_client booleans to turn on and off osc output
- added g_bfg_persist flag: 1 postpones ent->nextthink so bfgs don't explode... when the flag is set to 0, then after g_bfg_time G_ExplodeMissile will be called and all bfg entities bouncing around will explode/go away
1/16/08
- added g_homing_radius
- recorded video demo of homing balls
- /record balls1
- /stoprecord
- /demo balls1
- /video (converts to .avi from demo format)
1/13/08
- Added insane g_parent_homing flag to allow projectiles to home on their parent client. This can create great spheres of entities revolving around the client when g_homing_speed is set just so.
1/11/08
- Running q3osc with client data output as bundles to one IP/Port while projectile data is routed to its own IP/Port. Homing and bounce are triggered by client variables, so each client can decide for him/herself whether or not homing/bounce are enabled.
1/09/08
- Ludicrous bouncing/homing enabled for Plasma and BFG projectiles. Added g_homing_speed, g_plasma_speed, g_rocket_speed cvars. Added g_plasma_bounce (taken from q3apd)
- g_synchronousClients 1 : synchronizes client messages but purportedly makes playability more difficult... will see.
1/08/08
- missile tracking working; need to isolate individual gentity_t ids for each rocket, as well as target ids (this can probably be gleaned from the radius reference)
- OSC bundle output working: needed to "#define OSC_HOST_LITTLE_ENDIAN 1" in OscHostEndianness.h (oscpack) for the linux system. Will need to re-set this for Windows compilation.
- OSC output console variables "/osc_hostname" and "/osc_port" are working. Currently designing and implementing the OSC namespace.
1/03/08
- OSC output from test.cpp using oscpack compiled into the ioquake3 engine successful, using a homing rocket as trigger. Now moving on to isolating good params for export, including the methods used in q3apd plus new goodies.
12/29/07
- initial linking success with test.cpp, thanks to ge's sweet sweet compiler flags:
$(B)/baseq3/qagame$(ARCH).$(SHLIBEXT) : $(Q3GOBJ) $(CC) $(SHLIBLDFLAGS) -lstdc++ -o $@ $(Q3GOBJ)
$(B)/baseq3/cgame$(ARCH).$(SHLIBEXT) : $(Q3GOBJ) $(CC) $(SHLIBLDFLAGS) -lstdc++ -o $@ $(Q3GOBJ)
DO_CPP=$(CPP) $(BASE_CPPFLAGS) $(SHLIBCFLAGS) -o $@ -c $<
12/07
- uber-beta floundering; while test .cpp classes are compiling correctly, something is going screwy in the linking process, which causes qagamei386.so to fail on the foo method call (see below)
Loading dll file qagame.
Sys_LoadDll(/user/r/rob/data/q3/dev/ccrma-kdevelop/build/release-linux-i386/ccrma/qagamei386.so)...
Sys_LoadDll(/user/r/rob/data/q3/dev/ccrma-kdevelop/build/release-linux-i386/ccrma/qagamei386.so) failed:
"Failed loading /user/r/rob/data/q3/dev/ccrma-kdevelop/build/release-linux-i386/ccrma/qagamei386.so: /user/r/rob/data/q3/dev/ccrma-kdevelop/build/release-linux-i386/ccrma/qagamei386.so: undefined symbol: foo"
references
Hamilton, R., "q3osc: or How I Learned to Stop Worrying and Love the Game", In Proceedings of the International Computer Music Association Conference, Belfast, Ireland, 2008.
Hamilton, R., "Maps and Legends: FPS-Based Interfaces For Composition and Immersive Performance" In Proceedings of the International Computer Music Association Conference, Copenhagen, Denmark, 2007.
port forwarding
-
-
Each computer playing Quake III must use a different port number, starting at 27660 and incrementing by 1. You'll also need to do the following:
1. Right click on the QIII icon
2. Choose "Properties"
3. In the Target field you'll see a line like "C:\Program Files\Quake III Arena\quake3.exe"
4. Add the Quake III net_port command to specify a unique communication port for each system. The complete field should look like this: "C:\Program Files\Quake III Arena\quake3.exe" +set net_port 27660
5. Click OK.
6. Repeat for each system behind the NAT, adding one to the net_port selected (27660, 27661, 27662)

Forward: IN UDP 27660 (for first player)
"Quake3 uses ports 27960-27961.. these are the only ones you will need and it is guaranteed 1005 to work."
+set dedicated 2 +set net_port 27970
dev shortcuts
171.64.197.200: 27660-70
- after compilation, move files from baseq3 to the ccrma directory and then run
rm -rf ccrma && mv baseq3 ccrma && ./ioquake3.i386 +set sv_pure 0 +set fs_game ccrma +devmap space +set vm_ui 0 +set vm_cgame 0 +set vm_game 0 +set cg_draw2d 0 +set cg_drawGun 0
- run dedicated server: +set dedicated 1 +set net_port 27660
- run regular game: +set dedicated 0 +set net_port 27660
- screenshot location
~/.openarena/ccrma/screenshots
- set third-person view properties
/cg_thirdPerson (0/1; default: 0)
/cg_thirdPersonAngle (0-360; default: 0)
/cg_thirdPersonRange (default: 40)
- set custom video size: for linux larger monitor @ ccrma:
/r_mode -1 /r_customheight 1021 /r_customwidth 1672 /vid_restart
- for linux:
/r_customwidth 1275 /r_customheight 960
- random video properties
r_ext_texture_filter_anisotropic 4
r_flares 1
cg_shadows 2
r_stencilBits 8
r_detailTextures 1
- turn off blood
/com_blood 0
things to think about
- Bi-directional communication: ChucK <--> ioq3
- VideoTrace
- head-tracking with wiimote as lean-controller?
- another homing missle approach
- l3dgeWorld
- Stanford's Jeremy Bailenson VR
- virtual world resources
- SARC Sonic Lab
- q3 on PocketPC
- Q3ce Google Code page
- Q4 SDK
- OpenArena Message Boards
- Serious Games Summit
- Phoronix: Linux & Solaris Hardware site
- Kandinsky: "Point and line to plane"
- q3 bsp structure (from Stanford EE)
Subversion Repository Setup
svn co svn+ssh://cmn51.stanford.edu/user/r/rob/svn/q3osc
[cmn51 rob] ~/data> svnadmin create --fs-type fsfs svn
[cmn51 rob] ~/data/q3osc/dev> svn co
Checked out revision 0.
[cmn51 rob] ~/data/q3osc/dev> cd svn
[cmn51 rob] ~/data/q3osc/dev/svn> ls
[cmn51 rob] ~/data/q3osc/dev/svn> svn mkdir q3osc
A q3osc
[cmn51 rob] ~/data/q3osc/dev/svn> svn commit -m "Setting up empty q3osc directory - rob"
Adding q3osc
Committed revision 1.
[cmn51 rob] ~/data/q3osc/dev/svn> ls -al
total 32
drwxr-xr-x 4 rob users 4096 May 6 22:27 ./
drwxr-xr-x 3 rob users 4096 May 6 22:25 ../
drwxr-xr-x 3 rob users 4096 May 6 22:27 q3osc/
drwxr-xr-x 6 rob users 4096 May 6 22:28 .svn/
[cmn51 rob] ~/data/q3osc/dev/svn> cp -r ~/data/source/q3osc/* q3osc/
[cmn51 rob] ~/data/q3osc/dev/svn/q3osc> ls -al
total 1184
drwxr-xr-x 5 rob users 4096 May 6 22:31 ./
drwxr-xr-x 4 rob users 4096 May 6 22:27 ../
drwxr-xr-x 3 rob users 4096 May 6 22:30 build/
-rw-r--r-- 1 rob users 6729 May 6 22:30 ccrma.kdevelop
-rw-r--r-- 1 rob users 947792 May 6 22:30 ccrma.kdevelop.pcs
-rw-r--r-- 1 rob users 5991 May 6 22:30 ccrma.kdevses
drwxr-xr-x 25 rob users 4096 May 6 22:31 ccrmamod/
-rw-r--r-- 1 rob users 755 May 6 22:31 devnotes
-rw-r--r-- 1 rob users 756 May 6 22:31 devnotes~
-rw-r--r-- 1 rob users 10405 May 6 22:31 Doxyfile
-rw-r----- 1 rob users 59721 May 6 22:31 Makefile
-rw-r----- 1 rob users 59722 May 6 22:31 Makefile~
-rw-r--r-- 1 rob users 564 May 6 22:31 Makefile.local
-rw-r--r-- 1 rob users 564 May 6 22:31 Makefile.local~
drwxr-xr-x 6 rob users 4096 May 6 22:28 .svn/
[cmn51 rob] ~/data/q3osc/dev/svn/q3osc> svn add *
A build
A build/release-linux-i386
A build/release-linux-i386/client
A build/release-linux-i386/client/be_interface.d
... ... ...
A Makefile.local~
[cmn51 rob] ~/data/q3osc/dev/svn/q3osc> ls
build/ ccrma.kdevelop ccrma.kdevelop.pcs ccrma.kdevses ccrmamod/ devnotes devnotes~ Doxyfile Makefile Makefile~ Makefile.local Makefile.local~
[cmn51 rob] ~/data/q3osc/dev/svn/q3osc> svn commit -m "Initial code check-in; entire project + bins, maybe a bad idea"
Adding q3osc/Doxyfile
Adding q3osc/Makefile
Adding q3osc/Makefile.local
... ... ...
Adding q3osc/devnotes~
Transmitting file data .............................................................................................................................................................
Committed revision 2.
[cmn51 rob] ~/data/q3osc/dev/svn/q3osc> svn log
------------------------------------------------------------------------
r1 | rob | 2008-05-06 22:28:14 -0700 (Tue, 06 May 2008) | 1 line

Setting up empty q3osc directory - rob
------------------------------------------------------------------------
[cmn51 rob] ~/data/q3osc/dev/svn/q3osc>
bundle data: 1/13/08
196 byte message: 23 (#) 62 (b) 75 (u) 6e (n) 64 (d) 6c (l) 65 (e) 0 () 0 () 0 () 0 () 0 () 0 () 0 () 0 () 1 () 0 () 0 () 0 () 18 () 2f (/) 63 (c) 6c (l) 61 (a) 73 (s) 73 (s) 6e (n) 61 (a) 6d (m) 65 (e) 0 () 0 () 2c (,) 73 (s) 0 () 0 () 70 (p) 6c (l) 61 (a) 73 (s) 6d (m) 61 (a) 0 () 0 () 0 () 0 () 0 () 18 () 2f (/) 70 (p) 72 (r) 6f (o) 6a (j) 65 (e) 63 (c) 74 (t) 69 (i) 6c (l) 65 (e) 6e (n) 75 (u) 6d (m) 0 () 0 () 2c (,) 69 (i) 0 () 0 () 0 () 0 () 0 () 54 (T) 0 () 0 () 0 () 1c () 2f (/) 6f (o) 72 (r) 69 (i) 67 (g) 69 (i) 6e (n) 0 () 2c (,) 66 (f) 66 (f) 66 (f) 0 () 0 () 0 () 0 () 42 (B) ffffffcc (?) f () ffffffaa (?) 45 (E) 1f () 6e (n) 0 () ffffffc5 (?) 11 () ffffffdd (?) ffffffd0 (?) 0 () 0 () 0 () 14 () 2f (/) 6f (o) 77 (w) 6e (n) 65 (e) 72 (r) 6e (n) 75 (u) 6d (m) 0 () 0 () 0 () 2c (,) 69 (i) 0 () 0 () 0 () 0 () 0 () 0 () 0 () 0 () 0 () 14 () 2f (/) 74 (t) 61 (a) 72 (r) 67 (g) 65 (e) 74 (t) 6e (n) 75 (u) 6d (m) 0 () 0 () 2c (,) 69 (i) 0 () 0 () 0 () 0 () 0 () 37 (7) 0 () 0 () 0 () 10 () 2f (/) 62 (b) 6f (o) 75 (u) 6e (n) 63 (c) 65 (e) 0 () 2c (,) 69 (i) 0 () 0 () 0 () 0 () 0 () 1 () 0 () 0 () 0 () 14 () 2f (/) 65 (e) 78 (x) 70 (p) 6c (l) 6f (o) 64 (d) 65 (e) 0 () 0 () 0 () 0 () 2c (,) 69 (i) 0 () 0 () 0 () 0 () 0 () 0 ()
[ 000000001 /classname "plasma" /projectilenum 84 /origin 102.030594 2550.875000 -2333.863281 /ownernum 0 /targetnum 55 /bounce 1 /explode 0 ]
/projectile "plasma" 0 87 894.454651 184.281860 -8038.076172 -1079107888 0 0
current work paths
svn checkout svn+ssh://gate-ccrma.stanford.edu/user/r/rob/svn/q3osc/
svn repo: /user/r/rob/svn/q3osc/
ccrma: ~/dev/svn/
macbook pro: /data/Projects/q3osc/svn-work/
~/dev/svn/q3osc/build/release-linux-i386>
licensing
- oscpack
- open sound control
- openarena
- ioquake3
- quake3
Unsolicited user-feedback
"So, normally I would be really cynical and make fun of your SLORK shit, but I actually found that video extremely cool. It's like a concert of people playing the DOOMophone. And there was something so visually pleasing about watching the video when combined with music, which was so much better than the expected loud sound of gunfire, etc. Awesome, seriously!"
Development/Community Links
Best link ever: ioquake3 mailing list
Second best: ioquake3 bugzilla
~rob
oscpack
Julian Oliver's q3apd
original project page
ChatBear thread 1
ChatBear thread 2
Tremulous thread 1
Tremulous thread 2
Electro-music.com thread
Tutorial for OSC hooking games
Bot Editing
Backend details
q3 autoexec cfg maker
console commands
Q3 mod making
trap calls in q3
server config
rcon php
iPhone q3
Linux Video Editors
3D terrain generation tutorial
mixing C and C++
more c/c++ mixing
Good description of the q3 VM, Client and server roles, and console vars
demo recording
ioquake3 on PS3
Official PS2-Linux page
Quake-3 Networking
q3map2build (.aas file maker)
Map editing hints
Saffire review
Saffire review2
Saffire review 3
Saffire specs
skinning faq
q3 with Wii C++ port
q3a server guide
dumpOSC
Focus On Mod Programming in Quake III Arena by Shawn Holmes
Panning
cvar guide
extended cvars
threading
thread thread
baylor threads stuff
rcon python class
ron stuff
console rcon app
quake3 papers/tools via link
C++ referencing C funtions | https://ccrma.stanford.edu/wiki/Q3osc | CC-MAIN-2015-27 | refinedweb | 4,522 | 56.45 |
How does the System.exit() method affect a program?
public class TestExit
{
    public static void main(String[] args)
    {
        System.out.println("hello world");
        System.exit(0);
    }
}
Is it necessary to include this method in my code? Please explain.
System.exit() is a method of the System class that runs the registered shutdown hooks and then terminates the JVM. It is generally used to shut down larger programs where all parts of the program are not aware of each other. In such cases, System.exit() is called to take care of necessary shutdown tasks like closing files, closing connections, freeing resources, etc.

One thing I would like to mention: this method never returns normally, i.e. the method won't return anything; once a thread goes there, it won't come back.

Also, if you have code which contains non-daemon threads, you need to use System.exit() to shut down all non-daemon threads and release other resources. If there are no other non-daemon threads, returning from main will automatically shut down the JVM and will call the shutdown hooks.
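For a concrete picture, here is a small example (my sketch, not from the original answer; class and message names are made up) showing a shutdown hook firing when System.exit(0) is called even though a non-daemon worker thread is still sleeping:

public class ExitDemo {
    public static void main(String[] args) {
        // hook runs during JVM shutdown, e.g. to close files/connections
        Runtime.getRuntime().addShutdownHook(new Thread(
            () -> System.out.println("shutdown hook: releasing resources")));

        // a non-daemon thread that would otherwise keep the JVM alive
        new Thread(() -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) {}
        }).start();

        System.out.println("exiting now");
        System.exit(0); // runs the hook and terminates the sleeping worker
    }
}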
[email protected] wrote:
> At this point you might be wondering why I am bringing this up.
>
> First, I'd like to ask that you and others not to be so quick to prejudge
> the conceptual cost (both in implementation and usage) associated with
> various proposals.
I'll try to be neutral about proposals. Read: let's not get religious on
tasks.
> Second, and more importantly, I am going to continue to try to steer this
> conversation towards requirements. Please, everybody, tell me WHAT you are
> trying and WHY what you are proposing is the simplest solution to your
> problem. Abstract examples with foos and pearly gates just don't do it for
> me.
Very correct.
My requirements (constraints):
1) One should be able to understand the build.xml without looking at the
documentation. There are a couple of things that break this pattern:
- tstamp -> you don't know what variables are set unless you look into
the docs.
- the proposed .antrc -> win32 people are _NOT_ used to such a pattern
Interesting enough, Costin doesn't seem to like my visibility pattern :)
probably he's used to man and not to the win32 help system. Lucky boy.
2) I need conditional execution on collection of tasks depending on
environmental parameters such as class presence, property presence.
Condition on property value is not required at this moment, but I do
picture a need for this.
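A rough sketch of what such conditional execution can look like (my
illustration, not part of the original mail; hypothetical property and
class names, in the available/if style Ant later adopted):

<available property="junit.present" classname="junit.framework.TestCase"/>

<target name="run-tests" if="junit.present" depends="compile">
  <echo message="JUnit found; running tests"/>
</target>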
3) I would like to have two different set of tasks:
- external tasks: these are the tasks that "do something" exterally to
Ant.
- internal tasks: these do not change the environment _outside_ Ant,
but change it's internal state or its internal behavior.
For now, these are the internal tasks:
- Project
- Target
- Property
- Available
- Filter
- TStamp
I propose to use a different namespace for them, this would help both
visual GUI tools and people to understand their different behavior.
I would also like these internal tasks to behave OO-like: each target
inherits all the tasks it depends on (both external and internal), but
if an internal task is present and has the same "effect", it should
change the internal state and overwrite the previous state.
4) I do not care about iterative internal tasks since I wouldn't use
them. And, IMO, they do not belong at this level but inside the external
task logic.
Gone preparing the asbestos shield :)
--
Stefano Mazzocchi One must still have chaos in oneself to be
able to give birth to a dancing star.
<[email protected]> Friedrich Nietzsche
Class similar to AnalogIn that uses burst mode to run continious background conversions so when the input is read, the last value can immediatly be returned. This slightly modified version allows NC pins.
Dependents: Pinscape_Controller
Fork of FastAnalogIn.
FastAnalogIn Class Reference
A class similar to AnalogIn, only faster, for LPC1768, LPC408X and KLxx.
#include <FastAnalogIn.h>
Detailed Description
A class similar to AnalogIn, only faster, for LPC1768, LPC408X and KLxx.
AnalogIn does a single conversion when you read a value (actually several conversions and it takes the median of that). This library runns the ADC conversion automatically in the background. When read is called, it immediatly returns the last sampled value.
LPC1768 / LPC4088 Using more ADC pins in continuous mode will decrease the conversion rate (LPC1768:200kHz/LPC4088:400kHz). If you need to sample one pin very fast and sometimes also need to do AD conversions on another pin, you can disable the continuous conversion on that ADC channel and still read its value.
KLXX Multiple Fast instances can be declared of which only ONE can be continuous (all others must be non-continuous).
When continuous conversion is disabled, a read will block until the conversion is complete (much like the regular AnalogIn library does). Each ADC channel can be enabled/disabled separately.
IMPORTANT : It does not play nicely with regular AnalogIn objects, so either use this library or AnalogIn, not both at the same time!!
Example for the KLxx processors:
// Print messages when the AnalogIn is greater than 50%
#include "mbed.h"

FastAnalogIn temperature(PTC2);   //Fast continuous sampling on PTC2
FastAnalogIn speed(PTB3, 0);      //Fast non-continuous sampling on PTB3

int main() {
    while(1) {
        if(temperature > 0.5) {
            printf("Too hot! (%f) at speed %f", temperature.read(), speed.read());
        }
    }
}
Example for the LPC1768 processor:
// Print messages when the AnalogIn is greater than 50%
#include "mbed.h"

FastAnalogIn temperature(p20);

int main() {
    while(1) {
        if(temperature > 0.5) {
            printf("Too hot! (%f)", temperature.read());
        }
    }
}
Definition at line 68 of file FastAnalogIn.h.
Constructor & Destructor Documentation
Create a FastAnalogIn, connected to the specified pin.
- Parameters:
    pin - AnalogIn pin to connect to (this fork also accepts NC)
    enabled - enable continuous conversion on this channel; defaults to true, and the class example above passes 0 for non-continuous sampling
Definition at line 25 of file FastAnalogIn_KLXX_K20D50M.cpp.
Member Function Documentation
Disable the ADC channel.
Disabling unused channels speeds up conversion in used channels. When disabled, you can still call read; that will do a single conversion (actually two, since the first one always returns 0 for an unknown reason) and the function blocks until the value is read. This is handy when you sometimes need a single conversion besides the automatic conversion.
Definition at line 88 of file FastAnalogIn_KLXX_K20D50M.cpp.
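A minimal usage sketch (my example, not from the generated documentation; the pin name is an assumption):

#include "mbed.h"
#include "FastAnalogIn.h"

FastAnalogIn pot(p19);        // assumed LPC1768 pin

int main() {
    pot.disable();            // stop background conversions on this channel
    float v = pot.read();     // blocking single conversion, like regular AnalogIn
    printf("pot = %f\n", v);
}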
Enable the ADC channel.
Definition at line 74 of file FastAnalogIn_KLXX_K20D50M.cpp.
An operator shorthand for read()
Definition at line 115 of file FastAnalogIn.h.
Returns the scaled value.
- Returns:
    floating point value in the range 0.0 to 1.0, representing the last sampled voltage as a fraction of the supply
Definition at line 107 of file FastAnalogIn.h.
Returns the raw value.
- Returns:
    unsigned 16-bit representation of the last conversion
Definition at line 97 of file FastAnalogIn_KLXX_K20D50M.cpp.
Let's make a DEV.to CLI... together
JavaScript Joel
Oct 15 '18
Updated on Oct 17, 2018
・11 min read
For hacktoberfest I'm gonna make a CLI for DEV.to... Let's make it together!
This is meant to be a follow along type tutorial... so follow along. But if you think you are too good to learn something cool, you can just skip to the end.
If I skip over something too quickly and you want more explanation, ask me in the comments!
Setup
Since I'm the one doing the driving, I get to pick the language. I'll be using MojiScript (of course).
git clone devto-cli
cd devto-cli
npm ci
There isn't an API for DEV.to. And what happens to all sites that don't have an API? They get scraped!
# install axios
npm install --save-prod axios
Add the axios dependency to index.mjs:

import log from 'mojiscript/console/log'
import run from 'mojiscript/core/run'
import axios from 'mojiscript/net/axios'
import main from './main'

const dependencies = { axios, log }

run ({ dependencies, main })
Create src/api.mjs
Create a new file src/api.mjs to contain our scraping API. We are using mojiscript/net/axios, which is a curried version of axios.

import pipe from 'mojiscript/core/pipe'

const getData = response => response.data

export const getUrl = axios => pipe ([
  url => axios.get (url) ({}),
  getData
])

export const getDevToHtml = axios => pipe ([
  () => getUrl (axios) ('')
])
Import getDevToHtml into main.mjs:

import pipe from 'mojiscript/core/pipe'
import { getDevToHtml } from './api'

const main = ({ axios, log }) => pipe ([
  getDevToHtml (axios),
  log
])

export default main
Now run the code:
npm start
If everything is successful, you should see a bunch of HTML flood the console.
JavaScript interop
Now I don't want to slam DEV.to with HTTP calls every time I debug my code, so let's cache that output to a file.
# this will get you the same version in this tutorial
curl -Lo devto.html
Next I'm gonna create a file interop/fs.mjs, which is where fs.readFile will be. I place this in an interop folder because this is where MojiScript requires JavaScript interop files to be placed. JavaScript is written differently than MojiScript and is sometimes incompatible (unless inside the interop directory).

To make fs.readFile compatible with MojiScript, I need to first promisify it.
promisify (fs.readFile)
Now that it's promisified, I also need to curry it.
export const readFile = curry (2) (promisify (fs.readFile))
I'm also dealing with UTF8, so let's add a helper to make life easier.
export const readUtf8File = file => readFile (file) ('utf8')
And the full interop/fs.mjs:

import fs from 'fs'
import curry from 'mojiscript/function/curry'
import { promisify } from 'util'

export const readFile = curry (2) (promisify (fs.readFile))
export const readUtf8File = file => readFile (file) ('utf8')
Read the cache
Inside of src/mocks/axios.mock.mjs, I'm going to create mockAxios. That will return the contents of our file when get is called.

import pipe from 'mojiscript/core/pipe'
import { readUtf8File } from '../interop/fs'

const mockAxios = {
  get: () => pipe ([
    () => readUtf8File ('devto.html'),
    data => ({ data })
  ])
}

export default mockAxios
Using the mock is easy. All I have to do is change the dependencies. Nothing in main.mjs needs to change!

// don't forget to add the import!
import mockAxios from './mocks/axios.mock'

const dependencies = { axios: mockAxios, log }
Now when we run npm start, no HTTP requests are being made. This is good because I am probably gonna run npm start a whole bunch before I complete this thing!
Parsing the HTML
I like cheerio for parsing. I'm pretty sure this is what the cool kids are using.
npm install --save-prod cheerio
Create another interop file, interop/cheerio.mjs.

import cheerio from 'cheerio';
import pipe from 'mojiscript/core/pipe';
import map from 'mojiscript/list/map';

export const getElements = selector => pipe ([
  cheerio.load,
  $ => $ (selector),
  $articles => $articles.toArray (),
  map (cheerio)
])
note: When cheerio's toArray is called, the elements lose all those nice cheerio methods. So we have to map cheerio back onto all the elements.
Next add getElements to main.

import { getElements } from './interop/cheerio'

const main = ({ axios, log }) => pipe ([
  getDevToHtml (axios),
  getElements ('.single-article:not(.feed-cta)'),
  log
])
Run npm start again to see the Array of elements.
npm install --save-prod reselect nothis
Create interop/parser.mjs. I'm gonna use reselect to select the attributes I need from the HTML. I'm not really gonna go into detail about this. It's basically just doing a whole bunch of gets from an element. The code is easy to read; you can also skip it, it's not important.

import reselect from 'reselect'
import nothis from 'nothis'

const { createSelector } = reselect

const isTextNode = nothis(({ nodeType }) => nodeType === 3)

const parseUrl = element => `https://dev.to${element.find('a.index-article-link').attr('href')}`
const parseTitle = element => element.find('h3').contents().filter(isTextNode).text().trim()
const parseUserName = element => element.find('.featured-user-name,h4').text().trim().split('・')[0]
const parseTags = element => element.find('.featured-tags a,.tags a').text().substr(1).split('#')
const parseComments = element => element.find('.comments-count .engagement-count-number').text().trim() || '0'
const parseReactions = element => element.find('.reactions-count .engagement-count-number').text().trim() || '0'

export const parseElement = createSelector(
  parseUrl,
  parseTitle,
  parseUserName,
  parseTags,
  parseComments,
  parseReactions,
  (url, title, username, tags, comments, reactions) => ({ url, title, username, tags, comments, reactions })
)
Add parseElement to main.

import map from 'mojiscript/list/map'
import { parseElement } from './interop/parser'

const main = ({ axios, log }) => pipe ([
  getDevToHtml (axios),
  getElements ('.single-article:not(.feed-cta)'),
  map (parseElement),
  log,
])
Now when you run npm start you should see something like this:

[ { url: '',
    title: 'How to find the best open source Node.js projects to study for leveling up your skills',
    username: 'Corey Cleary',
    tags: [ 'node', 'javascript', 'hacktoberfest' ],
    comments: '0',
    reactions: '33' } ]
Format the data
Add the import and formatPost, add formatPost to main, and change log to map (log).

import $ from 'mojiscript/string/template'

const formatPost = $`${'title'} ${'url'}\n#${'tags'} ${'username'} ・ 💖 ${'comments'} 💬 ${'reactions'} `

const main = ({ axios, log }) => pipe ([
  getDevToHtml (axios),
  getElements ('.single-article:not(.feed-cta)'),
  map (parseElement),
  map (formatPost),
  map (log)
])
Run npm start again and you should see a handful of records that look like this:

The Introvert's Guide to Professional Development
#introvert,tips,development,professional Jenn ・ 💖 1 💬 50
Finally, this is starting to look like something!
I am also going to add a conditional in main.mjs to use axios only when production is set in the NODE_ENV.

import ifElse from 'mojiscript/logic/ifElse'

const isProd = env => env === 'production'
const getAxios = () => axios
const getMockAxios = () => mockAxios

const dependencies = {
  axios: ifElse (isProd) (getAxios) (getMockAxios) (process.env.NODE_ENV),
  log
}
Run it with and without production to make sure both are working.

# dev mode
npm start

# production mode
NODE_ENV=production npm start
Viewing the Article
The list is nice and I was planning on stopping the walk through here, but it would be super cool if I could also read the article.
I would like to be able to type something like:
devto read 3408
I notice the URLs have an ID on the end that I can use (the 3408 at the end of the article's address).
So I'll modify parser.mjs to include a new parser to get that id.

const parseId = createSelector(
  parseUrl,
  url => url.match(/-(\w+)$/i)[1]
)
Then just follow the pattern and add parseId into parseElement.
Now the CLI is going to have two branches, one that will display the feed, the other that will show the article. So let's break out our feed logic from main.mjs and into src/showFeed.mjs.

import pipe from 'mojiscript/core/pipe'
import map from 'mojiscript/list/map'
import $ from 'mojiscript/string/template'
import { getDevToHtml } from './api'
import { getElements } from './interop/cheerio'
import { parseElement } from './interop/parser'

const formatPost = $`${'title'} ${'url'}\n#${'tags'} ${'username'} ・ 💖 ${'comments'} 💬 ${'reactions'} `

export const shouldShowFeed = args => args.length < 1

export const showFeed = ({ axios, log }) => pipe ([
  getDevToHtml (axios),
  getElements ('.single-article:not(.feed-cta)'),
  map (parseElement),
  map (formatPost),
  map (log)
])
Next, I'm gonna wrap cond around showFeed. It's possible we will have many more branches (maybe help?) in the CLI, but for right now we just have the 1 path.

This is what main.mjs should look like now.

import pipe from 'mojiscript/core/pipe'
import cond from 'mojiscript/logic/cond'
import { showFeed } from './showFeed'

const main = dependencies => pipe ([
  cond ([
    [ () => true, showFeed (dependencies) ]
  ])
])

export default main
We will need access to node's args. So make these changes in main.mjs. I am doing a slice on them because the first 2 args are junk args and I don't need them.

// add this line
const state = process.argv.slice (2)

// add state to run
run ({ dependencies, state, main })
Okay we have a lot of work to do before we can actually view the article. So let's add the help. That's something easy.
View the Help
Create src/showHelp.mjs.

import pipe from 'mojiscript/core/pipe'

const helpText = `usage: devto [<command>] [<args>]

<default>   Show article feed
read <id>   Read an article
`

export const showHelp = ({ log }) => pipe ([
  () => log (helpText)
])
Now we can simplify main.mjs and add the new case to cond.

import pipe from 'mojiscript/core/pipe'
import cond from 'mojiscript/logic/cond'
import { shouldShowFeed, showFeed } from './showFeed'
import { showHelp } from './showHelp'

const main = dependencies => pipe ([
  cond ([
    [ shouldShowFeed, showFeed (dependencies) ],
    [ () => true, showHelp (dependencies) ]
  ])
])

export default main
Now if we run npm start -- help, we should see our help:

usage: devto [<command>] [<args>]

<default>   Show article feed
read <id>   Read an article
And if we run npm start we should still see our feed!
Article from Cache
The same way as I read the main feed from cache, I also want to read the article from cache.

curl -Lo article.html
Modify axios.mock.mjs to read the article too.

import pipe from 'mojiscript/core/pipe'
import ifElse from 'mojiscript/logic/ifElse'
import { readUtf8File } from '../interop/fs'

const feedOrArticle = ifElse (url => url === '')
  (() => 'devto.html')
  (() => 'article.html')

const mockAxios = {
  get: url => pipe ([
    () => feedOrArticle (url),
    readUtf8File,
    data => ({ data })
  ])
}

export default mockAxios
Parsing the Article
Parsing the article HTML is much easier because I'm planning on just formatting the whole article-body block as text. So I just need the title and body.

Create interop/articleParser.mjs.

import reselect from 'reselect'

const { createSelector } = reselect

const parseTitle = $ => $('h1').first().text().trim()
const parseBody = $ => $('#article-body').html()

export const parseArticle = createSelector(
  parseTitle,
  parseBody,
  (title, body) => ({ title, body })
)
Read the Article
Because there is no state, the CLI will not know what URL to pull when I issue the read command. Because I am lazy, I'll just query the feed again and pull the URL from the feed.

So I'm gonna hop back into showFeed.mjs and expose that functionality.
I'm just extracting the functions from showFeed and putting them into getArticles. I haven't added any new code here.

export const getArticles = axios => pipe ([
  getDevToHtml (axios),
  getElements ('.single-article:not(.feed-cta)'),
  map (parseElement)
])

export const showFeed = ({ axios, log }) => pipe ([
  getArticles (axios),
  map (formatPost),
  map (log)
])
Show the Article
Now I want to write a function like the one below, but we'll get an error: id is not defined. The id is the argument to the pipe, but it's not accessible here. The input to filter is the Array of articles, not the id.

const getArticle = ({ axios }) => pipe ([
  getArticles (axios),
  filter (article => article.id === id), // 'id' is not defined
  articles => articles[0]
])
But there's a trick. Using the W Combinator I can create a closure, so that id is exposed.

const getArticle = ({ axios }) => W (id => pipe ([
  getArticles (axios),
  filter (article => article.id === id),
  articles => articles[0]
]))
Compare that block with the one above it, not much different: just add W (id => and a closing ). The W Combinator is an awesome tool. More on Function Combinators in a future article :) For now, let's move on.
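For reference (my sketch, not from the original post), MojiScript's W is essentially the classic duplication combinator:

// W applies a curried binary function to the same argument twice
const W = f => x => f (x) (x)

// so W (id => pipe ([ ... ])) produces a function whose single argument is
// fed both to the pipe and to the `id` name visible inside the closure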
All together, src/showArticle.mjs should look like this:

import W from 'mojiscript/combinators/W'
import pipe from 'mojiscript/core/pipe'
import filter from 'mojiscript/list/filter'
import { getArticles } from './showFeed'

export const shouldShowArticle = args => args.length === 2 && args[0] === 'read'

const getArticle = ({ axios }) => W (id => pipe ([
  getArticles (axios),
  filter (article => article.id === id),
  articles => articles[0]
]))

export const showArticle = ({ axios, log }) => pipe ([
  getArticle ({ axios }),
  log
])
Modify main.mjs's cond to include the new functions:

import { shouldShowArticle, showArticle } from './showArticle'

const main = dependencies => pipe ([
  cond ([
    [ shouldShowArticle, args => showArticle (dependencies) (args[1]) ],
    [ shouldShowFeed, showFeed (dependencies) ],
    [ () => true, showHelp (dependencies) ]
  ])
])
Run npm run start -- read 1i0a (replace the id) and you should see something like this:

{ id: '1i0a',
  url: '',
  title: 'Email Sending in Django 2, Part -1',
  username: 'Shobi',
  tags: [ 'django', 'emails', 'consoleemailbackend' ],
  comments: '0',
  reactions: '13' }
HTML to Text
I found a great npm package that looks like it'll handle this for me.
npm install --save-prod html-to-text
We have already laid out most of our foundation, so to make an HTTP request, parse the HTML and format it into text, it's as simple as this. Open up showArticle.mjs.

const getArticleTextFromUrl = axios => pipe ([
  ({ url }) => getUrl (axios) (url),
  cheerio.load,
  parseArticle,
  article => `${article.title}\n\n${htmlToText.fromString (article.body)}`
])
I also want to create a view for when the id is not found.
const showArticleNotFound = $`Article ${0} not found.\n`
I'll also create an isArticleFound condition to make the code more readable.
const isArticleFound = article => article != null
I'll use the same W Combinator technique to create a closure, expose id, and modify showArticle.

export const showArticle = ({ axios, log }) => W (id => pipe ([
  getArticle ({ axios }),
  ifElse (isArticleFound)
    (getArticleTextFromUrl (axios))
    (() => showArticleNotFound (id)),
  log
]))
All together, showArticle.mjs looks like this:

import cheerio from 'cheerio'
import htmlToText from 'html-to-text'
import W from 'mojiscript/combinators/W'
import pipe from 'mojiscript/core/pipe'
import filter from 'mojiscript/list/filter'
import ifElse from 'mojiscript/logic/ifElse'
import $ from 'mojiscript/string/template'
import { getUrl } from './api'
import { parseArticle } from './interop/articleParser'
import { getArticles } from './showFeed'

const isArticleFound = article => article != null

const showArticleNotFound = $`Article ${0} not found.\n`

const getArticleTextFromUrl = axios => pipe ([
  ({ url }) => getUrl (axios) (url),
  cheerio.load,
  parseArticle,
  article => `${article.title}\n\n${htmlToText.fromString (article.body)}`
])

export const shouldShowArticle = args => args.length === 2 && args[0] === 'read'

const getArticle = ({ axios }) => W (id => pipe ([
  getArticles (axios),
  filter (article => article.id === id),
  articles => articles[0]
]))

export const showArticle = ({ axios, log }) => W (id => pipe ([
  getArticle ({ axios }),
  ifElse (isArticleFound)
    (getArticleTextFromUrl (axios))
    (() => showArticleNotFound (id)),
  log
]))
Run npm start -- read 1i0a again and you should see the article!
Finishing Touches
I'd like to make the id more clear in the feed.
const formatPost = $`${'id'}・${'title'} ${'url'}\n#${'tags'} ${'username'} ・ 💖 ${'comments'} 💬 ${'reactions'} `
Add this to the package.json; I'm gonna name the command devto.

"bin": {
  "devto": "./src/index.mjs"
}
In src/index.mjs, add this mystical sorcery at the top:

#!/bin/sh
':' //# comment; exec /usr/bin/env NODE_ENV=production node --experimental-modules --no-warnings "$0" "$@"
Run this command to create a global link to that command.
npm link
If everything went well, you should now be able to run the following commands:
# get the feed
devto

# read the article
devto read <id>
So you decided to skip to the end?
You can lead the horse to water... or something.
To catch up with the rest of us follow these steps:
# clone the repo
git clone
cd devto-cli

# install
npm ci
npm run build
npm link

# run
devto
Warnings about the CLI
Scraping websites is a bad idea. When the website changes, which is guaranteed to happen, your code breaks.
This is meant to just be a fun demo for #hacktoberfest and not a maintainable project. If you find a bug, please submit a pull request to fix it along with the bug report. I'm not maintaining this project.
If this was a real project, some things that would be cool:
- login, so you can read your feed.
- more interactions, comments, likes, tags. Maybe post an article?
Happy Hacktoberfest!
For those of you that read through the whole thing, thank you for your time. I know this was long. I hope that it was interesting, I hope you learned something and above all, I hope you had fun.
For those of you that actually followed along step by step and created the CLI yourself: You complete me 💖.
Please tell me in the comments or twitter what you learned, what you found interesting or any other comments, or criticisms you may have.
My articles are very Functional JavaScript heavy, if you need more, follow me here, or on Twitter @joelnet!
More articles
Ask me dumb questions about functional programming
Let's talk about auto-generated documentation tools for JavaScript
(open source and free forever ❤️)
Looks awesome! :) Anyway why does dev.to not have an API?
Thanks! Hopefully (if you went through the tutorial) you had fun!
And I'd love to know the answer to that as well!
There's a pull request for an API on the dev.to repo, so it looks like there will be one eventually! :)
This is awesome!
Good stuff, Joe! Very functional! ✅
Now let me change a couple of selectors by sending Dev.to a PR #JustKidding 🤣
haha fortunately I have the files I needed downloaded to the repo. so you can always run it in dev mode to remember how it used to look before the breaking changes :D
Let me send a PR to your repo as well :P
📦 NEW: update the HTML
🤣
🤣
Super impressive, what a tutorial! 👏👏👏
Thanks Andy. Hopefully this tutorial is fun a one. There's something in it for everyone to learn :)
Cheers!
Looks great, I still haven't fully wrapped my head around it but very impressive. Definitely a good way to get people into Mojiscript 🙂
Functional languages are so different than imperative languages that you have no base to reference it from. It can feel like learning to program from scratch.
Learning functional concepts one article at a time might also be difficult because you won't understand the reason you do X until you also understand Y. So it's easy to understand the code, but not understand the why.
I'm going to be creating a bunch of functional programming articles that will attempt to solve these issues.
But I also think just diving into a fully functional language like MojiScript will be much faster than trying to comprehend what the hell a Monad is.
Cheers!
Wow! Nice work! There's so much here!
Thanks for putting this together :)
Looking forward to hearing your feedback after going through it :)
Awesome Work! Keep Doing it Great!
Hey thanks man. Really appreciate it!
I enjoyed making this one, so I am glad you had fun with it!
Cheers!
Cool, fun and helpful.
I can use it to get the whole text for Google translate. Now Google cannot translate Dev.to by URL.
Awesome!
It would be interesting to do it using Javascript with Sanctuary.js or Folktale.js or most.js for comparison
MojiScript makes all attempts to be compatible with other functional libraries. So if you are familiar with Sanctuary, or Ramda, you should be able to import and use those libraries inside of MojiScript too!
Currently though some functions aren't fantasy-land spec compliant. This is on the roadmap to be supported.
Great read Joel! A lot of functional stuff in here so I am gonna have to read it more than once, but very cool none the less.
Yes, it's very functional. Hopefully it was also informative and fun to go through :)
Thanks! | https://dev.to/joelnet/lets-make-a-devto-cli-together-4eh1 | CC-MAIN-2019-18 | refinedweb | 3,298 | 58.89 |
I actually ran into a similar problem myself recently as I wanted to use ploticus to graph some data for a personal project. The solution I came up with was actually very similar, if rather less refined, than the one my colleague used. As a result I thought I'd share it.
First a caveat - this is literally something I knocked up one evening. It isn't intended to be robust, performant or otherwise enterprisey. It's just for some data I use for me, myself, and I.
A sophisticated way to drive a C library like ploticus is to bind directly to the C API. Ruby makes this easy, so I'm told, but it's too much work for me (particularly if I want to be done before cocktail-hour). So my approach is to build a ploticus script and pipe it into ploticus. Ploticus can run by taking a script from standard input that controls what it does, so all I have to do is run ploticus within ruby and pipe commands into it. Roughly like this:
def generate script, outfile
  IO.popen("ploticus -png -o #{outfile} -stdin", 'w'){|p| p << script}
end
To build up the script, I like to get objects that can work in my terms, and produce the necessary ploticus stuff. If you have anything that uses the prefabs, then putting together something is easy. I wanted to do clustered bar graphs, like this, which requires a ploticus script.
I built what I needed in three levels. At the lowest level is PloticusScripter, a class that builds up ploticus script commands. Here it is:
class PloticusScripter
  def initialize
    @procs = []
  end

  def proc name
    result = PloticusProc.new name
    yield result
    @procs << result
    return result
  end

  def script
    result = ""
    @procs.each do |p|
      result << p.script_output << "\n\n"
    end
    return result
  end
end

class PloticusProc
  def initialize name
    @name = name
    @lines = []
  end

  def script_output
    return (["#proc " + @name] + @lines).join("\n")
  end

  def method_missing name, *args, &proc
    line = name.to_s + ": "
    line.tr!('_', '.')
    args.each {|a| line << a.to_s << " "}
    @lines << line
  end
end
As you see the scripter is just a list of proc commands (well they could be anything that responds to script_output - but I didn't need anything else yet). I can instantiate the scripter, call proc repeatedly to define my ploticus procs, then when I'm done call script to get the entire script for piping into ploticus.
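To make that concrete, a quick sketch of how the scripter behaves (my example, not from the original post; note that method_missing turns underscores into dots and appends a trailing space after each argument):

ps = PloticusScripter.new
ps.proc("getdata") do |p|
  p.file "students.dat"
  p.xaxis_stubs "inc 10"
end
puts ps.script
# prints (approximately):
#   #proc getdata
#   file: students.dat
#   xaxis.stubs: inc 10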
The next level is something to build clustered bar graphs:
class PloticusClusterBar
  attr_accessor :rows, :column_names

  def initialize
    @rows = []
  end

  def add_row label, data
    @rows << [label] + data
  end

  def getdata scripter
    scripter.proc("getdata") do |p|
      p.data generate_data
    end
  end

  def colors
    %w[red yellow blue green orange]
  end

  def clusters scripter
    column_names.size.times do |i|
      scripter.proc("bars") do |p|
        p.lenfield i + 2
        p.cluster i+1 , "/", column_names.size
        p.color colors[i]
        p.hidezerobars 'yes'
        p.horizontalbars 'yes'
        p.legendlabel column_names[i]
      end
    end
  end

  def generate_data
    result = []
    rows.each {|r| result << r.join(" ")}
    result << "\n"
    return result.join("\n")
  end
end
This allows me to build a graph with simple calls to
add_row to add data rows. This makes it much more easy
for me to build up the data for the graph.
To make a particular graph, I'll write a third class on top of that:
#produces similar to ploticus example in ploticus/gallery/students.htm class StudentGrapher def initialize @ps = PloticusScripter.new @pcb = PloticusClusterBar.new end def run load_data @pcb.getdata @ps areadef @pcb.clusters @ps end def load_data @pcb.column_names = ['Exam A', 'Exam B', 'Exam C', 'Exam D'] @pcb.add_row '01001', [44, 45, 71, 89] @pcb.add_row '01002', [56, 44, 54, 36] @pcb.add_row '01003', [46, 63, 28, 87] @pcb.add_row '01004', [42, 28, 39, 49] @pcb.add_row '01005', [52, 74, 84, 66] end def areadef @ps.proc("areadef") do |p| p.title "Example Student Data" p.yrange 0, 6 p.xrange 0, 100 p.xaxis_stubs "inc 10" p.yaxis_stubs "datafield=1" p.rectangle 1, 1, 6, 6 end end def generate outfile IO.popen("ploticus -png -o #{outfile} -stdin", 'w'){|p| p << script} end def script return @ps.script end end def run output = 'fooStudents.png' File.delete output if File.exists? output s = StudentGrapher.new s.run s.generate output end
It's a very simple example, but it's a good illustration of what I call the Gateway pattern. The PloticusClusterBar class is the gateway with the perfect interface for what I want to do. I make it transform between that convenient interface and what the real output needs to be. The PloticusScripter class is another level of gateway. Even for a simple thing like this I find a design of composed objects like this a good way to go. Which may only say how my brain's got twisted over the years. | https://martinfowler.com/bliki/RubyPloticus.html | CC-MAIN-2017-17 | refinedweb | 811 | 68.97 |
Food and Agriculture Organization of the United Nations
Newsroom historic archives | New FAO newsroom
Championing the cause of
cassava
Cassava
production in Africa is growing more rapidly than
in other regions. This increased production has
reduced hunger, particularly in western African
countries, such as
Ghana.
To champion the cause of cassava, FAO has organized a
forum of agricultural experts to prepare a plan of action
for implementing the Global Development Strategy for
Cassava, an initiative to promote this important food crop.
The forum, at FAO headquarters in Rome from 25 to 28 April,
has been made possible with funding from the International
Fund for Agricultural Development (IFAD) and the
International Development
Research Center (IDRC).
The
Global Development Strategy for Cassava, initiatied by
IFAD, was prepared through a series of consultations with
cassava producers, processors, the private sector,
government ministries, international and non-governmental
organizations, technical and research centres and donor
agencies.
"A broad consensus has been reached that cassava can spur
rural development," says Marcio Porto, Chief of FAO's
Crop
and Grassland Service. There is also broad agreement on
what needs to be achieved to make cassava more competitive
in domestic and international markets. Now it's time for us
to work together to plan out the precise steps to reach
these goals."
Advantages of cassava
Subsistence farmers have long appreciated cassava's
advantages. It can grow in poor soils on marginal lands
where other crops cannot. It requires minimal fertilizer,
pesticides and water. Also, because cassava can be harvested
anytime from 8 to 24 months after planting, it can be left
in the ground as a safeguard against unexpected food
shortages. As Mr Porto points out, 'Because it has
traditionally been a crop of the poor, expanding the market
for cassava can bring direct economic benefits to those who
need it most."
Peeling
cassava root
FAO/18287/P.
Cenini
Pressing
peeled and grated cassava
FAO/18301/P.
Cenini
Setting
cassava pulp out to dry
FAO/17770/A.
Conti
Making
cassava flour
FAO/18440/P. Cenini
Ghana has shown just how important improving cassava
cultivation can be in the fight against hunger. Thanks in
part to a nearly
40 percent increase in cassava production, Ghana was
able to reduce undernourishment more rapidly than any other
country between 1980 and 1996. "Experience has shown that
growth in cassava production and consumption can be an
import engine for agricultural development in developing
countries," says Mr Porto.
Improved processing is essential
Once harvested, cassava deteriorates quickly, so it must
be eaten or processed quickly. Although some varieties can
be eaten raw or cooked like potatoes, many contain high
levels of cyanogenic glucosides that must be removed before
they can be eaten. The toxins are typically removed from
these bitter varieties by peeling and grating the root to
make a pulp that is then left to ferment slightly before
being pressed, dried and roasted. In Brazil, this processed
cassava meal is known as farinha de mandioca and in West
Africa, gari. Gari accounts for 70 percent of Nigeria's
total cassava consumption. In other parts of Africa, the
fermented cassava pulp is pounded into a paste, known as
foo- foo.
If these traditional foods are to become the basis for
commercially viable local industries, new and improved
processing technologies will be required. Commercial cassava
producers and processors need to find ways of increasing
production, reducing labour costs and improving product
quality in order to compete with imported grains.
Demand for traditional cassava foods will grow as
population increases in developing countries, but consumer
trends are expected to change as more and more people move
to the cities. Cassava producers and processors will need to
respond to the growing urban demand for foods that are more
convenient or seen as more modern, such as store-bought
bread and baked goods made from imported wheat flour.
The development of high-quality cassava flour could help
many developing countries reduce their dependence on
imported grains. One report has stated that a 15 percent
substitution of cassava flour for wheat flour could save
Nigeria close to US$ 15 million a year in foreign exchange.
In Jamaica, bakers of 'bammy
bread' made from cassava meal have been successful in
carving out a profitable market niche. "Simply put, many
governments could save money by making sound investments in
the development of their commercial cassava industry," says
Mr Porto.
In addition, Latin American countries, particularly
Brazil and Colombia, have made progress in developing and
marketing cassava snack foods, similar to potato chips, as
well as frozen 'heat and serve' cassava products. The
growing importance of manufactured cassava products in
Brazil has led to the creation of franchising chains with
141 stores all over the country, such as the group "Casa do
Pão de Queijo", which sells cassava cheese bread and
coffee.
Animal feed, industrial starch
In Asia, where rice is the most popular staple food,
commercial cassava production has focussed on animal feed,
mainly in the form of chips and pellets for export. Thailand
has led the way. Over the past 30 years, thanks to effective
public/private partnerships and sound Government policies, a
competitive cassava industry has been created almost from
scratch. In 1995, Thailand exported 3.3 million tonnes of
cassava pellets, mostly to the European Union.
In Africa and Latin America, the domestic market for
cassava-based animal feed shows potential for growth. More
than 30 percent of the cassava produced in Latin America is
used for domestic animal feed, compared to less than 2
percent in Africa. Research in Cameroon has shown that
poultry breeders could lower their production costs by 40
percent by incorporating cassava into their chicken
feed.
Details on
the forum are given by Mr. Porto in an interview (3
min. 4 sec.) with Liliane Kambirigi of FAO's Radio
Service.
Listen to or
download the interview
in
Realaudio
(379Kb- Instant
play)
in
mp3
(1798Kb)
Playing
RealAudio files requires the RealPlayer
software
Playing the mp3 files requires any mp3 player
software: Winamp, Windows Media player, Quicktime
4.0;RealplayerG2,
etc...
All the required software is available free on the
Web:;;
If you can't
download the interview, please call for a feed: FAO
Media-Office ([email protected])
Liliane Kambirigi, Eric Deleu (radio unit)
039-06-5705 3223 / 6863
Asia also leads the way in the production of starches
derived from cassava. Cassava starch has unique properties,
such as its high viscosity and its resistance to freezing,
which make it competitive with other industrial starches.
More research needs to be done on the development and
marketing of cassava-based
starches.
The emphasis is on information
"At this forum, we've placed the emphasis on
information", says Mr Porto. "It's important for us to get
the message out about the importance of cassava to millions
and millions of families in Africa, Asia, Latin America and
the Caribbean, and the contribution that cassava can make to
the well-being of millions of cassava producers and
processors."
However, as Mr Porto makes clear it's not just the
general public that needs to become more aware of the
importance of cassava. " Everyone who has a stake in the
growth of the cassava industry, including governments,
producers and processors, needs to be better informed", says
Mr Porto. "Also donor agencies need to know more about
ongoing development projects related to cassava so that they
don't waste money duplicating their efforts."
-
Fibre
Cassava tubers
(peeled)
Cassava flour
(tapioca)
Potatoes
Potato flour
Husked rice
Although cassava roots are an excellent source
of calories, they lack protein and vitamins.
Cassava leaves, however, are rich in protein and
vitamins A and B and can be an important part of a
well-balanced, nutritious diet.
New publication: The world
cassava economy: Facts trends and outlook (currently
available only in English). To order contact the
Sales and
Marketing Group, Viale delle Terme di Caracalla, 00100
Rome, Italy; by fax at +39 (06) 5705 3360; or email
[email protected]
26 April 2000
<
©FAO,
2000 | http://www.fao.org/english/newsroom/highlights/2000/000405-e.htm | CC-MAIN-2016-07 | refinedweb | 1,335 | 50.26 |
Actionscript:
- import com.actionsnippet.qbox.*;
- var sim:QuickBox2D = new QuickBox2D(this);
- sim.createStageWalls({lineAlpha:0,fillColor:0x000000})
- sim.addBox({x:3, y:3, width:3, height:3, skin:BoxSkin});
- sim.addCircle({x:3, y:8,radius:1.5, skin:CircleSkin});
- sim.addPoly({x:6, y:3, verts:[[1.5,0,3,3,0,3,1.5,0]], skin:TriangleSkin});
- sim.addBox({x:6, y:3, width:3, height:3, skin:BoxSkin});
- sim.addCircle({x:6, y:8,radius:1.5, skin:CircleSkin});
- sim.addPoly({x:12, y:3, verts:[[1.5,0,3,3,0,3,1.5,0]], skin:TriangleSkin});
- sim.start();
- sim.mouseDrag();
You'll need this fla to run this snippet since the graphics are in the library. This snippet shows how to easily use linkage classes as the graphics for your rigid bodies. This was actually one of the first features I implemented in QuickBox2D.
Take a look at the swf here...
31 Comments
As a small addition:
It can happen quite easily that you actually want to ‘draw’ the DisplayObject on runtime outside of Box2D code, and then apply it as a skin to a QuickObject.
I added the following to BoxObject.as, CircleObject.as, …
if (p.skin is DisplayObject){
bodyDef.userData = p.skin;
} else if (p.skin is Class){
…
now I can do something like
var s:Sprite = new Sprite();
s.graphics.beginFill(0×000000)
s.graphics.drawCircle(0,0,5);
sim.addCircle({x:0, y:0, radius:5, skin:s});
I thought this to be quite useful.
FWIW and thanks for the great work!
very nice bart!
I’ll add it to the next release and credit you in the comments. Let me know if you have any other recommendations. The main update in the next release is the ability to add any kind of joint… but I’d like to add other things if people have ideas…
I know what you have there is just pseudo-code, but you’d need to do the pixel to meters conversion so radius would be (5 / 30)… unless you’ve changed the scale settings to match pixels (which isn’t recommended apparently).
well::: I’m just getting around the whole physics world and am already so excited by it
Another recommendation I’d have: It’s clear that you’ll often be converting from meters to pixels and vice versa. Why not add a static helper function somewhere to convert values? or even just the put the scale ratio in a static variable. Something like (in QuickBox2D class):
public static const M2P:Number = 30; // M2P -> Meter2Pixel
public static const P2M:Number = 1 / M2P;
and then use the M2P value throughout your code, instead of hardcoding the 30 everywhere….
then, the pseudo example above would be as easy as:
var radius:Number = 5;
var s:Sprite = new Sprite();
s.graphics.beginFill(0×000000)
s.graphics.drawCircle(0,0,radius);
sim.addCircle({x:0, y:0, radius:radius*QuickBox2D .P2M, skin:s});
when creating your own DisplayObjects, I think it’s fair enough for having the developer be responsible for matching the properties of skin & QuickObject.
again: FWIW and really looking forward to any updates!
That’s not a bad idea… the one annoying thing about static const vars is that they are super slow to read for some reason - where just regular public props are fast. This doesn’t matter really for instantiation… just something worth noting. I’ll probably post this in a snippet at some point but check it out:
var t:Number;
var i:int;
var val:Number;
var vars:Vars = new Vars();
t = getTimer();
for (i = 0; i<1000000; i++){
val = i * vars.publicValue;
}
trace("public: ", getTimer() - t);
t = getTimer();
for (i = 0; i<1000000; i++){
val = i * 30.0;
}
trace("hard coded: ", getTimer() - t);
t = getTimer();
for (i = 0; i<1000000; i++){
val = i * Vars.CONST_VALUE;
}
trace("static const: ", getTimer() - t);
/*
public: 8
hard coded: 8
static const: 65
*/
The Vars class just looks like this:
package {
public class Vars {
public static const CONST_VALUE:Number = 30.0;
public var publicValue:Number = 30.0;
}
}
Thanks for the suggestions - I'll definitely think more about this ... I need to dig around Box2D to see where exactly the scale is set (forget at the moment)... some additional info here:
the thing that makes it slow is the static part… not the const.. a public const seems to have the same read time as a regular public var.
that’s interesting:
well, in time critical code, you could still do:
var w:Number = Vars.CONST_VALUE;
t = getTimer();
for (i = 0; i<1000000; i++){
val = i * w;
}
Still, I think the numbers aren’t that dramatic, for one-million iterations…
your right - it isn’t a huge difference…. it just bugs me - I believe they fixed this problem in haxe with the inline keyword.
I’ll definitely have some variation on this in the next release…
Quick question, how do i change the frame the ’skin’ is on?
Thanks,
Thomas Francis
Hey Thomas,
You can do that by using the QuickObject userData property. So given the above example you could do:
var box:QuickObject = sim.addBox({x:3, y:3, width:3, height:3, skin:BoxSkin});
box.userData.gotoAndStop(10);
That should do it
Great, Thanks.
Another question/comment, When i skin a group, are the boxes that i drew supposed to stay visible?
No prob.
Looks like you found a bug in the GroupObject.as. I’m working on the library now so I’ll take a look and if its an easy fix I’ll post a link to the .as file here… should know in the next few mins
Oh, cool. I’ll poke around with the files to see if I can fix it or not. But, i’ll probably grab the file you put up, if you do.
Yeah… it was a simple fix. You can download the updated GroupObject.as file here:
(just replace the old file in com.actionsnippet.qbox.objects)
…and here is an fla describing how skinning should work with GroupObjects … basically sometimes you want the things your grouping to have individual skins and other times you want to use one skin for the entire group (the latter being what your doing):
I’ll update the actual zip tomorrow, keeping it as version 107.
Wow, this is nice. Thanks. I will be watching this project, real good stuff.
Is there a way to set a custom “stageWalls’ boundaries? I wanted to have some shapes falling in from the top of the swf, but making stage walls creates around the stage (no surprise there). Is there a way to do it with params in the createStageWalls function? or i do i need to create my own static objects?
Thanks
-Thomas
you’ll need to create your own stage walls… you can copy the code from QuickBox2D.as and just remove the top box. This could be a cool feature though… something like:
sim.createStageWalls({top:false});
I’ll probably add that to the next release.
When I add multiple groups (through a for.. loop, or through a timer), I occasionally get some grey boxes [that make up the shape of the group, but they are all split apart and independent(as in they don't act as a group)] up at the top corner, in the stage origin. That’s probably a bug. I’ll probably post here if I find more weird things.
incase anyone is wondeirng… this bug has been resolved - you can either follow the instructions to manually update your QuickBox2D alpha 107 or just re-download the zip from the QuickBox2D page on this site.
Hi there,
The current version of Quickbox2d, seems to not work with the current official release of box2D.
Could you tell me which relesae of box2dflash you are building off of so I can download that version?
Right now i’m using box2dflash version 2.02
QuickBox2D Alpha 107 was done with Box2DFlashAS3 2.0.1… QuickBox2D Alpha 108 will be released in a few days….
Hey Mario… i tested QuickBox2D 107 with Box2DFlash AS3 2.0.2 and it worked fine… maybe your classpath isn’t set up properly? What error are you getting… ?
hi Zevan,
first thx for your library QuickBox2d, i played with box2dAs3 few times ago and it wasn’t easy..
so i test your library and it is very cool but i have just a little question/correction :
in the ASDoc, you say:
class QuickBox2D > gravity > “This can be changed during runtime.”
but this lign doen’t works:
“sim.gravity = new b2Vec2(0, 0);”
i have to do:
“sim.w.SetGravity(new b2Vec2(0, 0));”
Thanks for pointing that out bertrand…. I’m going to release QuickBox2D alpha 108 tomorrow or saturday - and I’ll be sure to make sure this issue is taken care of.
I’m having trouble having the skin show up on circles, any ideas?
If you upload your file I can take a look, off hand I can’t think of any reason that the circle skinning wouldn’t work… is your registration point centered?
Hi, cool demo, I downloaded the fla and opened in both cs3 and cs4, in both versions the circles are invisible and dont show the graphic, but they can still be dragged etc…
@bbb …. oops, looks like I was messing with CircleObject and forgot to remove a comment.. you can fix the fla by going into the CircleObject.as file and changing this:
if (p.skin is Class){
/* bodyDef.userData = new p.skin();
bodyDef.userData.width=p.radius * 60;
bodyDef.userData.height=p.radius* 60;*/
to this:
if (p.skin is Class){
bodyDef.userData = new p.skin();
bodyDef.userData.width=p.radius * 60;
bodyDef.userData.height=p.radius* 60;
Sorry about that, I’m going to fix it in the zip of Alpha 108 now…
Hi - I’m having trouble skinning joints. Specifying a skin as a linked MovieClip in the Library gives me the following error: ReferenceError: Error #1056: Cannot create property startWidth on se.everyone.JointCable. This happens even if I define the joint as a distance joint, as instructed in the docs.
Very grateful for any ideas!
I assume your skin is assocated with a class “se.everyone.JointCable” simply make the JointCable class dynamic and it should work for distance joints…. basically, because of the way that joints are skinned their skins must be dynamic:
dynamic public class JointCable
something like that…
- this may be fixed/not required in the upcoming release of QuickBox2D… let me know if that helps
Thanks Zevan - Brilliant and amazingly fast response, I’m sorry I’d missed the fact it needed to be explicitly defined as dynamic. This now works.
One other thing that may be obvious and which I’ll probably regret asking as I’ll look like a total noob, but I can’t work out how to specify depths for joints. At the moment, all my joints are under all my skinned objects - in other words, objects hanging in front of other objects have joints which are behind those other objects. Can depths be specified for joints just as for other DisplayObjects?
Thanks again for the help!
That’s not a noob question, its tricky and something I’m trying to decide how to deal with… I think this should work:
addChild(myJoint.userData);
basically bring the joints skin up to the top…. ideally there should be some kind of option for this in the addJoint() method, but I haven’t created it yet… something like:
var myJoint:QuickObject = sim.addJoint({skinDepth:”top”})… or somethign like that… need to decide what makes the most sense…
Hi Zevan - thanks again for all the help and the awesome Lib which is working really well! Great work indeed!
Unfortunately I haven’t managed to solve this one problem, and I wonder if you have time to expand on what you thought might be a possible solution? I guess a lot of other things may take priority, but it would be neat to get joints that really visibly connect objects.
Anyway, thanks again for the work so far! It really helps bring Flash to life!
2。 [...]
[...] 10. リンケージされているテクスチャを貼る – QuickBox2D Skinning [...] | http://actionsnippet.com/?p=1634 | CC-MAIN-2017-34 | refinedweb | 2,045 | 74.08 |
hi everyone, i'm chris and i'm just starting out with java. through my java course i am trying to create a java program that computes a series as follows: 1/2 +2/3+ ... i/i+1 up to where i = 20. it also has to display the grand total with each addition, i.e. count 1 and total = .5, count 2 = 1.1667, etc. i am comeing across an error while trying to compile it stateing that the problem is with the system is in the line system.out.println("the total at "+ count + "is" + m);, pointing at the dor between system and out. the code is as follows :
any help is appreciated.any help is appreciated.//* problem 5.4 page 212*// //* christopher bruce *// public class problem5_4 { public static void main (double[] args) { int count = 1; double m = 0; do { m = m+count/(count + 1); system.out.println("the total at "+ count + "is" + m); count = count ++; } while (count <21); } } | http://www.javaprogrammingforums.com/whats-wrong-my-code/23655-help-first-time-java-programer.html | CC-MAIN-2014-42 | refinedweb | 161 | 76.32 |
Issue Type: Improvement Created: 2008-11-20T16:16:44.000+0000 Last Updated: 2012-11-13T15:41:49.000+0000 Status: Resolved Fix version(s): Reporter: Kawsar Saiyeed (ksid) Assignee: Pádraic Brady (padraic) Tags: - Zend_Controller
Related issues: - ZF-11783
Attachments:
EDIT: An update is at the bottom of this post following the comments by Benjamin and Kevin EDIT: Yet another update, it's a pain not being able to comment!
Hi Guys,
Currently ZF only checks for the 'HTTPS' server variable when attempting to detect SSL usage but on some setups that variable is not available while 'HTTP_X_FORWARDED_PROTO' is. In my case it happens to be because of Lighttpd proxying requests to Apache which deals with the ZF site.
This results in situations where users are redirected to the http version of a site when using things like the redirector helper with a route.
This occurs in the following locations: {quote} Zend_Controller_Action_Helper_Redirector->_redirect() [Line 192] $proto = (isset($_SERVER['HTTPS'])&&$_SERVER['HTTPS']!=="off") ? 'https' : 'http';
Suggestion: $proto = ((isset($_SERVER['HTTPS'])&&$_SERVER['HTTPS']!=="off") || (isset($_SERVER['HTTP_X_FORWARDED_PROTO'])&&$_SERVER['HTTP_X_FORWARDED_PROTO']=="https")) ? 'https' : 'http'; {quote}
{quote} Zend_Controller_Request_Http->getScheme() [Line 966] return ($this->getServer('HTTPS') == 'on') ? self::SCHEME_HTTPS : self::SCHEME_HTTP;
Suggestion: return ($this->getServer('HTTPS') == 'on' || $this->getServer('HTTP_X_FORWARDED_PROTO') == 'https') ? self::SCHEME_HTTPS : self::SCHEME_HTTP; {quote}
{quote} Zend_Soap_AutoDiscover->getSchema() [Line 132] if (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] == 'on') {
Suggestion: if ((isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] == 'on') || (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')) { {quote}
{quote} Zend_OpenId->selfUrl() [Line 97] if (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] == 'on') {
Suggestion: if ((isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] == 'on') || (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')) { {quote}
This may also affect versions earlier than 1.7.0 and other files but these are the few I've found and the 2 in Zend_Controller affect me directly. A few other things that I noticed is the difference in function names for the same purpose (Zend_Soap_AutoDiscover->getSchem_a_() and Zend_Controller_Request_Http->getSchem_e_()) and the difference in how the check is carried out for HTTPS (Zend_Controller_Action_Helper_Redirector checks against '!=="off"' while Zend_OpenId checks against '== 'on''.
Thanks
Kevin is correct, the HTTP_X_FORWARDED_PROTO can be set clientside (after a little reading I believe all HTTP_X headers can be set clientside). Unfortunately, this means there may be no easy way to identify whether or not a secure connection is being used by the client when a reverse proxy is in place. I think Benjamin's suggestion of having one central function (he suggests ::isHttpsRequest()) is nice because you won't have different components checking for it in different ways.
Maybe the function could take a boolean that defaults to false on whether the user would like it to check the HTTP_X_FORWARDED_PROTO header. During my searching on the topic I came across this file:… I'm not too sure which ticket/issue it relates to within drupal as I am not familiar with it but it touches on the concerns Kevin raised as well as make use of another Microsoft specific header when using their tech for reverse proxying. I have seen the HTTP_X_FORWARDED_SSL header used in some places too (the value is set to 'on' if SSL is used) but I'm pretty sure it can be set client side too.
This is out of the scope of ZF but if using Apache (as I am) behind a reverse proxy, the only way around this may be to set up two vhosts operating on two different ports with Apache and two different document roots. In the secure document root place a file called 'secure' which the app can then check for. If it finds it then it is being accessed by SSL but if it doesn't (as only one document root has that file) it is not. Your reverse proxy can then proxy SSL queries to the secure port and standard queries to the other port. This means that you will need two copies of your files though so it is far less than ideal and it may not even work as this is just a guess at a possibility.
Benjamin, I didn't think about manually setting the HTTPS value so that may be something that we can do provided (as you mentioned) it is done right. To do it right might still mean configuring 2 document roots with the same files because the HTTP_X headers can be set clientside so shouldn't really be trusted unless your front server scrubs them out before setting them itself.
I still think it would be nice to have one function that does the SSL checking so even if it doesn't check for the HTTP_X headers (which I now agree it shouldn't) it will mean you have one location where that logic is stored rather than the 4 at the moment.
Posted by Benjamin Eberlei (beberlei) on 2008-12-10T14:39:54.000+0000
Matthew,
what do you think? Maybe we should implement a general purpose static function ::isHttpsRequest() somewhere for everyone to use since this problem (aswell as checking many other $_SERVER variables) occours frequently in many components.
Posted by Kevin McArthur (kevin) on 2008-12-10T14:52:01.000+0000
This is likely a bad idea -- the HTTP_X_FORWARDED_PROTO header can be set by the end user as well as a forwarding webserver and could enable MITM and crypto-decrease attacks in some scenarios.
Posted by Benjamin Eberlei (beberlei) on 2008-12-11T08:40:26.000+0000
hm then this issue is a "won't fix" in my point of view.
Posted by Benjamin Eberlei (beberlei) on 2008-12-11T08:42:47.000+0000
Well you can still set $_SERVER['HTTPS'] in your bootstrap, if you are absolutly sure that you are doing it the right way. I suggest this, and leaving the core of the ZF unchanged.
Additionally a function that has a boolean value true false for checking the protocol header would create additional non-needed overhead on all the apis for setter/getter methods.
Posted by Pádraic Brady (padraic) on 2009-12-26T15:03:58.000+0000
Trivially fixed by end user - the suggested fix (presumably why this is still open) is insecure and will not be implemented. | https://framework.zend.com/issues/browse/ZF-5012 | CC-MAIN-2017-09 | refinedweb | 1,021 | 57.4 |
_Maxxx_ wrote:it's about sharing the love - in both directions!
_Maxxx_ wrote:Plus, if the other dev is more junior, this is their apprenticeship - your opportunity to help them grow by sharing your experiences.
Marc Clifton wrote:it then becomes clear that nobody in the room is qualified to review the code,
Marc Clifton wrote:but he/she should be quiet, take notes, and come to me for separate
_Maxxx_ wrote:then you work in circumstances where teaching them better methods and/or ensuring that all code is able to be maintained by all developers is imperative!
_Maxxx_ wrote:You need to either get these people up to speed, or start writing code that is easily maintainable by the people you work with and are likely to work with in the future.
_Maxxx_ wrote:Oh, you arrogant f***!
Marc Clifton wrote:end to digress into "what's an anonymous method?", or "what is closure?" or "gee, I didn't know that was in the .NET framework."
jschell wrote:Except of course that is in fact an avenue to teach other people about new concepts.
_Maxxx_ wrote:A code review not only stopped that, but allowed us to educate the team.
_Maxxx_ wrote:it also allowed me to point out that writing benchmarking programs and running them from the IDE in debug is a waste of time!
Marc Clifton wrote:there's a difference between a code review meeting and a mentoring meeting
Slacker007 wrote:I am a big proponent for code reviews,
Slacker007 wrote:prior to deployment
Munchies_Matt wrote:Code review is THE single most important quality tool you have.
public class SanderRossel : Lazy<Person>
{
public void DoWork()
{
throw new NotSupportedException();
}
}
Sander Rossel wrote:I think code reviews would be great if they were done by someone who is at my level or above.
Slacker007 wrote:Good luck to you and your career.
S
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Lounge.aspx?fid=1159&df=90&mpp=10&noise=3&prof=True&sort=Position&view=None&spc=None&select=4486089&fr=18035 | CC-MAIN-2015-18 | refinedweb | 344 | 67.18 |
# 2018/12/03~
# Fernando Gama, fgama@seas.upenn.edu.
# Luana Ruiz, rubruiz@seas.upenn.edu.
"""
graphTools.py  Tools for handling graphs

Functions:

plotGraph: plots a graph from an adjacency matrix
printGraph: prints (saves) a graph from an adjacency matrix
adjacencyToLaplacian: transform an adjacency matrix into a Laplacian matrix
normalizeAdjacency: compute the normalized adjacency
normalizeLaplacian: compute the normalized Laplacian
computeGFT: Computes the eigenbasis of a GSO
matrixPowers: computes the matrix powers
computeNonzeroRows: compute nonzero elements across rows
computeNeighborhood: compute the neighborhood of a graph
computeSourceNodes: compute source nodes for the source localization problem
isConnected: determines if a graph is connected
sparsifyGraph: sparsifies a given graph matrix
createGraph: creates an adjacency matrix
permIdentity: identity permutation
permDegree: order nodes by degree
permSpectralProxies: order nodes by spectral proxies score
permEDS: order nodes by EDS score
edgeFailSampling: samples the edges of a given graph
splineBasis: Returns the B-spline basis (taken from github.com/mdeff)

Classes:

Graph: class containing a graph
"""

import numpy as np
import scipy.sparse
import scipy.spatial as sp
from sklearn.cluster import SpectralClustering

import os
import matplotlib
matplotlib.rcParams['text.usetex'] = True
matplotlib.rcParams['font.family'] = 'serif'
import matplotlib.pyplot as plt

zeroTolerance = 1e-9 # Values below this number are considered zero.

# If adjacency matrices are not symmetric these functions might not work as
# desired: the degree will be the in-degree to each node, and the Laplacian
# is not defined for directed graphs. Same caution is advised when using
# graphs with self-loops.

def plotGraph(adjacencyMatrix, **kwargs):
    """
    plotGraph(A): plots a graph from adjacency matrix A of size N x N

    Optional keyword arguments (inferred from the code below):
        'positions' (np.array): 2 x N array with node positions (default:
            nodes evenly spaced on a circle of radius 1)
        'figSize' (int): size of the figure (default: 5)
        'lineWidth' (int): edge width (default: 1)
        'markerSize' (int): node size (default: 15)
        'markerShape' (str): node shape (default: 'o')
        'color' (hex color string): node color (default: '#01256E')
        'nodeLabel' (list): list of length N with a label for each node
    """
    # Data
    #   Adjacency matrix
    W = adjacencyMatrix
    assert W.shape[0] == W.shape[1]
    N = W.shape[0]
    #   Positions (optional)
    if 'positions' in kwargs.keys():
        pos = kwargs['positions']
    else:
        angle = np.linspace(0, 2*np.pi*(1-1/N), num = N)
        radius = 1
        pos = np.array([ radius * np.sin(angle),
                         radius * np.cos(angle) ])
    # Create figure
    #   Figure size
    if 'figSize' in kwargs.keys():
        figSize = kwargs['figSize']
    else:
        figSize = 5
    #   Line width
    if 'lineWidth' in kwargs.keys():
        lineWidth = kwargs['lineWidth']
    else:
        lineWidth = 1
    #   Marker Size
    if 'markerSize' in kwargs.keys():
        markerSize = kwargs['markerSize']
    else:
        markerSize = 15
    #   Marker shape
    if 'markerShape' in kwargs.keys():
        markerShape = kwargs['markerShape']
    else:
        markerShape = 'o'
    #   Marker color
    if 'color' in kwargs.keys():
        markerColor = kwargs['color']
    else:
        markerColor = '#01256E'
    #   Node labeling
    if 'nodeLabel' in kwargs.keys():
        doText = True
        nodeLabel = kwargs['nodeLabel']
        assert len(nodeLabel) == N
    else:
        doText = False

    # Plot
    figGraph = plt.figure(figsize = (1*figSize, 1*figSize))
    # Draw the edges first, so that nodes end up on top of them
    for i in range(N):
        for j in range(N):
            if W[i,j] > 0:
                plt.plot([pos[0,i], pos[0,j]], [pos[1,i], pos[1,j]],
                         linewidth = W[i,j] * lineWidth,
                         color = '#A8AAAF')
    for i in range(N):
        plt.plot(pos[0,i], pos[1,i], color = markerColor,
                 marker = markerShape, markersize = markerSize)
        if doText:
            plt.text(pos[0,i], pos[1,i], nodeLabel[i],
                     verticalalignment = 'center',
                     horizontalalignment = 'center',
                     color = '#F2F2F3')

    return figGraph

def printGraph(adjacencyMatrix, **kwargs):
    """
    printGraph(A): Wrapper for plotGraph to directly save the plot as a
    figure (with no axis, nor anything else like that; more aesthetic,
    fewer changes)

    Optional keyword arguments:
        'saveDir' (os.path, default: '.'): directory where to save the graph
        'legend' (default: None): Text for a legend
        'xLabel' (str, default: None): Text for the x axis
        'yLabel' (str, default: None): Text for the y axis
        'graphName' (str, default: 'graph'): name to save the file
    """
    # Wrapper for plotGraph to directly save it as a figure (with no axis,
    # nor anything else like that; more aesthetic, fewer changes)
    W = adjacencyMatrix
    assert W.shape[0] == W.shape[1]
    # Printing options
    if 'saveDir' in kwargs.keys():
        saveDir = kwargs['saveDir']
    else:
        saveDir = '.'
    if 'legend' in kwargs.keys():
        doLegend = True
        legendText = kwargs['legend']
    else:
        doLegend = False
    if 'xLabel' in kwargs.keys():
        doXlabel = True
        xLabelText = kwargs['xLabel']
    else:
        doXlabel = False
    if 'yLabel' in kwargs.keys():
        doYlabel = True
        yLabelText = kwargs['yLabel']
    else:
        doYlabel = False
    if 'graphName' in kwargs.keys():
        graphName = kwargs['graphName']
    else:
        graphName = 'graph'

    figGraph = plotGraph(adjacencyMatrix, **kwargs)
    plt.axis('off')
    if doXlabel:
        plt.xlabel(xLabelText)
    if doYlabel:
        plt.ylabel(yLabelText)
    if doLegend:
        plt.legend(legendText)
    figGraph.savefig(os.path.join(saveDir, '%s.pdf' % graphName),
                     bbox_inches = 'tight', transparent = True)
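
# Usage sketch (hedged; the toy matrix, labels and file name below are
# illustrative only, not part of the library):
#
#   A = np.array([[0., 1., 1.],
#                 [1., 0., 0.],
#                 [1., 0., 0.]])
#   printGraph(A, saveDir = '.', graphName = 'toyGraph',
#              nodeLabel = ['a', 'b', 'c']) # saves ./toyGraph.pdf
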
def adjacencyToLaplacian(W):
    """
    adjacencyToLaplacian: Computes the Laplacian from an Adjacency matrix

    Input:
        W (np.array): adjacency matrix

    Output:
        L (np.array): Laplacian matrix
    """
    # Check that the matrix is square
    assert W.shape[0] == W.shape[1]
    # Compute the degree vector
    d = np.sum(W, axis = 1)
    # And build the degree matrix
    D = np.diag(d)
    # Return the Laplacian
    return D - W

def normalizeAdjacency(W):
    """
    NormalizeAdjacency: Computes the degree-normalized adjacency matrix

    Input:
        W (np.array): adjacency matrix

    Output:
        A (np.array): degree-normalized adjacency matrix
    """
    # Check that the matrix is square
    assert W.shape[0] == W.shape[1]
    # Compute the degree vector
    d = np.sum(W, axis = 1)
    # Invert the square root of the degree
    d = 1/np.sqrt(d)
    # And build the square root inverse degree matrix
    D = np.diag(d)
    # Return the Normalized Adjacency
    return D @ W @ D

def normalizeLaplacian(L):
    """
    NormalizeLaplacian: Computes the degree-normalized Laplacian matrix

    Input:
        L (np.array): Laplacian matrix

    Output:
        normL (np.array): degree-normalized Laplacian matrix
    """
    # Check that the matrix is square
    assert L.shape[0] == L.shape[1]
    # Compute the degree vector (diagonal elements of L)
    d = np.diag(L)
    # Invert the square root of the degree
    d = 1/np.sqrt(d)
    # And build the square root inverse degree matrix
    D = np.diag(d)
    # Return the Normalized Laplacian
    return D @ L @ D

def computeGFT(S, order = 'no'):
    """
    computeGFT: Computes the frequency basis (eigenvectors) and frequency
        coefficients (eigenvalues) of a given GSO

    Input:
        S (np.array): graph shift operator matrix
        order (string): 'no', 'increasing', 'totalVariation' chosen order of
            frequency coefficients (default: 'no')

    Output:
        E (np.array): diagonal matrix with the frequency coefficients
            (eigenvalues) in the diagonal
        V (np.array): matrix with frequency basis (eigenvectors)
    """
    # Check the correct order input
    assert order == 'totalVariation' or order == 'no' or order == 'increasing'
    # Check the matrix is square
    assert S.shape[0] == S.shape[1]
    # Check if it is symmetric
    symmetric = np.allclose(S, S.T, atol = zeroTolerance)
    # Then, compute eigenvalues and eigenvectors
    if symmetric:
        e, V = np.linalg.eigh(S)
    else:
        e, V = np.linalg.eig(S)
    # Sort the eigenvalues by the desired order:
    if order == 'totalVariation':
        eMax = np.max(e)
        sortIndex = np.argsort(np.abs(e - eMax))
    elif order == 'increasing':
        sortIndex = np.argsort(np.abs(e))
    else:
        sortIndex = np.arange(0, S.shape[0])
    e = e[sortIndex]
    V = V[:, sortIndex]
    E = np.diag(e)

    return E, V
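
# Usage sketch (hedged; the 4-node cycle below is illustrative only):
#
#   A = np.roll(np.eye(4), 1, axis = 1)
#   A = 0.5 * (A + A.T)                        # undirected 4-node cycle
#   L = adjacencyToLaplacian(A)
#   E, V = computeGFT(L, order = 'increasing')
#   np.diag(E)                                 # graph frequencies, ascending
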
def matrixPowers(S,K):
    """
    matrixPowers(A, K) Computes the matrix powers A^k for k = 0, ..., K-1

    Inputs:
        A: either a single N x N matrix or a collection E x N x N of E matrices.
        K: integer, maximum power to be computed (up to K-1)

    Outputs:
        AK: either a collection of K matrices K x N x N (if the input was a
            single matrix) or a collection E x K x N x N (if the input was a
            collection of E matrices).
    """
    # S can be either a single GSO (N x N) or a collection of GSOs (E x N x N)
    if len(S.shape) == 2:
        N = S.shape[0]
        assert S.shape[1] == N
        E = 1
        S = S.reshape(1, N, N)
        scalarWeights = True
    elif len(S.shape) == 3:
        E = S.shape[0]
        N = S.shape[1]
        assert S.shape[2] == N
        scalarWeights = False

    # Now, let's build the powers of S:
    thisSK = np.tile(np.eye(N, N).reshape(1,N,N), [E, 1, 1])
    SK = thisSK.reshape(E, 1, N, N)
    for k in range(1,K):
        thisSK = thisSK @ S
        SK = np.concatenate((SK, thisSK.reshape(E, 1, N, N)), axis = 1)
    # Take out the first dimension if it was a single GSO
    if scalarWeights:
        SK = SK.reshape(K, N, N)

    return SK

def computeNonzeroRows(S, Nl = 'all'):
    """
    computeNonzeroRows: Find the position of the nonzero elements of each
        row of a matrix

    Input:
        S (np.array): matrix
        Nl (int or 'all'): number of rows to compute the nonzero elements; if
            'all', then Nl = S.shape[0]. Rows are counted from the top.

    Output:
        nonzeroElements (list): list of size Nl where each element is an array
            of the indices of the nonzero elements of the corresponding row.
    """
    # Find the position of the nonzero elements of each row of the matrix S.
    # Nl = 'all' means for all rows, otherwise, it will be an int.
    if Nl == 'all':
        Nl = S.shape[0]
    assert Nl <= S.shape[0]
    # Save neighborhood variable
    neighborhood = []
    # For each of the selected nodes
    for n in range(Nl):
        neighborhood += [np.flatnonzero(S[n,:])]

    return neighborhood
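
# Usage sketch (hedged; the shapes are the point of the example):
#
#   S = np.random.rand(5, 5)
#   SK = matrixPowers(S, 3)        # SK.shape == (3, 5, 5): I, S, S @ S
#   Stensor = np.random.rand(2, 5, 5)
#   SK = matrixPowers(Stensor, 3)  # SK.shape == (2, 3, 5, 5)
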
def computeNeighborhood(S, K, N = 'all', nb = 'all', outputType = 'list'):
    """
    computeNeighborhood: compute the set of nodes within the K-hop
        neighborhood of a graph (i.e. all nodes that can be reached within
        K-hops of each node)

    computeNeighborhood(W, K, N = 'all', nb = 'all', outputType = 'list')

    Input:
        W (np.array): adjacency matrix
        K (int): K-hop neighborhood to compute the neighbors
        N (int or 'all'): how many nodes (from top) to compute the neighbors
            from (default: 'all').
        nb (int or 'all'): how many nodes to consider valid when computing the
            neighborhood (i.e. nodes beyond nb are not trimmed out of the
            neighborhood; note that nodes smaller than nb that can be reached
            by nodes greater than nb, are included. default: 'all')
        outputType ('list' or 'matrix'): choose if the output is given in the
            form of a list of arrays, or a matrix with zero-padding of neighbors
            with neighborhoods smaller than the maximum neighborhood
            (default: 'list')

    Output:
        neighborhood (np.array or list): contains the indices of the neighboring
            nodes following the order established by the adjacency matrix.
    """
    # outputType is either a list (a list of np.arrays) or a matrix.
    assert outputType == 'list' or outputType == 'matrix'
    # Here, we can assume S is already sparse, in which case is a list of
    # sparse matrices, or that S is full, in which case it is a 3-D array.
    if isinstance(S, list):
        # If it is a list, it has to be a list of matrices, where the length
        # of the list has to be the number of edge weights. But we actually need
        # to sum over all edges to be sure we consider all reachable nodes on
        # at least one of the edge dimensions
        newS = 0.
        for e in range(len(S)):
            # First check it's a matrix, and a square one
            assert len(S[e].shape) == 2
            assert S[e].shape[0] == S[e].shape[1]
            # For each edge, convert to sparse (in COO because we care about
            # coordinates to find the neighborhoods)
            newS += scipy.sparse.coo_matrix(
                              (np.abs(S[e]) > zeroTolerance).astype(S[e].dtype))
        S = (newS > zeroTolerance).astype(newS.dtype)
    else:
        # if S is not a list, check that it is either a E x N x N or a N x N
        # array.
        assert len(S.shape) == 2 or len(S.shape) == 3
        if len(S.shape) == 3:
            assert S.shape[1] == S.shape[2]
            # If it has an edge feature dimension, just add over that dimension.
            # We only need one non-zero value along the vector to have an edge
            # there. (Obs.: While normally assume that all weights are positive,
            # let's just add on abs() value to avoid any cancellations).
            S = np.sum(np.abs(S), axis = 0)
            S = scipy.sparse.coo_matrix((S > zeroTolerance).astype(S.dtype))
        else:
            # In this case, if it is a 2-D array, we do not need to add over the
            # edge dimension, so we just sparsify it
            assert S.shape[0] == S.shape[1]
            S = scipy.sparse.coo_matrix((S > zeroTolerance).astype(S.dtype))
    # Now, we finally have a sparse, binary matrix, with the connections.
    # Now check that K and N are correct inputs.
    # K is an int (target K-hop neighborhood)
    # N is either 'all' or an int determining how many rows
    assert K >= 0 # K = 0 is just the identity
    # Check how many nodes we want to obtain
    if N == 'all':
        N = S.shape[0]
    if nb == 'all':
        nb = S.shape[0]
    assert N >= 0 and N <= S.shape[0] # Cannot return more nodes than there are
    assert nb >= 0 and nb <= S.shape[0]

    # All nodes are in their own neighborhood, so
    allNeighbors = [ [n] for n in range(S.shape[0])]
    # Now, if K = 0, then these are all the neighborhoods we need.
    # And also keep track only about the nodes we care about
    neighbors = [ [n] for n in range(N)]
    # But if K > 0
    if K > 0:
        # Let's start with the one-hop neighborhood of all nodes (we need this)
        nonzeroS = list(S.nonzero())
        # This is a tuple with two arrays, the first one containing the row
        # index of the nonzero elements, and the second one containing the
        # column index of the nonzero elements.
        # Now, we want the one-hop neighborhood of all nodes (and all nodes have
        # a one-hop neighborhood, since the graphs are connected)
        for n in range(len(nonzeroS[0])):
            # The list in index 0 is the nodes, the list in index 1 is the
            # corresponding neighbor
            allNeighbors[nonzeroS[0][n]].append(nonzeroS[1][n])
        # Now that we have the one-hop neighbors, we just need to do a depth
        # first search looking for the one-hop neighborhood of each neighbor
        # and so on.
        oneHopNeighbors = allNeighbors.copy()
        # We have already visited the nodes themselves, since we already
        # gathered the one-hop neighbors.
        visitedNodes = [ [n] for n in range(N)]
        # Keep only the one-hop neighborhood of the ones we're interested in
        neighbors = [list(set(allNeighbors[n])) for n in range(N)]
        # For each hop
        for k in range(1,K):
            # For each of the nodes we care about
            for i in range(N):
                # Store the new neighbors to be included for node i
                newNeighbors = []
                # Take each of the neighbors we already have
                for j in neighbors[i]:
                    # and if we haven't visited those neighbors yet
                    if j not in visitedNodes[i]:
                        # Just look for our neighbor's one-hop neighbors and
                        # add them to the neighborhood list
                        newNeighbors.extend(oneHopNeighbors[j])
                        # And don't forget to add the node to the visited ones
                        # (we already have its one-hop neighborhood)
                        visitedNodes[i].append(j)
                # And now that we have added all the new neighbors, we add them
                # to the old neighbors
                neighbors[i].extend(newNeighbors)
                # And get rid of those that appear more than once
                neighbors[i] = list(set(neighbors[i]))
    # Now that all nodes have been collected, get rid of those beyond nb
    for i in range(N):
        # Get the neighborhood
        thisNeighborhood = neighbors[i].copy()
        # And get rid of the excess nodes
        neighbors[i] = [j for j in thisNeighborhood if j < nb]
    if outputType == 'matrix':
        # List containing all the neighborhood sizes
        neighborhoodSizes = [len(x) for x in neighbors]
        # Obtain max number of neighbors
        maxNeighborhoodSize = max(neighborhoodSizes)
        # then we have to check each neighborhood and find if we need to add
        # more nodes (itself) to pad it so we can build a matrix
        paddedNeighbors = []
        for n in range(N):
            paddedNeighbors += [np.concatenate(
                                   (neighbors[n],
                                    n * np.ones(maxNeighborhoodSize \
                                                    - neighborhoodSizes[n]))
                                )]
        # And now that every element in the list paddedNeighbors has the same
        # length, we can make it a matrix
        neighbors = np.array(paddedNeighbors, dtype = int)

    return neighbors
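
# Usage sketch (hedged; a 3-node path graph, ordering within each returned
# list may vary since sets are used internally):
#
#   A = np.array([[0., 1., 0.],
#                 [1., 0., 1.],
#                 [0., 1., 0.]])
#   computeNeighborhood(A, K = 1)  # [[0, 1], [0, 1, 2], [1, 2]]
#   computeNeighborhood(A, K = 2)  # [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
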
def computeSourceNodes(A, C):
    """
    computeSourceNodes: compute source nodes for the source localization
        problem

    Input:
        A (np.array): adjacency matrix of shape N x N
        C (int): number of classes

    Output:
        sourceNodes (list): contains the indices of the C source nodes

    Uses the adjacency matrix to compute C communities by means of spectral
    clustering, and then selects the node with largest degree within each
    community
    """
    sourceNodes = []
    degree = np.sum(A, axis = 0) # degree of each node
    # Compute communities
    communityClusters = SpectralClustering(n_clusters = C,
                                           affinity = 'precomputed',
                                           assign_labels = 'discretize')
    communityClusters = communityClusters.fit(A)
    communityLabels = communityClusters.labels_
    # For each community
    for c in range(C):
        communityNodes = np.nonzero(communityLabels == c)[0]
        degreeSorted = np.argsort(degree[communityNodes])
        sourceNodes = sourceNodes + [communityNodes[degreeSorted[-1]]]

    return sourceNodes

def isConnected(W):
    """
    isConnected: determine if a graph is connected

    Input:
        W (np.array): adjacency matrix

    Output:
        connected (bool): True if the graph is connected, False otherwise

    Obs.: If the graph is directed, we consider it is connected when there
    is at least one edge that would make it connected (i.e. if we drop the
    direction of all edges, and just keep them as undirected, then the
    resulting graph would be connected).
    """
    undirected = np.allclose(W, W.T, atol = zeroTolerance)
    if not undirected:
        W = 0.5 * (W + W.T)
    L = adjacencyToLaplacian(W)
    E, V = computeGFT(L)
    e = np.diag(E) # only eigenvalues
    # Check how many eigenvalues are (numerically) zero: the multiplicity of
    # the zero eigenvalue of the Laplacian is the number of connected
    # components
    nComponents = np.sum(e < zeroTolerance) # Number of connected components
    if nComponents == 1:
        connected = True
    else:
        connected = False

    return connected
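
# Usage sketch (hedged; two disconnected dyads vs. one 4-node path):
#
#   Wdisc = np.kron(np.eye(2), np.array([[0., 1.], [1., 0.]]))
#   isConnected(Wdisc)                         # False (two components)
#   Wpath = np.diag(np.ones(3), 1); Wpath = Wpath + Wpath.T
#   isConnected(Wpath)                         # True
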
""" undirected = np.allclose(W, W.T, atol = zeroTolerance) if not undirected: W = 0.5 * (W + W.T) L = adjacencyToLaplacian(W) E, V = computeGFT(L) e = np.diag(E) # only eigenvavlues # Check how many values are greater than zero: nComponents = np.sum(e < zeroTolerance) # Number of connected components if nComponents == 1: connected = True else: connected = False return connected def sparsifyGraph(W, sparsificationType, p): """ sparsifyGraph: sparsifies a given graph matrix Input: W (np.array): adjacency matrix sparsificationType ('threshold' or 'NN'): threshold or nearest-neighbor sparsificationParameter (float): sparsification parameter (value of the threshold under which edges are deleted or the number of NN to keep) Output: W (np.array): adjacency matrix of sparsified graph Observation: - If it is an undirected graph, when computing the kNN edges, the resulting graph might be directed. Then, the graph is converted into an undirected one by taking the average of incoming and outgoing edges (this might result in a graph where some nodes have more than kNN neighbors). - If it is a directed graph, remember that element (i,j) of the adjacency matrix corresponds to edge (j,i). This means that each row of the matrix has nonzero elements on all the incoming edges. In the directed case, the number of nearest neighbors is with respect to the incoming edges (i.e. kNN incoming edges are kept). - If the original graph is connected, then thresholding might lead to a disconnected graph. If this is the case, the threshold will be increased in small increments until the resulting graph is connected. To recover the actual treshold used (higher than the one specified) do np.min(W[np.nonzero(W)]). In the case of kNN, if the resulting graph is disconnected, the parameter k is increased in 1 until the resultin graph is connected. """ # Check input arguments N = W.shape[0] assert W.shape[1] == N assert sparsificationType == 'threshold' or sparsificationType == 'NN' connected = isConnected(W) undirected = np.allclose(W, W.T, atol = zeroTolerance) # np.allclose() gives true if matrices W and W.T are the same up to # atol. # Start with thresholding if sparsificationType == 'threshold': Wnew = W.copy() Wnew[np.abs(Wnew) < p] = 0. # If the original graph was connected, we need to be sure this one is # connected as well if connected: # Check if the new graph is connected newGraphIsConnected = isConnected(Wnew) # While it is not connected while not newGraphIsConnected: # We need to reduce the size of p until we get it connected p = p/2. Wnew = W.copy() Wnew[np.abs(Wnew) < p] = 0. # Check if it is connected now newGraphIsConnected = isConnected(Wnew) # Now, let's move to k nearest neighbors elif sparsificationType == 'NN': # We sort the values of each row (in increasing order) Wsorted = np.sort(W, axis = 1) # Pick the # If the original graph was connected if connected: # Check if the new graph is connected newGraphIsConnected = isConnected(Wnew) # While it is not connected while not newGraphIsConnected: # Increase the number of k-NN by 1 p = p + 1 # Compute the new # Check if it is connected now newGraphIsConnected = isConnected(Wnew) # if it's undirected, this is the moment to reconvert it as undirected if undirected: Wnew = 0.5 * (Wnew + Wnew.T) return Wnew def createGraph(graphType, N, graphOptions): """ createGraph: creates a graph of a specified type Input: graphType (string): 'SBM', 'SmallWorld', 'fuseEdges', and 'adjacency' N (int): Number of nodes graphOptions (dict): Depends on the type selected. 
def createGraph(graphType, N, graphOptions):
    """
    createGraph: creates a graph of a specified type

    Input:
        graphType (string): 'SBM', 'SmallWorld', 'fuseEdges', and 'adjacency'
        N (int): Number of nodes
        graphOptions (dict): Depends on the type selected.
        Obs.: More types to come.

    Output:
        W (np.array): adjacency matrix of shape N x N

    Optional inputs (by keyword):
        graphType: 'SBM'
            'nCommunities': (int) number of communities
            'probIntra': (float) probability of drawing an edge between nodes
                inside the same community
            'probInter': (float) probability of drawing an edge between nodes
                of different communities
            Obs.: This always results in a connected graph.
        graphType: 'SmallWorld'
            'probEdge': probability of drawing an edge between nodes
            'probRewiring': probability of rewiring an edge
            Obs.: This always results in a connected graph.
        graphType: 'fuseEdges'
            (Given a collection of adjacency matrices of graphs with the same
            number of nodes, this graph type is a fusion of the edges of the
            collection of graphs, following different desirable properties)
            'adjacencyMatrices' (np.array): collection of matrices in a tensor
                np.array of dimension nGraphs x N x N
            'aggregationType' ('sum' or 'avg'): if 'sum', edges are summed
                across the collection of matrices, if 'avg' they are averaged
            'normalizationType' ('rows', 'cols' or 'no'): if 'rows', the values
                of the rows (after aggregated) are normalized to sum to one, if
                'cols', it is for the columns, if it is 'no' there is no
                normalization.
            'isolatedNodes' (bool): if True, keep isolated nodes should there
                be any
            'forceUndirected' (bool): if True, make the resulting graph
                undirected by replacing directed edges by the average of the
                outgoing and incoming edges between each pair of nodes
            'forceConnected' (bool): if True, make the graph connected by
                taking the largest connected component
            'nodeList' (list): this is an empty list that, after calling the
                function, will contain a list of the nodes that were kept when
                creating the adjacency matrix out of fusing the given ones with
                the desired options
            'extraComponents' (list, optional): if the resulting fused adjacency
                matrix is not connected, and then forceConnected = True, then
                this list will contain two lists, the first one with the
                adjacency matrices of the smaller connected components, and the
                second one a corresponding list with the index of the nodes
                that were kept for each of the smaller connected components
            (Obs.: If a given single graph is required to be adapted with any
            of the options in this function, then it can just be expanded to
            have one dimension along axis = 0 and fed to this function to
            obtain the corresponding graph with the desired properties)
        graphType: 'adjacency'
            'adjacencyMatrix' (np.array): just return the given adjacency
                matrix (after checking it has N nodes)
    """
    # Check
    assert N >= 0

    if graphType == 'SBM':
        assert len(graphOptions.keys()) == 3
        C = graphOptions['nCommunities'] # Number of communities
        assert int(C) == C # Check that the number of communities is an integer
        pii = graphOptions['probIntra'] # Intracommunity probability
        pij = graphOptions['probInter'] # Intercommunity probability
        assert 0 <= pii <= 1 # Check that they are valid probabilities
        assert 0 <= pij <= 1
        # We create the SBM as follows: we generate random numbers between
        # 0 and 1 and then we compare them elementwise to a matrix of the
        # same size of pii and pij to set some of them to one and other to
        # zero.
        # Let's start by creating the matrix of pii and pij.
        # First, we need to know how many nodes are in each community.
        nNodesC = [N//C] * C # Number of nodes per community: floor division
        c = 0 # counter for community
        while sum(nNodesC) < N: # If there are still nodes to put in
            # communities, do it one for each (balanced communities)
            nNodesC[c] = nNodesC[c] + 1
            c += 1
        # So now, the list nNodesC has how many nodes are on each community.
        # We proceed to build the probability matrix.
        # We create a zero matrix
        probMatrix = np.zeros([N,N])
        # And fill ones on the block diagonals following the number of nodes.
        # For this, we need the cumulative sum of the number of nodes
        nNodesCIndex = [0] + np.cumsum(nNodesC).tolist()
        # The zero is added because it is the first index
        for c in range(C):
            probMatrix[ nNodesCIndex[c] : nNodesCIndex[c+1] , \
                        nNodesCIndex[c] : nNodesCIndex[c+1] ] = \
                np.ones([nNodesC[c], nNodesC[c]])
        # The matrix probMatrix has one in the block diagonal, which should
        # have probabilities p_ii and 0 in the offdiagonal that should have
        # probabilities p_ij. So that
        probMatrix = pii * probMatrix + pij * (1 - probMatrix)
        # has pii in the intracommunity blocks and pij in the intercommunity
        # blocks.
        # Now we're finally ready to generate a connected graph
        connectedGraph = False
        while not connectedGraph:
            # Generate random matrix
            W = np.random.rand(N,N)
            W = (W < probMatrix).astype(np.float64)
            # This matrix will have a 1 if the element ij is less or equal than
            # p_ij, so that if p_ij = 0.8, then it will be 1 80% of the times
            # (on average).
            # We need to make it undirected and without self-loops, so keep the
            # upper triangular part after the main diagonal
            W = np.triu(W, 1)
            # And add it to the lower triangular part
            W = W + W.T
            # Now let's check that it is connected
            connectedGraph = isConnected(W)
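
    # Usage sketch for this branch (hedged; the numbers are illustrative):
    #
    #   W = createGraph('SBM', 20, {'nCommunities': 2,
    #                               'probIntra': 0.8,
    #                               'probInter': 0.2})
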
    elif graphType == 'SmallWorld':
        # Function provided by Tuomo Mäki-Marttunen
        # Connectedness introduced by Dr. S. Segarra.
        # Adapted to numpy by Fernando Gama.
        p = graphOptions['probEdge'] # Edge probability
        q = graphOptions['probRewiring'] # Rewiring probability
        # Positions on a circle
        posX = np.cos(2*np.pi*np.arange(0,N)/N).reshape([N,1]) # x axis
        posY = np.sin(2*np.pi*np.arange(0,N)/N).reshape([N,1]) # y axis
        pos = np.concatenate((posX, posY), axis = 1) # N x 2 position matrix
        connectedGraph = False

        W = np.zeros([N,N], dtype = pos.dtype) # Empty adjacency matrix
        D = sp.distance.squareform(sp.distance.pdist(pos)) ** 2 # Squared
            # distance matrix

        while not connectedGraph:
            # 1. The generation of locally connected network with given
            # in-degree:
            for n in range(N): # Go through all nodes in order
                nn = np.random.binomial(N, p)
                # Possible inputs are all but the node itself:
                pind = np.concatenate((np.arange(0,n), np.arange(n+1, N)))
                sortedIndices = np.argsort(D[n,pind])
                dists = D[n,pind[sortedIndices]]
                inds_equallyfar = np.nonzero(dists == dists[nn])[0]
                if len(inds_equallyfar) == 1: # if a unique farthest node to
                    # be chosen as input
                    W[pind[sortedIndices[0:nn]],n] = 1 # choose as inputs all
                    # from closest to the farthest-to-be-chosen
                else:
                    W[pind[sortedIndices[0:np.min(inds_equallyfar)]],n] = 1
                    # choose each nearer than farthest-to-be-chosen
                    r = np.random.permutation(len(inds_equallyfar)).astype(int)
                    # choose randomly between the ones that are as far as
                    # be-chosen
                    W[pind[sortedIndices[np.min(inds_equallyfar)\
                                    +r[0:nn-np.min(inds_equallyfar)+1]]],n] = 1
            # 2. Watts-Strogatz perturbation:
            for n in range(N):
                A = np.nonzero(W[:,n])[0] # find the in-neighbours of n
                for j in range(len(A)):
                    if np.random.rand() < q:
                        freeind = 1 - W[:,n] # possible new candidates are
                        # all the ones not yet outputting to n
                        # (excluding n itself)
                        freeind[n] = 0
                        freeind[A[j]] = 1
                        B = np.nonzero(freeind)[0]
                        r = np.floor(np.random.rand()*len(B)).astype(int)
                        W[A[j],n] = 0
                        W[B[r],n] = 1
            # symmetrize M
            W = np.triu(W)
            W = W + W.T
            # Check that graph is connected
            connectedGraph = isConnected(W)
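
    # Usage sketch for this branch (hedged; typical small-world settings):
    #
    #   W = createGraph('SmallWorld', 20, {'probEdge': 0.3,
    #                                      'probRewiring': 0.1})
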
Watts-Strogatz perturbation: for n in range(N): A = np.nonzero(W[:,n])[0] # find the in-neighbours of n for j in range(len(A)): if np.random.rand() < q: freeind = 1 - W[:,n] # possible new candidates are # all the ones not yet outputting to n # (excluding n itself) freeind[n] = 0 freeind[A[j]] = 1 B = np.nonzero(freeind)[0] r = np.floor(np.random.rand()*len(B)).astype(np.int) W[A[j],n] = 0 W[B[r],n] = 1; # symmetrize M W = np.triu(W) W = W + W.T # Check that graph is connected connectedGraph = isConnected(W) elif graphType == 'fuseEdges': # This alternative assumes that there are multiple graphs that have to # be fused into one. # This will be done in two ways: average or sum. # On top, options will include: to symmetrize it or not, to make it # connected or not. # The input data is a tensor E x N x N where E are the multiple edge # features that we want to fuse. # Argument N is ignored # Data assert 7 <= len(graphOptions.keys()) <= 8 W = graphOptions['adjacencyMatrices'] # Data in format E x N x N assert len(W.shape) == 3 N = W.shape[1] # Number of nodes assert W.shape[1] == W.shape[2] # Name the list with all nodes to keep nodeList = graphOptions['nodeList'] # This should be an empty list # If there is an 8th argument, this is where we are going to save the # extra components which are not the largest if len(graphOptions.keys()) == 8: logExtraComponents = True extraComponents = graphOptions['extraComponents'] # This will be a list with two elements, the first elements will be # the adjacency matrix of the other (smaller) components, whereas # the second elements will be a list of the same size, where each # elements is yet another list of nodes to keep from the original # graph to build such an adjacency matrix (akin to nodeList) else: logExtraComponents = False # Flag to know if we need to log the # extra components or not allNodes = np.arange(N) # What type of node aggregation aggregationType = graphOptions['aggregationType'] assert aggregationType == 'sum' or aggregationType == 'avg' if aggregationType == 'sum': W = np.sum(W, axis = 0) elif aggregationType == 'avg': W = np.mean(W, axis = 0) # Normalization (sum of rows or columns is equal to 1) normalizationType = graphOptions['normalizationType'] if normalizationType == 'rows': rowSum = np.sum(W, axis = 1).reshape([N, 1]) rowSum[np.abs(rowSum) < zeroTolerance] = 1. W = W/np.tile(rowSum, [1, N]) elif normalizationType == 'cols': colSum = np.sum(W, axis = 0).reshape([1, N]) colSum[np.abs(colSum) < zeroTolerance] = 1. W = W/np.tile(colSum, [N, 1]) # Discarding isolated nodes isolatedNodes = graphOptions['isolatedNodes'] # if True, isolated nodes # are allowed, if not, discard them if isolatedNodes == False: # A Node is isolated when it's degree is zero degVector = np.sum(np.abs(W), axis = 0) # Keep nodes whose degree is not zero keepNodes = np.nonzero(degVector > zeroTolerance) # Get the first element of the output tuple, for some reason if # we take keepNodes, _ as the output it says it cannot unpack it. 
keepNodes = keepNodes[0] if len(keepNodes) < N: W = W[keepNodes][:, keepNodes] # Update the nodes kept allNodes = allNodes[keepNodes] # Check if we need to make it undirected or not forceUndirected = graphOptions['forceUndirected'] # if True, make it # undirected by using the average between nodes (careful, some # edges might cancel) if forceUndirected == True: W = 0.5 * (W + W.T) # Finally, making it a connected graph forceConnected = graphOptions['forceConnected'] # if True, make the # graph connected if forceConnected == True: # Check if the given graph is already connected connectedFlag = isConnected(W) # If it is not connected if not connectedFlag: # Find all connected components nComponents, nodeLabels = \ scipy.sparse.csgraph.connected_components(W) # Now, we have to pick the connected component with the largest # number of nodes, because that's the one to output. # Momentarily store the rest. # Let's get the list of nodes we have so far partialNodes = np.arange(W.shape[0]) # Create the lists to store the adjacency matrices and # the official lists of nodes to keep eachAdjacency = [None] * nComponents eachNodeList = [None] * nComponents # And we want to keep the one with largest number of nodes, but # we will do only one for, so we need to be checking which one # is, so we will compare against the maximum number of nodes # registered so far nNodesMax = 0 # To start for l in range(nComponents): # Find the nodes belonging to the lth connected component thisNodesToKeep = partialNodes[nodeLabels == l] # This adjacency matrix eachAdjacency[l] = W[thisNodesToKeep][:, thisNodesToKeep] # The actual list eachNodeList[l] = allNodes[thisNodesToKeep] # Check the number of nodes thisNumberOfNodes = len(thisNodesToKeep) # And see if this is the largest if thisNumberOfNodes > nNodesMax: # Store the new number of maximum nodes nNodesMax = thisNumberOfNodes # Store the element of the list that satisfies it indexLargestComponent = l # Once we have been over all the connected components, just # output the one with largest number of nodes W = eachAdjacency.pop(indexLargestComponent) allNodes = eachNodeList.pop(indexLargestComponent) # Check that it is effectively connected assert isConnected(W) # And, if we have the extra argument, return all the other # connected components if logExtraComponents == True: extraComponents.append(eachAdjacency) extraComponents.append(eachNodeList) # To end, update the node list, so that it is returned through argument nodeList.extend(allNodes.tolist()) elif graphType == 'adjacency': assert 'adjacencyMatrix' in graphOptions.keys() W = graphOptions['adjacencyMatrix'] assert W.shape[0] == W.shape[1] == N return W # Permutation functions def permIdentity(S): """ permIdentity: determines the identity permnutation Input: S (np.array): matrix Output: permS (np.array): matrix permuted (since, there's no permutation, it's the same input matrix) order (list): list of indices to make S become # Number of nodes N = S.shape[1] # Identity order order = np.arange(N) # If the original GSO assumed scalar weights, get rid of the extra dimension if scalarWeights: S = S.reshape([N, N]) return S, order.tolist() def permDegree(S): """ permDegree: determines the permutation by degree (nodes ordered from highest degree to lowest) Input: S (np.array): matrix Output: permS (np.array): matrix permuted order (list): list of indices to permute S to turn into # Compute the degree d = np.sum(np.sum(S, axis = 1), axis = 0) # Sort ascending order (from min degree to max degree) order = np.argsort(d) # 
Reverse sorting order = np.flip(order,0) # And update S S = S[:,order,:][:,:,order] # If the original GSO assumed scalar weights, get rid of the extra dimension if scalarWeights: S = S.reshape([S.shape[1], S.shape[2]]) return S, order.tolist() def permSpectralProxies(S): """ permSpectralProxies: determines the permutation by the spectral proxies score (from highest to lowest) Input: S (np.array): matrix Output: permS (np.array): matrix permuted order (list): list of indices to permute S to turn into permS. """ # Design decisions: k = 8 # Parameter of the spectral proxies method. This is fixed for # consistency with the calls of the other permutation functions. #) N = simpleS.shape[0] # Number of nodes ST = simpleS.conj().T # Transpose of S, needed for the method Sk = np.linalg.matrix_power(simpleS,k) # S^k STk = np.linalg.matrix_power(ST,k) # (S^T)^k STkSk = STk @ Sk # (S^T)^k * S^k, needed for the method nodes = [] # Where to save the nodes, order according the criteria it = 1 M = N # This opens up the door if we want to use this code for the actual # selection of nodes, instead of just ordering while len(nodes) < M: remainingNodes = [n for n in range(N) if n not in nodes] # Computes the eigenvalue decomposition phi_eig, phi_ast_k = np.linalg.eig( STkSk[remainingNodes][:,remainingNodes]) phi_ast_k = phi_ast_k[:][:,np.argmin(phi_eig.real)] abs_phi_ast_k_2 = np.square(np.absolute(phi_ast_k)) newNodePos = np.argmax(abs_phi_ast_k_2) nodes.append(remainingNodes[newNodePos]) it += 1 if scalarWeights: S = S[nodes,:][:,nodes] else: S = S[:,nodes,:][:,:,nodes] return S, nodes def permEDS(S): """ permEDS: determines the permutation by the experimentally designed sampling score (from highest to lowest) Input: S (np.array): matrix Output: permS (np.array): matrix permuted order (list): list of indices to permute S to turn into permS. """ #) E, V = np.linalg.eig(simpleS) # Eigendecomposition of S kappa = np.max(np.absolute(V), axis=1) kappa2 = np.square(kappa) # The probabilities assigned to each node are # proportional to kappa2, so in the mean, the ones with largest kappa^2 # would be "sampled" more often, and as suche are more important (i.e. # they have a higher score) # Sort ascending order (from min degree to max degree) order = np.argsort(kappa2) # Reverse sorting order = np.flip(order,0) if scalarWeights: S = S[order,:][:,order] else: S = S[:,order,:][:,:,order] return S, order.tolist() def edgeFailSampling(W, p): """ edgeFailSampling: randomly delete the edges of a given graph Input: W (np.array): adjacency matrix p (float): probability of deleting an edge Output: W (np.array): adjacency matrix with some edges randomly deleted Obs.: The resulting graph need not be connected (even if the input graph is) """ assert 0 <= p <= 1 N = W.shape[0] assert W.shape[1] == N undirected = np.allclose(W, W.T, atol = zeroTolerance) maskEdges = np.random.rand(N, N) maskEdges = (maskEdges > p).astype(W.dtype) # Put a 1 with probability 1-p W = maskEdges * W if undirected: W = np.triu(W) W = W + W.T return W class Graph(): """ Graph: class to handle a graph with several of its properties Initialization: graphType (string): 'SBM', 'SmallWorld', 'fuseEdges', and 'adjacency' N (int): number of nodes [optionalArguments]: related to the specific type of graph; see createGraph() for details. 
Attributes: .N (int): number of nodes .M (int): number of edges .W (np.array): weighted adjacency matrix .D (np.array): degree matrix .A (np.array): unweighted adjacency matrix .L (np.array): Laplacian matrix (if graph is undirected and has no self-loops) .S (np.array): graph shift operator (weighted adjacency matrix by default) .E (np.array): eigenvalue (diag) matrix (graph frequency coefficients) .V (np.array): eigenvector matrix (graph frequency basis) .undirected (bool): True if the graph is undirected .selfLoops (bool): True if the graph has self-loops Methods: .computeGFT(): computes the GFT of the existing stored GSO and stores it internally in self.V and self.E (if this is never called, the corresponding attributes are set to None) .setGSO(S, GFT = 'no'): sets a new GSO Inputs: S (np.array): new GSO matrix (has to have the same number of nodes), updates attribute .S GFT ('no', 'increasing' or 'totalVariation'): order of eigendecomposition; if 'no', no eigendecomposition is made, and the attributes .V and .E are set to None """ # in this class we provide, easily as attributes, the basic notions of # a graph. This serve as a building block for more complex notions as well. def __init__(self, graphType, N, graphOptions): assert N > 0 #\\\ Create the graph (Outputs adjacency matrix): self.W = createGraph(graphType, N, graphOptions) # TODO: Let's start easy: make it just an N x N matrix. We'll see later # the rest of the things just as handling multiple features and stuff. #\\\ Number of nodes: self.N = (self.W).shape[0] #\\\ Bool for graph being undirected: self.undirected = np.allclose(self.W, (self.W).T, atol = zeroTolerance) # np.allclose() gives true if matrices W and W.T are the same up to # atol. #\\\ Bool for graph having self-loops: self.selfLoops = True \ if np.sum(np.abs(np.diag(self.W)) > zeroTolerance) > 0 \ else False #\\\ Degree matrix: self.D = np.diag(np.sum(self.W, axis = 1)) #\\\ Number of edges: self.M = int(np.sum(np.triu(self.W)) if self.undirected \ else np.sum(self.W)) #\\\ Unweighted adjacency: self.A = (np.abs(self.W) > 0).astype(self.W.dtype) #\\\ Laplacian matrix: # Only if the graph is undirected and has no self-loops if self.undirected and not self.selfLoops: self.L = adjacencyToLaplacian(self.W) else: self.L = None #\\\ GSO (Graph Shift Operator): # The weighted adjacency matrix by default self.S = self.W #\\\ GFT: Declare variables but do not compute it unless specifically # requested self.E = None # Eigenvalues self.V = None # Eigenvectors def computeGFT(self): # Compute the GFT of the stored GSO if self.S is not None: #\\ GFT: # Compute the eigenvalues (E) and eigenvectors (V) self.E, self.V = computeGFT(self.S, order = 'totalVariation') def setGSO(self, S, GFT = 'no'): # This simply sets a matrix as a new GSO. It has to have the same number # of nodes (otherwise, it's a different graph!) and it can or cannot # compute the GFT, depending on the options for GFT assert S.shape[0] == S.shape[1] == self.N assert GFT == 'no' or GFT == 'increasing' or GFT == 'totalVariation' # Set the new GSO self.S = S if GFT == 'no': self.E = None self.V = None else: self.E, self.V = computeGFT(self.S, order = GFT) def splineBasis(K, x, degree=3): # Function written by M. Defferrard, taken verbatim (except for function # name), from # """ Return the B-spline basis. K: number of control points. x: evaluation points or number of evenly distributed evaluation points. degree: degree of the spline. Cubic spline by default. 
""" if np.isscalar(x): x = np.linspace(0, 1, x) # Evenly distributed knot vectors. kv1 = x.min() * np.ones(degree) kv2 = np.linspace(x.min(), x.max(), K-degree+1) kv3 = x.max() * np.ones(degree) kv = np.concatenate((kv1, kv2, kv3)) # Cox - DeBoor recursive function to compute one spline over x. def cox_deboor(k, d): # Test for end conditions, the rectangular degree zero spline. if (d == 0): return ((x - kv[k] >= 0) & (x - kv[k + 1] < 0)).astype(int) denom1 = kv[k + d] - kv[k] term1 = 0 if denom1 > 0: term1 = ((x - kv[k]) / denom1) * cox_deboor(k, d - 1) denom2 = kv[k + d + 1] - kv[k + 1] term2 = 0 if denom2 > 0: term2 = ((-(x - kv[k + d + 1]) / denom2) * cox_deboor(k + 1, d - 1)) return term1 + term2 # Compute basis for each point basis = np.column_stack([cox_deboor(k, degree) for k in range(K)]) basis[-1,-1] = 1 return basis def coarsen(A, levels, self_connections=False): # Function written by M. Defferrard, taken (almost) verbatim, from # """ Coarsen a graph, represented by its adjacency matrix A, at multiple levels. """ graphs, parents = metis(A, levels) perms = compute_perm(parents) for i, A in enumerate(graphs): M, M = A.shape if not self_connections: A = A.tocoo() A.setdiag(0) if i < levels: A = perm_adjacency(A, perms[i]) A = A.tocsr() A.eliminate_zeros() graphs[i] = A # Mnew, Mnew = A.shape # print('Layer {0}: M_{0} = |V| = {1} nodes ({2} added),' # '|E| = {3} edges'.format(i, Mnew, Mnew-M, A.nnz//2)) return graphs, perms[0] if levels > 0 else None def metis(W, levels, rid=None): # Function written by M. Defferrard, taken verbatim, from # """ Coarsen a graph multiple times using the METIS algorithm. INPUT W: symmetric sparse weight (adjacency) matrix levels: the number of coarsened graphs OUTPUT graph[0]: original graph of size N_1 graph[2]: coarser graph of size N_2 < N_1 graph[levels]: coarsest graph of Size N_levels < ... 
< N_2 < N_1 parents[i] is a vector of size N_i with entries ranging from 1 to N_{i+1} which indicate the parents in the coarser graph[i+1] nd_sz{i} is a vector of size N_i that contains the size of the supernode in the graph{i} NOTE if "graph" is a list of length k, then "parents" will be a list of length k-1 """ N, N = W.shape if rid is None: rid = np.random.permutation(range(N)) parents = [] degree = W.sum(axis=0) - W.diagonal() graphs = [] graphs.append(W) #supernode_size = np.ones(N) #nd_sz = [supernode_size] #count = 0 #while N > maxsize: for _ in range(levels): #count += 1 # CHOOSE THE WEIGHTS FOR THE PAIRING # weights = ones(N,1) # metis weights weights = degree # graclus weights # weights = supernode_size # other possibility weights = np.array(weights).squeeze() # PAIR THE VERTICES AND CONSTRUCT THE ROOT VECTOR idx_row, idx_col, val = scipy.sparse.find(W) perm = np.argsort(idx_row) rr = idx_row[perm] cc = idx_col[perm] vv = val[perm] cluster_id = metis_one_level(rr,cc,vv,rid,weights) # rr is ordered parents.append(cluster_id) # TO DO # COMPUTE THE SIZE OF THE SUPERNODES AND THEIR DEGREE #supernode_size = full( sparse(cluster_id, ones(N,1) , # supernode_size ) ) #print(cluster_id) #print(supernode_size) #nd_sz{count+1}=supernode_size; # COMPUTE THE EDGES WEIGHTS FOR THE NEW GRAPH nrr = cluster_id[rr] ncc = cluster_id[cc] nvv = vv Nnew = cluster_id.max() + 1 # CSR is more appropriate: row,val pairs appear multiple times W = scipy.sparse.csr_matrix((nvv,(nrr,ncc)), shape=(Nnew,Nnew)) W.eliminate_zeros() # Add new graph to the list of all coarsened graphs graphs.append(W) N, N = W.shape # COMPUTE THE DEGREE (OMIT OR NOT SELF LOOPS) degree = W.sum(axis=0) #degree = W.sum(axis=0) - W.diagonal() # CHOOSE THE ORDER IN WHICH VERTICES WILL BE VISTED AT THE NEXT PASS #[~, rid]=sort(ss); # arthur strategy #[~, rid]=sort(supernode_size); # thomas strategy #rid=randperm(N); # metis/graclus strategy ss = np.array(W.sum(axis=0)).squeeze() rid = np.argsort(ss) return graphs, parents # Coarsen a graph given by rr,cc,vv. rr is assumed to be ordered def metis_one_level(rr,cc,vv,rid,weights): # Function written by M. Defferrard, taken verbatim, from # nnz = rr.shape[0] N = rr[nnz-1] + 1 marked = np.zeros(N, np.bool) rowstart = np.zeros(N, np.int32) rowlength = np.zeros(N, np.int32) cluster_id = np.zeros(N, np.int32) oldval = rr[0] count = 0 clustercount = 0 for ii in range(nnz): rowlength[count] = rowlength[count] + 1 if rr[ii] > oldval: oldval = rr[ii] rowstart[count+1] = ii count = count + 1 for ii in range(N): tid = rid[ii] if not marked[tid]: wmax = 0.0 rs = rowstart[tid] marked[tid] = True bestneighbor = -1 for jj in range(rowlength[tid]): nid = cc[rs+jj] if marked[nid]: tval = 0.0 else: tval = vv[rs+jj] * (1.0/weights[tid] + 1.0/weights[nid]) if tval > wmax: wmax = tval bestneighbor = nid cluster_id[tid] = clustercount if bestneighbor > -1: cluster_id[bestneighbor] = clustercount marked[bestneighbor] = True clustercount += 1 return cluster_id def compute_perm(parents): # Function written by M. Defferrard, taken verbatim, from # """ Return a list of indices to reorder the adjacency and data matrices so that the union of two neighbors from layer to layer forms a binary tree. """ # Order of last layer is random (chosen by the clustering algorithm). indices = [] if len(parents) > 0: M_last = max(parents[-1]) + 1 indices.append(list(range(M_last))) for parent in parents[::-1]: #print('parent: {}'.format(parent)) # Fake nodes go after real ones. 
pool_singeltons = len(parent) indices_layer = [] for i in indices[-1]: indices_node = list(np.where(parent == i)[0]) assert 0 <= len(indices_node) <= 2 #print('indices_node: {}'.format(indices_node)) # Add a node to go with a singelton. if len(indices_node) == 1: indices_node.append(pool_singeltons) pool_singeltons += 1 #print('new singelton: {}'.format(indices_node)) # Add two nodes as children of a singelton in the parent. elif len(indices_node) == 0: indices_node.append(pool_singeltons+0) indices_node.append(pool_singeltons+1) pool_singeltons += 2 #print('singelton childrens: {}'.format(indices_node)) indices_layer.extend(indices_node) indices.append(indices_layer) # Sanity checks. for i,indices_layer in enumerate(indices): M = M_last*2**i # Reduction by 2 at each layer (binary tree). assert len(indices[0] == M) # The new ordering does not omit an indice. assert sorted(indices_layer) == list(range(M)) return indices[::-1] def perm_adjacency(A, indices): # Function written by M. Defferrard, taken verbatim, from # """ Permute adjacency matrix, i.e. exchange node ids, so that binary unions form the clustering tree. """ if indices is None: return A M, M = A.shape Mnew = len(indices) assert Mnew >= M A = A.tocoo() # Add Mnew - M isolated vertices. if Mnew > M: rows = scipy.sparse.coo_matrix((Mnew-M, M), dtype=np.float32) cols = scipy.sparse.coo_matrix((Mnew, Mnew-M), dtype=np.float32) A = scipy.sparse.vstack([A, rows]) A = scipy.sparse.hstack([A, cols]) # Permute the rows and the columns. perm = np.argsort(indices) A.row = np.array(perm)[A.row] A.col = np.array(perm)[A.col] # assert np.abs(A - A.T).mean() < 1e-9 assert type(A) is scipy.sparse.coo.coo_matrix return A def permCoarsening(x, indices): # Original function written by M. Defferrard, found in # # Function name has been changed, and it has been further adapted to handle # multiple features as # number_data_points x number_features x number_nodes # instead of the original # number_data_points x number_nodes """ Permute data matrix, i.e. exchange node ids, so that binary unions form the clustering tree. """ if indices is None: return x B, F, N = x.shape Nnew = len(indices) assert Nnew >= N xnew = np.empty((B, F, Nnew)) for i,j in enumerate(indices): # Existing vertex, i.e. real data. if j < N: xnew[:,:,i] = x[:,:,j] # Fake vertex because of singeltons. # They will stay 0 so that max pooling chooses the singelton. # Or -infty ? else: xnew[:,:,i] = np.zeros([B, F]) return xnew | https://www.programcreek.com/python/?code=alelab-upenn%2Fgraph-neural-networks%2Fgraph-neural-networks-master%2FUtils%2FgraphTools.py | CC-MAIN-2021-10 | refinedweb | 7,774 | 57.06 |
EEM v2.3 + TCL
Hi all,
i had to make a capacity planning script for my customer but I really don't find an issue.
My main problem is that i'm using EEM v2.3 and I don't have all features.
So, i want to catch the result of this command : "sh platform hardware capacity fab | i Bus"
And if the current percentage exceed 60% i would like to send a syslog message and a SNMP trap.
alias exec Bus-load tclsh sup-bootdisk:Bus_load.tcl
event manager applet Bus
event none (i will replace it by a cron-entry event)
action 1 cli command "enable"
action 2 cli command "Bus-load" (lauch the tcl script)
action 3 syslog priority critical msg "Result : $_cli_result"
And here is my tcl script :
set commande [exec "sh platform hardware capacity fab | i Bus"]
set result [regexp {.[0-9]%,} $commande Current_pourcent]
set result [regexp {[0-9]+} $Current_pourcent Current]
set Value [expr $Peak]
puts "$Value";
But this always send a syslog (not only when it exceeds 60%).
I can also make the condition in TCL but I can't send syslog in TCL because I can't import namespace.
I had another idea to solve my problem but I don't find anything on this.
I can store this value in an OID object et get it with a LMS but I don't find the right command.
Can someone can help me on this way too?
Thanks for your help, I'm very confused.
You should be using EEM Tcl and not chaining together EEM with tclsh. Try the attached EEM Tcl policy. It's a none policy right now, but you can change it to a cron policy after you run your tests. | https://supportforums.cisco.com/discussion/10994571/eem-v23-tcl | CC-MAIN-2017-09 | refinedweb | 292 | 69.92 |
I have been trying to figure this out for a while now and just dont seem to be able to break through so hopefully someone out there has done this before.
My issue is that I am trying to do a batch update of a google spreadsheet using the gdata python client libraries and authenticating via oauth2. I have found an example of how to do the batch update using the gdata.spreadsheet.service module here:
However that does not seem to work when authenticating via oauth2 and so I am having to use the gdata.spreadsheets.client module instead as discussed in this post:
Using the gdata.spreadsheets.client module works for authentication and for updating the sheet however batch commands does not seem to work. Below is my latest variation of the code which is about the closest I have got. It seems to work but the sheet is not updated and the batch_status returned is: 'Insert not supported on batch.' (Note: I did try modifying the batch_operation and batch_id parameters of the CellEntries in the commented out code but this did not work either.)
Thanks for any help you can provide.
import gdata
import gdata.gauth
import gdata.service
import gdata.spreadsheets
import gdata.spreadsheets.client
import gdata.spreadsheets.data
token = gdata.gauth.OAuth2Token(client_id=Client_id,client_secret=Client_secret,scope=Scope,
access_token=ACCESS_TOKEN, refresh_token=REFRESH_TOKEN,
user_agent=User_agent)
client = gdata.spreadsheets.client.SpreadsheetsClient()
token.authorize(client)
range = "D6:D13"
cellq = gdata.spreadsheets.client.CellQuery(range=range, return_empty='true')
cells = client.GetCells(file_id, 'od6', q=cellq)
objData = gdata.spreadsheets.data
batch = objData.BuildBatchCellsUpdate(file_id, 'od6')
n = 1
for cell in cells.entry:
cell.cell.input_value = str(n)
batch.add_batch_entry(cell, cell.id.text, batch_id_string=cell.title.text, operation_string='update')
n = n + 1
client.batch(batch, force=True) | http://www.dlxedu.com/askdetail/3/008ebf447932b4157b95c8becab87111.html | CC-MAIN-2018-39 | refinedweb | 297 | 52.87 |
Example Data:
0000090: f0ed e0b8 0000 0000 0000 0000 0000 0000 ................ 00000a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00000b0: 576b 6a70 236a 7023 6e62 646a 6023 736f Wkjp#jp#nbdj`#so 00000c0: 6260 6600 0000 0000 0000 0000 0000 0000 b`f............. 00000d0: 635f 5e44 175e 4417 5617 4352 4f43 1b17 c_^D.^D.V.CROC.. 00000e0: 5859 5b4e 1756 1743 5244 4319 1906 0504 XY[N.V.CRDC..... 00000f0: 0300 0000 0000 0000 0000 0000 0000 0000 ................
As most people know XOR is substitution cypher. A major flaw of substitution cyphers is they can be broken either by frequency analysis or bruteforcing. I have my doubts that frequency analysis would be a good way to tackle this problem. Frequency analysis is useful for dictionary text but what about a URL such as? It would be an interesting task to do frequency analysis of ascii strings extracted from compiled exeuctables files. If we were to dump all the ASCII strings in *.dlls and *.exe files in system32 we would get about 60 mb of strings. Maybe another post. This leaves us with the option of bruteforcing. The main problem with bruteforcing data for valid strings is the how do we know the range of the encoded data and if it's a valid string and not a false positives. With our data example above we know that the encoded data range starts and ends with '\x00'. We could create a regular expression to find all data that falls between those characters. Our regular expression pattern would be '\x00(?!\x00).+?\x00'. If the ranges were different we would need another regular expression pattern. First problem solved. Now lets assume that we have XORed each byte in the block of data with a static key between 0x1 and 0xff. Now we are at the second problem. How do we know if the XORed data is valid text? We know that a valid string must be printable ascii. If we XOR a block of data and the data contains a non-printable ASCII characters we can ignore that set. That will eliminate about 0x9b or 155 values between 0x0 - 0xff. This will make our set of false positives a little bit smaller.
Function for testing if valid Ascii:
def valid_ascii(char): if char in string.printable[:-3]: return True else: return None Output of string.printable >>> string.printable '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c'
Let's see what the output would be from bruteforcing the example above would look like.
0xaf key 0x1 Vjkq"kq"oceka"rncag 0xaf key 0x2 Uihr!hr!l`fhb!qm`bd 0xaf key 0x3 This is magic place !!! Valid String 0xaf key 0x4 Sont'nt'jf`nd'wkfdb 0xaf key 0x5 Rnou&ou&kgaoe&vjgec 0xaf key 0x6 Qmlv%lv%hdblf%uidf` 0xaf key 0x7 Plmw$mw$iecmg$thega 0xaf key 0x8 _cbx+bx+fjlbh+{gjhn 0xaf key 0x9 ^bcy*cy*gkmci*zfkio 0xaf key 0xa ]a`z)`z)dhn`j)yehjl 0xaf key 0xb \`a{(a{(eioak(xdikm 0xaf key 0xd Zfg}.g}.coigm.~bomk 0xaf key 0xe Yed~-d~-`ljdn-}alnh 0xaf key 0x12 Eyxb1xb1|pvxr1a}prt 0xaf key 0x13 Dxyc0yc0}qwys0`|qsu 0xaf key 0x16 A}|f5|f5xtr|v5eytvp 0xaf key 0x17 @|}g4}g4yus}w4dxuwq 0xaf key 0x18 Osrh;rh;vz|rx;kwzx~ 0xaf key 0x1a Mqpj9pj9tx~pz9iuxz| 0xaf key 0x1c Kwvl?vl?r~xv|?os~|z 0xaf key 0x1e Iutn=tn=p|zt~=mq|~x 0xaf key 0x29 ~BCY CY GKMCI ZFKIO 0xaf key 0x2a }A@Z @Z DHN@J YEHJL 0xaf key 0x5d 67-~7-~3?97=~.2?=; 0xaf key 0x5e 54.}4.}0<:4>}-1<>8 0xcf key 0x22 A}|f5|f5t5apma95z{yl5t5apfa;;$'&! 0xcf key 0x23 @|}g4}g4u4`ql`84{zxm4u4`qg`::%&' 0xcf key 0x25 Fz{a2{a2s2fwjf>2}|~k2s2fwaf<<# !& 0xcf key 0x28 Kwvl?vl?~?kzgk3?pqsf?~?kzlk11.-,+ 0xcf key 0x2a Iutn=tn=|=ixei1=rsqd=|=ixni33,/.) 0xcf key 0x2b Htuo<uo<}<hydh0<srpe<}<hyoh22-./( 0xcf key 0x2c Osrh;rh;z;o~co7;tuwb;z;o~ho55*)(/ 0xcf key 0x2e Mqpj9pj9x9m|am59vwu`9x9m|jm77(+*- 0xcf key 0x2f Lpqk8qk8y8l}`l48wvta8y8l}kl66)*+, 0xcf key 0x32 Qmlv%lv%d%q`}q)%jki|%d%q`vq++4761 0xcf key 0x33 Plmw$mw$e$pa|p($kjh}$e$pawp**5670 0xcf key 0x34 Wkjp#jp#b#wf{w/#lmoz#b#wfpw--2107 0xcf key 0x35 Vjkq"kq"c"vgzv."mln{"c"vgqv,,3016 0xcf key 0x36 Uihr!hr!`!udyu-!nomx!`!udru//0325 0xcf key 0x37 This is a text, only a test..1234 !!! Valid String 0xcf key 0x38 [gf|/f|/n/{jw{#/`acv/n/{j|{!!>=<; 0xcf key 0x39 Zfg}.g}.o.zkvz".a`bw.o.zk}z ?<=: 0xcf key 0x3a Yed~-d~-l-yhuy!-bcat-l-yh~y##<?>9 0xcf key 0x3d ^bcy*cy*k*~or~&*edfs*k*~oy~$$;89> 0xcf key 0x3e ]a`z)`z)h)}lq}%)fgep)h)}lz}''8;:= 0xcf key 0x3f \`a{(a{(i(|mp|$(gfdq(i(|m{|&&9:;< 0xcf key 0x69 67-~7-~?~*;&*r~102'~?~*;-*ppolmj 0xcf key 0x6a 54.}4.}<})8%)q}231$}<})8.)ssloniThe above output was created using iheartxor (see below)
Yeah, that's a lot of extra data. We have a 456 bytes file with an output size of 2,090 bytes. That's a 400% increase in size. How can we slim this down? Let's say we did this on a large file such as Notepad.exe with the same data pasted at the end. The original Notepad.exe has a size of 69,280 bytes and the output is 117,400 bytes. That's a 69% increase. Who want's to look through that much data for a couple of XORed strings? This is exactly why people use string searches. To be continued...I think I have to go ask Reddit.
Introducing iheartxor
iheartxor is a tool that can be used to bruteforce xor encoded strings within a user defined data block via a regular expression pattern (-r). The default search pattern is a regular expression that searches for data between null bytes ('\x00'). The tool can also be used to do a straight xor on a file with -f file.name -k value. The value must between 0x0-0x255. The tool is still in development. Please leave comments or ideas. I'm still trying to figure out the best way to detect valid ASCII text without using a search strings such as xorsearch. | http://hooked-on-mnemonics.blogspot.com/2012/04/adventures-in-bruteforcing.html | CC-MAIN-2014-49 | refinedweb | 1,061 | 76.72 |
fchmod - change mode of a file
#include <sys/stat.h> int fchmod(int fildes, mode_t mode);
The fchmod() function has the same effect as chmod() except that the file whose permissions are to be changed is specified by the file descriptor fildes.
If the Shared Memory Objects option is supported, and fildes references a shared memory object, the fchmod() function need only affect the S_IRUSR, S_IWUSR, S_IRGRP, S_IWGRP, S_IROTH, and S_IWOTH file permission bits.
Upon successful completion, fchmod() returns 0. Otherwise, it returns -1 and sets errno to indicate the error.
The fchmod() function will fail if:
- [EBADF]
- The fildes argument is not an open file descriptor.
- [EPERM]
- The effective user ID does not match the owner of the file and the process does not have appropriate privilege.
- .
None.
None.
None.
chmod(), chown(), creat(), fcntl(), fstatvfs(), mknod(), open(), read(), stat(), write(), <sys/stat.h>. | http://pubs.opengroup.org/onlinepubs/007908799/xsh/fchmod.html | CC-MAIN-2013-20 | refinedweb | 144 | 62.27 |
The dependencies of OSRA give an insight into the term "open source ecosystem". Rather than reinvent the wheel, OSRA makes use of open source libraries for optical character recognition (OCRAD, GOCR), bitmap to vector image conversion (POTRACE), messing about with images (ImageMagick, GREYstoration, ThinImage, CImg) and of course, cheminformatics (it uses either OpenBabel or RDKit). Luckily for Windows users, Igor provides a compiled version.
Back in July, I emailed Igor to request a new feature, the ability to output an SDF file containing the coordinates taken from the image. Three hours later he replied that he'd added it. And now, only four months later, I've gotten around to testing it (insert excuse here)...
Anyway, one nice way of testing conversion code is by roundtripping (or "There And Back Again"). So I took the now legendary depiction faceoff test file (see, for example, here), used the coordinates therein to create a PNG image using OASA (via Pybel or cinfony), ran OSRA on the resulting image to get some coordinates, and then used those coordinates to generate a PNG again. By eyeballing the two PNG images, it's possible to discover errors. So, here are the results of the OChRe.
Notes:
(1) If you notice any trends in the errors, comment below and Igor might fix them.
(2) Where there are missing images after the OChRe, this is where OSRA missed a bond (probably reasonably), generated two molecules, and caused OASA a headache (it only handles single molecules).
Here's the code:
import pybel
import popen2
odir = "images"
for mol in pybel.readfile("sdf", "onecomponent.sdf"):
title = mol.title
print title
mol.draw(usecoords=True, show=False, filename=os.path.join(odir, "%s_oasa.png" % title))
o, i, e = popen2.popen3("../osra-trunk/osra -f sdf %s/%s_oasa.png" % (odir, title))
osrasdf = o.read()
newmol = pybel.readstring("sdf", osrasdf)
try:
newmol.draw(usecoords=True, show=False, filename=os.path.join(odir, "%s_osra.png" % title))
except AssertionError:
print "Unconnected!"
Image credit: Jason.Hudson
17 comments:
Nice analysis! BTW, I think we can all forgive 14384490 to go wrong :)
Maybe you can add some summary on the top of the page to list success/fail rates. And maybe categorize them accordingly too?
If there's any sort of "trend," it's that the right column often has wedge bonds to non-chiral atoms. So I'd suggest adding a pass after recognition: if two bonded atoms are both non-chiral, then the bond should not have wedge/hash notation.
That would solve a fair number of minor bugs. It's harder to classify the major issues.
But I can attest from Open Babel that these kinds of round-trip tests are hugely useful.
Love the roundtripping idea, very nice work !
Do you think it would be possible to use the very same framework with an additional image->PDF->image conversion and different resolutions? This would be a closer reality check.
Thanks, again ... looking forward for the next study ;-)
Noel, many thanks for this analysis and the kind words!
To Geoff's comment - the wedge bonds are detected based on the line thickness,
in some cases it doesn't work too well - in most cases I've seen before it was the other way around though - a wedge bond mis-recognized as regular single bond. I'm not sure however that a post-image processing check for chiral atoms is simple, there are a lot of weird stereochemistry cases out there...
To Joerg - unfortunately PDF is only getting processed at 150dpi right now. There are two reasons for that - 1) Speed. A multi-page document can take quite a while even at a single smaller resolution, 2) There are some strange problems with Ghostscript that I've seen when attempting rendering a PDF at a higher resolutions - memory usage going through the roof, program crashes etc.
Thank you for the comments,
I really appreciate the input!
Igor, if multipage PDFs are a problem, why not split them up into single page PDFs first? Would also allow easy parallelization of PDF processing...
available from many GNU/Linux distributions, like Ubuntu:
Egon,
ImageMagick library which I use already supports reading PDFs one page at a time (using Ghostscript), the problem is not that there are many pages in a document but to keep processing reasonably fast so that the user doesn't get bored and walks away :) Also there seems to be
some strange issues with Ghostscript at high resolutions. I would consider a replacement for Ghostscript but it does makes things more complicated if I have to use an image processing framework outside of ImageMagick.
@egon, I'll add the summary if you give me the figures. I'm trying to build a bazaar here, not a cathedral. :-)
@igor: I've never really understand the DPI issue and OSRA. What DPI should I use to analyse the images here (how do I find out)?
Noel, you created the images from a CT, right? And that's what OSRA outputs... I thought you could easily compare those...
What numbers did you have in mind instead?
@egon: Checking identity of connection tables would be pretty fast. But categorising the errors...it would be faster just to eyeball the images and make notes which any reader of the blog can do. And the exact figure doesn't seem very interesting to me; what can it be compared to?
46 out of 90 are converted without error. I wonder is the color causing problems regarding wedge detection - I should be able to check this...
Noel, about the DPI - if you leave it out (or set to 0)
OSRA will try 72,150, and 300 dpi and pick the best one automatically. It seems to work quite well, if I say so myself. Usually screen captures and computer-generated images designed to be viewed on the screen are 72 dpi, scanned documents are 300
(it's rather a convention in OCR that 300 dpi is what
one should focus on), and 150 is just in-between :)
For PDF/PS the document itself is actually already in vector representation, ImageMagick renders it to raster format then I process it as any other image. If you know that you've scanned your image at a different resolution or
if you're just looking for faster processing time then
use -r option to specify the
resolution you'd like to use.
Firstly, black+white gave the same results. Secondly, it seems that Beda Kosata and Daniel Svozil are already engaged in a thorough investigation of OChRe statistics for different programs. I look forward to seeing the results.
Beda Kosata and Daniel Svozil - sounds interesting, do you have a link?
"personal communication". I suggest you get in touch with Beda if interested.
I believe mis-recognized wedge bonds have simple explanation - whenever a single bond ends at an intersection that could be mistaken for a bond end looking thicker there is a chance for miscategorization.
This chance is greater for low-res images. Even though the bond thickness is sampled at three different points, all away from the very ends, it could potentially be that the measurements are 1,2,3 pixels thickness for beginning, middle and close to the end of the bond for a regular single bond.
version 1.1.0 is now released, it has better wedge bond detection algorithm
which would hopefully resolve the unfortunate "trend" with mis-categorized stereochemistry.
I will repeat the analysis (in a new blog post). I've also figured out how to handle the cases where OSRA generates two disconnected molecules. | http://baoilleach.blogspot.com/2008/11/of-ochre-osra-and-oasa-but-not-oscar.html | CC-MAIN-2014-42 | refinedweb | 1,266 | 63.9 |
Can There Be a Non-US Internet? 406
Daniel_Stuckey writes "After discovering that the US government has been invading the privacy of not just Americans, but also Brazilians, Brazil is showing its teeth. The country responded to the spying revelations by declaring it'll just have to create its own internet. In reality, although Brazil President Dilma Rousseff is none too happy with the NSA's sketchy surveillance practices, Brazil and other up-and-coming economies have been pushing to shift the power dynamics of the World Wide Web away from a US-centric model for years."
Oblig. (Score:5, Funny)
Re:Oblig. (Score:5, Insightful)
Non-US Internet (Score.
Re:Non-US Internet (Score.
....
6. Control the ideas/speech of all websites within Iran.
Technically yes; practically unlikely (Score:5, Insightful)
Re:Technically yes; practically unlikely (Score:5, Interesting)
Initially, yes.
But after a couple of years I don't think there would be that much of a difference.
As long as all the on-line commercial entities in that country were okay with never having any US business. Otherwise the NSA (and others) can demand access to their data in exchange for access to our markets.
And that isn't even considering the old spy standby of either getting one of your spies hired by them or offering one of their employees money to get you access.
The problems are not technological. They are human nature.
Re:Technically yes; practically unlikely )
notably for offering an easier way to avoid 'namespace pollution' by seperating the networks into regions based off a numerical 'country id'
We already have this. That's why we have amazon.com and amazon.co.uk and amazon.de and amazon.fr etc. This has nothing to do with IPv4 vs IPv6, especially since the latter has more than enough addresses to last until we are off this planet (which will never happen).
Amazon.*** namespaces (Score:2)
Amazon's actually using the namespace partly because the publishing world has lots of weird national boundaries - a given book might be published in the US but not yet available in the UK because UK publishing rights haven't been sold to a UK publisher yet, or the UK edition may have different text, title, or cover - and they use the namespace to help keep that isolated.)
So you don't believe nuclear mines would be an adequate deterrance?
How about large conventional ones?
of course you could read between the lines and guess that I mean there is a way given sufficient determination. Of course, I expect a certain percentage of ACs to be raised by wolves and barely literate.
Re: (Score:2)
I can't help but find this [palegray.net] a little ironic given the context of this story.
56 Marietta is a nice facility, though.
To get firmly back on topic, what you're suggesting is unworkable for many reasons. I've seen a few of those reasons firsthand.
Re: (Score:2)
Why ironic?
Re: (Score:2, Interesting)
There are two methods: satellite based infrastructure, and fortifying undersea cables against submarines and anchors.
The problem with both of them is that they are both economically prohibitive. The NSA essentially found a Sorority that had an unlocked front door and got caught engaged in the most epic panty-raid in the history of unencrypted communications. End to end encryption is going to become the new norm, and stronger defenses against MITM & encryption back doors are going to become a requirement
Re: (Score:2)
Re: (Score:3)
In cypherspace noone can read your stream.
Seriously though, satellites have too much latency, real ships have anchors too big to armor against. Especially considering they can use a supertanker or container ship if they have to and the sub can scope out a likely vulnerable spot to put the anchor. Quantum crypto: once the cable is cut you then need a subsea quantum repeater. Even if the tech were available it would be electronic and therefore subject to traditional signals intercept. Traditional crypto
WTF is the point? (Score:5, Insightful).
Re: )
Fundamentally the reason that the internet is US centric is partially the fact that ICANN is located in the US, but mostly because the most used services are based in the US. To create a truly non US-centric model you would have to relocate ICANN and come up with significant competitors to people like Google etc who have no US presence(once they have a US presence they're subject to all the same laws that allow the NSA to spy on you in the first place).
You could technically achieve this, but the countries which could be candidates for replacing the US in this position are not Brazil and would also spy on traffic. So unless this is yet another pissing match where idiots go in with the slogan "Anyone but the US", making the internet non US centric is a gigantic waste of everyone's time and money. I mean does anyone seriously believe that if Chinese companies displaced the US ones that China wouldn't spy on everyone, or that the Europeans wouldn't either also spy or allow the NSA to spy?
Re: (Score:2)
Also, and not to sound like an apologist, pretty much every other country has just as crappy government reputations for things like privacy.
Re: (Score:2)
No.
Re:Yes, but it won't make any difference. (Score:4, Insightful)
This seems to be a common strawman argument used when discussing the NSA and spying. No one has suggested that the only government spying in the world is the US. However, the US seems to be granted special privilages by the most of the world in that it is the only nation:
1. That does extraordinary rendition without having to be held accountable by any international body
2. Attacks and kills people in other countries via drones that they are not at ear with
3. Mandates cyberwarfare against not just "intelligence" targets
4. Operates prisons that were specifically created to circumvent human rights treaties and allow torture
Other countries may do some or all of these things but they are belittled, sanctioned, or bombed (usually in that order). The US does this "to protect its interests" and the rest of the Western world says "ok".
All of the items mentioned above happened after someone received "intelligence" and then acted on it. The US is not infalliable and they have made many mistakes that have resulted in innocents getting killed or imprisioned for years. If any other country did this (China, Iran, Iraq,etc)
....well the US and allies would have bombed them by now for being a threat to the rest of the world.
Re: )
Other countries have done lots of these things in the past themselves. They stopped doing it because they couldn't afford it anymore; some time in the 1950's and 1960's, countries like France and Britain increasingly just picked up the phone and asked the US to clean up their messes; it was cheaper, simpler, and less risky. And why did the US do it? Because it was pretty easy for it to do so, and because it gives it great power. So, the rest of the Western world doesn't just say "OK", it says "yes, please".
Re: )
ICANN? Give me a break, that's nothing. Do you even know what ICANN does? Not route traffic, of course.
Fundamentally the reason that the Internet is US-centric is that the US has paid for much of the infrastructure. It's not necessarily about the services either, it's about the routing. If Latin/South America wants to avoid traversing US infrastructure to route their packets to the rest of the world, they will have to build their own backbones and lay their own transoceanic cable. Until they do that it's pretty obvious their data is going to be inspected...
Re:Yes, but it won't make any difference. (Score:5, Informative)
Re: (Score:2)
Re: (Score:3)
China's ambitions of world domination are quite open. They just follow the old doctrine of communism: There's no need to conquer by force. Communism is the natural end state, all they need do is wait and victory will come peacefully.
They are officially communist still, even though their economy has adopted so many elements of the free market system now there isn't lot of actual communism left.
Re: (Score:2)
They just follow the old doctrine of communism: There's no need to conquer by force.
Nothing to do with communism. That's Sun Tzu [wikipedia.org] 25 centuries ago.
Re: (Score:2)
Man, what the hell kind of dream world are you living in. China may not give a crap how many pressure cookers you want to buy, but they sure as fuck care about your political opinions. Especially if you are or ever were Chinese. Ask any Chinese dissident whether they'd prefer the US was spying on them or China, hell ask most US dissidents.
The US spies on you, but for the most part it seems to have done a whole lot of nothing with any of the information that it has gathered, it's also restricted by law)
...ICANN is located in the US, but mostly because the most used services are based in the US...
Even companies that are perceived as American are no longer really so. Yes, Google, Microsoft, Yahoo, IBM, Oracle,
... have head offices in the US, but the have a very real, physical presence in many other countries, including China. So, today "American company" very often means "a company that started in America", that's all. People in Europe, who use Google probably only pass through the US occasionally. The internet is already "non US-centric". The Brazilians, if they put a cable across the Atlantic)
That global US backed standard infrastructure was invested into by many countries on good faith with 'private/public' hard currency loans with real interest rates.
The US and UK baited countries with speed, trade deals, low costs, crime fighting laws to ensure global uptake.
What can be done? Reconfigure all public and private core gov networking? No more wi
Re: (Score:2)
Thats a lot of trade law and basic telco infrastructure to rework for any single nation. As far as Soviet or German occupation experiences - people knew their countries where occupied, standards where set, secret police where well networked. Globally cryptographers, telco experts and their politic
Why do we keep discussing this... (Score:5, Insightful)
...as if the United States was the first, last, and only country to hold a government that spies on its own citizens in some way?
Are we really THAT naive to think that A) the United States invented this concept, and B) no other government thought to do it too?
It's mentalities like this that shock me more than anything Snowden could reveal. I find mass ignorance far more alarming, as it tends to hint as to what governments are yet capable of doing to you. To all of us. While the deaf and blind vote for it.
We were ignorant enough to pay for and allow a program like PRISM to come to fruition. Sitting back assuming that no other country has a similar or same capability is like assuming no one masturbates because people don't talk about it.
Re: (Score:3)
Domestically most nations can do anything they want to their own telco network and any links in/satellites systems above their country.
The rest is embassies, aircraft, spy ships, limited satellites and human spies - easy to track, limited and hard work.
Every other country has to use the US (NSA) telco network at some point if they want to reach out, or make a dea
Re:Why do we keep discussing this... (Score:5, Insightful)
...as if the United States was the first, last, and only country to hold a government that spies on its own citizens in some way?
Nobody thinks that. But the United States was supposed to be different to the hundreds of abusive governments that had preceded it. This does demonstrate that the US is worse than any other government - it shows that it is exactly the same. And that's damning enough.
Re: )
It wasn't all that long ago that most stories about internet freedom covered the abuses of North Korea, China, and the Islamic Republics. Of course there were always a few comments, usually from our brave AC's, who claimed the US did the same but was better at hiding it. Bless all the slashdot anonymous cowards, keep up the good work.
in reality (Score:3)
In reality, although Brazil President Dilma Rousseff is none too happy with the NSA's sketchy surveillance practices
In reality, getting a 'non-USA' internet won't do anything to stop the NSA. What difference does it make who gives out DNS names and IP addresses? (because that's what they mean when they say non-USA internet)..
It's one thing for the NSA to spy on people and gather information illegally. It's another thing entirely to present such information in a US court and use it to shut down a website in anoth.
Moving DNS and IP assignment responsibility outside the US will not do any of that. Sorry, bro.
It Does Not Need To Be Done (Score:2, Insightful)
That is not what they declared (Score:5, Informative)
That is not what they declared, building local cloud, secure email services and infrastructure is different from "creating it's own internet" and I never heard this wording here, only in "international" press. The big difference is that when someone talk like that it gives the idea that it will be separated from the rest of the internet. That is not what the Brazilian government is proposing.
The national constitution (I'm Brazilian) states that the State has to provide the basic rights that are not met otherwise (if you can't buy water the State has to provide it, there is free medical care, the best universities are free, etc). Since private communications are a basic right (our constitutuion and the universal declaration of human rights), they are planning to offer alternatives for people who care.
Honestly, to force local clouds seems like a double win. On one hand you make companies accountable for our citizens rights, on the other hand - the one I think is the main point here - it creates investments, infrastructure, brings technology and high tech jobs. The cables to Europe are a need, our internet sucks. I hope they make some cables to China and Russia too, as online gaming is better over there.
But mainly, there is no censorship here, Brazilians will not be separated from the internet and nobody in the country thinks that even a possibility. Specially since this government is the one that fought against censorship in the past, you know, during the US created military dictatorship from 64 to 86/90.)
The WWW Internet is a global phenomenon now. And the WWW Internet was invented by a physicist who was trying to solve a real physical problem.
He was trying to solve a problem of distributing scientific papers among CERN scientists. He did it during work time paid by CERN and on the CERN's computer.
The main database of the Internet, MySQL, is also an International project.
It is just not true that the Internet is sort of an US present t
Re:WWW (Score:5, Informative)
Ever use email? Dropbox? Online games from your XBox or PC? FTP? VOIP? Bittorrent?
All these and thousands more are internet protocols that don't use WWW.
And, by the way, we do have multiple Internets (with a big I). Read up on Internet 2. And there's lots and lots of internets (with a little i) that you don't know about because they're not connected to the Internet (with a big i)
Re: (Score:2)
By the way, the actual invention was done not by a programmer but by the an engineer who was doing the real work.
Re: (Score:2)
Re: (Score:3, Interesting)
This is a common misconception. The WWW is not merely stuff transmitted over TCP port 80 on the Internet. It's an information space that has the ability to use the Internet as a transport mechanism. It's not a subset of the Internet, it's a higher level abstraction than the Internet.
Anything addressable by URI is a node in the WWW. For instance, POTS telephone numbers are leaf nodes because you can address them with tel:. They are on the WWW but they aren't on the)
The government of Switzerland may disagree that Geneva is an international city. Cosmopolitan might be the world you are looking for.
Also, it wasn't the web that prevented closed-garden internets, but rather universities. Until the mid 1990s, nearly everyone on the Internet (on any protocol) was at a university or research institute (like CERN). The universities weren't trying to make a profit, so they embraced an open architecture. It was US dominated, because then as now, most large research universities
Re: (Score:2)
*word
Re: (Score:2)
It is Geneva International all right. It started with another grand idea. Henry Dunant, the founder of the International Committee of the Red Cross, came out with the idea that the wounded soldier does not belong to any state or government anymore. That she/he belongs to the higher authority.
This idea still keeps changing the wold.)
No one wants to disconnect the North America from the Internet. But its contribution should be positive. Nowadays it leaves an impression of a total eavesdropping of the Internet and it scares people.
The article 12 of the Universal Declaration of Human Rights gives the right of privacy of communication and home to all people on Earth.)
The most common reasons governments want to have non-US "internet governance" these days are that they want to restrict free speech and free reading by their citizens, or restrict some kinds of commerce by their citizens (US restricts gambling, drugs, etc.) There are other issues; most governments used to have telecom monopolies, either state-run or quasi-nationalized, though the 90s liberalized much of that away. Some governments would like more money to stay in their countries, or keep people from buying goods online that are heavily taxed locally.
It really irks me when international groups get together to talk about internet policy, and advertise their shindig as being about "ending the digital divide" or "providing connectivity to Africa" or other noble-sounding goals, but actually devote most of their agenda to governments wanting censorship. These days, of course, the NSA is giving them a good excuse to want internet governance so they can do their own wiretapping in case the NSA isn't sharing.. | https://tech.slashdot.org/story/13/09/25/231220/can-there-be-a-non-us-internet?sdsrc=next | CC-MAIN-2017-13 | refinedweb | 3,176 | 62.07 |
NodeJS or Django for machinelearning and API?
I am developing a project in which FrontEnd is Flutter app and I want to make a backend with machine learning and Mongo DB. I know Django and Nodejs both. I tried Django REST Framework but didnt like it at all as I am used to make APIs in NodeJS as my database is also MongoDB. I consider NodeJS to be good for making REST APIs and Python good for machine learning. Spawning child processes is not a great idea please suggest one language that can I use for my project backend.?
- Why is Django Crispy Forms throwing "module 'django.forms.forms' has no attribute 'BoundField'"
When I use the "|crispy" or "|as_crispy_field" filters on my form/fields, I get an error that the field has no attribute BoundField.
This was working fine, but I updated django/crispy forms, and I'm not sure whether I missed a trick? The form works fine without the filter.
EDIT: I'm using Django 3.1.2 and django-crispy-forms 1.8.1.
forms.py:
from django import forms from django.utils import timezone from bootstrap_modal_forms.forms import BSModalForm from backgammon import models class MatchForm(BSModalForm): date_played = forms.DateField(initial=timezone.now) class Meta: model = models.Match fields = [ 'date_played', 'winner', 'score' ]
views.py
from django.contrib.auth.mixins import PermissionRequiredMixin from bootstrap_modal_forms.generic import BSModalCreateView from .forms import MatchForm class MatchCreate(PermissionRequiredMixin, BSModalCreateView): permission_required = 'backgammon.add_match' template_name = 'backgammon/match_form.html' form_class = MatchForm success_message = 'Match saved.' success_url = reverse_lazy('backgammon-index')
match_form.html
{% load crispy_forms_tags %} <div class="container bg-light"> <form method="post"> {% csrf_token %} <div class="modal-body"> {% for field in form %} <div class="form-group{% if field.errors %} invalid{% endif %}"> {{ field|as_crispy_field }} </div> {% endfor %} </div> <div class="modal-footer"> {% if object %}<a class="btn btn-danger mr-auto" href="{% url 'match-delete' pk=object.pk %}">Delete</a>{% endif %} <button type="button" class="btn btn-default" data-Close</button> <button type="button" class="submit-btn btn btn-primary">Save</button> </div> </form> </div>
Traceback:
Traceback (most recent call last): File "C:\Users\harla\Anaconda3\envs\djangoenv\lib\site-packages\django\core\handlers\exception.py", line 47, in inner response = get_response(request) File "C:\Users\harla\Anaconda3\envs\djangoenv\lib\site-packages\django\core\handlers\base.py", line 202, in _get_response response = response.render() File "C:\Users\harla\Anaconda3\envs\djangoenv\lib\site-packages\django\template\response.py", line 105, in render self.content = self.rendered_content File "C:\Users\harla\Anaconda3\envs\djangoenv\lib\site-packages\django\template\response.py", line 83, in rendered_content return template.render(context, self._request) File "C:\Users\harla\Anaconda3\envs\djangoenv\lib\site-packages\django\template\backends\django.py", line 61, in render return self.template.render(context) File "C:\Users\harla\Anaconda3\envs\djangoenv\lib\site-packages\django\template\base.py", line 170, in render return self._render(context) File "C:\Users\harla\Anaconda3\envs\djangoenv\lib\site-packages\django\template\base.py", line 162, in _render return self.nodelist.render(context) File "C:\Users\harla\Anaconda3\envs\djangoenv\lib\site-packages\django\template\base.py", line 938, in render bit =\defaulttags.py", line 211, in render nodelist.append\base.py", line 988, in render output = self.filter_expression.resolve(context) File "C:\Users\harla\Anaconda3\envs\djangoenv\lib\site-packages\django\template\base.py", line 698, in resolve new_obj = func(obj, *arg_vals) File "C:\Users\harla\Anaconda3\envs\djangoenv\lib\site-packages\crispy_forms\templatetags\crispy_forms_filters.py", line 98, in as_crispy_field if not isinstance(field, forms.BoundField) and settings.DEBUG: Exception Type: AttributeError at /backgammon/match-create/ Exception Value: module 'django.forms.forms' has no attribute 'BoundField'
Local variables describe the "field" variable as <django.forms.boundfield.BoundField object at 0x0000021087C7E808>
Without the crispy filter, the page loads correctly:
-
- Custom User with "USERNAME_FIELD" non-unique in Django
Situation:
We have an existing system where current employee logins to the system using their mobile number.
But as same mobile number can be used by multiple users, so there is constraint in existing db that, at any moment there can be only one user with given "Mobile Number" and "is_active: True" (i.e their may be other employees registered with same number earlier, but their status of "is_active" would have changed to "false", when they left).
Also, only admin can add/register new employees/user.
I have created a custom "User" model and "UserManager" as below:
models.py
from django.db import models from django.contrib.auth.models import AbstractBaseUser from django.contrib.auth.models import PermissionsMixin from django.utils.translation import gettext_lazy as _ from .managers import UserManager # Create your models here. class User(AbstractBaseUser, PermissionsMixin): """ A class implementing a fully featured User model. Phone number is required. Other fields are optional. """ first_name = models.CharField(_('first name'), max_length=50, null=True, blank=True) last_name = models.CharField(_('last name'), max_length=50, null=True, blank=True) phone = models.CharField( _('phone number'), max_length=10, null=False, blank=False, help_text=_('Must be of 10 digits only') ) email = models.EmailField(_('email address'), null=True, blank=True) is_staff = models.BooleanField( _('staff status'), default=False, help_text=_('Designates whether the user is a staff member.'), ) is_active = models.BooleanField( _('active'), default=True, help_text=_( 'Designates whether this user should be treated as active.' 'Unselect this instead of deleting accounts.' ), ) date_joined = models.DateTimeField(_('date joined'), auto_now_add=True) last_updated = models.DateTimeField(_('last updated'), auto_now=True) objects = UserManager() USERNAME_FIELD = 'phone' REQUIRED_FIELDS = [] def get_full_name(self): """ Return the first_name plus the last_name, with a space in between. """ return self.first_name + ' ' + self.last_name def get_short_name(self): """Return the short name for the user.""" return self.first_name # def email_user(self, subject, message, from_email=None, **kwargs): # """Send an email to this user.""" # send_mail(subject, message, from_email, [self.email], **kwargs) # def sms_user(self, subject, message, from=None, **kwargs): # """Send a sms to this user.""" # send_sms(subject, message, from, [self.phone], **kwargs) def __str__(self): return self.first_name + ' ' + self.last_name
managers.py
from django.contrib.auth.models import BaseUserManager # Create your models here. class UserManager(BaseUserManager): """Define a model manager for User model""" use_in_migrations = True def normalize_phone(self, phone): """ Applies NFKC Unicode normalization to usernames so that visually identical characters with different Unicode code points are considered identical """ phone = self.model.normalize_username(phone) phone = phone.strip() if not phone.isdigit() or len(phone) != 10: raise ValueError('Phone number must of 10 digits only') return phone def make_default_password(self, phone): """ Generates a default password by concatenating Digi + Phone number """ return 'Pass' + phone def _create_user(self, phone, password=None, **extra_fields): """Create and save a User in DB with the given phone""" if not phone: raise ValueError('Phone number must be provided') phone = self.normalize_phone(phone) password = self.make_default_password(phone) user = self.model(phone=phone, **extra_fields) user.set_password(password) user.save(using=self._db) return user def create_user(self, phone, password=None, **extra_fields): """Create and save a regular User with the given phone and password""" extra_fields.setdefault('is_staff', False) extra_fields.setdefault('is_superuser', False) return self._create_user(phone, password, **extra_fields) def create_staff(self, phone, password=None, **extra_fields): """Create and save a regular User with the given phone and password""" extra_fields.setdefault('is_staff', True) extra_fields.setdefault('is_superuser', False) return self._create_user(phone, password, **extra_fields) def create_superuser(self, phone, password=None, **extra_fields): """Create and save a Superuser with the given phone and password"""(phone, password, **extra_fields)
Now when I'm running
python manage.py makemigrations, its giving following error:
users.User: (auth.E003) 'User.phone' must be unique because it is named as the 'USERNAME_FIELD'
I'm stuck, what to do next.
Any help related to above query or suggestion for how to proceed in the above situation will be highly appreciated.
I am novice to Django and trying to rewrite the rest based backend in Django (using DjangoRestFramework).
- Looking for best suited managed NoSQL for IOT Sensors Data (alternatives to overcome limitations of DynamoDB and AWS Keysapce)
We use POSTgreSQL as our DB for storing data received from Sensors installed in solar plants. We store it in following format.
plant_id timestamp sensor_data 10 2020-08-12 10:00:00 [{'sensor_id': 12, 'data': {'current': 1.32, 'volt': 242.06}, {'sensor_id': 13, 'data': {'freq': 50.01, 'power': 6.57, 'irr':241.0}] 10 2020-08-12 10:05:00 [{'sensor_id': 12, 'data': {'current': 2.1, 'volt': 245.7}, {'sensor_id': 13, 'data': {'freq': 56.1, 'power': 8.57, 'irr':241.0}] .... 11 2020-08-12 10:00:00 [{'sensor_id': 22, 'data': {'current': 1.32, 'volt': 242.06}, {'sensor_id': 23, 'data': {'freq': 50.01, 'power': 6.57, 'irr':241.0}] 11 2020-08-12 10:05:00 [{'sensor_id': 22, 'data': {'current': 2.1, 'volt': 245.7}, {'sensor_id': 23, 'data': {'freq': 56.1, 'power': 8.57, 'irr':241.0}]
Usually, we query for the readings of a sensor (or multiple sensors) for a particular parameter (e.g. 'freq', 'power' .. etc) for a given time range. The drawback of our current design is - even for fetching data for one sensor and one parameter for a given time range, the entire plant's (all sensors and and all parameters) data has to be fetched. Also we use a single server POSTgreSQL, which is not suitable for our rapidly increasing data size (around 180-200 GB now).
Solutions we have tried
DynamoDB : We did a POC project using DynamoDB, storing data in the following format.
partion_key : sensor_id, sort_key : timestamp, GSI : plant_id, # and parameter and values as keys-values
The major setback with DynamoDB was we couldn't get data for multiple sensors at once using DynamoDB Query (sometimes we require data for 50-60 sensors, so different query for each sensor is not viable). Also, we cannot use DynamoDB Scan as it doesn't allow range query on sort key(timestamp).
AWS Keyspace (Cassandra) : Since we need a managed service, we tried AWS Keyspace (which is a managed DB service for Apache Cassandra), and stored data as
plant_id | sensor_id | parameter | time_stamp | value ----------+-----------+-------------+---------------------+----------- 10004 | 1016 | irradiation | 2020-03-02 08:20:00 | 785.25665 10004 | 1016 | irradiation | 2020-03-02 08:30:00 | 747.36255 10004 | 1016 | irradiation | 2020-03-02 08:35:00 | 730.76013
Here too, the drawback is we cannot query for multiple sensors (AWS keyspace does not support IN queries).
MongoDB : We tried storing in the following manner
{ "plant_id": 12, "sensor_id": 44, "timestamp" : 2020-08-12 10:00:00 "data":[ { "parameter": "x", "value": 1 }, { "parameter": "y", "value": 2 }, { "parameter": "z", "value": 1 } ] }
But here, while retrieving data for single parameter, entire array of dicts is fetched (if I'm not mistaken), which quite similar to POSTgreSQL.
TLDR; Kindly recommend a managed NoSQL DB service which has enough querying flexibility(over DynamoDB and AWS Keyspace) OR suggest changes in Data model in the datastores we are using to fit our need.
- compact mongodb collection in background
I always run the compact command on mongodb and it disturb the performance of the DB , do you know if adding Background : true will work ? I'm asking before testing it .
db.runCommand({compact:'OPC_PLANT_DATA'}, {'background' : true})
- Where to start for workout project
I have learned the basics of react and redux and now I want to make a workout planner with a range of exercise lists.
However, I didn't find a good workout API that has a lot of exercises. So I was wondering how I could make my own API which would consists of the picture of the exercise,which muscle it works and a short description on how to execute it.
Is a MERN stack suitable? Where can I store the pictures and information?
I have nearly no experience on Full Stack development, only a few tutorials so I'm sorry if this is a stupid question
- Send CSV file via Rest API
I'm not a developer, but recently have been doing lots of automations at work using VBA, Python and AWS. Recently I started learning and working with API's in AWS.
I've been trying to upload a CSV file through a REST API, but it's just not working as expected.
Basically I have used API Gateway to set up my API that triggers a Lambda Function to upload the file to a S3 bucket.
When I tested my API in Postman, it worked fine, I just had to set Headers as Key: "Content-Type" and Value: "application/csv", then in the Body part I selected "Binary" and browsed my csv file that I want to upload.
But the problem is when I tried to call my api from my application (or from Postman using the raw text instead of binary), I am sending the following Body (Also tried without double quotes and didn't work either): --data-binary "\file-location\file-name.csv"
The api returns a success code, but when I open the file in S3, then content is "--data-binary "\file-location\file-name.csv"" instead of the actual file's content that I selected.
This is the code I'm using in my VBa to call the API:
Dim ws As Worksheet Dim URL, env, msg, result As String Dim objHTTP As Object Set ws = ThisWorkbook.Worksheets("Sheet1") URL = "" msg = "--data-binary ""\\file_location\file_name.csv""" Set objHTTP = CreateObject("MSXML2.ServerXMLHTTP") objHTTP.Open "POST", URL, False objHTTP.setRequestHeader "Content-Type", "application/csv" objHTTP.send (msg) result = objHTTP.responseText 'getResponse ws.Range("s3_api_resp").Value = result Set objHTTP = Nothing
- Drill-down automatically with unique keys in Power Query / Power BI
I have the following problem :
I have a list of orders in API REST with the following properties for each :
{ "order_id": 1505270, "products": { "812147904": { "item_id": "812147904", "product_id":20, "price": 10000.0, } ... } ... }
The item "812147904" is different each time depending the orders so I cannot expand rows / drill down easily to create a table.
My goal is to have a table of Orders (any) in Power Query / Power BI where I have order_id, item_id, product_id and price.
Can you help me ?
Thank you.
- How can I get local and session storage values using http request in C#?
I want to get local and session storage values from http response in C#. Currently I am using this code:
var httpWebRequest = (HttpWebRequest)WebRequest.Create(" 1.amazonaws.com/dev/product"); httpWebRequest.Method = "POST"; httpWebRequest.AllowAutoRedirect = true; httpWebRequest.KeepAlive = true; httpWebRequest.ContentType = "text/plain"; httpWebRequest.Headers.Add("Accept-Encoding", "gzip, deflate, br"); httpWebRequest.UserAgent = "PostmanRuntime/7.26.5"; httpWebRequest.Headers.Add("Pragma", "no-cache"); httpWebRequest.Timeout = 40000; var httpResponse = (HttpWebResponse)httpWebRequest.GetResponse();
- List of xy coordinates to predict a xy target
I have a database containing coordinate points (X, Y) .. A column corresponds to a single coordinate => For n points, I therefore have 2n columns following the model: X1, Y1, X2, Y2, ..., Xn, Yn
Each line corresponds to a polygon, described by a sequence of coordinates (X, Y) (These are my features) For each line, I have an output Target which is a point of XY coordinates, therefore composed of 2 outputs (TX & TY)
My output corresponds to the location of a target to predict in (X, Y) according to the points
I have already done some work on the database by transforming all these coordinates of Points into coordinates of Vectors (to give binder) This allowed me to build linear regression models especially with Ridge, But the predictions made do not satisfy me, I would like to be more precise ..
Have you ever worked on a similar problem? If so, I am looking for some avenues to explore, otherwise your ideas will be welcome
The aim of my subject is to predict the location of electrical outlets in a given room. For that I translated my room into a polygon, and the electric outlet into a Target T.
- Clustering group with few samples
I would like a feedback from a person with more experience.
I have a dataframe that is in the image format that I sent, with about 1 million samples and 50 features.
What I'm looking for are customers similar to customers who own 'Product A'. So I thought about use dummies on categorical variables and then doing a clustering. Problem: The number of customers who own 'Product A' represents about 1% of all customers, so I am not sure if a cluster will be able to separate the group I am looking for. Is clustering appropriate in this case? If so, do you know the most efficient algorithm in this case? I worked only with Kmeans and I don't know if it would be ideal for having to inform the number of clusters I want to form.
- What should I do if there are too many zero values in the outlier handling part?
I am working on a data science project which is about Churn analysis(whether costomer is leaving or not). I am trying to do outlier handling part but i have a question about how i need to think when my data has many zero values. I know it may contain a meaning but please see the results below. Results ,Value Counts, z score-hard edges and outliers
I would like to ask what should i need to do for better results and should i keep all the zero values? Any suggestion? What should I do if there are too many zero values in the outlier handling part? | https://quabr.com/64149952/nodejs-or-django-for-machinelearning-and-api | CC-MAIN-2020-45 | refinedweb | 2,873 | 50.02 |
Turbogears 0.9-prerelease mod_python integration
The SVN version of TG uses cherrypy-2.2.0-beta which, AFAIK, is not compatible with the mpcp script described at the other mod_python integration page.
I've managed to get it running using Robert Brewer's modpython_gateway.py. It has been tested on a Debian Sarge box, Apache 2.0.54 (worker MPM) and a backported libapache2-mod-python2.4 package from Ubuntu (the package didn't install cleanly and some black drudgery had to be done in order to get it working by hand so I will not post it here yet, if there's some demand I might try to build a better backport).
However, provided you can run your TG app with python2.3, you should be fine with Sarge's stock modpython2.3. (I'm a big fan of 2.4's syntax sugar so I couldn't resist :)
Quick recipe
Install wsigiref, sudo easy_install wsgiref should get you going.Since changeset (39) on Feb 9, 2006, modpython_gateway.py no longer depends upon wsgiref.
- Download Robert's script and place it in site-packages, or somewhere else on your Python path.
- You'll need to write a script to start your TG app as start-yourapp.py will not work. It should be something like:
import pkg_resources pkg_resources.require("TurboGears") import cherrypy import turbogears turbogears.update_config(modulename="yourapp.config") turbogears.update_config(configfile="/home/alberto/yourapp/prodcfg.py") from yourapp.controllers import Root cherrypy.root = Root() cherrypy.server.start(initOnly=True, serverClass=None)
Make sure you don't make any reference to sys.argv (as modpython doesn't have a command line) and you use full paths (as Apache's working directory is /, like any well behaved unix daemon...). Don't forget to fix your paths and application name!.
Place this script in your application's package (where controllers.py lives), let's say you call it myapp_modpython.py.
- Now you should sudo python setup.py install your app, you can get away by sudo python setup.py develop it. But then you should make your application's directory writable by your Apache user (normally www-data) which you should by no means do on a production box (if you can't guess why, then you should not be running a production box ;)
- Now you should configure your Apache virtual host, something like:
NameVirtualHost 10.0.0.100:80 <VirtualHost 10.0.0.100:80> ServerName ServerAdmin [email protected] ServerSignature Off AddDefaultCharset utf-8 <Location /> SetHandler python-program PythonHandler modpython_gateway::handler PythonOption wsgi.application cherrypy._cpwsgi::wsgiApp # PythonOption import myapp_modpython # does not work PythonFixupHandler myapp.myapp_modpython # works! tested with TG0.9 alpha # Switch it off when everything is working fine. PythonDebug on # This section can be skipped if you have no mod_deflate or don't want compression # Recipe stolen somewhere around the httpd.apache.org realms. |ico)$ no-gzip dont-vary # Make sure proxies don't deliver the wrong content <IfModule mod_headers.c> Header append Vary User-Agent env=!dont-vary </IfModule> </IfModule> </Location> # For a little speed boost, you can let Apache serve (some of) your static files directly: Alias /static /home/alberto/yourapp/static Alias /favicon.ico /home/alberto/yourapp/static/images/favicon.ico <Location /static> SetHandler None </Location> <Directory /home/alberto/yourapp/static> AllowOverride None # FollowSymLinks is set for max. performance. For max security turn them off. # However, if someone can make symlink in your server, this is your least motive for concern :) Options -ExecCGI -Indexes -Multiviews +FollowSymLinks Order allow,deny allow from all </Directory> </VirtualHost>
Should get you going. Most of the configuration options don't add much to the understanding of this topic, The important stuff are the Python* directives.
Important things to consider
- CherryPy?'s server logfile should be writable by www-data (if someone has a better solution please edit this page!, maybe dumping to stderr and let PythonDebug catch it?)
- Use absolute paths or compute them at runtime, Apache lives at /
- Your app will be running under the Apache user (www-data on Debian), take this into account when setting permissions and ownership...
- You should install all you're eggs unzipped, that is, with easy_install -Z <package_name>. If you don't, Apache will try to create a .python-eggs directory in it's home dir to store an unzipped, compiled version to use. That really sucks as you'll need to give Apache permission on this directory and the idea is that the Apache user is not able to write into any executable file.
Please, if you have any suggestions on how to improve this set-up you should edit this page. If you have any suggestions on how to improve my english, do so as well :) | http://trac.turbogears.org/wiki/ModPythonIntegration09?version=15 | CC-MAIN-2015-27 | refinedweb | 787 | 50.12 |
which means they can grow indefinitely, as they are defined by a description, not by pixels like in pictures.
In a new Next.js/React project with a hundred icons, I had to choose the best way to integrate icons. In this article, I propose to:
Once you have an SVG (let's say a wonderful arrow icon), there are at least four ways to import them... not all of these methods are worth it!
An SVG can be used as the source of an
img HTML tag:
const Icon = ({ src, ...imgProps }) => <img src={src} {...imgProps} />; const ArrowIcon = <Icon src="icons/arrow.svg" alt="Big arrow" />;
Even if this icon can be manipulated exactly as an image, advantages of SVG, except file size reduction, cannot be used. In particular, customization is not possible. Moreover, if the file is moved or deleted, developers may not be aware of it. Let's see what's next!
A really simple way to create an icon is to set it as the background image of a
div element.
import styled from "styled-components"; const Icon = styled.div` ${({ url }) => `background-image: url(${url});`} display: flex; align-items: center; justify-content: center; overflow: hidden; background-size: cover; background-position: center center; ${({ size }) => ` width: ${size}px; height: ${size}px; `}; `; const ArrowIcon = <Icon url="icons/arrow.svg" size={20} />;
The icon can be manipulated as a
div element, so the size is customizable, but that's about it. Thus, let's go through another way.
One popular method to manage a big set of icons is to use a font. It is quite easy to generate a font with online tools such as IcoMooon or Fontello.
A generic icon component should look like:
import "./style.css"; const Icon = (props) => ( <span> <i className={`icon-${props.slug}`} /> </span> );
Where the file
style.css, normally created by a font generator, should be something like:
@font-face { font-family: "icons"; src: url("fonts/icons.woff2") format("woff2"), url("fonts/icons.woff") format("woff"); font-weight: normal; font-style: normal; font-display: block; } .icon-arrow:before { content: "\\e954"; /* `\\e954` is a character defined in the font */ }
For additional props, you can wrap the icon, replacing
span by a styled component with props. For instance:
const WrappedIcon = styled.span` ${(props) => ` i { ${props.color ? `color: ${props.color};` : ""} ${props.size ? `font-size: ${props.size}px;` : ""} } `}; `;
And finally get a simple icon component:
const ArrowIcon = <Icon slug="arrow" size={20};
This method seems to be used in a lot of projects. Before challenging it, let's see the last one.
Importing icons in React (granted it is properly configured, we'll see that later) is as simple as that:
import Arrow from "icons/arrow.svg"; const MyComponent = () => <Arrow />;
This is the method I chose in my case, let's understand why.
Let's dive further into the comparison of the two concurrent methods: inline SVG vs. font...
Fonts are managed like text and it is therefore possible to customize an icon's
size or color. Precise positioning is possible with the same properties as text (
font-size,
line-height,
text-align, etc.), so it is quite painful when icons are used as buttons or images. For emojis used in a text, it could be more relevant, however.
SVG are customizable as images and properties
fill and
stroke allow to redefine colors respectively of the content or the border.
All parts of the SVG can be separately customized, which is impossible to do with fonts.
➡️ Using inline SVG is the best choice if you need color variants of icons, or to customize precise parts of them.
➡️ Icons used as characters should be part of a font.
Fonts are loaded asynchronously, which means blank characters can appear while the font is being loaded.
All icons are loaded in the font, even if only a few of them are used on the page. However, it can be served by a CDN, a fast method if CORS is correctly configured.
On the contrary, SVG are loaded with the DOM, which avoids blank spaces, but relatively increases loading duration. An icon usually weighs from 500B to 5kB (less with SVG Optimization) so it can be non-negligible if hundreds of icons are used at a time - quite rare, however!
Fonts are also subject to anti-aliasing methods applied by browsers to font characters when scaling. SVG do not suffer from this issue and are fully scalable.
➡️ In pages with hundreds of icons, using a font reduces the page size. Serving it via a CDN is a performant option.
➡️ For pages with moderate use of icons, SVG is a better option to avoid anti-aliasing and blank characters at loading.
What if icons evolve along the development process?
With a font, you need to regenerate the font correctly, with the correct source files - otherwise, some character codes may change!
With SVG, you just need to add, replace or delete a file and update file imports in the project. In case of replacement, check the SVG structure is still the same, as fine customization of SVG parts may be affected.
➡️ If icons are likely to evolve, SVG files are a more consistent and independent format to handle icon addition, edition, or deletion.
Once icon systems are set, both methods are easy to use as developers, as seen in the first part.
The only point of attention can be when it is necessary to customize SVG icons that are already styled. For instance, with a
fill or a
width property already set:
<svg width="15px"> <path fill="none" d="..." /> </svg>
In this case, applying custom
fill or
width won't be applied. Cleaning the SVG by removing
fill="none" and
width="15px" should fix the issue.
➡️ Once an icon system is set, the developer experience should be satisfying in both cases.
Fonts are supported by almost all browsers, using the
@font-face rule. All modern web browsers support WOFF2 and WOFF which are optimized formats for the web, but other formats can be used for broader browser support. However, Opera Mini does not support
@font-face at all.
All modern browsers now support inline SVG method, even Internet Explorer which only does not support CSS transforms (more details on caniuse website).
➡️ Both methods are supported by more than 98% of web browsers. Inline SVG is slightly better supported and has better fallback options.
Serving accessible fonts is hard. If you are adventurous, this article details the tricks to guarantee accessible fonts, so I won't expand on it here.
If accessibility is important for your project, then SVG are made for this.
An SVG file may have a
title or even a
desc (for description) tags. For better browser support, it is possible to add an id to these tags and refer to them the attributes
aria-labelledby as the following minimal example shows.
<svg aria- <title id="circleTitle">A circle</title> <desc id="circleDescription">A red circle with a blue border</desc> <circle cx="50" cy="50" r="40" stroke="blue" stroke- </svg>
For more details, I advise you to look at this CSS-tricks page.
➡️ For accessible projects, SVG is the best option with no hesitation.
A font system is an option to consider if your project uses a huge amount of icons per page, is focused on performance, does not need to be accessible, and has a graphic chart that won't evolve.
In other cases, for better flexibility and user experience, I would recommend an inline SVG system. That's what I chose in the case of a new B2C project with common use of icons (<30 per page) and a library with possible evolutions.
In the following part, I will explain how to configure such an icon system.
Here are the four simple steps I followed to integrate icons in a Next.js/React project using
styled-components as styling library.
babel-plugin-inline-react-svg:
# With npm npm install --save-dev babel-plugin-inline-react-svg # or, with yarn yarn add --dev babel-plugin-inline-react-svg
This package will allow the import of SVG files as React components. It also uses SVGO to optimize SVGs.
Note: SVGR also offers good alternatives to
babel-plugin-inline-react-svg depending on your bundler, such as
@svgr/webpack.
inline-react-svgplugin to babel configuration (
babel.config.jsor
.babelrc):
plugins: ["inline-react-svg"];
For further configuration, refer to the npm page.
In a TypeScript project, you should also add a module declaration in a type file:
// fileTypes.d.ts declare module "*.svg" { import React = require("react"); const src: (props: React.SVGProps<SVGSVGElement>) => JSX.Element; export default src; }
// icons/Arrow.jsx import Arrow from "./arrow.svg"; // ... (possible customization of the component) export default Arrow;
With this method, refactoring icons is easier if the import method changes.
Warning: using a unique index file with all SVG icons gathered in it is not recommended, because it prevents from tree-shaking the library.
import Arrow from "icons/Arrow"; const MyComponent = () => <Arrow />;
It is also possible to customize icons with
styled-components, as shown in this example:
// MyComponent.tsx import styled from "styled-components"; import Arrow from "icons/Arrow"; const CustomSpan = styled.span` color: red; font-size: 40px; `; const CustomArrow = styled(Arrow)` width: auto; height: 30px; stroke: blue; stroke-width: 50px; fill: currentColor; /* fill with the current color, here red */ `; const MyComponent = () => ( <button> <CustomSpan> Go <CustomArrow /> </CustomSpan> </button> );
...which renders as a wonderful button:
That's all you need to manage icons in your project!
Theodo - Developer | https://blog.theodo.com/2021/03/icon-library-react-why-inline-svg-better-than-font/ | CC-MAIN-2021-17 | refinedweb | 1,580 | 64.81 |
Debate:Should external links on RationalWiki be flagged as no follow?
Currently every external link on the site is flagged as no follow. This applies not just to the [single bracket link], but also interwiki links such as [[wp:something]] or [[cp:something worse]], as well as all the links in the voting extension or that use the capture tag.
The question is, should we maintain this policy, remove all no-follow tags, or something in between?
Contents
Maintain current policy of flagged links[edit]
My first instinct is to opt for the status quo unless someone can come up with a good case for changing it. Wikipedia does alright with nofollow links. There have been some suggestions that it might benefit our Google rankings. I would suggest that the best way to improve Google rankings is with good articles rather than Ken-type SEO.
Генгисunbelieving 22:21, 28 July 2009 (UTC)
- My concerns have nothing to do with our ranking. I personally find the whole idea of no follow distasteful. It is about receiving the benefit of incoming links to our site, but refusing to extend that same service to other sites. We become a "black hole" for the search engines, links come in but nothing comes out. The whole idea of collaboration and egalitarianism that underlies a "wiki" and even our specific site policies seems to be undermined by this action. tmtoulouse 22:28, 28 July 2009 (UTC)
- When you put it like that then I can see the point. Why didn't you proffer that rationale straightaway?
Генгисunbelieving 22:36, 28 July 2009 (UTC)
- I don't know. The answer is probably somewhere between laziness, not totally sure what I think about it, and wanting to see what would pop up organically without my immediate take. tmtoulouse 22:37, 28 July 2009 (UTC)
I like the current policy, at the moment I can stick down a link with no concern about how it will
effect affect (delete which is inappropriate) the persons webshite for a popular search starting with [A-Z] at a search engine starting with GBY. We can link to people we would not otherwise want to help out with no fear of assisting them in anyway. We could a code or template that allows follow I suppose, that I would support as it will allow the editors to selectively choose the pages we are going to assist. I would like no follow to still not work on talkpage though, the same as comment sections on a blog. - π 23:54, 28 July 2009 (UTC)
- I think it would be nice to have a tag like follow (link) /follow, only perhaps easier to type (2 or 3 letters?), we also link to many places we would like to "assist", like our blogs, and the many "good guys" out there. That's if we keep the general "nofollow" policy. ħuman
00:40, 29 July 2009 (UTC)
- I didn't get it until TMT wrote his comment above. That makes a lot of sense. Sterile leak 02:00, 29 July 2009 (UTC)
- Thinking about this a bit more, we already have reciprocity with our links to external sites in that people who visit here can click to an external site. The issue is really one of helping other sites with their search-engine ranking. Do we really want to boost the importance of daft Julie and other nutter bloggers or the pile of crap at Conservapedia just because we find it ridiculous or amusing? I certianly don't want to contribute to Ken's SEO machinations. I think the nofollow should stay but we should allow exceptions. As we don't want to make adding the exception too much hassle I suggest that it could be done with three square-bracket pairs which would add only two characters (sorry, it would be four for external links). It would be easy to remember but unusual enough that drive by spammers would miss it.
Генгисunbelieving 08:17, 29 July 2009 (UTC)
- I like Genghis' suggestion since it would be (as I mentioned below) a sort of "Seal of Approval" (Thinking about it a bit more, could/should we maybe limit this to non-talk namespaces so that our article link policies/habits can take care of the "policing"?). Back before we had nofollow for CP links (okay, and before he realized that normal links have nofollow anyway), Ken was here every other day, spamming links to his pet articles and abusing us as a platform for his SEO wanking. I can't say that I miss that. --Sid (talk) 12:42, 30 July 2009 (UTC)
[edit]
Other options[edit]
I understand that "no follow" is the default setting on mediawiki and that its purpose is to deter spammers - as the nofollow attribute means that their sites will not get a higher google ranking as a result of the link. Its utility is debatable as:
- Many spammers probably do not understand the issue anyway.
- Our highly active user base and our legion of eager sysops will clear out any spam quite quickly.
So why not disable it? Well, we link to a lot of weird sites and one especially weird site in particular. Do we really want to give them the extra google hits? How selective could we be with nofollow? Is it "all or nothing"? If we can do something selective then why not go for it? If not .......--BobNot Jim 20:33, 28 July 2009 (UTC)
Questions/Comments[edit]
One think that I've always wondered about this. Would disabling nofollow improve RW's google ranking?--BobNot Jim 20:35, 28 July 2009 (UTC)
- No, there is no advantage in it directly, though indirectly I suppose there might be. IE people doing a tit-for-tat type arrangement with the tags. But I don't think this is common. tmtoulouse 20:37, 28 July 2009 (UTC)
- It's just that I seem to recall reading that the Google page-rank algorithms gave some value to outgoing links as well as incoming ones. Presumably with nofollow set the outgoing links would be zero. But I can't seem to find where I read that now and I might well have been mistaken. (Or perhaps somebody on the internet was wrong - but that can't happen, can it?)--BobNot Jim 20:50, 28 July 2009 (UTC)
Difflinks[edit]
How does google "see" weird links like diffs? As links to the current version? ħuman
21:21, 28 July 2009 (UTC)
- A diff link is just an html page, google will see it as it appears to us. tmtoulouse 21:22, 28 July 2009 (UTC)
- But will it raise the profile of the "undiffed" version, or just the basic www as a whole? ħuman
23:44, 28 July 2009 (UTC)
- I would imagine google probably ignores the php arguments in a link, and so a link like would probably appear as. tmtoulouse 23:56, 28 July 2009 (UTC)
- I don't know how Google "sees" those links, but if you check the HTML intro code of those two links have different meta data: The diff link has meta name="robots" content="noindex,nofollow" in the intro, so Google wouldn't index it either way (if I understood it correctly). --Sid (talk) 12:35, 30 July 2009 (UTC)
Subtlety of distinction[edit]
If we decide to go the partial nofollow route, how "smart" can the software be? Like, say, nofollowing any link in a "capture" tag? Nofollowing any diff or permalink? Nofollowing any "shortcut" (ie, wp:some shit, cp:some thing else) link? ħuman
21:28, 28 July 2009 (UTC)
- I suppose it could be as smart as we want it to be. One issue is that external link formatting is not handled by a single proccess or chunk of code but divided up amongst several extensions and internal code for the MediaWiki software itself. tmtoulouse 22:39, 28 July 2009 (UTC)
- Could individual links be tagged as : eg <nofollow> ... </nofollow>? This message brought to you by:
respondand honey 00:10, 29 July 2009 (UTC)
- See my comment a few sections above after I write it. ħuman
00:39, 29 July 2009 (UTC)
- Would it be better to "blacklist" certain sites, such as (www\.)?conservapedia.com for nofollowing and then let the rest be plain vanilla links? Also, I have the vague impression that something like 80% of the links on this site are to sites that we find in some manner objectionable (religious nutters, cranks, scam artists, etc.) This would suggest to me that nofollowing should be opt out, rather than opt in. --JeevesMkII The gentleman's gentleman at the other site 21:15, 29 July 2009 (UTC)
- I... don't know. Active blacklisting like this looks like direct discrimination against a few sites to me. And while I wouldn't exactly mind blacklisting CP, I like the approach of exceptions (like the "[[[link]]]" suggestion somewhere on this page) more. It would be like a sort of "Seal Of Approval". --Sid (talk) 12:38, 30 July 2009 (UTC)
(undent) Interwiki links are not nofollowed by mediawiki, that's done by an extension. The extension has a whitelist at Mediawiki:iwnofollow-whitelist. Mediawiki 1.15 allows whitelisting external links in LocalSettings.php with $wgNoFollowDomainExceptions Nx (talk) 10:15, 31 July 2009 (UTC)
- Sounds good, exactly what we need. Are we going to 1.15? If so can we get dynamic tabs so we can replace that javascript hack for the [0] tab. - π 10:25, 31 July 2009 (UTC)
- Even with MW1.14 we can disable it on certain namespaces with $wgNoFollowNsExceptions Nx (talk) 07:46, 2 August 2009 (UTC)
NOTE[edit]
-).
- Yahoo! "follows it", but excludes it from their ranking calculation.
- MSN Search respects "nofollow" as regards not counting the link in their ranking, but it is not proven whether or not MSN follows the link.
- Ask.com ignores the attribute altogether.
- (from WP) so it's not mandatory anyway. This message brought to you by:
respondand honey 00:19, 29 July 2009 (UTC)
- What is the difference between "follow" and "rank" in this case? Surely if it "follows" but does not "rank" that is the same as "not follow" and not "rank".--BobNot Jim 06:20, 29 July 2009 (UTC)
- From wp: " ... others still "follow" the link to find new web pages for indexing ..." This message brought to you by:
respondand honey 06:25, 29 July 2009 (UTC) | https://rationalwiki.org/wiki/Debate:Should_external_links_on_RationalWiki_be_flagged_as_no_follow%3F | CC-MAIN-2019-43 | refinedweb | 1,747 | 70.33 |
Technical Support
On-Line Manuals
Compiler Reference Guide
Version 6.13
This intrinsic reads or modifies the FPSCR.
To use this intrinsic, your source file must contain #include <arm_compat.h>. This is only available for
targets in AArch32 state.
#include <arm_compat.h>
unsigned int __vfp_status(unsigned int mask, unsigned int flags)
mask
flags
Use this intrinsic to read or modify the flags in FPSCR.
The intrinsic returns the value of FPSCR, unmodified, if mask and flags are
0.
You can clear, set, or toggle individual flags in FPSCR using the bits in mask
and flags, as shown in the following table. The intrinsic returns the
modified value of FPSCR if mask and flags are not both
0.
Table
B4-1 Modifying the FPSCR flags
The compiler generates an error if you attempt to use this
intrinsic when compiling for a target that does not have VFP.. | http://www.keil.com/support/man/docs/armclang_ref/armclang_ref_chr1359125005991.htm | CC-MAIN-2020-05 | refinedweb | 146 | 55.64 |
I love DOM Inspector already, from the November 9, 2001 Developer's Day. But
one feature I'm looking for is the ability to create nodes from scratch, and
paste them into a document or document fragment.
hewitt, I'm working on a DOM-I-like XUL+JS called Node Creator. If you want to
assign this bug to me, go ahead. We can mind-meld DOM-I and NC later.
brantgurga wants RFE deprecated. I'm happy to oblige him. That's not why I'm
writing now.
I've gone back to my offline computer and tinkered with a 1.1a+ build using
Gerv's deprecated Patch Maker. I've modified the following files:
content/viewers/dom/dom.xul
content/viewers/dom/dom.js
content/editingOverlay.xul
I think I've modified this file:
resources/locale/en-US/editing.dtd
And I've created two new files, currently located locally at
chrome://inspector/content/viewers/dom/creatorDialog.xul
chrome://inspector/content/viewers/dom/creatorDialog.js
In the files I modified, all I did was add code. Bonsai indicates the only one
of these files to undergo any significant change is dom.js -- and looking at
the source code, it appears at first glance that the parts in question that I'm
looking at have not changed significantly. In other words, the patch I have
(which I will have to make available later) appears compatible with Mozilla
1.2b+.
(Translation: though I built the patch on old code, the code hasn't bitrotted
in the most important parts to the patch.)
The current version adds a submenu where you right-click on an element, above
the "Delete" menu item. The submenu is titled "Insert node...", and offers
five options: before this node, after this node, in place of this node
(replace), as first child, and as last child. The current version allows you
to create and insert elements, text nodes, CDATA nodes, comments and processing
instructions. I organized the JS to actually follow the pattern of the current
cmdEditBlahBlah functions, also, so undo works too.
As it is right now, it's probably good enough to set a milestone of 1.3 alpha,
and I recommend so. Only thing I don't like about it is that it can't edit
and/or split text nodes -- but that's a different bug, I think.
Patch for the modified files and static versions of the new files coming in a
few days via ZIP, (ie, when I have a floppy disk with me). I'd like advice on
where the two new files should really be placed.
Created attachment 105515
patch modifying files, plus two new files not in the patch
Patch Maker doesn't allow me to create files, so I had to include them
separately. The diff and the two new files are included as a ZIP. Also,
there's one specific part about this patch that I do not like: I hard-coded a
few namespaces in. See notes in the patch.
8) Bug 98815 almost certainly solves the problem I had with the patch. Maybe
next week I can have a version which uses DOM 3 methods. That being said, the
patch is otherwise good.
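For context, the DOM 3 method in question is Node.lookupNamespaceURI(), which
resolves a prefix against the namespaces in scope at a given node. A hedged
sketch of how it replaces the hard-coded URIs (the helper name and error
handling are made up):

function namespaceForPrefix(aContextNode, aPrefix) {
  // lookupNamespaceURI() returns null when the prefix has no in-scope declaration
  var uri = aContextNode.lookupNamespaceURI(aPrefix);
  if (!uri)
    throw "No namespace in scope for prefix: " + aPrefix;
  return uri;
}
// e.g. doc.createElementNS(namespaceForPrefix(refNode, "html"), "html:div")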
Created attachment 105708
Patch, including new files and lookupNamespaceURI()
I'm not sure if I formatted the --- and +++ lines correctly, but otherwise it
is a working patch.
Created attachment 105714
patch v3, following suggestions of timeless and caillon
Hopefully this one is better.
Created attachment 105730
patch, v4, with the correct license this time
Thanks, timeless.
Comment on attachment 105730
patch, v4, with the correct license this time
This is not a full-fledged review: I am just pointing out some things.
0. I intend to redo the UI and I'm strongly considering going with a UI
approach similar to what venkman uses. It may be confusing at first glance,
but after getting into it, it is very highly modular and configurable for an
app which uses several views at once and needs to do many things in different
situations, like inspector does. So this work might be obsoleted. There's
fair warning. ;-)
That said,
1. Your coding style needs some attention. Binary operators have spaces before
and after them. Your indentation is wacky in some places (why on earth are
your comments breaking the indentation rules of the blocks they are contained
in? Also why are they in the middle of the block if they apply to the whole
function?). Major blocks (e.g. functions) get extra newlines after them.
Please indent the stuff within your cases... In your XUL, one element per line
please. etc.
2. I don't really like your error handling. Why do you let the user click on a
menu item which pops open a window, then pops up an alert to say "you can't do
that" and then closes both windows? That is extremely lame UI. Disable the
menu item in that case.
3. What's up with the code in several places that does this:
>+ var node = this.node ? this.node : viewer.selectedNode;
>+ this.node = node;
>+ var insertedNode = this.insertedNode;
Do
if (!this.node) {
  this.node = viewer.selectedNode;
}
And there's also no need for the temp var for insertedNode. Just use it off of
|this|.
4. Script elements should precede the document element.
5. Don't execute JS in the global scope. Use an onload handler and a function.
The less stuff in the global scope, the less potential conflicts we have.
I also have a few implementation questions that I don't have the time to write
down now, as I've gotten swamped in the last few hours actually, and I have a
date at Hogwarts coming up soon. I'll ping you later about them. In the
meanwhile, could you post a screenshot of some stuff? I get a general idea
from your xul, but pictures are worth more than my interpretations of code...
I'll get back to you on all that. Coding style I'm not sure how to clean up
for mozilla.org conventions. Script element before the document element --
wouldn't that violate XML well-formedness?
The rest of this, I think I can accomplish without much trouble at all. I'll
zip up some screenshots for you of the dialog box and the modified context menu.
Err, um yeah. I was finishing up the comments in haste and I've no idea where
that came from. I meant to say that the indentation level of scripts should not
be the same as the document element.
Created attachment 106744
patch, v5
Hopefully I got everything caillon asked for. I think I did.
I also did a little more work to prevent wrong situations from happening (like
a text/html document receiving a CDATA section, bug 27403). So the overall
result is fairly smooth.
Tree frozen for 1.3a. It'd be nice if I could get an r=.
Comment on attachment 106744
patch, v5
const kClipboardHelperCID = "@mozilla.org/widget/clipboardhelper;1";
+const nsIDOMNode = Components.interfaces.nsIDOMNode;
What about lining up the = with the one in the lines above?
+ this.dialogTitle = "Insert node as first child of this node";
shouldn't this be localizable?
similar in other places
+ // XUL documents do not allow creation of ProcessingInstruction nodes --
known bug.
care to cite a bugnumber here?
+ alert("Aborted: Qualified names must have at most one colon.\n" + qname);
IIRC you're supposed to not use alert in chrome, but instead use
nsIPromptService...
Also, this is not localizable.
+ var element_data = parseQName(qname);
isn't it normal style to use interCaps?
It would look better if you indented the text after case: labels...
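For reference, a sketch of the nsIPromptService route suggested above -- the
dialog title is a placeholder, and a localizable version would pull both
strings from a string bundle rather than hard-coding them as in the quoted
alert():

const kPromptServiceCID = "@mozilla.org/embedcomp/prompt-service;1";
var promptService = Components.classes[kPromptServiceCID]
                              .getService(Components.interfaces.nsIPromptService);
// same behavior as the chrome alert(), but themable and embedding-safe
promptService.alert(window, "DOM Inspector",
                    "Aborted: Qualified names must have at most one colon.\n" + qname);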
Comment on attachment 106744
patch, v5
requesting from the right address this time (on behalf of Alex Vincent)
Comment on attachment 106744
patch, v5
Patch needs work. I answered biesi's concerns (localization is spotty at best
in DOM Inspector as currently implemented), and then caillon and I entered into
a debate regarding prefix vs namespace URI vs both, and I'm trying to figure
out the best route now.
Doubtful I'll be able to get this in before 1.3 beta -- I'm on vacation, and
don't have a Mozilla build to test my ideas on. I return home on Jan 18, but
that only leaves four days for caillon to review.
I wonder if they'll allow this for checkin to 1.3 itself. If not, we'll have
to bump this back to 1.4 alpha. I'd really rather avoid that.
Not going to make 1.3 beta, unless we get a miracle.
Created attachment 112754
Demo of proposed namespaces feature (work in progress)
caillon asked for the new feature to be 100-200 lines... the bad news is it
weighs in currently at almost exactly 300 lines (XUL + JS), and there's still a
few bugs in it. (I've included documentation in a textbox.) I'm providing
this attachment mainly as a proof-of-concept, and I'd like feedback (not a
formal review, but if this works out, the majority of this code will end up in
my next patch for this bug).
Created attachment 114021
Demo, v2 (CSS stylesheet)
Created attachment 114022
Real CSS stylesheet for demo
The preceding attachment is the XUL document -- one second while I fix it...
Created attachment 114023
Demo, v2 (XUL document standalone)
Much-improved over original version. Looking for feedback, particularly on
whether you'd appreciate using something like this in the create nodes widget.
biesi identified a bug in the insertNode() function -- calling setAttributeNS()
on a Document node when it should call on the documentElement.
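A minimal sketch of the corresponding fix (the helper name is hypothetical):
setAttributeNS() lives on Element, not Document, so insertNode() has to
redirect a Document target to its root element first:

function attributeTarget(aNode) {
  // Document nodes have no setAttributeNS(); fall through to the root element
  return (aNode.nodeType == Node.DOCUMENT_NODE) ? aNode.documentElement
                                                 : aNode;
}
// e.g. attributeTarget(target).setAttributeNS(nsURI, qname, value);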
"earthsound" identified a few errors when this widget runs in Mozilla 1.3a;
recent nightly doesn't have those errors.
I've started porting the XUL/JS/CSS to the creator dialog box, and I've
encountered a glitch: if the user brings up the dialog box and then hits
Cancel, undo/redo is broken. By that I mean a new command has been added to the
undo history, but nothing happened. This is unacceptable because if the user
had done other commands, undid them, and called an insert node method, and then
canceled, there is no way for the user to redo the commands they undid.
I have two options at this point: hack the undo history in Inspector to allow
for "temporary stack" commands, or change my approach and make the create nodes
feature a panel for the right side.
caillon: which do you prefer?
If I were to hack the history stack, it would be done by splitting execCommand()
from inspector.xml into two separate functions -- with the part inside the
"if (!noPush) {...}" brackets going into a new method.
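Roughly, the split would look like this -- everything except execCommand() and
the noPush flag is guessed, since inspector.xml isn't quoted here:

execCommand: function(aCommand, aNoPush) {
  aCommand.doCommand();
  if (!aNoPush)
    this.pushCommand(aCommand);        // hypothetical new method
},
pushCommand: function(aCommand) {
  // formerly the body of the "if (!noPush) {...}" block
  this.mCommandStack.push(aCommand);   // assumed history field
  this.mRedoStack = [];                // a new command invalidates redo
}

The dialog could then execute a command tentatively and only push it onto the
history once the user confirms, leaving undo/redo intact on Cancel.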
Created attachment 114694
patch, v6
This one's a doozy: 69KB.
Note this patch depends on attachment 114685 of bug 193726 being checked in.
This patch includes:
* new commands for dom.js and editingOverlay.xul, with labels in editing.dtd
and a slight modification to dom.xul
* documentation of the creatorDialog widget in a help file you can reach from
the widget
* CSS stylesheet for certain effects of the widget, no XBL bindings included
* JavaScript functions for basic operation, namespace handling and on-the-fly
validation (if invalid, the ok button is disabled)
* Scrolling boxes for multiple attributes
* Zero known JavaScript errors or strict warnings
Comment on attachment 114694
patch, v6
Patch needs more work. biesi gave me some QA love (he and I appear to have
switched roles for this bug) and helped me discover the patch is no good for
HTML documents. (Namespaces. Nuff said.)
Needless to say, that is quite annoying.
The solution I have right now is to add a separate version of the widget
specifically for HTML documents, and to discriminate between the two. This
results in approximately 20KB (and three files) being added to the patch when
I'm done with it. That doesn't count changes to the help file.
It was either that or (1) load the whole XUL+JS+CSS, (2) add more spaghetti
code to the JS for when XML mode is on and when it's not, (3) lots of DOM
interaction to remove XML-specific code, and (4) creating another CSS
stylesheet anyway...
The current detection for XML documents will be a new method of the viewer
object of dom.js:
creatorDialog_fileName: function() {
  var ifaces = Components.interfaces;
  var response = "";
  var isXML = (this.subject instanceof ifaces.nsIDOMXMLDocument) ||
              (this.subject instanceof ifaces.nsIDOMXULDocument);
  if (isXML) {
    response = "chrome://inspector/content/viewers/dom/creatorDialog_XML.xul";
  } else {
    response = "chrome://inspector/content/viewers/dom/creatorDialog_HTML.xul";
  }
  return response;
}
Created attachment 115449
patch, v7
Once again, you need to check in attachment 114685 as a prerequisite to
checking in this patch.
biesi warned me of a hang in Mozilla using the preceding patch with a certain
number of steps. I have unfortunately discovered a more generalized testcase
which hangs Mozilla 100% of the time with this patch.
(1) Insert a node.
(2) Append a child node to the inserted node.
(3) Replace the node you inserted, and Mozilla hangs.
I do not know why this hang happens. The commands I've inserted use only the
Document Object Model to modify the original document. A standalone testcase
replicating these results without DOM Inspector does not cause a hang.
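For the record, a plain-DOM analogue of those three steps (element names are
arbitrary; this version runs fine when DOM Inspector is not attached):

var parent = document.documentElement;
var inserted = document.createElement("div");          // step 1: insert a node
parent.appendChild(inserted);
inserted.appendChild(document.createElement("span"));  // step 2: append a child
var replacement = document.createElement("div");
parent.replaceChild(replacement, inserted);            // step 3: replace it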
zach suggests (not unreasonably) that I should fix the hang before this patch
is checked in. I however have no way of diagnosing where the actual hang
happens. I have observed the dialog box closing before Mozilla hangs, so the
added files are above suspicion. Also, since you can replace a node you have
not inserted before without a hang, that suggests my modifications to the
existing code are not at fault.
helpwanted: I need to find what exactly is causing the hang that comment 27
describes. If you find it, and particularly if you can fix it, please let me know.
Re comment 27: my code is definitely not at fault. The modified code executes
doCommand() and thus modifies the DOM of the inspected document before the
dialog box closes. The hang happens after the dialog box closes.
Based on the known hang, I am inclined to ask for drivers' a= following a
successful r= for a patch on this bug, even if it is not required.
Re patch v7, there are two VirtualAttr.prototype.toString() functions. I left
them in there for diagnostic purposes, but unfortunately a bad disk sector on my
floppy corrupted one of them, resulting in a JS error. You may safely remove
them without ill effect, and I recommend doing so.
Comment on attachment 115449 [details]
patch, v7
patch is broken; blame the floppy.
Created attachment 115578 [details] [diff] [review]
patch, v7.1
thanks to biesi for bearing with me while I helped him fix the damage.
Borrowed code from v6 of this patch, which hadn't changed between versions.
Comment on attachment 115578 [details] [diff] [review]
patch, v7.1
caillon took a quick glance at this. His commentary so far:
>+ creatorDialog_fileName: function() {
>+ var ifaces = Components.interfaces;
+ const ifaces = Components.interfaces;
>+ this.isInStack = false;
caillon prefers this.inStack, but he thinks this is nitpicking. I prefer
isInStack.
>+ <meta http-
>+ <title></title>
caillon wants me to lose the meta tag and include a title. No problem.
+ <title>DOM Inspector Create Nodes Help Page</title>
I have to change the line number count, though.
>+ max-height: 150px;
>+ height:150px;
>+ max-width:600px;
+ max-height: 150px;
+ height: 150px;
+ max-width: 600px;
>+[width="textfield"] {
>+ min-width:250px;
+ min-width: 250px;
>+#deck {
>+ max-width:600px;
+ max-width: 600px;
Later, in creatorDialog_XML.css,
+#attrsBox {
+ overflow: -moz-scrollbars-vertical;
+ max-height: 150px;
+ height: 150px;
+ max-width: 600px;
}
+
+#nsAttrsBox {
+ overflow: -moz-scrollbars-vertical;
+ max-height: 100px;
+ height: 100px;
+ max-width: 600px;
+}
+
+[width="uri"] {
+ min-width: 250px;
+}
+
+[width="prefix"] {
+ min-width: 80px;
+}
+
+[width="textfield"] {
+ min-width: 73px;
+}
+
+[width="nsTextfield"] {
+ min-width: 168px;
+}
+
+#deck {
+ max-width: 600px;
+}
Think that's everything caillon saw at first glance. He recommended that if he
wasn't available, I should get reviews from sicking and/or bz. Requesting r=sicking,
sr=roc (because he was nice enough to respond on bug 193726 -- sorry, roc).
Due to the known hang mentioned earlier and the short amount of time left for
1.4a, asking for drivers' a= is still an option in my book.
Comment on attachment 115578 [details] [diff] [review]
patch, v7.1
Correction: after checking with caillon, sr= requested from bz.
Um... I won't be able to review this for 1.4a. I may not be able to review this
for 1.4b. I have lots of high-priority layout reviews on my plate and very
little time to spend on reviews....
same here, there is no way that i'll be able to r this for 1.4a :(
Not sure if you'll be allowed to land this during beta or not (don't know how
strict the no-new-features-during-beta rule is)
Heh, ok, ok. I figured 1.4a was a long shot, given I had to wait for another
bug to be fixed.
sicking: is 1.4b an option?
bz: who should I ask sr= of?
Um... peterv? dmose? jst? heikki? jag? hewitt? (depends on what the patch
mostly does).
Comment on attachment 115578 [details] [diff] [review]
patch, v7.1
per e-mail with bz.
I'm not going to get to this by 1.4a. This seems like a rather big change for
1.4b... Should this perhaps go into 1.5a?
:(
Comment on attachment 115578 [details] [diff] [review]
patch, v7.1
patch will need localization.
caillon: if you recall earlier screenshots of the dialog box my patch would pop
up, it was a very detailed UI. However, I've recently come up with an idea
which may greatly simplify this bug to a dialog box with only a multiline
textbox and OK/Cancel buttons.
const nsIDOMNode = Components.interfaces.nsIDOMNode;
var parser = new DOMParser();
var aDoc = parser.parseFromString("<foo>" + textbox.value + "</foo>",
                                  "application/xml");
if (aDoc.documentElement.tagName == "parsererror")
  return false; // the fragment was not well-formed
if (aDoc.documentElement.childNodes.length == 1)
  return true; // we got a single node, probably a text node
if (aDoc.documentElement.childNodes.length == 3) {
  if ((aDoc.documentElement.firstChild.nodeType == nsIDOMNode.TEXT_NODE) &&
      (aDoc.documentElement.childNodes[1].nodeType == nsIDOMNode.ELEMENT_NODE) &&
      (aDoc.documentElement.lastChild.nodeType == nsIDOMNode.TEXT_NODE)) {
    // well, we probably got an element node with whitespace on both sides;
    // check it in detail and clean it up
  }
}
What do you think? A simple textbox is nicer, and allows the user to create a
lot more complex content on-the-fly. Are there security risks to that
approach? A potential downside to this is the user adding in elements like
<html:script/>, but I don't see how that could be a major concern for people
who use DOM Inspector. Plus, localization would be a snap.
Created attachment 150613 [details]
Demo of a much simpler UI (run from chrome)
Something like this will be far easier for the DOM Inspector user to use, and
of course to maintain in the future.
Created attachment 171854 [details]
Advanced demo of create content widget
If you don't have XMLExtras (specifically, DOMParser) enabled in your builds,
the demo will run marginally slower.
Something very similar to this will be my next patch for DOM Inspector on this
bug.
Reassigning DOM-I bugs which have stagnated in my buglist back to default owner. Hopefully someone will pick up some of these bugs and work on them. I'll continue to follow them.
Created attachment 253955 [details] [diff] [review]
Patch for element and text node creation.
Personally I believe that this bug needs something far simpler to solve it. Attached is a patch that allows you to insert element and text nodes at any point in the tree with a very simple interface. You simply use the context menu to choose what position relative to the selected node you want to insert, then in the dialog either choose a namespace and tagname for the element, or enter some text for a text node.
The dialog was based on the code from 205872, in particular the namespace picker.
There are 4 commands for inserting nodes, depending on where they are inserted relative to the selected node. These 4 are written as just extensions of a generic command that handles the dialog displaying and node creation.
The code is I think reasonably well written to allow for additional node types to be added using the same dialog if needed in the future.
Comment on attachment 253955 [details] [diff] [review]
Patch for element and text node creation.
>@@ -249,16 +257,24 @@ DOMViewer.prototype =
> case "cmdEditPasteReplace":
> return new cmdEditPasteReplace();
> case "cmdEditPasteFirstChild":
> return new cmdEditPasteFirstChild();
> case "cmdEditPasteLastChild":
> return new cmdEditPasteLastChild();
> case "cmdEditPasteAsParent":
> return new cmdEditPasteAsParent();
>+ case "cmdEditInsertAfter":
>+ return new cmdEditInsertAfter();
>+ case "cmdEditInsertBefore":
>+ return new cmdEditInsertBefore();
>+ case "cmdEditInsertFirstChild":
>+ return new cmdEditInsertFirstChild();
>+ case "cmdEditInsertLastChild":
>+ return new cmdEditInsertLastChild();
> case "cmdEditDelete":
> return new cmdEditDelete();
> }
> return null;
> },
This entire function body can be replaced with one line of code:
return new window[aCommand];
Otherwise it doesn't look too bad on first glance. I'll have to apply it and test it before I can fully review it.
Comment on attachment 253955 [details] [diff] [review]
Patch for element and text node creation.
>+ if (/^cmdEditPaste/.test(aCommand) || /^cmdEditInsert/.test(aCommand)) {
What about:
if (/^cmdEdit(?:Paste|Insert)/.test(aCommand)) {
>+ onInsertPopupShowing: function onInsertPopupShowing(menupopup) {
Generally (at least as of late), we've been trying to prefix function arguments with a, so I'd suggest aMenupopup here.
General comments:
A few places looked like you might have lines longer than 80 characters, and some of the files have a copyright date of 2006, and you might want 2007.
Comment on attachment 253955 [details] [diff] [review]
Patch for element and text node creation.
>+ case "cmdEditInsertAfter":
>+ return (parentNode) && (parentNode.nodeType == nsIDOMNode.ELEMENT_NODE);
>+ case "cmdEditInsertBefore":
>+ return (parentNode) && (parentNode.nodeType == nsIDOMNode.ELEMENT_NODE);
>+ case "cmdEditInsertFirstChild":
>+ return (selectedNode) && (selectedNode.nodeType == nsIDOMNode.ELEMENT_NODE);
>+ case "cmdEditInsertLastChild":
>+ return (selectedNode) && (selectedNode.nodeType == nsIDOMNode.ELEMENT_NODE);
Use instanceof Element for these tests (instanceof also null-checks).
> var commandId = menupopup.childNodes[i].getAttribute("command");
> if (viewer.isCommandEnabled(commandId))
> document.getElementById(commandId).setAttribute("disabled", "false");
> else
> document.getElementById(commandId).setAttribute("disabled", "true");
> }
> },
>
>+ onInsertPopupShowing: function onInsertPopupShowing(menupopup) {
>+ for (var i = 0; i < menupopup.childNodes.length; i++) {
>+ var commandId = menupopup.childNodes[i].getAttribute("command");
>+ if (viewer.isCommandEnabled(commandId))
>+ document.getElementById(commandId).setAttribute("disabled", "false");
>+ else
>+ document.getElementById(commandId).setAttribute("disabled", "true");
>+ }
>+ },
Aren't these two functions identical (in which case generically rename it)?
> <!--<menuitem id="mnEditInsert"/>-->
Shouldn't this be removed?
>+ var rows = document.getElementsByTagName("row");
>+ for (var i=0; i<rows.length; i++) {
>+ if (rows[i].hasAttribute("types"))
>+ rows[i].hidden = rows[i].getAttribute("types").indexOf(dialog.nodeType.value)<0;
>+ }
Let me think about this, I'm not quite convinced yet.
Nit: spaces around <
Comment on attachment 253955 [details] [diff] [review]
Patch for element and text node creation.
>+<dialog id="InsertNode"
>++ title=""
The attributes dialog sets its title at run time, but you don't...
>+ <label value="&nodeType.label;" control="tx_nodeType"/>
>+ <menulist id="ml_nodeType" oncommand="dialog.updateType();">
control="ml_nodeType" surely?
Yes, sorry, those last two are dumbass mistakes. For the title, I meant to come up with something sensible before I submitted; is "Insert Node" ok? (In the DTD, of course.) The other is a silly copy-and-paste error.
Comment on attachment 253955 [details] [diff] [review]
Patch for element and text node creation.
>+ </row>
>+ <row types="element">
>+ <spacer/>
>+ <textbox id="tx_namespace"/>
>+ </row>
>+ <row types="element">
>+ <label value="&tagName.label;" control="tx_tagName"/>
>+ <textbox id="tx_tagname" oninput="dialog.toggleAccept()"/>
>+ </row>
I think you should wrap these in their own <rows> which means that you will only have to show and hide one of them. sr- given this and previous comments.
Created attachment 254198 [details] [diff] [review]
patch rev 2
This is an updated patch that addresses all of the previous comments.
(In reply to comment #53)
> I think you should wrap these in their own <rows> which means that you will
> only have to show and hide one of them. sr- given this and previous comments.
I'm not totally sure what you meant here. Using separate <rows> for the type chooser, text node options and element options won't work since multiple <rows> stack on top of each other rather than displaying one after the other.
For this patch I have split it into two grids, though that has the side effect of the node type chooser no longer totally lining up with the following controls. Screenshot coming.
Created attachment 254199 [details]
Screenshot of current behaviour
(In reply to comment #55)
> Created an attachment (id=254199) [details]
> Screenshot of current behaviour
I think that the whole "not lining up" issue looks really bad, fyi
Forgot to add there's also a minor typo from domNodeDialog.xul corrected in this patch.
(In reply to comment #54)
>I'm not totally sure what you meant here. Using separate <rows> for the type
>chooser, text node options and element options won't work since multiple <rows>
>stack on top of each other rather than displaying one after the other.
I meant something like this, with the second rows inside the first:
<grid>
  <columns><column/><column flex="1"/></columns>
  <rows>
    <row><!-- node type menulist --></row>
    <rows id="elementView">
      <row><!-- namespace menulist --></row>
      <row><!-- namespace textbox --></row>
      <row><!-- node name textbox --></row>
    </rows>
    <row id="textView" flex="1"><!-- multiline textbox --></row>
  </rows>
</grid>
Dialog boxes are almost never the correct solution for things like this. doing it with in-line editing on the tree box would be much nicer.
(In reply to comment #59)
> Dialog boxes are almost never the correct solution for things like this.
> doing it with in-line
> editing on the tree box would be much nicer.
While I know you are correct, I'm of the opinion that we get this feature implemented first like this. Since everything else in DOMi uses dialogs it isn't a huge deal, but feel free to file a bug to start removing dialogs from DOMi..
(In reply to comment #61)
>.
You can add attributes, but it just requires more dialog pop-ups. I don't think the DOMi is really supposed to be an editor per se, but this is handy to test a small fix out quickly.
I think you may have hit on that though...
Created attachment 254748 [details] [diff] [review]
patch rev 3
Wasn't aware of that trick so here's a more final patch.
With this the fields line up ok; the only issue is that the width of the second column fluctuates depending on which type of node is being edited. I guess the only way to fix that would be to apply a fixed width, though that would be a pain for localisers.
(In reply to comment #63)
>With this the fields line up ok, the only issue is that the width of the
>second column fluctuates depending on which type of node is being edited.
I was thinking radio buttons, but what if someone wants to extend the dialog to add (say) comment nodes? So I decided against it.
Comment on attachment 254748 [details] [diff] [review]
patch rev 3
>+ case "cmdEditInsertAfter":
>+ return parentNode instanceof nsIDOMElement;
>+ case "cmdEditInsertBefore":
>+ return parentNode instanceof nsIDOMElement;
>+ case "cmdEditInsertFirstChild":
>+ return selectedNode instanceof nsIDOMElement;
>+ case "cmdEditInsertLastChild":
>+ return selectedNode instanceof nsIDOMElement;
I seem to remember caillon didn't like instanceof Element either, so nsIDOMElement is fine.
>+ if (aCommand in window)
>+ return new window[aCommand]();
Nice though db48x's method is, I'm not convinced it's foolproof.
(Perhaps he can convince me, or come up with foolproof code.)
>+ switch (dialog.nodeType.value)
>+ {
>+ case "text":
>+ document.getElementById("row_text").hidden = false;
>+ document.getElementById("row_element").hidden = true;
>+ break;
>+ case "element":
>+ document.getElementById("row_text").hidden = true;
>+ document.getElementById("row_element").hidden = false;
>+ break;
>+ }
Why aren't these properties on the dialog object?
Note: This way may be more extensible:
dialog.rowText.hidden = dialog.nodeType.value != "text";
dialog.rowElement.hidden = dialog.nodeType.value != "element";
I wonder whether we should go for a different structure, maybe a deck?
dialog.nodeDeck.selectedIndex = dialog.nodeType.selectedIndex;
>+ <grid>
You need to restore the flex, otherwise the textbox stops flexing.
>+ </row>
>+ <rows id="row_element">
Tabs crept in here. sr=me with this fixed.
(In reply to comment #65)
> >+ if (aCommand in window)
> >+ return new window[aCommand]();
> Nice though db48x's method is, I'm not convinced it's foolproof.
> (Perhaps he can convince me, or come up with foolproof code.)
It's foolproof as long as the functions to be called are given the same names as the commands that should cause them to be called. Since that convention has been upheld so far it doesn't seem unreasonable to expect it to continue to be upheld. A comment to that effect wouldn't hurt though. What else is needed for it to be foolproof?
Comment on attachment 254748 [details] [diff] [review]
patch rev 3
awesome. sorry the review took so long. r=db48x
checked in | https://bugzilla.mozilla.org/show_bug.cgi?id=112775 | CC-MAIN-2016-22 | refinedweb | 4,867 | 65.93 |
Lexer: lex_string_tok is horrible
In a recent mail, @rae made me aware of how GHC lexes strings:
-- Strings and chars are lexed by hand-written code.  The reason is
-- that even if we recognise the string or char here in the regex
-- lexer, we would still have to parse the string afterward in order
-- to convert it to a String.
<0> {
  \'  { lex_char_tok }
  \"  { lex_string_tok }
}

....

------------------------------------------
-- Strings & Chars

-- This stuff is horrible.
-- I hates it.  --- <--- Simon M., 2003

lex_string_tok :: Action
lex_string_tok span buf _len = do
  tok <- lex_string ""
  (AI end bufEnd) <- getInput
  let
    tok' = case tok of
             ITprimstring _ bs -> ITprimstring (SourceText src) bs
             ITstring _ s -> ITstring (SourceText src) s
             _ -> panic "lex_string_tok"
    src = lexemeToString buf (cur bufEnd - cur buf)
  return (L (mkPsSpan (psSpanStart span) end) tok')

lex_string :: String -> P Token
lex_string s = do
  i <- getInput
  case alexGetChar' i of
    Nothing -> lit_error i

    Just ('"',i) -> do
      setInput i
      let s' = reverse s
      magicHash <- getBit MagicHashBit
      if magicHash
        then do
          i <- getInput
          case alexGetChar' i of
            Just ('#',i) -> do
              setInput i
              when (any (> '\xFF') s') $ do
                pState <- getPState
                let err = PsError PsErrPrimStringInvalidChar []
                            (mkSrcSpanPs (last_loc pState))
                addError err
              return (ITprimstring (SourceText s') (unsafeMkByteString s'))
            _other ->
              return (ITstring (SourceText s') (mkFastString s'))
        else
          return (ITstring (SourceText s') (mkFastString s'))

    Just ('\\',i)
        | Just ('&',i) <- next -> do
            setInput i; lex_string s
        | Just (c,i) <- next, c <= '\x7f' && is_space c -> do
            -- is_space only works for <= '\x7f' (#3751, #5425)
            setInput i; lex_stringgap s
        where next = alexGetChar' i

    Just (c, i1) -> do
      case c of
        '\\' -> do setInput i1; c' <- lex_escape; lex_string (c':s)
        c | isAny c -> do setInput i1; lex_string (c:s)
        _other -> lit_error i

lex_stringgap :: String -> P Token
lex_stringgap s = do
  i <- getInput
  c <- getCharOrFail i
  case c of
    '\\' -> lex_string s
    c | c <= '\x7f' && is_space c -> lex_stringgap s
        -- is_space only works for <= '\x7f' (#3751, #5425)
    _other -> lit_error i
Horrible indeed it is: the string lexer is completely hand-written! Obviously, writing down the FSM transitions by hand is very error prone and completely non-extensible.
Why was it not written within Alex' declarative language in the first place? I don't know for certain, but the comment that starts the above snippet suggests it's because of the need to escape the lexed strings, indeed a valid concern.
Nevertheless, I see two options here:
- Escape literal strings in the parser or (better) in the AlexToken case of
  lexToken. Needs to retain the whole unescaped string and might lead to
  escaping the same string repeatedly.
- Add lexer logic that escapes the string. Even the alex user guide gives an example for string parsing using "start codes" (3.2.2.2 here, which seems to be inspired by flex's "start conditions"), e.g. you can specify and "call" a sub-lexer for lexing the content of a string literal. GHC's lexer already seems to use start codes for layout, so the awareness of the feature apparently wasn't enough to use it for lexing strings. Indeed, it doesn't solve the escaping problem at all! On the other hand, supporting string escapes seems to be an explicit goal for flex's start condition as per the user guide. So maybe it's as simple as modifying the lookahead token when we see a <string> start code?
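A rough sketch of what option (2) might look like in Alex syntax. The action names addCharToString/addCurrentCharToString/endString are placeholders, not existing GHC lexer actions, and most escape forms are elided:

<0>         \"      { begin string_sc }
<string_sc> \\ n    { addCharToString '\n' }
<string_sc> \\ \\   { addCharToString '\\' }
<string_sc> \"      { endString `andBegin` 0 }
<string_sc> .       { addCurrentCharToString }

The real rules would need cases for every escape form, numeric escapes, and string gaps, which is where the hand-written code currently does its work.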
Some feedback from @simonmar would be helpful, who might recall the exact pitfalls involved with (2).
Certain practical constraints guide our selection of which applications to instrument and post here. We welcome your suggestions for additions to the suite, but please keep in mind the following practical issues.
One project member (Ben Liblit) is responsible for building, testing, posting, and supporting all instrumented applications. Ben’s time is finite, and so we must use it in ways that maximize our return (in usable data) for the time we invest.
For this reason, we prefer to post instrumented rebuilds of applications which are already cleanly packaged for their host platform. For Fedora, that means applications which already have RPMs that build and install cleanly, and which pull in a minimum of other non-standard supporting packages. We do not have the manpower to deal with fixing up packages that are broken even before our instrumentor comes along.
The entire concept of this project centers around learning failure patterns from large numbers of runs. Thus, maximizing our return for time invested also requires that we post packages that will attract large numbers of users. The more obscure or specialized a piece of software, the smaller its user base and therefore the longer it will take us to gather enough data. We prefer to post applications that have enough users to let statistical debugging do its magic quickly and effectively.
We would like to support more platforms than just Fedora / i386, including non-Linux operating systems. However, each platform we support requires a build machine here at Bug Isolation Headquarters, and we only have so many machines. Each machine needs a system administrator as well, which gets back to our limited human resources.
We are open to the idea of working with trusted, competent third parties who provide their own build machines. You might build binaries yourself or simply provide machine access for us to use. Contact us if you want to help out in this way.
Our instrumentation strategy and statistical debugging techniques are applicable to a wide variety of programming languages and paradigms. However, our current implementation is for C. We do not yet support C++, C#, Java, Python, Perl, and so on. The ideas still apply, but we don’t have the implementation.
If the current application suite seems to have a GNOME bias, now you know why. Most KDE applications are written in C++, and therefore are out of our reach. Mozilla and Mozilla-based projects (Galeon, Epiphany, Thunderbird, Firebird, etc.) are unavailable for the same reason.
Before suggesting a new application, then, please confirm that it is written in C. Note that we do support multithreaded applications: Evolution, Nautilus, and Rhythmbox make heavy use of threads, and our instrumented rebuilds of these packages work quite well. Furthermore, applications do not need to be graphical or interactive. Batch programs, daemons, command-line tools, and so on can all be instrumented if we hear sufficient demand. | http://www.cs.wisc.edu/cbi/downloads/selection.html | crawl-002 | refinedweb | 480 | 54.93 |
Hi and welcome to Just Answer! I will help you navigate the tax matter and will address all your tax-related questions.
A Limited Liability Company (LLC) is a business structure allowed by state statute. Owners of an LLC are called members. Most states also permit “single-member” LLCs, those having only one owner. Depending on elections made by the LLC and the number of members, the IRS will treat an LLC as either a corporation, a partnership, or as part of the LLC’s owner’s tax return (a “disregarded entity”).
Specifically, an LLC with only one member is treated as an entity disregarded as separate from its owner for income tax purposes. That means the LLC doesn't file its own income tax return; all income and expenses are reported on the owner's individual tax return.
I have not done anything yet but I would like to do the right thing and file. I got an automated call from IRS to file so I logged into the website and I have no clue what I should be doing.
For federal tax purposes, you will report all business income and expenses on Schedule C. Net income (after deductions) will be reported on Form 1040, line 12. If the business has net income over $400, it may be required to file Schedule SE, Self-Employment Tax; net income is likely self-employment income, and 15.3% self-employment tax would be required. Self-employment taxes from Schedule SE will go to Form 1040, line 56. Also, you will deduct half of self-employment taxes on line 27. Generally, that is all you need for income tax purposes. Please review all possible deductions - they might help to reduce your income tax liability.
You may either prepare your tax return yourself or use any tax preparation service. If you decide to do it yourself, you may either fill out the tax forms by hand or use tax preparation software. I provided references to the forms above - please take a look. You may print them from the IRS website.
I kept all my records of charging my services to my client. Can I claim the rent payment each month as my business? I use a room in my apartment as my office, I use my personal cell phone for my business.
Since I pay myself from my business account, do i still have to file a personal taxes?
You may deduct ONLY business-related expenses on Schedule C. No personal expenses can be deducted. Specifically, if you are using a cell phone for business, you need to determine the percentage of business use, and you may deduct the prorated part of your payments.
Ok.
You also may deduct the cost of your home office. But there are special requirements.
For a full explanation of tax deductions for your home office, refer to Publication 587, Business Use of Your Home. Generally, you need to determine the area of the room that is used for your business as a percentage of your total home. Then you will deduct that business portion of your rent and utilities.
Since I pay myself from my business account, do I still have to file personal taxes? Because the LLC is a disregarded entity for income tax purposes, it is ignored, and all income and deductions are reported as YOUR income and your deductions. There is no income tax return for the LLC - everything is reported on YOUR individual tax return. So you personally will pay income and self-employment taxes.
Ok so my first step is to fill out the Schedule C on the IRS website?
If you decide to prepare your tax return yourself - yes - Schedule C will be your starting form. You may print that form from here -
Just you assessing my questions, Should I hire someone to do it for me? Or is it possible for me to do it by myself.
That is your choice. If you do not feel comfortable preparing your tax return, you might be better off using a tax preparation service. But that is up to you.
You are not required to use the tax service - that is up to you.
Thank you so much for answering my questions. I just don't want to go to jail so I am going to print out the forms.
You should not worry about jail... that most likely is not an issue. But be sure you correctly calculate your taxable income - to minimize your obligations.
I am from a small country and doing things wrong will take you to jail so I am not sure about the US.:) When I got the call from IRS today that I need to login to the website it totally freaked me out.
I appreciate your help! | http://www.justanswer.com/tax/7ss96-started-llc-business-august-2012-employee.html | CC-MAIN-2015-35 | refinedweb | 803 | 72.87 |
This topic shows you how to authorize authenticated users for accessing data in Azure Mobile Services from a Windows Phone app. In this tutorial you add code to the data access methods in your controller.
The methods below should have the AuthorizeLevel attribute applied at the Authorizationlevel of User. This restricts table access to only authenticated users.
In Visual Studio 2013, open your mobile service project, expand the DataObjects folder, then open the TodoItem.cs project file.
The TodoItem class defines the data object, and you need to add a UserId property to use for filtering.
Add the following new UserId property to the TodoItem class:
public string UserId { get; set; }
When using the default database initializer, Entity Framework will drop and recreate the database whenever it detects a data model change in the Code First model definition. To make this data model change and maintain existing data in the database, you must use Code First Migrations. The default initializer cannot be used against a SQL Database in Azure. For more information, see How to Use Code First Migrations to Update the Data Model.
In Solution Explorer, expand the Controllers folder, open the TodoItemController.cs project file, and add the following using statement:
using Microsoft.WindowsAzure.Mobile.Service.Security;
The TodoItemController class implements data access for the TodoItem table.
Locate the PostTodoItem method and add the following code at the beginning of the method:
// Get the logged-in user.
var currentUser = User as ServiceUser;
// Set the user ID on the item.
item.UserId = currentUser.Id;
This code adds a UserId value to the item, which is the user ID of the authenticated user.
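The query side needs a matching change so that each user only gets back their own rows. A sketch, assuming the GetAllTodoItems() method generated by the standard TodoItemController template:

// GET tables/TodoItem
public IQueryable<TodoItem> GetAllTodoItems()
{
    // Get the logged-in user.
    var currentUser = User as ServiceUser;

    // Only return data rows that belong to the current user.
    return Query().Where(todoItem => todoItem.UserId == currentUser.Id);
}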
In Visual Studio 2013, run the client app again and log in. Verify that only the data you created is displayed.
(Optional) If you have additional login accounts, you can verify that users can only see their own data by closing the app (Alt+F4), running it again, and logging in with a different account.
.NET How-to Conceptual Reference: Learn more about how to use Mobile Services with .NET.
FormTemplates Schema Overview
Last modified: October 05, 2009
Applies to: SharePoint Foundation 2010
Available in SharePoint Online
This schema describes optional XML you can include in a content type as custom information. This XML node must be stored within an XMLDocument element in the content type definition. For more information, see Custom Information in Content Types.
This schema enables you to specify the form templates used to display an item's Display, New, and Edit pages in the Microsoft SharePoint Foundation user interface.
The schema has the following elements:
FormTemplates The root element. The FormTemplates element has the following attribute:
xmlns Required Text. Represents the XML namespace of the schema. The namespace for this schema is http://schemas.microsoft.com/sharepoint/v3/contenttype/forms.
Display Required Text. Specifies the name of the custom Display form template to use.
Edit Required Text. Specifies the name of the custom Edit form template to use.
New Required Text. Specifies the name of the custom New form template to use.
The form templates referenced here are .asmx controls that render the central section of a SharePoint Foundation Web page. That is, the form template renders everything except the SharePoint Foundation frame elements (what is usually termed the chrome) on the page. SharePoint Foundation renders the chrome for the page.
The form template names you specify must be names of rendering templates found within an .ascx file located on the front-end Web server at the following location:
%ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\CONTROLTEMPLATES
If you do not include this XML document in your content type definition XML, SharePoint Foundation uses the default values. In that case, SharePoint Foundation renders the forms automatically for you.
Following are the default contents of this XML document for the Document content type:
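(Reconstructed example; DocumentLibraryForm is the standard SharePoint Foundation rendering template for document libraries.)

<FormTemplates xmlns="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms">
  <Display>DocumentLibraryForm</Display>
  <Edit>DocumentLibraryForm</Edit>
  <New>DocumentLibraryForm</New>
</FormTemplates>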
Following are the default contents of this XML document for the Item content type: | https://msdn.microsoft.com/en-us/library/ms468901.aspx | CC-MAIN-2017-09 | refinedweb | 306 | 57.06 |
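(Also reconstructed; ListForm is the standard rendering template for list items.)

<FormTemplates xmlns="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms">
  <Display>ListForm</Display>
  <Edit>ListForm</Edit>
  <New>ListForm</New>
</FormTemplates>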
First things first, let's get our data section out of the way since that isn't very technical. Your data section should contain your prompt as well as your "In binary, the number you entered is: " strings.
org 100h

section .data
prompt1: db "Please enter a decimal number: $"
prompt2: db 0Dh, 0Ah, "In binary, the number you entered is: $"
prompt3: db 0Dh, 0Ah, "In hexadecimal, the number you entered is: $"
Now we need to do some actual work. We'll start by printing the prompt for the user to input their number. We will use int 21h function 9 in order to display the prompt:
section .text
start:
mov ah, 9        ;prints the prompt
mov dx, prompt1  ;tells what to print
int 21h
Now we need to get the user input. We're going to use a loop and read character by character until the character that the user has entered is the carriage return (Enter), which is ASCII code 13. You can find ASCII code values here. In order to read the character that the user is selecting, we'll use int 21h function 1 - the input character function. This next code section I'm going to type is fairly long, but I've commented every line so you can follow along easily. Comments in x86 assembly language start with ";"
;input base 10 value
mov bx, 0        ;bx holds input value
mov ah, 1        ;input char function
int 21h          ;read character into al
top1:            ;while (char != Carriage Return)
cmp al, 13       ;is char = Carriage Return?
je out1          ;if so, we're finished with input, jump to out
and ax, 000Fh    ;convert from ASCII to base 10 value
push ax          ;save it to stack
mov ax, 10       ;set up to multiply bx by 10
mul bx           ;dx:ax = bx*10
pop bx           ;saved value in bx
add bx, ax       ;bx = old bx*10 + new digit
mov ah, 1        ;input char function again for next digit
int 21h          ;read next char
jmp top1         ;loop until cmp al, 13 = true
out1:
mov ah, 9        ;print binary output label
mov dx, prompt2  ;this is specified in .data section
int 21h
Now that we have the number the user has selected, we need to convert it to binary. This is done by using a "for" loop and bit rotation. We will initialize our count register, cx, to 16 in order to loop through our "top2" loop 16 times. We will be using int 21h function 2 in order to print single characters. Again, I have commented line by line so you can understand what I am doing:
; for 16 times do this:
mov cx, 16       ;loop counter
top2:
rol bx, 1        ;rotate most significant bit into carry flag
jc one           ;does carry flag = 1?
mov dl, '0'      ;if not, set up to print a 0
jmp print        ;now print it
one:
mov dl, '1'      ;printing a 1 if 0 is not true
print:
mov ah, 2        ;print char fcn
int 21h          ;now print it
loop top2        ;loop until done
It's that easy to convert base 10 to binary, but let's kick it up a notch. You might think hexadecimal numbers would be trickier due to the fact that they are not just ones and zeroes. Now we're using numbers 0-9 as well as letters A-F in order to construct numbers. Sounds hairy, but it actually isn't all that bad. Again, we'll be using int 21h function 2 in order to print single characters, but this time instead of putting 16 in our count register, we're only putting 4! It's also helpful to know what a nybble is in this situation. I've added a link to the word so you can read up on them. Again, commented line by line for easy reading (we're almost done!!):
mov cx, 4        ;loop counter
top3:
rol bx, 4        ;rotate the most significant nybble into the low 4 bits
mov dl, bl       ;copy the low byte into dl
and dl, 0Fh      ;mask off everything but the nybble
cmp dl, 9        ;is it a digit or a letter?
ja AF            ;if above 9, use characters [A-F]
or dl, 30h       ;convert 0-9 to '0'-'9'
jmp print2       ;now print
AF:
add dl, 55       ;convert 10-15 to 'A'-'F'
print2:
mov ah, 2        ;print character function
int 21h          ;print it
loop top3        ;loop until done
And that's that. We've just converted Base-10 numbers to binary and hexadecimal. Now we just need to exit the program. This is pretty simple using int 21h function 04Ch -DOS function:
Exit:
mov ah, 04Ch     ;DOS function: Exit program
mov al, 0        ;Return exit code value
int 21h          ;Call DOS. Terminate Program
All of this can be compiled in NASM using command line: nasm -f bin filename.asm -o filename.com
And you can run it in DOSbox by calling commands:
mount a ~/Path/To/Directory
a:
filename.com
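If everything works, a session should look something like this (202 decimal is 11001010 in binary and CA in hex; we print 16 binary digits and 4 hex digits because of the loop counts):

Please enter a decimal number: 202
In binary, the number you entered is: 0000000011001010
In hexadecimal, the number you entered is: 00CA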
Hope this helped!
Full code here:
org 100h

section .data
prompt1: db "Please enter a decimal number: $"
prompt2: db 0Dh,0Ah, "In binary, the number you entered is: $"
prompt4: db 0Dh,0Ah, "In octal, the number you entered is: $"
prompt3: db 0Dh,0Ah, "In hexadecimal, the number you entered is: $"

section .text
start:
mov ah,9         ; print prompt
mov dx,prompt1
int 21h

; input base 10 value
mov bx,0         ; bx holds input value
mov ah,1         ; input char function
int 21h          ; read char into al
top1:            ; while (char != Carriage Return)
cmp al,13        ; is char = Carriage Return?
je out1          ; if so, we're done
and ax,000Fh     ; convert from ASCII to base 10 value
push ax          ; and save it on stack
mov ax,10        ; set up to multiply bx by 10
mul bx           ; dx:ax = bx*10
pop bx           ; saved value in bx
add bx,ax        ; bx = old bx*10 + new digit
mov ah,1         ; input char function
int 21h          ; read next character
jmp top1         ; loop until done

; now, output it in binary
out1:
mov ah,9         ; print binary out text
mov dx,prompt2
int 21h

; for i = 0, i < 16, i++:
mov cx, 16       ; loop counter
top2:
rol bx,1         ; rotate most significant bit into carry flag
jc one           ; carry flag = 1?
mov dl,'0'       ; if no, prepare to print 0
jmp print        ; now print
one:
mov dl,'1'       ; printing a 1
print:
mov ah,2         ; print char function
int 21h          ; print it
loop top2        ; loop until done

; output it again, only this time in hexadecimal
out3:
mov ah,9         ; print hex output label
mov dx,prompt3
int 21h

; for 4 times do this:
mov cx, 4        ; loop counter
top4:
rol bx,4         ; rotate most significant nybble into the low 4 bits
mov dl,bl        ; copy the low byte into dl
and dl,0Fh       ; mask off everything but the nybble
cmp dl,9         ; is it a digit or a letter?
ja AF            ; if above 9, use characters [A-F]
or dl,30h        ; convert 0-9 to '0'-'9'
jmp print2       ; now print
AF:
add dl,55        ; convert 10-15 to 'A'-'F'
print2:
mov ah,2         ; print char
int 21h
loop top4        ; loop until done

Exit:
mov ah,04Ch      ; DOS function: Exit program
mov al,0         ; Return exit code value
int 21h          ; Call DOS. Terminate program
Too many ideas and not enough doing.
That's my refrain, and I'm looking to break it. It's very easy for me to think of ideas which are fascinating, but then I often find something else to occupy me while I think about it. Laundry, dishes, breakfast, lunch, after-lunch snack, after-after lunch snack, jogging, GemCraft. Some days it seems the only time I get anything done is when I'm busy not getting something else done. Which might explain why I'm writing this.
I was struck by a video I saw the other day. It was ostensibly about how to do a video blog, but one of the points he included was an advisement about life in general: "When doing something, try. Because it might hurt less if you fail after putting in your full effort, but you're a lot more likely to fail if you don't." It's not like I haven't heard such a thing before. It's not so much a revelation as it is a call to focus. Part of the reason I don't get some things done is that I know they are difficult, and I want to think about them as much as possible beforehand, so they don't go wrong. And I think myself right into never getting done. A weird sort of being so cerebral as to not try at all.
Which leads into a bunch of stuff related to E2, go figure. You guys are probably aware I have a ton of random ideas relating to E2, and I try to only write the stuff in root logs which actually has been coded and is running. Still, there's dozens of ideas buzzing around, some of merit and others certainly not, and I thought I'd write down a smattering of them that have crossed my mind tonight:
That's a short list, and most of those aren't priorities, but all of them are worth having a look taken at them. However, outside of people on staff, E2 is really low on casual coder contributions. Traffic on edev has dropped to almost nothing. I'm sure part of this is because I have been ill-temperated on that list in the past, and for that I apologize.
Development is way better than it has been for years, though. We have multiple dev servers, so people can work on pet projects without disturbing each other. Almost all changes are now done using patches, so things are well-documented, and it's a lot easier to track down where new bugs come from. Plus, all changes go through the dev server first, so we almost never see site-breaking coding mistakes any longer. If something goes kaboom on the dev side, we can recreate a dev server from scratch in under five minutes. And I'm not sure when was the last time we had as many active people on staff who understood ecore so well.
So, if any of that stuff up there sounds interesting to somebody, toss me or Oolong a message to get in edev, and we'll get you rolling on the dev server. Contributors very much welcomed.
Most programmers are aware that Python and Ruby are very popular languages today. Ruby's popularity is driven a great deal by the popular Rails platform, which makes it possible for somebody who doesn't know much about writing code to still make an acceptable, database-driven website. And Python's popularity has gotten huge boosts from testimonials by geek luminaries such as Eric Raymond (seen here) and Randall Munroe (seen here).
Slightly related to the long list of ideas for E2 above, I caught a neat little presentation, about 20 minutes long, entitled Python Vs. Ruby: A Battle to the Death. It's more about the design philosophy of the two languages and how those affect how much you can bend the language syntax.
What I really came away with it was how difficult it must be to make Ruby's interpreter run quickly. One of thse neatest things in compilers I have seen in the last few years was HipHop, an invention by Facebook's engineers which will take PHP code and create C++ code which you can then compile, gaining significant performance improvements. Now, PHP is sort of hideous in one major fashion: It didn't have namespaces for most of its existence. So most PHP code puts everything in a global namespace. Symbols are everywhere, and it's hard to tell where they come from. Even Perl doesn't do that. But the PHP language syntax itself is still static enough that most code can still be compiled to C++ without preserving the PHP interpreter.
Ruby, with its ability to add entirely new syntax, not just at the start of interpretation, but dynamically changing what syntax means as a program runs, seems to make it impossible to abstract it into a lower level language for speed gains. Python, on the other hand, already has PyPy and Shed Skin which will compile most Python down to C or C++. Since things compiled to C/C++ can be exported as symbols and called easily from Perl, this provides a pre-existing route to potentially add Python htmlcodes with very good performance and little original code necessary.
That's not to say Python, if its full featureset is utilized, can't be confusing or prevent this sort of optimization. I do recall an amusing hack, a few April 1sts back, when Python was extended to add the "comefrom" directive. But the way that worked was by preparsing the relevant Python files and catching the exceptions thrown by the Python interpreter when it found this new (illegal) keyword. Generally speaking, Python is static enough that it can be compiled down, removing the need to lug around the Python interpreter to run Python code.
Drifting away from the E2 implication for a bit, and back to the talk, the thesis seemed to be "Ruby lets you create domain specific languages which can allow more readable code than Python ever will". Which is a funny, since one of the things people tout about Python is that it almost reads like English, whereas Ruby often looks similar to Perl with puncutation all over the place. So there Ruby is: extending itself to overcome its own weaknesses. Which I have to admit is so meta that it is very cool.
Ready. Set. Go.
Oh shit, never mind, I already hit start on the stopwatch.
You see, today I am timing how long it takes for me to bust out that thousand when I do it in one sitting. I might be able to start doing a little typing in the morning when I am fresh if I can do it in a manageable time. You see, I was able to do five-hundred in about twenty to forty. All depending on my mental state, or course. The more talkative I was the quicker and more fluid my results. The more exhausted and fatigued the old brain, the longer and discombobulated my words became, hence the new test and procedure, only if today’s experiment is a favorable one.
Sad part is, I just ran out of planned material and now I must wing it, everything is new and unplanned like a chess game diverted until the just after the beginning twelve. Some of the more inept might want an example; G4 (I forgot how to the write the opening move for the first pawn move, should I look it up, or will someone oblige, remember folks I am on a timer this time around, kindness would be appreciated.) G5 (Yeah, I am bum rushing you with end pawn what you gon’na do about it, take over the center, HA.)
Sorry, I might be taking that analogy too far, it is one of those habitual habits I have, I dive deep everythyme. I am not really one of those deep free divers though, those guys are really cool. I am, sadly, not that cool. With the pressure of water on my chest I doubt I can hold my breath for the two minutes that I can do on land, but they say it is all a mental thing, so most likely I can but I don’t think I have the chest strength.
That knocking sound is me hitting the bottom of the barrel. I wouldn't expect anything too lucid right now, it will be very vapid and other words like that.
Should I try is the question or should I give up and die. Now look me in the square in the eye and tell me that should not try, try, try.
I am feeling a little cooped up here, even my balls are starting to sweat and get itchy. When I say ball I mean scrotum and testicle, but mainly scrotum.
(Quick, think of something to make this not sound so bad.)
Umm? Dude, I can’t, that is pretty damn precise. There is no putting a positive spin on that shit. You got to take it or leave it, and I choose leave it. Word count, all I am focused on is word count. Consider this like a focal seizure (E before I when before Z, yeah, that one helps clear up some of that other mess I established and vacated.) but I have fine motor movement with coordinated thought. Yeah, that is what is going on, I am having a focal seizure.
Time out text message.
So, I just lost that time session, damnit, but I was slamming those keys, as if I was, now deceased, pro wrestler Randy “the macho man” Savage. So, I am just going to call it twenty, and move on in this mini quest.
Now, I need to get back in the zone and get this half done in the same amount of time, but it feels like lap two of the obstacle course and I have to reclaim that wall as mine.
Fuck, I woke up the roomie and now his television is blaring to combat my massive air strike on these 26+ key locations mapped out on this board in front of me.
Bam. Bam. BAM.
Take that you mother truckers, leaving your kids at home watching two cartoon cavedude’s from Bedrock pushing Winston’s cigarettes.
Sorry I was distracted, by the primal need… to hunt… find and seduce a mate. I keep telling myself, women are not meat, but it is so hard. I just want to gobble them all up, devouring and savoring every one of them. However, I am not a pig I have never over-indulged. I think. What are the standards? How can I say this, without giving too much away? I have never slept with more woman than my current age, I can remember their names and it takes up the entire allotment of a man with six digits on each hand, but I am not going to take the time to make sure I am correct. I guess I could it wouldn’t take long.
Okay so, there is one, then two, umm three, oh no she was number four and she was three, five and six. Seven or eight, doesn’t matter the order, nine(!) (This girl was a breeder), ten. There you have it, I have a person to remember for each one of my fingers, but I am still young. Who knows where tomorrow will take me.
I am feeling the strain, just a hundred and fifty-eight more words, and ten minutes until then end of one hour.
Can he do it?
Just one good Idea, and I am home free. Ollie-Ollie-oxen-free, no home base is that other tree.
No way.
Yes way.
You are a liar, and I do not play with liars, I stick them in dyers, then hang them from tell a phone wires pants a blaze.
Yes, I would like to blaze one.
Then hurry up son.
I run and I run right through more or less the biggest slump, just like my man, here, Forrest Gump.
I can never really remember what happens after the old lady tells him to take the #7(?) bus. Do they get back together, or is he visiting her grave, I just do not remember.
All the iconic moments are there though, “Run Forrest, run.” “Stupid is as stupid does.” And, “Life’s like a box of chocolates.”
There are more, but those are the top three most repeated.
Reactive OpenDDS, Part II: Operators
by Charles Calkins, Principal Software Engineer
January 2015
Introduction
Reactive programming is a "programming paradigm oriented around data flows and the propagation of change." Relationships can be established between producers (observables), consumers (observers), and transformers of data (operators), with updates occurring asynchronously, rather than by request. This aids in application scalability and better utilization of hardware resources, as threads are not blocked waiting on potentially long-running operations. In Part I of this article, a method was shown to convert samples as obtained by OpenDDS into observable sequences, as well as a method to simulate these sequences for test purposes. This article describes the operators that can be applied to observable sequences to filter, transform and otherwise manipulate them. We shall use the Reactive Extensions for .NET (Rx.NET), for our examples.
Code in this article is provided in the associated code archive.
Projects use MPC v4.0.54 for project generation, were built with Visual Studio 2013, and use version 4.5 of the .NET Framework.
The version of Rx.NET (Rx-Main and Rx-Testing packages via NuGet) used in this article is 2.2.5.
Operators
The list of operators that operate on observables is considerable, and although a core set is available in all reactive frameworks, individual frameworks may also add operators of their own. For example, RxJava provides a
parallel() operator which doesn't have a clear analogue in Rx.NET or RxCpp.
Also, the naming of a given operator can differ from framework to framework. In Part I, we had seen
Select() and
map()— both apply a function to elements of an observable sequence, and, in our case, can be used to convert the sequence type into a different one. We have also seen the
Take() operator to limit the sequence to a given number of elements. We can use the Visual Studio Unit Testing Framework as an easy way to demonstrate the behavior of a number of other operators.
In the testing framework, individual tests are implemented as public class methods that return
void, have no parameters, and are marked with the
[TestMethod] attribute. The public class that they are contained within is marked with the
[TestClass] attribute. Methods marked with additional attributes are used to initialize tests and clean up after them. For example, a method marked with the
[TestInitialize] attribute is executed before each test method. By compiling a project containing these attributes in Visual Studio, Visual Studio will identify these methods as tests and allow them to be run from the Test Explorer pane in Visual Studio, or by
MSTest.exe from the command line.
A scheduler is a mechanism, part of the reactive framework, that controls when subscriptions start, notifications are published, and provides a notion of time. Although the default is a real time scheduler, unit tests for observables can use a virtual time-based scheduler,
TestScheduler, as introduced in Part I of this article. The method
CreateColdObservable() creates a cold observable (an observable that publishes only once an observer has subscribed, in contrast to hot observables which publish regardless) by specifying the individual notifications produced by the observable:
OnNext() to produce a value, and
OnCompleted() to indicate that the observable will no longer publish values. The first parameter to both notification methods is the time, in ticks (one ten-millionth of a second) to indicate when the notification is produced.
TestScheduler uses these virtual times, rather than wall-clock time, to sequence the output of observables so tests run at expected unit test speeds.
For the examples below, we shall define three cold observables,
xs,
ys and
zs, that are recreated before each test, as follows. These particular observables demonstrate sequences with notifications at irregular intervals as well as duplicated values.
// ReactiveCS\ReactiveCS.cs
[TestClass]
public class Tests : ReactiveTest
{
    TestScheduler scheduler;
    ITestableObservable<int> xs, ys, zs;

    [TestInitialize]
    public void TestInit()
    {
        scheduler = new TestScheduler();
        xs = scheduler.CreateColdObservable(
            OnNext(10, 1),
            OnNext(20, 2),
            OnNext(40, 3),
            OnNext(41, 3),
            OnNext(60, 4),
            OnCompleted<int>(70)
        );
        ys = scheduler.CreateColdObservable(
            OnNext(5, 10),
            OnNext(15, 20),
            OnNext(45, 30),
            OnCompleted<int>(50)
        );
        zs = scheduler.CreateColdObservable(
            OnNext(10, 1),
            OnNext(20, 2),
            OnNext(30, 3),
            OnNext(40, 3),
            OnNext(50, 1),
            OnNext(60, 2),
            OnNext(70, 3),
            OnCompleted<int>(80)
        );
    }
For each of the operators described in this article, a demonstration of their output when applied to
xs,
ys and/or
zs, as appropriate, will be presented, as well as a suggestion as how the operator would be useful to apply to an OpenDDS observable.
Amb()
The
Amb() operator, given multiple observables, chooses the observable that is first to produce any items. From then on,
Amb() selects items from that observable. One use of this operator is to select the quickest responding from a set of redundant observables. While the original inspiration for
Amb(), John McCarthy's Ambiguous operator, would arbitrarily choose one of the provided values (or even roll back computation to select an alternative value if the first led to an error), the
Amb() operator always selects the first-responding observable.
OpenDDS: Consider multiple stock ticker feeds, or redundant sensors. The
Amb() operator can be used to obtain the quickest-responding feed for increased application response. While similar to the OWNERSHIP DDS quality of service policy,
Amb(), once an observable is selected, will use that observable from then on. In contrast, in DDS, the OWNERSHIP_STRENGTH can change dynamically, potentially leading to samples from a different data writer than initially used to be selected.
Unit tests for observables based on the
TestScheduler can look like the following. First, create an observer
results which is of the type produced by the sequence (
int, in our case). Next, apply the operator, and subscribe
results to the obervable that was created by the application of the operator.
Start() the scheduler, and when it completes,
results.Messages contains the sequence that was produced by the observable that was subscribed to. By using
AssertEqual(), the generated sequence can be compared against an expected sequence to determine the pass/fail criteria of the test. In the case above,
xs.Amb(ys) selects the sequence produced by
ys because the first sample of
ys is at time 5, which is earlier than the first sample of
xs at time 10.
// ReactiveCS\ReactiveCS.cs
[TestMethod]
public void Amb()
{
    var results = scheduler.CreateObserver<int>();

    xs.Amb(ys)
      .Subscribe(results);

    scheduler.Start();

    results.Messages.AssertEqual(
        OnNext(5, 10),
        OnNext(15, 20),
        OnNext(45, 30),
        OnCompleted<int>(50));
}
As a side note,
AssertEqual() compares elements in the sequence using the default comparator of the underlying element — in this case, the equality operator of
int. If the sequence produces a structure, it may be necessary to implement a custom equality comparison. In particular, if the sequence element contains floating point values,
AssertEqual() will fail unless a custom equality comparison is implemented. One such comparison is described here which handles special cases for infinity and near-zero values, but otherwise compares against a supplied tolerance.
- public bool NearlyEqual(double a, double b, double epsilon)
- {
- double absA = Math.Abs(a);
- double absB = Math.Abs(b);
- double diff = Math.Abs(a - b);
-
- if (a == b)
- {
- // shortcut, handles infinities
- return true;
- }
- else if (a == 0 || b == 0 || diff < Double.MinValue)
- {
- // a or b is zero or both are extremely close to it
- // relative error is less meaningful here
- return diff < (epsilon * Double.MinValue);
- }
- else
- {
- // use relative error
- return diff / (absA + absB) < epsilon;
- }
- }
Merge()
The
Merge() operator merges the sequences from multiple observers into a single stream, ordered by time. That is,
xs.Merge(ys) produces
OnNext(5, 10),
OnNext(10, 1),
OnNext(15, 20),
OnNext(20, 2),
OnNext(40, 3),
OnNext(41, 3),
OnNext(45, 30),
OnNext(60, 4), and
OnCompleted(70). The sequence terminates when the last of the merged sequences terminates. Here,
xs completes at 70 and
ys at 50, so the sequence produced by
Merge() completes at 70.
OpenDDS: Suppose sensors that publish on different topics have similar data — say topics "Temperature1" through "Temperature20" — which are to be processed at one time, as they are all of the same data type. The
Merge() operator can combine the data streams into a single stream for analysis.
Where()
The
Where() operator, also called
filter() in some of the reactive frameworks for other languages, takes a function that accepts an element of the sequence type as a parameter, and returns true if it should be present in the resulting sequence, or false if it should not be. For instance,
xs.Where(x => x > 2) yields
OnNext(40, 3),
OnNext(41, 3),
OnNext(60, 4), and
OnCompleted(70), as these are the only values produced by
xs greater than two. The sequence completes at the same moment that the filtered sequence completes.
OpenDDS: OpenDDS provides the PARTITION and content filtering methods to filter which data samples are received, and
Where() provides similar behavior. If the same OpenDDS data stream, though, is to be filtered in multiple ways, it may be more efficient to receive a single stream from OpenDDS, subscribe to the OpenDDS observer multiple times and filter each subscription differently, rather than having multiple OpenDDS subscribers with differing PARTITION and content filtering expressions. Then again, OpenDDS quality of service and content filtering can be applied at the publisher, so while subscriber development complexity may be reduced, network performance will still be impacted.
Distinct()
The
Distinct() operator ensures that no duplicated values are produced by the sequence.
zs.Distinct() yields
OnNext(10, 1),
OnNext(20, 2),
OnNext(30, 3), and
OnCompleted(80), as the values of 1, 2 and 3 are emitted only the first time they are seen. The emitted sequence terminates when the original sequence does, at time 80.
OpenDDS: If an OpenDDS sample stream should be, say, monotonically increasing but is subject to jitter, the
Distinct() operator can ensure that only unique values are processed.
DistinctUntilChanged()
The
DistinctUntilChanged() operator differs from
Distinct() in that it only drops duplicated values if they appear next to each other in the original sequence. That is,
zs.DistinctUntilChanged() yields
OnNext(10, 1),
OnNext(20, 2),
OnNext(30, 3),
OnNext(50, 1),
OnNext(60, 2),
OnNext(70, 3), and
OnCompleted(80), and only the value 3 produced at time 40 is dropped because the previous sample at time 30 was also 3.
OpenDDS: Since
DistinctUntilChanged() will only remove duplicates if they arrive consecutively, it allows data samples to be processed only if they have changed. For example, a sequence of stock values may only need attention if the price has moved, but can otherwise be ignored if the value remains stable.
Concat()
The
Concat() operator concatenates sequences together — the second sequence begins when the first completes.
xs.Concat(ys) yields
OnNext(10, 1),
OnNext(20, 2),
OnNext(40, 3),
OnNext(41, 3),
OnNext(60, 4),
OnNext(75, 10),
OnNext(85, 20),
OnNext(115, 30), and
OnCompleted(120), where the elements from
ys immediately follow those of
xs. The sequence produced by
xs ends at time 70, and the first element of
ys is produced at time 5, so the result of the concatenation has the first element of
ys emitted at time 70+5 = 75.
OpenDDS: Suppose two OpenDDS data streams exist that represent work items to service, where one stream is high priority and the other low priority. Suppose also that work items must be serviced at the relative time intervals when they arrive — say if a robot on a factory floor needs time to move its manipulator arm to a starting position, the assembly of the part that the work item represents cannot begin until the arm is ready. By concatenating the low priority work observable on to the end of the high priority work observable, it will be ensured that all high priority work is completed first, but all work, regardless of when it arrives, is still executed with the appropriate time intervals between them.
Zip()
The
Zip() operator combines two sequences into one, using a supplied function, and the number of elements produced by the combination is equal to the shorter of the sequences being combined — elements are taken pairwise, so both original sequences must have an element available to combine into one that can be emitted in the resulting sequence. The observer created by
xs.Zip(ys, (x, y) => x + y) yields
OnNext(10, 11),
OnNext(20, 22),
OnNext(45, 33), and
OnCompleted(60), pairing the 1, 2 and 3 of
xs with the 10, 20, and 30 of
ys. The times of each element emitted is the time at which an element from each of the zipped sequences was able to be used. That is, while an element of
ys is available at time 5, it isn't until time 10 that an element of
xs is available to pair with it, so the time of the result of the zip of the two is time 10. The completion time of the emitted sequence is documented to be the end of the shorter sequence, but is not the case in the version of Rx.NET used in this article. In this example, it is the time of the sample that does not have a match in the paired sequence.
OpenDDS: Consider a calculation that can only be performed when a data value arrives from each of three different OpenDDS topics. By using the
Zip() operator, the resulting observable wouldn't contain an item to process unless values from all three topics had already arrived. The
Zip() operator is related to OpenDDS's implementation of the MultiTopic content subscription feature, as it can be used to unify samples produced by disparate observables.
Sample()
The
Sample() operator, given a sampling interval, returns the most recent data sample received within that interval. In order for the sampling interval to apply to the virtual time scheduler, unlike the time-independent operators above, the scheduler must be specified as an argument to
Sample(). If the scheduler is allowed to be its default value, the sampling interval would be interpreted as real time, producing incorrect test results.
Sampling
xs every 25 ticks can be done by:
- // ReactiveCS\ReactiveCS.cs
- var results = scheduler.CreateObserver<int>();
- xs.Sample(TimeSpan.FromTicks(25), scheduler)
- .Subscribe(results);
The sequence produced is
OnNext(25, 2),
OnNext(50, 3),
OnNext(75, 4),
OnCompleted(75). That is, one notification generated at the end of each sample interval, containing the most recent value of the observable. For example, at the 25 tick mark, the most recent sequence value from
xs has the value 2, produced at time 20. The sequence completes at the end of the last sample interval, not at the point at which the sampled sequence completes.
OpenDDS: The
Sample() operator can be used to reduce the data rate of an OpenDDS data stream. For instance, a data sample stream containing time updates may be arriving much more quickly than a clock that needs updates only once per second. The DDS TIME_BASED_FILTER quality of service policy behaves in a similar way as does the
Sample() operator, although the sample yielded by TIME_BASED_FILTER will be the first in the sampling interval window, while the sample yielded by the
Sample() operator will be the last sample in the window.
Throttle()
From its name, one may think that the
Throttle() operator reduces the sequence rate below a threshold. Instead, it allows elements to be produced only if a throttling interval has passed without any elements being generated. An example used in this presentation uses
Throttle() to limit requests made to a web service that returns words that complete the text that the user is typing. Rather than querying the web service on each character typed, the web service is queried only when the user has stopped typing for a period of time.
As it is also based on a time interval, the scheduler must be supplied. The observable created by
xs.Throttle(TimeSpan.FromTicks(15), scheduler) produces the sequence
OnNext(35, 2),
OnNext(56, 3),
OnNext(70, 4), and
OnCompleted(70). The sample times are explained as follows. The first sample of
xs is at time 10, but as that is within the 15 tick interval (starting from 0), it is skipped. The next value of 2 is produced at time 20, but as the value following it, 3, isn't produced until time 40, the 20 tick gap between 2 and 3 is greater than the throttle interval, so the value of 2 produced at time 20 is emitted from the throttled sequence at time 35 (20 plus the throttle interval). Similarly, the first 3 from
xs is dropped, but the second 3 is emitted, as the interval between the second 3 and 4, times of 41 and 60 respectively, is greater than the 15 tick throttle interval. There are no samples following 4, so it can be safely emitted, and the emitted sequence terminates at the same moment that the throttled sequence does.
OpenDDS: Suppose an OpenDDS data stream normally produces values continually, but occasionally stops and restarts, perhaps due to a mechanical fault. The
Throttle() operator can be used to signal the application that a restart has occurred and that the device producing the data stream requires maintenance.
GroupBy()
Unlike the previous operators which produce an observable sequence of elements, the
GroupBy() operator yields an observable sequence of observable sequences. The orignal sequence that
GroupBy() is applied to is divided into separate observable sequences based on a supplied function. As these new observables are created, they are produced as notifications in the sequence returned by
GroupBy(), and, as they appear, they can be treated as any other observable — operators may be applied to them, or they may be subscribed to.
Testing the result of
GroupBy is a bit convoluted, but is described here. As new grouped observables are produced, they are added to a list for later examination. As an example, consider the division of the elements of an observable into two groups, one group containing values less than or equal to 2, and the other group containing values 2 or greater. We can set up the test as follows:
- // ReactiveCS\ReactiveCS.cs
- [TestMethod]
- public void GroupBy()
- {
- // as each group appears, add it to the groups list
- var groups =
- new List<Tuple<long, bool, ITestableObserver<int>>>();
-
- xs.GroupBy(x => x > 2)
- .Subscribe(g =>
- {
- var observer = scheduler.CreateObserver<int>();
- g.Subscribe(observer);
- groups.Add(Tuple.Create(
- scheduler.Clock, g.Key, observer));
- });
-
- scheduler.Start();
The variable
groups contains a list of tuples, where a tuple stores three pieces of information: the time at which the group was created, the group key (here, just true or false — the return value of the grouping function — to identify the two groups), and an observer of the group represented by the tuple. That is, each time a new group is created by the grouping operation, a new observer is built, it subscribes to the new group, and an item is added to the
groups list. We then start the scheduler as before.
Next, we create a helper function that validates the contents of a tuple in the
groups list — it compares the values in the tuple to ones supplied as arguments to it.
- var assertGroup = new Action<int, long, bool,
- Recorded<Notification<int>>[]>(
- (index, clock, key, messages) =>
- {
- var g = groups[index];
- Assert.AreEqual(clock, g.Item1);
- Assert.AreEqual(key, g.Item2);
- g.Item3.Messages.AssertEqual(messages);
- });
To test that
GroupBy() operated as expected, we first confirm that two groups were created. Only two groups should exist as the grouping function can only return true or false.
- Assert.AreEqual(2, groups.Count);
Next, we check the first group (group 0). The first element in
xs has the value 1 produced at time 10. It is not greater than 2, so the value 1 is added to the "false" group. As the "false" group doesn't yet exist, it is created at time 10. The only other element in the sequence that fails the "greater than 2" test is 2 itself at time 20, so the "false" group should only contain two elements, and terminate when the grouped sequence terminates.
- // at time 10, the "false" group appears
- assertGroup(0, 10, false, new[] {
- OnNext(10, 1),
- OnNext(20, 2),
- OnCompleted<int>(70)
- });
We then check the second group (group 1). The "true" group is created when the first element that is greater than 2 is seen in the sequence (time 40), contains all elements greater than 2 from the original sequence, and also completes at the same time that the original sequence does.
- // at time 40 the "true" group appears
- assertGroup(1, 40, true, new[] {
- OnNext(40, 3),
- OnNext(41, 3),
- OnNext(60, 4),
- OnCompleted<int>(70)
- });
- }
OpenDDS: Grouping can be used to not only arrange a sequence of OpenDDS data samples into groups by the data values themselves, but can also be used to group based on other OpenDDS properties, such as instance or sample state, transforming a topic-based observable sequence into an instance-based one.
Window()
The
Window() operator breaks a sequence into time slices, creating a new observable (as
GroupBy() did) for each time slice (window). Splitting
xs into 25 tick windows is done by
xs.Window(TimeSpan.FromTicks(25), scheduler), and three windows are created. The first window, starting at time 0, contains
OnNext(10, 1),
OnNext(20, 2), and
OnCompleted(25). That is, only two samples from
xs are produced within the first 25 ticks, and the window closes at tick 25. The second window ranges from ticks 25 to 50, and contains
OnNext(40, 3),
OnNext(41, 3), and
OnCompleted(50), and the third window starts at tick 50 and ends when the original sequence ends at tick 70. It contains
OnNext(60, 4), and
OnCompleted(70).
OpenDDS: As with
Sample(),
Window() can be used as another form of rate limiting. Suppose a process can only operate at the rate of 10 samples a second — dividing the incoming data into one-second windows and processing only the first, at most, 10 samples that arrive in each window (by using the
Take() operator on each window), will ensure that the process is never overloaded if the OpenDDS data sample rate increases.
Buffer()
The
Buffer() operator is similar to the
Window() operator in that it divides the buffered sequence into time slices, but the slices themselves are different. Slices can be created either by a time interval (as with
Window()) or by a count of elements, and a buffer is a single instant in time. The creation time, completion time, and element times of the buffer are the same.
A 25 tick-sized buffer can be created with
xs.Buffer(TimeSpan.FromTicks(25), scheduler), and, as with
Window(), three buffers are created. The first, at time 25 (the end of the interval, unlike
Window() which created the window at the start of the interval), contains
OnNext(25, 1),
OnNext(25, 2), and
OnCompleted(25). It contains the same values as does the first window, but both values, and the completion, are at the buffer creation time — tick 25. The second buffer, at time 50, contains
OnNext(50, 3),
OnNext(50, 3), and
OnCompleted(50). Again, the same values as in the second window, but all at the start/end time of the buffer. Lastly, in the same pattern, the third buffer contains
OnNext(70, 4), and
OnCompleted(70).
A 3 count-sized buffer can be created with
xs.Buffer(3). Here, a buffer is built every time three elements have arrived, or the original sequence has completed. So, for
xs, two buffers are created. The first one contains
OnNext(40, 1),
OnNext(40, 2),
OnNext(40, 3), and
OnCompleted(40), and is at time 40 because that was the time at which the third element arrived. The second buffer contains
OnNext(70, 3),
OnNext(70, 4), and
OnCompleted(70), and is at time 70 because the sequence completed before a third element was received.
OpenDDS: A use of the
Buffer() operator is for data smoothing. Consider a noisy analog-to-digital converter that is sending its raw readings over OpenDDS. The application could use the
Buffer() operator to produce a buffer after, say, 7 samples have arrived. Out of the 7 samples in the buffer, the highest and lowest sample could be discarded, the remaining 5 averaged, and then the result used for calculation.
GroupJoin()
The
GroupJoin() is an operator used for combining two observables. A window of time is provided for each observable, and, for elements from each observable that fall into the overlap of the windows, a joining function is applied, producing a new element that is a combination of the matching ones.
A demonstration of
GroupJoin() is shown below, as provided in the collection of 101 Rx Samples here. The code is identical to the Rx sample, save a small modification to convert it into a unit test.
Previously, we created observables based on an adapted event, a subject, or a time-based generated sequence. Another way to create an observable is to convert a standard data structure, such as a list, into an observable, via a call to
ToObservable().
- // ReactiveCS\ReactiveCS.cs
- [TestMethod]
- public void GroupJoin()
- {
- var leftList = new List<string[]>();
- leftList.Add(
- new string[] { "2013-01-01 02:00:00", "Batch1" });
- leftList.Add(
- new string[] { "2013-01-01 03:00:00", "Batch2" });
- leftList.Add(
- new string[] { "2013-01-01 04:00:00", "Batch3" });
-
- var rightList = new List<string[]>();
- rightList.Add(
- new string[] { "2013-01-01 01:00:00", "Production=2" });
- rightList.Add(
- new string[] { "2013-01-01 02:00:00", "Production=0" });
- rightList.Add(
- new string[] { "2013-01-01 03:00:00", "Production=3" });
-
- var l = leftList.ToObservable();
- var r = rightList.ToObservable();
A number of predefined special observables are also available. In this example, the windows for each observable are defined by
Observable.Never<>, a special observable that never produces any elements, but also never terminates. The joining function pairs an element from the left observable with an observable that generates elements that fall into the window of time of the right observable.
- var q = l.GroupJoin(r,
- // windows from each left event going on forever
- _ => Observable.Never<Unit>(),
- // windows from each right event going on forever
- _ => Observable.Never<Unit>(),
- // create tuple of left event with observable of right events
- (left, obsOfRight) => Tuple.Create(left, obsOfRight));
The right observable can then be subscribed to, and for each element pushed to it, a comparison is made against a given element from the left observable. If the element matches specified criteria (here, a time index), then a message is generated.
- var messages = new List<string>();
-
- // e is a tuple with two items, left and obsOfRight
- using (q.Subscribe(e =>
- {
- var xs = e.Item2;
- xs.Where(
- // filter only when datetime matches
- x => x[0] == e.Item1[0])
- .Subscribe(v =>
- {
- messages.Add(string.Format(
- string.Format(
- "{0},{1} and {2},{3} occur at the same time",
- e.Item1[0],
- e.Item1[1],
- v[0],
- v[1]
- )));
- });
- }))
- {
- Assert.AreEqual(2, messages.Count);
- Assert.AreEqual(
- "2013-01-01 02:00:00,Batch1 and " +
- "2013-01-01 02:00:00,Production=0 " +
- "occur at the same time", messages[0]);
- Assert.AreEqual(
- "2013-01-01 03:00:00,Batch2 and " +
- "2013-01-01 03:00:00,Production=3 " +
- "occur at the same time", messages[1]);
- }
- }
OpenDDS: Similar to OpenDDS's implementation of MultiTopic,
GroupJoin() can be used to combine multiple OpenDDS data streams based on common characteristics, such as the data sample reading time, sensor ID, or other criteria.
Count(), Sum(), Min(), Max(), Average() and Aggregate()
A number of operators are availble that provide for collecting basic statistics from an observable. When an observable completes, a single value is pushed from each of these operators, yelding the statistical result across all values produced by the observable. As such, these operators are not suitable for observers that produce infinite sequences, as since the observable never terminates, no values will ever be emitted by these operators.
The
Count(),
Sum(),
Min(),
Max(), and
Average() operators do as their names suggest. The
Aggregate() operator provides a means to accumulate values into a single result, and is supplied with an accumulator function and an optional seed value as parameters. We use
ToObservable() as before to convert a list into an observable sequence in order to demonstrate these operators.
The test below also shows that a single observable can be subscribed to multiple times. This can be advantageous when used with OpenDDS as, rather than creating multiple subscribers for a given topic that filter the data in various ways, a single subscriber can be used and the resulting observable can be subscribed to as many times as needed to create modified observables for use by the application.
- // ReactiveCS\ReactiveCS.cs
- [TestMethod]
- public void CountSumMinMaxAverageAggregate()
- {
- var o = new List<int>() { 3, 10, 8 }.ToObservable();
- o.Count().Subscribe(e => Assert.AreEqual(3, e));
- o.Sum().Subscribe(e => Assert.AreEqual(21, e));
- o.Min().Subscribe(e => Assert.AreEqual(3, e));
- o.Max().Subscribe(e => Assert.AreEqual(10, e));
- o.Average().Subscribe(e => Assert.AreEqual(7, e));
- o.Aggregate(6, (acc, i) => acc + i*2)
- .Subscribe(e => Assert.AreEqual(48, e));
- }
OpenDDS: Provided that an OpenDDS observable terminates, these operators can be used to obtain statistical information from OpenDDS sample data.
Scan(), Skip() and Timestamp()
While
Aggregate() applies an accumulator function over the elements of a sequence and generates a single value when the sequence completes, the
Scan() operator does the same, but produces a value corresponding to each value of the observed sequence.
Scott Weinstein uses the
Scan() operator to create a new operator,
ToCommonAggregates(), to collect statistics on an observable that are pushed out as each element in the source observable arrives. As such, unlike
Count() and the other standard operators,
ToCommonAggregates() is suitable to be used with infinite sequences.
Weinstein's code is shown here. As we did above, Weinstein created a class which will be used as the element type of a new observable that will be created.
- // ReactiveCS\ReactiveCS.cs
- public class StatInfoItem<T>
- {
- public T Item { get; set; }
- public double Sum { get; set; }
- public int Count { get; set; }
- public double Mean { get; set; }
- public double M2 { get; set; }
- public double StdDev { get; set; }
- public double Min { get; set; }
- public double Max { get; set; }
-
- public override string ToString()
- {
- return "[" + Item + ": Sum=" + Sum + ", Count=" + Count +
- ", Mean=" + Mean + ", StdDev=" + StdDev + ", Min=" + Min +
- ", Max=" + Max + "]";
- }
- }
The .NET Framework allows methods to appear to be added to an existing class, without actually modifying that class. As these methods extend an existing class, they are called extension methods. To create an extension method, first create a static class to hold them. In it, create public static methods that take
this T as the first parameter, where
T is the type that is being extended. As shown in this James Michael Hare blog post, class
int can be extended to have a
Half() method by code like:
- static class Extensions
- {
- public static int Half(this int source) {
- return source / 2;
- }
- }
which now allows the syntax
4.Half() to be legal, and return the value 2. Weinstein creates an extension method that extends
IObservable<T>, the interface of observables in Rx.NET. Two functions must be provided as parameters to
ToCommonAggregates(). The first function identifies what component of the observable should be used for statistical analysis (such as a sensor data reading). The second function provides a name to associate with the accumulated statistics (such as a sensor ID).
- // ReactiveCS\ReactiveCS.cs
- static class Extensions
- {
- public static IObservable<StatInfoItem<T>>
- ToCommonAggregates<T, TSrc>(
- this IObservable<TSrc> source,
- Func<TSrc, double> dataSelector,
- Func<TSrc, T> itemSelector)
- {
ToCommonAggregates() uses the
Scan() operator to accumulate its values. The
Scan() operator is provided a starting value and a function to accumulate values, and each time a value is added to the accumulator, it is also pushed as a sample to subscribers. Here, the accumulator calculates various statistical values, and each time a new element appears in the sequence to which
ToCommonAggregates() is applied, a statistical sample is generated. The
Skip() operator is also used to drop the initial sample, as it is a seed value that is not fully initialized.
-),
- };
- })
- // need a seed, but don't want to include seed value
- // in the output
- .Skip(1);
- }
- }
We can test it as follows. We create an observable by transforming
ys. The observable
ys is a sequence of integers. We apply the
Timestamp() operator which associates a timestamp with each element in the sequence, resulting in a type of
Timestamped. The
Timestamped type is a structure with two fields,
Timestamp, giving the time of the sample, and
Value, which stores the value that is timestamped — in our case, the original
int. We now transform that using the
Select() operator to generate a
SensorEventArgs type from the timestamp and value. So, at this point, we've now created another simulated OpenDDS sample. We then apply the
ToCommonAggregates() operator, identifying the OpenDDS data reading as the data value on which to perform statistical analysis, and the name of the sensor to label the data aggregate.
- // ReactiveCS\ReactiveCS.cs
- [TestMethod]
- public void ToCommonAggregates()
- {
- var results =
- scheduler.CreateObserver<StatInfoItem<string>>();
- var obs =
- // start with IObservable<int>
- ys
- // change to IObservable<Timestamped<int>>
- .Timestamp(scheduler)
- // change to IObservable<SensorEventArgs>
- .Select(i => new SensorEventArgs("Temp7",
- i.Timestamp.DateTime, i.Value))
- // change to IObservable<StatInfoItem<string>>
- .ToCommonAggregates(i => i.Reading, i => i.SensorID);
The test looks as before — subscribe, run the scheduler, and examine the results. The observable contains three data samples, causing three statistical aggregates to be generated. The first aggregate is dropped due to the call to
Skip(), leaving two. The time of each aggregate corresponds to the original time of each corresponding element in
ys, and the aggregate observable completes when
ys does.
- obs.Subscribe(results);
-
- scheduler.Start();
-
- results.Messages.AssertEqual(
- OnNext(15, new StatInfoItem<string>()
- {
- Item = "Temp7",
- Sum = 30.0,
- Count = 2,
- Mean = 15.0,
- M2 = 50.0,
- StdDev = 5.0,
- Min = 0.0,
- Max = 20.0,
- }),
- OnNext(45, new StatInfoItem<string>()
- {
- Item = "Temp7",
- Sum = 60.0,
- Count = 3,
- Mean = 20.0,
- M2 = 200.0,
- StdDev = 8.16496580927726,
- Min = 0.0,
- Max = 30.0,
- }),
- OnCompleted<StatInfoItem<string>>(50)
- );
- }
OpenDDS: As OpenDDS sample streams are potentially infinite (that is, a sensor generates values essentially forever), the use of
Scan() for element-wise statistics gathering is likely more useful than operators such as
Count(),
Average() and such that only produce a value when the stream has terminated.
And More...
A number of other operators exist, and sites such as Introduction to Rx, 101 Rx Samples and the list of RxJava operators will provide greater detail, but the above gives a flavor of what operators are available in reactive frameworks.
Conclusion
Part I of this article showed how to convert an OpenDDS stream of data samples into an observable, but this article showed that the real power of reactive programming is to use operator composition in order to manipulate observable sequences. Reactive programming can lead to more responsive applications, and better use of hardware resources. With these articles, we've seen how to integrate OpenDDS into a reactive application, and operatators that are particularly useful for manipulating OpenDDS sample data.
References
- [1] Reactive programming
- [2] Reactive OpenDDS, Part I
- [3] Rx (Reactive Extensions)
- [4] MPC (The Makefile, Project, and Workspace Creator)
- [5] Alphabetical List of Observable Operators
- [6] Reactive-Extensions/RxCpp
- [7] The ambiguous operator, pt.2
- [8] Unit Testing Reactive Extensions with a method using Wait
- [9] Floating point comparison functions for C#
- [10] DevCamp 2010 Keynote - Rx: Curing your asynchronous programming blues
- [11] Testing Rx Queries using Virtual Time Scheduling
- [12] GroupJoin - Joins two streams matching by one of their attributes
- [13] Streaming OLAP with the Reactive Extensions (RX) for .Net
- [14] C#/.NET Little Wonders: Extension Methods Demystified
- [15] Introduction to Rx | https://objectcomputing.com/resources/publications/mnb/reactive-opendds-part-ii-operators | CC-MAIN-2019-13 | refinedweb | 5,966 | 52.19 |
05 July 2011 19:26 [Source: ICIS news]
LONDON (ICIS)--The European Commission has cleared a move by Abu Dhabi's International Petroleum Investment Co (IPIC) to take majority control of Spain's integrated energy and petrochemicals firm Compañía Española de Petróleos (CEPSA), it said on Tuesday.
IPIC, which holds 47.1% in CEPSA, is set to acquire Total’s 48.8% stake to take control of CEPSA.
The Commission said the deal would lead to overlaps in the markets for phenol and acetone.
However, IPIC's and CEPSA's combined market shares for those products are "moderate" as a number of credible competitors will remain in the market, the Commission said.
IPIC had notified the Commission in May that it plans to take control of CEPSA.
IPIC first acquired a 9.5% stake in CEPSA in 1998, which it raised to 47.1% in 2009.
IPIC's portfolio also includes a number of other companies.
The company is controlled by the government of the Emirate of Abu Dhabi.
So, everything works through the first iteration (I don't know if that is an appropriate term for linked lists, but we just learned arrays), but then it stops. I thought that telling it to continue until position.next != null and advancing the position after every iteration would work, but I think I have coded something incorrectly or am not taking something into consideration. I would be greatly obliged if you could offer any advice!
I had one more question I wanted to ask...
I am also writing a method called insertionSort (using insertion sort to sort a linked list). I know that in an array you copy the item into a temporary space and then insert it into the sorted portion at its appropriate place. I am wondering how to create such a space, since with bubble sort my temporary space was merely incorporated as the next node.
Thanks,
Hunter
public class LinkedList {
    public Node head;

    public LinkedList(int length) {
        head = null;
        for (int i = 0; i < length; i++)
            insert(i);
    }

    public LinkedList() {
        head = null;
    }

    public void clear() {
        head = null;
    }

    public void insert(int n) {
        Node current = new Node(n);
        current.next = head;
        head = current;
    }

    public void insert(Node n, int index) {
        Node previous, current;
        Node nnode = new Node(-10);
        nnode.next = head;
        previous = nnode;
        current = head;
        while (current != null && index > 0) {
            current = current.next;
            previous = previous.next;
            index--;
        }
        if (previous == nnode) {
            n.next = head;
            head = n;
        } else {
            previous.next = n;
            n.next = current;
        }
    }

    public void delete(int index) {
        int current;
        Node currentnode = head;
        for (current = index; current > 1; current--)
            currentnode = currentnode.next;
        if (currentnode == head)
            head = head.next;
        else
            currentnode.next = currentnode.next.next;
    }

    public Node getNode(int index) {
        Node nnode = new Node(-10);
        nnode.next = head;
        Node current = head;
        while (current.next != null && index > 0) {
            current = current.next;
            index--;
        }
        return current;
    }

    public int getVal(int index) {
        int current;
        Node currentnode = head;
        for (current = index; current > 0; current--)
            currentnode = currentnode.next;
        return currentnode.getVal();
    }

    public void print() {
        System.out.println();
        head.print();
    }

    private void swap(Node pa, Node a, Node pb, Node b) {
        Node temp = b.next;
        pa.next = b;
        if (a.next != b) {
            b.next = a.next;
            pb.next = a;
        } else
            b.next = a;
        a.next = temp;
    }

    public void selectionSort() {
        Node current, a, previous, position;
        position = new Node(0);
        position.next = head;
        head = position;
        while (position.next != null) {
            current = previous = position.next;
            a = position.next;
            while (a != null) {
                if (a.getVal() < current.getVal()) {
                    current = a;
                    while (previous.next != current)
                        previous = previous.next;
                }
                a = a.next;
            }
            if (current != previous) {
                Node t = position.next;
                swap(position, t, previous, current);
            }
            position = position.next;
        }
        head = head.next;
    }

    public void bubbleSort() {
        Node current, a, previous, position;
        position = new Node(0);
        position.next = head;
        head = position;
        while (position.next != null) {
            current = position.next;
            previous = position;
            a = current.next;
            while (a != null) {
                if (a.getVal() < current.getVal()) {
                    Node temp = a.next;
                    a.next = previous.next;
                    previous.next = current.next;
                    current.next = temp;
                    previous = a;
                    a = temp;
                } else {
                    a = a.next;
                    current = current.next;
                    previous = previous.next;
                }
            }
            position = position.next;
        }
        head = head.next;
    }
}
shopping_list = ["banana", "orange", "apple"]

stock = {
    "banana": 6,
    "apple": 0,
    "orange": 32,
    "pear": 15
}

prices = {
    "banana": 4,
    "apple": 2,
    "orange": 1.5,
    "pear": 3
}
# Write your code below!
def compute_bill(food):
    total = 0
    for stuff in food:
        total += prices[stuff]
    return total

print compute_bill(shopping_list)
This code runs the function correctly. My question is: how does the program know to add only the integers? I keep thinking I have to be much more specific. For instance, it adds the values for apple, banana, and orange, but when building the function I kept thinking I would have to tell it not to add the key strings… Just looking for further clarity on dictionaries and their keys/values. Thank you
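A minimal illustration (added for clarity, reusing the names from the question): indexing a dictionary with a key returns the stored value, so only the numbers ever get added; the key string is just used for the lookup.

prices = {"banana": 4, "apple": 2}
total = 0
for stuff in ["banana", "apple"]:
    total += prices[stuff]   # looks up 4, then 2; the strings themselves are never added
print total                  # 6 (Python 2 print, matching the course)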
Introduction
When it comes to running applications on Kubernetes in production, you will sooner or later face the challenge of updating your services with a minimum amount of downtime for your users and, at least as important, of releasing new versions of your application with confidence. That means you can discover unhealthy and "faulty" services very quickly and roll back to previous versions without much effort.
When you search the internet for best practices or Kubernetes addons that help with these challenges, you will stumble upon Flagger from Weaveworks, as I did.
Flagger is basically a controller that is installed in your Kubernetes cluster. It helps you with canary and A/B releases of your services by handling all the hard parts: automatically adding services and deployments for your "canaries", shifting load over to them over time, and rolling back deployments in case of errors.
As if that weren't good enough, Flagger also works in combination with popular service meshes like Istio and Linkerd. If you don't want to use Flagger with such a product, you can also use it on "plain" Kubernetes, e.g. in combination with an NGINX ingress controller. Many choices here.
I like linkerd very much, so I’ll choose that one in combination with Flagger to demonstrate a few of the possibilities you have when releasing new versions of your application/services.
Prerequisites
linkerd
I already set up a plain Kubernetes cluster on Azure for this sample, so I'll start by adding linkerd to it (a complete guide on how to install linkerd and its CLI can be found in the official linkerd documentation):
$ linkerd install | kubectl apply -f -
After the command has finished, let’s check if everything works as expected:
$ linkerd check && kubectl -n linkerd get deployments
[...]
control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
flagger                  1/1     1            1           3h12m
linkerd-controller       1/1     1            1           3h14m
linkerd-destination      1/1     1            1           3h14m
linkerd-grafana          1/1     1            1           3h14m
linkerd-identity         1/1     1            1           3h14m
linkerd-prometheus       1/1     1            1           3h14m
linkerd-proxy-injector   1/1     1            1           3h14m
linkerd-sp-validator     1/1     1            1           3h14m
linkerd-tap              1/1     1            1           3h14m
linkerd-web              1/1     1            1           3h14m
If you want to open the linkerd dashboard and see the current state of your service mesh, execute:
$ linkerd dashboard
After a few seconds, the dashboard will be shown in your browser.
Microsoft Teams Integration
For alerting and notification, we want to leverage Flagger's MS Teams integration to get notified each time a new deployment is triggered or a canary release is "promoted" to become the primary release. For that, we need to set up an incoming webhook in an MS Teams channel:
Therefore, we need to setup a WebHook in MS Teams (a MS Teams channel!):
- In Teams, choose More options ( ⋯ ) next to the channel name you want to use and then choose Connectors.
- Scroll through the list of connectors to Incoming Webhook, and choose Add.
- Enter a name for the webhook, upload an image and choose Create.
- Copy the webhook URL. You'll need it when adding Flagger in the next section, and for the quick test shown below.
- Choose Done.
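Before wiring the webhook into Flagger, it can be verified with a quick test message. Teams incoming webhooks accept a simple JSON payload; replace the placeholder with the URL you just copied:

$ curl -H "Content-Type: application/json" \
       -d '{"text": "Hello from the Flagger setup"}' \
       <YOUR_TEAMS_WEBHOOK_URL>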
Install Flagger
Time to add Flagger to your cluster. We will use Helm for this (version 3, so there is no need for a Tiller deployment upfront).
$ helm repo add flagger https://flagger.app
$ kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml
[...]
$ helm upgrade -i flagger flagger/flagger \
    --namespace=linkerd \
    --set crd.create=false \
    --set meshProvider=linkerd \
    --set metricsServer=http://linkerd-prometheus:9090 \
    --set msteams.url=<YOUR_TEAMS_WEBHOOK_URL>
Check if everything has been installed correctly:
$ kubectl get pods -n linkerd -l app.kubernetes.io/instance=flagger
NAME                       READY   STATUS    RESTARTS   AGE
flagger-7df95884bc-tpc5b   1/1     Running   0          0h3m
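If something looks off, you can also tail the controller logs with a standard kubectl command to see what Flagger is doing (the exact log output will differ on your cluster):

$ kubectl -n linkerd logs deployment/flagger -f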
Great, that looks good. Now that Flagger has been installed, let's have a look at where it will help us and what kind of objects will be created for canary analysis and promotion. Remember that we use linkerd in this sample, so all objects and features discussed in the following section are only relevant for linkerd.
How Flagger works
The sample application we will be deploying shortly consists of a VueJS single-page application that is able to display quotes from the Star Wars movies, and to request the quotes in a loop (to be able to put some load on the service). When requesting a quote, the web application talks to a service (proxy) within the Kubernetes cluster, which in turn talks to another service (quotesbackend) that is responsible for creating the quote (simulating service-to-service calls in the cluster). The SPA as well as the proxy are accessible through an NGINX ingress controller.
After the application has been successfully deployed, we also add a Canary object that takes care of promoting new revisions of our backend deployment. The Canary object looks like this:
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: quotesbackend
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: quotesbackend
  progressDeadlineSeconds: 60
  service:
    port: 3000
    targetPort: 3000
  analysis:
    interval: 20s
    # max number of failed metric checks before rollback
    threshold: 5
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 70
    stepWeight: 10
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    - name: request-duration
      # maximum req duration P99
      # milliseconds
      threshold: 500
      interval: 30s
What this configuration basically does is watch for new revisions of the quotesbackend deployment. When one appears, Flagger starts a canary deployment for it. Every 20s, it will increase the weight of the traffic split by 10% until it reaches 70%. If no errors occur during the promotion, the new revision will be scaled up to 100% and the old version will be scaled down to zero, making the canary the new primary. Flagger will monitor the request success rate and the request duration (linkerd Prometheus metrics). If one of them drops below the threshold set in the Canary object, a rollback to the old version will be started and the new deployment will be scaled back to zero pods.
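Once the Canary object has been applied (we will do that after deploying the sample app), the analysis can be followed from the command line. Canaries are a Flagger custom resource, so standard kubectl commands work; the status output below is illustrative:

# watch the canary analysis progress
$ kubectl -n quotes get canary quotesbackend -w
NAME            STATUS        WEIGHT   LASTTRANSITIONTIME
quotesbackend   Progressing   30       2020-05-25T12:31:08Z

# inspect the events emitted during analysis or rollback
$ kubectl -n quotes describe canary quotesbackend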
To achieve all of the above mentioned analysis, flagger will create several new objects for us:
- backend-primary deployment
- backend-primary service
- backend-canary service
- SMI / linkerd traffic split configuration
The resulting architecture will look like that:
So, enough of theory, let’s see how Flagger works with the sample app mentioned above.
Sample App Deployment
If you want to follow the sample on your machine, you can find all the code snippets, deployment manifests etc. on
First, we will deploy the application in a basic version. This includes the backend and frontend components as well as an Ingress Controller which we can use to route traffic into the cluster (to the SPA app + backend services). We will be using the NGINX ingress controller for that.
To get started, let’s create a namespace for the application and deploy the ingress controller:
$ kubectl create ns quotes # Enable linkerd integration with the namespace $ kubectl annotate ns quotes linkerd.io/inject=enabled # Deploy ingress controller $ helm repo add ingress-nginx $ kubectl create ns ingress $ helm install my-ingress ingress-nginx/ingress-nginx -n ingress
Please note , that we annotate the quotes namespace to automatically get the Linkerd sidecar injected during deployment time. Any pod that will be created within this namespace, will be part of the service mesh and controlled via Linkerd.
As soon as the first part is finished, let’s get the public IP of the ingress controller. We need this IP address to configure the endpoint to call for the VueJS app, which in turn is configured in a file called
settings.js of the frontend/Single Page Application pod. This file will be referenced when the index.html page gets loaded. The file itself is not present in the Docker image. We mount it during deployment time from a Kubernetes secret to the appropriate location within the running container.
One more thing : To have a proper DNS name to call our service (instead of using the plain IP), I chose to use NIP.io. The service is dead simple! E.g. you can simply use the DNS name
123-456-789-123.nip.io and the service will resolve to host with IP
123.456.789.123. Nothing to configure, no more editing of /etc/hosts…
So first, let’s determine the IP address of the ingress controller…
# get the IP address of the ingress controller... $ kubectl get svc -n ingress NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-ingress-ingress-nginx-controller LoadBalancer 10.0.93.165 52.143.30.72 80:31347/TCP,443:31399/TCP 4d5h my-ingress-ingress-nginx-controller-admission ClusterIP 10.0.157.46 <none> 443/TCP 4d5h
Please open the file
settings_template.js and adjust the endpoint property to point to the cluster (in this case, the IP address is 52.143.30.72, so the DNS name will be 52-143-30-72.nip.io).
Next, we need to add the correspondig Kubernetes secret for the settings file:
$ kubectl create secret generic uisettings --from-file=settings.js=./settings_template.js -n quotes
As mentioned above, this secret will be mounted to a special location in the running container. Here’s the deployment file for the frontend – please see the sections for volumes and volumeMounts:
apiVersion: apps/v1 kind: Deployment metadata: name: quotesfrontend spec: selector: matchLabels: name: quotesfrontend quotesapp: frontend version: v1 replicas: 1 minReadySeconds: 5 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 1 template: metadata: labels: name: quotesfrontend quotesapp: frontend version: v1 spec: containers: - name: quotesfrontend image: csaocpger/quotesfrontend:4 volumeMounts: - mountPath: "/usr/share/nginx/html/settings" name: uisettings readOnly: true volumes: - name: uisettings secret: secretName: uisettings
Last but not least, we also need to adjust the ingress definition to be able to work with the DNS / hostname. Open the file ingress.yaml and adjust the hostnames for the two ingress definitions. In this case here, the resulting manifest looks like that:
Now we are set to deploy the whole application:
$ kubectl apply -f base-backend-infra.yaml -n quotes $ kubectl apply -f base-backend-app.yaml -n quotes $ kubectl apply -f base-frontend-app.yaml -n quotes $ kubectl apply -f ingress.yaml -n quotes
After a few seconds, you should be able to point your browser to the hostname and see the “Quotes App”:
If you click on the “Load new Quote” button, the SPA will call the backend (here:), request a new “Star Wars” quote and show the result of the API Call in the box at the bottom. You can also request quotes in a loop – we will need that later to simulate load.
Flagger Canary Settings
We need to configure Flagger and make it aware of our deployment – remember, we only target the backend API that serves the quotes.
Therefor, we deploy the canary configuration (canary.yaml file) discussed before:
$ kubectl apply -f canary.yaml -n quotes
You have to wait a few seconds and check the services, deployments and pods to see if it has been correctly installed:
$ kubectl get svc -n quotes NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE quotesbackend ClusterIP 10.0.64.206 <none> 3000/TCP 51m quotesbackend-canary ClusterIP 10.0.94.94 <none> 3000/TCP 70s quotesbackend-primary ClusterIP 10.0.219.233 <none> 3000/TCP 70s quotesfrontend ClusterIP 10.0.111.86 <none> 80/TCP 12m quotesproxy ClusterIP 10.0.57.46 <none> 80/TCP 51m $ kubectl get po -n quotes NAME READY STATUS RESTARTS AGE quotesbackend-primary-7c6b58d7c9-l8sgc 2/2 Running 0 64s quotesfrontend-858cd446f5-m6t97 2/2 Running 0 12m quotesproxy-75fcc6b6c-6wmfr 2/2 Running 0 43m kubectl get deploy -n quotes NAME READY UP-TO-DATE AVAILABLE AGE quotesbackend 0/0 0 0 50m quotesbackend-primary 1/1 1 1 64s quotesfrontend 1/1 1 1 12m quotesproxy 1/1 1 1 43m
That looks good! Flagger has created new services, deployments and pods for us to be able to control how traffic will be directed to existing/new versions of our “quotes” backend. You can also check the canary definition in Kubernetes, if you want:
$ kubectl describe canaries -n quotes Name: quotesbackend Namespace: quotes Labels: <none> Annotations: API Version: flagger.app/v1beta1 Kind: Canary Metadata: Creation Timestamp: 2020-06-06T13:17:59Z Generation: 1 Managed Fields: API Version: flagger.app/v1beta1 [...]
You will also receive a notification in Teams, that a new deployment for Flagger has been detected and initialized:
Kick-Off a new deployment
Now comes the part where Flagger really shines. We want to deploy a new version of the backend quote API – switching from “Star Wars ” quotes to “Star Trek ” quotes! What will happen, is the following:
- as soon as we deploy a new “quotesbackend”, Flagger will recognize it
- new versions will be deployed, but no traffic will be directed to them at the beginning
- after some time, Flagger will start to redirect traffic via Linkerd / TrafficSplit configurations to the new version via the canary service, starting – according to our canary definition – at a rate of 10%. So 90% of the traffic will still hit our “Star Wars” quotes
- it will monitor the request success rate and advance the rate by 10% every 20 seconds
- if 70% traffic split will be reached without throwing any significant amount of errors, the deployment will be scaled up to 100% and propagated as the “new primary”
Before we deploy it, let’s request new quotes in a loop (set the frequency e.g. to 300ms via the slider and press “Load in Loop”).
Then, deploy the new version:
$ kubectl apply -f st-backend-app.yaml -n quotes $ kubectl describe canaries quotesbackend -n quotes [...] [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Synced 14m flagger quotesbackend-primary.quotes not ready: waiting for rollout to finish: observed deployment generation less then desired generation Normal Synced 14m flagger Initialization done! quotesbackend.quotes Normal Synced 4m7s flagger New revision detected! Scaling up quotesbackend.quotes Normal Synced 3m47s flagger Starting canary analysis for quotesbackend.quotes Normal Synced 3m47s flagger Advance quotesbackend.quotes canary weight 10 Warning Synced 3m7s (x2 over 3m27s) flagger Halt advancement no values found for linkerd metric request-success-rate probably quotesbackend.quotes is not receiving traffic: running query failed: no values found Normal Synced 2m47s flagger Advance quotesbackend.quotes canary weight 20 Normal Synced 2m27s flagger Advance quotesbackend.quotes canary weight 30 Normal Synced 2m7s flagger Advance quotesbackend.quotes canary weight 40 Normal Synced 107s flagger Advance quotesbackend.quotes canary weight 50 Normal Synced 87s flagger Advance quotesbackend.quotes canary weight 60 Normal Synced 67s flagger Advance quotesbackend.quotes canary weight 70 Normal Synced 7s (x3 over 47s) flagger (combined from similar events): Promotion completed! Scaling down quotesbackend.quotes
You will notice in the UI that every now and then a quote from “Star Trek” will appear…and that the frequency will increase every 20 seconds as the canary deployment will receive more traffic over time. As stated above, when the traffic split reaches 70% and no errors occured in the meantime, the “canary/new version” will be promoted as the “new primary version” of the quotes backend. At that time, you will only receive quotes from “Star Trek”.
Because of the Teams integration, we also get a notification of a new version that will be rolled-out and – after the promotion to “primary” – that the rollout has been successfully finished.
What happens when errors occur?
So far, we have been following the “happy path”…but what happens, if there are errors during the rollout of a new canary version? Let’s say we have produced a bug in our new service that will throw an error when requesting a new quote from the backend? Let’s see, how Flagger will behave then…
The version that will be deployed will start throwing errors after a certain amount of time. Due to the fact that Flagger is monitoring the “request success rate” via Linkerd metrics, it will notice that something is “not the way it is supposed to be”, stop the promotion of the new “error-prone” version, scale it back to zero pods and keep the current primary backend (means: “Star Trek” quotes) in place.
$ kubectl apply -f error-backend-app.yaml -n quotes $ k describe canaries.flagger.app quotesbackend [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Synced 23m flagger quotesbackend-primary.quotes not ready: waiting for rollout to finish: observed deployment generation less then desired generation Normal Synced 23m flagger Initialization done! quotesbackend.quotes Normal Synced 13m flagger New revision detected! Scaling up quotesbackend.quotes Normal Synced 11m flagger Advance quotesbackend.quotes canary weight 20 Normal Synced 11m flagger Advance quotesbackend.quotes canary weight 30 Normal Synced 11m flagger Advance quotesbackend.quotes canary weight 40 Normal Synced 10m flagger Advance quotesbackend.quotes canary weight 50 Normal Synced 10m flagger Advance quotesbackend.quotes canary weight 60 Normal Synced 10m flagger Advance quotesbackend.quotes canary weight 70 Normal Synced 3m43s (x4 over 9m43s) flagger (combined from similar events): New revision detected! Scaling up quotesbackend.quotes Normal Synced 3m23s (x2 over 12m) flagger Advance quotesbackend.quotes canary weight 10 Normal Synced 3m23s (x2 over 12m) flagger Starting canary analysis for quotesbackend.quotes Warning Synced 2m43s (x4 over 12m) flagger Halt advancement no values found for linkerd metric request-success-rate probably quotesbackend.quotes is not receiving traffic: running query failed: no values found Warning Synced 2m3s (x2 over 2m23s) flagger Halt quotesbackend.quotes advancement success rate 0.00% < 99% Warning Synced 103s flagger Halt quotesbackend.quotes advancement success rate 50.00% < 99% Warning Synced 83s flagger Rolling back quotesbackend.quotes failed checks threshold reached 5 Warning Synced 81s flagger Canary failed! Scaling down quotesbackend.quotes
As you can see in the event log, the success rate drops to a significant amount and Flagger will halt the promotion of the new version, scale down to zero pods and keep the current version a the “primary” backend.
Conclusion
With this article, I have certainly only covered the features of Flagger in a very brief way. But this small example shows what a great relief Flagger can be when it comes to the rollout of new Kubernetes deployments. Flagger can do a lot more than shown here and it is definitely worth to take a look at this product from WeaveWorks.
I hope I could give some insight and made you want to do more…and to have fun with Flagger :)
As mentioned above, all the sample files, manifests etc. can be found here:.
Discussion (0) | https://dev.to/cdennig/release-to-kubernetes-like-a-pro-with-flagger-1igf | CC-MAIN-2021-43 | refinedweb | 3,100 | 60.24 |
This post is an attempt to explain what iterators and generators are in Python, defend the
yield statement, and reveal why a library like SimPy is possible. But first some terminology (that specifically targets my friends who Java). Iteration is a syntactic construct that implements a loop over an iterable object. The
for statement provides iteration, the
while statement may provide iteration. An iterable object is something that implements the iteration protocol (Java folks, read interface). A generator is a function that produces a sequence of results instead of a single value and is designed to make writing iterable objects easier.
Iterables
Iterable objects are constructed by the built-in function,
iter, which takes an iterable object and returns an iterator. The Python data model allows you to define custom objects that implement double underscore methods related to the built-in functions and operators. Therefore if you implement an object with an
__iter__ method, your object can be passed to the
iter built-in.
The
__iter__ method must return an iterable object, which if it is the same object, can simply return
self. Iterable objects must have a
next method that is called on every pass of the loop. When iteration is complete, the
next method should raise
StopIteration. Here is an example of a Dealer iterator that shuffles a deck of cards on
iter then deals out cards on each call of next, until there are no more cards left in the deck:
The thing to note here is that the object keeps track of its own state, through it’s own pointer value (the “shoe”). This means that the iterable can be “exhausted” without returning any more data. Try the following and see what happens:
dealer = Dealer() for card in dealer: for card in dealer: print card
Note that I also used the shorthand and didn’t call the
iter function directly, but let the syntax of the for loop handle it for me. Also note that other built-in functions consume iterables like
list which will take the contents of the iterable and store it in memory in a list, or
enumerate which will also provide an index of each value in the iterator.
Generators
Generators are designed to allow you to easily create iterables without having to deal with the iterator interface. Instead you can create a function that does not
return but rather
yield values. When the
yield keyword is used inside a function, a generator is immediately returned that has a
next method. Look how simple our dealer is using a generator function:
def dealer(): cards = [ u"{: >2}{}".format(*card) for card in zip(FACES * len(SUITS), SUITS * len(FACES)) ] random.shuffle(cards) for card in cards: yield card
The generator allows us to forget about how to implement an iterable, keep track of state, etc. which greatly simplifies the process. You can get access to the generator directly from the function:
dealer_generator = dealer() print dealer_generator.next()
Or you can simply loop over the function as we’ve been doing so far:
for card in dealer(): print card
The
yield statement is often mistaken for yielding a value instead of simply returning one. What the generator is actually doing is yielding the execution context back to the caller. Whenever the caller calls
next() on the generator, the execution is returned directly to the line where the yield was executed. Consider the following example:
def surround(n): for idx in xrange(n): print "above {}".format(idx) yield idx print "below {}".format(idx) for idx in surround(4): print "around {}".format(idx)
You get output that appears as follows:
above 0 around 0 below 0 above 1 around 1 below 1 above 2 around 2 below 2 above 3 around 3 below 3
What is happening here? On the
for loop call, a generator is returned, the “above” print statement happens, then control is yielded to the executing context, which prints “around”. That block complete, the loop continues, going to the next cycle, and calls next on the generator, which returns control right after the yield, printing the “below” statement, continuing to the next “above” then yielding, so on and so forth.
SimPy and Context
Generators are incredibly handy for things like comprehensions, memory safe iteration, reading from multiple files simultaneously, and more. However, I want to talk about their ingenious use in the discrete event simulation library, SimPy.
SimPy allows you to create processes which are essentially generators. These processes can run forever, but they must
yield events that occur in the simulation. One very important event is the
timeout event that allows time to pass in the simulation. So how would we implement a simple SimPy environment using generators? Consider a blinking light generator:
def blinker(env): while True: print 'Blink at {}!'.format(env.now) yield 5
The desired effect is that this prints “Blink” every 5 time steps in the simulation (env in this case is just a SimPy environment). The offset allows us to start blinking lights that blink at different times. Note that this
while loop doesn’t terminate, so if we just hit go on this thing, even if we manage to wait 5 (however we do that) then this will go forever, how do we cancel it? Moreover, how do we cancel multiple blinking lights?
Basically what we can do is we can simply manage the generators for our simulation and call
next on them when appropriate, and if we want to terminate, then simply don’t call their next method. Here is a simple implementation:
from collections import defaultdict class BlinkerEnvironment(object): def __init__(self, blinkers=4): self.now = 0 self.blinkers = defaultdict(list) for idx in xrange(blinkers): # schedule blinkers by offset self.blinkers[idx].append(blinker(self)) def run(self, until=100): while self.now < until: if self.now in self.blinkers: for blinker in self.blinkers.pop(self.now): timeout = blinker.next() + self.now self.blinkers[timeout].append(blinker) self.now = min(self.blinkers.keys())
As you can see in this code, the blinkers dictionary is a list of blinkers keyed to the time value that they are supposed to be called again. The environment keeps track of the current timestamp, and initializes 4 blinkers that are offset so that the blinkers aren’t all blinking at the same time.
The
run method is passed an
until argument, which limits how long the simulation goes on. If the current timestamp is in the blinkers schedule, then we go and fetch all the generators for the now value, then call their next method. We reschedule the blinker based on the timeout number that it yields to us, then we increment now by the next scheduled blink to take place (skipping over time steps that don’t matter is what gives discrete event simulation its desired properties). And voila, we’ve implemented a simple simulation using generators! | https://bbengfort.github.io/2016/02/iterators-generators/ | CC-MAIN-2021-17 | refinedweb | 1,150 | 52.7 |
I'm not new to programming, but I am new to c++, basically Dev c++ compiler etc....
I knew lots of C and kicked ass with djgpp but now I am very confused about a few things(mainly lib and .h files)
OK so I am trying to get into Open gl.
Anyone who knows about this please tell me if I am doing this right....
I have a header file glut.h that I place inside dev c++\include\gl dir
I have a lib file glut32.lib that I place dev c++\lib dir
I then write a program
blah blah
#include <gl\glut.h>
blah blah
blah blah
and lastley I goto the program options, paramters and type
-lglut32 in the lib options part(I also tried manually selecting the lib)
I then get an error saying there is a problem with the file included glut.h
Anyone who can help I will appreciate. | https://cboard.cprogramming.com/cplusplus-programming/48348-probably-stoopid-question-printable-thread.html | CC-MAIN-2017-34 | refinedweb | 157 | 80.41 |
I made three prototype implementations of the Topologi XSD to RELAX NG Compact Syntax translator, before adopting a particular one.
First, I used Topologi’s high-level inhouse Java library for XSD, which we use on other products. I looked at converting that into the Java API of one of the versions of RELAX NG in James Clarks’ Trang translation software.
Second, I tried using XSLT to generate RELAX NG Compact Syntax directly.
Third, I looked at using XSLT 2 (Saxon) to generate RELAX NG as XML, then use Trang to convert from this XML to RELAX NG Compact Syntax.
Which one did I go with?
Well, the first method was a complete nightmare. In order to do the work I needed to keep in mind
I was interested in seeing whether adopting the kind of Java API approach (objects) would help simplify the issue, but actually it was the worst approach: the number of levels and connections and abstractions multiplied crazily. Every problem involved looking up multiple manuals or specs. James is a great programmer, but no-one would describe his programming style as chatty or discursive, which was a killer.
The second approach was actually pretty workable, but was starting to look alarmingly fragile: I was adding more functions to generate good RELAX NG compact, and I could see the writing on the wall.
So I chose the third way, ultimately. I sloughed off all Compact Syntax issues to Trang to deal with, working on the command line, and then could just concentrate on converting XSD elements into equivalent RELAX NG attributes. Just the XML syntax, and the RELAX NG, XSD and XSLT semantics, only dipping into the components where needed, and never making an abstract object representation of the XML independent of the infoset. Phew..small is beautiful.
The MS OOX schemas have several features that make translation easier: they use different prefixes for each each kind of object, so no name munging is needed when converting components into RELAX NG patterns with their single namespace. And the OOX derivation tree is shallow, thank goodness. OOX uses all sorts of XSD nasties: extension, substition groups, abstract elements, and I think I found RELAX NG exquivalents for almost everything. If there had been multiple levels of derivation between schema documents, it would need a bit more work.
The main difficulty I had, actually, was with understanding RELAX NG. I have had the experience (with XSD and RELAX NG) of understanding a technology well, but then having a subsequent technology blank out that knowledge. So I found it quite troublesome to figure out how to map xsd import and includes into RELAX NG, when there are foreign namespaces involved. I had made a rule that the translator would not need to look in other documents: I am still not convinced I have it right yet, actually.
But all in all, I think the draft RELAX NG compact syntax schemas for draft Ecma OOX at least show that ISO RELAX NG is a viable technical option even for large complex documents that use XSD schemas: the choice of a particular document type should not force your hand to adopt one stream of schema technology…especially for grammar-based schema languages. I’m also working recently on another project where the independent schema consultant developers in RELAX NG and then distributes as XSD: a nice approach, and I expect over the few years that schema-language neutrality will become a more widely adopted stance by buyers/developer/overseers.
Question: why not convert the xs:documentation elements to RNG-namespaced documentation? Rather trivial detail admittedly.
Most comments are translated to >> comments.
A few are removed in situations where RELAX NG compact does not allow them (the details escape me, it is something like comments on enumerated values).
And there is one or two situations where the comments are duplicated, both as [xs:documentation] and >> comments. Comments were not high on my priorities for making the Ecma draft deadline; however they certainly are high on the list of things to have right by the time the ISO standard comes out.
For the first stage in approach #3, couldn't XSLT 1.0, possibly using EXSLT, have done just as well? Or was 2.0 more of an incidental choice?. | http://www.oreillynet.com/xml/blog/2006/10/three_ways_of_writing_xml_tran.html | crawl-002 | refinedweb | 716 | 59.64 |
06 July 2011 16:09 [Source: ICIS news]
LONDON (ICIS)--The ICIS Petrochemical Index (IPEX) for July experienced its first pull back in 10 months, decreasing by 2.8% to 364.35 from the revised* June figure of 374.94. The downward move followed an easing of upstream crude oil prices, and some concerns over global macroeconomic and sustainability issues.
An easing of prices was seen across all regional US dollar denominated sub-indexes, with the US experiencing the greatest fall, of 4.9%, followed by northeast Asia, which fell 2.8% and northwest Europe – the most resilient, seeing only a 0.6% decrease. A 0.7% weakening of the dollar against the euro was also seen over the same period.
Bearish sentiment was seen across 11 of the 12 groups of chemicals, with the butadiene (BD) group the only one bucking the trend. BD prices surged 14.0% in Europe, with gains of 9.4% and 4.9% seen in Asia and the ?xml:namespace>
Double-digit percentage declines, in US dollar terms, were seen, BD, polyvinyl chloride (PVC), polyethylene (PE), polypropylene (PP) and PS.
The June IPEX has been revised from 374.37 to 374.94, following incorporation of the
*As of July 2010, the index has been revised retrospectively to replace the latest available contract prices at the time of publication, that had previously been used in the data series with actual settled contract prices. This has had the effect of moving the derived IPEX index from an estimated status to an actual | http://www.icis.com/Articles/2011/07/06/9475604/july-ipex-sees-first-fall-in-10-months.html | CC-MAIN-2014-10 | refinedweb | 256 | 66.44 |
Graph Algorithms in Ruby
A lot (read: most) of Rubyists are focused on one aspect of software engineering: web development. This isn’t necessarily a bad thing. The web is growing at an incredible rate and is definitely a rewarding (monetarily and otherwise) field in which to have expertise. However, this does not mean that Ruby is good just for web development.
The standard repertoire of algorithms is pretty fundamental to computer science and having a bit of experience with them can be incredibly beneficial. In this article, we’ll go through some of the most basic graph algorithms: depth first search and breadth first search. We’ll look at the ideas behind them, where they fit in with respect to applications, and their implementations in Ruby.
Terminology
Before we can really get going with the algorithms themselves, we need to know a tiny bit about graph theory. If you’ve had graph theory in any form before, you can safely skip this section. Basically, a graph is a group of nodes with some edges between them (e.g. nodes representing people and edges representing relationships between the people). What makes graph theory special is that we don’t particularly care about the Euclidean geometrical structure of the nodes and edges. In other words, we don’t care about the angles they form. Instead, we care about the “relationships” that these edges create. This stuff is a bit hand-wavy at the moment, but it’ll become clear as soon as we look at some concrete examples:
Alright, so there: we have a graph. But, what if we want a structure that can represent the idea that “A is related to B but B isn’t related to A”? We can have directed edges in the graph.
Now, there is a direction to go with each relationship. Of course, we can create a directed graph out of an undirected graph by replacing each undirected edge with two directed edges going opposite ways.
The Fundamental Problem
Say we’re given a given a directed graph (G) and two nodes: (S) and (T) (usually referred to as the source and terminal). We want to figure out whether there is a path between (S) (T). Can we can get to (T) by the following the edges (in the right direction) from (S) to (T)? We’re also interested in what nodes would be traversed in order to complete this path.
There are two different solutions to this problem: depth first search and breadth first search. Given the names and a little bit of imagination, it’s easy to guess the difference between these two algorithms.
The Adjacency Matrix
Before we get into the details of each algorithm, let’s take a look how we can represent a graph. The simplest way to store a graph is probably the adjacency matrix. Let’s say we have a graph with (V) nodes. We represent that graph with a (V x V) matrix full of 1’s and 0’s. If there exists an edge going from node [i] to node [j], then we place a (1) in row (i) and row (j). If there’s no such edge, then we place a (0) in row (i) and row (j). An adjacency list is another way to represent a graph. For each node (i), we setup lists that contain references to the nodes that (i) has an edge to.
What’s the difference between these two approaches? Say we have a graph with 1000 nodes but only one edge. With the adjacency matrix representation, we’d have to allocate a (1000*1000) element array in order to represent the graph. On the other hand, a good adjacency list representation would not need to represent outputs for all for the nodes. However, adjacency lists have their own downsides. With the traditional implementation of linked lists, it takes linear time in order to check if a given edge exists. For example, if we want to check if edge (4, 6) exists, then we have to look at the list of outputs of 4 and then loop over it (and it might contain all the nodes in the graph) to check if 6 is part of that list.
On the other hand, checking if row 4 and column 6 is 1 in a adjacency matrix takes a constant amount of time regardless of the structure of the graph. So, if you have a sparse graph (i.e. lots of nodes, few edges), use an adjacency list. If you have a dense graph and are doing lots of existence checking of edges, use an adjacency matrix.
For the rest of the article, we’ll be using the adjacency matrix representation mostly because it is slightly simpler to reason about.
Depth First Search
Let’s take a look at the fundamental problem again. Basically, we’re trying to figure out whether or not we can reach a certain node (T) from another node (S). Imagine a mouse starting at node (S) and a piece of cheese at node (T). The mouse can set off in search of (T) by following the first edge it sees. It keeps doing that at every node it reaches until it hits a node where it can’t go any further (and still hasn’t found (T)). Then, it will backtrack to the last node where it had another edge it could have gone down and repeats the process. In a sense, the mouse goes as “deep” as it can within the graph before it backtracks. We refer to this algorithm as depth first search.
In order to build the algorithm, we need some way to keep track of what node to backtrack to when the time comes. To facilitate this, we use a data structure called a “stack”. The first item placed on a stack is the first one that will leave the stack. That’s why we call the stack a LIFO (last in, first out) structure – the “last in” element is the “first out” element. The mouse has to keep a stack of nodes as it travels through the network. Every time it arrives at a node, the mouse adds all possible outputs (i.e. children) to the stack. Then, it takes the first element off the stack (refered to as a “pop” operation) and moves to that node. In the case that we’ve hit a dead end (i.e. the set of possible outputs is empty), then the “pop” operation will return the node to which we can backtrack.
Let’s see an example of depth first search in action:
In this graph, we have the A as the source node (i.e. (S)) and F as the terminal node (i.e. (T)). Let’s add bit of color to the graph to make the process of searching a bit clearer. All nodes that are part of the stack is colored blue and the node that is at the top of stack (i.e. next to “pop”) is colored purple. So, right now:
The mouse will pop “A” off the stack and add the outputs in an arbitrary order:
The mouse then moves to node “B” (i.e. it pops it off the stack) and adds its only output to the stack:
Uh-oh. We’ve hit a node with no output nodes. Fortunately, that’s no problem. We just pop off “B”, making “D” the top of the stack:
From there, it’s just one jump to the node we’re looking for! Here’s the final result:
You might be wondering what would happen if node F wasn’t reachable from node A. Simple: at some point, we’d have a completely empty stack. Once we reach that point, we know that we’ve exhausted all possible paths from node A.
Depth First Search in Ruby
With solid handle on how depth first search (i.e. DFS) works, look at the Ruby implementation:
def depth_first_search(adj_matrix, source_index, end_index) node_stack = [source_index] loop do curr_node = node_stack.pop return false if curr_node == nil return true if curr_node == end_index children = (0..adj_matrix.length-1).to_a.select do |i| adj_matrix[curr_node][i] == 1 end node_stack = node_stack + children end end
If you followed along with the example, the implementation should come just naturally. Let’s break it down step by step. First of all, the method definition is pretty important:
depth_first_search(adj_matrix, source_index, end_index)
We have to pass in
adj_matrix, which is the adjacency matrix representation of the graph implemented as an array of arrays in Ruby. We also provide the indices of the source and terminal nodes (
source_index and
end_index) within the adjacency matrix.
node_stack = [source_index]
The stack starts off with the source node. This can be thought of as the mouse’s starting point. Although here we’re using a standard Ruby array for our stack, as long as the implementation gives a way to “push” and “pop” elements (and those operations behave the way we expect them to), the stack implementation doesn’t matter.
loop do ... end
The code uses a forever loop and then breaks when we encounter certain conditions.
curr_node = node_stack.pop return false if curr_node == nil return true if curr_node == end_index
Here, we pop the current node off the stack and reak out of the loop if the the stack is empty or it’s the node we’re looking for.
children = (0..adj_matrix.length-1).to_a.select do |i| adj_matrix[curr_node][i] == 1 end
This part is a bit wordy, but the goal is pretty simple. Out of the adjacency matrix, pick the “children” of
curr_node by checking the 1’s in the adjacency matrix.
node_stack = node_stack + children
Take those nodes and push them onto the end of the stack. Perfect! If you want to give the implementation a quick whirl:
adj_matrix = [ [0, 0, 1, 0, 1, 0], [0, 0, 1, 0, 0, 1], [0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 1], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0] ] p depth_first_search(adj_matrix, 0, 4)
That’s a quick implementation of depth first search. In practice, you’ll want a more complex definition of what a “node” is, since they are rarely just indices and often have data attached to them. But, the general concept of depth first search stays the same. Note that our implementation only works for acyclic graphs, i.e. graphs in which there is no loop. If we want to also operate on cyclic graphs, we’d need to keep track of whether or not we’ve already visited a node. That is a pretty easy exercise; try implementing it!
Breadth First Search
What if we have a graph where we have a 10,000 node long path that contains a bunch of nodes we don’t care about? Then, usin depth first search might end up traversing that enormous path for absolutely no reason. That’s where breadth first search comes in. Instead of going down a path, we “spread out” our search across every level of the graph. Imagine taking a jug of water and pouring it into the source node. If the edges are pipes, the water should flow through each “level” of the graph (as determined by the distance from the source node) and possibly reach the terminal node.
Let’s get back to our mouse, but this time, we’ll use breadth first search. Our furry friend will start out at a node and put all the children of the node in a list. Instead of traveling to the most recently inserted element, the mouse will select the “oldest” element in the list. Taking a look at an example will make the concept clear (again, in the diagrams: blue means part of the list, purple means the next element to be taken off the list, and green means terminal). We start off in the same way as Depth First Search:
Add on the children to the list, but the mouse will now travel to the element inserted first:
Now, here’s the interesting part. We add “F” onto the list, but don’t process it immediately because “C” and “B” were added earlier. So, here’s the state:
Remove “C” and then jump to “B”. Here’s where the difference is:
Finally, the mouse can get a hold of the cheese when we jump back to “F”:
Breadth First Search in Ruby
The structure (i.e. the “list”) we’ve been using is called a “queue”. A stack, as you might recall, is LIFO (last in, first out). A queue, on the other hand, is a FIFO structure (first in, first out). Simply by replacing the stack with a queue in our DFS implementation, it is now an implementation of BFS:
def breadth_first_search(adj_matrix, source_index, end_index) node_queue = [source_index] loop do curr_node = node_queue.pop return false if curr_node == nil return true if curr_node == end_index children = (0..adj_matrix.length-1).to_a.select do |i| adj_matrix[curr_node][i] == 1 end node_queue = children + node_queue end end
There are really only two important bits to this code.
curr_node = node_queue.pop
Notice that we’re still using the
pop method. Like before, this returns the last element of
node_queue and removes it from the array. However, it does not give us the most recently inserted element like it did for the stack because of this line:
node_queue = children + node_queue
Since we are adding elements to the beginning of the list, taking an element from the end of the list would give us the element inserted first not the element inserted last. Thus, slightly altering our management of the node list changes the behavior of a queue.
Typically, you’d want to wrap this sort of thing in a class or module and give it
enqueue and
dequeue methods to clarify the fact that we are using
node_queue as a queue. Here, once again, breadth first search will only work if we don’t have a cycle in our graph. To include graphs with cycles, we need to keep track of nodes we have already “discovered”. Modifying the implementation to do this is quite straightforward and a recommended exercise.
Conclusion
I hope you’ve enjoyed this quick tour of the the most fundamental of graph algorithms in Ruby. Of course, even with those, we’ve left lots of areas uncovered. For example, what if instead of just checking whether or not we can reach a node from a source, we want the path from the source to that node? How about if we want the shortest path? It turns out that DFS and BFS, with a little bit effort, can be extended into all sorts of useful algorithms. In the next algorithms article, we’ll take a look at some of these different “extensions.” | https://www.sitepoint.com/graph-algorithms-ruby/?utm_source=sitepoint&utm_medium=articletile&utm_campaign=likes&utm_term=ruby | CC-MAIN-2019-13 | refinedweb | 2,476 | 70.13 |
10. Text Widget in Tkinter
By Bernd Klein. Last modified: 16 Dec 2021..
import tkinter as tk root = tk.Tk() T = tk.Text(root, height=2, width=30) T.pack() T.insert(tk.END, "Just a text Widget\nin two lines\n") tk.mainloop()
The result should not be very surprising:
Let's change our little example a tiny little bit. We add another text, the beginning of the famous monologue from Hamlet:
import tkinter as tk root = tk.Tk() T = tk.Text(root, height=10,
So let's add a scrollbar to our window. To this purpose, Tkinter provides the Scrollbar() method. We call it with the root object as the only parameter.
import tkinter as tk root = tk.Tk() S = tk.Scrollbar(root) T = tk.Text(root, height=4, width=50) S.pack(side=tk.RIGHT, fill=tk.Y) T.pack(side=tk.LEFT, fill=tk
In our next example, we add an image to the text and bind a command to a text line:
import tkinter as tk root = tk.Tk() text1 = tk.Text(root, height=20, width=30) photo = tk.PhotoImage(file='./William_Shakespeare.gif') text1.insert(tk.END, '\n') text1.image_create(tk.END, image=photo) text1.pack(side=tk.LEFT) text2 = tk.Text(root, height=20, width=50) scroll = tk(tk.END, "Not now, maybe later!")) text2.insert(tk.END,'\nWilliam Shakespeare\n', 'big') quote = """ To be, or not to be that is the question: Whether 'tis Nobler in the mind to suffer The Slings and Arrows of outrageous Fortune, Or to take Arms against a Sea of troubles, """ text2.insert(tk.END, quote, 'color') text2.insert(tk.END, 'follow-up\n', 'follow') text2.pack(side=tk.LEFT) scroll.pack(side=tk.RIGHT, fill=tk.Y) root.mainloop()
| https://python-course.eu/tkinter/text-widget-in-tkinter.php | CC-MAIN-2022-05 | refinedweb | 292 | 55.3 |
From Techotopia
There is an old saying amongst veteran programmers that goes something like "Don't comment bad code, re-write it!". Before exploring what these seasoned programmers are really saying, it is important to understand what comments are.
Comments in both programming and scripting languages provide a mechanism for the developer to write notes that are ignored by the compiler or interpreter. These notes are intended solely for either the developer or anyone else who may later need to modify the script. The main purpose of comments, therefore, is to allow the developer to make notes that help both anyone who may read the script later to understand issues such as how a particular section of a script works, what a particular function does or what a variable is used to store. Commenting a script is considered to be good practice. Rest assured that a section of script that seems obvious when you write it will often be confusing when you return to it months, or even years later to modify it. By including explanatory comments alongside the script script does then you must have written it badly. Whilst one should always strive to write good code there is absolutely nothing wrong with including comments to explain what the code does. Even a well written script can be difficult to understand if it is solving a difficult problem so, ignore the old programmers adage and never hesitate comment your JavaScript scripts.
Another useful application of comments in JavaScript is to comment out sections of a script. Putting comment markers around sections of a script ensures that they are not executed by the interpreter when the web page is loaded. This can be especially useful when you are debugging a script and want to try out something different, but do not want to have to delete the old code until you have tested the new code actually works.
[edit] Single Line Comments
The mechanism for a single line comment borrows from the C++ and Java langauges by prefixed the line with //. For example:
// This is a comment line. It is for human use only and is ignored by the JavaScript Interpreter. var myString = "Hello"; // This is another comment
The // syntax tells the intepreter that everything on the same line following on from the // is a comment and should be ignored. This means that anything on the line before the // comment marker it is not ignored. The advantage of this is that it enables comments to be placed at the end of a line of scripting. For example:
var myString = "Welcome to Technotopia"; // Variable containing welcome string
In the above example everything after the // marker is considered a comment and, therefore, ignored by the JavaScript interpreter. This is provides an ideal method for placing comments on the same line of script that describe what that particular script line does.
[edit] Multi-line Comments
For the purposes of supporting comments that extend over multiple lines JavaScript */ function (num1, num2) { return (num1 + num2) }
Multi-line comments are particularly useful for commenting out sections of a script you no longer wish to run but do not yet wish to delete, together with an explanation of when and why you have commented it out:
/* Commented out December 23 while testing improved version var myValue = 1; var myString = "My lucky number is "; document.writeln ( mayString + myValue ); */
In the above example everything between the /* and */ markers is considered to be a comment. Even though this content contains valid JavaScript it is ignored by the interpreter.
[edit] Summary
Comments in JavaScript enable the developer to add notes about the script or comment out sections of script that should not be executed by the interpreter. Comments can be single line comments (using the // marker) or multi-line (beginning with /* and ending with */).
Commenting is considered to be good practice. Regardless of how well you understand the logic of some JavaScript, there is a good chance you may one day have to return to that script and modify it. Often this can be months, or even years, later and what seemed obvious to you at the time you wrote it may seem less obvious in the future. Also, it is often likely that some other person will have to work on your scripts in the future. For both these reasons it is a good idea to provide at least some basic amount of commenting in your scripts. | http://www.techotopia.com/index.php/Comments_in_JavaScript | crawl-001 | refinedweb | 737 | 55.98 |
Paraphrase Mining¶
Paraphrase mining is the task of finding paraphrases (texts with identical / similar meaning) in a large corpus of sentences. In Semantic Textual Similarity we saw a simplified version of finding paraphrases in a list of sentences. The approach presented there used a brute-force approach to score and rank all pairs.
However, as this has a quadratic runtime, it fails to scale to large (10,000 and more) collections of sentences.
For larger collections, util offers the paraphrase_mining function that can be used like this:
from sentence_transformers import SentenceTransformer, util model = SentenceTransformer('all-MiniLM-L6-v2') # Single list of sentences - Possible tens of thousands of sentences sentences = ['The cat sits outside', 'A man is playing guitar', 'I love pasta', 'The new movie is awesome', 'The cat plays in the garden', 'A woman watches TV', 'The new movie is so great', 'Do you like pizza?'] paraphrases = util.paraphrase_mining(model, sentences) for paraphrase in paraphrases[0:10]: score, i, j = paraphrase print("{} \t\t {} \t\t Score: {:.4f}".format(sentences[i], sentences[j], score))
The paraphrase_mining()-method accepts the following parameters:]
Instead of computing all pairwise cosine scores and ranking all possible, combintations, the approach is a bit more complex (and hence efficient). We chunk our corpus into smaller pieces, which is defined by query_chunk_size and corpus_chunk_size. For example, if we set query_chunk_size=1000, we search paraphrases for 1,000 sentences at a time in the remaining corpus (all other sentences). However, the remaining corpus is also chunked, for example, if we set query_chunk_size=10000, we look for paraphrases in 10k sentences at a time.
If we pass a list of 20k sentences, we will chunk it to 20x1000 sentences, and each of the query is compared first against sentences 0-10k and then 10k-20k.
This is done to reduce the memory requirement. Increasing both values improves the speed, but increases also the memory requirement.
The next critical thing is finding the pairs with the highest similarities. Instead of getting and sorting all n^2 pairwise scores, we take for each query only the top_k scores. So with top_k=100, we find at most 100 paraphrases per sentence per chunk. You can play around with top_k to the ensure a certain behaviour.
So for example, with
paraphrases = util.paraphrase_mining(model, sentences, corpus_chunk_size=len(sentences), top_k=1)
You will get for each sentence only the one most other relevant sentence. Note, if B is the most similar sentence for A, A must not be the most similar sentence for B. So it can happen that the returned list contains entries like (A, B) and (B, C).
The final relevant parameter is max_pairs, which determines the maximum number of paraphrase pairs you like to get returned. If you set it to e.g. max_pairs=100, you will not get more than 100 paraphrase pairs returned. Usually, you get fewer pairs returned as the list is cleaned of duplicates, e.g., if it contains (A, B) and (B, A), then only one is returned. | https://sbert.net/examples/applications/paraphrase-mining/README.html | CC-MAIN-2021-43 | refinedweb | 502 | 61.56 |
If you do not know, 12 inches is 1 foot.
Program to add two distances in the inch-feet system
#include <stdio.h> struct Distance { int feet; float inch; } d1, d2, result; int main() { // take first distance input printf("Enter 1st distance\n"); printf("Enter feet: "); scanf("%d", &d1.feet); printf("Enter inch: "); scanf("%f", &d1.inch); // take second distance input printf("\nEnter 2nd distance\n"); printf("Enter feet: "); scanf("%d", &d2.feet); printf("Enter inch: "); scanf("%f", &d2.inch); // adding distances result.feet = d1.feet + d2.feet; result.inch = d1.inch + d2.inch; // convert inches to feet if greater than 12 while (result.inch >= 12.0) { result.inch = result.inch - 12.0; ++result.feet; } printf("\nSum of distances = %d\'-%.1f\"", result.feet, result.inch); return 0; }
Output
Enter 1st distance Enter feet: 23 Enter inch: 8.6 Enter 2nd distance Enter feet: 34 Enter inch: 2.4 Sum of distances = 57'-11.0"
In this program, a structure Distance is defined. The structure has two members:
- feet - an integer
- inch - a float
Two variables d1 and d2 of type
struct Distance are created. These variables store distances in the feet and inches.
Then, the sum of these two distances are computed and stored in the
result variable. Finally, result is printed on the screen. | https://www.programiz.com/c-programming/examples/inch-feet-structure | CC-MAIN-2021-04 | refinedweb | 215 | 69.38 |
Wikipedia talk:WikiProject Computing
Unix merging
The current version is problematic: the recent merge hasn't been finished (e.g. Mv — this is fairly urgent), and the naming is heterogeneous: the page names depend on chronology instead of a lexical scheme (e.g. ls instead of ls (Unix) will engender a conflict if users need to add another "ls" definition). Thanks for your support. JackPotte (talk) 20:47, 18 April 2009 (UTC)
- The situation of the absent article has now been resolved, but the page-name debate is still open. JackPotte (talk) 21:13, 18 April 2009 (UTC)
- In the case of a command name like "mv", it makes sense to rename it to "mv (Unix)" or "mv (command)", because there is already a disambiguation page for it, as an abbreviation with multiple meanings. It also makes sense to group all of the Unix command names by a single naming convention. — Loadmaster (talk) 15:51, 27 April 2009 (UTC)
- I agree; with this convention, any bot could easily modify only these pages. JackPotte (talk) 00:04, 1 May 2009 (UTC)
- Won't "mv (POSIX)" be more encyclopaedic? Stuartyeates (talk) 09:36, 24 May 2009 (UTC)
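(Aside, for illustration: the bot point above holds because a uniform "name (Unix)" suffix is trivially machine-matchable. A minimal Python sketch — the title list here is hypothetical, and a real bot would fetch titles from the MediaWiki API, e.g. via pywikibot, rather than hard-code them:)

```python
import re

# Hypothetical sample of article titles; a real bot would pull these
# from the MediaWiki API instead of hard-coding them.
titles = ["mv (Unix)", "ls (Unix)", "cp (Unix)", "Mv", "List of Unix commands"]

# With a uniform naming convention, one pattern selects exactly the
# Unix command articles and nothing else.
unix_command = re.compile(r"^\w+ \(Unix\)$")

for title in titles:
    if unix_command.match(title):
        print(title)  # mv (Unix), ls (Unix), cp (Unix)
```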
Advice sought concerning Alexa traffic rankings in Comparison of wiki farms article
Please see:
- Comparison of wiki farms
- Talk:Comparison of wiki farms#Removed Alexa column and the related talk sections that follow.
- Wikipedia:Village pump (policy)#Spamfighters repeatedly trying to delete longstanding popular chart of wiki farms
--Timeshifter (talk) 19:26, 8 June 2009 (UTC)
Dell mergers/acquisitions FLRC
User:SRE.K.A.L.24 has nominated List of mergers and acquisitions by Dell for featured list removal here. Please join the discussion on whether this article meets the featured list criteria. Articles are typically reviewed for two weeks; editors may declare "Keep" or "Remove" the article's featured status. The instructions for the review process are here. Dabomb87 (talk) 01:43, 9 June 2009 (UTC)
Project Members
If there are no complaints (within a day or so), I will delete all "red-link" members' names. -- Mjquin_id (talk) 03:46, 9 June 2009 (UTC)
- So far, of the ones I have checked, it looks like nearly all are deleted accounts.
- The redlink user accounts that haven't been deleted have not had user contributions in over a year (of the ones I have checked). Maybe those should be kept though? We need all the help we can get.
- Many people only use their talk pages, and not their user pages. Busy people shouldn't be discouraged from helping out here when they have time. The most skilled people are oftentimes the most busy, and may not edit often. --Timeshifter (talk) 15:05, 9 June 2009 (UTC)
- I more or less agree with Timeshifter. Some editors choose not to have a user page, even though they regularly contribute (e.g Oceanh). We could probably create a list of inactive members (those who haven't edited in a year), like over here Wikipedia:WikiProject_Films/Participants#Inactive_members, but it would be a lot of work, I suppose. decltype (talk) 15:31, 9 June 2009 (UTC)
Usefulness of Alexa traffic rankings
How useful are Alexa traffic rankings?
At Talk:Comparison of wiki farms#Advice sought concerning Alexa traffic rankings DreamGuy wrote the following:
- Alexa traffic ratings are completely meaningless in the real world. They are easily manipulated, only poll an extremely small and unrepresentative sample of the web-surfing public, and serve no purpose whatsoever other than to give WP:UNDUE weight. They should be removed here and everywhere on Wikipedia. DreamGuy (talk) 15:47, 9 June 2009 (UTC)
I find Alexa rankings useful in a rough way, as another factor for choosing which wiki farm to use. There are many other factors, of course: software used, free/paid, staff responsiveness, censorship policy, etc.
I think popularity (as in Alexa traffic rank) of a site is an indicator of longevity. I have found this to be true for other web hosts. It may be true also for wiki farms.
There are no policies or guidelines against including Alexa traffic rankings in an article, list, or chart. There are many Wikipedia pages with Alexa traffic rankings. See the results of this search of Wikipedia articles for Alexa rank.
There is an infobox with Alexa rank as one of the parameters: Template:Infobox Website. That infobox is on many pages.
So what do others think? Please see also: Alexa Internet#Accuracy of ranking by the Alexa Toolbar. Here is my opinion: any manipulation over the long term to jack up the rank of a particular site would be difficult to maintain, since it would require many multiple installs of the Alexa toolbar on many computers in a coordinated effort that would have to change over time to avoid scrutiny by Alexa. Very few websites would have the resources, network of people, and time to do this successfully long term and get away with it unobserved by Alexa. As with all info reported in Wikipedia articles, people have to decide for themselves as to the relative merits of Alexa traffic rankings. --Timeshifter (talk) 13:37, 10 June 2009 (UTC)
Assimilation
"Talk to the potential child WikiProjects about co-ordination, and see what sort of response you get. Be careful not to try to dictate to them; they could be sensitive about you appearing out of nowhere and wanting to assimilate them."
I personally believe dictate may be too harsh of a word to use here, but I do have some minor concerns about how daughter projects (underneath this one) are managed. There seems to be this sort of implied structure, with Computing acting as the rule all parent. My main concern is WikiProject Computer Security; the topic is "very" broad. I believe the project is being treated as a "child task-force" of this project (e.g. our project color scheme, your banner with a switch for the project and article rating, and us using your criteria system; whereas we don't use the major one).
This seems to run the same way for other projects; another example may be WikiProject Software (implying parentage and a hierarchy of projects). Thoughts? Possible resolutions? blurredpeace ☮ 19:11, 13 June 2009 (UTC)
[edit] RfC: Parentage and Implied Structures for Daughter WikiProjects
See my section above this one (that has gone uncommented from participants of the project) for a better view on the situation. I generally want to see independence given to WikiProjects parented by this one, and remove the underlying sense of bureaucracy from all the daughter projects (e.g. WikiProject Software's coordination and hierarchy). I request comments, possible resolutions, and meaningful discussion so we can come to some sort of consensus on what to do. blurredpeace ☮ 11:38, 15 June 2009 (UTC)
- To be honest i have no idea what you are implying, do you mean the sub project software does not have enough powers or something?--Andy Chat c 11:50, 15 June 2009 (UTC)
- Well really, my concern is mostly towards WikiProject Software and WikiProject Computer security for two, somewhat, related reasons. As for WPCS, I see the project almost as a duplication of this project (the project page, and how we are using WPC's navigator sidebar). For WPS, I believe that they're implying a form of hierarchy (or somewhat of a bureaucracy). Why is it necessary to have project coordinators when there's only thirty-four participants (twenty-nine active)? The parentage box at WPS bothers me a bit as well, as it's a direct hierarchy (that gained no consensus on, it was just created). I would have brought this up directly with WPS but WPCS was concerned as well, so I thought this might be the best place to place my concerns. blurredpeace ☮ 12:07, 15 June 2009 (UTC)
I'm ending the RfC as nobody seems to be overtly interested in discussion. I'll bring this up with the daughter projects in question respectively. Thanks for everyone's time. blurredpeace ☮ 00:06, 21 June 2009 (UTC)
[edit] Any chance of including information on differences of MPEG 2 Layer I, MPEG 2 Layer II and MPEG 2 Layer III
I got here attempting research the differneces between MPEG 2 Layer I, MPEG 2 Layer II and MPEG 2 Layer III. I am concentrating my research on playability and functionality issues. I need my MP3 files to play on the largest number of players and still retain functionality (rewind, fast forward, and scrolling through tracks are of the greatest concern). I am looking at what role the I, II vs. III layering plays in decreasing playability and functionality in the end file. Or what are the differences in the different encoder types (I hope this is the right phrase for the three "layers" associated with MPEG 2). Ultra57 (talk) 22:36, 17 June 2009 (UTC)
- Are you asking how to do this? or if the article should have it? before i answer i need to know befor ei might make the wrong assumption--Andy Chat c 22:42, 17 June 2009 (UTC)
- UPDATE - I may have stumbled on the wrong area of the WIKI in that I am researching for MP3's. I would still be interested in the Layer differences for this subject. I am trying to determine what the ramifications of Layer I vs. Layer II vs. Layer III have on playability and functionality. Will this change an MP3 file's ability to maybe play on an older player or will functionality be lost on those older players. Are these development phases that involved improvements to security or some other type of improvement? What are the general differences between I, II and III. I am looking at LAME and it appears to have all three layers (just depending on the version). Other encoders may have all three in various versions also. What I was attempting to determine was the ramifications and justifications for the I, II or III, and should I be expecting a IV? == Felt the WIKI would benefit from such information. Ultra57 (talk) 01:59, 18 June 2009 (UTC)
- So basically you are treating this like answer forum? if so i advise you not to it against wikipedia polocies. if you want to know about techincal merits of the encoders there is a part of wikipedia where they answer those type of questions is about problem wiht the articles.--Andy Chat c 12:08, 18 June 2009 (UTC)
[edit] Articles for deletion nomination of Grace (plotting tool)
I have nominated Grace (plotting tool), an article that you created, for deletion. I do not think that this article satisfies Wikipedia's criteria for inclusion, and have explained why at Wikipedia:Articles for deletion/Grace (plotting tool). Your opinions on the matter are welcome at that same discussion page; also, you are welcome to edit the article to address these concerns. Thank you for your time.
Please contact me if you're unsure why you received this message. Papa November (talk) 12:43, 19 June 2009 (UTC)
[edit] The OS commands collections
We've now got 161 DOS commands to approve, and 310 Unix ones. Before create the missing articles, please adopt the norm : article name = "command (DOS) or (Unix)". JackPotte (talk) 04:10, 23 June 2009 (UTC)
[edit] COmputing article layout
Is there any page in the project that shows hwoa page should be laid out? apart from MOS which i can not even find a link to
If not i think we should designa layout that computer article shoudl take so that we can work on getting articles up to FA status or FL status--Andy (talk - contrib) 12:49, 23 June 2009 (UTC)
[edit] Consensus Please
[edit] List of convicted computer criminals at FLC
Please comment at Wikipedia:Featured list candidates/List of convicted computer criminals/archive1. Dabomb87 (talk) 15:08, 24 June 2009 (UTC)
[edit] Wireless network standards
Anyone familiar with the topic? I just deprodded Isa100.11a as it is a developing standard that seems notable judging from the coverage in the tech press, but I don't know enough about this area to work on the article. Ta v much. Fences&Windows 01:03, 25 June 2009 (UTC)
[edit] New Interaction & Usability task force
I want to contact people interested in Usability, Human-computer interaction, Information visualization, Interface design, Web design and Information architecture willing to create a task force within the WP:Computing project. The goals for this task force should be (among others):
- to create and expand articles in these topics lacking content,
- document the history and relevant actors of this movement,
- cross-reference the primary concepts as defined in the articles,
- identify Computing articles with special interest for this group,
- and increase awareness of the discipline by referencing topics in this area from other computing articles when relevant.
I've not participated in any WP task force before, and I'm not quite sure what is the bureaucracy involved. How and where can I publicize this initiative, what are the required steps to create a task force, and to whom should I ask approval within the Computing project?
Thanks, Diego (talk) 14:23, 25 June 2009 (UTC)
- This is the information i know when i asked on helpdesk, first you ha ve to get a consensus on the parent project first and if there is then it has to be decided if the taskforce is needed.Then you have the job of creating the task force etc which i believe is fair easy compare to making sub project. There will be other on here with experaince who can better serve you if it to go ahead and help dcreate it with the experaince. Good luck.
- Support as i have no objection and feel it help general imrpove these type of articles--Andy (talk - contrib) 14:40, 25 June 2009 (UTC)
[edit] AfD - List of books on the history of computing
I've suggested at the Wikipedia:Articles for deletion/List of books on the history of computing that the article might be useful in this wikiproject's namespace, as a subpage(s). Anyone interested? -- Quiddity (talk) 03:34, 28 June 2009 (UTC)
[edit] Conversions to/from WebArchive format
At under the subheading 'Conversions' I have three external links; one for each type of conversion.
Is the exceptional treatment of those three external links reasonable?
Key points:
- there is much misunderstanding about WebArchive format and its place within open source
- a typical misconception is that WebArchive is proprietory, and/or for Safari only
so:
- in the page, I aim to draw attention to all supporting and related applications, classes, convertors and utilities
— without bias
— with special reference to conversion/interop.
Grahamperrin (talk) 09:08, 28 June 2009 (UTC)
[edit] Manufacturers of computer peripherals including HDD
Having just bought a new Buffalo Technology external HDD from Staples in Epsom I was searching for information about the company 'Buffalo Technology' and the technology of HDD. I found it disconcerting that Buffalo Technology (aka Melco, [1]) was not listed as one of the 5 companies that make HDD [2]. Now it could be that one of the 5 make HDD for Buffalo, but there is no (none whatsoever) credit to any other company on the equipment itself or documentation or company website. I also considered if it could be that Buffalo are so small that they are not worth mentioning, though the article on HDD manufacturers strongly infers that there are no other manufacturers and a further article [3] lists the companies that have gone out of business. Speaking at ITCVE 2008, Dave Gibson (employee of Buffalo Technology) revealed that Buffalo was the number one memory vendor in Japan (see external reference [4]). I am not sure of the validity of this reference.
Is there a good reason why the pages on Melco can not be linked with manufacturers of HDD? If not I would like to propose that reference 1 is linked to reference 2 directly with an insert/edit explaining that there are other manufacturers of HDD either specifically or by reference to another list of other manufacturers of HDD (including the listed competitors of Buffalo Technology in reference 1).
There is a vaugue mention of Mitsubishi on reference 1 and Mitsubishi leaving the industry on reference 2 but then there is no actual link affiliation between Melco and Mitsubishi. This could be a red herring.
RobertHeathfield (talk) 04:53, 2 July 2009 (UTC) | http://ornacle.com/wiki/Wikipedia_talk:COMP | crawl-002 | refinedweb | 2,761 | 58.01 |
Difference between revisions of "Lifting"
Revision as of 10:12, 19 January 2013
Lifting is a concept which allows you to transform a function into a corresponding function within another (usually more general) setting.
Lifting in general
We usually start with a (covariant) functor, for simplicity we will consider the Pair functor first. Haskell doesn't allow a
type Pair a = (a, a) to be a functor instance, so we define our own Pair type instead.
data Pair a = Pair a a deriving Show instance Functor Pair where fmap f (Pair x y) = Pair (f x) (f y)
If you look at the type of
fmap (
Functor f => (a -> b) -> (f a -> f b)), you will notice that
fmap already is a lifting operation: It transforms a function between simple types
a and
b into a function between pairs of these types.
lift :: (a -> b) -> Pair a -> Pair b lift = fmap plus2 :: Pair Int -> Pair Int plus2 = lift (+2) -- plus2 (Pair 2 3) ---> Pair 4 5
Note, however, that not all functions between
Pair a and
Pair b can constructed as a lifted function (e.g.
\(x, _) -> (x, 0) can't).
A functor can only lift functions of exactly one variable, but we want to lift other functions, too:
In a similar way, we can define lifting operations for all containers that have "a fixed size", for example for the functions from
Double to any value
((->) Double), which might be thought of as values that are varying over time (given as
Double). The function
\t -> if t < 2.0 then 0 else 2 would then represent a value which switches at time 2.0 from 0 to 2. Using lifting, such functions can be manipulated in a very high-level way. In fact, this kind of lifting operation is already defined.
Control.Monad.Reader (see MonadReader) provides a
Functor,
Applicative,
Monad,
MonadFix and
MonadReader instance for the type
(->) r. The
liftM (see below) functions of this monad are precisely the lifting operations we are searching for.
If the containers don't have fixed size, it's not always clear how to make lifting operations for them. The
[] - type could be lifted using the
zipWith-family of functions or using
liftM from the list monad, for example.
Applicative lifting
This should only provide a definition what lifting means (in the usual cases, not in the arrow case). It's not a suggestion for an implementation. I start with the (simplest?) basic operations
zipL, which combines to containers into a single one and
zeroL, which gives a standard container -}
Today we have the
Applicative class that provides Applicative functors. It is equivalent to the
Liftable class.
pure = liftL0 (<*>) = appL zeroL = pure () zipL = liftA2 (,)
In principle,
Applicative should be a superclass of
Monad, but chronologically
Functor and
Monad were before
Applicative.
Unfortunately, inserting
Applicative between
Functor and
Monad in the subclass hierarchy would break a lot of existing code and thus has not been done as of today (Jan 2013).
Monad lifting
Lifting is often used together with monads. The members of the
liftM-family take a function and perform the corresponding computation within the monad.
return :: (Monad m) => a -> m a liftM :: (Monad m) => (a1 -> r) -> m a1 -> m r liftM2 :: (Monad m) => (a1 -> a2 -> r) -> m a1 -> m a2 -> m r
Consider for example the list monad (MonadList). It performs a nondeterministic calculation, returning all possible results.
liftM2 just turns a deterministic function into a nondeterministic one:
plus :: [Int] -> [Int] -> [Int] plus = liftM2 (+) -- plus [1,2,3] [3,6,9] ---> [4,7,10, 5,8,11, 6,9,12] -- plus [1..] [] ---> _|_ (i.e., keeps on calculating forever) -- plus [] [1..] ---> []
Every
Monad can be made an instance of
Liftable using the following implementations:
{-# OPTIONS -fglasgow-exts #-} {-# LANGUAGE AllowUndecidableInstances #-} import Control.Monad instance (Functor m, Monad m) => Liftable m where zipL = liftM2 (\x y -> (x,y)) zeroL = return ()
Lifting becomes especially interesting when there are more levels you can lift between.
Control.Monad.Trans (see Monad transformers) defines a class
class MonadTrans t where lift :: Monad m => m a -> t m a -- lifts a value from the inner monad m to the transformed monad t m -- could be called lift0
lift takes the side effects of a monadic computation within the inner monad
m and lifts them into the transformed monad
t m. We can easily define functions which lift functions between inner monads to functions between transformed monads. Then we can perform three different lifting operations:
liftM".
Arrow lifting
Until now, we have only considered lifting from functions to other functions. John Hughes' arrows (see Understanding arrows) are a generalization of computation that aren't functions anymore. An arrow
a b c stands for a computation which transforms values of type
b to values of type
c. The basic primitive
arr, aka
pure,
arr :: (Arrow a) => b -> c -> a b c
is also a lifting operation. | http://wiki.haskell.org/index.php?title=Lifting&diff=prev&oldid=55265 | CC-MAIN-2020-10 | refinedweb | 817 | 50.46 |
MXNet NDArray: Convert NumPy Array To MXNet NDArray
MXNet NDArray - Convert A NumPy multidimensional array to an MXNet NDArray so that it retains the specific data type
< > Code:
Transcript:
We start by importing NumPy as np
import numpy as np
And then we print the NumPy version that we are using.
print(np.__version__)
We are using version 1.13.3.
Next, we import MXNet as mx
import mxnet as mx
And then we print the MX version that we are using.
print(mx.__version__)
We’re using MXNet version 0.12.1.
First, we’re going to create a NumPy example integer array using the np.array functionality.
numpy_ex_int_array = np.array([ [[1,2,3,4], [2,3,4,5], [3,4,5,6]], [[4,5,6,7], [5,6,7,8], [6,7,8,9]] ], dtype=np.int32)
It’s going to be 1, 2, 3, 4, so on and so forth, and the data type that we are using is np.int32.
We can look at the shape of this example variable
numpy_ex_int_array.shape
And we see that it is 2x3x4.
Next, we check the data type of the numpy_ex_int_array
numpy_ex_int_array.dtype
And see that it is int32.
Finally, we can print it
print(numpy_ex_int_array)
And we see that it is in fact a 2x3x4 tensor or 2x3x4 multidimensional array.
To convert this NumPy multidimensional array to an MXNet NDArray, we’re going to use the mx.nd.array functionality and pass in our numpy_ex_int_array and then we assign that to the mx_ex_int_array Python variable.
mx_ex_int_array = mx.nd.array(numpy_ex_int_array)
Once we have this, we can check the shape
mx_ex_int_array.shape
And we see that it is in fact 2x3x4 which is what we would expect.
Next, we can check the data type
mx_ex_int_array.dtype
and see that it is numpy.float32.
Finally, we can print the converted array
print(mx_ex_int_array)
And we see that it is an MXNet NDArray that is 2x3x4 and the context is currently my CPU.
The second example that we’re going to do is we’re going to do a floating NumPy multidimensional array.
numpy_ex_float_array = np.array([ [[.1,.2,.3,.4], [.2,.3,.4,.5], [.3,.4,.5,.6]], [[.4,.5,.6,.7], [.5,.6,.7,.8], [.6,.7,.8,.9]] ], dtype=np.float32)
We use the np.array functionality again.
This time there is a decimal point in front of all the numbers.
So 0.1, 0.2, 0.3, 0.4, so on and so forth, and we’re going to assign that to the Python variable, numpy_ex_float_array.
We can check the shape of this multidimensional array
numpy_ex_float_array.shape
And we see that it is 2x3x4.
Next, we can check the data type
numpy_ex_float_array.dtype
And see that it is float32 which is what we would expect given we defined it as np.float32.
Then we can print this multidimensional array.
print(numpy_ex_float_array)
Because it is a floating point number, we see 0.1, 0.2, 0.30000001.
So we see that they are all floating numbers for this multidimensional array.
Then to convert this NumPy multidimensional array to an MXNet NDArray, we use the mx.nd.array functionality and pass in our numpy_ex_float_array and assign it to the Python variable, mx_ex_float_array.
mx_ex_float_array = mx.nd.array(numpy_ex_float_array)
We can check the shape of this NDArray
mx_ex_float_array.shape
And we see that it is 2x3x4 which is what we expect.
Next, we can check the data type of our converted array
mx_ex_float_array.dtype
and see that it is a numpy.float32 data type.
Lastly, we can print this converted NumPy multidimensional array
print(mx_ex_float_array)
And we see that it is an MXNet NDArray that is 2x3x4.
The context is the CPU.
If we compare the numbers in the tensor, we see 0.1, 0.2, 0.3 with a bunch of zeros and then a 1 (0.30000001), we see 0.1, 0.2, 0.3 with a bunch of zeros and then a 1 (0.30000001).
So this is the way we convert a NumPy multidimensional array to an MXNet NDArray using the mx.nd.array functionality. | https://aiworkbox.com/lessons/convert-numpy-array-to-mxnet-ndarray | CC-MAIN-2020-40 | refinedweb | 687 | 75.2 |
[
]
Ben Craig updated THRIFT-1753:
------------------------------
Attachment: (was: cleaner_port.patch)
> Multiple C++ Windows, OSX, and iOS portability issues
> -----------------------------------------------------
>
> Key: THRIFT-1753
> URL:
> Project: Thrift
> Issue Type: Bug
> Components: C++ - Library
> Affects Versions: 0.9, 1.0
> Environment: Windows MSVC10, MSVC11
> OSX GCC-4.2
> iOS Clang-4.0
> Reporter: Ben Craig
> Fix For: 1.0
>
> Attachments: cleaner_port2.patch
>
>
> These are all in the C++ library. Here is a summary of what I changed.
> All of these fixes make a ~5000 line patch (though a lot of that is
> deleted lines).
> * General cleanup of the msvc project
> * Using HAVE_CONFIG_H instead of force including files
> * Getting rid of some unnecessary files (stdafx.*, TargetVersion.h)
> * Significant rework of windows portability. No longer using config.h
> and force_inc.h to make Windows look like *nix. Instead, making lots of
> Thrift specific #defines that are vaguely *nixy, and having those forward
> to *nix or Windows stuff appropriately. For example, THRIFT_CTIME_R calls
> ctime_r on *nix, and a wrapper thrift_ctime_r on Windows. The old
> approach doesn't work when multiple libraries attempt the same trick. For
> example, if openssl #defined errno to ::WSAGetLastError() as well, then
> that would cause problems.
> * Adding preprocessor flag that can optionally squelch console output.
> Default behavior is unchanged. Console output is great for home deployed
> server apps, but it looks unprofessional for consumer apps.
> * Adding THRIFT_UNUSED_VARIABLE helper macro, to aid in squelching
> warnings.
> * Adding redirector header for <functional> and <tr1/functional>. Since
> namespaces aren't consistent (std vs std::tr1), I have added symbols to
> the apache::thrift::stdcxx namespace. This is important for Clang / iOS
> support
> * usleep and sleep on Windows were both sleeping in milliseconds. sleep
> now correctly sleeps for seconds, and usleep attempts to sleep for
> microseconds (after converting very coarsely to milliseconds).
> * Adding support for using C++11 std::thread (and mutex, and monitor).
> Thrift already supported boost::thread and posix threads. Clients that
> use std::thread no longer need built boost libraries. The boost headers
> are sufficient for them. Switching from boost::thread to std::thread
> resulted in a ~50k reduction in exe size in my tests. By default, msvc10
> and below will use boost::thread, msvc11 and up will use std::thread.
> * Fixing more 64-bit socket truncation issues in non-blocking server and
> ssl support. openssl itself has socket truncation issues, so I could not
> fix them all.
> * Fixed THRIFT-1692 "SO_REUSEADDR allows for socket hijacking on Windows
> ". Now using SO_EXCLUSIVEADDRUSE on windows, and SO_REUSEADDR on *nix.
> * Making TFileTransport use thrift style threads instead of redoing all
> the pthread+boost stuff itself.
> * Includes, and builds upon THRIFT-1740 (Make C++ library build on OS X
> and iOS)
> * Moved several functions out of thrift/windows/config.h, and into other
> thrift/windows headers.
> * Using built-in stdint.h on Windows if available (by checking HAVE_STDINT_H) and
> using boost typedefs otherwise.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/thrift-dev/201211.mbox/%3C304874248.125679.1353100032274.JavaMail.jiratomcat@arcas%3E | CC-MAIN-2015-14 | refinedweb | 498 | 60.61 |
- Software
- Capture
- Graphics Conversion
- 3D Modeling
- 3D Rendering
- Presentation
- Viewers
- Image Galleries
- Hot topics in Graphics Software3d game opengl source code digital archive image tagging floor plan wpf create pdf opengl 2.0 print to pdf c# wpf
Pictag
Pictag is a simple web photo gallery with automatic thumbnail generation and tagging capability. It's self-contained in a single file, uses the file system for gallery layout, and requires no stand-alone database (it uses SQLite) or no database at allbion digital content index/management
Albion is a digital document archive presentation package. Features include a searchable database, an integrated spell checker, and automatic thumbnail generation. As native Perl as possible, the database/photos/search-function can be delivered on CD!
BrickDraw3D for MacOS
BrickDraw3D is a MacOS program for building with virtual Lego bricks. It is based on the LDraw parts library and is useful for viewing and editing models in the DAT format. See for examples.
Fontboy
Fontboy is a font viewer for BeOS. Fontboy let you browse your installed fonts and gives you a detailed view on each font including the complete unicode character set..
G-House 3D
Tool to design a house in Gnome. It creates a VRML97 model from a floor::ColorNames
A Perl-module to map color names to RGB codes.
Inventoriana
Inventoriana is an open source software program aimed at helping individuals and institutions catalog and describe more fully digitized images. It is extensible, flexible, and easy to use.
NetSlideLive
NetSlide Live gives you the posibility to control a slideshow over an intern network.
PDF::Create
PDF::Create is a Perl module that allows you to create PDF documents, possibly on the fly, using a large number of primitives.
Perl OpenGL 2
Perl bindings to the Modern OpenGL API
-
PhotoShelf
Web based digital photo archiving and management system, written entirely in perl. Complete with almost all features necessary for backup and public viewing.
PowerPlay
A project designed to make it easier to learn things like species and other things common to school.
Shell Controls Lib
A set of classes to browse the Windows Shell namespace
Toon! Icon Theme
Toon is an icon theme for Gnome/KDE. It is an extension and translation of the Gartoon theme as originally drawn by zeus. It follows the the new tango and cross-platform guidelines from freedesktop.
WPF Media Tagger
Create tags for your media files. Define Start/End time and additional information for each media tag. Later, you can play the tags using an embedded media player.
- | https://sourceforge.net/directory/graphics/graphics/developmentstatus:beta/license:artistic/ | CC-MAIN-2017-22 | refinedweb | 422 | 55.64 |
x KiPython Web Development Techdegree Graduate 14,810 Points
Unable to open database file Error
Please, help, Kenneth Love ! I've got a problem: I can initialize the db from console with models.initialize() BUT when I run my app i get an exception right on initialization. Can't recognize the problem..
app.py:
from flask import Flask, g, jsonify, render_template import config import models from resources.todos import todos_api app = Flask(__name__) app.register_blueprint(todos_api, url_prefix='/api/v1') @app.route('/') def my_todos(): return render_template('index.html') if __name__ == '__main__': models.initialize() app.run(debug=config.DEBUG)
models.py:
import datetime from peewee import * import config DATABASE = SqliteDatabase('todos.sqlite', threadlocals=True) HASHER = PasswordHasher() class User(Model): username = CharField(unique=True) email = CharField(unique=True) password = CharField() class Meta: database = DATABASE class Todo(Model): name = CharField(max_length=255, unique=True) completed = BooleanField(default=False) created_at = CharField(default=datetime.datetime.now) class Meta: database = DATABASE def initialize(): DATABASE.connect() DATABASE.create_tables([User, Todo], safe=True) DATABASE.close()
exception:
sqlite3.OperationalError: unable to open database file During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C://angular-todo_v2/app.py", line 18, in <module> models.initialize() File "C:\\angular-todo_v2\models.py", line 70, in initialize DATABASE.connect()
4 Answers
Kenneth LoveTreehouse Guest Teacher
So, I got it to work. Not sure exactly why this works, though.
import os DATABASE = SqliteDatabase( os.path.join( os.path.dirname(os.path.realpath(__file__)), 'todos.db' ), threadlocals=True )
All this is really doing is finding the current directory and then prepending that to the database file name.
You'd probably want to put this in
config.py or something as like
BASE_DIR and then use that everywhere you need a file path.
So weird.
Kenneth LoveTreehouse Guest Teacher
Uh, shear doggedness? :)
It wasn't able to find the file when the
app.py script was run but it did work when the function was run directly through the shell. Well, the only thing that changes between those is the entry point, right? Now, they're both in the same directory, so I don't see why that matters at all but that's what lead me down that path. When I hardcoded a full path, the
app.py function call worked. So...use Python to find the full path.
Daniel Santos34,969 Points
Kenneth Love & Alexey Kislitsin ,
You guys saved my life, I was trying to solve this problem for hours at work, I guess Kenneth too. Anyhow thank you.
Alexey, are you using windows? I am wondering because I have used peewee many times in OSX and Linux with not problem. The only time I've seen this in on windows.
Just for future reference, I believe that the 'threadlocals' argument is 'True' by default in the BaseModel class definition in peewee.py
-Dan
Kenneth LoveTreehouse Guest Teacher
Just checking, after you run
models.initialize(), does the DB file exist? Does it have any weird permissions that would stop you from accessing it (unlikely but still good to check)?
Alx KiPython Web Development Techdegree Graduate 14,810 Points
Hello, Kenneth! If I run:
import models models.initialize()
From console, everything runs as expected. DB file being created well.
But if i run app.py (no matter with existing DB or without) than exception happens.
PS Just realized that same thing happens with courses.py from Flask REST API. It worked well couple days ago! Must be something wrong.. reinstalled venv - does not help.
Traceback (most recent call last): File "C:\\virtenv\lib\site-packages\peewee.py", line 3600, in connect **self.connect_kwargs) File "C:\\virtenv\lib\site-packages\peewee.py", line 3869, in _connect conn = sqlite3.connect(database, **kwargs) sqlite3.OperationalError: unable to open database file During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C://courses/courses.py", line 40, in <module> models.initialize() File "C:\\courses\models.py", line 81, in initialize DATABASE.connect() File "C:\\virtenv\lib\site-packages\peewee.py", line 3602, in connect self.initialize_connection(self._local.conn) File "C:\\virtenv\lib\site-packages\peewee.py", line 3514, in __exit__ reraise(new_type, new_type(*exc_args), traceback) File "C:\\virtenv\lib\site-packages\peewee.py", line 134, in reraise raise value.with_traceback(tb) File "C:\\virtenv\lib\site-packages\peewee.py", line 3600, in connect **self.connect_kwargs) File "C:\\virtenv\lib\site-packages\peewee.py", line 3869, in _connect conn = sqlite3.connect(database, **kwargs) peewee.OperationalError: unable to open database file
What kind of permissions do you mean?
Kenneth LoveTreehouse Guest Teacher
File and folder permissions. If, for some extremely odd reason you could write but not read or something.
Hmm, wonder if there have been any updates to Peewee to make this work differently. I'll check and report back.
EDIT
Nope, the docs still support exactly this format. Let me try it myself.
Kenneth LoveTreehouse Guest Teacher
WEIRD
I get the exact same output (so you're not hallucinating!) but I have no idea why it would do that.
Still investigating.
Alx KiPython Web Development Techdegree Graduate 14,810 Points
Daniel Santos Yes I use Windows and PyCharm, but an interesting thing is that it worked before.
Alx KiPython Web Development Techdegree Graduate 14,810 Points
Alx KiPython Web Development Techdegree Graduate 14,810 Points
Thank You! I'd spend years on it! Probably whole life.))
So it founds its place with os.path.realpath(file) and then "puts it here"/todos.db. OK... But why? I mean DB is in the same folder!
Can I ask how did you get to it? | https://teamtreehouse.com/community/unable-to-open-database-file-error | CC-MAIN-2022-27 | refinedweb | 936 | 62.54 |
Coinversation: The first synthetic asset protocol on Polkadot
Coinversation Protocol is a. Users can forge a certain synthetic asset by collateralizing CTO or DOT, such as U.S. dollars, and automatically have a long position in the asset. Users can also convert minted assets into other assets through the trading platform, so as to realize the purpose of shorting the asset and longing other assets.).
The main functional modules of the entire system include: forging synthetic assets(MintC), DEX, collateral pools, fee pools, oracles, and liquidity mining.
1. The user deposits CTO or DOT to mint the system’s default stable currency cUSD. The mortgage ratio of CTO is 800% (provisional), and the mortgage ratio of DOT is 500% (provisional). That means CTO worth $800 or DOT worth $500 can mint for $100 CUSD. The user’s staking ratio should be as high as possible. When the DOT price drops, the staking ratio may be lower than 800%. At this time, the user should replenish DOT or return (destroy) a part of CUSD. The system stipulates that users who are greater than or equal to the specified mortgage ratio will get the dividend of the transaction fee in the fee pool as an incentive.
2. cUSD is a type of synthetic asset and the standard currency of the entire system. All debts will convert into cUSD for calculation. cUSD is also a stable currency, and its value in the system always define as $1. cUSD can convert into other synthetic assets in a decentralized contract exchange, such as cryptocurrencies such as cBTC, cETH, cDOT, foreign exchange such as the Euro, Japanese Yen, and RMB, and even gold and various stocks. It also supports buying long and short selling. All of these assets are synthesized by the system. They are not real assets. Their conversion rate is determined by the external real price provided by the oracle. This conversion process does not require a counterparty, and users can always convert all of their CUSD into any synthetic assets supported by the system.
3. The respective debt ratios of all users in the system are determined when staking the CTO or DOT to mint CUSD, and it has nothing to do with the price of other synthetic assets after conversion. The debt ratio will change only when the user mints or destroys CUSD. The debt pool is the sum of all user debts. The changes in asset prices will change the debt. Through a constant debt ratio, we can calculate the profit of each user.
4. When the decentralized contract exchange performs synthetic asset conversion (ie. transaction), users need to pay a 0.3% (provisional) handling fee, and these handling fees enter the system fee pool. The fee pool distributes dividends to users who meet the specified staking ratio in the entire system every two weeks (provisional). The dividend ratio is determined according to the debt ratio. New users need to hold debts for more than a certain number of days or accumulate use them for more than a certain number of days (to be determined) to be eligible for the fee pool dividend.
5. The prices or conversion exchange rates of all synthetic assets in the system are provided by the oracle reading external exchange data, and future planning can introduce decentralized oracles.
The Goal
Realize a decentralized virtual asset issuance platform and contract trading platform, which can replace the perpetual/future contract functions of major centralized exchanges in the long term. even issue any type of assets on the agreement, in the traditional financial market To have a role to play.
Advantage
Centralized contract exchanges have exposed more and more problems, and the entire industry needs a solution for decentralized contract exchanges. The project’s decentralized contract trading program has the characteristics of general DEX openness, transparency, anti-censorship, and no KYC. It is also because there is no counterparty in the project, it perfectly solves the problems of general DEX transaction depth and liquidity. We believe that the prospect of this project is very broad and it is a real DEX solution.
Team
Coinversation Protocol was founded by a Ph.D. team in economics in the United States. It also has a technology development and operation team in China, with members from Alibaba, Ant Financial, Peking University, and other first-line technology companies and universities. | https://medium.com/coinversation-protocol/coinversation-the-first-synthetic-asset-protocol-on-polkadot-3da9bd28ddd1?source=user_profile---------2---------------------------- | CC-MAIN-2022-27 | refinedweb | 726 | 54.22 |
import java.net.InetAddress; 31 32 import org.apache.http.HttpHost; 33 34 /** 35 * Read-only interface for route information. 36 * 37 * @since 4.0 38 */ 39 public interface RouteInfo { 40 41 /** 42 * The tunnelling type of a route. 43 * Plain routes are established by connecting to the target or 44 * the first proxy. 45 * Tunnelled routes are established by connecting to the first proxy 46 * and tunnelling through all proxies to the target. 47 * Routes without a proxy cannot be tunnelled. 48 */ 49 public enum TunnelType { PLAIN, TUNNELLED } 50 51 /** 52 * The layering type of a route. 53 * Plain routes are established by connecting or tunnelling. 54 * Layered routes are established by layering a protocol such as TLS/SSL 55 * over an existing connection. 56 * Protocols can only be layered over a tunnel to the target, or 57 * or over a direct connection without proxies. 58 * <p> 59 * Layering a protocol 60 * over a direct connection makes little sense, since the connection 61 * could be established with the new protocol in the first place. 62 * But we don't want to exclude that use case. 63 * </p> 64 */ 65 public enum LayerType { PLAIN, LAYERED } 66 67 /** 68 * Obtains the target host. 69 * 70 * @return the target host 71 */ 72 HttpHost getTargetHost(); 73 74 /** 75 * Obtains the local address to connect from. 76 * 77 * @return the local address, 78 * or {@code null} 79 */ 80 InetAddress getLocalAddress(); 81 82 /** 83 * Obtains the number of hops in this route. 84 * A direct route has one hop. A route through a proxy has two hops. 85 * A route through a chain of <i>n</i> proxies has <i>n+1</i> hops. 86 * 87 * @return the number of hops in this route 88 */ 89 int getHopCount(); 90 91 /** 92 * Obtains the target of a hop in this route. 93 * The target of the last hop is the {@link #getTargetHost target host}, 94 * the target of previous hops is the respective proxy in the chain. 95 * For a route through exactly one proxy, target of hop 0 is the proxy 96 * and target of hop 1 is the target host. 97 * 98 * @param hop index of the hop for which to get the target, 99 * 0 for first 100 * 101 * @return the target of the given hop 102 * 103 * @throws IllegalArgumentException 104 * if the argument is negative or not less than 105 * {@link #getHopCount getHopCount()} 106 */ 107 HttpHost getHopTarget(int hop); 108 109 /** 110 * Obtains the first proxy host. 111 * 112 * @return the first proxy in the proxy chain, or 113 * {@code null} if this route is direct 114 */ 115 HttpHost getProxyHost(); 116 117 /** 118 * Obtains the tunnel type of this route. 119 * If there is a proxy chain, only end-to-end tunnels are considered. 120 * 121 * @return the tunnelling type 122 */ 123 TunnelType getTunnelType(); 124 125 /** 126 * Checks whether this route is tunnelled through a proxy. 127 * If there is a proxy chain, only end-to-end tunnels are considered. 128 * 129 * @return {@code true} if tunnelled end-to-end through at least 130 * one proxy, 131 * {@code false} otherwise 132 */ 133 boolean isTunnelled(); 134 135 /** 136 * Obtains the layering type of this route. 137 * In the presence of proxies, only layering over an end-to-end tunnel 138 * is considered. 139 * 140 * @return the layering type 141 */ 142 LayerType getLayerType(); 143 144 /** 145 * Checks whether this route includes a layered protocol. 146 * In the presence of proxies, only layering over an end-to-end tunnel 147 * is considered. 
148 * 149 * @return {@code true} if layered, 150 * {@code false} otherwise 151 */ 152 boolean isLayered(); 153 154 /** 155 * Checks whether this route is secure. 156 * 157 * @return {@code true} if secure, 158 * {@code false} otherwise 159 */ 160 boolean isSecure(); 161 162 } | http://hc.apache.org/httpcomponents-client-dev/httpclient/xref/org/apache/http/conn/routing/RouteInfo.html | CC-MAIN-2015-35 | refinedweb | 628 | 69.82 |
deform_autoneed 0.2.2b
Auto include resources in deform via Fanstatic.
Deform Autoneed README
A simple package to turn any deform requirements into Fanstatic resources and serve them.
Some ideas were taken from js.deform, but this package is in many ways its absolute opposite: It only serves whatever content deform ships with. Hence it should be compatible with any version of deform.
Note
Note: This package patches deforms render function the same way as js.deform does. If you don’t want that, you can include the rendering yourself.
- Tested with the following deform/Python versions:
- Python 2.7, 3.2, 3.3
- deform 0.9.5 - Python 2.7
- deform 0.9.9 - Python 2.7, 3.2, 3.3
- deform 2.0a.2 - Python 2.7, 3.2, 3.3
It should be compatible with most fanstatic versions, including current stable 0.16 and future 1.0x.
This package should also work with future versions of deform that are somewhat API-stable. Should be framework agnostic and compatible with anything that Fanstatic works on. (Any WSGI)
Simple usage
During startup procedure of your app, simply run:
from deform_autoneed import includeme includeme()
Or if you use the Pyramid framework:
config.include('deform_autoneed')
This will populate the local registry with any resources that deform widgets might need, and patch deforms render function so they’re included automatically.
And that’s it!
Using registered resources in other pages
deform prior to 2 depends on jquery, while deform 2 depends on jquery and bootstrap. If you want any of these base packages in any other view that isn’t a form, simply:
from deform_autoneed import need_lib need_lib('basic')
Basic means any base requirements of deform itself. You may also call other deform dependencies here. Essentially, you can use any key from deforms default resource registry in: deform.widget.default_resources.
Replacing a resource requirement
If you wish to replace a resource with something else, ResourceRegistry has a method for that. It will have an effect on everything that might depend on that resource.
Example:
deforms form.css is a registered requirement. We’ll replace it with out own css, where our_css is a fanstatic resource object.
resource_registry.replace_resource(‘deform:static/css/form.css’, our_css)
Note that replace_resource accepts either fanstatic.Resource“-objects or paths with package name, like ‘deform:static/css/form.css’ as arguments.
Registering a custom widgets resources
If you’re using any widgets/forms in deform that require non-standard plugins, you can register them within this package to include them.
First, create a Fanstatic library for your resources and an entry point in your setup.py. (See the Fanstatic docs for this)
from fanstatic import Library my_lib = Library('my_lib', 'my/static')
Add your library to autoneed’s registry:
from deform_autoneed import resource_registry resource_registry.libraries['my_package_name'] = my_lib
If you have structured your requirements the same way as in deform.widget.default_resources, and your directory for static resources is called static, you can call the method populate from resources to automatically create your package.
resource_registry.populate_from_resources(your_resources)
If not, you can simply add the requirements using the method create_requirement_for.
resource_registry.create_requirement_for('my_special_widget', ['my_package_name:my/static/css/cute.css', 'my_package_name:my/static/js/annoying.js'], )
In other words, this example had the directory layout, where the static directory is the base of your fanstatic library.
- my_package_name/
- my/
- static/
- css/
- js/
And the custom widget will require something called ‘my_special_widget’. (See the deform docs on custom widgets)
After this, your dependencies will be included automatically whenever deform needs them.
Bugs, contact etc…
- Source/bug tracker: GitHub
- Initial author and maintainer: Robin Harms Oredsson mailto:[email protected]
- License: GPLv3 or later
Changelog
0.2.2b (2014-04-08)
- Resource dependencies consider the order deform list them. A widget requirement with several listed resources will have them depend on each other in order.
0.2.1b (2014-04-08)
- NOTE: remove_resources changed to remove_resource - it only accepts one resource now.
- Replacing resources may require to replace dependencies as well. This is now the default option for replace_resource and remove_resource.
0.2b (2014-03-25)
- New methods to interact and replace resources.
- ResourceRegistry objects now keep track of fanstatic.Resources in ResourceRegistry.requirements, rather than file paths.
- create_requirement_for now figures out proper paths from fanstatic libraries, so just specify proper package paths like: package_name:some/dir/with/file.js.
0.1b (2014-03-21)
- Initial version
- Downloads (All Versions):
- 17 downloads in the last day
- 71 downloads in the last week
- 80 downloads in the last month
- Author: Robin Harms Oredsson
- Keywords: web colander deform fanstatic
- Categories
- Development Status :: 4 - Beta
- Intended Audience :: Developers
- License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)
- Programming Language :: Python
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3
- Topic :: Internet :: WWW/HTTP
- Topic :: Internet :: WWW/HTTP :: WSGI :: Application
- Package Index Owner: betahaus
- DOAP record: deform_autoneed-0.2.2b.xml | https://pypi.python.org/pypi/deform_autoneed/0.2.2b | CC-MAIN-2015-11 | refinedweb | 814 | 50.12 |
Making a macOS app
There are some things I’ve been wanting to write for a while that just don’t make sense as a web app. Since I’m primarily using macOS now (for leisure at least), I want a macOS app.
After much digging and despairing, the best solution I’ve found for creating GUI apps on macOS is actually Xamarin.Mac from Microsoft. There is definitely an irony there.
(And to be up-front here — it’s the state of things which is bad, not Xamarin.)
Swift
Swift is nice, but I can't even find basics like “how do I read the contents of a directory”. I’m sure that’s a lack in my Google-fu - it’s a fairly basic requirement - but if a web search doesn’t show up something like that, how’s it going to fare when I hit something more complicated?
Update: I’ve since found how to do this, thanks to a StackOverflow answer (tagged as iOS for extra fun). I had tried that answer before and it failed to build, with Xcode complaining that
FileManager didn’t exist. Turns out you need to
import Foundation.
JavaScript
I’m not un-familiar with JavaScript these days but none of the "JavaScript as an application" things I've looked at give me warm fuzzies.
(So far that’s mostly Electron and nwjs - there was something else but I’ve forgotten it.)
Update: Trey has pointed me at Tauri, which I now remember was what I looked at briefly before. I don’t remember why I discounted it before; it certainly looks better than Electron. I’ll give it another try and see how it goes.
Apart from the general thing about carrying around hundreds of megabytes of JavaScript interpreter, the whole thing just feels rather unwieldy. It reminds me of a phrase I came across a long time back.
Imagine a stegosaurus wearing rocket powered roller skates, & you'll get a fair idea of its elegance, stability & ease of crash recovery. - Lionel Lauer
Python
Using Python and Qt would be nice - but before I can use that they need to release a version which works on Apple Silicon. Last time I tried that wasn’t there. Though it didn’t tell me that, I found out when I tried importing the Qt library and it threw an exception at me.
Plus I think I’d have to buy a Qt licence to distribute whatever I wrote (assuming I don’t make it OSI compliant). I’d like to have a choice in that.
Python and Gtk - well, it's Gtk. If I wanted that kind of pain I’d just bang my head against the wall repeatedly, it would be quicker and easier.
Clojure
Clojure - I don’t know it well enough yet, but to be honest I didn’t want to be forced to lay my UI out manually in code anyway. There are some tutorials by folk who’ve gone that path but … no thanks.
(defn frequency-controls [{:keys [frequency]}] {:fx/type :h-box :alignment :center :spacing 20 :children [{:fx/type :h-box :alignment :center :spacing 5 :children [{:fx/type :text-field :text (str frequency)} {:fx/type :label :text "Hz"}]}]})
The code above comes from Matthew D Miller’s website. No disrespect intended to Matthew - if you’re writing GUI code in Clojure that appears to be a perfectly functional way to do it. (Pun absolutely intended…)
Other options
Rust - I did some searching, but the summary is it’s not ready for mainstream GUI yet.
Go - also not yet ready for mainstream GUI building.
ObjectiveC - I know it would work, but I never could get my head around all the
[notation :weirdness] that it involves.
C++ - just no. What kind of masochist are you? (That sounds like a one of those “twenty questions” quizzes from Facebook.)
Xamarin.Mac
So that brings us back to the start, so to speak. And I have to say the Xamarin team have done a good job.
I’ve followed the “Hello Mac” walkthrough. There’s a few places where the Xcode GUI has changed, but it was clearly written.
It integrates with Xcode to use the Interface Builder. Visual Studio takes care of all the plumbing behind the scenes, though, you don’t have to care about that. And the acid test - after finishing the walkthrough I was able to trivially extend the “Hello Mac” example to reset the counter, no Googling or puzzling required.
Conclusion
I’m glad I’ve found something that seems worth pursuing further, but I have to be honest that I’m a bit sad at the amount of effort it took.
Surely I must be missing something obvious? It can’t be that the tools to write GUI apps are that lacking, surely? Please do let me know (politely!), I’d like to have more options. | https://www.solarwinter.net/making-a-macos-app/ | CC-MAIN-2022-05 | refinedweb | 821 | 80.62 |
I know that this is beaten horse but I don't know what else to do.
The problem is the good ol' "text dump" instead of script execution.
I read the fact and I have google it a thousand of times but I still
haven't make it work.
Here is my httpd.conf:
.....
DocumentRoot "/webapp/"
Alias / "/webapp/"
<Directory "/webapp/">
SetHandler mod_python
PythonHandler index
PythonDebug On
</Directory>
....
index.py is:
def index(req):
req.write('hello')
_---------
If I access,, I get a 404 Not Found, if I access I get a text dump of the script.
I also added the line, PythonPath "sys.path ['/webapp/']", and still
does the same thing.
Notice that I WANT SetHandler to be in my httpd.conf because I want to
catch all requests to the root directory with index.py, nothing else.
What else should I try. I have been using mod_python for a while so I'm
not that new to it.
/amn | http://modpython.org/pipermail/mod_python/2005-June/018246.html | CC-MAIN-2018-39 | refinedweb | 161 | 77.53 |
The QRegion class specifies a clip region for a painter. More...
#include <qregion.h>
List of all member functions.
QRegion is used with QPainter::setClipRegion() to limit the paint area to what needs to be painted. There is also a QWidget::repaint() that takes a QRegion parameter. QRegion is the best tool for reducing flicker.
A region can be created from a rectangle, an ellipse, a polygon or a bitmap. Complex regions may be created by combining simple regions using unite(), intersect(), subtract() or eor() (exclusive or). You can move a region using translate().
You can test whether a region isNull(), isEmpty() or if it contains() a QPoint or QRect. The bounding rectangle is given by boundingRect().
The function rects() gives a decomposition of the region into rectangles.
Example of using complex regions:
void MyWidget::paintEvent( QPaintEvent * ) { QPainter p; // our painter QRegion r1( QRect(100,100,200,80), // r1 = elliptic region QRegion::Ellipse ); QRegion r2( QRect(100,120,90,30) ); // r2 = rectangular region QRegion r3 = r1.intersect( r2 ); // r3 = intersection p.begin( this ); // start painting widget p.setClipRegion( r3 ); // set clip region ... // paint clipped graphics p.end(); // painting done }
QRegion is an implicitly shared class.
Warning: Due to window system limitations, the whole coordinate space for a region is limited to the points between -32767 and 32767 on Mac OS X and Windows 95/98/ME.
See also QPainter::setClipRegion(), QPainter::setClipRect(), Graphics Classes, and Image Processing Classes.
Specifies the shape of the region to be created.
See also isNull().
If t is Rectangle, the region is the filled rectangle (x, y, w, h). If t is Ellipse, the region is the filled ellipse with center at (x + w / 2, y + h / 2) and size (w ,h ).
Create a region based on the rectange r with region type t.
If the rectangle is invalid a null region will be created.
See also QRegion::RegionType.
If winding is TRUE, the polygon region is filled using the winding algorithm, otherwise the default even-odd fill algorithm is used.
This constructor may create complex regions that will slow down painting when used.
The resulting region consists of the pixels in bitmap bm that are color1, as if each pixel was a 1 by 1 rectangle.
This constructor may create complex regions that will slow down painting when used. Note that drawing masked pixmaps can be done much faster using QPixmap::setMask().
Returns TRUE if the region overlaps the rectangle r; otherwise returns FALSE.
The figure shows the exclusive or of two elliptical regions.
Returns the region's handle.
The figure shows the intersection of two elliptical regions.
Example:
QRegion r1( 10, 10, 20, 20 ); QRegion r2( 40, 40, 20, 20 ); QRegion r3; r1.isNull(); // FALSE r1.isEmpty(); // FALSE r3.isNull(); // TRUE r3.isEmpty(); // TRUE r3 = r1.intersect( r2 ); // r3 = intersection of r1 and r2 r3.isNull(); // FALSE r3.isEmpty(); // TRUE r3 = r1.unite( r2 ); // r3 = union of r1 and r2 r3.isNull(); // FALSE r3.isEmpty(); // FALSE
See also isNull().
A null region is a region that has not been initialized. A null region is always empty.
See also isEmpty().
Returns TRUE if the region is different from r; otherwise returns FALSE.
See also intersect().
See also intersect().
See also unite() and operator|().
See also intersect().
See also subtract().
See also subtract().
See also eor().
See also eor().
See also unite() and operator+().
See also unite().
The union of all the rectangles is equal to the original region.
The figure shows the result when the ellipse on the right is subtracted from the ellipse on the left. (left-right )
The figure shows the union of two elliptical regions.
Writes the region r to the stream s and returns a reference to the stream.
See also Format of the QDataStream operators.
Reads a region from the stream s into r and returns a reference to the stream.
See also Format of the QDataStream operators.
This file is part of the Qt toolkit. Copyright © 1995-2005 Trolltech. All Rights Reserved. | http://doc.trolltech.com/3.3/qregion.html | crawl-002 | refinedweb | 664 | 70.8 |
Comment: Re:Clojure ? (Score 1) 466
Comment: Re:Clojure ? (Score 1) 466
+ - Your Car Will Soon Sense If You're Tired Or Not Paying Attention
The difference between then and now is a few things:
- 1. I actually, objectively, learn faster, I have more experience, more techniques at my disposal, and fast paths in my brain to do so.
- 2. The discomfort of not knowing something is much more pronounced relative to my usual competency..
+ - Strongest evidence yet of two distinct human cognitive systems->
Link to Original Source
+ - The End of Moore's Law, one more time->
Link to Original Source
Comment: Re:Try that with LISP (Score 1) 214
Comment: Re:Linus management technique works (Score 1) 1501.
Comment: Re: PHP 6.0 without the stupid? (Score 1) 219
Comment: Re:PHP 6.0 without the stupid? (Score 1) 219.
Comment: Re: PHP 6.0 without the stupid? (Score 1) 219
Comment: Re:Build it and run it (Score 1) 254
Comment: a systematic approach (Score 1) 254
1. Establish a clear goal and sub-goals.
2. Use the goals to determine the scope of your reading.
3. Allocate quite a bit of time in large chunks.
4. Identify key layers of abstraction.
5. Enumerate classes (functions, namespaces) of interest.
6. Systematically, read through each class superficially.
7. Pick 8 classes to focus on.
8. Do a deep dive.
9. List/sketch inputs and outputs in terms of function names, types referenced.
10. Look at relevant tests for usage as needed.
11. Check off each class once looked through.
12. Measure the complexity of a component by how many checks are required for full understanding.
13. Iterate until goals achieved. | http://slashdot.org/~an_orphan/tags/ellison | CC-MAIN-2014-35 | refinedweb | 281 | 69.48 |
Opened 11 years ago
Closed 11 years ago
Last modified 11 years ago
#91 closed Bug (No Bug)
"_ScreenCapture_CaptureWnd" does not work on Win2k
Description
When I try to use "_ScreenCapture_CaptureWnd" function defined in "ScreenCapture.au3" on Windows 2000 OS, it produces following error
--------------------------- Fatal Error --------------------------- AVector: []: Out of bounds. --------------------------- OK ---------------------------
However if I try the same thing on Windows XP, it takes screenshot properly.
Following is the script which I run.
#include <ScreenCapture.au3>
_ScreenCapture_CaptureWnd("test.bmp", WinGetHandle(""))
I faced this problem in AutoIt version "3.2.10.0"
Attachments (1)
Change History (5)
Changed 11 years ago by Moto
comment:1 Changed 11 years ago by Gary
Does the example in the help file work or give you an error also?
This example:
#include <ScreenCapture.au3>

_Main()

Func _Main()
    Local $hGUI
    ; Create GUI
    $hGUI = GUICreate("Screen Capture", 400, 300)
    GUISetState()
    ; Capture window
    _ScreenCapture_CaptureWnd(@MyDocumentsDir & "\GDIPlus_Image.jpg", $hGUI)
EndFunc   ;==>_Main
comment:2 follow-up: ↓ 4 Changed 11 years ago by Gary
Forgot because Win2k doesn't have GDIPlus included you have to install it.
Do you have it installed?
comment:3 Changed 11 years ago by Gary
- Resolution set to nobug
- Status changed from new to closed
comment:4 in reply to: ↑ 2 Changed 11 years ago by Moto
Do you have it installed?
No I did not install it. And now I checked it does work for both Au3 script and the compiled executable with the dll.
Forgot to check "GDIPlus.au3".
Thanks.
Screenshot of the screencapture error message. | https://www.autoitscript.com/trac/autoit/ticket/91 | CC-MAIN-2019-18 | refinedweb | 296 | 64.41 |
Hello, the code I am posting sorts a pivot table but there can be only one statistic per row e.g Mean or Count.
I often have multiple statistics per row e.g Mean, Mode etc... .
I often have to generate multiple, long, pivot tables and to sort them manually is tedious made all
I the more so by having code that almost dose this for me.
All the pivot tables I make have the Mean as a row statistic, can some one see how to alter the following so I could sort on the Mean if there are multiple row statistics.
I am new to python, if this was VBA I could make a go at it.
I know it is not the best form to post a question were you ask: Please do this for me, but that is my skill level in python at the moment.
Thank for any assistance.
def sortTable(obj, i, j, numrows, numcols, section, more, custom):
    """Sort the rows of the table according to the selected column values
    Cell formats are NOT updated, so the formats for all cells in a column
    should be the same.
    custom parameters is direction ('a', the default, or 'd')"""
    if not section == "datacells":
        return
    direction = custom.get("direction", "a")
    if not direction in ['a', 'd']:
        print "direction must be 'a' or 'd'"
        raise ValueError
    PvtMgr = more.thetable.PivotManager()
    numrowdims = PvtMgr.GetNumRowDimensions()
    if numrowdims != 1:
        print "Cannot sort table unless there is exactly one row dimension"
        raise ValueError
    col = j  # sorting column
inotify_event
Structure that describes a watched filesystem event
Synopsis:
#include <sys/inotify.h> struct inotify_event { _Int32t wd; _Uint32t mask; _Uint32t cookie; _Uint32t len; char name[0]; };
Description:
The inotify_event structure describes a filesystem event returned by the inotify system. You get these events by reading the file descriptor returned by inotify_init().
The members of this structure include:
- wd
- The watch descriptor associated with the event, as returned by inotify_add_watch().
- mask
- A bitmask that includes the event type:
- IN_ACCESS — the file was read.
- IN_MODIFY — the file was written to.
- IN_ATTRIB — the attributes of the file changed.
- IN_CLOSE_WRITE — a file that was opened for writing was closed.
- IN_CLOSE_NOWRITE — a file that was opened not for writing was closed.
- IN_OPEN — the file was opened.
- IN_MOVED_FROM — the file was moved or renamed away from the item being watched.
- IN_MOVED_TO — the file was moved or renamed to the item being watched.
- IN_CREATE — a file was created in a watched directory.
- IN_DELETE — a file or directory was deleted.
- IN_DELETE_SELF — the file or directory being monitored was deleted.
- IN_MOVE_SELF — the file or directory being monitored was moved or renamed.
along with other information:
- IN_UNMOUNT — the backing filesystem was unmounted.
- IN_Q_OVERFLOW — the inotify queue overflowed.
- IN_IGNORED — the watch was automatically removed because the file was deleted, or its filesystem was unmounted.
- IN_ISDIR — the event occurred against a directory.
- cookie
- An unique number that identifies related events..
- len
- The length of the name field, including any required padding.
- name
- The name of the object that the event occurred to.
The QNX Neutrino implementation follows the Linux inotify interface described above.
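For example, a minimal sketch of consuming these events (the watched path /tmp and the event mask are arbitrary choices):

#include <sys/inotify.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* buffer big enough for one event plus its variable-length name */
    char buf[sizeof(struct inotify_event) + NAME_MAX + 1];
    int fd = inotify_init();
    int wd = inotify_add_watch(fd, "/tmp", IN_CREATE | IN_DELETE);
    ssize_t n = read(fd, buf, sizeof(buf));   /* blocks until events arrive */
    char *p = buf;

    (void)wd;
    while (p < buf + n) {
        struct inotify_event *ev = (struct inotify_event *)p;
        if (ev->mask & IN_CREATE)
            printf("created: %s\n", ev->len ? ev->name : "");
        if (ev->mask & IN_DELETE)
            printf("deleted: %s\n", ev->len ? ev->name : "");
        p += sizeof(struct inotify_event) + ev->len;  /* len includes padding */
    }
    close(fd);
    return 0;
}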
In one of the latest articles HowTo – Send Test Messages to the Adapter Engine (to an Integrated Configuration) Karsten Möhwald has shown an amazing idea on how we can send a message from RWB to any ICO (does not matter what kind of adapter is being used in the ICO).
Where can we use this idea?
a) if we need to test an ICO but we don’t have a working connection with the existing adapter (for example JMS, JDBC, IDOC, etc.)
b) if we want to design regression testing tool which will skip the sender adapter and will always post the data in the same way to PI (via SOAP)
But how can we do the same trick without RWB?
If you have a look at the configuration in Karsten’s blog you will see that in RWB we’re using the URL:
http://<server_host>:<j2ee_port>/XISOAPAdapter/MessageServlet?channel=<party>:<service>:<channel>
to the dummy SOAP channel but where to put the information in case we’d like to do the same with SOAPUI or any other SOAP testing tool ?
If you think that we can try the second standard way of creating the URL for sender soap adapter:
http://<server_host>:<j2ee_port>/XISOAPAdapter/MessageServlet?senderParty=&senderService=BC_Michal_Krawczyk&receiverParty=&receiverService=&interface=SI_Michal_Out&interfaceNamespace=urn:michal.krawczyk.com
then this is not going to work as PO will let us know that it’s not possible to use non SOAP channels (for example JDBC) like shown on the screenshot below:
What do we need to do then? It turns out that we need to create a whole envelope for PO message (with XI header) where we need put:
– sender system
– interface & namespace
but also:
– message ID, timestamp, queueid, etc.
and then if we send it to the dummy SOAP channel like
http://<server_host>:<j2ee_port>/XISOAPAdapter/MessageServlet?channel=<party>:<service>:<channel>
the message will go to the correct service as per the information from the XI header.
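As a rough sketch, such an envelope could look like the one below. The element names follow the XI 3.0 message protocol namespace, but treat the exact set of required header fields, the attribute values and the QueueId handling for EOIO as assumptions to verify against a real message captured in the message monitor:

<SOAP-ENV:Envelope xmlns:
  <SOAP-ENV:Header>
    <!-- sketch only: field set and values are illustrative -->
    <SAP:Main xmlns:SAP="http://sap.com/xi/XI/Message/30"
              versionMajor="3" versionMinor="1" SOAP-ENV:
      <SAP:MessageClass>ApplicationMessage</SAP:MessageClass>
      <SAP:ProcessingMode>asynchronous</SAP:ProcessingMode>
      <SAP:MessageId>00000000-0000-0000-0000-000000000000</SAP:MessageId>
      <SAP:TimeSent>2014-02-20T10:00:00Z</SAP:TimeSent>
      <SAP:Sender>
        <SAP:Service>BC_Michal_Krawczyk</SAP:Service>
      </SAP:Sender>
      <SAP:Interface namespace="urn:michal.krawczyk.com">SI_Michal_Out</SAP:Interface>
    </SAP:Main>
    <SAP:ReliableMessaging xmlns:SAP="http://sap.com/xi/XI/Message/30"
                           SOAP-ENV:
      <SAP:QualityOfService>ExactlyOnce</SAP:QualityOfService>
    </SAP:ReliableMessaging>
  </SOAP-ENV:Header>
  <SOAP-ENV:Body>
    <!-- business payload goes here -->
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>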
If you know any other easier way to do the same thing, please do let me know,
Hi Michal,
Nice tweaking of HowTo – Send Test Messages to the Adapter Engine (to an Integrated Configuration) Karsten Möhwald . I did something similar in the past to post dynamic headers to XI (SAP XI/PI: Testing Scenarios involving Dynamic Configuration). Thanks for sharing your ideas.
Best Regards,
Praveen Gujjeti
now we need to wait for the tweak of the tweak to be able to do the same thing in even a simpler way maybe 🙂
Regards,
Michal Krawczyk
Hi Michal,
I am still updating in SAP PO skills and not that much familiarize. I believe this blog is definitely help whenever we required to send messages directly to AEX (ICO). Thanks for sharing valuable information. Keep posting. 😎
Regards,
Hari Suseelan
Hi Hari,
>>> I believe this blog is definitely help whenever we required to send messages directly to AEX (ICO).
I’ve added a section on where can we use this idea to the blog – thanks for the suggestion,
Regards,
Michal Krawczyk
Hi Michal,
I was following you for long time and I was amazed to see your PI updates. Keep posting new updates and it will be very useful for me for upcoming PI projects. Thanks.
Regards,
Hari Suseelan
Hi Michal,
thanks for sharing this.
I have one question for adding message ID, timestamp, queueid etc. in SOAP header. Can we put some dummy values in those fields or we need to keep some rules?
Thanks & regards
Dingjun
hi,
>>>>Can we put some dummy values in those fields or we need to keep some rules?
rules are rules – you cannot use dummy values …
unless you’re like Neo 🙂
Regards,
Michal Krawczyk
Hi Michal,
But how can we make sure that the value for <SAP:MessageId>in the SOAP header is unique, because this value is normally internal generated by PI.
Regards
Dingjun
Hi Djingjun,
>>>>But how can we make sure that the value for <SAP:MessageId>in the SOAP header is unique, because this value is normally internal generated by PI
I’m glad you’ve asked that 🙂 fortunately there’s “SAP to the rescue” we can generate the message ID on our own with API com.sap.guid and it will be as unique as the one generated by PI,
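a minimal sketch of that (class names per the standard com.sap.guid package; verify them against your server's Javadoc):

import com.sap.guid.GUIDGeneratorFactory;
import com.sap.guid.IGUID;
import com.sap.guid.IGUIDGenerator;

IGUIDGenerator generator = GUIDGeneratorFactory.getInstance().createGUIDGenerator();
IGUID guid = generator.createGUID();
String messageId = guid.toString(); // e.g. for the SAP:MessageId header field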
hope that clarifies,
Regards,
Michal Krawczyk
Thanks for the quick reply.
Regards
Dingjun
Hi Michal,
Works like a charm on PO 7.4, thanks for sharing!
Regards
Thorsten
Hi Thorsten,
>>>Works like a charm on PO 7.4, thanks for sharing!
a promise is a promise 🙂
Regards,
Michal Krawczyk
Hi Michal,
Good to hear that there are ways to test ICO’s but to me these still look like workarounds.
We used to have proper testing tools in the Integration Directory (Test Configuration) and from the RWB as well.
From a developer point of view it is definitely a step back that all these options are no longer there.
Do you (or anyone else) know if it is on the roadmap of SAP to bring better testing tools (back) to PI? I feel they should be there.
Kind regards, and thx for all your valuable comments on the PI side of life.
Robert
Hi Robert,
not sure if your question is still unanswered, but at least the “Send Test Message” tool was enhanced in the meantime to support sending messages to an ICO as well, including value help support.
Details and version information are given in the following note:
HTH and kind regards,
Jan
Hi Michal,
Just happened to find a slightly different variation of this, which will not require creation of a new test ICO. Instead I used details of an iFlow which had sender as a SAP ECC system. Since, this interface had a sender channel with message protocol XI 3.0, all I had to do was to ensure that the SOAP message was created exactly as SAP creates for proxy messages. So my SOAP body looks like:
<SAP:Manifest wsu:Id=”wsuid-main-92FFF13F5C59777FE1ABE00000A1551F7″ xmlns:wsu=”” xmlns:xlink=”” xmlns:SAP=”“>
<SAP:Payload xlink:href=”cid:ASN_SampleXML.xml“>
<SAP:Name>MainDocument</SAP:Name>
<SAP:Description/>
<SAP:Type>Application</SAP:Type>
</SAP:Payload>
where payload file was included as attachment and file name provided in Payload field(highlighted). Hope its useful for others.
Regards,
Sanjeev.
Hi Sanjeev,
I had an issue where the payload is just ignored using the approach Michal described above. We are running in PO 7.31 AEX Only.
I’ve had to do exactly the same thing as you. I used Karsten’s approach via the RWB to get the correct SOAP format with attachments (in the message monitor) and then setup SOAPUI to use attachments like you said and updating the SOAP body.
Sadly I wish I read your comment before doing all this, as it would have saved me a lot of time, but I ended up working it out the hard way.
I’m now editting my HTTP Client to build requests in the same way and I should be good now the payload is being picked up correctly.
Michal – Might be worth updating your blog to include this tip from Sanjeev. Or maybe Sanjeev writing your own blow to supplement this one.
Either way great blogs from both of you.
Cheers,
Katan
Glad it helped! 🙂
Regards,
Sanjeev. | https://blogs.sap.com/2014/02/20/michals-po-tips-how-to-send-messages-directly-to-aex-ico-adapter-independent-soapui-version/ | CC-MAIN-2017-51 | refinedweb | 1,219 | 68.4 |
implementation 'androidx.legacy:legacy-support-v4:latestVersion'
implementation 'com.github.bingoogolapple:BGABanner-Android:latestVersion'
QUESTION
Why did my new python edited photo lose brightness?Asked 2021-Jun-07 at 13:18
I am working on a photo editing project and I am curious about why did my new photo lose it's brightness. The program shoud get 2 photos out of the original one. One of them shoud contain only RED value and the other shoud contain BLUE and GREEN values. But when I put them back together the brightness is not the same as in original picture.
Here is my code:
import io, re, requests
from PIL import Image, ImageOps, ImageEnhance, ImageChops
import cv2
import numpy as np

imgpth = 'image.jpg'

# red image
img2 = Image.open(imgpth).convert('RGB')
source = img2.split()
R, G, B = 0, 1, 2
out = source[G].point(lambda i: i * 0)
source[G].paste(out, None, None)
out = source[B].point(lambda i: i * 0)
source[B].paste(out, None, None)
img2 = Image.merge(img2.mode, source)

# green and blue image
img = Image.open(imgpth).convert('RGB')
source = img.split()
R, G, B = 0, 1, 2
out = source[R].point(lambda i: i * 0)
source[R].paste(out, None, None)
img = Image.merge(img.mode, source)

blend2 = Image.blend(img, img2, 0.5)
blend2.show()
Original image: [figure in original post]
Output image: [figure in original post]
ANSWERAnswered 2021-Jun-07 at 13:18
blend2 = Image.blend(img, img2, 0.5)
The third argument, 0.5, is the alpha level of each layer. Essentially, you are setting each layer to be 50% transparent. This effectively reduces the brightness. Instead, you should read in
img1 and
img2 and then set the red layer of the second to the red layer of the first.
img2[R] = img1[R]
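In PIL that channel swap would look something like the following sketch, using split/merge rather than indexing, since PIL Image objects don't support item assignment:

from PIL import Image

img = Image.open('image.jpg').convert('RGB')
r, g, b = img.split()

# red-only and green/blue-only images, as in the question
zero = r.point(lambda i: 0)
red_only = Image.merge('RGB', (r, zero, zero))
gb_only = Image.merge('RGB', (zero, g, b))

# recombine the channels directly instead of alpha-blending,
# so no brightness is lost
restored = Image.merge('RGB', (red_only.split()[0],
                               gb_only.split()[1],
                               gb_only.split()[2]))
restored.show()  # identical brightness to the original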
Is it possible to determine if the user has selected a file for a particular input type="file" field using javascript/jQuery?
I have developed a custom fieldtype for ExpressionEngine (PHP-based CMS) that lets users upload and store their files on Amazon S3, but the most popular EE hosting service has set a max_file_uploads limit of 20. I'd like to allow the user to upload 20 files, edit the entry again to add 20 more, etc. Unfortunately upon editing the entry the initial 20 files have a "replace this image" file input field that appears to be knocking out the possibility of uploading new images. I'd like to remove any unused file input fields via javascript when the form is submitted.
Yes - you can read the value and even bind to the
change event if you want.
<html>
<head>
  <title>Test Page</title>
  <script src=""></script>
  <script type="text/javascript">
    $(function() {
      $('#tester').change( function() {
        console.log( $(this).val() );
      });
    });
  </script>
</head>
<body>
  <input type="file" id="tester" />
</body>
</html>
But there are other, multiple-upload solutions that might suit your needs better, such as bpeterson76 posted.
This code will remove all the empty file inputs from the form and then submit it:
HTML:
<form action="#"> <input type="file" name="file1" /><br /> <input type="file" name="file2" /><br /> <input type="file" name="file3" /><br /> <input type="file" name="file4" /><br /> <input type="file" name="file5" /><br /> <input type="submit" value="Submit" /> </form>
JavaScript:
$(function() {
  $("form").submit(function(){
    $("input:file", this).filter(function(){
      return ($(this).val().length == 0);
    }).remove();
  });
});
This uploader has worked really well for my purposes:
The benefit to using it as a basis for your coding is that the files are uploaded asynchronously, so no need to limit to 20 at a time. There's a definite UI benefit to doing the upload while the user is searching.
The resize is a pretty nice feature too, if you need it! | http://m.dlxedu.com/m/askdetail/3/0b99acdd856fd46a3b63b32233af449c.html | CC-MAIN-2019-13 | refinedweb | 324 | 51.99 |
I want to dynamically create a form with the labels and values (displayed in labels) for a dynamic class. All I'll have to work with is the class name, no idea on what the field names are, data types or values.
Could you create an alternative form through the api? You set the class name, and then you can maybe get the fields you want from the class definition? I haven't had to do that before but maybe that can be an option. Do the data types/fields exist somewhere, or do you have to create those from scratch as well?
Thanks Josh.
The classes are all defined in the cms_class table. It's a setup very similar to how biz forms works. Thought maybe there was a control that did something like this already. Seems like a simple task.
Maybe you could use the DataForm control(part of the CMS.FormControls namespace). There is a property on there to actually set the class name and alternative form if needed. Then you could access individual fields through getting the EditingFormControls. Also, I think there is a property where you can access the labels as well.
using CMS.FormControls;
//create new form based off of class name
DataForm customForm = new DataForm();
//use namespace and then class name(test.testing)
customForm.ClassName = "test.testing";
//fetch editing controls, can set properties on if they aren't null
EditingFormControl testfield1= customForm.FieldEditingControls["testfield1"];
EditingFormControl testfield2= customForm.FieldEditingControls["testfield2"];
#include <wx/sckipc.h>
A wxTCPServer object represents the server part of a client-server conversation.
It emulates a DDE-style protocol, but uses TCP/IP which is available on most platforms.
A DDE-based implementation for Windows is available using wxDDEServer.
Constructs a server object.
Registers the server using the given service name.
Under Unix, the string must contain an integer id which is used as an Internet port number.xTCPConnection type, or of a user-derived type. If the topic is "STDIO", the application may wish to refuse the connection. Under Unix, when a server is created the OnAcceptConnection message is always sent for standard input and output. | https://docs.wxwidgets.org/3.1.2/classwx_t_c_p_server.html | CC-MAIN-2019-09 | refinedweb | 110 | 51.55 |
The objective of this post is to explain how to scan the surrounding WiFi networks with the ESP8266.
The code
The code for this tutorial will be very simple and since we only want to scan and get some information about the surrounding WiFi networks, we will do all the coding on the setup function. So, we will leave the main loop empty.
First of all, we include the ESP8266WiFi library, which will make available the functionality needed for the ESP8266 to interact with the WiFi networks. We can remember from previous posts that this is also the library needed for us to connect to a WiFi network.
#include "ESP8266WiFi.h"
In the setup function, we start by opening a serial connection, so we can send the data to the Arduino IDE serial console.
Next, to scan the networks, we call the scanNetworks method on the WiFi object of the library we included. This method will return the number of networks discovered [1].
Note that this method can be called with two additional arguments, one that indicates the scanning should be done asynchronously, and another to show networks with hidden SSID [1]. More details about these functionalities can be found here.
int numberOfNetworks = WiFi.scanNetworks();
Now, to get each network SSID, we just need to call the SSID method, which receives as argument the index of one of the previously discovered networks. In the example below, we are getting this parameter for the first network of the list. Naturally, you should check first if any network was discovered.
Serial.println(WiFi.SSID(0));
We can also get the RSSI (Received Signal Strength Indicator) of each network by calling the RSSI method on the WiFi object, again passing as input the index of the network.
Serial.println(WiFi.RSSI(0));
The complete code is shown bellow, which includes the iteration over the discovered networks and printing of these indicators to the serial port.
#include "ESP8266WiFi.h" void setup() { Serial.begin(115200); int numberOfNetworks = WiFi.scanNetworks(); for(int i =0; i<numberOfNetworks; i++){ Serial.print("Network name: "); Serial.println(WiFi.SSID(i)); Serial.print("Signal strength: "); Serial.println(WiFi.RSSI(i)); Serial.println("-----------------------"); } } void loop() {}
To test the code just upload it to the ESP8266 and open the serial console. You should get something similar to figure 1.
Figure 1 – Output of the program.
Note that there are more functionalities associated with the scanning of surrounding WiFi networks, which can be seen here.
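For instance, a few other per-network getters from the same scan class can be printed inside the loop (names per the ESP8266WiFi scan API; verify them against your core version):

Serial.println(WiFi.channel(i));        // channel of network i
Serial.println(WiFi.encryptionType(i)); // e.g. ENC_TYPE_NONE, ENC_TYPE_CCMP (WPA2)
Serial.println(WiFi.BSSIDstr(i));       // MAC address of the access point
Serial.println(WiFi.isHidden(i));       // true for hidden SSIDs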
Related Posts
ESP8266: Connecting to a WiFi Network
References
[1]
Technical details
- ESP8266 libraries: v2.3.0 | https://techtutorialsx.com/2017/02/25/esp8266-scanning-wifi-networks/ | CC-MAIN-2017-26 | refinedweb | 428 | 55.95 |
I'm quite new to namespaces though I am now sold on the idea and think they're excellent. My main gripe was though that the scope of a use statement was only in the file it was called. So, in my current project outside of the bootstrap file I have loads of fully qualified namespaces. E.g.
$object = new \Vendor\Package\Class($parameter);
Not that it would happen often but if I renamed the Package to Package2 I would have to go through my code and rename \Vendor\Package\ to \Vendor\Package2\.
I have just found out as of 5.3 (which I'm using) you can call classes through variables—including namespaces. So I guess I could do something like:
$package = '\\Vendor\\Package\\';
$currentPackage = $package . 'Class';
$object = new $currentPackage($parameter);
I'm interested to know if you think this is a good idea or not. My gut tells me to avoid this for readability but was interested to know if others were using this. Of course, if you ever renamed a class you'd be stuck updating everything anyway.
IMHO opinion php namespace should be used they way they were intended to be and there is not much point in trying to get them work in some other fashion. Just makes your code more confusing and slower.
In most cases, just add a use statement at the top of each file for all the objects you need. Should only be a few and then use just the class name in your code.
use \Vendor\Package\SomeClass;
...
$someClass = new SomeClass();
If you do end up with a big refactor job then at least you only need to change the use statements.
Consider also using composer. Even if you are not yet using any third party libraries composer will give you a nice autoloader for free so you won't need things like \Vendors. Your namespace's will be shorter and more flexible.
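For reference, a minimal composer.json autoload sketch (PSR-4; the Vendor\Package mapping and src/ directory are illustrative) — after running composer dump-autoload you just require vendor/autoload.php:

{
    "autoload": {
        "psr-4": {
            "Vendor\\Package\\": "src/"
        }
    }
}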
Thanks for the link - great reply!
This topic is now closed. New replies are no longer allowed. | http://community.sitepoint.com/t/dynamic-fully-qualified-namespaces-good-or-bad/33707 | CC-MAIN-2015-18 | refinedweb | 340 | 72.87 |
Content-type: text/html
#include <sys/types.h> #include <sys/kstat.h> #include <sys/ddi.h> #include <sys/sunddi.h>
Solaris DDI specific (Solaris DDI)
Named kstats are an array of name-value pairs. These pairs are kept in the kstat_named structure. When a kstat is created by kstat_create(9F), the driver specifies how many of these structures will be allocated. The structures are returned as an array pointed to by the ks_data field.
union {
        char            c[16];
        long            l;
        ulong_t         ul;
        longlong_t      ll;
        u_longlong_t    ull;
} value;                        /* value of counter */
The only member exposed to drivers is the value member. This field is a union of several data types. The driver must specify which type it will use in the call to kstat_named_init().
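A minimal driver sketch tying these together (the module name "mydrv" and the counter names are illustrative; headers as in the SYNOPSIS above):

kstat_t *ksp;
kstat_named_t *kn;

ksp = kstat_create("mydrv", 0, "mystats", "misc", KSTAT_TYPE_NAMED, 2, 0);
if (ksp != NULL) {
        kn = KSTAT_NAMED_PTR(ksp);
        kstat_named_init(&kn[0], "reads", KSTAT_DATA_ULONG);
        kstat_named_init(&kn[1], "writes", KSTAT_DATA_ULONG);
        kstat_install(ksp);

        kn[0].value.ul++;       /* update a counter through the value union */
}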
kstat_create(9F), kstat_named_init(9F)
Writing Device Drivers | http://backdrift.org/man/SunOS-5.10/man9s/kstat_named.9s.html | CC-MAIN-2017-09 | refinedweb | 130 | 70.19 |
If a user has opted into submitting performance data to Mozilla, the Telemetry system will collect various measures of Firefox performance, hardware, usage and customizations and submit it to Mozilla. The Telemetry data collected by a single client can be examined from the integrated about:telemetry browser page, while the aggregated reports across entire user populations are publicly available at.
Note: Every new data collection in Firefox now needs a data collection review from a data collection peer. Just set the feedback? flag for :bsmedberg. We try to reply within a business day.
The following sections explain how to add a new measurement to Telemetry.
Telemetry Histograms
Telemetry histograms are the preferred way to track numeric measurements such as timings. Telemetry also tracks more complex data types such as slow SQL statement strings, browser hang stacks and system configurations. Most of these non-histogram measurements are maintained by the Telemetry team, so they are not covered in this document. If you need to add a non-histogram measurement, contact that team first.
Choosing a Histogram Type
The first step to adding a new histogram is to choose the histogram type that best represents the data being measured. The sample histogram used above is an "exponential" histogram.
Only flag and count histograms have default values. All other histograms start out empty and are not submitted if no value is recorded for them.
The following types are available:
- flag: This histogram type allows you to record a single value (0 is not allowed and asserts).
- boolean: This histogram type records boolean values (0/false and 1/true) and counts how often each value occurs.
- count: This histogram type is used when you want to record a count of something. It only stores a single value and defaults to 0.
Count histograms and keyed histograms are fully supported only in our V4 pipeline tools, such as the unified telemetry (v4) dashboards. These are not fully supported in Telemetry v2 pipeline tools such as the histogram change detector.
- enumerated: This histogram type is intended for storing "enum" values. An enumerated histogram consists of a fixed number of "buckets", each of which is associated with a consecutive integer value (the bucket's "label"). Each bucket corresponds to an enum value and counts the number of times its particular enum value was recorded. You might use this type of histogram if, for example, you wanted to track the relative popularity of SSL handshake types. Whenever the browser started an SSL handshake, it would record one of a limited number of enum values which uniquely identifies the handshake type.
Set "n_buckets" to a slightly larger value than needed to allow for new enum values in the future. The current Telemetry server does not support changing histogram declarations after the histogram has already been released. See Miscellaneous section.
- linear: Each bucket covers a range of values of constant width, so bucket boundaries increase linearly between "low" and "high". This type is useful when the recorded values don't span orders of magnitude, e.g. percentages from 0-100%.
If you need a linear histogram with buckets < 0, 1, 2 ... N >, then you should declare an enumerated histogram. This restriction was added to prevent developers from making a common off-by-one mistake when specifying the number of buckets in a linear histogram.
- categorical: Categorical histograms are similar to enumerated histograms. However, instead of specifying n_buckets, you specify an array of strings in the labels field. From JavaScript, the label values or their indices can be passed as strings to histogram.add(). From C++ you can use AccumulateCategorical() with passing a value from the corresponding Telemetry::LABEL_* enum, or, in exceptional cases, the string values.
If you need to add new labels, you should use a new histogram name. The current Telemetry server does not support changing histogram declarations after the histogram has already been released. See Miscellaneous section.
- exponential: Like a linear histogram, but bucket boundaries grow exponentially, so values spanning several orders of magnitude (e.g. timings) can be covered with a modest number of buckets.
Keyed Histograms
Keyed histograms are collections of one of the histogram types above, indexed by a string key. This is for example useful when you want to break down certain counts by a name, like how often searches happen with which search engine.
Count histograms and keyed histograms are fully supported only in our V4 pipeline tools, such as the unified telemetry (v4) dashboards. These are not fully supported in Telemetry v2 pipeline tools such as the histogram change detector.
Declaring a Histogram
Histograms should be declared in the toolkit/components/telemetry/Histograms.json file. These declarations are checked for correctness at compile time and used to generate C++ code. It is also possible to create histograms at runtime dynamically, but this is primarily done by add-ons when they create their own histograms in Telemetry.
The following is a sample histogram declaration from Histograms.json for a histogram named
MEMORY_RESIDENT which tracks the amount of resident memory used by a process:
"MEMORY_RESIDENT": { "alert_emails": ["[email protected]"], "expires_in_version": "never", "kind": "exponential", "low": "32 * 1024", "high": "1024 * 1024", "n_buckets": 50, "bug_numbers": [12345], "description": "Resident memory size (KB)" },
Note that histogram declarations in Histograms.json are converted to C++ code so the right-hand sides of fields can be the names of C++ constants or simple expressions as in the "low" and "high" fields above.
The possible fields in a histogram declaration are:
- alert_emails: Required for all new histograms.
- expires_in_version: Required. The version number in which the histogram expires, e.g. "30"; a version number of type "N" and "N.0" is automatically converted to "N.0a1" in order to expire the histogram also in the development channels. A telemetry probe acting on an expired histogram will be considered a non-op. For histograms that never expire the value "never" can be used as in the example above. Please do not use "default". A value of "default" is effectively the same as "never", but means that expiration hasn't been set.
- kind: Required. One of the histogram types described in the previous section. Different histogram types require different fields to be present in the declaration.
- keyed: Optional, boolean, defaults to false. Determines whether this is a keyed histogram.
- low: Optional, the default value is 0. This field represents the minimum value expected in the histogram. Note that all histograms automatically get a bucket with label "0" for counting values below the "low" value.
- high: Required for linear and exponential histograms. The maximum value to be stored in a linear or exponential histogram. Any recorded values greater than this maximum will be counted in the last bucket.
- n_buckets: Required for linear and exponential histograms. The number of buckets in a linear or exponential histogram.
- n_values: Required for enumerated histograms. Similar to n_buckets, it represent the number of elements in the enum.
- labels: Required for categorical histograms. This is an array of strings which are the labels for different values in this histogram. The labels are restricted to a C++-friendly subset of characters (^[a-z][a-z0-9_]+[a-z0-9]$).
- bug_numbers: Required for all new histograms. This is an array of integers and should at least contain the bug number that added the probe and additionally other bug numbers that affected its behavior.
- description: Required. A description of the data tracked by the histogram, e.g. "Resident memory size"
- releaseChannelCollection: Optional, defaults to "opt-in"; set to "opt-out" to collect the measurement by default. Because they are collected by default, opt-out probes need to meet a higher "user benefit" threshold than opt-in probes.
Make sure you've NEEDINFO'd a privacy peer for ALL new data collection:
Adding a JavaScript Probe");
For histograms measuring time, TelemetryStopwatch can also be used. From C++, probes use the accumulation functions declared in Telemetry.h:

/**
 * Adds a sample to a histogram defined in Histograms.json
 *
 * @param id - histogram id
 * @param sample - value to record
 */
void Accumulate(ID id, uint32_t sample);

/**
 * Adds time delta in milliseconds to a histogram defined in Histograms.json
 *
 * @param id - histogram id
 * @param start - start time
 * @param end - end time
 */
void AccumulateTimeDelta(ID id, TimeStamp start, TimeStamp end = TimeStamp::Now());
The histogram names declared in Histograms.json are translated into constants in the
mozilla::Telemetry namespace:
mozilla::Telemetry::Accumulate(mozilla::Telemetry::STARTUP_CRASH_DETECTED, true);
Miscellaneous
- Changing histogram declarations after the histogram has been released is tricky. You will need to create a new histogram with the new parameters.
- For enum histograms, it's prudent to set "n_buckets" to a slightly larger value than needed since new elements may be added to the enum in the future.
getHistogramById will throw an NS_ERROR_ILLEGAL_VALUE JavaScript exception if it is called with an invalid histogram ID
- Flag histograms will ignore any changes after the flag is set, so once the flag is set, it cannot be unset
- Histograms which track timings in milliseconds or microseconds should suffix their names with "_MS" and "_US" respectively. Flag-type histograms should have the suffix "_FLAG" in their name.
- If a histogram does not specify a "low" value, it will always have a "0" bucket (for negative or zero values) and a "1" bucket (for values between 1 and the next bucket)
- The histograms on the about:telemetry page only show the non-empty buckets in a histogram except for the bucket to the left of the first non-empty bucket and the bucket to the right of the last non-empty bucket | https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Adding_a_new_Telemetry_probe | CC-MAIN-2016-40 | refinedweb | 1,447 | 55.44 |
EXPORT BRIEFS
The following trade items have been gathered from Agricultural Attache and other government reports as a
service to U.S. exporters of food and agricultural products. In supplying the trade leads the Department of
Agriculture does not guarantee reliability of the overseas inquirer. Your best source for further information on
these trade leads is the listed foreign firm originating the inquiry. You may also contact the Export Trade
Services Division, FAS, (202) 447-7103. November 2, 1979
2045 Soy lecithin (France). Wishes to obtain an undisclosed quantity of soy
lecithin. Also interested in tall and pine oils. Bank ref: Banque Nationale
De Paris, 75009 Paris. Contact: Jean Seferian, General Manager, ETS. Roche,
12, Rue Du Helder, 75009 Paris, France. Phone: 770-23-48.
2046 Fish meal (England). Fishing equipment firm seeks to represent American
manufacturers of fish meal in both the U.K. and Europe. Subject firm already
represents some North American manufacturers. Bank ref: Lloyds Bank Ltd.,
Boscawrn Street, Truro, Cornwall. Contacts: Mr. Frost, Managing Director,
Transatlantic Fishing Systems, Ltd., 42 Comfort Road, Mylor Bridge, Falmouth,
Cornwall, England. Phone: 0326-74024. Telex: 45617 Falmouth G.
2004 Rice (Jordan). The Ministry of Supply in Jordan intends direct purchase of American
rice white and yellow. Requests specifications of American rice as well as
samples. Contact: Jordan Ministry of Supply through the Commercial Officer,
American Embassy, P.O. Box 354, Amman, Jordan.
2005 Wine (France). Person would like to become agent for California wines in Europe.
Bank ref: Credit Lyonnais Reims, France. Contact: Robert Dominici, 61, Rue Du
Jard, 51100 Reims, France. Phone: (26) 85.27.45.
2006 Health or dietary foods (Sweden). Interested in dried, instant or concentrated
juices as health or dietary foods. Market forecast 1 million Swedish Kroner first
year, and 20% increase per year. Bank ref: Goetbanken. Contact Jan Hoegberg,
Hoegberg & Co., Box 512, S-191 05 Sollentuna, Sweden. Phone: 08/20 42 90.
2008 Dried potatoes (Portugal). Wants several tons of dehydrated potatoes (to recon-
stitute as mash potatoes). Quantity is dependent upon various factors, i.e., import
license permits, quotations, etc., preferably flakes but importer is willing to try
the powder type more commonly marketed in the U.S., usual industrial packing.
Importer interested in receiving samples of the product along with quotations. Samples
shall weigh no more than 500 grams (gross weight) and be sent Parcel Post (green
label). Also send instructions on how to utilize product. Quote CIF Lisbon. Bank
ref: Banco Espirito Santo E Commercial De Lisboa. Contact: Managing Director, Food
Division, Correia Dos Santos, Jeronimo Martins, LDA, Rua Garrett 13/23, 12226 Jermar
P, Portugal. Phone: 362191/363641.
2009 Hothouse plants (Netherlands). Interested in new kinds of green and flowering
plants for culture in hothouses. Also interested in chrysanthemum cuttings.
Requests prices and catalogues. Bank ref: Rabo Bank Naaldwijk. Contact: A.C.
Groen, De Bruin B.V., Korte Kruisweg 66, 2676 BS Maasdijk, Netherlands. Telex:
34530. Phone: 01745-5440.
Issued weekly by the Export Trade Services Division, Foreign Agricultural Service, U.S. Department of
Agriculture. Room 4945 South Building, Washington, D.C. 20250
2
2010- Canned fruits and vegetables (France). Interested in large quantities of canned
2015 fruits and vegetables (pineapple, peaches, fruit cocktail, tomatoes, asparagus,
corn). Packaged in 5 kg; 3kg; 1 kg and 0.5 kg cans. Firm is looking for manu-
facturers and not agents. Quote CIF & C&F Le Havre and Marseilles. Bank ref:
Societe Lyonnaise, 10, Boulevard D'Athenes, 13001 Marseille. Contact: Mrs.
Felisa Angera, Commercial Director, Aillaud and Chabert 5, Boulevard Camille
Flammarion, Le Massalia, 13001 Marseille, France. Telex: 430 209 F. Phone:
(91) 64.64.30.
2016 Chicken (Egypt). Wants 1,250 tons of frozen chicken, Grade A, in cardboard cartons.
125 tons in November, 125 tons in December, 1,000 during 1980. 850-1,250 gram/birds
slaughtered to Islamic rites. Quote CIF Egyptian ports. Bank ref: Bank Misr
(Main Branch). Contact: Galal Halawa, Golden House, 11 Midan El-Falaki, Bab El
Louk, 10th Floor, Apt. 104, Cairo, Egypt. Phone: 754815.
2017 Bred heifers (Egypt). Wants 1,000 bred heifers, commercial grade, dual purpose
cattle adaptable to Egyptian climatic conditions, 200 head consignments over a 6
month period. Cooperative has 200 head operation but needs assistance in expansion
plans. Wants technical information on automatic milking, milk processing, feeding,
management, animal health, facilities, etc. Quote CIF Cairo Airport or Alexandria
port. Bank ref: Principal Bank for Development and Agricultural Credit, Tanta
Branch (Holds three-fourths of cooperative shares). Contact: Mohamed F. El-
Minshawi, Eshnaway Animal Production Cooperative, 11A Lotfy Hassouna St., Dokki,
Cairo, Egypt. Telex: 92018 UN, Attn: Mr. Badrawi. Phone: 816415.
2018 Corn (Egypt). Government tender with closing date of November 7 financed under
AID Loan 263-K-052 for 100,000 mt of U.S. #2 yellow corn, moisture max 14%,
shipment from U.S. ports in two equal increments of 50,000 mt each in November and
December. Telegraphic offers will be accepted. Full details from AID (202)
235-8862. Contact: General Authority for Supply Commodities, Purchasing Committee,
24 Gomhoria Street, Cairo, Egypt. Telex: 92062. Cable: ESTRAM.
2019 Goats (Peru). Wants to buy milking goats. Nubians, 50 animals, 47 females and
3 males, 6-12 months old, registered, preferably bred. Include productivity certi-
ficates of dams and sires. Shipment by airfreight, delivery January-February
1980. Vaccination and health certificates will be required. Quote C&F or FOB.
Bank ref: Banco De Credito Del Peru, Lampa 499, Lima 1. Contact: Javier Sacio,
Javier Sacio Leon, Juan De Arona 883, Oficina 502, Lima 27, Peru. Phone: 407842,
404665 and 406227.
2020 Duck hatching eggs (Mexico). Wants 1,000 eggs every 8 or 10 days, during one
year or a year and a half, in cartons standard size. First shipment as soon as
possible. Further shipments every 8 or 10 days. Quote CIF Mexico City airport.
Bank ref: Banco Nacional De Mexico, Sucursal Lindavista, Instituto Politecnico
Nacional 1733, Mexico 14, D. F. Contact: Sr. Eduardo Gonzalez Pacheco, Granja
Noni, San Juan De Puerto Rico 1131, Mexico 14, D. F. Phone: 5-86-33-54 or
5-86-46-00, Ext. 134.
2021 Pork fat back (Portugal). Estimated 500 tons per month bulk, frozen, soon after
January 1, 1980. Quote FOB Deep Sea port. Bank ref: Midland Bank, England.
Contact: William Blanshard, Blanshard and D'Eca, Rua Do Lavadouro 16, Mucifal,
Sintra 2710, Portugal.
2022 Shell eggs (Argentina). Table eggs, 50,000 dozen, Grade A, immediate delivery
after receipt of Letter of Credit. Buyer request shipment in Argentina flag
vessel. Quote CIF Buenos Aires, including FOB price plus ocean freight and
consular fee. Bank ref: The First National Bank of Boston. Contact: Julia
Capurro, Comesur S.A.C.I., Gral Urquiza 1482, 1243 Buenos Aires, Argentina. Telex:
Public Booth no. 390900/Buenos Aires. Phone 93-9700.
2023 Pulses, lentils, almonds, rice, edible oils (Qatar). Wishes to contact U.S.
suppliers of pulses favaa beans, chick peas, etc), lentils, almonds, rice and
edible oils. Requests samples and C&F quotations with 3% commission. Contact:
Said Shouly, Manager, Abu Kamal Trading Est., P.O. Box 2575, Doha, Qatar. Telex:
4614 SHOULY DH. Phone 22605.
2024 Lambs, sheep (Bolivia). Regional Development Corporation is seeking 2,500 pregnant
Corriedale lambs and 83 sheep purebred Corriedale. (A) age for lambs should
be four milk teeth and for sheep two teeth. (B) both sheep and lambs should have
health certificate proving animals are free from following diseases: spheri
phorus necrophorus, clostridium novyi, vibrio fetus virus, spirochaeta penorthe,
fusiformis nodosus, moraxile bovis. (C) .animals should also have standard
vaccinations as well as anti-parasite baths. Purchase will be financed with funds
generated by the sale of wheat under PL-480 program. Contact: Corporacion De
Desarrollo De Chuquisaca, Casilla 156, Sucre, Bolivia. Cable: DESARROLLO.
2025 Broilers, rice (Chile). Interested in CIF quotation to import 300 mt frozen broil-
ers, Grade A of 1,300 to 1,600 grams. Type of packaging: Cryovac and cardboard
carton strapped. Time of delivery: November. Also wishes CIF quotation to
import 300 mt long grain rice for November delivery. Bank ref: Banco Sud
Americano. Contact: Alfredo Rioja, A. Rioja y Cia, LTDA, Huerfanos 1178, OF.
430, Santiago, Chile. Telex: CURCIA SGO 260. Phone: 65864.
2026 Supermarket foods (Bahrain). Firm is newly established supermarket that is seeking
business relations with U.S. firms. Products desired are those typical of a
diversified supermarket operation. Bank ref: Citibank. Contact: Abdul Rasool
Jawahery, Razak Supermarket, SH. Mubarak Building, Suite No. 510, Government Road,
P.O. Box 5324, Manama, Bahrain. Phone: 259492.
2027- Seeds, nuts, spices and corn oil (Iran). Wishes to buy (A) millet seeds, 300 tons;
2031 (B) peanuts, corn oil, 200 tons each; (C) watermelon, pumpkin and sesame seeds,
cashew nuts, 100 tons each; (D) hazelnuts, coconut powder, black pepper, turmeric
fingers, 50 tons each; and (E) shelled walnuts, 20 tons. Corn oil in gallon
containers, balance of items in bulk. Corn oil, pepper, turmeric for 1979 delivery,
balance early 1980 delivery. Standard health certification required. Quote
C&F Iran. Bank ref: Bank Melli Iran, Bazar Branch and Bank of Tehran, Takhte
Tavoos Branch. Contact: Nasser Sabahi, Sabahi Enterprises, No. 22 Second Street
Koohenoor, Takhtetavoos Ave., Tehran, Iran. Telex: 215750 TLXD IR or 215636 TLXS
IR. Cable: TEHRAN, SAFARI, Sabahi. Phone: 629629 or 522786.
2032 Wine (Germany). Interested in California wine, red and dry white, good quality,
bottled, delivery soonest, samples required. Quote FOB port of embarkation or
C&F Hamburg. Bank ref: Hamburger Sparkasse. Contact: Guenther Schmiedhausen,
Seifarth & Co., Specialities Imports, P.O. Box 5204, 2000 Norderstedt 2, West
Germany (Street Address: Robert-Koch-Str. 19.). Telex: 02-174 330 (ANSWER BACK:
SECO D). Cable: MALOSSOL, Hamburg. Phone: 040-524 0027.
2033 Chicken (Kuwait). Importer of food products with branches in several other Middle
East locations is interested in contacting American Poultry Exporters for purchase
of frozen chicken in initial amounts of 25,000 tons annually. Once reliability and
saleability of product has been proven, he may increase order to 50,000 tons annually.
Company is only interested in very large and reliable producer, exporters.
Contact: Anwar Barakat, Al Mubarak and Al Barakat Co., P.O. Box 1710 Kuwait. Telex:
2303 BARAKAT KT. Phone: 818151, 818161.
2034- Cows liver, sheep, eggs (Egypt). Wishes to buy large quantities of cows liver,
2036 chickens, and eggs. Cow liver-4 to 6 kgs., chickens-800 to 1200 grams, eggs-45 to
50 grams and from 55 to 60 grams. Quote CIF Alexandria. Contact: Thomas Dikran
Tutundjian, Thomas Import-Export, 37, Rue Kasr El Nil, Cairo, Egypt. Telex: 92313
ITTAS UN Attn: Thomas Dik. Cairo. Phone: 972962, 974956.
2037 Turkey (England). Prime U.K. agent wishes to take on agency for U.S. turkey supplier.
Contact: Trevor Blashfield, Director, Universe Foods Ltd., 183 Heath Road, Twicken-
ham, Middlesex TW1 4BH, England. Phone: 01-892 9167. Telex: 25920. Bank ref:
Coutts & Co. Ltd., Robats Office, 15 Lombard Street, London.
2038 Beef tenderloin (Denmark). Wants 10 tons of beef tenderloins without side strap
muscle/silver trimmed, 4-5 lbs and/or 5 lbs/up, individually wrapped outstretched,
arrival before Dec. 20, 1979. Quote CIF Jutland. Bank ref: Aktivbanken A/S,
Kolding, DK-6000 Kilding, Denmark. Contact: Preben Frandsen, Kolding Export-
Kompagni A/S, P.O. Box 339, DK-6000 Kolding, Denmark. Telex: 51303. Phone:
5-52 20 66.
2039 Food rations (Japan). Wants emergency (C) rations, similar to those used by U.S.
Army, one container load for the initial shipment. Specific information including
ingredients statement required. Quote C&F. Bank ref: Mitsubishi Bank Mita Branch,
Tokyo. Contact: Yoshizumi Takano, Acting Manager, International Division, Morinaga
& Co., Ltd., 33-1, Shiba 5-Chome, Minato-Ku, Tokyo, Japan. Telex: J 242-2954.
Cable: MORINAGA, Tokyo. Phone: (456)0111.
2040 Honey (Korea). Commission sales agent requests .C&F offers for U.S. honey. Contact:
Suh-Woo Tongsang Co., Ltd., CPO Box 6755, Seoul 100, Korea, Hwang Kyou-Hyon,
President. Cable: SUHWOO, Seoul. Telex: KOWEST K24841. Phone: 28-9961/3.
2041- Breakfast cereal, confectionery products (Chile). Interested in cereal breakfast
2042 foods, candy and other confectionery products. Contact: Enique Dvorquez, Manager,
BM Dvorquez Y Cia. Ltda., 18 De Septiembre 201, Arina, Chile. Phone: 31097.
Cable: YORK ARICA.
2043 Fruit juice, kidney beans (French West Indies). Wants concentrated fruit juice
(reconstitute 1 part juice to 4 parts water) grape, orange, grapefruit, lemon, pear,
apple. Firm intends to order each 2 months a 30 cubic meter container of mixed
flavors concentrated fruit juice in 50 to 100 kilos plastic bags or barrels. Also
interested in red kidney beans. Firm intends to import each six weeks a 30 cubic
meters container of red kidneys in 100 pounds sacks. Contact: Philippe Le Maistre,
Ets Mallenec et Cie, Zi De Jarry, BP 2036, 97110 Pointe A Pitre, French West Indies.
Cable: MALENEC 019851. Phone: 82.10.65.
By Pete Logan (Intel)
A nice bonus of SOA Expressway is that most of the extension functions we have written to make configuration easier for BPEL based workflow are also available to the XSLT developer.
For those not familiar with SOA Expressway extension functions, they are granular operations that can be performed on the contents of messages or XML / JSON documents which SOA Expressway can embed into XPath or XSLT. What they add up to is a Swiss Army Knife for doing all sorts of useful things, especially when SOA Expressway is used in some message mediation or security mediation capacity.
The range of functions encompasses:
- digest generation (MD5, SHA, etc.)
- exslt functions for dates and regular expressions.
- crypto and canonicalization.
- full digital signature generation and verification.
- encoding and decoding to binary, base64 etc.
- timestamping, UUID generation, random numbers.
- cookie and authentication token handling.
- MIME attachment get and set.
Okay I could go on; there were more than two hundred functions the last time I counted. Go to our site at and request the full documentation set to find out more.
So how does an extension function get used in everyday life?
Here's how to write a message to the transaction log from within your XSLT. I'm assuming you have constructed a basic workflow and already have an XSL Transform action within it.
The basic form would look like this:
<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet xmlns:
<xsl:variable
<xsl:template
<!-- The variable is parsed lazily and is only evaluated when it is used in the test below. -->
<xsl:if</xsl:if>
<xsl:apply-templates />
</xsl:template>
</xsl:stylesheet>
There are three parts to remember:
1, Make sure your transform has the soae-xf, exslt or soae-cache namespace declared as appropriate.
2, Declare your Extension Function with a variable. In this case $log.
3, Use the variable somewhere in the transform (here, in the xsl:if test) so that it actually gets evaluated.
Interoperation between the workflow variables and execution steps and the nitty gritty of XSLT is necessary because it gives the developer added flexibility when it comes to mediating messaging in a product that's used as a gateway or ESB!
Hi,
Can you please advise how to add images in between text? For example… smiley images…
Regards,
Vinay
Hi,
Hello Vinay,
Thanks for these details. But we wanted a more intuitive approach to this requirement. For example, generating an HTML document, and converting the HTML page into PDF at runtime.
Hi Vinay,
I think you can get your desired results using Aspose.Pdf for .NET. Please see How to – Convert HTML to PDF using InLineHTML approach for details and sample code to convert HTML2PDF.
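A rough sketch of that inline-HTML approach (class and property names should be checked against the linked article, since the Generator API has changed over versions):

// legacy Aspose.Pdf.Generator "inline HTML" sketch
Aspose.Pdf.Generator.Pdf pdf = new Aspose.Pdf.Generator.Pdf();
Aspose.Pdf.Generator.Section section = pdf.Sections.Add();

string html = "<b>Hello</b> <img src='smiley.png'/> world";
Aspose.Pdf.Generator.Text text = new Aspose.Pdf.Generator.Text(section, html);
text.IsHtmlTagSupported = true;   // interpret the string as HTML
section.Paragraphs.Add(text);

pdf.Save("output.pdf");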
Please feel free to contact support in case you have any further queries.
Thank You & Best Regards, | https://forum.aspose.com/t/how-to-add-images-between-texts/100569 | CC-MAIN-2022-21 | refinedweb | 105 | 66.94 |
A continuation of the project based on the following post “Google Search results web crawler (re-visit Part 2)” & “Getting Google Search results with Scrapy”. The project will first obtain all the links of the google search results of target search phrase and comb through each of the link and save them to a text file.
Two new main features are added. First main feature allows multiple keywords to be search at one go. Multiple search phrases can be entered from a target file and search all at one go.
There is also an option to converge all the results of all the search phrases. This is useful when all the search phrases are related and you wish to see all the top ranked results group together. The results will display all the top search result of all the key phrases followed by the 2nd and so forth.
Other options include specifying the number of text sentences of each result to print, min length of the sentence, sort results by date etc. Below are the key options to choose from:
NUM_SEARCH_RESULTS = 30 # number of search results returned SENTENCE_LIMIT = 50 MIN_WORD_IN_SENTENCE = 6 ENABLE_DATE_SORT = 0
The second feature is an experimental feature that deal with language processing. It will try to retrieve all the noun phrases from all the search results and note the its frequency. The idea is to retrieve the most popular noun phrases based on the results of all the search, this is something similar to word cloud.
This is done using the python pattern module which also deal with the HTML request and processing used in the script. Under the pattern module, there is sub module that handles natural language processing. For this feature, the pattern module will tokenize the text and (part-of-speech) tag each of the word. With the in-built tag identifcation, you can specify it to detect noun phrase chunk tag or NP (Tags: DT+RB+JJ+NN + PR). For more part-of-speech tag, you can refer to pattern website. I have included part of the code for the noun phrase detection (Under pattern_parsing.py).
def get_noun_phrases_fr_text(text_parsetree, print_output = 0, phrases_num_limit =5, stopword_file=''):
    """ Method to return noun phrases in target text with duplicates
        The phrases will be a noun phrases ie NP chunks.
        Have the in build stop words --> check folder address for this.
        Args:
            text_parsetree (pattern.text.tree.Text): parsed tree of orginal text

        Kwargs:
            print_output (bool): 1 - print the results else do not print.
            phrases_num_limit (int): return the max number of phrases. if 0, return all.

        Returns:
            (list): list of the found phrases.

    """
    target_search_str = 'NP' #noun phrases
    target_search = search(target_search_str, text_parsetree)# only apply if the keyword is top freq:'JJ?+ NN NN|NNP|NNS+'

    target_word_list = []
    for n in target_search:
        if print_output: print retrieve_string(n)
        target_word_list.append(retrieve_string(n))

    ## exclude the stop words.
    if stopword_file:
        with open(stopword_file,'r') as f:
            stopword_list = f.read()
        stopword_list = stopword_list.split('\n')
        target_word_list = [n for n in target_word_list if n.lower() not in stopword_list ]

    if (len(target_word_list)>= phrases_num_limit and phrases_num_limit>0):
        return target_word_list[:phrases_num_limit]
    else:
        return target_word_list

def retrieve_top_freq_noun_phrases_fr_file(target_file, phrases_num_limit, top_cut_off, saveoutputfile = ''):
    """ Retrieve the top frequency words found in a file. Limit to noun phrases only.
        Stop word is active as default.
        Args:
            target_file (str): filepath as str.
            phrases_num_limit (int): the max number of phrases. if 0, return all
            top_cut_off (int): for return of the top x phrases.
        Kwargs:
            saveoutputfile (str): if saveoutputfile not null, save to target location.
        Returns:
            (list) : just the top phrases.
            (list of tuple): phrases and frequency

    """
    with open(target_file, 'r') as f:
        webtext = f.read()

    t = parsetree(webtext, lemmata=True)

    results_list = get_noun_phrases_fr_text(t, phrases_num_limit = phrases_num_limit, stopword_file = r'C:\pythonuserfiles\google_search_module_alt\stopwords_list.txt')

    #try to get frequnecy of the list of words
    counts = Counter(results_list)
    phrases_freq_list = counts.most_common(top_cut_off)

    #remove non consequencial words...
    most_common_phrases_list = [n[0] for n in phrases_freq_list]

    if saveoutputfile:
        with open(saveoutputfile, 'w') as f:
            for (phrase, freq) in phrases_freq_list:
                temp_str = phrase + ' ' + str(freq) + '\n'
                f.write(temp_str)

    return most_common_phrases_list, phrases_freq_list
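For readers new to the pattern module, a standalone sketch of the NP-chunk search used above (pattern 2.x API; the sample sentence is arbitrary):

from pattern.en import parsetree
from pattern.search import search

t = parsetree("The quick brown fox jumps over the lazy dog.", lemmata=True)
for match in search('NP', t):   # 'NP' = noun-phrase chunk tag
    print match.string          # e.g. "The quick brown fox", "the lazy dog"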
The second feature is very crude and give rise to quite a number of redundant phrases. However, in some cases, are able to pick up certain key phrases. Below are the frequency results based on list of the search key phrases. As seen, the accuracy still need some refinement.
Key phrases
Top cafes in singapore
where to go to for coffee in singapore
Recommended cafes in singapore
Most popular cafes singapore
================
Results
=================
Singapore 139
coffee 45
the past year 23
plenty 23
the Singapore cafe scene 22
new additions 22
View Photo 19
PH 16
cafes 14
20 Best Cafes 13
Fri 11
Coffee 11
Nylon 10
Thu 10
Artistry 10
Indonesia 10
The coffee 9
The Plain 9
Chye Seng Huat Hardware 9
the coffee 9
Photos 9
you re 9
Everton Park 8
sugar 8
Hours 8
t 8
Changi Airport 7
time 7
Food 7
p. 7
Common Man Coffee Roasters 7
Tel 7
Rise & Grind Coffee Co 6
good coffee 6
40 Hands 6
a lot 6
the cafe 6
The Coffee Bean 6
your friends 6
Malaysia 6
s 6
a cup 6
Korea 6
Sarnies 6
Waffles 6
Address 6
Chinese New Year 6
desserts 6
the river 6
Taiwan 6
home 6
the city 5
service 5
the best coffee 5
Tea Leaf 5
great coffee 5
a couple 5
the heart 5
people 5
the side 5
Nylon Coffee Roasters 5
hours 5
Singaporeans 5
food 5
any time 5
eve 5
eggs 5
a bit 5
Eve 5
the day 5
kopi 5
Thailand 5
brunch 5
their coffee 5
Chinatown 5
Restaurants 4
Brunch 4
the top 4
Jalan Besar 4
Ideas 4
Dutch Colony 4
night 4
Cafes 4
a variety 4
Visit 4
course 4
Melbourne 4
The Best 4
Main script can be obtained from Github. | https://simply-python.com/tag/natural-language-processing/ | CC-MAIN-2019-30 | refinedweb | 973 | 61.16 |
Hello, and welcome to another blog in the series Joanna Chan and I are writing about our experiences and learning building a mobile application using the SAP NetWeaver Cloud as a platform. Sorry for the hashtag or should I say mot-dièses in the title, it just means linking to it in twitter becomes easier 🙂
Long live the whiteboard but the code’s in the blog!
As has been the case with my other blogs in this series, I’ve thrown together a little video which goes through the key points of the blog and tries to explain them with the help of a few whiteboard drawings. This time however, I’ll leave my code walkthrough to the blog – so please excuse the rather copious amounts of code below.
The video as embedded in the SCN blog is a little on the small side, so please feel free to open it in another window if you want to make it a little bigger.
Problem:
So, we want to be able to send a user an email confirming that they have successfully sent their time sheet entries to our cloud repository. We want this email to be nicely formatted. An image or two in the body/sig block of the email would be nice. We’d like to group all the time sheet entries for a given user into one email. So if the user sends 10 time sheet entries through, only 1 email should be sent. But if they only post 1, then only 1 should be sent.
Hundreds of emails
The first issue here is how on earth to only send one email, given that the trigger for the emails is a POST or a PUT to the time sheet entry resource?
What we need to do somehow is to gather up all the triggers/posted entries into a bucket and send the bucket-load over. What we need is some way for the first time sheet entry to put up a flag that any subsequent entries can see and gather around. Then we need some way that the flag can be brought down after a certain amount of time has passed and send the email with the details of all the entries collected to that point. Sounds reasonable (although not simple) but how on earth does one achieve that? 😕
Threads, synchronised methods and delayed action
In Java, just as in ABAP, it is possible to have asynchronous processes started from the same thread and communicating with each other. In ABAP this in practice proves to be tricky as it means calling an RFC and communicating via polling shared memory areas. In Java, it's a bit simpler than that, and a decent part of the language deals with just this point. Whereas in ABAP the easily addressable memory space of a program is constrained to a single process (outside of using shared memory constructs), in Java this is not the case.
I have a class – UserMailSender that has a private constructor, this means that it cannot be instantiated from outside of methods within the class itself. (It also mucks up subclassing it, but that’s a different issue). That means a static method of the class must be used to create an instance. This design pattern is somewhat similar to the singleton design pattern that I love so much to hate. However, in this use case it makes sense – even if it isn’t actually a singleton. (I’m sure that this design pattern should have a name – factory was closest I could find, but it doesn’t really fit.) (Leave a comment if you know what the design pattern should be called!)
The method to actually trigger the sending of an email is:
public static synchronized UserMailSender addEntryForMail(
        TimeSheetEntry entry, ServletContext context) {
    synchronized (threadLock) {
        // see if there already exists an instance for this user
        UserMailSender foundInstance = null;
        int userId = entry.getAssociatedDay().getAssociatedUser().getId();
        Iterator<UserMailSender> mailerIter = mailSenders.iterator();
        while (foundInstance == null && mailerIter.hasNext()) {
            UserMailSender mailer = mailerIter.next();
            if (mailer.getUserId() == userId) {
                foundInstance = mailer;
            }
        }
        if (foundInstance == null) {
            foundInstance = new UserMailSender(userId, context);
            mailSenders.add(foundInstance);
        }
        foundInstance.addEntry(entry);
        return foundInstance;
    }
}
By specifying that the method is synchronized we can be sure that only one call to it will occur at a time; other calls will wait until the previous call is finished. (This does result in a bottleneck in our code, but so far this doesn't seem to be a problem.) By further specifying within the code that a synchronised block exists we can "enqueue", to use a more ABAP term, the handling of the list so that only one thread at a time may have access to add, retrieve or delete instances from the global list of email handlers.
It is this method that is called from the servlets (see my previous blog for more details:
protected void doPost(HttpServletRequest request,
        HttpServletResponse response) throws ServletException, IOException {
    ...
    // store in db
    userData.store(entry);
    // send email
    UserMailSender.addEntryForMail(entry, this.getServletContext());
    // send JSON response
    outputResponse(entry, request, response);
    ....
When the method is called, it checks through a list of existing instances to see if a class has been instantiated for the current user id. (A HashMap or other lookup would probably be better code here than just looping through the array, but given the number of users I’m guessing would concurrently access my application, it’s probably not worth fretting about). If I find an instance, then I call the addEntry method on it. If I can’t find an instance I create one and add it to the list of existing instances.
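(For what it's worth, the map-based variant might look like the sketch below. This is a hypothetical rewrite on my part rather than the demo code, and computeIfAbsent needs a newer JDK than the cloud offered at the time.)

import java.util.concurrent.ConcurrentHashMap;

// Hypothetical alternative: keyed lookup instead of scanning a list.
private static final ConcurrentHashMap<Integer, UserMailSender> mailSenders =
        new ConcurrentHashMap<Integer, UserMailSender>();

public static UserMailSender addEntryForMail(TimeSheetEntry entry,
        ServletContext context) {
    int userId = entry.getAssociatedDay().getAssociatedUser().getId();
    // atomically fetch-or-create the per-user aggregator
    UserMailSender mailer = mailSenders.computeIfAbsent(userId,
            id -> new UserMailSender(id, context));
    mailer.addEntry(entry);
    return mailer;
}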
Now, an interesting thing needs to happen when we create that instance. It needs to shut itself down after a certain period of time, and remove itself from the list of instances available for users.
So in the constructor method of the class:
private UserMailSender(int userId, ServletContext context) {
    this.userId = userId;
    // can only be instantiated via static method
    // on instantiation, create a callback that will trigger in 1 minute
    Thread callBackThread = new Thread(new waitThenSendMail(this, context));
    callBackThread.start();
}
I create a new “thread” or asynchronous task. This thread is started using the start() method. The constructor for the thread requires that you pass it an instance that implements the Runnable interface. So I have created a private inner class which does this:
private class waitThenSendMail implements Runnable {

    UserMailSender userMailSender;
    ServletContext context;

    public waitThenSendMail(UserMailSender userMailSender,
            ServletContext context) {
        this.userMailSender = userMailSender;
        this.context = context;
    }

    @Override
    public void run() {
        try {
            Thread.sleep(DELAY_BEFORE_SEND);
        } catch (InterruptedException e) {
            // that's cool just send the mail
        }
        UserMailSender.removeMailer(userMailSender.getUserId());
        userMailSender.sendMail(context);
    }
}
Note how the constructor of this class takes a reference to the instance of UserMailSender class that called the thread. Another important point to note is that a reference to ServletContext is being passed through to all of these methods – this is so that we can eventually pass it to the sendMail method that is called in the implementation of the run() method. The run() method tries to sleep for however long I like (I’ve used 60 secs as I think this is quite reasonable). This is possible because we have a separate process that is not blocking anything else from occurring. Just that one thread is paused. Once it has finished sleeping, the thread then (and this is quite important) removes the reference to the instance from the global list.
private static synchronized void removeMailer(int userId) {
    synchronized (threadLock) {
        UserMailSender foundInstance = null;
        Iterator<UserMailSender> mailerIter = mailSenders.iterator();
        while (foundInstance == null && mailerIter.hasNext()) {
            UserMailSender mailer = mailerIter.next();
            if (mailer.getUserId() == userId) {
                foundInstance = mailer;
            }
        }
        if (foundInstance != null) {
            int index = mailSenders.indexOf(foundInstance);
            if (index != -1) {
                mailSenders.remove(mailSenders.indexOf(foundInstance));
            }
        }
    }
}
Like the method that fetched/created the instance in the global list, this method is synchronised both by definition and by the use of a synchronized block to ensure that only one thread at a time adds or removes entries.
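As an aside on this design choice: a hand-rolled sleeping thread per user works, but the same one-shot delay can also be expressed with a ScheduledExecutorService. The sketch below is my own illustration, not what the demo app uses:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// One shared scheduler replaces the throwaway thread per user.
private static final ScheduledExecutorService scheduler =
        Executors.newScheduledThreadPool(1);

private UserMailSender(final int userId, final ServletContext context) {
    this.userId = userId;
    scheduler.schedule(new Runnable() {
        public void run() {
            // same behaviour as waitThenSendMail: finalise, then send
            removeMailer(userId);
            sendMail(context);
        }
    }, 1, TimeUnit.MINUTES);
}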
Once we are sure that no more entries will be added to the current list (it is “finalised”) then we start the process of actually sending the email.
Before I could send an email from NWCloud there was a bit of work to do in order to set it up. You'll first need an email account to send the emails from. The online documentation states that you can "integrate your own e-mail provider (currently subject to restrictions)" so whilst I do know that Google mail accounts (including apps users) do work, if you're using anything else, it might not! The online doco is pretty good, so I won't repeat it here, but I will just clarify a few things that were perhaps not as straightforward.
Testing locally
You’ll need to paste a copy of your email set-up xml into your local cloud set-up. And you’ll need to re-do this every time you patch to a new SDK level! So it is worth keeping a copy with your application.
I store a copy in my WEB-INF folder. It might be an extra piece of info to deploy to the cloud, but it’s a nice easy place to keep track of it. In the version 2.x of the NetWeaver Cloud this process is supposed to become easier I understand. Good thing too!
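For orientation, whatever the XML wrapper looks like on your SDK version, the session definition boils down to standard javax.mail SMTP properties. A minimal sketch, assuming a Gmail account (the values here are illustrative; check the doco for the exact format your SDK expects):

mail.transport.protocol=smtp
mail.smtp.host=smtp.gmail.com
mail.smtp.port=587
mail.smtp.auth=true
mail.smtp.starttls.enable=true
mail.user=<your account>
mail.password=<your password>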
You’ll need to copy this file to a folder of the “Servers” project in Eclipse:
(Now wasn’t that simpler than the doco 😉 ).
Deploying to the NWCloud Trial system
The next step is to deploy also to the cloud. You’ll need to use the command line interface for this:
One thing the doco doesn't mention is that you'll need to add the --host parameter with a link to the trial system, otherwise it defaults to the productive NWCloud. BTW – I've upgraded to a later SDK since I took the screenshot if you were wondering! You'll notice also that the content of the parameter values differs somewhat from the suggestions in the doco. All I can say is that you have proof in that there screenshot – what I entered worked!
Multipart MIME Messages
So now we have a collection of data that we want to send as an email, we have the required email account, and we uploaded that info to the cloud – how do we actually send the email? Well, before we can do that, we need to understand how to build that email.
The emails that you receive today are generally formatted as multipart Multipurpose Internet Mail Extensions (MIME) messages. In order to programmatically send one of these messages it helps to have a little understanding of what they are and how they are put together.
A multipart MIME message allows us to send the nicely formatted emails that most email clients support, along with images. Unlike HTML pages, where the browser fetches each image individually and the user agent string can be used to redirect the user to an appropriate rendering of the content, in MIME messages all the content, including alternate renderings, must all be sent at once.
For our purposes there are two important multipart subtypes – Alternative and Related.
We need to create a message that has Related parts (that is to say the formatted text of the message and an inline image) but we also need to allow for the case where the email client that is used to receive our email does not know how to display HTML formatted documents, so we need to drop back to just using plain text. This requires us to use the Alternative subtype.
There are many different MIME types that we could use to convey our formatted message, but I'll use HTML as it's pretty darn simple and has lots of formatting options.
Building the HTML message
I threw together some code to build the message from the user object and the list of entries that should be sent:
public String getHTMLMailText(List<TimeSheetEntry> entries) {
    // build one div per entry, alternating the background colour so
    // consecutive entries are easy to tell apart
    String entriesText = "";
    boolean shade = false;
    for (TimeSheetEntry entry : entries) {
        String entryText = entry.getHTMLText(); // hypothetical accessor for the Date/Project/Hours/Comments markup
        String bgColour = shade ? "#dcedf9" : "#e8f3fb";
        shade = !shade;
        entriesText += "<div style=\"background-color:" + bgColour + "\">"
                + entryText + "</div>";
    }
    String text = String
            .format("<p>Hi " + name + ",</p>"
                    + "<p><i>Thanks for using the Discovery Consulting demonstration "
                    + "mobile application.</i></p>"
                    + "<p><i>If you would like to speak to one of our team on how we "
                    + "can assist your enterprise, please reply to this email or contact "
                    + "us on 0418105358.</i></p>"
                    + "<p>Your time sheet entries have been successfully received "
                    + "and will be transferred to payroll.</p>"
                    + "<div style=\"background-color:#dcedf9\"> " + entriesText + "</div>"
                    + "<p></p><p>Regards,</p>"
                    + "<p>Discovery Consulting Group Pty Ltd</p>");
    return text;
}
There was a slightly amusing (for me) exchange between our client relations person (great bloke is Leigh) and me when he offered to help with the formatting of the email, and I sent him an early version of this code and told him to go for it 😉 . Afterwards we refined terms a little – he sent me a mock-up and I did the formatting 🙂 and we think it’s not a bad result!
One of the things to note is that HTML in emails does not render the same way as HTML in your browser in every case, you may have very little control over things like margins between divs etc. So it is worth while testing and doing more testing. Don’t forget that most people read email via mobile devices, so make sure you test those too. In this example code we’ve got a very simple alternating background colour for the time sheet entries to help distinguish one from the other.
Using the javax.mail services in NWCloud – adding inline images
Please excuse the gratuitous code in this section – but I think it’s the best way to explain it! Here’s the code that I use to send my emails.
package au.com.discoveryconsulting.timesheet.demo.mail;

import java.io.File;
import java.util.List;
import java.util.UUID;

import javax.mail.Message.RecipientType;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;
import javax.naming.InitialContext;
import javax.servlet.ServletContext;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import au.com.discoveryconsulting.timesheet.demo.TimeSheetUser;
import au.com.discoveryconsulting.timesheet.demo.entries.TimeSheetEntry;

public class MailEventSender {

    Transport transport;
    Logger logger = LoggerFactory.getLogger(MailEventSender.class);

    public void sendEmail(TimeSheetUser user, List<TimeSheetEntry> entries,
            ServletContext context) {
        try {
            InitialContext ctx = new InitialContext();
            Session smtpSession = (Session) ctx
                    .lookup("java:comp/env/mail/Session");
            transport = smtpSession.getTransport();
            transport.connect();
            MimeMessage mimeMessage = createMimeMessage(
                    smtpSession,
                    "'Discovery Mobile Demo' <[email protected]>'",
                    user.getSettings().getEmail(),
                    "Demo Time Sheet Successfully Received",
                    user.getMailText(entries),
                    user.getHTMLMailText(entries),
                    context);
            transport.sendMessage(mimeMessage, mimeMessage.getAllRecipients());
            transport.close();
        } catch (Exception e) {
            logger.error(e.toString());
        }
    }
The only public method of the email sending class is what was called from our time sheet entry aggregation logic. It gets a reference to the email setup that we have loaded onto our server and then creates a MIME message using the private createMimeMessage method. It then uses the reference to the email setup to send the message.
Ok – I’m going to try to break this code down a little.
private MimeMessage createMimeMessage(Session smtpSession, String from,
        String to, String subjectText, String mailText, String mailHTML,
        ServletContext context) throws MessagingException {
    try {
        MimeMessage mimeMessage = new MimeMessage(smtpSession);
        InternetAddress[] fromAddress = InternetAddress.parse(from);
        InternetAddress[] toAddresses = InternetAddress.parse(to);
        mimeMessage.setFrom(fromAddress[0]);
        mimeMessage.setRecipients(RecipientType.TO, toAddresses);
        mimeMessage.setSubject(subjectText, "UTF-8");
Up to here it’s pretty simple really – just creating the message and setting who it’s from, to and the subject. It’s the next bit that gets tricky! I even included a picture in my code to help me understand what the heck was happening 🙂 ! Of course the inline Java formatting used in SCN doesn’t use a fixed width font, so the pictures kinda screwed up…
        // create main message
        // +----------------------------------------------+
        // | multipart/related........................... |
        // | +---------------------------+ +------------+ |
        // | |multipart/alternative..... | | image/jpg. | |
        // | | +-----------+ +---------+ | |........... | |
        // | | |text/plain | |text/html| | |........... | |
        // | | +-----------+ +---------+ | |........... | |
        // | +---------------------------+ +------------+ |
        // +----------------------------------------------+
        MimeMultipart mainPart = new MimeMultipart("related");
Firstly the related multipart is created,
        // add the messages
        MimeBodyPart messageWrapper = new MimeBodyPart();
        MimeMultipart messagesPart = new MimeMultipart("alternative");
        MimeBodyPart html = new MimeBodyPart();
        MimeBodyPart plaintext = new MimeBodyPart();
        messagesPart.addBodyPart(plaintext);
        messagesPart.addBodyPart(html);
Then a new MimeBodyPart is created to insert the alternative multipart which contains both the HTML and plain text formatted versions of the email.
        messageWrapper.setContent(messagesPart);
        mainPart.addBodyPart(messageWrapper);
        MimeBodyPart sigAttachment = new MimeBodyPart();
        mainPart.addBodyPart(sigAttachment);
The content of the alternative multipart is then inserted into the body part that is then added to the main related multipart. Another body part is created to hold the footer image that I use in my email.
        // create the details for the sig content
        String embeddedAttachmentId = UUID.randomUUID().toString();
        String mailHTMLWithSig = "<html><body>" + mailHTML
                + "<p><img src=\"cid:" + embeddedAttachmentId
                + "\" alt=\"Discovery Consulting\" /></p></body></html>"; // alt text here is a hypothetical stand-in
        String sigPath = context
                .getRealPath("/images/Discovery_email_sig.jpg");
        File sigFile = new File(sigPath);
        sigAttachment.attachFile(sigFile);
        sigAttachment.setContentID("<" + embeddedAttachmentId + ">");
        sigAttachment.setHeader("Content-Type", "image/jpg");
        sigAttachment.setFileName(sigFile.getName());
This is where I really had some fun! Unfortunately in NWCloud 1.x the email libs don’t quite work as they should 🙁 , so I had a bit of fun trying to get the javax.mail.internet.MimeBodyPart setDataHandler method working. End result it doesn’t 😛 and I had to try something else. Fortunately the attachFile method does work 🙂 so I found some way of getting the image file into the MIME message. However, in order to do that, I needed to get a reference to the image file itself.
Fortunately there happens to be a way to get real file references from virtual paths (the ones used in your eclipse project) through the ServletContext method getRealPath. This was why all the email aggregation logic kept a reference to this servlet context, so that the context could be used in the final email handling to get a file handle for the image file to be sent.
Some other stuff that’s worth noting, is that when calling the setContentID method the id needs to be inside carets <> otherwise it doesn’t work.
I’ve wrapped the HTML that I got for the entries with a simple <html> and <body> tags and added the attached image into the HTML with a reference to “cid:” or content-id and the GUID I created to reference the associated image.
        plaintext.setText(mailText, "utf-8", "plain");
        html.setText(mailHTMLWithSig, "utf-8", "html");
        mimeMessage.setContent(mainPart);
        return mimeMessage;
    } catch (Exception e) {
        logger.error(e.toString());
        return null;
    }
}
Finally I actually load the generated HTML into the HTML body part, the plain text in the plain text body part and link the whole lot to the MIME message object that had the to, from, subject etc.
The result
I’m really happy with the generated email, it looks good and it took some rather tricky work to get it there.
For those that are interested – here’s the actual MIME message, including the plain text version of the email (minus all the server/send headers):
Date: Tue, 29 Jan 2013 00:35:55 -0800 (PST)
From: "'Discovery Mobile Demo'" <[email protected]>
To: [email protected]
Message-ID: <1594048334.5.1359448554627.JavaMail.javamailuser@localhost>
Subject: Demo Time Sheet Successfully Received
MIME-Version: 1.0
Content-Type: multipart/related;
	boundary="----=_Part_3_779503997.1359448552910"

------=_Part_3_779503997.1359448552910
Content-Type: multipart/alternative;
	boundary="----=_Part_4_1473430879.1359448552910"

------=_Part_4_1473430879.1359448552910
Content-Type: text/plain; charset=utf-8

<snip> - the plain text version

------=_Part_4_1473430879.1359448552910
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: quoted-printable

<div style=3D"background-color:#e8f3fb"><p><b>Date:</b> 15/01/2012</p><p><b>Project:</b> DISCO1 - Timesheet de=
mo app</p><p><b>Hours:</b> 8.0 hours (non-billable)</p><p><b>Comments:</b> =
I really hope this works</p></div><div style=3D"background-color:#dcedf9"><=
p><b>Date:</b> 16/01/2012</p><p><b>Project:</b> DISCO1 - Timesheet demo app=
</p><p><b>Hours:</b> 10.0 hours (non-billable)</p><p><b>Comments:</b> Worki=
ng really hard</p></div><div style=3D"background-color:#e8f3fb"><p><b>Date:=
</b> 17/01/2012</p><p><b>Project:</b> DISCO1 - Timesheet demo app</p><p><b>=
Hours:</b> 12.0 hours (non-billable)</p><p><b>Comments:</b> Must get emails=
 working</p></div></div><p></p><p>Regards,</p><p>Discovery Consulting Group=
 Pty Ltd</p><p><img src=3D"cid:f5ab0d22-17e7-49c5-9ddd-a861e5ce089a" alt=
</p></body></html>

------=_Part_4_1473430879.1359448552910--

------=_Part_3_779503997.1359448552910
Content-Type: image/jpg; name=Discovery_email_sig.jpg
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=Discovery_email_sig.jpg
Content-ID: <f5ab0d22-17e7-49c5-9ddd-a861e5ce089a>

/9j/4AAQSkZJRgABAQEAeAB4AAD/2wBDAAoHBwkHBgoJCAkLCwoMDxkQDw4ODx4WFxIZJCAmJSMg
IyIoLTkwKCo2KyIjMkQyNjs9QEBAJjBGS0U+Sjk/QD3/2wBDAQsLCw8NDx0QEB09KSMpPT09PT09

<snip> - for the sanity of us all!

+4PzoooAXdL/AM8x+dG6X/nmPzoooAduf+5+tOHSiigBaKKKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKAP/Z
------=_Part_3_779503997.1359448552910--
Wrap up – key learnings, what else I’d implement
Sending an email with an image attachment, a company logo or a picture of the item you just ordered, whatever 🙂 is quite possible using the SAP NWCloud. This generates some very interesting possibilities. I certainly see a much more interactive solution in the future. (That is if we’re still using email!)
One key learning I had here is to re-examine every design choice/restriction that I’ve used/had with ABAP code. It is likely that it is either not relevant or is possible when using the NW Cloud – I just have to figure out how. That’s not to say it’s not possible with ABAP – it’s just quite often easier in the cloud.
I’m very chuffed with my solution, especially as I had not found anyone else writing about using SAP NWCloud to send emails with inline images (and even blogs on using javax.mail are relatively few)! However, something that I should have implemented, but didn’t – was some way of verifying that the email address that the user has entered is their own. A simple email verification link would have been simple to build using SAP NW Cloud, and is certainly a requirement for any productive solution like this where the user can enter their own (or if they wanted to, someone else’s) email address. (I did default the email address when the user signs in using Google – but what would be good would be to also default a “validated email address” flag.) There is all sorts of legislation regarding spam emails which I should probably try to be very careful about. In Australia, for example, any “commercial electronic message” must have a means for the recipient to unsubscribe and can be backed up by court action and fines! (Better get working on implementing that one!)
As per my previous blog, I’d like to implement some decent JUnit tests here too. If for nothing else than to give me some exercise in understanding how they could help with the development or potentially be a PITA.
Disclaimer, ramblings and what’s next
As usual, even if I make outrageous claims like “here is the code I used”, it’s all for use at your own risk, I don’t promise that any of it works or won’t completely bugger up your system. Likewise, none of my ridiculous ramblings should be taken to be indicative of the opinion of my wonderful and very patient employer (who has had to deal with me not being able to come into the office again today as my poor little girl isn’t feeling too well!) If you do reuse my code, some attribution would be appreciated but really what is code development if not elaborate copy/paste. These are my opinions, ramblings and mistakes. I’ve tried not to offend, imply anything which might offend or eat fresh raspberries with whipped cream whilst putting together this blog. I’m pretty sure I’ve failed on one point there.
Next blog will likely be the use of CORS to allow successful connections from mobile devices to the cloud. But we’ll see! (might be working on that unsubscribe button! 😉 )
In the meantime, and in my own time, I’ve put together a little page on which shows that I don’t just live, eat, sleep and dream SAP stuff – there is another side! (so there Jo! I know it’s not quite the same as frocking up and writing a fashion blog, but it counts? cc John Moy.)
@jwenna @wombling Yes, but in between his technical #SCN blogs did @wombling find time to frock up and write a fashion blog? I think not!
— John Moy (@jhmoy) January 24, 2013
If you liked my blog and might even have found the code useful, please take a sec, log in and rate it and even click on the like button. So far Jo is getting far more likes than me and I’m nothing if not competitive. So if you want to have a little fun at my expense, please go read her blogs, rate them and like them and don’t bother with mine 😉 .
There are more than just laws against spam. The SAP NetWeaver Cloud Supplemental Terms and Conditions contain this clause:
D.
Thanks D, (if I use the longer shortened version of your name I see stars!)
yes it certainly is important not to be sending spam mails out – even accidentally. One thing that is interesting about the Australian law is that even one single unsolicited email is considered spam – there is no need for it be a “mass email”.
In the link I posted above, Tiger Airways were fined $110,000 for having an un-register link that didn't work – although in most cases businesses are just warned and told to fix their systems. Still, it's not the kind of publicity that you'd want!
Thanks for reading and sharing your knowledge of the T&C – I remember that series of blogs you did about all the Supplemental Terms and Conditions and your thoughts on them well.
Cheers,
Chris
Thanks to Leo van Hengel for letting me know the video link was broken, I’ve now fixed it. Thanks Leo!
Letts’ Law: All programs evolve until they can send email.
Nice one Chris!
Now we can start building workflow applications on top of NW Cloud and move the workflow inbox to your gmail 🙂
The next evolution is surely to move away from email and just IM the person. Workflow via twitter 🙂
Might be worth noting after some feedback from my testers, I added the following code to ensure that the timesheets being sent to the users were actually sorted – as the logic I have above doesn’t actually sort them before putting them into the HTML – it’s whichever one gets there first.
// sort the entries by date first!
Comparator<TimeSheetEntry> comparator = new Comparator<TimeSheetEntry>() {
    public int compare(TimeSheetEntry c1, TimeSheetEntry c2) {
        SimpleDateFormat formatter = new SimpleDateFormat("yyyyMMdd");
        try {
            Date d1 = formatter.parse(c1.getAssociatedDay().getDay());
            Date d2 = formatter.parse(c2.getAssociatedDay().getDay());
            return d1.compareTo(d2);
        } catch (ParseException e) {
            return 0;
        }
    }
};
Collections.sort(entries, comparator);
Righto – decided I needed more than a comment to store the changes I’d made – so have published a “part two”. Not a particularly impressive part 2 but might be interesting to some, nevertheless!
Sending formatted email with inline images from #sapnwcloud (part two, the fixes) | https://blogs.sap.com/2013/01/30/sending-formatted-email-with-inline-images-from-sapnwcloud/ | CC-MAIN-2017-51 | refinedweb | 4,509 | 54.63 |
This whole thing started when some punk said MIT/Scheme had no applications. I <3 Edwin, man. I'm still working on that baseball program (Fenway park model). I'm channeling Torvalds or Ty Cobb or someone today.

;;; Code to animate a projectile, simulating a baseball in flight, for MIT Scheme 9.4
;;;
;;; Uses the Vizajo interface (via FFI) to Pikturo (Open Inventor compatible library).
;;;
;;; Loads a scene from the file "project-1-scene.iv", a Pikturo scene graph.
;;;
;;; Utilizes code from Project 1 for the ball physics (in "final-project-physics.scm")

(declare (usual-integrations))

(load-option 'FFI)
(load "final-project-physics.scm")

;; TODO: there must be a simpler way to find a
;; TODO: file to load in the system library directory
(with-working-directory-pathname
 (->namestring (system-library-directory-pathname))
 (lambda () (load "vizajo-mit.scm")))

(C-include "vizajo-mit")

;; format does not have the ability to put in leading zeros
;; for a fixed length integer, so we have this function to do it
(define (build-image-path prefix frame ext)
  (string-append prefix "-"
                 (cond ((<= frame 9) "00")
                       ((<= frame 99) "0")
                       (else ""))
                 (write-to-string frame)
                 ext))

;; the update procedure for the physics "trajectory" procedure
(define (output-frames path fmt w h)
  (let ((ext (cond ((= fmt vizajo:file-PostScript) ".ps")
                   ((= fmt vizajo:file-RGB) ".rgb")))
        (node-name "ball_translation")
        (field-name "translation")
        (frame 0))
    (define (position-ball x y u v t)
      (let* ((image-path (build-image-path path frame ext))
             (translation (string-append "0 " (write-to-string y) " "
                                         "-" (write-to-string x)))
             (ret-val (C-call "vizajo_set_field_value" node-name field-name translation))
             (field-val-alien (make-alien '(* char))))
        (C-call "vizajo_get_field_value" field-val-alien node-name field-name) ; TODO: leaks, should free returned string
        (display "set-translation ")
        (display translation)
        (newline)
        (display "translation=")
        (display (c-peek-cstring field-val-alien))
        (newline)
        (if (= 1 ret-val)
            (C-call "vizajo_snapshot" image-path fmt w h 0)
            (error "could not set field value"))
        (set! frame (+ frame 1))))
    (trajectory 0 1 45 45 position-ball)))

;; main procedure of the program
(define (main args)
  (let* ((file-format-arg (cadr args))
         (file-format (cond ((string=? file-format-arg "PS") vizajo:file-PostScript)
                            ((string=? file-format-arg "RGB") vizajo:file-RGB)
                            (else (error "invalid file format")))))
    ;; glX can cause floating point underflow exceptions, defer them
    (flo:defer-exception-traps!)
    (C-call "vizajo_startup")
    (let* ((input-iv-path (caddr args))
           (image-prefix (cadddr args))
           (scene-read-return (C-call "vizajo_read_scene" input-iv-path)))
      (if (= 1 scene-read-return)
          (output-frames image-prefix file-format 1280 720)
          (error "could not read scene file")))
    (C-call "vizajo_shutdown")))

;; bootstrap the program
(main (list "" "PS" "./final-project-fenway-park.iv" "play-ball"))

---- from the other mailing list ----

Sigh...I worked on a telemetry project at Cessna Aircraft, hardware telemetry for monitoring flight tests--a safety critical development area. Our system gave ground engineers visualization of the critical flight parameters so test pilots wouldn't lose control going into flutter. That's what I thought you were talking about, and probably what I should be working on. We almost lost a few. I ain't got no problem with Microsoft, my brother-in-law is feeding 6 other people and living a great life, which he damn well deserves.
When I booted one of the first IBM PCs in my basement in '82, and wrote one of the first fantasy baseball apps (I'll elide the provenance--I still haven't heard back from the MOMA on that one) I had an instant on system

> 10 print "Take me out to the ballgame! Buy me some peanuts..."
> 20 goto 10
> run

Nobody watching me, no compromises, just a pure creation engine. I want to get back to that. I couldn't believe the ensuing platform wars. I'm tired of fighting other people's crap wars, and even thinking about it. I'll sit here at #106 Scheming Pony Way and write the damn thing myself if I have to. Nobody even seems to be listening to me anyway.

--
Stewart Milberger

         ^|||^
         \. ./
Scheming | | Pony
         { }

Sent with ProtonMail Secure Email.
play-ball.webm
Description: video/webm | https://lists.gnu.org/archive/html/mit-scheme-devel/2020-02/msg00005.html | CC-MAIN-2020-16 | refinedweb | 680 | 54.73 |
On Thurs Sept 21 at 18:54, Linus Torvalds wrote:
> On Thu, 21 Sep 2006, Ramsay Jones wrote:
> >
> > IMHO, setting the value in the Makefile, for systems that don't define
> > PATH_MAX, is a much better solution. In fact, that is what I thought was
> > already being done.
>
> Well, considering that we _can_ test defines, why not just do it
> automatically.
>
> In other words, instead of this patch:
>
> > > -
> > > -#ifndef PATH_MAX
> > > -# define PATH_MAX 4096
> > > -#endif
> > > +#include <limits.h>
>
> Just make the code read
>
> #include <limits.h>
>
> /*
>  * Insane systems don't have a fixed PATH_MAX, it's POSIX
>  * compliant but not worth worrying about, so if we didn't
>  * get PATH_MAX from <limits.h>, just make up our own
>  */
> #ifndef PATH_MAX
> # define PATH_MAX 4096
> #endif
>
> and after that we can just ignore the issue forever more.

Yes, that would certainly be a solution. (Of course, setting the value in
the Makefile would still be a better solution ;-)

However, ...

> The thing is, it's not like we even really _care_ what PATH_MAX is all
> that deeply. We just want to get some random value that is reasonable.
>
> Linus

... given the above, a better solution is: don't use PATH_MAX. Simply
#define a new symbol in a suitable git header file and globally replace
uses of PATH_MAX with the new symbol. Job done.

All the best,
Ramsay
On 2011-09-01 22:05, Lothar Werzinger wrote:
> I tried to set the property cbi.include.source to false as described in the buckminster book, but the build still looks
> for the unavailable org.apache.commons.logging.source bundle.
>
The cbi.include.source has no relevance when you do a resolve. It's only valid when you build a site.
resolve ../mmt-tools.cquery
perform biz.tradescape.mmt.cdoserver.feature#site.p2
Why are you doing two different resolves b.t.w.? What's the purpose of the second one?
- thomas.
> P.S.
> just curious; why did you create a new thread instead of replying to the existing one?
Not sure what you mean, this is my sixth reply in this thread. All postings, from your original posting, show up as a
single thread in my news reader.
Hi Lothar
We do something very similar:
Then we have a different job that executes the following commands to build the site:
importtargetdefinition -A '${WORKSPACE}/......../tp-37.target'
import '${WORKSPACE}/......../source.cquery'
build
perform some.feature#site.p2
I missed importtargetdefinition in your command list above. Did you try that?
Cheers, Stephan
On 2011-09-02 07:40, Lothar Werzinger wrote:
> Thomas Hallgren wrote on Thu, 01 September 2011 22:08
>>.
>
>
> test_rcp.cquery/test_rcp.rmap are from the job that builds the TP
> mmt-tools.cquery/mmt-tools.rmap is from the job that tries to build site.p2 using the previously materialized TP.
>
There must be some misunderstanding then. You don't need to re-resolve before you perform unless you switch
workspace between the attempts.
> As outlined in another post the goal is to have a TP (immutable after creation) that is used for development in the IDE
> as well as for all automatic builds.
The TP is never changed during a build.
> As I had trouble with the build part I reduced it to simply building a feature composed of other features in the TP, so
> that I didn't have to distribute code to compile in the sample I provided to reproduce the issue.
>
OK, that's fine. The problem I will look at is why you don't get both source bundles into your TP. I have what I need to
do that.
>
> I am using the web forum and for whatever reasons it shows up as another thread there.
The web-forum apparently has some flaws...
- thomas
On 2011-09-02 09:11, Stephan wrote:
> I missed importtargetdefinition in your command list above. Did you try that?
>
The importtargetdefinition uses the PDE target definition construct. Lothar uses Buckminster to build the target
platform based on the cquery/rmap dependency resolution.
All our builds use Buckminster to resolve both the target platform and the workspace at the same time. I've found that
approach much more flexible and less error prone. Rebuilding the TP is never a problem unless the p2 repositories that
serve up the TP are very slow. And if they are, you can always create a local mirror of them instead of a pre-defined
target platform.
IMO, using p2 repositories rather than pre-defined TP's as input to your build is more flexible.
- thomas | http://www.eclipse.org/forums/index.php?t=msg&th=238080/ | CC-MAIN-2013-48 | refinedweb | 554 | 67.04 |
for first one, i think we can use hash table.
I’m sure there’s a cleverer way to solve the store credit question, but it
eludes me this morning. Although my solutions look very similar to the official
ones, I didn’t look ahead this time, scout’s honor.
I came up with the same solution as above. This plays on the fact that there is only one solution to the problem (so repeat numbers don’t matter) and the number of items we are looking for is always 2. A change to either of these prerequisites would need a different solution.
def store01(credit, items):
    lookup = {}
    for (elem, cost) in enumerate(items):
        find = credit - cost
        if find in lookup: return lookup[find], elem + 1
        lookup[cost] = elem + 1

print store01(100, [5, 75, 25])
print store01(200, [150, 24, 79, 50, 88, 345, 3])
print store01(8, [2, 1, 9, 4, 4, 56, 90, 3])
# =>
# (2, 3)
# (1, 4)
# (4, 5)
My Haskell solution (see for a version with comments):
A pretty ugly ruby version …
;; I apologize in advance if this is posted thrice. Even though I refresh the page, my post doesn’t show up…
(defun shop (val l)
  (maplist (lambda (x)
             (let ((x (car x)) (xs (cdr x)))
               (mapcar (lambda (r) (if (= val (+ x r))
                                       (return-from shop (cons (position x l) (position r l)))))
                       xs)))
           l))

(defun rev (str)
  (let (l w)
    (loop for c across str
          if (char= #\space c)
            do (when w
                 (push (reverse w) l)
                 (setf w nil))
          else do (push c w))
    (reduce (lambda (x y) (concatenate 'string x " " y))
            (mapcar (lambda (x) (concatenate 'string x))
                    (if w
                        (cons (reverse w) l)
                        l)))))

(defun t9 (str)
  (let ((h (make-hash-table))
        res
        (last #\Nul))
    (mapcar (lambda (letter code)
              (setf (gethash letter h) code))
            (loop for l across "abcdefghijklmnopqrstuvwxyz " collect l)
            (list "2" "22" "222" "3" "33" "333" "4" "44" "444"
                  "5" "55" "555" "6" "66" "666" "7" "77" "777" "7777"
                  "8" "88" "888" "9" "99" "999" "9999" "0"))
    (loop for c across str
          for now = (gethash c h)
          if (and (not (char= last #\0))
                  (char= (aref now 0) last))
            do (push (concatenate 'string " " now) res)
          else do
            (setf last (aref now 0))
            (push now res))
    (reduce (lambda (x y) (concatenate 'string x y)) (reverse res))))
Link to C program to reverse words (Exercise 2)
Here are Java functions for each exercise:
The source code in its entirety, which includes the definition of the hash map that’s used for exercise 3, can be read here.
My Python Solution for second problem
s=str(raw_input('Enter something'))
set1=s.split()
list1=[]
rev=[]
s4=''
for s1 in set1:
    list1.append(s1)
while list1:
    s3=list1.pop()
    s4+=s3
    s4+=' '
print s4
Yet another set of Python solutions.
I had to have a go at the Reverse Words exercise in C, in-place. I’d imagine this is a classic of programming interviews; first reverse the letters in each of the words, then reverse the whole string:
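To illustrate that two-pass idea in code (my own sketch, in Java on a char[] since Java Strings are immutable; not the commenter's C):

// Sketch: reverse the words of a string in place on a mutable buffer.
static void reverse(char[] a, int from, int to) { // [from, to) half-open
    for (int i = from, j = to - 1; i < j; i++, j--) {
        char t = a[i]; a[i] = a[j]; a[j] = t;
    }
}

static String reverseWords(String s) {
    char[] a = s.toCharArray();
    int start = 0;
    for (int i = 0; i <= a.length; i++) {
        if (i == a.length || a[i] == ' ') { // end of a word
            reverse(a, start, i);           // 1. reverse each word
            start = i + 1;
        }
    }
    reverse(a, 0, a.length);                // 2. reverse the whole string
    return new String(a);
}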
In F#…
I would like to be on the mailing list for new posts.
Thank you,
Tony
There is an RSS icon on the About page. Or you can subscribe via WordPress.
C implementation of reverse words:
T9 number:
my Python implementation:
Here is a Python program for the store credit problem. The basic idea is to store the prices and indices in a dictionary and iterate over the dictionary, where at each step we look for the complementary price. Since dictionary look-up is constant time on average, this is a linear-time algorithm in the average case. It is also linear-space. | http://programmingpraxis.com/2011/02/15/google-code-jam-qualification-round-africa-2010/?like=1&source=post_flair&_wpnonce=8ef4095fcf | CC-MAIN-2015-35 | refinedweb | 609 | 67.52 |
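A rough rendering of that dictionary idea (sketched in Java rather than Python, and not the commenter's program):

import java.util.HashMap;
import java.util.Map;

// Map price -> 1-based index; for each item look up the complementary
// price first. Average O(n) time, O(n) space.
static int[] storeCredit(int credit, int[] prices) {
    Map<Integer, Integer> seen = new HashMap<Integer, Integer>();
    for (int i = 0; i < prices.length; i++) {
        Integer j = seen.get(credit - prices[i]);
        if (j != null) {
            return new int[] { j, i + 1 };
        }
        seen.put(prices[i], i + 1);
    }
    return null; // no pair found
}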