A Universe of Learning: Part 1
It might not seem so, but, believe it or not, there is more to life than just deep learning. There is, in fact, a whole universe of machine learning, from probabilistic methods (Bayesian networks, Markov chains) to deterministic modeling (logistic regression, support vector machines) to ensemble learning (random forests, stacking). If we want to be the best engineers and data scientists that we are capable of being, then we need to discover the whole universe, learn about the pros and cons of every tool in the toolbox, and understand which problem each tool is best at solving.
Thanks to a large (and growing) number of easy-to-use frameworks, it has become rather trivial to play in this universe. Scikit, mllib, NLTK, PyBrain, and a million other packages will let you plumb data through any learner with just a few lines of Python. A competent developer could get something working, however inaccurate or insensitive it may be, within mere minutes. But to understand how to optimize, and why you might choose a simple option, like naive Bayes, over something more complex, like bucket of models, you need to consider the benefits and drawbacks of each option, and how they might be relevant to your particular problem or goal.
So, let’s dig in, one learning method at a time, starting with:
Naive Bayes
This learner is based on Bayes' theorem of probability — P(A|B) = P(B|A) × P(A) / P(B) — and is beloved for data classification trained against very large data sets.
It is fast, efficient, performs very well for multi-class prediction, assumes features are unrelated and independent, and works well with non-numeric inputs. However, it is, of course, not perfect; this learner also suffers from the Zero Frequency Problem (it is entirely dependent on observed values in the training data in order to make predictions), is not a particularly good estimator (pay little attention to the probability scores), and requires careful feature engineering to benefit from feature independence.
The math and procedure behind naive Bayes is pretty straightforward: generate a frequency table (which will show how often each feature is seen in the training data), create a table of probabilities for each feature (how often each feature relates to the total number of training samples), then use those numbers as variables in the formula to calculate the probability of each class. As an example, let’s look at how we might decide if an e-mail is likely to be spam based on the existence of the word “prince…”
# assume a base probability of spam at 20%
# how many times does the word "prince"
# appear in spam and non-spam emails...
"prince" | Yes | No | Total
-------------------------------------
spam | 6 | 12 | 18
non-spam | 1 | 23 | 24
# what is the likelihood of the word "prince"
# in a spam or non-spam e-mail...
"prince" | Yes | No | Total
-------------------------------------
spam | 0.33 | 0.67 | 18
non-spam | 0.04 | 0.96 | 24
-------------------------------------
TOTAL | 0.17 | 0.83 | 1.0
# now, plug those values into the formula...
P(spam|"prince") = (0.33 x 0.2) / 0.17
P(spam|"prince") = 38.83%
Naive Bayes models are known as “eager” offline learners, which means they are trained before use, and are always trying to construct a generalized, feature-dependent understanding of the world. They are somewhat resistant to noisy data (which is why they are well liked for extremely large data sets), and are therefore used on problems that are at risk of such noise: recommendation engines, sentiment analysis, text classification, etc. If you have a lot of data, and a discrete set of possible answers, and you want to make predictions quickly in real-time, this might be a great learner for you.
Let’s take a look at some sample code: using just two files (sentiment-neg.txt and sentiment-pos.txt), scikit-learn, and a little python, you can create a very simple sentiment classification model (positive or negative).
import re
from random import shuffle
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer

sentences = []

# get all negative sentences...
with open("sentiment-neg.txt", 'r') as file:
    lines = file.readlines()
    for line in lines:
        line = re.sub(r'[^\x00-\x7F]', '', line)  # strip non-ASCII characters
        sentences.append([
            line.strip(),
            'neg'
        ])

# get all positive sentences...
with open("sentiment-pos.txt", 'r') as file:
    lines = file.readlines()
    for line in lines:
        line = re.sub(r'[^\x00-\x7F]', '', line)
        sentences.append([
            line.strip(),
            'pos'
        ])

# shuffle, then hold out 25% of the sentences as a test set
shuffle(sentences)
test_threshold = int(len(sentences) * .25)
tests = sentences[:test_threshold]
sentences = sentences[test_threshold:]

# turn each sentence into a vector of word counts
vectorizer = CountVectorizer(stop_words='english')
training_set = vectorizer.fit_transform([r[0] for r in sentences])
test_set = vectorizer.transform([r[0] for r in tests])

# train the classifier and predict labels for the held-out test set
nb = MultinomialNB()
nb.fit(training_set, [r[1] for r in sentences])
predictions = nb.predict(test_set)

for i, prediction in enumerate(predictions[:10]):
    print(prediction + ': ' + tests[i][0])
# ----------------
# OUTPUT
# ----------------
# neg: unfortunately, heartbreak hospital wants to convey...
# neg: the film feels formulaic, its plot and pacing typical...
# pos: there is a kind of attentive concern that hoffman brings...
# pos: one fantastic (and educational) documentary.
# neg: the movie is a negligible work of manipulation...
# pos: my thoughts were focused on the characters...
# pos: warm and exotic.
# neg: has all the right elements but completely fails...
# neg: ringu is a disaster of a story, full of holes...
# pos: cedar takes a very open-minded approach to this...
This particular classifier uses word counts as scoring vectors in order to determine sentiment classification, but you might decide to test more complex features (perhaps n-grams, first or last words, only verbs, etc). Those decisions I will leave to the engineer; experiment and test!
Next time, we’ll take a look at Markov chains!
Hi Antees,
I wonder whether anyone has gotten their own PropertyHelper installed, and what
exactly needs to be done?
What I'm doing is basically this:
1. I'm deriving my PropertyHelper from `org.apache.tools.ant.PropertyHelper'
2. I wrote an `Init' task to install my property helper. The execute method looks
like:
public void execute() throws BuildException
{
    PropertyHelper PH = new PropertyHelper(); /* that's my own here */
    Project P = getProject();
    PH.setProject(P);
    P.addReference("ant.PropertyHelper", PH);
}
I can see that my methods are called, especially the method "replaceProperties()"
of my helper. However, having done this I'm no longer able to resolve ${basedir},
nor Java system properties.
My build.xml looks basically like:
<project name="ant-test" default="test" basedir=".">
    <taskdef name="my-init" classname="my.Init" />
    <!-- install my property handler -->
    <property name="X" value="Y" />
    <my-init />
    <target name="test">
        <echo> basedir=${basedir} </echo>
        <echo> X=${X}</echo>
    </target>
</project>
The echo would then print
basedir=${basedir}
X=Y
instead, which is a bit unexpected to me.
Note that I can reproduce this with a very basic PropertyHelper:
public class PropertyHelper extends org.apache.tools.ant.PropertyHelper {
    protected PropertyHelper() {
        super();
    }
}
Any help available??
Side issue:
Installing your own PropertyHelper is not enough in general. It appears that
task `Property' is not consistently using the PropertyHelper interface. In order
to have my fancy property handling installed I also had to subclass class
Property and make a couple of adjustments.
Finally - what is it all about?
I'm investigating build scripts written in our organisation. Lots of times I found
things like
classname = "x.y.z"
filename = "x/y/z.class"
in build.properties. I want to avoid users needing to write things more than
once if not really required. So what I want to have is this:
filename = ${subst \\., /, ${classname}}.class # evaluates to: x/y/z.class
Everything works fine so far except that my expression evaluator fails to
resolve ${basedir} :-(((
Thanks for any help,
Wolfgang.
It looks like most of the solutions focus on the "sort & insert" idea. Here I offer another perspective to reconstruct the queue by selecting its front element from the "unordered" queue one by one. (Here "unordered" queue is the one given as the input while "ordered" queue is the one with correct order.)
To begin, we need to determine the signature of the front element of the "ordered" queue: there should be no other element before it.
Now in terms of the (h, k) model, what should be the h and k values of the front element? First, since there is no other element before, its k value should be 0. Second, for all those elements with k equal to 0, its h value should be the minimum. The reasoning is as follows: suppose we do have another element (h', k') with k' = 0 and h' <= h (Note we've denoted the front element as (h, k) ). Since (h, k) is the front element, it will be in front of (h', k'), which means there is at least one people, i.e. (h, k), in front of the person (h', k'). By definition of the (h, k) model, we should have k' >= 1, which contradicts with the assumption that k' = 0. Therefore we know in the (h, k) model, the front element of the "ordered" queue will be the one with k = 0 and minimum h value in the "unordered" queue.
One more thing to consider is what happens if we have found the first front element. Straightforwardly we want to "remove" it from the "ordered" queue so that we are up to find the "next" front element. Since we've already figured out the front element of the "ordered" queue, we know that for all other people, there is at least one people (which is the front element) in front of them. For those with height greater than that of the front element, removing the front element won't matter. But for those with height no more than that of the front element, removing the front should decrease their k value by 1.
If we have n people in the "unordered" queue, we need to pick the front element n times, and each time we need to go through the "unordered" queue to choose the front element, then modify those with height no more than the front element. This yields a time complexity of O(n^2). Here is the java program (note we need to modify the k values so I made a copy of the input "people" matrix):
// requires: import java.util.Arrays;
public int[][] reconstructQueue(int[][] people) {
    int n = people.length;
    int[][] copy = new int[n][];
    int[][] res = new int[n][];
    for (int i = 0; i < n; i++)
        copy[i] = Arrays.copyOf(people[i], 2);
    for (int i = 0; i < n; i++) {
        int k = -1;
        // pick the front element
        for (int j = 0; j < n; j++) {
            if (copy[j][1] == 0 && (k == -1 || copy[j][0] < copy[k][0]))
                k = j;
        }
        res[i] = people[k]; // set the result
        // modify the k values of those with smaller or equal h values
        for (int j = 0; j < n; j++) {
            if (copy[j][0] <= copy[k][0])
                copy[j][1]--;
        }
    }
    return res;
}
Nice. That's my own original solution as well. I didn't like it much anymore after seeing the sort+insert idea. But I do find it very simple.
Here's a Python version. I put people into a pool together with a copy of their k-value, which then gets decreased during the run.
def reconstructQueue(self, people):
    pool = [[p[1], p] for p in people]
    queue = []
    while pool:
        p = min(pool)[1]
        pool.remove([0, p])
        queue.append(p)
        for other in pool:
            if other[1][0] <= p[0]:
                other[0] -= 1
    return queue
Opened 7 years ago
Closed 3 years ago
#13749 closed New feature (fixed)
Link from admin-site to site
Description
Sometimes it is useful to have a feature like this: a link in the top header which goes to the site.
Attachments (5)
Change History (34)
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
Changed 7 years ago by
Patch: added link to django admin <h1> tag
comment:3 Changed 7 years ago by
comment:4 Changed 7 years ago by
comment:5 Changed 7 years ago by
comment:6 Changed 7 years ago by
This should use the sites framework, and should be <h1><a>..</></h1> for valid HTML.
comment:7 Changed 7 years ago by
I like the idea of linking to the "real" site from the admin site by default, but the "Django administration" text shouldn't be the link. If anything, "Django administration" should link to the main index page of the *admin* site. I'd be in favor of adding a smaller "View live site" link next to the "Django administration" header. Although it might technically be incorrect to link this to "/" in some case, I think that's the common case (a reasonable default), and it would be easy enough to change.
If somebody provides a patch that does what I've described above, and it looks good from a design standpoint, I'll check it in.
Changed 7 years ago by
comment:8 Changed 7 years ago by
What about having the link in the upper right navigation? I've added a patch which does that.
comment:9 Changed 6 years ago by
This is technically a new feature and there is no definite patch for it yet, so I'm moving it out of 1.3, which is launching really soon. I think more discussion needs to happen here. I too feel the need for that link in most projects, but not all. Even if that's a sensible default, a good patch would allow overriding it easily, probably via a {% block %}.
I'm also not convinced that "live site" is the right term as it can be ambiguous (the admin itself could be considered live when it's deployed to the server). So I'd prefer something along the lines of "Site's front end" (or something prettier) as opposed to the admin which is the site's back end.
comment:10 Changed 6 years ago by
comment:11 Changed 6 years ago by
comment:12 Changed 6 years ago by
I chose to use 'view site' as the link text, so it looks like the 'view on site' text that's available on models on the admin site. Do I need to do anything special about this new translation string? Also I'm not sure what the {{ root_path }} thing is supposed to do, I just copied/pasted it from base.html. It looked like some default fallback that's used everywhere.
I also implemented the link to the main page (as per adrian), as it is a convention on most websites.
Changed 6 years ago by
Implemented julien's and adrian's suggestions
Changed 6 years ago by
Updated patch. Set the 'Django administration' heading to the default/old styling.
comment:13 Changed 6 years ago by
The patch looks fine to me. Personally some kind of integration with the sites-framework would be nice, but it would add a usually unnecessary dependency and could also be done by people who customize the admin templates.
comment:14 Changed 6 years ago by
This is looking good. However,
{% url 'admin:index' %} should always return a url, so the conditional test and the use of
root_path are redundant. The template code could be simplified to:
<h1 id="site-name"><a href="{% url 'admin:index' %}">{% trans 'Django administration' %}</a></h1>
Note that
root_path should eventually be made completely redundant by #15294.
comment:15 Changed 6 years ago by
Changed 6 years ago by
removed redundant conditional test
comment:16 Changed 6 years ago by
comment:17 Changed 4 years ago by
comment:18 Changed 4 years ago by
comment:19 Changed 3 years ago by
So I guess this ticket can be closed, right?
comment:20 Changed 3 years ago by
pull request opened on github:
comment:21 Changed 3 years ago by
Hi!
Patch looks good but it needs documentation (a mention in the release notes) and some tests.
Thanks!
comment:22 Changed 3 years ago by
Hi Baptiste,
I added a line in the release notes (1.8) and added one test for the "View site" string.
comment:23 Changed 3 years ago by
comment:24 Changed 3 years ago by
Ran tests with
PYTHONPATH=.. python runtests.py --settings=test_sqlite admin_views
Looks ok.
What about the translations for this new field? Do the translators get informed about it automatically (is there a 'list' functionality for untranslated fields?), or should I write a new ticket for this?
comment:25 Changed 3 years ago by
No need for a ticket, if the string is marked for translation (with
trans), the traditional translation workflow will take care of it.
comment:26 Changed 3 years ago by
Hi,
I've had a chance to discuss this with a few people at EuroPython and we came to the conclusion that another approach is needed.
Create a new attribute on AdminSite (maybe call it site_url, similar to site_header, site_title, etc) which would default to '/' and use this in the template.
Users could customize this link to be what matches their setup (but the default should be good enough for the general case) and even disable it if they want (the template should check if this attribute is set and not show any link if that's not the case).
I'll remove the easy pickings flag because it might be a bit more complicated to implement it this way.
Thanks
comment:27 Changed 3 years ago by
It shouldn't be *that* complicated - it should just be an attribute on AdminSite that returns reverse('%s:index' % self.name). The ability to have multiple installations of admin was one of the design requirements for namespaced URLs when we added them.
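To make that concrete, a minimal sketch of the attribute-based approach from the two comments above might look like this (names follow the proposal; an illustration, not the committed patch):

class AdminSite(object):
    site_url = '/'  # default link target for the header link; None hides it

    def each_context(self):
        # values exposed to every admin template
        return {
            'site_header': self.site_header,
            'site_title': self.site_title,
            'site_url': self.site_url,
        }

The base admin template would then wrap the "View site" link in {% if site_url %} ... {% endif %}, so setting the attribute to None removes the link entirely.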
comment:28 Changed 3 years ago by
Here is a PR that follows bmispelon's suggestions:
Agreed; when I modify the base template for a site to say "Myproject admin" instead of "Django Admin", I usually make that text a link to '/'.
From: William A. Rowe, Jr. <[email protected]>
I changed the subject so the Unix people could move on (or stay and learn). :)
Also sorry I haven't answered back - I've been busy. Days and days of rain but finally some
sun - time for yard work.
First, Windows programming is very complicated. There are so many "features" that no one person
can possibly know/understand it all.
> > Yes Win32 does offer functions to do this but NT/W2K/Win64 OS
> > also does this every time it "runs" any Apache Server
> > "call/function/action". NT/W2K/Win64 are Unicode "full time" systems.
>
> I disagree entirely with your phrase 'Apache Server call...' - it only
> translates Win32 API system calls (you are correct, the API is native
> unicode and must translate constantly). These are minimal under Apache,
> the example I can think of include the registry (used when loading and
> installing the service).
So you have a Unicode version and an ANSI version of Apache Server up and running to make
this comparison? It's not that I am saying Unicode will be faster - just feature-ready for
NT and up computers, and it may be faster - you let me know.
Anyhow, it has been years and I no longer "do Windows" daily, but please tell me when Windows
and Win32 changed. Win32 is a programming spec and does not tell the OS how to "accomplish
its work". That was because Win32 was supposed to be OS-free.
Win9x OS runs DOS for Win32 with very little Unicode (most is buggy except maybe the little
for MessageBox to report no Unicode support), but lots of DOS. Say if someone did it right
PlayStation OS could run Win32 with no Unicode (or maybe PlayStation handles Unicode????).
But NT/W2K/Win64 OS is Unicode - not because of any Win32 saying it has to be but because
MS wanted a Unicode "featured" OS. The world is not a char pointer!
A lot of people do not yet get the idea of the different usage of the three words Windows,
Win32 and OS. Windows is say a catch-all term for programs running on DOS /Win9x/ NT/ W2K/
or Win64. Win32 is a programming spec (and not C run time or Basic or whatever language that
will use some Win32 "actions"). OS (how a computer operates). The OS of Win9x and the OS
of NT and the OS of say some mainframe can all run Win32 but are all different in operating
style (hardware/drivers/etc).
Win32 has no strcmp() function - it is called lstrcmp().
Win32 has no beginThread() function - it is called CreateThread().
Win32 has no malloc() function - it is HeapXXX().
The golden rule was supposed to be - program to Win32 specs and the same source should run
on any OS that did Win32 "actions". (Sure - I did say was supposed to be!)
If you do not compile NT code under the define Unicode ( and Win32 does not make you -but
MS wishes it could have - no need to carry old programs, etc), then without defining Unicode,
the NT OS "translates" everything to Unicode, whoa one should never say never and never say
everything:), so the Unicode OS can work! Please look at ntdll.dll.
> Since the winsock api is -NOT- unicode, it is
> really irrelivant.
Say what? Winsock is a buffer-based (just double your buffer) and/or Unicode-aware library,
and silly old me, I must get my glasses checked because I wonder just what is all that Unicode
stuff in the Winsock include files?
You know I could go on and on but let's keep to one subject per message.
This one message is - Why isn't the Windows version of Apache Server using the tchar.h include
file instead of, say, string.h, stdlib.h, etc.?
If one started to program a Windows program (C run time or even a Win32 program) it might
flow as follows:
/* some defines needed for picking OS, Winsock version, COM, etc. */
#include <windows.h>
#include <tchar.h>
#include "user includes"

/* ... rest of code: good old Win32 spec code ... */
Now there is one source file that could be used on Win9x, NT/W2K and Win64 all based on defines.
If the programmer defined UNICODE - all Win32 functions will default to Unicode.
If the programmer defined _UNICODE - then the Unicode version of the C run time functions
will be used. There are also SBCS defines and MBCS defines. All sorts of defines. Tchar.h
handles a lot for Windows. Some say a very nice feature.
TCHAR someString;
Is someString - Unicode? MBCS? SBCS? Well what was defined?
Unicode was defined, well then it is a Unicode string!
Just look at tchar.h - all sorts of C run-time string functions, type defines, and printf-style
defines are ready to go just by having the programmer make a define - for example,
define UNICODE and all string functions and chars are Unicode-ready. String.h is not.
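As a rough illustration (a sketch - compile once with UNICODE/_UNICODE defined and once
without to watch every name change meaning):

#include <windows.h>
#include <tchar.h>

int _tmain(void)
{
    /* TCHAR and _T() expand to wchar_t and L"..." when _UNICODE is
       defined, and to char and "..." otherwise */
    TCHAR greeting[] = _T("hello");
    _tprintf(_T("%s is %d chars\n"), greeting, lstrlen(greeting));
    return 0;
}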
Put all Apache Server text in a resource file and you have a French, English or FarOutStarWorld
Apache system language defaulted server ready to go. String.h doesn't handle this.
So when programming Windows, in order to use all the great features that make Windows, well,
Windows, one must start with a good basic beginning. Tchar.h is that first building block
that other Windows features may rely upon. So why doesn't Windows Apache Server use tchar.h?
When porting software - port programming ideas not straight source code. Use all the wonderful
Unix http knowledge with Windows features.
Also perhaps one should quote from the books of the great Windows authors, instead of saying
my research only. Pick an author, a book, a page and a paragraph. I most likely still have
it close by.
Hope this helps,
JLW | http://mail-archives.apache.org/mod_mbox/httpd-dev/200005.mbox/%3C001301bfbb4b$0b793f60$0100007f@localhost%3E | CC-MAIN-2014-23 | refinedweb | 968 | 82.24 |
1 Jun 01:59 2009
Re: [PATCH V2] board support patch for phyCORE-MPC5200B-tiny
Jon Smirl <jonsmirl <at> gmail.com>
2009-05-31 23:59:55 GMT
On Sun, May 31, 2009 at 6:11 PM, Wolfgang Denk <wd <at> denx.de> wrote:
>> +/*---------------------------------------------------------------------------
>> + Environment settings
>> +---------------------------------------------------------------------------*/
>> +#if 0
>> +#define CONFIG_ENV_IS_IN_FLASH 1
>> +#define CONFIG_ENV_ADDR (CONFIG_SYS_FLASH_BASE + 0xfe0000)
>> +#define CONFIG_ENV_SIZE 0x20000
>> +#define CONFIG_ENV_SECT_SIZE 0x20000
>> +#else
>> +#define CONFIG_ENV_IS_IN_EEPROM 1
>> +#define CONFIG_ENV_OFFSET 0x00 /* environment starts at the */
>> +                               /* beginning of the EEPROM */
>> +#define CONFIG_ENV_SIZE CONFIG_SYS_EEPROM_SIZE
>> +#endif
>> +#define CONFIG_ENV_OVERWRITE 1
>
> Are you sure? This is pessimal choice. EEPROM is slow and unreliable.
>
> After you decided for a solution, then please remove the (then) dead
> code.

Phytec ships default with the boards using the EEPROM. But, I'm with you and I'll
flip the #if to use the FLASH. Both FLASH and EEPROM work. The flash has 128KB page
size which wastes a lot of space holding a 2KB environment so I see why some people
want to stick with EEPROM. So I'd like to keep them both in place. Would it be
better if the
(Continue reading)
Manages a TransformFeedback buffer.
#include <vtkTransformFeedback.h>
Manages a TransformFeedback buffer.
OpenGL's TransformFeedback allows varying attributes from a vertex/geometry shader to be captured into a buffer for later processing. This is used in VTK to capture vertex information during GL2PS export when using the OpenGL2 backend as a replacement for the deprecated OpenGL feedback buffer.
Definition at line 40 of file vtkTransformFeedback.h.
Definition at line 44 of file vtkTransformFeedback.h.
The role a captured varying fills.
Useful for parsing later.
Definition at line 50 of file vtkTransformFeedback.h.
Return 1 if this class is the same type of (or a subclass of) the named class.
Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h.
Reimplemented from vtkObjectBase.
Clear the list of varying attributes to capture.
Capture the varying 'var' with the indicated role.
Get the list of captured varyings.
Definition at line 79 of file vtkTransformFeedback.h.
Returns the number of data elements each vertex requires for a given role.
Definition at line 217 of file vtkTransformFeedback.h.
Returns the number of bytes per vertex, accounting for all roles.
The number of vertices expected to be captured.
If the drawMode setter is used, PrimitiveMode will also be set appropriately. For the single-argument set function, set the exact number of vertices expected to be emitted, accounting for primitive expansion (e.g. triangle strips -> triangles). The two-argument setter is for convenience. Given the number of vertices used as input to a draw command and the draw mode, it will calculate the total number of vertices.
The size (in bytes) of the capture buffer.
Available after adding all Varyings and setting NumberOfVertices.
GL_SEPARATE_ATTRIBS is not supported yet.
The bufferMode argument to glTransformFeedbackVaryings(). Must be GL_INTERLEAVED_ATTRIBS or GL_SEPARATE_ATTRIBS. Default is interleaved. Must be set prior to calling BindVaryings.
Call glTransformFeedbackVaryings(). Must be called after the shaders are attached to prog, but before the program is linked.
Get the transform buffer object.
Only valid after calling BindBuffer.
Get the transform buffer object handle.
Only valid after calling BindBuffer.
The type of primitive to capture.
Must be one of GL_POINTS, GL_LINES, or GL_TRIANGLES. Default is GL_POINTS. Must be set prior to calling BindBuffer.
Generates and allocates the transform feedback buffers.
Must be called before BindBuffer. This releases old buffers. nbBuffers is the number of buffers to allocate. size is the size in bytes to allocate per buffer. hint is the type of buffer (for example, GL_DYNAMIC_COPY).
Binds the feedback buffer, then calls glBeginTransformFeedback with the specified PrimitiveMode.
Must be called after BindVaryings and before any relevant glDraw commands. If allocateOneBuffer is true, allocates 1 buffer (used for retro compatibility).
Calls glEndTransformFeedback(), flushes the OpenGL command stream, and reads the transform feedback buffer into BufferData.
Must be called after any relevant glDraw commands. If index is positive, data of specified buffer is copied to BufferData.
Get the transform buffer data as a void pointer.
Only valid after calling ReadBuffer.
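Pieced together from the ordering constraints above, a typical capture sequence might look like the following sketch (the varying-role enum value and exact signatures are illustrative assumptions, not verified against a particular VTK release):

// capture clip-space vertex positions emitted by a shader program
vtkNew<vtkTransformFeedback> tfc;
tfc->AddVarying(vtkTransformFeedback::Vertex_ClipCoordinate_F, "gl_Position");
tfc->SetNumberOfVertices(GL_POINTS, numInputVerts); // two-argument convenience setter
tfc->BindVaryings(prog);   // after shaders are attached, before the program is linked
// ... link the program, then:
tfc->BindBuffer();         // calls glBeginTransformFeedback with PrimitiveMode
// ... issue the relevant glDraw commands ...
tfc->ReadBuffer();         // glEndTransformFeedback, flush, read the data back
void *data = tfc->GetBufferData();  // valid until the buffer data is released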
Release any graphics resources used by this object.
Release the memory used by the buffer data.
If freeBuffer == true (default), the data is deleted. If false, the caller is responsible for deleting the BufferData with delete[]. | https://vtk.org/doc/nightly/html/classvtkTransformFeedback.html | CC-MAIN-2019-47 | refinedweb | 537 | 54.08 |
Xively (formerly Cosm and before that Pachube (pronounced Patch bay)) is an on-line database service allowing developers to connect sensor-derived data (e.g. energy and environment data from objects, devices & buildings) to the Web and to build their own applications based on that data. It was created in 2007 by architect Usman Haque. Following the nuclear accidents in Japan in 2011, Xively was used by volunteers to interlink Geiger counters across the country to monitor the fallout.
In this post, we will show how to connect a pcDuino to Xively and feed data to it.
Step 1: Parts List
1 x pcDuino v2
1 x Analog temperature sensor LM35 (part of Advanced Learning Kit for Arduino)
Several Jumper Wires
1 x Plastic mounting plate for pcDuino/Arduino w rubber feet
1 x Breadboard
Step 2: Wire Diagram
We wire the analog temperature sensor according to the following way:
VCC of LM35 -> 3.3V of pcDuino
GND of LM35 -> Ground of pcDuino
Output of LM35 -> A5 of pcDuino
The whole setup is shown below:
Step 3: Install Software Package
We will use Python to develop the project. First, we will need to install the tools:
1. Install Python
$ sudo apt-get install python-dev
2. Install python-pip (python-pip is a package that can be used to install and manage python software)
$ sudo apt-get install python-pippip
3. Install xively extension:
$sudo pip install xively-python
The Python library for pcDuino can be downloaded from github at: using the following command:
$git clone
Step 4: Register at Xively
We need to register at xively.com for a free Developer account.
Step 5: Pyhton Code
The python code can be downloaded at:
# part of the python code is copied from page 82 of Getting Started with BeagleBone, by Matt Richardson
# Jingfeng Liu
# LinkSprite.com/pcDuino.com
from adc import analog_read
import time
import datetime
import xively
from requests import HTTPError
api =xively.XivelyAPIClient("APIKEY")
feed=api.feeds.get(FEEDID)
def delay(ms):
time.sleep(1.0*ms/1000)
def setup():
print "read channel ADC0 value ,the V-REF = 3.3V"
delay(3000)
def loop():
while True:
value = analog_read(5)
temp = value*(3.3/4096*100)
print ("value = %4d"%value)
print ("temperature = %4.3f V" %temp)
now=datetime.datetime.utcnow()
feed.datastreams=[ xively.Datastream(id='office_temp', current_value=temp, at=now)
]
try:
feed.update()
print "Value pushed to Xively: " + str(temp)
except HTTPError as e:
print "Error connecting to Xively: " + str (e)
time.sleep(20)
def main():
setup()
loop()
main()
To run the code:
$python sively-temp.py
We can see the data posted at xively.com webpage:
Discussions | https://mobile.instructables.com/id/pcDuino-as-Networked-Device-to-feed-data-to-Xively/ | CC-MAIN-2019-22 | refinedweb | 438 | 55.24 |
Wit.ai is a NLP (natural language processing) interface for applications capable of turning sentences into structured data. And most importantly, it is free! So, there are no API call limits!
The Wit.ai API provides many kinds of NLP services, including Speech Recognition.
In this article, I am going to show how to consume the Wit Speech API using Python with minimum dependencies.
Step 1: Create an API Key
In order to use Wit.ai API, you need to create a Wit.ai app. Every Wit.ai app has a server access token which can be used as an API Key.
Follow these steps to create a Wit.ai app and generate API Key:
- Go to Wit.ai home page and sign in with your GitHub or Facebook account.
- Click on ‘+’ sign in menu bar on top and create a new app.
- Now, open the app dashboard and go to Settings of your app.
- In Settings, under API Details, copy the Server Access Token and use it as API key.
Step 2: Python script to record audio
Obviously, we need to pass audio data to the Wit API for speech recognition. For this, we create a Python script to record audio from microphone.
For this purpose, we will be using PyAudio module.
Installations
- Windows: Just install PyAudio module using a simple pip command:
pip install pyaudio
- MAC OS X: Install portaudio library using Homebrew and then install PyAudio module using pip:
brew install portaudio
pip install pyaudio
- Linux: Install portaudio library development package using this command:
sudo apt-get install portaudio19-dev
Then, install PyAudio module using pip:
pip install pyaudio
Now, consider the code below to record audio from microphone:
import pyaudio
import wave

def record_audio(RECORD_SECONDS, WAVE_OUTPUT_FILENAME):
    #--------- SETTING PARAMS FOR OUR AUDIO FILE ------------#
    FORMAT = pyaudio.paInt16    # format of wave
    CHANNELS = 2                # no. of audio channels
    RATE = 44100                # frame rate
    CHUNK = 1024                # frames per audio sample
    #--------------------------------------------------------#

    # creating PyAudio object
    audio = pyaudio.PyAudio()

    # open a new stream for microphone
    # It creates a PortAudio Stream Wrapper class object
    stream = audio.open(format=FORMAT, channels=CHANNELS,
                        rate=RATE, input=True,
                        frames_per_buffer=CHUNK)

    #----------------- start of recording -------------------#
    print("Listening...")

    # list to save all audio frames
    frames = []

    for i in range(int(RATE / CHUNK * RECORD_SECONDS)):
        # read audio stream from microphone
        data = stream.read(CHUNK)
        # append audio data to frames list
        frames.append(data)

    #------------------ end of recording --------------------#
    print("Finished recording.")

    stream.stop_stream()    # stop the stream object
    stream.close()          # close the stream object
    audio.terminate()       # terminate PortAudio

    #------------------ saving audio ------------------------#
    # create wave file object
    waveFile = wave.open(WAVE_OUTPUT_FILENAME, 'wb')

    # settings for wave file object
    waveFile.setnchannels(CHANNELS)
    waveFile.setsampwidth(audio.get_sample_size(FORMAT))
    waveFile.setframerate(RATE)
    waveFile.writeframes(b''.join(frames))

    # closing the wave file object
    waveFile.close()

def read_audio(WAVE_FILENAME):
    # function to read audio(wav) file
    with open(WAVE_FILENAME, 'rb') as f:
        audio = f.read()
    return audio
Here, we use PyAudio to record audio in WAV format.
For writing the audio stream to a wave file, we use the built-in Python library wave. Once audio is recorded using PyAudio, it is saved as a wav file in the current directory.
Save this Python script as Recorder.py (as we will import this Python script by this name in the main Python script).
Step 3: Python script to interact with Wit Speech API
Now, it's time to write a Python script for interacting with the Wit Speech API.
Consider the code below:
import requests
import json
from Recorder import record_audio, read_audio

# Wit speech API endpoint
API_ENDPOINT = 'https://api.wit.ai/speech'

# Wit.ai api access token
wit_access_token = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

def RecognizeSpeech(AUDIO_FILENAME, num_seconds = 5):

    # record audio of specified length in specified audio file
    record_audio(num_seconds, AUDIO_FILENAME)

    # reading audio
    audio = read_audio(AUDIO_FILENAME)

    # defining headers for HTTP request
    headers = {'authorization': 'Bearer ' + wit_access_token,
               'Content-Type': 'audio/wav'}

    # making an HTTP post request
    resp = requests.post(API_ENDPOINT, headers = headers, data = audio)

    # converting response content to JSON format
    data = json.loads(resp.content)

    # get text from data
    text = data['_text']

    # return the text
    return text

if __name__ == "__main__":
    text = RecognizeSpeech('myspeech.wav', 4)
    print("\nYou said: {}".format(text))
The Wit Speech API accepts HTTP POST requests.
The POST request must contain:
- headers
headers = {'authorization': 'Bearer ' + wit_access_token, 'Content-Type': 'audio/wav'}
where wit_access_token is the API Key we generated in Step 1.
- data
data = audio
The data to be passed is the audio stream in wav format. As you can notice, the recorded audio is saved in a file called myspeech.wav. We read the audio back again from this file using the read_audio method.
And we send this HTTP request to the endpoint https://api.wit.ai/speech (the value of API_ENDPOINT in the script above).
A sample response of HTTP request looks like this:
{u'_text': u'hey how are you', u'entities': {}, u'msg_id': u'1ca8f790-4e83-443c-915c-914bc1a42100'}
Wit.ai is not just limited to speech recognition.
It also allows you to create a chatbot which can recognize any user-defined entity in the provided text! Since I have not created any entities for my current Wit app, the entities section of the HTTP response is empty.
Github repository: Wit Speech API Wrapper
NIS+ is a new version of the network information nameservice. This version differs in several significant ways from version 2, which is referred to as NIS or YP in earlier releases. Specific areas of enhancement include the ability to scale to larger networks, security, and the administration of the service.
The man pages for NIS+ are broken up into three basic categories. Those in section 1 are the user commands that are most often executed from a shell script or directly from the command line. Section 1M man pages describe utility commands that can be used by the network administrator to administer the service itself. The NIS+ programming API is described by man pages in section 3NSL.
All commands and functions that use NIS version 2 are prefixed with yp, while those that use NIS+ are prefixed with nis. NIS+ security is integrated into the service and is built around authentication and authorization policies.
The naming model of NIS+ is based upon a tree structure. Each node in the tree corresponds to an NIS+ object. There are six types of NIS+ objects: directory, table, group, link, entry, and private.
Each NIS+ namespace has at least one NIS+ directory object. An NIS+ directory is like a UNIX file system directory which contains other NIS+ objects including NIS+ directories. The NIS+ directory that forms the root of the NIS+ namespace is called the root directory. There are two special NIS+ directories in each domain: org_dir, which holds the administration tables of the domain, and groups_dir, which holds the domain's NIS+ group objects.
NIS+ names are said to be fully qualified when the name includes all of the labels identifying all of the directories, up to the global root. Names without the trailing dot are called partially qualified.. can mask the fact that the name existed but a server for it was unreachable. If the name presented to the list or lookup interface is fully qualified, the EXPAND_NAME flag is ignored.
In the list of names from the NIS_PATH environment variable, the '$' (dollar sign) character is treated specially. Simple names that end with the label '$' have this character replaced by the default directory (see nis_local_directory(3NSL)). Using "$" as a name in this list results in this name being replaced by the list of directories between the default directory and the global root that contain at least two labels.
Below is an example of this expansion. Given the default directory of some.long.domain.name., and the NIS_PATH variable set to fred.bar.:org_dir.$:$. This path is initially broken up into the list: can then be resolved.+ context. They are stored as records in an NIS+ table named cred, which always appears in the org_dir subdirectory of the directory named in the principal name.
This mapping can be expressed as a replacement function: mappings: sensitive to the context of the machine on which the process is executing. returs a record containing the NIS+ principal.
Like NIS+ principal names, NIS+ group names take the form:
group_name.domain
All objects in the NIS+ namespace and all entries in NIS+ tables can+ simple information managed by the service. The service defines access rights that are selectively granted to individual clients or groups of clients. Principal names and group names are used to define clients and groups of clients that can be granted or denied access to NIS+ information. These principals and groups are associated with NIS+ domains as defined below.
The security model also uses the notion of a class of principals called nobody, which contains all clients, whether or not they have authenticated themselves to the service. The class world includes any client who has been authenticated..
The NIS+ name service uses Secure RPC for the integrity of the NIS+ service. This requires that users of the service and their machines have a Secure RPC key pair associated with them. See nisauthconf(1M).
The NIS+ service defines four access rights that can be granted or denied to clients of the service. These rights are read, modify, create, and destroy. These rights are specified in the object structure at creation time and can be changed later. The destroy right allows a client to destroy or remove an existing object or entry. When a client attempts to destroy an entry or object by removing it, the service first checks the access rights on the object containing it. Note that granting read access on a directory does not allow the table contents of other tables within org_dir to be read (such as the entries in the password table) unless the table itself gives "read" access rights to the nobody principal.
Additional capabilities are provided for granting access rights to clients for directories. These rights are contained in an object access rights (OAR) structure on the directory object, which additionally grants create access for objects of type table, link, group, and private to any member of the directory's group. This has the effect of giving nearly complete create access to the group with the exception of creating subdirectories. This restricts the creation of new NIS+ domains because creating a domain requires creating both a groups_dir and org_dir subdirectory.
Notice that there is currently no command line interface to set or change the OAR of the directory object.
Weeding down a big target list
Here is a case study on how to weed down a big list of potential targets using semi-automatic searches. This is exactly how I weeded down this list of Lynds Dark Nebula.
Contents
- 1 1. get master LDN list.
- 2 2. get rid of the lower opacity clouds.
- 3 3. get rid of the ones i know are covered by spitzer.
- 4 4. get number of references from simbad.
- 5 5. get hits in spitzer archive.
- 6 6. get information from lee and myers paper.
- 7 7. sort by spitzer availability.
- 8 8. make a sorted cut
- 9 final product at this point
- 10 next step
1. get master LDN list.
this link on wiki: go to Online Data: ask it to give us plain text ("| separated values"), all objects, ldn number, ra and dec (1950 and 2000), glon and glat (galactic lat and long), area, and opacity class, plus any other columns you want.
take plain text and import into excel, such that columns import cleanly. i had to rename the file to something.txt and then ask excel to import it, with "|" delimiting the columns. you have then i think 1794 objects.
2. get rid of the lower opacity clouds.
i think it shows up already sorted by ldn number. sort by opacity class, reverse numerical order. move everything other than opacity class 5 or 6 down to the bottom of the table. keep the opacity class 5 or 6 near the top. i entered some blank lines which i colored bright red to remind myself easily where the border was between opacity classes. this gets rid of 1247 of the objects.
3. get rid of the ones i know are covered by spitzer.
i happen to know there are two big surveys using Spitzer, called GLIMPSE and MIPSGAL. both are surveying the galactic plane, the former using IRAC and the latter using MIPS. The GLIMPSE survey is older than MIPSGAL. their website is here: here, you can learn that they surveyed the inner 65 degrees (of longitude) of the galaxy, plus or minus 1 degree of latitude.
sort the lynds target list by ldns number. this ends up being rougly galactic long sorted.
what i did next was ask xls to do conditional formatting, and (a) color the cells in the galactic longitude column to be light blue if the value was less than 65 or greater than 295 (and less than 360). and (b) color the cells in the galactic longitude column to be light blue if the value was greater than -1 and less than 1. anything that is then blue in both the glat and glon cells is an object that appears in mipsgal and glimpse. this gets rid of just 28 of the objects; i was expecting it to get rid of many more. i entered some blank lines which i colored bright red to remind myself easily where the border was between glimpse and non-glimpse coverage.
i have now 518 objects left in the pool of potential targets. sort this by ldn number. extract the ldn number to its own spreadsheet and use xls to create a text file of just ldn numbers for these 518 objects. in order to do this, i make a separate worksheet, copy the ldn numbers to column b, in column a, make the first cell "LDN" and then "fill down" to fill out the rest of the column with all "LDN". i make column A right-adjusted, column B left-adjusted, and save this worksheet to a text file.
4. get number of references from simbad.
go to simbad tell it ("output options") that you want plain text output. query by list (second down on that page). pick the plain text list of targets file you just created. it will take several seconds to come back. save the file of results as a plain text file.
import this results text file back into xls, as a new worksheet. copy the column with the number of bibliographic references into a new column in the master spreadsheet. if it doesn't match the number of rows you had from before, you did something wrong, and you should go back and check things.
5. get hits in spitzer archive.
take the input file you just made for simbad, reformat it so that each line is surrounded by quotes, e.g. "ldn 4 " rather than just ldn 4 without the quotes and break it into pieces of 100 targets per file (you'll have 6 files). The top of each of these files should look like this:
COORD_SYSTEM: Equatorial # Equatorial (default), Galactic, or Ecliptic EQUINOX: J2000 # B1950, J2000 (default), or blank for Galactic NAME-RESOLVER: Simbad # NED or Simbad (default) RADIUS: 5 # RADIUS is in unit arcmin #Name
start leopard. query by fixed target list, load the first of your files you just created. this will take a while for each leopard search file. when it is done searching, it will ask if you want to load the AORs for each program it found. just tell it "cancel" for now. leopard automatically generates a file in the same directory as your target list, with the same root filename but with a time/date stamp appended. because of a leopard bug, you need to restart leopard after each target search.
this header tells leopard to search for things within 5 arcmin. it's going to tell us everything (IRS, IRAC, MIPS) that it thinks overlaps. it's not perfect, but it will get us close enough for weeding purposes.
make a new column for these results in the master xls file. look at the summary files, and if there is an IRAC/MIPS hit, enter that information in the xls file.
6. get information from lee and myers paper. this is the one looking at optically-selected cores to see if they thought there was a YSO inside. add this to a column in the spreadsheet.
7. sort by spitzer availability.
start with ones without any spitzer data (as opposed to ones with just irac or just mips). this leaves 270 obj.
8. make a sorted cut
sort those by lee&myers, then number refs (decr numbers), then area (incr number). just 9 yeses from lee&myers. 14 without lee&myers but with >10 pubs (worth going to look at those pubs to see if those pubs think there is something there).
-> 23 objects to go look up the pubs, look up the POSS image, see how big/interesting they look. LDN number: 31 129 219 425 518 543 769 778 981 1036 1041 1082 1100 1125 1139 1143 1235 1257 1340 1407 1598 1685 1744
final product at this point
next step
still need to add notes here re: pub checking and image checking for those objects
--Dewolf 10:42, 18 January 2008 (PST) I have set up a PowerPoint with images of each of the final potential targets from Luisa's Excel file. It includes visual opacity and visibility info, as well as the number of Simbad references. We had a snow day (6th one since Thanksgiving!) again today, so I had plenty of time to work on this. Media:LDN Targets.ppt
--Johnson 13:50, 21 January 2008 (PST) Attached is a chart that shows the SIMBAD references for each of the objects in Dr. Rebull's short list: 31 129 219 425 518 543 769 778 981 1036 1041 1082 1100 1125 1139 1143 1235 1257 1340 1407 1598 1685 1744. Media:SIMBADsearch.doc Talk with you soon. --chj | https://vmcoolwiki.ipac.caltech.edu/index.php?title=Weeding_down_a_big_target_list&diff=prev&oldid=3024 | CC-MAIN-2022-27 | refinedweb | 1,282 | 81.33 |
I've read about some of the differences between << and +=. But I think I may not understand these differences, because my code doesn't output what I want to achieve.
in response to Ruby differences between += and << to concatenate a string
I want to unscramble "Cat" into an array of its letters/words
=> ["c", "ca", "cat", "a", "at", "t"]
def helper(word)
  words_array = []
  idx = 0
  while idx < word.length
    j = idx
    temp = ""
    while j < word.length
      temp << word[j]
      words_array << temp unless words_array.include?(temp)
      j += 1
    end
    idx += 1
  end
  p words_array
end

helper("cat")
One difference is that because
<< works in place it is somewhat faster than
+=. The following code
require 'benchmark'

a = ''
b = ''
puts Benchmark.measure { 100000.times { a << 'test' } }
puts Benchmark.measure { 100000.times { b += 'test' } }
yields
0.000000   0.000000   0.000000 (  0.004653)
0.060000   0.060000   0.120000 (  0.108534)
Update
I originally misunderstood the question. Here's what's going on. Ruby variables only store references to objects, not the objects themselves. Here's simplified code that does the same thing as yours, and has the same issue. I've told it to print
temp and
words_array on each iteration of the loops.
def helper(word)
  words_array = []
  word.length.times do |i|
    temp = ''
    (i...word.length).each do |j|
      temp << word[j]
      puts "temp:\t#{temp}"
      words_array << temp unless words_array.include?(temp)
      puts "words:\t#{words_array}"
    end
  end
  words_array
end

p helper("cat")
Here is what it prints:
temp: c
words: ["c"]
temp: ca
words: ["ca"]
temp: cat
words: ["cat"]
temp: a
words: ["cat", "a"]
temp: at
words: ["cat", "at"]
temp: t
words: ["cat", "at", "t"]
["cat", "at", "t"]
As you can see, during each iteration of the inner loop after the first, ruby is simply replacing the last element of
words_array. That is because
words_array holds a reference to the string object referenced by
temp, and
<< modifies that object in place rather than creating a new object.
On each iteration of the outer loop
temp is set to a new object, and that new object is appended to
words_array, so it doesn't replace the previous elements.
The += construct binds temp to a new object on each iteration of the inner loop, which is why it does behave as expected.
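Following from that, the smallest fix to the code in the question is to stop mutating temp in place, so each append produces a fresh string object (a sketch of the changed inner loop; temp = temp.dup before appending would also work):

while j < word.length
  temp += word[j]  # was: temp << word[j]; += binds temp to a new string
  words_array << temp unless words_array.include?(temp)
  j += 1
end

With that change the helper prints ["c", "ca", "cat", "a", "at", "t"] as intended.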
Adding a Reverse Lookup Zone
The Active Directory Installation wizard does not automatically add a reverse lookup zone and PTR resource records, because it is possible that another server, such as the parent server, controls the reverse lookup zone. You might want to add a reverse lookup zone to your server if no other server controls the reverse lookup zone for the hosts listed in your forward lookup zone. Reverse lookup zones and PTR resource records are not necessary for Active Directory to work, but you need them if you want clients to be able to resolve FQDNs from IP addresses. Also, PTR resource records are commonly used by some applications to verify the identities of clients.
The following sections explain where to put reverse lookup zones and how to create, configure, and delegate them. For information about any of the IP addressing concepts discussed in the following sections, see "Introduction to TCP/IP" in this book.
Planning for Reverse Lookup Zones
To determine where to place your reverse lookup zones, first gather a list of all the subnets in your network, and then examine the class (A, B, or C) and type (class-based or subnetted) of each subnet.
To simplify administration, create as few reverse lookup zones as possible. For example, if you have only one class C network identifier (even if you have subnetted your network), it is simplest to organize your reverse lookup zones along class C boundaries. You can add the reverse lookup zone and all the PTR resource records on an existing DNS server on your network.
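For example, if you own the class C network ID 192.168.100.0, the corresponding reverse lookup zone is named 100.168.192.in-addr.arpa.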
Subdomains do not need to have their own reverse lookup zones. If you have multiple class C network identifiers, for each one you can configure a reverse lookup zone and PTR resource records on the primary name server closest to the subnet with that network identifier.
However, organizing your reverse lookup zones along class C boundaries might not always be possible. For example, if your organization has a small network, you might have received only a portion of a class C address from your ISP. Table 6.3 shows how to configure your network with each type of subnet.
Table 6.3 Planning Reverse Lookup Zones
Configuring a Standard Reverse Lookup Zone
The following procedures describe how to add a reverse lookup zone for a class C network ID.
To add a reverse lookup zone
In Control Panel, double-click Administrative Tools and then double-click DNS .
Optionally, if the server to which you want to add a reverse lookup zone does not appear in the list, right-click DNS , click Connect to Computer , and then follow the instructions to add the desired server.
To display the zones, click the server name.
Right-click the Reverse Lookup Zones folder, and click New Zone . A zone configuration wizard appears.
Windows 2000-based clients and Windows 2000 DHCP servers can automatically add PTR resource records, or you can configure PTR resource records at the same time as when you create A resource records; otherwise, you might want to add PTR resource records manually.
To add PTR resource records
In Control Panel, double-click Administrative Tools and then double-click DNS .
To display the zones, click the server name.
Right-click the zone in the Reverse Lookup Zones folder, point to New , and then point to Pointer .
To create the PTR resource record, follow the instructions in the dialog box.
Note
If you can't select the Pointer field because it is shaded, double-click the zone.
Configuring and Delegating a Classless In-addr .arpa Reverse Lookup Zone
Many organizations divide class C networks into smaller portions. This process is referred to as "subnetting a network." If you have subnetted a network, you can create corresponding subnetted reverse lookup zones, as specified in RFC 2317. Although your network has been subnetted, you do not need to create corresponding subnetted reverse lookup zones. It is an administrative choice. DNS servers and zones are independent of the underlying subnetted infrastructure.
However, in certain situations, you might want to create and delegate classless reverse lookup zones. If you own one class C address, and you want to distribute the addresses in the range to several different groups (for example, branch offices), but you do not want to manage the reverse lookup zones for those addresses, you would create classless reverse lookup zones and delegate them to those groups. For example, suppose that an ISP has a class C address and has given the first 62 addresses to Reskit. The ISP can include records in its zone indicating that the name server on Reskit has information about that portion of the namespace. Reskit can then manage that portion of the namespace by including resource records with the IP address–to–host mappings, also known as a classless in-addr.arpa reverse lookup zone .
The following sections, explain the syntax of classless reverse lookup zones and describe how to delegate and configure reverse lookup zones by using the preceding example. For more information about delegating reverse lookup zones, see the Request for Comments link on the Web Resources page at . Search for RFC 2317, "Classless in-addr.arpa delegation."
.gif)
Note
Dynamic update does not work with classless in - addr.arpa zones. If you need to dynamically update PTR resource records, do not use classless zones.
Syntax of a Classless In - addr .arpa Reverse Lookup Zone
You can use the following notation to specify the name of the classless in - addr.arpa reverse lookup zone:
<subnet-specific label>. <octet>. <octet>. <octet>.in-addr.arpa
where octet specifies an octet of the IP address range. The octets are specified in reverse order of the order in which they appear in the IP address.
Although subnet-specific label could be comprised of any characters allowed by the authoritative DNS server, the most commonly used formats include the following:
< minimum value of the subnet range > - < maximum value of the subnet range >
< subnet > / < subnet mask bit count >
<subnet ID >
Subnet specifies which segment of the class C IP address this network is using. Subnet mask bit count specifies how many bits the network is using for its subnet mask. Subnet ID specifies a name the administrator has chosen for the subnet.
For example, suppose that an ISP has a class C address 192.168.100.0 and has divided that address into four subnets of 62 hosts per network, with a subnet mask of 255.255.255.192, and given the first 62 host addresses to a company with the DNS name Reskit.com. The name of the classless reverse lookup zone can use any of the following syntax lines:
0 - 26.100.168.192.in - addr.arpa
0/26.100.168.192.in - addr.arpa
Subnet1.100.168.192.in - addr.arpa
You can use any of this syntax in Windows 2000 DNS by entering the zones into a text file. For more information about creating and delegating subnetted reverse lookup zones through text files, see the Microsoft TechNet link on the Web Resources page at . Search Microsoft TechNet using the phrases "subnetted reverse lookup zone" and "Windows NT."
Delegating a Classless Reverse Lookup Zone
You never need to delegate a classless reverse lookup zone, even if your network is subnetted. However, there are a few cases in which you might want to delegate a classless reverse lookup zone. For example, you might want to do so if you gave a merged organization a portion of your class C address, or if you had a remote subnetted network and wanted to avoid sending replication or zone transfer traffic across a wide area link.
Figure 6.10 shows how an administrator for a class C reverse lookup zone would then configure its DNS server.
.gif)
Figure 6.10 Reverse Lookup Delegations
You can delegate and create classless reverse lookup zones from within the DNS console.
To delegate a classless reverse lookup zone
On the DNS server for your domain, create a reverse lookup zone.
For the preceding example, create the reverse lookup zone 100.168.192.in - addr.arpa. The reverse lookup zone is added on the server for ISP.com, not Reskit.com.
Right-click the reverse lookup zone that you created, point to NewDelegation .
In the New Delegation wizard, enter the name of the delegated domain and the name and IP address of the delegated name server. In the preceding example, the delegated domain is 0 - 26.
Right-click the reverse lookup zone and click New alias .
Add CNAME records for all the delegated addresses.
For example, for the IP address 192.168.100.5, create a CNAME record of 5 that points to 5.0 - 26.100.168.192.in - addr.arpa.
Create the classless reverse lookup zone in the subdomain, by following the procedure in the following section.
Configuring a Classless In-addr .arpa Reverse Lookup Zone
You must configure a classless reverse lookup zone if one has been delegated to you. In the preceding example, an administrator for an ISP delegated a reverse lookup zone to Reskit.com, and an administrator for Reskit.com must therefore configure a classless reverse lookup zone. Figure 6.11 shows how Reskit.com would configure its classless reverse lookup zone.
.gif)
Figure 6.11 Classless Reverse Lookup Zone
To create a classless reverse lookup zone
In the DNS console, click the server name to display configuration detail it, right-click the Reverse Lookup Zones folder, and then click Create a New Zone . The Add New Zone wizard appears.
When you reach the Network ID page, in the field named Enter the name of the zone directly , enter the name of the classless reverse lookup zone.
For example, type 0-26.100.168.192.in-addr.arpa .
Then add any necessary PTR resource records in that zone. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-2000-server/cc961414(v=technet.10) | CC-MAIN-2018-47 | refinedweb | 1,638 | 62.58 |
With the new release of the .NET framework, creating a sophisticated TCP Server is just like eating a snack! Basic features including multi-thread client connection handling, collective management for client connections with thread safe collection object, and background reclaiming of all doomed connections, are all easy to address.
This example was built on the Microsoft .NET Framework SDK version 1.0.3705 (with SP2), I suggest you use the latest version wherever you can acquire one. A simple editor should be enough to begin your coding and I have used C# to create all programs and compiled it with the C# compiler that comes with the SDK. There are several source files and commands to produce executables are as follows:
CSC TcpServer.cs CSC TcpServer2.cs CSC TcpServer2b.cs CSC TcpServer3.cs CSC TcpClientTest.cs
Coming with the framework are the very useful classes
TcpListener and
TcpClient in
the namespace
System.Net.Sockets, which provided most of the needed TCP network task for building TCP
applications. In just several simple statements will give a simple TCP listening service for the TCP server:
TcpListener listener = new TcpListener(portNum); listener.Start(); TcpClient handler = listener.AcceptTcpClient(); int i = ClientSockets.Add ( new ClientHandler(handler) ) ; ((ClientHandler) ClientSockets[i]).Start() ;
In the third line of the above statements allows the TCP server to accept incoming client connections, each
connections will give a separated
TcpClient instance representing individual client connections.
Since each client connections is handled in separated threads, the class
ClientHandler will
encapsulate client connection and a new thread will be created and started (that is what the last two lines in the
above statements do).
As to managing client connections, especially when reclaiming doomed client connections and shutting them down
before ending the main TCP server thread, an
ArrayList object is so handy and comes into the game as a thread
safe collection bag for all client connections. The followings lines depict how thread safe access to
ArrayList can be achieved:
private static ArrayList ClientSockets lock ( ClientSockets.SyncRoot ) { int i = ClientSockets.Add ( new ClientHandler(handler) ) ; ((ClientHandler) ClientSockets[i]).Start() ; }
The keyword
lock provided thread-synchronized access to a property
SyncRoot of the
instance of
ArrayList,
ClientSockets, a collection of object instances of class
ClientHandler, representing TCP client connections.
In a typical TCP servicing environment, many clients making connections will mix with clients which are dropping their connections at the same time, and such dropped connections should be properly release their held resources. Without reclaiming means server will soon be overloading with doomed client connections, which hold sacred resources without releasing them back to system. The following code shows the reclaiming of the thread in my application:
ThreadReclaim = new Thread( new ThreadStart(Reclaim) ); ThreadReclaim.Start() ; private static void Reclaim() { while (ContinueReclaim) { lock( ClientSockets.SyncRoot ) { for ( int x = ClientSockets.Count-1 ; x >= 0 ; x-- ) { Object Client = ClientSockets[x] ; if ( !( ( ClientHandler ) Client ).Alive ) { ClientSockets.Remove( Client ) ; Console.WriteLine("A client left") ; } } } Thread.Sleep(200) ; } }
As the reclaiming thread will compete with main thread to access the client connections collection, synchronized access is needed when checking dead connections before removing them.
Maybe some readers ask me why I did not choose to use features like callbacks or delegates to let client connection instances unregister themselves from the collection, this I will explain later.
Before stopping the main server, closing all connections properly is also very important:
ContinueReclaim = false ; ThreadReclaim.Join() ; foreach ( Object Client in ClientSockets ) { ( (ClientHandler) Client ).Stop() ; }
First, the resource reclaiming thread is ended and a global variable
ContinueReclaim is responsible
for controlling it. In addition, it is waited to be properly stopped before the main thread starts going on to the
next step. Finally, a loop is started to drop each client connections listed in
ClientSockets, as this time
only the main thread is accessing it no thread-synchronisation code is needed.
Here I would like to explain why I do not use a callback or delegate to address the reclaiming task. Since the
main thread needs to hold the
ClientSockets collection exclusively while dropping the client
connections, it would have produced deadlock as the client connection trying to access
ClientSockets
collection tried to use callback or delegate to unregister itself and at the same time main thread was waiting client
connection to stop! Of course some may say using timeout while client connection class trying to access
ClientSockets collection is an option, I agree it could solve the problem but as I always prefer a
cleaner shutdown, using a thread to control resource reclaiming task would be a better idea.
Things seem so fine when there are not too many clients connecting at the same time.
But the number of connections can increase to a level that too many threads are created
which would severely impact system performance! Thread pooling in such a case can give
us a helping hand to maintain a reasonable number of threads created at the same time base on
our system resource. A .NET class
ThreadPool helps to regulate when
to schedule threads to serve tasks on the work-item queue and limit the maximum number
of threads created. All these functions come as a cost as you lose some
of the control of the threads, for example, you cannot suspend or stop a
thread preemptively as no thread handle will be available by calling the static method:
public static bool QueueUserWorkItem(WaitCallback);
of class
ThreadPool. I have created another example to let all client connections
be scheduled to run in thread pool. In the sample program TcpServer2.cs is
several amendments I have made. First, I do not use a collection object to include client connections.
Secondly, there is no background thread to reclaim the doomed client connections. Finally, a special class
is used to synchronize client connections when the main thread comes to a point to instruct ending all of them.
As the client handling task scheduled in the thread pool created on the previous example will tend to hold the precious thread in the thread pool forever, it will restrict the scalability of the system and large number of task items cannot get a chance to run because the thread pool has a maximum limit on the number of threads to be created. Suppose even though it did not impose a limit on number of threads created, too many threads running still lead to CPU context switching problems and bring down the system easily! To increase the scalability, instead of holding the thread looping with it, we can schedule the task again whenever one processing step has finished! That is what I am doing in example TcpServer2b.cs; trying to achieve the client task handling thread function returns after each logical processing step and try rescheduling itself again to the thread pool as the following code block shows:
// Schedule task again if ( SharedStateObj.ContinueProcess && !bQuit ) ThreadPool.QueueUserWorkItem(new WaitCallback(this.Process), SharedStateObj); else { networkStream.Close(); ClientSocket.Close(); // Deduct no. of clients by one Interlocked.Decrement(ref SharedStateObj.NumberOfClients ); Console.WriteLine("A client left, number of connections is {0}", SharedStateObj.NumberOfClients) ; } // Signal main process if this is the last client connections // main thread requested to stop. if ( !SharedStateObj.ContinueProcess && SharedStateObj.NumberOfClients == 0 ) SharedStateObj.Ev.Set();
No loop is involved and as the task tries to reschedule itself and relinquish thread it held, other tasks get a chance to begin running in the thread pool now! Actually this is similar capability to what the asynchronous version of the socket functions provide, task get, waiting and processing in the thread pool thread needs to reschedule again at the end of each callback functions if it want to continue processing.
Using a queue with multiple threads to handle large numbers of client requests is similar to the asynchronous
version of the socket functions which use a thread pool with a work item queue to handle each tasks. In example
TcpServer3.cs, I have created a queue class
ClientConnectionPool and wrapped a Queue object
inside:
class ClientConnectionPool { // Creates a synchronized wrapper around the Queue. private Queue SyncdQ = Queue.Synchronized( new Queue() ); }
This is mainly to provide a thread safe queue class for later use in the multi-threaded task handler,
ClientService, part of its source shown below:
class ClientService { const int NUM_OF_THREAD = 10; private ClientConnectionPool ConnectionPool ; private bool ContinueProcess = false ; private Thread [] ThreadTask = new Thread[NUM_OF_THREAD] ; public ClientService(ClientConnectionPool ConnectionPool) { this .ConnectionPool = ConnectionPool ; } public void Start() { ContinueProcess = true ; // Start threads to handle Client Task for ( int i = 0 ; i < ThreadTask.Length ; i++) { ThreadTask[i] = new Thread( new ThreadStart(this.Process) ); ThreadTask[i].Start() ; } } private void Process() { while ( ContinueProcess ) { ClientHandler client = null ; lock( ConnectionPool.SyncRoot ) { if ( ConnectionPool.Count > 0 ) client = ConnectionPool.Dequeue() ; } if ( client != null ) { client.Process() ; // Provoke client // if client still connect, schedufor later processingle it if ( client.Alive ) ConnectionPool.Enqueue(client) ; } Thread.Sleep(100) ; } } }
Using the
Dequeue and
Enqueue functions, it is so easy to give tasks handling based on
the FIFO protocol. Making it this way we have the benefit of good scalability and control for the client
connections.
I will not provide another Asynchronous Server Socket Example because the .NET SDK already has a very nice one and for anyone interested please goto the link Asynchronous Server Socket Example. Certainly Asynchronous Server Sockets are excellent for most of your requirements, easy to implement, high performance, and I suggest reader have a look on this example.
The NET framework provide nice features to simplify
multi-threaded TCP process creation that was once a very difficult task to many
programmers. I have introduced three methods to create a multi-threaded TCP server
process. The first one has greater control on each threads but it may impact system
performance after a large number of threads are created. Second one has better performance
but you have less control over each thread created. The last example gives you the benefit of
bothscalability and control, and is the recommended solution. Of course, you can use
the Asynchronous Socket functions to give you similar capabilities, whilst my example
TcpServer2b.cs, which gives you a synchronous version with thread pooling, is
an alternative solution to the same task. Many valuable alternatives and much depends
on you requirements. The options are there so choosing the one best suited your application is the
most important thing to consider! Full features can be explored even wider
and I suggest that readers look at the .NET documentation to find out more.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/IP/dotnettcp.aspx | crawl-002 | refinedweb | 1,737 | 53.92 |
A declaration can appear anywhere a statement appears, and certain statements permit additional declarations within those statements.
Declarations made in a substatement (of a selection or loop statement) are limited in scope to the substatement, even if the substatement is not a compound statement. For example, the following statement:
while ( test( ) ) int x = init( );
is equivalent to:
while ( test( ) ) { int x = init( ); }
The first example uses a declaration as the entire loop body, and the second uses a compound statement (enclosing the loop body in curly braces). In both cases, though, the scope of x is limited to the body of the while loop.
A simple declaration can appear wherever a statement can be used. You can declare an object, a type, or a namespace alias. You can also write a using declaration or using directive. You can declare a function, but not define a function, although there is rarely any reason to declare a function locally. You cannot define a namespace or declare a template.
In traditional C programming, declarations appear at the start of each block or compound statement. In C++ (and in the C99 standard), declarations can appear anywhere a statement can, which means you can declare variables close to where they are used. Example 4-1 shows examples of how declarations can be mixed with statements.
#include <cctype> #include <cstddef> #include <iomanip> #include <iostream> #include <ostream> #include <string> // Count lines, words, and characters in the standard input. int main( ) { unsigned long num_lines, num_words, num_chars; num_lines = num_words = num_chars = 0; using namespace std; string line; while (getline(cin, line)) { ++num_lines; num_chars += line.length( ) + 1; bool in_word = false; for (size_t i = 0; char c = line[i]; ++i) if (isspace(static_cast<unsigned char>(c))) { if (in_word) ++num_words; in_word = false; } else if (! in_word) in_word = true; if (in_word) ++num_words; } cout << right << setw(10) << num_lines << setw(10) << num_words << setw(10) << num_chars << '\n'; }
Sometimes a construct can look like an expression statement or a declaration. These ambiguities are resolved in favor of declarations. Example 4-2 shows some declarations that look like they might be expressions.
class cls { public: cls( ); cls(int x); }; int x; int* y = &x; int main( ) { // The following are unambiguously expressions, constructing instances of cls. cls(int(x)); cls(*static_cast<int*>(y)); // Without the redundant casts, though, they would look like declarations, not // expressions. cls(x); // Declares a variable x cls(*y); // Declares a pointer y }
The for, if, switch, and while statements permit a declaration within each statement's condition:
if (int x = test_this(a, b)) { cout << x; }
If the condition contains a declaration, the scope of the declared name extends to the end of the entire statement. In particular, a name declared in the condition of an if statement is visible in the else part of the statement. In a loop, the condition object is created and destroyed for each iteration of the loop. The name cannot be redeclared in the immediate substatement (but can be redeclared in a nested statement). The name is not visible in subsequent statements. For example:
if (derived* d = dynamic_cast<derived*>(b)) { d->derived_only_func( ); } else { assert(d == NULL); // Same d as above double d; // Error: can't redeclare d if (d == 0) int d; // Valid: inner block } cout << d; // Invalid: d no longer in scope
Like the if, switch, and while statements, a for loop permits a declaration in its condition. Unlike those other statements, it also allows a declaration to initialize the loop. Both declarations are in the same scope. See Section 4.5.2 for details.
The syntax for a condition declaration is:
type-specifiers declarator = expression
This syntax is similar to the syntax of a simple object declaration. In this case, the initializer is required (without it, the condition would not have a value), and only one declarator is permitted. See Chapter 2 for more information about type specifiers and declarators. | http://etutorials.org/Programming/Programming+Cpp/Chapter+4.+Statements/4.2+Declarations/ | crawl-001 | refinedweb | 641 | 52.7 |
Announcing the Native Python Connector for Snowflake
Dec 18, 2015
Author: Artin Avanes
Engineering, Snowflake Technology
Additional thanks to co-writers Greg Rahn, Product Management and Shige Takeda, Engineering.
A key priority for us at Snowflake is expanding the choices of tools that developers and analysts have that can take advantage of Snowflake’s unique capabilities. A few months ago we announced support for R with our R dplyr package, which combines Snowflake’s elasticity with SQL pushdown capabilities, allowing R enthusiasts to execute their favorite R functions efficiently against large sets of semi-structured and structured data (get Snowflake’s dplyr package at Github).
Today we are thrilled to announce the general availability of a native Python connector for Snowflake. The Python connector is a pure Python package distributed through PyPI and released under the Apache License, Version 2.0.
We want to thank the many customers and partners who worked closely with us during the technology preview of the connector. That assistance helped us to prioritize key features and refine the connector based on real-world usage. Ron Dunn of Ajilius, a data warehouse automation solution provider, commented “The early availability of a Python adapter showed Snowflake’s commitment to integration, and the deep engagement of the team through the beta process was evidence of their strong developer support.”
What’s in the Connector?
Undoubtedly, Python has emerged as one of the most popular languages that developers are choosing for modern programming and is one that many of our customers have told us they use. With the release of a native Python connector, we are delivering an elegant and simple way of executing Python scripts against Snowflake, combining the power and expressiveness of Python with the unique capabilities of the Snowflake Elastic Data Warehouse service. Some of the key capabilities of the connector:
- Support for Python 2 (2.7.9 or higher) and Python 3 (3.4.3 or higher).
- Native Python connector for all platforms including OS X, Linux, and Windows.
- Easy installation through Python’s pip package management system, without the need for a JDBC or ODBC driver.
- Implementation of Python’s DB API v2.
- Support for connecting to Snowflake and executing DDL or DML commands.
- Support for COPY command (for parallel loading) and encrypted PUT & GET methods.
- Support for date and timestamp data types.
- Improved error handling.
Putting the Connector to Work: Automatic, scheduled loading via AWS Lambda and Python
Data loading is one of the challenges that we heard customers complain about frequently when describing their traditional data warehouse environments. Do any of these questions sound familiar to you?
- How to set up a load server(s)?
- How to configure networking to connect the load server with the data warehouse?
- How to maintain and schedule load scripts?
- How to scale with increasing loads?
Using Snowflake, these tedious tasks are not necessary anymore or greatly simplified and automated. By combining Python via our Python connector with AWS Lambda, users have a simple and powerful way to load data into Snowflake without the need for configuring servers or networking.
To illustrate, let’s use the example of how to easily operationalize steps to get data out of MongoDB and and to automatically invoke scheduled loading of JSON data arriving in S3 into Snowflake. We will show how to achieve this by using scheduled Lambda events to issue regular, incremental loads via Python and the Python connector.
In a nutshell, a user just needs to perform the following steps to enable automatic data loading into Snowflake using Python and AWS Lambda:
STEP 1: Export JSON data from MongoDB into Amazon S3
To get data out of MongoDB, simply use the export functionality provided by MongoDB:
mongoexport --db <dbname> --collection <data> --out out.json
Please note that you need to transform the exported data into valid JSON by removing the BSON-specifics (MongoDB’s extended JSON format). For example, you can use this conversion script to convert extended JSON into JSON. Once exported, you can upload the data into an AWS S3 bucket using any of the available methods.
STEP 2: Set up Python environment for AWS Lambda
To set up AWS Lambda to allow execution of Python scripts, a user has to first create a virtual Python deployment environment for AWS Lambda and install the Python connector. You can find the corresponding shell scripts at our github location.
The Python connector itself is supported on all OS platforms and is published at the following PyPI location. Simply run the PIP command:
pip install --upgrade snowflake-connector-python
STEP 3: Develop a Python-based loading script
In the following example, we demonstrate a simple Python script that loads data from an non-Snowflake S3 bucket into Snowflake. The script leverages the new Snowflake Connector for Python:
- First, import the the Python connector for Snowflake:
import snowflake.connector
- Then, set up your Snowflake account information and credentials for your S3 bucket:
ACCOUNT = '<your account name>' USER = '<your user name>' PASSWORD = '<your password>' AWS_ACCESS_KEY_ID = '<your access key for S3 bucket>' AWS_SECRET_ACCESS_KEY = '<your access key for S3 bucket>'
- Connect to Snowflake, and choose a database, schema, and virtual warehouse :
cnx = snowflake.connector.connect ( user=USER, password=PASSWORD, account=ACCOUNT, database=myDB, schema=mySchema, warehouse=myDW )
- Create a table for your JSON data and load data into Snowflake via the copy command. Note that you don’t need to define a schema; simply use the variant column type:
cnx.cursor().execute("create JsonTable (col1 variant)") sql = """copy into JsonTable FROM s3://<your_s3_bucket>/data/ credentials = (aws_key_id='<aws_access_key_id>', aws_secret_key=('<aws_secret_access_key>')) file_format=(type=JSON)""" cnx.cursor().execute(sql)
To create a basic Lambda handler function, please see the corresponding lambda handler shell script in our github location.
STEP 4: Set-up a scheduled AWS Lambda event for Amazon S3
The last step is to configure Lambda for scheduled events to be fired with Amazon S3 as event source. For example, a user can schedule a Lambda event to be fired at a particular time during the day or every 5 minutes. As of today the maximum execution time for each Lambda invocation is 300 seconds. That is, the load has to complete within this time window. Please also note that scheduled events are only supported by the AWS Lambda console (see below). More information about AWS Lambda can be found here.
That’s it! We now have an automated way to take data exported from MongoDB and load it into Snowflake. In our example here, the output from the Lambda function shows that we loaded 132 rows into Snowflake.
| https://www.snowflake.com/blog/announcing-the-native-python-connector-for-snowflake/ | CC-MAIN-2020-05 | refinedweb | 1,094 | 51.48 |
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- CLASS METHODS
- SUPPORT
- BUGS
NAME
Lucy - Apache Lucy search engine library.
VERSION
0.4.2
SYNOPSIS
First, plan out your index structure, create the index, and add documents:
# indexer.pl use Lucy::Index::Indexer; use Lucy::Plan::Schema; use Lucy::Analysis::EasyAnalyzer; use Lucy::Plan::FullTextType; # Create a Schema which defines index fields."; }
DESCRIPTION.
Getting Started
Lucy::Simple provides a stripped down API which may suffice for many tasks.
Lucy::Docs::Tutorial demonstrates how to build a basic CGI search application.
The tutorial spends most of its time on these five classes:
Lucy::Plan::Schema - Plan out your index.
Lucy::Plan::FieldType - Define index fields.
Lucy::Index::Indexer - Manipulate index content.
Lucy::Search::IndexSearcher - Search an index.
Lucy::Analysis::EasyAnalyzer - A one-size-fits-all parser/tokenizer.
Delving Deeper.
Backwards Compatibility Policy.
CLASS METHODS
The Lucy module itself does not have a large interface, providing only a single public class method.
error
my $instream = $folder->open_in( file => 'foo' ) or die Clownfish->error;
Access a shared variable which is set by some routines on failure. It will always be either a Clownfish::Err object or undef.
SUPPORT
The Apache Lucy homepage, where you'll find links to our mailing lists and so on, is. Please direct support questions to the Lucy users mailing list.
BUGS. | https://metacpan.org/pod/distribution/Lucy/lib/Lucy.pod | CC-MAIN-2015-18 | refinedweb | 218 | 51.24 |
Sleight of hand (soh) CLI tool.
Project description
Sleight of hand, or soh, is a command line tool that handles a lot of common tasks for developers. For the most part, it offers a convenient command line interface to a lot of standard library operations, such as base64 encoding, creating datetime strings, fetching system information, uuid generation, etc.
Installation
To install soh use pip.
pip install soh
Usage
The entry point for all commands is
$ soh Usage: soh [OPTIONS] COMMAND [ARGS]... Sleight of hand CLI commands. (+) indicates command group. Use the -c flag on most commands to copy output to clipboard Options: -h, --help Show this message and exit. Commands: b64 + Base64 operations create + Create files dt + Datetimes epoch + Epoch times json JSON printing jwt Display JWT contents secret + Secrets generators serve Simple http server at current directory sys + System information uuid Generate UUIDs version soh CLI version
To get help on any command use the -h or --help flag.
$ soh uuid -h Usage: soh uuid [OPTIONS] Generate UUIDs. Options: -v, --version INTEGER uuid version [default: 4] -ns, --namespace TEXT namespace (v3, v5) {dns, url, oid, x500} -n, --name TEXT name (v3, v5) -u, --upper use upper case -c, --clip copy to clipboard -h, --help Show this message and exit.
To copy the execution output of most commands to clipboard, use the -c or --clip command.
$ soh uuid -c c64af300-8895-4dff-b005-15dcd4c72f24 (copied to clipboard 📋)
Developer Setup
To set up a local development environment follow these (or portions of these) steps.
# clone git clone [email protected]:crflynn/soh.git cd soh # setup pre-commit brew install pre-commit pre-commit install # setup pyenv and python 3 brew install pyenv pyenv install 3.7.3 pyenv local 3.7.3 # setup poetry and install deps curl -sSL | python poetry install poetry install --develop soh
pre-commit will enforce black code formatting to pass before committing. The configuration for black is in the pyproject.toml file.
To run tests,
pytest
The testing configuration is found in pytest.ini.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/soh/ | CC-MAIN-2019-35 | refinedweb | 366 | 63.19 |
I'm very new to OOP and am trying my hardest to keep things strictly class based, while using good coding principles.
I'm a fair ways into my project now and I have a lot of general use methods I want to put into an utilities class. Is there a best way to create a utilities class?
public class Utilities
{
int test;
public Utilities()
{
}
public int sum(int number1, int number2)
{
test = number1+number2;
}
return test;
}
You should make it a
static class, like this:
public static class Utilities { public static int Sum(int number1, int number2) { return number1 + number2; } } int three = Utilities.Sum(1, 2);
The class should (usually) not have any fields or properties. (Unless you want to share a single instance of some object across your code, in which case you can make a
static read-only property. | https://codedump.io/share/jr0IBZ03G5B9/1/creating-an-utilities-class | CC-MAIN-2017-09 | refinedweb | 142 | 65.76 |
How do I create the equivalent of PHP variable variable names in Python?
I hear this is a bad idea, in general, though. Is that true?
8
You can use dictionaries to accomplish this. Dictionaries are stores of keys and values.
>>> dct = {'x': 1, 'y': 2, 'z': 3} >>> dct {'y': 2, 'x': 1, 'z': 3} >>> dct["y"] 2
You can use variable key names to achieve the effect of variable variables without the security risk.
>>>>> z = {x: "eggs"} >>> z["spam"] 'eggs'
For cases where you’re thinking of doing something like
var1 = 'foo' var2 = 'bar' var3 = 'baz' ...
a list may be more appropriate than a dict. A list represents an ordered sequence of objects, with integer indices:
lst = ['foo', 'bar', 'baz'] print(lst[1]) # prints bar, because indices start at 0 lst.append('potatoes') # lst is now ['foo', 'bar', 'baz', 'potatoes']
For ordered sequences, lists are more convenient than dicts with integer keys, because lists support iteration in index order, slicing,
append, and other operations that would require awkward key management with a dict.
0
Use the built-in
getattr function to get an attribute on an object by name. Modify the name as needed.
obj.spam = 'eggs' name="spam" getattr(obj, name) # returns 'eggs'
0
It’s not a good idea. If you are accessing a global variable you can use
globals().
>>> a = 10 >>> globals()['a'] 10
If you want to access a variable in the local scope you can use
locals(), but you cannot assign values to the returned dict.
A better solution is to use
getattr or store your variables in a dictionary and then access them by name.
6
- 1
locals().update({‘new_local_var’:’some local value’}) works just fine for me in Python 3.7.6; so I’m not sure what you mean when you say you cannot assign values through it.
Mar 19, 2020 at 9:04
- 5
@JimDennis`locals()“ provides a dictionary created to represent local variables. Updating it does not guarantee to update the actual local variables. In modern Python implementations it’s more like a picture (showing the content) in a nice frame (a high-level
dict) – drawing on the picture won’t actually change the real thing.
Nov 10, 2021 at 11:16
- 1
The reason it doesn’t work, at least on CPython, is that CPython allocates a fixed size array for locals, and the size of said array is determined when the function is defined, not when its run, and can’t be changed (access to true locals doesn’t even use the name; the name is replaced with the index into the array at function compile time).
locals()returns a true
dict; within a function, that
dictis made by loading names and associated values in the array when you call
locals(), it won’t see future changes. If it changes, you’re at global or class scope (which use
dictscopes).
Dec 2, 2021 at 16:02
it’s the maintainance and debugging aspects that cause the horror. Imagine trying to find out where variable ‘foo’ changed when there’s no place in your code where you actually change ‘foo’. Imagine further that it’s someone else’s code that you have to maintain… OK, you can go to your happy place now.
Sep 3, 2009 at 14:28
A further pitfall that hasn’t been mentioned so far is if such a dynamically-created variable has the same name as a variable used in your logic. You essentially open up your software as a hostage to the input it is given.
Dec 19, 2014 at 10:50
You can modify your global and local variables by accessing the underlying dictionaries for them; it’s a horrible idea from a maintenance perspective … but it can be done via globals().update() and locals().update() (or by saving the dict reference from either of those and using it like any other dictionary). NOT RECOMMENDED … but you should know that it’s possible.
Mar 19, 2020 at 9:13
@JimDennis actually, no it can’t. Modifications to the dict returned by
localswill not affect local namespaces in CPython. Which is another reason not to do it.
May 18, 2020 at 22:27
@juanpa.arrivillaga: I had tried testing this in an IPython shell, but did so at the top level (where locals() behaves like globsls()). Redoing that test within a nested code (within the definition of a function) does show that I can’t modify locals() from within that. As you say, the help for locals (3.7.6) does warn: “NOTE: Whether or not updates to this dictionary will affect name lookups in the local scope and vice-versa is implementation dependent and not covered by any backwards compatibility guarantees.”
May 22, 2020 at 8:22
|
Show 3 more comments | https://coded3.com/how-do-i-create-variable-variables/ | CC-MAIN-2022-40 | refinedweb | 800 | 70.73 |
Windows.
Highlights the essential elements of using controls in Windows Forms applications.
Describes the different kinds of custom controls you can author with the System.Windows.Forms namespace.
Discusses the first steps in developing a Windows Forms control.
Shows how to add to properties to Windows Forms controls.
Shows how to handle and define events in Windows Forms controls.
Describes the attributes you can apply to properties or other members of your custom controls and components.
Shows how to customize the appearance of your controls.
Shows how to create sophisticated layouts for your controls and forms.
Shows how to implement multithreaded controls.
Describes this class and has links to all of its members.
Lists metadata attributes to apply to components and controls so that they are displayed correctly at design time in visual designers.
Describes how to implement classes such as editors and designers that provide design-time support.
Describes how to implement licensing in your control or component. | http://msdn.microsoft.com/en-us/library/6hws6h2t.aspx | crawl-002 | refinedweb | 159 | 50.84 |
Styles used to format output generated with ODS use either template definitions created with PROC TEMPLATE or cascading style sheet (CSS) files. The TEMPLATE procedure compiles the code into a binary item store which can be used with any of the ODS destinations that allow formatting. Prior to SAS® 9.2, CSS files could only be used with Markup language files such as HTML.
The TEMPLATE procedure introduced the new IMPORT statement in SAS 9.2, which allows you to take an existing CSS file and read this file into a template definition. This new functionality allows you to now create a template definition from a CSS file. The style sheet properties of the CSS files will need to match the naming convention of what we expect. sample CSS file from the default style. */
/* The IMPORT statement is used to import this style */
/* definition and create a template style. */
ods html stylesheet="C:\temp\example.css" style=styles.default;
ods html close;
proc template;
define style styles.test;
import "c:\temp\example.css";
end;
run;
ods pdf file="c:\temp\trash\temp.pdf" style=styles.test;
proc print data=sashelp.class;
run;
ods pdf close; | http://support.sas.com/kb/36/907.html | CC-MAIN-2018-05 | refinedweb | 195 | 68.87 |
Can Concerto Cloud Services help you focus on evolving your application offerings, while delivering the best cloud experience to your customers? From DevOps to revenue models and customer support, the answer is yes!
Learn how Concerto can help you.
The existence of GOD points AWAY from simulation but it cannot be proven nor negated.
Thinking about it GOD may actually be the maintainer of the simulation.
But how do the finiteness of some qualities (like c or age of *our* universe) point towards simulation? A simulation of such quality must use something equivalent to -- or better than -- Turing machines where infinity is by no means a limiting factor.
Anyway, how does it influence the your life or others' whether we are part of a simulation or not?
And last but not least, how do you define the term "simulation"?
This a pretty nice article on the simulation hypothesis.
We value your feedback.
Take our survey and automatically be enter to win anyone of the following:
Yeti Cooler, Amazon eGift Card, and Movie eGift Card!
Why do you feel the existence of MATH points towards simulation? Hmmm, Since math exists outside the universe, as a truth/concept - it is the same in the simulator's universe. So the fact that our universe appears to be mathematical in nature, is even stronger evidence Toawrds simulation.
The existence of God would imply the a larger existance than just the universe, and so would point TOWARD simulation, or at least the possibility of it. As you pointed out astutely, He might just be the programmer. Heck, what else would we call The Programmer?
The fact that things in the universe are finite means the simulation only requires a finite amount of memory. All simulations need memory. I don't know if it's fair to assume infitite computers yet.
When I ask questions about the nature of the universe I rarely consider how it will influence my life, I just want to learn and think about interesting stuff.
I would define "the universe is a simulation" as implying that the matter and energy are not real, at least not in the configurations we percieve, but actually exist only in the configuration/state of a simulator. Or possibly even Matrix Style; where just our perceptions are simulated, (so the tree does NOT need to make a sound when nobody is there to hear it fall.)
Thanks for the article, Shadow. Got any points to support your opinion? I'd really love to have more points AGAINST simulation. (The article did not contain any, or did I miss it?)
So if the universe is a simulation there is no possible way to find out the truth!
Have you realised that you make restrictions based on ad-hoc assumptions? Just as many (religious) people can hardly imagine that God is with the Palestinian mother who lost her child in a civil conflict and with Bruce Willis washing his teeth-- at the very same moment. God is omnipotent so it's just trivial (S)He can be present in as many places (incl. infinite) as (S)He wishes. Similarly, assumptions on any other kind of "higher" existence are futile as far as my understanding goes.
Okay so how do you define "simulator"? or "real" for that matter?
All questions about it arise from a part of "It" (the mind) to understand the "Whole".
Impossible.
A part cannot understand the whole and cannot make any statements about the whole, just as your 'pinky' cannot understand the entire body.
Therefore, the only thing the mind can know is that it does not know. :-)
That includes devising and carrying out any tests to check if it's a simulation.
plus, the original quesion leaves it open to have "probability" instead of "certainty"
Any more discussions is mind-going-in-circles-in-t
It does not mean the mind will give up such efforts, however. :-(
I can't follow this deduction. First of all, the original sentence starts with "if that turns out to be wrong", i.e. you must've already carried out some such tests. Second, I interpret "do something about it" as "changing this fact" and tests don't automatically qualify as changes. Some may, but some may not, i.e. those that don't change may be carried out.
Referring back to the Part vs Whole hypothesis, though, if a test (or a series of tests) is a kind of "understanding" then it won't work without altering the Whole, which actually qualifies as a change. This appears to be a paradox but it is based on a premise, i.e. "if that turns out to be wrong".
Can you please cite some literature on this fact?
Or to see, observe, and think for yourself. No scientific progress would have been made if scientists always insisted on citing literature for (563-483 B.C.)
[Can you fly in the air like birds by yourself? Can you soar in the sky like an eagle? Can you see your own face directly? Can your pinky know your brain? Get the idea (of a factual observation)?]
@*>: true enough.
"what "facts" you have observed that brought you to this conclusion."
Cannot fly in the air.
Cannot soar in the sky like an eagle?
Cannot see your own face directly (from your own eyes)?
Cannot "pinky" control the brain?
...
That:
You are. :-)
As for me, I'm not sure I can't fly. Never tried, and all I need is great speed and maybe dense air, both of them possible, right? Soaring is a similar issue.
I can imagine some bionic extension so that I can move my eye from its socket and see myself.
And frankly, I have no idea whether my pinky can control my brain or not. Maybe it does. Maybe my pinky is the avatar of the controller within the simulator.
So maybe I can understand the universe.
Why don't you feel the universe when asleep? [Mind is absent.]
Why don't you feel your tooth ache when asleep? Where is it then?
As I said (not me really, other wise people have said since the beginning): The only fact mind can know (for sure) is that it does not know. :-)
While I agree that a part cannot COMPLETELY undestand the whole, It can certainly understand some-things about the whole. I'm not asking to predict the future here, nor am I asking to even PROVE or DISPROVE anything. Rather lets try to determine the probabaility of simulation, based on what we know about the uiniverse and simulations.
"Math exists outside the universe" I say this because a universe need not exists for the ratio between a circle's circumference and it's radius, in a flat euclidean plain, to be Pi. Granted we needed a universe to exist in to DISCOVER this, but this truth does not require the existsence of, nor is effected by the absence of, the universe.
>Have you realised that you make restrictions based on ad-hoc assumptions?
I kind of though I was saying lets NOT make asumptions. Especially ones that assume infinite stuff.
>What I wrote is not a hypothesis, etc.; it's a factual observation.
If you mean "the part understanding the whole" thing, I dont think one can call that an obersvation, seems more like a logical/mathematical ducution to me.
>So if the universe is a simulation there is no possible way to find out the truth!
True, but ONLY if they dont WANT us to find out.
I dont really want to get caught up in semantics, but:
>>How do I define real?
But basically, something real is not artifical or illusionary. So if the universe is NOT a simulation, then what we percieve is REAL. If the universe IS a simulation, I would define what is REAL as the processing substrate (whatever that may be made of) on which the simulation runs (like say, World of Warcraft(not real) running on servers (real) ).
>>Define Simulator
That which runs a simulation.
The real, unreal, simulation, proof, not-proof, the universe, you me ... are all in the mind.
[A hint: Where are all these in deep sleep? An experiment: ask the same question(s) in your sleep.]
All these return (after waking up) when the mind becomes active.
Enjoy [the ride]. :-)
[in jest: I can answer your question, "is the universe a simulation," if you give me a few details: who is doing the simulation? what platform is being used for the simulation? what previous simulations were conducted by this researcher? what is the purpose of this simulation? is the result going to be published? if so, in what journal? Finally, is this simulation being done in the universe or outside of it? if outside, where?]
I refuse to belive we must go in circles to discuss this. For example, the points above on mathematics, are perfectly linear, logical arguments no circling back to the starting point needed (though possibly flawed, and please point out those flaws).
I'm not sure I understand your point about sleeping. I seems like you are saying that the mind must exist in order for the universe to exist. Granted, that for the universe to be PERCIEVED conciousness must exist, but I would argue the opposite; that for a mind to exist, a universe is a pre-requisite (even if that universe is a simulation.)
[Certainly not about thinking piled upon thinking, arguments, presuppositions, hypothesis, conjectures, theories, beliefs, etc.]
Why don't you conduct this direct experiment?: Ask any question you like in your sleep.
I'm not sure what point this makes though. Please elaborate.
More and more words piled up high will go nowhere, except to cause more and more confusion.
Rather than more thoughts, observe the things as they are, reflect, contemplate, and meditate. :-)
"More and more words piled up high will go nowhere, except to cause more and more confusion."
100% disagree. Unless those words actually fail to communicate understanding, like say, platitues and rhetoric. In fact the only way we have achived our current level of understanding is DUE to words. ("the shoulders of giants", as they say)
"Rather than more thoughts, observe the things as they are, reflect, contemplate, and meditate. :-) "
This statement seems to contradict itself: conteplation, reflection and mediation are all forms of thought.
[As I have said earlier, my mind accepts the fact that it does not know.]
Almost all of the mental 'simulation' in our mind is due to emergent properties. Our vision, for example, works as it does only because of our relative size to other objects around us. Imagine shrinking down to a size where a hydrogen atom approximates the scale of the Earth orbiting the Sun. If you were to look around at that scale, what would you see? More specifically, because of your size compared to wavelengths of light, how could "vision" work? Similarly, what would the universe "sound" like at that scale?
It's only because we are the size that we are that we have a mental simulation as we have. At other scales, our perception of the universe would be much different.
At the hydrogen atom scale, what concept would we have of galaxies? What would "color" mean?
At the other end of the (known) scale, with the (known, currently observable) universe scaled to the orbit of the Earth around the Sun, what would we "see"? Let's imagine we're at the "edge" looking in at the filaments of galactic structures. We'd be looking through a volume of space maybe a couple dozen billion light-years across. That is, it'd take a couple dozen billion years to detect any "visible" change in the far "edge" of that spatial volume. How rapidly would any change seem to happen within that volume?
What would our understanding of something like "inhabited planet" be like at that scale? Could we even detect that such things as "planets" even existed?
Well, it's kind of meaningless to think about existing at either the smallest or largest scales. But it should help illuminate some of the problem. We just don't what it is that we're experiencing. We've relatively recently run into 'dark matter', and then 'dark energy'. Now, there's maybe 'dark flow' and who knows what else? We seem unable to "experience" the vast majority of "reality".
I don't see any reason to think that there is any evidence at all that points toward or away from simulation. There is no meaning at all in thinking that "cleaves so tightly to a mathematical structure" has anything to do with 'simulation'. The universe most definitely does not "cleave so tightly to a mathematical structure". Rather, we specifically and deliberately created our mathematical languages as ways to model the behavior of things we observed. That's essentially the whole point of mathematical symbology. To assign any meaning to how one parallels the other is simply to say we built the models fairly well (so far). Other indications can be thought of in similar ways.
We have a long way to go before we have a clue about what's really "out there" in reality. And if it's a 'simulation', what would it be 'simulating'?
Tom
Thanks for the thoughtful post. I think I see your point, regarding scale, which is, I think, that, we cannot trust what we actually experience. I dont know that I agree with this 100%, considering how much we know about those extreem scales you mentioned (our ability to mantipulate our environment, and use tools, help to mitigate our scale disadvantage).
>>The universe most definitely does not "cleave so tightly to a mathematical structure". Rather, we specifically and deliberately created our mathematical languages as ways to model the behavior of things we observed.
I dont understand. Much of mathematics was discovered WAY before physics. Physisist noticed that they could use this mathematical language to descibe the behavior of the universe. So on one hand, we have this self-suffiecient, logical relationship structure called math, that does not even need a universe for it's truth to be true, and on the other a universe that follows very specific evolution, that can be descibed precicely by these maths. (And thus programmed preciciely too)
>>Everything we experience, everything we see, hear, feel, etc., is a known 'simulation'. That's been a known fact for quite a while.
A known fact? I see you put quotes around simulations, maybe thats what I'm missing. Or are you suggestion that our own mental representation of the universe is a simulation? If so, I would suggest "simulation" is the wrong word to use. Simulacrum might be better.
>>I don't see any reason to think that there is any evidence at all that points toward or away from simulation.
Well, lets imagine you were going to simulate a (super-simplistic) universe on your computer, how would you make that universe work/evolve, what optimizations would you use in your program? What limits would your program have? What features in our universe match this? Granted, this is most certainly NOT evidence, but interesting to think about.
>>And if it's a 'simulation', what would it be 'simulating'?
Personally I would love to run a super complex simulation of the universe and see if life and then intelligence spontantiously grew into existence. Alas I'll need a bigger computer for that one. Much bigger.
[With respect]: So, your desired simulation will be nothing but simulating what you can say about universe. "The dog chasing its own tail." ;-)
While I thank you for you input, this does not actually address the original question on probabilities, rather, it just dismisses it. (but so nicely no offence is taken :) )?""
Then Steven Wolfram has his own ideas regarding the universe as a computer.
Some of the people studying the Holographic Principle have suggested some potential empirical tests of the fundamental discreteness of the universe.
These may not quite be simulations in the "Matrix" sense, but that version may be impossible to test experimentally. (even if you take the red pill, that could just be a hallucination)
The "Matrix" version of the simulation hypothesis also does not address the question of whether the outer machine world is also a simulation, and the hypothesis is that the entire universe is a simulation, rather than just some peoples perception of the universe, then that version could have a difficulty in that the computer running the simulation (and thus the universe in which the computer exists) would have to be more complicated than the universe it is simulating.
So, would a system simulating the universe actually need to be more COMPLEX, or just LARGER, than the universe? Consider Conway's Game of Life. It is an extremely SIMPLE program that can be run on a fairly simple computer. However, give it TONS and TONS of memory, and the result will be a highly complex "simulation". In this case, the simulation uses the nature of chaos theory to turn a simple set of rules into something highly complex. Fractals might be another example of complexity growing from simplicity.
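To make that simplicity concrete, here is a minimal sketch of a single Game of Life update step (Python; the set-of-live-cells representation is just one convenient choice, not anyone's reference implementation):

from collections import Counter

def life_step(live):
    """One update; live is a set of (x, y) coordinates of live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next step with exactly 3 live neighbours,
    # or with 2 live neighbours if it is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

A handful of lines, yet with enough memory and steps the patterns it produces are famously rich.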
Just wondering out loud now: how does this premise hold up given quantum uncertainty? Consider the fact that only observation allows a particle wavefunction to collapse into a specific place/momentum, etc. Therefore the state need not be SPECIFICALLY defined by the simulation program's state. Can you think of any way in which this would act like, say, compression, or some other programming concept that would allow our super-huge "universe-computer" to be smaller?
Before we start you will need a platform for C/C++. You can use Code::Blocks, a free IDE (Integrated Development Environment) for C/C++, or Visual Studio; I think they have a free version too. Next, I will give a few examples of what C looks like, and I will try to describe how it works.
Number 1
#include "stdafx.h"

int main()
{
    printf("This is just a test");
    getchar();
    return 0;
}
We include stdafx.h, the header that helps the compiler to understand the code.
#include "stdafx.h"

int main()
{

}
This is the main function; I think this is obvious. Next, the printf function will show whatever we want on the console. In this case "This is just a test" is what you will see when you run this application. The getchar(); call allows the console to stay open until you press ENTER. And finally, return 0; means that we finished the program without any error.
This is what you will see: [Console output: This is just a test]
Number 2
#include "stdafx.h"

int main()
{
    int a, b;
    a = 10;
    b = 20;
    printf("%d + %d = %d", a, b, a + b);
    printf("\n");
    printf("%d * %d = %d", a, b, a * b);
    printf("\n");
    printf("%d - %d = %d", a, b, a - b);
    printf("\n");
    printf("%d : %d = %d", a, b, a / b);
    getchar();
    return 0;
}
In the main function I create 2 variables, a and b; next, I set a = 10 and b = 20. As you know from the previous example, the printf() function will show whatever we want on the console. Now, let's see what this line of code does: printf("%d + %d = %d", a, b, a + b);. If you read the previous tutorial you know that %d indicates the position where a decimal integer should be. We have 3 of %d there, so we need 3 values: the first one is a, the second one is b, and the third one is a + b (or a * b, etc.).
You can use the constant (escape) character \n to show every operation on a new line, or format the output however you want.
The final result should be:

[Console output:
10 + 20 = 30
10 * 20 = 200
10 - 20 = -10
10 : 20 = 0]
You see that 10 : 20 = 0; that is because our variables are int, and integer division discards the fractional part: the int type cannot hold 0.5. If you make a and b float, you will have your answer (and of course change %d to %f).
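For clarity, here is a minimal float version of that division (a sketch in the same style as the examples above):

#include "stdafx.h"
#include <stdio.h>

int main()
{
    float a = 10, b = 20;
    printf("%f : %f = %f", a, b, a / b);   /* prints 10.000000 : 20.000000 = 0.500000 */
    getchar();
    return 0;
}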
Number 3
#include "stdafx.h"
#include <stdio.h>

int main()
{
    float a = 0, b = 0, c = 0, r = 0;
    printf("a:");
    scanf("%f", &a);
    printf("b:");
    scanf("%f", &b);
    printf("c:");
    scanf("%f", &c);
    r = a + b - c;
    printf("r = %f", r);
    getchar();
    return 0;
}
In this example a new function appears: scanf();. Well, let's take a look and see what we have here: scanf("%f", &a);. The scanf() function in this case has 2 parameters: the first one is the type of data, in this case float (%f), and the second one is an address; more specifically, the address where the value of the a variable will be stored. & means "address of".
Number 4
#include "stdafx.h"
#include <stdio.h>

void main()
{
    char c;
    int i;
    float f;
    double d;

    c = 'C';
    i = 1195;
    f = 123.4567;
    d = 11212.33E3;

    printf("character = %c", c);
    printf("\n");
    printf("int = %d", i);
    printf("\n");
    printf("float = %f", f);
    printf("\n");
    printf("double = %e", d);
    getchar();
}
This is a very simple example that shows you how to use every fundamental data type, so you can experiment with C as you want.
#include <tbint.h>
Inheritance diagram for sc::TwoBodyInt:
buffer (tbint_type type = eri) [virtual]
The computed shell integrals will be put in the buffer returned by this member.
Some TwoBodyInt specializations have more than one buffer: the type argument selects which buffer is returned. If the requested type is not supported, then 0 is returned.
compute_shell (int, int, int, int) [pure virtual]
Given four shell indices, integrals will be computed and placed in the buffer.
The first two indices correspond to electron 1 and the second two indices correspond to electron 2.
Implemented in sc::TwoBodyIntCints, sc::TwoBodyIntCCA, and sc::TwoBodyIntV3.
log2_shell_bound (int = -1, int = -1, int = -1, int = -1) [virtual]
Return log base 2 of the maximum magnitude of any integral in a shell block obtained from compute_shell.
An index of -1 for any argument indicates any shell.
set_redundant [inline, virtual]
If redundant is true, then keep redundant integrals in the buffer.
The default is true.
Reimplemented in sc::TwoBodyIntCCA. | http://www.mpqc.org/mpqc-html/classsc_1_1TwoBodyInt.html | CC-MAIN-2013-48 | refinedweb | 145 | 51.34 |
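A hypothetical usage sketch based on the members documented above (the variable tbint, the shell indices, and the surrounding setup are assumptions, not part of these docs):

// tbint: a TwoBodyInt implementation obtained elsewhere (e.g. from an
// integral evaluator factory); i, j, k, l: shell indices.
tbint->compute_shell(i, j, k, l);       // electron 1: shells i, j; electron 2: shells k, l
const double *ints = tbint->buffer();   // default buffer type; 0 if the type is unsupported
// ... read the shell block of integrals from ints ...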
Enterprise Java Interview Questions and Answers: What is new in JEE 6?
It is imperative to keep track of the key enhancements and improvements to JEE. If you are interested in JEE basics, then try the Enterprise Java frequently asked questions and answers.
Q. What is new in the Java EE 6 (JEE 6) compared to Java EE 5 (JEE 5)?
A.
- JEE 6 provides more extensibility points and service provider interfaces for service providers to plug into. For example, the "Java Authentication Service Provider Interface for Containers" (JASPIC) provides a mechanism for authentication providers to integrate with containers.
- JEE 5 favored convention over configuration by replacing XML with annotations and POJOs. JEE 6 extended POJOs with dependency injection (JSR 299: Contexts and Dependency Injection, CDI). This enables a JSF managed bean component to interact with an Enterprise JavaBeans (EJB) component model, simplifying development. It is about time all the various types of managed beans were unified. In Java EE 6, CDI builds on a new concept called "managed beans", which are managed by the enterprise edition container. In CDI, a managed bean is a Java EE component that can be injected into other components. The specification also provides a set of services like resource injection, lifecycle callbacks, and interceptors.
- Any CDI managed component may produce and consume events. This allows beans to interact in a completely decoupled fashion. Beans consume events by registering for a particular event type and qualifier (a minimal sketch follows this list).
- The JAX-WS stack was overhauled into an integrated stack with JAX-WS 2.x, JAXB 2.x, SAAJ 1.x, JAX-RS 1.x (the new RESTful API), and JAXR 1.x.
- The Servlet 3.0 spec introduced async servlets, file upload functionality, and annotation-based configuration. The web.xml file is now optional.
- Singleton EJBs (i.e., one EJB per container) and asynchronous session beans were introduced. The EJB model was streamlined with fewer classes and interfaces, and simpler object-to-relational mapping by taking advantage of JPA.
- Bean based validation framework was introduced to avoid duplication.
- Enterprise Java Beans (i.e EJBs) can be packaged directly into a WAR file. No longer required to be packaged as JAR and then included into an EAR.
- JSF 2.0 simplifies the development of UI components. JSF 2.0 has integrated Ajax and CDI support.
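A minimal sketch of the CDI event model mentioned in the list above (javax.enterprise.event.Event, @Observes and @Inject are the real CDI types, with imports omitted; the class names are illustrative):

public class OrderPlaced { /* event payload */ }

public class OrderProcessor {
    @Inject
    Event<OrderPlaced> orderEvent;            // producer side

    public void placeOrder() {
        orderEvent.fire(new OrderPlaced());   // notify all observers
    }
}

public class AuditLogger {
    // consumer side: invoked by the container, fully decoupled from the producer
    public void onOrder(@Observes OrderPlaced event) {
        // react to the event
    }
}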
Q. How does the new bean validation framework avoid duplication?
A. Developers often code the same validation logic in multiple layers of an application, which is time-consuming and error-prone. At times they put the validation logic in their data model, cluttering it with what is essentially metadata. JEE 6 improves validation and reduces duplication with a much-improved annotation-based bean validation framework. Bean Validation offers a framework for validating Java classes written according to JavaBeans conventions. You use annotations to specify constraints on a JavaBean. For example,
The JavaBean is defined below
public class Contact {

    @NotEmpty @Size(max=100)
    private String firstName;

    @NotEmpty @Size(max=100)
    private String surname;

    @NotEmpty @Pattern(regexp = "[a-zA-Z]+")
    private String category;

    @ShortName
    private String shortName;   // custom validation

    ...

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    ...
}
Custom validators can be defined by declaring an annotation and a corresponding implementation:
@Documented
@Constraint(validatedBy = ShortNameValidator.class)
@Target({ElementType.METHOD, ElementType.FIELD, ElementType.ANNOTATION_TYPE})
@Retention(RUNTIME)
public @interface ShortName {
    String message() default "Wrong name";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}
Next, the validator implementation class:
public class ShortNameValidator implements ConstraintValidator<ShortName, String> {

    private final static Pattern SHORTNAME_PATTERN = Pattern.compile("[a-zA-Z]{5,30}");

    public void initialize(ShortName constraintAnnotation) {
        // nothing to initialize
    }

    public boolean isValid(String value, ConstraintValidatorContext context) {
        return SHORTNAME_PATTERN.matcher(value).matches();
    }
}
You could use the validator as shown below
Contact contact = new Contact();
contact.setFirstName("Peter");
// ... set other values

ValidatorFactory validatorFactory = Validation.buildDefaultValidatorFactory();
Validator validator = validatorFactory.getValidator();
Set<ConstraintViolation<Contact>> violations = validator.validate(contact);
Q. What are the benefits of the asynchronous processing support that was introduced in Servlet 3.0 in JEE 6?
A.
1. If you are building an online chess game or a chat application, the client browser needs to be periodically refreshed to reflect the changes. This used to be achieved via a technique known as server polling (aka client pull or client refresh). You can use the HTML <META> tag for polling the server. This tag tells the client it must refresh the page after a number of seconds.
<META HTTP-EQUIV="Refresh" CONTENT="10; URL=newPage.html">
The URL newPage.html will be refreshed every 10 seconds. This approach has the downside of wasting network bandwidth and server resources. With the introduction of this asynchronous support, the data can be sent via the mechanism known as the server push as opposed to server polling. So, the client waits for the server to push the updates as opposed to frequently polling the server.
2. Ajax calls are an integral part of web development, as they provide a richer user experience. This also means that with Ajax, the clients (i.e., browsers) interact more frequently with the server compared to the page-by-page request model. If an Ajax request needs to tap into server-side calls that are very time consuming (e.g., report generation), the synchronous processing of these Ajax requests can degrade the overall performance of the application, because those threads will be blocked; servers generally use a thread pool with a finite number of threads to service concurrent requests. Asynchronous processing allows these time-consuming requests to be throttled via a queue, and the same thread(s) to be recycled to process queued requests without having to chew up the other threads from the server thread pool. This approach can be used for non-Ajax requests as well (a minimal sketch follows the note below).
Note: In JEE 6, EJB 3.1 also allows a session bean to be marked asynchronous.
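A minimal sketch of an asynchronous servlet (Servlet 3.0 API; the report-generation scenario and names are illustrative, and the javax.servlet imports are omitted):

@WebServlet(urlPatterns = "/report", asyncSupported = true)
public class ReportServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        final AsyncContext ctx = req.startAsync();   // release the container thread
        ctx.start(new Runnable() {                   // hand the slow work to the async context
            public void run() {
                // ... generate the report and write it to ctx.getResponse() ...
                ctx.complete();                      // signal that processing is finished
            }
        });
    }
}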
Q. What are the benefits of the web fragments introduced in the Servlet 3.0 spec?
A. Web applications use frameworks like JSF, Struts, Spring MVC, Tapestry, etc. These frameworks normally bootstrap (i.e., register) via the web.xml file using the <servlet> and <listener> tags. For example,
The web.xml file
<?xml version="1.0" encoding="UTF-8"?>
<web-app>
    <servlet>
        <servlet-name>My JSFServlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>My JSFServlet</servlet-name>
        <url-pattern>/faces/*</url-pattern>
    </servlet-mapping>
</web-app>
If a particular application uses more than one framework, the above approach is not modular: you will have to bootstrap all the frameworks within the same web.xml file, making it large and difficult to isolate framework-specific descriptors. The Servlet 3.0 specification addresses this issue by introducing web fragments. A web fragment can be considered a segment of the whole web.xml; one or more web fragments together constitute the complete deployment descriptor. The fragment files are stored under /META-INF/web-fragment.xml, and it is the responsibility of the container to scan the fragment files during server start-up.
The web-fragment.xml file
<web-fragment>
    <servlet>
        <servlet-name>myFrameworkSpecificServlet</servlet-name>
        <servlet-class>myFramework.myFrameworkServlet</servlet-class>
    </servlet>
    <listener>
        <listener-class>myFramework.myFrameworkListener</listener-class>
    </listener>
</web-fragment>
Note: The Servlet 3.0 specification also provides enhanced pluggability by providing an option to add servlets, filters, servlet mappings and filter mappings at run time.
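A minimal sketch of such run-time registration (Servlet 3.0 API; the class names are illustrative and imports are omitted):

@WebListener
public class Bootstrapper implements ServletContextListener {
    public void contextInitialized(ServletContextEvent sce) {
        ServletContext ctx = sce.getServletContext();
        // Register and map a servlet with no web.xml entry at all.
        ServletRegistration.Dynamic reg =
                ctx.addServlet("myServlet", MyServlet.class);
        reg.addMapping("/dynamic/*");
    }

    public void contextDestroyed(ServletContextEvent sce) { }
}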
Q. What do you understand by the term "Web profile" in Java EE 6?
A. Java EE 6 introduces the concept of profiles as a way to slimming the footprint of the Java EE platform and better target it for specific audiences. Profiles are configurations of the Java EE platform that are designed for specific classes of applications. For example, the required elements for a Web profile are
- Java EE 6 (JSR-316)
- Common Annotations for Java Platform 1.1 (JSR-250)
- Dependency Injection for Java 1.0 (JSR-330)
- Contexts and Dependency Injection for Java EE platform 1.0 (JSR-299)
- Servlet 3.0 (JSR-315)
- JavaServer Pages (JSP) 2.2 (JSR-245)
- Expression Language (EL) 2.2 (JSR-245)
- Debugging Support for Other Languages 1.0 (JSR-45)
- Standard Tag Library for JavaServer Pages (JSTL) 1.2 (JSR-52)
- JavaServer Faces (JSF) 2.0 (JSR-314)
- Enterprise JavaBeans (EJB) 3.1 Lite (JSR-318)
- Java Transaction API (JTA) 1.1 (JSR-907)
- Java Persistence API (JPA) 2.0 (JSR-317)
- Bean Validation 1.0 (JSR-303)
- Managed Beans 1.0 (JSR-316)
- Interceptors 1.1 (JSR-318)
The JEE 6 has also introduced the concept known as "pruning" to manage complexities. This is similar to the concept introduced in Java SE 6. Pruning is performed as a multistep process where a candidate is declared in one release but may be relegated to an optional component in the next release, depending on community reaction. For example, JAX-RPC will be pruned and replaced by JAX-WS. However, if Java EE application server vendors do include a pruned technology, they must do so in a compatible way, such that existing applications will keep running. The profiles and pruning are debatable topics and only time will tell if they work or not.
I’m really happy with my PyRosetta prototype script, so it’s time for a production run!
But there is a problem. My nodes have 4GB of memory and 8 cores. But PyRosetta needs ~1.5GB of memory to run.
How can I still make use of all 8 cores?
Can I somehow reduce the memory usage, since I don’t need features like orbitals, etc?
I tried to set `low_memory_mode` to True in __init__.py
config = {"low_memory_mode": True, "protocols": False, "core": True, "basic": True, "numeric": False, "utility": True, 'monolith': False},
but this had no noticeable effect.
Or can I make use of threads? I know Python is limited by the global interpreter lock (GIL), but most of the CPU time is spent in C++ code, and at least in theory it's possible to release the GIL before calling a C++ function (for example, scorefx(pose)). But I'm not sure if PyRosetta does that.
Any help is very welcome. Until then I’ll chug along at half speed:)
PS:
I already tried using
r.init('-linmem_ig 20')
But that did not change anything.
My peptide is only 33 amino acids, and I'm (only) using the mm_* score functions.
And I also threw out most of the residues in database\chemical\residue_type_sets\fa_standard\residue_types.txt. Can I somehow prevent the loading of the rest of the database? (For example orbital information, fragments, etc.)
Rosetta isn't really set up for threads on the C++ level, so even aside from the GIL, that's not going to be an option.
Removing residues helps, but removing patches from database\chemical\residue_type_sets\fa_standard\patches.txt is a real savings, especially if you start removing things which can be applied combinatorially. For just protein modeling, you can replace it with the patches.txt.slim version in the same directory. I'd start with that as a base, and only add back the patches which you think you'll need.
Most of the information in the database is loaded on an as-needed basis, so if you aren't using it, it won't be taking up memory.
Past that, the other recommendation is to make sure you aren't keeping around references to objects you aren't using, or only loading objects on an as-needed basis. For example, Poses can take up a bunch of memory, so instead of loading all of your poses at the beginning and storing them in a list, load them one-by-one if you can, and delete/overwrite the pose object once you're done with it. The same goes for movers and other objects - often they're small, but occasionally there's ones which will store references to Poses or other large objects, keeping large amounts of memory in use, even if you don't need them.
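For example, something like the following streams poses one at a time instead of keeping them all alive (exact helper names vary by PyRosetta version; pose_from_pdb, create_score_function and the "mm_std" weights name are assumptions based on common usage):

import rosetta
rosetta.init()

scorefx = rosetta.create_score_function("mm_std")
for path in pdb_paths:                  # pdb_paths: your own list of input files
    pose = rosetta.pose_from_pdb(path)  # load one pose at a time
    print path, scorefx(pose)
    del pose                            # drop the reference before the next file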
The other recommendation is to avoid using the "monolith" build of PyRosetta if you can. That loads all of Rosetta into memory, even if you're not using those portions of the code. The "namespace" build only loads those portions of the code that you need when you need it, saving some memory if you're not using all of Rosetta. (There might not be a Windows namespace build, though.)
I remember Sergey saying that "monolith" will be the mainstream build in the future.
Re enabling low-memory mode: this should be done by modifying config.json in the same dir. When you directly try to adjust default values in __init__.py, they get overwritten by the very next line, when the script loads the JSON config file, which has low_memory_mode set to false. So please try again by modifying the JSON file.
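For example (a sketch; the path to config.json depends on where PyRosetta is installed):

import json

with open("config.json") as f:          # the config.json next to __init__.py
    config = json.load(f)
config["low_memory_mode"] = True
with open("config.json", "w") as f:
    json.dump(config, f)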
Also: try to limit your imports and remove everything that you do not need. Each import in namespace mode consumes some memory.
Also: to make sure that low-memory mode worked, try to run: python -c "import rosetta; rosetta.init(); print rosetta.config". And you can also check the 'top' output to see how much memory you have allocated.
Hope this helps, | https://rosettacommons.org/node/3830 | CC-MAIN-2022-27 | refinedweb | 678 | 65.73 |
topics discussed: *[Ii]mpl, OmNamespace and QName usage in OM, M1 and testing
[12/22/2004 9:01 AM] <Srinath> Just in time (JIT) :) it is 7.59 here [12/22/2004 9:01 AM] -->| Harsha ([email protected]) has joined #apache-axis [12/22/2004 9:01 AM] <Chinthaka> hi all [12/22/2004 9:01 AM] <alek_s> good that this week meeting is at 9AM EST [12/22/2004 9:01 AM] <Jaliya> Hi All [12/22/2004 9:02 AM] <Harsha> Hi All [12/22/2004 9:02 AM] <alek_s> i am in europe so it is 3PM but otherwise it would be 4 AM :) [12/22/2004 9:02 AM] <Ajith> hello all [12/22/2004 9:02 AM] <alek_s> he all [12/22/2004 9:02 AM] <gdaniels> good ${TIMEOFDAY}, all [12/22/2004 9:02 AM] <Deepal> hi [12/22/2004 9:02 AM] <Srinath> what is 4 today? [12/22/2004 9:02 AM] <Srinath> shall I start with package structure ? [12/22/2004 9:02 AM] -->| MHansen ([email protected]) has joined #apache-axis [12/22/2004 9:02 AM] -->| JaimeM ([email protected]) has joined #apache-axis [12/22/2004 9:03 AM] <gdaniels> Sure. I'd also like to discuss OMNamespace a little [12/22/2004 9:03 AM] <Chinthaka> :( [12/22/2004 9:03 AM] <gdaniels> And I'm sure we can talk engine stuff a lot :) [12/22/2004 9:03 AM] <Ajith> alek : you back in poland? [12/22/2004 9:03 AM] <alek_s> yes i am :) [12/22/2004 9:03 AM] <Ajith> cool [12/22/2004 9:03 AM] <gdaniels> Eran: why the frowny-face? [12/22/2004 9:03 AM] <alek_s> so we are spanning minimum 3 continents [12/22/2004 9:03 AM] <Srinath> shall we have the org.apache.axis.engine only with *Impl for impl classes [12/22/2004 9:03 AM] <Chinthaka> just kidding :) [12/22/2004 9:04 AM] <gdaniels> Srinath: +1 [12/22/2004 9:04 AM] <Chinthaka> alek : great isn't it ? :) [12/22/2004 9:04 AM] <MHansen> Hi - I am new to this chat. I'm a heavy axis1.2 user and interested in 2.0. OK if I participate? [12/22/2004 9:04 AM] <alek_s> yes ;-) [12/22/2004 9:04 AM] <Ajith> alek : 4 continents (with harsha at australia) [12/22/2004 9:04 AM] <Srinath> :) what others think? [12/22/2004 9:04 AM] <alek_s> Hi MHansen! [12/22/2004 9:04 AM] <gdaniels> MHansen - of course! Welcome! [12/22/2004 9:04 AM] <Jaliya> Hi MHansen, welcome [12/22/2004 9:05 AM] <MHansen> Great. Thanks :-) [12/22/2004 9:05 AM] <Deepal> Hi MHansen [12/22/2004 9:05 AM] <Ajith> yep , all interested parties are welcome [12/22/2004 9:05 AM] <Harsha> Hi Mhansen [12/22/2004 9:05 AM] <Ajith> shall we start then [12/22/2004 9:05 AM] -->| farhaan ([email protected]) has joined #apache-axis [12/22/2004 9:05 AM] <gdaniels> My compatriot Jaime (who did the JMS stuff for Axis 1.X) is here too, though he's not saying much (which is a little unusual for him, I can tell you :)) [12/22/2004 9:06 AM] <gdaniels> Ajith: We started - Srinath asked about package structure [12/22/2004 9:06 AM] <gdaniels> :) [12/22/2004 9:06 AM] <alek_s> MHansen: what are your interests? What is that you want to see in AXIS2 that is not in AXIS1.x ? [12/22/2004 9:06 AM] <wizard42> Hi * ! I am also is new to this chat. I just would like to listen and have a look where Axis will go to. [12/22/2004 9:06 AM] <Srinath> shall I sealed the package discussion and change it ASAP ..the silance is assumed as a approval ;) [12/22/2004 9:07 AM] <gdaniels> Sounds good to me, Srinath. [12/22/2004 9:07 AM] <Srinath> 3.2. 
[12/22/2004 9:07 AM] <Srinath> 1 [12/22/2004 9:07 AM] <Srinath> sealed :) [12/22/2004 9:08 AM] <Jaliya> ok, the next is the engine, I guess [12/22/2004 9:08 AM] <alek_s> i think that decision about .imp versus *Impl should be left to particular cases [12/22/2004 9:08 AM] <MHansen> alek_s: - I'm working on an EAI (enterprise application integration) middleware system that is build around JAX-RPC and other J2EE standards. My focus with Axis 1.2 has been SAAJ and JAX-RPC. I'm trying to help get bugs out when I have time. For Axis-2, I'd like to see JAX-RPC 2.0, a rock-solid SAAJ implementation, and performance improvements. I'd love to lend a hand as time permits. [12/22/2004 9:08 AM] <JaimeM> ha, sorry about my quiet nature this morning [12/22/2004 9:08 AM] <alek_s> if you have more classes than *Impl it iwll b e very messy if you mix interfaces and implementaiton in the same package IMHO [12/22/2004 9:08 AM] <Srinath> Hi MHansen, JamieM, wizard42 .. welcome all [12/22/2004 9:09 AM] <alek_s> MHansen: sounds good [12/22/2004 9:09 AM] <gdaniels> I'm fine with engine.impl.* as well - just don't like axis.impl.engine... [12/22/2004 9:09 AM] <Srinath> I am +1 for both ..:) [12/22/2004 9:09 AM] <Jaliya> +1 glen , let's go ahead with engine.impl [12/22/2004 9:10 AM] <Deepal> yep its better [12/22/2004 9:10 AM] <alek_s> +1 to both [12/22/2004 9:10 AM] <gdaniels> discretion of the builder to use package.FooImpl or package.impl.Foo [12/22/2004 9:11 AM] <Srinath> I see that better not to have impl in package stucture at all? [12/22/2004 9:11 AM] <Srinath> are we for impl then ? [12/22/2004 9:11 AM] <gdaniels> Didn't understand that Srinath? [12/22/2004 9:11 AM] <alek_s> Srinath: if implementation is complex not just *Impl classes where you want ot put additional classes? [12/22/2004 9:12 AM] <Srinath> Glen: Dr Sanjiva Said +1 from me too. Never liked "impl" in a package name .. just like [12/22/2004 9:12 AM] <Srinath> I hated "enum" in a package name ;-). [12/22/2004 9:12 AM] <Srinath> others do nto specify which one [12/22/2004 9:12 AM] <Srinath> so I think it is XXImpl [12/22/2004 9:13 AM] <Srinath> shall we go for o.a.a.engine.impl ect then? [12/22/2004 9:14 AM] -->| {eng}bar4ka ([email protected]) has joined #apache-axis [12/22/2004 9:14 AM] <gdaniels> I'm fine with either way, just not axis.impl.... :) [12/22/2004 9:14 AM] <Srinath> me too Jaliya, other are we for engine.impl then? [12/22/2004 9:15 AM] <Jaliya> ?? [12/22/2004 9:15 AM] <gdaniels> okay... let's move on then? [12/22/2004 9:16 AM] <gdaniels> I'd like to talk about namespaces a little [12/22/2004 9:16 AM] <gdaniels> I'm a little concerned about this OMNamespace thing [12/22/2004 9:16 AM] <alek_s> namespaces are always little tricky ... [12/22/2004 9:16 AM] <{eng}bar4ka> hey guys i am having some problem loading AxisServlet, i guess its not a classpath issue cause HappyAxis.jsp can find it, but i could not load it i guess [12/22/2004 9:17 AM] <gdaniels> {eng}bar4ka: This chat right now is about Axis2 development, so we're trying to stay on that. You could either hang out later, or ask on axis-user.... [12/22/2004 9:18 AM] <gdaniels> So here's my concern - there are a lot of cases where I want to use just QNames, and never worry about explicit namespace declaration. [12/22/2004 9:18 AM] <alek_s> example? [12/22/2004 9:18 AM] <gdaniels> OMElement foo = new OMElement(qname, content); writer.write(foo); [12/22/2004 9:18 AM] <alek_s> content? 
[12/22/2004 9:19 AM] <gdaniels> should gen something like <ns:qname xmlns:content</ns:qname> [12/22/2004 9:19 AM] <alek_s> you mean element text content or List<OmElement> [12/22/2004 9:19 AM] <Chinthaka> seems like some Sri lAnkan guys have some prob [12/22/2004 9:19 AM] <Chinthaka> with chat [12/22/2004 9:19 AM] <gdaniels> but that's secondary [12/22/2004 9:19 AM] <Chinthaka> Srinath, Deepal and Ajith [12/22/2004 9:19 AM] <gdaniels> the point is I DON'T want to have to declare an OMNamepace [12/22/2004 9:19 AM] <gdaniels> Eran: oh dear [12/22/2004 9:20 AM] <Chinthaka> they just told me, but lets go ahead [12/22/2004 9:20 AM] <gdaniels> aren't they using the same net connection as you? [12/22/2004 9:20 AM] <alek_s> if you depend on namespace prefix you better have prefix in QName right? [12/22/2004 9:20 AM] <Jaliya> we are in two location glen [12/22/2004 9:20 AM] <Chinthaka> Glen : no [12/22/2004 9:21 AM] <gdaniels> Alek: if you depend on a particular prefix, yes [12/22/2004 9:21 AM] -->| Srinath_ ([email protected]) has joined #apache-axis [12/22/2004 9:21 AM] -->| deepal123 ([email protected]) has joined #apache-axis [12/22/2004 9:21 AM] <gdaniels> but you can do that : QName qname = new QName("ns", "foo", "prefix") [12/22/2004 9:21 AM] <gdaniels> still no need to make OMNamespace [12/22/2004 9:21 AM] <deepal123> we had some prob. [12/22/2004 9:21 AM] -->| Ajith1030 ([email protected]) has joined #apache-axis [12/22/2004 9:21 AM] <gdaniels> I just want to make sure the APIs work without it [12/22/2004 9:21 AM] <Chinthaka> but glen [12/22/2004 9:22 AM] <Chinthaka> isn't it better to have OMNamespace [12/22/2004 9:22 AM] <Ajith1030> aah finally Back in the chat again [12/22/2004 9:22 AM] <deepal123> what did u talk , can some put that into yahoo [12/22/2004 9:22 AM] <Chinthaka> so that other Elements, attr, etc can use that [12/22/2004 9:22 AM] <gdaniels> Eran: Clearly, in my opinion the answer is "no". :) Or at least "not always". [12/22/2004 9:22 AM] <Jaliya> Deepal: Sent the chat using yahoo [12/22/2004 9:23 AM] <gdaniels> If you care about it, sure you should be able to register particular NS mappings at particular places. I definitely want that. [12/22/2004 9:23 AM] <gdaniels> I just don't want to force people to use this class. [12/22/2004 9:23 AM] <alek_s> Glen: i thin in this case it is as if you delcared namespace on this element (shows up during serialization) [12/22/2004 9:23 AM] <gdaniels> Alek: Yes. 
[12/22/2004 9:23 AM] <gdaniels> Another example: [12/22/2004 9:24 AM] |<-- Harsha has left irc.freenode.net () [12/22/2004 9:24 AM] <gdaniels> QName q1 = new QName("ns1", "foo") [12/22/2004 9:24 AM] <alek_s> Glen: the only a problem is if you try later to declare the same prefix to bind ieth other namespace - but that is an error that OM impl should detect [12/22/2004 9:24 AM] <gdaniels> QName q1 = new QName("ns1", "bar") [12/22/2004 9:24 AM] |<-- Deepal has left irc.freenode.net (Read error: 60 (Operation timed out)) [12/22/2004 9:24 AM] |<-- Srinath has left irc.freenode.net (Read error: 60 (Operation timed out)) [12/22/2004 9:24 AM] <gdaniels> OMElement root = new OMElement(q1); [12/22/2004 9:24 AM] <gdaniels> OMElement child = new OMElement(q2); [12/22/2004 9:24 AM] <gdaniels> root.addChild(child); [12/22/2004 9:24 AM] <gdaniels> should serialize to [12/22/2004 9:24 AM] <alek_s> Glen: in this case there is *no* namespace prefix declared [12/22/2004 9:25 AM] <gdaniels> <p1:foo xmlns:<p1:bar/></p1:foo> [12/22/2004 9:25 AM] <Ajith1030> Hey What did you guys decide about the OMNamespace [12/22/2004 9:25 AM] <gdaniels> yes [12/22/2004 9:25 AM] <alek_s> StAX or XmlPull is able to generate you prefix but it is not known until serialization happen [12/22/2004 9:25 AM] <alek_s> Glen: exactly [12/22/2004 9:25 AM] <gdaniels> so the framework should declare you one [12/22/2004 9:25 AM] <alek_s> however if you put root into OMElement root2=new OMElement(otherQname) [12/22/2004 9:26 AM] <alek_s> and otherQName has prefix "n" then you get serializaiton where <n:foo><n:bar/></n:foo> [12/22/2004 9:26 AM] <alek_s> is it what you want? [12/22/2004 9:26 AM] <gdaniels> there are tradeoffs [12/22/2004 9:27 AM] <gdaniels> If I'm building up content, and I want to write a QName string inside an attribute or something, it's nice to have the mapping done already [12/22/2004 9:27 AM] <alek_s> i think it is the simplest way to do fragments of XML - elements that are not attached to XML document [12/22/2004 9:27 AM] <alek_s> then you *really* should declare mapping explicitly [12/22/2004 9:27 AM] <{eng}bar4ka> gdaniels, sorry about "offtopic", but the axis-user you talk is a channel or the mailing list ? [12/22/2004 9:27 AM] <gdaniels> We don't now and it works well [12/22/2004 9:28 AM] <gdaniels> {eng}bar4ka: Mailing list [12/22/2004 9:28 AM] <alek_s> otherwise effect iis unpredictable [12/22/2004 9:28 AM] <gdaniels> maybe we can have an option [12/22/2004 9:28 AM] <alek_s> if you do not declare namespace ... [12/22/2004 9:28 AM] <alek_s> for attributes it would be neat to have option to set QName as attribute value [12/22/2004 9:29 AM] <Ajith1030> Hey guys, shall we get this namespace thing into the mailing list [12/22/2004 9:29 AM] <alek_s> and let OM to resolve prefix during serialization [12/22/2004 9:29 AM] <gdaniels> alek: Axis 1.X does that [12/22/2004 9:29 AM] <alek_s> (i have done this in XSUL/XPP3/XB1 and it works pretty well) [12/22/2004 9:29 AM] <gdaniels> well something like it [12/22/2004 9:29 AM] <gdaniels> I'm +1 for it [12/22/2004 9:29 AM] <Ajith1030> it seems we need to dicuss a bit about the M1 [12/22/2004 9:30 AM] <alek_s> Glen: in AXIS 1.x you talk about serializer writing to SAX? [12/22/2004 9:30 AM] <alek_s> Glen: or DOM? 
[12/22/2004 9:30 AM] <gdaniels> Alek: In Axis1.1 it writes to text [12/22/2004 9:30 AM] <gdaniels> s/1.1/1.X/ [12/22/2004 9:31 AM] <{eng}bar4ka> gdaniels, sorry about bothering again, i will hang for a hope and help to solve this problem, its already in the axis-user mailing list at [12/22/2004 9:31 AM] <alek_s> Glen: what part of AXIS 1.x do you refer to exactly? [12/22/2004 9:31 AM] <gdaniels> SerializationContext (but really also MessageElement) [12/22/2004 9:32 AM] <gdaniels> MessageElement has a list of "qname-attributes" which need to be resolved at serialization time [12/22/2004 9:32 AM] <alek_s> so it is geared for SAX writer we are talking about tight? [12/22/2004 9:33 AM] <gdaniels> ? [12/22/2004 9:33 AM] <alek_s> you get exactly the same gfunctionality if you declare set of namespaces on top-level elemenbt in OM (liek SOAP:Envelope) [12/22/2004 9:33 AM] <alek_s> Geln: when qname-attributes are resolved? [12/22/2004 9:34 AM] <alek_s> I am fine to move discussion to mailing list [12/22/2004 9:34 AM] <gdaniels> I don't remember exactly, Alek [12/22/2004 9:34 AM] <gdaniels> Need to go look [12/22/2004 9:34 AM] <Chinthaka> Alek : I also agree :) [12/22/2004 9:34 AM] <gdaniels> The point is that the idea of attributes with qname values is good [12/22/2004 9:34 AM] <alek_s> lets then talk about M1 ? [12/22/2004 9:34 AM] |<-- Ajith has left irc.freenode.net (Read error: 113 (No route to host)) [12/22/2004 9:35 AM] <alek_s> Glen: i have seen that used in XSUL and i a gree it is GooThing(tm) :) [12/22/2004 9:35 AM] <Chinthaka> Ajith quit again ???? :( :( [12/22/2004 9:35 AM] <gdaniels> Ajith1030 is Ajith :) [12/22/2004 9:35 AM] <gdaniels> That was Ajith's ghost departing [12/22/2004 9:35 AM] <Chinthaka> :) [12/22/2004 9:36 AM] <gdaniels> So Ajith, what did you want to talk about re: M1? [12/22/2004 9:37 AM] <gdaniels> Hm [12/22/2004 9:37 AM] <gdaniels> maybe he did lose the connection again [12/22/2004 9:37 AM] <Chinthaka> Ajith, Srinath, Farhaan, u there ? [12/22/2004 9:38 AM] <farhaan> I am here, but Srinath & Co. seems to be having some problems at the University [12/22/2004 9:38 AM] <Srinath_> we are here :) [12/22/2004 9:38 AM] <gdaniels> darn Internet. [12/22/2004 9:38 AM] <Chinthaka> yeah Glen [12/22/2004 9:38 AM] <Ajith1030> hold on a sec till I align my name [12/22/2004 9:38 AM] <Chinthaka> this always happens in Wednesday nights after 8 [12/22/2004 9:39 AM] <Chinthaka> :( [12/22/2004 9:39 AM] <gdaniels> Oh, btw - I won't be on next week's chat as I'll be in London [12/22/2004 9:39 AM] <Srinath_> oops [12/22/2004 9:40 AM] <Chinthaka> "Glen in London", "Alek in Poland", "We in Srilanka, always" :D [12/22/2004 9:40 AM] <Srinath_> we can talk bit M1 .. [12/22/2004 9:41 AM] <Srinath_> shall we move the OM code to src ? [12/22/2004 9:41 AM] <gdaniels> Let's do the move to src after M1 [12/22/2004 9:41 AM] =-= Ajith1030 is now known as Ajith [12/22/2004 9:41 AM] <gdaniels> ? [12/22/2004 9:41 AM] <alek_s> Glen: would it make any difference if next week chat was 9AM (ie. 3PM in England)? [12/22/2004 9:41 AM] <gdaniels> build off prototype2 for now [12/22/2004 9:42 AM] <gdaniels> alek: Yes, that would work better [12/22/2004 9:42 AM] <Ajith> that will save us a whole lot of trouble :) [12/22/2004 9:42 AM] <alek_s> what is M1 exactly? [12/22/2004 9:42 AM] <alek_s> and what is "src" in this context? 
[12/22/2004 9:42 AM] <gdaniels> alek: src is the "real" src directory [12/22/2004 9:42 AM] <Srinath_> yes let's do it in p2 then [12/22/2004 9:42 AM] <Chinthaka> I think Farhaan sent an email [12/22/2004 9:42 AM] <gdaniels> i.e. not scratch/* but java/src [12/22/2004 9:42 AM] <Srinath_> alek ..src is the real tree [12/22/2004 9:42 AM] <Chinthaka> to mailing list abt M1 :( [12/22/2004 9:43 AM] <Srinath_> M1 = Milstone 1 I think ;) [12/22/2004 9:43 AM] <Srinath_> I think we can start revirwing the code now [12/22/2004 9:44 AM] <Srinath_> for everyhing [12/22/2004 9:44 AM] <gdaniels> reviewing? [12/22/2004 9:44 AM] <alek_s> did he send it to [Axis2]? i can not find this ... anyway i will find it [12/22/2004 9:44 AM] <Srinath_> review :) [12/22/2004 9:45 AM] <Ajith> ahum a review just now [12/22/2004 9:45 AM] <Ajith> ? [12/22/2004 9:45 AM] <Chinthaka> Alek : [12/22/2004 9:45 AM] <Srinath_> I put most of the things we argeed to p2 (expect movinf address logic to sender) [12/22/2004 9:45 AM] <Ajith> I am doing some major changes to the code so I need at least a day to clean the things up [12/22/2004 9:45 AM] <Chinthaka> [Axis2] Milestone Release & To Do's [12/22/2004 9:45 AM] <Srinath_> but there may be missing things [12/22/2004 9:46 AM] <alek_s> Chinthaka: thanks! [12/22/2004 9:46 AM] <Srinath_> e.g. ClientAPI added by Jaliya and Co last week :) [12/22/2004 9:47 AM] <Jaliya> Deepal and I have added both the sync and async paths for the client [12/22/2004 9:47 AM] <gdaniels> I think everyone should try to take a serious look at the code as it stands this week/weekend [12/22/2004 9:47 AM] <Jaliya> not the ideal async with a seperate listener but using callbacks [12/22/2004 9:47 AM] <Ajith> agreed for this weekend :) [12/22/2004 9:47 AM] <gdaniels> Jaliya: +1 [12/22/2004 9:47 AM] <Srinath_> +1 glen .. I try my best to clean up testCases ect by this week [12/22/2004 9:48 AM] <gdaniels> Srinath_: Did you build an infrastructure for doing client-server testing? [12/22/2004 9:48 AM] <farhaan> Alek:section 4 on Release01M1 needs updating though [12/22/2004 9:48 AM] <gdaniels> i.e. using ant's <parallel> task [12/22/2004 9:48 AM] <Srinath_> we all try to optimise thing till friday and try to freeze the code [12/22/2004 9:48 AM] <deepal123> im sorry im bit busy with deployment stuff [12/22/2004 9:48 AM] <Jaliya> we need to clarify that I guess, the testing of scenarios [12/22/2004 9:49 AM] |<-- JaimeM has left irc.freenode.net () [12/22/2004 9:49 AM] <Srinath_> glen: I did it using simple axis server started at the setup() for now [12/22/2004 9:49 AM] <Jaliya> Can we do it using ant or just using simple axis server [12/22/2004 9:49 AM] <gdaniels> Jaliya: Um, both. :) [12/22/2004 9:49 AM] <Jaliya> we are doing the same glen at the moment [12/22/2004 9:49 AM] <Srinath_> I think the <paralel> info is at the 1.x test .. will have a look [12/22/2004 9:49 AM] <gdaniels> 1.X doesn't use <parallel> [12/22/2004 9:49 AM] <gdaniels> although it could [12/22/2004 9:50 AM] <gdaniels> right now we use a custom task I think [12/22/2004 9:50 AM] <alek_s> Farhaan: how do you plan to do testing? [12/22/2004 9:51 AM] <Srinath_> I usually like to start alll in the setup of the TestCxase including simple AxisServer! [12/22/2004 9:51 AM] <alek_s> Farhaan: what features are to be tested? 
[12/22/2004 9:51 AM] <Chinthaka> gle [12/22/2004 9:51 AM] <Srinath_> glen:what does parellel do?g [12/22/2004 9:52 AM] <farhaan> Alek: we have already come up with unit tests and most probably we want to include some of the whitemesa tests next week [12/22/2004 9:52 AM] <alek_s> Farhaan:i think it would be useful to add this to wiki page [12/22/2004 9:52 AM] <gdaniels> The parallel task just runs things on multiple threads and syncs them [12/22/2004 9:53 AM] <alek_s> (in 3.6 Test Cases) [12/22/2004 9:53 AM] <farhaan> Alek: agreed, i will add the test matrix [12/22/2004 9:53 AM] <gdaniels> Here's what I'd like to see - there are a variety of client tests we're going to have, and they might want to run against a variety of servers (in-proc SimpleAxisServer, out-of-proc SimpleAxisServer, servlet engine, etc) [12/22/2004 9:54 AM] <Jaliya> glen : what is in-proc [12/22/2004 9:54 AM] <Srinath_> glen:you mean to set up a one or more servers and send all requests to them [12/22/2004 9:54 AM] <gdaniels> So there should be a generic "setup" which keys off config/switches that starts up anything it needs to (SimpleAxisServer), and then the client tests receive a base URL from that configuration which points them at the right place (either "local:" or "http:"...) [12/22/2004 9:54 AM] <gdaniels> Srinath_: I mean one server, but which one is dependent on config. [12/22/2004 9:55 AM] <Srinath_> glen:I got you [12/22/2004 9:55 AM] <gdaniels> typical cases are 1) servlet engine, so setup is external and we just need URL; 2) SimpleAxisServer needs to run with HTTP; 3) use local in-process server with "local:" transport [12/22/2004 9:55 AM] <alek_s> if you have SimpleAxisServer using JUNIT.setUp/teardown to start/stop server is easy [12/22/2004 9:56 AM] <gdaniels> alek: but we don't want to start/stop SAS for each test [12/22/2004 9:56 AM] <Srinath_> we can have both types ..in the junit test cases we can use setup(..) to bring up the servers [12/22/2004 9:56 AM] <gdaniels> We want to start one at the beginning of the run and then stop it after all tests [12/22/2004 9:56 AM] <Srinath_> but we can test samples using one server [12/22/2004 9:57 AM] <Chinthaka> Glen, one thing, what is in-proc and out-of-proc ? [12/22/2004 9:57 AM] <gdaniels> in-process, out-of-process [12/22/2004 9:57 AM] <alek_s> GLen: why not - those are unit tests [12/22/2004 9:57 AM] <gdaniels> Because it'll be sloooooow [12/22/2004 9:57 AM] <deepal123> glen with Async one we have some problems for testing [12/22/2004 9:57 AM] <alek_s> Glen: i do it all the time in XSUL - if you have really small server starting/stopping it is very fast ... [12/22/2004 9:57 AM] <Srinath_> glen:alek: I thik we should have both types [12/22/2004 9:57 AM] <alek_s> Glen: how slow is sloooooow :) [12/22/2004 9:58 AM] <gdaniels> alek: yes, but as the # of tests increases even a small incremental slowdown can snowball [12/22/2004 9:58 AM] <deepal123> that is in async case we start a new thread and assign work to that thread [12/22/2004 9:58 AM] <alek_s> i start server, bomabrd it with methods, shutdown when it is finished [12/22/2004 9:58 AM] <Srinath_> plus I have seen Geronimo brings up complete j2ee contianer in unit test setup() :) [12/22/2004 9:58 AM] <alek_s> i have pretty complicated async testing done like that [12/22/2004 9:58 AM] <Jaliya> + 1 for Glen one server for some tests [12/22/2004 9:59 AM] <gdaniels> I need to run - got a meeting..! 
[12/22/2004 9:59 AM] <gdaniels> Talk more on mailing list [12/22/2004 9:59 AM] <Chinthaka> oki [12/22/2004 9:59 AM] <gdaniels> bye all [12/22/2004 9:59 AM] <Srinath_> sure glen .. will put a mail regarding test [12/22/2004 9:59 AM] <Jaliya> but I think we have to write the test cases with some connections [12/22/2004 9:59 AM] <Chinthaka> but these days mailing list seems silent :) [12/22/2004 9:59 AM] |<-- gdaniels has left irc.freenode.net () [12/22/2004 10:00 AM] <alek_s> Srinath: i think it is what setup() is supposed to do in *unit* test... [12/22/2004 10:00 AM] <Chinthaka> I also got to go, byee all [12/22/2004 10:00 AM] -->| dims ([email protected]) has joined #apache-axis [12/22/2004 10:00 AM] <Chinthaka> today log ? [12/22/2004 10:00 AM] <alek_s> i have even bringing up two servers in unti tests and it worked just fine () [12/22/2004 10:00 AM] <alek_s> i will send it [12/22/2004 10:01 AM] <alek_s> you did it last week [12/22/2004 10:01 AM] =-= Chinthaka is now known as Chinthaka_away [12/22/2004 10:01 AM] <--| deepal123 has left #apache-axis | https://wiki.apache.org/ws/ChatAgenda/20041222/ChatLog?action=diff | CC-MAIN-2016-50 | refinedweb | 4,799 | 71.18 |
A factory for creating ID3v2 frames during parsing. More...
#include <id3v2framefactory.h>
Detailed Description
A factory for creating ID3v2 frames during parsing.
- Note
- You do not need to use this factory to create new frames to add to an ID3v2::Tag. You can instantiate frame subclasses directly (with new) and add them to a tag using ID3v2::Tag::addFrame()
- See also
- ID3v2::Tag::addFrame()
Constructor & Destructor Documentation
Constructs a frame factory. Because this is a singleton this method is protected, but may be used for subclasses.
Destroys the frame factory.
Member Function Documentation
Create a frame based on data. synchSafeInts should only be set false if we are parsing an old tag (v2.3 or older) that does not support synchsafe ints.
Create a frame based on data. tagHeader should be a valid ID3v2::Header instance.
Returns the default text encoding for text frames. If setTextEncoding() has not been explicitly called this will only be used for new text frames. However, if this value has been set explicitly all frames will be converted to this type (unless it's explicitly set differently for the individual frame) when being rendered.
- See also
- setDefaultTextEncoding()
After a tag has been read, this tries to rebuild some of that information, most notably the recording date, from frames that have been deprecated and can't be upgraded directly.
Set the default text encoding for all text frames that are created to encoding. If no value is set the frames with either default to the encoding type that was parsed and new frames default to Latin1.
Valid string types for ID3v2 tags are Latin1, UTF8, UTF16 and UTF16BE.
- See also
- defaultTextEncoding().
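A minimal usage sketch (the factory calls are from this page; everything else is an assumption):

// Make all new text frames default to UTF-8 instead of Latin1.
TagLib::ID3v2::FrameFactory *factory =
    TagLib::ID3v2::FrameFactory::instance();
factory->setDefaultTextEncoding(TagLib::String::UTF8);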
The documentation for this class was generated from the following file: | http://taglib.github.io/api/classTagLib_1_1ID3v2_1_1FrameFactory.html | CC-MAIN-2015-48 | refinedweb | 286 | 57.47 |
The official source of product insight from the Visual Studio Engineering Team
The Visual Studio 2010 SDK adds several project templates to Visual Studio that let you create and share custom controls. These are called extensibility projects. In addition to compiling the control, the extensibility project prepares it for publication by incorporating it into a VSIX extension. A VSIX control extension can be shared by publishing it to the Visual Studio gallery, or by sending it directly to interested developers.
There is currently no extensibility web control project template in the Visual Studio SDK, but you can make your own by following the steps in this walkthrough. The walkthrough is divided into two parts: creating and publishing the custom web control itself (this part), and building an extensibility project template from it (part 2).
On completion of the second part, you will have an extensibility project template that makes creating and publishing a web control a straightforward process.
In the first part, you create a custom web control and publish it to the Visual Studio Gallery.
To create and publish your custom web control when no project template is available, start with an existing extensibility control project and modify it to create an extensibility web control project.
1. Open Visual Studio and create a new Visual C#/Extensibility/Windows Forms Toolbox Control project named MyWebControls. Check Create directory for solution before clicking OK.
2. Add a new Visual C#/ASP.NET Server Control project named Temp to the solution.
1. Rename the file Temp/ServerControl1.cs to Temp/ColorTextControl.cs.
2. Delete the file MyWebControls/ToolboxControl.cs.
3. Copy the file Temp/ColorTextControl.cs to the MyWebControls project.
4. Remove the Temp project from the solution.
1. Add the following reference to the project: System.Web.
2. Remove the following reference from the project: System.Windows.Forms.
1. Open the file ColorTextControl.cs in the code editor.
2. Change the namespace to MyWebControls.
3. In the ToolboxData attribute, replace both occurrences of ServerControl1 with ColorTextControl.
These values specify the opening and closing tags generated for the control when it is dragged from the Toolbox into a web page at design time. They must match the name of the control class, which is also the name of the control that will appear in the toolbox.
4. Add a ProvideToolboxControl attribute to the control class. The first argument to this attribute is the name of the assembly, which is the same as the namespace when the project is generated. The color text control will appear in a category in the toolbox with this name.
The start of the control class source code now looks like this:
namespace MyWebControls
{
[DefaultProperty("Text")]
[ToolboxData("<{0}:ColorTextControl runat=server></{0}:ColorTextControl>")]
[ProvideToolboxControl("MyWebControls", false)]
public class ColorTextControl : WebControl
{
5. Replace the get method of the default Text property with this code:
get
{
String s = (String)ViewState["Text"];
return "<span style='color:green'>" + s + "</span>";
}
This surrounds the text with a span tag that colors it green.
6. Open the file Properties/AssemblyInfo.cs in the code editor.
7. Replace the AssemblyVersion Revision number with the “*” character.
[assembly: AssemblyVersion("1.0.0.*")]
Custom toolbox controls are cached and refreshed only when the assembly signature changes. By incrementing the assembly revision number with each build, the cache is always refreshed.
Before publishing your web control, test it out in an experimental instance of Visual Studio.
1. Press F5 to launch an experimental instance of Visual Studio.
2. From the Tools menu, select the Extension Manager. The MyWebControls extension appears and is enabled.
3. Close the Extension Manager.
4. Create a new ASP.NET web application project.
5. Open default.aspx in Source mode.
6. Open the toolbox. You should see ColorTextControl in the category MyWebControls.
7. Drag a ColorTextControl to the body of the web page.
8. Add a Text attribute with the value Think Green! to the ColorTextControl tag. The resulting tag should look like this:
<cc1:ColorTextControl runat="server" Text="Think Green!"></cc1:ColorTextControl>
9. Press F5 to launch the ASP.NET Development Server.
The ColorTextControl should render something like this: [Screenshot: the text "Think Green!" displayed in green]
10. Close the ASP.NET Development Server.
11. Close the experimental instance of Visual Studio.
Before you can publish your control to the Visual Studio Gallery, you need an icon and a screen shot. The icon you supply appears next to the extension in the Extension Manager.
1. If you have an icon image file, add it to your project as an existing item. Otherwise, add a new image file to your project and edit the image to represent the control. In this walkthrough, we use an icon called Color.bmp, which shows the Color letter “A” on a yellow field.
2. If you have a screenshot image file, add it to your project as an existing item. Otherwise, create a screenshot image file and add it to your project. In this walkthrough, we use a screen shot called ScreenShot.bmp, which shows the control as it appears in a web page.
3. Open the file source.extension.vsixmanifest for editing.
4. Change the Description to Color Text Web Control.
5. Set the Icon to Color.bmp and the Preview Image to ScreenShot.bmp.
6. Change the Author, Version, and so forth, as desired.
7. Save and close the manifest file, then rebuild the project as a Release build.
Publishing your web control
You are now ready to publish your web control to the Visual Studio Gallery.
1. Open your web browser and navigate to the Visual Studio Gallery.
2. Sign in with your account.
3. Click Upload to begin a new contribution.
4. The upload wizard appears.
5. In Step 1: Extension Type, select Control, then click Next.
6. In Step 2: Upload, select I would like to upload my control. The Select your control text box appears.
7. Click the Browse button and select the MyWebControls.vsix file located in the project bin/Release folder. Click Next.
8. In Step 3: Basic Information, you see the information you entered into the manifest editor. The description you entered appears in the Summary field.
9. Set the Category to ASP.NET and the Tags to toolbox, web control. You can enter a more detailed description at this time.
10. Read and agree to the Contribution Agreement, then type the text from the verification image into the text box.
11. Click Create Contribution. A warning appears that This page has not yet been published.
12. Click Publish.
13. Search the Visual Studio Gallery for MyWebControls. The MyWebControls extension listing appears.
Now that your web control is published, install it in Visual Studio and test it there.
1. Return to Visual Studio.
2. From the Tools menu, select Extension Manager.
3. Click Online Gallery, then search for MyWebControls. The MyWebControls extension listing appears.
4. Click the Download button. After the extension downloads, click the Install button. Your web control is now installed in Visual Studio.
5. Restart Visual Studio so that the newly installed extension is loaded.
6. Create a new ASP.NET web application project.
7. Open default.aspx in Source mode.
8. Open the toolbox. You should see ColorTextControl in the category MyWebControls.
In the next part of this walkthrough, you will create your web control from a project template. In preparation, delete your web control.
1. Return to your web browser.
2. Click the My Contributions link in the upper left-hand corner. The MyWebControls listing appears.
3. Click Delete to remove your web control until you upload it again.
4. Return to Visual Studio.
5. From the Tools menu, select Extension Manager.
6. Select MyWebControls, then click Uninstall.
7. Restart Visual Studio to complete the uninstall process.
This is the end of part one of this walkthrough. Part two shows you how to create and publish an extensibility web control project template based on the web control project you created above. To continue, please see Walkthrough: Publishing an Extensibility Web Control Project Template (Part 2 of 2).
Short Bio: Martin Tracy is a senior programmer writer for the Visual Studio Project and Build team. As part of VS 2010, he has documented numerous features of the MSBuild system and Visual Studio platform. His long term focus is on developing infrastructure to support. | http://blogs.msdn.com/b/visualstudio/archive/2010/05/25/walkthrough-publishing-a-custom-web-control-part-1-of-2.aspx | CC-MAIN-2014-52 | refinedweb | 1,274 | 60.82 |
0
Hello. I need to write a program to evaluate a definite integral with Simpson's composite rule.
I know there is a Simpson's rule available in SciPy, but I really need to write it using the following scheme.
$$\int_a^b f(x)\,dx \approx \frac{h}{3}\left[f(x_0) + \sum_{k=1}^{2n-1} c_k\, f(x_k) + f(x_{2n})\right], \qquad h = \frac{b-a}{2n}, \quad x_k = a + kh,$$
where a, b are the limits of the integral and n is the number of intervals of the integral's partition, with
$$c_k = 2 \quad \text{for } k \text{ even}$$
and
$$c_k = 4 \quad \text{for } k \text{ uneven (odd).}$$
I already got this far (see below), but my answer is not correct. The output of my code is 0.07, but the answer from WolframAlpha, which uses the identical algorithm, is 0.439818.
[WolframAlpha result: 0.439818]
What should I change in my code? Any help will be appreciated.
#include <iostream.h>
#include <math.h>
#include <conio.h>

using namespace std;

double f(double x)
{
    return ((0.6369)*sin(sin(x))*sin(x));
}

int main()
{
    double a = 0.0;
    double b = 1.57;
    double n = 2.0;
    double eps = 0.1;
    double h = (b-a)/(2*n);
    double s = f(a) + f(b);
    int k = 1;
    int x;
    double i;
    while (k>(2*n)-1)
    {
        x = a+h*k;
        s = s+4*f(x)+2*f(x+h);
        k = k+2;
    }
    i = s*h/3;
    cout << "integral is equal " << i;
    cout << "\nPress any key to close";
    getch();
    return 0;
}
Edited 2 Years Ago by Auroch: improving | https://www.daniweb.com/programming/software-development/threads/487611/simpson-s-rule | CC-MAIN-2016-50 | refinedweb | 216 | 77.33 |
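For reference: the loop above never executes, because the condition while (k>(2*n)-1) is false on the first pass (k = 1 is not greater than 3), so the program returns h/3 * (f(a) + f(b)) ≈ 0.07. The condition should be k <= (2*n)-1, and x should be a double rather than an int. A corrected sketch of the same scheme (not the original poster's code):

#include <cmath>
#include <iostream>
using namespace std;

double f(double x)
{
    return 0.6369 * sin(sin(x)) * sin(x);
}

int main()
{
    const double a = 0.0;
    const double b = 1.57;
    const int n = 2;                       // 2n subintervals
    const double h = (b - a) / (2 * n);

    double s = f(a) + f(b);
    for (int k = 1; k <= 2 * n - 1; ++k)   // interior nodes x_k = a + k*h
    {
        double x = a + k * h;              // double, not int, so x is not truncated
        s += (k % 2 == 1 ? 4.0 : 2.0) * f(x);
    }

    cout << "integral is equal " << s * h / 3.0 << endl;   // prints ~0.4398
    return 0;
}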
I am trying to collect live streaming bitcoin data into Azure Event Hub. Below is the code showing how we can do it on a local machine. How can I use this code to collect the stream into Azure Event Hub? All examples and documents provided by Microsoft are unclear. import logging import websocket, json cc = ..
Category : websocket
I am trying to access a websocket to listen to some information, so I want to use the run_forever command to keep listening until I stop the program, but when I do I get the error below. Could someone tell me what I am doing wrong? Code: loop = asyncio.get_event_loop() async def listen(): url ..
Is there a way to automatically remove older rows of data when new rows are added so as to maintain a constant row count when writing to csv files using python? Are there any other data structures or data storing solutions that can be used for this purpose? I am storing streaming data from websockets ..
The code is very simple; a sanitized version is given below. It works fine on my Windows PC, but I get the error "[Errno 104] Connection reset by peer" when running on my CentOS server. It was actually running properly for over a year on my CentOS server, but from yesterday on, it constantly throws the 104 error. ..
I’m trying to create a text game in Python using Websockets for the networking part. Everything works great using the documentation provided. However I need to create a Webserver class in order to be able to inject the connection instead of passing it as an argument in every function that exists in the flow. That ..
import websocket  # pip install websocket-client
import json
import threading
import time

token_file = open("token.txt", "r")
token_read = token_file.read()
token_list = token_read.split()
token_file.close()
print(token_list)

def send_json_request(ws, request):
    ws.send(json.dumps(request))

def recieve_json_response(ws):
    response = ws.recv()
    if response:
        return json.loads(response)

def heartbeat(interval, ws):
    print('Heartbeat begin')
    while True:
        time.sleep(interval)
        heartbeatJSON = {
            "op": 1,
            "d": "null"
        }
        send_json_request(ws, heartbeatJSON)
        print("Heartbeat ..
I am using the websocket library in Python and I am new to this. I want to create multiple different connections to websockets. This happens through my custom WebsocketProcess class which opens the connection, receives the event, keeps a record Id and then calls an API to grab the information for this particular record. However, ..
A live data stream (websocket): stream = websocket.buffer(). The output needs to be a list made of 1-minute sublists. For example, if in the 1st minute we received stream1, stream2 and stream3, in the 2nd minute we received only stream4, and in the 3rd minute we received another 5 streams etc., the output should look like this ..
I am trying to create a email verifier api but for most of domains I not able to reach guareentee ..
I am trying to create an email verifier API, but for most domains I am not able to reach the ..
Recent Comments | https://askpythonquestions.com/category/websocket/ | CC-MAIN-2021-31 | refinedweb | 499 | 56.86 |
24 February 2012 14:28 [Source: ICIS news]
LUDWIGSHAFEN, Germany (ICIS)--BASF will increase its research and development (R&D) spending to €1.7bn ($2.3bn) in 2012, a rise of more than €100m from 2011, the chairman of the German petrochemical major said on Friday.
Speaking at BASF’s annual financial press conference, Kurt Bock said that the company will further integrate sustainability into its business with innovation as the key to its growth.
“[In 2011] we continued to lay down the foundation for future profitable growth by increasing expenditures for research and development in 2011 by 7.6% to €1.6bn,” Bock said.
“In 2012, we plan to increase our global research and development expenditures to €1.7bn,” he added.
Investments into R&D increased by €113m year on year in 2011 to €1.6bn, with a third spent on projects for increased energy efficiency and climate protection, BASF said.
Of the €1.6bn, 8% was spent on the chemicals segment, with 9% on plastics and 1% on oil and gas, it added.
"Our research pipeline included approximately 2,800 projects in 2011. We aim to achieve sales of around €30bn in 2020 with innovations - new and improved products or applications that have been on the for 10 years or less," BASF said.
BASF announced earlier in February that it is investing €24.8m to expand its R&D facilities in ?xml:namespace>
The company has also started to build a global R&D and technology centre in
The company said that in 2011, the number of employees involved in R&D rose to around 10,100 compared with 9,600 in 2010.
Earlier on Friday, BASF reported a 2.8% year-on-year increase in its fourth-quarter 2011 net profit to €1.13bn on the back of strong revenue | http://www.icis.com/Articles/2012/02/24/9535810/basf-to-increase-r.html | CC-MAIN-2014-42 | refinedweb | 303 | 65.83 |
This section we will learn how to use multiply two number. A class consists of a collection of types of encapsulation instance variable and type of method with implementation of those types class that can used to create object of the class. The class define two public for declare class name and second instance variable define. Here this example of java program also provide complete illustration with source code.
Description this program
In this program we are going to use two any number for multiply. First all of we have to define class named "Multiply" . Then declare the class object .Here we are going to declare class for instance variable. We have to create one object for values passes. This program also provide two static variable for main class or local public class. The instance type variable define charactor type name, width, height, and length. After we are going to use make a new object and then again create integer type values. Therefore this program performs by the using " val=mul.width*mul.height*mul.length;" . So that this program will be display massage by using "System.out.println("Multiply number is="+ val )" method.
Here is the code of this program
Ask Questions? Discuss: Multiplication of Two Number in Class
Post your Comment | http://www.roseindia.net/java/java-conversion/multiplication-of-two-number-in.shtml | CC-MAIN-2013-48 | refinedweb | 213 | 67.76 |
Hi,
I'm working with 4D integer matrices and need to compute std() on a
given axis but I experience problems with excessive memory consumption.
Example:
---
import numpy
a = numpy.random.randint(100,size=(50,50,50,200)) # 4D randint matrix
b = a.std(3)
---
It seems that this code requires 100-200 Mb to allocate 'a'
as a matrix of integers, but requires >500Mb more just to
compute std(3). Is it possible to compute std(3) on integer
matrices without spending so much memory?
I manage 4D matrices that are not much bigger than the one in the example
and they require >1.2Gb of ram to compute std(3) only.
Note that quite all this memory is immediately released after
computing std() so it seems it's used just internally and not to
represent/store the result. Unfortunately I haven't all that RAM...
Could someone explain/correct this problem? | https://mail.python.org/archives/list/[email protected]/message/53JBBDK6KDBBUTMMFAXJXEIZSDC2ALMW/attachment/2/attachment.htm | CC-MAIN-2022-33 | refinedweb | 153 | 57.98 |
The RSA Cryptosystem is a method of encryption wherein the security of any encrypted message stems from the difficulty in factoring large numbers into their primes. The cryptosystem takes its name from its inventors Rivest, Shamir and Adleman. It is one of the first public-key cryptosystems and is widely cited when explaining the paradigm of public key cryptography.
Public-Key Cryptography
In public key cryptosystems (also known as asymmetrical cryptography), members have a private key as well as a public key. Their private key is for them alone (kept secret) whereas the public key is published for all to see. This public-private split maintains secrecy and security whilst enabling anyone to send a message to the publisher of the public key. In particular, anyone can encrypt using a public key but private keys are necessary for decryption and forming public keys.
Public-key cryptography also has applications in authentication since the public key is derived from one’s private key and hence methods can be designed to prove that you must be the owner of that key.
The process of generating public keys and sharing them has a realm of cryptography all to itself. The sharing of private and composite keys over open channels is usually done using the Diffie-Hellman key exchange method.
Number theory background
Finding Prime Factors
The security of any encryption comes from the notion of one-way functions or operations that are easy to do yet near impossible to undo without information on how the initial calculation was made. This means that encryption is efficient and attacks will be unsuccessful.
The security of RSA encryption comes from the “factoring problem” – it is almost impossible (with current technology) to factor a large number into its prime factors. However, it is simple to calculate the product of a set of prime numbers.
Whilst it has not been proven that no efficient factorisation algorithm exists, no efficient, non-quantum algorithm is currently known. In 2009, an attempt to factor a 232-digit number using many hundreds of computers took two years to find the prime factors (Kleinjung; et al. 2010).
Greatest Common Divisors (GCDs)
The greatest common divisor (GCD) or largest common factor of two integers a and b, often denoted as GCD(a, b) is the largest integer x which divides both a and b.
If the GCD of two numbers is 1, i.e. the two integers share no common factors other than the trivial factor of 1, then the integers are referred to as coprime.
The GCD of two values can be found using Euler’s algorithm:
Euler's Function
Euler’s function is a function that tells us how many values in the range from 0 to the argument are coprime to the argument. It is denoted by the Greek letter φ.
For a generic value of n, φ(n) is given by the equation below:
where p|n denotes the distinct prime divisors of n.
A frequently used special case is then n is the product of distinct primes. Let's say n = p1 × p2 × ... × pk. Then, Euler’s function can be written as follows:
Examples
For example the prime factorization of 52 is 22 × 13 and hence the value of Euler's function (and thus the number of integers coprime to 52) is given by:
In the case where the prime factorisation only consists of unique primes we may apply the alternative form of Euler's equation. Take 1147 which has a prime factorisation of 1147 = 37 × 31. The number of values coprime to this is given by:
You can check that this is equivalent to the fractional formation of Euler's function in this case and generally.
Euler's Theorem
If a and n are coprime so that GCD(a, n) = 1 then Euler’s theorem states the following:
This means that when working modulo n we may work modulo φ(n) in exponents. For example, a5φ(n)+3 ≡ a3 (mod n).
Encryption and Decryption
Engaging in a conversation using RSA encrypted messages follows the following algorithm. In this example, Alice is the sender, and Bob the desired recipient.
Generating the public and private keys (setup)
First, Bob generates his private and public keys (and shares the public key). This allows anyone to send messages to Bob.
- Bob chooses large prime numbers. Let these prime numbers be p and q.
- Bob computes n = p × q
- Bob then chooses an encryption key e such that e is coprime to n, that is, the greatest common divisor of n and e is 1. This is necessary to ensure that e has a unique inverse modulo φ(n), this inverse is the decryption key d.
- Bob then sends the pair which forms the pubic key (n, e) to Alice but keeps p, q and d private. Alice does not need to know p, q or d to send a message or perform encryption. The primes are only needed to find the decryption key d and hence knowledge of them enables decryption. This is the essence of a public key cryptosystem, anyone can encrypt using a public key but private keys are necessary for decryption and forming public keys.
- Bob computes the decryption key d where d ≡ e-1 (mod φ(n)) i.e. mod (p – 1)(q – 1). This requires knowledge of the private keys p and q and is performed using the extended Euclidean algorithm. The private decryption key d is kept secret. Knowledge of d and e is sufficient to factor n to obtain p and q. Knowledge of p and q is sufficient to find d.
Example
Suppose that Bob chooses primes 37 and 31 and therefore calculates n = 37 × 31 = 1147. He then chooses an encryption key e = 7 such that GCD(n, e) = GCD(1147, 7) = 1. n and e are then published.
The decryption key d is then given by 463. You can check that 7 * 463 ≡ 3241 ≡ 1 (mod 1080). Calculating modular inverse uses the extended euclidean algorithm to find x and y satisfying ax - my = 1. Then, x is modular inverse of a (mod m). Python code for calculating modular inverse is included below.
# the extended euclidean algorithm# given a, b => returns g = gcd(a, b) and x, y such that a*x-b*y=1def egcd(a, b):if a == 0:return (b, 0, 1)else:g, y, x = egcd(b % a, a)return (g, x - (b // a) * y, y)# mod inverse using extended euclidean algorithmdef modinv(a, m):g, x, y = egcd(a, m)if g != 1:raise Exception('modular inverse does not exist')else:return x % mprint modinv(7, 1080) # outputs 463
Sending messagesSending messages
Once the set-up is complete, messages can be sent as follows:
- Alice writes her message and coverts it to a number m. For example, this can be done by converting letters to numeric values (A = 01, B = 02, …). Note that no character is mapped to zero as zeros will be lost in the encryption if they appear at the start of messages or message segments. If m is larger than n then the message is broken into blocks and sent in segments.
- Alice encrypts m to the ciphertext c by the equivalence: c ≡ me (mod n) and sends c to Bob.
- Upon receiving the encrypted message c Bob decrypts it as follows: m ≡ cd (mod n). This is true since, cd (mod n) ≡ med (mod n) ≡ m1 (mod n), using ed ≡ 1 (mod φ(n)) and Euler's theorem.
Continued Example
Suppose for simplicity that Alice wishes to send the message AABB which is then encoded numerically to 1122. The encryption leads to:
Bob then receives this and decrypts the message to attain.
Bob may then convert the numerical code back to text to get the original message AABB.
Attacks and Security
The algorithm is widely used and very secure. There are no known consistently effective attacks on the system with current technology. The necessity of keeping d and the prime numbers used to construct n (usually denoted p and q) secret are the greatest risk to security, which is usually a human and social problem, i.e. one might accidentally reveal them or be tricked into doing so.
Below, we'll discuss various aspects related to RSA security in more detail, including some possible attacks and minor changes required in the RSA algorithm to circumvent these attacks.
Importance of large primes and other values
Suppose than an attacker, Eve, intercepts the communications and thus obtains n, e and c. Eve does not know m, d, p or q. Due to the near impossibility of factoring n to find p and q we assume that Eve has no way to factor n given n, e and c and that a brute force approach is impractical due to n, p, q, e, d, m and c being suitably large.
Eve has knowledge of the RSA protocol and hence knows that:
Eve knows c, e and n so why shouldn’t she find m by taking the eth root of m mod n? This is very difficult due to n being large. The only way to solve this root problem is by trying each possible value of e or d in the absence of knowing the decryption key d ≡ e-1 (mod φ(n)). Finding this key requires knowledge of p and q.
Bob’s choice of p and q affects the security of the cryptosystem. These primes should be large (with at least 100 digits for practical use) and chosen randomly and independently. Moreover, to protect against factoring using the ‘p – 1 technique’ one should check that both p – 1 and q – 1 have at least one large prime factor since if these values only have small prime factors it introduces vulnerabilities to the system. (See Footnote 1 for more details).
The choice of e should also be a large number to increase security. One possible way of choosing e is to take e to be a moderately large prime which is also likely to ensure that GCD(e, (p – 1)(q – 1)) = 1 so that the decryption key d is known to exist before finding it.
Short plaintext attack
RSA is commonly used to send short messages including encryption keys for other algorithms such as the Data Encryption Standard (DES) or Advanced Encryption Standard (AES) encryption algorithms. This means that encryption must be engaged in carefully as smaller messages can lead to possible vulnerabilities. Consider sending the message m ≈ 1017, an AES encryption key say. Let m be encrypted using RSA as below using a value of n with approximately 200 digits.
This means that c likely has approximately 200 digits like n, despite the small size of m (so long as e is sufficiently large). Eve intercepts this ciphertext and now knows c, n and e. These are public knowledge. In an attack on the system Eve makes two lists:
- cx-e (mod n) for all x such that 1 ≤ x ≤ 109
- ye (mod n) for all y such that 1 ≤ y ≤ 109
Eve then looks for matches in the first and second list. If one is found she has:
and hence can deduce:
Although this will not always work, it demonstrates that Eve will be able to get the original message (whilst not entirely breaking the cryptosystem by factoring n or finding d) when the message is equal to the product of two values x and y both smaller than 109. Many values of m will satisfy this and hence this shows the vulnerability of the system when sending small messages. One way to avoid such an attack is to arbitrarily make the message longer by appending and/or prepending a number of random digits to the message m.
This meet-in-the-middle attack is much more efficient than trying all of the 1017 possible values for m. This attack requires the computation of the two lists so 2 × 109 calculations rather than 1017. For more on this attack see Boneh, Joux and Nguyen (2000).
Algorithm summary
The encryption key of the system is given by (e, d, n) where ed ≡ 1 (mod φ(n)) and e and n are public whereas d is private. e is the encryption key and d is the decryption key, where arithmetic is performed modulo n. The larger the values used in constructing keys the more secure the cryptosystem at the cost of slightly slower computation times.
Setup:
- Bob chooses secret primes p and q and calculates n = pq.
- Bob chooses the encryption key e with GCD(e, (p – 1)(q – 1)) = 1.
- Bob computes d such that de ≡ 1 (mod (p – 1)(q – 1)).
- Bob makes n and e public while keeping d, p and q private.
Sending messages:
- Alice encrypts the message m as c ≡ me (mod n) and sends c to Bob.
- Bob decrypts the ciphertext c by computing m ≡ cd (mod n).
Conclusion
This set of notes has considered the RSA cryptosystem and introduced the idea of public-key cryptosystems. The security of RSA encryption is derived from the near impossibility of prime factorization of large numbers.
This set of notes has also introduced concepts useful across cryptography and number theory such as Euler’s function, Euler’s theorem and the concept of greatest common divisors.
Footnotes
- The p – 1 factoring algorithm: Choose an integer a > 1 (often a = 2). Choose a bound B. Compute b ≡ aB| (mod n). This is done as follows: let b1 ≡ a (mod n) and bj ≡ bj-1j (mod n). Then bB ≡ b (mod n). Let d = GCD(b – 1, n). if 1 < d < n , we have found a non-trivial factor of n, namely b – 1.
Bibliography
- Boneh, Joux and Nugyen (2000): D. Boneh, A. Joux and P. Nguyen, “Why textbook ElGalal and RSA encryption are insecure”, Advances in Cryptography – ASIACRYPT 2000, Lecture notes in Computer Sceince, 1976, Springer-Vering, 2000 pp. 30 – 43.
- Kleinjung; et al. (2010):rey Timofeev, and Paul Zimmermann, “Factorization of a 768-bit modulus”, International Association for Cryptologic Research, 2010
- This article also draws on the textbook: “Introduction to Cryptography with Coding Theory” by Wade Trappe and Laurence C. Washington. | https://www.commonlounge.com/discussion/d27b7dc0a89348c7b95191781e445f0c | CC-MAIN-2020-24 | refinedweb | 2,369 | 60.95 |
Opened 10 years ago
Closed 3 years ago
#1976 closed Feature Requests (fixed)
Inverse function for complete
Description
As mentioned in the '[boost] [filesystem] "leaf"' thread, complete is the only path composition function without a corresponding decomposition function..
I'd like to call it relative, but that conceptually conflicts with the relative_path member decomposition function. Perhaps the member function could be changed to local_path(), or something.
A discussion will need to be held to determine expected behaviour in the presence of symlinks, since root/foo/bar/.. is not always root/foo.
Change History (16)
comment:1 Changed 10 years ago by
comment:2 Changed 10 years ago by
Here's a naive implementation (that doesn't handle symlinks and isn't well tested):
#include <boost/filesystem.hpp> boost::filesystem::path naive_uncomplete(boost::filesystem::path const path, boost::filesystem::path const base) { if (path.has_root_path()){ if (path.root_path() != base.root_path()) { return path; } else { return naive_uncomplete(path.relative_path(), base.relative_path()); } } else { if (base.has_root_path()) { throw "cannot uncomplete a path relative path from a rooted base"; } else { typedef boost::filesystem::path::const_iterator path_iterator; path_iterator path_it = path.begin(); path_iterator base_it = base.begin(); while ( path_it != path.end() && base_it != base.end() ) { if (*path_it != *base_it) break; ++path_it; ++base_it; } boost::filesystem::path result; for (; base_it != base.end(); ++base_it) { result /= ".."; } for (; path_it != path.end(); ++path_it) { result /= *path_it; } return result; } } }
comment:3 Changed 10 years ago by
I would like this functionality to be added as well. My use case is similar to the first one mentioned (wanting to persist relative paths).
comment:4 Changed 9 years ago by
Getting paths relative to an arbitrary directory is a very useful feature. My use case is the same at the first one (it is an IDE which needs to store paths relative to project file).
comment:5 Changed 9 years ago by
I would like this for a media application which should save lines in an M3U playlist as filepaths relative to the playlist path.
comment:6 Changed 8 years ago by
This would be useful for me as well.
comment:7 Changed 7 years ago by
This feature would be very useful for us, too. In a use case similar to the first one.
comment:8 Changed 7 years ago by
Is there any plans to include this function is boost? Maybe under name relativize?
comment:9 Changed 7 years ago by
comment:10 Changed 6 years ago by
I'm interested in this feature too.
comment:11 Changed 6 years ago by
comment:12 Changed 5 years ago by
There a quite a few links on the internet linking here. This would be a great feature. I'd wonder how to make this portable though with drive letters on windows, reparse points, symlinks on *nix...
comment:13 Changed 4 years ago by
There's an example at , it relies only on lexicographic compare so doesn't handle links etc, but on the other hand I think the expected behavior of such a function is that it's entirely lexicographic on the path.
comment:14 Changed 4 years ago by
+1 for this as well
comment:15 Changed 3 years ago by
I also need this!
comment:16 Changed 3 years ago by
An additional use case for this came up on the users list: | https://svn.boost.org/trac10/ticket/1976 | CC-MAIN-2018-34 | refinedweb | 545 | 64.91 |
On Thu, Nov 02, 2006 at 10:19:49PM -0500, Rici Lake wrote: > It's not going to lurk, by the way. In my experience, it simply > segfaults when you call luaopen_io. It might be nice to have a better > error message :) but I don't believe that it will leave any sort of > lurking poisoned chalice about. I could be wrong, though... I haven't > tested it on more than a few systems, and that accidentally. This did lurk for me; I only noticed this when I read the earlier message and noticed the API change from 5.0 that I'd overlooked in the change list. I don't load the I/O library, since all I/O goes through my own functions (to use the engine's VFS, to let go of my monolithic Lua lock while the file access is taking place so other threads can use Lua in the meantime, etc). > If you just use luaL_openlibs() from linit.c, customising the library > list according to your needs, then you avoid the whole problem. (Or if > you create a custom version which accepts a luaL_Reg structure as a > second parameter.) Assuming that the library writers don't follow my > advice and do pollute the global namespace with their libraries.. > I think the following will implement the function you're looking for: > > lua_Debug ar; > if (lua_getstack(L, 0, &ar) = 0) { > printf("No call frame available"); > exit(1); /* Can't call lua_error() */ > } lua_error should work, ending up in the panic function. (That won't print an error if no panic function was set, but neither would any other errors, eg. OOM.) -- Glenn Maynard | http://lua-users.org/lists/lua-l/2006-11/msg00059.html | crawl-001 | refinedweb | 275 | 69.82 |
#include <db.h>
int DB_ENV->remove(DB_ENV *dbenv, char *db_home, u_int32_t flags);
The DB_ENV->remove method destroys a Berkeley DB environment if it is not currently in use. The environment regions, including any backing files, are removed. Any log or database files and the environment directory are not removed..
In multithreaded applications, only a single thread may call DB_ENV->remove.
A DB_ENV handle that has already been used to open an environment should not be used to call the DB_ENV->remove method; a new DB_ENV handle should be created for that purpose.
After DB_ENV->remove has been called, regardless of its return, the Berkeley DB environment handle may not be accessed again.
The DB_ENV->remove method returns a non-zero error value on failure and 0 on success.Parameters
When using a Unicode build on Windows (the default), the db_home argument will be interpreted as a UTF-8 string, which is equivalent to ASCII for Latin characters.flags
The DB_ENV->remove method may fail and return one of the following non-zero errors: | http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/env_remove.html | crawl-002 | refinedweb | 173 | 53.51 |
There exist good implementations of Faster R-CNN yet they lack support for recent ConvNet architectures. The aim of reproducing it from scratch is to fully utilize MXNet engines and parallelization for object detection.
[1] On Ubuntu 14.04.5 with device Titan X, cuDNN enabled. The experiment is VGG-16 end-to-end training.
[2] VGG network. Trained end-to-end on VOC07trainval+12trainval, tested on VOC07 test.
[3] VGG network. Fast R-CNN is the most memory expensive process.
[4] VGG network (parallelization limited by bandwidth). ResNet-101 speeds up from 2 img/s to 3.5 img/s.
[5] py-faster-rcnn does not support ResNet or recent caffe version.
The above experiments were conducted at mx-rcnn using a MXNet fork, based on MXNet 0.9.1 nnvm pre-release.
bash script/additional_deps.sh
bash script/get_voc.sh
bash script/get_pretrained_model.sh
bash script/vgg_voc07.sh 0,1(use gpu 0 and 1)
See if
bash script/additional_deps.sh will do the following for you.
HOMErepresents where this file is located. All commands, unless stated otherwise, should be started from
cython easydict matplotlib scikit-image.
pythontype
import mxnetto confirm.
makein
Command line arguments have the same meaning as in mxnet/example/image-classification.
prefixrefers to the first part of a saved model file name and
epochrefers to a number in this file name. In
model/vgg-0000.params,
prefixis
"model/vgg"and
epochis
0.
begin_epochmeans the start of your training process, which will apply to all saved checkpoints.
export MXNET_CUDNN_AUTOTUNE_DEFAULT=0.
final-0000.paramsin
HOMEthen use
--prefix final --epoch 0to access it.
python demo.py --prefix final --epoch 0 --image myimage.jpg --gpu 0 --vis. Drop the
--visif you do not have a display or want to save as a new file.
The following tutorial is based on VOC data, VGG network. Supply
--network resnet and
--dataset coco to use other networks and datasets. Refer to
script/vgg_voc07.sh and other experiments for examples.
See
bash script/get_voc.sh and
bash script/get_coco.sh will do the following for you.
datain
datafolder will be used to place the training data folder
VOCdevkitand
coco.
VOCdevkitfolder in
HOME/data.
coco/imagesand annotation jsons to
data/annotations.
(Skip this if not interested) All dataset have three attributes,
image_set,
root_path and
dataset_path.
image_setcould be
2007_trainvalor something like
2007trainval+2012trainval.
root_pathis usually
data, where
cache,
selective_search_data,
rpn_datawill be stored.
dataset_pathcould be something like
data/VOCdevkit, where images, annotations and results can be put so that many copies of datasets can be linked to the same actual place.
See if
bash script/get_pretrained_model.sh will do this for you. If not,
modelin
modelfolder will be used to place model checkpoints along the training process. It is recommended to set
modelas a symbolic link to somewhere else in hard disk.
vgg16-0000.paramsfrom MXNet model gallery to
modelfolder.
resnet-101-0000.paramsfrom ResNet to
modelfolder.
See if
bash script/vgg_alter_voc07.sh 0 (use gpu 0) will do the following for you.
python train_alternate.py. This will train the VGG network on the VOC07 trainval. More control of training process can be found in the argparse help.
python test.py --prefix model/final --epoch 0after completing the training process. This will test the VGG network on the VOC07 test with the model in
HOME/model/final-0000.params. Adding a
--viswill turn on visualization and
-hwill show help as in the training process.
See if
bash script/vgg_voc07.sh 0 (use gpu 0) will do the following for you.
python train_end2end.py. This will train the VGG network on VOC07 trainval.
python test.py. This will test the VGG network on the VOC07 test.
See if
bash script/get_selective.sh and
bash script/vgg_fast_rcnn.sh 0 (use gpu 0) will do the following for you.
scipyis used to load selective search proposals.
datafolder.
script/get_selective_search.shwill do this.
python -m rcnn.tools.train_rcnn --proposal selective_searchto use the selective search proposal.
python -m rcnn.tools.test_rcnn --proposal selective_search.
script/vgg_fast_rcnn.shwill train Fast R-CNN on VOC07 and test on VOC07test.
Region Proposal Network solves object detection as a regression problem from the objectness perspective. Bounding boxes are predicted by applying learned bounding box deltas to base boxes, namely anchor boxes across different positions in feature maps. Training process directly learns a mapping from raw image intensities to bounding box transformation targets.
Fast R-CNN treats general object detection as a classification problem and bounding box prediction as a regression problem. Classifying cropped region feature maps and predicting bounding box displacements together yields detection results. Cropping feature maps instead of image input accelerates computation utilizing shared convolution maps. Bounding box displacements are simultaneously learned in the training process.
Faster R-CNN utilize an alternate optimization training process between RPN and Fast R-CNN. Fast R-CNN weights are used to initiate RPN for training. The approximate joint training scheme does not backpropagate rcnn training error to rpn training.
This repository provides Faster R-CNN as a package named
rcnn.
rcnn.core: core routines in Faster R-CNN training and testing.
rcnn.cython: cython speedup from py-faster-rcnn.
rcnn.dataset: dataset library. Base class is
rcnn.dataset.imdb.IMDB.
rcnn.io: prepare training data.
rcnn.processing: data and label processing library.
rcnn.pycocotools: python api from coco dataset.
rcnn.symbol: symbol and operator.
rcnn.tools: training and testing wrapper.
rcnn.utils: utilities in training and testing, usually overloads mxnet functions.
This repository used code from MXNet, Fast R-CNN, Faster R-CNN, caffe, tornadomeet/mx-rcnn, MS COCO API.
Training data are from Pascal VOC, ImageNet, COCO.
Model comes from VGG16, ResNet.
Thanks to tornadomeet for end-to-end experiments and MXNet contributers for helpful discussions.
History of this implementation is:
mxnet/example/rcnn was v1, v2, v3.5 and now v5. | https://apache.googlesource.com/incubator-mxnet/+/refs/heads/v0.12.0/example/rcnn | CC-MAIN-2021-43 | refinedweb | 967 | 54.18 |
News & Announcements
Modern, faster Netlify Functions
We've been working on a series of improvements to Netlify Functions that will give developers better and faster serverless workflows. In the spirit of sharing progress earlier and more often, we want to let you in while we're still developing this feature and use your feedback to create the best possible experience.
A better, faster bundler
When you publish a JavaScript function, our build system does some processing to package your code and its dependencies into a self-contained, deployable artifact. This is known as bundling.
This process involves some optimizations to ensure that your functions are as small and fast as possible, like discarding any code that doesn’t contribute to the output of the program (dead code elimination), as well as filtering the dependencies — and the subset of their files — that your function actually needs (tree shaking).
With the latest release of our function bundler, we’re starting to use esbuild under the hood to handle some parts of this. It also includes an additional step of inlining, where your function code and its dependencies are physically merged into a single file.
For you, this means a faster bundling process and smaller, more performant functions.
How to enable the new bundler
The new bundler will be enabled for all projects during the week of May 17, but you can choose to opt-in right now to test the new functionality in public beta and take advantage of the performance improvements immediately.
We've added a new
node_bundler property to the
netlify.toml configuration file. To enable the new bundler, set its value to
esbuild.
[functions]
node_bundler = "esbuild"
We're working hard to make sure that your functions automatically work with the new bundler, with no changes required from you. Still, such a fundamental change to the bundling engine creates the risk for some edge cases that might need your attention.
To account for this, we've introduced an advanced configuration section.
Advanced configuration
You can define a list of modules that should be copied to the generated function artifact with their source and references untouched, skipping the inlining and tree-shaking stages. This is useful for handling dependencies that can’t be inlined, such as modules with native addons.
This is done with the
external_node_modules property, which you can apply to all functions, or filter some them by name using a wildcard pattern.
# All functions
[functions]
external_node_modules = ["module-one", "module-two"]
# Functions with a name starting with "my-function-*"
[functions."my-function-*"]
external_node_modules = ["module-three", "module-four"]
# A function named "my-function-1"
[functions.my-function-1]
external_node_modules = ["module-five", "module-six"]
We’ll continue to work on detecting and handling these cases automatically so that you don’t have to configure anything. Still, we wanted to give you this level of control should you need it.
Also, we'll automatically fall back to the default bundler if an error occurs during the bundling stage.
New ECMAScript features
In addition to bundling functions faster and smaller, esbuild makes it possible for you to write your functions using the ECMAScript modules (or ES modules) syntax, which makes use of the
import and
export keywords instead of the
require and
module.exports primitives seen in CommonJS modules.
To use this syntax in a function, you should create a module that exports a function named
handler.
import { something } from 'some-module'
export async function handler(event, context) {
return {
statusCode: 200,
body: JSON.stringify({ message: "Hello World" })
}
}
There are additional language features that are now also supported, such as optional chaining, nullish coalescing, and logical assignment operators.
Note that these features are currently only supported when using the new bundler.
Whether you’re a Netlify Functions power user or you’re just getting started, we’d love to hear about your experience with these new features. If you’re looking for some examples to get you off the ground, make sure to check out our Functions playground. | https://www.netlify.com/blog/2021/04/02/modern-faster-netlify-functions/ | CC-MAIN-2022-21 | refinedweb | 663 | 50.97 |
/* **************************************************************** */ /* */ /* I-Search and Searching */ /* */ /* **************************************************************** */ /* Copyright (C) 1987-2002 <sys/types.h> #include <stdio.h> #if defined (HAVE_UNISTD_H) # include <unistd.h> #endif #if defined (HAVE_STDLIB_H) # include <stdlib.h> #else # include "ansi_stdlib.h" #endif #include "rldefs.h" #include "rlmbutil.h" #include "readline.h" #include "history.h" #include "rlprivate.h" #include "xmalloc.h" /* Variables exported to other files in the readline library. */ char *_rl_isearch_terminators = (char *)NULL; /* Variables imported from other files in the readline library. */ extern HIST_ENTRY *_rl_saved_line_for_history; /* Forward declarations */ static int rl_search_history PARAMS((int, int)); /* Last line found by the current incremental search, so we don't `find' identical lines many times in a row. */ static char *prev_line_found; /* Last search string and its length. */ static char *last_isearch_string; static int last_isearch_string_len; static char *default_isearch_terminators = "\033\012"; /* Search backwards through the history looking for a string which is typed interactively. Start with the current line. */ int rl_reverse_search_history (sign, key) int sign, key; { return (rl_search_history (-sign, key)); } /* Search forwards through the history looking for a string which is typed interactively. Start with the current line. */ int rl_forward_search_history (sign, key) int sign, key; { return (rl_search_history (sign, key)); } /* Display the current state of the search in the echo-area. SEARCH_STRING contains the string that is being searched for, DIRECTION is zero for forward, or 1 for reverse, WHERE is the history list number of the current line. If it is -1, then this line is the starting one. */ static void rl_display_search (search_string, reverse_p, where) char *search_string; int reverse_p, where; { char *message; int msglen, searchlen; searchlen = (search_string && *search_string) ? strlen (search_string) : 0; message = (char *)xmalloc (searchlen + 33); msglen = 0; #if defined (NOTDEF) if (where != -1) { sprintf (message, "[%d]", where + history_base); msglen = strlen (message); } #endif /* NOTDEF */ message[msglen++] = '('; if (reverse_p) { strcpy (message + msglen, "reverse-"); msglen += 8; } strcpy (message + msglen, "i-search)`"); msglen += 10; if (search_string) { strcpy (message + msglen, search_string); msglen += searchlen; } strcpy (message + msglen, "': "); rl_message ("%s", message); free (message); (*rl_redisplay_function) (); } /* Search through the history looking for an interactively typed string. This is analogous to i-search. We start the search in the current line. DIRECTION is which direction to search; >= 0 means forward, < 0 means backwards. */ static int rl_search_history (direction, invoking_key) int direction, invoking_key; { /* The string that the user types in to search for. */ char *search_string; /* The current length of SEARCH_STRING. */ int search_string_index; /* The amount of space that SEARCH_STRING has allocated to it. */ int search_string_size; /* The list of lines to search through. */ char **lines, *allocated_line; /* The length of LINES. */ int hlen; /* Where we get LINES from. */ HIST_ENTRY **hlist; register int i; int orig_point, orig_mark, orig_line, last_found_line; int c, found, failed, sline_len; int n, wstart, wlen; #if defined (HANDLE_MULTIBYTE) char mb[MB_LEN_MAX]; #endif /* The line currently being searched. 
*/ char *sline; /* Offset in that line. */ int line_index; /* Non-zero if we are doing a reverse search. */ int reverse; /* The list of characters which terminate the search, but are not subsequently executed. If the variable isearch-terminators has been set, we use that value, otherwise we use ESC and C-J. */ char *isearch_terminators; RL_SETSTATE(RL_STATE_ISEARCH); orig_point = rl_point; orig_mark = rl_mark; last_found_line = orig_line = where_history (); reverse = direction < 0; hlist = history_list (); allocated_line = (char *)NULL; isearch_terminators = _rl_isearch_terminators ? _rl_isearch_terminators : default_isearch_terminators; /* Create an arrary of pointers to the lines that we want to search. */ rl_maybe_replace_line (); i = 0; if (hlist) for (i = 0; hlist[i]; i++); /* Allocate space for this many lines, +1 for the current input line, and remember those lines. */ lines = (char **)xmalloc ((1 + (hlen = i)) * sizeof (char *)); for (i = 0; i < hlen; i++) lines[i] = hlist[i]->line; if (_rl_saved_line_for_history) lines[i] = _rl_saved_line_for_history->line; else { /* Keep track of this so we can free it. */ allocated_line = (char *)xmalloc (1 + strlen (rl_line_buffer)); strcpy (allocated_line, &rl_line_buffer[0]); lines[i] = allocated_line; } hlen++; /* The line where we start the search. */ i = orig_line; rl_save_prompt (); /* Initialize search parameters. */ search_string = (char *)xmalloc (search_string_size = 128); *search_string = '\0'; search_string_index = 0; prev_line_found = (char *)0; /* XXX */ /* Normalize DIRECTION into 1 or -1. */ direction = (direction >= 0) ? 1 : -1; rl_display_search (search_string, reverse, -1); sline = rl_line_buffer; sline_len = strlen (sline); line_index = rl_point; found = failed = 0; for (;;) { rl_command_func_t *f = (rl_command_func_t *)NULL; /* Read a key and decide how to proceed. */ RL_SETSTATE(RL_STATE_MOREINPUT); c = rl_read_key (); RL_UNSETSTATE(RL_STATE_MOREINPUT); #if defined (HANDLE_MULTIBYTE) if (MB_CUR_MAX > 1 && rl_byte_oriented == 0) c = _rl_read_mbstring (c, mb, MB_LEN_MAX); #endif /* Translate the keys we do something with to opcodes. */ if (c >= 0 && _rl_keymap[c].type == ISFUNC) { f = _rl_keymap[c].function; if (f == rl_reverse_search_history) c = reverse ? -1 : -2; else if (f == rl_forward_search_history) c = !reverse ? -1 : -2; else if (f == rl_rubout) c = -3; else if (c == CTRL ('G')) c = -4; else if (c == CTRL ('W')) /* XXX */ c = -5; else if (c == CTRL ('Y')) /* XXX */ c = -6; } /* The characters in isearch_terminators (set from the user-settable variable isearch-terminators) are used to terminate the search but not subsequently execute the character as a command. The default value is "\033\012" (ESC and C-J). */ if (strchr (isearch_terminators, c)) { /* ESC still terminates the search, but if there is pending input or if input arrives within 0.1 seconds (on systems with select(2)) it is used as a prefix character with rl_execute_next. WATCH OUT FOR THIS! This is intended to allow the arrow keys to be used like ^F and ^B are used to terminate the search and execute the movement command. 
XXX - since _rl_input_available depends on the application- settable keyboard timeout value, this could alternatively use _rl_input_queued(100000) */ if (c == ESC && _rl_input_available ()) rl_execute_next (ESC); break; } #define ENDSRCH_CHAR(c) \ ((CTRL_CHAR (c) || META_CHAR (c) || (c) == RUBOUT) && ((c) != CTRL ('G'))) #if defined (HANDLE_MULTIBYTE) if (MB_CUR_MAX > 1 && rl_byte_oriented == 0) { if (c >= 0 && strlen (mb) == 1 && ENDSRCH_CHAR (c)) { /* This sets rl_pending_input to c; it will be picked up the next time rl_read_key is called. */ rl_execute_next (c); break; } } else #endif if (c >= 0 && ENDSRCH_CHAR (c)) { /* This sets rl_pending_input to c; it will be picked up the next time rl_read_key is called. */ rl_execute_next (c); break; } switch (c) { case -1: if (search_string_index == 0) { if (last_isearch_string) { search_string_size = 64 + last_isearch_string_len; search_string = (char *)xrealloc (search_string, search_string_size); strcpy (search_string, last_isearch_string); search_string_index = last_isearch_string_len; rl_display_search (search_string, reverse, -1); break; } continue; } else if (reverse) --line_index; else if (line_index != sline_len) ++line_index; else rl_ding (); break; /* switch directions */ case -2: direction = -direction; reverse = direction < 0; break; /* delete character from search string. */ case -3: /* C-H, DEL */ /* This is tricky. To do this right, we need to keep a stack of search positions for the current search, with sentinels marking the beginning and end. But this will do until we have a real isearch-undo. */ if (search_string_index == 0) rl_ding (); else search_string[--search_string_index] = '\0'; break; case -4: /* C-G */ rl_replace_line (lines[orig_line], 0); rl_point = orig_point; rl_mark = orig_mark; rl_restore_prompt(); rl_clear_message (); if (allocated_line) free (allocated_line); free (lines); RL_UNSETSTATE(RL_STATE_ISEARCH); return 0; case -5: /* C-W */ /* skip over portion of line we already matched */ wstart = rl_point + search_string_index; if (wstart >= rl_end) { rl_ding (); break; } /* if not in a word, move to one. */ if (rl_alphabetic(rl_line_buffer[wstart]) == 0) { rl_ding (); break; } n = wstart; while (n < rl_end && rl_alphabetic(rl_line_buffer[n])) n++; wlen = n - wstart + 1; if (search_string_index + wlen + 1 >= search_string_size) { search_string_size += wlen + 1; search_string = (char *)xrealloc (search_string, search_string_size); } for (; wstart < n; wstart++) search_string[search_string_index++] = rl_line_buffer[wstart]; search_string[search_string_index] = '\0'; break; case -6: /* C-Y */ /* skip over portion of line we already matched */ wstart = rl_point + search_string_index; if (wstart >= rl_end) { rl_ding (); break; } n = rl_end - wstart + 1; if (search_string_index + n + 1 >= search_string_size) { search_string_size += n + 1; search_string = (char *)xrealloc (search_string, search_string_size); } for (n = wstart; n < rl_end; n++) search_string[search_string_index++] = rl_line_buffer[n]; search_string[search_string_index] = '\0'; break; default: /* Add character to search string and continue search. 
*/ if (search_string_index + 2 >= search_string_size) { search_string_size += 128; search_string = (char *)xrealloc (search_string, search_string_size); } #if defined (HANDLE_MULTIBYTE) if (MB_CUR_MAX > 1 && rl_byte_oriented == 0) { int j, l; for (j = 0, l = strlen (mb); j < l; ) search_string[search_string_index++] = mb[j++]; } else #endif search_string[search_string_index++] = c; search_string[search_string_index] = '\0'; break; } for (found = failed = 0;;) { int limit = sline_len - search_string_index + 1; /* Search the current line. */ while (reverse ? (line_index >= 0) : (line_index < limit)) { if (STREQN (search_string, sline + line_index, search_string_index)) { found++; break; } else line_index += direction; } if (found) break; /* Move to the next line, but skip new copies of the line we just found and lines shorter than the string we're searching for. */ do { /* Move to the next line. */ i += direction; /* At limit for direction? */ if (reverse ? (i < 0) : (i == hlen)) { failed++; break; } /* We will need these later. */ sline = lines[i]; sline_len = strlen (sline); } while ((prev_line_found && STREQ (prev_line_found, lines[i])) || (search_string_index > sline_len)); if (failed) break; /* Now set up the line for searching... */ line_index = reverse ? sline_len - search_string_index : 0; } if (failed) { /* We cannot find the search string. Ding the bell. */ rl_ding (); i = last_found_line; continue; /* XXX - was break */ } /* We have found the search string. Just display it. But don't actually move there in the history list until the user accepts the location. */ if (found) { prev_line_found = lines[i]; rl_replace_line (lines[i], 0); rl_point = line_index; last_found_line = i; rl_display_search (search_string, reverse, (i == orig_line) ? -1 : i); } } /* The searching is over. The user may have found the string that she was looking for, or else she may have exited a failing search. If LINE_INDEX is -1, then that shows that the string searched for was not found. We use this to determine where to place rl_point. */ /* First put back the original state. */ strcpy (rl_line_buffer, lines[orig_line]); rl_restore_prompt (); /* Save the search string for possible later use. */ FREE (last_isearch_string); last_isearch_string = search_string; last_isearch_string_len = search_string_index; if (last_found_line < orig_line) rl_get_previous_history (orig_line - last_found_line, 0); else rl_get_next_history (last_found_line - orig_line, 0); /* If the string was not found, put point at the end of the last matching line. If last_found_line == orig_line, we didn't find any matching history lines at all, so put point back in its original position. */ if (line_index < 0) { if (last_found_line == orig_line) line_index = orig_point; else line_index = strlen (rl_line_buffer); rl_mark = orig_mark; } rl_point = line_index; /* Don't worry about where to put the mark here; rl_get_previous_history and rl_get_next_history take care of it. */ rl_clear_message (); FREE (allocated_line); free (lines); RL_UNSETSTATE(RL_STATE_ISEARCH); return 0; } | http://opensource.apple.com/source/gdb/gdb-1344/src/readline/isearch.c | CC-MAIN-2015-35 | refinedweb | 1,596 | 62.98 |
struct myStruct{ int x, y, z; int operator [](int index){ return (&x)[index]; } } myStruct s; s.x = 0;
and
struct myStruct{ int a[3]; int operator [](int index){ return a[index]; } int x(void){ return a[0]; } int y(void){ return a[1]; } int z(void){ return a[2]; } }; myStruct s; s.a[0] = 0;
are nearly equivalent in terms of speed on modern compilers. The problem arises when I need to access it by subscript, and also use it like an array of floats (like with OpenGL). However, which is more tedious to use:
1. Ensuring that every platform supports telling the compiler how to align and pad the members so that one can make an unsafe cast of the address of the first member to a pointer type, so that it can be subscripted or returned, and being able to refer to each member by their name
or
2. Storing the members as an array, allowing me to return a pointer and subscript safely and reliably, and providing an inlined x(), y(), and z() member function that returns a[0], a[1], and a[2] respectively
?
I'd prefer the second, as deterministic type safety seems like a better choice.
The downfalls of each, as far as I can tell:
1. Requiring alignment and padding to both be controlled the same way no matter what compiler (if the compiler can't or doesn't do it right, the program could crash!), and unsafe cast from a single member to a pointer of its type, knowing it will be accessed like an array.
2. Takes a couple more keystrokes to access the data (s.a[0] or s.x() vs s.x), and a little tougher to conceptually grasp.
I believe that a properly inlined s.x() or s.a[0] would be just as fast/slow as s.x; both would dereference a this pointer, and both would use a constant offset from the beginning of the struct (and possibly array) to reach the data, so it seems like it makes no difference in speed.
The pros of each:
1. Simple to use and understand; behaves how you expect a vector class might.
2. Performs reliably, and safely.
Which would be preferable, knowing that my goal is portability, and reliability (I'd rather not depend on compiler pragmas, macros, and settings)? Does the array method have any serious implications, like speed? | https://www.gamedev.net/topic/635794-vector-math-class-storage-array-vs-separate-members/ | CC-MAIN-2017-22 | refinedweb | 403 | 72.56 |
There’s an element of confusion regarding the term “lists of lists” in Python. I wrote the most comprehensive tutorial on lists of lists in the world to resolve all those confusions for beginners in the Python programming language. This multi-modal tutorial consists of:
- Source code to copy&paste in your own projects.
- Interactive code you can execute in your browser.
- Explanatory text for each code snippet.
- Screencapture videos I recorded to help you understand the concepts faster.
- Graphics and illustrated concepts to help you get the ideas quickly.
- References to further reading and related tutorials.
So, if you’re confused by lists of lists, read on—and resolve your confusions once and for all!
What’s a List of Lists?
Definition: A list of lists in Python is a list object where each list element is a list by itself. You create a list of lists in Python by using the square bracket notation to nest lists inside an outer list:
[[1, 2, 3], [4, 5, 6], [7, 8, 9]].
Interactive Example Python List of Lists
Let’s dive into a detailed example of a list of lists. Try to work through it yourself before reading on.
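The following short script creates a list of lists and accesses it in three common ways: indexing an inner list, indexing a single element, and iterating over the rows. The variable names are illustrative only.

lst = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Access the first inner list
print(lst[0])

# Access a single element: row 1, column 2
print(lst[1][2])

# Iterate over the inner lists
for row in lst:
    print(sum(row))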
Puzzle: Can you figure out the output before executing the script?
Memory Analysis
It’s important that you understand that a list is only a series of references to memory locations. Each element of the outer list is a reference to an inner list object, not a copy of it. Tracing the code step by step will give you a deeper understanding of how Python works at its core.
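To see those reference semantics in action, here’s a minimal sketch: a shallow copy of a list of lists copies only the outer references, so both variables share the same inner list objects.

lst = [[1, 2], [3, 4]]
copy = list(lst)       # shallow copy: copies the references, not the inner lists

copy[0][0] = 99        # mutates the inner list shared by both variables
print(lst)             # [[99, 2], [3, 4]]

copy[1] = [7, 8]       # rebinds a reference in the copy only
print(lst)             # [[99, 2], [3, 4]]
print(copy)            # [[99, 2], [7, 8]]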
Create a List of Lists in Python
Create a list of lists by using the square bracket notation. For example, to create a list of lists of integer values, use [[1, 2], [3, 4]]. Each list element of the outer list is a nested list itself.
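Here’s a minimal sketch showing two equivalent ways to build the same list of lists, with literal notation and with incremental append() calls (variable names are illustrative):

# Square bracket notation
lst = [[1, 2], [3, 4]]

# Building the same structure incrementally
lst_2 = []
lst_2.append([1, 2])
lst_2.append([3, 4])

print(lst == lst_2)    # True
print(lst[0])          # [1, 2]
print(lst[1][0])       # 3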
Convert List of Lists to One List
Say, you want to convert a list of lists such as [[1, 2], [3, 4]] into a single flat list [1, 2, 3, 4].
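One concise way to flatten it uses a nested list comprehension; the standard library’s itertools.chain.from_iterable() achieves the same result. Here’s a minimal sketch of both (the example list is illustrative):

import itertools

lst = [[1, 2], [3, 4], [5, 6]]

# Nested list comprehension: outer loop first, then inner loop
flat = [x for inner in lst for x in inner]
print(flat)     # [1, 2, 3, 4, 5, 6]

# Standard library alternative
flat_2 = list(itertools.chain.from_iterable(lst))
print(flat_2)   # [1, 2, 3, 4, 5, 6]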
Convert List of Lists to Dictionary
For some applications, it’s quite useful to convert a list of lists into a dictionary.
- Databases: A list of lists is a table where the inner lists are the database rows, and you want to assign each row to a primary key in a new dictionary.
- Spreadsheet: A list of lists holds two-dimensional spreadsheet data, and you want to assign each row to a key (=row name).
- Data Analytics: You’ve got a two-dimensional matrix (=NumPy array) that’s initially represented as a list of lists, and you want to obtain a dictionary to ease data access.
There are three main ways to convert a list of lists into a dictionary in Python:
- Dictionary Comprehension
- Generator Expression
- For Loop
Let’s dive into each of those.
1. Dictionary Comprehension
Problem: Say, you’ve got a list of lists where each list represents a person and consists of three values for the person’s name, age, and hair color. For convenience, you want to create a dictionary where you use a person’s name as a dictionary key and the sublist consisting of the age and the hair color as the dictionary value.
Solution: You can achieve this by using the beautiful (but, surprisingly, little-known) feature of dictionary comprehension in Python.
persons = [['Alice', 25, 'blonde'],
           ['Bob', 33, 'black'],
           ['Ann', 18, 'purple']]

persons_dict = {x[0]: x[1:] for x in persons}

print(persons_dict)
# {'Alice': [25, 'blonde'],
#  'Bob': [33, 'black'],
#  'Ann': [18, 'purple']}
Explanation: The dictionary comprehension statement consists of the expression x[0]: x[1:] that assigns a person’s name x[0] to the list x[1:] of the person’s age and hair color. Further, it consists of the context for x in persons that iterates over all “data rows”.
Exercise: Can you modify the code so that each hair color is used as a key and the name and age are used as the values?

Run your modified code to see if you were right!
2. Generator Expression
A similar way of achieving the same thing is to use a generator expression in combination with the dict() constructor to create the dictionary.
persons = [['Alice', 25, 'blonde'],
           ['Bob', 33, 'black'],
           ['Ann', 18, 'purple']]

persons_dict = dict((x[0], x[1:]) for x in persons)

print(persons_dict)
# {'Alice': [25, 'blonde'],
#  'Bob': [33, 'black'],
#  'Ann': [18, 'purple']}
This code snippet is almost identical to the one used in the “dictionary comprehension” part. The only difference is that you use tuples rather than direct mappings to fill the dictionary.
3. For Loop
Of course, there’s no need to get fancy here. You can also use a regular for loop and define the dictionary elements one by one within a simple for loop. Here’s the alternative code:
persons = [['Alice', 25, 'blonde'],
           ['Bob', 33, 'black'],
           ['Ann', 18, 'purple']]

persons_dict = {}
for x in persons:
    persons_dict[x[0]] = x[1:]

print(persons_dict)
# {'Alice': [25, 'blonde'],
#  'Bob': [33, 'black'],
#  'Ann': [18, 'purple']}
Again, you map each person’s name to the list consisting of its age and hair color.
Convert List of Lists to NumPy Array
Problem: Given a list of lists in Python. How to convert it to a 2D NumPy array?
Example: Convert the following list of lists
[[1, 2, 3], [4, 5, 6]]
into a NumPy array
[[1 2 3]
 [4 5 6]]
Solution: Use the np.array(list) function to convert a list of lists into a two-dimensional NumPy array. Here’s the code:
# Import the NumPy library
import numpy as np

# Create the list of lists
lst = [[1, 2, 3], [4, 5, 6]]

# Convert it to a NumPy array
a = np.array(lst)

# Print the resulting array
print(a)
'''
[[1 2 3]
 [4 5 6]]
'''
Hint: The NumPy method np.array() takes an iterable as input and converts it into a NumPy array.
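Because it accepts any iterable of iterables, the same call also converts, for example, a tuple of tuples; a quick sketch:

import numpy as np

t = ((1, 2, 3), (4, 5, 6))

# Any iterable of iterables works as input
a = np.array(t)
print(a)
'''
[[1 2 3]
 [4 5 6]]
'''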
Convert a List of Lists With Different Number of Elements
Problem: Given a list of lists. The inner lists have a varying number of elements. How to convert them to a NumPy array?
Example: Say, you’ve got the following list of lists:
[[1, 2, 3], [4, 5], [6, 7, 8]]
What are the different approaches to convert this list of lists into a NumPy array?
Solution: There are three different strategies you can use.
(1) Use the standard np.array() function.
# Import the NumPy library
import numpy as np

# Create the list of lists
lst = [[1, 2, 3], [4, 5], [6, 7, 8]]

# Convert it to a NumPy array
a = np.array(lst)

# Print the resulting array
print(a)
'''
[list([1, 2, 3]) list([4, 5]) list([6, 7, 8])]
'''
This creates a NumPy array with three elements, where each element is a list type. Note that recent NumPy versions raise an error for such ragged input unless you request an object array explicitly with np.array(lst, dtype=object). You can check the type of the output by using the built-in type() function:
>>> type(a)
<class 'numpy.ndarray'>
(2) Make an array of arrays.
# Import the NumPy library
import numpy as np

# Create the list of lists
lst = [[1, 2, 3], [4, 5], [6, 7, 8]]

# Convert it to a NumPy array of 1D NumPy arrays
a = np.array([np.array(x) for x in lst])

# Print the resulting array
print(a)
'''
[array([1, 2, 3]) array([4, 5]) array([6, 7, 8])]
'''
This is more logical than the previous version because it creates a NumPy array of 1D NumPy arrays (rather than 1D Python lists).
(3) Make the lists equal in length.
# Import the NumPy library import numpy as np # Create the list of lists lst = [[1, 2, 3], [4, 5], [6, 7, 8, 9]] # Calculate length of maximal list n = len(max(lst, key=len)) # Make the lists equal in length lst_2 = [x + [None]*(n-len(x)) for x in lst] print(lst_2) # [[1, 2, 3, None], [4, 5, None, None], [6, 7, 8, 9]] # Convert it to a NumPy array a = np.array(lst_2) # Print the resulting array print(a) ''' [[1 2 3 None] [4 5 None None] [6 7 8 9]] '''
You use list comprehension to “pad”
None values to each inner list with smaller than maximal length. See the original article on this blog for a more detailed version of this content.
Convert List of Lists to Dataframe
Problem: You’re given a list of lists. Your goal is to convert it into a Pandas Dataframe.
Example: Say, you want to compare salary data of different companies and job descriptions. You’ve obtained the following salary data set as a list of list:
salary = [['Google', 'Machine Learning Engineer', 121000], ['Google', 'Data Scientist', 109000], ['Google', 'Tech Lead', 129000], ['Facebook', 'Data Scientist', 103000]]
How can you convert this into a Pandas Dataframe?
Solution: The straight-forward solution is to use the
pandas.DataFrame() constructor that creates a new Dataframe object from different input types such as NumPy arrays or lists.
Here’s how to do it for the given example:
import pandas as pd salary = [['Google', 'Machine Learning Engineer', 121000], ['Google', 'Data Scientist', 109000], ['Google', 'Tech Lead', 129000], ['Facebook', 'Data Scientist', 103000]] df = pd.DataFrame(salary)
This results in the following Dataframe:
print(df) ''' 0 1 2 0 Google Machine Learning Engineer 121000 1 Google Data Scientist 109000 2 Google Tech Lead 129000 3 Facebook Data Scientist 103000 '''
An alternative is the
pandas.DataFrame.from_records() method that generates the same output:
df = pd.DataFrame.from_records(salary) print(df) ''' 0 1 2 0 Google Machine Learning Engineer 121000 1 Google Data Scientist 109000 2 Google Tech Lead 129000 3 Facebook Data Scientist 103000 '''
If you want to add column names to make the output prettier, you can also pass those as a separate argument:
import pandas as pd salary = [['Google', 'Machine Learning Engineer', 121000], ['Google', 'Data Scientist', 109000], ['Google', 'Tech Lead', 129000], ['Facebook', 'Data Scientist', 103000]] df = pd.DataFrame(salary, columns=['Company', 'Job', 'Salary($)']) print(df) ''' Company Job Salary($) 0 Google Machine Learning Engineer 121000 1 Google Data Scientist 109000 2 Google Tech Lead 129000 3 Facebook Data Scientist 103000 '''
If the first list of the list of lists contains the column name, use slicing to separate the first list from the other lists:
import pandas as pd salary = [['Company', 'Job', 'Salary($)'], ['Google', 'Machine Learning Engineer', 121000], ['Google', 'Data Scientist', 109000], ['Google', 'Tech Lead', 129000], ['Facebook', 'Data Scientist', 103000]] df = pd.DataFrame(salary[1:], columns=salary[0]) print(df) ''' Company Job Salary($) 0 Google Machine Learning Engineer 121000 1 Google Data Scientist 109000 2 Google Tech Lead 129000 3 Facebook Data Scientist 103000 '''
Slicing is a powerful Python feature and before you can master Pandas, you need to master slicing. To refresh your Python slicing skills, download my ebook “Coffee Break Python Slicing” for free.
Summary: To convert a list of lists into a Pandas DataFrame, use the
pd.DataFrame() constructor and pass the list of lists as an argument. An optional columns argument can help you structure the output.
Related article: How to convert a list of lists into a Python DataFrame?
Convert List of Lists to List of Tuples
If you’re in a hurry, here’s the short answer: use the list comprehension statement
[tuple(x) for x in list] to convert each element in your
list to a tuple. This works also for list of lists with varying number of elements.
But there’s more to it and studying the two main method to achieve the same objective will make you a better coder. So keep reading:
Method 1: List Comprehension + tuple()
Problem: How to convert a list of lists into a list of tuples?
Example: You’ve got a list of lists
[[1, 2], [3, 4], [5, 6]] and you want to convert it into a list of tuples
[(1, 2), (3, 4), (5, 6)].
Solution: There are different solutions to convert a list of lists to a list of tuples. Use list comprehension in its most basic form:
lst = [[1, 2], [3, 4], [5, 6]] tuples = [tuple(x) for x in lst] print(tuples) # [(1, 2), (3, 4), (5, 6)]
Try It Yourself:
This approach is simple and effective. List comprehension defines how to convert each value (
x in the example) to a new list element. As each list element is a new tuple, you use the constructor
tuple(x) to create a new tuple from the list
x.
If you have three list elements per sublist, you can use the same approach with the conversion:
lst = [[1, 2, 1], [3, 4, 3], [5, 6, 5]] tuples = [tuple(x) for x in lst] print(tuples) # [(1, 2, 1), (3, 4, 3), (5, 6, 5)]
You can see the execution flow in the following interactive visualization (just click the “Next” button to see what’s happening in the code):
And if you have a varying number of list elements per sublist, this approach still works beautifully:
lst = [[1], [2, 3, 4], [5, 6, 7, 8]] tuples = [tuple(x) for x in lst] print(tuples) # [(1,), (2, 3, 4), (5, 6, 7, 8)]
You see that an approach with list comprehension is the best way to convert a list of lists to a list of tuples. But are there any alternatives?
Method 2: Map Function + tuple() lists into a list ot tuples using the
map() function:
lst = [[1], [2, 3, 4], [5, 6, 7, 8]] tuples = list(map(tuple, lst)) print(tuples) # [(1,), (2, 3, 4), (5, 6, 7, 8)]
Try it yourself:
The first argument of the
map() function is the
tuple function name. This
tuple() function converts each element on the given iterable
lst (the second argument) into a tuple. Tuples to a List of Lists
- How to Convert a List of Lists to a Pandas Dataframe
- How to Convert a List of Lists to a NumPy Array
- How to Convert a List of Lists to a Dictionary in Python
Convert List of Lists to CSV File
Problem:_2<<.
Sort List of Lists by Key
Every computer scientist loves sorting things. In this section, I’ll show you how you can modify the default Python sorting behavior with the key argument.
Definition and Usage: To customize the default sorting behavior of the
list.sort() and
sorted() method, use the optional
key argument by passing a function that returns a comparable value for each element in the list.
Related article: [Ultimate Guide] sort() function of Python lists
Syntax: You can call this method on each list object in Python (Python versions 2.x and 3.x). Here’s the syntax:
list.sort(key=None, reverse=False)
Arguments:
Related articles:
- Python List Methods [Overview]
- Python List sort() – The Ultimate Guide
- Python Lists – Everything You Need to Know to Get Started
The
list.sort() method takes another function as an optional
key argument that allows you to modify the default sorting behavior. The key function is then called on each list element and returns another value based on which the sorting is done. Hence, the key function takes one input argument (a list element) and returns one output value (a value that can be compared).
Here’s an example:
>>> lst = [[1, 2], [3, 2], [3, 3, 4], [1, 0], [0, 1], [4, 2]] >>> lst.sort() >>> lst [[0, 1], [1, 0], [1, 2], [3, 2], [3, 3, 4], [4, 2]] >>> lst.sort(key=lambda x:x[0]) >>> lst [[0, 1], [1, 0], [1, 2], [3, 2], [3, 3, 4], [4, 2]] >>> lst.sort(key=lambda x:x[1]) >>> lst [[1, 0], [0, 1], [1, 2], [3, 2], [4, 2], [3, 3, 4]]
You can see that in the first two examples, the list is sorted according to the first inner list value.
In the third example, the list is sorted according to the second inner list value. You achieve this by defining a key function
key=lambda x: x[1] that takes one list element
x (a list by itself) as an argument and transforms it into a comparable value
x[1] (the second list value).
Related article:
Sort List of Lists by First Element
Both the list
sort() method and the
sorted() built-in Python function sort a list of lists by their first element.
Here’s an example:
lst = [[1, 2, 3], [3, 2, 1], [2, 2, 2]] lst.sort() print(lst) # [[1, 2, 3], # [2, 2, 2], # [3, 2, 1]]
The default sorting routine takes the first list element of any inner list as a decision criteria. Only if the first element would be the same for two values, the second list element would be taken as a tiebreaker.
Sort List of Lists Lexicographically
Problem: Given a list of lists. Sort the list of strings in lexicographical order! Lexicographical order is to sort by the first inner list element. If they are the same, you sort by the second inner list element, and so on.
Example:
[[1, 2, 3], [3, 2, 1], [2, 2, 2], [2, 0, 3]] --> [[1, 2, 3], [2, 0, 3], [2, 2, 2], [3, 2, 1]]
Solution: Use the
list.sort() method without argument to solve the list in lexicographical order.
lst = [[1, 2, 3], [3, 2, 1], [2, 2, 2], [2, 0, 3]] lst.sort() print(lst) ''' [[1, 2, 3], [2, 0, 3], [2, 2, 2], [3, 2, 1]] '''
Sort List of Lists By Length
Problem: Given a list of lists. How can you sort them by length?
Example: You want to sort your list of lists
[[2, 2], [4], [1, 2, 3, 4], [1, 2, 3]] by length—starting with the shortest list. Thus, your target result is
[[4], [2, 2], [1, 2, 3], [1, 2, 3, 4]]. How to achieve that?
Solution: Use the
len() function as key argument of the
list.sort() method like this:
list.sort(key=len). As the
len() function is a Python built-in function, you don’t need to import or define anything else.
Here’s the code solution:
lst = [[2, 2], [4], [1, 2, 3, 4], [1, 2, 3]] lst.sort(key=len) print(lst)
The output is the list sorted by length of the string:
[[4], [2, 2], [1, 2, 3], [1, 2, 3, 4]]
You can also use this technique to sort a list of strings by length.
List Comprehension Python List of Lists
You’ll learn three ways how to apply list comprehension to a list of lists:
- to flatten a list of lists
- to create a list of lists
- to iterate over a list of lists
Additionally, you’ll learn how to apply nested list comprehension. So let’s get started!
Python List Comprehension Flatten List of Lists
Problem: Given a list of lists. How to flatten the list of lists by getting rid of the inner lists—and keeping their elements?
Example: You want to transform a given list into a flat list like here:
lst = [[2, 2], [4], [1, 2, 3, 4], [1, 2, 3]] # ... Flatten the list here ... print(lst) # [2, 2, 4, 1, 2, 3, 4, 1, 2, 3]
Solution: Use a nested list comprehension statement
[x for l in lst for x in l] to flatten the list.
lst = [[2, 2], [4], [1, 2, 3, 4], [1, 2, 3]] # ... Flatten the list here ... lst = [x for l in lst for x in l] print(lst) # [2, 2, 4, 1, 2, 3, 4, 1, 2, 3]
Explanation: In the nested list comprehension statement
[x for l in lst for x in l], you first iterate over all lists in the list of lists (
for l in lst). Then, you iterate over all elements in the current list (
for x in l). This element, you just place in the outer list, unchanged, by using it in the “expression” part of the list comprehension statement
[x for l in lst for x in l].
Try It Yourself: You can execute this code snippet yourself in our interactive Python shell. Just click “Run” and test the output of this code.
Can you flatten a three-dimensional list (= a list of lists of lists)? Try it in the shell!
Python List Comprehension Create List of Lists
Problem: How to create a list of lists by modifying each element of an original list of lists?
Example: You’re given the list
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
You want to add one to each element and create a new list of lists:
[[2, 3, 4], [5, 6, 7], [8, 9, 10]]
Solution: Use two nested list comprehension statements, one to create the outer list of lists, and one to create the inner lists.
lst = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] new = [[x+1 for x in l] for l in lst] print(new) # [[2, 3, 4], [5, 6, 7], [8, 9, 10]]
Explanation: The main idea is to use as “expression” of the outer list comprehension statement a list comprehension statement by itself. Remember, you can create any object you want in the expression part of your list comprehension statement. Read more here.
Print List of Lists Without Brackets
Problem: Given a list of lists, print it one row per line—without brackets.
Example: Consider the following example list:
lst = [[1, 2, 3], [4, 5, 6]]
You want to print the list of lists with a newline character after each inner list:
1 2 3 4 5 6
Solution: Use a for loop and a simple print statement:
lst = [[1, 2, 3], [4, 5, 6]] for x in lst: print(*x)
The output has the desired form:
1 2 3 4 5 6
Explanation: The asterisk operator “unpacks” all values in the inner list
x into the print statement. You must know that the print statement also takes multiple inputs and prints them, whitespace-separated, to the shell.
Related articles:
Print List of Lists With Newline & Align Columns
Problem: How to print a list of lists with a new line after each list so that the columns are aligned?
Example: Say, you’re going to print the list of lists.
[['Alice', 'Data Scientist', 121000], ['Bob', 'Java Dev', 99000], ['Ann', 'Python Dev', 111000]]
How to align the columns?
Alice 'Data Scientist', 121000], Bob 'Java Dev', 99000], Ann 'Python Dev', 111000]]
Solution: Use the following code snippet to print the list of lists and align all columns (no matter how many characters each string in the list of lists occupies).
# Create the list of lists lst = [['Alice', 'Data Scientist', '121000'], ['Bob', 'Java Dev', '99000'], ['Ann', 'Python Dev', '111000']] # Find maximal length of all elements in list n = max(len(x) for l in lst for x in l) # Print the rows for row in lst: print(''.join(x.ljust(n + 2) for x in row))
The output is the desired:
Alice Data Scientist 121000 Bob Java Dev 99000 Ann Python Dev 111000
Explanation:
- First, you determine the length
n(in characters) of the largest string in the list of lists using the statement
max(len(x) for l in lst for x in l). The code uses a nested for loop in a generator expression to achieve this.
- Second, you iterate over each list in the list of lists (called
row).
- Third, you create a string representation with columns aligned by ‘padding’ each row element so that it occupies
n+2characters of space. The missing characters are filled with empty spaces.
You can see the code in action in the following memory visualizer. Just click “Next” to see which objects are created in memory if you run the code in Python:
Related articles: You may need to refresh your understanding of the following Python features used in the code:
Python List of Lists Enumerate
Say, you’ve given the following code that uses the enumerate function on a list of lists:
lst = [['Alice', 'Data Scientist', '121000'], ['Bob', 'Java Dev', '99000'], ['Ann', 'Python Dev', '111000']] for i,l in enumerate(lst): print('list ' + str(i) + ': ' + str(len(l)) + ' elements')
The output is:
list 0: 3 elements list 1: 3 elements list 2: 3 elements
The enumerate function creates an iterator of (index, element) pairs for all elements in a given list. If you have a list of lists, the list elements are list themselves. So, the enumerate function generates (index, list) pairs. You can use them in the loop body—for example, to print the length of the i-th list elements.
Remove Empty – Python List of Lists. | https://blog.finxter.com/python-list-of-lists/ | CC-MAIN-2020-34 | refinedweb | 4,077 | 65.46 |
Wimdows.NET
Wim's .NET blog
Unexpected behaviour: Defining values in enums
I don't normally regurgitate other blogger's posts like a headless chicken, but in this case it's worth mentioning.
Stephen, a colleague of mine, and a picky sod (who puts their enums in alphabetical order? - no serious comments on that remark, pulllleazee!), ran into some unexpected behaviour with regards to enums. Worth a read..
CreativeCreek.net - portfolio website service launched (ASP.NET 2.0)!
Just like to let you all know I've just launched a porfolio website service, CreativeCreek.net, developed in ASP.NET 2.0.
Dump the Module keyword in VB.NET!
Though I'm mainly a C# developer, I now and then get exposed to some VB.NET stuff. No, this is not going to be a C# vs. VB.NET debate. We've seen enough heated arguments and flame wars on that topic over the years.
Something about VB.NET Console applications created in Visual Studio.NET (all versions), bugs me though: the dreaded Module keyword. The default skeleton for a Console app in VB.NET looks like this:
Module Module1
Sub Main()
End Sub
End Module
Whilst under the hood, a module is simply a class with static members and a private default constructor to prevent instantiation, I don't think its use should be promoted like that. And I really wonder why MS hasn't changed the default Console app skeleton to look as follows:
Class Program
Shared Sub Main()
End Sub
End Class
In my opinion, the Module keyword shouldn't have even existed in VB.NET. It's one of the reasons why a lot of VB.NET code I've seen simply gets dumped in a Module, and Object Oriented Programming goes out the window. Of course, there's nothing stopping you coding like that in VB.NET without using the Module keyword, or even in C# for that matter. But it is a step in the right direction in trying to get developers to think about object oriented class design first (static/shared vs. instance members etc), before shoving anything and everything in a Module.
Zune MP3 player - DRM'ed or not?
Here's a question for a debate. Do you think Microsoft's upcoming audio player, dubbed Zune, will play only DRM compatible content?
If that is the case, I won't be touching it with a barge-pole; same reason I haven't got myself an iPod.
If I have legal MP3 files, ripped them from CD's I own or bought them legally I feel I should be able to play this on however many devices I want, as long as I own them. That means my MP3 player, my phone, server, desktop, laptop, you name it - without restrictions.
If Microsoft wants to have an edge over the IPod, they need to make sure it works without DRM mangled content. If not, it will flop. Big time.
What do you guys think?
ViewState, CheckBox, ControlState...errr...
J. Michael Palermo writes about the CheckBox in ASP.NET maintaining its state regardless of the EnableViewState attribute to False in ASP.NET 2.0.
Hang on - isn't that the same in ASP.NET 1.0 and 1.1? Oh yes.
All web controls that implement the IPostBackDataHandler maintain their basic 'value-state' (for lack of a better description) using the HTTP POST form collection. Irrespective of setting the EnableViewState property for the web control to False.
Nothing has changed in ASP.NET 2.0 as far as that is concerned. The CheckBox control along with TextBox and other controls that implement the IPostBackDataHandler interface, still maintain their basic value-state using their respective HTTP POST values. ControlState has nothing to do with this.
ControlState is a way to allow more fine-grained control over individual portions or behavioural elements of a web control, which under the bonnet actually still uses the ViewState statebag. Fritz Onion's article outlines this nicely and in more detail.
Generic Parse method on Enum - a solution
David Findley writes about how he wishes we had a generic Parse method on the Enum class in .NET 2.0.
Though I agree in principle, it's actually quite trivial to create a generic static class with a Parse method, which alleviates some of the pain.
Here's my stab at it:
public static class EnumUtil<T>
{
public static T Parse(string s)
{
return (T)Enum.Parse(typeof(T), s);
}
}
Say we have the following enum:
public enum Color
{
Black,
White,
Blue,
Red,
Green
}
We can now simply use the generic EnumUtil class as follows:
Color c = EnumUtil<Color>.Parse("Black");
I feel that this helper method doesn't actually warrant a class in its own right, so you may want to add it to one of your Util classes (if you happen to have a generic one!) instead.
ArraySegment Structure - what were they thinking?
From<string> seg = new ArraySegment<string>(new string[] { "John","Jack","Jill","Joe"},1,2);
// Access first item in segment
string first = seg[0];
// Iterate through ArraySegment
foreach (string s in seg)
{
Console.WriteLine(s);
}
Turns out you can't. There's no indexer for ArraySegment and no enumerator. You have to access the .Array property and use .Count and .Offset as passed to the constructor. What is the point of that!?
So I rolled my own generic DelimitedArray class which does exactly that. See the inline code below.
public class DelimitedArray<T>
{
public DelimitedArray(T[] array, int offset, int count)
{
this._array = array;
this._offset = offset;
this._count = count;
}
private int _offset;
private T[] _array;
private int _count;
public int Count
{
get { return this._count; }
}
public T this[int index]
{
get
{
int idx = this._offset + index;
if (idx > this.Count - 1 || idx<0)
{
throw new IndexOutOfRangeException("Index '" + idx + "' was outside the bounds of the array.");
}
return this._array[idx];
}
}
public IEnumerator<T> GetEnumerator()
{
for (int i = this._offset; i < this._offset + this.Count; i++)
{
yield return this._array[i];
}
}
}
Hope this is of use to someone.
ASP.NET 1.1 server control for <link> - enabling relative URL paths using tilde "~"
Here's a simple - but useful Link webcontrol class that supports the "~" tilde syntax for relative paths for the href attribute of the <link> element.
[DefaultProperty("Text"),ToolboxData("<{0}:Link runat=server Href=\"\" Rel=\"Stylesheet\" Type=\"text/css\"></{0}:Link>")]
public class Link : System.Web.UI.WebControls.WebControl
{
private string _href;
public string Href
{
get { return _href; }
set { _href = value; }
}
protected override void Render(HtmlTextWriter output)
{
output.WriteBeginTag("link");
output.WriteAttribute("href",base.ResolveUrl(this.Href));
foreach (string key in this.Attributes.Keys)
{
output.WriteAttribute(key,this.Attributes[key]);
}
output.Write(HtmlTextWriter.TagRightChar);
output.WriteEndTag("link");
}
}
You can then simply drop it in the <head> section of your page (provided you've used the Register directive to register the assembly):
<cc1:Link</cc1:Link>
The reason I had to roll my own is that when you add runat="server" for the <link> element, it turns into a HtmlGenericControl instance on the server, which is obviously used for numerous HTML elements, and as such no specific path resolve mechanism is applied to any of its attributes, since the attributes are different per HTML element.
Hope it helps someone out. | http://weblogs.asp.net/wim | CC-MAIN-2014-35 | refinedweb | 1,210 | 66.03 |
Make both applications rely on cookies of different names. To do this, edit
the forms authentication sections of the web.config and set a value in one
of your applications and a different value in another application:
application1:
<authentication mode="Forms">
<forms name="cookiename1" ... />
application2:
<authentication mode="Forms">
<forms name="cookiename2" ... />
The sessionID is stored in a cookie already there's no need to manage it.
Because the HTTP protocol is stateless the only way to maintain state is
through a cookie. What happens when you set a session value the server
will look up the dictionary of items associated with that cookie id
(session Id).
What is meant by stateless is that between requests HTTP does not know if
your still alive or have closed your browser. Therefore with each request
the browser will attach all cookie values to the request on the domain.
SessionId is stored in the cookie automatically when they go to your site.
The Server then uses that value to look up anything you've set in the users
session.
Depending on which programming language and/or server you're using the
session could be handled differently but
You're getting back "k2": "v2", "k1": "v1" because they're sent in GET
params. If you follow up with a second request you'll see you send no
cookies. Unless you use requests.Session cookies are not automatically
handled in the client and you have to explicitly pass a dict or CookieJar
with each request.
In [17]: r = requests.get("")
In [18]: r.content
Out[18]: '{
"cookies": {
"k2": "v2",
"k1": "v1"
}
}'
In [20]: r.cookies.get_dict()
Out[20]: {}
In [21]: r = requests.get("")
In [22]: r.content
Out[22]: '{
"cookies": {}
}'
With servlet 3 it is possible to set session tracking mode as a part of
servlet registration - ServletContext#setSessionTrackingModes... you can
try that.
However in your case I would investigate who is calling
HttpServletRequest#getSession(...). Put breakpoint in this method to see
who is calling it. Some piece of code in your application is initializing
session.
You would use this if user checked the remember me checkbox:
Yii::app()->user->login($identity, 24*3600*7);
and this if he didn't:
Yii::app()->user->login($identity, 0);
Make sure that you allowed auto login in your config file:
'components' => array(
'user' => array(
'allowAutoLogin'=>true,
),
// ...
),
you need to setup your middleware in your application.rb
config.middleware.insert_before "ActionDispatch::Cookies",
"GeoFilterMiddleware"
and in your middleware do something like this:
def call(env)
status, headers, body = @app.call(env)
if from_uk?(env)
Rack::Utils.set_cookie_header!(headers, 'country', { :value =>
'UK', :path => '/'})
end
[status, headers, body]
end
Well, you can always pass the User Id in cookie; however, another way to
edit self is to allow "self" or "me" in the URIs to resolve to the sessions
current user for example /users/me/profile/ instead of /users/42/profile/ -
then however you need to make sure that no data is cached as it would be
quite confusing for the users to see another users' profile data as their
own. Actually in my latest project we used both of these approaches
simultaneously.
have a look at
var $qsts = $(".quest").show();
var $anrs = $(".ans").hide();
$('.quest').click(function(){
var $ans = $(this).next().toggle(10);
$anrs.not($ans).hide();
});
$('.ans').change(function(){
var $ans = $(this).closest('.ans');
var $act = $ans.prev().toggleClass('question-active',
$ans.find('input:checkbox:checked').length > 0)
var items = JSON.parse($.cookie('question-active') || '[]');
var qstName = $act.get(0).className.match(/(questiond*)/)[1];
if($act.hasClass('question-active')){
if($.inArray(qstName, items) == -1){
items.push(qstName)
}
} else {
items.splice($.inArray(qstName, items), 1)
}
$.cookie('question-active', JSON.stringify(items))
});
var items = JSON.parse($.cookie('question-act
You can't decide what file name a cookie should have when it's saved,
that's up to the browser.
The only thing you can do with cookies is storing information in them and
reading it later again.
For the request: construct the array, adding any Cookies you want, then add
the behaviour to the mock:
final Cookies[] cookies = new Cookies[] { ... };
final HttpServletRequest request = mock(HttpServletRequest.class);
given(request.getCookies()).thenReturn(cookies);
... pass to controller/servlet etc ...
For the response you create the mock and then verify the addCookie call by
either using an ArgumentCaptor to capture the actual cookie passed to
addCookie:
final ArgumentCapor<Cookie> captor =
ArgumentCaptor.forClass(Cookie.class);
verify(response).addCookie(captor.capture());
final List<Cookie> cookies = captor.getValue();
... perform asserion on cookies ...
Or build the expected cookie and verify:
final Cookie expectedCookie = ...
verify(response).addCookie(expe
Cookie values are url escaped, which means that spaces are replaced with
the + character and other punctuation marks are replaced with %xx codes.
The CGI::Cookie::fetch method decodes the value, and the spaces in your
cookie value are restored.
You can use the raw_fetch method if you don't want the cookie values to be
decoded.
CookieContainers can hold multiple cookies for different websites, therefor
a label (the Domain) has to be provided to bind each cookie to each
website. The Domain can be set when instantiating the individual cookies
like so:
Cookie chocolateChip = new Cookie("CookieName", "CookieValue") { Domain =
"DomainName" };
An easy way to grab the domain to is to make a Uri (if you aren't using one
already) that contains your target url and set the cookie's domain using
the Uri.Host property.
CookieContainer gaCookies = new CookieContainer();
Uri target = new Uri("");
gaCookies.Add(new Cookie("__utmc", "#########") { Domain = target.Host });
There is some code that worked for me. It should expire when you close the
browser because of the date to expire being before now:
var vEnd = new Date();
vEnd.setDate(vEnd.getDate() - 1);
var endOfCookieText = "; expires=" + vEnd.toGMTString() + "; path=/";
document.cookie = escape('testCookie') + "=" + escape("eat cookies") +
endOfCookieText;
FIDDLE MODIFIED
Note that the fiddle gives a bunch of load errors on the console for me.
I am using the same implementation and do not see your issue using
Fiddler2. However maybe the issue is related to your debugging tool? In
IE10 debugging tools the secure and http only flags are only displayed when
the cookies are first received. If you check using Chrome debugging tools
you should see the flags displayed correctly on all requests.
Fixed it by using $.map and .join()
var openClose = $('.openClose');
openClose.on('click', function() {
var cook = ReadCookie('slideHide'),
miniParent =
$(this).parent().parent().parent().children('.main-content'),
miniDisp = miniParent.css('display');
if (miniDisp ==="block") {
KillCookie('slideHide');
$(this).parent().parent().parent().children('.main-content').slideUp();
var slide = cook+","+
"#"+$(this).parent().parent().parent().attr("id");
SetCookie('slideHide', slide, 100);
} else {
$(this).parent().parent().parent().children('.main-content').slideDown();
KillCookie('slideHide');
var newCookie=[],
a= $('.module').children('.main-content').filter(":hidden"),
c = $.map(a,function(n,i){
return "#"+$(n).p
Your Code seems little strange to me...
And if you try it like this:
$(document).ready(function() {
$("#button").click(function() {
if ($.cookie('cookie_1') == null) {
$.cookie('cookie_1', $('#phrase_here').text(), {expires: 7});
}
else {
$.cookie('cookie_2', $('#phrase_here').text(), {expires: 7});
}
});
});
$.cookie is only read access if no other parameters (but the cookie's name)
are supplied to the method [See the source]
If you're interested in reading it, just supply $.cookie('UniqueID') and
remove the second parameter.
As an FYI, path (and other cookie properties) are only relevant when
assigning a value, not retrieving. In other words, you don't need to supply
path:'/' to get cookies that are applied to that path, document.cookie
should natively perform that check.
setMaxAge public void setMaxAge(long.
Thank you all for your assistance.
create a function that returns null or false if the cookie doesn't exist :
function getCookie(c_name) {
var c_value = document.cookie,
c_start = c_value.indexOf(" " + c_name + "=");
if (c_start == -1) c_start = c_value.indexOf(c_name + "=");
if (c_start == -1) {
c_value = null;
} else {
c_start = c_value.indexOf("=", c_start) + 1;
var c_end = c_value.indexOf(";", c_start);
if (c_end == -1) {
c_end = c_value.length;
}
c_value = unescape(c_value.substring(c_start, c_end));
}
return c_value;
}
var acookie = getCookie("cookiename");
if (!acookie) {
alert("Cookie not found.");
}
FIDDLE
Most sites won't send back a cookie with the expires date and will handle
when to log you off server side. The reason for that would be that if it
was grabbing when to log you out from the cookie you could easily modify it
and never get logged out which wouldn't be very secure.
You could try to parse the response from the first code section and use the
extracted url with the same opener. Without knowing the actual format of
the link:
import urllib, urllib2, cookielib, os
import re # going to use this now!
username = 'somebody'
password = 'somepass'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'username' : username, 'j_password' :
resp = opener.open('', login_data)
content = resp.read()
print content
match = re.search(
r"<as+",
content,
re.IGNORECASE
)
assert match is not None, "Couldn't find the file link..."
file_link = match.group('file_link')
print "down
You can simply use
file_get_contents('')
to grab the file, and write it on your disk.
But it seems that your file passes by a script which need JS to be enabled.
Try this to set the header:
$.ajax({
url: '',
type: 'GET',
dataType: "json",
success: displayAll,
headers : {
'ASP.NET_SessionId': 'rtretretertret43545435454'
}
});
Don't forget to add the comma after 'displayAll' as its missing in your
question.
I was dealing with the same problem and I managed to get your first
solution working, only with a slight change. Just replace Set-Cookie width
Cookie:
request.addRequestHeader("Cookie", cookie);
Btw. session based means that the auth data are not stored in cookies but
on server side, identified by a key, which IS stored in cookies. So it
actually doesn't matter whether it's session based or not, cookies are used
in both cases.
I've also tried the second solution (it's simpler) but from what I've read
it seems that Browser.EXTRA_HEADERS is supported only by the default
Android browser. So if the user has a different browser in his device it
won't work.
This is an old question but I hope it will help someone.
The problem was that I had earlier set a cookie with the jquery plugin,
which sets the current path as that path for the cookie. The above .Net
code sets the path of the added cookie to "/", so there was two different
cookies present, which was invisible to me when viewing the
document.cookies variable. My solution was to set the path of the jquery
cookie to be "/".
I have good news and bad news.
The bad news is-- you can't do it. See here for an explanation.
Essentially your server is setting a cookie for a domain it does not
contain, so the cookie is rejected.
The good news is-- you don't want to do this anyway. Your design violates
OWASP 2013 A4 (unsecure and unvalidated direct object reference). In this
case you are storing user's access permissions as the domain of the cookie,
which a hacker can easily modify.
Find a different way to designate subdomain access. There are all kinds of
ways to do this.
Here is one way that is pretty close to your plan:
Store the subdomain in the cookie value itself. Create a secure document
container that lists the subdomains to which the user is granted access.
You could for example store the subdomain
The short answer is no, you can't set the Cookie header. The reason for
this is that Chrome is your User Agent, so it is required by the HTTP
specification to disallow modifications to headers which have security
implications.
One solution would be to perform an action that allows the server to set
the cookie on your XmlHttpRequest object. You say you're already trying to
do this but it's not working. I suspect that's because you need to set
withCredentials on your ajax request. Add the xhrFields attribute, as
follows.
var token;
$.ajax({
url: "",
crossDomain: true,
xhrFields: {withCredentials: true},
type: 'post',
async: false,
data: {
username: "username",
password: "password"
}
}).done(func
All valid linearized (Fast Web View) PDFs are also valid un-linearized
PDFs, so it's hard to see why FPDF would complain - the worst it could do
is produce an output file which is not linearized.
Our cpdf tool can remove linearization easily:
cpdf in.pdf -o out.pdf
ought to do it.
Why don't you use the length proprety to see if the input is empty ?
For example :
$("#linkurl").on('keyup blur', function(e){
if($(this).val().length == 0){
$('#link').attr('disabled');
}else{
$('#link').removeAttr('disabled');
}
});
xdebug.auto_trace is required for the auto prepend stuff.
Try this( from the docs)
xdebug.default_enable
Type: boolean, Default value: 1
If this setting is 1, then stacktraces will be shown by default on an error
event. You can disable showing stacktraces from your code with
xdebug_disable(). As this is one of the basic functions of Xdebug, it is
advisable to leave this setting set to 1.
As far as I know, you can't.
You can disable errors or warnings user wide, or per project. See the
documentation.
Update
Instead, you can use the # noqa commant at the end of a line, to skip that
particular line (see patch 136). Of course, that would skip all PEP8
errors.
The main author argues against source file noise, so the suggested # pep8
Try with Jquery declared because the Jquery is not included:
<html>
<head>
<script type="text/javascript"
src=""></script>
<meta http-
<script type="text/javascript">
$(document).ready(
function(){
$('input:submit').attr('disabled',true);
$('input:file').change(
function(){
if ($(this).val()){
$('input:submit').removeAttr('disabled');
}
else {
$('input:submit').attr('disabled',true);
}
});
});
</script>
Might not be the best solution, but since you can't control the returning
data -
You can load only some of the HTML, e.g. only the elements that interest
you:
$('#some-id').load(' div#elementId');
Also, like apsillers mentioned, you can exclude the script:
$('#some-id').load(' :not(script)');
Or, you could remove it at return level:
$.get('', function(data) {
$(data).find('script').remove();
$('#some-id').html(data);
});
You don't say if this is what you're doing, but --
If you set the cookie and try to read the cookie during the same execution
of the script, the $_COOKIE array will not yet be populated by the cookie.
You won't see it till the next time the browser sends a request to the
script.
there are ways to encrypt sections of the app.config file so that if opened
with a text editor it won't be possible to read its content, would that
help? see the discussion here for some links further down.... Encrypting
the app.config file
or you can also decrypt the appSettings or other settings from your code
and generate encrypted settings at compile time. It depends on how often
and how you plan to edit those settings.
sure there are also other ways and possibly also file access security
methods but it depends on your network as well, if you want to exclude a
group of people from editing that file in your domain, you could give them
only read access but what happens if they copy the whole application folder
to another location and edit the config file in the new location?
I found an easy solution:
(setq ido-auto-merge-delay-time 9)
The time here is in seconds. I could set a very large number to completely
disable this feature.
OK, per the comments, this modified version of your script disables access
to create folders within the target directory whilst the dialog is active
then removes the block at the end before the function completes:
Function Get-FileName($initialDirectory)
{
<#DENY CreateDirectories privilege
to currently logged on security principal#>
$acl = get-acl $initialDirectory
$right = "CreateDirectories"
$principal =
[System.Security.Principal.WindowsIdentity]::GetCurrent().Name
$denyrule = New-Object
System.Security.AccessControl.FileSystemAccessRule($principal,$right,"DENY")
$acl.AddAccessRule($denyrule)
set-acl $initialDirectory $acl
[System.Reflection.Assembly]::LoadWithPartialName("System.windows.forms")
|
Out-Null
$OpenFileDialog = New-Object System.Windows.
I've faced the same issue, and solved it like following :
1) Switched to spring-data-rest-webmvc 1.1.0.M1
2) Split your context configuration to web-config and rest-config
WebConfig.java
@Configuration
@EnableHypermediaSupport
@EnableSpringDataWebSupport
@EnableWebMvc
@ComponentScan(basePackages = {"com.yourcompanyname.XXX"})
public class WebConfig extends WebMvcConfigurationSupport {
@Bean
public InternalResourceView;
}
@Bean
public ReloadableResourceBundleMessageSource messageSource()
{
ReloadableResourceBundleMes
It's in the registry, under
[HKEY_CURRENT_USERSoftwareMicrosoftWindowsCurrentVersionInternet Settings]
you can either use the REG command in your BAT, or prepare a couple of .REG
files, to automate the changes.
for example, to disable Proxy, try
REG ADD "HKCUSoftwareMicrosoftWindowsCurrentVersionInternet Settings" /v
ProxyEnable /t REG_DWORD /d 0 /f | http://www.w3hello.com/questions/-How-to-disable-cookie-in-asp-file-or-asp-NEt- | CC-MAIN-2018-17 | refinedweb | 2,763 | 50.33 |
Quoting constructs
Pretty much every programming language has some form of quoting construct to allow embedding of data in a program, be it literal strings, numeric data or some combination thereof.
Show examples of the quoting constructs in your language. Explain where they would likely be used, what their primary use is, what limitations they have and why one might be preferred over another. Is one style interpolating and another not? Are there restrictions size of the quoted data? The type? The format?
This is intended to be open-ended and free form. If you find yourself writing more than a few thousand words of explanation, summarize and provide links to relevant documentation; but do provide at least a fairly comprehensive summary here, on this page, NOT just a link to [See the language docs].
Note: This is primarily for quoting constructs for data to be "embedded" in some way into a program. If there is some special format for external data, it may be mentioned but that isn't the focus of this task.
Go[edit]
package main
import (
"fmt"
"os"
"regexp"
"strconv"
)
/* Quoting constructs in Go. */
// In Go a Unicode codepoint, expressed as a 32-bit integer, is referred to as a 'rune'
// but the more familiar term 'character' will be used instead here.
// Character literal (single quotes).
// Can contain any single character including an escaped character.
var (
rl1 = 'a'
rl2 = '\'' // single quote can only be included in escaped form
)
// Interpreted string literal (double quotes).
// A sequence of characters including escaped characters.
var (
is1 = "abc"
is2 = "\"ab\tc\"" // double quote can only be included in escaped form
)
// Raw string literal(single back quotes).
// Can contain any character including a 'physical' new line but excluding back quote.
// Escaped characters are interpreted literally i.e. `\n` is backslash followed by n.
// Raw strings are typically used for hard-coding pieces of text perhaps
// including single and/or double quotes without the need to escape them.
// They are particularly useful for regular expressions.
var (
rs1 = `
first"
second'
third"
`
rs2 = `This is one way of including a ` + "`" + ` in a raw string literal.`
rs3 = `\d+` // a sequence of one or more digits in a regular expression
)
func main() {
fmt.Println(rl1, rl2) // prints the code point value not the character itself
fmt.Println(is1, is2)
fmt.Println(rs1)
fmt.Println(rs2)
re := regexp.MustCompile(rs3)
fmt.Println(re.FindString("abcd1234efgh"))
/* None of the above quoting constructs can deal directly with interpolation.
This is done instead using library functions.
*/
// C-style using %d, %f, %s etc. within a 'printf' type function.
n := 3
fmt.Printf("\nThere are %d quoting constructs in Go.\n", n)
// Using a function such as fmt.Println which can take a variable
// number of arguments, of any type, and print then out separated by spaces.
s := "constructs"
fmt.Println("There are", n, "quoting", s, "in Go.")
// Using the function os.Expand which requires a mapper function to fill placeholders
// denoted by ${...} within a string.
mapper := func(placeholder string) string {
switch placeholder {
case "NUMBER":
return strconv.Itoa(n)
case "TYPES":
return s
}
return ""
}
fmt.Println(os.Expand("There are ${NUMBER} quoting ${TYPES} in Go.", mapper))
}
- Output:
97 39 abc "ab c" first" second' third" This is one way of including a ` in a raw string literal. 1234 There are 3 quoting constructs in Go. There are 3 quoting constructs in Go. There are 3 quoting constructs in Go.
Phix[edit]
Single quotes are used for single ascii characters, eg 'A'. Multibyte unicode characters are typically held as utf-8 strings.
Double quotes are used for single-line strings, with backslash interpretation, eg "one\ntwo\nthree\n".
The concatenation operator & along with a couple more quotes can certainly be used to mimic string continuation, however it is technically an implementation detail rather than part of the language specification as to whether that occurs at compile-time or run-time.
Phix does not support interpolation other than printf-style, eg printf(1,"Hello %s,\nYour account balance is %3.2f\n",{name,balance}).
Back-ticks and triple-quotes are used for multi-line strings, without backslash interpretation, eg
constant t123 = `
one
two
three
`
or (entirely equivalent, except the following can contain back-ticks which the above cannot, and vice versa for triple quotes)
constant t123 = """
one
two
three
"""
Both are also equivalent to the top double-quote one-liner. Note that a single leading '\n' is automatically stripped.
Several builtins such as substitute, split, and join are often used to convert such strings into the required internal form.
Regular expressions are usually enclosed in back-ticks, specifically to avoid backslash interpretation.
You can also declare hexadecimal strings, eg
x"1 2 34 5678_AbC" -- same as {0x01, 0x02, 0x34, 0x56, 0x78, 0xAB, 0x0C}
-- note however it displays as {1,2,52,86,120,171,12}
-- whereas x"414243" displays as "ABC" (as all chars)
Literal sequences are represented with curly braces, and can be nested to any depth, eg
{2, 3, 5, 7, 11, 13, 17, 19}
{1, 2, {3, 3, 3}, 4, {5, {6}}}
{{"John", "Smith"}, 52389, 97.25}
{} -- the 0-element sequence
Raku[edit]
The Perl philosophy, which Raku thoroughly embraces, is that "There Is More Than One Way To Do It" (often abbreviated to TIMTOWDI). Quoting constructs is an area where this is enthusiastically espoused.
Raku has a whole quoting specific sub-language built in called Q. Q changes the parsing rules inside the quoting structure and allows extremely fine control over how the enclosed data is parsed. Every quoting construct in Raku is some form of a Q syntactic structure, using adverbs to fine tune the desired behavior, though many of the most commonly used have some form of "shortcut" syntax for quick and easy use. Usually, when using an adverbial form, you may omit the Q: and just use the adverb.
In general, any and all quoting structures have theoretically unlimited length, in practice, are limited by memory size, practically, it is probably better to limit them to less than a gigabyte or so, though they can be read as a supply, not needing to hold the whole thing in memory at once. They can hold multiple lines of data. How the new-line characters are treated depends entirely on any white-space adverb applied. The Q forms use some bracketing character to delimit the quoted data. Usually some Unicode bracket ( [], {}, <>, ⟪⟫, whatever,) that has an "open" and "close" bracketing character, but they may use any non-indentifier character as both opener and closer. ||, //, ??, the list goes on. The universal escape character for constructs that allow escaping is backslash "\".
The following exposition barely scratches the surface. For much more detail see the Raku documentation for quoting constructs for a comprehensive list of adverbs and examples.
- The most commonly used
- Q[ ], common shortcut: 「 」
- The most basic form of quoting. No interpolation, no escape sequences. What is inside is what you get. No exceptions.
- 「Ze backslash characters!\ \Zay do NUSSING!! \」 -> Ze backslash characters!\ \Zay do NUSSING!! \
- "Single quote" quoting. - Q:q[ ], adverbial: q[ ], common shortcut: ' '
- No interpolation, but allow escape sequences.
- E.G. 'Don\'t panic!' -> Don't panic!
- "Double quote" quoting. - Q:qq[ ], adverbial: qq[ ], common shortcut: " "
- Interpolates: embedded variables, logical characters, character codes, continuations.
- E.G. "Hello $name, today is {Date.today} \c[grinning face] \n🦋" -> Hello Dave, today is 2020-03-25 😀
- 🦋
- Where $name is a variable containing a name (one would imagine), {Date.today} is a continuation - a code block to be executed and the result inserted, \c[grinning face] is the literal emoji character 😀 as a character code, \n is a new-line character and 🦋 is an emoji butterfly. Allows escape sequences, and indeed, requires them when embedding data that looks like it may be an interpolation target but isn't.
Every adverbial form has both a q and a qq variant to give the 'single quote' or "double quote" semantics. Only the most commonly used are listed here.
- "Quote words" - Q:qw[ ], adverbial: qw[ ], common shortcut: < >
- No interpolation, but allow escape sequences. (Inherited from the q[] escape semantics
- E.G. < a β 3 Б 🇩🇪 >
- Parses whatever is inside as a white-space separated list of words. Returns a list with all white space removed. Any numeric values are returned as allomorphs.
- That list may be operated on directly with any listy operator or it may be assigned to a variable.
- say < a β 3 Б 🇩🇪 >[*-1] # What is the last item in the list? (🇩🇪)
- say +< a β 3 Б 🇩🇪 > # How many items are in the list? (5)
- "Quote words with white space protection" - Q:qww[ ], adverbial: qww[ ]
- May preserve white space by enclosing it in single or double quote characters, otherwise identical to qw[ ].
- say qww< a β '3 Б' 🇩🇪 >[2] # Item 2 in the list? (3 Б)
- "Double quote words" quoting. - Q:qqw[ ], adverbial: qqw[ ], common shortcut: << >> or « »
- Interpolates similar to standard double quote, but then interprets the interpolated string as a white space separated list.
- "Double quoted words with white space protection" - Q:qqww[ ], adverbial: qqww[ ]
- Same as qqw[ ] but retains quoted white space.
- "System command" - Q:qx[ ], adverbial: qx[ ]
- Execute the string inside the construct as a system command and return the result.
- Return structured text between two textual delimiters. Depending on the adverb, may or may not interpolate (same rules as other adverbial forms.) Will return the text with the same indent as the indent of the final delimiter. The text delimiter is user chosen (and is typically, though not necessarily uppercase) as is the delimiter bracket character.
There are other adverbs to give precise control what interpolates or doesn't, that may be applied to any of the above constructs. See the doc page for details. There is another whole sub-genre dedicated to quoting regexes.
zkl[edit]
Quoting text: zkl has two types of text: parsed and raw. Strings are limited to one line, no continuations.
Parsed text is in double quotes ("text\n") and escape ("\n" is newline, UTF-8 ("\Ubd;" or "\u00bd"), etc).
"Raw" text is unparsed, usefull for things like regular expressions and unit testing of source code. It uses the form 0'<sigil>text<sigil>. For example 0'<text\n> is the text "text\\n". There is no restriction on sigil (other than it is one character).
Text blocks are nultiple lines of text that are gathered into one line and then evaluated (thus can be anything, such as string or code and are often mixed). #<<< (at the start of a line) begins and ends the block. A #<<<" beginning tag prepends a " to the block. For example:
#<<<
text:=
"
A
";
#<<<
is parsed as text:="\nA\n";
Other data types are pretty much as in other languages. | http://rosettacode.org/wiki/Quoting_constructs | CC-MAIN-2020-16 | refinedweb | 1,779 | 57.37 |
jGuru Forums
I'm having problems installing the JMF
Posted By: Mohammed_Malik
Posted On: Tuesday, February 15, 2005 03:37 AM
I'm running Windows XP and using JCreator as my IDE. I have been trying to get my applet to work as a JApplet. I got some code which had import javax.media, and I found out that I need to install JMF, which I have duly done from the Sun website. I've followed the installation instructions to the best of my understanding, but after installing I still get the error:
import javax.media package not found
I've been trying to sort this out for almost 2 weeks with no success. Can you please help?
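In case it helps, even a bare-bones class that does nothing but import the package fails to compile for me. Something like the following (JmfTest is just a throwaway example name, not my real applet code):

// Minimal test: if JMF is installed correctly and jmf.jar is on the
// compiler's classpath, this should compile and run without errors.
import javax.media.Manager; // any class from javax.media reproduces the error

public class JmfTest {
    public static void main(String[] args) {
        // Quick runtime check: prints the installed JMF version string.
        System.out.println("JMF version: " + Manager.getVersion());
    }
}

From what I've read, jmf.jar has to be on the classpath for the compiler to find javax.media, but I'm not sure how to set that up in JCreator.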
Re: I'm having problems installing the JMF
Posted By: Reza_Nazarian
Posted On: Thursday, March 24, 2005 09:28 AM
:
Easy
PREREQUISITES:
Strings, palindromes, ad hoc, dynamic programming
PROBLEM:
Given N, output a string consisting of lowercase letters such that there are exactly N palindromic substrings. Two substrings are considered different if they have different starting / ending positions.
QUICK EXPLANATION:
Print a string of length N of the following form:
abcabcabcabcabcabc.... This string has exactly N palindromic substrings.
EXPLANATION:
This problem admits many kinds of solutions. We’ll first describe the simplest one.
Simple
Notice that any string with length K has at least K palindromic substrings, because every single-character substring is palindromic! Thus, the answer, if it exists, must have a length of \le N. Unfortunately, most strings of length K actually has strictly more than K palindromic substrings. For example,
abababab contains 8 one-character palindromic substrings, but it also has the substrings
aba,
bab,
ababa, etc., appearing multiple times. In fact, if there is a character appearing twice consecutively, then you automatically have more than K palindromic substrings. In the extreme case, in a string consisting of only one letter like
aaaaa..., all substrings are palindromic.
But in fact there are ways to construct a string of length K having exactly K palindromic substrings. One of the simplest is a string of the form:
abcabcabcabcabcabc.... It’s easy to see why it doesn’t have any palindromic substring of length > 1: any substring of length > 1 has at least one of the following substrings:
ab,
bc, and
ca. But the reversals of these substrings don’t appear anywhere in the string!
Thus, a very, very simple solution arises: Simply print a string of the form
abcabcabcabcabcabc... of length N!
Short
The above solution is probably the simplest you can ever get. But what if we are conserving letters, i.e., we want the string to be as short as possible?
In that case, we want our string to actually have lots of palindromic substrings of length > 1. As mentioned above, an example of this is a string consisting of only one character like
aaaaa.... If the length is K, then there are 1 + 2 + 3 + \ldots + K = K(K+1)/2 palindromic substrings. Thus, to form a string with N palindromic substrings, we simply find a K such that K(K+1)/2 = N. Unfortunately, not all N s can be represented as K(K+1)/2. In fact, such numbers are rare! Below x, there are only approximately \sqrt{2x} such values.
The idea in this solution is to append such single-character strings together to form one large string. For example, consider the strings
aaa...aaabbb...bbb. Suppose there are A
as and B
bs. It’s easy to see that no palindromic substrings containing an
a and a
b because if so, the substring
ab would appear, but the reversal,
ba, doesn’t appear anywhere in the string! Thus, there are A(A+1)/2 + B(B+1)/2 palindromic substrings. Similarly,
aaa...aaabbb...bbbccc...ccc has A(A+1)/2 + B(B+1)/2 + C(C+1)/2 substrings, etc…
So our goal is to find a representation of N as a sum of triangle numbers where the sum of the bases is as small as possible. We can achieve this using dynamic programming: Let F(N) be the smallest sum of bases for any representation of N as a sum of triangle numbers. Then a simple recurrence can be derived by enumerating all possibities for the last summand. Specifically, if k(k+1)/2 is the last summand in the representation, then:
Our base case here is simply F(0). Along with F(N), we also store the k that minimizes the above, so that we can reconstruct the representation!
The following pseudocode illustrates this:
BOUND = 10011 F[0..BOUND] K[0..BOUND] for N = 1..BOUND: F[N] = INFINITY for k=0 increasing where k*(k+1)/2 <= N: cost = k + F[N - k*(k+1)/2] if F[N] > cost: F[N] = cost K[N] = k // store the k that minimizes
Now, to find the representation of some N, we simply use the
K array:
def find_representation(N): rep = [] while N > 0: k = K[N] rep.append(k) N -= k*(k+1)/2 return rep
Finally, we can construct the string itself using the representation with something like this:
alphabet = 'abcdefghijklmnopqrstuvwxyz' def construct(N): rep = find_representation(N) s = '' for i=0..rep.length-1: append alphabet[i] to s rep[i] times return s
With this answer, it turns out that you only need a string of at most 161 characters, and at most 5 distinct characters!
Investigating the shortest possible strings
The algorithm above produces some really short strings. Unfortunately, they aren't the shortest possible. The smallest counterexample is N = 26: the algorithm produces a string of length 10, aaaaaabbcd, but in fact a shorter string is possible: aaaaaabab, of length 9 (this matches the table below; note that aaaaaabaa is the optimal string for N = 27).
In fact, it doesn't seem easy to figure out the shortest possible string, so unfortunately I don't have the answer for that.
If you wish to investigate it, here are the optimal answers for every N \le 55, along with an optimal string. Perhaps you can find a pattern and then derive a formula for the optimal string.
N | opt | string
--- | --- | ---
1 | 1 | a
2 | 2 | ab
3 | 2 | aa
4 | 3 | aab
5 | 4 | aabc
6 | 3 | aaa
7 | 4 | aaab
8 | 5 | aaabc
9 | 5 | aaaba
10 | 4 | aaaa
11 | 5 | aaaab
12 | 6 | aaaabc
13 | 6 | aaaaba
14 | 7 | aaaabac
15 | 5 | aaaaa
16 | 6 | aaaaab
17 | 7 | aaaaabc
18 | 7 | aaaaaba
19 | 8 | aaaaabac
20 | 8 | aaaaabab
21 | 6 | aaaaaa
22 | 7 | aaaaaab
23 | 8 | aaaaaabc
24 | 8 | aaaaaaba
25 | 9 | aaaaaabac
26 | 9 | aaaaaabab
27 | 9 | aaaaaabaa
28 | 7 | aaaaaaa
29 | 8 | aaaaaaab
30 | 9 | aaaaaaabc
31 | 9 | aaaaaaaba
32 | 10 | aaaaaaabac
33 | 10 | aaaaaaabab
34 | 10 | aaaaaaabaa
35 | 11 | aaaaaaabaac
36 | 8 | aaaaaaaa
37 | 9 | aaaaaaaab
38 | 10 | aaaaaaaabc
39 | 10 | aaaaaaaaba
40 | 11 | aaaaaaaabac
41 | 11 | aaaaaaaabab
42 | 11 | aaaaaaaabaa
43 | 12 | aaaaaaaabaac
44 | 12 | aaaaaaaabaab
45 | 9 | aaaaaaaaa
46 | 10 | aaaaaaaaab
47 | 11 | aaaaaaaaabc
48 | 11 | aaaaaaaaaba
49 | 12 | aaaaaaaaabac
50 | 12 | aaaaaaaaabab
51 | 12 | aaaaaaaaabaa
52 | 13 | aaaaaaaaabaac
53 | 13 | aaaaaaaaabaab
54 | 14 | aaaaaaaaabaabc
55 | 10 | aaaaaaaaaa
Hi there. I have a little problem which is related to Python itself rather than Panda3D.
I use a dictionary to store some states for my game. Then I use a task which reads those states (the keys) every frame and, depending on which one has the value 1, sets the game's state.
Now, the problem is that if I define the dictionary using
global [varname]
in my code I get this error:
NameError: global name 'states' is not defined
Then if I use "self" to access it, "self" is not defined, and I just don't know how to pass "self" to the task. I have tried to add the task with arguments but I cannot pass "self" as an argument.
Here is my code:
from direct.showbase.ShowBase import ShowBase
from direct.task import Task

class Main(ShowBase):

    # States dictionary
    states = {"STARTUP":0, "SHOWMENU":0, "STOP":0}

    # Constructor
    def __init__(self):
        ShowBase.__init__(self)

    # Methods
    def setState(self, key):
        for key_d in self.states:
            if(key_d == key):
                self.states[key_d] = 1
            else:
                self.states[key_d] = 0

    # Tasks
    ## Task manager
    taskMgr = Task.TaskManager()

    ## Game state task
    def gameStateTask(task):
        global states
        for key, value in states:
            if(key == "STARTUP"):
                if(value == 1):
                    print("start")
                return task.cont
            elif(key == "SHOWMENU"):
                if(value==1):
                    print("showmenu")
            elif(key == "STOP"):
                if(value == 1):
                    print("stop")

    ## Add the tasks
    taskMgr.add(gameStateTask, "StateManager")

# Run the app
app = Main()
app.run()
Any solution? | https://discourse.panda3d.org/t/problem-with-python/12208 | CC-MAIN-2022-33 | refinedweb | 239 | 66.94 |
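One possible direction (an untested sketch, reusing your names): keep the dictionary as an instance attribute and add the task as a bound method, so Panda3D passes self for you and no global is needed:

from direct.showbase.ShowBase import ShowBase

class Main(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        self.states = {"STARTUP": 0, "SHOWMENU": 0, "STOP": 0}
        # a bound method carries self along automatically
        self.taskMgr.add(self.gameStateTask, "StateManager")

    def gameStateTask(self, task):
        # iterate key/value pairs with items(), not the dict itself
        for key, value in self.states.items():
            if key == "STARTUP" and value == 1:
                print("start")
            elif key == "SHOWMENU" and value == 1:
                print("showmenu")
            elif key == "STOP" and value == 1:
                print("stop")
        return task.cont

app = Main()
app.run()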
Hi all,
I'm having a bit of a problem with my code. I've created an arraylist method and when values are inputted it prints them one by one.
import java.util.*;

public class ArrayListDemo{
    public static void test(int i){
        List arrlist = new ArrayList();
        arrlist.add(i);
        System.out.print(arrlist);
    }

    public static void main(String[] args) {
        test(10);
        test(20);
        test(30);
        test(40);
        test(50);
    }
}
the array list prints as [10]
[20]
[30]
[40] and so on.
How I want it to print is like,
[10]
[10, 20]
[10, 20, 30] and so on.
I think there's something wrong with the basics, but I just can't figure it out. Any help will be much appreciated.
Thanx for taking the time to read. :) | https://www.daniweb.com/programming/software-development/threads/337123/arraylist-storing-problem | CC-MAIN-2018-43 | refinedweb | 128 | 82.65 |
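For what it's worth, the list prints fresh each time because it is created inside test(), so every call starts from an empty list. One possible fix (a sketch keeping your class name) is to make the list a field shared across calls:

import java.util.*;

public class ArrayListDemo {
    // one shared list, so each call appends instead of starting over
    private static List<Integer> arrlist = new ArrayList<Integer>();

    public static void test(int i) {
        arrlist.add(i);
        System.out.println(arrlist);  // println puts each snapshot on its own line
    }

    public static void main(String[] args) {
        test(10);
        test(20);
        test(30);
        test(40);
        test(50);
    }
}

This prints [10], then [10, 20], then [10, 20, 30], and so on.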
Let's start our Django application from the beginning. To start our first application we have to follow the steps below. Before starting our project, it is worth learning about the Django project layout.
- Create a directory with your project name and change into it. Let's call our project "Library Management".
- Now, create virtual environment for our project.
- Install Django package in environment.
- Start our project Library Management
- Run the below command to test our project setup.
- Now create an application "library" to show "hello world" as response in browser
- "db.sqlite3" is file database developed in python. It is best suitable for beginners. we can also use MySQL and Postgres databases. "library" is our application with its files in it.
- Now, open "library_management/urls.py" and replace the contents of file with below code.
- And open "library/views.py" file and replace file contents with below code
- Open your web browser and access the URL "http://127.0.0.1:8000/hello-world/". You will find "Hello World" as the response.
mkdir Library && cd Library
virtualenv env
source env/bin/activate

The virtual environment is now ready to use.
pip install django
django-admin startproject library_management

After executing the above command a new folder "library_management" will be created. Inside that folder you can see another folder "library_management" and a file "manage.py". Inside the "library_management/library_management" folder you can find the files below.
__init__.py settings.py urls.py wsgi.py
python manage.py runserver

Open your browser and hit the URL http://127.0.0.1:8000/. You can see the below output.
If you do not see the above output, you might have missed something. So, delete everything that you did and start from the beginning again to save time. As we are beginners, we don't yet know much about resolving errors.
python manage.py startapp library

After executing the above command you can observe that a folder named "library" has been created in the directory, along with a file "db.sqlite3". If you look inside the folder "library" you can find the following files.
admin.py apps.py __init__.py migrations models.py tests.py views.py
from django.conf.urls import url from django.contrib import admin from library import views urlpatterns = [ url(r'^admin/', admin.site.urls), url(r'^hello-world/$', views.hello_world), ]
from django.http import HttpResponse def hello_world(request): html_source_code = """ <html> <title> Hello World</title> <body> <h1> Hello World</h1> </body> </html> """ return HttpResponse(html_source_code) | http://django-easy-tutorial.blogspot.in/2017/03/django-first-application.html | CC-MAIN-2017-26 | refinedweb | 392 | 53.17 |
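As a quick sanity check (assuming the development server from the earlier step is still running on the default port), requesting the route from another terminal should print the markup:

curl http://127.0.0.1:8000/hello-world/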
CS100J, Fall 2005
Answers to Sample Prelim 1 Questions

Sample questions

Below, we give some sample questions. The answers are given after all the questions. Note that you may be asked to write a small procedure or function, using assignments, if-statements, blocks, and return statements. Two sample questions of this nature appear at the end of the sample questions.

1. (a) When do you use "=" and "=="? (b) What is the difference between 'c' and "c"? (c) If b == true and c == "true", what is the type of the variables b and c? (d) What is the difference between a method declared with keyword static and one without the keyword?

2. Below is a class. Draw a folder of this class.

public class A{
    public static void main(){
        int a= 2;
        int b= negate(negate(a));
    }
    public static int negate(int x) {
        return (-x);
    }
}

3. Find the various syntax and semantic errors in the code given below:

public class A{
    public static void main() {
        String a= "true"
        String b= false;
        int d = diffinlength(a)
    }
    public static boolean diffinlength(String s1, String s2) {
        return (abs(s1.length - s.length()))
    }
}

4. Starting with values a=5, b=23, c=7, d=0, b1=true, b2=false, find the values of a, b, c, d. Start with the above values for EACH item.
(a) if ((a%a)==d)
(b) d= b/c ; d= 3; a= a-d ;
(c) a= a*5 ;
(d) if (b1) b2= true ; a= a - -5 ; if (b2) b1= false ;
(e) if (b1 || (a!=3)) b2= true ;

5. Give the syntax of the assignment statement and write down how to execute it.

6. Give the syntax of a block and explain how to execute it.

7. Below is a class Employee.

public class Employee {
    private String name;     // employee's name
    private Date hireDate;   // date employee was hired

    /** Constructor: employee named n hired on date d */
    public Employee(String n, Date d) { … }

    /** = name of the employee */
    public String getName() { … }

    /** = hireDate */
    public Date getHireDate() { … }

    /** = a representation of the employee, giving their name and date of hire */
    public String toString() { … }
}

(a) Write the three method bodies (but not the constructor body).
(b) Write a new-expression to create an Employee with name "Roger" who is hired at the time the new-expression is evaluated. Draw the manila folder that represents the newly created object.

8. Look at class Employee of the previous exercise.
(a) Write a subclass VIP that has a field, bonus, which contains a double value. The subclass needs a constructor that initializes all three fields. Make sure you write the body of the constructor correctly. The subclass should have its own toString function and a getter method for the bonus.
(b) Which components does subclass VIP inherit? Which does it override?
Python alternatives for PHP functions
import time
result = time.mktime(my_time_struct)
result = time.mktime((year, month, day, hour, minute, second, wday, yday, is_dst))
(PHP 4, PHP 5)
mktime — Get Unix timestamp for a date.
The number of the hour.
The number of the minute.
The number of seconds past the minute.
The number of the month.
The number of the day.
The number of the year.

Note: a message is issued if the time zone is taken from the system settings or the TZ environment variable; see also date_default_timezone_set(). mktime() now issues the E_STRICT and E_NOTICE time zone errors.
Example #1 mktime() example

mktime() is useful for doing date arithmetic and validation, as it will automatically calculate the correct value for out-of-range input. For example, each of the following lines produces the string "Jan-01-1998".
<?php
echo date("M-d-Y", mktime(0, 0, 0, 12, 32, 1997));
echo date("M-d-Y", mktime(0, 0, 0, 13, 1, 1997));
echo date("M-d-Y", mktime(0, 0, 0, 1, 1, 1998));
?>
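A rough Python equivalent of the same normalisation behaviour (time.mktime is based on the C library's mktime, which also wraps out-of-range fields on most platforms):

import time

# each tuple below normalises to Jan-01-1998 00:00:00 local time
for t in ((1997, 12, 32, 0, 0, 0, 0, 0, -1),
          (1997, 13, 1, 0, 0, 0, 0, 0, -1),
          (1998, 1, 1, 0, 0, 0, 0, 0, -1)):
    print(time.strftime("%b-%d-%Y", time.localtime(time.mktime(t))))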
18 October 2007 05:28 [Source: ICIS news]
By Prema Viswanathan
SINGAPORE (ICIS news)--The year-end holiday season demand for playstation consoles and compact discs (CDs) is set to drive Asian polycarbonate (PC) consumption from late October, producers and traders said on Thursday.
Buyers were also keen to secure supplies amid worries that PC prices may track surging crude values, which have been hovering around $88/bbl, and turnarounds which tightened supply.
Asian PC markets had been somewhat tepid in the past few weeks due to sluggish trade, but this is set to change in the coming weeks, a producer said.
“The optical grade market is likely to be more bullish than the general purpose (GP) moulding and extrusion grades, due to strengthening demand for CDs and handheld game consoles, whose casing is made of PC,” he said.
PC optical grade prices have risen by $20/tonne in northeast Asia to $2,630-2,650/tonne CIF (cost, insurance and freight).
A second producer said it had raised offers for November by $20/tonne for the optical grade from October levels and was hopeful of achieving its price target.
Another producer, however, anticipated customer resistance if it raised offers despite the higher demand and was planning to roll over prices next month.
Buying interest for GP moulding grade PC also picked up this week, ahead of the peak demand season for the electronics and office automation segments which begins in November, a trader said.
Prices of this grade rose by up to $50/tonne in southeast Asia, to $2,800-2,850/tonne CIF SE Asia, and by up to $30/tonne in northeast Asia to $2,800-2,830/tonne CIF Hong Kong.
The sharp surge in crude values this week brought PC buyers into the market, in anticipation of a rise in PC prices in the coming weeks on the back of an expected increase in feedstock bisphenol A (BPA) prices.
Also boosting market sentiment was relatively snug supply due to maintenance turnarounds at Chimei-Asahi’s | http://www.icis.com/Articles/2007/10/18/9071012/holiday-season-to-boost-asian-pc-demand.html | CC-MAIN-2015-11 | refinedweb | 343 | 50.4 |
Solving an Implicit Relationship with a Newton Solver¶
This tutorial will show how to use a solver to drive some unknown in your model to zero by varying a param. In OpenMDAO you do this by adding a component with an implicit equation, and specifying a nonlinear solver in the containing group.
We will illustrate this with a simple problem.
Find the Intersection of a Parabola and a Line¶
Consider a parabola defined by “y = 3x^2 - 5” and a line defined by “y = -2x + 4”. We would like to find their intersection.
The figure above shows that there are two solutions to this problem. We will implement this in OpenMDAO and find both solutions.
For our approach, we will consider the line and parabola as separate disciplines, each requiring its own Component definition that takes a value ‘x’ as an input param, and produces the output ‘y’. Implementing these 2 components with derivatives is straightforward:
from __future__ import print_function from openmdao.api import Component, Group, Problem, Newton, ScipyGMRES class Line(Component): """Evaluates y = -2x + 4.""" def __init__(self): super(Line, self).__init__() self.add_param('x', 1.0) self.add_output('y', 0.0) # User can change these. self.slope = -2.0 self.intercept = 4.0 def solve_nonlinear(self, params, unknowns, resids): """ y = -2x + 4 """ x = params['x'] m = self.slope b = self.intercept unknowns['y'] = m*x + b def linearize(self, params, unknowns, resids): """ Jacobian for our line.""" m = self.slope J = {} J['y', 'x'] = m return J class Parabola(Component): """Evaluates y = 3x^2 - 5""" def __init__(self): super(Parabola, self).__init__() self.add_param('x', 1.0) self.add_output('y', 0.0) # User can change these. self.a = 3.0 self.b = 0.0 self.c = -5.0 def solve_nonlinear(self, params, unknowns, resids): """ y = 3x^2 - 5 """ x = params['x'] a = self.a b = self.b c = self.c unknowns['y'] = a*x**2 + b*x + c def linearize(self, params, unknowns, resids): """ Jacobian for our parabola.""" x = params['x'] a = self.a b = self.b J = {} J['y', 'x'] = 2.0*a*x + b return J
We have made these components slightly more general so that you can, for example, change the slope and intercept of the Line to try solving different problems.
Now we need to add a component that defines a residual for the difference between “parabola.y” and “line.y”. We want to let an OpenMDAO solver drive this difference to zero.
class Balance(Component): """Evaluates the residual y1-y2""" def __init__(self): super(Balance, self).__init__() self.add_param('y1', 0.0) self.add_param('y2', 0.0) self.add_state('x', 5.0) def solve_nonlinear(self, params, unknowns, resids): """This component does no calculation on its own. It mainly holds the initial value of the state. An OpenMDAO solver outside of this component varies it to drive the residual to zero.""" pass def apply_nonlinear(self, params, unknowns, resids): """ Report the residual y1-y2 """ y1 = params['y1'] y2 = params['y2'] resids['x'] = y1 - y2 def linearize(self, params, unknowns, resids): """ Jacobian for our parabola.""" J = {} J['x', 'y1'] = 1.0 J['x', 'y2'] = -1.0 return J
This component holds both our state and the residual. This component produces no explicit outputs, so the solve_nonlinear method doesn’t do anything (but it still must be declared). In the apply_nonlinear method, we take the difference “y1-y2” and place it in the residual for “x”. The derivatives are straightforward.
Note that the residual equation is not a direct function of the state, but it is indirectly a function via y1 and y2. The partial derivative of the residual with respect to ‘x’ is zero, though the total derivative calculated by OpenMDAO of the residual with respect to ‘x’ is nonzero.
Finally, lets set up the model.
top = Problem() root = top.root = Group() root.add('line', Line()) root.add('parabola', Parabola()) root.add('bal', Balance()) root.connect('line.y', 'bal.y1') root.connect('parabola.y', 'bal.y2') root.connect('bal.x', 'line.x') root.connect('bal.x', 'parabola.x') root.nl_solver = Newton() root.ln_solver = ScipyGMRES() top.setup()
Here we connect the output of the Line and Parabola component to the params of the Balance component. The state on “Balance” feeds the params on both components.
To solve this system, we need to specify a nonlinear solver in “root.nl_solver”. There are two types of solvers in OpenMDAO: nonlinear solvers and linear solvers.
A nonlinear solver is used to drive residuals to zero by varying other quantities in your model. The quantities that are varied by the nonlinear solver include all states, but also include any cyclic params on the first component in a cycle. Every unknown in OpenMDAO has a corresponding residual and the nonlinear solver seeks to drive the norm of all the residuals to zero.
A linear solver solves the linearized system of equations in order to calculate a gradient (though there are some other uses too such as preconditioning.)
Every Group contains a linear solver in ln_solver and a nonlinear solver in nl_solver. The default nonlinear solver is called RunOnce, which just runs the components in the group one time without driving the residuals to zero. The default linear solver is LinearGaussSeidel, which is an adequate chain-rule solution for the gradient, but must be replaced if your model has cycles or states.
The Newton solver is well-suited for solving this sort of problem, and is the solver you will generally use when solving any system with an implicit state, so we specify Newton in “root.nl_solver”. The Newton solver requires gradients and calculates them through use of the linear solver in “root.ln_solver”. The default solver is LinearGaussSeidel, but to calculate the gradients across a system with implicit states, we should use the ScipyGMRES linear solver, which handles the coupled problem by solving a system of linear equations.
top.run() print('Solution x=%.2f, line.y=%.2f, parabola.y=%.2f' % (top['bal.x'], top['line.y'], top['parabola.y']))
Running our code should give us an answer:
Solution x=1.43, line.y=1.14, parabola.y=1.14
On Initial Values for States¶
Our problem has two solutions, and we have found one of them. Which solution you arrive at is determined by the initial condition you chose, specifically the solution follows the gradient from the initial point to the solution.
We can find both solutions then:
# Positive solution top['bal.x'] = 7.0 root.list_states() top.run() print('Positive Solution x=%.2f, line.y=%.2f, parabola.y=%.2f' % (top['bal.x'], top['line.y'], top['parabola.y'])) # Negative solution top['bal.x'] = -7.0 root.list_states() top.run() print('Negative Solution x=%.2f, line.y=%.2f, parabola.y=%.2f' % (top['bal.x'], top['line.y'], top['parabola.y']))
OpenMDAO provides a function list_states that lists all the states contained in a group and all of its subgroups. This can be useful in larger nested models that have many implicit components. Since your initial state potentially feeds the initial params in other components, it is important to inspect them to make sure they are correct.
States in model:
bal.x: 7.000000

Positive Solution x=1.43, line.y=1.14, parabola.y=1.14

States in model:
bal.x: -7.000000

Negative Solution x=-2.10, line.y=8.19, parabola.y=8.19
Whenever you’re learning a new tool, you should first ask yourself two questions.
1) Why does this tool exist? 2) What problems does this tool solve?
If you can’t answer both of those questions, you may not need the tool in the first place. Let’s take those questions and apply them to webpack.
At its core, webpack is a module bundler. It examines all of the modules in your application, creates a dependency graph, then intelligently puts all of them together into one or more bundle(s) that your
index.html file can reference.
App.js       ---> |         |
Dashboard.js ---> | Bundler | ---> bundle.js
About.js     ---> |         |
Historically when building a JavaScript application, your JavaScript code would be separated by files (these files may or may not have been actual modules). Then in your
index.html file, you’d have to include
<script> tags to every JavaScript file you had.
<body>...<script src=""></script><script src="libs/react.min.js"></script><script src='src/admin.js'></script><script src='src/dashboard.js'></script><script src='src/api.js'></script><script src='src/auth.js'></script><script src='src/rickastley.js'></script></body>
Not only was this tedious, but it was also error-prone. There were the obvious issues like typos or forgetting to include a file, but more than that, the order of the
<script> tags mattered. If you loaded a script that depended on React before loading the React script, things would break. Because webpack (intelligently) creates a bundle for you, both of those problems go away. You don’t have to worry about forgetting a
<script> and you don’t have to worry about the order.
<body>
  ...
  <script src='dist/bundle.js'></script>
</body>
As we’ll soon see, the “module bundling” aspect is just one part of webpack. If needed, you’re also able to tell webpack to make certain transformations on your modules before adding them to the bundle. Examples might include transforming SASS/LESS to regular CSS or “modern JavaScript” to ES5 that the browser can understand.
Assuming you’ve initialized a new project with npm, there are two packages you need to install to use webpack,
webpack and
webpack-cli.
npm install webpack webpack-cli --save-dev
Once you’ve installed
webpack and
webpack-cli, it’s time to start configuring webpack. To do that, you’ll create a
webpack.config.js file that exports an object. Naturally, this object is where all the configuration settings for webpack will go.
// webpack.config.js
module.exports = {}
Remember, the whole point of webpack is to “examine all of your modules, (optionally) transform them, then intelligently put all of them together into one or more bundle(s)” If you think about that process, in order to do that, webpack needs to know three things.
1) The entry point of your application 2) Which transformations, if any, to make on your code 3) The location to put the newly formed bundle(s)
Whenever your application is composed of modules, there’s always a single module that is the entry point of your application. It’s the module that kicks everything off. Typically, it’s an
index.js file. Something like this.
index.js
  imports about.js
  imports dashboard.js
  imports graph.js
  imports auth.js
  imports api.js
If we give webpack the path to this entry file, it’ll use that to create the dependency graph of our application (much like we did above, except… better). To do that, you add an
entry property to your webpack config which points to your entry file.
// webpack.config.js
module.exports = {
  entry: './app/index.js'
}
Now that webpack knows the entry file, the next thing we need to tell it is what transformations to run on our code. To do this, we’ll use what are called “loaders”.
Out of the box, when webpack is building its dependency graph by examining all of your
import/
require() statements, it’s only able to process JavaScript and JSON files.
import auth from './api/auth' // 👍
import config from './utils/config.json' // 👍
import './styles.css' // ⁉️
import logo from './assets/logo.svg' // ⁉️
There’s a very good chance that you’re going to want your dependency tree to be made up of more than just JS and JSON files - i.e., you’re going to want to be able to import
.css files,
.svg files, images, etc, as we’re doing above. This is where “loaders” can help us out. The primary purpose of a loader, as the name suggests, is to give webpack the ability to process more than just JavaScript and JSON files.
The first step to adding any loader is to download it. Because we want to add the ability to
import
.svg files in our app, we’ll download the
svg-inline-loader from npm.
npm install svg-inline-loader --save-dev
Next, we need to add it to our webpack config. All of the information for your loaders will go into an array of objects under
module.rules.
// webpack.config.js
module.exports = {
  entry: './app/index.js',
  module: {
    rules: []
  }
}
Now there are two pieces of information we need to give webpack about each loader. First, the type of file we want to run the loader on (in our case, all
.svg files). Second, the loader to use on that file type (in our case,
svg-inline-loader).
To do this, we’ll have an object with two properties,
test and
use.
test will be a regex to match the file path and
use will be the name of the loader we want to use.
// webpack.config.js
module.exports = {
  entry: './app/index.js',
  module: {
    rules: [
      { test: /\.svg$/, use: 'svg-inline-loader' }
    ]
  }
}
Now anywhere in our app, we’ll be able to import
.svg files. What about our
.css files though? Let’s add a loader for that as well. We’ll use the
css-loader.
npm install css-loader --save-dev
// webpack.config.js
module.exports = {
  entry: './app/index.js',
  module: {
    rules: [
      { test: /\.svg$/, use: 'svg-inline-loader' },
      { test: /\.css$/, use: 'css-loader' }
    ]
  }
}
Now anywhere in our app, we can import
.svg and
.css files. However, there’s still one more loader we need to add to get our styles to work properly. Right now, because of our
css-loader, we’re able to
import
.css files. However, that doesn’t mean those styles are being injected into the DOM. What we really want to do is
import a CSS file then have webpack put all of that CSS in a
<style> tag in the DOM so they’re active on the page. To do that, we’ll use the
style-loader.
npm install style-loader --save-dev
// webpack.config.js
module.exports = {
  entry: './app/index.js',
  module: {
    rules: [
      { test: /\.svg$/, use: 'svg-inline-loader' },
      { test: /\.css$/, use: [ 'style-loader', 'css-loader' ] }
    ]
  }
}
Notice, because we now have two loaders for our
.css rule, we change
use to be an array. Also, notice that we have
style-loader before
css-loader. This is important. Webpack will process those in reverse order. So
css-loader will interpret the
import './styles.css' line then
style-loader will inject that CSS into the DOM.
As we just saw with
style-loader, loaders can do more than just allow you to
import certain file types. They’re also able to run transformations on files before they get added to the final output bundle. The most popular is transforming “next generation JavaScript” to the JavaScript of today that browsers can understand using Babel. To do this, you can use the
babel-loader on every
.js file.
npm install babel-loader --save-dev
// webpack.config.js
module.exports = {
  entry: './app/index.js',
  module: {
    rules: [
      { test: /\.svg$/, use: 'svg-inline-loader' },
      { test: /\.css$/, use: [ 'style-loader', 'css-loader' ] },
      { test: /\.(js)$/, use: 'babel-loader' }
    ]
  }
}
There are loaders for just about anything you’d need to do. You can check out the full list here.
Now that webpack knows the entry file and what loaders to use, the next thing we need to tell it is where to put the bundle it creates. To do this, you add an
output property to your webpack config.

// webpack.config.js
const path = require('path')

module.exports = {
  entry: './app/index.js',
  module: {
    rules: [
      { test: /\.svg$/, use: 'svg-inline-loader' },
      { test: /\.css$/, use: [ 'style-loader', 'css-loader' ] },
      { test: /\.(js)$/, use: 'babel-loader' }
    ]
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'index_bundle.js'
  }
}
So the full process looks something like this.
1) webpack grabs the entry point located at
./app/index.js.
2) It examines all of our
import and
require statements and creates a dependency graph.
3) webpack starts creating a bundle, whenever it comes across a path we have a loader for, it transforms the code according to that loader then adds it to the bundle.
4) It takes the final bundle and outputs it at
dist/index_bundle.js.
We’ve seen how you can use loaders to work on individual files before or while the bundle is being generated. Unlike loaders, plugins allow you to execute certain tasks after the bundle has been created. Because of this, these tasks can be on the bundle itself, or just to your codebase. You can think of plugins as a more powerful, less restrictive version of loaders.
Let’s take a look at a few examples.
Earlier we saw that the main benefit of webpack was that it would generate a single bundle for us that we could then use to reference inside of our main
index.html page.
What
HtmlWebpackPlugin does is it will generate this
index.html page for us, stick it inside of the same directory where our bundle is put, and automatically include a
<script> tag which references the newly generated bundle.
So in our example, because we’ve told webpack to name the final bundle
index_bundle.js and put it in a folder called
dist, when
HtmlWebpackPlugin runs, it’ll create a new
index.html file, put it in
dist, and include a script to reference the bundle,
<script src='index_bundle.js'></script>. Pretty nice, right? Because this file is being generated for us by
HtmlWebpackPlugin, even if we change the output path or file name of our bundle,
HtmlWebpackPlugin will have that information and it’ll adapt accordingly.
Now, how we do adjust our webpack config in order to utilize
HtmlWebpackPlugin? As always, we first need to download it.
npm install html-webpack-plugin --save-dev
Next, we add a
plugins property, which is an array, to our webpack config.

// webpack.config.js
const path = require('path')
const HtmlWebpackPlugin = require('html-webpack-plugin')

module.exports = {
  entry: './app/index.js',
  module: {
    rules: [
      { test: /\.svg$/, use: 'svg-inline-loader' },
      { test: /\.css$/, use: [ 'style-loader', 'css-loader' ] },
      { test: /\.(js)$/, use: 'babel-loader' }
    ]
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'index_bundle.js'
  },
  plugins: []
}
Then in order to use
HtmlWebpackPlugin, we create a new instance of it inside of our
plugins array.
// webpack.config.js
const path = require('path')
const HtmlWebpackPlugin = require('html-webpack-plugin')

module.exports = {
  entry: './app/index.js',
  module: {
    rules: [
      { test: /\.svg$/, use: 'svg-inline-loader' },
      { test: /\.css$/, use: [ 'style-loader', 'css-loader' ] },
      { test: /\.(js)$/, use: 'babel-loader' }
    ]
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'index_bundle.js'
  },
  plugins: [
    new HtmlWebpackPlugin()
  ]
}
If you’re using React, you’ll want to set
process.env.NODE_ENV to
production before you deploy your code. This tells React to build in production mode which will strip out any developer features like warnings. Webpack makes this simple by providing a plugin called
EnvironmentPlugin. It comes as part of the
webpack namespace so you don’t need to download it.
// webpack.config.js
const path = require('path')
const HtmlWebpackPlugin = require('html-webpack-plugin')
const webpack = require('webpack')

module.exports = {
  entry: './app/index.js',
  module: {
    rules: [
      { test: /\.svg$/, use: 'svg-inline-loader' },
      { test: /\.css$/, use: [ 'style-loader', 'css-loader' ] },
      { test: /\.(js)$/, use: 'babel-loader' }
    ]
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'index_bundle.js'
  },
  plugins: [
    new HtmlWebpackPlugin(),
    new webpack.EnvironmentPlugin({
      'NODE_ENV': 'production'
    })
  ]
}
Now, anywhere in our application, we’ll be able to tell if we’re running in production mode by using
process.env.NODE_ENV.
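For example (a trivial sketch), application code can branch on it to strip development-only behaviour:

// anywhere in your application code
if (process.env.NODE_ENV !== 'production') {
  console.log('running a development build')
}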
HtmlWebpackPlugin and
EnvironmentPlugin are just a small taste of what you can do with webpack’s plugin system. Here’s a full list of officially supported plugins.
Whenever you build your app for production, there are a few steps you want to take. We just learned about one of them which was setting
process.env.NODE_ENV to
production. Another would be minifying your code and stripping out comments to reduce the bundle size.
Utilizing plugins for each one of these production tasks would work, but there’s a much easier way. In your webpack config, you can set the
mode property to
development or
production depending on which environment you’re in.
// webpack.config.js
const path = require('path')
const HtmlWebpackPlugin = require('html-webpack-plugin')

module.exports = {
  entry: './app/index.js',
  module: {
    rules: [
      { test: /\.svg$/, use: 'svg-inline-loader' },
      { test: /\.css$/, use: [ 'style-loader', 'css-loader' ] },
      { test: /\.(js)$/, use: 'babel-loader' }
    ]
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'index_bundle.js'
  },
  plugins: [
    new HtmlWebpackPlugin()
  ],
  mode: 'production'
}
Notice we were able to get rid of our
EnvironmentPlugin. The reason for that is by setting
mode to
production, webpack will automatically set
process.env.NODE_ENV to
production. It will also minify our code and strip out warnings.
At this point, we have a pretty solid grasp on how webpack works and how to configure it, the only other thing we need to do now is actually run it.
Assuming you’re using npm and have a
package.json file, you can create a
script to execute
webpack.
// package.json
"scripts": {
  "build": "webpack"
}
Now whenever you run
npm run build from the command line,
webpack will execute and create an optimized bundle named
index_bundle.js and put it inside of the
dist directory.
At this point, there’s nothing more about webpack itself that we’re going to cover. However, it is important that you understand how to easily switch between running in
development mode and running in
production mode.
As we talked about, when we’re building for
production, we want everything to be as optimized as possible. When we’re building for
development, the opposite is true.
To make it easy to switch between
production and
development builds, we’ll have two different commands we can run via our npm
scripts.
npm run build will build our app for production.
npm run start will start a development server which will automatically regenerate our bundle whenever we make a change to our code.
If you’ll remember, we hardcoded
mode to
production inside of our webpack config. However, we only want to run in
production mode when we run
npm run build. If we run
npm run start, we want
mode set to
development. To fix this, let’s adjust our
scripts.build property in our
package.json file to pass along an environment variable.
"scripts": {"build": "NODE_ENV='production' webpack",}
If you’re on Windows, the command is a bit different:
"SET NODE_ENV='production' && webpack"
Now, inside of our webpack config, we can toggle
mode based on
process.env.NODE_ENV.
// webpack.config.js
module.exports = {
  ...
  mode: process.env.NODE_ENV === 'production' ? 'production' : 'development'
}
Now whenever we want to build our app for production, we just run
npm run build in our command line. That will generate an
index.html file and an
index_bundle.js file and put them in the
dist directory.
Unlike building for production, when we’re developing, it’s all about speed. We don’t want to have to re-run
webpack and wait for it to rebuild the
dist directory every time we change our code. This is where the
webpack-dev-server package can help us out.
As the name implies,
webpack-dev-server is a development server for webpack. Instead of generating a
dist directory, it’ll keep track of your files in memory and serve them via a local server. More than that, it supports live reloading. What that means is whenever you make a change in your code,
webpack-dev-server will quickly recompile your code and reload the browser with those changes.
As always, to use it we first need to install it.
npm install webpack-dev-server --save-dev
Then all we need to do is update our
start script to run
webpack-dev-server.
"scripts": {"build": "NODE_ENV='production' webpack","start": "webpack-dev-server"}
Just like that, we have two commands, one for creating a development server and one for building our app for production.
I am new to this forum and must admit that Python is not my best friend.
However I am forced to deal with it since I need to make a plugin server to be used in X-Plane.
The thing is that X-Plane has support for Python scripts and can run using this code as a bare minimun.
from XPLMDefs import *
from XPLMUtilities import *
from XPLMProcessing import *
from XPLMDataAccess import *
class PythonInterface:
def XPluginStart(self):
self.Name = "PTserver"
self.Sig = "SandyBarbour.Python.Template"
self.Desc = "A test script for the Python Interface."
self.ser = SocketPlugin()
return self.Name, self.Sig, self.Desc
def XPluginStop(self):
pass
def XPluginEnable(self):
return 1
def XPluginDisable(self):
pass
def XPluginReceiveMessage(self, inFromWho, inMessage, inParam):
pass
Where I am struggling is in adding a simple HTTP server for sending and receiving a text string when an incoming request is made.
There are lots of general Python example code to be found online and I am able to start and use a server if I go via a Mac Terminal session.
Here is an example
The big problem is that the server must be threaded and non-blocking so it does not mess up the callbacks in the plugin.
Classes in Python confuse me big time because they work in a different way than what I am used to in, for example, Lua scripting.
Can someone please help me in the right direction.
Laid out simply:
Start an HTTP server that serves and takes requests in a thread parallel to the X-Plane script, and is able to interact via a text string...
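One possible shape for the SocketPlugin class referenced in the code above (an untested Python 2 sketch; the port, bind address and response text are placeholders): the server runs serve_forever() on a daemon thread, so X-Plane's callbacks are never blocked.

import threading
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write("hello from X-Plane")

class SocketPlugin:
    def __init__(self):
        self.server = HTTPServer(("127.0.0.1", 8888), Handler)
        t = threading.Thread(target=self.server.serve_forever)
        t.setDaemon(True)   # the thread dies together with X-Plane
        t.start()

    def stop(self):
        self.server.shutdown()   # call from XPluginStop if desired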
Re Peter | http://www.python-forum.org/viewtopic.php?f=17&t=4051 | CC-MAIN-2014-52 | refinedweb | 273 | 60.04 |
Using the getdate() function, how do I select the last WHOLE 3 months?
So for example, for today's date (29/09/2010)
I want to return all records with the date between 01/06/2010 and 31/08/2010.
Is this possible?
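One common T-SQL approach (a sketch; the table and column names are placeholders) is to compute the first day of the month three months back and the first day of the current month, then filter with a half-open range:

SELECT *
FROM MyTable
WHERE MyDate >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - 3, 0)
  AND MyDate <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)

For 29/09/2010 this selects everything from 01/06/2010 up to (but not including) 01/09/2010, i.e. the three whole months June to August.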
I have a table that contains records with a start and end date. Within our application we have another table that contains a retention number; it's a number 1 through 100, or -1 indicating retain forever.
Now i need to pull back all records that are in the table that have a deviceID = 231 and that have a startdate < the retention period.. So what i was expecting was records that have a startdate less than the retention time from todays date..
I have the following 2 queries that return the same results and am not really sure which logic would be correct. For this example, there are 15 records for that ID, and 2 of them are older than 20 days. So which logic should I use?
DECLARE @retention_time intDECLARE @id bigint
SET @retention_time = '20'SET @id = '231'SELECT * FROM DeviceInfo WHERE Id = @id AND DATEDIFF(day, startDate,GETDATE()) > @retention_timeSELECT * FROM DeviceInfo WHERE Id = @id AND startDate < (GETDATE() - @retention_time) need to know how to show top 30 records from four table
with fastest speed.. in ms sql server 2005..
hope You do the needfull
Hello, how would I join fields together?
return (from c in storedb.Product_Categories
where c.Category_Name.Contains(searchText) orderby c.Category_Name select new { c.Cat_GUID, c.Category_Key && " ;" && c.Category_Name // HOW CAN I DO
Hey, Scripting Guy! When I use Windows Explorer to connect to a remote computer, I can see a description of that computer in the Details pane. How can I change the description for a computer?-- GF
Hey, GF. Just to make sure everyone is clear what we’re talking about, we are not talking about the Description attribute in Active Directory; instead, we’re talking about the computer description that gets broadcast across the network. (If you’d rather know how to change the Description attribute in Active Directory, see this Hey, Scripting Guy! column.)
For example, in Windows XP you can get to the computer description by right-clicking My Computer, clicking Properties, and then looking on the Computer Name tab in the System Properties dialog box:
And, as you noted, if you connect to this computer using Windows Explorer, the description will appear in the Details pane as well:
We thought it was pretty exciting, too.
So how can you change the description for a computer? Well, you could open up Regedit.exe and manually change the registry value HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\lanmanserver\parameters\srvcomment. Or, you could just run a script like this one:
Const HKEY_LOCAL_MACHINE = &H80000002
strComputer = "."
Set objRegistry = GetObject _
("winmgmts:\\" & strComputer & "\root\default:StdRegProv")
strKeyPath = "System\CurrentControlSet\Services\lanmanserver\parameters"
strValueName = "srvcomment"
strDescription = "Description changed programmatically"
objRegistry.SetStringValue HKEY_LOCAL_MACHINE, strKeyPath, strValueName, strDescription
Of course it’s easy; after all this time did you really think we’d suddenly start giving you complicated and convoluted answers to your questions? We begin by defining a constant named HKEY_LOCAL_MACHINE and setting the value to &H80000002; in a minute we’ll use this constant to tell the script which registry hive we want to work with. We then connect to the WMI service (in this case on the local computer, though we can just as easily modify the registry on a remote machine) and bind to the StdRegProv class. (Which, as we never tire of telling people, happens to be found in the root\default namespace.)
Next we assign values to three variables:
strKeyPath = "System\CurrentControlSet\Services\lanmanserver\parameters"
strValueName = "srvcomment"
strDescription = "Description changed programmtically"
The variable strKeyPath represents the path within the HKEY_LOCAL_MACHINE portion of the registry; strValueName represents the registry value (srvcomment) we’re about to change; and strDescription - that’s right: strDescription represents the new computer description. That’s a very astute observation.
Note. We’d tell you that you guys are way better at this stuff than we are, but we don’t want our manager to get any ideas. And yes: getting an idea would be a first for a Microsoft manager!
All we have to do now is call the SetStringValue method, passing HKEY_LOCAL_MACHINE and our three variables as the method parameters:
objRegistry.SetStringValue HKEY_LOCAL_MACHINE, strKeyPath, strValueName, strDescription
Scripts like that really do make life worth living, don’t they?
Note. Be forewarned that even though this change is made in the registry the new description might not take effect until the computer has rebooted. Just something to keep in mind.
How would I adapt this to change it on a remote computer? I would have full domain admin rights on the PC running the script.
How would set strDescription to an environment variable?
For example set the description to the value of %username%
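One way (an untested VBScript sketch) is to expand the environment variable via WScript.Shell before the registry write:

Set objShell = CreateObject("WScript.Shell")
strDescription = objShell.ExpandEnvironmentStrings("%USERNAME%")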
Use psexec to run it on a remote computer.
Manually changing the registry value is a lot easier. How do you even run scripts?
@Montgomery Burns, yes, this is an old VBScript written more than 7 years ago. This is MUCH easier to do in Windows PowerShell as far as how to run the script I have a FAQ on the Script center here technet.microsoft.com/.../dd940113
Here is a batch file I made for doing this. Prompts for the name and description in the command window and then asks if you want to do more machines at the end. Just copy and paste into a text file and save as .bat
:start
set /p "machinename=Enter the name of the Machine:"
set /p "description=Enter the Description:"
sc \\%machinename% start RemoteRegistry
ping %machinename%
reg add \\%machinename%\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters /v "srvcomment" /t REG_SZ /d "%description%" /f
sc \\%machinename% stop RemoteRegistry
set /p "goagain=Do you want to change another machine?(Y/N)"
If "%goagain%"=="y" goto start
If "%goagain%"=="Y" goto start
If "%goagain%"=="yes" goto start
If "%goagain%"=="YES" goto start
I am not so sure why everyone makes this so hard. WMIC does this in one short line. It can be put into almoat any kind of script.
WMIC /node:w8test os set description='My new descritpion'
That's it. No registry stuff to play with.
or you could just use REG.exe: REG ADD HKLM\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters /v srvcomment /t REG_SZ /d "your description here" /f
I want to thank Keith Pratt for his 22 Mar 2013 8:18 AM post.
I just used your script to change descriptions on 15 computers. It worked extremely well.
We had a similar issue where we needed to change hundreds of remote computer descriptions, so we created this tool: It allows you to connect locally or remotely to a computer
on a network (run it with admin rights!) and gather and change the description immediately, without requiring a reboot, etc. Hope it helps.
QVGA LCD shields for the Arduino Uno are two-a-penny on ebay and the software to drive them is available from various sources, but in my opinion they all suffer from the effects of trying to attach a high-frequency, high pin-count LCD to a relatively small and slow MCU.
QVGA shields like this are available for as low as £3.50 delivered
- The 16MHz ATmega328 is just not fast enough to push pixels out to a high-resolution TFT at what I would call interactive speed. That is, fast enough to present a responsive user interface.
- Driving TFTs at anything like a reasonable speed needs a parallel interface, and that needs a lot of pins. You end up having hardly any left over for your actual design.
- The vast majority of these shields are 320×240 (QVGA) which looks OK up to about 2.4″ but above that the pixel density becomes too low and images appear to be low resolution, pixellated and just ‘old tech’.
- The driver code needs considerable memory resources. You can easily use up the entire 32Kb available to the Uno with just a couple of text fonts, and forget about JPEG decoding: 2Kb of SRAM is not enough.
The answer to all these problems is to offload the work of driving the LCD to a co-processor and have the Arduino communicate using a high-level command set.
I decided to build the graphics co-processor around the STM32F0 MCU with a 48MHz core, 64Kb of flash memory and 8Kb of SRAM. It comes in a 48 pin LQFP package.
The driver software will make use of my stm32plus library surrounded by some fairly straightforward command decoding software. Multiple fonts will be supported and we’ll include JPEG decoding logic as well as compressed and uncompressed bitmap support. To assist the flash-poor Arduino we’ll include a 16Mb SPI flash IC directly connected to the ARM to provide access to fast graphics.
The title of this article rather cheekily states that this project will use no pins on the Arduino Uno R3. Can that really be true? Well, sort of. We’re going to communicate with the graphics accelerator over the shared I2C bus which requires use of the SCL and SDA pins but since I2C is a shared bus these pins continue to be available to other devices. That’s also why this shield is specifically for the R3 release of the Uno because it requires the two new I2C pins added on the R3.
The new I2C pins below the red line
If you’ve read my previous articles you’ll know that I like to extract the maximum performance possible out of my projects and this is no exception. I’m going to optimise the Arduino library and the STM32 firmware to the absolute maximum. This should be fun, let’s get on with it.
The LCD
The LCD from the Sony Ericsson Vivaz U5 is the best all-round LCD that I have come across so far that features a built-in controller making it easy to control from a low-end MCU.
The 3.2″ display sports a resolution of 640×360 pixels which gives a density of 229ppi. This is sufficient to render graphics and text smoothly and without any of the ‘jaggies’ that make the larger QVGA screens look so poor. Another advantage is that the original displays and even many of the clones are using a wide viewing angle display technology that maintains colour fidelity even at angles approaching 180°. The only technology I know that does this is IPS but there may be others.
The display is capable of rendering up to 24-bit colour depth but because it exposes a 16-bit data bus we would need to do two transfers per pixel to support that mode. Instead, we will drive it at a 16-bit colour depth so we can transfer a whole pixel in one GPIO write. As you’ll see later in the optimisation section this will allow us to achieve an optimal pixel fill rate.
Schematic
Here’s the schematic for this design. Click to see a clearer PDF representation.
The design is very modular so let’s take a look at each section starting with the power supply.
The LCD requires a 2.8V power supply and all the other components are 2.8V compatible so it makes sense for me to run the whole board at 2.8V.
The ZXCL280H5TA from Diodes Inc. is an LDO regulator capable of supplying up to 150mA which is way more than we need for the 2.8V parts of this design (the largest current consumer is the LCD backlight and that’s driven from the 5V arduino PSU).
Now let’s take a look at the big one, the MCU itself.
I’ve labelled the MCU as an STM32F051C8T7 which is a 64/8Kb device that I happen to have in stock. The fact is though that this project does not require the additional peripherals included with the 051 series so I recommend that you save money and use the STM32F030C8T6 currently available for £1.23 from Farnell.
Port B is given over entirely to the 16-bit LCD data bus so we can write out a full 16-bit pixel in one operation. Driving the LCD at 16 bits per-pixel gives us a maximum of 64K colours. The remaining control signals (LCD_RES, WR and RS) are mapped to PA0..2.
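As a rough illustration (not the actual firmware; the pin mask is an assumption based on the mapping above), a pixel write then boils down to one 16-bit port write plus a WR strobe:

// present one RGB565 pixel on port B, then pulse WR (PA1) low-high to latch it
GPIOB->ODR = pixel;                    // all 16 data lines in one write
GPIOA->BSRR = (uint32_t)0x0002 << 16;  // WR (PA1) low
GPIOA->BSRR = 0x0002;                  // WR high: pixel latched by the panel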
The SPI flash IC is connected to PA4..7 which corresponds to the SPI1 STM32 peripheral so we can use hardware support to drive the SPI flash at the maximum speed permitted by the STM32.
The I2C interface is connected to PF6..7 which corresponds to the I2C2 STM32 peripheral so again, we can use hardware support for the I2C protocol. This device will be an I2C slave which means that the Arduino will be driving the I2C clock and data lines at 5V TTL levels. PF6 and PF7 are marked as “FT” in the datasheet which means that they are 5V tolerant and will not burn out when they receive 5V levels.
P1 is a jumper block that connects the I2C bus pullup resistors. I2C requires one pair of pullups per bus so this jumper block allows the pullups to be disconnected if some other device on the bus is providing the pullups.
A physical reset button is provided so that I can easily reset the board if it happens to get out of sync with the Arduino (it happens if restarts are accidentally staggered).
I will use the blue LED on PA9 to indicate activity as commands are received and processed. The red LED on PA10 will be a ‘buffer full’ indicator that will come on if the Arduino manages to fill up the STM32’s command buffer, causing the I2C bus to stall until space is available. The operating voltage of 2.8V does limit the choice of LED colours that I can use with this simple circuit but blue and red will be fine.
Not wanting to waste any STM32 pins, I decided to expose PA3 and PA8 as pin headers that the Arduino can drive either as GPIO or timer output pins. The STM32 has powerful timer functionality that can be used to generate PWM and other timer-based waveforms with no CPU overhead.
The two-wire SWD debug interface is broken out to a pin header so that the STM32 can be programmed in-circuit using the cost-effective ST-Link/v2 debugger.
Decoupling is provided according to ST’s recommendations and a bulk 47-100µF electrolytic is provided to provide low frequency decoupling for the whole board.
Let’s move on to the LCD connector.
The AXE534124 is a 34-pin 0.4mm connector made by Panasonic and sold only by Digikey in the US. This makes it quite expensive for non-US citizens to get hold of; Digikey will ship it to us, but we have to deal with the customs fees.
The socket has quite short legs and is a bit of a pain to solder. I do it by reflow to get it tacked down and then use lots of flux and a very fine tip iron to touch up any loose legs under the microscope.
I discovered the pinout for this LCD during my reverse engineering article and the additional decoupling capacitors are the same as you can find in Sony’s official schematic for the cellphone.
The backlight for this cellphone consists of 6 white LEDs in series. We have no information as to the forward voltage of this LED string so we’ll drive it using a constant current LED driver.
The AP5724 from Diodes, Inc. is a boost converter that works by raising its output voltage until a preset current flows through the LED string.
The 5.1Ω resistor, R7 sets the constant current to 20mA. The backlight intensity is varied by applying a PWM signal to the EN pin and the Renesas R61523 controller in the LCD panel is slightly unusual in that it can generate that PWM signal itself, which saves us an MCU pin.
I think we’re done with the LCD-related circuitry, let’s move on to the flash memory.
Spansion S25 flash devices come in SOIC-8 packages that are either 150 or 208mil wide. I got my first batch of boards printed to accept the 150mil footprint and have fitted them out with the 16Mb S25FL216K device.
The 208mil width is perhaps the more common format as capacities increase beyond 16Mb so if you opt to download the Gerbers for this project then you’ll find that the flash footprint is for the 208mil device. You can choose just about any of the S25 range but make sure you select the 208mil width.
The IC at the top is a 16Mb flash IC in 150mil format and the one at the bottom is a 128Mb device in 208mil format.
The interface to the flash IC is plain SPI and we map that directly to the SPI peripheral on the STM32 MCU. Even the lowly STM32F0 has a DMA peripheral that permits us to operate the flash memory asynchronously to the MCU core and at the MCU’s full permissable clock speed.
The remainder of the schematic concerns the pin headers. There’s lots of them. Most are devoted to connecting down into the Arduino sockets so that we can break out all the pins to a separate header where you can access them for GPIO.
Bill of materials
Here’s the full bill of materials for this project.
The reset button is the 6x6mm button that you can easily find on ebay if you search ‘pcb button’. It’s the one with the silver top, black button and four little black corner posts.
These buttons do come in different sizes so make sure you get the 6x6mm variant.
PCB layout
The PCB layout is all based around the restrictions of having to fit onto the Arduino Uno as a shield.
The attached 80x45mm LCD dominates the surface of the PCB between the rows of Arduino pin headers so the control circuitry is located offset to the top of the PCB where it overhangs the edge of the Arduino. The assumption is that this will be at the top of any shield stack that you have because if it wasn’t then you wouldn’t be able to see the LCD.
There are cutouts placed in the PCB where the Arduino’s power supply and USB connector are located because these parts protrude upwards just enough to interfere with the PCB. I didn’t need all of the space on the top so instead of cutting it off sharp I designed it with a curved edge. There’s no design need for this, it just looks nice.
Printing the boards
The design fits within a 10x10cm square so I was able to use the low-cost printing service at Elecrow to get the design printed.
LCDs look best against a black background with the idea being that there’s nothing standing out that distracts your eye from the image displayed on the screen and for that reason I reluctantly went for the gloss black solder mask. I say ‘reluctantly’ because the black soldermask is probably the hardest to work with. The contrast is low so traces are difficult to see, flux stains are easily visible and the white silkscreen discolours to light brown easily under reflow. If, like me, you own a black car then you’ll know what it’s like trying to keep it clean. Cleaning black PCBs is just as difficult!
Assembling the board
This isn’t a difficult board to assemble. It’s fairly low density and the parts are of a manageable size for SMD. I reflowed all the SMD parts using my reflow oven and then soldered all the through-hole components and pin headers manually.
The front side shows all the components, upward facing pin headers and the space for the LCD panel. The panel will be mounted on double-sided sticky pads to lift it clear of the PCB.
The rear side shows the few capacitors mounted on the rear and the downward facing pin headers. Note the 2×3 ICSP female header that mates with the male header on the Arduino board so that it can be relocated on the top of this board.
The picture above shows how it looks with the LCD fitted to the board. The plug on the FPC tail presses into the corresponding receptable on the board leaving the panel sitting between the two rows of Arduino pins. The LCD is mounted on double-sided sticky pads to lift it clear of the traces and vias on the back of the PCB.
The STM32 firmware
The basic idea behind the graphics accelerator is a master-slave arrangement whereby the Arduino is the I2C master and the STM32 is the slave. High-level commands such as ‘draw line from a to b’ or ‘draw text at point p’ will be sent from the Arduino and queued for execution in a circular buffer by the STM32. If the buffer should fill up then the STM32 will suspend the I2C bus until space becomes available.
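On the Arduino side, each command then amounts to a short Wire transaction. A hypothetical sketch (the slave address and opcode values are placeholders; the real command set is defined by the firmware):

#include <Wire.h>

const uint8_t GRAPHICS_SLAVE = 0x38;  // placeholder I2C address
const uint8_t CMD_FILL_RECT  = 0x01;  // placeholder opcode

// send a 'fill rectangle' command as opcode + four little-endian 16-bit arguments
void fillRectangle(uint16_t x, uint16_t y, uint16_t w, uint16_t h) {
  Wire.beginTransmission(GRAPHICS_SLAVE);
  Wire.write(CMD_FILL_RECT);
  Wire.write(x & 0xff); Wire.write(x >> 8);
  Wire.write(y & 0xff); Wire.write(y >> 8);
  Wire.write(w & 0xff); Wire.write(w >> 8);
  Wire.write(h & 0xff); Wire.write(h >> 8);
  Wire.endTransmission();
}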
The I2C management code will be IRQ-driven and the graphics operations will run in the normal CPU context. The graphics operations will reflect those available in my stm32plus library:
- Backlight brightness operations
- Sleep, wake, gamma set operations
- Set foreground, background colours
- Draw rectangle, fill rectangle
- Clear screen
- Gradient fill rectangle
- Draw line, draw polyline
- Plot individual points
- Draw ellipse, fill ellipse
- Raw panel operations (set window, write raw data)
- Select font, draw text, draw text with filled background
- Draw bitmap from arduino or onboard flash with optional LZG compression
- Draw jpeg from arduino or onboard flash
- Erase and program the onboard flash
- T1, T2 pin GPIO and/or timer/PWM options
When you instantiate an stm32plus LCD driver you do so by supplying the orientation, colour depth and driving mode as compile-time template constants. This allows the compiler to produce optimal code for your use case without wasting cycles executing conditions like ‘if portrait then … else …’ when such conditions will always only go one way. It also means that I’ll need to provide firmware that runs the LCD in portrait and landscape mode.
This LCD has a natural 16:9 widescreen aspect so all my examples will be designed to run in the 16:9 landscape orientation.
The core loop of the firmware that you can see in CommandExecutor.cpp looks like this:
for(;;) { // wait for data to become available while(_commandBuffer.availableToRead()==0) { #if !defined(DEBUG) // go to immediate sleep mode. will wake immediately on data arrival (IRQ) __WFI(); #endif } // keep the busy light on while buffered commands are processed _indicators.setBusy(true); do { processNextCommand(); } while(_commandBuffer.availableToRead()!=0); // buffered commands processed, switch off the indicator _indicators.setBusy(false); }
The STM32 core stays in sleep mode until woken up by the I2C IRQ that indicates data has arrived from the Arduino. The IRQ handler deposits the data in the circular buffer and returns, which means that the next time this loop calls
availableToRead() it will return a non-zero value.
The wake-up from sleep operation is immediate and has zero cost in terms of cycles. It’s ifdef’d out for debugging because the debugger gets really confused when it can’t communicate with an asleep MCU.
The interrupt handler that receives and deposits data into the SRAM circular buffer looks like this. You can see the full source code in CommandReader.h.
void CommandReader::onInterrupt(I2CEventType eventType) { bool full; switch(eventType) { case I2CEventType::EVENT_ADDRESS_MATCH: _addressReceived=true; break; case I2CEventType::EVENT_RECEIVE: // data received // got some data _addressReceived=false; // write the byte _commandBuffer.write(I2C_ReceiveData(*_i2c)); // add to the circular buffer full=_commandBuffer.availableToWrite()==0; _indicators.setFull(full); // set/reset the full LED // is the buffer full? Suspend incoming if it is. if(full) _commandBuffer.suspend(); break; case I2CEventType::EVENT_STOP_BIT_RECEIVED: if(_addressReceived) // no data in frame? must be a reset request NVIC_SystemReset(); else _addressReceived=false; break; default: break; } }
The
suspend() operation simply masks off all interrupts at the NVIC level, this has the effect of halting I2C communication until we unmask interrupts again.
The circular buffer implementation, which you can see here, is designed to be safe in the common scenario of an IRQ writer and a normal code reader.
There’s some additional logic in there to detect when a zero length packet is received, and if it is then the MCU gets reset. This is my way of remotely resetting the STM32 from the Arduino that should work even in cases where the main MCU core has hung but the I2C bus is still operational.
The photograph shows the board, with LCD connected and wired up to the ST-Link/v2 debugging and programming dongle. If you’re not interested in modifying the firmware then you can just use ST’s official application and driver to upload the
hex file included with the firmware on github.
Testing
To test the board I created a suite of Arduino sketches that exercised the capabilities of the graphics library. The STM32 was hooked up to the ST-Link/v2 debugger so that I could perform single-step debugging in the Eclipse GUI.
The photograph shows the board displaying a JPEG image that was stored on the onboard flash IC and then decoded and displayed by the STM32.
Optimisation
Now that I’ve got a stable baseline I can turn my attention to the fun topic of optimisation. The system is already very fast and meets my goals, but can I make it faster?
Optimising the Arduino library
The Arduino library is very simple, and it needs to be with so few resources available in the little ATmega32 yet any gains made here could have the biggest impact. Let’s see how we can structure our C++ code to give the compiler the best chance to produce the smallest output.
Back in the old days C++ programmers were tought to place their class definitions in header files and the implementations in source (cpp) files. It made for a clear distinction between design and implementation but unfortunately it results in suboptimal code generation when we use a modern C++ compiler.
When the compiler needs to make a call to, for example,
int foo(int a,int b) it consults the information it has about that function, or class method, and in the case where it can only see a signature declaration it must fall back to the default calling strategy. Registers will be stacked, parameters will be registered and/or stacked and a branch will be made. Afterwards the return value will be registered and the saved registers unstacked. This is all costly both in time and space but because all you’ve given the optimiser to work with is a method signature then that’s all it can do for you. Tough luck.
Fortunately we can improve on that by using the most misunderstood and worst-chosen keyword name in the C and C++ language:
inline. I still today see people who should know better claim that it directs the compiler to place a function definition inline to the calling code which thereby makes your code bigger. It absolutely does not do that, despite the misleading name.
The effect of the
inline keyword is to suspend the usual behaviour of the one definition rule and allow a definition to appear in multiple translation units (source files) as long as they are all identical. Incidentally, gcc cleverly achieves this by marking inline functions as weak references. In a modern compiler the
inline keyword is little more than a linkage modifier.
When you declare everything inline you are giving the optimiser all the information it needs to do a complete job on your source file to achieve the goals that you have told it achieve with the optimisation flags that you gave on the command line. Most gcc users will select one of the
-O options that are shortcuts for large collections of individual
-f options.
Since the Arduino IDE is preset to compile with the
-Os option, the optimiser will not do anything to increase code size. So if it would increase code size to place a method inline, it won’t do it. If a method is very small and consists of fewer instructions than the lengthy standard call procedure, it will be placed inline where it will be optimised as an integral part of your method. It will do this to any function where it can see the whole body, regardless of whether or not you declare it to be
inline.
To see the effect of this I created my library once as an old-style cpp/h combo and then again as all inline. I used the GraphicsMethods example to test it because it makes a lot of library calls. The net result was that the compiled binary was about 500 bytes smaller when everything was declared inline. On these small MCUs differences such as that can be very significant. I suspect I could make further significant gains by optimising the poorly implemented
Wire Arduino library class but for now this will do.
Are there any disadvantages? Not many. You’ll still need
.cpp files around to instantiate any static class data members and ISR implementations that you’ve got – static functions at namespace level can stay inline and go to internal namespaces to keep them from causing trouble. Working around circular dependencies can be trickier but it’s always possible to overcome those by improving your design.
Optimising the STM32 library
I’ve already spent some time optimising the stm32plus library driver as far as it’s feasible to go. The entire access mode is hand-written in assembly language to squeeze the last bit of performance possible out of the core pixel transfer code. I wrote about that development here in the LG KF700 reverse engineering article. The full assembly language source code to the access mode optimised for the 48Mhz F0 is here on github.
Let’s see what optimisation I can achieve with my STM32 firmware.
Firstly I decided to tune the optimisation options that I was using on a per-file basis. I needed to use the
-Os option on the bulk of the source files just so I could squeeze it all in to the 64Kb flash memory on the F0 but I had enough room to enable the
-O3 high performance option on the
CommandExecutor class that handles the core loop of retrieving commands from the circular buffer and handing them off for execution. With these optimisations my build stats are:
text data bss dec hex filename 59872 112 1244 61228 ef2c build/release/awcopper.elf
With the firmware fully optimised it’s time to take a look at how it’s performing. Let’s break out the logic analyser and probe the pixel write-cycle to see how it compares with the fastest permitted by the R61523 datasheet, and if it falls short then I’ll examine the options available to me to make it as close to optimal as I can get.
The screen grab from my logic analyser shows that the combined write cycle is taking 83ns and this is the code that does it, taken from the access mode class:
str %[wr], [%[creset], #0] // [wr] = 0 str %[wr], [%[cset], #0] // [wr] = 1
The Cortex-M0 takes 2 cycles to execute a
str instruction (and on the M0 it actually does take 2 cycles unlike the F4 which uses its instruction pipeline to mess up your carefully calculated cycle counts). Running at 48Mhz, each cycle is 1/48000000 = 20.83ns so our measured result of 83ns equals the expected result of 4×20.8 = 83.3ns.
Let’s see how the 83ns write cycle compares to the limits imposed by the R61523 controller.
In the above image the important timings for us are twds (data setup), twdh (data hold) and twc (write cycle). The timings related to the CS (chip select) signal are irrelevant because we keep it tied to ground. Here’s the table of limiting values.
The controller clocks in the data on the rising edge of WR. The data setup time (twds, min 15ns) is the minimum time that the data must be present before the WR control line goes high. We set up the data before we pull WR low so our data setup time is at least 2 clock cycles which is well within the limits.
The data hold time (twdh, min 20ns) is the time that the data must remain present after WR has gone high. Again for us that is at least 2 clock cycles so we are well within the spec again.
Now lets looks at the overall write cycle time limit. It’s 60ns and we’re clocking it at 83ns. The controller can go faster but seemingly we’re stuck because our code cannot be written any more efficiently. Or are we…
Let’s overclock the STM32.
Overclocking the STM32
Overclocking is something that’s come to be associated with PC hardware enthusiasts seeking to squeeze the last bit of performance out of their CPUs, memory and graphics cards by tweaking the values of system clocks and voltage levels at the expense of heat generation and sometimes overall system stability. But can we overclock an MCU and what does it mean if we do?
The simple answer is that it’s trivially easy to raise the core clock of the M0 from 48MHz to 64MHz. Every F0 application that runs at a clock speed higher than its reference oscillator is going to have some startup code that sets the value of the system clock from a PLL whose frequency is calculated from the reference oscillator and some multipliers and dividers.
For example, this board runs the F0 from its internal oscillator using the PLL to generate a 48MHz clock like this:
/* PLL configuration = (HSI/2) * 12 = ~4812); /* Enable PLL */ RCC->CR |= RCC_CR_PLLON;
Note the
RCC_CFGR_PLLMULL12 PLL multiplier of 12 which calculates 8MHz / 2 * 12 = 48MHz. The maximum value that this multiplier can take is 16. So to overclock the F0 to 64MHz it really is as simple as this:
/* PLL configuration = (HSI/2) * 16 = ~6416); /* Enable PLL */ RCC->CR |= RCC_CR_PLLON;
The core clock will now run at 64MHz, a healthy 33% increase over ST’s stated limit. However, there are other issues that we need to be sure we’re happy with. Any internal clock that is sourced from the system clock is going to be ticking faster than expected and that includes the peripheral clocks.
The good news is that my overclocked F051 boots up and runs just as stable and without any noticeable increase in temperature over the 48MHz version. Now let’s take a look at our LCD write cycle times:
The write cycle is now 62ns which is right where we would calculate it to be given the new MCU cycle time of 15.625ns. That’s more like it, we’re only 2ns off the stated minimum write cycle and with the setup and hold times still within spec that’s about as close to the limit as I want to go.
There’s still the peripheral clocks to deal with. They’re going to be ticking at higher rates and we need to make sure that they are still working OK.
The SysTick clock is a core part of every Cortex-M0 and we use it internally to perform accurate millisecond delays. The stm32plus
MillisecondTimer class initialises SysTick using ST’s standard peripheral library call:
SysTick_Config(SystemCoreClock / 1000);
The
SystemCoreClock variable is a
uint32_t set by ST in the startup code to the value of the core clock in MHz. We simply change it from
48000000 to
64000000 and SysTick is back to ticking at 1ms.
The I2C bus is next for consideration. In this board we’re using it as a slave, the clock is generated by the Arduino therefore there is nothing to do here. Our bus continues to operate at the frequency selected by the Arduino library.
Finally, the SPI bus needs to be checked out. The SPI clock is generated from the core clock and a divider. The minimum value of the divider, which is the value we are using, is 2, giving a SPI clock of 24MHz. Let’s verify that with the logic analyser.
It’s as expected, the clock is operating at around 24MHz. Let’s see how it looks after the overclocking. The SPI flash IC has a limit of 44MHz that we cannot exceed.
The clock frequency is up 30% to 31.25MHz which is within the limits of the flash IC and represents a nice speed increase for this project.
We’re not using any other peripherals so the effect of the overclocking on those peripherals is not investigated here.
Video introduction
I made up a little video in which I talk through the build process and give a brief tour of the board. You can watch it on the embedded player here but you’ll get better quality if you click here to go to the YouTube site and watch it there.
Firmware resources
If you’re considering building one of these boards yourself, and I do encourage you to try because it’s not difficult, then here’s a list of the resources you’ll need to complete the firmware side of the project.
The STM32 firmware
You can download a release from github or you can check out the master branch. If you don’t want to compile the firmware yourself then you can just flash one of the pre-built
.hex files using the ST-Link/v2 utility.
Building the firmware yourself will require that you have first built and installed stm32plus. Assuming you’ve done that you can then use
scons to build the firmware. There are several build options:
$ scons scons: Reading SConscript files ... Usage: scons mode=<MODE> [-jX | -c] [overclock=yes] <MODE>: debug/release. debug = -Og release = combination of -Os and -O3 [overclock] specify this option to overclock the MCU to 64MHz Examples using -j to do a 4-job parallel build: scons mode=debug -j4 scons mode=release overclock=yes -j4 The -c (clean) option removes the build output files.
The build options allow you to build the debug or release version with or without overclocking support. Note that a debug build only has a single font available due to the increased size of the compiled binary. The full complement of 9 fonts is included with the release build.
The Arduino library and examples
The Arduino library and examples are an integral part of the source code on github. To install, simply extract the contents of awcopper.zip into your Arduino ‘libraries’ directory. I have tested the library on version 1.0.6 of the Arduino IDE.
The Arduino examples
Each one of the Arduino examples demonstrates a different capability of the firmware. Here’s a brief overview of each one.
GraphicsMethods
This one demonstrates all of the graphics primitives, excluding those that operate on the SPI flash IC. Since the graphics commands are streamed across the I2C bus to the STM32 it makes perfect sense to use the C++
<<operator to stream commands to the graphics library. For example:
copro << awc::foreground(awc::WHITE) // set foreground to white << awc::font(awc::ATARI) // select the Atari font << awc::text(Point::Origin,"hello"); // text string at the origin
Here’s a video that shows the GraphicsMethods demonstration in action. It’s a bit small embedded here in the page so click here to open it up on the main YouTube site where the quality will be better.
ProgramFlash
This demonstrates how to program bitmaps into the flash IC using your PC to send the bitmaps over the USB cable to the Arduino. A small PC application
UploadToFlash.exe, written in C#, is provided in the
utilities directory of
awcopper.zip that handles the PC side of the operation.
To use, first compile and flash the Arduino example. It will erase the flash IC and then sit there waiting for data to arrive from the PC.
Now run
UploadToFlash.exe and use it to select the bitmaps to upload to the flash IC. You can specify the page address in bytes of each one to upload. The address will be automatically increased by the size of each image you add. You can add jpegs, uncompressed images and LZG compressed images.
Note that the address of each image must be aligned to a 256-byte page in the flash IC.
Uncompressed and LZG images can be created from JPEGs, PNGs etc. by the
bm2rgbi.exe C# utility included with stm32plus and also included here in the
utilities directory of
awcopper.zip. It’s a command line program, here’s an example of how to create an uncompressed image from a JPEG.
$ bm2rgbi.exe sample2.jpg sample2.bin r61523 64 Width: 640 Height: 360 Format: Format24bppRgb Writing converted bitmap Completed OK
Here’s another example that shows how to create an LZG compressed image from a JPEG. LZG is very similar to PNG in its operation but is optimised for use on a small MCU.
$ bm2rgbi.exe sample2.jpg sample2.lzg r61523 64 -c Width: 640 Height: 360 Format: Format24bppRgb Writing converted bitmap Compressing converted bitmap Compression completed: 460800 down to 213021 (53%) bytes Completed OK
When you’ve got all your images lined up in the PC application and your Arduino is ready and waiting then just click Program Now and wait for it to finish.
The picture shows the flash programmer after its finished programming. Each green square represents a page that’s been programmed and verified.
FlashBitmaps
This example shows how to display images stored in the flash IC. JPEGs, uncompressed and LZG (see above) images are all supported. The example program will show one of each to give you an idea of the difference in execution speed.
Uncompressed bitmaps are limited mostly by the speed of the SPI bus whereas LZG and JPEG images spend a significant portion of their time being decompressed by the STM32. The code that I wrote to interact with the flash IC uses DMA transfers from the SPI bus to make optimum use of the bus frequency and to allow us to interleave image processing with the data transfer from the SPI bus.
The code used to display a JPEG image is very straightforward:
const Rectangle fullScreen(0,0,Copper::WIDTH,Copper::HEIGHT); copro << awc::jpegFlash(fullScreen,JPEG_SIZE,JPEG_ADDRESS);
Like all other commands, this will be streamed across as a minimal number of bytes to the STM32 where it will be executed asynchrously, freeing up your Arduino to immediately do other things while the image is being obtained from flash and rendered on screen.
Here’s a video that shows the process of programming the onboard flash and subsequently running a demo that shows the different types of bitmaps being displayed. Click here to view it in high quality on the YouTube website.
GpioPins
This example demonstrates how to program the T1 and T2 pins as GPIO outputs from the STM32. The example will toggle them on and off at 1Hz while displaying an alternating image on the screen.
You could hook up these pins to a pair of LEDs to see them in action. Remember that the STM32 is operating at 2.8V so the output high level on the pins is 2.8V. Please take care not to source or sink current in excess of the limits documented by the STM32 datasheet or damage may occur.
TimerPwmPin
This example shows how to program the T1 and T2 pins using the STM32 timer peripheral to generate alternating PWM waveforms. On the STM32 timer waveform generation is handled in hardware and has no impact on the operation of the MCU core.
The example will vary the duty cycle of the PWM waveform up and down from zero to 100% while showing a graphical preview of what it will look like. Here’s a video that shows the example in action. I’ve wired T1 and T2 to LEDs so you can see the actual output.
Build your own PCB
If you’d like to build this project yourself then you’ll need a PCB and the parts listed in the bill of materials section as well as a Vivaz U5 LCD that you can get on ebay. You can get the gerbers for the PCB from my downloads page.
The PCBs can be ordered from ITead, Elecrow or Seeed Studio in batches of 10. You’ll need to order the 10x10cm option. I generally use Elecrow but they’re all the same quality so if you have a personal favourite then go ahead and use them.
You don’t have to use black, and you’ll save yourself some cursing if you don’t. Green and red both reflow and clean up very well. Blue less so, but still OK. Yellow looks OK to me but may be a bit of an acquired taste and I don’t think it would contrast well with the LCD. White is like black, but worse. Traces are invisible and flux stains shout out ‘look at me!’. Avoid white. I wrote an article about selecting solder mask colours, have a read if you’re unsure.
Future improvements
There’s always room for improvement and I’ve had a few ideas that could be implemented in a ‘version 2’ of this project.
- Synchronised resets. The STM32 board should be slaved to the Arduino’s reset line. Currently you have to ensure that the two boards are reset quite close together to avoid the risk of the I2C stream coming from the Arduino being misinterpreted by the STM32. This could be achieved with a reistor divider to ensure that the 5V Arduino reset level is translated to 2.8V.
- TE support. The TE (tearing effect) LCD output signal could be used to synchronise writes to the LCD so that graphics could be drawn flicker-free. | https://andybrown.me.uk/wk/2015/02/02/awcopper/ | CC-MAIN-2022-21 | refinedweb | 6,601 | 68.2 |
From: Rene Rivera (grafikrobot_at_[hidden])
Date: 2008-04-27 17:41:45
Jonathan Biggar wrote:
> I noticed some discussion in threads about the worries that some people
> have with linking their application against boost and then needing a
> third party library that also links to a potentially different version
> of boost that might not be available to the in source form.
>
> Has any thought been given to changing the boost namespace declaration
> to include the version number, like this:
>
> namespace boost_1_35_0 {
> // boost libraries declared here
> };
>
> #ifndef BOOST_NAMESPACE_ALIAS_DECLARED
> #define BOOST_NAMESPACE_ALIAS_DECLARED
>
> namespace boost = boost_1_35_0;
>
> #endif // BOOST_NAMESPACE_ALIAS_DECLARED
>
> This allows an application linked to boost 1.X+1.0 to also link to a
> third party library that privately uses boost 1.X.0 without causing ODR
> problems. Of course it doesn't help if the third party library exposes
> boost types in its interface.
[...]
I was just discussing such a trick in the IRC channel in relation to the
library building, and dealing with variants. The idea would be that
instead of generating a bunch of different library files, we mangle the
boost namespace name to include the variant one is using (which would
include the version as you suggest). Using this would allow to build
many variants, but still have the simple linking use pattern of
"-lboost_foobar". Unfortunately it might not be possible to remove all
separate library build, as many aspects of compiling affect ODR at the
std level. As the discussions of the MSVC runtime options have illustrated.
-- -- | https://lists.boost.org/Archives/boost/2008/04/136637.php | CC-MAIN-2021-49 | refinedweb | 248 | 55.88 |
#include <db.h> int DB_ENV->rep_set_request(DB_ENV *env, u_int32_t min, u_int32_t max);
The
DB_ENV->rep_set_request() method sets a
threshold for the minimum and maximum time that a client waits
before requesting retransmission of a missing message.
Specifically, if the client detects a gap in the sequence of
incoming log records or database pages, Berkeley DB will wait for
at least min microseconds before
requesting retransmission of the missing record. Berkeley DB will
double that amount before requesting the same missing record
again, and so on, up to a maximum threshold of
max microseconds.
These values are thresholds only. Replication Manager applications use these values to determine when to automatically request retransmission of missing messages. For Base API applications, Berkeley DB has no thread available in the library as a timer, so the threshold is only checked when a thread enters the Berkeley DB library to process an incoming replication message. Any amount of time may have passed since the last message arrived and Berkeley DB only checks whether the amount of time since a request was made is beyond the threshold value or not.
By default the minimum is 40000 and the maximum is 1280000 (1.28 seconds). These defaults are fairly arbitrary and the application likely needs to adjust these. The values should be based on expected load and performance characteristics of the master and client host platforms and transport infrastructure as well as round-trip message time.
The database environment's replication subsystem may also be configured using the environment's DB_CONFIG file. The syntax of the entry in that file is a single line with the string "rep_set_request", one or more whitespace characters, and the request times specified in two parts: the min and the max. For example, "rep_set_request 40000 1280000". Because the DB_CONFIG file is read when the database environment is opened, it will silently overrule configuration done before that time.
The
DB_ENV->rep_set_request() method
configures a database environment, not only operations performed
using the specified DB_ENV
handle.
The
DB_ENV->rep_set_request() method may be
called at any time during the life of the application.
The
DB_ENV->rep_set_request()
method returns a non-zero error value on failure and 0 on success.
The
DB_ENV->rep_set_request()
method may fail and return one of the following non-zero errors:
Replication and Related Methods | https://docs.oracle.com/cd/E17276_01/html/api_reference/C/repset_request.html | CC-MAIN-2018-13 | refinedweb | 383 | 50.67 |
]. The term tax mitigation is a synonym for tax avoidance. Its original use was by tax advisors as an alternative to the pejorative term tax avoidance. Latterly resistors typically do not take the position that the tax laws are themselves illegal or do not apply to them (as tax protesters do) and they are more concerned with not paying for particular government policies that they oppose.
[edit] Tax avoidance
Tax avoidance is the legal utilization of the tax regime to one's own advantage, to reduce the amount of tax that is payable by means that are within the law. The United States Supreme Court has stated that "The legal right of an individual to decrease the amount of what would otherwise be his taxes or altogether avoid them, by means which the law permits, cannot be doubted."
Most countries impose taxes on income earned or gains realized within that country regardless of the country of residence of the person or firm. Most countries have entered into bilateral double taxation treaties with many other countries to avoid taxing nonresidents twice -- once where the income is earned and again in the country of residence (and perhaps, for..
The company/trust/foundation may also be able to avoid corporate taxation if incorporated in an offshore jurisdiction (see offshore company, offshore trust or private foundation). Although income tax would still be due on any salary or dividend drawn from the legal entity. For a settlor (creator of a trust) to avoid tax there may be restrictions on the type, purpose and beneficiaries of the trust. For example, the settlor of the trust may not be allowed to be a trustee or even a beneficiary and may thus lose control of the assets transferred and/or may be unable to benefit from them.
[edit] Tax evasion
By contrast tax evasion is the general term for efforts by individuals, firms,).
. [9]: IRC v Willoughby.[22].
The clear articulation of the concept of an avoidance/mitigation distinction goes back only to the 1970s. The concept originated from economists, not lawyers.[23] The use of the terminology avoidance/mitigation to express this distinction was an innovation in 1986: IRC v Challenge.[24] courts have rejected time and time again.
Tax resistance is the refusal to pay a tax for conscientious reasons (because the resister does not want to support the government or some of its activities).).
In the UK case of Cheney v. Conn,[31];
An affirmative act "in any manner" is sufficient to satisfy the third element of the offense. That is, an act which would otherwise be perfectly legal (such as moving funds from one bank account to another) could be grounds for a tax evasion conviction (possibly an attempt to evade "payment"), provided the other two elements are also met. Intentionally filing a false tax return (a separate crime in itself.
A further stumbling block for tax protesters is found in the Cheek Doctrine with respect to arguments about "constitutionality." Under the Doctrine, the belief that the Sixteenth Amendment was not properly ratified and the belief that the Federal income tax is otherwise unconstitutional are not treated as beliefs that one is not violating the "tax law" — i.e., these errors are not treated as being caused by the "complexity of the tax law."
In the Cheek case.[37]
The Court ruled that such beliefs — even if held in good faith — are not a defense to a charge of willfulness. By pointing out that arguments about constitutionality of Federal income tax laws "reveal full knowledge of the provisions at issue and a studied conclusion, however wrong, that those provisions are invalid and unenforceable," the Supreme Court may have been impliedly warning that asserting such "constitutional" arguments (in open court or otherwise) might actually help the prosecutor prove willfulness.] Tax shelters
Tax shelters are investments that allow, and purport to allow, a reduction in one's income tax liability. Although things such as home ownership, pension plans, and Individual Retirement Accounts (IRAs) can be broadly considered "tax shelters", insofar as funds in them are not taxed, provided that they are held within the IRA for the required amount of time, the term "tax shelter" was originally used to describe primarily certain investments made in the form of limited partnerships, some of which were deemed by the U.S. Internal Revenue Service to be abusive.
The Internal Revenue Service and the United States Department of Justice have recently teamed up to crack down on abusive tax shelters. In 2003 the Senate's Permanent Subcommittee on Investigations held hearings about tax shelters which are entitled U.S. TAX SHELTER INDUSTRY: THE ROLE OF ACCOUNTANTS, LAWYERS, AND FINANCIAL PROFESSIONALS. Many of these tax shelters were designed and provided by accountants at the large American accounting firms.
Examples of U.S. tax shelters include: Foreign Leveraged Investment Program (FLIP) and Offshore Portfolio Investment Strategy (OPIS). Both were devised by partners at the accounting firm, KPMG. These tax shelters were also known as "basis shifts" or "defective redemptions."
Prior to 1987, passive investors in certain limited partnerships (such as oil exploration or real estate investment ventures) were allowed to use the passive losses (if any) of the partnership (i.e., losses generated by partnership operations in which the investor took no material active part) to offset the investors' income, lowering the amount of income tax that otherwise would be owed by the investor. These partnerships could be structured so that an investor in a high tax bracket could obtain a net economic benefit from partnership-generated passive losses.
In the Tax Reform Act of 1986 the U.S. Congress introduced the limitation (under 26 U.S.C. § 469) on the deduction of passive losses and the use of passive activity tax credits. The 1986 Act also changed the "at risk" loss rules of 26 U.S.C. § 465. Coupled with the hobby loss rules (26 U.S.C. § 183), the changes greatly reduced tax avoidance by taxpayers engaged in activities only to generate deductible losses.
United States v. Connor, 898 F.2d 942, 90-1 U.S. Tax Cas. (CCH) paragr. 50,166 (3d Cir. 1990) ); Perkins v. Commissioner, 746 F.2d 1187, 84-2 U.S. Tax Cas. (CCH) paragr. 9898 (6th Cir. 1984) (26 U.S.C. § 61 ruled by the United States Court of Appeals for the Sixth Circuit to be “in full accordance with Congressional authority under the Sixteenth Amendment to the Constitution to impose taxes on income without apportionment among the states”; taxpayer’s argument that wages paid for labor are non-taxable was rejected by the Court, and ruled frivolous); | http://ornacle.com/wiki/Tax_evasion | crawl-002 | refinedweb | 1,106 | 52.8 |
Some time ago NWCPP (Northwest C++ Users Group in Seattle) organized a public panel on the future of C++, with Scott Meyers, Herb Sutter, and Andrei Alexandrescu. I started thinking about C++ and realized that I wasn't that sure any more if C++ was the answer to all my problems. I wanted to ask the panelists some tough questions. But I was there for a big surprise--before I had the opportunity to say anything, they started the criticism of C++ in the earnest--especially Scott.
One of the big issues was the extreme difficulty of parsing C++. Java and C#, both much younger languages, have a multitude of programming tools because it's so easy to parse them. C++ has virtually nothing! The best tool one can get is Microsoft Visual Studio, which is really pathetic in that department (I haven't tried Eclipse). Apparently, VS uses a different (incomplete) parser for its browser than it does for its compiler, and that's probably why it can't deal with namespaces or nested classes. When you search for a definition of a function, you get a long list of possible matches that don't take into account any of the qualifications of the original query. Finding all callers of a method is so unreliable that it's better not to use it. And these are the most basic requirements for an IDE. By the way, the Help engine seems to be using yet another parser.
I talked to John Lykos at one of the conferences, and he told me that he would pay big bucks for a tool that would be able to tell what header files must be included in a given file. That was many years ago and to my knowledge there still isn't such a tool. On the other hand, there are languages in which the programmer doesn't even have to specify include files, so clearly this is not an insurmountable problem and it's only C++ that makes it virtually impossible.
Complex programming problems require complex languages. An expressive language helps you deal better with the complexity of a problem. I believe there is some proportionality between the complexity of the problem and the complexity of the language. But if a language is disproportionately complex, it starts adding complexity to the problem you're trying to solve instead of reducing it. There are endless examples of unnecessary complexity in C++.
Accidentally, the parsing difficulties of C++ might be the biggest roadblock in its evolution. In principle, changing the syntax of the language shouldn't be difficult, as long as you can provide good translation tools. You can look at syntax as a matter of display, rather than its inherent part. Just like you have pretty printers that format your code, you could have a pretty viewer that shows C++ declarations using Pascal-like syntax. You could then switch between programming using the rationalized syntax and the traditional syntax.
Another approach to improving C++ would be to add optional disambiguating syntax, like the keyword "declare" that would disambiguate between declarations and statements (similarly with "define"). I'd gladly use and enforce such syntax if it made my code more parseable to my own tools.
As long as C++ gurus live in the clouds of the Olympus, they won't see the need for this kind of evolution. That's why C++ becomes more and more elitist. In the future, people who do NYT crossword puzzles and the ones who program in C++ will be in the same category.
Very smart people keep writing books with titles that read like "Esoteric Nooks and Crannies of C++", or "More Bizarre Pitfalls of C++". Or puzzles that start with "What's wrong with this code fragment that to all normal people looks perfectly OK?". You don't see such publications in the Java or C# community. C++ is becoming a freak language that's parading its disfigurements in front of mildly disgusted but curiously fascinated audience.
"So you have to put a space between angle brackets? How bizarre!"
"Are you telling me that you can't pass an instance of a locally defined class to an STL algorithm? How curious!"
"Talk to me dirty! Tell me more about name resolution!"
"Pardon my French, Is this specialization or overloading?" | http://www.relisoft.com/tools/CppCritic.html | crawl-002 | refinedweb | 722 | 62.17 |
Problem Statement:-
There are 100 doors and a person walks through it in such a way that:-
In first walk he opens first door
In the second walk, he opens each door that is a multiple of two except door 2.
……………………………………
…………………………………..
Similarly, in i-th walk, he opens each door that is a multiple of i except i-th door.
Which doors are open at end?
Naive Approach
The basic approach would be to to start from 2 and go till 100 and keep the track of doors which are opened and which are closed. Then print the no. of open doors. But its time complexity will be:-O(n2)
Code in Python
def toggle(n): #This function changes the current position of door,if the door is open then it is closed otherwise it is opened if doors[n]==0: doors[n]=1 n=100 #No. of doors(Can be changed for a general n-doors problem.) doors=[0]*n #Making a list and initializing them with 0 #Here 0 means that door is close and 1 means that door is open doors[0]=1 for i in range(2,n+1): j=i+i while j<=n: toggle(j-1) j=j+i count=0#Counter to keep track of open doors for i in range(n): if doors[i]==1: count=count+1 print(i+1,end=" ")#Printing the open doors print() #Inserting a new line after printing all doors that are open print("\n No. of open doors are",count)#Total no. of open doors
Output:-
Efficient Approach
From the above output, it can be observed that only those doors are opened that are not prime. So, we can directly print all those numbers that are not prime rather than keeping track of which doors are opened or not.
def not_prime(n): if n==1: return True if n<=3: return False if n%2==0 or n%3==0:#Used so that we can skip 5 numbers in each iteration return True i=5 while(i*i<=n): if n%i==0 or n%(i+2)==0: return True i=i+6 return False n=100#It can be changed with the number of doors print(*[str(i)+" " if not_prime(i) else '' for i in range(1,n+1)],end=" ")
Output:-
| https://freshlybuilt.com/100-doors-problemmodified-using-python/ | CC-MAIN-2020-40 | refinedweb | 386 | 80.45 |
VARIOUS TOPICS IN JAVA
Firstly, the definition of java
THINGS TO NOTE ABOUT JAVA
Every line of code that runs in Java must be inside a
class. In our example, we named the class Main..");
}
Java Data
- Boolean: Stores fractional numbers. Sufficient for storing 15 decimal digits
- Char: Stores a single character/letter or ASCII values
- String: Stores words
Java Operators
- + : adds values
- - : removes values
- * : multiplies values
- / : divides values
- % : Returns the division remainder
- ++ : Increases the value of a variable by 1
- — — : Decreases the value of a variable by 1
Java Switch
Use the
switch statement to select one of many code blocks to be executed.
Syntax
switch(expression) {
case x:
// code block
break;
case y:
// code block
break;
default:
// code block
}
HOW DOES JAVA SWITCH WORK/RULES
- The variable used in a switch statement can only be integers, convertable integers (byte, short, char), strings and enums.
- ca
Loops
Loops can execute a block of code as long as a specified condition is reached.
Loops are handy because they save time, reduce errors, and they make code more readable.
Java While Loop
The
while loop loops through a block of code as long as a specified condition is
true:
Syntax
while (condition) {
// code block to be executed
}
The Do/While Loop
The
do/while loop is a variant of the
while loop. This loop will execute the code block once, before checking if the condition is true, then it will repeat the loop as long as the condition is true.
Syntax
do {
// code block to be executed
}
while (condition)
Java For Loop.
Statement 2 defines the condition for executing the code block.
Statement 3 is executed (every time) after the code block has been executed.
Java Break
The
break statement can also be used to jump out of a loop.
Java Continue
The
continue statement breaks one iteration (in the loop), if a specified condition occurs, and continues with the next iteration in the loop.
Java Arrays
Arrays are used to store multiple values in a single variable, instead of declaring separate variables for each value.
To declare an array, define the variable type with square brackets:\
String[] cars;
We have now declared a variable that holds an array of strings. To insert values to it, we can use an array literal — place the values in a comma-separated list, inside curly braces:
String[] cars = {"Volvo", "BMW", "Ford", "Mazda"};
Java Methods
A method is a block of code which only runs when it is called. You can pass data, known as parameters, into a method. Methods are used to perform certain actions, and they are also known as functions.
WHY METHODS?
- To define the code once
-:
Syntax
public class Main {
static void prosper(){
// code to be executed
}
}
Example Explained
prosper()is the name of the method
staticmeans that the method belongs to the Main class and not an object of the Main class.
voidmeans that this method does not have a return value. You have the option to remove this so as to return a value.
Call a Method
To call a method in Java, write the method’s name followed by two parentheses () and a semicolon;
In the following example,
myMethod() is used to print a text (the action), when it is called:
Java OOP
OOP stands for Object-Oriented Programming.
Procedural programming is about writing procedures or methods that perform operations on the data, while object-oriented programming is about creating objects that contain both data and methods.
Object-oriented programming has several advantages over procedural programming:
- OOP is faster and easier to execute
- OOP provides a clear structure for the programs
- OOP helps to keep the Java code DRY “Don’t Repeat Yourself”, and makes the code easier to maintain, modify and debug
- OOP makes it possible to create full reusable applications with less code and shorter development time
Tip: The “Don’t Repeat Yourself” (DRY) principle is a useful tip on OOP about reducing the repetition of code. You should extract out the codes that are common for the application, and place them at a single place and reuse them instead of repeating it.
Java Classes/Objects.
Create a Class
To create a class, use the keyword
class:
Main.java
Create a class named “
Main" with a variable x:
public class Main {
int x = 5;
}
Create an Object
In Java, an object is created from a class. We have already created the class named
Main, so now we can use this to create objects.
To create an object of
Main,);
}
}
Java Constructors
A Java constructor is special method that is called when an object is instantiated. In other words, when you use the new keyword. The purpose of a Java constructor is to initializes the newly created object before it is used. … Typically, the constructor initializes the fields of the object that need initialization.
Example
//
Java Modifiers
The
public keyword(which you are quite familiar with):
- Public :The class is accessible by any other class
- Default :The class is only accessible by classes in the same package. This is used when you don’t specify a modifier.
For attributes, methods and constructors including public, you can use the one of the following:
- Public :The class is accessible by any other class
- Private: The code is only accessible within the declared class
- Protected: The code is accessible in the same package and subclasses.
Non-Access Modifiers
For classes, you can use either
final or
abstract:
- Final: The class cannot be inherited by other classes.
- Abstract :The class cannot be used to create objects (To access an abstract class, it must be inherited from another class. )
For attributes and methods, you can use the one of the following:
- Final: Attributes and methods cannot be overridden/modified
- Static: Attributes and methods belongs to the class, rather than an object
- Abstract: Can only be used in an abstract class, and can only be used on methods. The method does not have a body, for example abstract void run();. The body is provided by the subclass (inherited from).
- Transient: Attributes and methods are skipped when serializing the object containing them
- Synchronized: Methods can only be accessed by one thread at a time
- Volatile: The value of an attribute is not cached thread-locally, and is always read from the “main memory”
Java Encapsulation
The meaning of Encapsulation, is to make sure that “sensitive” data is hidden from users. To achieve this, you must:
- declare class variables/attributes as
private
- provide public get and set methods to access and update the value of a
privatevariable
Get and Set
It is possible to access final method data if we provide public get and set methods.
The
get method returns the variable value, and the
set method sets the value.
Syntax for both is that they start with either
get or
set, followed by the name of the variable, with the first letter in upper case:
Example
public class Person {
private String name; // private = restricted access
// Getter
public String getName() {
return name;
}
// Setter
public void setName(String newName) {
this.name = newName;
}
}
Example explained
The
get method returns the value of the variable
name.
The
set method takes a parameter (
newName) and assigns it to the
name variable. The
this keyword is used to refer to the current object.
However, as the
name variable is declared as
private, we cannot access it from outside this class:
Note: The variable must be declared public. However, as we try to access a
private variable, we get an error.
Here is recommended video on the rest of the java topics | https://shadowprosper7777.medium.com/various-topics-in-java-a8b686cf5ce8?source=post_internal_links---------4---------------------------- | CC-MAIN-2022-21 | refinedweb | 1,261 | 55.27 |
Not logged in
Log in now
Recent Features
Deadline scheduling: coming soon?
LWN.net Weekly Edition for November 27, 2013
ACPI for ARM?
LWN.net Weekly Edition for November 21, 2013
GNU virtual private Ethernet
I am a maintainer and packager in openSUSE and I would say that the macro set in other modern distributions (Fedora, Mandriva) is rather compatible with ours. The only macros with which we have problems are icon update ones and of course with the names of the devel packages.
> But rpm5 also seems like an attempt at macro set unification, which isn't likely with RH controlled branch FWIW.
RPM5 is a way to break package compatibility between distributions first of all.
Anyway, I am sure that any migration to RPM5 cannot be justified with such behavior of the upstream. I read the Jeff's posts in the mailing list, his accusations of everybody around of being mentally ill etc and I would say that anybody who still wants to migrate to RPM5 should blame themselves for their stupidity.
Who maintains RPM? (2011 edition)
Posted May 7, 2011 3:23 UTC (Sat) by proyvind (guest, #74683)
[Link]
The macros and compatibility between suse, fedora and mandriva as currently is and has historically been isn't worth shit.
The move to rpm5 is rather to standardize packaging around upstream, and not by trying to solve it with awkward macros to get half-assed adopted somewhere here and there, but never remotely sufficient to be of much relevance.
And just by stating that you're a package maintainer in opensuse doesn't really give you that much credibility to have insight on matters beyond suse packaging, much less rpm internals itself, and by claiming that macro sets in other modern distributions is rather compatible with yours, you're just revealing how way off you really are..
If you think more consistent macros is the solutions to everything, good for you, but what we want to achieve, and what I'm working on related to rpm5 is actually on implementing proper functionality within rpm itself, to be automized, externalized from spec files, and reduce the huge crap pile of macros and feeble attempts of achieving compatibility with %if foo blablbla %endif.
JPackage is a brilliant example of how miserably failures such attempts usually are, we'd rather work on making such project superfluous...
But go on, your macros will probably lead you to the magic leprecon, taking you to the gold at the end of the rainbow..
But in the real world, there's a need for cleaner, simpler, and well-designed means to achieve any sensible form of compatibility..
If you read our mailing lists, then you should also notice that Jeff's responses are well warranted, even though a bit hot-headed and misinterpreting people occationally, something which you cannot blame him for considering people's behaviour and attitude..
Jeff has been the person over the last month who's actually been the most active and helpful on cooker, and also given us a huge boost in discussing new ideas and related r&d, while being frequently trashed by the same group of people with their own agenda.
You can find these same people having been especially active dating back to september since the mageia announcement, trolling and generally generating tension on the list before Jeff turned up as well..
Posted May 7, 2011 13:48 UTC (Sat) by Ansus (guest, #74724)
[Link]
I have ported hundreds of packages from other RPM distros and as I can say, the only places where you have to place %if...%endif most often are the BuildRequires tags due to different package names and the update-desktop-files machinery.
(offtopic) from .desktop accounting department
Posted May 9, 2011 12:42 UTC (Mon) by gvy (guest, #11981)
[Link]
Posted May 7, 2011 13:50 UTC (Sat) by Ansus (guest, #74724)
[Link]
From what I saw he attacked completely uninvolved people for even friendly questions.
Posted May 7, 2011 13:55 UTC (Sat) by Ansus (guest, #74724)
[Link]
In fact it will only split package format among distributions removing any hope of reconciliation. Or do you hope that Fedora will switch to RPM5 also? Speaking for openSUSE, it values compatibility with Fedora and will not switch either.
What even worse is that RPM5 breaks not only spec files compatibility, but also makes binary formats incompatible.
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/441863/ | CC-MAIN-2013-48 | refinedweb | 741 | 53.04 |
Generating a Zeebe-Python Client Stub in Less Than An Hour: A gRPC + Zeebe Tutorial
The high points:
- Starting in Zeebe 0.12, Zeebe clients communicate with brokers via a stateless gRPC gateway, with Protocol Buffers used as the interface design language and message interchange format.
- Because the client protocol is defined with Protocol Buffers (proto3), you can generate a client stub in any of the ten programming languages supported by gRPC, even if there's no officially-supported Zeebe client in your target language.
- We’ll show you step-by-step how we generated and started prototyping with a Python client stub in less than an hour, and we’ll provide everything you need to follow along.
What is gRPC, and why is it a good fit for Zeebe?
Zeebe 0.12, released in October 2018, introduced significant updates to Zeebe. Topics were removed. Topic subscription was replaced by a new exporter system, and we also shipped a ready-to-go Elasticsearch exporter.
Lastly, we reworked the way that Zeebe clients communicate with brokers. Zeebe now uses gRPC for client-server communication, and clients connect to brokers via a stateless gateway. The client protocol is defined using Protocol Buffers v3 (proto3).
gRPC was first developed by Google and is now an open-source project and part of the Cloud Native Computing Foundation. If you're new to gRPC, the "What is gRPC" page on the project website provides a simple, concise description: "gRPC is based around the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. On the server side, the server implements this interface and runs a gRPC server to handle client calls. On the client side, the client has a stub (referred to as just a client in some languages) that provides the same methods as the server."
gRPC has many nice features that make it a good fit for Zeebe:
- gRPC supports bi-directional streaming for opening a persistent connection and sending or receiving a stream of messages between client and server.
- gRPC uses the common HTTP/2 protocol by default.
- gRPC uses Protocol Buffers as an interface definition and data serialization mechanism–specifically, Zeebe uses proto3, which supports easy client generation in ten different programming languages.
That third benefit is particularly interesting for Zeebe users. If Zeebe doesn’t provide an officially-supported client in your target language, you can easily generate a so-called client stub in any of the ten languages supported by gRPC and start using Zeebe in your application.
In the rest of this post, we’ll look more closely at Zeebe’s gRPC Gateway service and walk through a simple tutorial where we generate and use a client stub for Zeebe in Python.
Zeebe’s gRPC Gateway Service
At a high level, there are two things you need to do to create a gRPC service and generate client and server code in your target language:
- Define a service interface and the structure of messages using Protocol Buffers as the interface definition language
- Use the protocol buffer compiler to generate usable data access classes in your preferred programming language based on the proto definition
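To make that concrete, here's what a minimal proto3 definition looks like. This is a generic sketch for illustration only (the Greeter service and its messages are invented here), not an excerpt from Zeebe's file:

syntax = "proto3";

message HelloRequest {
  string name = 1;
}

message HelloResponse {
  string greeting = 1;
}

service Greeter {
  // A unary RPC: one request message in, one response message out
  rpc SayHello (HelloRequest) returns (HelloResponse);
}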
To understand exactly what one of these service interfaces looks like, we recommend taking a look at Zeebe's .proto file, which you can find in GitHub.
In lines 8 through 201, you’ll see that we define the structure of request and response messages that can be sent via the service, and starting on line 203, we define the service itself.
If you’re already familiar with the capabilities of Zeebe’s Java or Go clients, then the service defined in the
.proto will look familiar to you. That’s because the
.proto file serves as a central place where the client interaction protocol is defined one time for clients in all programming languages (and therefore doesn’t need to be duplicated every time a new client is created).
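In abbreviated form, the service block looks roughly like the sketch below. We're showing only the Topology RPC that we'll call later in this post; see the full gateway.proto for the complete set of RPCs and their exact signatures:

service Gateway {
  // Request the topology of the Zeebe cluster behind the gateway
  rpc Topology (TopologyRequest) returns (TopologyResponse);
  // ... plus RPCs for deploying workflows, creating workflow instances,
  // working on jobs, and publishing messages
}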
Another compelling point about gRPC: a service’s server code doesn’t have to be written in the same language as client code. In other words, the Zeebe team wrote the server code for the gateway service just one time and in Java (our preferred language), and Zeebe brokers can receive and respond to requests from clients in any other gRPC-supported language.
Generating a client stub for Zeebe in Python
Alright, alright. Onto the fun part. In this section, we’ll walk through step-by-step how we generated a client stub for Python, and we’ll provide everything you need to follow along.
First, you’ll need to go through the Prerequisites in the Python quickstart on the gRPC site to install gRPC and gRPC tools. You can stop at the “Download the example” section header.
Next, create and change into a new directory.
mkdir zeebe-python
cd zeebe-python
Then add the gateway.proto file to that directory. Here’s the raw file in GitHub.
OK, we’re ready to generate some gRPC code based on the service defined in the gateway.proto file. Inside the directory you just created, run:
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. ./gateway.proto
In that directory, you should now see two newly-generated files:
gateway_pb2.py, which contains the generated request and response classes, and
gateway_pb2_grpc.py, which contains our generated client and server classes. We won’t be editing these files, but we will be calling the methods in them as we build a simple client application.
Before you go any further, you’ll need to download Zeebe if you haven’t already. In this walkthrough, we’re using Zeebe 0.13.1, which you can find here:.
Next, go ahead and start up a Zeebe broker.
cd zeebe-broker-0.13.1/ ./bin/broker
While we’re here, let’s use the Zeebe command line interface (zbctl) to deploy a workflow model. Here’s the model we’ll be using in this tutorial, which you should save as a bpmn file in the zeebe-broker directory.
To deploy, run:
./bin/zbctl deploy simple-process.bpmn
Note that if you’re on a Mac, you’ll need to run:
./bin/zbctl.darwin deploy simple-process.bpmn
And if you’re on Windows, it’s:
./bin/zbctl.exe deploy simple-process.bpmn
Next, we’re going to build a simple client application to create workflow instances, to work on jobs, and so that we can finish instances for our model, to send messages.
In the directory where you generated the
gateway_pb2.py and
gateway_pb2_grpc.py files, create a new Python file. Let’s call it
zeebe_client.py.
If you need a reference along the way, you can see our finished zeebe_client.py file here. Note that we’re going to call all of these different methods from one file to keep things simple, but that probably isn’t how you’d do things in a real-world scenario.
Setup and First Client Call
First, you’ll need to import three modules.
import grpc import gateway_pb2 import gateway_pb2_grpc
To start calling service methods, we need to create what’s called a stub. In a new function, define a channel (26500 is the default Zeebe port) then instantiate the
GatewayStub class of the
gateway_pb2_grpc module.
def run(): with grpc.insecure_channel('localhost:26500') as channel: stub = gateway_pb2_grpc.GatewayStub(channel)
We already started up the Zeebe broker, and so the first thing we’ll do is check the broker topology to confirm that it’s running and configured as expected.
def run(): with grpc.insecure_channel('localhost:26500') as channel: stub = gateway_pb2_grpc.GatewayStub(channel) topologyResponse = stub.Topology(gateway_pb2.TopologyRequest()) print(topologyResponse)
Let’s talk through how we came up with that request. We can see that in our proto service definition, a
TopologyRequest to the RPC
Topologyshould get us a
TopologyResponse. This is a pretty simple call that doesn’t require us to pass any arguments.
And what does a
TopologyResponse look like? We can see starting on line 27 of the proto file that a
TopologyResponse includes
brokers,
clusterSize,
partitionsCount, and
replicationFactor.
So let’s run the thing and see what happens! At the end of your
zeebe_client.py file, be sure to add:
if __name__ == '__main__': run()
Then we can hop over to our terminal and run it.
python zeebe_client.py
Et voila! We have a response. Note that
partitions aren’t printed here because gRPC doesn’t show default values, and in this case, we have the default 1 partition.
brokers { host: "0.0.0.0" port: 26501 partitions { } } clusterSize: 1 partitionsCount: 1 replicationFactor: 1
We can follow the same approach for other RPCs in the
Gateway service. For those of you who’d like to see a few more examples, we’ll walk through how to get a list of workflows, create a workflow instance, work on a job, and send a message.
Get a List of Deployed Workflows
Next, we’ll update the function to get a list of deployed workflows.
def run(): with grpc.insecure_channel('localhost:26500') as channel: stub = gateway_pb2_grpc.GatewayStub(channel) topologyResponse = stub.Topology(gateway_pb2.TopologyRequest()) print(topologyResponse) listResponse = stub.ListWorkflows(gateway_pb2.ListWorkflowsRequest()) print(listResponse)
When you run the client, you should see the response:
workflows { bpmnProcessId: "simple-process" version: 1 workflowKey: 1 resourceName: "simple-process.bpmn" }
Create a Workflow Instance
OK, so we’ve used our client to retrieve some information about the Zeebe brokers and our workflows. Now it’s time to start doing. So let’s create a workflow instance! We’ll add the following to our function:
createResponse = stub.CreateWorkflowInstance(gateway_pb2.CreateWorkflowInstanceRequest( bpmnProcessId = 'simple-process', version = -1, payload = '{"orderId" : "ab1234"}')) print(createResponse)
As a reminder, we can find out what arguments we should pass based on the
CreateWorkflowInstanceRequest message definition in the proto file:
message CreateWorkflowInstanceRequest { int64 workflowKey = 1; string bpmnProcessId = 2; /* if bpmnProcessId is set version = -1 indicates to use the latest version */ int32 version = 3; /* payload has to be a valid json object as string */ string payload = 4; }
OK, so now we’ve created an instance for the simple-process.bpmn workflow we already deployed, and we’re using the most recent version of the workflow model (in our case, the only version).
Our instance has a payload, too, which we’ll use later to correlate a message to the workflow instance.
Activate and Work on Jobs
Our simple workflow model includes a “Collect Money” service task, and next, we need to request and work on a corresponding job so that we can complete this task.
If you open the workflow model in the Zeebe Modeler, click on the “Collect Money” service task, and open the properties panel, you can see that the job type is
payment-service (click to enlarge).
And so we’ll need to create a simple worker that requests and completes jobs of type
payment-service.
First, let’s take a look at the
ActivateJobsRequest in our proto file. The
ActivateJobsRequest is what we’ll use to communicate to the Zeebe brokers that we have a worker ready to take on a certain number of a certain type of job.
message ActivateJobsRequest { string type = 1; string worker = 2; int64 timeout = 3; int32 amount = 4; }
For the purposes of this tutorial, the most important fields in the request are
type (again, this is the job type specified in the service task in the workflow model) and
amount (the number of jobs our worker will accept at that time). We’ll also set a
timeout(how long the worker has to complete the job)and
worker(a way to for us to identify the worker that activated a job).
When we look at the
Gateway service in the proto file, we see that an
ActivateJobsRequest returns a stream rather than a single response:
rpc ActivateJobs (ActivateJobsRequest) returns (stream ActivateJobsResponse) { }
So we’ll need to handle the stream a bit differently than we’ve been handling single responses when we update our client.
for jobResponse in stub.ActivateJobs(gateway_pb2.ActivateJobsRequest( type = 'payment-service', worker = 'zeebe-client-test', timeout = 1000, amount = 32)): for job in jobResponse.jobs: print(job) stub.CompleteJob(gateway_pb2.CompleteJobRequest(jobKey = job.key))
Note that in this tutorial, we’re simply activating and completing jobs–there’s no business logic in our client application. In the real world, you’d replace
print(job) with your job worker’s business logic.
When we run
zeebe_client.py, we see information about the job we just completed:
key: 2 type: "payment-service" jobHeaders { workflowInstanceKey: 6 bpmnProcessId: "simple-process" workflowDefinitionVersion: 1 workflowKey: 1 elementId: "collect-money" elementInstanceKey: 21 } customHeaders: "{\"method\":\"VISA\"}" worker: "zeebe-client-test" retries: 3 deadline: 1543398908569 payload: "{\"orderId\":\"ab1234\"}"
There’s one last thing we have to do to complete the workflow: send a message that can be correlated with this workflow instance.
If you’re new to messages and message correlation in Zeebe, you might want to check out the blog post where we go over the feature in detail or read through the Message Correlation reference page of the documentation.
Once again, we’ll look at our model in the Zeebe Modeler to get more information about the message we need to send sending. The workflow instance will need a message with name
payment-confirmed and a matching
orderId(the correlation key) in order to correlate the message (click to enlarge).
stub.PublishMessage(gateway_pb2.PublishMessageRequest(name = "payment-confirmed", correlationKey = "ab1234", timeToLive = 10000, messageId = "messageId", payload = '{"total-charged" : 25.95}' ))
When we created our workflow instance, we included a payload
orderId : "ab1234", and because
orderId is the correlation key we specified in the workflow model, we need to make sure the value in the
correlationKey field of our message matches the workflow instance payload so that the message will be correlated.
The
PublishMessageResponse is empty, so we won’t worry about storing it in a variable or printing it.
If we run the updated application, we don’t get any feedback about the message being correlated in the terminal itself, but if, for example, we’re using Zeebe’s Elasticsearch exporter and inspecting the stream of workflow events in Kibana, we can see that our message was correlated to a workflow instance and that the workflow instance was completed (click to enlarge):
Wrapping Up
And that concludes our gRPC client stub tutorial for Python! After generating a Python client stub using Zeebe’s protocol buffer and native gRPC tools, we walked through how to:
- Request a cluster topology
- Request a list of deployed workflows
- Deploy a workflow instance
- Activate and complete jobs
- Publish a message
…all in Python and without using
zbctl.
We chose Python because, well, it’s one of the co-authors’ preferred programming language. But remember: you could go through these steps and create your own stub in any of the programming languages listed here.
What we generated today wasn’t a fully-featured client, but for many users, we expect that a stub like this one will be more than sufficient for working with Zeebe in their target language. We hope you give it a try.
Questions? Feedback? We want to hear from you. Visit the Zeebe Community page to see how to contact us. | https://zeebe.io/blog/2018/11/grpc-generating-a-zeebe-python-client/ | CC-MAIN-2019-26 | refinedweb | 2,469 | 53.71 |
A Look at Common Performance Problems in Rails
Over the last few months I have analyzed a number of Rails applications w.r.t. performance problems (some of these involved my consulting business, some were open source). The applications targeted a variety of domains, which resulted in enough differences to make each performance improvement task challenging. However, there were enough commonalities that made it possible to extract a number of areas where each of these applications fell short of achieving good performance. These were:
-
Following my suggestions in this article, may or may not improve the performance of your application. Achieving good performance is a tricky business, especially if the performance characteristics of implementation language constructs are somewhat underspecified (as is the case with Ruby).
I strongly suggest to measure your application's performance before you change anything and then again after each change. A good tool for performance regression tests is my package railsbench. It's really easy to install and benchmarks are defined within a few minutes. Unfortunately it won't tell you where the time is spent in your application.
If you have a windows machine around (or a dual boot Intel Mac), I suggest to evaluate Ruby Performance Validator (RPVL) by Software Verification Ltd. I have found it to be of immense value for my Rails performance work that went into the core of Rails, especially after SVL implemented a call graph feature that I suggested on top of the already existing hot spot view. As far as I know, it's the only tool for Ruby application performance analysis on the market right now. Railsbench has built in support for RPVL, which makes it a snap to run benchmarks defined for railsbench under RPVL.
Choosing a Session Container
Rails comes with several built in session containers. All applications I have analyzed used either PStore, which stores session information in a separate file on your file system, or ActiveRecordStore, which stores it in the database. Both choices are less than ideal, especially slowing down action cached pages. Two much better alternatives are available: SQLSessionStore and MemCacheStore.
SQLSessionStore avoids the overhead associated with ActiveRecordStore by
- not using transactions (they are not required for correct operation of SQLSessionStore)
- offloading the work of updating "created_at" and "updated_at" to the database
If you use Mysql, you should make sure to use a MyISAM table for sessions. It is faster than InnoDB, and transactions are not required. I have recently added Postgres support to SQLSessionStore, but Postgres seems to be a lot slower for session storage than Mysql with MyISAM tables, so I suggest to install Mysql just for the session table (I can't think of a good use case were you'd need to join based on session id) if you want DB based session storage.
MemCacheStore is even faster than SQLSessionStore. My measurements show a 30% speed improvement for action cached pages. You need need to install Eric Hodel's memcache client library and do some configuration in environment.rb to be able to use it. Warning: do not attempt to use Ruby-Memcache (it's really, really slow).
For my own projects I tend to use database based session storage, as it enables simple administration through either the Rails command line or administrative tools provided by the database package. You'd need to write your own scripts for MemCacheStore. On the other hand, memchached presumably scales better for very high traffic web sites and comes with Rails supported automated session expiry.
Caching Computations During Request Processing
If you need the same data over and over again, during processing a single request, and can't use class level caching because your data depends in some way on the request parameters, cache the data to avoid repeated calculations.
The pattern is easily employed:
module M
def get_data_for(request)
@cached_data_for_request ||=
begin
expensive computation depending on request returning data
end
end
end
Your code could be as simple as "A".."Z".to_a or it could be a database query, retrieving a specific user, for example.
Perform Request Independent Computations at Startup or on First Access
This advice is so simple that I hesitated somewhat if I should include it in this article. But I find that many applications I have analyzed failed to employ this useful optimization.
The technique is actually very simple: if you have data that doesn't change over your application's lifetime, or changes so seldom that a server restart could be employed if it changes, cache this data in an appropriate class variable on some class of your application. The general pattern goes like this:
class C
@@cached_data = nil
def self.cached_data
@@cached_data ||= expensive computation returning data
end
...
end
Some examples:
- application configuration data (if your application is designed to be installed by others)
- never changing (static) data in your database (otherwise use caching)
- detecting installed Ruby classes/modules using ObjectSpace.each
Optimizing Queries
Rails comes with a powerful domain specific language for defining associations between model classes which reflect table relationships. Alas, the current implementation hasn't been optimized for performance, yet. Relying on the built in generated accessors can severely hurt performance.
The first part of the problem is usually described as the "1+N" query problem: if you load a N objects from class Article (table "articles"), which has a n-1 relationship to class Author (table "authors"), accessing the author of a given article using the generated accessor methods will cause N additional queries to the database. This, of course, puts some additional load on the database, but more importantly for Rails application server performance, the SQL query statements to be issued will be reconstructed for object accessed.
You can get around this overhead by adding an :include => :author to your query parameters like so:
Articles.find(:all, :conditions => ..., :include => :author)
This will avoid all of the above mentioned overhead by issuing a single SQL statement and constructing the author objects immediately. This technique is commonly called "find with eager associations" and can also be used with other relationship types (such as 1-1, 1-n or n-m).
However, n-1 relationships can be optimized further by using a technique called "piggy backing": ActiveRecord objects involving joins carry the attributes from the join table(s) along the attributes from the original table. Thus, a single query with a join can be used to fetch all required information from the database. You could replace the query above with
Articles.find(:all, :conditions => ...,
:joins => "LEFT JOIN authors ON articles.author_id=authors.id",
:select => "articles.*, authors.name AS author_name")
assuming that your view will only display the author's name attached to the article information. If, in addition, your view only displays a subset of the available article columns, say "title", "author_id" and "created_at", you should modify the above to
Articles.find(:all, :conditions => ...,
:joins => "LEFT JOIN authors ON articles.author_id=authors.id",
:select => "articles.id, articles.title, articles.created_at, articles.author_id, authors.name AS author_name")
In general, loading only partial objects can be used to speed up queries quite a bit, especially if you have a large number of columns on your model objects. In order to get the full speedup from the technique, you also need to define a method on the model class to access any attributes piggy backed on the query:
class Articles
...
def author_name
@attributes['author_name'] ||= author.name
end
end
Using this pattern relieves you from knowing whether the original query has a join or not, when writing your view code.
If your database supports views, you could define a view containing just the required information and you would get around writing complicated queries manually. This would also get you the correct data conversion for fields retrieved from the join table. As of now, you don't get these from Rails, but need to code them manually.
Note for "living on the edge guys": I got tired of repeating the same patterns over and over again. So I coded a small extension which does most of the work automatically. A preliminary release can be found on my blog.
Avoiding Slow Helpers
A number of helpers in Rails core will run rather slowly. In general, all helpers that take a URL hash will invoke the routing module to generate the shortest URL referencing the underlying controller action. This implies that several routes in the route file need to be examined, which is a costly process, most of the time. Even with a route file as simple as
ActionController::Routing::Routes.draw do |map|
map.connect '', :controller => "welcome"
map.connect ':controller/service.wsdl', :action => 'wsdl'
map.connect ':controller/:action/:id'
end
you will see a big performance difference between writing
link_to "Look here for joke #{h @joke.title}",
{ :controller => "jokes", :action => "show", :id => @joke },
{ :class => "joke_link" }
and coding out the tiny piece of HTML directly:
<a href="/jokes/show/<%= @joke.id %>"
class="joke_link">Look here for joke <%= h @joke.title %></a>
For pages displaying a large number of links, I have measured speed improvements up to 200% (given everything else has been optimized).
In order to make the template code more readable and avoid needless repetition, I usually add helper methods for link generation to application.rb :
def fast_link(text, link, html_options='')
%(<a href="#{request.relative_url_root}/#{link}"> hmtl_options>#{text})
end
def joke_link(text, link)
fast_link(text, "joke/#{link}", 'class="joke_link"')
end
writing the example above as:
joke_link "Look here for joke #{h @joke.title}", "show/<%= @joke.id %>"
Of course, coding around the slowness of route generation in this way is cumbersome and should only be used on performance critical pages. If you don't need to improve your performance today (uh, this sounds like some of the spam mails I get everyday), you could wait for the first release of my upcoming template optimizer, which will do all of this automatically for you, most of the time.
Topics for Future Rails Performance Improvement
As mentioned above, performance of route recognition and route generation leaves something to be desired. The route generation problem will be addressed by my template optimizer. Last week a new route implementation was incorporated into the Rails development branch. I performed some measurements which seem to indicate performance impovements for route recognition. For routes generation, I got mixed results. It remains to be seen whether the new implementation can be improved to a point where it's consistently faster than the previous one.
Retrieving a large number of ActiveRecord objects from the database is relatively slow. Not so much because the actual wire transfer is slow, but the construction of the Ruby objects inside Rails is rather expensive, due to representing row data as hashes indexed by string keys. Moving to an array based row class should rectify this problem. However, doing this properly would involve changing substantial parts of the ActiveRecord implementation, so it should not be expected to arrive before Rails 2.0.
Finally, the way SQL queries are constructed currently, makes the computation of the queries more expensive than the retrieval of the actual data from the database. I think we could improve this a lot by changing it so that most SQL queries can be created once and simply be augmented by actual parameter values. The only way to work around this problem at the moment is coding your queries manually.
Note: these are my own opinions about possible avenues to pursue for better performance and do not represent any official "core team" opinion towards these issues.
Conclusion
The list of problems cited above should not fool you into thinking that Rails might be too slow (or even that I think it's too slow). To the contrary, I'm convinced that Rails is an excellent web application development framework, usable for developing robust and also fast web applications, at increased productivity. Like all frameworks, it offers convenience methods, which can greatly improve your development speed and which are appropriate most of the time for most of your needs. But sometimes, when it's necessary to squeeze out some extra requests per second, or when you are restricted to limited hardware resources, it's good to know how performance can be improved. Hopefully this article has helped outlining some areas which can be used as points of attack, should you experience performance problems.
About the author
Stefan Kaes writes RailsExpress, the definitive blog about Rails performance and is the author of the forthcoming book, "Performance Rails", scheduled to publish in early 2007. Stefan's book will be among the first in the new Addison-Wesley Professional Ruby Series, set to launch in late 2006 with the second edition of Hal Fulton's "The Ruby Way" and the flagship "Professional Ruby on Rails", authored by Series Editor Obie Fernandez. The Series will consist of a robust library of learning tools for how to make the most of Ruby and Rails in the professional settings. | http://www.infoq.com/articles/Rails-Performance/ | CC-MAIN-2015-11 | refinedweb | 2,149 | 51.38 |
Introduction
Developers cannot build more than one-page web application in React because React is a single-page application (SPA). Therefore, a web application that is built in React will not reload the page. How we can make more than one page then? react-router is the answer to this question. react-router gives us the flexibility to render components dynamically based on the route in the URL. These are the steps how you can set up your react-router in react application.
Installation
As usual, we need to install the package by running this command in the terminal.
npm install react-router-dom // or yarn add react-router-dom
Primary Components
According to react-router documentation, there are three main categories of components in react-router (routers, route matchers, and navigation).
- routers ->
<BrowserRouter>and
<HashRouter>
- route matchers ->
<Route>and
<Switch>
- navigation ->
<Link>,
<NavLink>, and
<redirect>
Routers
The difference between
<BrowserRouter> and
<HashRouter> is the URL link.
<HashRouter> could store a hash in the link, and usually, we use it to refer to several sections in the page.
We must put a router in the top hierarchy component. I usually place the
<BrowserRouter> in the
index.js and wrap the
<App/> component.
// ./src/index.js // ... import { BrowserRouter as Router } from 'react-router-dom'; ReactDOM.render( <React.StrictMode> <Router> <App /> </Router> </React.StrictMode>, document.getElementById('root') );
Route Matchers
The idea of route matchers is to declare the conditional rendering components corresponding with the URL. I might say
<Route> is similar to "if statement", and
<Switch> is similar to switch statement. Take a look at the snippets below.
Using Route
// ./src/App.js // ... import { Route } from 'react-router-dom'; function App() { return ( <div className="App"> <Nav /> {/* I will show this components in the next section */} <Route path="/about"> <About /> </Route> <Route path="/portfolio"> <Portfolio /> </Route> <Route path="/contact"> <Contact /> </Route> <Route path="/"> <Home /> </Route> </div> ); }
If we are not using
<Switch>, it will render
<About /> and
<Home /> components at the same time when users go to the
localhost:3000/about link. It renders two components at the same time because
/about matches with
"/about" and
"/" paths.
We can solve more than one component at the same time by adding the
exact attribute.
<Route exact <About /> </Route> <Route exact <Portfolio /> </Route> <Route exact <Contact /> </Route> <Route exact <Home /> </Route>
or we can use the
<Switch> component like this snippet below.
Using Switch
// ./src/App.js // ... import { Switch, Route } from 'react-router-dom'; function App() { return ( <div className="App"> <Nav /> {/* I will show this components in the next section */} <Switch> <Route path="/about"> <About /> </Route> <Route path="/portfolio"> <Portfolio /> </Route> <Route path="/contact"> <Contact /> </Route> <Route path="/"> <Home /> </Route> </Switch> </div> ); }
a quick note about why I put
path="/"in the last of Route. If I put
path="/"in the beginning, other Route will not render at all. When users go to
localhost:3000/about, it will match with '/' first, and others will be ignored.
Navigation
Navigation components allow the website to create a new link in the URL without reloading the page like using an anchor tag (
<a>). Whenever we are using the anchor tag, the page will be reloaded, and we cannot do that in SPA.
// ./src/components/Navbar/Navbar.js // ... import { Link } from 'react-router-dom'; const Nav = () => { return ( <nav className={styles.wrapper}> <Link to="/">Home</Link> <Link to="/about">About</Link> <Link to="/portfolio">Portfolio</Link> <Link to="/contact">Contact</Link> </nav> ); }; export default Nav;
NavLink
The main difference between
<Navlink> and
<Link> is styling purposes. If we want to give a style when the link is active we can use
<NavLink> like the snippet below
<NavLink to="/contact" activeClassName="active"> Contact </NavLink>
It will be rendered to be HTML like this if the users visit
<a href="/contact" className="active">React</a>
Redirect
If this component renders, it will force to redirect to corresponding with the
to prop.
<Redirect to="/login" />
Conclusion
These three primary categories of
react-router components are the basis of how we can apply
react-router to our project. If we understand these three kinds of categories, it will be easier to implement
react-router. I will share another advanced topic about
react-router in the next blog.
Discussion (2)
Keep sharing!
Sure! Thank you for reading my post. | https://dev.to/raaynaldo/react-router-setup-5gml | CC-MAIN-2022-33 | refinedweb | 714 | 52.9 |
/** * To
Once the data gets into an SD card there just are not undetected write errors.Noise on the SPI bus is be the only thing left. You can check that by enabling CRC on SPI transfers between the Arduino and SD. I added software CRC to SdFat so edit SdFatConfig.h at about line 35 and change USE_SD_CRC to 1 or 2.Code: [Select]/** * ToThen all transfers on the SPI bus will be CRC protected. Calls to SdFat functions will fail with an error return.
As a test, try running the GSM and the Arduino/SDcard on two separate batteries.This will put to rest the problem area.An barrier of opto isolators between the two may help your problem.Try to write a RESET message into the SDcard.Your memory may be getting corrupted.Good Luck
I guess that the flash memory was not enough for SdFat buffers, everything crashed in a reset loop...
#include <SdFatUtil.h>
Serial.println(FreeRam());
if (sd.card()->errorCode()) { // print SD I/O error code Serial.println(sd.card()->errorCode(), HEX);}
Flash use will not cause a crash in a reset loop. If you are that close to running out of RAM, you will likely have a problem if you log data and run gprs at the same time. A pin change interrupt in SoftwareSerial can cause a stack overflow while an SdFat function like remove() is executing.
I looked at your SD module and these often fail. The problem is that these modules don't use proper level shifters on MOSI, SCK, and CS. These signals should be converted from 5V to 3.3V with an IC based level shifter. Most SD cards are not designed to accept 5V signals.
I am not using SoftwareSerial, I am using the internal Serial object to communicate with the gprs.
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=122098.15 | CC-MAIN-2015-11 | refinedweb | 340 | 76.42 |
I’ve been running static code analysis on four large code bases for over two years now. After the initial work of looking through all of the warnings and fixing the serious bugs I put the projects into code analysis maintenance mode. In this mode a build machine compiles each project several times a day with /analyze or clang and reports on any new warnings.
It turns out that static analysis finds a lot of new bugs (a few dozen a week), and I thought I’d summarize the sorts of bugs that it typically finds, along with some details of how we use static analysis effectively.
Format strings
The top type of bug that /analyze finds is format string errors – mismatches between printf-style format strings and the corresponding arguments. Sometimes there is a missing argument, sometimes there is an extra argument, and sometimes the arguments don’t match, such as printing a float, long or ‘long long’ with %d.
In some cases these errors are benign, due to luck of the ABI, but in many cases they cause incorrect values to be printed. In more serious cases – especially those involving ‘%s’ in format strings – they lead to code that will crash if it is ever executed. Part of what inspired me to start doing static code analysis was a series of annoying crashes caused by format string errors. Thanks to static analysis those format string crashes have now completely gone away, and hundreds of lines of diagnostics will now print correct values when we need them.
One common type of error is to pass a string object (std::string or the moral equivalent) to a varargs function. This is illegal, and no amount of “operator char*()” can automatically fix it. For some string classes this code may actually ‘work’ in VC++, but it is still illegal and it is 100% guaranteed to cause crashes when compiled with gcc or clang. You need to explicitly request conversion to a built-in type when passing objects to a printf style function.
Writing format strings that work for both 32-bit and 64-bit code requires attention to detail. Pointers need to be printed with %p rather than 0x%08x. The size_t and ptrdiff_t types are trickier because there is no standard C++ printf format specifier for these types. gcc supports C99’s %zu and VC++ supports %Iu. The practical options are to cast your size_t and ptrdiff_t variables to void* and print with %p, or cast them to ‘long long’ and print with %llu. We’ve standardized on the latter. I’ve talked to Microsoft about this and they might add %zu support for some future VS version. Progress can be slow, but if you don’t ask it’s even slower.
Yes, <inttypes.h> is supposed to solve this, but that is for C99, and VC++ doesn’t support C99, and the inttypes.h format strings are ugly anyway, so we don’t use them.
I’m sure that some readers are scoffing at our use of format strings – aren’t all the cool kids using iostreams or templates or some other type-safe system? Well, perhaps. But, format strings are quite expressive and convenient, so we are sticking with them.
There are two possible conclusions you can take from the hundreds (thousands!) of format string errors that I have seen over the last two years. One conclusion is that you should never ever use format strings because people are error prone and will frequently get them wrong. Another possible conclusion is that you should never use format strings without static analysis to check them. As long as our static analysis tools are looking out for us – as long as they are configured to have zero tolerance towards this class of bugs (and they are) then we can be confident that format string bugs will not last long.
It’s important to realize that most static analysis tools will only analyze format strings if you tell them which function parameters are printf style format strings. We have hundreds of format function and I went through and annotated most of them so that static analysis would help us out:
- For VC++’s /analyze I put _Printf_format_string_ in front of the format string argument
- For gcc/clang I put this after the function declaration: __attribute__ (( format( __printf__, fmtargnumber, firstvarargnumber )))
Obviously we wrap _Printf_format_string and __attribute__ in appropriate preprocessor magic so that each compiler only sees its own annotations. The gcc/clang annotations are needed to catch Linux/OSX only bugs, and because gcc/clang’s -Wformat detects a few additional format string issues.
Our build machines are configured so that the format string warnings are treated as fatal errors – they are reported continually until they are fixed.
Variable shadowing
The other most common warning is variable shadowing. It is amazingly common to have the same variable reused in multiple nested scopes. I have even seen triply nested for loops that all use i as their index variable. I did a careful analysis of variable shadowing on one project and found that, on that project, 98% of the instances of variable shadowing were legal and correct code – if somewhat confusing. Of course that means that the other 2% were buggy.
However, the actual incidence of bugs caused by variable shadowing is higher than 2%. That 2% bug rate was on code that had been tested and shipped for years. When you have just written or just checked-in code the percentage of variable shadowing that causes bugs is far higher. I don’t have any statistics, but I know that last week /analyze found three instances of variable shadowing that were outright bugs, and two instances that were performance problems.
The most common form of variable shadowing is nested for loops that use the same index variable – in our code bases i is the most commonly shadowed variable. It’s surprising how frequently these nested loops are actually correct, but they always make me nervous.
Here’s another (artificial) example of a common way that variable shadowing can cause bugs – in a larger function the error can be much harder to see:
bool IsFooEnabled()
{
bool result = true;
if (Bar())
{
bool result = Shazam();
}
return result;
}
Our code bases have thousands of instances of variable shadowing. Fixing them all would probably fix a few dozen bugs, but would also probably introduce new bugs. It feels like a poor tradeoff. Right now our build machines are configured to warn once about new instances of variable shadowing. Some day I might configure them to treat new instances of variable shadowing as errors – last week’s results suggest that that might be a good idea.
Slow warnings are less useful
Format string errors and variable shadowing should be easy and fast for the compiler to detect – gcc and clang detect them as part of their normal compilation. Unfortunately, for historical reasons the only way to get VC++ to complain about these bugs is to turn on /analyze. That causes huge build slowdowns, so therefore developers have /analyze turned off for regular builds and they only find out about these bugs when the /analyze build machines complain a few hours later. This is sub-optimal. VC++ really needs to add a /analyze:fast mode that just does the quick types of static analysis. We would immediately enable that in all of our projects and suddenly far fewer bugs would get checked in. I have suggested this to Microsoft and I hope they recognize the value.
Update, April 2, 2014
Microsoft updated the variable shadowing bug that I filed to say “A fix for this issue has been checked into the compiler sources. The fix should show up in the next major release of Visual C++.”
Most excellent.
Logic bugs
There are many forms of logic bugs that static analysis can detect. Precedence problems with a?b:c, illogical combinations of ‘|’ and ‘||’, and various other peculiarities. Here’s one example (note that kFlagValue is a constant):
if ( SomeFunction() || SomeOtherFunction()->GetFlags() | kFlagValue )
The code above is an expensive and misleading way to go “if ( true )”. Visual Studio gave a clear warning that described the problem well:
warning C6316: Incorrect operator: tested expression is constant and non-zero. Use bitwise-and to determine whether bits are set.
Here’s another example:
float angle = 20.0f + bOnFire ? 5.0f : 10.0f;
This example is sufficiently subtle that many people don’t see the bug. It looks like this function will store 25.0f or 30.0f. That’s what they want you to think. In fact it always stores 5.0f. Yay precedence! These are the warnings that /analyze gives for this code:
warning C6336: Arithmetic operator has precedence over question operator, use parentheses to clarify intent.
warning C6323: Use of arithmetic operator on Boolean type(s).
Logic bugs are frustrating because we often find them in code that has been play-tested and shipped. In these cases we often end up adding parentheses to make the existing undesirable behavior explicit because the risk of changing the behavior is too great.
Signed, unsigned, and tautologies
This code looks innocuous enough:
result = max( 0, a – b )
This code would have been fine if both a and b were signed – but one of them wasn’t, making this operation nonsensical. Clang didn’t initially find this bug because max() was a macro and clang normally avoids peering too deeply into macros, to avoid excessive false positives. However if you use ccache to optimize build times then clang sees preprocessed source and then this warning pops up. It’s nice when faster builds find more bugs.
We had quite a few places where we were checking to see if unsigned variables were less than zero – now we have fewer.
Buffer overruns
Some of the most serious bugs in C/C++ code are buffer overruns. One common error that can cause buffer overruns is passing the wrong size to a string function – sizeof when ARRAYSIZE is wanted, or just an incorrect number. I highly recommend creating and using template based intermediaries that can take a reference to an array and can infer the array size. The only array size that is guaranteed to be correct is one that the compiler infers.
(see Stop using strncpy already! for examples of this technique)
Using template array-reference overrides is great for new code, but it doesn’t help find code that was written incorrectly in the first place. To find bugs in this code you need to annotate functions that take a pointer and a count so that /analyze knows that the count refers to the number of bytes or element in the passed in buffer.
We have also had a fair number of bugs where code writes one or two elements beyond the end of an array, using a constant index. This is a 100% repro buffer overrun and some of these have existed for most of a decade.
Annotations
Here are some of the macros that we use to annotate our function declarations. The _PREFAST_ define is set when running /analyze. The empty macros for other cases are omitted below to save space.
#ifdef _PREFAST_
// Include the annotation header file.
#include <sal.h>
// Tag all printf/scanf style format strings with these
#define PRINTF_FORMAT_STRING _Printf_format_string_
#define SCANF_FORMAT_STRING _Scanf_format_string_
// Various macros for specifying the capacity of the buffer pointed
// to by a function parameter. Variations include in/out/inout,
// CAP (elements) versus BYTECAP (bytes), and null termination (_Z).
#define IN_Z _In_z_
#define IN_CAP(x) _In_count_(x)
#define IN_BYTECAP(x) _In_bytecount_(x)
#define OUT_CAP(x) _Out_cap_(x)
#define OUT_Z_CAP(x) _Out_z_cap_(x)
#define OUT_BYTECAP(x) _Out_bytecap_(x)
#define OUT_Z_BYTECAP(x) _Out_z_bytecap_(x)
#define INOUT_BYTECAP(x) _Inout_bytecap_(x)
#define INOUT_Z_CAP(x) _Inout_z_cap_(x)
#define INOUT_Z_BYTECAP(x) _Inout_z_bytecap_(x)
// These macros are use for annotating array reference parameters used
// in template functions
#if _MSC_VER >= 1700
#define IN_Z_ARRAY _Pre_z_
#define OUT_Z_ARRAY _Post_z_
#define INOUT_Z_ARRAY _Prepost_z_
#else
#define IN_Z_ARRAY _Deref_pre_z_
#define OUT_Z_ARRAY _Deref_post_z_
#define INOUT_Z_ARRAY _Deref_prepost_z_
#endif // _MSC_VER >= 1700
#else // _PREFAST_
…
Here are some examples of how we use these macros:
int V_sprintf_count( OUT_Z_CAP(maxLenInChars) char *pDest, int maxLenInChars, PRINTF_FORMAT_STRING const char *pFormat, … ) FMTFUNCTION( 3, 4 );
template <size_t maxLenInChars> int V_sprintf_safe( OUT_Z_ARRAY char (&pDest)[maxLenInChars], PRINTF_FORMAT_STRING const char *pFormat, … ) FMTFUNCTION( 2, 3 );
They are a bit unreadable when squeezed into a narrow blog post but the details are there. OUT_Z_CAP says that pDest is an output buffer that is maxLenInChars characters in length that will be null-terminated by the function. PRINTF_FORMAT_STRING says that pFormat is a format string, The FMTFUNCTION macro tells gcc the same thing, by parameter number. OUT_Z_ARRAY says that pDest is an output array that will be null-terminated by the function. We almost never use V_sprintf_count in new code because it requires more typing and because it is error prone.
With these annotations static analysis can give far more and more accurate warnings. Recommended.
Policy
Using static analysis is rarely as simple as just running it. Microsoft’s /analyze is painfully slow so it is impractical for every developer to run it on their local machine. Instead we have build machines that do this. These build machines do full rebuilds a few times a day and they report on new warnings.
Not all warnings are created equal. Some warnings are so unreliable or irrelevant that we completely disable them. Some, like the format string warnings, are 100% reliable and always worth fixing. Our scripts have a zero-tolerance policy towards these warnings and a few others that we trust. Here is our Python dictionary of warnings that we treat as fatal, together with brief descriptions:
alwaysFatalWarnings = {
4789 : “Destination of memory copy is too small”,
6053 : “Call to function may not zero-terminate string”,
6057 : “Buffer overrun due to number of characters/bytes mismatch”,
6059 : “Incorrect length parameter”,
6063 : “Missing string argument”,
6064 : “Missing integer argument”,
6066 : “Non-pointer passed as parameter when pointer is required”,
6067 : “Parameter in call must be the address of the string”,
6066 : “Non-pointer passed as parameter when pointer is required”,
6209 : “Using sizeof when a character count might be needed.”,
6269 : “Possible incorrect order of operations: dereference ignored”,
6270 : “Missing float argument to varargs function”,
6271 : “Extra argument: parameter not used by the format string”,
6272 : “Non-float passed as argument <number> when float is required”,
6273 : “Non-integer passed as a parameter when integer is required”,
6278 : “array new [] paired with scalar delete”,
6281 : “relational/bitwise operator precedence problem”,
6282 : “Incorrect operator: assignment of constant in Boolean context”,
6283 : “array new [] paired with scalar delete”,
6284 : “Object passed as a parameter when string is required”,
6290 : “Bitwise operation on logical result: precedence error”,
6302 : “Format string mismatch: char* when wchar_t* needed”,
6303 : “Format string mismatch: wchar_t* when char* needed”,
6306 : “Incorrect call to ‘fprintf*’: consider using ‘vfprintf*'”,
6328 : “Wrong parameter type passed”,
6334 : “Sizeof operator applied to an expression with an operator”,
6336 : “Arithmetic/question operator precedence problem”,
6522 : “Invalid size specification: expression must be of integral type”,
6523 : “Invalid size specification: parameter ‘size’ not found”,
}
Unfortunately the majority of the other warnings are unreliable. /analyze has improved a lot from VS 2010 to VS 2012 (I strongly recommend using VS 2012 for /analyze), but it still has room to improve, and static analysis always has to maintain a balancing act between having too many false positives and missing possible problems. The things that /analyze warns about range from psychically brilliant to provably impossible. Therefore our build machines are configured to report most warnings once, and then fold them into the background noise. Since I’ve had the most practice at looking at /analyze warnings and understanding their quirks I take a quick look at most of the new warnings. If I can quickly identify the warning as being spurious then I can save other developers from wasting time trying to see what the problem is. And if the warning identifies a real bug I can make sure it gets fixed.
(see VC++ /analyze Bug Finder Bug Fixed to get the crash-fix to VS 2012)
(see /analyze the Ugly for discussions of some spurious warnings)
Using VS 2012 for /analyze
Even if you are still building your code with VS 2010 you can still use VS 2012 for /analyze. This is important because VS 2012 finds more bugs, has fewer false positives, and has 64-bit support. Running /analyze using VS 2012 is relatively easy.
In addition to giving more accurate results, VS 2012 gives better warnings for format string mismatches. If I try to print a std::string using ‘%s’ then VS 2010 gives this warning:
warning C6284: Object passed as parameter ‘2’ when string is required in call to ‘printf’
The warning is accurate and helpful, but the VS 2012 warning is far better:
warning C6284: Object passed as _Param_(2) when a string is required in call to ‘printf’ Actual type: ‘const class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >’.
It’s hard to overstate the value of knowing not just what was expected but also what was actually passed.
You can load VS 2010 projects into VS 2012 – the profile file format is unchanged. In Properties-> Configuration Properties-> General you will find
Platform Toolset that will let you specify which compiler toolset you want to build with. To the right you can see the huge number of options that I have from within VS 2013 Preview (not yet ready for prime-time).
When you are ready to automate this transformation it’s as simple as inserting this tag in the .vcxproj file XML:
<PlatformToolset>v110</PlatformToolset>
There may be a v100 tag in there already – remove that and replace it with v110 (VS 2012). Then set up your environment with:
“%VS110COMNTOOLS%..\..\VC\vcvarsall.bat”
Then invoke a command-line build as usual. You’ll find a bunch of new warnings, and probably some incompatibilities that you’ll have to work through, but at least the project file aspect is easy.
We also do a few tricks with the Executable Directories path (Properties-> Configuration Properties-> VC++ Directories) to redirect link.exe, lib.exe, and mt.exe to a nop.exe so that we avoid the link errors that we might otherwise hit – we are only concerned with compile warnings.
You should also be sure to update VS 2012 with Update 3 (released June 26, 2013) to make sure you get the fix to the annotation parser crash.
Summary
It is the nature of programmers to make mistakes. We write incorrect code all the time, and when reading code we see what we expect, regardless of what is actually there. Bugs slip into code for the same reason that speling errors slip into writing.
Static analysis will find bugs in your code if you give it a chance. Run it regularly, annotate your printf functions and buffer handling functions, generate reports on any new warnings, keep reporting on warnings that are reliable and worth fixing, and use coding techniques that naturally avoid dangerous patterns. Static analysis won’t find all of the bugs that you write – just like a spell checker misses mistakes – but in both cases if you don’t make use of the error-finding technologies that are available then you may end up looking foollish.
The reddit discussion of this post can be found here.
There’s a good discussion of how this post applies to the D language here.
Nice post. We’re a java/C# company, are you familiar with the effectiveness of static analysis tools for these languages and whether they can provide the same level of feedback you’re getting with native code?
Java and C# avoid a number of the issues discussed here by design. Having said that FxCop (the C# analysis engine) is excellent and will find a whole lot of subtle issues in your code
I haven’t used Java significantly but C# has great static analysis. C# also has the advantage of additional run-time checks so that buffer overruns aren’t as dangerous.
I’ve been a Java programmer in my last job and most of my private projects are in Java.
I use the integrated analyzer of NetBeans as well as FindBugs. This sometimes shows me problems that I haven’t noticed. For example possible exceptions that I didn’t catch or caught wrong or code that could be more efficient.
Both the C# compiler and the CLR as well as the Java compiler and the JVM optimize quite a lot of your code. In addition I sometimes check the state of the JVM with VisualVM to see if there are any performance bottlenecks or memory leaks. For C# there are also similar tools but I didn’t use them yet and I don’t know if they are as good as VisualVM…
How does your infrastructure discern new warnings? (third-party tool? in-house?)
What is the consequence of a zero-tolerance violation? Failed build? Reject the changelist from source control?
We have a custom python script that looks at the build results. It reports the failure to the blamelist (everyone whose checkins contributed to that build).
However, due to the crazy slowness of the /analyze builds they are decoupled from the main build process. Thus, while unit test failures will prevent a build from being blessed as a good build, /analyze failures will not. If we had /analyze:fast that could check for at least printf format errors as part of the regular builds then we would mark those warnings as being build errors and they would fail the main build.
What’s the overhead in runtime for the /analyze analysis compared to your main build process? How often does a “successful” main build end up being discarded by errors that /analyze detects?
The build time for /analyze is enormous. It takes about 5.5 hours for one of our projects. That’s 340 separate .vcxproj files (tools, libraries, multiple games) being built from scratch on a machine that is not our most powerful. Because /analyze is the only time we do a full rebuild each time, and because the machines are different, it’s hard to compare, but I’m sure it’s five to ten times slower than a regular build.
Lately we have been having fresh warnings on almost every /analyze build on that project, because the builds only occur twice a day and because we are reporting new variable shadowing warnings. However these new warnings don’t block or discard the main build.
On other projects most of the /analyze builds find no new warnings — some combination of fewer devs and more frequent /analyze builds.
It might be worth investigating whether Incredibuild can distribute /analyze builds.
Thanks for mentioning CCache in passing in this post! We just shaved 60% from our full rebuild time.
Thanks for the informative post. Great read as always (:
Eclipse has a pretty decent static analysis tool. When we first turned it on our engine produced an unenumerable amount of warnings and errors, it was hard to see what was produced by something you did vs. something that found its way in via code creep/cruft.
One interesting feature with Eclipse however, is the ability to filter your problems and warnings to different tabs. What I ended up doing was placing all the engine generated warnings inside one tab, and all the new project errors/warnings in another. People ended up paying attention to their own warnings however, getting them to fix the existing ones was something that never really caught on — ‘deadlines!’
Eclipse’s tool is able to analyze on the fly with no noticeable performance hits to the environment as well, in contrast to Visual Studio (as you mentioned). This allows our programmers to catch problems before check-in.
The tool packed with most modern versions of Eclipse detects the following (probably a much less extensive list then Visual Studio, but covers most of the major issues you pointed out in this article):
Cheers,
Love your articles by the way!
-Eddie.
Pingback: Vote for the VC++ Improvements That Matter | Random ASCII
Pingback: Debugging Optimized Code–New in Visual Studio 2012 | Random ASCII
Do you guys use PVS-Studio ( ) too? If so has it caught anything your other tools haven’t and how long does it take to run?
I have not tried PVS-Studio. I imagine it would find some new bugs (every new static analysis tool probably does) but I also imagine that it would find fewer than /analyze did, just because /analyze has found the easy ones already.
Maybe I’ll try it some day.
Pingback: You Got Your Web Browser in my Compiler! | Random ASCII
Pingback: A Crash of Great Opportunity | Random ASCII
Hello, I just started diving in into SAL annotations and I noticed what seems like a bug to me. I have the following functions:
void
MyFunc0(
_Out_writes_(NumberOfEntries) DWORD* Array,
_In_ DWORD NumberOfEntries
)
{
DWORD i;
for (i = 0; i <= NumberOfEntries; ++i)
{
Array[i] = i;
}
}
void
MyFunc1(
_Out_writes_bytes_(NumberOfEntries*sizeof(DWORD)) DWORD* Array,
_In_ DWORD NumberOfEntries
)
{
DWORD i;
for (i = 0; i <= NumberOfEntries; ++i)
{
Array[i] = i;
}
}
void
MyFunc2(
_Out_writes_all_(NumberOfEntries) DWORD* Array,
_In_ DWORD NumberOfEntries
)
{
DWORD i;
for (i = 0; i <= NumberOfEntries; ++i)
{
Array[i] = i;
}
}
void
MyFunc3(
_Out_writes_bytes_all_((sizeof(DWORD)*NumberOfEntries)) DWORD* Array,
_In_ DWORD NumberOfEntries
)
{
DWORD i;
for (i = 0; i <= NumberOfEntries; ++i)
{
Array[i] = i;
}
}
Intentionally I have written the loop from 0 to NumberOfEntries to see off by one reports from the static analyzer. Unfortunately the only place where the warning appears is MyFunc1.
I've run the analyzer on these functions using both Visual Studio 2013 Update 5 and Visual Studio 2015 in a user mode, kernel mode and in a native project and the same results appear everywhere. The rule set used by the analyzer was: "Microsoft All Rules" which as I understand should contain all analyzer checks.
Also, I've tried the wmemcpy example shown in which points it should report an off-by one BUG and it does not report so.
Am I missing something or is this a bug?
I’m not an expert on annotations but it seems like that is probably a bug. You should file a bug on connect.
Also, the value of annotations is most often useful when analyzing the *callers* of those functions. Try calling the functions with buffers that are intentionally too small to see if those errors are caught.
BTW, both the printf and shadowed variables warnings are part of VS 2015 or later without using /analyze…
Yep! The printf warnings have saved me some trouble more than once. I wish they applied to user-defined printf-style functions, but maybe that will come later. | https://randomascii.wordpress.com/2013/06/24/two-years-and-thousands-of-bugs-of-static-analysis/ | CC-MAIN-2021-43 | refinedweb | 4,469 | 58.92 |
import "encoding/csv" is the field delimiter. // It is set to comma (',') by NewReader. Comma rune // Comment, if not 0, is the comment character. Lines beginning with the // Comment character without preceding whitespace are ignored. // With leading whitespace the Comment character becomes part of the // field, even if TrimLeadingSpace is true. TrailingComma bool // ignored; here for backwards compatibility //.
Output:

[first_name last_name username]
[Rob Pike rob]
[Ken Thompson ken]
[Robert Griesemer gri]
This example shows how csv.Reader can be configured to handle other types of CSV files.
Output:

[[first_name last_name username] [Rob Pike rob] [Ken Thompson ken] [Robert Griesemer gri]]
Output:

first_name,last_name,username
Rob,Pike,rob
Ken,Thompson,ken
Robert,Griesemer,gri
I am trying to make a meter with a rotating needle (not exactly a speedometer). This meter is not controlled by the player, but when a certain speed is hit the player must press a button. I want to move my needle from 90 degrees to -90 degrees one time. The speed of the needle's movement will need to depend on a time variable. I am able to move the needle to a specific point, but I am having difficulty getting the needle to move smoothly from 90 to -90 over time. I was messing around with Mathf.Lerp and Quaternion.Euler.
Here's what I have so far:
using UnityEngine;
using System.Collections;

public class MoveDial : MonoBehaviour {

    public Transform needle;
    private float needleSpeed;
    private float needleStart;
    private float needleEnd;

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
        needle.rotation = Quaternion.Euler(0, 0, -90);
        //needle.rotation = Mathf.Lerp (needleStart, needleEnd, needleSpeed);
    }
}
Answer by sumitb_mdi · Dec 18, 2016 at 11:03 AM
private const float ANGLE_CHANGE_SPEED = 10.0f; // Change this as per your requirement.
float currentAngle = 90.0f;

void Update () {
    if (currentAngle > -90.0f) {
        needle.rotation = Quaternion.Euler (0, 0, currentAngle);
        currentAngle -= (ANGLE_CHANGE_SPEED * Time.deltaTime);
    }
}
This works exactly how I wanted.
Qml VideoOutput doesn't work after setting source to null once
- theonlylawislove
import QtQuick 2.7
import QtQuick.Controls 2.0
import QtMultimedia 5.8

ApplicationWindow {
    visible: true
    width: 640
    height: 480

    property bool flip: true

    Timer {
        interval: 5000
        running: true
        repeat: true
        onTriggered: {
            flip = !flip
            if (flip) {
                videoOutput.source = null
            } else {
                videoOutput.source = player
            }
        }
    }

    VideoOutput {
        id: videoOutput
        anchors.fill: parent
        source: player
    }

    MediaPlayer {
        id: player
        source: ""
        autoPlay: true
        loops: MediaPlayer.Infinite
    }
}
After a few triggers of the Timer that sets videoOutput.source to null, it never works again. The VideoOutput just shows a stale painting of a previously decoded frame, and never changes.
Any update on this?
I think I'm running into the same issue. This problem seems to be present on Linux, but not on Windows.
Why not change the source on the MediaPlayer rather than the VideoOutput?
That works, thanks.
@lplessard I feel like I have closure now. ;-) | https://forum.qt.io/topic/90004/qml-videooutput-doesn-t-work-after-setting-source-to-null-once | CC-MAIN-2020-50 | refinedweb | 163 | 62.44 |
Of course, not all documentation is intended for end users of desktop applications—some is meant for other developers. I’m not thinking here of internal documentation for your own team (I’ll discuss that briefly later in the chapter), but about documenting products that are aimed strictly at developers. The most obvious place this comes into play is when you’re delivering (and documenting) a class library. Ideally, your class library documentation should follow the format and conventions that the .NET Framework SDK established for .NET namespaces.
Although I’m concentrating on help files in this chapter, you should not neglect the application’s user interface as a source of subtle but useful clues to its proper use. A well-designed application can cut down considerably on the number of times that the user has to refer to the help file at all. Consider these factors:
People (at least those who speak one of the Western languages) tend to read left to right, top to bottom. That’s why it makes sense to group the controls that indicate the end of a process (such as OK, Apply, and Cancel buttons) together in the lower-right corner of the application. Don’t make users hunt around for controls.
When entering data in a particular control is inappropriate, disable the control.
The use of tooltips to indicate the purpose of a control can be a great help to new users. The ToolTip control makes this easy to implement on .NET Windows forms. Tooltips can also help make your application more usable for users with special needs; screen readers, for example, will pick up tooltip text. Some users find tooltips annoying, though, so you might consider providing an option to turn them off.
For step-by-step processes, most users are accustomed to a wizard interface. It’s worth using such an interface, with clear directions on each panel, for such processes.
By paying attention to the design of your user interface, you can make it possible for some users to never need the help files at all. Watching actual users interact with your application can tell you whether you’re getting to this point.
Fortunately, there’s a free tool that makes this pretty easy to accomplish:.
You’ll find the XML tags used by these special comments explained in the XML Documentation section of the C# Programmer’s Reference. Table 12.1 lists the XML documentation tags that VS .NET can use for XML documentation. This list isn’t fixed by the C# specification; different tools are free to make use of other tags.
As an example, here’s a piece of the code for the DownloadEngine class, with embedded documentation comments:
/// <summary>
/// Replace a possibly null string with another string.
/// </summary>
/// <param name="MaybeNull" type="string">
/// <para>
/// A string that might be null or empty.
/// </para>
/// </param>
/// <param name="Replacement" type="string">
/// <para>
/// A replacement string to be used if the first string is null or empty.
/// </para>
/// </param>
/// <returns>
/// Returns the original string if it's not null or empty, otherwise returns the replacement string
/// </returns>
private string ReplaceNull(string MaybeNull, string Replacement)
{
    if ((MaybeNull == null) || (MaybeNull.Length == 0))
    {
        return Replacement;
    }
    else
    {
        return MaybeNull;
    }
}
Embedded in your code, these comments don’t do a lot of good for your customers, but VS .NET can collect these comments into an external XML file. To enable this collection, right-click on the project in the Solution Explorer and select Properties. Then select the Build page and enter a name for the XML documentation file, as shown in Figure 12.5.
Figure 12.5: Activating XML documentation in VS .NET
After you’ve built the XML comments file for your application, NDoc can do its work. Figure 12.6 shows the NDoc user interface.
Figure 12.6: Using NDoc to build a help file
You can select one or more assemblies to document and tell NDoc where to find the XML comments file for each assembly. NDoc will combine the information in the XML comments file with information determined by examining the assembly itself, and then build MSDN-style documentation for you. Figure 12.7 shows a part of the resulting help file.
Figure 12.7: Developer-style help for a class library
For an even better developer experience, you can integrate your class library help files directly with the help for VS .NET. The Visual Studio Help Integration Kit, which I mentioned earlier in this chapter, contains the necessary tools and instructions for this process. | https://flylib.com/books/en/1.67.1.71/1/ | CC-MAIN-2018-30 | refinedweb | 755 | 62.17 |
Comment attribute: may be associated with any label to store a user comment.

#include <TDataStd_Comment.hxx>
Something to do AFTER creation of an attribute by persistent-transient translation. The returned status says whether AfterUndo has been performed (true) or whether this callback must be called once again later (false). If <forceIt> is set to true, the method MUST perform and return true. Does nothing by default and returns true.
Reimplemented from TDF_Attribute.
Dumps the minimum information about <me> on <aStream>.
Reimplemented from TDF_Attribute.
Returns the comment attribute.
Finds, or creates, a Comment attribute; the Comment attribute is returned.

Finds, or creates, a Comment attribute and sets the string; the Comment attribute is returned.
I'm using Visual C++ 6.0 and I have made sure I chose a Win32 project (not a console app).
/*
Questions go here:
1. is a reference another word for handle?
2. research TCHAR datatype.
3. Difference between WNDCLASSEX and WNDCLASS?
   WNDCLASSEX is the newer one. There are more differences but for our purposes
   lets assume that the newer one is the "better one".
*/

//always necessary to include windows.h for making windowed programs
#include <windows.h>
#include "stdafx.h"
#include <stdio.h>

//function prototypes
ATOM produceAndregisterClass();
int createAndShowWindow();
LRESULT CALLBACK WindowProcedure
    (HWND hwnd, unsigned int message, WPARAM wParam, LPARAM lParam);

//global variables follow
HINSTANCE hinst;  //a reference to the application instance
HWND mainWindow;  //this reference will eventually point to the main Window,
                  //once instantiated
TCHAR className[] = "myWindow"; //the name of the "class" which we will register

/* Main Steps
1. Define a window "class". Populate a WNDCLASSEX structure.
2. Register the window class. Pass the WNDCLASSEX struct pointer to RegisterClassEx.
3. Provide a windows procedure. This is the function which will deal with the user's
   interaction with the window.
4. Create and show the window.
5. Put application into a "message loop".
*/

//entry point into a windowed application is WinMain
int APIENTRY WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    printf("in main function");

    if ( produceAndregisterClass() == 0 )
        return -1; //producing and registering class did not work for whatever reason
                   //thus we will exit early.

    if ( createAndShowWindow () == 0 )
        return -1; //again, something went wrong so exit early.

    //message loop goes here
    BOOL bRet;
    MSG msg;
    while( (bRet = GetMessage( &msg, NULL, 0, 0 )) != 0)
    {
        if (bRet == -1)
        {
            return -1;
        }
        else
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
    return 0;
}

ATOM produceAndregisterClass()
{
    //first let us populate a WNDCLASSEX struct.
    //this struct contains all the details of windows we will display
    WNDCLASSEX windowClass;

    //set the size. when dealing with WNDCLASSEX structures, the RHS of the
    //code is always the same when setting the cbSize member
    windowClass.cbSize = sizeof(WNDCLASSEX);

    //a style helps determine the window's behavior. these are enumerated
    //at the MSDN website.
    //it is my understanding that using the | operator allows you to specify a combination of styles
    //I've set it to redraw the window whenever the height or width is adjusted.
    windowClass.style = CS_HREDRAW | CS_VREDRAW;

    //pointer to the programmer defined window procedure to deal with the user interaction with window
    windowClass.lpfnWndProc = (WNDPROC) WindowProcedure;

    //number of extra bytes to allocate to structure. i'm saying 0 because this is a simple window,
    //although it remains to be seen exactly how you work out how much more you need if the window was
    //"complex".
    windowClass.cbClsExtra = 0;

    //assign the reference to the application instance
    windowClass.hInstance = hinst;

    //assignment of an icon. NULL means system provides default.
    windowClass.hIcon = 0;

    //assignment of cursor. NULL means default.
    windowClass.hCursor = 0;

    //background colour assignment, this can be a handle to a data type called HBRUSH or
    //alternatively it can be a specified colour (if the latter must cast to HBRUSH).
    //always must add 1 to chosen colour.
    windowClass.hbrBackground = (HBRUSH) (COLOR_WINDOW + 1);

    //this should be a string pointing to a resource that specifies a menu.
    //but a NULL value means no menu, which is what we want for this simple window.
    windowClass.lpszMenuName = 0;

    //assign the class name (which we've stored in a global variable)
    windowClass.lpszClassName = className;

    //i guess this icon member is for the icon your app should have when it is listed
    //in a directory. don't really know but its not that important. we'll just use NULL.
    windowClass.hIconSm = 0;

    //The structure is filled and now we must register it.
    return RegisterClassEx(&windowClass);
}

//call this function to create and display window only after registering the class.
int createAndShowWindow()
{
    mainWindow = CreateWindowEx(
        WS_EX_APPWINDOW, //window behavior
        className,
        "Test Window",   //title bar string
        WS_CAPTION,      //style
        CW_USEDEFAULT,   //x position
        CW_USEDEFAULT,   //y position
        CW_USEDEFAULT,   //width
        CW_USEDEFAULT,   //height
        0,               //handle to parent window, but there is no parent so...?
        0,               //menu handle. null means use the one specified in class
        hinst,           //handle instance
        0                //lparam value...not really sure what its for
        );

    if ( !mainWindow ) return 0;
    else return 1;
}

//this function always has this signature and it deals with the user interaction
//messages are passed from the operating system to this function which processes
//the messages
LRESULT CALLBACK WindowProcedure
    (HWND hwnd, unsigned int message, WPARAM wParam, LPARAM lParam)
{
    //the window will do nothing, except vanish when we ask it to
    //if there is a message we have not dealt with in the case statement
    //we use the default window procedure called DefWindowProc.
    switch (message)
    {
    case WM_DESTROY:
        PostQuitMessage (0);
        return 0;
    }
    return DefWindowProc (hwnd, message, wParam, lParam );
}
When I run the executable, nothing at all happens, not even the very first line of the main function, which is:

printf("in main function");

I must be doing something trivial, but for the life of me I can't see what. Any help would be appreciated. Thanks.
SOLVED Unique font ID or temp data
I'm writing a script with a persistent window that needs to keep track of which fonts are open. The document open/close/activated event callbacks are working well, but I'd like to have a reliable way to identify each open font. I can't always distinguish between fonts based on their family name, file path (new documents are the same), or their index in the list of
AllFonts(). Does RoboFont assign the font/document a UUID that I could use? Or is there a good alternative?
On a related note, I would like to be able to store temporary data in a font object without it being permanently stored in the UFO. Is this possible? The Glyphs API has tempData and userData, but in the RoboFont docs and forum I've only seen mention of font.lib, which is permanent like userData. If there isn't a unique ID for each font, I could create my own IDs and store them in each font's temporary data for as long as the document is open. Does RoboFont have something like font.tempData?
Tal and I spoke about tempLib some years ago. I don't remember why this idea is still on the shelf.

You could easily use a start-up script to create a tempLib property yourself:

def _get_temp_lib(self):
    lib = getattr(self.naked(), "myCustomTempLib", None)
    if lib is None:
        lib = self.naked().myCustomTempLib = dict()
    return lib

RFont.tempLib = property(_get_temp_lib)
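Once that start-up script has run, any code in the session can stash per-document values there; here is a quick sketch of how the unique IDs from the original question could be stored (the key name is just an example):

import uuid

f = CurrentFont()
# store a session-only id the first time this font is seen
if "com.example.sessionID" not in f.tempLib:
    f.tempLib["com.example.sessionID"] = str(uuid.uuid4())
print(f.tempLib["com.example.sessionID"])  # never written into the UFO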
Maybe python’s built-in id function could be helpful to you there. I have been using it in the past to distinguish fonts with identical names/paths/etc by putting them in a “fonts dictionary” where the id is the key to get the font object. But I can imagine your usage being more complex than this.
do not use fontParts objects to get an id... those are wrapper classes around, e.g., a defcon object
# this will get different results
f = CurrentFont()
print(id(f))
f = CurrentFont()
print(id(f))
# this will get the correct id
f = CurrentFont().asDefcon()
print(id(f))
f = CurrentFont().asDefcon()
print(id(f))
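Putting the two ideas together, the "fonts dictionary" @benedikt described might look like this sketch (names are illustrative):

fonts = {}

def remember(font):
    key = id(font.naked())  # stable while the document stays open
    fonts[key] = font
    return key

for f in AllFonts():
    remember(f)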
Thank you for the code, @frederik! Your tempLib alternative works well for me. I really appreciate it.
The code sample to get the id of a font's defcon object is giving me an error however. (I'm using RF v3.5b)
AttributeError: 'RFont' object has no attribute 'asDefcon'
Replacing .asDefcon() with .naked() appears to work, however. I'm not sure if that's safe to use because there's no documentation on RFont.naked(). What do you think, Frederik?
@benedikt Thanks for letting me know about the id() function; great to know about in general!
.asDefcon() is a RoboFont 4 attribute.
And see for official defcon support.
sure, <fontPartsObject>.naked() still exists
only the latest has .asDefcon() and .asFontParts(), which is just a lot more readable
What sorting method is the easiest to implement? And which is most efficient?
Easiest could be Bubble Sort.
Quick Sort or Merge Sort are quite efficient.
What do you need to do?
Input from 2 different files (both containing random numbers) into vectors. I need to sort the vectors in ascending order.
#include <algorithm> and use std::sort, e.g.,

std::vector<int> vec;
// ...
std::sort(vec.begin(), vec.end());

Of course, if this is an exercise to learn about sorting then this solution would not be appropriate.
Yes, I'm trying to use my own function to do the sorting, so this isn't really appropriate, but thanks for the tip.
Search the Web for "sorting". It is a well-studied area of computer science, so you will find a lot of information on it, from Java applets to example code to graphs comparing efficiency.
I'm going to ask a really newbie question, but can I use merge sort with vectors? One site speaks of using it with arrays. Are the principles still the same?
Quote:

but can i use merge sort with vectors. This one site speaks of using it with arrays. are the principles still the same?

Yes. A vector is like an array for this purpose.
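To illustrate the principle, here is a minimal merge sort sketched in Python for brevity; the same top-down split/merge structure carries over directly to a std::vector<int>:

def merge_sort(v):
    # split, sort each half, then merge the two sorted halves
    if len(v) <= 1:
        return v
    mid = len(v) // 2
    left = merge_sort(v[:mid])
    right = merge_sort(v[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one of these two is already empty
    merged.extend(right[j:])
    return merged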
There are a few easy ones to implement, but it depends on the data structure. For example Bubble Sort is easy for an array, but it is quite tricky for a linked list. Some algorithms are easy for one data structure but are basically impossible for other data structures.
Some algorithms may not suit because they don't respect the existing ordering of items that have the same key value, i.e., algorithms that are not stable. E.g., Quicksort is typically fast, but is not stable.
Some algorithms can be efficient time-wise, and reasonably easy to implement, but take too much RAM.
Even the shape of the distribution of the values, the number of duplicates, and the initial ordering (number of inversions) play an important part in how long the sorting takes.
If you want the overall easiest to implement, I'd say Insertion Sort (it works on more data structures). If you want the fastest algorithm for a huge number of items, it's probably Pigeonhole Sort. If you want the fastest algorithm for a small number of items, then it's definitely something else entirely. In practice you usually shouldn't, or often even can't, use either of those for what you're sorting, depending on the data type or item count.
I fully realise that you couldn't know what to ask when you don't know any of the above about sorting. However you've now been made aware that there is no real answer to the general question about sorting speed. Here's some of what we need to know to give you the best answer:
- What type of data structure - array/list/doubly-linked-list etc?
- How many items could there be?
- Will they usually be mostly sorted already?
- What is the data type?
- Could there be duplicates?
Okay you did mention you're sorting a vector later on so that's the first question answered. In case you cant wait for another reply though, Shell Sort and Comb Sort are fairly easy to implement, perform quite well for any number of items, and take no extra memory at all.
just ask iMac, he is an expert at sorting algorithms :D | https://cboard.cprogramming.com/cplusplus-programming/103137-sorting-printable-thread.html | CC-MAIN-2017-26 | refinedweb | 617 | 74.9 |
[SOLVED] Problems with ESP8266 and NRF24L01+
Hello,
I have a problem using an ESP8266 and a NRF24L01+ module.
I followed all the steps from this guide and still it won't work.
I am using an adapter like this for converting the power and for power stability.
I have two NRF24L01+ modules and I've tested them with an Arduino Nano which it works.
I can't get it to work with ESP8266, I've checked the wirings multiple times.
Arduino IDE 1.6.8 with latest version of MySensors Library on Linux and Mac OS X using an ESP8266 and NRF24L01+
I've tried to power the module directly from ESP8266 and separately from a 5V PSU with the 5V regulator adapter and common ground with the ESP8266.
Thanks for your help!
This is the serial monitor error:
```
0;255;3;0;9;TSM:INIT
0;255;3;0;9;!TSM:INIT:TSP FAIL
0;255;3;0;9;TSM:FAIL:CNT=7
0;255;3;0;9;TSM:FAIL:PDT
0;255;3;0;9;TSM:FAIL:RE-INIT
```
- sundberg84
@numanx said in Problems with ESP8266 and NRF24L01+:
separately from a 5V PSU with the 5V regulator
You mean 3.3v adapter? The ESP is not compatible with 5v.
Do you use the NodeMCU as described in the build section or another device? I know the pinout can be a bit tricky...
This is the sketch that I'm using:

```
#define MY_GATEWAY_ESP8266

#define MY_ESP8266_SSID "MySSID"
#define MY_ESP8266_PASSWORD "MyVerySecretPassword"

#if defined(MY_USE_UDP)
#include <WiFiUdp.h>
#endif
#include <ESP8266WiFi.h>
#include <MySensors.h>

void setup() { }

void presentation() {
  // Present locally attached sensors here
}

void loop() {
  // Send locally attached sensors data here
}
```
@sundberg84
The ESP8266 was running from my PC using a usb cable.
I've tried to power the NRF24L01+ from the ESP8266 directly with Dupont cables, and also through the socket adapter plate board. I've also tried to power the NRF24L01+ from a 5V power supply; the NRF24L01+ ground was common with the ESP8266 ground, and the VCC on the adapter (which passes through the 5V-to-3.3V regulator and then to the NRF24L01+) was connected to the 5V power supply.
I am using the NodeMCU as described in the following link
- sundberg84
@numanx - good, then the NodeMCU regulates the voltage to 3.3v internally if you power it through the USB.
tried to power the NRF24L01+ from a 5V power supply
Make sure you power the radio with 3.3v as well... the radio cant handle 5v. Use 3V3 from the nodeMCU.
!TSM:INIT:TSP FAIL
This means the radio can't initialize.
Either it's wired wrong, it's getting the wrong power, or it's a bad radio.
- NeverDie
@sundberg84 @NeverDie
Thanks for your answers.
I've connected the ESP8266 and NRF24L01+ exactly as described in the Connecting the Radio guide, in the NRF24L01+ & ESP8266 section.
Still the same error, I've tried rewiring a couple of times using different wires.
- NeverDie
Which board did you pick beneath the board manager?
@numanx
Also, what voltage are you supplying the socket adapter with? If you're supplying it with 3.3v, then that may be your problem. Supply it with 5v instead. It has a voltage regulator to reduce the voltage to 3.3v, but if you feed it with 3.3v instead, it maybe isn't producing enough regulated voltage.
How about skipping the adapter, at least temporarily to check whether the radio works without it. That would eliminate one possible problem source.
@NeverDie
The board selected is NodeMCU 1.0 (ESP-12E module)
I've tried suplying the adapter with 3.3v and also with 5V.
@mfalkvidd
Now the radio is connected directly to the ESP8266 but still I have the same error.
At this point, it becomes 20 questions unless you post detailed photos of exactly what you've done. Either your hardware is defective or you've made an error that you can't see by yourself.
If all else fails, you can try this instead:
It's not nodemcu, but it is ESP8266. It does the "wiring" for you. And it works.
Now I have another problem...
I've tried to change the wireless SSID and password in the sketch, and it seems that after uploading to the ESP8266 it doesn't update them. I've tried clearing the EEPROM, and now the ESP8266 seems to be stuck in AP mode, and I see some strange characters when I open the Serial Monitor...
```
0;255;3;0;9;MCO:BGN:INIT GW,CP=RNNGE--,VER=2.1.1
0;255;3;0;9;TSF:LRT:OK
chg_A2:-40
```
Ok, I've solved the problem with the Wireless SSID and password by uploading the HelloServer sketch from the ESP8266 examples.
Now I rewire the ESP8266 and Radio and I will post some photos.
Here are the photos of the ESP8266 board, NR24L01+ and the wiring.
It's hard to tell from your photos what is wired to what.
Suggest you upgrade to the current Arduino IDE and verify that you have the most current NodeMCU board definition installed. Also confirm that you're using the latest release of the Mysensors library.
Have you verified that your hardware is working? e.g. try the radio module in a different "known good" platform and see if it runs correctly. On the NodeMCU, try driving something else that has a SPI interface and confirm that it works properly.
If all of the above checks out, then I would think the next step would be to use a logic probe to see if you're getting the proper signals sent between your NodeMCU and the nRF24L01 module. If not, maybe you need to add a pull-up or a pull-down resistor on one of the datalines. The logic probe would tell the tale.
Finally got it!
It seems that I have a problem with the ESP8266.
After about 24 Hours of debugging I found the problem...
Steps:
Checked if the wiring was good with a multimeter probe on each pin of the adapter (which I knew worked) and each pin of the radio, to verify the correct pinout.
It seems that the pinout was ok and the wiring was also ok.
I wrote a sketch that sets all the ports from D2 to D8 to output HIGH, and tested each pin with a little buzzer.
It seems that D7 doesn't give voltage.
Checked the pinout diagram for the ESP-12E. Checked for continuity between GPIO13(from the chip) and D7 and there was no continuity.
Tried to connect the radio MOSI Pin directly to GPIO13 and IT WORKED!
```
0;255;3;0;9;TSM:INIT
0;255;3;0;9;TSM:INIT:TSP OK
0;255;3;0;9;TSM:INIT:GW MODE
0;255;3;0;9;TSM:READY:ID=0,PAR=0,DIS=0
0;255;3;0;9;MCO:REG:NOT NEEDED
```
Is there any way to move the MOSI pin from the board to another digital pin from ESP8266?
Thanks!
It is possible to use software SPI instead of hardware SPI (hardware SPI only works with the default pins). See the documentation for how to enable software SPI and use #define directives to select different pins.
@numanx said in Problems with ESP8266 and NRF24L01+:
Checked the pinout diagram for the ESP-12E. Checked for continuity between GPIO13(from the chip) and D7 and there was no continuity.
Tried to connect the radio MOSI Pin directly to GPIO13 and IT WORKED!
Good work!
Check the solder connection between GPIO13 and the PCB. Probably it's faulty. With a little luck you can just re-solder the connection manually, and then you'll be good to go.
@mfalkvidd
It seems that ESP8266 doesn't support SOFTSPI
```
In file included from /home/numanx/Arduino/ESP8266OTA/ESP8266OTA.ino:120:0:
/home/andreihering/Arduino/libraries/MySensors/MySensors.h:248:2: error: #error Soft SPI is not available on ESP8266
```
@NeverDie @mfalkvidd
Today I've resoldered the ESP8266 Chip.
Everything works perfectly.
Thanks for your help!
- George Laynes
This thread has been a life saver for me.
Just wanted to drop a line for anyone else who might be despairing over why, despite numerous tests, the radio still would not work on the NodeMCU.
In my case, it was a NodeMCU v3 from Lolin, and I had two traces not working between GPIO13 and GPIO12: there was no continuity of the trace between the ESP-12E chip and the board pins. I had to solder those two traces manually, and now everything works!
Thank you for inspiring me to look for this problem. | https://forum.mysensors.org/topic/7355/solved-problems-with-esp8266-and-nrf24l01 | CC-MAIN-2022-27 | refinedweb | 1,460 | 74.49 |
import "github.com/godoctor/godoctor/doc"
Package doc contains functions to generate the HTML User's Guide and the man page for the Go Doctor.
install.go man.go user.go util.go vimdoc.go
PrintInstallGuide outputs the (HTML) Installation Guide for the Go Doctor.
PrintManPage outputs a man page for the godoctor command line tool.
PrintUserGuide outputs the User's Guide for the Go Doctor (in HTML).
Both the godoctor man page and the Vim plugin reference are generated and included in the User's Guide. The man page content is piped through groff to convert it to HTML.
func PrintUserGuideAsGiven(aboutText string, flags *flag.FlagSet, ctnt *UserGuideContent, out io.Writer)
PrintUserGuideAsGiven outputs the User's Guide for the Go Doctor (in HTML). However, if the content's ManPageHTML and/or VimdocHTML is nonempty, the given content is used rather than generating the content. This is used by the online documentation, which cannot execute groff to convert the man page to HTML (due to an App Engine restriction), and which uses a Vim-colored version of the Vim plugin documentation.
PrintVimdoc outputs vimdoc documentation for the Go Doctor Vim plugin.
type UserGuideContent struct {
    ManPageHTML string
    VimdocHTML  string
    // contains filtered or unexported fields
}
Package doc imports 9 packages and is imported by 1 package.
Hello,
I get an error with Klayout 0.25.7 on Windows with the following script when I have an instance in the selection.
import pya

lv = pya.LayoutView.current()
for obj in lv.object_selection:
    if obj.is_cell_inst():
        cell = obj.inst().cell
        for layer in lv.each_layer():
            layer_index = layer.layer_index()
            shapes = cell.shapes(layer_index)
The error message is
Cannot call non-const method on a const reference in Cell.shapes.
The same script works correctly with KLayout 0.25.2 on Debian Linux.
Olivier
Hi Olivier,
I can't reproduce the issue, but I think I have some idea what happens. I'll try further to find a way to reproduce it.
The workaround is probably to use the cell index to obtain the cell:
Matthias
I have created a ticket for this:
Hello,
I just did the following:
(1) I reproduced the error with my script above => the error is reproduced.
(2) I changed the code with the line that you indicated => the script now works.
(3) I changed back to the original script => the original script now works!
(4) I quit and restart Klayout and check the original script => the error is reproduced.
Point (3) is quite strange to me!
Olivier
Hi Olivier,
the issue should now be fixed with 0.25.8.
(3) may happen because when you run a script and have code inside some classes, the classes may not be redefined. Remember that you're working inside a live Ruby/Python interpreter: classes you have defined or files you have "required" or imported may not be overwritten when you rerun a script.
I don't know your full setup, so I'm just guessing.
Matthias
I now have Klayout 0.25.8 under Windows, and the error mentioned above still occurs.
Olivier
PS: Your explanation about point (3) makes sense, but my full setup is just the script above within Klayout. I didn't define any class or imported any module (except "pya").
Hi Olivier,
the ticket is not fixed yet - it's a bigger issue. I'd suggest to use the workaround for some time.
I can reproduce (3), but I have not debugged the issue yet. But I think I can explain the issue:
The problem is basically that C++ has a concept of "const objects" which means objects cannot be modified (some kind of read-only mode). Such a concept does not exist in Python, so there needs to be some emulation. Modifying an object in Python which C++ thinks is const is not doing any good.
The emulation works as long as objects are const or non-const. A C++ object will then be turned into a Python reference and this reference disallows modification if it was taken from a const object. But if such an object is used in non-const context, then the reference will be turned into a non-const one and this will not happen here automatically. The workaround enforces this and as the original C++ object is the same than before, after having it in non-const mode once it will remain there. Hence the problem disappears, even if you switch back to the original code.
I admit that's confusing, but that's one of the nasty things that happen when bridging C++ and Python.
Matthias | https://www.klayout.de/forum/discussion/1191/cell-shapes-cannot-call-non-const-method-on-a-const-reference | CC-MAIN-2019-13 | refinedweb | 569 | 75.2 |
When Windows populates its list of available users/groups for assigning file security, it performs a wholeSubtree LDAP search on e.g DC=smb4,DC=internal,DC=id10ts,DC=net with a filter of (groupType=2147483653). This query from a W2K3 client against an S4 DC does not return any results, while a W2K3 client querying against a W2K3 DC returns 17 results. This causes users/groups such as "Administrators" and "Server Operators" to not appear in the results.
When we provision a new S4 server, the groupType attribute for "Administrators" and "Server Operators" is set to "-2147483643". Note that both 2147483653 and -2147483643 result in the same 32-bit field (0x80000005).
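The arithmetic is easy to verify; both decimal strings reduce to the same 32-bit pattern, e.g. in Python:

>>> hex(2147483653 & 0xFFFFFFFF)
'0x80000005'
>>> hex(-2147483643 & 0xFFFFFFFF)
'0x80000005'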
The Windows LDAP server returns the same 17 results using either number (2147483653 or -2147483643) as the groupType search filter. The following ldapsearch commands were used to test this:
1) ldapsearch -H ldap://192.168.5.179 -D cn=administrator,cn=users,dc=ads,dc=internal,dc=id10ts,dc=net -x -w password -b dc=ads,dc=internal,dc=id10ts,dc=net "(groupType=2147483653)"
2) ldapsearch -H ldap://192.168.5.179 -D cn=administrator,cn=users,dc=ads,dc=internal,dc=id10ts,dc=net -x -w password -b dc=ads,dc=internal,dc=id10ts,dc=net "(groupType=-2147483643)"
Testing with ldapsearch against S4 gives 17 results when using -2147483643 as the filter, but no results when using 2147483653 as the filter. The same two commands as above were used, with the base changed to dc=smb4,dc=internal,dc=id10ts,dc=net and the filters "(groupType=2147483653)" and "(groupType=-2147483643)".
Note: I may be seeing this because I am running S4 on a 64-bit Linux system. I am setting up a 32-bit VM to test in & will add another comment once I am done with that testing.
Created attachment 3951 [details]
Packet capture of W2K3 client querying W2K3 DC
Created attachment 3952 [details]
Packet capture of W2K3 client querying S4 DC
I just finished testing in a 32-bit VM, and the same behaviour occurs as was originally seen when running S4 under 64-bit Linux.
We need to add a new '32 bit integer' syntax, handling this rollover to:
lib/ldb-samba/ldif_handlers.c
And we need to use the OID allocated for that in:
static const struct dsdb_syntax dsdb_syntaxes[]
in dsdb/schema/schema_syntax.c
Okay, I'll try to find a solution for this issue. It seems that it only happens without an LDAP backend (plain LDB).
Created attachment 4282 [details]
Patch to enable correct behaviour on "groupType", "userAccountControl" and "sAMAccountType"
- Change some "int" and "unsigned int" variables to int32_t or uint32_t to make sure that the behaviour is the same on all platforms!
- Introduce the right conversion mechanisms for lookups on "groupType", "userAccountControl" and "sAMAccountType" in the "samldb" module. Therefore, all possible backends can now benefit from it.
- Remove the conversion code from the "simple_ldb_map" module which was only used for LDB LDAP backends.
Comment on attachment 4282 [details]
Patch to enable correct behaviour on "groupType", "userAccountControl" and "sAMAccountType"
We should fix this in another way.
Created attachment 4290 [details]
Patch to handle integer 32bit attributes correctly
Now I wrote a much better patch which should solve the problem for "groupType" but also for all integer attributes in general. I followed Andrew's guidelines and so this patch should comply.
That looks much better.
Please don't rename the functions to ldb_, as that namespace should be reserved for the 'real' ldb. (We need a better prefix than ldif_, but it will do for now.)
To actually hook it in, see the table in schema_syntax.c and add a new .ldb_syntax = "LDB_SYNTAX_SAMBA_INT32" for it.
Finally, we just need tests.
This is really good work. Thank you very much!
Created attachment 4292 [details]
New patch to handle integer 32bit attributes correctly
- The first patch wasn't complete
- I have now changed all methods to the "ldif_" prefix in the ldif handlers
- Made changes in "schema_syntax.c" where appropriate
- Removed code from "simple_ldap_map.c"; it shouldn't be necessary anymore.
Question: since this module seems very old, does it still have the task of converting certain attributes? Don't we now have the ldif handlers, which do this for all backends (and not only LDB_LDAP)?
The simple_ldap_map code is indeed old in this respect, it certainly could handle the conversions by looking at the schema.
However, at the moment it isn't written that way, and only ldb_tdb will use the schema code to normalise integers. Therefore, please remove the simple_ldap_map.c code from the patch.
In syntax_map.c, don't change the .ldap_oid (as it must still be the LDB_SYNTAX_INTEGER), but instead add a .ldb_syntax = LDB_SYNTAX_INT32 element.
It seems you are getting very close. We should soon be able to apply the patch.
Created attachment 4300 [details]
New patch to handle 32bit integer attributes correctly
- Undid the changes on the ".ldap_oid" parameter and added ".ldb_syntax" parameters in "schema_syntax.c"
- Refactored the conversion function in "simple_ldap_map.c" (it looked a bit ugly in my eyes)
Matthias, can you put that in your git tree, or re-attach as a git-format-patch?
This version is good!
(otherwise I'll make up a commit message for you)
Also, any chance of an extension to ldap.py to test this behaviour?
Andrew Bartlett
Fixed
I have to reopen the bug since the problem shows up again.
This problem should now be finally fixed. I substituted "strtol" with "strtoll", since with "strtol" you always got LONG_MAX on conversions (due to the intended overflows). This didn't happen earlier when I wrote the first patch; I think my glibc had a bug.
Monad Transformers Tutorial
Think about code in IO that needs to be able to break out of a loop:
forM_ [1..maxRetries] $ \i -> do
  response <- request i
  when (satisfied response) break
Reminder about "when":
when False _ = return ()
when True a = a
So, how would you implement "break"?
Another example:
do
  mc1 <- tryConnect "host1"
  case mc1 of
    Nothing -> return Nothing
    Just c1 -> do
      mc2 <- tryConnect "host2"
      case mc2 of
        Nothing -> return Nothing
        Just c2 -> do
          ..

Clearly we want something like Maybe's (>>=) here to catch the Nothing, but instead of Maybe values and functions we can combine with (>>=), we have IO (Maybe a) values. So the "trick" is to implement the Maybe monad again, this time on IO (Maybe a) values instead of Maybe a values:
newtype MaybeIO a = MaybeIO { runMaybeIO :: IO (Maybe a) }

instance Monad MaybeIO where
  return x = MaybeIO (return (Just x))
  MaybeIO action >>= f = MaybeIO $ do
    result <- action
    case result of
      Nothing -> return Nothing
      Just x -> runMaybeIO (f x)
So now we can replace the above boilerplate code with:
result <- runMaybeIO $ do
  c1 <- MaybeIO $ tryConnect "host1"
  c2 <- MaybeIO $ tryConnect "host2"
  ..
Or if the tryConnect function wrapped its result in MaybeIO then we just have to use runMaybeIO there, and that's it. What happens if we now have some "print" in between?
result <- runMaybeIO $ do
  c1 <- MaybeIO $ tryConnect "host1"
  print "Hello"
  c2 <- MaybeIO $ tryConnect "host2"
  ..

This wouldn't work, because the type of each statement in our do block is MaybeIO a and not IO a, so a (print "Hello") which is IO () cannot be put in there. This is where we want to "transform" an IO a value to a MaybeIO a value. All we have to do is convert the IO a to an IO (Maybe a) that doesn't "fail" our Maybe monad. So it just means putting a Just in there:
transformIOtoMaybeIO :: IO a -> MaybeIO a
transformIOtoMaybeIO action = MaybeIO $ do
  result <- action
  return (Just result)
And now we can do:
result <- runMaybeIO $ do
  c1 <- MaybeIO $ tryConnect "host1"
  transformIOtoMaybeIO $ print "Hello"
  c2 <- MaybeIO $ tryConnect "host2"
  ..
Now we can also break from the first example's loop!
break :: MaybeIO a
break = MaybeIO $ return Nothing

forM_ [1..maxRetries] $ \i -> do
  response <- transformIOtoMaybeIO $ request i
  when (satisfied response) break

But, all of this code, while useful, is not in Haskell's spirit. Because MaybeIO really could work for any monad wrapping the Maybe, not just IO (Maybe a). The only IO operations all of the MaybeIO code performs is return and bind! So let's generalize our MaybeIO definition to all monads:
That was easy! I just replaced MaybeIO with MaybeT, IO with m, and added an "m" type parameter (with Monad constraint).
Again, really easy, just syntactic replacement of IO with a type parameter. Now, this "transformToMaybeT" operation is really common, because we don't just have
transformToMaybeT :: Monad m => m a -> MaybeT m a transformToMaybeT action = MaybeT $ do result <- action return (Just result)
, we also want
MaybeT
(when you want to break out of the loop or fail with a result!), ContT (lets you do really crazy things), ListT and many others. All of these have in common a
EitherT
operation which is very much like "transformToMaybeT". So if we have:
lift
transformToMaybeT :: Monad m => m a -> MaybeT m a transformToEitherT :: Monad m => m a -> EitherT l m a
It seems we can capture this pattern with a class:
And now "t" is our
class MonadTrans t where lift :: (Monad m) => m a -> t m a
(of kind
MaybeT
, i.e: two type parameters) and
(* -> *) -> * -> *
is our transformToMaybeT, so:
lift
Why are they named
instance MonadTrans MaybeT where lift = transformToMaybeT
? Note the kind signature of instances of the MonadTrans class:
Monad Transformers
. That is, every monad transformer type constructor takes a monad (kind
(* -> *) -> (* -> *)
) as an argument, and returns a monad (also
* -> *
) as a result. So all Monad Transformers are basically type-functions from monad types to different monad types which have additional traits.
* -> * | https://wiki.haskell.org/index.php?title=Monad_Transformers_Tutorial&diff=57550&oldid=36575 | CC-MAIN-2015-18 | refinedweb | 712 | 54.56 |
Scan wide-character input from standard input (varargs)
#include <wchar.h>
#include <stdarg.h>

int vwscanf( const wchar_t * format,
             va_list arg );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The vwscanf() function scans input from the file designated by stdin, under control of the argument format.
The vwscanf() function is the wide-character version of vscanf(), and is a "varargs" version of wscanf().
The number of input arguments for which values were successfully scanned and stored, or EOF if the scanning reached the end of the input stream before storing any values. | http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/v/vwscanf.html | CC-MAIN-2018-13 | refinedweb | 103 | 65.62 |
Introduction: Raspberry Pi and Wiimote Controlled Robot Arm
Step 1: Components
Some programming experience would be nice, but it isn't required. I'll try to keep everything simple. (Knowing me, things might not go according to plan.)
Since I'm doing this on a Raspberry Pi, I will explain everything the way I got the arm to run on it. Most everything is well documented for Windows and Mac, only a Google search away, so it shouldn't be much of a problem.
Step 3: A Much Better Controller
I couldn't think of any controller greater and more fun to use than a game console controller. I used a wiimote because I actually have one, and it is really easy to program. With that said, I'm sure other controllers would work just as well, and maybe they are easier to use (I don't have any others to try, so I don't know).
The wiimote uses Bluetooth to connect, so you will need a Bluetooth dongle if your computer doesn't have it built in. I'm using a Cirago Bluetooth/WiFi dongle to connect; there are plenty of tutorials on installing the stuff needed to get Bluetooth running on the Raspberry Pi. I installed bluez through the terminal, but I'll assume that you have Bluetooth fully functioning.
We need one more download to connect to the wiimote. Pop open that terminal and type: sudo apt-get install python-cwiid
You can see a list of the bluetooth devices by typing: hcitool scan
Press one and two on the wiimote to set it in a discovery mode. Then the wiimote will pop up with its address and the name Nintendo will be there somewhere.
We are now ready to begin using the wiimote.
Step 4: Establishing a Connection
Someone super awesome reverse engineered the usb protocol for the robot arm. They posted all of their work here:
Another really cool person came up with the python code for the arm and they were nice enough to post it in the Magpi, here is the link to that (Page 14 I believe):
My plan was to merge this program with one that reads the wiimote sensors and buttons.
For our program we need to do several things.
Connect to the robot arm
Connect to the wiimote
Tell the wiimote what it needs to do when each button is pressed, and then do it.
We first import all of the functions we need to establish a connection:
import usb.core, usb.util, cwiid, time
Then we connect to the arm with:

Arm = None
while (Arm == None):
    Arm = usb.core.find(idVendor=0x1267, idProduct=0x0000)
Next we define a function that lets us control the arm:

def ArmMove(Duration, ArmCmd):
    Arm.ctrl_transfer(0x40, 6, 0x100, 0, ArmCmd, 1000)  # start the movement
    time.sleep(Duration)                                # let it run for Duration seconds
    ArmCmd = [0, 0, 0]
    Arm.ctrl_transfer(0x40, 6, 0x100, 0, ArmCmd, 1000)  # stop the movement
Each arm command is three bytes of info sent through the USB to the controller on the arm; our software just manipulates the bytes that the arm receives.
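For instance, using the command bytes listed in the full program below (first byte: grip/wrist/elbow/shoulder; second byte: base rotation; third byte: the LED), a few one-off moves might look like this:

ArmMove(1, [0, 1, 0])     # rotate the base clockwise for one second
ArmMove(0.5, [64, 0, 0])  # shoulder up for half a second
ArmMove(0.1, [0, 0, 1])   # no movement, just switch the LED on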
Once the program connects to the arm, it needs to connect to the wiimote.
Wii = None
while (Wii == None):
    try:
        Wii = cwiid.Wiimote()
    except RuntimeError:
        print 'Error connecting to the wiimote, press 1 and 2 '
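Once cwiid.Wiimote() returns, a quick sanity check is to rumble the remote and read back its state, using the same calls the full program relies on below:

Wii.rumble = 1                # short buzz to confirm the link
time.sleep(0.5)
Wii.rumble = 0
Wii.rpt_mode = cwiid.RPT_BTN  # ask the wiimote to report button presses
print Wii.state['buttons']    # prints 0 until a button is held down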
Step 5: The Code
I followed some of the websites listed above to get an idea of how the code works, but all of the code in my program is my own work. I hope I don't scare anyone away with it. 75% of it is comments, and the small portion remaining is actual code. My hope is that people will be able to understand it and make it their own. I'm sure I did some overkill in it too, and that it could be simplified quite a bit, but there are many ways to skin a cat.
You can control the arm with or without the nunchuk. I could never find a program that used all of the stuff in the nunchuk (the joystick, the buttons and the accelerometer) so I wanted to make sure that there was a program that had everything in it and was easy enough to understand so that people could do what ever they wanted with the wiimote and nunchuk. To accomplish this I made sure that every button or sensor or accessory was used so that people can customize it for their own purposes. For this reason, some of the code may seem redundant, but there is a reason behind it. The only things that I didn't use were the speaker (no one can get it to work yet) and the IR sensor.
Feel free to take this code and use it any way you want!
# +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
# |T|A|Y|L|O|R| |B|O|A|R|D|M|A|N| | | |R|P|I| |A|R|M|
# +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

'''First we need to import some files (These files contain all the commands needed for our program)
We have usb.core and usb.util - these are used to control the usb port for our arm
Next we have cwiid which communicates with the wiimote
And we have the time library which allows us to slow or pause things'''
import usb.core, usb.util, cwiid, time

# Give our robot arm an easy name so that we only need to specify all the junk required for the usb connection once
print 'Make sure the arm is ready to go.'
print ''

Armc = 1750
Arm = None
while (Arm == None):
    # This connects to the usb
    Arm = usb.core.find(idVendor=0x1267, idProduct=0x0000)
    # This will wait for a second, and then if the program could not connect, it tells us and tries again
    Armc = Armc + 1
    if (Armc == 2000):
        print 'Could not connect to Arm, double check its connections.'
        print 'Program will continue when connection is established...'
        print ' '
        Armc = Armc/2000
    continue

# Set up our arm transfer protocol through the usb and define a Value we can change to control the arm
Duration = 1
ArmLight = 0
# Create delay variable that we can use (Seconds)
Delay = .1
Counter = 9999

def ArmMove(Duration, ArmCmd):
    # Start Movement
    Arm.ctrl_transfer(0x40, 6, 0x100, 0, ArmCmd, 1000)
    time.sleep(Duration)
    # Stop Movement
    ArmCmd = [0, 0, ArmLight]
    Arm.ctrl_transfer(0x40, 6, 0x100, 0, ArmCmd, 1000)
# Establish a connection with the wiimote
print 'Connected to arm successfully.'
print ' '
print 'Press 1 and 2 on the wiimote at the same time.'

# Connect to mote and if it doesn't connect then it tells us and tries again
time.sleep(3)
print ''
print 'Establishing Connection... 5'
time.sleep(1)
print 'Establishing Connection... 4'
time.sleep(1)
print 'Establishing Connection... 3'
Wii = None
while (Wii == None):
    try:
        Wii = cwiid.Wiimote()
    except RuntimeError:
        print 'Error connecting to the wiimote, press 1 and 2.'
print 'Establishing Connection... 2'
time.sleep(1)
print 'Establishing Connection... 1'
time.sleep(1)
print ''

# Once a connection has been established with the two devices the rest of the program will continue; otherwise, it will keep on trying to connect to the two devices
# Rumble to indicate connection and turn on the LED
Wii.rumble = 1  # 1 = on, 0 = off
print 'Connection Established.'
print 'Press any button to continue...'
print ''

''' Each number turns on different leds on the wiimote
ex) if Wii.led = 1, then LED 1 is on
2 = LED 2    3 = LED 1, 2    4 = LED 3
5 = LED 1, 3    6 = LED 2, 3    7 = LED 1,2,3
8 = LED 4    9 = LED 1, 4    10 = LED 2,4
11 = LED 1,2,4    12 = LED 3,4    13 = LED 1,3,4
14 = LED 2,3,4    15 = LED 1,2,3,4
It counts up in binary to 15'''

time.sleep(1)
Wii.rumble = 0
Wii.led = 15

# Set it so that we can tell when and what buttons are pushed, and make it so that the accelerometer input can be read
Wii.rpt_mode = cwiid.RPT_BTN | cwiid.RPT_ACC | cwiid.RPT_EXT
Wii.state
while True:
    # This deals with the accelerometer
    '''create a variable containing the x accelerometer value
    (changes if mote is turned or flicked left or right)
    flat or upside down = 120, if turned: 90 degrees cc = 95, 90 degrees c = 145'''
    Accx = (Wii.state['acc'][cwiid.X])
    '''create a variable containing the y accelerometer value
    (changes when mote is pointed or flicked up or down)
    flat = 120, IR pointing up = 95, IR pointing down = 145'''
    Accy = (Wii.state['acc'][cwiid.Y])
    '''create a variable containing the z accelerometer value
    (Changes with the motes rotation, or when pulled back or flicked up/down)
    flat = 145, 90 degrees cc or c, or 90 degrees up and down = 120, upside down = 95'''
    Accz = (Wii.state['acc'][cwiid.Z])

    # This deals with the buttons, we tell every button what we want it to do
    buttons = Wii.state['buttons']

    # Get battery life (as a percent of 100):
    # Just delete the number sign in front
    # print Wii.state['battery']*100/cwiid.BATTERY_MAX

    # If the home button is pressed then rumble and quit, plus close program
    if (buttons & cwiid.BTN_HOME):
        print ''
        print 'Closing Connection...'
        ArmLight = 0
        ArmMove(.1, [0,0,0])
        Wii.rumble = 1
        time.sleep(.5)
        Wii.rumble = 0
        Wii.led = 0
        exit(Wii)

    ''' Arm Commands Defined by ArmMove are
    [0,1,0] Rotate Base Clockwise
    [0,2,0] Rotate Base C-Clockwise
    [64,0,0] Shoulder Up
    [128,0,0] Shoulder Down
    [16,0,0] Elbow Up
    [32,0,0] Elbow Down
    [4,0,0] Wrist Up
    [8,0,0] Wrist Down
    [2,0,0] Grip Open
    [1,0,0] Grip Close
    [0,0,1] Light On
    [0,0,0] Light Off
    ex) ArmMove(Duration in seconds,[0,0,0])
    This example would stop all movement and turn off the LED'''

    # Check to see if other buttons are pressed
    if (buttons & cwiid.BTN_A):
        print 'A pressed'
        time.sleep(Delay)
        ArmMove(.1, [1,0,ArmLight])
    if (buttons & cwiid.BTN_B):
        print 'B pressed'
        time.sleep(Delay)
        ArmMove(.1, [2,0,ArmLight])
    if (buttons & cwiid.BTN_1):
        print '1 pressed'
        ArmMove(.1, [16,0,ArmLight])
    if (buttons & cwiid.BTN_2):
        print '2 pressed'
        ArmMove(.1, [32,0,ArmLight])
    if (buttons & cwiid.BTN_MINUS):
        print 'Minus pressed'
        ArmMove(.1, [8,0,ArmLight])
    if (buttons & cwiid.BTN_PLUS):
        print 'Plus pressed'
        ArmMove(.1, [4,0,ArmLight])
    if (buttons & cwiid.BTN_UP):
        print 'Up pressed'
        ArmMove(.1, [64,0,ArmLight])
    if (buttons & cwiid.BTN_DOWN):
        print 'Down pressed'
        ArmMove(.1, [128,0,ArmLight])
    if (buttons & cwiid.BTN_LEFT):
        print 'Left pressed'
        ArmMove(.1, [0,2,ArmLight])
    if (buttons & cwiid.BTN_RIGHT):
        print 'Right pressed'
        ArmMove(.1, [0,1,ArmLight])
    # Here we handle the nunchuk, along with the joystick and the buttons
    while(1):
        if Wii.state.has_key('nunchuk'):
            try:
                # Here is the data for the nunchuk stick:
                # X axis: LeftMax = 25, Middle = 125, RightMax = 225
                NunchukStickX = (Wii.state['nunchuk']['stick'][cwiid.X])
                # Y axis: DownMax = 30, Middle = 125, UpMax = 225
                NunchukStickY = (Wii.state['nunchuk']['stick'][cwiid.Y])
                # The 'NunchukStickX' and the 'NunchukStickY' variables now store the stick values

                # Here we take care of all of our data for the accelerometer
                # The nunchuk has an accelerometer that records in a similar manner to the wiimote, but the number range is different
                # The X range is: 70 if tilted 90 degrees to the left and 175 if tilted 90 degrees to the right
                NAccx = Wii.state['nunchuk']['acc'][cwiid.X]
                # The Y range is: 70 if tilted 90 degrees down (the buttons pointing down), and 175 if tilted 90 degrees up (buttons pointing up)
                NAccy = Wii.state['nunchuk']['acc'][cwiid.Y]
                # I still don't understand the z axis completely (on the wiimote and nunchuk), but as far as I can tell its main change comes from directly pulling up the mote without tilting it
                NAccz = Wii.state['nunchuk']['acc'][cwiid.Z]

                # Make it so that we can control the arm with the joystick
                if (NunchukStickX < 60):
                    ArmMove(.1, [0,2,ArmLight])
                    print 'Moving Left'
                if (NunchukStickX > 190):
                    ArmMove(.1, [0,1,ArmLight])
                    print 'Moving Right'
                if (NunchukStickY < 60):
                    ArmMove(.1, [128,0,ArmLight])
                    print 'Moving Down'
                if (NunchukStickY > 190):
                    ArmMove(.1, [64,0,ArmLight])
                    print 'Moving Up'

                # Make it so that we can control the arm with tilt Functions
                # Left to Right
                if (Accx < 100 and NAccx < 90):
                    ArmMove(.1, [0,2,ArmLight])
                    print 'Moving Left'
                if (Accx > 135 and NAccx > 150):
                    ArmMove(.1, [0,1,ArmLight])
                    print 'Moving Right'
                # Up and Down
                if (Accy < 100 and NAccy < 90):
                    ArmMove(.1, [64,0,0])
                    print 'Moving Up'
                if (Accy > 135 and NAccy > 150):
                    ArmMove(.1, [128,0,0])
                    print 'Moving Down'

                # Here we create a variable to store the nunchuck button data
                # 0 = no buttons pressed
                # 1 = Z is pressed
                # 2 = C is pressed
                # 3 = Both C and Z are pressed
                ChukBtn = Wii.state['nunchuk']['buttons']
                if (ChukBtn == 1):
                    print 'Z pressed'
                    ArmLight = 0
                    ArmMove(.1, [0,0,ArmLight])
                if (ChukBtn == 2):
                    print 'C pressed'
                    ArmLight = 1
                    ArmMove(.1, [0,0,ArmLight])
                # If both are pressed the led blinks
                if (ChukBtn == 3):
                    print 'C and Z pressed'
                    ArmMove(.1, [0,0,0])
                    time.sleep(.25)
                    ArmMove(.1, [0,0,1])
                    time.sleep(.25)
                    ArmMove(.1, [0,0,0])
                    time.sleep(.25)
                    ArmMove(.1, [0,0,1])
                    time.sleep(.25)
                    ArmMove(.1, [0,0,0])
                    time.sleep(.25)
                    ArmMove(.1, [0,0,1])
                    time.sleep(.25)
                    ArmMove(.1, [0,0,0])

                # Any other actions that require the use of the nunchuk in any way must be put here for the error handling to function properly
                break
            # This part down below is the part that tells us if no nunchuk is connected to the wiimote
            except KeyError:
                print 'No nunchuk detected.'
        else:
            if (ArmLight == 0):
                if (Accz > 179 or Accz < 50):
                    ArmLight = 1
                    ArmMove(.1, [0,0,ArmLight])
                    time.sleep(.5)
            elif (ArmLight == 1):
                if (Accz > 179 or Accz < 50):
                    ArmLight = 0
                    ArmMove(.1, [0,0,ArmLight])
                    time.sleep(.5)
            if (Counter == 10000):
                print 'No nunchuk detected.'
                Counter = Counter/10000
                break
            Counter = Counter + 1
            break
Copy the code into a python editor and then save it out. If the editor is still open then you can run the program with F5. You can also run it by opening a terminal, navigating to the location of the program and then typing: sudo python filename.py
Just replace filename with the actual name of the file (you still have to copy the code into the editor and then save it).
The photo shows: sudo Nunchuk_W_Arm.py, but I forgot to add the python in there. It's supposed to be: sudo python Nunchuk_W_Arm.py
I'm now working on a version of the program that has an actual interface with buttons to control the arm and stuff that display the wiimote goodies. The wiimote can still be used to control the arm; its just a more visual program.
Step 6: Controlling the Robot
There are lots of things to control on the robot so I tried to keep everything organized. Pictures speak louder than words.
The nunchuk doesn't need to be connected to run everything, I just wanted to keep everything flexible. The only difference between using and not using the nunchuk are: with the nunchuk attached flicking the remote will not toggle the light on and off, the joystick adds another way to rotate the arm and move the base, C and Z are used to turn on and off the light, tilting both the nunchuk and the wiimote will rotate/control the base.
Participated in the
Microcontroller Contest
2 People Made This Project!
- corridorsfn made it!
- Jabberwacky made it!
Recommendations
6 Discussions
2 years ago
you're my savior
4 years ago
Fantastic post!
Last week my son and I put this together with:
Raspberry PI 3 / OWI 535 / USB Intf / WiiMote + Nun. We are total newbies to Python scripting.
The
RaspPi 3 has IDLE 2 and IDLE 3. Script only works with IDLE 2. Also
we had to change the Print statements from ' ' to (" ").
We also had to change our accelerometer values ().
We
are running into a bit of a challenge, the Wiimote works WITHOUT
Nunchuck. However, when it is plugged in, only the Nunchuck works. The
Wiimote no longer works; buttons, accelerometer, etc.
Not sure if it is the function of the script or the Wiimote/Nunchuck. I don't understand how the 'While True:' and the 'while(1)' statements work.
4 years ago
the second step in installing pyusb is sudo apt install pyusb
5 years ago on Introduction
I have put everything together and the arm and wiimote connect. I cannot get the code to run. Many syntax errors. Is it possible to have you e-mail the code to me as a ".py"
6 years ago on Introduction
could this be replicated with another Bluetooth controller such as the ps3 controller?
6 years ago on Introduction
i want to make this using the same arm but im going to put it on a rc car frame you think i can use the other nunchuk to control the car | https://www.instructables.com/Raspberry-Pi-and-Wiimote-controlled-Robot-Arm/ | CC-MAIN-2021-04 | refinedweb | 2,853 | 71.85 |
Creating content-only page types
Content-only page types are suitable for content on MVC sites. Their only task is to hold pieces of content, such as individual news articles, products, etc. They do not inherently posses a visual representation on the website, but you can still use them to create a hierarchical data structure that supports workflow and versioning.
Content-only pages:
- Are not based on page templates.
- Do not provide configuration options related to the live site presentation (such as navigation and URL properties).
- Do not have a presentation URL by default. You can specify a URL pattern for the page type to enable the page builder feature and allow content editors to display pages in preview mode.
Creating content-only. For example, you can use your site name as the namespace.
- Name – page type identifier appended to its namespace
- Click Next.
Step 2
- Enter a Table name for the database table that stores the page type data.
- Enter a Primary key name for the table.
- (Optional) Select if you want the page to to Inherit fields from page type.
Enable Content-only page type.
- Click Next.
The system creates the database table.
Step 3
Click New field.
Define the custom field using the field editor.
Click Save.
- Repeat the steps above to define all custom fields.
- selected,.
Using MVC builder features with content-only pages
If you have an MVC site and wish to use the page builder or page templates in the Pages application with the given page type, you need to make sure the Page tab is available:
- Open the Page types application.
- Edit () the corresponding content-only page type you just created.
- On the General tab, enable the Use Page tab setting.
- Click Save.
Continue with further configuration:
- Specify URL pattern for content-only pages – to allow content editors to display content-only pages on the live site and in the preview mode.
-_Article.
The system automatically ensures all operations are performed correctly on these tables. The advantage of this storage is that it is very fast and you can easily write standard SQL SELECT queries to retrieve data from the Microsoft SQL Server database.
Was this page helpful? | https://docs.kentico.com/k12sp/developing-websites/defining-website-content-structure/creating-and-configuring-page-types/creating-content-only-page-types | CC-MAIN-2020-16 | refinedweb | 365 | 64.91 |
You are not logged in.
Pages: 1
For one thing, "ret" isn't initialized to zero, so that can give unexpected behaviour.
Give Valgrind a try, it's a great memory debugger.
That said, you don't have a memory leak I think. What causes not all memory
to be freed are glibc's malloc implementation and heap fragmentation, but
it can be something else as well. When you mmap a library you'll get high VIRT
but low RES. When you start using that library its pages are read from the
file on-demand and copied to memory, and you end up with higher RES. The
same happens for executables and other mapped files.
If you're on Linux look at /proc/$PID/smaps and maps to see what that
memory area is exactly. I suspect it's glibc's/libpthread's RW mapping for
its own bookkeeping, in this case all the thread stuff. That or it's the heap.
Memory usage is a complicated thing affected by many details, using RES
or VIRT to detect if you've a memory leak or not isn't a good method, it
only works if you have a big leak and when they steadily keep increasing
over time (though that can also be caused a by heap fragmentation). The
numbers you posted aren't out of the ordinary and I wouldn't worry about it.
If you repeat your test again after the first one your memory usage should
end up about the same as it ended up before. But comparing cold cache
with the memory usage afterwards will never work. To hunt down real
memory leaks use Valgrind (or read the code).
I use valgrind too.. and mtrace..
But my problem is that the memory is not liberated totally, and with every new thread that is created new memory is accumulated.
TIA
Begin program .... 13560 984
End program 109964 1312
If you modify the code and increase to 4000 threads the final memory(report) increase 300kb. and not liberated, virtual memory aprox 100 Mb.
The memory should return to her first (984) value or not?
My problem is in a big project that i have... and the memory increase up to 1 > 2 GB because we received to much connections every time..
This sample is a little scale in order to find a solution...
#include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <pthread.h> static void* cliente(void *datos); int main() { int ret; pthread_t mythread[4]; char buf[1024]; sprintf (buf, "ps -p %d u", getpid()); system (buf); printf("BEGIN: press key\r\n"); fgets(buf,sizeof(buf),stdin); for (ret = 0; ret<4; ret++) { pthread_create((pthread_t *)&mythread[ret],NULL,cliente,NULL); pthread_detach(mythread[ret]); } sleep (1); sprintf (buf, "ps -p %d u", getpid()); system (buf); printf("END: press key\r\n"); fgets(buf,sizeof(buf),stdin); sprintf (buf, "ps -p %d u", getpid()); system (buf); for (ret = 0; ret<4; ret++) { pthread_create((pthread_t *)&mythread[ret],NULL,cliente,NULL); pthread_detach(mythread[ret]); } sleep (1); sprintf (buf, "ps -p %d u", getpid()); system (buf); printf("END2: press key\r\n"); fgets(buf,sizeof(buf),stdin); sprintf (buf, "ps -p %d u", getpid()); system (buf); return (0); } static void* cliente(void *datos) { char *buff=(char *)malloc(100000000); sleep(5); printf("end thread..\r\n"); free(buff); buff=NULL; pthread_exit(NULL); }
correct... in your code, the RSS memory increase aprox 200 kb... if you increase the number of threads the memory used increasingly and it would not be liberated
Pages: 1 | https://developerweb.net/viewtopic.php?pid=26701 | CC-MAIN-2021-21 | refinedweb | 591 | 63.49 |
This procedure uses the clsetup utility to register the associated VxVM disk group as a Sun Cluster device group.
After a device group has been registered with the cluster, never import or export a VxVM disk group by using VxVM commands. If you make a change to the VxVM disk group or volume, follow the procedure SPARC: How to Register Disk Group Configuration Changes (Veritas Volume Manager) to register the device group configuration changes. This procedure ensures that the global namespace is in the correct following prerequisites have been completed prior to registering a VxVM device group:
Superuser privilege on a node in the cluster.
The name of the VxVM disk group to be registered as a device group.
A preferred order of nodes to master the device group.
A desired number of secondary nodes for the device group.
When you define the preference order, you also specify whether the device group should be switched back to the most preferred node if that node fails and later returns to the cluster.
See cldevicegroup(1CL) for more information about node preference and failback options.
Nonprimary cluster nodes (spares) transition to secondary according to the node preference order. The default number of secondaries for a device group is normally set to one. This default setting minimizes performance degradation that is caused by primary checkpointing of multiple secondary nodes during normal operation. For example, in a four-node cluster, the default behavior configures one primary, one secondary, and two spare nodes. See also How to Set the Desired Number of Secondaries for a Device Group.
Become superuser or assume a role that provides register a VxVM device group, type the number that corresponds to the option for registering a VxVM disk group as a device group.
Follow the instructions and type the name of the VxVM disk group to be registered as a Sun Cluster device group.
If this device group is replicated by using storage-based replication, this name must match the replication group name.
If you use VxVM to set up shared disk groups for Oracle Parallel Server/Oracle RAC, you do not register the shared disk groups with the cluster framework. Use the cluster functionality of VxVM as described in the Veritas Volume Manager Administrator's Reference Guide.
If you encounter the following error while attempting to register the device group, reminor the device group.
To reminor the device group, use the procedure SPARC: How to Assign a New Minor Number to a Device Group (Veritas Volume Manager). This procedure enables you to assign a new minor number that does not conflict with a minor number that an existing device group uses.
If you are configuring a replicated device group, set the replication property for the device group.
Verify that the device group is registered and online.
If the device group is properly registered, information for the new device group is displayed when you use the following command.
If you change any configuration information for a VxVM disk group or volume that is registered with the cluster, you must synchronize the device group by using clsetup. Such configuration changes include adding or removing volumes, as well as changing the group, owner, or permissions of existing volumes. Reregistration after configuration changes ensures that the global namespace is in the correct state. See How to Update the Global Device Namespace.
The following example shows the cldevicegroup command generated by clsetup when it registers a VxVM device group (dg1), and the verification step. This example assumes that the VxVM disk group and volume were created previously.
To create a cluster file system on the VxVM device group, see How to Add a Cluster File System.
If problems occur with the minor number, see SPARC: How to Assign a New Minor Number to a Device Group (Veritas Volume Manager). | http://docs.oracle.com/cd/E19316-01/820-4679/cihiiihh/index.html | CC-MAIN-2016-50 | refinedweb | 635 | 52.09 |
For a given descending sequence, given an n. Find two numbers in the array so that their sum equals the given number. Requires a time complexity of O (n). BOOL Findsumelements (int array[], int length, int sum, int *elem1, int *elem2) {if (length =
Side topic: TCP three times handshake, four times handshake process. Agent mode in design mode, façade mode, adapter mode. JVM class loading mechanism and process. Valitile keyword. The database has four isolation levels. AOP in spring. Two way
Describe the storage level of the CPU External storage, memory, multilevel cache, registers. How to make full use of CPU caching mechanism. Enhance the time locality of code and spatial local GPU architecture The new features of c++11, small cores, g
Lucas theorem Ans=c (n,0) +c (n,1) +......+c (n,k) =c (n/2333,0) *c (n%2333,0) +c (n/2333,0) *c (n%2333,1) +......+c (n/2333,0) *c (n%2333,2332) +c (/2333,1) *C (n%2333,0) +......+C (n/) 2333,k/2333) * (n%2333,k%2333) =∑c (N/2333,J) *sum[n%2333][23
Point Memory space management data type structure execution contextVariable object active object Scope scope chain closure Important Before we learn about memory space, we need to have an intuitive understanding of three of data structures. They ar
From June 1 onwards, China's network security work has a basic legal framework, for the network also has more legal constraints, China's information security industry into a new era. "The People's Republic of China Network security Law" further impro
Recently in looking at the data of EMS, the following part of the mind map, after a deep understanding of the full review Time: March 31, 2017 11:16:55
Description Olya loves energy drinks. She loves them so much as that her room are full of empty cans to energy drinks. Formally, her room can be represented as a field of nxm cells, each cell of the which is empty or littered with cans. Olya drank a
Four. Realization of 1888 protocol in realityShortly after the adoption of the 1888 standard, the University of Tokyo in Japan submitted its own implementation (see), the author does not understand the Japanese, do not unde
Energy Saving Click to open the topic Time limit: Ms | Memory limit: 65535 KB Difficulty: 5 description Dr.kong designed robots are getting smarter. Recently, the municipal company handed over a task to the city, starting 5:00 every day, it was res
Transmission DoorTest instructions: Is given a starting point and an end point, each can go to four directions up to the K-step, each walk spent a second, asked to go to the end of the minimum number of seconds required.Idea: Direct BFS will definite
Olya loves energy drinks. She loves them so much, She is the full of the empty cans from the energy drinks. Formally, her hostel can be represented as a field of nxm cells, each cell of the which is the empty or littered with cans. Olya drank a lot o
According to Lucas ' theorem, we haveAns (n,k) =ans (⌊np⌋,⌊kp⌋−1) ∗∑i=0p−1 (N%PI) + (⌊n/p⌋⌊k/p⌋) ∑i=0k%p (n%pi) ans (n,k) =ans (\lfloor\frac n p \rfloor,\lfloor \ Frac k p\rfloor-1) *\sum_{i=0}^{p-1}\binom{n\%p}i+\binom{\lfloor N/p\rfloor}{\lfloor k/
Tags: GIS geographic software programming source code control " we The solutions provided do not imply that the e-form++ Visual Graphics component library can only develop these aspects of the application, in fact e-form++, like any other
Tags: database connection statement foreign lang. com SQL Date commit setFirst, Introduction SQLAlchemy is an ORM (object-relational Mapping) framework used to map a relational database to an object. The framework builds on the DB API, transforms
Tags: c + + #include <iostream>using namespace std;void add (int i, int j) {cout << "Add" &NBSP;<<&NBSP;I&NBSP;+&NBSP;J&NBSP;<<&NBSP;ENDL;}
Tags: document gen Operations Log Resource group number move fromThis is a bit of a leap from the previous article, but the GUI has official documents, but also can view the editor's own script document, more look at the API, multi-operation
Tags: mod display connect file files Mac software fixed TCPI use the Ubuntu installation of VSFTPD, reproduced please indicate the source, the following is my record:1. Enter "sudo apt-get update" and "Enter the administrator password of the
Reduce the brightness of the screen. At the same time, although the IBM ThinkPad notebook computer is equipped with a keyboard light thinklight, but in order to save electricity, it is best only in the dark conditions to enjoy her care. When you are | http://topic.alibabacloud.com/c/energy_2_72_1 | CC-MAIN-2018-43 | refinedweb | 796 | 56.69 |
- Write a program in C to implement your own sizeof operator macro.
The sizeof is a compile time operator not a standard library function. The sizeof is a unary operator which returns the size of passed variable or data type in bytes. As we know, that size of basic data types in C is system dependent, hence we use sizeof operator to dynamically determine the size of variable at run time.
Algorithm to implement your own sizeof operator.
Here we are going to do a pointer arithmetic trick to get the size of a variable. Here is the generic approach to find size of any variable without using sizeof operator:
Here we are going to do a pointer arithmetic trick to get the size of a variable. Here is the generic approach to find size of any variable without using sizeof operator:
- Suppose we want to find the size of variable 'Var' of data type D(for example an integer variable 'Var').
- Get the base address of the variable 'Var' using address of(&) operator.
- When we increment a pointer variable, it jumps K bytes ahead where K is equal to the size of the variable's data type.
- Now, ((&Var + 1) - &Var) will give the size of the variable Var of data type D.
C program to implement you own sizeof operator
#include <stdio.h> #define new_sizeof(var) (char *)(&var+1) - (char *)(&var) int main() { int i; double f; printf("Integer size from sizeof : %d bytes\n", sizeof(i)); printf("Integer size from my_sizeof : %d bytes\n", new_sizeof(i)); printf("Float size from sizeof : %d bytes\n", sizeof(f)); printf("Float size from my_sizeof : %d bytes\n", new_sizeof(f)); return 0; }Output
Integer size from sizeof : 4 bytes Integer size from my_sizeof : 4 bytes Float size from sizeof : 8 bytes Float size from my_sizeof : 8 bytes | http://www.techcrashcourse.com/2016/02/c-program-to-implement-you-own-sizeof-operator.html | CC-MAIN-2017-17 | refinedweb | 306 | 57.4 |
Talk:Proposed features/Date namespace
Contents
Range separator
For range, wiki say:
When only ranges of years are specified (no month or other details) a single hyphen may be used where the standard excepts a double hyphen.
But ISO 8601 say:
Of these, the first three require two values separated by an interval designator which is usually a solidus or forward slash "/".
--Pyrog (talk) 12:08, 20 December 2014 (UTC)
- ISO 8601 has "/" or "--" alternatively (according to wikipedia). Prevalent usage in Openstreetmap is something like "name:1994-2001"
- Hence the current "compromise". Because it is part of key-name I am thinking that "/" in date ranges would be awkward and as few as possible special chars should be used. In particular we should resist something like the syntax mess off key:start_date.
- single hyphen for "year-year" style ranges (because of our own backward compatibility)
- double hyphen for "yyyy-mm-dd--yyyy-mm-dd" style ranges
Disadvantages and interpretation
Copying the section that needs clarification bellow so it can be discussed - article page is not meant for discussion.
- it is impossible to link multiple properties with same period in time without losing consistency
- name:1800-1900 = School 1
- amenity:1800-1900 = school
- Now your software need to check if date ranges are same. What should they do if rages are different?
- name:1800-1900 (1a) = School 1 (2a) + amenity:1800-1900 (3a) = school (4a)
- name:1799-1900 (1b) = School 1 (2b) + amenity:1800-1900 (3b) = school (4b)
- name:1799-1899 (1c) = School 1 (2c) + amenity:1800-1900 (3c) = school (4c)
- name:1796-1850 (1d) = School 1 (2d) + amenity:1800-1900 (3d) = school (4d)
- How data consumers should use this data?
- Should they treat (1) as different values?
- Which values in (1) are same? Some? None?
- How to use values (3) with respect to (1)? Somehow? Nohow?
- What software should think if name timerange (1) is outside main tag timerange (3)? Is there name (2) for other feature for the period (1) or do they undefined or do they unnamed?
- If example above is not clear for you, try to repeat your answers for datas with granularity of a day instead of year. Answers will be less obvious. Please define rules in proposal or somewhere at wiki.
- this namespace schema is only about metadata. It is impossible to say what geometry of object was at moment X. OSM history shows history in OSM, not history of real object.
- name:1796-1850 = School will tell you only about name, not type of object or position/geometry over time
- amenity:1800-1900 = school will tell you class of the object, not position/geometry over time
Can you please expand the example with verbose values instead of "(1a)" or "(3d)" and clearly separate key/values from example numbering? I mean why you would tag "amenity:1800-1900 (3a) = school (4a)"? It is "amenity:1800-1900 = school".
- as of linking (logically associating) multiple tags with periods of time/dateranges:
- if the dates are exactly the same and on the same OSM object you may assume that they are logically linked.
- This is wild guess. You cannot say anything even if date ranges are equal. Granularity of dates is always unknown for you.
- ok, point taken. For many apps this won't matter - if you want a rendering what something looked like in January 1813 just apply all tags which are valid in January 1813. If you need to prove that some features existed over identical timespans you need something else. RicoZ (talk) 13:36, 23 January 2015 (UTC)
- if mappers use 1799/1800 inconsistently - bad luck. The various properties of a single object do not always change at the same time so this flexibility may be good or bad.
- You cannot say anything about data at all. if there name:1796-1850=ZYX, how you can say if there was name for it at 1795? How can you say if it was unnamed? What does it mean if object have amenity:1800-1900 = school, but name:1796-1850. What you should do about name during 1796-1800? What you should do about 1850-1900?
- geometry: it is possible to have different geometries over time. Draw the geometry of a "building:1929-1964=house" and another geometry tagged with "building:1965-=house", and in principle you can record every detail of the location as it changed over time. It is more difficult in the case that the geometry remains the same and only a part (like garage) of the building is added - the same overlapping geometries which are generally a problem in OSM. For objects with geometry changing over time I would prefer actually drawing overlapping geometries instead of any kind of multipolygon - because that makes it much easier to split/store/recombine objects between historic and main databases.
- So? Where it was stated that you should use different geometries?
One idea that was crossing my head was to use
- "tagname:period:daterange" or
- "tagname:period:period_identificator111"+"period:period_identificator111=daterange"
- instead of current "tagname:daterange". It would be much easier to search for ":period:" in tagnames and the use of period_identificators would make it easier to link features together - if indeed mappers would use it consistently. Not sure if I should push this - (1) is just syntactic sugar and (2) might be more appropriately modeled by relations. RicoZ (talk) 13:04, 22 January 2015 (UTC)
- It should be clearly stated that without explicit indicator information about equal timeranges cannot be restored only based at "equal" time ranges. Time is continuous, you can only specify it up to some precision. Xxzme (talk) 06:36, 23 January 2015 (UTC)
This is a proposal
This a proposal and should be treated as such. I don't see a healthy discussion about implementation and tagging. The concept was added in 2009 as far as a I can see. Perhaps getting a statistic (one try) would get a better overview. I propose to add a Template:Proposal_Page Template.--Jojo4u (talk) 20:00, 4 August 2015 (UTC)
- I tried to get an estimate of usage by doing an overpass query and post processing the results, there seems to be in the order of a few hundred uses - do not recall the details anymore. It is quite hard to get more precise results because often multiple tags with several date namespace suffixes apply to one object. Also did not find any really good way to do the query.
- Don't think adding a proposal page at this point makes much sense, it will never be used widely unless historical mapping adopts it for their database and then it is their stuff. Otoh some examples like Adolf Hitler Straße or Istambul exist which are prominent enough and changed names many times to warrant existence and documentation of this. RicoZ (talk) 09:26, 5 August 2015 (UTC)
Detection/Syntax
There should be no key for which statistics via overpass or via taginfo are not possible. Perhaps it would be best to be more verbose. E.g.
name:date(1965--1971-12-18) = schoolor
name:date[1965--1971-12-18] = school. Taginfo has some key character statistics to play with: --Jojo4u (talk) 13:41, 22 October 2016 (UTC)
The []-brackets are also used by traffic_sign=* in the value: Where the traffic sign requires a value, you can supply it after the ID using brackets [value]. The value may contain a dot . as decimal separator and a minus - for negative values.--Jojo4u (talk) 13:41, 22 October 2016 (UTC)
- I like the idea to make it easier to parse/regex, noticed that similar ideas are already in use like
name:his:1965--1971-12-18 = schooland similar ones. But then.. the currently described syntax was also in use long before I wrote this page so I am not sure what the best way forward is. RicoZ (talk) 21:11, 9 January 2017 (UTC)
- Not sure about the brackets, having them in a value might be a huge difference from having them in the key name and having special regex chars in this place might cause a lot of damage to older software instead of helping? Haven't ever seen anything similar except one example in. RicoZ (talk) 20:57, 14 January 2017 (UTC)
Historical routes
Would this way: 6676219 mapping of historical (train) routes (on existed and visable signs) be correct?
This is not a namespace.
This kind of syntax resembles a namespace (XML namespace) because it uses a colon sign as a delimiter, however, it's just a variable suffix. A namespace is not just a suffix - it's a reference to a group of entities (in OSM - keys). For example, language suffix for
name keys is a namespace, because it refers to a group of keys with the values in a specific language. While here, date range suffix is a variable value itself. So, it would be better to keep the description technically correct and to stop referring to this syntax as to one which uses a namespace concept. --BushmanK (talk) 17:29, 17 December 2016 (UTC)
It is awful from the point of view of querying and data processing
I'm perfectly aware of "any tags you like" core principle, but there is a requirement for these tags to be usable. I also know that this scheme is already supported by some services (townlands.ie) but I'm wondering, how much of technically redundant preprocessing they have to do to make it usable. Here is what I'm talking about. Normally, every OSM key represents a specific property, for example - one of the names. Value is self-explanatory: it contains a value of that property. While in this case, both key and value containing a value. Existing applications, indeed, should not be confused by it, but only since they just ignore this unknown tag. Many of them are designed to use keys only as a source of information about which property does particular tag represent. Keys are usually seen as members of a finite set. While this scheme turns it into an infinite one and forces applications to do what they haven't been doing before: to parse and analyze keys instead of just comparing them with a known set. Another problem is that it is impossible to query such keys to get every key with a date range without using regular expressions. It dramatically affects the performance and increases the computing power requirements comparing to directly queriable keys. I mean, to process a single tag with data range, we have to do (roughly) the following:
- query all keys, ending with a pattern "colon, none or (less than five digits or ISO 8601 date), one or two dashes, none or (less than five digits or ISO 8601 date)"
- for each key, extract its name and store it in a variable; extract a start and an end date from a string, store each in own numeric variable; extract tag value
- cross-reference the preprocessed keys with a set of known ones
- deal with data ranges (for example, query them to find the most recent values of keys)
- finally, use keys and values as usual
Looks complicated, isn't it? And what happens if someone will use date range suffix for everything? Imagine an object with
name:1953--=Springfield,
name:1800--1953=Central town and without
name=Springfield. It is completely correct from the point of view of this scheme, but to put "Springfield" on a map, we have to do all that processing above. Who is talking about backward compatibility here?
Actually, the only advantage of this scheme is that anyone can just easily type this manually.
To avoid accusations of not giving an alternative, I could probably say that relation for data range could be close to perfect in terms of querying and usage. For example:
type=daterange for_key=name start_date=1800 end_date=1953 value=Central town
An object with any properties (keys), affected by a date range, can be a member of any number of
type=daterange relations.
for_key= refers to a key, affected by this range.
value represents a value of tag, effective for the indicated time period. Date tags are self-explanatory. Any other extensions can be provided, such as date keys in strftime format, for example.
- With this syntax, affected keys are not modified.
- It is possible to ignore date range relations by not querying them.
- It is possible to filter them within a query (comparing date key numeric values with something is way less hungry operation than regular expressions).
- This scheme is extendable.
- Yes, relations are a bit more complicated to edit, but that shouldn't be a problem for people smart enough to work with historical data, right?
So, practically, current proposal only makes working with data harder. Anyone who wants to argue - start from writing an Overpass Turbo query that selects all names of all objects for a certain date. --BushmanK (talk) 18:29, 17 December 2016 (UTC)
- I'm the developer of Townlands.ie which implements this, and I really can't find these complaints persuasive. This system is straight forward and natural for humans to read (and machines to parse) "name-from-X-to-Y-was-Z", and you're main complaint is "I can't use overpass, and now there are a non-fixed set of tags".
- (i) Despite what BushmanK says, it's not hard to implement, only 30 lines of python. The Key:opening_hours syntax cannot be parsed with a regex. If you're worried about "computationally expensive parsing", that ship sailed long ago. If you're consuming OSM data and don't want to parse these tags, then just ignore them! Just like if you don't care about multilingual names, you don't have to parse out the keys which vary by language, dialect, and writing system.
- (ii) "Can't query with Overpass" So? You can't do routing with Overpass. Or check opening hours. Or do geocoding. Can you check a polygon for validity with Overpass? The standard for OSM isn't "can be queried with Overpass". This tag shouldn't be dismissed because it's not overpass-able.
- (iii) "Let's use relations" Relations are not categories and neither are they "tags". You're suggesting creating one relation, with many tags, and having one object be a member of that relation. To replace a simple, human and machine readable, tag. That's a silly suggestion. Your suggestion cannot handle dates, or ambiguous dates, so you cannot 'just do a numeric comparison'.
- (iv) You suggest relations should be usable, but you forget who will be interested in historic names of things. Historians, Librarians, Geneology researchers, Humanities scholars. These people aren't usually less technical than, well, technical people. So yes, we should have something that's easy to use, and easy to type in manually.
- Rorym (talk) 12:48, 20 December 2016 (UTC)
- 30 lines of code is about ten times more than normal. And it is just for names, while the idea of this proposal doesn't seem to be limited to it. So, expecting date range suffix to be applied to names only, it is possible to simplify parsing exactly in that manner I've mentioned in my diary entry on this topic: utilizing a qualifier, such as name:daterange_*=*. Even this will allow easier syntax handling. Your argument about routing is fallacious. Overpass API is a querying tool, and it is normally expected to be able to query OSM keys (not values since we have opening hours syntax) without any additional pre- or post-processing. Routing is not a querying problem. So, this argument is irrelevant. I've mentioned relations only as an example of non-exceptional syntax. But since you prefer to pick on that, I'd like to mention, that my example has nothing to do with relations as categories - it represents a complex property of a single object (so, it can't be, by any means, a category). Another fallacious argument. Calling a suggestion "silly" because I'm using a well-formed relation as an example is a typical attempt to directly dismiss an argument without any credible argumentation (which is a demagoguery), because if you are right, than any other scheme that uses relations in OSM is also "silly". So, your arguments are either subjective or fallacious. I have nothing more to add. --BushmanK (talk) 20:16, 30 December 2016 (UTC)
- @BushmanK: if you're interested in using dates for overpass queries, you should probably join the discussion on the Overpass API dev ML fairly quickly, as your specific use case around this Date namespace thing doesn't seem to be covered yet, see and Mmd (talk) 12:37, 22 December 2016 (UTC)
- @Mmd, I'm guessing, it's up to Overpass developers - to implement something for this particular case or to ignore it. My point here is that current form of the proposed syntax is exceptional, while fundamental incompatibility of Overpass API with it is just a practical example of how it could make working with this scheme harder for everyone who'd want to query this sort of data.--BushmanK (talk) 20:22, 30 December 2016 (UTC)
- There or other ideas for a syntax that might be easier on machines, see also Talk:Proposed_features/Date_namespace#Detection.2FSyntax. But nobody is forced to parse this - every object with a name should have a plain "name" tag while name:daterange is optional and should be used in addition to plain name even if it means the current name is written twice there. RicoZ (talk) 21:17, 9 January 2017 (UTC)
- "name" was just an example here, the intended meaning was if "tagXXX:date-" (meaning untill now) is used than "tagXXX" should be provided as well as fallback for those apps that don't evaluate the daterange. Regardless of the daterange syntax we might agree upon this kind of fallback will still be necessary .. will add this to the description. RicoZ (talk) 20:28, 14 January 2017 (UTC) | http://wiki.openstreetmap.org/wiki/Talk:Date_namespace | CC-MAIN-2017-17 | refinedweb | 3,005 | 61.36 |
ESP8266 and MicroPython - Part 1
The first release of Micropython for ESP8266 was delivered a couple of weeks ago. The documentation covers a lot of ground. This post is providing only a little summary which should get you started.
Until a couple of weeks ago, the pre-built MicroPython binary for the ESP8266 was only available to backers of the Kickstarter campaign. This has changed now and it is available to the public for download.
The easiest way is to use esptool.py for firmware handling tasks. First erase the flash:
$ sudo python esptool.py --port /dev/ttyUSB0 erase_flash esptool.py v1.0.2-dev Connecting... Erasing flash (this may take a while)...
and then load the firmware. You may adjust the file name of the firmware binary.
$ sudo python esptool.py --port /dev/ttyUSB0 --baud 460800 write_flash --flash_size=8m 0 esp8266-2016-07-10-v1.8.2.bin esptool.py v1.2-dev Connecting... Running Cesanta flasher stub... Flash params set to 0x0020 Writing 540672 @ 0x0... 540672 (100 %) Wrote 540672 bytes at 0x0 in 13.1 seconds (330.8 kbit/s)... Leaving...
Now reset the device. You should then be able to use the REPL (Read Evaluate Print Loop). On Linux there is
minicom or
picocom, on a Mac you can use
screen (eg.
screen /dev/tty.SLAB_USBtoUART 115200), and on Windows there is Putty to open a serial connection and get the REPL prompt.
The WebREPL work over a wireless connection and allows easy access to a prompt in your browser. An instance of the WebREPL client is hosted at. Alternatively, you can create a local clone of their GitHub repository. This is neccessary if your want to use the command-line tool
webrepl_cli.py which is mentionend later in this post.
$ sudo minicom -D /dev/ttyUSB0 #4 ets_task(4020e374, 29, 3fff70e8, 10) WebREPL daemon started on ws://192.168.4.1:8266 Started webrepl in setup mode could not open file 'main.py' for reading #5 ets_task(4010035c, 3, 3fff6360, 4) MicroPython v1.8.2-9-g805c2b9 on 2016-07-10; ESP module with ESP8266 Type "help()" for more information. >>>
The public build of the firmware may be different than the firmware distributed to the backers of the Kickstarter campaign. Especially in regard of the available modules, turned on debug messages, and alike. Also, the WebREPL may not be started by default.
Connect a LED to pin 5 (or another pin of your choosing) to check if the ESP8266 is working as expected.
>>> import machine >>> pin = machine.Pin(5, machine.Pin.OUT) >>> pin.high()
You can toogle the LED by changing its state with
pin.high() and
pin.low().
Various ESP8266 development board are shipped with an onboard photocell or a light dependent resistors (LDR) connected to the analog pin of your ESP8266 check if you are able to obtain a value.
>>> import machine >>> brightness = machine.ADC(0) >>> brightness.read()
Make sure that you are familiar with REPL and WebREPL because this will be needed soon. Keep in mind the password for the WebREPL access.
Read the instructions about how to setup your wireless connection. Basically you need to upload a
boot.py file to the microcontroller and this file is taking care of the connection setup. Below you find a sample which is more or less the same as shown in the documentation.
def do_connect(): import network SSID = 'SSID' PASSWORD = 'PASSWORD' sta_if = network.WLAN(network.STA_IF) ap_if = network.WLAN(network.AP_IF) if ap_if.active(): ap_if.active(False) if not sta_if.isconnected(): print('connecting to network...') sta_if.active(True) sta_if.connect(SSID, PASSWORD) while not sta_if.isconnected(): pass print('Network configuration:', sta_if.ifconfig())
Upload this file with
webrepl_cli.py or the WebREPL:
$ python webrepl_cli.py boot.py 192.168.4.1:/boot.py
If you reboot, you should see your current IP address in the terminal.
>>> Network configuration: ('192.168.0.10', '255.255.255.0', '192.168.0.1', '192.168.0.1')
First let’s create a little consumer for Home Assistant sensor’s state. The code to place in
main.py is a mixture of code from above and the RESTful API of Home Assistant. If the temperature in the kitchen is higher than 20 °C then the LED connected to pin 5 is switched on.
If a module is missing then you need to download it from the MicroPython Library overview and upload it to the ESP8266 with
webrepl_cli.py manually.
# Sample code to request the state of a Home Assistant entity. API_PASSWORD = 'YOUR_PASSWORD' URL = '' ENTITY = 'sensor.kitchen_temperature' TIMEOUT = 30 PIN = 5 def get_data(): import urequests url = '{}{}'.format(URL, ENTITY) headers = {'x-ha-access': API_PASSWORD, 'content-type': 'application/json'} resp = urequests.get(URL, headers=headers) return resp.json()['state'] def main(): import machine import time pin = machine.Pin(PIN, machine.Pin.OUT) while True: try: if int(get_data()) >= 20: pin.high() else: pin.low() except TypeError: pass time.sleep(TIMEOUT) if __name__ == '__main__': print('Get the state of {}'.format(ENTITY)) main()
Upload
main.py the same way as
boot.py. After a reboot (
>>> import machine and
>>> machine.reboot()) or power-cycling your physical notifier is ready.
If you run into trouble, press “Ctrl+c” in the REPL to stop the execution of the code, enter
>>> import webrepl and
>>> webrepl.start(), and upload your fixed file. | https://home-assistant.io/blog/2016/07/28/esp8266-and-micropython-part1/ | CC-MAIN-2017-34 | refinedweb | 882 | 61.63 |
When you wrap content with an UpdatePanel, it pretty much takes care of everything for you. But it can't do absolutely everything...
Take for example some inline script:
<script type="text/javascript">
alert('hi');
</script>
<p>Some html after the script</p>
Inline meaning it appears inline with the rest of your HTML.
First of all -- I'd say it's best not to use inline script if you don't have to. If you can move that script block to the bottom or the top of the page, or to a separate javascript reference, without consequence, then why not keep the script and UI nice and separate? But there are times when inline script is either necessary or just preferred. For example -- check out how you instantiate an instance of Silverlight on your page. That's inline script. It makes sense to keep the script where it is, since that's where you want your Silverlight app to be.
Then, along came UpdatePanel...
The oversimplified way of explaining how UpdatePanel does its work on the client is through the innerHTML DOM property. When a delta is retrieved from the server, the client script finds the corresponding element in the existing DOM, disposes of its contents, and then assigns the new content via innerHTML. Thanks to a basically consistent behavior across the major browsers, the new content appears just as if it were there to begin with.
But inline script doesn't work this way. Setting the innerHTML of a DOM element to HTML which contains a script block does not cause that script to execute. The only way to execute arbitrary script dynamically is to eval() it directly, or dynamically create a script element with document.createElement("script") and inject it into the DOM.
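To see the difference concretely, here is a minimal client-side sketch (the element id and alert messages are made up):

// Setting innerHTML parses the markup, but script blocks inside it never run:
var div = document.getElementById('target'); // assume some empty div exists
div.innerHTML = '<script>alert("never runs")<\/script>';

// eval() executes script text directly:
eval('alert("hi from eval")');

// Injecting a script element into the DOM does execute it:
var s = document.createElement('script');
s.type = 'text/javascript';
s.text = 'alert("hi from an injected script element")';
document.getElementsByTagName('head')[0].appendChild(s);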
So if we had the above HTML+SCRIPT inside an UpdatePanel, whenever it updated, the inline script would simply be ignored.
UpdatePanel realizes that there may be script to be executed. But it only knows about such scripts if they are registered through the ScriptManager's Register script APIs. If you use that API correctly, UpdatePanel once again picks up the slack and takes care of the rest for you automatically.
ScriptManager.RegisterStartupScript(this, typeof(Page), UniqueID, "alert('hi')", true);
UpdatePanel intercepts registrations like these during asynchronous postbacks and sends the content down to the client separately from the HTML. And then the client-side PageRequestManager dynamically injects a script element for you. So one solution to this inline script problem is simply not to use inline script. If you use this Register API all the time, it will work whether there is an UpdatePanel involved or not.
But I want my Inline Script back!
Alright already. So here is a novel little control that gives you the benefits of inline script without the drawback of not working in an UpdatePanel.
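A minimal sketch of what that control can look like follows (a reconstruction from the description below, so treat the exact details as illustrative; it assumes the ScriptManager type from System.Web.Extensions):

using System;
using System.Globalization;
using System.IO;
using System.Text;
using System.Web.UI;

public class InlineScript : Control
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        ScriptManager sm = ScriptManager.GetCurrent(Page);
        if (sm.IsInAsyncPostBack)
        {
            // Capture the rendered contents (the <script> element itself)
            // so ScriptManager can ship them down with the async response.
            StringBuilder sb = new StringBuilder();
            HtmlTextWriter writer = new HtmlTextWriter(
                new StringWriter(sb, CultureInfo.InvariantCulture));
            RenderChildren(writer);
            // Pass 'false' for addScriptTags -- the content already includes them.
            ScriptManager.RegisterStartupScript(
                this, typeof(InlineScript), UniqueID, sb.ToString(), false);
        }
    }

    protected override void Render(HtmlTextWriter writer)
    {
        ScriptManager sm = ScriptManager.GetCurrent(Page);
        if (!sm.IsInAsyncPostBack)
        {
            base.Render(writer);
        }
    }
}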
If the request is normal, it just renders whatever the contents are as usual. If you happen to be in the middle of an asynchronous postback, it uses the RegisterStartupScript API.
<asp:UpdatePanel ID="UpdatePanel1" runat="server">
    <ContentTemplate>
        <i88:InlineScript runat="server">
            <script type="text/javascript">
                alert('hi');
            </script>
        </i88:InlineScript>
        <asp:Button ID="Button1" runat="server" Text="Update" />
    </ContentTemplate>
</asp:UpdatePanel>
You get the beloved alert dialog when the update panel updates! Thankfully, because you still put the script element itself in the html, you still get javascript intellisense and all that jazz, too.
Simple, not terribly useful, probably not the best thing for performance, but could be pretty handy in the right situation.
UPDATE: If you actually use this, you may want to add a check for sm == null as well as IsInAsyncPostBack, so that the control works even if there's no ScriptManager on the page.
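That is, guard the lookup along these lines:

ScriptManager sm = ScriptManager.GetCurrent(Page);
if (sm != null && sm.IsInAsyncPostBack)
{
    // capture and register the rendered script
}
else
{
    // no ScriptManager, or a regular request: just render as usual
}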
Dave Reed has another great post that shows how to build a simple control that lets you put your script
Pingback from Enlaces varios en Buayacorp - Diseño y Programación
It's a bit misleading to indicate that inline script is necessary in some cases, citing an example for Silverlight. It would be far superior to put this code in an external .js file and link an init function for Silverlight to the window.onload.
I personally think the only script tags in a web page should be links to external files in the head of the page. The problem is that not enough people are talking about how to do things in external files, and with AJAX we now have a new type of spaghetti code in our pages, javascript, when it could be so easily kept out of the page altogether. Separate structure, presentation AND behaviour.
Brian -- there's one important difference when moving the script to an onload event handler. The page markup is displayed as the browser parses it, and so when the Silverlight component is created it could cause an unwanted flickering in the page layout. That, and onload could take a long time to fire. For example, it won't fire until all the images on the page have been downloaded. I for one do not want to wait that long for an important part of my page to be visible. There's ways to limit that problem, but anything except inline script has the potential to cause flickering.
There are ways to address this. You could check for the existence of the element you want during load, getting a reference to the element and trying again until the reference exists. A bit more work is involved but I think it's worth the effort to create a true separation of behaviour.
Go back a few years and nobody cared about moving presentation (style) out of the markup. In recent times javascript is back in vogue, mainly because of Ajax, but unfortunately with this has come a complete disregard for being accessible and unobtrusive. Microsoft have spoken, though, with their Ajax implementation and now Silverlight, so millions will follow and I will have to stare at <a href="#" onclick="dosomething()"> as well as embedded script blocks within the markup (which truly makes me cringe) for years to come.
Checking again until it exists? Do you mean with a setTimeout? That's going to give the same result as if you didn't check until all the markup was parsed. There may still be a flicker.
I'm all for separation of UI and logic, believe me. The real problem is that the DOM and the browser in general were never intended to be used like we use them today. We have to make do with what we have.
You may find this discussion interesting:-
Setting a timeout for 1 millisecond is pretty pointless - just use 0 milliseconds. That tells the browser to execute it as soon as possible but not immediately. It's a queuing mechanism. The browser constantly creates 'tasks' that are executed by a single thread. So 0 doesn't really mean 0; it's more like a suggestion. It puts the task on the end of the queue (probably after the task which is parsing the html).
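A tiny sketch of that queuing trick (the element id and init function are hypothetical):

window.setTimeout(function() {
    var host = document.getElementById('silverlightHost'); // hypothetical id
    if (host) {
        initComponent(host); // runs once the parser's current task is done
    }
}, 0);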
I have a bunch of controls with complex javascript "object" components that need to be initialized and have order dependencies; previously this was being done with inline javascript created in the render method. But UpdatePanel ruined that. So I created a control that provides priority queues and uses a string builder to create one big startup script and register it. So here is my question: once a script has been registered with the ScriptManager, can I replace that script with a new one using the same unique id? This doesn't appear to work. The other possibility would be to remove the script from the ScriptManager's script array, but this doesn't seem to work either. Any ideas?
Randy -- no, there's really no way to replace a script that's already been registered.
But I do not understand why you are trying to do that. The pattern you should follow is to use the ScriptManager.Register APIs. You pass it the control which owns the script, which is what allows UpdatePanel to be able to successfully execute the script as well as detect whether the script should be executed (a script registered by a control that isn't within an updating update panel shouldn't be executed). Just search for, say, ScriptManager.RegisterStartupScript and there should be plenty of docs and examples.
You can also implement the IScriptControl interface if you don't mind requiring a ScriptManager control on the page. You should definitely do this if you're dependent on the ms ajax library.
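For example, from inside a control that owns its script (the names here are hypothetical):

protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);
    // Passing 'this' as the owner lets UpdatePanel figure out whether the
    // script needs to be sent down and executed again after an async update.
    ScriptManager.RegisterStartupScript(
        this, typeof(MyValidator), ClientID + "_init",
        "initMyValidator('" + ClientID + "');", true);
}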
If I'm missing something please paste in some code to help me understand :)
Works perfectly
Thanks :-)
I have actually already gone down the path of registering the startup script with the control and not the page. And that solves the problem of dynamically created/conditionally visible controls, but the order dependency problem still remains. In fact, the documentation for the ScriptManager actually recommends using a string builder if you have order dependencies. It appears that these two requirements are mutually exclusive. Here is my use case:
I have a conditionally visible homemade required field validator that is dependent on the control it is making required having run through its own set of javascript initialization.
Even according to the documentation for RegisterStartupScript the scripts are executed in an indeterminate order, which results in a pseudo-random javascript error that makes you pull your hair out.
This may be out of the scope of this thread, but I have been battling this problem for a couple months now and thought I saw a glimmer of hope in your inline script control.
What if I were to register a javascript function call as the startup script, where my JavaScriptLoader control is creating this function dynamically in an update panel?
Randy --- where on earth does it say Startup scripts aren't run in order of registration? That's just not true. If your validator were always after the control it validates in the markup then it should always have its render method called last.
I'm still having trouble understanding your dependency problem. Why can't you factor these things down into libraries that you can include? Then both controls just include both libraries, with RegisterClientScriptInclude (or RegisterClientScriptResource).
If the script is completely dynamic and so cannot be externalized, then I think there's a design problem. You should still be able to design it so that the functions are never changing, and all that changes is the parameters you pass it. One control depends on the data of the other. And that can be designed so that order doesn't matter. Both libraries get included, and then startup script initializes the whole shibang with specific parameters to an api defined in the libraries.
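A sketch of that pattern (the file, type, and function names are hypothetical): every control registers the same include, and per-instance startup scripts pass only parameters:

// Safe to call from every control that needs the library; duplicate
// registrations with the same type/key pair are only emitted once.
ScriptManager.RegisterClientScriptInclude(
    this, typeof(DateBox), "DateBoxLib",
    ResolveClientUrl("~/scripts/DateBox.js"));

// Per-instance initialization: the function never changes, only the
// parameters do, so registration order stops mattering.
ScriptManager.RegisterStartupScript(
    this, typeof(DateBox), ClientID + "_init",
    "initDateBox('" + ClientID + "', { isEmpty: true });", true);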
Yes, after a quick test it appears that the register calls are not the source of my ordering issues. I can't find the page where I read that, maybe I dreamed it. Now that my credibility is completely lost, it appears that the ordering problems are a result of the script executing before the html elements are created, as is the default based on the LoadScriptsBeforeUI property. So the root of the problem is that these js object initialization scripts actually augment the html element with additional functionality, i.e. event handlers, properties, as well as dynamically generated functions. Here is an example of an init script generated by our dateBox:
var ctl00_MC_SQI_SI_ctl00_DateComp = getObjectById('ctl00_MC_SQI_SI_ctl00_DateComp');
ctl00_MC_SQI_SI_ctl00_DateComp.IsEmpty = true; ctl00_MC_SQI_SI_ctl00_DateComp.ExtendedType = "LandaDateBox";
ctl00_MC_SQI_SI_ctl00_DateComp.SelectedDate = new Date('1/1/0001'); ctl00_MC_SQI_SI_ctl00_DateComp.ClearToSpecifiedDate = null;
function ctl00_MC_SQI_SI_ctl00_DateComp_DateChanged () { return LandaDateBox_ChangeHandler(getObjectById('ctl00_MC_SQI_SI_ctl00_DateComp')); }
var ctl00_MC_SQI_SI_ctl00_DateComp_DateValidator = getObjectById('ctl00_MC_SQI_SI_ctl00_DateComp_DateValidator');
function ctl00_MC_SQI_SI_ctl00_DateComp_DateValidator_Validate() { return LandaDateFieldValidator_Validate(ctl00_MC_SQI_SI_ctl00_DateComp_DateValidator); }
function ctl00_MC_SQI_SI_ctl00_DateComp_DateValidator_ValidateOnBlur(event) { LandaDateFieldValidator_ValidateOnBlur(window.event ? window.event : event, ctl00_MC_SQI_SI_ctl00_DateComp_DateValidator); }
function ctl00_MC_SQI_SI_ctl00_DateComp_DateValidator_ValidateOnChange(event) { return LandaDateFieldValidator_ValidateOnChange(window.event ? window.event : event, ctl00_MC_SQI_SI_ctl00_DateComp_DateValidator); }
ctl00_MC_SQI_SI_ctl00_DateComp_DateValidator.evaluationfunction = ctl00_MC_SQI_SI_ctl00_DateComp_DateValidator_Validate;
This is all being generated dynamically but it is all also unique to this control. I am not sure how this could be externalized.
My current line of attack is to wrap these init scripts with an function and add that function to the Sys.Application.load event then use the control based registerStartupScript to register that whole mess. This seems to get me most of the way there.
Thanks for all your help.
Randy... if you register your script with RegisterStartupScript then it should always appear after the HTML, unless your controls are after the end of the form element (and if that's the case, just put them inside). Startup scripts always go to the bottom of the form. And during async updates, they always execute after the html has been updated.
What do we use when the inline JavaScript code is like this:
<script type="text/javascript">
function DisplayMsg()
{
alert("Message");
}
</script>
----
I don't want to build the entire JS code in the code-behind and pass it to RegisterScriptBlock or RegisterStartupScript. Nor do I want to create a separate JS file for all the script code that is present in a user control.
Kulp -- not sure what you're asking. Are you saying you want this script to be included but you don't want to do the work of registering it through the ScriptManager APIs? Well thats exactly what the control I described in this article is good for...
Article is really really helpful, but as i am new to ASP.NET i am getting an error "i88:InlineScript" is not a valid tag.
Can anyone tell me how to register and use that control?
Please help, I am stuck.
Parag -- try this
msdn2.microsoft.com/.../c76dd5k1(vs.71).aspx
Can you paste a code snippet showing how to use the control you created? It would be really helpful.
I created a usercontrol, put the PreRender method in the usercontrol, and registered the control on my .aspx page, but when I put a <script> tag within the usercontrol tag it started giving me an error.
I was looking for this code... thanks. In my situation I could dynamically load a user control but failed to load or run the JavaScript associated with that user control. With your code I fixed the problem.
I still have one more issue though.
I cannot load a user control that has an AJAX control (say, TabContainer). It works, however, if I don't use an update panel from the calling program (default.aspx). Any idea how to fix this issue?
thanks for the contribution to the asp.net community.
regards
achu. ([email protected])
"looking for a true web alternative for desktop applications"
achu -- why can't you load the control? Are you getting an error or what?
Yes, my TabContainer is defined declaratively in the usercontrol, which is dynamically loaded into the main page (default.aspx).
The usercontrol loads with the AJAX tab details, but no visual tab is present (I think it's the same situation for all AJAX controls, due to the JavaScript, I believe).
To summarize...
1. The default page has a gridview with buttons on each row.
2. When the user clicks a button, a usercontrol is loaded and displayed in a popup control (which is in default.aspx).
Everything works as expected, but if I try to load a user control that includes an AJAX control (like the tab control), it won't work properly.
Things fixed with your code:
1. Javascript issue
2. Postback from usercontrol.
thanks
achu.
achu -- I'm afraid you'll have to tell me more than just "it won't work properly" :) What does that mean? I can't help if I don't know what the problem is.
When I try to load the usercontrol from default.aspx, the tab panel doesn't display the visual tab; instead it shows only the tab text. How do I fix this without removing the update panel? Please see this simple code to illustrate the problem.
*****User Control *********
<div id="mydiv" runat="server">
<cc1:TabContainer
<cc1:TabPanel
<ContentTemplate>
<span>Tab1 Text</span>
</ContentTemplate>
</cc1:TabPanel>
</cc1:TabContainer>
</div>
******Default.aspx***********
<asp:UpdatePanel ID="UpdatePanel1" runat="server">
  <ContentTemplate>
    <asp:LinkButton ID="LinkButton1" runat="server" OnClick="Button1_Click"></asp:LinkButton>
    <asp:Panel ID="box" runat="server"></asp:Panel>
  </ContentTemplate>
</asp:UpdatePanel>

protected void Button1_Click(object sender, EventArgs e)
{
    UserControl control = (UserControl)this.Page.LoadControl("UserControl.ascx");
    control.ID = "MyControl1";
    box.Controls.Clear();
    box.Controls.Add(control);
}
achu -- Sounds like a bug with the Tab panel control, which is part of the AJAX Control Toolkit. I'm afraid my team doesn't have much to do with that, and I can't speak to the source code. I recommend you visit the forums for the toolkit and see if anyone familiar with it can help you, here:
forums.asp.net/1022.aspx
check this out
blog.jeromeparadis.com/.../1501.aspx
I am dynamically creating the script in the button click and registering it using ScriptManager:
ScriptManager.RegisterStartupScript(saveButton, Me.GetType(), WellPATHConstant.Dialogbox, rejectMessageSB.ToString, False)
Could you tell me when the script is executed? I want it to run on the click of that button.
Pingback from .Net Project Blog » Blog Archive » Using JavaScript inside UpdatePanel (Asp.NET 2.0)
Hello,
I receive this error "The type or namespace name 'StringBuilder' could not be found (are you missing a using directive or an assembly reference?)"
App_Code/InlineScript.cs
cheers,
imperialx
imperialx -- you need to add the right using statement. StringBuilder is under the System.Text namespace, so add:
using System.Text;
I have a problem I have not yet been able to solve. I see your knowledge of ScriptManager and the page lifecycle is far better than mine, so I ask...
I have an app I am working on that dynamically renders usercontrols (it's a module application). These usercontrols contain 3rd-party UI controls. These controls require script references that get generated when the usercontrols do. This all works fine until I try to use AJAX web services, which require a ScriptManager. The ScriptManager (as I understand it) takes over script references for the page. So if the ScriptManager exists, none of my third-party UI controls render correctly.
Any Ideas?
Thanks
I actually left out one key thing: I am dynamically generating these user controls via the 3rd party's CallBack control.
LDAdams -- Sorry I'm not really following what the problem is. ScriptManager existing on a page cannot cause scripts to vanish. Perhaps if you package up a small example of your problem and send it to me I can help more (blog at infinity88.com).
First my apologies if this is not where you wanted me to post this.
I also realize the problem I am having is complicated by using a 3rd party tool. So I appreciate any help. I do feel that there is a code solution for this problem though.
I am not sure an example would really help as it would look pretty standard. ScriptManager on the page and the standard code to dynamically render a control. I will though try and explain myself better.
First here is the problem stated from an employee from the company:
"Recently I've come across a user who was using CallBack to dynamically create our controls on a page with a script manager. Once a script manager is on the page it will handle all the scripts, meaning it will not let callback initialize the newly created control.
This can be remedied two ways.
The first is an old one: place an empty instance of the control on the page; this way it knows about the scripts. This worked fine in our testing. My comment: (Since each dynamic usercontrol can have a number of these controls in it, I don't think this is doable.)
Second is: use an update panel. This is a situation where an update panel is probably a better option for creating dynamic controls." My comment: (I have built the application around their Callback. I was attempting to stay away from full-page re-renders, which their control provides.)
The ScriptManager is not actually making the scripts vanish; as I understand it the ScriptManager is just not allowing the newly created controls to use their dynamically rendered script refs.
Hope this makes better sense. Thanks for any help, and again I understand if this falls out of the scope of this blog as I am using 3rd party UI components.
Thanks...
ldadams -- I assume by "Callback" you mean the ASP.NET Callback feature. And I think what you mean by "using callback to dynamically create our controls on a page" you mean that somehow, someone is using a callback to call a server-side method, which is creating the 3rd party control and manually rendering it into a string, which becomes the callback result. On the client, the rendered html is injected into the DOM.
Is that what you are saying?
If so, then this problem has nothing to do with ScriptManager at all. When you call Render() on a control directly, you are completely bypassing the lifecycle for the control. You can't just instantiate a control and call Render and expect it to work. The control must participate in the lifecycle of the page by being a part of the control tree -- postback, viewstate, and script registration all depend on this. Doing this custom rendering does nothing more than give you the HTML the control would have rendered, but generating HTML is not the only thing controls do, so you're really on your own at that point.
As you alluded to, you'd have to use an UpdatePanel to get the functionality you want. It's absolutely necessary to do full postbacks to do this in the way you want, and UpdatePanel is what makes full postbacks via AJAX possible.
You might be able to hack it up by manually ensuring scripts are included when they are needed and continuing to use your callback mechanism, but understand that the control was not designed for that kind of use, and neither were controls in general, so you'd probably go down a path of hacking up things just to work around the issues, and you might run into a big dead end depending on the nature of the control. For example if the control requires postback data (like, it contains a textbox and you need to interact with the textbox value on postbacks), then this isn't going to work at all.
great job!
tx for InlineScript and explanation of the issue with inline JS block
Hi,
I get this error:
i88:InlineScript Unknown server tag.
anyone help?
Hi.....
I have a problem: when I use the script manager as shown below, my code works fine, but my update panel is not updating. If I don't use the script manager, the update panel updates its content. Can anyone help me?

protected override void Render(HtmlTextWriter writer)
{
    ScriptManager.RegisterStartupScript(this, typeof(MyTest), UniqueID, "setstyles();", true);
}

Thanks
Abins -- you render base into a string 'script', but then it goes nowhere. You just have this 'setstyles' call. Whatever it is you are rendering isn't going anywhere. Also, why are you capturing the rendering like that in the first place? Unless you're trying to handle some very specific scenario you shouldn't need to do that; just let it render normally and it will update.
Here's the link:
I have the control in a .ascx file. I have registered the control within my .aspx page. I am calling the control within the update panel. But, I get this error: "... does not have a public property named 'script'"
It is erroring at <script type="text/javascript">
DKoser -- it sounds like you are trying to put the script element inside a control that doesn't support nested content. You'll have to post your markup or email it to me for me to tell you more.
Thanks for the article, it helped me.
This may be a simple question but I have no idea how to do it. I want to place inline html between custom asp.net user control tags
e.g.:
<customPrefix:customTag runat="server">
  <h1>test</h1>other html
</customPrefix:customTag>
how do you access the content in between the tags and print it somewhere in your control?
Robert -- you need the right attributes. First define a default property with the DefaultPropertyAttribute, then you need the PersistenceModeAttribute, with PersistenceMode.InnerDefaultProperty. This article may be helpful as well:
msdn2.microsoft.com/.../f820d25y.aspx
Hi,
I tested RegisterStartupScript() under ASP.NET 3.5 and found the registered script appears in <head>, not at the end of the page. Why?
Thanks,
Ricky.
<body>
  <form id="form1" runat="server">
    <div>
      <asp:ScriptManager ID="ScriptManager1" runat="server">
      </asp:ScriptManager>
      <asp:UpdatePanel ID="UpdatePanel1" runat="server">
        <ContentTemplate>
          <asp:Button ID="Button1" runat="server" OnClick="Button1_Click" />
        </ContentTemplate>
      </asp:UpdatePanel>
    </div>
  </form>
</body>
string msg = string.Format("alert('{0}');", "Hi, Button1 is clicked!");
ScriptManager.RegisterStartupScript(this, typeof(Page), "clcikTest", msg, true);
Ricky -- it doesn't. Show me, because that's not how it works for me.
Hi, the link contains the screen shots I described. Following 1,2,3.png, you should observe the behaviors.
If you could not link to it, please let me know.
The code is:
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="InjectJS.aspx.cs" Inherits="InjectJS" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
<head runat="server">
<title>Untitled Page</title>
</head>
<asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
</html>
public partial class InjectJS : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        ScriptManager1.SetFocus(TextBox1);
    }

    protected void Button1_Click(object sender, EventArgs e)
    {
        string msg = string.Format("alert('{0}');", "Hi, Button1 is clicked!");
        ScriptManager.RegisterStartupScript(UpdatePanel1, typeof(UpdatePanel), "clcikTest", msg, true);
    }
}
ricky -- your code registers the script during an async postback. So how is it the code could 'appear' anywhere? Script that comes back during an async post is just executed, it doesn't 'appear' anywhere. It does get dynamically added to the HEAD as a script element, but just so it can be executed. It doesn't matter at that point if it's in the head or elsewhere, it still executes when it is supposed to.
Hey, I have some problems with UpdatePanel and FormView. In my project, a FormView is used within an UpdatePanel to reduce the flicker when the user inputs data. A JavaScript calendar control is used to select a date. The question is that when the UpdatePanel is removed, the calendar control works fine, but when it is added, it fails.
Another problem is that if the initial FormView mode is set to Edit, it also works fine, but if it is set to ReadOnly, it fails. Here's a code snippet:
<asp:UpdatePanel ID="UpdatePanel1" runat="server">
  <ContentTemplate>
    <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
    <!-- here's the TextBox; it works fine -->
<script type="text/javascript">
Sys.WebForms.PageRequestManager.getInstance().add_endRequest(InitCalendar)
function InitCalendar()
{
Calendar.setup(
{
inputField : $get("<%=TextBox1.ClientID%>"),
ifFormat : "%Y-%m-%d",
showsTime : false,
button : "TextBox1",
singleClick : false,
step : 1
}
)
}
InitCalendar();
</script>
Ferret -- inline script doesn't work inside an update panel like that, as the article explains. It will work on the FIRST request to the page, but it will not on async updates, which explains why you don't see the problem when you're originally in edit mode.
Referring to "public class InlineScript : Control {"
I assume it is a Web User Control, am I correct?
Do you have the vb version of the code?
Thanks!
Yenyen -- no, it's just a control. If it were a user control there'd need to be an .ascx file. This is a custom server control. Sorry, I don't have a VB version, but it should be easy enough; it's basically just syntax differences. You could always compile it, then use Reflector to view it in VB.
Hello InfiniteLoop,
I have seen a few of your informative posts and I think you are a very smart guy. So I wanted to ask you about a complex scenario that you probably haven't encountered before... let's see what you can come up with ;).
Here is the deal.
I have a control that overrides the Render method, puts the HTML content into a Cache object, and only renders a script tag that calls an external JS file (which is handled by a custom handler). The handler for JS files takes the HTML code from the cache object and puts "document.write" around it, and so it writes everything into the page.
To further clarify, the result of the page is this:
document.write("<html code of the control including script tags and the client code created by the update panel >"); included as external file in a script tag.
This is a requirement and can't be changed.
In one of the controls that require this I have an update panel... but, surprise surprise, it doesn't work :).
Can you think of any way to make the update panel work? Why exactly it doesn't work I don't quite get, because all the other JavaScript content and script tags inside the control work fine.
If not, my only solution is to use custom AJAX calls, retrieve the data needed, and change it one element at a time through the DOM. Would you see problems with that too?
The only problem in that case is that it would take me ages :-s
document.write only works while the page is initially loading. If you call document.write after the page is already loaded, you are starting a new document, and that clears out the current document completely.
Also, "rendering" script tags inline won't work for reasons explained in this article, they must be registered through the appropriate API.
Also, registered scripts that are URLs (not actual script but a URL to one) are only going to be included once on the page. If you register "foo.js" when the page first loads, and then during an update panel request you register "foo.js" again, it isn't included a second time. That's because UpdatePanel figures the script is already loaded, and it's some component, oblivious to the fact it's an asynchronous postback, trying to re-register a script it has already registered. Even if the component knew it was an asynchronous post it would be stuck, because it doesn't know whether it existed prior on the page or if this asynchronous post is the first time it's been added to the page. So long story short -- script includes are only included once. If they weren't, there'd be all kinds of problems.
So just thinking out loud here -- you'd have to first make sure that the script is included with a different url each time, which you could do for example by just adding some querystring to the end (foo.js?timestamp). And second, you'd have to tell the handler which serves this script something it would need to know -- whether this is the initial load of the page or not. If it is, document.write as usual. If it isn't, the handler will need to assign the appropriate HTML into some page level variable, which could be given to it with the querystring; foo.js?context=newhtml. Then once the update panel is finished updating (the endRequest event fires), you can get the new HTML from the newhtml variable (window.newhtml) and set it on some placeholder element's innerHTML.
Even that could have problems, it depends on exactly what that HTML is and what is in the update panel. But I hope that gives you a sense of what you need to do, or at least why it's not working.
Why this crazy redirection of the HTML with javascript?
nice! that's a very interesting answer.
As for the reason...
Well, my boss saw this trick used by Expedia to hide their footer from Google, and wanted to implement it on our website for all controls that are duplicated on each page and don't add value to the page's SEO.
Google's duplicate-content rules sometimes make you do weird things ;)
Whether or not I use your idea in production, your answer clarified many doubts. Thanks a lot.
PS. I can't see this trick used on Expedia anymore. Well, let's see, maybe I can persuade him to use iframes instead...
Marco -- I see. Neat trick... search engines don't always take kindly to being fooled, though. If Expedia doesn't do it anymore, it could very well be because they were asked to stop or get delisted. I'm no search engine guru though, so don't take my word for it.
Hi Dave,
I have a couple questions...
1. Do you know where I can learn more about what makes a script compatible with the UpdatePanel? From my projects, I have noticed none of the Google API scripts reload after an async postback, even after registering them with the ScriptManager. I have had to encapsulate the markup and script tags inside iframes to get it to work for now.
2. Do you know what causes this error to be thrown when the postback is caused by a control that triggers an UpdatePanel refresh -> "The state information is invalid for this page and might be corrupted"? In my case I allow controls to be dragged and re-arranged in the DOM tree using JavaScript. But the control tree on the server is never altered, since I do all the appropriate arranging in the Render methods only. But it looks like on an async postback it fails in the LoadPostData method and doesn't get as far as the control event that triggered the postback.
I should also mention that I have already added the call to prevent client caching of the response.
if (Page.Request.Browser.Browser != "IE") Page.Response.Cache.SetNoStore();
khan210182 -- UpdatePanel will only load a script once, for reasons cited a couple of comments up. All controls register all their script resources all the time. It would be terrible if each of their script libraries had to be reloaded each time.
It isn't correct design IMO to have a script that requires reloading whenever you have to 'redo' something. Imagine if assemblies worked that way... instead of just instantiating a type in that assembly when you needed it, you had to reload the entire assembly and then it 'automatically' did what you wanted. Terrible.
Script resources are only loaded once. Any scenario where you need to load one twice can be redesigned to have a library that never changes but can be called with different data. It's the DATA that changes on postbacks, not the logic that deals with the data.
That said, you can't control Google's scripts, obviously... so instead just make UpdatePanel 'think' it's a new script each time by appending a timestamp onto the end of the querystring (googleapi.js?t=<timestamp>).
As for #2 -- it depends on what you are doing. It's OK to move elements around as long as you aren't changing their IDs or names. Also, if you are raising postbacks with __doPostBack, it's important to utilize event validation, or turn it off. Search on event validation for more info on what that is.
I suggest you simply turn off partial rendering to test the problem. Usually an error like that would fail on a regular postback too, and you're just making it harder to see with partial rendering enabled. That is always a good test, by the way -- if your page doesn't work with regular postbacks, partial rendering probably won't either.
Hi Infinite Loop,
1. I created the control as you explained.
2. Added the control to the web page.
3. It gives me an error while adding it to the form: Unhandled exception -- Object reference not set to an instance of an object.
4. If I add script, then it shows:
Does not have a public property named 'script'...
Please help me out, I am stuck...
Make sure you have a script manager on the page. And make sure you typed the control code exactly as I showed it, it sounds like you did it wrong.
I have a page with more than one update panel, gridviews, textboxes, etc. The buttons on this page won't cause a postback. I have tried declaring new buttons and stripping the page down to only one section, but NOTHING. The page won't even refresh. It's like clicking a button in a Windows app when the button doesn't have any event linked to it.
I'm wondering if the postback function is not being registered with the ScriptManager? As I understand it, there is a sort of fixed postback function in JavaScript that the button executes automatically?
Funny thing is... I have other pages with the same setup, but with fewer controls, and they seem to work fine. Also, if I have a dropdown with its AutoPostBack property set to TRUE, it does a postback.
Hope someone here can help.
NeCroFire -- odd, but probably something simple. Are you sure there's no JavaScript error occurring when you click the buttons? Can you post a quick snippet of what your markup looks like?
CodeProject: AJAX UpdatePanel With Inline Client Scripts Parser. Free source code and programming help
I am building some web parts that include some AJAX functionality and found out that any inline scripts
Hi there,
How can I change the script dynamically? I mean, from the codebehind.
Emerson -- the benefit of the control is that it can be declared. If you're doing it dynamically anyway, why not just use the ScriptManager APIs to register it?
Thank you very much for this, it helped immensely. Very well explained in regards to the relationship between scripts and the DOM.
Great work,
But I have a problem, as follows: this control calls the script at startup, but I need to call the script where it is actually rendered in the page. I need this because the script I am trying to use has "document.write" statements, so when I call the script at startup it screws up the page.
Tankut
Tankut -- once the page is loaded, you can't use document.write. It isn't a matter of "where" it is called -- script has no concept of location once the HTML has been processed. You just can't do it -- that's how the browser works. So if you want AJAX, you will have to convert the script to support it, such as by using innerHTML to insert content after the first time instead.
Hi,
I have a usercontrol, and inside that usercontrol I have some AJAX controls. I am trying to load the usercontrol with __doPostBack() so that I can get delayed-load logic. I put the grid-populate code in the __doPostBack(controlname) click event.
What happens is I get the "contains multiple calls to Sys.Application.notifyScriptLoaded(). Only one is allowed" error, but if I load the usercontrol on page load and try the same logic, it works.
Can you help me?
You are a LEGEND, I was racking my brain about this all day!!!
Thanks for your post,
but I found a problem using this with Google Chrome.
The issue is: if we use a "for" statement in JavaScript inside the InlineScript control, e.g.:
var count=2;
for(var i=0;i<count;i++)
alert(i);
an error occurs in Chrome's console:
Uncaught SyntaxError: Unexpected end of input
I found the problem is caused by the "count" variable.
I don't know if it is a Chrome problem or an ASP.NET AJAX problem.
Could you check it?
Thanks in advance! Please let me know if you have any idea about it. My mail is seiecnu(at)gmail.com
Tried the code and indeed I get the alert firing...
BUT my situation is a bit awkward.
I basically have a bunch of headlines in the left column that, when clicked, load into the right-column update panel -- works fine.
The article data is contained in a SQL database and displayed in a details view on the page.
Recently I've been tasked with displaying a countdown "weeks to go" article. I've got an external .js file that does what I need, but when I try to overwrite the div using getElementById it complains: "is null or not an object". It's because it's the 2nd article and the div doesn't seem to exist in the DOM yet.
Does that make sense?
Waseem -- I'm afraid I'll need more details. Can you break it down into some code samples, even if it is just pseudocode?
Well, as is usual, hacking and fiddling can help.
I created the .cs file in App_Code, as mentioned. I added a namespace for good measure. Then I just added <%@ Register tagprefix="i88" namespace="myspace" %> to the top of the page.
Now my <i88:InlineScript> works, and my repeater and whatnot-JavaScript too :-D
Thanks for the script!
Can anyone please tell me how to call a .js file function on the button click of a content page which is in an AJAX update panel? Please, it's very urgent for me.
I registered the i88:InlineScript control in my .aspx page where my ScriptManager is. My update panel is in a web control (.ascx). My script is generated dynamically in XSL. When I run it I get a pop-up alert saying "'i88' is an undeclared namespace"... not really sure why it thinks it's a namespace? Anyone have any suggestions?
HBBob -- sounds like you need a @Register directive. ASP.NET is complaining it doesn't know what the 'i88' tag prefix is. If you already have that, perhaps you put in the wrong namespace. If you didn't, perhaps it isn't compiling -- is it in the project, or if it's a website-style app, is it in App_Code?
You rock -- best brain I have ever seen :)
Don't use this tip on the public web, but in an IE-only intranet environment, simply adding the defer="defer" attribute to your inline script block will cause IE to run the scripts that you embedded in your UpdatePanel content.
Remember, this works in IE only. In a real web environment you should use the server-side solution described on this page.
Note: Even with the defer attribute, I've noticed that sometimes the script does not run when it is the first element inside the UpdatePanel. This would suggest the defer trick is a bit of a hack! | http://weblogs.asp.net/infinitiesloop/archive/2007/09/17/inline-script-inside-an-asp-net-ajax-updatepanel.aspx | crawl-002 | refinedweb | 7,031 | 64.81 |
For testing purposes it is convenient to use a self-signed certificate. Follow these instructions. You will be prompted for a password a few times:
openssl genrsa -des3 -out server.orig.key 2048
openssl rsa -in server.orig.key -out server.key
openssl req -new -key server.key -out server.csr
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Here is client.py, slightly modified from the Python 2.7.3 docs:
import socket, ssl, pprint

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Require a certificate from the server. We used a self-signed certificate
# so here ca_certs must be the server certificate itself.
ssl_sock = ssl.wrap_socket(s,
                           ca_certs="server.crt",
                           cert_reqs=ssl.CERT_REQUIRED)

ssl_sock.connect(('localhost', 10023))

print repr(ssl_sock.getpeername())
print ssl_sock.cipher()
print pprint.pformat(ssl_sock.getpeercert())

ssl_sock.write("boo!")

if False: # from the Python 2.7.3 docs
    # Set a simple HTTP request -- use httplib in actual code.
    ssl_sock.write("""GET / HTTP/1.0\r
Host: """)

    # Read a chunk of data. Will not necessarily
    # read all the data returned by the server.
    data = ssl_sock.read()

# note that closing the SSLSocket will also close the underlying socket
ssl_sock.close()
And here is server.py:
import socket, ssl

bindsocket = socket.socket()
bindsocket.bind(('', 10023))
bindsocket.listen(5)

def do_something(connstream, data):
    print "do_something:", data
    return False

def deal_with_client(connstream):
    data = connstream.read()
    while data:
        if not do_something(connstream, data):
            break
        data = connstream.read()

while True:
    newsocket, fromaddr = bindsocket.accept()
    connstream = ssl.wrap_socket(newsocket,
                                 server_side=True,
                                 certfile="server.crt",
                                 keyfile="server.key")
    try:
        deal_with_client(connstream)
    finally:
        connstream.shutdown(socket.SHUT_RDWR)
        connstream.close()
Note: if you try to use the standard system ca certificates, e.g. on Debian:
ssl_sock = ssl.wrap_socket(s,
                           ca_certs="/etc/ssl/certs/ca-certificates.crt",
                           cert_reqs=ssl.CERT_REQUIRED)
then server.py explodes with:
Traceback (most recent call last): File "server.py", line 24, in ssl_version=ssl.PROTOCOL_TLSv1)08F10B:SSL routines:SSL3_GET_RECORD:wrong version number
If you specify the SSL version, e.g.
connstream = ssl.wrap_socket(newsocket,
                             server_side=True,
                             certfile="server.crt",
                             keyfile="server.key",
                             ssl_version=ssl.PROTOCOL_TLSv1)
then you can run into other problems, e.g.
Traceback (most recent call last): File "server.py", line 27, in ssl_version=ssl.PROTOCOL_TLSv() ssl.SSLError: [Errno 1] _ssl.c:490: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
Archived Comments
Date: 2014-08-13 19:11:58.59444 UTC
Author: David
Thanks very much. This was extremely helpful.
I would like to offer one odd caveat here. When I create my certificate and key files on Linux, I am unable to connect to the server from a client running on OS X.
Details:
I tested on 3 computers in July and August, 2014:
Windows 7, Python 2.7.6. OpenSSL 1.0.1i
OS X 10.7.5, Python 2.7.8, OpenSSL 0.9.8y
Ubuntu 14.04, Python 2.7.6, OpenSSL 1.0.1f
I created a set of certificate and key files on each of the computers. I then ran the client on Windows and OS X and server on all three of the computers using each of the 3 sets of certificate files.
If the certificate files were created on Linux, I could not connect the OS X client to the server, regardless of the server platform. If the certificate files were created on Windows or OS X, all combinations of client and server worked. The Windows client worked against all 3 servers with all three certificate files. The OS X client worked on all three servers if the OS X or Windows certificates were used. But the OS X client failed against all three servers when those servers used the Linux certificate files.
I can offer no explanation of why.
Date: 2015-09-14 07:22:50.526278 UTC
Author: Limey
In order to get this to work with python V3 I had to change:
ssl_sock.write(“boo!”)
to:
ssl_sock.write(“boo!”.encode())
No idea if this is the best solution, but it made it work. =)
Date: 2015-11-06 06:06:35.515219 UTC
Author: Ketan Kothari
Thank’s for describing with example. Good to use and easy to understand.
One thought on “Python SSL socket echo test with self-signed certificate”
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
I think instead of "server.key" in the above command we should have "server.orig.key". I guess the certificate should be signed with the private key, if I am not wrong.
The Data Science Lab
Dr. James McCaffrey of Microsoft Research tackles how to define a network in the second of a series of four articles that present a complete end-to-end production-quality example of binary classification using a PyTorch neural network (see the first article, about preparing data, here). You can find the article that explains how to create Dataset objects and use them with DataLoader objects here.
The Overall Program Structure
The overall structure of the demo program uses a single main() function. It is possible to define other helper functions such as train_net(), evaluate_model(), and save_model(), but in my opinion this modularization approach unexpectedly makes the program more difficult to understand rather than easier to understand.
Defining a Neural Network for Binary Classification
The first step when designing a PyTorch neural network class is to determine its architecture. The number of input nodes is determined by the number of predictor values, four in the case of the Banknote Authentication data. Although there are several design alternatives for the output layer, by far the most common is to use a single output node, where the value of the node is coerced to between 0.0 and 1.0. Then a computed output value that is less than 0.5 corresponds to class 0 (authentic banknote for the demo data) and a computed output value that is greater than 0.5 corresponds to class 1 (forgery). This design assumes that the class-to-predict is encoded as 0 or 1 in the training data, rather than -1 or +1 as is used by some other machine learning binary classification techniques such as averaged perceptron.
The demo network uses two hidden layers, each with eight nodes, resulting in a 4-(8-8)-1 network. The number of hidden layers and the number of nodes in each layer are hyperparameters. Their values must be determined by trial and error guided by experience. The term "AutoML" is sometimes used for any system that programmatically, to some extent, tries to determine good hyperparameter values.
More hidden layers and more hidden nodes is not always better. The Universal Approximation Theorem (sometimes called the Cybenko Theorem) says, loosely, that for any neural architecture with multiple hidden layers, there is an equivalent architecture that has just one hidden layer. For example, a neural network that has two hidden layers with 5 nodes each, is roughly equivalent to a network that has one hidden layer with 25 nodes.
The definition of class Net is shown in Listing 2. In general, most of my colleagues and I use the term "network" or "net" to describe a neural network before it's been trained, and the term "model" to describe a neural network after it's been trained.
Listing 2: Class Net Definition
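(The body of Listing 2 was lost in extraction. The class below is assembled from the code fragments discussed in the rest of this article and from the stated initialization scheme -- xavier_uniform_() on all weights, zeros on all biases -- so treat it as a reconstruction rather than the verbatim listing.)

class Net(T.nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.hid1 = T.nn.Linear(4, 8)  # 4-(8-8)-1
    self.hid2 = T.nn.Linear(8, 8)
    self.oupt = T.nn.Linear(8, 1)

    T.nn.init.xavier_uniform_(self.hid1.weight)
    T.nn.init.zeros_(self.hid1.bias)
    T.nn.init.xavier_uniform_(self.hid2.weight)
    T.nn.init.zeros_(self.hid2.bias)
    T.nn.init.xavier_uniform_(self.oupt.weight)
    T.nn.init.zeros_(self.oupt.bias)

  def forward(self, x):
    z = T.tanh(self.hid1(x))
    z = T.tanh(self.hid2(z))
    z = T.sigmoid(self.oupt(z))
    return z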
The Net class inherits from torch.nn.Module which provides much of the complex behind-the-scenes functionality. The most common structure for a binary classification network is to define the network layers and their associated weights and biases in the __init__() method, and the input-output computations in the forward() method.
The __init__() Method
The __init__() method begins by defining the demo network's three layers of nodes:
def __init__(self):
super(Net, self).__init__()
self.hid1 = T.nn.Linear(4, 8) # 4-(8-8)-1
self.hid2 = T.nn.Linear(8, 8)
self.oupt = T.nn.Linear(8, 1)
The first statement invokes the __init__() constructor method of the Module class from which the Net class is derived. The next three statements define the two hidden layers and the single output layer. Notice that you don't explicitly define an input layer because no processing takes place on the input values.
The Linear() class defines a fully connected network layer. You can loosely think of each of the three layers as three standalone functions (they're actually class objects). Therefore the order in which you define the layers doesn't matter. In other words, defining the three layers in this order:
self.hid2 = T.nn.Linear(8, 8) # hidden 2
self.oupt = T.nn.Linear(8, 1) # output
self.hid1 = T.nn.Linear(4, 8) # hidden 1
has no effect on how the network computes its output. However, it makes sense to define the networks layers in the order in which they're used when computing an output value.
The demo program initializes the network's weights and biases like so:
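(This code block was lost in extraction; based on the description below -- xavier_uniform_() on all weights, zeros on all biases -- it was presumably:)

T.nn.init.xavier_uniform_(self.hid1.weight)
T.nn.init.zeros_(self.hid1.bias)
T.nn.init.xavier_uniform_(self.hid2.weight)
T.nn.init.zeros_(self.hid2.bias)
T.nn.init.xavier_uniform_(self.oupt.weight)
T.nn.init.zeros_(self.oupt.bias)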
If a neural network with one hidden layer has ni input nodes, nh hidden nodes, and no output nodes, there are (ni * nh) weights connecting the input nodes to the hidden nodes, and there are (nh * no) weights connecting the hidden nodes to the output nodes. Each hidden node and each output node has a special weight called a bias, so there'd be (nh + no) biases. For example, a 4-5-3 neural network has (4 * 5) + (5 * 3) = 35 weights and (5 + 3) = 8 biases. Therefore, the demo network has (4 * 8) + (8 * 8) + (8 * 1) = 104 weights and (8 + 8 + 1) = 17 biases.
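A quick way to sanity-check that arithmetic (a sketch, assuming net is an instantiated Net object as shown later in the article):

n = sum(p.numel() for p in net.parameters())
print(n)  # 121, i.e. 104 weights + 17 biases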
Each layer has a set of weights which connect it to the previous layer. In other words, self.hid1.weight is a matrix of weights from the input nodes to the nodes in the hid1 layer, self.hid2.weight is a matrix of weights from the hid1 nodes to the hid2 nodes, and self.oupt.weight is a matrix of weights from the hid2 nodes to the output nodes.
It's good practice to explicitly initialize the values of a network's weights and biases, so that your results are reproducible. The demo uses xavier_uniform_() initialization on all weights, and it initializes all biases to 0. The xavier() initialization technique is called glorot() in some neural libraries, notably TensorFlow and Keras. Notice the trailing underscore character in the initializers' names. This indicates the initialization method modifies its weight matrix argument in place by reference, rather than as a return value.
PyTorch 1.6 supports a total of 13 initialization functions, including uniform_(), normal_(), constant_(), and dirac_(). For most binary classification problems, the uniform_() and xavier_uniform_() functions work well.
The uniform_() function requires you to specify a range, for example, the statement:
T.nn.init.uniform_(self.hid1.weight, -0.05, +0.05)
would initialize the hid1 layer weights to random values between -0.05 and +0.05. Although the xavier_uniform_() function was designed for deep neural networks with many layers and many nodes, it usually works well with simple neural networks too, and it has the advantage of not requiring the two range parameters. This is because xavier_uniform_() computes the range values based on the number of nodes in the layer to which it is applied.
With a neural network defined as a class with no parameters as shown, you can instantiate a network object with a single statement:
net = Net().to(device)
Somewhat confusingly for PyTorch beginners, there is an entirely different approach you can use to define and instantiate a neural network. This approach uses the Sequential technique to both define and create a network at the same time. This code creates a neural network that's almost the same as the demo network:
net = T.nn.Sequential(
T.nn.Linear(4,8),
T.nn.Tanh(),
T.nn.Linear(8,8),
T.nn.Tanh(),
T.nn.Linear(8,1),
T.nn.Sigmoid()
).to(device)
Notice this approach doesn't use explicit weight and bias initialization so you'd be using whatever the current PyTorch version default initialization scheme is (default initialization has changed at least three times since the PyTorch 0.2 version). It is possible to explicitly apply weight and bias initialization to a Sequential network but the technique is a bit awkward.
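If you do want explicit initialization with the Sequential approach, one way (a sketch, relying on the positional indexing of the Sequential layers defined above) is:

T.nn.init.xavier_uniform_(net[0].weight)  # first Linear layer
T.nn.init.zeros_(net[0].bias)
T.nn.init.xavier_uniform_(net[2].weight)  # second Linear layer
T.nn.init.zeros_(net[2].bias)
T.nn.init.xavier_uniform_(net[4].weight)  # output Linear layer
T.nn.init.zeros_(net[4].bias)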
When using the Sequential approach, you don't have to define a forward() method because one is automatically created for you. In almost all situations I prefer using the class definition approach over the Sequential technique. The class definition approach is lower level than the Sequential technique which gives you a bit more flexibility. Additionally, understanding the class definition approach is essential if you want to create complex neural architectures such as LSTMs, CNNs, and Transformers.
The forward() Method
When using the class definition technique to define a neural network, you must define a forward() method that accepts input tensor(s) and computes output tensor(s). The demo program's forward() method is defined as:
def forward(self, x):
z = T.tanh(self.hid1(x))
z = T.tanh(self.hid2(z))
z = T.sigmoid(self.oupt(z))
return z
The x parameter is a batch of one or more tensors. The x input is fed to the hid1 layer and then tanh() activation is applied and the result is returned as a tensor z. The tanh() activation will coerce all hid1 layer node values to be between -1.0 and +1.0. Next, z is fed to the hid2 layer and tanh() is applied. Then the new z tensor is fed to the output layer and logistic sigmoid activation is applied. Logistic sigmoid activation coerces the single output node value to be between 0.0 and 1.0 so that the output value can be loosely interpreted as the probability that the result is class 1.
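To make the 0.5 threshold concrete, here is a hypothetical snippet (not from the article's listings) that maps a computed output to a class label, assuming net and a one-row input x as in the test program shown later:

with T.no_grad():
  p = net(x)          # e.g. tensor([[0.6321]])
if p.item() < 0.5:
  print("class 0: authentic")
else:
  print("class 1: forgery")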
For binary classifiers, the two most common hidden layer activation functions that I use are the tanh() and relu() functions. The relu() activation function ("rectified linear unit") was designed for use with deep neural networks with many hidden layers, but relu() usually works well with relatively shallow networks too.
A rather annoying characteristic of PyTorch is that there are often multiple variations of the same function. For example, there are at least three tanh() functions: torch.tanh(), torch.nn.Tanh(), and torch.nn.functional.tanh(). Multiple versions of functions exist mostly because PyTorch is an open source project and its code organization evolved somewhat organically over time. There is no good way to deal with the confusion of multiple versions of PyTorch functions. You just have to live with it.
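For example, all three of these compute the same activation on a float tensor x (note that the functional form has been deprecated in favor of torch.tanh() in more recent releases, if I recall correctly):

y1 = T.tanh(x)                # plain function form, used in this article
y2 = T.nn.Tanh()(x)           # module form, used in Sequential definitions
y3 = T.nn.functional.tanh(x)  # functional form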
Testing the Network
It's good practice to test a neural network before trying to train it. The short program in Listing 3 shows an example. The test program instantiates a 4-(8-8)-1 neural network as described in this article and then feeds it an input of (0.1, 0.2, 0.3, 0.4). See the screenshot in Figure 2.
Listing 3: Testing the Network
# test_net.py
import torch as T
device = T.device("cpu")
print("Begin test ")
T.manual_seed(1) # for initialization repro
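# (class Net defined here, exactly as in Listing 2)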
net = Net().to(device)
x = T.tensor([[0.1, 0.2, 0.3, 0.4]],
dtype=T.float32).to(device)
y = net(x)
print("input = ")
print(x)
print("output = ")
print(y)
print("End test ")
The three key statements in the test program are:
net = Net().to(device)
x = T.tensor([[0.1, 0.2, 0.3, 0.4]],
dtype=T.float32).to(device)
y = net(x)
The net object is instantiated as you might expect. Notice the input x is a 2-dimensional matrix (indicated by the double square brackets) rather than a 1-dimensional vector because the network is expecting a batch of items as input. You could verify this by setting up a different input like so:
x = T.tensor([[0.1, 0.2, 0.3, 0.4],
[0.5, 0.6, 0.7, 0.8]],
dtype=T.float32).to(device)
If you're an experienced programmer but new to PyTorch, the call to the neural network seems to make no sense at all. Where is the forward() method? Why does it look like the net object is being re-instantiated using the x tensor?
As it turns out, the net object inherits a special Python __call__() method from the torch.nn.Module class. Any object that has a __call__() method can invoke the method implicitly using simplified syntax of object(input). Additionally, if a PyTorch object which is derived from Module has a method named forward(), then the __call__() method calls the forward() method. To summarize, the statement y = net(x) invisibly calls the inherited __call__() method which in turn calls the program-defined forward() method. The implicit call mechanism may seem like a major hack but in fact there are good reasons for it.
You can verify the calling mechanism by running this code:
y = net(x)
y = net.forward(x) # same output
y = net.__call__(x) # same output
In non-exploration scenarios, you should not call a neural network using the __call__() or forward() methods directly, because the implied call mechanism does necessary behind-the-scenes logging and other actions.
If you look at the screenshot in Figure 2, you'll notice that the first result is displayed as:
output =
tensor([[0.6321]], grad_fn=<SigmoidBackward>)
The grad_fn is the "gradient function" associated with the tensor. A gradient is needed by PyTorch for use in training. In fact, the ability of PyTorch to automatically compute gradients is arguably one of the library's two most important features (along with the ability to compute on GPU hardware). In the demo test program, no training is going on, so PyTorch doesn't need to maintain a gradient on the output tensor. You can optionally instruct PyTorch that no gradient is needed like so:
with T.no_grad():
y = net(x)
To summarize, when calling a PyTorch neural network to compute output during training, you should never use the no_grad() statement, but when not training, using the no_grad() statement is optional but more principled.
Wrapping Up
Defining a PyTorch neural network for binary classification is not trivial but the demo code presented in this article can serve as a template for most scenarios. In situations where a neural network model tends to overfit, you can use a technique called dropout. Model overfitting is characterized by a situation where model accuracy on the training data is good, but model accuracy on the test data is poor.
You can add a dropout layer after any hidden layer. For example, to add two dropout layers to the demo network, you could modify the __init__() method like so:
def __init__(self):
super(Net, self).__init__()
self.hid1 = T.nn.Linear(4, 8) # 4-(8-8)-1
self.drop1 = T.nn.Dropout(0.50)
self.hid2 = T.nn.Linear(8, 8)
self.drop2 = T.nn.Dropout(0.25)
self.oupt = T.nn.Linear(8, 1)
The first dropout layer will ignore 0.50 (half) of randomly selected nodes in the hid1 layer on each call to forward() during training. The second dropout layer will ignore 0.25 of randomly selected nodes in the hid2 layer during training.
The forward() method would use the dropout layers like so:
def forward(self, x):
z = T.tanh(self.hid1(x))
z = self.drop1(z)
z = T.tanh(self.hid2(z))
z = self.drop2(z)
z = T.sigmoid(self.oupt(z))
return z
Using dropout introduces randomness into the training which tends to make the trained model more resilient to new, previously unseen inputs. Because dropout is intended to control model overfitting, in most situations you define a neural network without dropout, and then add dropout only if overfitting seems to be occurring.
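One related caveat the article doesn't spell out: dropout should be active only during training, so you must toggle the network's mode explicitly (this is standard PyTorch behavior, not something specific to the demo):

net.train()  # dropout layers randomly zero nodes on each forward pass
# ... run the training loop ...
net.eval()   # dropout layers are disabled for evaluation and prediction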
Hi, we were getting this error and I was trying out a solution from Microsoft, but the BeginInvoke method doesn't get generated by the compiler when I use a namespace. Given below is the sample code I was trying:
Imports System.Windows.Forms

Namespace org.test
    Public Class Form1
        Dim testing As New MethodInvoker(AddressOf test)

        Public Sub testing2()
            Me.BeginInvoke(testing)
        End Sub

        Public Sub test()
            MsgBox("...")
        End Sub
    End Class
End Namespace
Am I missing something, as Me.BeginInvoke is not recognized at all? How do I overcome this problem? I am a newbie, so probably I am missing something very basic.
Thanks, Kris
Without seeing the code that caused the error you're trying to get around, it's kind of hard to say what the problem is. Check out this thread:
and see if that will solve your problem.