Dear Mark and Listers,
> My student co-worker discovered the YaTiSeWoBe package that contains a
> widget and some gluing code to embed a Jython console into Java/Swing
> applications:
I am the author of YaTiSeWoBe, and I have to confess that this Console
is not the best piece of software I ever wrote : ( It's a Saturday
afternoon hack... It has not been designed to be a separate component,
but is directly part of the project. I'll be happy to contribute to the
birth of a new project for this draft work. I started writing it because
I could find no other out there.
Nevertheless, it does provide a few nice features like some syntax
formatting and command history, but deserves some refactoring. The
"interface" is inspired by the bash, mainly for shortcut key bindings.
There is a completion feature in this implementation that is not yet
documented, though (I believe it has nothing to do with the
"rlcompleter"-style completion, but I don't know that package):
Like the bash, the console has its own set of environment variables
stored in the map "ENV" (try ">>> print ENV"). One of the variables
("ENV['PATH']") defines a search path where user can save python scripts
to be executed from the console. Typing ">>> ./<tab>" will show you a
completion list of all "*.py" files that appear in any PATH directories.
Typing ">>> ./te<tab>" will show you a list of all files that match
"te*.py". That is already something : )
> Is someone on this list using these classes?
No one does, as far as I know.
> Despite not really working with Java 1.5.0-b64
What do you mean exactly with that? (you can answer me directly)
> we will likely use it as a starting point for our own software.
I can certainly give you some advice on what is wrong with this class.
I tried to have a computer science student working on this, but could
find none that was interested in something useful, apart from silly
desktop games...
HTH,
--javier
------------------------------------------------------------------
Javier Iglesias, M.Sc. tel:+4121 692 3587 fax:+4121 692 3585
PhD student, unix sysadmin
INFORGE-CP1, University of Lausanne, CH-1015 Lausanne, Switzerland
------------------------------------------------------------------
Hi Jython list, hi Javier,
since embedding the jython interpreter was discussed recently on this
list, I want to share some info and ask an additional question on this
topic.
First of all: we're working on a mid-scale Java/C++ project for
multimedia content analysis and annotation. Because a user interface
with graphical dialogs would be complicated, much work and we need some
form of user definable macros, we decided to embed a Jython
interpreter. Personally I've got some experience with
embedding/extending a CPython interpreter in a Qt application, so my
hope is to be able to get to a similar feature set soon.
My student co-worker discovered the YaTiSeWoBe package that contains a
widget and some gluing code to embed a Jython console into Java/Swing
applications:
show=28&language=fr&title=YaTiSeWoBe
Is someone on this list using these classes? Despite not really working
with Java 1.5.0-b64 and not being feature-complete, it already
provides useful keybindings and a simple command history, so we will
likely use it as a starting point for our own software.
What we would need most is some form of command completion. With
CPython I would use
from rlcompleter import readline
readline.parse_and_bind("tab: complete")
however the module rlcompleter is not available with Jython AFAIK.
Since libreadline is an external binary library, copying the
rlcompleter sources won't work, I guess. So, what could I do?
Any suggestions, hints on this?
Thanks in advance,
Mark
--
Dipl.-Ing. Mark Asbach Tel +49 (0)241-80-27678
Institute of Communications Engineering asbach@...
RWTH Aachen University, Germany
In this article, we’re going to see how we can work with logging in Akka .NET. There is a logging adapter which allows one to work with various different logging providers (e.g. NLog), and it is very easy to use from within actors.
The source code for this article is available at the Gigi Labs BitBucket repository.
Example Application
Before we get to the details of how to use logging in Akka .NET, let’s create a very simple application. The first thing we need to do is install the Akka package:
Install-Package Akka
To work with basic Akka components, we need the following
using:
using Akka.Actor;
We can now create a simple actor. This actor will just output whatever message (string) it receives:
public class LoggingActor : ReceiveActor
{
    public LoggingActor()
    {
        this.Receive<string>(s => Console.WriteLine(s));
    }
}
Finally, we create our
ActorSystem in
Main() and send a message to the actor:
static void Main(string[] args)
{
    Console.Title = "Akka .NET Logging";

    using (var actorSystem = ActorSystem.Create("MyActorSystem"))
    {
        Console.WriteLine("ActorSystem created!");

        var actor = actorSystem.ActorOf(Props.Create<LoggingActor>(), "loggingActor");
        actor.Tell("Hello!");

        Console.WriteLine("Press ENTER to exit...");
        Console.ReadLine();
    }
}
If we run this, we get the following output:
You can see that some of the output came from the
Console.WriteLine()s we have in
Main(); while the “Hello!” message came from the actor. But there is also a warning that is not coming from anywhere within our code.
StandardOutLogger
We can adjust our actor’s code to use a logger instead of writing directly to the console. The actor doesn’t care what kind of logger it is; this detail is entirely configurable. By default, it will use the StandardOutLogger which writes to the console. But we can use something else (e.g. NLog or Serilog) and the code would be completely unchanged.
public class LoggingActor : ReceiveActor
{
    private readonly ILoggingAdapter logger = Logging.GetLogger(Context);

    public LoggingActor()
    {
        this.Receive<string>(s => logger.Info(s));
    }
}
If we run this, the “Hello!” message still gets written to the console, but it is now part of a longer formatted log message:
LogLevel
Just about all logging frameworks have a concept of log level. Typically these consist of at least Debug, Info, Warning and Error (some even have a Fatal level). You tell the logging framework what is the minimum level you’re interested in. So if your minimum level is Warning, the logging framework omits Debug and Info messages.
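The ILoggingAdapter we obtained earlier exposes one method per level, so the choice of level is made at each call site:

logger.Debug("Verbose diagnostic detail");
logger.Info("Normal operational messages");
logger.Warning("Something looks wrong, but we can continue");
logger.Error("Something failed");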
In Akka .NET, you can configure the minimum log level you’re interested in as part of the HOCON configuration within your App.config as follows:
<configuration>
  <akka>
    <hocon>
      <![CDATA[
      akka {
        loglevel = DEBUG
      }
      ]]>
    </hocon>
  </akka>
</configuration>
If we set
loglevel to
INFO, we get exactly the same as before. But now that we’ve set it to
DEBUG, we actually get some additional logging from within Akka .NET:
Note: the screenshot actually contradicts what I was saying earlier about the StandardOutLogger. It actually seems to be replacing StandardOutLogger with this DefaultLogger. Presumably DefaultLogger uses StandardOutLogger underneath.
Automatic Logging
Akka .NET provides certain logging mechanisms out of the box. One of these is logging received messages (so we don’t even need to do that in code). You can turn on these mechanisms in the HOCON configuration as follows:
<akka>
  <hocon>
    <![CDATA[
    akka {
      loglevel = DEBUG
      actor {
        debug {
          receive = on
          autoreceive = on
          lifecycle = on
          event-stream = on
          unhandled = on
        }
      }
    }
    ]]>
  </hocon>
</akka>
Now, to actually log received messages, you also need to make your actor implement the empty
ILogReceive interface as per this StackOverflow question:
public class LoggingActor : ReceiveActor, ILogReceive
If you run the application now, you’ll see that a lot more messages are being logged, including another entry for the “Hello!” message:
Tip: ideally make sure the
loglevel HOCON setting and the node under
actor in HOCON (in this case
debug) are in sync. If you set
loglevel to
INFO now, all the automatic logging will stop writing.
Integrating NLog
The logging adapter is particularly useful because you can swap out logging frameworks without having to touch the code. We are currently using the StandardOutLogger (or DefaultLogger?), but with some simple configuration, we can use NLog or some other provider instead.
The Akka .NET Logging documentation page shows the currently supported logging frameworks. Typically, you install a NuGet package for the framework you want, and set up its own configuration. I’m going to show you how to do this with NLog, but it’s an arbitrary choice, and the steps are similar for other logging frameworks.
1. Install the relevant NuGet package:
Install-Package Akka.Logger.NLog
2. Add an NLog.config to the project and configure it as you like:
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <target name="file" xsi:type="File" fileName="test.log" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="file" />
  </rules>
</nlog>
3. Right click NLog.config, Properties, then set to Copy always or Copy if newer:
4. Update your HOCON to use the relevant logger:
<akka>
  <hocon>
    <![CDATA[
    akka {
      loglevel = DEBUG
      loggers = ["Akka.Logger.NLog.NLogLogger, Akka.Logger.NLog"]
      actor {
        debug {
          receive = on
          autoreceive = on
          lifecycle = on
          event-stream = on
          unhandled = on
        }
      }
    }
    ]]>
  </hocon>
</akka>
Let’s run it now:
You can see that the console output is now limited to what we’re writing in
Main(), while the rest has gone into a file called
test.log. Since our
minlevel is
Info, we’re not getting the automatic logging which is configured for the
debug log level.
8 thoughts on “Actor Logging with Akka .NET”
Great post. Concise and straight to the point!
How can you add both Standard Console out and also an additional logger?
See the loggers array in the HOCON? Just add additional entries there. I don’t remember the exact setting for Console, but you should be able to find it in the docs (assuming they are better now than they were back when I wrote this).
Thank you for the post, however the log file is not being created
Here’s my config files: | https://gigi.nullneuron.net/gigilabs/actor-logging-with-akka-net/ | CC-MAIN-2021-25 | refinedweb | 970 | 57.77 |
While at PyCon in Dallas this weekend, I got a chance to hear David Creemer talk about how he's using CherryPy (among lots of other tools) to deliver a site that's getting 250,000 hits a day before it's even been officially launched. He mentioned during the "lightning" (5 minute) talk that, of his entire toolkit, CherryPy and SQLObject were two of the tools that "mostly worked, except..." I spoke with him after the talk about his concerns, and the big CherryPy issue was dispatching: he prefers a Routes-style dispatch mechanism, which makes changes to the design easier.
He had previously posted his cherrypy+routes script, which I read but hadn't done anything about. I mentioned to him yesterday that it might be better, when overriding the dispatch mechanism, to do so in a custom Request class, rather than in a single exposed default method on the cherrypy tree. Here's a first crack at what that would look like; I haven't tested it but it's more to get the idea of custom Request classes out there than to be a working patch
import urllib
import cherrypy
from cherrypy import _cphttptools
import routes

mapper = routes.Mapper()
controllers = {}

def redirect(url):
    raise cherrypy.HTTPRedirect(url)

def mapConnect(name, route, controller, **kwargs):
    controllers[name] = controller
    mapper.connect(name, route, controller=name, **kwargs)

def mapFinalize():
    mapper.create_regs(controllers.keys())

def URL(name, query=None, doseq=None, **kwargs):
    uri = routes.url_for(name, **kwargs)
    if not uri:
        return "/UNKNOWN-%s" % name
    if query:
        uri += '?' + urllib.urlencode(query, doseq)
    return uri

class RoutesRequest(_cphttptools.Request):

    def main(self, path=None):
        """Obtain and set cherrypy.response.body from a page handler."""
        if path is None:
            path = self.object_path
        page_handler = self.mapPathToObject(path)
        virtual_path = path.split("/")
        # Decode any leftover %2F in the virtual_path atoms.
        virtual_path = [x.replace("%2F", "/") for x in virtual_path if x]
        kwargs = self.params.copy()
        kwargs.update(cherrypy.request.mapper_dict)
        try:
            body = page_handler(*virtual_path, **kwargs)
        except Exception, x:
            x.args = x.args + (page_handler,)
            raise
        cherrypy.response.body = body

    def mapPathToObject(self, objectpath):
        """For path, return the corresponding exposed callable
        (or raise NotFound).

        path should be a "relative" URL path, like "/app/a/b/c".
        Leading and trailing slashes are ignored.
        """
        # tell routes to use the cherrypy threadlocal object
        config = routes.request_config()
        if hasattr(config, 'using_request_local'):
            config.request_local = lambda: self
            config = routes.request_config()

        # hook up the routes variables for this request
        config.mapper = mapper
        config.host = self.headerMap['Host']
        config.protocol = self.scheme
        config.redirect = redirect
        config.mapper_dict = mapper.match(objectpath)

        if config.mapper_dict:
            c = config.mapper_dict.pop('controller', None)
            if c:
                controller = controllers[c]
                # we have a controller, now emulate cherrypy's
                # index/default/callable semantics:
                action = config.mapper_dict.pop('action', 'index')
                meth = getattr(controller, action, None)
                if not meth:
                    meth = getattr(controller, 'default', None)
                if not meth and callable(controller) and action == 'index':
                    meth = controller
                if meth and getattr(meth, 'exposed', False):
                    return meth
        raise cherrypy.NotFound(objectpath)

# 'authui' is a module with login/logout functions
mapConnect(name='home', route='', controller=home)
mapConnect(name='auth', route='auth/:action', controller=authui,
           requirements=dict(action='(login|logout)'))
mapFinalize()

cherrypy.server.request_class = RoutesRequest
cherrypy.server.start()
Look, Ma, no root!
This is a good improvement on David Creamer's original code. One change needs to be made to make it work past the first use. The line in the mapPathToObject method:
config.request_local = lambda: self
should be changed to:
config.request_local = lambda: cherrypy.request
Otherwise, the second request (works the first time) will try to access an object with no mapper_dict attribute and an AttributeError is raised.
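With that one-line change applied, the start of mapPathToObject reads:

config = routes.request_config()
if hasattr(config, 'using_request_local'):
    config.request_local = lambda: cherrypy.request
    config = routes.request_config()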
Still and all, this is excellent!
And I can't believe I misspelled David's last name in my last post - my apologies to David; that should be Creemer.
Linked by Thom Holwerda on Tue 12th Jul 2005 20:16 UTC
Thread beginning with comment 3233
RE[3]: Ubuntu
by cr8dle2grave on Tue 12th Jul 2005 22:59 in reply to "RE[2]: Ubuntu"
Because Ubuntu uses its own repositories (with packages existing in their own "namespace"), not the official Debian ones, which is the only reasonable way to release a "freeze" of the unstable branch that can then be maintained with security patches over the course of the 18 months Canonical has committed to.
RE[4]: Ubuntu
by Anonymous Penguin on Tue 12th Jul 2005 23:07 in reply to "RE[3]: Ubuntu"
What doesn't add up here, IMO, is that it is perfectly possible to take a branch of Debian (normally testing or unstable) and patch it for your own needs without forking. Even the heavily criticized Linspire is still 98% Debian compatible (I dist-upgraded it to Sid with very few problems). | http://www.osnews.com/thread?3233 | CC-MAIN-2014-52 | refinedweb | 197 | 53.14 |
Opened 6 years ago
Last modified 2 years ago
#17939 new defect
SageTex sageplot giving oversized plots with default Debian pdflatex.
Description
As above.
After using the third possible way of getting latex to talk to sagetex from here:
The example code inside the sagetex folder produces a very ugly set of plots, roughly twice the size of those shown here, when using: Sage Version 6.5, Release Date: 2015-02-17 (from github) with pdfTeX 3.14159265-2.6-1.40.15 (TeX Live 2015/dev/Debian).
If no one else can reproduce this, I will attach the produced pdf. This however also happened under regular latex in Debian.
Change History (6)
comment:1 Changed 6 years ago by
After some minor digging around it looks like line 97 of sagetexparse.py never evaluates to true, and so is never run. Since the name of the variable is "t" and I'm not familiar with the code base around those parts, I'm not too keen on trying to fix it myself; just trying to see where else in the code 't' appears made my eyes bleed.
comment:2 Changed 5 years ago by
- Cc tmonteil vdelecroix added
- Keywords debian added
comment:3 Changed 5 years ago by
Can you verify that this is only with Debian?
comment:4 Changed 4 years ago by
comment:5 Changed 3 years ago by
- Cc dimpase chapoton added
- Priority changed from major to minor
I seem to recall default image sizes getting a lot bigger a few years ago, but haven't seen any specific validation of this ticket. Certainly downgrading status as it is fairly easy (if tedious) to fix on an image-by-image basis.
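For reference, the image-by-image workaround is just an explicit size option on each call, e.g.:

\sageplot[width=.5\textwidth]{plot(sin(x), x, 0, 2*pi)}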
comment:6 Changed 2 years ago by
t is the list of matched tokens in a
pyparsing result. So in principle,
t.opts is the stuff in the first matching square brackets (if present) from a
\sageplot command.
However, if you are right then maybe we need to change
sageplotparser = (r'\sageplot'
                  + Optional(squarebrackets)('opts')
                  + Optional(squarebrackets)('format')
                  + curlybrackets('code'))
to include the default in this part, rather than in the plot method
def plot(self, s, l, t):
    self.plotn += 1
    if len(t.opts) == 0:
        opts = r'[width=.75\textwidth]'
    else:
        opts = t.opts[0]
    return (r'\includegraphics%s{sage-plots-for-%s.tex/plot-%s}'
            % (opts, self.fn, self.plotn - 1))
I'd want to test whether this is really being skipped but at any rate somewhere in here is where the fix would have to lie. Once we figure out whether this is an actual bug, we should report upstream.
By the way, the scripts dtx file is eminently readable!
You are given a rod of length N units along with an array that contains the prices of all pieces of sizes 1 to N. You need to find out the maximum profit that can be obtained by cutting the rod into pieces and then selling it.
Sample Input: Prices – [ 1, 5, 8, 9, 10, 17, 17, 20 ], N = 8
Expected Output: 22
Explanation: The rod can be cut into two pieces of size 2 and 6 with total profit as 22.
Algorithm:
- We will use dynamic programming to solve this problem. Create a DP array of size N+1 intialized to 0.
- Iterate for all sizes starting from 1 to N.
- For each size, consider all sizes less than or equal to the current size.
- If we take size j ( j<=i, where i is the length of the rod ) and cut the rod into two pieces of size j and ( i – j ), the result will be maximum of DP[ i ] and ( prices[ j ] + DP[ i – j ] ). We do this for all j from 1 to i and for all i from 1 to N. We keep storing the result for each subproblem in the DP array and return the final result stored in DP[ N ].
#include <bits/stdc++.h>
using namespace std;

// Returns the maximum profit obtainable from a rod of length n.
int helper(vector<int> prices, int n){
    vector<int> dp(n+1, 0);
    for(int i=1; i<=n; i++){
        for(int j=1; j<=i; j++){
            dp[i] = max(dp[i], prices[j-1] + dp[i-j]);
        }
    }
    return dp[n];
}

int main(){
    vector<int> prices = {1, 5, 8, 9, 10, 17, 17, 20};
    int n = 8; // rod length N from the sample input
    cout << helper(prices, n);
    return 0;
}
Also take a look at another popular interview question Maximum Sum Subarray. | https://nerdycoder.in/2020/08/06/rod-cutting-problem-dp-02/ | CC-MAIN-2021-04 | refinedweb | 292 | 72.39 |
Text processing plays a large role in many useful software programs. It can typically involve operations such as pulling apart strings, searching, substituting, and parsing. Applications of text processing include web scraping, natural language processing, text generation and much more.
In this post, we will survey several text processing operations. Specifically, we will discuss how to split strings on multiple delimiters and match text to specific patterns. We will also discuss more complicated operations which require regular expressions such as how to search and replace text using the regular expressions module in python. Finally, we will discuss how to use strip methods, available to string objects in python, to remove unwanted characters and text.
To begin, let’s show how to split and match text using basic string object methods.
Splitting and Matching Strings and Text
Suppose we have a string with several names of programming languages:

my_string = 'python java sql c++ ruby'

The string object's 'split()' method pulls the names apart on whitespace:

print(my_string.split())

This works for simple whitespace-separated text, but it does not handle strings with multiple delimiters nor does it account for possible whitespace around delimiters. For example, suppose our string has several delimiters:
my_string2 = 'python java, sql,c++; ruby'
This may be the form in which text data is received upon scraping a website. Let’s try using the ‘split()’ method on our new string:
print(my_string2.split())
We see that the ‘sql’ and ‘c++’ parts of our string were not split properly. To amend this issue, we can use the ‘re.split()’ method to split our string on multiple delimiters. Let’s import the regular expressions module, ‘re’, and apply the ‘re.split()’ method to our string:
import re
print(re.split(r'[;,\s]\s*', my_string2))
This returns ['python', 'java', 'sql', 'c++', 'ruby']. This is incredibly useful because we can specify multiple patterns for the delimiters. In our example, our string had commas, a semicolon and whitespace as separators. Whenever a pattern is found, the entire match becomes the delimiter between whatever fields lie on either side of the matched pattern.
Let’s look at another example using a different delimiter, ‘|’:
my_string3 = 'python| java, sql|c++; ruby'
Let’s apply ‘re.split()’ to our new string:
print(re.split(r'[;|,\s]\s*', my_string3))
We see that we get the desired result. Now, let’s discuss how to match patterns in the beginnings and ends of text.
If we need to programmatically check the start or end of a string for specific text patterns, we can use the ‘str.startswith()’ and ‘str.endswith()’ methods. For example, if we have a string specifying a url:
my_url = 'http://www.example.com'
we can use the ‘str.startswith()’ method to check if our string starts with a specified pattern:
print(my_url.startswith('http:'))
print(my_url.startswith('www.'))
Or we can check if it ends with a specific pattern:
print(my_url.endswith('com'))
print(my_url.endswith('org'))
A more practical example is if we need to programmatically check file extensions in a directory. Suppose we have a directory with files of different extensions:
my_directory = ['python_program.py', 'cpp_program.cpp', 'linear_regression.py', 'text.txt', 'data.csv']
We can check against multiple file extensions using the ‘str.endswith()’ method. We simply need to pass a tuple of the extension values. Let’s use list comprehension and the ‘str.endswith()’ method to filter our list so that it only includes ‘.cpp’ and ‘.py’ files :
my_scripts = [script for script in my_directory if script.endswith(('.py', '.cpp'))] print(my_scripts)
Next, let’s discuss how to perform more sophisticated operations, such as searching and replacing, with the regular expressions module.
Searching and Replacing Text with ‘re’
Next, let’s consider the following string literal:
text1 = "python is amazing. I love python, it is the best language. python is the most readable language."
Suppose, for some wild reason, we want to replace the word 'python' with 'C++'. We can use the 'str.replace()' method:
text1 = text1.replace('python', 'C++')
Let’s print the result:
print(text1)
For more complicated patterns we can use the ‘re.sub()’ method in the ‘re’ module. Let’s import the regular expressions module , ‘re’:
import re
Suppose we wanted to change the date formats in the following string from "12/01/2017" to "2017-12-01":
text2 = "The stable release of python 3.8 was on 02/24/2020. The stable release of C++17 was on 12/01/2017."
We can use the ‘re.sub()’ method to reformat these dates:
text2 = re.sub(r'(\d+)/(\d+)/(\d+)', r'\3-\1-\2', text2)
print(text2)
The first argument, “r’(\d+)/(\d+)/(\d+)’”, in the substitution method is the pattern to match. The ‘\d+’ expression corresponds to a digit character in the range 0–9. The second argument, “r’\3-\1-\2’”, is the replacement pattern. The digits in the replacement pattern refer to capture group numbers in the pattern. In this case, group 1 is the month, group 2 is the day, and group 3 is the year. We can see this directly using the ‘group()’ , ‘match()’, and ‘compile()’ methods:
date_pattern = re.compile(r'(\d+)/(\d+)/(\d+)')
match = date_pattern.match("12/01/2017")
print(match.group(1))
print(match.group(2))
print(match.group(3))
Compiling the match pattern also leads to improved performance on repeated substitutions. Let's compile it:
date_pattern = re.compile(r'(\d+)/(\d+)/(\d+)')
And then call the substitution method using the replacement pattern:
print(date_pattern.sub(r'\3-\1-\2', text2))
We can also specify a substitution callback function for more complicated substitutions. For example, if we want to reformat “12/01/2017” as “01 Dec 2017”:
from calendar import month_abbr

def format_date(date_input):
    month_name = month_abbr[int(date_input.group(1))]
    return '{} {} {}'.format(date_input.group(2), month_name, date_input.group(3))

print(date_pattern.sub(format_date, text2))
Another interesting problem to consider is how to search for and replace text in a case-insensitive manner. If we consider the earlier example:
text3 = "Python is amazing. I love python, it is the best language. Python is the most readable language."
Now, the first words in the first and second sentences in this text are capitalized. In this case, the substitution method would substitute text in a case sensitive manner:
print(text3.replace('python', 'C++'))
We see that only the lowercase ‘python’ has been replaced. We can use ‘re.sub()’ to replace text in a case-insensitive manner by passing ‘flags = re.IGNORECASE’ to the sub method:
print(re.sub('python', 'C++', text3, flags =re.IGNORECASE))
Now let’s discuss how to strip unwanted characters from strings and text.
Stripping Strings and Text
Suppose we wanted to remove unwanted characters, such as whitespace or even corrupted text, from the beginning, end or start of a string. Let’s define an example string with unwanted whitespace. We will take a quote from the author of the python programming language, Guido van Rossum:
string1 = ' Python is an experiment in how much freedom programmers need. \n'
We can use the ‘strip()’ method to remove the unwanted whitespace and new line, ‘\n’. Let’s print before and after applying the ‘strip()’ method:
print(string1) print(string1.strip())
If we simply want to strip unwanted characters at the beginning of the string, we can use ‘lstrip()’. Let’s take a look at another string from Guido:
string2 = " Too much freedom and nobody can read another's code; too little and expressiveness is endangered. \n\n\n"
Let’s use ‘lstrip()’ to remove unwanted whitespace on the left:
print(string2) print(string2.lstrip())
We can also remove the new lines on the right using ‘rstrip()’:
print(string2) print(string2.lstrip()) print(string2.rstrip())
We see in the last string the three new lines have been removed. We can also use these methods to strip unwanted characters. Consider the following string containing the unwanted ‘#’ and ‘&’ characters:
string3 = "#####Too much freedom and nobody can read another's code; too little and expressiveness is endangered.&&&&"
If we want to remove the ‘#’ characters on the left of the string we can use ‘lstrip()’:
print(string3) print(string3.lstrip('#'))
We can also remove the ‘&’ character using ‘rstrip()’:
print(string3) print(string3.lstrip('#')) print(string3.rstrip('&'))
We can strip both characters using the ‘strip()’ method:
print(string3) print(string3.lstrip('#')) print(string3.rstrip('&')) print(string3.strip('#&'))
It is worth noting that the strip method does not apply to any text in the middle of the string. Consider the following string:
string4 = "&&&&&&&Too much freedom and nobody can read another's code; &&&&&&& too little and expressiveness is endangered.&&&&&&&"
If we apply the ‘strip()’ method passing in the ‘&’ as our argument, it will only remove them on the left and right:
print(string4) print(string4.strip('&'))
We see that the unwanted ‘&’ remains in the middle of the string. If we want to remove unwanted characters found in the middle of text, we can use the ‘replace()’ method:
print(string4) print(string4.replace('&', ''))
I’ll stop here but I encourage you to play around with the code yourself.
Conclusions
To summarize, in this post we surveyed a wide variety of methods available for text processing in python. We went over how to split strings using the string object ‘split()’ method. We showed how to match text using ‘str.startswith()’ and ‘str.endswith()’ methods to check the start and end of strings for specific text patterns. We used the substitution method in ‘re’ to reformat dates in string literals and replace text in a case insensitive manner. We discussed how to split strings with the ‘re.split()’ method along multiple delimiters. We also showed how to use ‘lstrip()’ and ‘rstrip()’ to remove unwanted characters on the left and right of strings respectively. Finally, we demonstrated how to remove multiple unwanted characters found on the left or right using ‘strip()’. I hope you found this post useful/interesting. The code from this post is available on GitHub. Thank you for reading!
Guest Post: Sadrach Pierre
It's several days after I said 'in the next day', but I've
finally uploaded a new TemplateServer version.
Changes:
- several small bug fixes
- reimplemented the #block directive to avoid maximum
recursion depth errors with large blocks.
- created many new test cases in the regression testing
suite
- added an example site to the examples/ directory
- started the User's Guide
The User's Guide is just a skeleton at the moment.
I'm in the process of filling it in. I'd appreciate any
examples.
Version 0.8.1 will contain a less skeletal User's Guide.
Cheers,
Tavis
On Monday 28 May 2001 12:15, Manfred Nowak wrote:
> The only problem is to start the AppServer for WebKit.
> Can i do that from within WebKit.cgi or must my ISP
> start this?
I haven't worked with WebKit.cgi but I'm sure you could
modify it to restart the AppServer if it's not running.
Another option is to use a cron job to check up on the
AppServer and restart it if necessary.
Tavis
Mike Orr wrote:
>.
What are these modifications?
Sorry, I'm a WebNewbie :-(
Am I the only one who wants to use Webware over an ISP?
Manfred
On 28 May 01, at 12:35, Ian Bicking wrote:
>..
A typical site constructed with CGI -- where there are a few portions
that are dynamic, and more portions that are static -- will probably
use less resources over time. This is what most shared hosts are
targetting. Also, a run-away process takes a lot of resources, and
CGI is better protected from this.
It would be spiffy if the adapters -- which are more isolated and
hopefully more robust -- could act as the superego for AppServer,
restarting it if it takes too long, doesn't respond, or isn't there at
all. I suppose that could be as simple as calling an rc script as
[os.]system("webware restart") whenever the connection to the
AppServer is denied or times out.
Ian.
At 08:24 AM 5/28/2001 +0200, Jörn Schrader wrote:
>Is the above code thread-safe?
>
>Joern.
I happen to know that Geoff does mix both techniques in the same manner
as you have described. So do I.
I think his earlier message was just emphasizing that the initialization
code should go in the module.
Feel free to load up SitePage with as many conveniences as appropriate for
your site.
-Chuck
Hi all,
i try to work with WebKit on my own Homepage.
How do i start AppServer on the Server of my ISP
or can i run WebKit only on Intranets ??
Best regards
Manfred
>This isn't thread-safe -- 2 servlets could create the store at the same
time.
>
>You're better off putting the store initialization into a module that gets
>imported -- then Python automatically ensures that only one thread imports
>it at once. (Either that or you should explicitly use a threading.Lock
>object to make it thread-safe.)
>
>If you want to "pre-load" the store when the app server starts up to reduce
>response time, then you can put the import into the __init__.py in your
>context directory. __init__.py automatically gets imported when the app
>server starts.
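A sketch of the explicit-Lock variant Geoff mentions, using the MiddleKit names that appear later in this thread (the getStore helper is illustrative, not part of Webware):

import threading

from MiddleKit.Run.MySQLObjectStore import MySQLObjectStore

_lock = threading.Lock()
_store = None

def getStore():
    # Only one thread at a time may run the initialization block.
    global _store
    _lock.acquire()
    try:
        if _store is None:
            store = MySQLObjectStore()
            store.readModelFileNamed('MyMiddleKitModel')
            _store = store
    finally:
        _lock.release()
    return _store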
Thanks Geoff for your hint concerning thread-safety. But Ian's way of having
a Base Servlet that cares for store opening is attractive too. Do you think
that one can mix both methods by having a store initialization module that
gets imported in a base servlet:
# MyObjectStore.py
from MiddleKit.Run.MySQLObjectStore import MySQLObjectStore

store = MySQLObjectStore()
store.readModelFileNamed('MyMiddleKitModel')

# MyPage.py
from MyObjectStore import store
from WebKit.Page import Page

class MyPage(Page):
    def Store(self):
        return store

# MyDerivedPage.py
from MyPage import MyPage

class MyDerivedPage(MyPage):
    def writeContent(self):
        currentStore = self.Store()
        # do something with currentStore
Is the above code thread-safe?
Joern.
Howdy:
I finally got around to trying the latest Webware (0.5.1-rc3) and
I'm stuck. I ran the install.py thing, and then moved the
WebKit.cgi over to my apache setup to test it (I'll get around to
recompiling apache for something better later). I fixed the
permissions, and changed the line in the cgi to point to the WebKit
directory, but all I get is Internal Server Error or 404 Not Found.
Apache is working and serving up static text, as well as several
other cgi's like viewcvs. What am I missing?
Conceptual questions: I assume apache listens as normal on port 80,
and the WebKit.cgi talks to the AppServer over the loopback
interface? The appserver says it's listening on port 8086 on the
loopback - is this correct?
I have a somewhat kluged apache setup with a virtual host on the
external interface and the non-virtual setup on the internal one.
Could this be the problem? I get 404 errors when I use my internal
domain name in the URL to WebKit.cgi, but 500 errors when I use the
IP address.
Everything else seems to work okay (even Zope, before I removed it).
Thanks in advance, Steve
*************************************************************
Steve Arnold sarnold@...
Assoc. Faculty, Dept of Geography, Allan Hancock College
Linux: It's not just for nerds anymore...
Hello again! I was given an assignment the other day in C++ and I was suppose to use the counter and add a sentinel so the program knew when to stop.
Instructor's directions:
"The file for this assignment reads a sequence of positive integers from a file (2 3 5 7 9 10 22 54 66) and prints them out to the screen together with their sum. Create the text file that can be used to input these integers and add to the program a set of instructions that provides the number of integers in the sequence. You are to assume you do not know the number of integers in the sequence."
I have been able to get everything to work except 2 things. I can't get the "2" to print out and the sentinel, "0", has been printed out. I need help with having the "2" print out and for the sentinel, "0", not to.
Here is my code:
using namespace std;
#include<iostream>
#include<fstream>
#include<iomanip>

int main()
{
    int count = 1;
    int sentinel = 0;
    double The_Sum;
    double Positive_Integers;

    ifstream datain ("Input.txt");

    The_Sum = 0;

    cout<<" | Positive Integers \n"<<endl;
    cout<<"----|--------------------\n"<<endl;

    datain >> Positive_Integers;

    while (Positive_Integers != sentinel)
    {
        for (count = 1; count<=Positive_Integers; count ++)
        {
            datain>>Positive_Integers;
            cout<<count<<setw(4)<<"|"<<setw(20)<<Positive_Integers<<"\n"<<endl;
            The_Sum = The_Sum + Positive_Integers;
        }
    }

    cout<<endl<<"The Sum = "<<The_Sum<<endl<<endl;

    system ("pause");
    return 0;
}
Here is my text from the file:
TEXT FILE: 2 3 5 7 9 10 22 54 66 0
Thanks again guys. So far I'm enjoying this website. :) | https://www.daniweb.com/programming/software-development/threads/386391/sentinel-count-help | CC-MAIN-2018-30 | refinedweb | 261 | 66.47 |
The LINQ Project
On Tuesday September 13, 2005, Anders Hejlsberg did the first public demo of the LINQ Project in the Jim Allchin keynote.
What is LINQ you might ask?
LINQ stands for Language INtegrated Query and in a nutshell, it makes query and set operations, like SQL statements first class citizens in .NET languages like C# and VB.
Query expressions, like "from", "where", and "select" and all the others you know and love from SQL are now first class citizens in C# and VB. Not only that, the query expressions can be used across domains. While in the example below I'm querying objects, I could just as easily query a database as well.
What does LINQ code look like?
The sample below queries a list of strings and returns all strings with a length of five.
using System;
using System.Query;
using Danielfe;
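A minimal sketch of such a query follows (the array contents here are illustrative, not Anders' original data):

string[] names = { "Burke", "Connor", "Frank", "Everett", "Albert" };

var fiveLetterNames =
    from name in names
    where name.Length == 5
    select name;

foreach (string name in fiveLetterNames)
    Console.WriteLine(name);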
The beauty of LINQ is that you can use query operations on anything. Here's a sample that runs against SQL Server 2000 on the Northwind database and returns all customers whose contact title is five characters long.

using System.Data.DLinq; //DLinq is LINQ for Databases
using nwind; //Custom namespace that is tool generated
Northwind db = new Northwind("Data Source=(local);Initial Catalog=Northwind;Integrated Security=True");
Table<Customers> allCustomers = db.GetTable<Customers>();
var result =
from c in allCustomers
where c.ContactTitle.Length == 5
select c.ContactName;
result.Print();
The Customers class is an auto-generated class that lets you program against the Customer table. The first two lines establish a connection to the database and retrieve data for the Customer table. The query then selects all Northwind customers whose ContactTitle is five characters long, returns their ContactName values, and prints them to the console.
In short, LINQ makes it a heck of a lot easier to program any sort of data source in a unified way.
More information: | http://blogs.msdn.com/b/danielfe/archive/2005/09/13/464904.aspx?Redirected=true | CC-MAIN-2014-15 | refinedweb | 312 | 64.41 |
While the announcement that VP8 has been open sourced has been met with great enthusiasm, and will probably be the highlight of this year's Google IO for some time to come, there are many other interesting new announcements at this year's IO. One of these was Google Storage.
Google storage is nothing less than a competitor for Amazon’s S3 service, and is quite similar in implementation. Did I say quite similar? Try almost exactly the same.
Like Amazon S3, Google Storage gives developers a RESTful API for accessing, creating and deleting objects on Google Storage, but this is only to be expected.
Like S3, Google Storage uses the concept of Buckets, where each bucket name is unique in a global namespace. So if you create a bucket called images, which will be accessible via a hypothetical images.googlestorage.com, then no other bucket can have the same name. Each user will have a limit of 1000 buckets to his account. Unlike AWS though, Google will provide this service for Google Apps domain users as well.
Like S3, Google Storage is more of a non-relational database of keys and objects than a hierarchical storage service. What this means is that data stored in Google Storage does not have an imposed hierarchy. There are no files and folders; there are just data objects which have associated metadata. While S3 limits data objects to 5GB with 2k of metadata, Google allows up to 100GB per object.
Each data object can be accessed by its unique name, which is a 1024-byte UTF-8 string. So you can have names such as:
“someimage.jpg”
or “some!wierd-kind~offile?na$me”
A developer can imply a hierarchy in files by naming the objects carefully, such as:
“images/2010/02/image1.jpg“
“images/2010/02/image2.jpg“
“images/2010/02/image3.jpg“
“images/2010/03/image4.jpg“
Here, while it may seem like "image1.jpg", "image2.jpg" and "image3.jpg" are all in the same "images/2010/02" folder, and "image4.jpg" is in a different folder "images/2010/03", in fact they all live in the same flat namespace, with names that happen to include the "/" character. The end user accessing the file as "bucket.googlestorage.com/images/2010/03/image4.jpg" may feel like there is some hierarchy, but Google Storage won't know the difference. More importantly, it will allow you to have names like:
“images/20^10//03/image4.jpg“
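One consequence is that any "folder view" has to be built client-side by filtering on name prefixes. A small illustrative sketch (not a Google Storage API call):

object_names = [
    "images/2010/02/image1.jpg",
    "images/2010/02/image2.jpg",
    "images/2010/02/image3.jpg",
    "images/2010/03/image4.jpg",
]

def list_folder(names, prefix):
    # There are no real folders, just names that share a prefix.
    return [n for n in names if n.startswith(prefix)]

print(list_folder(object_names, "images/2010/02/"))
# ['images/2010/02/image1.jpg', 'images/2010/02/image2.jpg', 'images/2010/02/image3.jpg']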
Each object can have associated metadata, as name-value pairs. So for "images/2010/03/image4.jpg" you might want to store the image's resolution, the author's name, an account id, a link to the image's thumbnail, etc.
Google Storage will also allow one to define ACLs (Access Control Lists) for specifying who can have access to your data. Google has the advantage of letting Google Storage users use Google accounts for setting permissions.
As for the developer himself, similar to Amazon S3, access is granted via the associated access key and secret code, which are shared only between the developer and Google Storage.
Google has opened up early access to this service for developers, and for the testing period, is giving them 100GB of free storage and 300GB free bandwidth. Once it is released though, Google Storage will feature an Amazon S3-like pay for what you use structure.
While Amazon has recently decreased the cost for hosting content on their infrastructure by providing a lower redundancy option for storing less important data, Google’s Storage prices are much higher and stand as follows (taken from the Google Storage Overview):
Google Storage for Developers pricing is based on usage: storage costs $0.17 per GB per month, with additional per-GB charges for upload and download bandwidth.
Amazon S3, on the other hand, charges less for storage: anywhere between a maximum of $0.15 a GB and a low of $0.055 a GB if you store over 5,000 TB. The data transfer charges are the same as Amazon S3's (for the Americas and EMEA; Amazon is cheaper for APAC), though Amazon does provide 1GB of free data transfer out.
While it will be interesting to see Google Storage compete in this space, it currently doesn't seem to stack up as well against S3 when it comes to features. Amazon, for example, recently announced versioning for Amazon S3, which allows people to store multiple versions of objects automatically. Google probably has improvements for this service on the horizon, but for now Amazon S3 seems to have the upper hand.
In this Java tutorial, we are sorting an array in ascending order using a temporary variable and nested for loops. We are using the Scanner class to get the input from the user.
Java Example: Program to Sort an Array in Ascending Order
In this program, the user is asked to enter the number of elements that he wishes to enter. Based on the input we declare an int array, and then we accept all the numbers input by the user and store them in the array.
Once we have all the numbers stored in the array, we sort them using nested for loops.
import java.util.Scanner; public class JavaExample { public static void main(String[] args) { int count, temp; //User inputs the array size Scanner scan = new Scanner(System.in); System.out.print("Enter number of elements you want in the array: "); count = scan.nextInt(); int num[] = new int[count]; System.out.println("Enter array elements:"); for (int i = 0; i < count; i++) { num[i] = scan.nextInt(); } scan.close(); for (int i = 0; i < count; i++) { for (int j = i + 1; j < count; j++) { if (num[i] > num[j]) { temp = num[i]; num[i] = num[j]; num[j] = temp; } } } System.out.print("Array Elements in Ascending Order: "); for (int i = 0; i < count - 1; i++) { System.out.print(num[i] + ", "); } System.out.print(num[count - 1]); } }
Output:
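A sample run might look like this (the numbers entered are arbitrary):

Enter number of elements you want in the array: 5
Enter array elements:
4 2 9 1 7
Array Elements in Ascending Order: 1, 2, 4, 7, 9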
Related Java Examples:
1. Java Program for bubble sorting in ascending and descending order
2. Java Program to swap two numbers using bitwise xor operator
3. Java Program to sort the strings in an alphabetical order
4. Java Program to reverse the array | https://beginnersbook.com/2018/10/java-program-to-sort-an-array-in-ascending-order/ | CC-MAIN-2021-10 | refinedweb | 269 | 56.45 |
Source: lprng Version: 3.8.B Severity: important Tags: patch User: [email protected] Usertags: hurd Hello, lprng currently FTBFS on hurd-i386, because MAXPATHLEN is not defined for GNU/Hurd. POSIX says it is optional when there is no hard limit. The inlined patch below defines that value to 4096 for GNU. Since lprng does not seen to be the Debian default, cups is, this small patch can be used to make lprng available until all cups tests are OK for Hurd. Looking at the code it seems to be rather easy to write a proper patch for lprng, not using MAXPATHLEN at all to send upstream. However, I have chosen not to do this until now. No new release has been made since February 2011 and it looks like lprng is no longer developed upstream?? --- a/src/include/portable.h 2011-02-25 10:20:27.000000000 +0100 +++ b/src/include/portable.h 2012-05-04 16:55:40.000000000 +0200 @@ -152,6 +152,12 @@ #endif /*************************************************************************/ +#if defined(__GNU__) +# define IS_GNU OSVERSION +# define MAXPATHLEN 4096 +#endif + +/*************************************************************************/ #if defined(__convex__) /* Convex OS 11.0 - from w_stef */ # define IS_CONVEX OSVERSION | https://lists.debian.org/debian-hurd/2012/05/msg00021.html | CC-MAIN-2014-10 | refinedweb | 192 | 76.72 |
Polyglot Programming in Spring0 Comments writing the code for a recent blog post on Spring Integration, I thought it would be fun to write some of the code in Groovy. I’ve been writing in Groovy for about 7 years. I’m as fluent in Groovy as I am in Java, so its no problem for me to bounce between the two languages. If you been following my blog, you’ve probably seen me use Spock for my unit tests. Spock is a great tool. A Groovy testing framework for testing Java code.
Many Java developers who are not familiar with Groovy, view it as just a scripting language. Groovy is not just a scripting language. It is an Object Oriented Programming language, just like Java is. Some may argue that Groovy is a pure object oriented language and Java is not because of Java’s support of Primitive Types. Because unlike Java, Groovy does autoboxing of primitive types.
Its also important to understand, Groovy was never intended to be a replacement of Java. It’s written to supplement Java. As such, its very easy to mix and match code. Ultimately both Groovy and Java compile down to JVM byte code, and the compiled code is compatible.
Polyglot Programming in the Spring Framework
The Spring Framework is no stranger to Polyglot programming. The Grails community really spearheaded things with the Groovy language. In version 4 of the Spring Framework, we’re seeing a lot more support of polyglot programming around the Groovy language. But in the various Spring Framework projects, we’re seeing more polyglot programming support around the Scala programming language. With Rod Johnson involved with Typesafe, I think it’s a safe bet we will see additional support of Scala in Spring in the future.
Groovy Spring Beans
It is possible to write Spring Beans in Groovy. Your application will compile and run just fine. Whenever programming for Dependency Injection, I recommend developing your code to an interface.
AddressService
When doing polyglot programming, I prefer to write the interface in Java. Now any class, written in Java or Groovy can implement the interface. You probably could write the interface in Groovy and use it in Java just fine, but there are some pitfalls of using Groovy from Java you need to be aware of. For example, if you use
def as a type, Java treats it as a Object data type. Sometimes the strong typing of Java is not a bad thing. This also seems more appropriate to me when defining the interface to use.
package guru.springframework.si.services;

import guru.springframework.si.model.commands.PlaceOrderCommand;
import org.springframework.validation.Errors;

public interface AddressService {
    Errors verifyAddress(PlaceOrderCommand command);
}
AddressServiceImpl.groovy
You can see my Groovy class simply implements the
AddressService interface. I mark the class with the
@Service("addressService") annotation as I would a normal Java Spring Bean.
package guru.springframework.si.services import guru.springframework.si.model.commands.PlaceOrderCommand import org.springframework.stereotype.Service import org.springframework.validation.BeanPropertyBindingResult import org.springframework.validation.Errors @Service("addressService") class AddressServiceImpl implements AddressService{ @Override Errors verifyAddress(PlaceOrderCommand command) { def i = 0 def msg = Thread.currentThread().id + ' : In Address Service' while (i < 1000) { println msg i = i + 100 msg = msg + '. ' } new BeanPropertyBindingResult(command, 'Place Order Command') } }
Using a link below, you can checkout the full project. You will see the Groovy class compiles with the Java code and runs in the Spring context like any other Spring Bean.
Conclusion
In enterprise Java / Spring shops, you probably will not see much Polyglot programming. But, the Grails team uses Groovy for Spring Beans all the time. I’ve demonstrated its easy to use Groovy Spring beans outside of the Grails environment. This is not a technology issue to overcome. Culturally, I suspect it may be some time before you see polyglot programming in large scale enterprise applications. But doing polyglot programming like this is something fun to do in blog. | https://springframework.guru/polyglot-programming-in-spring/ | CC-MAIN-2021-43 | refinedweb | 658 | 58.48 |
.
The simple repository introduced in the previous blog is great for simple modules. However, in the previous blog we had to make some changes to our TaskInfo class and Tasks table to make everything work using PetaPoco’s mapping conventions. However, PetaPoco supports an IMapper interface and, in the DAL 2, we have implemented a custom IMapper implementation (PetaPocoMapper) that provides support for three Attributes which are defined in the DotNetNuke.ComponentModel.DataAnnotations namespace. For those of you who have used Entity Framework or LINQ 2 Sql these are similar to the attributes used in those frameworks.
Use of these attributes is demonstrated in Listing 1, where we show the original TaskInfo object decorated with custom mapping attributes to map to the Tasks table. This ability to map the properties to columns means that we can be flexible with our naming.
The TaskInfo class should now map correctly to the Tasks table from the previous blog - Listing 2.
The DAL 2 PetaPocoMapper class also takes care of the table prefix or object qualifier. (Note: There is a bug in the CTP which means that object qualifiers don’t actually work - this bug will be fixed in the next pre-Release package of DNN 7.).
It can often be unnecessary and lead to poor performance to go to the database every time we need a collection of tasks or even a single task. In the DotNetNuke core we make extensive use of in-memory cache’s to deliver the best performance possible.
Usually items are cached in a collection and if individual items are required they are retrieved using a LINQ query on the in-memory collection. The DAL 2 provides a very simple caching mechanism, through the use of a fourth Attribute. The Cacheable attribute allows the developer to define the cache key, the priority and the timeout, as shown in Listing 3.
Under the covers the DAL 2 calls DotNetNuke’s built in caching support, so all the normal rules on object caching are applied. For example, caching is disabled if the Host Setting is set to None. But for module developers, all you need to do to add support for caching is to add the cacheable attribute.
Most of the methods of the repository class are aware of the attribute and do the appropriate thing: Get methods check the cache if caching is enabled and update/insert/delete methods clear the cache when called - the exceptions being the methods which take a "SqlCondition” parameter. This can be seen by looking at the code in Listings 4 and 5.
Ignoring the IsScoped property for now, the Get method checks if the class T is cacheable. If it is cacheable it calls DotNetNuke’s GetCachedData utility method, which takes a delegate as a callback to use if the cache has expired - in this case GetInternal(). If T is not cacheable the GetInternal() is called directly. GetInternal is implemented in the PetaPocoRepository class and is shown in Listing 6.
So now we have the ability to modify our custom mappings through attributes and the ability to control caching, but we still are working with all the data in our module, rather than just the data for one specific instance - or for some modules for one specific portal.
To solve this problem the DAL 2 provides a Scope attribute. The Scope attribute identifies a property of the object which is used to scope the data. In most cases this would be the module Id but by keeping this generic we can support different types of scoping - portal Id or user Id as an example.
Lets look at our TaskInfo class after adding the Scope attribute (as well as a ModuleID property) - Listing 7.
We also have to update our Table as well so it has a ModuleID column - LIsting 8.
And finally we have to make a couple of minor changes to the TaskController class - Listing 9.
Notice that the Repository provides overloads for the Get and GetById method which are used to pass the scope value - in this case the moduleId.
Under the covers the Repository knows that the scope value “moduleId” refers to the Scope column “ModuleID” and generates the appropriate WHERE clause, to query the database. The cool thing is that this works with the cacheable attribute, so that if the class is both scoped and cacheable then there are separate cached collections for each scope value.. | http://www.dnnsoftware.com/community-blog/cid/142379 | CC-MAIN-2017-51 | refinedweb | 738 | 57.61 |
In my article My semi automated workflow for blogging, I have talked about my blogging workflow. There were two main things (actually one thing) in that flow that were not automated. i.e., automatically Uploading to Blogger and automatically Uploading to Medium. I have talked about the first one here. This article is about uploading posts to Medium automatically.
Developer documentation for Medium is a breath of fresh air after the mess that is Google API’s. Of course, Google API’s are complex because they have so many different services, but they could’ve done a better job at organizing all that stuff. Anyway, Let’s see how you can use Medium API’s.
Setting Up
We don’t really need any specific dependencies for what we’re doing in this article. You can do everything with
urllib which is already part of the python standard library. I’ll be using
requests as well to make it a bit more simpler but you can achieve the same without it.
Getting the access token
To authenticate yourself with Medium, you need to get an access token that you’ll pass along to every request. There are two ways to get that token.
- Browser-based authentication
- Self-issues access tokens
Which one you should go with, depends on what kind of application you’re trying to build. As you can probably guess based on the title, we’ll be covering the second method in this article. The first method needs an authentication server setup which can accept callback from Medium. But, since at this moment, I don’t have that setup, I’m going with the second option.
The Self-issued access tokens method is quite easy to work with as you directly take the
access token without having to have the user authenticate via the browser.
To get the access token, Go to Profile Settings and scroll down till you see
Integration tokens section.
There enter some description for what you’re going to use this token and click on
Get integration token. Copy that generated token which looks something like
181d415f34379af07b2c11d144dfbe35d and save it some where to be used in your program.
Using Access token to access Medium
Once you have the access token, you’ll use that token as your password and send it along with every request to get the required data.
Let’s get started then. As, I’ve said we’ll be using
requests library for url connections. We’ll also be using the
json libary for parsing the responses. So, Let’s import them.
import requests import json
Then use
access_token you’ve got and put it in a
headers dictionary.
access_token = '181d415f34379af07b2c11d144dfbe35d' headers = { 'Authorization': "Bearer " + access_token, 'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36' }
The
User-Agent in the above dictionary is required as Medium won’t accept your request otherwise. You don’t have to have the same value as I did.
Validating the access token
First thing to check is if the access_token is valid. You can do that by making a
GET request to and checking the response.
me_url = base_url + 'me' me_req = ureq.Request(me_url, headers=headers) me_response = ureq.urlopen(me_req).read() json_me_response = json.loads(me_response) print(json_me_response)
And, when I print the
json_me_response, which is a json object, I get the following:
{ "data": { "id":"5303d74c64f66366f00cb9b2a94f3251bf5adskak7623as", "username":"durgaswaroop", "name":"Durga swaroop Perla", "url":"", "imageUrl":"*qVDXEHT9DDYUOcrj." } }
If we got that response like above, then we know that the access token we have is valid.
From there, I extract, the
user_id from the JSON string, with
user_id = json_me_response['data']['id']
Get User’s Publications
From the above request, we’ve validated that the access token is correct and we also have got the
user_id. Using that we can get access to the publications of a user. For that, we’ve to make a
GET to{{userId}}/publications and you’ll see the list of the publications by that user.
user_url = base_url + 'users/' + user_id publications_url = user_url + 'publications/' publications_req = ureq.Request(publications_url, headers=headers) publications_response = ureq.urlopen(publications_req).read() print(publications_response)
I don’t have any publications on my medium account, and so I got an empty array as response. But, if you have some publications, the response will be something like this.
{ "data": [ { "id": "b969ac62a46b", "name": "About Medium", "description": "What is this thing and how does it work?", "url": "", "imageUrl": "*ae1jbP_od0W6EulE.jpeg" }, { "id": "b45573563f5a", "name": "Developers", "description": "Medium’s Developer resources", "url": "", "imageUrl": "*[email protected]" } ] }
Now, one weird thing about Medium’s API is that they don’t have a
GET for posts. From the API’s we can get a list of all the publications but you can’t get a user’s posts. You can only publish a new post. Although, it is odd for that to be missing, It is not something I’m looking for anyway, as I am only interested in publishing an article. But if you need that, you probably should check to see if there are any hacky ways of achieving the same (at your own volition).
Create a New Post
To create a new post, we have to make a
POST request to{{authorId}}/posts. The
authorId here would be the same as the
userId of the user whose access-token you have.
I’m using
requests library for this as making a
POST request becomes easy with it. Of course, first you need to create a payload to be uploaded. The payload should look something like the following, as described here
{ "title": "Liverpool FC", "contentFormat": "html", "content": "<h1>Liverpool FC</h1><p>You’ll never walk alone.</p>", "tags": ["football", "sport", "Liverpool"], "publishStatus": "public" }
So, for this, I did the following:
posts_url = user_url + 'posts/' payload = { 'title': 'Medium Test Post', 'contentFormat': 'markdown', 'tags': ['medium', 'test', 'python'], 'publishStatus': 'draft', 'content': open('7.Test_post.md').read() } response = requests.request('POST', posts_url, data=payload, headers=headers) print(response.text)
As you see, for
contentFormat, I’ve set
markdown and for
content I read it straight from the file. I didn’t want to publish this as it is just a dummy post and so I’ve set the
publishStatus to
draft. And sure enough, it works as expected and I can see this draft added on my account.
Do note that the
title in the payload object won’t actually be the title of the article. If you want to have a title, you add it in the
content itself as a
<h*> tag.
The full code is available as a gist.
That is all for this article.
For more programming and Python articles, checkout Freblogg and Freblogg/Python
Some articles on automation:
Web Scraping For Beginners with Python
My semi automated workflow for blogging
This is the seventh article as part of my twitter challenge #30DaysOfBlogging. Twenty-three. | https://www.freblogg.com/publish-articles-to-your-medium-blog | CC-MAIN-2021-31 | refinedweb | 1,149 | 65.01 |
The
unique() function in C++ helps remove all the consecutive duplicate elements from the array or vector. This function cannot resize the vector after removing the duplicates, so we will need to resize our vector once the duplicates are removed. This function is available in the
<algorithm.h> header file.
The
find() function accepts the following parameters:
first: This is an iterator that points to the first index of the array or vector where we want to perform the search operation.
last: This is an iterator that points to the last index of the array or vector to where we want to perform the search operation.
The
unique() function returns an iterator pointing to the element that follows the last element that was not removed.
Let’s look at the code below:
#include <iostream> #include <algorithm> #include <vector> using namespace std; int main() { vector<int> vec = {10,20,20,20,30,30,20,20,10}; auto it = unique(vec.begin(), vec.end()); vec.resize(distance(vec.begin(), it)); for (it = vec.begin(); it!=vec.end(); ++it) cout << ' ' << *it; }
unique()function and pass the required parameters.
unique()function only removes the consecutive duplicates).
RELATED TAGS
CONTRIBUTOR
View all Courses | https://www.educative.io/answers/how-to-use-the-unique-function-in-cpp | CC-MAIN-2022-33 | refinedweb | 198 | 56.66 |
Changing Locale Dynamically
I am confused about how you are supposed to go about changing the Locale. I am doing this:
<f:view
<f:loadBundle
<h:form>
<h:commandButton
</h:form>
</f:view>
The swapLocale method swaps the Locale to/from English and Spanish, and my Person bean is session-scoped. The problem is that when I press the button, I don't get the new Locale until I manually reload the page. I put a trace in the swapLocale and getLocale methods, and I find that when I press the button, the getLocale method is called, then the swapLocale method is called, and the getLocale method is not called again after the call to swapLocale. I get the same behavior with action instead of actionListener, and also if I remove immediate="true".
Clearly, I am misunderstanding the lifecycle here. Why isn't getLocale being called after the actionListener method [swapLocale]? What is the proper way to let the user set the Locale?
Thanks!
- Marty
-----
JSF 2.0 Training Course:
In the last part of your post, you said '... Here we, take the #{person.locale} value and set it as UIViewRoot locale.' what exactly is this UIViewRoot locale should I use to replace "#{person.locale}"?
I use another way to retrieve the current locale setting, which is stored in session. But now the problem is I cannot get the correct English/French link to be displayed without refreshing the page first. It looks like it alternately creates the correct and wrong links, the wrong link means the method bundled with actionListener in the command link is not called.
Please help, thanks.
Message was edited by: scorpioy
hi, actually i was replying to the example given by Mr. Marty Hall. So it may have seemed incomplete to you.
Anyways, the user presses a
@ManagedBean
@SessionScoped
public class Person implements Serializable {
private String firstName, lastName, emailAddress;
private boolean isEnglish = true;
private final Locale ENGLISH = Locale.ENGLISH;
private Locale SPANISH = new Locale("es");
//getters and setters for firstName, lastName, emailAddress
public void swapLocale(ActionEvent event) {
System.out.println("[swapLocale] Setting isEnglish to " + !isEnglish);
isEnglish = !isEnglish; //swap boolean field isEnglish
}
}
notice the above method just sets the boolean field to true or false.
Now there is another method in person bean:
@ManagedBean
@SessionScoped
public class Person implements Serializable {
.
.
.
public Locale getLocale() {
if (isEnglish) {
return (ENGLISH);
} else {
return (SPANISH);
}
}
}
so basically #{person.locale} which runs the getLocale() method returns the currently selected locale.
So finally in the phaselistener beforePhase method, the #{person.locale} is read, and set as the current locale.
Now for any page in the application, whether the page swaps locale or not, the phaselistener beforePhase will check the #{person.locale} value and set it as current locale. This was done as the original post had problem with locale when a page was refreshed/reloaded in the browser.
Well, thanks to some advice from folks here (thanks nash_era!), here is what I finally did:
Problem:
- Calling setLocale on the view
*** Triggered when you reload the page
*** [b]Not [/b]triggered when you redisplay the page after running action listener (or submitting form)
- Setting the Locale of the UIViewRoot
*** Triggered when you redisplay the page after running action listener (or submitting form)
*** [b]Not [/b]triggered when you reload the page (Because the Locale is reset to the default)
Solution
- Do both!
So, in the ActionListener I did this:
public void swapLocale1(ActionEvent event) {
switchLocale();
}
private void switchLocale() {
isEnglish = !isEnglish;
Locale newLocale;
if (isEnglish) {
newLocale = ENGLISH;
} else {
newLocale = SPANISH;
}
FacesContext.getCurrentInstance().getViewRoot().setLocale(newLocale);
}
Then, in the facelets page, I did this:
I also added this to the FormSettings class:
public Locale getLocale() {
if (isEnglish) {
return(ENGLISH);
} else {
return(SPANISH);
}
}
As I said before, doing either one (setting the UIViewRoot or using f:view with the locale attribute) was not enough. The above works, but it feels like a kludgy hack to me. I strongly suspect that I am misunderstanding the JSF lifecycle, and if I understood it properly, I would have a simpler approach.
Any suggestions? BTW, I put a ZIP of the above project in in case anyone wants to try it out or look at my code.
Thanks!
- Marty
------
JSF 2.0 Training:
Changing the locale in the Browser perfectly changes the locale used but only changing it dynamically seems not to work at all. It looks as if the locale given from the Browser has more priority and overwrites always the configured locale in the session.I am using CRS (Crystal Reports Server) 2008 v1 and we are viewing reports through a JSP application deployed on the built-in Tomcat server that comes with CRS. I would like to know how to dynamically change the locale of reports through this application (i.e. through Java). The below code seems to work fine for CRS XI but it is not working in CRS 2008 v1. The locale seems to get stuck to Swedish regional settings in my case!
[url=]Mercedes Benz 300D Parts[/url]
Message was edited by: fatabangi007
PS Just in case anyone wants to try out my actual code, I exported my Eclipse project as a ZIP file, and you can grab it from. I included the JAR files in WEB-INF/lib, so you can deploy on any servlet 2.5 container. Deploy it and run.
The key behavior to observe is that when you press the button, the getLocale method (called because of the attribute on f:view) is called before the swapLocale method (the button's actionListener).
Cheers-
- Marty
Hi,)
even i am curious about the lifecycle issue in your code. Hopefully somebody will put some light on it.
=====)
Actually, I should have mentioned in my original post that I had already tried this. I just went back and tried it again to be sure.
If I do the above but STILL set the attribute on f:view, I have the exact same behavior as before: when I press the button it calls getLocale BEFORE calling swapLocale, and I don't see the right properties until I manually reload the page.
If I do the above but WITHOUT setting the attribute on f:view, the properties never change at all.
> even i am curious about the lifecycle issue in your
> code. Hopefully somebody will put some light on it.
I also went back and carefully tested my original version as posted above, restarting the server and closing all browsers to be sure no session data was cached. Same behavior: pressing the button results in calling getLocale [b]first [/b]and swapLocale [b]second[/b].
I had assumed that I was just totally misunderstanding the lifecycle. I suppose it is also possible that there is a Mojarra bug. Just in case, I am running Mojarra 2.0.2 (FCS b10) on JDK 1.6.0.15 on Windows.
>.
Hmm. Who am I to argue with Burns and Schalk? I have that book sitting right in front of me but hadn't read that part yet. This advice is news to me, so I will take it into account in the future. However, I am a little confused by the book example, since the advice you quote is at the top of page 286, yet the diagram at the bottom of page 287 still uses f:loadBundle. I am guessing the 287 diagram is just a leftover from the previous edition that they forgot to update.
Thanks for the tip!
hi,
here is another retake on the whole issue. Since locale change is a cross cutting concern, its better that locale is set for pages without the page doing anything special for it. Plus the "do both" approach is bit bothersome. If you add a page in future,where you forget to add f:view, then again you will have locale problem for that page. Here is a clean approach to change locale,I did the following:
1)remove all f:view tags from pages(also remove f:loadBundle from pages).
2)change the swapLocale() method to:
public void swapLocale(ActionEvent event) {
System.out.println("[swapLocale] Setting isEnglish to " + !isEnglish);
isEnglish = !isEnglish;
//FacesContext.getCurrentInstance().getViewRoot().setLocale(isEnglish?ENGLISH:SPANISH);
}
notice that i commented out the last statement in the method. So now, its no longer the responsibility of a page to set locale.
3) getLocale() method as before, no changes in it:
public Locale getLocale() {
if (isEnglish) {
return (ENGLISH); // ENGLISH is Locale.ENGLISH
} else {
return (SPANISH);// SPANISH is Locale("es")
}
}
4)Here's the interesting part:
register a listener:
the listener code is:
import java.util.Locale;
import javax.el.ELContext;
import javax.el.ValueExpression;
import javax.faces.application.Application;
import javax.faces.context.FacesContext;
import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;
import javax.faces.event.PhaseListener;
public class LifeCycleListener implements PhaseListener {
public PhaseId getPhaseId() {
return PhaseId.ANY_PHASE;
}
public void beforePhase(PhaseEvent event) {
System.out.println("START PHASE " + event.getPhaseId());
FacesContext context;
if (event.getPhaseId() == PhaseId.RENDER_RESPONSE) {
context = event.getFacesContext();
Application application = context.getApplication();
ValueExpression ve = application.getExpressionFactory().
createValueExpression(context.getELContext(),
"#{person.locale}", Locale.class);
try {
System.out.println("setting locale in phase listener");
Locale localeToSet = (Locale) ve.getValue(context.getELContext());
context.getViewRoot().setLocale(localeToSet);
} catch (Exception e) {
System.out.println("error in getting locale");
}
}
}
public void afterPhase(PhaseEvent event) {
System.out.println("END PHASE " + event.getPhaseId());
}
}
In the above code, the if block is run just before any page is rendered. Here we, take the #{person.locale} value and set it as UIViewRoot locale.
So now, no code in any page to change locale, no worry about forgetting the f:view, plus no dual setting.
Dont know, if you have read this . Its quite a informative article.
ps:am from india | https://www.java.net/node/703940 | CC-MAIN-2014-10 | refinedweb | 1,617 | 57.57 |
- Behavior-Driven Development in Python
Friday, February 13, 2015 by martijn broeders, ensures that they are easy to both detect and fix. This speed, clarity, focus and quality in your code is why you need to be adopting this process... now.
What Is Behavior-Driven Development?
Behavior-Driven Development (which we will now refer to as "BDD") follows on from the ideas and principles introduced in Test-Driven Development. The key points of writing tests before code really apply to BDD as well. The idea is to not only test your code at the granular level with unit tests, but also test your application end to end, using acceptance tests. We will introduce this style of testing with the use of the Lettuce testing framework.
Behavior-Driven Development (BDD) is a subset of Test-Driven Development (TDD).
The process can be simply defined as:
- Write a failing acceptance test
- Write a failing unit test
- Make the unit test pass
- Refactor
- Make the acceptance test pass
Rinse and repeat for every feature, as is necessary.
BDD in Agile Development
BDD really comes into its own when used with agile development.
Tip: Refer to The Principles of Agile Development for more information on agile development methods.
With new features and requirements coming in every one, two or four weeks, depending on your team, you need to be able to test and write code for these demands quickly. Acceptance and unit testing in Python allows you to meet these goals.
Acceptance tests famously make use of an English (or possibly alternative) language format "feature" file, describing what the test is covering and the individual tests themselves. This can engage everyone in your team—not just the developers, but also management and business analysts who otherwise would play no part in the testing process. This can help to breed confidence across the whole team in what they are striving to achieve.
The feature files allow for tests to be described in a language this is/can be accessible to all levels of the business, and ensures that the features being delivered are articulated and tested in the way the business requires and expects. Unit testing alone can not ensure the application being delivered actually provides the full functionality that is required. Therefore acceptance testing adds another layer of confidence in your code to ensure that those individual 'units' fit together to deliver the full package required. The great thing about acceptance testing is that it can be applied to any project you are working on, either big or small scale.
Gherkin Syntax
Acceptance tests usually make use of the Gherkin Syntax, introduced by the Cucumber Framework, written for Ruby. The syntax is quite easy to understand, and, in the Lettuce Python package, makes use of the following eight keywords to define your features and tests:
- Given
- When
- Then
- And
- Feature:
- Background:
- Scenario:
- Scenario Outline:
Below, you can review these keywords in action, and how they can be used to structure your acceptance tests.
Installation
The installation of the
Lettucepackage is straightforward, following the usual
pip installpattern that most Python developers will be familiar with.
Perform the following steps to begin using
Lettuce:
$ pip install lettuce
$ lettuce /path/to/example.featureto run your tests. You can either run just one feature file, or, if you pass a directory of feature files, you can run all of them.
You should also install
nosetests(if you don't already have it installed) as you will be making use of some of the assertions that
nosetestsprovides to make your tests easier to write and use.
$ pip install nose
Feature Files
Feature files are written in plain English, and specify the area of the application that the tests cover. They also provide some setup tasks for the tests. This means that you are not only writing your tests, but are actually forcing yourself to write good documentation for all aspects of your application. So you can clearly define what each piece of code is doing and what it is handling. This documentation aspect of the tests can be great as the size of your application grows, and you wish to review how a certain aspect of the application works, or you wish to remind yourself of how to interact with a part of the API for example.
Let's create a Feature file which will test an application that was written for my Test-Driven Development in Python article for Tuts+. The application is just a simple calculator written in Python but will show us the basic of writing acceptance tests. You should structure your application with an
appand a
testsfolder. Within the
testsfolder, add a
featuresfolder also. Place the following code in a file named
calculator.pyunder the
appfolder.
class Calculator(object): def add(self, x, y): number_types = (int, long, float, complex) if isinstance(x, number_types) and isinstance(y, number_types): return x + y else: raise ValueError
Now add the following code to a file named
calculator.featureunder the
tests/featuresfolder.
Feature: As a writer for NetTuts I wish to demonstrate How easy writing Acceptance Tests In Python really is. Background: Given I am using the calculator Scenario: Calculate 2 plus 2 on our calculator Given I input "2" add "2" Then I should see "4"
From this simple example, you can see how straightforward it is to describe your tests and share them across the various people involved in your team.
There are three key areas of note in the feature file:
- Feature block: Here is where you write documentation for what this group of tests is going to cover. No code is executed here, but it allows the reader to understand exactly what this Feature is testing.
- Background block: Executed prior to every Scenario within the Feature file. This is similar to the
SetUp()method and allows you to perform necessary setup code, such as making sure you are on some page, or have certain conditions in place.
- Scenario block: Here, you define the test. The first line serves as the documentation again, and then you drop into your Scenario to execute the test. It should be fairly easy to see how you can write any test in this style.
Steps File
Following on from the Feature file, we need to have the steps file underneath. This is where the 'magic' happens. Obviously, the Feature file itself will not do anything; it requires the steps to actually map each line to execute Python code underneath. This is achieved through the use of regular expressions.
"Regular Expressions? Too complex to bother with in testing" can often be a response to RegEx's in these tests. However, in the BDD world, they are used to capture the whole string or use very simple RegEx's to pick out variables from a line. Therefore you shouldn't be put off by their use here.
Regular Expressions? Too complex to bother with in testing? Not in Lettuce. Simple and Easy!
If we review an example. you'll see how easily the Steps file follows on from the Feature.
from lettuce import * from nose.tools import assert_equals from app.calculator import Calculator @step(u'I am using the calculator') def select_calc(step): print ('Attempting to use calculator...') world.calc = Calculator() @step(u'I input "([^"]*)" add "([^"]*)"') def given_i_input_group1_add_group1(step, x, y): world.result = world.calc.add(int(x), int(y)) @step(u'I should see "([^"]+)"') def result(step, expected_result): actual_result = world.result assert_equals(int(expected_result), actual_result)
The first thing worth noting is the standard imports at the top of the file. So we need access to our
Calculatorclass and, of course, the tools provided by Lettuce. You also import some handy methods from the
nosetestpackage such as
assert_equalsto allow for easy assertions in the steps. You can then begin to define the steps for each line in the Feature file. We can see that, as explained earlier, the regular expressions are mostly just picking up the whole string, except where we want access to the variable within the line.
If we use the
@step(u'I input "([^"]*)" add "([^"]*)"')line as our example, you can see that the line is first picked up using the
@stepdecorator. Then you use the
'u'character at the start to indicate a unicode string for Lettuce to perform regular expressions upon. Following that, it's just the line itself and a very simple regular expression to match anything within the quotes—the numbers to add in this case.
You should then see that the Python method follows directly after this, with the variables passed into the method with whatever name you wish. Here, I have called them
xand
yto indicate the two numbers to be passed to the calculator
addmethod.
Another item of note here is the use of the
worldvariable. This is a globally scoped container, and allows variables to be used across steps within a scenario. If we didn't, all variables would be local to their method, but, here, we create an instance of
Calculator()once, and then access that in each step. You also use the same technique to store the result of the
addmethod in one step and then assert on the result in another step.
Executing the Features
With the feature file and steps in place, you can now execute the tests and see if they pass. As mentioned earlier, executing the tests is simple and Lettuce provides a built-in test runner, available to you from the command line after installation. Try executing
lettuce test/features/calculator.featurein your preferred command line application.
$ 1 feature (1 passed) 1 scenario (1 passed) 2 steps (2 passed)
Lettuce's output is really nice, as it shows you each line of the feature file that has been executed and highlights in green to show that it has passed the line successfully. It also shows which feature file it is running and the line number, which comes in handy once you have built up a larger test suite of numerous features and need to find an offending line of a feature such as when a test fails. Finally, the last part of the output provides you with stats about the number of features, scenarios and steps that have been executed, and how many passed. In our example, all the tests were good, but let's take a look at how Lettuce shows you test failures and how you can debug and fix them.
Make a change to the code of
calculator.py, so that the test will fail, such as changing the add method to actually subtract the two numbers passed in.
class Calculator(object): def add(self, x, y): number_types = (int, long, float, complex) if isinstance(x, number_types) and isinstance(y, number_types): return x - y else: raise ValueError
Now, when you run the feature file using Lettuce, you will see how it clearly indicates what has gone wrong in the test and which part of the code has failed.
$ Traceback (most recent call last): File "/Users/user/.virtualenvs/bdd-in-python/lib/python2.7/site-packages/lettuce/core.py", line 144, in __call__ ret = self.function(self.step, *args, **kw) File "/Users/user/Documents/Articles - NetTuts/BDD_in_Python/tests/features/steps.py", line 18, in result assert_equals(int(expected_result), actual_result) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py", line 515, in assertEqual assertion_func(first, second, msg=msg) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py", line 508, in _baseAssertEqual raise self.failureException(msg) AssertionError: 4 != 0 1 feature (0 passed) 1 scenario (0 passed) 2 steps (1 failed, 1 passed) List of failed scenarios: Scenario: Calculate 2 plus 2 on our calculator # tests/features/calculator.feature:9
Clearly, the expected value of
4now does not match the actual return value of
0. Lettuce has clearly shown you this issue and you could then debug your code to find out what has gone wrong, apply a fix, and make the test pass again.
Alternative Tools
There are plenty of alternative options within Python to do this form of testing. We have examples, such as Behave, Lettuce and also Cucumber, which, as mentioned, defined this structure. The other tools are essentially clones/ports of Cucumber. Cucumber can be used with Python code, via the use of a Ruby-Python interpreter, but that is beyond the scope of this tutorial.
- Behave: a near exact port of Cucumber into Python. Has a good level of documentation, and is updated constantly by the developers. They also offer a comparison with other tools, which is worth a read.
- Freshen: another direct port of Cucumber, featuring tutorials and examples on their website, and simple installation tools, such as 'pip'.
The key point, with all of these tools, is that they are all more or less the same. Once you have mastered one, you'll quickly pick up on the others, should you choose to switch. A quick review of the documentation should be enough for most developers who are proficient in Python.
Advantages
You can dive into refactoring confidently.
There are significant advantages to using a thorough test suite. One of the major ones revolves around the refactoring of code. With a robust test suite in place, you can dive into refactoring confidently, knowing that you have not broken any previous behavior in your application.
This grows in importance the more your application develops and increase in size. When you have more and more legacy code, it becomes very hard to go back and make changes with confidence and know that you have definitely not broken any existing behaviour. If you have a full suite of acceptance tests written for every feature being developed, you know that you have not broken that existing functionality as long as when you make your changes you run a full build of your tests before pushing the changes live. You check that your code has not "regressed" due to your changes and restrings.
Another great benefit of building acceptance testing into your daily workflow is the ability to have a clarification session before starting the development of a feature.
You could, for example, have the developers who will code the solution of a feature, the testers (quality assurance/QA's) who test the code once complete, and the business/technical analyst all sit down and clarify the requirements of a feature, and then document this as the feature files that the developers will work towards.
Essentially, you can have a set of failing feature files that the developers can run and make pass one by one, so that they will know they are done with the feature once all are passing. This gives developers the focus they need to deliver exactly to the requirements and not expand upon the code with features and functionality not necessarily required (also known as "gold plating"). The testers can then review the features files to see if everything is covered appropriately. The process can then be undertaken for the next feature.
Final Thoughts
Having worked in a team using the process and tools outlined above, I have personally experienced the huge advantages of working in this manner. BDD provides your team with clarity, focus, and the confidence to deliver great code, while keeping any potential bugs to a minimum.
Heads Up!
If this article has whetted your appetite for the world of testing in Python, why not check out my book "Testing Python", released on Amazon and other good retailers recently. Visit this page to purchase your copy of the book today, and support one of your Tuts+ contributors.
Leave a comment › Posted in: Daily | https://www.4elements.com/blog/comments/behavior-driven_development_in_python | CC-MAIN-2018-05 | refinedweb | 2,609 | 60.75 |
On 01/15/2014 10:11 AM, James Hogan wrote:
When KVM is enabled and TLB invalidation is supported,
kvm_mips_flush_host_tlb() can cause a machine check exception due to
multiple matching TLB entries. This can occur on shutdown even when KVM
hasn't been actively used.
Commit adb78de9eae8 (MIPS: mm: Move UNIQUE_ENTRYHI macro to a header
file) created a common UNIQUE_ENTRYHI in asm/tlb.h but it didn't update
the copy of UNIQUE_ENTRYHI in kvm_tlb.c to use it.
Commit 36b175451399 (MIPS: tlb: Set the EHINV bit for TLBINVF cores when
invalidating the TLB) later added TLB invalidation (EHINV) support to
the common UNIQUE_ENTRYHI.
Therefore make kvm_tlb.c use the EHINV aware UNIQUE_ENTRYHI
implementation in asm/tlb.h too.
Signed-off-by: James Hogan <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: [email protected]
Cc: Gleb Natapov <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: [email protected]
Cc: Markos Chandras <[email protected]>
Cc: Sanjay Lal <[email protected]>
---
This is based on John Crispin's mips-next-3.14 branch.
I do not object to it being squashed into commit adb78de9eae8 (MIPS: mm:
Move UNIQUE_ENTRYHI macro to a header file) since that commit hasn't
reached mainline yet.
---
arch/mips/kvm/kvm_tlb.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/arch/mips/kvm/kvm_tlb.c b/arch/mips/kvm/kvm_tlb.c
index c777dd36d4a8..52083ea7fddd 100644
--- a/arch/mips/kvm/kvm_tlb.c
+++ b/arch/mips/kvm/kvm_tlb.c
@@ -25,6 +25,7 @@
#include <asm/mmu_context.h>
#include <asm/pgtable.h>
#include <asm/cacheflush.h>
+#include <asm/tlb.h>
#undef CONFIG_MIPS_MT
#include <asm/r4kcache.h>
@@ -35,9 +36,6 @@
#define PRIx64 "llx"
-/* Use VZ EntryHi.EHINV to invalidate TLB entries */
-#define UNIQUE_ENTRYHI(idx) (CKSEG0 + ((idx) << (PAGE_SHIFT + 1)))
-
atomic_t kvm_mips_instance;
EXPORT_SYMBOL(kvm_mips_instance);
Thanks. That looks good to me.
Reviewed-by: Markos Chandras <[email protected]>
--
markos | http://www.linux-mips.org/archives/linux-mips/2014-01/msg00154.html | CC-MAIN-2015-06 | refinedweb | 321 | 55.1 |
This page describes a work in progress, there are no KDE Applications running on Windows Mobile (that I know of). It is only intended to give a starting point to developers for cross compilation and Windows CE related issues. Since Qt is already ported for Windows CE it should be possible to get some KDE applications running on that Platform.
First thing you need is Visual Studio professional or standard edition (preferably with service pack 1).
Be aware that the express editions are explicitly not supported for Windows CE development.
” )
Next get up your Visual Studio Command Prompt which should be accessible from the Windows Start Menu. Change the working directory to qt-directory and start the configuration. There should be a configuration can get CeGCC here:
Or to build the latest Version yourself see:.0 does not load the libgcc_s_sjlj-1.dll libstdc++-6.dll properly, but a fix is in the works, see:. This should be easier then working with Cabwiz to generate the setup.xml
Microsoft Cabwiz is part of Microsofts Windows Mobile SDKs
I will try to demonstrate the usage of Cabwiz and the inf file format by creating a small configuration example to install the Qt example Tetrix, which comes as part of the Qt package, on a Windows Mobile Device.
If you built Qt using msvc you need to deploy 4 files to the Windows CE device:
Tetrix.exe ; the program <yourqtdir>\examples\widgets\tetrix\release QtCore4.dll ; shared Qt library <yourqtdir>\lib QtGui4.dll ; shared Qt library <yourqtdir>\lib msvcr90.dll ; choose the MS Visual C runtime according to your developing environment
The goal is to get the installer to install the program to \Program Files\Tetrix and copy the shared libraries into the Windows folder, so other applications can find them as well, with also inserting a link in Start Menu\Games to our Tetrix.exe.
To use Cabwiz for that, you need to provide it with an [information file Information file documentation:] which details the configuration of the deployment package and which files there are to Include.
An inf file consist of different Sections you have to provide, the first one of those is CEStrings which defines the Application Name and the InstallDir where the files are to be placed.
[CEStrings] AppName="Tetrix" InstallDir=%CE1%\%AppName%
With the % tags we tell the interpreter that AppName and CE1 are variables. CE1 is defined by Windows Mobile and stands for the Program Files directory.
Next the Version section:
[Version] Signature="$Windows NT$" CESignature="$Windows CE$" Provider=”Cabwiz”
Provider is meant for the creator of the application and is used to help identify the program if you leave it at Cabwiz WinCE will refer to your program as “Cabwiz Tetrix”.
To create a shortcut that is later inserted into the start menu:
[Shortcuts] %AppName%,0,”tetrix.exe”
AppName is the name for our shortcut, 0 means its a file (1 would mean its a folder), and “tetrix.exe” is the executable. The path used for the shortcut is taken from InstallDir by default
Now to the files. First you have to tell Cabwiz where it can find the files you want to include in the cab file. So for each directory you have to create a name in the SourceDiskNames Section. To keep it simple all the files in this example are in c:\tetrix
[SourceDisksNames] 1=,"arm files",,c:\tetrix
To declare the SourceFiles, the files to include in the cabinet file:
[SourceDisksFiles] tetrix.exe=1 QtCore4.dll=1 QtGui4.dll=1 msvcr90.dll=1
Those files all are in c:\tetrix defined in section SourceDiskNames as 1.
Since the goal is to put the shared libraries in the Windows directory and the Tetrix.exe under Program files Cabwiz has to be made aware that they are different. For this reason you can create “copyfile sections” that you can name yourself. For example the two sections:
[ProgramFiles] tetrix.exe
[SharedLibs] QtCore4.dll,,,0xA0000000 QtGui4.dll,,,0xA0000000 msvcr90.dll,,,0xA0000000
Only the file name is mandatory here, the arguments are optional. With the flag 0xA0000000 you tell the installer that those files are shared, and should only be replaced if they are newer then files that may be already on the device. For more on that see:
Two sections are left which are needed to create a .cab file, DefaultInstall and DestinationDirs:
[DefaultInstall] CeShortcuts=Shortcuts CopyFiles=ProgramFiles,SharedLibs
DefaultInstall just declares how the shortcuts and file lists are named.
Finally, to declare where everything should be copied to:
[DestinationDirs] Shortcuts=,%CE14% ProgramFiles=,%InstallDir% SharedLibs=,%CE2%
Notice again the CE variables, for more information see the Mobile SDK help files.
Thats it, now all you have to do is to start Cabwiz from the command line and provide the .inf file. for a more detailed reference see: or just use lcab.
Tetrix.inf:
[CEStrings] AppName="Tetrix" InstallDir=%CE1%\%AppName% [Version] Signature="$Windows NT$" CESignature="$Windows CE$" Provider=”Company”
[Shortcuts] %AppName%,0,”tetrix.exe”
[SourceDisksNames] 1=,"arm files",,c:\tetrix
[SourceDisksFiles] tetrix.exe=1 QtCore4.dll=1 QtGui4.dll=1 msvcr90.dll=1
[ProgramFiles] tetrix.exe
[SharedLibs] QtCore4.dll,,,0xA0000000 QtGui4.dll,,,0xA0000000 msvcr90.dll,,,0xA0000000
[DefaultInstall] CeShortcuts=Shortcuts CopyFiles=ProgramFiles,SharedLibs
[DestinationDirs] Shortcuts=,%CE14% ProgramFiles=,%InstallDir% SharedLibs=,%CE2%
Who needs standard C when he can have "Visual" C seems to be the reasoning. If you try to port native code from Linux to Windows CE or even from Windows to Windows CE you will soon run into walls because Microsoft does not provide you with a C-Library that is in any way complete or following standards. This is ugly when some parts are missing like errno, fcntl, types, stat, ... but gets even more ugly when there are the headers, but not the library functions like Microsoft's complete string.h is the line:
#include <stdlib.h>
nothing more. Maybe the best way to go here would be to delete those "dummy files" or reroute them to a compatibility library? * pcre * qca2 * qimageblitz * redland * strigi * soprano * tiff * update-mime-database * windbus * zlib ( ported V 1.1.4 ).
built, not tested yet
built, not tested yet
built, not tested yet | https://techbase.kde.org/index.php?title=Projects/KDE_on_Windows/Windows_CE&diff=prev&oldid=47393 | CC-MAIN-2018-30 | refinedweb | 1,024 | 62.88 |
Thread example was loosely based on Richter’s example in his article on Safe Thread Synchronization. Instead of rewriting his example, I will just link to it.
His example properly demonstrates the problem with a Finalizer thread attempting to lock on the object. However Jack goes on to say that locking on
this in an ordinary method is fine. I still beg to differ, and have a better code example to prove it.
Again, suppose you carefully craft a class to handle threading
internally. You have certain methods that carefully protect against
reentrancy by locking on the
this keyword. Sounds great in theory, no?
However now you pass an instance of that class to some method of another
class. That class should not have a way to use the same
SyncBlock for
thread synchronization that your methods do internally, right?
But it does!
In .NET, an object’s
SyncBlock is not private. Because of the way
locking is implemented in the .NET framework, an object’s
SyncBlock is
not private. Thus if you lock
this, you are using to the current
object’s
SyncBlock for thread synchronization, which is also available
to external classes.
Richter’s article explains this
well.
But enough theory you say, show me the code! I will demonstrate this
with a simple console app that has a somewhat realistic scenario. Here
is the application code. It simply creates a
WorkDispatcher that
dispatches a
Worker to do some work. Simple, eh?
class Program { static void Main() { WorkDispatcher dispatcher = new WorkDispatcher(); dispatcher.Dispatch(new Worker()); } }
Next we have the carefully crafted
WorkDispatcher. It has a single
method
Dispatch that takes a lock on
this (for some very good
reason, I am sure) and then dispatches an instance of
IWorker to do
something by calling its
DoWork method.
public class WorkDispatcher { int dispatchCount = 0; public void Dispatch(IWorker worker) { Console.WriteLine("Locking this"); lock(this) { Thread thread = new Thread(worker.DoWork); Console.WriteLine("Starting a thread to do work."); dispatchCount++; Console.WriteLine("Dispatched " + dispatchCount); thread.Start(this); Console.WriteLine("Wait for the thread to join."); thread.Join(); } Console.WriteLine("Never get here."); } }
From the look of it, there should be no reason for this class to
deadlock in and of itself. But now let us suppose this is part of a
plugin architecture in which the user can plug in various
implementations of the
IWorker interface. The user downloads a really
swell plugin from the internet and plugs it in there. Unfortunately,
this worker was written by a malicious eeeeevil developer.
public class Worker : IWorker { public void DoWork(object dispatcher) { Console.WriteLine("Cause Deadlock."); lock (dispatcher) { Console.WriteLine("Simulating some work"); } } }
The evil worker disrupts the carefully constructed synchronization plans
of the
WorkDispatcher class. This is a somewhat contrived example, but
in real world multi-threaded application, this type of scenario can
quite easly surface.
If the
WorkDispatcher was really concerned about thread safety and
protecting its synchronization code, it would lock on something private
that no external class could lock on. Here is a corrected example of the
WorkDispatcher.
public class WorkDispatcher { object syncBlock = new object(); int dispatchCount = 0; public void Dispatch(IWorker worker) { Console.WriteLine("Locking this"); lock (this.syncBlock) { Thread thread = new Thread(worker.DoWork); Console.WriteLine("Starting a thread to do work."); dispatchCount++; Console.WriteLine("Dispatched " + dispatchCount); thread.Start(this); Console.WriteLine("Wait for the thread to join."); thread.Join(); } Console.WriteLine("Now we DO get here."); } }
So Jack, if you are reading this, I hope it convinces you (and everyone
else) that locking on
this, even in a normal method, is a pretty bad
idea. It won’t always lead to problems, but why risk it?
6 responses | https://haacked.com/archive/2006/08/08/threadingneverlockthisredux.aspx/ | CC-MAIN-2019-51 | refinedweb | 614 | 59.5 |
I'm working on a module named Class::Sniff. While it's primarily for finding "code smells" in object-oriented hierarchies, it also lets you graph those hierarchies. For example, here's the graph of the B:: modules (Perl's backend modules).
The problem I'm trying to solve is that requiring a package via string eval creates a symbol table entry for that package, even if the require fails. Thus, I can't tell if the package is "real" or not (e.g., if it's likely to get called and should be added to my graph).
Jesse Vincent sent a bug report about Class::Sniff detecting non-existent packages.
Seems Jesse has a lot of code like this:
eval "require RT::Ticket_Overlay";
if ($@ && $@ !~ qr{^Can't locate RT/Ticket_Overlay.pm}) {
die $@;
};
[download]
Well, that seems quite reasonable. Except for this:
#!/usr/bin/env perl -l
use strict;
use warnings;
print $::{'Foo::'} || 'Not found';
eval "use Foo";
print $::{'Foo::'} || 'Not found';
eval "require Foo";
print $::{'Foo::'} || 'Not found';
__END__
Not found
Not found
*main::Foo::
[download]';
[download]:
If you do that with a non-existent package which nonetheless has a symbol table entry, it still has no keys in its symbol table. However, if you do that with a module which exists but failed to compile,)
Of course, even this is problematic. As Rafael Garcia-Suarez pointed out, a nested stash will create its parent stash (e.g., CGI::Application will create a symbol table for CGI, even if the latter is not loaded).
I can't just check %INC because inlined modules won't be there unless the author remembers to add them manually. Could I hook require or add a coderef to @INC? It's a strange edge case I'm dealing with, but for larger, more complex applications, it's a problem.
Cheers,
Ovid
The official Perl 6 wiki.
I think you mostly want that so you can check whether something can influence how a class behaves, which seems to point mostly to @ISA to me (barring weirdo UNIVERSAL or MRO trickery). If you want to check whether @Some::Package::ISA exists, you can look into the namespace hash:
#!perl -w
use Data::Dumper;
sub isa_exists {
my ($package) = @_;
exists ${"$package\::"}{ISA} and defined *{${"$package\::"}{ISA}}{
+ARRAY};
};
print "Now you don't\n";
print isa_exists('Some::Package'),"\n";
eval q(package Some::Package;@ISA='foo';);
print "Now you see it\n";
print isa_exists('Some::Package'),"\n";
print "Now you don't\n";
print isa_exists('Some::Other::Package'),"\n";
print "Now you don't\n";
print isa_exists('Some::Other::Package'),"\n";
eval q(require A::Package::That::Doesn't::Exist);
print "A::Package::That::Doesn't::Exist ($@)\n";
print isa_exists("A::Package::That::Doesn't::Exist");
[download]
I've been chasing my tail over something extremely similar myself of late - to the point of drafting a SoPW node. Ovid has, unknown to him 'til now, saved me the trouble of the posting and provided the clarity/insight I was seeking...
My algorithm did the simplest test (eval { require ...}) first, going on to test for a local file scoped package after that - so a non-existent package always exists if a test is made for the existence of the stash.
It never even occurred to me that, as Ovid points out, the require causes vivification of a stash for the package - whether or not it exists ... doh !!!
Many thanx again Ovid.
.oO(Wonder whether my sig should be changed to read: "A user level that overstates my experience by an order of magnitude" ?)
Update:
Nearly, but not quite - some more extensive testing (appears to) show that a stash is always created - but would appear to be empty i.e. no keys, if the package is non-existent. So whereas require creates the stash and (sometimes) partially populates it, the test for the stash also appears to create a stash - but seemingly always empty ... or more accurately, an existent package always has the standard packages & classes as its keys.
Any road up, it's close enough for jazz at this joint :D))
It would seem like a bug internally for the way the symbol tables are handled. If the symbol tables are creating stuff automatically without checking it would appear to be similar to nested Perl hashes, it just creates something(at least for some things).
my %hash;
print "Ovid exists '" . (exists $hash{'Ovid'}) . "'\n";
if ($hash{'Ovid'}{'tall'}) { print "Ovid is tall\n" }; # I hate this b
+ehavior!
print "Ovid exists '" . (exists $hash{'Ovid'}) . "'\n";
[download]
The question would be, is that a bug that should be fixed or something that should be documented?
All of that being said, I don't any answer for your problem Ovid, you have covered any idea I would have come up and a few I did not know about.
You could add a coderef to @INC, but no matter how you go about it, that will take a lot of work. First you need to implement the full handling of @INC, including coderefs that pass back filehandles and coderefs that pass back objects (then the INC method gets called). At this point there are three ways to go.
The simplest is to say that if you did not find the module, you are going to mark it as not loaded. This is sufficient for Jesse Vincent's use case.
Somewhat more complex is to add a __DIE__ handler so that if that module fails to compile you might be told about it.
The hardest is to manually try to detect whether the module successfully loaded, which would mean returning a filehandle that will track whether it was successfully read up to the end, or __END__ or __DATA__. (It is up to you whether to worry about the possibility of arriving at those tokens within strings.)
Even after all of this work, are you done? No! Because people sometimes create small helper classes within a file, and these are real. To handle those I'd suggest that any package that is found which has subroutines in it that you never saw anyone trying to load are to be considered successfully loaded. That also will cover modules that were loaded before Class::Sniff was.
I would say that the fact that overriding CORE::require doesn't override use should be documented as a bug. And I would stick with the simplest solution, which is that you only ignore modules that your coderef said were not found.
Isn't this approximately the same issue that base tries to deal with when it sets $VERSION to "-1, set by base.pm" if it's not already set, based solely on whether require returns true or not? And the same issue promptly ignored by advocates of parent? Those advocating parent seem to be (apologies if I misrepresent them - my goal is not a straw-man) that you Really Shouldn't Do That, So Please Stop It.
For example, if you try to:
eval "use Foo;"
[download]
if (eval "use Foo;") {
# success
} else {
# failure (not found, failed to compile, who cares?)
}
[download]
Introspection in perl is fine ... as long as you don't push the edge cases. Then it gets hard. I'm guessing Perl 6 will make this easier, but you'd know better than I :-)
How about over-riding eval so that you can compare the environment before and after executing the eval?
eval isn't directly over-rideable, but with PPI and a small source filter (eeuuww!) maybe you can replace all the evals with my_evals. my_eval would have a prototype and take a string parameter or a sub-ref, so that it should "just work" with both string- and block-evals.
My savings account
My retirement account
My investments
Social Security
Winning the lottery
A Post-scarcity economy
Retirement?! You'll have to pull the keyboard from my cold, dead hands
I'm independently wealthy
Other
Results (75 votes),
past polls | http://www.perlmonks.org/index.pl?node_id=744049 | CC-MAIN-2014-42 | refinedweb | 1,328 | 69.92 |
CFD Online Discussion Forums
(
)
-
FLUENT
(
)
- -
Open files in a UDF
(
)
Riccardo
June 25, 2002 04:09
Open files in a UDF
Hi everyone! could you tell me in which way I can access data stored in a file and use it in UDF to define a profile? I'm not so expert in C language, and the UDF i've made is this one:
#include "udf.h" DEFINE_PROFILE(wall_temp,thread,position) { face_t f; real cond; FILE *fp; fp=fopen("space/ICADdisk/provafile.ext,"r"); begin_f_loop (f,thread) {
fscanf(fp,"%g",cond);
F_PROFILE(f,thread,position) = cond; } end_f_loop (f,thread) }
Fluent can compile the UDF, but when I try to initialize the solution to start the iterations, i get a"SEGMENTATION VIOLATION" error. What can I do? Thank you! Riccardo
Greg Perkins
June 25, 2002 20:04
Re: Open files in a UDF
Its probably best not to do file manipulation inside a DEFINE_PROFILE rotuine - since this can be executed frequently during the solution process.
I've been using file read/writes for a long time, without problems. In the past I found it best to use format of "%f" for single precision and "%lf" for double precision in the read statements. Never tried the "%g" format.
Here's some other tips:
1. open and then close files - don't leave open as in the above code 2. probably do the file stuff in a routine called from either a DEFINE_INITIALIZE or DEFINE_ON_DEMAND. 3. if you use parallel fluent - you will need to pay special attention when writing files. reading should be ok, but writing is best performed by only the host or a specified node - this will require #ifdef switches in your code.
Good Luck
Greg
Lanre
June 26, 2002 10:45
Re: Open files in a UDF
fp=fopen("space/ICADdisk/provafile.ext,"r");
Missing a quote here------------------^
All times are GMT -4. The time now is
21:16
. | http://www.cfd-online.com/Forums/fluent/29942-open-files-udf-print.html | CC-MAIN-2016-44 | refinedweb | 320 | 72.36 |
Hello all,
I have an app that I'm building, slowly as I learn, and I'm having some issues with the JDialog implementation.
I'm using NetBeans which generates pretty much everything except for the gutsy bits of code needed... (i.e. code for button actions, etc)
I have my main application which uses a JFrame. I have created a menu using the JMenu items, so basically it's all Swing components so far.
I wanted to add a JDialog pop-up window when the user clicks Help->About in the main JFrame window. I have generated the code for the event listeners and all of that. I've even generated the JDialog swing component, again using NetBeans auto content. I've populated my JDialog with a little bit of text and viola, Insta-JDialog.
However, I'm having some issues calling it from the main JFrame. I've gone through the tutorials/examples on the Sun site, which are mediocre at best. Yes they show oodles of ways to use the different Dialog options and buttons, but don't actually show how to implement a JDialog Swing component from a pre-existing JFrame swing application.
I've found this thread: which gives a pretty good example, but I'm not certain exactly where to place the code. I've been comparing the code in that thread to the code generated by NetBeans, and I can sort of see where it is possible to place it, but it doesn't make sense that NetBeans would require me to write all of that when everything else in NetBeans is simple 'click & generate' style of adding things.
No I'm not just looking for the easiest way out of this, I'm truly perplexed by where to place anything. Comparing handwritten examples with pre-generated examples always proves difficult. (Yes, a drawback of using a full IDE, but non-the-less...)
At any rate, here is what I've got so far:
public class MainMenu extends javax.swing.JFrame { /** Creates new form MainMenu */ public MainMenu() { initComponents(); } ... ... // inside here lies all the code generated for the JFrame Menu and it's Listeners ... ... private void DataPortviewChkBox(java.awt.event.ItemEvent evt) { // use CheckBox to open DataPortview JFrame window Object source = evt.getItemSelectable(); if (DataPortviewChkBox.isSelected()) frame4.setVisible(true); else frame4.setVisible(false); } ... private void AboutMenuItemActionPerformed(java.awt.event.ActionEvent evt) { // TODO add your handling code here: } private void HelpMenuItemActionPerformed(java.awt.event.ActionEvent evt) { // TODO add your handling code here: } ... public static void main(String args[]) { java.awt.EventQueue.invokeLater(new Runnable() { public void run() { new MainMenu().setVisible(true); } }); }
As you can see, I've already got pretty much everything laid out. And as I said before, the JDialog has already been created via NetBeans SwingWindow options.
I though I'd be able to call the JDialog using the same method that I used to call the additional JFrame with the CheckBox function above (lines 12-19), but I was obviously wrong.
So, given all of this, am I actually to write out the Dialog contents as specified in the link to the thread above? And if so, I'm assuming that is to be dropped into the main JFrame file, but again I'm not certain where. I've tried to put it just prior to the action event for the About button (line 20), but that didn't seem to work either.
Any insight would be greatly appreciated.
Thanks. | https://www.daniweb.com/programming/software-development/threads/233309/jdialog-using-netbeans | CC-MAIN-2018-39 | refinedweb | 578 | 64.91 |
I am in an introductory class of Java programming and our first assignment is to create a program that can compute -b +- √b^2 - 4ac / 2a. This is my first time doing anything with java and I'm rather lost on the if/else statement and doing the square roots. Any help with this would be greatly apperciated.
So far this is all I have
import java.util.Scanner;
import javax.swing.JOptionPane;
public class homework1
{
public static void main(String[] args)
{
//Display message
double aceof = (double)
double bceof = (double)
double cceof = (double)
System.out.print("Enter A")
double aceof = input.nextDouble();
System.out.print("Enter B")
double bceof = input.nextDouble();
System.out.print("Enter C")
double cceof = input.nextDouble();
if (aceof == 0) | http://www.javaprogrammingforums.com/whats-wrong-my-code/5355-help-quadratic-forumla-equation-java-please.html | CC-MAIN-2014-52 | refinedweb | 123 | 52.56 |
Hi,
Need some help with python code, I subscribed to Daily Coding Problem platform, where they supply a coding question everyday. However, there seems to be some python code involved in the question. I am well versed with js and java. I would really appreciate, if someone could explain what this python code is doing, either in plain english or convert it into js/java.
This problem was asked by Jane Street.
cons(a, b) constructs a pair, and
car(pair) and
cdr(pair) returns the first and last element of that pair. For example,
car(cons(3, 4)) returns
3 , and
cdr(cons(3, 4)) returns
4 .
Given this implementation of cons:
def cons(a, b): def pair(f): return f(a, b) return pair
We need to implement cdr and car function.
Please don’t provide implementation of car or cdr, just explain cons function in either plain english or java.
Thanks in advance,
Regards
Nishkarsh | https://forum.freecodecamp.org/t/explanation-of-python-code/423254 | CC-MAIN-2022-27 | refinedweb | 158 | 65.12 |
Editor's Note: This article was originally
written after the release of Mac OS X 10.2 Jaguar, but the
information is still relevant even with the release of Panther. Perl 5.8 is included in the
installation of Mac OS X v10.3 Panther, but you may want to
install your own version if you plan to customize it; or if you
are still running Jaguar.
The newest release of Apples operating system, OS X v10.2 (Jaguar) is filled with many welcome improvements and additions. The integrated Address Book, improved Mail, System Preference tweaks—all good things. But what really excited me was the new stuff under the hood: bash, Python, gcc 3, Ruby. About the same time Jaguar was in development, work was finishing up on my beloved Perl, version 5.8.1.
A quick trip to Jaguars Terminal showed me that this version didnt make it into the default install:
[cpu:~] user% perl -v
This is perl, v5.8.1-RC3 built for darwin-thread-multi-2level
As is almost always the case with UNIX-based systems, there is a workaround. So, if youre looking to get Perl 5.8 up and running on Jaguar, Im here to tell you how.
If youve installed shell applications before, you know how the process usually goes: download the source, configure the source, compile the source, and install the source. Perl is no different, but does require some patience; compiling the package could take you an hour or more, depending on the speed of your machine.
Some caveats to consider before starting:
/Library/Perl
The first step is to download Perl 5.8.1 itself. You can find it on any number of CPAN mirrors worldwide. I opened up the Terminal and entered the following commands (you can replace the URL below with one that reflects your closest mirror):
[cpu:~] user% cd /usr/local/
[cpu:~] user% sudo mkdir src
[cpu:~] user% sudo chown root:staff src
[cpu:~] user% sudo chmod 775 src
[cpu:~] user% cd src
[cpu:~] user% curl -O
[cpu:~] user% tar zxvf perl-5.8.1.tar.gz
[cpu:~] user% cd perl-5.8.1
These instructions created a /usr/local/src directory with adequate permissions to move around. The curl -O command downloaded a copy of Perl 5.8.1 (about a 10 MB file) into this directory. Next, I extracted the archive into a directory called perl-5.8.1, and then moved into it.
/usr/local/src
curl -O
perl-5.8.1
Since Perl is already installed in OS X (and perhaps already tweaked and configured to your liking), you may want to think about backing up your existing installation. Perl is predominantly located in three places: the private system library in /System/Library/Perl, the local libraries in /Library/Perl, and the binaries (like perl, pod2html, h2xs, etc.) under /usr/bin. Depending on your needs, you may also have modules installed under /Network/Library/Perl. If youre worried about things going wrong, you may want to save copies of these directories.
/System/Library/Perl
perl
pod2html
h2xs
/usr/bin
/Network/Library/Perl
Most shell applications offer you the choice of where to install, and Perl is no different. Some users install large applications in a directory called /opt, others follow Finks behavior and drop them into /sw, and still others follow the standard /usr/local. Ill be using the default Perl layout for OS X which is determined by hints/darwin.sh. This file is included as part of the Perl distribution and is utilized during our initial configuration. Opening that file in a text editor, well see that a default install is defined as:
/opt
/sw
/usr/local
hints/darwin.sh
# Default install; use non-system directories
prefix='/usr/local'; # Built-in perl uses /usr
siteprefix='/usr/local';
vendorprefix='/usr/local';
usevendorprefix='define';
# Where to put modules.
privlib='/Library/Perl'; # Built-in perl uses /System/Library/Perl
sitelib='/Library/Perl';
vendorlib='/Network/Library/Perl';
If we dont specifically tell Perl to be installed elsewhere, the binaries will be installed into /usr/local/bin, and our Perl modules (our @INC) will be set for /Library/Perl. This means that Apples system library, /System/Library/Perl, wont be utilized (and, if you were fickle enough, could be deleted). If you want the Perl modules installed in the exact same locations as Apples current Perl 5.6.0, then youll want to assign an install prefix of /usr, which hints/darwin.sh associates with the following directories:
/usr/local/bin
@INC
/usr
# I'm building/replacing the built-in perl
siteprefix='/usr/local';
vendorprefix='/usr/local';
usevendorprefix='define';
# Where to put modules.
privlib='/System/Library/Perl';
sitelib='/Library/Perl';
vendorlib='/Network/Library/Perl';
When installing, its always a good idea to make sure youre starting from a fresh build directory. This ensures that theres no old configuration data or libraries hanging around and messing up your installation. Heres the command (and dont worry if this is your first time messing around with Perl—this command wont hurt anything):
rm -f config.sh Policy.sh
If you want to experiment with different Configure options, then be sure to run the following to start afresh (this command duplicates the above deletion, along with cleaning out stuff created during a Configure):
Configure
make distclean
Now comes the first part of your great journey with Perl 5.8.1: the configuration. Perl needs to know hundreds of bits of information to be correctly compiled for your architecture. Thankfully, the Configure script handles most of this for you. Before moving on, however, you need to set an environment variable:
setenv LC_ALL C
echo "setenv LC_ALL C" >> ~/.tcshrc
If youre curious about what those commands are doing, then youll want to check the INSTALL file, as well as man perllocale. OS X has one of the necessary environment variables set (LANG), but without the above, youll get annoying error messages each time you try to run a script. The above assumes the tcsh shell; if you prefer bash, use echo export LC_ALL=C >> ~/.bash_profile instead.
INSTALL
man perllocale
LANG
tcsh
bash
echo export LC_ALL=C >> ~/.bash_profile
Now, enter one of the following commands, depending on where you want Perl installed:
# you want to use the default Perl directories:
./Configure -de [email protected] [email protected]
# you want to install Perl the same way Apple did:
./Configure -de -Dprefix=/usr [email protected] [email protected]
The only difference between these two lines is the -Dprefix flag, which determines where Perl should be installed (the rest of this article assumes you ran the first one). The -de helps the configuration go more quickly by assuming the defaults for most of the questions, as well as not prompting you for further configuration of the generated config.sh. The -Dcf_email and -Dperladmin flags set the email addresses for the installer and administrator of this Perl installation, respectively. These flags are optional, as the Configure will automatically do its best to guess them.
-Dprefix
-de
config.sh
-Dcf_email
-Dperladmin
You may also want to look at -Dotherlibdirs. With this option included in the Configure statement, you can add more library directories to Perls @INC. For example, if I wanted each of my users to have their own Perl library, I could use -Dotherlibdirs=~/Library/Perl. If I log in as morbus, Ill be able to stick my own modules in /Users/morbus/Library/Perl without interfering with the rest of my Perl installation. Of course, if I didnt use -Dotherlibdirs, Id be able to duplicate the same effect in my code with use lib.
-Dotherlibdirs
-Dotherlibdirs=~/Library/Perl
morbus
/Users/morbus/Library/Perl
use lib
Thankfully, the Configure command is the most brain-consuming part of the installation. Once the Configure is run (it took two minutes on my dual 400mhz G4), youll be home free. It will usually finish by saying, You must run make, so go ahead and follow its orders:
[cpu:~] user% make
This command took about 15 minutes on my machine. It should complete with the following: Everything is up to date. Type make test to run test suite. Again, the install knows best, so run:
[cpu:~] user% make test
The testing process is a common part of a Perl-related installation. Youll see a large number of tests run, most returning a status of OK, and some skipped due to our platform. You should only get two failures, both related to Berkeley DB:
ext/DB_File/t/db-btree...............#
# This test is known to crash in Mac OS X versions 10.1.5 (or earlier)
# because of the buggy Berkeley DB version included with the OS.
#
FAILED at test 0
This test, as well as db-recno, fails because OS X ships with an old version of a common database library (Berkeley DB, version 1.x). Perl gives us more advice concerning these failures:
db-recno
# You can safely ignore the errors if you're never going to use the
# broken functionality (recno databases with a modified bval).
# Otherwise you'll have to upgrade your DB library.
I wont be showing you how to upgrade your Berkeley DB library, but bug reports concerning this have been filed both at OpenDarwin.org and Apple (#2923078 and #3021743). The make test took about ten minutes on my G4 and finished by saying: Failed 2 test scripts out of 657, 99.70% okay.
make test
And now, the final step:
[cpu:~] user% sudo make install
If you ran the first Configure command above, youll see a progression of files installing into /Library/Perl (the Perl modules), /usr/local/bin (the Perl binaries) and /usr/local/share/man/ (Perl-related documentation). This should finish in less than five minutes.
/usr/local/share/man/
Now that youve installed the new Perl, its easy enough to make sure it worked with perl -v (you can also check the configuration with perl -V). But a stronger, more certifiable way to test it is by running one of the more common Perl programs: CPAN, the Comprehensive Perl Archive Network. As its name suggests, CPAN is a huge repository of Perl code, and also the primary means of installing and updating Perl modules.
perl -v
perl -V
Modules have been written for every conceivable purpose, from interfacing with database applications to rendering graphics to parsing various types of specialized files. Fresh installations of Perl come with a number already installed; you can find them by browsing through the /Library/Perl directory. But if none of the included modules meet your needs, then CPAN is your next step.
Installing a new Perl module by hand, while not extraordinarily difficult, can be a bit of a hassle. It involves downloading and unpacking the module, configuring it, and building it, all the while making sure there are no unresolved dependencies or conflicts between modules. If youre the sort of person who enjoys that kind of process, go right ahead. There is an easier way to go, however.
CPAN.pm is a Perl module whose purpose is to automate the downloading and building of modules. It takes care of the tricky situations where certain modules require other modules to run, and those modules in turn require additional modules, and so forth. It also supports the building and installation of bundles, which are groups of related modules that work together to handle a particular set of functions.
CPAN.pm
To start CPAN, enter the following:
[cpu:~] user% sudo perl -MCPAN -eshell
If this is the first time youre running the module, youll be asked a series of questions about how your system is set up. In most instances, the default answers will work, but make sure that the network setup questions are answered correctly so that CPAN.pm can get online and properly fetch new modules. The answers you give will be saved in ~/.cpan/CPAN/MyConfig.pm. If your setup changes, or you think you answered a prompt incorrectly, you can change your settings by opening that file with a text editor and modifying the appropriate lines.
~/.cpan/CPAN/MyConfig.pm
Eventually, youll see something like this:
cpan shell -- CPAN exploration and modules installation (v1.61)
ReadLine support available (try 'install Bundle::CPAN')
cpan>
Go ahead and install this Bundle::CPAN thing. Perl modules that are closely related to each other (like all Digest modules, all Net modules, etc.) are packaged into bundles, creating quick combined installs. To install Bundle::CPAN, just use this command:
Bundle::CPAN
Digest
Net
cpan> install Bundle::CPAN
Ultimately, thats all you need to know about installing modules for your shiny new Perl (and any previous version of Perl, actually). Weve both verified that our Perl 5.8.1 installation is working smoothly and improved our CPAN capabilities. As the above command executes, youll notice CPAN downloading, extracting, testing, and installing modules and their prerequisites.
Another useful CPAN command is simply r. Once run, CPAN will search through your Perl modules, comparing your installed versions with the latest versions available. When finished, itll show you a compiled list:
r
cpan> r
Package namespace installed latest in CPAN file
Math::BigFloat 1.35 1.36 T/TE/TELS/math/Math-BigInt-1.61.tar.gz
Math::BigRat 0.07 0.08 T/TE/TELS/math/Math-BigRat-0.08.tar.gz
Pod::Text 2.2 2.21 R/RR/RRA/podlators-1.24.tar.gz
To get up to date, its a simple matter of issuing install Math::BigFloat, install Math::BigRat, etc.
install Math::BigFloat
install Math::BigRat
Another useful CPAN command is force, which will unconditionally execute a command, regardless of other results. If you recall the caveats at the beginning of the article, modules that have been built with XS or C code under 5.6.0 wont work under 5.8.1, forcing you to recompile and reinstall. But what if you already have the latest version of, say, HTML::Parser? CPAN complains:
force
cpan> install HTML::Parser
HTML::Parser is up to date.
Thats where force comes in. Simply prepend it to any CPAN command and itll execute unconditionally (i.e., whether the module is already installed, tests fail, prerequisite libraries cant be found, etc.):
cpan> force install HTML::Parser
Running install for module HTML::Parser
.
.
.
The most common use of Perl, and one of its strong points, is CGI: processing and outputting data for Web applications.
Before you can use Perl for CGI scripts, you need to start the Apache Web server. If you havent done so already, go into the Sharing pane under System Preferences and click the button to turn Web Sharing on. After a few seconds, Apache will start running in the background. You can type ps -ax in a Terminal window to display a list of running processes, and you should see /usr/sbin/httpd in the output. Thats Apache, also known as the HTTP daemon. With Web Sharing switched on, Apache should automatically start every time you boot up your Mac. Now, if you open a browser window and go to, you should see the default index page that Apache serves up when its freshly installed.
ps -ax
/usr/sbin/httpd
Apache looks in the /Library/WebServer/Documents directory for Web content to serve. Each user on your system also has his or her own area for Web content: the Sites directory in the users home directory. Users personal Web directories can be accessed in a browser by prefixing the users name with a tilde (~). So, for example, you can view the Web site of the user named morbus by going to.
/Library/WebServer/Documents
Perl (and other) CGI scripts reside in /Library/WebServer/CGI-Executables on a standard Mac OS X installation. Two CGI scripts should already be in this directory: test-cgi and printenv. test-cgi is a shell script that displays the values of various environment variables. printenv performs a similar function, but is written in Perl.
/Library/WebServer/CGI-Executables
test-cgi
printenv
For the vast majority of Perl CGI applications, youll want to use the CGI.pm module which makes parsing form data and outputting HTML easy. Once your Perl application is written, place it in the CGI directory (/Library/WebServer/CGI-Executables), give it execute permissions (chmod 755 scriptname), and then access it via the Web (). For more information on setting up and maintaining your Apache webserver, try this series at OReilly. Also worthwhile reading at the OReilly site is Build Your Own Apache Server with mod_perl.
CGI.pm
chmod 755 scriptname
At this point, your Perl 5.8.1 installation should be rock-solid, complete with updated modules and the knowledge necessary to rebuild 5.6.0 extensions. Future updates to the Perl 5.8 branch should follow this same installation procedure.
Get information on Apple products.
Visit the Apple Store online or at retail locations.
1-800-MY-APPLE | http://developer.apple.com/Internet/Opensource/Perl.Html | crawl-002 | refinedweb | 2,824 | 62.88 |
Access over 20 million homework documents through the notebank
Get on-demand Q&A homework help from verified tutors
Read 1000s of rich book guides covering popular titles
in westlandia, the public holds 50% of m1 in the form of currency, and the required reserve ratio is 20%. <?xml:namespace prefix = o to hold 50% of the loan in currency, only $400 × 0.5 = $200 of the loan will be deposited in round 2 from the loan granted in round 1.)
2. how does your answer compare to an economy in which the total amount of the loan is deposited in the banking system and the public does not hold any of the loans in currency? (hint: do another table with none of the loan proceeds held in currency.)
3. what does this imply about the relationship between the public’s desire for holding currency and the money multiplier?3. what does this imply about the relationship between the public’s desire for holding currency and the money multiplier? | https://www.studypool.com/questions/1264/macoeconomics-4 | CC-MAIN-2020-16 | refinedweb | 169 | 56.49 |
In TFS 2008, we had a lot of requests to make the Build Details View customizable. In 2010, we have taken the first step in that direction. In this post, I will explain how to change the Log View of the new Build Details View to show custom build information.
For reference you may want to check out Patrick’s blog post on adding custom build information in 2010 and my previous post on the Log View of the new Build Details View.
The Scenario:
One of the things we don’t show anymore on the build details view is the start and finish times of the build. We show duration instead. So, to “fix” this I want to at least show the start and finish times in the Log View. These won’t be perfect, but they will be the first and last things that happen.
Changes to the DefaultTemplate.xaml file:
First, I want to write out the start time and finish time as build information. But, I really don’t want to create a new class just to hold a string and a time. So, for this post I am going to cheat and use a type that is basically just a couple of Strings – PlatformConfiguration. I am really just doing this to keep the post reasonably short. I don’t recommend that you use this object in this way. In fact, as you may be able to tell from the code samples, I tried to use a Tuple<String, DateTime> object first, but all types of Generics aren’t fully supported by the Workflow Designer.
Anyway, here’s the XAML I added to the beginning of the build process (right after the first ending tag for Sequence.Variables).
<mtbwa:WriteBuildInformation x:
And this is added at the end of the XAML (right before the final ending tag for Sequence).
<mtbwa:WriteBuildInformation x:
Then I ran a build of a simple hello world application, but nothing shows up in the log view of the build details view. I tried again with Verbosity set to Diagnostic, still nothing. And that’s by design. Your build information is hidden by default. You have to register a converter for it and install that converter somehow with the Visual Studio client. We don’t provide a default converter for every kind of build information.
Creating my own converter to show my new type:
The next thing I had to do was create a class that can convert my new build information into a WPF paragraph. The class has to implement the interface IBuildDetailInformationNodeConverter (which can be found in the Microsoft.TeamFoundation.Build.Controls assembly). This interface has one property and one method. The property called InformationType must return the type of information node the converter works on (in my case that’s PlatformConfiguration). The method called Convert takes in the information node (and some other info) and returns an object. The object must be a WPF Paragraph (if not you will see a cast error in your log view output).
My converter class:
using System; using System.Windows.Documents; using Microsoft.TeamFoundation.Build.Client; using Microsoft.TeamFoundation.Build.Controls; namespace CustomBuildInformationConverters { class NameDateTimeTupleConverter : IBuildDetailInformationNodeConverter { public object Convert(IBuildDetailView view, IBuildInformationNode node, double parentIndent) { String description = node.Fields["Platform"]; String time = node.Fields["Configuration"]; return new Paragraph(new Run(String.Format("{0} : {1}", description, time))); } public string InformationType { get { return "PlatformConfiguration"; } } } }
I put this class into a Visual Studio AddIn that I created using the Wizard. For more information on creating a VS add-in, try this article. Here is the code to register/unregister my converter with the Build Details View:
public void OnStartupComplete(ref Array custom) { IVsTeamFoundationBuild VsTfBuild = (IVsTeamFoundationBuild)_applicationObject.DTE. GetObject("Microsoft.VisualStudio.TeamFoundation.Build.VsTeamFoundationBuild"); if (VsTfBuild != null) { _converter = new NameDateTimeTuple); } } private DTE2 _applicationObject; private AddIn _addInInstance; private NameDateTimeTupleConverter _converter;
And here is how the Log View looks now (arrow added for emphasis):
In conclusion, I just want to say that there are many other things you can do with a Paragraph than simply display text and you will see some of those things in upcoming posts.
Enjoy!
Hmm custom add-ins for every client bites when you have more than a few developers. We have this same issue with custom checkin policies. They are effectively useless to us because of the need to have every developer install client-side software.
BTW how can we insert build start/finish times given our builds are still MSBuild based and not WF-based. We have quite a large set of customized targets and it's very unlikely that they are getting ported to WF anytime soon. | https://blogs.msdn.microsoft.com/jpricket/2009/12/21/tfs-2010-displaying-custom-build-information-in-visual-studio/ | CC-MAIN-2017-17 | refinedweb | 776 | 55.13 |
Handling big integers in JavaScript
As a part of my training and to refresh myself on certain topics, I recently returned to solving katas from codewars. After few easy ones I jumped to 4 kyu kata with the following instruction:
Multiply two numbers! Simple! - The arguments are passed as strings. - The numbers may be way very large - Answer should be returned as a string - The returned "number" should not start with zeros e.g. 0123 is invalid
Sounds pretty easy, right? My first attempt looked like this:
function multiply(a, b) { const paramA = parseFloat(a.toLocaleString()); const paramB = parseFloat(b.toLocaleString()); return (paramA * paramB).toString() }
Unfortunately, the result was given with an exponential. So besides the decimal number I got '2.8308690771532805e+48'. That was not the solution I was (and the kata tests ;-)) looking for.
Here I found that
BigInt might be helpful in this case. It's a primitive wrapper object introduced in ES2020. Keep in mind that it's quite a new feature, and it won't be available in newer browser versions.
That's the solution for this task:
function multiply(a, b) { return (BigInt(a) * BigInt(b)).toString(); }
That's it. Thanks to
BigInt wrapper it's possible to represent every number which cannot be handler by the
number primitive.
Here's an example of how it works in the wild.
const regularBigNumber = 98172784239189284787326419872394817239487126349871263948716234; const bigNumberWrapped = BigInt(regularBigNumber); console.log(regularBigNumber) // Output: 9.817278423918929e+61 console.log(bigNumberWrapper) // Output: 98172784239189284787326419872394817239487126349871263948716234
It handles different formats of numbers, like hex, octal or binary. We can pass a number as a string - it will be converted as well.
That was the first time I needed to use
BigInt and probably I won't have many chances to use it in the future ;-).
In my opinion, however, it's important to know even less known features or obscure functionalities of the programming language in which we work every day.
For more details I suggest to read the MDN documentation | https://michalmuszynski.com/blog/big-integers-in-javascript/ | CC-MAIN-2022-33 | refinedweb | 330 | 59.3 |
I have json such as ["123","123a","12c3","1f23","e123","r123"]
as response from rest server.
I want to parse this json as Collection and iterate over it and make exec request over each element in it
such as :
SERVER + "/get?param=${el}"
where el will be 123,123a,12c3,1f23,e123 and r123
My question is how can I do it.
You can do something like this:
import org.json4s._ import org.json4s.jackson.JsonMethods._ object JSonToMap { def main(args: Array[String]) { implicit val fmt = org.json4s.DefaultFormats val json = parse("""{ "response" : ["123","123a","12c3","1f23","e123","r123"] }""") val jsonResp = json.extract[JsonResp] println(jsonResp) jsonResp.response.foreach { param => println(s"SERVER /get?param=${param}") } } case class JsonResp(response: Seq[String], somethingElse: Option[String]) }
Now you have a case class where the "response" member is a list of your strings. You can then manipulate this list however you need to create the requests to SERVER. | https://codedump.io/share/JYRBGCtNMwEF/1/gatling-convert-json-array-to-map | CC-MAIN-2017-09 | refinedweb | 158 | 60.41 |
Original Dev Thread
Background & Motivation
Currently the Scala package supports the Module and the Feed Forward API. The Clojure package builds on the Scala package and supports the Module API. The Gluon API is only supported so far in the Python package, however it is more full featured and all of the newer documentation and books and online resources like Dive into Deep Learning use the Gluon API. Supporting this API would allow the JVM packages to grow and to eventually share a common API for documentation and tutorials.
For example - Dive Into Deep Learning could have parallel texts and exercises for Scala and Clojure (and others ...)
Challenges
- Keeping the MXNet JVM packages coordinated, so there is a minimum of code duplication and no divergence in the jni bindings which would complicate deployment. However allow interested contributors to pick up areas of development that interest them independently.
- The Gluon API is large so finding a good area to tackle.
- The Current Scala native bindings might not be sufficient to support the Gluon API.
- How to build out pieces one by one instead of all at once.
- Some syntax will be different in different langs.
Proposal
- Have guidelines about developing Gluon API for JVM langs.
- Any new JNI binding development needs to go into the Scala package first to avoid divergence and packaging issues in other packages.
- If a contributor wants to add a package to the Clojure package first (for example) that doesn't touch the JNI bindings, that development can happen in Clojure first and then when it arrives in the Scala package can be converted over if desired to have a single implementation. This will allow some decoupling of development.
- The API should follow as closely as possible to the Gluon API and live under the same namespace.
- Example: The model zoo API in the Clojure package would be org.apache.clojure-mxnet.gluon.model-zoo
- Use the existing code when possible to build out the Gluon API.
- If implementing gluon.model.zoo or gluon.data before core functionality, have it work with existing module code.
Questions
- Autograd is an interesting area. Currently there are no good JVM autograd options. To get this to work would be exciting for many users. But need to look into how it could actually work.
- Would definitely take JNI changes
- Large Effort | https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103089990 | CC-MAIN-2022-05 | refinedweb | 390 | 56.66 |
Type: Posts; User: boudino
Does the order of elements matter?
Despite it is wrong in design because there should be loop, not a recursion, in which sense it doesn't work?
Can you post sample data? Maybe there are more then one answers (good and bad) for a queastion? At first look, I cannot see anything weird.
If I look well, you never wait for the timer.
Maybe it is sizeable (I've never tried it), so try to play with mouse on the bottom of the event if it can change to resize cursor and if so, try to drag the panel up.
Maybe you are interested in Object.MemberwiseClone().
I don't know a way how you coudl replace it, especialy not in third party controls. You can only use it in your own controls (controls in your project) instead the standard one. Just drag and drop it...
There is a solution, or better workaround, which is explicit interface implementation.
public class SaleElementUpdateHandler : IElementUpdater<FooBar>, IElementUpdater<IBaseTransactionElement>
...
It is not a debugger issue, but a compiler issue. I gues, that because your local machine is 64-bit, the compiler is looking for the referenced assemblies in wrong location. Try to force compilation...
It is because generics in C# don't support covariance and contravariance (althought CLR does). You can cast the instance to IElementUpdater<FooBar>, but not to...
If the dll is a common .net assembly, convert the base64 string into byte array and call Assembly.Load(byte[]) with it. It should work.
That's to solution I would use too. As benefit, it is better scalable and maintainable.
Try set full path to the process image:
p.StartInfo.FileName = Path.Combine(firebirdInstallationPath1, "gsec.exe");
If you declare it as Color[,,], is is allocated as one big portion of memory. I would try to declare it as so called "jagged arrays" as Color[][][], which needs more work to deal with, but splits the...
string filename = String.Format("archive_{0:yyyyMMdd}", DateTime.Today);
I would use Roles class for checking the access level and own implementation of RoleProvider, or choose one of framework's implemetations.
But it is question how clear is internaly implemented. I don't know, but I can imagine (althought it doens't semm to be smart) that clear just new internal array[] as a storage without leaven the...
It si because serialization of generics are gliched. To overcome it, you have to implement ISerializable interface, but in your case, it should be sufficient to change the type of the collection from...
Generally, the monitoring program must support it by not locking the file (it has to open it in shared mode). This OS/file system matter, so your options are limited. If it is possible, it would be...
With the respect to DataMiser's point, I've used following unmanaged methods to (de)allocate large amout of memory
IntPtr p = Marshal.AllocHGlobal(size);
Marshal.FreeHGlobal(p);
Post a sample of the string array which you are trying to convert.
What if the end excedes lenght of the string?
source = source.Substring(0, Math.Min(end, source.Length));
Visual inheritance was allways trouble in VisualStudio, so I would prefere composition of user controls over inheritance. But if you really need it, check the z-order of the controls in the designer,...
Just guess, but LINQ to XML wouldn't help? Because it seems to me that you don't want to sort the XML, but the XML Document buit in the memory.
From quick look, you are creating a new instance of the Trademan every time the button is clicked, so you always deal with turn == 1. | http://forums.codeguru.com/search.php?s=c9606720b7e31f014127ae19c22559fc&searchid=5795571 | CC-MAIN-2014-52 | refinedweb | 611 | 66.74 |
Scaffolding
This article demonstrates how to scaffold a Kendo UI Chart for ASP.NET MVC by using the Kendo UI Scaffolder Visual Studio extension.
Important
The Kendo UI Scaffolder will not include the required
UI for ASP.NET MVCfiles to the project. To achieve this automatically, use the Telerik UI for ASP.NET MVC Visual Studio extensions. To achieve this manually, refer to this article.
Getting Started
Below are listed the steps for you to follow when scaffolding the Kendo UI Chart for ASP.NET MVC.
Create a New ASP.NET MVC Application
Create a new ASP.NET MVC application. Include an Entity Framework Data Model and add
Telerik UI for ASP.NET MVC. If you have already done so, you can move on to next step. Otherwise, follow the steps of the steps described in the Getting Started section of this article.
Generate the Chart Controller
Right-click the location where the Chart Controller should be generated. Select Add > New Scaffolded item... from the displayed menu. In this example, you are going to generate it in the Controllers folder.
Figure 1. A new scaffolded item
Select the Scaffolder
Select Kendo UI Scaffolder from the list of available Scaffolders.
Figure 2. The Kendo UI Scaffolder
Select the Chart
Select the Kendo UI Chart from the available widgets to the left to scaffold.
Figure 3. The Kendo UI Chart Scaffolder
Set Model and Data Context Options
On the next screen, you are presented with the Model and Data Context options.
Enter the Controller and View names.
Figure 4. The Grid options
The Model Class DropDownList contains all model types from the active project. In this example, you are going to list products in the Chart. Select the Product entity.
Figure 5. The Model class
From the Data Context Class DropDownList, select the Entity Framework Data Model class to be used. In this example, it is NorthwindEntities.
Figure 6. The Data Context class
Use View Model Objects
(Optional) In some scenarios it is convenient to use view model objects instead of the entities returned by Entity Framework. If this is the case, check the Use an existing ViewModel checkbox. This presents you with a DropDownList similar to the first one, from which select the ViewModel to be used.
If you have not yet created it, add a new class to the
~/Modelsfolder. Name it
ProductViewModel.
Example
public class ProductViewModel
{ public int ProductID { get; set; } public string ProductName { get; set; } public short? UnitsInStock { get; set; } }
Select the ProductViewModel class from the ViewModel Class DropDownList.
Figure 7. The ViewModel class
Important
The names of the properties in the ViewModel have to be exactly the same as the corresponding ones in the Entity. Otherwise, the Kendo UI Scaffolder is not able to link them correctly.
Set Chart Functionalities
Click the Chart options item on the left.
Figure 8. The options when setting the Chart functionalities
This screen contains the Chart functionalities that you can configure before scaffolding:
- Data Binding Type—Remote or Local.
- Title—Define the
Titleof the Chart.
- Show Tooltip—Show the tooltip.
- Show Legend—Show a legend. The available options are
Bottomand
Top.
Figure 9. The legend options
Series Type—Select the series type. Each series type shows different Series Options configuration.
Figure 10. The series options
Add More Series—Add one additional configuration panel for a series.
Each field marked with an asterisk
* is mandatory and the rest of the fields are optional.
When finished with the Chart configuration, click Add at the bottom. The Chart Controller and the corresponding View are generated.
See Also
- Overview of the Kendo UI Chart for ASP.NET MVC
- How to Bind to SignalR Hubs in ASP.NET MVC Apps
- How to Create View Model Bound Dynamic Series in ASP.NET MVC Apps
- Ajax Binding of the Kendo UI Chart for ASP.NET MVC
- Overview of the Kendo UI Chart Widget
- | https://docs.telerik.com/aspnet-mvc/helpers/chart/scaffolding | CC-MAIN-2018-05 | refinedweb | 645 | 59.5 |
Yesterday, our Princeton JUG has celebrated the first day of Spring with the presentation on Spring framework. The presentation was well received, and most importantly, the atmosphere in the meeting was very friendly and productive. We started with pizza provided by our host and sponsor (Infragistics). Then, I spent 10 min talking about recent and upcoming Java events followed by book raffle. I had 3 books to give away and to make the process simple, I wrote a random number generator to pick a winner for the sign up sheet. Here “s this beauty:
// Generate a random number from 0 to upperLimit entered from the command line
// The upperLimit is the number of
import java.util.Random;
public class RandomNumberGenerator {
public static void main(String[] args) {
int upperLimit = Integer.parseInt(args[0]);
System.out.println( “The winning number is “+
(new Random()).nextInt(upperLimit));
}
}
One of the books that I gave away was Pro Spring by Apress.
The main talk lasted for an hour and a half and was prepared by members of our JUG. This is an important aspect of any JUG movement: give people a chance to research and present a topic that “s interesting for the group. After this demo seven people decided to create a separate mailing list to discuss and learn the Spring framework.
The other positive thing is that our JUG helps people finding jobs. We had a vote if posting job offers should be allowed in our mailing list, and 95% of voters said “Yes! rdquo;. I “m getting lots of requests from recruiters, and post the most interesting and real ones in our mailing list. The first member of Princeton JUG is starting a new and interesting job on Monday.
The bottom line is: JUG rocks! Do you have one in your town yet? | https://yakovfain.com/2006/03/22/jugging-along-in-princeton/ | CC-MAIN-2018-05 | refinedweb | 301 | 63.49 |
HELLO! Today we’re going to talk about THREAD POOLS and PARALLELIZING COMPUTATION. I learned a couple of things about this over the last few days. This is mostly going to be about Java & the JVM. It turns out that there are lots of things to know about concurrency on the JVM, but luckily, lots of people know those things so you can learn them!
A thread pool lets you run computations in more than one thread at the same time. Let’s say I have a Super Slow Function, and I want to run it on 10000 things, and I have 32 cores on my CPU. Then I can run my function 32 times faster! Here’s what that looks like in Python.
from multiprocessing.pool import ThreadPool def slow_function(): do_whatever results = ThreadPool(32).map(slow_command, list_of_things)
This seems really simple, right? Well, I was trying to parallelize something in Java (Scala, really) the other day and it was not this simple at all. So I wanted to tell you some of the confusing things I ran into.
My task: run a Sort Of Slow Function on 60 million things. It was already parallelized, but it was only using maybe 8 of my 32 CPU cores. I wanted it to use ALL OF THEM. This task was trivially parallelizable so I thought it would be easy.
blocked threads
One of the first things I started seeing when I looked at my program with YourKit (a super great profiler for the JVM) was something a little like this (taken from here ):
What was this red stuff?! My threads were "blocked". What is a blocked thread?
In Java, a thread is "blocked" when it’s waiting for a "monitor" on an object. When I originally googled this I was like "er what’s a monitor?". It’s when you use synchronization to make sure two different threads don’t execute the same code at the same time.
// scala pseudocode class Blah { var x = 1 def plus_one() { this.synchronized { x += 1 } } }
This
synchronize means that only one thread is allowed to run this
x += 1 block at a time, so you don’t accidentally end up in a race. If one thread is already doing
x += 1 , the other threads end up — you guessed it — BLOCKED.
The two things that were blocking my threads were:
lazyvals in Scala used
synchronizedinternally, and so can cause problems with concurrency
Double.parseDoublein Java 7 is a synchronized method. So only one thread can parse doubles from strings at a time. Really? Really. They fixed it in Java 8 though so that’s good.
waiting threads
So, I unblocked some of my threads. I thought I was winning. This was only sort of true. Now a lot of my threads were orange. Orange means that the threads are like "heyyyyyyy I’m ready but I have nothing to do".
At this point my code was like:
def doStuff(pool: FuturePool) { // a FuturePool is a thread pool while not done { var lines = read_from_disk var parsedStuff = parse(lines) pool.submit(parsedSuff.map{expensiveFunction}) } }
This was a pretty good function. I was submitting work to the pool! Work was getting done! In parallel!
But my main thread was doing all the work of submitting. And you see that
parse(lines) ? Sometimes this happened:
main: here is work to do! main: start parsing thread pool: ok I'm ready for more main: I'M STILL PARSING OK main: ok here is more work
The main thread couldn’t submit more work to the thread pool because it was too busy parsing!
This is like if you get a 5 year old to mix the batter for the cake when you’re doing a Complicated Kitchen Thing and they’re like OK OK OK OK OK OK WHAT NEXT and you’re like JUST A MINUTE.
The obvious solution to here was to give the parsing work to the threads! Because threads are not 5 year olds and they can do everything the main thread can do. So I rewrote my function be more like this:
def doStuff(pool: FuturePool): // a FuturePool is a thread pool // make sure it only has 32 threads so it // does not spin up a bajillion threads while not done { var lines = read_from_disk pool.submit(parsedSuff.map{parse}.map{expensiveFunction}) }
AWESOME. This was great, right?
Wrong. Then this happened:
OutOfMemoryError . What. Why. This brings us to…
Work queues
This
FuturePool abstraction is cool. Just give the thread work and it’ll do it! Don’t worry about what’s underneath! But now we need to understand what’s underneath.
In Java, you normally handle thread pools with something called an
ExecutorService . This keeps a thread pool (say 32 threads). Or it can be unbounded! In this case I wanted to only have as many threads as I have CPU cores, ish.
So let’s say I run
ExecutorService.submit(work) 32 times, and there are only 32 threads. What happens the 33rd time? Well,
ExecutorService keeps an internal queue of work to do. So it holds on to Thing 33 and does something like
loop { if(has_available_thread) { available_thread.submit(queue.pop()) } }
In my case, I was reading a bunch of data off disk. maybe 10GB of data. And I was submitting all of that data into the ExecutorService work queue. Unsurprisingly, the queue exploded and crashed my program.
I tried to fix this by changing the internal queue in ExecutorService to an
ArrayBlockingQueue with size 100, so that it would not accept an unlimited amount of work. Awesome!
still not done
I spend like.. 8 hours on this? more? I was trying to do a small thing at work but I ended up working on it at like midnight because it was supposed to be a minor task and I couldn’t really justify spending more work time on it. I am still confused about how to do this thing that I thought would be easy.
I think what I need to do is:
- read from disk
- submit work to the ExecutorService. But with a bounded queue!!
- catch the exception from ExecutorService when it fails to schedule work, wait, and try again
- etc etc
Or maybe there is a totally simple way and this could take me 5 minutes!
This kind of thing makes me feel dumb, but in a really good and awesome way. I now know a bunch of things I didn’t know before about Java concurrency!! I used to feel bad when I realized I didn’t know how to work with stuff like this. ("threads and work queues are not that advanced of a concept julia what if you are an awful programmer").
Now I don’t really feel bad usually I’m just like WELL TODAY IS THE DAY WHEN I LEARN IT. And tomorrow I will be even more awesome than I am today. Which is pretty awesome =D
Abstractions
I think the thread pool abstractions I’m working with in Scala are not the best abstractions. Not because they don’t make it easier to program with concurrency — they do!
But the best abstractions I work with (the TCP/IP network layer! unix pipes!) let me use them for years without understanding in the slightest how they worked. When working with these concurrency abstractions I end up having to worry almost immediately about what’s underneath because the underlying queue has filled up and crashed my program.
I love learning about what’s underneath abstractions, but it is kinda time consuming. I guess it’s hard to build abstractions over thread pools! Maybe you really just have to understand how they’re implemented to work with them effectively. Either way — now I know more, and I can work with them a little » Thread pools! How do I use them?
评论 抢沙发 | http://www.shellsec.com/news/6379.html | CC-MAIN-2017-34 | refinedweb | 1,313 | 82.54 |
Opened 7 years ago
Closed 7 years ago
#17161 closed Bug (invalid)
Call super in BaseForm.__init__
Description
Currently /django/forms/forms.py BaseForm.__init__ (as well as a fair number of other classes) does not call super().__init__. This makes it impossible to create form mixins.
Consider:
from django import forms class FormMixin(object): def __init__(self, *args, **kwargs): super(FormMixin, self).__init__(*args, **kwargs) self.my_flag = true class MyForm(forms.Form, FormMixin): field1 = forms.CharField() class MyModelForm(forms.ModelForm, FormMixin): class Meta(object): model = SpamModel
Because of python's mro the init() in the mixin never gets called because BaseForm.__init__ does not call it.
Ideally, all classes in django that have an __init__() should also call super().__init__()
Change History (2)
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
The suggested change would break very badly, since
object.__init__ does not accept any parameters and you will get a
TypeError.
Unfortunately, multiple inheritance is pretty badly broken in Python for
__init__. See
The preferred solution is to avoid implementing
__init__ in your mixin class, and to avoid using
super inside the
__init__ of the class that inherits from
object. If you do need to inherit two
__init__ methods, use a subclass that overrides
__init__ and explicitly calls each
__init__, as per
e.g.
class AwithB(A, B): def __init__(self, arg_for_A, arg_for_B): A.__init__(arg_for_A) B.__init__(arg_for_B)
Therefore, the current code is as good as its going to get.
I believe if you reorder the superclasses, adding the call to super() is not necessary. For example:
I would normally add a mixin in this order anyway since I normally have the mixin override the superclass instead of inserting a mixin to be called by the superclass. | https://code.djangoproject.com/ticket/17161 | CC-MAIN-2018-22 | refinedweb | 296 | 58.69 |
0
I have to write a program where a user trys to pick a number. The code i've written below always exits after 2 guesses abd says the guess was correct even if it's wrong. Can anyone see what i'm doing wrong? Thanks
# include <stdio.h> # define NUMBER 50 int main(void) { int guess=0, count=0, loop=0; printf("Guess> "); scanf("%d", &guess); while (loop==0) { if (guess < NUMBER) { count++; printf("Too small - guess again> "); scanf("%d", &guess); } if (guess > NUMBER) { count++; printf("Too big - guess again> "); scanf("%d", &guess); } if (guess==NUMBER) { loop=1; count++; printf("Correct. That took you %d guesses.\n", count); } return 0; } } | https://www.daniweb.com/programming/software-development/threads/149816/guess-number | CC-MAIN-2017-51 | refinedweb | 111 | 83.56 |
Matplotlib is a Python library used to create charts and graphs.
In this first lesson, you will get an overview of the basic commands necessary to build and label a line graph. The concepts you will learn include:
- Creating a line graph from data
- Changing the appearance of the line
- Zooming in on different parts of the axis
- Putting labels on titles and axes
- Creating a more complex figure layout
- Adding legends to graphs
- Changing tick labels and positions
- Saving what you’ve made
By the end of the lesson, you’ll be well on your way to becoming a master at presenting and organizing your data in Python.
To the right, you can see an example of the kind of chart you’ll be able to make by the end of this lesson!
Instructions
Before we start working with Matplotlib, we need to
import it into our Python environment.
Paste the following code into script.py:
from matplotlib import pyplot as plt
This will import the plotting functions from matplotlib, and make them accessible using the shorter name
plt. Most of the commands we will learn will start with
plt. going forward. | https://www.codecademy.com/courses/data-visualization-python/lessons/matplotlib-i/exercises/introduction-matplotlib-i | CC-MAIN-2022-27 | refinedweb | 193 | 64.44 |
Introduction: Make a Crawling Robot Zombie With Severed Legs
Second Prize in the
Halloween Decor Contest
We all love zombies and robots, two of the things that are most likely to be our undoing one day. Lets help things along by building a creepy little robot zombie.
My goal with this Instructable is to take a doll and (re)animate it with servos and mechanical linkages, its braaaaiiins will be an Arduino microcontroller.
I like to use whatever I can scrounge up for my projects, since it keeps the cost down, but it does mean that maybe you won't have exactly the same items on hand, I will try and give alternatives. Besides, you are a maker, you'll figure it out.
In the same vein, my design has been planned around limited tools, you could make it with anything from hand-tools to CNC cutters or anything in between.
This project could be a pretty good introductory robot for someone who has never built one before.
A word of warning: There was a point in the middle of this project when I was struggling to fit an undead robot into a dress and had to take a step back to ask myself what the heck I was doing with my time... I guess this could happen to you too, don't worry though, I got over it ;-)
Shameless Plug
I started this whole project for an Instructables contest, so if you think it's cool, and you don't want to risk being eaten by spurned zombots, I'd appreciate your vote!
Update: Thanks Guys!
So I won a prize in the Halloween contest, thanks for your votes!
Update 22-Oct-2014: Added Remote Control (step 21) and a more in-depth video
Step 1: Parts Required
Electronics
4x High Torque Servos
The servos for the arms need to be relatively high torque since they have to lift the weight of the robot. I would also recommend going for dual-bearing servos if possible. These have a bearing on either end of the output shaft in order to better handle side loads.
I used TrackStar TS-600MG from HobbyKing since I liked the sound of metal gears and dual ball bearings. The ball bearings turned out to be a fiction though, the shafts are supported by brass bushings.
2x Slim Servos
The servos for the neck (if you choose to animate it) don't have to lift much weight, so small low-profile servos will do. I imagine 9g servos would not be up to the task though.
I used Corona DS-239MG from HobbyKing, since I had them on hand.
Arduino Nano
Use any micro controller that you like, but I am partial to the Arduino Nano, which I buy by the boatload from DealExtreme.
Batteries
I used 6x 4.2V Li-ion batteries that I got for free (they expired 10 years ago, but heck, this is a zombie right?). You can use whatever is available to you, your choice will affect your DC-DC converter requirements.
DC-DC converter/UBEC (Universal Battery Elimination Circuit)
What you need here is dependant on your batteries, read electronics section for more details).
I used an obsolete Volgen FDC15-48S05 which I got out of a scrapped piece of telecoms equipment, so you are unlikely to find one exactly the same.
If you are using RC batteries, you could use a UBEC from the radio control world (HobbyKing has quite the selection, I have read good things about the TURNIGY 8-15A UBEC for robots, but have never tried it myself)
Assorted Bits and Bobs
Connectors (for batteries)
Perfboard/Stripboard (I got from DealExtreme)
Insulated Wire
2.54mm (0.1") header pins (to plug servos into)
Solder
Insulating Tape
Heat Shrink
Mechanical
Torso/Shoulder assembly
6mm MDF (could use acrylic, plywood, aluminium, whatever you have tools to cut)
Wood Glue (if using wood)
Matchsticks/Toothpicks (used as dowels)
Arm Linkages
12mm dowel (I used about 800mm of it)
Aluminium Servo Horns (DX or HK)
Neck Linkage
Bicycle Brake Cable
Brake Cable tube
EZ Connectors for servo cable (DealExtreme equivalent)
Spring/Servo linkage ball joint (could use some rubber hose)
Fasteners
I used M3 (3mm) fasteners for pretty much everything. If you take apart old electronics you will find loads of them (of course they are cheap as chips to buy too).
M3x12mm screws
M3x30mm screws
M3 nuts (Nylocs are handy if you can get them)
M3 washers (split and flat)
Aesthetics
Doll
Bigger is probably better in this case. I found the local Chinese shop and bought one of those cheapies with a cloth body and plastic limbs, which ended up being perfect.
Zombification
Acrylic Paint
Plastic Packets (white preferably)
Popcorn Kernels (for teeth)
Silicone (or other flexible glue)
Superglue/Double Sided Tape
Step 2: Tools Required
Mechanical Work
- CNC router / scroll saw/ laser cutter / jigsaw / handsaw
- Drill press / Hand drill / Homemade Drill Press
- Screwdrivers
- Socket/Spanner
Electrical Work
- Soldering Iron
- Pliers
- Side-cutters
Software
- Arduino IDE
- Putty (optional)
Step 3: Electronics: Power Supply
Power Supply Requirements
Servos
Most servos are rated to run off a supply of 5V or 6V although one will find a handful of HV (High Voltage) ones that will accept 7.4V. Mine are only rated up to 6V, so I needed a way to regulate the voltage.
Arduino
The Ardunio requires a 7-12V input voltage which it regulates down to 5V (or you can directly supply a regulated 5V). The current required will be negligible compared to the servos. Take a look at the datasheet for all the info.
Voltage Conversion
I also didn't have access to any high drain batteries (there is more to a battery than it's capacity and voltage, some batteries are better at delivering all of their power quickly than others, i.e. supplying a lot of current, which is something motors/servos require), so I decide to stack six 4.2V batteries in series, resulting in an output voltage of 25.2V at maximum charge.
Remember that POWER=CURRENT*VOLTAGE (P=IV), so if we assume that the power drawn on the output of the DC-DC converter is equal to the power supplied by the batteries, then the current that the batteries have to supply is only (Vout/Vin)*(Iout), which in my case means an input current about 5 times lower than output current.
You could use a UBEC from the radio control world (Hobby King has quite the selection, I have read good things about the TURNIGY 8-15A UBEC for robots, but have never tried it myself)
I decided to supply power to the Ardunio via a 12V linear regulator instead of the DC-DC converter, just in case that converter browns out or goes into current limiting when the servos exceed its current capability(which would otherwise cause the Arduino to restart). The 12V regulator drops all of the extra voltage as heat, but this is not a big deal since the current draw of the Arduino is so low. The Arduino's on-board regulator drops the 12V down to 5V.
Wiring
The wiring for the power supply is incredibly simple. I made a little piece of stripboard with 2.54mm header pins on it to plug the batteries into, connecting the positive of one to the negative of the next. I have shown it for completeness, but if you are using an RC battery you won't need such a contraption.
I also included and on/off toggle switch.
Read the documentation that comes with you DC-DC converter carefully and you will see which pins/plugs get the +/- from the battery and which pins/plugs provide the output.
- If you buy from HobbyKing it is always wise to read people's comments on the product, it seems like markings/documentation is not to be trusted.
- If you are using scavenged components like I did, then Google is your friend. I was able to find a datasheet for my converter without much trouble by searching for numbers that were printed on it.
Step 4: Electronics: Build the Circuit
The circuit is actually extremely simple; all we are doing is providing power to the Ardunio and the servos.
- Provide 12V power to Ardunio (Arduino will regulate this down to 5V)
- Provide 5V power to the servos
- Connect the Arduino's output pins to the servo's data lines.
Arduino and Servo Pins
I started by soldering the Arduino into a piece of perfboard and putting in 3 rows of 2.54mm header pins next to digital pins 2,3,4,5,6 and 7.
Connect each of the pins in the row closest to the Arduino to the adjacent digital pin.
The 2nd row from the Arduino is the 5V power supply rail for the servos, connect them all together.
The 3rd row from the Arduino is the negative (ground) power supply rail for the servos, connect them all together.
Power
In the previous step we planned out our power requirements and made connectors for the batteries.
I soldered a corresponding connector onto the perfboard and ran the positive to the positive inputs of the DC-DC converter and the 12V regulator.
Run the negative of the battery connector to the negative inputs/ground pins of the DC-DC converter and the 12V regulator.
Take the 5V output of the DC-DC converter to the center row of servo pins and the ground/negative output to the outermost row of servo pins.
Take the 12V regulator's output to the "Vin" pin on the Ardunio and make sure that at least one of the Ardunio's "GND" pins is connected to ground (battery negative).
Capacitors
The capacitors are there to help the regulator out by supplying some of the current to the servos under peak conditions. They act like tiny batteries that can discharge and recharge incredibly quickly.
I would start with two 560uF caps and see how you go, it was plenty for me.
Put your chosen capacitors between the Ground and 5V servo rails. If you are using electrolytic capacitors, make sure to get the polarity right (the side with the stripe is negative)
Step 5: Planning, CAD Work
At this stage I just had a concept that I was going to make a "half-a-zombie" crawler, so I started sketching things to figure out how to make the robot actually move. I find pen and paper planning to be incredibly helpful. I normally end up with pages and pages of doodled drawings before I break out the CAD tools. For this partiular project it meant a lot of sitting sketching, waving my arms around like a half-zombie, while my wife tried to watch a TV show.
The design i went with in the end has what robotocists call "2 degrees of freedom" (DOF) per arm.
- The shoulder, which can move up and down,
- The "elbow" which moves forwards and backwards linearly. The servo's rotational movement is converted in (mostly) linear movement by adding a 2nd linkage, turning the arm into a parallelogram
Now that I had a design in mind I started to draw things up in CAD. There are a number of tools to use for this (AutoDesk inventor looks like a good free option if you are a student), but I used SolidWorks, since I have access via work. The whole topic of 3D cad is too much to cover here, but these are the things I focused on
- Although I had access to a CNC router, didn't want to end up with a design was impossible to fabricate without one. All of the parts are 2D, which means they can easily be cut with a laser cutter, scroll saw or even hand saw.
- I had 6mm MDF available, so I designed around that. The design could easily be adjusted for other thicknesses by adjusting the depth of the cutouts.
- The smallest bit available on the CNC router I used was 6mm, hence why there are no inside diameters smaller than that.
I have attached my DXF files if you intend to use a CNC tool and also PDFs which you could print to scale and stick on to your material before cutting by hand.
If you use my designs you will want to confirm that the hole placings work for your servos.
Step 6: Torso: Cut and Glue
Cutting
The first step here was to cut out the pieces. There is not much to say about this.
- If you are using a CNC machine (laser cutter or router) then you will use the DXF files you created in the CAD step (mine can be found in the planning step if you want to use them).
- If you are doing it by hand and want to use my design, print out the PDF (found in the planning step) of the parts and stick it to your material, then cut out using a scroll saw, hacksaw or whatever other tools you can get your grubby paws on. One could definitely simplify some of the shapes if cutting by hand.
Fastening
Next up is fastening the pieces together. Since I had my pieces cut out of 6mm MDF I used PVA (Alcolin Cold Glue in particular), if you cut your pieces out of acrylic or other material you will need to choose a suitable glue. This is where it comes in handy to have a lot of clamps,. As you can see from my pictures, I do not have enough clamps, so I used my vise too.
The parts were very rigid once glued, but I decide to glue in some dowels for extra strength. Once the pieces were glued together I drilled some small holes perpendicularly through adjoining pieces and inserted matchsticks covered in glue (matchsticks or toothpicks are ideal, drill your holes accordingly). It is worth noting though, MDF does not like having its edges drilled into and likes to split; I got around this by drilling with successively larger bits, from 2mm to 2.5mm to 3mm.
Once all the glue is set, use a bit of sandpaper to clean up any dowels that are sticking out.
Step 7: Torso: Shoulder Assembly
Now that your pieces are glued into 3 distinct assemblies, torso, left shoulder bracket and right shoulder bracket, you can begin installing the servos.
IMPORTANT: You must center the servos before you attach the horns. The easiest way to do this is with a servo tester (like in my picture) but if you don't have you may as well use your Arduino. See the "Servos: A Refresher" step if you need more help.
I installed the shoulder servos with some M3x12mm pozi screws and M3 nuts, along with spring washers to prevent things vibrating loose (you could also use Loctite).
Attach the servo horn to the shoulder servo. I made mine horizontal, but in retrospect I wish I had installed them with about 20 degrees down-tilt. Your zombie has no reason to lift its arms in the air like it just don't care, but being able to tilt them lower down would increase "ground clearance" when crawling.
Attach the shoulder brackets to the servo horns. I had to use hex socket screws here, since it is impossible to get a screwdriver in there.
Now you can install the elbow servos, with the same size screws and nuts as the shoulders.
Finally, insert a long screw through the hole that is opposite the shoulder servo's pivot point, this helps prevent too much side-load being placed on the servo's bearings. There is probably a neater solution, but I coldn't think of a cheaper one. Since the arms are not moving at high speed the friction really shouldn't be an issue. I used an M3 Nyloc nut to hold the screw in place, but made sure not to tighten it too much (we don't want it to cause friction, just to stop the screw falling out).
If you don't have access to any Nylock nuts, just use two regular nuts and tighten them towards each other.
Step 8: Making Arm Linkages
Design Theory
As discussed in the planning stage, the arms need to form a parallelogram, this means the "forearm" bar will always stay parallel to an imaginary line drawn between the servo shaft and the pivot point of the parallel bar. The parallel linkage effectively turns the rotation of our "elbow" servo into a forwards/backwards motion on the forearm bar.
For clarity see the annotated image: If A and B are the same length and C and D are the same length, A will always be parallel to B.
So long as your arm pieces form parallelograms they can be any length that you like. Choose something that suits the scale of you doll. Keep in mind that there is a compromise when increasing the length of your robot's upper arms. Longer upper arms mean that you can lift the torso higher off the ground, but they also mean that your servos need to generate more torque.
Building an Army
You will have seen arms made of aluminium bars in some of my photos. These were my first concept, but proved too difficult to work with my limited tools. After a bit of thinking I chose to use 12mm hardwood dowels instead, these were great, easy to work with hand tools and plenty strong enough.
You will need to decide exactly where to put your holes, based on your servo horns and arm lengths. The only suggestion I have is to trim the arm on the servo so that you can remove it from the horn without removing the horn from the servo, if you look at the pictures you will see that I did this towards the end.
I also chose to plane the sides flat on some of the arms, this was mostly because the screws I had were either a touch too short or far too long.
In order to tie the parallel bars to the forearm I used long screws (about 30mm, but will depend on your dowels) with Nyloc nuts tightened until just before they started to cause friction.
Mounting for the Doll Arms
While you are at it, you probably want to test fit the doll arms and make some method of mounting them.
The way I did it was to drill a hole though the arm that was a fraction smaller than an M3x16mm threaded hex circuit board spacer, then carefully tap the spacer into the hole. Since the wood deformed around it it was very solid, but I squeezed a bit of super glue around it anyway to keep it in place. I also put a very short screw with a washer in the back side, which meant it couldn't be pulled back through the hole, even if the glue failed. I now had a threaded hole to use to fasten the plastic doll arms to the forearm bar.
Step 9: Add an Electronics Tray
You will need to make a small tray to carry the electronics as well as the neck servos if you choose to use them. Besides carrying the electronics, this tray serves the important function of giving the arms something to leverage against. You will find that the robot struggles to move without it.
This tray could easily be built into the original design as an additional piece, but I was designing this thing on the fly, so mine is separate. I simply grabbed a scrap bit of 4mm plastic and used a some discarded right-angled aluminium extrusion to bolt it to the torso.
The size of your tray will depend very much on what kind of electronics/power system you go with. I have attached some pictures of my final setup for an idea of how I arranged things. More info on the individual items can be found in the relevant steps.
Note: If you are planning to use neck servos you will probably want to drill the holes through the torso before attaching the electronics tray, since it will get in the way
Step 10: Neck Linkage
You don't need to animate the head if you want to save on servos, but thought it would be cool, so I did. If you decide not to, then perhaps try mount it on a spring, that should give it a bit of a zombie-ish wiggle.
Head mounting plate
You need something to mount the head to. I chose to use my holesaw that was closest in diameter to the doll's neck to cut out a disk of MDF. If I was doing it again I might use wood (because MDF dislikes having things screwed into its edges) or find a different way altogether.
I then drilled three holes into the edges of of the disk, with successively larger bits to prevent splitting, and inserted some M3 threaded hex circuit board spacers and glued them in place. I drilled matching holes in the doll's neck so that I could now easily attach the head to the board with short machine screws.
Linkage
My original plan was to use an RC ball-joint on a swiveling screw as the neck joint, but this didn't give me the fluid movement I had hoped. I have included photos of it anyway, since it shows how many ways there are to achieve things.
In the end I used a spring instead, which gave much smoother movement. I believe the spring came from a desk lamp of sorts, but it is hard to tell, since more recently it just came out of my box-o-springs.
Step 11: Neck Muscles and Tendons (servos and Cables)
In order to make the head move I decided to go with servos mounted at the back of the robot, with their force applied to the head via push-rods, similar to how most RC planes control their flaps.
Choose Push-Rods
My first attempt was to use stiff wire push-rods from an old RC plane I found in the scrap, but they were too rigid to go through the bends in the tubes without encountering huge friction. I then discovered some flexible stranded cable from bicycle brakes/gears which worked much better. It has a perfect compromise between flexibility (needed to go around bends in the tube) and rigidity (needed to prevent too much bending where the wire is outside the tube) for this application.
Here is a website that explains all sorts of connectors and things that the RC plane folk use.
Mount the Servos
I chose to use low-profile wing servos, since the head really doesn't require much force to move and the space savings were attractive. Mounting these was easy too, since the mounting tabs are parallel to the electronics tray. I drilled some 3mm holes and attached the servo with M3 screws and bolts.
One of my servos was pretty beat-up and missing a mounting tab, so I just glued it in place with silicone, which worked well, so that is an option too.
Locate and Drill the Holes for Push-Rods/Cables
Look at my photos for guidance, but you will need to determine where to put the holes for your push-rods. Take the following into account:
- The tubes should have as few bends/kinks as possible
- The cables should exit the tubes as close to the servos and neck as possible, without requiring the cable to flex too much.
Step 12: Zombification: Skin/Painting (arms)
Rot that Skin
I made use of a great technique that I found here on instructables to zombify the doll's skin.
I started by wrapping the limbs and face tightly with plastic from shopping packets, securing the ends with a few dabs of Cyanoacrylate (Super Glue) or thin double-sided tape. Don't use silicone here, because the paint won't stick to it (or at least, Acrylic wouldn't stick to the marine sealant I used in places)
One the limbs are wrapped up, use a heat gun on them, the heat causes the plastic to shrink and deform, tightening around the doll parts and creating a great basis for zombie skin.
Paint that Skin
One you are satisfied with your skin effect it is time to start painting. I used acrylics and was very pleased with how it turned out. I only used 3 colours in the end, white, black and a sort of reddish brown.
You could choose to make your zombie browner, greener, juicier or fresher, depending what flavour of undead you prefer. I went with a pale-dead kind of look this time.
I started off by painting pretty much everything white because I didn't want the difference in the fleshy doll skin and white plastic to mess up my paint job.
Then I did various layers of slightly watered-down grey/pink/brown.
Don't be afraid to use your fingers as well as the brushes or anything else you can get your hands on. I smeared a pinkish mix on with my hands in places and stuck darker colours in the crevices with a toothpick.
To bring detail out in the long folds I put blobs of darker paint on the fold then drag it down with the crease, this gets paint stuck in the deeper bits and wiped off the higher bits, which helps bring out the texture.
Don't be afraid to just go bananas, after all, it's a zombie so it is very forgiving.
Step 13: Zombification: Teeth
Most dolls have very un-zombie little mouths, so I decided to correct that.
The First Cut is the Creepest
I considered a gaping mouth, or even one that could be controlled by servos, but in the end decided to go a little more subtle. I used a Dremel cut-off wheel and some side-cutters to enlarge the mouth on one side into a bit of a grimace.
Make the Teeth
I then got some popcorn kernels an used wood glue to stick them to a piece of thin card (I would recommend black card, not blue like I used) . I also built up the "gums" with some contact adhesive glue (the wood glue was too fluid to do it).
Paint
The paint was done in multiple layers, starting with light washes, followed by blacks and browns. I wiped most of the paint off with a rag on each pass, leaving darker areas in the gaps between teeth. Finally I put some thick reddish brown on the gums.
Glue in Teeth
Once the paint was dry I used some marine silicone to stick the whole assembly into the head (any glue will work, so long as it is flexible and sticks to the plastic). Choose a glue that is nice and tacky, or set very quickly, because it is very awkward to hold the teeth in place via the neck.
Step 14: Zombification: Flexible Skin
There were a few places where the doll's dress did not cover the mechanics, in particular, the elbows and the neck. I corrected this by creating "skin" with some more white plastic.
Neck
- Wrap a bit of thin double sided tape or some super glue around the neck.
- Make a ring of fencing wire, just narrower than the shoulders
- Make a cone/ tube of white plastic (from a shopping packet) from the neck to the shoulders.
- Paint to match the other skin
I was going to glue magnets onto the top of the torso to hold the wire ring in place, but it turned out not to be needed, since the ring was larger than the collar of the dress. Your mileage may vary.
Arms
The pictures describe this step better than words will. I extended the arms with tubes of plastic which was then painted to match the rest of the skin. You may have to take more or less care here depending on the dol's clothes. I just needed to cover the elbows, so I just tucked the plastic into the sleeves.
Step 15: Zombification: Severed Leg
Bone
Take a left-over piece of dowel from the arm assemblies and carve a bone shape out of it, making sure to leave a bit of extra length on the end.
I did all of the carving with a Dremel sanding band, but you could just as easily achieve it with a file and some sand paper.
Leg
Chop a piece of one of the unused doll legs and wrap it in plastic, then hit it with the heat gun, just like the previous steps.
Use a paintable caulk to fill up the end of the leg. I stuck some torn-up bits of plastic bag in there to make it extra fleshy.
Once you have filled the top of the leg with caulk you can stick the bone in (before it sets). Make sure that it is inserted to a depth that makes sense. Since I included a head on my bone I made sure it lined up with where the knee would be.
Paint
Paint it.
I used exactly the same methods as described in the earlier steps.
Attach
I drilled a hole through the top of the leg and use a bit of fencing wire to attach the leg to the electronics tray.
A loose attachment is preferable so that the leg flops about nicely when the doll crawls.
Step 16: Zombification: Weathered Clothes
Zombies are not known for their dress sense, so we need to mess up the clothes a little bit.
The doll I bought was wearing a rather fancy dress, made of the cheapest, most synthetic, material known to mankind.
I tried everything I could think of to weather it, but nothing really gave me the effect I was looking for.
Bleach
The bleach made no difference... I had read that it would yellow and damage synthetics, but apparently not this stuff.
Coffee
Adam Savage always talks of spritzing clothes with coffee to weather them. Either he drinks way stronger coffee than me (unlikely) or they should make baristas' uniforms out of this stuff.
Tea
Because, why not? No real effect.
Acrylic Paint
In the end this was the most effective, and still only barely. I rubbed in watered down white paint to the dark areas to try and wash them out.
I also grubbed up the white areas with some green/brown paint and added some reddish brown around the collar.
Step 17: Servos: a Refresher
Before we can get started on writing code, let's just refresh on how and what servos do.
Your standard hobby servo consists of the following major parts
- DC Motor
- Gearbox (LOTs of reduction)
- Potentionmeter (variable resistor)
- Control Circuit
The potentiometer is connected to the output shaft of the gearbox. The control circuit uses this potentiometer to determine what angle the output shaft is at and therefore, which way and how much it needs to turn the DC motor in order to achieve the angle requested by the input signal. The potentiometer itself can generally not turn more than 180 degrees and as such there is a mechanical limit to how far a servo can turn (although one does get special servos that can turn further, or even continuously).
Control Signal
The control signal is actually a 5V pulse of 1000 to 2000 microseconds, where 1000 indicates minimum rotation and 2000 indicates maximum rotation, and all the values in between. These control pulses are sent every 20 milliseconds.
What all of this means is that we can use a microcontroller to generate the pulses which will set the servo shaft to a specified angle.
Connector
The standard servo connector has 3 sockets and fits onto a 2.54mm (0.1") row of male header pins. The connectors can have a variety of colour schemes, but they are usually:
- Ground: Black/Brown
- +5v: Red
- Signal: Orange/White
Step 18: Code: Planning
Translate Movement into Servo Positions
It is easy to describe how the arms need to move to make the zombot move forwards, but how do we convert that into servo movements?
First, lets describe how we would move forwards if we were lying on the ground and could only use our arms.
- Raise arm off the ground
- Extend arm as far forward as we can
- Lower arm and grab the ground
- Pull ourselves forwards (pull arm back)
We could do this with both arms synchronised (like swimming butterfly) or with alternate arms (like swimming crawl).
I will work the example with the crawl option, you can easily use the same procedure to generateother patterns of movement.
The most logical way I could think of to implement this in code was to define a series of "frames" which contained the position of all the servos at a given moment. Looping through the frames at a given rate will give us a movement animation.
Here I am considering raising/extending as "maximum" and lowering/retracting as "minimum".
Determine Servo Limits
Before we can write code to use our fancy new frame-by-frame animation, we need to determine the minimum and maximum for each servo. There are two major factors to consider
- There could be a physical obstruction. if your mechanical assembly doesn't allow your servo to turn as far as your software requests it could damage the servo.
- We need to translate "min" and "max" into milliseconds, and these are opposite on either side of the body. For example: the shoulder servo (looking from the front) on the right hand side needs to turn clockwise to raise the arm, but on the left hand side clockwise will lower the arm.
I wrote the following tiny piece of code to determine the range of motion of a servo. Simply upload it to your arduino and connect a servo to the specified pin (pin 3 in the example).
- Use a serial terminal (I prefer putty) to connect to the Arduino (9600 Baud).
- Press 'q' to send the servo to min (1000 microseconds)
- Press 'w' to center the servo
- Press 'e' to send the servo to max (2000 microseconds)
- Use 'o' and 'p' to incrememnt or decrement the current position by 5 microseconds
- Make note of how many microseconds correspond to retracted/lowered
- Make note of how many microseconds correspond to extended/raised
Once you have determined how many microseconds correspond to retracted/lowered and extended/raised, do the same for all the other servos.
// By Jason Suter 2014 // This example code is in the public domain. #include <Servo.h> //pin details int servoPin = 3; static int minMicros = 1000; static int midMicros = 1500; static int maxMicros = 2000; Servo servoUnderTest; // create servo object to control a servo int posMicros = 1500; // variable to store the servo position void setup() { servoUnderTest.attach(servoPin); //configure serial port Serial.begin(9600); } 19: Code: Crawl
Arduino Tools and Foundation Knowledge
If you are looking for a good introduction to Arduinos, then there are a whole lot of Instructables for you, here is a great one.
I am going to assume you have downloaded and install the Arduino IDE and are able to upload code to your Arduino, the previously mentioned instructable will guide you through all of that and more.
Code Overview
In the previous step we decided that we would implement the crawling by cycling through "frames" of animation which would define the position of each of the servos at a given point in time.
The full code is attached as a file, but I will go through most of the sections here so that you can understand them. If you already do understand them, then that's great, because you can probably think of a better way of doing this that I did.
Import Required Libraries
The only library that is required for this code is "Servo.h" which is full of handy functions to make it extremely easy to control servos from an Arduino.
#include <Servo.h>
Define Global Variables
These are variables that can be accessed from anywhere in the code. We use them to store information that will be required at a later stage
Timer Variables
The millis() command returns the number of milliseconds since the Arduino board began running the current program. We can store the result of millis() and then compare it to the result at a later stage to determine how long it has been between the two calls.
long previousFrameMillis = 0; long previousInterpolationMillis = 0;
Servo Detail Variables
These variables store which pin each servo is connected to, as well as how many micros correspond to min (retracted/lowered) and max (raised/extended)
//servo pins int LNpin = 7; //left neck muscle int RNpin = 6; //right neck muscle int LSpin = 5; //left shoulder int LEpin = 4; //left elbow int RSpin = 2; //right shoulder int REpin = 3; //right elbow //these are the 'min' and 'max' angles for each servo //can be inverted for servos that are in opposite orientation int LSMin = 1000; int LSMax = 1700; int RSMin = 2000; int RSMax = 1300; int LEMin = 1700; int LEMax = 1300; int REMin = 1300; int REMax = 1700; int LNMin = 1400; int LNMax = 1600; int RNMin = 1600; int RNMax = 1400;
Frames
One's first impression might be that we should just set the servos to the positions defined by the first frame, wait a given delay (frameDuration) and then set the servos to the next position.
This will result in a horrible animation though, since the servos will jump as fast as they can to the specified position and then wait there for their next instruction.
The way around this is to interpolate between frames. In other words, If my frame duration is 2 seconds, after 1.75 seconds I want the servos to be three quarters (or 75%) of the way between frame 1 and frame 2.
It is a trivial bit of maths to figure out where the servo should be if we know how much of the frame has elapsed as ratio. In words it is just (previous frame)+(the difference between the next and previous frames)*(percentage of frame duration elapsed), this is known as "linear interpolation".
int frameDuration = 800; //length of a frame in milliseconds int frameElapsed = 0; //counter of milliseconds of this frame that have elapsed int interpolationDuration = 30; //number of milliseconds between interpolation steps
Here we define the actual frames. Note that I used 0 to 1000, where 0 indicates retracted/lowered and 1000 indicates extended/raised. I could have chosen any number, and there could well be more logical choices, but the range provided me with a satisfactory compromise between resolution and readability.
We will use the map() function later to map 0 to LSMin variable and 1000 to LSMax variable that we defined earlier (obviously this example is for the left shoulder, but it would be the same process for all the other servos).
If you wanted to define more complex or smoother animations you could easily add more frames and also use numbers other than min/max. One option would be to use about 8 frames to make a nice elliptical motion.
//frames for the crawling gait are stored here int currentFrameIndex = 0; int numFrames = 4; int crawlGaitRS[] = {0,1000,1000,0}; int crawlGaitRE[] = {0,0,1000,1000}; int crawlGaitLE[] = {1000,0,0,1000}; int crawlGaitLS[] = {0,0,1000,1000}; int crawlGaitLN[] = {1000,1000,0,1000}; int crawlGaitRN[] = {0,1000,1000,1000};In order to implement this interpolation calculation we need to keep track of the previous frame and the following frame, so we set up some variables to do that.
//servo last frame micros int LSlast = 0; int RSlast = 0; int LElast = 0; int RElast = 0; int LNlast = 0; int RNlast = 0; //servo next frame micros int LSnext = 0; int RSnext = 0; int LEnext = 0; int REnext = 0; int LNnext = 0; int RNnext = 0; // variable used to store the current frame of animation int currentFrameIndex = 0;
Servo Objects
Finally, we create some servo objects and assign them to variables. These are instances of the servo class that we included in Servo.h and will provide useful functions to control each servo.
// create servo objects to control servos Servo LS; Servo RS; Servo LE; Servo RE; Servo LN; Servo RN;
Define Functions
Arduino Setup Function
The Arduino setup() function is the first bit of code that gets run after the global variables have been defined. All that is needed here for now is to attach the servo objects to their pins and start up the Serial port, in case we want to report anything for debuggging.
void setup() { Serial.begin(9600); LS.attach(LSpin); RS.attach(RSpin); LE.attach(LEpin); RE.attach(REpin); LN.attach(LNpin); RN.attach(RNpin); }
Set Next Frame
This function is called once our servos get to the end of a frame. All that it is doing is:
- Incrementing the "currentFrameIndex" (unless we have have reached the last frame, in which case it loops back to frame 0)
- Store the current frame position as "last frame"
- Retrieve the next frame position from the animation array
void setNextFrame() { //if this was the last frame, start again if (currentFrameIndex < numFrames - 1) { currentFrameIndex++; } else { currentFrameIndex = 0; } //we have reached the destination frame, so store it as "last frame" LSlast = LSnext; RSlast = RSnext; LElast = LEnext; RElast = REnext; LNlast = LNnext; RNlast = RNnext; //generate new "next frame" LSnext = crawlGaitLS[currentFrameIndex]; RSnext = crawlGaitRS[currentFrameIndex]; LEnext = crawlGaitLE[currentFrameIndex]; REnext = crawlGaitRE[currentFrameIndex]; LNnext = crawlGaitLN[currentFrameIndex]; RNnext = crawlGaitRN[currentFrameIndex]; }
Interpolation Function
As described previously, we will use a linear interpolation to determine exactly what position a servo should be in at a given time between two frames.
My computer science lecturer always said that being a good programmer was all about being lazy, if you can avoid rewriting code multiple times by putting it in a function, then do it.
This function simply implements the linear interpolation equation between two frames and then maps that frame position to a servo position and applies it to the servo object.
void writeInterpolatMicros(Servo servo, int prevFrame, int nextFrame, int servoMin, int servoMax, float elapsedRatio) { int interpolated = prevFrame + int(float(nextFrame - prevFrame)*elapsedRatio); servo.writeMicroseconds(map(interpolated,0,1000,servoMin,servoMax)); }
Servo Update Function
This function makes the code neater by removing a chunk of it from the main loop.
First the ratio of the frame that is already completed is calculated using the number of milliseconds that have elapsed since the frame started, divided by the number of milliseconds it takes to complete a frame.
This ratio is passed along to the interpolation function for each servo, updating each one's position.
void updateServos() { float frameElapsedRatio = float(frameElapsed)/float(frameDuration); writeInterpolatMicros(LS,LSlast,LSnext,LSMin,LSMax,frameElapsedRatio); writeInterpolatMicros(LE,LElast,LEnext,LEMin,LEMax,frameElapsedRatio); writeInterpolatMicros(RS,RSlast,RSnext,RSMin,RSMax,frameElapsedRatio); writeInterpolatMicros(RE,RElast,REnext,REMin,REMax,frameElapsedRatio); writeInterpolatMicros(LN,LNlast,LNnext,LNMin,LNMax,frameElapsedRatio); writeInterpolatMicros(RN,RNlast,RNnext,RNMin,RNMax,frameElapsedRatio); }
Main Loop
The main loop() is where all the action happens, as soon as it finishes executing all the code contained inside it it jumps back to the beginning and starts all over again.
The first step in the main loop is to record the current number of milliseconds since the program started running) so that we can determine how much time has elapsed since the last iteration of the loop.
Using this time we can determine whether the elapsed time is greater than the period we defined for interpolation steps, if so, call the updateServos() function to generate new interpolated positions.
We also check whether the elapsed time is greater than the frame duration, in which case we need to call the setNextFrame() function.
void loop() { unsigned long currentMillis = millis(); if(currentMillis - previousInterpolationMillis > interpolationDuration) { // save the last time that we updated the servos previousInterpolationMillis = currentMillis; //increment the frame elapsed coutner frameElapsed += interpolationDuration; //update the servos updateServos(); } if(currentMillis - previousFrameMillis > frameDuration) { // save the last time that we updated the servos previousFrameMillis = currentMillis; //reset elapsed frame tiem to 0 frameElapsed = 0; //update the servos setNextFrame(); }
Step 20: Code: Adding Control
In the previous step we defined a basic crawl routine and discussed how to use interpolation to move the servos smoothly. Now, what if you want to control your Zombot?
There are oodles of ways to implement this, both in the hardware and the software. I chose to handle it using the Arduino's serial connection, which means it will be really trivial to control the robot wirelessly via Bluetooth, or as I am doing at the moment for debugging, simply via the USB cable.
This is by no means the most elegant way of achieving these results, but it is hopefully easy to read and understand, as well as flexible.
As before I will attach the full code below, but I will go through the key points here.
Communication Protocol
I have decided to set my communication protocol up in the following way.
- Each data packet (instruction) from the controller to the robot is a string of characters consisting of two parts, separated by a ":" character.
- The "command" comes first and tells the robot what to do (for example: begin moving)
- The "argument" comes second and provides extra information
Define Global Variables
In addition to the previously defined variables, we now need to to setup some variables which will be used in our communication protocol via serial.
char inDataCommand[10]; //array to store command char inDataArg[10]; //array to store command argument int inDataIndex = 0; //used when stepping through characters of the packet boolean packetStarted = false; //have we started receiving a data packet (got a "[" boolean argStarted = false; //have we received a ":" and started receiving the arg boolean packetComplete = false; //did we receive a "]" boolean packetArg = false; //are we reading the arg yet, or still the command
Define Functions
Read a Command From Serial
This function can be called to check the serial interface for received commands.
It checks the incoming serial bytes (characters) one by one, throwing them away unless they are a "[" which indicates the beginning of a command.
Once a command has been started, each byte is stored into the "command" variable until a ":" or "]" is received. If a ":" is received, we begin storing the following bytes into the "argument" variable until a "]" is received.
If at any point a "[" is received during the reading of another instruction, that previous instruction is discarded. This prevents us getting stuck if someone never transmitted a "]" end-of-command character and we wanted to send a new command.
Once a full command has been received the "processCommand" function is called, which will actually interperet and action the command.
void SerialReadCommand() { /* This function checks the serial interface for incoming commands. It expects the commands to be of the form "[command:arg]" where 'command' and 'arg' are strings of bytes separated by the character ':' and encapsulated by the chars '[' and ']' */ if (Serial.available() > 0) { char inByte = Serial.read();; //incoming byte from serial if (inByte == '[') { packetStarted = true; packetArg = false; inDataIndex = 0; inDataCommand[inDataIndex] = '\0'; //last character in a string must be a null terminator inDataArg[inDataIndex] = '\0'; //last character in a string must be a null terminator } else if (inByte == ']') { packetComplete = true; } else if (inByte == ':') { argStarted = true; inDataIndex = 0; } else if (packetStarted && !argStarted) { inDataCommand[inDataIndex] = inByte; inDataIndex++; inDataCommand[inDataIndex] = '\0'; } else if (packetStarted && argStarted) { inDataArg[inDataIndex] = inByte; inDataIndex++; inDataArg[inDataIndex] = '\0'; } if (packetStarted && packetComplete) { //try and split the packet into command and arg Serial.print("command received: "); Serial.println(inDataCommand); Serial.print("arg received: "); Serial.println(inDataArg); //apply input processUserInput(); packetStarted = false; packetComplete = false; argStarted = false; packetArg = false; } else if (packetComplete) { //this packet was never started packetStarted = false; packetComplete = false; argStarted = false; packetArg = false; } } }
Process a Command
Once a valid command (and optionally an argument) have been received, they need to be procesed, so that we can take the appropriate action.
For the moment I have not needed more than one byte of information to define a command, so we just look at the first byte of command. Using this byte as the argument for a switch() statement allows us to perform a function defined by the command byte.
In this example we are looking for the character "w" , "s" or "c".
If "w" is received, then the animation frames are overwritten with a new animation that defines a "butterfly" movement.
If "s" is received, then the animation frames are overwritten with a new animation that defines a "crawl" movement.
If "c" is received, then the animation frames are all set to the same position, effectively stopping all movement.
Since one cannot re-assign all of the values in an array at once, we first define a new temporary array for each servo, containing the new frames, then use the "memcpy" to copy those values over the actual frame array's location in memory.
void processUserInput() { /*for now all commands are single chars (one byte), can expand later if required char commandByte = inDataCommand[0]; if (commandByte != '\0') { switch (commandByte) { case 'w': { //synchronised forwards (butterfly) numFrames = 4; int newRS[] = {0,1000,1000,0}; int newRE[] = {0,0,1000,1000}; int newLE[] = {0,0,1000,1000}; int newLS[] = {0,1000,1000,0}; 's': { //crawl stroke, 180 degrees out of phase numFrames = 4; int newRS[] = {0,1000,1000,0}; int newRE[] = {0,0,1000,1000}; int newLE[] = {1000,0,0,1000}; int newLS[] = {0,0,1000,1000}; 'c': { //turn left crawlGaitRS[] = {250,250,250,250}; crawlGaitRE[] = {250,250,250,250}; crawlGaitLE[] = {250,250,250,250}; crawlGaitLS[] = {250,250,250,250}; crawlGaitLN[] = {250,250,250,250}; crawlGaitRN[] = {250,250,250,250}; break; } } } }
Main Loop
The only addition required in the main loop is a call to check for user input on each iteration of the loop.
//get user input using the SerialReadCommand function we wrote SerialReadCommand();
Step 21: Remote Control: Bluetooth
Buy a Device
There are all kinds of of ways in which one could add remote control, but the simplest to me is via a Bluetooth serial module. These Bluetooth-Serial modules allow you to connect you phone or computer to the device as though it is connected via a cable and send/receive serial commands from the micro-controller.
These JY-MCU modules are available cheaply from various Chinese stores, I got mine from the purveyors of the most Extreme Deals for about $7.50.
Update the Code
Choose your Serial Pins
You can use the module on the standard Ardunio pins SERIAL0 and SERIAL1, but then you have to disconnect it every time that you want to upload a new version of you firmware.
Using the Arduino Library Software Serial we are able to define a second serial port and use that instead.
First import the library
#include <SoftwareSerial.h>
Then, during the global variable declarations, we initialise an instance of the SoftwareSerial class and define which pins will be used. I chose digital pin 11 as Receive (Rx) and 10 as Transmit (Tx).
SoftwareSerial BTSerial(11, 10); // RX, TX
Modify Read Procedure
The only differences now to using the regular serial port is that during setup() we startup the software serial instance instead and when callign functions we refer to the SoftwareSerial instance that we created. Your device may be running at 9600 baud rate, which would be more than sufficient, but mine has been set to 115200 in the past, so I see no reason to change it. Check this if you are receiving nonsense characters.
BTSerial.begin(115200);
When checking for available data we would call:
BTSerial.available()
and when reading a character we'd call:
BTSerial.read()
Connect the Hardware
Wire the Blutooth Module to the Arduino
If you are using the same JY-MCU module as I am, then:
- connect the Vcc to the 5V pin of the Arduino for power (therefore using the Arduino's onboard regulator)
- connect GND to a ground pin on the Arduino
- Connect Tx to Rx on Arduino (pin 11 in my case)
- Connect Rx to Tx on Arduino (pin 10 in my case)
WARNING: 3.3V logic
The receive pin on the JY-MCU is rated as 3.3V logic. In my case I just used the 5V output from the Arduino and it worked without a hitch , but you may want to drop your Arduino's Tx output voltage with a pair of voltage divider resistors.
User Your Fancy New Wireless Link
Before you can talk to the Arduino from your computer over the air (assuming it has Bluetooth built in or you have installed a dongle) or you phone (assuming you have a Bluetooth terminal app that works or have written your own) you need to pair the devices.
This process varies with operating system, but in general:
- Find the Bluetooth icon in the quick launch bar and click on it
- Select the option to add a device
- Choose you module from the list (it may show up as "linvor") and click connect
- Enter the pairing code when requested (usually 1234 with these modules)
Once the devices are paired, look in your control panel's device manager (if on windows) and see what com port number the Bluetooth module has been assigned under the "Ports (Com & LPT)" section. Use a serial terminal, such as putty, to connect to this port as you would any wired serial link.
More Information
There is a great in depth Instructable on this module if you need more help
Step 22: Conclusions and Competitions
Conclusions and Comments
I hope you have enjoyed my Instructable. I fully intend to do some more development on this robot, a project like this is never finished!
I would love to see your versions of it, hear your questions and listen to what you think about it all, so please leave a comment. I will try my best to help with any troubles you might have.
Competitions
This is the step that I would like you to collaborate with me on, you click the "vote" button and I don't unleash Zombots on the populace. Or, you know, you just vote for me because you think that I made a cool diverse instructable and hopefully taught you something along the way :-D
In particular I am excited about the RadioShack Microcontroller contest and the Form1+ 3D printer contest, because I cannot imagine anything cooler than having that ability to use 3D printing to bring more crazy robots to life, among other things.
It fits pretty well in Halloween Props as well as as seeming like something that Super Villain might make though, so... don't be shy.
Recommendations
We have a be nice policy.
Please be positive and constructive.
30 Comments
This is great! Another idea to give an aged, dead effect to the skin is to layer liquid latex and stretched out cotton wool, and then paint with acrylics. It's cheap and pretty effective.
This is incredibly complicated. But you did a good job of explaining it so that those who have the skills could build one. I do not have the skills and therefore will not be making one.
Thank you! I tried to break it down into (zombie)bite-size pieces, so if someone really wants to follow along they can learn each bit as they go. I generally see Instructables as a way of inspiring and teaching techniques, after all, it would be boring to make another zombot just like mine, but to use the techniques to make something new would be awesome!
This is incredibly complicated. But you did a good job of explaining it so that those who have the skills could build one. I do not have the skills and therefore will not be making one.
Awesome project! I would add some hidden backward facing hooks under the hands to allow the doll to pull itself along the ground easier.
Thanks! I actually considered this, it definitely needs something to help it gain traction. In the end I decided against hooks for aesthetics (even though they would be invisible under normal use) and also because they would only work on grass/carpet. For indoor use on the tiles I thought I might have luck with some sort of rubbery coating, like plasti-dip, which would be ideal if one could mix up a nice gory red/black colour.
I'm thinking a rubbery coating of some sort for crawling on smooth surfaces and a removable piece that can be slid over the hands with little hooks/spikes for grass and carpet. Like the things they make to slip over your shoes for walking on ice. You could make them blend in well enough by painting them to match the hands and maybe building up the skin a little around them.
This is what should be shown to get people interested in robotics, all those typical moon rovers are just too bland!
And the potential for prank videos with this...
Thanks, I agree! I get bored doing "me too" projects, it has to do something I haven't seen before. If you want to teach someone something you have to catch their attention.
As for prank videos, I already spend most of my robot-time chasing my cats, but perhaps I should dress my toddler up as a zombie-survivor and let him attack it with one of his plastic hammers ;-)
This is a very well documented instructable. Its a fun idea and I hope you scared off a few kids. | http://www.instructables.com/id/Make-a-Crawling-Robot-Zombie-with-Severed-Legs/ | CC-MAIN-2018-09 | refinedweb | 9,613 | 63.53 |
22 December 2009 06:07 [Source: ICIS news]
By John Richardson
SINGAPORE (ICIS news)--China’s capacity to produce polyethylene and polypropylene will expand at a double-digit pace next year, while demand growth is expected to ease, said an analyst with CBI, a Shanghai-based commodity information service, on Tuesday.
The country’s polyethylene (PE) capacity would jump by 1.99m tonnes in 2010 to 11.1m tonnes, while its ability to produce polypropylene (PP) would increase by 2.74m tonnes to 12.7m tonnes, based on CBI data.
“This will include not only new capacities due to start next year, but the impact of plants that were commissioned in the second half of 2009,” said Longston Li, who is with CBI polymers team.
“Many of the players in the ?xml:namespace>
Demand for the polymers would continue to increase in
This is partly explained by the low base in 2008, when there was a severe weakness in demand and consumption of different grades of PE and PP had either fell or had very minimal growth.
PE demand would rise 7.1% next year to 16.27m tonnes in 2010, while PP demand would grow 12% to 14.55m tonnes next year, moderating from the projected 31.5% surge in demand for PE and 24% jump for PP in 2009, Li said.
Demand growth for polymers this year has been driven by re-stocking on the back of the Chinese government’s massive economic stimulus package.
Meanwhile, the global re-stocking process caused by deep production cutbacks during the economic crisis had ceased, according to market sources.
There were expectations that
The recovery of exports of finished plastics products since the middle of the year should also bode well for the PE and PP markets, he added.
“But many players are still very concerned about the impact of new plants,” Li said.
With additional reporting by Judith Wang
Source: CBI | http://www.icis.com/Articles/2009/12/22/9321058/china-pepp-capacity-expansion-to-outpace-demand-growth-in-10.html | CC-MAIN-2014-41 | refinedweb | 321 | 72.26 |
In this article, we’ll show you how to use Python to create your digital clock. We’ll do this using Tkinter. As we all know, Tkinter is used to make a wide range of GUI (Graphical User Interface) applications. We’ll learn how to make a digital clock with Tkinter in this demo.
Python Digital Clock – What’s needed?
The main prerequisites for creating a digital clock in Python include:
- Time module
- Python functions
- Tkinter basics (Label Widget)
If you’re using Windows, you won’t need to install anything because our simple digital clock software will just use the time and Tkinter modules that are already installed.
However, if you’re using a Linux operating system, the pre-installed Python interpreter may not have Tkinter, so you’ll have to manually install it, as demonstrated below.
Tkinter Module
Tkinter is Python’s standard GUI library. Tkinter is named after the Tk interface. When Python is used in conjunction with Tkinter, it is possible to easily construct graphical user interfaces. The Tk GUI toolkit has a sophisticated object-oriented interface called Tkinter. The latter is a Python binding. In fact, the toolkit is handy in creating graphical user interfaces. If you don’t already have it, you can install it by running the following command that uses the pip package manager.
pip install tkinter
or
sudo apt-get install python3-tk
Time
The Time module has several methods for obtaining time. This article will use strftime() to convert the current time to the Hour: Minutes: Seconds format.
The actual implementation phase
We’ll use geometry() to provide the visible window’s dimensions and mainloop() to prevent the displayable window from disappearing too rapidly in this code.
The application window’s design
In this stage, we’ll use the Tkinter library to define the window panel. After that, we’ll decide on the text design we’ll utilize for the digital clock.
Design of the label
This is the program’s coolest step. Because you can customize the design to your liking, this stage will set your work apart from the competition. It’s time to put your creativity to the test if you enjoy inventing stuff.
We will customize the following four elements:
- The font used to display digital digits.
- The color of our digital clock’s backdrop.
- Make sure the color of the digital digits isn’t the same as your background.
- The text’s text border width.
For instance,
font_of_text= ("Boulder", 59, 'bold') the_background = "#f2e750" the_foreground= "#363529"<br>width_of_border = 32
Feel free to use RGB or hex values for colors. In some situations, you can opt to use the color’s hex values. We use the color picker provided by Google in the browser. Simply type “Color picker” onto Google.
Let’s put the pieces together and define the label. The text that will display our time is the label function.
label = Label(app_window, font=font_of_text, bg=the_background , fg=the_foreground, bd=width_of_border )
label.grid(row=0, column=1)
Function of a Digital Clock
If we’re working on an application, functions are the most efficient approach to get things done. Functions are also beneficial since they help to structure and understand the program.
Starting the program
This is fantastic! You’ve made it to the end of our application project’s final stage. Functions, as you may know, do not run unless they are called. We’ll use the function to start the application. Let’s have a look at how to use the app:
digital_clock() app_window.mainloop()
Overall Understanding of making a digital Clock
The concept is simple: first, we’ll use Tkinter to build an empty window, then we’ll configure and position a Label widget within that window, and last, we’ll update the value of the label to be the current time every 0.08s.
What is the significance of 80 milliseconds?
According to studies, the human brain can only comprehend 12 discrete images per second; anything more than that is interpreted as a single image, causing the illusion of motion.
Below is the complete code for our digital clock, which you may experiment with altering the code parameter as desired and then pressing run again.
''' Creation of the Digital Clock in Python ''' from tkinter import * from tkinter.ttk import * from time import strftime main = Tk() main.title('The Digital clock in Python') def clock(): tick = strftime('%H:%M:%S %p') clock_label .config(text =tick) clock_label .after(1000, clock) clock_label = Label(main, font =('sans', 80), background = 'yellow', foreground = 'green') clock_label.pack(anchor= 'center') clock() mainloop()
Code walkthrough:
The first step entails importing the packages needed as follows.
from tkinter import * from tkinter.ttk import * from time import strftime
The next stage is to create UI for the digital clock as follows.
main = Tk()
The title method is responsible for setting the title for our clock.
main.title('The Digital clock in Python')
Let’s now define a way for obtaining the time. We will call it a clock. To get the time, we’ll use the strftime method and store it in a string. We’ll call the string tick. It’s now time to give the time format.
def clock(): tick = strftime('%H:%M:%S %p')
Now we’ll use the config method to set the label.
clock_label .config(text = tick)
We can now call our clock function and do the same with the after method. We’d like to call it every second, so that’ll be the first argument we’ll use.
clock_label .after(1000, clock)
We’ll need a label once we’ve finished crafting the title. Our title will be saved on the label. Let’s start by making a label. We’ll accomplish the same thing with the label approach.
clock_label = Label(root, font=('ds-digital', 100), background = 'yellow', foreground = 'green')
The design phase involves picking a typeface and a color for the background and foreground. We’re ready to ship our label now that we’ve finished designing it. We’ll use the pack approach to accomplish this. The anchor method can also be used to define the label’s alignment.
clock_label .pack(anchor='center')
Let’s call our clock function now, and then we’ll call mainloop at the end. That’s all there is to it!
clock() mainloop()
Example 2: How to create a digital clock using Tkinter
# importing the entire tkinter module from tkinter import * from tkinter.ttk import * # to retrieve the system's time, you need to import the strftime function from time import strftime # creation of the tkinter window main = Tk() main.title('The digital clock in Python') # This function displays the current time on the label defined herein def displayTime(): string = strftime('%H:%M:%S %p') clock_label.config(text = string) clock_label.after(1000, displayTime) # creating an attractive look: needs styling the label widget clock_label = Label(main, font = ('calibri', 42, 'bold'), background = 'purple', foreground = 'white') # defining how to place the digital clock at the centre tkinter's window clock_label.pack(anchor = 'center') displayTime() mainloop()
Example 3: how to create a digital clock using Tkinter
# start by importing all the necessary libraries from tkinter import * import sys import time #library to get the current time #create a function displayTime and variable time_now def displayTime(): #show the current hour,minute,seconds time_now = time.strftime("%H : %M : %S") #clock configuration clock_label.config(text=time_now) #after every 200 microseconds the clock will change clock_label.after(200,displayTime) #Creation of a variable responsible for storing the tkinter window main=Tk() #window size defined main.geometry("720x420") #First label - shows the time, #Second label - shows hour:minute:second, #Third label - shows the digital clock's title at the top clock_label=Label(main,font=("times",72,"bold"),bg="yellow") clock_label.grid(row=2,column=2,pady=25,padx=100) displayTime() #creation of a digital clock's variable digital_clock_title=Label(main,text="The Digital Clock in Python",font="times 24 bold") digital_clock_title.grid(row=0,column=2) hours_mins_secs=Label(main,text="Hours Minutes Seconds",font="times 15 bold") hours_mins_secs.grid(row=3,column=2) main.mainloop()
Conclusion
This is how to make a simple Digital Clock in Python! Is it simple? So, what exactly are you waiting for? Make your own by experimenting with the code! Make the variations to suit your liking. Finally, we would be glad to know how that goes for you. | https://www.codeunderscored.com/make-digital-clock-python/ | CC-MAIN-2022-21 | refinedweb | 1,396 | 57.37 |
Example News Item
Still learning the ropes of our blog system.
« March 3, 2006 | Main | March 29, 2006 »
Still learning the ropes of our blog system.
A Virtual Directory is a directory service (primiarly LDAP interface,
though in theory other protocols like DSML or another Web Service could
be used) that is unique in that it doesn't hold data in its own storage
system like a traditional directory server.
Instead it aggregates, on the fly, in real-time, data from various remote services usually other LDAP or RDBMS systems but could be Web Services or other proprietary APIs as well.
[Renamed the title]
In many, if not most, large organizations you will find that there are
multiple directories used. Sometimes this is because you have an LDAP
domain controller for different global regions or subsidiaries or it
could be that one directory is internal & the other is external
people (such as customers or partners).
However, you will discover that many LDAP enabled applications are not
capable of speaking to multiple LDAP servers. Instead they can only
speak to a single service.
So what do you do?
You can deploy a Virtual Directory (such as Oracle Virtual Directory)
which allows you to deploy a single, stateless "directory router"
service that makes multiple LDAP servers appear as a single LDAP server
without the need to synchronize to a single master service.
The way this works is that the Virtual Directory has "adapters" that
connect to your LDAP servers. In the adapter you define a namespace,
normally as a branch in the Virtual Directory's namespace.
For example -- if the root of your Virtual Directory is
"dc=example,dc=com" - you would create a virtual branch as
"ou=ldap1,dc=example,dc=com" for your first LDAP adapter. The value for
ou could be whatever makes the most sense for your organization.
And it doesn't even have to match the namespace your internal LDAP server is actually managing.
The benefit of exposing your LDAP adapters as branches is that it
simplifies the processing the Virtual Directory has to do in order to
determine which LDAP server is best suited to anwer the request.
After you have configured your LDAP adapters (with Oracle Virtual
Directory, this can be accomplished with just a few mouse clicks),
you're ready to point your LDAP client applications at the Virtual
Directory.
Now when the client performs a search against the Virtual Directory,
the Virtual Directory will pass the search request to all adapters that
could possibly answer that request. And then pass it back to the client
all responses received from all adapters that responded with returned
entries.
Thus your clients believe they searched a single LDAP server and got 1
or more entries. It has no idea that in reality those entries came back
from multiple LDAP servers.
And this applies to all types of LDAP operations including bind and add/modify/delete - not just search.
This page contains all entries posted to Virtual Identity Dialogue in March 2006. They are listed from oldest to newest.
March 3, 2006 is the previous archive.
March 29, 2006 is the next archive.
Many more can be found on the main index page or by looking through the archives. | http://blogs.oracle.com/mwilcox/2006/03/18/ | crawl-002 | refinedweb | 542 | 51.68 |
Learn how to migrate your existing App Engine app from the Go 1.9 runtime to the Go 1.12+ runtimes of the App Engine standard environment.
Changes in the App Engine Go 1.12+ runtime
You must make changes to your existing App Engine Go app and your deployment process in order to use the App Engine Go 1.12+ runtimes. The key differences in the new runtimes are outlined below:
The behavior of some elements in the
app.yamlconfiguration file has been modified. For more information, see Changes to the
app.yamlfile.
The Go 1.12+ runtimes do not provide access to proprietary App Engine APIs. You can use the Google Cloud client library or third party libraries to access Google Cloud services. You can find suggested alternatives to App Engine APIs in the Migrating from the App Engine Go SDK section.
Each of your services must include a
mainpackage. For more information, see Creating a
mainpackage and Structuring your files.
The
appenginebuild tag is deprecated and will not be used when building your app for deployment.
How you import dependencies into your project has changed. For the Go 1.12+ runtimes, specify dependencies either by:
- Putting your application and related code in your
GOPATH.
- Or, creating a
go.modfile to define your module.
For more information, see Specifying dependencies.
App Engine no longer modifies the Go toolchain to include the
appenginepackage. If you are using the
appenginepackage or the
google.golang.org/appenginepackage, you must migrate to the Google Cloud client library.
You must use the
gcloud app deploycommand to deploy your app, services, and configuration files, such as the
queue.yamland
cron.yamlfiles.
Migrating from the App Engine Go SDK
The
appengine package and the
google.golang.org/appengine package are no
longer supported. You will have to migrate to the
Google Cloud client library to access
equivalent Google Cloud services:
- Use Cloud Tasks to enqueue tasks from Go 1.12 and newer using the
cloudtaskspackage. You can use any App Engine service as the target of an App Engine task.
- Instead of the App Engine Mail API, use a third-party mail provider such as SendGrid, Mailgun, or Mailjet to send email. All of these services offer APIs to send email from applications.
- To cache application data, use Memorystore for Redis.
Access the App Engine Modules API using the
google-api-go-clientlibrary. Use the environment variables and the App Engine Admin API to obtain information and modify your application's running services:
Cloud Storage is recommended over using the App Engine Blobstore API. Use Cloud Storage through the
storagepackage. To get started, see the Cloud Storage Client Libraries page.
Access Datastore through the
datastorepackage. To get started, see the Datastore Client Libraries page.
Instead of using the App Engine Search API, host any full-text search database such as ElasticSearch on Compute Engine and access it from your service.
Use similar functionalities provided by the App Engine Images API in Cloud Storage through the
storagepackage and a third-party service to manipulate images. To get started, see the Cloud Storage Client Libraries page.
Use
request.Context()or your preferred context instead of using
appengine.NewContext.
The following App Engine-specific functionalities have been superseded by the Go standard library packages listed below:
Changes to the
app.yaml file
The behavior of some elements in the
app.yaml configuration file has been
modified:
For more information, see the
app.yaml reference.
Creating a
main package
Your service must include a
package main statement in at least one source
file.
Writing a main package
If your service doesn't already contain a
main package, add the
package main statement and write a
main() function. At a minimum,
the
main() function should:
Read the
PORTenvironment variable and call the
http.ListenAndServe()function:
port := os.Getenv("PORT") if port == "" { port = "8080" log.Printf("Defaulting to port %s", port) } log.Printf("Listening on port %s", port) if err := http.ListenAndServe(":"+port, nil); err != nil { log.Fatal(err) }
Registering your HTTP handlers
You can register your HTTP handlers by choosing one of the following options:
- The preferred method is to manually move all
http.HandleFunc()calls from your packages to your
main()function in your
mainpackage.
Alternatively, import your application's packages into your
mainpackage, ensuring each
init()function that contains calls to
http.HandleFunc()gets run on startup.
You can find all packages which use the
http.HandleFunc()call with the following bash script, and copy the output into your
mainpackage's
importblock:
gp=$(go env GOPATH) && p=$(pwd) && pkg=${p#"$gp/src/"} && find . -name "*.go" | xargs grep "http.HandleFunc" --files-with-matches | grep -v vendor/ | grep -v '/main.go' | sed "s#\./\(.*\)/[^/]\+\.go#\t_ \"$pkg/\1\"#" | sort | uniq
Structuring your files
Go requires each package has its own directory. You can tell
App Engine where your
main package is by using
main: in your
project's
app.yaml file. For example, if your app's file structure looked
like this:
myapp/ ├── app.yaml ├── foo.go ├── bar.go └── web/ └── main.go
Your
app.yaml file would have:
main: ./web # Relative filepath to the directory containing your main package.
For more information about the
main flag, see the
app.yaml
reference.
Moving files to your
GOPATH
Find your
GOPATH by using the following command:
go env GOPATH
Move all relevant files and imports to your
GOPATH. If using relative
imports, such as
import ./guestbook, update your imports to use the full
path:
import github.com/example/myapp/guestbook. | https://cloud.google.com/appengine/docs/standard/go/go-differences | CC-MAIN-2020-40 | refinedweb | 917 | 60.01 |
I am using the Photoshop's javascript API to find the fonts in a given PSD.
Given a font name returned by the API, I want to find the actual physical font file that that font name corresponds to on the disc.
This is all happening in a python program running on OSX so I guess I'm looking for one of:
2016年12月02日47分57秒
Unfortunately the only API that isn't deprecated is located in the ApplicationServices framework, which doesn't have a bridge support file, and thus isn't available in the bridge. If you're wanting to use ctypes, you can use ATSFontGetFileReference after looking up the ATSFontRef.
Cocoa doesn't have any native support, at least as of 10.5, for getting the location of a font.
2016年12月02日47分57秒
open up a terminal (Applications->Utilities->Terminal) and type this in:
locate InsertFontHere
This will spit out every file that has the name you want.
Warning: there may be alot to wade through.
2016年12月02日47分57秒
I haven't been able to find anything that does this directly. I think you'll have to iterate through the various font folders on the system:
/System/Library/Fonts,
/Library/Fonts, and there can probably be a user-level directory as well
~/Library/Fonts.
2016年12月02日47分57秒
There must be a method in Cocoa to get a list of fonts, then you would have to use the PyObjC bindings to call it..
Depending on what you need them for, you could probably just use something like the following..
import os def get_font_list(): fonts = [] for font_path in ["/Library/Fonts", os.path.expanduser("~/Library/Fonts")]: if os.path.isdir(font_path): fonts.extend( [os.path.join(font_path, cur_font) for cur_font in os.listdir(font_path) ] ) return fonts
2016年12月02日47分57秒 | http://www.91r.net/ask/469.html | CC-MAIN-2016-50 | refinedweb | 287 | 63.7 |
The final keyword
Posted on March 1st, 2001
The final keyword has slightly different meanings depending on the context, but in general it says “This cannot be changed.” You might want to prevent changes for two reasons: design or efficiency. Because these two reasons are quite different, it’s possible to misuse the final keyword.
The following sections discuss the three places where final can be used: for data, methods and for a class.
Final data
Many programming languages have a way to tell the compiler that a piece of data is “constant.” A constant is useful for two reasons:
- It can be a compile-time constant that won’t ever change.
-.
When using final with object handles rather than primitives the meaning gets a bit confusing. With a primitive, final makes the value a constant, but with an object handle, final makes the handle a constant. The handle must be initialized to an object at the point of declaration, and the handle can never be changed to point to another object. However, the object can be modified; Java does not provide a way to make any arbitrary object a constant. (You can, however, write your class so that objects have the effect of being constant.) This restriction includes arrays, which are also objects.
Here’s an example that demonstrates final fields:
//: FinalData.java // The effect of final on fields class Value { int i = 1; } public class FinalData { // Can be compile-time constants final int i1 = 9; static final int I2 = 99; // Typical public constant: public static final int I3 = 39; // Cannot be compile-time constants: final int i4 = (int)(Math.random()*20); static final int i5 = (int)(Math.random()*20); Value v1 = new Value(); final Value v2 = new Value(); static final Value v3 = new Value(); //! final Value v4; // Pre-Java 1.1 Error: // no initializer // handle //! I2 are final primitives with compile-time values, they can both be used as compile-time constants and are not different in any important way. I3. Also note that i5 cannot be known at compile time, so it is not capitalized..
The variables v1 through v4 demonstrate the meaning of a final handle. As you can see in main( ), just because v2 is final doesn’t mean that you can’t change its value. However, you cannot re-bind v2 to a new object, precisely because it’s final. That’s what final means for a handle. You can also see the same meaning holds true for an array, which is just another kind of handle. (There is know way that I know of to make the array handles themselves final.) Making handles final seems less useful than making primitives final.Blank finals
Java 1.1:
//: BlankFinal.java // "Blank" final data members class Poppet { } class BlankFinal { final int i = 0; // Initialized final final int j; // Blank final final Poppet p; // Blank final handle //.Final arguments
Java 1.1 allows you to make arguments final by declaring them as such in the argument list. This means that inside the method you cannot change what the argument handle points to:
//: FinalArguments.java // Using "final" with method arguments class Gizmo { public void spin() {} } public class FinalArguments { void with(final Gizmo g) { //! g = new Gizmo(); // Illegal -- g is final g.spin(); } handle to an argument that’s final without the compiler catching it, just like you can with a non-final argument.
The methods f( ) and g( ) show what happens when primitive arguments are final: you can only read the argument, but you can't change it.
Final methods.
Any private methods in a class are implicitly final. Because you can’t access a private method, you can’t override it (the compiler gives an error message if you try). You can add the final specifier to a private method but it doesn’t give that method any extra meaning. is as efficient as possible.
//:.
Final caution
It can seem to be sensible to make a method final while you’re designing a class. You might feel that efficiency is very important when using your class and that no one could possibly want to override your methods anyway. Sometimes this is true..
The standard Java library is a good example of this. In particular, the Vector class is commonly used and might be. Second, many of the most important methods of Vector, such as addElement( ) and elementAt( ) are synchronized, which as you will see in Chapter 14.
mRmgtB zF Qu VwH BoxW oUPosted by vlKOVXKRFp on 06/18/2013 09:32pm
buy tramadol online tramadol hcl para que sirve - buy ultram without no prescriptionReply
ffBgUZ Fm sO tyn jaZX ZKPosted by TrlYCsyNMt on 06/15/2013 09:11pm
tramadol online buy discount tramadol - buy tramadol canadaReply | http://www.codeguru.com/java/tij/tij0071.shtml | CC-MAIN-2014-41 | refinedweb | 789 | 63.8 |
Investors in Tandem Diabetes Care Inc (Symbol: TNDM) saw new options begin trading this week, for the September 18th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the TNDM options chain for the new September 18th contracts and identified one put and one call contract of particular interest.
The put contract at the $100.00 strike price has a current bid of $9.30. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $100.00, but will also collect the premium, putting the cost basis of the shares at $90.70 (before broker commissions). To an investor already interested in purchasing shares of TNDM, that could represent an attractive alternative to paying $100.80.30% return on the cash commitment, or 53.88% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for Tandem Diabetes Care Inc, and highlighting in green where the $100.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $105.00 strike price has a current bid of $7.70. If an investor was to purchase shares of TNDM stock at the current price level of $100.80/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $105.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 11.81% if the stock gets called away at the September 18th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if TNDM shares really soar, which is why looking at the trailing twelve month trading history for Tandem Diabetes Care Inc, as well as studying the business fundamentals becomes important. Below is a chart showing TNDM's trailing twelve month trading history, with the $105.00 strike highlighted in red:
.64% boost of extra return to the investor, or 44.26% annualized, which we refer to as the YieldBoost.
The implied volatility in the put contract example is 63%, while the implied volatility in the call contract example is 69%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 252 trading day closing values as well as today's price of $100.80) to be 61%.. | https://www.nasdaq.com/articles/first-week-of-tndm-september-18th-options-trading-2020-07-17 | CC-MAIN-2021-43 | refinedweb | 413 | 64.1 |
Hello,
I am using Oracle 12c and VS 2012 Express. I tried this simple code but it fails:
#include <stdio.h>
#include <iostream>
#include <occi.h>
using namespace oracle::occi;
using namespace std;
int main( int argc, const char* argv[] )
{
//create environment and connection
Environment* env = Environment::createEnvironment();
Connection* conn = env->createConnection( "xxx", "xxx", "localhost" );
std::cout << "Environment and Connection created" << std::endl;
....
}
The createEnvironment method fails and createConnection raises an exception.
Does anyone experience the same problem ?
Regards,
Benoît
You may be using the default OCCI libraries that are built with VS2010. Please check this:
Oracle Database Client Preinstallation Tasks
You need to use the libraries that are compatible with VS2012. See the above link for their location.
I checked the path of the libraries. They are all correct. If they had not been, I think the project would not even compile.
I have two instances of Oracle running. The problem might come from that...
However, I abandon for the moment.
By the way, what is the exception thrown? Print the message. With whatever credentials you are using, can you try sqlplus? (sqlplus user/passwd@connstring) | https://community.oracle.com/message/11204243 | CC-MAIN-2014-15 | refinedweb | 187 | 61.43 |
I really don't like the C# out keyword. I don't like its syntax and the fact that it makes method signatures look so weird. Worse than that though is the way it forces consumers of your code to also write the directive and code 'around it'. I can fully appreciate its purpose and why the designers put it into the C# language spec, but I can't help feeling it could have been done in a cleaner / better way. This article will show you how you can do without the out keyword for certain types of methods by using generics, anonymous methods and implicit / explicit type converters.
out
It's probably best to show an example of where the out keyword is used.
Microsoft has used the keyword in the framework. One example is in the Int32.TryParse() method. In order to return a value and a bool back to the caller, they resorted to doing this:
Int32.TryParse()
bool
int value;
if (Int32.TryParse("123", out value)) {
// we have a real integer
}
// or the more natural (but exception prone) way
value = Int32.Parse("123");
Personally, I feel that the last line looks much cleaner, neater, easier to read and more to the point. You are calling a method that returns a value (just like 90% of the .NET Framework methods do) and your variable will be set or an exception is thrown.
If you have a method that should return a bool and a value, then you're stuck with this pattern (and the dreaded out keyword) unless you decide to return a bespoke object from the method which contains both the value and the return result. Writing another class to get around the out keyword isn't ideal. It introduces a class that exists for perhaps a single method and leaves you with the decision of where to put the class. If the out keyword was bad, this is downright awful.
out
The out keyword lets us return other values to the caller but it doesn't really help them. Let's say that you called the first method which returned a bool value of false. What does this mean? Of course, at a high level it means that the string passed to the method did not parse into an integer - but why exactly did it fail? All we know is that the string isn't valid for some reason.
false
string
int value;
if (!Int32.TryParse(txtAge.Value, out value)) {
// ok, we failed to parse the input but why????
}
Wouldn't it be nice if you could pass a message back like "the value you supplied was too big" or "the value contained letters" instead of giving the user a catch all message like "sorry that does not compute". Of course, we can do this by throwing an exception, but it's a performance bottleneck and it forces the consumer of your method to write exception handlers (which I suspect is why Microsoft created the TryParse() method instead).
TryParse()
So, we can sum up the problem by specifying what goals our solution should have. It should:
We can provide a reusable generic solution to this problem by making use of generics, type converters and anonymous methods. If we can return a reusable rich object which allows the consumer to get the value, message or exception and still write code that looks natural, then I think we're on to a winner.
My Result<T> class provides a solution to all of those requirements and is small enough to not cause any memory or performance issues due to the use of generics. Let's take a look at the Result<T> class below:
Result<T>
/// <summary>
/// Reusable generic result class which allows a value, exception
/// or message information to be passed back to a
/// calling method in one go.
/// </summary>
/// <typeparam name="T">The expected return type</typeparam>
public class Result<T> {
protected T result = default(T);
protected string message = string.Empty;
protected Exception exception = null;
protected Predicate<T> boolConverter = delegate { return true; };
public Result(T result) : this(result, string.Empty, null, null) {}
public Result(T result, Predicate<T> boolConverter) :
this(result, string.Empty, null, boolConverter) { }
public Result(T result, string message) : this(result, message, null, null) { }
public Result(Exception exception) : this(default(T),
string.Empty, exception, null) { }
public Result(T result, string message,
Exception exception, Predicate<T> boolConverter) {
this.result = result;
this.exception = exception;
if (exception != null && message == string.Empty)
this.message = exception.Message;
else
this.message = message;
if(boolConverter!= null)
this.boolConverter= boolConverter;
}
public T ActualResult {
get { return this.result; }
}
public string Message {
get { return this.message; }
}
public Exception Exception {
get { return this.exception; }
}
public bool Success {
get { return CheckForSuccess(this); }
}
public static explicit operator bool(Result<T> result) {
return CheckForSuccess(result);
}
static bool CheckForSuccess(Result<T> result) {
return result.Exception == null &&
string.IsNullOrEmpty(result.message) &&
result.boolConverter(result.ActualResult);
}
public static implicit operator T(Result<T> result) {
if (result.Exception != null)
throw result.Exception;
if(!result.boolConverter(result.ActualResult))
throw new Exception("Result failed the boolConverter test
and is not guaranteed to be valid");
return result.ActualResult;
}
}
The first thing to notice is that the class takes a generic parameter when you define an instance of it. If we were to write a wrapper around the Int32.TryParse() method, then it would look something like this:
public Result<int>ParseInt(string sValue) {
int value;
if (!Int32.TryParse(sValue, out value)) {
// more investigation required into why it failed here
return new Result<int>(new Exception
("String couldn't be converted to an integer"));
}
return new Result<int>(value);
}
This is where the pattern really kicks in. The consumer can now decide how they want to use your method. There are no less than two ways to call the ParseInt() method.
ParseInt()
Method 1 (the 'confident' method): Treat the return result as a normal value:
public void DoSomething(string sValue) {
// the ParseInt method will throw an exception if the string
// wasn't successfully parsed
int value = ParseInt(sValue);
}
Method 2 (the 'defensive' method): Treat the return result as a result object
public void DoSomething(string sValue) {
// the ParseInt method will throw an exception
// if the string wasn't successfully parsed
Result<int> value = ParseInt(sValue);
if((bool)value) {
// get the int out of the result
int intValue = value.ActualValue;
} else {
// tell everybody why it failed
MessageBox.Show(value.Message);
// or you could do this
// throw value.Exception;
}
}
Say we wanted to extend the ParseInt() method so that it only returned true for integers that were:
true
That's where the anonymous methods kick in. The Result<T> class provides an alternative constructor which allows a delegate to be passed to it. The delegate is then inspected when the type converters are called or when the Success property is inspected.
Success
public Result<int>ParseInt(string sValue) {
int value;
if (Int32.TryParse(sValue, out value)) {
// more investigation required into why it failed here
return new Result<int>(new Exception("String couldn't be converted to an integer"));
}
return new Result<int>(value,
delegate(T obj) {
if (Convert.ToInt32(obj) > 0)
return true;
return false;
};
);
}
The result is now only valid when the value is greater than zero.
The Result<t> class makes use of both implicit and explicit type converters. The class will always explicitly convert to a bool which indicates whether the method succeeded or not. The value returned can be checked by querying the Success property as well as using the explicit conversion.
Result<t>
Result<int> value = ParseInt(sValue);
if((bool)value) { // this invokes the explicit type converter
}
Generally speaking, implicit type conversions can be quite dangerous. One object turning into another type of object without your realising it doesn't exactly set your pulse running, but with the Result<t> class, it is a perfect fit. If you want to treat the Result as the native type, then it will attempt to convert it for you automatically. However, if the method returns a failure result, then an exception is thrown at the point where the conversion takes place. This is to prevent you from using a result which is not valid without you realising it.
Result
Result<int> value = ParseInt(sValue);
if(value > 99) { // this invokes the implicit type converter and may throw an exception
}
It might be beneficial to extend the Result class for particular datatypes. I know that this goes some way against the whole generics idea but in doing so could allow more specialised results to be put in place. For example, if the result is always numeric, then a sub class could be written that extends Result<t> and adds some standard boolean tests such as:
public static Predicate<T> GreaterThanZero {
get {
return delegate(T obj) {
if (Convert.ToInt32(obj) > 0)
return true;
return false;
};
}
}
Adding these sort of helper methods nicely encapsulates the logic of the class - yet still allows it to be extended.
I hope that somebody out there finds this simple class useful. I have used the pattern in a number of large projects and it's been extremely useful to me. I'd love to hear from anybody who uses the class and especially those who extend it.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
852y3agnna wrote:SNIP: huge wrapper class
852y3agnna wrote:SNIP: uses either of the two things you hate inside of it (exception handling and the "out" keyword)
<br />
try {<br />
int value = ParseInt(sValue);<br />
} catch (Exception ex) {<br />
}<br />
<br />
Result<int> value = ParseInt(sValue);<br />
if((bool)value) {<br />
} else {<br />
}<br />
852y3agnna wrote:SNIP: for each and every function that takes variables by reference instead of value
class MainClass<br />
{<br />
public static void Main(string[] args)<br />
{<br />
Exception ex;<br />
int timer = Environment.TickCount;<br />
<br />
for(int i = 0; i<1000000; i++) {<br />
try {<br />
ex = new Exception("");<br />
//throw ex;<br />
} catch {}<br />
}<br />
<br />
timer = Environment.TickCount - timer;<br />
<br />
Console.WriteLine(timer);<br />
Console.ReadLine();<br />
<br />
}<br />
TryParse
Enum
T
Exception
Result<bool> result = GetResult();<br />
bool value = (bool)result; // explicit, and tells us if the method suceeded or not<br />
value = result; // implicit, and gives us the return value of the method
result.ActualValue
result.Success
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/script/Articles/View.aspx?aid=14780 | CC-MAIN-2017-30 | refinedweb | 1,776 | 52.9 |
OpenGL Headline News
Intel Mesa driver for Linux is now OpenGL 4.6 conformant.
Feb 01, 2018 | Read article... | Permalink
Khronos Launches OpenGL 4.6 Adopters Program - Conformance Test Suite in Open Source
The Khronos Group announces the launch of the OpenGL.
Jan 31, 2018 | Read article... | Permalink
Kh.
Jan 30, 2018 | Read article... | Permalink
New Optimized OpenGL/Graphics Math (glm) for C - feedback requested
There is a new optimized OpenGL/Graphics Math for C. The original glm library is for C++ only (templates, namespaces, classes…). This new library is targeted to C99 but currently you can use it for C89 safely by language extensions. Almost all functions (inline versions) and parameters are documented inside related headers. Complete documentation is in progress. Feedback is welcome on the OpenGL forums.
Jan 11, 2018 | Read article... | Permalink
Plumeria Smart Creator 3 Shows Off OpenGL with Physics.
Jan 02, 2018 | Read article... | Permalink | https://www.opengl.org/news/permalink/pixellight-engine-0.9.10-released/ | CC-MAIN-2018-09 | refinedweb | 150 | 70.09 |
On 30-Nov-03, 09:23 (CST), Thomas Viehmann <[email protected]> wrote: > If you absolutely want that, how about having a script on top of add > user manage the system uids. That could avoid reassigning UIDs as long > as it can and then ask the user about it (it could even reassign UIDs > upun reinstallation).. I think the idea of a namespace for usernames used by packages is a good idea, but rather than "debian-", we should take this to the LSB folk, so that we can get it done once. Steve -- Steve Greenland The irony is that Bill Gates claims to be making a stable operating system and Linus Torvalds claims to be trying to take over the world. -- seen on the net | https://lists.debian.org/debian-devel/2003/11/msg02308.html | CC-MAIN-2016-50 | refinedweb | 125 | 76.56 |
Static Code Analysis for Julia
using Pkg Pkg.add("StaticLint")
using StaticLint ``` html **Documentation**: []() ## Description This package supports LanguageServer.jl functionality broadly by: 1. linking the file tree of a project 2. marking scopes/namespaces within the syntax tree (ST) 3. marking variable bindings (functions, instances, etc.) 4. linking identifiers (i.e. variable names) to the relevant bindings 5. marking possible errors within the ST Identifying and marking errors (5.) is, in general, dependent on steps 1-4. These are achieved through a single pass over the ST of a project. A pass over a single `EXPR` is achieved through calling a `State` object on the ST. This `State` requires an `AbstractServer` that determines how files within a project are loaded and makes packages available for loading. ### Passes For a given experssion `x` this pass will: * Handle import statements (`resolve_import`). This either explicitly imports variables into the current state (for statements such as `import/using SomeModule: binding1, binding2`) or makes the exported bindings of a modules available more generally (e.g. `using SomeOtherModule`). The availability of includable packages is handled by the `getsymbolserver` function called on the `state.server`. * Determine whether `x` introduces a new variable. `mark_bindings!` performs this and may mark bindings for child nodes of `x` (e.g. when called on an expression that defines a `Function` this will mark the arguments of the signature as introducing bindings.) * Adds any binding associated with `x` to the variable list of the current scope (`add_binding`). * Handles global variables (`mark_globals`). * Special handling for macros introducing new bindings as necessary, at the moment limited to `deprecate`, `enum`, `goto`, `label`, and `nospecialize`. * Adds new scopes for the interior of `x` as needed (`scopes`). * Resolves references for identifiers (i.e. a variable name), macro name, keywords in function signatures and dotted names (e.g. `A.B.c`). A name is first checked against bindings introduced within a scope then against exported variables of modules loaded into the scope. If this fails to resolve the name this is repeated for the parent scope. References that fail to resolve at this point, and are within a delayed scope (i.e. within a function) are added to a list to be resolved later. * If `x` is a call to `include(path_expr)` attempt to resolve `path_expr` to a loadable file from `state.server` and pass across the files ST (`followinclude`). * Traverse across child nodes of `x` (`traverse`) in execution order. This means, for example, that in the expression `a = b` we traverse `b` then `a` (ignoring the operator). ### Server As mentioned, an `AbstractServer` is required to hold files within a project and provide access to user installed packages. An implementation must support the following functions: `StaticLint.hasfile(server, path)::Bool` : Does the server have a file matching the name `path`. `StaticLint.getfile(server, path)::AbstractFile` : Retrieves the file `path` - assumes the server has the file. `StaticLint.setfile(server, path, file)::AbstractFile` : Stores `file` in the server under the name `path`, returning the file. `StaticLint.canloadfile(server, path)::Bool` : Can the server load the file denoted by `path`, likely from an external source. `StaticLint.loadfile(server, path)::AbstractFile` : Load the file at `path` from an external source (i.e. the hard drive). 
`StaticLint.getsymbolserver(server)::Dict{String,SymbolServer.ModuleStore}` : Retrieve the server's depot of loadable packages. An `AbstractFile` must support the following: `StaticLint.getpath(file)` : Retrieve the path of a file. `StaticLint.getroot(file)` : Retrieve the root of a file. The root is the main/first file in a file structure. For example the `StaticLint.jl` file is the root of all files (including itself) in `src/`. `StaticLint.setroot(file, root)` : Set the root of a file. `StaticLint.getcst(file)` : Retrieve the cst of a file. `StaticLint.setcst(file, cst::CSTParser.EXPR)` : Set the cst of a file. `StaticLint.getserver(file)` : Retrieve the server holding of a file. `StaticLint.setserver(file, server::AbstractServer)` : Set the server of a file. `StaticLint.semantic_pass(file, target = nothing(optional))` : Run a full pass on the ST of a project (i.e. ST of all linked files). It is expected that `file` is the root of the project.
12/03/2017
1 day ago
742 commits | https://juliaobserver.com/packages/StaticLint | CC-MAIN-2021-17 | refinedweb | 693 | 51.85 |
PhoneGap just updated to 1.6.1. Any ETA on the new Sencha touch supporting the cordova versions of phonegap? Thanx!
I am also very interested to get ST 2.0.1 working with Cordova 1.6.1.
In fact, I can't get ST 2.0.1RC with PhoneGap 1.4.1 (before the namespace change) too.
I am succesfully building native android apps with Cordova (1.5 and 1.6) and Sencha.
But if you prefer to not listen, then fine.twitter: @realjax
I've said it left, right and center on this forum before.. but it seems everyone is convinced it's a namespace problem where it really is not.
so listen very carefully, I shall say this only once (more)..
Sencha's 2.0x class loading system fails with
- Cordova and phonegap
- on Android API 11 and up (possibly even on API 10).
Make sure you do not use your Sencha app on those platforms without creating a proper build first.
Create a production or package build of your Sencha app and use that as the asset of your phonegap project.twitter: @realjax
yes, toss it in the asset/www folder.twitter: @realjax | http://www.sencha.com/forum/showthread.php?191614-Bug-Fix-for-Cordova-1.5-(PhoneGap)&p=783650 | CC-MAIN-2015-18 | refinedweb | 197 | 69.99 |
Here is a small script that will detect some guideline violations for vim. I put it in my .vimrc and it highlight all the errors it can find in all opened files. I'd like to know if you made improvements on it.
" autocmd that will set up the w:created variable (vim tip 1598) " so we can run the check only once per window autocmd VimEnter * autocmd WinEnter * let w:created=1 :fu FuncHaikuCheck() call matchadd('Search', '\%>80v.\+', -1) " line with more than 80 chars call matchadd('Search', ' ', -1) " probably wrong indenting call matchadd('Search', '\(for\|if\|select\|while\)(', -1) "keyword without space after it call matchadd('Search', '[a-zA-Z0-9][,=<>/+\-*;][a-zA-Z0-9]', -1) "operator without space around it call matchadd('Search', '[^*][=/+\- ]$', -1) "operator at end of line (without false positives on char*\nclass::method(), /* and */) :endfu " call the function on all opened files autocmd WinEnter * if !exists('w:created') | call FuncHaikuCheck() | endif autocmd BufWinEnter * call FuncHaikuCheck() | https://dev.haiku-os.org/wiki/CodingGuidelines/VIM?version=5 | CC-MAIN-2019-30 | refinedweb | 161 | 68.4 |
I'm trying to define and then implement an abstract setter which takes a List of Objects as a parameter. Here's the gist of this simple idea:
public abstract class MyClass {
public abstract void setThings(List<?> things);
}
public class Bar extends MyClass {
private List<String> things;
@Override
public void setThings(List<String> things) {
this.things = things;
}
}
Method does not override method from its superclass
both methods have the same erasure, but neither overrides the other
public abstract <T> void setThings(List<T> things);
So Java is quite correctly telling you that you haven't implemented the abstract method
setThings which takes a
List<?> not a
List<T> or
List<String>. All of these are different things. See this question for a detailed explanation.
The simplest solution is to introduce a generic for your abstract class as well:
public abstract class MyClass<T> { public abstract void setThings(List<T> things); } public class SubClass extends MyClass<String> { private List<String> things; public void setThings(List<String> things) { this.things = things; } } | https://codedump.io/share/E20sgwK82D7F/1/implement-generic-method-with-list-parameter | CC-MAIN-2017-43 | refinedweb | 169 | 61.06 |
I have a C# winforms app that runs a macro in another program. The other program will continually pop up windows and generally make things look, for lack of a better word, crazy. I want to implement a cancel button that will stop the process from running, but I cannot seem to get the window to stay on top. How do I do this in C#?
Edit: I have tried TopMost=true; , but the other program keeps popping up its own windows over top. Is there a way to send my window to the top every n milliseconds?
Edit: The way I solved this was by adding a system tray icon that will cancel the process by double-clicking on it. The system tray icon does no get covered up. Thank you to all who responded. I read the article on why there is not a ‘super-on-top’ window… it logically does not work.
Form.TopMost will work unless the other program is creating topmost windows.
There is no way to create a window that is not covered by new topmost windows of another process. Raymond Chen explained why.
If by “going crazy” you mean that each window keeps stealing focus from the other, TopMost will not solve the problem.
Instead, try:
CalledForm.Owner = CallerForm; CalledForm.Show();
This will show the ‘child’ form without it stealing focus. The child form will also stay on top of its parent even if the parent is activated or focused. This code only works easily if you’ve created an instance of the child form from within the owner form. Otherwise, you might have to set the owner using the API.
Set Form.TopMost
I was searching to make my WinForms application “Always on Top” but setting “TopMost” did not do anything for me. I knew it was possible because WinAmp does this (along with a host of other applications).
What I did was make a call to “user32.dll.” I had no qualms about doing so and it works great. It’s an option, anyway.
First, import the following namespace:
using System.Runtime.InteropServices;
Add a few variables to your class declaration:;
Add prototype for user32.dll function:
[DllImport("user32.dll")] [return: MarshalAs(UnmanagedType.Bool)] public static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter, int X, int Y, int cx, int cy, uint uFlags);
Then in your code (I added the call in Form_Load()), add the call:
SetWindowPos(this.Handle, HWND_TOPMOST, 0, 0, 0, 0, TOPMOST_FLAGS);
Hope that helps. Reference
Set the form’s
.TopMost property to true.
You probably don’t want to leave it this way all the time: set it when your external process starts and put it back when it finishes.
The way i solved this was by making a system tray icon that had a cancel option.
What is the other application you are trying to suppress the visibility of? Have you investigated other ways of achieving your desired effect? Please do so before subjecting your users to such rogue behaviour as you are describing: what you are trying to do sound rather like what certain naughty sites do with browser windows…
At least try to adhere to the rule of Least Surprise. Users expect to be able to determine the z-order of most applications themselves. You don’t know what is most important to them, so if you change anything, you should focus on pushing the other application behind everything rather than promoting your own.
This is of course trickier, since Windows doesn’t have a particularly sophisticated window manager. Two approaches suggest themselves:
- enumerating top-level windows
and checking which process they
belong to, dropping their
z-order if so. (I’m not sure if
there are framework methods for
these WinAPI functions.)
- Fiddling with child process permissions to prevent it from accessing the desktop… but I wouldn’t try this until the othe approach failed, as the child process might end up in a zombie state while requiring user interaction.
Why not making your form a dialogue box:
myForm.ShowDialog();
I had a momentary 5 minute lapse and I forgot to specify the form in full like this:
myformName.ActiveForm.TopMost = true;
But what I really wanted THIS!
this.TopMost = true;
Here is the SetForegroundWindow equivalent:
form.Activate();
I have seen people doing weird things like:
this.TopMost = true; this.Focus(); this.BringToFront(); this.TopMost = false;
The following code makes the window always stay on top as well as make it frameless.
using System; using System.Drawing; using System.Runtime.InteropServices; using System.Windows.Forms; namespace StayOnTop { public partial class Form1 : Form {; [DllImport("user32.dll")] [return: MarshalAs(UnmanagedType.Bool)] public static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter, int X, int Y, int cx, int cy, uint uFlags); public Form1() { InitializeComponent(); FormBorderStyle = FormBorderStyle.None; TopMost = true; } private void Form1_Load(object sender, EventArgs e) { SetWindowPos(this.Handle, HWND_TOPMOST, 100, 100, 300, 300, TOPMOST_FLAGS); } protected override void WndProc(ref Message m) { const int RESIZE_HANDLE_SIZE = 10; switch (m.Msg) { case 0x0084/*NCHITTEST*/ : base.WndProc(ref m); if ((int)m.Result == 0x01/*HTCLIENT*/) { Point screenPoint = new Point(m.LParam.ToInt32()); Point clientPoint = this.PointToClient(screenPoint); if (clientPoint.Y <= RESIZE_HANDLE_SIZE) { if (clientPoint.X <= RESIZE_HANDLE_SIZE) m.Result = (IntPtr)13/*HTTOPLEFT*/ ; else if (clientPoint.X < (Size.Width - RESIZE_HANDLE_SIZE)) m.Result = (IntPtr)12/*HTTOP*/ ; else m.Result = (IntPtr)14/*HTTOPRIGHT*/ ; } else if (clientPoint.Y <= (Size.Height - RESIZE_HANDLE_SIZE)) { if (clientPoint.X <= RESIZE_HANDLE_SIZE) m.Result = (IntPtr)10/*HTLEFT*/ ; else if (clientPoint.X < (Size.Width - RESIZE_HANDLE_SIZE)) m.Result = (IntPtr)2/*HTCAPTION*/ ; else m.Result = (IntPtr)11/*HTRIGHT*/ ; } else { if (clientPoint.X <= RESIZE_HANDLE_SIZE) m.Result = (IntPtr)16/*HTBOTTOMLEFT*/ ; else if (clientPoint.X < (Size.Width - RESIZE_HANDLE_SIZE)) m.Result = (IntPtr)15/*HTBOTTOM*/ ; else m.Result = (IntPtr)17/*HTBOTTOMRIGHT*/ ; } } return; } base.WndProc(ref m); } protected override CreateParams CreateParams { get { CreateParams cp = base.CreateParams; cp.Style |= 0x20000; // <--- use 0x20000 return cp; } } } } | https://exceptionshub.com/how-to-make-a-window-always-stay-on-top-in-net.html | CC-MAIN-2021-21 | refinedweb | 972 | 60.92 |
C# is a simple, modern, general purpose and object oriented programming language developed by Microsoft. It is component oriented and easy to learn. C# is a structured language used to produce efficient programs. This language is part of the .Net framework and can be compiled on diverse computer platforms. It’s syntax is similar to that of C and C++. C# is specifically designed to be a platform independent language on the lines of Java.
However, unlike C++, multiple inheritance is not supported by C#. Instead interfaces are provided. Interfaces permit several classes to implement the same set of methods. C# is compiled to Microsoft Intermediate Language(MSIL). MSIIL is similar to Java bytecode and gives C# the capability to be platform independent. It runs using Just In Time(JIT) compilers and makes it possible for classes to be inherited across languages. Garbage collection is a prominent feature of C#. The language also offers support for direct access to memory. The classic attributes of OOPS such as inheritance, encapsulation and polymorphism are supported by C#. We walk you through the C# pause function in this intermediate level tutorial. We assume you’re familiar with the basics of C#, otherwise you can take this excellent course on C# programming for complete beginners. (If you’re pressed for time, this amazing course helps you learn C# in under an hour!)
What is the .NET Framework?
.NET Framework is a software framework developed by Microsoft which runs predominantly on Microsoft Windows. It consists of a large library and supports language interoperability across several programming languages. By language interoperability we mean each programming language can use program code created in other programming languages. Note that applications created for .NET framework execute in a software environment known as the Common Language Runtime(CLR). The latter is an application virtual machine offering a host of services which includes exception handling, security and memory management. The CLR and the class library together form the .NET framework.
CLR offers developers a framework that permits applications to execute under multiple computer environments. It is designed with the goal of being a working implementation of Microsoft’s Common Language Infrastructure (CLI). Among other things the Framework Class Library offers numeric algorithms, web program development, network communications, user interface, data access, database connectivity and cryptography.
.NET Framework provides a Common Type System or CTS. All possible datatypes and programming constructs supported by the CLR are defined by the CTS specification. The framework simplifies computer software installation so that it meets the specified security requirements and does not conflict with the previously installed computer software. The Framework Class Library provides classes that encapsulate functions such as graphic rendering, file reading and writing, database operations, XMl document manipulation and more. This is made available to all the languages which use .NET Framework. You can learn more about how C# and the .NET framework with this course.
A Look at a Typical C# Program
A C# program is usually divided into the following parts
- Namespace declaration
- A class
- Class methods
- Class attributes
- A Main method
- Statements & Expressions
Example : C# program to print Good Morning
using System; namespace GoodMorningApplication { class GoodMorning { static void Main(string[] args) { /* my first program in C# */ Console.WriteLine("Good Morning"); Console.ReadKey(); } } }
Let’s look into what the program actually does.
The first line of the program-using keyword includes the System namespace in the program. A C# program usually has multiple using statements The next line gives the namespace declaration. A namespace is nothing but a collection of classes. The GoodMorningApplication namespace contains the class GoodMorning. The next line gives a class declaration, the class GoodMorning contains the data and method definitions. Classes can contain more than one method. The GoodMorning class has only one method Main. The next line defines the Main method, which is the entry point for all C# programs. The Main method defines what the class will do when executed. The next line /*…*/ will be ignored by the compiler and is a comment. WriteLine is a method of the Console class defined in the System namespace. This statement prints “Good Morning” on the screen. The last line Console.ReadKey(); makes the program wait for a key press and it prevents the screen from running and closing quickly when the program is launched.
Keep in mind C# is case sensitive. All statements and expressions in a C# program must end with a semicolon. The C# program execution begins at the Main method. Also unlike Java, the file name can be different from the class name. Learn more about writing your own C# programs with this course.
C# ServiceController.Pause Method
This method suspends a service’s operation. It is present in the Namespace “System.ServiceProcess”. It throws two exceptions. Win32Exception is thrown when an error occurs while accessing a system API. InvalidOperationException is called when a particular service is not found.
The syntax is as follows:
public void Pause()
The empty parentheses indicate that this function does not accept any arguments. The keyword void indicates that the function returns anything. Note that this function has a public access specifier. In other words everybody can call this function.
The following program shows how to use the Pause method to pause a service. Note that sc_new is an object of the ServiceControllerclass in the program.
sc_new.Pause(); while (sc_new.Status != ServiceControllerStatus.Paused) { Thread.Sleep(1000); sc_new.Refresh(); } Console.WriteLine("Status = " + sc_new.Status);
Let’s walk through this program. sc_new.Pause() suspends a service’s operation. sc_new.Status gets the status of the service that is referenced by this instance. Thread.Sleep(1000) will stop the executing thread for 1000 milliseconds. sc_new.Refresh() refreshes property values by resetting the properties to their current values. Remember that only when the ServiceControllerStatus is Paused can you call Continue for the service. ServiceControllerStatus Enumeration indicates the current state of the service. WriteLine is a method of the Console class defined in the System namespace. It displays the status of the service.
Note that this was just an introduction to C# pause. There’s many more layers to C#. We recommend you explore C# further with this advanced 3 part course (Part I, Part II, Part III), or if you’re more the DIY type, jump over to this course to learn how to create your own Android App in C#. | https://blog.udemy.com/c-sharp-pause/ | CC-MAIN-2017-26 | refinedweb | 1,058 | 50.53 |
Sometimes simple notation can make a big difference. One example of this is the Kronecker delta function δij which is defined to be 1 if i = j and zero otherwise. Because branching logic is built into the symbol, it can keep branching logic outside of your calculation. That is, you don’t have to write “if … else …” in when doing your calculation. You let the symbol handle it.
The permutation symbol εijk is similar. It has some branching logic built into its definition, which keeps branching out of your calculation, letting you handle things more uniformly. In other words, the symbol encapsulates some complexity, keeping it out of your calculation. This is analogous to how you might reduce the complexity of a computer program. [1]
Definition
The permutation symbol, sometimes called the Levi-Civita symbol, can have any number of subscripts. If any two of the subscripts are equal, the symbol evaluates to 0. Otherwise, the symbol evaluates to 1 or -1. If you can order the indices with an even number of swaps, the sign of the permutation is 1. If it takes an odd number of swaps, the sign is -1. You could think of putting the indices into a bubble sort algorithm and counting whether the algorithm does an even or odd number of swaps.
(There’s an implicit theorem here saying that the definition above makes sense. You could change one order of indices to another by different series of swaps. Two different ways of getting from one arrangement to another may use a different number of swaps, but the number of swaps in both approaches will have the same parity.)
Incidentally, I mentioned even and odd permutations a few days ago in the context of finite simple groups. One of the families of finite simple groups are the alternating groups, the group of even permutations on a set with at least five elements. In other words, permutations whose permutation symbol is 1.
Examples
For example, ε213 = -1 because it takes one adjacent swap, exchanging the 2 and the 1, to put the indices in order. ε312 = 1 because you can put the indices in order with two adjacent swaps: 3 <-> 1, then 3 <-> 2. The symbol ε122 is 0 because the last two indices are equal.
Mathematica
You can compute permutation symbols in Mathematica with the function
Signature. For example,
Signature[{3, 1, 2}]
returns 1. The function works with more indices as well. For example,
Signature[{3, 1, 2, 5, 4}]
returns -1.
Python
SymPy has a function
LeviCivita for computing the permutation symbol. It also has
Eijk as an alias for
LeviCivita. Both take a variable number of integers as arguments, not a list of integers as Mathematica does. If you do have a list of integers, you can use the
* operator to unpack the list into separate arguments.
from sympy import Eijk, LeviCivita from numpy.random import permutation print( LeviCivita(3, 1, 2) ) print( Eijk(3, 1, 2, 5, 4) ) p = permutation(5) assert(Eijk(*p) == LeviCivita(*p))
Product formula
When all indices are distinct, the permutation symbol can be computed from a product. For two indices,
For three indices,
and in general
Cross products
An example use of the permutation symbol is cross products. The ith component of the cross product of b × c is
Here we’re using tensor notation where components are indicated by superscripts rather than subscripts, and there’s an implied summation over repeated indices. So here we’re summing over j and k, each running from 1 to 3.
Similarly, the triple product of vectors a, b and c is
This is also the determinant of the matrix whose rows are the vectors a, b, and c. Determinants of larger matrices work the same way.
Relation to Kronecker delta
This post started out by talking about the more familiar Kronecker delta as an introduction to the permutation symbol. There is a nice relation between the two given below.
If we set r = i we get the special case
Related posts
[1] One way of measuring the complexity of a computer program is the maximum number of logic branches in any function. If you have a moderately complex function, and you replace an if-then statement with a call to a small function that has an if-then statement, you’ve reduced the overall complexity. This is sort of what the delta and permutation functions do.
3 thoughts on “The permutation symbol”
Any particular reason you didn’t mention the sgn function? It looks to me like the product formula could be written as a product of sgn invocations, and then it would always be valid, even when there are duplicated indices. This moves all the conditionals down into the sgn function.
You might find this interesting.
John Ousterhout: “A Philosophy of Software Design” | Talks at Google
You often hear that the cross product is “only for 3D”. But the right hand side of your formula
is easy to extend to (n-1) vectors, each with n components. | https://www.johndcook.com/blog/2018/09/16/permutation-tensor/ | CC-MAIN-2020-29 | refinedweb | 841 | 63.29 |
8.1 Writing C header files
Writing portable C header files can be difficult, since they may be read by different types of compilers:
- C++ compilers
C++ compilers require that functions be declared with full prototypes, since C++ is more strongly typed than C. C functions and variables also need to be declared with the
extern "C"directive, so that the names aren’t mangled. See section Writing libraries for C++, for other issues relevant to using C++ with libtool.
- ANSI C compilers
ANSI C compilers are not as strict as C++ compilers, but functions should be prototyped to avoid unnecessary warnings when the header file is
#included.
- non-ANSI C compilers
Non-ANSI compilers will report errors if functions are prototyped.
These complications mean that your library interface headers must use some C preprocessor magic in order to be usable by each of the above compilers.
‘foo.h’ in the ‘tests ‘foo.h’ as follows:
#ifndef FOO_H #define FOO_H 1 /* The above macro definitions. */ #include "…" BEGIN_C_DECLS int foo PARAMS((void)); int hello PARAMS((void)); END_C_DECLS #endif /* !FOO_H */
Note that the ‘#ifndef FOO_H’ prevents the body of ‘foo(8).
Do not be naive about writing portable code. Following the tips given above will help you miss the most obvious problems, but there are definitely other subtle portability issues. You may need to cope with some of the following issues:
- Pre-ANSI compilers do not always support the
void *generic pointer type, and so need to use
char *in its place.
- The
const,
inlineand
signedkeywords are not supported by some compilers, especially pre-ANSI compilers.
- The
long doubletype is not supported by many compilers.
This document was generated on December 1, 2011 using texi2html 5.0. | http://www.manpagez.com/info/libtool/libtool-2.4.2/libtool_44.php | CC-MAIN-2017-09 | refinedweb | 285 | 54.22 |
Create a thread
#include <pthread.h> int pthread_create( pthread_t* thread, const pthread_attr_t* attr, void* (*start_routine)(void* ), void* arg );
If attr is NULL, the default attributes are used (see pthread_attr_init()).
The thread in which main() was invoked behaves differently. When it returns from main(), there's an implicit call to exit(), using the return value of main() as the exit status.
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The pthread_create() function creates a new thread, with the attributes specified in the thread attribute object attr. The created thread inherits the signal mask of the parent thread, and its set of pending signals is empty.
QNX Neutrino extensions
If you adhere to the POSIX standard, there are some thread attributes that you can't specify before creating the thread:
There are no pthread_attr_set_* functions for these attributes.
As a QNX Neutrino extension, you can OR the following bits into the __flags member of the pthread_attr_t structure before calling pthread_create():
After creating the thread, you can change the cancellation properties by calling pthread_setcancelstate() and pthread_setcanceltype().; } | http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/p/pthread_create.html | CC-MAIN-2018-26 | refinedweb | 184 | 53.21 |
If I lived in "Google-land", I could use Google Bookmarks. But I don't and had to look for alternatives. After some searching I found a possible way to import bookmarks:
- Upload your single html file of exported bookmarks to a document on Google Docs.
- In your browser navigate to said page.
- Manually click on each link and save as bookmark.
import the bookmarks directly into the browser's database. That would be cool. But I'm not bold enough to root my new phone and "play with that, probably break it a bit - and then cry about it later." (Ward from androidcommunity.com)
My Solution
On several places I read hints that some Android apps were able to import bookmarks, but I couldn't find any. Instead I found Matthieu Guenebaud's Bookmarks Manager. It's able to backup and restore the browser bookmarks and uses a plain zip file to store them.
Viewing .ZIP: Bookmarks_2010-09-10_14-11-24.zip Length Method Size Ratio Date Time Name ------ ------ ----- ----- ---- ---- ---- 2273 DeflatN 710 68.8% 09.10.2010 2:11p bookmarks.xml 526 DeflatN 531 0.0% 09.10.2010 2:11p 23.png 764 DeflatN 769 0.0% 09.10.2010 2:11p 24.png 326 DeflatN 331 0.0% 09.10.2010 2:11p 51.png 684 DeflatN 689 0.0% 09.10.2010 2:11p 57.png 239 DeflatN 238 0.5% 09.10.2010 2:11p 69.png 541 DeflatN 546 0.0% 09.10.2010 2:11p 90.png 1266 DeflatN 1271 0.0% 09.10.2010 2:11p 198.png 490 DeflatN 495 0.0% 09.10.2010 2:11p 164.png 304 DeflatN 309 0.0% 09.10.2010 2:11p 124.png 408 DeflatN 413 0.0% 09.10.2010 2:11p 229.png ------ ----- ----- ---- 7821 6302 19.5% 11The file
bookmarks.xmlhas a simple XML structure of
<bookmark>s inside a
<bookmarks>element. Yes, that's something I can use.
- Backup the bookmarks, even if empty, to get an initial file.
- Unpack the archive.
- Insert bookmarks into the existing XML structure.
- Repack the archive.
- Restore from the modified zip file.
- Enjoy your new wealth of bookmarks.
bookmarks.xmlby hand, here is some Scala code.:
val bookmarkXml = scala.xml.XML.loadFile(targetFolder + "/bookmarks.xml") val lastOrder = Integer.parseInt((bookmarkXml \\ "order").last.text) val oldNodes = bookmarkXml \\ "bookmark" val newNodes = to_xml_list(bookmarks, lastOrder + 1000) val root = <bookmarks>{ oldNodes }{ newNodes }</bookmarks> scala.xml.XML.save(targetFolder + "/bookmarks.xml", root, "UTF8", true, null)Thanks to Scala's excellent XML support, reading and modifying the bookmarks file is easy. The method
to_xml_list()iterates all favourites and creates XML fragments for each one using the following method.
def to_xml(bm: Favorite, order: Int) = { <bookmark> <title>{ bm.name }</title> <url>{ bm.url }</url> <order>{ order }</order> <created>{ bm.fileDate.getTime }</created> </bookmark> }
Favoriteis a class representing an Internet Explorer favourite that I wrote long ago. (Yeah baby, code reuse!)
orderis a number Bookmark Explorer uses for sorting bookmarks. See the complete source of GuenmatBookmarks.scala in my Scala Utilities repository.
3 comments:
This method works well on HTC Desire. But there is a minor glitch in Guenebaud's Bookmarks Manager when used on the Samsung Galaxy S: The import of bookmarks works, but they are not displayed in the bookmark list of the browser. Instead they are shown in the history of the browser and there they are marked as bookmark (little yellow star). To fix this you have to open each of these pages in the browser. Once it’s loaded, it’s shown in the list of bookmarks together with the proper icon as expected.
@Peter Kofler - Thank you very much regarding Samsung Galaxy issue. But could you please let me know, how can i resolve this issue programmatically ?
I'm not sure. It could be an Android browser problem. At least it looks like one because the pages are marked as bookmarked, but not displayed in the main list. AFAIK Galaxy and HTC use different browsers. The one by HTC is a special version or custom at all. Maybe it's missing an optional field, e.g.date of last access (if exists). | https://blog.code-cop.org/2010/10/android-browser-bookmarks.html?showComment=1308743159650 | CC-MAIN-2019-26 | refinedweb | 696 | 71.61 |
quotactl - manipulate disk quotas
Synopsis
Description
Return Values
Errors
Colophon
#include <sys/quota.h> #include <xfs/xqm.h>
int quotactl(int cmd, const char *special, int id ", caddr_t " addr );
The quota system can be used to set per-user and per-group limits on the amount of disk space used on a file system. For each user and/or group, a soft limit and a hard limit can be set for each file system. The hard limit cant be exceeded. The soft limit can be exceeded, but warnings will ensue. Moreover, the user cant file system being manipulated.
The addr argument is the address of an optional, command-specific, data structure that is copied in or out of the system. The interpretation of addr is given with each command below.
The subcmd value is one of the following:There is no command equivalent to Q_SYNC for XFS since sync(1) writes quota information to disk (in addition to the other file system metadata that it writes out).
On success, quotactl() returns 0; on error -1 is returned, and errno is set to indicate the error.
quota(1), getrlimit(2), quotacheck(8), quotaon(8)
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.sgvulcan.com/quotactl.2.php | CC-MAIN-2017-09 | refinedweb | 221 | 65.12 |
How to Build a Personal Stock Market Dashboard
In this tutorial, we will be building a single-page web dashboard for tracking a real or imaginary stock portfolio. This dashboard will:
- Allow the user to record stock purchases.
- Track the current price of all stocks held through web scraping.
- Show the user the percentage gain or loss on their holdings, for each stock and in total.
After working through this tutorial, you'll:
- Be able to build a single-page application with Flask and JavaScript.
- Understand web scraping with
Requestsand
Beautiful Soup.
- Know how to manage persistent data with Replit's database.
Creating the Dashboard
We're going to build our dashboard using Python (plus a bit of JavaScript). Sign in to Replit or create an account if you haven't already. Once logged in, create a new Python repl.
Our dashboard will need to have three functions:
- Displaying the current state of the stock portfolio.
- Recording share purchases.
- Flushing the database (this will be useful for testing).
Let's start by creating an HTML frontend with the basic data and form elements necessary to enable this functionality. In your repl's file pane, create a directory named
templates, and in that folder, create a file called
index.html (this file structure is necessary for building Flask applications).
Then enter the following HTML into the
index.html file:
<!DOCTYPE html>
">
<style>
.positive:before { content: "+"; color: green; }
.positive { color: green; }
.negative { color: red; }
th, td { padding: 1em; }
body { margin: 2em auto; max-width: 80em; }
</style>
<body>
<form action="/buy" method="post">
<input type="text" placeholder="ticker" name="ticker"/>
<input type="text" placeholder="# shares" name="shares"/>
<input type="text" placeholder="price" name="price"/>
<input type="submit" value="Buy"/>
</form>
<table id="portfolio">
<tr>
<th>Ticker</th>
<th>Number of shares</th>
<th>Total cost</th>
<th>Current value</th>
<th>Percent change</th>
</tr>
<tr>
<td>Loading...</td>
<tr>
</table>
<a href="/flush">Flush DB</a>
</body>
</html>
In this file, we've imported Bootstrap CSS and applied some minimal styles of our own to make the default content look a little better. We've also created a form for recording share purchases, and a table that will display our share portfolio.
Now let's write some Python code so we can display this page in our Flask app. Enter the following code in
main.py:
from flask import Flask, render_template, request, jsonify, url_for, redirect
import json
site = Flask(__name__)
@site.route('/')
def index():
return render_template('index.html')
site.run(host='0.0.0.0', port=8080)
Now run your repl. The resulting page should look like this:
The first thing we need to implement to get this page functional is the share purchase form. Let's do that now.
Accessing the Database and Recording Purchases
We can allow users to "buy" stock by entering the ticker symbol, the number of shares purchased, and the price per share. While it would also make sense to record the purchase time, we will leave that out for the sake of simplicity (but you can add it later on your own).
We will record these purchases by adding them to the Replit database. This is a simple key-value store that you can think of as a big Python dictionary which retains its state between runs of a repl. Using this database will save us from having to re-enter all of our stock information every time we restart our dashboard.
To use the Replit Database, all we have to do is add the following import statement at the top of
main.py:
from replit import db
Now we can use the globally scoped variable
db like a Python dictionary, keeping in mind that whatever we store in it will persist between executions of our application. A cheat sheet for using the database is available from your repl's database tab on the sidebar.
Let's give it a spin and write the function for buying shares. Add the following code just above the line beginning with
site.run:
@site.route('/buy', methods=['POST'])
def buy():
# Create shares key if it doesn't exist
if 'shares' not in db.keys():
db['shares'] = {}
# Extract values from form
ticker = request.form['ticker'][:5].upper()
price = float(request.form['price'])"))
First, if necessary, we create an empty dictionary at the "shares" key of our
db database. This code will only need to run the first time we buy shares.
Then, we extract the ticker, price and number of shares from the form data, coercing each one into an appropriate format. We want stock tickers to be uppercase and a maximum of five characters long, prices to include fractional amounts, and number of shares to be integers (though you could change that later to support fractional shares).
Finally, we add our share purchase to the "shares" dictionary. This dictionary is made up of ticker symbol keys mapped to inner dictionaries, which in turn contain the following information:
- The total number of shares owned.
- The total cost of all shares purchased.
- A list of individual purchases. Each purchase is a dictionary containing the number of shares purchased, and their unit price.
This may seem like a complicated structure, but it is necessary to allow users to buy shares in the same company at different prices. With some data added, our dictionary could look like this:
{
"AAPL": {
"total_shares": 15,
"total_cost": 1550,
"purchases": [
{
"shares": 10,
"price": 100
},
{
"shares": 5,
"price": 110
}
]
},
"MSFT": {
"total_shares": 20,
"total_cost": 4000,
"purchases": [
{
"shares": 20,
"price": 200
}
]
}
}
In the data above, we've bought 10 shares of Apple Inc (AAPL) at $100 per share, and 5 at $110 per share. We've also bought 20 shares of Microsoft Corporation (MSFT) at $200 per share. The
total_shares and
total_cost values could be derived from the list of purchases, but we're storing them in the database to avoid having to recalculate them unnecessarily.
Run your code now, and add a few stocks with random values. You can use some example tickers: AAPL, MSFT, AMZN, FB, GOOG. While our purchases won't show up on our dashboard, you can determine whether they're getting added to the database by visiting the database tab on your repl's sidebar. If your code is correct, you should see non-zero values under the "STORAGE" heading.
At the moment, our dashboard will also allow you to add any value as a ticker symbol, even if it's not a real public company. And, needless to say, our dashboard also doesn't show us the current value of our stocks. We'll fix those issues soon, but first we need to implement some functionality to help us test.
Flushing the Database
A persistent database is vital for creating most useful web applications, but it can get messy in the early stages of development when we're adding a lot of different test data and experimenting with different structures. That's why it's useful to have a quick and easy way to delete everything.
We've already included a link to flush the database in our
index.html file, so now let's create the backend functionality in Flask. Add this code to
main.py, below the
buy function:
@site.route('/flush')
def flush_db():
del db["shares"]
return redirect(url_for("index"))
Here we're deleting the
shares key from our database and then redirecting the user to the dashboard. As
shares is the only key in our database, this code will suffice for now, but if we add more keys, we'll have to change it accordingly.
Test this new functionality out by flushing your database before moving on to the next section, especially if you have invalid stock tickers. You can confirm whether the flush worked by checking the database tab of your repl's sidebar, where the values under "STORAGE" should now be zero. Note that deletion may take a few seconds to reflect.
Serving Our Portfolio Data
We want our dashboard to be a live display that fetches new stock prices periodically, without us having to refresh the page. It would also be nice to unload calculations such as percentage gain or loss to the client's web browser, so we can reduce load on our server. To this end, we will be structuring our portfolio viewing functionality as an API endpoint that is queried by JavaScript code, rather than using Jinja templates to build it on the server-side.
The first thing we must do to achieve this is to create a Flask endpoint that returns the user's portfolio. We'll do this at
/portfolio. Add the following code to
main.py, below the
buy function:
@site.route('/portfolio')
def portfolio():
if "shares" not in db.keys():
return jsonify({})
portfolio = json.loads(db.get_raw("shares"))
# Get current values
for ticker in portfolio.keys():
current_price = float(get_price(ticker))
current_value = current_price * portfolio[ticker]['total_shares']
portfolio[ticker]['current_value'] = current_value
return jsonify(**portfolio)
The purpose of this function is to serve a JSON object of the shares portfolio to the client. Later, we'll write JavaScript for our dashboard which will use this object to construct a table showing our portfolio information.
In the above code, if no stocks have been added, we return an empty JSON object. Otherwise, we set
portfolio to a copy of the
shares dictionary in our Replit database. The Python Replit library uses custom list and dictionary types that cannot be directly serialized into JSON, so we use
db.get_raw to convert the whole thing into a string and
json.loads to convert that string into a standard Python dictionary.
Then we need to get the current values for each of our stock holdings. To do so, we loop through
portfolio.keys(), call
get_price(ticker) and multiply the return value by the total shares we're holding for this stock. We then add this value under the new
current_value key in our stock's dictionary.
Finally, we convert our portfolio dictionary to JSON using Flask's
jsonify and return it.
There's just one problem: we haven't implemented
get_price yet! Let's do that now, before we try to run this code.
Fetching Current Prices
We'll fetch the current prices of our stocks by scraping the Yahoo Finance website. While the more traditional and foolproof way of consuming structured data such as share prices is to use an API that provides structured data in a computer-ready format, this is not always feasible, as the relevant APIs may be limited or even non-existent. For these and other reasons, web scraping is a useful skill to have.
A quick disclaimer before we jump in: Copyright law and web scraping laws are complex and differ by country. As long as you aren't blatantly copying their content or doing web scraping for commercial gain, people generally don't mind web scraping. However, there have been some legal cases involving scraping data from LinkedIn, and media attention from scraping data from OKCupid. Web scraping can violate the law, go against a particular website's terms of service, or breach ethical guidelines – so take care with where you apply this skill.
Additionally, from a practical perspective, web scraping code is usually brittle and likely to break in the event that a scraped site changes its appearance.
With those considerations in mind, let's start scraping. We'll use Python Requests to fetch web pages and Beautiful Soup to parse them and extract the parts we're interested in. Let's import those at the top of
main.py.
from bs4 import BeautifulSoup
import requests
Now we can create our
get_price function. Enter the following code near the top of
main.py, just below
site = Flask(__name__):
def get_price(ticker):
page = requests.get("" + ticker)
soup = BeautifulSoup(page.text, "html5lib")
price = soup.find('fin-streamer', {'class':'Fw(b) Fz(36px) Mb(-4px) D(ib)'}).text
# remove thousands separator
price = price.replace(",", "")
return price
The first line fetches the page on Yahoo Finance that shows information about our stock share price. For example, the link below will show share price information for Apple Inc:
We then load the page into a Beautiful Soup object, parsing it as HTML5 content. Finally, we need to find the price. If you visit the above page in your browser, right-click on the price near the top of the page and select "Inspect". You'll notice that it's inside a
fin-streamer element with a class value containing
Fw(b) Fz(36px) Mb(-4px) D(ib). If the market is open, and the price is changing, additional classes may be added and removed as you watch, but the previously mentioned value should still be sufficient.
We use Beautiful Soup's
find method to locate this
fin-streamer. The
text attribute of the object returned is the price we want. Before returning it, we remove any comma thousands separators to avoid float conversion errors later on.
Although we've implemented this functionality for the sake of portfolio viewing, we can also use it to improve our share buying process. We'll make a few additional quality-of-life changes at the same time. Find your
buy function code and modify it to look like this:
@site.route('/buy', methods=['POST'])
def buy():
# Create shares key if it doesn't exist
if 'shares' not in db.keys():
db['shares'] = {}
ticker = request.form['ticker']
# remove starting $
if ticker[0] == '$':
ticker = ticker[1:]
# uppercase and maximum five characters
ticker = ticker.upper()[:5]
current_price = get_price(ticker)
if not get_price(ticker): # reject invalid tickers
return f"Ticker $'{ticker}' not found"
if not request.form['price']: # use current price if price not specified
price = float(current_price)
else:
price = float(request.form['price'])
if not request.form['shares']: # buy one if number not specified
shares = 1
else:"))
The first change we've made to this function is to strip leading
$s on ticker symbols, in case users include those. Then, by calling
get_price in this function, we can both prevent users from adding invalid stock tickers and allow users to record purchases at the current price by leaving the price field blank. Additionally, we'll assume users want to buy just one share if they leave the number of shares field blank.
We can now test out our code. Run your repl, add some stocks, and then, in a separate tab, navigate to this URL (replacing the two ALL-CAPS values first):
You should now see a JSON object similar to the database structure detailed above, with the current value of each stock holding as an additional field. In the next section, we'll display this data on our dashboard.
Showing Our Portfolio
We will need to write some JavaScript to fetch our portfolio information, assemble it into a table, and calculate the percentage changes for each stock as well as our portfolio's total cost, current value and percentage change.
Add the following code just above the closing
</body> tag in
templates/index.html:
<script>
function getPortfolio() {
fetch("/portfolio")
.then(response => response.json())
.then(data => {
console.log(data);
});
}
getPortfolio();
</script>
This code uses the Fetch API to query our
/portfolio endpoint and returns a
Promise, which we feed into two
then methods. The first one extracts the JSON data from the response, and the second one logs the data to JavaScript console. This is a common pattern in JavaScript, which provides a lot of asynchronous functionality.
Run your repl and open its web page in a new tab.
Then open your browser's devtools with F12, and you should see your portfolio JSON object in the console. If you don't, give it a few seconds.
Now let's add the rest of our JavaScript code.
console.log(data); and add the following code in its place:
var table = document.getElementById("portfolio");
var tableHTML = `<tr>
<th>Ticker</th>
<th>Number of shares</th>
<th>Total cost</th>
<th>Current value</th>
<th>Percent change</th>
</tr>`;
var portfolioCost = 0;
var portfolioCurrent = 0;
for (var ticker in data) {
var totalShares = data[ticker]['total_shares'];
var totalCost = data[ticker]['total_cost'];
var currentValue = data[ticker]['current_value'];
var percentChange = percentChangeCalc(totalCost, currentValue);
row = "<tr>";
row += "<td>$" + ticker + "</td>";
row += "<td>" + totalShares + "</td>";
row += "<td>$" + totalCost.toFixed(2) + "</td>";
row += "<td>$" + currentValue.toFixed(2) + "</td>";
row += percentChangeRow(percentChange);
row += "</tr>";
tableHTML += row;
portfolioCost += totalCost;
portfolioCurrent += currentValue;
}
portfolioPercentChange = percentChangeCalc(portfolioCost, portfolioCurrent);
tableHTML += "<tr>";
tableHTML += "<th>Total</th>";
tableHTML += "<th> </th>";
tableHTML += "<th>$" + portfolioCost.toFixed(2) + "</th>";
tableHTML += "<th>$" + portfolioCurrent.toFixed(2) + "</th>";
tableHTML += percentChangeRow(portfolioPercentChange);
tableHTML += "</tr>"
table.innerHTML = tableHTML;
This code constructs an HTML table containing the values queried from our portfolio endpoint, as well as the extra calculated values we mentioned above. We use the
toFixed method to cap the number of decimal places for financial values to two.
We also use a couple of helper functions for calculating and displaying percentage changes. Add the code for these above the
getPortfolio function declaration:
function percentChangeCalc(x, y) {
return (x != 0 ? (y - x) * 100 / x : 0);
}
function percentChangeRow(percentChange) {
if (percentChange > 0) {
return "<td class='positive'>" + percentChange.toFixed(2) + "%</td>";
}
else if (percentChange < 0) {
return "<td class='negative'>" + percentChange.toFixed(2) + "%</td>";
}
else {
return "<td>" + percentChange.toFixed(2) + "%</td>";
}
}
The
percentChangeCalc function calculates the percentage difference between two numbers, avoiding division by zero. The
percentChangeRow function allows us to style gains and losses differently by adding classes that we've already declared in the page's CSS.
Finally, we need to add some code to periodically refetch our portfolio, so that we can see the newest price data. We'll use JavaScript's
setInterval function for this. Add the following code just above the closing
</script> tag.
// refresh portfolio every 60 seconds
setInterval(function() {
getPortfolio()
}, 60000)
Run your repl, add some stocks if you haven't, and you should see something like this:
From this point on, we highly recommend viewing your application in a new browser tab rather than Replit's in-page browser, to get the full-page dashboard experience.
Caching
Our dashboard is feature-complete, but a bit slow. As we're rendering it with client-side JavaScript that has to execute in the user's browser, we won't be able to make it load instantly with the rest of the page, but we can do some server-side caching to speed it up a little and reduce the load on our repl.
Currently, whenever we send a request to the
/portfolio endpoint, we execute
get_price on each of our stocks and rescrape Yahoo Finance to find their prices. Under normal conditions, stock prices are unlikely to change significantly moment-to-moment, and our dashboard is not a high-frequency trading platform, so we should write some logic to store the current share price and only renew it if it's more than 60 seconds old. Let's do this now.
As we're going to be modifying the database structure in this section, it's a good idea to flush your repl's database before going any further, so as to avoid errors.
First, we'll import the
time module, near the top of
main.py.
import time
This allows us to use
time.time(), which returns the current Unix Epoch, a useful value for counting elapsed time in seconds. Add the following code to the
buy function, just above the
return statement:
db['shares'][ticker]['current_price'] = current_price
db['shares'][ticker]['last_updated'] = time.time()
This code will add the current share price for each ticker and when it was last updated to our database.
Now we need to modify the
get_price function to resemble the code below:
def get_price(ticker):
# use cache if price is not stale
if ticker in db["shares"].keys() and time.time() < db["shares"][ticker]["last_updated"]+60:
return db["shares"][ticker]["current_price"]
page = requests.get("" + ticker)
soup = BeautifulSoup(page.text, "html5lib")
price = soup.find('fin-streamer', {'class':'Fw(b) Fz(36px) Mb(-4px) D(ib)'}).text
# remove thousands separator
price = price.replace(",", "")
# update price in db
if ticker in db["shares"].keys():
db["shares"][ticker]["current_price"] = price
db["shares"][ticker]["last_updated"] = time.time()
return price
The if statement at the top will cause the function to return the current price recorded in our database if it has been fetched recently, and the two new lines near the bottom of the function will ensure that when a new price is fetched, it gets recorded in the database, along with an updated timestamp.
You can play around with different caching time periods in this function and different refresh intervals in the JavaScript code to find the right tradeoff between accurate prices and fast load times.
Where Next?
Our stock dashboard is functional, and even useful to an extent, but there's still a lot more we could do with it. The following features would be good additions:
- Support for fractional shares.
- The ability to record the sale of shares.
- Timestamps for purchase (and sale) records.
- Support for cryptocurrencies, perhaps using data from CoinMarketCap.
- The ability to create multiple portfolios or user accounts.
- Graphs.
You can find the code for this tutorial in the repl below: | https://docs.replit.com/tutorials/personal-stock-market-dashboard | CC-MAIN-2022-27 | refinedweb | 3,524 | 63.49 |
When I first saw c# threading code, I just falled in love forever. As days passed by the power and simplicity of “System.Threading” namespace converted my love in to blind love.
In case you have not still fallen in love and new to C# threading, you can start looking at my C# training threading video.
The blind love was but obvious because in today’s modern world parallel task execution is compulsory component.My love helped me to please users with mind blowing UI interfaces, multi-threaded batch processes, until one day…..
But one fine day my love with c# thread’s had a bump.I had two threads running and both where updating the same local variable. The updation done by one thread was not visible to the other thread.
In the same application I had created one function called as “SomeThread” which will run until the “_loop” is true.
Note :- When you run the code ensure that you have selected release with CNTRL + F5 pressed.
See it yourself, take the same source code and change the “_loop” variable to volatile as shown in the below code snippet and run it.
Ensure that you select mode “release” and hit control + f5 to see the expected behavior.
In case you want to see it live what I have said in this blog see the video below
<OBJECT type="application/x-shockwave-flash" codebase="" WIDTH="640" HEIGHT="360" data=""><. | http://www.codeproject.com/Articles/389730/The-unsung-hero-Volatile-keyword-Csharp-threading?fid=1718675&df=90&mpp=25&sort=Position&spc=Relaxed&select=4296641&tid=4261166 | CC-MAIN-2015-18 | refinedweb | 239 | 71.95 |
Originally posted by fengqiao cao: hi, i found new stuff. the output of the following code is: no such file found doing fianlly -2 i tried to not comment line 1 the output is -1 and i also tried to comment line 3, but compile erro. From the output, we can understand that finally block executed first and then "return statement". I assum that the line 3 must be executed anytime, but why not returen the value in line 3? import java.io.*; public class Mine { //start of class mine public static void main(String argv[]){ Mine m=new Mine(); System.out.println(m.amethod()); } public int amethod() { //start of amethod()method try { FileInputStream dis=new FileInputStream("Hello.txt"); }catch (FileNotFoundException fne) { System.out.println("No such file found"); //return -1; //1 }catch(IOException ioe) { //return 0; } finally{ System.out.println("Doing finally"); //return -1; //2 } return -2; //3 }//end of amethod()method. }//end of class mine
i am a little confused if like what i said why the finally got executed?( i know it is defined in java doc. that "finally block" always got executed) | http://www.coderanch.com/t/231405/java-programmer-SCJP/certification/concept | CC-MAIN-2014-41 | refinedweb | 186 | 57.37 |
Opened 4 years ago
Closed 3 years ago
#6847 closed defect (wontfix)
To enable connecting with NoUserID (for Testlink integration)
Description
Currently, I have to use 'anonymous' as the username to connect with xmlrpc:
import xmlrpclib server = xmlrpclib.ServerProxy("http://'''anonymous'''@server/trac/project/xmlrpc") multicall = xmlrpclib.MultiCall(server) for ticket in server.ticket.query("id=1"): multicall.ticket.get(ticket) print map(str, multicall())
If I use a blank username (NoUserID is configured to be On with Apache):
server = xmlrpclib.ServerProxy("")
It returned the error "401 Authorization Required".
The problem is that it seems Testlink connects to Trac with a blank username, so it fails unless I hack Testlink's code.
Attachments (0)
Change History (1)
comment:1 Changed 3 years ago by osimons
- Resolution set to wontfix
- Status changed from new to closed
Note: See TracTickets for help on using tickets.
That is quite an intricate set of authentication config details for a particular purpose, but I fail to see how it is an issue for the RPC plugin - or even what is expected to change in the plugin to fix this? The RPC plugin don't provide authentication. Apache does, and it is read and configured by Trac itself long before it arrives at plugin code.
I'm closing this. Please reopen if I haven't understood the issue correctly. | http://trac-hacks.org/ticket/6847 | CC-MAIN-2013-48 | refinedweb | 223 | 54.52 |
Metrics¶
Airflow can be set up to send metrics to StatsD.
Setup¶
First you must install statsd requirement:
pip install 'apache-airflow[statsd]'
Add the following lines to your configuration file e.g.
airflow.cfg
[metrics] statsd_on = True statsd_host = localhost statsd_port = 8125 statsd_prefix = airflow
If you want to avoid sending all the available metrics to StatsD, you can configure an allow list of prefixes to send only the metrics that start with the elements of the list:
[metrics] statsd_allow_list = scheduler,executor,dagrun
If you want to redirect metrics to different name, you can configure
stat_name_handler option
in
[scheduler] section. It should point to a function that validates the statsd stat name, applies changes
to the stat name if necessary, and returns the transformed stat name. The function may looks as follow:
def my_custom_stat_name_handler(stat_name: str) -> str: return stat_name.lower()[:32]
If you want to use a custom Statsd client instead of the default one provided by Airflow, the following key must be added
to the configuration file alongside the module path of your custom Statsd client. This module must be available on
your
PYTHONPATH.
[metrics] statsd_custom_client_path = x.y.customclient
See Modules Management for details on how Python and Airflow manage modules. | https://airflow.apache.org/docs/apache-airflow/2.1.4/logging-monitoring/metrics.html | CC-MAIN-2022-33 | refinedweb | 201 | 50.06 |
I've been working on this script all weekend and am almost done with it accept for one last detail; implementing a "process another number?" prompt after a user input is processed.
The script works like this: First, the user enters a number to test (to find the two prime numbers that make up that number). Next the program finds the numbers, outputs them, then quits.
What I want to do is create a prompt after the number has been processed that asks the user if they want to try another number or quit. I tried doing this but it isnt working rite... As it is now, after the user enters yes to continue the script simply exits instead of asking the user for another number.
Here is that output:
This program finds two prime numbers that add up to any even number you enter.
Enter an even integer larger than 2: 6
========================================
Found two prime numbers that sum to 6 !
6 = 3 + 3
========================================
Would you like to try another number (yes or no)? y
>>>
Could someone please show me what i'm doing wrong and how to get it working rite?
Heres my code:
import math def is_prime(p): if p == 2: return True else: return prime_test(p) def is_even(n): return n % 2 == 0 def prime_test(p): stop = int(math.ceil(math.sqrt(p))) + 1 if p % 2 == 0: return False else: for i in range(3, stop, 2): if p % i == 0: return False return True # is a prime def validated_input(moredata): while moredata[0] == "y" or moredata[0] == "Y": input = raw_input("\nEnter an even integer larger than 2: ") try: n = int(input) except ValueError: print "*** ERROR: %s is not an integer." % input for x in range(2, int(math.ceil(math.sqrt(n))) + 2): if n % x == 0: break if x == int(math.ceil(math.sqrt(n))) + 1: print "*** ERROR: %s is already prime." % input if is_even(n): if n > 2: return n else: print "*** ERROR: %s is not larger than 2." % n else: print "*** ERROR: %s is not even." % n return None def main(): print "This program finds two prime numbers that add up to any even number you enter." moredata = "y" n = validated_input(moredata) if n: for p in xrange(2, int(math.ceil(math.sqrt(n))) + 2): if is_prime(p) and is_prime(n - p): print "="*40,"\nFound two prime numbers that sum to",n,"!" print n, '= %d + %d' % (p, n - p),"\n","="*40 moredata = raw_input("\nWould you like to try another number (yes or no)? ") if __name__ == '__main__': main() | https://www.daniweb.com/programming/software-development/threads/114252/need-some-help-implementing-a-continue-or-quit-prompt | CC-MAIN-2018-30 | refinedweb | 426 | 71.44 |
See: Description
The job of a "name" in the context of ISO 19103 is to associate that name
with an
Object. Examples given are objects: which form namespaces
for their attributes, and Schema: which form namespaces for their components.
A straightforward and natural use of the namespace structure defined in 19103 is the translation
of given names into specific storage formats. XML has different naming rules than shapefiles,
and both are different than NetCDF. This common framework can easily be harnessed to impose
constraints specific to a particular application without requiring that a separate implementation
of namespaces be provided for each format.
Records and Schemas are similar to a
struct in C/C++, a table in SQL,
a
RECORD in Pascal, or an attribute-only class in Java if it were stripped of all notions
of inheritance. They are organized into named collections called Schemas. Both records and schemas
behave as dictionaries for their members and are similar to "packages" in Java. | http://docs.geotools.org/latest/javadocs/org/opengis/util/package-summary.html | CC-MAIN-2019-18 | refinedweb | 162 | 51.18 |
I am using NetBeans and doing my first graphics applet for a intro to Java class. I an getting a error "happyface.HappyFace class wasn't found in HappyFace project". The error brings up a box to chose a main class but in the box the only option is <no main classes found>.
Here are my steps;
1) create the project by selecting
a) file
b) new project
c) category - java
d) project - java application (other choices are java class library, java project with existing sources, java free form project)
e) Next
f) project name - "HappyFace" (create main class is selected and "happyface.HappyFace" in text box)
2) I paste this exact code into the HappyFace.java file;
import javax.swing.JApplet;
import java.awt.Graphics;
public class HappyFace extends JApplet
{
public void paint(Graphics canvas)
{
canvas.drawOval(100, 50, 200, 200);
canvas.fillOval(155, 100, 10, 20);
canvas.fillOval(230, 100, 10, 20);
canvas.drawArc(150, 160, 100, 50, 180, 180);
}
}
3) I build and clean successfully
4) I run and get error above
Also, the first line ("import javax.swing.JApplet;") shows an "incorrect package" error with choices to solve as "move class to correct folder" or "change package declaration to "HappyFace". I have chosen both options and neither solve the above issue.
Note - I am clearly new to Java as this issue shows. I just installed JDK and netbeans and have successfully ran the usual "hello world" program.
Any ideas or links to solution would be appreciated. thanks for your time.
Ian | http://forums.devshed.com/java-help-9/java-class-wasnt-found-project-error-using-netbeans-955086.html | CC-MAIN-2017-30 | refinedweb | 255 | 67.35 |
Hello everyone. I'm new to Java and I'm trying to check if the user is inputting a word, or a number. We were just introduced to while loops in class and I figured I'd try one out. My program worked up until I tried to validate what the user is inputting.
My problem is that the input doesn't stop and the user can just keep hitting return.
*** Disclaimer, I'm just messing around and the text doesn't mean anything.
Code Java:
import java.util.Scanner; public class SchoolDegree { public static void main(String[] args) { System.out.print("Please enter your school name: "); Scanner in = new Scanner(System.in); String schoolName = in.nextLine(); while (in.hasNextLine()) { if (schoolName.equals("Syracuse")) { schoolName += ", aka CUUUUUSE."; } else if (schoolName.equals("Ohio")) { schoolName += ", aka NO SOUP FOR YOU."; } else schoolName += ", aka who?"; } while (in.hasNextInt()) { System.out.println("How can your school be a number?"); } System.out.println("You attend " + schoolName); } } | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/18009-while-loop-issue-printingthethread.html | CC-MAIN-2015-11 | refinedweb | 161 | 63.46 |
What Is Forth Anyway?
Forth is a stack-based procedural programming language that was invented by Charles H. Moore around 1968. By 1970 Forth was being used to control the radio telescopes at Kitt Peak Observatory in Arizona. At that time programming tools for small computer systems were rather crude and computer memory was very expensive. C compilers were not yet widely available, BASIC was still fully interpreted and therefore very slow, and Fortran compilers were too expensive for many. Programming during this time period usually meant assembler language or worse yet machine language. Forth offered a higher level of programming that increased productivity immensely while still offering the assembly language interface if required for an application. And as counter-intuitive as it seems, the larger a Forth program got the more efficiently it used memory. In fact, Forth programs could be smaller than equivalent assembler language programs on the same hardware platform.
Forth source code is line-oriented and can be entered interactively via a keyboard or stored on a mass storage device of some kind and loaded in. In the early days, Forth source code was stored in 1K blocks on disk, but today it is stored in normal files. A line of Forth source consists of a series of space character delimited tokens. The Forth interpreter reads in the tokens and either executes them interactively or, if in compile mode, compiles them for later execution.
The Forth word is a language element that can be used interactively or within a program. Source code for a particular word is called its "definition." Primitive Forth words typically have their definitions coded in the CPU's native machine language, whereas non-primitive words have their definitions coded in Forth. A Forth program is run by executing the highest level word which defines the program.
The Forth word ":" (colon) puts the interpreter into compile mode and the word ";" (semicolon) ends compile mode. Immediate Forth words execute even in compile mode, whereas non-immediate Forth words have their addresses compiled into the word being defined.
Many things make Forth unique, but two need to be pointed out specifically.
- First, Forth is stack-oriented language, meaning that all arguments to and values returned from Forth words are unnamed, untyped, and located on a stack.
- Second, Forth uses Reverse Polish Notation (RPN) like many HP calculators for expression evaluation.
Let's use the Forth word DUP as an example of argument passing. DUP duplicates the value currently on the top of the data stack. DUP takes its input value from the stack and pushes two copies of that value back onto the stack.
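A short session (hypothetical output) makes the stack effect visible. Since "." pops and prints the top stack item, printing twice after DUP shows both copies:

> 3 DUP . .
3 3 OK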
Using RPN for expression evaluation takes getting used to when you're used to normal algebraic or infix evaluation. The algebraic expression 2 + 5 * 3 evaluates as 2 + (5 * 3), whereas the RPN evaluation becomes 2 5 3 * +. Expression conversion to RPN is easy if you keep the following in mind:
- Identifiers are used in the same order.
- Operators are used in the same order.
- Operators follow identifiers.
It should be noted that Forth leaves all data typing issues to the programmer. Typing is usually handled by having Forth words defined for each data type. For example, the addition operator "+" expects its arguments to be integers and integers only. If floating-point addition is required in an application a special operator like "F+" might be defined. (In the Forth implementation discussed later, I have overloaded operators like "+" to handle all valid data types.)
Traditionally Forth was implemented as a threaded, interpretive language (TIL) though that is not necessarily the case today where Forth can be compiled into machine code for direct execution just like C++. A TIL implementation (as discussed in this article) has to do with how a language's source code is translated into executable form. TILs generally employ a two-phase translation process. In the first phase (compile mode), source code is interpreted into a list of fully resolved address references to underlying functionality. At runtime, a thread of execution is established from the top-level Forth word through the lists of compiled addresses of the words with which it was defined. Forth source code is converted into a fully analyzed internal form (lists of addresses) by the first phase of the translation process so the compiler is no longer needed. In addition, no further interpretation is used at runtime as the thread of execution has been fully established.
Three components -- a dictionary, the inner interpreter, and the outer interpreter -- make up traditional Forth implementations. Typically implemented with a linked list, the dictionary contains an entry for every Forth word (primitives and non-primitives) that can be executed interactively or used in the definition of new Forth words. New, successfully compiled Forth words are added to the dictionary for subsequent lookup.
The outer interpreter provides users with a crude command-line interface to the Forth environment. It typically provides a prompt such as ">" and waits for user input. The outer interpreter gathers up a line of input at a time and submits that line to the inner interpreter where all the action happens. If the line is interpreted successfully, "OK" is output and the process repeats until no more input is available.
Listing One is pseudocode for the inner interpreter I used in my Java implementation. It is fairly typical in its operation.
begin prepare tokenizer for parsing line of text while tokens exist on line get next token if not in compile mode if token is string literal push string onto data stack continue endif search dictionary for word if word found in dictionary execute it else try to parse word as number if numeric push number onto data stack else output error indication - word followed by ? return false endif endif else ( in compile mode ) if token is string literal add string to word being defined continue endif search dictionary for word if word found in dictionary if word marked as immediate execute word else add word to word being defined endif else ( word not found in dictionary ) try to parse word as number if numeric add number to word being defined else output error indication - word followed by ? return false endif endif endif end while return true end
The following examples give you a sense of how Forth works. The ">" prompt and the "OK" are provided by the outer interpreter.
> 1 . 1 OK
Since we are not in compile mode, the first 1 is identified as a numeric literal and is pushed onto the data stack. The "." (called "dot") causes the item at the top of the stack to be popped off and output to the console; hence the second 1.
> 10 HEX . A OK
This example performs number base conversion. Unless changed, Forth uses base 10 for numeric operations. Here we push a decimal 10 value onto the stack, change the numeric base to HEX and then use dot to print the top of the stack value. An "A" is printed as A is 10 in hex.
> : hello_world "*** Hello World ***" . cr cr ; OK
This is an example of Forth compilation. Here we define a new word called hello_world which echoes the string *** Hello World *** to the console followed by a couple of carriage returns. The final OK signifies successful compilation. Later when hello_world is typed on the command line, the string will be output.
> "c:/archive/java/jforth/testcode" load OK
Here a file/pathname string is pushed onto the data stack and the Forth word load pops this string off of the stack and compiles the Forth code in the file. This is how you would compile a complete Forth program. It is interesting to note that both compile and runtime behaviors can be combined in a file of loaded Forth code. Below, the definition of dl1 is compiled and the very next statement executes the new word interactively.
( Do loop with positive 1 loop increment ) : dl1 10 0 do i . loop ; " do loop with positive 1 loop increment: " . dl1 cr
Consult any book on Forth for hundreds of other examples. | http://www.drdobbs.com/embedded-systems/jforth-implementing-forth-in-java/207801675?pgno=2 | CC-MAIN-2014-52 | refinedweb | 1,348 | 53.31 |
As the title states, I am having issues creating a DLL for use with my AutoIt script.
It compiles without error, however I can not get it to work with C++ or AutoIt - however I have not worked with DLLs in AutoIt before. Are there any good resources to aid in creation of DLLs? I am guessing there is a problem with my code.
As you will see in the header, I want to return a string (or array of chars). The two parameters are also string/char arrays.
My header:
#define DLL_EXPORT __declspec (dllexport) #ifdef __cplusplus extern "C" { #endif char * DLL_EXPORT licenseCheck(char * salt, char * user);
That is the header of the actual DLL.
The following is the C++ code I am using in an attempt to access this DLL:
#include <iostream> #include <windows.h> using namespace std; typedef char (*LicFunc)(char, char); HINSTANCE hinstDLL; int main() { // LicFunc licenseCheck("",""); hinstDLL = LoadLibrary("../../../MichaelsFuncs/bin/Release/MichaelsFuncs.dll"); FARPROC licenseCheck = GetProcAddress(hinstDLL, "licenseCheck"); if (licenseCheck == 0) { cout << "licenseCheck is NULL\n"; return 0; } typedef char (__stdcall * pICFUNC)( char *, char *); pICFUNC licenseCheck(char, char); licenseCheck = pICFUNC(licenseCheck); cout << licenseCheck(*"<|+5 :{ax?H_+,^o", *"Test"); FreeLibrary(hinstDLL); return 0; }
And this is my AutoIt code, which is also trying to access this DLL:
I am sure the errors are in the DLL - can anyone please guide me along to get this DLL working properly?
The compiled DLL is attached.
Thanks | http://www.autoitscript.com/forum/topic/137712-creating-dll-in-c-for-use-with-autoit-script-having-issues/?p=964230 | CC-MAIN-2014-23 | refinedweb | 237 | 61.36 |
Following on from the previous post on the double pendulum, here is a similar Python script for plotting the behaviour of the "spring pendulum": a bob of mass $m$ suspended from a fixed anchor by a massless spring.
As before, the code places frames of the animation in a directory
frames/ which can be turned into a gif with, for example, ImageMagick's
convert utility. A derivation of the Lagrangian equations used is given under the code.
import numpy as np from scipy.integrate import odeint import matplotlib.pyplot as plt from matplotlib.patches import Circle # Pendulum equilibrium spring length (m), spring constant (N.m) L0, k = 1, 40 m = 1 # The gravitational acceleration (m.s-2). g = 9.81 def deriv(y, t, L0, k, m): """Return the first derivatives of y = theta, z1, L, z2.""" theta, z1, L, z2 = y thetadot = z1 z1dot = (-g*np.sin(theta) - 2*z1*z2) / L Ldot = z2 z2dot = (m*L*z1**2 - k*(L-L0) + m*g*np.cos(theta)) / m return thetadot, z1dot, Ldot, z2dot # Maximum time, time point spacings and the time grid (all in s). tmax, dt = 20, 0.01 t = np.arange(0, tmax+dt, dt) # Initial conditions: theta, dtheta/dt, L, dL/dt y0 = [3*np.pi/4, 0, L0, 0] # Do the numerical integration of the equations of motion y = odeint(deriv, y0, t, args=(L0, k, m)) # Unpack z and theta as a function of time theta, L = y[:,0], y[:,2] # Convert to Cartesian coordinates of the two bob positions. x = L * np.sin(theta) y = -L * np.cos(theta) # Plotted bob circle radius r = 0.05 # Plot a trail of the m2 bob's position for the last trail_secs seconds. trail_secs = 1 # This corresponds to max_trail time points. max_trail = int(trail_secs / dt) def plot_spring(x, y, theta, L): """Plot the spring from (0,0) to (x,y) as the projection of a helix.""" # Spring turn radius, number of turns rs, ns = 0.05, 25 # Number of data points for the helix Ns = 1000 # We don't draw coils all the way to the end of the pendulum: # pad a bit from the anchor and from the bob by these number of points ipad1, ipad2 = 100, 150 w = np.linspace(0, L, Ns) # Set up the helix along the x-axis ... xp = np.zeros(Ns) xp[ipad1:-ipad2] = rs * np.sin(2*np.pi * ns * w[ipad1:-ipad2] / L) # ... then rotate it to align with the pendulum and plot. R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]) xs, ys = - R @ np.vstack((xp, w)) ax.plot(xs, ys, c='k', lw=2) def make_plot(i): """ Plot and save an image of the spring pendulum configuration for time point i. """ plot_spring(x[i], y[i], theta[i], L[i]) # Circles representing the anchor point of rod 1 and the bobs c0 = Circle((0, 0), r/2, fc='k', zorder=10) c1 = Circle((x[i], y[i]), r, fc='r', ec='r', zorder=10) ax.add_patch(c0) ax.add_patch(c[imin:imax], y[imin:imax], c='r', solid_capstyle='butt', lw=2, alpha=alpha) # Centre the image on the fixed anchor point, and ensure the axes are equal ax.set_xlim(-np.max(L)-r, np.max(L)+r) ax.set_ylim(-np.max(L)-r, np.max(L)+r) ax.set_aspect('equal', adjustable='box') plt.axis('off') plt.savefig('frames/_img{:04d}.png'.format(i//di), dpi=72) # Clear the Axes ready for the next image. plt.cla() # Make an image every di time points, corresponding to a frame rate of fps # frames per second. # Frame rate, s-1 fps = 10 di = int(1/fps/dt) # This figure size (inches) and dpi give an image of 600x450 pixels. fig = plt.figure(figsize=(8.33333333, 6.25), dpi=72) ax = fig.add_subplot(111) for i in range(0, t.size, di): print(i // di, '/', t.size // di) make_plot(i)
There are two degrees of freedom in this problem, which are taken to be the angle of the pendulum from the vertical and the total length of the spring. Let the origin be at the anchor with the $y$-axis pointing up and the $x$ axis pointing to the right. The spring is assumed to have a force constant $k$ and an equilibrium length $l_0$ such that its potential energy when extended or compressed to a length $l$ is $\frac{1}{2}k(l-l_0)^2$. The components of the bob's position and velocity are:
The potential and kinetic energies are therefore:
and so the Lagrangian is
Applying the Euler-Lagrange relations for $\theta$ and $l$ as before gives
To cast this pair of second-order differential equations into four first-order differential equations, let $z_1 = \dot{\theta}$ and $z_2 = \dot{l}$ to give
which are the relations that appear in the function
deriv.
Comments are pre-moderated. Please be patient and your comment will appear soon.
Miguel 2 years, 1 month ago
I try to make the simulation of the pendulum, but it does not give me a shame, please I need you to help meLink | Reply
christian 2 years, 1 month ago
Hi Miguel,Link | Reply
Could you be a little more specific about what the problem is, exactly?
I'd like to help,
Christian
james 1 year, 9 months ago
hello, does this code use the Euler Lagrangian method or runge kutta? thank you also for the code :)Link | Reply
christian 1 year, 9 months ago
It solves the Euler-Lagrange equations using scipy.integrate.odeint, which is a Python wrapper to the LSODA algorithm (which is not a Runge-Kutta method).Link | Reply
New Comment | https://scipython.com/blog/the-spring-pendulum/ | CC-MAIN-2021-39 | refinedweb | 947 | 72.97 |
Django database backend for MySQL that provides pooling ala SQLAlchemy.
Project description monkey-patch it.
The second point sounds bad, but it is the best option because it does not freeze the Django MySQL backend at a specific revision. Using this method allows us to benefit from any bugs that the Django project fixes, while layering on connection pool
You can define the pool implementation and the specific arguments passed to it. The available implementations (backends) and their arguments are defined within the SQLAlchemy documentation.
- MYSQLPOOL_BACKEND - The pool implementation name (‘QueuePool’ by default).
- MYSQLPOOL_ARGUMENTS - The kwargs passed to the pool.
For example, to use a QueuePool without threadlocal, you could use the following configuration.
MYSQLPOOL_BACKEND = 'QueuePool' MYSQLPOOL_ARGUMENTS = { 'use_threadlocal': False, }
Connection Closing
While this has nothing to do directly with connection pooling, it is tangentially related. Once you start pooling (and limiting) the database connections it becomes important to close them.
This is really only relevant when you are dealing with a threaded application. Such was the case for one of our servers. It would create many threads for handling conncurrent operations. Each thread resulted in a connection to the database being opened persistently. Once we deployed connection pooling, this service quickly exhausted the connection limit of it’s pool.
This sounds like a huge failure, but for us it was a great success. The reason is that we implemented pooling specifically to limit each process to a certain number of connections. This prevents any given process from impacting other services, turning a global issue into a local issue. Once we were able to identify the specific service that was abusing our MySQL server, we were able to fix it.
The problem we were having with this threaded server is very well described below.
Therefore, this library provides a decorator that can be used in a similar situation to help with connection management. You can use it like so.
from django_mysqlpool import auto_close_db @auto_close_db def function_that_uses_db(): MyModel.objects.all().delete()
With pooling (and threads), closing the connection early and often is the key to good performance. Closing returns the connection to the pool to be reused, thus the total number of connections is decreased. We also needed to disable the use_threadlocal option of the QueuePool, so that multiple threads could share the same connection. Once we decorated all functions that utilized a connection, this service used less connections than it’s total thread count.
Forking
If you are using mysqlpool with a daemon (our project uses Django admin commands to build daemons) then you need to take care with the connection pool. After a fork() the pool will be unusable. In our case, the file descriptors for the connections were closed, and in the child, any new connections or files assumed the fd of the MySQL connection this caused the Django ORM to read/write on some non-MySQL connection in our case, Redis, so Django would send SQL to redis an expect a reply! The solution is to close the pool before fork()ing. This will release the pooled connections which will be reopened when the child first attempts to use them.
from django_mysqlpool import close_pool close_pool() pid = os.fork()
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/django-mysqlpool/ | CC-MAIN-2019-35 | refinedweb | 553 | 64.2 |
Get Up And Running With Grunt
- By Mike Cunsolo
- October 29th, 2013
- 58 Comments
In this article, we’ll explore how to use Grunt1 in a project to speed up and change the way you develop websites. We’ll look briefly at what Grunt can do, before jumping into how to set up and use its various plugins to do all of the heavy lifting in a project.
We’ll then look at how to build a simple input validator, using Sass2 as a preprocessor, how to use grunt-cssc and CssMin to combine and minify our CSS, how to use HTMLHint3 to make sure our HTML is written correctly, and how to build our compressed assets on the fly. Lastly, we’ll look at using UglifyJS4 to reduce the size of our JavaScript and ensure that our website uses as little bandwidth as possible.
5
Grunt.js67 and Ember.js8 or with communities such as JS Bin9,10 to concatenating11 JavaScript. It can also be used for a range of tasks unrelated to JavaScript, such as compiling CSS from LESS and Sass. We’ve even used it with blink(1)12" : "Brandon Random", " : "Mike Cunsolo", dependency. <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0;"> <title>Enter your first name</title> <link rel="stylesheet" href="build/css/master.css"> </head> <body> <label for="firstname">Enter your first name</label> <input id="firstname" name="firstname" type="text"> <p id="namevalidation" class="validation">< will).
Keeping The JavaScript As Lean As Possible
Let’s set up a JavaScript file to validate a user’s name. To keep this as simple as possible, we’ll check only for non-alphabetical characters. We’ll also use the
strict mode of JavaScript, which prevents us from writing valid but poor-quality JavaScript. Paste the following into
assets/js/base.js:
function Validator() { "use strict"; } Validator.prototype.checkName = function(name) { "use strict"; return (/[^a-z]/i.test(name) === false); }; window.addEventListener('load', function(){ "use strict"; document.getElementById('firstname').addEventListener('blur', function(){ var _this = this; var validator = new Validator(); var validation = document.getElementById('namevalidation'); if (validator.checkName(_this.value) === true) { validation.innerHTML = 'Looks good! :)'; validation.className = "validation yep"; _this.className = "yep"; } else { validation.innerHTML = 'Looks bad! :('; validation.className = "validation nope"; _this.className = "nope"; } }); });
Let’s use UglifyJS to minify this source file. Add this to
grunt.initConfig:
uglify: { build: { files: { 'build/js/base.min.js': ['assets/js/base.js'] } } }
UglifyJS compresses all of the variable and function names in our source file to take up as little space as possible, and then trims out white space and comments — extremely useful for production JavaScript. Again, we have to set up a
watch task to build our Uglify’ed JavaScript. Add this to the
watch configuration:
watch: { js: { files: ['assets/js/base.js'], tasks: ['uglify'] } }
Building CSS From Sass Source Files
Sass is incredibly useful for working with CSS, especially on a team. Less code is usually written in the source file because Sass can generate large CSS code blocks with such things as functions and variables. Walking through Sass itself is a little beyond the scope of this article; so, if you are not comfortable with learning a preprocessor at this stage, you can skip this section. But we will cover a very simple use case, using variables, one mixin and the Sassy CSS (SCSS) syntax, which is very similar to CSS!
Grunt’s Sass plugin requires the Sass gem. You will need to install Ruby on your system (it comes preloaded in OS X). You can check whether Ruby is installed with this terminal command:
ruby -v
Install Sass by running the following:
gem install sass
Depending on your configuration, you might need to run this command via sudo — i.e.
sudo gem install sass: — at which point you will be asked for your password. When Sass is installed, create a new directory named
assets and, inside that, another named
sass. Create a new file named
master.scss in this directory, and paste the following in it:
@mixin prefix($property, $value, $prefixes: webkit moz ms o spec) { @each $p in $prefixes { @if $p == spec { #{$property}: $value; } @else { -#{$p}-#{$property}: $value; } } } $input_field: #999; $input_focus: #559ab9; $validation_passed: #8aba56; $validation_failed: #ba5656; $bg_colour: #f4f4f4; $box_colour: #fff; $border_style: 1px solid; $border_radius: 4px; html { background: $bg_colour; } body { width: 720px; padding: 40px; margin: 80px auto; background: $box_colour; box-shadow: 0 1px 3px rgba(0, 0, 0, .1); border-radius: $border_radius; font-family: sans-serif; } input[type="text"] { @include prefix(appearance, none, webkit moz); @include prefix(transition, border .3s ease); border-radius: $border_radius; border: $border_style $input_field; width: 220px; } input[type="text"]:focus { border-color: $input_focus; outline: 0; } label, input[type="text"], .validation { line-height: 1; font-size: 1em; padding: 10px; display: inline; margin-right: 20px; } input.yep { border-color: $validation_passed; } input.nope { border-color: $validation_failed; } p.yep { color: $validation_passed; } p.nope { color: $validation_failed; }
You will notice that the SCSS extension looks a lot more like CSS than conventional Sass. This style sheet makes use of two Sass features: mixins and variables. A mixin constructs a block of CSS based on some parameters passed to it, much like a function would, and variables allow common fragments of CSS to be defined once and then reused.
Variables are especially useful for hex colours; we can build a palette that can be changed in one place, which makes tweaking aspects of a design very fast. The mixin is used to prefix rules such as for appearance and transitions, and it reduces bulk in the file itself.
When working with a large style sheet, anything that can be done to reduce the number of lines will make the file easier to read when a team member other than you wants to update a style.
In addition to Sass, grunt-cssc combines CSS rules together, ensuring that the generated CSS has minimal repetition. This can be very useful in medium- to large-scale projects in which a lot of styles are repeated. However, the outputted file is not always the smallest possible. This is where the
cssmin task comes in. It not only trims out white space, but transforms colors to their shortest possible values (so,
white would become
#fff). Add these tasks to
gruntfile.js:' } } }
Now that we have something in place to handle style sheets, these tasks should also be run automatically. The
build directory is created automatically by Grunt to house all of the production scripts, CSS and (if this were a full website) compressed images. This means that the contents of the
assets directory may be heavily commented and may contain more documentation files for development purposes; then, the
build directory would strip all of that out, leaving the assets as optimized as possible.
We’re going to define a new set of tasks for working with CSS. Add this line to
gruntfile.js, below the default
task:
grunt.registerTask('buildcss', ['sass', 'cssc', 'cssmin']);
Now, when
grunt buildcss is run, all of the CSS-related tasks will be executed one after another. This is much tidier than running
grunt sass, then
grunt cssc, then
grunt cssmin. All we have to do now is update the
watch configuration so that this gets run automatically.
watch: { css: { files: ['assets/sass/**/*.scss'], tasks: ['buildcss'] } }
This path might look a little strange to you. Basically, it recursively checks any directory in our
assets/sass directory for
.scss files, which allows us to create as many Sass source files as we want, without having to add the paths to
gruntfile.js. After adding this,
gruntfile.js should look like this:
module.exports = function(grunt){ "use strict"; require("matchdep").filterDev("grunt-*").forEach(grunt.loadNpmTasks); grunt.initConfig({ pkg: grunt.file.readJSON('package.json'),' } } }, watch: { html: { files: ['index.html'], tasks: ['htmlhint'] }, js: { files: ['assets/js/base.js'], tasks: ['uglify'] }, css: { files: ['assets/sass/**/*.scss'], tasks: ['buildcss'] } }, htmlhint: { build: { options: { 'tag-pair': true, // Force tags to have a closing pair 'tagname-lowercase': true, // Force tags to be lowercase 'attr-lowercase': true, // Force attribute names to be lowercase e.g. <div id="header"> is invalid 'attr-value-double-quotes': true, // Force attributes to have double quotes rather than single 'doctype-first': true, // Force the DOCTYPE declaration to come first in the document 'spec-char-escape': true, // Force special characters to be escaped 'id-unique': true, // Prevent using the same ID multiple times in a document 'head-script-disabled': true, // Prevent script tags being loaded in the for performance reasons 'style-disabled': true // Prevent style tags. CSS should be loaded through }, src: ['index.html'] } }, uglify: { build: { files: { 'build/js/base.min.js': ['assets/js/base.js'] } } } }); grunt.registerTask('default', []); grunt.registerTask('buildcss', ['sass', 'cssc', 'cssmin']); };
We should now have a static HTML page, along with an
assets directory with the Sass and JavaScript source, and a
build directory with the optimized CSS and JavaScript inside, along with the
package.json and
gruntfile.js files.
By now, you should have a pretty solid foundation for exploring Grunt further. As mentioned, an incredibly active community of developers is building front-end plugins. My advice is to head on over to the plugin library13 and explore the more than 300 plugins.
(al)
Footnotes
- 1
- 2
- 3
- 4
- 5
- 6
- 7
- 8
- 9
- 10
- 11
- 12
- 13
↑ Back to top Tweet itShare on Facebook | http://www.smashingmagazine.com/2013/10/get-up-running-grunt/ | CC-MAIN-2015-32 | refinedweb | 1,554 | 56.25 |
The QModemService class implements telephony functionality for AT-based modems. More...
#include <QModemService>
Inherits QTelephonyService.
The QModemService class implements telephony functionality for AT-based modems.
The default implementation uses AT commands from the GSM standards 3GPP TS 07.07, 3GPP TS 07.05, 3GPP TS 27.007, and 3GPP TS 27.005.
The implementation can be customized for proprietary modem command sets by inheriting QModemService and overriding QModemService::initialize(), which should create new interfaces to replace the default implementations.
For example, consider a modem that uses different commands for network registration than the GSM standard specifies. The modem integrator would create a new modem service class that overrides the default implementation of network registration:
class MyModemService : public QModemService { ... void initialize(); ... }; void MyModemService::initialize() { if ( !supports<QNetworkRegistration>() ) addInterface( new MyNetworkRegistration(this) ); QModemService::initialize(); }
The new functionality can then be implemented in the MyNetworkRegistration class, which will inherit from QNetworkRegistrationServer. The MyModemService::initialize() method first calls supports() to check if an instance of QNetworkRegistration has already been added.
This modem service handles the following posted events, via QModemService::post():
See also QTelephonyService.
Creates a new modem service handler called service.
Creates a new modem service handler called service and attaches it to parent. The mux parameter specifies the serial device multiplexer to use for accessing the modem.
This version of the constructor may be needed if the modem is accessed via some mechanism other than kernel-level serial devices. The caller would construct an appropriate multiplexer wrapper around the new mechanism before calling this constructor.
Destroys this modem service and all of its interfaces.
Sends command to the modem on the primary AT chat channel. If the command fails, the caller will not be notified.
See also primaryAtChat() and retryChat().
This is an overloaded member function, provided for convenience.
Sends command to the modem on the primary AT chat channel. When the command finishes, notify slot on target. The slot has the signature done(bool,QAtResult&). The boolean parameter indicates if the command succeeded or not, and the QAtResult parameter contains the full result data.
The optional data parameter can be used to pass extra user data that will be made available to the target slot in the QAtResult::userData() field.
See also primaryAtChat() and retryChat().
Connects slot on target to receive posted item values. This can be used as an alternative to the posted() signal. Only those posted items that equal item will be delivered to the slot.
See also post() and posted().
Creates and returns a vendor-specific modem service handler called service (the default is modem).
This function will load vendor-specific plug-ins to handle extended modem functionality as required. If the QTOPIA_PHONE_VENDOR environment variable is set, then that vendor plug-in will be loaded. Otherwise this function will issue the AT+CGMI command and ask each vendor plug-in if it supports the returned manufacturer value.
See also QModemServicePlugin.
Returns the modem indicator object for this modem service.
See also QModemIndicators.
Returns the serial device multiplexer that is being used to communicate with the modem.
Requests the SMS service from the modem and places it into PDU mode. The "smsready" item will be posted when it is ready. If the SMS service is already known to be ready and in PDU mode, then the "smsready" item will be posted upon the next entry to the event loop. The object requesting SMS service should use connectToPost() to monitor for when "smsready" is posted.
This implementation repeatedly sends AT+CMGF=0 every second for fifteen seconds until the command succeeds or the retry count expires. Higher levels that call this function should timeout their requests after fifteen seconds if "smsready" has not been received in the meantime.
See also post() and connectToPost().
Posts item to functionality providers that have expressed an interest via connectToPost() or posted(). The posted item will be delivered upon the next entry to the event loop.
Posted events are used as a simple communication mechanism between functionality providers. Providers that post items are completely decoupled from providers that consume items.
For example, posting the item sms:needed will ask the SMS functionality provider to initialize the SMS subsystem. Once SMS has been initialized, it will post the sms:available item. Other providers that depend upon SMS initialization can then proceed with using the SMS functionality.
See also connectToPost() and posted().
Signal that is emitted whenever an item is posted using post(). Slots that are connected to this signal should filter on item to determine if the posted item is relevant to them. Alternatively, they can use connectToPost() to only receive notification of specific posted items.
See also post() and connectToPost().
Returns the AT chat handler for the primary modem control channel.
See also secondaryAtChat().
Signal that is emitted when the modem has reset, either during start up or just after a AT+CFUN command. Slots that connect to this signal should issue AT commands to put the modem into a known state and to enable unsolicited notifications of interest to the connected object.
Retry command up to 15 times, once per second, until it succeeds. This is typically used to send initialization commands that may need several tries before the modem will accept them during start up.
See also primaryAtChat() and chat().
Returns the AT chat handler for the secondary modem control channel. This will return the same as primaryAtChat() if the modem does not have a secondary modem control channel.
See also primaryAtChat().
Processes a request to suspend modem operations and to put the modem into a low-power state. The modem vendor plug-in must call suspendDone() once the operation completes. The default implementation calls suspendDone().
This is intended for modems that need a special AT command or device ioctl operation to suspend the modem. If the modem does not need any special help, this function can be ignored.
This function is called in response to a suspend() command on the QCop channel QPE/ModemSuspend.
See also wake() and suspendDone().
Notifies the system that a suspend() operation has completed. The suspendDone() message is sent on the QCop channel QPE/ModemSuspend.
See also suspend() and wake().
Processes a request to wake up the modem from a suspend() state. The modem vendor plug-in must call wakeDone() once the operation completes. The default implementation calls wakeDone().
This function is called in response to a wake() command on the QCop channel QPE/ModemSuspend.
See also wakeDone() and suspend().
Notifies the system that a wake() operation has completed. The wakeDone() message is sent on the QCop channel QPE/ModemSuspend.
See also wake() and suspend(). | https://doc.qt.io/archives/qtopia4.3/qmodemservice.html | CC-MAIN-2021-21 | refinedweb | 1,097 | 50.94 |
#include <unistd.h>
ssize_t pread(int fildes, void *buf, size_t
nbyte,
off_t offset);
ssize_t read(int fildes, void *buf, size_t nbyte);
The read() function shall attempt to read nbyte bytes from the file associated with the open file descriptor, fildes, into the buffer pointed to by buf. The behavior of multiple concurrent reads on the same pipe, FIFO, or terminal device is unspecified.
Before any action described below is taken, and if nbyte is zero, the read() function may detect and return errors as described below. In the absence of errors, or if error detection is not performed, the read() function shall return zero and have no other results.
On files that support seeking (for example, a regular file), the read() shall start at a position in the file given by the file offset associated with fildes. The file offset shall be incremented by the number of bytes actually read.
Files that do not support seeking-for example, terminals-always read from the current position. The value of a file offset associated with such a file is undefined.
No data transfer shall occur past the current end-of-file. If the starting position is at or after the end-of-file, 0 shall be returned. If the file refers to a device special file, the result of subsequent read() requests is implementation-defined.
If the value of nbyte is greater than {SSIZE_MAX}, the result is implementation-defined.
When attempting to read from an empty pipe or FIFO:
When attempting to read a file (other than a pipe or FIFO) that supports non-blocking reads and has no data currently available:
The read() function reads data previously written to a file. If any portion of a regular file prior to the end-of-file has not been written, read() shall return bytes with value 0. For example, lseek() allows the file offset to be set beyond the end of existing data in the file. If data is later written at this point, subsequent reads in the gap between the previous end of data and the newly written data shall return bytes with value 0 until data is written into the gap.
Upon successful completion, where nbyte is greater than 0, read() shall mark for update the st_atime field of the file, and shall return the number of bytes read. This number shall shall return -1 with errno set to [EINTR].
If a read() is interrupted by a signal after it has successfully read some data, it shall return the number of bytes read.
For regular files, no data transfer shall occur past the offset maximum established in the open file description associated with fildes.
If fildes refers to a socket, read() shall be equivalent to recv() with no flags set.
If the O_DSYNC and O_RSYNC bits have been set, read I/O operations on the file descriptor shall complete as defined by synchronized I/O data integrity completion. If the O_SYNC and O_RSYNC bits have been set, read I/O operations on the file descriptor shall complete as defined by synchronized I/O file integrity completion.
If fildes refers to a shared memory object, the result of the read() function is unspecified.
If fildes refers to a typed memory object, the result of the read() function is unspecified.
A read() from a STREAMS file can read data in three different modes: byte-stream mode, message-nondiscard mode, and message-discard mode. The default shall be byte-stream mode. This can be changed using the I_SRDOPT ioctl() request, and can be tested with I_GRDOPT ioctl(). In byte-stream mode, read() shall retrieve data from the STREAM until as many bytes as were requested are transferred, or until there is no more data to be retrieved. Byte-stream mode ignores message boundaries.
In STREAMS message-nondiscard mode, read() shall retrieve data until as many bytes as were requested are transferred, or until a message boundary is reached. If read() does not retrieve all the data in a message, the remaining data shall be left on the STREAM, and can be retrieved by the next read() call. Message-discard mode also retrieves data until as many bytes as were requested are transferred, or a message boundary is reached. However, unread data remaining in a message after the read() returns shall be discarded, and shall not be available for a subsequent read(), getmsg(), or getpmsg() call..
A read() from a STREAMS file shall return() shall fail if a message containing a control part is encountered at the STREAM head. This default action can be changed by placing the STREAM in either control-data mode or control-discard mode with the I_SRDOPT ioctl() command. In control-data mode, read() shall convert any control part to data and pass it to the application before passing any data part originally present in the same message. In control-discard mode, read() shall discard message control parts but return to the process any data part in the message.
In addition, read() shall fail if the STREAM head had processed an asynchronous error before the call. In this case, the value of errno shall not reflect the result of read(), but reflect the prior error. If a hangup occurs on the STREAM being read, read() shall continue to operate normally until the STREAM head read queue is empty. Thereafter, it shall return 0.
The pread() function shall be equivalent to read(), except that it shall read shall result in an error.
Upon successful completion,.
The following example reads data from the file associated with the file descriptor fd into the buffer pointed to by buf.
#include <sys/types.h> #include <unistd.h> ... char buf[20]; size_t nbytes; ssize_t bytes_read; int fd; ... nbytes = sizeof(buf); bytes_read = read(fd, buf, nbytes); ...
None..
Note that a read() of zero bytes does not modify st_atime. A read() that requests more than zero bytes, but returns zero, shall modify st_atime.
Implementations are allowed, but not required, to perform error checking for read() requests of zero bytes.
The use of I/O with large byte counts has always presented problems. Ideas such as lread() and lwrite() (using and returning longs) were considered at one time. The current solution is to use abstract types on the ISO C standard function to read() and write(). The abstract types can be declared so that existing functions work, but can also be declared so that larger types can be represented in future implementations. It is presumed that whatever constraints limit the maximum range of size_t also limit portable I/O requests to the same range. This volume of IEEE Std 1003.1-2001 also limits the range further by requiring that the byte count be limited so that a signed return value remains meaningful. Since the return type is also a (signed) abstract type, the byte count can be defined by the implementation to be larger than an int can hold.
The standard developers considered adding atomicity requirements to a pipe or FIFO, but recognized that due to the nature of pipes and FIFOs there could be no guarantee of atomicity of reads of {PIPE_BUF} or any other size that would be an aid to applications portability.. A value of zero is to be considered a correct value, for which the semantics are a no-op.
I/O is intended to be atomic to ordinary files and pipes and FIFOs. Atomic means that all the bytes from a single operation that started out together end up together, without interleaving from other I/O operations. It is a known attribute of terminals that this is not honored, and terminals are explicitly (and implicitly permanently) excepted, making the behavior unspecified. The behavior for other device types is also left unspecified, but the wording is intended to imply that future standards might choose to specify atomicity (or not).
There were recommendations to add format parameters to read() and write() in order to handle networked transfers among heterogeneous file system and base hardware types. Such a facility may be required for support by the OSI presentation of layer services. However, it was determined that this should correspond with similar C-language facilities, and that is beyond the scope of this volume of IEEE Std 1003.1-2001. The concept was suggested to the developers of the ISO C standard for their consideration as a possible area for future work.
In 4.3 BSD, a read() or write() that is interrupted by a signal before transferring any data does not by default return an [EINTR] error, but is restarted. In 4.2 BSD, 4.3 BSD, and the Eighth Edition, there is an additional function, select(), whose purpose is to pause until specified activity (data to read, space to write, and so on) is detected on specified file descriptors. It is common in applications written for those systems for select() to be used before read() in situations (such as keyboard input) where interruption of I/O due to a signal is desired.
The issue of which files or file types are interruptible is considered an implementation design issue. This is often affected primarily by hardware and reliability issues.
There are no references to actions taken following an "unrecoverable error". It is considered beyond the scope of this volume of. Department of Commerce FIPS 151-1 and FIPS 151-2 require the historical BSD behavior, in which read() and write() return the number of bytes actually transferred before the interrupt. If -1 is returned when any data is transferred, it is difficult to recover from the error on a seekable device and impossible on a non-seekable device. Most new implementations support this behavior. The behavior required by IEEE Std 1003.1-2001 is to return the number of bytes transferred.
IEEE Std 1003.1-2001 does not specify when an implementation that buffers read()ss actually moves the data into the user-supplied buffer, so an implementation may chose to do this at the latest possible moment. Therefore, an interrupt arriving earlier may not cause read() to return a partial byte count, but rather to return -1 and set errno to [EINTR].
Consideration was also given to combining the two previous options, and setting errno to [EINTR] while returning a short count. However, not only is there no existing practice that implements this, it is also contradictory to the idea that when errno is set, the function responsible shall return -1.
None.
fcntl() , ioctl() , lseek() , open() , pipe() , readv() , the Base Definitions volume of IEEE Std 1003.1-2001, Chapter 11, General Terminal Interface, <stropts.h>, <sys/uio.h>, <unistd.h> | http://www.makelinux.net/man/3posix/P/pread | CC-MAIN-2015-06 | refinedweb | 1,764 | 60.55 |
Metrics Machine Batch Export
The Batch Export feature from Metrics Machine isn't in the new extension version. Can that feature be added back?
Those could be added, I guess. Where features are generated for a given set of ufo files.
+1 request for Metrics Machine Batch Export
Very useful for larger families!
@mr @jmickel I miss this feature as well
The generateAllKernFiles.py from Adobe seems to be a nice workaround in the meantime? It is not as user-friendly as the MM feature but yea:(
LOL - that script crashes Robofont for me.
is there any traceback? or anything in the log
see
thanks
The Batch Export window is on my to do list. It'll be back. :)
Somewhat related:
Would it be possible, with MetricsMachine working inside of Robofont, to have the "insert subtable breaks" option from MM Batch Export available in Robofont's Generation Fonts?
Those subtables solve 90% of my overflow problems.
there are wild plans to have a small api available, where external tools can have access to the MetricsMachine logic and callback!
(don’t expect it anytime soon...)
@connor said in Metrics Machine Batch Export:
That’s not a RF script, just run it on the command line.
Even easier is :
kernFeatureWriter.py my_font.ufo
Coming in version 1.1.0:
from metricsMachine import MetricsMachineFont font = MetricsMachineFont(font) text = font.kerning.compileFeatureText(insertSubtableBreaks=True) font.kerning.exportFeatureText(aPath, insertSubtableBreaks=True) font.kerning.insertFeatureText(insertSubtableBreaks=True)
Cool!!!
The fea code could be generated while compling a binary font!!
Awesome! | https://forum.robofont.com/topic/512/metrics-machine-batch-export/ | CC-MAIN-2019-22 | refinedweb | 254 | 51.85 |
On Mon, 2008-08-18 at 16:31 +0100, [email protected] wrote:> Theodore Tso <[email protected]> wrote on 18/08/2008 15:25:11:> > >.> > Also, just to double-check, you don't think AV scanning would read the > whole file on every write?Make this a userspace problem. Send a notification on every mtimeupdate and let userspace do the coallessing, ignoring, delaying, andperf boosting pre-emptive scans. If someone designs a crappyindexer/scanner that can't handle the notifications just blame them, itshould be up to userspace to use this stuff wisely.Current plans are for read/mmmap to be blocking and require a response.Close and mtime update and going to be fire and forget async changenotifications. I'm seriously considering a runtime tunable to allow theselection of open blocking vs async fire and forget, since I assume mostprograms handle open failure much better than read failure. For thepurposes of an access control systems (AV) open blocking may make themost sense. For the purposes of an HSM read blocking makes the mostsense.Best thing about this is that I have code that already addresses almostall of this. If someone else wants to contribute some code I'd be gladto see it.But lets talk about a real design and what people want to see.Userspace program needs to 'register' with a priority. HSMs would wanta low priority on the blocking calls AV Scanners would want a higherpriority and indexers would want a very high priority.On async notification we fire a message to everything that registered'simultaneously.' On blocking we fire a message to everything inpriority order and block until we get a response. That response shouldbe of the form ALLOW/DENY and should include "mark result"/"don't markresult."If everything responds with ALLOW/"mark result" we will flip a bit INCORE so operations on that inode are free from then on. If any programresponds with DENY/"mark result" we will flip the negative bit IN COREso deny operations on the inode are free from then on.Userspace 'scanners' if intelligent should have set a timespace in aparticular xattr of their choosing to do their own userspace resultscaching to speed up things if the inode is evicted from core. Thismeans that the 'normal' flow of operations for an inode will look like:open -> async to userspace -> userspace scans and writes timestampread -> blocking to userspace -> userspace checks xattr timestamp andmtime and responds with ALLOW/"mark result"read -> we have the ALLOW/mark result bit in core set so just allow.mtime update -> clear ALLOW/"mark result" bit in core, send asyncnotification to userspaceclose -> send async notification to userspaceIf some general xattr namespace is agreed upon for such a thing somedaya patch may be acceptable to clear that namespace on mtime update, but Idon't plan to do that at this time since comparing the timestamp in thexattr vs mtime should be good enough.******************************Great, how to build this interface. THIS IS WHAT I ACTUALLY CARE ABOUTThe communication with userspace has a very specific need. The scanningprocess needs to get 'something' that will give it access to theoriginal file/inode/data being worked on. My previous patch set doesthis with a special securityfs file. Scanners would block on 'read.'This block was waiting for something to be scanned and when available adentry_open() was called in the context of the scanner for the inode inquestion. 
This means that the fd in the scanner had to be the same dataas the fd in the original process.If people want me to use something like netlink to send asyncnotifications to the scanner how do I also get the file/inode/data tothe scanning process? Can anyone think of a better/cleaner method toget a file descriptor into the context of the scanner other than havingthe scanner block/poll on a special file inside the securityfs? | http://lkml.org/lkml/2008/8/18/204 | CC-MAIN-2014-49 | refinedweb | 645 | 53.71 |
Sherman, set the Wayback Machine to last week
The Internet has begun to route around the damage caused by the disruption of access to my 1999-2002 BYTE.com columns. I really hope a better solution will be forthcoming. When you are a writer whose entire corpus exists online, woven into a fabric of citation and commentary, it is incredibly painful to see that fabric torn apart. The Wayback Machine is a terrific resource. Serving as the canonical namespace for formerly-free-but-now-restricted content is not, however, its best and highest use. In any case, few users will avail themselves of this option. Most, following Google-supplied links, will just mutter "damn" and move on.
Discontinuity does have an upside. When I was unable to redirect my RSS feed, I learned something interesting about my suscribership. A third of it was robotic, and didn't give a damn that nothing new seemed to have been posted for three months. It's useful to know these things. But on the whole, you'd rather not be forced to learn them the hard way.
I wrote a column for BYTE on the Wayback Machine when it first appeared. Ironically, it was for a planned revival of the print edition of BYTE -- which morphed into a PDF download, so the column never did appear on the web with its own URL. The column speculated that the Wayback Machine would become the resolver of last resort. When Google returned a URL that was blocked or 404'd, the browser would automagically redirect to a Wayback Machine URL. Funny how things turn out. | http://weblog.infoworld.com/udell/2002/11/29.html | crawl-001 | refinedweb | 270 | 72.97 |
fix some common errors in translated strings automatically. Use it with caution to not have it add errors.
See also
Quality checks¶
Weblate employs a wide range of quality checks on strings. The following section describes them all in further detail. There are also language specific checks. Please file a bug if anything is reported in error.
See also
CHECK_LIST, Customizing behavior
Translation checks¶
Executed upon every translation change, helping translators maintain good quality translations.
Unchanged translation¶
Happens if the source and correspanding translation strings is identical, down to at least one of the plural forms. Some strings commonly found across all languages are ignored, and various markup is stripped. This reduces the number of false positives.
This check can help find strings mistakenly untranslated.
Checks that question marks are replicated between both source and translation. The presence of question marks is also checked for various languages where they do not belong (Armenian, Arabic, Chinese, Korean, Japanese, Ethiopic, Vai or Coptic).
See also
Question mark on Wikipedia
Trailing exclamation¶
Checks that exclamations are replicated between both source and translation. The presence of exclamation marks is also checked for various languages where they do not belong (Chinese, Japanese, Korean, Armenian, Limbu, Myanmar or Nko).
See also
Exclamation mark on Wikipedia
Punctuation spacing¶
New in version 3.9.
Checks that there is non breakable space before double punctuation sign (exclamation mark, question mark, semicolon and colon). This rule is used only in a few selected languages like French or Breton, where space before double punctuation sign is a typographic rule.
Trailing ellipsis¶
Checks that trailing ellipsises are replicated between both source and translation.
This only checks for real ellipsis (
…) not for three dots (
...).
An ellipsis is usually rendered nicer than three dots in print, and sound.
Formatted strings¶
Checks that formatting in strings are replicated between both source and translation. Omitting format strings in translation usually cause severe problems, so the formatting in strings should usually match the source.
Weblate supports checking format strings in several languages. The check is not enabled automatically, only if a string is flagged appropriately (e.g. c-format for C format). Gettext adds this automatically, but you will probably have to add it manually for other file formats or if your PO files are not generated by xgettext.
This can be done per unit (see Additional info on source strings) or in Component configuration. Having it defined per component is simpler, but can lead to false positives in case the string is not interpreted as a formating string, but format string syntax happens to be used.
Besides checking, this will also highligh the formatting strings to easily insert them into translated strings:
formatting strings
AngularJS interpolation string¶
See also
AngularJS: API: $interpolate
C# format¶either tense is in use.
Same plurals¶
Check that fails if some plural forms duplicated in the translation. In most languages they have to be different.
Inconsistent¶
Weblate checks translations of the same string across all translation within a project to help you keep consistent translations.
The check fails on differing translations of one string within a project. This can also lead to inconsistencies in displayed checks. You can find other translations of this string on the All locations
Zero-width space (<U+200B>) character are used to truncate messages within words.
As they are usually inserted by mistake, this check is triggered once they are present in translation. Some programs might have problems when this character is used.
See also
Zero width space on Wikipedia
XML markup¶
This usually means the resulting output will look different. In most cases this is
New in version 3.7.
Translation rendered text should not exceed given size. It renders the text with line wrapping and checks if it fits into given boundaries.
This check needs one or two parameters - maximal width and maximal number of lines. In case the number of lines is not provided, one line text is considered.
You can also configured used font by
font-* directives (see
Customizing behavior), for example following translation flags say that the
text rendered with ubuntu font size 22 should fit into two lines and 500
pixels:
max-size:500:2, font-family:ubuntu, font-size:22
Hint
You might want to set
font-* directives in Component configuration to have same
font configured for all strings within a component. You can override those
values per string in case you need to customize it per string.
See also
Managing fonts, Customizing behavior
Source checks¶
Source checks can help developers improve the quality of source strings.
Unpluralised¶
The string is used as a plural, but does not use plural forms. In case your translation system supports this, you should use the plural aware variant of it.
For example with Gettext in Python it could be:
from gettext import ngettext print ngettext('Selected %d file', 'Selected %d files', files) % files
Numerous translations of this string have failing quality checks. This is usually an indication that something could be done to improving the source string.
This check failing can quite often be caused by a missing full stop at the end of a sentence, or similar minor issues which translators tend to fix in translation, while it would be better to fix it in the source string. | http://docs.weblate.org/en/weblate-3.9.1/user/checks.html | CC-MAIN-2019-51 | refinedweb | 877 | 54.63 |
Man Page
Manual Section... (3) - page: res_querydomain
NAMEres_init, res_query, res_search, res_querydomain, res_mkquery, res_send, dn_comp, dn_expand - resolver routines
SYNOPSIS
#include <netinet/in.h> #include <arpa/nameser.h> #include <resolv.h> extern struct state _res; int res_init(void); int res_query(const char *dname, int class, int type,
- unsigned char *answer, int anslen);
- unsigned char *answer, int anslen);
- int class, int type, unsigned char *answer, int anslen);
- int type, char *data, int datalen, struct rrec *newrr, char *buf, int buflen);
- int anslen);
- int length, unsigned char **dnptrs, unsigned char *exp_dn, unsigned char **lastdnptr);
- unsigned char *comp_dn, char *exp_dn, int length);
DESCRIPTIONThese functions make queries to and interpret the responses from Internet domain name servers., i].
RETURN VALUEThe res_init() function returns 0 on success, or -1 if an error occurs.4.3BSD.
SEE ALSOgethostbyname(3), resolv.conf(5), resolver(5), hostname(7), named(8) | http://linux.co.uk/documentation/man-pages/subroutines-3/man-page/?section=3&page=res_querydomain | CC-MAIN-2013-20 | refinedweb | 140 | 55.64 |
This is working great for me so far, but I did encounter one issue. When the controller class from my sub-application is loaded, it looks for the stores and models in my main application's app path rather than in the application path of my sub-application. I get this error:
Code:
Uncaught Error: [Ext.Loader] Failed loading synchronously via XHR: 'app/store/Products.js'; please verify that the file exists. XHR status code: 404
Code:
stores: [ 'MYSUBAPP.store.Products' ],
Code:
store: 'MYSUBAPP.store.Products'
Is this working as designed, or should I not need to use full class names for my models and stores?
Has anyone managed to get this approach working with a Sencha Architect project?
I modified the sample Portal project from to use the MVCLoader controller, portlets, etc. and everything works fine until you use a portlet with a store.
When the master application tries to load a portlet with a store the following is logged to the console:
Uncaught TypeError: Cannot read property 'buffered' of undefined (ext-all-debug.js:94482)
...presumably because it can't resolve the underlying store. I've tried fully qualifying the store class references in the portlet's Main controller (as in the post above) and even tried overriding some of the architect-generated classes to match the ExtJS working portal/portlet sample, but something must be different with the Architect version that's causing this issue.
Any ideas? Has anyone got this working with Architect?!
Many thanks,
James
Hi Frederic,
The implementation of multiple applications using Extjs with MVC architecture is very interesting. I wonder if you can give us some information in a real application you have used this architecture. I would like some information as I am in the process of analysis to use ASP.NET MVC + Extjs as interfaces for my application. I have an ERP with 350 tables and 300 interfaces without counting the reports.
Doing a study. Today I was developing a new version of my application using Extjs and MVC architecture, which would use a solution like yours to be able to have each module its own APP. The problem is the number of models, views and controllers for the system. Thinking quickly we would have 350 controller, 350 views and 350 models.If it all together in the app-all.js will give about a 5mb
Using the solution of multiple MVC applications, can reduce the size of the file; since each module will have its own app, and still have the performance gain over the creation / destruction of each module on user interaction.
Can you give us more information?
How big is the largest application you have ever used this technique?
How many tables she has?
How many interfaces? Controllers? Views?
Which file size and app.js classes.js each app?
Have you had any problems related to performance during use of the system after some time because an interface is not changing?
Sorry for so many questions. I can not find more information on this subject in extjs forum. I see that you are the one who now works as a solution to this and that holds the knowledge of the advantages and disadvantages. In a crucial moment that I am, I need to take this decição because this new version will be increased gradually and can grow to be twice or more tables and interfaces. It really is a great application.
I can not make the decision to start developing it without having a deeper study of this process extjs MVC applications.
Duplicate XHR requests
Duplicate XHR requests
Hi guys,
Fredric's portal solution is very impressive. I've been working a hybrid portal solution since about a year but it is based on ext 3 at the moment. It was the similar requirement that I claimed to be able hosting independently deployable applications in a portal shell. Its hybrid nature means it is conditionally uses the ext js library when complex user interaction requiring. Unless just a plain HTML presentation.
While ext3 had no dynamic class loading mechanism I'm simulating this feature with requirejs' module approach with successful.
Although ext4 has more sophisticated and complex solution handling it all together so I am going toward.
Fredric's solution covers all the needs what I must implement - bless his name -
but I was experiencing a strange behavior of sync script loading. When any of portlet views was instantiating duplicated XHR requests will be injected. While stack trace shows nested requires in ext core and I'm using ext 4.1.1a I'm not sure is it whether an Ext4 issue currently or not. That seems to be sure why these files are can't debuggables under firebug.
Does anybody can confirm my experiences?
----
Finally made a patch so now breakpoints are working.
More info:
Last edited by rix; 27 Oct 2012 at 8:56 AM. Reason: Ext.Loader issue
Update
Update
Sorry for the late reply guys.
I have been busy doing other stuff and its time for a code review and hopefully I will be able to solve any issues you have or at least answer them.
On the question of stores my first thing to look for is of the model actually are loaded in time of the store being instantiated. I know that in some cases (even in pure Ext) i have ended up putting the model in the requires:[] on the grid or store itself. Try that.
On the question of HOW big applications you can do with this i would say that there is no limit as long as you are letting the Ext.Loader load the sub-applications. Thats the whole point. Thats also goes for Ext not using the Multi MVC approach.
Did i miss any important question?
Code:
sencha compile -classpath=app exclude -all and include -namespace SubAppNamespace and concat -yui minified.js
An example to load the script:
Code:
// when using scriptElements, look at preserveScripts if (Ext.Loader.scriptElements['../../SubAppName/production/minified.js']) { Ext.create('Ext.window.Window',{ width: 350, height: 350, layout: 'fit', items:[{ xtype: 'gridportlet' }] }).show(); }else{ Ext.Loader.loadScript({ url: '../../SubAppName/production/minified.js', onLoad: function() { Ext.create('Ext.window.Window',{ width: 350, height: 350, layout: 'fit', items:[{ xtype: 'gridportlet' }] }).show(); }, onError: function(op){ console.log(op); } }); }
Now the MVCHelper supports Ext4.1.3
Now the MVCHelper supports Ext4.1.3
Had to make some adjustments to the MVCLoader preprocessor to be able to use ExtJS-4.1.3.
Please fetch new code at
If you want to contribute just tell me and i will add you to the project.
Similar Threads
Applications - how many?By westy in forum Ext: DiscussionReplies: 0Last Post: 21 Feb 2011, 5:45 AM
Building Big ApplicationsBy aramaki in forum Community DiscussionReplies: 14Last Post: 29 Dec 2010, 3:55 PM
Calling ext GWT applications from legacy applicationsBy mathaj77 in forum Ext GWT: DiscussionReplies: 3Last Post: 14 Aug 2009, 4:15 AM
Writting big applicationsBy bkraut in forum Ext 2.x: Help & DiscussionReplies: 2Last Post: 21 Jul 2008, 12:32 AM
Customize applicationsBy JuanPalomo in forum Ext.nd for Notes/DominoReplies: 0Last Post: 2 Apr 2008, 11:51 PM
Thread Participants: 35
- Dumbledore (1 Post)
- steffenk (4 Posts)
- gasgas (1 Post)
- rix (1 Post)
- stevil (1 Post)
- bclinton (3 Posts)
- mitchellsimoens (8 Posts)
- martin17 (1 Post)
- TonySteele (2 Posts)
- yyjia (2 Posts)
- edspencer (1 Post)
- xjpmauricio (1 Post)
- gilcachaca (1 Post)
- maneljn (4 Posts)
- halcwb (3 Posts)
- mmamane (1 Post)
- mrdanger (1 Post)
- vadimv (2 Posts)
- kent78 (1 Post)
- Springvar (1 Post)
- smay (1 Post)
- Nickname (1 Post)
- shevken (1 Post)
- freshyseth (1 Post)
- spcmdr (1 Post)
- objectiveanand (1 Post)
- ckarcz (1 Post)
- vitorpfn (1 Post)
- olecom (2 Posts)
- jdonmoyer (1 Post)
- brian.moeskau (1 Post)
- james.shannon (1 Post)
- savonac (1 Post)
- psiegers (1 Post)
- sureshatpure (1 Post) | http://www.sencha.com/forum/showthread.php?133087-Multiple-Ext.Applications-(Big-Applications)/page5 | CC-MAIN-2015-18 | refinedweb | 1,309 | 64.41 |
Integrate with Appodeal to monetize and promote your apps & games with ads on Android and iOS.
Integrate with Appodeal to monetize and promote your apps & games with ads on Android and iOS.
Once you have included Felgo Appodeal Plugin you can display ad banners and interstitials from Google advertisers in your apps & games. You then get paid every time your users click on ads in your app.
Combine the Appodeal Plugin with other plugins like in-app purchases to up-sell users from ad-supported apps to ad-free versions.
To try the plugin or see an integration example have a look at the Felgo Plugin Demo app.
Please also have a look at our plugin example project on GitHub:.
The Appodeal item can be used like any other QML item. Here is a simple example:
import Felgo 3.0 Appodeal { } Appodeal plugin you need to add the platform-specific native libraries to your project, described here:
build.gradlefile and add the following lines to the dependencies block:
dependencies { implementation 'net.vplay.plugins:plugin-appodeal:3.+' }: | https://felgo.com/doc/plugin-appodeal/ | CC-MAIN-2020-16 | refinedweb | 176 | 56.35 |
I would like to override a method in an object that's handed to me by a factory that I have little control over.
My specific problem is that I want to override the getInputStream and getOutputStream of a Socket object to perform wire logging.
However the generic problem is as follows:
public class Foo {
public Bar doBar() {
// Some activity
}
}
Foo
doBar
Bar doBar() {
// My own activity
return original.doBar();
}
Since Java uses class-based OO, this is impossible. What you can do is use the decorator pattern, i.e. write a wrapper for the object that returns the wrapped streams. | https://codedump.io/share/ICsaPclQMQCv/1/overriding-a-method-in-an-instantiated-java-object | CC-MAIN-2018-26 | refinedweb | 101 | 63.49 |
Formatting Your Output with C++ Streams
If you do a great deal of C++ programming, sooner or later you'll end up having to format some text data. This might be the case if you're writing text to the screen (such as in a console program) or if you're writing text to a file.
Thanks to the beauty of an important part of the C++ standard called streams, you can easily format your text so it lines up appropriately, your floating point numbers have the precision you need, your numbers are right- or left-justified, your tables have columns that look nice, and so onregardless of whether you're writing to the console or to a file. In this article, I'll show you how to do all this.
I assume that you're familiar with the basic idea of using cout in your programs to write text to the console. From this notion, I'll build on how you can format your text nicely. But first I need to talk about an issue that comes up again and again in C++ programming: the ANSI Standard Issue.
ANSI Standard Issue
Back in 1998, ANSI released its official standard for the C++ programming language.
NOTE
Note: Actually, it's ISO that released the standard. Officially, it ratified an ANSI standard that had been approved a year earlier, in 1997. The current standard is therefore an ANSI/ISO standard or simply an ISO standard.
This standard included a huge set of classes and functions known as the C++ Standard Library, which is where you find all the stream classes.
To get the most out of the language, the standards committee agreed to lump all the classes in the Standard Library into a single namespace called the std (which stands for standard) namespace.
In earlier versions of C++, if you wanted to use the streams for doing things such as writing to the console, you would include the header file iostream.h, like so:
#include <iostream.h>
Then, if you want to write text to the console, you would have code such as the following, which would write code to the console:
cout << "Hello" << endl;
But the ANSI standard replaces this with a new file that doesn't have the .h extension, like so:
#include <iostream>
The idea here is that compilers can support both the new ANSI header files and the older ones. But if you want to use the new ANSI headers, the cout and endl identifiers as well as all the other features of the streams are in the std namespace. Thus, just attempting a cout << "Hello" << endl; will result in an error. Instead, you have to fully qualify the identifiers from the standard namespace, as follows:
std::cout << "Hello" << std::endl;
Frankly, that's a pain. Who wants to do that? So instead of typing std:: over and over again, you can put the following line of code after your #include lines:
using namespace std;
That tells the compiler that if it can't find an identifier (such as cout), to then try looking in the std namespace. Thus, this good old line will compile just fine:
cout << "Hello" << endl;
The compiler won't at first find the cout identifier, and so the compiler will in turn look in the std namespace, where it will find it. The end result of all this is that in this article, I'm assuming that you'll be using the new ANSI headers; however, in my samples I'll include the using namespace std; line. And now back to our regularly scheduled program!
As a compromise between these two approaches, one can use a using declaration:
using std::cout; using std::endl; cout << "Hello" << endl;//now fine | http://www.informit.com/articles/article.aspx?p=170770&seqNum=7 | CC-MAIN-2016-44 | refinedweb | 627 | 64.85 |
To comment a group of lines, first select the lines. Then, holding down the Control key throughout, press K, then C. To uncomment, hold down the Control key and press K, then U. (These two stroke combinations are called "chords.")
Do you have a Visual Studio tip, you'd like to share? Send it to me at [email protected].
Posted by Peter Vogel on 04/19/2011 at 1:16 PM
Printable Format
Just amazing how many people are willing to waste their time arguing over something obvious
Oh, god... that was painful to read. So you might want your code back? How about using source control? Having DVCSs (git, bzr, hg to name just 3 of them) available nowadays it's just a matter of seconds to be able to start using it on a project so it makes no sense no _not_ use it.
I'm with you Peter Vogel, even though I will delete code; commented-out code provides a direct way to communicate to the editor the way something used-to-be or an alternative. Whenever I leave "dead code" it's to leave some useful bit of information, or I'm unsure that I will not go back to using it. DOT ww.shortuggoutlet.com DOT ww.uggstore-order.com DOT ww.v-shoes.com DOT ww.uggshop-outlet.com DOT ww.germe.org
You're doing it wrong.
What a terrible idea. This post should be banned from they eyeballs of any jr. dev not yet able to find the forehead slapping humor in it.
Code Hoarder!
Oh, my dear. Listen, if you're not using code, STRIP IT OUT. Otherwise, all you've accomplished is collecting cruft that someone else is going to have to sort out one day. If you think you might want to re-use a bit of code, strip it out and store it in a separate text file in your home directory, for your personal use. Don't burden the rest of your organization with it. Crufty code is not good code.
You are joking, right? Ever heard of source control?
In all seriousness, I thought this was a joke too. I get the sense from some of your follow-up comments that maybe your codebases are your own. In which case, do whatever. The problem with commenting-out code is that it wastes cycles for other developers visiting this module for the first time.
I'm Joe Teammate, and I have to go work on a module you wrote because you've been on vacation. I take a look at the code and see two versions. One is commented out, and I have a bug to fix. Is the commented-out version correct? Was it supposed to come back in? Is the version that's running some sort of stub? Is that why I have this bug? Now I have to read and understand and compare and contrast both versions. And once I do that, I can begin on my task.
Do you see why people can have such violent reactions to vistigial limbs and evolutionary dead ends in comments? The code should reflect our best understanding of the problem domain, right now. If our understanding gets better, we change the code to reflect it, and throw out our old misunderstandings.
The next person who comes along will see our best understanding, uncluttered, and we can still get to old code by doing diffs. The price we pay working with source control saves our teammates threefold.
This is a joke, right?
Misterm: I can see your point about the wording in my tip--it just didn't occur to me that it could be read that way. Of course, I frequently rewrite code to make it better (or just to make it work)--I'm a big fan of refactoring. But I have to admit, it's unusual for me to remove code. While my first idea isn't always my best idea, it's not without some value. For instance, I had a routine that was to handle data coming in through a serial port. It was important to eliminate duplicate entries, provided those entries came within a specific time frame. The original version of the method was quite lengthy and I realized that the way I was solving the problem wasn't optimal. I replaced the method with a "more elegant" solution. But that old code still had some neat features and I could see where I might want it again. I haven't yet found a good place to keep code so I can find my way back to it when I need it. I've found, over the years, that what I remember about code is where I used it so that provides the best way of finding my back to it. It's funny you mention the meta-data because I do add comments about why I replaced the code (good code is self-explanatory about the "how" it works but it's often difficult for code, by itself, to explain the "why" it's there).
Having (small) bits of code (occasionally) in the comments by means of explanation I can get on board with....especially if it's just the algorithm or pseudo-code.
Still, I would say there are *very* few instances where you should be straight up commenting blocks of code for the purposes of preserving it for whatever reason.
(Code blocks should only be commented out briefly, for testing etc).
The first words of this post are "I never remove code" (or something to that effect). Even allowing literary license that's a rather strong endorsement of what is at best a "good enough" temporary documentation hack.
If you feel it's important to note, add a comment like:
/*
Used to use someLibrary.fooFighter() here, but [it's slow, not thread-safe, etc], so now a [more efficient, better, etc] otherLibrary.barKiller() version is used.
See commit #XYZ in the Dev branch.
*/
If you're going to leave chunks of code in your code-base, you certainly should at least explain WHY it's there, and if you're using a good VCS then once the explanation is n place, the commented out code becomes extraneous.
Well, actually that is my point: The diff command gives me the evolution of my code rather than a copy of "the method" or the "the algorithm." The commenting method lets me freeze one version of the code that, for whatever reason, I feel is valuable and will want at some point (milestones do the same thing for me but at a much larger level). I use source control/diff to track the changes/evolution of the code (version control's intended purpose). But, at the same time, I'm using source control for a purpose it wasn't intended: as a repository for code I think has value though it isn't useful in its current location. Since this came up, I've been thinking about it and, it seems, that I've decided that the various source control systems I've used had most of the functionality that I wanted for the code repository I haven't been able to find(though that wasn't their intended purpose). That does make a certain kind of sense: these systems are source code tracking tools, after all.
I'm trying to be open-minded...really.
However, I can think of a million reasons why this is a **bad** idea, and not single one why it is a good one.
Pete, can you provide some examples as to what this accomplishes that source control doesn't (besides cluttering up your code)?
Version Control Systems will not only allow you to accomplish **anything** you could do with this, they be much more sophisticated and powerful without the drawbacks.
Can you explain how this is better than just going through your VCS log/ Diff?
The only possible benefit I can think of is it saves you of having to actually use the diff command, as your codebase is essentially in "diff" mode perpetually...but compared to the diff tool in a VCS, it is crappy and unsophisticated with none of the great tools VCS would provide...
As already mentioned it also bloats your code base and makes it harder to read.
Where exactly is the win here?
hahaha! Oh wait, I thought this was the onion for a second.
Actually, what's been most interesting about this column has been the responses. The first thing that surprised me was the number of people who assumed from five words ("you might want it back"--in retrospect, I'd leave out the word "back") that I don't use source control (is that even an option anymore?). One responder went from there to suggesting that I don't use TDD. I guess I can see the assumption that I might not use source control but the leap to TDD was especially interesting because 30 seconds with Google would have turned up some of the articles I've written about TDD. While many commenters were concerned about the source control issue and wanted to offer advice to other readers (and, I guess, to me) others felt they added value just by expressing their heartfelt opinion about the article's author ("you should be shot" being the most extreme example of those). I'm sort of surprised that no one has suggested a good code repository (several readers recommended keeping old code in parallel txt files which doesn't sound like a bad idea but doesn't sound optimal, either). You wonder if there's an opportunity out there for someone to deliver a useful code repository. I've tried a couple (even used EverNote for awhile because clipping was so easy) but I haven't found anything I liked...yet.
I am amazed at the number of people who regard old code as junk or something that you have to "wade through." I regard old code as a resource. I've tried creating code libraries in the past but I've found that I best remember where code is based on the application I created it for (or where I last used it). And I'm a big fan of TDD--it's essential to both productivity and reliability. With TDD, it takes me a long time before I'm willing to remove a test: tests are a more useful resource than code. So I don't delete old tests I don't comment them out: I mark them with the Ignore attribute to remind me that I thought this test mattered--was I wrong to include it at first or wrong to exclude it later?
Commented code is a code small IMO -
At least pity the poor sap who has to come in after you leave a project and wade through all your commented code. (resubmitting this since the initial one was swallowed)
You should check out my rant about DOT . This is bad code smell and you should avoid it. Pity the poor sap who come after you leave and has to wade through pages of full of commented code trying to understand what actually works.
I've got a shorter chord for you. Alt-F4.
Next time you want to comment out code, try it instead.
Now text searches across a large source directory finds class references in all of your commented out code which influences refactoring and estimates. Sooooooo bad.
If it is commented out, it has no value, not today, not tomorrow, never. It is just debt. Do you comment out that codes unit tests too? Let me guess, there aren't any.
If you're against source control and want to keep dead code around in case you need it, try cutting the code and you can then cycle your clipboard ring to get it back.
Thanks for this. I'm a pythoner using vim but this post has brought me laughter to the point of tears. Especially the date, 04/19/2011 .. hold on I need to laugh it off before I can type then rest of this comment. Anyways, thanks again.
Epic fail. Please use Source control
I prefer to wrap old code in #if /#endif preprocessor directives. Then I can specify which version I want to compile and it uses that code. No need to remember those pesky comment/uncomment keyboard shortcuts
#if V1
this is old code
#endif
jk :)
You should be shot. Grrrr!
Here's a good tip: Instead of filling your source with dead code, why not fill it with ASCII Art, you know pictures of things like maybe a badger or a racing car or a really awesome spaceship. Other developers that have to maintain your code will really appriciate it.
Yeah, you never know when you mich need a conditional clause, and this shortcut is way quicker than typing "if"...
Someone linked me to this just for the laugh. Nice work on figuring that out... On a serious note though, I remap my keys for commenting/uncommenting. I also don't like to keep a bunch of uncommented code as I like to keep my code as clean as I can. If I feel I have some code that I want to save for later for another project, I usually just store the snippet in SVN. Nothing new that hasn't been discussed before in this thread.
I sometimes use source control systems. I never know when I might want my code back.
I've found a brilliant shortcut that will improve the quality of YOUR code, try Alt F4 after commenting out.
I really wonder how much people are paying you to fill their code bases with crap. An awesome thought; I want to be like you.
Thanks for that. Really useful. Do you know whether there are any other things you can do using shortcuts?
May I commend you on your amazing ability to memorize keyboard shortcuts.
Extremely disappointing.
Selim: I like the formatting commands--I was just selecting text and use the Tab key when indenting didn't come out the way I wanted. This is much more useful (provided I can remember the key combination--I'm more of a menu-and-mouse kind of guy).
Dimitar: Ouch! That "come out of the 90s" comment stung :>. But just because code doesn't work in one place doesn't mean it won't be useful somewhere else. I don't regard that code as "junk" (to quote an earlier comment)--I regard old code as a resource. Hmmmmmm....maybe this a topic for a new reality TV show: "Software Hoarders."
I'm just like Peter Short, except that I use several stages usually called hello.backup.cs.txt, hello.backup2.cs.txt, hello.backup.final.cs.txt and hello.backup.finalfinal.cs.txt
It's much easier to work this way!
Geez, Peter (both of you) it's time to come out of the early 90s. Code that doesn't work should not exist, commented or not.
Remapping the keyboard is a great way to personalize your Visual Studio UI to support you--it's something I think every developer should do to create an editor that works for you. Unfortunately, I don't get to do this. As a consultant, I spend a fair amount of time working on other people's computers: I stick with the vanilla keyboard. Nothing is more embarrassing then, right in front of a client, hitting some key combo...and have nothing happen (or, worse, something you don't want happens).
Wow (or Geeeez...). The jump from wanting to keep old code to not using source control is a big leap...and not one that I'd be willing to make. I remember writing a front end for CVS in REXX on an IBM 4381 in the early 90s--and that wasn't my first version control system. But I do like keeping code around--just because it doesn't work here doesn't mean that it won't be useful somewhere else (and often a big help in understanding the existing code: maintaining code is as much about being a detective and historian as anything else). Unlike Peter Short, I'm too lazy to keep a separate code library: I use my client's repository as my library. The only objection I have to source control is that I keep having to change them (currently Subversion with Tortoise). Like I said: Geeez.
When I first read this I was these are WordStar commands (late 70's early 80's) but it just so happens they do work.
I would never use this in place of source control they could be helpful sometimes in debugging.
I agree it's stupid to do this when there is free source control products all over the place. But if you did want to comment/uncomment quickly, this is not the fast way. Alt+C/Alt+X is the way to go:
Trolls gonna troll :P
I personally keep a txt file along side my cs file and give it a corresponding name, eg hello.cs gets a hello.safety.cs.txt and then if i want to take something out then i copy the code into that so i dont have ugly comments everywhere ruining my readability. If i want syntax highliting on these files i can simply change the notepad++ registration to recognize txt files as cs. EASY AND COOL.
LOL, I read this thinking it had to be posted on April 1rst. This has got to be a joke and meant for a good laugh.
"You never known when you might want it back." OK, great. You comment out your code. later you rename a couple of variables, maybe a parameter. "Oh, wait, I do need this code." Uncomment the code - BOOM. You no longer compile. Great plan.
Use source control for what it is meant for: versioning your code.
This is fucking bullshit. Leave it to a .NET developer to not fully utilize technology.
Is this a real tip or an Onion-esque satire?
How clueless can you be
No no no no no no no.
"I never delete code -- you never know when you might want it back."
-That's what source control is for.
Also, you should never comment out code and then check it in. You have history in your source control for that. "Inline-history", as you call it, is a great way to confuse future maintainers and clutter your files with junk.
Wow this is BAD! I would NEVER hire you after reading this. dead code should never go into the repo. Version control is the solution. how would you remember what parts you need to uncomment ? Very often its not just a single chunk of code but spread out all over your source. And over time this would only get more and more messy. Say it with me: SOURCE CONTROL, SOURCE CONTROL, SOURCE CONTROL
I always change my text editor to use 1 key to comment and uncomment - usually F8.
This is much easier to use pressing 1 key rather than 3, and much easier to remember.
Source control is good--but commented out code also provides a in-line history of changes to your code (and without running a diff operation).
I agree with John. Unless you comment something out just to test something, commented code should never make it into the source control repository!this being said, a nice one is Ctrl-K + F or Ctrl-K + D that formats the current selection or document respectively. Very handy when the editor gets indentation all messed up. Ctrl+Shift+B is also very handy: it does a full build.
If "you never know when you might want it back" then it's a job for source control. Commenting out code is only for stuff that's coming back in the immediate future (e.g. not-finished-yet case or if blocks, "real" code being replaced with debugging code, etc.)
> More TechLibrary
I agree to this site's Privacy Policy.
> More Webcasts | http://visualstudiomagazine.com/blogs/tool-tracker/2011/04/wbtip_comment-keystrokes.aspx | CC-MAIN-2014-15 | refinedweb | 3,374 | 72.97 |
Adding Identity Roles To Identity Server 4 in .NET Core 3.1!
Prerequisites
First of all, you must have a .NET Core 3.1 SDK (Software Development Kit) installed in your computer and I also assumed you are currently running Windows 10 or some Linux with proper environment set.
And the IdentityServer4 package on which we will dive in. The IdentityServer as a whole has been one of the de-facto standard in large scale deploy-able authentication service for .NET driven ecosystem. It uses OpenID Connect and OAuth 2.0 as base technology.
So where do we start?
The first thing we need to do, is to fetch the official starter template for IdentityServer4. In order to do that we need execute the following code below:
The command above will install the IdentityServer4 template in your workstation.
After installing the official templates, we create and bootstrap our project on which we will call Is4RoleDemo.
As a rule of thumb, always run the bootstrapped code first to see if there are no errors and whatsoever. Then if there are no problems commit it to a version control system (VCS), so you could rollback if there’s a problem.
We can now start modifying the project. Start by editing the Config.cs which can be found on the root directory.
The Config.cs contains initial data that can be used to run IdentityServer4 for in-memory storage setup. The in-memory storage should only be used in non-production environment.
First, we insert and create a new client on the variable named clients inside Config.cs file. This new config will require the client to have PKCE (Proof Key for Code Exchange) enabled authentication and verification (currently authentication with PKCE is now the standard in OAuth 2.0).
After configuring the clients, we need to add roles to the initial seed data. Edit the SeedData.cs file and add the following before the user seed data.
The code above specifies two new roles which are member and admin. After adding the roles, we need to assign the roles to the seed users. Add the code below after the user initialization specifically alice’s user initialization.
What this does is, it will add the specific role to the user on the AspNetRoles mapping table. Same with alice put the code below bob’s user initialization in SeedData.cs.
With all that implemented, adding roles and assigning roles to users. We need now to get the roles and put it on to user claims and return it as part of user info.
To return the roles on claims we will implement a profile service that will inject the roles. To do that, we create a folder Services and a class file named ProfileService.cs inside it.
The ProfileService class will inherit the IProfileService trait. Then we override the method GetProfileDataAsync(ProfileDataRequestContext context) inside the class and put the code below.
The code above specifically search for the specific user and check whether that returned user data is assigned to any roles. And if it satisfy that condition then modify the current claims adding the role assigned for that user.
Check the whole source for the ProfileService class below and compare it with your current code.
We should head now to the Startup.cs file to create and initialize the class ProfileService class that we’ve created. Put the code inside the ConfigureServices(IServiceCollection services) method. This will scoped the class and inject our custom implementation to the service.
Finally, build and run the project to see if there are any errors using dotnet build and dotnet run. If everything is okay, test the implementation using Postman.
On postman window, make sure to set the authentication to OAuth 2.0 with PKCE and query to /userinfo endpoint of the identity service. If everything is correct, on the OAuth 2.0 get token tab — copy the JWT token returned by our identity service.
After copying the JWT token, we put the token on a JWT analyzer site like. The site will analyze and return a JSON structure on which we will use to confirm if the role has been added properly.
If you look above the JSON, you’ll see there is a role field included that means we succeeded in adding roles to the claims. Otherwise, debug and see the ProfileService class if the user is returning roles.
(Optional) Connecting to MVC ASP.NET Core projects
To use the identity service we created in your own MVC project. First and foremost, we need to add in the MVC project the Microsoft.AspNetCore.Authentication.OpenIdConnect package using NuGet package manager.
Then on Startup.cs we add and configure the OpenId Connect driver to connect to our newly created identity service. Then modify the AddOpenIdConnect option and add the code below, this will map the roles to the authorize attribute field roles.
To use the roles in controllers you need to import Microsoft.AspNetCore.Authorization namespace. This will let you use the authorize attribute to secure the class or method that you intend to filter out users.
I think that’s all for it, check the full source repository in following link below.
Conclusion
Working with IdentityServer4 is somewhat pain in the ass — as developer support is locked behind a paywall. There are many ways to put roles in claims, but I think this is the most simplified and easiest implementation there is. With roles you can now filter out users accessing specific endpoints.
You can found the complete repository here.
Follow me for similar article, tips, and tricks ❤. | https://ffimnsr.medium.com/adding-identity-roles-to-identity-server-4-in-net-core-3-1-d42b64ff6675?readmore=1&source=---------6---------------------------- | CC-MAIN-2021-21 | refinedweb | 933 | 66.84 |
Python Cookbook
Programming Python, Second Edition with CD
Python in a Nutshell
Python Programming for the Absolute Beginner (Absolute Beginner)
Python Pocket Reference (Pocket Reference (O'Reilly))
Format: Paperback
Author: Mark Lutz
ReleaseDate: December, 2003
Publisher: O'Reilly Media
Rating:
This should be your first Python book!
I know there are several choices for 'beginning' type Python books, and you may be tempted to choose a different one because it is newer than this one, but please understand that you lose nothing by reading this book instead. This is simply a stellar introduction to the Python language, for both newcomers to programming and those who are already proficient in another language. It covers Python 2. 3 (which is just short of the current 2. 4), and there are only a couple of items not referred to (e. g. decorators and decimals). But you can easily read up on the latest features online. The benefits of this book far outweigh the fact that it was published a few years ago!
Here is the true advantage of Learning Python: the authors describe the language in complete detail from the ground up. They begin with how to use the interactive interpreter and IDLE, and then move on to built-in data types. Every single thing that could be considered a 'component' of the Python language gets its own chapter (numbers, strings, lists, etc. ), and the larger components (functions, modules, classes, etc. ) each get their own Part (which is further divided into chapters). In other words, they take plenty of time to describe everything you need to know about everything in the language. You won't finish learning the core language until well into the 400-range of pages.
Another intro Python book that I just began reading has already covered numbers, arithmetic operators, functions, modules, and a few other things, all by page 20! I won't name the book yet, because I'm not fairly deep enough in it yet. But this is certainly not good for a newcomer.
Don't even wonder about other books! Learning Python covers every aspect of the language in great detail, yet at the same time remains intelligent (e. g. it does not explain to you what variables in general are (hopefully you have a basic understanding of programming already), but it explains in great detail what variables *in Python* are). After you read this book, you will have an amazing foundation in Python.
Not really useful
It also lacks a reference section and is excessively wordy. This book is not very good for actually learning Python.
Learning implies tutorials and a gentle progression from basic to advanced subjects; this book does neither. For example, in chapter 3, "How You Run Programs", it introduces modules and namespaces--fairly advanced concepts to read about before even the first "hello world" program! In chapter 4, as it describes the use of numbers and strings, it is already delving deep into the uses and implications of Python's objects.
With well over 500 pages, there should be plenty of room for a reference section, but there is none. There is no list of built-in classes and their methods.
The overall tone of the book is enthusiastic, touting Python's object-orientedness and other advantages. Unfortunately, it is excessively wordy and difficult to read. Cheerleading can be excused, but it is present on nearly every page and gets old quick.
In a book about programming or a programming language, one might want tutorials, reference, discussion of advanced topics, or code examples. This book provides none of these things. I do not recommend it.
Not worth the expense.
This book does an adequate job of teaching, but I'd say that "How to Think Like a Computer Scientist" does better, and you can just look at it on the website. Programming books have two uses: to teach you and as a reference. The index is lousy, which makes it hard to find things, and it doesn't cover enough material to make it useful. I wish I'd just bought two copies of Python in a Nutshell instead. | http://www.linux-directory.com/programming/Learning-Python-Second-Edition.shtml | crawl-001 | refinedweb | 688 | 63.29 |
original in en Egon Willighagen
This article is based on the latest release of Lire, being lire-20011017. Note that configuration has changed a lot since the previous release, and that, basically, the first article in this series is already outdated. The general idea of lr_config, however, has not changed.
New features in this release are, among others: two new super services (FTP and firewall), a lot of new reports (total > 68), new output formats (XHTML and RTF) and lots of bug fixes. But, the most important change in this release is in the engine. The report generation process has completely been rewritten to make use of XML technology.
This article will introduce one of the XML formats that are now used in Lire, and how this is used to specify reports. It will not be a tutorial on how to make new reports, but it will show you how you can change the predefined reports at a low level. But first, this article will explain how you can tell Lire which reports it should generate and how parameters for these reports can be set.
Each super service (e.g. `email' is a super service, the `postfix' and `sendmail' service's belong to this super service) has a number of reports available, which extract information from the log for you. The WWW super service has, for example, 31 reports. Not all reports are interesting for everyone. Some are very specific. By default, most of those reports are selected, but it is useful to customize this.
The reports that will be used in the generation of the report are given in the file <prefix>/etc/lire/<superservice>.cfg (assuming Lire is installed in the directory <prefix>). For example, the configuration file for the FTP super service looks like:
# Report configuration for the FTP super service
# Top X reports
top-remote-host hosts_to_show=10
#top-files files_to_show=10
top-files-in files_to_show=10
top-files-out files_to_show=10
top-users users_to_show=10
# By day reports
bytes-by-day
# Transfers by X reports
transfers-by-direction
transfers-by-type
The FTP super service thus has eight reports defined and all but one are selected. The "top-files" is deselected by means of the "#" character. Removal of the "#" char will select the report again.
Note that not all line starting with "#" are reports. In this configuration file the lines "Report configuration for the FTP super service", "Top X reports", "By day reports" and "Transfers by X reports" are comments. Similar things can be expected in the other configuration files.
Ordering is very simple. The order in which report lines appear in the config files, is the order in which the reports will be given in the output. Rearranging the lines in these configuration files reorders them in the output. For example, in the above example, transfers-by-type will be the last report given in the output.
Many reports can partly be customized with the configuration files explained in the previous section. For example, consider this DNS super service configuration:
# Report configuration for the DNS super service
# Top reports
top-requesting-hosts hosts_to_show=10
top-requesting-hosts-by-method hosts_to_show=10 method='recurs'
top-requesting-hosts-by-method hosts_to_show=10 method='nonrec'
top-requested-names names_to_show=10
top-requested-names-by-method names_to_show=10 method='recurs'
top-requested-names-by-method names_to_show=10 method='nonrec'
requesttype-distribution
requesttype-distribution-by-method method='recurs'
requesttype-distribution-by-method method='nonrec'
# By Day reports
requests-by-period period=1d
requests-by-period-by-method period=1d method='recurs'
requests-by-period-by-method period=1d method='nonrec'
# By Hour reports
requests-by-period period=1h
requests-by-period-by-method period=1h method='recurs'
requests-by-period-by-method period=1h method='nonrec'
All fifteen reports are selected, but furthermore, for the reports giving a Top X output the number X can be defined. With the above configuration the report top-requesting-hosts will give a Top 10.
These reports are generated from only eight report specifications. The use of parameters (period, method, hosts_to_show, and names_to_show) makes this possible. This is one of the new powerful features of the new XML based engine.
Important: all variable settings must be placed on the same line as the report name!
A more exotic example is taken from the WWW super service configuration file:
top-referers-by-page referer_to_show=5 page_to_show=10 referer_exclusion='^-$'
In this example a Perl regular expression is used as content for the referer_exclusion variable. This expression matches all referers "-". Such referers are found in the log file in cases when e.g. the URL of your web page was typed by the client user. (When users visit your page by clicking on a link in a page, refering to your page, the page linked from will be given in the referer field.) All referers that match "-" will be excluded from the analysis.
This new release starts a complete new branch of Lire. The report generation and specification process has completely been rewritten to make use of XML technology. Reports are specified in XML, but variable setting is done in plain ASCII format. The previous report specification was a Perl script that had to know both the input format as well as the output format. With the new XML format, the implementation is separated from the specification, and one does not have to know the input and output format; just the information that can be processed.
Thus, when customizing reports on low level, you need to know XML a bit. An example report taken from the <prefix>/share/lire/reports/firewall directory:
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE lire:report-spec PUBLIC
"-//LogReport.ORG//DTD Lire Report Specification Markup Language V1.0//EN"
"">
<lire:report-spec xmlns:
<lire:title>Top Bytes per From-IP Report</lire:title>
<lire:description>
<para>
This report lists the IP addresses sending the highest data volume.
</para>
</lire:description>
<lire:param-spec>
<lire:param
<lire:description>
<para>This parameter controls the number of sending IP adresses to
display in the report.
</para>
</lire:description>
</lire:param>
</lire:param-spec>
<lire:display-spec>
<lire:title>Volume per sending IP, Top $ips_to_show</lire:title>
</lire:display-spec>
<lire:report-calc-spec>
<lire:group
<lire:field
<lire:sum
</lire:group>
</lire:report-calc-spec>
</lire:report-spec>
First thing you should notice it that almost every XML element in this report starts with lire:. This is used to assign a namespace to that element. Every element with the lire namespace, is defined in XML DTD (empty link!), which can be browsed at.
All other elements are supposed to belong to the DocBook XML 4.2 DTD. Such as the <para> element on the tenth line of the example.
If you want to change the title that appears in the report, you need to change the <lire:title> content in the <lire:display-spec>. Keep in mind that strings starting with "$" are Perl variables where the name corresponds to one of the specified parameters in the <lire:param-spec> section.
The tricky thing is that you have take the correct <lire:title> element. You need the element which is content of the <lire:display-spec> node. The latter element contains the information that is displayed in the output report. The first <lire:title> element contains the report title that is used in documentation of the Lire software.
The next example shows a fragment of the requests-by-result WWW report specification. One can see that the <lire:display-spec> now not only outputs a title but also some more explanation. Note that all content within the <lire:description> element is not using the lire namespace, and thus is DocBook content.
<lire:display-spec>
<lire:title>Requests By HTTP Result</lire:title>
<lire:description>
<para>
The most common HTTP status codes are given below:
<variablelist>
<varlistentry>
<term>200</term>
<listitem>
<para>OK (The request has succeeded.)</para>
</listitem>
</varlistentry>
<!-- rest is cut out -->
</variablelist>
</para>
</lire:description>
</lire:display-spec>
The report output will look something like (only top part shown):
Requests By HTTP Result
The most common HTTP status codes are given below:
200 OK (The request has succeeded.)
201 Created (The request has been fulfilled and resulted in a new resource being created.)
206 Partial Content (The server has fulfilled the
Most reports have graphics associated with the data. These images are generated from the data and the report specification also defines the format in which the image is plotted. Take for example the following snippet from the FTP transfers-by-type report.
For this report the data is visualized with a pie chart as can be seen from the @charttype attribute in the above code. The result looks like:
By changing the chart type to bars as in 'charttype="bars"' the output changes to:
Note that the report title contains a bug. The report is on the transfer type not the file type. This bug has already been reported.
More specific information about the XML language used for report specification can be found at the LogReport web site. You will see that the language is quite extensive, and for now i can suggest that you will use report specification coming with the distribution as your main guide.
Elements that have not been covered in this article, but that are used in those reports are used for parameter specification (<lire:param-spec>) and calculation of the outputed data (<lire:report-calc-spec>). Especially, the latter has many options and use prior knowledge of the internal format (called DLF) in which the log data is stored. This will be covered in a future article.
This article introduced the XML based report generation engine and explained how you can customize the reports you get. More information can be found at the LogReport web site:.
If you want to get in touch with the LogReport team, you can join IRC. The developers can often be found at the #logreport channel at the OpenProjects.org IRC network. Questions, comments, and support requests are welcomed. If you prefer email, you can reach the team on the public mailing list [email protected]. | http://www.linuxfocus.org/English/November2001/article218.meta.shtml | CC-MAIN-2014-41 | refinedweb | 1,696 | 54.42 |
Learn how to use the lambda, map, filter, and reduce functions in Python to transform data structures.
“Object-oriented programming makes code understandable by encapsulating moving parts. Functional programming makes code understandable by minimizing moving parts.” - Michael Feathers
There are multiple programming languages in the world, and just as many categories in which they can be classified. A programming paradigm is one such way, which classifies programming languages based on their features or coding style. A programming paradigm is essentially a style, or a way, of programming.
Most of the time, we think of Python as an object-oriented language, where we model our data in the form of classes, objects, and methods. However, several alternatives to OOP exist, and functional programming is one of them.
Here are some of the conventional programming paradigms prevalent in the industry: imperative, procedural, object-oriented, declarative, and functional programming.
Functional Programming (FP)
As per Wikipedia, Functional programming is a programming paradigm, a style of building the structure and elements of computer programs, that treats computation as the evaluation of mathematical functions and avoids changing state and mutable data.
The above definition might sound confusing at first, but it essentially tries to put forward the following aspects:
- FP relies on functions, and everything is done using functions. Moreover, FP focuses on defining what to do, rather than on how to perform some action. The functions of this paradigm are treated as first-class functions. This means functions are treated like any other objects, and we can assign them to variables or pass them into other functions (a short sketch follows this list).
- The data used in functional programming must be immutable, i.e., it should never change. This means that if we need to modify data in a list, we need to create a new list with the updated values rather than manipulating the existing one.
- The programs written in FP should be stateless. A stateless function has no knowledge about its past. Functional programs should carry out every task as if they are performing it for the first time. Simply put, the functions are only dependent on the data passed to them as arguments and never on the outside data.
- Laziness is another property of FP wherein we don’t compute things that we don’t have to. Work is only done on demand.
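Here is a minimal sketch of the first two points; the function and variable names are illustrative:

def shout(text):
    return text.upper()

# Functions are first-class: they can be assigned to variables
speak = shout
print(speak('hello'))   # prints 'HELLO'

# Immutability: build a new list instead of mutating the old one
old_list = [1, 2, 3]
new_list = old_list + [4]
print(old_list, new_list)   # prints [1, 2, 3] [1, 2, 3, 4]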
If this makes sense now, here is a nice comparison chart between OOP and FP which will make things even more apparent.
Python provides features like lambda, filter, map, and reduce that can easily demonstrate the concept of Functional Programming. All the code used in this article can be accessed from the associated Github Repository or can be viewed on my_binder.
The Lambda Expression
Lambda expressions, also known as “anonymous functions”, allow us to create and use a function in a single line. They are useful when we need a short function that will be used only once. They are mostly used in conjunction with the map and filter functions and with sorting methods, which we will see later in the article.
Let’s write a function in Python that computes the value of 5x + 2. The standard approach would be to define a function.
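A minimal definition, using the name f to match the discussion below:

def f(x):
    return 5*x + 2

print(f(3))   # prints 17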
Now let’s compute the same expression using Lambda functions. To create a lambda expression, we type in the keyword lambda, followed by the inputs. Next, we enter a colon, followed by the expression that will be the return value.
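In its bare form, the equivalent lambda expression is:

lambda x: 5*x + 2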
This lambda function will take the input x and return 5x + 2, just like the earlier function f. There is a problem, however. Lambda is not the name of the function. It is a Python keyword that says that what follows is an anonymous function. So how do we use it? One way is to give it a name.
Let us call this lambda expression g. Now, we can use this like any other function.
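For example, naming the expression g and calling it like a regular function:

g = lambda x: 5*x + 2
print(g(3))   # prints 17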
The examples below show how lambda functions can be used with or without input values.

Lambda expression with multiple inputs.
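A sketch with multiple inputs; the name add and the sample values are illustrative:

add = lambda x, y, z: x + y + z
print(add(2, 3, 5))   # prints 10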
Lambda expression without inputs.
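A sketch without inputs; the message string is illustrative:

greeting = lambda: 'Hello, world!'
print(greeting())   # prints 'Hello, world!'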
Now, let’s look at a common use of a lambda function where we do not assign it a name. Let’s say we have a list of the first seven U.S. Presidents and we’d like to sort this list by their last name. We shall create a lambda function that extracts the last name and uses that as the sorting value.
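A sketch of this example; sorted() uses the lambda to pull out the last word of each name as the sort key:

presidents = ['George Washington', 'John Adams', 'Thomas Jefferson',
              'James Madison', 'James Monroe', 'John Quincy Adams',
              'Andrew Jackson']
print(sorted(presidents, key=lambda name: name.split()[-1]))
# prints ['John Adams', 'John Quincy Adams', 'Andrew Jackson',
#         'Thomas Jefferson', 'James Madison', 'James Monroe', 'George Washington']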
The map, filter, and reduce functions simplify the job of working with lists. When used along with lambda expressions they help to make our lives easier by accomplishing a lot in a single line of code.
The Map Function
The map function applies a function to every item of an iterable, yielding the results. When used with lists, map transforms a given list into a new list by applying the function to all the items in the input list.
Syntax
map(function_to_apply, iterables)
Usage
Suppose we have a function that computes the volume of a cube, given the value of its edge (a).
def volume(a):
    """Volume of a cube with edge 'a'."""
    return a**3
What if we need to compute the volumes for many different cubes with different edge lengths?
# Edge length in cm
edges = [1, 2, 3, 4, 5]
There are two ways to do this — one by using the direct method and the other by using the map function.
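A sketch of the direct method, reusing the volume function and the edges list from above:

volumes = []
for edge in edges:
    volumes.append(volume(edge))
print(volumes)   # prints [1, 8, 27, 64, 125]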
Now let’s see how we can accomplish this task using a single line of code with the map function.
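A sketch of the map version:

volumes = map(volume, edges)
print(volumes)         # prints something like <map object at 0x7f8...>
print(list(volumes))   # prints [1, 8, 27, 64, 125]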
The map function takes in two arguments. The first is a function, and the second is our list, tuple, or any other iterable object. Here, the map function applies the volume function to each element in the list.
An important thing to note here is that the output of the map function is not a list but a map object, which is an iterator over the results. We can, however, turn this into a list by passing the map to the list constructor.
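Putting that together (the printed address of the map object will vary):

volumes = map(volume, edges)
print(volumes)        # <map object at 0x...>, an iterator
print(list(volumes))  # [1, 8, 27, 64, 125]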
Example
Let’s now see an example which demonstrates the use of a lambda function with the map function. We have a list of tuples containing the names and heights of five people. Each height is in centimeters, and we need to convert it into feet.
We will first write a converter function using a lambda expression which will accept a tuple as the input and will also return a tuple.
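A sketch with made-up names and heights (one foot is 30.48 cm):

people = [("Alice", 152.0), ("Bob", 175.5), ("Cara", 160.0),
          ("Dan", 180.0), ("Eve", 168.0)]  # (name, height in cm)

# Accepts a (name, cm) tuple and returns a (name, feet) tuple
cm_to_feet = lambda person: (person[0], round(person[1] / 30.48, 2))

print(list(map(cm_to_feet, people)))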
The Filter Function
The ‘filter’ function constructs an iterator from those elements of an iterable for which a function returns true. In other words, the filter function is used to select certain pieces of data from a list, tuple, or other collection of data, hence the name.
Syntax
filter(function, iterable)
Usage
Let’s see an example where we want to get the list of all numbers that are greater than 5, from a given input list.
We first create a lambda function that tests the input to see if it is above 5 or not. Next, we pass in the list of data. The filter function will only return the data for which the function is true. Once again, the return value is not a list, but a filter object. This object has to be passed to a list constructor to get the output.
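A sketch with illustrative data:

data = [1, 3, 5, 7, 9, 11]
above_5 = filter(lambda x: x > 5, data)
print(list(above_5))  # [7, 9, 11]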
Example
An interesting use case of the ‘filter’ function arises when the data contains missing values. Here is a list containing some of the countries in Asia. Notice that several of the strings are empty. We’ll use the filter function to remove these missing values, passing None as the first argument and the list of data as the second argument, as before.
This filters out all values that are treated as false in a boolean setting.
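A sketch (the country names are illustrative):

countries = ["India", "", "China", "", "", "Japan", "Nepal", ""]
print(list(filter(None, countries)))  # ['India', 'China', 'Japan', 'Nepal']

Empty strings are falsy, so filter(None, ...) drops them.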
The Reduce Function
The ‘reduce’ function is a bit unusual; as of Python 3, it is no longer a built-in function. Instead, it has been moved to the functools module. The ‘reduce’ function transforms a given list into a single value by applying a function cumulatively to the items of the sequence, from left to right.
Syntax
reduce(func, seq)
where reduce continually applies the function func() to the sequence seq and returns a single value.
Usage
Let’s illustrate the working of the reduce function with the help of a simple example that computes the product of a list of integers.
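A minimal version:

from functools import reduce

numbers = [1, 2, 3, 4, 5]
product = reduce(lambda x, y: x * y, numbers)
print(product)  # 120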
The calculation proceeds pairwise from the left: ((((1*2)*3)*4)*5) = 120.
However, Guido van Rossum, the creator of Python, has this to say about the ‘reduce’ function:
Use functools.reduce if you really need it; however, 99% of the time an explicit for loop is more readable.
The above program can also be written with an explicit for loop:
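For instance:

numbers = [1, 2, 3, 4, 5]
product = 1
for num in numbers:
    product *= num
print(product)  # 120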
Example
The ‘reduce’ function can determine the maximum of a list containing integers in a single line of code. Note that there is a built-in function called max() in Python which is generally used for this purpose, as max(list_name).
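A sketch with illustrative values:

from functools import reduce

values = [23, 49, 6, 32]
print(reduce(lambda a, b: a if a > b else b, values))  # 49
print(max(values))                                     # 49, via the built-in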
List Comprehensions: Alternative to map, filter and reduce
List comprehension is a way to define and create lists in Python. In most cases, list comprehensions let us create lists in a single line of code without worrying about initializing lists or setting up loops.
It is also a substitute for the lambda function as well as the functions map(), filter(), and reduce(). Some people find it a more Pythonic way of writing functions and easier to understand.
Syntax
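The general form is shown below; the if clause is optional and filters items:

[expression for item in iterable if condition]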
Usage
Let’s try to replicate the examples used in the above sections with list comprehensions.
- List Comprehensions vs. Map function
We used map function in conjunction with the lambda function to convert a list of heights from cm to feet. Let’s use list comprehensions to achieve the same results.
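With the same illustrative people list as in the map example:

people = [("Alice", 152.0), ("Bob", 175.5), ("Cara", 160.0),
          ("Dan", 180.0), ("Eve", 168.0)]
print([(name, round(cm / 30.48, 2)) for name, cm in people])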
- List Comprehensions vs. Filter function
We used the filter function to remove the missing values from a list of Asian countries. Let’s use list comprehensions to get the same results.
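With the same illustrative countries list as in the filter example:

countries = ["India", "", "China", "", "", "Japan", "Nepal", ""]
print([country for country in countries if country])  # ['India', 'China', 'Japan', 'Nepal']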
- List Comprehensions vs. Reduce function
Similarly, we can determine the maximum of a list containing integers quickly with list comprehension instead of using lambda and reduce.
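With the same illustrative values as in the reduce example:

values = [23, 49, 6, 32]
print(max(n for n in values))  # 49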
We have used a generator expression above, which is similar to a list comprehension but with round brackets instead of square ones.
List comprehensions are a broad topic and deserve an article of their own. Keeping this in mind, here is an article that I wrote that covers not only list comprehensions but also dictionary, set, and generator comprehensions in Python.
Conclusion
The map, filter, and reduce functions significantly simplify the process of working with lists and other iterable collections of data. Some people have reservations about using them, especially since list comprehensions appear to be more friendly, yet their usefulness cannot be ignored. | https://parulpandey.com/2019/07/12/elements-of-functional-programming-in-python/ | CC-MAIN-2022-21 | refinedweb | 1,772 | 63.19 |
Note - The Minecraft Turtle is now part of the minecraft-stuff module, see minecraft-stuff.readthedocs.io for documentation and install instructions.
The MinecraftTurtle is really easy to install and use, clone it and run the setup program.
If you want to get started quickly though, I would download the complete code from my github, as it contains some examples and all the files you need to have a go.
Download Minecraft Turtle Code:
cd ~
git clone
Install the library
cd ~/minecraft-turtle
python setup.py install
python3 setup.py install
Try the 'squares' example:
Start up Minecraft and load a game.
cd ~/minecraft-turtle/examples
python example_squares.py
Create your own turtle program:
The turtle is really easy to program. Open IDLE, create a new file, and save it.
from mcturtle import minecraftturtle
from mcpi import minecraft
from mcpi import block  # assumed import; the original snippet is cut off, and block is used below (e.g. block.DIRT.id)
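Putting it together, a minimal first program might look like this; it is a sketch, with the turtle name steve and the distance chosen for illustration (penblock matches the usage in the comments at the end of this page):

from mcturtle import minecraftturtle
from mcpi import minecraft
from mcpi import block

# connect to the running Minecraft game
mc = minecraft.Minecraft.create()

# start the turtle at the player's position
pos = mc.player.getPos()
steve = minecraftturtle.MinecraftTurtle(mc, pos)

# choose the block the turtle draws with, then walk forward
steve.penblock(block.DIRT.id)
steve.forward(5)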
I have been working through the Adventures in Minecraft book with my son, and this looked cool.
I have saved minecraftturtle.py to the mcpi folder (on PC) where the minecraft.py, blocks.py etc. are.
I get this error when running:
import minecraftturtle
ImportError: No module named minecraftturtle
Does the file need to be compiled, as the others seem to all have .pyc version too.
If you want to use import minecraftturtle, then you need to have it in the same folder as your .py file. If you want to keep it in the mcpi folder, then do:
import mcpi.minecraftturtle as minecraftturtle
What Alexander said.... Thanks
oh my goodness that is awesome. i just ran the spiral example!
I was wondering if there is a way to change bricks.
steve.penblock(block.DIRT.id) | https://www.stuffaboutcode.com/2014/05/minecraft-graphics-turtle.html | CC-MAIN-2019-35 | refinedweb | 276 | 76.52 |
Access control in Azure Data Lake Storage Gen1
Azure Data Lake Storage Gen1 implements an access control model that derives from HDFS, which in turn derives from the POSIX access control model. This article summarizes the basics of the access control model for Data Lake Storage Gen1.
Access control lists on files and folders
There are two kinds of access control lists (ACLs), Access ACLs and Default ACLs.
Access ACLs: These control access to an object. Files and folders both have Access ACLs.
Default ACLs: A "template" of ACLs associated with a folder that determine the Access ACLs for any child items that are created under that folder. Files do not have Default ACLs.
Both Access ACLs and Default ACLs have the same structure.
Note
Changing the Default ACL on a parent does not affect the Access ACL or Default ACL of child items that already exist.
Permissions
The permissions on a filesystem object are Read, Write, and Execute, and they can be used on both files and folders.
Short forms for permissions
RWX is used to indicate Read + Write + Execute. A more condensed numeric form exists in which Read=4, Write=2, and Execute=1, the sum of which represents the permissions. Here are some examples: 7 (4+2+1) corresponds to RWX, 5 (4+1) to R-X, and 0 to --- (no permissions).
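As a quick illustration, here is a small Python sketch (the function name is arbitrary) that expands each digit of such a value into its RWX triple:

def perms_to_string(p):
    """Render a 0-7 permission value as an rwx triple, e.g. 5 -> 'r-x'."""
    return (("r" if p & 4 else "-") +
            ("w" if p & 2 else "-") +
            ("x" if p & 1 else "-"))

# 753: rwx for owning user, r-x for owning group, -wx for other
print("".join(perms_to_string(int(d)) for d in "753"))  # rwxr-x-wx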
Permissions do not inherit
In the POSIX-style model that's used by Data Lake Storage Gen1, permissions for an item are stored on the item itself. In other words, permissions for an item cannot be inherited from the parent items.
Common scenarios related to permissions
Following are some common scenarios to help you understand which permissions are needed to perform certain operations on a Data Lake Storage Gen1 account.
Note
Write permissions on the file itself are not required to delete it, as long as the caller has Write + Execute permissions on the parent folder and Execute permissions on the chain of folders leading to the file.
Users and identities
Every file and folder has distinct permissions for these identities:
- The owning user
- The owning group
- Named users
- Named groups
- All other users
The identities of users and groups are Azure Active Directory (Azure AD) identities. So unless otherwise noted, a "user," in the context of Data Lake Storage Gen1, can either mean an Azure AD user or an Azure AD security group.
The super-user
A super-user has the most rights of all the users in the Data Lake Storage Gen1 account. A super-user:
- Has RWX Permissions to all files and folders.
- Can change the permissions on any file or folder.
- Can change the owning user or owning group of any file or folder.
All users that are part of the Owners role for a Data Lake Storage Gen1 account are automatically a super-user.
The owning user
The user who created the item is automatically the owning user of the item. An owning user can:
- Change the permissions of a file that is owned.
- Change the owning group of a file that is owned, as long as the owning user is also a member of the target group.
Note
The owning user cannot change the owning user of a file or folder. Only super-users can change the owning user of a file or folder.
The owning group
Background
In POSIX ACLs, every user is associated with a "primary group." For example, user "alice" might belong to the "finance" group. Alice might also belong to multiple groups, but one group is always designated as her primary group. In POSIX, when Alice creates a file, the owning group of that file is set to her primary group, which in this case is "finance." The owning group otherwise behaves similarly to assigned permissions for other users/groups.
Because there is no "primary group" associated with users in Data Lake Storage Gen1, the owning group is assigned as described below.
Assigning the owning group for a new file or folder
- Case 1: The root folder "/". This folder is created when a Data Lake Storage Gen1 account is created. In this case, the owning group is set to an all-zero GUID. This value does not permit any access. It is a placeholder until such time a group is assigned.
- Case 2 (Every other case): When a new item is created, the owning group is copied from the parent folder.
Changing the owning group
The owning group can be changed by:
- Any super-users.
- The owning user, if the owning user is also a member of the target group.
Note
The owning group cannot change the ACLs of a file or folder.
For accounts created on or before September 2018, the owning group was set to the user who created the account in the case of the root folder for Case 1, above. A single user account is not valid for providing permissions via the owning group, thus no permissions are granted by this default setting. You can assign this permission to a valid user group.
Access check algorithm
The following pseudocode represents the access check algorithm for Data Lake Storage Gen1 accounts.
def access_check( user, desired_perms, path ) :
    # access_check returns true if user has the desired permissions on the path, false otherwise
    # user is the identity that wants to perform an operation on path
    # desired_perms is a simple integer with values from 0 to 7 ( R=4, W=2, X=1). User desires these permissions
    # path is the file or folder
    # Note: the "sticky bit" is not illustrated in this algorithm

    # Handle super users.
    if (is_superuser(user)) :
        return True

    # Handle the owning user. Note that mask IS NOT used.
    entry = get_acl_entry( path, OWNER )
    if (user == entry.identity) :
        return ( (desired_perms & entry.permissions) == desired_perms )

    # Handle the named users. Note that mask IS used.
    entries = get_acl_entries( path, NAMED_USER )
    for entry in entries:
        if (user == entry.identity) :
            mask = get_mask( path )
            return ( (desired_perms & entry.permissions & mask) == desired_perms )

    # Handle named groups and owning group. Note that mask IS used.
    member_count = 0
    perms = 0
    entries = get_acl_entries( path, NAMED_GROUP | OWNING_GROUP )
    for entry in entries:
        if (user_is_member_of_group(user, entry.identity)) :
            member_count += 1
            perms |= entry.permissions
    if (member_count > 0) :
        mask = get_mask( path )
        return ( (desired_perms & perms & mask) == desired_perms )

    # Handle other
    perms = get_perms_for_other(path)
    mask = get_mask( path )
    return ( (desired_perms & perms & mask) == desired_perms )
The mask
As illustrated in the Access Check Algorithm, the mask limits access for named users, the owning group, and named groups.
Note
For a new Data Lake Storage Gen1 account, the mask for the Access ACL of the root folder ("/") defaults to RWX.
The sticky bit
The sticky bit is a more advanced feature of a POSIX filesystem. In the context of Data Lake Storage Gen1, it is unlikely that the sticky bit will be needed. In summary, if the sticky bit is enabled on a folder, a child item can only be deleted or renamed by the child item's owning user.
The sticky bit is not shown in the Azure portal.
Default permissions on new files and folders
When a new file or folder is created under an existing folder, the Default ACL on the parent folder determines:
- A child folder’s Default ACL and Access ACL.
- A child file's Access ACL (files do not have a Default ACL).
umask
When creating a file or folder, umask is used to modify how the default ACLs are set on the child item. umask is a 9-bit value on parent folders that contains an RWX value for owning user, owning group, and other.
The umask for Azure Data Lake Storage Gen1 is a constant value set to 007. This value translates to a umask of 0 for the owning user (no permissions removed), 0 for the owning group (no permissions removed), and 7 for other (Read, Write, and Execute all removed).
The umask value used by Azure Data Lake Storage Gen1 effectively means that the value for other is never transmitted by default on new children - regardless of what the Default ACL indicates.
The following pseudocode shows how the umask is applied when creating the ACLs for a child item.
def set_default_acls_for_new_child(parent, child):
    child.acls = []
    for entry in parent.acls :
        new_entry = None
        if (entry.type == OWNING_USER) :
            new_entry = entry.clone(perms = entry.perms & (~umask.owning_user))
        elif (entry.type == OWNING_GROUP) :
            new_entry = entry.clone(perms = entry.perms & (~umask.owning_group))
        elif (entry.type == OTHER) :
            new_entry = entry.clone(perms = entry.perms & (~umask.other))
        else :
            new_entry = entry.clone(perms = entry.perms)
        child.acls.add( new_entry )
Common questions about ACLs in Data Lake Storage Gen1
Do I have to enable support for ACLs?
No. Access control via ACLs is always on for a Data Lake Storage Gen1 account.
Which permissions are required to recursively delete a folder and its contents?
- The parent folder must have Write + Execute permissions.
- The folder to be deleted, and every folder within it, requires Read + Write + Execute permissions.
Note
You do not need Write permissions to delete files in folders. Also, the root folder "/" can never be deleted.
Who is the owner of a file or folder?
The creator of a file or folder becomes the owner.
Which group is set as the owning group of a file or folder at creation?
The owning group is copied from the owning group of the parent folder under which the new file or folder is created.
I am the owning user of a file but I don’t have the RWX permissions I need. What do I do?
The owning user can change the permissions of the file to give themselves any RWX permissions they need.
When I look at ACLs in the Azure portal I see user names but through APIs, I see GUIDs, why is that?
Entries in the ACLs are stored as GUIDs that correspond to users in Azure AD. The APIs return the GUIDs as is. The Azure portal tries to make ACLs easier to use by translating the GUIDs into friendly names when possible.
Why do I sometimes see GUIDs in the ACLs when I'm using the Azure portal?
A GUID is shown when the user doesn't exist in Azure AD anymore. Usually this happens when the user has left the company or if their account has been deleted in Azure AD.
Does Data Lake Storage Gen1 support inheritance of ACLs?
No, but Default ACLs can be used to set ACLs for child files and folders newly created under the parent folder.
Where can I learn more about POSIX access control model?
- POSIX Access Control Lists on Linux
- HDFS permission guide
- POSIX FAQ
- POSIX 1003.1 2008
- POSIX 1003.1 2013
- POSIX 1003.1 2016
- POSIX ACL on Ubuntu
- ACL using access control lists on Linux
dircntl()
Control an open directory
Synopsis:
#include <dirent.h> int dircntl( DIR * dir, int cmd, ... );
Since:
BlackBerry 10.0.0
Arguments:
- dir
- Provide control for this directory.
- cmd
- At least the following values are defined in <dirent.h>:
- D_GETFLAG — retrieve the flags associated with the directory referenced by dir. For more information, see " Flag values," below.
- D_SETFLAG — set the flags associated with the directory referenced by dir to the value given as an additional argument. The new value can be any combination of the flags described in " Flag values," below.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The dircntl() function provides control over the open directory referenced by the dir argument. This function behaves in a manner similar to the file control function, fcntl().
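For example, turning on both of the flags described below might look like this (a sketch; error handling is omitted):

#include <dirent.h>

/* Turn on name filtering and extra stat information for an open directory. */
int enable_dir_flags(DIR *dp)
{
    return dircntl(dp, D_SETFLAG, D_FLAG_FILTER | D_FLAG_STAT);
}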
Flag values
Returns:
The return value depends on the value of cmd:
Examples:
#include <stdio.h>
#include <stdlib.h>
#include <dirent.h>

int main(int argc, char **argv)
{
    DIR *dp;
    int ret;

    if(!(dp = opendir("/"))) {
        exit(EXIT_FAILURE);
    }

    /* Display the flags that are set on the directory by default */
    if((ret = dircntl(dp, D_GETFLAG)) == -1) {
        exit(EXIT_FAILURE);
    }

    if(ret & D_FLAG_FILTER) {
        printf("Directory names are filtered\n");
    } else {
        printf("Directory names are not filtered\n");
    }

    if(ret & D_FLAG_STAT) {
        printf("Servers asked for extra stat information\n");
    } else {
        printf("Servers not asked for extra stat information\n");
    }

    closedir(dp);
    return 0;
}
Classification:
Last modified: 2014-06-24
Here’s an example:
case class User(val name: String, val age: Int)

val userBase = List(new User("Travis", 28),
                    new User("Kelly", 33),
                    new User("Jennifer", 44),
                    new User("Dennis", 23))

val twentySomethings = for (user <- userBase if (user.age >= 20 && user.age < 30))
  yield user.name  // i.e. add this to a list

twentySomethings.foreach(name => println(name))  // prints Travis Dennis
The for loop used with a yield statement actually creates a List. Because we said yield user.name, it’s a List[String]. user <- userBase is our generator and if (user.age >= 20 && user.age < 30) is a guard that filters out users who are not in their 20s.
Here is a more complicated example using two generators. It computes all pairs of numbers between 0 and n-1 whose sum is equal to a given value v:
def foo(n: Int, v: Int) =
  for (i <- 0 until n;
       j <- i until n if i + j == v)
  yield (i, j)

foo(10, 10) foreach {
  case (i, j) => print(s"($i, $j) ")  // prints (1, 9) (2, 8) (3, 7) (4, 6) (5, 5)
}
Here n == 10 and v == 10. On the first iteration, i == 0 and j == 0 so i + j != v and therefore nothing is yielded. j gets incremented 9 more times before i gets incremented to 1. Without the if guard, this would simply print the following:
(0, 0) (0, 1) (0, 2) (0, 3) (0, 4) (0, 5) (0, 6) (0, 7) (0, 8) (0, 9) (1, 1) ...
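Incidentally, the compiler translates a comprehension like the first example into ordinary method calls on the collection; roughly (a sketch assuming the standard desugaring of a generator with a guard and a yield, reusing userBase from above):

val twentySomethings =
  userBase.withFilter(user => user.age >= 20 && user.age < 30)
          .map(user => user.name)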
The QRasterPaintEngine class enables hardware acceleration of painting operations in Qtopia Core. More...
#include <QRasterPaintEngine>
This class is under development and is subject to change.
Inherits QPaintEngine.
This class was introduced in Qt 4.2.
The QRasterPaintEngine class enables hardware acceleration of painting operations in Qtopia Core.
Note that this functionality is only available in Qtopia Core.
In Qtopia Core, painting is a pure software implementation. But starting with Qtopia Core 4.2, an accelerated graphics driver can be added, making Qtopia Core aware of the accelerated hardware.
Note: The QRasterPaintEngine class does not support 8-bit images. Instead, they need to be converted to a supported format, such as QImage::Format_ARGB32_Premultiplied.
See the Adding an Accelerated Graphics Driver in Qtopia Core documentation for details.
See also QCustomRasterPaintDevice and QPaintEngine.
Creates a raster-based paint engine with the complete set of paint engine features and capabilities.
Destroys this paint engine.
Vasily Zakharov commented on HARMONY-3702:
------------------------------------------
I've checked the issue on the current build. The results are as follows:
WriterTest works identically on RI, Harmony/IBMVM and Harmony/DRLVM.
ReaderTest works differently on RI, Harmony/IBMVM and Harmony/DRLVM - the outputs are similar,
but all three are different.
So, though HARMONY-3307 is now resolved, the problem still exists.
> [classlib][luni] Reader and Writer convert characters incorrectly
> -----------------------------------------------------------------
>
> Key: HARMONY-3702
> URL:
> Project: Harmony
> Issue Type: Bug
> Components: Classlib
> Reporter: Vasily Zakharov
> Attachments: test.dat
>
>
> java.io.Reader converts bytes to characters differently than RI does. Also, java.io.Writer
converts characters to bytes differently than RI does.
> The attached test.dat file contains random test data and must be placed to the current
directory. ReaderTest below reads that file with FileReader and then dumps it to standard
output by converting each character to int. WriterTest reads the test.dat file with FileInputStream,
converts each byte to character by casting and then dumps the resulting characters to standard
output by OutputStreamWriter.
> public class ReaderTest {
> public static void main(String args[]) throws Exception {
> char[] buffer = new char[0x100000];
> java.io.Reader reader = new java.io.FileReader("test.dat");
> int length = reader.read(buffer, 0, buffer.length);
> for (int i = 0; i < length; i++) {
> System.out.println((int) buffer[i]);
> }
> }
> }
> public class WriterTest {
> public static void main(String args[]) throws Exception {
> byte[] buffer = new byte[0x100000];
> java.io.InputStream iStream = new java.io.FileInputStream("test.dat");
> int length = iStream.read(buffer, 0, buffer.length);
> char[] charBuffer = new char[length];
> for (int i = 0; i < length; i++) {
> charBuffer[i] = (char) buffer[i];
> }
> java.io.Writer writer = new java.io.OutputStreamWriter(System.out);
> writer.write(charBuffer, 0, length);
> writer.close();
> }
> }
> In both cases, output files on RI and on Harmony are different:
> $ RI/bin/java ReaderTest > reader.ri
> $ HY/bin/java ReaderTest > reader.hy
> $ diff --binary -q reader.ri reader.hy
> Files reader.ri and reader.hy differ
> $ RI/bin/java WriterTest > writer.ri
> $ HY/bin/java WriterTest > writer.hy
> $ diff --binary -q writer.ri writer.hy
> Files writer.ri and writer.hy differ
> My investigations show that the problem is in Reader/Writer, not in InputStream/OutputStream.
Also, I've tried other implementations of Reader/Writer and they share the same problem.
> The problem was discovered on Windows XP/IA-32 but probably affects other platforms too.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/harmony-commits/200707.mbox/%3C27067230.1185142352216.JavaMail.jira@brutus%3E | CC-MAIN-2015-11 | refinedweb | 415 | 54.29 |