I want to use the string class in my program but I keep getting the following errors:
process.cpp:10: error: 'string' has not been declared
process.cpp:10: error: prototype for 'Process::Process(int, int, int)' does not match any in class 'Process'
process.h:14: error: candidates are: Process::Process(const Process&)
process.h:16: error: Process::Process(int, char*, int)
process.cpp: In constructor 'Process::Process(int, int, int)':
process.cpp:14: error: invalid conversion from 'int' to 'char*'
make: *** [process.o] Error 1
I don't understand why I'm getting this since my code doesn't seem to differ from the examples I found in tutorials too much:
/*
 * process.h
 *
 * Created on: Oct 13, 2009
 * Author: NTUser1
 */
#ifndef PROCESS_H_
#define PROCESS_H_

#include "thread.h"
#include <string>
#include <iostream>
using namespace std;

class Process : public Thread {
public:
    Process(int at, string process, int bt);
    int arrivalTime;   // arrival time of a process
    string name;       // process name, e.g., P0, P1, P2
    int burstTime;     // CPU burst time of a process
    int remainTime;    // the remaining burst time of a process
    void run();
private:
};
/*
 * process.cpp
 *
 * Created on: Oct 13, 2009
 * Author: NTUser1
 */
#include "process.h"

Process::Process(int at, string process, int bt){
    arrivalTime = at;
    burstTime = bt;    //etc for the rest of the vars
    name = process;
}

void Process::run(){
    state = RUNNING;
    sleep(burstTime);
    stop();
}
From: Juergen Hunold (juergen.hunold_at_[hidden])
Date: 2007-05-29 15:53:19
Hi Lawrence !
On Tuesday, 29 May 2007, you wrote:
> Thanks for the quick response. That was definitely helpful. One
> thing I noticed, after doing this, is that the environment variables
> are treated as case sensitive when using os.environ. While this
> makes sense in some contexts, it doesn't in the context of Windows,
> since Windows is a case insensitive OS. Is there any way to specify
> a case-insensitive match using os.environ?
I doubt this ;-)) (B)jam is definitely unix-centric...
> Also, I got it working when putting it in my Jamroot. Now, if I want
> to put it in site-config.jam as you suggested, what other steps are
> there so that the Jamroot file can use the contents of
> site-config.jam. I tried putting the import os; and local BOOST_ROOT
> = [ os.environ BOOST_ROOT ] ; into the site-config file, but my
> Jamroot file was not able to see the variables. What I gathered from
> reading is that all jam files have their own namespaces. How do you
> "export" something into the global namespace then?
You only have to follow the "recipes" link in my link.
It is all documented in the fine manual:
I hope this helps !
25 May 2012 09:44 [Source: ICIS news]
SINGAPORE (ICIS)--Japanese refiner Cosmo Oil expects to restart its 120,000 bbl/day No 2 crude distillation unit (CDU) at the firm's Chiba refinery by mid-July, a company source said.
The No 2 CDU at the 220,000 bbl/day refinery was shut on 3 May this year.
The unit was previously restarted on 30 March this year for the first time since the whole refinery was shut after it was damaged by the 9.0-magnitude earthquake that hit northeast Japan in March 2011.
Fire and explosions triggered by the earthquake destroyed liquefied petroleum gas (LPG) tanks at the refinery.
Cosmo Oil resumed production at its 100,000 bbl/day No 1 CDU on 20 April this year.
The No 1 CDU is scheduled to be shut for maintenance from 22 September to 16 November this year, the company source said.
The
The company also has a 100,000 bbl/day refinery
I keep trying to file my income tax returns online. However, I insist on using OpenOffice.org and so I have failed to accomplish what I set out to do. A request on the income tax office’s site for an OpenOffice.org version did not get any response. A discussion on the ILUG-Delhi list pointed to a version of the ITR1 utility at freedom-matters.in. This Excel utility was for the previous year. Changing the Excel macro code from VBA to OpenOffice.org Basic is time consuming. It is also hard to keep up with updated releases, and the various versions of the Excel utilities for various IT forms (ITR1 through ITR4, at present).
VBA was originally included in the Go-oo version of OpenOffice.org, but was not a complete implementation. The VBA interoperability project is now a joint project of Novell and Sun (Oracle) and is active on this wiki. Hence, it was worth exploring the state of this project, as far as the utilities for various ITR forms were concerned. The objective was to experiment and see whether we could use the Excel utilities next year, and ascertain the skill levels needed.
I started with the development snapshot on Fedora. Arch Linux includes the beta and the development versions in its repositories, making it very easy to work with all three versions (stable, beta and development) of OpenOffice.org concurrently. The VBA module is included in the current releases of OpenOffice.org; however, the security options for macros have to be set to “Medium”. In addition, in the Load/Save options for VBA, “Executable code” has to be enabled for Excel sheets.
The development version I used was OOO300_m86 (build 9518). There has been considerable progress. Validation is triggered by a call to Worksheet_Change. OpenOffice.org introduces an additional macro — Worksheet_Change_OnChange_Proxy — in the code, which calls the Worksheet_Change macro. Currently, the implementation was incomplete, so the validation code failed. However, most of the validations are now carried out by hidden functions in the spreadsheet, and they are executed. To make some progress, I commented the calls to the Worksheet_Change macro in the proxy. With this change, ‘_2′, on my blog.) However, the need for this macro was short-lived.
I checked the beta version OOO330m5 (build 9521). On this version, the name conversion was done properly. Hence, I did the rest of the experimentation on this version. The Worksheet_Change function is not called in this release — but, as discussed earlier, its implementation was still incomplete in the development version.
There are a number of hidden sheets. In order to debug the code and get a better idea of what was happening, it was useful to make all the hidden sheets visible. It is quite simple, as the following Python macro illustrates:
def unhide():
    model = XSCRIPTCONTEXT.getDocument()
    sheets = model.getSheets()
    for n in range(sheets.getCount()):
        sheet = sheets.getByIndex(n)
        sheet.IsVisible = True
Once I entered the data and computed the tax, the result shown was incorrect. Since the Calculator sheet was now visible, I noticed a problem — the tax was not shown properly. The problem was that the Calculator sheet was protected, but with no password. It’s possible that the translation of the function from Excel to OpenOffice.org was in error. The code expects the password to be the value in cell ‘BE1′ on Sheet6 (the Home sheet), appended with ‘(21)*’. The content of the cell was a null string. Hence, the password expected was ‘(21)*’. Manually changing the blank password to the expected password resulted in the taxes being computed correctly.
One problem I encountered was that Not var = "" was interpreted as (Not var) = "" instead of Not (var = "") as I expected. It is difficult to argue which is the better or more appropriate interpretation; in this context, compatibility with Excel is the issue. An ideal solution could be that the compatibility module inserts parentheses as needed when loading VBA code, to remove ambiguity. Finally, after I added parentheses as needed, the macros successfully generated a proper XML file.
Making the utility for the ITR1 form run on OpenOffice.org and get an XML output was a challenge, but possible even for a person who knew no BASIC — much less VBA! VBA compatibility in OpenOffice is improving. In all likelihood, by the time we need to file our tax returns next year, we will be able to use OpenOffice.org with minimal additional effort and generate the XML file for online filing of the return. So, next year, I expect that I will save the Government of India some data entry costs! | http://www.opensourceforu.com/2010/10/exploring-software-state-of-vba-in-openoffice-calc/ | CC-MAIN-2014-42 | refinedweb | 770 | 58.48 |
#include <filesort_utils.h>
A wrapper class around the buffer used by filesort(). The buffer is a contiguous chunk of memory, where the first part is <num_records> pointers to the actual data.
We wrap the buffer in order to be able to do lazy initialization of the pointers: the buffer is often much larger than what we actually need.
The buffer must be kept available for multiple executions of the same sort operation, so we have explicit allocate and free functions, rather than doing alloc/free in CTOR/DTOR.
We need an assignment operator, see filesort(). This happens to have the same semantics as the one that would be generated by the compiler. We still implement it here, to show shallow assignment explicitly: we have two objects sharing the same array.
Sort me... | http://mingxinglai.com/mysql56-annotation/classFilesort__buffer.html | CC-MAIN-2017-47 | refinedweb | 132 | 61.26 |
Tip
Try the Microsoft Azure Storage Explorer
Microsoft Azure Storage Explorer is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
Overview
This article will show you how to perform common scenarios using File storage. The samples are written in Python and use the Microsoft Azure Storage SDK for Python. The scenarios covered include uploading, listing, downloading, and deleting files.
Create a share
The FileService object lets you work with shares, directories and files. The following code creates a FileService object. Add the following near the top of any Python file in which you wish to programmatically access Azure Storage.
from azure.storage.file import FileService
The following code creates a FileService object using the storage account name and account key. Replace 'myaccount' and 'mykey' with your account name and key.
file_service = FileService(account_name='myaccount', account_key='mykey')
In the following code example, you can use a FileService object to create the share if it doesn't exist.
file_service.create_share('myshare')
Upload a file into a share
An Azure File Storage share contains, at the very least, a root directory where files can reside. To create a file and upload data, use the create_file_from_path, create_file_from_stream, create_file_from_bytes or create_file_from_text methods.
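For example, the snippet below (the share, directory and file names are placeholders, not from the original article) uploads a local file to the root directory of the share with create_file_from_path:

from azure.storage.file import ContentSettings

# Upload 'local-file.txt' into the share's root directory as 'myfile'.
file_service.create_file_from_path(
    'myshare',          # share name
    None,               # directory path; None uploads to the root directory
    'myfile',           # name to give the file in the share
    'local-file.txt',   # path of the local file to upload
    content_settings=ContentSettings(content_type='text/plain'))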
How to: Create a Directory
You can also organize storage by putting files inside sub-directories instead of having all of them in the root directory. The Azure file storage service allows you to create as many directories as your account will allow. The code below will create a sub-directory named sampledir under the root directory.
file_service.create_directory('myshare', 'sampledir')
How to: List files and directories in a share
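You can get a list of the files and directories under the root of a share with the list_directories_and_files method. A minimal illustration (placeholder share name):

# List files and directories at the root of the share.
generator = file_service.list_directories_and_files('myshare')
for file_or_dir in generator:
    print(file_or_dir.name)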
Download files
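To download a file to a local path, use the get_file_to_path method; in this illustrative snippet the file and path names are placeholders:

# Download 'myfile' from the root of the share into a local file.
file_service.get_file_to_path('myshare', None, 'myfile', 'out-myfile.txt')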
Next steps
Now that you've learned the basics of File storage, follow these links to learn more. | https://docs.microsoft.com/en-us/azure/storage/storage-python-how-to-use-file-storage | CC-MAIN-2017-22 | refinedweb | 269 | 52.8 |
It's been around four months since I last wrote anything interesting inside a code editor, but spurred on by a few personal itches I decided I'd have another stab at it - and behold, snippets of code sprung forth, nearly unbidden. Time is as scarce as ever and my current role lends itself more to crossing out and re-drawing things on whiteboards than on coding them from scratch, but nevertheless I've been having fun with an unlikely combination of Docker, Cognitive Services and asyncio.
The Problem
It all started out innocuously enough late last year when I needed a sizable corpus of text to try out some NLP techniques and build a small proof of concept - I just grabbed my RSS subscription list, did some brute force fetching, tossed it into DocumentDB, hooked up a Jupyter Notebook to it, and that was it.
Later on, after the meeting was long past, I realized that it would be pretty nice to revisit RSS aggregation from a text mining perspective, and started cleaning things up for kicks. For instance, here’s the smallest OPML feed parser you’ll see all day:
from xml.etree import ElementTree

def feeds_from_opml(filename):
    tree = ElementTree.parse(filename)
    for feed in tree.findall('.//outline'):
        if feed.get('xmlUrl'):
            yield {'title': feed.get('title'), 'url': feed.get('xmlUrl')}

list(feeds_from_opml('feeds.opml'))[:2]
[{'title': 'Open Culture', 'url': ''}, {'title': 'The Kid Should See This.', 'url': ''}]
Going Async
I started out running the whole thing as a single script, but fetching multiple URLs is something best done using multiprocessing, so I started out with that - only to realize that fetcher processes were still getting tied up waiting for responses and that this was an opportunity to take advantage of aiohttp and async/await in Python 3.x.
As it turned out, moving to aiohttp speeded up things so much that I had to add a semaphore to avoid exhausting TCP connections on my Mac:
async def throttle(sem, session, feed, client, database):
    """Throttle number of simultaneous requests"""
    async with sem:
        res = await fetch_one(session, feed, client, database)
        log.info("%s: %d", res[0]['url'], res[1])

async def fetcher(database):
    """Fetch all the feeds"""
    sem = Semaphore(MAX_CONCURRENT_REQUESTS)
    while True:
        client = await connect_pipeline(connect=ENTRY_PARSER)
        tasks = []
        threshold = datetime.now() - timedelta(seconds=FETCH_INTERVAL)
        async with ClientSession() as session:
            log.info("Beginning run.")
            async for feed in database.feeds.find({}):
                log.debug("Checking %s", feed['url'])
                last_fetched = feed.get('last_fetched', threshold)
                if last_fetched <= threshold:
                    task = ensure_future(throttle(sem, session, feed, client, database))
                    tasks.append(task)
            responses = gather(*tasks)
            await responses
        log.info("Run complete, sleeping %ds...", CHECK_INTERVAL)
        await sleep(CHECK_INTERVAL)
However, running all this on a single process got old pretty quickly, so I soon had multiple processes fetching, parsing and storing feeds and communicating over aiozmq.
This works fine when you have one or two kinds of workers, but soon enough the next step was obvious - I needed a proper task queue. But I wanted speed and the ability to share arbitrary chunks of data rather than “pure” queueing, so I decided to roll my own atop Redis.
Redis Queues
I love Redis, and I'm flabbergasted at how little people take advantage of it for task queueing and keep thinking it's just another flavour of memcache.
With a little help from bson, I was soon able to toss just about anything into a queue with a few tiny, straightforward wrappers:
from asyncio import get_event_loop
from aioredis import create_redis
from config import log, REDIS_SERVER, REDIS_NAMESPACE
from json import dumps, loads
from bson import json_util

async def connect_redis(loop=None):
    """Connect to a Redis server"""
    if not loop:
        loop = get_event_loop()
    return await create_redis(REDIS_SERVER.split(':'), loop=loop)

async def enqueue(server, queue_name, data):
    """Enqueue an object in a given redis queue"""
    return await server.rpush(REDIS_NAMESPACE + queue_name, dumps(data, default=json_util.default))

async def dequeue(server, queue_name):
    """Blocking dequeue from Redis"""
    _, data = await server.blpop(REDIS_NAMESPACE + queue_name, 0)
    return loads(data, object_hook=json_util.object_hook)
It bears noting at this point that I had bson handy because I was using the MongoDB protocol to talk to DocumentDB, and that it is doubly useful because I can use it to serialize not just datetime objects in a sensible way, but also (should I want to) toss complete document "trees" into Redis if I really want to.
Which I didn’t - given that only the feed and item parsers did database writes (and that the processing between each write gave the database more than enough breathing room), my write cycles are low enough that passing around document IDs (and having them individually retrieved from the database by each worker) is a tenable approach.
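To make that concrete, a consumer along those lines can be sketched in a few lines; the queue name, the Motor-style database handle and the process_entry coroutine below are placeholders of mine, not names from the actual project.

async def worker(database, server):
    """Pull document IDs off a Redis queue and fetch each document on demand."""
    while True:
        entry_id = await dequeue(server, 'fetched-entries')        # helper defined above
        entry = await database.entries.find_one({'_id': entry_id})
        if entry is not None:
            await process_entry(entry)                             # hypothetical processing coroutine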
Packaging the Runtime
With inter-process communication sorted, I then began to think about deploying and running the whole thing across multiple machines. Piku was great for deploying via git and iterating quickly, but it's pretty obvious that I had to consider moving to Docker sooner or later.
And since I've been maintaining my own Python images for a while now, I decided to move my development process to docker-compose, which had two interesting side-effects:
- I moved back from a cloud setting into a laptop-only deployment (using local Redis and MongoDB containers).
- I spent a good while rebuilding my containers - not just to target Python 3.6.1 (which I needed so I could yield from inside a coroutine) but also because there are no Python wheels for musl.
This last bit was the interesting one for me - most of the stuff on my requirements.txt is instantly downloadable as precompiled wheels for glibc, but if you want to use Alpine as a container base, then you have to wait until pip rebuilds the whole shebang locally - which was OK when I was relying solely on external APIs for text processing, but which soon devolved into the old classic when I added NLTK to the mix:
That time is, thankfully, quickly offset by the ease with which I get the whole stack up and running:
!docker-compose up
Starting newsfeedcorpus_db_1 Starting newsfeedcorpus_redis_1 Starting newsfeedcorpus_parser_1 Starting newsfeedcorpus_scheduler_1 Starting newsfeedcorpus_importer_1 Starting newsfeedcorpus_web_1 Starting newsfeedcorpus_fetcher_1 Attaching to newsfeedcorpus_db_1, newsfeedcorpus_redis_1, newsfeedcorpus_scheduler_1, newsfeedcorpus_fetcher_1, newsfeedcorpus_web_1, newsfeedcorpus_parser_1, newsfeedcorpus_importer_1 db_1 | WARNING: no logs are available with the 'none' log driver redis_1 | WARNING: no logs are available with the 'none' log driver scheduler_1 | 2017-04-02 19:56:10 INFO Configuration loaded. fetcher_1 | 2017-04-02 19:56:10 INFO Configuration loaded. fetcher_1 | 2017-04-02 19:56:10 INFO Beginning run. web_1 | 2017-04-02 19:56:10 INFO Configuration loaded. web_1 | 2017-04-02 19:56:10 WARNING cPickle module not found, using pickle importer_1 | 2017-04-02 19:56:11 INFO Configuration loaded. parser_1 | 2017-04-02 19:56:11 INFO Configuration loaded. parser_1 | 2017-04-02 19:56:12 INFO 'pattern' package not found; tag filters are not available for English newsfeedcorpus_importer_1 exited with code 0 parser_1 | 2017-04-02 19:56:13 INFO Beginning run. fetcher_1 | 2017-04-02 19:56:15 INFO: 200 fetcher_1 | 2017-04-02 19:56:15 INFO: 200 fetcher_1 | 2017-04-02 19:56:15 INFO: 200 fetcher_1 | 2017-04-02 19:56:16 INFO: 200 fetcher_1 | 2017-04-02 19:56:16 INFO: 200 fetcher_1 | 2017-04-02 19:56:16 INFO: 200 fetcher_1 | 2017-04-02 19:56:16 INFO: 200 fetcher_1 | 2017-04-02 19:56:16 INFO: 304 scheduler_1 | 2017-04-02 19:56:16 INFO Run complete, sleeping 3600s... fetcher_1 | 2017-04-02 19:56:16 INFO: 200 fetcher_1 | 2017-04-02 19:56:16 INFO: 304 newsfeedcorpus_fetcher_1 exited with code 137 newsfeedcorpus_scheduler_1 exited with code 137 newsfeedcorpus_web_1 exited with code 137 newsfeedcorpus_redis_1 exited with code 0 newsfeedcorpus_db_1 exited with code 0
…and scaling it at will:
!docker-compose scale fetcher=4
Creating and starting newsfeedcorpus_fetcher_2 ... Creating and starting newsfeedcorpus_fetcher_3 ... Creating and starting newsfeedcorpus_fetcher_4 ...
!docker-compose scale fetcher=1
Stopping and removing newsfeedcorpus_fetcher_2 ... Stopping and removing newsfeedcorpus_fetcher_3 ... Stopping and removing newsfeedcorpus_fetcher_4 ...
Obviously that works for me in Piku as well, but it’s nice to think that I’ll be able to do this across a bunch of machines later on (and I’ve got just the place to do it in).
Doing Actual NLP
Although I’m pretty happy with Cognitive Services, I decided to strip them out from the code for the moment being, for two reasons:
- I exhausted my personal API request quota over a weekend (duh!)
- A lot of the stuff I need (like language detection and basic analysis) can be done in-process more efficiently
- More importantly, I can learn a lot more by doing it myself while using off-the-shelf stuff like NLTK
So it made sense to take care of the low-hanging fruit and leave the toughest things (like natural language understanding and document clustering) to Cognitive Services.
Soon enough I had a couple of basic amenities (like keyword extraction) going, so I started a little module I call langkit:
from langkit import extract_keywords
extract_keywords("""...""", language="en", scores=True)
[('complex service oriented application stacks', 25.0), ('link may experience odd behavior', 23.5), ('generic linux box', 9.0), ('linux network interface', 9.0), ('slow networking interfaces', 8.666666666666666), ('single networking interface', 8.666666666666666), ('networking link', 6.166666666666666), ('webservice running', 4.0), ('things slowdown', 4.0), ('completely saturate', 4.0), ('busy enough', 4.0), ('manually throttle', 4.0), ('host behind', 4.0), ('bandwidth', 1.0), ('eth0', 1.0), ('congested', 1.0), ('consider', 1.0), ('manifest', 1.0), ('bugs', 1.0)]
So far it’s mostly about the little RAKE keyword extractor you can see working above, but I have more plans for it - in particular, I expect to try my hand at doing TF-IDF and document clustering “by hand” once I can find a couple of vacant hours one evening.
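For reference, the TF-IDF part of that plan needs very little code; the sketch below is a bare-bones illustration (not from the original codebase) operating on pre-tokenized documents, with no external dependencies.

from collections import Counter
from math import log

def tf_idf(docs):
    """Tiny TF-IDF sketch: docs is a list of token lists."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency per term
    n = len(docs)
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({term: (count / len(doc)) * log(n / df[term])
                       for term, count in tf.items()})
    return scores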
In the meantime, given the limited time I have, I decided to go low-brow for a bit and take a little time to start building a minimal web UI and examining what I could do differently with the new web stacks now surfacing around asyncio.
Gotta Go Fast
It didn't take me much time to get around to trying Sanic, which leverages uvloop to achieve a massively impressive performance of 25.000 requests per second on my MacBook (an i5 clocking in at 2.9GHz):
$ wrk …
Running 10s test @ …
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   435.44us  422.41us  12.70ms   93.40%
    Req/Sec    12.57k     1.70k   15.23k    71.78%
  252720 requests in 10.10s, 29.64MB read
Requests/sec: 25021.80
Transfer/sec: 2.94MB
However, this is where I've started getting seriously annoyed at async/await, not just because it has a fair amount of impact in program structure (Sanic is OK, but I miss the simpler structure of Bottle, as well as all the nice stuff I built to use with it over the years), but also because it can be a right pain to remember to await for a future when it's not immediately obvious that what you're trying to serve up to the browser isn't a "normal" result.
The Internet Of Things Intrudes
Prompted by my frustrations with async/await, I decided to take a break and do something a little closer to the real world this weekend.
Since I’m going to be doing some IoT stuff over the coming days, I decided I’d revisit Go (which I haven’t written in a while) and write a minimal Azure IoT Hub client able to manage device registrations and send device-to-cloud messages (which is the bare minimum you need to get started).
I've been playing around with Azure IoT Hub ever since I contributed a little Python snippet for the docs, but using Go is a lot more fun, for a number of reasons:
- I enjoy the relatively low-level abstractions, which make a nice change
- Encoding and hashing is different enough to be a challenge (and there are very few samples of it down there)
- The final static binary, with everything baked in and UPX-compressed, weighs in at around 1MB for arm7
Besides, the whole Go experience has a good feel to it (no need for an IDE, none of the turtles-all-the-way-down abstractions typical of the enterprise stuff I have to wade through on occasion, and raw, blistering speed when running, testing, and iterating).
It’s not going to win any awards for beauty, though:
func buildSasToken(hub *IoTHub, uri string) string {
    timestamp := time.Now().Unix() + int64(tokenValidSecs)
    encodedUri := template.URLQueryEscaper(uri)
    toSign := encodedUri + "\n" + strconv.FormatInt(timestamp, 10)
    binKey, _ := base64.StdEncoding.DecodeString(hub.SharedAccessKey)
    mac := hmac.New(sha256.New, []byte(binKey))
    mac.Write([]byte(toSign))
    encodedSignature := template.URLQueryEscaper(base64.StdEncoding.EncodeToString(mac.Sum(nil)))
    return fmt.Sprintf("SharedAccessSignature sr=%s&sig=%s&se=%d&skn=%s", encodedUri, encodedSignature, timestamp, hub.SharedAccessKeyName)
}
Speaking of beauty in code, all I need now is some time to get back into Clojure in earnest, and I can probably get a sense of achievement (and aesthetics) back into my neural pathways, which are sorely worn out with PowerPoint slides and meetings all over the place… | http://taoofmac.com/space/blog/2017/04/02/2150 | CC-MAIN-2017-17 | refinedweb | 2,213 | 51.58 |
Difference between revisions of "Talk:DeveloperWiki:UID / GID Database"
Revision as of 07:11, 28 April 2013
Ossec
- grep ossec /etc/group
- ossec:x:525:
- grep ossec /etc/passwd
- ossec:x:524:525::/var/ossec:/bin/false
- ossecm:x:525:525::/var/ossec:/bin/false
- ossecr:x:526:525::/var/ossec:/bin/false
thanks :)
What about packages from AUR?
As I started discussing it at Talk:Arch_Packaging_Standards#Adding_system_users, it would be good to have a clearer process for the addition of system users, and maybe a specific sub-namespace (_e.g._ from 500 to 749) for AUR, as well as some authoritative list to avoid collisions ? Maybe another namespace (750-999?) should also be kept strictly reserved for local uses, so admins can create groups knowing that no upstream ArchLinux package will ever use the ID.
socket-sentry
Please add socketsentry group (gid 172) for kdeplasma-addons-applets-socketsentry package ([2]).
- As you can see, this discussion page doesn't get many replies from developers, you may want to use the forum or the mailing lists for this kind of requests. -- Kynikos (talk) 09:54, 17 June 2012 (UTC)
oml2 entries
Please add oml2 user and group (both 137, arbitrarily, on a Debian system I have at hand). I'm updating the package at /packages.php?ID=60321 and creating these in the post_install script.
OlivierMehani (talk) 08:49, 15 August 2012 (UTC) | https://wiki.archlinux.org/index.php?title=Talk:DeveloperWiki:UID_/_GID_Database&diff=prev&oldid=255492 | CC-MAIN-2016-40 | refinedweb | 234 | 55.44 |
):
Thanks
Ben
On Wed, Jun 13, 2001 at 11:50:07AM -0600, Benjamin Collar wrote:
|):
You have something like :
######## a module
self = "some value"
print id( self )
class Foo :
    def __init__( self , *TestClasses ) :
        print id( self )
The 2 'print' lines will print different things -- the 'self' inside
the __init__ function is a different 'self' than outside the function.
The solution is to change the name of one or more of them. It is also
conceivable that you have
class Foo :
    def func( self ) :
        class Bar :
            def __init__( self ) :
                pass
which would cause the same problem, when nested_scopes are used. If
you show the rest of the code, then it will be more obvious what is
really causing the warning message, and it will be easier to suggest
alternative coding/naming styles.
-D | https://sourceforge.net/p/jython/mailman/message/5261440/ | CC-MAIN-2017-43 | refinedweb | 132 | 64.07 |
The QDeclarativeImageProvider class provides an interface for supporting pixmaps and threaded image requests in QML. More...
#include <QDeclarativeImageProvider>
This class was introduced in Qt 4.7.
Image providers that support QImage..
Images returned by a QDeclarative.
See also QDeclarativeEngine::addImageProvider().
Defines the type of image supported by this image provider.
Creates an image provider that will provide images of the given type.
Destroys the QDeclarativeImageProvider
Note: The destructor of your derived class needs to be thread safe.
Returns the image type supported by this provider.
Implement this method to return the image with id. The default implementation returns an empty image..
Note: this method may be called by multiple threads, so ensure the implementation of this method is reentrant.
Implement this method to return the pixmap with id. The default implementation returns an empty pixmap.. | http://doc.trolltech.com/main-snapshot/qdeclarativeimageprovider.html#imageType | crawl-003 | refinedweb | 140 | 53.78 |
Route-based IPsec VPN on Linux with strongSwan
Vincent Bernat
A common way to establish an IPsec tunnel on Linux is to use an IKE daemon, like the one from the strongSwan project, with a minimal configuration:1
conn V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64
  authby      = psk
  auto        = route
The same configuration can be used on both sides. Each side will figure out if it is "left" or "right." The IPsec site-to-site tunnel endpoints are 2001:db8:1::1 and 2001:db8:2::1. The protected subnets are 2001:db8:a1::/64 and 2001:db8:a2::/64. As a result, strongSwan configures the following policies in the kernel:
$ ip xfrm policy
src 2001:db8:a1::/64 dst 2001:db8:a2::/64
        dir out priority 399999 ptype main
        tmpl src 2001:db8:1::1 dst 2001:db8:2::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir fwd priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir in priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
[…]
This kind of IPsec tunnel is a policy-based VPN: encapsulation and decapsulation are governed by these policies. Each of them contains the following elements:
- a direction (out, in or fwd2);
- a selector (source subnet, destination subnet, protocol, ports);
- a mode (transport or tunnel);
- an encapsulation protocol (esp or ah); and
- the endpoint source and destination addresses.
When a matching policy is found, the kernel will look for a corresponding security association (using reqid and the endpoint source and destination addresses):
$ ip xfrm state
src 2001:db8:1::1 dst 2001:db8:2::1
        proto esp spi 0xc1890b6e reqid 4 mode tunnel
        replay-window 0 flag af-unspec
        auth-trunc hmac(sha256) 0x5b68[…]8ba2904 128
        enc cbc(aes) 0x8e0e377ad8fd91e8553648340ff0fa06
        anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
[…]
If no security association is found, the packet is put on hold and the IKE daemon is asked to negotiate an appropriate one. Otherwise, the packet is encapsulated. The receiving end identifies the appropriate security association using the SPI in the header. Two security associations are needed to establish a bidirectionnal tunnel:
$ tcpdump -pni eth0 -c2 -s0 esp
13:07:30.871150 IP6 2001:db8:1::1 > 2001:db8:2::1: ESP(spi=0xc1890b6e,seq=0x222)
13:07:30.872297 IP6 2001:db8:2::1 > 2001:db8:1::1: ESP(spi=0xcf2426b6,seq=0x204)
All IPsec implementations are compatible with policy-based VPNs. However, some configurations are difficult to implement. For example, consider the following proposition for redundant site-to-site VPNs:
A possible configuration between V1-1 and V2-1 could be:
conn V1-1-to-V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64,2001:db8:a6::cc:1/128,2001:db8:a6::cc:5/128
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64,2001:db8:a6::/64,2001:db8:a8::/64
  authby      = psk
  keyexchange = ikev2
  auto        = route
Each time a subnet is modified on one site, the configurations need to be updated on all sites. Moreover, overlapping subnets (2001:db8:a6::/64 on one side and 2001:db8:a6::cc:1/128 at the other) can also be problematic.
The alternative is to use route-based VPNs: any packet traversing a pseudo-interface will be encapsulated using a security policy bound to the interface. This brings two features:
- Routing daemons can be used to distribute routes to be protected by the VPN. This decreases the administrative burden when many subnets are present on each side.
- Encapsulation and decapsulation can be executed in a different routing instance or namespace. This enables a clean separation between a private routing instance (where VPN users are) and a public routing instance (where VPN endpoints are).
Route-based VPN on Juniper
Before looking at how to achieve that on Linux, let's have a look at the way it works with a Junos-based platform (like a Juniper vSRX). This platform has a long-standing history of supporting route-based VPNs (a feature already present in the Netscreen ISG platform).
Let's assume we want to configure the IPsec VPN from V3-2 to V1-1. First, we need to configure the tunnel interface and bind it to the "private" routing instance containing only internal routes (with IPv4, they would have been RFC 1918 routes):
interfaces {
    st0 {
        unit 1 {
            family inet6 {
                address 2001:db8:ff::7/127;
            }
        }
    }
}
routing-instances {
    private {
        instance-type virtual-router;
        interface st0.1;
    }
}
The second step is to configure the VPN:
security {
    /* Phase 1 configuration */
    ike {
        proposal IKE-P1 {
            authentication-method pre-shared-keys;
            dh-group group20;
            encryption-algorithm aes-256-gcm;
        }
        policy IKE-V1-1 {
            mode main;
            proposals IKE-P1;
            pre-shared-key ascii-text "d8bdRxaY22oH1j89Z2nATeYyrXfP9ga6xC5mi0RG1uc";
        }
        gateway GW-V1-1 {
            ike-policy IKE-V1-1;
            address 2001:db8:1::1;
            external-interface lo0.1;
            general-ikeid;
            version v2-only;
        }
    }
    /* Phase 2 configuration */
    ipsec {
        proposal ESP-P2 {
            protocol esp;
            encryption-algorithm aes-256-gcm;
        }
        policy IPSEC-V1-1 {
            perfect-forward-secrecy keys group20;
            proposals ESP-P2;
        }
        vpn VPN-V1-1 {
            bind-interface st0.1;
            df-bit copy;
            ike {
                gateway GW-V1-1;
                ipsec-policy IPSEC-V1-1;
            }
            establish-tunnels on-traffic;
        }
    }
}
We get a route-based VPN because we bind the st0.1 interface to the VPN-V1-1 VPN. Once the VPN is up, any packet entering st0.1 will be encapsulated and sent to the 2001:db8:1::1 endpoint.
The last step is to configure BGP in the “private” routing instance to exchange routes with the remote site:
routing-instances {
    private {
        routing-options {
            router-id 1.0.3.2;
            maximum-paths 16;
        }
        protocols {
            bgp {
                preference 140;
                log-updown;
                group v4-VPN {
                    type external;
                    local-as 65003;
                    hold-time 6;
                    neighbor 2001:db8:ff::6 peer-as 65001;
                    multipath;
                    export [ NEXT-HOP-SELF OUR-ROUTES NOTHING ];
                }
            }
        }
    }
}
The export filter OUR-ROUTES needs to select the routes to be advertised to the other peers. For example:
policy-options {
    policy-statement OUR-ROUTES {
        term 10 {
            from {
                protocol ospf3;
                route-type internal;
            }
            then {
                metric 0;
                accept;
            }
        }
    }
}
The configuration needs to be repeated for the other peers. The complete version is available on GitHub. Once the BGP sessions are up, we start learning routes from the other sites. For example, here is the route for 2001:db8:a1::/64:
> show route 2001:db8:a1::/64 protocol bgp table private.inet6.0 best-path

private.inet6.0: 15 destinations, 19 routes (15 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2001:db8:a1::/64   *[BGP/140] 01:12:32, localpref 100, from 2001:db8:ff::6
                      AS path: 65001 I, validation-state: unverified
                      to 2001:db8:ff::6 via st0.1
                    > to 2001:db8:ff::14 via st0.2
It was learnt both from V1-1 (through st0.1) and V1-2 (through st0.2). The route is part of the private routing instance but encapsulated packets are sent/received in the public routing instance. No route-leaking is needed for this configuration. The VPN cannot be used as a gateway from internal hosts to external hosts (or vice-versa). This could also have been done with Junos' security policies (stateful firewall rules) but doing the separation with routing instances also ensures routes from different domains are not mixed and a simple policy misconfiguration won't lead to a disaster.
Route-based VPN on Linux
Starting from Linux 3.15, a similar configuration is possible with the help of a virtual tunnel interface.3 First, we create the “private” namespace:
# ip netns add private
# ip netns exec private sysctl -qw net.ipv6.conf.all.forwarding=1
Any “private” interface needs to be moved to this namespace (no IP is configured as we can use IPv6 link-local addresses):
# ip link set netns private dev eth1
# ip link set netns private dev eth2
# ip netns exec private ip link set up dev eth1
# ip netns exec private ip link set up dev eth2
Then, we create vti6, a tunnel interface (similar to st0.1 in the Junos example):
# ip -6 tunnel add vti6 \
>  mode vti6 \
>  local 2001:db8:1::1 \
>  remote 2001:db8:3::2 \
>  key 6
# ip link set netns private dev vti6
# ip netns exec private ip addr add 2001:db8:ff::6/127 dev vti6
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_policy=1
# ip netns exec private ip link set vti6 mtu 1500
# ip netns exec private ip link set vti6 up
The tunnel interface is created in the initial namespace and moved to the “private” one. It will remember its original namespace where it will process encapsulated packets. Any packet entering the interface will temporarily get a firewall mark of 6 that will be used only to match the appropriate IPsec policy4 below. The kernel sets a low MTU on the interface to handle any possible combination of ciphers and protocols. We set it to 1500 and let PMTUD do its work.
Update (2018-04)
The MTU is also too low due to a bug that is fixed in commit c6741fbed6dc (released with Linux 4.17).
We can then configure strongSwan:5
conn V3-2
  left        = 2001:db8:1::1
  leftsubnet  = ::/0
  right       = 2001:db8:3::2
  rightsubnet = ::/0
  authby      = psk
  mark        = 6
  auto        = route
  keyexchange = ikev2
  keyingtries = %forever
  ike         = aes256gcm16-prfsha384-ecp384!
  esp         = aes256gcm16-prfsha384-ecp384!
  mobike      = no
The IKE daemon configures the following policies in the kernel:
$ ip xfrm policy
src ::/0 dst ::/0
        dir out priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:1::1 dst 2001:db8:3::2
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir fwd priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir in priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
[…]
These policies are used for any source or destination as long as the firewall mark is equal to 6, which matches the mark configured for the tunnel interface.
The last step is to configure BGP to exchange routes. We can use BIRD for this:
router id 1.0.1.1;

protocol device {
    scan time 10;
}
protocol kernel {
    persist;
    learn;
    import all;
    export all;
    merge paths yes;
}
protocol bgp IBGP_V3_2 {
    local 2001:db8:ff::6 as 65001;
    neighbor 2001:db8:ff::7 as 65003;
    import all;
    export where ifname ~ "eth*";
    preference 160;
    hold time 6;
}
Once BIRD is started in the “private” namespace, we can check routes are learned correctly:
$ ip netns exec private ip -6 route show 2001:db8:a3::/64
2001:db8:a3::/64 proto bird metric 1024
        nexthop via 2001:db8:ff::5 dev vti5 weight 1
        nexthop via 2001:db8:ff::7 dev vti6 weight 1
The above route was learnt from both V3-1 (through vti5) and V3-2 (through vti6). Like for the Junos version, there is no route-leaking between the "private" namespace and the initial one. The VPN cannot be used as a gateway between the two namespaces, only for encapsulation. This also prevents a misconfiguration (for example, IKE daemon not running) from allowing packets to leave the private network.
As a bonus, unencrypted traffic can be observed with tcpdump on the tunnel interface:
$ ip netns exec private tcpdump -pni vti6 icmp6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vti6, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
20:51:15.258708 IP6 2001:db8:a1::1 > 2001:db8:a3::1: ICMP6, echo request, seq 69
20:51:15.260874 IP6 2001:db8:a3::1 > 2001:db8:a1::1: ICMP6, echo reply, seq 69
You can find all the configuration files for this example on GitHub. The documentation of strongSwan also features a page about route-based VPNs. It is possible to replace IPsec by WireGuard, a fast and modern VPN implementation.
Update (2018-11)
It is also possible to transport IPv4 on top of IPv6 IPsec tunnels. The lab has been updated to support such a scenario.
Update (2018-11)
There are some serious bugs starting from Linux 4.14 impacting this setup. Be sure to apply the following patches: 9e1437937807 and 0152eee6fc3b—both applied to stable trees.
Everything in this post should work with Libreswan. ↩︎
fwd is for incoming packets on non-local addresses. It only makes sense in transport mode and is a Linux-only specificity. ↩︎
Virtual tunnel interfaces (VTI) were introduced in Linux 3.6 (for IPv4) and Linux 3.12 (for IPv6). Appropriate namespace support was added in 3.15. KLIPS, an alternative out-of-tree stack available since Linux 2.2, also features tunnel interfaces. ↩︎
The mark is set right before doing a policy lookup and restored after that. Consequently, it doesn’t affect other possible uses (filtering, routing). However, as Netfilter can also set a mark, one should be careful for conflicts. ↩︎
The ciphers used here are the strongest ones currently possible while keeping compatibility with Junos. The documentation for strongSwan contains a complete list of supported algorithms as well as security recommendations to choose them. ↩︎ | https://vincent.bernat.ch/en/blog/2017-route-based-vpn | CC-MAIN-2021-49 | refinedweb | 2,243 | 50.26 |
Graphics Programming Using C#
Painting Shapes
With the help of Graphics class of System.Drawing namespace, you can render various kinds of shapes. These include Rectangle, Filled Rectangle, Lines, Ellipse, Filled Ellipse, Pie, Filled Pie, and Polygons. This class defines methods for painting these shapes. Each method differs according to the need of the shapes. You have to learn the syntax of these methods and apply them in your programs. Don't try to memorize all of them, rather you will slowly gain experience by practice.
As already discussed earlier, every method on the Graphics class has to be accessed by creating an object of the Graphics class. Table 1 shows some of the methods of this class. You can add the code given on the table in Listing 1.
Table 1: Graphics Class Methods
Using the Pen class you can specify color of the border and also the thickness. From the example given above, it can be seen that Pen class is to be applied for drawing shapes while Brush class is applied for filling shapes.
Tweaking the Unit of measurement
As you may know the default Graphics unit is Pixel. By applying the PageUnit property, you can change the unit of measurement to Inch, Millimeter etc as shown below:
Graphics g = e.Graphics; g.PageUnit = GraphicsUnit.Inch;
Hi,
I'm curious about the behaviour of Crimson 1.1.1 when used as a SAX2
XMLReader. For some reason, for XML elements with no namespace, the name
is returned as the qName parameter to startElement(), rather than localName
as expected; localName is empty. We're hoping for flexibility in changing
parsers (hence the move to 1.1.1 for the better JAXP support), so we'd like
to achieve consistent behaviour if at all possible. Is there any way of
getting Crimson to comply with SAX2 on this?
Regards,
Ben Pickering
Created on 2013-03-17 14:42 by barry, last changed 2015-04-21 03:06 by berker.peksag. This issue is now closed.
This came up at the Pycon 2013 Python 3 porting clinic. There are many cases in the stdlib that claim (either explicitly or implicitly) to accept bytes or strings, but that don't return the type of the arguments they accept. An example is urllib.parse.quote() which accepts bytes or str but always returns str. A similar example brought up at the clinic was difflib, which accepts both types, and works internally on both, but crashes when joining the results for return.
It should be policy for the stdlib (i.e. codified in an informational PEP and including bug reports, because they *are* bugs, not features or baked-in API) where bytes or str are accepted but the right things are not done (i.e. return the type you accept).
This bug captures the principle, and probably should be closed once such a PEP is accepted, with individual bugs opened for each individual case..)
On Mar 17, 2013, at 03:10 PM, R. David Murray wrote:
.)
Totally agree about the mixed type rule.
But this is something different, and I know it's been discussed, and is a difficult problem. It's causing real-world pain for people though, so it's worth thinking about again.
The particular use case that triggered this: Mercurial's test suite. It runs "hg blah blah" and compares the output against known good output. But Mercurial's output is just bytes, because pretty much everything in a Mercurial repo is just bytes (file data of course, but also filenames and even changeset metadata like usernames).
So attempting to run the Mercurial test suite under 3.x immediately fails hard. The boiled-down essence of the bug is this:
>>> import difflib
>>> a = b"hello world"
>>> b = b"goodbye world"
>>> [line for line in difflib.unified_diff(a, b)]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <listcomp>
File "/home/greg/src/cpython/3.2/Lib/difflib.py", line 1224, in unified_diff
yield '-' + line
TypeError: Can't convert 'int' object to str implicitly
That looks like a bug in difflib (regardless of what type it returns). But note that I'm agreeing that returning the same type you are given is generally preferrable.
Changing behavior that already matches the docs is an enhancement, not a bugfix, and one that will almost certainly break code. It is therefore one that would normally require a deprecation period. I think the most you should ask for is to skip the deprecation period.
I believe the urllib and difflib problems are quite different. I am going to presume that urllib simply converts bytes input to str and goes on from there, returning the result as str rather than (possibly) converting back to bytes. That is an example for this issue.
Difflib.unified_diff, on the other hand, raises rather than returning an unexpected or undesired type. The 3 sections like the below have two problems given the toy input of two bytes objects.
if tag in {'replace', 'delete'}:
    for line in a[i1:i2]:
        yield '-' + line
First, iterating bytes or a slice of bytes returns ints, not 1-byte bytes. Hence the exception. Even if that were worked around, the mixed string constant + bytes expression would raise a TypeError. One fix for both problems would be to change the expression to '-' + str(line).
Neither of these problems are bugs. The doc says "Compare a and b (lists of strings)". Actually, 'sequence of strings' is sufficient. For the operations of unified_diff, a string looks like a sequence of 1-char strings, which is why
>>> for l in difflib.unified_diff('ab', 'c'): print(l)
---
+++
@@ -1,2 +1 @@
-a
-b
+c
works.
The other lines yielded by unified_diff are produced with str.format, and % formatting does not seem to work with bytes either. So a dual string/bytes function would not be completely trivial.
Greg, can you convert bytes to strings, or strings to bytes, for your tests, or do you have non-ascii codes in your bytes? Otherwise, I think it might be better to write a new function 'unified_diff_bytes' that did exactly what you want than to try to make unified_diff accept sequences of bytes.
At a glance, this just looks like a bug in difflib - it should use different literals when handling bytes. (Given that difflib newline processing assumes ASCII compatibility, a latin-1 based decode/encode solution may also be viable).
The original reproduction I posted was incorrect -- it makes difflib look worse than it should. (I passed strings rather than lists of strings.) Here is a more accurate version:
>>> import difflib
>>> a = [b'hello']
>>> b = [b'hello!']
>>> '\n'.join(line for line in difflib.unified_diff(a, b))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <genexpr>
File "/home/greg/src/cpython/3.3/Lib/difflib.py", line 1223, in unified_diff
yield '-' + line
TypeError: Can't convert 'bytes' object to str implicitly
So it still crashes, but the exception makes it pretty clear what the problem is.
Replying to Terry Reedy:
> So a dual string/bytes function would not be completely trivial.
Correct. I have one working, but it makes my eyes bleed. I fail ashamed to have written it.
> Greg, can you convert bytes to strings, or strings to bytes
Nope. Here is the hypothetical use case: I have a text file written in Polish encoded in ISO-8859-1 committed to a Mercurial repository. (Or saved in a filesystem somewhere: doesn't really matter, except that Mercurial repositories are immutable, long-term, and *must* *not* *lose* *data*.) Then I decide I should play nicely with the rest of the world and transcode to UTF-8, so commit a new rev in UTF-8.
Years later, I need to look at the diff between those two old revisions. Rev 1 is a pile of ISO-8859-2 bytes, and rev 2 is a pile of UTF-8 bytes. The output of diff looks like
- blah blah [iso-8859-2 bytes] blah
+ blah blah [utf-8 bytes] blah
Note this: the output of diff has some lines that are iso-8859-2 bytes and some that are utf-8 bytes. *There is no single encoding* that applies.
Note also that diff output must contain the exact original bytes, so that it can be consumed by patch. Diffs are read both by humans and by machines.
> Otherwise, I think it might be better to write a new function
> 'unified_diff_bytes' that did exactly what you want than to try to
> make unified_diff accept sequences of bytes.
Good idea. That might be much less revolting than what I have now. I'll give it a shot.
OK I now have two competing patches. Both are disgusting, but in different ways.
1)
- factor out all the string constants
- always concatenate, do not .format()
2)
- copy {unified,context}_diff() to {unified,context}_diff_bytes()
- this is a future maintenance headache, guaranteed!
Feedback welcome. If anyone can see a way to unify these two approaches, or a third way that sucks less, I'm all ears.
> Necessary when comparing files with unknown or inconsistent encodings:
> there's no way to convert everything to Unicode, so just leave it all
> bytes.
You could simply use the surrogateescape error handler.
Since we don't need to worry about ASCII incompatible encodings (difflib will already have issues with such files due to the assumptions about newlines), it should be possible to use the same approach as that used in urllib.parse, but based on latin-1 rather than ascii.
It's the least bad option for this kind of use case (surrogateescape can be good too, but it doesn't work properly in this case where the two encodings may be different and we want to compare the raw bytes directly).
(changed scope of issue to reflect the subsequent discussion)
Take 3:
- uses surrogateescape as suggested by Antoine
- seems to work
I was about to suggested a simplified version of the original one-function version but the new one is better. One change: name = lambda... is discouraged in the stdlib (Guido, pydev, a few years ago). Def statements require only 3 more chars and produce properly named objects for tracebacks.
encode = lambda s: s
def encode(s): return s
Under current rules, this is a 3.4 enhancement. For the context_diff and unified_diff doc change:
- Compare a and b (lists of strings);
+ Compare string or bytes sequences a and b; # (or)
+ Compare a and b (both sequences of strings or sequences of bytes);
Neither entry says anything at present about the type of from/tofile. Based on your patch, the following could go after the first sentence:
+"Arguments *fromfile* and *tofile* are normally strings but may be bytes if the items of *a* and *b* are."
Ah, I forgot we didn't do within-line diffs. If we did those, then latin-1 would be less bad choice than ascii+surrogateescape. As it is, either should work in this case (since they just need to tunnel the raw bytes, and aren't being sliced at all).
I agree with most of Terry's comments, but think the case can be made this is a bug worth fixing in 3.3 (it's definitely borderline, but making it feasible to port Mercurial is a pretty big gain for a relatively tiny risk).
3.2 is about to enter security fix only mode though, so dropping that from the list of affected versions - while the fix should apply just fine, it's definitely not appropriate to make a change like this in the final planned maintenance release.
Thanks for the review, Terry! Here is a revised patch, now on trunk:
I believe I have addressed all of your concerns.
Note also that the tests now highlight some dubious behaviour. Further feedback is welcome!
I'm happy to rebase onto 3.3 if folks generally agree it's safe. (It seems fine to me; IMHO being unable to handle bytes is a regression relative to Python 2.)
The surrogate escape approach embodies the 3.x recommendation:
decode bytes to strings, manipulate strings, encode strings to bytes.
It also makes it possible to wrap the existing context/unified_diff functions, without touching them, with a simple 12 line function. Function bytes_diff avoids the complexities of mixing and unmixing strings and bytes that remain in Greg's latest patch.
I recommend the following: replace the simple test in the attached bytes_diff.py with Greg's unittest-based tests and adjust the __name__ == '__main__' incantation accordingly. Next upload to pypi to make it available to all 3.1-3.3 users. Then, after some minimal field testing, add the utility wrapper function to 3.4 difflib. These steps would make moot for difflib the sub-issue of whether the 3.x design is a bug fixable in bugfix releases. We could even add a reference to the pypi module in the 3.2 and 3.3 docs.
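Such a wrapper is short enough to sketch here; the following is an illustration of the approach (not necessarily identical to the attached bytes_diff.py):

import difflib

def bytes_diff(dfunc, a, b, *args, **kwargs):
    """Wrap a str-only difflib generator so it accepts and yields bytes.

    Input lines are decoded with the surrogateescape error handler, so
    arbitrary (even mixed-encoding) bytes survive the round trip.
    """
    def decode(line):
        return line.decode('ascii', 'surrogateescape')

    def encode(line):
        return line.encode('ascii', 'surrogateescape')

    for line in dfunc([decode(x) for x in a], [decode(x) for x in b],
                      *args, **kwargs):
        yield encode(line)

# Example: list(bytes_diff(difflib.unified_diff, [b'hello\n'], [b'hello!\n']))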
> I recommend the following: replace the simple test in the attached bytes_diff.py with
> Greg's unittest-based tests and adjust the __name__ == '__main__' incantation
> accordingly.
Latest patch, following Terry's suggestion:
Pro:
- does not touch unified_diff() or context_diff(), just adds a new function
- no question that this is a new feature, so no debate about what branch to commit on
- forces caller to know exactly what they are dealing with, strings or bytes
Con:
- not a bug fix, so 3.3 users won't get it ... but they can just copy the
implementation of diff_bytes() (or, as Terry suggests, it could live on PyPI)
- has explicit isinstance() checks (for a good reason: see the comments)
- also has more Pythonic "raise TypeError if s.decode raised AttributeError",
which is friendlier to duck typing but inconsistent with the isinstance() checks
used elsewhere in the patch
Overall I'm fairly happy with this. Not too thrilled with the explicit type checks; suggestions welcome.
Regarding Terry's suggestion of putting diff_bytes() on PyPI: meh. If the only project that needs this is Mercurial, that would be pointless. Mercurial doesn't have PyPI dependencies, and that is unlikely to change. This might be one of those rare cases where copying the code is easier than depending on it.
Friendly ping. With bytes formatting in Python 3.5a3, this is now the biggest pain point for getting our test runner working cleanly on Python 3.
(For values of "our" == "Mercurial".)
OK I've revived my patch and rebased on latest trunk.
Comments welcome. I'll push this in a couple of days if nobody objects.
Some small comments:
* diff_bytes needs to be documented in the difflib docs and in Doc/whatsnew/3.5.rst.
* diff_bytes needs to be added to difflib.__all__
* This looks like a new feature to me, so it would be better to just commit it to the default branch.
+ except AttributeError:
+ raise TypeError('all arguments must be bytes, not %r' % s)
This could be changed to raise TypeError(...) from None
+ self.assertTrue(
+ isinstance(line, bytes),
assertIsInstance
+ try:
+ list(difflib.unified_diff(a, b, fna, fnb))
+ self.fail('expected TypeError')
+ except TypeError:
+ pass
with self.assertRaises(TypeError):
list(difflib.unified_diff(a, b, fna, fnb))
looks more readable to me.
I tried the [Create Patch] button. Two problems: the result is about 90% concerned with other issues; it is not reviewable on Rietveld. I will unlink it and upload a cut-down version.
Wtiht the test changes suggested by Berker, I agree that it is time to apply this, with whatever decision we make about 3.4.
I am sympathetic to the notion that there is a regression from 2.x. There is precedent for adding a feature to fix a bug (in difflib, a new parameter for SequenceMatcher, for 2.7 3 (or thereabouts)). However, doing so was contentious (discussed on pydev) and not meant to be routine. The bug being fixed had been reported (as I remember) on four separate issues by four people and seconded by other people, so we really wanted the fix in 2.7.
Would the following compromise work for Mercurial? The patch already adds a new private function _check_types. For 3.4, also add _diff_bytes as a private function. Merge both into 3.5. Create a 3.5 patch that makes _diff_bytes public by renaming it to diff_bytes, adds the new tests, and documents the new feature. The What's New entry could mention that the function was added privately in 3.4.4.
Now the review button appears for the big patch. Lets see if another submission makes it appear for the smaller version.
Changes to 3.4 aren't going to help Mercurial. Given that bytes formatting is new in 3.5, I won't be attempting a port that supports a version of Python 3 any older than 3.5. At this point, just a difflib.diff_bytes method in 3.5 would be sufficient to satisfy my needs.
I think the convert to str -> process as str -> convert back to bytes approach is a good one - it's the same one we use in urllib.parse.
In this case, since we explicit need to handle mixed encodings, I also agree with the idea of using surrogate escape to make it possible to tunnel arbitrary bytes through the process, and expose that as a new module level API for Python 3.5.
Just uploaded. Pretty sure I've addressed all of @berker.peksag's review comments: thanks for that!
I also fixed a number of subtle bugs in the tests. Pro tip: when asserting that something raises TypeError, inspect the exception message. There are many many ways that Python code can raise TypeError *other than* the condition you thought you were testing for. ;-)
I'm happy with this patch now unless anyone else spots problems.
@durin42: have you been trying this patch with your Mercurial-on-Python-3.5 patches? This would be a good time to re-update your difflib.py.
Thanks, looks great. Two trivial comments about the documentation:
* Needs ``.. versionadded:: 3.5``
* *dfunc(a, b, fromfile, tofile, fromfiledate, tofiledate, n, lineterm)* -> ``dfunc(a, b, fromfile, tofile, fromfiledate, tofiledate, n, lineterm)``
New changeset 1764d42b340d by Greg Ward in branch 'default':
#17445: difflib: add diff_bytes(), to compare bytes rather than str | https://bugs.python.org/issue17445 | CC-MAIN-2020-16 | refinedweb | 2,747 | 65.22 |
June 19, 2018•9 min read
For an introduction to functional programming concepts, see my previous post functional programming 101
Hi there, I hope you all had a nice week so far, it's Thursday, maybe you're already exhausted as I am ...
... Come on! let's have a small break, take fresh air for our brain and learn new things! 😉
Here is a retrospective on some useful functional programming techniques that I enjoy using these days when programming in JavaScript.
They make me even more productive as when it comes to developing apps.
I hope they will help you as well !
Functional programming techniques are techniques induced by the purity properties or/and the first-class properties of functions.
A Functional Abstraction is replacing a specific part of the code of some function by a functional parameter.
function applyTwiceExp(n) { return exp(exp(n)) } /* Abstraction */ function applyTwice(n, f) { return f(f(n)) }
Therefore, this abstraction makes it more generic. This technique is the most important and the most used FP technique. It is usually used with lambda expressions.
// ES5 applyTwice(3, function(x) { return x + 10 }) // -> 23 // ES6 applyTwice(3, () => x + 10) // -> 23
According to Haskell Wiki :
Function composition is the act of pipelining the result of one function, to the input of another, creating an entirely new function.
Composition makes relationships explicit. We compose entities to make the relationships between them explicit.
let compose = (f, g) => (x) => f(g(x))
const square = (x) => x*x const sqrt = (x) => Math.sqrt(x) let identity = compose(lowerCase, sqrt) identity(3) // --> 3
const lowerCase = (s) => s.toLowerCase() const trim = (s) => s.trim() let lowerCase_and_Trim = compose(lowerCase, trim) lowerCase_and_Trim("HeLLO ! ") // --> "hello !"
Decomposition make responsibilities of each function explicit.
It helps us to isolate processes and make testing easier. This is why we need to name our functions :
fetchProfiles() .then((response) => { return response.json() }) .then((collection) => { return collection.map((obj) => new Profile(obj) ) }) .then((profiles) => { profiles .map((p) => '<li>' + p.name + '</li>') .forEach((li) => $('.profiles-list').append(li)) }) .catch((err) => { $('.error-tooltip').text(err.message) })
This code is valid but the chunks of code in each
.then(...) statement is responsible for one distinct operation on the data. These processes are not tightly coupled and can be extracted for a better readability :
let toJSON = (response) => response.json() let transformData = (collection) => { return collection.map((obj) => new Profile(obj)) } let renderToDOM = (profiles) => { profiles .map((p) => '<li>' + p.name + '</li>') .forEach((li) => $('.profiles-list').append(li)) } let handleErrors = (err) => { $('.error-tooltip').text(err.message) } // This data flow is sooo easy to understand now fetchProfiles() .then(toJSON) .then(transformData) .then(renderToDOM) .catch(handleErrors)
Here we have extracted and decomposed the previous code in functions. Each function have a single responsibility, are named and can be reused in another context.
I like to say that naming and extracting functions let us write literature instead of code.
Now writing the essence of the data flow in our program is like writing sentences. The words in these sentences are the name of our functions.
That's why naming is so hard and so important !
Naming has to be specific when the extracted function has tight coupling and generic when it is.
let toJSON = (response) => response.json() let transformJSONToProfiles = (collection) => { // Specific naming return collection.map((obj) => new Profile(obj)) } let renderProfilesToDOM = (profiles) => { // Specific naming profiles .map((p) => '<li>' + p.name + '</li>') .forEach((li) => $('.profiles-list').append(li)) } let handleErrors = (err) => { // Generic naming $('.error-tooltip').text(err.message) } // The sentence you've just written : fetchProfiles() .then(toJSON) .then(transformJSONToProfiles) .then(renderProfilesToDOM) .catch(handleErrors) // It fetches profiles data, // and then transforms it to JSON, // and then transforms JSON Data to Profiles objects, // and then renders Profiles to the DOM.
The curried form of a function f accepting n arguments consists in representing f as an imbrication of n functions accepting one argument.
Thanks to the first-class citizenship of functions, we can write :
// in ES5 function f (x,y) { return x + y } /* Currying magic ! */ function f_curry (x) { return function(y) { return x + y } } // in ES6 const f = (x, y) => x + y /* TADA ! */ const f_curry = (x) => (y) => x + y
f(3, 4) // -> 7 f_curry(3)(4) // -> 7
We're now able to partially apply our functions. Let's go to the next section !
Partial application is delaying instantiation of some of the parameters of a function.
It may be useful when we need to parameter some process for a later use.
// ES5 function prepareDrink(verb) { // Outer function return function(input) { // Inner function return "I'm " + verb + " " + input + " !" } } // ES6 const prepareDrink = (verb) => (input) => `I'm ${verb} ${input} !` // Let's partially apply 'prepareDrink' var makeHotBeverage = prepareDrink('brewing') var makeJuice = prepareDrink('pressing') makeHotBeverage('coffee') // -> "I'm brewing coffee !" makeJuice('oranges') // -> "I'm pressing oranges !"
Let's decompose this example :
We're partially applying
prepareDrink function by setting a value to the first parameter which is the parameter of the outer function.
prepareDrink('brewing') returns the inner function and we're now able to store it inside a variable for a later use.
var makeHotBeverage = prepareDrink('brewing') console.log(makeHotBeverage) // --> function (input) { return "I'm " + verb + " " + input + " !" } // This is the code of the inner function.
prepareDrinkis a pure function, thanks to the principle of Referential transparency, we can now write :
makeHotBeverage('coffee') === prepareDrink('brewing')('coffee')
These two expressions are totally equivalent.
However, partial application of a function depends on the order of the parameters of f.
prepareDrink('coffee')('brewing') !== prepareDrink('brewing')('coffee')
Memoization is an optimization technique used to speed up computer programs by caching the results of expensive function calls and returning the cached result when the function is called with the already known inputs.
function memoize(f) { f.cache = f.cache || {}; return (...args) => { let key = 'key_' + args.join('') if(f.cache[key] !== undefined) { console.log('Cache hit ! -> ', f.cache[key]); return f.cache[key] } else { console.log('Not cached ...'); f.cache[key] = f(...args); return f.cache[key] } } }
function fibonacci(n) { if(n === 0) { return 0 } else if (n === 1) { return 1 } else { return fibonacci(n-1) + fibonacci(n-2) } } var memoized_fibo = memoize(fibonacci) memoized_fibo(6) // 'Not cached ...' memoized_fibo(5) // 'Not cached ...' memoized_fibo(5) // 'Cache hit ! -> 5' memoized_fibo(6) // 'Cache hit ! -> 8'
I don't claim to have the best memoization algorithm around ! If you strive for performance and reliability, better use more optimized and well-tested libraries like Lodash 😉
Data-driven programming permits to change a program logic and flow, not by modifying the code, but the data it uses.
The organization of a data-driven program is based on two ingredients:
Functional data-driven programming permits to change a program logic and flow, not by modifying the code, but the sets of functions it uses, exploiting their first-class citizenship properties.
Let's define some structure for our Photos API object :
const config = { methods : [ { name: 'getPhotos', url: 'photos', type: 'GET', beforeRequest: (params) => params, afterRequest: (result) => { return result.map(json => new Photo(json)) } }, { name: 'getPhoto', url: 'photos/:id', type: 'GET', beforeRequest: (params) => params, afterRequest: (result) => { return new Photo(result) } } ] }
This is our config file. It looks like a JSON object.
Our API will have two methods :
getPhotos(params) and
getPhoto(params)
Now, here comes the function
createApi. It is responsible for creating a usable Api object based on the config.
It leverages the power of
Array.reduce method to create each method defined in the config file on the
PhotoApi object.
function createApi(config) { var PhotoApi = {}; config.methods.reduce((Api, method) => { PhotoApi[method.name] = (params) => { return Promise .resolve(method.beforeRequest(params)) .then((newParams) => { return fetch(url, newParams) }).then(method.afterRequest) } return Api }, PhotoApi) return PhotoApi }
createApi is where lives the implementation details of the logic make the network call.
Let's describe what it does step by step :
var PhotoApi = {};
PhotoApi[method.name] = /*...*/
PhotoApi[method.name] = (params) => { /*...*/ }
We return a new Promise chain starting with the result of
method.beforeRequest(params) then pipelining the new params to the actual network call using the fetch API
then((newParams) => fetch(url, newParams)) and then conveying the request result through
method.afterRequest(result) so the result can be transformed to instances of objects used in the app.
Promise .resolve(method.beforeRequest(params)) .then((newParams) => { return fetch(url, newParams) }).then(method.afterRequest)
Let's use it !
var PhotoApi = createApi(config.methods) // creates a new namespace called 'PhotoApi' PhotoApi.getPhoto({id: 3}) .then(photo => console.log(photo)) // --> Photo { id: 3, url: '...' }
Now our
PhotoApi namespace is completely parametrized by the config file.
For instance note that, here we are using Promises and the fetch API, but if we needed to use something else (Observables ? jQuery Ajax ? Restangular ?), it's still possible to create a new
createApi function using another library instead of the duo Promises/fetch.
Mixing closures with imperative features allows one to design lightweight objects.
A closure can then contains mutable states accessed through code within the closure (this code is run using arguments of the closure, which can then be considered as the classic “message semantics” of OOP).
It is possible to express some simple OO-like solutions without using the OOP mechanisms.
This technique works best in dynamically-typed languages (for example JavaScript, Python)
In JavaScript, we often use this technique to create modules and simulate data privacy :
function createCounter() { var count = 0; function updateCounter(value) { count += (value) ? value : 0 console.log('count :', count) } return { updateCounter: updateCounter } } var counter = createCounter() // We created a module ! 'count' variable is enclosed in 'counter' console.log(counter.count) // --> undefined counter.updateCounter(7) // count : 7 counter.updateCounter(-3) // count : 4 counter.updateCounter(-2) // count : 2
The
createCounter function is a factory function.
And that's it !
I hope these functional programming techniques will help you improve your coding skills and make you more efficient.
If you have any suggestions or you think I missed something in that post, feel free to say it the comments, it's much appreciated 😊
By Yannick Spark : a Front-end engineer who works remotely at Teacup Analytics.
He likes Functional programming, Nutrition & Fasting, and Remote work.
You should definitely Follow @yannickdot on Twitter 👋 | https://sparkyspace.com/functional-programming-techniques/ | CC-MAIN-2018-34 | refinedweb | 1,667 | 50.43 |
Clean way to change between mutlible pictures
svein kristian nykaas
Greenhorn
Joined: Apr 15, 2012
Posts: 11
posted
Apr 15, 2012 11:57:53
0
Hello, I want to make a game, but before even attempting that I want to get some of the basics of it down and generally have a rather ok\clean code to work from.
What I am basically trying to have is a picture that shows like a world-map, and when you click on some part of the world-map your taken to a new picture showing a bit more detailed information of the area(and so on depending on amount of areas).
The way I'm doing this is to get the x\y coordination from the mouse, and then change the picture to another picture if it is within the coordination, however this would mean I'd need unthinkable amount of if\else statements, especially considering different conditions and states(which is fine, but it seems way too unorganized).
I did try to create a class for each location, however when I did that I didn't manage to neither go back or change to another location(which was another class itself).
So I'm just wondering, is there any cleaner ways to do this then having either a million of functions for each location within 1 class? Or having it all spammed down with lots of if\else's and\or functions in the main class?
*The code of the latest attempt*
import java.awt.*; import java.awt.event.*; import java.net.*; public class GameTest extends Frame implements MouseListener { private Image img; private boolean gotoworld = false; // to check if outside of world map public GameTest() { addMouseListener(this); // start at the world map URL myurl = this.getClass().getResource("images/Map-1.png"); Toolkit tk = this.getToolkit(); img = tk.getImage(myurl); addWindowListener(new WindowAdapter() { @Override public void windowClosing(WindowEvent e){System.exit(0);}}); } @Override public void paint(Graphics g) { g.drawImage(img,0,10,this); } @Override public void mousePressed(MouseEvent e) { // go back to the world map if(gotoworld == true) { if((e.getX() >= 7) & (e.getX() <= 50) & (e.getY() >= 5) & (e.getY() <= 50)) { URL myurl = this.getClass().getResource("images/Map-1.png"); Toolkit tk = this.getToolkit(); img = tk.getImage(myurl); gotoworld = false; } } // goes to the mountain if((e.getX() >= 43) & (e.getX() <= 305) & (e.getY() >= 152) & (e.getY() <= 254)) { URL myurl = this.getClass().getResource("images/Map-2.png"); Toolkit tk = this.getToolkit(); img = tk.getImage(myurl); gotoworld = true; } repaint(); } // end mousePressed @Override public void mouseClicked(MouseEvent e) {} @Override public void mouseReleased(MouseEvent e) {} @Override public void mouseEntered(MouseEvent e) {} @Override public void mouseExited(MouseEvent e) {} public static void main(String[] args) { GameTest gt = new GameTest(); gt.setSize(1000,600); gt.show(); } }// end main-class
Michael Dunn
Ranch Hand
Joined: Jun 09, 2003
Posts: 4632
posted
Apr 15, 2012 13:37:40
0
one way might be to consider your main map like a grid, and each smaller map fits one of the grid squares.
e.g. main map is (500,500) so make each smaller map (50,50), so you have a 10 x 10 grid
with these smaller maps in a 2D array[10][10]
lets say user clicks at (246,180), you'd then show the map from array[246/50][180/50]
svein kristian nykaas
Greenhorn
Joined: Apr 15, 2012
Posts: 11
posted
Apr 16, 2012 00:14:20
0
Good suggestion, but I can't really picture it done that way.
as you can see bellow, that's basically how you travel the "world" and to different locations etc. So need some way to make each location unique, but also generally easily accessable by going back and forth.
"Worldmap":
"after clicking the mountain"
"after clicking the forest"
"after clicking the cottage in the forest"
Michael Dunn
Ranch Hand
Joined: Jun 09, 2003
Posts: 4632
posted
Apr 17, 2012 01:41:31
0
after looking at the pics, I'd still be inclined to create multiple rectangles around the specific areas,
keeping those rectangles in an array, looping the array to see if rectangle.contains(pointClicked),
if true, show the zoomed
image
of that location/rectangle.
repeat for the zoomed locations/rectangles.
e.g. you mention a world map, rectangles around each continent, as a starter?
all the way down to the pic of the cottage in the forest, where I'd imagine the only
thing to zoom in on is the cottage, so that would be the only rectangle in that image,
clicking anywhere else would do nothing.
I agree. Here's the link:
subject: Clean way to change between mutlible pictures
Similar Threads
Resize JLable Runtime with selection border
how to jude button1 & button are clicked simultaneously?
How to get default menu items from JDialog?
setBounds() does not work????
GUI Issues. Ghosting when moving JLabel via Mouse
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/573564/GUI/java/Clean-change-mutlible-pictures | CC-MAIN-2015-22 | refinedweb | 826 | 61.77 |
I am currently working on a number of small applications for personal use, all of which require a password to keep the data and application more secure. To ensure that I can only enter a strong password, I decided to create a password strength control which would display how strong the password is - like you get when signing up with lots of websites - where they say Weak, Good, Strong, or Very Strong. To this end, I looked on the Internet for any code, and I could not find much. I did find this website:[^]. This website seems to me to have a good way of checking password strength, not just checking length or upper and lower case letters. This website also allows you to download the source for this, but it is in JavaScript, and I am writing a C# application, so I decided to use this method of checking the password strength and write my own implementation.
Below is a screenshot of the demo application I used to test the code. The actual PasswordStrengthControl is the brightly coloured box containing the word 'Good'. The table below contains the details of how the password is scored.
PasswordStrengthControl
The code is split into a class to check the password (PasswordStrength.cs) and a UserControl class (PasswordStrengthControl.cs). There is nothing special about the code. The PasswordStrength class determines the password strength and allows the caller to get the strength as a value (0 to 100), a textual description (Very Weak, Weak, Good, Strong, Very Strong), and a DataTable containing the details of the reason for the score.
UserControl
PasswordStrength
DataTable
The scoring is split into two sections - Additions and Deductions.
In the additions section of the code, we add to the overall score for things which make the password 'good'. In my code, we check the following:
Requirements are:
Char.IsSymbol(ch)
Char.IsPunctuation(ch)
In the deductions section of the code, we subtract from the overall score for things which make the password 'weak'. In my code, we check the following:
Using the code could not be simpler. Add the PasswordStrength.cs file to your project, and then add the namespace to your using section. Then use the code below. All it does is to create a new object of type PasswordStrength, and then you set the password, and read back the score and other details as needed.
using
PasswordStrength pwdStrength = new PasswordStrength();
pwdStrength.SetPassword("PasswordUnderTest");
int score = pwdStrength.GetScore();
string ScoreDescription = pwdStrength.GetPasswordStrength();
DataTable dtScoreDetails=pwdStrength.GetStrengthDetails();
To use the user control, add the PasswordStrength.cs and PasswordStrengthControl.cs files to your project. Add the namespace to your using section, and build the code. Then, drag and drop the PasswordStrength control onto your Windows Form. In the code, you can call the SetPassword(string Password) method of the control. The control will update itself accordingly.
SetPassword(string Password)
That is all there is to the code. It is not complex, but solves a small problem. You can use the code as you like, but please let me know if you do use the code.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
public string GetPasswordStrength()
{
int nScore = GetPasswordScore();
string sComplexity = "Very Weak";
if (nScore > 100) { nScore = 100; } else if (nScore < 0) { nScore = 0; }
if (nScore >= 0 && nScore <= 20) { sComplexity = "Very Weak"; }
else if (nScore > 20 && nScore < 40) { sComplexity = "Weak"; }
else if (nScore >= 40 && nScore <= 60) { sComplexity = "Good"; }
else if (nScore >= 60 && nScore <= 80) { sComplexity = "Strong"; }
else if (nScore >= 80 && nScore <= 100) { sComplexity = "Very Strong"; }
return sComplexity;
}
/// <summary>
/// Returns score for the password passed in SetPassword
/// </summary>
/// <returns></returns>
public int GetPasswordScore()
{
// Use Google's methodology;
int nScore = 0;
// If password length > 4
if (PreviousPassword.Length > 4) nScore += 20;
// if password has both upper and lower case
if (PreviousPassword.ToUpper() != PreviousPassword &&
PreviousPassword.ToLower() != PreviousPassword) nScore += 20;
// if password has a number
if (PreviousPassword.IndexOfAny(new char[] { '1', '2', '3', '4', '5', '6', '7', '8', '9', '0' }) >= 0)
nScore += 20;
// if password has any special characters
if (PreviousPassword.IndexOfAny(new char[] { '.', '[', '!', '@', '#', '$', '%', '^', '&', '*', '?', '_', '~', '-', '(', ')', ',', ']', '\'' }) >= 0)
nScore += 20;
// if password length > 12
if (PreviousPassword.Length > 12) nScore += 20;
return nScore;
/* if (dtDetails != null)
return (int)dtDetails.Rows[0][5];
else
return 0;
*/}
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/59186/Password-Strength-Control?fid=1561125&df=90&mpp=10&sort=Position&spc=None&tid=3377056 | CC-MAIN-2016-07 | refinedweb | 739 | 63.8 |
Opened 6 years ago
Closed 6 years ago
Last modified 3 years ago
#9315 closed (fixed)
Keyword arguments with spaces and the url tag
Description
Hi, I asked about this on the django-users group and was told it was probably a bug so I'm reporting it:
I was testing named url patterns and I have something like this in my URLConf:
url(r'^search/(?P<words>.*)$', 'books.views.search', name='search_page'),
The view is defined like this:
def search(request, words):
Now I'd like to print a link to the search page with certain words from a template and used the url tag like this:
<p>{% url search_page words="someword" %}</p>
When viewing on the browser I get something like '/search/someword', which is good. My question is how do I pass more than one word in the 'words'
parameter?
If I do this:
<p>{% url search_page words="someword otherword" %}</p>
I get this error:
TemplateSyntaxError at (my page name here) Could not parse the remainder: '"someword' from '"someword
Can the url tag handle parameters with spaces?
Attachments (2)
Change History (15)
comment:1 Changed 6 years ago by emulbreh
- Keywords tplrf-fixed added
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 6 years ago by airrob
- Cc airrob added
comment:3 Changed 6 years ago by jacob
- milestone set to 1.1
- Triage Stage changed from Unreviewed to Accepted
comment:4 Changed 6 years ago by nessita
- Keywords pycamp2009 added
- Owner changed from nobody to nessita
- Status changed from new to assigned
Changed 6 years ago by nessita
comment:5 Changed 6 years ago by nessita
- Has patch set
Added patch to solve this issue. Fixed involved debugging and improvments of the smart_split from [browser:/django/util/text.py], plus a simplification of split_contents at [browser:django/template/__init__.py].
All tests passes except for test_templates (regressiontests.templates.tests.Templates) which was already failing.
Made by nessita (Natalia Bidart) and matiasb (Matías Bordese, not registered yet).
Contributors understanding regexs: Ramiro Morales, Facundo Batista, Pablo Ziliani, John Lenton.
comment:6 Changed 6 years ago by nessita
Last night, while being unable to sleep, I figured out that the non-greedy part of the proposed regular expression (in teh pacth attached) can be changed to a greedy expression, improving the performance of it.
So, where it reads:
smart_split_re = re.compile(r"""(\S*?"(?:[^"\\]*(?:\\.[^"\\]*)*)"\S*| # matches '"value with spaces"' and 'keyword="value with spaces"' \S*?'(?:[^'\\]*(?:\\.[^'\\]*)*)'\S*| # same as above but with quotes swapped \S+) # matches not whitespaces""", re.VERBOSE)
the \S+?" can be changed to [^\s"]" (same for the ' character). I'll submit a new patch in a while, prior some checkings.
comment:7 Changed 6 years ago by nessita
- Cc matiasb added
comment:8 Changed 6 years ago by nessita
Attaching new patch with greedy regular expression (to avoid backtracking).
Changed 6 years ago by nessita
comment:9 Changed 6 years ago by mtredinnick
I'm not going to apply the changes to the split_contents() portion, since they don't appear to fix an existing problem I'm not convinced they don't introduce a bug. We should fix on problem per ticket.
If somebody wants to fix up that portion (and if it can be reduced to one line, that would be great), the patch should include test of the i18n pieces in that function. It's clearly looking at _("...") strings, but I don't see anything in the replacement code that handles that. It might well be being handled elsewhere automatically (so we're doing it twice now) and that will be demonstrated by the tests in the new ticket and patch.
comment:10 Changed 6 years ago by mtredinnick
- Resolution set to fixed
- Status changed from assigned to closed
comment:11 Changed 6 years ago by mtredinnick
comment:12 Changed 6 years ago by nessita
Malcom,
you're welcome! We had the most fun proposing a solution for this one. We would love to keep contributing to the django project.
comment:13 Changed 3 years ago by jacob
- milestone 1.1 deleted
Milestone 1.1 deleted
This would be fixed by #7806. | https://code.djangoproject.com/ticket/9315 | CC-MAIN-2015-14 | refinedweb | 692 | 68.3 |
NAME
strftime - format date and time
SYNOPSIS
#include <time.h>
size_t
strftime(char *s, size_t max,
const char *format,
const struct tm *tm);
DESCRIPTION rules governing date representation with the E modifier can be obtained by supplying ERA as an argument to a nl_langinfo(3). One example of such alternative forms is the Japanese era calendar scheme in the ja_JP glibc locale.).
,), nl_langinfo(3), setlocale(3), sprintf(3), strptime(3)
COLOPHON
This page is part of release 5.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://man.cx/strftime%3C/a | CC-MAIN-2020-29 | refinedweb | 105 | 55.24 |
The JDBC Nested Result Set is the simplest join algorithm. In this case for each tuple in the outer join relation, the entire inner join is scanned and the tuple matches the join condition are finally added to the result set.
Understand with Example
In this Tutorial we want to describe you a code that helps in understanding JDBC Nested Result Set. The code include a class JDBC Nested Result Set, Inside the class we have a main method, follow the list of steps -
Import a package java.sql provides you a network interface, that enables to communicate between front end and back end.
Loading a driver by making a call class.forName ( ),that accept driver class as argument.
DriverManager.getConnection ( ) : This provides you to established a connection between url and database.
createStatement ( ) : The create Statement is used to obtain sql object. This object is used to send and execute sql queries in backend of the database.
executeQuery ( ) :This method is used to retrieve the record set from a table and store the record set in a result set. The select statement is used for this method.
next ( ) : This method is used to return the next element in the series.
getString ( ) : This method retrieve the value of the specific column in the current row of result set object as string representation.
Finally the println print the ID,Name,Class in the output. In the same way if the tuple in the result set 1 matches with the tuple result set 2,The output will be printed In case there is an exception in try block, the catch block caught and handle the exception.JdbcNestedResultset.java
import java.sql.*; public class JdbcNestedResultset { public static void main(String args[]) { Connection con = null; Statement st1 = null; Statement st2 = null; ResultSet rs1 = null; ResultSet rs2 = null; String url = "jdbc:mysql://localhost:3306/"; String db = "komal"; String driver = "com.mysql.jdbc.Driver"; String user = "root"; String pass = "root"; try { Class.forName(driver); con = DriverManager.getConnection(url + db, user, pass); st1 = con.createStatement(); st2 = con.createStatement(); String sql = "Select * from stu"; rs1 = st1.executeQuery(sql); System.out.println("Id\tName\tClass\tLibno"); while (rs1.next()) { String id = rs1.getString("id"); System.out.print(id + "\t"); System.out.print(rs1.getString("name") + "\t"); System.out.print(rs1.getString("class") + "\t"); rs2 = st2.executeQuery("select * from lib where id ='" + id + "'"); while (rs2.next()) { System.out.println(rs2.getString("libno") + "\t"); } } } catch (Exception e) { e.printStackTrace(); } } }
Output
If you enjoyed this post then why not add us on Google+? Add us to your Circles
Liked it! Share this Tutorial
Discuss: JDBC Nested Resultset
Post your Comment | http://roseindia.net/jdbc/Jdbc-nested-result-set.shtml | CC-MAIN-2014-42 | refinedweb | 437 | 59.7 |
1. Getting Started - caligrafy/caligrafy-quill Wiki
RequirementsRequirements
- PHP > 7.2
- MySql > 5.6
- A Apache/MySql/PHP server already installed locally
- Git and Composer
- curl, mbstring, openssl, mcrypt, gd, headers and redirect modules must be enabled in your servers
Introduction to CaligrafyIntroduction to Caligrafy
In this video we introduce you to the Caligrafy framework and to the different components that are used in it.
Join the Caligrafy CommunityJoin the Caligrafy Community
There are several ways for you to interact with the Caligrafy community
- github: You can always use github to stay up to date with the roadmap of Caligrafy, to post issues and to track when feature requests and issues are done
- slack: Joining our slack group is a great way to exchange with other members of the community, to get help on using the framework and to discuss any issues or features. Join our slack community
- facebook Caligrafy Group: Joining our Caligrafy group on facebook gives you more ways to interact with the community and to share success stories. Join our facebook group
InstallationInstallation
Quick InstallationQuick Installation
- Pull the code from github (You can either clone the repo or download the zip file)
- It is recommended to place the repo at the Server Document Root level
- run
composer installto get all the dependencies needed
- run
php caligrafer.php initializefrom the terminal to initialize the framework. Alternatively, you can also run
.bin/caligrafer initialize
- You are good to go!
- You can test if the framework is working by visiting:<server port, default 80>/<caligrafy root folder. default: caligrafy-quill>in the browser. If caligrafy is not installed at the Server Document Root level, refer to the
Different Root Foldersection below.
- If the quick installation does not complete successfully, proceed with the manual installation
Manual InstallationManual Installation
- Pull the code from github (You can either clone the repo or download the zip file)
- It is recommended to place the repo at the Server Document Root level
- Go to the downloaded repo and create a .env file by copying the example
cp .env.example .env
- Create an
APP_KEYand an
API_KEYin the .env file. You can use Caligrafer to generate API keys for you by running
php caligrafer.php generatekeysand adding the generated keys to the .env file
- Add the following to the .env file if not present:
APP_ROOT=<caligrafy root folder. default: caligrafy-quill>. If caligrafy is not installed at the Server Document Root level, refer to the
Different Root Foldersection below.
- Change the other values in .env file to match your local or production server settings
- run
composer installto get all the dependencies needed
- make the folder
/public/uploads/writable if you intend to allow uploads in your application. You will need to run the command
sudo chmod -R 777 /public/uploads
- You are good to go!
- You can test if the framework is working by visiting:<server port, default 80>/<caligrafy root folder. default: caligrafy-quill>in the browser.
Different Root FolderDifferent Root Folder
The default root folder is
caligrafy or
caligrafy-quill (depending on what version of caligrafy you got). To change the name of this root folder:
- Rename the folder
caligrafyto
your-app-nameOR
git clone <caligrafy git repository> your-app-name
- Change the
APP_ROOTentry in the
.envfile to
your-app-name
Placing the Root Folder in subfolders within the Server Document RootPlacing the Root Folder in subfolders within the Server Document Root
If you decide to place the Caligrafy repo in subfolders within the Server Document Root, you will need to modify the .env file accordingly regardless whether you picked the automatic or manual installation:
- Change/Add
APP_ROOTentry in the
.envfile to the
full path leading to the caligrafy root folder from server document root
CaligraferCaligrafer
Caligrafer is a command line tool that comes with Caligrafy to help you generate keys for your application and its corresponding API. Using Caligrafer is optional. Caligrafer comes prepackaged with the several tools and utilities that helps accelerate your work, here are some of the commands:
Caligrafer help: At all time you can know what commands are available to you by running
php caligrafer.phpfrom the Caligrafy root.
Framework Initialization: The framework is automatically initialized for you to start using instantly. You can run
php caligrafer.php initializefrom the Caligrafy root.
Create a Vue app: Scaffolds a Vue app for you to use immediately. You can run
php caligrafer.php create <app_name>from the Caligrafy root.
Create a Bot app: Scaffolds a Bot app for you to use immediately. You can run
php caligrafer.php bot <app_name>from the Caligrafy root.
Create a Face Detection and Recognition app: Scaffolds a ready to use face detection and recognition app. You can run
php caligrafer.php facedetect <app_name>from the Caligrafy root.
Create a Machine Learning app: Scaffolds a ready-to-use machine learning app for you to explore. You can run
php caligrafer.php ml <app_name>from the Caligrafy root.
Generate Keys: Generate keys will generate an APP_KEY and an API_KEY that you can add to the
.envfile you created before. You can run
php caligrafer.php generatekeysfrom the Caligrafy root.
Generate API key: Generate a new unique API key every single time that you can distribute to third-parties. You can run
php caligrafer.php generateapikeyfrom the Caligrafy root.
Virtual HostsVirtual Hosts
Creating a virtual host is optional. Caligrafy does not need the creation of virtual hosts to operate. However, any virtual host used will need to ensure to
AllowOverride All, otherwise Caligrafy may not function properly.
To create a virtual host to point to the framework:
- In your local machine, locate the
httpd-vhosts.confof your apache server. Usually it is located in
/apache2/extra
- In a production server, locate the
000-default.confthat is typically in
/etc/apache2/sites-available
- Add a virtual host:
<VirtualHost *:80> ServerName <example.loc> DocumentRoot "<path-to-server-root>/caligrafy" <Directory "<path-to-server-root>/caligrafy"> Options FollowSymLinks AllowOverride All Require all granted </Directory> </VirtualHost>
- Restart your apache server
- Add a virtual host in your local machine by opening
/etc/hostsfile and adding the following:
127.0.0.1 example.loc
Caligrafy NamespaceCaligrafy Namespace
The Caligrafy framework is all in the namespace
Caligrafy. Every class that you create will need to call the appropriate classes from the namespace.
- If you are creating a Controller:
use Caligrafy\Controller; class MyController extends Controller { // your code goes here }
- If you are creating a Model:
use Caligrafy\Model; class MyModel extends Model { // your code goes here }
Caligrafy Apache/MySQL/PHP server and Virtual MachineCaligrafy Apache/MySQL/PHP server and Virtual Machine
Build your own local Apache/MySQL/PHP serverBuild your own local Apache/MySQL/PHP server
If you would like to build your own server, you will need to go through the following setup items:
Prepare your machine terminal
Prepare your development environment
Download a code editor (recommended: Brackets, Sublime or Visual Studio)
Caligrafy Virtual MachineCaligrafy Virtual Machine
If you prefer not to install an Apache/MySQL/PHP server on your local machine, Caligrafy has a prepackaged virtual machine that works with Oracle Virtual Box. Learn more about Caligrafy VM
Caligrafy Virtual MachineCaligrafy Virtual Machine
Learn how to use install and set up a Caligrafy Virtual Machine
MVC Framework
This framework is based on a Model-View-Controller architecture that separates the concerns between back-end and front-end. The framework relies on the following structures:
- Database: The framework supports all the SQL databases that PDO supports. We recommend using MySql given its tight integration with PHP
- Model: The model is the back-end that contains all the data logic. It is the layer that interfaces with the storage and manages the data, whether it be a database, a JSON object. Caligrafy uses databases for data storage and therefore the model becomes the object representation of the main objects/entities of an application. In other words, every table in your database that represents an entity should have an equivalent model created in the framework
- Controller: The controller in an MVC architecture is the brain of the application that controls how the information is displayed. It is the logic layer that is responsible for interfacing with the model and rendering the data that will be displayed in the View
- View: The view is the front-end part that is responsible for displaying a user interface with the data that the controller passes on to it
Framework Fundamentals
This framework relies on several foundational concepts that are worth detailing. A good understanding of those fundamental concepts gets the developers up and running more efficiently. We will use an online catalogue of books (like Amazon) as an example throughout this documentation.
Routes: Routing is a key element to any framework. It uses the attributes of a user request to inform the application on what action needs to be taken on what object. That process of deconstructing requests and translating them into a call to engage controllers is called Routing. For example, if you are accessing your browser, this is sending a GET request on a BOOK object. In other words, this is requesting to view a list of books. Routing is responsible for redirecting such a URL to the controller in the application that is/will be responsible for viewing the list of all books.
Controllers: Controllers are a logical grouping of actions that can be taken on an object. For every object or entity, there should be one controller consisting of a group of actions that can be performed on that object (Create, Read, Update, Delete, etc.). Routing allows redirecting to a specific action in the controller.
Views: Caligrafy has two different ways of building views. One for rapid development that uses a server-side template engine to render HTML pages. The other one uses Vue.js which is a more sophisticated client-side framework that is suitable for building more delightful user experiences.You could learn more about Pug (or Phug the PHP equivalent) here
API & JSON: This framework relies on REST API fundamentals. Every route that is created could either be accessed from a browser or from any application capable of making requests. The Caligrafy View is intelligent to know whether to render a graphical user interface or a response in a JSON format.
Models: In this framework, models are meant to be an object representation of all objects/entities of an application. For example, in an online bookstore example, books need to be represented by a Book model. Every model has a table associated with it in the database. Unlike other frameworks, this framework does not implicitly associate a model with a table (like Laravel for example). Every Model needs to be explicitly associated with a table. That gives the flexibility of separating terminology concerns between database tables and their object representations in the framework. However, all the required table columns that have no default values need to be represented in the model. Pivot tables (or many-to-many database tables) don't need to be created in the database nor represented in the Model. This framework creates these tables from the moment a many-to-many relationship is defined between Models (more on this later).
Relationships: This framework supports one-to-one, one-to-many and many-to-many relationships between models in both directions. Each relationship is defined as part of the Model. For example, if books and publishers are two models in our example and one book has one publisher then the one-to-one relationship needs to be defined in the Book Model (hasOne relationship). Similarly, the database needs to have a foreign key in the Book table that references the id in the Publisher table. For this to happen smoothly, a naming convention needs to be followed.
Naming conventions: This framework does not put constraints on the naming conventions as much as other frameworks. The only naming convention is around the table keys in the database.
- id: Every table must have
idas its primary key no matter what the table name is. It is therefore recommended to avoid naming primary keys
book_idor
bookId. The primary key must always be
idand cannot be overridden.
- Foreign keys: the foreign keys in the database should be of the format
modelname_id. Notice that the foreign key takes the name of the Model and NOT the name of the table in the database. The foreign keys name could be overridden if need be.
- Pivot table naming: the framework takes care of creating the pivot tables for your convenience. The pivot table will be named depending on which model the many-to-many (belongsToMany relationship) has been called from first.
Validation: Input validation is key to any application. This framework integrates a third-party GUMP - a very solid library - to validate all types of inputs. The validation module is capable to do 2 things: 1) Validating inputs of all types based on easy-to-define rules 2) Provide easy ways to filter data and format it based easy-to-define rules. You could learn more about GUMP here
File Structure
Understanding the framework file structure is key to understanding how to plan your development project. There are 5 main folders in the structure that you should be aware of:
/application: This is where most of your application logic should go. This folder is where you should create the routes, models, views and controllers. You can create any classes and organize them freely in this folder without worrying about requiring them or including them.
This folder has 4 subfolders (models, views, controllers and config). The config folder is where the routes are defined.
When your logic becomes too complex to hold in a
Controller, you can always create new classes to organize them without having to require or include them in the classes where you intend to use them.
/framework: This is the underlying code for the framework. For novice programmers, you will very rarely need to change anything in this folder. However, if you need to change the way the framework operates, then this is where it can be done.
/public: This is where your application client-side code should go. All your scripts, stylesheets and any other resource that needs to be publicly accessible will reside in this folder. For example, any javascript or css that you need to reference from your Views (residing in application) will need to be in the public folder. Similarly, if your application supports uploading files, all uploaded files will live in the
/public/uploadsfolder.
/tests: This is the folder that you could use for unit testing. Any test logic that you need to create will reside in this folder.
/vendor: The vendor folder contains all the third-party libraries that this framework depends on.
Installing CaligrafyInstalling Caligrafy
This video explains how to install Caligrafy in different environments and it goes through a brief summary how the framework is structured
Next Section: Learn about Routes | https://github-wiki-see.page/m/caligrafy/caligrafy-quill/wiki/1.-Getting-Started | CC-MAIN-2022-40 | refinedweb | 2,484 | 53.81 |
> On April 15, 2015, 5:36 p.m., Alexander Rukletsov wrote: > > src/common/resources.cpp, lines 93-105 > > <> > > > > Can we replace it by > > ``` > > if !addable(left, right) { > > return false; > > } > > ``` > > ? Or even add `addable()` check to the chain of checks below?
Advertising
Each of `operator==`, `addable` and `subtractable` has subtle differences. `operator==` and `addable` in specific are different in that equal persistent volumes (same persistence ID) are not addable. Here's Jie's comment: ``` // TODO(jieyu): Even if two Resource objects with DiskInfo have the // same persistence ID, they cannot be added together. In fact, this // shouldn't happen if we do not add resources from different // namespaces (e.g., across slave). Consider adding a warning. ``` - Michael ----------------------------------------------------------- This is an automatically generated e-mail. To reply, visit: ----------------------------------------------------------- > > | https://www.mail-archive.com/[email protected]/msg00032.html | CC-MAIN-2017-17 | refinedweb | 126 | 61.93 |
Description:
Wall Climbing Robot using Arduino, Bluetooth & Android App: In this tutorial, you will learn how to make a lightweight, low-cost, and highly efficient Wall Climbing Robot using a custom-made controller board based on the Atmega328 microcontroller (the same microcontroller used in the Arduino Uno), an HC-05 or HC-06 Bluetooth Module, an L298N motor driver, 6V Mini DC Gear Motors, and a high-RPM Quadcopter Brushless DC Motor.
The Forward, Back, Left, and Right movement of the Wall Climbing Robot is controlled wirelessly using a specially designed Android cell phone application. The high-RPM Quadcopter Brushless DC Motor is used to create the vacuum by sucking the air out from underneath the Robot, which makes it stick to the wall. This is going to be a very detailed tutorial, explaining everything, so that you can make one by yourself. You can watch the demonstration video given at the end of this article.
Without any further delay let’s get started!!!
The components and tools used in this project can be purchased from Amazon; the purchase links are given below:
Atmega328 microcontroller:
16Mhz crystal:
22pf capacitors:
10k Resistor:
10uf capacitors:
Lm7805 Voltage Regulator:
Dc female power jack:
6v 60RPM Mini Dc Gear Motors:
Lipo Battery:
High RPM Quad Copter Brushless Dc Motor:
L298N Motor Driver:
HC05 Bluetooth Module:
Wall Climbing Robot:
A Robot is an electro-mechanical machine that is controlled remotely using wireless technology and is able to do tasks on its own. The majority of Robots are semi-automatic and are designed to perform different tasks depending on the instructions defined by the programmers. A Robot can be used for security purposes: it can capture video and audio information from the surroundings, which can then be sent to a remote monitoring station through wireless communication. The aim of this project is to design and fabricate a Wall Climbing Robot using vacuum technology. The vacuum is created by a high-RPM Brushless DC Motor, the same type used in Quadcopters.
- Magnetic devices for climbing ferrous surfaces.
- Vacuum suction techniques for smooth and nonporous surfaces.
- Attraction force generators based on aerodynamic principles.
- Bio-mimetic approaches inspired by climbing animals.
Some of the most famous Wall Climbing Robots,
- City Climber: Wall climbing Robot
- Mecho-gecko: Wall climbing Robot
- Hyperion: Wall Climbing Robot
- Lemur- Weight shifting rock climber style bot
- Ninja-2 articulated leg Wall Climber Robot
- C-Bot Wall Climbing Robot Prototype
- Capuchin Weight Shifting Climbing Bot “Wall climber”
- Robot Window Shade
- The Stanford Sticky Bot
- RiSE: the Amazing Insect Like Climbing Robot
- Vortex RRAM Mobile Robot Platform
- Flexible Finger magnetic Climbing Robot
- SRI Static Electric Wall Climbing Robot
- Sucker based independent limb wall climber
- SPIBOT- Self-contained power source and vacuum pump
The Things you need to take care of while making a Wall Climbing Robot:
One of the most important things that you really need to take care of is the weight of the Robot. Use lightweight parts. Instead of using the large controller boards, like Arduino Uno or Mega2560 make a custom made controller board. This way you can reduce the price, size, and weight of the circuit board. Use very small Dc Gear Motors. While working on this project the only thing that I focused on was the weight of the Robot. For the best understanding, I designed a basic 3D model of the Wall Climbing Robot using SolidWorks 2016.
I recorded the dimensions of all the electronic parts using a Vernier Caliper and then designed each part in the SolidWorks. I roughly started with a 12×12 inch base frame. Luckily a frame of this size could accommodate all the parts. Then I started searching for a 12×12 inch lightweight sheet and luckily I found a PCB Copperplate of the same dimensions. To overcome the bending problem I cut the corner edges of the Copperplate and then fixed the 6v 60RPM Mini Dc Geared motors.
I used thermocol at the bottom side of the Copperplate to reduce the Air gap. This is just a 360 degrees wall ring, which you can easily make from a thermocol sheet. Smaller the gap, greater will be the suction. In case of high suction, you can reduce the motor speed using the Variable resistor which I will explain in the circuit diagram. After I was satisfied with the vacuum then I practically installed all the components. The final Wall Climbing Robot,
Before I am going to explain the circuit diagram and Programming, first I would like to explain about the different electronic components used in this project.
Mini Dc Gear Motor:
This is a 6V 60RPM Mini Dc Gear Motor
Operating Voltage is 6 Volts
RPM (Rated Maximum) 60 ~ 100 RPM
Torque, Effective 1.1 Kg/cm
Dimensions:
Depth: 10mm
Height: 24.3mm
Width: 12.1mm
Product Weight: 9.5g
L298N Motor Driver:
Now let's take a closer look at the pinout of the L298N module.
This module has three terminal blocks. Terminal block 1 is used for motor A and is clearly labeled with OUT1 and OUT2; this is where we connect the two wires of the DC motor. Terminal block 2 is used for motor B and is clearly labeled with OUT3 and OUT4.
Terminal block 3 is labeled with 12V, GND, and +5V.
The 12V terminal is used to supply the voltage to the DC motors; this voltage can be from 5 to 35 volts. The GND terminal is connected with the ground of the external power supply and is also connected with the ground of the controller board, which in my case is the Atmega328-based controller board, while the +5V terminal is connected with the Arduino's 5V.
As you can see, this motor driver also has some male headers which are clearly labeled with ENA, IN1, IN2, IN3, IN4, and ENB. ENA and ENB are used to enable the two motors; they come fitted with jumper caps, which I will explain in the programming. The IN1 and IN2 pins are used for controlling the direction of motor A, while IN3 and IN4 are used to control the direction of motor B. Now let's start the interfacing.
For a detailed study read the following article.
Arduino L298n Motor Driver control Tutorial, Speed & Direction, PWM
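To make the role of these pins concrete, here is a minimal stand-alone sketch of my own (not part of the robot's final program, with example pin numbers I chose for illustration) that spins one DC motor in both directions through a single L298N channel. It assumes the ENA jumper cap has been removed so that the speed can be set with a PWM signal.
int ENA = 10;  // enable/speed pin for motor A (PWM)
int IN1 = 8;   // direction pin 1 for motor A
int IN2 = 9;   // direction pin 2 for motor A
void setup()
{
  pinMode(ENA, OUTPUT);
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
}
void loop()
{
  analogWrite(ENA, 200);    // motor speed, 0 to 255
  digitalWrite(IN1, HIGH);  // IN1 HIGH and IN2 LOW spins motor A in one direction
  digitalWrite(IN2, LOW);
  delay(2000);
  digitalWrite(IN1, LOW);   // swapping the levels reverses the rotation
  digitalWrite(IN2, HIGH);
  delay(2000);
}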
HC05/HC06 Bluetooth Module:
I already have a very detailed tutorial on how to use the HC05 or HC06 Bluetooth Module.
You can find so many video tutorials on my YouTube channel “electronic clinic”.
Wall Climbing Robot Android application:
The Android cell phone application is protected with a username and password. The default username and password are both "admin", which you can later replace with a new username and password. Click on the Download button given below to download the APK files.
Download: wallclimbing
Circuit Diagram of the Wall Climbing Robot:
The circuit diagram of the Wall Climbing Robot is very simple. At the very top of the circuit diagram is the 5V regulated power supply, which is based on the LM7805 voltage regulator. The 5 volts from this regulator are used to power up the Atmega328 microcontroller and are also connected with the +5V pins of the L298N Motor Drivers. The 5 volts from the LM7805 are also used to power up the HC05 Bluetooth Module. The entire circuit is powered up using the Lipo Battery.
The Bluetooth module is connected with the Atmega328 TX and RX pins. The TX pin of the Bluetooth Module is connected with the RX pin of the Atmega328 microcontroller, while the RX pin of the HC05 Bluetooth Module is connected with the TX pin of the Atmega328 microcontroller. While uploading the program into the Atmega328 microcontroller, disconnect the TX and RX pins; otherwise, you won't be able to upload the program.
Each L298N Motor Driver can be used to control 2 motors. In this project 4 motors are used, so we will need two L298N motor drivers. All 4 motors are connected with the outputs of the L298N Motor Drivers, which can be clearly seen in the circuit diagram. The input pins of the L298N Motor Drivers are connected with the Atmega328 microcontroller. All the pins are clearly labeled.
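As an illustration of how those input pins are driven from the code, here is one possible forward() helper (a sketch of my own, not taken from the final program) that makes all four motors turn in the same direction. It uses the same pin names that are defined in the program explained below; the exact HIGH/LOW pattern that produces forward movement depends on how each motor's wires are connected to the driver outputs.
void forward()
{
  digitalWrite(urmw1, HIGH);  // up right motor
  digitalWrite(urmw2, LOW);
  digitalWrite(drmw1, HIGH);  // down right motor
  digitalWrite(drmw2, LOW);
  digitalWrite(ulmw1, HIGH);  // up left motor
  digitalWrite(ulmw2, LOW);
  digitalWrite(dlmw1, HIGH);  // down left motor
  digitalWrite(dlmw2, LOW);
}
Stopping the robot is simply a matter of writing LOW to both wires of every motor, and reversing is done by swapping the HIGH and LOW levels.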
The PWM pin of the Brushless Motor is connected with the Atmega328 pin number 11 which is the PWM pin. Pin number 11 is used to control the speed of the Brushless Motor. A variable resistor is connected with the Analog pin of the Atmega328 controller, this variable resistor is used to control the speed of the Brushless motor. By rotating the Knob of the Variable Resistor the speed can be adjusted.
Atmega328 Microcontroller PCB Board Layout:
This PCB is designed in Cadsoft Eagle. The PCB board file of the Atmega328 microcontroller can be downloaded by clicking on the link given below.
Download: atmega328 board file eagle
Wall Climbing Robot Arduino Programming:
Wall Climbing Robot Arduino Program explanation:
I started off with the SoftwareSerial Library. The SoftwareSerial library is used for creating multiple Serial ports. As you know in Arduino Uno we have only one Serial port which is available on Pin number0 and Pin number1. Currently, I am using the Arduino’s default Serial port.
#include <SoftwareSerial.h>
I defined a Serial port on pin number 0 and pin number 1. You can change these pins, you can use 2 and 3 or any other pins. Just change the numbers. As I am using only one device” Bluetooth Module” which supports Serial communication, so that’s I am using the Arduino’s default Serial port.
SoftwareSerial Blue(0, 1);
Defined some variables.
long int data;
int nob = 0; // variable resistor connected to analog pin A0
this variable resistor is used to control the speed of the QuadCopter Brushless Dc Motor. Set the speed which best suits your requirements.
int nobdata = 0;
The following are the commands which are used to control the Forward, Reverse, Right, and Left movement of the Wall Climbing Robot. The purpose of using the Long int data type is that you can use large numbers which can be 6 digits long, this will increase the security. Nobody will be able to find which commands are used to control the Robot. So each command acts as the password. These commands are sent from the Android cell phone application.
long int password1 = 92;// forward
long int password2 = 91;// reverse
long int password3 = 71; // right
long int password4 = 79;// left
long int password5 = 89; // stop
char state = 0;
Then I defined pins for the 4 motors.
int urmw1 = 2; // up right motor wire 1
int urmw2 = 3; // up right motor wire2
int drmw1 = 4; // down right motor wire1
int drmw2 = 5; // down right motor wire2
int ulmw1 = 6; // up left motor wire1
int ulmw2 = 7; // up left motor wire2
int dlmw1 = 8; // down left motor wire1
int dlmw2 = 9; // down left motor wire2
The speed controller is connected with the Arduino’s pin number 11.
int bdm = 11; // brushless dc motor
in the void Setup() we do the basic settings, we tell the controller which are the input pins and which are the output pins. We activate the Serial communication using the Serial.begin(). In this project I am using 9600 as the baud rate. This is the communication speed.
void setup()
{
pinMode(bdm, OUTPUT);
pinMode(nob, INPUT);
pinMode(urmw1, OUTPUT);
pinMode(urmw2, OUTPUT);
pinMode(drmw1, OUTPUT);
pinMode(drmw2, OUTPUT);
pinMode(ulmw1, OUTPUT);
pinMode(ulmw2, OUTPUT);
pinMode(dlmw1, OUTPUT);
pinMode(dlmw2, OUTPUT);
// keep all the motors off by default);
Serial.begin(9600);
Blue.begin(9600);
delay(1000);
}
Void loop() executes infinite times, the main code is kept inside this function. void means that this function has no return type and the empty parenthesis means that this function is not taking any arguments as the input.
void loop()
{
// while(Blue.available()==0) ;
nobdata = analogRead(nob);
The above instruction is used to read the Variable resistor connected with the analog pin A0 of the Arduino and store the value in variable nobdata.
nobdata = map(nobdata, 0, 1024, 0, 255);
using the map function the data is mapped. The minimum and maximum value can be 0 and 255. This is used to control the speed of the Brushless Dc Motor.
analogWrite(bdm,nobdata);
//Serial.println(nobdata);
delay(20);
if(Blue.available()>0) // if the Arduino has received data from the Bluetooth module.
{
Store the incoming values in variable data.
data = Blue.parseInt();
delay(200);
}
//delay(1000);
//Serial.print(data);
The following conditions are used to compare the received value with the pre-defined values. If the received value is similar to the pre-defined value then the motors are controlled accordingly.
if (data == password1) // forward
{
digitalWrite(urmw1, HIGH);
digitalWrite(urmw2, LOW);
digitalWrite(drmw1, HIGH);
digitalWrite(drmw2, LOW);
digitalWrite(ulmw1, LOW);
digitalWrite(ulmw2, HIGH);
digitalWrite(dlmw1, LOW);
digitalWrite(dlmw2, HIGH);
data = 45; // garbage value to stop repetition
Serial.println(“Forward”);
}
if( data == password2) // reverse
{
digitalWrite(urmw1, LOW);
digitalWrite(urmw2, HIGH);
digitalWrite(drmw1, LOW);
digitalWrite(drmw2, HIGH);
digitalWrite(ulmw1, HIGH);
digitalWrite(ulmw2, LOW);
digitalWrite(dlmw1, HIGH);
digitalWrite(dlmw2, LOW);
data = 45; // garbage value to stop repetion
Serial.println(“Reverse”);
} else
if( data == password3) // right
{
digitalWrite(urmw1, LOW);
digitalWrite(urmw2, HIGH);
digitalWrite(drmw1, LOW);
digitalWrite(drmw2, HIGH);
digitalWrite(ulmw1, LOW);
digitalWrite(ulmw2, HIGH);
digitalWrite(dlmw1, LOW);
digitalWrite(dlmw2, HIGH);
data = 45; // garbage value to stop repetion
Serial.println(“right”);
}
else
if( data == password4) // left
{
digitalWrite(urmw1, HIGH);
digitalWrite(urmw2, LOW);
digitalWrite(drmw1, HIGH);
digitalWrite(drmw2, LOW);
digitalWrite(ulmw1, HIGH);
digitalWrite(ulmw2, LOW);
digitalWrite(dlmw1, HIGH);
digitalWrite(dlmw2, LOW);
data = 45; // garbage value to stop repetition
Serial.println(“Left”);
}
else
if( data == password5) // stop
{);
data = 45; // garbage value to stop repetition
Serial.println(“stop”);
}
}
Wall Climbing Robot Tests:
After uploading the program and installing the Android application. I powered up the Wall Climbing Robot using the Lipo Battery and adjusted the speed of the Brushless Dc Motor. My first test was to test the suction created by the Brushless Dc Motor. The Wall Climbing Robot stuck to the Wall Surface. I could feel that force.
My initial test was a great success. Then I performed some tests on Wooden sheets, Walls, and Glass. On the Wall and Wooden sheet surfaces the Wall Climbing Robot could move without any problem, but on the Mirror surface there was the sliding problem. For the extremely smooth surfaces, the suctions cups can be used.
Wall Climbing Robot in Action: | https://www.electroniclinic.com/wall-climbing-robot-car-using-arduino-bluetooth-android-app/ | CC-MAIN-2019-47 | refinedweb | 2,412 | 61.67 |
#include <LiquidCrystal.h> // include the LCD libraryLiquidCrystal lcd(8,9,4,5,6,7);int potPin = A13; //Potentiometer input pinint potValue1 = 0;int potValue2 = 0; int potValue3 = 0; // I added potValue3 and 4 thinking that it would separate the pot// value from potValue 1 and 2 in my coding. I was wrong.int potValue4 = 0;void setup() {lcd.begin(16, 2); // lcd rows and columns}void loop() {int x; // put this int in setup in case I could use 'int y' for different buttons. Int y didn't help.x = analogRead (0);if (x <100) { // allocation for 'right' button on shieldlcd.setCursor(0,0); // top line of displaylcd.print("Select Heading"); // says on the tinlcd.setCursor(0,1); // 2nd line lcdpotValue1 = analogRead(potPin) / 2.838; // divide pot by 2.838 to get 360 'degrees' for headingpotValue2 = potValue1; //work out pV2delay(100);lcd.print(potValue2); // shows heading on 2nd line of LCDlcd.print(" ");}else if (x <200) {x = analogRead (0); // allocation for 'up' button on shieldlcd.setCursor(0,0); // top line of displaylcd.print ("Altitude "); // ronseallcd.setCursor(0,1); // 2nd line lcdpotValue3 = analogRead(potPin)*3; // Does this now affect the heading reading? (No, hurray!) // *3 to give me some height in flight sim as 1024 feet isn't heigh enough// ideally I would like this to increment in hundreds, ie, 1000, 1100, 1200 feet - any idea??// for some reason the LCD climbs to 9062, then starts at 1000 again! Why?potValue4 = potValue3; // set cursor to second row, first columndelay (100);lcd.print(potValue4); // shows heading}}
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=109785.0;prev_next=next | CC-MAIN-2016-36 | refinedweb | 289 | 60.31 |
Jumping in on Jesse’s Flash based RIA thread, my personal take on the Flash RIA issue boils down to one thing, Flash does not grok XML and paradigm change that XML brings once you do grok it. Don’t get me wrong, Flash is a very good tool for what it does, static (not data driven) vector graphics. Used in its traditional niche, it is a great tool, but to try to expand its role to building full fledged UIs, will only tarnish its reputation. I have the same problem with people that try to build SVG only web sites (and I love SVG).
Macromedia is correct in its judgment that the future desktop UI will be a vector graphics based one, but what they are missing is that it will a declarative vector graphics based UI, in combination with a couple non-graphics based declarative languages, all based on XML. The multi-namespace document is the future of the web, with true XML “browsers”.
Don Box and the other developers at MS get it, but the folks at Macromedia (specifically Sean and John Dowell) haven’t shown that they get it, Declarative Languages Rock. This is where the .Net framework excels. It was built from the ground up to make use of XML, not to just interpret it. XML is at the very heart of the .Net framework, not just an add-on library. That was the way the old VS 6 was, and COM was, and Flash and Java still are. In order to truly take advantage of the XML paradigm you have to start over from scratch, and build it back up, starting with XML. Once you have that, then you can start to take advantage of things like creating and using declarative languages to communicate across machine and platform boundaries.
There’s a great quote from Robert Heinlein that goes:
Anyone who cannot cope with mathematics is not fully human. At best he is a tolerable subhuman who has learned to wear shoes, bathe, and not make messes in the house. ~Robert Heinlein, Time Enough for Love
and I think that it could be morphed it something similar regarding programming platforms and XML, but I can’t seem to get the words just so. Maybe someone out there can.
Don XML | http://weblogs.asp.net/donxml/archive/2003/04/22/5921.aspx | crawl-002 | refinedweb | 386 | 76.66 |
pthread_setschedprio − set scheduling priority of a thread
#include <pthread.h>
pthread_setschedprio(pthread_t thread, int prio);
Compile and link with −pthread..
POSIX.1-2001 also documents an ENOTSUP ("attempt was made to set the priority to an unsupported value") error for pthread_setschedparam(3).
This function is available in glibc since version 2.3.4.
For an explanation of the terms used in this section, see attributes(7).
POSIX.1-2001.
For a description of the permissions required to, and the effect of, changing a thread’s scheduling priority, and details of the permitted ranges for priorities in each scheduling policy, see sched_setscheduler(2).
getrlimit(2), sched_get_priority_min(2), sched_setscheduler)
This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at−pages/. | http://man.linuxtool.net/centos7/u3/man/3_pthread_setschedprio.html | CC-MAIN-2019-30 | refinedweb | 136 | 58.08 |
In this simple tutorial, we will explore how to control a servo over the internet. For that, we will use a great couple of devices, the NodeMCU ESP12-E and a Blynk App.
We will start learning how to connect the servo with the NodeMCU, knowing how to control it locally with a potentiometer, how to see its position on an OLED display and finally how to control it over the internet using a smartphone. The Block diagram gives us an overview of what will be the final project.
Also, the bellow video shows the final project working:
Step 1: BoM
- NodeMCU ESP8266-12E V1.0 (US$ 8.79)
- 0.96" Inch I2c IIC Serial 128x64 Oled LCD White Display (US$ 9.99)
- Mg995 Servo 180 Degree (US$ 19.99)
- Breadboard Power Supply Module 3.3V/5V (US$ 5.99)
- 10K ohms Potentiometer (US$ 7.19)
- 400 points Breadboard (US$ 5.89)
- Dupont Cables (US$ 7.29)
Step 2: Connecting the Servo and Potentiometer
The first thing that we will do, is to setup the NodeMCU to handle the 180o Servo, controlling it thru a 10K ohm potentiometer.
Take care that the NodeMCU is powered with 5V, but all its GPIOs work with a 3.3V level.
Let's split the Breadboard power raids, one for 5V and the other for 3.3V. For that, we will use the breadboard Power Supply as the one shown in above diagram.
NOTE: During the development phase, when your PC is connected to NodeMCUvia the USB port, you do not need connecting Vin to +5V (red wire at the diagram), once the power will be provided by your computer. Leave it disconnected.
For Servo operation, we will use the servo.h library:
#include <Servo.h> // Include the library Servo servo1; // Create the Servo and name it "servo1" #define servo1Pin D2 // Define the NodeMCU pin to attach the Servo
During setup(), the servo1 must be initiated:
servo1.attach(servo1Pin);
The 10K potentiometer will work as a voltage divisor, changing the analog input level on NodeMCU (A0) from 0 to 3.3V. Internally, the 10 bits ADC (Analog-Digital converter) will generate a digital value (from 0 to 1023), the PotReading, equivalent to the analog voltage input.
PotReading = analogRead(A0);
We should "map" this digital value, in order that the Pulse Width Modulated (PWM) digital output of pin D2, will vary from 0 to 180 (servo1Angle).
servo1Angle = map(PotReading, 0, 1023, 0, 180);
With this signal, the servo will turn from 0 to 180 degree, using the bellow command:
servo1.write(servo1Angle);
The position value that is sent to Servo will be displayed during this test phase on Serial Monitor:
Serial.println(servo1Angle);
The full code for Servo's tests can be downloaded from my GitHub:
NodeMCU_Servo_Control_Test
The below video shows the servo test:
Step 3: Let's Display!
It is OK during tests, to use the Serial Monitor, but what's happen when you are using your prototypes far from your PC in a stand alone mode? For that, let's install an OLED display, our old friend, SSD1306, wich main characteristics are:
- Display size: 0.96"
- I2C IIC SPI Serial
- 128X64
- White OLED LCD LED
Connect the OLED pins to the NodeMCU, as described bellow and shown at above electrical diagram:
- SDA ==> D1 (5)
- SCL* ==> D2 (4) * Also you can find "SDC" in the text
- VCC ==> The SSD1306 can be powered with both 5V or 3.3V (in our case, 3.3V)
- GND ==> GND
Once we have connected the display, let's download and install its library on our Arduino IDE: the "ESP8266 OLED Driver for SSD1306 display" developed by Daniel Eichhorn (Make sure that you use Version 3.0.0 or bigger!).
Bellow the library that must be downloaded and installed on your Arduino IDE:
Once you restarted found at the GITHub provided above.
A. Display Control:
void init(); // Initialise the display void displayOn(void); // Turn the display on void displayOff(void); // Turn the display offs void clear(void); // Clear the local pixel buffer void flipScreenVertically(); // Turn the display upside down
B. Text Operations:
void drawString(int16_t x, int16_t y, String text); // (xpos, ypos, "Text") void setFont(const char* fontData); // Sets the current font. <p>Available default fonts: </p><p><ul><li>ArialMT_Plain_10, <li>ArialMT_Plain_16, <li>ArialMT_Plain_24</ul></p>
Once the both the OLED itself and its Library are installed, let's write a simple program to test it. Enter with bellow code on your IDE, the result should be a display as shown in the above photo:
/*NodeMCU */ #include <ESP8266WiFi.h> /* OLED */ #include "SSD1306Wire.h" #include "Wire.h" const int I2C_DISPLAY_ADDRESS = 0x3c; const int SDA_PIN = 0; const int SCL_PIN = 2; SSD1306Wire display(I2C_DISPLAY_ADDRESS, SDA_PIN, SCL_PIN); void setup () { Serial.begin(115200); displaySetup(); } void loop() { } /* Initiate and display setup data on OLED */ void displaySetup() { display.init(); // initialize display display.clear(); // Clear display display.display(); // Put data on display Serial.println("Initiating Display Test"); display.setFont(ArialMT_Plain_24); display.drawString(30, 0, "OLED"); // (xpos, ypos, "Text") display.setFont(ArialMT_Plain_16); display.drawString(18, 29, "Test initiated"); display.setFont(ArialMT_Plain_10); display.drawString(10, 52, "Serial BaudRate:"); display.drawString(90, 52, String(11500)); display.display(); // Put data on display delay (3000); }
The above program can be downloaded from my GitHub:
Step 4: Showing the Servo Position in the OLED Display
Now let's "merge" the previous 2 codes, so we can not only control the servo position using the potentiometer but also see its position in the OLED display.
The code can be downloaded from my GitHub:
NodeMCU_Servo_Control_Test_OLED
And the result can be verified on the video:
Step 5: Creating a Blynk App to Control the Servo
Basically, we will need to change our previous code to include the Blynk part of it:
#include <BlynkSimpleEsp8266.h> char ssid [] = "YOUR SSID"; char pass [] = "YOUR PASSWORD"; char auth [] = "YOUR AUTH TOKEN"; // Servo Control Project /* Reads slider in the Blynk app and writes the value to "potReading" variable */ BLYNK_WRITE(V0) { potReading = param.asInt(); } /* Display servo position on Blynk app */ BLYNK_READ(V1) { Blynk.virtualWrite(V1, servo1Angle); } void setup () { Blynk.begin(auth, ssid, pass); } void loop() { Blynk.run(); }
So, we must define a virtual pin V0, where Blynk will "Write" a servo position, the same way as we did with the potentiometer. On Blynk App, we will use a "Slider" (with output defined from 0 to 1023) to command the servo position.
Another virtual pin, V1, will be used to "Read" the servo position, same as we do with the OLED. On Blynk App, we will use a "Display" (with output defined from 0 to 180) to show the servo position. The above foto show the App screens.
You can download the complete program from GitHub:
NodeMCU_Servo_Ctrl_Blynk_EXT
Step 6: Conclusion
As always, I hope this project can help others find their way in the exciting world of electronics, robotics, and IoT!
Please visit my GitHub for updated files:
For more projects, please visit my blog: MJRoBot.org
Saludos from the south of the world!
See you at my next instructable!
Thank you
Marcelo
4 Discussions
Question 8 months ago
how to control servo motor.using firebase and push button in 1 time?
Question 1 year ago
Is there any way how to control servo with blink app and potentiometer at the same time? Or am i just doing something wrong?
Answer 1 year ago
Same question. Did you figure it out?
2 years ago
Good ... | https://mobile.instructables.com/id/IoT-Made-Simple-Servo-Control-With-NodeMCU-and-Bly/ | CC-MAIN-2019-22 | refinedweb | 1,232 | 64.1 |
Component that helps generating sql code
- From: "Peter Laan" <plnews2000@xxxxxxxx>
- Date: Fri, 8 Jul 2005 17:02:26 +0200
I've been spending a few hours writing a component that helps generate
simple sql queries. But if I can come up with the idea, it's probably stupid
or someone has already done it. So does anyone know of a component that does
something like this (or is it a stupid idea)? It's no supposed to be able to
handle all queries, just make life a bit simpler 95% of the time. The code
would look something like this (SqlSelect is the helper class):
SqlSelect sql = new SqlSelect("Customers");
sql.AddSelectColumn("fname");
sql.AddSelectColumn("lname");
sql.AddWhereParameter("fname", "john");
sql.AddWhereParameter("lname", "smith");
SqlCommand cmd = sql.GetCommand(conn);
I then had another idea. Why not make a program that extracts all user table
data from the database and creates source code with constants for all table
and column names. Is this another stupid idea? The generated source code
would be something like this:
namespace Appname.DBNames
{
public class Customers
{
public const string a_TableName = "Customers";
public const string c_fname = "fname";
public const string c_id = "id";
public const string c_lname = "lname";
}
}
One class for each table. The 'c_' is there in case a column name collides
with a reserved name. 'a_' is used to get the table name first in
intellisense.
You would get great help from intellisense by typing
DBNames.Customers.c_lname. But if you don't use a component like above to
help generate the sql code, the code might get ugly. The sql string would
quickly get very long.
Any comments?
Peter
.
- Follow-Ups:
- Re: Component that helps generating sql code
- From: Cor Ligthert [MVP]
- Prev by Date: Re: Connection Timeout
- Next by Date: Re: Component that helps generating sql code
- Previous by thread: Running a job manually
- Next by thread: Re: Component that helps generating sql code
- Index(es): | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.adonet/2005-07/msg00239.html | crawl-002 | refinedweb | 323 | 62.98 |
Maximum Product Of Two Primes Less Than N
August 28, 2015
Today we have a fun little exercise based on prime numbers.
Given an integer n > 4, find the maximum product of two prime numbers such that the product is less than n. For instance, when n = 27, the maximum is 2 * 13 = 26, and when n = 50, the maximum is 7 * 7 = 49.
Your task is to write a program to find the maximum product of two primes less than n. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Advertisements
Scala:
In Python. Loop over the primes using a whel and is_prime, calculate the largest factor possible and get the largest second prime. Do this until the second prim is smaller than the first.
Another solution in Python. This runs a lot faster.
A quick solution in perl….
F# solution :
In Python.
@Paul: what is this rho_factors(i) function? Or is it from some library?
@Rutger. You can find it in the essay of @ProgrammingPraxis on primes. It is an implementation of Pollard’s rho.
Similar to Paul’s second solution. I treated the factor 2 separately, so that only odd numbers need to be considered in the loop.
An alternate version that sieves all the primes < n/2.
from sympy import sieve
def max_two_prime_product(n=100):
primes = list(sieve.primerange(0, n//2+1))
i,j = 0,len(primes) – 1
maxprod = (-1, 0, 0)
while i <= j:
prod = primes[i] * primes[j]
if prod < n:
maxprod = max(maxprod, (prod, primes[i], primes[j]))
i += 1
else:
j -= 1
return maxprod
With formatting:
Python solution. Uses a test to find the highest number below n that has two factors, and returns the factors of the detected number:
[…] Maximum product less than n of two primes […]
Two solutions, one fast but wrong and the other slow but right, both written in Racket Scheme.
#include
#include
int main()
{
int num,i,j,arr[25],l=0,max=0,f1=0,f2=0;
printf(“Enter the number\n”);
scanf(“%d”,&num);
for(i=2;i<num;i++)
{
for(j=2;j<i;j++)
if(i%j==0)
break;
if(i==j)
arr[l++] = i;
}
for(i=0;i<l;i++)
printf("%d\n",arr[i]);
for(i=0;i<l;i++)
for(j=0;jmax && arr[i]*arr[j] Max = %d\n”,f1,f2,max);
} | https://programmingpraxis.com/2015/08/28/maximum-product-of-two-primes-less-than-n/ | CC-MAIN-2017-51 | refinedweb | 413 | 73.47 |
You can subscribe to this list here.
Showing
25
50
100
250
results of 2174
Hi. Can anybody please explain me how to compile JMaPacman? The game uses marauroa, but i don't know how to combine them, otherwise Pacman is missing classes from marauroa. I read faq for both of them but nothing else is said except "build them with ant". Building marauroa is okay but it doesn't effect Pacman because i'm missing something important in combining them ^(
--
Alex Nikitin
Hello everyone,
I hope this is not too much trouble for you but I need your help.
I am an undergraduate computer science student in Italy and as part of our
graduation process we are required to to write a dissertation about some
relevant argument. I have chosen software size estimation. I won't bother
you with details but I need to analyze some open source software to prove my
theory. Among other projects I have also chosen stendhal (there is a bit of
confusion as to what it's name really is, since I have seen 3 different
names: stendhal, arianne and maraurora; but it's irrelevant :) ).
I am now in the final stages of my project and I lack some information that
I need to complete the work.
What I need from you, and I hope you will be willing to help, is a list of
developers that participate to the stendhal project. In particular I need
the nicknames of your developers that you have been used in the @author tag
(?) in your source code. I need it so I can decide which parts of your
source code has been developed by you and which one is reused open source
code. Nowadays, reusing code is an important part of software development
and since size is quite closely related to effort, I need to rule out reused
code, which of course is not part of dev effort.
If you have any other suggestions (as opposed to checking the authors) as to
how I could find out the reused code in your program I would really like to
hear them too. I know many relatively small open source programs out there
don't really rely on reuse to be built since usually they are very specific
and need code specifically designed for them (I know it since I've been
involved with some and most of the times I prefer to do stuff from scratch
rather than adapting existing code :) ).
If you would be kind enough to help me out it will be much appreciated
(especially senior developers which (probably) know more about the history
of the project).
I'm looking forward to hearing from you.
Best,
Corneliu Ilisescu
Hi,
I've been playing. :)
> I try to establish the development, and i create a acount in client, and
> then try to login
>
> but always tell username and password is not correct, can you help to tell
> what is the problem and help to solve it?
Hi there,
Please could you give more detail on the error?
Error logs and code that you use would be needed for us to diagnose the problem.
If you are looking for a guide to start coding client and server
communication with Marauroa, is useful
If you'd like to discuss issues in real time you can reach the team at
#arianne on irc.freenode.net, or with this link:
- just ask and hang round for a response.
Thanks!
kymara
Try asking at irc.freenode.net #arianne or searching at the Wiki
This mailing list has been abandoned for a long time.
Regards,
Miguel
From: xuchang19840827@...
To: arianne-devel@...
Date: Fri, 4 Feb 2011 23:11:23 +0800
Subject: [Arianne-devel] problem
hi,
I try to establish the development, and i create a acount in client, and then try to login
but always tell username and password is not correct, can you help to tell what is the problem and help to solve it?
BR
Chandler.Xu
------------------------------------------------------------------------------
What You Don't Know About Data Connectivity CAN Hurt You
This paper provides an overview of data connectivity, details
its effect on application quality, and explores various alternative
solutions.
_______________________________________________
Arianne-devel mailing list
Arianne-devel@...
No.BRPK"xuchang" <xuchang19840827@...> pisze:.
Javier Ortiz added you as a friend on MyLife(TM).
Please confirm you know Javier so we can connect you.
Do You Know Javier?
YES - Connect with Javier, and see who's searching for you...
NO - I don't know Javier
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Hi
IMHO the best option would be integrating the word list editor into the Stendhal client and use the already existing client/server connection to transmit the data because:
- It complements the policy of storing all data generated by the Stendhal server at one single place with accessing it in a single way as well.
- From a perspective of performance, it allows moving the CPU load of running the user interface from the server to the client, which in this case is significative given that the server can already experience heavy loads just from running the game itself, which in many cases results in noticeable lag for the end users.
- From an end user's perspective, it makes it transparent -invisible- to her the setup of the word list editor service on the hosting server as well as the access to the editor interface from the client side.
Cheers,
Osl
Hello Oslsachem,
>> What are your ideas and opinions about the word list
>> editor?
>
> I'm afraid I can only contribute with questions for now :)
You are very welcome.
> Can the parser be -easily- configured so that it doesn't start to
> consider the new words once they get added but simply stores them?
New words are not given any type information at first. But while
parsing sentences directed to NPCs, there take place transformations,
which are influenced by this new words, as it seems. At least this is
my interpretation of the bug report. Well, I should look more into
detail and try to reproduce the problem...
To answer your question directly: Ideally there should be no influence
of new words to following inputs without assigning types to them
manually. But currently this doesn't seem to work in all cases because
of the complexity of the parser transformations.
<german>
Neuen Wörtern wird zunächst einmal noch kein Typ und somit auch
keinerlei Bedeutung zugeordnet. Allerdings finden während der
Erkennung von Sätzen, die an NPCs gerichtet sind, Transformationen
statt, auf die diese neuen Wörter scheinbar gelegentlich doch Einfluß
nehmen. Zumindest kann ich mir nur auf diese Weise den Bug Report
erklären. OK, ich sollte wohl demnächst versuchen, das Problem näher
zu betrachten und den Fall zu reproduzieren. Ich fürchte nur, das wird
nicht so ganz einfach werden.
Um Deine Frage direkt zu beantworten: Idealerweise sollten die neuen
Wörter keinen Einfluß auf folgende Eingaben haben, so lange ihnen
nicht explizit ein Word-Typ zugeordnet wird. Nur scheint dies aufgrund
der Komplexität des Parser-Algorithmus derzeit nicht immer der Fall zu
sein.
</german>
> Why is the run-time/dynamically generated word list -presumably-
> stored inside a database instead of just a (flat) file as the
> original hard-coded list is?
We think it is the lean way to store all data generated by the
Stendhal server at one single place, that is in the database.
Technically it is of course no problem to store the word list in file
format. In fact this has already been done during the development
phase of the NPC parser.
<german>
Wir denken, es ist der saubere Weg, alle vom Stendhal-Server
gespeicherten Daten an einer einzigen Stelle - also in der Datenbank
abzulegen. Technisch gesehen wäre es natürlich kein Problem, die
Wortliste wieder wie früher in einer Datei zu speichern. In der ersten
Entwicklungsphase des NPC-Parsers war das bereits der Fall.
</german>
> As it needs to be manually supervised, what could be the growth rate
> of this list (number of words per day)?
We could answer our server admins - they should know about the current
size of the list and have some impression about the growth rate. I
know, there are some fundamental concerns about storing the words at
all because one could make conclusions from the new vocabulary to the
current discussions in the game (NPCs some times also listen to user
discussions).
<german>
Dazu können wir unsere Server-Admistratoren fragen. Sie sollten
eigentlich einen Eindruck von der aktuellen Größe und der ungefähren
Wachstumssrate haben. Ich weis, auch dass es einige Bedenken gegenüber
der Speicherung der Wörter an sich gibt, da gewisse Rückschlüsse aus
dem neu dazukommenden Vokabular auf die aktuellen Diskussionen im
Spiel zu ziehen sind. Daher sollten auch nur Admins Zugriff auf die
Wortliste haben (NPCs bekommen manchmal auch Diskussionen mit, die
eigentlich nicht an sie gerichtet waren).
</german>
> Are there any additional requirements (appearance frequency, ...)
> for the new words to get added to that automatically generated list?
Currently any new word, that is not derived from any already know
word, is added to the word list (see also
SentenceImplementation.classifyWords() in the source code).
<german>
Zur Zeit wird jedes bisher unbekannte Wort, das nicht von einem der
bereits bekannten abgeleitet ist, in die Wortliste aufgenommen (siehe
auch SentenceImplementation.classifyWords() iim Quellcode).
</german>
> Is there a necessity or are there any benefits for an online list
> editor over editing the original file offline and updating it during
> server restarts?
<german>
Ein Online-Editor könnte die Bedienung vereinfachen. Er würde z.B.
Auwahllisten für den Wort-Typ bereitstellen, um versehentlichen
Fehleingaben zu vermeiden. Außerdem kann man so Sonderfunktionen wie
die im folgenden Inline-Kommentar beschriebenen vorsehen.
</german>
package games.stendhal.server.entity.npc.parser;
/**
* WordListEditor is a graphical tool to administer the word list
stored in the database. It provides
* the following functionality:
* - (re-)initialize the database with the pre-configured word list
by calling WordListUpdate
* - associate word types, plural, numerical values and aliases with
new words - remove wrong
* spelled words - add new entries
* - print out the "words.txt" file to update the source, which
initializes new word lists
*
* @author Martin Fuchs
*/
public class WordListEditor {
public static void main(final String[] args) {
// TODO implement WordListEditor
}
}
By the way:
The <german> sections are there because it is some times easier for me
to write the german text first and then translate all into english. So
if you prefer German, just read this version of my messages - it's a
free spin-off feature. ;-)
Regards,
Martin
> What are your ideas and opinions about the word list
> editor?
I'm afraid I can only contribute with questions for now :)
Can the parser be -easily- configured so that it doesn't start to consider the new words once they get added but simply stores them?
Why is the run-time/dynamically generated word list -presumably- stored inside a database instead of just a (flat) file as the original hard-coded list is?
As it needs to be manually supervised, what could be the growth rate of this list (number of words per day)?
Are there any additional requirements (appearance frequency, ...) for the new words to get added to that automatically generated list?
Is there a necessity or are there any benefits for an online list editor over editing the original file offline and updating it during server restarts?
Cheers,
Osl
Hello,
there is an entry about the automatically extended Stendhal word list
in the bug tracker:
It describes the problem that at some time the parser did no more
recognice expressions containing the word "3" correctly. While it may
look like a trivial bug, there is a longer story behind it. The parser
in the Stendhal server needs to know about word types in order to
decipher the English grammar. Because the initial word list in the
source code will never be complete to handle all user input, there is
a logic in the code to add previously unknown words in the list as
they are detected by one of the NPCs. Their type is initially unknown.
Anyhow this new entries have influence on the following behaviour of
the parser. This is what explains the behavior shown in the above bug.
The problem has been solved initially by reseting the word list in the
server to the hard coded list.
My initial idea was to take this automatically detected new words as
proposal to extend the word list permanently. Using a comfortable word
list editor it would be possible to edit the word list entries and
associate word types. This way the parser would grow and could handle
more and more extended vocabulary. Its behavior would become better
and better as time goes by. The word list editor could access the
database directly using JDBC. But I initially forgot that then either
the user interface of the word list editor would have to run on the
server. Or the database access would have to be implemented in any
other way between server and client. One way would be to configure a
limited access to the MySQL port through the firewall. Another way
would be to integrate the word list editor into the Stendhal client
and use the already existing client/server connection to transmit the
data.
What are your ideas and opinions about the word list editor?
Regards,
Martin
(original text in German)
Vor einiger Zeit wurde ein Bug bzgl. der automatisch gefüllten
Wortliste in Stendhal erfasst:
Da ich hierzu adhoc keine Lösung parat habe, möchte ich eine kleine
Diskussion starten.
Der Parser im Stendhal-Server ist auf die Zuordnung von Word-Typen
angewiesen, um die englische Grammatik aufzulösen. Da die im Quellcode
vorgegebene Liste mit nie ganz vollständig sein wird, um alle
Benutzereingaben abzudecken, ist eine Logik im Code enthalten, die
bisher unbekannte Wörter mit in der Liste erfasst, sobald sie von
einem der NPCs entdeckt werden. Deren Typ ist zunächst einmal noch
nicht bekannt. Dennoch beeinflussen diese Neueinträge das nachfolgende
Verhalten des Parsers. Dies erklärt das im oben aufgeführten Bug
beschriebene Verhalten. Das akute Problem wurde zunächst dadurch
gelöst, dass die Wortliste im Server auf die im Code verdrahtete Liste
zurückgesetzt wurde.
Meine ursprüngliche Idee war nun, diese automatisch neu erfassten
Wörter als Vorschläge für eine permanente Erweiterung der Wortliste zu
verwenden. Über einen komfortablen Wortlisten-Editor könnte man die
Einträge der Wortliste nachbearbeiten und jeweils einen Wort-Typ
zuordnen, so dass der Parser weiter reift und künftig mit dem
erweiterten Vokabular klarkommt. Nach und nach wird sein Verhalten so
immer besser. Dieser Wortlisten-Editor könnte direkt über JDBC auf die
Datenbank des Servers zugreifen. Woran ich zugegebenermassen
allerdings anfangs nicht dachte ist, dass dazu entweder die Oberfläche
des Wortlisten-Editors auf dem Server laufen muss. Oder der Datenbank-
Zugriff wird auf irgend eine andere Weise zwischen Server und Client
durchgeführt. Möglich wäre hier eine über die Firewall beschränkte
Freigabe des MySQL-Ports. Oder man könnte den Wortlisten-Editor auch
in den Stendhal-Client integrieren und die Daten über die schon
bestehende Client/Server-Verbindung übertragen.
Was sind Eure Ideen und Meinungen hierzu?
Grüße,
Martin
These are the details for the next Stendhal developers meeting:
*Date and time:
Thursday, October 16, 2008 at 18:00 UTC/GMT
Please, find out your corresponding local time ("Current time zone offset") at
*Place:
IRC
network: freenode - irc.freenode.net
channel: #arianne
*Topics:
(suggested list so far)
- Roadmap for the next release of Stendhal
- Celebration of Semos mine town revival for Halloween
- Adoption of "Community feel"-enhancing splash screen voting system
- Impact of the new damage system and proposal of adapted raids and new sociable offerings alternative to training to promote team work and encourage users to stay longer online.
- Coordinating support between Game Administrators.
- Evaluation of the eficient use of the means available to the project: IRC, forums, trackers, patch submision, and other collaborative tools like plone, wiki and testing procedures before a release ...
- Suggestion on ways to improve interpersonal relations inside the project.
As previously stated, this list is just a starting point, so feel free to add or comment on these topics in this forum thread:
Hi :)
> Hello,
> I want to work on Arianne RPG and in particular implement
> the following from the existing feature list :
> * Dropping items over existing should swap them : Currently, dropp=
ing an item from bag to character pane , in order to equip the new item and=
store the previous item in bag is not possible without manually inequippin=
g the item and the equipping a new one.
> * Adding Item sets with bonuses
> * Single click navigation: Currently, player has to double-click to a=
ttack or move, convert it into a single left click to move and a single rig=
ht click to attack.
> * Adding Item sets with bonuses: Make an item set to get bonus stats =
instead of having items with single attributes.
>=20
> If it is fine with you, I plan on completing this by the
> 11th of December, 2007, since working on a challenging Open
> Source Project is part of my course curriculum ( and that is
> the deadline :) )
Hi Mallika,
Sure, go ahead, and send it as Patches to Sourceforge tracker.
I have to warn you that the Drop item thing is not exactly easy, so be sure=
to test it throughly. Our previous tests ended up on items duplications an=
d/or items lost.
About the single click, it is prone it won't be accepted. One click is sele=
ct, two clicks move. Nothing is selectable on Stendhal but that's the usual=
approach.
We meet usually at irc.freenode.net #arianne so feel free to join us there.
Regards,
Miguel
_________________________________________________________________
Explore the seven wonders of the world
BRE=
Hello,
I want to work on Arianne RPG and in particular implement
the following from the existing feature list :
* Dropping items over existing should swap them : Currently, dropping
an item from bag to character pane , in order to equip the new item and
store the previous item in bag is not possible without manually inequipping
the item and the equipping a new one.
* Adding Item sets with bonuses
* Single click navigation: Currently, player has to double-click to
attack or move, convert it into a single left click to move and a single
right click to attack.
* Adding Item sets with bonuses: Make an item set to get bonus stats
instead of having items with single attributes.
If it is fine with you, I plan on completing this by the
11th of December, 2007, since working on a challenging Open
Source Project is part of my course curriculum ( and that is
the deadline :) )
More about me below...
About me:
I am a Master's student, studying Computer Science at New
York University, and as a requirement for one of the tougher
courses here (Production Quality Software), I was looking
for a neat Java project, and this really seems to be exactly
the kind of thing I want to work on.
Hoping for a positive response!
Thanks a lot!
Best,
Mallika
--
"sometimes WiNNinG is everythinG"
On 07.11.2007 12:40:24 Miguel Angel Blanch Lardin wrote:
>.
Well I configured Doxygen to parse the source code and output information regardless of existing JavaDoc comments, and the source is inlined into the HTML pages, so you can find information about any function.
But personally I also tend to use Eclipse's "find reference" and "find in source" features if digging in source code. The main reason for such a documentation is, it looks quite pretty and can be used to say "Look here - there is documentation about all and anything!". So yes, you are right: If there is none of the developer being used to work with that kind of documentation, the usefulness is questionable and it may not be worth that 100 MB web space.
Regards,
Martin
2007/11/4, Martin Fuchs <martin-fuchs@...>:
>
>.
Regards,
Miguel.
Regards,
Martin | http://sourceforge.net/p/arianne/mailman/arianne-devel/ | CC-MAIN-2014-52 | refinedweb | 3,296 | 59.23 |
A context manager for a docker container.
Project description
dockerctx is a context manager for managing the lifetime of a docker container.
The main use case is for setting up scaffolding for running tests, where you want something a little broader than unit tests, but less heavily integrated than, say, what you might write using Robot framework.
Install
$ pip install dockerctx
For dev, you have to use flit:
$ pip install flit $ flit install
The development-specific requirements will be installed automatically.
Demo
This is taken from one of the tests:
import time import redis import pytest from dockerctx import new_container # First make a pytest fixture @pytest.fixture(scope='function') def f_redis(): # This is the new thing! It's pretty clear. The `ready_test` provides # a way to customize what "ready" means for each container. Here, # we simply pause for a bit. with new_container( image_name='redis:latest', ports={'6379/tcp': 56379}, ready_test=lambda: time.sleep(0.5) or True) as container: yield container # Here is the test. Since the fixture is at the "function" level, a fully # new Redis container will be created for each test that uses this fixture. # After the test completes, the container will be removed. def test_redis_a(f_redis): # The container object comes from the `docker` python package. Here we # access only the "name" attribute, but there are many others. print('Container %s' % f_redis.name) r = redis.StrictRedis(host='localhost', port=56379, db=0) r.set('foo', 'bar') assert r.get('foo') == b'bar'
Note that a brand new Redis container is created here, used within the context of the context manager (which is wrapped into a pytest fixture here), and then the container is destroyed after the context manager exits.
In the src, there is another, much more elaborate test which
- runs a postgres container;
- waits for postgres to begin accepting connections;
- creates a database;
- creates tables (using the SQLAlchemy ORM);
- performs database operations;
- tears down and removes the container afterwards.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/dockerctx/ | CC-MAIN-2020-50 | refinedweb | 352 | 57.16 |
reference when a pointer to an integer is the same size in memory as an integer itself.
Now, let us learn how variables can be passed in a C program.
Pass By Value
Passing a variable by value makes a copy of the variable before passing it onto a function. This means that if you try to modify the value inside a function, it will only have the modified value inside that function. One the function returns, the variable you passed it will have the same value it had before you passed it into the function.
Pass By Reference
There are two instances where a variable is passed by reference:
- When you modify the value of the passed variable locally and also the value of the variable in the calling function as well.
- To avoid making a copy of the variable for efficiency reasons.
Let’s have a quick example that will illustrate both concepts.xytotals.c:
#include <stdio.h>
#include <stdlib.h>
void printtotal(INT total);
void addxy(INT x, INT y, INT total);
void subxy(INT x, INT y, INT *total);
void main() {
INT x, y, total;
x = 10;
y = 5;
total = 0;
printtotal(total);
addxy(x, y, total);
printtotal(total);
subxy(x, y, &total);
printtotal(total);
}
void printtotal(INT total) {
printf("Total in Main: %dn", total);
}
void addxy(INT x, INT y, INT total) {
total = x + y;
printf("Total from inside addxy: %dn", total);
}
void subxy(INT x, INT y, INT *total) {
*total = x - y;
printf("Total from inside subxy: %dn", *total);
}
Here is the output:
There are three functions in the above program. In the first two functions variable is passed by value, but in the third function, the variable `total` is passed to it by reference. This is identified by the “*” operator in its declaration.
The program prints the value of variable `total` from the main function before any operations have been performed. It has 0 in the output as expected.
Then we pass the variables by value the 2nd function addxy, it receives a copy of `x`, `y`, and `total`. The value of `total` from inside that function after adding x and y together is 15.
Notice that once addxy has exited and we print `total` again its value remains 0 even though we passed it from the main function. This is because when a variable is passed by value, the function works on a copy of that value. They were two different `total` variables in memory simultaneously. The one we set to 15 in the addxy function, was removed from memory once the addxy function finished executing. However, the original variable `total` remains 0 when we print its value from main.
Now we subtract y from x using the subxy function. This time we pass variable `total` by reference (using the “address of” operator (&)) to the subxy function.
Note how we use the dereference operator “*” to get the value of total inside the function. This is necessary, because you want the value in variable `total`, not the address of variable `total` that was passed in.
Now we print the value of variable `total` from main again after the subxy function finishes. We get a value of 5, which matches the value of total printed from inside of subxy function. The reason is because by passing a variable total by reference we did not make a copy of the variable `total` instead we passed the address in memory of the same variable `total` used in the function main. In other words we only had one total variable during entire code execution time.
You have now learnt how you can pass a variable by value and also pass by reference in this tutorial. | http://www.exforsys.com/tutorials/c-language/call-by-value-and-call-by-reference.html | CC-MAIN-2018-09 | refinedweb | 618 | 60.35 |
Lookup failed in SessionBean
Lookup failed in SessionBean Hi. I've downloaded the session bean example which invokes session beans. The file was named example3. So I wanted to make... directories but I get an Exception in the method "Object objref = ctx.lookup("ejb/test
Writing Calculator Stateless Session Bean
Writing Calculator Stateless Session Bean...
In this EJB tutorial we will learn how to write a Stateless Session... javax.ejb.EJBObject. The remote interface is the client view of the session bean. Methods defined
Deploying and testing Stateless Session Bean
Deploying and testing Stateless Session Bean... Session Bean developed in the last section. We will use ant build tool to build... learnt how to deploy Session Bean and test on Web Logic Server
Session Bean
-lived components. The EJB container may destroy a session bean if its client times... i.e. the EJB container destroys a stateless session bean. ... is a Session bean. A session bean is the enterprise bean that directly
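As a rough illustration of what such a bean looks like in code, a minimal EJB 2.x style session bean consists of a remote interface and a bean class. The names Hello and HelloBean below are made up for the sketch, and the home interface and deployment descriptor are left out:

// Hello.java - remote interface, the client view of the session bean
import java.rmi.RemoteException;
import javax.ejb.EJBObject;

public interface Hello extends EJBObject {
    String sayHello(String name) throws RemoteException;
}

// HelloBean.java - bean class with the business logic and the container callbacks
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

public class HelloBean implements SessionBean {
    public String sayHello(String name) {
        return "Hello " + name;
    }

    // these methods are invoked by the EJB container, not by clients
    public void ejbCreate() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void ejbRemove() {}
    public void setSessionContext(SessionContext ctx) {}
}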
calling a session bean bean from servlet using netbeans - EJB
calling a session bean from servlet using netbeans How to call a session bean from servlet using netbeans in Java
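One common way to do this in an EJB 3 project (NetBeans generates similar code) is to let the container inject the bean into the servlet. GreetingBeanLocal and its greet method are assumed names for the sketch:

import java.io.IOException;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GreetingServlet extends HttpServlet {

    // the container injects a reference to the session bean
    @EJB
    private GreetingBeanLocal greetingBean;

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.getWriter().println(greetingBean.greet("world"));
    }
}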
Session Bean Example
Session Bean Example I want to know that how to run ejb module by jboss 4.2.1 GA (session bean example by jboss configuration )?
Please visit the following link:
Error in simple session bean ..................
Error in simple session bean .................. Hi friends,
I am trying a simple HelloWorld EJB on WebSphere Application Server 6.1.
Can any... = initialContext.lookup("ejb/com/ibm/ejb/exampleHome");
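For this style of code the lookup usually follows the pattern below; the JNDI name is taken from the snippet above, while ExampleHome and Example are assumed names for the home and remote interfaces and may differ in the actual application:

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

public class ExampleClient {
    public static void main(String[] args) throws Exception {
        Context initialContext = new InitialContext();

        // the name must match the JNDI binding used at deployment time
        Object objref = initialContext.lookup("ejb/com/ibm/ejb/exampleHome");

        // narrow the reference before casting - a plain cast can fail for remote homes
        ExampleHome home = (ExampleHome) PortableRemoteObject.narrow(objref, ExampleHome.class);
        Example bean = home.create();
        // ... call business methods on 'bean'
    }
}

A NamingException at the lookup line usually means the JNDI name does not match the binding on the server.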
Ejb message driven bean
you the process which are
involved in making a message driven bean using EJB. Message driven bean in EJB
have the following features:-
1) is a JMS listener...
For developing the message driven bean we are using both the EJB module and web
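A message driven bean of this kind, written in EJB 3 annotation style, is essentially a JMS listener managed by the container; the queue name is illustrative:

import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(mappedName = "jms/exampleQueue")
public class ExampleMessageBean implements MessageListener {

    // the container calls onMessage for every message delivered to the queue
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}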
Writing Session bean - Session Bean Example with Source Code
methods are defined public. The Session
Bean interface methods that the EJB provider...Writing Session bean
In this lesson you will learn how to develop Hello
World Session Bean. We will use ant to build the application. Our application
Issue in Stateless session bean example
Issue in Stateless session bean example Hi Team,
I tried your Stateless session bean example in the path. I got the below exception can you plese guide to rectify
EJB - EJB
EJB What is the difference between Stateless session bean and Statefull session bean? what are the lifecycle methods of both SLSB and SFSB ... session bean and Statefull session bean.
*) Stateful beans are also
EJB - EJB
, a Stateless session bean does not maintain any state (instance variables values) across... not have significance in Stateless session bean. So the Container can assign... not to write code in these methods for stateless session bean.
The app
first entity bean example in eclipse europa - EJB
first entity bean example in eclipse europa pls provide steps to create simple ejb3.0 application in eclipse .And also how to create entity bean ,session bean in ejb3.0 Hi Friend,
Please visit the following links.
Stateless Session Bean Example Error
Stateless Session Bean Example Error Dear sir,
I'm getting...)
Please visit the following link:
Stateless Session Bean Example... and run.
Please help me as I'm new to EJB
ejb - EJB
ejb hi
i am making a program in java script with sateful session bean. This program is Loan calculator.In this program three field 1-type... but use stateless session bean. when user submit your require.
Please early
Stateful and Stateless Session Bean Life Cycle
Understanding Stateful and
Stateless Session Bean Life Cycle... Bean Life cycle
There are two stages in the Lifecycle of Stateless
Session Bean... container
some instances of the bean are created and placed in the pool. EJB
Writing Deployment Descriptor of Stateless Session Bean
Writing Deployment Descriptor of Stateless Session Bean...
for the session bean. We need the deployment descriptor for application...;
<session >
<description>EJB Test Session Bean<
developing a Session Bean and a Servlet and deploy the web application on
JBoss 3.0
a name 'ejb/CalculatorSessionBean' to the session bean.
Please note that bean... a Calculator Stateless Session Bean and
call it through JSP file and deploy... the MyTestSession Session Bean developed in Lesson 3. Infact we will use the same
Steps to create simple EJb 2.1 (Session Bean) and deploy on websphere application server 6.0,
Steps to create simple EJb 2.1 (Session Bean) and deploy on websphere... session bean having Java client. but i am unable to run it on Websphere application server. can any one please send me steps how i create EJB 2.1 and deploy
EJB in jsp code - EJB
EJB in jsp code Suppose in EJB we created the session bean, remote... can we access the EJB methods in the jsp file....
if u can present me... the following links:
Given a list of responsibilities related to session beans, identify those which
are the responsibility of the session bean provider and those which are the responsibility
of the EJB contai
which
are the responsibility of the session bean provider and those...;Chapter 3. Session Bean Component Contract Next
... which
are the responsibility of the session bean provider and those which
Ejb Webservice
In Ejb web Service Only Stateless Bean can be used .So take a Session Bean
Now take a Session Bean
Here select Stateless Bean and Remote interface...EJB Webservies
In this tutorial I
stateless session bean with methods error - Java Beginners
not be retained i.e. the EJB container destroys a stateless session bean.
These types...stateless session bean with methods error I have to create stateless session bean with 3 methods and then create a servlet which remotely calls all
EJB Example - EJB
EJB Example Hi,
My Question is about enterprise java beans, is EJB stateful session bean work as web service? if yes can you please explain... the following link:
Hope
Stateful Session Bean Example
Stateful Session Bean Example
...
using stateful session bean.
The purpose of account is to performs two... bean:
The enterprise bean in our example is a statelful
session bean called
Ejb message driven bean
Ejb message driven bean
... driven bean using EJB. Mesaage driven bean in EJB
have the following features... the EJB module and web module. The
steps involved in creating message driven bean
ejb
ejb what is ejb
ejb is entity java bean
Chapter 4. Session Bean Life Cycle
Chapter 4. Session Bean Life CyclePrev Part I. ... the life cycle of a
stateful or stateless session bean instance.
Stateful Session Bean
A session bean
Stateless Session Bean Example
Stateless Session Bean Example
... stateless session bean.
The purpose of example is to performs the mathematical... bean:
The enterprise bean in our example is a stateless session bean called
Chapter 2. Client View of a Session Bean
or examples about the client view of a session
bean's local and remote home interfaces, including the code used by a client to locate
a session bean's... for the Cart
session bean can be located using the following code segment
Stateful Session Beans Example, EJB Tutorial
Stateful Session Bean Example
... stateful session bean.
The purpose of account is to performs two transaction...:
The enterprise bean in our example is a statelful
session bean called AccountBean
EJB3 - stateless - EJB
EJB stateless session bean Hi, I am looking for an example of EJB 3.0 stateless session bean
doubt in ejb3 - EJB
EntityBean.UserEntityBean;
/**
* Session Bean implementation class UserBean
*/
@Stateless...;
}
}
sessionbean:
package SesssionBean;
import javax.ejb.Stateless;
import...;
import java.util.Properties;
public class Usermain {
@EJB
private static
java - EJB
an application by an example that contains a session bean and a CMP but not able.... Hi mona,
A session bean is the enterprise bean that directly... application. A session bean represents a single client accessing the enterprise
java bean - EJB
protocol where as Java Bean is standalone and works only in the same JVM.
3)EJB...java bean difference between java bean and enterprice java bean first of all dont compare java bean with enterprise java bean because
Introduction To Enterprise Java Bean(EJB). WebLogic 6.0 Tutorial.
(EJB)
Enterprise Java
Bean architecture is the component....
Developing Hello World
Session Bean... Code for
session
session Which methods can be invoked by the container on a stateless session bean
Session Beans
;
What is a Session bean
A session bean is the
enterprise bean... of the enterprise application. A session bean represents a single client
accessing... to the application. A session bean makes an interactive session only
for a single
java bean code - EJB
java bean code simple code for java beans Hi Friend... the Presentation logic. Internally, a bean is just an instance of a class.
Java Bean Code:
public class EmployeeBean{
public int id;
public
Struts integration with EJB in WEBLOGIC7
stateless session bean. Unlike other types of EJB, MDB have no home or remote... is WL-9).
Different kinds of Enterprise Beans are
1. Session Bean
2. Entity Bean
3. MessageDriven Bean (MDB)
Session Bean
Session Bean can
Identify the interface and method for each of the following: retrieve the session
bean's remote home interface, retrieve the session bean's local component interface,
determine if the sessio
: retrieve the session
bean's remote home interface, retrieve the session bean's local component interface,
determine if the session bean's caller has...
transaction has completed.
Prev Chapter 3. Session Bean
Java Session Beans
;
A session bean is the enterprise bean that directly
interacts with the user and contains the business logic of the enterprise
application. A session bean...-server session. A session bean performs and
handles operations, such as calculations
difference - EJB
; Hi friend,
Stateful Beans
*)Stateful beans are also Persistent session... the previous request and responses.
*)These bean instances are pooled.
*)Client specific data has to be pushed to the bean for each method invocation which
result
Simple EJB3.0 - EJB
;>> my question is how to make session bean and how to access this session bean on servlet/jsp.
thanQ Hi Friend,
Please visit.../ejb/entity-bean-example.shtml
Thanks
EJB 3.1 - EJB Interfaces are Optional
In EJB 3.1, now you do not
need to define any interfaces for Session Beans... directory.
Support for stateful web services via
Stateful Session Bean web... a POJO with the @Stateless
or @Stateful to get a fully functional EJB - Java Interview Questions
:
A stateless session bean does not maintain a conversational state... session bean.
These types of session beans do not use the class variables (instance... there is no need to passivates the bean's instance.
Because stateless session
Struts integration with EJB in JBOSS3.2
;
stateless session bean. Unlike other types of EJB, MDB have no home or remote... are
Session Bean
Entity Bean
MessageDriven Bean (MDB)
Session Bean
Session Bean can be defined as function bean called in RMI-IIOP
EJB remote interface
can access all the methods of the session bean.
@EJB...:- This is the session bean
in which we will declared all the methods of the Remote... is the annotation which denotes the bean is of session
type.
SessionBeanBean.java
Match the correct description about purpose and function to which session bean type
they apply: stateless, stateful, or both.
session bean type
they apply: stateless, stateful, or both.
Prev Chapter 3. Session Bean Component Contract Next
... to which session bean
type they apply: stateless, stateful, or both
EJB - EJB
EJB Here is my question.
Can one entity bean class be associated with more than one table . And if yes how we can achieve this.
thanks
EJB deployment descriptor
;org.glassfish.docs.secure.secureBean</ejb-class>
<session-type>...;:-This
node gives the brief description about the Ejb module created.
<session-type>Stateless</session-type>:-This
node assigns the Session bean
EJB deployment descriptor
;org.glassfish.docs.secure.secureBean</ejb-class>
<session... the brief description about the Ejb module created.
<session-type>Stateless</session-type>:-This
node assigns the Session bean as stateless
A Message-Driven Bean Example
.
A message-driven bean has only a bean class i.e. unlike
a session bean... implementation:
In EJB 3.0, the MDB bean class is annotated.... It can call helper methods, or invoke a session bean to process the
information
An Entity Bean Example
, such as relational database an
entity bean persists across multiple session and can...
Implement the Annotated Session Bean: BookCatalogBean... the database. We use EJB3 session bean
POJOs to implement the business logic
Introduction To Enterprise Java Bean(EJB). Developing web component.
;In
the next lesson we will write stateless session bean and then deploy... components, web component and the
enterprise bean.
Web... on the web server.
Enterprise Bean consists of all
the program
EJB Hello world example
; SessionBeanBean.java:-This
is the bean of type session in which we have defined ... to declare the
bean as a session type.
3. Main.java...
in the bean.@EJB is the
annotation
used for
configuring the EJB
EJB container services
beans and entity
beans. The state for session bean is maintained through...
EJB container services
The EJB container is a container that deploys EJB automatically when
Web Server
methods can be invoked by the container on a stateless session bean
methods can be invoked by the container on a stateless session bean Which of the following methods can be invoked by the container on a stateless session bean
EJB Insert data
:-This is the session bean
we have created. By session bean we mean the bean...
through which we can access the methods which are defined in the bean.
@EJB...
.style1 {
color: #FFFFFF;
}
EJB Insert data
ejb vs hibernate - EJB
ejb vs hibernate 1>>> If we have ejb entity bean why we need hibernate?
2>>> Is hibernate distributed
Features of EJB 3.0
the container discards the bean
instances and creates new instances.
EJB Context: Bean... that the client can directly invoke methods on EJB rather
than creating the bean...
Features of EJB 3.0
Now
creating entity bean in eclipse europa - EJB
creating entity bean in eclipse europa how to create a entity bean in eclipse europa
EJB communication - EJB
EJB communication i am trying to create call a bean from another bean but it will not call. i cann't understand the dd configuration.please help me
EJB 3.0
bean can?t perform functions outside of an EJB container.
Read more...EJB 3.0
Introduction
To Enterprise Java Bean 3.0
Enterprise beans
What is EJB 3.0?
on developing logic to solve business problems.
Types of EJB
Session Bean
Session....
A session bean is similar to an interactive session and is not shared... can
have only one user. A session bean is not persistent and it is destroyed
EJB 3.0 Tutorial
of EJB like what is an
Enterprise Bean, what is EJB container, benefits... Bean : Session Bean implies that there is a single
client inside the Application Server. Session Bean is responsible for doing
tasks for a client that calls
Accessing Database using EJB
:-This is the session
bean we have created. By session bean we mean the bean which... the methods which are defined in the bean.
@EJB:-This is the annotation... Accessing Database using EJB
EJB lookup example
();
}
Bean30Bean.java:-This
is the bean of type session in which we have... the bean as a session type.
@Stateless(mappedName="Bean30"):-It is used...
EJB lookup example
Chapter 13. Enterprise Bean Environment
or
Session), home, and
remote elements.
<!--
The ejb-ref element... interfaces
of the referenced enterprise bean; and optional ejb-link information..., and session
-->
<!ELEMENT ejb-ref (description?, ejb-ref-name, ejb-ref
Enterprise JavaBeans (EJB): An Overview
To Enterprise Java Bean(EJB)
Writing Deployment Descriptor of Stateless Session Bean
Writing Session bean
Stateful and Stateless Session Bean Life Cycle...Enterprise JavaBeans (EJB): An Overview
Enterprise JavaBeans (EJB) is a Java
EJB
Transactional Attributes
:
For a SESSION bean, the transaction attributes MUST be specified... of a
session bean's HOME interface....
A transaction attribute is a value associated with a method of a session
Building a Simple EJB Application Tutorial
create a new Session Bean from the Project Explorer (EJB Projects ->
Simple EJB... a Session Bean
Open the J2EE perspective in eclipse (Windows -> Open Perspective... project that has a Session Bean. We created a client Web
application
From struts calling function to bean - EJB
From struts calling function to bean
I am calling a function from "struts action class" to bean. I am using weblogic 8.1 web server.i am able...: Bean is already undeployed.
i am not able to solve the problem.
Waiting
EJB Books
of EJB development, including session beans, entity beans (BMP and CMP), and message... table corresponds to a single "entity bean". (Since Resin-CMP uses the EJB specification, most of its jargon comes from EJB.) By creating an entity bean
EJB remote interface
access all the methods of the session bean.
@EJB:-This is the annotation...:- This is the session bean
in which we will declared all the methods of the Remote... which denotes the bean is of session
type.
SessionBeanBean.java
EJB, Enterprise java bean- Why EJB (Enterprise Java Beans)?
Why EJB (Enterprise Java Beans)?
Enterprise Java Beans or EJB..., Enterprise
Edition (J2EE) platform. EJB technology enables rapid and simplified
EJB 3.0 Tutorials
;
Session Beans
What is a session bean
Types of session beans
Stateless Session Bean
Stateful session bean
When to use session bean
Life Cycle of Stateless session
Life Cycle of Stateful session... locating the enterprise
bean may be various, numerous and thin.
* Enterprise
Chapter 1. EJB Overview
Beans only).
Provides a query syntax (EJB QL) for entity bean finder...
Chapter 1. EJB OverviewPrev Part I. Exam Objectives Next
Chapter 1. EJB
Chapter 5. EJB transactions
, the Entity EJBs' methods will normally be called by a
method in a Session EJB... this mechanism. Alternatively, a Session EJB can opt to manage its own transactions programatically.
Because a Session EJB s not just a model of business data
java - EJB
java i have a login form in swing in netbean with username and password .i want store data in database through Sessionbean And Entitybean .when i insert data in username & password field , if data is incorrect show a massege
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/41349 | CC-MAIN-2015-27 | refinedweb | 3,044 | 59.19 |
1 //XRadar2 //Copyright (c) 2004, 2005, Kristoffer Kvam3 //All rights reserved.4 //5 //Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met name of Kristoffer Kvam nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.9 / //See licence.txt for dependancies to other open projects.package org.xradar.test.f;12 13 package org.xradar.test.d;14 15 16 /**17 * XRadar test application class18 * 19 * @author Kristoffer Kvam20 * @since XRadar 0.821 */22 public class D1 { 23 }
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/xradar/test/d/D1.java.htm | CC-MAIN-2016-44 | refinedweb | 119 | 57.98 |
Sylvain Wallez wrote:
> Stefano Mazzocchi wrote:
>
>> Sylvain Wallez wrote:
>>
>>> Hi all,
>>>
>>> I just committed a new JCR block. This block provides two features: a
>>> Repository component, and a "jcr:" protocol.
>>
>>
>>
>> Awesome news!
>>
>>> The Repository component is nothing more than the standard
>>> javax.jcr.Repository interface, but provides a way to centrally
>>> define how to access the repository (Jackrabbit conf file, JNDI, etc)
>>> and how to obtain credentials to log into the repository. There's
>>> currently only one concrete implementation that uses Jackrabbit.
>>
>>
>>
>> That's very cool... I needed something like this for Linotype.
>>
>>> The JCR source factory is more interesting, as it provides a
>>> traversable and modifiable source that hides away the details of the
>>> repository structure and node types. To achieve this, we need to
>>> configure it by defining a mapping from node types to "files" and
>>> "folders" that will be visible through the "jcr:" protocol.
>>>
>>> The result is that we can now use a JCR repository in Cocoon just
>>> like we use the regular filesystem.
>>>
>>> As an example, here's how this source factory is configured for the
>>> standard filesystem-like node types defined by JCR:
>>> <component-instance
>>>
>>> <folder-node>>
>>> <folder-node
>>> <file-node>>
>>> <file-node
>>> <content-node>>>>>>>>
>>> </component-instance>
>>
>>
>>
>> how do you expand the prefixes in the attribute values?
>
>
>
> I don't ;-)
>
> All methods in the JCR API use prefixed names, and prefix mappings are
> stored within the repository itself.
well, this is not entirely correct. you can overload the prefixes in
your workspace ;-)
> So I expect the mapping writer to
> be consistent with prefixes defined in the repository.
this is a huge mistake. if I take my content and your content and merge
them into a single repository, then we could have the exact same
prefixes, mapped to a different namespace and the JCR API is totally
consistent as long as you specify that prefix in the workspace you are
using.
I agree that by default the default prefix is good enough, but in the
long term we might want to keep the door open for overloading prefixes
in the workspace.
> Now if we want to use full namespace URIs, we can also accept the
> notation that's used in some places in JCR, i.e. "{full-uri}name".
that's ugly :-) I'd much rather use the XSLT-like prefix expansion based
on the current namespace prefix description of the element contenxt in
the XML infoset.
>>> More detailed information about this configuration is given in
>>> o.a.c.jcr.source.JCRSourceFactory's javadoc.
>>>
>>> This source factory is still a bit primitive regarding all features
>>> provided by JCR. Future enhancements include querying the repository,
>>> handling workspaces, node properties, versions, etc.
>>>
>>> Your feedback and opinions about this initial implementation and its
>>> future evolutions are more than welcome!
>>
>>
>>
>> In linotype, what I needed was a way to store new items in the
>> repository and then query the repository for the last n in reversed
>> chronological order (LIFO).
>>
>> There are many ways we can glue query capabilities to JCR in cocoon, I
>> would like to discuss with you people what is best way to do that and
>> what are the requirements.
>
>
>
> My initial thoughts about this is to use URI parameters for querying,
> e.g. "jcr://?xpath=/news/items[position() < 10]" and have this return a
> collection source, i.e. one that has children that would be the query
> results.
>
> The parameter name is the query language to be used (e.g. xpath, sql, etc).
sure, that's a start, let's see how far along we can go with that.
--
Stefano. | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200504.mbox/%[email protected]%3E | CC-MAIN-2014-15 | refinedweb | 593 | 61.67 |
For the seasoned user, PromQL confers the ability to analyze metrics and achieve high levels of observability. Unfortunately, PromQL has a reputation among novices for being a tough nut to crack.
Fear not! This PromQL tutorial will show you five paths to Prometheus godhood. Using these tricks will allow you to use Prometheus with the throttle wide open.
Aggregation
Aggregation is a great way to construct powerful PromQL queries. If you’re familiar with SQL, you’ll remember that GROUP BY allows you to group results by a field (e.g country or city) and apply an aggregate function, such as AVG() or COUNT(), to values of another field.
Aggregation in PromQL is a similar concept. Metric results are aggregated over a metric label and processed by an aggregation operator like sum().
Aggregation Operators
PromQL has twelve built in aggregation operators that allow you to perform statistics and data manipulation.
Group
What if you want to aggregate by a label just to get values for that label? Prometheus 2.0 introduced the group() operator for exactly this purpose. Using it makes queries easier to interpret and means you don’t need to use bodges.
Count those metrics
PromQL has two operators for counting up elements in a time series. Count() simply gives the total number of elements. Count_values() gives the number of elements within a time series that have a specified value. For example, we could count the number of binaries running each build version with the query:
count_values("version", build_version)
Sum() does what it says. It takes the elements of a time series and simply adds them all together. For example if we wanted to know the total http requests across all our applications we can use:
sum(http_requests_total)
Stats
PromQL has 8 operators that pack a punch when it comes to stats.
Avg() computes the arithmetic mean of values in a time series.
Min() and max() calculate the minimum and maximum values of a time series. If you want to know the k highest or lowest values of a time series, PromQL provides topk() and bottomk(). For example if we wanted the 5 largest HTTP requests counts across all instances we could write:
topk(5, http_requests_total)
Quantile() calculates an arbitrary upper or lower portion of a time series. It uses the idea that a dataset can be split into ‘quantiles’ such as quartiles or percentiles. For example, quantile(0.25, s) computes the upper quartile of the time series s.
Two powerful operators are stddev(), which computes the standard deviation of a time series and stdvar, which computes its variance. These operators come in handy when you’ve got application metrics that fluctuate, such as traffic or disk usage.
By and Without
The by and without clauses enable you to choose which dimensions (metric labels) to aggregate along. by tells the query to include labels: the query sum by(instance) (node_filesystem_size_bytes) returns the total node_filesystem_size_bytes for each instance.
In contrast, without tells the query which labels not to include in the aggregation. The query sum without(job) (node_filesystem_size_bytes) returns the total node_filesystem_size_bytes for all labels except job.
Joining Metrics
SQL fans will be familiar with joining tables to increase the breadth and power of queries. Likewise, PromQL lets you join metrics. As a case in point, the multiplication operator can be applied element-wise to two instance vectors to produce a third vector.
Let’s look at this query which joins instance vectors a and b.
a * b
This makes a resultant vector with elements a1b1, a2b2… anbn . It’s important to realise that if a contains more elements than b or vice versa, the unmatched elements won’t be factored into the resultant vector.
This is similar to how an SQL inner join works; the resulting vector only contains values in both a and b.
Joining Metrics on Labels
We can change the way vectors a and b are matched using labels. For instance, the query a * on (foo, bar) group_left(baz) b matches vectors a and b on metric labels foo and bar. (group_left(baz) means the result contains baz, a label belonging to b.
Conversely you can use ignoring to specify which label you don’t want to join on. For example the query a * ignoring (baz) group_left(baz) b joins a and b on every label except baz. Let’s assume a contains labels foo and bar and b contains foo, bar and baz. The query will join a to b on foo and bar and therefore be equivalent to the first query.
Later, we’ll see how joining can be used in Kubernetes.
Labels: Killing Two Birds with One Metric
Metric labels allow you to do more with less. They enable you to glean more system insights with fewer metrics.
Scenario: Using Metric Labels to Count Errors
Let’s say you want to track how many exceptions are thrown in your application. There’s a noob way to solve this and a Prometheus god way.
The Noob Solution
One solution is to create a counter metric for each given area of code. Each exception thrown would increment the metric by one.
This is all well and good, but how do we deal with one of our devs adding a new piece of code? In this solution we’d have to add a corresponding exception-tracking metric. Imagine that barrel-loads of code monkeys keep adding code. And more code. And more code.
Our endpoint is going to pick up metric names like a ship picks up barnacles. To retrieve the total exception count from this patchwork quilt of code areas, we’ll need to write complicated PromQL queries to stitch the metrics together.
The God Solution
There’s another way. Track the total exception count with a single application-wide metric and add metric labels to represent new areas of code. To illustrate, if the exception counter was called “application_error_count” and it covered code area “x”, we can tack on a corresponding metric label.
application_error_count{area="x"}
As you can see, the label is in braces. If we wanted to extend application_error_count’s domain to code area “y”, we can use the following syntax.
application_error_count{area="x|y"}
This implementation allows us to bolt on as much code as we like without changing the PromQL query we use to get total exception count. All we need to do is add area labels.
If we do want the exception count for individual code areas, we can always slice application_error_count with an aggregate query such as:
count by(application_error_count)(area)
Using metric labels allows us to write flexible and scalable PromQL queries with a manageable number of metrics.
Manipulating Labels
PromQL’s two label manipulation commands are label_join and label_replace. label_join allows you to take values from separate labels and group them into one new label. The best way to understand this concept is with an example.
label_join(up{job="api-server",src1="a",src2="b",src3="c"}, "foo", ",", "src1", "src2", "src3")
In this query, the values of three labels, src1, src2 and src3 are grouped into label foo. Foo now contains the respective values of src1, src2 and src3 which are a, b, and c.
label_replace renames a given label. Let’s examine the query
label_replace(up{job="api-server",service="a:c"}, "foo", "$1", "service", "(.*):.*")
This query replaces the label “service” with the label “foo”. Now foo adopts service’s value and becomes a stand in for it. One use of label_replace is writing cool queries for Kubernetes.
Creating Alerts with predict_linear
Introduced in 2015, predict_linear is PromQL’s metric forecasting tool. This function takes two arguments. The first is a gauge metric you want to predict. You need to provide this as a range vector. The second is the length of time you want to look ahead in seconds.
predict_linear takes the metric at hand and uses linear regression to extrapolate forward to its likely value in the future. As an example, let’s use PromLens to run the query:
predict_linear(node_filesystem_avail_bytes{job="node"}[1h], 3600).
It shows a graph which shows the predicted value an hour from the current time.
Alerts and predict_linear
The main use of predict_linear is in creating alerts. Let’s imagine you want to know when you run out of disk space. One way to do this would be an alert which fires as soon as a given disk usage threshold is crossed. For example, you might get alerted as soon as the disk is 80% full.
Unfortunately, threshold alerts can’t cope with extremes of memory usage growth. If disk usage grows slowly, it makes for noisy alerts. An alert telling you to urgently act on a disk that’s 80% full is a nuisance if disk space will only run out in a month’s time.
If, on the other hand, disk usage fluctuates rapidly, the same alert might be a woefully inadequate warning. The fundamental problem is that threshold-based alerting knows only the system’s history, not its future.
In contrast, an alert based on predict_linear can tell you exactly how long you’ve got before disk space runs out. Plus, it’ll even handle left curves such as sharp spikes in disk usage.
Scenario: predict_linear in action
This wouldn’t be a good PromQL tutorial without a working example, so let’s see how to implement an alert which gives you 4 hours notice when your disk is about to fill up. You can begin creating the alert using the following code in a file “node.rules”.
- name: node.rules rules: - alert: DiskWillFillIn4Hours expr: predict_linear(node_filesystem_free{job="node"}[1h], 4*3600) < 0 for: 5m labels: severity: page
The key to this is the fourth line.
expr: predict_linear(node_filesystem_free{job="node"}[1h], 4*3600) < 0
This is a PromQL expression using predict_linear. node_filesystem_free is a gauge metric measuring the amount of memory unused by your application. The expression is performing linear regression over the last hour of filesystem history and predicting the probable free space. If this is less than zero the alert is triggered.
The line after this is a failsafe, telling the system to test predict_linear twice over a 5 minute interval in case a spike or race condition gives a false positive.
Using PromQL’s predict_linear function leads to smarter, less noisy alerts that don’t give false alarms and do give you plenty of time to act.
Putting it All Together: Monitoring CPU Usage in Kubernetes
To finish off this PromQL tutorial, let’s see how PromQL can be used to create graphs of CPU-utilisation in a Kubernetes application.
In Kubernetes, applications are packaged into containers and containers live on pods. Pods specify how many resources a container can use. If a container uses more resources than its pod has, it ‘spills over’ into a second pod.
This means that a candidate PromQL query needs the ability to sum over multiple pods to get the total resources for a given container. Our query should come out with something like the following.
Aggregating by Pod Name
We can start by creating a metric of CPU usage for the whole system, called container_cpu_usage_seconds_total. To get the CPU utilisation per second for a specific namespace within the system we use the following query which uses PromQL’s rate function:
rate(container_cpu_usage_seconds_total{namespace= “redash”[5m])
This is where aggregation comes in. We can wrap the above query in a sum query that aggregates over the pod name.
sum by(pod_name)( rate(container_cpu_usage_seconds_total{namespace= “redash”[5m]) )
So far, our query is summing the CPU usage rate for each pod by name.
Retrieving Pod Labels
For the next step, we need to get the pod labels, “pod” and “label_app”. We can do this with the query:
group(kube_pod_labels{label_app=~”redash-*”}) by (label_app, pod)
By itself, kube_pod_labels returns all existing labels. The code between the braces is a filter acting on label_app for values beginning with “redash-”.
We don’t, however, want all the labels, just label_app and pod. Luckily, we can exploit the fact that pod labels have a value of 1. This allows us to use group() to aggregate along the two pod labels that we want. All the others are dropped from the results.
Joining Things Up
So far, we’ve got two aggregation queries. Query 1 uses sum() to get CPU usage for each pod. Query 2 filters for the label names label_app and pod. In order to get our final graph, we have to join them up. To do that we’re going to use two tricks, label_replace() and metric joining.
The reason we need label replace is that at the moment query 1 and query 2 don’t have any labels in common. We’ll rectify this by replacing pod_name with pod in query 1. This will allow us to join both queries on the label “pod”. We’ll then use the multiplication operator to join the two queries into a single vector.
We’ll pass this vector into sum() aggregating along label app. Here’s the final query:
sum( group(kube_pod_labels{label_app=~”redash-*”}) by (label_app, pod) * on (pod) group_right(label_app) label_replace sum by(pod_name)( rate(container_cpu_usage_seconds_total{namespace= “redash”[5m]) ), “pod”, “$1”, “pod_name”, “(.+)” )by label_app
Hopefully this PromQL tutorial has given you a sense for what the language can do. Prometheus takes its name from a Titan in Greek mythology, who stole fire from the gods and gave it to mortal man. In the same spirit, I’ve written this tutorial to put some of the power of Prometheus in your hands.
You can put the ideas you’ve just read about into practice using the resources below, which include online code editors to play with the fire of PromQL at your own pace.
PromQL Tutorial Resources
This online editor allows you to get started with PromQL without downloading Prometheus. As well as tabular and graph views, there is also an “explain” view. This gives the straight dope on what each function in your query is doing, helping you understand the language in the process.
This tutorial by Grafana labs walks you through setting up Prometheus and Grafana on your local machine. It comes with a pre-built sample app so you can get started writing PromQL queries straight away.
This is the go-to reference for anything Prometheus related. It also contains an introductory PromQL tutorial with an overview of PromQL’s features to ease novices into the language. | https://coralogix.com/blog/promql-tutorial-5-tricks-to-become-a-prometheus-god/ | CC-MAIN-2021-25 | refinedweb | 2,411 | 63.9 |
JEP 110: HTTP/2 Client (Incubator)
Summary
Define a new HTTP client API that implements HTTP/2 and WebSocket, and
can replace the legacy
HttpURLConnection API. The API will be
delivered as an incubator module, as defined in
JEP 11, with JDK 9. This implies:
The API and implementation will not be part of Java SE.
The API will live under the jdk.incubtor namespace.
The module will not resolve by default at compile or run time.
Motivation
The existing
HttpURLConnection API and its implementation have numerous
problems:
The base
URLConnectionAPI was designed with multiple protocols in mind, nearly all of which are now defunct (
ftp,
gopher, etc.).
The API predates HTTP/1.1 and is too abstract.
It is hard to use, with many undocumented behaviors.
It works in blocking mode only (i.e., one thread per request/response).
It is very hard to maintain.
Goals
Must be easy to use for common cases, including a simple blocking mode.
Must provide notification of events such as "headers received", errors, and "response body received". This notification is not necessarily based on callbacks but can use an asynchronous mechanism like CompletableFuture.
A simple and concise API which caters for 80-90% of application needs. This probably means a relatively small API footprint that does not necessarily expose all the capabilities of the protocol.
Must expose all relevant aspects of the HTTP protocol request to a server, and the response from a server (headers, body, status codes, etc.).
Must support standard and common authentication mechanisms. This will initially be limited to just Basic authentication.
Must be able to easily set up the WebSocket handshake.
Must support HTTP/2. (The application-level semantics of HTTP/2 are mostly the same as 1.1, though the wire protocol is completely different.)
Must be able to negotiate an upgrade from 1.1 to 2 (or not), or select 2 from the start.
Must support server push, i.e., the ability of the server to push resources to the client without an explicit request by the client.
Must perform security checks consistent with the existing networking API.
Should be friendly towards new language features such as lambda expressions.
Should be friendly towards embedded-system requirements, in particular the avoidance of permanently running timer threads.
Must support HTTPS/TLS.
Performance requirements for HTTP/1.1:
Performance must be on par with the existing
HttpURLConnectionimplementation.
Performance must be on par with the Apache HttpClient library and with Netty and Jetty when used as a client API.
Memory consumption of the new API must be on par or lower than that of
HttpURLConnection, Apache HttpClient, and Netty and Jetty when used as a client API.
Performance requirements for HTTP/2:
Performance must be better than HTTP/1.1 in the ways expected by the new protocol (i.e., in scalability and latency), notwithstanding any platform limitations (e.g., TCP segment ack windows).
Performance must be on par with Netty and Jetty when used as a client API for HTTP/2.
Memory consumption of the new API must be on par or lower than when using
HttpURLConnection, Apache HttpClient, and Netty and Jetty when used as a client API.
Performance comparisons will only be in the context of comparable modes of operation, since the new API will emphasise simplicity and ease of use over covering all possible use cases,
This work is intended for JDK 9. Some of the code may be re-used by Java EE in their implementation of HTTP/2 in the Servlet 4.0 API, so only JDK 8 language features and, where possible, APIs will be used.
It is intended that with the benefit of experience using the API in JDK 9, it will be possible to standardize the API in Java SE under the java.net namespace in JDK 10. When this happens, as part of a future JEP, the API will no longer exist as an incubator module.
Non-Goals
This API is intended to eventually replace the
HttpURLConnection API for new code,
but we do not intend immediately to re-implement the old API using the
new API. This may happen as future work.
Some requirements were considered in earlier versions of this JEP for JDK 8, but they are being left out in order to keep the API as simple as possible:
- Request/response filtering,
- A pluggable connection cache, and
- A general upgrade mechanism.
Some of these requirements, e.g., connection caching, will become less important with the gradual adoption of HTTP/2.
Description
Some prototyping work has been done for JDK 9 in which separate classes were defined for the HTTP client, requests, and responses. The builder pattern was used to separate mutable entities from the immutable products. A synchronous blocking mode is defined for sending and receiving and an asynchronous mode built on java.util.concurrent.CompletableFuture is also defined.
The prototype was built on NIO SocketChannels with asynchronous behavior implemented with Selectors and externally provided ExecutorServices.
The prototype implementation was standalone, i.e., the existing stack was not changed so as to ensure compatibility and allow a phased approach in which not all functionality must be supported at the start.
The prototype API also included:
- Separate requests and responses, like the Servlet and HTTP server API;
- Asynchronous notification of the following events:
- Response headers received,
- Response error,
- Response body received, and
- Server push (HTTP/2 only);
- HTTPS, via
SSLEngine;
- Proxying;
- Cookies; and
- Authentication.
The part of the API most likely to need further work is in the support of HTTP/2 multi responses (server push) and HTTP/2 configuration. The prototype implementation supports almost all of HTTP/1.1 but not yet HTTP/2.
HTTP/2 proxying will be implemented in a following change.
Alternatives
A number of existing HTTP client APIs and implementations exist, e.g., Jetty and the Apache HttpClient. Both of these are both rather heavy-weight in terms of the numbers of packages and classes, and they don't take advantage of newer language features such as lambda expressions.
Testing
The internal HTTP server will provide a suitable basis for regression and TCK tests. Functional tests could use that also, but they may need to test against real HTTP servers. | https://openjdk.java.net/jeps/110 | CC-MAIN-2019-22 | refinedweb | 1,036 | 57.16 |
- Code: Select all
def Charack(self, Health, Defense, Strenght):
#self.NPC = Npc
self.Health = 10
self.Defense = 12
self.Strenght = 15
def Npc(race, Org, Elf):
race.Org = ["10", "15", "20"]
race.Elf = ["10", "20", "1"]
Print(Npc(0),(1))
I get this error
"Traceback (most recent call last):
File "C:\Documents and Settings\Owner\My Documents\Chars.py", line 11, in <module>
print(Npc(0),(1))
TypeError: Npc() missing 2 required positional arguments: 'Org' and 'Elf'"
Ok so I'm trying to get it so that I can call a race and call a skill of the race like health.
I know I had help with a class in another post but I don't know how to use is and this is a bit simpler but still good for now.
Any idee wat my problem is?
Fist time I'm getiing an error like this ad I'm missing something right infront of me
Ty guys for reading helping | http://python-forum.org/viewtopic.php?f=6&t=4241 | CC-MAIN-2016-44 | refinedweb | 161 | 84.17 |
#include <player.h>
List of all members.
To set or get the pose of an object in a simulator, use this message type. If the subtype is PLAYER_SIMULATION_SET_POSE2D, the server will ask the simulator to move the named object to the location specified by (x,y,a) and return ACK. If the subtype is PLAYER_SIMULATION_GET_POSE2D, the server will attempt to locate the named object and reply with the same packet with (x,y,a) filled in. For all message subtypes, if the named object does not exist, or some other error occurs, the request should reply NACK.
Packet subtype. Must be one of PLAYER_SIMULATION_SET_POSE2D, PLAYER_SIMULATION_GET_POSE2D
the identifier of the object we want to locate
the desired pose or returned pose in (mm,mm,degrees) | http://playerstage.sourceforge.net/doc/Player-1.6.5/player-html/structplayer__simulation__pose2d__req.php | CC-MAIN-2016-26 | refinedweb | 124 | 51.68 |
When I compile this code :
#include <mmintrin.h>
__m64 moo(int i) {
__m64 tmp = _mm_cvtsi32_si64(i);
return tmp;
}
With (GCC) 4.0.0 20050116 like so:
gcc -O3 -S -mmmx moo.c
I get this (without the function pop/push etc)
movd 12(%ebp), %mm0
movq %mm0, (%eax)
However, if I use the -msse flag instead of -mmmx, I get this:
movd 12(%ebp), %mm0
movq %mm0, -8(%ebp)
movlps -8(%ebp), %xmm1
movlps %xmm1, (%eax)
gcc 3.4.2 does not display this behavior. I didn't get the chance to test it on
my Linux installation yet, but I'm pretty sure it's going to give the same
results.. I didn't use any special flags configuring or building gcc (just
../gcc-4.0-20050116/configure --enable-languages=c,c++ , and make bootstrap)
With -O0 flag instead of -O3, we see that it seems that gcc replaced some movq's
by movlps's (why??) and they do not get cancelled out during optimization..
I will attach the .i file generated by "gcc -O3 -S -msse moo.c".
I also tried a "direct conversion":
__m64 tmp = (__m64) (long long) i;
But I get a compiler error:
internal compiler error: in convert_move, at expr.c:367
Created attachment 7991 [details]
gcc -O3 -S -msse moo.c --save-temps
Hmm, looking at the rtl dumps this looks like the register allocator sucks as the sse register is picked in
the -msse but in the -mmmx, only the mmx register is picked. Someone needs to take an axe to the
register allocator :).
Subject: Bug 19530
CVSROOT: /cvs/gcc
Module name: gcc
Changes by: [email protected] 2005-01-20 18:34:13
Modified files:
gcc : ChangeLog
gcc/config/i386: i386.c mmintrin.h mmx.md
Log message:
PR target/19530
* config/i386/mmintrin.h (_mm_cvtsi32_si64): Use
__builtin_ia32_vec_init_v2si.
(_mm_cvtsi64_si32): Use __builtin_ia32_vec_ext_v2si.
* config/i386/i386.c (IX86_BUILTIN_VEC_EXT_V2SI): New.
(ix86_init_mmx_sse_builtins): Create it.
(ix86_expand_builtin): Expand it.
(ix86_expand_vector_set): Handle V2SFmode and V2SImode.
* config/i386/mmx.md (vec_extractv2sf_0, vec_extractv2sf_1): New.
(vec_extractv2si_0, vec_extractv2si_1): New.
Patches:
Fixed.
MMX intrinsics don't seem to be a standard (?), but I'm under the impression
that _mm_cvtsi32_si64 is supposed to generate MMX code. I just tested With (GCC)
4.0.0 20050123, and with -mmmx flag, the result is still the same, with the
-msse flag I now get :
movss 12(%ebp), %xmm0
movlps %xmm0, (%eax)
Which is correct, but what I'm trying to get is a MOVD so I don't have to fish
back into memory to use the integer I wanted to load in an mmx register.
Or is there another way to generate a MOVD?
Also, _mm_unpacklo_pi8 (check moo2.i) still generates superfluous movlps :
punpcklbw %mm0, %mm0
movl %esp, %ebp
subl $8, %esp
movl 8(%ebp), %eax
movq %mm0, -8(%ebp)
movlps -8(%ebp), %xmm1
movlps %xmm1, (%eax)
I guess any MMX intrinsics that makes use of the (__m64) cast conversion will
suffer from the same problem..... I think the fix to all these problems would be
to prevent the register allocator from using SSE registers when compiling MMX
intrinsics.. ?
Created attachment 8055 [details]
the _mm_unpacklo_pi8 one
Hmm. Seems to only happen with -march=pentium3, and not -march=pentium4...
Sorry, but this appears to be unfixable without a complete rewrite of MMX support.
Everything I tried had side effects where MMX instructions were used when we were
not using MMX intrinsics.
I'm wondering, would there be a #pragma directive that would we could use to
surround the MMX instrinsics function, and that would prevent the compiler from
using the XMM registers??
Even stranger, it doesn't do it with -march=athlon either... only
-march=pentium, pentium2 or pentium3... ?
That seems like some weird bug here. There musn't be a THAT big of a difference
between the code for pentium3 and the one for athlon right?
Oh oh, I think I'm getting somewhere... if I use both -march=athlon and -msse
flags I get the "bad" code. Let me summarize this :
-march=pentium3 = bad
-msse = bad
-march=athlon = good (ie.: no weird movss or movlps, everything looks good)
however
-march=athlon -msse = bad
hum...
(In reply to comment #10)
> That seems like some weird bug here. There musn't be a THAT big of a difference
> between the code for pentium3 and the one for athlon right?
Well, duh, athlon doesn't have sse.
Ok ok, SSE is not enabled by default on Athlon...
So, is there some sort of "pragma" that could be used to disable SSE registers
(force -mmmx sort of) for only part of some code?
The way I see it, the problem seems to be that gcc views __m64 and __m128 as the
same kind of variables, when they are not. __m64 should always be on mmx
registers, and __m128 should always be on xmm registers. Actually, Intel created
a new type __m128d, instead of trying to guess which out of integer or float
instructions one should use for stuff like MOVDQA..
We can easily see that gcc is trying to put an __m64 variable on xmm registers
in moo2.i . I can also prevent it from using an xmm register by using only
__v8qi variables (which are invalid ie.: too small on xmm registers):
__v8qi moo(__v8qi mmx1)
{
mmx1 = __builtin_ia32_punpcklbw (mmx1, mmx1);
return mmx1;
}
tadam! no movss or movlps...
Shouldn't gcc not try to place __m64 variables on xmm registers? If one wants to
use an xmm register, one should use __m128 or __m128d (or at least a cast from a
__m64 pointer), even on the Pentium 4, I think it makes sense, because moving
stuff from mmx registers to xmm registers is not so cheap either..
If one wants to move one 32 bit integer to a mmx register, that should be the
job of a specialized intrinsics (_mm_cvtsi32_si64) which maps to a MOVD
instruction. And if one wants to load a 64 bit something into an xmm register,
that should be the job of _mm_load_ss (and other such functions). At the moment,
these intrinsics (_mm_cvtsi32_si64, _mm_load_ss) do NOT generate a mov
instruction by themselves.. they go through a process (from what I can
understand of i386.c) of "vector initialization" which starts generating mov
instructions from MMX, SSE or SSE2 sets without discrimination... In my mind
_mm_cvtsi32_si64 should generate a MOVD, and _mm_load_ss a MOVSS, period. Just
like __builtin_ia32_punpcklbw generates a PUNPCKLBW.
Does it make sense? Is this what you mean by a complete rewrite or were you
thinking of something else?
> So, is there some sort of "pragma" that could be used to disable SSE
> registers(force -mmmx sort of) for only part of some code?
No.
> __m64 should always be on mmx registers, and __m128 should always be on
> xmm registers.
Well, yes and no. Given SSE2, one *can* implement *everything* in
<mmintrin.h> with SSE registers.
> I can also prevent it from using an xmm register by [...]
... doing something complicated enough that, for the existing patterns
defined by the x86 backend, it very much more strongly prefers the mmx
registers. Your problems with preferencing will come only when the
register in question is only used for data movement.
Which, as can be seen in your _mm_unpacklo_pi8 test case, can happpen
at surprising times. There are *two* registers to be register allocated
there. The one that does the actual unpack operation *is* forced to be
in an MMX register. The other moves the result to the return register,
and that's the one that gets mis-allocated to the SSE register set.
> If one wants to move one 32 bit integer to a mmx register, that should be the
> job of a specialized intrinsics (_mm_cvtsi32_si64) which maps to a MOVD
> instruction.
With gcc, NONE of the intrinsics is strict 1-1 mapping to ANY instruction.
> Does it make sense? Is this what you mean by a complete rewrite or were you
> thinking of something else?
Gcc has some facilities for generic vector operations. Ones that don't use
any of the foointrin.h header files. When that happens, the compiler starts
trying to use the MMX registers. But it doesn't know how to place the
necessary emms instruction, which is Bad.
At the moment, the only way to prevent this from happening is to strongly
discourage gcc from using the MMX registers to move data around. This is
done in such a way that the only time it will use an MMX register is when
we have no other choice. Which is why you see the compiler starting to
use SSE registers when they are available.
You might think that we could easily use some pragma or something when
<mmintrin.h> is in use, since it's the user's responsibility to call
_mm_empty when necessary. Except that due to how gcc is structured
internally, you'd not be able to be 100% certain that all of the mmx
data movement remained where you expected. Indeed, we have open PRs
where this kind of movement is in fact shown to happen.
Thus the ONLY solution that is sure to be correct is to teach the
compiler what using MMX registers means, and where to place emms
instructions. Which is the subject of the PR against which this PR
is marked as being blocked.
This cannot be addressed in any satisfactory way for 4.0.
Frankly, I wonder if it's worth addressing at all. To my mind it's
just as easy to write pure assembly for MMX. And pretty soon the
vast majority of ia32 machines will have SSE2, and that is what the
autovectorizer will be targeting anyway.
PS, your best solution, for now, is simply to use -mno-sse for the files
in which you have mmx code. Move the sse code to a separate file. That
really is all I can do or suggest.
Ok, so from what I gather, the backend is being designed for the autovectorizer
which will probably only work right with SSE2 (on x86 that is), as mucking with
emms will probably bring too much trouble. Second, we can do any MMX operations
on XMM registers in SSE2. So the code for SSE2 does not need to be changed
optimization wise for intrinsics.
As for a pragma or something, could we for example disable the automatic use of
such instructions as movss, movhps, movlps, and the likes on SSE1 (if I may call
it that way)? That would most certainly prevent gcc from trying to put __m64 in
xmm registers however eager it might want to mov it there... (would it?) And
supply a few built-ins to implement manual use of those instructions. I guess
such a solution would be nice, although I realize it might not be too kosher ;)
I use MMX to load char * arrays into shorts and convert them into float in SSE
registers, to process them with float * arrays, so I can't separate the MMX code
from the SSE code...
Of course, with the way things look at the moment, I might end up writing
everything in assembler by hand, but scheduling 200+ instructions (yup yup I
have some pretty funky code here) by hand is no fun at all, especially if (ugh
when) the algorithm changes. Also, the same code in C with intrinsics can target
x86-64 :) yah, that's cool
I think I'm starting to see the problem here... I tried to understand more of
the code, and from this and what you tell me, gcc find registers to use and then
finds instructions to that fits the bill. So preventing gcc from using some
instructions will only end up in a "instruction not found" error. The register
allocator is the one that shouldn't allocate them in the first place, right?
Well, let's forget this for now... maybe we should look at the optimization stages:
movq %mm0, -8(%ebp)
movlps -8(%ebp), %xmm1
movlps %xmm1, (%eax)
<- If movlps merely moves 64 bit stuff around, why wasn't it optimized out to a
one equivalent movq that also moves 64 bit stuff around? Would that be an
optimizer problem instead?
Hum, there apparently seems to be a problem with the optimization stages.. I
cooked up another snippet :
void moo(__m64 i, unsigned int *r)
{
unsigned int tmp = __builtin_ia32_vec_ext_v2si (i, 0);
*r = tmp;
}
With -O0 -mmmx we get:
movd %mm0, -4(%ebp)
movl 8(%ebp), %edx
movl -4(%ebp), %eax
movl %eax, (%edx)
Which with -O3 gets reduced to:
movl 8(%ebp), %eax
movd %mm0, (%eax)
Now, clearly it understands that "movd" is the same as "movl", except they work
on different registers on an MMX only machine. With "movlps" and "movq" it
should do the same I think? If the optimization stages can work this out, maybe
we wouldn't need to rewrite the MMX/SSE1 support...
(BTW, correction, when I said 200+ instructions to schedule, I meant per
function. I have a dozen such functions with 200+ instructions, and it ain't
going to get any smaller)
Hum, ok we can do a "movd %mm0, %eax", that's why it gets combined...
Well, I give up. The V8QI (and whatever) -> V2SI conversion seems to be causing
all the trouble here if we look at the RTL of something like:
__m64 moo(__v8qi mmx1)
{
mmx1 = __builtin_ia32_punpcklbw (mmx1, mmx1);
return mmx1;
}
It explicitly asks for a conversion to V2SI (__m64) that gets assigned to an xmm
register afterwards:
(insn 15 14 17 1 (set (reg:V8QI 58 [ D.2201 ])
(reg:V8QI 62)) -1 (nil)
(nil))
(insn 17 15 18 1 (set (reg:V2SI 63)
(subreg:V2SI (reg:V8QI 58 [ D.2201 ]) 0)) -1 (nil)
(nil))
(insn 18 17 19 1 (set (mem/i:V2SI (reg/f:SI 60 [ D.2206 ]) [0 <result>+0 S8 A64])
(reg:V2SI 63)) -1 (nil)
(nil))
So... the only way to fix this would be to either make the register allocator
more intelligent (bug 19161), or to provide intrinsics like the Intel compiler
does with one to one mapping to instructions directly. right? That wouldn't be
such a bad idea, I think... instead of using the current __builtins for stuff in
*mmintrin.h, we could use a different set of builtins that only supports V2SI
and nothing else..? Well, that's going to be for another time ;)
Is the emms issue mentioned in comment #14 fixed with Uros' patch proposed
here:?
Hum, it will be interesting to test this (it will have to wait a couple of
weeks), but the problem with this here is that there is no "mov" instructions
that can move stuff between MMX registers and SSE registers (MOVQ can't do it).
In SSE2, there is one (MOVQ), but not in the original SSE. So the compiler
generates movlps instructions from/to memory from/to SSE registers along MMX
calculations, and, in the original SSE case, ends up not being able to reduce
anymore than MMx->memory->XMMx->memory->MMx again for data that should have
stayed in MMX registers all along... it does not realize up front how expensive
it is to use XMM registers on "SSE1" along with MMX instructions.
As this bug is getting a bit confused, I have summarised testcases below:
--cut here--
#include <mmintrin.h>
__m64 moo_1 (int i)
{
__m64 tmp = _mm_cvtsi32_si64 (i);
return tmp;
}
__m64 moo_2 (__m64 mmx1)
{
__m64 mmx2 = _mm_unpacklo_pi8 (mmx1, mmx1);
return mmx2;
}
void moo_3 (__m64 i, unsigned int *r)
{
unsigned int tmp = __builtin_ia32_vec_ext_v2si (i, 0);
*r = tmp;
}
--cut here--
I think that the problems described were fixed by PR target/21981. With a patch
from comment #20, 'gcc -O2 -msse3' produces following asm:
moo_1:
pushl %ebp
movl %esp, %ebp
movd 8(%ebp), %mm0
popl %ebp
ret
moo_2:
pushl %ebp
punpcklbw %mm0, %mm0
movl %esp, %ebp
popl %ebp
ret
moo_3:
pushl %ebp
movl %esp, %ebp
movl 8(%ebp), %eax
movd %mm0, (%eax)
emms
popl %ebp
ret
I have checked, that there is no SSE instructions present for any of testcases
for -mmmx, -msse, -msse2 and -msse3. I suggest to close this bug as fixed and
eventually open a new bug with new testcases.
Regarding emms in moo_3: As the output of moo_3 () is _not_ a mmx register, FPU
mode should be switched to 387 mode before function exit. (In proposed patch,
this could be overriden by -mno-80387 to get rid of all emms insns.)
Yup, excited, today, I just compiled the mainbranch to check this out
(gcc-4.1-20050618) and it seems to be fixed! I don't see any strange movlps in
any of the code I tried to compile with it. Can be moved to FIXED (I'm not sure
I should be to one to switch it??)
Thanks to Uros and everybody! | https://gcc.gnu.org/bugzilla/show_bug.cgi?id=19530 | CC-MAIN-2015-22 | refinedweb | 2,790 | 70.94 |
Opened 14 years ago
Closed 8 years ago
Last modified 6 years ago
#3148 closed New feature (wontfix)
Add getters and setters to model fields
Description
Whenever you have two distinct ways to update a variable, you introduce the opportunity to have bugs. This becomes increasingly true as the system grows in size.
It is often the case that when changing one field on an object, you want to be able to automatically run code of some kind; this is the same basic motivation behind the "property" builtin in Python. However, the obvious way of doing that in Django doesn't work:
    class Something(models.Model):
        field = models.BooleanField(...)
        ...

        def set_field(self, value):
            # do something

        field = property(set_field)
The second field overrides the first, and in the process of constructing the model Django never gets a chance to see the models.BooleanField.
This patch adds 'getter' and 'setter' attributes to all fields, each taking the name (as a string) of a method to call when the field is retrieved or set. It turns out that it is fairly easy to add a property to the class during Django's initialization, at which point it has already retrieved the field information. This example from the enclosed tests shows the basics of its usage:
    class GetSet(models.Model):
        has_getter = models.CharField(maxlength=20, getter='simple_getter')
        has_setter = models.CharField(maxlength=20, setter='simple_setter')
        has_both = models.CharField(maxlength=20, getter='simple_getter', setter='updater')
        updated_length_field = models.IntegerField(default=0)

        def simple_getter(self, value):
            return value + "_getter"

        def simple_setter(self, value):
            return value + "_setter"

        def updater(self, value):
            self.updated_length_field = len(value)
            return value
This defines a getter on has_getter that returns a filtered value from the DB, a setter on has_setter that processes the value to add "_setter" to it in all cases, and on has_both we see a setter that implements the use case of updating another field when a property is set. (As is often the case, this is a trivial example; in my real code, for instance, the value being updated is actually in a related object.)
A getter receives as its argument the current "real" value (the one that either came from the database, object construction, or a prior setting of the value), and what the getter returns is actually what the user gets back from the attribute.
A setter receives as its argument the value that the user is setting the property to, and what it returns is what the property will actually be set to. The pattern for just using that as a hook is to return what is passed in after you've done whatever it is your hook does, as shown above.
These properties are only created for fields that have getters or setters, so the backwards-compatibility impact of this should be zero, and the performance impact should be a check for getters/setters per field once at startup, which is minimal. I'm a little less certain about exactly how these getters and setters will interact with all the various field types, but due to the way in which it hooks in it should, in theory, have minimal impact, because no Python code should fail to go through the property.
Getters and setters do not operate during the creation of the object, be it by retrieval from the database or creation by instantiating the class; this avoids a lot of tricky issues with property initialization order and double-application of some common setters, but will need to be documented.
I'd be happy to add the appropriate documentation but I do not see where to contribute that.
Attachments (14)
Change History (65)
Changed 14 years ago by
Changed 14 years ago by
add documentation, fix minor whitespace issue, supercede previous patch
comment:1 Changed 14 years ago by
Before checking in this new functionality, I'd be very interested in seeing whether it would be possible to allow for "normal" property usage on models, rather than inventing a new way of doing it.
Seems like it would be tricky to figure it out, but that shouldn't stop us from experimenting... Who wants to try?
Changed 14 years ago by
figured out what the numbers in the test cases meant; documentation example added for getter/setter
comment:2 Changed 14 years ago by
Explained why I don't think there is a better way in this post.
comment:3 Changed 14 years ago by
comment:4 Changed 13 years ago by
If you've got a problem chum, think how it could be.
comment:5 Changed 13 years ago by
If you've got a problem chum, think how it could be.
comment:6 Changed 13 years ago by
comment:7 Changed 13 years ago by
Hi,
Has this issue been forgotten? I think there must be quite many developers who wish to do processing based on field changes. At least I do and this patch would provide the solution.
Changed 13 years ago by
New patch using python's property(), includes docs and tests
comment:8 Changed 13 years ago by
Hi there,
I found that ticket looking for a way to achieve exactly what this ticket asks for and saw adrian's comment above about trying to use normal properties to improve the patch.
I've just attached a new patch with tests and docs that does just this. In essence you'd do:
from django.db import models class Person(models.Model): def _get_name(self): return self.__name def _set_name(self, value): self.__name = value name = models.CharField(max_length=30, property=(_get_name, _set_name))
And django will create "Person.name" as a property() using the tuple given.
comment:9 Changed 13 years ago by
Moving to Accepted because:
- The only "core" request was in comment 3 (from adrian) about doing this "more pythonic" and it is about "doing it better" not "I'm not sure if it fits well in Django".
- There's nothing in the comments that imply "Design Decision", meaning that what the tickets requests/does does not seem to imply this status.
- Trying to give a bit of visibility to the ticket (Reasons 1 & 2 are only excuses! ;))
comment:10 follow-up: 11 Changed 13 years ago by
Reviewing your patch:
-.
- why do you need to keep the
propertytuple around? (i.e.
self.property = property)
-))
comment:11 follow-up: 12 Changed 13 years ago by
Replying to SmileyChris:.
Well, there are tests included with the patch and they don't fail! ;)
Anyway; It does not crash because the callable property() is not used in
Field.__init__() for anything. So it's safe to use it there.
Then we save the parameter in
Field.property which can never be accesed as
property outside of
Field.__init__(), it will always be
self.property.
The reason I used property= instead of something else was to make it clear what the parameter is about, and give an idea of how it works (just exactly as property())
- why do you need to keep the
propertytuple around? (i.e.
self.property = property)
Because property is read in
Field.__init__() and used then in
Field.contribute_to_class() to create the property in the model. So I need to save the property parameter from init() to use it in contribute_to_class(). Also I wanted to be consistent, the first Thing
Field.__init__() does is save all the kwargs on
self. Anyway, it's needed for contribute_to_class.
-))
Uhm.. really nice idea, I'll update the patch later with this change.
Before updating, do you really thing the parameter to Field should not be
property? As explained aboved, it will never cause a namespace collision until somebody wants to use property() in
Field.__init__() is is unlikely to happen due to what
Field.__init__() does.
comment:12 Changed 13 years ago by).
Oops, rather big oversight on my part :P I should probably apply patches rather than just reviewing them from trac's visual diff...
So I guess you can ignore all of points 2 and 3.
Uhm.. really nice idea, I'll update the patch later with this change.
Before updating, do you really thing the parameter to Field should not be
property?
Nah, I'm cool with it then - for example, Models have an
id property so the precedence has been set :)
comment:13 Changed 13 years ago by
Change the "property" name, please. There are lots of words in the English language; no need to court an accidental collision by using effectively reserved names.
Changed 13 years ago by
Updated diff according to comments.
comment:14 Changed 13 years ago by
Here it is,
The option is now: use_property; Inside the Field class it is saved as
self.model_property so it's clear what it stands for.
I also removed the add magic as per SmileyChris comment, and used his * thing, nice :)
Hope it makes you all happy!! ;)
Tests passed fine :)
Changed 13 years ago by
Fixed small typo in the docs patch.
comment:15 follow-up: 16 Changed 12 years ago by
Any update on this?
Changed 12 years ago by
Updated patch (merge conflict resolved) to latest trunk.
comment:16 Changed 12 years ago by
comment:17 Changed 12 years ago by
comment:18 Changed 12 years ago by
comment:19 Changed 12 years ago by
Doing some quick looking at this (I'll admit, I haven't applied it yet), I came up with a 7-line utility function that handles everything the patch does, but without touching core at all. If there's interest, I can post it at djangosnippets, but I think there's something else to consider.
While the latest patch is a considerable cleanup from the original patch, I think one of the original author's considerations has been lost in translation. Again, I haven't applied it, but it looks like it does exactly what my little utility function does, and mine certainly suffers from the double-setter problem he describes.
Malcolm's [source:django/trunk/django/db/models/fields/subclassing.py field subclassing work] does a good job of avoiding this, but it uses a descriptor, which would interfere with the property, unless a new descriptor is made that takes the double-setter issue *and* the property stuff into account. Ultimately, I think it's a matter of providing the simple, common case, like Malcolm did, and expecting those who need more to know what they need. I don't know if that's enough for everybody, but that's why I was working on a non-core option for this. I'd rather people do a little hunting on the problem they have before they just pick a "solution" that happens to be in core and get confused why it doesn't work the way it should.
Of course, one other thing
SubfieldBase takes into account is serialization. Again, I haven't applied the patch or tested it out (I'm heck-deep in other stuff at the moment), but I'm guessing there are going to be some surprises, since I've run into a few things in the past when an object's
__dict__ didn't line up with what some properties were advertising. I'd certainly at least want to see that looked at by someone who's passionate about this patch.
comment:20 Changed 12 years ago by
Changed 12 years ago by
updated patch for rev.8463
comment:21 Changed 12 years ago by
comment:22 Changed 12 years ago by
The docs-refactor broke the patch, always up-to-date patch here in colours and raw
comment:23 Changed 11 years ago by
comment:24 follow-up: 25 Changed 11 years ago by
I'm busy and don't have time to explain this fully. However, as a note to myself and anyone who wants to try to channel my thinking, the syntax I'd like here would look like::
class MyModel(Model): foo = CharField(...) @foo.getter def get_foo(self): ... @foo.setter def set_foo(self, value): ...
I'll try to circle 'round back to this and explain more fully when I have time.
comment:25 Changed 11 years ago by
I'm busy and don't have time to explain this fully. However, as a note to myself and anyone who wants to try to channel my thinking, the syntax I'd like here would look like
It seems simple enough to channel that, actually. I assume this would work identically to the new property methods added in Python 2.6. I've been wondering about this myself lately, but didn't spend the time to work on it because I forgot this ticket already exists. I'll see what I can throw together this week.
I can think of a few questions that might need to be addressed along the way (at least one that definitely will), but I'll hold off until I have some working code before bringing them up. On a side note, if we mimic the property methods, I assume we'd also want to include
@foo.deleter as well, which should be simple enough to throw in.
comment:26 Changed 11 years ago by
After doing some work on this, I'll throw in one particular comment about naming. Python's built-in
@property methods require that all the methods have the same name, because things go pretty wonky if you don't. They have their reasons, I'm sure, but I don't think that requirement makes a lot of sense in our situation. Instead, I'm going with the (much more obvious, IMO) approach of leaving any user-created methods in place and updating the field object to work as a descriptor. In the context of what a model definition looks like, I think that makes the most sense.
comment:27 Changed 11 years ago by
I think this a fundamental feature, and although not fluent in Python and Django, I've started working on it.
The proposed/attached patch implements the method suggested by jacob.
I've added tests for it and all the current tests still pass (there are oddities with admin_scripts, but that's with and without the patch).
Changed 11 years ago by
comment:28 Changed 11 years ago by
Some feedback:
a) A decorator should always return a callable, in this case it should return the original function (as Gulopine sugested), since the alternative leaves None on the class, which is really awkward IMO.
b) As Gulopine suggested I think the better approach is to make the Field object itself the descriptor when getter/setter/deleter are provided, not a property object.
c) This should be handled by Field.contribute_to_class, not the Metaclass.
d) I'd like to see significantly more extensive tests, particularly with things like:
- Providing only a subset of the available options.
- Providing a getter/setter with fields that have their own descriptor (FileField and the RelatedFields). This I suspect may require a large refactor of related.py, as Gulopine may have already found :)
comment:29 Changed 11 years ago by
Oh, and
e) Documentation! :)
comment:30 Changed 11 years ago by
Thanks for your feedback.
a) getter/setter/deleter now return the original function
b) I do not get this: using a decorator should make the object behave as descriptor? Would that mean to have get/set/del always, but check there if a decorator should get called?
c) OK, moved. Much better. It took me a while to come that close already though.. ;)
d) 1. This should fail, at least when going the property route, my understanding is that you need to define all, otherwise the field is e.g. not writable. This would be different when using descriptors only, of course.
- Only fields derived from Field are supported currently. FileField does not come through Field - at least that's what my testing shows. Should they get handled?
e) I think the other points should get worked out before.. :)
Changed 11 years ago by
Slightly updated patch
comment:31 Changed 11 years ago by
The reason you aren't seeing FileField/RelatedFields working right now is that they provide their own descriptor, which overwrites yours I imagine. This is why Gulopine and I discussed integrating the descriptor into the Field itself (and only conditionally attaching it to the class) and then integrating the existing descriptors for Files and related objects into the class itself so everything works together correctly. You can find our discussion of this here: (extends onto the next page).
comment:32 Changed 11 years ago by
Thanks for all your work on this BTW!
comment:33 Changed 11 years ago by
FileField/RelatedFields work as before (at least passing tests - I have not tried it for real) - they are not affected by getter/setter currently (AFAICS).
My understanding had been that I'm not using a descriptor, but property/ies instead - I've just read that it's considered to be a descriptor, too.
The discussion you've linked sounds good, but still I'm not sure where you're heading.. :)
Therefore, a last patch from me (which does not use property() anymore), but adds a FieldDescriptor class. There's a lot to fix/straighten for sure, but just to get this out for discussion.
I'll now wait until Gulopine and others provide some feedback on this.
Changed 11 years ago by
Approach using FieldDescriptor class (and using decorators to decide if fields get wrapped into it or not)
comment:34 Changed 11 years ago by
Using the model:
class GetSet(models.Model): field = models.CharField(max_length=200) @field.setter def field_setter(self, value): self._field = value + "_test_transform" @field.getter def field_getter(self): return self._field
we obtain with the latest patch "django-3148-approach-via-descriptors.diff" the following python REPL output:
Python 2.6.4 (r264:75706, Nov 8 2009, 19:34:30) [GCC 4.3.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from mysite.modeltest.models import GetSet >>> example = GetSet(field = "test") >>> example.field 'test_test_transform' >>> example.save() >>> example.id 1 >>> del example >>> example = GetSet.objects.get(id = 1) >>> example.field u'test_test_transform_test_transform' >>> example.save() >>> del example >>> example = GetSet.objects.get(id = 1) >>> example.field u'test_test_transform_test_transform_test_transform' >>>
That is not shippable behavior. The setter must not apply during construction from a database load. Loading and immediately saving an object with no changes must be a no-op, in terms of the stored data. Anything else is asking for disaster.
I find myself wondering if perhaps the traffic on this bug is telling us that the interaction of properties when the values are being backed by a database store is just too complicated a concept to be worthwhile. When the setters and getters would apply is more subtle than it first appears, and perhaps "simple enough that it works as it first appears" (i.e., "no property behavior") is a better idea.
I am not being sarcastic or passive aggressive. I am seriously wondering this. I now think the documentation would have to explain this in more detail than I originally thought, which adds a chunk of complexity right where Django's documentation needs it least, and I'm no longer convinced that's worth it. (Plus, after reviewing the latest SVN code, I still do not see a way to do this correctly without setting the flag during construction like I did in my original patch, which was not an... appreciated addition to the model loading code.)
comment:35 Changed 10 years ago by
Wrong. Lack of a correct implementation does not mean the initial problem doesn't worth solving it. Logically, the desired behaviour is simple as hell - do not run setters/getters when Django does its ORM things (that is: object fetching, construction, saving, deleting, caching, etc.), but always call getters/setters when a model is used from from an external (business logic) code. I don't see anything strange or hard or inconsistent from the architectural point of view either. The ORM - which is aware of the essence of the models - should work with the data directly using internal structures (__dict__ or whatever), and all the other unaware code should be using the "frontend" properties.
The "simple enough that it works as it first appears" approach is ultimately doomed. If everyone was adopting this kind of thinking, there would be no ORM at all and we'd be running manual SQL queries which are "simple enough that they work as they first appear". Abstractions always leak and there will always be trade-offs, but that doesn't mean we should stop creating them. We should only stop when the gain is less than the cost, which is clearly not the case for getters and setters.
By the way, I am more and more thinking that the part of the confusion comes from that everyone is talking about BOTH getters and setters. In my opinion, we should have only setters. As the data is backed by a database, reading should always return what the database actually holds (with respect to Field.to_python(), of course). However, setting is different and needs to be able to be overriden indeed - because that allows to implement additional validation, transformation (e.g. proper formatting), and other things like logging.
One of the clear examples is using Model Forms. Right now, the whole powerful concept is barely useful for any non-trivial models, because you can't just call modelform.save() and expect it to call all setters properly -- instead, you always have to code all that manually. However, I can't imagine any example where a getter would be of any use.
comment:36 Changed 10 years ago by
I beg my pardon for not reading the "simple enough that it works as it first appears" sentence clearly. I'm taking my words back. Of course, the things should work as they appear. That just doesn't mean we can't achieve that behavior with custom setters. :)
comment:37 Changed 10 years ago by
comment:38 Changed 10 years ago by
comment:39 Changed 10 years ago by
comment:40 Changed 10 years ago by
comment:41 Changed 10 years ago by
One of my pet peeves is that Django models are almost always used as simple data containers. This is true for many frameworks and technologies though. What is wrong with defining methods on the class explicitly that are responsible for setting the correct fields, where there is logic associated with them? Allow the users of the model to decide whether they call the documented api of the model, or go directly to the fields.
comment:42 Changed 10 years ago by
Unassigning, I haven't had the time to work on this.
comment:43 Changed 9 years ago by
comment:44 Changed 9 years ago by
comment:45 Changed 9 years ago by
comment:46 Changed 9 years ago by
Changed 9 years ago by
This approach works with m2m fields and inherited fields
Changed 9 years ago by
slightly more complex approach, that on the other hand manages to obtain the same behaviour with m2m fields
Changed 9 years ago by
Fixes to the previous patch
comment:47 Changed 9 years ago by
comment:48 Changed 8 years ago by
I have a feeling this ticket needs to be wontfixed. The reason is that it seems the setter should not be called in model.
__init__ if the object comes from the DB. On the other hand,
__setattr__ must be called for backwards compatibility reasons. I don't see how to achieve this: you can't assign to the attribute in
__init__ - setter will not be called. You can't assign to
__dict__ - setattr will not be called. In addition the solution must not cause any severe performance regressions to
__init__
We have managed to live without this feature to this day. Maybe it is time to just let this feature go?
comment:49 Changed 8 years ago by
Indeed, this could introduce subtle backwards incompatibilities for people who have overridden
__setattr__. I don't believe this had been noticed until now.
An earlier comment claims that "the desired behaviour is simple as hell", but that isn't backed by any code and the analysis includes some hand-waving. I see that the latest patch, which is the 14th attempt at doing this correctly, still contains a comment saying it's a hack, and it alters
__bases__. I'm not comfortable with such techniques, to say the least.
My recommendation would be to call the field
_foo and have a
foo property on your model that actually gets/sets
_foo. This might seem unclean, but it's absolutely transparent, and the relation with the ORM is totally obvious. It's also immune to the multiple-execution problem.
If you have a better idea — and enough arguments to convince the core team that it doesn't have side effects such as the one found by Anssi — please make your case on django-developers. Thanks!
comment:50 Changed 8 years ago by
Anyone looking for a simple answer to this problem without using an underscore in the database could try:
classMyClass(models.Model): _my_date=models.DateField(db_column="my_date") @property def my_date(self): return self._my_date @my_date.setter def my_date(self,value): if value>datetime.date.today(): logger.warning("The date chosen was in the future.") self._my_date=value
Augustin's solution was half complete.
comment:51 Changed 6 years ago by
This solution does not work well on templates because they cannot see a field starting with "_". Also queries cannot be done on this field (for example, when writing a custom model manager) because referencing the field "_field" confuses the ORM with field joining.
patch to add getters and setters | https://code.djangoproject.com/ticket/3148 | CC-MAIN-2020-34 | refinedweb | 4,302 | 61.97 |
Installing Math::Pari on OS X
Perhaps you were bored with my Apprentice notes?
Mark Pascal asked for platform specific advice to install Math::Pari. I got this to work in Panther, and previously posted my solution to the mt-dev mailing list. Mark wants us "trackback" to his post. Because Yahoo groups don't support trackback and six apart is too cool for comments (or to link to the mt-dev archives MARK), here it is...
It took me 10 hours to compile Math::Pari on darwin/panther! This was extremely demoralizing.
The tutorial at is OK, but the patch syntax is backwards, some of the notes are irrelevant, and Crypt::Random and Crypt::DSA continue to complain and have to be force installed.
I made the changes described in and it compiled without problem. After that, Crypt::Random and Crypt::DSA installed from the CPAN command line without any errors or warnings, as god intended it to be.
Could a patched version of Math::Pari be bundled with MT 3.1 "Darwin edition," or could a note be added to the documentation?
David
--
Here are the patches from the above listserve, in case the pari
archives disappear:
Index: src/kernel/none/level0.h
===================================================================
RCS file: /home/megrez/cvsroot/pari/src/kernel/none/level0.h,v
retrieving revision 1.4
diff -u -r1.4 level0.h
--- src/kernel/none/level0.h 2002/09/10 23:46:52 1.4
+++ src/kernel/none/level0.h 2002/12/03 12:45:40
@@ -50,8 +50,8 @@
#else
-ulong overflow;
-ulong hiremainder;
+extern ulong overflow;
+extern ulong hiremainder;
INLINE long
addll(ulong x, ulong y)
Index: src/kernel/none/mp.c
===================================================================
RCS file: /home/megrez/cvsroot/pari/src/kernel/none/mp.c,v
retrieving revision 1.91
diff -u -r1.91 mp.c
--- src/kernel/none/mp.c 2002/11/05 15:53:30 1.91
+++ src/kernel/none/mp.c 2002/12/03 12:45:41
@@ -22,6 +22,9 @@
/* version (#ifdef __M68K__) since they are defined in mp.s */
#include "pari.h"
+ulong overflow;
+ulong hiremainder;
+
/* NOTE: arguments of "spec" routines (muliispec, addiispec, etc.) aren't
* GENs but pairs (long *a, long na) representing a list of digits (in basis
* BITS_IN_LONG) : a[0], ..., a[na-1]. [ In ordre to facilitate splitting: no
Recent Comments | http://hello.typepad.com/hello/2004/09/installing_math.html | crawl-002 | refinedweb | 384 | 60.41 |
In this blog post, we will discuss mainly Kafka Consumer and its Offsets. We will understand this using a case study implemented in Scala. This blog post assumes that you are aware of basic Kafka terminology.
CASE STUDY: The Producer is continuously producing records to the source topic. The Consumer is consuming those records from the same topic as it has subscribed for that topic. Obviously, in the real-world scenario, the speed of consumer and producer do not match. In fact, the consumer is mostly slow in consuming records. The reason can be, it has some processing to do on that records. Whatever may the reason, our aim for this post is to find how much our consumer lags behind in reading data/records from the source topic.
Well, it can be done by calculating the difference between the last offset the consumer has read and the latest offset which has been produced by the producer in the Kafka source topic.
First of all, let us make a Kafka consumer and set some of its properties.
val properties = new Properties() properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") properties.put(ConsumerConfig.GROUP_ID_CONFIG, "KafkaExampleNewConsumer") properties.put("key.deserializer","org.apache.kafka.common.serialization.StringDeserializer") properties.put("value.deserializer","org.apache.kafka.common.serialization.StringDeserializer") properties.put("auto.offset.reset", "latest") val consumer = new KafkaConsumer[String, String](properties)
These are necessary Consumer Config properties which you need to set.
Bootstrap_Servers config as specified in the Kafka official site is “A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. ” A Kafka server by default starts at port 9092.
Group_Id is the id of the group to which our consumer belongs.
Key.deserializer and Value.deserializer are to specify how to deserialize record’s key and value. As my producer serializes the records’ key and value using String Serializer. I need to deserialize it using String deserializer.
Note: You can see the code for my Kafka Producer from my Github repository ““. I am not showing the code for my Kafka Producer in this blog, as the blog is about Kafka Consumers.
Auto.offset.reset property is to specify whether you want to consume the records from the beginning (Earliest) or from the last committed offset (Latest).
Next, I have just created my Consumer with the properties set above.
Let us now make our consumer subscribe to a topic. To subscribe to topic, you can use –
consumer.subscribe(util.Arrays.asList("topic-1"))
Here, “topic-1” is the name of my topic.).
Now, the consumer can consume the data from the subscribed topic using command poll(long).
consumer.poll(long)
Method poll accepts a long parameter to specify timeout – the time, in milliseconds, spent waiting in the poll if data is not available in the buffer.
Note – It is an error to not have subscribed to any topics or partitions before polling for data. That is, the consumer needs to be subscribed to some topic or partition before making a call to poll.
On each poll, consumer will try to use the last consumed offset as the starting offset and fetch sequentially.
When consumer polls the data from the topic, we get all the records of that topic read by the consumer in the form of an object of class ConsumerRecords,
val recordsFromConsumer = consumer.poll(10000)
which acts as a container to hold the list of ConsumerRecord per partition for a particular topic. We can retrieve all the records of a particular Topic read by the consumer as a list of ConsumerRecord using method records of class ConsumerRecords.
val recordsFromConsumerList = recordsFromConsumer.records("topic-1").toList
Or you can do
val recordsFromConsumerList = recordsFromConsumer.asScala.toList
For this, you need to import
import scala.collection.JavaConverters._
To find the offset of the latest record read by the consumer, we can retrieve the last ConsumerRecord from the list of records in ConsumerRecords and then call offset method on that record.
val lastOffset = recordsFromConsumerList.last.offset()
Now, this offset is the last offset which is read by the consumer from the topic.
Now, to find the last offset of the topic, i.e the offset of the last record present in the topic, we can use endOffsets method of KafkaConsumer. It gives the last offset for the given partitions. Its return type is Map<TopicPartition, Long>.
The last offset of a partition is the offset of the upcoming message, i.e. the offset of the last available message + 1.
val partitionsAssigned = consumer.assignment() val endOffsetsPartitionMap = consumer.endOffsets(partitionsAssigned)
Method endOffsets accepts a collection of TopicPartition, for which you want to find the endOffsets.
As I want to find the endOffsets of the partitions which are assigned to my topic, I have passed the value of consumer.assignment() in the parameter of endOffsets.
The consumer.assignment gives the set of TopicPartitions the consumer has been assigned.
Note: You should call method “assignment” only after calling “poll” on the consumer; otherwise it will give null as the result.
Note: Method “endOffsets” doesn’t change the position of the consumer, unlike “seek” methods which do change the consumer position/offset.
Anytime if you want to check the current position of the consumer, you can find that using
val currentPosition = consumer.position(consumer.assignment().toList.head)
This method accepts a TopicPartition as a parameter for which you want to find the current position.
Now that we have with us the last read offset by the consumer and the endOffset of a partition of the source topic, we can find their difference to find the consumer lag.
val consumerLag = endOffsets.get(topicPartition.head) - lastReadOffset
Now, finally we have the consumer lag which we wanted in this case study. Thanks to class ConsumerRecords which not only lets you find the offsets but very other useful things.
That’s all for this blog post. Hope you find this blog useful. You can download the complete code from my Github repository ““.
And to know more about the Kafka and its API, you can see its official site which has explained everything very clearly.
Also, if you have any queries, you can comment down this post. I will be very happy to help you.
Thank you all for reading this blog 🙂
Happy Coding !!
Happy Blogging !!
4 thoughts on “Case Study to understand Kafka Consumer and its offsets6 min read”
Reblogged this on LearningPool.
Reblogged this on Technology Zone.
Reblogged this on Coding, Unix & Other Hackeresque Things. | https://blog.knoldus.com/case-study-to-understand-kafka-consumer-and-its-offsets/ | CC-MAIN-2022-27 | refinedweb | 1,076 | 57.67 |
Originally posted by ravish kumar: print will be ### AAAZ Reason is same as for recursive calls. Try to find out how factorial's method with recursion works AW method calls are put on a stack (means LIFO) so 1st call- S.o.p 2nd call - toString() - last call now as toString is Last In so First Out. 1st toString() is executed. 2nd S.o.p is executed. HTH
Originally posted by Seany Iris: According what you said, it should give the result: ###ZAAA is there any error?
Originally posted by Rajinder Yadav: In line 3 a string needs to be constructed before it can be output. Within the call to System.out.println(...) a call is made to create a new object from class Z. Now, when we try to display this object, a call to it's toString() method is made. So line 7 get executed first, then the string "Z" is returned and appended to the string "AAA" giving you the outputs ### AAAZ The thing to understand is how toString() works! NOTE: all classes implicitly are subclasses of class Object which has a method called toString(). The default just returns a string name of the class followed by '@' and a hashcode value. Try the code below to see this. public class Test { static public void main(String[] arg) { MyClass m = new MyClass(); System.out.println(m); } } class MyClass { } Now we can override the toString() method to return our own string. Try this code to see this public class Test { static public void main(String[] arg) { MyClass m = new MyClass(); System.out.println(m); } } class MyClass { public String toString() { return "VIP_Class"; } } Now go back to your original question and hopefully you can see why the output is the ways it is [ January 22, 2002: Message edited by: Rajinder Yadav ]
Originally posted by mark stone: if the toString method is not overridden then the method is called as from class Object. but when it is called from class Object the output is AAAZ@hascode (where Z is name of class) they are all printed on one line and seems logical too. But when the overidding method toString defined is called the result is that the AAA is printed later ie after toString method is called and its return value put in. why does the order vary. what is the difference un calling an inherited method vs overidding method. I am asking about the order of result printed. [ January 23, 2002: Message edited by: mark stone ]
Originally posted by Maulin, Vasavada: hi mark, the problem here is the S.o.pln stmt in the overriddent toString() method. first toString() is called on the object as per the discussion goes so far. this is the case in any scenario. but here we have a speciality of having that println() stmt in toString(). now, as toString() is called first, it will print newline for sure after printing "###" right? so now the pointer is on the new line and then toString() returns with whatever value we have as return. and then it prints the contatenated final value which is "AAAZ"... if u remove println() from the toString() it work as u argued here. regards maulin. | http://www.coderanch.com/t/236303/java-programmer-SCJP/certification/result | CC-MAIN-2014-10 | refinedweb | 531 | 72.16 |
Introduction
Early this year there were a couple of blog posts on how one can extend SAP Data Hub: Developing a Custom Pipeline Operator from a Base Operator, and Develop a Custom Pipeline Operator from a Dockerfile. SAP Data Hub is highly extensible, and you can take the information in these blogs a step further by building a Solution for SAP Data Hub. By creating a Solution you can make custom components for SAP Data Hub more consumable and accelerates the use of your custom pipelines and operators in other deployments. Solutions will play a big part in the SAP Data Hub eco-system.
What is a SAP Data Hub Solution or VSolution?
For the developer persona, SAP Data Hub content is made up of graphs (or pipeline), operators and dockerfiles. When you navigate to the System Manager and the Modeler within SAP Data Hub, you will see Graphs, Operators and Dockerfiles. A graph is made up of interconnected operators to execute an arbitrary task. Internal to each operator, there is a dockerfile definition on top of which the operator runs. Subengines have not fully been exposed yet with SAP Data Hub but these subengines can be attached to dockerfiles (perhaps in a future blogs). Graphs can be also interconnected as it is possible to invoke a graph from another graph. Dockerfiles and operators can include/reference other files as you will see later on.
In the System Manager you will find export features. Once you create these components above, you can export them as a Solution. You can decide to export them individually (not very useful) or as a working set. A SAP Data Hub Solution can contain one or more graphs containing one or more operators containing one or more dockerfiles as a working set. An operator (as shown above with Operator 1) can be used in multiple graphs as operators and graphs are loosely coupled. Carefully consider these dependencies when exporting the objects in a Solution. Additionally, graphs will use system operators and these would not need to included in a Solution. You may also see references to VSolutions which are the same thing as a SAP Data Hub Solution.
What you will be building in this tutorial
There are situations that may not need a custom docker file as described in the second blog referenced above, this blog will walk you through creating a Solution that includes a custom docker file, custom operator and example graph using the custom operator. It is good practice to include a graph using your custom operators to show people how the operator can be used. By creating a Solution, SAP Data Hub administrators can easily incorporate your custom code very quickly into the SAP Data Hub landscape.
The Solution that you will create will provide a basis of deployment for a HANA Python Client which can be used with Vora Pre-Ingestor or WriteFile operators. While there is a HANA Client, this operator will allow a developer to produce batch messages that could be further be customized in terms of format. The operator will expect in input string that contains SQL. The operator has an output port which sends messages containing query results that ideally can be sent to WriteFile or VORA Ingestor nodes. The complete output port identifies when all of the data has been sent. The complete output port will be typically connected to a Graph Terminator node.
The batchsize parameter will specify the number of records to be sent to the output message. The host, port, user and password parameters are connections for a provided HANA database. It is possible to use connections established by the connection manager but for simplicity this blog will use these 4 parameters.
Note: There is alternative development approach for Python Operators which is site in the SAP online documentation.
Prerequisites:
SAP Data Hub Trial V2.3
SAP HANA Client (for Linux)
SAP HANA
Note: The blog here discusses how to acquire the SAP HANA Python libraries. You may want to look in the installation directory under /hana/shared/HDB/hdbclient/ for the hdbcli-n-n.nn.tar.gz file.
Create a Dockerfile
- From the SAP Data Hub Launchpad, navigate to the SAP Data Hub Modeler.
- Open the “Repository” tab. Right click on the “Docker Files section and select “Create Folder”. Name the folder “myexample”.
- Right click on the new folder named “myexample” and select “Create Docker file”. Call the Docker file “PyHANA”.
- In the Dockerfile editor, specify the following Docker file code:
# Use an official Python 3.6 image from the repository as a parent image FROM python:3.6.4-slim-stretch #Create a directory on the docker container named /tmp/SAP_HANA_CLIENT RUN mkdir /tmp/SAP_HANA_CLIENT #Copy the local tar file to the docker container COPY hdbcli-2.3.106.tar.gz /tmp/SAP_HANA_CLIENT #Install the HANA specific components into the Python environment on the container RUN pip install /tmp/SAP_HANA_CLIENT/hdbcli-2.3.106.tar.gz
Note: your hdbcli file may have a different version so you may need to tweak the version of this file. For documentation on Dockerfiles see Docker.com.
- Click on the Tags button on the top right highlighted below. Specify the following tags for the Docker file. Press the Save button.
- Navigate to the SAP Data Hub Launchpad (main screen).
- Navigate to the System Manager.
- Navigate to the Files
- Navigate to the “vflow->dockerfiles->myexample->PyHANA” folder.
- As shown above, select the import drop down menu and select Import File. In the SAP HANA Client full install (Linux binaries), you will find the hdbcli-2.n.nn.tar.gz file. Navigate to this file location and import this file into your new PyHANA dockerfile. It should look like the following once you are done.
You may need to change the file name in the Dockerfile above to reflect the proper version of the file. The above docker code will add and install the HANA Python client into your custom docker image from this location and the it will install the driver into your Python environment using the tar.gz file once it has been added to the docker environment.
- Navigate back to the SAP Data Hub Modeler browser tab.
- Open the “Repository” tab. Navigate to the “Docker Files->myexample->PyHANA”. Double click on PyHANA to open it. On the top right, click on the Build button. Make sure this docker file builds as it will populate the necessary tags for the custom docker image for the custom operator.
Create a Custom Operator based on the new Dockerfile
- Navigate to the “Operators” and right click on the “Operators” select “New folder”.
- Enter the name “myexample”
- Navigate to the new myexample folder, right click on this folder and select “Create Operator”
- Specify the name PyHANA, the display name as “Python HANA Client” and specify the Python3Operator. Press the OK button.
- In the Input Ports section, press the plus symbol circled in red below. Specify the name “input” with a type of “string” as shown below.
- In the Output Ports section, press the plus symbol circled in red below. Specify the port name “output” with a type of “message”. Again, press the plus symbol and add the port named “complete” as type “string” as shown below.
- In the Tags section, press the plus symbol circled in red below. Specify the tag name “hdbcli” which should be found in your drop down list. Again, press the plus symbol and add the tag named “python36” as shown below. If you do not see hdbcli, in the list, refresh your browser and verify that you docker image built properly.
- In the “Operator Configuration” section, press the “Auto Propose” button. Using the tags, that you just specified should assign this custom operator to the custom dockerfile you have built.
You should now see the Operator Configuration as shown below.
- In the Parameters section, press the plus symbol circled in red below. Specify the parameter name “batchsize” with a type of “number”. Repeat this process to add the parameters host, port, user, password with the types string, number, string, string respectively as shown below. Specify default values for batchsize, host, port, user, password as “50”, “localhost”, “30015”, “system” and “changeme” respectively. You will want to keep these default values generic as your Solution will be used in a variety of environments.
- In the Script section, paste the following code:
from hdbcli import dbapi import logging import time logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s') #Get Configuration parameters batchSize=api.config.batchsize addr=api.config.host usr=api.config.user passwd=api.config.password prt=api.config.port #Connect to SAP HANA conn = dbapi.connect( address=str(addr), port=int(prt), user=str(usr), password=str(passwd) ) if (conn): api.logger.info("connection open") def shtdown(): global conn conn.close() api.logger.info("closing connection") def on_input(data): api.logger.info("SQL received"+data) global conn cursor = conn.cursor() cursor.execute(data) if cursor: api.logger.info("open cursor") results = cursor.fetchall() i=0 j=0 collist='' for row in results: i=i+1 j=j+1 colcount=0 for col in row: if col is None: collist=collist+',' #api.logger.info("none found------------------") else: if colcount==0: collist=collist+str(col)+',' else: collist=collist+str(col)+',' colcount=colcount+1 collist=collist[:-1]+'\n' colcount=0 #api.logger.info("collist--\n"+collist) if j==batchSize or len(results)==i: obj ={"message.commit.token": "commit-" + str(i), "demo.batch_size": str(j)}; msgout = api.Message( collist, obj) #api.logger.info("msg---\n"+str(msgout)) api.send( "output", msgout) collist='' j=0 api.logger.info("query complete") time.sleep(10) api.send( "complete", "Done") api.set_port_callback("input", on_input) api.add_shutdown_handler(shtdown)
- Press on the “Edit Documentation” button in top middle of the browser. For information on markdown language, see GitHub.
- Paste the following markup in the editor and press the Save
HANA Python Client ============================================ This operator will accept SQL query which will result in batch set of messages which can be used with Vora Ingestor or WriteFile operators. While there is a HANA Client, this operator will allow a developer to produce batch messages that could be further customized in terms of format via Python script. Input Ports ------------- * **input** (type string) The operator will expect in input string that contains SQL. Output Ports ------------ * **output** (type mesage) The operator has an output port which sends messages can be sent to WriteFile or VORA PreIngestor which are results of the query from the input port. * **complete** (type string) The complete port identifies when all of the data has been sent. The complete port will be typically connected to a Graph Terminator node. Configuration Parameters ------------- **batchsize** (type number) Specify the number of records to be sent to the output message. **host** (type string) : TCPIP hostname for SAP HANA. **port** (type number) TCPIP port number for SAP HANA. **user** (type string) : user name to access SAP HANA. **password** (type string) : password for HANA user.
- Press the Save but to save the Custom Operator
Configure the icon for your Operator (Optional)
- Navigate back to the System Manager and navigate to the location where you find the operator.json for the new operator. It may be under “vflow->operator->myexample”. In this case it is under “vflow->subengines->com->sap->python36->operator->myexample-PyHANA”. Select the “PyHANA” folder.
- Click on “Import File” on the top middle. Navigate to a “.png” file that you wish to use as an icon. Below is an example of a hana.png that was uploaded.
- Navigate back to the “Modeler” browser tab. You should still have the Operator editor open. If not, open the Python HANA Client operator that you just created.
- If you hover over the puzzle-piece icon you will see that you can click on it and change the icon to use one of the icons in a large list of system icons.
- Or you can use the icon that you just uploaded into the System Manager. To do use your icon, select the JSON tab on the right. Change the text “icon”:”puzzle-piece” to “iconsrc”:”hana.png”. Toggle between JSON and Form to ensure that the previous settings are still in place. If the settings are lost (even the script), the toggle between the views again and ensure the Form view has all of the settings. You should also see the icon present on your operator.
- Press the Save
Create your example Graph using the custom operator
- Select the “Graphs” tab on the top right and press the plus symbol circled in red as shown below to create a new example graph that uses the new operator.
- Type “Python Hana” in the search bar. Drag and drop the operator on the new untitled graph canvas.
- Type “Wiretap” in the search bar. Drag and drop the operator on the new untitled graph canvas.
- Type “Javascript” in the search bar. Drag and drop the first javascript operator on the new untitled graph canvas.
- Type “Graph Terminator” in the search bar. Drag and drop the operator on the new untitled graph canvas.
- Wire the Javascript node to the Python HANA Client node. Wire the Python HANA Client output port to the Wiretap node and the complete port to the Graph Terminator node as shown below.
- Open the script editor for the Javascript node.
- Paste the following code into the Editor.
$.setPortCallback("input",onInput); function onInput(ctx) { $.output("select * from sys.objects"); } $.addGenerator(onInput);
- Press the “Save As” button at the top center of the browser.
- Provide the name “myexample.PyHANA” and press the OK button to save the Graph.
At this point you could skip over to Export the Solution section you have generic code to export for your Solution. However, it would be good idea for you test the Solution working by changing the Python HANA Client connection parameter for your environment save the graph again and run it to validate it works. You do want to ship working code afterall. Press the Run button (circled in red below). When the graph is running open the Wiretap UI (circled in red below) to see the output of the custom operator.
The output should look something like this.
Once you have validated that it works. Change the database connection configuration to generic values and resave the graph.
Export the Solution
- Navigate back to the browser tab for System Management. Expand the “vflow” folder and docker folder, operators folder and graphs folders. You should see the “myexample->PyHANA” artifacts in each of these folders.
If you do expand the operator folder and do not see anything under “myexample”, the operator may be under the subengines folder as shown below. You want to find the “myexample->PyHANA” older that has the operator.json and configSchema.json files.
- By pressing CTRL-Click select “myexample->PyHANA” for the dockerfile, operator, subengines and graph. Once selected, select the Export drop down and select “Export select files/folders as solution” at the top middle of the browser.
- Ensure that you have the Graph, Dockerfile, Operator and if necessary Subengines->Operator included with your Solution. Press “Export Solution”. A “tar.gz” file should be downloaded to your system. Open it and it should have a directory “vrep\vsolution” containing 4 folders for the operators, graphs and dockerfile.
Test the Solution
- Now that you have the Solution exported you will want to test the importing of the Solution. To truly test the Solution delete the dockerfile, subengine, graph and operator from SAP Data Hub. Find each of the folder four folders within System Manager named “myexample” and delete the folder by right clicking and selecting “Delete”.
- Refresh the browser. Verify that there are no folders named “myexample”.
- On the top of the browser select the “Import” drop-down button and select “Import Solution”.
- Browse to the location of your exported Solution and select the tar.gz file that you just exported.
- Switch to the Modeler browser tab. Open and edit the myexample.PyHANA graph and make same changes for your database environment that you did for the Python HANA Client operator in Step 10 when you tested the graph earlier. Save the changes.
- Run the myexample.PyHANA that was imported.
Once the graph runs to completion, pat yourself on the back! Good Job.
Other useful links related to this topic
SAP HANA 2.0 SP02 New Features
Developing a Custom Pipeline Operator from a Base Operator
Develop a Custom Pipeline Operator from a Dockerfile
Setting up HANA Express with Python Machine Learning | https://blogs.sap.com/2018/12/05/building-sap-data-hub-solutions-aka-vsolutions/ | CC-MAIN-2019-13 | refinedweb | 2,754 | 58.08 |
The JAXB API in Java can serialize objects to XML. But, no such API exists for JSON. This is surprising considering JAX-RS requires the support for JSON format. Jackson framework is here to fill that gap. For example, Jersey (an implementation of JAX-RS) uses Jackson as the JSON serializer.
If you plan on doing JSON communication from a RESTful service, use your server’s JAX-RS support. That will be the recommended approach. In some cases, you may need to work with JSON outside the context of a RESTful service. For example, save JSON documents on file. In that case, you can use Jackson. In this post, we will learn how to do that.
Download Jackson
Go to the download page and download these JAR files:
- Core
- Annotations
- Databind
Most applications will use the databind API. That JAR file needs the other two in the list.
Write Test Code
Create a Java project and add all three Jackson JAR files to the build path.
Write a class following this example.
import java.util.ArrayList; import com.fasterxml.jackson.databind.ObjectMapper; public class Person { String name; int id; float salary; public String getName() { return name; } public void setName(String name) { this.name = name; } public int getId() { return id; } public void setId(int id) { this.id = id; } public float getSalary() { return salary; } public void setSalary(float salary) { this.salary = salary; } public Person(String name, int id, float salary) { super(); this.name = name; this.id = id; this.salary = salary; } public Person() { } public static void main(String[] args) { try { ObjectMapper mapper = new ObjectMapper(); ArrayList<Person> list = new ArrayList<Person>(); list.add(new Person("Bugs Bunny", 1, 23.45f)); list.add(new Person("Daffy Duck", 2, 23.45f)); list.add(new Person("Samity Sam", 3, 23.45f)); mapper.writeValue(System.out, list); } catch (Exception e) { e.printStackTrace(); } } }
When you run this, the program should print this out.
[{"name":"Bugs Bunny","id":1,"salary":23.45}, {"name":"Daffy Duck","id":2,"salary":23.45}, {"name":"Samity Sam","id":3,"salary":23.45}]
Congratulations. You have now developed a JSON application using Jackson. | https://mobiarch.wordpress.com/2012/11/22/json-serialization-using-jackson/ | CC-MAIN-2018-13 | refinedweb | 348 | 70.39 |
GDI+ Cursors collections of cursors you can
easily use in your application. You can apply them to any control as you wish.
To do this, access the properties of the control and open the Cursor field.
If those cursors are not enough, which is not unusual, you can use your own
cursors..
Practical Learning: Introducing Cursors
Creating Cursors
To create and design your own cursor, you can use use
Microsoft Visual Studio. To do this:
Then, in the Add New Item dialog box, you
can select Cursor File, give it a name and click Open.
A cursor is a Windows file that has the extension .cur.
Practical Learning: Creating a Cursor
Using Cursors.
Another technique consists of using a cursor not listed in the
Properties window. A cursor is based on the Cursor class. Both the Cursors
and the Cursor classes are defined in the System::Windows::Forms
namespace that is part of the System.Windows.Forms.dll library..
If at any time you want to hide a cursor, you can call the Cursor::Hide()
method. Its syntax is:
public:
static void Hide();
To display the cursor again, you can call the Cursor::Show()
method. Its syntax is:
public:
static void Show();
Practical Learning: Using Cursors
System::Void Form1_Load(System::Object^ sender,
System::EventArgs^ e)
{
System::Windows::Forms::Cursor ^ curPush =
gcnew System::Windows::Forms::Cursor(L"Push.cur");
this->panel1->Cursor = Cursors::NoMove2D;
this->treeView1->Cursor = curPush;
this->richTextBox1->Cursor = Cursors::PanSE;
} | http://functionx.com/vccli/gdi+/cursors.htm | CC-MAIN-2018-13 | refinedweb | 242 | 56.96 |
This section shows how to compile, download and run a simple “Hello, World” Linux application on a Blackfin STAMP board. For information on more detailed application development, please see the Application Development section.
Here is the program 'hello.c':
#include <stdio.h> int main() { printf("Hello, World\n"); return 0; }
bfin-uclinux-gcc and bfin-linux-uclibc-gcc are used to compile programs that run on the Linux operating system. They automatically link the application with the the Linux run time libraries, which in turn call the Linux operating system when required (for example, to print a string to the console).
The bfin-elf-gcc compiler is used to compile the Linux kernel and standalone programs as it uses a different set of libraries.
The first step is to ensure that
bfin-uclinux-gcc is in your current path.
rgetz@imhotep:~> bfin-uclinux-gcc --version bfin-uclinux-gcc (ADI-trunk/svn-3343) 4.3.3 Copyright (C) 2008 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
OR
rgetz@imhotep:~> bfin-uclinux-gcc --version bash: bfin-uclinux-gcc: command not found
If this results in
command not found, then you must modify your PATH to include it. If you installed the rpm package, then it is located in
/opt/uClinux/bfin-uclinux/bin/. If you installed the binary tarball, or compiled from source, it will be some other place. See Installing the Blackfin GNU Toolchain for more information.
The second step is to compile
hello.c on your host PC:
rgetz@imhotep:~/src> bfin-uclinux-gcc -Wl,-elf2flt hello.c -o hello
-Wl,-elf2flthas no space between the comma or the dash, and that the character after the
Wis a lower case L, not the number 1
The output executable is 'hello', also it creates a file “hello.gdb” which is used for debugging.
rgetz@imhotep:~/src> ls -l hello* -rwxr--r-- 1 rgetz users 5160 2009-04-30 11:16 hello -rw-r--r-- 1 rgetz users 87 2009-04-30 11:15 hello.c -rwxr-xr-x 1 rgetz users 78494 2009-04-30 11:16 hello.gdb rgetz@imhotep:~/src> file hello hello: BFLT executable - version 4 ram rgetz@imhotep:~/src> file hello.gdb hello.gdb: ELF 32-bit LSB executable, Analog Devices Blackfin, version 1 (SYSV), statically linked, not stripped
Need to use “bfin-linux-uclibc-gcc”.
rgetz@imhotep:~/src> bfin-linux-uclibc-gcc hello.c -o hello
The output executable is 'hello'.
rgetz@imhotep:~/src> ls -l hello* -rwxr-xr-x 1 rgetz users 10688 2009-04-30 11:17 hello -rw-r--r-- 1 rgetz users 87 2009-04-30 11:15 hello.c rgetz@imhotep:~/src> file hello hello: ELF 32-bit LSB executable, Analog Devices Blackfin, version 1 (SYSV), dynamically linked (uses shared libs), not stripped
Since the output of the compilation has not been stripped, and includes debugging information, which is not necessary for running the application, this can be removed before downloading things to the target.
rgetz@imhotep:~/src> bfin-linux-uclibc-strip hello rgetz@imhotep:~/src> ls -l hello* -rwxr-xr-x 1 rgetz users 3556 2009-04-30 11:22 hello -rw-r--r-- 1 rgetz users 87 2009-04-30 11:15 hello.c
The next step is to download the
hello to your Blackfin board. There are many ways to do this, involving either the serial port or ethernet.
On the host:
host:~/checkouts/uClinux-dist> cp ~/test/hello ./romfs/bin/hello host:~/checkouts/uClinux-dist> make image
This will re-build the files
./images/linux and
./images/uImage so that the 'hello' program is included in the uClinux-dist. Once you've booted the board with this new image, just run 'hello' like any other program.
Before you can download over the network, the board's network settings must be configured. Use either:
root> dhcpcd & root> ifconfig eth0or
root> ifconfig eth0 192.168.0.1 up
For more information on how to set up the network, check out the setting up the network page.
You can also place the 'hello' program on a web server and use 'wget' on the STAMP board.
Login to your stamp board and download:
root:~> cd /tmp root:/tmp> wget
Now on the board, modify the permissions:
root:/tmp> chmod 777 hello
You can ftp into the board from the host, and upload the file to the board:
host> ftp 192.168.0.1 username : root password : uClinux ftp> binary ftp> put ./hello ftp> quit
Now on the board, modify the permissions:
root:/tmp> chmod 777 hello
On the board, you can fetch the file from the host using tftp. The first step in this process is to copy the hello to the host's tftp server directory.
host> cp hello /tftpboot/hello
Then on the board:
root> tftp -g -r hello 192.168.0.1
Now on the board, modify the permissions:
root:/tmp> chmod 777 hello
From the host, remote copy the file to the board:
host:> rcp ./hello [email protected]:/hello
If
lrz (receive z-modem) is built into uClinux-dist, you can use this to transfer files from the PC to the board.
After 'hello' has been transferred, modify the permissions:
root:/tmp> chmod 777 hello
root:/tmp> ./hello Hello, World
Complete Table of Contents/Topics | https://www.blackfin.uclinux.org/doku.php?id=simple_hello_world_application_example | CC-MAIN-2018-30 | refinedweb | 902 | 64 |
the WP at the February, 2008 meeting as paper J16/08-0056 = WG21 N2546.].
[Voted into WP at April, 2007 meeting.]
Section 1.3 [intro.defs],.6.1 [temp.over.link]). This suggests that the name and scope of the function should be part of its signature.
Proposed resolution (October, 2006):
Replace 1.3 [intro.defs] [intro.defs]. The names of the template parameters are significant...
(See also issue 537.)
[Voted into WP at April, 2007 meeting.]
The standard defines “signature” in two places: 1.3 [intro.defs] and 14.5 [intro.defs] words “the information about a function that participates in overload resolution” isn't quite right either. Perhaps, “the information about a function that distinguishes it in a set of overloaded functions?”
Eric Gufford:
In 1.3 [intro.defs] the definition states that “Function signatures do not include return type, because that does not participate in overload resolution,” while 14.5.
[Voted into WP at April, 2006 meeting.]
The standard uses “most derived object” in some places (for example, 1.3 [intro.defs] April 2005 meeting.].
(This resolution also resolves isssue 306.)
[Voted into WP at April 2005 meeting.]".
.]
Consider the following example:
class A { class A1{}; static void func(A1, int); static void func(float, int); static const int garbconst = 3; public: template < class T, int i, void (*f)(T, int) > class int_temp {}; template<> class int_temp<A1, 5, func> { void func1() }; friend int_temp<A1, 5, func>::func1(); int_temp<A1, 5, func>* func2(); }; A::int_temp<A::A1, A::garbconst + 2, &A::func>* A::func2() {...}ISSUE 1:
In 11 [class.access] paragraph 5 we have:
A::int_temp A::A1 A::garbconst (part of an expression) A::func (after overloading is done)I suspect that member templates were not really considered when this was written, and that it might have been written rather differently if they had been. Note that access to the template arguments is only legal because the class has been declared a friend, which is probably not what most programmers would expect.
Rationale:
Not a defect. This behavior is as intended.
ISSUE 2:
Now consider void A::int_temp<A::A1, A::garbconst + 2, &A::func>::func1() {...} By my reading of 11.8 [class.access.nest] , the references to A::A1, A::garbconst and A::func are now illegal, and there is no way to define this function outside of the class. Is there any need to do anything about either of these Issues?
Proposed resolution (04/01):
The resolution for this issue is contained in the resolution for issue 45.
granting friendship can be used in the base-clause of a nested class of the friend class. However, the declarations of members of classes nested within the friend class cannot access the names of private and protected members from the class granting friendship. Also, because the base-clause of the friend class is not part of its member declarations, }; };—end note].
[Moved to DR at 4/01 meeting.]
11.2 [class.access.base] paragraph 4 says:
A base class is said to be accessible if an invented public member of the base class is accessible. If a base class is accessible, one can implicitly convert a pointer to a derived class to a pointer to that base class.Given the above, is the following well-formed?
class D; class B { protected: int b1; friend void foo( D* pd ); }; class D : protected B { }; void foo( D* pd ) { if ( pd->b1 > 0 ); // Is 'b1' accessible? }Can you access the protected member b1 of B in foo? Can you convert a D* to a B* in foo?
1st interpretation:
A public member of B is accessible within foo (since foo is a friend), therefore foo can refer to b1 and convert a D* to a B*.
2nd interpretation:
B is a protected base class of D, and a public member of B is a protected member of D and can only be accessed within members of D and friends of D. Therefore foo cannot refer to b1 and cannot convert a D* to a B*.
(See.]
The text in 11.2 [class.access.base] paragraph 4 does not seem to handle the following cases:
class D; class B { private: int i; friend class D; }; class C : private B { }; class D : private C { void f() { B::i; //1: well-formed? i; //2: well-formed? } };The member i is not a member of D and cannot be accessed in the scope of D. What is the naming class of the member i on line //1 and line //2?.]
The definition of "friend" in 11.4 [class.friend] says:
A friend of a class is a function or class that is not a member of the class but is permitted to use the private and protected member names from the class. ...A nested class, i.e. INNER in the example below, is a member of class OUTER. The sentence above states that it cannot be a friend. I think this is a mistake.
class OUTER { class INNER; friend class INNER; class INNER {}; }; October 2004 meeting.]. [class.access] paragraph 5 says that access to D::E is checked with member access to class E, but unfortunately that doesn't give access to D::E. 11 [class.access] paragraph 5) as a qualified name, then the return type is an error just like referring to C::B in the member list of class B above (i.e. //2) is ill-formed.
Proposed resolution (04/01):
The resolution for this issue is incorporated into the resolution for issue 45.
[Moved to DR at 4/01 meeting.]
Example:
#include <iostream.h> class C { // entire body is private struct Parent { Parent() { cout << "C::Parent::Parent()\n"; } }; struct Derived : Parent { Derived() { cout << "C::Derived::Derived()\n"; } }; Derived d; }; int main() { C c; // Prints message from both nested classes return 0; }How legal/illegal is this? Paragraphs that seem to apply here are:
11 [class.access] paragraph 1:
A member of a class can beand 11.8 [class.access.nest] paragraph 1:
- private; that is, its name can be used only by members and friends of the class in which it is declared. [...]
The members of a nested class have no special access to members of an enclosing class, nor to classes or functions that have granted friendship to an enclosing class; the usual access rules (clause 11 [class.access] ) shall be obeyed. [...]This makes me think that the ': Parent' part is OK by itself, but that the implicit call of 'Parent::Parent()' by 'Derived::Derived()' is not.
From Mike Miller:
I think it is completely legal, by the reasoning given in the (non-normative) 11.8 [class.access.nest] paragraph 2. The use of a private nested class as a base of another nested class is explicitly declared to be acceptable there. I think the rationale in the comments in the example ("// OK because of injection of name A in A") presupposes that public members of the base class will be public members in a (publicly-derived) derived class, regardless of the access of the base class, so the constructor invocation should be okay as well.
I can't find anything normative that explicitly says that, though.
.
[Voted into the WP at the April, 2007 meeting as part of paper J16/07-0099 = WG21 N2239.].
Proposed resolution (April, 2007):
As specified in paper J16/07-0099 = WG21 N2239.
:.
[Voted into WP at April 2005 meeting.]?a.
[Voted into WP at October 2004 meeting.].
[Moved to DR at October 2002 meeting.]
Does dropping a cv-qualifier on a reference binding prevent the binding as far as overload resolution is concerned? Paragraph 4 says "Other restrictions on binding a reference to a particular argument do not affect the formation of a conversion sequence." This was intended to refer to things like access checking, but some readers have taken that to mean that any aspects of reference binding not mentioned in this section do not preclude the binding..
[Voted into WP at October 2003 meeting.]
template <class T> void f(T); template <class T> void g(T); template <class T> void g(T,T); int main() { (&f<int>); (&g<int>); }The question is whether &f<int> identifies a unique function. &g<int> is clearly ambiguous.
13 | http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html | crawl-001 | refinedweb | 1,374 | 64.51 |
I have an extremely strange bug that I’ve only been able to reproduce
in the context of active-messaging. But, stepping through the code,
the problem appears to be at the level of dependencies.rb, so I will
post also here to see if anyone has any insight to this problem.Any
help or ideas on tackling this would be greatly appreciated.
I’m using Rails 2.1. Here’s the problem:
Assume you have a file lib/foo/bar.rb, defined:
module Foo
class Bar
end
end
And that you have an A13g processor that references this class, like:
class MyProcessor < ApplicationProcessor
subscribes_to :whatever
def on_message(message)
Foo::Bar
end
end
If I run a message through to the poller, this processor correctly
autoloads the class. On subsequent tries, I get an exception:
LoadError raised: Expected c:/app_root/lib/foo/bar.rb to define
Foo::Bar
- I’ve only found this odd behavior when running in production mode.
Development mode is just fine.
- Also, the file has to be within a subfolder on the load path (i.e. a
subdirectory of lib). ‘lib/bar.rb’ loads each time just fine.
- I’ve produced it on both Solaris and Windows platforms.
I have tried debugging through the source. Here is all I can gather:
Both first and second invocation get to gems/custom_require.rb:27. At
that point, stepping through the source code the first time takes me
to the line ‘module Foo’ in lib/foo/bar.rb.
The second time around, a step returns immediately, not going into
lib/
foo/bar.rb. Then, a few lines later, the exception is triggered when
the interpreter cannot find the class.
Has anyone encountered this sort of thing? Any idea what this could
be?
What is throwing me is that stepping through the source code, there is
no difference in the code paths taken, just a difference in what
happens when I get to gems/custom_require.rb:27 | https://www.ruby-forum.com/t/strange-class-loading-problem-in-production-environment/145381 | CC-MAIN-2022-27 | refinedweb | 327 | 66.44 |
Created on 2008-01-04 21:30 by flxkid, last changed 2014-04-13 19:00 by nikratio.
dircmp's ignore and hide list only take exact files to ignore, not unix
filename pattern's. This means you can't hide/ignore *.bak or something
similar. Changing the _filter function adds this:
def newfilter(flist, skip):
for pattern in skip:
flist = list(ifilterfalse(fnmatch.filter(flist,
pattern).__contains__, flist))
return flist
I'm sorry, but can you rephrase that in the form of a patch? I can't
quite figure out what you're trying to say, except that it sounds like
it's scratching an itch of yours.
Patch attached (sorry, this is my first bug report on an os project).
dircmp has a list of files to ignore and hide. These lists right now
are compared to the left and right lists using __contains__ to filter
out the ignore/hide lists.
This patch adds the ability to pass file patterns in addition to
filenames so that you can filter classes of files such as *.bak or temp*.*
sorry...jacked up the patch file...new one attached
The documentation doesn't say anything about dircmp being supposed to
support pattern matching. This ticket is a feature request rather than a
bug.
Please also include at least documentation changes, since this changes
the behavior of the module. This would be in the file:
Doc/library/filecmp.rst
Also. If possible a test would be great. The file for this would be:
./Lib/test/test_filecmp.py
I've implemented an enhanced version of this feature by adding a keyword
'match' to the constructor of class 'dircmp'. It defaults to function
'fnmatch' imported from module 'fnmatch'.
This allows to exclude directories and/or files by using patterns like
'*.tmp'.
By giving a different function it's also possible to use more elaborated
patterns, for example, based on regular expressions.
Attached patch includes updates of documentation and test cases.
+1 on adding the match argument. Can you comment on how one would
implement the old behavior? I would guess match=lambda x,y: x in y,
which is not that bad, but maybe that should be the default and those
who need pattern matching should use match=fnmatch.
On the patch itself, please don't change default arguments from None to
lists or function. There is a subtle difference between the two forms.
For example, in your code if someone overrides filecmp.fnmatch before
calling dircmp, old fnmatch will still be used. If you do match=None in
finction declaration and match is None check in the function body, then
the new overridden value will be used in the above scenario.
Ok, I've set default arguments (back) to None. Revised patch attached.
Defaulting the match function to fnmatch doesn't change the behavior in
the "normal" case, i.e. when regular file / directory names are used,
like in the default value of ignore. It behaves different in two cases:
a) A string given in ignore contains wildcard character(s):
In this case this parameter would have no effect in the previous
implementation, because the string would not match any file / directory
name exactly. In the changed implementation all files / directories
matching the pattern would be ignored. If the wildcard(s) were included
by intent, this is what probably was intended; if they were included by
mistake, both version do not behave as intended.
b) File system is case-insensitive:
In this case the changed implementation will ignore files / directories
which the previous version did not ignore because of a case mismatch.
But, on such a file system this is what one would normally expect, I think.
So, in both cases, I feel the changed behavior is acceptable.
Or did I miss something?
On Fri, Apr 11, 2008 at 12:10 PM, Michael Amrhein
<[email protected]> wrote:
>
..
> a) A string given in ignore contains wildcard character(s):
> In this case this parameter would have no effect in the previous
> implementation, because the string would not match any file / directory
> name exactly.
'*' is a perfectly legal filename character on most filesystems
As you are working on this, please consider changing
self.hide+self.ignore in phase0 to chain(self.hide, self.ignore) where
chain should be imported from itertools. There is no need to create the
combined list (twice!) and not accepting arbitrary iterables for hide
and ignore seems to be against the zen of python.
> Alexander Belopolsky <[email protected]> added the comment:
...
>
> '*' is a perfectly legal filename character on most filesystems
>
Oops! Never thought of putting a '*' into a file name.
Obviously, I should have tried before ...
Ok, then I agree that, for not breaking existing code, the match
function should default to string comparison.
I'll provide a second revised patch in the next days.
And, I'll chain ignore and hide, as you proposed.
There is one small issue I would like to discuss:
While the comparison of directory and file names in phase1 is
case-insensitive on case-insensitive systems (os.path.normcase applied
to each name), the filtering of ignore and hide in phase0 isn't.
I can't imagine a good reason for this and would like to change it by
also applying os.name.normcase to each name in ignore and hide.
Here's a 2nd revised patch, which
* adds a keyword 'match' to the constructor of class 'dircmp'
* defaults 'match' to str.__eq__
* modifies method 'phase0': apply os.name.normcase to each name in
ignore and hide
* modifies the docs accordingly, incl. an example for using pattern matching
* modifies the test case for the default matching
* adds a test case for using pattern matching (fnmatch.fnmatch)
The patch does not apply to py3k branch.
Patch worked fine with 2.7. I reworked it for SVN trunk but got this failure.
FAILED (failures=1)
Traceback (most recent call last):
File "test_filecmp.py", line 179, in <module>
test_main()
File "test_filecmp.py", line 176, in test_main
support.run_unittest(FileCompareTestCase, DirCompareTestCase)
File "c:\py3k\lib\test\support.py", line 1128, in run_unittest
_run_suite(suite)
File "c:\py3k\lib\test\support.py", line 1111, in _run_suite
raise TestFailed(err)
test.support.TestFailed: Traceback (most recent call last):
File "test_filecmp.py", line 158, in test_dircmp_fnmatch
self.assertEqual(d.left_list, ['file'])
AssertionError: Lists differ: ['dir-ignore', 'file', 'file.t... != ['file']
First differing element 0:
dir-ignore
file
First list contains 2 additional elements.
First extra element 1:
file
- ['dir-ignore', 'file', 'file.tmp']
+ ['file']
I've attached a py3k patch as a different pair of eyes is more likely to spot a problem.
I don't think that we can just introduce path normalization in phase0. Even though I agree that this would be the proper way to do it when reimplementing from scratch, it breaks backward compatibility.
There also is a small mistake in that the *match* attribute should also be used for subdirectories in the `phase4` method.
Other than that, this patch looks good to me. I fixed the above issues, rebased on current hg tip, and added some missing markup in the documentation. After inspecting the code, it seems that there is no difference between directory entries being "hidden" by the *hide* parameter, and being "ignored* by the *ignore* parameter, so I also updated the documentation make this less confusing.
I could not reproduce the test failure reported by Mark, but this is most likely because I could not find out on what base revision to apply his patch.
I think this is ready for commit.
Attached is an updated patch that addresses the comments from Rietveld. Thanks for the feedback!
Updated patch to acknowledge original authors in Misc/ACKS. | https://bugs.python.org/issue1738 | CC-MAIN-2021-21 | refinedweb | 1,284 | 66.94 |
Hey guys,
I've got an issue with the bowling program I've created. It's outputting the wrong scores, but I believe it's because of wrong bowling code. Is there anything wrong that you can see?
One of the issues I'm having is frame 2, where the scores were 9 and 1, and it's being counted as a STRIKE, inflating the score quite a bit.
#include <iostream> #include <fstream> #include <string> using namespace std; int main() { //declare variables int frame[10] = {0}; int balls[21] = {0}; int input_counter = 0; int sum = 0; int strike = 10; //input stream ifstream score; //open file and have a statement if file is not opened score.open("MGlane9.txt"); if(score.fail()) { cout << "File inaccessible."; } //while END OF FILE is not reached while(!score.eof()) { score >> balls[input_counter]; input_counter++; } //close Input file score.close(); for(int i=0;i<input_counter;i++) { //for anything other than strike if(balls[i] < 10) { cout << "First bowl is " << balls[i] << "Second bowl is " << balls[i+1] << endl; sum += balls[i]+ balls[i+1]; if(balls[i] + balls[i+1] == 10) { cout << "SPARE!!!";} cout << "Sum is " << sum << endl; cout << endl << endl; i++; } //for strike if(balls[i] = 10) { if(balls[i] = 10) { cout << "Strike!" << balls[i] << endl; sum += balls[i] + balls[i+1];} cout << "Sum is " << sum << endl; cout << endl << endl; } } //preview cout getchar(); //end program return 0; } | https://www.daniweb.com/programming/software-development/threads/236484/bowling-program-wrong-scores | CC-MAIN-2017-13 | refinedweb | 232 | 70.84 |
C#, VB.NET.
To implement this first we need to get Ionic.Zip.dll from DotnetZIP Library for that check this link Ionic.Zip.dll or you can get it from attached downloadable code. Now you need to add that dll to your application bin folder for that check below steps download selected files as zip folder from gridview
Now in code behind add the following namespaces
C# Code
Once you add namespaces write the following code in code behind
VB.NET Code
Demo
To test application first upload files after that select available files in gridview and click on Download Selected Files button to generate zip file archive based on files whatever you selected
Download Sample Code Attached
10 comments :
Nice Articles
it nice name space problem
what can do plz help me
why you dont record videos and upload on youtube your blog is great but recording of videos is best way to explain try to look my advice.
thank you.
excellent
what will happen if am uploading files from different folders ?
nice article dude
Great Article boss.....
Raj me also getting name space problem
did u get the solution to it.if yes plz tell me also brother
or any other friends who are here can tell me at [email protected]
Nice one...Very Help full
You rock my friend :) It works perfectly, not many people believe in KISS rule now a days, you do :) Many thanks. | http://www.aspdotnet-suresh.com/2013/04/aspnet-download-multiple-files-as-zip.html | CC-MAIN-2016-44 | refinedweb | 241 | 67.08 |
From: Suman Cherukuri (suman_at_[hidden])
Date: 2005-09-07 15:49:01
> -----Original Message-----
> From: boost-bounces_at_[hidden] [mailto:boost-bounces_at_[hidden]]
> On Behalf Of Tom
> Sent: Wednesday, September 07, 2005 1:21 PM
> To: boost_at_[hidden]
> Subject: Re: [boost] temp file
>
>
>
> On Wed, 7 Sep 2005, Suman Cherukuri wrote:
>
<snipped>
> >
> > Along the lines (temp file), does it make sense to add support for
> creating
> > temp files in the default temp folder based on the OS (and support to
> > override the default) with a unique name automatically generated?
> >
> > --Suman
> >
>
> Sometimes programs need to create any file to put there same output
> and give this file ( for example by file name ) to another program
> or library. It isn't of course perfect solution, but why not if you
> would need it ?
>
> This wrapper of C-function : mktemp() , frees you from duty to clean up.
> (removing files)
>
> Tom
mkstemp()is recommended over mktemp to avoid file name racing. Even mkstemp
is not ideal as it cannot take a const string for filename template etc.
Also, we need to use something else on windows. mktemp() and mkstemp() are
not natively supported on Windows but there are some 3rd party
installations. If there is an equivalent in Boost which is platform
independent and embedded within Filesystem namespace, it'd be very useful.
--Suman
>
> >
> > >
> > >
> > > Thanks
> > > Tom
> > > _______________________________________________
> > > Unsubscribe & other changes:
> > >
> >
> > _______________________________________________
> > Unsubscribe & other changes:
>
> >
> _______________________________________________
> Unsubscribe & other changes:
>
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2005/09/93158.php | CC-MAIN-2019-43 | refinedweb | 255 | 61.77 |
Curve Drawer: Using Two Points - Online Code
Description
This Code demonstrates how to Draw Curves using Graphics in Java.Here u can select two Co-ordinates on the Frame and then Draw Curve between those two Points with Co-ordinates being displayed on Status Bar.
Source Code
import java.awt.BasicStroke; import java.awt.BorderLayout; import java.awt.Canvas; import java.awt.Color; import java.awt.Container; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.GridLayout; imp... (login or register to view full code)
To view full code, you must Login or Register, its FREE.
Hey, registering yourself just takes less than a minute and opens up a whole new GetGyan experience. | http://www.getgyan.com/show/816/Curve_Drawer%3A_Using_two_Points | CC-MAIN-2016-40 | refinedweb | 115 | 54.9 |
Question
Consider the following three bond quotes; a Treasury bond quoted at 106:14, a corporate bond quoted at 96.55, and a municipal bond quoted at 100.95. If the Treasury and corporate bonds have a par value of $1,000 and the municipal bond has a par value of $5,000, what is the price of these three bonds in dollars?
Answer to relevant QuestionsWhat is the taxable equivalent yield on a municipal bond with a yield to maturity of 3.5 percent for an investor in the 33 percent marginal tax bracket? ...Get the trading statistics for the three main U.S. stock exchanges. Compare the trading activity to that of Table. Describe, in words, how to use the variable growth rate technique to value a stock. On March 5, 2013, the Dow Jones Industrial Average set a new high. The index closed at 14,253.77, which was up 125.95 that day. What was the return (in percent) of the stock market that day?
Post your question | http://www.solutioninn.com/consider-the-following-three-bond-quotes-a-treasury-bond-quoted | CC-MAIN-2016-44 | refinedweb | 172 | 76.62 |
I've a JSP app which gives me the error:
The type java.util.Map$Entry cannot be resolved. It is indirectly referenced from required .class files
.java
.java
JDK 1.7.0_79
tomcat 6
Run Tomcat with Java 6 or upgrade to Tomcat 7 and make sure you don't have some old pre-Java 5/pre-generics library on the classpath.
Why do you get this error? Somewhere in the JSP code (not your code, mind), is a dependency on
java.util.Map.Entry. This could be in code which Jasper generates from your JSP.
It's not a direct dependency; rather your code (or the Java code generated from your JSP) needs something else which then needs
java.util.Map.Entry
But the interface has changed in some way. Usually, that's with Java 8 because of the new static helper methods which they added: The name of the class is the same (which makes the error so confusing) but the API has changed and the code can't find something (or found something it didn't expect).
A similar problem can happen when you try to compile against a pre-generics class (even though that should work).
Even worse,
import java.util.Map in your JSP works. It's the existing bytecode somewhere else that causes the trouble.
[EDIT]
In my
/WEB-INF/lib/folder I've:
commons-fileupload,
commons-io,
poiand
rt(may this one be the problem?)
Yes :-)
rt.jar is the Java runtime. It contains
java.* and in your case, a version of
java.util.Map which doesn't match the one from your Java VM.
Remote it and it should work. | https://codedump.io/share/Y9RM8u8XowdQ/1/quotthe-type-javautilmapentry-cannot-be-resolvedquot-tomcat6--jdk7 | CC-MAIN-2017-51 | refinedweb | 279 | 76.62 |
Welcome to the Parallax Discussion Forums, sign-up to participate.
Mike Green wrote: »
Your LCD display is not the right kind of "serial" to work with the tutorial's driver. It's more like a memory device (using the I2C protocol). Look at the <simpletools.h> library functions i2c_newbus and i2c_out. The i2c_out call doesn't use a memory address, so memAddrCount is zero and memAddr is NULL. The device address "i2cAddr" (according to the Amazon link) is 0x27.
i2c_newbus(int sclPin, int sdaPin, int sclDrive);
int i2c_out(i2c *busID, int i2cAddr, int memAddr, int memAddrCount, const unsigned char *data, int dataCount);
What would I put for i2c*busID and const unsigned char*data? I assume int dataCount would be a variable that has what I want to be outputted?
#include "simpletools.h"
i2c *eeBus;
int main()
{
eeBus = i2c_newbus(14, 13, 0);
i2c_out(eeBus, 0x27,
NULL, 0, "abcdefg", 8 );
while(i2c_busy(eeBus, 0x27));
char testStr[] = {0, 0, 0, 0, 0, 0, 0, 0};
i2c_in(eeBus, 0x27,
NULL, 0, testStr, 8 );
print("testStr = %s \n", testStr);
} Libraries/Display/liblcdParallel | http://forums.parallax.com/discussion/comment/1482037/ | CC-MAIN-2020-24 | refinedweb | 178 | 63.7 |
Post your Comment
Dirty checking feature of hibernate.
Dirty checking feature of hibernate. What is the dirty checking feature of Hibernate?
Hi friends,
Dirty checking is a feature of hibernate. It allows the user or developer to avoid the time consuming databases
Hibernate Dirty Checking
This section contains the concept of Hibernate dirty checking
What is the dirty checking feature of Hibernate?
What is the dirty checking feature of Hibernate? What is the dirty checking feature of Hibernate?
Hibernate allows dirty checking... process of updating the changed object is called automatic dirty checking.
Dirty
Dirty Checking In Hibernate
Dirty Checking In Hibernate
In this section we will read about the dirty checking in hibernate.
Dirty Checking is the feature of hibernate that helps... which will demonstrate you about the dirty
checking in hibernate
UITextfield checking
UITextfield checking hello,,
how can i check UITextfield length or any specific value..
hii,
here is some code for textfield checking
if ([name.text isEqualToString:@"hello"])
or
if (name.text.length<1
checking the data
checking the data Sir, I have built an application where a user types a long string as input. Then after he/she types a single word of that string. I'm supposed to show all strings which have that word in them. Ex. the string
php dynamic array checking
php dynamic array checking php dynamic array checking
checking index in prepared statement
checking index in prepared statement If we write as follows:
String query = "insert into st_details values(?,?,?)";
PreparedStatement ps = con.prepareStatement(query);
then after query has been prepared, can we check the index
Just checking elswhere
Just checking elswhere I have been posting some questions within the HTML group but getting no joy. I thought I would try another area.
The main question I was looking for is: I have a textarea box which post a single users
Text box Checking
Checking Classpath Validity - Java Tutorials
Checking Classpath Validity
2001-12-13 The Java Specialists' Newsletter [Issue 037] - Checking that your classpath is valid
Author:
Sydney Redelinghuys..., let's see what Sydney, the one-legged
Java Guru has to say ...
Checking Your
Error in checking null value in string
Error in checking null value in string Error in checking null value in string
Hi am getting Exception : java.lang.NullPointerException in the following code:
String registerno=null;
registerno=request.getParameter("registerno
Checking the decimal whether comma used or not
Checking the decimal whether comma used or not I need to check the given integer whther comma used or not.
For example.
If the user given 123456
The output should be like 123,456
If the user given 123,456
The output should
Validaton(Checking Blank space) - Spring
Validaton(Checking Blank space) Hi i am using java springs .I am having problem in doing validation.
I am just checking blank space.
This is my controller. I named it salescontroller.
package controller;
import
Hibernate delete a row error - Hibernate
Hibernate delete a row error Hello,
I been try with the hibernate delete example (... - Hibernate 3.0rc1
10:46:41,397 INFO Environment:469 - hibernate.properties
Checking whether a year is leap or not
Checking whether a year is leap or not
This tutorial is going to teach you the coding for checking whether a year is
a leap year or not. Here, we have taken the year 2000. So
Hibernate case sensitive comparison.
Hibernate case sensitive comparison. How to check for case sensitive in Hibernate criteria?
package net.roseindia.main;
import...();
}
}
Output:
Hibernate: select this_.emp_id as emp1_0_0_, this_.date_of_join
Checking elsewhere with correctly blocked code and question
Checking elsewhere with correctly blocked code and question Hi I have already posted this question but realised I had made a few mistakes. So firstly the problem again!.
I have a textarea box for user comments which will return
HIBERNATE
HIBERNATE What is difference between Jdbc and Hibernate
hibernate
hibernate what is hibernate listeners
hibernate
hibernate what is hibernate flow
Exceptions Rethrowing with Improved Type Checking
Exceptions Rethrowing with
Improved Type Checking
In this section... checking.
Take a a look at the following code having improved type checking of Java... extends Exception { }
The third class, containing improved type checking
Post your Comment | http://roseindia.net/discussion/49023-Dirty-Checking-In-Hibernate.html | CC-MAIN-2013-48 | refinedweb | 699 | 56.45 |
Spark DataGrid Embedded Font Quandarytcorbet Aug 22, 2011 1:01 PM
01. In everything that follows, I am talking about the latest [21328] version of the SDK, not that I believe that my problems have anything to do with that release, just so anyone interested and willing to help will know the version.
02. My application happens to be rooted in AIR's WindowedApplication, but again, I do not think that has any impact on my problems; I believe the same results would obtain for a Flex Application.
03. I have a custom renderer for the Spark DataGrid which extends DefaultGridItemRenderer. It works fine. Its primary job is to change the font characteristics of each row in the list as a visual clue to the user as to the specific nature of the content that is accessible. Some entries are just in the Regular font, some in Bold, some in Italic, and some in Bold-Italic.
04. I have, for most of the project, embedded the necessary fonts like this:
[Embed (source="C:/Windows/Fonts/ArnoPro-Caption.otf", fontName="ArnoPro_BI_4",
fontStyle="italic",
fontWeight="bold",
mimeType="application/x-font",
embedAsCFF="true",
unicodeRange="U+0021-U+00ff, U+20ac-U+20ac")]
private const ArnoPro_BI_4:Class;
As I said, that all works just as advertized. But, that method of embedding carries the somewhat painful burden of slower compilations, so for the last 24 hours I have unseccessfully been trying to replace that with:
[Embed (source = "../resources/assets/ArnoPro_BI_4.swf", symbol="ArnoPro_BI_4")]
private const ArnoPro_BI_4:Class;
where the swf file was produced via fontswf, using this incantation:
fontswf -4 -u U+0021-U+00ff,U+20ac-U+20ac -b -i -a ArnoPro_BI_4 -o ArnoPro_BI_4.swf C:/Windows/Fonts/ArnoPro-Caption.otf
06. By all that is holy, the two different means of embedding the font ought to yield the same result, but they do not. I have debugging code inserted to print out the list of fonts upon initiation of the application, and they are identical. Both means of embedding do succeed in getting the embedded fonts into the .swf, but the attempt to use the fonts fails using the second approach.
There is, of course, no change being made to the code in the item renderer which merely uses setStyle() to effect the row-by-row result. The result in the second case is that the only style of the embedded font that renders is 'regular'.
07. I have used the 'keep-generated' facility to look at the code being generated by the mxmlc compiler and can see that different code is emitted, but it does not help me find a fix to the problem. Both forms of the meta-tag do something; both methods of embedding seem to correctly register themselves with the FontManager, but only the method of embedding which actually performs the transcoding during compilation seems to result in a set of registered fronts which can be found and correctly used to render output based on the runtime setting of the font style.
1. Re: Spark DataGrid Embedded Font QuandaryFlex harUI
Aug 22, 2011 2:17 PM (in response to tcorbet)1 person found this helpful
Not sure how you build the ArnoPro_BI_4 swf, but if it doesn't have the same
attributes, especially the CFF attribute, then it may not work properly.
2. Re: Spark DataGrid Embedded Font Quandarytcorbet Aug 22, 2011 4:47 PM (in response to Flex harUI)
Thank for the reply
I hoped that my posting indicated how the fonts in the the .swf file were constructed. The "-4", argument to the command-line tool, fontswf, as far as I can tell, is the precise analog to the "embedAsCFF" argument in the [Embed] syntax. That is what makes it so perplexing. Given all the external documentation that is available for each tool/methodology, I would have thought that the resultant bytecodes, classes, flags, whatever, would have been identical. The only difference would be the timing of when the transcoding took place.
Since it is clearly more efficient to only transcode whatever set of fonts an application needs once, not once per build/test turn-around, I would really like to make the fontswf workflow work. For those of us outside the beneficial environment of your licensed tools, the kindly-provided alternative to the facilities built into Flash Professional and/or Flash Builder give us the greatest degree of productivity.
Whoever has access to the source code for Font [I can only see the uninteresting FontAsset in the SDK] can probably determine what difference might result from mxmlc working with this intermediate output, when inline transcoding is 'tagged':
package
{
import mx.core.FontAsset;
[ExcludeClass]
[Embed(fontName="ArnoPro_IT_4", _resolvedSource="C:/WINDOWS/Fonts/ArnoPro-ItalicCaption.otf", fontStyle="italic", _line="1189", ();
}
}
}
versus this, when swf extraction is 'tagged':
package
{
import mx.core.FontAsset;
[ExcludeClass]
[Embed(fontName="ArnoPro_IT_4", _resolvedSource="C:/WINDOWS/Fonts/ArnoPro-ItalicCaption.otf", fontStyle="italic", _line="1191", ();
}
}
}
The only difference is that value for '_line' which probably only indicates that one of the two processes has a comment or empty line somewhere.
3. Re: Spark DataGrid Embedded Font QuandaryFlex harUI
Aug 22, 2011 5:15 PM (in response to tcorbet)1 person found this helpful
I'm not sure what you mean by "inline transcoding is 'tagged'". Font is an
internal Flash class, there is no AS source for it.
You can try using SWFDump to compare the DefineFont tags in the final SWF.
Are you getting any warnings at runtime?
4. Re: Spark DataGrid Embedded Font Quandarytcorbet Aug 22, 2011 7:17 PM (in response to Flex harUI)
01. Sorry for my inventive and not-very-helpful terminology. I just want some shorthand way to distinquish between the use of an {Embed] meta-data tag for which the source attribute is pointing at some acceptable font resident on my development host as contrasted with the one in which the source attribute is pointing to a .swf file for which transcoding has already been accomplished by means of the fontswf utility. In the former case, the mxmlc compiler must call whatever routines are necessary to transcode [inline], in the later case, that work has already been accomplished [off-line].
02. No compilation errors; no runtime errors, just a silent, perplexing failure for setStyle ("fontStyle', ...) to produce the same result that it does when the embedding was accomplished "inline".
03. I had not tried swfdump; thanks for that suggestion I will learn about that tool and see what I might find.
5. Re: Spark DataGrid Embedded Font Quandarytcorbet Aug 22, 2011 8:08 PM (in response to tcorbet)
I don't know if this is the origin of the problem or not, but it seems like it could be contributing to my problem.
Using swfdump to compare the application swf compiled with [Embed source = an external font] to the swf compiled when [Embed source = a swf produced by fontswf] shows differences in the bytecodes that describe a given font that appear to be just noise level. The key information, found in the <DefineFont4> element's attributes are identical.
I then decided to see what the swf coming out of fontswf looked like. Again, noise level bytecode differences, but complete agreement as to the attributes of the <DefineFont4> element's attributes.
Then I looked at the <swf> xml tags. Since, as I said, I am using the latest Flex SDK, and compiling to take advantage of those features, my application swf files have version set to "11". The swf produced by fontswf from that same sdk however generates a version "10" swf format. So, we can at least say that there is a potential for a different result when the mxmlc compiler performs the compilation AND the transcoding in one fell-swoop, as compared to the condition that it faces when compiling for version "11" but dealing with whatever conversions are required to [Embed] a swf that had been compiled for version "10".
I am sure you can tell me whether or not you believe that that difference might account for the silent failure behavior. Assuming that it might be the cause, the natural question is how do I either get a newer version of fontswf that outputs version "11" swfs, or how do I force the present fontswf to do that?
6. Re: Spark DataGrid Embedded Font QuandaryFlex harUI
Aug 22, 2011 9:41 PM (in response to tcorbet)
You can try debugging into the EmbeddedFontRegistry and see what the
difference is there. | https://forums.adobe.com/thread/894111?tstart=0 | CC-MAIN-2017-04 | refinedweb | 1,413 | 56.49 |
Python's math module provides you with some of the most popular mathematical functions you may want to use. In this article, I'll take you through the most common ones.
The
math module is part of the Python standard library, and it is always available with every Python installation. However, you must import it before you can start using the functions it contains.
import math
Now every function in the
math library is accessible by calling
math.function_name(). If you want to import specific functions, use the standard
from math import function_name syntax.
Python Math Floor
The
math.floor(x) function takes one argument
x – either a float or int – and returns the largest integer less than or equal to
x.
>>> math.floor(3.9845794)
3
>>> math.floor(9673.0001)
9673
>>> math.floor(12)
12
The largest numbers less than or equal to 3.9845794 and 9673.0001 are 3 and 9673, respectively. Since 12 is an integer, the result of
math.floor(12) is 12 itself.
>>> math.floor(-10.5)
-11
The floor of -10.5 is -11. This can sometimes be confusing but remember that -11 < -10 < -9 < … < -1 < 0.
If you create a custom Python class, you can make it work with
math.floor() by defining a
__floor__() method.
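Here is a minimal sketch of how that might look; the Money class and the amounts are made up purely for illustration:
import math

class Money:
    # A made-up example class
    def __init__(self, amount):
        self.amount = amount

    def __floor__(self):
        # math.floor() calls this method when it receives a Money object
        return math.floor(self.amount)

print(math.floor(Money(4.75)))   # 4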
Python Math.Ceil
The
math.ceil(x) function takes one argument
x – either a float or int – and returns the smallest integer greater than or equal to
x.
>>> math.ceil(3.9845794)
4
>>> math.ceil(9673.0001)
9674
>>> math.ceil(12)
12
The smallest numbers greater than or equal to 3.9845794 and 9673.0001 are 4 and 9674, respectively. Since 12 is an integer, the result of
math.ceil(12) is 12 itself.
>>> math.ceil(-10.5)
-10
The ceiling of -10.5 is -10. For positive numbers your instinct that 10 < 10.5 is correct, but the ordering flips for negatives: -10.5 < -10, so -10 is the smallest integer greater than or equal to -10.5.
If you create a custom Python class, you can make it work with
math.ceil() by defining a
__ceil__() method.
Python Math Operators
The standard mathematical operators are not defined in the math module but rather in the syntax of Python itself.
To add two numbers together, use the
+ operator.
>>> 5 + 10
15
To subtract two numbers, use the
- operator.
>>> 5 - 10
-5
To multiply two numbers together, use the
* operator.
>>> 5 * 10
50
To divide two numbers, use the
/ operator.
>>> 5 / 10
0.5
Note that this always returns a float even if the result is a whole number.
>>> 10 / 5
2.0
Remember that if you take two random numbers and divide them, it is highly unlikely they will divide each other perfectly, so it is logical that all division with
/ returns a float.
To raise a number to a certain power, use the
** operator.
>>> 5 ** 10
9765625
This is ‘five to the power of ten‘ and you write it in the same order you would write this out by hand.
Then there are some other operators used less often in mathematics but are incredibly useful for computer science and coding: modulus and floor division.
The modulus operator returns the remainder left when one number is divided by another. You perform this calculation with the
% operator in Python.
>>> 13 % 3
1
You should read the above line as '13 modulo 3', and the result is 1. This is because 3 goes into 13 four times (3 x 4 = 12) and the difference left over is 13 - 12 = 1.
Another way to think of it is if you write 13/3 as a compound fraction, you get 4 + 1/3. Looking at the fraction left over – 1/3 – take the numerator (the top part) to get the final result: 1.
If you do many ‘modulo n’ calculations, the set of possible results ranges from 0 up to and including
n-1. So for 3, the range of possible results is 0, 1, and 2.
Here are some more examples:
>>> 14 % 3
2
>>> 15 % 3
0
>>> 16 % 3
1
You can see that
15 % 3 is 0. This result is the case for all multiples of 3.
One incredibly useful way to use the modulo operator is in
for loops if you want to do something every n-th iteration.
for i in range(10):
    if i % 4 == 0:
        print('Divisible by 4!!!')
    else:
        print('Not divisible by 4 :(')
Divisible by 4!!!
Not divisible by 4 :(
Not divisible by 4 :(
Not divisible by 4 :(
Divisible by 4!!!
Not divisible by 4 :(
Not divisible by 4 :(
Not divisible by 4 :(
Divisible by 4!!!
Not divisible by 4 :(
Here I used the modulo operator to print
Divisible by 4!!! every time
i was divisible by 4 – i.e., when
i % 4 == 0 – and print
Not divisible by 4 :( in all other cases.
The final built-in operator is related to modulo. It performs floor division and is written as
//. As the name suggests, floor division is the same as normal division but always rounds the result down to the nearest whole number.
>>> 13 // 3
4
>>> 13 / 3
4.333333333333333
Here I calculated ‘thirteen floor three’, and this returns 4. The result of ‘thirteen divided by three’ is 4.3333, and if you round this down, you get 4.
Another way to think of it is if you write
13/3 as a compound fraction, you get
4 + 1/3. Floor division returns the whole number part of this fraction,
4 in this case.
Here are some more examples:
>>> 14 // 3
4
>>> 15 // 3
5
>>> 16 // 3
5
Note that all of the above examples are ints being floor divided by ints. In each case, Python returns an int. But if either of the numbers is a float, Python returns a float.
>>> 14.0 // 3
4.0
>>> 14 // 3.0
4.0
This result is different to normal division
/ which always returns a float.
You can perform floor division on any two numbers, but you may get surprising results if you add decimal places.
# No difference to normal
>>> 14.999 // 3
4.0
# Returns 3.0, not 4.0!
>>> 14 // 3.999
3.0
# Now we see why
>>> 14 / 3.999
3.500875218804701
When you run
14 // 3.999, the result is
3.0 because
14 / 3.999 is
3.5008... and the floor of
3.5008... is
3.
Floor division for negative numbers works in the same way.
>>> -14 / 3
-4.666666666666667
>>> -14 // 3
-5
Recall that floor division takes the lower number and that
-5 < -4. Thus the result of floor division for negative numbers is not the same as adding a minus sign to the result of floor division for positive numbers.
Python Math Domain Error
You may encounter a special
ValueError when working with Python’s math module.
ValueError: math domain error
Python raises this error when you try to do something that is not mathematically possible or mathematically defined.
import numpy as np
import matplotlib.pyplot as plt

# Plotting y = log(x)
fig, ax = plt.subplots()
ax.set(xlim=(-5, 20), ylim=(-4, 4), title='log(x)', ylabel='y', xlabel='x')
x = np.linspace(-10, 20, num=1000)
y = np.log(x)
plt.plot(x, y)
This is the graph of
log(x). Don't worry if you don't understand the code; what's more important is the following point. You can see that log(x) tends to negative infinity as x tends to 0 and is not defined at all for negative x. Thus, it is mathematically meaningless to calculate the log of a negative number. If you try to do so, Python raises a math domain error.
>>> math.log(-10)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: math domain error
Python Math Round
Rounding is more complicated than you might expect. Incorrectly rounding floats has led to disastrous consequences. The Vancouver Stock Exchange used an overly simplified rounding algorithm when trading stocks. In less than two years, the algorithm resulted in the exchange's index being about half of what it should have been!
The
round() function is not part of the math module but rather a built-in function you can access at all times.
It accepts two arguments:
round(number[, ndigits])
The
number is an int or float, and
ndigits is the rounding precision you want after the decimal point. The square brackets around
ndigits signify that it is an optional argument. If you omit
ndigits, Python rounds
number to the closest integer.
# Closest integer
>>> round(10.237)
10
# One decimal place
>>> round(10.237, 1)
10.2
# Two decimal places
>>> round(10.237, 2)
10.24
Here you can see that
round() works as you would expect.
First, I want to round 10.237 to an integer. So, let’s look at the first value after the decimal place and round down if it’s less than 5 and up if it’s greater than 5. The first value is 2, and so you round down to get 10. For the next example, round 10.237 to one decimal place. Look at the second decimal place – 3 – and so round it down to get 10.2. Finally, round 10.237 to two decimal places by looking at the third decimal place – 7 – and rounding up to get 10.24.
This algorithm works as expected; however, it is not that simple. Let’s look at rounding 1.5 and 2.5.
>>> round(1.5)
2
This rounds to 2, as expected.
>>> round(2.5)
2
But this also rounds to 2! What is going on?
The
round() function applies a type of rounding called ’rounding half to even’. This means that, in the event of a tie, Python rounds to the closest even number.
The mathematical logic underpinning it is explained here, but in short, the reason Python does this is to preserve the mean of the numbers. If all the ties are rounded up (as we are taught in school), then if you round a collection of numbers, the mean of the rounded numbers will be larger than the mean of the actual collection.
Python assumes that about half will be odd for a random collection of numbers, and half will be even. In practice, this is true most of the time. However, there are more mathematically rigorous methods you can use in extreme circumstances.
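A small, made-up example shows the effect on the mean:
>>> ties = [0.5, 1.5, 2.5, 3.5]
>>> sum(ties) / len(ties)                   # true mean
2.0
>>> sum([1, 2, 3, 4]) / 4                   # always rounding ties up inflates the mean
2.5
>>> [round(x) for x in ties]                # rounding half to even
[0, 2, 2, 4]
>>> sum(round(x) for x in ties) / len(ties)
2.0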
Note that floating-point arithmetic has some inherent issues that cannot be resolved. This is not specific to Python; it affects all programming languages, mainly because computers represent floats as binary numbers. Some numbers that have finite decimal representations – such as 0.1 – have infinite binary representations – 0.0001100110011… – and vice versa. Thus, the
round() function is not perfect.
# Expected 2.68 but got 2.67
>>> round(2.675, 2)
2.67
From what I’ve said above, this example should return 2.68 as that is an even number. However, it returns 2.67. This result is not a bug and is a known property of the function. For the vast majority of cases,
round() works as I described above, but you should know that it is not perfect. If you want something more precise, use the decimal module.
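For example, here is a minimal sketch using the decimal module to get conventional "round half up" behaviour; the numbers are just for illustration:
>>> from decimal import Decimal, ROUND_HALF_UP
>>> Decimal('2.675').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
Decimal('2.68')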
Python Math Pi
The math module includes some mathematical constants, one of which is π (pi).
>>> math.pi
3.141592653589793
It is the ratio of the circumference of a circle to its diameter and is 3.141592653589793 to 15 decimal places. If you are going to use this constant a lot, I recommend importing it separately to save you typing out
math. every time you want to use it.
>>> from math import pi
>>> pi
3.141592653589793
Python Math Sqrt
To calculate the square root of a number, use the
math.sqrt(n) function.
>>> math.sqrt(2)
1.4142135623730951
>>> math.sqrt(16)
4.0
Note that this always returns a float. Even if you pass an int and Python can express the result as an int, it always returns a float. This functionality is similar to the division operator and makes logical sense; the vast majority of times you calculate a square root, it will not return an integer.
As of Python 3.8, there is also the function
math.isqrt(n) which returns the integer square root for some integer
n. The result is the same as applying
math.sqrt(n) and then
math.floor() to the result.
# Only works with Python 3.8
>>> math.isqrt(2)
1
>>> math.isqrt(16)
4
If you pass numbers that have precise square roots, you get a similar result to
math.sqrt(), but the result is always an integer.
>>> math.isqrt(16.0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'float' object cannot be interpreted as an integer
You cannot pass a float.
>>> math.isqrt(17)
4
>>> math.floor(math.sqrt(17))
4
The function
math.isqrt(n) is the same as
math.floor(math.sqrt(n)) if
n is an integer.
Python Math Abs
To calculate the absolute value of a number, you don't actually need the math module: the built-in abs() function handles it and works with ints, floats, and complex numbers (for which it returns the magnitude). The math module does offer math.fabs(x), which always returns the absolute value as a float.
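As a quick illustration (the numbers are arbitrary):
>>> abs(-7)
7
>>> abs(-7.25)
7.25
>>> math.fabs(-7)
7.0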
Python Math Random
To generate random numbers, you must use the Python random module rather than the math module. That link takes you to an article I’ve written all about it.
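For completeness, a tiny sketch of the random module is shown below; the exact values you see will differ because they are random:
>>> import random
>>> random.random()          # float in [0.0, 1.0)
0.32383276483316237
>>> random.randint(1, 6)     # like rolling a die
4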
Python Math Degrees
It is important that you can quickly switch between degrees and radians, especially if you work with trigonometric functions.
Let’s say you have an angle
r which is in radians, and you want to convert it to degrees. Simply call
math.degrees(r).
Let’s look at some common examples.
# You need to use pi a lot, so let's import it
>>> from math import pi
>>> math.degrees(pi)
180.0
>>> math.degrees(pi/4)
45.0
>>> math.degrees(2*pi)
360.0
First, I imported
pi so that I could easily use it in all the functions. Then I calculated some common radians-to-degrees conversions. Note that
math.degrees() always returns a float. This result is expected as the vast majority of the time, the result of a conversion is not a whole number.
Note that, as is always the case with floating-point arithmetic, this function is not perfect.
>>> math.degrees(pi/3)
59.99999999999999
This should return 60.0. But note that since 0.999… recurring equals 1, it will not negatively impact your results.
Python Math Radians
Let’s say you have an angle
d in degrees, and you want to convert it to radians. Simply call
math.radians(d).
Let’s look at some common examples.
>>> from math import pi
>>> math.radians(180)
3.141592653589793
>>> math.radians(180) == pi
True
>>> math.radians(45)
0.7853981633974483
>>> math.radians(45) == pi/4
True
One downside with converting degrees to radians is that radians are much harder for humans to read. So, I added in the equality statements afterward to show you that 180 degrees, when converted to radians, is π and likewise for 45 degrees and π/4.
This function is especially crucial if you want to use any of the trigonometric functions as they assume you are passing an angle in radians.
Python Math Sin
To calculate the sine of some angle
r, call
math.sin(r). Note that the function assumes that
r is in radians.
>>> math.sin(0)
0.0
# Assumes angle is in radians!
>>> math.sin(90)
0.8939966636005579
# Convert to radians
>>> math.sin(math.radians(90))
1.0
# Enter as radians
>>> math.sin(pi/2)
1.0
From high school math, we know that
sin(90) = 1.0 if 90 is in degrees. But here I demonstrate that you do not get 1.0 if you input 90. Instead, input
pi/2, and you get the expected result. Alternatively, you can use the
math.radians() function to convert any angle in degrees to radians.
Let’s look at the result for
math.sin(pi).
>>> math.sin(pi) 1.2246467991473532e-16
Again, from high school math, you expect the result to be 0.0, but, as is often the case with floating-point arithmetic, this is not the case. Although we know that the sine of 0 and π are the same value, unfortunately, it is not reflected in the output. This result is because π is an infinite decimal that cannot be represented fully in a computer. However, the number is so small that it should not make a massive difference to your calculations. But if you need it to equal 0, there are some methods you can try, but I will not discuss them in this article for brevity.
Finally, note that all the values returned are floats even if Python can represent them as integers.
Python Math Cos
To calculate the cosine of some angle
r, call
math.cos(r). Note that the function assumes that
r is in radians.
>>> math.cos(0) 1.0 # Assumes angle is in radians >>> math.cos(180) -0.5984600690578581 # Convert to radians >>> math.cos(math.radians(180)) -1.0 # Enter angle in radians >>> math.cos(pi) -1.0
From high school math, we know that
cos(180) = -1.0 if 180 is in degrees. However, the trigonometric functions expect the angle to be in radians. So, you must either convert it to radians using the
math.radians(180) function, or enter the actual radians value, which is
pi in this case. Both methods give you the answer -1.0 as expected.
Let’s look at the result of
math.cos(pi/2).
>>> math.cos(pi/2) 6.123233995736766e-17
The result of
math.cos(pi/2) should be 0.0, but instead, it is a tiny number close to 0. This is because π is an infinite decimal that cannot be represented entirely in a computer. This functionality should be fine for most cases. If you must have it equal to 0, check out this Stack Overflow answer for alternative methods you can use.
Python Math Tan
To calculate the tangent of some angle
r, call
math.tan(r). Note that the function assumes that
r is in radians.
>>> math.tan(0) 0.0 >>> math.tan(pi/4) 0.9999999999999999 >>> math.tan(pi/2) 1.633123935319537e+16
The results for
math.tan() are similar to those for
math.sin() and
math.cos(). You get the results you expect for 0.0, but once you start including
pi, nothing is exactly what you expect. For example,
tan(pi/4) is 1, but Python returns
0.999.... This may not look the same, but, mathematically, they are equal). The result of
tan(pi/2) should be positive infinity, but Python returns a huge number instead. This result is nice as it lets you perform calculations with
math.tan() without throwing loads of errors all the time.
Conclusion
There you have it; you now know how to use the most common functions in Python’s built-in
math module!
You can take the floor or ceiling of any number using
math.floor() and
math.ceil(). You know all the essential operators, what types they return, and when. You’ve seen that Python raises a
Math Domain Error if you try to do something mathematically impossible. And you can use some essential functions and constants for scientific computing such as
math.pi, converting angles from degrees to radians and using the most common trigonometric functions – sin, cos, and tan.
There are some more functions I didn’t get the chance to cover in this article, such as the inverse and hyperbolic trigonometric functions. With your knowledge, you’ll easily understand and use them if you quickly read the docs.. | https://blog.finxter.com/python-math-module/ | CC-MAIN-2020-50 | refinedweb | 3,338 | 77.13 |
In this project you’ll build an ESP32 or ESP8266 client that makes an HTTP POST request to a PHP script to insert data (sensor readings) into a MySQL database.
You’ll also have a web page that displays the sensor readings, timestamp and other information from the database. You can visualize your data from anywhere in the world by accessing your own server.
As an example, we’ll be using a BME280 sensor connected to an ESP board. You can modify the code provided to send readings from a different sensor or use multiple boards.
In order to build this project, you’ll use these technologies:
- ESP32 or ESP8266 programmed with Arduino IDE
- Hosting server and domain name
- PHP script to insert data into MySQL and display it on a web page
- MySQL database to store readings
1. Type “database” in the search bar and select “MySQL Database Wizard”.
2. Enter your desired Database name. In my case, the database name is esp_data. Then, press the “Next Step” button and complete the wizard to create a database user with all privileges on the new database (note down the database name, username and password, because you’ll need them in the PHP scripts).
After that, create a table to hold the readings. Open the SQL tab for your new database (for example, in phpMyAdmin) and run the following query to create a table called SensorData:
CREATE TABLE SensorData (
    id INT(6) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    sensor VARCHAR(30) NOT NULL,
    location VARCHAR(30) NOT NULL,
    value1 VARCHAR(10),
    value2 VARCHAR(10),
    value3 VARCHAR(10),
    reading_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
)
After running the query, you should see the new table SensorData in the example_esp_data database as shown in the figure below:
3. PHP Script HTTP POST – Insert Data in MySQL Database
In this section, we’re going to create a PHP script that is responsible for receiving incoming requests from the ESP32 or ESP8266 and inserting the data into a MySQL database.
If you’re using a hosting provider with cPanel, you can search for “File Manager”:
Then, select the public_html option and press the “+ File” button to create a new .php file with this exact name and extension: post-esp-data.php
Edit the newly created file (post-esp-data.php) and copy the following code. Replace the database name, username and password with your own credentials:
<?php

$servername = "localhost";

// REPLACE with your Database name
$dbname = "REPLACE_WITH_YOUR_DATABASE_NAME";
// REPLACE with Database user
$username = "REPLACE_WITH_YOUR_USERNAME";
// REPLACE with Database user password
$password = "REPLACE_WITH_YOUR_PASSWORD";

// Keep this API Key value compatible with the ESP32/ESP8266 sketch.
// If you change this value, the sketch needs to match.
$api_key_value = "tPmAT5Ab3j7F9";

$api_key = $sensor = $location = $value1 = $value2 = $value3 = "";

if ($_SERVER["REQUEST_METHOD"] == "POST") {
    $api_key = test_input($_POST["api_key"]);
    if($api_key == $api_key_value) {
        $sensor = test_input($_POST["sensor"]);
        $location = test_input($_POST["location"]);
        $value1 = test_input($_POST["value1"]);
        $value2 = test_input($_POST["value2"]);
        $value3 = test_input($_POST["value3"]);

        // Create connection
        $conn = new mysqli($servername, $username, $password, $dbname);
        // Check connection
        if ($conn->connect_error) {
            die("Connection failed: " . $conn->connect_error);
        }

        $sql = "INSERT INTO SensorData (sensor, location, value1, value2, value3)
        VALUES ('" . $sensor . "', '" . $location . "', '" . $value1 . "', '" . $value2 . "', '" . $value3 . "')";

        if ($conn->query($sql) === TRUE) {
            echo "New record created successfully";
        }
        else {
            echo "Error: " . $sql . "<br>" . $conn->error;
        }

        $conn->close();
    }
    else {
        echo "Wrong API Key provided.";
    }
}
else {
    echo "No data posted with HTTP POST.";
}

function test_input($data) {
    $data = trim($data);
    $data = stripslashes($data);
    $data = htmlspecialchars($data);
    return $data;
}
After saving the file, if you open your domain name followed by /post-esp-data.php directly in a browser, you’ll see the following message: “No data posted with HTTP POST.” That is expected, because the script only inserts data when it receives an HTTP POST request.
4. PHP Script – Display Database Content
Create another PHP file in the /public_html directory that will display all the database content in a web page. Name your new file: esp-data.php
Edit the newly created file (esp-data.php) and copy the following code:
<!DOCTYPE html>
<html>
<body>
<?php

$servername = "localhost";

// REPLACE with your Database name
$dbname = "REPLACE_WITH_YOUR_DATABASE_NAME";
// REPLACE with Database user
$username = "REPLACE_WITH_YOUR_USERNAME";
// REPLACE with Database user password
$password = "REPLACE_WITH_YOUR_PASSWORD";

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}

$sql = "SELECT id, sensor, location, value1, value2, value3, reading_time FROM SensorData ORDER BY id DESC";

echo '<table cellspacing="5" cellpadding="5">
  <tr>
    <td>ID</td>
    <td>Sensor</td>
    <td>Location</td>
    <td>Value 1</td>
    <td>Value 2</td>
    <td>Value 3</td>
    <td>Timestamp</td>
  </tr>';

if ($result = $conn->query($sql)) {
    while ($row = $result->fetch_assoc()) {
        $row_id = $row["id"];
        $row_sensor = $row["sensor"];
        $row_location = $row["location"];
        $row_value1 = $row["value1"];
        $row_value2 = $row["value2"];
        $row_value3 = $row["value3"];
        $row_reading_time = $row["reading_time"];
        // Uncomment to adjust the timestamp to your timezone (change 1 to any number of hours)
        //$row_reading_time = date("Y-m-d H:i:s", strtotime("$row_reading_time - 1 hours"));
        echo '<tr>
            <td>' . $row_id . '</td>
            <td>' . $row_sensor . '</td>
            <td>' . $row_location . '</td>
            <td>' . $row_value1 . '</td>
            <td>' . $row_value2 . '</td>
            <td>' . $row_value3 . '</td>
            <td>' . $row_reading_time . '</td>
          </tr>';
    }
    $result->free();
}

$conn->close();
?>
</table>
</body>
</html>
Save the file. If you open your domain name followed by /esp-data.php in a browser, you’ll see a table with the database content (empty for now).
That’s it! If you see that empty table printed in your browser, it means that everything is ready. In the next section, you’ll learn how to insert data from your ESP32 or ESP8266 into the database.
5. Preparing Your ESP32 or ESP8266
This project is compatible with both the ESP32 and ESP8266 boards. You just need to assemble a simple circuit and upload the sketch provided to insert temperature, humidity, pressure and more into your database every 30 seconds.
Before uploading the sketch, set your network credentials and update the following variables so they match your server and the post-esp-data.php script (the API key value must be the same in both places):
// REPLACE with your Domain name and URL path or IP address with path
const char* serverName = "http://example.com/post-esp-data.php";

// Keep this API Key value compatible with the PHP script
String apiKeyValue = "tPmAT5Ab3j7F9";

String sensorName = "BME280";
String sensorLocation = "Office";
In the loop(), the sketch builds the HTTP POST payload by concatenating the API key (apiKeyValue), sensorName, sensorLocation and the BME280 readings, and then sends the request:
String httpRequestData = "api_key=" + apiKeyValue + "&sensor=" + sensorName
                       + "&location=" + sensorLocation
                       + "&value1=" + String(bme.readTemperature())
                       + "&value2=" + String(bme.readHumidity())
                       + "&value3=" + String(bme.readPressure()/100.0F) + "";
int httpResponseCode = http.POST(httpRequestData);
You can comment the variable above that concatenates all the BME280 readings and use the variable below for testing purposes:
String httpRequestData = "api_key=tPmAT5Ab3j7F9&sensor=BME280&location=Office&value1=24.75&value2=49.54&value3=1005.14";
Note: the serverName must point to wherever the PHP scripts are hosted: your own domain name, your computer's local IP address, or you can use a Raspberry Pi for local access.
The example provided is as simple as possible so that you can understand how everything works. After understanding this example, you may change the appearance of the table, publish different sensor readings, publish from multiple ESP boards, and much more.
You might also like reading:
- [Course] Learn ESP32 with Arduino IDE
- ESP32 Publish Sensor Readings to Google Sheets (ESP8266 Compatible)
- ESP32 Bluetooth Classic with Arduino IDE – Getting Started
- ESP32 Web Server.
265 thoughts on “ESP32/ESP8266 Insert Data into MySQL Database using PHP and Arduino IDE”
Very nice.
I would like to see a Raspi version you mentioned.
All local would be the best in security.
I definitely plan to create a follow up guide on that subject, but basically you just need to install MySQL and PHP on a Raspberry Pi. Then, instead of making an HTTP POST request to a domain name, you make the request to the RPi IP address. The exact same code that I use works on the Raspberry Pi by default.
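For example, if the Raspberry Pi had the (example) local IP address 192.168.1.106 and the script was saved in its web root, the only change in the sketch would be the server URL:
const char* serverName = "http://192.168.1.106/post-esp-data.php";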
Regards,
Rui
As a matter of fact I am using it on a local raspi database, albeit that I use MariaDB rather than MySQL, but that is largely the same.
Setting up MySQL isnt really translucent and userfriendly, but MariaDB, tops that. Hint: first log in: ‘sudo mysql’, no user no pw.
But it is very doable.
Your php script doesnt care if it is mysql or mariadb nor if it is on a local server or ‘in the cloud’
Hello Rui Everything is working well. However, from the php page post-esp-data.php I got the message “No data posted with HTTP POST.” whereas the connection is working well. I do not see where is the issue down through the if conditions…. it should write “New record created successfully”
data are correctly inputed in the data base… so i do not see at all where is the problem.
thanks for your tutorial and hope u ll get some time to answer. thanks
Hi!
I made a similiar approach.
An ESP2866 measures temperture, pressure, humidity (BME280) and brightness (LDR). Every minute/10minutes/hour a website is called with parameters to post the sensor data. A php-script on the webserver at the site of my provider takes this parameters and inserts them via mysql into the specified table.
On my website (wetter.cuprum.de – please visit!) you can recall the data – visualized with jpgraph – in severeal intervals. Therefore I used woody snippets to integrate a php-script into wordpress.
My website is in German, but I think the menu item “Konzept und Umsetzung” (concept and implementation) should be understandable due to the pictures.
Happy planning/”boarding”/welding/&coding.
Kind regards, Stefan
Hi Stefan.
Thank you so much for sharing your project and congratulations! Your project looks great! People love data visualization in graphics 😀
Regards,
Sara
hello !
thanks for tutorial
i am getting ERROR CODE -1
can you resolve it please.
Hi.
Sometimes that error is printed in the Arduino IDE Serial Monitor, but the data is still inserted into the server.
If your server is not receiving the data, make sure you have the right files saved in your server (/esp-data.php /post-esp-data.php)
Regards,
Sara
Hi Sara,
Just to be sure were exactly do I need to save the files on the server?
Regards
Zane
You need to save all the PHP files in the root directory of public_html
Hello, I have the same problem like onkar. I’m sure that I have everything correct in my file and “ERROR CODE-1” is still printed in IDE Serial Monitor.
Do you have any idea?
Hi.
Are you using ESP32 or ESP8266?
Hi
i have the same problem, response is -1
im using esp32
Hi.
Which web host are you using?
Regards,
Sara
Hi sara,, do you have a system device using esp32cam as QR code reader and send the data to domain ,for name attendance only, please notice me 🙏🏻
Hi.
We don’t have any tutorials about that subject.
Regards,
Sara
Try to check your server name, it should be “ and not “ happened to me when I just copy & paste my website
What changes should I make in order to use the ESP8266?
Thanks, resolved
Thanks Gilbert! This solved the error.
Hi, i also get the same problem. I am getting ERROR CODE -1. FYI, i am using my computer as a server using xampp. Can you resolve it please.
im using localhost
Hello, Stefan.
Your site is pretty good. But the feedback and contacts are not working.
getting stuck at the end of step #…put in my website address
mywebsite.com/esp-data.php
Connection failed: Access denied for user ‘mywebsite_esp_board’@’localhost’ to database ‘mywebsite_my_first_db’
I don’t need to change “localhost” to something else do I?
Hi Scott.
Keep the localhost and it should work.
I think your user doesn’t have the right permissions to connect to the database.
Open the “MySQL Databases” in cPanel, open your user and make sure that it has all the privileges enabled. See the following picture:
You should also double-check you’ve typed the right database name, user and password in the code.
I hope it helps.
Regards,
Sara
I have changed the “localhost” to actual database URL and the project worked.
First you need create user(esp_board) in cpanel later you need to define credentials, that step is necessary, they forget to add it.
Awesome Rui,
Thanks for this example, love it.
How secure is this type of http posting? Whats your opinion about security on these type of systems using a database across the cloud?. Do you have an example of sending a trigger back on a response from the PHP. Example if you reading is > 100 send led 1 on from a table stored value where the 100 value would be evaluated by its row id some small type of logic so to speak.
Thanks
Chris
Hi Chris.
You can define your server with HTTPS to have all data posted to your server encrypted. Of course, with this example, you are relying on third-party websites, but I’ve been using it for years and haven’t had any problem.
The URL /esp-data.php is currently open to the public, but you can set up a login system with a php code that would keep that data private to you.
At the moment, we don’t have any projects about that exact subject.
Regards,
Sara
Hi,
Can we use thiswith server ?
can you explain what are the changes
Br, Supun
Wow Rui .. very nice post .. my weekend ‘to-do’ list is now officially occupied 🙂 Thanks for all the great tutorials !
That’s great!
Thank you for following our projects.
Regards,
Sara
Hi, it’s very usefull this article, thanks a lot. You can write something about transfer video files instead sensor values? Maybe recorded in a SD memory connected to a ESP32 or ESP8266.
Thanks
At the moment I don’t have any tutorials on that subject, but thanks for the suggestion!
What sensor is this?
Card? Or something else?
And how to use it?
Is there any video demonstration?
Hi.
In this example we’re using a BME280 sensor. It reads temperature, humidity and pressure.
At the moment, we don’t have any companion video for this project.
Regards,
Sara
Hi,
Good tutorial …
You can use WampServer for both Internet and LAN …
1/3 of all servers are WampServer, I have been using it for years
for webpages, controlling steam game Servers, and media servers,
Home Automation using esp8266 and esp32 now
Regards
derek
A word of caution if planning to run this on a Raspi Pi. The conventional wisdom widely shared on the net is that MySQL does a lot of writing to the SD card causing it to wear out quickly. I would suggest that it would work better with an SSD.
Rui, a very clear explanation of the basics of setting up a MySQL system with an ESP device. A follow up article on how to extract specific data / topic would be very welcome.
Thanks Bob for the suggestion! I also probably would recommend using SQLite (over MySQL).
Hi Bob,
its just simply to change your Raspi booting and writing from Sd to SSD or USB. Takes a few Minutes.
It will Boot and write than all Data to the USB or SSD.
It’s also possible to use library « mysqlconnector » without php script
I know, but I think this is a more reliable approach for a real application. However, it depends on your project requirements and what exactly you need to do.
Rui Santos,
You did excellent job in this tutorial.
Your setup procedures working perfect, except that the timestamp is not my local time.
How to change the SQL query time_reading to show my local time ?
Hi.
We’ve added some extra lines of code to adjust the time to your timezone in the esp-data.php file:
You can simply uncomment one of the following lines to adjust the time displayed in your webpage:
//$row_reading_time = date("Y-m-d H:i:s", strtotime("$row_reading_time - 1 hours"));
//$row_reading_time = date("Y-m-d H:i:s", strtotime("$row_reading_time + 4 hours"));
If the actual time (minute and date) is incorrect when inserting data into the database, I recommend contact your host directly to see what is going on in your server.
I hope this solves your problem.
Regards,
Sara
Sara,
Thank you very much. I can see my local time now in “reading time”. I have to +16 hours to make it right.
Regards,
Ong Kheok Chin
Hello Rui,
This tutorial is for one ES8266 and one BME sensor. Can we connect three or more ESP8266 with other sensors. further in your tutorial three values are inserted. Can we insert more than three values.
Hi.
Yes, you can insert more than 3 values. But you need to change the codes.
Regards,
Sara
Rui,
Your links refer to BlueHost as recommended but in step #2 you say go to CPanel. What is CPanel because I have no such thing in Bluehost?
Confusing?
Joe L.
Hi Joe.
You should have CPanel in Bluehost.
Please see the following links:
my.bluehost.com/hosting/help/87#need
I hope this helps.
Regards,
Sara
If i Will use digital input also how do I that?
Thanks in advance
NIELS
Hi Niels.
What do you mean? I think I didn’t understand your question.
Regards,
Sara
Me again 😉
Can i use the esp software in a sonoff basic and use gpio 1 og 14 for digital input, I Will only use it becourse the power supply is build in, and I Will not use the relay.
I Will only connect a photo diode to one of the gpio input an save on and off time in the db
Thanks in advance 😇
Hi again.
I haven’t tested that. But it should work, because it is an ESP8266.
If you try it, then let me know if it worked.
Regards,
Sara
Hi
I have try it for many days but i cant get it to work with sonoff basic, well im new i this world.
My wish is to save data in mysql from my sonoff basic, i have input on gpio14 and if this input change i will like to send on and off to my mysql.
I have some esp8266 on way from China but i cant se where in the code i can add I/O port some replacement for the sensor, maybe you can help me with that.
Tanks in advance
Niels
Niels, I hope you found an answer already, but if not, maybe this will help:
The ESP program reads the sensor in the http request, e.g. when it does this: “String(bme.readTemperature())”
I understand that you only want to send a message to your database when the input on pin14 changes. That is not so hard.
I presume you are working with MQTT, but in case you are not, i’ll just give a general example:
When you switch pin 14, you have to check if the state has changed, so that the received command was indeed ON-OFF or OFF-ON and not ON-ON,
Then you put your http request in an IF statement that basically checks:
IF (previousstate !currentstate) {do http request)
in your http request you then only have to send the current state, dont forget to end with ‘previousstate=current state’
Sara,
How is the api_key_value generated ? Do we just pick any random character/n umbers for this api_key ?
Regards,
Ong Kheok Chin
Yes. That was randomly generated by us. You can set any API Key that you want.
That way, only who knows the API key can post data to your database.
Regards,
Sara
Thanks for writing this awesome article. The article is very comprehensive as you have highlighted all the key points regarding the article. Furthermore, the process is so simply explained that even a lay man can understand the process. However, when talking of the hosting then I think digitalOcean is the best option but, I don’t think that it is only limited to the advanced users. Even someone with the basic knowledge can go with DO server. Perhaps, you can even go with the managed DigitalOcean web server. Below, is the link to the managed hosting: cloudways.com/en/digital-ocean-cloud-hosting.php.
Hey, im trying to use sql for a project and need to send data almost constantly. How much delay does uploading data have? And what could be a good alternative if its too slow?
I haven’t tested the insert speed, but after the HTTP POST request is made with your ESP, the data insertion is immediate.
I see that you have stopped recommending Stablehost, it was there earlier on this week; is there a reason for this?
No reason, I just had a bunch of people emailing me which one should they choose, so I decided to just 1 option with cPanel to be less confusing. This tutorial works with any hosting provider that offers cPanel.
Thank you for the tutorial. I follow the tutorial and it works great. But If I have a multiple ESP8266 clients, Please can you give example that display the latest readings only of all ESP8266 clients. Like this:
———————————————————————————————————–
Sensor Location Value 1 Value 2 Value 3 Timestamp
BME280 Office 21.14 56.91 996.52 2019-06-07 03:57:17
BME280 Kitchen 22.15 56.33 996.62 2019-06-07 03:57:18
BME280 Garage 21.77 56.66 996.80 2019-06-07 03:57:19
Thanks in advance
Regards,
Kim
Hi Kim.
You need to upload the Arduino sketch to all your ESP8266 boards and modify it according to your needs. For example, one ESP publishes the location “kitchen” and the other “room”. Each ESP will publish different data but to the same database.
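For instance, each board can run the same sketch and differ only in these two variables (the values here are just examples):
String sensorName = "BME280";
String sensorLocation = "Kitchen";  // use "Room", "Garage", etc. on the other boards
All boards post to the same post-esp-data.php URL, and the location column tells the readings apart.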
Hello Sara,
Regarding location I already done that on my 4 ESP8266, my concern is how can I display it to the webpage with the latest readings only. Only the Values and Timestamp will change on every refresh if there’s a latest readings. Can it be done, do you have any example on it.
Thanks in advance,
Kim
hello thanks for this great tutorial i am stuck on file manager process because there is no file name public_html in my file manager
Hi
I recommend contacting your hosting provider team and asked them how to access your public_html folder or similar.
Some hosts might have a different name.
Regards,
Sara
In arduino serial monitor i got 200 status code but in server value is not display.. I am struggling that past 2 days.
Can you access your URLs? /post-esp-data.php and /esp-data.php
And see what I show in the post? Did you update the URL with your own domain in the code?
Hi Rui and Sara,
Firstly thanks for all the tutorial that you have provided here. They have helped me a lot over the last couple of weeks.
I created an async webserver on the ESP32. My index.html, stylesheet and js functions are located in the SPIFFS (4mB). All is working, upon typing the local IP in my browser the index.html page shows up nicely.
However…. I have a toggle “button” on my website:
The button loads perfectly and looks the way I want it to look. My question is, how on earth can I pass the state of that button to a variable in my .ino program? I followed your async web server tutorial as well as your temp and humidity web server tutorial. And I can get it to work but I do NOT want to refresh the entire page every time a button is clicked NOR do I want to have my html code inside the .ino file.
Could you point me in the right direction?
Thanks heaps in advance!
Hi Tim.
We have an example in which we get the value from the slider into the Arduino code.
I think you can do something similar with your toggle button.
See the Javascript section in this tutorial:
I hope this helps.
Regards,
Sara
Hi Sara,
Thank you for your reply! I had already used code from suggested project. However, my project is different in a few ways:
1. I store my index.hml file and a myJqueries.js file in the SPIFFS. The AJAX request and other JS code from the Servo project is called upon from within the html page. I do not want that for various reasons. What code do I need to include in my .js file and how do I call it from within my html file?
2. The entire html code of the Servo project is generated within the C/C++ code with help of client.println statements. For that reason C/C++ variables can easily be injected in the html document. However, when working with an external index.html document you need to rely on that server.on(“/”, HTTP_GET, [](AsyncWebServerRequest * request) {… and request->send(SPIFFS, “/index.html”, String(), false, processor). I cannot figure out how to retrieve data from that GET request, use it in my C/C++ program and then send something back to my index.html.
How do I use that “request–> send: correctly to exchange data between my C/C++ code and my index.html?
Would you be able yo provide sample code?
Kind regards,
Tim
Hello – First off let me tell you how much I appreciate what you folks are doing. I’ve followed several of your projects. bought several on-line courses/books and am playing with a couple projects. Now I’m attempting to post data to a MySQL database from an automated weather station. I’m not using the BME sensor but rather the ESP8266 combined with the Hiletgo DHT11 Temperature and Humidity sensor and the Argent precipitation, wind speed and direction devices. I’ve got the program to the point where data is being delivered to the MySQL database but PHP is giving me an error as follows: ” Undefined index: api_key in /home/u638013961/domains/shybearnc.net/public_html/post-esp-data.php on line 32″. I’m not sure what the error message means. By the way, I’m using Hostinger.com as my web site host. Thanks for any help. – Gary
I’m not sure Gary, but to be honest you should start with a BME280 sensor first, so you use the exact example that I show.
Later on you could modify my example with a different sensor (just to avoid these problems).
Basically, it looks like you’ve deleted this variable declared:
$api_key_value = “tPmAT5Ab3j7F9”;
I’ve created that sample API Key value, so only you can post to your exact URL.
Regards,
Rui
I tried using the BME280 and that worked just fine. Now I’ve switched back to the DHT22, made a couple modifications and, for some reason the data is now being posted – complete with new headings for the web page. So, for now, all is good. Although I do wonder what the purpose of the API_key is and how that is generated? Thanks. – Gary
plz help me out my seriel monitor shows this and data is not posting to server
10
api_key=tPmAT5Ab3j7F9&sensor=BME280&location=Office&value1=10&value2=10&value3=10
Error code: -1
Sometimes it retrieves an error code, but the data is still being inserted in the database.
Can you check your tables to see if the data is there?
Regards,
Rui
Hi Rui Santos,
I have the same issue … but isn’t inserting in database, does this have to do with the API key?
If yes so how can I generate another key? Thanks
00:44:26.724 -> api_key=&temperatura=21.30&umidade=71.00
00:44:26.724 -> Error code: -1
00:45:26.748 -> api_key=&temperatura=21.30&umidade=71.00
00:45:26.748 -> Error code: -1
00:46:26.755 -> api_key=&temperatura=21.30&umidade=71.00
00:46:26.755 -> Error code: -1
i had the same problem until i changed the address of the website from https:// to http:// (deleted the “s”)
Apparently <ESP8266HTTPClient.h> doesn’t work with the HTTPS protocol
Hi. I’m lately working with the same project as this. is this possible to use Arduino Uno and ESP-01 to send data to my live server? Thank you in advance
Yes, it’s definitely possible to do it, but I don’t have any tutorials on that exact subject
thank you very much Rui for your reply yes i check but there is no data in database plz take me out from this problem
Hi, I had a similar problem with the same message. I found that it was because I used https:// under the server name in the ESP32/ESP8266 Arduino Code – I changed this to http:// even though my server address is https://, and it then worked. Hope this works for you as well
Thank you for sharing that solution.
sir do you know how to get/retrieve data from firebase then send to PHP then will received by the arduino?
I already tried “echo” but it was just the PHP to arduino situation
I need firebase to PHP to arduino
because I want to change the wifi SSID and password by sending those two information in firebase
Hi.
I’m sorry, but we don’t have any tutorial about that subject at the moment.
Regards,
Sara
Hi,
i am trying to make a project IOT Based Smart Irrigation System. now I have done my website in PHP and Hardware also ready now.. Now i want to connect it with my domain and save all sensors reading into online database(phpMyAdmin).
second thing is this if i want to turn on/off the water pump manually through my webpage and its effect will work on hardware(ARDUINO UNO/NODEMCU). As a result the water pump will turn on/off even if moisture level is lower then its limit. Then how it is possible ? can you give me a tutorial for this issue..
Thank you in advance for your positive response.
Regards
Ziyam Aslam
From Pakistan
Thanks for the project suggestion, but right now I don’t have that tutorial available.
thank you so much Alister you are great you solve my problem i will be very happy if i solve any of your problem 🙂
Great explanation. But I have a question.
I run my own website, just for fun and education.
My site is “veldbies.nl” and it is a secure site, so http requests will be redirected to https.
Browsers can handle this problem.
Your script returns:
Connecting
…..
Connected to WiFi network with IP Address: 192.168.2.15
api_key=tPmAT5Ab3j7F9&yeti=3
HTTP Response code: 302
IMHO the redirection is not the problem but the secure connection. Do you have an example with a secure connection?
That would be great.
You can open the Arduino IDE > File > Examples > WiFiClientSecure > WiFiClientSecure.ino example which shows how to make HTTPS requests with an ESP32.
Then, you can merge that example with my code to make an HTTPS post to your server…
Note: some servers allow both HTTP and HTTPS connections, so with an HTTP request you can still insert data in your database
Hi Rui,
The example is for an ESP32. I was looking for a similar example for the ESP8266, but couldn’t find it. Does it exist or can you give me directions where to find it or how to apply wifi security for an ESP8266? TIA!
I have everything in your example working great using a DHT22! Thank you! I’m having trouble adding in more values so that I can push and display more data to the database. Any advice?
I got it figured out so I can send more than 3 values, more than 1 sensor. Thanks again for the great tutorial!
Hi Sara & Rui,
Great work on this project – brilliant! I have just completed it using Wampserver and hosting on my laptop. Took a little bit of work setting up Apache and MySql but well worth the aggravation.
I was using a Wemos Lolin Lite board so the pin out to get the BME280 going took some experimenting to find the working I2C pins.
Keep up the sterling work!
Regards
Alan
Hi Alan.
That’s great!
Thank you so much for following our work.
Regards,
Sara
Is there a simple way to modify the esp-data.php file so that the database will display the most recent record on top and the oldest record on the bottom?
hello sir
how can i get api key for my website
The apiKeyValue is just a random string that you can modify. It’s used for security reasons, so only anyone that knows your API key can publish data to your database
Hi Sara, hi Rui,
first one my congratulations for your job here on Random Nerd!!!
Let me ask a question: it could be possible to combine this project with an Asinchronus web server?
Let me know if you have any ideo or suggestion for this
Attilio
Hi.
Yes, it is possible.
We’ll be publishing a project to display data from the database in charts in an asynchronous web server, probably in two weeks.
Regards,
Sara
Can i use Xampp in your project?
Yes, it should work with any system that has MySQL + PHP
lekhoitv, I am using xampp and it is working perfectly!
Hello Rui and Amanda,
I’m curious, Amanad, how you get this working with XAMPP?
For some reason it looks for my ESP that he is not addressing the file post-esd-data.php on my XAMPP-server. The errorcode showing up is -11 (whatever this will say).
In the ESP-sketch I used this data for serverName:
// REPLACE with your Domain name and URL path or IP address with path
//const char* serverName = “
const char* serverName = “
I created a database ‘esp-data’ by XAMPP in MySQL and the other php-file from this example (esp-data.php) is working fine. If I run post-esp-data.php on my iMac in the XAMPP-folder this is also working (except for not posting data).
You have to know that I’m working on iMac with XAMPP.
What I’m doing wrong in my ESP-sketch?
Regards,
Jop
Hi,
In follow-up of my question before how to address my XAMPP-server, I found solution. I was stupid thinking in using the URL that is valid only on my iMac (
Of course I need here the IP-address that is given to my localserver, as the ESP is not aware of “localhost”.
Now it is working fine.
Regards,
Jop
Hi Sara & Rui,
Great work on this project, Thank you for the tutorial.
if I want to the latest information on the first page. In the esp-data.php modify descending id suggestion me please?
Hello,
You can modify the esp-data.php PHP script in this line:
$sql = “SELECT id, sensor, location, value1, value2, value3, reading_time FROM SensorData”;
It becomes:
$sql = “SELECT id, sensor, location, value1, value2, value3, reading_time FROM SensorData order by reading_time desc”;
Hello.
Thank you for your great tutorials. I have learned much following your work. However on this project all went smooth untill the very end.
I have a HTTP Response of 301 showing up on my Serial monitor and no sensor data from BME 280 is getting posted to my database in My PHP Admin.
The post-esp-data.php and esp-data.php show up all ok in my browser but empty.
I also tried labelling my server as http and as https in the Arduino code (as advised by one of the comments here), but no luck.
Do you have any idea what the problem may be?
Thank you.
You should only use HTTP (the Arduino sketch provided only supports HTTP requests). Did you keep the same API Key both in the PHP scripts and Arduino code?
Did you had your exact domain name to the Arduino variable?
I had this same problem.
I figured out that my website had SSL and “force HTTPS” enabled. This meant that my HTTP was forced to redirect to HTTPS and that messed everything up.
To fix it, I had to turn off force HTTPS on the server side.
Hi,
I have the same probleme, but how can I turn off force HTTPS, i tried turn it off from the domains, but it says that there is no SSl certificate configured.
thanks.
Hello Sara and Rui,
I’m running into problems with this project
The database and php programs are OK, but the ESP8266 is running in to an error.
The serial monitor repeatedly shows:
Connecting
.
Connected to WiFi network with IP Address: 192.168.178.37
Soft WDT reset
>>>stack>>>
ctx: cont
sp: 3ffffde0 end: 3fffffc0 offset: 01b0
3fffff90: 40206ea0 25b2a8c0 feefeffe feefeffe
3fffffa0: feefeffe 00000000 3ffee628 40204c60
3fffffb0: feefeffe feefeffe 3ffe8514 40100459
<<<stack<<<
ets Jan 8 2013,rst cause:2, boot mode:(3,6)
load 0x4010f000, len 1384, room 16
tail 8
chksum 0x2d
csum 0x2d
v8b899c12
~ld
What could be the problem?
(The esp32-web-server is running Ok)
Cheers, Gerard
Hi again,
I commented all lines that concern BME280
and commented the lines with fixed values.
Now the previous error is solved
but now the HTTP Response code = -1
I’m running local LAMP, path, rights end , password are OK
What could wrong?
Cheers
Gerard
Can I try another sensors?such as an accelerometer? gonna work?
Yes.
You can modify this project to work with any other sensor.
Regards,
Sara
how about in water level only can i use this?
Yes.
The idea is to modify the project to insert your own data from your sensors.
Regards,
Sara
HTTP Response code: 200
Hey,
Hope I am not too late to the party, thanks for the great tutorial! It was my inspiration for a weather station that is now in operation for some months.
Took the liberty to use the basic method, but took the time to really make it a working setup including a graph view of the data etc.
See my post there:
Best,
Thomas
Hi Thomas.
Your project look great!
I’m always happy when I see our readers implementing our tutorials to make great projects.
Keep up with the great work.
Regards,
Sara
Hello Sara, Hello Rui,
I have followed a few of your tutorials, and they are all great!!!
After reading the tutorial on DS18B20, I adjusted this project to send readings from 4 sensors to my database, and it works perfectly!!!
However… may I do one suggestion?
In my opinion, sensor-readings are numerical data, and they should be stored as such in the database. This will save you a lot of trouble when you want to do calculations on your data later on…
For example:
– difference between MIN an MAX temperature for a certain day or month
– AVERAGE MIN or MAX temperature for a certain month
– etc…
But once again: great job!
I’m having a hard time deciding which tutorial I will do next 😉
Best regards,
Jan.
Hello Jan!
Sorry for taking so long to get back to you!
Here’s the tutorial that you’re looking for:
In that guide, we display and calculate those max, min, average readings!
Thanks for the suggestion.
Regards,
Rui
String apiKeyValue = “tPmAT5Ab3j7F9”;
please i don’t understand apiKeyValue what is this variable where i find
Hi.
That is just a random string set by you, to ensure that only you can post data to your database.
That string should be the same on your sketch and on post-esp-data.php file.
Regards,
Sara
Hello, please, I have a problem sending the data to the “post-esp-data.php” page. The board works and the sensors too, and the data is displayed on the serial monitor, but the problem happens when sending the data to the “post-esp-data.php” page, as soon as I host the page.
thank you very much sir, great tutorial………………
Thank you for sharing the knowledge, collecting the data is what I want!
I am using the free domain server, then created /post-esp-data.php
The URL it shows “
At the beginning it occurred “Error code: 1”.
But after I changed “HTTPS” to “HTTP” in the “serverName” declaration, it works well now.
Thanks again
Hello Nick,
it works, I also changed “https://” to “http://”
Thank you very much……….
I could solve the response -1 (after testing with SOAP UI) by adding the following headers in the ESP32/ESP8266 code (make sure to change example.com to your website’s address!):
http.addHeader("Accept-Encoding", "gzip,deflate");
http.addHeader("Content-Length", "100");
http.addHeader("Host", "example.com"); // make sure to change example.com to your website's address
http.addHeader("Connection", "Keep-Alive");
http.addHeader("User-Agent", "Apache-HttpClient/4.1.1 (java 1.5)");
I’ve done everything you showed in your tutorial but kept returning ERROR Code : -1 and No data posted with HTTP POST. I’m using ESP8266 and Flow Sensor trying to store the flowrate and volume to a database. What am I missing guys, please HELP…
Hi.
What web host service are you using?
Regards,
Sara
i don’t know, why my posts are not displaying here.
ERROR Code : -1 and No data posted with HTTP POST
Not sure why. My intuition tells me it has to do with the headers.. Either way no data is being posted to phpmyadmin or Mysql.. All the data is coming in the serial monitor and all my String are working properly. This tells me there is no issues with the sensor or syntax..it must be connection based.
Hi.
What host provider are you using?
Regards,
Sara
Sara,
I actually am using my own machine as a host. Is this an issue?
Thank you for the prompt reply
Thank’s work fine with a HTTP on webhos*, but on local machine refuse to connect with LAMP.
🙂
I don’t know how to configure apache2 to connect with my local network’s machines
Hi.
Take a look at this tutorial to see if it helps:
Regards,
Sara
thank you 🙂
I’ll follow this tutorial to learn more about the subject.
I’ve been looking for an article like this for a long time! Thanks you
Hi, can i use esp8266 with just wifi router only but without internet access? Can i still view the datas or receive the data in php mysql? Like localhost/network? Thanks a lot
Hi.
You can use a Raspberry Pi for local access. See this tutorial:
I hope this helps.
Regards,
Sara
I did using DHT11 sensor!!
Do you have an example of how to Retrieve data from the MySql database with the esp8266 or esp32?
Hello and thanks a lot for this tutorial
please is there any free way to Hosting server and domain name??
because is not free anymore
Great tuto, Santos.
Following your instructions (which are very clear), I set up the data base and the table, and I will do the electronic part with an ESP8266 later. Thanks for sharing.
Nice tutorial.
What about if we set up a webserver to check real time data, at least when request is sended.
If loop has a 30s delay, it will take long time to check values. any option?
Thanks you
The post-esp-data.php code needs to close the php “?>”
Hi.
You can add the closing tag, but it is not necessary.
Regards,
Sara
Rui, Sara, this is really a great tutorial, thanks for providing it. I used it to a somewhat different project but the concept is same/similar.
I am using a Mega+Wifi board (it has the ESP8266), a moisture sensor is connected to the board and as a server I am using my Synology NAS.
By now I programmed the board and in the serial monitor I get good output:
api_key=tPmAT5Ab3j7F9&sensor=moist01&location=loc01&value1=149
HTTP Response code: 200
api_key=tPmAT5Ab3j7F9&sensor=moist01&location=loc01&value1=181
HTTP Response code: 200
etc etc.
Setting up the Synology NAS as a webserver, I created the database and the table, also put both PHP files in the right folder.
When I call in the browser ip-address/test/post-esp-data.php , I get as in the tutorial “No data posted with HTTP POST.”
When I call in the browser ip-address/test/esp-data.php , I get only the table header as described in the tutorial prior the tutorial gets to the ESP programming.
The issue I have is that the table is not being fed with entries and I dont get any error messages or what so ever as an orientation, can you guide me on what I might be missing?
Cheers, Alex
Rui, Sara, now I am also seeing my first post. Maybe something with my browser cache…
I just applied my “deviating” project on webspace and it works great. I guess the issue I have with Synology NAS relate to some writing permissions that I will check on the Synology forum.
Thanks again for your tutorial, great stuff and I will check out all the other projects and tutorials for more ideas and inspiration! Keep it up!
Hello,
Trying to repost my comment as the first one seems to have disappeared.
Thanks for the great tutorials!
Doing above steps I ran into an issue that I am not able to find a solution on, also it was touched in the comments by people but there is no solution posted or potential direction for a solution posted.
I try to do pretty much the same as you explain but with the variation of using a Mega+Wifi board (ESP8277), with currently just one sensor and I having my database set up on my Synology NAS.
I get good results when checking the Serial Monitor but the posting itsels seems not to work.
Calling: ip-address/test/post-esp-data.php, I just have “No data posted with HTTP POST.” constantly.
Calling: ip-address/test/esp-data.php, I just see the empty table with the column headers.
I went by now through the ESP code to check for typos, also went through all kind of trouble shooting related to MariaDB of the Synology but I couldnt find the root cause.
Appreciate any guidance you can provide or food for thought on the trouble shooting.
Thanks!
Amazing
I am realy amazed
Thank you Sara and Rui
I subscribed to bluehost. Never seen such wonderful customer support.
This example worked from the first click even with bme 680 with minor change
the only thing is that the server time is different than my local time. I should search the web to find the right functions
many thanks
I found the time adjustment in file esp-data.php
Great!
Hi! I have a problem on the page there is a message “No data posted with HTTP POST”. As far as I understand the condition is not met – if ($_SERVER[“REQUEST_METHOD”] == “POST”)-. What could be the problem? Arduino shows a server response of 200 Ok. Thanks.
Hi Denis.
What is the hosting service that you are using?
Regards,
Sara
Hello there…that is the same as my problem, the hosting service I’m using is infinityfree.
Hi,
From the Arduino Serial Monitor, I am getting HTTP Response code: 200, but in my database, nothing gets inserted. I am using infinityfree hosting. Also, I can access both post-esp-data.php and esp-data.php
What could be the issue?
Thanks for your time.
Regards,
Joel
Hi.
The issue is your hosting.
Free hosting won’t work properly for this tutorial.
Regards,
Sara
In case someone wants to try this tutorial on free hosting I found out that 000webhost.com works.
The only issue that I am getting is from time to time I get Error code: -11 which isn’t an HTTP error.
What could be the reason for that error code?
Hello Sara and Rui
I managed to make a MESH network with 3 ESP32 (DEVKITV1) being one the “server” and the others two as “client”. In both, I can put analog values in some inputs (three) and obtain the reading of the values on the “server”.
Now I would like to take the next step: send the data to an SQL “database”.
What I get on the server is:
logServer: Received from 2884070797 msg = {“Serial”: “COMxx”, “node”: 2884070797, “V_wind”: 3.299194, “V_sun”: 0, “V_bat”: 3.299194}
logServer: Received from 3294438085 msg = {“serial”: “COMxx”, “node”: 3294438085, “V_wind”: 0, “V_sun”: 1.687866, “V_bat”: 0}
Where and how should I start to proceed ??
Hugs
Manel
From the Arduino Serial Monitor, I am getting HTTP Response code: 200, but in my database, nothing gets inserted. I am using infinityfree hosting. Also, I can access both post-esp-data.php and esp-data.php
What could be the issue?
Note: i am using 000webhost.com hosting which, I think, is suitable.
Hi.
This tutorial doesn’t work properly with free hosting services.
Regards,
Sara
How can i know if the website I am using is suitable or not?
thanks in advance
It’s hard to tell without testing and trying the server configurations, but if it’s a free hosting platform it will probably not work. They will block all connections from ESP devices to avoid resources usage.
Hi.
I just tried to use another nonfree hosting platform (Education Host) and it still doesn’t send any data. I get HTTP response code:200. Any suggestions, please??
I tried infinityfreeapp free webhosting and it doesn’t send any data. I get HTTP response code:200 so it means it’s okay. What could be the possible problems here?
Hi.
Some free webhosting services don’t work with this project.
Maybe that’s one of them.
Regards,
Sara
Hi
First of all, let me thank you for the wast amount of information, posted in here.
I’ve been using this site as a reference and resource for a lot of my projects.
I’ve used this guide, as a reference to a sensor project, where i post data from a modded ESP32(POE enabled).
I wanted to make a dashboard, and used grafana as the tool.
To make it work, i had to make some changes to the mod-headers, and it might have messed up the communication between the ESP32 and raspberry. Sending the HTTP headers.
The Error! :
In my “post-esp-data.php”, that has been working flawlessly for months, i get the error:
PHP NOTICE: Undefined index: Request Method in /path/post-esp-data.php
Do you have any idea, whatsoever, how i messed up?
I think it stopped, when i got grafana working.
Am i messing up port conf?
Or might it be the mod-header conf?
I hope, that you might point me in the right direction.
I am not able to read the serial output, while the ESP32 are connected, BC of the POE configuration. At some point, i will make a cable for just serial comms, but right now, i’m forced to using the USB port, while disconnected from the system.
Either way, much appreciation for this website.
Sincerly Sebastian
Nevermind, i fixed the issue 🙂
Can’t find HTTPClient.h Library
The library is installed by default, are you using the latest ESP32 or ESP8266 board add-on for your Arduino IDE?
hi, I also downloaded the add-on and the library doesn’t install with that. Could you link the library download?
Please, can you prepare a zip for the HTTPClient library? I can’t find it.
I have the Display withy everything listed in my web browser, and I get the HTTP Response code: 200 reply back in the serial terminal.
I also see the sensor data changing in the serial monitor…
But the there is no updates of the variables in the browser window.
could there be something wrong with the database or the post-esp-data.php ?
Its working fine. Super excellent work you are doing with these training weblink materials and books; Thank you
Thanks very much for this tutorial. I have it running and it does exactly what I needed to be done. I am new to this type of programming and am wondering why VALUES are entered into the INSERT INTO code using concatenation for single and double quotes in the code for file post-esp-data.php? Using ‘” . $sensor . “‘ rather than ‘$sensor’. I understand the entire INSERT INTO code needs to be double quoted. Thanks.
Hello Robert,
The single quote is used to concatenate ‘
The double quote is used to define the values ”
In the end, the $sql variable will look like this:
INSERT INTO SensorData (sensor, location, value1, value2, value3) VALUES (“sensor_value” , “location_value”, “value1_value”, “value2_value”, “value3_value”)
Thank you for your help.
Good afternoon, Rui and Sara!
First of all, I want to thank you for the excellent projects that you publish here on the site!
I am trying to hire a host service to put my project in the cloud. You recommended Bluehost for being friendlier.
Could you tell me if on Amazon WS or Azure you have similar plans? I’m doing a job for college and they have free plans for students.
Another question, when I tried to hire by bluehost they asked which version of php the project uses. Does version 7.3-7.5 support?
Thanks,
I await your reply!
Hello, it should definitely work with most hosting providers, unfortunately I can only test or setup with a hosting that I already use.
It should definitely work with AWS, but I don’t have any tutorials for it. However, free hosting providers will not work because they block the ESP connections or limit the amount of connections to avoid resource usaage.
It should work with any PHP 7 version. I recommend that latest version.
Regards,
Rui
Good afternoon, Rui and Sara!
I want to thank you for the projects that you publish here.
But I have a problem.
I get “HTTP Response code: 500” what does it mean.
No data is shown on the display.
Also there is no data in the SQL-Database.
Serial monitor is showing: api_key=tPmAT5Ab3j7F9&sensor=BME280&location=Office&value1=23.77&value2=36.11&value3=1003.89
HTTP Response code: 500
I’m working with a local NAS with PHPMyAdmin and MySQL installed.
Regards Matthias
Hello everybody!
This “No data posted with HTTP POST.” is a serious issue, I see.
I had problem at Infinity hosting so I tried at my local machine with default Wampserver 3.2.0 and the issue is still there. I don’t think that is because free hosting providers are blocking ESP traffic.
I am testing without ESP device, just pasting and refreshing this link: post-esp-data.php?api_key=222&sensor=BME280&location=Office&value1=24.75&value2=49.54&value3=1005.14 and I get “No data posted with HTTP POST.”
When pasting wrong api key in link, there is the same echo “No data posted with HTTP POST.”. “Wrong API Key provided.” is not triggered. So php script is not executed below “($_SERVER[“REQUEST_METHOD”] == “POST”)”
Rui Santos, please look deeper into this when you find spare time.
Best regards and thank you.
Hi, Thank you four the very good tuto..
I just have a question/Problem
I become this message : api_key=tPmAT5Ab3j7F9&sensorId=12621789&latitude=null&longitude=null&altitude=null&PM10=5.10&PM25=3.40&temperature=null&humidity=null
HTTP Response code: 200
but i can’t see the data in my database or in my “ .. I’m not sure but i think the cause can be “the missed DB User and the password” in my arduino code .. I don’t know how to add and also I’m not sure if it’s really the reason why i still can’t see the data in my database even when i have 200 als http response code
Amazing Tutorial…
Can you please guide me how can we take user input from esp-data webpage and send it to esp32 to change the data frequency time like in this case we have 30 seconds? I am new to all this so don’t know how to take user input and send to esp32….
Hi – Thank you for a great tutorial.
I have this project running successfully and tracking a system. Is it possible to add a password protection to the php file which displays the data? Can anyone suggest some suitable code?
Kind regards
Paul
Am using localhost. How Can I upload data to localhost?
Hi Rui and Sara (Do you speak Portuguese)?
I’m trying using your code as the base for my project, but an update in a library ’cause some issues for this line:
“call to ‘HTTPClient::begin’ declared with attribute error: obsolete API, use ::begin(WiFiClient, url)”
Do you have an idea how to fix it?
Greetings from Brazil
Hi.
It seems some users are getting the same error with that method.
I’ll have to try the sketches and see what is going on.
I’ll try to fix the issue in the next few days.
Regards,
Sara
Hello. Do you happen to recall if there was a solution to this? I had this working on my NodeMCU, but I’m getting this error now I’m trying to change my Wifi settings
Sorry, just saw your solution elsewhere on the page – all sorted now, thanks 🙂
Great!
Solved by changing the version of the board: in the Boards Manager, search for 8266 and change from 3.0.0 to 2.0.74.
Hi Rui and Sara
Am using localhost. How Can I upload data to localhost?
Regards,
Fayez
Hi Rui and Sara,
great project, congratulations. It works great with ESP32. I tried to load it on a NodeMCU 1.0 (ESP-12E Module) but upon compilation of http.begin(serverName); I also receive the error message:
“Call to ‘HTTPClient :: begin’ declared with attribute error: obsolete API, use :: begin (WiFiClient, url)”.
Help, I’ve been fighting for days but I can’t solve it, thanks.
Greetings
Hi.
There were some updates to the library.
Change the code as explained below:
old:
HTTPClient
new:
WiFiClient client;
HTTPClient
“
We’ll update the code as soon as possible.
Regards,
sara
Thanks. I had also this problem. Seems to to work now also with the newest ESP8266 Board version 3.0.2
Great!
For those interested, I solved it by changing the version of the board: in the Boards Manager, search for 8266 and change from 3.0.0 to 2.0.7.
I am getting http error code 400.
api_key=tPmAT5Ab3j7F9&sensor=Pulse Sensor&location=Office&value1=1861&value2=1869&value3=1870
HTTP Response code: 400
api_key=tPmAT5Ab3j7F9&sensor=Pulse Sensor&location=Office&value1=1888&value2=1889&value3=1885
HTTP Response code: 400
This code is very good, but I would like to know if I can get information from the bank and return it to Arduino
Good morning. Excellent topic, but in my case it is returning Code -1. I am using an ESP8266 and I have already tried changing the PHP versions, but nothing changed….
Hi.
What’s the host you’re using?
Regards,
Sara
Hello Sir!
In your tutorial, I tried to replace the BME280 sensor with as DS18B20 Temperature sensor. I completed all steps which show on your website. When I upload the code for ESP8266, in the screen monitor of Arduino IDE, it displays HTTP response code 200 and the value of the temperature sensor. But, it didn’t display any information in URL The URL which I just illustrate. Also, it didn’t have any information database in MySQL too.
Please tell me what should I do next?
Give me advice. Thank you, Sir.
Hi.
What is the host service that you’re using?
Regards,
Sara
Hi, thanks you for taking the time to write such a useful post! I followed it and everything is working as expected. It’s brilliant!!
I would like to take things a little further but I’m not sure how and I was wondering if you could point me in the right direction.
Rather than send data every 10seconds to my server, is it possible to take a sample of data every 10seconds and either store it on an SD card or maybe in a buffer, then after 1 min send 6 rows of data to the server?
I imagine it is possible but I’m just struggling to find a staring point. Any help or guidance would be greatly appreciated.
Thanks you,
Gary
Hi.
If you want to store the data on a microSD card, this article might help:
Or this one:
Regards,
Sara
Hi Sara,
Thanks for replying so quickly!
I’m comfortable storing data on the SD card, what I can’t quite figure out is how to send a batch of sample data in one go.
For example, every 10 seconds my accelerometer stores XYZ data somewhere (buffer or SD card), then after 1 minute the esp32 sends all 6 samples (6x10sec in 1 min) to the database as per the above SQL database tutorial.
Thank you
Gary
You would have to read the first 6 lines of samples stored on the microSD card, and then send 6 requests. One for each line of saved data.
It may be a bit tricky to select each line from the microSD card file, but it is possible.
Check for tutorials that read data from SD card line by line.
I hope this helps.
Regards,
Sara
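As a rough sketch of that idea (the file name /samples.txt, the helper function and the serverName below are assumptions for illustration; each line of the file is expected to hold one ready-made POST body):
#include <SD.h>
#ifdef ESP32
  #include <WiFi.h>
  #include <HTTPClient.h>
#else
  #include <ESP8266WiFi.h>
  #include <ESP8266HTTPClient.h>
#endif

const char* serverName = "http://example.com/post-esp-data.php";

// Call this once per minute, after the Wi-Fi connection is up
void postSavedSamples() {
  File file = SD.open("/samples.txt");  // hypothetical file written by the logging code
  if (!file) {
    return;
  }
  while (file.available()) {
    String line = file.readStringUntil('\n');  // one line = one POST body
    line.trim();
    if (line.length() == 0) {
      continue;
    }
    WiFiClient client;
    HTTPClient http;
    http.begin(client, serverName);
    http.addHeader("Content-Type", "application/x-www-form-urlencoded");
    int httpResponseCode = http.POST(line);
    Serial.print("Posted sample, response: ");
    Serial.println(httpResponseCode);
    http.end();
  }
  file.close();
  // After a successful upload you could clear the file with SD.remove("/samples.txt")
}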
Thank you for your reply.
I’ll have a search as per your suggestion.
Thank you
Gary 🙂
Hi Sara,
Further the comments above I have managed to store the data to in an array (instead of using Strings) and that is writing to the database table without issue.
Do you have any guidance on how I would write multiple rows of data to the table in on go? My code uses an RTC sync’d with the NTP server, the function the stores the data in an array every 10s, then on the 51st second uses your method in the above tutorial to write the data to the table. The issue I’m having is it’s only writing the first row of data every 51st second, not 6 rows as I would like.
I attempt to post the data using the following:-
However only appears in the table.
I have also tried to concatenate the above thu to but that didn’t work either.
I think the issue must be server side but I don’t know PHP or SQL all that well so I have no idea how to post all 6 rows.
Any help would be appreciated.
Thank you
Gary
Good morning, I have a little bug, I signed the bluehost website and put all the code above, but when I add information to the database it doesn’t update on the web page, only when I change the name of the file it updates, I found very strange, could you help me?
Hello,
What happens if you press in your web browser page the keys Ctrl+F5. It looks like your web page is currently cached and it saves the last state.
If refreshing the web page works with Ctrl+F5, then contact bluehost and ask them to disable cache in your server.
Thanks for your patience.
good morning gentlemen,
I have used this code successfully in a remote database.
But when I go to connect to a local database then it does not connect.
my string is: const char * serverName = “ //localhost/esp_conn/post-esp-data1.php”;
and : const char * serverName = “ //localhost/esp_conn/post-esp-data1.php”;
please tell me what I am doing wrong.
sinserely
Stratis Chatzirallis
Hello Stratis, you must use your local IP address, you can’t type localhost.
const char * serverName = “
I hope that helps.
Regards,
Rui
dear sir!
I can’t use syntax. they always give error “begin(WiFiClient, url)”, I don’t understand this syntax yet.
when i declare “WiFiClient client;” add using serverName); it compiles but doesn’t working.
I am still battling with my -1 error code and I have done it over and over. Please, can suggestion be given? Thanks.
Hi. I used with my local LAMP server, and it worked. Now after the syntax changed to -form, it does not work any more. I tried to move to this , but I don’t get it working either, but get error code: -1. Is this really correct and enough with a local server?
“application/x-www-form-urlencoded”);
Hi again. I can now answer my own question. That is working with my LAMP server running in Raspberry Pi using the POST method. I know also how to make GET method working. I just to had figure out how to make a string to const char * conversion to have everything concatenated to
My connection time to the router was very long. I succeeded to shorten it clearly by using the first advice here:
Thank you for that as well.
hi all!
I have some problems as follows:
– I can’t use the following syntax, “ they always give me an error “error: call to ‘HTTPClient::begin’ declared with attribute error: obsolete API, use ::begin(WiFiClient, url)”
i don’t understand what is “wificlient” parameter?
– when i use:
HTTPClient
WiFiClient client;
then the compiler can compile but they don’t working.
– Has anyone had success with the program above?.
Thank for all!
Hi.
The correct syntax is “
You’re not passing the “client” parameter to the begin() method.
Try the following:
“
Regards,
Sara
Hi, I have everything working great. One issue I have is if a value is 100 then it puts it as Min value instead of Max. Any ideas?
Hello. I really have enjoyed and been challenged by this project! I have a number of ESP32 devices running continuously to measure data every 6 hours. I have found some level of instability and that continuous operations are not possible. I introduced a WatchDog Timer which made a lot of sense to me. But this doesn’t seem to solve the problem. In fact the WDT is resetting in the loop as it should, but at some point the WiFi has disconnected so no readings are made. Perhaps a 6 hour delay is not a good idea for this device. Would it make sense to move the entire WiFi code into the loop and re-establish it only for a measurement, and then close it until the next loop. If the WiFi cannot be re-established, the device should reboot.
Hi Robert,
I have ESP32’s that runs for month, and I have ESP8266’s that have run for years, both controlling horticultural projects. My devices control irrigation and propagation systems. They use MQTT and WiFi to communicate with my Home Assistant based data recording and display system. I monitor numerous sensors and send data every 15 minutes.
When using MQTT the system checks the MQTT connection every few seconds and to do so it also checks the WiFi is connected. I have not experienced the problems you describe.
In my opinion if you want a fully effective watchdog you need an external device like a 7555 (555) monostable timer which runs continuously, if the ESP32 doesn’t reset it because it is hung in a loop, the 7555 timer times out and outputs a reset pulse to the ESP32 and causes a full reset. There are rare times, when using MQTT, that the ESP gets into an endless loop trying to reconnect to the mqtt broker but not triggering the internal WDT. The external WDT overcomes that issue.
I don’t know if your system is similar but I can assure you that provided the power supply is good ESP devices will run for very long periods of time without issue.
If possible, for the long delay (6 hrs) could you use NPT time with, perhaps, the EZtime library and trigger the sensor reading / send at predefined hours?
I don’t know how you do your timing but as an alternative to the above suggestion I would create a second counter with a built in timer, then a minute counter, then an hour counter, do my processing of sensor etc. then reset all counters to zero. You could check your WiFi connection every few seconds in the main loop and if not connected call a re-connect function. Hope this helps.
Bob. Thaks so much for this detailed response. I must admit that I am not familiar with much of what you have said, but I do like a learning challenge. My initial Google searches have shed some further light on MQTT, the 7555, Home Assistant, EZtime and NTP. I will start some detailed learning in order to appreciate how these elements can help with my project. This is valuable information and an interesting direction to pursue. Still, I do remain interested in how the code in this tutorial can be tweaked to address stability if in fact this is a sensible question. Perhaps there are other issues at play and your detailed response outlines the best technical path forward. The incredible path of learning just keeps interconnecting with new paths to explore – exciting. Thanks again.
Hello again,
Ideally all project should be modular so that you can get them working stage by stage. The project, books etc. that Ruis and Sara publish are very good and typically can be incorporated in or used as the basis of wider projects. The particular project module on which you posted is designed to store and retrieve data. so, an inital design consideration of your own project is do you need to store / retrieve / view your data just locally or from anywhere in the world? There are many options for either scenario.
In the instance you posted about I would say something is failing on the ESP32 / 8266 rather than the server or database. I would suggest you build a simple project just to emit some data every 6 hours to a serial terminal, initally you can try it at 5 minutes, then 15 minutes and gradually increase it to 6 hours. Coolterm is a very good terminal software (free) to monitor serial out from the ESP. I am happy to help if you have specific questions.
Hello. I like your suggestion and have commenced reading up on Coolterm and serial monitoring – again new to me. It may be helpful to ask a couple of questions – Is there private messaging on this site?
Hello Robert,
I don’t know about private messaging on this site Ruis or Sara would need to answer that question. If they are agreeable I am happy for them to share my email address with you.
Hi Bob.
I sent an email to Robert with your email address.
Regards,
Sara
Hello Sir! Good day does this work in soil moisture sensor too? because now im using a soil moisture sensor to send its data to the database. btw im using a xampp.
Hi.
Yes, it should work.
But you should be careful with the pins you use to connect the sensor.
ADC2 pins cannot be used when Wi-Fi is used. So, if you’re using Wi-Fi and you’re having trouble getting the value from an ADC2 GPIO, you may consider using an ADC1 GPIO instead. That should solve your problem.
See our ESP32 pinout guide to learn more:
Regards,
Sara
Hi, Sara and Rui. Very nice tutorial you worked on here.
I am using Xampp server and followed all instructions in this tutorial.
The http post is returning -1 and not posting anything in localhost. Ive checked my ip address but its still not working. I’d really appreciate your help
hi Sara and Rui, i also get the same problem like doyin.
Ola, gostaria de saber como fazer o caminho inverso, ou seja, ler os valores do banco de dados no esp32. Pois eu daria comandos para gravar no banco de dados da página html e no esp32 iria atualizando a leitura.
grato
Hi.
i have a problem. sata is using esp8266 and I have followed all the steps according to the instructions and the serial monitor is showing HTTP Response code: 200. but for some reason, in my database the data does not enter. Please help
Hi All,
Building a MySQL database for a ESP32 from a Bluehost domain.
Step 1.
No data posted with HTTP POST.
This is O.K.
Step 2.
This step failed
Results in the browser:
Connection failed: Access denied for user ‘tritonS4_esp_hvd’@’localhost’ (using password: YES)
Standard format did not show up in browser.
Any ideas. Is there a way to look at settings and username and other?
Phil
Hi All,
I just got past the errors in last comment. now, I am getting HTTP error 400 with htpp:// and 302 code any ideas.
My Host Provider is Blue Host. Domain name should allow access not unless there are check boxes somewhere to allow external logger to reach the //public.html file manager folder.
Hello,
I got the code working on my ESP32 and the serial monitor says that the values are succesfully uploaded (HTTP code 200). The log files on my server tell me the same, but for some reason the values are not added to the SQL database. I entered all the passwords correctly and I didn’t change the API key or anything else. The only thing that is changed to the ESP32’s code is that I’m using set values. Does someone here have any suggestions what to try next?
Thanks for this it was just what I was looking for. A simple guide to send data to a mysql database.
String = “api_key=” + apiKeyValue + “&sensor=” + sensorName + “&location=” + sensorLocation + “&value1=” + String(mpu.temperature());
I tried use the code above, but I change the BME with MPU6050, but the code cant compile. I get “expression cannot be used as a function” error. Can anyone help me
Many thanks for this project.
Quick question. I have an ESP32 which will always be connected to a Raspberry Pi through a serial cable. The Raspberry Pi will be powered by an Ethernet cable through POE which inturn powers the ESP32. If I want to disable the ESP32 wifi and only use the serial communication this part of the code above should be commented, right?
const char* ssid = “REPLACE_WITH_YOUR_SSID”;
const char* password = “REPLACE_WITH_YOUR_PASSWORD”;
Yes.
That part and all the other parts related to wi-fi connection.
Regards,
Sara
i have a funny problem .. my code is working but i can only write 2 lines into my database and then i get a “Error code: -1”
after a reset of the esp it writes 2 lines and then “Error code: -1”
9:37:25.198 -> Connected to WiFi network with IP Address: 192.168.36.127
09:37:25.198 -> api_key=tPmAT5Ab3j7F9&sensor=HC-SR04&location=Home&value1=tPmAT5Ab3j7F9&value2=HC-SR04&value3=Home
09:37:25.384 -> HTTP Response code: 200
09:37:35.360 -> api_key=tPmAT5Ab3j7F9&sensor=HC-SR04&location=Home&value1=tPmAT5Ab3j7F9&value2=HC-SR04&value3=Home
09:37:35.499 -> HTTP Response code: 200
09:37:45.471 -> api_key=tPmAT5Ab3j7F9&sensor=HC-SR04&location=Home&value1=tPmAT5Ab3j7F9&value2=HC-SR04&value3=Home
09:37:50.712 -> Error code: -1
09:38:00.697 -> api_key=tPmAT5Ab3j7F9&sensor=HC-SR04&location=Home&value1=tPmAT5Ab3j7F9&value2=HC-SR04&value3=Home
09:38:05.902 -> Error code: -1
so something is rotten in the kingdom of Denmark … but where?
What web hosting are you using?
Regards,
Sara
😮 this is embarrassing my problem is apparently in my own firewall, when i use local ip of my sql it works perfect…
i am using a synology as web host with a MariaDB5 SQL everything on a static IP
i have to look deeper into my router/firewall log.
thanks for quick response
Michael
Here is the code: note that it works fine but this is the one I wrote about above.
I posted a comment on the problematic HttpClient.
#include <HTTPClient.h>
#include <WebServer.h>
#include <esp_now.h>
#include <WiFi.h>
#include <Wire.h>
#include <esp_wifi.h>
//HTTPClient
#define ssid “My ssid” // WiFi SSID
#define password “My WiFi password” // WiFi password
//const char* serverName = “
String str_mac;
String mtrame = “”;
WebServer server(80);
int btn_avant = 16;
//—————–FIN WEB FOTSINY
// REPLACE WITH YOUR ESP RECEIVER’S MAC ADDRESS
//uint8_t broadcastAddress1[] = {0xF4, 0xCF, 0xA2, 0x5B, 0xAE, 0x44}; //Vide
//uint8_t broadcastAddress2[] = {0xEC, 0xFA, 0xBC, 0x63, 0x2A, 0xF1}; //EC:FA:26:63:C3:F1 = Camion
uint8_t broadcastAddress1[6];
typedef struct test_struct {
int x;
int y;
} test_struct;
test_struct test;
typedef struct test_struct2 {
int x2;
int y2;
} test_struct2;
test_struct2 test2;
*/
// callback when data is sent
/
void OnDataSent(const uint8_t mac_addr, esp_now_send_status_t status) {
char macStr[18];
Serial.print(“Packet to: “);
// Copies the sender mac address to a string
snprintf(macStr, sizeof(macStr), “%024:%00A:%0C4:%0FA:%01F:%080”,
mac_addr[0], mac_addr[1], mac_addr[2], mac_addr[3], mac_addr[4], mac_addr[5]);
snprintf(macStr, sizeof(macStr), “%02x:%02x:%02x:%02x:%02x:%02x”,
mac_addr[0], mac_addr[1], mac_addr[2], mac_addr[3], mac_addr[4], mac_addr[5]);
Serial.print(macStr);
}
Serial.print(” send status:\t”);
Serial.println(status == ESP_NOW_SEND_SUCCESS ? “Delivery Success” : “Delivery Fail”);
void setup() {
/*
*/
pinMode(btn_avant,INPUT);//Bouton Avant arriere
Serial.begin(115200);
WiFi.mode(WIFI_STA);
WiFi.printDiag(Serial); // Uncomment to verify channel number before
esp_wifi_set_promiscuous(true);
esp_wifi_set_channel(CHAN_AP, WIFI_SECOND_CHAN_NONE);
esp_wifi_set_promiscuous(false);
WiFi.printDiag(Serial); // Uncomment to verify channel change after
WiFi.begin(ssid, password);
}
Serial.println(“Connecting”);
while(WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(“.”);
Serial.println(“”);
Serial.print(“Connected to WiFi network with IP Address: “);
Serial.println(WiFi.localIP());
if(WiFi.status()== WL_CONNECTED){
Serial.print(“Channel oe:”);
Serial.println(WiFi.channel());
//WiFiClient client;
// Your Domain name with URL path or IP address with path
// serverName);
/////Ici le problème dès qu'on l'active la transmission ESP NOW //devient faild!
// Specify content-type header
"application/x-www-form-urlencoded");
// Prepare your HTTP POST request data
String = "man_num=10";
Serial.print(" ");
Serial.println(
// Send HTTP POST request
int =
if ( {
}
}
Serial.print("HTTP Response code: ");
Serial.println(
str_mac =
Serial.print("str_mac: ");
Serial.println(str_mac);
else {
Serial.print("Error code: ");
Serial.println(
// Free resources
else {
Serial.println(“WiFi Disconnected”);
//Send an HTTP POST request every 30 seconds
server.on ( “/”, handleRoot );
server.begin();
delay(1000);
/*
*/
uint8_t primaryChan = WiFi.channel();
wifi_second_chan_t secondChan = WIFI_SECOND_CHAN_NONE;
esp_wifi_set_channel(primaryChan, secondChan);
ESP_ERROR_CHECK(esp_wifi_set_promiscuous(true));
/;
// register second peer
memcpy(peerInfo.peer_addr, broadcastAddress2, 6);
if (esp_now_add_peer(&peerInfo) != ESP_OK){
Serial.println(“Failed to add peer”);
return;
/// register third peer
}
*/
memcpy(peerInfo.peer_addr, broadcastAddress3, 6);
if (esp_now_add_peer(&peerInfo) != ESP_OK){
Serial.println(“Failed to add peer”);
return;
void mseries(){
//——————————-WIFI SERIE—————————————-
/;
void loop() {
server.handleClient();
test.x = random(0,20);
test.y = random(0,20);
//esp_err_t result = esp_now_send(0, (uint8_t *) &test, sizeof(test_struct));
esp_err_t result = esp_now_send(broadcastAddress1, (uint8_t *) &test, sizeof(test_struct));
test2.x2 = random(100,200);
test2.y2 = random(100,200);
esp_err_t result2 = esp_now_send(broadcastAddress2, (uint8_t *) &test2, sizeof(test_struct2));
*/
if (result == ESP_OK) {
}
}
Serial.println(“Sent with success”);
else {
Serial.println(“Error sending the data”);
delay(1000);
}
void handleRoot(){
/*
*/
if ( server.hasArg(“DEM”) ) {
//mtrame = concat(String((int)vA0),”;”,String((int)vA1),”;”,String((int)mArr));
if (server.arg(“DEM”).toInt()>=10) {
svMarche = String((int)vMarche);
svA0 = String((int)vA0);
svA1 =String((int)vA1);
svArr = String((int)mArr);
svjoy1X =String((int)joy1X);
mtrame = svA0 + “;” + svA1 + “;” + svArr + “;” + vMarche + “;” + svjoy1X;
str_mac = “F4:CF:A2:5B:AE:44”;
Serial.println(“Départ:”);
Serial.println(str_mac);
mtrame = "MAHERY";
//ESP.restart();
setup();
delay(5000);
/*
*/
}
Serial.println("Arrivé:");
Serial.println(str_mac);
Serial.print("IA0: ");
Serial.print(svA0);
Serial.print(" IA1: ");
Serial.print(svA1);
Serial.print(" Arr: ");
Serial.print(svArr);
Serial.print(" Trame: ");
Serial.println(mtrame);
server.send ( 200, "text/plain", mtrame );
Hola Rui
He iniciado a realizar tu proyecto, me dado de alta en bluehost he seguido las pasos wizard para crear un BBDD. El dominio ya lo tenia, es externo a bluehost (cascosta.gal)
$dbname = “xxxxxxxx_LosaTermica” (nombre mi BBDD)
$username = “xxxxxxxx_ESP32_board”;
$password = “xxxxxxxxxxxxx”;
Introduzco en el naveador la siguiente URL:
Y no funciona nada, donde esta el error
GRACIAS
Jesús
i have his error
[D][HTTPClient.cpp:293] beginInternal(): protocol: host: ipmiez.com port: 80 url: /post-esp-data.php
[D][HTTPClient.cpp:579] sendRequest(): request type: ‘POST’ redirCount: 0
[D][HTTPClient.cpp:1125] connect(): connected to ipmiez.com:80
[D][HTTPClient.cpp:1257] handleHeaderResponse(): code: 302
[D][HTTPClient.cpp:1260] handleHeaderResponse(): size: 223
[D][HTTPClient.cpp:603] sendRequest(): sendRequest code=302
[D][HTTPClient.cpp:378] disconnect(): still data in buffer (223), clean up.
[D][HTTPClient.cpp:385] disconnect(): tcp keep open for reuse
is working
thank you¨
two weeks, mainly tuning PHP and sql by my provider it works
Hi, Sara. I’m working on a project in reference to this project of yours. Like a few others in the comment section, I have the error code : -1 for my HTTPresponse. I’m using XAMPP server for my project and I have tried changing the url between “ and “ I have also tried switching between using “localhost” and “127.0.0.1” in my url. Below is the error displayed in the serial monitor.
api_key=tPmAT5Ab3j7F9&parking_id=B05&distance=5.70
Error code: -1
Hope to get a quick feedback from you.
Hi Sara, – as always your tutorials are brilliant – many thanks.
My problem is that my server time is 8 hours behind my ‘Europe/London’ time.
Is it possible to add some code into the ‘esp-data.php’ file to set and format the correct zone and DST and insert that along with the ESP readings.
Have tried but there are numerous php date/time functions and my knowledge is very limited.
I’m currently using the arduino ‘time.h’ library and NTP but this takes about 30 seconds on startup and I wish to run the esp8266 in deepsleep mode on a solar/battery project where ‘up time’ is kept to minimum.
Alternatively, I can create another ‘.php file to echo back the UTC time which is immediate. Can this value be used with the time library without the NTP call.
Any other options would be appreciated.
Kind Regards
Hi.
There are some lines on the esp-data.php file that show how you can adjust the time. You need to uncomment one of those lines.
// Uncomment to set timezone to + 4 hours (you can change 4 to any number)
//$row_reading_time = date(“Y-m-d H:i:s”, strtotime(“$row_reading_time + 4 hours”));
Regards,
Ara
Hello. Can you help me? Data appeared in the database but it can’t view on the web.
Any chance someone has added a tipping bucket rain gauge to this script? I’d like to be able to count the bucket tips in the same time delay as the temp, pressure, humidity readings.
It seems like a straight forward addition but I can’t make heads or tails of how to do it. I have a working version on raspberry pi but want to port to ESP32. Below is code from the raspberrypi weather station project from their site. It logs to a mariadb successfully.
#!/usr/bin/python3
import RPi.GPIO as GPIO
import bme280
import smbus2
import time
import database
import datetime
this many inches per bucket tip
CALIBRATION = 0.01193
which GPIO pin the gauge is connected to
PIN = 27
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP))
#for the bme280:
port = 1
address = 0x77
bus = smbus2.SMBus(port)
bme280.load_calibration_params(bus,address)
db = database.weather_database()
#display and log results
while True:
bme280_data = bme280.sample(bus,address)
humidity = bme280_data.humidity
pressure = (bme280_data.pressure *0.02953)
ambient_temperature = (bme280_data.temperature * 1.8) +32
x =datetime.datetime.now()
print(humidity, pressure, ambient_temperature, rain, x)
file.write(line + “\n”)
file.flush()
db.insert(ambient_temperature, 0, 0, pressure, humidity, 0, 0, 0, rain, x)
rain = 0
time.sleep(600)
GPIO.cleanup()
How do you get a signal from tipping bucket? Is it a reed switch?
Suggest you don’t use GPIO27. That is ADC2 which may clash with wifi.
Use pins 16 to 23
Yes it’s a reed switch and I can swap it to another port. Just wondering how I can do both reed switch AND BME280 on ESP32 similar to the tutorial on this page…
Before you change the Arduino IDE you will need to do the following:
* add a column ‘value4’ in the mysql table to store the depth of rain – use same settings as value3
* include value4 in ‘post-esp-data.php’ file (in 3 places)
* include value4 in ‘esp-data.php’ file (in 3 places)
ESP32 changes are straightforward. In order that you don’t miss any bucket tips I’ve used an ‘interrupt’
//insert before void setup()
int rainPin=23; //GPIO pin for tipping bucket reed switch – you can change it
float bucketTip=0; //tip counter
float depthPerTip=???; //put your factor (eg each tip of mine is 0.19 mm)
float rainTotal=0; //calculated volume of rain in sampling period
void IRAM_ATTR detectRain() { //interrupt function
}
bucketTip++;
//insert in void setup() section
pinMode(rainPin, INPUT);
attachInterrupt(digitalPinToInterrupt(rainPin), detectRain, FALLING ) ;
//insert at start of void loop() section
rainTotal=bucketTip*depthPerTip; // calculate depth of rain
bucketTip=0; // reset counter
at the end of the line starting ‘String add &value4 details – follow &value3 syntax but with String(rainTotal) – ensure the double quotes are at the end
Thats it ….I think
Bob
Thank you Bob! Now I think it just needs a debounce factor somewhere to stop the false counts.
You can try adding a short delay as shown below though it’s considered to be bad practice.
}
}
change:
void IRAM_ATTR detectRain() { //interrupt function
bucketTip++;
to:
void IRAM_ATTR detectRain() { //interrupt function
bucketTip++;
delay(50);
Better still – do as I did and use a SS443A digital ‘hall effect’ device which doesn’t seem to have any bounce. Really simple – 3 pins: +5v, GND, and data out to your GPIO – no library or change in coding and cheap. Just a 10k resistor between 5v and data pins
Thanks again for your help Bob. I appreciate it
Well, shoot. I only thought I was done.
I had this counting button presses but changing to a reed switch it doesn’t detect the magnet passing over.
I’ve added an LED on one side of the reed so I can see it light when I wave the magnet over but the code doesn’t count it.
I’ve tried wiring it a few ways, power – reed – pin23, ground – reed – pin23 and now:
3.3v & pin23 on one side – reed – LED & 1k resistor to the other end.
Thinking it is probably the wiring? I’ve also tried the 5v power
Here’s the = “SSID”;
const char* password = = “APIKe
//insert before void setup()
int rainPin=23; //GPIO pin for tipping bucket reed switch – you can change it
float bucketTip=0; //tip counter
float depthPerTip = 1; //put your factor (eg each tip of mine is 0.19 mm)
float rainTotal=0; //calculated volume of rain in sampling period
void IRAM_ATTR detectRain() { //interrupt function
}
bucketTip++;
//delay(15);
void setup() {
Serial.begin(9600);
WiFi.begin(ssid, password);
}
Serial.println(“Connecting”);
while(WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(“.”);
Serial.println(“”);
Serial.print(“Connected to WiFi network with IP Address: “);
Serial.println(WiFi.localIP());
//insert in void setup() section
pinMode(rainPin, INPUT);
attachInterrupt(digitalPinToInterrupt(rainPin), detectRain, FALLING);
// (you can also pass in a Wire library object like &Wire2)
}
}
bool status = bme.begin(0x77);
if (!status) {
Serial.println(“Could not find a valid BME280 sensor, check wiring or change I2C address!”);
while (1);
void loop() {
//insert at start of void loop() section
rainTotal=bucketTip*depthPerTip; // calculate depth of rain
bucketTip=0; // reset counter
/() *1.8 +32)
+ "&value2=" + String(bme.readHumidity()) + "&value3=" + String(bme.readPressure()/100.0 *0.02953)
+ "&value4=" + String(rainTotal) + ""; 10 minutes
delay(5000);
Don’t have reed switch so can’t test.
Try changing ‘FALLING’ in the interrupt statement to one of the following and see if any works. Also try pulling the pin low with 10k resistor – worth a try. Would recommend the Hall effect device. Mine has worked fine for months.
LOW Triggers interrupt whenever the pin is LOW
HIGH Triggers interrupt whenever the pin is HIGH
CHANGE Triggers interrupt whenever the pin changes value, from HIGH to LOW or LOW to HIGH
FALLING Triggers interrupt when the pin goes from HIGH to LOW
RISING Triggers interrupt when the pin goes from LOW to HIGH | https://randomnerdtutorials.com/esp32-esp8266-mysql-database-php/?replytocom=380089 | CC-MAIN-2022-21 | refinedweb | 14,187 | 65.73 |
*
Help with priority queues in simulations. H/W
Andrey Petrov
Greenhorn
Joined: Sep 25, 2010
Posts: 3
posted
Dec 16, 2010 19:00:24
0
I am stuck. Help. It's late, I'm tired. My mind refuses to think.
One thing: Doesn't increment globalTime;
Two thing: Heap misbehaves.
Three thing: Wrong output.
The purpose of this programming project is to learn to use priority queues (a.k.a. heaps).
Overview
One common use of priority queues is in simulation. A software simulator attempts to mimic
some real-life problem based on a set of parameters and assumptions. You are to build a
simple simulator to demonstrate a how a computer’s job scheduling algorithm works. Your
simulator will use two priority queues to accomplish that, one that prioritizes events by
time, and one that prioritizes jobs based on their specified priorities.
Your simulator will read a text file containing job information, such as the job’s submission
time, execution time, and priority. Your simulator will write a text file of information about
each job’s execution, including its start time and end time.
For this project, time will be represented by integers, starting at time 0 and ending when
the last job finishes execution.
Input
Your input file will be a text file named “Jobs.txt” that will contain information about the
sequence of jobs whose execution you will simulate. Each line of the input file will
represent one job. The information about each job will consist of its job ID (a character
string
without spaces), its arrival time (an integer greater than or equal to zero), its priority
(an integer between zero and 255 where a larger number indicates higher priority), and its
run time (an integer greater than zero). For example, the following is a possible input file:
Sample Jobs.txt Input File
JobID Arrival Time Priority Run Time
Job1 0 5 12
Job2 0 10 10
Job3 8 15 6
Job4 9 0 25
Job5 15 99 9
I suggest creating a Job class that to contain the information about each job. The class will
need to implement the Comparable interface, which means it will need to have a method
named compareTo() that will compare two jobs by priority.
Output
Your output will be a text file named “Schedule.txt” that will describe how the jobs got
scheduled and run according to time and priority. Each line of the output file will contain
information about one job, and the file will be ordered in the jobs’ run sequence. The
information to be output for each job will be its job ID, its start time, its end time, its
arrival time, its priority, and its run time. The following table shows what the output for
the above input would be:
Sample Schedule.txt Output File
JobID Start Time End Time Arrival Time Priority Run Time
Job2 0 10 0 10 10
Job3 10 16 8 15 6
Job5 16 25 15 99 9
Job1 25 37 0 5 12
Job4 37 62 9 0 25
Deliverables
A ZIP file containing your project.
Processing
For this project use the class MyPriorityQueue from Chapter 25 for your priority queue
objects. You will need two priority queues.
The first will contain scheduler events. It will be prioritized by event time, with an earlier
time being a higher priority. An event may be either a job arrival or a job completion.
I suggest creating a JobEvent class to represent events. The class should contain an event
type, an event time, and a Job object. This class will need to implement the Comparable
interface, including a compareTo() method that will compare two events and prioritize them
by time, with earlier times having higher priority.
You second priority queue will contain all the jobs that have arrived and are waiting to run.
It will be prioritized by job priority.
Your program will need to read the input file, build a Job object for each job, then an
arrival JobEvent for the Job, and put the arrival into the first priority queue.
Next, your program will need to process each event in the first priority queue. If the highes-
priority event is a job arrival, the job will need to be extracted from the JobEvent and
added to your second priority queue.
If the highest-priority event is a job completion, its information will be written to the outut
file. Then the next job will need to be extracted from the second priority queue, and a job
completion JobEvent constructed for its completion time, and that JobEvent added to the
first priority queue.
Processing will continue as long as there are JobEvents in the first priority queue.
import java.io.*; import java.util.Scanner; public class Project4 { public static void main(String args[]) throws Exception{ FileWriter out = new FileWriter("Schedule.txt"); out.write("\t"+"Sample Schedule.txt Output File" + System.getProperty("line.separator")); out.flush(); out.write("JobID"+" "+"Start"+" "+"EndTime"+" "+"Arrival"+" " +"Priority"+" "+"RunTime"+System.getProperty("line.separator")); out.flush(); MyPriorityQueue EventHeap = new MyPriorityQueue(); MyPriorityQueue JobHeap = new MyPriorityQueue(); int globalTime = 0; Scanner in = new Scanner(new File("Jobs.txt")); boolean completed = false; while(in.hasNext()) { String jobID = in.next(); int arrivalTime = in.nextInt(); int priority = in.nextInt(); int runTime = in.nextInt(); Job job = new Job(jobID, arrivalTime, priority, runTime); completed = false; JobEvent jobEvent = new JobEvent(completed, globalTime, job); EventHeap.enqueue(jobEvent); } boolean busy = false; while (EventHeap.getSize() > 0) { JobEvent event = (JobEvent) EventHeap.dequeue(); globalTime = event.getTime(); if (event.getCompleted() == false) { JobHeap.enqueue(event.getJob()); if (busy == false) { Job job = (Job) JobHeap.dequeue(); busy = true; globalTime = globalTime + job.getRunTime(); completed = true; JobEvent jobEvent = new JobEvent(completed, globalTime, job); } } else { int totalTime = globalTime + event.getJob().getRunTime(); out.write(event.getJob().getJobID()+"\t"+globalTime+"\t"+totalTime+"\t" +event.getJob().getArrivalTime()+"\t"+event.getJob().getPriority()+"\t" +event.getJob().getRunTime()+System.getProperty("line.separator")); out.flush(); busy = false; if (JobHeap.getSize() > 0) { Job job = (Job) JobHeap.dequeue(); busy = true; globalTime = globalTime + job.getRunTime(); completed = true; JobEvent jobEvent = new JobEvent(completed, globalTime, job); } } } } static class Job implements Comparable { private String jobID; private int arrivalTime; private int priority; private int runTime; public Job(String jobID, int arrivalTime, int priority, int runTime) { this.jobID = jobID; this.arrivalTime = arrivalTime; this.priority = priority; this.runTime = runTime; } public String getJobID() { return jobID; } public int getArrivalTime() { return arrivalTime; } public int getPriority() { return priority; } public int getRunTime() { return runTime; } public String toString() { return jobID + "\t" + arrivalTime + "\t" + priority + "\t" + runTime; } public int compareTo(Object o) { return this.priority - ((Job)o).priority; } } static class JobEvent implements Comparable { boolean completed = false; int time; Job job; public JobEvent(boolean completed, int time, Job job) { this.completed = true; this.time = time; this.job = job; } public int getTime() { return time; } public Job getJob() { return this.job; } public boolean getCompleted() { return this.completed; } public int compareTo(Object o) { return this.time - ((JobEvent)o).time; } } }
My output is the following:
Sample Schedule.txt Output File
JobID Start EndTime Arrival Priority RunTime
Job1 0 12 0 5 12
Job5 0 9 15 99 9
Job4 0 25 9 0 25
Job3 0 6 8 15 6
Job2 0 10 0 10 10
fred rosenberger
lowercase baba
Bartender
Joined: Oct 02, 2003
Posts: 11471
16
I like...
posted
Dec 17, 2010 06:18:11
0
"heap misbehaves" and "wrong output" doesn't really tell a reader ANYTHING about what the problem is. The best way to get help is to ask specific, focused questions, giving as much information as possible. HOW does the heap misbehave? What does it do that you don't expect? For that matter, what DO you expect it to do?
A general tip is to not write so much code before trying to fix it. You should only write 2-3 (or even one single) lines of code before you compile,
test
, and debug it. Make sure each new piece you write is ROCK SOLID before you try doing anything else.
If we were going to focus on globalTime, I'd put in some println statements around lines 39 and 54. Make sure you are entering the if-blocks, check and see what job.getRunTime() returns (if it's zero, then that's why globalTime isn't changing).
There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
Campbell Ritchie
Sheriff
Joined: Oct 13, 2005
Posts: 39755
28
posted
Dec 17, 2010 06:37:08
0
Line 34: Never, never use == false or == true. It should read
if (!event.getCompleted()) . . .
And the name of the method is poor; it would be better as "hasCompletd" or "isCompleted".
Steve Luke
Bartender
Joined: Jan 28, 2003
Posts: 4181
21
I like...
posted
Dec 17, 2010 06:57:55
0
A few more tid-bits for you:
1) On time tracking with globalTime. When you first create the JobEvents, you do this:
int globalTime = 0; //... while(in.hasNext()) { //... JobEvent jobEvent = new JobEvent(completed, globalTime, job); EventHeap.enqueue(jobEvent); }
So every one of your JobEvents you make when the file gets read are created with a time of zero. I think the intention is to have the 'arrival' event's time be the Job's arrival time (while the 'completed' event's time would be the time the event had finished).
2) On the 'arrival' versus 'completed' JobEvents: when you create a JobEvent you do this:
static class JobEvent implements Comparable { boolean completed = false; //... public JobEvent(boolean completed, int time, Job job) { this.completed = true; //... }
Since you hard-code completed to be true every JobeEvent is going to be treated as a completed JobEvent, regardless of what you use as a parameter when you create the JobEvent. This means the code you expect to execute for 'arrival' events will never happen, no Jobs will be added to the JobHeap, and the code you expect to happen when a Job is pulled out of the JobHeap will not be executed either (I don't think).
Steve
I agree. Here's the link:
subject: Help with priority queues in simulations. H/W
Similar Threads
variable scope question/confusion
Timer
Discete Event Simulation Question (Code) ?
Priority Queue and iterator
Threads and Synchronization examples
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/520728/java/java/priority-queues-simulations | CC-MAIN-2014-49 | refinedweb | 1,708 | 56.35 |
Opened 5 years ago
Last modified 4 years ago
#8438 new Bugs
vector & circular_buffer storage misbehave when using compiler optimizations
Description (last modified by )
When compiling the following code without optimizations, it behaves as expected: (compiled with g++-4.7.2 with no flags at all)
#include <boost/numeric/ublas/vector.hpp> #include <boost/numeric/ublas/io.hpp> #include <boost/circular_buffer.hpp> int main () { boost::numeric::ublas::vector<double, boost::circular_buffer<double> > v (3, 1); std::cout << v << std::endl; v[1] = 5; std::cout << v << std::endl; std::cout << v[1] << std::endl; return 0; }
Output:
[3](1,1,1) [3](1,5,1) 5
When compiling the exact same code with O1, O2, and O3 it produces the following output:
[3](0,0,0) [3](0,0,0) 5
I noticed that inner_prod() also sees the vector as zeros.
Attachments (0)
Change History (4)
comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
comment:3 Changed 4 years ago by
comment:4 Changed 4 years ago by
Could some experts of UBlas take a look at this ticket? Please, reassign to circular buffer if this has nothing to be with UBlas.
Note: See TracTickets for help on using tickets.
I was able to replicate this with clang, but I got a warning which probably has some bearing here: | https://svn.boost.org/trac10/ticket/8438 | CC-MAIN-2017-43 | refinedweb | 225 | 50.97 |
Best current methods for round trip files to pythonista from a macbook
i purchased pythonista more than a year ago and i never got around to using it much due to schedule. i'm revisiting it right now and i'm wondering what the best current methods are for getting files back and forth (though actually mostly to) to pythonista from a macbook. basically what i'd consider round tripping the files in an easy / intuitive manner?
MY OWN CODE
i only have an iphone7 so my screen real estate is really small. that means for pure python code that doesn't require any custom pythonista ui / ios modules, i can actually develop and debug most of my code a lot easier on the macbook with a better more powerful environment. but i need the most easy / intuitive way to get the files back into pythonista when it comes time to put them in the pythonista environment where i can use an external keyboard.
i'm quite ok simply transferring my own python files locally from my macbook to pythonista but i didn't see pythonista listed in the new files app in ios 11. i should double check and see if pythonista allows direct app transfers in itunes though as a possible solution (though i confess i personally think itunes is horribly designed and makes this a last resort when combined with ios's sandboxing)???
also maybe it's better for me to push my local code to git and pull into pythonista that way (see next)?. or simply sync to dropbox and pull from dropbox? (fyi cloud is not an option for me)
OTHER'S CODE (GIT etc..??)
more importantly, much of the code recommended in the forums exists in git, so i also need an intuitive method to pull from git directly to pythonista as well. oddly enough part of simplifying the equation here is finding and then pasting an appropriate git url into whatever tool is going to be used to clone the code into pythonista. i often search the forums on my macbook because it's easier to type and search etc on the macbook but i then need to get the appropriate link back to pythonista and whatever utility is going to be used to download the code locally on the iphone7. i think maybe a lot of the folks in the forum might be using pythonista in an ipad environment which gives them more options to do this locally more efficiently on the device?
i've cruised the forums and there are a variety of methods going back years and it's not exactly clear to me what's the best way to do this. are the appropriate scripts already in pythonista's installed code base or do i have to download something from somewhere else first and if so what are currently the most intuitive methods?
basically i'm excited to use pythonista but once i fire it up and want to leverage existing code that originates elsewhere, i can't figure out how to easily download it locally :) it's the biggest hurdle to using the app.
any current recommendations are welcome. thanks
For transferring your own code, I would strongly recommend using iCloud, which Pythonista supports since version 3.2. The "iCloud" folder that you see in Pythonista appears as "Pythonista 3" in iCloud Drive on your Mac (assuming both devices use the same iCloud account of course). Since iCloud Drive is Apple's native way of syncing files, it works relatively quickly and with almost no issues - if you create/modify/remove a file in "iCloud" under Pythonista, the change will appear in iCloud Drive on your Mac, and vice versa. I use iCloud with Pythonista to sync some code between my iPhone, iPad and Mac, and haven't had any issues so far.
If you want to use Git, I would recommend installing Stash, which has a
gitcommand implemented in Python that supports the most important Git features. There's also an app called Working Copy, which can work with Git repositories, and can be used together with Pythonista. I have never tried this myself though, so I don't know how (or how well) this works.
thanks for the reply @dgelessus
it sounds like icloud is an easy way to go. i'm just leery to use apple's icloud service though. partly for privacy reasons and partly because i don't like keeping my cloud service client apps on unless i'm actually using them and in the past when i would re-enable icloud it always seemed to want to turn on more globalized synching (synching things i configured it not to) then what i originally configure it for. maybe it's different now. when you sync with icloud can you lock it down to just one synch folder?
i'm playing around with dropbox right now and it looks like i can run python scripts directly from my dropbox synced folder (vs having to copy them down locally first) so i already may have my own cloud solution my own files. i only tested it with a simply print script but it worked. however, this contradicts much of what i think i've been reading in the forums / online about needing extra scripts to download / sync dropbox python scripts locally. perhaps that's because people develop locally in pythonista and need to move scripts back and forth between dropbox and local files in pythonista (ie that support is not native meaning you either run and edit the files in dropbox OR locally and that there is no native support to move the files back and forth between the two)?
so stash isn't stock provided in pythonista and i have to download it separately. i couldn't tell because i think i just ran one of your own file browser scripts in editorial and i think there was an option to use stash so i clicked it and it seemed to want to run (meaning it is provided in editorial but not pythonista)?
thanks for the input. i do have a followup regarding your file browser script for editorial since it's somewhat related here. it looks like i downloaded an older version from the workflows directory of your script (tutorial doctor posted the workflow but it's your script :)).
it runs in the sense it provides a nice file picker. the problem is it doesn't show meaningful folders. it doesn't list my mounted dropbox folder (though it shows the dropbox trash folder?). it also doesn't show my local documents folder in editorial where all my meaningful local documents are. would you know why it isn't working or if your new version in git (which isn't included in the current workflow online) takes care of the problems? my guess is it has to do with the root paths. what is the correct root path of the mounted dropbox directory in current versions of editorial?
here are the links i used for your workflow and code
here is my other post today that relates to finding a dropbox file browser that might allow me to see the full names of the long file names vs the stock editorial file browser.
thanks
I can't say much about how secure and/or private iCloud Drive is - Apple claims that all iCloud user data on their servers is encrypted and that they can't read it, but of course there's no way to know for certain. I also don't know how it compares to Dropbox. In the end, the only way to be really sure that your data is safe is to use an extra layer of encryption on top of the cloud/sync service, but that's not really possible on iOS.
By default, if you enable iCloud Drive on your device, it will sync files for all apps that support it. You can selectively disable iCloud Drive for certain apps, but you cannot selectively enable it (unless you turn it off manually for each new app). What exactly gets synced depends on the app - for example Apple's "office" apps (Pages, Numbers, Keynote) store into iCloud with no other option. (At least that's how it works on iOS 10 and before, things might have changed since iOS 11.) Other apps (such as Pythonista) let you choose yourself which files to put into iCloud and which to keep local.
I don't use Dropbox with Pythonista myself, so what I'm saying here might be wrong, you should try it out yourself to be sure. But as I understand it, when you open code from external apps (such as Dropbox) in Pythonista, the code has to be in a single file and cannot read any other files from Dropbox. This is because of how iOS handles file permissions - when you open a script from Dropbox in Pythonista, iOS allows Pythonista to access exactly that one script, and nothing else. You don't have this problem when using Pythonista's iCloud Drive integration, because there Pythonista has its own folder separated from all other apps, so iOS gives Pythonista full access to that folder. (However, iCloud Drive files outside of Pythonista's folder have the same access restrictions as Dropbox files for example.)
I can tell you for sure that Stash (or anything like it) was never included in Pythonista or Editorial. :) The old filenav version that you linked to apparently supports integration with Shellista (the predecessor of Stash), I totally forgot that I implemented that :) However the integration wouldn't work without having Shellista already installed before, it's definitely not an automatic downloader.
In any case, that version of filenav is really old, and Shellista is obsolete now. Stash is quite easy to install though, there are installation instructions on the repo page, you only need to copy and paste a single line into the Python console.
Editorial has no iCloud Drive support, but does have built-in Dropbox sync, so you should have no issues there. I don't know how well filenav works in Editorial, I never tested it there. Also the current filenav version is split into multiple files, which is always a bit difficult to use in Editorial, because then you can't simply paste it into a workflow. However the current filenav version lets you add custom "root folders"/favorites, so if you can get it to run in Editorial and find out where Editorial stores the Dropbox files, you can add that location into filenav.
@dgelessus much appreciate that detailed followup.
so the downer with icloud for me is that it's opt-out and not opt-in which i just think is lame for a trillion dollar company who's products are already over-priced as it is. ;) based on my past experience, i think what i saw is that whenever i upgraded / dated ios, i would have to go back in and opt-out of all these different things that i didn't want icloud resetting to go to the cloud. it's just a personal pet peeve and i don't like rewarding bad behavior. i'm generally just not a fan of sticking my whole life up in the cloud which is apple's default approach. dropbox while still cloud allows me to set one folder only and not worry about "side-effects" since they don't "own" the operating system providing the service.
regarding execution of files on DB. i think you're probably right with regards to reading in other files from the DB folder. i only tested a single file python program. on the plus side, the DB python program WAS able to find the libraries it needed in the local pythonista packages directories so that's good. anyway, i guess i'm going to need to sort through and download one of the DB file pickers to pull in multi-file projects locally.
regarding my seeing stash running from your filenav .. yeah it must have been shellista i saw back in editorial ;).. i clicked on an interface widget for shellista and it tried to do something and failed (probably b/c it wasn't installed ;)). i've already installed stash in pythonista (not sure how to do that in editorial or if i even need it there anyway). i think i discovered it needs to run in python 2.x to work though??
regarding filenav old single file version on editorial. it worked in the sense that it loaded some folders properly and allowed me to navigate those folders. the problem is that they're none of the folders that i need including DB and local markdown files ;)... i guess you're not sure what those paths should be set to in editorial? since it's already single file and currently working, i imagine if i edit your code to look in "the right" directory(s) it should "just work" right?
i'm not sure how or if it's even possible to make multiple file programs run in editorial either.. so outside of refactoring your recent filenav multi-file into one large program, fixing up the paths in the older one or trying to roll a new one, i'm probably leaning towards getting the old single file working with new paths.
outside of being able to set multiple paths in the newer multi-file filenav, is there anything (radically) different about it vs the original one? my motivation believe it or not for a replacement file browser in editorial vs the stock one (as expressed on the other thread) is i can't see the long filenames in the stock file browser. your filenav seems like it might accommodate that based on what i've seen so far.
finally, there is a file in your more recent filenav repo called filenav_v1.py. i searched the repo on github and it doesn't appear that this file is imported anywhere (an artifact??) and that main() simply sets up things based on two files, slim and full, based on the device size. sound right?
thx
OK, so it seems that the current version of filenav runs fine in Editorial, it seems I tried it previously, because I already had all the files in Editorial :)
The basic installation steps are like this:
- Download the filenav source code as a zip from GitHub and import it into Editorial
- Make a folder
site-packages(in the "Local Files" folder in Editorial), and in that a folder
filenav
- Use the
zipfilemodule to extract all files from the zip into that folder (
with zipfile.ZipFile("filenav.zip") as zf: zf.extractall("site-packages/filenav"))
And to launch filenav, run the following code (in the console or in a workflow):
import os import filenav.__main__ os.chdir(os.path.expanduser("~/Documents/site-packages/filenav") filenav.__main__.main([])
The main difference between filenav v1 and v2 is that v2 has a cleaner code structure, makes better use of the large screen on iPad (you get multiple folder columns, like in the Finder on macOS), and has the "favorites" feature. Also it's newer and has fewer bugs than v1 :)
filenav_v1.pyin the filenav repo is just the old version of filenav, I don't think there's anything special about it.
I don't know if filenav will help much in your situation, like the standard file browser it cuts off filenames longer than a certain length. Maybe there's a way to change it to "cut out the middle" mode ("abc...xyz" rather than "abcdefg...") but I don't know if that's easily possible with the
uimodule. Also it seems that in Editorial you cannot present views in "panel" mode (i. e. as a tab next to the console and documentation), at least on iPhone, so you can't quickly switch back and forth between filenav and the editor.
awesome. i'll give this a look tomorrow. thx!
note that even if it cuts the names off, based on when i ran your old filenav script in editorial earlier, the font is much smaller (and my guess is i can configure that), so that i see enough of the filename to give me the context i need vs the stock editorial file browser..
it sounds like you're focus is more on pythonista but do you know if the custom ui modules works the same in both editorial and pythonista? i get the feeling they may be identical which means pure ui interface code should work well in editorial if borrowed from pythonista and vice versa. your filenav would seem to bear that out.
finally, i found an editorial workflow in the online directory that almost gives me everything i want (though much less functionality than your filenav as it only allows listing one directory with no directory walk down to child dirs). it uses a ui tableview to select the filename from dropbox (this works!) but it doesn't actually set the row text as workflow output variable which means workflow input is empty in the next step which tries to open a blank filename? i don't see anything in the documentation that explains if / how tableview selections are supposed to trigger setting a default workflow output var for the next step. but the ole the guy who wrote this apparently thought it would work as is. i posted specific details in the editorial forum (summarized pretty well here though)
do you know the best way to get the selected row from a tableview out and to the next workflow step in editorial or why it's not being set by default?
thx as usual!
one last question for tomorrow.. you said:
- Download the filenav source code as a zip from GitHub and import it into Editorial
i'm still in the process of figuring out how to download things to pythonista via a variety of methods. but i've never downloaded anything into editorial yet. so is there an existing workflow in editorial to import and or download the zipfile directly or via pythonista. or some quick code i can cut and paste in a workflow to get the code base local to editorial. sorry for not knowing how to do that.
thx
Right, normally I work in Pythonista, so I don't know that much about Editorial workflows and such. As far as I know, Editorial and Pythonista share the same code for their Python runtime (although Editorial's is Python 2 only and older than Pythonista's), so the
uimodule should be compatible between Editorial and Pythonista aside from minor bugs.
I forgot that Editorial doesn't let you import files directly... you can use this code for example to download the zip file (paste the code into the Python console, then run
download_file("<zip url copied from GitHub>")).
i've had nothing but trouble downloading stuff.. the script i copied in to the console doesn't work in editorial for some reason.. no errors when i paste and enter the script. but the console can't even find the download_file function afterwards. it's showing as download_file not defined. i've spent so much time just trying to get basic things working in these apps. it's frustrating. i confess i don't usually use the console. i'm used to writing python scripts in files and running the files so this is a bit new to me. i wish i had an ipad. trying to do all of this on a small iphone is tiring. any thoughts on what i might be doing wrong? i've copied this script verbatim..
actually i just got this download script to work in pythonista console and successfully downloaded your zipfile. however it just doesn't work in the editorial console even with fresh pastes. it keeps telling me download_file is "name not defined". perhaps python version issues between the two apps? | https://forum.omz-software.com/topic/5042/best-current-methods-for-round-trip-files-to-pythonista-from-a-macbook/3 | CC-MAIN-2022-33 | refinedweb | 3,347 | 66.57 |
This section was contributed by Michael Trick a few years ago and is a walkthrough of the steps for developing a very simple application using SYMPHONY. Rather than presenting the code in its final version, we will go through the steps that a user would go through. Note that some of the code is lifted from the vehicle routing application. This code is designed to be a sequential code. The MATCH application discussed here is part of the SYMPHONY distribution and the source code can be found in the SYMPHONY/Applications/MATCH directory.
The goal is to create a minimum matching on a complete graph. Initially, we will just formulate this as an integer program with one variable for each possible pair that can be matched. Then we will include a set of constraints that can be added by cut generation.
We begin with the template code in the USER subdirectory included with SYMPHONY. This gives stubs for each user callback routine. First, I need to define a data structure for describing an instance of the matching problem. We use the template structure USER_PROBLEM in the file include/user.h for this purpose. To describe an instance, we just need the number of nodes and the cost matrix. In addition, we also need a way of assigning an index to each possible assignment. Here is the data structure:
typedef struct USER_PROBLEM{ int numnodes; int cost[MAXNODES][MAXNODES]; int match1[MAXNODES*(MAXNODES-1)/2]; int match2[MAXNODES*(MAXNODES-1)/2]; int index[MAXNODES][MAXNODES]; }user_problem;The fields match1, match2, and index will be used later in the code in order to map variables to the corresponding assignment and vice versa.
Next, we need to read in the problem instance. We could implement this function within the user_io() callback function (see the file user_master.c). However, in order to show how it can be done explicitly, we will define our own function match_read_data() in user_main.c to fill in the user data structure and then use sym_set_user_data() to pass this structure to SYMPHONY. The template code already provides basic command-line options for the user. The ``-F'' flag is used to specify the location of a data file, from which we will read in the data. The datafile contains first the number of nodes in the graph ( nnodes) followed by the pairwise cost matrix (nnode by nnode). We read the file in with the match_read_data() routine in user_main.c:
int match_read_data(user_problem *prob, char *infile) { int i, j; FILE *f = NULL; if ((f = fopen(infile, "r")) == NULL){ printf("main(): user file %s can't be opened\n", infile); return(ERROR__USER); } /* Read in the costs */ fscanf(f,"%d",&(prob->numnodes)); for (i = 0; i < prob->numnodes; i++) for (j = 0; j < prob->numnodes; j++) fscanf(f, "%d", &(prob->cost[i][j])); return (FUNCTION_TERMINATED_NORMALLY); }
We can now construct the integer program itself. This is done by specifying
the constraint matrix and the rim vectors in sparse format. We will have a
variable for each possible assignment
with
. We have a constraint
for each node
, so it can only me matched to one other node.
We define the IP in our other helper function match_load_problem() in user_main.c. In the first part of this routine, we will build a description of the IP, and then in the second part, we will load this representation to SYMPHONY through sym_explicit_load_problem(). Note that we could instead create a description of each subproblem dynamically using the user_create_subproblem() callback (see user_lp.c), but this is more complicated and unnecessary here.
int match_load_problem(sym_environment *env, user_problem *prob){ int i, j, index, n, m, nz, *matbeg, *matind; double *matval, *lb, *ub, *obj, *rhs, *rngval; char *sense, *is_int; /* set up the inital LP data */ n = prob->numnodes*(prob->numnodes-1)/2; m = 2 * prob->numnodes; nz = 2 * n; /* Allocate the arrays */ matbeg = (int *) malloc((n + 1) * ISIZE); matind = (int *) malloc((nz) * ISIZE); matval = (double *) malloc((nz) * DSIZE); obj = (double *) malloc(n * DSIZE); lb = (double *) calloc(n, DSIZE); ub = (double *) malloc(n * DSIZE); rhs = (double *) malloc(m * DSIZE); sense = (char *) malloc(m * CSIZE); rngval = (double *) calloc(m, DSIZE); is_int = (char *) malloc(n * CSIZE); /* Fill out the appropriate data structures -- each column has exactly two entries */ index = 0; for (i = 0; i < prob->numnodes; i++) { for (j = i+1; j < prob->numnodes; j++) { prob->match1[index] = i; /*The first component of assignment 'index'*/ prob->match2[index] = j; /*The second component of assignment 'index'*/ /* So we can recover the index later */ prob->index[i][j] = prob->index[j][i] = index; obj[index] = prob->cost[i][j]; /* Cost of assignment (i, j) */ is_int[index] = TRUE; matbeg[index] = 2*index; matval[2*index] = 1; matval[2*index+1] = 1; matind[2*index] = i; matind[2*index+1] = j; ub[index] = 1.0; index++; } } matbeg[n] = 2 * n; /* set the initial right hand side */ for (i = 0; i < m; i++) { rhs[i] = 1; sense[i] = 'E'; } /* Load the problem to SYMPHONY */ sym_explicit_load_problem(env, n, m, matbeg, matind, matval, lb, ub, is_int, obj, 0, sense, rhs, rngval, true); return (FUNCTION_TERMINATED_NORMALLY); }
Now, we are ready to gather everything in the main() routine in user_main(). This will involve to create a SYMPHONY environment and a user data structure, read in the data, create the corresponding IP, load it to the environment and ask SYMPHONY to solve it ( CALL_FUNCTION is just a macro to take care of the return values):
int main(int argc, char **argv) { int termcode; char * infile; /* Create a SYMPHONY environment */ sym_environment *env = sym_open_environment(); /* Create the data structure for storing the problem instance.*/ user_problem *prob = (user_problem *)calloc(1, sizeof(user_problem)); CALL_FUNCTION( sym_set_user_data(env, (void *)prob) ); CALL_FUNCTION( sym_parse_command_line(env, argc, argv) ); CALL_FUNCTION( sym_get_str_param(env, "infile_name", &infile)); CALL_FUNCTION( match_read_data(prob, infile) ); CALL_FUNCTION( match_load_problem(env, prob) ); CALL_FUNCTION( sym_solve(env) ); CALL_FUNCTION( sym_close_environment(env) ); return(0); }
OK, that's it. That defines an integer program, and if you compile and optimize it, the rest of the system will come together to solve this problem. Here is a data file to use:
6 0 1 1 3 3 3 1 0 1 3 3 3 1 1 0 3 3 3 3 3 3 0 1 1 3 3 3 1 0 1 3 3 3 1 1 0
The optimal value is 5. To display the solution, we need to be able to map back from variables to the nodes. That was the use of the node1 and node2 parts of the USER_PROBLEM. We can now use user_display_solution() in user_master.c to print out the solution:
int user_display_solution(void *user, double lpetol, int varnum, int *indices, double *values, double objval) { /* This gives you access to the user data structure. */ user_problem *prob = (user_problem *) user; int index; for (index = 0; index < varnum; index++){ if (values[index] > lpetol) { printf("%2d matched with %2d at cost %6d\n", prob->node1[indices[index]], prob->node2[indices[index]], prob->cost[prob->node1[indices[index]]] [prob->node2[indices[index]]]); } } return(USER_SUCCESS); }
We will now update the code to include a crude cut generator. Of course, We could go for a Gomory-Hu type odd-set separation (ala Grötschel and Padberg) but for the moment, let's just check for sets of size three with more than value 1 among them (such a set defines a cut that requires at least one edge out of any odd set). We can do this by brute force checking of triples, as follows:
int user_find_cuts(void *user, int varnum, int iter_num, int level, int index, double objval, int *indices, double *values, double ub, double etol, int *num_cuts, int *alloc_cuts, cut_data ***cuts) { user_problem *prob = (user_problem *) user; double edge_val[200][200]; /* Matrix of edge values */ int i, j, k, cutind[3]; double cutval[3]; int cutnum = 0; /* Allocate the edge_val matrix to zero (we could also just calloc it) */ memset((char *)edge_val, 0, 200*200*ISIZE); for (i = 0; i < varnum; i++) { edge_val[prob->node1[indices[i]]][prob->node2[indices[i]]] = values[i]; } for (i = 0; i < prob->nnodes; i++){ for (j = i+1; j < prob->nnodes; j++){ for (k = j+1; k < prob->nnodes; k++) { if (edge_val[i][j]+edge_val[j][k]+edge_val[i][k] > 1.0 + etol) { /* Found violated triangle cut */ /* Form the cut as a sparse vector */ cutind[0] = prob->index[i][j]; cutind[1] = prob->index[j][k]; cutind[2] = prob->index[i][k]; cutval[0] = cutval[1] = cutval[2] = 1.0; cg_add_explicit_cut(3, cutind, cutval, 1.0, 0, 'L', TRUE, num_cuts, alloc_cuts, cuts); cutnum++; } } } } return(USER_SUCCESS); }
Note the call of cg_add_explicit_cut(), which tells SYMPHONY about any cuts found. If we now solve the matching problem on the sample data set, the number of nodes in the branch and bound tree should just be 1 (rather than 3 without cut generation).
Ted RalphsTed Ralphs | http://www.coin-or.org/SYMPHONY/man-5.1/node116.html | crawl-003 | refinedweb | 1,450 | 54.05 |
Qt Script C++ Classes
The Qt Script module provides classes for making Qt applications scriptable. More...
This documentation was introduced in Qt 4.3.
Classes
Detailed Description
The Qt Script module only provides core scripting facilities; the Qt Script Tools module provides additional Qt Script-related components that application developers may find useful.
This module is mainly provided for backwards compatibility reasons with Qt 4.x and not being actively developed any further. For new code please consider using the QJSEngine, etc. classes in the Qt Qml module instead.
To include the definitions of the module's classes, use the following directive:
#include <QtScript>
To link against the module, add this line to your qmake
.pro file:
QT += script
For detailed information on how to make your application scriptable with Qt Script, see Making Applications Scriptable.
License Information
Qt Commercial Edition licensees that wish to distribute applications that use the Qt Script module need to be aware of their obligations under the GNU Library General Public License (LGPL).
Developers using the Open Source Edition can choose to redistribute the module under the appropriate version of the GNU LGPL.
Commercial customers that do not want to depend on LGPL code can consider using the QJSEngine and QJSValue classes in the Qt Qml module. These classes offer a similar but more narrow scripting API that is sufficient for most use cases and is not dependent on LGPL code.
Qt Script.
Qt Script furthermore contains third party code that is also licensed under the GNU General Public License, version 2, or. | http://doc.qt.io/qt-5/qtscript-module.html | CC-MAIN-2017-34 | refinedweb | 258 | 53.1 |
Reputation Points: 2,218 [?]
Q&As Helped to Solve: 770 [?]
Skill Endorsements: 28 [?]
0.
/* This program demonstrates two different techniques of creating random integers without duplicates. The first technique is a tree-based method designed to minimize the number of calls to the rand () function. It is a tree based algorithm and is well suited when calls to rand () are computationally expensive or when there is a very large number of random numbers to generated. The second technique is simpler and uses a boolean array to keep track of what numbers have been selected already. If a random number is generated that has occurred before, a new one is generated. With very large arrays, there will be a lot of repeat random number generation, so the first technique tends to be more efficient for large sets, though the timings of the two techniques are much closer than I anticipated. I had expected the first technique to vastly outperform the second on tasks like generating generating all numbers from 1 to 20,000 exactly once, but that doesn't appear to be the case. The example below is a much smaller example, shuffling playing cards, so you are only dealing with 52 numbers. */ #include <iostream> #include <string> #include <ctime> #include <iomanip> using namespace std; struct node { int value; // radomly produced integer value node* left; // pointer to right child node* right; // pointer to left child int leftSize; // yourself + number of descendants in left tree // keep track of this so you don't have to keep // recalculating by traversing }; void GenerateRandomIntegers1 (int min, int max, int* array, int arraySize); void GenerateRandomIntegers2 (int min, int max, int* array, int arraySize); void Display (int array[], int arraySize); string card[] = {"Ace", "2", "3", "4", "5", "6", "7", "8", "9", "10", "Jack", "Queen", "King"}; string suit[] = {"Spades", "Hearts", "Clubs", "Diamonds"}; int main () { srand (time (NULL)); int min = 0; int max = 51; int arraySize = max - min + 1; int* array = new int[arraySize]; cout << "First method - Shuffle\n\n"; GenerateRandomIntegers1 (min, max, array, arraySize); Display (array, arraySize); cout << "\n\nSecond method - Shuffle\n\n"; GenerateRandomIntegers2 (min, max, array, arraySize); Display (array, arraySize); cin.get (); // pause screen return 0; } void GenerateRandomIntegers1 (int min, int max, int array[], int arraySize) // a tree-oriented method. Random numbers never have to be regenerated. Random // number is generated out of a range of the number of empty slots remaining. // If the number 3 is generated, the number selected will be 4th smallest // (add 1 to 3 since 0 is smallest) as-of-yet-unpicked number. // node->leftSize is the number of nodes less than that node that are children // have been picked already (includes itself). This is needed in order to know // how much to adjust a random number. { node* tempNodes = new node[arraySize]; node* headNode = &tempNodes[0]; int range = max - min + 1; headNode->value = rand () % range + min; headNode->left = NULL; headNode->right = NULL; headNode->leftSize = 1; array[0] = headNode->value; // pick first/top node node* treeTop; node* nodeToInsert; bool inserted; for (int i = 1; i < arraySize; i++) { range--; // decrement random number range each time since one fewer // value is available. tempNodes[i].leftSize = 1; tempNodes[i].left = NULL; tempNodes[i].right = NULL; int positionAmongUnpicked = rand () % range; // insert at positionAmongUnpicked spot in list of unpicked numbers. // (i.e. 
if remaining unpicked numbers are {6, 12, 14, 17, 20} and // positionAmongUnpicked is 3, the value will end up being 17). tempNodes[i].value = positionAmongUnpicked + min; inserted = false; treeTop = headNode; nodeToInsert = &tempNodes[i]; // insert node do { if (nodeToInsert->value + treeTop->leftSize > treeTop->value) // check whether to insert right or left { // go right. There are treeTop->leftSize smaller nodes already, // so increment by that number. nodeToInsert->value = nodeToInsert->value + treeTop->leftSize; if (treeTop->right == NULL) { treeTop->right = nodeToInsert; inserted = true; } else treeTop = treeTop->right; } else { // go left treeTop->leftSize++; if (treeTop->left == NULL) { treeTop->left = nodeToInsert; inserted = true; } else treeTop = treeTop->left; } } while (!inserted); array[i] = nodeToInsert->value; } delete []tempNodes; // clean by deleting nodes. } void GenerateRandomIntegers2 (int min, int max, int array[], int arraySize) // this method will generate repeated random numbers. boolean array // keeps track of what numbers have been picked. Generates another if // this number has already been picked. { int range = max - min + 1; bool* picked = new bool[range]; // initialize all to unpicked for (int i = 0; i < range; i++) picked[i] = false; int value; for (int i = 0; i < arraySize; i++) { value = rand () % range; if (picked[value]) i--; // already picked. for-loop increments, so decrement here else { array[i] = value + min; picked[value] = true; // hasn't been picked yet. Assign to array, // flag as picked. } } delete [] picked; // clean up. delete boolean array } void Display (int array[], int arraySize) { for (int i = 0; i < arraySize; i++) { cout << left << setw(6) << card[array[i] % 13] << " of " << suit[array[i] % 4] << endl; } } | https://www.daniweb.com/software-development/cpp/code/217203/generating-random-numbers-without-repeats | CC-MAIN-2015-27 | refinedweb | 794 | 50.36 |
Coding is refactoring. What starts with a great idea, might prove incorrect in the future. A simple example is the following code:
public class Address { private String streetName; private Integer houseNumber; // getters and setters }
While initialy, a housenumber represented as an Integer might sound like a valid thing to do, chances are that this might change in the future when you need to handle
housenumbers like '21A', or '19 4th story'. However, your UI expects this to be a Integer. Your domain expects this to be an integer. Your service layer expects this to be an integer.
I guess you catch my drift. Until now, it took quite some effort to refactor this, since you cannot just rename an Integer to a String. Now, with the help of Type Migration, this is possible!
By selecting the Integer type in the Address, and pressing Ctrl+Shift+F6 (Refactor -> Type Migration), you can change the type of the housenumber to be a String. The
great thing about this is that the getters and setters automatically change, but also methods which call this method. For example, if I have this code in the Person class:
public Integer getPersonHouseNumber() { return address.getHouseNumber(); }
Changing the type of the house number property in the address class will have the following result:
public String getPersonHouseNumber() { return address.getHouseNumber(); }
So, the type did not only change in the Address class, but in the whole chain of method calls! This saves a lot of time in refactoring, and means that when changing
types at the DAO level, this change will automatically affect the whole call chain, for example up to the level of the UI.
Tags: IntelliJ, Java, type migration
Filed under IntelliJ, Java |
Your kidding right? Eclipse did that years ago.
Please. intelliJ. STOP.
Hi “Bob”,
I’m currently unaware of any possibility in Eclipse to do this described ‘Type Migration’. Could you please provide more insights on how to accomplish this in Eclipse? This way we could all learn something.
Hello “Erik”,
As a long time eclipse user I have recently changed companies and am now forced to work with intellij by the higher powers …
It saddens me to see so many good features lacking in the intellij tool … I am really giving it a shot learning all the shortcuts etc, but there are lots of things that are just not possible.
Another feature I dearly miss is ctrl-3 or quick assist in eclipse which basicly gives you access to everything without having to know the shortcut, just type in what you want and you shall receive!
Code templates like intelligent for-eaches etc … having just ONE key-combo for a DECENT auto complete is also great to have … oh eclipse, how I miss you during these long cold days working with the ugly looking swing app which often has an unresponsive ui on a quadcore with 4gig ram … you will always treasured!
bill
Hi Bill,
If you miss some features in IntelliJ, don’t hesitate to ask for them on the forum () or make a Jira (). The IntelliJ team is known to be open for any kind of suggestions so if you have any specific request, you can aks them there.
Regarding your ctrl+3: I think the IntelliJ equivalent of Ctrl+3 is ‘Goto->Action’ (shift+command+a on MacOS).
Code templates is called a ‘Live template’ (command+j on MacOS), the one key combo for autocomplete might be something to get used to, but at least it’s better than to have no option to do smart code completion (for example, not being able to filter out ‘compatible’ methods) in Eclipse.
So, in short, I would suggest reading the Reference Card for the shortcut, it really helps a lot. Or use the Eclipse compatible keymap.
Erik
BTW: Why would you let people force you which IDE to use?? If you want to use Eclipse, use Eclipse. Or any other IDE! I think there are more pressing things to worry about than your IDE, and if you’re more comfortable with Eclipe, the option seems obvious. | http://blog.xebia.com/2008/11/07/intellij-8-type-migration/ | crawl-002 | refinedweb | 682 | 67.89 |
Making a Dart web app offline-capable: 3 lines of code supports them, but the site can operate just fine in their absence (with the default web behavior). This is a useful property that enables progressive web applications (PWA), where you can provide more advanced features to the majority of the users, while making sure that the rest aren’t locked out.
As a background processing thread, a service worker can help with:
- offline mode (fetching resources from cache while the network is down)
- caching strategies (for near-instant cached responses that can be updated later with fresh content)
- push notifications (like in a mobile app)
- messaging (if the application is open on multiple tabs)
The important feature for our offline gaming experience is this: we would like to play Pop, Pop, Win!, and not meet this dinosaur:
Progressive web app with Dart
Supporting offline mode requires roughly the following:
- Determining which resources to put in the cache for offline use.
- Creating a service worker that prepares a cache of these resources.
- Registering the service worker, so that subsequent requests can be served from the offline cache (in case the network is down).
- In that service worker, pre-populating the offline cache with the URLs, and also handling the appropriate fetch request either from the cache, or from the network.
- Making sure that the service worker detects changes to the app or static assets, and puts the new version in the cache.
While the above list may sound a bit scary, we have a pwa package in Dart that does most of the work for us, providing a high-level API and automating most of the work.
Changes in your application
Import the
pwa package in your
pubspec.yaml:
dependencies:
pwa: ^0.1.2
After running
pub get, add the client to your
web/main.dart:
import ‘package:pwa/client.dart’ as pwa;main() {
// register PWA ServiceWorker for offline caching.
new pwa.Client();}
The above code handles item 3 from the above list by registering the service worker (which we will create in the following step). Right now we don’t use the Client instance for anything else, but as the
pwa package gets new features, it may become useful for other purposes.
Automatically generated progressive web application
The
pwa package provides code generation that handles items 1–2 and 4–5 from the above list. To ensure proper cache use (both populating and invalidating the cache) use the following workflow:
- Build your web app with all of the static resources landing in
build/web:
pub build
- Run
pwa’s code generator to scan (or rescan) your offline assets:
pub run pwa
- Build your project again, because you need to have your (new)
pwa.dartfile compiled:
pub build
These steps produce a file named
lib/pwa/offline_urls.g.dart that contains a list of the offline URLs to be cached. The
.g.dart extension indicates that the file is generated and may be overwritten automatically by
pwa’s code generator tool.
On the first run, this workflow generates the
web/pwa.dart file that contains your service worker with reasonable defaults. You can modify this file (to customize the offline URLs or use the high-level APIs, for example) because the code generator won’t change or override it again.
Caveats
While Dartium is great for most web development, at the moment it’s hard to use with service workers. We recommend using Chrome or Firefox instead.
Cache invalidation is one of the hardest problems in computer science. The underlying Web Cache API provides some guarantees, and the
pwa library goes to a great length to gracefully handle the edge cases, but don’t treat the cache as reliable storage for anything really important. Make use of the cache when it is available, and fail gracefully when it’s not.
Try it out
You can now deploy the new version of your application. Or try the offline Pop, Pop, Win! game.
After opening the game and playing one round, shut down your wi-fi or unplug the network cable, and then reload (or retype the URL). If you’re using Chrome or Firefox, your game should be up and running. Good luck, have fun! | https://medium.com/dartlang/making-a-dart-web-app-offline-capable-3-lines-of-code-e980010a7815 | CC-MAIN-2021-21 | refinedweb | 704 | 60.04 |
1. Pico:ed-Python Samples
1.1. Add Python Package
Download and unzip the package: lib.zip Install and open it: thonny
Click “Tools” to see more “Options…”
Click “Interpreter” to see more choices by clicking the arrow, and choose Rasperry Pi Pico.
Click the arrow and select “Try to detect port automatically” and click “ok” to confirm.
Cnnect the USB with pico_ed and click “View” to choose “Files”.
Getting in the “Files” folder, open the downloaded and unzipped folder of pico_ed.
Click the right mouse button on the “lib” folder and select “Move to Recyle Bin” to upload the data to pico_ed. After the operation, click “stop” button.
After adding the file, start programming.
Click “File”-”New”
Program and save it in pico_ed.
Enter the name ‘main.py” to the jumped-out column and click “OK” for confirmation.
1.2. Sample Projects
Result
Press button A to light on the LED and button B to turn it off.
Result
Press button A to display 1234567890 on the LEDs screen and button B to display abcdefghijklmnopqrstuvwxyz.
Project 03: The Music Player
from Pico_ed import * #import file from machine import Pin while True: if ButtonA.is_pressed(): #Detect if button A is pressed, if yes, return 1. music.phonate("1155665-4433221-5544332-5544332-1155665-4433221") #Use this method to play music and the numbers are equivalent to different tones. #Use method music.phonate("numbered musical notation")
Result
Press button A to play music.
Project 04: Light on the LED connecting to P1.
Hardware Connections
Program
from Pico_ed import * #Program as this way from machine import Pin p1 = Pin(pin.P1, Pin.OUT) # Set the pins in output mode, enter with the number of pin.P. while True: if ButtonA.is_pressed(): #Detect if button A is pressed, if yes, return 1. p1.value(1) #P1 pin up if ButtonB.is_pressed(): #Detect if button B is pressed, if yes, return 1. p1.value(0) #P1 pin down
Result
Press button A to light on the LED and button B to turn it off.
1.3. Technical Files | https://www.elecfreaks.com/learn-en/pico-ed/pico_ed_python.html | CC-MAIN-2022-27 | refinedweb | 343 | 76.62 |
By Gilt Senior Software Engineer Lukasz Szwed
“Scala testing systems have also stepped out on their own and created some of the most mind-blowing testing tools found in any language.” –Daniel Hinojosa, author of Testing in Scala
At Tuesday’s Dublin Scala Users Group meetup, organized by Gilt, I presented a tech talk on standard test-driven development practices available to Scala developers. These include ScalaTest, specs2, ScalaCheck, and various mocking frameworks. The event was great: we kicked things off with wood-fired pizza and a very good selection of beers. As for my presentation? Well, I noticed that many of you couldn’t make it, so I’ll share some of the highlights.
I started things off by introducing ScalaTest and specs2, which share a few things in common:
both are written in Scala
neither provides a one-size-fits-all testing framework, but are platforms that allow for different styles of testing
both offer great support for pending test cases
they’re mature and production-ready
they’re well-integrated with sbt
both borrow ideas from Cucumber–the first framework that truly showed how business professionals and other non-developers could write feature descriptions without using source code
About that last point: Cucumber is written in and for Ruby. For Java, very few tools exist that enable Test-Driven Development (TDD), BDD (Business-Driven Development), DDD (Design-Driven Development), or ATDD (Acceptance Test-Driven Development), much less make those processes fun. Experienced Java programmers know and have decent unit tests for our code–but working with tests written in JUnit or TestNG is brittle and tedious, and the test code does not look as modern as Scala code does. Also, neither JUnit nor TestNG offers ways for non-programmers to create tests.
Writing tests in or specs2 can help you turn old-but-good JUnit/TestNG code into actively used assets and eliminate the need for documentation. And that’s fun!
ScalaTest
Built by Bill Venners, ScalaTest integrating the best aspects of Cucumber while offering deep integration with JUnit and TestNG and lots of flexibility when it comes to scaling. At Gilt, we have learned that switching contexts between languages and frameworks isn’t always the best or most efficient approach, so we use ScalaTest for all of our tests: unit tests, functional tests, Selenium tests, and performance tests. Even though Play Framework–which we use and love–comes with specs2 support by default, we have replaced it with ScalaTest in order to make testing simple and consistent across all of our systems.
As Linus Torvalds–the father of the Linux kernel–says, “Talk is cheap. Show me the code.” Here is extremely simple test case, written in good-old JUnit style but presented in the ScalaTest framework:
import org.scalatest.junit.JUnitSuite import org.junit.Test class EmployeeTestJUnit4 extends JUnitSuite { @Test def testCreateEmployeeObjectAndProperties() { val employee = new Employee("Lukasz", "Szwed") assert(employee.firstName === "Lukasz") }
Using ScalaTest to refactor tests written in JUnit requires very little effort. And thanks to ScalaTest JUnitSuite, you can run tests with either the JUnit or ScalaTest framework–a pretty powerful feature. You can still use JUnit’s assertion (assertEquals, assertTrue, etc.), but can also use the more concise assertion syntax that comes with ScalaTest.
Special features
In my Tech Talk, I highlighted two of my (and my colleagues’) favorite ScalaTest aspects: Matchers and FeatureSpec. Both give you hundreds of options to verify what you are testing, work more quickly, and structure your tests to fit your own unique specifications.
Our tech talk audience loved Matchers, a feature that allows you to write code using domain-specific language (DSL)–in other words, pure English–for expressing assertions in tests. In addition to making testing more fun, it also makes source code look better and more modern (and, consequently, is perfectly in line with the whole concept of Scala). Consider this sample test:
result should equal (3) result should === (3) result should be (3) result shouldEqual 3 result shouldBe 3
Instead of:
assertEquals(3, result)
If you’re like me, you want to write your code to be as readable by humans as possible. Matchers will help you achieve this.
After my tech talk, several attendees asked me where to discover more examples of Matchers’s magic. The audience also fell in love with FeatureSpec, which offers a BDD-friendly way of performing tests that are higher-level than unit tests. Here’s an example of a test written with FeatureSpec:
import org.scalatest.{Matchers, GivenWhenThen, FeatureSpec} class FeatureSpecTest1 extends FeatureSpec with GivenWhenThen with Matchers { info("As an employee object consumer") info("I want to be able to create an employee object") info("So I can access the first name and last name") info("And get the employee full name when I need it") info("And also get the Social Security Number") feature("Employee object") { scenario("Create an employee object with first and last name") { Given("an Employee object is created") val employee = new Employee("Lukasz", "Szwed") Then("the first name and last name should be set") val firstName = employee.firstName firstName should be ("Lukasz") val lastName = employee.lastName lastName should be ("Szwed") Then("the full name should be set") employee.fullName should be (firstName + " " + lastName) Then("the ssn should be set") employee.ssn should be ("000-00-0000") } } }
When run with sbt, this test will produce the human readable output in pure English, which is like a feature specification with all of the BDD flavor of Given/When/Then. This makes it easy for non-developers to understand, as the following example shows:
specs2
Specs2 is an open source test framework designed to help with writing executable software specifications. Developed by Eric Torreborre, it allows you to draw a clear line between unit tests–which it calls “unit specifications”–and full-system tests, which are often called acceptance specifications. Well-written documentation is one of its major advantages.
specs2 is evolving in the same direction as ScalaTest and offers a similar end result for the user, but is distinguished by its implementation, structure and design. specs2 tests are asynchronous, and each runs in its thread using a Promise (it works well with Akka, as you might expect). Promises are processes that run on separate threads asynchronously using Actors and send objects–in this case, an ExecutedResult to one another. From what I’ve seen, there’s nothing else on the market as highly advanced as specs2 for dealing with tests.
Here’s an example of a simple unit specification in specs2:
import org.specs2.mutable._ class EmployeeUnitSpecification extends Specification { "An employee" should { "return the same first name and last name given to it's constructor" in { val employee = new Employee("Lukasz", "Szwed") employee.firstName must be("Lukasz") employee.lastName must be("Szwed") } "throw StringIndexOutOfBoundsException if invoking charAt on too short name" in { val employee = new Employee("Lukasz", "Szwed") employee.lastName.charAt(10) must throwA[StringIndexOutOfBoundsException] } "return the full name combined of first name and last name" ! pending } }
The code above illustrates specs2’s excellent support for pending test cases–a feature that is one of my favorites, and something that JUnit and TestNG truly lack. A pending test, by the way, is a test that has a name but is not implemented yet, or is partially implemented but not ready for execution. The big advantages of pending tests become more apparent in the design and early implementation stages, when developers typically try to describe system behavior before writing any code. Pending tests are helpful in defining specifications and functionality that need to be implemented in the production code.
Acceptance specifications are similar to test specifications, with this key difference: They separate what a test is expected to do from what actually happens during the test. Consider this example:
import org.specs2._ class EmployeeAcceptanceSpecification extends Specification { def is = "An employee should" ^ p^ "return the same first name and last name given to it's constructor" ! e1^ "throw StringIndexOutOfBoundsException if invoking charAt on too short name" ! e2^ end def e1 = { val employee = new Employee("Lukasz", "Szwed") employee.firstName must be("Lukasz") employee.lastName must be("Szwed") } def e2 = { new Employee("Lukasz", "Szwed").lastName.charAt(10) must throwA[StringIndexOutOfBoundsException] } }
In the end, you’ll note, the unit specification and acceptance specification tests look pretty familiar.
It’s always good to have options, but if you’re looking for a testing framework in Scala you should choose either ScalaTest or specs2. Because they both integrate so well with sbt, any other test library that attracts interest from the open source community should be integrated with either ScalaTest or specs2 (or both). When writing production code and jumping from project to project, however, you should pick one of them and stick to it.
ScalaCheck
ScalaCheck is a fully automated test generation tool based on the QuickCheck open source project used in Haskell. ScalaCheck includes test define generators that are responsible for generating test data in ScalaCheck. According to the user guide:
“a generator can be seen simply as a function that takes some generation parameters, and (maybe) returns a generated value. That is, the type Gen[T] may be thought of as a function of type Gen.Params => Option[T]. … Conceptually, though, you should think of generators simply as functions, and the combinators in the Gen object can be used to create or modify the behaviour of such generator functions.”
The true magic comes with forAll, which takes generators and creates universally quantified properties for a test.
ScalaCheck is well integrated with both ScalaTest and specs2, which is a critical feature. The following example shows how ScalaCheck can be used with ScalaTest:
import org.scalatest.FunSpec import org.scalatest.prop.GeneratorDrivenPropertyChecks import org.scalacheck.Gen class ScalaTestWithScalaCheck extends FunSpec with GeneratorDrivenPropertyChecks { val gen1 = Gen.oneOf("Abigail", "Amber", "Bertha", "Cally", "Diana", "Esther", "Frannie", "Texarkana", "Justine") val gen2 = Gen.oneOf("Adams", "Valles", "Simons", "Gomez", "Patel", "Mehra", "Groenfeld", "Thatcher", "Greenfield") describe("An employee object") { it("has valid full name") { forAll(gen1, gen2) { (firstName: String, lastName: String) => new Employee(firstName, lastName).fullName == firstName.trim() + " " + lastName.trim() } } } describe("An employee object should have ssn number") { it("has valid random ssn number") { forAll((Gen.choose(000, 999), "a"), (Gen.choose(00, 99), "b"), (Gen.choose(0000, 9999), "c")) { (a: Int, b: Int, c: Int) => { val ssn = a + "-" + b + "-" + c info(ssn) new Employee(gen1.sample.get, gen2.sample.get, ssn).ssn === ssn } } } } }
As the simple example above shows, we can use ScalaCheck to easily create hundreds or even thousands of test properties for a test with the certainty that we’ve also accounted for lots of outlier cases.
Mocking
One of my favourite definitions of the word “mock” comes from Testing in Scala, the book quoted at the beginning of this post. It goes like this: “[Mocking] is analogous to the stand-in opponent during preparation for a political debate. The opponent would likely be a campaign team member, but she will have a set of answers already prepared to debate the candidate who needs to train for the big event.”
This brings me to EasyMock, Mockito and ScalaMock. These testing tools integrate well with Scala and help us to avoid the common problems of over-complexity and excessive dependencies. They also make tests run faster.
ScalaMock: This one deserves special attention because it’s a framework fully written in Scala for Scala developers. It tries to address challenges by “mocking” (in this case, imitating) Scala code. With ScalaMock, you can easily gain full support for polymorphic (type-parameterized) methods, operators (methods with symbolic names), overloaded methods, traits, companion objects, and macros. If you are new to Mocking with Scala, give a try to ScalaMock.
EasyMock: the first Java Mock Framework. ScalaTest provides EasyMockSugar, which helps to both integrate and simplify the use of EasyMock.
Mockito: The first of the later-generation mocking frameworks to provide mocking options for concrete Java classes. Both Mockito and EasyMock provide similar functionality, but Mockito has the edge in offering more options–which might be why developers tend to stick to it and use it even with Scala code. At Gilt we started using Mockito back when we were still a Java shop (we began migrating our services to Scala in 2011). It works well for us, so there is no immediate plan to replace it with ScalaMock.
Wrap-up
For me, the most interesting part of the meetup was the discussion about how we are doing testing at Gilt. I’ll share more details on that in an upcoming post. Meantime, stay tuned for the next Scala Tech Talk hosted by Gilt in Dublin. I hope to see you there. | https://tech.hbc.com/2013-09-27-which-scala-testing-tools-should-you-use.html | CC-MAIN-2018-22 | refinedweb | 2,105 | 52.29 |
Discussion in 'BlackHat Lounge' started by SpellZ, Nov 29, 2011.
What do you guys think about it?
I think it converts pretty well, what about you guys?
Its cheaper than its competition..
this will be the runner for christmas...
It is everywhere -TV commercials, you name it. Has to be selling like hotcakes.
Is there any way to win it for FREE ?
It looks like a great product, never used one but it looks like the best tablet out there besides the ipad2
Don't people root it in order to lose all the Amazon crap and have proper android running on it?
Better save your money and buy an ipad 3 in next spring
If you are getting it primarily for reading I would suggest looking at the kindle (regular) or nook touch instead. E-ink is much easier on the eyes.
def a hot item at the moment. its not available in canada so might take a trip down to the us
I'm considering getting one. The price point definitely makes it attractive.
I picked one up last week and I love it so far.
You can root it so it runs android and use the full market and what not.
I also don't look like a douche when I bring it outside with me since it's not that big.
@Mises:
I want something with a backlight of some sort. I have the very first ebook reader which I got like 4 years ago? And there is no backlight! Sucha drag when the place is even slightly dark...
@paulpapapump:
I went on Kijiji found a guy selling it for $275, we'll see if its legit on Thursday. Plus, you can buy it online from different stores that ship to Canada, and that would be $285
@kahwhyc:
What do you mean by 'rooting it'? How do I do that?
@kahwhyc:
What do you mean by 'rooting it'? How do I do that?[/QUOTE]
Rooting is kind of like jailbreaking a Apple product.
There are many guides on the internet that teach you how to root. This one is the one I used. When you root your android product, you can access the full potential of the product and do a lot more to it such as custom ROMs.
Separate names with a comma. | https://www.blackhatworld.com/seo/kindle-fire.377591/ | CC-MAIN-2017-22 | refinedweb | 388 | 84.37 |
Apache Kafka makes it possible to run a variety of analytics on large-scale data. This is the first half of a two-part article that employs one of Kafka's most popular projects, the Kafka Streams API, to analyze data from an online interactive game. Our example uses the Kafka Streams API along with the following Red Hat technologies:
- Red Hat OpenShift Streams for Apache Kafka is a fully hosted and managed Apache Kafka service.
- Red Hat OpenShift Application Services simplifies provisioning and interaction with managed Kafka clusters, and integration with your applications.
- The Developer Sandbox for Red Hat OpenShift lets you deploy and test applications quickly.
The two-part article is based on a demonstration from Red Hat Summit 2021. This first part sets up the environment for analytics, and the second part runs the analytics along with replaying games from saved data.
Why do game data analysis?
Sources indicate that there are approximately three billion active gamers around the globe. Our pocket-sized supercomputers and their high-speed internet connections have made gaming more accessible than it has ever been. These factors also make data generated during gameplay more accessible and enable developers to constantly improve games after their release. This continuous iteration requires data-driven decision-making that's based on events and telemetry captured during gameplay.
According to steamcharts.com, the most popular games often have up to one million concurrent players online—and that’s just the PC versions of those games! That number of players will generate enormous amounts of valuable telemetry data. How can development teams ingest such large volumes of data and put it to good use?
Uses of game telemetry data
A practical example can help solidify the value of game telemetry data. The developers of a competitive online FPS (first-person shooter) game could send an enormous amount of telemetry events related to player activity. Some simple events and data might include:
- The player’s location in the game world, recorded every N seconds.
- Weapon choices and changes.
- Each time the player fires a shot.
- Each time a player successfully hits an opponent.
- Items the player obtains.
- Player wins and losses.
- Player connection quality and approximate physical location.
Game developers could use the data in these events to determine the busy or hot spots on the game world, the popularity of specific weapons, the players' shooting accuracy, and even the accuracy of players using specific weapons.
Developers could use this information to make game balance adjustments. For example, if a specific weapon is consistently chosen by 50% of players and their accuracy with that weapon is higher than with other weapons, there’s a chance that the weapon is overpowered or bugged.
Network engineers could ingest the player ping time and location to generate Grafana dashboard and alerts, isolate problems, and reference the data when choosing new data center locations.
Marketers could also use the data to market power-ups or incentives to players down on their luck if the game supports microtransactions.
Example and prerequisites
This article shows you how to employ Red Hat OpenShift Streams for Apache Kafka with the Kafka Streams API to ingest and analyze real-time events and telemetry reported by a game server, using a practical example. Specifically, you’ll learn how to:
- Use the Red Hat OpenShift Application Services CLI and OpenShift Streams for Apache Kafka to:
- Provision Kafka clusters.
- Manage Kafka topics.
- Connect your OpenShift project to a managed Kafka instance.
- Develop a Java application using the Kafka Streams API to process event data.
- Expose HTTP endpoints to the processed data using Quarkus and MicroProfile.
- Deploy Node.js and Java applications on Red Hat OpenShift and connect them to your OpenShift Streams for Apache Kafka cluster.
To follow along with the examples, you’ll need:
- Access to Red Hat OpenShift Streams for Apache Kafka. You can get a free account at the Getting started site.
- Access to Developer Sandbox. Use the Get started in the Sandbox link on the Developer Sandbox welcome page to get access for free.
- The Red Hat OpenShift Application Services command-line interface (CLI). Installation instructions are available on GitHub.
- The OpenShift CLI. The tool is available at the page of files for downloading.
- The Git CLI. Downloads are available at the Git download page.
The demo application: Shipwars
If you attended Red Hat Summit 2021 you might have already played our Shipwars game. This is a browser-based video game (Figure 1) that’s similar to the classic Battleship tabletop game, but with a smaller 5x5 board and a server-side AI opponent.
Shipwars is a relatively simple game, unlike the complex shooter game described earlier. Despite the simplicity of Shipwars, it generates useful events for you to process. We'll focus on two specific events:
- Player: Created when a player connects to the server and is assigned a generated username; for example, "Wool Pegasus" from Figure 1.
- Attack: Sent whenever a player attacks. Each player will attack at least 13 times, because a minimum of 14 hits are required to sink all of the opponent’s ships.
You’ll deploy the Shipwars microservices and send these events to a managed Kafka instance running on OpenShift Streams for Apache Kafka. You’ll also use Kafka Streams to process events generated by the game. This involves using Kafka Streams to:
- Join data from two separate Kafka topics.
- Create aggregations using the joined data stream.
The join can be used to create a real-time heatmap of player shots. It is also a prerequisite for the aggregation stage.
The aggregations can be used to:
- Create and store records of completed games with each turn in order.
- Analyze how human players compare to their AI counterparts.
- Continuously update your AI model.
Figure 2 shows the overall architecture of core game services, topics in Red Hat OpenShift Streams for Apache Kafka, and the Kafka Streams topology you’ll create.
Deploying the core Shipwars services on the Developer Sandbox
The README file in Red Hat's Shipwars deployment repository can help you quickly deploy the game.
The first step is to deploy Shipwars on the Developer Sandbox. Shipwars can run without Kafka integration, so you’ll add the Kafka integration after the core game microservices are running.
Get started by accessing the Developer Sandbox. Once you're logged in, you should see two empty projects (namespaces). For example, my projects are
eshortis-dev and
eshortis-stage, as shown in Figure 3. I use the
dev namespace throughout this two-part article.
The deployment process for Shipwars uses the OpenShift CLI. Do the following to obtain the OpenShift CLl login command and token:
- Click your username in the top-right corner of the OpenShift Sandbox UI.
- Click Copy Login Command.
- Select the DevSandbox login option when prompted.
- Click Display Token.
- Copy and paste the displayed login command into your terminal.
After a successful login, the OpenShift CLI prints your available projects.
Next, use the
git clone command to clone the shipwars-deployment repository into your workspace. Figure 4 shows both the OpenShift CLI and Git CLI commands.
A script is included to deploy Shipwars. This is a straightforward script that applies YAML files so you can get to the Kafka content more quickly:
$ git clone $ cd shipwars-deployment/openshift $ NAMESPACE=eshortis-dev ./deploy.game.sh
Note: The
NAMESPACE variable must be set to a valid namespace in your Developer Sandbox. The
eshortis-dev value used here is an example, and must be replaced with the equivalent for your username, such as
myusername-dev.
The script prints the resource types as it creates them. Once it is finished, you can view the deployed services in the OpenShift topology view. You can also click the Open URL link on the NGINX service, as shown in Figure 5, to view and play the Shipwars game against an AI opponent.
Creating a Kafka instance and topics
If you’re not familiar with the basics of OpenShift Streams for Apache Kafka, consider reviewing this introductory article: Getting started with Red Hat OpenShift Streams for Apache Kafka. It covers the basics of creating Kafka instances, topics, and service accounts.
In this article, we use the Red Hat OpenShift Application Services CLI,
rhoas, to manage the creation of everything needed to integrate OpenShift Streams for Apache Kafka with the Shipwars game server you deployed on the Developer Sandbox.
To get started, log in to your cloud.redhat.com account and create a Kafka instance using the following commands:
# Login using the browser flow rhoas login # Create a kafka instance rhoas kafka create shipwars
The
kafka create command will complete within a few seconds, but the Kafka instance won’t be ready at this point. Wait for two or three minutes, then issue a
rhoas kafka list command. This lists your Kafka instances and their status. You can continue with the procedure in this article when the
shipwars instance status is "ready," as shown in Figure 6.
The Red Hat OpenShift Application Services CLI allows you to select a Kafka instance as the context for future commands. Select your newly created
shipwars instance using the
rhoas kafka use command, then create the necessary topics as follows:
# Choose the kafka instance to create topics in rhoas kafka use # Create necessary game event topics rhoas kafka topic create shipwars-matches --partitions 3 rhoas kafka topic create shipwars-players --partitions 3 rhoas kafka topic create shipwars-attacks --partitions 3 rhoas kafka topic create shipwars-bonuses --partitions 3 rhoas kafka topic create shipwars-results --partitions 3 # Create topics used by Kafka Streams rhoas kafka topic create shipwars-attacks-lite --partitions 3 rhoas kafka topic create shipwars-streams-shots-aggregate --partitions 3 rhoas kafka topic create shipwars-streams-matches-aggregate --partitions 3
The topic configuration is printed in JSON format after each topic is created, as shown in Figure 7. The only configuration that you’ve explicitly set for your topics is the partition count; other values use sensible defaults.
Connecting to the Shipwars game server
At this point, you’ve obtained a managed Kafka instance and configured it with the topics required by the Shipwars game. The next step is to configure the Node.js
shipwars-game-server to connect to your topics and send events to them. This is a two-step process.
First, you need to link your OpenShift project to your managed Kafka instance. You can do this by issuing the
rhoas cluster connect command. The command starts a guided process that will ask you to confirm the project and managed Kafka instance being linked, and request you to provide a token obtained from a cloud.redhat.com/openshift/token as shown in Figure 8.
This process creates the following resources in the target OpenShift project:
- A
KafkaConnectioncustom resource that contains information such as bootstrap server URL and SASL mechanism.
- A
Secretthat contains the service account credentials for connecting to the Kafka instance via SASL SSL.
- A
Secretthat contains your cloud.redhat.com token.
Next, run the
rhoas cluster bind command. This guides you through the process of creating a
ServiceBinding and updates the
shipwars-game-server
Deployment with the credentials required to connect to your managed Kafka instance.
Once the binding process has completed, a new
Secret is generated and mounted into a new pod of the
shipwars-game-server
Deployment. You can explore the contents of this secret using the OpenShift CLI or UI, as shown in Figure 9.
Lastly, you can confirm that the Node.js-based
shipwars-game-server has connected to your managed Kafka instance by viewing its logs. Find the logs by selecting the
shipwars-game-server from the OpenShift topology view, selecting the Resources tab, and clicking the View logs link. The startup logs should contain a message stating that a Kafka producer has connected to the managed Kafka instance bootstrap server URL, as shown in Figure 10.
Conclusion to Part 1
In this first half of the article, we have set up our game and our environment for analytics. In the second half, we'll run analytics and replay some games. For now, you can verify that game events are being streamed to your managed Kafka instance using a tool such as kafkacat. As an example, the following command outputs the contents of the
shipwars-attacks topic as it receives events from the
shipwars-game-server:
$ kafkacat -t shipwars-attacks-b $KAFKA_BOOTSTRAP_SERVER \ -X sasl.mechanisms=PLAIN \ -X security.protocol=SASL_SSL \ -X sasl.username=$CLIENT_ID \ -X sasl.password=$CLIENT_SECRET -K " / " -C | https://developers.redhat.com/articles/2021/08/24/game-telemetry-kafka-streams-and-quarkus-part-1 | CC-MAIN-2021-39 | refinedweb | 2,087 | 55.34 |
Replace the signal mask, and then suspend the thread
#include <signal.h> int sigsuspend( const sigset_t *sigmask );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The sigsuspend() function replaces the thread's signal mask with the set of signals pointed to by sigmask and then suspends the thread until delivery of a signal whose action is either to execute a signal-catching function (then return), or to terminate the thread.
/* * This program pauses until a signal other than * a SIGINT occurs. In this case a SIGALRM. */ #include <stdio.h> #include <signal.h> #include <stdlib.h> #include <unistd.h> sigset_t set; int main( void ) { sigemptyset( &set ); sigaddset( &set, SIGINT ); printf( "Program suspended and immune to breaks.\n" ); printf( "A SIGALRM will terminate the program" " in 10 seconds.\n" ); alarm( 10 ); sigsuspend( &set ); return( EXIT_SUCCESS ); } | http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/s/sigsuspend.html | CC-MAIN-2021-10 | refinedweb | 144 | 60.01 |
If you made a down payment and took out a mortgage, most likely you are paying back an amortizing loan.
An amortizing loan is one where you pay a fixed amount every month so that, by the end of the loan term, you have paid off both the debt and the interest.
Each monthly payment consists of an interest payment and a principal payment. The interest payment goes toward the interest, while the principal payment reduces your actual debt.
Notably, as your debt goes down, the interest charged the following month decreases and the principal portion of the payment increases. …
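To make the split between interest and principal concrete, here is a minimal Python sketch of the standard amortization formula; the loan amount, rate, and term are made-up example values rather than figures from this article.
principal = 300_000        # outstanding loan amount (hypothetical)
annual_rate = 0.04         # 4% yearly interest (hypothetical)
n_months = 30 * 12         # 30-year term
r = annual_rate / 12       # monthly interest rate
payment = principal * r * (1 + r) ** n_months / ((1 + r) ** n_months - 1)
balance = principal
for month in range(1, 4):  # show the first three months
    interest = balance * r                 # part that goes toward interest
    principal_paid = payment - interest    # part that reduces the debt
    balance -= principal_paid
    print(month, round(payment, 2), round(interest, 2), round(principal_paid, 2), round(balance, 2))
The printout shows the behaviour described above: the payment stays fixed while the interest portion shrinks and the principal portion grows each month.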
One of the most joyful activities in analytics is working with beautiful visualization. With the Variable Factor Map, you can explain Principal Component Analysis with ease. A picture is worth a thousand words.
Principal Component Analysis (PCA) is a dimensionality-reduction method for large data sets. It transforms many features into a much smaller number of new features while preserving most of the information and variability of the original data.
If the number of features is 2, we can put them in a 2D plot and visualize how different features factor in each new component…
The Monty Hall Problem is a famous probability puzzle in statistics. It is named after Monty, the host of the television game show “Let’s Makes a Deal”. The brain teaser loosely replicates the game show concept and it goes like this:
There are 3 doors. You will have to choose a door, and you will win whatever behind it. There is one door with a car. Each remaining door has a goat. First, you are asked to pick one of the doors. Next, Monty, who knows what’s behind each of the doors, opens up one of the two doors you…
When talking about the decision trees, I always imagine a list of questions I would ask my girlfriend when she does not know what she wants for dinner: Do you want to eat something with the noodle? How much do you want to spend? Asian or Western? Healthy or junk food?
Making a list of questions to narrow down the options is essentially the idea behind decision trees. More formally, the decision tree is the algorithm that partitions the observations into similar data points based on their features.
The decision tree is a supervised learning model that has the tree-like…
Natural language processing is an interesting field because it is thought-provoking to disambiguate the input sentence to produce the machine representation language. Take a look at the famous Groucho Marx’s joke:
One morning I shot an elephant in my pajamas. How he got into my pajamas I’ll never know.
At the human level, there are several interpretations of this sentence. But it is almost impossible for the computer to comprehend.
Nonetheless, the learning curve of NLP is not so steep and can be captivating. In this project, I will explain an introductory level of natural language processing with the Beatles…
A data science team is the core of every big company. A successful data scientist needs to be the head for business strategy, can make discoveries and vision through data, and can convince stakeholders through communication and visualization. However, the amount of data nowadays increase exponentially and it makes data science job more sophisticated. Every company starts to understand the importance of data and collect them as much as they can. Data size increases from a few gigabytes to several petabytes. Cloud storage and computing becomes more popular. End-user organizations are adopting cloud storage solutions as primary storage options. Therefore…
In the job searching process, I found that many companies in my area looking for data scientists having knowledge of oil and gas. My background is mostly about mathematics, so I decided to go on an adventure along the oil pipeline.
One thing that I noticed is oil and gas is a big industry with old infrastructure. In 2014, Inside Energy reported that 45% of the U.S. crude oil pipeline was more than 50 years old. Some pipeline even laid in and before the 1920s is still in operation. …
The Receiver Operating Characteristic (ROC) curve is a probability curve that illustrates how good our binary classification is in classifying classes based on true-positive and false-positive rates.
The Area Under Curve (AUC) is a metric that ranges from 0 to 1. It is the area under the (ROC) curve.
Why is understanding the ROC curve and AUC important for a data scientist? To answer this question, let’s take a look at the breast cancer data set below:
from sklearn.datasets import load_breast_cancer
import pandas as pddata=load_breast_cancer()
columns=data.feature_names
X=pd.DataFrame(data.data, columns=columns)
y=pd.DataFrame(data.target, columns=['Target'])
df=pd.concat([y,X],axis=1)
The most fascinating about generative deep learning, such as auto-encoder is that the machine can teach itself to be creative. The algorithm simply mimics the way humans learn and innovate. When first encounter a new concept, one needs to read, listen, memorize what important, and then practice. The more training that one undergoes, the more one’s creativity becomes cohesive and logical. Creating new things base on previous knowledge is exactly what an auto-encoder is capable of!
In this blog, I will introduce my understanding of autoencoder (AE), what it does and why it is useful. I will first introduce the…
In this notebook, I will introduce different approaches to encode categorical data. I want to write it in a simple language with a little math involved. They are
12 basic encoding schemes that you should put in your tool kit. For convenience purposes, I will also provide the
scikit-learn version and the library corresponding to each method.
By the end of this post, I hope that you would have a better idea of how to deal with categorical data. You can find my code here.
There are three main routes to encode the string data type:
Classic Encoders: well known… | https://medium.com/@williamhuybui?source=user_followers_list------------------------------------- | CC-MAIN-2021-39 | refinedweb | 1,004 | 55.64 |
Ruby Weekly News 26th December 2005 - 1st January 2006
Ruby Weekly News is a summary of the week’s activity on the
ruby-talk mailing list / the comp.lang.ruby newsgroup / Ruby forum,
brought to you by Tim S…
A short one this week!
[ Contribute to the next newsletter ]
Articles and Announcements
* ICFP Contest Dates Are Set ---------------------------- James Edward G. II said that the International Conference of Functional Programming have announced the dates for 2006's
programming
contest.
"Ruby participation has been pretty small in the past, so it
would be
great to see some solid Ruby entries this year. Ruby Q. will
take a
break for the contest, to encourage others to enter and give me
time
to make my own entry."
* RRobots Tournament Results ---------------------------- Ruby Q. #59 set out the challenge of writing bots to play in an RRobots Tournament, held just after Christmas. The results are now out, with Jannis Harder's bot Ente besting 10 other entries to win the tournament. Jannis also wins a real robot: the Desktop R/C Mini-Rover from ThinkGeek.com. (Thanks to Simon Kröger and James Edward G. II.) * Forthcoming 2nd ed. of _The Ruby Way_ --------------------------------------- This thread began a couple of weeks ago, but we missed it
previously.
Hal F. announced the second edition of his book `The Ruby
Way’;
“ink is dry on the contract” and it’s aimed to be ready for the
second
quarter of 2006.
There was much discussion about the current edition, what
libraries
should be covered in the second edition, and also in response to
a
question by Patrick H., “what would your idea table of
contents
look like in an Advanced Ruby book?”
David A. Black's upcoming book "Ruby for Rails: Ruby techniques
for
Rails developers" was also noted. It is due April 2006.
Threads
ActiveRecord
Diego: “I was wondering if ActiveRecord support was available for
database
access without requiring Ruby on Rails?”
The answer is yes, “Active Record works very well on standard alone
scripts” (David Heinemeier H.).
As an alternative, George M. mentioned his Og library, which
fills the same role as ActiveRecord.
scoped_require 0.0
“… and from the substratum, it arises …”
Devin M.: “scoped_require provides an optional parameter to
Kernel#require to allow you to shove the created modules/classes into
some
sandboxy container module. I use it to prevent namespace collision
with an
external library. It’s a hack. Heed the version number.”
It works by reading the .rb file being required, and evaling the
result.
Eero S. vaguely recalled someone posting an alternative
implementation of this concept - probably thinking of Wrapped Modules
by
Austin Z. in May 2005.
The example given in that post:
module My; end
module Your; end
require_wrap ‘cgi’, My
require_wrap ‘cgi’, Your
puts My::CGI.escape(“hello, world”)
puts Your::CGI.escape(“hello, world”)
Austin’s implementation works by first calling Module.constants to
find
out all the top-level constants, then calling require, followed by
Module.constants again to find which constants have been added. It
then
`moves’ those constants under the desired namespace.
Both implementations cause problems in some cases. The most
significant
(and intractible) one seems to be the situation when the require’d
file
includes code like the following:
foo.rb
class String
def hello
“hello #{self}”
end
end
require ‘foo’, :module => F
Should the hello method be added to Ruby’s standard String class, or
should a new class F::String be created? (Would your answer be
different
if the class was called Util, and had just happened to be defined by
another library?)
Austin’s solution does the former, while Devin’s does the latter.
Arguably, the `correct’ solution is to create F::String, and say that
foo.rb should have written class ::String if it really wanted to add
the
method to the standard String class.
This will be no consolation to all the libraries that don’t use the
::
prefix. For example, set.rb in the core Ruby library says module
Enumerable, not module ::Enumerable.
Using Float For Currency
Hunter’s Lists was using Floats to represent monetary values, but
wanted
to print e.g. “9.76” instead of “9.756”.
The immediate answer to the question is to use sprintf (or printf)
for
rounding:
amount = 9.756
rounded = sprintf("%.2f", amount) # == “9.76”
Several people pointed out though that you shouldn’t use Floats to
represent currency.
Mental: “There are a lot of really nasty subtle issues that will lose
money between the cracks.”
This applies to any programming language, not just Ruby. An example
given
by Malte M.: 0.2 - 0.05 - 0.15 will be approximately
2.77555756156289e-17, not zero.
Stephen W.: “Floating point numbers represent an extremely wide
range
of values - much wider than their integer counterparts. This is
handled
through an exponent and mantissa. For this ability, they trade off
precision.”
The Ruby solution for exact decimals is BigDecimal:
require ‘bigdecimal’
a = BigDecimal.new(“0.2”)
b = BigDecimal.new(“0.05”)
c = a / b
Operations with BigDecimal are always exact (but slower than with
Float),
and so it is safe to use with currencies and other situations where
any
loss of precision is unacceptable.
Idiom wanted: do-while
Adam S. asked how to write a “do-while” loop in Ruby, in order to
remove the repetition in the below code:
b = simulate(b,m)
while another_turn?(b,m)
b = simulate(b,m)
end
James Edward G. II suggested loop, break unless, as in
loop do
b = simulate(b,m)
break unless another_turn?(b,m)
end
Ruby does support the following, which is just like a “do-while” loop
in
other languages, however Matz recently said on the ruby-core mailing
list
“Don’t use it please. I’m regretting this feature, and I’d like to
remove
it in the future if it’s possible.”
begin
b = simulate(b,m)
end while another_turn?(b,m)
Ruby on the mobile
“Do anyone knows if there is a Ruby version for cellphones and other
mobile devices on the make?”-Marcelo Paniagua.
Treefrog asked someone in the Mobile Devices division of Motorola
about
this a while ago, but was told that only Java and BREW were currently
supported.
Gene T. noted that Python is making progress in this area, linking
to a
Python for Mobile Devices page, which lists a number of ports
including
official Nokia support and downloads for “Series 60” phones. (The
first
release by Nokia was over a year ago.)
People have previously reported success in running Ruby on the Sharp
Zaurus PDA (which runs Linux) and on Windows CE.
Ncurses - how do you get mousemask working?
Richard L. couldn’t get the mousemask to work in the Ncurses
library.
| A few other projects have the line with `Ncurses::mousemask…’ in
them
| - but it’s always commented out… like they couldn’t get it
working
| either.
Paul D. replied “They couldn’t”, and posted a one-liner patch to
Ruby’s ncurses library to fix a bug.
Hopefully the authors of the other projects find out about this. | https://www.ruby-forum.com/t/26th-december-2005-1st-january-2006/56101 | CC-MAIN-2019-51 | refinedweb | 1,180 | 66.23 |
Stream is a Dart web server supporting request routing, filtering, template engine, WebSocket, MVC design pattern and file-based static resources.
Stream is distributed under an Apache 2.0 License.
Add this to your
pubspec.yaml (or create it):
dependencies: stream:
There are two ways to compile RSP files into dart files: automatic building with Dart Editor or manual compiling.
RSP is a template technology allowing developers to create dynamically generated web pages based on HTML, XML or other document types (such as this and this). Please refer to here for more information.
To compile your RSP files automatically, you just need to add a build.dart file in the root directory of your project, with the following content:
import 'package:stream/rspc.dart'; void main(List<String> arguments) { build(arguments); }
With this build.dart script, whenever your RSP is modified, it will be re-compiled.
To compile a RSP file manually, run
rspc (RSP compiler) to compile it into the dart file with command line interface as follows:
dart bin/rspc.dart your-rsp-file(s)
A dart file is generated for each RSP file you gave.
If you'd like to contribute back to the core, you can fork this repository and send us a pull request, when it is ready.
Please be aware that one of Stream's design goals is to keep the sphere of API as neat and consistency as possible. Strong enhancement always demands greater consensus.
If you are new to Git or GitHub, please read this guide first.
Add this to your package's pubspec.yaml file:
dependencies: stream: "^1.5.0+1"
You can install packages from the command line:
$ pub get
Alternatively, your editor might support pub. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:stream/
stream.dart';stream.dart'; | https://pub.dartlang.org/packages/stream | CC-MAIN-2015-27 | refinedweb | 310 | 66.33 |
For a online course I'm taking,I'm trying to save the 2nd set of integer, double, and string that I defined to variables, after reading them (the 2nd set) using a scanner. The problem is I don't know how to do that to the 2nd set of variables that I defined. I've tried instantiating them to a new variable,but I keep running into errors. I need help to read each variable and then save them.
import java.io.*;
import java.util.*;
import java.text.*;
import java.math.*;
import java.util.regex.*;
public class Solution {
public static void main(String[] args) {
int i = 4;
double d = 4.0;
String s = "HackerRank ";
int j = 4;
double y = 9.0;
String k = "is the best place to learn and practice coding!";
int j = new j();
double y = new y();
String k = new k();
j.scanner.nextInt();
y.scanner.nextDouble();
k.scanner.nextLine();
System.out.print(j + i);
System.out.print(d + y);
System.out.print(s + k);
You use assignment without declaring the type again.
int j = 4; double y = 9.0; String k = "is the best place to learn and practice coding!"; j = scanner.nextInt(); y = scanner.nextDouble(); k = scanner.nextLine(); | https://codedump.io/share/HEuY1ZEQWJ0G/1/how-to-read-and-save-an-integer-double-and-string-to-a-variable | CC-MAIN-2016-50 | refinedweb | 204 | 76.93 |
Google App Engine (often referred to as GAE or simply App Engine) is a web framework.
Supported programming languages include Python, Ruby, Java (and, by extension, other JVM languages such as Kotlin, Groovy, JRuby, Scala, Clojure), Go, and PHP. Node.js is also available in the flexible environment. Google has said that it plans to support more languages in the future, and that the Google App Engine has been written to be language independent.[4] C# is also supported.[5] Arbitrary Docker containers are also supported.[6]
Python web frameworks that run on Google App Engine include Django, CherryPy, Pyramid, Flask, web2py and webapp2,[7] as well as a custom Google-written webapp framework and several others designed specifically for the platform that emerged since the release.[8] Any Python framework that supports the WSGI using the CGI adapter can be used to create an application; the framework can be uploaded with the developed application. Third-party libraries written in pure Python may also be uploaded.[9][10]
Google App Engine supports many Java standards and frameworks. Core to this is the servlet 2.5 technology using the open-source Jetty Web Server, easily accessed and supported with JPA, JDO, and by the simple low-level API.[12] There are several alternative libraries and frameworks you can use to model and map the data to the database such as Objectify,[13] Slim3[14] and Jello framework.[15]
The Spring Framework works with GAE. However, the Spring Security module (if used) requires workarounds. Apache Struts 1 is supported, and Struts 2 runs with workarounds.[16]
The Django web framework and applications running on it can be used on App Engine with modification. Django-nonrel[17] aims to allow Django to work with non-relational databases and the project includes support for App Engine.[18]
All billed App Engine applications have a 99.95% uptime SLA.[19]
App Engine is designed in such a way that it can sustain multiple datacenter outages without any downtime. This resilience to downtime is shown by the statistic that the High Replication Datastore saw 0% downtime over a period of a year.[20]
Paid support from Google engineers is offered as part of Premier Accounts.[21] Free support is offered in the App Engine Groups, Stack Overflow, Server Fault, and GitHub. However assistance by a Google staff member is not guaranteed.[22]
SDK version 1.2.2 adds support for bulk downloads of data using Python.[23] The open source Python projects gaebar,[24] approcket,[25] and gawsh[26] also allow users to download and back up.[29] the document-oriented Google Cloud Datastore database; making HTTP requests; sending e-mail; manipulating images; and caching. Google Cloud SQL[30] can be used for App Engine applications requiring a relational MySQL compatible database backend.[31] integrated Google Cloud Datastore database has a SQL-like syntax called "GQL". GQL does not support the Join statement.[32] Instead, one-to-many and many-to-many relationships can be accomplished using ReferenceProperty.[33] This shared-nothing approach allows disks to fail without the system failing.[34] Switching from a relational database to Cloud Datastore requires a paradigm shift for developers when modeling their data.
Developers worry that the applications will not be portable from App Engine and fear being locked into the technology.[35] In response, there are a number of projects to create open-source back-ends for the various proprietary/closed APIs of app engine, especially the datastore. AppScale, CapeDwarf and TyphoonAE[36] are a few of the open source efforts.
AppScale automatically deploys and scales unmodified Google App Engine applications over popular public and private cloud systems and on-premises clusters.[37] AppScale can run Python, Java, PHP, and Go applications on EC2, Google Compute Engine, Softlayer, Azure and other cloud vendors.
TyphoonAE[36] can run Python App Engine applications on any cloud that support linux machines.
Web2py web framework offers migration between SQL Databases and Google App Engine, however it doesn't support several App Engine-specific features such as transactions and namespaces.[38]
In Google I/O 2011, Google announced App Engine Backends, which are allowed to run continuously, and consume more memory.[39][40] The Backend API was deprecated as of March 13, 2014 in favor of the Modules API.[41]
In Oct 2011, Google previewed a zero maintenance SQL database, which supports JDBC and DB-API.[42] This service allows to create, configure, and use relational databases with App Engine applications. Google Cloud SQL offers MySQL 5.5 and 5.6.[43]
Google App Engine requires a Google account to get started, and an account may allow the developer to register up to 25 free applications and an unlimited number of paid applications.[44]
Google App Engine defines usage quotas for free applications. Extensions to these quotas can be requested, and application authors can pay for additional resources.[45]
Stanford University. (online video archive)
Manage research, learning and skills at defaultLogic. Create an account using LinkedIn or facebook to manage and organize your IT knowledge. defaultLogic works like a shopping cart for information -- helping you to save, discuss and share. | http://www.defaultlogic.com/learn?s=Google_App_Engine | CC-MAIN-2017-47 | refinedweb | 857 | 56.96 |
On Mon, Apr 25, 2005 at 02:54:58PM -0700, Steven Dake wrote:> On Mon, 2005-04-25 at 09:58, David Teigland wrote:> > The core dlm functions. Processes dlm_lock() and dlm_unlock() requests.> > Creates lockspaces which give applications separate contexts/namespaces in> > which to do their locking. Manages locks on resources' grant/convert/wait> > queues. Sends and receives high level locking operations between nodes.> > Delivers completion and blocking callbacks (ast's) to lock holders.> > Manages the distributed directory that tracks the current master node for> > each resource.> > > > David> > Very positive there are some submissions relating to cluster kernel work> for lkml to review.. good job..> > I have some questions on the implementation:> > It appears as though a particular processor is identified as the "lock> master" or processor that maintains the state of the lock. So for> example, if a processor wants to acquire a lock, it sends a reqeust to> the lock master which either grants or rejects the request for the> lock. What happens in the scenario that a lock master leaves the> current configuration? This scneario is very likely in practice. Of course, every time a node fails.> How do you synchronize the membership events that occur with the kernel> to kernel communication that takes place using SCTP?SCTP isn't much different than TCP, so I'm not sure how that's relevant.It's used primarily so we can take advantage of multi-homing when you haveredundant networks.When the membership of a lockspace needs to change, whether adding orremoving a node, activity is suspended in that lockspace on all the nodesusing it. After all are suspended, the lockspace is then told (on alllockspace members) what the new membership is. Recovery then takes place:new masters are selected and waiting requests redirected.> It appears from your patches there is some external (userland)> application that maintains the current list of processors that qualify> as "lock servers". correct> Is there then a dependence on external membership algorithms?We simply require that the membership system is in agreement before thelockspace is told what the new members are. The membership systemultimately drives the lockspace membership and we can't have themembership system on different nodes telling the dlm different storiesabout who's in/out.So, yes, the membership system ultimately needs to follow some algorithmthat guarantees agreement. There are rigorous, distributed ways of doingthat (your evs work which I look forward to using), and simpler methods,e.g. driving it from some single point of control.> What user application today works to configure the dlm services in the> posted patch?I've been using the command line program "dlm_tool" where I act as themembership system myself. We're just putting together pieces that willdrive this from a membership system (like openais). Again, the pieces youdecide to use in userspace are flexible and depend on how you want to usethe dlm.> With usage of SCTP protocol, there is now some idea of moving the> protocol for cluster communication into the kernel and using SCTP as> that protocol...Neither SCTP nor the dlm are about cluster communication, they're bothabout simple point-to-point messages. When you move up to userspace andstart talking about membership, then the issue of group communicationmodels comes up and your openais/evs work is very relevant. 
Might you bemisled about what SCTP does?Dave-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to [email protected] majordomo info at read the FAQ at | http://lkml.org/lkml/2005/4/26/30 | CC-MAIN-2017-47 | refinedweb | 585 | 56.05 |
README
js-typedjs-typed
Un-opinionated runtime type-checking and Multiple Dispatch / Multimethods for Javascript.
WIP!WIP!
Should be fairly stable at this point but no promises yet.
MotivationMotivation
There are two schools of thought that seemingly dominate the landscape of adding types to Javascript: old-school compile-time static analysis (MS TypeScript, Facebook Flow) and implementing the fruits of category theory in Javascript (e.g. Sanctuary, FantasyLand, PureScript). Those approaches have their strengths and weaknesses.
Compile-time static analysis:
- No runtime performance penalty.
- Familiar to C++/C#/Java et al. programmers.
- Safe.
- Fail early, prevents many runtime errors.
- No abstract types. Conflates the notion of types and classes, although both have interface types.
- No runtime manipulation, all the type info is lost in compiling.
- No true underlying mathematical formalism.
Category Theory Types:
- Can capture very complex types and relationships, can be abstract.
- Extensible.
- Mathematically formalized.
- Familiar to ML-Family programmers.
- Safe.
- CT Types do not necessarily map to Javascript types intuitively.
- Requires (for most) learning a new coding style / way of modeling problems.
This library exists because I find neither of those lists acceptable. I want safe(r), useful types that are closer to the underlying Javascript types for accessibility, but without losing the dynamism and runtime extensibility (and ability to capture complex types). The trade-offs are performance and safety: runtime type checks have their cost and they (by definition) occur at runtime, so no compile-time feedback.
So why even have them? Several reasons. Firstly, moving the failure earlier in the process. Functions checked by js-typed are auto-curried and the type checks occur as they receive arguments, meaning that instead of getting an error deep into a running program the error is more likely to occur during initialization. Secondly, js-typed can define types that could only be confirmed by a predicate check at runtime, e.g. a finite number type. Lastly, and perhaps most importantly, defining function signatures that are accessible at runtime enables polymorphic functions (i.e. multiply dispatched functions / multimethods) to be easily written without lots of boilerplate.
And you can have (some) of the best of both worlds: to the extent that multiple dispatch is not required and one is only interested in static type safety its easy enough to layer js-typed on top of a static analyzer like flow. Only use js-typed when you need to incrementally check a curried function, want a run-time check on input, or need multiple dispatch.
This may be a sort of Sisyphusian tilting at windmills (not to mention promiscuous mixing of metaphors), but we'll give it the ol' college try. One last thing that certainly deserves mention is contracts.js, a library that borrows from Racket's notion of contracts. Although I like the ideas in Racket contracts, and to the extent that I've looked at it I like contracts.js, but it uses a HM-ish syntax for defining contracts, and just like with PureScript et al. I'd prefer something a little closer to native Javascript. It also seems to be unmaintained: as of this writing it hasn't had a commit in almost a year.
TypesTypes
To get around some issues with typing, js-typed uses predicate or duck types, i.e. a value is recognized as being a certain type if it passes a predefined predicate test. In addition, js-typed can work with concrete types (and better than the underlying Javascript built-in operations).
js-typed will recognize all built-ins for their 'real' type: null, [],
/rsts/, etc, will be 'null',
'array', 'regexp', and so on. In addition to accurately capturing the concrete types of Javascript's
built-ins, js-typed provides some useful predicate types:
- nil: returns true on null and undefined.
- some: opposite of nil, avoids property access / assignment errors.
- clean: type of
Object.create(null)*.
- key: can be used as property accessor on a Javascript object, i.e. String or Symbol.
- primitive: Javascript primitive types: Number, String, Boolean, undefined, Symbol, and, for our purposes here, null.
- reference: Non-primitive types (e.g. Object, Array, RegExp)
- any: aliases '*', __ returns true for any argument.
*
Object.create(null) is special, because it does not entirely uphold the contract of a
Javascript object (e.g. has no
toString method). Therefor,
isType('object', Object.create(null))
returns false.
Some of these types are actually sum types: nil is the sum type of null and undefined, key is the
sum type of string and symbol, etc. Sum types are defined using the
sumType function, detailed
below.
Promises PromisesPromises Promises
If you are polyfilling things like Promise, Map, Set, Symbol, etc. for cross-browser support then I recommend that you define sensible predicate-checked types for those using this library as it will otherwise see them as plain Javascript objects (it will correctly type native implementations automatically). See Defining Types below.
Defining New Types:Defining New Types:
js-typed provides the
defType function for defining new types.
defType takes a name for the type
, a predicate for testing them, and an (optional) type constructor. defType then registers the type
and returns an object with two methods:
is and
of. It is not necessary to capture the return
value.
import * as types from 'js-typed'; let Foo = class {}; let FooType = types.defType('foo', x => x instanceof Foo, types.guardClass(Foo)); // js-typed will now recognize the foo type types.isType('foo', new Foo); //true FooType.is(new Foo); //true FooType.of({}) instanceof Foo //true
This utility makes it possible to describe interesting types:
types.defType('numeric', x => !Number.isNaN(+x)); //can be coerced to a usable number types.defType('positive', x => types.isType('number', x) && x > 0); // or for a even more complex case, first we'll define a constructor: function makeNonEmpty(...args) { switch (args.length) { case 0: throw new Error('non-empty cannot be constructed with no arguments'); case 1: if (types.isType('array', args[0])) { if (!args[0].length) { throw new Error("Empty array cannot be non-empty"); } return args[0]; } if (types.isType('string', args[0])) { if (!args[0].length) { throw new Error("Empty string cannot be non-empty"); } return args[0]; } if (types.isType('number', args[0])) { if (!args[0]) { throw new Error("Zero cannot be non-empty"); } return args[0]; } if (types.isType('object', args[0])) { if (!Object.keys(args[0]).length) { throw new Error("Empty object cannot be non-empty"); } return args[0]; } // fall-through case 2: return args; } } // then register the type with a predicate and capture the return value: let NonEmpty = types.defType( 'not-empty', x => { return ( x instanceof NonEmpty || (types.isType('string', x) && x.length !== 0) || (types.isType('array', x) && x.length !== 0) || (types.isType('number', x) && x !== 0) || (types.isType('object', x) && Object.keys(x).length !== 0) ); }, makeNonEmpty }); NonEmpty.of(1,2,3); // [1,2,3] NonEmpty.of(['a','b']); // ['a', 'b'] NonEmpty.is(''); // false
The return value of the defType function is an object with at least two methods:
is applies the
predicate tests to the arguments and
of applies the type constructor. The type constructor by
convention / definition should return a value that passes the predicate check or throw an error.
Note that notion is not to conflate the type NonEmpty with the value that has type NonEmpty:
an array literal like
[1,2,3] is still a NonEmpty as far as js-typed is concerned even though
we didn't create it with the NonEmpty constructor we defined above. Note also that you must
supply at least a predicate. The constructor defaults to
Object.
Sum TypesSum Types
To define a type that can be one of several types js-typed provides the
sumType function which
composes two or more types, similar to tagged unions in C. For instance we can define the 'nil'
type as the union of the 'null' and 'undefined' types:
types.sumType('nil', 'null', 'undefined');
ProtocolsProtocols
If you need to describe a spec that subtypes can implement, you can use
defProtocol. The three
arguments to
defProtocol are the name, a class to serve as a template, and an optional predicate
that defaults to the argument being an instance of the template class.
let Functor = types.defProtocol( 'functor', types.guardClass(class Functor { constructor() {} map(f) { return f(this); } }), a => a && types.isType('function', a.map) ); Functor.is([]); // true // now we can implement the spec: let Mappable = types.defType('mappable', x => x && types.isType('function')).implements(Functor); let m = Mappable.of({}); m.foo = 3; m.map(x => x.foo); // 3 // but that's kinda boring, so lets do something more interesting. Protocols can themselves be // extended: let Monad = types.defProtocol( 'monad', m => { return types.isType('functor', m) && types.isType('function', m.mbind) }, class Monad { constructor(name) { // test, insofar as is possible, that the subclass is in fact a Monad if (this.constructor.mbind === Monad.mbind) { throw new Error(`${name} must override the default mbind static method`); } // reference type equality will not hold, so warn instead of throw if (this.mbind(IDENTITY)(this) !== this.map(IDENTITY)) { console.warn(`cannot infer identity property for Monad ${name}`); } } static mbind() { // meant to be overriden } } ).implements(Functor); // then given a concrete implementation: let Maybe = types.defType( 'maybe', a => a instanceof Maybe, // don't worry about the guardClass bit, for now just know that it make a class callable without // 'new' types.guardClass(class Maybe { constructor(value) { // NOTE: this is for illustrative purposes, in real life one might prefer a more robust // data-hiding pattern like Symbols or WeakMaps this.__value = value; } map(f) { return types.isType('nil', this.__value) ? this : new Maybe(f(this.__value)); } toString() { return `Maybe(${this.__value})`; } static mbind(f) { return function(maybe) { let result = f(maybe.__value); return types.isType('maybe', result) ? result : Maybe.of(result); } } }) ).implements(Monad); let maybe5 = (() => [5, null][Math.floor(Math.random() * 2)]); let y = 0; let maybe10 = maybe5.map(x => { y = 1; return x * 2 }); // now there are two possibilities, either maybe10 is a Maybe with value 10 and y is 1, or maybe10 // is a Maybe(null) and y is still zero because the mapped function never ran.
Additionally js-typed supplies two curried utility functions to aid in writing predicates:
hasProp and
respondsTo.
let hasFoo = types.hasProp('foo'); hasFoo({foo: 3}); // true types.hasProp('bar', {bar: null}); // false, only true if prop exists and is non-null let hasFooMethod = types.respondsTo('foo'); hasFooMethod({foo: 3}); // false hasFooMethod({foo: function() { return 'foo'; }}); // true
You can also tag a value as having a particular type using
tag. You can even tag a constructor to
automatically tag all of its instances:
let foo = {}; types.tag('foo', foo); types.isType('foo', foo); // true let Bar = types.tag('bar', class {}); types.isType('bar', new Bar()); // true
Guarding functionsGuarding functions
Now that we've defined some types, we want to use them to avoid writing a bunch of tedious
boilerplate checks to avoid getting an error like 'cannot read property foo of 'undefined''.
js-typed supplies the
guard function to do just that.
let add = types.guard(['number', 'number'], (a, b) => a + b); add(3, 3); // 6 add(3, 'a'); // throws exception let getFoo = types.guard('exists', x => x.foo); getFoo({foo: 3}); // 3 getFoo('a'); // undefined getFoo(null); // throws because it doesn't match the signature.
A couple of other things to note about guarded functions is that they are auto-curried, can have a specified arity (even beyond the type-checked arguments), and can take either a type or an array of types to check against. If you supply types to check it will infer the desired arity from the number of types unless you supply one that's longer that the number of types. The type checking is incremental: a curried function will throw as soon as you pass it an argument it can't match, even if it doesn't have all of its arguments yet.
let add3 = add(3); add3(4); // 7 let addAndRaise = types.guard(['number','number','number'], (a, b, c) => (a + b) ** c); let partial = addAndRaise(3); let almost = partial('a'); // throws partial(1, 2); // 16 // here only the first argument will be checked to make sure its a function, but the function won't // actually be called until all the arguments are present. This pattern can obviate the need for an // anonymous function that serves no purpose other than to gather arguments and delay execution of // the callback. let takesCallBack = types.guard(3, 'function', (fn, a, b) => fn(a, b)); // We can also guard constructors using guardClass: let guardedDate = types.guard(['number', 'number', 'number'], Date); let is2014 = guardedDate(2014); let isJan2014 = is2014(0); isJan2014(1); // Date: Wed Jan 01 2014 00:00:00
Guarded constructors are also auto-curried, and may be called with or without
new.
Multiple DispatchMultiple Dispatch
We can also fake multiple dispatch / multimethods using js-typed's
Dispatcher function:
let takesMany = new types.Dispatcher([ [['string', 'string'], (a, b) => a + b], [['number', 'number'], (a, b) => a * b] ]); takesMany('Hello ', 'World'); // Hello World takesMany(6, 7); // 42
Lets say we want to add a case where if the argument is an array it returns the sum of the array's
contents: something like
[['array'], arr => arr.reduce(sum)]. However, that array case is not
very safe, lets define a new type and use that instead:
types.defType('Array<Number>', x => types.isType('array', x) && x.every(y => !Number.isNaN(+y))); takesMany.add([['Array<Number>'], arr => arr.reduce(sum)]); takesMany([1,2,3]); // 6 takesMany(['1','2','3']); // throws an exception because it has no matching signature.
We can also add a default case for when it doesn't match any other type signatures:
takesMany.setDefault(x => x); takesMany(['1','2','3']); // ['1','2','3']
The syntax though isn't the prettiest. Fortunately, the Dispatcher will also take guarded functions and automatically use their guarded signature:
let repeater = types.guard(['number', 'string'], (a, b) => b.repeat(a)); takesMany.add(repeater); takesMany(3, 'a'); // 'aaa' | https://www.skypack.dev/view/js-typed | CC-MAIN-2022-05 | refinedweb | 2,319 | 58.58 |
distutils command upload pushes the distribution files to PyPI.
The command is invoked immediately after building one or more distribution files. For example, the command
python setup.py sdist bdist_wininst upload
will cause the source distribution and the Windows installer to be uploaded to PyPI. Note that these will be uploaded even if they are built using an earlier invocation of setup.py, but that only distributions named on the command line for the invocation including the upload command are uploaded.
The upload command uses the username, password, and repository URL from the $HOME/.pypirc file (see section.
You can use the --sign option to tell upload to sign each uploaded file using GPG (GNU Privacy Guard). The gpg program must be available for execution on the system PATH. You can also specify which key to use for signing using the --identity=name option.
Other upload options include --repository=url.
will be prompt to type it when needed.
If you want to define another server a new section can be created and listed in the index-servers variable:
[distutils] index-servers = pypi other [pypi] repository: <repository-url> username: <username> password: <password> [other] repository: username: <username> password: <password>
register can then be called with the -r option to point the repository to work with:
python setup.py register -r
For convenience, the name of the section that describes the repository may also be used:
python setup.py register -r other
The long_description field plays a special role at PyPI. It is used by the server to display a home page for the registered package.
If you use the reStructuredText syntax for this field, PyPI will parse it and display an HTML output for the package home page.
The long_description field can be attached to a text file located in the package:
from distutils.core import setup with open('README.txt') as file: long_description = file.read() setup(name='Distutils', long_description=long_description)
In that case, README.txt is a regular reStructuredText text file located in the root of the package besides setup.py.
To prevent registering broken reStructuredText content, you can use the rst2html program that is provided by the docutils package and check the long_description from the command line:
$ python setup.py --long-description | rst2html.py > output.html
docutils will display a warning if there’s something wrong with your syntax. Because PyPI applies additional checks (e.g. by passing --no-raw to rst2html.py in the command above), being able to run the command above without warnings does not guarantee that PyPI will convert the content successfully. | http://www.wingware.com/psupport/python-manual/3.2/distutils/packageindex.html | CC-MAIN-2014-42 | refinedweb | 426 | 57.47 |
SUIDSandbox
IMPORTANT NOTE: The Linux SUID sandbox is almost but not completely removed. See This page is mostly out-of-date.
With r20110, Chromium on Linux can now sandbox its renderers using a
SUID helper binary. This is one of our layer-1 sandboxing solutions.
SUIDhelper executable
The
SUID helper binary is called
chrome_sandbox and you must build it separately from the main ‘chrome’ target. Chrome now just assumes it's next to the executable in the same directory. You can also control its path by CHROME_DEVEL_SANDBOX environment variable.
In order for the sandbox to be used, the following conditions must be met:
SUIDand executable by other.
If these conditions are met then the sandbox binary is used to launch the zygote process. Once the zygote has started, it asks a helper process to chroot it to a temp directory.
CLONE_NEWPIDmethod
The sandbox does three things to restrict the authority of a sandboxed process. The
SUID helper is responsible for the first two:
SUIDhelper chroots the process. This takes away access to the filesystem namespace.
SUIDhelper puts the process in a PID namespace using the
CLONE_NEWPIDoption to clone(). This stops the sandboxed process from being able to
ptrace()or
kill()unsandboxed processes.
In addition:
ptrace()each other. More specifically, it stops the sandboxed process from being
ptrace()'d by any other process. This can be switched off with the
--allow-sandbox-debuggingoption.
Limitations:
CLONE_NEWPID. If the
SUIDhelper is run on a kernel that does not support
CLONE_NEWPID, it will ignore the problem without a warning, but the protection offered by the sandbox will be substantially reduced. See LinuxPidNamespaceSupport for how to test whether your system supports PID namespaces.
setuid()method
This is an alternative to the
CLONE_NEWPID method; it is not currently implemented in the Chromium codebase.
Instead of using
CLONE_NEWPID, the
SUID helper can use
setuid() to put the process into a currently-unused UID, which is allocated out of a range of UIDs. In order to ensure that the
UID has not been allocated for another sandbox, the
SUID helper uses getrlimit() to set
RLIMIT_NPROC temporarily to a soft limit of 1. (Note that the docs specify that setuid() returns
EAGAIN if
RLIMIT_NPROC is exceeded.) We can reset
RLIMIT_NPROC afterwards in order to allow the sandboxed process to fork child processes.
As before, the
SUID helper chroots the process.
As before, LinuxZygote can set itself to be undumpable to stop processes in the sandbox from being able to
ptrace() each other.
Limitations:
ptrace()a sandboxed process because they run under different UIDs. This makes debugging harder. There is no equivalent of the
--allow-sandbox-debuggingother than turning the sandbox off with
--no-sandbox.
SUIDhelper can check that a
UIDis unused before it uses it (hence this is safe if the
SUIDhelper is installed into multiple chroots), but it cannot prevent other root processes from putting processes into this
UIDafter the sandbox has been started. This means we should make the
UIDrange configurable, or distributions should reserve a
UIDrange.
CLONE_NEWNETmethod
The
SUID helper uses CLONE_NEWNET to restrict network access.
We are splitting the
SUID sandbox into a separate project which will support both the
CLONE_NEWNS and
setuid() methods:
Having the
SUID helper as a separate project should make it easier for distributions to review and package.
Older versions of the sandbox helper process will only run
/opt/google/chrome/chrome. This string is hard coded (
sandbox/linux/suid/sandbox.cc). If your package is going to place the Chromium binary somewhere else you need to modify this string. | https://chromium.googlesource.com/chromium/src/+/lkcr/docs/linux_suid_sandbox.md | CC-MAIN-2018-13 | refinedweb | 589 | 64.41 |
QuickTip : Why is my disk usage steadily growing when using docker?
In the past few years Docker and K8S have turned into the de-facto industry standard in containerization and easily scaling any infrastructure ( large or small ).
These two systems have had such an incredible adoption that many of us have neglected the flaws that might be apparent in them. This quicktip will teach you about one such flaw and an easy solution to something that most people will not even realize is right within reach.
Why is my disk usage steadily growing over time?
A couple days ago I was horrified when I logged into one of our staging servers and saw the following graph:
When looking at an (almost) exact replica of this setup on one of our production servers, the graph looked like this:
Both servers were workers in a larger kubernetes cluster, and after further investigation it seemed that none of our production servers were affected, but all of our staging workers exhibited the same strange symptoms of steady disk usage growth over a longer period of time. So I asked the question “Why is my disk usage steadily growing over time?”.
As my mind was racing to unearth the truth behind this dilemma I started listing the differences between our production and staging setup and dove head first into Docker and K8S documentation. After a short but tedious search I found the reason ( and as it turned out an easy solution, to an otherwise interesting challenge ).
Docker builds have a slight pitfall
To show you what I mean with the title of this section, let me show you a simple example. Let’s say that we have a simple docker image extending from Alpine Linux that has bash installed :
After building this image, its size will be approx 35.8MB:
Further investigation with the docker history command shows you the size of each layer in this image:
As you can see, Alpine takes up about 5.5MB, the bash installation takes up an approximate 5.4MB, and an additional 25MB for vim. Let’s say that after a while we realize that all of this can actually be done in a oneliner and make the adjustments to the Dockerfile:
After building the image, it now takes up an approximate 34.4MB (thanks to — no-cache):
However, Docker stores untagged builds separately and does not remove old latest builds by design. And does not remove the cached layers, only the intermediate containers.
Running a quick docker images proves this:
So after 3 builds instead of the original 34.4MB, we are actually using up an approximate 106MB of disk space. The older image is still lying around. So in the end all of our latest builds start piling up these images and they’re chipping away disk space without being used.
We had found the culprit since our staging cluster always uses latest builds. The whole reason why our production cluster was not seeing this particular issue is because it uses tagged images and the amount of deployments is a lot lower than the “several a day” deploys on our staging cluster.
Enter the Docker System namespace
Since Docker 1.13 a set of commands belonging to the docker system namespace have been added. Below you will find a short description of two commands in the docker system namespace that will help you battle this particular issue:
docker system df
Provides you with the amount of disk space docker is using on your system, this also gives you a nice overview in the reclaimable section as to the amount of disk space that could potentially be reclaimed when running a prune:
docker system prune
Allows you to reclaim unused disk space by removing dangling/unused images.
In most cases this removal is safe to do, however do note that if you are running docker <= 17.06.0 this will also remove volumes that are unused.
docker system prune handles cleanup of unused networks, containers and images.
In my particular case on my local machine this managed to reclaim a total of 4.224GB (volumes were not deleted):
And on our staging server it managed to take our disk usage down to a reasonable percentage again:
tldr;
When using latest builds docker keeps previously built images on disk. Use the docker system df and docker system prune commands to take care of cleaning up these pesky unused images/networks/containers. | https://medium.com/@valkyrie_be/quicktip-why-is-my-disk-usage-steadily-growing-when-using-docker-f4b8bea49d24?source=---------2------------------ | CC-MAIN-2020-40 | refinedweb | 743 | 56.89 |
User:Peter Luschny/BellTransform
The Bell Transform
Summary: We introduce the partial Bell polynomials in two flavors: in the traditional way based on the variables x1, x2, ... and based on the variables x0, x1, x2, .... The coefficients of the x0-based univariate polynomials give the new and unexplored A257563.
Next we base the computation of the Bell transform on partitions, which gives a surprisingly fast algorithm for small n, similar to the algorithm used in SageMath for Bell polynomials.
Iterating the Bell transform using the row sums of the triangles as input for the next level gives us the higher order Bell numbers. In particular we recover Joerg Arndt's award-winning A187761 as the second-order Bell numbers.
We then apply the Bell transform and its inverse to the power functions, the multifactorials, the rising factorials and the falling factorials, showing more than 50 examples of sequences in the database. We found that almost none of them was described as the result of a Bell transformation.
♦ Bell polynomials
The Bell transform is a transformation which maps a sequence
a = a_0, a_1, a_2,...
to a triangle
T(0,0)
T(1,0) T(1,1)
T(2,0) T(2,1) T(2,2)
T(3,0) T(3,1) T(3,2) T(3,3)
defined by
T(n,k) = Sum_{j=0..n-k+1} binomial(n-1,j-1)*a_j*T(n-j,k-1) for k != 0, and T(n,0) = (a_0)^n.
With Sage this can be implemented as
def bell_trans(n,k,a):
    @cached_function
    def T(n, k):
        if k==0: return a[0]^n
        return sum(binomial(n-1,j-1)*a[j]*T(n-j,k-1) for j in (0..n-k+1))
    return T(n,k)
Let us look at the first few rows of the generated triangle.
(*)
a_0^0
a_0^1, a_1
a_0^2, a_1*a_0+a_2, a_1^2
a_0^3, a_1*a_0^2+2*a_2*a_0+a_3, a_1^2*a_0+3*a_2*a_1, a_1^3
a_0^4, a_1*a_0^3+3*a_2*a_0^2+3*a_3*a_0+a_4, a_1^2*a_0^2+5*a_1*a_2*a_0+4*a_3*a_1+3*a_2^2, a_1^3*a_0+6*a_2*a_1^2, a_1^4
For example to compute the unsigned Stirling numbers of the first kind A132393 one can write
s = [0]+[factorial(i) for i in (0..5)]
for n in (0..5): [bell_trans(n,k,s) for k in (0..n)]

[1]
[0, 1]
[0, 1, 1]
[0, 2, 3, 1]
[0, 6, 11, 6, 1]
[0, 24, 50, 35, 10, 1]
A second example shows that there are other important numbers that can be generated this way but do not always reduce to integers.
N = 6
t = [2^(1-j) if is_odd(j) else 0 for j in (0..N)]
for n in (0..N): [bell_trans(n,k,t) for k in (0..n)]

[1]
[0, 1]
[0, 0, 1]
[0, 1/4, 0, 1]
[0, 0, 1, 0, 1]
[0, 1/16, 0, 5/2, 0, 1]
[0, 0, 1, 0, 5, 0, 1]
These are the coefficients of the central factorials [1]
Another way to look at the symbolic sums in the triangle (*) above is to interpret the a_n as variables x_n of a multivariate polynomial. Then one arrives at the partial Bell polynomials:
def partial_bell_polynomial(n,k):
    v = var(['x_'+str(i) for i in (0..n+1)])
    return bell_trans(n,k,v).expand()

for n in (0..4): [partial_bell_polynomial(n,k) for k in (0..n)]

[1]
[x_0, x_1]
[x_0^2, x_0*x_1 + x_2, x_1^2]
[x_0^3, x_0^2*x_1 + 2*x_0*x_2 + x_3, x_0*x_1^2 + 3*x_1*x_2, x_1^3]
[x_0^4, x_0^3*x_1 + 3*x_0^2*x_2 + 3*x_0*x_3 + x_4, x_0^2*x_1^2 + 5*x_0*x_1*x_2 + 3*x_2^2 + 4*x_1*x_3, x_0*x_1^3 + 6*x_1^2*x_2, x_1^4]
Note that these polynomials depend on the variables x0, x1, x2, ... For some reason (though probably not a good one) the partial Bell polynomials are often defined to depend only on the variables x1, x2, ... In this case the first column of the above triangle is just ignored (except for the entry (0, 0), which is added in a somewhat unmotivated and unsystematic way). To get the form common in the older literature (see for example L. Comtet, Advanced Combinatorics, page 135) from our form is easy: we just have to set x0 = 0.
With Sage we can write:
for n in (0..4): [partial_bell_polynomial(n,k).subs(x_0=0) for k in (0..n)]

[1]
[0, x_1]
[0, x_2, x_1^2]
[0, x_3, 3*x_1*x_2, x_1^3]
[0, x_4, 3*x_2^2 + 4*x_1*x_3, 6*x_1^2*x_2, x_1^4]
So far we have only looked at the partial Bell polynomials. The Bell polynomials are defined as
B_n = Sum_{k=0..n} B_{n,k}.

def bell_polynomial(n):
    return sum(partial_bell_polynomial(n,k) for k in (0..n))

for n in (0..3): bell_polynomial(n)

1
x_0 + x_1
x_0^2 + x_0*x_1 + x_1^2 + x_2
x_0^3 + x_0^2*x_1 + x_0*x_1^2 + x_1^3 + 2*x_0*x_2 + 3*x_1*x_2 + x_3
Two things remain to be mentioned:
First, it is sometimes convenient to reduce the infinite number of variables x_0, x_1, x_2, ... to one variable x only.
Second, in the considerations above we worked with 'symbolic sums' (or in the Sage parlance in the Symbolic Ring SR). But of course we can also work with 'real' polynomials.
Putting these two requirements together we arrive at polynomials in the univariate polynomial ring in x over the rational field.
def univariate_bell_polynomial(n):
    p = bell_polynomial(n).subs(x_0=0)
    q = p({p.variables()[i]:x for i in range(len(p.variables()))})
    R = PolynomialRing(QQ, 'x')
    return R(q)

for n in (0..6): univariate_bell_polynomial(n)

1
x
x^2 + x
x^3 + 3*x^2 + x
x^4 + 6*x^3 + 7*x^2 + x
x^5 + 10*x^4 + 25*x^3 + 15*x^2 + x
x^6 + 15*x^5 + 65*x^4 + 90*x^3 + 31*x^2 + x
If we extract the coefficients of these polynomials we find:
for n in (0..6): univariate_bell_polynomial(n).list()

[1]
[0, 1]
[0, 1, 1]
[0, 1, 3, 1]
[0, 1, 7, 6, 1]
[0, 1, 15, 25, 10, 1]
[0, 1, 31, 90, 65, 15, 1]
Lo and behold, these are the Stirling subset numbers A048993! This must have been observed before ;-)
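This is no accident: every monomial of the classical partial Bell polynomial B_{n,k} has total degree k, i.e. B_{n,k} is homogeneous of degree k. Hence the uniform substitution x_j = x gives

B_{n,k}(x, x, ..., x) = B_{n,k}(1, 1, ..., 1)*x^k = Stirling2(n,k)*x^k,

since B_{n,k}(1, 1, ..., 1) counts the set partitions of {1, 2, ..., n} into k blocks.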
Note the first substitution subs(x_0=0) which indicates the 'classical case' in the sense discussed above. If instead we use the uniform substitution subs(x_n=x) for all n we get:
def x0_based_univariate_bell_polynomial(n):
    p = bell_polynomial(n)
    q = p({p.variables()[i]:x for i in range(len(p.variables()))})
    R = PolynomialRing(QQ,'x')
    return R(q)

for n in (0..6): x0_based_univariate_bell_polynomial(n)

1
2*x
3*x^2 + x
4*x^3 + 5*x^2 + x
5*x^4 + 14*x^3 + 10*x^2 + x
6*x^5 + 30*x^4 + 48*x^3 + 19*x^2 + x
7*x^6 + 55*x^5 + 158*x^4 + 149*x^3 + 36*x^2 + x

for n in (0..6): x0_based_univariate_bell_polynomial(n).list()

[1]
[0, 2]
[0, 1, 3]
[0, 1, 5, 4]
[0, 1, 10, 14, 5]
[0, 1, 19, 48, 30, 6]
[0, 1, 36, 149, 158, 55, 7]
The row sums
1, 2, 4, 10, 30, 104, 406, 1754, 8280, 42294, 231950, ...
are Paul Barry's A186021, Bell(n)*(2-0^n). Geoffrey Critzer commented: These are "the number of collections of subsets of {1,2,...,n-1} that are pairwise disjoint."
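A quick check: evaluating the x0-based univariate Bell polynomials computed above at x = 1 sums the coefficients in each row and reproduces these numbers.

[x0_based_univariate_bell_polynomial(n)(1) for n in (0..6)]

[1, 2, 4, 10, 30, 104, 406]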
The coefficients of the x0-based univariate polynomials are now in A257563. What is their combinatorial meaning?
♦ The Bell transform based on partitions
We have not yet considered the question of the efficiency of our implementation, to which we turn now.
If we delete the intermediate steps which we introduced for the sake of exposition above we can sum up our implementation as:
def bell_polynomial(n):
    X = var(['x_'+str(i) for i in (0..n+1)])
    @cached_function
    def T(n, k):
        if k==0: return k^n
        return sum(binomial(n-1,j-1)*T(n-j,k-1)*X[j] for j in (0..n-k+1)).expand()
    return sum(T(n,k) for k in (0..n))
The nice thing is that we can speed up things by explicitly summing over partitions of n instead of using the recursive form. This is somewhat surprising since the relevant formula looks quite formidable; the reader will find it as the first formula on Wikipedia's entry on Bell polynomials.
(Note that Wikipedia calls 'complete Bell polynomials' what we call 'Bell polynomials' and Mathworld calls 'Bell polynomials' what we call 'univariate Bell polynomials'.)
We start with the simpler case of the univariate Bell polynomials and look at an implementation which is essentially the implementation of Sage (GPL licensed), written by Blair Sutton.
def partition_based_univariate_bell_polynomial(n): polynom = x^n fn = factorial(n) for k in (0..n polynom += result*x^k return polynom
So let's benchmark!
%timeit univariate_bell_polynomial(10) 5 loops, best of 3: 105 ms per loop %timeit partition_based_univariate_bell_polynomial(10) 5 loops, best of 3: 14.1 ms per loop %timeit univariate_bell_polynomial(20) 5 loops, best of 3: 632 ms per loop %timeit partition_based_univariate_bell_polynomial(20) 5 loops, best of 3: 79.4 ms per loop %timeit univariate_bell_polynomial(30) 5 loops, best of 3: 2.15 s per loop %timeit partition_based_univariate_bell_polynomial(30) 5 loops, best of 3: 1.16 s per loop
Thus for small n we observe a speedup between 2 and 8.
Back to math. Our main goal is to explore the Bell transform. Therefore we skip the details concerning the implementation of the multivariate Bell polynomials (as polynomials) and give directly the formalism in its most useful form: as a sequence to triangle transformation. To make our setup more generic we now change from lists as input to functions.
def bell_transform(f, n): # partition_based row = [] fn = factorial(n) for k in (0.*prod([f(i-1) for i in p]) row.append(result) return row
♦ The Bell matrix
For example let's look at the family of Stirling-cycle/Lah numbers, which are the Bell transforms of the functions "n → factorial(n+j)" (for some fixed j).
for n in (0..7): bell_transform(factorial, n) [1] [0, 1] [0, 1, 1] [0, 2, 3, 1] [0, 6, 11, 6, 1] [0, 24, 50, 35, 10, 1] [0, 120, 274, 225, 85, 15, 1] [0, 720, 1764, 1624, 735, 175, 21, 1] Unsigned Stirling numbers of the first kind, A132393. for n in (0..7): bell_transform(lambda x: factorial(x+1), n) [1] [0, 1] [0, 2, 1] [0, 6, 6, 1] [0, 24, 36, 12, 1] [0, 120, 240, 120, 20, 1] [0, 720, 1800, 1200, 300, 30, 1] [0, 5040, 15120, 12600, 4200, 630, 42, 1] Unsigned Lah numbers, cf. A111596. for n in (0..7): bell_transform(lambda x: factorial(x+2), n) [1] [0, 2] [0, 6, 4] [0, 24, 36, 8] [0, 120, 300, 144, 16] [0, 720, 2640, 2040, 480, 32] [0, 5040, 25200, 27720, 10320, 1440, 64] [0, 40320, 262080, 383040, 199920, 43680, 4032, 128] Unsigned coefficients for rewriting generalized falling factorials into ordinary falling factorials, A136656.
The next step is a trival one: We collect these lists in a matrix, the Bell matrix, and call the function given as input the generator of the Bell matrix. With SageMath we can write this:
def bell_matrix(generator, dim): row = lambda n: bell_transform(generator, n) return matrix(ZZ, [row(n)+[0]*(dim-n-1) for n in srange(dim)])
Alternatively, if we want to avoid the matrix constructor and base the Bell transform on its recursive form we can compute the Bell matrix with this function:
def Bell_Matrix(generator, dim): A = [[0]*(n+1) for n in range(dim)] for n in range(dim): A[n][0] = 1 if n==0 else 0 if n>0: A[n][1] = generator(n-1) for k in (2..n): A[n][k] = sum(binomial(n-1,j-1)*A[n-j][k-1]*A[j][1] for j in (1..n-k+1)) return A
♦ The inverse Bell transform
The inverse Bell transform is the map generator → inverse(bell_transform(generator)). Here we consider 'bell_transform(generator)' as an infinite lower matrix of which we calculate the matrix inverse.
For example using the factorial function as the generator we can write:
bell_matrix(factorial, 8) [ 1 0 0 0 0 0 0 0] [ 0 1 0 0 0 0 0 0] [ 0 1 1 0 0 0 0 0] [ 0 2 3 1 0 0 0 0] [ 0 6 11 6 1 0 0 0] [ 0 24 50 35 10 1 0 0] [ 0 120 274 225 85 15 1 0] [ 0 720 1764 1624 735 175 21 1]
Then the inverse Bell transform can be computed as:
bell_matrix(factorial, 8).inverse() [ 1 0 0 0 0 0 0 0] [ 0 1 0 0 0 0 0 0] [ 0 -1 1 0 0 0 0 0] [ 0 1 -3 1 0 0 0 0] [ 0 -1 7 -6 1 0 0 0] [ 0 1 -15 25 -10 1 0 0] [ 0 -1 31 -90 65 -15 1 0] [ 0 1 -63 301 -350 140 -21 1]
Another example are the coefficients of the Abel polynomials A137452 which have the enumeration of the natural numbers as their generator:
bell_matrix(lambda n: n+1, 8).inverse() [ 1 0 0 0 0 0 0 0] [ 0 1 0 0 0 0 0 0] [ 0 -2 1 0 0 0 0 0] [ 0 9 -6 1 0 0 0 0] [ 0 -64 48 -12 1 0 0 0] [ 0 625 -500 150 -20 1 0 0] [ 0 -7776 6480 -2160 360 -30 1 0] [ 0 117649 -100842 36015 -6860 735 -42 1]
This is the inverse of the triangle of idempotent numbers A059297 :
bell_matrix(lambda n: n+1, 8) [ 1 0 0 0 0 0 0 0] [ 0 1 0 0 0 0 0 0] [ 0 2 1 0 0 0 0 0] [ 0 3 6 1 0 0 0 0] [ 0 4 24 12 1 0 0 0] [ 0 5 80 90 20 1 0 0] [ 0 6 240 540 240 30 1 0] [ 0 7 672 2835 2240 525 42 1]
Alternatively, if we want to avoid the matrix constructor and base the inverse Bell transform on its recursive form we can use the following function:
def Inverse_Bell_Matrix(f, dim): A = Bell_Matrix(f, dim) M = [[0]*(n+1) for n in range(dim)] for n in range(dim): M[n][n] = 1/A[n][n] for k in range(n-1,0,-1): M[n][k] = -sum(A[i][k]*M[n][i] for i in range(n,k,-1))/A[k][k] return M Inverse_Bell_Matrix(factorial, 7)
♦ The Bell inverse of a sequence
Given an invertible Bell matrix B with generator g then we call the generator of the inverse of B the Bell inverse of g. Thus we are looking here at a sequence to sequence transformation.
With Sage we can implement this as:
# Given a sequence f returns the inverse Bell sequence of f. def Inverse_Bell_Sequence(f, dim): M = Inverse_Bell_Matrix(f, dim) return [M[n][1] for n in (1..dim-1)]
Some examples:
# [1, -1, 1, -1, 1, -1, 1, -1, ...] maps to # [1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880, ...] print Inverse_Bell_Sequence(lambda n: (-1)^n, 9) # [1, 2, 2, 0, 0, 0, 0, 0, 0, ...] maps to # [1, -2, 10, -80, 880, -12320, 209440, -4188800, ...] print Inverse_Bell_Sequence(lambda n: [1,2,2][n] if n<3 else 0, 9) # [1, 2, 1, 0, 0, 0, 0, 0, 0, ...] maps to # [1, -2, 11, -100, 1270, -20720, 413000, -9726640, ...] print Inverse_Bell_Sequence(lambda n: [1,2,1][n] if n<3 else 0, 9)
More general: Let g be a 0-based sequence then we require g(0) = 0 and g(1) != 0 and write a = g(1), b = g(2), c = g(3), d = g(4) to simplify the notation. The Bell inverse of g then starts
1/a, -b/a^3, (-c*a+3*b^2)/a^5, -(d*a^2-10*b*c*a+15*b^3)/a^7, ...
If we assume a = 1, which will be often the case, this reduces to:
1, -b, -c+3*b^2, -d+10*b*c-15*b^3, ...
Therefore the skeleton of the coefficients is
Looking up this table in the OEIS we find Wolfdieter Lang's A176740 called 'Inversion of e.g.f. formal power series' and his paper E.g.f. Lagrange inversion.
There are two differences: Lang's table misses the leading '1' and the order of coefficients is different. For our purposes the latter is of no importance.
♦ The coefficients of the Bell polynomials
But why stop half way? So let's look at all the coefficients of the inverse Bell polynomials. The coefficients in the table above appear here in column 1.
It seems somewhat surprising that this table was not yet in the database. Now it is A268442. Let's also add the table of the coefficients of the Bell polynomials A268441 (also A036040 with different order and column 0 missing).
There is a simple check of the tables: replacing the lists by their sums reduces the triangles to the Stirling number tables: to the Stirling numbers of first kind and to the Stirling numbers of the second kind respectively.
The algorithm for building these tables is given below, implemented with Sage.
def bell_polynomial_matrix(dim, inverse): def bell_polynomial(n): X = var(['x'+str(i) for i in (0..dim)]) @cached_function def T(n, k): if k==0: return k^n return sum(binomial(n-1,j-1)*T(n-j,k-1)*X[j-1] for j in (0..n-k+1)).expand() return [T(n,k) for k in (0..n)] A = [[f for f in bell_polynomial(n)] for n in range(dim)] if not inverse: return A M = [[0 for k in (0..n)] for n in range(dim)] for n in range(dim): M[n][n] = 1/A[n][n] for k in range(n-1,-1,-1): M[n][k] = expand(-sum(A[i][k]*M[n][i] for i in range(n,k,-1))/A[k][k]) return M def coefficient_matrix(M): def coefficient(p): c = SR(p).fraction(ZZ).numerator().coefficients() return [0] if not c else c return [[coefficient(p) for p in M[n]] for n in range(len(M))]
The two tables above are then generated by the function call
M = bell_polynomial_matrix(8, inverse=true) coefficient_matrix(M)
respectively by the call
L = bell_polynomial_matrix(8, inverse=false) coefficient_matrix(L)
♦ Bell numbers of higher order
Consider the sequence
S0 → T0 → S1 → T1 → S2 → T2 → ...
Here Sn → Tn indicates the Bell transform mapping a sequence Sn to the triangle Tn and Tn → Sn+1 the operator associating a triangle with the sequence of its row sums. For example if we start this iteration with the sequence <1,1,1,...> we get
S0 = A000012 = <1,1,1,...> T0 = A048993 # Stirling subset numbers, S1 = A000110 # Bell numbers, T1 = A264428 # Bell transform of Bell numbers, S2 = A187761 # second-order Bell numbers, T2 = A264430 # Bell transform of second-order Bell numbers, S3 = A264432 # third-order Bell numbers.
Similarly if we start the iteration with the sequence <1,-1,1,-1,...> we get the higher order complementary Bell numbers.
A nice observation is that Joerg Arndt's award winning A187761 (number of maps f: [n] → [n] with f(x) ≤ x and f(f(x)) = f(f(f(x)))) here turns up as the second-order Bell numbers.
def bell_second_order(generator, n): G = [generator(k) for k in range(n)] row = lambda n: bell_transform(n, G) S = [sum(row(k)) for k in range(n)] return bell_transform(n, S)
Arndt's sequence can now be easily computed from the constant sequence 1,1,1,... as:
[sum(s) for s in [bell_second_order(lambda k: 1, n) for n in range(10)]]
The sequence is also column 1 in the triangle A264430.
[bell_second_order(bell_number, n) for n in range(8)] [[1], [0, 1], [0, 1, 1], [0, 2, 3, 1], [0, 6, 11, 6, 1], [0, 23, 50, 35, 10, 1], [0, 106, 268, 225, 85, 15, 1], [0, 568, 1645, 1603, 735, 175, 21, 1]]
♦ Associated Stirling subset numbers
As we have seen in the table above the Stirling subset numbers are generated by applying the Bell transform to the simplest sequence of positive numbers: the all 1's sequence.
On the other hand one of the simplest methods to transform a sequence is the Hilbert's Hotel Transform (HHT) also known as the 'shift right operation' in CS communities: move simultaneously cell n to cell n+1 and 0 to cell 0.
So what happens if we apply this idea to 1,1,1,... ? We define in Sage:
hht = lambda k: lambda n: 1 if n>=k else 0 for n in (0..5): print bell_matrix(hht(n), 12)
In terms of A-numbers we get for n = 0, 1, 2,... the triangles:
A048993, A008299, A059022, A059023, A059024, A059025.
which are called the (n+1)-associated Stirling subset numbers following Comtet.
However compared to our construction in the last section the construction of this sequence of sequences has a disadvantage: apart from the case n = 0 the resulting matrices are singular; in other words, the inverse Bell transform cannot be applied to these generators.
♦ The Bell transform of multifactorials
An overview about multifactorials can be found in my July 2011 blog post. In Sage parlance the general definition is:
multifactorial = lambda a,b: lambda n: prod(a*k + b for k in (0..n-1))
Here the multifactorials depends on two parameter, a and b. For example multifactorial(2, 2) are the double factorials of even numbers (2*n)!! = 2^n*n!, A000165. Note that multifactorial(2, 2) is a function, evaluated at n=4 gives multifactorial(2, 2)(4) = 384.
Now we can study the Bell transform of multifactorials.
for a in (1..4): for b in (1..a): print "Bell matrix generated by multifactorial", (a,b) bell_matrix(multifactorial(a,b), 6) print "Inverse Bell matrix generated by multifactorial", (a,b) inverse_bell_matrix(multifactorial(a,b), 6)
The table below shows the output in terms of A-numbers. From the 20 basic cases 17 have been in the OEIS (the missing three were added by the author). Looking closer at the entries it becomes obvious that the Bell transform of multifactorials was never studied systematically before.
Some special cases of the Bell-inverse of multi-factorials are:
bell_inverse(MFn, n) = n^k for k≥0.
bell_inverse(MFn, n-1) = falling_factorial(n - 1, k) for k≥0.
bell_inverse(MFn, 1) = MFn-1, n-2 (for n≥3).
bell_inverse(MF2*n, n) = [1, n, 0, 0, 0,...].
bell_inverse(MF2*n, 2*n-2) = A161381_row(n-1).
♦ The Bell transform of powers
The power functions can be seen as the degenerated cases of multifactorials: as those cases where a = 0 in the definition of multifactorials.
multifactorial = lambda a,b: lambda n: prod(a*k + b for k in (0..n-1))
These cases have been studied by Wolfdieter Lang. Lang calls the Bell transform of powers "Stirling2 triangles with scaled diagonals".
for n in range(6): bell_matrix(lambda k: n^k, 7)
Lang calls the signed inverse Bell transform of powers "Generalized Stirling number triangles of first kind".
for n in (0..5): bell_matrix(lambda k: n^k, 7).inverse()
The Bell-inverse of the power functions are
bell_inverse(n^k) = n^k*k! for k≥0.
♦ The Bell transform of rising factorials
risingfactorial = lambda n: lambda k: rising_factorial(n, k) for n in (0..7): print bell_matrix(risingfactorial(n), 7) print inverse_bell_matrix(risingfactorial(n), 7)
The Bell transform of the rising factorial and its inverse lead to:
The Bell-inverse of rising factorials are multifactorials:
bell_inverse(risingfactorial(n)) = MF(n-1,1)(k+1) for k≥0.
♦ The Bell transform of falling factorials
Similarly for the Bell transform of falling factorials:
fallingfactorial = lambda n: lambda k: falling_factorial(n, k) for n in (0..5): print bell_matrix(fallingfactorial(n), 7) print inverse_bell_matrix(fallingfactorial(n), 7)
The Bell-inverse of falling factorials are multifactorials:
bell_inverse(fallingfactorial(n)) = MF(n+1,n)(k) for k≥0.
♦ The Bell transform of monotonic factorials
There seems to be no standard name for the function
monotonicfactorial = lambda r: lambda n: rising_factorial(r, n)/factorial(n)
I therefore dubbed it monotonic factorial for the purpose of this post (as it counts the number of monotone words with length n over an r-alphabet; most often it is written as binomial(n+r-1, n) ).
for n in (0..4): print bell_matrix(monotonicfactorial(n), 7) print inverse_bell_matrix(monotonicfactorial(n), 7)
Astonishing little is found in the OEIS about this class of sequences, with the notable exception of Roger L. Bagula's A137452 (Olivier Gérard's A061356) which is the inverse Bell transform of A000169. Certainly more triangles of this type should be entered into the OEIS.
♦ An implementation for Maple
The Maple implementation of the Bell matrix given below is compatible with the interface and conventions of Sloane's Maple transforms.
# BELL: Computes the Bell matrix of a sequence. # Given the list [a(0),...,a(dim)] returns the Bell matrix # with dimension dim+1 of the sequence a. BELL := proc(a) local M, dim, n, k: if whattype(a) <> list then RETURN([]) fi: dim := nops(a); M := Matrix(dim, shape=triangular[lower]); M[1,1] := 1; for n from 1 to dim-1 do M[n+1,2] := a[n] od; for n from 1 to dim-1 do for k from 1 to n do M[n+1,k+1] := add(binomial(n-1,j-1)*M[n-j+1,k]*M[j+1,2], j=1..n-k+1) od od; RETURN(M) end: BELLi:= proc(a) if whattype(a) <> list then RETURN([]) fi: RETURN(linalg[inverse](BELL(a))) end: # Examples: a := [seq(1,n=0..8)]: BELL(a); # A048993 b := [seq(n!,n=0..8)]: BELL(b); # A132393 c := [seq(n,n=1..9)]: BELL(c); # A059297 d := [seq((-1)^n*(n+1)^n,n=0..8)]: BELL(d); # A137452 e := [seq(`if`(n::odd,0,2*n!),n=0..8)]: BELL(e); # A137513 f := [seq(`if`(n<2,(-1)^n,0),n=0..8)]: BELL(f); # A104556 g := [seq(doublefactorial(2*n-1),n=0..8)]: BELL(g); # A001497
♦ Index of triangles generated by the Bell transform
The format of the list is [An[Ak]] which means that the triangle An is generated by the sequence Ak. If Ak is not in the OEIS than Ak might point to a related sequence or to dummy sequence Axxxxxx.
[A000369[A008545]] [A001497[A001147]] [A004747[A008544]] [A008275[A133942]] [A008277[A000012]] [A008296[A265313]] [A008297[A133942]] [A008298[A038048]] [A011801[A008546]] [A013988[A008543]] [A023531[A000007]] [A035342[A001147]] [A035469[A007559]] [A038455[A006963]] [A039621[A177885]] [A039683[A000165]] [A039692[A039647]] [A039810[A000110]] [A039811[A000258]] [A039812[A000307]] [A039813[A000357]] [A039814[A003713]] [A039815[A000268]] [A039816[A000310]] [A039817[A000359]] [A046089[A001710]] [A048176[A051262]] [A048786[A034177]] [A048993[A000012]] [A048994[A133942]] [A049029[A007696]] [A049218[A005359]] [A049352[A001715]] [A049353[A001720]] [A049374[A001725]] [A049385[A008548]] [A049403[A000161]] [A049404[A000925]] [A049410[A004552]] [A049411[A008279]] [A049424[A265609]] [A051141[A032031]] [A051142[A047053]] [A051150[A052562]] [A051151[A047058]] [A051186[A051188]] [A051187[A051189]] [A051231[A051232]] [A059297[A000027]] [A059298[A000027]] [A059419[A009006]] [A060281[A001865]] [A061356[A000169]] [A075497[A000079]] [A075498[A000244]] [A075499[A000302]] [A075500[A000351]] [A075501[A000400]] [A075502[A000420]] [A075503[A001018]] [A075504[A001019]] [A075505[A011557]] [A075525[A265024]] [A078521[A038048]] [A079621[A002866]] [A079638[A029767]] [A079639[A006252]] [A079640[A007840]] [A079641[A000629]] [A079642[A089064]] [A086915[A052849]] [A088729[A000670]] [A088814[A000262]] [A092082[A008542]] [A104556[A003475]] [A105278[A000142]] [A105599[A000272]] [A105786[A000272]] [A105819[A055860]] [A106239[A057500]] [A111246[A130716]] [A111593[A155585]] [A111594[A005359]] [A111596[A155456]] [A119274[A001813]] [A119275[A130706]] [A121408[A177145]] [A122848[Axxxxxx]] [A122850[A001147]] [A125553[A208529]] [A129062[A000629]] [A130123[A000038]] [A130191[A000110]] [A130534[A000142]] [A131222[A029767]] [A132056[A045754]] [A132062[A001147]] [A132393[A000142]] [A134141[A001730]] [A135338[A155456]] [A135494[A153881]] [A136590[A136591]] [A136595[A048287]] [A136630[A000035]] [A136656[A155456]] [A137312[A052849]] [A137320[A066459]] [A137339[A052560]] [A137378[A052510]] [A137431[Axxxxxx]] [A137433[Axxxxxx]] [A137452[A213236]] [A137513[Axxxxxx]] [A143395[A000225]] [A143543[A001187]] [A144402[A000161]] [A144633[A144636]] [A144634[A144636]] [A144644[A001899]] [A145520[A000040]] [A147308[A027641]] [A147309[A022902]] [A147311[A122045]] [A147312[A122045]] [A147315[A000111]] [A151509[Axxxxxx]] [A151511[Axxxxxx]] [A166317[A002436]] [A166318[A002436]] [A171996[Axxxxxx]] [A171998[Axxxxxx]] [A174893[A000142]] [A184962[A000670]] [A185285[A004123]] [A185296[Axxxxxx]] [A185415[Axxxxxx]] [A185419[A143523]] [A185422[A080635]] [A185690[A056594]] [A185951[A193356]] [A186366[A000111]] [A187082[Axxxxxx]] [A187084[Axxxxxx]] [A188062[Axxxxxx]] [A188066[Axxxxxx]] [A188832[A009843]] [A189898[A003027]] [A191249[A062738]] [A194938[A039647]] [A195204[A076726]] [A195205[Axxxxxx]] [A202183[Axxxxxx]] [A202184[Axxxxxx]] [A202185[Axxxxxx]] [A202189[Axxxxxx]] [A202190[Axxxxxx]] [A203412[A007559]] [A209849[A155585]] [A215771[A001710]] [A215861[A215851]] [A217756[A129271]] [A223511[A045755]] [A223512[A045756]] [A223513[A045757]] [A223514[Axxxxxx]] [A223515[Axxxxxx]] [A223516[Axxxxxx]] [A223517[Axxxxxx]] [A223518[Axxxxxx]] [A223522[Axxxxxx]] [A225171[A225170]] [A227450[A007395]] [A228534[A009444]] [A228550[A033678]] [A228859[A001832]] [A228892[A002027]] [A247232[A001929]] [A256041[A005212]] [A256042[A256033]] [A256892[A000262]] [A256893[A000670]] [A259286[Axxxxxx]] [A264428[A000110]] [A264429[Axxxxxx]] [A264430[A125273]] [A264431[Axxxxxx]] [A264433[A047889]] 
[A264434[A011634]] [A264435[A000587]] [A264436[A007549]] [A265314[Axxxxxx]] [A265602[Axxxxxx]] [A265608[A000262]]
In particular this table shows that the coefficients of the Touchard or Bell polynomials (A048993), the Abel polynomials (A137452), the Mittag-Leffler polynomials (A137513), the modified Hermite polynomials (A104556) and the Bessel polynomials (A001497) all are Bell matrices generated by elementary sequences.
♦ References
- D. E. Knuth, Convolution polynomials, Mathematica J. 2.1 (1992), no. 4, 67-78.
Accompanying, there is an Jupyter Notebook which was developed on SageMath Cloud. SageMath is a free open source alternative to Magma, Maple, Mathematica and Matlab. Get a free account on SageMath Cloud, download this notebook, upload it to SMC and start exploring! | https://oeis.org/wiki/User:Peter_Luschny/BellTransform | CC-MAIN-2017-51 | refinedweb | 5,007 | 59.74 |
Dealing with Imbalanced Data in Machine Learning
This article presents tools & techniques for handling data when it's imbalanced.
As an ML engineer or data scientist, sometimes you inevitably find yourself in a situation where you have hundreds of records for one class label and thousands of records for another class label.
Upon training your model you obtain an accuracy above 90%. You then realize that the model is predicting everything as if it’s in the class with the majority of records. Excellent examples of this are fraud detection problems and churn prediction problems, where the majority of the records are in the negative class. What do you do in such a scenario? That will be the focus of this post.
Collect More Data
The most straightforward and obvious thing to do is to collect more data, especially data points on the minority class. This will obviously improve the performance of the model. However, this is not always possible. Apart from the cost one would have to incur, sometimes it's not feasible to collect more data. For example, in the case of churn prediction and fraud detection, you can’t just wait for more incidences to occur so that you can collect more data.
Consider Metrics Other than Accuracy
Accuracy is not a good way to measure the performance of a model where the class labels are imbalanced. In this case, it's prudent to consider other metrics such as precision, recall, Area Under the Curve (AUC) — just to mention a few.
Precision measures the ratio of the true positives among all the samples that were predicted as true positives and false positives. For example, out of the number of people our model predicted would churn, how many actually churned?
Recall measures the ratio of the true positives from the sum of the true positives and the false negatives. For example, the percentage of people who churned that our model predicted would churn.
The AUC is obtained from the Receiver Operating Characteristics (ROC) curve. The curve is obtained by plotting the true positive rate against the false positive rate. The false positive rate is obtained by dividing the false positives by the sum of the false positives and the true negatives.
AUC closer to one is better, since it indicates that the model is able to find the true positives.
Emphasize the Minority Class
Another way to deal with imbalanced data is to have your model focus on the minority class. This can be done by computing the class weights. The model will focus on the class with a higher weight. Eventually, the model will be able to learn equally from both classes. The weights can be computed with the help of scikit-learn.
from sklearn.utils.class_weight import compute_class_weight weights = compute_class_weight(‘balanced’, y.unique(), y) array([ 0.51722354, 15.01501502])
You can then pass these weights when training the model. For example, in the case of logistic regression:
class_weights = { 0:0.51722354, 1:15.01501502 }lr = LogisticRegression(C=3.0, fit_intercept=True, warm_start = True, class_weight=class_weights)
Alternatively, you can pass the class weights as
balanced and the weights will be automatically adjusted.
lr = LogisticRegression(C=3.0, fit_intercept=True, warm_start = True, class_weight=’balanced’)
Here’s the ROC curve before the weights are adjusted.
And here’s the ROC curve after the weights have been adjusted. Note the AUC moved from 0.69 to 0.87.
Try Different Algorithms
As you focus on the right metrics for imbalanced data, you can also try out different algorithms. Generally, tree-based algorithms perform better on imbalanced data. Furthermore, some algorithms such as LightGBM have hyperparameters that can be tuned to indicate that the data is not balanced.
Generate Synthetic Data
You can also generate synthetic data to increase the number of records in the minority class — usually known as oversampling. This is usually done on the training set after doing the train test split. In Python, this can be done using the Imblearn package. One of the strategies that can be implemented from the package is known as the Synthetic Minority Over-sampling Technique (SMOTE). The technique is based on k-nearest neighbors.
When using SMOTE:
- The first parameter is a
floatthat indicates the ratio of the number of samples in the minority class to the number of samples in the majority class, once resampling has been done.
- The number of neighbors to be used to generate the synthetic samples can be specified via the
k_neighborsparameter.
from imblearn.over_sampling import SMOTEsmote = SMOTE(0.8)X_resampled,y_resampled = smote.fit_resample(X.values,y.values)pd.Series(y_resampled).value_counts()0 9667 1 7733 dtype: int64
You can then fit your resampled data to your model.
model = LogisticRegression()model.fit(X_resampled,y_resampled)predictions = model.predict(X_test)
Undersample the Majority Class
You can also experiment on reducing the number of samples in the majority class. One such strategy that can be implemented is the
NearMiss method. You can also specify the ratio just like in SMOTE, as well as the number of neighbors via
n_neighbors.
from imblearn.under_sampling import NearMissunderSample = NearMiss(0.3,random_state=1545)pd.Series(y_resampled).value_counts()0 1110 1 333 dtype: int64
Final Thoughts
Other techniques that can be used include using building an ensemble of weak learners to create a strong classifier. Metrics such as precision-recall curve and area under curve (PR, AUC) are also worth trying when the positive class is the most important.
As always, you should experiment with different techniques and settle on the ones that give you the best results for your specific problems. Hopefully, this piece has given some insights on how to get started.. His content has been viewed over a million times on the internet.. If the world of Data Science, Machine Learning, and Deep Learning interest you, you might want to check his Complete Data Science & Machine Learning Bootcamp in Python course.
Original. Reposted with permission.
Related:
- How to fix an Unbalanced Dataset
- The 5 Most Useful Techniques to Handle Imbalanced Datasets
- Pro Tips: How to deal with Class Imbalance and Missing Labels | https://www.kdnuggets.com/2020/10/imbalanced-data-machine-learning.html | CC-MAIN-2021-21 | refinedweb | 1,011 | 55.74 |
The Path Element Arc is used to draw an arc to a point in the specified coordinates from the current position.
It is represented by a class named ArcTo. This class belongs to the package javafx.scene.shape.
This class has 4 properties of the double datatype namely −
X − The x coordinate of the center of the arc.
Y − The y coordinate of the center of the arc.
radiusX − The width of the full ellipse of which the current arc is a part of.
radiusY − The height of the full ellipse of which the current arc is a part of.
To draw the Path element arc, you need to pass values to these properties This can be done by passing them to the constructor of this class, in the same order, at the time of instantiation as follows −
ArcTo arcTo = new ArcTo(x, y, radius, radiusY);
Or, by using their respective setter methods as follows −
setX(value); setY(value); setRadiusX(value); setRadiusY(value);
To draw an arc to a specified point from the current position in JavaFX, follow the steps given below.
Create a Java class and inherit the Application class of the package javafx.application. You can then implement the start() method of this class as follows.
public class ClassName extends Application { @Override public void start(Stage primaryStage) throws Exception { } }
Create the Path Class Object as shown in the following code block.
//Creating a Path object Path path = new Path();
Create the MoveTo path element and set the XY coordinates to the starting point of the line to the coordinates (100, 150). This can be done using the methods setX() and setY() of the class MoveTo as shown below.
//Moving to the starting point MoveTo moveTo = new MoveTo(); moveTo.setX(100.0f); moveTo.setY(150.0f);
Create the path element quadratic curve by instantiating the class named ArcTo, which belongs to the package javafx.scene.shape as shown below −
//Creating an object of the class ArcTo ArcTo arcTo = new ArcTo()
Specify the x, y coordinates of the center of the ellipse (of which this arc is a part of). Then you can specify the radiusX, radiusY, start angle, and length of the arc using their respective setter methods as shown below.
//setting properties of the path element arc arcTo.setX(300.0); arcTo.setY(50.0); arcTo.setRadiusX(50.0); arcTo.setRadiusY(50.0);
Add the path elements MoveTo and arcTo, created in the previous steps to the observable list of the Path class as follows −
//Adding the path elements to Observable list of the Path class path.getElements().add(moveTo); path.getElements().add(cubicCurveTo); that draws an arc from the current point to a specified position using the class Path of JavaFX. Save this code in a file with the name ArcExample.java.
import javafx.application.Application; import javafx.scene.Group; import javafx.scene.Scene; import javafx.stage.Stage; import javafx.scene.shape.ArcTo; import javafx.scene.shape.MoveTo; import javafx.scene.shape.Path; public class ArcExample extends Application { @Override public void start(Stage stage) { //Creating an object of the class Path Path path = new Path(); //Moving to the starting point MoveTo moveTo = new MoveTo(); moveTo.setX(250.0); moveTo.setY(250.0); //Instantiating the arcTo class ArcTo arcTo = new ArcTo(); //setting properties of the path element arc arcTo.setX(300.0); arcTo.setY(50.0); arcTo.setRadiusX(50.0); arcTo.setRadiusY(50.0); //Adding the path elements to Observable list of the Path class path.getElements().add(moveTo); path.getElements().add(arcT ArcExample.java java ArcExample
On executing, the above program generates a JavaFX window displaying an arc, which is drawn from the current position to the specified point, as shown below. | https://www.tutorialspoint.com/javafx/2dshapes_arcto.htm | CC-MAIN-2019-47 | refinedweb | 615 | 53.92 |
Simple, fast and useful MiniProfiler for ASP.NET MVC
Join the DZone community and get the full member experience.Join For Free
MiniProfiler is very lightweight, simple, fast and useful profiler for ASP.NET websites including ASP.NET MVC. It is designed to help you find possible performance issues and have very nice and clear view over each operation that happens in your web applications.
MiniProfiler was created by the Stack Overflow guys for their internal use, but they have put it as an open source project under Apache License 2.0 for all ASP.NET and WCF developers! (Thanks guys!)
To get started using MiniProfiler, you have to install it first.
You have two options available:
Installation
1. Using NuGet Package Manager
- In VS.NET 2010, go to Tools –> Library Package Manager –> Manage NuGet Packages…
Install MiniProfiler.MVC3
Once installation is successful, both MiniProfiler will be ticked.
If you want to use MiniProfiler for Entity Framework too, then install the MiniProfiler.EF too.
2. Install manually from here (you have github clone too)
Setting up
Once you are done with installation, if you have used NuGet manager to install MiniProfiler, you are already done
The new dlls added by the NuGet are marked:
Besides dlls, there is also a MiniProfiler.cs class inside App_Start folder
The next thing you need to do is to include MiniProfiler in the master layout page. Once you expand the Views –> Shared, you can see that with installing MiniProfiler there is _MINIPROFILER UPDATED Layout.cshtml that is an example master layout regarding how to include MiniProfiler in your Layout page.
Copy the marked line and add it to your actual _Layout.cshtml
In my example:
<!DOCTYPE html> <html> <head> <title>@ViewBag.Title</title> <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script> @MvcMiniProfiler.MiniProfiler.RenderIncludes() </head> <body> @RenderBody() </body> </html>
Using MiniProfiler
Now, run your application… you can see MiniProfiler is running in the top left corner:
If you want to see what does MiniProfiler includes for us, check View Source in your browser:
If you want to start the mini profiler for specific requests only (e.g. local requests), you can use MiniProfiler.Start() method. The best place to add this is in the Global.asax Application_BeginRequest.
Grouping Events
Using MiniProfiler we can group profiling steps very easily. Yep, this might make your code in some segments a bit more dirty, but if you have clean code you should no worry…
To perform grouping by steps in your Controller, first add using MvcMiniprofiler; directive
using MvcMiniProfiler;
Create instance of MiniProfiler by adding the MiniProfiler.Current that represents the currently running profiler in the HttpContext
Use the Step() method in using code block to create profiler step.
Example:
Then in my Products ActionResult add the following code:
public ActionResult Products() { List<Product> listProducts = new List<Product>(); MiniProfiler profiler = MiniProfiler.Current; using (profiler.Step("Load Product Items", ProfileLevel.Info)) { System.Threading.Thread.Sleep(1000); //1 second sleep listProducts.Add(new Product() { ProductID = 1, Name = "Product 1", Price = 100 }); listProducts.Add(new Product() { ProductID = 2, Name = "Product 2", Price = 200 }); listProducts.Add(new Product() { ProductID = 2, Name = "Product 3", Price = 300 }); } using (profiler.Step("Add Products to List", ProfileLevel.Info)) { System.Threading.Thread.Sleep(2000); //2 seconds sleep ViewBag.Products = listProducts; } return View(); }
In this example we have two profiler steps: Load Product Items and Add Products to List.
Now add new View to display products list by right clicking somewhere above the code in Products() method and Add View with name Products.
In the view add the following code (just for the demo…):
@{ ViewBag.Title = "Products"; Layout = "~/Views/Shared/_Layout.cshtml"; } <h2>Products</h2> <p> <ul> @foreach (var item in ViewBag.Products) { <li> @item.Name ([email protected]) </li> } </ul> </p>
Now run the web page and navigate to /Home/Products
once you click the button at the top-left corner you will get this:
Profiling Database Queries
To use MiniProfiler for profiling database queries, first you will need to de-comment one (or both) of the database related line/s inside App_Start/MiniProfiler.cs PreStart() method
//TODO: To profile a standard DbConnection: // var profiled = new ProfiledDbConnection(cnn, MiniProfiler.Current); //TODO: If you are profiling EF code first try: MiniProfilerEF.Initialize();
In our example I will be using the EF Mini Profiler.
First, I have rewritten the Products method with the following code:
public ActionResult Products() { AdventureWorksEntities context = new AdventureWorksEntities(); ViewBag.Products = select new ProductViewModel { ProductID = p.ProductID, Name = p.Name, Description = pd.Description.Substring(0, 200) }); return View(); }I am using AdventureWorks database with EF. I have added one query that joins four tables. Now just run the application.
if we click on 1 sql, the MiniProfiler will give us some detailed info regarding generated SQL query and the time needed to execute and finish
If we have some query that is executing very long or we have multiple queries which create performance issues, MiniProfiler will give us warnings…
For example, lets add loop of executing the above query 50 times
public ActionResult Products() { AdventureWorksEntities context = new AdventureWorksEntities(); List<ProductViewModel> productList = new List<ProductViewModel>(); for (int i = 0; i < 50; i++) { productList.AddRange orderby p.Name descending, p.ProductID ascending, pd.Description descending, pd.ModifiedDate descending select new ProductViewModel { ProductID = p.ProductID, Name = p.Name, Description = pd.Description.Substring(0, 200) }); } ViewBag.Products = productList; return View(); }
Once we run the app, it will take few seconds to load…
Summary
ASP.NET MVC MiniProfiler is a great tool that you must have in your toolset for building scalable, fast and performance optimized web applications. You can have clear view of what is causing performance issues in your application in almost all levels and layers.
For those that are interested to get access to the MiniProfiler source code and want to dig more, check this page.
Published at DZone with permission of Hajan Selmani, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/simple-fast-and-useful | CC-MAIN-2022-40 | refinedweb | 1,010 | 50.23 |
By Qianyi
1. I/O Models and Their Problems
2. Resource Contention and Distributed Locks
3. Redis Snap-Up System Instances
The I/O model of Redis Community Edition is simple. Generally, all commands are parsed and processed by one I/O thread.
However, if there is a slow query command, other queries have to queue up. In other words, when a client runs a command too slowly, subsequent commands will be blocked. Sentinel revitalization cannot help since it will delay the ping command, which is also affected by slow queries. If the engine gets stuck, the ping command will fail to determine whether the service is currently available for fear of misjudgment.
If the service does not respond and the Master is switched to the Slave, slow queries also will slow down the Slave. The misjudgment may also be made by ping command. As a result, it is difficult to monitor service availability.
If one query is delayed by slow queries, such as keys, lrange, and hgetall, the subsequent requests will also be delayed.
This problem also exists in a cluster containing multiple shards. If a slow query delays processes in a shard, for example, calling a cross-shard command like mget, the access to the problematic shard gets stuck. Thus, all subsequent commands are blocked.
Common Redis clients like Jedis provide connection pools. When a service thread accesses Redis, a persistent connection will be retrieved for each query. If the query is processed too slowly to return the connection, the waiting time will be prolonged because the connection cannot be used by other threads until the request returns a result.
If all queries are processed slowly, each service thread retrieves a new persistent connection, which will be consumed gradually. If this is the case, an exception of “no resource available in the connection pool” is reported because the Redis server is single-threaded. When a persistent connection of the client is blocked by a slow query, requests from subsequent connections will not be processed in time. The current connection cannot be released back to the connection pool.
Connection convergence is not supported by the Redis protocol, which deprives the Message of an ID. Thus, Request and Response cannot be associated. Therefore, the connection pool is used. The callback is put into a queue unique for each connection when a request is received to implement asynchronization. The callback can be retrieved and executed after the request returns. This is the FIFO model. However, server connections cannot be returned out of order because they cannot be matched in the client. Generally, the client uses BIO, which blocks a connection and only allows other threads to use it after request returns.
However, asynchronization cannot improve efficiency, which is limited by the single-threaded server. Even if the client modifies the access method to allow multiple connections to send requests at the same time, these requests still have to queue up in the server. Slow queries still block other persistent connections.
Another serious problem comes from the Redis thread model. Its performance will lower when the I/O thread handles with more than 10,000 connections. For example, if there are 20,000 to 30,000 persistent connections, the performance will be unacceptable for the business. For business machines, if there are 300 to 500 machines and each one handles 50 persistent connections, it is easy to reach the performance bottleneck.
The connection pool is used because the Redis protocol does not support connection convergence.
When a slow query exists in the Engine layer, the request returns slowly, which means:
max_connin each client connection pool is set to 50, the callback speed becomes slow.
Here are some solutions for asynchronous interfaces implementation based on the Redis protocol:
set k vcommand to encapsulate a request in the following form:
multi ping {id} set k v exec
The server will return the following code:
{id} OK
This tricky method uses atomic operations of Multi-Exec and the unmodified parameter return feature of the ping command to “attach” message IDs in the protocol. However, this method is fundamentally not used by any client.
The model of well-known versions before Redis 5.x remains the same. All commands are processed by a single thread, and all reads, processes, and writes run in the same main I/O. There are several BIO threads that close or refresh files in the background.
In versions later than Redis 4.0,
LAZY_FREE is added, allowing certain big keys to release asynchronously to avoid blocking the synchronous processing of tasks. On the contrary, Redis 2.8 will get stuck when eliminating or expiring large keys. Therefore, Redis 4.0 or later is recommended.
The following figure shows the performance analysis result. The left two parts are about command processing, the middle part talks about “reading,” and the rightmost part stands for “writing,” which occupies 61.16%. Thus, we can tell that most of the performance depends on the network I/O.
With the improved model of Redis 6.x, the “reading” task can be delegated to the I/O thread for processing after readable events are triggered in the main thread. After the reading task finishes, the result is returned for further processing. The “writing” task can also be distributed to the I/O thread, which improves the performance.
Improvement in the performance can be really impressive with O(1) commands like simple “reading” and “writing” if only one running thread exists. If the command is complex while only one running thread in DB exists, the improvement is rather limited.
Another problem lies in time consumption. Every “reading” and “writing” task needs to wait for the result after being distributed. Therefore, the main thread will idle for a long time, and service cannot be provided. Therefore, more improvements of the Redis 6.x model are expected.
The model of Alibaba Cloud Redis Enterprise Edition splits the entire event into parts. The main thread is only responsible for command processing, while all reading and writing tasks are processed by the I/O thread. Connections are no longer retained only in the main thread. The main thread only needs to read once after the event starts. After the client is connected, the reading tasks are handed over to other I/O threads. Thus, the main thread does not care about readable and writable events from clients.
When a command arrives, the I/O thread will forward the command to the main thread for processing. Later, the processing result will be passed to the I/O thread through notification for writing. By doing so, the waiting time of the main thread is reduced as much as possible to enhance the performance further.
The same disadvantage applies. Only one thread is used in command processing. The improvement is great for O(1) commands but not enough for commands that consume many CPU resources.
In the following figure, the gray color on the left stands for Redis Community Edition 5.0.7, and the orange color on the right stands for Redis Performance-Enhanced Edition. The multi-thread performance of Redis 6.x is in between. In the test, the reading command is used, which requires more on I/O rather than CPU. Therefore, the performance improvement is great. If the command used consumes many CPU resources, the difference between the two editions will reduce to none.
It is worth mentioning that the Redis Community Edition 7 is in the planning phase. Currently, it is being designed to adopt a modification scheme similar to the one used by Alibaba Cloud. With this scheme, it can gradually approach the performance bottleneck of a single main thread.
Performance improvement is just one of the benefits. Another benefit of distributing connections to I/O threads is that it linearly increases the number of connections. You can add I/O threads to deal with more connections. Redis Enterprise Edition supports tens of thousands of connections by default. It can even support more connections, such as 50,000 or 60,000 persistent connections, to solve the issue of insufficient connections during the large-scale machine scaling at the business layer.
The write command in Redis string has a parameter called NX, which specifies that a string can be written if no string exists. This is naturally locked. This feature facilitates the lock operation. By taking a random value and setting it with the NX parameter, atomicity can be ensured.
An “EX” is added to set an expiration time to ensure the lock will be released if the business machine fails. If a lock is not released after the machine fails or is disabled for some reason, it will never be unlocked.
The parameter “5” is an example. It does not have to be 5 seconds, but it depends on the specific tasks to be done by the business machine.
The removal of distributed locks can be troublesome. Here’s a case. A locked machine sticks or loses contact due to some sudden incidents. Five seconds later, the lock has expired, and other machines have been locked, while the once-failed machine is available again. After processing, like deleting Key, the lock that does not belong to the failed machine is removed. Therefore, the deletion requires judgment. Only when the value is equal to the previously written one can the lock be removed. Currently, Redis does not have such a command, so it usually uses Lua.
When the value is equal to the value in the engine, Key is deleted by using the CAD command “Compare And Delete.” For open-source CAS/CAD commands and TairString in Module form, please see this GitHub page. Users can load modules directly to use these APIs on all Redis versions that support the Module mechanism.
When locking, we set an expiration time, such as “5 seconds.” If a thread does not complete processing within the time duration (for example, the transaction is still not completed after 3 seconds), the lock needs to be renewed. If the processing has not finished before the lock expires, a mechanism is required to renew the lock. Similar to deletion, we cannot renew directly, unless the value is equal to the value in the engine. Only when the lock is held by the current thread can we renew it. This is a CAS operation. Similarly, if there is no API, a new Lua script is needed to renew the lock.
Distributed locks are not perfect. As mentioned above, if the locked machine is lost, the lock is held by others. When the lost machine is suddenly available again, the code will not judge whether the lock is held by the current thread, possibly leading to reenter. Therefore, Redis distributed locks, as well as other distributed locks, are not 100% reliable.
If there is no CAS/CAD command available, Lua script is needed to read the Key and renew the lock, if two values are the same.
Note: The value that will change in each call in the script must be passed through by parameters because as long as the scripts are different, Redis caches the scripts. So far, Redis Community Edition 6.2 has neither limited the cache size nor set up an eviction policy. The operation of executing the script flush command to clear the cache is also synchronous, so be sure to avoid an overlarge script cache. The feature of asynchronous cache deletion has been added to the Redis Community Edition by Alibaba Cloud engineers. The
script flush async command is supported in Redis 6.2 and later versions.
You need to run the
script load command to load the Lua script to the Redis instance to use CAS/CAD commands. Then, use the
evalsha command that contains parameters to call the script. This reduces the network bandwidth and avoids loading different scripts each time. Note: The
evalsha command may return an error of “no script exists,” which can be solved by executing the script load command again.
More information about the Lua implementation of CAS/CAD is listed below:
Before executing the Lua scripts, Redis needs to parse and translate the scripts first. Generally, Lua usage is not recommended in Redis for two reasons.
First, to use Lua in Redis, you need to call Lua from C language and then call C language from Lua. The returned value is converted twice from a value compatible with Redis protocol to a Lua object and then to C language data.
Second, the execution process involves many Lua script parsing and VM processing operations (including the memory usage of lua.vm.) So, the time consumed is longer than the common commands. Thus, simple Lua scripts like if statement is highly recommended. Loops, duplicate operations, and massive data access and acquisition should be avoided as much as possible. Remember that the engine only has one thread. If the majority of CPU resources are consumed by Lua script execution, there will be few CPU resources available for business command processing.
The compile-load-run-unload operation on the script consumes a large amount of CPU resources. The execution of Lua scripts is similar to pushing complex transactions to Redis for execution. The memory will be depleted if any exceptions occur, and Redis will stop operating after the engine's computational power is exhausted.
If we pre-compile and load the script in Redis (without unload or clean operation) and use EVALSHA to execute, this saves CPU resources compared with EVAL only. However, this is still a defective solution as Redis will fail when it restarts, switches, or changes the configuration of the code cache. The code cache needs to be reloaded. Complex data structures or modules are better alternatives to Lua.
A flash sale scenario is divided into three phases:
Snap-up/flash sale scenarios fundamentally concern highly concurrent reading and writing of hot spot data.
Snap-up/flash sale is a process of continuously pruning requests:
1. Less Data (Staticizing, CDN, frontend resource merge, dynamic and static data separation on page, and LocalCache)
Reduce the page demand for dynamic parts in every way possible. If the frontend page is mostly static, use CDN or other mechanisms to prevent all requests. Thus, requests on the server will be reduced largely in quantity and bytes.
2. Short Path (Shorten frontend-to-end path as much as possible, minimize the dependency on different systems, and support throttling and degradation.)
In the path from the user side to the final end, you should depend on fewer business systems. Bypass systems should reduce their competition, and each layer must support throttling and degradation. After throttling and degrading, the frontend prompts optimization is needed.
3. Single Point Prohibition (Achieve stateless application services scaling horizontally and avoid hot spots for storage services.)
Stateless scale-out must be supported everywhere in the service. Stateful storage services must avoid hot spots, generally some reading and writing hot spots.
The third method is commonly selected, as the first two all have defects. For the first one, it is difficult to avoid malicious orders that do not pay. For the second one, the payment fails because of insufficient inventories. So, the experience of the former two methods is very poor. Usually, the inventory is deducted in advance and will be released when the order times out. The TV framework will be used, and a security and anti-fraud mechanism will also be established.
1. String Structure
incr/decr/incrby/decrbydirectly. Note: Currently, Redis does not support upper and lower bound limits.
2. List Structure
lpopor
rpopcommand to deduct inventory until nil (key not exist) is returned.
List structure has some disadvantages, for example, more memory is occupied. If multiple inventory units are deducted at a time,
lpop needs to be called multiple times, which will affect the performance.
3. Set/Hash Structure
hincrbyto count and
hgetto judge the purchased quantity.
4. (If Service Scenarios Allow) Multiple Keys (key_1, key_2, key_3...) for Hot Commodities
TairString, another structure in the module, modifies Redis String and supports String with high-concurrency CAS and Version. It has Version values, which enable the implementation of optimistic lock during reading and writing. Note: This String structure is different and cannot be used together with other common String structures in Redis.
As shown in the above figure, when functioning, first, TairString gives an exGet value and return (value,version). Then, it operates on the value and updates with the previous version. If versions are the same, then update. Otherwise, re-read and modify before updating to implement CAS operations. This is called an optimistic lock on the server.
For the optimization of the scenario mentioned above, you can apply the exCAS interface. Similar to exSet, when encountering version conflict, exCAS will return the version mismatch error and updated version of the new value. Thus, another API call is reduced. By applying exCAS after exSet and performing
exSet -> exCAS again when the API call fails, network interaction is reduced. Thus, the access volume to Redis will be reduced.
TairString is a string that supports high-concurrency CAS.
Version-Carried String
More Semantics
exIncr/exIncrBy: Snap-up/flash sale (with upper and lower bounds)
exSet -> exCAS: Reduce network interactions
The String is based on the INCRBY method with no upper or lower bound, and the exString is based on the EXINCRBY method that provides various parameters together with the upper and lower bounds. For example, if the minimum value is set to 0, the value cannot be reduced when it equals 0. exString also supports specifying expiration time. For example, a product can only be snapped up within a specified period and cannot after the expiration. The business system also restricts the cache to clear it after a specified time. If the inventory is limited, the goods are removed after ten seconds if no order is placed. If new orders keep coming, the cache is renewed. A parameter needs to be included in EXINCRBY to renew the cache each time INCRBY or API is called to achieve this. Thus, the hit rate can be improved.
As shown in the following figure, to use Redis String, the Lua script displayed above is suitable. If the “get” KEY[1] is larger than “0,” use “decrby” minus “1.” Otherwise, the “overflow” error is returned. Value that has been reduced to “0” cannot be decreased. In the following example, ex_item is set to “3.” Subtract it 3 times and return the “overflow” error when the value becomes “0.”
exString is very simple in use. Users just need to exset a value and execute “exincrby k -1.” Remember that String and TairString are different. Their APIs cannot be used together.
How to Choose the Most Suitable Database to Empower Your Business with Proven Performance
4 Must-Try Features in ApsaraDB for MongoDB 4.0
ApsaraDB - July 10, 2019
Alibaba Cloud Community - March 25, 2022
Alibaba Clouder - March 14, 2017
Alibaba Clouder - April 16, 2019
Alibaba Clouder - September 28, 2020
Alibaba Clouder - July 1, 2020
A key value database service that offers in-memory caching and high-speed access to applications hosted on the cloud
A database engine fully compatible with Apache Cassandra with enterprise-level SLA assurance.Learn More | https://www.alibabacloud.com/blog/high-concurrency-practices-of-redis-snap-up-system_597858 | CC-MAIN-2022-33 | refinedweb | 3,204 | 56.55 |
[ aws . quicksight ]
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
register-user --identity-type <value> --email <value> --user-role <value> [--iam-arn <value>] [--session-name <value>] --aws-account-id <value> --namespace <value> [--user-name <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--identity-type (string).
Possible values:
- IAM
- QUICKSIGHT
The email address of the user that you want to register.
--user-role (string).
Possible values:
- ADMIN
- AUTHOR
- READER
- RESTRICTED_AUTHOR
- RESTRICTED_READER
--iam-arn (string)
The ARN of the IAM user or role that you are registering with Amazon QuickSight.
--session-name (string)
You need to use this parameter only when you register one or more users using an assumed IAM role. You don't need to provide the session name for other scenarios, for example when you are registering an IAM user or an Amazon QuickSight user. You can register multiple users using the same IAM role if each user has a different session name. For more information on assuming IAM roles, see ` assume-role`__ in the AWS CLI Reference.
--aws-account-id (string)
The ID for the AWS account that the user is in. Currently, you use the ID for the AWS account that contains your Amazon QuickSight account.
--namespace (string)
The namespace. Currently, you should set this to default .
--user-name (string)
The Amazon QuickSight user name that you want to create for the user you are registering.
- user name.
Arn -> (string)The Amazon Resource Name (ARN) for the user.
UserName -> (string)The user's user name.
Role -> (string)The Amazon QuickSight role for the user.
IdentityType -> (string)The type of identity authentication used by the user.
Active -> (boolean)Active status of user. When you create an Amazon QuickSight user that’s not an IAM user or an AD user, that user is inactive until they sign in and provide a password
PrincipalId -> (string)The principal ID of the user.
UserInvitationUrl -> (string)
The URL the user visits to complete registration and provide a password. This is returned only for users with an identity type of QUICKSIGHT .
RequestId -> (string)
The AWS request ID for this operation.
Status -> (integer)
The http status of the request. | https://docs.aws.amazon.com/cli/latest/reference/quicksight/register-user.html | CC-MAIN-2019-35 | refinedweb | 361 | 56.86 |
Group all the "guest OS" support options together, underCONFIG_PARAVIRT. Make this a proper menu item so it looks neater onmenuconfig etc, and make the wording for each prompt uniform.Signed-off-by: Rusty Russell <[email protected]>diff -r 3d3ac181380b arch/i386/Kconfig--- a/arch/i386/Kconfig Fri Sep 14 12:24:15 2007 +1000+++ b/arch/i386/Kconfig Fri Sep 14 12:45:09 2007 +1000@@ -214,28 +214,38 @@ config X86_ES7000 endchoice -config PARAVIRT- bool "Paravirtualization support (EXPERIMENTAL)"+menuconfig PARAVIRT+ bool "Paravirtualized guest support (EXPERIMENTAL)" depends on EXPERIMENTAL depends on !(X86_VISWS || X86_VOYAGER) help Paravirtualization is a way of running multiple instances of Linux on the same machine, under a hypervisor. This option changes the kernel so it can modify itself when it is run under a hypervisor, improving performance significantly. However, when run without a hypervisor the kernel is theoretically slower. If in doubt, say N.++if PARAVIRT source "arch/i386/xen/Kconfig" config VMI- bool "VMI Paravirt-ops support"- depends on PARAVIRT+ bool "VMI Guest support" help VMI provides a paravirtualized interface to the VMware ESX server (it could be used by other hypervisors in theory too, but is not at the moment), by linking the kernel to a GPL-ed ROM module provided by the hypervisor.++config LGUEST_GUEST+ bool "Lguest guest support"+ depends on !X86_PAE+ help+ Lguest is a tiny in-kernel hypervisor. Selecting this will+ allow your kernel to boot under lguest. This option will increase+ your kernel size by about 6k. If in doubt, say N.+endif config ACPI_SRAT booldiff -r 3d3ac181380b arch/i386/xen/Kconfig--- a/arch/i386/xen/Kconfig Fri Sep 14 12:24:15 2007 +1000+++ b/arch/i386/xen/Kconfig Fri Sep 14 12:37:38 2007 +1000@@ -3,8 +3,8 @@ # config XEN- bool "Enable support for Xen hypervisor"- depends on PARAVIRT && X86_CMPXCHG && X86_TSC && !NEED_MULTIPLE_NODES+ bool "Xen guest support"+ depends on X86_CMPXCHG && X86_TSC && !NEED_MULTIPLE_NODES help This is the Linux Xen port. Enabling this will allow the kernel to boot in a paravirtualized environment under thediff -r 3d3ac181380b drivers/lguest/Kconfig--- a/drivers/lguest/Kconfig Fri Sep 14 12:24:15 2007 +1000+++ b/drivers/lguest/Kconfig Fri Sep 14 12:31:44 2007 +1000@@ -1,23 +1,18 @@ config LGUEST config LGUEST tristate "Linux hypervisor example code"- depends on X86 && PARAVIRT && EXPERIMENTAL && !X86_PAE && FUTEX- select LGUEST_GUEST+ depends on X86 && EXPERIMENTAL && !X86_PAE && FUTEX select HVC_DRIVER ---help---- This is a very simple module which allows you to run- multiple instances of the same Linux kernel, using the+ This is a very simple module called lg.ko which allows you to run+ multiple instances of the Linux kernel, using the "lguest" command found in the Documentation/lguest directory. Note that "lguest" is pronounced to rhyme with "fell quest", not "rustyvisor". See Documentation/lguest/lguest.txt. + Usually you would also turn on "Lguest guest support", to create a+ kernel which can also boot under lguest.+ If unsure, say N. If curious, say M. If masochistic, say Y.--config LGUEST_GUEST- bool- help- The guest needs code built-in, even if the host has lguest- support as a module. The drivers are tiny, so we build them- in too. config LGUEST_NET tristate-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to [email protected] majordomo info at read the FAQ at | https://lkml.org/lkml/2007/9/14/9 | CC-MAIN-2018-43 | refinedweb | 556 | 55.13 |
A DirectedEdgeStar is an ordered list of outgoing DirectedEdges around a node. More...
#include <DirectedEdgeStar.h>
A DirectedEdgeStar is an ordered list of outgoing DirectedEdges around a node.
It supports labelling the edges as well as linking the edges to form both MaximalEdgeRings and MinimalEdgeRings.
Compute the DirectedEdge depths for a subsequence of the edge array.
Traverse the star of edges, maintaing the current location in the result area at this node (if any).
If any L edges are found in the interior of the result, mark them as covered.
Traverse the star of DirectedEdges, linking the included edges together. To link two dirEdges, the <next> pointer for an incoming dirEdge is set to the next outgoing edge.
DirEdges are only linked if:
Edges are linked in CCW order (the order they are stored). This means that rings have their face on the Right (in other words, the topological location of the face is given by the RHS label of the DirectedEdge)
PRECONDITION: No pair of dirEdges are both marked as being in the result
Referenced by geos::geomgraph::PlanarGraph::linkResultDirectedEdges(). | https://geos.osgeo.org/doxygen/classgeos_1_1geomgraph_1_1DirectedEdgeStar.html | CC-MAIN-2019-09 | refinedweb | 181 | 61.97 |
Okay, so I have a camera I've been building for a 2D sidescrolling game. The player needs to stay slightly off to the left - like an original Gradius game:
The camera moves freely from the player, and needs to stay that way. At the moment, I just can't get the offset working right - I keep trying the various screen conversions but I'm not sure what will work - here is my code:
using UnityEngine;
using System.Collections;
public class CameraController : MonoBehaviour {
public Transform TrackTarget;
public float dampTime = 0.15f;
public float OffsetFloat = 0.05f;
public bool Ready = false;
private Vector3 velocity = Vector3.zero;
private Vector3 TargetOffset = Vector3.zero;
void Start()
{
TargetOffset = (camera.WorldToViewportPoint(TrackTarget.position * OffsetFloat));
Vector3 point = TrackTarget.position + TargetOffset;
Vector3 Destination = new Vector3(point.x, point.y, transform.position.z);
transform.position = Destination;
}
void FixedUpdate () {
TargetOffset = (camera.WorldToViewportPoint(TrackTarget.position * OffsetFloat));
if (Ready) {
if (TrackTarget)
{
Vector3 point = TrackTarget.position + TargetOffset;
Vector3 Destination = new Vector3(point.x, point.y, transform.position.z);
transform.position = Vector3.SmoothDamp(transform.position, Destination, ref velocity, dampTime);
}
}
}
void LateUpdate()
{
if (Ready) {
Vector3 CameraPosition = transform.position;
CameraPosition.z = -10.00f;
transform.position = CameraPosition;
}
}
}
as far as i understand you want to move the camera freely around the scene and let the ship follow it so that the relative distance from the left screen border and ship is always fixed...? if that's so I would use camera.ViewportToWorldPoint, not sure what u need to do tho...
That is about where I am at the moment, unsure on the second half.
This is just theory as I can't test it ATM but could you set your ships movement up and down to be 2 floats for x and y. If you press up the increase y by .01 (or whatever works for you) and the same for x. With x you might want to restrict x to between .1 and .5 with .5 being the middle of the scene.
The in update move the ship to the world point with ViewportToWorldPoint as
Vector3 MyPos = cacamera.ViewportToWorldPoint ( new Vector3 (myX, myY, camera.nearClipPlane));
Typing on a phone so sorry for typos!
Huh? The vehicle the camera is following is moved with physics, and there are no issues with how that works. This is related to tracking and keeping the vehicle offset from the center of the camera, as old-school 2D shooters used to do.
Answer by fafase
·
Jan 14, 2015 at 10:29 AM
My two cents on this, get the boundaries of the screen on Start, then convert the positions to world position, modify the position by the ratio you want to be inside the screen, then clamp the ship x and y positions.
Here for converting to world position :
then you create some game object that you attach to the camera so that they follow.
Finally, you clamp the ship position using those 4.
Help! I want to make a following camera but in with Y position is constant ?
2
Answers
Can I keep a player on the left side of the screen in relation to view resolution?
0
Answers
ScreenToWorldPoint for different resolutions in 2D
1
Answer
Unity 2D Position Issues
0
Answers
GUI To change with screen resolution?
2
Answers | https://answers.unity.com/questions/876193/is-it-possible-to-keep-the-player-offset-in-place.html?sort=oldest | CC-MAIN-2019-51 | refinedweb | 541 | 56.55 |
From: Vladimir Prus (ghost_at_[hidden])
Date: 2001-07-24 09:58:12
Having read the proposed coding guidelines, I have found that some parts are
reasonable, some document existing practice, some are arbitrary and some are
questionable. More details at the end of the message. On the whole, the
document is reasonable. However, I think it should not be accepted.
Most of my reasons are already stated by others, and I only have to slightly
elaborate. First of all, been reasonable, in my opinion, is not enough for
boost coding guidelines. They should incorporate good points from other
guideline and surpass them.
The single most important thing is to refactor guidelines into boost
conventions, semantical guidelines and syntactical ones. I would even suggest
specifically separating semantical guidelines which affect external interface
(and which are more important). Such refactoring will immediately make
documents status clear:
acceptance of boost conventions, will mean: "this arbitrary rules has been
agreed upon for the sake of uniformity"
acceptance of semantical guidelines will mean: "this guidelines are
considered reasonable by boost members, who find that doing otherwise often
causes problems"
acceptance of syntactical guidelines, if formal review is needed for them at
all, will mean: "we find the suggested coding style readable and won't be
very much embarassed if anybody use it"
Without refactoring, those different meaning will collide, and there's danger
readers will assume second sense, which is undesirable.
Once split, boost convention are easy to get rid of. Syntactical guidelines
too. And semantical ones should be expanded and enhanced and detailed &c.
Last remark: David said:
"If they are rejected, I will only wish that the formal review had begun
earlier so that I wouldn't have spent so much time on them here."
On the contrary, I wish for more time to be spend on guidelines, so that in a
month or too semantical guidelines will include most things that C++
community find reasonable.
-- And now commets on guidelines themself: Remark on all the document: the word "forbidded" should itself be forbidden. 1. Ok 2. Ok, but 2.10 is arbitrary rule 3. Mostly documents universal practice, except for 3.7, which is very questionable. 4. Mostly arbitrary rules. 5. Mostly arbitrary rules, except for 5.4 (spaces vs. tabs), which is important, but is already in library guidelines. 6. Mostly reasonable. 6.7 -- arbitrary. 7. Mostly reasonable, except for 7.1 and 7.15, which are really low-level 8. Mostly reasonable, 8.2 -- side note: maybe, somebody can give a good example where protected data member is usefull? 9. Reasonable, but more consideration probably needed. 10. Ok, but seems very similar to 7. Merge, probably? 11. Needs to be considered. 12. Disagree with 12.3 -- explicit qualification of everything, IMO, is nothing but the old C technique of appending some prefix to names. 12.4 -- see nothing wrong with using directive in implementation files. see nothing wrong with using directive with represent modules dependencies. Whether using directive results in "marauding army of crazed barbarians" depends on whether one puts everything in one namespace instead of making some good structure. 13. Reasonable. 14. 14.1 Not avoid, but consider carefully. 14.2 Why, on *all* constructors. If this is really meant, then I object. 14.3 Never? 15. Is it really belong here? -- Regards, Vladimir
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2001/07/15044.php | CC-MAIN-2022-33 | refinedweb | 577 | 59.09 |
Write a program to find the next number formed from permuting the digits in a given numberFor example if the given number is 312, then the next number is 321 if you permute 1,2,3.
- Start to looking at digits one by one from 1's digit towards left.
- Check that digit value to its immediate left digit value. If that digit value is higher than the digit on the immediate left side of it, then stop. Let us say kth position digit is higher than (k+1)th position digit (K+1 is the place left of kth).
- Now look for digit on the visited list (all the k places) and find out a digit which is just higher than the (k+1)th digit. Exchange those two digits and then place all the k digits in increasing order (they are already in order but in decreasing).
C++ program to find the next number formed from permuting the digits
#include <iostream> using namespace std; const int MAX_SIZE=10; int nextPermutedNum(int aInput) { int input = aInput; int buffer[MAX_SIZE]; int prev = input % 10; int digitnumber = 0; buffer[digitnumber++] = prev; while (input > 9) { input /= 10; int next = input % 10; buffer[digitnumber++] = next; if (next < prev) { digitnumber--; int newnum = input/10; newnum *= 10; // find and swap with next big number from the visited digits. for (int i=0;i<digitnumber;i++) { if (next<buffer[i]) { newnum += buffer[i]; buffer[i] = next; break; } } // fill all the remaining digit places with increasing order of visited numbers. for (int i =0;i< digitnumber;i++) { newnum *= 10; newnum += buffer[i]; } return newnum; } prev = next; } return aInput; } int main() { int num = 3397421; cout << "input: " << num << endl; int output = nextPermutedNum(num); cout << "output: " << output << endl; return 0; }
Output:-
input: 3397421 output: 3412379 | http://www.sourcetricks.com/2013/03/permute-digits-next-number.html | CC-MAIN-2016-44 | refinedweb | 295 | 52.63 |
Screenshots
Description
NEW! Works with current IE, Firefox, and Chrome browsers. See the 'Loading & Saving' Woas page for details (in Woas file).
Woas is a complete wiki in a single HTML file that works in every major browser, including older versions (even IE6!). It can store and display images and files within itself and is incredibly useful for research and documentation.
This is my version of the WoaS (stickwiki) project. It is starting out as the home of my fixes to the 0.12.0 version that I am calling 0.13.0 (and hope will be published as such in the WoaS project), but may transition over time into a new and possibly incompatible version of the original tool, depending on what happens in the original project.
I have dropped the 'Legacy' subproject; this project performs essentially the same function. After two years of inactivity it appears there may be life in the old project yet, so I am not making any decisions right now beyond using this project for my own, fixed version of WoaS 0.12.0.
Woas (my wiki-on-a-stick fork) Web Site
Categories
License
Features
- Self-modifying wiki in a single browser file (HTML)
- Easily create documentation, teaching materials, research notes, diaries ...
- Share information with anyone who can use a browser
- Uses WIKI markup (Creole-like)
- Full Help system linked to currently displayed page
- Image embedding by all browsers
- Supports per page AES256 encryption
- Customizable via javascript: macros, plugins, and wiki page scripts
- Separate system, menu, and page macros allows redefinition
- Plugins can load external Javascript files if desired
- Customizable look with custom user CSS
- Manage pages with tags and title namespaces
- Powerful search system
- System pages for listing and managing pages, importing and exporting content
- Able to generate a website of individual pages that looks like the original Woas file
- A working version of the WoaS project
User Ratings
User Reviews
Thanks for reviving the original WoaS! It's a great tool for organizing notes and information and I can use it on any of the dozens of machines I work with on a daily basis.
Really happy that development is continuing for this. It's a great method for tracking snippets of code and trains of thought on a daily, or whenever, basis. Keep up the good work. Now if I can just figure out a macro within a macro...
Finally a working comeback of the beloved principle of a local wiki. Perfect for structured offline notes.
A fantastic way to store snippets of information you would otherwise have hanging around in text files or scraps of paper. Very powerful, considering it is "just" an HTML page and a few other files. This is a great update to the original WoaS project. I have been using this since about v0.9 so I was sad to think the project had given up, until I saw the readme file on the original web site today pointing to this new project! | http://sourceforge.net/projects/woas/?source=navbar | CC-MAIN-2015-35 | refinedweb | 498 | 60.24 |
What are the specific options I would need to build in “release mode” with full optimizations in GCC? If there are more than one option, please list them all. Thanks.
About c++ : How-to-build-in-release-mode-with-optimizations-in-GCC
Question Detail
Question Answer
There is no ‘one size fits all’ – you need to understand your application, your requirements and the optimisation flags to determine the correct subset for your binary.
Or the answer you want:
-O3
Here is a part from a Makefile that I use regularly (in this example, it’s trying to build a program named foo).
If you run it like
$ make BUILD=debug or
$ make debug
then the Debug CFLAGS will be used. These turn off optimization (
-O0) and includes debugging symbols (
-g).
If you omit these flags (by running
$ make without any additional parameters), you’ll build the Release CFLAGS version where optimization is turned on (
-O2), debugging symbols stripped (
-s) and assertions disabled (
-DNDEBUG).
As others have suggested, you can experiment with different
-O* settings dependig on your specific needs.
ifeq ($(BUILD),debug) # "Debug" build - no optimization, and debugging symbols CFLAGS += -O0 -g else # "Release" build - optimization, and no debug symbols CFLAGS += -O2 -s -DNDEBUG endif all: foo debug: make "BUILD=debug" foo: foo.o # The rest of the makefile comes here...
Note that gcc doesn’t have a “release mode” and a “debug mode” like MSVC does. All code is just code. The presence of the various optimization options (-O2 and -Os are the only ones you generally need to care about unless you’re doing very fine tuning) modifies the generated code, but not in a way to prevent interoperability with other ABI-compliant code. Generally you want optimization on stuff you want to release.
The presence of the “-g” option will cause extended symbol and source code information to be placed in the generated files, which is useful for debugging but increases the size of the file (and reveals your source code), which is something you often don’t want in “released” binaries.
But they’re not exclusive. You can have a binary compiled with optimization and debug info, or one with neither.
-O2 will turn on all optimizations that don’t require a space\speed trade off and tends to be the one I see used most often. -O3 does some space for speed trade offs(like function inline.) -Os does O2 plus does other things to reduce code size. This can make things faster than O3 by improving cache use. (test to find out if it works for you.) Note there are a large number of options that none of the O switches touch. The reason they are left out is because it often depends on what kind of code you are writing or are very architecture dependent. | https://howtofusion.com/about-c-how-to-build-in-release-mode-with-optimizations-in-gcc.html | CC-MAIN-2022-40 | refinedweb | 472 | 60.95 |
Introduction to Best Java Compilers
A compiler in Java is one that compiles or executes Java code inside the Java platform. Java class file is the most common type of Java compiler and there are machines that emit native code for that particular hardware or operating system. The hardware or operating system plays a crucial role in a compilation. Different operating systems have different standards that are used in compiling different codes on different platforms. A standard on how Java compilers were specified was given in JSR 199. The Java virtual machine (JVM) is used for loading the class file and either converts to byte code or just in time code using the compilation techniques inside the Java programming language. There are compilers like BlueJ and the basic functionality of a compiler is to convert user code into machine code and then execute it which has various functions and programming sense.
Working
Today there are a number of Java compilers being used in the programming industry. There are a lot of online IDEs or interfaces where Java code can be run very smoothly to execute numerous amount of code. Some of them offer significant advantages over desktop options. Some of the points of these are given below:
- Easy to set up – There are no downloads and no installation procedure.
- Quickstart – Eclipse take one minute to open otherwise
- Easy sharing – Sharing between teachers and students, that is their assignments.
Compilers of Java
In this article, we are going to see some of the compilers in Java used to run code. They are as follows:
1. Codiva
- Codiva.io is the best compiler for Java being used extensively in coding and programming in Java language interface.
- The best advantage of Codiva is that it compiles the code instantly as the user types it, processes the compilation errors and shows it in the editor. After finishing the typing, we see the end results of the compilation which is shown in the editor of the respective compiler.
- There is also a good provision for completion automatically. These are the two features that save a lot of time while processing a simple or complex piece of code in the compiler.
- Codiva has a feature that enables more than one files and packages. It can also have file names whose names can be given customized.
- Codiva also works very smoothly on mobile platforms. Some of the disadvantages of Codiva are it supports only Java, C or C++. Codiva supports Java 9 but doesn’t support Java 9 modules and none of the other online compilers supports Java modules either. So, it is quite natural that it does not support Java 9 modules.
2. Jdoodle
- JDoodle is an extensively used online compiler for running Java code extensively on Java platform. It supports almost 70 languages. JDoodle allows only a single file but you don’t have to specify any filename. These are found by searching the file names.
- It has excellent terminal support for running programs that interact one to one with the live code. The programs are run with 10 seconds timing.
- Android Studio uses Java to build android programs and Jdoodle is one of the very few compilers used there.
- It would be a great choice if one knows a lot of languages and knows how to switch between languages.
- There are disadvantages to JDoodle. One of the disadvantages is that the code is compiled after it has been written or drafted. The user has to then find the error message, go to the line in which the error has occurred and made necessary changes. People who have used Codiva before would find it very difficult to handle JDoodle in the first place. Secondly, the disadvantage of JDoodle is that it just supports one file. The system of encapsulation, packages cannot be taught. JDoodle has many disadvantages. Despite the drawbacks, JDoodle is popular because of its numerous usage.
3. Rextester
- Rextester started as a Regular Expression Tester. It grew up to be an online interface later. It’s very popular among C# users and it can be used for more than 30 programming languages including Java.
- In Rextester, there is variation between multiple editor widgets.
- It has one of the best live collaboration support that has been used in the Java programming language. The URL can be shared and typing can be started very easily. No glitch has been seen until now and multiple users can edit at the same time.
- Netbeans is also a platform where Rextester is being used extensively.
- It supports only a single file and the class name of the file should be Rextester to get supported. Also, the class should NOT be made public.
Example of a Code Running in Blue J Platform
In this piece of code, we are going to see a hotel app in Java code. The code is given below as well as the output.
Sample Code
import java.util.Scanner;
public class HotelMenu {
public static void main(String[] args){
Scanner scan = new Scanner(System.in);
System.out.println("Welcome to BhartiyaTasteBuds.com");
System.out.println();
//Creating Menu
while(true){
System.out.println("To order South Indian Dish, Enter 1");
System.out.println("To order North Indian Dish, Enter 2");
System.out.println("To order Rajasthani Dish, Enter 3");
System.out.println("To order Gujrati Dish, Enter 4");
System.out.println("To order Bengali Dish, Enter 5");
System.out.println("To order Desserts, Enter 6");
System.out.println("To Exit, Enter 9");
System.out.println();
System.out.println("Enter your choice::");
int choice = scan.nextInt();
switch(choice){
case 1: System.out.println("Welcome to South Indian Food Court");
southIndianFood();
break;
case 2: System.out.println("Welcome to North Indian Food Court");
northIndianFood();
break;
case 3: System.out.println("Welcome to Rajasthani Food Court");
rajasthaniFood();
break;
case 4: System.out.println("Welcome to Gujrati Food Court");
gujratiFood();
break;
case 5: System.out.println("Welcome to Bengali Food Court");
bengaliFood();
break;
case 6: System.out.println("Welcome to Desserts Food Court");
desserts();
break;
case 9: System.out.println("Thanks for ordering from our App. Visit again");
System.exit(0);
break;
default: System.out.println("Incorrect input!!! Please re-enter choice from our menu");
}
}
}
public static void southIndianFood(){
System.out.println("You get:");
System.out.println("Idli : 2 Pieces:");
System.out.println("Butter Cheese Dosa : 1 Pieces:");
System.out.println("Vada : 2 Pieces:");
}
public static void northIndianFood(){
System.out.println("You get:");
System.out.println("Chole Bhature : 2 Pieces:");
System.out.println("Litti Chokha : 4 Pieces:");
}
public static void rajasthaniFood(){
System.out.println("You get:");
System.out.println("Dal Baati Churma");
System.out.println("Laal maas");
System.out.println("Methi Bajra puri");
}
public static void gujratiFood(){
System.out.println("You get:");
System.out.println("Dhokla : 2 pieces");
System.out.println("Khandvi");
System.out.println("Methi ka Thepla");
}
public static void bengaliFood(){
System.out.println("You get:");
System.out.println("Maach Bhaat");
System.out.println("Aalu Luchi");
}
public static void desserts(){
System. out. println(" You get: ");
System. out. println("Rasmalai");
System. out. println("Rasgulla : 2 Pieces");
System.out.println("Emarti : 2 Pieces");
System.out.println("Gajar ka halwa");
}
}
Output:
Conclusion
There are numerous compilers in Java bust some of the best compilers in Java are shown in this article. In desktop programming, Java uses BlueJ or Eclipse platform for executing Java code. The compilation time and efficiency depend upon the hardware or the configuration of the operating system that we are using.
Recommended Articles
This is a guide to Best Java Compilers. Here we discuss basic meaning with different best compilers of java in detail with appropriate example and output. You can also go through our other suggested articles – | https://www.educba.com/best-java-compilers/?source=leftnav | CC-MAIN-2020-34 | refinedweb | 1,279 | 51.65 |
From: Stjepan Rajko (stipe_at_[hidden])
Date: 2007-11-12 17:44:39
Hi John,
On Nov 12, 2007 2:19 AM, John Torjo <john.groups_at_[hidden]> wrote:
>
> I guess I'm a bit short sighted, because in my mind, the process should
> be *extremely* simple:
>
I agree :-)
> There would be a "processor" - to which you pass this data.
>
> The processor passes this to the left-most component (that is, the one
> that processes the first input).
> The left-most component processes it - lets say, in its operator(). When
> operator() finishes, the left-most component has processed the input and
> has generated one or more outputs.
> The processor will take those output(s), and pass them on, to the next
> components, and so on.
>
Although the library currently provides no mechanisms of its own (it
just provides a generic layer that could support to some extent a
variety of actual data transport mechanisms, and a developed
Boost.Signals layer that comes with a bunch of components), a simple
example mechanism like this would be interesting to add, and doing so
would probably also force the generic layer to expand in good ways.
Let me see if I can throw together something like this.
>
> Connecting the "dots" should be *extremely* simple.
> (note: I don't really understand why you make usage of
> DATAFLOW_PORT_TRAITS... and other macros).
>
Yes, the macros are very poorly explained :-( . Basically, (and this
should be elaborated in the docs), the VTK example is focused on how
you go about providing support for the VTK mechanism (i.e., that code
just needs to be written once). After that's included, connecting the
dots is (I hope) simple, i.e. you just do:
#include "vtk_dataflow_support.hpp" // this include file is
"operators.hpp" in the VTK examples
// ... function scope:
// get some VTK components
vtkConeSource *cone = vtkConeSource::New();
vtkPolyDataMapper *coneMapper = vtkPolyDataMapper::New();
vtkActor *coneActor = vtkActor::New();
vtkRenderer *ren1= vtkRenderer::New();
vtkRenderWindow *renWin = vtkRenderWindow::New();
// and now connect:
*cone >>= *coneMapper >>= *coneActor >>= *ren1 >>= *renWin;
// or, connect this way
connect(cone, coneMapper);
connect(coneMapper, coneActor);
connect(coneActor, ren1);
connect(ren1, renWin);
The docs do need a lot of work... :-)
Thanks for your comments!
Stjepan
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2007/11/130272.php | CC-MAIN-2021-49 | refinedweb | 380 | 57.47 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.