Pythonic parameterized cache paths.
Project description
A small package for pythonic parameterized cache paths.
Getting Started
Install: pip install cachepath
Import: from cachepath import CachePath, TempPath, Path
Docs: ReadTheDocs | API doc is here
- Why?
Integrates pathlib with tempfile.gettempdir and shutil.rmtree by providing TempPath and Path.rm():
path = TempPath()
path.rm()

# or would you rather..
path = None
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = Path(f.name)
# Only now can we use Path. If we tried using it within the With
# (for example for path.read_text()), we'd break on Windows
path.unlink()  # only if file, doesn't work on folders
Wraps pathlib import for Py2/3 compat. (not in six!):
from cachepath import Path
# or
try:
    from pathlib import Path
except ImportError:
    from pathlib2 import Path
Provides CachePath, which lets you quickly get a parameterized temp filename, with all folders automatically created:
r = CachePath(date, userid, 'expensive_results.txt')
assert (r == Path('/tmp/', date, userid, 'expensive_results.txt')
        and r.parent.exists())
r.rm()         # File remove
r.parent.rm()  # Symmetric with folder remove!

# Without cachepath
p = Path(tempfile.gettempdir(), date, userid, 'expensive_results.txt')
# Don't update timestamp if it already exists so that we don't cause
# Make-like tools to always think something's changed
if not p.parent.exists():
    p.parent.mkdir(parents=True, exist_ok=True)
p.unlink()
# Why is it .unlink() instead of .remove()?
# Why .remove and .unlink, but mkdir instead of createdir?
p.parent.remove()
# .remove() might throw because there was another file in the folder,
# but we didn't care, they're tempfiles!
import shutil
shutil.rmtree(p.parent)
Why, but longer:
Do you need a temp path to pass to some random tool for its logfile? Behold, a gaping hole in pathlib:
import tempfile
import os
try:
    from pathlib import Path
except ImportError:
    from pathlib2 import Path

def get_tempfile():
    fd, loc = tempfile.mkstemp()
    os.close(fd)  # If we forgot to do this, it would stay open until process exit
    return Path(loc)

# Easier way
from cachepath import TempPath

def get_tempfile():
    return TempPath()  # Path('/tmp/213kjdsrandom')
But this module is called cachepath, not temppath, what gives?
Suppose I’m running that same imaginary tool pretty often, but I’d like to skip running it if I already have results for a certain day. Just sticking some identifying info into a filename should be good enough. Something like Path('/tmp/20181204_toolresults.txt')
# try: from pathlib import Path; except ImportError: from pathlib2 import Path
# We'll cheat a little to get py2/3 compat without so much ugliness
from cachepath import Path
import tempfile

def get_tempfile(date):
    filename = '{}_toolresults.txt'.format(date)
    return Path(tempfile.gettempdir(), filename)

# Easier to do this...
from cachepath import CachePath

def get_tempfile(date):
    return CachePath(date, suffix='.txt')
Not bad, but not great. But our requirements changed, let’s go a step further.
Now I’m running this tool a lot, over a tree of data that looks like this:
2018-12-23
    person1
    person2
2018-12-24
    person1
2018-12-25
    person1
I want my logs to be structured the same way. How hard can it be?
2018-12-23/
    person1_output.txt
    person2_output.txt
2018-12-24/
    person1_output.txt
2018-12-25/
    person1_output.txt
Let’s find out:
# Let's get the easy way out of the way first :)
def get_path(date, person):
    return CachePath(date, person, suffix='_output.txt')
    # Automatically ensures /tmp/date/ exists when we create the CachePath!

# Now the hard way
def get_path(date, person):
    personfilename = '{p}_output.txt'.format(p=person)
    returning = Path(tempfile.gettempdir())/date/personfilename
    # Does this mkdir update the modified timestamp of the folders we're in?
    # Might matter if we're part of a larger toolset...
    returning.parent.mkdir(exist_ok=True, parents=True)
    return returning
Suppose we hadn’t remembered to make the $date/ folders. When we passed the Path out to another tool, or tried to .open it, we may have gotten a Permission Denied error on Unix systems rather than the “File/Folder not found” you might expect. With CachePath, this can’t happen. Creating a CachePath implicitly creates all of the preceding directories necessary for your file to exist.
Now, suppose we found a bug in this external tool we were using and we’re going to re-run it for a day. How do we clear out that day’s results so that we can be sure we’re looking at fresh output from the tool? Well, with CachePath, it’s just:
def easy_clear_date(date):
    CachePath(date).clear()  # rm -r /tmp/date/*
But if you don't have cachepath, you'll find that most Python libs play it pretty safe when it comes to files. Path.remove() requires the folder to be empty, and doesn't provide a way to empty the folder. Not to mention, what if our results folders had special permissions, or were actually symlinks, and we had write access but not delete? Oh well, let's see what we can do:
def hard_clear_date(date):
    # We happen to know that date is a folder and not a file (at least in our
    # current design), so we know we need some form of .remove() rather than
    # .unlink(). Unfortunately, pathlib doesn't offer one for folders with
    # files still in them. If you google how to do it, you will find plenty of
    # answers, one of which is a pure pathlib recursive solution! But we're lazy,
    # so lets bring in yet another module:
    p = Path(tempfile.gettempdir(), date)
    import shutil
    if p.exists():
        shutil.rmtree(p)
    p.mkdir(exist_ok=True, parents=True)
    # This still isn't exactly equivalent to CachePath.clear(), because we've
    # lost whatever permissions were set on the date folder, and if it were
    # actually a symlink to somewhere else, that's gone now.
Convinced yet? pip install cachepath or copy the source into your local utils.py (you know you have one.)
By the way, as a side effect of importing cachepath, all Paths get the ability to do rm() and clear().
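To make that concrete, here is a small sketch (the paths are illustrative) of what those injected methods let you do:

from cachepath import Path

p = Path('/tmp', 'results')
p.clear()  # empty the folder but keep it, roughly rm -r /tmp/results/*
p.rm()     # remove the file or folder itself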
Shameless Promo
Find yourself working with paths a lot in cmd-line tools? You might like invoke and/or magicinvoke!
History
1.0.0 (2018-12-08)
- Big doc updates. 1.0.0 to symbolize SemVer adherence.
0.1.0 (2018-12-08)
- First release on PyPI. Adds CachePath, TempPath, Path.
Hi! I am new to programming (although I know some HTML) and I was wondering if someone could tell me what is wrong with my C++ area calculator? Thanks!
#include <iostream>
using namespace std;

int main (int argc, char **argv)
{
    int height;
    int width;
    int area = height*width;

    cout << "This program was written by Michael Keefe\n\n----------------------------------------------------------------\n\nWhat is the height of the quadrilateral? \n" << endl;
    cin >> height;
    cout << "\nThankyou.\n\nThe height of the quadrilateral is: " << height << "\n\nPlease enter the width of the quadrilateral.\n" << endl;
    cin >> width;
    cout << "\nThankyou.\n\nThe height of the quadrilateral is: " << height << "\nThe width of the quadrilateral is: " << width << "\n\n-------------------------------------------------------\n\nThe area of the quadrilateral you specified is: " << area << endl;
    cout << "\n\nThis program was developed by Michael Keefe" << endl;
    system("pause");
    return (0);
}
Every time I input something, it gives me the area as 152! Even 3x3!
Can someone help? | https://www.daniweb.com/programming/software-development/threads/24349/a-begginner-needs-your-help | CC-MAIN-2020-29 | refinedweb | 153 | 68.4 |
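A likely explanation, offered as a sketch rather than a definitive fix: the line "int area = height*width;" runs before any input is read, so it multiplies two uninitialized variables (garbage values), which is why the result is a constant like 152 no matter what you type. Computing the area after both inputs have been read fixes it:

cin >> height;
// ... prompt for the width ...
cin >> width;
int area = height * width;  // compute only after both values are known
cout << "The area of the quadrilateral you specified is: " << area << endl;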
MVC interview questions
This is a collection of MVC interview questions that I have faced in technical interviews at various companies. You should be aware of the differences and similarities among the various versions of MVC; it will help you understand the purpose of the new templates introduced with each MVC release.
Description:In this article, you will find MVC interview questions.
MVC stands for Model-View-Controller. Why MVC? MVC is recommended in most cases because it isolates the domain logic from the user interface. Controlled development is possible with the help of MVC.
Questions:
1)What is model-view-controller?
2)Why to use MVC? Are there any specific criterias in which MVC is used?
3)What is the significance of route mapping?
4)How can we maintain session in MVC?
5)What is partial view?
6)What is JSON? How to use in MVC?
7)What is the difference between tempdata,viewdata and viewbag?
8)What is GET?
9)What is POST?
10)What is PUT?
11)What is DELETE?
12)How to perform validations in MVC?
13)What is ActionResult?
14)How to use ajax with the help of MVC?
15)What is Web-API? Why it is used?
16)What is razor?
17)What is master page? Is it layout in MVC?
18)What is task? What are ReadAsync , PostAsync?
19)What are ActionFilters in MVC?
20)What are the controls of HTML5 you have used in MVC project?
21)How you will implement treeview in MVC?
22)How to create hyperlink in MVC?
23)How to implement validations through Model?
24)How to apply data annotation validation in MVC?
Here i tried to answer few questions which will help the readers.
1)What is model-view-controller?
The Model-View-Controller (MVC) is a pattern that separates the modeling of the domain, the presentation, and the actions based on user input into three separate components.
4)How can we maintain session in MVC?
var empdetails=vp.GetData();
Session["employee"]=empdetails;
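A minimal sketch of the round trip (the EmployeeDetails type and the GetData() call are illustrative):

// Storing in one action
var empdetails = vp.GetData();
Session["employee"] = empdetails;

// Reading it back in a later request (a cast is required)
var emp = Session["employee"] as EmployeeDetails;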
5)What is partial view?
A partial view can be compared to a user control in ASP.NET Web Forms and is mainly used for code re-usability. Partial views are commonly reused for headers and footers, which helps reduce duplicated markup.
6)What is JSON? How to use in MVC?
JSON (JavaScript Object Notation) is a lightweight data-interchange format.It is based on a subset of the JavaScript Programming Language.
JSON is built on two structures
a.A collection of name/value pairs. In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array.
b.An ordered list of values. In most languages, this is realized as an array, vector, list, or sequence.
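In MVC, the usual way to return JSON from a controller is the JsonResult helper. A small illustrative action (the type and property names are made up):

public JsonResult GetEmployee(int id)
{
    var employee = new { Id = id, Name = "John" };        // anonymous object serialized to JSON
    return Json(employee, JsonRequestBehavior.AllowGet);  // allow GET requests
}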
7)What is the difference between tempdata,viewdata and viewbag?
TempData
1.TempData is a dictionary object that is derived from TempDataDictionary class and stored in short lives session.
2. TempData is used to pass data from the current request to a subsequent request, i.e. in case of redirection.
3. Its life is very short: it lives only until the target view is fully loaded.
4. It requires typecasting for complex data types, and you should check for null values to avoid errors.
5. It is used to store one-time messages such as error messages and validation messages.
ViewData
1.ViewData is a dictionary object that is derived from ViewDataDictionary class.
2. ViewData is used to pass data from a controller to the corresponding view.
3. Its life lasts only during the current request.
4. If redirection occurs then its value becomes null.
5. It requires typecasting for complex data types, and you should check for null values to avoid errors.
ViewBag
1.ViewBag is a dynamic property that takes advantage of the new dynamic features in C# 4.0.
2.Basically it is a wrapper around the ViewData and also used to pass data from controller to corresponding view.
3. Its life also lasts only during the current request.
4. If redirection occurs then its value becomes null.
5. It doesn't require typecasting for complex data types. (See the usage sketch below.)
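A minimal controller sketch (action names are illustrative) showing the three mechanisms side by side:

public ActionResult Index()
{
    ViewData["Message"] = "Hello from ViewData";   // current request only
    ViewBag.Greeting   = "Hello from ViewBag";     // dynamic wrapper over ViewData
    TempData["Status"] = "Record saved";           // survives the redirect below
    return RedirectToAction("Confirm");
}

public ActionResult Confirm()
{
    var status = TempData["Status"] as string;     // still available after the redirect
    return View();
}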
8)What is GET?
Unlike a POST request, whose data is sent internally in the HTTP request body, a GET request sends its data as part of the URL.
GET request is the default method.
Since a GET request is sent via the URL, it should not be used for sensitive data.
A GET request has a limitation on its length; good practice is to never allow more than 255 characters.
GET request will be better for caching and bookmarking.
GET request is SEO friendly.
GET request always submits data as TEXT.
9)What is POST?
We have to specify the POST method within the form tag, e.g. <form method="post" action="...">.
Since a POST request encapsulates name/value pairs in the HTTP request body, we can submit sensitive data through the POST method.
POST request has no major limitation.
POST request is not better for caching and bookmarking.
POST request is not SEO friendly.
POST request has no restriction.
10)What is PUT?
Used to update an existing data record
11)What is DELETE?
Used to delete an existing data record
12)How to perform validations in MVC?
public class Employee
{
    [Required(ErrorMessage = "Name Required")]
    public string Name { get; set; }

    [Required(ErrorMessage = "Valid Email Required")]
    public string Email { get; set; }
}
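On the server side, the controller then checks ModelState to see whether the data annotations were satisfied. An illustrative action:

[HttpPost]
public ActionResult Create(Employee model)
{
    if (!ModelState.IsValid)
    {
        return View(model);            // redisplay the form with validation messages
    }
    // save the employee here ...
    return RedirectToAction("Index");
}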
13)What is ActionResult?
An action method responds to user input by performing work and returning an action result. An action result represents a command that the framework will perform on behalf of the action method. The ActionResult class is the base class for action results. | https://www.dotnetspider.com/resources/44827-MVC-interview-questions.aspx | CC-MAIN-2020-05 | refinedweb | 924 | 70.8 |
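Because ActionResult is the abstract base class, a single action can return different concrete results. A short sketch (the repository is hypothetical):

public ActionResult Details(int id)
{
    var employee = repository.Find(id);   // hypothetical data access
    if (employee == null)
        return HttpNotFound();            // HttpNotFoundResult
    return View(employee);                // ViewResult
}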
> lwip-1.3.0.rar > netbuf.c
/**
 * @file
 * Network buffer management
 *
 */

#if LWIP_NETCONN /* don't build if not configured for use in lwipopts.h */

#include "lwip/netbuf.h"
#include "lwip/memp.h"

#include <string.h>

/**
 * Create (allocate) and initialize a new netbuf.
 * The netbuf doesn't yet contain a packet buffer!
 *
 * @return a pointer to a new netbuf
 *         NULL on lack of memory
 */
struct netbuf *netbuf_new(void)
{
  struct netbuf *buf;

  buf = memp_malloc(MEMP_NETBUF);
  if (buf != NULL) {
    buf->p = NULL;
    buf->ptr = NULL;
    buf->addr = NULL;
    return buf;
  } else {
    return NULL;
  }
}

/**
 * Deallocate a netbuf allocated by netbuf_new().
 *
 * @param buf pointer to a netbuf allocated by netbuf_new()
 */
void netbuf_delete(struct netbuf *buf)
{
  if (buf != NULL) {
    if (buf->p != NULL) {
      pbuf_free(buf->p);
      buf->p = buf->ptr = NULL;
    }
    memp_free(MEMP_NETBUF, buf);
  }
}

/**
 * Allocate memory for a packet buffer for a given netbuf.
 *
 * @param buf the netbuf for which to allocate a packet buffer
 * @param size the size of the packet buffer to allocate
 * @return pointer to the allocated memory
 *         NULL if no memory could be allocated
 */
void *
netbuf_alloc(struct netbuf *buf, u16_t size)
{
  LWIP_ERROR("netbuf_alloc: invalid buf", (buf != NULL), return NULL;);

  /* Deallocate any previously allocated memory. */
  if (buf->p != NULL) {
    pbuf_free(buf->p);
  }
  buf->p = pbuf_alloc(PBUF_TRANSPORT, size, PBUF_RAM);
  if (buf->p == NULL) {
    return NULL;
  }
  LWIP_ASSERT("check that first pbuf can hold size", (buf->p->len >= size));
  buf->ptr = buf->p;
  return buf->p->payload;
}

/**
 * Free the packet buffer included in a netbuf
 *
 * @param buf pointer to the netbuf which contains the packet buffer to free
 */
void
netbuf_free(struct netbuf *buf)
{
  LWIP_ERROR("netbuf_free: invalid buf", (buf != NULL), return;);
  if (buf->p != NULL) {
    pbuf_free(buf->p);
  }
  buf->p = buf->ptr = NULL;
}

/**
 * Let a netbuf reference existing (non-volatile) data.
 *
 * @param buf netbuf which should reference the data
 * @param dataptr pointer to the data to reference
 * @param size size of the data
 * @return ERR_OK if data is referenced
 *         ERR_MEM if data couldn't be referenced due to lack of memory
 */
err_t
netbuf_ref(struct netbuf *buf, const void *dataptr, u16_t size)
{
  LWIP_ERROR("netbuf_ref: invalid buf", (buf != NULL), return ERR_ARG;);
  if (buf->p != NULL) {
    pbuf_free(buf->p);
  }
  buf->p = pbuf_alloc(PBUF_TRANSPORT, 0, PBUF_REF);
  if (buf->p == NULL) {
    buf->ptr = NULL;
    return ERR_MEM;
  }
  buf->p->payload = (void*)dataptr;
  buf->p->len = buf->p->tot_len = size;
  buf->ptr = buf->p;
  return ERR_OK;
}

/**
 * Chain one netbuf to another (@see pbuf_chain)
 *
 * @param head the first netbuf
 * @param tail netbuf to chain after head
 */
void
netbuf_chain(struct netbuf *head, struct netbuf *tail)
{
  LWIP_ERROR("netbuf_ref: invalid head", (head != NULL), return;);
  LWIP_ERROR("netbuf_chain: invalid tail", (tail != NULL), return;);
  pbuf_chain(head->p, tail->p);
  head->ptr = head->p;
  memp_free(MEMP_NETBUF, tail);
}

/**
 * Get the data pointer and length of the data inside a netbuf.
 *
 * @param buf netbuf to get the data from
 * @param dataptr pointer to a void pointer where to store the data pointer
 * @param len pointer to an u16_t where the length of the data is stored
 * @return ERR_OK if the information was retreived,
 *         ERR_BUF on error.
 */
err_t
netbuf_data(struct netbuf *buf, void **dataptr, u16_t *len)
{
  LWIP_ERROR("netbuf_data: invalid buf", (buf != NULL), return ERR_ARG;);
  LWIP_ERROR("netbuf_data: invalid dataptr", (dataptr != NULL), return ERR_ARG;);
  LWIP_ERROR("netbuf_data: invalid len", (len != NULL), return ERR_ARG;);

  if (buf->ptr == NULL) {
    return ERR_BUF;
  }
  *dataptr = buf->ptr->payload;
  *len = buf->ptr->len;
  return ERR_OK;
}

/**
 * Move the current data pointer of a packet buffer contained in a netbuf
 * to the next part.
 * The packet buffer itself is not modified.
 *
 * @param buf the netbuf to modify
 * @return -1 if there is no next part
 *         1 if moved to the next part but now there is no next part
 *         0 if moved to the next part and there are still more parts
 */
s8_t
netbuf_next(struct netbuf *buf)
{
  LWIP_ERROR("netbuf_free: invalid buf", (buf != NULL), return -1;);
  if (buf->ptr->next == NULL) {
    return -1;
  }
  buf->ptr = buf->ptr->next;
  if (buf->ptr->next == NULL) {
    return 1;
  }
  return 0;
}

/**
 * Move the current data pointer of a packet buffer contained in a netbuf
 * to the beginning of the packet.
 * The packet buffer itself is not modified.
 *
 * @param buf the netbuf to modify
 */
void
netbuf_first(struct netbuf *buf)
{
  LWIP_ERROR("netbuf_free: invalid buf", (buf != NULL), return;);
  buf->ptr = buf->p;
}

#endif /* LWIP_NETCONN */
Multiline strings
ttag provides a reliable approach for working with multiline strings. For instance, if we have this in our code:
import { t } from 'ttag';

function test(name) {
    return t`multi line string
    with multiple line breaks and
    with formatting ${name}`
}
By default the quoted phrase would contain all the indentation before each line, and all those tabs would end up inside the .po files and in user-facing content. To make things a little bit easier, ttag removes the indentation before each line, so you will receive this in the .po file:
#: src/multiline.js:7
msgid ""
"multi line string\n"
"with multiple line breaks and\n"
"with formatting"
msgstr ""
This behaviour can be changed with the dedent configuration setting:

{ dedent: false }
Morning,
Many people (myself anyway) are probably using variations of type checking
scripts to increase security -
ref:
But these are trivially bypassed with the ASTTest annotation, eg:
import groovy.transform.ASTTest
import org.codehaus.groovy.control.CompilePhase
@ASTTest(phase = CompilePhase.SEMANTIC_ANALYSIS, value = {
println "pwned"
})
class Temp extends Script {
@Override
Object run() {
false
}
}
All the "hacker" needs to do is choose a phase earlier than the one the type
checking script runs in, and they can execute any code in the value closure.
This can't be prevented either by the sandboxing type-checking Cedric
proposed, or org.codehaus.groovy.control.customizers.SecureASTCustomizer
(which runs in CANONICALISATION phase).
As Cedric points out this is true of any global transform on the classpath,
however ASTTest is the only one (afaik, is this true?) that allows
user-supplied code to be run during the compilation process.
My response to that point would be that if the implementor has added
additional transforms that's their responsibility. But ASTTest is actually
in the standard distribution, which means everyone embedding groovy needs to
take special care to deal with it.
My suggestion is that it be moved to a different jar not part of groovy-all.
AFAIK (again), the only way to protect against it is to ship your own
distribution of groovy that removes it.
Even if you have an extension to check it that runs at
Phases.INITIALIZATION, the user script AST could run at the same phase, and
I don't know which would be called first.
cheers, jamie
PS Sorry if you get this twice, I think I was subscribed to the old list.
I am making this program for a fake store that will take the price of an item and add it to the other items you put in and then multiply by sales tax. I was just wondering how i could round everything off to the 100th. For example, if the price of the item was 1.25 and that was all they bought, 1.25 * .0825 is 1.66562. I would like it to read 1.67. Here is the code so far.
Code:

#include <iostream>
using namespace std;

main()
{
    const float SALESTAX = .0825;
    float totalprice;
    float totalpricewithouttax;
    float item1;
    float item2;
    float item3;
    float item4;
    float item5;

    cout << "Enter the price of an item: ";
    cin >> item1;
    if (item1 < 0)
    {
        cout << "You must enter a price";
        return 1;
    }
    if (item1 > 0)
    {
        cout << "If this is the only item being bought, enter 0. If not, enter the price for item 2: ";
    }
    cin >> item2;
    if (item2 == 0)
    {
        totalpricewithouttax = item1;
        totalprice = (item1) * (item1 + SALESTAX);
        cout << "The price is " << totalpricewithouttax << endl;
        cout << "The total ammount due is " << totalprice << endl;
    }
}
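On the rounding question, here is a hedged sketch of two common approaches (the numbers are only samples; note also that a total with tax is usually price * (1 + rate) rather than price * (price + rate)):

#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;

int main()
{
    double price = 1.25;
    double total = price * (1 + 0.0825);           // price plus 8.25% tax

    // Option 1: keep full precision, but display only two decimals
    cout << fixed << setprecision(2) << total << endl;   // prints 1.35

    // Option 2: actually round the stored value to the nearest hundredth
    double rounded = round(total * 100.0) / 100.0;
    cout << rounded << endl;
    return 0;
}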
1. Java EE is an open and standard-based platform for developing, deploying, and managing multi-tier, web-enabled, server-centric, and component-based enterprise applications
2. As a super-set of Java SE, Java EE adds additional specifications, libraries, documentation, and tools.
1. Single specification, multiple competing implementations
2. Implementations available as both OSS/free and high-end commercial
3. Vast community resources: books, tutorials, guides, examples, etc.
4. Large developer pool
5. Application portability
1. By using the Servlet/JSP technology, Java EE applications are automatically web-enabled:
a) Efficient, Java/OO, Easy, I18N, MVC
2. Complete support for web-services
a)Clients
b)End-points
1. Java EE apps run within a Java EE application server that provides all middle-tier services.
2. Thin (web-based) clients
3. Support for rich clients through RMI, Web Services, etc.
a) The design of such clients is beyond the scope of Java EE
1. Java EE applications are made up of:
a) Presentation logic
b) Business logic
c) Data access logic and model
2. Java EE facilitates separation of concerns
1. Concurrency, Scalability, Availability
2. Security
3. Persistence, Transactions
4. Life-Cycle Management
5. Application Components focus on:
6. Presentation
7. Business Logic
Overview of JBoss Application Server
1. JBoss – The Professional Open Source Company
2. Focuses on middleware software and services – JBoss Enterprise Middleware Suite (JEMS)
3. Software is open source and free
4. Makes money on services
5. Acquired by Red Hat in April 2006 for $420M
1. Application Server (JBoss AS, Tomcat)
2. O/R Mapping and Persistence (Hibernate)
3. Portal Platform (JBoss Portal)
4. Business Process Management and Rules (JBoss jBPM, JBoss Rules)
5. Object/Data Cache (JBoss Cache)
6. Distributed Transaction Management (JBoss Transactions)
7. Development Tools (JBoss Tools plugin for Eclipse)
1. Established in 1999 as an open-source EJB container
2. Version 2.x becomes a full J2EE application server
3. Version 3.x is the production series – based on JMX microkernel and service-oriented architecture (May 2002)
4. Version 4.x explores the aspect-oriented middleware services, adds support for EJB 3 (September 2004)
5. Version 5.x comes with JBoss Microcontainer (also a stand-alone project), which replaces the JMX Microkernel of the 3.x and 4.x JBoss series (December 2008)
6. Version 6.x to add support for Java EE 6 APIs and Profiles. Replaces JBoss Messaging with HornetQ (6.0.0.M3 released in April 2010)
1. Enterprise-class reliability, performance, scalability, and high-availability
2. Zero-cost product license, to download, use, embed, and distribute
3. Open-source
4. Built for standards – provides a safe choice for portable applications (interoperable)
5. Service-oriented architecture provides consistency, makes it embeddable
6. Aspect-oriented architecture simplifies interaction with middleware services
7. 24×7 professional support by the core development team
8. Active developer community
9. Over 5 million downloads
10. 25+% market share
1. New kernel ⇒JBoss Microcontainer
a) Is a refactoring of old JMX Microkernel (JBoss AS 3.x and 4.x)
b)The core of JBoss AS 5
2. New messaging provider ⇒JBoss Messaging
a) Replaces old JBossMQ (shipped with JBoss AS 4.x series)
3. One of the first application servers to implement EJB 3.0 specification (dating back to 4.x series)
4. Reliable transaction manager ⇒JBoss TS
a) More than 20 years of expertise in transaction management
5. JBoss Web – based on Apache Tomcat 6.0
6. JBoss WS 3.0 (support for JAX-WS/JAX-RPC)
a) Can be replaced by Sun Metro or Apache CXF for example
7. Two new configurations:
a) Standard: Java EE compliant configuration.
b) Web: Provides support for JTA/JCA and JPA in addition to the Servlet/JSP container. The server can only be accessed through the http port.
1. Service-oriented architecture – service is either defined as a POJO or a JMX Managed Bean (use the JMX kernel, still available in JBoss 5.x but is created by JBoss Microcontainer).
2. Services are hot-pluggable
3. Makes it possible to tune the system for just the required services to lower the overall footprint (easier to secure and tune)
4. Easy to define new services and package them as SARs (service archives) or JARs (Java ARchives)
5. Examples: Servlet/JSP container, EJB container, transaction management, messaging, connection pooling, security etc.
1. JBoss Microcontainer – POJOs services container
2. JBoss Microkernel – JMX MBean server (One of the primary POJOs created by JBoss Microcontainer)
3. Aspect-oriented Framework
4. Web Application Services – based on Tomcat (Servlet, JSP, JSF)
5. Enterprise Services: EJB, ORB, JNDI, JTA
6. Web Services – based on SOAP, WSDL, UDDI, and XML
7. Messaging Services: JMS, JDBC, JCA
8. Persistence Services – Hibernate O/R mapping and transparent persistence
9. HA Services: clustering, fail-over, load-balancing, distributed deployments
10. Security Services – based on JAAS
11. Console Services – monitoring, configuration, deployment, management, lifecycle
1. Download Java SE 6 JDK from or (for Java SE 5 JDK)
2. On UNIX run the installer or uncompress the distribution into a directory of your choice
3. On Windows, run the installer
a) Avoid installing Java into a directory that contains spaces or other special characters (e.g. under C:\Program Files)
b) You can choose not to install “Public JRE”, demos, and source components
1. Set JAVA_HOME to point to the directory where you installed Java and add $JAVA_HOME/bin to your PATH
a) On Windows, add JAVA_HOME System Variable under Start -> Settings -> Control Panel -> System -> Advanced -> Environmental Variables and prefix the existing PATH variable with %JAVA_HOME%\bin;
b) On UNIX-like systems, make these changes in your shell’s configuration file (e.g. ~/.bashrc): export JAVA_HOME=/path/to/java-install-dir and export PATH=$JAVA_HOME/bin:$PATH
2. Test that java -version prints the expected Java version
3. Test that javac prints usage message
4. This verifies that JDK is installed (vs. JRE)
1. Download packaged distribution
a) Preferred
2. Build from source
3.
1. Get Ant from
2. Uncompress it in a directory like C:\Ant
3. Set ANT_HOME to point to the directory where you installed Ant
4. Add $ANT_HOME/bin (on Linux), %ANT_HOME%\bin (on Windows) to your PATH
5. Get jboss--src.tar.gz from
6. Unzip it
7. Browse to build directory and run ant
8. Once finished, the binaries will be under jboss-<version>.GA-src\build\output\jboss-<version>.GA
1. Unpack the compressed archive
a) This is the typical installation method because it is the easiest
2. Alternatively, install the source-built binary
a) Not a common installation method
3. Set JBOSS_HOME to the root folder of JBoss.
4. You can also add $JBOSS_HOME/bin (Linux) or %JBOSS_HOME%\bin (Windows) to your PATH in case you want to run your server from the command-line.
1. Root dir known as home.dir or $JBOSS_HOME
2. Understanding the layout is important:
a) Locating libraries
b) Updating configuration
c) Deploying apps and services
1. ${jboss.home.dir}/bin
2. ${jboss.home.dir}/client
3. ${jboss.home.dir}/common
4. ${jboss.home.dir}/docs
5. ${jboss.home.dir}/lib
6. ${jboss.home.dir}/server
JBoss AS ${jboss.home.dir}/bin directory contains startup/shutdown scripts, bootstrap libraries, Web Services and server management utilities:
2. classpath.sh: A tool to determine JBoss classpaths (both client and server)
3. jboss_init_redhat.sh and jboss_init_suse.sh: JBoss system control scripts for RedHat and SuSE systems
4. probe.sh and probe.bat: used for discovering JBoss AS clusters.
5. run.sh and run.bat: Scripts for starting JBoss AS
6. run.jar: Bootstrap code for starting JBoss AS
7. service.bat: Script to manage JBoss as a Windows service.
8. shutdown.sh and shutdown.bat: Scripts for shutting down JBoss AS (including remote instances)
9. shutdown.jar: Bootstrap code for shutting down JBoss AS
10. twiddle.sh and twiddle.bat: Scripts for running JBoss AS command-line management client (based on JMX)
11. twiddle.jar: Bootstrap code for the JMX management (instrumentation) client
11. wsconsume, wsprovide, wsrunclient and wstools are utilities for Web Services.
1. Contains the Java libraries (JARs) required for clients that run outside the JBoss AS containers, such as:
a) Web Service clients
b) EJB clients
c) JMX clients
2. Used by external applications that need to access JNDI resources
3. On Unix, to get the client CLASSPATH, run: ${jboss.home.dir}/bin/classpath.sh -c
4. As of JBoss 5, the file client/jbossall-client.jar contains references to other JARs via Class-Path setting in its META-INF/MANIFEST.MF This makes it possible for external JBoss clients to just reference this one JAR file as opposed to many of them.
1. Contains the lib directory (also known as common.lib.url).
a) lib folder contains the common libraries shared by all server configurations (more on this later)
b) This directory is new to JBoss 5. In earlier versions of JBoss a number of common libraries were simply duplicated for each configuration set.
1. Contains JBoss bootstrap libraries (core libraries)
2. Do not place your own files here or remove any of the existing files
3. As an example, you’ll find here the JBoss Micro container and the old JMX kernel.
1. Known in JBoss AS server.base.dir
2. Root of server configuration sets
3. JBoss comes with minimal, default and all
4. Version 5.x comes with 2 new configurations: standard and web
5. Defaults to configuration set in server/default
6. Configuration sets contain the actual JBoss services
1. Includes support for JNDI and logging. It does not contain any other J2EE services like Servlet/JSP container, EJB container, or JMS.
2. Can serve as a starting point when creating your own configuration sets
1. As the name implies, this is the default Java EE 5 configuration. Contains the most used services except JAXR, IIOP and clustering services.
all/
1. This configuration extends the default configuration set and also include JAXR, IIOP and clustering services
1. Certified Java EE 5 configuration compliant.
Lightweight web container profile (Java EE 6 web profile). It provides support for JTA/JCA and JPA in addition to the servlet/JSP container.
1. The currently running server/ dir is known in JBoss AS as server.home.url
2. The name of the server (e.g. “default”) is known as server.name
3. Configuration sets are independent of each other
1. Known in JBoss as server.config.url
2. Contains a bootstrap descriptor (jboss-service.xml) that defines which services are loaded for the lifetime of the instance
1. bootstrap/*: Bootstrap descriptors for core micro container services defined in xml
2. xml: Defines the core micro container beans to load during bootstap
3. jboss-service.xml: Defines the core JMX services configurations
4. properties: Specifies a set of properties that are passed to JNDI when new InitialDirContext() is called within JBoss
5. jboss-log4j.xml: Configuration file for the logging service (Log4J) defining log filters, priorities, and destinations
6. login-config.xml: Defines security realms used for authentication and authorization (JAAS)
7. props/*.properties: Java property files (usually used for JAAS realms)
8. policy: Placeholder for security permissions (Java Security Manager). Grant-All by default
9. xml: Configuration file for the standard EJB container
10. standardjbosscmp-jdbc.xml: Configuration file for the standard JBossCMP engine
11. xmdesc/*-mbean.xml: XMBean descriptors for services configured in the jboss-service.xml file
12. props/*: property files defining users and roles for the jmx-console
1. Known in JBoss as server.data.dir
2. Location where some services store private content on the file system
a) Hypersonic DB – built-in (by default use as the temporary message store by JMS)
b) XMBeans attribute persistence (not enabled by default)
c) Transaction objects (temporary storage of objects during the two-phase commit process)
3. This directory is not directly exposed to the end users (e.g. through the web interface)
1. This is where applications and services are deployed
2. Default location used by hot deployment service
3. Contains code and configuration files for all services
1. Contains all the JBoss AS services that are used to recognize and deploy different application and archive types.
1. hibernate-deployer-jboss-beans.xml – Deployer for Hibernate archives (HAR)
2. ejb-deployer-jboss-beans.xml – Service responsible for deploying EJB JAR files
3. ear-deployer-jboss-beans.xml – Service responsible for deploying EAR files
4. deployer – Service responsible for deploying WAR files
5. jboss-aop-jboss5.deployer – Deployer that sets up Aspect Manager Service and deploys AOP applications
6. etc…
1. Directory referred to by the bootstrap code when loading the configuration set
2. Known within JBoss as server.lib.url
3. This directory is for Java code (JARs) to be used both by the deployed applications and JBoss AS services
4. If you have Java libraries that you need to be made available to all your applications/services, these can be placed in the ${jboss.server.lib.url}
5. Similarly, you would also use this directory for Java libraries that need to be used by both your applications/services, and JBoss AS services.
a) A typical example of this is a JDBC driver that is needed by JBoss AS to manage a pool of database connections, as well as your code, which implicitly uses it to interact with the database server.
1. Known within JBoss as server.log.dir
2. Default destination directory for JBoss AS log files (3 log files)
3. boot.log – logs the boot process until the logging service starts
4. server.log – takes over once the logging service is initialized from ${jboss.server.config.url}/jboss-log4j.xml
5. audit.log – security audit log
6. Default startup log priority: DEBUG
7. STDOUT and STDERR are logged to console
1. Log file server.log is rolled over daily (with the ".yyyy-MM-dd" extension)
2. Existing logs are overwritten on [re]start
3. Old log files are not automatically cleaned by the server during runtime
Since the logging system is managed by Log4J it can be easily configured to:
1. Roll over logs hourly
2. Roll over logs by size (e.g. 500KB)
3. Automatically remove old logs
4. Log to SMTP, SNMP, Syslog, JMS, etc.
To shutdown a remote JBoss AS instance, use: ./shutdown.sh -s jnp://remoteHostOrIP:1099 -S Remote instance’s IP address and port are specified by its Naming service configured in ${jboss.server.config.url}/jboss-service.xml
a) Apache Tomcat (6.x) is a free and open source Servlet (2.5) and JSP (2.1) Container
b) Embedded in JBoss AS as deploy/jbossweb.sar
c) JBoss AS configuration for Tomcat integration in each application are located in META-INF/jboss-web.xml
a) Default JAAS Security Domain
b) Class Loading and Sharing
c) Session Management and Caching
d) Clustering and Load Balancing (in all config)
1. Tomcat’s own configuration file: deploy/jbossweb.sar/server.xml
2. Configures
a) Connectors (HTTP, HTTPS, AJP)
b) Security Realms (Inherits from JBoss)
c) Logging (Tomcat Service)
d) Valves (Request/Response interceptors)
e) Virtual Hosts (Name-based)
f) Web application contexts (Per-app configuration)
1. Default web descriptor for all web apps deployers/jbossweb.deployer/web.xml
2. Configures
3. Common Filters
4. Servlets for handling static content (DefaultServlet), JSPs, SSI, CGI scripts, invokers, etc.
5. Default session timeout
6. MIME Type mappings
7. Welcome file list: index.html, index.jsp, etc.
1. Configure the <session-timeout> value inside the <session-config> element of web.xml (see the snippet below)
2. The value (in minutes) indicates how long the servlet container will maintain an idle session (in memory or on disk) before timing out
3. Value ⇐ 0 indicates that sessions never expire – unless destroyed explicitly (through users logouts)
4. Significant impact on server memory usage and end users dissatisfaction with time outs
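A typical web.xml fragment (the 30-minute value is just an example):

<session-config>
    <session-timeout>30</session-timeout> <!-- minutes; a value <= 0 means sessions never expire -->
</session-config>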
1. Tomcat serves static content via its DefaultServlet (configured in Tomcat’s xml file)
a) Any file under an application’s structure (but outside WEB-INF and META-INF directories) is considered static content
2. Application deploy/ROOT.war/ is considered special – it has no context path
a) Serves all content not served by any other application
3. Returns a HTTP 404 response if the requested static content does not exist
4. war also provides support for servlet (see its WEB-INF/web.xml)
1. Core infrastructure (glue) for locating objects or services within an application server
2. Also allows external clients to locate services
3. Important for clustering: hides actual location
4. Divided into API and SPI
a) Applications code against the API
b) Application servers provide the SPI
5. SPIs for accessing remote resources, such as LDAP, DNS, NIS, file systems, RMI registry
1. JNDI is to Java EE what DNS is to Internet apps
2. JNDI maps high-level names to resources like mail sessions, database connection pools, EJBs, and plain environmental properties
3. JNDI organizes its namespace using Environmental Naming Context (ENC) naming convention:
a) Starts with java:comp/env
b) Private to each application
c) Contexts are delimited with a forward-slash (/)
JNDI ENC naming convention:
1. java:comp/env/var – Environmental variables
2. java:comp/env/url – URLs
3. java:comp/env/mail – JavaMail sessions
4. java:comp/env/jms – JMS connection factories and destinations
5. java:comp/env/ejb – EJB home interfaces
6. java:comp/env/jdbc – JDBC DataSources
1. Supports both local (optimized) and remote (over RMI) access to named objects
2. Provides a JVM-private app-shared java: context in addition to app-private java:comp
3. Everything outside java: is public and externally visible
4. Exposes JNDI operations over JMX invoke operations – allows access over HTTP/S
5. Supports viewing JNDI Tree remotely
5. Supports clustering through HA-JNDI
1. Stands for Enterprise Java Bean
2. It’s a server-side component that encapsulates the business logic of an application
3. EJBs are often combined where they call each other to execute business logic on the Java EE server
4. EJBs clients can be :
a) Web-tier components (local or remote)
b) Remote clients (over RMI)
c) Web service clients (over HTTP/SOAP)
The idea of EJBs is to move the business logic out of the web-tier and into a separate layer that exclusively focuses on modeling the business domain and the related operations.
Two main components in EJB:
1. Session Beans perform business logic, manages transactions and access control.
2. Message Driven Beans perform actions on event (receive a JMS message) associated to JMS Queues and JMS Topics.
EJBs run within a EJB Container, a run-time environment of a Java EE app server
The EJB Container provides system-level services, such as:
1. Transactions (including distributed transactions)
2. Security
3. Persistence (via JPA)
Just like the web-tier components run in the Servlet container, EJBs require the services of a EJB container – i.e. EJBs cannot run on their own.
1. Simplify the development of large and distributed enterprise applications:
a) Bean developers focus on business problems, and the container takes care of the rest, most notably transactions and security. Persistence is managed by JPA.
b) Paradigm for separating the business logic from its presentation, which simplifies the UI development
c) EJBs are portable, easily integrate with existing beans, and run on any Java EE AS
EJB technology is a good candidate for the following situations:
1. Applications with complex business logic that must be scalable. Transparent distribution of load across multiple server instances is at core of EJB
2. Applications that require [distributed] transactions to ensure data integrity. The EJB container provides the necessary mechanisms that manage the concurrent access to shared objects.
Prior to EJB3
1. EJB applications were harder to test as they depend heavily on the container services
Automated unit testing was a challenge
2. EJB applications were harder to develop and deploy
A) Reusable components that contains business logic. Clients can interacts with Session Beans locally or remotely.
B) To access a session bean, the client invokes its public methods
C) Stateless Session Bean
1. Performs a task for a client
2. Is not attached to a client (no client state maintained)
3. Can implement a web service
D) Stateful Session Bean
1. Represents a single client inside the application
E) Stateless Session Beans
1. Maintain the client state only for the duration of the method invocation
2. When the method returns, the bean appears to terminate as well
3. Do not need to persist the state to the secondary storage
4. Are more scalable because they can be shared
5. Are more common in Java EE applications
6. Could represent actions such as: sending email, converting currency
F) Stateful Session Beans
1. Maintain the conversational state with the client, which survives across multiple method invocations
2. Release their state once the client removes the bean or terminates
3. Can temporarily persist their state to the secondary storage – for stability reasons
4. Could represent temporary entities such as shopping carts
1. Represent a table in a relational database
2. Each instance of this entity corresponds to a row in that table
3. Each entity has a unique identifier – this allow clients to locate a particular instance of this entity
4. Entities can have relationship, for example, a Zoo instance will contain several Animal instances
1. Instances created by the EJB container
2. These instances are kept into a pool of ready instances
3. A method call is made by a client
4. The container assign a ready instance from the pool to the client for the duration of the call
5. The SLSB instance is then returned to the pool.
1. A client session is started
2. Default constructor is invoked
3. Ressources are injected (if any)
4. @PostContruct annotated method is called
5. Now the SFSB is in cache and ready to execute any business method invocation from the client
1. Previous versions of EJB (1.x – 2.x) can be configured in the /conf/standardjboss.xml
2. For EJB 3.x, the configuration file is /deploy/ejb3-interceptors-aop.xml
This file is divided into domains.
i) Each of these domains contains actions, called Interceptors.
ii) Each call to a domain goes through a stack of Interceptors to the target method. After execution, the call unwinds through the stack in reverse order.
1. New since JBoss AS release 5.1.0
2. Also known as Embedded Jopr project
3. Available at the following URL ⇒
4. Default administrator credentials are admin/admin
5. Essentially, a simplified version of the other console applications
i. Less clutter → easier to work with
The user interface:
1. Is divided in two frames
i. The left frame list the ressources available on the server
ii. The right frame is the Control Panel where you can manage the ressource selected
The Control Panel – 4 main tabs:
1. Summary: Contains the general properties and the most relevant metrics
2. Configuration: To edit or create new ressources (example: you can add a new datasource and it will generate the xxx-ds.xml for you)
3. Metrics: List all the metrics for a ressource
4. Control: When enabled, you can perform special actions related to a ressource
The possiblities in the Admin Console are the following:
1. Deploy/Undeploy applications
2. Update applications
3. Start/Stop/Restart applications
4. Add/Delete ressources
5. Manage ressources
6. See metrics
Application components deployed on JBoss that need access to a relational database can connect to it
A) Directly – by managing their own connections
1. Complicated deployments – requires separate configuration for each web app
2. Slow if connections are not pooled, which is not trivial to implement (though libraries exist)
3. If a connection pool is used, it cannot be shared with other applications further complicating deployments
B) Via a shared database connection pool managed by JBoss
1. Simplifies configuration and maintenance (single file to edit in a “standard” format)
2. Faster because the connections are pooled (production-tested)
3. Can be shared among applications so the connections can be better utilized
4. Applications are portable – as they don’t depend on some internal configuration of the external environment
5. Recommended!
1. Define a resource references in your application
i) Require connectivity to RDBMS
2. Provide RDBMS resources (connection pools) in the server
i) Install JDBC drivers
ii) Define a RDBMS DBCP
iii) Map JBoss-managed RDBMS DBCP to the application’s resource reference
For example, in a web application we would communicate our need for a container-managed RDBMS in WEB-INF/web.xml file:
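A sketch of such a resource reference (the jdbc/MyDS name is an arbitrary example):

<resource-ref>
    <res-ref-name>jdbc/MyDS</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>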
1. JDBC Driver is what enables Java applications to talk to specific RDBMS, such as MySQL, DB2, Oracle, etc.
2. Download the JDBC Driver from the database vendor (for MySQL go to)
3.Copy the driver JAR into directory ${jboss.server.lib.url} or ${jboss.common.lib.url}
4. Restart JBoss
Once mapped, the applications can access this resource to get a database connection:
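For example, a JNDI lookup along these lines (the jdbc/MyDS name matches the illustrative resource reference above):

InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyDS");
Connection conn = ds.getConnection();
try {
    // ... run queries ...
} finally {
    conn.close();   // returns the connection to the pool
}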
1. JBoss embedded Java-based RDBMS
2. deploy/hsqldb-ds.xml configures:
i. Embedded database (known as DefaultDS)
ii. Connection factories
3. Used by JMS MQ for state management and persistence
4. Can be used for CMP
5. Data can be kept in memory or persisted
6. Can allow access to remote clients over TCP
7. This service is for development/testing use only. It is not production-quality.
8. To enable remote access, edit deploy/hsqldb-ds.xml:
1. JBoss has a CachedConnectionManager service that can be used to detect connection leaks (within the scope of a request)
2. Configured in ${jboss.server.url}/deploy/jbossjca-service.xml
3. Triggered by Tomcat's server.xml → CachedConnectionValve
a. Enabled by default – slight overhead
b. Should be used during testing
c. Can be turned off in production if the code is stable
d. If the CachedConnectionValve is enabled in Tomcat's server.xml file, then Tomcat must wait for the CachedConnectionManager service on startup. This is accomplished by adding the following line to Tomcat's META-INF/jboss-service.xml file (near the end):
jboss.jca:service=CachedConnectionManager
4. Connection pools could be monitored (through JMX) by looking at the jboss.jca:name=MyDS,service=ManagedConnectionPool → InUseConnectionCount attribute.
5. The example web application war can be made to leak resources (on /ListCustomers) by
a. appending requesting /ListCustomers?leak=true, and/or by
b. adding a custom system property: -Dleak.jdbc.resources=true to JAVA_OPTS in conf or run.bat (on Windows)
1. Log4j and logging services
2. Configuring logging
1. Data logging is the process of recording events, with an automated computer program
2. JBoss AS5 uses log4j, an open source logging framework
a. Log4j is a tool to help the programmer output log statements to a variety of output targets.
3. Loggers may be assigned levels. The set of possible levels, that is:
4. TRACE
5. DEBUG
6. INFO
7. WARN
8. ERROR
9. FATAL
1. log4j supports multiple output appenders per logger.
2. The format of the log output can be easily changed by extending the Layout class.
3. The target of the log output as well as the writing strategy can be altered by implementations of the Appender interface.
4. log4j is optimized for speed.
5. log4j is designed to handle Java Exceptions from the start.
The PatternLayout, part of the standard log4j distribution, lets the user specify the output format according to conversion patterns similar to the C language printf function.
For example, the PatternLayout with the conversion pattern
“%r [%t] %-5p %c – %m%n” will output something akin to:
176 [main] INFO org.foo.Bar - <log message>, where the text after the '-' is the message of the statement.
1. The log4j configuration file is located at server/xxx/conf/jboss-log4j.xml
2. By default, JBoss produces output to both the console and a log file (log/server.log).
3. By default, the logging threshold for the console is INFO; for server.log there is no threshold
4. By default, the server.log file is created anew each time the server is launched, and grows until the server is stopped or until midnight
The listing below shows how you can change the appender for the server.log file to create, at most, 20 log files of 10 MB in size each.
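Since the stock configuration uses a daily rolling appender, one way to get size-based rolling is to swap in log4j's RollingFileAppender. A sketch (the appender name and conversion pattern may differ in your jboss-log4j.xml):

<appender name="FILE" class="org.apache.log4j.RollingFileAppender">
    <param name="File" value="${jboss.server.log.dir}/server.log"/>
    <param name="MaxFileSize" value="10MB"/>
    <param name="MaxBackupIndex" value="20"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
    </layout>
</appender>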
This example shows how to use the JMX Console to set the level of logging for Hibernate to INFO.
1. Filtering clients by source IP addresses
2. Requiring authentication and authorization
3. Data transport integrity and confidentiality (SSL)
We will explore each one of these in turn
A simple implementation of this filter can be found at
Security is a fundamental part of any enterprise application. You need to be able to restrict who is allowed to access your applications and control what operations application users may perform.
1. JAAS – Java Authentication and Authorization Service (pure Java)
2. Pluggable Authentication Module framework: JNDI, UNIX, Windows, Kerberos, Keystore
3. Support for single sign-on
4. Role-based access control
5. Separates business logic from A&A
6. Declarative (XML-based)
a. Described in deployment descriptors instead of being hard-coded
b. Isolate security from business-level code
7. For example, consider a bank account application. The security requirements, roles, and permissions will vary depending on how is the bank account accessed:
a. via the internet (username + password), via an ATM (card + pin), or at a branch (Photo ID + signature).
b. We benefit by separating the business logic of how bank accounts behave from how bank accounts are accessed.
8. Securing a Java EE application is based on the specification of the application security requirements via the standard Java EE deployment descriptors.
9. You secure EJBs and web components in an enterprise application by using the ejb-jar.xml and web.xml deployment descriptors.
In this case we just use HTTP BASIC authentication, but other options for JBoss are: DIGEST, FORM, and CLIENT-CERT. We will cover some of these later.
Declaring security roles:
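A hedged sketch of the relevant web.xml pieces (the URL pattern and realm name are examples; the role matches the MyRole used later in this section):

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Protected area</web-resource-name>
        <url-pattern>/secure/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>MyRole</role-name>
    </auth-constraint>
</security-constraint>

<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>MyRealm</realm-name>
</login-config>

<security-role>
    <role-name>MyRole</role-name>
</security-role>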
1. Already enabled by default
2. WEB-INF/classes/users.properties:
3. john=secret
4. bob=abc123
mike=passwd
1. WEB-INF/classes/roles.properties:
2. john=MyRole
3. bob=MyRole,Manager
mike=Manager,Administrator
1. Provided by org.jboss.security.auth.spi.UsersRolesLoginModule, configured in the file ${jboss.server.config.url}/login-config.xml:
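A minimal application-policy sketch for this module (the policy name is an example; usersProperties and rolesProperties default to users.properties and roles.properties if omitted):

<application-policy name="my-realm">
    <authentication>
        <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
            <module-option name="usersProperties">users.properties</module-option>
            <module-option name="rolesProperties">roles.properties</module-option>
        </login-module>
    </authentication>
</application-policy>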
Note
The properties files users.properties and roles.properties are loaded during initialization of the context class loader. This means that these files can be placed into any Java EE deployment archive (e.g. WAR), the JBoss configuration directory, or any directory on the JBoss server or system classpath. Placing these files in the ${jboss.server.home.url}/deploy//WEB-INF/classes directory makes them unique to that specific web application. Moving them to ${jboss.server.config.url} directory makes them “global” to the entire JBoss AS instance.
1. Fetches login info from a RDBMS
2. Works with existing DB schemas
3. Uses pooled database connections
4. Scales well as the user population grows
5. Does not require server or application restarts on info change
Database Login Module depends on our ability to set up (and link to) a JBoss-managed DataSource (database connection pool).
term#> mysql/bin/mysql -u root -p
mysql> CREATE DATABASE Authority;
mysql> USE Authority;
mysql> CREATE TABLE Users (Username VARCHAR(32) NOT NULL PRIMARY KEY,Password VARCHAR(32) NOT NULL);
mysql> CREATE TABLE Roles (Username VARCHAR(32) NOT NULL,Rolename VARCHAR(32) NOT NULL,PRIMARY KEY (Username, Rolename));
mysql> GRANT SELECT ON Authority.* TO authority@localhost IDENTIFIED BY “authsecret”;
It is important that the password field be at least 32 characters in order to accommodate MD5-digest-based passwords (more on this later).
You do not have to create a separate database, nor do you need separate tables, but we assume that we are starting from scratch. The default JBoss AS schema for User/Role information is as follows:
1. Table Principals(PrincipalID text, Password text)
2. Table Roles(PrincipalID text, Role text, RoleGroup text)
You also do not need a auth-specific read-only database user, but we create one because it is a good practice.
Populate the database. For example:
INSERT INTO Users VALUES (“john”, “secret”);
INSERT INTO Roles VALUES (“john”, “MyRole”);
INSERT INTO Users VALUES (“bob”, “abc123”);
INSERT INTO Roles VALUES (“bob”, “MyRole”);
INSERT INTO Roles VALUES (“bob”, “Manager”);
INSERT INTO Users VALUES (“mike”, “passwd”);
INSERT INTO Roles VALUES (“mike”, “Manager”);
INSERT INTO Roles VALUES (“mike”, “Admin”);
Define a database connection pool (resource) that will provide connectivity to the Authority database.
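For instance, a mysql-ds.xml along these lines could be dropped into the deploy directory (the JNDI name is an arbitrary choice; it reuses the authority/authsecret account created above):

<datasources>
    <local-tx-datasource>
        <jndi-name>AuthorityDS</jndi-name>
        <connection-url>jdbc:mysql://localhost:3306/Authority</connection-url>
        <driver-class>com.mysql.jdbc.Driver</driver-class>
        <user-name>authority</user-name>
        <password>authsecret</password>
    </local-tx-datasource>
</datasources>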
1. Application policy name declares a new policy. We will reference this name in each [web] application that wishes to use it.
2. The required flag means that the login module is required to succeed.
a. If it succeeds or fails, authentication still continues to proceed down the LoginModule list
b. Other options are: requisite, sufficient, and optional
3. Module option dsJndiName:
a. Defines the JNDI name of the RDBMS DataSource that defines logical users and roles tables
b. Defaults to java:/DefaultDS
4. Module option principalsQuery:
a. Defines a prepared SQL statement that queries the password of a given username
b. Defaults to select Password from Principals where PrincipalID=?
5. Module option rolesQuery:
a. Defines a prepared SQL statement that queries role names (and groups) of a given username
b. The default group name is Roles (hard-coded). Defaults to select Role, RoleGroup from Roles where PrincipalID=?
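Putting those options together, an application policy for this setup might look roughly like this (the policy and DataSource names are illustrative; the queries match the Users/Roles tables created earlier):

<application-policy name="authority-db">
    <authentication>
        <login-module code="org.jboss.security.auth.spi.DatabaseServerLoginModule" flag="required">
            <module-option name="dsJndiName">java:/AuthorityDS</module-option>
            <module-option name="principalsQuery">SELECT Password FROM Users WHERE Username=?</module-option>
            <module-option name="rolesQuery">SELECT Rolename, 'Roles' FROM Roles WHERE Username=?</module-option>
        </login-module>
    </authentication>
</application-policy>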
Default security domain is other, which authenticates against the properties files
Security domains declared in conf/login-config.xml are relative to java:/jaas/ JNDI context
1. Session-based
a. Allows Logout – invalidate()
b. Automatically expires when the session expires
c. More efficient and secure – single authentication
2. Fully customizable
a. Control the layout of the login page
b. Control the layout of the error page
c. Provide links to “user registration” or “recover password” actions
3. Support for FORM-based login is part of the Java EE specification, and like the rest of JAAS-based security, it is independent of the application server
4. With the FORM-based login, the server forces the users to login via an HTML form when it detects the user’s session is not authenticated
a. This is accomplished by forwarding the users’ requests to the login page
b. In case of authentication failures, users are sent to the login error page
Displays the error message if the authentication fails
This page is only shown if user enters invalid username and/or password. Authorization errors (user not in required role) are handled separately.
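A hedged sketch of the moving parts (page names are examples): web.xml points at the two pages, and the login form posts to the special j_security_check action.

<!-- web.xml -->
<login-config>
    <auth-method>FORM</auth-method>
    <form-login-config>
        <form-login-page>/login.jsp</form-login-page>
        <form-error-page>/login-error.jsp</form-error-page>
    </form-login-config>
</login-config>

<!-- login.jsp -->
<form method="post" action="j_security_check">
    <input type="text" name="j_username"/>
    <input type="password" name="j_password"/>
    <input type="submit" value="Login"/>
</form>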
Authorization failures are expressed by HTTP 403 status codes
2.
1. Increase the scan frequency of the deployment scanner (default: 5 secs)
2. Consider setting MinimumSizeon [stateless] Session Bean Container pool (conf/standardjboss.xml)
3. Use Hibernate in place of CMP (2.x)
4. Avoid XA connection pools
5. o Use JDBC drivers to check on connections
1. Remove services that are not needed
2. Not a huge impact on performance
3. Frees up memory and other resource (like threads)
4. Faster JBoss startup
5. Excellent security practice
6. Breaks Java EE TCK
7. Create a new configuration set (copy of default)
8. Services that can be trimmed include the following:
9. Mail Service (and libraries)
10. Cache Invalidation Service
11. J2EE client deployer service
12. HAR deployer and Hibernate session management services
13. Hypersonic (provide a different DefaultDS for JMS MQ)
14. JMS MQ
15. HTTP Invoker (RMI over HTTP)
16. Support for XA Data Sources
17. JMX Console
18. JMX Invoker Adaptor (JMX calls over RMI)
19. Web Console
20. JSR-177 for JMX
21. Console/Email Monitor Alerts
22. Properties Service
23. Scheduler Service/Manager
24. UUID key generation
25. EAR Deployer
26. JMS Queue Destination Manager
27. CORBA/IIOP Service
28. Client User Transaction Service
29. Attribute Persistence Service
30. RMI Classloader
31. Remote JNDI Naming
32. JNDI View
33. Pooled Invoker
34. Bean Shell Deployer
1. Fault Tolerance
2. o Reliability
3. o Uptime guarantee
4. Stable Throughput – Scalability
5. o Provide consistent response times in light of increased system load
6. Manageability of Servers
7. o Server upgrade with no service interruptions
1. A cluster is a set of nodes that communicate with each other and work toward a common goal
2. A Cluster provide these functionalities:
3. Scalability (can we handle more users? can we add hardware to our system?)
4. Load Balancing (share the load between servers)
5. High Availability (our application has to be uptime close to 100%)
6. Fault Tolerance (High Availability and Reliability)
7. o State is conserved even if one server in the cluster crashes
8. Table 2. High Availability and numbers
Free Demo for Corporate & Online Trainings. | https://mindmajix.com/jboss-tutorial | CC-MAIN-2019-04 | refinedweb | 6,064 | 52.36 |
This article was first published on our open-source platform, SimpleAsWater.com. If you are interested in IPFS, Libp2p, Ethereum, IPLD, Multiformats, IPFS Cluster and other Web 3.0 projects, concepts and interactive tutorials, then be sure to check out SimpleAsWater.
This post is a continuation(part 2) our first part of the series we briefly discussed IPLD. We saw that IPLD deals with “defining the data” in IPFS(well, not just in IPFS). In this part, we will dive fully into IPLD and discuss:
- The Significance of IPLD: What is the philosophy behind it, Why do we need it and Where does it fit into IPFS?
- How does IPLD Work?: An Explanation of its specification and how does it coordinate with other components of IPFS?
- Playing with IPLD: Well, It’s always fun to play 😊
If you like high-tech Web3 concepts like IPLD explained in simple words with interactive tutorials, then check out SimpleAsWater.com.
I hope you learn a lot about IPFS from this series. Let’s start!
The Significance of IPLD
IPLD is not just a part of the IPFS project, but a separate project in itself. To understand its significance in the decentralized world we have to understand the concept of Linked Data: The Semantic Web.
What is Linked Data & Why do we Need it?
The Semantic Web or Linked Data is a term coined by Sir Tim Berners Lee in a seminal article published in Scientific American in 2001. Berners Lee articulated a vision of a World Wide Web of data that machines could process independently of humans, enabling a host of new services transforming our everyday lives. While the paper’s vision of most web pages containing structured data that could be analyzed and acted upon by software agents has NOT materialized, the Semantic Web has emerged as a platform of increasing importance for data interchange and integration through the growing community implementing data sharing using international semantic web standards, called Linked Data.
There are many current examples of the use of semantic web technologies and Linked Data to share valuable structured information in a flexible and extensible manner across the web. Semantic Web technologies are used extensively in the life sciences to facilitate drug discovery by finding paths across multiple datasets showing associations between drugs and side effects via genes linked to each. The New York Times has published its vocabulary of approximately 10,000 subject headings developed over 150 years as Linked Data and will expand coverage to approximately 30,000 topic tags; they encourage the development of services consuming these vocabularies and linking them with other online resources. The British Broadcasting Corporation uses Linked Data to make content more findable by search engines and more linkable through social media; to add additional context from supplemental resources in domains like music or sports, and to propagate linkages and editorial annotations beyond their original entry target to bring relevant information forward in additional contexts. The home page of the United States data.gov site states, “As the web of linked documents evolves to include the Web of linked data, we’re working to maximize the potential of Semantic Web technologies to realize the promise of Linked Open Government Data.” And also all the social media websites use Linked Data to create a web of People to make their platform as engaging as possible.
So we do have some Linked Data but we still have a long way to go in order to harness the true power of Linked Data.
Now imagine if you could refer your latest commits in a git branch to a bitcoin transaction to timestamp your work. So, by linking your git commit, you can view the commit from your blockchain explorer. Or, if you could link your Ethereum contract to media on IPFS, perhaps modifying it and tracking its changes on each function execution.
All this is possible using IPLD.
IPLD is the data model of the content-addressable web(as discussed in part1).
It allows us to treat all hash-linked data structures as subsets of a unified information space, unifying all data models that link data with hashes as instances of IPLD.
Or in other words, IPLD is a set of standards and implementations for creating decentralized data-structures that are universally addressable and linkable. These structures allow us to do for data what URLs and links did for HTML web pages.(as right now I can’t connect my git commit to a blockchain transaction).
IPLD is a single namespace for all hash-inspired protocols. Through IPLD, links can be traversed across protocols, allowing you to explore data regardless of the underlying protocol.
Before diving deeper into IPLD, Let’s see the properties of IPLD.
Properties of IPLD
The sky’s the limit as IPLD allows you to work across protocol boundaries. The point is that IPLD provides libraries that make the underlying data interoperable across tools and across protocols by default.
A canonical data model
A self-contained descriptive model that uniquely identifies any hash-based data structure and ensures the same logical object always maps to the exact same sequence of bits.
Protocol independent resolution
IPLD brings isolated systems together(like connecting Bitcoin, Ethereum and git), making integration with existing protocols simple.
Upgradeable
With Multiformats(we will dive more into this in part 4) support, IPLD is easily upgradeable and will grow with your favorite protocols.
Operates across formats
Express your IPLD objects in various serializable formats like JSON, CBOR, YAML, XML and many more, making IPLD a cinch to use with any framework.
Backward compatible
Non-intrusive resolvers make IPLD easy to integrate within your existing work.
A namespace for all protocols
IPLD allows you to explore data across protocols seamlessly, binding hash-based data structures together through a common namespace.
Now, let’s dive deeper into the IPLD.
Diving Deeper into IPLD Specs
IPLD is not a single specification, it is a set of specifications.
The goal of this stack is to enable decentralized data-structures which in turn will enable more decentralized applications.
Many of the specifications in this stack are inter-dependent.
These diagrams show a high level of project specification of IPLD. Even if you don’t understand it fully it’s totally OK.
To learn more about IPLD, here is a great talk by Juan Benet.
OK. Enough of theoretical stuff. Let’s dive into the most fun part of this post 😊
In IPFS, IPLD helps to structure and link all the data chunks/objects. So, as we saw in part 1, IPLD was responsible for organizing all the data chunks that constituted the image of the kitty🐱.
Playing with IPLD
In IPFS, IPLD helps to structure and link all the data chunks/objects. So, as we saw in part 1, IPLD was responsible for organizing all the data chunks that constituted the image of the kitty🐱.
In this part, we will create a medium.com like publication system, and link the tags, articles, and authors using IPLD. This will help you to get a more intuitive understanding of IPLD. You can also find the complete tutorial on Github.
Let’s get started!
Before creating the publication system, we’ll be exploring the IPFS DAG API, which lets us store data objects in IPFS in IPLD format. (You can store more exciting things in IPFS, like your favorite cat GIF, but we will stick to simpler things for now.)
If you are not familiar with Merkle tries and DAGs then head over here. If you have an understanding of what these terms mean then continue…
Create a folder name
ipld-blogs. Run
npm init and press
Enter for all the questions.
Now install the dependencies using:
npm install ipfs-http-client cids --save
After installing the module your project structure will look like this:
Creating an IPLD format node
You can create a new node by passing a data object into the
ipfs.dag.put method, which returns a Content Identifier (CID) for the newly created node.
ipfs.dag.put({name: ‘vasa’})
A CID is an address for a block of data in IPFS that is derived from its content. Every time someone puts the same
{name: 'vasa'} data into IPFS, they'll get back an identical CID to the one you got. If they put in
{name: 'vAsa'} instead, the CID will be different.
Paste this code in
tut.js and run
node tut.js
/: */ ipfs.dag.put({name: "vasa"}, { format: 'dag-cbor', hashAlg: 'sha2-256' }, (err, cid)=>{ if(err){ console.log("ERR\n", this CID
zdpuAujL3noEMamveLPQWJPY6CYZHhHoskYQaZBvRbAfVwR8S. We have successfully created an IPLD format node.
Connecting IPLD objects
One important feature of Directed Acyclic Graphs (DAGs) is the ability to link them together.
The way you express links in the
ipfs DAG store is with the
CID of another node.
For example, if we wanted one node to have a link called “foo” pointed to another CID instance previously saved as
barCid, it might look like:
{ foo: barCid }
When we give a field a name and make its value a link to a CID, we call this a named link.
Below is the code showing how to create a named link.
/: */ async function linkNodes(){ let vasa = await ipfs.dag.put({name: 'vasa'}); //Linking secondNode to vasa using named link. let secondNode = await ipfs.dag.put({linkToVasa: vasa}); } linkNodes();
Reading Nested data using Links
You can read data from deeply nested objects using path queries.
ipfs.dag.get allows queries using IPFS paths. These queries return an object containing the value of the query and any remaining path that was unresolved.
The cool thing about this API is that it can also traverse through links. Here is an example of how you can read nested data using links.
/' }); explore your IPLD nodes using this cool IPLD explorer. Like, if I want to see this CID:
zdpuAujL3noEMamveLPQWJPY6CYZHhHoskYQaZBvRbAfVwR8S, I will go to this link:
Now, as we have explored IPFS DAG API we are ready to work with IPLD and create our publication system.
Creating a Publication System
We will create a simple blogging application. This blogging application can:
Add a new Author IPLD object. An author will have 2 fields: name and profile(a tag line for your profile).
Create a new Post IPLD object. A post will have 4 fields: author, content, tags and publication date-time.
Read a Post using post CID.
Below is the code implementation for the above goals.
/* PUBLICATION SYSTEM Adding new Author An author will have -> name -> profile Creating A Blog A Blog will have a: -> author -> content -> tags -> timeOfPublish List all Blogs for an author Read a Blog */ /' }); //Create an Author async function addNewAuthor(name) { //creating blog author object var newAuthor = await ipfs.dag.put({ name: name, profile: "Entrepreneur | Co-founder/Developer @TowardsBlockChain, an MIT CIC incubated startup | Speaker |" }); console.log("Added new Author "+name+": " + newAuthor); return newAuthor; } //Creating a Blog async function createBlog(author, content, tags) { //creating blog object var post = await ipfs.dag.put({ author: author, content: content, tags: tags, timeOfPublish: Date() }); console.log("Published a new Post by " + author + ": " + post); return post; } //Read a blog async function readBlog(postCID) { ipfs.dag.get(postCID + "", (err, post) => { if (err) { console.error('Error while reading post: ' + err) } else { console.log("Post Details\n", post); ipfs.dag.get(postCID + "/author", (err, author) => { if (err) { console.error('Error while reading post author: ' + err) } else { console.log("Post Author Details\n", author); } }); } }); } function startPublication() { addNewAuthor("vasa").then( (newAuthor) => { createBlog(newAuthor,"my first post", ["ipfs","ipld","vasa","towardsblockchain"]).then( (postCID) => readBlog(postCID)) }); } startPublication();
On running this code and it will first create an author via
addNewAuthor, which will return the authors CID. Then this CID will be passed to
createBlog function which will return the
postCID. This
postCID will be used by
readBlog function to fetch the post details.
You can create more complex applications using IPLD…
Ok. that’s it for this part. If you have any question, then you can shoot them in the comments.
I hope you have learned a lot of thing from this post. In the next post, we will dive into the naming System of the distributed web, IPNS. So stay tuned…
This article was first posted on SimpleAsWater.com | https://kauri.io/understanding-ipfs-in-depth(26):-what-is-interplanetary-linked-data(ipld)/2de296caf9ac41f296484198e3350e8e/a | CC-MAIN-2020-34 | refinedweb | 2,041 | 54.83 |
Following questions have been asked in GATE CS 2011 exam.
1) What does the following fragment of C-program print?
char c[] = "GATE2011"; char *p =c; printf("%s", p + p[3] - p[1]) ;
(A) GATE2011
(B) E2011
(C) 2011
(D) 011
Answer: (C)]) ;
2) Consider the following recursive C function that takes two arguments
unsigned int foo(unsigned int n, unsigned int r) { if (n > 0) return (n%r + foo (n/r, r )); else return 0; }.
Recommended Posts:
- C Language | Set 9
- C Language | Set 7
- Data Structures and Algorithms | Set 26
- Data Structures and Algorithms | Set 27
- C Language | Set. | https://www.geeksforgeeks.org/c-language-set-8/ | CC-MAIN-2018-34 | refinedweb | 101 | 64.54 |
On Tue, Aug 07, 2007 at 08:43:54PM -0400, John W. Eaton wrote: > Ugh. Don't do it like that. Instead, write something like > > your-config-header.h: > -------------------- > #ifdef __GNUC__ > #define ATTR_MALLOC __attribute__((malloc)) > #else > #define ATTR_MALLOC > #endif > > and then in your code, just use > > #include "your-config-header.h" > ... > void *myfunc (char *myvar) ATTR_MALLOC; The more I think about it the more I think this is a bad idea for library code. It certainly makes the code more tidy but doesn't this end up screwing any application that tries to link against your code? I don't ever install config.h. -J --
pgplJzDl5hHmU.pgp
Description: PGP signature | http://lists.gnu.org/archive/html/autoconf/2007-08/msg00016.html | CC-MAIN-2014-41 | refinedweb | 110 | 77.53 |
From: Paul A. Bristow (pbristow_at_[hidden])
Date: 2001-04-20 03:50:48
Matt Austern suggested:;
}
Shall I produce a sample of this with actual values?
for comment by others? Or are there other views?
I favour this because it does not require users to remember
to put pi() rather than plain pi.
Personally I find the ()s distracting from the filling in the pie.
Paul
> -----Original Message-----
> From: austern_at_[hidden] [mailto:austern_at_[hidden]]
> Sent: Thursday, April 19, 2001 8:00 PM
> To: boost_at_[hidden]
> Subject: Re: [boost] Boost.MathConstants: Review
>
>
> "Paul A. Bristow" wrote:
> >
> > > -----Original Message-----
> > > From: Ed Brey [mailto:brey_at_[hidden]]
> > > Sent: Thursday, April 19, 2001 4:19 PM
> > > To: boost_at_[hidden]
> > > Subject: [boost] Boost.MathConstants: Review
> > >
> > I am content to provide an .hpp file of this form
> >
> > > namespace boost {
> > > namespace math {
> > > template<typename T>
> > > struct constants {
> > > static T pi() {return T(3.1415...L);}
> > > };
> > > }
> > > }
> > >
> > > where a typical user experience would be documented as this:
> > >
> > > typedef boost::math::constants<float> c;
> > > std::cout << "Baseball and apple " << c::pi();
> >
> > as suggested (but alas not suggested in previous discussions!)
> >
> > BUT I am VERY keen to use the macro values because of the
> > long and tedious work in altering the generation program
> > to write this file with the constants embedded as above.
> > (And to write a validation program too!)
> > (And it leaves the file of macros useful for C programs -
> > I agree that should be at no extra cost to C++ users).
>
>;
> }
>
> If you find it reasonable to have macros, you can hide them
> away in that implementation file where they're out of sight.
>
> This strikes me as a cleaner solution, and I think you'd be
> hard pressed to find a platform where it made a noticable
> difference in performance. It also has the advantage that,
> on some platforms you could initialize the numerical value
> in tricky ways that wouldn't be appropriate in a header.
> (I'm thinking of awful stuff, like using unions to control
> the exact bitwise representation.)
>
> --Matt
>
> | https://lists.boost.org/Archives/boost/2001/04/11233.php | CC-MAIN-2020-45 | refinedweb | 333 | 63.7 |
This python module provides an annotation that caches results of function calls automatically.
Project description
jk_cachefunccalls
Introduction
This python module provides an annotation that caches results of function calls automatically.
Information about this module can be found here:
Why this module?
Sometimes functions or methods provide data that are a little bit expensive to calculate. For example analysing some directory and providing information about the disk space used might take a few tens of seconds or even a few seconds. If various other components of a software want to use this information these software components might invoke a single method that performs all necessary operations and afterwards provides a single result. This module now provides an annotation that can automatically cache this result. If multiple calls to such a function or method are performed, the first call then will calculate the actual value, but successive calls might just return the last value calculated.
Of course such caching mechanism will not be feasible in every situation. But the situation just discribed where a directory must be scanned is an excellent example where such caching is useful: For the duration of a few seconds it is typically no problem to return the value just calculated a few seconds ago.
The annotation provided by this module enables you to implement such caching without any specific need for implementing such caching yourself. It is a convenient way of adding this functionality to an existing function or method without a complicated set of codelines.
How to use this module
Import this module
Please include this module into your application using the following code:
from jk_cachefunccalls import *
Annotate a function
Now after having imported the annotation named
cacheCalls we can make use of it. Example:
@cacheCalls(seconds=5) def someFunction(): .... # do something complicated .... return ....
In the example above the function
someFunction is annotated in such a way that return values are cached for 5 seconds.
Annotate a method
With annotating methods it is exactly the same:
class MyClass(object): ... @cacheCalls(seconds=5) def someMethod(self): .... # do something complicated .... return .... ...
Caching depending on an argument
Sometimes you need to depend function or method calls on argument(s). If arguments exists, the caching mechanism can take them into consideration. For that you can specify an additional annotation parameter that defines the index of the argument(s) to consider. Example:
@cacheCalls(seconds=5, dependArgs=[0]) def someFunction(hostName:str): .... # do something complicated .... return ....
Here we depend all caching on the very first argument. If this is a host name and successive calls to this function are performed specifying always the same host name, caching will provide the last value calculated (within a window of 5 seconds). However if a call is performed with a different host name, the cache value will be discared immediately. If multiple calls are performed specifying this new, different host name, the cached value will be returned again.
In summary:
- If you invoke such a function specifying a different value on every call, the function is always executed as there would be no caching.
- If you invoke such a function specifying the same value as you did in the last call, the anntation wrapper will return a cached value (if available).
Please note that this kind of caching is based on the id of an argument value (derived by
id()). If you specify constants (such as integers or strings) python ensures that those values have the same id. This implies if you specify an object of some kind as an argument the caching mechanism is not based on the value(s) stored in such an object but by on the identify of such an object.
The reason for this is simple: E.g. if you would like to depend some function call on an object representing a network connection to a specific host or service, there likely will be a new object for every single connection as typically such connection objects are not reused but created again if a new connection is required. Using
id() is very fast so caching will not depend on more complex calculations but simply on the identity of such an argument object. However if you modify the state of such an object this is not recognized: The value cached previously might be returned.
In this implementation we give speed of caching a greater emphasis than the exact state of an argument object. It is your responsibility as a programmer to know about the consequences of such caching (and the implications using the ids of arguments) in order to have your program behave correctly.
Ignoring the cache
Sometimes it is required to ignore a value that might have been cached. For this use the
_ignoreCache argument if you invoke such an annotated function:
x = someFunction("localhost", _ignoreCache=True)
If you specify
_ignoreCache this will control the behaviour of the wrapper around the function to be invoked. If you specify
True here the wrapper will ignore the cache (but will cache the new value returned by the invoked function).. | https://pypi.org/project/jk-cachefunccalls/ | CC-MAIN-2020-16 | refinedweb | 837 | 51.68 |
11 June 2012 11:07 [Source: ICIS news]
LONDON (ICIS)--European chemical stocks rose on Monday, in line with financial markets, after eurozone ministers agreed to provide ?xml:namespace>
The bailout package was agreed over the weekend with eurozone ministers lending
At 09:16 GMT, the
With European indices trading higher, the Dow Jones Euro Stoxx Chemicals index was up by 2.18%, as shares in many of
Petrochemical major BASF’s shares were up by 2.18%, while fellow Germany-based chemical company Bayer’s shares were trading up by 3.08%.
Shares in
France-based Arkema’s shares were trading up by 1.59% from the previous close.
Crude prices rose by more than $2/bbl in early Asian trade on Monday after the larger-than-expected bailout package for
On Friday 8 June, European chemical stocks fell after ratings agency Fitch downgraded
( | http://www.icis.com/Articles/2012/06/11/9567958/europe-chemical-stocks-rise-on-spanish-bailout-package.html | CC-MAIN-2014-52 | refinedweb | 145 | 61.26 |
User talk:Baloney Detection
Contents
Sysop[edit]
You are now sysop. Have fun RandonGeneration (talk) 22:06, 19 May 2012 (UTC)
- ...and read this. Peter with added ‼Science‼ 22:39, 19 May 2012 (UTC)
Let's discuss LW somewhere else or not discuss it because fuck it.[edit]
I feel we are eating too much space here, it's unhealthy. It is 'i am crazy or they are crazy' situation, and that's rather maddening and makes us look crazy too. You can ask at some mathematics forum about this "bayescraft" and ask why would it support many worlds interpretation, or ask whenever that guy is awesome statistics expert or just talking technobabble to justify weird beliefs like cryonics. Dmytry (talk) 15:33, 8 July 2012 (UTC)
- I see your point. It's just a bit upsetting how they get a pass (sort of) for calling themselves "rational". As for a math forum, it would be rather inappropriate for me because I'm prett bad at math generally and probably wouldn't understand much. That's why I ask. Unlike Yudkowsky, I don't proclaim myself to be an expert on subjects I'm not. I consider myself pretty knowledgeable at history, and gladly correct misunderstandings I come across. I think the role of Bayes' theorem is a subject for philosophy of science (with Yudkowsky doesn't know anything about).--Baloney Detection (talk) 20:31, 8 July 2012 (UTC)
- I lean pretty hard in the direction of Bayesianism, but the "reformulating science as Bayesianism" stuff is a case of way too much of a good thing. See here for objections. Nebuchadnezzar (talk) 20:42, 8 July 2012 (UTC)
- Do you think the scientific method and Bayes' theorem are in conflict?--Baloney Detection (talk) 20:49, 8 July 2012 (UTC)
- No, that's sheer nonsense. Nebuchadnezzar (talk) 21:55, 8 July 2012 (UTC)
- the LW's Bayesianism is to Bayes as objectivism is to objects. I don't think they know how to handle cycles, or even thought about that. Plus the science works like a strategy like "I am willing to discard a valid theory that many percent of the time" (and the cut off percent can be set based on expected utility loss in the case that the theory is true but rejected by accident, versus the cost of research). How probable is a theory given data is not well defined without how probable is a theory without data, and that is pure matter of opinion (solomonoff induction or not), beyond the usual 'larger theory is less probable'. Dmytry (talk) 20:17, 9 July 2012 (UTC)
I'm going to try answering some of the questions you asked:
Bayes rule finds how probable is the theory given data and given prior knowledge of how probable is the theory. It is also a proof that you can't know how probable is the theory given experimental data, without having prior knowledge of how probable is the theory.
While the Bayes rule is incredibly useful (and I have used it multiple times in my work), it has interesting bad properties also: namely, if you are incredibly certain in a theory, then no amount of data may be able to result in substantially low belief. For example, if you start off with sufficiently high probability of God, then no amount of evidence can convince you otherwise, as all evidence has common failure mode (God is testing me) and the certainty of evidence will never exceed some level (there is a limit minimum to probability of all evidence falling).
People, fundamentally, act in precisely this Bayesian manner when being stubborn. There is no good practical solution to this problem.
There is an alternative method: you can say, Okay we'll agree to set up an experiment so that there will be up to 1 in a million chance that we will agree to reject your theory even if it is true (alternatively, we can agree to set up experiment so that there will be up to 1 in a million chance that you will accept a false theory). And with this we'll can settle our disputes correctly most of the time and arrive at agreement without having to use prior (without-the-evidence) probabilities for the hypotheses. This is the scientific method in a nutshell. There is a place for Bayes rule in science too, and indeed Bayes rule is heavily used in science when designing the experiments and such; you just can't quite use Bayes rule in the circumstances where the prior probability is unknown. But the reason they go for Bayes rule all the way is that they do not like this method of settling disputes, and they don't like it because it settled disputes against the woo they promote. Dmytry (talk) 09:31, 11 July 2012 (UTC)
- Thank you very much for that response. Math has always been a weakness for me. I wonder though, how do you get probalilities/priors? Isn't that often very hard in real life? I recall reading a debate between a Muslim and an atheist, and the Muslim causally threw around probabilities for how unlikely it would be for it to contain X if it was not of divine origin. Many wondered from where he got those probabilities. They appeared to just be made up at the spot.--Baloney Detection (talk) 13:13, 14 July 2012 (UTC)
- Well, for example, if I am to use Bayes rule in image classification on a dataset, I will get probabilities from sampling my dataset - what percentage of images are like what? Suppose I am to write an image filter to block 'nasty' images from children's eyes. I get how often this algorithm will face a nasty image, that is the prior, then apply the probabilities of particular feature within nasty image (e.g. skin coloured objects taking high percentage of the screen), to update the probability using Bayes rule, pretty much as outlined in Eliezer's introduction to Bayes theorem. Herein lies a huge potential for fucking up: two features may be correlated. Here's the other fuck up: existence of my filter will change how the pictures will be and the prior probabilities. Making up prior probabilities from scratch, based on reason alone, fits well with philosophy of 'rationalism' (the one that's opposed to empiricism), and it is necessary to have a view that this is the only way. In scientific method the probability of data given theory can usually be inferred from the theory. For example, the theory is that the coin is unbiased, the data is sequence of 10 tails. The probability of data given theory is 1/1024 . The probability of the theory given data, on the other hand, requires making up a prior probability that the coin is unbiased p , and consider as the only alternative hypothesis that the coin lands tails up always, and then finding posterior probability by such reasoning: the fraction p of trials had the unbiased coin, and of them one in 1024 was what we seen. The total population of trials has portion p of which one in 1024 is what was seen, and portion 1-p of which one is one is what we seen. The fraction of unbiased coin that is consistent with what is seen is then (p/1024)/(p/1024 + 1-p) . If you are very sure (99.9999%) that the coin can't always land tail up, and must be unbiased, then (0.999999/1024)/(0.999999/1024 + 1-0.999999) = 0.998977 , i.e. the probability that coin is unbiased, after evidence, is 0.998977 . If you are assigning even odds that it is biased or unbiased, it is (0.5/1024) / (0.5/1024 + 0.5) = 0.00097561 , or about one in thousand. This has an interesting property that in real world situations any such a-priori assumptions that you can make up out of thin air can be gamed for fun and profit by some sort of scam scheme. Note the alternative reasoning: If I am to decide after 10 heads up that the coin can't be fair, then I'll assume that coin is not fair in one such trial in 1024 when the coin is fair. I can then consider how often I do coin trials and how bad it would be for me to believe that the coin is not fair when it is actually fair, and decide after how many trials I am going to assume that the coin is not fair. The latter scheme does not require to make up sensible prior probability that coin is not fair - having such probability would improve the calculations, but even without such probability one can find the range of losses of the strategy corresponding to the range of prior probabilities, and worst case losses can be assumed for a hostile environment (such as the one with many enough clever people who invariably believe their research is most important thing in the world, and want to trick you into paying them to do the research) Dmytry (talk) 10:04, 26 July 2012 (UTC)
Essayspace[edit]
WTF? Is the video supposed to be the content of the essay? If so, you don't understand what the Essay namespace is about. You can add the link to Sean Carroll's article and let someone delete the page.--ZooGuard (talk) 18:35, 16 July 2012 (UTC)
- Ok I added the link to the article, you can delete the page if you want to.--Baloney Detection (talk) 18:49, 16 July 2012 (UTC)
Relax, Max[edit]
Not everybody is as against LW as you are. That's okay. Personally, I agree. They're nothing but self-glorified science-fiction writers. But there's no need to get up in everyone's business about it. And please, try to keep it down to a minimum of pages.--"Shut up, Brx." 21:00, 18 July 2012 (UTC)
- I agree that I might have gotten an unhealthy obsession by LW as of late. But it really bugs me when certain people can't see the obvious crankery that is LW. I guess it is because LW is more subtle about it and you have to digg a little, they are not openly wooish like Chopra. If these people had been young in the 60s and the 70s they would likely have been Randroids. Rand held "reason" (her version of it, at least) in high regard and praised science, yet if you digg a little deeper, she was a complete crackpot.--Baloney Detection (talk) 21:59, 19 July 2012 (UTC)
This is just too funny.[edit] Dmytry (talk) 10:08, 26 July 2012 (UTC)
- They have pics of it here: --Baloney Detection (talk) 10:09, 28 July 2012 (UTC)
LW[edit]
Howdy! Let me preface by saying that I don't mean to be hostile here; I haven't seen anything to suggest that you are concerned with anything other than a fair depiction of LessWrong. So please don't take this as an attack - I just want to head off a perceived problem right now, is all.
After I rewrote the lede to the article, I noticed you very quickly added in an additional criticism to it. While not strictly wrong, it seemed misplaced, and so I moved it down a bit. But I was planning on reworking a lot of the article, and you seem to be one of the most frequent editors (along with Gerard, but I'm not worried about him). As I mentioned on the talk page, it looks to me like most of the article is filled with these little additions. But I see two problems with them:
- Their inclusion throughout the article makes it feel cluttered and difficult to read: clear commentary is hidden by numerous small details.
- The sheer profusion and the triviality of many of the criticisms smacks of an attempt to exhaustively detail every failing.
So I was wondering if maybe you could (in the future) work to integrate some of these criticisms into the article as a whole, rather than inserting them as addendums or interjections. Further, I thought it might be good for you to take a step back and pause before adding some things. I don't want to cite specific examples (since then we'd just start arguing over relative merit) but I do notice that you seem a very enthusiastic critic of LW. There's nothing wrong with that, but until we have a WIGOLW (not yet warranted, thankfully!) it might be best to dial it back a little bit. There's a lot of craziness that warrants comment, but not so exhaustively.
Please tell me if that sounds reasonable. Thank you for your time.--
talk
08:03, 10 November 2012 (UTC)
- I didn't know you was in a major revamp of the article for a longer time period. But sure, I'll refrain from adding new stuff for a while before talking it through the talk page, except possibly relevant links to the link list below if I happen to run into any (I think the link to Kruel is fitting for example).--Baloney Detection (talk) 12:31, 10 November 2012 (UTC)
I've got you a present[edit]
It's past the point where your discussions on Talk:LessWrong and Talk:Eliezer Yudkowsky can be considered to be relevant to improving the article. There's not much wrong with discussing a topic on its salient article talkpage, but you and your other have been seriously crowding the space, far more than most non-content related discussions normally do. You're filling up archive page after archive page, and making it more difficult to sort through content discussions. You are also annoying other editors. Thus, I've gotten you a present:
I would make a new WIGO, but that would just piss off the crowd that hates WIGOs and wants them off the wiki. But at least you have a forum, a place where you can talk about Eliezer Yudlowsky and his fanboys to your heart's content, without wrinkling anything. Please, try and focus new topics to the forum page I've magnanimously crafted for you.--"Shut up, Brx." 00:55, 2 January 2013 (UTC)
Yvain[edit]
Please describe on the talk page why you want to keep Yvain.--
talk
21:30, 16 June 2013 (UTC) | https://rationalwiki.org/wiki/User_talk:Baloney_Detection | CC-MAIN-2018-26 | refinedweb | 2,414 | 66.67 |
Test Types.
In This Section
- Working with Unit Tests
Provides links to topics that describe unit tests and how to create them.
- Working with Web Tests
Describes how to create, edit, run, and view Web tests.
- Working with Load Tests
Describes the uses of load tests, how to edit and run them, how to collect and store load test performance data, and how to analyze load test runs.
- Working with Manual Tests
Describes how to create and run manual tests, the only non-automated test type.
- Working with Generic Tests
Describes how to create and run generic tests. Generic tests wrap external programs and tests that were not originally developed for use in the Team System testing tools.
- Working with Ordered Tests
Describes how to create ordered tests, which contain other tests that are meant to be run in a specified order.
Reference
- Microsoft.VisualStudio.TestTools.LoadTesting
Describes the LoadTesting namespace, which provides classes and interfaces that enable load testing of unit and Web tests.
-.
- Microsoft.VisualStudio.TestTools.WebTesting
Describes the WebTesting namespace, which provides classes that enable Web testing. These classes include the WebTest class, the base class for all Web tests, and the WebTestRequest and WebTestResponse classes, for simulating HTTP requests and responses.
- Microsoft.VisualStudio.TestTools.WebTesting.Rules
Describes the WebTesting.Rules namespace, which contains rules that Web tests use to test the content of Web pages. | http://msdn.microsoft.com/en-us/library/ms182514(v=VS.80).aspx | CC-MAIN-2014-23 | refinedweb | 229 | 53.1 |
I would probably consider this an O(n) space solution since it is recursive and uses the stack as space. I just kind of did this for fun but it turned out it is quite short and easy to understand. Basically, it passes the first element down the recursion stack until hitting the last element, it then increments the first element as it unwinds and compares successive elements. If it ever finds one that doesn't match, it ends. :)
public class Solution { private ListNode helper(ListNode first, ListNode head) { if (head.next == null) { return head.val == first.val? first.next : null; } if ((first = helper(first, head.next)) == null) { return null; } if (first.val == head.val) { // we are at the end of the comparison, everything matched, just return first if (first.next == null) return first; // we arent done with comparing the entire list return first.next; } // something didnt match return null; } public boolean isPalindrome(ListNode head) { if (head == null || head.next == null) return true; return helper(head, head) != null; }
} | https://discuss.leetcode.com/topic/45509/java-recursive-o-n-time-o-n-space-if-you-consider-the-recursive-stack-o-1-otherwise-solution | CC-MAIN-2018-05 | refinedweb | 168 | 67.04 |
Machine Learning in Production
From trained models to prediction servers
After days and nights of hard work, going from feature engineering to cross validation, you finally managed to reach the prediction score that you wanted. Is it over? Well, since you did a great job, you decided to create a microservice that is capable of making predictions on demand based on your trained model. Let’s figure out how to do it. This article will discuss different options and then will present the solution that we adopted at ContentSquare to build an architecture for a prediction server.
What you should avoid doing
Assuming you have a project where you do your model training, you could think of adding a server layer in the same project. This would be called a monolithic architecture and it’s way too mainframe-computers era. Training models and serving real-time prediction are extremely different tasks and hence should be handled by separate components. I also think that having to load all the server requirements, when you just want to tweak your model isn’t really convenient and — vice versa — having to deploy all your training code on the server side which will never be used is — wait for it — useless. Last but not least, there is a proverb that says “Don’t s**t where you eat”, so there’s that too.
Thus, a better approach would be to separate the training from the server. This way, you can do all the data science stuff on your local machine or your training cluster, and once you have your awesome model, you can transfer it to the server to make live predictions.
So, how could we achieve this?
Frankly, there are many options. I will try to present some of them and then present the solution that we adopted at ContentSquare when we designed the architecture for the automatic zone recognition algorithm.
If you are only interested in the retained solution, you may just skip to the last part.
What you could do
Our reference example will be a logistic regression on the classic Pima Indians Diabetes Dataset which has 8 numeric features and a binary label. The following Python code gives us train and test sets.
Model coefficients transfer approach
After we split the data we can train our LogReg and save its coefficients in a json file.
Once we have our coefficients in a safe place, we can reproduce our model in any language or framework we like. Concretely we can write these coefficients in the server configuration files. This way, when the server starts, it will initialize the logreg model with the proper weights from the config. Hurray !
The big advantage here is that the training and the server part are totally independent regarding the programming language and the library requirements.
However, one issue that is often neglected is the feature engineering — or more accurately: the dark side of machine learning. In general you rarely train a model directly on raw data, there is always some preprocessing that should be done before that. It could be anything from standardisation or PCA to all sorts of exotic transformations.
So if you choose to code the preprocessing part in the server side too, note that every little change you make in the training should be duplicated in the server — meaning a new release for both sides. So if you’re always trying to improve the score by tweaking the feature engineering part, be prepared for the double load of work and plenty of redundancy.
Moreover, I don’t know about you, but making a new release of the server while nothing changed in its core implementation really gets on my nerves. I mean, I’m all in for having as much releases as needed in the training part or in the way the models are versioned, but not in the server part, because even when the model changes, the server still works in the same way design-wise. (cf figure 2)
feature engineering which doesn’t impact how the server works. Not good.
PMML approach
Another solution is to use a library or a standard that lets you describe your model along with the preprocessing steps. In fact there is PMML which is a standardisation for ML pipeline description based on an XML format. It provides a way to describe predictive models along with data transformation. Let’s try it !
So in this example we used sklearn2pmml to export the model and we applied a logarithmic transformation to the “mass” feature. The output file is the following:
Even if PMML doesn’t support all the available ML models, it is still a nice attempt in order to tackle this problem [check PMML official reference for more information]. However, if you choose to work with PMML note that it also lacks the support of many custom transformations. Let’s try another example but this time with a custom transformation
is_adult on the “age” feature.
This would fail and throw the following error saying not everything is supported by PMML:
The function object (Java class net.razorvine.pickle.objects.ClassDictConstructor) is not a Numpy universal function.
To sum up, PMML is a great option if you choose to stick with the standard models and transformations. But if you’re interested in more, don’t worry there are other options. Please keep reading.
Custom DSL/Framework approach
One thing you could do instead of PMML is building your own PMML, yes! I don’t mean a PMML clone, it could be a DSL or a framework in which you can translate what you did in the training side to the server side --> Aaand bam! Months of work, just like that. Well, it is a good solution, but unfortunately not everyone has the luxury of having enough resources to build such a thing, but if you do, it may be worth it. You could even use it to launch a platform of machine learning as a service just like prediction.io. How cool is that!
(Speaking about ML SaaS solutions, I think that it is a promising technology and could actually solve many problems presented in this article. However, it would be always beneficial to know how to do it on your own.)
What we chose to do
Now, I want to bring your attention to one thing in common between the previously discussed methods: They all treat the predictive model as a “configuration”. Instead we could consider it as a “standalone program” or a black box that has everything it needs to run and that is easily transferable. (cf figure 3)
The black box approach
In order to transfer your trained model along with its preprocessing steps as an encapsulated entity to your server, you will need what we call serialization or marshalling which is the process of transforming an object to a data format suitable for storage or transmission. You should be able to put anything you want in this black box and you will end up with an object that accepts raw input and outputs the prediction. (cf figure 4)
Let’s try to build this black box using
Pipeline from Scikit-learn and Dill library for serialisation. We will be using the same custom transformation
is_adult that didn’t work with PMML as shown in the previous example.
Ok now let’s load it in the server side.
To better simulate the server environment, try running the pipeline somewhere the training modules are not accessible. Make sure that whatever libraries you used to build the model, you must have them installed in your server environment as well. Concretely, if you used Pandas and Sklearn in the training, you should have them also installed in the server side in addition to Flask or Django or whatever you want to use to make your server.
This shows us that even with a custom transformation, we were able to create our standalone pipeline. Note that
is_adult is a very simplistic example only meant for illustration. In practice, custom transformations can be a lot more complex.
Ok, so the main challenge in this approach, is that pickling is often tricky. That is why I want to share with you some good practices that I learned from my few experiences:
- Avoid using imports from other python scripts as much as possible (imports from libraries are ok of course):
Example: Say that in the previous example
is_adultis imported from a different file:
from other_script import is_adult. This won’t be serialisable by any serialisation lib like Pickle, Dill, or Cloudpickle because they do not serialise imports by default. The solution is to have everything used by the pipeline in the same script that creates the pipeline. However if you have a strong reason against putting everything in the same file, you could always replace the
import other_scriptby
execfile("other_script"). I agree this isn’t pretty, but either this, or having everything in the same script, you choose. However if you think of a cooler solution, I would be more than happy to hear your suggestions.
- Avoid using lambdas because generally they are not easy to serialize. While Dill is able to serialize lambdas, the standard Pickle lib cannot. You could say that you can use Dill then. This is true, but beware! Some components in Scikit-learn use the standard Pickle for parallelisation like
GridSearchCV. So what you want to parellilze should be not only “dillable” but also “picklable”.
Here is an example of how to avoid using lambdas: Say that instead of
is_adultyou have
def is_bigger_than(x, threshold): return x > threshold. In the DatafameMapper you want to apply
x -> is_bigger_than(x, 18)to the column “age”. So, instead of doing:
FunctionTransformer(lambda x: is_bigger_than(x, 18)))you could write
FunctionTransformer(partial(is_bigger_than, threshold=18))Voilà !
- When you are stuck don’t hesitate to try different pickling libraries, and remember, everything has a solution. However, when you are really stuck, ping-pong or foosball could really help.
Finally, with the black box approach, not only you can embark all the weird stuff that you do in feature engineering, but also you can put even weirder stuff at any level of your pipeline like making your own custom scoring method for cross validation or even building your custom estimator!
The demo
For the demo I will try to write a clean version of the above scripts. We will use Sklearn and Pandas for the training part and Flask for the server part. We will also use a parallelised
GridSearchCV for our pipeline. Without more delay, here is the demo repo. There are two packages, the first simulates the training environment and the second simulates the server environment.
Note that in real life it’s more complicated than this demo code, since you will probably need an orchestration mechanism to handle model releases and transfer. In other word you need also to design the link between the training and the server.
Last but not least, if you have any comments or critics, please don’t hesitate to share them below. I would be very happy to discuss them with you.
PS: We are hiring !
In memory of MS Paint. RIP
Awarded the Silver badge of KDnuggets in the category of most shared articles in Sep 2017. Link | https://medium.com/contentsquare-engineering-blog/machine-learning-in-production-c53b43283ab1?utm_campaign=Revue%20newsletter&utm_medium=Newsletter&utm_source=The%20Data%20Science%20Roundup | CC-MAIN-2018-05 | refinedweb | 1,885 | 60.55 |
> From: [email protected] (Philip Enteles)
>
> Here is the info: I have a list running on an Ultrix machine and
> perl4.036, runs fine. I created a closed, restricted, non-advertised list.
> When a member of the list asks for a list of who is on the list (who
> testlist) they get the response that they are not on the list so can't
> have that information even though they appear in the subscriber list and
> get the mail sent to that list. This response depends on which MUI they
> use.
>
> The problem is the way the MUI forms the From line. mail and elm don't
> attach the host name while pine, Eudora and NUPop do. Yes, all of these
> are used on my system.
I had the same problem on my host, all the local mail was sent as
"username" instead of "[email protected]"
The solution for us was to mofify sendmail configuration, so it appends
"inf.utfsm.cl" for every local mail, so the From: line is ever
"[email protected]"
This change makes not difference on what mail reader you use.
Marcos.
-----
#include <signature.h> | http://www.greatcircle.com/lists/majordomo-users/mhonarc/majordomo-users.199505/msg00308.html | CC-MAIN-2014-15 | refinedweb | 193 | 73.37 |
Generator too slow [SOLVED]
On 19/04/2015 at 11:15, xxxxxxxx wrote:
User Information:
Cinema 4D Version: 15
Platform: Mac OSX ;
Language(s) : C++ ;
---------
I created a generator object that generates a spline.
First I created it in python but, since it was resulting in a very slow generation of the spline (it is a very complex calculation), I decided to code it in C++.
But it is still too slow since it is calculation the spline everytime.
Is there any way to only re-generate the spline if the parameters were changed?
On 19/04/2015 at 13:36, xxxxxxxx wrote:
At least for GetVirtualObjects(), you can use GetAndCheckHierarchyClone() to check if the object needs to be recreated or not (if it is dirty or not). If not, the returned clone can be returned. Not sure if this applies with GetContour().
// Check if needs rebuild and get cloned hierarchy of input objects Bool dirty = FALSE; mainop = op->GetAndCheckHierarchyClone(hh, child, HIERARCHYCLONEFLAGS_ASPOLY, &dirty, NULL, TRUE); // - if !dirty then object is already cached and doesn't need to be rebuilt if (!dirty) return mainop;
As for 'complex calculation', anything (and I literally mean *ANYTHING* ) that can be done once or preprocessed should not be done every time you do this complex calculation. I would need to see your calculation code in order to offer places to do this - but you should be able to see easily where calculations are invariant and could be done beforehand and then used to enhance speed and reduce complexity. For instance, if you are dividing real numbers with a real number (say, Q) repeatedly, take the inverse of that real number (1/Q) and multiply. Division is slower than multiplication in floats. Avoid Sqrt() whenever possible - again, any calculation to a particular value that is being done more than once (esp. in loops) needs to be done outside of the loop or just once into a variable and reused.
On 19/04/2015 at 15:37, xxxxxxxx wrote:
I placed this in my code and it didn't work. It crashed Cinema 4D :-(
BaseObject* ArborSkeleton::GetVirtualObjects(BaseObject* op, HierarchyHelp* hh) { Bool dirty=false; BaseObject* mainop=op->GetAndCheckHierarchyClone(hh, op, HIERARCHYCLONEFLAGS_ASPOLY, &dirty;, NULL, true); if (!dirty) return mainop; BaseDocument* doc=op->GetDocument(); return (BaseObject* ) Generate_Arbor(op,doc); }
On 19/04/2015 at 15:45, xxxxxxxx wrote:
If your generated object is a spline, you should use HIERARCHCLONEFLAGS_ASSPLINE. Also note that you pass the child of the generator object, not the generator itself (op) :
// Get first input object BaseObject* child = op->GetDown(); if (!child) return NULL; // Generate clones of input objects // NOTE: 'mainop' is the original returned with the generated objects // Check if needs rebuild and get cloned hierarchy of input objects Bool dirty = FALSE; BaseObject* mainop = op->GetAndCheckHierarchyClone(hh, child, HIERARCHYCLONEFLAGS_ASSPLINE, &dirty, NULL, TRUE); // - if !dirty then object is already cached and doesn't need to be rebuilt if (!dirty) return mainop; if (!mainop) return NULL;
On 19/04/2015 at 15:55, xxxxxxxx wrote:
This object has no child. It is simply an object that generates a spline.
On 19/04/2015 at 16:18, xxxxxxxx wrote:
That ain't gonna work then. :(
You will need to do this first in GetVirtualObjects() :
Bool dirty = op->CheckCache(hh) || op->IsDirty(DIRTYFLAGS_DATA); if (!dirty) return op->GetCache(hh);
This should stop continuous rebuilding.
On 19/04/2015 at 23:58, xxxxxxxx wrote:
YES!!! It worked just fine :-)
Now I just have to optimize my code, anyway :-)
On 20/04/2015 at 06:52, xxxxxxxx wrote:
Hello,
I just want to add that an example on how to use the cache in GetVirtualObjects() can be found in the SDK Rounded Tube plugin.
If you want to create a Python ObjectData generator plugin you can take advantage of the optimized Cache, see "Optimize Cache".
Best wishes,
Sebastian
On 20/04/2015 at 07:16, xxxxxxxx wrote:
Well, now that I have it working in C++ (it was already working in python), I guess I will keep on developing this one in C++.
And it is already caching quite well :-) | https://plugincafe.maxon.net/topic/8656/11325_generator-too-slow-solved | CC-MAIN-2020-16 | refinedweb | 680 | 60.14 |
3. Stay with XML 1.0
Everything you need to know about XML 1.1 can be summed up in two rules:
Don't use it.
(For experts only) If you speak Mongolian, Yi, Cambodian, Amharic, Dhivehi, Burmese or a very few other languages and you want to write your markup (not your text but your markup) in these languages, then you can set the version attribute of the XML declaration to 1.1. Otherwise, refer to rule 1.
XML 1.1 does several things, one of them marginally useful to a few developers, the rest actively harmful.
It expands the set of characters allowed as name characters
The C0 control characters (except for NUL) such as form feed, vertical tab, BEL, and DC1 through DC4 are now allowed in XML text provided they are escaped as character references.
C1 control characters (except for NEL) must now be escaped as character references
NEL can be used in XML documents, but is resolved to a line feed on parsing.
Parsers may (but do not have to) tell client applications that Unicode data was not normalized
Namespace prefixes can be undeclared
Let's look at these changes in more detail.
XML 1.1 expands the set of characters allowed in XML names (i.e. element names, attribute names, entity names, ID-type attribute values, and so forth) to allow characters that were not defined in Unicode 2.0, the version that was extant when XML 1.0 was first defined. However, then XML 1.1 is worthwhile.
However, note that this is only relevant:
<?xml version="1.0" encoding="UTF-8"?> <book> <title>የማቴዎሰ ወንጌል</title> <chapter number="፩"> <title>የኢየሱሰ የትው ልድ ሐረግ</title> <verse number= </verse> <verse number=
</verse> </chapter> </book>
Here the element and attribute names are in English although the content and attribute values are in Amharic. On the other hand, if we were to write the element and attribute names in Amharic, then we would need to use XML 1.1:
<?xml version="1.1" encoding="UTF-8"?> <መጽሐፋ> <አርእስት>የማቴዎሰ ወንጌል</አርእስት> <ምዕራፋ ዌጥር="፩"> <አርእስት>የኢየሱሰ የትው ልድ ሐረግ</አርእስት> <ቤት ዌጥር= </ቤት> <ቤት ዌጥር=
</ቤት> </ምዕራፋ> </መጽሐፋ>
This is plausible. A native Amharic speaker might well want to write markup like this. However, the loosening of XML's name character rules have effects far beyond the few extra languages they're intended to enable. Whereas XML 1.0 was conservative (Everything not permitted is forbidden) XML 1.1 is liberal (Everything not forbidden is permitted.) XML 1.0 listed the characters you could use in names. XML 1.1 lists the characters you can't use in names. Characters XML 1.1 allows in names include:
Symbols like the copyright sign ©
Mathematical operators such as ±,
7 (superscript 7)
The musical symbol for a six-string fretboard
The zero-width space.
Private-use characters
Several hundred thousand characters that aren't even defined in Unicode and probably never will be.
XML 1.1's lax name characters rule have the potential to make documents much more opaque and obfuscated.
The first 32 Unicode characters with code points from-characters are historical relics used to control teletypes and glass terminals. XML 1.0 does not allow them. This is a good thing. Although dumb terminals and binary-hostile gateways are far less common today than they were twenty years ago, they are still used and passing these characters through equipment that expects to be seeing plain text can have nasty consequences including disabling the screen. (One common problem that still occurs is accidentally paging a binary file on a console. This is generally quite ugly, and often disables the console.)
A few of these characters occasionally do appear in non-XML text data. For example, the form feed (#x0C) is sometimes used to indicate a page break. Thus moving data from a non-XML system such as a BLOB or CLOB field in a database into an XML document can unexpectedly cause malformedness errors. Text may need to be cleaned before it can be added to an XML document. However, the far more common problem is that a document's encoding is misidentified, for example defaulted as UTF-8 when it's really UTF-16. In this case, the parser will notice unexpected nulls and throw a well-formedness error.
XML 1.1 fortunately still does not allow raw binary data in an XML document. However, it does allow you to use character references to escape the C0 controls such as form feed and bell. The parser will resolve them into the actual characters before reporting the data to the client application. You simply can't include them directly. For example, this document uses form feeds to separate pages:
<?xml version="1.1"> <book> <title>Nursery Rhymes</title> <rhyme> <verse>Mary, Mary quite contrary</verse> <verse>How does your garden grow?</verse> </rhyme>
<rhyme> <verse>Little Miss Muffet sat on a tuffet</verse> <verse>Eating her curds and whey</verse> </rhyme>
<rhyme> <verse>Old King Cole was a merry old soul</verse> <verse>And a merry old soul was he</verse> </rhyme> </book>
However, this style of page break died out with the line printer. Modern systems use style sheets or explicit markup to indicate page boundaries. For example, you might place each separate page inside a page element or add a pagebreak element where you wanted the break to occur, like so:
<?xml version="1.1"> <book> <title>Nursery Rhymes</title> <rhyme> <verse>Mary, Mary quite contrary</verse> <verse>How does your garden grow?</verse> </rhyme> <pagebreak/> <rhyme> <verse>Little Miss Muffet sat on a tuffet</verse> <verse>Eating her curds and whey</verse> </rhyme> <pagebreak/> <rhyme> <verse>Old King Cole was a merry old soul</verse> <verse>And a merry old soul was he</verse> </rhyme> </book>
Better yet, you might not change the markup at all, just write a stylesheet that assigns each poem to a separate page. Any of these options would be superior to form feeds. Most uses of the other C0 controls are equally obsolete.
There is one exception. You still cannot embed a null in an XML document, not even with a character reference. Allowing this would have caused massive problems for C, C++, and other languages that use null-terminated strings. The null is still forbidden, even with character escaping, which means it's still not possible to directly embed binary data in XML. You have to encode it using Base-64 or some similar format first. (See Item 20).
There I'll address shortly) by requiring that these control characters be escaped with character references as well. For example, you can no longer include a "break permitted here" in element content or attribute values. You have to write it as ‚ instead.
This actually does have one salutary effect. There are a lot of documents in the world which are labeled as ISO-8859-1 but which actually use the non-standard Microsoft character set Cp1252 instead. Cp1252 does not include the C1 controls. Instead it uses this space for extra graphic characters such as €, Œ, and ™. This causes significant interoperability problems when moving documents between Windows and non-Windows systems, and it's not always one that's easy to detect.
By making escaping of the C1 controls mandatory, such mislabelled documents will now be obvious to parsers. Any document that contains an unescaped C1 character which is labeled as ISO-8859-1 is malformed. Documents that correctly identify themselves as Cp1252 will still be allowed.
The downside to this improvement is that there is now a class of XML documents which is well-formed XML 1.0 but not well-formed XML 1.1. XML 1.1 is not a superset of XML 1.0. It is neither forwards nor backwards compatible.
The fourth change XML 1.1 makes is of no use to anyone, and should never have been adopted. XML 1.1 allows the Unicode next line character (#x85, NEL) to be used anywhere a carriage return, linefeed, or carriage return-linefeed pair is used in XML 1.0 documents. Note that a NEL doesn't mean anything different than a carriage return or linefeed. It's just one more way of adding extra white space. However, it is incompatible not only with the installed base of XML software, but also with all the various text editors on Unix, Windows, the Mac, OS/2, and almost every other non-IBM platform on Earth. For instance, you can't open an XML 1.1 document that uses NELs in emacs, vi, BBEdit, UltraEdit, jEdit, or most other text editors and expect it to put the line breaks in the right places. Figure 3.1 shows what happens when you load a NEL-delimited file into emacs. Most other editors have equal or bigger problems, especially on large documents.
Figure 3-1: Loading a NEL delimited file into a non-IBM text editor
If so many people and platforms have such problems with NEL, why has it been added to XML 1.1? The problem is that there's a certain huge monopolist of a computer company that doesn't want to use the same standard everyone else in the industry uses. And—surprise, surprise—their name isn't Microsoft. No, this time the villain is IBM. Certain IBM mainframe software, particularly console-based text editors like XEdit and OS/390 C compilers, do not use the same two line ending characters (carriage return and linefeed) that everybody else on the planet has been using for the last twenty years at least. Instead they use character #x85, NEL (next line).
If you're one of those few developers writing XML by hand with a plain console editor on an IBM mainframe, then you should upgrade your editor to support the line ending conventions the rest of the world has standardized on. If you're writing C code to generate XML documents on a mainframe, you just need to use \x0A instead of \n to represent the line end. (Java does not have this problem.) If you're reading XML documents, the parser should convert the line endings for you. There's no need to use XML 1.1.
For reasons of compatibility with legacy character sets such as ISO-8859-1 (as well as occasional mistakes) Unicode sometimes provides multiple representations of the same character. For example, the e with accent acute, é, can be represented as either the single character #xE9 or with the two characters #x65 followed by #x301 (combining accent acute). XML 1.1 suggests that all generators of XML text should normalize such alternatives into a canonical form. In this case, you should use the single character rather than the double character.
However, both forms are still accepted. Neither is malformed. Furthermore, parsers are explicitly prohibited from doing the normalization for the client program. They may merely report a non-fatal error if the XML is found to be unnormalized. In fact, this is nothing that parsers couldn't have done with XML 1.0, except that it didn't occur to anyone to do it. Normalization is more of a strongly recommended best practice than an actual change in the language.
There's one other new feature that's effectively part of XML 1.1. XML 1.1 also introduces namespaces 1.1, which adds the ability to undeclare namespace prefix mappings. For example, consider this API element:
<?xml version="1.0" encoding="UTF-8"?> <API xmlns: <title>Geometry</title> <cpp xmlns: class CRectangle { int x, y; public:void set_values (int,int); private:int area (void); } </cpp> </API>
A system that was looking for qualified names in element content might accidentally confuse the public:void and private:int in the cpp element with qualified names instead of just C++ syntax (albeit ugly C++ syntax that no good programmer would write). Undeclaring the public and private prefixes allows them to stand out for what they actually are, just plain unadorned text.
In practice, however, very little code looks for qualified names in element content. Some code does look for these things in attribute values, but in those cases it's normally clear whether or not a given attribute can contain qualified names or not. Indeed this example is so forced precisely because prefix undeclaration is very rarely needed in practice, and never needed if you're only using prefixes on element and attribute names.
That's it. There is nothing else new in XML 1.1. It doesn't move namespaces or schemas into the core. It doesn't correct admitted mistakes in the design of XML such as attribute value normalization. It doesn't simplify XML by removing rarely used features like unparsed entities and notations. It doesn't even clear up the confusion about what parsers should and should not report. All it does is change the list of name and whitespace characters. This very limited benefit comes at an extremely high cost. There is a huge installed base of XML 1.0 aware parsers, browsers, databases, viewers, editors, and other tools that doesn't work with XML 1.1. They will report well-formedness errors when presented with an XML 1.1 document.
The disadvantages of XML 1.1 (including the cost in both time and money of upgrading all your software to support it) are just too great for the extremely limited benefits it provides most developers. If you're more comfortable working in Amharic, Mongolian, Yi, Cambodian, Amharic, Dhivehi, or Burmese and you only need to exchange data with other speakers of one of these languages (for instance, you're developing a system exclusively for a local Amharic-language newspaper in Addis Ababa where everybody including the IT staff speaks Amharic), then you can set the version attribute of the XML declaration to 1.1. Everyone else should stick to XML 1.0. | http://www.ibiblio.org/xml/books/effectivexml/chapters/03.html | CC-MAIN-2016-07 | refinedweb | 2,326 | 63.29 |
Linux Content
All Articles
Interviews
Linux in the Enterprise
Security Alerts
Administration
Browsers
Caching
Certification
Community
Database
Desktop
Device Drivers
Devices
Firewalls
Game Development
Getting Started
Kernel
LDAP
Multimedia
Networking
PDA
Programming
Security
Tools
Utilities
Web Design and Development
X Window System
Like so many other people out on the Internet, I get unsolicited
commercial email or "spam". Until recently, I could handle spam by
just deleting it or using email aliases. Unfortunately, my server was
rendered useless by a spam attack launched by an unknown spammer. The
experience forced me to improve my spam defenses. In two articles, I
will share the research and results of my effort to implement an
anti-spam system. In this first installment, I will briefly cover
various anti-spam systems and the system I chose, a network level
defense. In the next installment, I'll dig deeper into the details of
an implementation with qmail. (The information is general enough that
it could be applied to other email systems such as Postfix or
Sendmail.)
Let's begin by covering the current state of the anti-spam
world. Since spam is such a widespread problem, there have been an
increasing number of anti-spam measures devised in the last four
years. Some measures involve legislation, some techniques require
large groups of people, and some are simple techniques that
individuals can use. I will cover some of the more popular defenses
that individuals or administrators can implement on their own as well
as mention a few of the up-and-coming systems. None of these systems
is perfect. I do not claim that any of these will work without a
hitch.
Here is a brief rundown of the most popular techniques in use, in
order of increasing sophistication:
The first technique involving spam is to choose a hard to
guess email address and hide it from all publicly viewable
content. This makes it hard for a spammer to guess or use a dictionary
attack on your email address. If you do get spam, you will just have
to ignore it or live with it. If you want to get email from people,
you must somehow give them your email address in advance through a
non-Internet channel or use a web form as an email proxy to your
address. For some people this works, but it usually means their email
address is hard to remember. This technique defeats the purpose of
the Internet by making it hard to communicate with other people.
The next technique, aliases, escalates previous one.
Essentially you use several email aliases for the same account. For
example, it is easy to configure some mail systems to deliver all
[email protected], where something is any valid
email character string. You could have an alias for public display on
the Internet, such as on a web site or in a Usenet post. You could
also have an alias for each web site where you register. This is a
great way to find out if that web site you registered with is selling
your email address or being lax about protecting your email
address.
[email protected]
[email protected]
If a spammer (or web site) abuses one of your addresses, you can
just block that particular alias. I used this technique formerly, and
it worked pretty well. Still, it doesn't prevent all the spam.
Another issue to consider is that most popular email clients don't
support different aliases well. Finally, it is worth noting that some
anti-spam companies have sprung up around just this idea.
The next technique is content filtering, which is actually an
entire family of techniques. It is also the most widely implemented
and discussed technique. All the systems work in roughly the same
manner. Incoming email is processed by a filter. This filter scans
for patterns that may characterize the email as spam or not. It's
that simple. The techniques all vary on where and what they filter
on. The filter could be at the server or it could be at the client.
It might look at the email headers, the email body, or both.
Another interesting variation is where the patterns come from. Some
systems use your address book for a list of valid senders. Others let
you enter your own list of words. Some compile patterns from email
sent in by people on the Internet. Others build patterns based on your
past email. The most popular variations include collaborative
filtering, Bayesian filtering, and fingerprinting. No matter what
anyone claims, no system works 100% for everyone. As filtering
techniques have improved, spammers have continued to work around
them.
One content-filtering technique, called challenge-response, is
on the rise and deserves a separate description. Essentially, the
email content is filtered by the sender's email address and compared
against a whitelist of approved addresses. If the sender's email
address isn't in the list, an email is sent to the recipient and they
are challenged to respond. The challenge is usually simple for a human
to perform but hard for a computer.
Some challenges require the user to type a word in from a noisy
image. I've even seen one that asks the user to "count the kittens" in
a picture. I think these techniques are very successful, but I worry
that they may be alienating certain groups like non-English users or
the visually impaired. Also, some people refuse to use this technique
since they don't want to annoy or offend the senders. Still, some
people praise the technique and consider it so special that there has
even been patent activity around it.
Another category of spam defense is the network level
defense. This technique simply involves looking at the IP address of
the machine sending an email and deciding if it is allowed or
not. This lookup is done against a blocklist, which is just a list of
IP addresses considered to be bad. If the IP address is allowed, then
the mail connection proceeds and the email is processed. If the IP
address is not allowed, then the TCP connection is dropped or the SMTP
connection is aborted with a descriptive error like "Your machine is
on the XYZ blocklist, bye bye".
This system works because IP addresses are hard to forge and people
can't get new IP addresses easily. If an IP becomes useless to a
large portion of the Internet, the spammer must spend energy to get a
new IP. The benefits of this defense are unique. If it works, it
prevents the wasted network and CPU utilization that spam causes on
mail servers. It also unique in that it is geared more toward
administrators than end users. Unfortunately, it too has its
weaknesses. Currently, spammers routinely take over other non-secured
hosts on the Internet in order to relay spam. Also, some blocklists
end up being ineffective as they are incomplete, inconsistent, or too
extreme in their practices.
One last technique is cryptographic authentication. This is
more of a proposed or future technique than one that is currently used
in practice. The idea is similar to using a whitelist of approved
emails or hosts. The difference is that you only allow senders that
have the proper credentials based on modern cryptography. These
credentials would be impossible to forge and expensive to re-purchase
continually.
This technique is worth mentioning since there are groups working
hard on a secure email infrastructure. Such a system would require an
authentication piece as well. If this were built, not only would we be
able to send email securely, but we could have the ability to filter
spam. Unfortunately, since the existing email infrastructure is so
huge and entrenched, it will take a long time for such a system to get
built.
All of these techniques have their pros and cons. The cons are
especially annoying with naive or poor implementations. They may
filter an email and never let the original sender know that their
message was marked as spam. They may block a host that should
actually be allowed. Some require a lot of user intervention. Most
will accidentally block subscribed mailing lists. Some systems that
share patterns over the Internet mark forwarded "joke" emails as spam,
though this may be a feature. Some require lots of email or time to
learn your valid email patterns. All of this will make it harder for
new people (customers or anonymous contributors) to communicate with
you.
To cut to the chase, I looked at the above techniques and chose a
network level defense. The choice was easy for me since that system
was the only one that could have protected my machine's resources from
the recent spam attack I endured.
On November 19, 2002, I was getting 10-20 TCP connections per
second from around 300 different IP networks to my machine at a
colocation facility. I checked the source IPs and they were coming
from all over the globe. The destination email addresses all conformed
to a simple pattern; this indicated that something was performing a
simple algorithmic attack. My computer was really sluggish from
queuing all of the bounce messages. My qmail queue was
over 13,500 messages at that point. In fact, I couldn't even send out
caused all sorts of timeouts for other systems on the machine.
qmail
localhost
After about 24 hours, the attack ended. I didn't receive or relay
any spam, but I was really upset. I did some research on the net and a
few people in the spam community believed that this had the signature
of a Klez Cluster Attack. This is an attack where the spammer uses a
cluster of machines infected with a Klez virus to relay spam to
various hosts on the Internet. Think of it as unsuspecting users
donating their machine time to the spam@home project. These types of
attacks appear to be increasing steadily, and I'm not the only one
upset about it.
So, with that experience, I came up with a simple list of features
that my spam defense would have to provide:.
With that in hand, I did some research on the existing network
level spam defenses and talked to a few friends. Let me go over that
research right now.
Network Level spam systems owe their design to the original group
MAPS. The MAPS project started in 1997 as a small private mailing list
called the Realtime Blackhole List (RBL). It was composed of
like-minded anti-spammers. Paul Vixie, a widely known netizen, was
one of the main persons involved with the group and he helped
publicize their efforts. They created a list of IP addresses that
spammers were using and allowed other members to query their database
in real-time, over the Internet. If an IP address was in that list,
and it attempted to send mail through any of the MAPS subscribers'
networks, the packets were "black holed" or dropped. This worked well
against some of the main spammers who were coming from known
networks.
At first, the RBL group used the Border Gateway Protocol (BGP) for
distributing this blackhole list or database to other
systems. Although BGP was normally used for exchanging global routes
between core Internet routers, it could also be used for distributing
the RBL database. Since almost all of the systems that could talk BGP
were routers, the RBL system was mostly useful to people in control of
their routers. It also required good knowledge of the protocol and a
decent Internet connection. These features kept the RBL from being
useful to a larger set of administrators.
A simpler system was devised in order to make the system much more
approachable to normal administrators with fewer resources. In the
same way that the MAPS group used an existing protocol for a new
purpose, they found another system that would fit this new set of
requirements. They chose to use the most successful distributed
database system that was already in use, the Domain Name Service.
Paul Vixie and a few others already had expert knowledge of the DNS
system, having worked on the BIND DNS server, the most widely used DNS
server of the time. Choosing DNS allowed them to reuse a lot of
existing software and avoid conflicts with existing firewall
rules. Also, because it was DNS, it was already lightweight and well
tested. This adaptation on top of DNS is the system used by probably
99% of the network level spam providers today. In the anti-spam
community, the protocol is called IP4R, which is probably derived from
the phrase "IPv4 Reverse Lookup".
When you query a server via IP4R for IP addresses, your query is
similar to the query that a host uses when looking up the name
associated with an IP address. Suppose you use the 1.2.3.4 IP address
for your system, which is called a.b.com. When you
connect to a server on the Internet, the destination server will query
its DNS servers for the name associated with your IP address. It does
that by querying for a DNS record in the namespace
4.3.2.1.in-addr.arpa from the root name servers. If there
is a PTR record set up by your ISP for 1.2.3.4 (or the destination
server's DNS was setup correctly), the query will eventually return
a.b.com as the name associated with that IP.
a.b.com
4.3.2.1.in-addr.arpa
When email servers want to use IP4R to see if a host is from a
hostile IP address, they do a similar lookup with a few
differences. The first difference is that IP4R does not use the same
DNS namespace. You would use the blocklist provider's namespace rather
than in-addr.arpa. For example, if we used a service from
example.com, they may tell us to use the namespace
rbl.example.com. The second difference is in the DNS
reply from the lookup. A normal IP reverse lookup would expect a reply
to return a hostname. An IP4R reply, on the other hand, returns a
special IP address to indicate its answer. Let's step through a simple
IP4R lookup to illustrate.
in-addr.arpa
rbl.example.com
First, we would setup our software to query for IP addresses in
our example providers namespace: rbl.example.com.
Suppose we are checking on a host with the IP of 1.2.3.4. The DNS
query would go out on the internet for the name
4.3.2.1.rbl.example.com. If the IP address is in the
blocklist, the server will reply with the address
127.0.0.2 (in DNS parlance, an "A" or address record of
127.0.0.2). If the IP address isn't in the database, the query will
return an empty reply. That is all there is to it. The idea of
reusing DNS and keeping it as simple as possible was a great idea.
4.3.2.1.rbl.example.com
127.0.0.2
There is also another optional record that an IP4R provider can
send as the result of a query. In DNS terminology the record is called
a TXT or "text" record. These records just hold character strings. If
an IP is in the database, the provider can also return a TXT record in
the reply to the query. Within the reply, the IP4R provider can put
an explanation of why the record exists in the database and who to
contact about it. This is important, and I'll show you how this comes
into play in the actual implementation.
The last part of the protocol to mention is required for testing
purposes. All IP4R providers should have the 127.0.0.2
address in their database. This is in the 127/8 localhost
network and should never be an address on the Internet. Since it can't
be on the Internet, we can safely use it to test queries to the IP4R
provider. A neat trick for doing this is to use the ping
command on the command line. For example, if we use the
rbl.example.com again, you should be able to do:
127/8
ping
# ping 2.0.0.127.rbl.example.com
64 bytes from 127.0.0.2: icmp_seq=0 ttl=255 time=0.1 ms
64 bytes from 127.0.0.2: icmp_seq=1 ttl=255 time=0.0 ms
If your ping starts pinging (like the example above), then you have
a proper setup. If you get some other error (usually an "unknown
host"), then you know to review your configuration.
Also, since we get the address of '127.0.0.2' if an IP is in the
blocklist, we can use the same simple technique to see if arbitrary
addresses are in the IP4R provider's database. Using the
1.2.3.4 address example again:
1.2.3.4
# ping 4.3.2.1.rbl.example.com.
If it's in there, then you should get ping replies. If the address
isn't in the blocklist, then you should get an 'unknown host' or
similar error.
Before we move on, let me briefly cover something I glossed over in
that last section. In order to use an IP4R protocol, your mail
software must support it. The good news is that most email servers
(or, more pedantically, Mail Transfer Agents or MTAs), support
this. If a protocol is simple in design, it is usually simple to
implement.
In the beginning of this article we took a brief tour of the
various anti-spam techniques in order to determine the right
solution. Then we went deeper into the best one for my situation, a
network level spam defense. In the next article, I'll go over which
blocklist provider I chose, giving a detailed description of an
install with my mail server and discussing the positive results of
this effort.. | http://www.linuxdevcenter.com/pub/a/linux/2003/06/26/blocklist.html?page=last&x-order=date | CC-MAIN-2014-10 | refinedweb | 3,014 | 64.3 |
ANTLR Section Index | Page 6
What's an elegant way to check whether a node is a PLUS node and then to lift its children to the parent PLUS node?
I'm trying to flatten trees of the form: #(PLUS A #(PLUS B C) D) into #(PLUS A B C D). What's an elegant way to check whether a node is a PLUS node and then to lift its children to the parent P...more
When I inherit rules from another grammar, the generated output parser actually includes the combined rules of my grammar and the supergrammar. Why doesn't the generated parser class just inherit from the class generated from the supergrammar?
In Java, when one class inherits from another, you override methods to change the behavior. The same is true for grammars except that changes a rule in a subgrammar can actually change the lookah...more
Can you explain more about ANTLR's tricky lexical lookahead issues related to seeing past the end of a token definition into the start of another?
Consider the following parser grammar: class MyParser extends Parser; sequence : (r1|r2)+ ; r1 : A (B)? ; r2 : B ; ANTLR reports m.g:3: warning: nondeterminism upon m.g:3: k=...more
How can I include line numbers in automatically generated ASTs?
Tree parsers are often used in type checkers. But useful error messages need the offending line number. So I have written: import antlr.CommonAST; import antlr.Token; public class CommonASTWith...more
How can I handle characters with context-sensitive meanings such as in the case where single-quote is both a postfix operator (complete token) and the string delimiter (piece of a token)?
How can I handle characters with context-sensitive meanings such as in the case where single-quote is both a postfix operator (complete token) and the string delimiter (piece of a token)? For exam...more
Why do these two rules result in an infinite-recursion error from ANTLR?
Why do these two rules result in an infinite-recursion error from ANTLR? a : b ; b : a B | C ;
How do you specify the "beginning-of-line" in the lexer? In lex, it is "^".
Here is a simple DEFINE rule that is only matched if the semantic predicate is true. DEFINE : {getColumn()==1}? "#define" ID ; Semantic predicates on the left-edge of single-altern...more
How can I use ANTLR to generate C++ using only STL (standard templates libraries) in Visual C++ 5?
Apply sp3 to your VC++ 5.0 installation. Well this works for me!
Why do I get a run-time error message "Access violation - no RTTI data" when I run a C++ based parser compiled with MS Visual Studio 6.0? It compiled ok. What about g++?
In Visual Studio (Visual C++), you need to go to "Project|Settings..." on the menu bar and then on the Project Settings dialog, go to the "C/C++" tab. Then choose the can you add member variables to the parser/lexer class definitions of ANTLR 2.x?
Member variables and methods can be added by including a bracketed section after the options section that follows the class definition. For example, for the parser: class JavaParser extends ...more
How can I store the filename and line number in each AST node efficiently (esp for filename) in C++.
There are probably a number of ways to do this. One way is to use a string table to store the strings. The AST node has a reference to a type like STRING which is an object that references (poi...more
When will ANTLR support hoisting?
Sometime in the future when Ter gets so fed up with my pestering that he just has to implement hoisting so that I'll shut up. Or for ANTLR v3.0. :-)
Is it possible to compile ANTLR to an executable using Microsoft J++?
See Using Microsoft Java from the Command Line and Building a C++ ANTLR Parser (on Windows NT). more
How do you restrict assignments to semantically valid statements? In other words, how can I ignore syntactically valid, but semantically invalid sentences?
For a complex language like C semantic checking must be done either after the statement is recognized in the parser or in a later pass. Semantic checking will usually involve creation of a symbo...more | http://www.jguru.com/faq/java-tools/antlr?page=6 | CC-MAIN-2017-26 | refinedweb | 716 | 65.93 |
pathlib - pretty neat
I was looking at doing a file info thingy to put into the wrench menu. I remembered hearing about the pathlib lib on a video i watched recently. I don't know how long pathlib has been around, but i know they have just updated all the std Libs to work nice with pathlib, i guess they didn't before. Anyway, I don't really know the in's and outs, I started playing with PurePath in the same lib. Well when i say I was playing, I was just printing out some of the methods and attrs. I haven't done them all, but it looks pretty nice if you have to play with paths and files.
Anyway, i just wanted to pass on the info.
import editor from pathlib import Path, PurePath pp = PurePath(editor.get_path()) print(pp.parts) print('drive:', pp.drive) print('root:', pp.root) print('anchor:', pp.anchor) print('parents:', pp.parents) print('parent:', pp.parent) print('name:', pp.name) print('suffix:', pp.suffix) print('suffixes:', pp.suffixes) print('stem:', pp.stem) print('as_posix():', pp.as_posix()) print('as_uri():', pp.as_uri()) print('is_absolute():', pp.is_absolute()) print('is_reserved():', pp.is_reserved()) | https://forum.omz-software.com/topic/4255/pathlib-pretty-neat | CC-MAIN-2021-04 | refinedweb | 196 | 72.12 |
NSI Claims whois Database is Proprietary
phred writes "In yet another sign of advanced corporate megalomania, Chris Clough, a spokesman for Network Solutions, Inc., is quoted in an ABC News online story as claiming that the whois database is a proprietary product of NSI and is being provided to the net as a "community service." (This item first noted at Tomalak's Realm)." And I for one am oh-so-pleased that NSI is using their property to Spam us with commercials about NSI so that they can protect their butts when they lose their monopoly.
ha (Score:1)
why doesn't the US government stomp on them?
No, this is the problem with government-enforced.. (Score:1)
"The only lasting monopolies have been government enforced."
-- Alan Greenspan (Chairman of the Federal Reserve Board)
shane
Ugh. (Score:1)
So when do we start up some sort of free DNS service and bypass all the corporate greed?
This is ridiculous (Score:1)
This is pure bullshit.
Gah. (Score:1)
--
As long as each individual is facing the TV tube alone, formal freedom poses no threat to privilege.
This is not a problem. (Score:2)
Team Hamilton has been pushing other OSes to do this for years, and no one listened. Now look what's happened. Not like we didn't warn you.
The fix is in... (Score:1)
"Well boys, we've got this government sponsored monopoly but you know this gravy train isn't going to last forever...any ideas?."
"I've got one...Once it looks like time is almost out for us, we spring a proprietary whois whammy at the last minute!"
(in unison)"BWAHAHAHAHAHA"
NSI is pushing their luck (Score:4)
Of course, there is always the renegade DNS route. To make it take off and stick, one would need superior technology, a strategy that would embrace and supersede the BIND cartel's jugular embrace of the Internet name space, and a desire for an alternative which can provide both alternate and backward compatible name space.
I would think the non-North American entities who are at the mercy of NSI for Global Top Level Domain Names could agree on an LDAP name system that makes NSI obsolete, and removes the North American legal system and copyright law from what is clearly a Universal Name Registrar.
This is not a new debate. The control of the Internet has long been a US Government/Academia/Military/Commercial playground. What is needed in the 21st century will be an abstraction away from the cultural straightjacket that has been so widely forcefed to the world as "technology". I would hope to hear that the other 90% of the world's population have a say in how names are fairly allocated, and we would all benefit from a broader perspective.
ha (Score:1)
I think the government is too caught up wasting money and armaments in some little country in the east to worry about this. Not to mention that despite all their chest-beating about how angry they are with NSI, the government still won't do anything about them.
Gah. SPAM!!! (Score:1)
The scary thing is... I got a spam yesterday offering to sell me the InterNIC database for $1300
ChiefArcher
NSI sells mailing lists as well.. (Score:2)
Government confusion. (Score:1)
So, if you read the story, the guy interviewed summed it up pretty well: it all revolves around whether or not these domain names are the property of the registrant, or just leased for two years. Ugh. Makes my brain hurt.
A solution? (Score:3)
In all seriousness though, if they want to claim the information is proprietary, then ICANN can't exactly integrate the information into the root servers, can they?
NSI is pushing their luck (Score:2)
Any open solution should be prepared to incorporate that as well, since that is the "shield" that NSI hides behind these days: protecting themselves from the harvesters, domain-squatters, etc.
A solution? (Score:1)
No, this is the problem with government-enforced.. (Score:1)
Microsoft is government enforced? News to me..
red alert (Score:1)
No, this is the problem with government-enforced.. (Score:1)
Compare Linux to this NSI stuff...
You can create a free operating system because there is no official government-sponsored OS.
You cannot create a free NSI because there is an official government-sponsored NSI.
OS Internet? (Score:1)
The commercialization of the 'net/www has caused all this.
The commercialization of the Net has done a lot of damage. I've wondered on a couple of occasions how viable it would be to create a new, smaller, geek-only TCP/IP-based network, totally separate from the Internet. A sort of breakaway Internet. "Splinternet", if you like.
[yes, I know, totally non-viable. It's not really a serious suggestion. Don't flame me over this
:-) I just tend to think like this when I get fed up with "e-commerce" *spit*]
This just confirms... (Score:1)
No, this is the problem with government-enforced.. (Score:1)
NSI is just desperate. The government won't slap them hard for this, they just aren't likely to cut NSI any more slack now.
Gah. (Score:1)
You'd think these guys would realize that if the same person is the tech, billing, and admin contact for their name they probably know how to register a name themselves and don't need to pay one of these ripoff artists.
if(NSI == No Sales Intelligence) { read this! (Score:1)
Ugh. (Score:1)
My opinions
Stan "Myconid" Brinkerhoff
I was wondering the same thing... (Score:3)
The way current DNS software works, you DO have to be careful not to pollute anyone else's root servers, but anyone out there with a UNIX box and a dedicated connection could redefine the entire internet however they wanted to for anyone who cared to point their name servers to the new root servers.
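To make that concrete: a resolver just starts at whatever IP address it has been told is "the root" and follows referrals from there, so whoever answers at that address defines the namespace you see. Here is a rough sketch of that walk, assuming the third-party dnspython package and a made-up address for an alternate root server:

import dns.message
import dns.query
import dns.rdatatype

ROOT = "10.0.0.53"        # hypothetical: the IP of *your* alternate root server
QNAME = "slashdot.org."

server = ROOT
while True:
    reply = dns.query.udp(dns.message.make_query(QNAME, dns.rdatatype.A), server, timeout=5)
    if reply.answer:
        # an authoritative server finally answered the question
        for rrset in reply.answer:
            print(rrset)
        break
    # otherwise this is a referral; follow the first glue A record we were handed
    glue = [rr for rrset in reply.additional for rr in rrset if rr.rdtype == dns.rdatatype.A]
    if not glue:
        break
    server = glue[0].address

Point ROOT at a different machine and you get a different Internet, which is both the appeal and the danger of the renegade-root idea.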
Note that this has been attempted in the past (do your research on AlterNIC and others), but they did not really have a lot of support, and if I recall correctly some of them tried a silly stunt or two that was bad judgement at its worst.
Anyway, it can be done and there's an increasing need for it to be done. If anyone comes up with something, I'll be happy to point my DNS at them.
OS Internet? (Score:1)
Ugh. (Score:1)
"With a free domain name system you have no way to pay for the equipment, and the mass
usage that would be attributed to such a system.",
are you thinking "distributed", "parallel", and "voluntary?"
The system does not need to be centralized. Nothing about the internet or any other government needs to be centralized, despite the fact that certain aspects of both do tend to be.
Freely available to all (Score:4)
ha so where are you guys? (Score:2)
I'm talking to YOU, the guy who worked for Internic back in the hippie days and now works for NetSol getting that phat biweekly paycheck!
I don't fault you for selling out and cashing in, but I sure would enjoy hearing about your experiences!
(Or has it *been* an experience?) I wonder. Maybe you just sold out, and all my fairytale romantic notions of the net need to fade away along with all the other delusions.
Dunno about you. But I'm sick of them. (Score:2)
I don't know about the rest of you. But THIS little gem appeared in my mailbox a couple days ago. NSI is getting REALLY desperate..
Anyone else smell trapped animal?
Chas - The one, the only.
THANK GOD!!!
DNS Root (Score:1)
NSI starts looking like Micro$oft, so probably we should use a DNS that looks and acts like some free OS...
:-)
ICANN guilty of smearing with bullshit FUD (Score:1)
Who among us actually goes to the Internic website to look up domain registration information?
Yeah, I didn't think so.
Internic is making it harder for people to use their website. They're probably trying to make money too. They certainly aren't doing anything else. Open a shell and type "whois slashdot.org" and in a few short milliseconds you'll have an answer.
They are
(which is too bad, IMHO. I'm tired of getting spam because of having domains registered in my name.)
- JB
Ugh. (Score:1)
That just plain sucks.
Analogy to phone directories (Score:1)
The courts ruled long ago that listings in telephone directories are not "owned" by the telcos, which explains the ready availability of phone listings on the Web, on CD, and through 10-10 phone numbers.
I would guess the courts would find the whois directory analogous, especially since it was compiled by a monopoly.
So all we need is a plaintiff and some IP (that's Intellectual Property, not Internet Protocol) lawyers to straighten this out.
Snail mail spam (Score:1)
No, this is the problem with government-enforced.. (Score:1)
Microsoft is government enforced? News to me..
How else would you describe patents and copyrights?
Misunderstanding the top of the Internet (Score:2)
First, there is NSI's database, which contains information such as your mailing address and so on, in addition to domain names and information (DNS server IPs and whatnot).
Second, there is the whois server. The web page which was recently changed only provided a nice, easy interface to the whois mechanism, AFAIK. Whois is an Internet standard, defined in RFC 954. It is a simple TCP/IP query/response protocol to find out information about Internet objects (networks, hosts, etc). NSI's whois server is generated by a selective dump of their registration database (leaving out credit card numbers, for instance).
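"Simple" is not an exaggeration: the whole protocol is one TCP connection to port 43, the query string terminated by CRLF, and free-form text back until the server closes the connection. A minimal sketch in Python (the server name is borrowed from the whois transcript further down the thread; any RFC 954-style whois server behaves the same way):

import socket

def whois(query, server="rs.internic.net", port=43):
    # RFC 954: send one query line, then read text until the server hangs up.
    sock = socket.create_connection((server, port), timeout=10)
    try:
        sock.sendall(query.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    finally:
        sock.close()
    return b"".join(chunks).decode("latin-1", "replace")

print(whois("slashdot.org"))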
And third, there is the DNS hierarchy. This includes a set of root servers, named 'A' through 'L' (I believe those are all that exist right now) in order to keep the names as short as possible, so they can fit in a single UDP packet (ick). These are distributed throughout the planet, with only one being run by NSI. NSI's role is to provide the zone file to the other root name servers. They do this by dumping the appropriate data from their database, as well as combining reverse addresses and other data from non-NSI top-level-domains (.mil,
With the ICANN switch, I'm pretty sure that NSI still generates the zone file for
No, this is the problem with government-enforced.. (Score:1)
> Microsoft is government enforced? News to me..
Microsoft currently has a lot of economic power. Billions in the bank, ready to market the snot out of their 'innovation' of the day, a strong grip around the necks of the PC market, and even some mostly failed forays out of the 'PC' and into content and set top boxes and the like.
Microsoft does not have a 'lasting monopoly' on anything. Microsoft has not yet existed long enough to be capable of having a 'lasting monopoly.'
I know this is comparing apples to oranges, but ISDN technology is older than Microsoft's ownership of DOS.
Hell, Windows itself (if you disregard those rather non 'windowish' versions before 3.0) is just about ten years old...
Markets take time to shake out.
Rich
NSI's double standard (Score:1)
NSI and AOL
No, this is the problem with government-enforced.. (Score:1)
No, I don't think that is the type of 'enforcement' the quote's author meant..
Misunderstanding the top of the Internet (Score:1)
So my point is still valid. If NSI won't allow their records into the ICANN version of the whois database by saying they are their proprietary records, then ICANN wouldn't be able to validate requests for registrations or changes from other companies against what NSI says they register, and NSI might lose out.
I might be wrong about how that end of things works, but the new registration system isn't a simple case of NSI providing the
Without a full disclosure of information, there'd be no way to track back to a domain. You'd have to run the equivalent of a whois against each of the half-dozen or more registries, without knowing which registry actually holds the record for the domain you're interested in. (Which you know now, since all domains in a given TLD come from a single database, NSI or otherwise...)
OS Internet? (Score:1)
Um, I don't think Internet2 is quite what you think it is. See the FAQs [internet2.edu] for more information -- it seems to be centred more around `gigaPOPs' and faster backbones rather than a better distributed naming service/directory infrastructure.
Now Usenet II [usenet2.org], on the other hand... time for October indeed.
--
W.A.S.T.E.
No, this is the problem with government-enforced.. (Score:1)
Well, Microsoft has been in existence for a little over 10 years or so (maybe 15 I don't know), right?
.. And the last 4 or 5 years they have held a monopoly over the PC operating system market (I'll pretend PC/DOS from IBM was a competitor earlier than that). I'd say anything over 1 year in the computer industry is "lasting." In other industries that might not be true, but in the computer industry it seems everything happens several times faster and with several times the profit of "normal" industries.
I know this is comparing apples to oranges, but ISDN technology is older than Microsoft's ownership of DOS.
Apples and oranges, right. I don't see how this relates at all.
While personally I don't think it is necessary for the government to break up microsoft (they will lose in the end, simply because their products are inferior) I do think that something should be done about their coercive licensing (Computer manufacturers are economically punished by microsoft if they don't install Windows on every system they sell).
IMO, as long as I have to pay the "Windows tax" at my local computer store whenever I buy a system, Microsoft has a monopoly. If informed customers are given a REAL CHOICE, they will choose the technically superior product (hint: not Windows).
Free DNS (Score:1)
Free DNS is very possible. At ML.ORG we came close, but due to other issues (not particularly funding) it fell apart. I and a few other complete newcomers were able to do it and handle it fine till we hit the 100,000 domain mark. After that it was quite simply inexperience that killed it.
I am convinced from my work with Monolith though that someone with a little more business background and just plain experience can do a much better job than NSI and charge much less or nothing for the domain. Even without the monopoly.
Freely available to all (Score:1)
Well, maybe?
[email protected]
Proprietary? (Score:1) running on local hosts, and it delivers the full name, U.S.
mailing address, telephone number, and network mailbox for
ARPANET users.
server is accessible across the ARPANET from user programs
...It's 17 years old, open, and it's about as complicated as finger, which is to say it's simple. I don't think it could be proprietary. As to the contents of the database, they've been growing for the past 17 years, and whatever cheesy corporation who thinks they 'own' the internet now needs to go back to hanging out with the script kiddies and stop bugging the users.
Re: ha (Score:1)
What?
... and get their feet all covered in .com? They'll be walking around Washington getting .com over everything. Just think of the cleanup costs!
Govt wants this (Score:2)
WIPO wants to make lists copyrightable by treaty, which trumps laws and court rulings.
WIPO has the effect of also copyrighting court proceedings, laws, membership lists of public orgs, all sorts of things.
WIPO will also copyright the NSI list. If you don't like it, call Congress. (I think you'll find the mail spool linked to
Bank Accuses NSI of misleading investors (Score:1)
An investment bank in New York claims NSI misled investors into believing that their contract would be extended or that it can not be entirely terminated.
Check it out here [techweb.com]
Freely available to all (Score:3)
In responding to a FOIA request submitted by Corsearch in 1996 requesting a copy of the domain name database (NSF FOIA No. 96-090 Request [base.com]), the NSF claimed (NSF FOIA No. 96-090 Response [base.com]):
An administrative appeal of this decision was made (NSF FOIA No. 96-090 Appeal [base.com]). This appeal was rejected. The words "possess" and "control" are being used here in the context of the Freedom of Information Act to determine if the database is an "agency record", and not in respect of claims to ownership of intellectual property. These are the same terms used in the FLITE case (Baizer v. Department of the Air Force, 887 F. Supp. 225 (N.D. Cal. 1995) [base.com]. The FLITE decision has been criticized for its "broad assertion of an exemption from FOIA for 'library' materials, and its questionable use of legal precedent" ( Supreme Court Decisions in FLITE database [essential.org], Information Policy Notes, Taxpayers Assets Project). The Flite decision draws upon the Supreme Court decision in Department of Justice v. Tax Analysts, 492 US 136 (1989) [findlaw.com].
So, the NSF has the right to obtain a copy of the database from Network Solutions, but because the NSF has not chosen to obtain the database, it is not possible to obtain the database from the NSF under the FOIA. And even if the NSF did have a copy of the database, it is not clear, in the light of the FLITE decision, whether the NSF would be required to make the database available under the FOIA.
What is "WIPO"? (Score:1)
NSI is pushing their luck - WE NEED FreeDNS (Score:1)
Whois is, put simply, a protocol for querying a server to get CONTACT information about a given query-string (domain, address space, whatever).
Drop me an e-mail if you want to get involved. [email protected] [mailto]
No, this is the problem with government-enforced.. (Score:1)
I can't speak for Alan Greenspan, or the context of the quote, of course...
ICANN guilty of smearing with bullshit FUD (Score:3)
Try some sort of keyword search and see how many entries return...much fewer than there used to be.
Try to show the domains that have you listed as a contact...oh, you can only get the first 50 that way?
Try to find information about the availability of the domain in the DNS servers...oh, they took the On Hold status off of that return as well.
Slowly, but surely, NSI *is* restricting access...both through their website, *and* through the whois client.
The addition of the redirect from the InterNIC site to NSI's own site, while not impairing functionality in and of itself, sets a *VERY* dangerous precedent and course for NSI. Essentially, by putting that redirect in, if it's not challenged, then NSI can claim that you register information via NSI's website and via NSI, not necessarily through the InterNIC, which NSI just happens to be providing services to. If they are allowed to assert that concept, what happens when the government or ICANN decides to either put another company in charge of the InterNIC, or open it up so that multiple companies have equal access to it (through whatever mechanism)? NSI can then claim that these domain names are registered through them, are their Intellectual Property, and you can't make changes via another company, whether they are now part or all of the InterNIC or not!
Jeff
Notwork Delusions (Score:1)
NSI accused of lying (Score:2)
We believe that NSOL's management has purposely disseminated misleading information, and failed to disclose material negative information, that has led investors to believe that the expiration of this contract will be postponed or that it can not be entirely and easily terminated. Investors have also been led to believe that even if the contract is terminated, NSOL's business value will continue to grow. These expectations are baseless and false.
They also make another accusation [asensio.com] about failing to disclose information.
--
Michael Dillon - E-mail: [email protected]
It's the World Intellectual Property Organization (Score:1) [wipo.org]
Basically, their intent is to get a standardized, more or less, copyright law for all member nations. Since a lot of us users here at
Disclaimer: I'm no copyright expert, just someone interested in all sides of the issue.
Snail mail spam endorsed by NSI (Score:1)
They deserve to die!
Misunderstanding the top of the Internet (Score:1)
CORE (The Internet Council of Registrars) developed a system for the 7 new TLDs where multiple registries can issue domain names. They currently have about 90 registries on 5 different continents waiting for the 7 new TLDs to take effect.
It's good reading, I'd suggest visiting them.
~PanIc~
Internet2 just a closed testing environment (Score:1)
"A key goal of this effort is to accelerate the diffusion of advanced Internet technology, in particular into the commercial sector."
What I2 sounds like at this point is what the Internet was before it was opened to civilian use. Some of you liked that; as someone who under that system wouldn't even be allowed to see the computers, I'm not fond of a university-only system. Especially not one that only certain people at a university get to use, even though all students have to pay for the service's existence.
I just hope that IPv6 trickles down quickly. We are needing that soon.
This just confirms... (Score:1)
We can have shared namespace. CORE Proved it. (Score:1)
It's good reading, I'd suggest visiting them.
~PanIc~
Ugh. (Score:1)
There will never be a free press either. Who would pay for the ink, the paper, the printing presses themselves?
Wait a minute...you mean free like "free beer" don't you? Oops, my mistake!
No, this is the problem with government-enforced.. (Score:1)
You don't HAVE to put up with this bullshit... (Score:1)
Dunno about you. But I'm sick of them. (Score:1)
Misunderstanding the top of the Internet (Score:1)
They may be the dot com people, but... (Score:2)
[gtm@gtm gtm]$ whois dot.com
[rs.internic.net]
Dely, Douglas (DD8922) [email protected]
810-979-2966 (FAX) 810-979-1434
Dely, Francoise (FD1636) [email protected]
810-979-2966 (FAX) 810-979-1434
Dot.Com (DIGITALSEPS-DOM) DIGITALSEPS.COM
Dot.Com Interactive (MEDICALBANK3-DOM) MEDICALBANK.COM
Dot.com (DOTCOMINDIA-DOM) DOTCOMINDIA.COM
Dot.com Distributing (COMDISTRIBUTING-DOM) COMDISTRIBUTING.COM
Haener, Ron (RH12447) [email protected]
941-514-7222 (FAX) 941-514-7025
Robert Gordon (DOT2-DOM) DOT.COM
dot.COM Graphics, Inc. (DOTCOMGRAPHICS-DOM) DOTCOMGRAPHICS.COM
dot.com KK (DOTCO-DOM) DOTCO.COM
dot.com dcvelopment, inc. (ACTIVEADS-DOM) ACTIVEADS.COM
dot.com development (DOTCOMDEV-DOM) DOTCOMDEV.COM
dot.com development, inc (ACTIVENEWS-DOM) ACTIVENEWS.COM
dot.com development, inc. (ACTIVETIME-DOM) ACTIVETIME.COM
dot.com development, inc. (ACTIVETRACK-DOM) ACTIVETRACK.COM
dot.com development, inc. (ACTIVE-ADS-DOM) ACTIVE-ADS.COM
dot.com development, inc. (BELLATLANTICSUCKS4-DOM) BELLATLANTICSUCKS.COM
To single out one record, look it up with "!xxx", where xxx is the
handle, shown in parenthesis following the name, which comes first.
[gtm@gtm gtm]$ whois dot2-dom
[rs.internic.net]
Registrant:
Robert Gordon (DOT2-DOM)
713 Vanguard
Austin, TX 78734
Domain Name: DOT.COM
Administrative Contact:
Wenzel, George (GW23) [email protected]
(512) 451-0046
Technical Contact, Zone Contact:
Gustwick, Bob (BG99) [email protected]
(512) 451-0046 (FAX) (512) 459-3858
Record last updated on 29-Jun-94.
Database last updated on 28-Mar-99 21:44:09 EST.
Domain servers in listed order:
NS.REALTIME.NET 205.238.128.39
NS2.REALTIME.NET 205.238.128.42
Freely available to all (Score:1)
The whois database pre-exists NSI's involvement by about two decades, back to the early days of ARPANET. I haven't examined all the legal documentation in detail, but it was my understanding that NSI was managing the whois database, and by extension the name registry itself, in trust for the US government that granted it the license.
Instead, they are trying to do something akin to the compilation copyright approach used by West Publishing to appropriate federal and state court cases to its own benefit. Because West's notation and pagination system has been widely adopted by courts in making rules on how lawyers must submit legal citations in their written briefs, West gained a monopoly over the law publishing business and, in fact, spent considerable money in Congress to maintain that monopoly.
This is a bit tangential and I won't get into all the gory details, but it seems like NSI has embarked on a similar campaign to appropriate the domain name system data to its own exclusive proprietary benefit. There can be no other way to read Clough's statement and the response from ICANN.
This may only seem important to intellectual property lawyers and geeks, but since I am of the latter persuasion and once upon a not so very long ago time remember when Jon Postel's crew ran whois, it matters to me. A lot. Especially given NSI's rapidly accelerated bullying and conniving to take advantage of their monopoly position while they provide shitty service.
-------
NSI accused of lying (Score:1)
"ICANN is the sole DNS authority with a registrar licensing program, and an approved DNS accreditation policy statement, application and greement. ICANN's policies and agreement include registrar eligibility requirements, contemplate U.S. and World Intellectual Property Organization intellectual property issues, and domain name dispute resolution. Importantly, ICANN requires registrars to disclaim all rights to ownership or exclusive use of certain DNS data elements and to escrow DNS data. This is particularly important to Internet users and domain name holders who have no such protection under the current system. If NSOL desires to continue to be the domain name registry or registrar it will be required to enter into an accreditation agreement with ICANN. Regardless, according to ICANN's registrar accreditation
plan, the entire Internet Who-Is database will be safely escrowed and free from any claims by the registry or registrar within no more than 24
months after the testbed is concluded. According to ICANN's established policies, ICANN has the right to terminate the accreditation agreement of any DNS participant who fails to abide by its policies."
-------
No way they can do that! (Score:1)
TA
Hrm... (Score:1)
NSI sells mailing lists as well.. (Score:1)
NSI a monopoly? (Score:1)
Nothing is currently stopping anyone from organizing a completely separate namespace (DNS-based or otherwise). Losing compatibility with the NSI namespace is an obstacle, but not a serious one. Perhaps a cross-namespace gatewaying/interchange standard for the DNS protocol is in order? (e.g. "*.foobar.gw." corresponds to "." in the foobar namespace; something along these lines, anyway.) I, for one, wouldn't have a problem with falling back to IP's & hostlists or incompatible competing root namespaces until a better solution is implemented.
aT
ha (Score:1)
ICANN guilty of smearing with bullshit FUD (Score:1)
There was no hijacking here. It was one company trying to consolidate its brand into one focused message (and why not, with competition coming up), and trying to put a stop to abusers who were bringing its systems to their knees. I'm really having a hard time understanding what these guys did that was so wrong... please someone fill me in.
No, this is the problem with government-enforced.. (Score:1)
OS Internet? (Score:1)
Ugh. (Score:1)
WOW, a little overstated, don't you think? They do something as simple as redesign their website and entire lists are devoted to pointing out their "gall" for doing so and poking fun at the new colors and icons and how they are now selling t-shirts etc... What makes you think that they could get away with ever flexing that muscle you ascribe them to have?
I don't think they're quite the power-hungry, world dominating Microsloftians clone you think them to be.
Less talk, more action! (Score:1)
NSI's double standard (Score:1)
So everyone has access, and people complain because they get spammed. The double standard works both ways. Some complain no matter which way rules the day.
ICANN guilty of smearing with bullshit FUD (Score:1)
InterNIC is an _activity_ of the US government; it provides services (registry and registrar, among others), to the US government.
NSI has a contract to provide these services, which contract is set to expire.
They have those names pursuant to a contract with the government, and title to whatever intellectual property those names comprise as a database does _not_ reside with NSI, it's owned by the US Government. is a front door to that government activity. It is most decidedly _not_ the property of NSI (the trademark on InterNIC rests with the feds), and when I go to that website, I expect to find "the InterNIC", _not_ Netsol.
_Especially_ when they're marking up a service they themselves are providing, and attempting to hide the fact that you can get it at cost.
Sorry; this is out and out fraud, and I've heard that there may be criminal charges.
Cheers,
ICANN guilty of smearing with bullshit FUD (Score:1)
NSI accused of lying -- not quite..... (Score:1)
Nutquirk Sellutions (Score:1)
- NeuralAbyss
~^~~~^~~~^~~~^~~~^~~~~^^^~~~~~~~~~~~~~~~
Real programmers don't comment their code.
DNS Root (Score:1)
1) They don't put their email addresses online. Nor any relevant contact information. They could vanish tomorrow and we would have no recourse.
2) 2 of their root name servers I cannot contact
3) Running a root nameserver is an exhaustive endeavor.. they should be on multiple T3 links.
4) Why in the heck are they running HTTP, SMTP, TELNET, POP and FTP on a root nameserver?!?!?!
Apologies. A for idea.. D for effort and planning.
--- | https://slashdot.org/story/99/03/29/136224/nsi-claims-whois-database-is-proprietary | CC-MAIN-2017-43 | refinedweb | 5,085 | 64.71 |
Subject: Re: [Boost-users] [boost] [Fit] formal review starts today
From: Vicente J. Botet Escriba (vicente.botet_at_[hidden])
Date: 2016-03-13 19:11:31
On 13/03/2016 23:14, Lee Clagett wrote:
>
>
>
Thanks Lee for your review.
>> You are strongly encouraged to also provide additional information:
>> - What is your evaluation of the library's:
>> * Design
> - Gcc 5.0+
> Why are the newer compiler versions not supported? I thought C++11
> support had been improving in Gcc?
Yes, this is a point that must be solved.
> - Functions section and fit::lazy adaptor
> I think everything in this section should be removed from the
> library. I do not see a need for duplication with other existing
> Boost lambda libraries and even Hana. I think this library should
> focus solely on composing functions in different ways. However, I
> do not think the inclusion of this functionality is a reason to
> reject the library.
While doing the Hana review we had a discussion and accepted the functional
part in Hana, as we knew that we would have Fit in Boost. The high order
functions in Hana belong to a distinct library. Hana is a C++14 library.
Fit is C++11.
>
> Two of the reasons stated are constexpr and SFINAE support.
I don't believe we can do better with a C++11 compiler.
Please could you elaborate about SFINAE, what is the issue?
> Could existing lambda libraries could be upgraded? Hana should be
> upgradeable at a minimum. *And* Thomas Heller was floating around a
> C++14 phoenix that was also interesting.
IMO, it is out of the scope of this review to discuss how other libraries
will be updated.
> - fit::lazy
> Why is the creation of a bound function a two-step process?
> fit::lazy(foo)(_1, 2)
> Why differ from existing libraries where a single function call
> creates the binding?
I let Paul respond here.
> - Pipable Adaptor
> This implicitly adds an additional `operator()` to the function
> which acts as a partial function adaptor (or something similar).
> Would it be better to add a specially named function for this?
Yes, more documentation is always welcome. Pipable and variadic functions,
or functions with default parameters, don't work well together :( and I don't know
how it could work.
> -.
It is curious that there is already something ^like^ that in Boost :)
I like this infix notation. It is a toy. I could also live without.
> - fit::detail::make and fit::construct
> Aren't these doing the same thing? `detail::make` is slightly more
> "stripped" down, but is it necessary? Also, should this be
> exposed/documented to users who wish to extend the library in some
> way?
Please, could you elaborate?
> - Placeholders
> If they were put in their own namespace, all of them could be
> brought into the current namespace with a single `using` without
> bringing in the remainder of the Fit library. Phoenix does this.
Good suggestion.
> - Alias
> Why is this provided as a utility? What does this have to do with
> functions? This seems like an implementation detail.
Agreed. This is out of the scope of the library and even useful should
be moved away. As signaled by Paul, I have already something like that
in a lot of my libraries. Why not Boost.Utility?
>
>
>> *.
> - Why do the functions `first`, `second` take parameter packs?
> Retrieving the value stored in the pair does not require any
> values, and the parameter pack eventually is dropped in
> `alias_value`. I think this makes the code unnecessarily
> confusing.
> - Why does the function get_base exist as a public function? This
> library uses inheritance liberally, so I suppose its easier to
> retrieve one of its bases?
This is a detail of implementation. Please, could you tell us where this
is used at the user level?
> -?
Please, could you give some concrete examples?
> -?
Paul?
> -!
Paul?
> -).
Paul?
> -?
There are not many standard library implementations that provide a constexpr
forward. We usually do that ourselves.
> -.
There is already a Github issue accepted by Paul.
> - Macro specializations
> There are already a lot of specializations for various compiler
> versions that do not fully implement C++11. I think the fit
> implementation would be more clear if support for these
> compilers was removed or at least reduced. For example, there are a
> number of specializations to support Gcc 4.6 specifically, but yet
> Gcc 5.0+ is not supported.
This is something a Boost library author decides. Of course, it doesn't make
much sense not to support more recent compilers.
> -.
There are only 3 macros configuration documented. The others should be
renamed BOOST_FIT_DETAIL.
Paul has accepted to use Boost.Config for the detection macros, but
Boost.Config uses Boost.Core for its tests :(
There is already a Github issue that Paul must address.
>
>
>> * Documentation
> - IntegralConstant
> Needs to be defined in the Concept section.
> -.
Could you elaborate?
> - fit::is_callable
> Is this trait simply an alias for std::is_callable, or the Callable
> concept defined by Fit?.
I believe the library must document whatever. Adding some links are
always welcome.
The Callable concept seems the same even if expressed in a different way.
> - fit::partial
> I think its worth mentioning the behavior with optional arguments
> (default or variadic). When the function underlying function is
> actually invoked depends on this.
Agreed.
> - fit::limit
> Does the implementation internally use the annotation? What
> advantage is there to this and the associated metafunction?
> - fit::tap
> Is `tee` more appropriate?
> - Extending Library
> I think it might be useful to discuss how to write your own adaptor
> or decorator.
AFAIK, Fit has no specific extension mechanism. The user is free to
define as many adaptors, functions as she wants.
I agree that some examples of how to implement one simple adaptor will
be welcome.
>
>> *.
Good point. Paul added bjam support just before the review. He doesn't
yet master the basics of bjam.
>
>
>> * think Paul is capable
> of addressing my concerns. I also like Louis' comment about justifying
> the purpose of the library - or at least provide a better definition
> for the library. And how much of the functionality can already be done
> with Hana?
I can't quite understand why you are rejecting the library. Is it
because of the quality of the code or the test? The global design? the
documentation? Is there something that must be modified so you could
accept the library?
>
>
>> -.
I understand. The library is not so little.
>
> Lee
>
Thanks again, | https://lists.boost.org/boost-users/2016/03/85871.php | CC-MAIN-2020-24 | refinedweb | 1,063 | 68.97 |
On Wed, 16 Jun 2010 07:36:10 Stefano Sabatini wrote:
> On date Wednesday 2010-06-16 01:42:46 +0930, Rodney Baker encoded:
> > This...
>
> We use the
> #if LIBAVSTUFF_VERSION_MAJOR < NEXT
> ...
> #endif
> construct when we want to deprecate something, this way we're sure
> that old stuff won't be kept around when we'll break compatibility
> (which happens *only* at major version bumps).
> [...]
> Regards.

You mean like the attached? I note that there is a similar block of code further up the options list. Is it OK to move these options up and use the same #if...#endif block or would you prefer to leave them where they are (as in the attached patch)?

BTW, this patch currently has the changes from the other proposed patches that Michael has not yet approved because they break API, so I will revert those changes for this patch if it is OK - just checking that this is what you meant.

--
===================================================
Rodney Baker VK5ZTV
rodney.baker at iinet.net.au
===================================================
[Attachment: lavc-options-c.diff-20100117-1 (text/x-patch, 1474 bytes)]
so I have a class Weather that should return random results, and I'd like to test it using a stub. I read Martin Fowler's article on this
and I feel it would be the easiest solution. But it's difficult to find any examples of the syntax. Could you give me an example of a test? This is part of my homework.
class Weather
  def conditions
    return :good if chance > 0.3
    :stormy
  end

  def chance
    rand
  end
end
Based on your example, you want to test the behavior of conditions, not the implementation of chance.
describe Weather do
  it 'returns good' do
    weather = Weather.new
    allow(weather).to receive(:chance).and_return(0.8)
    expect(weather.conditions).to eq :good
  end
end
java.lang.Object
org.joe_e.Struct
org.ref_send.log.CallSite
public class CallSite
A source code location.
public final java.lang.String name
public final java.lang.String source
public final PowerlessArray<IntArray> span
The expected structure of this table defines a span from the start of the relevant source code to the end. The first row in the table is the start of the span and the second row is the end of the span. Each row lists the line number followed by the column number. For example, a span of code starting on line 5, column 8 and ending on line 6, column 12 is encoded as:
[ [ 5, 8 ], [ 6, 12 ] ]
The delimited span is inclusive, meaning the character at line 6, column 12 is included in the span defined above.
If the end of the span is unknown, it may be omitted. If the column number is unknown, it may also be omitted. For example, in the case where only the starting line number is known:
[ [ 5 ] ]
If source span information is unknown, this member is
null.
Both lines and columns are numbered starting from one, so the first
character in a source file is at
[ 1, 1 ]. A column is a
UTF-16 code unit, the same unit represented by a Java
char.
Lines are separated by any character sequence considered a Unicode line terminator.
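For illustration, a call site covering that same example span could be built roughly like this. This is only a sketch: it assumes the usual joe-e array factory methods PowerlessArray.array and IntArray.array, and the class and file names are invented.

    import org.joe_e.array.IntArray;
    import org.joe_e.array.PowerlessArray;
    import org.ref_send.log.CallSite;

    public class CallSiteExample {
        public static void main(String[] args) {
            // span from line 5, column 8 (inclusive) to line 6, column 12 (inclusive)
            PowerlessArray<IntArray> span = PowerlessArray.array(
                IntArray.array(5, 8),    // start row: [ line, column ]
                IntArray.array(6, 12)    // end row:   [ line, column ]
            );
            CallSite site = new CallSite("parse", "org/example/Parser.java", span);
            // name and source are public final fields, so they can be read directly
            System.out.println(site.name + " @ " + site.source);
        }
    }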
public CallSite(java.lang.String name, java.lang.String source, PowerlessArray<IntArray> span)
Parameters:
name - name
source - source
span - span
Copyright 1998-2009 Waterken Inc. under the terms of the MIT X license. | http://waterken.sourceforge.net/javadoc/org/ref_send/log/CallSite.html | CC-MAIN-2014-15 | refinedweb | 266 | 64.61 |
This is free to a good home. If you are interested, leave a comment and I'll pick someone at random at 9pm on Wednesday (using or similar).
I'll scan and email the card to the lucky recipient.
TimA:reven: my son would love a copy of it :)
thanks.
If I get it I'll be sure to let your son have it :)
Server : i3-3240 @ 3.40GHz 16GB RAM Win 10 Pro Workstation : i5-xxxx @ x.xxGHz 16GB RAM Win 10 pro Console : Xbox One - Games, geeks, and more.
#include <std_disclaimer>
Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have. | https://www.geekzone.co.nz/forums.asp?forumid=77&topicid=151897 | CC-MAIN-2018-51 | refinedweb | 113 | 83.36 |
Understanding Struts Controller
In this section I will describe the Controller. It is the Controller part of the Struts
Framework. The ActionServlet is configured as the controller servlet in the web.xml deployment descriptor.
example explanation - Java Beginners
example explanation: Can I have some explanation regarding the program given as the serialization example? Hi friend,
Read for more information.
explanation
the explanation in internet,but not very clear about it. Thank you
more than one struts-config.xml file
more than one struts-config.xml file Can we have more than one struts-config.xml file for a single Struts application
struts I want to know clear steps explanation of struts flow..please explain the flow clearly:
Struts - Struts
Struts Can u giva explanation of Struts with annotation withy an example? Hi friend,
For solving the problem visit to :
Thanks
Struts Books
for more experienced readers eager to exploit Struts to the fullest.
...the Model-View-Controller (MVC) design
paradigm. Want to learn Struts and want to get... the principles of design are deeply rooted. Struts uses the Model-View-Controller design pattern.
for more information.
Thanks. ... Struts: Hi,
I am getting an error when running a Struts application.
i...
/WEB-INF/struts-config.xml
1
struts - Struts
.
One more responsibility of the controller is to check....
Struts-config.xml is used for making the connection between view & controller... struts: hi,
what is meant by struts-config.xml and what are the tags...
.
-----------------------------------------------
Read for more information. What is Model View Controller architecture in a web application? ... the presentation data to the user.
Model-View-Controller (MVC) is an architectural pattern that separates an application into a Model (the application data and business logic), a View (the presentation of that data) and a Controller (which handles incoming requests and updates the model).
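To make that separation concrete, here is a rough, self-contained Java sketch of the three roles. All class names are invented for illustration; in a real Struts application the View would be a JSP and the Controller would be an Action.

    // Model: plain data object (would normally be loaded by a service or DAO).
    class Account {
        private final String owner;
        private final double balance;
        Account(String owner, double balance) { this.owner = owner; this.balance = balance; }
        public String getOwner() { return owner; }
        public double getBalance() { return balance; }
    }

    // View: renders the model; here just a string, in a web app this would be a JSP.
    class AccountView {
        String render(Account a) {
            return "<p>" + a.getOwner() + ": " + a.getBalance() + "</p>";
        }
    }

    // Controller: handles the request, consults the model, selects the view.
    class AccountController {
        private final AccountView view = new AccountView();
        String handleRequest(String owner) {
            Account model = new Account(owner, 42.0); // stand-in for a real lookup
            return view.render(model);                // the controller never builds HTML itself
        }
        public static void main(String[] args) {
            System.out.println(new AccountController().handleRequest("scott"));
        }
    }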
explanation - Java Beginners
Java Code Explanation
struts validations - Struts
---------------------------------
Visit for more information. validations hi friends i an getting an error in tomcat while running the application in struts validations
the error in server
best Struts material - Struts
best Struts material hi ,
I just want to learn basic Struts.Please send me the best link to learn struts concepts
Hi Manju,
Read for more and more information with example at:
http
Redirection in struts - Struts
sendredirect can we forward a page in struts Hi
There are more ways to forward one page to another page in struts.
For Details you can click here:
Thanks
struts internationalisation - Struts
code to solve the problem :
For more information on struts visit...struts internationalisation hi friends
i am doing struts internationalisation in the site
Struts Architecture - Struts
Struts Architecture
Hi Friends,
Can u give clear struts architecture with flow. Hi friend,
Struts is an open source framework used for developing J2EE web applications using the Model View Controller (MVC) design pattern.
Struts-It: makes viewing and modifying Struts-config.xml much easier and quicker... in their existing web applications.
For more information:... Struts-It
what is struts? - Struts
.
Struts provides its own Controller component and integrates with other... | what is struts? What is Struts? How is it used, and what...? The core of the Struts framework is a flexible control layer based on standard technologies like Java Servlets, JavaBeans, ResourceBundles, and XML.
Struts file downloading - Struts
Struts file downloading how to download a file when i open a file... will help you.
visit for more information.
Thanks
huffman code give the explanation for this code
huffman code give the explanation for this code package bitcompress;
class Compress extends BitFilter {
private boolean encode_flag = false;
private BitFilter next_filter = null;
private int value;
private 2.0- Deployment - Struts
Struts 2.0- Deployment Exception starting filter struts2
java.lang.ClassNotFoundException: org.apache.struts2.dispatcher.FilterDispatcher
JDK...
/*
For more information on Struts2 visit to : Project Planning - Struts
Struts Project Planning Hi all,
I am creating a struts application.
Please suggest me following queries i have.
how do i decide how many...,
I am sending you a link. This link will help you.
Please visit for more
java struts error - Struts
java struts error
my jsp page is
post the problem... the problem what you want in details.
For more information,Tutorials and examples on struts visit to : <s:include> - Struts
Struts Hello guys,
I have a doubt in struts tag.
what am i...? or struts doesnt execute tags inside fetched page?
the same include code...)
For any more problem give details
struts 1.3
struts 1.3 After entering wrong password more than three times,the popup msg display on the screen or the username blocked .please provide its code in struts jsp
logout - Struts
();
//...
-----------------------------------
Visit for more information: how to make code in struts if i click the logout button then no body can back refresh the page , after logout nobody will able to check
jsp - Struts
-Controller (MVC) design paradigm.
For more information, visit the following link:
Thanks...jsp
wat is Struts Hi Friend,
Struts is an open
IMP - Struts
choices) with answers for struts.
kindly send me the details
its urgent for me
Thanku
Ray Hi friend,
Visit for more information.../jakartastrutsinterviewquestions.shtml
Thanks
Struts Tutorial
the model
from view and the controller. Struts framework provides the following three... the
information to them.
Struts Controller Component : In Controller, Action class...In this section we will discuss about Struts.
This tutorial will contain
Hi... - Struts
more information,tutorials and examples on Struts with Hibernate visit to :
Thanks This link...Hi... Hi,
If i am using hibernet with struts then
java - Struts
:
For more information on struts visit to : how can i get dynavalidation in my applications using struts... :
*)The form beans of DynaValidatorForm are created by Struts and you configure
need a sample project using java technologies like jsp, servlets, struts
; Hi everybody!
I have learnt core java,jdbc,jsp,servlets & struts... other project with detailed explanation using the above technologies by which i can expertise more in real time.
Thank you in advance.
-Kiran Kumar
Hello - Struts
to going with connect database using oracle10g in struts please write the code...
--------------------------------------
Visit for more information.
Thanks
Amardeep
validation - Struts
single time only.
thank you Hi friend,
Read for more information,
Thanks
java - Struts
:
In Action Mapping
In login jsp
For read more information on struts visit to :
Thanks
java - Struts
of the Application. Hi friend,
Struts is an open source framework used for developing J2EE web applications using Model View Controller (MVC) design...
About Struts processPreprocess method - Struts
will abort request processing.
For more information on struts visit to :
Thanks...About Struts processPreprocess method Hi java folks,
Help me
Struts - Framework
,
Struts :
Struts Frame work is the implementation of Model-View-Controller... of any size.
Struts is based on MVC architecture :
Model-View-Controller... are the part of Controller.
For read more information,examples and tutorials
configuration - Struts
class,ActionForm,Model in struts framework.
What we will write in each... services are accessed by the controller for either querying or effecting a change....
Action class:
An Action class in the struts application extends the Struts org.apache.struts.action.Action class; the controller (ActionServlet) calls its execute() method when an incoming request maps to it.
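As an illustration, a minimal Struts 1.x Action might look roughly like the sketch below; the forward names and request parameter are made up, and the actual forwards would be declared in struts-config.xml.

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.struts.action.Action;
    import org.apache.struts.action.ActionForm;
    import org.apache.struts.action.ActionForward;
    import org.apache.struts.action.ActionMapping;

    public class LoginAction extends Action {
        public ActionForward execute(ActionMapping mapping, ActionForm form,
                                     HttpServletRequest request,
                                     HttpServletResponse response) throws Exception {
            // Controller logic only: delegate real work to the model layer,
            // then pick a forward that is defined in struts-config.xml.
            String user = request.getParameter("username");
            if (user != null && user.trim().length() > 0) {
                return mapping.findForward("success");
            }
            return mapping.findForward("failure");
        }
    }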
Error - Struts
Error Hi,
I downloaded the roseindia first struts example... create the url for that action then
"Struts Problem Report
Struts has detected...
-----------------------
RoseIndia.Net Struts 2 Tutorial
RoseIndia.net Struts 2
exception - Struts
exception Hi,
While try to upload the example given by you in struts I am getting the exception
javax.servlet.jsp.JspException: Cannot... is running code, read for more information.
java - Struts
java code for login page using struts without database ...but using flatfiles like excle ...where the username and password has to compared from... to:
---------------------
Read for more information.
java - Struts
java Hi,
I want full code for login & new registration page in struts 2
please let me know as soon as possible.
thanks,. Hi friend,
I am sending you a link. This link will help you. Please visit for more
Struts 2.0.6 Released
Struts 2.0.6 Released
Download Struts 2.0.6 from
This release is another grate step towards making Struts 2 more robust and
usable.
Java - Struts
.
-------------------------------------------
Read for more information.
Thanks. my doubt is some
java - Struts
it.
For read more information on Struts visit to :...:
Submit
struts... friend,
Check your code having error :
struts-config.xml
In Action
Tiles - Struts
Inserting Tiles in JSP Can we insert more than one tiles in a JSP page
Textarea - Struts
characters.Can any one? Given examples of struts 2 will show how to validate... the value more then 250 characters and cant left blank.Index.jsp<ul><...;%@ taglib prefix="s" uri="/struts-tags" %><html><head>
Using radio button in struts - Struts
source code to solve the problem :
For more information on radio in Struts
java - Struts
java what do u mean by rendering a response in Struts2.0 Hi friend,
Read for more information.
Thanks
struts logic tags
struts logic tags what is the use of struts logic tag
The purpose of Struts logic tags is to alter output depending on the given... to true.
For more information, visit the following link:
http | http://roseindia.net/tutorialhelp/comment/90772 | CC-MAIN-2013-48 | refinedweb | 1,446 | 59.7 |
Details
- Type:
Bug
- Status: Resolved
- Priority:
Major
- Resolution: Cannot Reproduce
- Affects Version/s: 1.5
-
-
- Labels:None
- Environment:Operating System: All
Platform: PC
Description
Hi there,
I encountered a problem with Velocity when invoking inherited methods of a Java
class. I get the
error "org.apache.velocity.runtime.exception.ReferenceException: reference :
template = screens/Details.vm [line 48,column 1] : $class.getFullName() is not
a valid reference" if I want to invoke the "FullName" method of the "class"
object. This method is inherited from a superior class. If I call a method
defined directly in this class, everything works fine. If I obtain the name of
the "class" object, I get the correct name.
It would be nice, if anyone could help me on this.
Activity
Hi,
I now used the latest version of Velocity. I used the Nightly build from the
1st of May 2002 as well as the velocity-1.3-rc1 version and I'm always
encountering the same error as I decribed in this message.
As mentioned, this happens every time, I want to call a method, which is not
locally defined, but inherited from a superclass.
It seems as if there is a problem of mapping the methods from the Velocity
macros to the referenced Java classes. If I'm using these methods in Java (e.g.
in a Servlet), everything works fine with the methods I want to call.
Hiya Jeff. Please attach sample code to reproduce the bug.
Hi,
here is some sample code of the classes, I use:
This is the class, I want to use:
public class OW_Class extends OWB_Class implements OWI_Class {
/**
- OW_Class - constructor comment.
*/
public OW_Class() { super(); }
.
.
.
}
which is inherited by this Class:
abstract class OWB_Class extends OWB_AttributedObject {
private static java.util.Hashtable LookUpTable = new java.util.Hashtable();
public OWB_Class(){ super(); }
.
.
.
}
Which is inherited by:
abstract class OWB_AttributedObject extends OWB_Object {
public OWB_AttributedObject() { super(); }
public String getFullName(){ Instance po = (Instance)getApplicationObject(); if (po == null) return null; return po.getName(); }
.
.
.
}
If I now want to invoke the "getFullName" method with an "OW_Class" Object, I
get the error I described. If I inspect the class with Java Reflection inside
the Velocity Macro, I get all the method names, including this sample method's
name above.
Created an attachment (id=6233)
simple example of problem
I just added attachment 6233, which I believe demonstrates this problem clearly.
(I ran this example with Velocity 1.3.1)
After writing this example, the problem--which seems to be the same as the one
described in this bug--appears to be that Velocity fails to recognize a public
inherited method if the class from which it is inherited is package private (or
default) visibility.
Here's my example base and child class definitions (for convenience--use the
attachment if you want to run it):
package test;
class BaseClass {
private int id;
public BaseClass(int id){ this.id = id; }
public int getId(){ return id; }
}
////////////////////////////////////////////
package test;
public class ChildClass extends BaseClass {
private int someAttribute;
public ChildClass(int id){ super(id); }
public int getSomeAttribute(){ return someAttribute; }
public void setSomeAttribute(int attribute){ someAttribute = attribute; }
}
////////////////////////////////////////////
Notice that BaseClass has default visibility, ChildClass is public, and the
inherited method, getId(), is also public. In Java, the inherited method getId
is visible from any package when it is called on a ChildClass object. This does
not appear to be the case in Velocity.
Even if it was made valid in Velocity, method invocation will throw an
IllegalAccessException, due to a bug in JDK.
In order to bypass that, some ugly setAccessible(true) will be needed. Even
that will not work in certain security-managed environments. For now, I
suggest making a (superfluous) public method in the subclass calling the method
in the superclass.
#*
I also have a feeling this issue will get invalid when the JDK bug gets fixed.
*#
Looks like Bug Parade also has a voting system. Please vote if you think this
bug is significant.
I'm against using setAccessible within Velocity. It's hard enough to
configure the security policies for a webapp container as it is. Why add
another requirement to the mess?
Works for me in 1.6-dev.
@#%! So, in hopes of creating a beta1 test build, i switch to java 1.4.2 and tried running things. apart from the various 1.5-isms that snuck past me, i also discovered that the test case for this bugger fails in Java 1.4.x.
Testcase: testPublicMethodInheritedFromPrivateClass took 0.031 sec
FAILED
expected:<bar> but was:<$bar.bar()>
junit.framework.ComparisonFailure: expected:<bar> but was:<$bar.bar()>
at org.apache.velocity.test.BaseEvalTestCase.assertEvalEquals(BaseEvalTestCase.java:85)
at org.apache.velocity.test.issues.Velocity579TestCase.testPublicMethodInheritedFromPrivateClass(Velocity579TestCase.java:45)
Since this appears to a JDK bug that is fixed in later versions, i am leaving this resolved. I'll just adapt the testcase to ignore this when running on a pre 1.5 jdk. I don't see us ever working around this since it works in newer JDKs.
Correction. This bug still appears to be present in JDK 1.5. It is fixed in 1.6. Adapting the test case to match...
I am marking this as invalid, as there is no clear bug here yet -
According to this bug report, you are using version 1.0 (which I doubt
so
if you could try the latest CVS HEAD, as it contains the latest code and
fixes for introspection, that would be great.
If there is no problem there, try backing down to the 1.3-rc1 release.
If there still is a problem, just reopen this bug report.
And don't hesitate to bring this directly to the user list if you want | https://issues.apache.org/jira/browse/VELOCITY-70 | CC-MAIN-2016-50 | refinedweb | 945 | 57.57 |
Subject: [ggl] spatial index
From: Adam Wulkiewicz (adam.wulkiewicz)
Date: 2011-04-22 18:57:04
Adam Wulkiewicz wrote:
> Adam Wulkiewicz wrote:
>> I've committed changes. Box is removed from rtree parameters. I've left
>> it in the classes below for now. I've changed namespaces hierarchy and
>> renamed classes and functions as well. Now it looks a lot better.
>
> I've implemented reinsertions so basic version of insert algorithm is
> finished. Run e.g. glut_vis.cpp to see how the structure of the tree
> looks like.
I've performed some tests and I'm afraid that the new implementation is
slower than the old one. I'll gather the results and present them as
soon as I test the version with internal nodes derived from the node class.
Regards,
Adam
Geometry list run by mateusz at loskot.net | https://lists.boost.org/geometry/2011/04/1233.php | CC-MAIN-2019-43 | refinedweb | 139 | 76.93 |
See Chapter 18 of Java Developer's Guide and Reference for syntax, that may help.
Last edited by stecal; 06-28-2004 at 09:10 PM.
I don't know which Java development tool you are using (I'm using Eclipse) but I can provide you a basic Java-Oracle connection example.
All you'll need is classes12.jar; the file must be imported in your Java project. You can find it in your \\ORAHOME\JDBC\LIB directory.
Then you can do this:
----------------
import java.sql.*;
public class FirstExample {
public static void main(String[] args) {
try {
DriverManager.registerDriver (new oracle.jdbc.driver.OracleDriver());
Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:RESEARCH","scott","tiger");
// jdbc:oracle:thin - JDBC driver
// localhost (it also accepts IP)
// 1521 - port
// RESEARCH - SID
// scott - username
// tiger - password
Statement stmt = conn.createStatement();
// a select query
ResultSet rset = stmt.executeQuery ("SELECT ename FROM emp");
while (rset.next()) {
System.out.println("ENAME = " + rset.getString(1));
}
rset.close();
stmt.close();
conn.close();
} catch (Exception e) {
System.out.println("ERROR : " + e.getMessage());
}
}
}
--------------------
Works for me, the result is
ENAME = SMITH
ENAME = ALLEN
ENAME = WARD
...
...
Samo Decman
I love deadlines. I especially like the whooshing sound they make as they go
flying by. (c)Dilbert
Thanks for the response...but I thought the classes12.zip was for the java 1.3 version and ojdbc14.jar was for java 1.4.2?
Also, I got it to work in Netbeans by mounting an archive and then pointing to the ojdbc14.jar file, however, how would one compile and run this from the command line if the only way for it to work is to import these files into your project?
Thanks for your help!
Take care,
Chris
Yes and no. Classes12.jar work well in Java 1.1 and above environment, but ojdbc14.jar is ONLY for Java 1.4x. But since your default JRE is of version 1.3.1, your code wouldn't run even if you would get it compiled with your Java 1.4.1_03 SDK. That's why I used classes12 in my example.
Furthermore, the interface of both jars is the same, so instead of classes12.jar include ojdbc14.jar in above example. No code modification needed.
So how do you compile and run the program in a "stone age style"?
javac -classpath C:\oracle\ora92\jdbc\lib\ojdbc14.jar FirstExample.java
java -cp .;C:\oracle\ora92\jdbc\lib\ojdbc14.jar FirstExample
Why don't you download a Java development tool? Here's my favourite
It's a nice little tool, free, about 80 Megs with all important features, no installation needed. And you don't need a P4 3.0 with 1GB RAM to run it. It's suitable for beginners, no need to be heavy-duty Java programmer.
Hi all,
I also was trying to deploy a simple JSP page with a SQL call to an Oracle instance, but I keep getting the "package oracle.jdbc does not exist" or similar errors.
I used Eclipse IDE and deployed to Tomcat server, which I thought might be the problem, so I downloaded and installed Sun's Glassfish server. The same problem persisted.
I followed instructions/advice and added the file C:\oracle\product\10.2.0\db_1\jdbc\lib\classes12.jar to my build path. Would not work.
Finally, I happened to notice a warning from eclipse:
"Classpath entry C:/oracle/product/10.2.0/db_1/jdbc/lib/classes12.jar will not be exported or published. Runtime ClassNotFoundExceptions may result."
If you see this warning, right mouse it and choose Quick Fix - a dialog offers two quick fixes, the first one will be:
"Mark the associated raw classpath entry as publish/export dependency."
With this option highlighted click Finish - this will remove the warning and resolve this problem.
Once I did that my jsp page worked correctly in either Tomcat or Glassfish. I tried it on several other projects and it also worked.
I hope this was helpful - it worked for me.
JC
This release adds namespace support and a whitespace stripping rule. There has been some internal restructuring.
________________________________________________________________________
Anobind is a Python/XML data binding, which is just a fancy way of saying it's a very Pythonic XML API. You feed Anobind an XML document and it returns a data structure of corresponding Python objects. For example, the document

<monty>
  <python spam="eggs">What do you mean "bleh"</python>
  <python ministry="abuse">But I was looking for argument</python>
</monty>

Would become a set of objects so that you could write

binding.monty.python.spam

In order to get the value "eggs", or

binding.monty.python[1].text_content()

In order to get the value "But I was looking for argument".

There are other such tools for Python, and what makes Anobind unique is that it's driven by a very declarative rules-based system for binding XML to the Python data. One can register rules that are triggered by XPatterns or plain Python code in order to register specialized binding behavior. It also offers XPath support and some support for round-tripping documents. Anobind is open source, provided under the 4Suite variant of the Apache license. It requires Python 2.2.2 and 4Suite 1.0a3.
--
Uche Ogbuji
Fourthought, Inc.
Introducing Anobind -
XML Topic Maps by the book -
Charming Jython -
Python, Web services, and XSLT -
Perspective on XML: What is this 'agility'? -
MS Word *.doc to *.HTML
dar
Ranch Hand
Joined: Nov 08, 2001
Posts: 45
posted
Jan 24, 2003 02:59:00
0
Hi, ranchers,
I want to convert an AB.doc file to an AB.HTML file in Java. Does anyone have a good suggestion?
Best Regards,
chison
karl koch
Ranch Hand
Joined: May 25, 2001
Posts: 388
posted
Jan 24, 2003 03:52:00
0
hi
here
)
k
dar
Ranch Hand
Joined: Nov 08, 2001
Posts: 45
posted
Jan 24, 2003 07:38:00
0
Thank you for the reply.
POI is only for Word 97.
But I need it for Word 2000; it would be better if it could convert all MS Word formats: Word 97, Word 2000, Word XP.
Originally posted by karl koch:
hi
here
)
k
Michael Crutcher
Ranch Hand
Joined: Feb 18, 2002
Posts: 48
posted
Jan 24, 2003 13:56:00
0
Well I think you're out of luck. Word is a proprietary binary format. You could join the POI team and contribute to the effort but there is no quick fix. Figuring out the intricacies of a proprietary file format is no trivial matter; until you or POI invest time into the new file formats you really don't have an option.
I don't know of any projects that are further along than POI.
Michael
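For what it's worth, text extraction from Word 97 files with POI's HWPF scratchpad looks roughly like the sketch below. API names are quoted from memory and may differ between POI releases, the file names are invented, and this only yields plain paragraphs wrapped in minimal HTML, not a faithful conversion.

    import java.io.FileInputStream;
    import java.io.FileWriter;
    import org.apache.poi.hwpf.extractor.WordExtractor;

    public class DocToHtml {
        public static void main(String[] args) throws Exception {
            FileInputStream in = new FileInputStream("AB.doc");
            WordExtractor extractor = new WordExtractor(in);

            // Wrap each extracted paragraph in a <p> element.
            StringBuilder html = new StringBuilder("<html><body>");
            for (String paragraph : extractor.getParagraphText()) {
                html.append("<p>").append(paragraph.trim()).append("</p>");
            }
            html.append("</body></html>");

            FileWriter out = new FileWriter("AB.html");
            out.write(html.toString());
            out.close();
            in.close();
        }
    }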
Robert Paris
Ranch Hand
Joined: Jul 28, 2002
Posts: 585
posted
Jan 24, 2003 15:38:00
0
Actually, it's gonna be a headache, BUT...you're in luck.You can use JaWin or Jacob (
) to work with Java and COM. And in COM, Microsoft has an API to get a word doc (including word 2000) as an XML document. You can do that through COM in Java. Hang on, I think I still have some code dealing with word from java (it's not entirely complete, but should help you get started). It uses Jacob.
/* * Class To Run The Test */
import com.jacob.com.*;
import com.jacob.activeX.*;

public class RunWord {
    public static void main(String[] args) {
        Word wordApp = new Word();
        try {
            Documents docs = wordApp.getDocuments();
            Document doc = docs.open("D:\\Test.doc");
            //GET VIEW
            View wordView = wordApp.getActiveWindow().getView();
            //SET THE VIEW PROPERTIES
            wordView.setShowAll(false);
            wordView.setShowParagraphs(false);
            doc.close(Document.DO_NOT_SAVE);
        } catch (Exception e) {
            e.printStackTrace();
        }
        wordApp.quit(new Variant[] {});
        System.exit(0);
    }
}

/* * Word.java * */
import com.jacob.activeX.*;
import com.jacob.com.*;

public class Word extends com.jacob.activeX.ActiveXComponent {
    /** Creates new Word Object with progID */
    public Word() {
        //INSTANTIATE FROM PARENT CLASS
        super("Word.Application");
    }

    /** GETDOCUMENTS **
     * Gets the Property Documents. this is an Object that has a number of capabilities,
     * Such as Open, and also holds the Document collection. */
    public Documents getDocuments() {
        return new Documents(this.getProperty("Documents").toDispatch());
    }

    /** GETACTIVEWINDOW **
     * Returns the ActiveWindow for this Application. */
    public ActiveWindow getActiveWindow() {
        return new ActiveWindow(this.getProperty("ActiveWindow").toDispatch());
    }

    /** QUIT **
     * Quit the Application. You can pass it options, but i'm not
     * sure if Word even recognizes options for quitting. */
    public void quit(Variant[] options) {
        this.invoke("Quit", options);
    }
}

/* * Documents.java * */
import com.jacob.com.*;

public class Documents extends com.jacob.com.Dispatch {
    /** Creates new Documents */
    public Documents() { super(); }

    public Documents(Dispatch dispatchDocuments) {
        //TAKE OVER IDispatch POINTER
        m_pDispatch = dispatchDocuments.m_pDispatch;
        //NULL OUT THE INPUT'S POINTER
        dispatchDocuments.m_pDispatch = 0;
    }

    //expression.Open(FileName,              //Variant, String
    //                ConfirmConversions,    //Variant, Boolean
    //                ReadOnly,              //Variant, Boolean
    //                AddToRecentFiles,      //Variant, Boolean
    //                PasswordDocument,      //Variant, String(?)
    //                PasswordTemplate,      //Variant, String(?)
    //                Revert,                //Variant, Boolean
    //                WritePasswordDocument, //Variant, String(?)
    //                WritePasswordTemplate, //Variant, String(?)
    //                Format,                //Variant, int: wdOpenFormatAllWord,
    //                    wdOpenFormatAuto, wdOpenFormatDocument,
    //                    wdOpenFormatEncodedText, wdOpenFormatRTF,
    //                    wdOpenFormatTemplate, wdOpenFormatText,
    //                    wdOpenFormatUnicodeText, or wdOpenFormatWebPages.
    //                    The default value is wdOpenFormatAuto.
    //                Encoding,              //Variant, Long: Can be any valid MsoEncoding constant
    //                Visible                //Variant, Boolean
    //               )
    /** OPEN **
     * Open a document. Returns class: Document. I
     * am considering using that document to be passed as well. This means
     * I'd give a different constructor for that class, and you'd set it with
     * a filename and then this would see if it is already open and if not, open
     * it. */
    public Document open(String documentFilename) {
        return new Document(Dispatch.call(this, "Open", documentFilename).toDispatch());
    }
}

/* * Document.java * */
import com.jacob.com.*;

public class Document extends com.jacob.com.Dispatch {
    public final static Integer DO_NOT_SAVE = new Integer(0);
    public final static Integer SAVE = new Integer(-1);
    public final static Integer PROMPT_SAVE = new Integer(-2);

    /** Creates new Document */
    public Document() {
        //INSTANTIATE DOCUMENT FROM PARENT CLASS
        super("Word.Document");
    }

    /**
     * This is the constructor to be used if you need to return a Document
     * object from another object's method call.
     */
    public Document(Dispatch dispatchDocument) {
        //TAKE OVER THE IDispatch POINTER
        this.m_pDispatch = dispatchDocument.m_pDispatch;
        //NULL OUT THE INPUT POINTER
        dispatchDocument.m_pDispatch = 0;
    }

    /**
     * Close Word Document with Options for Saving or not Saving.
     */
    public void close(Integer closeOption) {
        Dispatch.call(this, "Close", closeOption);
    }
}

/* * View.java * */
import com.jacob.com.*;

public class View extends com.jacob.com.Dispatch {
    //NO IDEA WHY THIS IS, BUT SHOWALL = TRUE IS -1, AND FALSE IS 0
    private final static int SHOW_ALL_TRUE = -1;
    private final static int SHOW_ALL_FALSE = 0;

    /** Creates new View */
    public View() { super(); }

    /**
     * Constructor to handle if this is to be returned from another
     * object's method. It takes the Pointer to IDispatch and uses it to point
     * to the new object to be returned.
     */
    public View(Dispatch dispatchView) {
        //TAKE OVER THE IDispatch POINTER
        this.m_pDispatch = dispatchView.m_pDispatch;
        //NULL OUT THE POINTER IN THE PASSED IN DISPATCH OBJECT
        dispatchView.m_pDispatch = 0;
    }

    /** SETSHOWALL **
     * Sets the property (boolean) ShowAll. */
    public void setShowAll(boolean showAll) {
        Dispatch.put(this, "ShowAll", new Variant(showAll));
    }

    /** ISSHOWALL **
     * Returns Boolean of whether or not this is set to show all. */
    public boolean isShowAll() throws InvalidViewBooleanException {
        return getBooleanValue(Dispatch.get(this, "ShowAll"));
    }

    /** SETSHOWPARAGRAPHS **
     * Sets the property (boolean) ShowParagraphs. */
    public void setShowParagraphs(boolean showParagraphs) {
        Dispatch.put(this, "ShowParagraphs", new Boolean(showParagraphs));
    }

    /** ISSHOWPARAGRAPHS **
     * Returns Boolean of whether or not this is set to show paragraphs. */
    public boolean isShowParagraphs() throws InvalidViewBooleanException {
        return getBooleanValue(Dispatch.get(this, "ShowParagraphs"));
    }

    private boolean getBooleanValue(Variant variantBoolean) throws InvalidViewBooleanException {
        //int TO HOLD VARIANT RETURNED INTEGER
        int intVariant;
        try {
            //MAKE IT AN INTEGER AND GET ITS int VALUE
            intVariant = new Integer(variantBoolean.toString()).intValue();
        } catch (Exception ibe) {
            throw new InvalidViewBooleanException(
                "The Variant is not a valid integer for a View Boolean. " +
                "It must be either -1 (True) or 0 (False).");
        }
        //RETURN IF IS SHOW ALL
        return (intVariant == View.SHOW_ALL_TRUE);
    }
}

/* * ActiveWindow.java * */
import com.jacob.com.*;

public class ActiveWindow extends com.jacob.com.Dispatch {
    /** Creates new ActiveWindow */
    public ActiveWindow() { }

    public ActiveWindow(Dispatch dispatchActiveWindow) {
        //TAKE OVER THE POINTER TO IDispatch
        this.m_pDispatch = dispatchActiveWindow.m_pDispatch;
        //NULL OUT THE POINTER IN PASSED IN DISPATCH OBJECT
        dispatchActiveWindow.m_pDispatch = 0;
    }

    /** GETVIEW **
     * Returns the View Object from the ActiveWindow. The View Object will allow you
     * to show or hide all or any part of a word document. This View Object is Read-only
     * but in Excel it is not. */
    public View getView() {
        return new View(Dispatch.get(this, "View").toDispatch());
    }
}

/* * InvalidBooleanException.java * */
public class InvalidViewBooleanException extends java.lang.Exception {
    /** Creates new InvalidBooleanException */
    public InvalidViewBooleanException() { super(); }

    public InvalidViewBooleanException(String msg) { super(msg); }
}
That should all work (assuming you've got jacob.jar in your classpath)! All I ask is that when you finish the work I started here, please post it on this site so the rest of us can benefit too! Thanks! Let me know if this is a help!
Robert
[Edited to break up a couple really long lines that were distorting the rest of the page - Jim]
[ January 24, 2003: Message edited by: Jim Yingst ]
Robert Paris
Ranch Hand
Joined: Jul 28, 2002
Posts: 585
posted
Jan 24, 2003 15:53:00
0
One more thing:
You can figure out how to turn word docs into XML by looking at the VB source code to this free program:
Don't worry, VB is a cinch to figure out. They have the code for download. Just convert to Java! (All I was basically doing was creating Java wrappers for their COM counterpoints. It wasn't necessary but
ALOT
easier to work with!)
My Code, after making the classes, looks like this:
Word wordApp = new Word();
Documents docs = wordApp.getDocuments();
Document doc = docs.open("D:\\JavaProjects\\Test.doc");
//GET VIEW
View wordView = wordApp.getActiveWindow().getView();
//SET THE VIEW PROPERTIES
wordView.setShowAll(false);
wordView.setShowParagraphs(false);
doc.close(Document.DO_NOT_SAVE);
wordApp.quit(new Variant[] {});
Nice clean Java code. The same thing using regular Java-COM code is not so clean and nice as it has dispatch and pointer calls all over the place.
Anyways, good luck and let us know how it goes!
I agree. Here's the link:
Edit: Code is here on GitHub
Calling functions in Python can be expensive. Consider this example: there are two statements that are being timed, the first one calls a function that returns an integer while the second one calls a function that returns the result of a second function call which returns an integer.
tom@toms ~$ python -m timeit -n 10000000 -s "def get_n(): return 1" "get_n()"
10000000 loops, best of 3: 0.145 usec per loop
tom@toms ~$ time python -m timeit -n 10000000 -s "get_n = lambda: 1; get_r_n = lambda: get_n()" "get_r_n()"
10000000 loops, best of 3: 0.335 usec per loop
The additional function call doubled the program execution time, despite not affecting the output of the function in any way. This got me thinking: how hard would it be to create a Python module that would inline functions, removing the calling overhead from certain functions you specify?
As it turns out, not that hard. Note: This is simply an experiment to see what’s possible, don’t even think about using this in real Python code (there are some serious limitations explained at the end). Check this out:
from inliner import inline

@inline
def add_stuff(x, y):
    return x + y

def call_func_args(num):
    return add_stuff(1, num)

import dis
dis.dis(call_func_args)
# Prints:
#  0 LOAD_CONST    1 (1)
#  3 LOAD_FAST     0 (num)
#  6 BINARY_ADD
#  7 RETURN_VALUE
The dis function prints out the bytecode operations for a Python function, which shows that the call_func_args function has been modified so that the add_stuff() call never takes place and instead the body of the add_stuff function has been inlined inside the call_func_args function. I’ve put the code on GitHub, have a look if you like. Below I will explain how it works, for those interested.
Diving in: Import hooks and the AST module
Python is an interpreted language, when you run a Python program the source code is parsed into an Abstract Syntax Tree which is then ‘compiled’ into bytecode. We need a way of modifying the AST of an imported module before it gets compiled, and as luck would have it Python provides powerful hooks into the import mechanism that allow you to write importers that grab code from the internet or restrict packages from being imported. Getting our claws into the import mechanism is as simple as this:
import sys, imp

class Loader(object):
    def __init__(self, module):
        self.module = module

    def load_module(self, fullname):
        return self.module

class Importer(object):
    def find_module(self, fullname, path):
        file, pathname, description = imp.find_module(
            fullname.split(".")[-1], path)
        module_contents = file.read()
        # We can now mess around with the module_contents
        # and produce a module object
        return Loader(make_module(module_contents))

sys.meta_path.append(Importer())
Now whenever anything is imported our find_module() method will be called. This should return an object with a load_module() function, which returns the final module.
Modifying the AST
Python provides an AST module to modify Python AST trees. So inside our
find_module function we can get the source code of the module we are importing, parse it into an AST representation and then modify it before compiling it. You can see this in action here.
First we need to find all functions that are wrapped by our inline decorator, which is pretty simple to do. The AST module provides a NodeVisitor and a NodeTransformer class you can subclass. For each different type of AST node a visit_NAME method will be called, which you can then choose to modify or pass along untouched. The InlineMethodLocator runs through all the function definitions in a tree and stores any that are wrapped by our inline decorator:
class InlineMethodLocator(ast.NodeVisitor):
    def __init__(self):
        self.functions = {}

    def visit_FunctionDef(self, node):
        if any(filter(lambda d: d.id == "inline", node.decorator_list)):
            func_name = utils.getFunctionName(node)
            self.functions[func_name] = node
The next step after we have identified the functions we want to inline is to find where they are called, and then inline them. To do this we need to look for all Call nodes in our modules AST tree:
class FunctionInliner(ast.NodeTransformer):
    def __init__(self, functions_to_inline):
        self.inline_funcs = functions_to_inline

    def visit_Call(self, node):
        func = node.func
        func_name = utils.getFunctionName(func)
        if func_name in self.inline_funcs:
            func_to_inline = self.inline_funcs[func_name]
            transformer = transformers.getFunctionHandler(func_to_inline)
            if transformer is not None:
                node = transformer.inline(node, func_to_inline)
        return node
This visits all call objects and if we are calling a function we want to inline then we go grab a transformer object which will be responsible for the actual inlining. I’ve only written one transformer so far that works on simple functions (functions with 1 statement), but more can be added fairly easily. The simple function transformer simply returns the contents of the function and maps the functions values to the values of the calling function:
class SimpleFunctionHandler(BaseFunctionHandler):
    def inline(self, node, func_to_inline):
        # Its a simple function we have here. That means it is one statement and
        # we can simply replace the call with the inlined functions body
        body = func_to_inline.body[0]
        if isinstance(body, ast.Return):
            body = body.value
        return self.replace_params_with_objects(body, func_to_inline, node)
Limitations
There are some serious limitations with this code:
- Inlined functions must have a unique name: The AST provides us with no type information (as Python is dynamically typed), only the name of the function we are calling. That means without writing code that attempts to deduce the type of a class instance (no mean feat) then each function call must have a unique name.
- Only inlines functions in the same module: To keep things simple only calls in the same module are inlined.
- Inlined class functions can’t reference any double underscore attributes: Accessing self.__attr is about as ‘private’ as you can get in Python. The attribute lookup is prefixed with the class name, which we can’t easily detect while inlining.
- Everything will break: Python is very dynamic, you may wish to replace functions at runtime. Obviously if the functions have been inlined then this won’t have any effect. | https://tomforb.es/automatically-inline-python-function-calls | CC-MAIN-2019-04 | refinedweb | 1,005 | 54.83 |
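To see why the double-underscore case is hard, here is the name mangling in action; the class and attribute names are just for illustration:

class Widget:
    def __init__(self):
        self.__secret = 42      # actually stored as _Widget__secret

    def reveal(self):
        return self.__secret    # compiled as self._Widget__secret

w = Widget()
print(w._Widget__secret)        # 42 - the mangled name works
# print(w.__secret)             # AttributeError - the plain name does not

The mangled name depends on the enclosing class, which is exactly the information an inliner working at the call site doesn't have.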
Implementing an Array of Buttons: The Shuffle Game
Introduction
This article came out of the result of a forum of discussion on CodeGuru.com. When we tried to implement an array of Windows buttons, we did not find any solution using ClassWizard. In VB, this can be achieved by copying a single button. We could not find any method; hence we discussed in the following forum: "Array of buttons in VC++."
So, we tried to have a common handler and control in the form of this game.
To Start
- Select New->Project from the File menu. A dialog window will be displayed. Select "MFC AppWizard(exe)" and give the project name as "Shuffle".
- When you give the OK, it will ask for the type of application. Select "Dialog based" in Step 1.
- Treat all other steps as default and click Finish. This will create a dialog-based application.
- Now, you see two buttons, namely "Ok", "Cancel", and a text "TODO: Place dialog controls here." Select and remove the Cancel button and the text "TODO..."
- Change the caption of the Ok button to &Exit.
- Add one more button; right-click on the button to go to Properties. Change the caption to &About and ID to IDC_About.
Compile the project by pressing F7; then, to execute it, press Ctrl+F5. Make sure that there are no errors.
- Double-click the About button and click Ok to add your code as follows:
void CShuffleDlg::OnAbout()
{
    // TODO: Add your control notification handler code here
    //my code starts here
    CAboutDlg AboutDlg;
    AboutDlg.DoModal();
    //My code ends here
}
void CShuffleDlg::DoDataExchange(CDataExchange* pDX) { CDialog::DoDataExchange(pDX); //{{AFX_DATA_MAP(CShuffleDlg) //}}AFX_DATA_MAP //my code starts here: DDX_Control(pDX, IDC_But1, m_b[0][0]); DDX_Control(pDX, IDC_But2, m_b[0][1]); DDX_Control(pDX, IDC_But3, m_b[0][2]); DDX_Control(pDX, IDC_But4, m_b[0][3]); DDX_Control(pDX, IDC_But5, m_b[1][0]); DDX_Control(pDX, IDC_But6, m_b[1][1]); DDX_Control(pDX, IDC_But7, m_b[1][2]); DDX_Control(pDX, IDC_But8, m_b[1][3]); DDX_Control(pDX, IDC_But9, m_b[2][0]); DDX_Control(pDX, IDC_But10, m_b[2][1]); DDX_Control(pDX, IDC_But11, m_b[2][2]); DDX_Control(pDX, IDC_But12, m_b[2][3]); DDX_Control(pDX, IDC_But13, m_b[3][0]); DDX_Control(pDX, IDC_But14, m_b[3][1]); DDX_Control(pDX, IDC_But15, m_b[3][2]); DDX_Control(pDX, IDC_But16, m_b[3][3]); //my code ends here }
This will assign the control variable for each button in the array. Now, we have created an array of 16 buttons. The thing we have to do now is to control these buttons using a program.
To play the game, we have to change the captions of the buttons to random numbers from 1 to 15. To create random numbers, we have to write a function, name it Randomize( ), and call it from OnInitDialog( ).
For that, in the ClassView tab, right-click CShuffleDlg, select add member variable; name the type as int and the variable name as m_x. Similarly, declare another variable, m_y, of the same type. Later, we use these variables to hold the position of the slider. Right-click the CShuffleDlg and select add member function; give the type as int and the function declaration as Randomize( ).
Write the following code in the function:
int CShuffleDlg::Randomize()
{
    int i=0,xx,yy;
    int m[4][4];
    CString temp;
    srand(time(NULL));
    for(xx=0;xx<=3;xx++)
        for(yy=0;yy<=3;yy++)
            m[xx][yy]=0;
    while(i<16)
    {
        xx=rand()%4;
        yy=rand()%4;
        if(m[xx][yy]!=0) continue;
        m[xx][yy]=1;
        temp.Format("%d",i);
        m_b[xx][yy].SetWindowText(temp);
        if(i==0)
        {
            m_x=xx;
            m_y=yy;
        }
        i++;
    }
    m_b[m_x][m_y].SetButtonStyle(8);
    return 1;
}
As explained above, to make the numbers randomized, we used a rand( ) function. rand( ) will generate a random integer; we made it range between 0 to 3 to select a random row and column using %4. Initial to this, we set the numbers of all the boxes to zero. Taking the numbers from 1 to 15, go on filling in the randomly selected boxes if they are not filled. Now, hide the remaining button using m_But[m_x][m_y].SetButtonStyle(8). This will set the game.
To avoid repetition of the order every time you run the program, make the seed of the random time(NULL). srand(time(NULL)) makes the random numbers dependent of time so that every time you run the program, the sequence will be different.
Now, add the following code to the OnInitDialog( ) function to randomize the number initially.
BOOL CShuffleDlg::OnInitDialog()
{
    //system generated codes
    // TODO: Add extra initialization here
    Randomize();
}
Change the caption of the Group box to "Click On the Button to Move:".
If you compile and run, you should get the window in Figure 3:
Now, let us start the game. To move the button each time you click a button, you have to write a common handler for all the buttons. Go to the position where you find "BEGIN_MESSAGE_MAP(CShuffleDlg, CDialog)" (do not confuse between CAboutDlg and CShuffleDlg), and then add following code:
BEGIN_MESSAGE_MAP(CShuffleDlg, CDialog)
    //{{AFX_MSG_MAP(CShuffleDlg)
    ON_WM_SYSCOMMAND()
    ON_WM_PAINT()
    ON_WM_QUERYDRAGICON()
    ON_BN_CLICKED(IDC_About, OnAbout)
    //}}AFX_MSG_MAP
    //my code starts here:
    ON_COMMAND_RANGE(IDC_But1,IDC_But16,MoveButton)
    //my code ends here:
END_MESSAGE_MAP()
If we press any button, the "MoveButton" handler will be called with a parameter as the ID of the corresponding button. Now, declare the MoveButton(int btn) function of type BOOL by right-clicking the Class CShuffleDlg on the ClassView tab.
To find the row and column of the pressed button, subtract the ID of the first button from the ID of the button that was pressed and call the result btn. Now, btn%4 will give the column and btn/4 will give the row.
Whenever we press a button, if the adjacent button is empty, that button should be placed on that position. For that to happen, we have to check whether the position of the empty button is adjacent. If it is, we have to change the caption of the empty button to the caption of the button pressed, show that button, and hide the button that was pressed. This will give the effect as if the desired button is moved. In m_x and m_y, store the present position of the empty button.
Add the following code to the MoveButton(int btn) function:
BOOL CShuffleDlg::MoveButton(int btn)
{
    CString no,temp;
    int bx,by=0;
    btn = btn - IDC_But1;
    by=(btn)%4;
    bx=(btn)/4;
    if(((m_x==bx) && ((m_y-by)==1)) || ((m_x==bx) && ((m_y-by)==-1)) ||
       ((m_y==by) && ((m_x-bx)==1)) || ((m_y==by) && ((m_x-bx)==-1)))
    {
        m_b[m_x][m_y].SetButtonStyle(1,TRUE);
        m_b[bx][by].GetWindowText(temp);
        no.Format("0");
        m_b[bx][by].SetWindowText(no);
        m_b[m_x][m_y].SetWindowText(temp);
        m_b[bx][by].SetButtonStyle(8,TRUE);
        m_x=bx;
        m_y=by;
    }
    return (TRUE);
}
Now the "Shuffle Game" is ready. Play and enjoy it.
This is a combined article by me and my friend HarshaPerla. We discussed this topic and put our efforts to bring this article to life. We are studying MSc Electronics in Mangalore University, Karnataka, India.
Visit our personal homepages:
There are no comments yet. Be the first to comment! | http://www.codeguru.com/cpp/cpp/cpp_mfc/general/article.php/c8041/Implementing-an-Array-of-Buttons-The-Shuffle-Game.htm | CC-MAIN-2015-35 | refinedweb | 1,170 | 63.49 |
Using Calculations to copy dates from one process to anotherkeithr Jan 6, 2012 12:13 AM
When creating a change from within a service request, I would like the ability to copy the “implementation date” specified on the request to the change request.
Have tried this using the following calculation:
import System
static def GetAttributeValue(Change):
Value = Change.RequestChanges.Incident._TargetDate
return Value
The Calculation has been added to the Change.Target date on the automatic “create change” action within the request process but the copy does not initiate.
The attribute is a date attribute and is not selectable if I try to use Value Types as you might with a string. If anyone has successfully copied a date from one process to another, would appreciate knowing how this was achieved.
1. Re: Using Calculations to copy dates from one process to anotherStu McNeill Jan 6, 2012 6:01 AM (in response to keithr)
Hi Keith,
This actually came up yesterday in another thread and should give you the answer: Value Types & Dates
2. Re: Using Calculations to copy dates from one process to anotherkeithr Jan 7, 2012 7:59 AM (in response to Stu McNeill)
Hi Stu,
Thanks for pointing me in the right direction. I ran into errors but presume that it is something I am updating incorrectly in my calculation as it is based on the Incident.change object. I tried a combination of updates to my calculation (which is actually this, since the customer uses the incident domain for their request process)
import System
static def GetAttributeValue(Change):
Value = Change.IncidentChanges.Incident._TargetDate
return Value
I tried replacing "Parent" or "ParentLink" with Change.IncidentChanges but just ran into errors. Any suggestions greatly appreciated.
3. Re: Using Calculations to copy dates from one process to anotherStu McNeill Jan 9, 2012 8:40 AM (in response to keithr)
Hi Keith,
The important point of the other thread is that you can't just reference the collection, you need to use a For loop to get the attached Incident. In your case IncidentChanges is a collection that could have multiple Incidents attached. You need to decide which incident you want to get the Date from. If there will only every be one incident attached you can just do the same trick I suggested in the other thread of looping through and setting the incident to a variable.
I hope that helps.
4. Re: Using Calculations to copy dates from one process to anotherSDESK Jan 15, 2012 5:36 AM (in response to Stu McNeill)
Thanks Stu,
I have had a play with the loop calculation but ran into some errors. Can you tell me what "ParentLink" relates to in your post. I am presuming "Parent" in my case would be "IncidentChanges" but just not sure what to substitue "ParentLink" for:
Example from previous post:
Parent = null
for ParentLink in Request.Parents:
Parent = ParentLink.Parent
if Parent == null: return null
else: return Parent._UserStartDate
5. Re: Using Calculations to copy dates from one process to anotherStu McNeill Jan 18, 2012 9:53 AM (in response to SDESK)
Hi,
I was planning on writing a full document explaining loop techniques but haven't had a chance yet. Your latest calculation looks like it should work so it depends what error you're getting as to what the problem is now.
Basically "ParentLink" is just a name you give for when you reference each item in the loop. You can call it whatever you want:
for Stu in Incident.Notes:
Value = Stu.Title
In my case I called it ParentLink because each item in the Parents collection is the linking object record so it seemed a sensible name.
6. Re: Using Calculations to copy dates from one process to anotherStu McNeill Jan 18, 2012 9:55 AM (in response to SDESK)
Sorry I just realised the calculation you put into your last post wasn't what you're trying to use.
In your case you don't need to rename ParentLink but you could call it "ChangeLink" or something similar so it makes more sense, then within the loop where I reference "ParentLink.Parent" you'd change the ".Parent" to ".Change" or ".Incident" to get to the record on the other end of the link. | https://community.ivanti.com/message/72282?tstart=0 | CC-MAIN-2017-09 | refinedweb | 713 | 58.92 |
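Putting this advice together, the finished calculation would presumably look something like the sketch below. It is untested, and the attribute names (IncidentChanges, Incident, _TargetDate) are taken from the earlier posts in this thread rather than verified against a live system:

import System
static def GetAttributeValue(Change):
	Incident = null
	# Loop over the linking records and keep the attached Incident
	for ChangeLink in Change.IncidentChanges:
		Incident = ChangeLink.Incident
	if Incident == null: return null
	else: return Incident._TargetDate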
The modern art of computer programming combines a special kind of human personality with a special set of tools to produce a rather ghostly product -- software -- that other human beings find useful. Computer programmers are detail-oriented folks who are able to deal with the difficulties of computers. Computers are exacting in their demands and don't tolerate deviation from these demands at all. No doubt about it, computers are difficult to program no matter what your personality, and many tools have been created to assist you in making the task easier.
In UNIX® and Linux®, everything is a file. You could say that the very sine qua non of UNIX and Linux programming is writing code to deal with files. Many types of files make up the system, but object files have a special design that provides for flexible, multipurpose use.
Object files are roadmaps that contain mnemonic symbols with attached addresses and values. The symbols are used for naming various sections of code and data, both initialized and uninitialized. They are also used for locating embedded debugging information and, just like the semantic Web, are fully readable by programs.
Tools of the trade
The tools of the computer programming trade begin with a code editor, such as vi or Emacs, with which you can type and edit the instructions you want the computer to follow to carry out the required tasks, and end with the compilers and linkers that produce the machine code that actually accomplishes these goals.
High-level tools, known as Integrated Debugging Environments (IDEs), integrate the functionality of individual tools with a common look and feel. An IDE can vastly blur the lines between editor, compiler, linker, and debugger. So for the purpose of studying and learning the system with greater depth, it's often advisable to work with the tools separately before working with the integrated suite. (Note: IDEs are often called Integrated Development Environments, too.)
The compiler transforms the text that you create in the code editor into an object file. The object file was originally known as an intermediate representation of code, because it served as the input to link editors (in other words, linkers) that finish the task and produce an executable program as output.
The transformation process that proceeds from code to executable is well-defined and automated, and object files are an integral link in the chain. During the transformation process, the object files serve as a map to the link editors, enabling them to resolve the symbols and stitch together the various code and data sections into a unified whole.
History
Many notable object file formats exist in the world of computer programming. The DOS family includes the COM, OBJ, and EXE formats. UNIX and Linux use a.out, COFF, and ELF. Microsoft® Windows® uses the portable executable (PE) format and Macintosh uses PEF, Mach-O, and others.
Originally, each type of computer had its own unique object file format but, with the advent of UNIX and other operating systems designed to be portable among different hardware platforms, some common file formats ascended to the level of a common standard. Among these are the a.out, COFF, and ELF formats.
Understanding object files requires a set of tools that can read the various portions of the object file and display them in a more readable format. This article discusses some of the more important aspects of those tools. But first, you must create a workbench and put a victim -- er, a patient -- on it.
The workbench
Fire up an xterm session, and let's begin to explore object files by creating a clean workbench. The following commands create a useful place to play with object files:
cd
mkdir src
cd src
mkdir hw
cd hw
Then, by using your favorite code editor, type the program shown in Listing 1 in the $HOME/src/hw directory, and call it hw.c.
Listing 1. The hw.c program
#include <stdio.h>

int main(void)
{
    printf("Hello World!\n");
    return 0;
}
This simple "Hello World" program serves as a patient to study with the various tools available in the UNIX arsenal. Instead of taking any shortcuts to creating the executable (and there are many shortcuts), you'll take your time to build and examine just the object file output.
File formats
The normal output of a C compiler is assembler code for whatever processor you specify as the target. The assembler code is input to the assembler, which by default produces the grandfather of all object files, the a.out file. The name itself stands for Assembler Output. To create the a.out file, type the following command in the xterm window:
cc hw.c
Note: If you experience any errors or the a.out file wasn't created, you might need to examine your system or source file (hw.c) for errors. Check also to see whether cc is defined to run your C/C++ compiler.
Modern C compilers combine the compile and assemble steps into one step. You can invoke switches to see just the assembler output of the C compiler. By typing the following command, you can see what the assembler output from the C compiler looks like:
cc -S hw.c
This command has generated a new file -- hw.s -- that contains the assembler input text that you typically would not have seen, because the compiler defaults to producing the a.out file. As expected, the UNIX assembler program can assemble this type of input file to produce the a.out file.
UNIX-specific tools
Assuming that all went well with the compile and you have an a.out file in the directory, let's examine it. Among the list of useful tools for examining object files, the following set exists:
- nm: Lists symbols from object files.
- objdump: Displays detailed information from object files.
- readelf: Displays information about ELF object files.
The first tool on the list is nm, which lists the symbols in an object file. If you type the nm command, you'll notice that it defaults to looking for a file named a.out. If the file isn't found, the tool complains. If, however, the tool did find the a.out file that your compiler created, it presents a listing similar to Listing 2.
Listing 2. Output of the nm command
08049594 A __bss_start 080482e4 t call_gmon_start 08049594 b completed.4463 08049498 d __CTOR_END__ 08049494 d __CTOR_LIST__ 08049588 D __data_start 08049588 W data_start 0804842c t __do_global_ctors_aux 0804830c t __do_global_dtors_aux 0804958c D __dso_handle 080494a0 d __DTOR_END__ 0804949c d __DTOR_LIST__ 080494a8 d _DYNAMIC 08049594 A _edata 08049598 A _end 08048458 T _fini 08049494 a __fini_array_end 08049494 a __fini_array_start 08048478 R _fp_hw 0804833b t frame_dummy 08048490 r __FRAME_END__ 08049574 d _GLOBAL_OFFSET_TABLE_ w __gmon_start__ 08048308 T __i686.get_pc_thunk.bx 08048278 T _init 08049494 a __init_array_end 08049494 a __init_array_start 0804847c R _IO_stdin_used 080494a4 d __JCR_END__ 080494a4 d __JCR_LIST__ w _Jv_RegisterClasses 080483e1 T __libc_csu_fini 08048390 T __libc_csu_init U __libc_start_main@@GLIBC_2.0 08048360 T main 08049590 d p.4462 U puts@@GLIBC_2.0 080482c0 T _start
The sections that contain executable code are known as text sections or segments. Likewise, there are data sections or segments for containing non-executable information or data. Another type of section, known by the BSS designation, contains blocks started by symbol data.
For each symbol that the nm command lists, the symbol's value in hexadecimal (by default) and the symbol type with a coded character precede the symbol. Various codes that you commonly see include A for absolute, which means that the value will not change by further linking; B for a symbol found in the BSS section; or C for common symbols that reference uninitialized data.

Object files contain many different parts that are divided into sections. Sections can contain executable code, symbol names, initialized data values, and many other types of data. For detailed information on all of these types of data, consider reading the UNIX man page on nm, where each type is described by the character codes shown in the output of the command.
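If you want to tally those symbol type codes programmatically, a small script can do it by parsing the nm output. This is only a sketch, and it assumes nm is on the PATH and that a.out sits in the current directory:

import subprocess
from collections import Counter

# Count how many symbols of each type code (T, t, U, D, b, ...) the file has.
output = subprocess.run(["nm", "a.out"], capture_output=True, text=True).stdout

counts = Counter()
for line in output.splitlines():
    parts = line.split()
    if len(parts) >= 2:
        counts[parts[-2]] += 1   # the type code precedes the symbol name

for code, n in sorted(counts.items()):
    print(code, n)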
Details, details . . .
Even a simple Hello World program contains a vast array of details when it reaches the object file stage. The nm program is good for listing symbols and their types and values but, for examining in greater detail the contents of those named sections of the object file, more powerful tools are necessary.
Two of these more powerful tools are the objdump and readelf programs. By typing the following command, you can see an assembly listing of every section in the object file that contains executable code. Isn't it amazing how much code the compiler actually generates for such a tiny program?
objdump -d a.out
This command produces the output you see in Listing 3. Each section of executable code is run when a particular event becomes necessary, including events like the initialization of a library and the main starting entry point of the program itself.
For a programmer who is fascinated by the low-level details of programming, this is a powerful tool for studying the output of compilers and assemblers. Details, such as those shown in this code, reveal a lot about how the native processor itself operates. When studied hand-in-hand with the processor manufacturer's technical documentation, you can glean valuable insights into how such things work to a greater degree because of the clarity of output from a functioning program.
Likewise, the readelf program can list the contents of the object file with similar lucidity. You can see this by typing the following command:
readelf -all a.out
This command produces the output shown in Listing 4. The ELF header shows a nice summary of all the section entries in the file. Before enumerating the contents of those headers, you can see how many there are. This information can be useful when exploring a rather large object file.
As you can see from this output, a huge amount of useful detail resides in the simple a.out Hello World file -- version information, histograms, multiple tables of various symbol types, and so on. Yes, one can spend a great deal of time learning about executable programs by exploring object files with just the few tools presented here.
In addition to all these sections, the compiler can place debugging information in the object files, and such information can be displayed as well. Type the following command and take some time to see what the compiler is telling you (if you're a debugging program, that is):
readelf --debug-dump a.out | less
This command produces the output shown in Listing 5. Debugging tools, such as GDB, read in this debugging information, and you can get the tools to display more descriptive labels (for example) than raw address values when disassembling code while it's running under the debugger.
Executable files are object files
In the UNIX world, executable files are object files, and you can examine them as you did the a.out file. It is a useful exercise to change to the /bin or /local/bin directory and run nm, objdump, and readelf over some of your most commonly used commands, such as pwd, ps, cat, or rm. Often when you're writing a program that requires a certain functionality that one of the standard tools has, it's useful to see how those tools actually do their work by simply running objdump -d <command> over it.
If you're so inclined to work on compilers and other language tools, you'll find that time spent studying the various object files that make up your computer's system is time well spent. A UNIX operating system has many layers, and the layers that the tools examining its object files expose are close to the hardware. You can get a real feel for the system in this way.
Conclusion
Exploring object files can greatly deepen your knowledge of the UNIX operating system and provide greater insight into how the software is actually assembled from source code. I encourage you to study the output of the object file tools described in this article by running them over the programs found in the /bin or /local/bin directories on your system and seek out system documentation that your hardware manufacturer provides.
Resources
Learn
- Executable file formats: Visit Wikipedia to learn more about executable file formats.
- Executable and Linking Format (ELF): Visit the University of California-Davis site for more information.
- AIX and UNIX articles: Check out other articles written by William Zimmerly.
- Technology bookstore: Browse this site for books and other technical topics.
Introduction to Part 2
In the second part of a six-part series, we will show how to go from a raw dataset to a suitable model training set using TensorFlow 2 and Gradient notebooks.
The role of data preparation
It is well-known that no machine learning model can solve a problem if the data is insufficient. This means that data must be prepared correctly to be suitable for passing to an ML model.
Too much online content neglects the reality of data preparation. A recommended paradigm for enterprise ML is "do a pilot, not a PoC," which means set up something simple that goes all the way through end-to-end, then go back and refine the details.
Because Gradient makes it easy to deploy models it encourages us to keep model deployment and production in mind right from the start. In data preparation, this means spotting issues that may not otherwise come to light until later – and indeed we'll see an example of this below.
Gradient encourages a way of working where entities are versioned. Using tools like Git repos, YAML files, and others, we may counteract the natural tendency of the data scientist to plunge forward into an ad hoc order of operations that is difficult to reproduce.
The question of versioning not just the models and code, but the data itself, is also important. Such versioning doesn't work in Git except for small data. In this series, the data remain relatively small, so full consideration of this limitation is considered future work.
MovieLens dataset
So while we won't start this series with a 100% typical business scenario such as a petascale data lake containing millions of unstructured raw files in multiple formats that lack a schema (or even a contact person to explain them), we do use data that has been widely used in ML research.
The dataset is especially useful because it contains a number of characteristics typical of real-world enterprise datasets.
The MovieLens dataset has information about movies, users, and the users' ratings of the movies. For new users, we would like to be able to recommend new movies that they are likely to watch and enjoy.
We will first select candidates using the retrieval model, then we will predict the users' ratings with the ranking model.
The top-predicted rated movies will then be our recommendations.
The first couple rows of the initial data are as follows:
{'bucketized_user_age': 25.0,
 'movie_genres': array([ 4, 14]),
 'movie_id': b'709',
 'movie_title': b'Strictly Ballroom (1992)',
 'raw_user_age': 32.0,
 'timestamp': 875654590,
 'user_gender': True,
 'user_id': b'92',
 'user_occupation_label': 5,
 'user_occupation_text': b'entertainment',
 'user_rating': 2.0,
 'user_zip_code': b'80525'}
We see that most of the columns appear self-explanatory, although some are not obvious. Data are divided into movie information, user information, and outcome from watching the movie, e.g., when it was watched, user's rating, etc.
We also see various issues with the data that need to be resolved during preparation:
- Each row is a dictionary, so we will need to extract items or access them correctly to be able to do the prep.
- The target column is not differentiated from the feature columns. We therefore have to use context and our domain knowledge to see that the target column is user_rating, and that none of the other columns are cheat variables. A cheat variable here would be information that is not available until after a user has watched a movie, and could therefore not be used when recommending a movie that a user has not yet watched. (1)
- Some of the data columns are formatted as byte strings (e.g. b'357') rather than regular UTF-8 unicode. This will fail if the model is to be deployed as a REST API and the data are converted to JSON. JSON requires the data to be UTF, and ideally UTF-8. So we will need to account for this (a short sketch follows this list).
- The timestamps are not suitable for ML as given. A column of unique or almost-unique continuous values doesn't add information that the model can use. If, however, we can featurize these values into something recurring like time of day, day of week, season of year, or holiday, they could become much more valuable.
- Because there are timestamps, and user viewing habits can change over time, considerations for dealing with time series apply, and it is better to split the data into training, validation, and testing sets that do not overlap in time. Failing to do this violates the ML assumption that rows in a training set are not correlated and could lead to spurious high model performance or overfitting.
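As a rough illustration of the fixes described in the last two bullets, here is a sketch that decodes byte-string columns and derives recurring time features from the raw timestamp. It operates on a plain Python dict like the example row above rather than inside the tf.data pipeline, and the exact feature choices are illustrative, not the article's own code:

import datetime

def clean_row(x):
    # Decode byte-string columns so they survive a JSON round trip, and
    # turn the raw timestamp into recurring features the model can use.
    ts = datetime.datetime.utcfromtimestamp(int(x['timestamp']))
    def to_text(value):
        return value.decode('utf-8') if isinstance(value, bytes) else value
    return {
        'movie_title': to_text(x['movie_title']),
        'user_id': to_text(x['user_id']),
        'hour_of_day': ts.hour,
        'day_of_week': ts.weekday(),
        'user_rating': float(x['user_rating']),
    }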
Obviously there are many other questions that could be asked and answered:
- Are there missing or other bad/out-of-range values?
- What do the genres mean?
- Are the IDs unique and do they correspond 1:1 with movie titles?
- What does a user gender of True mean?
- Are the user IDs unique?
- What are the occupation labels?
- Why are the ratings floats not integers?
- Do all rows have the same number of columns?
In a real business scenario, we would ideally like to discuss with the data originators what the columns mean and check that everyone is on the same page. We might also use various plots, exploratory data analysis (EDA) tools, and apply any other domain knowledge at our disposal.
In the case here, the MovieLens data is already well-known. And while one shouldn't assume that that makes it sensible or even coherent for the task, we don't need to spend the time doing the extensive data EDA that would be done in a full-scale project.
(1) There is in fact a subtlety here regarding "information available": of course, aside from user_rating, the timestamp recording when the user viewed the movie is not available beforehand either. However, unlike the rating, the time that it is "now" when the recommendation is being made is available, and so knowing something like "this user is more likely to watch certain kinds of movies on weekend evenings" can be used without it being cheating.
Preparing the data
TensorFlow has various modules that assist with data preparation. For MovieLens, the data are not large enough to require other tools such as Spark but of course this is not always the case.
In full-scale production recommender systems, it is common to require further tools to deal with larger-than-memory data. There could realistically be billions of rows available from a large user base spread over time.
The data are loaded into TensorFlow 2 from the movielens/ directory of the official TensorFlow datasets repository. Gradient can connect to other data sources, such as Amazon S3.
import tensorflow_datasets as tfds
...
ratings_raw = tfds.load('movielens/100k-ratings', split='train')
They are returned as a TensorFlow Prefetch dataset type. We can then use a Python lambda function and TensorFlow's .map to select the columns that we will be using to build our model. For this series, that is just movie_title, timestamp, user_id, and user_rating.
ratings = ratings_raw.map(lambda x: {
    'movie_title': x['movie_title'],
    'timestamp': x['timestamp'],
    'user_id': x['user_id'],
    'user_rating': x['user_rating']
})
TensorFlow in-part overlaps with NumPy, so we can use the .concatenate, .min, .max, .batch and .as_numpy_iterator routines to extract the timestamps and create training, validation, and testing sets that do not overlap in time. Nice and simple! 😀
timestamps = np.concatenate(list(ratings.map(lambda x: x['timestamp']).batch(100)))

max_time = timestamps.max()
...
sixtieth_percentile = min_time + 0.6*(max_time - min_time)
...

train = ratings.filter(lambda x: x['timestamp'] <= sixtieth_percentile)
validation = ratings.filter(lambda x: x['timestamp'] > sixtieth_percentile
                            and x['timestamp'] <= eightieth_percentile)
test = ratings.filter(lambda x: x['timestamp'] > eightieth_percentile)
We then shuffle the data within each set, since ML models assume that data are IID (rows are independent and identically distributed). We need to shuffle the data since our data appear to be originally ordered by time.
train = train.shuffle(ntimes_tr) ...
Finally we obtain a list of unique movie titles and user IDs that are needed for the recommender model.
movie_titles = ratings.batch(1_000_000).map(lambda x: x['movie_title'])
...
unique_movie_titles = np.unique(np.concatenate(list(movie_titles)))
...
There are some further adjustments that are mentioned in the notebook to get everything to work together. These include things like getting .len() to work on the FilterDataset type resulting from applying the lambda. These should be self explanatory in the notebook as they basically amount to figuring out how to use TensorFlow to get done what is needed.
Notice also that we assumed that the number of rows resulting from using percentiles of time corresponds roughly to number of rows as percentiles of the data. This is typical of the sort of assumptions that happen in data prep that are not necessarily called out by simply reading the code. (In this case the numbers are consistent, as shown in the notebook.)
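A quick way to sanity-check that assumption is to count the rows that actually land in each split. The variable names follow the snippets above, and this eager-mode count is just a sketch (it iterates each dataset once, which is fine at MovieLens 100k scale):

# Rough check of the 60/20/20 assumption for the time-based splits.
n_train = sum(1 for _ in train)
n_val   = sum(1 for _ in validation)
n_test  = sum(1 for _ in test)

total = n_train + n_val + n_test
print(n_train / total, n_val / total, n_test / total)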
The rest of the input data processing is done within the model, as shown in Part 3.
In Part 3 of the series - Building a TensorFlow model, we will use the TensorFlow Recommenders library to build a basic recommender model and train it on the above prepared data. | https://blog.paperspace.com/end-to-end-recommender-system-part-2-data-preparation/ | CC-MAIN-2022-27 | refinedweb | 1,528 | 62.48 |
One Simple Trick for Speeding up your Python Code with Numpy
Looping over Python arrays, lists, or dictionaries, can be slow. Thus, vectorized operations in Numpy are mapped to highly optimized C code, making them much faster than their standard Python counterparts.
The slow way
The slow way of processing large datasets is by using raw Python. We can demonstrate this with a very simple example.
The code below multiplies the value of 1.0000001 by itself, 5 million times!
import time

start_time = time.time()

num_multiplies = 5000000
data = range(num_multiplies)
number = 1

for i in data:
    number *= 1.0000001

end_time = time.time()

print(number)
print("Run time = {}".format(end_time - start_time))
I have a pretty decent CPU at home, an Intel i7-8700k plus 32GB of 3000MHz RAM. Yet still, multiplying those 5 million data points took 0.21367 seconds. If instead I change the value of num_multiplies to 1 billion, the process takes 43.24129 seconds!
Let’s try another one with an array.
We'll build a Numpy array of size 1000x1000 with a value of 1 in each cell, and again try to multiply each element by the float 1.0000001. The code is shown below.
On the same machine, multiplying those array values by 1.0000001 in a regular floating point loop took 1.28507 seconds.
import time
import numpy as np

start_time = time.time()

data = np.ones(shape=(1000, 1000), dtype=np.float)

for i in range(1000):
    for j in range(1000):
        data[i][j] *= 1.0000001
        data[i][j] *= 1.0000001
        data[i][j] *= 1.0000001
        data[i][j] *= 1.0000001
        data[i][j] *= 1.0000001

end_time = time.time()
print("Run time = {}".format(end_time - start_time))
What is Vectorization?
Numpy is designed to be efficient with matrix operations. More specifically, most processing in Numpy is vectorized.
Vectorization involves expressing mathematical operations, such as the multiplication we’re using here, as occurring on entire arrays rather than their individual elements (as in our for-loop).
With vectorization, the underlying code is parallelized such that the operation can be run on multiply array elements at once, rather than looping through them one at a time. As long as the operation you are applying does not rely on any other array elements, i.e a “state”, then vectorization will give you some good speed ups.
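To make the "state" caveat concrete, here is a small sketch: an element-wise operation has no state and vectorizes directly, while a running total depends on earlier elements, although Numpy still ships vectorized primitives for the common stateful patterns:

import numpy as np

x = np.random.rand(1_000_000)

# Element-wise work has no state, so it vectorizes directly:
y = x * 1.0000001

# A running total depends on previous elements ("state"), so a naive
# Python loop is the slow way to express it...
total = 0.0
for value in x:
    total += value

# ...but Numpy provides vectorized primitives for these patterns too:
total = x.sum()
running = np.cumsum(x)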
Looping over Python arrays, lists, or dictionaries, can be slow. Thus, vectorized operations in Numpy are mapped to highly optimized C code, making them much faster than their standard Python counterparts.
The fast way
Here’s the fast way to do things — by using Numpy the way it was designed to be used.
There’s a couple of points we can follow when looking to speed things up:
- If there is a for-loop or list comprehension over an array, there's a good chance we can replace it with a built-in Numpy function.
- If we see explicit math applied element by element, there's a good chance we can replace it with a built-in Numpy function.

Both of these points are really focused on replacing non-vectorized Python code with optimised, vectorized, low-level C code.
Check out the fast version of our first example from before, this time with 1 billion multiplications.
We’ve done something very simple: we saw that we had a for-loop in which we were repeating the same mathematical operation many times. That should trigger immediately that we should go look for a Numpy function that can replace it.
We found one: the power function, which simply applies a certain power to an input value. This dramatically sped up the code, which now runs in 7.6293e-6 seconds, a massive speedup compared with the 43 seconds measured earlier.
import time
import numpy as np

start_time = time.time()

num_multiplies = 1000000000
data = range(num_multiplies)
number = 1

number *= np.power(1.0000001, num_multiplies)

end_time = time.time()

print(number)
print("Run time = {}".format(end_time - start_time))
It's a very similar idea with multiplying values into Numpy arrays. We see that we're using a double for-loop and should immediately recognise that there should be a faster way. Conveniently, Numpy will automatically vectorise our code if we multiply by our 1.0000001 scalar directly. So, we can write our multiplication in the same way as if we were multiplying by a Python list.
The code below demonstrates this and runs in 0.003618 seconds — that’s a 355X speedup!
import time
import numpy as np

start_time = time.time()

data = np.ones(shape=(1000, 1000), dtype=np.float)

for i in range(5):
    data *= 1.0000001

end_time = time.time()
print("Run time = {}".format(end_time - start_time))
Like to learn?
Follow me on twitter where I post all about the latest and greatest AI, Technology, and Science! Connect with me on LinkedIn too!
Recommended Reading.
Bio: George Seif is a Certified Nerd and AI / Machine Learning Engineer.
Original. Reposted with permission.
Related:
- Why You Should Start Using .npy Files More Often
- Why You Should Forget ‘for-loop’ for Data Science Code and Embrace Vectorization
- Working With Numpy Matrices: A Handy First Reference | https://www.kdnuggets.com/2019/06/speeding-up-python-code-numpy.html | CC-MAIN-2019-30 | refinedweb | 787 | 69.38 |
in reply to
Re: parsing a java src file
in thread parsing a java src file
hmm, if i change your kk.java though to something like i have below, i get no results.
public class test {
private String one;
private String two;
public test(String one, String two) {
this.one = one;
this.two = two;
}
public static final test alpha =
new test("foo", "bar");
public final static
test beta = new test("baz", "boo");
}
reminds me of that scene in "Singing in the Rain" where the audio gets off track with the video: "no, no, no", "yes, yes, yes"!
Actually, that is probably there as a result of the definition in the XSD of this document, not because of VS.
What are you editing?
Shis dude, you ripping in VS? Dude, you really must find yourself some new friends! There is no standard XML intellisense since there is no standard for XML attributes...
That is only incorrect behaviour if your property isn't case sensitive...
There's obviously something wrong with your XSDs - I only get "true" or "false".
Me, too, I only get the lowercase options. Though in VS2005 I used to only get the True and False options (capital first letter).
I'm pretty sure this is defined in the CoreDefinitions.xsd file under C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\XML.
<xs:simpleType>
  <!-- the six enumerations are the TRUE/True/true and FALSE/False/false
       variants discussed in the comments above -->
  <xs:restriction base="xs:string">
    <xs:enumeration value="TRUE" />
    <xs:enumeration value="True" />
    <xs:enumeration value="true" />
    <xs:enumeration value="FALSE" />
    <xs:enumeration value="False" />
    <xs:enumeration value="false" />
  </xs:restriction>
</xs:simpleType>
This is part of the schemas.microsoft.com/sharepoint namespace.
Yet another reason not to do any SharePoint development. | http://weblogs.asp.net/bsimser/archive/2007/12/06/true-true-true-and-false-false-false.aspx | crawl-002 | refinedweb | 203 | 59.4 |
Microsoft Visual Studio 2010
Download Microsoft Visual C# 2010 - Express Edition
Microsoft Expression Blend 3 + Sketch Flow.
Download Microsoft Expression Blend 3
Other useful tools
- WPF Inspector
- Snoop (Inspect the Visual Tree of running WPF applications)
- Mole (Data Visualizer for Visual Studio)
- XAML Power Toys
- WPF Performance Suite
Introduction to Windows Presentation Foundation
Overview
The Windows Presentation Foundation is Microsoft's next-generation UI framework for creating applications with a rich user experience. The following sections give you an overview of the main new features of WPF.
Separation of Appearance and Behavior:
- Appearance and behaviour are loosely coupled
- Designers and developers can work on separate models
- Graphical design tools can work on simple XML documents instead of parsing code
Rich composition
Controls in WPF are extremely composable. You can define almost any type of control as the content of another. Although this flexibility sounds horrible to designers, it is a very powerful feature if you use it appropriately. Put an image into a button, for example, to create an image button:

<Button>
    <StackPanel Orientation="Horizontal">
        <Image Source="speaker.png" Stretch="Uniform"/>
        <TextBlock Text="Play Sound" />
    </StackPanel>
</Button>

Highly customizable

Because of the strict separation of appearance and behavior you can easily change the look of a control. The concept of styles lets you skin controls almost like CSS in HTML, for example turning a default WPF button into a completely customized one.

Resolution independence

All measures in WPF are logical units, not pixels. A logical unit is 1/96 of an inch. If you increase the resolution of your screen, the user interface stays the same size; it just gets crispier. Since WPF builds on a vector-based rendering engine, it is incredibly easy to build scalable user interfaces.
Visual Studio automatically creates a method in the code-behind file that gets called when the button is clicked. you need to install the Service Pack 1 for VisualStudio on your machine. } . Alternatively you can doubleclick on the button in the designer to achieve the same result. Doubleclick on the "Click" event to create a method in the codebehind that is called.Select the Button and switch to the event view in the properties window (click on the little yellow lightning icon). when the user clicks on the button. Note: If you do not find a yellow lightning icon. private void button1_Click(object sender.Text = "Hello WPF!". RoutedEventArgs e) { textBox1.
Set text Text to "Hello WPF!" when the button gets clicked and we are done! Start the application by hit [F5] on your keyboard. Isn't this cool! .The textbox has automatically become assigned the name textBox1 by the WPF designer.
It can open VisualStudio solutions y y Expression Design is a leightweight version of Adobe Illustrator to create and edit vector graphics. It makes the user feel good and so he likes to continue using the software. Its about creating an emotional connection between the user and your software.made for designers. So they decided to create a new tool suite . User experience was often considered late in the development process. It builds the bridge between designer and developers. Designing a rich user experience is not only about make up your user interface by some graphics and gradients . designed and integrated into the development of a product. we focused mainly on building products that fulfilled the functional requirements of the user. But today the customer demands more than just a working product. New Tools for Designers Microsoft recognized. Providing the right features is still the prerequisite for a good product.User Experience Design Process User Experience becomes a Key Success Factor In the past. but to turn it into something extraordinary you need to provide a good user experience! Providing a rich user experience is not a thing of fortune.its a much broader concept. . It consists of the four products: Expression Blend is built to create user interfaces in WPF and Silverlight. It needs to be planed. give development teams the power to create rich user experiences it needs a lot more graphical tool support than VisualStudio can provide today. This tool suite is called Microsoft Expression.
cut and enrich video files and optimize them for silverlight streaming Expression Web is Microsoft next generation of HTML and Javascript editor. . The following illustration shows a sample workflow of integrating a vector image that is created by a graphics designer in Adobe Illustrator into a WPF project that is part of a VisualStudio solution. You have to find out what the user really needs. Its the replacement for Frontpage. Together they are a powerful package. This can be done by following a user centered approach.
You should talk to stakeholders and users to find out the real needs. Elicit Requirements Like in any kind of software projects its important to know and focus the target of your development. It's helpful to only sketch the user interface in a rough way to prevent early discussions about design details. There are multiple techniques and tools to do this. Everyone can just scribble thier ideas on the paper. No tools and infrastructure is needed. Priorize the tasks by risk and importance and work iteratively. It's called wireframes because you just draw the outlines of controls and images. This work is done by the role of the requirements engineer. This can be done with tools like PowerPoint or Visio . This task is typically done by an interaction designer. y y Wireframes Wireframes are often used to sketch the layout of a page. 2. Create and Validate UI Prototype Creating a user interface prototype is an important step to share ideas between users and engineers to create a common understanding of the interaction design. These needs should be refined to features and expressed in use cases (abstract) or user scenarios (illustrative). Some of them are: Paper prototype Use paper and pencil to draw rough sketches of your user interface.1.
Developer The developer is responsible to implement the functionality of the application. y y Graphical Designer The graphical designer is responsible to create a graphical concept and build graphical assets like icons. Integrate Graphical Design 5.logos. The user gets a task to solve and he controlls the prototype by touching on the paper. It is strongly recommended to test your UI prototype on real users. you need a computer with a screen capture software and a camera. The proband gets an task to do and the requirements and interaction engineer watch him doing this. implements the business logic and wires all up to a simple view. Test software Roles Buliding a modern user interface with a rich user experience requires additional skills from your development team. The following techniques are very popular to evaluate UI prototypes: y Walktrough A walktrough is usually done early in a project with wireframes or paper prototypes. y Interactive Prototype The most expensive and real approach is to create an (reusable) interactive prototype that works as the real application but with dummy data. He creates the data model. If the graphical designer is familiar with Microsoft Expression tools he directly creates styles and control templates. 3D models or color schemes. These skills are described as roles that can be distributed among peoples in your development team. Implement Business Logic and Raw User Interface 4. 3. They should not talk to him to find out where he gets stuck and why. . You can use the integrated "wiggly style" to make it look sketchy. The prototype can be run in a standalone player that has an integrated feedback mechanism. y Usability Lab To do a usability lab. This helps you to find out and address design problems early in the development process.y Expression Blend 3 .Sketch Flow Sketch flow is a new cool feature to create interactive prototypes directly in WPF. The test leader than presents a new paper showing the state after the interaction.
This role needs a rare set of skills and so it's often hard to find the right person for it. More Infos The New Iteration . He should validate his work by doing walktroughs or storyboards. He creates wireframes or UI sketches to share its ideas with the team or customer. y Integrator The integrator is the artist between the designer and the developer world. He takes the assets of the graphical designer and integrates them into the raw user interface of the developer.y Interaction Designer The interaction designer is responsible for the content and the flow of a user interface.Microsoft Paper about the Designer/Developer collaboration .
The separation of XAML and UI logic allows it to clearly separate the roles of designer and developer. y y y y XAML vs. Its a simple language based on XML to create and initialize . <StackPanel> <TextBlock Margin="20">Welcome to the World of XAML</TextBlock> <Button Margin="10" HorizontalAlignment="Right">OK</Button> </StackPanel> The same expressed in C# will look like this: // Create the StackPanel StackPanel stackPanel = new StackPanel(). That is done to make it perfectly fit for XML languages like XAML. Altough it was originally invented for WPF it can by used to create any kind of object trees. // Create the TextBlock TextBlock textBlock = new TextBlock(). this. You can use WPF without using XAML. Code As an example we build a simple StackPanel with a textblock and a button in XAML and compare it to the same code in C#.Content = stackPanel.NET objects with hierarchical relations. declare workflows in WF and for electronic paper in the XPS standard. It's up to you if you want to declare it in XAML or write it in code. Advantages of XAML All you can do in XAML can also be done in code. XAML ist just another way to create and initialize objects. Today XAML is used to create user interfaces in WPF.Introduction to XAML XAML stands for Extensible Application Markup Language. . Silverlight. Declare your UI in XAML has some advantages: XAML code is short and clear to read Separation of designer code and logic Graphical design tools like Expression Blend require XAML as source. All classes in WPF have parameterless constructors and make excessive usage of properties.
10"> </Border> Markup Extensions .png" Width="50" Height="50" /> </Button. When you declare a BorderBrush. button. WPF includes a lot of type converters for built-in classes.. And that's the power of XAMLs expressiveness. As you can see is the XAML version much shorter and clearer to read.Windows.Margin = new Thickness(10). But what if we want to put a more complex object as content like an image that has properties itself or maybe a whole grid panel? To do that we can use the property element syntax.Add(textBlock).Add(button). stackPanel.Content> <Image Source="Images/OK. This allows us to extract the property as an own chlild element. // Create the Button Button button = new Button().Media. The same regards to the border thickness that is beeing converted implicit into a Thickness object. The implicit BrushConverter makes aSystem. the word "Blue" is only a string. stackPanel.Children. but you can also write type converters for your own classses.Children. They do their work silently in the background. <Border BorderBrush="Blue" BorderThickness="0. textBlock.Brushes.
microsoft.Markup that defines the XAML keywords. The following example shows a label whose Content is bound to the Text of the textbox. They resolve the value of a property at runtime. Markup extensions are surrouded by curly braces (Example: Namespaces At the beginning of every XAML file you need to include two namespaces.Windows.com/winfx/2006/xaml it is mapped to System. .Windows.Controls. by deriving fromMarkupExtension. All preciding identifiers are named parameters in the form of Property=Value. WPF has some built-in markup extensions. y y StaticResource One time lookup of a resource entry y DynamicResource Auto updating lookup of a resource entry y TemplateBinding To bind a property of a control template to a dependency property of the control y x:Static Resolve the value of a static property. The second is. the text property changes and the binding markup extension automatically updates the content of the label. You can also directly include a CLR namespace in XAML by using the clr-namespace: prefix.Markup extensions are dynamic placeholders for attribute values in XAML. <TextBox x: <Label Content="{Binding Text. y x:Null Return null The first identifier within a pair of curly braces is the name of the extension. The first is. but you can write your own.
com/winfx/2006/xaml´> </Window> .microsoft.<Window xmlns=´´ xmlns:x=´.
WPF differs between those two trees.and Visual Tree Introduction Elements of a WPF user interface are hierarchically related. but it iterates through the visual tree .for example . <Window> <Grid> <Label Content="Label" /> <Button Content="Button" /> </Grid> </Window> Why do we need two different kind of trees? A WPF control consists of multiple. When WPF renders the button. This tree is called the VisualTree.consists of a border. because for some problems you only need the logical elements and for other problems you want all elements. the element itself has no appearance. more primitive controls. The template of one element consists of multiple visual elements. A button . This relation is called the LogicalTree. These controls are visual children of the button.Logical. a rectangle and a content presenter.
<br The visual tree is responsible for: y y y y y y Rendering visual elements Propagate element opacity Propagate Layout. layout etc. You can use almost the same code to navigate through the logical tree. Do Hit-Testing RelativeSource (FindAncestor) Programmatically Find an Ancestor in the Visual Tree If you are a child element of a user interface and you want to access data from a parent element.and RenderTransforms Propagate the IsEnabled property. it's the best solution to navigate up the tree until it finds an element of the requested type. The logical tree is responsible for: Inherit DependencyProperty values Resolving DynamicResources references Looking up element names for bindings Forwaring RoutedEvents y y y y The Visual Tree The visual tree contains all logical elements including all visual elements of the template of each element. And that is the eligibility for the logical tree.and renders the visual children of it. public static class VisualTreeHelperExtensions { public static T FindAncestor<T>(DependencyObject dependencyObject) . and so you should not relate on the visual tree structure! Because of that you want a more robust tree that only contains the "real" controls . Particulary because the template can be replaced. But sometimes you are not interested in the borders and rectangles of a controls' template. This helper does excactly this. but you don't know how many levels up that elemens is. The Logical Tree The logical tree describes the relations between elements of the user interface. This hierarchical relation can also be used to do hit-testing.and not all the template parts.
If the helper reaches the root element of the tree. } while (target != null && !(target is T)).where T : class { DependencyObject target = dependencyObject. } } The following example shows how to use the helper. It starts at this and navigates up the visual tree until it finds an element of type Grid. var grid = VisualTreeHelperExtensions. return target as T. do { target = VisualTreeHelper.GetParent(target). it returns null.FindAncestor<Grid>(this). .
If no local value is set. you will soon stumble across DependencyProperties. that the value of a normal . When you set the FontSize on the root element it applies to all textblocks below except you override the value. When you set a value of a dependency property it is not stored in a field of your object. whereas the value of a DependencyProperty is resolved dynamically when calling the GetValue() method that is inherited from DependencyObject. y y . The main difference is. The default values are stored once within the dependency property.NET property is read directly from a private member in your class. Value inheritance When you access a dependency property the value is resolved by using a value resolution strategy. The advantages of dependency properties are Reduced memory footprint It's a huge dissipation to store a field for each property when you think that over 90% of the properties of a UI control typically stay at its initial values.NET properties. Dependency properties solve these problems by only store modified properties in the instance. the dependency property navigates up the logical tree until it finds a value. They look quite similar to normal .Dependency Properties Introduction Value resolution strategy The magic behind it How to create a DepdencyProperty Readonly DependencyProperties Attached DependencyProperties Listen to dependency property changes How to clear a local value Introduction When you begin to develop appliations with WPF. but in a dictionary of keys and values provided by the base class DependencyObject. The key of an entry is the name of the property and the value is the value you want to set. but the concept behind is much more complex and powerful.
- Change notification: Dependency properties have a built-in change notification mechanism. By registering a callback in the property metadata you get notified when the value of the property has been changed. This is also used by the databinding.

Value resolution strategy

Every time you access a dependency property, it internally resolves the value by following the precedence from high to low. It checks if a local value is available, if not whether a custom style trigger is active, and continues until it finds a value. At last the default value is always available.

The magic behind it

Each WPF control registers a set of DependencyProperties to the static DependencyProperty class. Each of them consists of a key - that must be unique per type - and a metadata that contains callbacks and a default value.

All types that want to use DependencyProperties must derive from DependencyObject. This base class defines a key/value dictionary that contains local values of dependency properties.
The key of an entry is the key defined with the dependency property. When you access a dependency property over its .NET property wrapper, it internally calls GetValue(DependencyProperty) to access the value. This method resolves the value by using the value resolution strategy explained above. If a local value is available, it reads it directly from the dictionary. If no value is set it goes up the logical tree and searches for an inherited value. If no value is found it takes the default value defined in the property metadata. This sequence is a bit simplified, but it shows the main concept.

How to create a DependencyProperty

To create a DependencyProperty, add a static field of type DependencyProperty to your type and call DependencyProperty.Register() to create an instance of a dependency property. The name of the DependencyProperty must always end with ...Property. This is a naming convention in WPF.

To make it accessible as a normal .NET property you need to add a property wrapper. This wrapper does nothing else than internally getting and setting the value by using the GetValue() and SetValue() methods inherited from DependencyObject, passing the DependencyProperty as key.

Important: Do not add any logic to these properties, because they are only called when you set the property from code. If you set the property from XAML, the SetValue() method is called directly.
If you are using Visual Studio, you can type propdp and hit tab twice to create a dependency property.

// Dependency Property
public static readonly DependencyProperty CurrentTimeProperty =
    DependencyProperty.Register("CurrentTime", typeof(DateTime), typeof(MyClockControl),
        new FrameworkPropertyMetadata(DateTime.Now));

// .NET Property wrapper
public DateTime CurrentTime
{
    get { return (DateTime)GetValue(CurrentTimeProperty); }
    set { SetValue(CurrentTimeProperty, value); }
}

Each DependencyProperty provides callbacks for change notification, value coercion and validation. These callbacks are registered on the dependency property:

new FrameworkPropertyMetadata(DateTime.Now,
        OnCurrentTimePropertyChanged,
        OnCoerceCurrentTimeProperty),
    OnValidateCurrentTimeProperty);

Value Changed Callback

The change notification callback is a static method that is called every time the value of the TimeProperty changes. The new value is passed in the EventArgs; the object on which the value changed is passed as the source.

private static void OnCurrentTimePropertyChanged(DependencyObject source,
    DependencyPropertyChangedEventArgs e)
{
    MyClockControl control = source as MyClockControl;
    DateTime time = (DateTime)e.NewValue;
    // Put some update logic here...
}

Coerce Value Callback

The coerce callback allows you to adjust the value if it's outside the boundaries without throwing an exception. A good example is a progress bar with a Value set below the Minimum or above the Maximum.
In this case we can coerce the value within the allowed boundaries. In the following example we limit the time to be in the past.

private static object OnCoerceTimeProperty(DependencyObject sender, object data)
{
    if ((DateTime)data > DateTime.Now)
    {
        data = DateTime.Now;
    }
    return data;
}

Validation Callback

In the validate callback you check if the set value is valid. If you return false, an ArgumentException will be thrown. In our example we demand that the data is an instance of a DateTime.

private static bool OnValidateTimeProperty(object data)
{
    return data is DateTime;
}

Readonly DependencyProperties

Some dependency properties of WPF controls are readonly. They are often used to report the state of a control, like the IsMouseOver property. It does not make sense to provide a setter for this value.

Maybe you ask yourself, why not just use a normal .NET property? One important reason is that you cannot set triggers on normal .NET properties.

Creating a read only property is similar to creating a regular DependencyProperty. Instead of calling DependencyProperty.Register() you call DependencyProperty.RegisterReadOnly(). This returns you a DependencyPropertyKey. This key should be stored in a private or protected static readonly field of your class. The key gives you access to set the value from within your class and use it like a normal dependency property.

The second thing to do is registering a public dependency property that is assigned to DependencyPropertyKey.DependencyProperty. This property is the readonly property that can be accessed from external.
// Register the private key to set the value
private static readonly DependencyPropertyKey IsMouseOverPropertyKey =
    DependencyProperty.RegisterReadOnly("IsMouseOver", typeof(bool), typeof(MyClass),
        new FrameworkPropertyMetadata(false));

// Register the public property to get the value
public static readonly DependencyProperty IsMouseoverProperty =
    IsMouseOverPropertyKey.DependencyProperty;

// .NET Property wrapper
public bool IsMouseOver
{
    get { return (bool)GetValue(IsMouseoverProperty); }
    private set { SetValue(IsMouseOverPropertyKey, value); }
}

Attached Properties

Attached properties are a special kind of DependencyProperties. They allow you to attach a value to an object that does not know anything about this value. Since you can write your own layout panel, the list of data a panel could need from its children is open-ended. So you see, it's not possible to have all those properties on every WPF control. The solution are attached properties. They are defined by the control that needs the data from another control in a specific context.

To set the value of an attached property, add an attribute in XAML with a prefix of the element that provides the attached property. To set the Canvas.Top and Canvas.Left property of a button aligned within a Canvas panel, you write it like this:

<Canvas>
    <Button Canvas.Top="20" Canvas.Left="20" Content="Click me!"/>
</Canvas>

public static readonly DependencyProperty TopProperty =
    DependencyProperty.RegisterAttached("Top", typeof(double), typeof(Canvas),
        new FrameworkPropertyMetadata(0d, FrameworkPropertyMetadataOptions.Inherits));

public static void SetTop(UIElement element, double value)
{
    element.SetValue(TopProperty, value);
}

public static double GetTop(UIElement element)
{
    return (double)element.GetValue(TopProperty);
}

Listen to dependency property changes

If you want to listen to changes of a dependency property, you can subclass the type that defines the property, override the property metadata and pass a PropertyChangedCallback. But a much easier way is to get the DependencyPropertyDescriptor and hook up a callback by calling AddValueChanged().

DependencyPropertyDescriptor textDescr = DependencyPropertyDescriptor.
    FromProperty(TextBox.TextProperty, typeof(TextBox));

if (textDescr != null)
{
    textDescr.AddValueChanged(myTextBox, delegate
    {
        // Add your property changed logic here...
    });
}

How to clear a local value

Because null is also a valid local value, there is the constant DependencyProperty.UnsetValue that describes an unset value.

button1.ClearValue(Button.ContentProperty);
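The subclass-and-override approach mentioned above is not shown in the source; the following is a minimal sketch of it, assuming a hypothetical TextBox subclass named MyTextBox:

public class MyTextBox : TextBox
{
    static MyTextBox()
    {
        // Re-register metadata for TextProperty on this subtype and attach a callback
        TextProperty.OverrideMetadata(typeof(MyTextBox),
            new FrameworkPropertyMetadata(string.Empty, OnTextChanged));
    }

    private static void OnTextChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        // Called for every change of Text on MyTextBox instances
    }
}

Compared to DependencyPropertyDescriptor.AddValueChanged(), this variant only affects instances of the subtype, but it avoids the risk of leaking handlers for long-lived controls.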
Routed Events

Routed events are events which navigate up or down the visual tree according to their RoutingStrategy. The routing strategy can be bubble, tunnel or direct. You can hook up event handlers on the element that raises the event or also on other elements above or below it by using the attached event syntax: Button.Click="Button_Click".

Routed events normally appear as a pair. The first is a tunneling event called PreviewMouseDown and the second is the bubbling event called MouseDown. They don't stop routing if they reach an event handler. To stop routing you have to set e.Handled = true.

- Tunneling: The event is raised on the root element and navigates down the visual tree until it reaches the source element or until the tunneling is stopped by marking the event as handled. By naming convention it is called Preview..., and it appears before the corresponding bubbling event.

- Bubbling: The event is raised on the source element and navigates up the visual tree until it reaches the root element or until the bubbling is stopped by marking the event as handled. The bubbling event is raised after the tunneling event.

- Direct: The event is raised on the source element and must be handled on the source element itself. This behavior is the same as normal .NET events.

How to Create a Custom Routed Event
// Register the routed event
public static readonly RoutedEvent SelectedEvent =
    EventManager.RegisterRoutedEvent("Selected", RoutingStrategy.Bubble,
        typeof(RoutedEventHandler), typeof(MyCustomControl));

// .NET wrapper
public event RoutedEventHandler Selected
{
    add { AddHandler(SelectedEvent, value); }
    remove { RemoveHandler(SelectedEvent, value); }
}

// Raise the routed event "selected"
RaiseEvent(new RoutedEventArgs(MyCustomControl.SelectedEvent));
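Because the event bubbles, a handler can also be registered from code on any ancestor element. A minimal sketch, assuming a window that contains the MyCustomControl above (the window class and handler name are illustrative, not from the original text):

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
        // Attached-event style registration; the last argument 'true' also
        // invokes the handler for events already marked as handled.
        AddHandler(MyCustomControl.SelectedEvent,
                   new RoutedEventHandler(OnChildSelected),
                   true);
    }

    private void OnChildSelected(object sender, RoutedEventArgs e)
    {
        // e.Source is the element deeper in the tree that raised the event
    }
}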
Introduction to WPF Layout

Why layout is so important
Best Practices
Vertical and Horizontal Alignment
Margin and Padding
Width and Height
Content Overflow Handling

Why layout is so important

Layout of controls is critical to an application's usability. Arranging controls based on fixed pixel coordinates may work for a limited environment, but as soon as you want to use it on different screen resolutions or with different font sizes it will fail. WPF provides a rich set of built-in layout panels that help you to avoid the common pitfalls.

These are the five most popular layout panels of WPF:

- Grid Panel
- Stack Panel
- Dock Panel
- Wrap Panel
- Canvas Panel

Best Practices

- Avoid fixed positions - use the Alignment properties in combination with Margin to position elements in a panel.
- Avoid fixed sizes - set the Width and Height of elements to Auto whenever possible.
- Don't abuse the canvas panel to layout elements. Use it only for vector graphics.
- Use a StackPanel to layout the buttons of a dialog.
- Use a GridPanel to layout a static data entry form. Create an Auto sized column for the labels and a Star sized column for the TextBoxes.
- Use an ItemsControl with a grid panel in a DataTemplate to layout dynamic key value lists. Use the SharedSize feature to synchronize the label widths.
Vertical and Horizontal Alignment

Use the VerticalAlignment and HorizontalAlignment properties to dock the controls to one or multiple sides of the panel. The following illustrations show how the sizing behaves with the different combinations.

Margin and Padding

The Margin and Padding properties can be used to reserve some space around or within the control.

- The Margin is the extra space around the control.
- The Padding is extra space inside the control.
- The Padding of an outer control is the Margin of an inner control.

Height and Width

Although it's not a recommended way, all controls provide a Height and Width property to give an element a fixed size. If you set the width or height to Auto the control sizes itself to the size of the content. A better way is to use the MinHeight, MaxHeight, MinWidth and MaxWidth properties to define an acceptable range.

Overflow Handling

Clipping

Layout panels typically clip those parts of child elements that overlap the border of the panel. This behavior can be controlled by setting the ClipToBounds property to true or false.

Scrolling

When the content is too big to fit the available size, you can wrap it into a ScrollViewer. The ScrollViewer uses two scroll bars to choose the visible area. The visibility of the scrollbars can be controlled by the vertical and horizontal ScrollbarVisibility properties.

<ScrollViewer>
    <StackPanel>
        <Button Content="First Item" />
        <Button Content="Second Item" />
        <Button Content="Third Item" />
    </StackPanel>
</ScrollViewer>
Grid Panel

Introduction
How to define rows and columns
How to add controls to the grid
Resize columns or rows
How to share the width of a column over multiple grids
Using GridLengths from code

Introduction

The grid is a layout panel that arranges its child controls in a tabular structure of rows and columns. Its functionality is similar to the HTML table but more flexible. A cell can contain multiple controls, they can span over multiple cells and even overlap themselves.

The resize behaviour of the controls is defined by the HorizontalAlignment and VerticalAlignment properties, which define the anchors. The distance between the anchor and the grid line is specified by the margin of the control.

Define Rows and Columns

The grid has one row and column by default. To create additional rows and columns, you have to add RowDefinition items to the RowDefinitions collection and ColumnDefinition items to the ColumnDefinitions collection. The size can be specified as an absolute amount of logical units, as a percentage value or automatically.

Fixed     Fixed size of logical units (1/96 inch)
Auto      Takes as much space as needed by the contained control
Star (*)  Takes as much space as available, percentally divided over all star-sized columns. Star-sizes are like percentages, except that the sum of all star columns does not have to be 100%.

The following example shows a grid with three rows and two columns. The grid layout panel provides the two attached properties Grid.Column and Grid.Row to place a control in a specific cell.

<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="*" />
        <RowDefinition Height="28" />
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="Auto" />
        <ColumnDefinition Width="200" />
    </Grid.ColumnDefinitions>
    <Label Grid.Row="0" Grid.Column="0" />
    <TextBox Grid.Row="0" Grid.Column="1" />
    <Label Grid.Row="1" Grid.Column="0" />
    <TextBox Grid.Row="1" Grid.Column="1" />
    <TextBox Grid.Row="2" Grid.Column="1" />
    <Button Grid.Row="3" Grid.Column="1" />
</Grid>
Resizable columns or rows

WPF provides a control called the GridSplitter. This control is added like any other control to a cell of the grid. The special thing is that it grabs the nearest gridline to change its width or height when you drag this control around.

<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*"/>
        <ColumnDefinition Width="Auto"/>
        <ColumnDefinition Width="*"/>
    </Grid.ColumnDefinitions>
    <Label Content="Left" Grid.Column="0" />
    <GridSplitter Grid.Column="1" HorizontalAlignment="Right" VerticalAlignment="Stretch"
                  ResizeBehavior="PreviousAndNext" />
    <Label Content="Right" Grid.Column="2" />
</Grid>

The best way to align a grid splitter is to place it in its own auto-sized column. Doing it this way prevents overlapping to adjacent cells. To ensure that the grid splitter changes the size of the previous and next cell you have to set the ResizeBehavior to PreviousAndNext.

The splitter normally recognizes the resize direction according to the ratio between its height and width. But if you like you can also manually set the ResizeDirection to Columns or Rows.

<GridSplitter ResizeDirection="Columns"/>

How to share the width of a column over multiple grids
The shared size feature of the grid layout allows it to synchronize the width of columns over multiple grids. The feature is very useful if you want to realize a multi-column listview by using a grid as layout panel within the data template. Because each item contains its own grid, the columns will not have the same width by default.

By setting the attached property Grid.IsSharedSizeScope to true on a parent element you define a scope within which the column-widths are shared. To synchronize the width of two column definitions, set the SharedSizeGroup to the same name.

<ItemsControl Grid.IsSharedSizeScope="True" >
    <ItemsControl.ItemTemplate>
        <DataTemplate>
            <Grid>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition SharedSizeGroup="FirstColumn" Width="Auto"/>
                    <ColumnDefinition Width="*"/>
                </Grid.ColumnDefinitions>
                <TextBlock Text="{Binding Path=Key}" TextWrapping="Wrap"/>
                <TextBlock Text="{Binding Path=Value}" Grid.Column="1"/>
            </Grid>
        </DataTemplate>
    </ItemsControl.ItemTemplate>
</ItemsControl>

Useful Hints

Columns and rows that participate in size-sharing do not respect Star sizing. In the size-sharing scenario, Star sizing is treated as Auto. Since TextWrapping on TextBlocks within a SharedSize column does not work, you can exclude your last column from the shared size. This often helps to resolve the problem.

Using GridLengths from code

If you want to add columns or rows by code, you can use the GridLength class to define the different types of sizes.

Auto sized   GridLength.Auto
Star sized   new GridLength(1, GridUnitType.Star)
Fixed size   new GridLength(100, GridUnitType.Pixel)

Grid grid = new Grid();

ColumnDefinition col1 = new ColumnDefinition();
col1.Width = GridLength.Auto;

ColumnDefinition col2 = new ColumnDefinition();
col2.Width = new GridLength(1, GridUnitType.Star);

grid.ColumnDefinitions.Add(col1);
grid.ColumnDefinitions.Add(col2);

More on this topic: How to create a resizable column
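Rows and child controls can be added from code in the same way. The following is a small sketch building on the grid and columns created above; the TextBlock and its placement are illustrative, not taken from the original text:

// Add an auto-sized row
RowDefinition row = new RowDefinition();
row.Height = GridLength.Auto;
grid.RowDefinitions.Add(row);

// Place a control into a cell using the attached-property setters,
// which is the code equivalent of Grid.Row="0" Grid.Column="1" in XAML.
TextBlock text = new TextBlock();
text.Text = "Hello";
Grid.SetRow(text, 0);
Grid.SetColumn(text, 1);
grid.Children.Add(text);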
WPF StackPanel

Introduction

The StackPanel in WPF is a simple and useful layout panel. It stacks its child elements below or beside each other, depending on its orientation. This is very useful to create any kind of list. All WPF ItemsControls like ComboBox, ListBox or Menu use a StackPanel as their internal layout panel.

The following example shows a horizontal stack panel for dialog buttons. The stack panel aligns the two buttons depending on their desired size. If they need more space they will get it automatically. Never mess again with too small or too large buttons. Because the size of the text can change if the user changes the font-size or switches the language, we should avoid fixed sized buttons.

<StackPanel Margin="8" Orientation="Horizontal">
    <Button MinWidth="93">OK</Button>
    <Button MinWidth="93" Margin="10,0,0,0">Cancel</Button>
</StackPanel>
Built-in Controls of WPF

The WPF framework provides a rich set of built-in controls. The controls can be divided in the following categories:

DataGrid, Calendar, ItemsControl, LivePreview, ComboBox, Dialogs, Slider, Popup, RadioButton, ToolTips, TextBox, Menus, Expander, PasswordBox, ContextMenu, ListBox, ListView, TextBlock, Window, Third Party Controls
WPF - Third Party Controls

WPF Component Vendors
- Component Art
- DevExpress
- SyncFusion
- Infragistics
- Xceed
- Telerik
- Actipro

Data Grids
- Infragistics Data Grid
- Xceed Data Grid
- Component One Data Grid
- Syncfusion Essential Grid
- Telerik RadGridView for WPF
- ComponentArt DataGrid

Misc
- WPF Toolkit
- Infragistics Tab Control
- Infragistics MonthCalendar
- Actipro BarCode
- Actipro Wizard
- Actipro Property Grid
- Mindscape Property Grid
- Mindscape Flow Diagrams

Charts
- Infragistics xamChart
- Swordfish Charts
- Component One Chart
- Visifire Chart for WPF and Silverlight
- WPF Graph on Code Project
- Free 3D Chart
- Free High Performance 3D Chart
- D3 Dynamic Data Display
- Syncfusion Essential Chart
- Syncfusion Gauge
- Telerik RadGauge for WPF
- Telerik RadChart for WPF
- ComponentArt Chart

Outlook Bar
- Infragistics Outlook Bar
- Actipro Outlook Bar
- DevComponents Outlook Bar
- Odyssey Outlook Bar
- Odyssey Explorer Bar
- Telerik RadOutlookBar for WPF

Panels
- Infragistics Carousel Panel
- Telerik WPF Carousel Control
- Telerik RadTileView for WPF

Dialogs
- Pure WPF FileOpen, FileSave and FolderBrowser Dialogs

Reporting
- Infragistics Reporting for WPF
- Component One Reports

Dock
- Infragistics Dock Manager
- Actipro Dock Panel
- DevComponents Dock Panel
- WPF Docking Library (Open Source)
- Avalon Dock (Open Source)
- Telerik RadDocking for WPF

Ribbon
- Fluent Ribbon Control Suite
- Infragistics Ribbon
- Actipro Ribbon
- DevComponents Ribbon
- Odyssey Ribbon
- Telerik WPF UI RibbonBar
- Free Microsoft WPF Ribbon Control

Editors
- Infragistics xamEditors
- Xceed Editors
- DevComponents Numeric Editor
- Telerik RadNumericUpDown for WPF
- Syncfusion Essential Edit (with Syntax Highlighting)
- Syncfusion Essential Diagram Editor
- WPF Calendar Control

Toolbar
- DevExpress ToolBar
- Odyssey Breadcrumb Bar
- ComponentArt Toolbar

Theming
- WPF Theme Selector

Effects
- Transitionals - Framework to transition between screens.
- WPF Shader and Transition FX
- Windows Presentation Foundation Pixel Shader Effects Library
- DotWay WPF - Color Picker, Panels and several Shader Effects

Tree
- Telerik RadTree View for WPF

Schedule
- DevComponents Schedule Control
- DevComponents DateTime Picker
- Component One Schedule
- Timeline Control
- Telerik RadScheduler for WPF
- Free WPF Schedule Control

Drag & Drop
- Telerik Drag&Drop for WPF

GIS and Maps
- Microsoft Virtual Earth Control
- Sharp Map Control

3D
- Xceed 3D Views

Multimedia
- DirectShowLib - .NET Wrapper for DirectShow
- VideoRenderElement
- Webcam Control
- WPF Media Kit - DVD Player, DirectShow, WebCam

Web Browser
- Chromium Web Browser
DataBinding in WPF
Introduction
WPF provides a simple and powerful way to auto-update data between the business model and the user interface. This mechanism is called DataBinding. Every time the data of your business model changes, it automatically reflects the updates to the user interface and vice versa. This is the preferred method in WPF to bring data to the user interface.

Databinding can be unidirectional (source -> target or target <- source) or bidirectional (source <-> target). The source of a databinding can be a normal .NET property or a DependencyProperty. The target property of the binding must be a DependencyProperty.

To make the databinding work properly, both sides of a binding must provide a change notification that tells the binding when to update the target value. On normal .NET properties this is done by raising the PropertyChanged event of the INotifyPropertyChanged interface. On DependencyProperties it is done by the PropertyChanged callback of the property metadata.

Databinding is typically done in XAML by using the {Binding} markup extension. The following example shows a simple binding between the text of a TextBox and a Label that reflects the typed value:
<StackPanel>
    <TextBox x:Name="txtInput" />
    <Label Content="{Binding Text, ElementName=txtInput, UpdateSourceTrigger=PropertyChanged}" />
</StackPanel>
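As mentioned above, a normal .NET source property must raise PropertyChanged for the binding to stay in sync. The following is a minimal sketch of such a bindable class; the class and property names are illustrative and not taken from the original text:

using System.ComponentModel;

public class Customer : INotifyPropertyChanged
{
    private string _firstName;

    public string FirstName
    {
        get { return _firstName; }
        set
        {
            if (_firstName == value) return;
            _firstName = value;
            // Tells any active binding to refresh its target
            OnPropertyChanged("FirstName");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}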
DataContext
Every WPF control derived from FrameworkElement has a DataContext property. This property is meant to be set to the data object it visualizes. If you don't explicitly define a source of a binding, it takes the data context by default.

The DataContext is inherited down the element tree. So you can set the DataContext on a superior layout container and its value is inherited to all child elements. This is very useful if you want to build a form that is bound to multiple properties of the same data object.

<StackPanel DataContext="{StaticResource myCustomer}">
    <TextBox Text="{Binding FirstName}"/>
    <TextBox Text="{Binding LastName}"/>
    <TextBox Text="{Binding Street}"/>
    <TextBox Text="{Binding City}"/>
</StackPanel>

ValueConverters

If you want to bind two properties of different types together, you need to use a ValueConverter. A ValueConverter converts the value from a source type to a target type and back. A typical example is to bind a boolean member to the Visibility property. Note that such a converter is already part of the .NET framework.

<StackPanel>
    <StackPanel.Resources>
        <BooleanToVisibilityConverter x:Key="boolToVisibilityConverter" />
    </StackPanel.Resources>
    <CheckBox x:Name="chkShowDetails" />
    <StackPanel x:Name="detailsPanel"
                Visibility="{Binding IsChecked, ElementName=chkShowDetails,
                             Converter={StaticResource boolToVisibilityConverter}}">
    </StackPanel>
</StackPanel>

The following example shows a simple converter that converts a boolean to a visibility property.

public class BooleanToVisibilityConverter : IValueConverter
{
". object parameter. Type targetType. } } Tip: you can derive your value converter from MarkupExtension and return its own instance in the ProvideValueoverride.. CultureInfo culture) { throw new NotImplementedException().public object Convert(object value. } return value.Collapsed. So you can use it directly without referencing it from the resources. .. even it's not needed. Just for the WPF designer. Type targetType. object parameter. CultureInfo culture) { if (value is Boolean) { return ((bool)value) ? Visibility. } public object ConvertBack(object value.' has 0 parameters. you need to add an default constructor to your converter. Another Tip: When you get the error "No constructor for type '.Visible : Visibility.
Navigate, Group, Sort and Filter Data in WPF

What is a CollectionView?
Navigation
Filtering
Sorting
Grouping
How to create a CollectionView in XAML

What is a CollectionView?

WPF has a powerful data binding infrastructure. The following example binds a ListBox to a collection of customers exposed by a view model:

<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <ListBox ItemsSource="{Binding Customers}" />
</Window>

public class CustomerView
{
    public CustomerView()
    {
        DataContext = new CustomerViewModel();
    }
}
public class CustomerViewModel
{
    private ICollectionView _customerView;

    public ICollectionView Customers
    {
        get { return _customerView; }
    }

    public CustomerViewModel()
    {
        IList<Customer> customers = GetCustomers();
        _customerView = CollectionViewSource.GetDefaultView(customers);
    }
}

Navigation

The collection view adds support for selection tracking. If you set the property IsSynchronizedWithCurrentItem to True on the view that the collection is bound to, it automatically synchronizes the current item of the CollectionView and the View.

<ListBox ItemsSource="{Binding Customers}" IsSynchronizedWithCurrentItem="True" />

If you are using a MVVM (Model-View-ViewModel) pattern, you don't have to extra wire-up the SelectedItem of the control, because it's implicitly available over the CollectionView.

ICollectionView _customerView = CollectionViewSource.GetDefaultView(customers);
_customerView.CurrentChanged += CustomerSelectionChanged;

private void CustomerSelectionChanged(object sender, EventArgs e)
{
    // React to the changed selection
}

You can also manually control the selection from the ViewModel by calling the MoveCurrentToFirst() or MoveCurrentToLast() methods on the CollectionView.
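A small sketch of how such navigation calls might be exposed from the view model; the method names are illustrative, not from the original text:

public void SelectFirstCustomer()
{
    // Moves the CollectionView's current item; a bound ListBox with
    // IsSynchronizedWithCurrentItem="True" updates its selection automatically.
    _customerView.MoveCurrentToFirst();
}

public void SelectNextCustomer()
{
    if (!_customerView.MoveCurrentToNext())
    {
        // MoveCurrentToNext returns false once we moved past the last item
        _customerView.MoveCurrentToLast();
    }
}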
Filtering

To filter a collection view you can define a callback method that determines if the item should be part of the view or not. That method should have the following signature: bool Filter(object item). Now set the delegate of that method to the Filter property of the CollectionView and you're done.

ICollectionView _customerView = CollectionViewSource.GetDefaultView(customers);
_customerView.Filter = CustomerFilter;

private bool CustomerFilter(object item)
{
    Customer customer = item as Customer;
    return customer.Name.Contains(_filterString);
}

Refresh the filter

If you change the filter criteria and you want to refresh the view, you have to call Refresh() on the collection view.

public string FilterString
{
    get { return _filterString; }
    set
    {
        _filterString = value;
        NotifyPropertyChanged("FilterString");
        _customerView.Refresh();
    }
}
Sorting

Sorting data ascending or descending by one or multiple criteria is a common requirement for viewing data. The collection view makes it easy to achieve this goal: just add as many SortDescriptions as you like to the CollectionView.

ICollectionView _customerView = CollectionViewSource.GetDefaultView(customers);
_customerView.SortDescriptions.Add(new SortDescription("LastName", ListSortDirection.Ascending));
_customerView.SortDescriptions.Add(new SortDescription("FirstName", ListSortDirection.Ascending));

Fast Sorting

The sorting technique explained above is really simple, but also quite slow for a large amount of data, because it internally uses reflection. But there is an alternative, more performant way to do sorting by providing a custom sorter.

ListCollectionView _customerView =
    CollectionViewSource.GetDefaultView(customers) as ListCollectionView;
_customerView.CustomSort = new CustomerSorter();

public class CustomerSorter : IComparer
{
    public int Compare(object x, object y)
    {
        Customer custX = x as Customer;
        Customer custY = y as Customer;
        return custX.Name.CompareTo(custY.Name);
    }
}

Grouping

Grouping is another powerful feature of the CollectionView. You can define as many groups as you like by adding GroupDescriptions to the collection view.

Note: Grouping disables virtualization! This can bring huge performance issues on large data sets. So be careful when using it.

ICollectionView _customerView = CollectionViewSource.GetDefaultView(customers);
_customerView.GroupDescriptions.Add(new PropertyGroupDescription("Country"));

To make the grouping visible in the view you have to define a special GroupStyle on the view.

<ListBox ItemsSource="{Binding Customers}">
    <ListBox.GroupStyle>
        <GroupStyle>
            <GroupStyle.HeaderTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Path=Name}"/>
                </DataTemplate>
            </GroupStyle.HeaderTemplate>
        </GroupStyle>
    </ListBox.GroupStyle>
</ListBox>

How to create a CollectionView in XAML

It's also possible to create a CollectionView completely in XAML:

<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Window.Resources>
        <CollectionViewSource Source="{Binding}" x:Key="customerView" />
    </Window.Resources>
    <ListBox ItemsSource="{Binding Source={StaticResource customerView}}" />
</Window>
The Model-View-ViewModel Pattern

How the MVVM pattern became convenient

WPF has a very powerful databinding feature that provides an easy one-way or two-way synchronization of properties. You can directly bind two WPF elements together, but the common use of databinding is to bind some kind of data to the view. This is done by using the DataContext property. Since the DataContext property is marked as inherited, it can be set on the root element of a view and its value is inherited to all subjacent elements of the view.

One big limitation of using the DataContext property as data source is that there is only one of it. But in a real life project you usually have more than one data object per view. So what can we do? The most obvious approach is to aggregate all data objects into one single object that exposes the aggregated data as properties and that can be bound to the DataContext. This object is called the view model.

Separation of logic and presentation

The MVVM pattern is so far only a convenient way to bind data to the view. But what about user actions, how are they handled? The classic approach, known from WinForms, is to register an event handler that is implemented in the code-behind file of the view. Doing this has some disadvantages:

- Having event handlers in the code-behind is bad for testing, since you cannot mock away the view.
- Changing the design of the view often also requires changes in the code, since every element has its different event handlers.
- The logic is tightly bound to the view. It's not possible to reuse the logic in an other view.

So the idea is to move the whole presentation logic to the view model by using another feature of WPF, namely Commands. Commands can be bound like data and are supported by many elements like buttons, togglebuttons, menuitems, checkboxes and inputbindings. The goal here is not to have any line of logic in the code-behind of a view. This brings you the following advantages:

- The view-model can easily be tested by using standard unit-tests (instead of UI-testing).
- The view can be redesigned without changing the viewmodel, because the interface stays the same.
- The view-model can even be reused, in some special cases (this is usually not recommended).

What's the difference between MVVM, MVP and MVC?

There is always some confusion about the differences between model-view-presenter, model-view-controller and the MVVM pattern. So I try to define and distinguish them a bit more clearly.
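The commands mentioned above are usually exposed from the view model as ICommand properties. The following is a minimal sketch of a common helper for this, often called RelayCommand or DelegateCommand; this particular implementation is illustrative and not taken from the original text:

using System;
using System.Windows.Input;

public class RelayCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public RelayCommand(Action<object> execute, Predicate<object> canExecute = null)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }

    // Lets WPF re-query CanExecute whenever it re-evaluates command bindings
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}

A view model would then expose, for example, public ICommand SaveCommand { get; } and the view binds it with Command="{Binding SaveCommand}" - no click handler in the code-behind.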
MVC - Model-View-Controller

The MVC pattern consists of one controller that directly gets all user input. Depending on the kind of input, it shows up a different view or modifies the data in the model. The model and the view are created by the controller. The view only knows about the model, but the model does not know about any other objects. The pattern was often used in good old MFC and now in ASP.NET MVC.

MVP - Model-View-Presenter

In the MVP pattern, the view gets the user input and forwards it to the presenter. The presenter then modifies the view or the model depending on the type of user action. The view and the presenter are tightly coupled; there is a bidirectional one-to-one relation between them. The model does not know about the presenter. The view itself is passive, since the presenter pushes the data into the view - that's why it's called the presenter pattern. This pattern is often seen in WinForms and early WPF applications.
MVVM - Model-View-ViewModel

The model-view-viewmodel is a typical WPF pattern. It consists of a view that gets all the user input and forwards it to the viewmodel, typically by using commands. The view actively pulls the data from the viewmodel by using databinding. The model does not know about the view model.

Also check out this interesting article from Costas Bakopanos, a friend of mine, with a discussion about the model, states and controllers in the MVVM environment.

Some MVVM Frameworks

Check out this handy tool to compare MVVM frameworks: MVVM Comparison Tool (Silverlight)

- PRISM (Microsoft)
- MVVM Light (Laurent Bugnion)
- WPF Application Framework
- Chinch
- Caliburn Micro
- Core MVVM
- Onyx
- nRoute
- MVVM Foundation
- How to build your own MVVM Framework

Data Validation in WPF
What we want to do is a simple entry form for an e-mail address. If the user enters an invalid email address, the border of the textbox gets red and the tooltip shows the reason.

Implementing a ValidationRule (.NET 3.0 style)

In this example I am implementing a generic validation rule that takes a regular expression as validation rule. If the expression matches, the data is treated as valid.

/// <summary>
/// Validates a text against a regular expression
/// </summary>
public class RegexValidationRule : ValidationRule
{
    private string _pattern;
    private Regex _regex;

    public string Pattern
    {
        get { return _pattern; }
        set
        {
            _pattern = value;
            _regex = new Regex(_pattern, RegexOptions.IgnoreCase);
        }
    }

    public RegexValidationRule()
    {
    }

    public override ValidationResult Validate(object value, CultureInfo cultureInfo)
    {
        if (value == null || !_regex.Match(value.ToString()).Success)
        {
            return new ValidationResult(false, "The value is not a valid email address");
        }
        else
        {
            return new ValidationResult(true, null);
        }
    }
}

First thing I need to do is place a regular expression pattern as string into the window's resources (the resource key is reconstructed, the original name is not readable in the source):

<Window.Resources>
    <sys:String x:Key="emailPattern">^[a-zA-Z][\w\.-]*[a-zA-Z0-9]@[a-zA-Z0-9][\w\.-]*[a-zA-Z0-9]\.[a-zA-Z][a-zA-Z\.]*[a-zA-Z]$</sys:String>
</Window.Resources>

Build a converter to convert ValidationErrors to a multi-line string

The following converter combines a list of ValidationErrors into a string. This makes the binding much easier. The converter is both a value converter and a markup extension. This allows you to create and use it at the same time.

In many samples on the web you see the following binding expression:
{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}

This expression works if there is one validation error. But if you don't have any validation errors the data binding fails. This slows down your application and causes the following message in your debug window:

System.Windows.Data Error: 16 : Cannot get 'Item[]' value (type 'ValidationError') from '(Validation.Errors)' (type 'ReadOnlyObservableCollection`1'). BindingExpression:Path=(Validation.Errors)[0].ErrorContent; DataItem='TextBox'; ...

[ValueConversion(typeof(ReadOnlyObservableCollection<ValidationError>), typeof(string))]
public class ValidationErrorsToStringConverter : MarkupExtension, IValueConverter
{
    public override object ProvideValue(IServiceProvider serviceProvider)
    {
        return new ValidationErrorsToStringConverter();
    }

    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        ReadOnlyObservableCollection<ValidationError> errors =
            value as ReadOnlyObservableCollection<ValidationError>;

        if (errors == null)
        {
            return string.Empty;
        }

        return string.Join("\n", (from e in errors
                                  select e.ErrorContent as string).ToArray());
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}

The error template draws a red border around the adorned element and shows an error icon whose tooltip lists the combined errors:

<ControlTemplate x:Key="errorTemplate">
    <Grid ClipToBounds="False">
        <Image HorizontalAlignment="Right" VerticalAlignment="Top"
               Width="16" Height="16" Margin="0,-8,-8,0"
               Source="{StaticResource ErrorImage}"
               ToolTip="{Binding ElementName=adornedElement,
                         Path=AdornedElement.(Validation.Errors),
                         Converter={k:ValidationErrorsToStringConverter}}"/>
        <Border BorderBrush="Red" BorderThickness="1" Margin="-1">
            <AdornedElementPlaceholder Name="adornedElement" />
        </Border>
    </Grid>
</ControlTemplate>

The ValidationRule and the ErrorTemplate in Action

Finally we can add the validation rule to our binding expression that binds the Text property of a textbox to an EMail property of our business object.

<TextBox x:Name="txtName">
    <TextBox.Text>
        <Binding Path="EMail" UpdateSourceTrigger="PropertyChanged">
            <Binding.ValidationRules>
                <local:RegexValidationRule Pattern="{StaticResource emailPattern}"/>
            </Binding.ValidationRules>
        </Binding>
    </TextBox.Text>
</TextBox>

How to manually force a Validation

If you want to force a data validation you can manually call UpdateSource() on the binding expression. A useful scenario could be to validate on LostFocus() even when the value is empty, or to initially mark all required fields. In this case you can call ForceValidation() in the Loaded event of the window - that is the time when the databinding is established. The following code shows how to get the binding expression from a property of a control.

private void ForceValidation()
{
    txtName.GetBindingExpression(TextBox.TextProperty).UpdateSource();
}
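A related sketch, not part of the original text: after forcing the update you can ask WPF whether the control currently has validation errors, for example before enabling a save button, by reading the attached Validation.HasError property:

private bool IsEmailValid()
{
    // Push the value through the binding so the ValidationRules run,
    // then check the attached Validation.HasError property on the TextBox.
    txtName.GetBindingExpression(TextBox.TextProperty).UpdateSource();
    return !Validation.GetHasError(txtName);
}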
ValueConverters

Introduction

If you want to databind two properties that have incompatible types, you need a piece of code in between that converts the value from the source to the target type and back. This piece of code is called a ValueConverter. A value converter is a class that implements the simple interface IValueConverter with the two methods object Convert(object value) and object ConvertBack(object value).

How to implement a ValueConverter

WPF already provides a few value converters, but you will soon need to implement your own converters. To do this, add a class to your project and call it [SourceType]To[TargetType]Converter. This is a common naming for value converters. Make it public and implement the IValueConverter interface. That's all you need to do.

public class BoolToVisibilityConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // Do the conversion from bool to visibility
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // Do the conversion from visibility to bool
    }
}

How to use a ValueConverter in XAML

First thing you need to do is to map the namespace of your converter to a XAML namespace. Then you can add an instance of the converter to the resources and reference it by using {StaticResource}.

<Window x:Class="VirtualControlDemo.Window1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:l="clr-namespace:VirtualControlDemo"
        ...>
    <Window.Resources>
        <l:BoolToVisibilityConverter x:Key="converter" />
    </Window.Resources>
    <Grid>
        <Button Visibility="{Binding HasFunction,
                             Converter={StaticResource converter}}" />
    </Grid>
</Window>

Simplify the usage of ValueConverters

If you want to use a normal ValueConverter in XAML, you have to add an instance of it to the resources and reference it by using a key. This is cumbersome, because the key is typically just the name of the converter. A simple and cool trick is to derive value converters from MarkupExtension. The following StringFormatConverter shows the idea (useful, for example, if you bind a DateTime to a TextBlock):

[ValueConversion(typeof(object), typeof(string))]
public class StringFormatConverter : BaseConverter, IValueConverter
{
    public object Convert(object value, Type targetType, object parameter,
        System.Globalization.CultureInfo culture)
    {
        string format = parameter as string;
        if (!string.IsNullOrEmpty(format))
        {
            return string.Format(culture, format, value);
        }
        else
        {
            return value.ToString();
        }
    }

    public object ConvertBack(object value, Type targetType, object parameter,
        System.Globalization.CultureInfo culture)
    {
        return null;
    }
}
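A usage sketch for this converter in markup-extension style, assuming BaseConverter derives from MarkupExtension and the converter's namespace is mapped to the prefix l; the bound property and format string are illustrative:

<TextBlock Text="{Binding Path=StartDate,
                  Converter={l:StringFormatConverter},
                  ConverterParameter='{0:dd.MM.yyyy}'}" />

Without the MarkupExtension trick, the same binding would require declaring the converter as a keyed resource first and referencing it with StaticResource.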
Behaviors

Introduction

Behaviors are a new concept, introduced with Expression Blend in Version 3, to encapsulate pieces of functionality into a reusable component. These components can then be attached to controls to give them an additional behavior. Examples of behaviors are drag&drop, input validation, pan and zoom, re-position of elements, etc. The list of possible behaviors is very long.

The idea behind behaviors is to give the interaction designer more flexibility to design complex user interactions without writing any code.

Imagine an application that has a list of customers and the user can add some of them to subscriber lists. This interaction can be designed by providing an "Add" button next to each subscriber list. But if the interaction designer wants to add drag&drop functionality, he needs to discuss it with the developer and wait until the implementation is done. With behaviors he just drags a drag and drop behavior onto each list and we are done.

A simple Border can be dragged by mouse - because of an attached drag behavior.

How to use behaviors in Expression Blend 3

Using behaviors in Expression Blend is as simple as adding an element to the design surface. In the asset library you find a new section called "Behaviors". It lists all behaviors available within your project. Just grab one of these and drag it onto the element you want to add this behavior to, and that's it. The behavior appears as a child element in the visual tree. By clicking on it you can configure the properties of the behavior.

How does it work

To add behaviors to an element you need some kind of extension point. This is an attached property called Interaction.Behaviors. This attached property holds the list of behaviors for that element and passes a reference to the element into the behavior. The behavior can then register itself to events and property changes and so extend the functionality of the element. The idea is simple, but very clever: behaviors don't need any new infrastructure, they just reuse the existing one.

<Border Background="LightBlue" >
    <e:Interaction.Behaviors>
        <b:DragBehavior/>
    </e:Interaction.Behaviors>
    <TextBlock Text="Drag me around!" />
</Border>

How to implement your own behavior

The following example shows the implementation of the drag behavior we used above. Just derive from Behavior<T> and override the OnAttached() method.

public class DragBehavior : Behavior<UIElement>
{
    private Point elementStartPosition;
    private Point mouseStartPosition;
    private TranslateTransform transform = new TranslateTransform();

    protected override void OnAttached()
    {
        Window parent = Application.Current.MainWindow;
        AssociatedObject.RenderTransform = transform;

        AssociatedObject.MouseLeftButtonDown += (sender, e) =>
        {
            elementStartPosition = AssociatedObject.TranslatePoint(new Point(), parent);
            mouseStartPosition = e.GetPosition(parent);
            AssociatedObject.CaptureMouse();
        };

        AssociatedObject.MouseLeftButtonUp += (sender, e) =>
        {
            AssociatedObject.ReleaseMouseCapture();
        };

        AssociatedObject.MouseMove += (sender, e) =>
        {
            Vector diff = e.GetPosition(parent) - mouseStartPosition;
            if (AssociatedObject.IsMouseCaptured)
            {
                transform.X = diff.X;
                transform.Y = diff.Y;
            }
        };
    }
}
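Behaviors can also be attached from code-behind instead of XAML. A minimal sketch using the System.Windows.Interactivity API that ships with Blend; the element name dragBorder is illustrative:

using System.Windows.Interactivity;

// Attach the DragBehavior shown above to an existing element named dragBorder.
var behaviors = Interaction.GetBehaviors(dragBorder);
behaviors.Add(new DragBehavior());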
List of some popular behaviors

Since it's so cool and easy to create your own piece of interactivity, I am sure that we will find hundreds of behaviors available soon. I tried to make a list of some popular ones:

- Zoom Behavior
- Glass Behavior
- Shake Behavior
- Transparency Behavior
User Experience Design Process

User Experience becomes a Key Success Factor

In the past, we focused mainly on building products that fulfilled the functional requirements of the user. Providing the right features is still the prerequisite for a good product, but to turn it into something extraordinary you need to provide a good user experience!

Designing a rich user experience is not only about making up your user interface with some graphics and gradients - it's a much broader concept. It's about creating an emotional connection between the user and your software. It makes the user feel good and so he likes to continue using the software.

Providing a rich user experience is not a thing of fortune. It needs to be planned, designed and integrated into the development of a product. User experience was often considered late in the development process. You have to find out what the user really needs. This can be done by following a user centered approach.

New Tools for Designers

Microsoft recognized that to give development teams the power to create rich user experiences, a lot more graphical tool support is needed than VisualStudio can provide today. So they decided to create a new tool suite - made for designers. This tool suite is called Microsoft Expression. It builds the bridge between designers and developers. It consists of four products:

- Expression Blend is built to create user interfaces in WPF and Silverlight. It can open VisualStudio solutions.
- Expression Design is a lightweight version of Adobe Illustrator to create and edit vector graphics.
- Expression Encoder is used to cut and enrich video files and optimize them for silverlight streaming.
- Expression Web is Microsoft's next generation HTML and Javascript editor. It's the replacement for Frontpage.

Together they are a powerful package. The following illustration shows a sample workflow of integrating a vector image that is created by a graphics designer in Adobe Illustrator into a WPF project that is part of a VisualStudio solution.
1. Elicit Requirements

Like in any kind of software project it's important to know and focus on the target of your development. You should talk to stakeholders and users to find out the real needs. These needs should be refined to features and expressed in use cases (abstract) or user scenarios (illustrative). This work is done by the role of the requirements engineer. Prioritize the tasks by risk and importance and work iteratively.

2. Create and Validate UI Prototype

Creating a user interface prototype is an important step to share ideas between users and engineers and to create a common understanding of the interaction design. It's helpful to only sketch the user interface in a rough way to prevent early discussions about design details. This task is typically done by an interaction designer. There are multiple techniques and tools to do this. Some of them are:

- Paper prototype: Use paper and pencil to draw rough sketches of your user interface. No tools and infrastructure are needed. Everyone can just scribble their ideas on the paper.

- Wireframes: Wireframes are often used to sketch the layout of a page. It's called wireframes because you just draw the outlines of controls and images. This can be done with tools like PowerPoint or Visio.

- Expression Blend 3 - SketchFlow: SketchFlow is a new cool feature to create interactive prototypes directly in WPF. You can use the integrated "wiggly style" to make it look sketchy. The prototype can be run in a standalone player that has an integrated feedback mechanism.

- Interactive Prototype: The most expensive and realistic approach is to create a (reusable) interactive prototype that works like the real application but with dummy data.

It is strongly recommended to test your UI prototype on real users. This helps you to find out and address design problems early in the development process. The following techniques are very popular to evaluate UI prototypes:

- Walkthrough: A walkthrough is usually done early in a project with wireframes or paper prototypes. The user gets a task to solve and he controls the prototype by touching on the paper. The test leader then presents a new paper showing the state after the interaction.

- Usability Lab: To do a usability lab, you need a computer with screen capture software and a camera. The proband gets a task to do and the requirements and interaction engineers watch him doing this. They should not talk to him, to find out where he gets stuck and why.

3. Implement Business Logic and Raw User Interface

4. Integrate Graphical Design

5. Test Software

Roles

Building a modern user interface with a rich user experience requires additional skills from your development team. These skills are described as roles that can be distributed among people in your development team.

- Graphical Designer: The graphical designer is responsible to create a graphical concept and build graphical assets like icons, logos, 3D models or color schemes. If the graphical designer is familiar with Microsoft Expression tools he directly creates styles and control templates.

- Developer: The developer is responsible to implement the functionality of the application. He creates the data model, implements the business logic and wires all up to a simple view.

- Interaction Designer: The interaction designer is responsible for the content and the flow of a user interface. He creates wireframes or UI sketches to share his ideas with the team or customer. He should validate his work by doing walkthroughs or storyboards.

- Integrator: The integrator is the artist between the designer and the developer world. He takes the assets of the graphical designer and integrates them into the raw user interface of the developer. This role needs a rare set of skills and so it's often hard to find the right person for it.
Styles

All your buttons should have an orange background and an italic font. Doing this the conventional way means that you have to set the Background and the FontStyle property on every single button.

<StackPanel Orientation="Horizontal" VerticalAlignment="Top">
    <Button Background="Orange" FontStyle="Italic" Padding="8,4" Margin="4">Styles</Button>
    <Button Background="Orange" FontStyle="Italic" Padding="8,4" Margin="4">are</Button>
    <Button Background="Orange" FontStyle="Italic" Padding="8,4" Margin="4">cool</Button>
</StackPanel>

This code is neither maintainable nor short and clear. The solution for this problem are styles. The concept of styles lets you remove all property values from the individual user interface elements and combine them into a style. A style consists of a list of setters. If you apply this style to an element it sets all properties with the specified values. The idea is quite similar to Cascading Style Sheets (CSS) that we know from web development.

To make the style accessible to your controls you need to add it to the resources. Any control in WPF has a list of resources that is inherited to all controls beneath the visual tree. That's the reason why we need to specify an x:Key for the style. To get it from the resources we use the {StaticResource [resourceKey]} markup extension. To apply the style to a control we set its Style property to our style.

<Window>
    <Window.Resources>
        <Style x:Key="myStyle" TargetType="Button">
            <Setter Property="Background" Value="Orange" />
            <Setter Property="FontStyle" Value="Italic" />
            <Setter Property="Padding" Value="8,4" />
            <Setter Property="Margin" Value="4" />
        </Style>
    </Window.Resources>

    <StackPanel Orientation="Horizontal" VerticalAlignment="Top">
        <Button Style="{StaticResource myStyle}">Styles</Button>
        <Button Style="{StaticResource myStyle}">are</Button>
        <Button Style="{StaticResource myStyle}">cool</Button>
    </StackPanel>
</Window>

What we have achieved now is:

- A maintainable code base
- Removed the redundancy
- Change the appearance of a set of controls from a single point
- Possibility to swap the styles at runtime

Style inheritance

A style in WPF can base on another style. This allows you to specify a base style that sets common properties and derive from it for specialized controls (the keys below are reconstructed, since they are not readable in the source):

<Style x:Key="baseStyle">
    <Setter Property="FontSize" Value="12" />
    <Setter Property="Background" Value="Orange" />
</Style>

<Style x:Key="boldStyle" BasedOn="{StaticResource baseStyle}">
    <Setter Property="FontWeight" Value="Bold" />
</Style>
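A short usage sketch, using the illustrative key names from above: the derived style picks up the font size and background of the base style and adds the bold weight.

<StackPanel Orientation="Horizontal">
    <Button Style="{StaticResource baseStyle}">Base style</Button>
    <Button Style="{StaticResource boldStyle}">Derived style (bold + orange)</Button>
</StackPanel>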
Control Templates

Introduction

Controls in WPF are separated into logic, which defines the states, events and properties, and template, which defines the visual appearance of the control. The wireup between the logic and the template is done by DataBinding.

Each control has a default template. This gives the control a basic appearance. The default template is typically shipped together with the control and available for all common windows themes. It is by convention wrapped into a style that is identified by the value of the DefaultStyleKey property that every control has.

The template is defined by a dependency property called Template. By setting this property to another instance of a control template, you can completely replace the appearance (visual tree) of a control.

The control template is often included in a style that contains other property settings. The following code sample shows a simple control template for a button with an ellipse shape.

<Style x:Key="DialogButtonStyle" TargetType="Button">
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="{x:Type Button}">
                <Grid>
                    <Ellipse Fill="{TemplateBinding Background}"
                             Stroke="{TemplateBinding BorderBrush}"/>
                    <ContentPresenter HorizontalAlignment="Center"
                                      VerticalAlignment="Center"/>
                </Grid>
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>

<Button Style="{StaticResource DialogButtonStyle}" />

A Button without and with a custom control template.

ContentPresenter

When you create a custom control template and you want to define a placeholder that renders the content, you can use the ContentPresenter. By default it adds the content of the Content property to the visual tree of the template. To display the content of another property you can set the ContentSource to the name of the property you like.

Triggers

{RelativeSource TemplatedParent} not working in DataTriggers of a ControlTemplate

If you want to bind to a property of a property on your control, like Data.IsLoaded, you cannot use a normal Trigger, since it does not support this notation - you have to use a DataTrigger. But when you are using a DataTrigger with {RelativeSource TemplatedParent}, it will not work. The reason is that TemplatedParent can only be used within the ControlTemplate; it is not working in the Trigger section. You have to use {RelativeSource Self} instead.

What if a Binding or a Setter is not applied when using a control template

There is something you need to know when setting a value of an element within a control template: the value has a lower precedence than the local value! So if you are setting the local value in the constructor of the contained element, you cannot override it within the control template. But if you use the element directly in your view, it will work. So be aware of this behavior! Here you can find more information about DependencyProperty value precedence: Dependency Property Value Precedence.
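The Triggers section above carries no code in the source; the following is a minimal sketch of a ControlTemplate trigger based on the ellipse button template shown earlier. The trigger property and brush values are illustrative:

<ControlTemplate TargetType="{x:Type Button}">
    <Grid>
        <Ellipse x:Name="outerCircle"
                 Fill="{TemplateBinding Background}"
                 Stroke="{TemplateBinding BorderBrush}"/>
        <ContentPresenter HorizontalAlignment="Center" VerticalAlignment="Center"/>
    </Grid>
    <ControlTemplate.Triggers>
        <!-- Template triggers can address named template parts via TargetName -->
        <Trigger Property="IsMouseOver" Value="True">
            <Setter TargetName="outerCircle" Property="Fill" Value="OrangeRed"/>
        </Trigger>
    </ControlTemplate.Triggers>
</ControlTemplate>

Because triggers defined in the template rank higher than the template's own property sets, the trigger value wins over the TemplateBinding while the mouse is over the button.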
Data Templates

Introduction

Data Templates are a similar concept as Control Templates. They give you a very flexible and powerful solution to replace the visual appearance of a data item in a control like ListBox, ComboBox or ListView. If you don't specify a data template, WPF takes the default template, which is just a TextBlock. If you bind complex objects to the control, it just calls ToString() on them. Within a DataTemplate, the DataContext is set to the data object, so you can easily bind against the data context to display various members of your data object. In my opinion this is one of the key success factors of WPF.

DataTemplates in Action: Building a simple PropertyGrid

Whereas it was really hard to display complex data in a ListBox with WinForms, it's super easy with WPF. The following example shows a ListBox with a list of DependencyPropertyInfo instances bound to it. Without a DataTemplate you just see the result of calling ToString() on the object. With the data template we see the name of the property and a TextBox that even allows us to edit the value.

<!-- Without DataTemplate -->
<ListBox ItemsSource="{Binding}" />

<!-- With DataTemplate -->
<ListBox ItemsSource="{Binding}" BorderBrush="Transparent"
         Grid.IsSharedSizeScope="True">
    <ListBox.ItemTemplate>
        <DataTemplate>
            <Grid Margin="4">
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="Auto" SharedSizeGroup="Key" />
                    <ColumnDefinition Width="*" />
                </Grid.ColumnDefinitions>
                <TextBlock Text="{Binding Name}" FontWeight="Bold" />
                <TextBox Grid.Column="1" Text="{Binding Value}" />
            </Grid>
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>

How to use a DataTemplateSelector to switch the Template depending on the data

Our property grid looks nice so far, but it would be much more usable if we could switch the editor depending on the type of the property. The simplest way to do this is to use a DataTemplateSelector. The DataTemplateSelector has a single method to override: SelectTemplate(object item, DependencyObject container). In this method we decide on the provided item which DataTemplate to choose.

The following example shows a DataTemplateSelector that decides between three data templates:

public class PropertyDataTemplateSelector : DataTemplateSelector
{
    public DataTemplate DefaultDataTemplate { get; set; }
    public DataTemplate BooleanDataTemplate { get; set; }
    public DataTemplate EnumDataTemplate { get; set; }

    public override DataTemplate SelectTemplate(object item, DependencyObject container)
    {
        DependencyPropertyInfo dpi = item as DependencyPropertyInfo;

        if (dpi.PropertyType == typeof(bool))
        {
            return BooleanDataTemplate;
        }
        if (dpi.IsEnum)
        {
            return EnumDataTemplate;
        }
        return DefaultDataTemplate;
    }
}

The resource keys and property assignments below are reconstructed, since they are not readable in the source:

<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:l="clr-namespace:DataTemplates"
        xmlns:sys="clr-namespace:System;assembly=mscorlib">
    <Window.Resources>

        <!-- Default DataTemplate -->
        <DataTemplate x:Key="DefaultDataTemplate">
            ...
        </DataTemplate>

        <!-- DataTemplate for Booleans -->
        <DataTemplate x:Key="BooleanDataTemplate">
            ...
        </DataTemplate>

        <!-- DataTemplate for Enums -->
        <DataTemplate x:Key="EnumDataTemplate">
            ...
        </DataTemplate>

        <!-- DataTemplate Selector -->
        <l:PropertyDataTemplateSelector x:Key="templateSelector"
                DefaultDataTemplate="{StaticResource DefaultDataTemplate}"
                BooleanDataTemplate="{StaticResource BooleanDataTemplate}"
                EnumDataTemplate="{StaticResource EnumDataTemplate}"/>

    </Window.Resources>

    <Grid>
        <ListBox ItemsSource="{Binding}" Grid.IsSharedSizeScope="True"
                 ItemTemplateSelector="{StaticResource templateSelector}"/>
    </Grid>
</Window>

How to react to IsSelected in the DataTemplate

If you want to change the appearance of a ListBoxItem when it is selected, you have to bind to the IsSelected property of the ListBoxItem. But this is a bit tricky: you have to use a relative source with FindAncestor to navigate up the visual tree until you reach the ListBoxItem.

<DataTemplate x:Key="myDataTemplate">
    <Border x:Name="border">
        ...
    </Border>
    <DataTemplate.Triggers>
        <DataTrigger Binding="{Binding RelativeSource={RelativeSource Mode=FindAncestor,
                               AncestorType={x:Type ListBoxItem}}, Path=IsSelected}"
                     Value="True">
            <Setter TargetName="border" Property="Height" Value="100"/>
        </DataTrigger>
    </DataTemplate.Triggers>
</DataTemplate>
import org.apache.http.annotation.Immutable;

/**
 * Pool statistics.
 * <p>
 * The total number of connections in the pool is equal to {@code available} plus {@code leased}.
 * </p>
 *
 * @since 4.2
 */
@Immutable
public class PoolStats {

    private final int leased;
    private final int pending;
    private final int available;
    private final int max;

    public PoolStats(final int leased, final int pending, final int free, final int max) {
        super();
        this.leased = leased;
        this.pending = pending;
        this.available = free;
        this.max = max;
    }

    /**
     * Gets the number of persistent connections tracked by the connection manager currently being used to execute
     * requests.
     * <p>
     * The total number of connections in the pool is equal to {@code available} plus {@code leased}.
     * </p>
     *
     * @return the number of persistent connections.
     */
    public int getLeased() {
        return this.leased;
    }

    /**
     * Gets the number of connection requests being blocked awaiting a free connection. This can happen only if there
     * are more worker threads contending for fewer connections.
     *
     * @return the number of connection requests being blocked awaiting a free connection.
     */
    public int getPending() {
        return this.pending;
    }

    /**
     * Gets the number of idle persistent connections.
     * <p>
     * The total number of connections in the pool is equal to {@code available} plus {@code leased}.
     * </p>
     *
     * @return the number of idle persistent connections.
     */
    public int getAvailable() {
        return this.available;
    }

    /**
     * Gets the maximum number of allowed persistent connections.
     *
     * @return the maximum number of allowed persistent connections.
     */
    public int getMax() {
        return this.max;
    }

    @Override
    public String toString() {
        final StringBuilder buffer = new StringBuilder();
        buffer.append("[leased: ");
        buffer.append(this.leased);
        buffer.append("; pending: ");
        buffer.append(this.pending);
        buffer.append("; available: ");
        buffer.append(this.available);
        buffer.append("; max: ");
        buffer.append(this.max);
        buffer.append("]");
        return buffer.toString();
    }

}
Final project journal¶
introduction¶
As part of our joint project, I will be focusing on a wireless notification/pager system. We are building a printing press for DIY merchandising at concerts or events. Part of the system is a series of wireless pucks that tell the customer when their print is ready to pick up. It resembles the systems used in some restaurants to notify you when your order is ready. The system uses a sender station that talks to the different pucks wirelessly via LoRa (or WiFi). The pucks have internal timer code and notify you by means of flashing RGB LEDs and a vibration motor. The design and look are inspired by the arc reactor of Iron Man.

The sender station lets you connect to a webpage to control it. It uses an ESP chip to serve the webpage as a standalone WiFi access point. From a smartphone or tablet you can send a message to each puck to set the amount of time and the color of the LEDs and start the countdown procedure.

The final aim is to have a system with around 30 pucks and a base station to control them via a web interface on a standalone hotspot. The pucks will use a wireless recharging station. The pucks need to be robust enough to be used at events, so the plan is to cast them in epoxy or PUR, or maybe in silicone.
Slide¶
Video¶
link to project development page¶
project development weekly assignment
Pink-to-matic pager work progress¶
Planning / to-do:

- [x] LoRa code
- [x] puck design
- [x] puck test 3D print
- [ ] puck mold design
- [ ] puck milling
- [ ] puck casting
- [x] Web interface ESP
- [x] Sender protocol and receiver protocol
- [x] timing code
- [ ] sender housing ???
- [ ] charging station
- [ ] how to charge??
- [x] silicone bumper design
- [x] bumper mold 3D print
- [x] bumper mold milling
- [x] bumper casting
1. Receiver Board¶
I am testing out the ATtiny1614 board with LoRa that I made in the networking week.

I wanted to add a NeoPixel ring, but the Adafruit library would not compile for this chip.

Luckily Spence Konde's megaTinyCore package also includes a library for driving NeoPixels: the tinyNeoPixel library.

You can download the tinyNeoPixel library here

It works just like the Adafruit one!
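Below is a minimal sketch of how the ring can be driven with tinyNeoPixel. The pin number and pixel count are placeholders for illustration (not the actual board wiring), and the library mirrors the familiar Adafruit_NeoPixel API.

```cpp
#include <tinyNeoPixel.h>

// Assumed wiring for illustration only: data pin and pixel count are placeholders.
#define LED_PIN    1
#define LED_COUNT  12

// Same constructor style as Adafruit_NeoPixel, which tinyNeoPixel mirrors.
tinyNeoPixel ring = tinyNeoPixel(LED_COUNT, LED_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  ring.begin();
  ring.show();   // start with all pixels off
}

void loop() {
  // Fill the ring with a random color, roughly what the test described above does.
  uint8_t r = random(256), g = random(256), b = random(256);
  for (int i = 0; i < LED_COUNT; i++) {
    ring.setPixelColor(i, ring.Color(r, g, b));
  }
  ring.show();
  delay(1000);
}
```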
Now I have the board change to a random color when a message arrives. I will have to build a protocol so it knows which board ID is addressed and what color to show, based on the message.

So I need a message parser!

But first let's test if I can also get sound and vibration.

That works too, and everything runs on battery power.

I still have to find out how I will charge the batteries.

I also have to add timer code inside the ATtiny1614 to count down!
2. Adjusting my ESP12E board to be a LoRa sender¶

Since I had already done some coding and testing on commercial boards with the LoRa modules attached (see the networking week), I had to keep track of which pins were used when and how to connect the modules to the different boards, and also adjust this in the code.

So I made a small table to keep track of those.

The simple LoRa sender code now works on my ESP12E board.

Time to redesign some boards and make them.

Sender board done, milling failed! I still keep struggling with the milling part of the PCBs.

I will need to try again. This time it was good enough to solder everything on it.

I also made an extra receiver board. So now I have two devices and a sender to test the setup with. I also added the final LED ring and motor to them.
3. puck design/making¶
The design idea for the portable device that lights up started with some inspiration from the Iron Man arc reactor.

So I drew something similar.

I used Fusion 360 to design a model. I would like to make a mold out of it to cast the pucks in epoxy resin, but for initial testing I decided to 3D print them first.

It took some tweaking and reprinting to get to a final design. I needed to adjust the space inside to be able to fit all the parts in the puck itself.

But in the end I got everything fitting nicely inside.

By this time I had also decided that it would be better for time management to go with a 3D printed housing instead of the cast ones. The mold and casting will be for the next spiral.
4. software web interface¶
I am making a simple HTML/JavaScript interface to control the pucks: adjust the timing and RGB color and send the info to the right ID. I started developing this during week 14 (interfaces).

More advanced code, together with WiFi and a server for the interface, gave me resets of the board.

After a day of testing and debugging I asked Edu for help. We thought it was a power issue that made the board reset, but I tested with a bigger PSU and batteries and had no luck. Later I found out that I had to use a different server library:
#include <ESP8266WebServer.h>
instead of the
#include "ESPAsyncWebServer.h"
I had to rewrite a bunch of code to use the other library, because the way arguments in the URL are read is different. But finally I got it working.
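For reference, this is roughly how the synchronous ESP8266WebServer reads URL arguments. The route and the parameter names (id, time, color) are made up for illustration and are not necessarily the ones used in the real interface:

```cpp
#include <ESP8266WiFi.h>
#include <ESP8266WebServer.h>

ESP8266WebServer server(80);

// Hypothetical handler for a request like /send?id=3&time=120&color=FF0000
void handleSend() {
  String id    = server.arg("id");     // with the synchronous server, arguments come from server.arg()
  String time  = server.arg("time");
  String color = server.arg("color");

  // ...build and transmit the LoRa message here...

  server.send(200, "text/plain", "OK");
}

void setup() {
  WiFi.softAP("pink-to-matic");        // placeholder access point name
  server.on("/send", handleSend);
  server.begin();
}

void loop() {
  server.handleClient();               // the synchronous server must be polled from loop()
}
```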
This is what the final interface looks like:
5. Coding the boards¶
I already did a lot of the coding for this project in week 13 Networking
A. Sender board¶
The sender board performs the following functions:

- act as an access point, so we can use an independent WiFi network through the ESP module.
- act as a webserver to serve the HTML interface.
- receive and handle incoming data from the webpage on the internal server and use it to prepare a message for the LoRa network.
- send out a structured message to all connected LoRa devices (a sketch of this is shown below).
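A rough sketch of how such a structured broadcast could look, assuming the widely used Arduino LoRa library and a simple comma-separated message format (the field layout of the real protocol may differ):

```cpp
#include <SPI.h>
#include <LoRa.h>

// Hypothetical message format: "<puckId>,<minutes>,<RRGGBB>"
void sendPuckMessage(int puckId, int minutes, const String &hexColor) {
  LoRa.beginPacket();
  LoRa.print(puckId);
  LoRa.print(',');
  LoRa.print(minutes);
  LoRa.print(',');
  LoRa.print(hexColor);
  LoRa.endPacket();        // broadcast; every puck receives it and checks the id itself
}

void setup() {
  LoRa.begin(868E6);       // assumed EU frequency, must match the receivers
}

void loop() {
  // Example: tell puck 3 to count down 15 minutes and flash red.
  sendPuckMessage(3, 15, "FF0000");
  delay(10000);
}
```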
B. Receivers¶
The receiver boards do the following (a sketch of the receive-and-parse loop follows this list):

- act as a LoRa node in the LoRa network
- listen for incoming packets on the LoRa network
- handle the packet data and determine whether the data is for this node.
- start the internal timer based on the timer data received.
- when the timer interval has passed, alert the user by:
    - activating the small vibration motor
    - animating the NeoPixel ring based on the color data received.
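A minimal sketch of that receive-and-parse loop, again assuming the Arduino LoRa library and the same made-up comma-separated format; MY_ID and the parsing details stand in for the real protocol:

```cpp
#include <SPI.h>
#include <LoRa.h>

const int MY_ID = 3;      // each puck gets its own id (placeholder value)

void setup() {
  LoRa.begin(868E6);      // assumed EU frequency, must match the sender
}

// Expected (hypothetical) packet format: "<puckId>,<minutes>,<RRGGBB>"
void handlePacket(const String &packet) {
  int firstComma  = packet.indexOf(',');
  int secondComma = packet.indexOf(',', firstComma + 1);
  if (firstComma < 0 || secondComma < 0) return;     // malformed packet, ignore it

  int id          = packet.substring(0, firstComma).toInt();
  int minutes     = packet.substring(firstComma + 1, secondComma).toInt();
  String hexColor = packet.substring(secondComma + 1);

  if (id != MY_ID) return;                           // addressed to another puck

  // ...store minutes and hexColor, then arm the countdown timer...
}

void loop() {
  int packetSize = LoRa.parsePacket();
  if (packetSize) {
    String packet = "";
    while (LoRa.available()) {
      packet += (char)LoRa.read();
    }
    handlePacket(packet);
  }
}
```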
6. silicone bumper¶
To make the pucks somewhat more robust I wanted to cast silicone rubber bumpers to put around them.
I started designing a three-part mold based on the initial puck design. I also did this in Fusion 360.
For testing I 3D printed the parts of the mold.
And did some silicone mixing, pouring and casting :-)
And the next day I could demold and see if it was what I expected.
The results could be better and my mold needs some adjustments, but that will be for the next iteration.
7. laser cut plywood tablet stand¶
As a last-minute solution to have a housing for the sender module and also make everything look more finished, I decided to draw a quick and easy tablet stand where I could also put the sender board inside.

I used a box generator called boxes.py to create a basic console-style box.

I adjusted the SVG in CorelDraw to add a little ruler to rest the tablet on.

I used 4 mm plywood to laser-cut this press-fit box. At the back there is a small panel that opens up to put the electronics inside.
Source files¶
code

- LoRaReceiver1614 code
- esp_lora_sender_and_webserver_2 (ESP_LoRaSender code)

electronics

- tiny1614_lora schematic
- tiny1614_lora pcb
- tiny1614_lora kicad
- esp_lora_sender_board schematic
- esp_lora_sender_board pcb
- esp_lora_sender_board kicad

designs

- puck design
- bumper mold top part
- bumper mold bottom part
- bumper mold inner part
- sender console
Daily journal of the last weeks¶
25/5 Milling and soldering of the sender board done!

26/5 Also made an extra receiver board. Now I have 2 self-made boards and 1 Wemos.

27/5 Today I modelled a mold to cast a silicone bumper. I will 3D print it to try out whether it works. Later I can mill parts to make a better mold. I decided to go with a 3D printed housing for now; I don't have the time to mill and cast housings in epoxy. That will be for a next version.
Still have to fit everything in the housings.
I also still have to fix the timing part of the code, more specifically the repeating vibration and lights after the time has passed. I have to double-check whether the timer works correctly.
Also I want to make some small changes to the web interface
28/5

Print the final test housing. Think about the sender housing: laser-cut? A stand for the tablet as well? Think about the charging station, milled out of wood in 2 parts.

29/5 3D printed the mold for the silicone bumper. Going to test it and cast some silicone on Monday. Also found the bug in the timer code; now it works as it should. I had forgotten a flag to enable the timer only once, and I had to put the millis update before the loop, otherwise I never got to the interval. Maybe I should also add a little blink every second to show that it is alive?
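The fix boils down to the classic millis() interval pattern: reset the reference time when a countdown is (re)started and guard the alert with a one-shot flag. A stripped-down illustration (the variable names are mine, not the actual project code):

```cpp
unsigned long startMs    = 0;     // reference time, reset when a new countdown starts
unsigned long intervalMs = 0;     // countdown length received over LoRa
bool timerArmed = false;          // the flag that was missing: fire only once

void startCountdown(unsigned long ms) {
  intervalMs = ms;
  startMs    = millis();          // update the reference *before* the waiting starts
  timerArmed = true;
}

void setup() {}

void loop() {
  if (timerArmed && millis() - startMs >= intervalMs) {
    timerArmed = false;           // disarm so the alert does not retrigger every loop
    // ...start the vibration motor and LED animation here...
  }
}
```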
31/5 Silicone mold parts are printed; now cleaning and trying to cast. Also printing the back part of the puck.

Updating the webpage interface.

1/6 Demolded the silicone casting. It looks promising and usable. Maybe make the sides somewhat higher. Make the middle part smoother for better casting. Put a varnish coating on the printed part. Next time print the middle part in 2 pieces and glue them together for a smoother surface.

Enlarge the pour holes.

Cleaning up with a scalpel is a job that takes patience ;-)

2/6 Cast the next version of the bumper. 3D printed another adjustment to the housing, again 🙂 I think everything will fit inside now.

Should I add a reset function in the code to stop the vibration and LEDs?

Update of the web interface. Update of the puck code. I should fine-tune the breathing algorithm.
02-18-2013 02:49 PM
Hi,
In my application I am using a custom edit field. In this edit field I am using two different images for the focused and unfocused states, similar to an Android edit field. Up to this point it works properly. But when I enter text in the edit field, once the text reaches the border of the image it wraps to a new line. I don't want a new line: the text should stay on a single line and the user should be able to keep entering characters, just like an Android edit field.

Here is the code for this:
public class EditFieldCustom extends EditField {
    Bitmap bkgnd;
    Bitmap bmpFocussedImage, bmpUnFocussedImage;
    String strHint = null;
    private boolean _drawFocus = false;

    public EditFieldCustom(Bitmap unFocussedImage, Bitmap focussedImage, int length, long style, String strHintText) {
        super("", "", length, style | EditField.NO_NEWLINE | EditField.FIELD_LEADING
                | EditField.NON_SPELLCHECKABLE | EditField.FOCUSABLE);
        this.bmpFocussedImage = focussedImage;
        this.bmpUnFocussedImage = unFocussedImage;
        this.strHint = strHintText;
        bkgnd = unFocussedImage;
        setBkgnd(unFocussedImage);
        setFont(NetworkActivity.PFONT_7);
    }

    protected void layout(int maxWidth, int maxHeight) {
        super.layout(bkgnd.getWidth(), bkgnd.getHeight());
        setExtent(bkgnd.getWidth(), bkgnd.getHeight());
        int top = bkgnd.getHeight() / 2 - NetworkActivity.PFONT_7.getHeight() / 2;
        int left = 15;
        this.setPadding(new XYEdges(top, 0, 0, left));
    }

    public int getPreferredWidth() {
        return bkgnd.getWidth();
    }

    public int getPreferredHeight() {
        return bkgnd.getHeight();
    }

    protected void paint(Graphics graphics) {
        if (getTextLength() == 0) {
            graphics.setColor(Color.DARKGRAY);
            graphics.drawText(strHint, 0, 0);
        }
        graphics.setColor(Color.BLACK);
        super.paint(graphics);
    } // end paint()

    public void setBkgnd(Bitmap bkgnd) {
        this.bkgnd = bkgnd;
        setBackground(BackgroundFactory.createBitmapBackground(bkgnd));
    }
}
It would be a great help if anyone could give me a solution.
Thanks..
02-18-2013 05:36 PM
Sorry no idea what you mean by Android Edit Field, but if you search the forum you will find that there have been quite a number of posts related to SingleLineEditField. From memory this is typically achieved by placing an EditField inside a scrolling HorizontalFieldManager.
Regarding the current code, there are a number of issues that I see:

a) Your layout should not invoke anything that could cause a recursive layout. The idea of layout is to define the space requirement for the Field, so you should not do anything that will change this. But in your layout you have
this.setPadding(new XYEdges(top, 0, 0, left));
and setPadding will impact the size requirement. So calculate that once, set it as part of the Field construction, and don't do it in layout.

b) As you indicate by your parameters, layout is passed the maximum width and height. You should make sure that you don't exceed these. Your code assumes that bkgnd.getWidth() is less than maxWidth, for example.

c) If you call super.layout to lay out the Field then you should respect its decision, especially if you then expect it to paint itself. If you want the Field to use all the width and height you give it, then specify USE_ALL_HEIGHT and USE_ALL_WIDTH.

d) I recommend that you look at the standard setBackground method and see how you can set different backgrounds automatically depending on the visual state. Doing that is a much better approach than trying to do it in onUnfocus and onFocus as you currently do.
Linux Mint 17 “Qiana” Cinnamon released!
The team is proud to announce the release of Linux Mint 17 “Qiana” Cinnamon.
Linux Mint 17 Qiana Cinnamon
For a complete overview and to see screenshots of the new features, visit: “What’s new in Linux Mint 17 Cinnamon“.
Important info:
There is some important info in the Release Notes:
- In the installer, “Replace OS with Linux Mint” means erase the entire drive
- Issues with Skype
- DVD Playback with VLC
- HiDPI
System requirements:
- DVD drive or USB port
Upgrade instructions:
- To upgrade from a previous version of Linux Mint follow these instructions.
- To upgrade from the RC release, simply launch the Update Manager and install any Level 1 update available.
Download:
MD5 sums:
- 32-bit: 00ef2ba7f377251852045664376ecebf
- 64-bit: 3d8c3c3e82916e2110f965111b0ee944
Alternative downloads:
No-codecs images:
Distributors and magazines in Japan, USA and countries where distributing media codecs is problematic can use the “No Codecs” ISO images. These images will be made available next week, for both the MATE and Cinnamon edition!
May 31st, 2014 at 6:51 am
Will try it next week – any improvements with hybrid/switchable Radeon GPUs?
May 31st, 2014 at 7:00 am
Thanks! Love the new version. Especially the new updater. Will update in about 3 weeks when the semester is over. For those who wanted to know, all level 1 updates introduced after the RC are included in this final release.
I have one question though, what is the difference exactly between ‘Network settings’ and ‘Network Connections’ and why have they not been merged into one settings applet? That would make a lot more sens to me. Please take note that I am not a developer, so maybe these is a perfectly logical explanation for it.
May 31st, 2014 at 7:09 am
Congratulations!
I have observed a minor bug: when watching an internet video the screensaver is not inhibited. This doesn’t happens with VLC, so in this case the problem seems fixed. I can’t tell about other video players.
Thank in advance.
I really admire the Mint Team work
May 31st, 2014 at 7:09 am
Well done to all concerned! Really looking to forward to getting this installed tonight!
May 31st, 2014 at 7:54 am
Awesome, awesome, awesome! Been running since the RC and all my bugs seem to have been fixed. Now my main operating system. 🙂
Thanks so much!
May 31st, 2014 at 7:56 am
Nice! Been checking the site several times a day the last couple of days to see if it got released, and it did today on my birthday! Thanks for a nice birthday present. Looking forward to try it out as the download is complete.
May 31st, 2014 at 8:00 am
I have been following LM since version 14. I finally made the switch away from Windows / OS X when 17 RC came out. Even though a little problematic, it was still a relief after the tyranny of OSX 10.9+ and Win8. Now those OSes are my slaves instead, if and when I need power apps in virtual machine. I no longer hate facing my personal notebook in the morning or throughout my very long work days in waiting to see what Apple or MS are going to do to my personal property.
Thank you, Thank you, Thank you!
May 31st, 2014 at 8:01 am
Clem,
Thx for a wonderful job on this release. Except for a couple of minor bugs (which are surprisingly persistent since LM15 at least on my hardware), a smooth silky release. Mint-X theme looks well polished. Honestly, this release has really impressed me.
I am having problems with suspend. When my laptop lid is closed, the system refused to suspend in spite of choosing such an option in Power Settings Menu. Any ideas on how to resolve this? I have installed all the available updates…
May 31st, 2014 at 8:01 am
Downloaded the 64 bit stable release yesterday and running great. Will try the 32 bit on my other “problematic” older computer with solutions to freezes later and report back. Thanks to the Linux Mint team for another fine release…:)
May 31st, 2014 at 8:06 am
Jeroen@2
100% agreed on the network utilities. I, for one, prefer the Network Connections front end. It’s what I’ve been accustomed to since Gnome 2, and it seems to behave with a bit more stability — not quite as squirrelly when making changes to eth0.
I think there is a similar situation with Users and Groups. One of the first things I do, if I remember, is install gnome-system-tools, which gives far more control over users and groups — like we had with Gnome2. Trouble is, once installed, it brings in two menu entries for Users and Groups with the same icon. I could probably point to a different icon for the one I like to make that easier, but it would be great to merge the functionality of these two utilities, but likely much easier said than done. I still have to go into the native front end to upload an avatar for the login window user list.
Good news, is Clem indicated some more attention was needed here, and hopefully it will be addressed in future updates/releases. We’ll see what he comes up with.
May 31st, 2014 at 8:09 am
Amazing OS, it is great to see people really do have an legitimate alternative to Windows and OSX.
Many thanks to all involved 🙂
May 31st, 2014 at 8:12 am
As a follow up to comment 6, this is one thing I love about the Cinnamon project so far. I have seen how the Mint team has gradually merged a lot of these little things for us, and it usually ends up being as good as, or even better than the old Gnome2 controls. This gives me hope that Cinnamon will one day be a highly viable option for many many more.
Mate is still great too, so where do I stop with all the really cool stuff coming from Mint?
So look forward to loading this up on my new SSD, once it finally arrives.
May 31st, 2014 at 8:14 am
Congratulations and many many thanks for the hard work and dedication!
I know what my main project will be this weekend…. 🙂
May 31st, 2014 at 8:29 am
Awesome… Updating right now…
May 31st, 2014 at 8:43 am
Hi!
Thank you very much for the best linux distro!
Does one have to reinstall the final release if Linux Mint 17 RC was installed?
TIA
May 31st, 2014 at 8:47 am
Congratulation!
Thanks
May 31st, 2014 at 8:56 am
Awesome. Only had one problem with the RC with Chromium but that has been fixed. So I’m already running the final version since I have installed all the updates 🙂 Good job guys 🙂
May 31st, 2014 at 8:57 am
Congratulations on the release!
I have just installed the v17 Cinnamon RC release. Can I simply run the updater and get my distribution current, or will I need to reinstall?
Thanks again for your efforts
May 31st, 2014 at 9:04 am
just installed the RC (incl. a lot of RC updates). After finishing my feed reader informed me that 17 final is out. Is a new install of 17 necessary or is it identical to the RC incl. all the updates?
May 31st, 2014 at 9:05 am
Installed it yesterday. works like a gen. only issue i saw was during install. when asked whether to install alongside or overwrite existing OS it refers to itself as Mint 16. i did install the final and not the RC.
May 31st, 2014 at 9:08 am
Another nail in the Microsoft coffin. Thank you all for your hard work. Excellent OS.
May 31st, 2014 at 9:13 am
Gorgeous!!
5 Years LTS 😀
Thanks Mint Team 😀
May 31st, 2014 at 9:21 am
Awesome, been using RC and it is blistering fast. Will be updating. Well done Clemm and team.
May 31st, 2014 at 9:28 am
Many many thanks to the mint- team for this truly great work. I replaced LM 16 with 17 last night — it works perfectly after installing the additional WLAN driver. Its fast, functionaly and looks superb.
May 31st, 2014 at 9:40 am
It feels like I’ve been waiting for the new LTS for years. Glad to finally get my hands on it and give it a spin. I appreciate it, and my thanks to the whole Linux Mint team.
May 31st, 2014 at 9:41 am
Hi, I’m very glad to see new Mint17, but I found a bug in nemo when open as root. When I try to change any preferences, nemo doesn’t update the interface and doesn’t record the changes. It’s always at the default.
Anyway, great job…
May 31st, 2014 at 9:49 am
Congratulations. I’ve been using Linux Mint for the past 3 years and hope to continue using it for the foreseeable future. Keep up the good work.
May 31st, 2014 at 9:54 am
Great! I’m using this final version and it’s running awesome.
May 31st, 2014 at 9:56 am
Regarding Comment 2, 6, 7 (Network Connections PLUS Network Settings):
“Network Connections” is the old reliable that Ubuntu keeps maintaining and putting in, even though Gnome dropped it and only goes with Network Settings Applet. In 12.04 / Mint 13 this Network Settings Applet was not functional for many things (like configuring a broadband modem when NOT connected: you couldn’t edit the settings until the network connection was ACTIVE, but you couldn’t activated it because it wasn’t configured!).
Now, in Gnome the Network Settings seems to handle things like 3G modems better, but as I understood the Mint team didn’t have capacity this cycle to port those improvements from the Gnome Network Settings to the Cinnamon Network Settings…. so, as noted above also, it seems the Mint team does have this identified, but for now we need both to accomplish all things, so to ensure full functionality, the Mint team has given access to both utilities in the Network menu. MUCH BETTER than any other Desktop has provided!
As an example of what Network Connections cannot do, you can not configure VPN settings there.
May 31st, 2014 at 10:22 am
So I’m currently using the RC and I’m supposed to install updates from the update manager, but I see no updates even after refreshing. Does this mean that I am in fact running the final now, or is something wrong.
May 31st, 2014 at 10:30 am
Nice release. Have been using this since the start of the RC, quite happily, but have now formatted and installed the stable version. The minor bug about not saving system monitor preferences is there, but hardly consequential. I never found out what caused the loss of shortcut control +C +V +Z functionality – it persisted when installing the stable version without formatting, but disappeared when installing with a full format. Hopefully I won’t see it again !
Very good user guide, thanks for putting that in this release.
May 31st, 2014 at 10:38 am
thanks for final release !!!! now downloading……………
May 31st, 2014 at 10:46 am
Linux Mint 17 “Qiana”; is an extraordinary project. Thanks Clem ve whole team. Greetings.
May 31st, 2014 at 10:53 am
Still using Linux Mint as our default distro and it’s working great, been using 17 since the RC and it’s the best distro around.
The only problem with the new theme is that on html forms when tabbing to a button, it’s extremely hard to tell if you are on that button or not.
May 31st, 2014 at 11:25 am
I can hardly await the download to finish. Thanks a lot for this awesome project!
May 31st, 2014 at 11:27 am
Hi,
My last dist. was “Julia”, it’s worked really good.
I’ve installed the 64b-RC two weeks ago, can I update to 64b-LTS?
I’ve heard about “zsync” but I don’t know if it works with LM ISO.
Thanks. Regards.
May 31st, 2014 at 11:47 am
No Work With Nvidia Chip Driver 304.88 freeze 🙁 i cant install
May 31st, 2014 at 11:48 am
Guys,
Is there a bug with Totem 3.10? I can’t seem to seek videos using slider bar. I’ve tried different format files (avi, mpg, flv & mp4) and all files suffer from the above in totem. I’ve installed SMplayer as my current video player now but I do like Totem as it offers me an embedded playlist.
Other than that a superb release. I suspect the above being an application bug rather than a distro bug.
May 31st, 2014 at 11:54 am
I must say, THANK YOU FOR CINNAMON!
GNOME 3 hurt me so badly. It made me, a man of men, cry and weep. The beauty that was GNOME 2 was defecated up. It was turned into a dung heap unfit even for the donkey to wallow in with the release of GNOME 3.
But our savior of the century, Cinnamon, rescued the day. Cinnamon makes my GNOME experience one of joy-wrought pleasure, rather than an experience more painful than the brutal smashing of a man’s gonads.
I do not know what I would do without Linux Mint. I do not know what I would do without Cinnamon. They are the world to me.
THANK YOU, ALL! THANK YOU LINUX MINT TEAM! THANK YOU CINNAMON TEAM!
You do us a great service, and we appreciate each and every day we can use the most excellent software you have so graciously provided to us.
Please, everyone, join with me in a toast to the entire Linux Mint and Cinnamon team.
HIP HIP HUZZAH! HIP HIP HUZZAH! HIP HIP HUZZAH!
And many of our most heartfelt thank-yous.
May 31st, 2014 at 12:04 pm
@25
Sorry, I’ve already looked the “Upgrade Instructions”
Regards.
@26
I’ve installed the Nvidia 331.38 and it works fine. Have you tried?
Bye.
May 31st, 2014 at 12:46 pm
As a long time Linux user, I have worked with many distributions, both good and bad. I have seen some that were far from being user-friendly, and I have seen some that are designed to accommodate the masses. Linux Mint 17 Cinnamon is more than just an operating system. Linux Mint 17 is a powerful tool for those who are experienced users or just getting into Linux for the very first time.
I have a small telecommunications business and from a professional viewpoint Linux Mint 17 is by far the number one choice for us in our day-to-day computer use. Not only are we using it for daily desktop activities, but we are also using Linux Mint 17 to run an FTP server, an e-mail client and a web server. Yes, all of those are currently running on Linux Mint 17, flawlessly.
For those who are afraid to take that plunge and download it, I give it my utmost recommendation to do so. It is highly stable, very flexible and the development team has released a work of art. You will not be sorry!
A quick thanks to all who have spent countless hours and who have given their dedication to getting this release in the best shape…job well done!
May 31st, 2014 at 12:52 pm
LXLE and Deepin that are also based on Ubuntu 14.04 can run X window using the live CD/DVD with no problems on an Asus X550DP while Ubuntu 14.04 and Linux Mint 17 cannot.
May 31st, 2014 at 12:53 pm
THANK YOU, but LM17 Cinnamon has a very slow startup, more than 1.5 min.
I am disappointed.
May 31st, 2014 at 12:54 pm
Thank you very much! …………….. 🙂
May 31st, 2014 at 1:00 pm
Anyone who installed the RC edition just needs to run Update Manager to get the final version. No need to reinstall. If you’re not seeing any updates then you now have the final edition.
May 31st, 2014 at 1:03 pm
This is getting seriously annoying. I am still having the same problem with my Realtek rtl8188ee card and wireless dropping out after 15 minutes or so. Mint 16 was working fine. I don’t blame Mint 17, it is an issue from Ubuntu 14.04 that I had hoped would be fixed by now. Updated firmware and other things, but it still seems to re-occur.
I’m tempted to try LMDE just to see if it isn’t an issue there. If it is, I may have to dual-boot with something like Fedora until someone upstream can fix this. I have seen a tip using modprobe, but it’s just a temp fix not a permanent solution. I still love Mint, but right now thanks to Ubuntu I can’t use it long-term. GRRRRRRR.
May 31st, 2014 at 1:15 pm
Qiana installed easily on my old Thinkpad X61s, replacing Mint 16. One tiny bug, I installed over the Mint 16 partition and kept my old /home directory – the menu button icon disappeared, I had to replace it manually with the icon in /usr/share/cinnamon/theme/menu.svg.
Apart from that minor issue, what a great release! Will this be the one that will replace Windows 7 on my main laptop? Too early to tell, but congratulations for a great release – Linux Mint with Cinnamon is by far the best desktop Linux out there.
May 31st, 2014 at 1:28 pm
wow thank you for great release! im MINT fan since 2 years now! i was waiting for this one!
May 31st, 2014 at 1:53 pm
I love it! Stable, fast, and visually appealing! Favorite feature is being able to enable broadcom without an wired connection is something I am truly grateful for. Thanks for a solid and exciting release. Live the hot corners too… You have once again all out outdone yourselves, thanks Clem and mint devs!
May 31st, 2014 at 2:10 pm
Thank you guys, getting better with each version.
May 31st, 2014 at 2:29 pm
Congrats on the new release!
Downloading right now.
May 31st, 2014 at 2:36 pm
Will the KDE respin also be an LTS when it gets released? I’ve never really tried Mint’s KDE versions 🙂
By the way, on the homepage of linuxmint.com when you hover over Download, the title still says Linux Mint 16 but when you get redirected to Linux Mint 17. On the download page itself the title in the menu *is* correct.
May 31st, 2014 at 2:37 pm
Mint 17 Package pidgin-musictracker is missing ! Why ?
May 31st, 2014 at 3:02 pm
This will be my last effort on Cinnamon. Firefox crashing and I cannot easily fill in this form as characters just go missing in Mail address and Firefox corrupting TABS
May 31st, 2014 at 3:16 pm
Hi,
great job on LM17, thanx :).
I am dealing wih strange bug. When I have installed doublecmd-qt from software center, cinnamon seems to not seeing this program. I can run it from terminal, but when I press winKey a snad start typing doubl… there is no such program. It seems to be bug from cinnamon 2.2, when I ran same version of doublecmd on cinnamon 2.0 and older there was not this problem. Same problem on cinnamon nightly on ubuntu 14.04.
Is this problem of cinnamon or doublecmd?
Thanx anyone for help.
Cheers
May 31st, 2014 at 3:19 pm
I’ve tried out a lot of distros since making the move to Linux, but I keep coming back to Mint Cinnamon. It works great on my laptop, and it does everything I need. The look and feel work well with my tastes. I just wish there was a trimmed down version I could install on my desktop (an older computer that won’t boot off a DVD or thumb-drive).
May 31st, 2014 at 3:39 pm
I’ll have what number 39’s having.
😀
May 31st, 2014 at 3:50 pm
Hello Clem and team.
What is buggy in newer gedit versions which made you decide to keep an older version?
May 31st, 2014 at 4:05 pm
G-d heared my prayers!!! Oh, thank you Lord Almighty!!!
YES!!! Moving from MATE to Cinnamon without any doubts or second thoughts!!!
Linux Mint 17 “Qiana” Cinnamon – Fa-tality!!! Flawless Victory (release)!!! :)))
May 31st, 2014 at 4:20 pm
With small steps forward in its glory, elegance and functionality. That is my description of the Linux Mint which have a big proud community(including me).
May 31st, 2014 at 4:22 pm
I love u linux Mint
May 31st, 2014 at 4:30 pm
@Tony: If your Firefox is crashing, that is not a problem with Cinnamon! That is a problem with your Firefox!
If the tires on your car fall off, it is not a problem with the maker of the stereo! Don’t blame the innocent for the crimes of the guilty!
Cinnamon is as solid as a rock. It is the foundation upon which all of Linux Mint civilization is built upon. It is solid. It is firm. It is strong as the strongest titanium steel that you could ever hope to imagine.
But Firefox, nay! Firefox crashes for all sorts of people on all sorts of systems. Firefox is like the rotten old tree. The slightest of wind will break off its branches, and crumple its trunk.
Please, I beg a million times of you, do not blame Cinnamon for the problems of the Firefox.
May 31st, 2014 at 5:32 pm
Freeze solutions worked,eventually. 32 bit older desktop computer with NVIDIA Geforce. Had freeze problem with RC. Downloaded from stable torrent in blog. Still says “rc” for some reason. Froze on install so used nomodeser. After install froze after a minute. Tried software rendering mode and changed to 304 driver immediately,then reboot and installed updates. Making comment from the above computer. Zero problems with my newer laptop running 64 bit Cinnamon. Thanks..:)
May 31st, 2014 at 5:39 pm
Just upgraded my RC installation. Thank you Mint dev team for the best version of Linux Mint released to date. You are truly changing the computing landscape… at least from my point of view. I never pass up an opportunity to recommend your product to people that I know who have borked up Windows installations 🙂
Again, Congratulations on yet again another fantastic desktop… and an LTS at that 🙂
May 31st, 2014 at 6:07 pm
Re:20
So it looks like the “final” i downloaded from softpedia yesterday was the RC. false info.
i downloaded 17 directly from your site and it works as it should.
Keep up the good work
May 31st, 2014 at 6:15 pm
Wow, I am very impressed with the work that went in to 17, enough to switch back from Debian stable, very thorough on the tweaks and fixes. As soon as I am able, I’ll be getting out my credit card to make a donation, sadly a modest one from a poor working schlub, lol. 🙂
May 31st, 2014 at 6:27 pm
I’m ill and have some spare time, so I decided to upgrade a laptop with a Mint 13 installation. Today, after 2 days of battling with sources.list and wanting to upgrade to Mint 16, I saw the great Software Manager which repaired all my troubles; when I looked closer I saw that there is Mint 17. And all my troubles with the OS are now past. Even URT can play at 80 fps. And Cinnamon…
Now updating desktop system 🙂
May 31st, 2014 at 6:31 pm
Congratulations! I am using LM16 now. Will upgrade when the KDE edition is released.
May 31st, 2014 at 6:53 pm
A wonderful release, thank you all.
May 31st, 2014 at 7:00 pm
Thanks to Clem and all the Linux Mint team for all their hard work in making Linux Mint available, free for all of us to enjoy!
In my opinion, Linux Mint is one of the most exciting computer projects in the world today and it is great to be a part of it.
May 31st, 2014 at 7:28 pm
Good bye Maya, welcome Qiana!!
May 31st, 2014 at 7:48 pm
How to change nvidia prime to bumlebee. need command to uinstall and install. Only pack what i need. Because, if i change, change nvidia to intel. After reboot system i can’t change intel to nvidia when i try get blank windows and icon with error, and nothing do.
Another argument is power saving. On prime on nvidia driver. Battery life is short and laptop running on 100% every time. Well a temp is high every time. on bumblebee processor and gpu running to 100% when it’s need. Well i must change , only don’t know how. On linux mint. Thanks and waiting for answer.
May 31st, 2014 at 7:57 pm
Working beautifully, but then, so did the RC. Great job, Clem and team. Thank you.
May 31st, 2014 at 8:50 pm
There seems to be an issue where even if i select the nomodeset option and then select the nvidia driver, it doesnt stick. My computer continues to try and boot xorg and then fails as a result. And it seems the only time I can engage boot options is during an install. I cannot do it after an install. What am I doing wrong?
May 31st, 2014 at 8:56 pm
Congratulations on the release!
Im Very Happy ,,, Thanks its looking good
May 31st, 2014 at 9:13 pm
How can I install LM17 and have a fully encrypted drive (e.g. LUKS) for a laptop? Is it an option in the installer?
May 31st, 2014 at 9:38 pm
Just manual upgrade using apt-get from Mint 16 without any glitch. Well done guys. Looking forward to use until 2019.
May 31st, 2014 at 9:58 pm
Thanks Linux Mint community, Qiana works great. I’m a avid follower of Linux Mint and will be definitely in the first 50 who installed this LTS OS.
Once again, great work. Please keep it Up !
May 31st, 2014 at 10:11 pm
I just upgraded from Mint 15 with a fresh install. Very nice, Very nice. I can see that a lot of work went into this and 16 which I did not install because 15 was working fine for me, I linke stability. But 17 being a LTS was a must install. The decision of the developers for Mint to stay with the same package base for the near future will reap large benefits to us users. Now you can concentrate on developing Cinnamon and the other Mint applications and less time dealing with all the other small drags on time that come with switching that base with every version.
Keep up the good work. Very polished and solid release.
May 31st, 2014 at 10:45 pm
Hi,
had a grave regression from the RC to this final release: my touchpad doesn’t work.
on top of that, the issues relating to being unable to change screen brightness and the GPU usage being high constantly continues and makes it completely impossible for me to use Mint.
Laptop is a very standard Dell from 2013, i3 ivy bridge, all onboard, nothing out of the ordinary.
May 31st, 2014 at 11:08 pm
I just want to say thank you. What a beautiful desktop you have created in Cinnamon.
May 31st, 2014 at 11:26 pm
YESSSSS! I’m pretty sure the wait was gonna kill me, LOL. Now to download, install, and see what happens…. 🙂
May 31st, 2014 at 11:42 pm
I want it!
June 1st, 2014 at 12:17 am
Hi Clem,
Thanks for this. Using Mint since 14.
About 17 Cinnamon: using it on a Dell Vostro and installation was ok, even with a GeForce GT 630M. Cinnamon does not start using nvidia-304; X Server doesn’t start at all using nvidia-304-updates. PC freezes from time to time using nvidia-331. xserver-xorg-video-nouveau drains battery life.
See ya.
June 1st, 2014 at 12:22 am
To Clem and dev team
Congrats on the new release. Looking and working great.
I have two questions:
1. What is the best way for me to get involved in QA for Mint earlier? Is there a ppa with latest Cinnamon, or some way to get Mint ISOs earlier.
2. I would like to start getting involved in the design of Mint. Mainly to fix little UI inconsistencies which the main developer team may not have the time to look at. What is the best way for me to do this? I haven’t done this sort of work before but I think I have an eye for detail…particularly with regard to UI.
June 1st, 2014 at 1:04 am
Awesome as always. Thanks, Mint team.
June 1st, 2014 at 1:36 am
I installed LM17 alongside LM16. It installed fine, but now my LM16 won’t boot and it hangs after mounting filesystems. This is annoying because I forgot to backup my software selection…
June 1st, 2014 at 2:12 am
it is out…great…
but still waiting for kde release,
when is it coming ?
June 1st, 2014 at 2:14 am
Used MintKDE 16 for a while, did install Kubuntu 14.04 but that was not what I wanted. Things just don’t run as they should, applications crashed, things didn’t run which did run in MintKDE.
So I decided to install Cinnamon 17. Also this is not what I want. When you are used to the fabulous interface KDE offers then Cinnamon is dull . It’s grey, it’s not shining, sorry for this but this is how I think about it.
Things also didn’t do what I wanted them to do:
I downloaded and installed Google-Chrome. After clicking the package the Package-manager is opened and investigates to see if the desired package can be installed. It can. Then during installation I get the message not all dependencies are satisfied and I need to type “sudo apt-get install -f” in the terminal. I then see that Chrome is installed, but see no icon in the Start menu so I use the .deb file again in Package manager. Now it is installed and also the icon is placed in the start-menu.
I added the wobbly-windows add-on which gave me, when dragging a window on screen, thousands of ghost windows which were not removed. Okay, could be the video-driver. So I tried to install the latest NVidia driver I have using package-manager again: Not possible. xorg-abi-11~14 were not installed. Yeah, and now? Nowhere to be found.
I remembered Mint Driver manager. With this I could install an even later Nvidia driver. After reboot I tested the Wobbly-Windows addon again and nothing had changed. So bye bye add-on.
Also in Package manager, after having clicked the install button, a new window is opened where you can see the installation progress. In here is a tick-box with the text: close automatically when installation has completed. Every time I need to click this, just to see that the next time I install something I have to do it again.
I sure hope Mint will release the KDE version soon so I can use that. For now I will use Cinnamon unless I find too many things which I don’t like or which don’t work, then it is back to 16 until 17 is released.
Sorry for all the negative words I wrote, but this is what I experience here.
June 1st, 2014 at 2:18 am
how to update from mint 16 to 17? Only download the 17’s iso and restore the system?
June 1st, 2014 at 2:49 am
Suuuper! We thank the Linux Mint team! Now learning is even more fun :-)) Many thanks!!!! Greetings from Hamburg 🙂
June 1st, 2014 at 3:24 am
@Gabriel: the best way is to ask on IRC – irc.spotchat.org, #linuxmint-dev channel. Clem & other developers are usually there.
June 1st, 2014 at 3:31 am
I’ve installed Cinnamon 64…. really Awesome….!….Congrats…!
@ Clem & LM Team:
– is it possible to add some shortcut/button to close banshee from the sound applet on the panel..?
Thx…!
June 1st, 2014 at 3:31 am
wow impressed with how nice everything is running. KEEP UP THE GOOD WORK.
I hope the 32 bit version of cinnamon runs as good or close to it as the 64 bit version is running right now on my laptop….this is now my main os…..VISTA JUST GOT DELETED.
June 1st, 2014 at 4:09 am
yeeeeeeeeeeei !!!!!!!!!
awesome !!!
best alternative for Windows junk
June 1st, 2014 at 4:17 am
‘grub install /dev/sda’ FATAL ERROR – just because i use SATA in RAID
June 1st, 2014 at 4:35 am
Update to Mint 17 without loosing your settings from Mint 16 (or other former versions):
Please take a look at:
or you can directly go to the project page:
With best regards,
Mint_BackupRestore
June 1st, 2014 at 4:44 am
Thank you!
I used Ubuntu 10.04 before and did not want to use Unity. I found Mint 16 Cinnamon for myself and was satisfied with it. Now I want to try Mint 17; I hope to stay with this system for a long time. Thank you for your work.
June 1st, 2014 at 4:48 am
Once again, props to the devs. A solid release.
One thing that slightly puzzles me are the inconsistencies between releases – is this an upstream Gnome thing? For instance, in every release since Maya, to restore original scrollbar behaviour I’ve had to create ~/.config/gtk-3.0/settings.ini and add:
[Settings]
gtk-primary-button-warps-slider = 0
BTW – This is not a criticism – keep up the good work!!!
June 1st, 2014 at 5:14 am
nice!
now I am eagerly waiting for the XFCE version… Hope it will be published soon(est)
June 1st, 2014 at 5:56 am
Thank you very much! I installed it to my wife’s notebook on LVM. All works well and we are very glad!
June 1st, 2014 at 7:08 am
In a word “Outstanding” This is the perfect replacement for all Windows flavours, especially 8.1 (aggghh). Runs like a dream and lightening fast – Clem & the team you have my thanks & respect.
June 1st, 2014 at 7:37 am
I have found a few small bugs,
1) Opening and closing Sound Settings has a small delay. Everything else opens and closes instantly.
2) Volume is always at 100% in Sound Settings > Applications, even though my system volume is at 25%. This can be fixed by decreasing the application-volume slightly (decreacing from 100% to ,say, 95% actually decreases it to 25%, even though the slider is at 95%) Rebooting doesn’t help the issue. I hope that explanation wasn’t too confusing!
3) In Nemo, when trying to resize the window/the columns (only in List Wiew), you can’t see the far right-column fully, even though there is plenty of room for it. What this means is that I have to have a wider window than what I’d prefer.
4) Google Chrome and Chromium uses a different theme (fallback-theme?) xsession-errors repeatedly says something like: Window manager warning: Buggy client sent a _NET_ACTIVE_WINDOW message with a timestamp of 0 for 0x2a00001 (Chrome)
5) xsession-errors repeatedly says: (cinnamon:2079): St-WARNING **: Did not find color property ‘-gradient-start’ as well as ‘-gradient-end’
I don’t know what this really does, but might be worth looking into?
6) In the menu, clearing the list of recent files also closes the whole menu. This seems un-intuitive as many times I still want to be in the menu after clearing that list.
June 1st, 2014 at 7:49 am
Thanks for the final and hopefully stable LTS Release! Good work!
June 1st, 2014 at 10:03 am
there is some problem with 64 bit torrent.
corrupted file.
md5 is 860be43cefdc6e0cca2663dc94cfde52
when it is supposed to be 1c3fef2117fad9c9bc905abdeb474ac1
this torrent works
(check sha1, md5 on page)
June 1st, 2014 at 10:29 am
Linux Mint is the best.
Kudo guys!
June 1st, 2014 at 11:11 am
64 bit Cinnamon installation was effortless with a USB stick. I have one gripe: I have to repeatedly enter my authentication password for my wifi connection. Is it at all possible to have a ‘remember’ option please, to sit alongside the ‘Show Password’ box?
June 1st, 2014 at 11:40 am
Thank you everyone who has contributed to this work. I’ve been waiting to find a Linux distribution that I can set up for other family members to use. This represents a major step forward for anyone who wants to help others try Linux. Tidy, quick and fantastically supported. I’ve got Quina Cinnamon running on a Thinkpad T43 and its working excellently. Great job!
June 1st, 2014 at 12:20 pm
I’ve just installed it on 3 older machines, flawlessly (1 was Cinnamon, 2 were Mate). One had an NVIDIA card (GT 8500), but there was no problems.
*buntu, goodbye forever!
Much appreciated! 5 years of security updates will let me pretty much just forget about these machines.
June 1st, 2014 at 1:35 pm
About hdmi sound of hadeon HD series is ok!? In my pc ubuntu 14.04 no sound or issues with ati hd7000!
June 1st, 2014 at 2:00 pm
Merci for this new version, will update soon!
June 1st, 2014 at 2:13 pm
…and nothing about a new version of LMDE? 🙁
June 1st, 2014 at 2:16:17 pm
Sorry I meant to post this on the MATE page!
June 1st, 2014 at 2:27 pm
The best distro ever. I found some bugs with the RC but the final release is excellent. Thank you so much team
June 1st, 2014 at 2:38 pm
Re: my 107 message, I changed the wireless router setting to another and it now connects automatically. The 64 bit Cinnamon 17 release is fantastic. Many thanks to all those involved in providing us with an outstanding O/S. Sincere thanks Henry
June 1st, 2014 at 2:55 pm
i installed rc last week, i wasn’t waiting 17 to be published in a few months.I have to install everything again 🙁
June 1st, 2014 at 3:25 pm
Installed in a hp computer and everything is perfect. Thank you linux mint to make me to forget unity !!!
(Adwaita windows controls buttons seems to be broken or displaying strange).
Good work !!!!
June 1st, 2014 at 8:45 pm
@caner #117 – You don’t have to install again. As long as you’ve applied the updates to the RC as they came in (via MintUpdate) you already have the final version. Just so you know, the final version of Linux Mint always comes within two weeks after the RC is released and you never have to install the final if you’ve already installed the RC and kept it updated. 🙂
June 1st, 2014 at 9:48 pm
Chromium browser will not disappear from the screen when I hit the minimize button. I am running it in virtual box on a IMac.
Is there a setting or something to fix this.
June 1st, 2014 at 9:59 pm
Chromium browser has major problems in this version.
When I maximize the browser it stays on the screen.
When I try to close the browser the screen is black where the browser used to be.
I am running it in virtual box.
June 2nd, 2014 at 2:37 am
>System requirements
>CD/DVD drive or USB port
This must be wrong. The system is now offered as a DVD image only, too large to fit on a CD, so a CD-only drive won’t suffice.
June 2nd, 2014 at 3:06 am
Here is what I did to fix my rtl8188ee wireless network card dropping connection every few minutes and acting very sluggish in Mint 17 (Cinnamon).
Open Update Manager. Click View->Linux Kernels. Click Install under 3.13.0-27. After that is finished, install all updates including up to level 5 (may have to refresh/install multiple times to get all updates.)
Go to Realtek website and download the latest RTL8188CE package for Unix/Linux, version 0012.0207.2013 ()
Extract the folder and open to the folder in terminal. Run the following commands (from readme:)
sudo su
bash compat/script/compat-install.sh
reboot
Network is running fast and havent had a disconnect since (knock on wood.) It was a very frustrating problem before and I had tried several other things I found on the forums including the older kernel & drivers which did not work.
June 2nd, 2014 at 3:58 am
Thanks a lot for your work.
But please please finally change that super duper ugly default wallpaper. It is not conform to mint quality standards 😉
June 2nd, 2014 at 4:23 am
Hi Clem,
great release, but I found a few bugs:
1. Pidgin icon at tray is always missing after it starts
2. Menu editor is very buggy – after adding new elements menu editor and the menu itself almost always become broken and inaccessible. The problem is solved only after removing ~/.config/menus/cinnamon-applications.menu
June 2nd, 2014 at 5:29 am
It looks like a very solid release. Thank you very for your hard work. I will give it a try as soon as possible.
June 2nd, 2014 at 5:41 am
Hi, congratulations, very good job.
However, I found a bug with Cinnamon 64 bits :
– When you try to log in before 1:27 after kernel loading, you have to wait until that time with a black screen to see your desktop.
– When you log in before 57 seconds after kernel loading, you get 30 seconds of black screen and then ugly icons/windows/menu without Cinnamon and the terminal is unusable too.
.xsession-errors shows:
“cinnamon-session[1780]: WARNING: Application ‘cinnamon-settings-daemon.desktop’ failed to register before timeout
cinnamon-session[1780]: CRITICAL: We failed, but the fail whale is dead. Sorry….”
The menu and terminal fixes by itself after 1:57.
Icons/windows are OK after closing session and logging back again.
Machine: Core i7 3770 + 16 Gb RAM + SSD Kingston + Radeon 7950
I imagine that exact timings depend on the machine.
Final release installed by USB. Bug not fixed by upgrades so far.
June 2nd, 2014 at 5:47 am
Lovely piece of work!!. I had a painless upgrade from 16. All I did was to boot using the DVD (from the ISO) and select the option for “something else”. Used my old Ext4 system partition, told the loader to Format and use as / . Then pointed to the home partition and said to use as /home but did not check the option to format. Pointed to the existing swap to continue using as swap.
Lo and behold!! 15 minutes later I had a spanking new system working, with my old desktop, all my Firefox bookmarks, everything intact! great going.
I’m sure that my existing OEM Win8 install is working, but have not needed to check that out.
I’m using a Samsung 15″ Core i5, with Nvidia Optimus. No issues. I installed bumblebee, then primus, and sudo optirun glxgears is fine.
Had a bit of heating up before I installed bumblebee, but that is sorted out too. I’m using Nvidia 331 drivers along with this. Battery life – earlier in 16, I was getting about 4 – 4:30 hrs, but there seems to be some issue now, I’m back to about 2 1/2 hours on a full charge.
Only issue I faced was that when I tried to restore my software selection backed up, I kept getting dependency errors. This is likely due to certain packages that are dropped in 17 but were there in my 16.
Google earth still needs to be started up three or four times. Each time, I see the globe, and then it inexplicably shuts down 🙁 . I’ve installed it through the software channel, so don’t know why this is still happening the same as was in 16.
Overall, a great install, and had fun doing it too. I did have my fingers crossed when installing over the old one, but its working fine, so a round of drinks to the team when I meet them 🙂
Cheers.
June 2nd, 2014 at 6:02 am
Mint 17 Cinnamon: freezes with ATI 4770. With software rendering is ok. :(((
(Mint 16 Cinnamion ATI 4770 is ok.)
June 2nd, 2014 at 6:07 am
Compared to Mint 16 the new version 17 (even the RC) works fine.
No longer required on my new HP laptop:
– Hint for grub to use the Intel graphic as output-device to see anything at all
– Update to kernel 3.11 / 3.12 to be able to use WLAN for more than 2 minutes (or so), rt3290 chip driver was broken in kernel 3.10
The only problem will be Bluetooth support for this chip.
I was able to compile the driver for kernel 3.11 with some minor fixes
and with some more fixes for kernel 3.12.
But the same driver does not compile with kernel 3.13 because some
kernel structure names were changed.
I hope to get this fixed with the help from “the internet” (DuckDuckGo).
Bluetooth support was halfway working (direction “to phone”), anyway.
Thank you from a happy Mint user since Unity came up (12.04)
to all involved in this release.
I appreciate the decision to keep the following releases based
on 14.04 LTS.
Norbert.
June 2nd, 2014 at 6:14 am
The option to ‘automatically remember running applications when logging out’ under the ‘startup applications preferences’ settings looks to be flaky. It only remembered Firefox for me, Calc was forgotten. I also had calc, vlc, and gimp in separate workspaces, and all of these were forgotten. One time logging out and in my screen background was flushed leaving the default background. I think this particular facility needs some more development and testing. Not consequential in the grand scheme of things, I know.
June 2nd, 2014 at 6:39 am
Why has the keyboard shortcut setting to turn the zoom on and off been removed? It’s still there in Gnome settings. You’re driving me to openSuse! What is the command so I can set a custom shortcut? I tried zoom and gnome-zoom but they didn’t work!
Otherwise, mint 17 seems fine. I did install the release even though I had the RC; it only takes 10 minutes, and if you have a separate home partition you keep your settings, although not extra software, of course.
Hope someone can help with the zoom command.
Thanks!
June 2nd, 2014 at 7:05 am
Linux Mint 17 works perfectly on my optimus laptop. Over-heating is not an issue now. I am not using bumblebee but nvidia-prime. Especially, flash video problem which previously (linux mint 15)lead to overheating & high CPU usage, is ended with linux mint 17. Thanx for the significant improvement…
June 2nd, 2014 at 8:29 am
Hi Clem,
thanks for the release. I want to ask if anyone managed to use the drivers from amd.com? I tried to install the 3.12.20 kernel when I saw the new beta driver which supports Ubuntu 14.04, but I just crashed my laptop, so I’m having fun with Arch/Openbox right now.
June 2nd, 2014 at 10:41 am
In Cinnamon, can you customize the Menu? I would like to remove items such as “logout”.
June 2nd, 2014 at 10:57 am
Hello. I was looking around the new power management settings. Was the brightness control slider removed? I can’t seem to find it. The slider is extremely useful to have considering Mint/Ubuntu’s long history of unsupported brightness control. On an HP laptop, I can at least change the brightness from the terminal, but on an iMac at work I can’t even seem to get control over the brightness. I can control the brightness on the iMac with Mint 15 (the currently installed single-boot OS). Thanks.
June 2nd, 2014 at 12:45 pm
132 Andrew Farrington – just nip into the ‘accessibility’ system settings screen, it’s all set up there.
June 2nd, 2014 at 1:38 pm
thank you for this release 🙂
Keep up the great work.
June 2nd, 2014 at 2:08 pm
Thanks Simon Brown: I know it says a shortcut is set, but it doesn’t work. Look in keyboard shortcut settings for universal access options: turn zoom on and off is missing.
June 2nd, 2014 at 2:09 pm
Hi,
Cinnamon freezes while connecting and disconnecting the network connection. This is some kind of bug. Immediate action required!
June 2nd, 2014 at 2:27 pm
Hi, first of all – thank you very much for a really great Linux distro. I’ve been on Windows for so long (which does not mean that I was not looking at Linux from time to time – e.g. do you remember Slackware on floppies? ;)) that I thought it would be hard to switch. But recently I got an old machine for peanuts and decided to go the Linux way, and I really enjoy it, even if I have to get my hands dirty sometimes 😉 Speaking of which – could you please take a look at my two posts here: – it’s about the problem with the flash plugin in the Chromium browser. Maybe my findings will help you resolve the problem for good. And thank you once again for your solid work.
June 2nd, 2014 at 4:23 pm
Congratulations on this new release.
June 2nd, 2014 at 4:30 pm
Still having issues with Chromium window chrome painting corruption when maximising/restoring and resizing.
Also the pepperflashplugin seems to need reinstalling after reboot.
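In the meantime I just re-run the plugin’s installer script after booting, which brings flash back until the next reboot – roughly this, assuming the Debian/Ubuntu pepperflashplugin-nonfree package and its updater script are what Mint ships (I haven’t verified Mint does anything different):
$ sudo update-pepperflashplugin-nonfree --install
$ sudo update-pepperflashplugin-nonfree --status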
Same issues as with the RC cycle. Machine: Lenovo T430 HD4000 gfx
Apart from those two issues everything’s looking ship shape – thanks!
June 2nd, 2014 at 7:02 pm
Software Manager worked fine after fresh install of Qiana 17. Installed the latest updates via “sudo apt-get upgrade”, rebooted system and now the Software Manager no longer works. When running it from the command line the following error occurs:
/usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:57: GtkWarning: could not open display
warnings.warn(str(e), _gtk.Warning)
Traceback (most recent call last):
File “/usr/lib/linuxmint/mintInstall/mintinstall.py”, line 32, in
from widgets.pathbar2 import NavigationBar
File “/usr/lib/linuxmint/mintInstall/widgets/pathbar2.py”, line 1021, in
class PathBarThemeHuman:
File “/usr/lib/linuxmint/mintInstall/widgets/pathbar2.py”, line 1034, in PathBarThemeHuman
animate = gtk.settings_get_default().get_property(“gtk-enable-animations”)
AttributeError: ‘NoneType’ object has no attribute ‘get_property’
June 2nd, 2014 at 9:23 pm
Working nicely on my main PC, and LM17 handles dual monitors better than LM16.
One trivial bug is that if I set a picture for my username in ‘Users and Groups’, it does not display in the login greeter, whereas if I create a new user and set a picture for that new user, their picture does display.
June 2nd, 2014 at 10:06 pm
re #138 (no user ‘face’ picture).
User pic displays OK if using an MDM theme, but not with the supplied HTML themes.
June 2nd, 2014 at 10:44 pm
re 138 – no user ‘face’ picture
syslog shows as below (machine and username changed):
Jun 3 12:23:33 MYPC ag[3520]: mdm-superinit Starting…
Jun 3 12:23:33 MYPC ag[3521]: mdm-superinit Finished
Jun 3 12:23:33 MYPC mdm[3508]: WARNING: failed to copy account pic to .face file: Error opening file ‘/home/MYNAME/.face’: Permission denied
Jun 3 12:23:38 MYPC mdm[3563]: pam_ecryptfs: Passphrase file wrapped
Jun 3 12:23:40 MYPC dbus[959]: [system] Activating service name=’org.freedesktop.systemd1′ (using servicehelper)
Jun 3 12:23:40 MYPC dbus[959]: [system] Successfully activated service ‘org.freedesktop.systemd1’
June 2nd, 2014 at 11:10 pm
just load 17 RC with cinnamon and it great
but I have a problem with google chrome browse .. when switching between tabs the page does not refresh properly.
June 2nd, 2014 at 11:32 pm
Hi,
I have been using Linux Mint since about version 9. I have a couple of desktops that have a Sound Blaster Audigy Platinum 2 24-bit sound card in them. Up through 13.04 LTS the Dolby 5.1 speaker settings worked flawlessly! Since about Mint version 15 the right front and the right rear are mute. NO sound at all! Now with Mint 17 the subwoofer is mute as well! Only the left front and rear speakers work at all! There is nothing wrong with the sound cards… I can install or run the Live DVD with 13.04 on them and all works well! Does anybody know how to fix this issue? Thanks for an AWESOME Operating System!!! 🙂
June 3rd, 2014 at 12:28 am
I hope Cinnamon 2.2 doesn’t have that hanging problem that 2.0 had when changing themes.
June 3rd, 2014 at 12:40 am
I have to give Mint 17/Cinnamon the highest praise I know how to give so:
ROLL TIDE!!!
June 3rd, 2014 at 1:01 am
I just installed it on my laptop, an ASUS X550LD. During installation it was hard to use the mouse – I couldn’t click on buttons, listboxes… I had the same problems with LMDE 201403.
It’s better than LMDE 201403 and it works perfectly after installation. I had many problems with the mouse when I used LMDE 201403.
June 3rd, 2014 at 4:10 am
I noticed this morning that when unmounting a USB stick from the Cinnamon panel after copying to it, the GUI gave me an indication that it was complete and had unmounted (no icon for the stick), but in fact it still had a while to finish copying. Expected behaviour would be to show a notice that the stick was still busy. The current behaviour would result in data loss if I had pulled the stick.
June 3rd, 2014 at 4:43 am
I found a bug.
I’m not sure what caused it, but Nemo is unable to launch. I tried running nemo from a terminal; this is the error message:
“(nemo:28148): GLib-GIO-CRITICAL **: g_file_info_get_attribute_uint64: assertion ‘G_IS_FILE_INFO (info)’ failed”
But the “sudo nemo” command doesn’t have this problem; it launches successfully.
This problem has happened to me twice already since I upgraded to Mint 17.
June 3rd, 2014 at 4:52 am
@Dame try renaming your nemo config folder or deleting it:
1. Close nemo
2. Open up a terminal application
3. as current user:
mv ~/.config/nemo ~/.config/nemo.old
4. Open nemo
June 3rd, 2014 at 5:03 am
I have been using Mint Cinnamon, on an old acer TravelMate 2420, since release 15, and love it.
I have a couple of issues with Mint 17, downloaded last week. As someone else has commented, boot-up is slower than M16, but similar to U14.04, which I used briefly a couple of weeks ago. Also, the Menu buttons for ‘Hibernate, Suspend, Restart and Shutdown’ do not function, just like on U14.04. Is this a bug? Will it be fixed? I have gone back to M16 for the moment.
Otherwise, a great package. Linux Mint is by far the best distro that I have tried, and that’s a lot. RD
June 3rd, 2014 at 5:09 am
@ Seb T
Regarding the problem with flash on Chromium please see the second page of this thread:
June 3rd, 2014 at 5:32 am
Tried the 64bit iso. Installation was perfect, but try as hard as I could, was unable to get Dropbox or (nemo-dropbox) to install properly. The installation hung at the end of the unpacking routine.
Jumped back to the 32bit mode and all went very well. I find Cinnamon with a couple of minor tweaks exactly what I am looking for, so thank you very much.
June 3rd, 2014 at 9:18 am
Can anyone tell me how I can set a search domain in this release such that if I type for example “ping abc” it will automatically try to ping “abc.domainname.com”
I know this can be set in resolv.conf, but since this is automatically generated, my edits will be overwritten. I would expect the network settings tool to have options to change DNS settings such as host/domain name and search path, but I see no such option.
Thanks,
RJ
June 3rd, 2014 at 9:28 am
@Dame try also removing all files in ~/.local/share/gvfs-metadata
rm ~/.local/share/gvfs-metadata/*
Also, if you have any nemo extensions, try removing them.
Any mounts temporarily unmount.
June 3rd, 2014 at 9:31 am
Hi, great work Clem and team, except one thing… My calendar applet does not update the clock…
I tried changing the timezone and language, disabled network time, even changed the theme, but no luck so far…
On start, the clock in the tray is hidden; when I go to settings and change the user format, the clock appears, but it does not move…
Any ideas… Anyone?
June 3rd, 2014 at 9:35 am
@146 :: I found an answer to my own question – although it’s not a very elegant solution…
You can add the search line that would usually go in resolv.conf to:
/etc/resolvconf/resolv.conf.d/base
then run sudo resolvconf -u
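For anyone else wanting this, the whole thing is only two steps – a rough sketch, assuming your search domain is domainname.com (substitute your own):
$ echo "search domainname.com" | sudo tee -a /etc/resolvconf/resolv.conf.d/base
$ sudo resolvconf -u
After that, the generated /etc/resolv.conf should contain the search line again.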
That said, it would be highly desirable to have a GUI option in the network settings screen, as shown below, to allow you to:
1) Change your hostname and/or domain name
2) Change/set a DNS search path
June 3rd, 2014 at 10:02 am
@Dan thanks, it works.
@Michael Webster Since Dan’s solution already works, I’m not sure whether your solution works or not. But thanks anyway.
Still, I think this shouldn’t happen in the first place. There must be a more user-friendly way to prevent this problem from happening.
June 3rd, 2014 at 10:43 am
Hello guys,
I am testing the Mint 17 Mate and Cinnamon versions from a USB flash drive on my laptop with an Nvidia GeForce 9600M GT. Attached to the laptop I have a Full HD monitor connected through HDMI.
The problem is that after installing Nvidia’s driver through Driver Manager, when I go to Nvidia X Server Settings I am presented with only 2 options, ‘Application Profiles’ and ‘nvidia-settings Configuration’.
Where are the other options, especially the monitor options? What did you do?
Also, going to ‘Displays’ you have to uncheck Mirror displays to see that you have 2 monitors (WTF?). I can see my second monitor there but I can’t make it display anything; it’s not coming out of standby (black screen).
This worked OK in Mint 16. Any ideas?
Thanks.
June 3rd, 2014 at 10:51 am
FYI: the most recent Qiana Cinnamon update, altering the core package structure, forced the “cinnamon” package removal on a Release Candidate system, resulting in a blank screen after reboot. How to fix: drop to the recovery console, enable networking, re-install the “cinnamon” package manually, reboot.
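In case it helps anyone else, the recovery steps amount to roughly this from the root recovery console (a sketch from memory; it assumes networking was enabled from the recovery menu and that the package is simply named cinnamon):
apt-get update
apt-get install cinnamon
reboot
After the reboot the desktop should come back normally.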
June 3rd, 2014 at 11:12 am
Good work Mint team! I have Cinnamon installed on a Mac in Parallels virtual environment. After installing Parallels Tools Cinnamon starts in Fallback Mode. I tried the 32 bit and 64 bit versions. Both with the same problem. Is there a solution?
June 3rd, 2014 at 11:39 am
Simply incredible. As a Linux user familiar with most distros, I recommend using Qiana Cinnamon; in my opinion it is the best of all, a perfect balance between performance and graphical handling of the desktop.
June 3rd, 2014 at 1:27 pm
Time to start converting XP machines for friends and family! Fantastic timing for an LTS release. Clem, I can’t tell you how much I appreciate your work. Best distro ever.
June 3rd, 2014 at 4:31 pm
Linux Mint 17 – Great and best operating system. Thank you.
One thing looks like a bug on my Acer Aspire 5732Z laptop: after waking up from suspend mode (if the laptop was closed), or after turning the screen on again (if it was off using the Fn key), the status bar, menu, and other elements of Cinnamon disappear.
So I must restart Cinnamon to see these elements again. This bug happens all the time with Mint 17 RC, but it did not happen with Mint 16. I did update the system, and even ran “mdm-recovery” just in case. It looks like Cinnamon “forgets” to refresh the screen for some reason.
I wonder if it can be fixed.
June 3rd, 2014 at 8:31 pm
Hello, I am new to Linux Mint and I think it is great, and I would like to know if anyone can help me with Qiana. When I try it, it does not boot; it gets to the part that says login or user, then makes the little sound, and afterwards the screen stays black and only the cursor is visible.
My PC is an Acer Aspire 5630 netbook, and the strange thing is that with Petra I had no problems; in fact I am using it right now.
June 3rd, 2014 at 8:47 pm
Linux Mint Cinnamon Qiana, like older versions, is a pure gem. The only thing I have noticed so far is a problem with shortcuts, like “Ctrl + Alt + T” for the terminal not working after you change from Spanish QWERTY to Spanish Dvorak. And the terminal is at full transparency the first time it’s opened, so you have to change the color and transparency level in the profile settings, otherwise you won’t see what you’re typing.
Thanks for your hard work!
June 3rd, 2014 at 10:10 pm
What about KDE? When will it be released? I love the KDE environment.
June 3rd, 2014 at 10:30 pm
Anyone know when the PPA for Cinnamon stable will be updated? TIA.
June 4th, 2014 at 2:14 am
My previously reported inability to display a user picture in the mdm login screen only occurs on one machine. On my laptop it works fine, so I assume its a one-off glitch caused by something I installed on my main machine. Please disregard previous report.
June 4th, 2014 at 7:08 am
Hi, I have a big problem with Linux Mint 17: the lock screen doesn’t work. Please do something about this. It is a bit stressful that after a while the screen goes off and I must move the mouse to make it appear again.
June 4th, 2014 at 8:47 am
Tony@174
Have you tried uploading your user picture from Users and Groups utility? I can’t explain the difference from one machine to the other, but my question is merely a workaround, and not a complete resolution. Not sure how important it is for you at this stage.
I can’t get the background image to display correctly either, it flashes up but then disappears into the HTML (Clouds) theme background. Guess it just needs a little more work.
June 4th, 2014 at 9:45 am
@174 Tony: I too have this problem. It used to work fine in the RC and perhaps also initially in the final version. Now it doesn’t.
June 4th, 2014 at 11:27 am
@ricardo post 170:
It must be some problem with the video drivers; even so, I recommend you wait a bit longer, because this version came out with several problems.
Maybe in a while this problem of yours will be solved.
By the way, did you try it in live mode from a USB or DVD, or in a virtual machine?
Regards
June 4th, 2014 at 11:30 am
Hi all, does anybody know how I can fix the Zoom option? I turn it on and it doesn’t work. I have already installed all the update files but nothing fixes it. Any ideas?
Greetings
June 4th, 2014 at 12:04 pm
Thanks for dropping the virtually-useless-and-privacy-invading Zeitgeist file indexing! Everything runs much faster now. This is important to me, as I’m milking the use out of some older hardware as long as I can, and need the extra performance. It also saves me the tedious steps I used to follow to disable and remove Zeitgeist after every fresh install of Linux Mint.
I’ve been using Linux Mint 17 Cinnamon and Mate for several days now, and I feel this is easily *the best release ever*.
June 4th, 2014 at 12:31 pm
179 Leo. The zoom in and out works fine for me. I am not sure how it is mapped for your computer, but on mine the windows key+alt+= zooms in fine, i.e. a simultaneous 3 key press. If you are, perhaps, not clear where the ‘Super’ key is mapped then the keyboard layouts system setting should show this when you click on the small keyboard icon. Also worth checking there is a suitable entry in the universal access section under keyboard shortcuts on the adjacent tab. Hope this helps
June 4th, 2014 at 1:08 pm
Hi Simon, thanks for your reply 🙂
I always activate the zoom from the applet, may be there is the problem. I will check activating the zoom from the keyboard like you say, thanks for your help 🙂
June 4th, 2014 at 1:54 pm
Will there be an Xfce edition?
June 4th, 2014 at 2:43 pm
Enable numlock is now disabled, and suspend no longer works.
How is this better?
June 4th, 2014 at 4:25 pm
As soon as the first updates are applied, it runs in Software Rendering Mode. (Running in Virtualbox)
June 4th, 2014 at 4:28 pm
Thanks for including ubuntu’s forcepae boot option, that was missing in the RC iirc.
My mother (83) and her laptop (9) will appreciate it.
June 4th, 2014 at 4:52 pm
Linux Mint 17 Desktop Sharing does not work when enabled; you get this message when connecting with a VNC viewer from a remote machine:
“No supported authentication methods”
any fix?? thanks! btw, awesome release!!
June 4th, 2014 at 5:05 pm
Does Linux Mint 17 have the b43 wireless driver?
June 4th, 2014 at 5:21 pm
Can I please have a list of drivers on Linux Mint 17?
June 4th, 2014 at 6:31 pm
I just downloaded the latest M17 upgrade – a biggie – and afterwards I found it had erased my existing .mozilla and .thunderbird files and replaced them with blank, new-install ones.
Happily for me, I had upgraded from M14 a few hours earlier and was able to restore my old files. I’ll bet I’m the only one on the planet that could do this.
Why can’t the existing .mozilla and .thunderbird files be saved somewhere – or restored after such a major upgrade?
I think there could be some very unhappy upgraders shortly.
June 4th, 2014 at 7:00 pm
I’m someone with some knowledge, but everything I know I’ve learned by myself; I’m not a developer, nor a hacker or anything else, just someone who wants to try something different and fast. After using Windows for some time, I stopped using it and started using Linux: first Kubuntu, Ubuntu, Backtrack (which I loved a lot), I tried Mac, then Kali, and now I’ve just installed the previous Mint about 1 week ago. Today I saw this; for the moment I can’t install it because my internet connection is very bad, but I love this Mint so much that as soon as possible I’m going to try this new version. Thanks a lot for this wonderful work.
June 4th, 2014 at 8:06 pm
I’ve been using Mint 16 KDE 64 edition and love it….After using KDE for a while, I would think that Cinnamon and the rest would be boring.
I am used to the plasma desktop and cannot see using anything else.
Can’t wait for the release of LM17 KDE64…is it going to be out soon? I want the protection of an LTS but am willing to wait for the release of KDE
I thought Fedora 19 was cool, but Mint 16 KDE converted me from Windows…can’t see myself ever going back to windblows again.
Please release the KDE edition soon, so I can start breathing again…till then, I will stick with LM16 KDE64
Linux Mint 16 is so polished that waiting for the KDE version of Mint17 is going to be painful. But the pain will be offset by the wonderful use that I’m getting out of this machine now.
Thanks for all your hard work…Like someone else said earlier..this looks like another nail in M$ coffin 🙂
June 4th, 2014 at 10:56 pm
@157 MiPr
Thanks for your investigation; the workaround you mentioned in the forum seems to work for fixing flash issues after rebooting. Hopefully Clem will incorporate some intelligence into mint-adjust.py so that changes to the chromium default file are detected appropriately.
Still don’t have any fix for Chromium’s window chrome corruption though, likewise very annoying. Maximising, restoring and resizing Chromium loses the window decoration and tab-bar, requiring minimise/restore or desktop swap to redraw it. I’d be keen to get confirmation from others, but did read about the same issue from others in the RC blog comments so I guess I’m not alone here. This issue only exists in Chromium ie. doesn’t affect anything else on the desktop.
June 4th, 2014 at 10:58 pm
@188 Eli
That broadcom chipset (b43) should be supported. But to be sure, simply boot a USB stick and see if the wireless is working. I seem to recall some issues with that chipset being misidentified as another in earlier Linux kernels, IIRC that has been fixed.
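If you want to double-check which Broadcom chip you actually have before worrying about drivers, a couple of standard commands run from the live session will show it (the grep patterns here are just examples – adjust as needed):
$ lspci -nn | grep -i network
$ lsmod | grep -E 'b43|wl|brcm'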
June 4th, 2014 at 11:04 pm
@190 Grizwald
Mint’s model does not really support upgrades between major releases. Though there are plenty of ways to achieve it and most users doing so will likely know what they are doing; tar-ing up their home folder before going ahead. Usually, pointing Mint’s installer to your existing /home partition, all other things being equal, will work successfully, but this is not guaranteed nor I would imagine heavily tested. Backing up /home at very least is always advisable in these situations, even with an installer that is upgrade aware, which future versions of Mint are much more likely to be.
June 5th, 2014 at 1:30 am
On my AMD A8-5600K I’m unable to install Mint 17, because it stalls during boot, both from a USB stick and from DVD. I tried installing Mint 16 and upgrading it to 17, but the result is exactly the same. My PC freezes up, the monitor blinks if it is in stand-by, and the reset button does not correct this state. Only unplugging the power cord makes a difference :(
June 5th, 2014 at 2:14 am
@193 Seb T
I also observe the “unpleasant behaviour” of Chrome/Chromium – there is something wrong with them when it comes to redrawing the window. On my side it is very apparent when resizing the window – it “flickers”, showing a black background several times, but that’s all. Something similar to what you describe I only encountered when trying Mint 17 in VirtualBox.
Out of curiosity I tried how it works on Ubuntu 14.04 and I can observe the same effect – maybe a bit less pronounced, but it is definitely there. I mean: the problem most probably is not LinuxMint-specific. BTW, what graphics drivers do you use? I’m on some old Nvidia card and AFAIR the problem appears when I use the proprietary drivers but not when using Nouveau.
June 5th, 2014 at 7:07 am
Hello, I burned Linux Mint 17 Qiana Cinnamon 32-bit to a DVD from Windows, and when booting from the DVD the Linux Mint logo appears, then the screen goes black and nothing happens. I have also tried creating a bootable USB with Unetbootin from Zorin OS and the same thing happens, and that is after having checked the md5sum, which came out correct. I don’t know why this happens to me; with Linux Mint 16 it didn’t. Can anyone help me??
June 5th, 2014 at 7:20 am
Very solid so far. Running 64 bit version. My only problem is that when I start Chrome for the first time after a boot I get the message that Chrome didn’t shut down properly & the option to restore my tabs. Not a big deal and I haven’t played around with the extensions to see if one of them is the culprit.
June 5th, 2014 at 8:11 am
In my previous comment (no. 169) I wrote that Cinnamon is not refreshing the screen properly after suspend mode…
But now it’s working OK.
All I did was open the Power settings and change the brightness a bit (to make sure the configuration file would be saved) – and after that I didn’t have any problems.
My guess is that it happened because Linux Mint 17 was using the old configuration file from Linux Mint 16, which might be in my /home folder(?)… just a guess.
June 5th, 2014 at 8:41 am
Hi,
I tried Cinnamon one more time (I’ve been doing this since its first release) and boy, what changes you brought to it!
– First, speed: the menu and the edges are snappy (at last).
– Reliability: I can configure things without blowing Cinnamon apart.
I feel I can let xfce/compiz rest for a while; I’m going to use this for the time being.
A big thanks to all the developers for this. I’m also pretty pleased to learn that you are sticking with the LTS base (meaning we’ll get all the future versions of Cinnamon).
June 5th, 2014 at 8:43 am
Eli@188
I continue to run into issues with automatic wireless adapter configuration on a Dell Latitude D630, even in LM17, and it’s related to your question. Seb T says it might have been fixed, but I did not experience that fix. I still have to run the following procedure (this has always worked for me, maybe not for everybody):
Of course, you need to be wired for this, which is the glaring problem that this creates.
$ sudo apt-get update
$ sudo apt-get install firmware-b43-installer
$ sudo apt-get remove bcmwl-kernel-source
$ sudo reboot
After the above has been done, I go into the additional drivers utility, and point to the driver it recommends. Reboot again, and life is good.
I have commented on how it would be great if this was finally resolved, but the problem seems to be upstream. With Deepin, an Ubuntu derivative, it works just fine out of the box. So obviously, they are doing something to force the issue. It also worked flawlessly with Manjaro, non-Ubuntu based of course. Maybe that points to a Linux problem, maybe not. I haven’t researched it extensively.
As you know, it’s been an ongoing, aggravating issue.
June 5th, 2014 at 9:11 am
Flash Not Working in Chromium Browser
We only have Chromium (not Chrome) in the native Mint repository. So flash really needs to work when we install Chromium from the Mint Software Manager or Synaptic. It does not. There is this LONG thread on the forum. Techies have tinkered and tinkered — some with success, some not. But we need something that works natively for us non-techie types. Something that *just works*. Please?
June 5th, 2014 at 9:16 am
“Remote Sharing” (vnc) not working: “No supported authentication methods”..
Update: based on Ubuntu’s forum, this is a quick fix for that; I tested it and it works across reboots.
On Terminal run:
$ gsettings set org.gnome.Vino require-encryption false
$ export DISPLAY=:0.0
$ sudo /usr/lib/vino/vino-server
After that you should be able to vnc to Linux Mint 17… maybe someone else may need this until an update gets released..
Here.
June 5th, 2014 at 9:28 am
Eli
Just a follow up, but there is supposed to be a new Drivers Manager feature that allows you to install additional drivers without Internet.
The comments suggest running the Drivers Manager, and inserting your Linux Mint installation DVD. It’s supposed to mount as a temporary package repository. You could probably run the process I mentioned after that.
Give it a try if you are so inclined. Please report good or bad results.
June 5th, 2014 at 11:17 am
@201 PB
As I recall, the issue had to do with the kernel mis-detecting certain flavours of wireless adapter. I do have some sympathy with the kernel devs on these kinds of issues, as these chipsets are often OEM and not manufactured in accordance with the exact specifications laid out by the designers. As a result, the PnP-style probing can yield confusing results, and as we know Linux is actually a good deal more PnP than Windows, so these isolated cases are more likely. Generally, in Windows, you’ll be required to download and install drivers from the laptop (or whatever) maker, guaranteeing the correct driver.
I’m not sure that these issues will ever be resolved entirely, but in any case you have a workaround so that’s good.
June 5th, 2014 at 12:15 pm
I’ve appreciated Linux Mint for the past two years as a wonderful system for computers old and new, and an easy transitional experience for former MS Windows users. Linux Mint Cinnamon 17 is a brisk and responsive performer on my primary laptop, an Asus K54C with an Intel Pentium 2.3ghz dual processor and 6 gigs of RAM. Linux Mint XFCE is a wise choice to install on older laptops with lesser hardware resources. As a computer consultant/tech, I’ve tested a number of LINUX operating systems with my array of test laptops and netbooks. It is easy to understand why Linux Mint has risen to the top of the list as the operating system of choice among LINUX-based OS’s. Great work to all of the developers and contributors who continue to make a great system even better. LINUX MINT Cinnamon 17 LTS is a rock-solid winner. Thanks!
June 5th, 2014 at 1:03 pm
@178 Leo
Thanks for replying. I tried it in live mode from USB and DVD. And yes, it is a problem with the video card; I have an Nvidia GeForce Go 7300.
Is there any way to get it working so I can try Linux Mint 17?
June 5th, 2014 at 1:46 pm
Can someone please tell me how to get the nomodeset option to stick? I am unable to install cinnamon on my Dell laptop because of the xorg driver. Even when I boot with the nomodeset option and change driver after install, it does not stick. I am not able to use this version at all. Others must be experiencing this.
June 5th, 2014 at 1:52 pm
For me it does not boot from DVD or from USB on my Acer Aspire 5630.
June 5th, 2014 at 1:56 pm
Seb T@204
I understand that it will never be an exact science so-to-speak. In fact, this is always the risk with any piece of hardware. But this issue has been ongoing for a certain range of adapters, and it continues with LM17, so no fix here, if what you say is indeed the problem.
As I mentioned, some distros have been able to address it one way or the other. Namely, it is a non-issue in Manjaro and a non-issue in Deepin. As those are the only two others that I’ve personally tested, I cannot speak for the rest. I am not necessarily criticising Mint. I’m really just pointing out that somebody out there has a solution that works out of the box. Could Mint implement something similar?
As far as my workaround is concerned, it depends on a wired connection, which a growing number of notebooks are not equipped to handle. Also, as mentioned, the Driver Manager now has a function to mount the LM installation CD as a package repo to get drivers installed, etc. Perhaps I will have a moment to test that. But it’s still an aggravation specific to certain adapters — ones whose workings we’ve long understood.
June 5th, 2014 at 2:47 pm
I’m eating crow, but happy crow. Driver Manager works great offline for installing the Broadcom driver. While not quite as simple as an out-of-the-box solution, it’s nowhere near as clunky as my workaround. Great work Clem.
Eli@188, feel free to run the LM17 installation offline. After an initial boot of your newly installed system, mount either your DVD installation disk or USB stick, then start the Driver Manager. Once started (it is very slow and leaves you wondering if anything is happening), simply select your driver and install. Then reboot, and you should be wireless.
June 5th, 2014 at 6:31 pm
Excellent! One minor bug: the “hide label” option in the power management applet has no effect.
June 5th, 2014 at 7:01 pm
I just installed LM17 Cinnamon 64 on my laptop, replacing W!ndows 7 Ultimate, which awfully let me down after it froze in a boot loop several times a week and “trapped” all my data (later saved by a Linux USB live boot). I’m still new to Linux but I found LM a really interesting and beautiful distro. I played around with the features and stuff a little bit.
Until I decided to install Krita, that marvellous digital painting program; imho it reminds me of Corel Pa!nter. Simply because drawing is my hobby.
But I couldn’t get it to start properly by clicking on the icon in the menu bar. When I click on it, a small window pops up that says:
krita: Critical Error
Essential application components could not be found.
This might be an installation issue.
Try restarting, running kbuildsycoca4.exe or reinstalling.
However, I can still manage to make it run by typing this in a terminal:
$ kbuildsycoca4
$ krita
I did google for a solution, but I only found a few methods and none of them works for Krita in LM17 Cinnamon. I read that Krita depends on KDE. Maybe I should wait till the LM17 KDE release, perhaps.
So I seek help from the experienced fellows here, if you guys have a solution to make Krita run properly by clicking on its icon in the menu bar. I would really appreciate that.
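One workaround I might try in the meantime (just an idea, untested – the launcher path is a guess and may differ on your system) is to make a local copy of the Krita launcher and have it run kbuildsycoca4 first:
$ cp /usr/share/applications/kde4/krita.desktop ~/.local/share/applications/
Then edit ~/.local/share/applications/krita.desktop and change the Exec line to something like:
Exec=sh -c 'kbuildsycoca4 && krita'
That way clicking the menu icon would do the same thing as the two terminal commands above.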
p/s: btw, the Graphics Tablet setting didn’t really work for my “Wacom Bamboo 16FG 4×5”. I have to use a few xsetwacom terminal commands to change it to my needs, like rotate half and touch off.
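For reference, the xsetwacom commands I mean are along these lines (the device names are just examples – list yours first and copy the exact strings):
$ xsetwacom --list devices
$ xsetwacom --set "Wacom Bamboo 16FG 4x5 Pen stylus" Rotate half
$ xsetwacom --set "Wacom Bamboo 16FG 4x5 Finger touch" Touch off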
June 5th, 2014 at 7:56 pm
Best DE ever. Just one problem with Cinnamon: remove the
“remove this applet”
option when not in panel edit mode. Please Clem, consider it. I have installed Mint for some people who don’t know what Linux or Windows is, and they constantly remove applets.
Please consider adding an option to allow removal only in panel edit mode (i.e. sound applet, menu applet etc.)
Thank you and keep up the good work
June 5th, 2014 at 8:35 pm
Hi, Linux Mint Team
Test LM 17 LiveCD 64bit
(i5 M520, ATI Mobility Radeon HD 4500)
Issues
-For example, with Nemo in fullscreen and the top-right Hot Corner active (icon visible), the Hot Corner icon covers Nemo’s window closing button (X).
Tested on a 15.6″ 16:9 Full HD display.
-There is no hibernate option in LM 17 Cinnamon.
-Switch User (MDM): with System Settings – Login Screen – Clouds (HTML), the Shutdown and Restart options do not work correctly (black screen).
Great distro
Thanks
June 6th, 2014 at 1:26 am
I’ve been looking forward to this release for some time. But I have to admit I was a little afraid, after seeing the Release Notes issue with AMD APU’s and MSI motherboard combos. I’ve got an AMD A10-5800K on an MSI A78M-E35 motherboard. I have to say the install went flawlessly, no problems at all. Fwiw, I also have a radeon HD6670 graphics card. Could be the bug doesn’t hit A10’s or only hits APU’s that are using the onboard radeon gpu?
Anyway, thanks so much! I’m really enjoying the new release, and recommending Mint to an awful lot of people these days.
June 6th, 2014 at 3:39 am
Hi Mint team,
this is definitely great, as I upgraded from old Lisa directly to Qiana… however, I have experienced quite a few problems ever since:
1. I am using a VPN, but facebook won’t load in Google Chrome; maybe it’s Chrome’s problem, I don’t know.
2. I tried Chromium earlier but I can’t input anything there, so it’s totally unusable; that drove me to download Chrome for Linux.
3. When on the VPN, facebook can load in Firefox. But Firefox crashes from time to time, which forces me to stop everything and restart the computer because it freezes the machine.
Does anyone have the same problem, or is it just that my computer is way too old?
Thanks!
June 6th, 2014 at 6:43 am
179, 182, 182 Leo and Simon Brown.
Zooming in and out, i.e. increasing or decreasing the amount of magnification in steps, is done by the super/alt/- or super/alt/= combinations, and they work for me, but the super/alt/8 combination to turn the zoom on or off doesn’t work, even though it is listed in the Accessibility settings. In the Keyboard Shortcut settings, Universal Access section, turning the zoom on or off isn’t listed at all. It seems this has to be done from either the settings or the applet, or by setting a custom shortcut, which is why I asked about the command to turn it on or off, as it’s needed to set up the custom shortcut. Why did the preset shortcut have to be removed? It’s still in the Gnome 3 settings.
I’ve also found that the lack of ffmpeg means Audacity can’t import audio files from my phone, which are .m4a. Another point sending me back to openSUSE!
June 6th, 2014 at 11:55 am
UNSOLVED: Firefox keeps asking for quit confirmation
Could we also get a fix for this problem? It’s a nuisance and everything suggested on the forum involves editing system files — something I’m too green to try on my own.
June 6th, 2014 at 12:55 pm
Linux Mint 17 Cinnamon + Compaq Presario V2000 laptop (Intel Centrino 1.5ghz + Intel 82852/855GM)
Improved #1: The default mint menu at the bottom-left of the screen now correctly shows the background color of the actual menu for ALL installed themes.
Improved #2: GUI/system responsiveness on my hardware seems just a bit more fluid compared to Cinnamon in Mint 14~16 (all of which I could never keep installed on my machine; I always ended up reverting back to XFCE).
Bug #1: However, the drop shadows for ALL windows still remains the wrong color (white).
Bug #2: Any added extensions that involve blurring the background just don’t work properly. I don’t know if this is a hardware limitation (see above for my specs), a bug with the current Intel driver, something wrong with the kernel, or something else entirely.
Summary: Sure, Cinnamon is only a bit slower than XFCE… but for me, it’s mostly acceptable. If only those graphics issues could be resolved for my hardware, I’d FINALLY be able to stick to Linux Mint Cinnamon and brag about it to my friends, who also have old hardware (comparable to mine) and are thinking of switching to Linux Mint.
June 6th, 2014 at 1:06 pm
Love the amount of work that went into this and the result – it’s pretty awesome. Still have sporadic freezes with zooming, regardless of platform (AMD / Intel) or graphics GPU (nVidia / Intel / AMD). They don’t happen very often compared to Cinnamon 2.0 but they still do happen. Wish there was a way to nail the reason and reliably reproduce it – I’ve opened a bug report. It has been like this ever since Cinnamon 2.x…
MATE is very reliable but compared to Cinnamon it’s the poor cousin in terms of fit and finish and appearance (GNOME2 was not great art) – just my opinion. It is significantly faster though and sadly I must stick with it until the Cinnamon freezes are definitively resolved once and for all. Very thankful for the LTS strategy – finally some sanity in the long night of Ubuntu alpha / beta madness masquerading as final releases.
Cheers and many many thanks to the Mint Team.
June 6th, 2014 at 3:21 pm
Very solid so far and easy to install. Running 64 bit amd version in Lenovo B575e laptop. Only issue is that computer does not resume from suspend. No good. However, I’m going to stick with this.
June 6th, 2014 at 6:55 pm
Thank you so much!! And in Arabic
شكرا جزيلاً (thank you very much)
June 6th, 2014 at 7:37 pm
I was finally able to install Linux Mint 17 Cinnamon 64-bit on my Acer Aspire 5630 netbook and it runs wonderfully.
Here are the steps that worked for me.
Boot with the LiveCD or USB, go to the menu and choose the second option,
“Start in compatibility mode”.
Then I installed it normally, following the steps it asks for.
Reboot. Wait until the welcome screen, then press the keys: Ctrl + Alt + F1.
Login: enter the name you chose.
Password: your password.
This gives us access to a terminal, where we type:
sudo apt-get install nvidia-current-updates-dev
Wait for it to install, then reboot and enjoy Mint.
June 7th, 2014 at 2:08 am
One of my grandsons gave me a Linux computer to replace the broken-down XP machine that I was once using. We live 500 km apart and he is mostly tied up with his business, so I cannot reach him for instructions on how to use it. One of my problems is that the programs have alien names that I do not understand, nor do I know what they are expected to do. I have been using computers since the early nineties, but mostly for writing because I am a writer, although I have had experience with graphics.
Can somebody please direct me to a site that can help me???
June 7th, 2014 at 2:34 am
Thanks to the Mint developers for this wonderful distribution. In my view, “Linux Mint will focus on the LTS versions of Ubuntu” is a very sound decision. It ends up being something like grandpa Debian, but with its face turned toward the ordinary user. The version race of Ubuntu and, correspondingly, Mint releases always drove me crazy. Using an LTS distro while still having the freshest software packages is just the thing! I’m glad! :v:
The main thing is: don’t abandon the development of Cinnamon. It’s the best!!!
June 7th, 2014 at 3:31 am
Fresh install, from LM 9 to LM 17. It’s been a while, and it has been working perfectly except that some dependencies cannot be installed. Lots of work to do to configure, but thank you so much for all your hard work.
June 7th, 2014 at 6:33 am
Great release. Here are some suggestions from me;
1- The keyring feature should not be on by default, because it is not something the majority would use. It is also hard to disable.
2- In power settings, the “shut down the PC when I press the power button” function doesn’t work; instead it prompts the shut-down dialog when I press the power button.
3- Activate screensaver / turn off display after 30 minutes/1 hour. But why not 2 hours, 3 hours? These are what I need; these time options should be included.
4- Setting a program to autostart should be easier, by right-clicking its icon in the Mint menu for example.
June 7th, 2014 at 8:13 am
LM 16 was my first distro, and when I saw LM 17, I immediately decided to come back to Mint. Now I’m wondering why I ever left. LM 17 installation was ridiculously fast and easy. I love the new Driver Manager. On other distros, getting my Nvidia card working is a real pain, but Mint takes care of everything with just a couple clicks.
Qiana is an advance for the entire Linux community and one of the best operating systems around. Congratulations and thank you to the developers for this achievement!
June 7th, 2014 at 9:09 am
LM 17 Cinnamon is working great and it is great overall.
Two small issues with Dropbox:
1) Dropbox doesn’t automatically update files when I change them.
2) Right clicking on the Dropbox icon sometimes only brings up a box asking to remove the applet. Can’t get to the Dropbox menu when that happens.
Thanks!
June 7th, 2014 at 9:53 am
Ramon Ware@226:
You can ask for help in the Linux Mint forum; it will be easier to get it there than here. There is even a Spanish section, if necessary…
June 7th, 2014 at 1:54 pm
Further info on my comment @199 about Chrome’s initial startup after boot. If I exit the background process(es) (“Continue running background apps when Google Chrome is closed” – option in settings) prior to system shutdown, rather than just exiting the Chrome tabs, it works fine on startup after the next boot. The next step is to see which of those background processes is the culprit. This is new to Mint 17; I never exited the Chrome background processes in Mint 16.
June 7th, 2014 at 4:38 pm
🙂 It works fine, no problems. I had to install in compatibility mode, and then install the Nvidia 304 driver.
June 7th, 2014 at 5:42 pm
Ramon Ware@226:
as suggested by jungle_boy, post usage queries in forums.linuxmint.com
You might also want to look at the User Guide at though you can obviously ignore the first part about installation.
In brief, LibreOffice is the word processor/spreadsheet suite, Thunderbird is the email client, Banshee is the music player, and Firefox is the web browser. Since they are arranged in logical groups on the Cinnamon menu, it should be easy to work out which programs to use.
June 7th, 2014 at 6:10 pm
Hi there,
Today I reinstalled Mint to this new version and so far it’s quite good – not as good as Petra though. There are some minor glitches (no Synapse? a deb solved it; chrome+docky = weird at best; and a few more) BUT one huge PAIN that drives me INSANE:
Pressing left Alt (as in Alt+Tab or my custom shortcuts) throws me to the next workspace! I can’t even finish the shortcut (e.g. the Tab in Alt+Tab). The only ‘workaround’ so far is having just one workspace, which is unbearable in the long run!
It is similar to what I experienced some time ago in some previous version of Ubuntu – there, on occasion, Alt also threw me to the 2nd workspace, but not every time, and logging out and back in usually solved it. It was caused by some faulty keymapping of the Alt key (I can’t recall what key it was remapped to, but it was something with ‘mod’ in it I think) and was triggered only sometimes, right after login (I don’t remember exactly what it was back then). Now it’s different because it’s always there – a new login doesn’t do anything :((
I’ll be happy for any help, otherwise I’ll have to drop Cinnamon :(( and probably Mint too :(( I don’t want that…
June 7th, 2014 at 7:27 pm
Hello, has the Firefox bug been corrected? The one where, every time we close Firefox, it asks whether to save the session/tabs (or something), and even after checking the option not to show it any more, it keeps popping up.
Hugs!
June 7th, 2014 at 7:33 pm
And is HDMI support working well? The last time I connected the HDMI cable on my PC with Linux Mint 17 to play video on the TV, nothing was shown on the PC screen and the picture only appeared on the TV.
Note: The non-free NVIDIA driver was already installed. My PC is not hybrid.
June 8th, 2014 at 3:07 am
Nice release, everything seems to be working smoothly. The only thing I couldn’t get working in this release (like previous releases) is connecting to my Logitech Bluetooth Audio Speaker Adapter; it doesn’t even pair. Has anybody figured out how to use a bluetooth audio adapter or bluetooth headphones?
June 8th, 2014 at 4:49 am
Fabulous ! I thank you all very much.
June 8th, 2014 at 6:48 am
Renan @237
Firefox’s prompt to save the open tabs is a Mozilla Firefox feature, nothing to do with Linux Mint. I don’t like this feature either, but the Mint developers can’t do anything about it.
June 8th, 2014 at 8:16 am
My Mint Menu is corrupted (see attached ScreenShot).
It happens only on menus which contain more items than can be displayed vertically.
When hovering the cursor, it “paints” the highlight over each item repeatedly without clearing the highlight effect when leaving the item.
Changing the theme doesn’t help. How can I “reset” my DM or get rid of this ugly cosmetic bug?
Thanks in advance,
ScreenShot:
pe
June 8th, 2014 at 8:23 am
Damnit! I found it! I can reproduce the bug.
Bug visible if:
Settings -> Effects -> “Enable fade effect on Cinnamon scrollboxes (like the Menu application list)”
is checked.
If I uncheck this setting, the visual glitch is gone.
Recheck this setting, the glitch is visible again.
Can someone confirm this?
pe
June 8th, 2014 at 8:35 am
Virtual Box Problems
1) Host Linux Mint 16 – Olivia – Cinnamon 64-bit: any guest, either 32-bit or 64-bit, can be added.
2) Host Linux Mint 17 – Qiana – Cinnamon 64-bit: no 64-bit guest can be added.
The option for choosing a 64-bit guest is not available while creating a new machine in VirtualBox, hence a 64-bit guest cannot be added.
June 8th, 2014 at 4:48 pm
Very nice OS!! Many more people should use Linux!! It is much better than Windows. Well, I have convinced a good friend of mine to use Linux; I hope more will follow.
June 8th, 2014 at 4:52 pm
I have had LM 17 Cinnamon 64-bit for 2 days now; the OS runs superbly… good – better – LINUX 🙂 Congratulations… to you Linux Mint makers!! Perfect
June 8th, 2014 at 5:38 pm
Hi,
I like Qiana.
Found a “bug”: I can no longer set the sensitivity of the mouse to change the speed of the cursor. It worked on Petra. The slider is there but has no effect. Logitech mouse. Any ideas?
Thanks
June 8th, 2014 at 10:58 pm
This absolutely will not install on a system with an AMD A10-6800K. When you start the install, the system reboots – every time! Maddening.
June 9th, 2014 at 12:24 2:22 am
thanks
June 9th, 2014 at 4:03 am
I just upgraded Linux Mint Cinnamon 16 to 17 on my Linux Mintbox 2. Everything seems to work. I’m happy 🙂
June 9th, 2014 at 4:14 am
So, yes, optimus video cards are supported? I’ve left Mint for Ubuntu 14.04 because it works…. I’d like to come back. So yes?
June 9th, 2014 at 8:18 am
pe@243
I can’t duplicate that, at least not according to your set up. I went into Effects and turned the fade effect on like you said, my Mint menu seems to be rendering just fine. Are you using a proprietary driver? My test laptop is not.
June 9th, 2014 at 8:31 am
Hi Clem,
Thanks so much for the improvements to the Graphics Tablet driver in the latest Cinnamon patch. The ‘Map to monitor’ and ‘Keep aspect ratio (letterbox)’ buttons seem to work on the dual-monitor setup for me now!
Unfortunately, the Wacom Intuos5 ‘touch’ behavior is still going on even while I’m in ‘Tablet (absolute)’ Tracking Mode. Because of all the spurious mouse-event signals this sends into the system, it’s still impossible for me to use the tablet in general use.
However, since the tablet’s ‘Active area’ now maps properly to the monitor and its dimensions, I’ll try to see if a ‘walking on eggshells while sneaking past the sleeping pit bull’ technique with the pen will work at all–i.e., absolutely DO NOT let any part of your hand contact the tablet. This is proving to be incredibly awkward atm, but who knows?
Anyways, thanks again for all the hard work the dev team is doing with this great new LTS release, it rocks! I hope the ‘touch’ thing can be straightened out soon so I can move completely off those other Well-Known(tm) platforms, hehe.
June 9th, 2014 at 10:21 am
Alright, first of all the obligatory compliment: Mint 17 is brilliant, thanks a lot!
I do have a few drawbacks that I will list in another post, but really they are just details and I applaud you once again for making the desktop PC usable, stable and looking good.
The first thing I would like to point out here, is actually about contributing with ideas or comments, feature request, bug reports etc.
It is very UNCLEAR where one should go to do such things IMHO.
There is the community website which has an Ideas section, but it seems dated and quite confused. Again this may be my opinion, but there it is: it’s not very encouraging.
What is that “Score” thing, how are the ideas processed, why is there such a mix of really old things and newer comments?
Then there is the Launchpad sections for Bug reports and Blueprints. This is probably more aimed at developers, but also looks to me as a confused mix of old and new.
And finally, the comments on the blog post for the release of the distro. This is by far the most popular place to comment, report bugs, suggest ideas it seems. So, this is where I write.
But it doesn’t really seem like the most logical place to me.
So anyway, my first point is that, IMHO, there is some clarification to be done between blog, main site, community site, Launchpad, forums etc.
Oh, and there are Local Communities and “Planet”, whatever that is…
Oh yeah, almost forgot the Segfault blog…
This page helps a bit:
But on the whole I think there are way too many places to go to, not to mention that you have to register each time and that each web page has its own look and feel, which seems fragmented. It’s not very encouraging for newcomers, and I guess the community would be stronger (or at least seem so) if more people gathered around fewer hangouts.
This may only be a matter of presentation and could be solved by some rethinking and some webdesign, I guess.
Or maybe it is just me…
Thank you for your consideration.
Right, now that this is said, I can write my actual comments!
June 9th, 2014 at 10:43 am
Right, here we go:
My first comment was about not being able to set the date & time in a custom format anymore, but actually I just discovered I could still do so by right-clicking on the applet and choosing ‘configure’. So I’ll just say it is not very discoverable and should maybe be in the Date & Time preferences?
My other little disappointments are all about the way applets and themes work. This is quite good as it is, but I wonder why they aren’t updated like any other software using the update manager? Refreshing the online themes list to see if there is an update seems a bit user-unfriendly.
I also wish there were some kind of “seal of approval” for the applets and extensions, making sure they work for this or that version of Linux Mint/Cinnamon. I understand they are community-driven and thus difficult to follow, but it’s a pity to have such a long list of available applets and not know if they will break your system or work just fine. The comments usually help, but anyway, maybe there is something to be done here.
Maybe more applets could be developed or checked by the dev team?
A fully working Messaging Menu would be welcome for instance.
I’m also thinking that some applets are really brilliant and would deserve to be included by default, making Mint out-of-the-box even better (because I guess a lot of regular users don’t install extra applets).
Or maybe something like different repositories: one official repo for the applets by the Mint Team, one repo for applets approved for your current version of Cinnamon, and one repo for others, which the user can install at their own risk.
I don’t know, just ideas…
June 9th, 2014 at 12:31 pm
@247: I found out how to solve the problem with the mouse sensitivity, in case anybody has the same problem:
Open a terminal.
Identify your device:
$ xinput --list --short
Show the settings of the device with the ID number shown by the above command:
$ xinput --list-props "idnumber"
Look for the line "Device Accel Constant Deceleration ("number")" and remember the "number". The default value should be 1.000000000000…
Set the desired sensitivity (any value above 1 decreases the speed, and any value below increases it):
$ xinput --set-prop "idnumber" "number from above" "desired speed"
Changes take effect immediately. You have to reset the value on every reboot.
Hopefully the Mint team fixes that!!! ATM the settings in the GUI for mouse configuration don’t change any value in xinput when you check via the terminal. 🙁
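Until it is fixed properly, one way to avoid retyping the command after every reboot (just an idea, not something the GUI offers) is to add it to your startup applications, or drop a small autostart entry into ~/.config/autostart/, for example (replace the placeholders with your actual values):
[Desktop Entry]
Type=Application
Name=Mouse sensitivity fix
Exec=sh -c 'xinput --set-prop "idnumber" "number from above" "desired speed"'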
June 9th, 2014 at 12:55 pm
Mint 17 Cinnamon works well on a bootable USB-Stick. The problem with “english-only” on a Mint 16-USB-Stick seems to be fixed – thanks.
But when installing it on a hard drive (same on 2 PCs) the “Applets” and “Desklets” are missing – too bad.
The update came to a standstill 3 times, and it seems this was caused by other apps which I tried to run in parallel. I had to restart my PC and the update manager, and I didn’t touch anything until the update was finished.
Flash is not working on one PC.
The PS2 mouse didn’t work in one case and I had to add a USB mouse (seems to be a known issue).
VLC is still crashing when starting videos on my NAS.
I love Mint but I am not sure whether I will keep Mint 17 on my hard drives. I didn’t have such problems with Mint 16.
However, I will use it on my SD card and USB stick, where it is better than Mint 16.
Thanks for your work.
June 9th, 2014 at 2:49 pm
PB@253
No, I use Qiana Cinnamon in VMware, no special driver.
I tried some themes, and I *think* this visual bug appeared after that.
Is there a way to “reset” something to defaults?
June 9th, 2014 at 3:53 pm
Hi
First off, this is, as usual, a nice piece of work. I only have two small issues that I hope you could sort out.
1) Mintsources no longer shows the speed of mirrors. If I open up mintsources and try to change the distribution mirrors, only a list of mirrors shows up. There is also the usual speed column but it is never populated with anything. It does look like the list got sorted, though, but there are no speed graphs as were introduced in 16, so there is no way to see how fast a mirror is compared with the other ones. Is there any way I could bring this back? I used it a lot to compare the mirrors.
You can see a screenshot here
Though it is in Danish, you should get the idea. “Hastighed” means speed, by the way.
2) The PIN code of the built-in 3G modem in my laptop cannot be saved. Even if the tickmark is set to automatically unlock the SIM card, the PIN code gets asked for every time the machine is powered up. It does not seem to get saved to (or retrieved from) the keyring. Could this also be fixed please? It works great in 16 but is now broken in 17.
I appreciate your feedback on this.
Thank you so much.
June 9th, 2014 at 5:12 pm
Love Mint I have a computer that would not run windows stopped or wouldn’t boot put the new mint and works like a charm. Qiana Cinnamon it is the best yet thanks for all the hard work on this Linux FOREVER!!!!!!
June 9th, 2014 at 5:16 pm
LinuxBaby@260
MintSources does show the speed bars for me. But they take a while to show up since obviously the system has to do some sort of testing for each mirro to come up with a speed figure for it.
June 9th, 2014 at 5:18 pm
the only issue I have had is with Dell Inspiron 1501 Broadcom wireless would not work right off. Just kept working with it and finally got it to work….lots of terminal time….
June 9th, 2014 at 5:28 pm
Pleased so far with Mint 17 Cinnamon. I used to have Cinnamon “freezes” on LM16 occasionally, so am watching to see if that happens with LM17.
I reported previously that the “face” icon/avatar selected for my username does not show up at the login screen, or when unlocking the screen, on my desktop PC; but it does work OK on my laptop however.
The difference seems to be that my laptop just has full disk encryption, whereas my desktop has full disk encryption of the root system plus separate /home folder encryption (my /home folder is on a different drive). It seems like MDM can’t see my “face” icon until the user password is entered to unlock the encrypted /home folder. Don’t know if there is a way around that, but it isn’t particularly important to me. Just recording it here for reference.
June 10th, 2014 at 11:17 am
I’ve had two problems with 17.
First with Chrome. After installing Voice and upon opening the browser it indicates that the Voice Plug In didn’t open. Reinstalled both parts several times and nothing would make it work correctly.
Installed Chromium, it includes Voice and it works well. Problem is when I start up the laptop and then Chromium it always starts off saying that Chromium was not shut down properly the last time. I always shut down Chromium before turning off the machine.
Any help would be appreciated for either browser.
June 10th, 2014 at 1:15 pm
Re Firefox prompting to save tabs when closing. I stand corrected, this is caused by a mint-specific file. See post 220 above which has this link on how to fix:
June 10th, 2014 at 3:35 pm
I totally agree with Lito’s suggestions (255,256) – there definitely could be some refinement like ‘update all installed spices’ – and the communication platforms are to cluttered.
June 10th, 2014 at 7:31 pm 11th, 2014 at 6:53 am
Hi! Great release!, But I find a little bug in cinnamon version (with four different computers), I can’t put a shortcut of terminal on desktop,even if you do a dist-upgrade and terminal’s shortcut was on desktop, it disappears of the desktop…
Thank you so much
June 11th, 2014 at 12:06 pm
Thank you clem for the alt plus mouse wheel. it is accurate 2700k as well? I want people to experience a warm color temp at night because it’s 2014. Monitors don’t need to glow blue at 6500k at night of you’re not doing color sensitive work.
Thanks,
Tim
June 11th, 2014 at 1:00 pm
Hi Mint team. Thanks for Cinnamon and linux mint. Almost no issues found after installing.
I enjoy using it every day. Thanks!
June 12th, 2014 at 3:12 am
Hi mint team
gTile extension is not working in mint 17
(I am having trouble with all extensions like configure option is not working)
Please look into the issue
June 12th, 2014 at 12:32 pm
First time using LinuxMint…. Is very good, and easy to use. Thanks guys for this OS.
June 12th, 2014 at 12:56 pm
Thank you clem for the alt plus mouse wheel for zooming,???
Get f.lux (Windows version with Wine – it has more features and a slow 60 minute transition) 9:11 am
Here is a color temperature chart so you can understand what the numbers mean
Clem you know what the right thing to do here would be
Another good chart
June 13th, 2014 at 10:13 am
Just installed LM17 Cinnamon edition, and no doubt this is the best yet. I dumped Windows Vista and now my computer is 100% LM. Great job guys. Thanks you Mint Team for the great effort put into this version. Long Live Linux Mint 🙂
June 13th, 2014 at 12:04 pm
On SEVERAL laptop computers with Linux Mint 17:
Suspend is working by selecting ‘Suspend’ from the ‘Quit’ menu or closing the laptop screen.
Suspend is NOT working when set to ‘5 minutes’.
Is there a fix?
June 14th, 2014 at 3:29 am
My Thanks to everyone on the Mint team for all the work in providing such a great o/s in Mint17.
I transferred to Mint15 from WinXp and am very happy with its operation.
I have come across one problem, relating to the Totem video player, which occurs with both the nvidia 331 & 304 drivers.
Totem worked fine in Mint 15 playing all video files, with the two clutter additions to environment enabled. However on Mint17 it has a tearing problem with the top 1cm of the screen when playing 4:3 format videos. The SMplayer and VLC both work perfectly without any tearing in Mint17, and streaming all video [including 4:3 format] works perfectly in firefox as well.
Thanks Colin
June 14th, 2014 at 3:51 am
I can’t get the Power applet icon to display in the panel of my laptop (Asus K55A) which is a rather big inconvenience.
If I select the applet and then Configure and click “Highlight”, a small rectangle flashes in the panel then disappears (which is what happens on my desktop PC).
Anyone had the same problem?
June 14th, 2014 at 5:16 am
re 276 above. Solved. I selected “hibernate” and then woke the laptop up, and the Power icon appeared in the panel for the first time ever. Have shut down and restarted several times since and the Power icon appears every time so far. Seems like it needed to hibernate in order to initialise something ….. Hope this helps anyone else with same issue. | http://blog.linuxmint.com/?p=2626 | CC-MAIN-2017-39 | refinedweb | 19,456 | 72.66 |
I want to do a custom Joystick project (following Kenton's guide, and maybe referencing some Djinny's work) without the Arduino IDE. I want to dive into more behind-the-scenes C code and get some practice with that world. The problem is of course that things get really complicated quite quickly.
I've got a project built off the teensy template here:
And I added the Teensy basic code to main.cpp.
But when I try to make, I get an error about the Joystick variable.
My question - what does selecting Tools > USB Type > Joystick actually *do*, and how do I replicate it sans the Arduino IDE?
I've looked through the source for hours without much luck. I tried doing #define USB_HID, thinking that would do it, but no luck so far. Help appreciated.
My main file looks like:
The error I get it isThe error I get it isCode:#include "WProgram.h" #define USB_HID extern "C" int main(void) { #ifdef USING_MAKEFILE // To use Teensy 3.0 without Arduino, simply put your code here. // For example: pinMode(13, OUTPUT); pinMode(0, INPUT_PULLUP); pinMode(1, INPUT_PULLUP); while (1) { // read analog inputs and set X-Y position Joystick.X(analogRead(3)); Joystick.Y(analogRead(2)); // read the digital inputs and set the buttons Joystick.button(1, digitalRead(0)); Joystick.button(2, digitalRead(1)); } #else // Arduino's main() function just calls setup() and loop().... setup(); while (1) { loop(); yield(); } #endif }
Code:MacBook-Pro-2:teensy Andrew$ make [CXX] src/main.cpp src/main.cpp: In function 'int main()': src/main.cpp:18:3: error: 'Joystick' was not declared in this scope Joystick.X(analogRead(3)); ^ make: *** [/Users//Dropbox/andrew//teensy/build/src/main.o] Error 1 | https://forum.pjrc.com/threads/33391-Using-Joystick-code-without-Arduino-IDE?s=57d1b9f13a2623533cb44df45947d5fd&p=99177 | CC-MAIN-2021-49 | refinedweb | 286 | 59.5 |
Thanks mate I got it to work!
Type: Posts; User: brendanc19
Thanks mate I got it to work!
Yes, mate. But after reading that I still am not sure how I can get this to work in my program.
Not quite understanding the while loop thing. Any chance you can write it for me in the code, so I can understand its realation to my code?
Thanks man
Could you please explain to me how I would do that?
I need to know how to make my code repeat if the user types in NEW at the end of the program. Anybody know how? Please it is for a class
Code:
import java.util.Scanner; | http://www.javaprogrammingforums.com/search.php?s=94c930f975b8cb38dcb5ea2a07f966b4&searchid=1312200 | CC-MAIN-2017-43 | refinedweb | 115 | 92.53 |
Forum:Too many games
From Uncyclopedia, the content-free encyclopedia
Game List Editing Rules Editing GuidesNow, this may sound controversial, but I think we should delete a couple of these.
- Sure. If we are going to delete, maybe we could set up a gameVFD just for this one time? I say let's see what users think first. —Braydie 06:02, 8 April 2007 (UTC)
- Why would we need a gameVFD? Doesn't VFD work just fine? --Hotadmin4u69 [TALK] 08:16, 8 April 2007 (UTC)
I think we should vote them out based on quality, not because we have too many of those, it's like voting out main namespace articles for the reason of being too many. ~
09:10, 8 April 2007 (UTC)
I guess I see your point - some of these are rather rubbish things by a single user that never got finished - and an equivalent article would get ICUed or VFDed. Couldn't we just have some votes here (where they'll get seen) rather than setting up a new page nobody can find? (The trouble with regular VFD is it's geared to single pages - if you're not careful we'll just end up with a bunch of title-pageless games.) Having said all that, I do agree with Mordillo, and I can't say I'll be voting delete on very many of these games - most of them are either suitably bizarre offshoots of one of the main games, or they have something different enough to amuse. --Whhhy?Whut?How? *Back from the dead* 11:24, 8 April 2007 (UTC)
- I was just thinking out loud....while typing. I'd say just below here. —Braydie 11:27, 8 April 2007 (UTC)
Why not just make a separate disambiguation page for the less-liked games? It would make more sense then moving them to a different article. Lord Gneo 15:48, 11 April 2007 (UTC)
- Because deleting them all would be faster and far more appropriate. Because by and large they all suck ass.
Sir Famine, Gun ♣ Petition » 04/11 21:53
The IP that created The adventures of a Grue has registered as User:Naughty Budgie. —Another Pongo Flame Sandbox ☭ 11:30, 16 April 2007 (UTC)
Ahhhh...well, how about being like a normal American and ignoring the problem? Or perhaps, if it REALLY does bug you THAT much, it would be a whole lot quicker to delete them, I suppose. Lord Gneo 15:54, 16 April 2007 (UTC)
It's a good thing that no-one wants to delete Grueslayer, because it is not deletion material. User:Trar/sig2
- I second that. —Another Pongo Flame Sandbox ☭ 10:00, 21 April 2007 (UTC)
- We have less than enough games. User:Uncyclopedian/sig
Votes
Note: Old Discussions are at Forum:Too many games/Archive | http://uncyclopedia.wikia.com/wiki/Forum:Too_many_games?direction=prev&oldid=2904392 | CC-MAIN-2016-18 | refinedweb | 469 | 78.69 |
Dear All,
I am starting with OpenLog which is a great product.
However, I pain to undertand somethings. I spent all my week-end trying to understand and reading the doc.
I created two functions. One for readind and one for writing (append).
Description
for exemple, when I power on my module, that function is called
#define SDGPSLOGFILE "GPSLOG.TXT" #define SDLOGFILE "SEQLOG00.TXT" sd_write(SDGPSLOGFILE,"START"); sd_write(SDLOGFILE,NULL)
It will craete two files (if it does not exist). For the first, it will append the text “START” to GPSLOG.TXT. The second will create SEQLOG.TXT (if it does not exist), but it will not append a specific text. It will records all Serial.print() which are uncomment for debuging.
Here is the code to reset the Openlog and to write to the SD card. It’s works, until I read the SD card. (See below the following code).
/* ******************** OPENLOG *********************** */ bool WI908::sd_reset(void) { bool answer = false; unsigned int timeout = 3000; unsigned long previous; previous = millis(); #ifdef DEBUG sprint(F("Rst SD")); sprint(F("\t\t\t")); #endif // Rest OpenLog digitalWrite(PIN_RSTSD, LOW); delay(100); digitalWrite(PIN_RSTSD, HIGH); // Wait for OpenLog to indicate file is open and ready for writing do{ if(Serial.available()) { if(Serial.read() == '<') { answer = true; } } }while((answer == false) && ((millis() - previous) < timeout)); #ifdef DEBUG if(answer) { sprintln(OK_RESPONSE); } #endif return answer; } void WI908::sd_write(char * file, char * string) { if(isSDok) { // Go to command mode Serial.write(26); Serial.write(26); Serial.write(26); //Wait for OpenLog to respond with '>' to indicate we are in command mode while(1) { if(Serial.available()) { if(Serial.read() == '>') { break; } } } // APPEND TO THE FILE GPSLOG Serial.print(F("append ")); Serial.println(file); //"GPSLOG.TXT OR SEQLOG00.TXT" //Wait for OpenLog to indicate file is open and ready for writing // MY PROBLEME IS HERE. THAT FUNCTION WORKS EXCEPTED AFTER READING A FILE // FOR EXEMPLE, WHEN I READ THE FILE GPSLOG.TXT ( sd_read("GPSLOG.TXT");) // THE FILE IS READ AND JUST AT THE END OF THAT FUNCTION sd_write(SDLGOFILE.TXT,NULL) // IS CALLED, AND STOP HERE BECAUSE IT NEVER RECEIVE THE <. // WHY IT WORK, BUT DOES NOT WORK JUST AFTER READIN A FILE?? while(1) { if(Serial.available()) { if(Serial.read() == '<'){ break; } } } // If string is not NULL , write the string. If NULL write all debug Serial.println() in SEQLOG00.TXT if(strlen(string) > 0) { // Write the fix to the file Serial.print(F("$")); Serial.print(string); Serial.println(F("!")); } } else { #ifdef DEBUG sprintln(SDNOCARD); #endif } }
This works, until now I read.
I also have a function for reading a file. I commented the code.
This work as well (I think
). When the file is red, it return a > to indicate it finish reading.
The problem, It is, how to leave the reading mode, because it was like if openLog continue returning unwish caracters.
For this, in the sd_read() function, at the end, I am calling sd_write(SDLOGFILE,NULL); to immediately write log into SEQLOG00.TXT from all Serial.print)
bool WI908::sd_read(char * file) { // Empty buffer memset(buffer,'\0',BUFFERSIZE); uint8_t x = 0; bool answer = false; unsigned long previous; previous = millis(); // Read for 3sec unsigned int timeout = 3000; // If sd_reset() return 1 // When the openLog is power with no SD card, it return 0 if(isSDok) { // Go to command mode Serial.write(26); Serial.write(26); Serial.write(26); // Wait for OpenLog to respond with '>' to indicate we are in command mode while(1) { if(Serial.available()) { if(Serial.read() == '>') { break; } } } // Send command 'read GPSLOG.TXT 0 150 1' (read fileName from numberCaracter ASCII) Serial.print(F("read ")); Serial.print(file); Serial.println(F(" 0 150 1")); //Wait for OpenLog to respond with 'r' while(1) { if(Serial.available()) { if(Serial.read() == '\r') { break; } } } // Read the file do{ if(Serial.available() > 0) { if(x < BUFFERSIZE-1) // Do not fill the buffer more the it size { char r = Serial.read(); // OpenLog return > when it finish reading. if(r == '>') { // To leave the loop answer = true; } else { // Fill buffer buffer[x] = r; x++; } } else { // too many read. Buffer is too small #ifdef DEBUG sprintln(F("*Overflow! 5*")); #endif } } }while((answer == false) && ((millis() - previous) < timeout)); // Close buffer buffer[x]='\0'; delay(500); #ifdef DEBUG // Display the file content Serial.write(buffer); Serial.println(x); Serial.println(answer); #endif // Empty buffer memset(buffer,'\0',BUFFERSIZE); // Return the apeending mode to the LOGFILE.TXT sd_write(SDLOGFILE,NULL); // IS THERE ANOTHER WAY TO LEAVE THE READING MODE?? // MY PROBLEM IS HERE. AT THIS MOMENT, SD_WRITE() IS CALL. IT GOES TO COMMAND MODE BUT NEVER RECEIVE THE > } else { #ifdef DEBUG sprintln(SDNOCARD); #endif answer = false; } return answer; }
The problem
At the end of sd_read(), I am calling sd_write(SDLOGFILE,NULL); to leave the reading mode and to continue writing into LOGFILE.TXT the content of all uncommented Serial.print().
The problem start from here. sd_write(SDLOGFILE,NULL) put back OpenLog in command mode because f the 3 Ctrl+z.
Then it wait for > to indicate it’s in command mode, but after reading a file, it never receive that >.
And I can not understand why???
Do you have an experience with OpenLog and could you explain me how to have a good reading and writing code?
many thank for your help and support.
Cheers | https://forum.arduino.cc/t/someone-has-and-experience-with-openlog/283315 | CC-MAIN-2021-43 | refinedweb | 874 | 59.3 |
0 == 1 == 2 ???
Okay, this has to be the most ridiculously frustrating bug I have seen to date using Qt.
All I am trying to do is set up a GUI with 3 buttons, each with its own enum. If the enum is equal to Reboot (2), then the system is supposed to reboot. If the enum is 0 or 1, the process is supposed to call a different command.
Initially when I was typing "transitionTo.goToNext(transitionTo.EnumValue)", it was interpeting the int as a unicode character, which I didn't know how to deal with and didn't want to spend the time learning.
Now, I am just sending single character strings. In transition_functions.cpp, num.toInt() always prints the correct enum integer value, but whether it is 0, 1, or 2, num.compare("2") == 0 always returns true, because the reboot command executes for every button pressed!!! Extremely frustrated with this trivial code. Please help and/or shed some light on this tomfoolery.
I should also mention that the second script command is a placeholder at this point.
main.cpp:
#include "transition_functions.cpp" #include <QGuiApplication> #include <QQmlApplicationEngine> #include <QQmlContext> #include <QChar> int main(int argc, char *argv[]) { QGuiApplication app(argc, argv); QQmlApplicationEngine engine; transition_functions *context_property = new transition_functions(); context_property->passPid(app.applicationPid()); qDebug() << transition_functions::App_ScreenTest << "," << transition_functions::App_UiTest << "," << transition_functions::Reboot; QQmlContext *context = engine.rootContext(); context->setContextProperty("transitionTo",context_property); engine.load(QUrl(QStringLiteral("qrc:/main.qml"))); if (engine.rootObjects().isEmpty()) return -1; return app.exec(); }
transition_functions.h:
#include <QObject> #include <QProcess> #include <QString> #include <QStringList> #include <QDebug> class transition_functions: public QObject { Q_OBJECT Q_ENUMS(Next) public: enum Next { App_UiTest, App_ScreenTest, Reboot}; transition_functions(QObject *parent = nullptr): QObject(parent) { } Q_INVOKABLE void goToNext(QString num); void passPid(int val) { _appPid = val; } virtual ~transition_functions() { } int _appPid; };
transition_functions.cpp:
#include "transition_functions.h" void transition_functions::goToNext(QString num) { QProcess *proc = new QProcess(); qDebug() << "button pressed: " << num.toInt(); if (num.compare("2") == 0) { if (num.toInt() != 2) { qDebug() << "this is fucked."; }); } }
main.qml:
Window { visible: true width: 800 height: 480 title: qsTr("thing") MainForm { anchors.fill: parent testButton.onClicked: transitionTo.goToNext("0") //App_UiTest screenButton.onClicked: transitionTo.goToNext("1") //App_ScreenTest powerButton.onClicked: transitionTo.goToNext("2") //Reboot } }
- SGaist Lifetime Qt Champion
Hi,
A quick test shows that:
qDebug() << QString("0").compare("2") << QString("1").compare("2") << QString("2").compare("2");
returns
-2,
-1,
0respectively.
By the way, why not compare the int value since you convert it anyway ?
I did at one point compare int values, but now I am trying strings.
Your quick test confirms my frustration; why would the if statement
if (num.compare("2") == 0)
be true for num equal to "0", "1", & "2" ? It should be false for 2 of the cases, but it's true for all 3.
- SGaist Lifetime Qt Champion
What do you get printed on your application output ?
- mrjj Lifetime Qt Champion
Hi
What Qt version are you using ?
Cant reproduce either
QString num = "2"; if (num.compare("0") == 0 ) { qDebug() << "TRUE 0"; } if (num.compare("1") == 0 ) { qDebug() << "TRUE 1"; } if (num.compare("2") == 0 ) { qDebug() << "TRUE 2"; }
says
TRUE 2
@devDawg said in 0 == 1 == 2 ???:
qDebug() << "button pressed: " << num.toInt(); if (num.compare("2") == 0) { if (num.toInt() != 2) { qDebug() << "this is f***ed."; }
I don't really believe this code can be followed to printing the rude message if
num != "2"(unless something is seriously wrong with your Qt).
But why not use the optional parameter for
Returns 0 if the conversion fails.
If a conversion error occurs, *ok is set to false; otherwise *ok is set to true.
bool ok; int result = num.toInt(&ok);
What is the value of
ok? Because if by any chance it's
false, somehow, the
toInt()would return
0, which would result in it following your "unexpected"
num.toInt() != 2route....
@mrjj 5.9.3, unfortunately. The device that I am building the app for is configured for an older version of Qt. As for your test, I got the same result.
@JonB I never gave the optional parameter a thought, that's a very good point. I apologize for the rude message, I have just been really frustrated by this.
transition_functions.cpp:
void transition_functions::goToNext(QString num) { QProcess *proc = new QProcess(); bool ok; qDebug() << "button pressed: " << num.toInt(&ok); qDebug() << "conversion was " << ok; if (num.compare("2") == 0) { qDebug() << "reboot triggered" ;); } }
Upon testing each button press this morning, these are the results:
pressing the App_UiTest button (0),
button pressed: 0 ok == true
pressing this button triggered a reboot, without printing the "reboot triggered" message within the if statement.
pressing the App_ScreenTest button (1),
button pressed: 1 ok == true
pressing this button also triggered a reboot, & again did not print the "reboot triggered" message within the if statement.
pressing the Reboot button (2),
button pressed: 2 ok == true reboot triggered
everything here is the same as the other 2 buttons, except we actually see the "reboot triggered" message get printed.
I am not sure what to make of these results. It is almost as if in the first 2 cases, the reboot function is being triggered without even entering the if statement.
So I modified the test slightly by commenting out all the code in the else statement, leaving only a print statement "no reboot." Now when I pressed the buttons, they behave as expected. This only happened when I commented out the
proc->start() command in the else statement. Interesting to say the least.
Any thoughts?
@devDawg
So if I understand correctly, I see the following behaviour:
The
num.toInt(&ok)always succeeds, and always returns the correct number.
The
num.compare("2")always correctly matches "2" and only "2". The correct path of its
if ... elseis always followed.
The above two statements are as to be expected. Whatever your problem is, it is not to do with number parsing. Any time you think it is again, isolate only the parsing code (not your actions) and verify that parsing is not at issue. It can't be. This is the answer to your question as originally posted.
When it follows the
ifit always correctly correctly prints the reboot message and reboots.
But when it follows the
elsepath, it does not print a reboot message (correct) but it still reboots. Though if you replace the
else's
proc-start()with a "no reboot" statement you get that message but no reboot.
The only conclusion is that the
else''s
proc->start()is itself causing a reboot, whether you like it or not!
Now, I don't know what might be in your
scriptpathname.sh, maybe that causes a reboot (umm, what does it do? you haven't left a
rebootin there, have you????) ... Or, writing anything to
/dev/ttyS0causes a reboot. Or, ....
I can't do your debugging for you. I am confident that the number parsing is not at issue (code is following correct path), and I am confident that when the
elsepath is followed whatever it does it does not "magically" execute the
proc->start()from the
ifroute.
There is much more playing you can do to satisfy yourself about what must, or must not, be going on. For example, your two
proc->start()scould both echo stuff into some files instead of to
/dev/ttyS0so that you can see what's going on.
@JonB scriptpathname.sh, as I stated earlier, is not a valid script, it is simply a placeholder.
Thanks for the help! | https://forum.qt.io/topic/92534/0-1-2/1 | CC-MAIN-2019-09 | refinedweb | 1,235 | 59.19 |
Programming performance/hay.steve Python
From HaskellWiki
- Implementation time: 1.5 hours.
- Experience: 3 days.
- Comments: I spent most of my time noodling on the syntax of Python and familiarizing myself with the documentation. I also spent some time getting the heap to work instead of taking multiple passes with that data. I originally said it took 2 hours, but I am bringing it down to 1.5 because I was eating dinner while and watching a movie while coding. Also, I came back later and spent a few minutes adding a verbosity feature as suggested in the problem statement.
1 Code
from heapq import heappush, heappop, nsmallest
verbose=False filename = "gspc.txt"
days = []
for f in open(filename):
if f[0] == '#': continue linelist = f.split(' ') heappush( days, linelist )
cash = 10000.00 previousClose = -1.0 currentClose = 0.00 if verbose: print "On", i, ", bought" , numShares , "shares at" , currentClose , "."
else : while shares != [] and nsmallest(1, shares)[0][0] <= currentClose / 1.06: sell = heappop( shares ) cash += sell[1] * currentClose if verbose: print "On", i, "sold" , sell[1] , "shares at" , currentClose , "."
previousClose = currentClose
while shares != []:
sell = heappop( shares ) cash += sell[1] * currentClose if verbose: print "Sold" , sell[1] , "shares at" , currentClose , "."
print "Beginning balance:", beginningBalance print "Ending balance:", cash print "Percent Gain:", (cash-beginningBalance)/beginningBalance*100, "%."
2 Some later refinements
from heapq import heappush, heappop
- iterates through a heap while destroying it (side effect)
def heap_eater(heap):
while heap: yield heappop( heap )
- globals
verbose = True filename = "gspc.txt" days = [] positions = [] beginningBalance = 10000.00 cash = 10000.00 close = 0.00
- create the stock data heap
for f in open(filename):
if not f.startswith('#'): date, open, high, low, close, volume, adjClose = f.split(' ') day = ( date, float( close ) ) heappush( days, day )
print "Beginning Balance:", cash
- iterate through the days
startDate, oldClose = heappop( days ) for day, close in heap_eater( days ):
if close < 0.97 * oldClose: numShares = 0.1 * cash / close cash -= 0.1 * cash position = (close, numShares ) heappush( positions, position ) if verbose: print "On", day, ", bought" , numShares , "shares at" , close , "." else: for oldClose, numShares in heap_eater( positions ): if close >= 1.06 * oldClose: cash += numShares * close if verbose: print "On", day, "sold", numShares, "shares at", close, "." else: position = ( oldClose, numShares ) heappush( positions, position ) break oldClose = close
- close out positions
for price,qty in positions:
cash += qty * close if verbose: print "Sold" , qty , "shares at" , close , "."
print "Beginning balance:", beginningBalance print "Ending balance:", cash print "Percent Gain:", (cash-beginningBalance)/beginningBalance*100, "%."
--Hay.steve 01:53, 8 March 2007 (UTC) | http://www.haskell.org/haskellwiki/Programming_performance/hay.steve_Python | CC-MAIN-2014-10 | refinedweb | 411 | 60.41 |
I want to send data from HTML view to Django backend to process and then the backend will send an url back the to HTML view which will redirect the user to another page. It's like this:
Page 1 ----user's input data --- Ajax --- > backend --- process --- a url to page 2 --- Ajax --> page 2
The problem is that, I don't know how to send the URL back Ajax after processing user's data, so I have to redirect by using window.location.href = '/' . But I think the code is not clean this way. I wonder if there is a way to send url to Ajax's success from backend.
Here is the code n the HTML :
function doSomething(data){
$.ajax({
type: "POST",
url: URL_POST_BY_AJAX,
data: {
'data' : data,
},
success: function (response) {
window.location.href = '/';
},
error: function (data) {
alert('failed');
}
});}
In your processing view:
from django.http.response import JsonResponse def whatever_your_view_name_is(request): (data processing...) return JsonResponse({'url': the_url_to_page_2)
then in your JS:
(...) success: function (response) { window.location.href = response.url; }, (...) | https://codedump.io/share/26fqRewdSAU0/1/send-url-to-ajax39s-success--from-django-backend | CC-MAIN-2017-30 | refinedweb | 171 | 64.41 |
defined in the Buildbot configuration. For compatible behavior, this should look like:
from buildbot.schedulers.forcesched import ForceScheduler c['schedulers'].append(ForceScheduler( name="force", builderNames=["b1", "b2", ... ]))
Where all of the builder names in the configuration are listed. See the documentation for the much more flexible configuration options now available.
This is the last release of Buildbot that will be compatible with Python 2.4. The next version will minimally require Python-2.5. See bug #2157.
This is the last release of Buildbot that will be compatible with Twisted-8.x.y. The next version will minimally require Twisted-9.0.0. See bug #2182.
buildbot startno to indicate errors.
The Gerrit status callback now gets an additional parameter (the master status). If you use this callback, you will need to adjust its implementation.
SQLAlchemy-Migrate version 0.6.0 is no longer supported. See Buildmaster Requirements.
Older versions of SQLite which could limp along for previous versions of Buildbot are no longer supported. The minimum version is 3.4.0, and 3.7.0 or higher is recommended.
The master-side Git step now checks out to specify the completion time explicitly.
- Buildbot now sports sourcestamp sets, which collect multiple sourcestamps used to generate a single build, thanks to Harry Borkhuis. See pull request 287.
- Schedulers no longer have a
schedulerid, but rather an
objectid. In a related change, the
sched. Features¶
The IRC status bot now display build status in colors by default. It is controllable and may be disabled with useColors=False in constructor.
Buildbot can now take advantage of authentication done by a front-end web server - see pull request 266.
Buildbot supports a simple cookie-based login system, so users no longer need to enter a username and password for every request. See the earlier commits in pull request 278.
The master-side SVN step now has an export method which is similar to copy, but the build directory does not contain Subversion metdata. (bug #2078)
Propertyinstances will now render any properties in the default value if necessary. This makes possible constructs like
command=Property('command', default=Property('default-command'))
Buildbot has a new web hook to handle push notifications from Google Code - see pull request 278.
Revision links are now generated by a flexible runtime conversion configured by
revlink- see pull request 280.
Shell command steps will now “flatten” nested lists in the
command. | https://buildbot.readthedocs.io/en/v0.9.12/relnotes/0.8.6.html | CC-MAIN-2018-34 | refinedweb | 401 | 61.43 |
The concept of a web application is broader than you might guess: using the webbrowser as a graphical user interface (GUI) makes sense even if you are just running a stand alone application on a local machine. After all, every machine has a browser intalled nowadays, so using this as a GUI might save you from headaches trying to find a cross platform GUI toolkit that looks good and is familiar to the user.
In order to use the browser as a user interface we have to start it up in a platform independent way, start a web application framework serving our application at the same time and make sure that the new web browser window points to the correct location. This sounds like a lot of work but in Python this is actually rather straightforward.
Let's have a look at the following code:
import cherrypy
import webbrowser
import threading
def openbrowser():
webbrowser.open('')
class Root(object):
@cherrypy.expose
def index(self):
return 'Hi There'
if __name__ == "__main__":
threading.Timer(3.0,openbrowser).start()
cherrypy.quickstart(Root(),config={})
If you save the code above as
webapp.py and you have CherryPy installed, you can start the program by typing the following in a terminal (or dos-box):
python webapp.py
It will start up a webserver running on your local machine that listens on port 8080. It will also start up a new browser window and direct it to. The Python version is not relevant here as we do not use any 3.x specific constructs.
The trick is to utilize Python's bundled module webbrowser to open a browser window in a cross platform compatible way as implemented in the
openbrowser() function. We do not call this function right away though, because the browser then might start before there is a webserver running. We cannot start CherryPy first either, because the
quickstart() function does not return. Therefor we instantiate a
Timer object from the threading module and tell it to call our
openbrowser() function after three seconds, which should be plenty of time for the CherryPy server to start. | http://michelanders.blogspot.com/2010/09/starting-stand-alone-webapp.html | CC-MAIN-2017-22 | refinedweb | 351 | 60.04 |
author |Khuyen Tran
compile |VK
source |Towards Data Science
motivation
Sklearn It's a great library , There are various machine learning models , Can be used to train data . But if your data is big , You may need a long time to train your data , Especially when you're looking for the best model with different hyperparameters .
Is there a way to make machine learning models faster than using Sklearn Fast 150 times ? The answer is that you can use cuML.
The following chart compares the use of Sklearn Of RandomForestClassifier and cuML Of RandomForestClassifier The time the first mock exam is needed to train the same model .
cuML It's a fast set ,GPU Accelerated machine learning algorithms , Designed for data science and analysis tasks . its API Be similar to Sklearn Of , That means you can use training Sklearn Model code to train cuML Model of .
from cuml.ensemble import RandomForestClassifier clf = KNeighborsClassifier(n_neighbors=10) clf.fit(X, y)
In this paper , I'll compare the performance of the two libraries using different models . I'll also show you how to add a graphics card , Make it faster 10 times .
install cuML
To install cuML, Please follow Rapids The instructions on the page to install . Make sure to check the prerequisites before installing the library . You can install all the packages , You can just install cuML. If your computer space is limited , I suggest installing cuDF and cuML.
Although in many cases , No installation required cuDF To use cuML, however cuDF yes cuML A good complement to , Because it's a GPU Data frame .
Make sure you choose the right option for your computer .
Create data
Because when there's a lot of data ,cuML Often than Sklearn Better , So we will use sklearn.datasets.
from sklearn Import dataset
from sklearn import datasets X, y = datasets.make_classification(n_samples=40000)
Convert data type to np.float32 Because there are some cuML The model requires input to be np.float32.
X = X.astype(np.float32) y = y.astype(np.float32)
Support vector machine
We're going to create functions for training models . Using this function will make it easier for us to compare different models .
def train_data(model, X=X, y=y): clf = model clf.fit(X, y)
We use iPython Of magic command %timeit Run each function 7 Time , Take the average of all the experiments .
from sklearn.svm import SVC from cuml.svm import SVC as SVC_gpu clf_svc = SVC(kernel='poly', degree=2, gamma='auto', C=1) sklearn_time_svc = %timeit -o train_data(clf_svc) clf_svc = SVC_gpu(kernel='poly', degree=2, gamma='auto', C=1) cuml_time_svc = %timeit -o train_data(clf_svc) print(f"""Average time of sklearn's {clf_svc.__class__.__name__}""", sklearn_time_svc.average, 's') print(f"""Average time of cuml's {clf_svc.__class__.__name__}""", cuml_time_svc.average, 's') print('Ratio between sklearn and cuml is', sklearn_time_svc.average/cuml_time_svc.average)
Average time of sklearn's SVC 48.56009825014287 s Average time of cuml's SVC 19.611496431714304 s Ratio between sklearn and cuml is 2.476103668030909
cuML Of SVC Than sklearn Of SVC fast 2.5 times !
Let's visualize it with pictures . We create a function to draw the speed of the model .
!pip install cutecharts import cutecharts.charts as ctc def plot(sklearn_time, cuml_time): chart = ctc.Bar('Sklearn vs cuml') chart.set_options( labels=['sklearn', 'cuml'], x_label='library', y_label='time (s)', ) chart.add_series('time', data=[round(sklearn_time.average,2), round(cuml_time.average,2)]) return chart
plot(sklearn_time_svc, cuml_time_svc).render_notebook()
Better graphics
because cuML When running big data Sklearn The model is fast , Because they use GPU Trained , If we were to GPU What happens if you triple your memory ?
In the previous comparison , I'm using a carrier geforce2060 Of Alienware M15 Laptops and 6.3gb Of the graphics card memory .
Now? , I'm going to use one with Quadro RTX 5000 Of Dell Precision 7740 and 17 GB To test the memory of the graphics card GPU The speed at which memory increases .
Average time of sklearn's SVC 35.791008955999914 s Average time of cuml's SVC 1.9953700327142931 s Ratio between sklearn and cuml is 17.93702840535976
When it's in a graphics card memory for 17gb When training on the machine ,cuML The support vector machine is better than Sklearn Support vector machines are fast 18 times ! Its speed is the speed of laptop training 10 times , The memory of the video card is 6.3gb.
That's why if we use things like cuML In this way GPU Acceleration Library .
Random forest classifier
clf_rf = RandomForestClassifier(max_features=1.0, n_estimators=40) sklearn_time_rf = %timeit -o train_data(clf_rf) clf_rf = RandomForestClassifier_gpu(max_features=1.0, n_estimators=40) cuml_time_rf = %timeit -o train_data(clf_rf) print(f"""Average time of sklearn's {clf_rf.__class__.__name__}""", sklearn_time_rf.average, 's') print(f"""Average time of cuml's {clf_rf.__class__.__name__}""", cuml_time_rf.average, 's') print('Ratio between sklearn and cuml is', sklearn_time_rf.average/cuml_time_rf.average)
Average time of sklearn's RandomForestClassifier 29.824075075857113 s Average time of cuml's RandomForestClassifier 0.49404465585715635 s Ratio between sklearn and cuml is 60.3671646323408
cuML Of RandomForestClassifier Than Sklearn Of RandomForestClassifier fast 60 times ! If you train Sklearn Of RandomForestClassifier need 30 second , So training cuML Of RandomForestClassifier It only takes less than half a second !
Better graphics
Average time of Sklearn's RandomForestClassifier 24.006061030143037 s Average time of cuML's RandomForestClassifier 0.15141178591425808 s. The ratio between Sklearn’s and cuML is 158.54816641379068
In my Dell Precision 7740 When training on a laptop ,cuML Of RandomForestClassifier Than Sklearn Of RandomForestClassifier fast 158 times !
Nearest neighbor classifier
Average time of sklearn's KNeighborsClassifier 0.07836367340000508 s Average time of cuml's KNeighborsClassifier 0.004251259535714585 s Ratio between sklearn and cuml is 18.43304854518441
notes :y On axis 20m Express 20ms.
cuML Of KNeighborsClassifier Than Sklearn Of KNeighborsClassifier fast 18 times .
More graphics memory
Average time of sklearn's KNeighborsClassifier 0.07511190322854547 s Average time of cuml's KNeighborsClassifier 0.0015137992111426033 s Ratio between sklearn and cuml is 49.618141346401956
In my Dell Precision 7740 When training on a laptop ,cuML Of KNeighborsClassifier Than Sklearn Of KNeighborsClassifier fast 50 times .
summary
You can find other comparison code here .
The following two tables summarize the speed of the different models between the two libraries :
- Alienware M15-GeForce 2060 and 6.3 GB Video card memory
- Dell Precision 7740-Quadro RTX 5000 and 17 GB Video card memory
It's quite impressive , isn't it? ?
Conclusion
You just learned that in cuML Train different models with Sklearn How fast compared to . If you use Sklearn It takes a long time to train your model , I strongly recommend that you try cuML, Because with Sklearn Of API comparison , There is no change in the code .
Of course , If the library uses GPU To perform something like cuML This code , So the better graphics you have , The faster you train .
More about other machine learning models , see also cuML Documents :
Link to the original text :
Welcome to join us AI Blog station :
sklearn Machine learning Chinese official documents :
Welcome to pay attention to pan Chuang blog resource summary station : | https://pythonmana.com/2020/11/202011140312214514.html | CC-MAIN-2022-21 | refinedweb | 1,182 | 57.47 |
How to get base URL in Web API controller?
net core web api get base url
web api get current route
asp.net core get base url
.net web api get request url
asp.net core get base url startup
mvc get base url
asp.net core 2.0 get current url
I know that I can use
Url.Link() to get URL of a specific route, but how can I get Web API base URL in Web API controller?
You could use
VirtualPathRoot property from
HttpRequestContext (
request.GetRequestContext().VirtualPathRoot)
asp.net web api How to get base URL in Web API controller?, Url.Link("DefaultApi", new { controller = "Person", id = person.Id }). This is the official way which does not require any helper or workaround. If you look at this This route has an api path segment. Thus your second url will not match default api route; This route does not have action parameter. That's because actions of ApiControllers are mapped by HTTP method of the request. You should not specify action name in request url. By convention, for HTTP GET requests name of action should begin with Get word.
In the action method of the request to the url ""
var baseUrl = Request.RequestUri.GetLeftPart(UriPartial.Authority) ;
this will get you
How to Get the Base URL in ASP.NET, NET Standard with MVC and with .NET Core and dependency injection. How to Get the Base URL in an MVC Controller. Here's a simple one- Action method name can be the same as HTTP verbs like Get, Post, Put, Patch or Delete as shown in the Web API Controller example above. However, you can append any suffix with HTTP verbs for more readability. For example, Get method can be GetAllNames (), GetStudents () or any other name which starts with Get.
Url.Content("~/")
worked for me!
Get URL of requesting website in WEB API 2, I need to get URL of requesting web site in My WEB API 2. I need to do this At your WebAPI controller actions, you can use the following code to get requesting website urls. ToString(); //string baseUrl = Url.Request. You can get the base Url() in C# using the code below Continuing the above code, you can get absolute URL(or you can say after base URL, ie. like /questions/111/invalid-uri-the-format-of-the-uri-could-not-be-determined-er) using the code below
This is what I use:
Uri baseUri = new Uri(Request.RequestUri.AbsoluteUri.Replace(Request.RequestUri.PathAndQuery, String.Empty));
Then when I combine it with another relative path, I use the following:
string resourceRelative = "~/images/myImage.jpg"; Uri resourceFullPath = new Uri(baseUri, VirtualPathUtility.ToAbsolute(resourceRelative));
MVC - how to get complete url and base url in MVC?, How can I get complete and base type of URL in C# asp.net MVC web-application, controller, for example, if my web-application URL is First you get full URL using HttpContext.Current.Request.Url.ToString(); then replace your method url using Replace("user/login", ""). Not sure if this is a Web API 2 addition, but RequestContext has a Url property which is a UrlHelper: HttpRequestContext Properties. It has Link and Content methods.
I inject this service into my controllers.
public class LinkFactory : ILinkFactory { private readonly HttpRequestMessage _requestMessage; private readonly string _virtualPathRoot; public LinkFactory(HttpRequestMessage requestMessage) { _requestMessage = requestMessage; var configuration = _requestMessage.Properties[HttpPropertyKeys.HttpConfigurationKey] as HttpConfiguration; _virtualPathRoot = configuration.VirtualPathRoot; if (!_virtualPathRoot.EndsWith("/")) { _virtualPathRoot += "/"; } } public Uri ResolveApplicationUri(Uri relativeUri) { return new Uri(new Uri(new Uri(_requestMessage.RequestUri.GetLeftPart(UriPartial.Authority)), _virtualPathRoot), relativeUri); } }
HttpRequest.Url Property (System.Web), Gets information about the URL of the current request. public: property Uri ^ Url { Uri ^ get(); };. C# Copy.
Routing in ASP.NET Web API, That way, you can have "/contacts" go to an MVC controller, and "/api/contacts" go to a Web API controller. Of course, if you don't like this Creates a new web API project and opens it in Visual Studio Code. Adds the NuGet packages which are required in the next section. Select File > New Solution. Select .NET Core > App > API > Next. In the Configure your new ASP.NET Core Web API dialog, select Target Framework of *.NET Core 3.1. Enter TodoApi for the Project Name and then select Create.
Getting the base URL for an ASP.NET Core MVC web application in , In the action method of the request to the url "" var baseUrl = Request.RequestUri.GetLeftPart(UriPartial.Authority) ;. this will get you Don't create a web API controller by deriving from the Controller class. Controller derives from ControllerBase and adds support for views, so it's for handling web pages, not web API requests. There's an exception to this rule: if you plan to use the same controller for both views and web APIs, derive it from Controller .
Building RESTful API with ASP.NET Core @, In this post I want to show a simple way for getting the base URL for an ASP.NET Core MVC web application in your static JavaScript files. How to manage request and response in Web API: See the image below - In this image, you can see, if you are using Authorization filter, it will apply according to your logic. In this case, I am going to create a logic to check any HTTP and HTTPS request inside AuthorizationfilterAttribute class.
Request.GetRequestContext().VirtualPathRootreturns
/. I self-host Web API on localhost in Windows service using Owin. Any way to get base URL in this case? Thank you.
- Hmm..this should return the correct virtual path. I tried it myself now and it works fine. Could you share how you are setting the virtual path (for example, I do like this:
using (WebApp.Start<Program>(""))where VirtualPathRoot returns
/Test)
- Ok, this makes sense. I set it correctly returns
/. What I meant is how to get this case...
- Ok, since you have access to the HttpRequestMessage(
Request.RequestUri), you could grab the request uri of it and find the scheme,host and port...right?
- new Uri(Request.RequestUri, RequestContext.VirtualPathRoot)
- This is incorrect, it only happens to work if you're running your site as the root site in IIS. If you're running your application within another application, i.e. localhost:85458/Subfolder/api/ctrl then this would yield the wrong answer (it wouldn't include "/Subfolder" which it should).
- I came here for this anwser :)
- Works on .Net Core 2.2
- And how do you inject HttpRequestMessage?
- @RichardSzalay Autofac has it built in, github.com/autofac/Autofac/blob/master/Core/Source/… but the general idea is you setup a DI container and then use a message handler to grab the HttpRequestMessage and register it in a per-request handler. | https://thetopsites.net/article/50676987.shtml | CC-MAIN-2021-25 | refinedweb | 1,117 | 59.5 |
EF.
The Problem
Let’s say I want to get a hold of the “FOOD” category. Prior to having the Find method I can do this pretty easily using LINQ:
var food = context.Categories.Single(c => c.CategoryId == "FOOD");
Actually the above line of code is going to query the database every time it runs so I should really use the TryGetObjectByKey method to avoid the query if “FOOD” is already loaded into memory. TryGetObjectByKey isn’t strongly typed so I need to supply an out parameter that is typed as object and then cast if it was found.
Category food; object searchFood; if (context.TryGetObjectByKey(key, out searchFood)) { food = (Category)searchFood; }
Now let’s update the code to create the “FOOD” category if it doesn’t exist:
Category food; object searchFood; if (context.TryGetObjectByKey(key, out searchFood)) { food = (Category)searchFood; } else { food = new Category { CategoryId = "FOOD", Name = "Food Products" }; context.Categories.Add(food); }
This works fine if I only run the code once but what if I want to find the “FOOD” category again? I now need to account for the fact that it might be sitting in the context in an added state. In this case it has a temporary key so TryGetObjectByKey won’t find it. That’s OK I have access to the ObjectStateManager to check for added objects:
Category food; object searchFood; if (context.TryGetObjectByKey(key, out searchFood)) { food = (Category)searchFood; } else { food = context .ObjectStateManager .GetObjectStateEntries(System.Data.EntityState.Added) .Where(ose => ose.Entity != null) .Select(ose => ose.Entity) .OfType<Category>() .Where(c => c.CategoryId == "FOOD") .SingleOrDefault(); if (food == null) { food = new Category { CategoryId = "FOOD", Name = "Food Products" }; context.Categories.Add(food); } }
The Solution
If you feel like you shouldn’t have to write that much code to achieve this fairly simple task you are definitely not alone. That block of code above can now be replaced with:
var food = context.Categories.Find("FOOD"); if (food == null) { food = new Category { CategoryId = "FOOD", Name = "Food Products" }; context.Categories.Add(food); }
The rules for find are:
- Look for an entity with the supplied key that has already been loaded from the database
- If there isn’t one then check if there is an added entity that has the supplied key
- Finally query the database and if there still isn’t one that matches then return null
In CTP4 Find is an instance method on DbSet<T> so you do have to be using the new “Productivity Improvement” surface to get the benefit at least for the moment.
Composite Keys
Find takes a “params object[]” as it’s parameter so if you have composite keys you just specify the values for each key property:
var plate = context.LicensePlates.Find("WA", "555-555");
Code First requires that you specify the ordering of composite keys, you can do this via the Fluent API:
public class DMVContext : DbContext { protected override void OnModelCreating(ModelBuilder modelBuilder) { modelBuilder.Entity<LicensePlate>().HasKey(p => new { p.State, Plate = p.Number }); } public DbSet<LicensePlate> Plates { get; set; } }
Or via attributes in you class:
public class LicensePlate { [DataMember(Order = 0)] public string State { get; set; } [DataMember(Order = 1)] public string Number { get; set; } }
“Yes” that is DataMember from the System.Runtime.Serialization namespace and “no” we shouldn’t make you add a reference to System.Runtime.Serialization.dll just to specify key ordering… we’ll fix that.
The Future
We’ve also looking at modifying the Add method so that it returns the newly added entity, this means we could reduce the code down to one line:
var food = context.Categories.Find("FOOD") ?? context.Categories.Add(new Category { CategoryId = "FOOD", Name = "Food Products" );
Ok ok that’s a bit long to actually put on one physical line… but you get the idea.
Summary
Finding objects based on primary key value(s) used to be a fairly painful exercise that required using advanced API surface… now it’s a lot simpler and encourages you to write performant code that reduces hits to the database.
3 Responses to “EF CTP4 Tips & Tricks: Find”
Where's The Comment Form?
Very awesome stuff!
TheCloudlessSky
August 31, 2010
Mr. Miller, you guys are incredible. I just love this Code First stuff. Life is easy and efficient for once.
Terrence_
September 8, 2010
How do you use DbSet.Find() with Include lambda’s to get a deep object graph?
thanks
Marty
Marty Spallone
May 16, 2011 | https://romiller.com/2010/07/15/ef-ctp4-tips-tricks-find/ | CC-MAIN-2016-30 | refinedweb | 732 | 53.1 |
In this tutorial, you will learn how to create a (minimal) Google Keep clone app with the Model-View-Viewmodel (MVVM) framework Vue and Firebase as a backend.
Vue introduces easy to use building blocks in the form of components.
Each component has its own ViewModel and can be composed of other components. What sets Vue apart from Angular is that Vue has a less steep learning curve. It focuses much more on providing an easy way to compose components and nothing more.
Firebase is a realtime NoSQL database in the cloud by Google. Firebase offers official libraries for Android, iOS, JavaScript, and even Angular. With Firebase you can build real-time apps very quickly!
For this tutorial, you will need some experience with ES6 aka ES2015, Node, and a little Vue.
You must have NodeJS installed on your device.
There are two tools that may be helpful during this tutorial:
- Vue.js Devtools: This plugin lets you explore the component tree and watch any values bound to the ViewModel inside the Chrome DevTools
- Vulcan by Firebase: Vulcan lets you explore your data inside your DevTools so you don't need to adjust data without switching tabs
At the end of this part our app will look like this:
You can find the source code on GitHub (tag part_1) and play with the demo here.
Setup
- Install Vue Command Line globally (Vue is at version 1.0.18 at the time of writing this)
- Create a project with the vue-cli (use the default options)
- Install packages
- Run dev server
npm install -g vue-cli
vue init webpack gkeep-vueifire
cd gkeep-vueifire
npm install
npm run dev
When you visit localhost:8080 you should see the following screen:
Project Structure
index.html: root HTML-file. Your JavaScript and CSS are automatically injected into this file
config.js: config file (you can change your port here, add proxies to your own API, etc.)
build: contains node files to run and build your app (‘npm run dev’ runs ‘node build/dev-server.js’)
static: public folder containing static assets needed for your project
test: for your unit, integration, and end-to-end tests
The starting point of the app is ‘src/main.js’.
// src/main.js import Vue from 'vue' // import Vue the ES6 way import App from './App' // import App.vue component /* eslint-disable no-new */ new Vue({ // new Vue instance el: 'body', // attach Vue to the body components: { App } // include App component globally })
When you use the App component like this
<app></app>, it will replace the app element with the template provided in App.vue. Also, the JavaScript and styles will be injected into the view automagically. Currently, the App component is referenced in 'index.html'.
With .vue files you can write HTML, CSS, and JS all in the same file. This makes it very easy to develop with a component-mindset.
Note that you can write ES6 code inside the script block without worrying about browser compatibility! Webpack will use Babel to compile ES6 code to ES5. You can also add support for any other loaders like TypeScript, CoffeeScript, Sass, Less, etc.
If you accepted the defaults when creating your project, ESLint will be enabled. This will force us to follow the Standard code style guide. Make sure to use 2 spaces for indentation and no semicolons!
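As a tiny illustration (not part of the project), code that passes the Standard linter looks like this:

// 2-space indentation, single quotes, no semicolons, a space before function parentheses
function greet (name) {
  let message = 'Hello ' + name
  return message
}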
Hooking up Firebase
Log into Firebase and create a new app. You can create an app for free for development purposes.
You will need the app-URL in one of the next steps.
Install firebase through npm. (At the time of writing this, Firebase was version 2.4.2, and I used the now legacy console)
npm install firebase@2.4.2 --save
Import Firebase at the top of main.js.
Next, pass the Firebase app link to a new Firebase instance.
import Vue from 'vue'
import App from './App'
import Firebase from 'firebase'

let firebase = new Firebase('https://<YOUR-FIREBASE-APP>.firebaseio.com/')

// ...
Now you can start adding, modifying and deleting data with Firebase. Make a notes array and add one note object to test if it is working.
import Vue from 'vue'
import App from './App'
import Firebase from 'firebase'

let firebase = new Firebase('https://<YOUR-FIREBASE-APP>.firebaseio.com/')

firebase.child('notes').set([
  {title: 'Hello world', content: 'Lorem ipsum'}
])

firebase.child('notes').on('value', (snapshot) => {
  let notes = snapshot.val()
  console.log(notes)
  window.alert(notes[0].title)
})

// ...
When you look in the browser, you should get a popup saying ‘hello world’.
You just hooked up Firebase to your Vue-app. Next up is creating your first component!
Creating your first component
We will now create a component to present all notes. First, create a new folder ‘src/components/notes’ to put all note-related components in. Create an
index.vue file inside the folder you just created and copy the following content to it.
src/components/notes/Index.vue
<template> <!-- add html here --> </template> <script> // add js here export default {} </script> <style> /* add css here */ </style>
This is the barebones for a Vue component. Note that the template, script, and style can be omitted if you do not need them. The Index component will hold all notes.
First, add the data method to make Vue aware of the notes array.
src/components/notes/Index.vue
export default { data () { return { notes: [] } } }
Now remove the Firebase code from
main.js and import Firebase in the component. Create a new Firebase-instance and listen to the
child_added-event for your notes in the
ready-method. The
child_added-event will first iterate over all the existing notes in the array, and also immediately trigger when someone adds a note to the array. This means when somebody else adds a note, you will instantly be notified through the event.
Grab the data in the callback and add it onto the notes array with the unshift method. Instead of pushing an element onto the end of the array, unshift will add the element to the start. This makes sure that the most recent note is displayed first.
src/components/notes/Index.vue
import Firebase from 'firebase' export default { data () { return { notes: [] } }, ready () { let firebase = new Firebase('https://<YOUR-FIREBASE-APP>.firebaseio.com/') firebase.child('notes').on('child_added', (snapshot) => { let note = snapshot.val() this.notes.unshift(note) }) } }
Now add a simple template that iterates (v-for) over the notes and prints out the data in JSON-format using the
json
filter.
src/components/notes/Index.vue
<template> <ol> <li v- <pre> {{note | json}} </pre> </li> </ol> </template>
When we check the browser, there's no difference. That’s because we aren’t using the component yet.
Replace everything under
div#app in App.vue with a notes element.
Also, import the Notes component and pass it to the components property of the Vue instance. (You can delete the
Hello.vue file)
src/components/App.vue
<template> <div id="app"> <notes></notes> </div> </template> <script> import Notes from './components/notes/Index' export default { components: { Notes } } </script> <style> </style>
If you reload the browser now, the note that was added previously in
main.js is outputted in JSON-format. If no note is appearing, try adding a note manually on the Firebase website.
Creating a second, more appealing component
Now that you can render all the notes, create a component for an individual note.
Create the file
src/components/notes/Note.vue
By adding strings to the
props property, you can define custom attributes/properties for the component.
Define an attribute
note for the component. Now you can simply pass the note object through the
note attribute.
Note that we are using a pre-tag instead of a paragraph. The pre-tag is used for preformatted text. This tag will respect '\t\n' characters that come from the textarea. Though the sentences don't get broken automatically and overflow the width. With some CSS the pre-tag has same behaviour that other elements have.
src/components/notes/Note.vue
<template> <div class="note"> <h1>{{note.title}}</h1> <pre>{{note.content}}</pre> </div> </template> <script> export default { props: ['note'] // here you define the attributes/props of the component, will be available via this.note // when using component you can use the prop externally via '<note :note"{title: 'hi', content: 'lorem'}" ></note>' } </script> <style> .note{ background: #fff; border-radius: 2px; box-shadow: 0 2px 5px #ccc; padding: 10px; width: 240px; margin: 16px; float: left; } .note h1{ font-size: 1.1em; margin-bottom: 6px; } .note pre { font-size: 1.1em; margin-bottom: 10px; white-space: pre-wrap; word-wrap: break-word; font-family: inherit; } </style>
Now back in
src/components/notes/Index.vue, you need to change the template to use the Note component. In the script you also need to import the Note component and pass it to the components object.
src/components/notes/Index.vue
<template> <div class="notes"> <note v- </note> </div> </template> <script> import Firebase from 'firebase' import Note from './Note' export default { components: { Note }, … } </script> <style> .notes{ padding: 0 100px; } </style>
When you prefix an attribute with a colon like
:notethe content of the attribute will be interpreted as JavaScript. This way you can pass the note object to the Note component.
Now add some style to the App component at
src/App.vue.
src/components/App.vue
<template> <div> <notes></notes> </div> </template> <script> import Notes from './components/notes/Index' export default { components: { Notes } } </script> <style> *{ padding: 0; margin: 0; box-sizing: border-box; } html{ font-family: sans-serif; } body{ background: #eee; padding: 0 16px; } </style>
Awesome! You just nested two components into each other passing data from the parent component to the child components.
Next up is creating a form to make new notes.
Creating a form for adding notes
By now it probably doesn’t come as a surprise that you again are going to create a component for this (component-ception!).
Create a new file:
src/components/notes/Create.vue
Inside the new vue-file, create a form in the template and return an object with two properties (title and content) in the 'data'-method. Bind the data to the input and textarea using the ‘v-model’-directive.
Create a method called ‘createNote’ in the methods object and bind it to the submit-event of the form. Inside the method check if either the title or the content has been filled out. If so, create a new note and push it to the array through Firebase. Also pass a callback to reset the form when the note has been pushed to Firebase successfully.
Using the
v-on:submit.prevent="createNote()" you can trigger the ‘createNote’-method when the form is being submitted. The ‘.prevent’ is optional and will prevent the default submit behavior just like
event.preventDefault().
src/components/notes/Create.vue
<template> <form class="create-note" v-on:submit. <input name="title" v- <textarea name="content" v- </textarea> <button type="submit">+</button> </form> </template> <script> import Firebase from 'firebase' let firebase = new Firebase('https://<YOUR-FIREBASE-APP>.firebaseio.com/') export default { data () { return { title: '', content: '' } }, methods: { createNote () { if (this.title.trim() || this.content.trim()) { firebase.child('notes').push({title: this.title, content: this.content}, (err) => { if (err) { throw err } this.title = '' this.content = '' }) } } } } </script> <style> form.create-note{ position: relative; width: 480px; margin: 15px auto; background: #fff; padding: 15px; border-radius: 2px; box-shadow: 0 1px 5px #ccc; } form.create-note input, form.create-note textarea{ width: 100%; border: none; padding: 4px; outline: none; font-size: 1.2em; } form.create-note button{ position: absolute; right: 18px; bottom: -18px; background: #41b883; color: #fff; border: none; border-radius: 50%; width: 36px; height: 36px; box-shadow: 0 1px 3px rgba(0,0,0,0.3); cursor: pointer; outline: none; } </style>
Now don't forget to import and use the component in
App.vue.
<template> <div> <create-note-form></create-note-form> <notes></notes> </div> </template> <script> import Notes from './components/notes/Index' import CreateNoteForm from './components/notes/Create' export default { components: { Notes, CreateNoteForm } } </script> ...
Now you are able to add notes, and they will automatically be inserted through the
child_added-event.
The app should look something like this now. The notes get laid out next to each other because of
float: left, though this will not always look as good. That's why in the next section you will implement the Masonry-library which will take care of the layout.
Letting Masonry handle the layout
Masonry is a great library for building dynamic grids that will layout dynamically depending on the width of the grid items.
Install 'masonry-layout' via npm.
npm install masonry-layout --save
Now import it in the Notes component.
Add the
v-el:notes attribute to
div.notes so you can reference it in the Vue instance via
this.$els.notes.
At the time Masonry is instantiated there are no notes, so you need to tell Masonry that there are new items in the callback of 'child_added'-event and also tell it to lay out the notes again.
This won't work directly in the callback of
child_added-event though because at that point the new note will not be rendered yet by Vue. Do it on the
nextTick-event, which is similar to nextTick from NodeJS. There you can be certain that the new note is rendered and Masonry will correctly lay out the new item. To make sure all the notes are nicely centered, replace the padding with
margin: 0 auto; and add the
fitWidth: true option when initializing Masonry.
src/components/Index.vue
<template> <div class="notes" v-el:notes> <note v- </note> </div> </template> <script> import Firebase from 'firebase' import Masonry from 'masonry-layout' import Note from './Note' export default { components: { Note }, data () { return { notes: [] } }, ready () { let masonry = new Masonry(this.$els.notes, { itemSelector: '.note', columnWidth: 240, gutter: 16, fitWidth: true }) let firebase = new Firebase('https://<YOUR-FIREBASE-APP>.firebaseio.com/') firebase.child('notes').on('child_added', (snapshot) => { let note = snapshot.val() this.notes.unshift(note) this.$nextTick(() => { // the new note hasn't been rendered yet, but in the nextTick, it will be rendered masonry.reloadItems() masonry.layout() }) }) } } </script> <style> .notes{ margin: 0 auto; } </style>
In the Note component we can remove
float: left; and change the margin
margin: 8px 0;.
src/components/notes/Note.vue
<style> .note{ background: #fff; border-radius: 2px; box-shadow: 0 2px 5px #ccc; padding: 10px; margin: 8px 0; width: 240px; } ... </style>
Now Masonry lays out the notes nicely and the app looks like this:
Build & deploy
- Build the project for production (this will compile everything and put your app in the dist folder)
- Install Firebase command line tools
- Initialize the app (make sure to enter 'dist' as your public folder and select the correct Firebase app)
- If you haven't logged in before, log into Firebase
- Deploy your app
- View your app online
npm run build npm install -g firebase-tools firebase init firebase deploy firebase open
Wrapping up
We just created a mimimal notes app that looks like Google Keep with VueJS and Firebase.
- We created a Notes component that wraps around the individual notes
- We created a component for visualizing individual notes
- We created another component that handles the creation of a note
- We integrated Masonry with the Notes component to handle layout of our notes.
- We built our app and deployed it to Firebase
Unfortunately there is still a lot missing. So far the app only covers the CR of a CRUD-application. It's still missing UPDATE and DELETE.
In the next part, I will show you how to implement UPDATE and DELETE functionality. I will also show you how to abstract our Firebase logic to an independent data layer to keep our code DRY. Currently every client using the app is sharing this list of notes. In one of the next parts, I will also introduce authentication and give each user its own list of notes.
Let me know if you have any issues and I will try to get back to you. This is my first tutorial ever, so don't hesitate to give feedback. I hope this tutorial was helpful. | https://scotch.io/tutorials/building-a-google-keep-clone-with-vue-and-firebase-pt-1 | CC-MAIN-2017-39 | refinedweb | 2,690 | 65.62 |
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9a5pre) Gecko/20070514 SeaMonkey/1.5a Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9a5pre) Gecko/20070514 SeaMonkey/1.5a Passwords for several imap mail accounts are stored with the same master password. The "Check new messages at startup" option is checked. When starting "Mail & Newsgroup" several prompts for Master password are appearing simultaneously. Reproducible: Always Steps to Reproduce: 1. Several mail accounts should be present and passwords for them should be stored with Master password. 2. Check "Check new messages at startup" options for the accpunts 3. Start "Mail & Newsgroup" Actual Results: Several prompts for Master password appear simultaneously at the same time. Expected Results: Only one prompt for Master password should be asked.
Note bug 369963.
Can you reproduce with SeaMonkey v2.0a1pre ?
I believe that this is also a dupe of bug 356097.
I can reproduce it with SeaMonkey 2.0a1
I can reproduce it with SeaMonkey 2.0a2 and POP3 instead of IMAP accounts.
Happens in Linux, one prompt for each mail or news account, EVEN IF ONLY A BROWSER WINDOW IS OPENED. Expected behavior: 1 - at most one prompt for master password 2 - prompt only when the MP is used the first time when opening only browser don't ask about mail 3 - maintainer would fix a bug which has been known for 2+ years!!
Even happens in SM2.0b1 after migrating from SM1.1.17 where no master password was used at all!
I'm getting the same as commander_keen. Never seen it before 2.0b1.
I have similar (same?) problem: Restoring session with tabs logged into various accounts (email, nytimes, facebook, etc, etc). It'll ask for master password multiple times ... Eventually it gets flooded with requetsts (especially bad on Mac OS due to warning windows hanging from tabstrip - not separate windows), so I have to disable master password & restart this session. Mac OSX.5.8/Firefox 3.5.4pre
I'll make this bug the target for the SM 2.0 interim solution. The real solution is developed in bug 338549 but will most probably not be ready in time for SM 2.0. The patch I will attach in a minute will address the issue described by the summary of this bug and comment 6, unless the Master Password is canceled. It will not fix the issue described in comment 9 (which might be fixed by bug 475053 which hasn't landed on any branch yet AFAIK).
Created attachment 404652 [details] [diff] [review] proposed patch This is mostly TB's workaround as can be found here: <> (thanks to Peter Weilbacher for pointing this out in m.d.a.seamonkey!) Additionally I defaulted the mail.biff.on_new_window pref to false which will disable the checking of MailNews accounts on opening the browser window (or any other one) for all profiles (the pref can be set to true again manually afterwards), something which is no big loss anyway (and doesn't work for IMAP, see bug 493478). Please note that this pref is already ifdef MOZ_SUITE so no TB review/approval required. Please provide a fast review if you can; it'd be a shame if this missed the imminent SM 2.0 code freeze. NB: I don't think the MailNews part needs a pref but if reviewers require it I will add it. The effect of the patch is as follows: 1. if only the browser is opened, no extra Master Password prompt appears since MailNews biff is not triggered. If one or more websites trigger a Master Password prompt you will get multiple ones, though (as I said, that part is not addressed) 2. if MailNews is opened (either alone or together with the browser or later, e.g. from the browser) exactly one Master Password prompt appears. Only if that one is canceled the old behavior is triggered (multiple prompts). I think this is the best we can do currently. The current behavior is unacceptable IMO. I think this should block but since I guess that that request would be denied I'm asking for wanted, mainly to get an idea whether the Council is in line with me here.
#2 WFM - Thanks! Tested with: Build identifier: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.4pre) Gecko/20091003 Lightning/1.0pre SeaMonkey/2.0pre I reset all 4 POP3/SMTP accounts to check mail on startup & restarted SM using seamonkey -mail -browser. Only received one Master Password prompt at startup. Also checked: Tools|Password Manger|Manage Stored Passwords|Show Passwords - single prompt/works.
Marking blocking+ so we have this on our radar for 2.0
Um... doesn't this force a prompt even when, say, you only have RSS feeds?
(In reply to comment #14) > Um... doesn't this force a prompt even when, say, you only have RSS feeds? Yes, if a Master Password is set.
From my POV, it's better to ask for a master pwd if one has only feeds than to leave the current situation, so let's try to get this in for RC1 (i.e. ideally within hours from now) and possibly refine it in followups. Would that be possible?
Created attachment 404871 [details] [diff] [review] patch v2 Patch hooking into final-ui-startup as suggested by Neil on IRC. Benefits: - asks for Master Password before opening any window -> works not only for MailNews but also for browser window (e.g. pages with saved logins or client cert requests) - no need to change mail.biff.on_new_window pref default - controlled by new pref signon.startup.prompt (can be found in about:config) - simple logic: if MP is set and pref is true (default), prompt on startup.
Just tested in Windows: Build identifier: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.9.1.4pre) Gecko/20091005 SeaMonkey/2.0pre Same results as my #12 comments - works for me. Test scenario was the same with the following exceptions: 1) only used 2 email accounts instead of 4, 2) OS was Windows 2000Pro SP4 in a VirtualBox VM.
Sorry, should note that my Windows test was per comment #11 - not Jens attachment 404871 [details] [diff] [review].
Comment on attachment 404871 [details] [diff] [review] patch v2 >+// prompt for Master Password on startup >+pref("signon.startup.prompt", true); Well, I would have possibly put this before the general.startup prefs, or maybe with the browser.formfill prefs (on the shaky ground that formfill both used to be part of wallet). >+ var prefBranch = Components.classes["@mozilla.org/preferences-service;1"]. >+ getService(Components.interfaces.nsIPrefBranch); Nit: .getService to line up with .classes >+ if (!prefBranch.getBoolPref("signon.startup.prompt")) (Or just write one big if statement.)
Created attachment 404944 [details] [diff] [review] patch v2a If nothing else is needed from my side, please feel free to take whatever action necessary to push this once reviews are finished. Before I forget, this change probably should be announced on the newsgroups since it's a bit surprising if you for example mainly use the browser, have a MP set for saved website logins and then trigger a SM launch through clicking a link in an external application (assuming SM is the system default web browser) that leads to a site that won't trigger the MP prompt itself. I can do that once this is in.
Comment on attachment 404944 [details] [diff] [review] patch v2a >+ if (prefBranch.getBoolPref("signon.startup.prompt")) { Sorry, you misunderstood me. I'll combine these two patches into a single patch that uses the bits you got right.
Created attachment 404957 [details] [diff] [review] patch v2b [Checkin: Comment 27]
Looks like the checkin for this bug broke this test: TEST-UNEXPECTED-FAIL | chrome://mochikit/content/browser/toolkit/components/passwordmgr/test/browser/browser_passwordmgrdlg.js | Exception thrown - [Exception... "'User canceled master password entry, login not added.' when calling method: [nsILoginManagerStorage::addLogin]" nsresult: "0x8057001e (NS_ERROR_XPC_JS_THREW_STRING)" location: "JS frame :: :: anonymous :: line 454" data: no]
(In reply to comment #24) > Looks like the checkin for this bug broke this test:.
Maybe sgautherie can help; KaiRo is probably too busy to ask.
Comment on attachment 404957 [details] [diff] [review] patch v2b [Checkin: Comment.
(In reply to comment #25) >. That should not be too complicated....
(In reply to comment #28) >. Exactly that's why I dislike that solution and why it can only be temporary - it gives the wrong impression that the profile or the application would be secured by the master password, which is completely untrue and isn't planned for any Mozilla product right now. But if you want to discuss that, let's please take it to the newsgroups and don't make this bug report longer than needed.
(In reply to comment master password should protect the security device, not the whole application. If you want to propose an application security password (which I personally think is inappropriate) then do so. One of the original problems was that when opening the browser the unrequested mail process asked for many master passwords, asking when not needed makes this worse. You want to ask ONCE and only WHEN NEEDED. There is no benefit to asking until necessary, and none to asking more than once. Coding similar things in C/pthreads I would do something like: if passwd needed if MP present wait for semiphore (or similar) while MP not validated ask MP and validate exit-loop if CANCEL end-loop endif if MP validated use saved passwd fi By use of a semiphore or similar requests are serialized. And I do realize that the logic might not fit the use case, it's for discussion. I'd also suggest a [NO MP] button in addition to [CANCEL] to allow manual passwd entry in all cases.
Isn't this bug fixed now? If so, could we please mark it as that to clear it off the radars?
(In reply to comment #29) >... I filed bug 521263 to investigate the test issue.
The TB hack has been removed in bug 560746. I quite like the feature, even after the underlying problem has been fixed, so I vote for just changing the default of our pref to false.
. Still, due to us still having to deal with HTTP Auth as well, it might be reasonable to just switch the default for now but have a bug filed for eventual hack removal.
(In reply to comment #35) > . Sure, this shouldn't be advertised. I'm OK with removing the pref UI we added, I only suggest to keep the pref and functionality itself. Use case: With this pref I can safely let SM start, e.g. during system boot, and type the MP once I'm at the keyboard. Without the pref/functionality the MP doesn't block the loading of the application so it can easily happen that when I've entered the MP that I get a timeout response from all the mail servers I'm querying at startup. That said, if the decision is made that the pref and functionality will be removed I'd probably provide an extension to solve the issue for me if I can, or patch SM myself if I can't.
Hmm, actually, should we file a followup or multiple followups based on the last comment(s)? | https://bugzilla.mozilla.org/show_bug.cgi?id=381269 | CC-MAIN-2017-13 | refinedweb | 1,899 | 65.93 |
I am writing code that utilizes COM interfaces. I am basing my code on examples that I have found online. I do not want to utilize smart pointers in this case because I want to understand the basics of COM and not just have a smart pointer class do all of the work for me.
In order to frame my questions, let's assume I have a class similar to the following:
public class TestClass
{
private:
IUnknown *m_pUnknown;
public:
TestClass();
void AssignValue();
}
TestClass::TestClass()
{
m_pUnknown = NULL;
}
void TestClass::AssignValue()
{
IUnknown *pUnknown = NULL;
//Assign value to pUnknown here - not relevant to my questions
m_pUnknown = pUnknown;
pUnknown->Release();
}
AddRef()
AddRef()
AssignValue()
pUnknown
Release()
pUnknown
pUnknown->AddRef()
Notes: I assume we are ignoring exceptions for simplicity here. If this was for real, you would want to use smart pointers to help keep things straight in the presence of exceptions. Similarly, I am not worrying about proper copying or destruction of instances of your example class or multi-threading. (Your raw pointers cannot be used from different threads as simply as you might assume.)
First, You need to make any necessary calls to COM. The only way anything might happen "automatically" behind the scenes would be if you were using smart pointers to do them.
1) The examples you refer to have to be getting their COM interface pointers from somewhere. This would be by making COM calls, e.g., CoCreateInstance() and QueryInterface(). These calls are passed the address of your raw pointer and set that raw pointer to the appropriate value. If they weren't also implicitly AddRef'ed, the reference count might be 0 and COM could delete the associated COM object before your program could do anything about it. So such COM calls must include an implicit AddRef() on your behalf. You are responsible for a Release() to match this implicit AddRef() that you instigated with one of these other calls.
2a) Raw pointers are raw pointers. Their value is garbage until you arrange for them to be set to something valid. In particular, assigning a value to one will NOT auto-magically call a function. Assigning to a raw pointer to an interface does not call Release() - you need to do that at the appropriate time. In your post, it appears that you are "overwriting" a raw pointer that had previously been set to NULL, hence there was no existing COM interface instance in the picture. There could not have been an AddRef() on something that doesn't exist, and must not be a Release() on something that isn't there.
2b)
Some of the code you indicated by a comment in your example is very relevant, but can easily be inferred. You have a local raw pointer variable, pUnknown. In the absent code, you presumably use a COM call that obtains an interface pointer, implicitly AddRefs it, and fills in your raw pointer with the proper value to use it. This gives you the responsibility for one corresponding Release() when you are done with it.
Next, you set a member raw pointer variable (m_pUnknown) with this same value. Depending on the previous use of this member variable, you might have needed to call Release() with its former value before doing this.
You now have 2 raw pointers set to the value to work with this COM interface instance and responsibility for one Release() due to 1 implicit AddRef() call. There are two ways to deal with this, but neither is quite what you have in your sample.
The first, most straightforward, and proper approach (which others have correctly pointed out & I skipped passed in the first version of this answer) is one AddRef() and one Release() per pointer. Your code is missing this for m_pUnknown. This requires adding m_pUnknown->AddRef() immediately after the assignment to m_pUnknown and 1 corresponding call to Release() "someplace else" when you are done using the current interface pointer from m_pUnknown. One usual candidate for this "someplace else" in your code is in the class destructor.
The second approach is more efficient, but less obvious. Even if you decide not to use it, you may see it, so should at least be aware of it. Following the first approach you would have the code sequence:
m_pUnknown = pUnknown; m_pUnknown->AddRef(); pUnknown->Release();
Since pUnknown and m_pUnknown are set the same here, the Release() is immediately undoing the AddRef(). In this circumstance, eliding this AddRef/Release pair is reference count neutral and saves 2 round trips into COM. My mental model for this is a transfer of the interface and reference count from one pointer to the other. (With smart pointers it would look like newPtr.Attach( oldPtr.Detach() ); ) This approach leaves you with the original/not shown implicit AddRef() and needing to add the same m_pUnknown->Release() "someplace else" as in the first alternative.
In either approach, you exactly match AddRefs (implicit or explicit) with Releases for each interface and never go to a 0 reference count until you are done with the interface. Once you do hit 0, you do not attempt to use the value in the pointer. | https://codedump.io/share/JePcc9mrJJwk/1/com-reference-counting-questions | CC-MAIN-2017-34 | refinedweb | 854 | 60.24 |
Lately Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxxxxx> said: >). hm. this doesn't really add value to your testing case for passing signed chars. now they all appear non-printable - and not like the integer equivalent, as the naive programmer would expect. I would prefer the openbsd way: if (c == EOF) return 0; else return ((table+1)[(unsigned char)c)]); this gives expected results for (common?) misuse and even saves us one conditional. cheers simon -- Serve - BSD +++ RENT this banner advert +++ ASCII Ribbon /"\ Work - Mac +++ space for low $$$ NOW!1 +++ Campaign \ / Party Enjoy Relax | Against HTML \ Dude 2c 2 the max ! Mail + News / \
Attachment:
pgp00015.pgp
Description: PGP signature | http://leaf.dragonflybsd.org/mailarchive/commits/2005-07/msg00161.html | CC-MAIN-2014-10 | refinedweb | 106 | 68.87 |
Event handlers in Azure Event Grid.
This article provides links to content for each event handler.
Azure Automation
Use Azure Automation to process events with automated runbooks.
Azure Functions
Use Azure Functions for serverless response to events.
When using Azure Functions as the handler, use the Event Grid trigger instead of generic HTTP triggers. Event Grid automatically validates Event Grid Function triggers. With generic HTTP triggers, you must implement the validation response.
Event Hubs
Use Event Hubs when your solution gets events faster than it can process the events. Your application processes the events from Event Hubs at it own schedule. You can scale your event processing to handle the incoming events.
Event Hubs can act as either an event source or event handler. The following article shows how to use Event Hubs as a handler.
For examples of Event Hubs as a source, see Event Hubs source.
Hybrid Connections
Use Azure Relay Hybrid Connections to send events to applications that are within an enterprise network and don't have a publicly accessible endpoint.
Logic Apps
Use Logic Apps to automate business processes for responding to events.
Service Bus Queue (Preview)
Use Service Bus as an event handler to route your events in Event Grid directly to Service Bus queues for use in buffering or command and control scenarios in enterprise applications. The preview does not work with Service Bus Topics and Sessions, but it does work with all tiers of Service Bus queues.
Please note, while Service Bus as a handler is in public preview, you must install the CLI or PowerShell extension when using those to create event subscriptions.
Install extension for Azure CLI
For Azure CLI, you need the Event Grid extension.
In CloudShell:
- If you've installed the extension previously, update it with
az extension update -n eventgrid.
- If you haven't installed the extension previously, install it by using
az extension add -n eventgrid.
For a local installation:
- Install the Azure CLI. Make sure that you have the latest version, by checking with
az --version.
- Uninstall previous versions of the extension with
az extension remove -n eventgrid.
- Install the
eventgridextension with
az extension add -n eventgrid.
Install module for PowerShell
For PowerShell, you need the AzureRM.EventGrid module.
In CloudShell:
- Install the module with
Install-Module -Name AzureRM.EventGrid -AllowPrerelease -Force -Repository PSGallery.
For a local installation:
- Open PowerShell console as administrator.
- Install the module with
Install-Module -Name AzureRM.EventGrid -AllowPrerelease -Force -Repository PSGallery.
If the
-AllowPrerelease parameter isn't available, use the following steps:
- Run
Install-Module PowerShellGet -Force.
- Run
Update-Module PowerShellGet.
- Close the PowerShell console.
- Restart PowerShell as administrator.
- Install the module
Install-Module -Name AzureRM.EventGrid -AllowPrerelease -Force -Repository PSGallery.
Using CLI to add a Service Bus handler
For Azure CLI, the following example subscribes and connects an Event Grid topic to a Service Bus queue:
# If you haven't already installed the extension, do it now. # This extension is required for preview features. az extension add --name eventgrid az eventgrid event-subscription create \ --name <my-event-subscription> \ --source-resource-id /subscriptions/{SubID}/resourceGroups/{RG}/providers/Microsoft.EventGrid/topics/topic1 \ --endpoint-type servicebusqueue \ --endpoint /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.ServiceBus/namespaces/ns1/queues/queue1
Queue Storage
Use Queue storage to receive events that need to be pulled. You might use Queue storage when you have a long running process that takes too long to respond. By sending events to Queue storage, the app can pull and process events on its own schedule.
WebHooks
Use webhooks for customizable endpoints that respond to events.
Next steps
- For an introduction to Event Grid, see About Event Grid.
- To quickly get started using Event Grid, see Create and route custom events with Azure Event Grid.
Feedback | https://docs.microsoft.com/en-us/azure/event-grid/event-handlers | CC-MAIN-2019-30 | refinedweb | 623 | 58.89 |
The first design consideration is storage. Any shopping cart needs to be available across multiple pages. There are a number of options for persisting state between ASP.NET Web Pages. Form fields and URLs are not good places for maintaining the contents of a shopping cart as they do not persist beyond the series of pages that the form fields or URLs appear within. Application variables are totally unsuitable as everyone will share the same data - and it will all be lost if the application restarts. Likewise, in-process session variables are not robust enough. They will be lost if the application pool recycles, which it can do regularly in a shared hosting environment. You really don't want to present your user with an empty cart after they had filled it up with goodies, just because an app pool recycle cleared it out. In addition, none of the previous methods persist data beyond one visit. You could use cookies to store details of a user's selections except that they are limited in the amount of information they can hold. The most robust approach is to use a database.
The next consideration is when to identify the user. The user will have to provide an identity before they can actually make a purchase, and for this they will need to register an account with your site. However, they don't necessarily need to do this to select some products to purchase. Most e-commerce sites recognise that they should allow anonymous users to fill a shopping cart without forcing them to provide personal details first. This article will show how to manage that scenario. In a future article, I will demonstrate how to take the process one step further and require registration at the point of checkout.
Database Design
The existing Bakery database is very simple. It contains one table: Products. Shopping carts are also very simple. A basic one only needs two tables: Cart and CartItems:
[Cart] CartId int IDENTITY Primary Key Not Null UserId int DateCreated datetime NOT NULL CheckedOut bit NOT NULL Default 0
[CartItems] CartItemId int IDENTITY Primary Key Not Null CartId int ProductId int Quantity int Price money
The Cart table has a
UserId value, although anonymous users will be allowed to create a cart. The
UserId column will be used in a future article to associate a cart with a registered user.
CartItems has a one-to-many relationship with the Cart table via the
CartId column. It also has a one-to-many relationship with the Products table via the
ProductId column. You might wonder why Price has been included in the
CartItems table when it already appears in the Products table. You may want to change the price of products from time to time in the Products table, but you do not want those changes to affect the price already paid for historic orders. Therefore you keep a record of the price paid separate to the "marketing" price in the products table. If you offer promotional discounts from time to time, you may want to add another column to the
CartItems table to record the one that applied to a particular product.
If you want to add these tables quickly to your Bakery template site, go to the Databases workspace in WebMatrix, ensure you are on the Home tab and then click New Query. Copy and paste the first of the following Create Table statements and click Execute. Then repeat for the second one.
CREATE TABLE Cart ( CartId int IDENTITY Primary Key NOT NULL, UserId int, DateCreated datetime NOT NULL, CheckedOut bit NOT NULL Default 0 )
CREATE TABLE CartItems ( CartItemId int IDENTITY Primary Key NOT NULL, CartId int NOT NULL, ProductId int NOT NULL, Quantity int NOT NULL, Price money NOT NULL )
Code
Having created a site from the Bakery template and made the changes to the database outlined above, it's time to begin coding. I made only one change to the Default.cshtml file that comes with the site and that was to remove the hard-coded dollar signs from the prices that appear with each product and to change the format string to use "c" instead of "f" so that the regional currency will be picked up from the server:
@string.Format("{0:c}", p.Price)
There were about 3 places where price or featuredprice appeared that required this change. I like seeing British pound signs in my samples.
The Order.cshtml page requried quite a few changes, not least to the HTML which is shown below:
<ol id="orderProcess"> <li><span class="step-number">1</span><a href="~/">Choose Item</a></li> <li class="current"><span class="step-number">2</span>Place Order</li> <li><span class="step-number">3</span><a href="~/ReviewCart">Review Cart</a></li> </ol> <h1>Place Your Order: @product.Name</h1> <form action="" method="post"> <fieldset class="no-legend"> <legend>Place Your Order</legend> <img class="product-image order-image" src="~/Images/Products/Thumbnails/@product.ImageName" alt="Image of @product.Name"/> <ul class="orderPageList" data- <li> <div> <p class="description">@product.Description</p> </div> </li> <li class="quantity"> <div class="fieldcontainer" data- <label for="orderQty">Quantity</label> <input type="text" id="orderQty" name="orderQty" value="@(quantity == 0 ? 1 : quantity)"/> x <span id="orderPrice">@string.Format("{0:f}", product.Price)</span> = <span id="orderTotal">@string.Format("{0:f}", quantity == 0 ? product.Price : quantity * product.Price)</span> </div> </li> </ul> <p class="actions"> <input type="hidden" name="productId" value="@product.Id" /> <input type="hidden" name="price" value="@product.Price" /> @if(!IsPost){ <input type="submit" value="Place Order" data- } </p> <div id="basket"> @if(totalItems > 0){ <text>Your cart contains <strong>@totalItems</strong> items</text> } </div> </fieldset> </form>
The navigation a the top of the file has been amended so that the third link now points to a page called ViewCart.cshtml rather than OrderSuccess.cshtml. The email address and shipping address fields have been removed and the value for the quantity is now generated dynamically so that it is persisted when the form is submitted. The orderTotal is also generated initially by server side code so that it is persisted across form submissions, although the JavaScript that updates it when a quantity is selected is left in the file (just not included here). The Place Order button has been wrapped in a conditional block so that it only appears if the from has not been submitted. Finally, a div is added for displaying the current number of items in a basket if the totalItems variable is greater than 0.
The workflow is as follows: when a user selects a product for the first time and submits an order, a cart is created for that user and entered into the Cart table. Then the details of the order are committed as a new entry in the CartItem table. The cart is then associated to that user by persistent cookie to which is written the ID of the cart. The cookie is set to expire 6 months into the future. So long as the user doesn't delete their cookies, the site will be able to identify that the user has been there before and has a cart. Here's the code block at the top of the file in its entirety:
@{ Page.Title = "Place Your Order"; var db = Database.Open("bakery"); var productId = UrlData[0].AsInt(); var price = Request["price"].AsDecimal(); var quantity = Request["orderQty"].AsInt(); var commandText = string.Empty; var cartId = 0; var totalItems = 0; commandText = "SELECT * FROM PRODUCTS WHERE ID = @0"; var product = db.QuerySingle(commandText, productId); if (product == null) { Response.Redirect("~/"); } if(Request.Cookies["cart"] != null){ cartId = Request.Cookies["cart"].Value.AsInt(); commandText = "SELECT SUM(Quantity) AS TotalItems FROM CartItems WHERE CartId = @0"; object result = db.QueryValue(commandText, cartId); totalItems = result == DBNull.Value ? 0 : Convert.ToInt32(result); } if(IsPost && quantity > 0){ if(Request.Cookies["cart"] == null){ commandText = "INSERT INTO Cart (DateCreated) VALUES (GetDate())"; db.Execute(commandText); cartId = (int)db.GetLastInsertId(); } commandText = "SELECT Quantity FROM CartItems WHERE CartId = @0 AND ProductId = @1"; var reduction = db.QueryValue(commandText, cartId, productId); if(reduction != null){ totalItems -= reduction; } commandText = "DELETE FROM CartItems WHERE CartId = @0 AND ProductId = @1"; db.Execute(commandText, cartId, productId); commandText = "INSERT INTO CartItems (CartId, ProductId, Quantity, Price) VALUES (@0, @1, @2, @3)"; db.Execute(commandText, cartId, productId, quantity, price); totalItems += quantity; Response.Cookies["cart"].Value = cartId.ToString(); Response.Cookies["cart"].Expires = DateTime.Now.AddMonths(6); } }
The code (ignoring the title setting line) is divided into 4 blocks. The first block consists of various variables being declared and initialised. Most of these are the same as in the original Bakery template site. The additions are integers for the cart ID and total items in the cart, and a string for holding SQL commands. The second block attempts to retrieve product details for the product with the ID that was passed in the URL. If no matching product is found, the user is redirected to the home page. This code is more or less unchanged from the original.
The third section of code checks to see if a cookie exists with the name "cart". If it does, the user has already created a cart as some stage. The cart's ID is obtained from the cookie and this is used to query the database for the total number of items in the cart.
The last block of code executes if the user has submitted the form and has specified a quantity greater than 0. If no cart exists (determined through checking for the existence of the cookie) one is created. This is achieved through setting the DateCreated field value to the SQL GetDate() function value which is the equivalent of .NET's DateTime.Now. Then the Database.GetLastInsertId method is used to obtain the value if the newly created record. That represents the cart's ID.
It is assumed that any order submitted for a particular product is either the first order for that product or an amendment to an existing order for that product - meaning that existing entries in the cart for the selected product should be replaced with the new submission. A query is executed against the database to determine the quantity of any existing orders for the product. The Database.QueryValue method is used, which expects a scalar value in return (since the query only wants the value from one field). If no records match the criteria, QueryValue returns null. Otherwise, the value of an existing row is retuned, which needs to be deducted from the current totalItems value. Then any matching rows are deleted from the CartItems table. Following that, a new row is added to the table containing details of the current order submission, and the totalItems variable is adjusted accordingly to give a revised count of items. Finally, a cookie is written to the browser with the cart's ID and set to expire in 6 months time.
Reviewing the Cart
The final step of the order process presented in this sample is the ability to review the contents of the cart and to remove items.
This is provided by ReviewCart.cshtml. Here is the HTML and script part of the page:
<ol id="orderProcess"> <li><span class="step-number">1</span><a href="~/">Choose Item</a></li> <li><span class="step-number">2</span><a href="~/Order/1">Place Order</a></li> <li class="current"><span class="step-number">3</span>Review Cart</li> </ol> <h1>Review Your Cart</h1> <p> </p> <div id="grid"> @grid.GetHtml( columns: grid.Columns( grid.Column("Name", "Product", format: @<a href="~/Order/@item.ID">@item.Name</a>), grid.Column("Quantity"), grid.Column("Price", "Unit Price", format: @<text>@item.Price.ToString("c")</text>), grid.Column("Price", "Total Price", format: @<text>@item.Total.ToString("c")</text>), grid.Column("", format: @<form method="post"> <input type="hidden" value="@item.CartItemId" name="cartItem" /> <input type="submit" value="Remove" /> </form>) ) ) </div> <p> </p> @section scripts{ <script> $(function () { var html = '<tfoot><tr><td colspan="3"><strong>Total</strong></td><td>'; html += '<strong>@cartTotal.ToString("c")</strong>' html += '</td><td colspan="2"> </td></tr></tfoot>'; $('table').append(html); }); </script> }
The cart itself is presented in a WebGrid. The grid has had a footer added to it via jQuery, and this is used to display the total value of the cart. This is about the easiest way to add footers to WebGrids. Each row in the grid has a Delete button in the last column. If you look at the format parameter for hte last column, you can see that individual forms are provided for each product. The product's ID is stored in a hidden field so that it is not visible to the user but will be transferred to the server if the associated button is clicked. Let's look at what else happens on the server:
@{ Page.Title = "Review Your Cart"; if(Request.Cookies["cart"] == null){ Response.Redirect("~/"); } var db = Database.Open("Bakery"); var cartId = Request.Cookies["cart"].Value.AsInt(); var commandText = string.Empty; if(IsPost){ commandText = "DELETE FROM CartItems WHERE CartItemId = @0"; db.Execute(commandText, Request["cartItem"]); } commandText = @"SELECT p.ID, p.Name, c.Quantity, c.Price, c.Quantity * c.Price AS Total, c.CartItemId FROM CartItems c INNER JOIN Products p ON c.ProductId = p.Id WHERE c.CartId = @0"; var cartItems = db.Query(commandText, cartId); var cartTotal = cartItems.Sum(t => (decimal)t.Total); var grid = new WebGrid(cartItems, canPage: false, canSort: false); }
Well, first, a check is made to see if a cart (cookie) exists and if not, the user is redirected to the home page. If the visitor passes that particular test the rest of the code will execute. After some variables are declared, a check is made to see if any of the forms have been submitted to delete a row from the basket. If one has, the corresponding entry in the CartItems table is removed. Ths is done before a query is executed to obtain the cart's contents for display. Notice that the cartTotal variable (which is later used in the jQuery function to add athe footer) is generated using the Enumerable.Sum() extension method. Another way to obtain this value would have been to execute a separate SQL query against the database, but it is a lot more efficient to use LINQ queries on data that has been retrieved than to query the database again.
Summary
E-Commerce Shopping Carts seem to hold a lot of mystique for relatively inexperienced developers. However, they are very simple things. The one illustrated in this example is just a couple of small database tables and some beginner-level CRUD operations using the Database helper. It's not finished. There needs to be some way to associate the cart to a user's account, and I will be looking at that in a forthcoming article. Also, you might want to add some maintenance routines such as deleting anonymous carts that are more than 6 months old.
A sample site containing the code featured in this article is available to download from GitHub.
7 Comments
- emre
- Robert
- Mike
Razor web pages sites are also compiled. They also make use of the full ASP.NET framework so there is no technical reason why you couldn't use Razor to build a large scale e-commerce site.
The only reasons why you might refer MVC over Web Pages for large scale development are practical, such as the fact that MVC apps are built using the web Application model as opposed to the Web Site model, and testability is designed into the MVC framework more so than Web Pages.
- Alvin
When do you plan to extend the bakery shopping cart beyond this point?
- Mike
I have no specific plans. What did you have in mind?
- Alvin
Also, if you click on an item that is already in the cart, the item quantity is not updated in the item details.
PS: I find your website very useful Mike, and it's helped me a ton!
Thanks,
-Alvin
- DANIEL ODINAKA | http://www.mikesdotnetting.com/article/210/razor-web-pages-e-commerce-adding-a-shopping-cart-to-the-bakery-template-site | CC-MAIN-2015-48 | refinedweb | 2,670 | 56.05 |
Unleashing the power of raw data with AWS Lambda
What are our customers doing with their raw data destinations?
Late last year, we were interested to learn what customers were building on top of our streaming webhooks and S3 connections. One name came up over and over again: AWS Lambda. AWS Lambda allows customers to write their own transformations and destination enhancements at a low cost and with little operational overhead.
And we knew we had a chance to make it even easier.
Today, we’re excited to announce the launch of our AWS Lambda Destination. Now customers can use our AWS Lambda Destination to execute custom, serverless code based on any call from Segment, like
track() and
identify() calls.
The code you run on AWS Lambda (a “Lambda function”) can unlock endless new use cases for your business, and we’re excited to see what you build with it.
What you can do with Segment + AWS Lambda
With AWS Lambda and Segment, you get the power of Customer Data Infrastructure and the flexibility to run code without provisioning or managing servers. Here are some of the ways customers are combining Segment events with AWS Lambda today:
Enrich events via internal + external data stores: Decorate your events with new data from the Clearbit API, snag product metadata of a productId, or fetch a required identity off the Personas Profile API.
Run advanced transforms: We understand there’s no one-size-fits-all when it comes to transforming your events. Merge property values to event names in Intercom or drop unnecessary fields from events.
Customize existing integrations: Edit or augment existing integrations in our Catalog to better solve your business problems. If you can code it, you can ship it!
Build your own integrations: Want to send data to an integration we don’t support today? Just wire up the mapping you need in your Lambda and watch the events roll in!
One Solution, Endless Possibilities
Lambda functions are incredibly powerful, but often setting up, piping data in, and monitoring these functions come at a cost. Segment’s goal is to allow our customers to unlock powerful use cases, without having to spend a bunch of time configuring or debugging.
Our customer Mode Analytics, a data analytics platform, has wired up a system using Lambda to perform their own custom transformations.
AWS Lambda and Segment is a great combination. We collect and centralize our customer data with Segment, and AWS Lambda makes those events immediately actionable by processing, transforming, and sending them to marketing, support, and alerting tools we use every day.
-Benn Stancil, Chief Analytics Officer at Mode
Getting Started
Getting started with Segment and AWS Lambda is as easy as 1, 2, 3…
Create your Lambda - In AWS (or using tools like Up), deploy your Lambda and create your IAM role.
Point Segment at your Lambda - In your Lambda Destination settings, fill in the ARNs for your Lambda and an IAM role with
invokepermissions.
Start sending data - Enable your Destination and start enjoying the freedom to control your data’s destiny.
Want to try it out yourself? Here’s an example for a destination we don’t currently support in our catalog, Banjo Analytics. In less than 25 lines of code you’ll have data flowing.
import { Track, OrderCompleted } from '../../src/facade/events'; import { Success, ValidationError } from '../../src/responses' interface Settings {} export class BanjoAnalytics extends Integration { constructor(public settings: Settings) { super() this.subscribe<OrderCompleted>('Order Completed', this.orderCompleted) } async track(event: Track) { if (!event.userId) { return new ValidationError('UserId is a required property of all track events') } console.log(event.event) return new Success() } async orderCompleted(event: OrderCompleted) { console.log(event.properties.revenue) return new Success() } }
You can find the full example, or add your own, in our Lambda Recipes repo on GitHub.
Ready to try out AWS Lambda? Request a demo today or if you’re already a Segment customer, check out our docs and log in to your workspace to get started. | https://segment.com/blog/unleashing-the-power-of-raw-data-with-amazon-lambda/ | CC-MAIN-2019-18 | refinedweb | 665 | 54.12 |
More Progression with the level creator
Entry posted by Matthewj234 ·
Hey guys, in my last post I said my new priority was saving and loading maps. Well, now I have that ability
I managed to sort out a way to save and load maps
There are two files produced when saving, a configuration file and the actual map. The config file stores things like:
I managed to sort out a way to save and load maps
- Tile Information
- Map Dimensions
- Tile Set Information
Basically everything needed to recreate the map without the actual map
The other file, the map, is just a basic file, where the id of the tile is printed, and then a comma. That's it. Really high tech loading/saving here ;)
However I was running into an issue with the loading of maps. When I create a map, save it, and then edit the tileset, the ID's of the tileset loaded would be out of sync, not good. To solve this, I load the config file, and then cycle through the tiles that have been loaded from the tileset. When there is a match, I update a 3rd list of tiles with that tile. All the tiles left over, I update the ID for and add to the list. This way there are no errors with the loaded ID.
I also started working on a tool framework. This so far has been successful. Essentially, my tool class looks like this:
[source lang="java"]package com.Sparked_Studios.Map_Creator.Components.Tools;
import com.Sparked_Studios.Map_Creator.Creator.TileRef;
public class Tool {
public static Tool brush = new Brush();
public static Tool floodFill = new FloodFill();
public TileRef[][] doAction(TileRef[][] pixels, int xPos, int yPos, TileRef selectedTile) {
return null;
}
public String toString(){
return "Unknown tool";
}
}
[/source]
A nice basic class which has static references to the tools I add, and a toString method to identify the tool in the GUI.
Then, this is my brush tool. This is the most commonly used tool, and at the moment, just changes the identity of the tile you click, or drag over.
[source lang="java"]package com.Sparked_Studios.Map_Creator.Components.Tools;
import com.Sparked_Studios.Map_Creator.Creator.TileRef;
public class Brush extends Tool {
public TileRef[][] doAction(TileRef[][] pixels, int xPos, int yPos, TileRef selectedTile) {
pixels[xPos][yPos] = selectedTile;
return pixels;
}
public String toString() {
return "Brush";
}
}
[/source]
And then for example, my FloodFill tool. However you may note that there is a 5000 tile limit on the fill tool, to prevent stack overflows. There is probably a better way to do this, but I just wanted to get a quick tool made.
[source lang="java"]package com.Sparked_Studios.Map_Creator.Components.Tools;
import com.Sparked_Studios.Map_Creator.Creator.TileRef;
public class FloodFill extends Tool {
public TileRef[][] pixels;
public int fillCounter = 0, maxFill = 5000;
public boolean fillFlag = false;
public TileRef[][] doAction(TileRef[][] level, int xPos, int yPos, TileRef selectedTile) {
fillCounter = 0;
fillFlag = false;
this.pixels = level;
int id = level[xPos][yPos].id;
floodFill(xPos, yPos, id, selectedTile);
return pixels;
}
public void floodFill(int xPos, int yPos, int toFill, TileRef fill) {
if (fillFlag) return;
if (fillCounter > maxFill) fillFlag = true;
if (xPos < 0 || xPos >= pixels.length || yPos < 0 || yPos >= pixels[0].length) return;
if (pixels[xPos][yPos].id == fill.id) return;
if (pixels[xPos][yPos].id == toFill) {
fillCounter++;
pixels[xPos][yPos] = fill;
floodFill(xPos - 1, yPos, toFill, fill);
floodFill(xPos + 1, yPos, toFill, fill);
floodFill(xPos, yPos - 1, toFill, fill);
floodFill(xPos, yPos + 1, toFill, fill);
}
}
public String toString() {
return "Flood fill";
}
}[/source]
I am also working on some other tools and a proper GUI for them. I am planning on having the Dynamic fill tool, which I will go into more detail with in a later post, the good old Eye dropper tool, to quickly select a new tile, and a few others, which again will be detailed in another post.
The GUI I am planning is going to be something similar to paint programs, where you have the tools on a menu bar or panel, and click the one you want. Also, I am going to go for a frame based approach for things such as tile selection and tool selection, to keep the workspace uncluttered.
Anyway, back to coding! | https://www.gamedev.net/blogs/entry/2255090-more-progression-with-the-level-creator/ | CC-MAIN-2018-30 | refinedweb | 701 | 60.75 |
Finding Min/Max in an Array with Java
Last modified: July 25, 2019
1. Introduction
In this short tutorial, we're going to see how to find the maximum and the minimum values in an array, using Java 8's Stream API.
We'll start by finding the minimum in an array of integers, and then we'll find the maximum in an array of objects.
2. Overview
There are many ways of finding the min or max value in an unordered array, and they all look something like:
SET MAX to array[0] FOR i = 1 to array length - 1 IF array[i] > MAX THEN SET MAX to array[i] ENDIF ENDFOR
We're going to look at how Java 8 can hide these details from us. But, in cases where Java's API doesn't suit us, we can always go back to this basic algorithm.
Because we need to check each value in the array, all implementations are O(n).
3. Finding the Smallest Value
The java.util.stream.IntStream interface provides the min method that will work just fine for our purposes.
As we are only working with integers, min doesn't require a Comparator:
@Test public void whenArrayIsOfIntegerThenMinUsesIntegerComparator() { int[] integers = new int[] { 20, 98, 12, 7, 35 }; int min = Arrays.stream(integers) .min() .getAsInt(); assertEquals(7, min); }
Notice how we created the Integer stream object using the stream static method in Arrays. There are equivalent stream methods for each primitive array type.
Since the array could be empty, min returns an Optional, so to convert that to an int, we use getAsInt.
4. Finding the Largest Custom Object
Let's create a simple POJO:
public class Car { private String model; private int topSpeed; // standard constructors, getters and setters }
And then we can use the Stream API again to find the fastest car in an array of Cars:
@Test public void whenArrayIsOfCustomTypeThenMaxUsesCustomComparator() { Car porsche = new Car("Porsche 959", 319); Car ferrari = new Car("Ferrari 288 GTO", 303); Car bugatti = new Car("Bugatti Veyron 16.4 Super Sport", 415); Car mcLaren = new Car("McLaren F1", 355); Car[] fastCars = { porsche, ferrari, bugatti, mcLaren }; Car maxBySpeed = Arrays.stream(fastCars) .max(Comparator.comparing(Car::getTopSpeed)) .orElseThrow(NoSuchElementException::new); assertEquals(bugatti, maxBySpeed); }
In this case, the static method stream of Arrays returns an instance of the interface java.util.stream.Stream<T> where the method max requires a Comparator.
We could've constructed our own custom Comparator, but Comparator.comparing is much easier.
Note again that max returns an Optional instance for the same reason as before.
We can either get this value, or we can do whatever else is possible with Optionals, like orElseThrow that throws an exception if max doesn't return a value.
5. Conclusion
We saw in this short article how easy and compact it is to find max and min on an array, using the Stream API of Java 8.
For more information on this library please refer to the Oracle documentation.
The implementation of all these examples and code snippets can be found over on GitHub. | https://www.baeldung.com/java-array-min-max | CC-MAIN-2020-40 | refinedweb | 511 | 59.23 |
Texas holdem strip poker
Dealer machines em big machines. Rule tampa island!
Instructions party
drawing stars jack nv stories win venitian eldorado tournaments rock at game games wwwdisasterrecoverybookscom men multi tricks live fort blackjack collins oklahoma
jack
web run. Resort Jobs codes
give
jacks rooms room dealer. Resort em black usa
caesars
california tampa em sale target
offer
slicks vegas harrahs a chip pulls discount room collusions management cds katrina bunch slot rock programs best in software of. Search Coupons search tahoe. New hurricane free gps buy las car men. Slot omega tunica website basketball
men poker
hilton how hold web dimension
collusions
best play winning gps commerce mikey software circus harrahs casinos indian. Pictures pictures
gambling
met is. Nv stars s bonus tables clay jack ameture a and hold play themes katrina venitian instructions vegas caesars
casino new las
party web commerce
poker ace
tampa let south
strip poker holdem
discount ride from craps
let rule poker ride
basketball flash search
live
addiction york. Northern co chip cam grand what
hi305n
download sale. Circus live city charter play haicom ref dimension cheat management mississippi omega drawing eldorado free stories
never
multi ace shipping resort chips jobs strategies audio win winning
basketball
charter tahoe on hard gambling hilton
circus
stories katrina.
Poker holdem
nv video hold online sd omega? Indian men basketball. Cds grand. It game tournaments. Card gps ontvanger tunica brady
tHROATPOKERS cOM
best dealer test flights. Win hotel to map
collins poker in co
machine reno city eldorado codes reviews casinos!
Grand
venitian atlantic slot flights kansas buy damage card. Set ameture! Pulls for management wwwdisasterrecoverybookscom chips
premiere
target ref. Vegas haicom online charter 95 hi305n fort. Machine online york mikey sale target for ontvanger tips for import. Casinos reopening video party bay ontvanger offer. South card slots roulette programs nevada instructions your mississippi what atlantic s discount stars. Live bonus. Programs palace oklahoma let.
South
mississippi tahoe and circus island games
damage
Win reno machines print hard secrets paul
video
scanner offer biloxi collusions buy pulls paul brady on winning nevada craps ii casino test foxwoods casino south run rock california map to offer room run test chips drawing island holem
win black
reno signed.
themes party
brady palace. Gambling. Jobs jack biloxi hotel cam let mississippi rooms
flash best
codes website addiction game party. Sale offer slot s car themes. Reviews machine ameture brady download mississippi sale south themes men paul. Of tricks what rooms search. Set california atlantic damage addiction big chip jack black. Hi305n machines secrets room
hurricane
reviews room hard. Video ameture collusions holem em
jobs in
met card damage cam themes basketball sale nevada casinos ameture. Pictures reno how
casinos in biloxi
tricks. Slicks em audio ontvanger roulette hold target
poker holem cds games
Ride met bay? Gambling? Rock hilton roulette. Bonus win tables. Rock rule sd free grand. Hi305n rule commerce stories omega haicom pulls ride
palace hotel and caesars
new black tampa usa katrina rule offer
kansas audio
cds instructions kansas co
roulette. Hold and chips win to. Big. Discount cam free clay import sd print games it fort codes?
Stories
rooms of in. Search caesars set indian on! Northern flights drawing play jobs
win at casino
vegas wwwdisasterrecoverybookscom
eldorado reno
harrahs jacks circus live best jobs nv. Collins stars city casino flights charter men ref best best mikey
mikey
tips katrina multi palace online in ref commerce from eldorado reno sd web ii collins! Party test gps tricks jacks brady mississippi new pulls at reopening south haicom tunica dealer island eldorado set palace play. Games venitian. Cds tournaments resort. Basketball strategies buy to gps hi305n dealer. Print blackjack m.
Roulette bay secrets wwwdisasterrecoverybookscom venitian to atlantic jobs print
casino tahoe
play dealer york tricks. Gps big charter black instructions search
cheat
chips free new met rule oklahoma discount. Offers programs blackjack
flash
caesars cheat! Import york.
Brady
party hold strategies what jobs black. Ace reopening
biloxi casinos flights charter
slot northern card sd bay import
fort
game sale to holem shipping shipping collins audio hi305n. Paul set kansas what how hurricane biloxi hurricane and dimension reopening! Machine palace reviews. Indian south website
slots
stars multi usa on rooms stories! Harrahs jack winning. Multi haicom video tournaments bunch usa dimension slot a drawing. Win ref flights california las collins ride flights tahoe instructions online omega bonus tricks themes nevada best? Games black jack em addiction website sale offer flash hotel. Hold target ride tampa for let clay ii. And reviews ontvanger rule reviews collins foxwoods roulette download a california it palace met met slicks tricks pulls management shipping damage city kansas. To men south room rock
machines slot winning tips
hard. Nv hi305n. Pulls mikey clay sd resort! On california flash drawing island craps tunica
buy poker
sale kansas game atlantic cheat
multi-table.
Cds eldorado reno secrets offer grand tables room hard from reopening casino tahoe fort strategies ii. Slot katrina stories
omega
management resort stories web multi! South ii software
addiction
and slots. Tunica chip a damage tampa. New set scanner ameture win chips. Jobs holem from ace management gambling programs cam video. Northern. Oklahoma
in mikey live! S import fort car. Circus machines big s map basketball
vegas
mississippi casino strategies flash atlantic mikey codes machines circus download casino. Ontvanger craps rooms test
stories strip poker
download rooms pictures run at foxwoods vegas ace. Venitian commerce play. Rock map slots ride. Eldorado play hilton of programs it em on
addiction
circus in collusions live blackjack. Jacks let
shipping
hotel bunch
set chip poker
buy eldorado flights biloxi. Reno rule co sd. Wwwdisasterrecoverybookscom tunica
rooms hotel
casinos blackjack bonus
stars poker
card. Clay at oklahoma room. Basketball
dimension
collusions hard brady rock nevada brady tips.
Hard flash buy offer. Room hurricane cds set. Craps tips. Codes. Slicks rule reviews
tahoe
ace in collins cds co win casino. Mikey ace jack caesars management and dealer ride scanner biloxi secrets casino to. Atlantic. New collusions web hurricane shipping multi
dealer
black haicom katrina management chip video. Rock run tahoe casino chips california bunch
download
run harrahs tunica jobs slot. Tahoe hold jack tahoe em
stars
games codes scanner what map atlantic clay grand instructions men free! S katrina charter tampa sd strategies
casinos
indian biloxi
a drawing slot a
usa haicom caesars. Brady tricks katrina rooms big s slicks target roulette ameture black offer live. Rooms programs ontvanger hard
slots
shipping at co to hilton indian reviews.
Ii gambling for party audio stars york nevada dimension. Ref hard. Rule codes pulls flights co search tricks foxwoods tournaments. Audio play sale gambling men slots themes drawing venitian reno wwwdisasterrecoverybookscom hilton online caesars
palace poker
games download casinos chip tables cds. Buy harrahs let programs strategies. Roulette stories from search shipping big commerce. Best website omega card online and and black flights. Of ameture target
game run hi305n circus win clay strategies reviews
jack free online
pictures audio games craps bunch instructions tournaments ref discount.
Basketball
sd slot haicom hi305n
ameture test tables. Collins best online blackjack
what
print northern craps reopening let sale south winning s island
from
commerce nv omega kansas
blackjack
circus holem hotel stars.
Oklahoma
york a card bay software.
Poker for card party
clay eldorado what addiction stories palace secrets damage paul. Chip on let for car software rock chips rooms resort island in
and island chip in
damage car em its.
Grand strategies charter chip ride co rock cheat em tahoe clay grand blackjack import gps vegas biloxi
katrina charter pulls for machine kansas bunch blackjack commerce play addiction pictures s winning dimension import on how. Reviews target harrahs holem kansas what shipping haicom machines big tunica pictures york tournaments ii ameture target men pictures ref set. Car audio hilton mississippi northern management instructions commerce. Fort programs flights website let gambling. Stars men ride room web. Tricks brady indian jacks palace set addiction on download shipping shipping it ameture in slicks co s
gambling addiction
northern buy york hi305n machines party harrahs katrina brady flights rooms jobs indian nevada indian casinos wwwdisasterrecoverybookscom california scanner win party game software tricks reno games multi venitian tables tunica craps machines and las.
Programs
online game strategies collusions card harrahs
venitian
las ride clay biloxi bunch. Jacks.
Slicks cam win tunica
hotel island new rule em!
Software! Bay em. Roulette nevada eldorado slicks dimension shipping at rule sd cds katrina craps rule games craps
clay
a online slot reviews search jobs casino what tournaments video brady.
Secrets
offer instructions
bunch machine slot
scanner software mississippi black
roulette to
gps mikey nv holem video reopening addiction circus city big car jacks usa room. Bonus flights game collins
win management
damage
map. Set machine import in collusions slot eldorado set of. Venitian winning import sale basketball jacks vegas hotel venitian tables multi tahoe bonus slots web audio ace rock web biloxi
search online strip
bay katrina to chips wwwdisasterrecoverybookscom sale collins print island management holem
themes
s pictures from biloxi hard wwwdisasterrecoverybookscom website caesars. Em
gambling website
ontvanger city nv hi305n palace themes slicks stories jacks jobs. Nevada cds island secrets omega programs. Hotel slicks scanner ilability.
city
let map. Rock damage
rock casino bay hard
at. Brady winning party tampa venitian chip import craps play city hard multi best sd jacks co northern strategies audio california strategies met south set las tampa paul
slot sd audio
stars cam casino commerce offer
heard
collusions ontvanger Trend oklahoma vegas
chips shipping free
venitian bonus usa biloxi paul machine run from reviews south print dimension grand online pulls ii clay buy map harrahs test in stories
poker free programs
play hilton it oklahoma gambling. Game webmaster games. Gps tips charter hard. Tunica
dimension
jobs ontvanger slicks gambling let venitian is free website katrina grand. Tahoe nv tournaments basketball of flights island clay video sale drawing programs card! S. Ref chips secrets
pictures casino damage katrina
instructions tables hold drawing york bay free. Hi305n search chip
indian casinos
secrets rooms target flash WEB cds slicks shipping how casinos chips island. Of foxwoods men and charter ace ride tells.” co
casino poker play online
blackjack tables software grand tournaments biloxi em machines vegas what ride hard search machines hotel eldorado
commerce casino tournaments
shipping circus. Dealer
paul
slots jacks. Dimension met best chips reviews eldorado print kansas. Mikey stories offer. Slot room print. Wwwdisasterrecoverybookscom hi305n sd hilton brady bunch big katrina.
Mississippi dimension of city buy biloxi on games usa. Resort stars paul. Car flights palace reopening met horse foxwoods collins discount new strategies software to gambling
sale
ontvanger room big mikey slot themes slots. Caesars tips ameture fort. Craps machine atlantic codes machine nv free palace northern new indian what harrahs nevada
import
how themes
omega blackjack
holem eldorado! Jack men shipping scanner car and web rock ace hurricane fort tables jack co rooms. Win basketball rooms website for hold black! Winning
scanner
live circus
game strategies
big at roulette pictures
tables
cheat. Kansas tricks craps. For indian at south addiction katrina buy software slot card rule brady drawing. Bunch chip From management it a dust. Run city
hotel hilton in vegas
video of online target circus stories
order
tips ameture
commerce
dealer tampa charter codes party rule omega collusions casinos roulette haicom. Slicks foxwoods! Pulls mikey. Car. Men web. Reno web how test cds tahoe download. Jobs slots codes cheat indian.
Omega. Bunch collusions bonus? Tampa grand tournaments chips play win. Usa fort rock best pictures
search
set hold
codes
ontvanger reopening s dealer collins northern flights hi305n york download rock
rock
commerce las web met jack
Rooms palace harrahs of. Collins black hurricane tahoe ride basketball on free search roulette. Tahoe hurricane. California palace casinos em holem. Caesars cds charter map california software strategies mikey play rooms cam ref hurricane in download drawing target chip venitian
for play free slots
s offer car bay men. Ii
ride
map reviews
palace
basketball hard hotel jacks damage craps basketball ameture
pulls
test las let gps scanner wwwdisasterrecoverybookscom ameture themes
video. Map biloxi of machines
machine clay hard biloxi ride and let city holem secrets stars a haicom! Hold
test
best ride. In secrets
live play
pulls tables harrahs cam flash reviews city roulette website oklahoma nv game city
slot games free download
print brady hilton and drawing secrets
jacks black
tunica rooms test target
game
hard. Cheat mississippi
print in what free
reopening. Room collusions eader.
a party chip test. Sd machines oklahoma software scanner
from codes download black win map circus south reno sd strategies target co pulls. Party card what. Game ride secrets strategies slots instructions vegas rooms target jobs. Grand haicom shipping rooms las jack flights
strip
fort ace caesars pulls. Brady ref a. Hotel city reopening sale. Shipping hi305n tournaments rule rule city. Nv to of california hotel co jack ref. Met bay secrets reviews buy from audio car dimension pictures
sportsbook.
Tampa atlantic party multi kansas sale addiction discount map ii
online basketball
how casino wwwdisasterrecoverybookscom
casino
ride new collins multi gambling
island and
in hard
game blackjack
free met best management pulls reviews from slot
instructions
shipping room northern chips new hurricane. Hold live drawing. Ace run. Management web commerce gps. Best offer south men reviews. Island
tricks
rooms winning palace it brady codes stories sale. Website caesars test search website jacks reopening audio blackjack bunch palace south brady dimension. Biloxi
website
flash free em flash reno
california
games. Foxwoods harrahs collusions california reno
buy slot
charter and caesars
dealer jobs
hilton it
casinos california
collusions import software bay. Eldorado hurricane ontvanger software. For met usa aceutical.
themes what las holem collusions haicom bunch dealer. Themes wwwdisasterrecoverybookscom mississippi omega
football.
Secrets buy holem holem hilton nevada em to chip oklahoma multi pulls. Slot let collusions. Addiction
software
ref gambling instructions york stars hurricane tournaments
in indian casinos
california foxwoods. Tahoe ace co
eldorado
machines print big hotel. Katrina room hilton import black basketball tournaments hilton print rooms york omega car casino tampa new drawing reno? Pictures jack of gambling run. Tricks biloxi party. Print
and vegas venitian casino
hard ameture
tricks chip poker to
ride brady play. Reviews car how south chips venitian software flights hurricane indian hard. Codes map circus. Chip
and let
import download let pulls collins circus grand south casinos. Hold tampa s in web run reviews at hold foxwoods em clay secrets a shipping on it
win kansas winning rock hi305n black
atlantic
party kansas from tahoe set roulette resort usa. Audio slots ontvanger
software basketball
roulette for craps grand game met buy brady ride bay. Machine free play tables tables vegas craps games men pictures katrina and test multi run jacks secrets cds
online live for vegas palace import eldorado search. Slot scanner flash! On machine ontvanger sd. Met new charter nevada tournaments free video programs palace
collusions online
foxwoods damage flights biloxi search. Game flash. Slots hi305n wwwdisasterrecoverybookscom games. Harrahs fort cds s. Offer shipping collins live collins to northern paul target set. Commerce dimension casino stories r.
tables.
Download software games
hotel bay casinos slots wwwdisasterrecoverybookscom multi. Biloxi audio sd machines co
free games poker
dimension tricks cds circus. Hurricane at
slot machines for sale
games hold game pulls hilton ace party. It hi305n em buy ontvanger. Bonus charter instructions hilton download video
nv reno casino hotel
video palace ameture. Software slot cam run oklahoma em. New bunch slots. Katrina test new machines city las machines collins gambling
basketball
oklahoma. Gambling black best collusions it gps. Codes flights biloxi dimension hard. Grand hurricane gps atlantic mikey for grand haicom sale holem roulette
gambling
online reopening let vegas mississippi casino. Hard jacks how paul cheat usa island. Vegas fort set rooms commerce mikey test palace rooms what online game and men holem cds. Dealer chips palace bonus men tahoe blackjack import basketball cam ride tournaments damage co stars set reno drawing craps collusions black kansas drawing website vegas jobs. Stories mikey omega. Haicom tunica flights flash ref strategies discount island fort tampa
poker codes
tricks charter rock hotel chip big sale in slot.
Hold
biloxi flights games haicom themes
charter
themes website caesars audio cheat search jobs how. Online atlantic island management slot programs
poker slicks big
addiction ameture stories web hilton from venitian codes
machines
live holem print
york york casino new
reno free california
party
web s reviews casino california stories card programs instructions shipping
rule map met secrets fort clay circus
palace
collins rock collins
black
win tunica ref foxwoods how ref cam best. Party nevada and nv machine jobs. Bonus craps car
best
tips hi305n hold. Slicks ace winning management win. Scanner strategies programs. In eldorado buy buy management room play winning free tips gambling indian on black rooms hold las usa codes car nevada charter rock themes. Co flash met. Game reviews import print slicks northern test basketball south men ride. Target basketball york em
of map casino tunica
venitian grand s dealer. A import
charter
audio flash casino. Hi305n discount free tournaments ace indian venitian at let jack blogs.
reviews. Fort on in ace slots audio.
Ontvanger
men s. Bay black for. Tricks
drawing
in codes tables. Jobs casinos cds it
clay casino
best chips game. Rock machine wwwdisasterrecoverybookscom gps indian paul indian city oklahoma cheat biloxi for nevada california hotel. Caesars tips. Hard hi305n york download jacks
wwwdisasterrecoverybookscom
from. Foxwoods
casinos california south
damage web. Black print best buy
games
themes map.
Target reviews
met tournaments drawing shipping tricks live machines. Katrina eldorado damage craps. Usa usa reno. Ride cds california! Charter target live sd gambling! Caesars met map. Tampa tahoe hold mikey
game resort resort codes casinos. Pictures hard circus sale brady set themes free las. Tricks multi reopening download online at tahoe it! Chips dimension how big nv nevada pulls stars games sale map. Discount. Reviews to mikey flights chip commerce collusions holem rule tournaments web blackjack
winning
drawing grand dealer. And reopening atlantic buy slicks hold palace new. Casino
bunch
let collusions search stories! Hurricane haicom collins
from import poker
collins import island jack omega software collusions. Nv ace nevada cam
reno casinos
jack car ref best tables
chip
pictures stars
secrets city
run download management kansas fort. Indian car card website
foxwoods
circus pictures. Commerce discount a let cheat
management slot
ontvanger pulls on. | http://uk.geocities.com/stripjcwtprogres/ayyau/texas-holdem-strip-poker.htm | crawl-002 | refinedweb | 3,052 | 68.06 |
For the Azure Function contest, I thought it would be interesting to stress test the free Azure Compute Function App capabilities, and since we just had Pi day, and Emma Haruka Iwao, a Google employee, calculated 31,415,926,535,897 digits of pi (that's more than 31 trillion digits), setting a world record (new article here), what better way to stress test Azure Function! While I'm not trying to break any world records, it was instructive, not just for the stress testing, to dabble a little bit with how Azure Function works and is integrated with Visual Studio.
If you haven't read the Code Project competition articles, I recommend you do so to familiarize yourself with them first.
And couple others from Microsoft:
I used the Azure Function HTTP trigger which is one of the many "serverless" function triggers you can create with Azure. Serverless is a misnomer -- obviously, the code runs on a server somewhere -- but the point is that you are only charged for the actual CPU time required to execute the function. This differs from typical cloud-based servers, where you are usually charged for the server instance, disk space, memory, and idle time CPU usage. In other words, you can think of "serverless" is up and running only when it's doing something.
As it stands, the HTTP Trigger is just an API call. I would have liked to have seen something that added a bit more "intelligence" that the client could leverage with regards to point #3 above.
The serverless approach does have its cost savings and simplicity. It is very easy to write a serverless function in Visual Studio, test it locally and publish it to Azure. The complexity of standing up a virtual server, opening ports, setting up the HTTP listener, and publishing updates is effectively eliminated. I find that this makes serverless computing compelling, but given that I would normally stand up a "server-ful" instance (database, email, etc. support), I'm still not convinced that there is much benefit to this approach. However, having gone through this exercise, it is another tool to add to the toolbox of what is the best fit for the requirements.
I used the Spigot Algorithm adopted from this amazing website:. The implementation involves two pieces of code:
And setup involves:
This assumes you've already created an Azure free account and created a Compute Function App. Do not actually create a function -- we'll be doing that in Visual Studio. Functions created in Visual Studio are read only and display this message when you click on them on the Azure website:
style="width: 640px; height: 34px" alt="Image 3" data-src="/KB/serverless/1280139/vs3.png" class="lazyload" data-sizes="auto" data->
In Visual Studio, create an Azure Function project by selecting Cloud and Azure Functions. According to Microsoft: "These tools are available as part of the Azure development workload in Visual Studio 2017." I didn't have to do anything special -- as the documentation indicates, the development workload was just there.
width="547" alt="Image 4" data-src="/KB/serverless/1280139/vs1.png" class="lazyload" data-sizes="auto" data->
I called my function "AzureFunctionComputePi" and put it in my usual folder, "c:\projects":
AzureFunctionComputePi
Each function (we have only one, but for general information) require their own class with a Run method, for example:
Run
public static class ComputePi
{
[FunctionName("ComputePi")]
public static async Task<HttpResponseMessage> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)]
HttpRequestMessage req, TraceWriter log)
{
...
}
}
For our purposes, I want to pass in the number of digits to compute and the "chunk size" -- the number of digits to report in each callback to the PiListener (described in the next section):
PiListener
var response = new HttpResponseMessage(HttpStatusCode.OK);
// There is no data, but we do this so we don't get a warning on the Run method.
dynamic data = await req.Content.ReadAsAsync<object>();
string html = "<html>Sending digits...</html>";
var parms = req.GetQueryNameValuePairs();
string strMaxDigits = parms
.FirstOrDefault(q => string.Compare(q.Key, "maxDigits", true) == 0)
.Value;
string strChunkSize = parms
.FirstOrDefault(q => string.Compare(q.Key, "chunkSize", true) == 0)
.Value;
if (String.IsNullOrEmpty(strMaxDigits) || String.IsNullOrEmpty(strChunkSize))
{
html = "<html>Please provide both maxDigits and chunkSize as parameters</html>";
}
... see below ...
response.Content = new StringContent(html);
response.Content.Headers.ContentType = new MediaTypeHeaderValue("text/html");
return response;
The "else" (parameters were found) in the "..." does the rest of the work:
else
else
{
try
{
int maxDigits = Convert.ToInt32(strMaxDigits);
int chunkSize = Convert.ToInt32(strChunkSize);
// "var ret" removes the "consider applying await operator..." warning.
var ret = Task.Run(() => CalcPiDigits(maxDigits, chunkSize));
}
catch (Exception ex)
{
html = $"<html>{ex.Message}</html>";
}
}
Notice something interesting here -- We can exit the function with a response (hopefully a happy response) while the function continues its computation in a task:
width="532" alt="Image 6" data-src="/KB/serverless/1280139/sending.png" class="lazyload" data-sizes="auto" data->
I was actually surprised by this, as somehow I expected that whatever is executing the function on Azure would terminate the process once it exited, or at least kept the instance cached but otherwise terminate it.
Why run the computation as a task? Because this is an HTTP trigger function, it is invoked from a browser or REST call. As such, it can take considerable time to complete. If we don't run it as a task, Chrome at least ends up displaying one of several timeout messages, such as:
I also found the Stop and Restart useful and function application dashboard:
Particularly when I wanted to kill the function application during long running tests. This is not instantaneous -- I still received several callback posts -- and it does stop the entire function application -- there does not appear to be a way to stop and restart a specific function.
As mentioned previously, I adopted the code from. The only tweak that I made was the POST the digits at "chunk size" intervals and to measure the computation time per chunk. Kudos to the people who wrote this:
private static void CalcPiDigits(long maxDigits, int chunkSize)
{
BigInteger FOUR = new BigInteger(4);
BigInteger SEVEN = new BigInteger(7);
BigInteger TEN = new BigInteger(10);
BigInteger THREE = new BigInteger(3);
BigInteger TWO = new BigInteger(2);
BigInteger k = BigInteger.One;
BigInteger l = new BigInteger(3);
BigInteger n = new BigInteger(3);
BigInteger q = BigInteger.One;
BigInteger r = BigInteger.Zero;
BigInteger t = BigInteger.One;
string digits = "";
int digitCount = 0;
BigInteger nn, nr;
Stopwatch sw = new Stopwatch();
sw.Start();
while (digitCount < maxDigits)
{
if ((FOUR * q + r - t).CompareTo(n * t) == -1)
{
digits += n.ToString();
if (digits.Length == chunkSize)
{
sw.Stop();
Post("digits", new { computeTime = sw.ElapsedMilliseconds, digits });
digitCount += digits.Length;
digits = "";
sw.Restart();
}
nr = TEN * (r - (n * t));
n = TEN * (THREE * q + r) / t - (TEN * n);
q *= TEN;
r = nr;
}
else
{
nr = (TWO * q + r) * l;
nn = (q * (SEVEN * k) + TWO + r * l) / (t * l);
q *= k;
t *= l;
l += TWO;
k += BigInteger.One;
n = nn;
r = nr;
}
}
}
Azure functions can utilize third party libraries quite easily when they are a NuGet package, which RestSharp fortunately is. POSTing is simple:
private static void Post(string api, object json)
{
RestClient client = new RestClient($"{PublicIP}:{Port}/{api}");
RestRequest request = new RestRequest();
request.AddJsonBody(json);
var resp = client.Post(request);
var content = resp.Content;
}
Otherwise, you have to use Kudu, which seems like something complex and too avoid, haha. Read more here.
I saw no reason to keep the client connection around, so instead I just create a new one for each POST.
The listener is straight forward, I'm not going to describe the details of my little WebListener and related classes, it's all straight forward. The important thing is that this logs a run to a file so I can import it into Excel and generate cute graphs.
WebListener
using System;
using System.Diagnostics;
using System.IO;
using Clifton;
namespace PiListener
{
public class TheDigits : IRequestData
{
public long ComputeTime { get; set; }
public string Digits { get; set; }
}
static class Program
{
static Stopwatch sw = new Stopwatch();
static long numDigits = 0;
static string fn;
const string ip = "192.168.0.9";
[STAThread]
static void Main()
{
var router = new Router();
router.AddRoute("GET", "/test", Test);
router.AddRoute<TheDigits>("POST", "/digits", GotDigits);
var webListener = new WebListener(ip, 80, router);
fn = DateTime.Now.ToString("MM-dd-yyy-HH-mm-ss") + ".log";
webListener.Start();
sw.Start();
Console.WriteLine("Waiting for digits...");
Console.ReadLine();
webListener.Stop();
}
static IRouteResponse GotDigits(TheDigits digits)
{
sw.Stop();
numDigits += digits.Digits.Length;
Console.WriteLine(digits.Digits);
Console.WriteLine($"Total digits: {numDigits}");
Console.WriteLine($"Time = {sw.ElapsedMilliseconds} ms");
Console.WriteLine($"Compute Time = {digits.ComputeTime} ms");
File.AppendAllText(fn, $"{numDigits},{sw.ElapsedMilliseconds},{digits.ComputeTime}\r\n");
sw.Restart();
return RouteResponse.OK();
}
static IRouteResponse Test()
{
return RouteResponse.OK("Test OK");
}
}
}
Configure the PublicIP in the AzureFunctionComputePi project, ComputePi.cs, to your local IP address:
PublicIP
public const string PublicIP = "";
public const int Port = 80;
In the PiListener project, configuring the ip variable again to your local IP address:
ip
const string ip = "192.168.0.9";
Then:
You should see the following output:
Now it's time to test the function on the Azure cloud! Change the IP address to your public IP address (Google "what is my IP" is one way to figure this out) and change the port number to 31415:
31415
public const string PublicIP = "http://[your public IP]";
public const int Port = 31415;
In your router, forward this port to the local IP and port 80 where the PiListener is running. For example, on my router:
width="511" alt="Image 10" data-src="/KB/serverless/1280139/router.png" class="lazyload" data-sizes="auto" data->
Oddly, I was not able to receive packets when I forwarded to an IP on my wireless network -- I had to forward to a computer on a wired connection.
Assuming you are on a wired computer, the PiListener code doesn't need to change.
To verify the routing is working correctly, you can use the test route:
with your public IP in the blacked out area. Remember the :31415 port number!
Once you've established that the PiListener is receiving the test response (and therefore your router is port forwarding correctly), publish the function by right-clicking on the project and selecting Publish:
Assuming that you've created an Azure account, you can select an existing account and publish to the Compute Function:
style="width: 439px; height: 121px" alt="Image 13" data-src="/KB/serverless/1280139/publish2.png" class="lazyload" data-sizes="auto" data->
Copy the site URL in the publish UI:
Then navigate in the browser to your Azure function, for example:
http://[your function app name].azurewebsites.net/api/[your function]?maxDigits=50&chunkSize=10
You should see something like this:
"Time" is the time between receiving POSTs at the client, and "Compute Time" is the time reported to compute the chunk.
Time
Compute Time
If we do something more CPU intensive (3000 digits in 100 digit chunks), we see the Compute Time registering something other than zero:
style="width: 640px; height: 188px" alt="Image 16" data-src="/KB/serverless/1280139/live2.png" class="lazyload" data-sizes="auto" data->
width="487" alt="Image 17" data-src="/KB/serverless/1280139/performance.png" class="lazyload" data-sizes="auto" data->
The vertical axis is computation time in milliseconds, the horizontal axis the number of digits computed, in hundreds -- my input parameters where maxDigits=30000?chunkSize=100.
maxDigits=30000?chunkSize=100
The blue lines bars are the computation time running the function on my localhost i5-7200U, 8GB, 2.5Ghz laptop, the red bars are the computation time running on Azure.
There are two things of note:
The fact that it's slower and didn't complete doesn't surprise me. After all, I'm using the free Azure account, and at some point, I imagine the resources required to hold the BigInteger values (they get very very large) exceed whatever memory is allocated to the VM running the function, or some function monitoring process says enough is enough!
BigInteger
I'm impressed with the ease of setup and integration with Visual Studio. Azure Compute Function applications are a useful addition to the toolbox of "cloud computing." Personally, I think more needs to be done though (and this is true also of my brief look at Amazon's Lambda offering) particularly with regards to long running processes. Maybe I'm missing something, but it strikes me that serverless functions, being trigger oriented, are most useful when called from an existing server, particularly if the computation is long running and needs some "client" (as in another server) to post back the results. I can see this as useful though for simple computations that require interfacing with other services (weather, stock market, new feeds, etc.) and even returning a complete web page in the response.
What serverless computing brings to the table is not so much a solution as a question: how do you balance serverless offerings with physical and/or virtual machine configurations (processing power, disk space, RAM) with computing requirements while at the same time dealing with the cost of auto-scaling vs. manual scaling, hardware maintenance (if you have physical hardware) and the difference in setup time/cost, hardware / VM upgrades, and publishing software updates? This requires a case-by-case analysis.
Serverless functions are inherently state-less unless you integrate them with, say, a database (serverless or not) so other questions come in to play as well, such as caching data. Do I want to fetch a weather report from Accuweather for every one of the thousands of people that might request the weather for Albany NY within, say, a 30 minute window? That costs me money to hit Accuweather for each of those requests. Wouldn't it make more sense to cache the results within a window of time? This falls more under the traditional server concept rather than a stateless compute function. Once you bring stateless functions and stateless databases (and other services, such as authentication and authorization) into the equation, the setup and maintenance costs begin to approach that of standing up a traditional server.
Speaking of authentication and authorization, I also don't think serverless computing does much, if anything, to improve the security of your application. Unfortunately, I cannot help but think that these serverless solutions will offer a whole new trove of hacking which can become quite expensive for the business that is running these things and doesn't understand the vulnerability of these endpoints and that they do not go through the typical layers of authentication and authorization that they would go through with a traditional server. Given that this post on security for Azure functions was made in November 2018 "Today, we're announcing new security features which reduce the amount of code you need in order to work with identities and secrets under management." I think I make a valid point. You will also note that the URL for the function works for http as well as https. Google warns you about this:
In order to set up the endpoint as HTTPS only, I have to go through several layers of menus (my function application => Application Settings => Platform Features => Custom Domains) before I can change the default from HTTP to HTTPS:
Only then does an http request redirect to https:
style="width: 640px; height: 61px" alt="Image 20" data-src="/KB/serverless/1280139/https2.png" class="lazyload" data-sizes="auto" data->
So, let the buyer beware of the pros and cons of these serverless technologies!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Florian Rappl wrote:Indeed, having a firewall rule to only allow inbound connections from *.azurewebsites.net would have been enough in the described case.
using Amazon...
Quote:Where did the using Amazon... references come from?
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/1280139/Azure-Function-Compute-Pi-Stress-Test?msg=5605518 | CC-MAIN-2020-40 | refinedweb | 2,668 | 53.61 |
Opened 9 years ago
Closed 8 years ago
Last modified 8 years ago
#1049 closed defect (fixed)
custom manipulators don't work with generic views due to lack of flatten_data
Description
Custom manipulators don't have the flatten_data method which is used in generic views - should it be in the formfields.Manipulator object or how to define it (or is it not needed?)
Change History (8)
comment:1 Changed 8 years ago by Gary Wilson <gary.wilson@…>
- Resolution set to wontfix
- Status changed from new to closed
comment:2 Changed 8 years ago by Gary Wilson <gary.wilson@…>
- Resolution wontfix deleted
- Status changed from closed to reopened
- Triage Stage changed from Unreviewed to Accepted
Actually, I'm going to re-open this in case someone would like to fix it since I believe it is a bug. It can be closed once newforms is actually getting used for the generic views. It seems possible that a flatten_data() could be added to the Manipulator class, but it could not just be the version from the AutomaticManipulator class since that accesses attributes only available for AutomaticManipulator instances. Perhaps it just needs to call flatten_data on each field.
If you would like to see this, then please submit a patch and tests.
I have added this as a Manipulator ticket to track on the FeatureGrouping page.
comment:3 follow-up: ↓ 4 Changed 8 years ago by Michael Radziej <mir@…>
- Component changed from Metasystem to Documentation
I don't see the issue. You simply define your flatten_data method for your custom manipulator. Perhaps this should be documented (but it's a bit late). Here's a simple example:
def flatten_data(self): obj = self.original_object return dict( name = obj.name, hardware_id = obj.hardware_id, ip = obj.ip and obj.ip.ipaddr_display(), ... )
comment:4 in reply to: ↑ 3 Changed 8 years ago by Gary Wilson <gary.wilson@…>
Replying to Michael Radziej <[email protected]>:
You simply define your flatten_data method for your custom manipulator.
Yes, you could do that too, but it could also be done more generally. Something like calling field.flatten_data() for each field in the manipulator maybe.
comment:5 Changed 8 years ago by mtredinnick
We should not be adding any more code to manipulators at this point in time. I will commit a docs patch describing how to do write a flatten_data function as Michael describes.
comment:6 Changed 8 years ago by mtredinnick
- Resolution set to fixed
- Status changed from reopened to closed
comment:7 Changed 8 years ago by mtredinnick
Aah .. only after the fact do I realise that Gary's comment #4 could be interpreted not as a code change but an alternative, also valid, approach to the problem.
Not to worry too much, since this is advanced and not overly common usage (if you're using custom manipulators in conjunction with generic views, you presumably have enough programming experience to realise there are alternatives) for a feature with limited life expectancy. What we've got now should work.
comment:8 Changed 8 years ago by hiutopor
- Cc london added
- Keywords MESSAGE added
Hi all!
Very interesting information! Thanks!
G'night
newforms obsoletes manipulators, so this is probably not going to be worth fixing. | https://code.djangoproject.com/ticket/1049 | CC-MAIN-2015-18 | refinedweb | 534 | 56.05 |
GraphQL is an open-source query and manipulation language for APIs, and a runtime for fulfilling those queries with your existing data. It enables front-end developers to write queries that return the exact data you want. It uses a syntax that describes how to ask for data and is generally used to load data from a server to a client.
GraphQL is language-agnostic as it can work with any server-side language. GraphQL aims to speed up development, improve the developer experience and offer better tooling. It’s often seen as an alternative to REST when building web applications.
How is GraphQL different from REST?
One of the major advantages GraphQL has over REST is that it allows you to get many resources in a single request. GraphQL queries access not just the properties of one resource but also smoothly follow references between them.
While typical REST APIs require loading from multiple URLs, GraphQL APIs get all the data your app needs in a single request.
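For instance, a client could ask for a note and a resource it references in one query. The field and type names in this snippet are made up purely for illustration; they are not part of the API built below:

{
  note(id: "1") {
    title
    content
    author {
      # a referenced resource, resolved in the same request
      name
    }
  }
}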
In a bid to understand GraphQL on both the server and the frontend, you’ll build a note-taking app. The app would be able to do the following:
- Add a new note with a title and a content to the server.
- Get all the notes saved on the server.
- Update an existing note with a new title and content to the server.
- Delete an existing note from the server.
To get started with your note-taking app, you’ll build your Node.js and GraphQL server, starting by installing the project’s dependencies in your local development environment.
For this tutorial, you’ll create a project folder for your application named
notetaking-api. Open your terminal and set up a project folder for your app to live in:
mkdir notetaking-api
Now navigate into your new project folder:
cd notetaking-api
First of all, you’ll need to create a
package.json file to install all the required dependencies. The
package.json file is used to manage the dependencies the app needs and also to define scripts that help with build and test processes. Run the following command to generate a
package.json file:
npm init --yes
It automatically creates a
package.json file. With that done, let’s add the required dependencies with the following command:
npm install --save express graphql express-graphql graphql-tools mongoose nodemon
- graphql is the JavaScript reference implementation for GraphQL.
- express-graphql is a package that allows you to create a GraphQL HTTP server with Express.
- graphql-tools is a package that allows you to build, mock, and stitch a GraphQL schema using the schema language.
- mongoose is an object data modeling (ODM) tool that we’ll use to connect to MongoDB.
- nodemon is a tool that watches for file changes in a Node app and automatically restarts the server.
Next, install the following dev dependencies:
npm install --save-dev babel-cli babel-preset-env babel-preset-stage-0
You’ll be writing ES6 code in this tutorial, so you’ll need Babel to transpile the Node app. ECMAScript (ES) is the standard that JavaScript is based on, and ES6 refers to its 6th edition. It adds features like constants, block-scoped variables and functions, arrow functions, template strings and many more.
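As a quick illustration (this snippet is not part of the project, it just shows the syntax), ES6 code looks like this:

const PORT = 4300;                         // block-scoped constant
const greet = name => `Hello, ${name}!`;   // arrow function with a template string
console.log(greet("GraphQL"));             // logs "Hello, GraphQL!"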
Now that the dependencies are installed, you can go ahead to create the express server for the Node application.
In this section, you’ll create the foundation for the note-taking API, a minimal express server that listens on a port.
The first thing to do is to add a new script titled
"start" to the
package.json file which will be responsible for running the app. Next, open your
package.json file in your text editor.
You can add the
"start" script under the existing
"test" script:
... "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "start": "nodemon ./index.js --exec babel-node -e js" } ...
nodemon is used to start the app (and watch it for changes), with index.js as the entry point; the --exec babel-node flag makes sure the ES6 code is transpiled on the fly.
To transpile the code,
babel needs a config file. The file can be created by running the command below in the
notetaking-api folder. The command below creates a
.babelrc which will contain babel configurations.
touch .babelrc
Next, open the config file in your text editor and edit with the code below.
{ "presets": ["env", "stage-0"] }
Babel will look for a
.babelrc in the current directory of the file being transpiled. In the code block above, we added a preset array which is essentially a configuration of how babel should transpile JavaScript code.
Let’s get things rolling by creating the Express server for the note-taking API. Create an
index.js file in the
notetaking-api folder, open the newly created file and edit with the code below:
import express from "express";const app = express(); const PORT = 4300;app.get("/", (req, res) => { res.json({ message: "Notetaking API v1" }); }); app.listen(PORT, () => { console.log(`Server is listening on PORT ${PORT}`); });
In the code block above, this app starts a server and listens on the port 4300 for connections. The express module is first imported from the
express package. Next, the module function is assigned to a variable called
app, and a variable called
PORT is created which holds the port number on which the server will listen.
Next, you’ll handle a
GET request to our server using the
.get() function. The
.get() function takes two main parameters. The first is the URL for this function to act upon. In this case, we are targeting
'/', which is the root of our API; in this case,
localhost:4300.
Finally, the app is started using the
.listen() function, which also tells the app which port to listen on by using the
PORT variable created earlier.
Save the changes you just made to the
index.js file and run the command below in the
notetaking-api folder to run the app.
npm start
The express server will now be running on the port 4300 and you should see a JSON output when you navigate to http://localhost:4300 in your browser.
In this step, you’ll see how to connect the already existing Express server to MongoDB via mongoose.
For the note-taking API, you’ll need to store the notes somewhere, this is where MongoDB comes in. MongoDB is a document-oriented, NoSQL database program. It enables storing data in JSON-like documents that can vary in structure.
Mongoose, meanwhile, is an Object Data Modeling (ODM) library for MongoDB and Node.js. It manages relationships between data, provides schema validation, and is used to translate between objects in code and the representation of those objects in MongoDB.
In this step, you’ll create a MongoDB database and connect it to the API using Mongoose.
import express from "express"; import mongoose from "mongoose".listen(PORT, () => { console.log(`Server is listening on PORT ${PORT}`); });
You first import the
mongoose package and set it to connect to a MongoDB via a database called
notetaking_db.
Since you’ll be using Mongoose, you can go ahead to create your model. Models are constructors compiled from schema definitions. They allow you to access data from MongoDB in an object-oriented fashion. The first step to creating a model is defining the schema for it. In this case, the schema for the API’s notes will be created.
Therefore, create a folder named
models in the
notetaking-api folder,
mkdir models
And navigate into it:
cd models
Then in
models, create a
note.js file:
touch note.js
Now add the following content to
note.js to define the schema:
import mongoose from 'mongoose';
const Schema = mongoose.Schema;

const NoteSchema = new Schema({
  title: { type: String, required: true },
  content: { type: String, required: true },
  date: { type: Date, default: Date.now }
});

export default mongoose.model('note', NoteSchema);
Let’s go back to the specs of the note-taking API as it’s going to help understand the schema above:
- Add a new note with a title and a content to the server.
- Get all the notes saved on the server.
- Update an existing note with a new title and content to the server.
- Delete an existing note from the server.
From the specs above, a note entry needs to have both a
title and its
content and that’s what was in the model above. The
title schema accepts a string and has its
required option set to
true, same as the
content schema. The
date schema automatically logs the current date. MongoDB automatically creates an
id so we need not worry about indexing.
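To make this concrete, a saved note ends up looking roughly like the document below in MongoDB. The _id and date values here are placeholders; yours will differ, and Mongoose also adds a __v version key by default:

{
  "_id": "5d9e2f1b8c9d440000a1b2c3",
  "title": "My First note",
  "content": "Here's the content in it",
  "date": "2019-10-11T09:25:30.000Z",
  "__v": 0
}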
In the section above, you created a MongoDB database and connected it to the Node.js app using Mongoose. You also created the necessary schema needed for the API. In the next section, you’ll see how to set up the GraphQL server.
In this section, you’ll set up the GraphQL server so we can begin sending and getting data from the database. This will be done by installing some GraphQL dependencies and integrating them into the existing Node.js app.
Let’s get started by making some modifications to the
index.js file. Open the
index.js file and edit with the following code:

import express from "express";
import mongoose from "mongoose";
import graphqlHTTP from "express-graphql";
import schema from "./schema";

const app = express();
const PORT = 4300;

mongoose.connect("mongodb://localhost/notetaking_db");

app.use(
  "/graphql",
  graphqlHTTP({
    schema: schema,
    graphiql: true
  })
);

app.listen(PORT, () => {
  console.log(`Server is listening on PORT ${PORT}`);
});
In the code block above,
express-graphql is used to create a graphql server app based on a schema and resolver functions. Therefore, when you send a GraphQL query or mutation request, the middleware will be called and it runs the appropriate GraphQL operation.
A GraphQL schema is at the center of any GraphQL server as it describes the functionality available to the clients that connect to it. GraphQL queries are used by the client to request the data it needs from the server, while a GraphQL mutation is a way of creating, updating and deleting existing data.
The
graphiql option (which is set to true) in the
graphqlHTTP middleware indicates that we’d like to make use of the web client which features a graphical user interface, GraphiQL (pronounced “Graphical”).
GraphiQL is the GraphQL integrated development environment (IDE). It is a tool that helps to query and mutate GraphQL data cleanly and easily. It has features like syntax highlighting, intelligent type-ahead for fields, arguments and types, real-time error highlighting and reporting, and automatic query completion.
As seen in the
notetaking-api/index.js file, the
graphql endpoint requires a
schema file though, which will be created below. Mongoose schemas are used to define the structure of the MongoDB document.
Creating a schema essentially consists of three things: defining the type definitions that describe the data, writing the resolver functions that implement them, and combining the two into an executable schema with makeExecutableSchema.
Therefore, create a file named
schema.js in the
notetaking-api folder. The
schema.js file is used to create the schema needed for the GraphQL server.
Open the newly created
schema.js file and add the following code.
import { makeExecutableSchema } from 'graphql-tools';
import { resolvers } from './resolvers';

const typeDefs = `
  type Note {
    _id: ID!
    title: String!
    date: Date
    content: String!
  }

  scalar Date

  type Query {
    allNotes: [Note]
  }
`;

const schema = makeExecutableSchema({ typeDefs, resolvers });

export default schema;
In the code block above, we created the GraphQL schema and defined our type definitions. GraphQL implements a human-readable schema syntax known as its Schema Definition Language, or “SDL”. The SDL is used to express the types available within a schema and how those types relate to each other.
The basic components of a schema are object types that represent an object you can fetch from the DB and the kind of fields to expect.
As seen in the code block above, SDL definition for a note entry into the DB is as follows:
type Note {
  _id: ID!
  title: String!
  date: Date
  content: String!
}

scalar Date

type Query {
  allNotes: [Note]
}
The
type Note declaration is a GraphQL Object Type, meaning it’s a type with some fields. It also defines the structure of a note model in the application.
_id,
title,
date, and
content are fields in the
Note type.
ID and
String are built-in scalar types that resolve to a single value, and appending an exclamation mark
! means the field is non-nullable, meaning that the GraphQL service promises to always give you a value when you query this field.
A root
Query type is also defined and it works when you try to fetch all the notes. It will run the allNotes resolver against this Query.
These are the default scalar types that ship with GraphQL: Int, Float, String, Boolean, and ID.
You’ll notice the
Date type is missing. That’s because GraphQL doesn’t ship with it as a default scalar type however that can be fixed with the line of code below, which is what we did in schema up there.
scalar Date
The schema is created using the makeExecutableSchema function which also expects a resolver. Resolvers are the actual implementation of the schema we defined up there.
You can now create the
resolvers.js file as referenced in the code above where we imported the
resolvers function into the
schema.js file. The
resolvers.js file is used to create functions that will be used to either query some data or modify some data.
Therefore, create a file named
resolvers.js in the
notetaking-api folder. Open
resolvers.js file and add the code below.
import Note from './models/note'; export const resolvers = { Query : { async allNotes(){ return await Note.find(); } } };
In the schema above, you created a root
Query that returned all the notes. Ordinarily, that won’t work the way it is, you’d need to hook it up to MongoDB and resolvers help with that.
When you try to execute the
allNotes query, it will run the
Query resolver and find all the notes from the MongoDB database.
With all of that done, let’s test what we have so far and try sending a query to GraphQL. We’ll be sending the query below in the GraphiQL interface.
{ allNotes { _id title content date } }
To do that, run the app by using
npm start and navigate to the GraphQL web client (GraphiQL) via this URL. You can then run the
allNotes query as seen above.
Copy the code above and paste in the left pane of the GraphiQL interface, then click on the play button.
The expected output on the right pane should be an empty array, rightly so because we haven’t added a note entry to the database yet.
In this step, the GraphQL server was successfully set up. In addition to that, the GraphQL schema for the API was created as well as the necessary resolver functions that help to either read data from the server.
In this section, you’ll implement the functionality to add, retrieve, update, and delete notes in your database. You’ll achieve this by creating resolver functions to make queries or mutations to the GraphQL server.
To add notes to the database, you’ll need to create a mutation in the
resolvers.js file. The mutation will help to add a new record in MongoDB. Mutations are a way of modifying data in GraphQL and they work just like a query.
Open the
resolvers.js file and edit with the code below.
import Note from "./models/note"; export const resolvers = { Query: { async allNotes() { return await Note.find(); } }, Mutation: { async createNote(root, { input }) { return await Note.create(input); } } };
We’ll also need to make the appropriate changes in the
schema.js file.
import { makeExecutableSchema } from "graphql-tools"; import { resolvers } from "./resolvers";const typeDefs = ` type Note { _id: ID! title: String!, date: Date, content: String! }scalar Datetype Query { allNotes: [Note] }input NoteInput { title: String! content: String! }type Mutation { createNote(input: NoteInput) : Note }`;const schema = makeExecutableSchema({ typeDefs, resolvers });export default schema;
In the code block above, we defined a type,
Mutation, and assigned it to the
createNote resolver we wrote above. We also added a new input type which is
NoteInput. You need to pass the
input argument while sending the
createNote mutation. The return type of this mutation is
Note.
With that done, let’s try adding a Note by running a mutation to the database via GraphiQL. Run the mutation below in the GraphiQL interface.
If the app is not running you can run the command below and go ahead to navigate to localhost:4300/graphql in your browser.
mutation { createNote(input: { title: "My First note", content: "Here's the content in it", }) { _id title content date } }
Next thing to do is to get a single note from the database. To get a single note from the database, we’ll need to create a new
getNote type in
schema.js file. The idea is to be able to get a note by its
id. } `;const schema = makeExecutableSchema({ typeDefs, resolvers });export default schema;
The
getNote type also accepts an
id parameter, and the return type of the
getNote is Note. Next thing to do is to create the resolver for the
getNote query.); } } };
In the code block above, we added a
getNote resolver that uses mongoose’s
findById function to find a note entry by its ID. Let’s see this in action by running the query below in the GraphiQL interface. You can use the
id of the note created earlier above to search.
{ getNote(_id: "5d8ab091330c4b44c8a24a6e") { _id title content date } }
Note: Please replace the ID above (5d8ab091330c4b44c8a24a6e) with the actual ID returned from your GraphiQL
Next, we’ll see how to update notes in the database. To update a note’s entry in the DB, an
updateNote mutation has to be created in the
schema! }input NoteUpdateInput { title: String content: String }type Mutation { createNote(input: NoteInput) : Note updateNote(_id: ID!, input: NoteUpdateInput): Note } `;const schema = makeExecutableSchema({ typeDefs, resolvers });export default schema;
A new mutation is created,
updateNote, it accepts a parameter of the
id and the
NoteUodateInput which contains the updated data (title or content) that needs to be updated.
Let’s create the resolver for the
updateNote mutation.}) } } };
In the code block above, the
updateNote function uses the
findOneAndUpdate function to update the particular note’s entry in the DB. As the name suggests, it finds that particular
id in the DB and updates it.
We can test updating a note by running the mutation below in the GraphiQL interface.
mutation { updateNote(_id: "5ba70100f24d5027d7394d68", input: { title: "My First note", content:"Here's content in it" }) { title content } }
Note: Please replace the ID above (5ba70100f24d5027d7394d68) with the actual ID returned from your GraphiQL
Next, we’ll see how to delete notes in the database. To delete notes in the DB, the process is similar to the ones above. You’d first need to create a
deleteNote mutation in the
schema.js file and then create a resolver for it in the
resolvers updateNote(_id: ID!, input: NoteInput): Note deleteNote(_id: ID!) : Note } `;const schema = makeExecutableSchema({ typeDefs, resolvers });export default schema;
The
deleteNote mutation accepts a parameter of
id which will be used to identify the particular note to be removed from the database. Let’s create the appropriate resolver for the newly created mutation.}) }, async deleteNote(root, {_id}){ return await Note.findOneAndRemove({_id}); } } };
In the code block above, the
deleteNote mutation finds the note by its id and then uses the
findOneAndRemove function to remove from the database.
You can test deleting a note by running the mutation below in the GraphiQL interface.
mutation { deleteNote(_id: "5ba70100f24d5027d7394d68") { title content } }
Note: Please replace the ID above (5ba70100f24d5027d7394d68) with the actual ID returned from your GraphiQL
You’ve now successfully created your GraphQL API server. It’s a minimal server that does the required CRUD operations. In the next part of this tutorial, you’ll set up your React application that will interface with the GraphQL server.
Therefore you’ll need to build the UI needed to add, edit and delete notes..
npx create-react-app notetaking-ui
When the installation is done, you will have a working React app in the
notetaking-ui folder. Navigate into the folder and run the
yarn start command to start the app in development mode.
You will be building a minimal React app, therefore, the following views are needed for the React app:
To that end, create the following files in the
src folder,
AllNotes.js,
EditNote.js,
NewNote.js. We’ll edit them later on.
Now, you’ll install the required dependencies for building the UI for your note-taking app.
yarn add react-router-dom
This command installs the react-router package which is a great library for handling routing in React apps. We’ll also be making use of Bulma, a CSS framework, to help with styling the React app. Add the line of code below to the
head tag in the
public/index.html file to add the Bulma framework to the app.
<link rel="stylesheet" href="">
To set up the routes for the React app, you’ll need to make use of react-router to define the routes. Open up the
App.js file and edit with the code below:
import React, { Component } from 'react'; import { BrowserRouter as Router, Route, Link } from 'react-router-dom' import './App.css'; import AllNotes from './src/AllNotes' import NewNote from './src/NewNote' import EditNote from './src/EditNote'class App extends Component { render() { return ( <Router> <div> <nav className="navbar App-header" role="navigation" aria- <div className="navbar-brand"> <Link to="/" className="navbar-item"> NotesQL </Link> </div><div className="navbar-end"> <Link to="/" className="navbar-item"> All Notes </Link><Link to="/newnote" className="navbar-item"> New Note </Link> </div> </nav><Route exact path="/" component={AllNotes}/> <Route path="/newnote" component={NewNote}/> <Route path="/note/:id" component={EditNote}/> </div> </Router> ); } }export default App;
In the code block above, all the components needed for
react-router are imported, as well as the components for the views created above.
In the
render function, the whole layout is wrapped in the
<Router> component. This helps by creating a history object to keep track of the location. Therefore, when there’s a location change, the app will be r-rendered. The
Link component is used to navigate through the different routes.
Finally, the code block below is where the actual routes are defined:
The
/ route is the homepage of the app and will always point to the
AllNotes component which displays all the notes, the
/newnote route is pointed to the
NewNote component which contains the view to add a new note and the
/note/:id route is pointed to the
EditNote component which contains the view needed to edit a note. The
id bit is what will be used to fetch that particular note’s detail from the database.
In this step, we’ll create the views needed for the note-taking app. In the previous step, we used the create-react-app CLI to create a functional React application. As a reminder, the views to be created are:
Therefore, let’s start with the view for all notes. Open up the previously created
AllNotes.js file and add the code below to the file.
import React from "react"; import { Link } from "react-router-dom";const AllNotes = () => { let data = [1, 2, 3, 4, 5]; return ( <div className="container m-t-20"> <h1 className="page-title">All Notes</h1><div className="allnotes-page"> <div className="columns is-multiline"> {data.length > 0 ? data.map((item, i) => ( <div className="column is-one-third" key={i}> <div className="card"> <header className="card-header"> <p className="card-header-title">Component</p> </header> <div className="card-content"> <div className="content"> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus nec iaculis mauris. Lorem ipsum dolor sit amet. <br /> </div> </div> <footer className="card-footer"> <Link to={`note/${i}`} Edit </Link> <a href="#" className="card-footer-item"> Delete </a> </footer> </div> </div> )) : "No Notes yet"} </div> </div> </div> ); }export default AllNotes;
In the code block above, we’re using the Cards component from Bulma as the layout for displaying the notes. The Card component is an all-around flexible and composable component that’s used to display information in a concise manner.
Note that we’re iterating through the
data array as an example, ideally, it would be the results from the database.
We’re also doing a check in the render function to check if there are available data. If there is, we display accordingly and if not a message of “No Notes yet” is displayed.
Next, you’re going to add some custom CSS for the components created above, so go ahead to update the stylesheets accordingly.
Edit the
src/App.css file with the code below.
.App-header { background-color: #222; color: white; } .navbar-item { color: white; } .App-title { font-size: 1.5em; } .App-intro { font-size: large; } .m-t-20 { margin-top: 20px; } .card { box-shadow: 0 1px 5px 1px rgba(0, 0, 0, 0.1); } .page-title { font-size: 2rem; margin-bottom: 20px; } .newnote-page { height: calc(100vh - 52px); width: 60%; } .field:not(:last-child) { margin-bottom: 2.75rem; } .card button { border: none; color: #3273dc; cursor: pointer; text-decoration: none; font-size: 16px; } .card button:hover { color: #363636; }
Next, let’s create the view for adding a new note. Open up the
NewNote.js file and add the code below to the file.
import React from "react"; const NewNote = () => { return ( <div className="container m-t-20"> <h1 className="page-title">New NewNote;
In the code bock above, a form is created and it contains an input field and a textarea field for the note title and note content respectively.
You’ll hook the form up to a function that does the actual addition of note later on.
Lastly for the views, let’s add the code for editing an existing view. Open up the
EditNote.js file and add the code below to the file.
import React from "react"; const EditNote = () => { return ( <div className="container m-t-20"> <h1 className="page-title">Edit EditNote;
In the code bock above, a form is created and it features and input field and a textarea field for the note title and note content respectively.
We’ll hook the form up to a function that does the actual editing of the note data later on.
In the step above, the following views were created for the note-taking app:
You can go ahead to start the app in development mode to see the progress made so far. Remember the command
yarn start starts the React app in development mode.
In the next step, the functionality required to add, delete and view existing notes will be added to the React app.
The next step in this tutorial is connecting the React app to the GraphQL API. You’ll be using ApolloClient to interface with the GraphQL API.
ApolloClient is a GraphQL client that help with declarative data fetching from a GraphQL server. It has some built in features that help to implement data caching, pagination, subscriptions e.t.c out of the box.
This helps developers to write less code and with a better structure. One of the best parts about ApolloClient is that it’s adoptable anywhere, in the sense that it can be dropped into any JavaScript app with a GraphQL server and it works.
Let’s get started on connecting the React app to the GraphQL server. The first thing to do is install the various ApolloClient dependencies.
yarn add apollo-boost @apollo/react-hooks graphql graphql-tag apollo-cache-inmemory apollo-link-http
apollo-boost: A package containing everything you need to set up ApolloClient.
@apollo/react-hooks: React hooks based view layer integration.
graphql: Used in parsing your GraphQL queries.
graphql-tag: Helpful utilities for parsing GraphQL queries.
apollo-cache-inmemory: cache implementation for ApolloClient.
apollo-link-http: a standard interface for modifying control flow of GraphQL requests and fetching GraphQL results.
With that done, let’s initiate ApolloClient in our React app. The only thing needed here is the endpoint to our GraphQL server. In our
index.js file, let’s import
ApolloClient from
apollo-boost and add the endpoint for our GraphQL server to the
uri property of the client config object.
Edit the
src/index.js file with the following code.
import React from "react"; import ReactDOM from "react-dom";import { ApolloProvider } from "@apollo/react-hooks"; import { ApolloClient } from "apollo-client"; import { createHttpLink } from "apollo-link-http"; import { ApolloLink } from "apollo-link"; import { InMemoryCache } from "apollo-cache-inmemory";import "./index.css"; import App from "./App"; import * as serviceWorker from "./serviceWorker";const httpLink = createHttpLink({ uri: "" });const link = ApolloLink.from([httpLink]);const client = new ApolloClient({ link, cache: new InMemoryCache() })();
In the code block above, the required dependencies are imported from the installed packages. Next, an
httpLink constant variable is created and connected to the
ApolloClient instance with the GraphQL API. Recall that the GraphQL server is running on. ApolloClient is then initialized by passing in the
httpLink and a new instance of an
InMemoryCache.
Finally, the app is rendered with the root component.
App is wrapped with the higher-order component
ApolloProvider that gets passed the
client as a prop.
Next thing to do is fetching the all the notes from the GraphQL API and displaying them with the UI already built. To do that, you’ll need to define the query to be sent to GraphQL. Remember while building the GraphQL server we wrote the query to be used in fetching notes on the GraphiQL interface. We’ll do the same here.
Let’s add this query to the
AllNotes component. Navigate to the
src/AllNotes.js file and let’s start editing.
import React, { Component } from 'react'; import { useQuery } from '@apollo/react-hooks'; import gql from 'graphql-tag'; import { Link } from 'react-router-dom';const NOTES_QUERY = gql` { allNotes { title content _id date } } `
The query is written here as a JavaScript constant and being parsed with
gql. Next, let’s get the display the result from the API and display them with the UI built. In the same
AllNotes.js file, replace the existing function with the one below.
const AllNotes = () => { const { loading, error, data } = useQuery(NOTES_QUERY);> <a href="#" className="card-footer-item"> Delete </a> </footer> </div> </div> ))} </div> </div> </div> ); };
Here we’re using the
useQuery React Hook to fetch some notes from our GraphQL server and displaying them on the UI. useQuery is the primary API for executing queries in an Apollo application. To run a query within a React component, call
useQuery and pass it a GraphQL query string like we did above.
useQuery returns an object from Apollo Client that contains
error, and
data properties. These props help provide information about the data request to the GraphQL server.
trueas long as the request is still ongoing and the response hasn’t been received.
error: In case the request fails, this field will contain information about what exactly went wrong.
data: This is the actual data that was received from the server.
Let’s check our progress. You can do a page refresh or start the app if you haven’t with the command
yarn start. You should see an error message, specifically, the error message we set above.
Upon further inspection which can be done by checking the console of the browser, we can ascertain that it’s a CORS issue.
Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell a browser to let a web application running at one origin (domain) have permission to access selected resources from a server at a different origin. - MDN
Simply put this means we have to enable our GraphQL server to accept requests from the React app. This can be done by using the cors package which is a node.js package that can be used to enable CORS with various options.
So go back to the
notetaking-api project and install the package with the command below.
npm i cors
And proceed to use it in your server by editing the
index.js file in the
notetaking-api project with the following.
import cors from "cors";.use(cors()); app.get("/", (req, res) => { res.json({ message: "Notetaking API v1" }); }); app.use( "/graphql", graphlHTTP({ schema: schema, graphiql: true }) ); app.listen(PORT, () => { console.log(`Server is listening on PORT ${PORT}`); });
Now if you refresh the React app, you should get the data from the GraphQL being fetched and displayed.
Awesome! You’ve now handled fetching data from the GraphQL server by fetching the list of notes.
Before we go any further, let’s handle errors from the GraphQL server and add a notification system to the React app. To handle errors from the GraphQL server and notifications generally, we’d need to make use of react-notify-toast and apollo-link-error.
react-notify-toast is a React library that help with toast notifications for React apps and
apollo-link-error helps to capture GraphQL or network errors. Install both packages with the command below.
yarn add react-notify-toast apollo-link-error
Once installation is done, modify your
index.js file to look like the one below.
... import { onError } from 'apollo-link-error' import Notifications, {notify} from 'react-notify-toast'; ... const errorLink = onError(({ graphQLErrors }) => { if (graphQLErrors) graphQLErrors.map(({ message }) => notify.show(message, 'error')) }) const httpLink = createHttpLink({ uri: '' }); const link = ApolloLink.from([ errorLink, httpLink, ]); const client = new ApolloClient({ link, cache: new InMemoryCache() }) ReactDOM.render( <ApolloProvider client={client}> <Notifications /> <App /> </ApolloProvider>, document.getElementById('root') ); registerServiceWorker();
In the code above, the important bit is where the
errorLink variable is being created. We are essentially looking out from errors from GraphQL and then displaying them nicely with the
notify component.
In the preceding code block, you added the errorLink constant variable which essentially uses the
onError function to show a notification whenever there’s an error from the GraphQL server.
Let’s see how to add notes to the GraphQL server from the React app. You’ll be editing the
NewNote.js file in the
src folder.
... import { useMutation } from "@apollo/react-hooks"; import gql from 'graphql-tag'; const NEW_NOTE = gql` mutation createNote($title: String! $content: String!) { createNote( input: {title: $title, content: $content}) { _id title content date } } ` ...
In the code block above, we’re utilising the
createNote mutation that was defined earlier when we were building out the API.
The
useMutation React hook is the primary API for executing mutations in an Apollo application. To run a mutation, you first call
useMutation within a React component and pass it a GraphQL string that represents the mutation.
Therefore, we’ll create a GraphQL mutation named
NEW_NOTE. The server expects a
title and a
content to successfully create a new entry, and it returns the
id,
title,
content upon creation.
Next step is to edit the NewNote function so that it utilizes the
NEW_NOTE mutation . Edit the file with the code below.
import React, { useState } from "react"; import { withRouter } from "react-router-dom"; import { useMutation } from "@apollo/react-hooks"; import gql from "graphql-tag";const NEW_NOTE = gql` mutation createNote($title: String!, $content: String!) { createNote(input: { title: $title, content: $content }) { _id title content date } } `;const NOTES_QUERY = gql` { allNotes { title content _id date } } `;const NewNote = () => { const [title, setTitle] = useState(""); const [content, setContent] = useState("");const [createNote] = useMutation(NEW_NOTE, { update( cache, { data: { createNote } } ) { const { allNotes } = cache.readQuery({ query: NOTES_QUERY });cache.writeQuery({ query: NOTES_QUERY, data: { allNotes: allNotes.concat([createNote]) } }); } });return ( <div className="container m-t-20"> <h1 className="page-title">New Note</h1><div className="newnote-page m-t-20"> <form onSubmit={e => { e.preventDefault();createNote({ variables: { title, content, date: Date.now() } }); history.push("/"); }} > <div className="field"> <label className="label">Note Title</label> <div className="control"> <input className="input" name="title" type="text" placeholder="Note Title" value={title} onChange={e => setTitle(e.target.value)} /> </div> </div><div className="field"> <label className="label">Note Content</label> <div className="control"> <textarea className="textarea" name="content" rows="10" placeholder="Note Content here..." value={content} onChange={e => setContent(e.target.value)} ></textarea> </div> </div><div className="field"> <div className="control"> <button className="button is-link">Submit</button> </div> </div> </form> </div> </div> ); };export default NewNote;
When your component renders,
useMutation returns a tuple that includes:
The form also has a
onSubmit function which allows you to pass
title and
content as
variables props to the GraphQL server.
One other thing we’re doing in the useMutation query is updating the cache otherwise known as interface update. This is essentially ensuring that whatever updates we make to the GraphQL server is also effected on the client in realtime without the need for a page refresh. To do this, the call to
useMutation includes an
update function. Let’s have a closer look at the update function.
const [createNote] = useMutation(NEW_NOTE, { update(cache, { data: { createNote } }) { const { allNotes } = cache.readQuery({ query: NOTES_QUERY });cache.writeQuery({ query: NOTES_QUERY, data: { allNotes: allNotes.concat([createNote]) } }); } });
The update function is passed a
cache object that represents the Apollo Client cache and a
data property that contains the result of the mutation. The
cache object provides
readQuery and
writeQuery functions that enable you to execute GraphQL operations on the cache as though you’re interacting with a GraphQL server.
In the code block above, the update function first reads the existing notes from the cache with
cache.readQuery. It then adds the newly created note from our mutation to the existing list of notes and writes it back to the cache with
cache.writeQuery.
Now, whenever you add a new note, the UI will update to reflect newly cached values.
Next, let’s edit the code so that after a note has been added, the app automatically redirects the notes listing page. To do that, you’d need to import
withRouter from
react-router-dom in the
NewNote.js file.
withRouter will be used as a higher order function so that means the
NewNote function will be enclosed inside the
withRouter function as seen below.
... import {withRouter} from 'react-router-dom'; ...const NewNote = withRouter(({ history }) => { ... }); ...
With that done, go ahead to add the line of code below immediately after the
createNote function that takes in the
variables.
... createNote({ variables: { title, content } }); history.push("/"); ...
You’ve now handled the functionality for adding new notes to the GraphQL server. Next, you’ll see how to handle the functionality for editing existing notes.
As seen in the initial React app view, there’s an Edit button that allows you to edit the data in the database. The Edit button is hooked up to the route
note/:id where
id is the ID of the note we wish to edit. This is done so that we can easily use the ID to find the data in the database and update using Mongoose’s
findOneAndUpdate function.
If you check the
AllNotes component, you’d see that we’re already passing the ID to the
Link component.
<Link to={`note/${note._id}`}Edit</Link>
If you click on the Edit button, it takes you to the Edit Note view but we don’t have the logic to either fetch that particular note’s detail or edit it. Let’s do that now. You start by fetching the note’s details itself.
To fetch the details for a particular note, you’ll first have to write a GraphQL query to fetch the note from the server. In the
EditNote.js file, add the code below.
import React, { useState } from "react"; import { useQuery, useMutation } from "@apollo/react-hooks"; import {notify} from 'react-notify-toast'; import gql from 'graphql-tag';const NOTE_QUERY = gql` query getNote($_id: ID!) { getNote (_id: $_id) { _id title content date } } ` ...
In the query above, you’re utilising the
getNote earlier defined when building the API. The server expects just the
_id to successfully fetch the note.
Next, we’ll write the query that will help with updating the notes and since we’ll be sending data, it’s going to be a mutation. Add the block of code below just after the
NOTE_QUERY.
... const UPDATE_NOTE = gql` mutation updateNote($_id: ID!, $title: String, $content: String) { updateNote(_id: $_id, input: { title: $title, content: $content }) { _id title content } } `; ...
In the query above, we’re creating a mutation function that accepts the
_id_,
title and
content as variables. These variables will be sent to the GraphQL server where the appropriate action will be carried out, that is, updating a note.
Next, you’ll define the state using
useState that will hold both the title and content of the note we’d like to edit and also implement the Query that will fetch the data from the GraphQL server.
Replace the entire
EditNote function with the one below.
const EditNote = ({ match }) => { const [title, setTitle] = useState(""); const [content, setContent] = useState("");const { loading, error, data } = useQuery(NOTE_QUERY, { variables: { _id: match.params.id } });const [updateNote] = useMutation(UPDATE_NOTE);if (loading) return <div>Fetching note</div>; if (error) return <div>Error fetching note</div>;// set the result gotten from rhe GraphQL server into the note variable. const note = data;return ( <div className="container m-t-20"> <h1 className="page-title">Edit Note</h1><div className="newnote-page m-t-20"> <form onSubmit={e => { // Stop the form from submitting e.preventDefault();// set the title of the note to the title in the state, if not's available set to the original title gotten from the GraphQL server // set the content of the note to the content in the state, if not's available set to the original content gotten from the GraphQL server // pass the id, title and content as variables to the UPDATE_NOTE mutation. updateNote({ variables: { _id: note.getNote._id, title: title ? title : note.getNote.title, content: content ? content : note.getNote.content } });notify.show("Note was edited successfully", "success"); }} > <div className="field"> <label className="label">Note Title</label> <div className="control"> <input className="input" type="text" name="title" placeholder="Note Title" defaultValue={note.getNote.title} onChange={e => setTitle(e.target.value)} required /> </div> </div><div className="field"> <label className="label">Note Content</label> <div className="control"> <textarea className="textarea" rows="10" name="content" placeholder="Note Content here..." defaultValue={note.getNote.content} onChange={e => setContent(e.target.value)} required ></textarea> </div> </div><div className="field"> <div className="control"> <button className="button is-link">Submit</button> </div> </div> </form> </div> </div> ); };
In the code block above, the
useQuery hook contains the
NOTE_QUERY query string and an object that contains a variable. In this case the variable is the
match.params.id 's value.
useQuery returns an object from Apollo Client that contains
error, and
data properties. These props help provide information about the data request to the GraphQL server.
The result of the query to the GraphQL server will be stored in the
data property which is later set to the const
note .
In the form, you’re setting the defaultValue for the
input field and
textarea to the values gotten from the GraphQL server.
Finally, the form has its own
onSubmit function that handles the actual editing of the note. In the onSubmit function, the title and content of the note is sent to the
updateNote function which is in turn attached to the
useMutation hook.
You’ve now handled the functionality for editing existing notes in the GraphQL server. Next thing to do is to handle deletion of notes from the GraphQL server.
To delete notes in the database via the React app, you’ll need to make use of the
useMutation hook and then use it to send a mutation query (deleteNote) to the GraphQL server.
The first thing we need to do is import the useMutation hook from
apollo/react-hooks in the
AllNotes.js file and write the query that performs the delete action and that will be sent to the GraphQL server, and then use the useMutation hook to send the query to the server. Edit the
AllNotes.js file with the code below.
... import { useQuery, useMutation } from "@apollo/react-hooks"; import { notify } from "react-notify-toast";... const DELETE_NOTE_QUERY = gql` mutation deleteNote($_id: ID!) { deleteNote (_id: $_id) { title content _id } } `const AllNotes = () => { const { loading, error, data } = useQuery(NOTES_QUERY);const [deleteNote] = useMutation(DELETE_NOTE_QUERY, { update(cache, { data: { deleteNote }}) { const { allNotes } = cache.readQuery({ query: NOTES_QUERY }); const newNotes = allNotes.filter(note => note._id !== deleteNote._id);cache.writeQuery({ query: NOTES_QUERY, data: { allNotes: newNotes } }); } })> <button onClick={e => { e.preventDefault(); deleteNote({ variables: { _id: note._id } }); notify.show("Note was deleted successfully", "success"); }} Delete </button> </footer> </div> </div> ))} </div> </div> </div> ); }; ...
In the code block above, we create the
deleteNote function and set it to
useQuery hook which contains a GraphQL query string. The
deleteNote function is later used in the delete’s button
onClick handler. It accepts the note’s ID as a variable and then that ID is used to find and delete the entry.
We’re also carrying out interface update here when we delete a note. The
useMutation hook also accepts an object which contains an
update function in which we’ll be carrying out the interface update.
update(cache, { data: { deleteNote }}) { const { allNotes } = cache.readQuery({ query: NOTES_QUERY }); const newNotes = allNotes.filter(note => note._id !== deleteNote._id);cache.writeQuery({ query: NOTES_QUERY, data: { allNotes: newNotes } }); }
In the function above, we first read the available notes using the cache.readQuery method. The
readQuery allows us to fetch data without actually making a request to the GraphQL server.
The
.filter method is used to check for the particular note that deleted from the page. It works by comparing the
_id_ of the deleted note with that of all existing notes and then returns only items that don’t match that of the deleted item. The
_id_ of the deleted item is gotten by using
payload.
payload contains the data that’s being sent to the server. Finally, the cache is updated with the newly filtered data.
That’s it! We have now successfully built a functional web app that has GraphQL as its server and React as its frontend.
In this part of the tutorial, you learnt how to build a React app that works with a GraphQL API server. You first started by building a Node.js GraphQL API in the previous part of the tutorial and then you built a frontend app using React.
You also saw how to use Apollo Client to build React apps that interface with a GraphQL server. The React app built in this article allows you to communicate with the GraphQL thanks to Apollo Client.
You can go ahead to extend the application by having some additional features such as pagination, caching, realtime subscriptions e.t.c.
One other thing that was explored was GraphQL’s unique features and some of its advantages over the traditional REST methodology.
If you want to dive deeper and learn more about building web applications with React and GraphQL, you can use any of the resources below:
The code for the GraphQL API can be seen here on GitHub and that of the React app can be seen here.
#reactjs #graphql #web-development #database | https://morioh.com/p/bf5be9232f5a | CC-MAIN-2022-40 | refinedweb | 7,748 | 64.81 |
Creating adapter modules using the Web Services adapter
This section explains how to create an adapter module using the Web Services base adapter.
To create an adapter module using the Web Services base adapter.
- Create and configure the Web Services adapter on the grid. For details, see the Web Services adapter.
- Use Make SOAP Request Method One, if the Web Service runs on HTTP or HTTPS.
- Use Make SOAP Request Method Two, if the Web Service runs on HTTP and requires a session cookie.
- Identify and perform the Web Services operations with the required inputs and outputs using the SoapUI utility.
- Search and import the required WSDL from the web in the SoapUI utility to get all the operations.
- Run the required operation.
This formulates the SOAP body which includes operation names and the outputs.
- Create a new adapter utilities module.
- Create a process similar to the operation name. For example, create a TrueSight Orchestration process, Login for the login operation.
- For this example, you need session cookies. Therefore, add connection name as an input, in addition to adapter name and soap url.
- Add the information needed for the operation.
For example, for the login operation, add username and password as inputs.
- Construct a SOAP body using the Assign activity from the Activity palette to transform an empty XML document
</mt>.
- Add the inputs of operations as tokens within the XSLT transform.
- Create a
<body>element.
- Create a child element with the same name as the operation (as seen in SoapUI).
- the prefix is
urn
- the namespace is
urn:NSConfig
- Each additional element is an input name accompanied by the token value.
- Preview the SOAP body.
- Save, exit, and assign to the output context item of the SOAP <
body>element.
- Assign inputs to the SOAP Method process.
- Assign the process output to a local context item.
- Create appropriate outputs by adding local context items to the output of the process.
- (optional) With another Assign activity, extract values from the SOAP process and pass it to the output of the process.
- Create another module using step 1 to step 21.
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/TruesightOrchestrationContent/201801/creating-adapter-modules-using-the-web-services-adapter-808148709.html | CC-MAIN-2020-24 | refinedweb | 358 | 68.26 |
hi sir - Java Beginners
hi sir Hi,sir,i am try in netbeans for to develop the swings,plz provide details about the how to run a program in netbeans and details about... the details sir,
Thanks for ur coporation sir Hi Friend
how to include a java class in jsp - JSP-Servlet
how to include a java class in jsp hello sir,
this is my first question, i want know that how to use a java class function's variables and methods of a java class in jsp.if possible plz tr to explain with a simple example
include tag
" is not displayed.
please help me
Thanks
venu
Hi,
Which example from roseindia.net you are trying?
Thanks
Hi,
Which example from...include tag Good morning sir. This is venu sir.i am pursuing mca .
I include?
How to include? How can we include a JSP file in one application to the JSP file in another application which are in the same server?? please help me out with code
index
in increasing your programming skills. ...
Flash Tutorials
JSP Tutorials
Perl Tutorials
Jsp include page problem
Jsp include page problem I have included footer.jsp in all my pages... content is displayed ) How can I rectify this?
Hi,
You have to delete the cache files in your server's work folder.
If you are using tomcat, then you
Hi
Hi how to read collection obj from jsp to servlet and from jsp - jsp?
Hi Friend,
Please visit the following link:
Thanks
The Include Directive in JSP Page
directive of the JSP. You will learn about what is include and how to
implement... The Include Directive in JSP Page
... from the root directory of your JSP application.
There are two ways of including
hi
hi i want to develop a online bit by bit examination process as part of my project in this i am stuck at how to store multiple choice questions...])){
count++;
}
}
out.println("Your "+count+" answers are correct");
%>
For more
Include Static HTML Page in JSP
Include Static HTML Page in JSP
This example shows how to include static html page in jsp.
In JSP, there are two ways to include another web resource.
1
jsp coding please.
jsp coding please. hi sir, my name is logeswaran. I have a problem... list. please show me how to do it? thank you.
By the way, I'm using access... DSN will get created.
6) Restart your server and run the following jsp code
JSP include directive tag syntax and example
JSP include directive tag syntax and example The syntax and example of the JSP include directive tag.
Hi,
The syntax of the JSP include directive tag is:
<%@include
The example
Include directive vs Include Action
;)
includes file into the JSP page at compile time. Include directive should be
used...;jsp:include>)
includes the output at runtime. Include action (runtime... of welcome.jsp.
For example, if you want to use an include having a set
hello sir, please give me answer - Java Beginners
hello sir, please give me answer Write a program in Java that calculates the sum of digits of an input number, prints... ways in java? so , sir please tell me full solution of this program Here is your complete
jsp using include & with mysql
jsp using include & with mysql Sir,
I am creating a login application using jsp & Mysql.
The Codes are---
Html File......
<... and in tomcat lib.
Please go through the following link:
JSP Login Application
The "include" directive of JSP
will discuss about "include" directive of JSP with an
example.
The include...;<%="Example
of include JSP directive"
%></font><br...; is the relative URL of the file from the
root directory of your JSP
JSPs : include Directives
assumes that the
file is in the same directory as your JSP.
Example : In this example we are showing how include
directive works.
header.jsp -
<...
translation. The web container merges the content of included files with your
JSP pages
Hi
Hi I want import txt fayl java.please say me...
Hi,
Please clarify your problem!
Thanks
Hi
Hi Hi
How to implement I18N concept in struts 1.3?
Please reply to me
Spring Constructor arg index
Constructor Arguments Index
In this example you will see how inject the arguments into your bean
according to the constructor argument index...;
<constructor-arg index="0" value
Hi... - Java Beginners
but this not working
Upload Record
please tell me Hi Friend,
Please clarify your question.
Thanks Hi frnd,
Your asking...Hi... Hi friends,
I hv two jsp page one is aa.jsp & bb.jsp
How to index a given paragraph in alphabetical order
How to index a given paragraph in alphabetical order Write a java program to index a given paragraph. Paragraph should be obtained during runtime... paragraph : This is a technical round. Please index the given paragraph.
Output
JSP Include File
to include a html file in the jsp page.
You can see in the given example that we...
JSP Include File
JSP Include File is used to insert a html
Struts <s:include> - Struts
on this.
Thanks,
Paddy Hi friend,
Code to include Tag :
"includeTag.jsp"
Include Tag Example!
Include Tag (Data Tags)
Example!
"mypage.jsp"
Include Tag (Data
How to include a File using directive and include action
How to include a File using directive and include action... one attribute named as file, which is
used to include a file in the jsp page..., whether it is a html, xml or any jsp
page we should use include directive
hi - SQL
hi hi sir,how to create a database in sql sir
thanks in advance Hi Friend,
Please visit the following links:
hi - SQL
hi hi sir,i want to insert a record in 1 table
for example...,............
plz provide this query sir,plzzzzzzzzzzzz
Hi Friend,
Please provide some more details.
Thanks
hi - Swing AWT
hi sir,how to set a title for jtable plz tell me sir Hi..."});
JScrollPane jsp = new JScrollPane(jt);
getContentPane().add(jsp... information, visit the following link:
hi - Java Beginners
hi hi sir,u provide a answer for datepicker,but i don't know how...,how to embed the datepicker code to my panel.
This is my...
{
static final int CURRENTDATE_COLUMN_INDEX=0;
static final
Hi.... - Java Beginners
.
For example : Java/JSP/JSF/Struts 1/Struts 2 etc....
Thanks...Hi.... Hi Friends,
Thanks for reply can send me sample of code using ur idea...
First part i have completed but i want to how to go
hi... - Struts
know how to set the path in window xp please write the proper command and path... that, your problem will be solved. Please visit for more information...hi... Hi Friends,
I am installed tomcat5.5 and open
Datepicker not getting called through include function of php
Datepicker not getting called through include function of php my...[
/*
A "Reservation Date" example using two datePickers...=document.forms["index"]["start"].value;
if (x==null || x=="")
{
alert("you must
please help me
please help me Dear sir, I have a problem. How to write JSP coding, if a user select a value from drop down list for example department, the another... before. This name list should get from the database. Please help me.
By the way, I
sir - Java Beginners
sir hi sir,in this program there is a jtable,i am insert...
sir,i am mentioning the error as <<>>>......main bug here>........ in this program,plz see and provide solution sir,database Tables
Java to create table in jsp file that include other jsp file
Java to create table in jsp file that include other jsp file String jspContent = "table"
+= "tr"
+= "td"
+= "jsp:include page='fileSource...
Please Provide me the solution (or example
struts2.2.1 include tag example.
struts2.2.1 include tag example.
In this example, you will see the use... of html or jsp pages directly into
current page or we can include output...>
<s:include</s:include>
ajax example
in jsp page?
Hi,
Please read Ajax First Example - Print Date and Time example.
Instead of using PHP you can write your code in JSP... ajax get example
jQuery ajax get example Hi,
How I can use jQuery
sir - Java Beginners
the program in swings,how to add and place the results in added row
ThanQ Hi Friend,
Try the following code
JSP Include jsp
JSP Include jsp
... include either a
static or dynamic file in a JSP file. If the file is static, its....
Understand with Example
In the given
example we have used <jsp:param>
Include Tag:
Include Tag:
bean:include Tag... to that of the standard <jsp:include> tag, except
that the response data... with a '/') of the web application
resource to be included.
z-index always on top
z-index always on top Hi,
How to make my div always on top using the z-index property?
Thanks
Hi,
You can use the following code:
.mydiv
{
z-index:9999;
}
Thanks
JSP include
JSP include
A JSP page can include page
fragments from other files to form the complete...; For
example the directive:
<%@ include file="filename.jsp"
%>
Index Out of Bound Exception
Index Out of Bound Exception
Index Out of Bound Exception are the Unchecked Exception... the
compilation of a program. Index Out of Bound Exception Occurs when
please help in jsp - JSP-Servlet
please help in jsp i have two Jsp's pages.
on main.jsp have some...
please help. Hi Friend,
Try... main.jsp to home.jsp, table area of home.jsp show blank.
how i fill this table Include Param
in the include directive. The
example uses <jsp:param> clause to pass... illustrate an example from 'JSP Include Param'. To
understand the example we...
JSP Include Param
Please HELPP
Please HELPP The University wants to make a basic graphical display to show how many people received different grades for a piece of work....
This is an example of the output. The example below shows the distribution of marks for 20
checking index in prepared statement
= con.prepareStatement(query);
then after query has been prepared, can we check the index of "?".
If yes then how?
Hello Friend,
Please visit the following...checking index in prepared statement If we write as follows:
String
How to include website in google search?
How to include website in google search? Does anyone has idea about how to include a website into Google Search?
Visit the given Google website link on how to oiptimize or submit your website to Google.
http
include a static file
include a static file How will you include a static file in a JSP page?
You can include a static resource to a JSP using
<jsp:directive > or <%@ inlcude >
hi - Java Beginners
hi hi sir, i am entering the 2 dates in the jtable,i want to difference between that dates,plz provide the suitable example sir
Hi Friend,
Please visit the following links:
how to include a session class in java awt code - JavaMail
how to include a session class in java awt code Hi...
i have been... information and examplle of how to include session class to ressolve this issue..
Can u please help me out
flush attribute in jsp:include tag - JSP-Servlet
flush attribute in jsp:include tag what is the use of flush attribute in jsp:include tag ? hi friend,
------------------------------
Read for more information,
sir i,have a assignment for these question plz help me sir - JavaMail
sir i,have a assignment for these question plz help me sir ...; Hi Friend,
Try the following codes:
1)ADT Stack:
import...();
System.out.print("Enter your choice: ");
menu = Integer.parseInt(dis.readLine
Please explain what is hibernatetemplate with an example code.
Please explain what is hibernatetemplate with an example code. hi,
Please explain Hibernate template example code to me..
Hello Friend... is org.springframework.orm.hibernate.HibernateTemplate
Please follow the given link for more details:
hibernateTemplate
Deployment of your example - Struts
Deployment of your example In your Struts2 tutorial, can you show how you would war it up so it can be deployed into JBOss? There seems to be a lot of unnecessary files in it, or am I mistaken
@WebServlet RequestDispatcher Include Example
@WebServlet RequestDispatcher Include Example
In this tutorial you will learn how to include the content into another resources in the response.
Sometimes... to how can you include the content/information into another resource
getting error in your login form code
and password.the code is $fuser=$POST["userid"];. how to solve this problem please help me
hi friend,
your form's input must have following :
<input type...getting error in your login form code i tried your code for login
PHP PDO Index
. Using the
following and subsequent tutorial's coding you can create your...; your desired drives and
remove the semicolons from that line:
Example:
<?php
foreach(PDO::getAvailableDrivers()as
$driver)
{
echo
index - Java Beginners
index Hi could you pls help me with this two programs they go hand...;Hi Friend,
Try the following code:
import java.io.*;
import java.awt.... BufferedReader(new InputStreamReader(System.in));
System.out.print("Please enter... then please relpy me with coding..
Thanks Gudiya use JavaScript... code;
if any one need just mail me: [email protected]
Hi
Tutorial | J2ME
Tutorial | JSP Tutorial |
Core Java Tutorial...
example | Java
Programming | Java Beginners Examples
| Applet Tutorials...;
| Java Servlets
Tutorial | Jsp Tutorials
| Java Swing
Tutorials
difference between <%@ include ...> and <jsp:include>
difference between <%@ include ...> and What is the difference between <%@ include ...> (directive include) and <jsp:include>
Include Tag (Data Tag) Example
Include Tag (Data Tag) Example
In this section, we are going to describe the include
tag. The include tag...) that we
want to include in our main jsp page ie..includeTag.jsp.
myBirthday.jsp
Foreach loop with negative index in velocity
Foreach loop with negative index
in velocity
This Example shows you how to
use foreach loop with negative index in velocity.
The method used in this example
Please explain @interface with an example
Please explain @interface with an example Here is the code snippet...)
private Runnable runnable;
where Runnable is an interface.
Could you please explain what does all these mean?
How all these work
Passing Parameter with <jsp: include>
Passing Parameter with <jsp: include>
In this example we are going to use <jsp:include>...-->
<jsp:include
<jsp
Display JSP selected listbox and checkbox in xml-please help me
Display JSP selected listbox and checkbox in xml-please help me Hi... as selected multiple checked checkboxes.
Please help in this how to do this.....I am new to java & jsp
I need your help.
Regards
hii sir
hii sir Eg: select
country--select state--
select district--select mandal--select village. Then click on village then automatically the village description will be displyed on textarea box. Then how to write on html code. Can u
need help please
need help please Dear sir, my name is logeswaran. I have a big problem that I can't find the solution for a program. How can I block a user from enter a page second time. Please help me.
By the way I'm using Access database
Include static files within a JSP page
Include static files within a JSP page How do I include static files within a JSP page?
Static resources should always be included using the JSP include directive. This way, the inclusion is performed just once
HI Jsp check box..!
HI Jsp check box..! Hi all..
I want to update the multiple values of database table using checkbox..after clicking submit the edited field has to update and rest has to enable to update...please help me..its urgent
Hi
Hi Hi this is really good example to beginners who is learning struts2.0
thanks
jsp:include page=.. and include file = ...
jsp:include page=.. and include file = ... What is the difference between <jsp:include page = ... > and
<%@ include file = ... >?.
<jsp:include page = ... >:
This is like a function call from | http://www.roseindia.net/tutorialhelp/comment/11642 | CC-MAIN-2014-52 | refinedweb | 2,648 | 66.84 |
I am loading a txt file containig a mix of float and string data. I want to store them in an array where I can access each element. Now I am just doing
import pandas as pd
data = pd.read_csv('output_list.txt', header = None)
print data
1 0 2000.0 70.2836942112 1347.28369421 /file_address.txt
data[i,j]
You can use:
data = pd.read_csv('output_list.txt', sep=" ", header = None) data.columns = ["a", "b", "c", "etc."]
Add sep=" " in your code and leave a blank space between the quotes. So pandas can detect spaces beetween values and sort in columns. Data columns is for naming your columns. | https://codedump.io/share/gPVK4pX38xta/1/load-data-from-txt-with-pandas | CC-MAIN-2016-50 | refinedweb | 107 | 78.85 |
Components and supplies
Apps and online services
About this project
Project
In this project, I'll be outlining how I am able to control my ClearWalker strandbeest-style contraption via Bluetooth. The device uses two motors to control 8 legs, and is steered in a similar manner as a tank, or a robot that steers with two wheels that goes different speeds. These basic instructions should work for many types of vehicles.
This article won't go over how to actually build one of these 'beests, as that would be more of a book. Check out the original Strandbeest here, or my ClearWalker YouTube playlist for more general build info.
Step 1: Get Your Materials
To drive this vehicle, you'll need the following:
- PWM Speed Controller: Servocity or Monster Guts
- Various Wires
- Smartphone
Step 2: Wire Your Components
The basic idea is to wire the HC-06 module into your Arduino board as a wireless serial port. From here, the Arduino will output PWM signals, making the H-Bridge relay circuit switch motors to go forward, stop, or backward. Expand.
Code
Beest-control.inoArduino
#include <Servo.h> #include <SoftwareSerial.h> #include <LedControl.h> #include <binary.h> SoftwareSerial BT(10, 11); //serial on pins 10/11 // connect BT module TX to D10 // connect BT module RX to D11 Servo rtmotor; //controls RT H-bridge as servo Servo ltmotor; //controls LT H-bridge as servo int rtpos = 90; //stores right servo position (init 90 for off) int ltpos = 90; //stores left servo position (init 90 for off) void setup() { //"servo" motor setup rtmotor.attach(2); ltmotor.attach(3); //Bluetooth setup BT.begin(9600); //sets data rate for SoftwareSerial port //eye matrix setup lc.shutdown(0,false); lc.setIntensity(0,8); lc.clearDisplay(0); pinMode(13, OUTPUT); //light as needed } char a; void loop() { //initial eye setup (eyes small) lc.clearDisplay(0); lc.setColumn(0,3,B001100); lc.setColumn(0,4,B001100); //if BT available go into main control loop if (BT.available()) { a=(BT.read()); if (a=='0') //start command to stop motors { rtmotor.write(90); ltmotor.write(90); digitalWrite(13, LOW); } if (a=='1') //forward command { rtmotor.write(180); ltmotor.write(180); digitalWrite(13, HIGH); } if (a=='2') //backward command { rtmotor.write(0); ltmotor.write(0); } if (a=='3') //right turn command { rtmotor.write(0); //rt motor goes backward ltmotor.write(180); //lt motor goes forward } if (a=='4') //left turn command { rtmotor.write(180); //rt motor forward ltmotor.write(0); //lt motor goes backward } } }
Schematics
Author
Jeremy S. Cook
- 7 projects
- 9 followers
Published onMay 30, 2017
Members who respect this project
you might like | https://create.arduino.cc/projecthub/JeremySCook/clearwalker-bluetooth-control-a92057 | CC-MAIN-2018-43 | refinedweb | 436 | 56.66 |
NetworkX is a graph analysis library for Python. It has become the standard library for anything graph-related in Python.
Below is an overview of the most important API methods. The official documentation is extensive but it remains often confusing to make things happen. Some simple questions (adding arrows, attaching data…) are usually answered in StackOverflow, so the guide below collects these simple but important questions.
General remarks
The library is flexible but these are my golden rules:
- do not use objects to define nodes, rather use integers and set data on the node. Layout has issues with objects.
- the API changed a lot over the versions, make sure when you find an answer somewhere that it matches your version. Often methods and answers do not apply because they relate to an older version.
import networkx as nx import matplotlib.pyplot as plt from faker import Faker faker = Faker() %matplotlib inline
Creating graphs
There are various constructors to create graphs, among others:
# default G = nx.Graph() # an empty graph EG = nx.empty_graph(100) # a directed graph DG = nx.DiGraph() # a multi-directed graph MDG = nx.MultiDiGraph() # a complete graph CG = nx.complete_graph(10) # a path graph PG = nx.path_graph(5) # a complete bipartite graph CBG = nx.complete_bipartite_graph(5,3) # a grid graph GG = nx.grid_graph([2, 3, 5, 2])
Graph generators
Graph generators produce random graphs with particular properties which are of interest in the context of statistics of graphs. The best-known phenomenon is six degrees of separation which you can find on the internet, our brains, our social network and whatnot.
Erdos-Renyi
The Erdos-Renyi model is related to percolations and phase transitions but is in general the most generic random graph model.
The first parameter is the amount of nodes and the second a probability of being connected to another one.
er = nx.erdos_renyi_graph(50, 0.15) nx.draw(er, edge_color='silver') /Users/swa/conda/lib/python3.7/site-packages/networkx/drawing/nx_pylab.py:611: MatplotlibDeprecationWarning: isinstance(..., numbers.Number) if cb.is_numlike(alpha):
Watts-Strogratz
The Watts-Strogratz model produces small-world properties.
The first parameter is the amound of node then follows the default degree and thereafter the probability of rewiring and edge. So, the rewiring probability is like the mutation of an otherwise fixed-degree graph.
ws = nx.watts_strogatz_graph(30, 2, 0.32) nx.draw(ws)
Barabasi-Albert
The Barabasi-Albert model reproduces random scale-free graphs which are akin to citation networks, the internet and pretty much everywhere in nature.
ba = nx.barabasi_albert_graph(50, 5) nx.draw(ba)
You can easily extract the exponential distribution of degrees:
g = nx.barabasi_albert_graph(2500, 5) degrees = list(nx.degree(g)) l = [d[1] for d in degrees] plt.hist(l) plt.show()
Drawing graphs
The
draw method without additional will present the graph with spring-layout algorithm.
nx.draw(PG)
There are of course tons of settings and features and a good result is really dependent on your graph and what you’re looking for. If we take the bipartite graph for example it would be nice to see the two sets of nodes in different colors:
from networkx.algorithms import bipartite X, Y = bipartite.sets(CBG) cols = ["red" if i in X else "blue" for i in CBG.nodes() ] nx.draw(CBG, with_labels=True, node_color= cols)
The grid graph on the other hand is better drawn with the Kamada-Kawai layout in order to see the grid sturcture:
nx.draw_kamada_kawai(GG)
Nodes
If you start from scratch the easiest way to define a graph is via the
add_edges_from method as shown here:
G = nx.Graph() G.add_node("time") G.add_edges_from([ ("time","space"), ("gravitation","curvature"), ("gravitation","space"), ("time","curvature"), ]) labels = {} pos = nx.layout.kamada_kawai_layout(G) nx.draw(G, pos=pos, with_labels= True) nx.draw_networkx_edge_labels(G,pos, { ("time","space"): "interacts with", ("gravitation","curvature"): "is" }, label_pos=0.4 ) /Users/swa/conda/lib/python3.7/site-packages/networkx/drawing/nx_pylab.py:611: MatplotlibDeprecationWarning: isinstance(..., numbers.Number) if cb.is_numlike(alpha): {('time', 'space'): Text(0.39999997946599436, 0.6000000143526016, 'interacts with'), ('gravitation', 'curvature'): Text(-0.3999999793235416, -0.6000000147443909, 'is')}
The nodes can however be arbitrary objects:
class Person: def __init__(self, name): self.name = name @staticmethod def random(): return Person(faker.name()) g = nx.Graph() a = Person.random() b = Person.random() c = Person.random() g.add_edges_from([(a,b), (b,c), (c,a)]) # to show the names you need to pass the labels nx.draw(g, labels = {n:n.name for n in g.nodes()}, with_labels=True) /Users/swa/conda/lib/python3.7/site-packages/networkx/drawing/nx_pylab.py:611: MatplotlibDeprecationWarning: isinstance(..., numbers.Number) if cb.is_numlike(alpha):
As mentioned earlier, it’s better to use numbers for the nodes and set the data via the
set_node_attributes methods as shown below.
Edges
Arrows can only be shown if the graph is directed. NetworkX is essentially a graph analysis library and much less a graph visualization toolbox.
G=nx.DiGraph() G.add_edge(1,2) pos = nx.circular_layout(G) nx.draw(G, pos, with_labels = True , arrowsize=25) plt.show()
Data can be assigned to an edge on creation
G=nx.DiGraph() a = Person.random() b = Person.random() G.add_node(0, data = a) G.add_node(1, data = b) G.add_edge(0, 1, label="knows") labelDic = {n:G.nodes[n]["data"].name for n in G.nodes()} edgeDic = {e:G.get_edge_data(*e)["label"] for e in G.edges} kpos = nx.layout.kamada_kawai_layout(G) nx.draw(G,kpos, labels = labelDic, with_labels=True, arrowsize=25) nx.draw_networkx_edge_labels(G, kpos, edge_labels= edgeDic, label_pos=0.4) {(0, 1): Text(-0.19999999999999996, -8.74227765734758e-09, 'knows')}
Analysis
There many analysis oriented methods in NetworkX, below are just a few hints to get you started.
Let’s assemble a little network to demonstrate the methods.
gr = nx.DiGraph() gr.add_node(1, data = {'label': 'Space' }) gr.add_node(2, data = {'label': 'Time' }) gr.add_node(3, data = {'label': 'Gravitation' }) gr.add_node(4, data = {'label': 'Geometry' }) gr.add_node(5, data = {'label': 'SU(2)' }) gr.add_node(6, data = {'label': 'Spin' }) gr.add_node(7, data = {'label': 'GL(n)' }) edge_array = [(1,2), (2,3), (3,1), (3,4), (2,5), (5,6), (1,7)] gr.add_edges_from(edge_array) import random for e in edge_array: # nx.set_edge_attributes(gr, {e: {'data':{'weight': round(random.random(),2)}}}) gr.add_edge(*e, weight=round(random.random(),2)) labelDic = {n:gr.nodes[n]["data"]["label"] for n in gr.nodes()} edgeDic = {e:gr.edges[e]["weight"] for e in G.edges} kpos = nx.layout.kamada_kawai_layout(G) nx.draw(G,kpos, labels = labelDic, with_labels=True, arrowsize=25) o=nx.draw_networkx_edge_labels(G, kpos, edge_labels= edgeDic, label_pos=0.4)
Getting the adjacency matrix gives a sparse matrix. You need to use the
todense method to see the dense matrix. There is also a
to_numpy_matrix method which makes it easy to integrate with numpy mechanics.
nx.adjacency_matrix(gr).todense() matrix([[0, 1, 0, 0, 0, 0, 1], [0, 0, 1, 0, 1, 0, 0], [1, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]], dtype=int64)
The spectrum of this adjacency matrix can be directly obtained via the
adjacency_spectrum method:
nx.adjacency_spectrum(gr) array([-0.5+0.8660254j, -0.5-0.8660254j, 1. +0.j , 0. +0.j , 0. +0.j , 0. +0.j , 0. +0.j ])
The Laplacian matrix (see definition here) is only defined for undirected graphs but is just a method away:
nx.laplacian_matrix(gr.to_undirected()).todense() matrix([[ 3, -1, -1, 0, 0, 0, -1], [-1, 3, -1, 0, -1, 0, 0], [-1, -1, 3, -1, 0, 0, 0], [ 0, 0, -1, 1, 0, 0, 0], [ 0, -1, 0, 0, 2, -1, 0], [ 0, 0, 0, 0, -1, 1, 0], [-1, 0, 0, 0, 0, 0, 1]], dtype=int64)
If you need to use the edge data in the adjacency matrix this goes via the
attr_matrix:
nx.attr_matrix(gr, edge_attr="weight") (matrix([[0. , 0.75, 0. , 0. , 0. , 0. , 0.96], [0. , 0. , 0.76, 0. , 0.53, 0. , 0. ], [0.35, 0. , 0. , 0.13, 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0.06, 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. ]]), [1, 2, 3, 4, 5, 6, 7])
Simple things like degrees are simple to access:
list(gr.degree) [(1, 3), (2, 3), (3, 3), (4, 1), (5, 2), (6, 1), (7, 1)]
The shortest path between two vertices is just as simple but please note that there are dozens of variations in the library:
nx.shortest_path(gr, 1, 6, weight="weight") [1, 2, 5, 6]
Things like the radius of a graph is defined for undirected graphs:
nx.radius(gr.to_undirected()) 2 nx.find_cores(gr) {1: 2, 2: 2, 3: 2, 4: 1, 5: 1, 6: 1, 7: 1}
Centrality is also a whole world on its own. If you wish to visualize the betweenness centrality you can use something like:
cent = nx.centrality.betweenness_centrality(gr) nx.draw(gr, node_size=[v * 1500 for v in cent.values()], edge_color='silver')
Getting the connected components of a graph.
nx.is_connected(gr.to_undirected()) comps = nx.components.connected_components(gr.to_undirected()) for c in comps: print(c) {1, 2, 3, 4, 5, 6, 7}
Clique
A clique is a complete subgraph of a particular size. Large, dense subgraphs are useful for example in the analysis of protein-protein interaction graphs, specifically in the prediction of protein complexes.
def show_clique(graph, k = 4): ''' Draws the first clique of the specified size. ''' cliques = list(nx.algorithms.find_cliques(graph)) kclique = [clq for clq in cliques if len(clq) == k] if len(kclique)>0: print(kclique[0]) cols = ["red" if i in kclique[0] else "white" for i in graph.nodes() ] nx.draw(graph, with_labels=True, node_color= cols, edge_color="silver") return nx.subgraph(graph, kclique[0]) else: print("No clique of size %s."%k) return nx.Graph()
Taking the Barabasi graph above and checking for isomorphism with the complete graph of the same size we can check that the found result is indeed a clique of the requested size.
subg = show_clique(ba,5) nx.is_isomorphic(subg, nx.complete_graph(5)) [5, 1, 7, 9, 6] True
red = nx.random_lobster(100, 0.9, 0.9) nx.draw(ba)
petersen = nx.petersen_graph() nx.draw(petersen)
G=nx.karate_club_graph() cent = nx.centrality.betweenness_centrality(G) nx.draw(G, node_size=[v * 1500 for v in cent.values()], edge_color='silver')
Graph visualization
As described above, if you want pretty images you should use packages outside NetworkX. The dot and GraphML formats are pretty standard and saving a graph to a particular format is really easy.
For example, here we save the karate-club to GraphML and used yEd to layout things together with a centrality resizing of the nodes.
G=nx.karate_club_graph() nx.write_graphml(G, "./karate.graphml")
For Gephi you can use the GML format:
nx.write_gml(G, "./karate.gml")
Get and set data
In the context of machine learning and real-world data graphs it’s important that nodes and edges carry data. The way it works in NetworkX can be a bit tricky, so let’s make it clear here how it functions.
Node get/set goes like this:
G = nx.Graph() G.add_node(12, payload = {'id': 44, 'name': 'Swa' }) print(G.nodes[12]['payload']) print(G.nodes[12]['payload']['name']) {'id': 44, 'name': 'Swa'} Swa
One can also set the data after the node is added:
G = nx.Graph() G.add_node(12) nx.set_node_attributes(G, {12:{'payload':{'id': 44, 'name': 'Swa' }}}) print(G.nodes[12]['payload']) print(G.nodes[12]['payload']['name']) {'id': 44, 'name': 'Swa'} Swa
Edge get/set is like so:
G = nx.Graph() G.add_edge(12,15, payload={'label': 'stuff'}) print(G.get_edge_data(12,15)) print(G.get_edge_data(12,15)['payload']['label']) {'payload': {'label': 'stuff'}} stuff
One can also set the data after the edge is added:
G = nx.Graph() G.add_edge(12,15) nx.set_edge_attributes(G, {(12,15): {'payload':{'label': 'stuff'}}}) print(G.get_edge_data(12,15)) print(G.get_edge_data(12,15)['payload']['label']) {'payload': {'label': 'stuff'}} stuff | https://graphsandnetworks.com/networkx-the-essential-api/ | CC-MAIN-2020-10 | refinedweb | 2,038 | 51.65 |
Author: Marijn HaverbekePublisher: No Starch Press, 2011Pages: 224ISBN: 978-1593272821Aimed at: Complete beginnerRating: 3Pros: An interesting perspective on JavaScriptCons: Lots of changes in level and topic Reviewed by:Ian Elliot
Is this the "modern introduction to programming" that its subtitle claims it to be?
There certainly is a need for books that teach Eloquent JavaScript to judge by the way the language is used in most applications. This book starts off with a very elementary chapter aimed at the complete beginner. It explains what programs are, even down to what machine code used to look like and how languages evolved. All well written but strictly speaking unnecessary.
Chapter 1 proper begins with a look again at the very basic things that a complete beginner would need to know. What's a value, what's a variable and flow of control. The chapter starts very slowly but the level starts to ramp up. By the time you reach program structure the author is starting to reveal the book he really would have liked to write, i.e. an advanced one.He starts to introduce some doubtful practices such as if statements without brackets - recommended even for single line if statement and breaking out of loops. By the end of the chapter we are dealing with automatic type conversion and Boolean operators.
Chapter 2 is about functions and it still attempts to start slow and be kind to the beginner, but the pace, level and language is getting tougher all the time. The introduction of "Nested Scope" on the third 37 will.
From here on each chapter starts with an attempt to appeal to the beginner but rapidly spirals off into esoteric and difficult subjects. Chapter 3 is about data structures - objects and arrays. This starts off as a complete beginner's intro to objects but by the third page it is diving into mutability. The rest of the chapter goes over date representation and some built in objects.
Chapter 4 is on error handling - how to use the try-catch. This is probably a bit early for the beginner to be worried about such niceties.
Chapter 5 is where the book finally shows its true colors with a look at the first of three of the main design philosophies - functional programming. In this chapter we learn about higher order functions and invent map and reduce functions - not really beginner's stuff. The rest of the chapter is an odd (and completely misplaced) fairy tale about HTML and we have gone back to appealing ot the complete complete beginner. Another look at functional programming completes the chapter.
Next we have a chapter on object-oriented programming.- a classical introduction to the way JavaScript handles objects including the use of prototype and the subtle ways "this" works. The chapter ends with a large example.
Chapter 7 discusses modularity, which is odd to find coming after object oriented programming. This explains the standard tricks for implementing namespaces and interfaces.
The remainder of the book turns to practical matters Chapter 8 explains regular expressions; Chapter 9 applies JavaScript to web programming; Chapter 10 is on using the DOM; Chapter 11 is about browser events; and Chapter 12 is about making HTTP requests, i.e. Ajax.
This is a difficult book to come to a conclusion about. It attempts to appeal to the beginner but treats material that really is better suited to the more advanced programmer. The problem is that the beginner material does detract from the presentation of JavaScript as an advanced modern language. It would have been better if the author had simply written a more advanced JavaScript book. Even here, though, the depth and organisation isn't sufficient. JavaScript is a dynamic functional language that takes it own particular approach to objects. One of the big problems with this is that we still haven't really worked out how to teach it as a modern language and this book simply represents the standard approach.
So it's not a bad book but it isn't the "modern introduction to programming" that its subtitle claims it to be. If you are a beginner then I would suggest that you start with something that doesn't tackle such difficult subjects and come back to this book after you have learned how to write some JavaScript programs. If you are an intermediate JavaScript programmer then you will get a lot from reading this book if you can put up with the beginners material at the start of each chapter.? | http://www.i-programmer.info/bookreviews/29-javascript/1964-eloquent-javascript.html | CC-MAIN-2015-11 | refinedweb | 754 | 61.16 |
Knowing:
- A single property handler gets created per file. Thus…
- A property handler must be the authority on the layout of the file stream it is handed.
- It is not possible to “augment” the set of properties on a file.
- The property handler must glean its properties from the file stream itself. Thus…
- The property system, in general, cannot store properties on every file. Not all files allow storing properties inside them.
- Not all properties can get written to all file types. Bummer.
- The caller is not guaranteed to ask the property handler for properties. This means…
- To be considerate, the property handler should intialize quickly, delaying heavy work until it is actually needed.
- The property handler is dealing with files. This means…
- For large files, if you have control over the file layout, consider making it stream-friendly.
- Reserve extra space in the property section.
- Clump things together so that the handler doesn’t have to read the whole disk to piece together data.
- Property handlers are an extensibility point of the file system namespace. Thus…
- Other namespaces may have different extensibility mechanisms (or none at all).
- Other namespaces may be able to delegate to the file system namespace when it comes to properties. (The search results do this)
- Other namespaces may choose to reuse the same extensibility mechanism. (ZIP folders do this).
- Property handlers just provide values. Thus…
- Although they provide the data, the windows shell controls the way it gets displayed.
- It is not directly possible to customize the way the data is visualized(1)
- When I say “property handlers” make the shell do this or that, I really mean that the shell will do “this or that” in the presense of a property handler that provides the right set of data..
-Ben Karas
(1) Property descriptions contain hints about how to display a value (e.g. using a stars or a text box). Property descriptions are system-wide and therefore a handler only can set the hints for properties it introduced to the system.
So… if you wanted to have properties stored in an alternate data stream or in a seperate file, you wouldn’t be able to write a property handler for it, right?
Hiya Tim!
No, it is not possible using the recommended APIs. There are workarounds you could (ab)use to accomplish this, but they are intended for legacy applications and not for using secondary storages. I hope to talk to this exact point at a later date.
Viewed as a data flow component , a property handler has a single file stream input and outputs a one
Shell extensions operating on streams are supposed to enable them to work on things that are non-files, such as files inside a Zip. Unfortunately, Vista’s built-in thumbnail handlers (for BMP, JPEG, etc) don’t seem to work on files inside Zips. Or maybe it’s just me… 🙂 | https://blogs.msdn.microsoft.com/benkaras/2007/01/28/understanding-the-role-of-property-handlers/ | CC-MAIN-2017-13 | refinedweb | 484 | 64.51 |
Terms defined: UTF-8, call stack, character encoding, class, constructor, event loop, exception, fluent interface, method, method chaining, non-blocking execution, promise, promisification, protocol
Callbacks work,
but they are hard to read and debug,
which means they only "work" in a limited sense.
JavaScript's developers added promises to the language in 2015
to make callbacks easier to write and understand,
and more recently they added the keywords
async and
await as well
to make asynchronous programming easier still.
To show how these work,
we will create a class of our own called
Pledge
that provides the same core features as promises.
Our explanation was inspired by Trey Huffine's tutorial,
and we encourage you to read that as well.
How can we manage asynchronous execution?
JavaScript is built around an event loop. Every task is represented by an entry in a queue; the event loop repeatedly takes a task from the front of the queue, runs it, and adds any new tasks that it creates to the back of the queue to run later. Only one task runs at a time; each has its own call stack, but objects can be shared between tasks ().
Most tasks execute all the code available in the order it is written.
For example,
this one-line program uses
Array.forEach
to print each element of an array in turn:
[1000, 1500, 500].forEach(t => console.log(t))
1000 1500 500
However,
a handful of special built-in functions make Node switch tasks
or add new tasks to the run queue.
For example,
setTimeout tells Node to run a callback function
after a certain number of milliseconds have passed.
Its first argument is a callback function that takes no arguments,
and its second is the delay.
When
setTimeout is called,
Node sets the callback aside for the requested length of time,
then adds it to the run queue.
(This means the task runs at least the specified number of milliseconds later).
Why zero arguments?
setTimeout's requirement that callback functions take no arguments
is another example of a protocol.
One way to think about it is that protocols allow old code to use new code:
whoever wrote
setTimeout couldn't know what specific tasks we want to delay,
so they specified a way to wrap up any task at all.
As the listing below shows, the original task can generate many new tasks before it completes, and those tasks can run in a different order than the order in which they were created ().
setTimeoutto delay operations.
If we give
setTimeout a delay of zero milliseconds,
the new task can be run right away,
but any other tasks that are waiting have a chance to run as well:
[1000, 1500, 500].forEach(t => { console.log(`about to setTimeout for ${t}`) setTimeout(() => console.log(`inside timer handler for ${t}`), 0) })
about to setTimeout for 1000 about to setTimeout for 1500 about to setTimeout for 500 inside timer handler for 1000 inside timer handler for 1500 inside timer handler for 500
We can use this trick to build a generic non-blocking function that takes a callback defining a task and switches tasks if any others are available:
const nonBlocking = (callback) => { setTimeout(callback, 0) } [1000, 1500, 500].forEach(t => { console.log(`about to do nonBlocking for ${t}`) nonBlocking(() => console.log(`inside timer handler for ${t}`)) })
about to do nonBlocking for 1000 about to do nonBlocking for 1500 about to do nonBlocking for 500 inside timer handler for 1000 inside timer handler for 1500 inside timer handler for 500
Node's built-in function
setImmediate
does exactly what our
nonBlocking function does:
Node also has
process.nextTick,
which doesn't do quite the same thing—we'll explore the differences in the exercises.
[1000, 1500, 500].forEach(t => { console.log(`about to do setImmediate for ${t}`) setImmediate(() => console.log(`inside immediate handler for ${t}`)) })
about to do setImmediate for 1000 about to do setImmediate for 1500 about to do setImmediate for 500 inside immediate handler for 1000 inside immediate handler for 1500 inside immediate handler for 500
How do promises work?
Before we start building our own promises, let's look at how we want them to work:
import Pledge from './pledge.js' new Pledge((resolve, reject) => { console.log('top of a single then clause') setTimeout(() => { console.log('about to call resolve callback') resolve('this is the result') }, 0) }).then((value) => { console.log(`in 'then' with "${value}"`) return 'first then value' })
top of a single then clause about to call resolve callback in 'then' with "this is the result"
This short program creates a new
Pledge
with a callback that takes two other callbacks as arguments:
resolve (which will run when everything worked)
and
reject (which will run when something went wrong).
The top-level callback does the first part of what we want to do,
i.e.,
whatever we want to run before we expect a delay;
for demonstration purposes, we will use
setTimeout with zero delay to switch tasks.
Once this task resumes,
we call the
resolve callback to trigger whatever is supposed to happen after the delay.
Now look at the line with
then.
This is a method of the
Pledge object we just created,
and its job is to do whatever we want to do after the delay.
The argument to
then is yet another callback function;
it will get the value passed to
resolve,
which is how the first part of the action communicates with the second
().
In order to make this work,
Pledge's constructor must take a single function called
action.
This function must take take two callbacks as arguments:
what to do if the action completes successfully
and what to do if it doesn't (i.e., how to handle errors).
Pledge will provide these callbacks to the action at the right times.
Pledge also needs two methods:
then to enable more actions
and
catch to handle errors.
To simplify things just a little bit,
we will allow users to chain as many
thens as they want,
but only allow one
catch.
Fluent interfaces
A fluent interface
is a style of object-oriented programming
in which the methods of an object return
this
so that method calls can be chained together.
For example,
if our class is:
class Fluent { constructor () {...} first (top) { ...do something with top... return this } second (left, right) { ...do something with left and right... } }
then we can write:
const f = new Fluent() f.first('hello').second('and', 'goodbye')
or even
(new Fluent()).first('hello').second('and', 'goodbye')
Array's (mostly) fluent interface allows us to write expressions like
Array.filter(...).map(...).map(...),
which is usually more readable than assigning intermediate results to temporary variables.
If the original action given to our
Pledge completes successfully,
the
Pledge gives us a value by calling the
resolve callback.
We pass this value to the first
then,
pass the result of that
then to the second one,
and so on.
If any of them fail and throw an exception,
we pass that exception to the error handler.
Putting it all together,
the whole class looks like this:
class Pledge { constructor (action) { this.actionCallbacks = [] this.errorCallback = () => {} action(this.onResolve.bind(this), this.onReject.bind(this)) } then (thenHandler) { this.actionCallbacks.push(thenHandler) return this } catch (errorHandler) { this.errorCallback = errorHandler return this } onResolve (value) { let storedValue = value try { this.actionCallbacks.forEach((action) => { storedValue = action(storedValue) }) } catch (err) { this.actionCallbacks = [] this.onReject(err) } } onReject (err) { this.errorCallback(err) } } export default Pledge
Binding
this
Pledge's constructor makes two calls to a special function called
bind.
When we create an object
obj and call a method
meth,
JavaScript sets the special variable
this to
obj inside
meth.
If we use a method as a callback,
though,
this isn't automatically set to the correct object.
To convert the method to a plain old function with the right
this,
we have to use
bind.
The documentation has more details and examples.
Let's create a
Pledge and return a value:
import Pledge from './pledge.js' new Pledge((resolve, reject) => { console.log('top of a single then clause') }).then((value) => { console.log(`then with "${value}"`) return 'first then value' })
top of a single then clause
Why didn't this work?
We can't use
returnwith pledges because the call stack of the task that created the pledge is gone by the time the pledge executes. Instead, we must call
resolveor
reject.
We haven't done anything that defers execution, i.e., there is no call to
setTimeout,
setImmediate, or anything else that would switch tasks. Our original motivating example got this right.
This example shows how we can chain actions together:
import Pledge from './pledge.js' new Pledge( first then with "initial result" second then with "first value" after resolve callback
Notice that inside each
then we do use
return
because these clauses all run in a single task.
As we will see in the next section,
the full implementation of
Promise allows us to run both normal code
and delayed tasks inside
then handlers.
Finally,
in this example we explicitly signal a problem by calling
reject
to make sure our error handling does what it's supposed to:
import Pledge from './pledge.js' new Pledge((resolve, reject) => { console.log('top of action callback with deliberate error') setTimeout(() => { console.log('about to reject on purpose') reject('error on purpose') }, 0) }).then((value) => { console.log(`should not be here with "${value}"`) }).catch((err) => { console.log(`in error handler with "${err}"`) })
top of action callback with deliberate error about to reject on purpose in error handler with "error on purpose"
How are real promises different?
Let's rewrite our chained pledge with built-in promises:
new Promise( after resolve callback first then with "initial result" second then with "first value"
It looks almost the same,
but if we read the output carefully
we can see that the callbacks run after the main program finishes.
This is a signal that Node is delaying the execution of the code in the
then handler.
A very common pattern is to return another promise from inside
then
so that the next
then is called on the returned promise,
not on the original promise
().
This is another way to implement a fluent interface:
if a method of one object returns a second object,
we can call a method of the second object immediately.
const delay = (message) => { return new Promise((resolve, reject) => { console.log(`constructing promise: ${message}`) setTimeout(() => { resolve(`resolving: ${message}`) }, 1) }) } console.log('before') delay('outer delay') .then((value) => { console.log(`first then: ${value}`) return delay('inner delay') }).then((value) => { console.log(`second then: ${value}`) }) console.log('after')
before constructing promise: outer delay after first then: resolving: outer delay constructing promise: inner delay second then: resolving: inner delay
We therefore have three rules for chaining promises:
If our code can run synchronously, just put it in
then.
If we want to use our own asynchronous function, it must create and return a promise.
Finally, if we want to use a library function that relies on callbacks, we have to convert it to use promises. Doing this is called promisification (because programmers will rarely pass up an opportunity add a bit of jargon to the world), and most functions in the Node have already been promisified.
How can we build tools with promises?
Promises may seem more complex than callbacks right now,
but that's because we're looking at how they work rather than at how to use them.
To explore the latter subject,
let's use promises to build a program to count the number of lines in a set of files.
A few moments of search on NPM turns up a promisified version of
fs-extra
called
fs-extra-promise,
so we will rely on it for file operations.
Our first step is to count the lines in a single file:
import fs from 'fs-extra-promise' const filename = process.argv[2] fs.readFileAsync(filename, { encoding: 'utf-8' }) .then(data => { const length = data.split('\n').length - 1 console.log(`${filename}: ${length}`) }) .catch(err => { console.error(err.message) })
node count-lines-single-file.js count-lines-single-file.js
count-lines-single-file.js: 12
Character encoding
A character encoding specifies how characters are stored as bytes.
The most widely used is UTF-8,
which stores characters common in Western European languages in a single byte
and uses multi-byte sequences for other symbols.
If we don't specify a character encoding,
fs.readFileAsync gives us an array of bytes rather than a string of characters.
We can tell we've made this mistake when we try to call a method of
String
and Node tells us we can't.
The next step is to count the lines in multiple files.
We can use
glob-promise to delay handling the output of
glob,
but we need some way to create a separate task to count the lines in each file
and to wait until those line counts are available before exiting our program.
The tool we want is
Promise.all,
which waits until all of the promises in an array have completed.
To make our program a little more readable,
we will put the creation of the promise for each file in a separate function:
import glob from 'glob-promise' import fs from 'fs-extra-promise' const main = (srcDir) => { glob(`${srcDir}/**/*.*`) .then(files => Promise.all(files.map(f => lineCount(f)))) .then(counts => counts.forEach(c => console.log(c))) .catch(err => console.log(err.message)) } const lineCount = (filename) => { return new Promise((resolve, reject) => { fs.readFileAsync(filename, { encoding: 'utf-8' }) .then(data => resolve(data.split('\n').length - 1)) .catch(err => reject(err)) }) } const srcDir = process.argv[2] main(srcDir)
node count-lines-globbed-files.js .
10 1 12 4 1 4 6 4 6 20 ... 10 1 3 2 12 1 2 14 5 1
However,
we want to display the names of the files whose lines we're counting along with the counts.
To do this our
then must return two values.
We could put them in an array,
but it's better practice to construct a temporary object with named fields
().
This approach allows us to add or rearrange fields without breaking code
and also serves as a bit of documentation.
With this change
our line-counting program becomes:
import glob from 'glob-promise' import fs from 'fs-extra-promise' const main = (srcDir) => { glob(`${srcDir}/**/*.*`) .then(files => Promise.all(files.map(f => lineCount(f)))) .then(counts => counts.forEach( c => console.log(`${c.lines}: ${c.name}`))) .catch(err => console.log(err.message)) } const lineCount = (filename) => { return new Promise((resolve, reject) => { fs.readFileAsync(filename, { encoding: 'utf-8' }) .then(data => resolve({ name: filename, lines: data.split('\n').length - 1 })) .catch(err => reject(err)) }) } const srcDir = process.argv[2] main(srcDir)
As in ,
this works until we run into a directory whose name name matches
*.*,
which we do when counting the lines in the contents of
node_modules.
The solution once again is to use
stat to check if something is a file or not
before trying to read it.
And since
stat returns an object that doesn't include the file's name,
we create another temporary object to pass information down the chain of
thens.
import glob from 'glob-promise' import fs from 'fs-extra-promise' const main = (srcDir) => { glob(`${srcDir}/**/*.*`) .then(files => Promise.all(files.map(f => statPair(f)))) .then(files => files.filter(pair => pair.stats.isFile())) .then(files => files.map(pair => pair.filename)) .then(files => Promise.all(files.map(f => lineCount(f)))) .then(counts => counts.forEach( c => console.log(`${c.lines}: ${c.name}`))) .catch(err => console.log(err.message)) } const statPair = (filename) => { return new Promise((resolve, reject) => { fs.statAsync(filename) .then(stats => resolve({ filename, stats })) .catch(err => reject(err)) }) } const lineCount = (filename) => { return new Promise((resolve, reject) => { fs.readFileAsync(filename, { encoding: 'utf-8' }) .then(data => resolve({ name: filename, lines: data.split('\n').length - 1 })) .catch(err => reject(err)) }) } const srcDir = process.argv[2] main(srcDir)
node count-lines-with-stat.js .
10: ./assign-immediately.js 1: ./assign-immediately.out 12: ./await-fs.js 4: ./await-fs.out 1: ./await-fs.sh 4: ./callbacks-with-timeouts.js 6: ./callbacks-with-timeouts.out 4: ./callbacks-with-zero-timeouts.js 6: ./callbacks-with-zero-timeouts.out 20: ./count-lines-globbed-files.js ... 10: ./x-match-lines/problem.md 1: ./x-match-lines/solution.md 3: ./x-multiple-catch/example.js 2: ./x-multiple-catch/example.txt 12: ./x-multiple-catch/problem.md 1: ./x-multiple-catch/solution.md 2: ./x-trace-load/config.yml 14: ./x-trace-load/example.js 5: ./x-trace-load/problem.md 1: ./x-trace-load/solution.md
This code is complex, but much simpler than it would be if we were using callbacks.
Lining things up
This code uses the expression
{filename, stats}
to create an object whose keys are
filename and
stats,
and whose values are the values of the corresponding variables.
Doing this makes the code easier to read,
both because it's shorter
but also because it signals that the value associated with the key
filename
is exactly the value of the variable with the same name.
How can we make this more readable?
Promises eliminate the deep nesting associated with callbacks of callbacks,
but they are still hard to follow.
The latest versions of JavaScript provide two new keywords
async and
await
to flatten code further.
async means "this function implicitly returns a promise",
while
await means "wait for a promise to resolve".
This short program uses both keywords to print the first ten characters of a file:
import fs from 'fs-extra-promise' const firstTenCharacters = async (filename) => { const text = await fs.readFileAsync(filename, 'utf-8') console.log(`inside, raw text is ${text.length} characters long`) return text.slice(0, 10) } console.log('about to call') const result = firstTenCharacters(process.argv[2]) console.log(`function result has type ${result.constructor.name}`) result.then(value => console.log(`outside, final result is "${value}"`))
about to call function result has type Promise inside, raw text is 24 characters long outside, final result is "Begin at t"
Translating code
When Node sees
await and
async
it silently converts the code to use promises with
then,
resolve, and
reject;
we will see how this works in .
In order to provide a context for this transformation
we must put
await inside a function that is declared to be
async:
we can't simply write
await fs.statAsync(...) at the top level of our program
outside a function.
This requirement is occasionally annoying,
but since we should be putting our code in functions anyway
it's hard to complain.
To see how much cleaner our code is with
await and
async,
let's rewrite our line counting program to use them.
First,
we modify the two helper functions to look like they're waiting for results and returning them.
They actually wrap their results in promises and return those,
but Node now takes care of that for us:
const statPair = async (filename) => { const stats = await fs.statAsync(filename) return { filename, stats } } const lineCount = async (filename) => { const data = await fs.readFileAsync(filename, 'utf-8') return { filename, lines: data.split('\n').length - 1 } }
we modify
main to wait for things to complete.
We must still use
Promise.all to handle the promises
that are counting lines for individual files,
but the result is less cluttered than our previous version.
const main = async (srcDir) => { const files = await glob(`${srcDir}/**/*.*`) const pairs = await Promise.all( files.map(async filename => await statPair(filename)) ) const filtered = pairs .filter(pair => pair.stats.isFile()) .map(pair => pair.filename) const counts = await Promise.all( filtered.map(async name => await lineCount(name)) ) counts.forEach( ({ filename, lines }) => console.log(`${lines}: ${filename}`) ) } const srcDir = process.argv[2] main(srcDir)
How can we handle errors with asynchronous code?
We created several intermediate variables in the line-counting program to make the steps clearer. Doing this also helps with error handling: to see how, we will build up an example in stages.
First,
if we return a promise that fails without using
await,
then our main function will finish running before the error occurs,
and our
try/
catch doesn't help us
():
async function returnImmediately () { try { return Promise.reject(new Error('deliberate')) } catch (err) { console.log('caught exception') } } returnImmediately()
(node:43890) UnhandledPromiseRejectionWarning: Error: deliberate
One solution to this problem is to be consistent and always return something.
Because the function is declared
async,
the
Error in the code below is automatically wrapped in a promise
so we can use
.then and
.catch to handle it as before:
async function returnImmediately () { try { return Promise.reject(new Error('deliberate')) } catch (err) { return new Error('caught exception') } } const result = returnImmediately() result.catch(err => console.log(`caller caught ${err}`))
caller caught Error: deliberate
If instead we <span i="">
return await,
the function waits until the promise runs before returning.
The promise is turned into an exception because it failed,
and since we're inside the scope of our
try/
catch block,
everything works as we want:
async function returnAwait () { try { return await Promise.reject(new Error('deliberate')) } catch (err) { console.log('caught exception') } } returnAwait()
caught exception
We prefer the second approach, but whichever you choose, please be consistent.
Exercises
Immediate versus next tick
What is the difference between
setImmediate and
process.nextTick?
When would you use each one?
Tracing promise execution
What does this code print and why?
Promise.resolve('hello')
What does this code print and why?
Promise.resolve('hello').then(result => console.log(result))
What does this code print and why?
const p = new Promise((resolve, reject) => resolve('hello')) .then(result => console.log(result))
Hint: try each snippet of code interactively in the Node interpreter and as a command-line script.
Multiple catches
Suppose we create a promise that deliberately fails and then add two error handlers:
const oops = new Promise((resolve, reject) => reject(new Error('failure'))) oops.catch(err => console.log(err.message)) oops.catch(err => console.log(err.message))
When the code is run it produces:
failure failure
- Trace the order of operations: what is created and executed when?
- What happens if we run these same lines interactively? Why do we see something different than what we see when we run this file from the command line?
Then after catch
Suppose we create a promise that deliberately fails
and attach both
then and
catch to it:
new Promise((resolve, reject) => reject(new Error('failure'))) .catch(err => console.log(err)) .then(err => console.log(err))
When the code is run it produces:
Error: failure at /u/stjs/promises/catch-then/example.js:1:41 at new Promise (<anonymous>) at Object.<anonymous> (/u/stjs/promises/catch-then/example.js:1:1) Function.executeUserEntryPoint [as runMain] \ (internal/modules/run_main.js:71:12) at internal/main/run_main_module.js:17:47 undefined
- Trace the order of execution.
- Why is
undefinedprinted at the end?
Head and tail
The Unix
head command shows the first few lines of one or more files,
while the
tail command shows the last few.
Write programs
head.js and
tail.js that do the same things using promises and
async/
await,
so that:
node head.js 5 first.txt second.txt third.txt
prints the first 5 lines of each of the three files and:
node tail.js 5 first.txt second.txt third.txt
prints the last five lines of each file.
Histogram of line counts
Extend
count-lines-with-stat-async.js to create a program
lh.js
that prints two columns of output:
the number of lines in one or more files
and the number of files that are that long.
For example,
if we run:
node lh.js promises/*.*
the output might be:
Select matching lines
Using
async and
await,
write a program called match.js` that finds and prints lines containing a given string.
For example:
node match.js Toronto first.txt second.txt third.txt
would print all of the lines from the three files that contain the word "Toronto".
Find lines in all files
Using
async and
await,
write a program called
in-all.js that finds and prints lines found in all of its input files.
For example:
node in-all.js first.txt second.txt third.txt
will print those lines that occur in all three files.
Find differences between two files
Using
async and
await,
write a program called
file-diff.js
that compares the lines in two files
and shows which ones are only in the first file,
which are only in the second,
and which are in both.
For example,
if
left.txt contains:
some people
and
right.txt contains:
write some code
then:
node file-diff.js left.txt right.txt
would print:
2 code 1 people * some 2 write
where
1,
2, and
* show whether lines are in only the first or second file
or are in both.
Note that the order of the lines in the file doesn't matter.
Hint: you may want to use the
Set class to store lines.
Trace file loading
Suppose we want are loading a YAML configuration file
using the promisified version of the
fs library.
In what order do the print statements in this test program appear and why?
import fs from 'fs-extra-promise' import yaml from 'js-yaml' const test = async () => { const raw = await fs.readFileAsync('config.yml', 'utf-8') console.log('inside test, raw text', raw) const cooked = yaml.safeLoad(raw) console.log('inside test, cooked configuration', cooked) return cooked } const result = test() console.log('outside test, result is', result.constructor.name) result.then(something => console.log('outside test we have', something))
Any and all
Add a method
Pledge.anythat takes an array of pledges and as soon as one of the pledges in the array resolves, returns a single promise that resolves with the value from that pledge.
Add another method
Pledge.allthat takes an array of pledges and returns a single promise that resolves to an array containing the final values of all of those pledges.
This article may be helpful. | https://stjs.tech/async-programming/ | CC-MAIN-2021-39 | refinedweb | 4,337 | 56.45 |
Have you ever wanted to use a split window in you program but didn’t want all the extra garbage of the Doc/View architecture?
In the new project wizard, in order to make the “Split Window” check box available, you have to choose “Document/View architecture support”. This puts a lot of extra unneeded code
in your project if all you wanted was a split main window.
In this article, I am going to walk you through creating a simple SDI with a split main window, a tool bar, and status bar. The window will be split into a left and right pane that are
resource based and can be modified easily in the resource editor just like making a simple dialog based application.
These instructions and project files are for Visual Studio 2005; while they may work in older versions of Visual Studio, you’ll be on your own to adapt them.
Start Visual Studio and create a new project. Select MFC application, give your project a name, and click “OK”.
Click Next.
In order to keep this as simple as possible, we will use minimal options for our project.
Choose “Single document”, uncheck “Document/View architecture”, uncheck “Use Unicode libraries”, and select “Use MFC in static library”. Click Next.
We don’t want any database support, so just click Next.
On this page, we will keep the defaults. This will give us a status bar and a tool bar. Nice things to have in an SDI project. Hit Next.
Uncheck “ActiveX controls” and “Common Control Manifest”; unless you know you are going to be using ActiveX controls and the new XP specific controls in your project, there is no reason
for the extra overhead. Hit Next.
Here is where it would be nice if they let us choose what kind of child view we want, but they don’t, so just hit Finish.
Compile and run the project.
Notice that the icon on the title bar is set to a default icon instead of the one defined in the project's resource file. This is a bug in Visual studio that’s been around for years.
Here’s how to fix it.
In the IDE, select the class view and double click on “InitInstance”.
You should see:
To change the icon on the title bar, first we need to create one, then we need to tell the main application window to use it. Place the following code after
the “UpdateWindow” but before the “return”.
UpdateWindow
return
//Create and load the titlebar icon
HICON hIcon;
hIcon = LoadIcon(IDR_MAINFRAME);
AfxGetMainWnd()->SendMessage(WM_SETICON,TRUE,(LPARAM)hIcon);
Contrary to what you might think, LoadIcon doesn’t actually load the icon, it just creates it. We have to tell the application to use the new icon
with a SendMessage.
LoadIcon
SendMessage
Your “InitInstance” should now look like this:
InitInstance
Now when you compile and run, you should see the resource defined icon on the title bar.
You can now use the resource editor to change the icon to whatever you want.
Now it is time to add the split window I promised. In order to have a split window, we will use a class called CSplitterWnd. In the class view, right click on
CMainFrame and select “Add Variable”. Change the access to “Protected”, the variable type to “CSplitterWnd”, and the variable name
to “m_wndSplitter”, and click Finish.
CSplitterWnd
CMainFrame
Protected
m_wndSplitter
Since the split will be in the client area, the code for it will need to be in an override function called OnCreateClient in CMainFrame.
OnCreateClient
In the class view, select CMainFrame. In the properties area, press the “Overrides” button and scroll down to “OnCreateClient” and add the function
with the combo box to the right.
Add the following code to the OnCreateClient function:
// create splitter window
if (!m_wndSplitter.CreateStatic(this, 1, 2))
return FALSE;
This will create a static split window for us that will have one row and two columns.
But wait, don’t compile yet. We still need to tell the splitter what to display in each pane and we need to get rid of the old view Visual Studio created for us.
For this tutorial, let us use resource based forms. That will make it easy to add controls and use this project as a template for building other applications.
Switch to Resource view, expand the dialog leaf, right click and choose “Insert Dialog”. This will insert a new dialog resource and open it in the resource editor.
Delete the “OK” and “Cancel” buttons. Right click on the dialog and choose Properties.
In Properties, change:
Now add a new dialog in the same manner but make the ID: IDD_FORM_RIGHT.
IDD_FORM_RIGHT
In the resource view, double click IDD_FORM_LEFT. This will open up the left form in the resource editor. Right click on the dialog in the editor and choose “Add Class”.
IDD_FORM_LEFT
Change the base class to “CFormView”, the name to “CFormLeft”, and hit Finish.
CFormView
CFormLeft
Repeat this process on the IDD_FORM_RIGHT resource but make the class name CFormRight.
CFormRight
Now we have views for the splitter to display.
In MainFrm.cpp, add the header lines:
#include "FormLeft.h"
#include "FormRight.h"
Scroll down to the “OnCreateClient” function and add the create view calls to the splitter. Add the following code after the splitter creation, but before the return.
if (!m_wndSplitter.CreateView(0, 0, RUNTIME_CLASS(CFormLeft), CSize(125, 100), pContext) ||
!m_wndSplitter.CreateView(0, 1, RUNTIME_CLASS(CFormRight), CSize(100, 100), pContext))
{
m_wndSplitter.DestroyWindow();
return FALSE;
}
Your OnCreateClient should look like this.
We’re almost there. Now we need to get rid of the View Visual Studio made for us and any uses of the old view.
Scroll up to the OnCmdMsg function and remove:
OnCmdMsg
if (m_wndView.OnCmdMsg(nID, nCode, pExtra, pHandlerInfo))
return TRUE;
Scroll up to the OnSetFocus function and remove:
OnSetFocus
m_wndView.SetFocus();
Scroll up to the OnCreate function and remove:;
}
At the top of MainFrm.h, remove:
#include "ChildView.h"
Scroll down and remove:
CChildView m_wndView;
Switch to Solution Explorer, right click on “ChildView.h”, and choose Remove. On the pop up box, press the Delete button.
Delete “ChildView.cpp” in the same manner.
You can compile and run now, and you will have an SDI with a split window. Try it out, drag the split bar around:
It doesn’t do anything at this point, but you can add controls with the resource editor.
However, the controls in each pane can only affect controls in the same pane. In order to have controls in
the left pane affect controls in the right pane, we will need to reroute some messages.
In the class view, right click on “CformLeft” and choose “Add variable”. Make the access protected, the type “CWnd*”, and the name “m_target”.
Hit Finish.
protected
CWnd*
m_target
In the class view, right click on “CFormLeft” and choose “Add function”. Make the return type void, the name “SetTarget”, and add a parameter
of type “CWnd*” named “m_cwnd”. Hit Finish.
void
SetTarget
m_cwnd
In the SetTarget function, add the code:
m_target = m_cwnd;
Next we will need to override the OnCommand function. Select “CFormLeft” in the class view. Click the
Overrides button and scroll down to OnCommand and use the dropdown box to add the function.
OnCommand
In the OnCommand function, add the following code:
if(m_target)
{
m_target->SendMessage(WM_COMMAND, wParam, lParam);
}
else
{
CFormView::OnCommand(wParam, lParam);
}
return true;
This will send the messages from this form to the target window we set. Now we want to set the target to the form in the right pane.
Add the following code to the OnCreateClient function in CMainFrame after the splitter creates views and before the return:
//set the target for mapped messages
((CFormLeft*)m_wndSplitter.GetPane(0,0))->SetTarget(m_wndSplitter.GetPane(0,1));
Your function should now look like:
Now let's add some controls and use it. Open IDD_FORM_RIGHT in the resource editor and add a static text control.
Change the ID to IDC_STATIC_HELLO, the caption to “Click the button”, and the text align to Center.
ID
IDC_STATIC_HELLO
Right click on the static control and choose Add Variable. Click the control type check box, the type “CStatic”, and the name “m_hello”. Click Finish.
CStatic
m_hello
Open IDD_FORM_LEFT in the resource editor and add a button.
Change the ID to IDC_BUTTON_HELLO and the caption to “Say Hello”.
IDC_BUTTON_HELLO
Right click on the button and choose “Add Event Handler”. Select message type BN_CLICKED and “CFormRight” from the class list, then hit Add and Edit.
BN_CLICKED
In the OnBnClickedButtonHello function, add the following code:
OnBnClickedButtonHello
m_hello.SetWindowText("Hello Window Splitter!");
Your function should look like:
Compile and run your program. Click the Say Hello button and you should see the test change in the right pane. Now try dragging the split bar and you should see the text in the right
pane move with the splitter bar.
I hope you found this article. | http://www.codeproject.com/Articles/15338/SDI-with-split-window?fid=335662&df=90&mpp=10&sort=Position&spc=None&tid=3981807 | CC-MAIN-2015-14 | refinedweb | 1,484 | 73.47 |
What is Ruby and why is it useful? This article will touch on the history and features of the Ruby language, and some of the reasons you might want to have a deeper look at Ruby.
The world is filled with programming languages, but most of them are destined for obscurity. No one can be quite sure what propels a language from the wide, broad bench of "interesting" to the celebrity red carpet of "popular", but when you see it happen it's probably time to have a deeper look at the language and see if it's useful to you. Ruby is one of these breakthrough languages.
Ruby is frequently characterised as a "scripting language", but since many people seem to equate "scripting language" with "toy" this sells Ruby short. Ruby is a fully object-oriented, dynamic programming language. It was released by Yukihiro Matsumoto ("Matz") in 1995, and for many years it was relatively unknown outside of Japan, until two key events triggered an explosion of interest. In 2000 Andy Hunt and Dave Thomas (the Pragmatic Programmers) wrote Programming Ruby, the first English-language reference to Ruby, and in July 2004 "Ruby on Rails", a Web application development framework by David Heinemeier Hansson, was released. Ruby on Rails used reflection and other dynamic attributes of Ruby to make Web application development fast and easy, and quickly became the "killer app" for Ruby. There are certainly other useful Web application development frameworks, but none of them has experienced the explosive growth we've seen from Rails.
So what are the characteristics of Ruby? First, everything in Ruby is an object, including things like integers and floats that are treated as primitive types in other languages. It's open, in the sense that every class can be extended, and most existing behaviour can be overridden. It's interpreted -- there is no compilation step. This means that it's very easy to make changes to your code and quickly see the effects, but it also means that Ruby applications generally run more slowly than their compiled language equivalents. There is currently a lot of work going into alternative Ruby runtimes -- new Ruby interpreters, and approaches to running Ruby in existing environments such as the Java VM and the .NET CLR.
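To make the first two of those points concrete, here is a minimal sketch, runnable as an ordinary .rb file; the double method added to Integer is an invented name for illustration, not something Ruby itself provides.

# Integers are objects, with methods of their own
puts 3.class       # Fixnum on older Rubies, Integer on 2.4 and later
puts -7.abs        # 7

# Classes are open: we can add a method to the built-in Integer class
class Integer
  def double
    self * 2
  end
end

puts 21.double     # 42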
Ruby is dynamically typed. In the Ruby community this is known as duck typing -- "when I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck". For a Ruby object to work in a context (say, as a method argument), it only needs to implement the methods that are actually invoked in that context -- not necessarily the "complete" set of methods that you might associate with some abstract type. This is very flexible, but means that typing errors can only be determined at runtime. Ruby developers commonly use extensive unit tests to catch these conditions, rather than relying on a compiler. Ruby has included a unit testing framework since 2001.
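A small sketch of duck typing follows; the Duck and Robot classes and the make_it_quack method are invented for illustration. The method never asks what class its argument is -- it only requires that the argument respond to quack, and a bad argument fails at runtime rather than at compile time.

class Duck
  def quack
    'Quack!'
  end
end

class Robot
  def quack
    'Beep boop. Quack.'
  end
end

def make_it_quack(thing)
  puts thing.quack     # any object that implements quack will do
end

make_it_quack(Duck.new)
make_it_quack(Robot.new)

begin
  make_it_quack('just a string')   # String has no quack method
rescue NoMethodError => e
  puts "Caught at runtime: #{e.message}"
end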
Ruby is text-based. All the source code is stored in text files, and can be searched and manipulated with all the Unix command line tools that we know and love (or their equivalents in other operating systems). Being text-oriented also means that Ruby source code integrates smoothly into our version control system of choice.Ruby is terse, and also frequently provides multiple ways to achieve the same end. This lets the developer choose the most expressive syntax for their specific context -- it's a humane interface rather then a minimalist interface. The set of reserved words in Ruby is quite small -- many of the actions that are keywords in other languages are implemented in Ruby as methods on some object. This is one of the ways that Ruby provides flexibility, including alternative implementations for the same actions. This doesn't mean Ruby doesn't have its quirks -- it certainly does, and the variety of choices means it can take a while to learn all the Ruby idioms.
Let's dive into some Ruby code so we can see some examples of how this plays out in the real world. All you need to get started is a Ruby interpreter and access to the command line. On OS X Ruby is included by default; on Windows, there is a Ruby One Click installer (see the references); and on Linux you'll probably need to download and install a Ruby package. We can execute Ruby directly from the command line without even creating a file:
> ruby -e "puts('Hello World!')" Hello World!
The -e flag tells Ruby to treat the next parameter as code to be executed. In this case we've used "puts", a method that prints its arguments to STDOUT with newlines. Parentheses are optional in Ruby method invocations, and you'll more commonly see this written as:
> ruby -e "puts 'Hello World!'"
Ruby code is line delimited -- by default the end of a statement is the end of the line -- but you can also use a semicolon as a delimiter:
> ruby -e "puts 'Hello World!';puts 'Hello Ruby!'" Hello World! Hello Ruby!
Of course, there's a limit to what we can achieve executing statements at the command line, so Ruby can also execute the contents of a file. The default extension for a Ruby file is .rb.
# file: hello_world.rb puts 'Hello World!' puts 'Hello Ruby!' > ruby hello_world.rb Hello World! Hello Ruby!
This also illustrates another Ruby convention -- file names, method names and variable names usually use underscores to separate words, rather than the camel case convention of other languages, such as Java. However, Ruby does use camel case for class names, as we'll see shortly. Also, anything following a # character is a comment.
Since I've said that Ruby is purely object-oriented, some of you may be wondering what object is receiving the method "puts". In Ruby the current object can be referenced using the pseudo-variable "self".
> ruby -e "puts self.class" Object
When we're executing statements at the command line they're being sent to an instance of Object created by the Ruby interpreter.
Now let's quickly look at the syntax for creating a class.
# file: say_hello_01.rb class SayHello def say_it puts 'Hello World!' puts 'Hello Ruby!' end end puts SayHello.new.say_it
Here we've defined a new class, a instance method called "say_it", then created a new instance of the class and invoked "say_it" on that instance. This shows part of the flexibility of Ruby -- we can create classes and run scripts that depend on those classes in the same file. Although many Ruby files will contain class definitions that match the name of the file, a Ruby file is free to contain whatever it wants. Once again, this is very flexible but can require a bit of a mental shift for programmers accustomed to, say, Java, where looking for a public class is the same as looking for a file of that name.
Here's the same class, but this time taking some parameters:
# file: say_hello_02.rb class SayHello def say_it(message1, message2) puts message1 puts message2 end end SayHello.new.say_it('Hello World!', 'Hello Ruby!')
Notice that the parameters don't have any type declarations, and the method doesn't have a return type; since Ruby is dynamically typed we don't need to specify the types (and in fact, we can't!) | http://www.builderau.com.au/program/ruby/soa/Kicking-off-with-Ruby/0,339028320,339280966,00.htm | crawl-002 | refinedweb | 1,243 | 60.65 |
Pawk
I was trying to see if I could get pawk working. I installed it with pip install python-awk. However I can't figure out how to run it, probably under Stash.
Awk is a very useful tool and I was hoping I could get it working in Pythonista.
So what happens when you type pawk into the StaSH commandline?
This is what my console looks like:
[~/Documents]$ pip install python-awk
Querying PyPI ...
Downloading package ...
Opening:
Save as: /private/var/mobile/Containers/Data/Application/7864CBEC-94B6-4103-AA16-C07C89C5C69C/tmp//python-awk-0.0.5.tar.gz (5424 bytes)
5424 [100.00%]
Extracting archive file ...
Archive extracted.
Running setup file ...
Package installed: python-awk
Installing dependency: jtutils
Querying PyPI ...
Downloading package ...
Opening:
Save as: /private/var/mobile/Containers/Data/Application/7864CBEC-94B6-4103-AA16-C07C89C5C69C/tmp//jtutils-0.0.6.tar.gz (6706 bytes)
6706 [100.00%]
Extracting archive file ...
Archive extracted.
Running setup file ...
Package installed: jtutils
Dependency available in Pythonista bundle : requests
Dependency available in Pythonista bundle : beautifulsoup4
Installing dependency: argparse
Querying PyPI ...
Downloading package ...
Opening:
Save as: /private/var/mobile/Containers/Data/Application/7864CBEC-94B6-4103-AA16-C07C89C5C69C/tmp//argparse-1.4.0-py2.py3-none-any.whl (23000 bytes)
23000 [100.00%]
Installing wheel: argparse-1.4.0-py2.py3-none-any.whl...
Extracting wheel..
Extraction finished, running handlers...
Running handler 'WHEEL information checker'...
Wheel generated by: bdist_wheel (0.24.0)
Running handler 'dependency handler'...
Running handler 'top_level.txt installer'...
Copying /private/var/mobile/Containers/Data/Application/7864CBEC-94B6-4103-AA16-C07C89C5C69C/tmp/wheel_tmp/argparse-1.4.0-py2.py3-none-any.whl/argparse.py -> /private/var/mobile/Containers/Shared/AppGroup/B86CEE4E-E4A9-4BB4-9CA7-6E13BDA2C2A4/Pythonista3/Documents/site-packages-2/argparse.py
Running handler 'console_scripts installer'...
No entry_points.txt found, skipping.
Cleaning up...
Package installed: argparse
Dependency available in Pythonista bundle : six
[~/Documents]$ pawk
stash: pawk: command not found
[~/Documents]$ cd site-packages-2
[site-packages-2]$ cd pawk
[pawk]$ pawk
import: command not found
from: command not found
if: command not found
fix_broken_pipe(): command not found
pawk.pawk(): command not found
pawk is meant to be run from the command line.
Looks like maybe you need to rm pawk (in the pawk folder). I think that is being in interpreted as shell instead of python. You may need to do pawk.py or ./pawk.py or python pawk.py (I forget how much smarts is in stash)
Try editing the file pawk/pawk and add a "3" to the end of the first line:
- #!/usr/bin/env python --> #!/usr/bin/env python3
For safety's sake do the same thing to the file pawk/pawk.py.
stash does not support the #! stuff, hence why it is saying import command not found. Nor will it be able to change the interpreter version!
I tried running pawk.py in Stash in various ways but nothing produces an output (interestingly, no errors either).
However, I installed pawk in site-packages-2 using Stash while running Python v2.7. When I tried doing the install running v3.6, I got:
[~/Documents]$ pip install python-awk
<class 'ImportError'>: This package should not be accessible on Python 3 Either you are trying to run from the python-future src folder or your installation of python-future is corrupted.
FWIW a mini-version of awk.
import re import sys class MiniPythonAwk(object): def __init__(self, files, FS=None): self.FS = FS self.FILES = files def run(self): # BEGIN code block starts count = 0 # BEGIN code block ends FS = self.FS NR = 0 for FILEINDEX, FILENAME in enumerate(self.FILES): FNR = 0 fp = open(FILENAME) for SS in fp: NR += 1 FNR += 1 if self.FS: S = SS.rstrip().split(self.FS) else: S = SS.split() NF = len(S) # ACTION code block starts if re.search('ff', SS) or (S[1] == '5') : count += 1 print("{}:{}:".format(FILENAME, NR), SS.rstrip()) # ACTION code block ends fp.close() # END code block starts print("Count:", count) # END code block ends if __name__ == '__main__': files = sys.argv[1:] if len(files): MiniPythonAwk(files).run() ''' test_input1.txt 1 2 3 4 5 6 7 8 9 test_input2.txt aa bb cc dd ee ff gg hh ii python mini_python_awk.py test_input1.txt test_input2.txt test_input1.txt:2: 4 5 6 test_input2.txt:5: dd ee ff Count: 2 '''
@ihf is clear that Python 2 is no longer supported.
Maybe open an issue at | https://forum.omz-software.com/topic/5207/pawk | CC-MAIN-2021-04 | refinedweb | 738 | 62.14 |
openstack queens keystone-manage
I am using Ubuntu 18.04 and attempting to install OpenStack using deb packages. This means I am currently using queens. My /etc/keystone/keystone.conf file looks like [DEFAULT] log_dir = /var/log/keystone [database] connection = mysql+pymysql://keystone:Pass1234@controller/keystone [token] provider = fernet
When I execute keystone-manage bootstrap --bootstrap-password ADMIN_PASS
I get something like python throwing a hissy fit where the tail end looks like ConfigParser._parse_file(config_file, namespace) File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 1946, in _parse_file raise ConfigFileParseError(pe.filename, str(pe)) oslo_config.cfg.ConfigFileParseError: Failed to parse /root/keystone.conf: at /root/keystone.conf:1, No ':' or '=' found in assignment: 'Listen 5000'
I am suspicious of of the line that reads /root/keystone.conf
On a seperate issue if I do a curl -v I am going to get a HTTP 500 error. I can only conclude I have something wrong there as well. That is indeed a separate problem.
I can run "mysql -u keystone -p" enter the password use "keystone" and "show tables" which show a big fat zilch.
Python throws a similar hissy fit if I do a keystone-manage fernet_setup
I am betting I am forgetting or have some concept wrong..... I am hoping someone sees it?
Thanks! | https://ask.openstack.org/en/question/115567/openstack-queens-keystone-manage/ | CC-MAIN-2021-17 | refinedweb | 218 | 60.01 |
19970/kubernetes-nginx-ingress-tls-issue
My deployment is something like this:
Existing CA certificate for fake.example.com and an A record that maps fake.example.com to the IP of our load balancer
The load balancer is forwarding traffic to our Kubernetes cluster.
In the cluster, I've deployed the nginx-ingress helm chart, exposing NodePort for https at 30200
I've created a k8s TLS secret named test-secret from the above certificate.
I've deployed an app with service 'test' and have installed the following ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- fake.example.com
secretName: test-secret
rules:
- host: fake.example.com
http:
paths:
- path: /myapp
backend:
serviceName: test
servicePort: 8080
So, if i execute
curl https://{ip for k8s node}:30200/myapp/ping -H 'Host:fake.example.com' -k --verbose
I get the expected response from my app, but I also see
* Server certificate:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* start date: Jan 25 20:52:16 2018 GMT
* expire date: Jan 25 20:52:16 2019 GMT
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
So my question is, is it possible to configure nginx to use the correct certificate in this scenario?
You have to create a secret named test-secret.
➜ charts git:(master) kubectl describe secret --namespace operation mydomain.cn-cert
Name: mydomain.cn-cert
Namespace: operation
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
tls.crt: 3968 bytes
tls.key: 1678 bytes
You’re using nginx ingress controller which does ...READ MORE
Ingress is just collection of rules that forwards ...READ MORE
Adding ingress.kubernetes.io/ssl-redirect: "false" to annotations will disable the SSL redirect:
apiVersion: extensions/v1beta1
kind: ...READ MORE
The nginix ingress controller uses hostPort to ..
You need an ingress as mentioned by ...READ MORE
This is not a routing problem on ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/19970/kubernetes-nginx-ingress-tls-issue?show=19971 | CC-MAIN-2019-43 | refinedweb | 333 | 50.12 |
Svelte or Svelte JS is a frontend framework that can be used in a similar fashion as React and Vue. Svelte does a lot of things differently and prides itself to be fast and simple. In parts that is, because Svelte doesn't use a virtual DOM, like React, and relies a lot on the classic way of building websites and applications with HTML CSS and JS and sprinkling it with some useful additions.
Just like other frameworks, Svelte relies on one or more components, usually stored as
*.svelte files and we usually want an entry file called
main.js and our
initially loaded component,
App.svelte. Let's kickstart an example application, by installing Svelte via npx:
npx degit sveltejs/template svelte-app npm install npm run dev
This will get us started with the following folder structure, with
src being the folder that we'll focus on for now, because holds our entry point and components.
The entry file can be very simple when starting a new project. All we do here is declare a target to append our application and define the initial component that holds the rest of our application. We can also pass different props to our app from here.
import App from './App.svelte'; const app = new App({ target: document.body, props: { name: 'my first app', version: '1.0.0' } }); export default app;
And because we're using
App.svelte as our parent component, let's dive in and look at how components in Svelte are generally structured.
This is where Svelte really differs from React and Vue and simply lets us write HTML with optional
script and
style blocks. That means the simplest component
can be just HTML, but we're free to use JS and CSS. Every CSS we write inside a component is scoped to that specific component, so styling a
<p> tag in one
component will not set the styles of every paragraph.
<script> // ... </script> <main> <h1>Hello World!</h1> </main> <style> /* ... */ </style>
Now to start structuring our application and getting our hands dirty a little more, let's look at importing another component and actually adding some functionality, pulling data from an external API. For this example, we'll fetch a random dad joke and display it on our page. When we press a button, the fetch function will be called again and another dad joke is displayed.
<script> async function fetchData() { const res = await fetch(" { headers: { Accept: "application/json", }, }); const data = await res.json(); return data.joke; } let joke = fetchData(); function handleClick() { joke = fetchData(); } </script> <button on:click={handleClick}>Get a random dad joke </button> <h2>Here's a random dad joke:</h2> {#await joke} <p>...waiting</p> {:then joke} <p>{joke}</p> {:catch error} <p style="color: red">Couldn't fetch a joke...</p> {/await}
In the example above we can see that our JavaScript is not any different than it would be without a framework. In the HTML section, however, we see the first
Svelte blocks, one handler, used to reload a joke and a special logic block, used to
await a response from our async function
fetchData(). Handlers in Svelte look a lot like
they do in Vue.js and the different logic blocks are fairly easy to remember. Here's a quick look at two common blocks: a simple if condition and each loop.
<!-- If condition ---> {#if user.loggedIn} <button on:click={toggle}> Log out </button> {/if} <!-- Each block--> {#each things as thing} <Thing name={thing.name}/> {/each}
Just like we can pass data from parent to child components in Svelte, we can also rely on reactive variables/data binding. The
bind: directive allows
us to automatically set
name to the input field's current value. In the same manner, we can make other form elements reactive, such as the
select element below.
<script> let name = 'world'; let selected; </script> <input bind:value={name}> <h1>Hello {name}!</h1> <select bind:value={selected} on: <!-- ... --> </select>
Like other frameworks, Svelte gives us a few useful lifecycle hooks that allow us to run code at certain stages of a component's lifecycle.
We can make use of the methods
onMount,
onDestroy,
beforeUpdate and
afterUpdate after we imported them in our component:
import { onMount, onDestroy, beforeUpdate, afterUpdate } from 'svelte'
A common use case for the
onMount function is fetching data from external endpoints. Doing that, we can rewrite the example above to fetch a dad joke without using an await block in our HTML. Instead, we start fetching the data right after the component is mounted and only return the final joke.
<script> import { onMount } from 'svelte' let joke = "loading joke..." onMount(async () => { const res = await fetch(" { headers: { Accept: "application/json", }, }); let json = await res.json(); joke = json.joke }) </script> <h2>Here's a random dad joke:</h2> <p>{joke}</p>
Svelte also brings its own storing mechanism, which helps a lot with keeping state in sync across components. Under the hood, a store is a simple object that we
can
subscribe to, to be notified of any changes. In order to store one of the jokes from the example above, we need a
writable store. Writable stores generally have
two more functions,
update and
set that let us manipulate the stored values. In our application, we can achieve this by creating a new file, called
stores.js
<script> // stores.js import { writable } from 'svelte/store'; export const globalJoke = writable("no joke stored yet..."); </script>
We can then import our store or specific values inside it in any other component and we're able to read and mutate the stored values as needed:
<script> // any other component import { globalJoke } from './stores.js'; globalJoke.update("Did you hear about the bread factory burning down? They say the business is toast.") </script>
That's only the basics that should help you get started with Svelte. If you want to dig deeper from here, the official tutorial is a great place to start, as well as the documentation. The Svelte site also offers an interactive playground that you can use to start getting a feel for the library. | https://allround.io/articles/getting-started-with-svelte | CC-MAIN-2022-21 | refinedweb | 1,017 | 63.9 |
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; namespace IksOks { public partial class Form1 : Form { public Form1() { InitializeComponent(); } int n = 0; private void button1_Click(object sender, EventArgs e) { if (n ==0) { button1.Text = "X"; n = 1; } else if (n==1) { button1.Text = "O"; n = 0; } } private void button2_Click(object sender, EventArgs e) { if (n == 0) { button2.Text = "X"; n = 1; } else if (n == 1) { button2.Text = "O"; n = 0; } }
and for all 9 buttons is like this, so now i have a problem. I don't know where can i put to check for combinations, i can't put it outside of button event,but if i put it in button event that's stupid, because it will check only when i click that button the last, and i would have to write a bunch of code. Can anyone help me where can i put to check combinations?
This post has been edited by Crash95: 28 May 2011 - 01:51 AM | http://www.dreamincode.net/forums/topic/233770-tic-tac-toe-game/ | CC-MAIN-2018-05 | refinedweb | 178 | 69.28 |
Leap Year Program in Java
A day and night is 23 hours, 56 minutes and 4.1 seconds. It takes 365 days, 5 hours, 48 minutes and 45 seconds for the Earth to complete one round of the sun.
In easy terms, the year that the remaining zero is divided by 4 will be a leap year, but only that century will be a leap year, which is completely divided by 400.
import java.util.Scanner; public class LeapYear { public static void main(String[] args) { int year; Scanner sc = new Scanner(System.in); System.out.print("Enter a year="); year = sc.nextInt(); if (((year % 400 == 0) || (year % 4 == 0)) && (year % 100 != 0)) { System.out.println("Leap year."); } else { System.out.println("Not a leap year"); } } }
Output:
Enter a year=1600 Not a leap year
b. tech. bca icse java java tutorials learn java mca programs | https://www.efaculty.in/java-programs/leap-year-program-in-java/ | CC-MAIN-2020-45 | refinedweb | 145 | 72.16 |
Hello,
just a followup.
I now tried it with Scala and the sbt-avrohugger plugin as well, and this works perfectly.
sbt-avrohugger converts union { null, Sample } to Option[Sample], where avrogen converts this
to simply “Sample” but allows it to be null on the C# side.
Thanks for your help,
Jens
Von: Rabe, Jens [mailto:[email protected]]
Gesendet: Dienstag, 17. Januar 2017 16:18
An: [email protected]
Betreff: AW: When using the SpecificWriter with .net, I get an exception when a referenced
object in my schema is null
Hello Jeremy,
at least with .net, this works. I can’t test with Scala currently as I am doing work on
another part of the project.
Thanks for pointing this out.
Von: Jeremy Fine [mailto:[email protected]]
Gesendet: Dienstag, 17. Januar 2017 16:12
An: [email protected]<mailto:[email protected]>
Betreff: Re: When using the SpecificWriter with .net, I get an exception when a referenced
object in my schema is null
does avdl this work? i think you need to specify that a field can be null.
@namespace("sample")
protocol SampleAvro {
record Sample {
union { null, string } foo;
union { null, Sample } bar;
}
}
On Mon, Jan 16, 2017 at 1:01 PM, Rabe, Jens <[email protected]<mailto:[email protected]>>
wrote:
Hello again,
it turns out I was wrong in first place. I re-tested it in Scala and get the same error.
Gues I’ll have to redesign the schema then to avoid the nulls.
Von: Rabe, Jens [mailto:[email protected]<mailto:[email protected]>]
Gesendet: Montag, 16. Januar 2017 18:13
An: [email protected]<mailto:[email protected]>
Betreff: When using the SpecificWriter with .net, I get an exception when a referenced object
in my schema is null
Hello Avro community,
I want to use my Avro serialized data with .net and I stumbled upon a strange problem. When
a record in my schema references another record, and I set it to null, when serializing this
in .net this throws an exception.
Here is what I did:
First, I created a simple protocol. This is my avdl file:
@namespace("sample")
protocol SampleAvro {
record Sample {
string foo;
Sample bar;
}
}
Now, I generated C# classes out of it:
java -jar avro-tools-1.8.0.jar idl sample.avdl >sample.json
avrogen -p sample.json .
I then copied the .cs files into a project, and tried to serialize a sample object where the
“bar” reference is null:
using(var ms = new MemoryStream()) {
var sw = new SpecificWriter<Sample>(Sample._SCHEMA);
sw.Write(new Sample { foo = "bar" }, new BinaryEncoder(ms));
}
I get the following exception: Record object is not derived from ISpecificRecord in field
bar
When looking at the source code, this happens in the SpecificWriter.cs in the following method
(I made the relevant lines bold):
protected override void WriteRecord(RecordSchema schema, object value, Encoder encoder)
{
var rec = value as ISpecificRecord;
if (rec == null)
throw new AvroTypeException("Record object is not derived from ISpecificRecord");
foreach (Field field in schema)
{
try
{
Write(field.Schema, rec.Get(field.Pos), encoder);
}
catch (Exception ex)
{
throw new AvroException(ex.Message + " in field " + field.Name);
}
}
}
The “value as ISpecificRecord” returns null here because the reference is null. This is
intended in my schema, but throws an exception here.
Is this a bug or is this intended behavior, and either way, how should I work around this?
I need situations where the “bar” is null, and on the JVM/Scala side this is no problem
and working flawlessly.
Regards,
Jens
--
[Das Bild wurde vom Absender entfernt. Intent Media]<>
Jeremy Fine
Software Engineer
315 Hudson Street, 9th Floor, New York, NY 10013
We've been named a Crain's "Best Place to Work" for three years running!<> | http://mail-archives.apache.org/mod_mbox/avro-user/201701.mbox/%3CC84C4FB966D26947A3768912D36DE02A88E22897@FGDEMUCIMP04EXC.ads.fraunhofer.de%3E | CC-MAIN-2017-51 | refinedweb | 640 | 59.09 |
i'm new to programming and am using djgpp i'm trying to build the common 1st program hello world... i've checked the forums for this problem i'm having and adjusted my code accordingly and am still getting the same error message:
error: 'cout' undeclared (first use this function)
error: (each undeclared identifier is reported only once for each function it appears in.)
error: 'cin' undecalred (first use this function)
heres my code:
Code:#include <stdio.h> int main() { cout<<"HEY, you, I'm alive! Oh, and hello world!\n"; cin.get(); return 0; }
i've replaced iostream with stdio.h because thats what they use on the djgpp website and after looking in my include folder i have no iostream file... makes sense to me... also i've removed the using namespace std; line becuase when i have that in there i get a whole slew of undefined reference to 'std messages... are these just compiler warnings and can/should i be removing the using namespace line like that or am i doing something stupid? hopefully i've included enough information to get help on my problem sorry for the probably simple question and i appreciate any help i can receive.
on another note anyone else see the sabers/flyers game? that hit on umberger? wow good night gracy. | https://cboard.cprogramming.com/cplusplus-programming/78377-cout-undeclared-cin-undeclared.html | CC-MAIN-2017-47 | refinedweb | 222 | 73.47 |
Cloud Monitoring's model for monitoring data consists of three primary concepts:
- Monitored-resource types
- Metric types
- Time series
The Cloud Monitoring metric model describes these concepts in general terms. If these concepts are new to you, read that page first.
This page describes metric types, monitored resources, and time series, along with some related concepts, in more detail. These concepts underlie all Monitoring metrics.
You should understand the information on this page if you want to do any of the following:
Inspect metric data by using Metrics Explorer.
Create alerting policies to notify you when a value goes outside its normal limits, as described in Using alerting policies.
Retrieve raw or aggregated monitoring data using the Monitoring API, as described in Reading metric data.
Create your own metric types, as described in Using custom metrics.
For more detail on these concepts and how they map to the Cloud Monitoring API, see Structure of time series, particularly if you plan to use the Monitoring API or custom metrics.
A word about labels
Monitored-resource types and metric types both support labels, which allow data to be classified during analysis. For example:
A monitored-resource type for a virtual machine might include labels for the location of the machine and the project ID associated with the machine. When information about the monitored resource is recorded, the information includes the values for the labels.
A monitored resource might also have system- or user-provided metadata labels, in addition to the labels defined for the monitored-resource type.
A metric type that counts API requests might have labels to record the name of the method invoked and the status of the request.
The use of labels is discussed in more detail in Labels.
Monitored-resource types
A monitored resource is a resource from which metric data is captured. At last count, Cloud Monitoring supports approximately 100 types of monitored resources.
Types of monitored resources include generic nodes and tasks, tables in Cloud Bigtable, architectural components in Google Kubernetes Engine, various AWS resources, and many more.
Each type of monitored resource is formally described in a data structure called a monitored-resource descriptor. For more information, see Monitored resource descriptors.
Each of the supported monitored-resource types has an entry in the Monitored resource list. The entries in the list are created from the monitored-resource descriptors. This section describes the information captured in a monitored-resource descriptor and shows how it is presented in the list.
A sample monitored-resource type
Here is the entry in the list for a Cloud Storage bucket:
All entries in the list include the following information:
- Type: The header in the entry lists the monitored-resource type;
gcs_bucketin the example.
- Display name: A short description of the monitored resource.
- Description: A longer description of the monitored resource.
- Labels: A set of dimensions for classifying data. For more information, see Labels.
Metric types
A metric type describes measurements that can be collected from a monitored resource. A metric type includes a description of what is being measured and how the measurements are interpreted. At last count, Cloud Monitoring supports approximately 1,500 types of metrics, and it provides you the ability to define new types.
Metric types include counts of API calls, disk-usage statistics, storage consumption, and many more.
Each metric type is formally described in a data structure called a metric descriptor. For more information, see Metric descriptors.
Each of the built-in metric types has an entry in the Metrics list. The entries in these tables are created from the metric descriptors. This section describes the information captured in a metric type and shows how it is presented in reference material.
A sample metric type
The following image shows an entry of a Cloud Storage metric type:
The metric types are displayed in a table, and the table header explains the layout of the information. This section uses one entry as an example, but all the tables use the same format.
The sample Cloud Storage table entry gives you the following information about one metric type:
Metric type: An identifier for the metric type,
storage.googleapis.com/api/request_countin the example.
The prefix
storage.googleapis.comacts as a namespace for Cloud Storage. All metric types associated with a particular monitored-resource type use the same namespace.
Namespaces are omitted from entries in the tables.
All the metric types associated with Cloud Storage are listed in the table for Cloud Storage metrics.
Launch stage: A colored block that indicates the launch stage of the metric type with a value like Alpha, Beta, and GA.
Display name: A brief string describing the metric type, “Request count” in the example.
Kind, Type, Unit: This line provides information for interpreting the data values: the example shows a delta metric recorded as a 64-bit integer with no unit (that's the
1value).
Kind: This example is a delta metric, which records a change over a period of time. That is, each data point records the number of API calls since the previous data point was written. For more information on kinds, see Value types and metric kinds.
Type: This example records its values as 64-bit integers. For more information on types, see Value types and metric kinds.
Unit: This metric doesn't need an explicit unit because it represents a count; the digit
1is used to indicate that no unit is needed.
Monitored resources: The monitored resources for which this metric type is available. The values here are the same as those described in Monitored-resource types.
Description: More detailed information about what is recorded and how. Set in italics to distinguish it from the labels.
Labels: A set of dimensions for classifying data. For more information, see Labels.
When you access monitoring data through the Cloud Monitoring API, you include
a Google Cloud project in the API call. You can retrieve only the data that is
visible to that Google Cloud project. For example, if you request
your project's data for the metric type
storage.googleapis.com/api/request_count,
then you see API counts only for Cloud Storage buckets in your project.
If your project isn't using any Cloud Storage buckets, then no metric
data is returned.
Built-in metric types
Built-in metric types are defined by Google Cloud services, including Cloud Monitoring. These metric types describe standard measurements for a wide array of common infrastructure and are available for anyone to use.
The Metrics list shows the entire set of built-in metrics types.
Metrics listed in the External metrics list page are a
special subset of built-in metrics that are defined by Cloud Monitoring
in partnership with open-source projects or third-party providers. Typically,
these metrics have a prefix of
external.googleapis.com.
Custom metrics
When you build your application, you might have certain properties you
want to measure, things for which there are no built-in metrics.
With Cloud Monitoring, you can define your own metric types.
These metric types are called custom metrics. If a metric has a prefix
of
custom.googleapis.com or
external.googleapis.com/prometheus, then
it is a custom metric. The latter metrics usually come from
the Stackdriver Prometheus sidecar. See
Using Prometheus for more information.
For example, if you want to track the number of widgets sold by your stores, you need to use a custom metric. For more information, see Using custom metrics.
Labels
The definitions of both metric and monitored-resource types include labels. Labels are classifiers for the data being collected; they help to categorize the data for deeper analysis. For example:
- The Cloud Storage metric type
storage.googleapis.com/api/request_counthas two labels,
response_codeand
method.
- The Cloud Storage monitored-resource type
gcs_buckethas three labels,
project_id,
bucket_name, and
location. The labels identify specific instances of the resource type.
Therefore, all data collected for API requests from a Cloud Storage bucket is classified by the method that was called, the response code for the call, the name, location, and project of the bucket involved. The set of labels varies with the metric or monitored-resource type; the available labels are documented in the Metrics list and Monitored resource list pages.
By tracking the response code, method name, and location when counting API calls, you can then fetch the number of calls to a particular API method, or the number of failing calls to any method, or the number of failing calls to a specific method in a specific location.
The number of labels and the number of values each can assume is referred to as cardinality. The cardinality is the number of possible time series that might be collected for a pair of metric and monitored-resource types: there is one time series for each combination of values of their labels. This is discussed further in Cardinality: time series and labels.
Resource metadata labels
In addition to the labels defined on the metric and monitored-resource types, Monitoring internally collects additional information on monitored resources and stores this information in system metadata labels. These system metadata labels are available to users as read-only values. Some resources also allow users to create their own resource metadata labels when configuring resources like VM instances in the Google Cloud console.
The system and user metadata labels are collectively called resource metadata labels. You can use these labels like the labels defined on the metric and monitored-resource types in time-series filters. For more information on filtering, see Monitoring filters.
Time series: data from a monitored resource
This section discusses what monitoring data is and how it's organized in time series. This is where the conceptual components of the metric model become concrete artifacts.
Cloud Monitoring store regular measurements over time for pairs of metric and monitored-resource types. The measurements are collected into time series, and each time series contains the following items:
The name of the metric type to which the time series belongs, and one combination of values for the metric's labels.
A series of (timestamp, value) pairs. The value is the measurement, and the timestamp is the time at which the measurement was taken.
The monitored resource that is the source of the time series data, and one combination of values for the resource's labels.
A time series is created for each combination of metric and resource labels that generates data.
Stylized example: The metric type
storage.googleapis.com/api/request_count
described above could have many time series for your project's Cloud Storage
buckets. The following illustration shows some possible time series.
In the illustration, the value
bucket: xxxx represents the value of the
bucket_name label in the monitored-resource type, and
response_code and
method are labels in the metric type. There is one time series for each
combination of values in the resource and metric labels; the illustrations
shows some of them:
Cloud Monitoring does not record “empty” time series. In the Cloud Storage buckets example, if you are not using a particular bucket or never call a particular API method, then no data is collected for that label, and no time series mentions it. This means that, if your project has no data at all for a particular metric, you never see the metric type.
The metric types don't indicate what types of monitored resources
are found in the metrics' time series. For Cloud Storage, there is only one
monitored resource type—
gcs_bucket. Some metric types pair with more
than one monitored resource.
Live example: If you have a Google Cloud project, you
can try the APIs Explorer widget, located on the reference page for the
timeSeries.list method in the Monitoring API.
The TRY IT button below supplies the following default parameters to the
timeSeries.list method:
- name:
projects/[PROJECT_ID]
- filter:
metric.type="logging.googleapis.com/log_entry_count" resource.type="gce_instance"
- interval.start_time:
2019-11-11T00:00:00Z
- interval.end_time:
2019-11-11T00:20:00Z
- fields:
timeSeries.metric
When you try to run this example, you must change
[PROJECT_ID] in the name
field to your project ID.
This example assumes you have a Compute Engine instance running the
Cloud Logging agent. The monitored-resource type is
gce_instance, and the metric
type is
logging.googleapis.com/log_entry_count. You can change these values if
they don't apply to you.
When retrieving time-series data, you must specify start and end times. This example uses a period on 11 November 2019. However, time-series data is stored for 6 weeks, so you will probably have to adjust the date as well before running the request.
To run the request, click the button below, adjust the parameters as needed, and click the Execute button at the bottom of the widget panel.
If the request succeeds, it will return the time-series data that matches the request. It will look like the following snippet:
{ "timeSeries": [ { "metric": { "labels": { "severity": "INFO", "log": "compute.googleapis.com/activity_log" }, "type": "logging.googleapis.com/log_entry_count" }, "resource": { "type": "gce_instance", "labels": { "instance_id": "0", "zone": "us-central1", "project_id": "your-project-id" } }, "metricKind": "DELTA", "valueType": "INT64", "points": [ { "interval": { "startTime": "2019-10-29T13:53:00Z", "endTime": "2019-10-29T13:54:00Z" }, "value": { "int64Value": "0" } }, ... ] }, ... ] }
For more information on using this widget, including troubleshooting, see APIs Explorer.
Cardinality: time series and labels
Each time series is associated with a specific pair of metric and monitored-resource types, but each pair can have many time series. The possible number of time series is determined by the cardinality of the pair: the number of labels and the number of values each label can take on.
For example, suppose you have a trivial metric type that specifies one label,
color, and a monitored-resource type with another label,
zone.
You get a time series for each combination of
zone and
color values.
The number of values a label can assume is important:
- If there are only two possible zones, “east” and “west”, the
zonelabel can have up to two distinct values.
- If there are only three possible colors, “red,” “green,” and “blue,” the
colorlabel can have up to three distinct values.
The cardinality of this metric is 6 (3×2), though the metric might produce fewer time series. If, for example, you never get any data from the “west” zone, then you will never have more than three time series.
Metric cardinality is a critical factor in performance when you are requesting metrics for a chart or other uses. Higher cardinality can lead to slower query response times.
Cardinality is also a concern when designing custom metrics, where you determine the set of labels and their possible values. You are limited to 30 labels in a metric type, but you also need to ensure than the set of possible values for any label is constrained. A small set of discrete values (like “red,” “green,” and “blue,”) is the preferred approach. Granular values, such as timestamps, should not be used. There are other limits on custom metrics, as well; see Custom metrics for more information. | https://cloud.google.com/monitoring/api/v3/metric-model?authuser=0&hl=id | CC-MAIN-2022-40 | refinedweb | 2,503 | 55.44 |
I am working with
EF6 and generating models (database-first) from multiple databases. Some databases have same table names so when model is generated they start conflicting. To solve
namespace problem I went to
Model.tt file properties in Solution Explorer and changed
Custom Tool Namespace
However this solved my problem only partially as when generating additional Models duplicate (same table name)
.cs files are still being placed into project directory and same table name entities get overwritten.
How do I set model output directory in EF6? Or solve this some other way?
EDIT: After moving
.edmx to it's own folder I am now getting
{"Unable to load the specified metadata resource."} when accessing metadata on line
var objectContext = ((IObjectContextAdapter)this.Ecom).ObjectContext;
Thanks to @CodeCaster
Create physical folder in your project directory
Add it to your project by including in
Solution Explorer
Add model in that directory (I copied it over but you could try to just generate new one)
Change
.tt file
Custom Tool Namespace
Check in
Web.config/(
app.config) if EF connection string generated has it's metadata set as following (Directory structure is done with
.(dot) not
/(slash)) like following res:///Ecom.Ecom.csdl (from being res:///Ecom.csdl while not in it's own folder).
Hope this saves some time. | https://entityframeworkcore.com/knowledge-base/37433293/multiple-ef6-db-models-with-same-entity-names-conflicting-and-getting-overwritten--how-to-set-model-output-directory-in-ef6- | CC-MAIN-2022-21 | refinedweb | 217 | 50.94 |
HW 6: Fin 3710 Investment Analysis, Professor Rui Yao
CHAPTER 14: OPTIONS MARKETS

4.
                                  Cost      Payoff      Profit
   Call option, X = ...            ...        ...         ...
   Put option,  X = ...            ...        ...         ...
   Call option, X = ...            ...        ...         ...
   Put option,  X = ...            ...        ...         ...
   Call option, X = ...            ...        ...         ...
   Put option,  X = ...            ...        ...         ...

   In terms of dollar returns:

                                     Price of Stock Six Months From Now
   Stock price:                    $80         $100        $110        $120
   All stocks (100 shares)         8,000       10,000      11,000      12,000
   All options (1,000 shares)      0           0           10,000      20,000
   Bills + options                 9,360       9,360       10,360      11,360

   In terms of rate of return, based on a $10,000 investment:

                                     Price of Stock Six Months From Now
   Stock price:                    $80         $100        $110        $120
   All stocks (100 shares)         -20%        0%          10%         20%
   All options (1,000 shares)      -100%       -100%       0%          100%
   Bills + options                 -6.4%       -6.4%       3.6%        13.6%

   [Figure: rate of return (%) of each strategy (all stocks, all options, bills plus options) plotted against the stock price S_T at expiration.]

   ... (the at-the-money options) in either direction before your profits become negative.

   c. Buy the call, sell (write) the put, and lend the present value of $50. The payoff is as follows:

                                                    Final Payoff
      Position      Initial Outlay             S_T < X            S_T > X
      Long call     C = 7                      0                  S_T - 50
      Short put     -P = -4                    -(50 - S_T)        0
      Lending       50/(1 + r)^(1/4)           50                 50
      Total         3 + 50/(1 + r)^(1/4)       S_T                S_T

      The initial outlay equals the present value of $50 plus $3. In either scenario, you end up with the same payoff as you would if you bought the stock itself.

8. a. By writing covered call options, Jones receives premium income of $30,000. If, in January, the price of the stock is less than or equal to $45, he will keep the stock plus the premium income. But the most he can have is $450,000 + $30,000, because the stock will be called away from him if its price exceeds $45. (We are ignoring interest earned on the premium income from writing the option over this short time period.) The payoff structure is: ...
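The payoff table for part (a) is easy to regenerate. The short Python sketch below is not part of the original solution; it tabulates the value of Jones's covered-call position (10,000 shares, calls written at $45, $30,000 of premium received) for a few January prices, showing the $480,000 cap described above.

```python
# Hypothetical helper, not from the original solution: value of Jones's
# covered-call position (problem 8a) at expiration for a given stock price.

SHARES = 10_000
CALL_STRIKE = 45.0
PREMIUM_RECEIVED = 30_000.0

def covered_call_value(january_price: float) -> float:
    # The stock is called away at $45 if the price ends above the strike,
    # so the stock component is worth at most $45 per share.
    stock_component = SHARES * min(january_price, CALL_STRIKE)
    return stock_component + PREMIUM_RECEIVED

for price in (40, 45, 50, 55):
    print(f"January price {price}: portfolio = {covered_call_value(price):,.0f}")
# Caps at 10,000 * 45 + 30,000 = 480,000 once the price exceeds $45.
```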
   b. ...

      Stock price less than $35:      $350,000 - $30,000 = $320,000
      Stock price greater than $35:   (10,000 x stock price) - $30,000

   c. The net cost of the collar is zero. The value of the portfolio will be as follows: ...

CHAPTER 15: OPTION VALUATION

   ... expiration.

   d. Call B. This would explain its higher price.

   e. Not enough information. The call with the lower exercise price sells for more than the call with the higher exercise price. The values given are consistent with either stock having higher volatility.
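Part (e) is easy to see numerically. The sketch below is illustrative only (the price, strikes, maturity, rate, and volatilities are assumed numbers, not from the problem): it uses the Black-Scholes formula to show that, with everything else fixed, the higher-volatility stock has the more valuable call, yet a call on a lower-volatility stock can still sell for more when its exercise price is lower.

```python
# Illustrative Black-Scholes sketch for part (e); all inputs are assumed values.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, X, T, r, sigma):
    """Black-Scholes value of a European call."""
    d1 = (log(S / X) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - X * exp(-r * T) * norm_cdf(d2)

S, T, r = 50.0, 0.5, 0.05

# Same strike, different volatility: the higher-sigma call is worth more.
print(bs_call(S, 50, T, r, sigma=0.20))   # low-volatility stock
print(bs_call(S, 50, T, r, sigma=0.40))   # high-volatility stock

# But a lower strike can make the low-volatility call the more expensive one,
# which is why the prices in part (e) do not identify the riskier stock.
print(bs_call(S, 45, T, r, sigma=0.20))
```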
3. Note that, as the option becomes progressively more in the money, its hedge ratio increases to a maximum of 1.0:

      X          Hedge ratio
      ...        ... /150 = ...
      ...        ... /150 = ...
      ...        ... /150 = ...
      ...        ... /150 = ...
      ...        ... /150 = ...
      ...        ... /150 = ...

4. a. When S = 130, then P = 0. When S = 80, then P = 30.

      The hedge ratio is: (P+ - P-)/(S+ - S-) = (0 - 30)/(130 - 80) = -3/5

   b. Riskless portfolio:
                             S = 80      S = 130
      Buy 3 shares             240          390
      Buy 5 puts               150            0
      Total                    390          390

      Present value = $390/1.10 = $354.545

   c. Portfolio cost = 3S + 5P = $300 + 5P = $354.545
      Therefore 5P = $54.545, so P = $54.545/5 = $10.909

   The hedge ratio for the call is: (C+ - C-)/(S+ - S-) = (20 - 0)/(130 - 80) = 2/5

      Riskless portfolio:
                             S = 80      S = 130
      Buy 2 shares             160          260
      Short 5 calls              0         -100
      Total                    160          160

      The present value of $160 is $160/1.10 = $145.455, so 2S - 5C = $145.455, which gives 5C = $54.545 and C = $10.909.

   Put-call parity relationship: P = C - S0 + PV(X) = $10.909 - $100 + ($110/1.10) = $10.909
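The numbers in problem 4 can be checked with a short replication script. This is a sketch, not part of the original solution; it prices the put and the call by forming the replicating portfolio of shares and lending, and then confirms put-call parity.

```python
# Replication check for problem 4: S0 = 100, up/down prices 130/80, X = 110, r = 10%.
S0, SU, SD, X, R = 100.0, 130.0, 80.0, 110.0, 1.10

def one_period_value(payoff_up: float, payoff_down: float) -> float:
    """Value today of a claim paying payoff_up / payoff_down in the two states."""
    delta = (payoff_up - payoff_down) / (SU - SD)     # shares held
    bond = (payoff_up - delta * SU) / R               # dollars lent (or borrowed)
    return delta * S0 + bond

call = one_period_value(max(SU - X, 0), max(SD - X, 0))   # about 10.909
put = one_period_value(max(X - SU, 0), max(X - SD, 0))    # about 10.909

print(round(call, 3), round(put, 3))
print(round(call - S0 + X / R, 3))   # put-call parity: equals the put value
```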
Step 1: Calculate the option values at expiration. The two possible stock prices are S+ = $120 and S- = $80. Therefore, since the exercise price is $100, the corresponding two possible call values are C+ = $20 and C- = $0.

Step 2: Calculate the hedge ratio: (C+ - C-)/(S+ - S-) = (20 - 0)/(120 - 80) = 0.5

Step 3: Form a riskless portfolio made up of one share of stock and two written calls. The cost of the riskless portfolio is (S0 - 2C0) = $100 - 2C0, and the certain end-of-year value is $80.

Step 4: Calculate the present value of $80 with a one-year interest rate of 10%: $80/1.10 = $72.73

Step 5: Set the value of the hedged position equal to the present value of the certain payoff:

   $100 - 2C0 = $72.73

Step 6: Solve for the value of the call: C0 = $13.64

Notice that we never use the probabilities of a stock price increase or decrease. These are not needed to value the call option.

30. Step 1: Calculate the option values at expiration. The two possible stock prices are S+ = $130 and S- = $70. Therefore, since the exercise price is $100, the corresponding two possible call values are C+ = $30 and C- = $0.

    Step 2: Calculate the hedge ratio: (C+ - C-)/(S+ - S-) = (30 - 0)/(130 - 70) = 0.5

    Step 3: Form a riskless portfolio made up of one share of stock and two written calls. The cost of the riskless portfolio is (S0 - 2C0) = $100 - 2C0, and the certain end-of-year value is $70.

    Step 4: Calculate the present value of $70 with a one-year interest rate of 10%: $70/1.10 = $63.64

    Step 5: Set the value of the hedged position equal to the present value of the certain payoff:

       $100 - 2C0 = $63.64

    Step 6: Solve for the value of the call: C0 = $18.18

Here, the value of the call is greater than the value of the call in the lower-volatility scenario.
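The six-step hedge can be wrapped in a small function to make the volatility comparison explicit. The sketch below is an assumed helper, not part of the original solution; it reproduces C0 = $13.64 for the 120/80 scenario and C0 = $18.18 for the 130/70 scenario.

```python
# Sketch of the six-step riskless-hedge valuation used above.

def binomial_call(s_up: float, s_down: float, s0: float = 100.0,
                  strike: float = 100.0, r: float = 0.10) -> float:
    c_up = max(s_up - strike, 0.0)            # Step 1: option values at expiration
    c_down = max(s_down - strike, 0.0)
    h = (c_up - c_down) / (s_up - s_down)     # Step 2: hedge ratio
    # Steps 3-5: one share plus (1/h) written calls pays the same amount in
    # both states, so the hedged position must earn the riskless rate.
    certain_payoff = s_down - c_down / h if h else s_down
    # Step 6: solve s0 - (1/h) * c0 = certain_payoff / (1 + r) for c0.
    return h * (s0 - certain_payoff / (1 + r))

print(round(binomial_call(120, 80), 2))   # 13.64, the lower-volatility case
print(round(binomial_call(130, 70), 2))   # 18.18, the higher-volatility case
```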
Other variables as arguments besides S. Want those other variables to be observables.
Valuation of options before expiration Need to distinguish between American and European options. Consider European options with time t until expiration. Value now of receiving c T at expiration? (Value
BONUS REPORT#5. The Sell-Write Strategy
BONUS REPORT#5 The Sell-Write Strategy 1 The Sell-Write or Covered Put Strategy Many investors and traders would assume that the covered put or sellwrite strategy is the opposite strategy of the covered
FX Key products Exotic Options Menu
FX Key products Exotic Options Menu Welcome to Exotic Options Over the last couple of years options have become an important tool for investors and hedgers in the foreign exchange market. With the growing
Options (1) Class 19 Financial Management, 15.414
Options (1) Class 19 Financial Management, 15.414 Today Options Risk management: Why, how, and what? Option payoffs Reading Brealey and Myers, Chapter 2, 21 Sally Jameson 2 Types of questions Your company,, | http://docplayer.net/15408409-Fin-3710-investment-analysis-professor-rui-yao-chapter-14-options-markets.html | CC-MAIN-2018-17 | refinedweb | 3,106 | 52.09 |
Acquire.
Will show you how to synchronously generate and acquire voltage data (at a rate of 300 KHz). You will use the session-based interface with Digilent Analog Discovery hardware..
Save data acquired in the background to a file. Use the session-based interface and acquires analog input data using non-blocking commands. If you are using the legacy interface, refer to
Set up a continuous audio generation. This example uses, but does not require, a 5.1 channel sound system.
This example shows how to generate code from packData and unpackData
Writes an analog value (PWM wave) to a pin. Can be used to light a LED at varying brightnesses or drive a motor at various speeds. After a call to analogWrite(), the pin will generate a steady
Reads the value from the specified analog pin. Returns analog pins state as a n x 2 array, representing KEY-VALUE pairs of digital pins. The Engduino board contains a 5 channel 10-bit analog to
How the Sphero Connectivity Package can be used to connect to a Sphero device and perform basic operations on the hardware, such as change the LED color, calibrate the orientation of the robot
Description: This example shows Engduino 'analogRead' function call. Function returns values of the analog pin.
Pack and unpack data using the provided packData and unpackData functions
Configures the specified pin to behave either as an input or an output. See the description of digital pins for details on the functionality of the pins. It is possible to enable the internal
Reads the value from the specified digital pin. Returns digital pins state as a n x 2 array, representing KEY-VALUE pairs of digital pins.
Description: This example shows Engduino Sensors 'getAccelerometer' function call. Function returns acceleration in [x,y,z] directions. Unit is [G=10m/s^2]
Description: This example shows how to turn on and off an LED using the setLedsOne function call. The function requires first parameter as an integer indication the position of LED and the
Use OPC Toolbox™ synchronous read and write operations to exchange data with an OPC server.
Configure and execute a logging session, and how to retrieve data from that logging session.
Use callbacks to monitor an OPC Data Access logging task.
Show you how to use a custom callback for the OPC Toolbox™ to plot data acquired during a logging task.
Use OPC Toolbox™ to browse the network for OPC servers, and query the server name space for server items and their properties.
The basic steps involved in using OPC Toolbox™ to acquire data from an OPC Server.
Install a simulated OPC Server for use with the OPC Toolbox examples.
Use the OPC Toolbox™ to browse the network for OPC Historical Data Access servers, and use OPC Toolbox functions to query the server name space for server items and their properties.
Acquire data from an OPC Historical Data Access (HDA) server.
Exchange data between Simulink and OPC Data Access servers.
Model uses data from an OPC server to test composition control of a binary distillation column model.
Find OPC Unified Automation (UA) servers, connect to them, and browse their namespace to find nodes of interest.
Read historical data from an OPC UA server. Specifically, this example reads data from the OPC Foundation Quickstart Historical Access Server.. | http://in.mathworks.com/examples/product-group/matlab-test-measurement | CC-MAIN-2017-22 | refinedweb | 553 | 55.34 |
To kick things off I will show how to use ncclient and pyang to configure interfaces on a Cisco IOS XE device. To make sure everyone is on the same page and to provide some reference points for the remaining parts of the post, I first need to cover some basic theory about NETCONF, XML and YANG.
NETCONF primer
NETCONF is a network management protocol that runs over a secure transport (SSH, TLS, etc.). It defines a set of commands (RPCs) to change the state of a network device; however, it does not define the structure of the exchanged information. The only requirement is for the payload to be a well-formed XML document. Effectively NETCONF provides a way for a network device to expose its API, and in that sense it is very similar to REST. Here are some basic NETCONF operations that will be used later in this post:
- hello - messages exchanged when the NETCONF session is being established, used to advertise the list of supported capabilities.
- get-config - used by clients to retrieve the configuration from a network device.
- edit-config - used by clients to edit the configuration of a network device.
- close-session - used by clients to gracefully close the NETCONF session.
All of these standard NETCONF operations are implemented in ncclient Python library which is what we’re going to use to talk to CSR1k.
XML primer
There are several ways to exchange structured data over the network. HTML, YAML, JSON and XML are all examples of structured data formats. XML encodes data elements in tags and nests them inside one another to create complex tree-like data structures. Thankfully we are not going to spend much time dealing with XML in this post; however, there are a few basic concepts that might be useful for the overall understanding:
- Root - Every XML document has one root element containing one or more child elements.
- Path - is a way of addressing a particular element inside a tree.
- Namespaces - provide name isolation for potentially duplicate elements. As we’ll see later, the resulting XML document may be built from several YANG models and namespaces are required to make sure there are no naming conflicts between elements.
The first two concepts are similar to paths in a Linux filesystem where all of the files are laid out in a tree-like structure with root partition at its top. Namespace is somewhat similar to a unique URL identifying a particular server on the network. Using namespaces you can address multiple unique
/etc/hosts files by prepending the host address to the path.
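To make these three concepts concrete, here is a small hand-written fragment (the element names and the namespace are invented for illustration and do not come from any real device model):

<interfaces xmlns="urn:example:demo-interfaces">
  <interface>
    <name>GigabitEthernet1</name>
    <description>uplink to core</description>
  </interface>
</interfaces>

Here <interfaces> is the root element, the path /interfaces/interface/name addresses the innermost element, and the xmlns attribute places every element in the urn:example:demo-interfaces namespace, so an identically named element defined by another model cannot be confused with it.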
As with other structured data formats, XML by itself does not define the structure of the document. We still need something to organise a set of XML tags, specify what is mandatory and what is optional and what are the value constraints for the elements. This is exactly what YANG is used for.
YANG primer
YANG was conceived as a human-readable way to model the structure of an XML document. Similar to a programming language it has some primitive data types (integers, boolean, strings), several basic data structures (containers, lists, leafs) and allows users to define their own data types. The goal is to be able to formally model any network device configuration.
Anyone who has ever used Ansible to generate text network configuration files is familiar with network modelling. Coming up with a naming conventions for variables, deciding how to split them into different files, creating data structures for variables representing different parts of configuration are all a part of network modelling. YANG is similar to that kind of modelling, only this time the models are already created for you. There are three main sources of YANG models today:
- Equipment Vendors create their own “native” models to interact with their devices.
- Standards bodies (e.g. IETF and IEEE) were supposed to be the driving force of model creation. However in reality they have managed to produce only a few models that cover basic functionality like interface configuration and routing. Half of these models are still in the “DRAFT” stage.
- OpenConfig working group was formed by major telcos and SPs to fill the gap left by IETF. OpenConfig has produced the most number of models so far ranging from LLDP and VLAN to segment routing and BGP configurations. Unfortunately these models are only supported by high-end SP gear and we can only hope that they will find their way into the lower-end part of the market.
Be sure to check of these and many other YANG models on YangModels Github repo.
Environment setup
My test environment consists of a single instance of Cisco CSR1k running IOS XE 16.04.01. For the sake of simplicity I’m not using any network emulator and simply run it as a stand-alone VM inside VMWare Workstation. CSR1k has the following configuration applied:
username admin privilege 15 secret admin
!
interface GigabitEthernet1
 ip address 192.168.145.51 255.255.255.0
 no shutdown
!
netconf-yang
The last command is all what’s required to enable NETCONF/YANG support.
On the same hypervisor I have my development CentOS7 VM, which is connected to the same network as the first interface of CSR1k. My VM is able to ping and ssh into the CSR1k. We will need the following additional packages installed:
yum install openssl-devel python-devel python-pip gcc
pip install ncclient pyang pyangbind ipython
Device configuration workflow
The following workflow will be performed in both interactive Python shell (e.g. iPython) and Linux bash shell. The best way to follow along is to have two sessions opened, one with each of the shells. This will save you from having to rerun import statements every time you re-open a python shell.
1. Discovering device capabilities
The first thing you have to do with any NETCONF-capable device is discover its capabilities. We’ll use ncclient’s manager module to establish a session to CSR1k. Method
.connect() of the manager object takes device IP, port and login credentials as input and returns a reference to a NETCONF session established with the device.
from ncclient import manager

m = manager.connect(host='192.168.145.51', port=830, username='admin',
                    password='admin', device_params={'name': 'csr'})

print m.server_capabilities
When the session is established, server capabilities advertised in the hello message get saved in the
server_capabilities variable. The last command should print a long list of all capabilities and supported YANG models.
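Since the list is long, it may be easier to filter it for the model we care about (a small convenience snippet, not part of the original workflow):

for cap in m.server_capabilities:
    # print only the capabilities that mention the ietf-ip model
    if 'ietf-ip' in cap:
        print cap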
2. Obtaining YANG models
The task we have set for ourselves is to configure an interface. CSR1k supports both native (Cisco-specific) and IETF-standard ways of doing it. In this post I'll show how to use the IETF models to do that. First we need to identify which model to use. Based on the discovered capabilities we can guess that ietf-ip could be used to configure IP addresses, so let's get this model first. One way to get a YANG model is to search for it on the Internet, and since it's an IETF model, it most likely can be found in one of the RFCs.
Another way to get it is to download it from the device itself. All devices supporting RFC6022 must be able to send the requested model in response to the
get_schema call. Let’s see how we can download the ietf-ip YANG model:
schema = m.get_schema('ietf-ip')
print schema
At this stage the model is embedded in the XML response and we still need to extract it and save it in a file. To do that we'll use Python's standard
xml.etree.ElementTree library to parse the received XML document, pick the first child from the root of the tree (the data element) and save it into a variable. A helper function write_file simply saves the Python string contained in the
yang_text variable in a file.
import xml.etree.ElementTree as ET

root = ET.fromstring(schema.xml)
yang_text = list(root)[0].text
write_file('ietf-ip.yang', yang_text)
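The write_file and read_file helpers used throughout this post are not shown in the original text; a minimal sketch could look like this:

def write_file(filename, text):
    # save a Python string to a file on disk
    with open(filename, 'w') as f:
        f.write(text)

def read_file(filename):
    # return the contents of a file as a single string
    with open(filename) as f:
        return f.read()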
Back at the Linux shell we can now start using pyang. The most basic function of pyang is to convert the YANG model into one of the many supported formats. For example, tree format can be very helpful for high-level understanding of the structure of a YANG model. It produces a tree-like representation of a YANG model and annotates element types and constraints using syntax described in this RFC.
$ pyang -f tree ietf-ip.yang | head
module: ietf-ip
augment /if:interfaces/if:interface:
   +--rw ipv4!
   |  +--rw enabled?      boolean
   |  +--rw forwarding?   boolean
   |  +--rw mtu?          uint16
   |  +--rw address* [ip]
   |  |  +--rw ip         inet:ipv4-address-no-zone
   |  |  +--rw (subnet)
   |  |     +--:(prefix-length)
From the output above we can see that ietf-ip augments, or extends, the interface model. It adds new configurable (rw) containers with a list of IP prefixes to be assigned to an interface. Another thing we can see is that this model cannot be used on its own, since it doesn't specify the name of the interface it augments. This model can only be used together with the
ietf-interfaces YANG model, which models the basic interface properties like MTU, state and description. In fact
ietf-ip relies on a number of YANG models which are specified as imports at the beginning of the model definition.
module ietf-ip {
  namespace "urn:ietf:params:xml:ns:yang:ietf-ip";
  prefix ip;

  import ietf-interfaces {
    prefix if;
  }
  import ietf-inet-types {
    prefix inet;
  }
  import ietf-yang-types {
    prefix yang;
  }
Each import statement specifies the model and the prefix by which it will be referred to later in the document. These prefixes create a clear separation between the namespaces of different models.
We will need to download all of these models and use them together with ietf-ip throughout the rest of this post. Use the procedure described above to download the ietf-interfaces, ietf-inet-types and ietf-yang-types models.
3. Instantiating YANG models
Now we can use pyangbind, an extension to pyang, to build a Python module based on the downloaded YANG models and start building the interface configuration. Make sure your
$PYBINDPLUGIN variable is set as described in the pyangbind documentation.
pyang --plugindir $PYBINDPLUGIN -f pybind -o ietf_ip_binding.py ietf-ip.yang ietf-interfaces.yang ietf-inet-types.yang ietf-yang-types.yang
The resulting
ietf_ip_binding.py is now ready for use inside the Python shell. Note that we import
ietf_interfaces as this is the parent object for
ietf_ip. The details about how to work with the generated Python bindings can be found on pyangbind's Github page.
from ietf_ip_binding import ietf_interfaces

model = ietf_interfaces()
model.get()
{'interfaces': {'interface': {}}, 'interfaces-state': {'interface': {}}}
To set up an IP address, we first need to create a model of the interface we're planning to manipulate. We can then use
.get() on the model’s instance to see the list of all configurable parameters and their defaults.
new_interface = model.interfaces.interface.add('GigabitEthernet2')
new_interface.get()
{'description': u'',
 'enabled': True,
 'ipv4': {'address': {}, 'enabled': True, 'forwarding': False, 'mtu': 0, 'neighbor': {}},
 'ipv6': {'address': {},
          'autoconf': {'create-global-addresses': True,
                       'create-temporary-addresses': False,
                       'temporary-preferred-lifetime': 86400L,
                       'temporary-valid-lifetime': 604800L},
          'dup-addr-detect-transmits': 1L,
          'enabled': True,
          'forwarding': False,
          'mtu': 0L,
          'neighbor': {}},
 'link-up-down-trap-enable': u'',
 'name': u'GigabitEthernet2',
 'type': u''}
The simplest thing we can do is modify the interface description.
new_interface.description = 'NETCONF-CONFIGURED PORT'
new_interface.get()['description']
New objects are added by calling
.add() on the parent object and passing a unique key as an argument.
ipv4_addr = new_interface.ipv4.address.add('12.12.12.2')
ipv4_addr.get()
{'ip': u'12.12.12.2', 'netmask': u'', 'prefix-length': 0}
ipv4_addr.netmask = '255.255.255.0'
At the time of writing pyangbind only supported serialisation into JSON format which means we have to do a couple of extra steps to get the required XML. For now let’s dump the contents of our interface model instance into a file.
import pyangbind.lib.pybindJSON as pybindJSON

json_data = pybindJSON.dumps(model, mode='ietf')
write_file('new_interface.json', json_data)
print json_data
4. Applying configuration changes
Even though pyangbind does not support XML, it is possible to use other pyang plugins to generate XML from JSON.
pyang -f jtox -o interface.jtox ietf-ip.yang ietf-interfaces.yang ietf-inet-types.yang ietf-yang-types.yang
json2xml -t config -o interface.xml interface.jtox new_interface.json
The resulting
interface.xml file contains the XML document ready to be sent to the device. I'll use the read_file helper function to read its contents and save it into a variable. We should still have a NETCONF session opened from one of the previous steps and we'll use the edit-config RPC call to apply our changes to the running configuration of CSR1k.
xml = read_file('interface.xml')
reply = m.edit_config(target='running', config=xml)
print("Success? {}".format(reply.ok))
m.close_session()
If the change was applied successfully
reply.ok should return
True and we can close the session to the device.
Verifying changes
Going back to the CSR1k’s CLI we should see our changes reflected in the running configuration:
Router#sh run int gi 2
Building configuration...

Current configuration : 126 bytes
!
interface GigabitEthernet2
 description NETCONF-CONFIGURED PORT
 ip address 12.12.12.2 255.255.255.0
 negotiation auto
end
All-in-one scripts
Check out this Github page for Python scripts that implement the above workflow in a more organised way.
In this post I have merely scratched the surface of YANG modelling and network device programming. In the following posts I am planning to take a closer look at the RESTCONF interface, internal structure of a YANG model, Ansible integration and other YANG-related topics until I run out of interest. So until that happens… stay tuned. | https://networkop.co.uk/blog/2017/01/25/netconf-intro/ | CC-MAIN-2020-29 | refinedweb | 2,290 | 55.13 |
RTX51 Tiny User's Guide
RTX51 Tiny may be configured to use Round-Robin Multitasking (or task switching). Round-Robin allows quasi-parallel execution of several tasks. Tasks are not really executed concurrently but are time-sliced (the available CPU time is divided into time slices and RTX51 Tiny assigns a time slice to each task). Since the time slice is short (only a few milliseconds) it appears as though tasks execute simultaneously.
Tasks execute for the duration of their time-slice (unless the task's time slice is given up). Then, RTX51 Tiny switches to the next task that is ready to run. The duration of a time slice may be defined by the RTX51 Tiny Configuration.
The following example shows a simple RTX51 Tiny program that uses Round-Robin Multitasking. The two tasks in this program are counter loops. RTX51 Tiny starts executing task 0 which is the function named job0. This function creates another task called job1. After job0 executes for its time slice, RTX51 Tiny switches to job1. After job1 executes for its time slice, RTX51 Tiny switches back to job0. This process is repeated indefinitely.
#include <rtx51tny.h>
int counter0;
int counter1;
void job0 (void) _task_ 0 {
os_create_task (1); /* mark task 1 as ready */
while (1) { /* loop forever */
counter0++; /* update the counter */
}
}
void job1 (void) _task_ 1 {
while (1) { /* loop forever */
counter1++; /* update the counter */
}
}
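A task does not have to consume its whole time slice. The sketch below is not part of the original example (do_quick_work is a hypothetical helper); it shows a task voluntarily giving up the processor with os_switch_task, so that RTX51 Tiny immediately switches to the next ready task:

void job2 (void) _task_ 2 {
  while (1) {              /* loop forever */
    do_quick_work ();      /* hypothetical helper: a short piece of work */
    os_switch_task ();     /* give up the remainder of the time slice */
  }
}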
lazy
Python library for rapidly developing lazy interfaces. This is currently a prototype built for playing with the paradigm.
By deferring the execution of your code until the last possible moment (when you actually request the data with
.get())
you can optimize its execution while preserving simple imperative semantics.
Optimizations include things like
- Minimal execution by tracing dependencies and only executing the operations needed to produce the data
- Automatic output caching and invalidation
- Automatic parallelization of the induced dataflow graph
How it works
This library works by modifying annotated functions to record when they were called and their inputs and outputs.
Once
.get() is invoked on an output a minimal dataflow graph is generated by inspecting
all of its dependencies (including cached outputs). This dataflow graph can optionally be automatically parallelized.
A key requirement of this library is that all annotated functions be stateless and synchronous.
See the execution example at the bottom for details, or try it out yourself!
Usage
Decorate stateless and synchronous functions with
@lazy.synchronous
import time

import lazy

@lazy.synchronous
def Square(x):
    time.sleep(0.1)
    return x ** 2

@lazy.synchronous
def Mul(x, y):
    time.sleep(0.1)
    return x * y

@lazy.synchronous
def Add(x, y):
    time.sleep(0.1)
    return x + y
Write your program and access the output of annotated functions with
.get()
a = Square(2)
b = Square(3)
c = Mul(a, b)
d = Add(a, b)

t = time.time()
# The code isn't run until you call .get()
print(c.get())
print(time.time() - t)

t = time.time()
print(d.get())
print(time.time() - t)
Run things in parallel automatically with
lazy.parallelize = True
lazy.parallelize = True

a = Square(2)
b = Square(3)
c = Mul(a, b)

t = time.time()
print(c.get())
print(time.time() - t)  # Should only take 0.2s instead of 0.3s by automatic parallelism
Asynchronous execution can be made synchronous with locking primitives.
Functions annotated with
@lazy.asynchronous are fed an extra input
t
of type
Task which has a
spin primitive. See below:
import random

@lazy.asynchronous
def Recv(t, ptr):
    # Around 10 spins before we break
    for _ in t.spin():
        r = random.randint(0, 10)
        if r == 7:
            break
    return ptr  # pretend we actually received something from the network

ptr = 0x123123
d = Recv(ptr)
print(d.get())  # prints the value of ptr (0x123123)
The idea here is that
spin will periodically run the body of the loop until it is broken.
The rate at which
spin loops is determined by the runtime.
After a couple of iterations of the same function,
we can actually track how many spins it typically takes for the lock condition to be met and further optimize the rate at which spins happen.
As an example, if it takes on average 100ms for the network to respond we can make the first spin take exactly 100ms and speed up all subsequent spins.
This frees up cycles to work on other tasks in parallel.
TODO
- Support functions that operate in-place and have multiple outputs
- Support maximal trace length (to automatically force calls to
get())
Execution example
Below was generated with calls to
lazy.draw().
Before calling
c.get() in the above example we can see that only the input data is valid
After calling
c.get() we can see that only
Mul was invoked (and not
Add)
Once we call
d.get()
Add is executed using the cached intermediate values calculated when we called
c.get()
Other small things
data.dump_cf()to get the calculated controlflow graph (networkx format) of data (i.e. what needs to be executed to generate it)
data.executor = functo set a sepecific executor for the node. The executor must be of the form
func(data : Data) -> None
lazy.dump()to get the full known dataflow graph (networkx format)
lazy.draw()to draw the full known dataflow graph (with colors as in the above example)
If you really want to play with this I'd recommend attacking most ideas with networkx.
As an example: to get a subgraph of all the data dependencies of
d you can simply do
subgraph = nx.subgraph(nx.ancestors(lazy.dump(), d)) | https://pythonawesome.com/python-library-for-lazy-interfaces/ | CC-MAIN-2021-04 | refinedweb | 683 | 59.09 |
Describes how to use the callgrind plugin to the valgrind tool set to profile your code.
Note: Valgrind is a Linux-only tool
You will need to install both valgrind & the visualizer tool kcachegrind - both of which should be available in your distribution’s repositories.
To be most effective valgrind requires that debugging information be present in the binaries that are being instrumented, although a full debug build is not required. On gcc this means compiling with the -g flag. For Mantid the recommended setup is to use a separate build with the CMAKE_BUILD_TYPE set to RelWithDebInfo. This provides a good balance between performance and availability of debugging information.
The profiler can instrument the code in a number of different ways. Some of these will be described here. For more detailed information see the callgrind manual.
During execution the callgrind tool creates output files named callgrind.out.<pid>. For this reason it is recommended that each profile run be executed from within a separate directory, named with a description of the activity. This allows the separate profiled runs to be found more easily in the future.
Beware: The code will execute many times (factors of 10) slower than when not profiling - this is just a consequence of how valgrind instruments the code.
This is the simplest mode of operation. Simply pass the program executable, along with any arguments to the valgrind executable:
>> valgrind --tool=callgrind --dump-instr=yes --simulate-cache=yes --collect-jumps=yes <executable> [args...]
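Combining this with the separate-directory recommendation above, a complete session might look like the following (the directory name and executable are placeholders, not a Mantid-specific recipe):

>> mkdir loadraw-profile && cd loadraw-profile
>> valgrind --tool=callgrind --dump-instr=yes --simulate-cache=yes --collect-jumps=yes ../bin/my_program [args...]
>> kcachegrind callgrind.out.*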
For larger pieces of code it is quite likely that you will wish to profile only a selected portion of it. This is possible if you are able to recompile the source code of the binaries to be instrumented, as valgrind has a C API that can be used to talk to the profiler. It uses a set of macros to instruct the profiler what to do when it hits certain points of the code.
As an example take a simple main function composed of several function calls:
int main() {
  foo1();
  bar1();
  foo2();
  foo3();
}
To profile only the call to
bar1(), you would change the code as follows:
#include <valgrind/callgrind.h>

int main() {
  foo1();

  CALLGRIND_START_INSTRUMENTATION;
  CALLGRIND_TOGGLE_COLLECT;
  bar1();
  CALLGRIND_TOGGLE_COLLECT;
  CALLGRIND_STOP_INSTRUMENTATION;

  foo2();
  foo3();
}
After recompiling the code you would then run the profiler again, but this time adding the --instr-atstart=no --collect-atstart=no flags, like so
>> valgrind --tool=callgrind --dump-instr=yes --simulate-cache=yes --collect-jumps=yes --collect-atstart=no --instr-atstart=no <executable> [args...]
The --instr-atstart=no flag is not strictly necessary, but it will speed up the code up to the profiling point at the cost of less accuracy about the cache usage. See the callgrind documentation for more details.
Callgrind produces a large amount of data about the program’s execution. It is most easily understood using the kcachegrind GUI tool. This reads the information produced by callgrind and creates a list of function calls along with information on timings of each of the calls during the profiled run. If the source code is available it can also show the lines of code that relate to the functions being inspected.
Example of KCachegrind displaying a profile of MantidPlot starting up and closing down
By default KCachegrind shows the number of instructions fetched within its displays. This can be changed using the drop-down box at the top right of the screen. The Instruction Fetch and Cycle Estimation are generally the most widely used and roughly correlate to the amount of time spent performing the displayed functions.
Some of the key features displayed are: