text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringlengths 9
15
⌀ | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
---|---|---|---|---|---|
Hello Peter and Otis,
OK, we have the Lucene logo now on every page.
By the way, if Lucene is used within an applet residing on CD-ROM, FSDirectory.makeLock()
will fail as it tries to write to disc. Maybe it should be documented somewhere, that makeLock()
in this case has to be altered as follows:
package org.apache.lucene.store;
...
final public class FSDirectory extends Directory {
...
/** Construct a {@link Lock}.
* @param name the name of the lock file
*/
public final Lock makeLock(String name) {
final File lockFile = new File(directory, name);
return new Lock() {
public boolean obtain() throws IOException {
// if (Constants.JAVA_1_1) return true; // locks disabled in jdk 1.1
// return lockFile.createNewFile();
return true;
}
public void release() {
// if (Constants.JAVA_1_1) return; // locks disabled in jdk 1.1
// lockFile.delete();
return;
}
public String toString() {
return "Lock@" + lockFile;
}
};
}
...
}
A better solution would be fine.
Thanks,
- Thomas
> Hello,
>
> I was wrong in my description of the requirements to have a site added to
> the powered by Lucene. The powered by Lucene logo must be on the search
> results page.
>
> Sorry about the confusion.
>
> --Peter
>
>
> On 5/6/02 2:07 AM, "Thomas Fuchs" <[email protected]> wrote:
>
> > Hi Peter,
> >
> > I'm sorry, I never resived your message because I'm not listed in the lucene
> > mailing list. I just found it with google. ;)
> >
> > I included now the powered by lucene logo in
> >. There is a short description of
> > the search engine, too.
> >
> > I didn't so before, because the legal stuff coming with lucene says, the names
> > "Apache" and "Lucene" must not be used to endorse or promote products derived
> > from this software without prior written permission. I find that a little bit
> > confusing.
> >
> > Thanks
> >
> > - Thomas
> >
> >
> >> Hi Thomas,
> >
> > I looked at your site, but didn't see the powered by Lucene on any of the
> > Internes pages.
> >
> > You must have the Powered by Lucene Logo on your site. Please let me know
> > where I can find it.
> >
> > Thanks
> >
> > --Peter
> >
> >
> > On 4/1/02 7:51 PM, "Thomas Fuchs" <[email protected]> wrote:
> >
> >>> Hi,
> >>
> >> we just relaunched lexetius.com, a well known german law database (about
> >> 12.000 documents), and we included Lucene as search engine. We thought, it
> >> would be nice, if lexetius.com could be listed in the Powered by
> >> Lucene-directory.
> >>
> >> Thanks in advance
> >>
> >> Thomas Fuchs
> >> Tobias Pietzsch
> >> Lexetius.com e. V.
________________________________________________________________
Keine verlorenen Lotto-Quittungen, keine vergessenen Gewinne mehr!
Beim WEB.DE Lottoservice:
--
To unsubscribe, e-mail: <mailto:[email protected]>
For additional commands, e-mail: <mailto:[email protected]> | http://mail-archives.us.apache.org/mod_mbox/lucene-dev/200205.mbox/%[email protected]%3E | CC-MAIN-2020-10 | refinedweb | 429 | 69.18 |
Sarissa to the Rescue
February 23, 2005
Client-side XML processing. Today's browsers do cover the basics and some of them go even further, offering support for XHTML, SVG, XSLT, XPath , XLink, validation using W3C XML Schema, and more. This article will introduce you to basic cross-browser XML development with the aid of Sarissa, an ECMAScript library designed to stop those nasty incompatibilities before they get too close.
Getting Started
Using XML on the client enables you to do things you've never done before, especially when it comes to control of structured information and delivering an enhanced user experience. Let's go over some typical examples.
Your favorite designer could very well be hiding this from you: it is possible to
update
only parts of a web page with data coming from a request to your server without refreshing
the page and without scripting between those troublesome
iframe elements. The
request can be the result of user interaction handled by your script. Using the XMLHttpRequest
object, you can perform requests over HTTP and obtain the XML response as a DOM-compatible
object. You can then process that object further, if you want, before finally injecting
it
into the document using plain DOM methods, adding to your usability and saving bandwidth.
Use of XML on the client side is often driven by server-side requirements. For example, a high number of concurrent requests involving XSLT-based transformations sounds like trouble for any server. A transformation requires three tree structures in memory, one for each of the source, transform, and result documents. Outsourcing the transformation process to capable clients is a little like outsourcing the process of furniture assembly to the clients themselves. They get what they want more quickly and pay less while you save resources without losing the sale. After the assembly, clients keep the screwdriver for future use much the same way a browser will cache your XSLT document.
Another use case may involve sending structured information to the server application. To perform this, you can create a DOM document programmatically (even by parsing an XML string), perhaps processing it further, and finally submit it to the server.
You get the idea. This can go on to complex applications with rich UIs, for example a web-based XML editor based on XSLT, DOM, and CSS.
But let's go over the basics first.
Basic Training
The real issue in client-side XML development is browser incompatibility around implementations and extensions of the W3C DOM. The Sarissa library hides these incompatibilities for you, also adding some useful utilities into the mix.
A typical script block dealing with XML starts with instantiating a DOM document.
With
Sarissa, getting a new
XMLDocument object is done by calling a 'static'
method:
// Get a browser specific DOM Document object var oDomDoc = Sarissa.getDomDocument(); // more DOM code here
In standards-based browsers, this block is equal to
document.implementation.createDocument. In IE, Sarissa just uses the most
recent MSXML
ProgId to construct an ActiveX-based object as appropriate.
Additionally, you can pass two string parameters to that factory method, which correspond
to
a namespace URI and a local name respectively. Those are used by Sarissa to create
a root
element and add it to the newly constructed
XMLDocument:
// construct a document containing // <foo xmlns="" /> var oDomDoc = Sarissa.getDomDocument("","foo");
You can also populate the Document using an XML string. The above line is equal to
var oDomDoc = Sarissa.getDomDocument("","foo"); // populate the DOM Document using an XML string oDomDoc.loadXML("<foo xmlns='' />");
How about loading an XML document from a URL? Just copy the above XML, paste it in a new file on your server and load it like this:
var oDomDoc = Sarissa.getDomDocument("","foo"); // set loading method to synchronous oDomDoc.async = false; // populate the DOM Document using a remote file oDomDoc.load("path/to/my/file.xml"); // report any XML parsing errors if(oDomDoc.parseError != 0){ // construct a human readable // error description alert(Sarissa.getParseErrorText(oDomDoc);); }else{ // show loaded XML alert(Sarissa.serialize(oDomDoc);); };
We first load the remote file using the
load method of a
XMLDocument using synchronous loading, meaning that the
if
branch will only be executed after the
load method returns. Then we check for a
parsing error. If an error exists, the user sees the result of a call to
Sarissa.getParseErrorText, which provides a string with a human-readable
description of the error. If there is no error, the user sees the XML string serialization
of the document returned from
Sarissa.serialize. This is like IE's
xml property of DOM Nodes, with the difference being that it works for
everyone.
More Tricks
The
XMLHttpRequest object, available by one name or another in every major
browser by now, is used when you simply need more control over the request to the
remote
server, like specifying the HTTP method and headers. You can use it to load the same
XML
file as above like:
var xmlHttp = new XMLHttpRequest(); // specify HTTP method, file URL and // whether to use asynchronous loading xmlHttp.open("GET", "path/to/my/file.xml", false); // perform the actual request xmlHttp.send(null); // show result alert(Sarissa.serialize(xmlHttp.responseXML));
What we've done here is create a new
XMLHttpRequest object and configure
it to request the specified URL using HTTP GET asynchronously. We then perform the
actual
request and when that returns, we serialize the response XML which is available via
the
responseXML property.
To perform XSLT transformations, two
XMLDocument objects are needed, one for
the XSLT transform and one for the source document. Supposing we have obtained those
as
and
xslDoc
respectively, we
can perform the transformation using an
xmlDoc
:
XSLTProcessor
// create an instance of XSLTProcessor var processor = new XSLTProcessor(); // configure the processor to use our stylesheet processor.importStylesheet(xslDoc); // transform and store the result as a new doc var resultDocument = processor.transformToDocument(xmlDoc); // show transformation results alert(Sarissa.serialize(resultDocument));
Here we create a new processor and load our stylesheet to it using the
importStylesheet method. It is worth noting that a single configured instance
of
XSLTProcessor can be re-used to transform more than one source document. We
don't have to load the stylesheet each time. Then we store the transformation result
into a new
XMLDocument object and display its serialization to the user.
You may be aware that IE has added the
transformNode and
transformNodeToObject methods in its implementation of the
XMLDocument object. Sarissa does implement those methods for Mozilla but they
are deprecated. The use of the
XSLTProcessor is recommended as it provides a
more efficient way to transform multiple documents and set XSLT parameters. A last
word on
XSLT -- Right now XSLT and XPath stuff are not supported for Konqueror and Safari,
although
this is expected to change.
Injections
Playing with XML programmatically is cool, but we usually want to modify our page
using
the resulting markup. Suppose we want to inject an XML node bound as
fooNode in
our document as a child of an element with an id value of
'targetNode':
document.getElementById('targetNode').appendChild(document.importNode(fooNode, true));
It is possible to get into trouble with this code. Although it's the most efficient
way to
append the node in the document, it could result to an error if, for example, the
node you
are trying to append is a document node. Serializing with
Sarissa.serialize and
setting the
innerHTML of the target element is always an option, but I would
suggest doing it properly using DOM instead.
The Final Touch
So all of this is great but it's still too much code to write, and probably too error-prone, especially if you are a code completion wimp like me. Moreover, the case becomes worse if you want to use XSLT on the client where applicable. To do that, you need to work on both end points:
your server must be able to transform the XML before sending it, or leave the transformation to the client. This can be dependent on the URL requested or an HTTP parameter.
The client must know if it is able to perform the transformation and ask the XML as appropriate.
IS_ENABLED_XSLTPROCis a Boolean constant you can check in your logic to figure out what to do.
With these two issues addressed, you could result in something like
// set an HTTP parameter depending on whether you want // the transformation on the server or not var clientTransform = Sarissa.IS_ENABLED_XSLTPROC; // now construct the URL as appropriate var url = 'path/to/file?sent-as-is=' + clientTransforml;
So we have constructed the URL we want thanks to a Sarissa constant that tells us
whether
our client is able to use a transformer. Now, supposing we still want to append the
result
targetElement as with our previous example and with an instance of
XSLTProcessor at hand (which may be null), we perform this with just one
line:
Sarissa.updateContentFromURI(url, targetElement, processor);
This will work if the URL does point to an HTTP server. If you cannot have access
to one,
just use your filesystem instead; you will need to load or build the source document
manually and call
Sarissa.updateContentFromNode with it.
Client-side XML can open new doors for your applications. Using Sarissa, it can be easy as well. Sarissa includes a lot more and the code can even guide you in writing your own reusable components. Give it a try and let me know what you are up to. Maybe next time I'll show you how to build a browser-based XML editor. | http://www.xml.com/pub/a/2005/02/23/sarissa.html | CC-MAIN-2017-13 | refinedweb | 1,594 | 53.71 |
I know, to my knowledge, that an array will not print out floating point numbers. So Im taking small steps and I went ahead and attempted to write a program that would return an array of 5 integers according to the users input.
#include <stdio.h> void function (int array[]); int main(void) { int array[5]; int num; int max = 0; printf("Enter a number:"); while (scanf("%d\n", &num)) { printf("%d\n", num); num = max++; if(max == 5) printf("The numbers you entered forward:\n"); printf("%d", array[num]); } return 0; }
Ill go ahead and attach a screenshot of my error.
In this screenshot I entered the numbers 1-5 inclusively. As you can see, some random numbers started appearing after I entered the number 2.
Im at a loss as to handling this anomaly. If anyone has any input, I would greatly appreciate it.
Thank you. | http://www.dreamincode.net/forums/topic/267672-assigning-an-array-to-floating-point-numbers/ | CC-MAIN-2016-50 | refinedweb | 148 | 69.01 |
I have a 5gb text file and i am trying to read it line by line.
My file is in format-: Reviewerid<\t>pid<\t>date<\t>title<\t>body<\n>
This is my code
o = open('mproducts.txt','w')
with open('reviewsNew.txt','rb') as f1:
for line in f1:
line = line.strip()
line2 = line.split('\t')
o.write(str(line))
o.write("\n")
MemoryError
Update:
Installing 64 bit Python solves the issue.
OP was using 32 bit Python that's why getting into memory limitation.
Reading whole comments I think this can help you.
Summary : Get N lines at time, process it and then write it.
Sample Code :
from itertools import islice #You can change num_of_lines def get_lines(file_handle,num_of_lines = 10): while True: next_n_lines = list(islice(file_handle, num_of_lines)) if not next_n_lines: break yield next_n_lines o = open('mproducts.txt','w') with open('reviewsNew.txt','r') as f1: for data_lines in get_lines(f1): for line in data_lines: line = line.strip() line2 = line.split('\t') o.write(str(line)) o.write("\n") o.close() | https://codedump.io/share/5PbAtPcoVbeV/1/why-getting-memory-error-python | CC-MAIN-2018-13 | refinedweb | 173 | 61.43 |
Last night I began working on a project that allows you to share state across multiple machines within a distributed environment. It allows you to instantiate objects and sessions on one machine and have them automatically cascade out so that they are available on every machine within your network. But, instead of having a local copy on every machine in your cluster, each machine only holds a handler that points back to the machine that the object originated on. This keeps the resource requirements in the cluster to a minimum. To test my application, the easiest thing that came to mind was to fashion together a round-robin scenario using traditional load balancing mechanisms. Instead of wasting money purchasing one of the big-box load balancers, I chose to go with the easiest and cheapest (free) method I could think of short of writing my own load balancer (which I still did, but I will discuss that in another article). For my load balancing needs, I decided to use a tool that I already had loaded and ready to go. And now, I am going to show you how to do the same by teaching you how to setup a simple load balancer using the Apache HTTP server. Being that Apache’s HTTP server is extremely powerful and robust, you can use this same setup in a production environment and on the web. Let’s begin!
Before jumping right in to setting up Apache as a load balancer, you will first need to download and install Apache if you haven’t already. You can find the version that fits your environment at. Since I was testing my application in a Windows environment, I went with the 2.2.22 version which I downloaded as an MSI installer from. I already had Apache installed. But, when you go to install it for yourself, stick with the default configuration unless you specifically need to change anything along the way. The only thing you will probably want to change are the default settings for your server name. I went with something like “developer.prv” and “local.developer.prv”, but you can use whatever you want as that’s not the important part of this article.
Once you have Apache installed, configuring it to be used for load balancing is extremely simple. Since Apache already comes with everything you need for load balancing, the first thing you will need to do is to enable the modules that provide that functionality. If you have never worked with Apache before, or even if you have, everything you will need to configure it to run as a load balancer can be done inside the “httpd.conf” file. If you are working with Windows, which I am, you can find the httpd.conf file in “C:\Program Files\Apache Software Foundation\Apache2.2\conf\“. If you’re using a *nix based system, you can typically find the httpd.conf file in “/etc/httpd/conf/“. With yout httpd.conf file located, you will need to open it up with a text editor so that you can configure it for your load balancer.
The first thing you will need to do is to scroll down and uncomment (remove the # sign from the beginning of the line) the following lines.
#LoadModule proxy_module modules/mod_proxy.so
#LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
#LoadModule proxy_http_module modules/mod_proxy_http.so
Make sure you enable the mod_proxy_http shared object. I missed that one the first time through. Apache didn’t report any errors, but the load balancer just didn’t work.
Now that you have the modules enabled for your load balancer, jump to the bottom of the file. There, you will need to define your proxy. You can do that by entering the following lines:
ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
<Proxy balancer://mycluster>
BalancerMember route=node1
BalancerMember route=node2
</Proxy>
As you can see, I have included 2 nodes that I want workload distributed across by registering a “BalancerMember” for each server within my cluster. For the sake of this article, I have chosen to use the same computer (localhost) but having the application server running on 2 separate ports, 9991 & 9992. I have chosen to name my cluster “mycluster” just like in the mod_proxy_balancer documentation at. You can name yours whatever you want. You can also choose to leave out the “route=node*” for each BalancerMember if you want. I added those in because I have plans to extend my balancer in the near future.
That’s it. You are now ready to use Apache as a load balancer. Just save the httpd.conf file and startup the Apache service. To test it, open a web browser and point it to ““. By default, Apache is set to listen for requests on port 80. If you have decided to change the default port along the way, you will need to add the new port to your URL in your browser. By declaring a single “/” (forward slash) in my ProxyPass, this tells Apache to proxy all requests that come in at the root level. If you would rather have a specific sub-level address, you can declare that like so:
ProxyPass /sublevel balancer://mycluster/
This will tell Apache to only proxy addresses that look like ““. Whatever you decide on, make sure you include the trailing “/” (forward slash) at the end of your balancer://mycluster/. Otherwise, you will see some issues further on that I will explain shortly.
If everything worked accordingly, when you launch “” in your browser, it should display the page that lives at the same location on each BalancerMember. This method will work no matter what you choose as your application server that you plan on load balancing. For example, I could replace with and with respectively. I can now use Apache to load balance Tomcat, Jetty, Glassfish, JBoss, Websphere, other Apache servers, Python servers, etc…
If you do not have an application server to test with, don’t worry. I have decided to provide you with a simple Python server, the same server I originally tested my load balancer with before moving on to my real application as mentioned above. It’s a really simple HTTP server that I’ve used in other examples on this website. When you access the server, it simply returns “Connected to PORT” where “PORT” is the port number the server is listening on. By running multiple instances of the same server, all listening on a different port number, this will show me exactly which node my load balancer has sent me to in my cluster. Here is what the simple Python HTTP server looks like.
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler PORT = 9991 class ConnectionHandler(BaseHTTPRequestHandler): def _writeheaders(self): self.send_response(200) self.send_header('Content-type', 'text/html') self.end_headers() def do_HEAD(self): self._writeheaders() def do_GET(self): self._writeheaders() self.wfile.write("""<HTML><HEAD><TITLE>Simple Server</TITLE></HEAD> <BODY>Connected to %d</BODY></HTML>""" % PORT) serveraddr = ('', PORT) srvr = HTTPServer(serveraddr, ConnectionHandler) srvr.serve_forever()
As shown in my Proxy configuration in Apache above, you can see that I have launched 2 instances of this Python HTTP server. The first server listened for connections on port 9991 and the second server listened for connections on port 9992. I registered both server instances in the Proxy configuration as BalancerMember. With both instances of my Python HTTP server running and my Apache load balancer running, whenever I point my browser to ““, I will see the message “Connected to 9991” or “Connected to 9992” depending on which server I was routed to.
The method I have demonstrated here uses a standard round-robin approach to load balancing. That means, every request that comes in to the load balancer will get routed to the next server. In the case of having only 2 nodes in my cluster, each request will be routed from one node to the next and back again. If you test this in Chrome, you will probably always see the same message “Connected to 9992” which appears that the load balancer is not working. If you followed everything in this article exactly, I assure you that your load balancer is working properly. It’s just that every request from Chrome will send one request for favicon.ico which will be routed to the first server and then the actual page request will be routed to the second server. If you do not want every request to be sent to a different server, you can lock clients to the same server upon each request by using the “sticky” method as shown in the Apache mod_proxy_balancer documentation found at.
If for some reason you still don’t believe that your load balancer is working, there are a few things you can do to double-check it. The first thing you should do is run the same test using another web browser such as Firefox. When I test it in Firefox, I always get a different message every time I refresh the page, indicating that my load balancer is working. If you are using the Python HTTP server I provided here for testing, you can watch the output window and see which requests are going thru which server. If you do this and see that requests are only hitting one of the Python HTTP servers, you can enable the built-in Balancer Manager in Apache which can be accessed from ““.
To do that, you will first need to add a new ProxyPass that tells Apache not to load balance requests to the “balancer-manager” context.
ProxyPass /balancer-manager !
If you leave out this step, you will always get an Error 500. Upon examination of the Apache log files, you will see the warning message:
[warn] proxy: No protocol handler was valid for the URL /balancer-manager. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
Next, you will need to register the Balancer Manager location like this:
<Location /balancer-manager>
SetHandler balancer-manager
Order Deny,Allow
Allow from all
</Location>
Once you have configured the Balancer Manager, you can access it by pointing your browser to ““. From there, you can see the status of each node in your cluster. You can also configure and enable/disable each node by clicking the link for each node.
That’s all folks! You should now be able to use the Apache HTTP server as a load balancer. Here are all of the code changes I made to my httpd.conf file to make this happen.
LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_http_module modules/mod_proxy_http.so ProxyPass /balancer-manager ! ProxyPass / balancer://mycluster/ ProxyPassReverse / balancer://mycluster/ <Proxy balancer://mycluster> BalancerMember route=node1 BalancerMember route=node2 </Proxy> <Location /balancer-manager> SetHandler balancer-manager Order Deny,Allow Allow from all </Location>
PayPal will open in a new tab. | http://www.prodigyproductionsllc.com/articles/web-design/web-hosting/use-apache-http-server-as-a-load-balancer/ | CC-MAIN-2017-04 | refinedweb | 1,816 | 62.48 |
This tutorial will show you how to interface a PIR motion sensor with the Raspberry Pi and how to use the GPIO pins on it. The GPIO pins on the Raspberry Pi are critical when it comes to making a hardware project, whether it's a robot or home automation system. In any case, you will have to use the GPIO (general purpose input/output) pins on the Raspberry Pi. With this simple tutorial, you will be able to learn how to control the output on the GPIO pins and read inputs through them. Moreover, you will get to read the output from a PIR motion sensor and write a simple code to blink an LED. If you're not familiar with the Raspberry Pi terminal, check out this tutorial on Basic Linux Commands. If you are a true beginner, you can always use our free e-book on Raspberry Pi and Arduino to get started from step 0. So gear up and get ready to have some fun with the Raspberry Pi GPIOs!
How Does It Work?
The Raspberry Pi GPIO can be accessed through a Python program. You will learn how to access these pins and the commands required to do so later in this tutorial. Each pin on the Raspberry Pi is named based on its order (1,2,3, ...) as shown in the diagram below:
".
How the PIR Motion Sensor works
Blinking an LED Using the Raspberry Pi GPIO-Output GPIO Control: ledblink.py :
import RPi.GPIO as GPIO import time GPIO.setwarnings(False) GPIO.setmode(GPIO.BOARD) GPIO.setup(3,GPIO.OUT) #Define pin 3 as an output pin while True: GPIO.output(3,1) #Outputs digital HIGH signal (5V) on pin 3 time.sleep(1) #Time delay of 1 second GPIO.output(3,0) #Outputs digital LOW signal (0V) on pin 3 time.sleep(1) #Time delay of 1 second
Next, we need to connect the LED to pin 3 on the Raspberry Pi GPIO. You can check out the connection diagram below to do that.
Raspberry Pi GPIO LED connection diagram
You should notice that the LED starts blinking after you execute the Python program using this command: sudo python ledblink.py. The LED blinks because it receives a HIGH (5V) signal and a LOW (0V) signal from the Raspberry Pi GPIO at a delay of one second. You can check out the video below for a demo:
Interfacing the PIR Motion Sensor to the Raspberry Pi's Input GPIO
Read Now, we can try reading the output from the PIR motion sensor. The sensor outputs a digital HIGH (5V) signal when it detects a person. Copy and paste the following code into your Raspberry Pi and save it as a Python file: pirtest.py:". In certain PIR motion sensors, you can even adjust the delay at which the sensor outputs a HIGH signal at the expense of compromising the accuracy. You just need to turn the two knobs on the sensor counterclockwise using a screwdriver.
PIR Motion Sensor pin out
PIR motion sensor adjustment knobs
You can also extend the display of your laptop to the Raspberry Pi via a VNC server and a LAN cable like I did in the video below. Here's the sensor in action: | https://maker.pro/raspberry-pi/tutorial/how-to-interface-a-pir-motion-sensor-with-raspberry-pi-gpio | CC-MAIN-2022-33 | refinedweb | 547 | 71.55 |
Networking
#include <multinetwork.h>
Summary
Enumerations
ResNsendFlags
ResNsendFlags
Possible values of the flags argument to android_res_nsend and android_res_nquery.
Values are ORed together.
Typedefs
net_handle_t
uint64_t net_handle_t
The corresponding C type for android.net.Network::getNetworkHandle() return values.
The Java signed long value can be safely cast to a net_handle_t:
[C] ((net_handle_t) java_long_network_handle) [C++] static_cast
as appropriate.
Functions
android_getaddrinfofornetwork
int android_getaddrinfofornetwork( net_handle_t network, const char *node, const char *service, const struct addrinfo *hints, struct addrinfo **res )
Perform hostname resolution via the DNS servers associated with |network|.
All arguments (apart from |network|) are used identically as those passed to getaddrinfo(3). Return and error values are identical to those of getaddrinfo(3), and in particular gai_strerror(3) can be used as expected. Similar to getaddrinfo(3):
- |hints| may be NULL (in which case man page documented defaults apply)
- either |node| or |service| may be NULL, but not both
- |res| must not be NULL
This is the equivalent of: android.net.Network::getAllByName())
Available since API level 23.
android_getprocdns
int android_getprocdns( net_handle_t *network )
Gets the |network| to which domain name resolutions are bound on the current process.
Returns 0 on success, or -1 setting errno to EINVAL if a null pointer is passed in.
Available since API level 31.
android_getprocnetwork
int android_getprocnetwork( net_handle_t *network )
Gets the |network| bound to the current process, as per android_setprocnetwork.
This is the equivalent of: android.net.ConnectivityManager::getBoundNetworkForProcess()) Returns 0 on success, or -1 setting errno to EINVAL if a null pointer is passed in.
Available since API level 31.
android_res_cancel
void android_res_cancel( int nsend_fd )
Attempts to cancel the in-progress query associated with the |nsend_fd| descriptor.
Available since API level 29.
android_res_nquery
int android_res_nquery( net_handle_t network, const char *dname, int ns_class, int ns_type, uint32_t flags )
Look up the {|ns_class|, |ns_type|} Resource Record (RR) associated with Domain Name |dname| on the given |network|.
The typical value for |ns_class| is ns_c_in, while |type| can be any record type (for instance, ns_t_aaaa or ns_t_txt). |flags| is a additional config to control actual querying behavior, see ResNsendFlags for detail.
Returns a file descriptor to watch for read events, or a negative POSIX error code (see errno.h) if an immediate error occurs.
Available since API level 29.
android_res_nresult
int android_res_nresult( int fd, int *rcode, uint8_t *answer, size_t anslen )
Read a result for the query associated with the |fd| descriptor.
Closes |fd| before returning.
Available since 29.
Returns: < 0: negative POSIX error code (see errno.h for possible values). |rcode| is not set. >= 0: length of |answer|. |rcode| is the resolver return code (e.g., ns_r_nxdomain)
android_res_nsend
int android_res_nsend( net_handle_t network, const uint8_t *msg, size_t msglen, uint32_t flags )
Issue the query |msg| on the given |network|.
|flags| is a additional config to control actual querying behavior, see ResNsendFlags for detail.
Returns a file descriptor to watch for read events, or a negative POSIX error code (see errno.h) if an immediate error occurs.
Available since API level 29.
android_setprocdns
int android_setprocdns( net_handle_t network )
Binds domain name resolutions performed by this process to |network|.
android_setprocnetwork takes precedence over this setting.
To clear a previous process binding, invoke with NETWORK_UNSPECIFIED. On success 0 is returned. On error -1 is returned, and errno is set.
Available since API level 31.
android_setprocnetwork
int android_setprocnetwork( net_handle_t network )
Binds the current process to |network|.
All sockets created in the future (and not explicitly bound via android_setsocknetwork()) will be bound to |network|. All host name resolutions will be limited to |network| as well. Note that if the network identified by |network| ever disconnects, all sockets created in this way will cease to work and all host name resolutions will fail. This is by design so an application doesn't accidentally use sockets it thinks are still bound to a particular network.
To clear a previous process binding, invoke with NETWORK_UNSPECIFIED.
This is the equivalent of: android.net.ConnectivityManager::bindProcessToNetwork())
Available since API level 23.
android_setsocknetwork
int android_setsocknetwork( net_handle_t network, int fd )
All functions below that return an int return 0 on success or -1 on failure with an appropriate errno value set.
Set the network to be used by the given socket file descriptor.
To clear a previous socket binding, invoke with NETWORK_UNSPECIFIED.
This is the equivalent of: android.net.Network::bindSocket())
Available since API level 23. | https://developer.android.com/ndk/reference/group/networking?hl=th | CC-MAIN-2021-43 | refinedweb | 709 | 51.04 |
Autopilot
A test driver for Flutter to do QA testing without sharing app source code. It exposes a JSON API using an HTTP server running inside the app. Using these APIs you can write tests in any language for your Flutter app.
Getting started
Add package to dependencies:
flutter pub add autopilot
Create
main_test.dart along side of your
main.dart file. Make AutoPilot widget parent of your MaterialApp or root widget like below:
import 'package:flutter/material.dart'; import 'package:autopilot/autopilot.dart'; import 'my_app.dart'; void main() { runApp( Autopilot(child: MyApp()) ); }
Run your app on device/emulator:
flutter run --release --target lib/main_test.dart
On Android forward port
8080 so that you can access it via
localhost:
adb forward tcp:8080 tcp:8080
Consider following example:
Text( "Hello World!", key: Key("txtGreet"), )
Example of a test in python using
pytest:
# example_test.py import requests root = "" def get(path): return requests.get(root + path).json() def test_greet(): greet = get("/texts?key=txtGreet")[0] assert greet["text"] == "Hello World!"
Run it:
python -m pytest example_test.py
Inspiration
Flutter has a really amazing testing suite for Unit, UI and Integration testing. But one problem is that you need to know/learn Dart and you have to share the source code of the app to the person who writes tests. This doesn't work in every work environments.
But Flutter framework is so transparent I was able to tap into its internals and build a JSON API which can provide pretty much everything you need to write UI automation tests.
APIs
GET /widgets
Returns entire widget tree
GET /keys
Returns list of all the keyed widgets
GET /texts
Returns list of all text widgets
GET /texts?text=<text>
Returns list of all text widgets with matching text
GET /texts?key=<key>
Returns text widget that matches key
GET /editables
Returns list of all text fields
GET /type?text=<text>
Types given text to the focused text field
GET /tap?x=<x>&y=<y>
Taps at given offset
GET /tap?key=<key>
Taps on widget with given key
GET /tap?text=<text>
Taps on text widget with given text
GET /hold?x=<x>&y=<y>
Tap and hold on given offset
GET /drag?x=<x>&y=<y>&dx=<dx>&dy=<dy>
Taps at (x,y) and drags (dx, dy) offset
GET /screenshot
Returns screenshot of app in PNG
GET /keyboard
Shows keyboard
DELETE /keyboard
Hides keyboard
POST /keyboard?type=<type>
Submits a keyboard action.
Some actions may not be available on all platforms. See TextInputAction for more information. | https://pub.dev/documentation/autopilot/latest/ | CC-MAIN-2022-40 | refinedweb | 425 | 67.96 |
When you ask someone to send you a contract or a report there is a high probability that you’ll get a DOCX file. Whether you like it not, it makes sense considering that 1.2 billion people use Microsoft Office although a definition of “use” is quite vague in this case. DOCX is a binary file which is, unlike XLSX, not famous for being easy to integrate into your application. PDF is much easier when you care more about how a document is displayed than its abilities for further modifications. Let’s focus on that.
Python has a few great libraries to work with DOCX (python-dox) and PDF files (PyPDF2, pdfrw). Those are good choices and a lot of fun to read or write files. That said, I know I'd fail miserably trying to achieve 1:1 conversion.
Looking further I came across unoconv. Universal Office Converter is a library that’s converting any document format supported by LibreOffice/OpenOffice. That sound like a solid solution for my use case where I care more about quality than anything else. As execution time isn't my problem I have been only concerned whether it’s possible to run LibreOffice without X display. Apparently, LibreOffice can be run in haedless mode and supports conversion between various formats, sweet!
I’m grateful to unoconv for an idea and great README explaining multiple problems I can come across. In the same time, I’m put off by the number of open issues and abandoned pull requests. If I get versions right, how hard can it be? Not hard at all, with few caveats though.
Testing converter
LibreOffice is available on all major platforms and has an active community. It's not active as new-hot-js-framework-active but still with plenty of good read and support. You can get your copy from the download page. Be a good user and go with up-to-date version. You can always downgrade in case of any problems and feedback on latest release is always appreciated.
On macOS and Windows executable is called
soffice and
libreoffice on Linux. I'm on macOS, executable
soffice isn't available in my
PATH after the installation but you can find it inside the
LibreOffice.app. To test how LibreOffice deals with your files you can run:
$ /Applications/LibreOffice.app/Contents/MacOS/soffice --headless --convert-to pdf test.docx
In my case results were more than satisfying. The only problem I saw was a misalignment in a file when the alignment was done with spaces, sad but true. This problem was caused by missing fonts and different width of "replacements" fonts. No worries, we'll address this problem later.
Setup I
While reading unoconv issues I've noticed that many problems are connected due to the mismatch of the versions. I'm going with Docker so I can have pretty stable setup and so I can be sure that everything works.
Let's start with defining simple
Dockerfile, just with dependencies and
ADD one DOCX file just for testing:
FROM ubuntu:17.04 RUN apt-get update RUN apt-get install -y python3 python3-pip RUN apt-get install -y build-essential libssl-dev libffi-dev python-dev RUN apt-get install -y libreoffice ADD test.docx /app/
Let's build an image:
docker build -t my/docx2pdf .
After image is created we can run the container and convert the file inside the container:
docker run --rm --name docx2pdf-container my/docx2pdf \ libreoffice --headless --convert-to pdf --outdir app /app/test.docx
Running LibreOffice as a subprocess
We want to run LibreOffice converter as a subprocess and provide the same API for all platforms. Let's define a module which can be run as a standalone script or which we can later import on our server.
import sys import subprocess import re def convert_to(folder, source, timeout=None): args = [libreoffice_exec(), '--headless', '--convert-to', 'pdf', '--outdir', folder, source] process = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, timeout=timeout) filename = re.search('-> (.*?) using filter', process.stdout.decode()) if filename is None: raise LibreOfficeError(process.stdout.decode()) else: return filename.group(1) def libreoffice_exec(): # TODO: Provide support for more platforms if sys.platform == 'darwin': return '/Applications/LibreOffice.app/Contents/MacOS/soffice' return 'libreoffice' class LibreOfficeError(Exception): def __init__(self, output): self.output = output if __name__ == '__main__': print('Converted to ' + convert_to(sys.argv[1], sys.argv[2]))
Required arguments which
convert_to accepts are
folder to which we save PDF and a path to the
source file. Optionally we specify a
timeout in seconds. I’m saying optional but consider it mandatory. We don’t want a process to hang too long in case of any problems or just to limit computation time we are able to give away to each conversion. LibreOffice executable location and name depends on the platform so edit
libreoffice_exec to support platform you’re using.
subprocess.run doesn’t capture stdout and stderr by default. We can easily change the default behavior by passing
subprocess.PIPE. Unfortunately, in the case of the failure, LibreOffice will fail with return code 0 and nothing will be written to stderr. I decided to look for the success message assuming that it won’t be there in case of an error and raise
LibreOfficeError otherwise. This approach hasn’t failed me so far.
Uploading files with Flask
Converting using the command line is ok for testing and development but won't take us far. Let's build a simple server in Flask.
# common/files.py import os from config import config from werkzeug.utils import secure_filename def uploads_url(path): return path.replace(config['uploads_dir'], '/uploads') def save_to(folder, file): os.makedirs(folder, exist_ok=True) save_path = os.path.join(folder, secure_filename(file.filename)) file.save(save_path) return save_path
# common/errors.py from flask import jsonify class RestAPIError(Exception): def __init__(self, status_code=500, payload=None): self.status_code = status_code self.payload = payload def to_response(self): return jsonify({'error': self.payload}), self.status_code class BadRequestError(RestAPIError): def __init__(self, payload=None): super().__init__(400, payload) class InternalServerErrorError(RestAPIError): def __init__(self, payload=None): super().__init__(500, payload)
We'll need few helper function to work with files and few custom errors for handling error messages. Upload directory path is defined in
config.py. You can also consider using flask-restplus or flask-restful which makes handling errors a little easier.
import os from uuid import uuid4 from flask import Flask, render_template, request, jsonify, send_from_directory from subprocess import TimeoutExpired from config import config from common.docx2pdf import LibreOfficeError, convert_to from common.errors import RestAPIError, InternalServerErrorError from common.files import uploads_url, save_to app = Flask(__name__, static_url_path='') @app.route('/') def hello(): return render_template('home.html') @app.route('/upload', methods=['POST']) def upload_file(): upload_id = str(uuid4()) source = save_to(os.path.join(config['uploads_dir'], 'source', upload_id), request.files['file']) try: result = convert_to(os.path.join(config['uploads_dir'], 'pdf', upload_id), source, timeout=15) except LibreOfficeError: raise InternalServerErrorError({'message': 'Error when converting file to PDF'}) except TimeoutExpired: raise InternalServerErrorError({'message': 'Timeout when converting file to PDF'}) return jsonify({'result': {'source': uploads_url(source), 'pdf': uploads_url(result)}}) @app.route('/uploads/<path:path>', methods=['GET']) def serve_uploads(path): return send_from_directory(config['uploads_dir'], path) @app.errorhandler(500) def handle_500_error(): return InternalServerErrorError().to_response() @app.errorhandler(RestAPIError) def handle_rest_api_error(error): return error.to_response() if __name__ == '__main__': app.run(host='0.0.0.0', threaded=True)
The server is pretty straightforward. In production, you would probably want to use some kind of authentication to limit access to
uploads directory. If not, give up on serving static files with Flask and go for Nginx.
Important take-away from this example is that you want to tell your app to be threaded so one request won't prevent other routes from being served. However, WSGI server included with Flask is not production ready and focuses on development. In production, you want to use a proper server with automatic worker process management like gunicorn. Check the docs for an example how to integrate gunicorn into your app. We are going to run the application inside a container so host has to be set to publicly visible
0.0.0.0.
Setup II
Now when we have a server we can update
Dockerfile. We need to copy our application source code to the image filesystem and install required dependencies.
FROM ubuntu:17.04 RUN apt-get update RUN apt-get install -y python3 python3-pip RUN apt-get install -y build-essential libssl-dev libffi-dev python-dev RUN apt-get install -y libreoffice ADD app /app WORKDIR /app RUN pip3 install -r requirements.txt ENV LC_ALL=C.UTF-8 ENV LANG=C.UTF-8 CMD python3 application.py
In
docker-compose.yml we want to specify ports mapping and mount a volume. If you followed the code and you tried running examples you have probably noticed that we were missing the way to tell Flask to run in a debugging mode. Defining environment variable without a value is causing that this variable is going to be passed to the container from the host system. Alternatively, you can provide different config files for different environments.
version: '3' services: web: build: . ports: - '5000:5000' volumes: - ./app:/app environment: - FLASK_DEBUG
Supporting custom fonts
I've mentioned a problem with missing fonts earlier. LibreOffice can, of course, make use of custom fonts. If you can predict which fonts your user might be using there's a simple remedy. Add following line to your
Dockfile.
ADD fonts /usr/share/fonts/
Now when you put custom font file in the
font directory in your project, rebuild the image. From now on you support custom fonts!
Summary
This should give you the idea how you can provide quality conversion of different documents to PDF. Although the main goal was to convert a DOCX file you should be fine with presentations, spreadsheets or images.
Further improvements could be providing support for multiple files, the converter can be configured to accept more than one file as well.
Photo by Samuel Zeller on Unsplash. | https://michalzalecki.com/converting-docx-to-pdf-using-python/ | CC-MAIN-2019-04 | refinedweb | 1,675 | 50.43 |
Welcome to the 38th NeHe Productions Tutorial. It's been awhile since my last tutorial, so my writing may be a little rusty. That and the fact that I've been up for almost 24 hours working on the code :)
So you know how to texture map a quad, and you know how to load bitmap images, tga's, etc. So how the heck do you texture map a Triangle? And what if you want to hide your textures in the .EXE file?
The two questions I'm asked on a daily basis will soon be answered, and once you see how easy it is, you'll wonder why you never thought of the solution :)
Rather than trying to explain everything in great detail I'm going to include a few screenshots, so you know exactly what it is I'm talking about. I will be using the latest basecode. You can download the code from the main page under the heading "NeHeGL I Basecode" or you can download the code at the end of this tutorial.
The first thing we need to do is add the images to the resource file. Alot of you have already figured out how to do this, unfortunately, you miss a few steps along the way and end up with a useless resource file filled with bitmaps that you can't use.
Remember, this tutorial was written in Visual C++ 6.0. If you're using anything other than Visual C++, the resource portion of this tutorial won't make sense (especially the screenshots).
* Currently you can only use 24 bit BMP images. There is alot of extra code to load 8 bit BMP files. I'd love to hear from anyone that has a tiny / optimized BMP loader. The code I have right now to load 8 and 24 bit BMP's is a mess. Something that uses LoadImage would be nice.
Open the project and click INSERT on the main menu. Once the INSERT menu has opened, select RESOURCE.
You are now asked what type of resource you wish to import. Select BITMAP and click the IMPORT button.
A file selection box will open. Browse to the DATA directory, and highlight all three images (Hold down the CTRL key while selecting each image). Once you have all three selected click the IMPORT button. If You do not see the bitmap files, make sure FILES OF TYPE at the bottom says ALL FILE (*.*).
A warning will pop up three times (once for each image you imported). All it's telling you is that the image was imported fine, but the picture can't be viewed or edited because it has more than 256 colors. Nothing to worry about!
Once all three images have been imported, a list will be displayed. Each bitmap has been assigned an ID. Each ID starts with IDB_BITMAP and then a number from 1 - 3. If you were lazy, you could leave the ID's and jump to the code. Lucky we're not lazy!
Right click each ID, and select PROPERTIES. Rename each ID so that it matches the name of the original bitmap file. See the picture if you're not sure what I mean.
Once you are done, select FILE from the main menu and then SAVE ALL because you have just created a new resource file, windows will ask you what you want to call the file. You can save the file with the default filename or you can rename it to lesson38.rc. Once you have decided on a name click SAVE.
This is the point that most people make it to. You have a resource file. It's full of Bitmap images and it's been saved to the Hard Drive. To use the images, you need to complete a few more steps.
The next thing you need to do is add the resource file to your current project. Select PROJECT from the main menu, ADD TO PROJECT, and then FILES.
Select the resource.h file, and the resource file (Lesson38.rc). Hold down control to select more than one file, or add each file individually.
The last thing to do is make sure the resource file (Lesson38.rc) was put in the RESOURCE FILES folder. As you can see in the picture above, it was put in the SOURCE FILES folder. Click it with your mouse and drag it down to the RESOURCE FILES folder.
Once the resource file has been moved select FILE from the main menu and SAVE ALL. The hard part has been done! ...Way too many pictures :)
So now we start on the code! The most important line in the section of code below is #include "resource.h". Without this line, you will get a bunch of undeclared identifier errors when you try to compile the code. The resource.h file declares the objects inside the resource file. So if you want to grab data from IDB_BUTTERFLY1 you had better remember to include the header file!
#include <windows.h> // Header File For Windows
#include <gl\gl.h> // Header File For The OpenGL32 Library
#include <gl\glu.h> // Header File For The GLu32 Library
#include <gl\glaux.h> // Header File For The GLaux Library
#include "NeHeGL.h" // Header File For NeHeGL
#include "resource.h" // Header File For Resource (*IMPORTANT*)
#pragma comment( lib, "opengl32.lib" ) // Search For OpenGL32.lib While Linking
#pragma comment( lib, "glu32.lib" ) // Search For GLu32.lib While Linking
#pragma comment( lib, "glaux.lib" ) // Search For GLaux.lib While Linking
#ifndef CDS_FULLSCREEN // CDS_FULLSCREEN Is Not Defined By Some
#define CDS_FULLSCREEN 4 // Compilers. By Defining It This Way,
#endif // We Can Avoid Errors
GL_Window* g_window;
Keys* g_keys;
The first line below sets aside space for the three textures we're going to make.
The structure will be used to hold information about 50 different objects that we'll have moving around the screen.
tex will keep track of which texture to use for the object. x is the x-position of the object, y is the y position of the object, z is the objects position on the z-axis, yi will be a random number used to control how fast the object falls. spinz will be used to rotate the object on it's z-axis as it falls, spinzi is another random number used to control how fast the object spins. flap will be used to control the objects wings (more on this later) and fi is a random value that controls how fast the wings flap.
We create 50 instances of obj[ ] based on the object structure.
// User Defined Variables
GLuint texture[3]; // Storage For 3 Textures
struct object // Create A Structure Called Object
{
int tex; // Integer Used To Select Our Texture
float x; // X Position
float y; // Y Position
float z; // Z Position
float yi; // Y Increase Speed (Fall Speed)
float spinz; // Z Axis Spin
float spinzi; // Z Axis Spin Speed
float flap; // Flapping Triangles :)
float fi; // Flap Direction (Increase Value)
};
object obj[50]; // Create 50 Objects Using The Object Structure
The bit of code below assigns random startup values to object (obj[ ]) loop. loop can be any value from 0 - 49 (any one of the 50 objects).
We start off with a random texture from 0 to 2. This will select a random colored butterfly.
We assign a random x position from -17.0f to +17.0f. The starting y position will be 18.0f, which will put the object just above the screen so we can't see it right off the start.
The z position is also a random value from -10.0f to -40.0f. The spinzi value is a random value from -1.0f to 1.0f. flap is set to 0.0f (which will be the center position for the wings).
Finally, the flap speed (fi) and fall speed (yi) are also given a random value.
void SetObject(int loop) // Sets The Initial Value Of Each Object (Random)
{
obj[loop].tex=rand()%3; // Texture Can Be One Of 3 Textures
obj[loop].x=rand()%34-17.0f; // Random x Value From -17.0f To 17.0f
obj[loop].y=18.0f; // Set y Position To 18 (Off Top Of Screen)
obj[loop].z=-((rand()%30000/1000.0f)+10.0f); // z Is A Random Value From -10.0f To -40.0f
obj[loop].spinzi=(rand()%10000)/5000.0f-1.0f; // spinzi Is A Random Value From -1.0f To 1.0f
obj[loop].flap=0.0f; // flap Starts Off At 0.0f;
obj[loop].fi=0.05f+(rand()%100)/1000.0f; // fi Is A Random Value From 0.05f To 0.15f
obj[loop].yi=0.001f+(rand()%1000)/10000.0f; // yi Is A Random Value From 0.001f To 0.101f
}
Now for the fun part! Loading a bitmap from the resource file and converting it to a texture.
hBMP is a pointer to our bitmap file. It will tell our program where to get the data from. BMP is a bitmap structure that we can fill with data from our resource file.
We tell our program which ID's to use in the third line of code. We want to load IDB_BUTTEFLY1, IDB_BUTTEFLY2 and IDB_BUTTERFLY3. If you wish to add more images, add the image to the resource file, and add the ID to Texture[ ].
void LoadGLTextures() // Creates Textures From Bitmaps In The Resource File
{
HBITMAP hBMP; // Handle Of The Bitmap
BITMAP BMP; // Bitmap Structure
// The ID Of The 3 Bitmap Images We Want To Load From The Resource File
byte Texture[]={ IDB_BUTTERFLY1, IDB_BUTTERFLY2, IDB_BUTTERFLY3 };
The line below uses sizeof(Texture) to figure out how many textures we want to build. We have 3 ID's in Texture[ ] so the value will be 3. sizeof(Texture) is also used for the main loop.
glGenTextures(sizeof(Texture), &texture[0]); // Generate 3 Textures (sizeof(Texture)=3 ID's)
for (int loop=0; loop<sizeof(Texture); loop++) // Loop Through All The ID's (Bitmap Images)
{
LoadImage takes the following parameters: GetModuleHandle(NULL) - A handle to an instance. MAKEINTRESOURCE(Texture[loop]) - Converts an Integer Value (Texture[loop]) to a resource value (this is the image to load). IMAGE_BITMAP - Tells our program that the resource to load is a bitmap image.
The next two parameters (0,0) are the desired height and width of the image in pixels. We want to use the default size so we set both to 0.
The last parameter (LR_CREATEDIBSECTION) returns a DIB section bitmap, which is a bitmap without all the color information stored in the data. Exactly what we need.
hBMP points to the bitmap data that is loaded by LoadImage( ).
hBMP=(HBITMAP)LoadImage(GetModuleHandle(NULL),MAKEINTRESOURCE(Texture[loop]), IMAGE_BITMAP, 0, 0, LR_CREATEDIBSECTION);
Next we check to see if the pointer (hBMP) actually points to data. If you wanted to add error checking, you could check hBMP and pop up a messagebox if there's no data.
If data exists, we use GetObject( ) to grab all of the data (sizeof(BMP)) from hBMP and store it in our BMP (bitmap structure).
glPixelStorei tells OpenGL that the data is stored in word alignments (4 bytes per pixel).
We then bind to our texture, set the filtering to GL_LINEAR_MIPMAP_LINEAR (nice and smooth), and generate the texture.
Notice that we use BMP.bmWidth and BMP.bmHeight to get the height and width of the bitmap. We also have to swap the Red and Blue colors using GL_BGR_EXT. The actual resource data is retreived from BMP.bmBits.
The last step is to delete the bitmap object freeing all system resources associated with the object.
if (hBMP) // Does The Bitmap Exist?
{ // If So...
GetObject(hBMP,sizeof(BMP), &BMP); // Get The Object
// hBMP: Handle To Graphics Object
// sizeof(BMP): Size Of Buffer For Object Information
// Buffer For Object Information
glPixelStorei(GL_UNPACK_ALIGNMENT,4); // Pixel Storage Mode (Word Alignment / 4 Bytes)
glBindTexture(GL_TEXTURE_2D, texture[loop]); // Bind Our Texture
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR); // Linear Filtering
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR_MIPMAP_LINEAR); // Mipmap Linear Filtering
// Generate Mipmapped Texture (3 Bytes, Width, Height And Data From The BMP)
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, BMP.bmWidth, BMP.bmHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, BMP.bmBits);
DeleteObject(hBMP); // Delete The Bitmap Object
}
}
}
Nothing really fancy in the init code. We add LoadGLTextures() to call the code above. The screen clear color is black. Depth testing is disabled (cheap way to blend). We enable texture mapping, then set up and enable blending.
BOOL Initialize (GL_Window* window, Keys* keys) // Any GL Init Code & User Initialiazation Goes Here
{
g_window = window;
g_keys = keys;
// Start Of User Initialization
LoadGLTextures(); // Load The Textures From Our Resource File
glClearColor (0.0f, 0.0f, 0.0f, 0.5f); // Black Background
glClearDepth (1.0f); // Depth Buffer Setup
glDepthFunc (GL_LEQUAL); // The Type Of Depth Testing (Less Or Equal)
glDisable(GL_DEPTH_TEST); // Disable Depth Testing
glShadeModel (GL_SMOOTH); // Select Smooth Shading
glHint (GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Set Perspective Calculations To Most Accurate
glEnable(GL_TEXTURE_2D); // Enable Texture Mapping
glBlendFunc(GL_ONE,GL_SRC_ALPHA); // Set Blending Mode (Cheap / Quick)
glEnable(GL_BLEND); // Enable Blending
We need to initialize all 50 objects right off the start so they don't appear in the middle of the screen or all in the same location. The loop below does just that.
for (int loop=0; loop<50; loop++) // Loop To Initialize 50 Objects
{
SetObject(loop); // Call SetObject To Assign New Random Values
}
return TRUE; // Return TRUE (Initialization Successful)
}
void Deinitialize (void) // Any User DeInitialization Goes Here
{
}
void Update (DWORD milliseconds) // Perform Motion Updates Here
{
if (g_keys->keyDown [VK_ESCAPE] == TRUE) // Is ESC Being Pressed?
{
TerminateApplication (g_window); // Terminate The Program
}
if (g_keys->keyDown [VK_F1] == TRUE) // Is F1 Being Pressed?
{
ToggleFullscreen (g_window); // Toggle Fullscreen Mode
}
}
Now for the drawing code. In this section I'll attempt to explain the easiest way to texture map a single image across two triangles. For some reason everyone seems to think it's near impossible to texture an image to a triangle.
The truth is, you can texture an image to any shape you want. With very little effort. The image can match the shape or it can be a completely different pattern. It really doesn't matter.
First things first... we clear the screen and set up a loop to render all 50 of our butterflies (objects).
void Draw (void) // Draw The Scene
{
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
for (int loop=0; loop<50; loop++) // Loop Of 50 (Draw 50 Objects)
{
We call glLoadIdentity( ) to reset the modelview matrix. Then we select the texture that was assigned to our object (obj[loop].tex).
We position the butterfly using glTranslatef() then rotate the buttefly 45 degrees on it's X axis. This tilts the butterfly a little more towards the viewer so it doesn't look like a flat 2D object.
The final rotation spins the butterfly on it's z-axis which makes it spin as it falls down the screen.
glLoadIdentity (); // Reset The Modelview Matrix
glBindTexture(GL_TEXTURE_2D, texture[obj[loop].tex]); // Bind Our Texture
glTranslatef(obj[loop].x,obj[loop].y,obj[loop].z); // Position The Object
glRotatef(45.0f,1.0f,0.0f,0.0f); // Rotate On The X-Axis
glRotatef((obj[loop].spinz),0.0f,0.0f,1.0f); // Spin On The Z-Axis
Texturing a triangle is not all that different from texturing a quad. Just because you only have 3 vertices doesn't mean you can't texture a quad to your triangle. The only difference is that you need to be more aware of your texture coordinates.
In the code below, we draw the first triangle. We start at the top right corner of an invisible quad. We then move left until we get to the top left corner. From there we go to the bottom left corner.
The code below will render the following image:
Notice that half the buttefly is rendered on the first triangle. The other half is rendered on the second triangle. The texture coordinates match up with the vertex coordinates and although there are only 3 texture coordinates, it's still enough information to tell OpenGL what portion of the image needs to be mapped to the triangle.
glBegin(GL_TRIANGLES); // Begin Drawing Triangles
// First Triangle
glTexCoord2f(1.0f,1.0f); glVertex3f( 1.0f, 1.0f, 0.0f); // Point 1 (Top Right)
glTexCoord2f(0.0f,1.0f); glVertex3f(-1.0f, 1.0f, obj[loop].flap); // Point 2 (Top Left)
glTexCoord2f(0.0f,0.0f); glVertex3f(-1.0f,-1.0f, 0.0f); // Point 3 (Bottom Left)
The code below renders the second half of the triangle. Same technique as above, but this time we render from the top right to the bottom left, then over to the bottom right.
The second point of the first triangle and the third point of the second triangle move back and forth on the z-axis to create the illusion of flapping. What's really happening is that point is moving from -1.0f to 1.0f and then back, which causes the two triangles to bend in the center where the butterflies body is.
If you look at the two pictures you will notice that points 2 and 3 are the tips of the wings. Creates a very nice flapping effect with minimal effort.
// Second Triangle
glTexCoord2f(1.0f,1.0f); glVertex3f( 1.0f, 1.0f, 0.0f); // Point 1 (Top Right)
glTexCoord2f(0.0f,0.0f); glVertex3f(-1.0f,-1.0f, 0.0f); // Point 2 (Bottom Left)
glTexCoord2f(1.0f,0.0f); glVertex3f( 1.0f,-1.0f, obj[loop].flap); // Point 3 (Bottom Right)
glEnd(); // Done Drawing Triangles
The following bit of code moves the butterfly down the screen by subtracting obj[loop].yi from obj[loop].y. The butterfly spinz value is increased by spinzi (which can be a negative or positive value) and the wings are increased by fi. fi can also be a negative or positive direction depending on the direction we want the wings to flap.
obj[loop].y-=obj[loop].yi; // Move Object Down The Screen
obj[loop].spinz+=obj[loop].spinzi; // Increase Z Rotation By spinzi
obj[loop].flap+=obj[loop].fi; // Increase flap Value By fi
After moving the butterfly down the screen, we need to see if it's gone past the bottom of the screen (no longer visible). If it has, we call SetObject(loop) to assign the butterfly a new texture, new fall speed, etc.
if (obj[loop].y<-18.0f) // Is Object Off The Screen?
{
SetObject(loop); // If So, Reassign New Values
}
To make the wings flap, we check to see if the flap value is greater than or less than 1.0f and -1.0f. If the wing is greater than or less than those values, we change the flap direction by making fi=-fi.
So if the wings were going up, and they hit 1.0f, fi will become a negative value which will make the wings go down.
Sleep(15) has been added to slow the program down by 15 milliseconds. It ran insanely fast on a friends machine, and I was too lazy to modify the code to take advantage of the timer :)
if ((obj[loop].flap>1.0f) || (obj[loop].flap<-1.0f)) // Time To Change Flap Direction?
{
obj[loop].fi=-obj[loop].fi; // Change Direction By Making fi = -fi
}
}
Sleep(15); // Create A Short Delay (15 Milliseconds)
glFlush (); // Flush The GL Rendering Pipeline
}
I hope you enjoyed the tutorial. Hopefully it makes loading textures from a resource a lot easier to understand, and texturing triangles a snap. I've reread this tutorial about 5 times now, and it seems easy enough, but if you're still having problems, let me know. As always, I want the tutorials to be the best that they can be, so feedback is greatly appreciated!
Thanks to everyone for the great support! This site would be nothing without it's visitors!!! Warren Moore ) * DOWNLOAD Game GLUT Code For This Lesson. ( Conversion by Alexandre Ribeiro de Sá ) * DOWNLOAD LCC Win32 Code For This Lesson. ( Conversion by Robert Wishlaw ) * DOWNLOAD Mac OS X/Cocoa Code For This Lesson. ( Conversion by Bryan Blackburn ) * DOWNLOAD Visual Studio .NET Code For This Lesson. ( Conversion by Grant James )
* DOWNLOAD Lesson 38 - Enhanced (Masking, Sorting, Keyboard - NeHe). * DOWNLOAD Lesson 38 - Screensaver by Brian Hunsucker.
< Lesson 37Lesson 39 >
NeHe™ and NeHe Productions™ are trademarks of GameDev.net, LLC
OpenGL® is a registered trademark of Silicon Graphics Inc. | http://nehe.gamedev.net/tutorial/loading_textures_from_a_resource_file__texturing_triangles/26001/ | CC-MAIN-2014-41 | refinedweb | 3,387 | 74.79 |
The IBM Information Server has a business glossary manager that I am implementing for several clients. Some of those clients have existing data dictionaries and glossaries that will need to be imported into the product. The IBM information server has an XML format to allow you to import/export business glossaries.
There is a lot to talk about in examining this format. There is the good, the bad and the ugly in this format. Before we begin our dissection there are two contextual topics in need of some discussion. First is examining the goals of the format and second is determining whether those goals could have been achieved using existing formats.
At a high-level, the format has three main goals which correspond to its three main elements: represent terms and their definitions (via the term element), categorize terms (via the category element) and add custom attributes to categories or terms (via the attribute element). Except for the metadata extension mechanism (custom attributes), this is a simple way to create and organize a dictionary in XML. When examining the schema or the example of the format it is clear that it is far from a complete standard. For example, the available data types for custom attributes is only String. So, it is clear that this format will evolve. A bigger question is - should it? And should it even have been created in the first place?
There are quite a few formats for capturing glossaries, dictionaries and thesauri in XML. A colleague of mine, Ken Sall, examined this for the government a few years back. The W3C has SKOS, IBM has subject classification in DITA (though DITA is much broader than glossaries), and XML topic maps can also serve this purpose.
So, although we will continue to explore the details of this format and even conversion of some of the others mentioned into this format, what are your thoughts on it?
Until next time, see you in the trenches… - Mike
The right to exist is in the utility. What processors consume this format, what do they emit, what use is made of the post-format processed products and what do they replace?
Hi Len,
Agree on the issue of utility. Of course, utility can also be served by existing formats. Many times, if you dig back through the history of a product, it comes down to a single developer that wanted to "reinvent-the-wheel" for the sake of "simplicity" (for them). Frankly, I am tired of the simplicity argument being used as a bludgeon against reuse and interoperability.
Regards,
- Mike
I agree about the single vendor issue. Big system design is where I think the simplicity argument makes the most sense.
Simplicity has virtue where multiple scales are comingling. Some artfulness here can overcome advantages attributed to one-size-fits-all specs and standards.
Looking over some of the proposals and RFPs for systems that have to integrate scaling command and control, the numbers of orthogonal interfaces and the perception that the system must be average idiot proof results in very expensive procurement and lifecycle costs. While portable data makes it reasonable to build these, demanding interoperability past the sense and respond actions significantly raises complexity.
The problem is ensuring the use cases aren't gerrymandered.
Much depends on the bite sizes of the procurements. Where an agency is my customer, I've no choice but to accept responsibility for multiple interfaces. Where the city is my customer, I can replace more legacy systems and vendors. Where a State is my customer, the scale of implementation is large but homegeneity is much improved.
Two other devils are in the details:
1. Not correctly assessing the skill set of the user therefore always building for the lowest common denominator (most people are better trained than they admit).
2. Not correctly assessing the median case of incident complexity and assuming local events require major resources instead of adjusting the scale properly.
The couplings of one and two are where the art is. Using human intelligence and training more astutely is the master stroke.
Hi Len,
Interesting post ... I read it a few times but think I may need to read it about 10 more times to understand it fully.
Would like to focus on what you said about simplicity -
"simplicity has virtue where multiple scales are comingling."
I think I understand that case and agree with it. I think a generalization of that case is when you can clearly see that you have an overly-complex design because things are continually bolted on at the last-minute in a knee-jerk reaction to a new requirement. Thus simplicity becomes key to redesigning a more elegant solution that eliminates the cruft.
The opposite of that is what I am talking about here - when you say something needs to be simpler because the developer does not want to be bothered with reasonable complexity.
You may have to expand on some of your other points because I did not grok it all.
Regards,
- Mike
It is about coupling. As the numbers of components rise, there is a non-linear increase in the complexity and cost given a complex base.
When you look at the formats that have been most successful at scale, the majority have the virtue that at least in their initial incarnation, they are very simple (eg, HTML, RSS, and the Air Force messaging format the name of which I can't remember). The more we try to communicate in the namespace, the lumpier the system space gets as the namespace is aggregated.
A questions is, how do these namespaces become complex? Typically, overreaching, noisy requirements, mammal nonsense, and failing to cut legacy at launch. Gerrymandered use cases are a problem of projects that have cominingled marketing with design, ambition with structure, and so on. Think about the awful evolution of CALS.
I don't think 'less is more' or 'simplicity for its own sake' is right. What I do see is that the forces on the design have to be pared down, requirements need to be strict, use cases have to be focused, and so on. Otherwise, at the end, developers are sitting at their desk with a contract punch list ticking off the requirements they meet, those they don't, and on the other side is a customer/procurement official threatening actions or parlaying for more work.
Too often too many usefully separable systems are procured by the same specification. That is a recipe for failure. Too often the specification was written for the abstract use case anticipating every possible even if improbable failure mode, and that is a recipe for very high costs and a system in which 10% of the features are used 90% of the time while the remainder because not used are too hard to use given lack of experience or training or untested intersystem failure modes.
Simpler formats that do one job well succeed. Complex formats that do one job infrequently don't regardless of how elegant the solution. It isn't that the second class doesn't work; it is that it doesn't fit smoothly into the environment of other slightly jagged systems.
Hi Len,
Excellent dissection of the roots of unnecessary complexity!
With my programming lens on, you are pointing out how projects ignore the proper "separation of concerns" by over-reaching.
There are many such examples of "greedy" standards.
Truly a superb post (+1),
- Mike
Sometimes you cannot avoid complexity in order to provide flexibility. The IBM Business Glossary has two import format: the simpler CSV that you can put together in Excel or the more complex XML. I've used both and the CSV is simpler but too hard to use! Glossary definitions tend to have a lot of carriage returns in them and csv files cannot handle them. Glossaries also tend to have additional custom properties - something that can be configured in the IBM Business Glossary and handled by the flexible XML format but not well handled by the fixed CSV format.
So I vote for a flexible XML format but with additional instructions on how to populate it. We all know how to build a list of terms and definitiosn in Excel for import but building complex XML lists is not so easy.
Michael, how did you go about building your glossary XML input files?
Hi Vincent,
I am still in the process of collecting our existing dictionary artifacts but I will probably be using a java program to do it since I enjoy programming.
I'll be writing more blog entries on this as I progress...
Best wishes,
- Mike | http://www.oreillynet.com/xml/blog/2008/05/exploring_ibm_business_glossar.html?CMP=OTC-TY3388567169&ATT=Exploring+IBM+Business+Glossary+XML | crawl-003 | refinedweb | 1,441 | 61.77 |
> Any time you see something "inexplicable" like lots of time being attributed > to something simple like "get", it means that something isn't strict enough > and "get" is having to force a bunch of lazy evaluations to do its job. > Since you're using State.Strict but lift-ing to get there, I'd first look at > the strictness of the monad you're lift-ing from. (I'm assuming > State.Strict does what the label says, but it's possible that it's not > strict in the way you need; strictness is kinda tricky.) Ahh, thanks, this makes sense to me. The monads involved are an ErrorT and one other strict StateT. When the monads say they are strict, they seem to mean that >>= will force the evaluation of the monadic mechanics of the first argument when the second is demanded. By "monadic mechanics" I mean the work done by >>= itself, so for StateT it binds the result of the first argument with either 'let' or '~(a, s') <-' instead of 'case' or '(a, s') <-'. So in my understanding, a lazy State will build thunks on every >>= and they won't ever be forced until you pull the final value and pass it to a strict function in IO like print or an FFI call, and then they are forced recursively, at which point there is a stack overflow unless it's a short sequence. Talking about strictness seems complicated to me because it's not really about what is actually forced, it's about who is forced by who (i.e., when 'a' is forced then 'b' will be) and whether it's forced in sequence or nested. So a strict State is also not forced until the strict IO function, but it turns into a sequential series of thunks rather than a nested one. If this is accurate, why would anyone want to use the lazy State? Anyway, State doesn't provide the really critical bit of strictness, which is in the actual state being modified. So 'State.modify' will inevitably lead to stack overflow if you don't intersperse 'get's in there. What is needed is both $! on State.put (or define your own strict modify that puts $! on put), and strictness annotation of the fields of the record that is being modified (provided it's a record and not a scalar type). In any case, ErrorT has a strict >>= by necessity, both States have a strict >>=, and all modifys and puts (I think) are strict, as are all the record fields. In any case, a simple 'get' should only have to force the state to whnf, so it shouldn't matter if the fields are huge thunks or not, so really all that should matter is if the state is already in whnf or is a huge nest of thunks. Which a strict >>= should prevent, right? This is right next to a 'lookup' function which is called twice as many times and should be doing more work since it actually looks in a Map, but is credited with less cpu and alloc. A lift . get should turn into -- ErrorT a <- do a <- StateT (\s -> return (s, s)) -- 'get' of the outer StateT return (a, s) -- >>= belongs to inner StateT return (Right a) Anyway, it's sort of a mishmash of semi inlined functions since this is especially nested and hard to understand with transformers, but it looks like a lot of applications of (\a -> (a, s)) type functions, so (,) constructors and a 'Right' constructor, inside of 'case's to make sure this are evaluations and not allocations (and I remember from the STG paper that a constructor inside a case need not even be constructed). So I don't totally understand where the forcing is happening to make credit the 'get' with so much work. BTW, I have a theory about what the '0' might mean... perhaps if a function is inlined away, it still appears in the profile list, but with 0 entries. > Moral of the story: time is accounted to the function that forces > evaluation of lazy thunks, not to the thunks themselves or the function that > created the lazy thunks. (I think the latter is impossible without passing > around a lot of expensive baggage, and in any case doesn't tell you anything > useful; unexpected functions taking a lot of time, on the other hand, tells > you right away that there's excessive laziness in the invocation somewhere > and gives you a starting point to track it down.) Indeed, this is a good point that I hadn't realized in quite that way before, but tracking it down is still the tricky part. | http://www.haskell.org/pipermail/haskell-cafe/2010-October/085326.html | CC-MAIN-2014-35 | refinedweb | 781 | 71.48 |
pandas dataframe columns scaling with sklearn
I have a pandas dataframe with mixed type columns, and I'd like to apply sklearn's min_max_scaler to some of the columns. Ideally, I'd like to do these transformations in place, but haven't figured out a way to do that yet. I've written the following code that works:
import pandas as pd import numpy as np from sklearn import preprocessing scaler = preprocessing.MinMaxScaler() dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21],'B':[103.02,107.26,110.35,114.23,114.68], 'C':['big','small','big','small','small']}) min_max_scaler = preprocessing.MinMaxScaler() def scaleColumns(df, cols_to_scale): for col in cols_to_scale: df[col] = pd.DataFrame(min_max_scaler.fit_transform(pd.DataFrame(dfTest[col])),columns=[col]) return df dfTest A B C 0 14.00 103.02 big 1 90.20 107.26 small 2 90.95 110.35 big 3 96.27 114.23 small 4 91.21 114.68 small scaled_df = scaleColumns(dfTest,['A','B']) scaled_df A B C 0 0.000000 0.000000 big 1 0.926219 0.363636 small 2 0.935335 0.628645 big 3 1.000000 0.961407 small 4 0.938495 1.000000 small
I'm curious if this is the preferred/most efficient way to do this transformation. Is there a way I could use df.apply that would be better?
I'm also surprised I can't get the following code to work:
bad_output = min_max_scaler.fit_transform(dfTest['A'])
If I pass an entire dataframe to the scaler it works:
dfTest2 = dfTest.drop('C', axis = 1) good_output = min_max_scaler.fit_transform(dfTest2) good_output
I'm confused why passing a series to the scaler fails. In my full working code above I had hoped to just pass a series to the scaler then set the dataframe column = to the scaled series. I've seen this question asked a few other places, but haven't found a good answer. Any help understanding what's going on here would be greatly appreciated!
I am not sure if previous versions of
pandas prevented this but now the following snippet works perfectly for me and produces exactly what you want without having to use
apply
>>> import pandas as pd >>> from sklearn.preprocessing import MinMaxScaler >>> scaler = MinMaxScaler() >>> dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21], 'B':[103.02,107.26,110.35,114.23,114.68], 'C':['big','small','big','small','small']}) >>> dfTest[['A', 'B']] = scaler.fit_transform(dfTest[['A', 'B']]) >>> dfTest A B C 0 0.000000 0.000000 big 1 0.926219 0.363636 small 2 0.935335 0.628645 big 3 1.000000 0.961407 small 4 0.938495 1.000000 small
★ Back to homepage or read more recommendations:★ Back to homepage or read more recommendations:
From: stackoverflow.com/q/24645153 | https://python-decompiler.com/article/2014-07/pandas-dataframe-columns-scaling-with-sklearn | CC-MAIN-2019-26 | refinedweb | 470 | 70.09 |
As part of my series on starting a business, this post will cover some of basic legal considerations you’ll want on your radar when you start a business.
Forms of Ownership
Likely at the same time you are exploring names for your business, you may be thinking about the structure your business will take. This is an extremely important decision, and it’s wise to consult with an accountant and attorney so they can help you select the best form of ownership for your business. Here is a summary of the options you have (as presented by the U.S. Small Business Administration, visit the site for a breakdown of advantages and disadvantages of each option).
- Sole Proprietorship: Most small businesses start out as sole proprietorships. Sole proprietors own all the assets of the business and the profits generated by it. They also assume complete responsibility for any of its liabilities or debts. In a sole proprietorship, you are one in the same with the business.
- Partnership: A partnership requires two or more people who share ownership of a business. Like proprietorships, the law does not distinguish between the business and owners. The partners should have a legal agreement that outlines how decisions will be made, profits will be shared, disputes will be resolved, how future partners will be admitted to the partnership, how partners can be bought out, and what steps will be taken to dissolve the partnership when needed.
- who elect a board of directors to oversee the major policies and decisions. Corporations can also elect to be an “S Corp,” which enables the shareholder to treat the earnings and profits as distributions and have them pass through directly to their personal tax return.
- Limited Liability Company: An LLC is a mix of structures, combining the limited liability features of a corporation and the tax efficiencies and operational flexibility of a partnership. LLCs must not have more than two of the four characteristics that define corporations: limited liability to the extent of assets, continuity of life, centralization of management, and free transferability of ownership interests.
Your business structure will determine how your business is organized, how you are taxed and how the business is managed. While your business structure can be changed in the future, it’s best to consider all of the options before choosing one.
Licenses and Permits
In most cases, you will need a license issued by your city and/or county when you start your business. Some towns also require a special zoning permit if you will be conducting business out of your home. A call your town clerk can help you determine what the requirements are and what the fee for registering will be.
As with determining your business structure, you may benefit from consulting an attorney as you navigate the list of required licenses and registrations.
Taxes
Your form of business will determine how you file your income tax returns, and you may be required to file estimated tax returns and pay estimated taxes quarterly. This is where the assistance of an accountant is invaluable. Here are the four general types of business taxes:
- Income Tax: All businesses except partnerships must file an annual income tax return (partnerships file an information return). The form you use depends on how your business is organized. The federal income tax is a pay-as-you-go tax. You must pay the tax as you earn or receive income during the year.
- Self-Employment Tax: Self-employment tax is a social security and Medicare tax primarily for individuals who work for themselves. Your payments tax contribute to your coverage under the social security system.
- Employment Tax: If you have employees, you as the employer have certain employment tax responsibilities that you must pay and forms you must file, including: social security and Medicare taxes, federal income tax withholding, and federal unemployment tax.
- Excise Tax: Although it doesn’t apply to many small businesses, you may have to pay an excise tax if you operate a certain type of business or sell certain products. Specific excise taxes include environmental taxes, communications and air transportation taxes, and fuel taxes.
Lastly, as covered in a previous post, don’t forget that the name of your business has legal implications as well.
Since my experience in business is U.S.-based, this legal overview applies to U.S. businesses. If you have resources for the legalities of starting a business in another country, please add them to the comments.
This post is a guide of some legal considerations related to starting a business and should not replace advice from an attorney, accountant or other professional.
Additional resources:
- Business.gov, U.S. Government Business Website
- Forms of Business Ownership, About.com Canada
- Small Business and Self-Employed Tax Center, Internal Revenue Service | https://www.sitepoint.com/legalities-of-starting-a-business/ | CC-MAIN-2018-17 | refinedweb | 802 | 52.09 |
.
StringDictionary
System.Collections.Specialized.
AddKeyAndValueButLowerCaseTheKey()
HashTable
CaseInsensitiveHashCodeProvider.
Installer
Context
InstallContext
Parameters..
MyStuff:
XmlSectionSettingsBase
UpdateChanges():
XmlSerializerSectionHandler.
LoadSettings.
WatchForConfigChanges
FileSystemWatcher
_rootName.
UpdateChanges.
ReloadSettings.
AppDomain.CurrentDomain.SetupInformation.ConfigurationFile
DeserializeSection:..
For the uninitiated, a 404 message is a page that tells you that the file you requested cannot be found. Apparently, this site decided to have fun with it..
Regarding Syntax Highlighting, Daniel Turini pointed out that SnippetCompiler has the ability to export code to the clipboard (and to a file) as HTML.
Snippet Compiler has a lot of nice features and is a welcome addition to my toolbox, but purely for syntax highlighting, it has a few disadvantages compared to the manoli.net website I mentioned previously. First of all, although you can view snippets with line numbers, the line numbers aren't exported to HTML like Manoli does. Secondly, Manoli handles XML/HTML along with C# and VB, while SnippetCompiler seems to do well only with C# and VB.NET. Lastly, Manoli uses CSS for styling and you can have it embed the CSS definitions in the generated HTML, or reference the provided stylesheet. This is a really nice feature.
One thing I do like about the SnippetCompiler is how the summary tags in the comments are gray while the actual comment is green. That's a nice touch.
///<summary>
///Manages cool things
///</summary>
public class ThisIsSoCool
{
///
/// This is seriously neat.
///
public void YouShouldTryThis()
{}
}
///<summary>
///Manages cool things
///</summary>
public class ThisIsSoCool
{
///
/// This is seriously neat.
///
public void YouShouldTryThis()
{}
}
///<summary>
///Manages cool things
///</summary>
public class ThisIsSoCool
{
///
/// This is seriously neat.
///
public void YouShouldTryThis()
{}
}: }
You.+ hours a day, you’ve probably experienced pain at one point or another in your hands, wrists, shoulders, and/or back. Typically, if you’re like me, you’ll ignore it at first, maybe blame yourself for being weak, try hitting the gym more. However, at one point or another, you have to deal with it because it gets too painful not to. Friends and coworkers may not understand, but if you dig around, you’re almost guaranteed to find one or more coworkers who are silently dealing with this type of injury.
And yes, I do mean injury. Everybody seems to want to call it Carpal Tunnel Syndrome (CTS), but CTS is only one small type of injury within a family of injuries often grouped under the term Repetitive Stress Injury (RSI). RSI ailments include tendinitis, neuritis, CTS, etc...
The real difficulty of these types of injuries is that they are a relative newcomer in the annals of medicine and are thus quite misunderstood. From outward appearances, you’re sitting on your ass all day, how can you get injured? Well let me give you some stats.
At the end of an average eight-hour workday, the fingers have walked 16 miles over the keys and have expended energy equal to the lifting of 1 1/4 tons. - DataHand
This rapid increase in RSIs coincides with the increase of personal computer use. There are now an estimated 70 million PCs in the USA. Dr. Pascarelli estimates that RSIs now cost companies $20 billion a year. - WebReference.com
Hopefully the first quote highlight just how much work we make our little fingers do in a day, and the second quote appeals to you (and your employer’s) pocketbook. Much of these costs can be easily reduced dramatically by taking a proactive and preventive approach to RSI. For the company, that means saving a lot of money by not taking a short-sighted approach. Make sure your employees have the right equipment and an ergonomic evaluation. For you the individual, that means making sure you work in an ergonomic fashion and get help at the first sign of pain.
I will talk a bit about my experience in upcoming postings. I continue to struggle with pain, but I currently have Workman’s Comp which pays for my treatments and hooked me up with an ergonomic chair.
Some references of note:
Keymileage calculates how far your fingers must travel to type something..
Just goes to show I can't draw.....
This.
Thanks to the help of the very talented Joel Bernarte, I have a nice new look to the site. He created the logo you see at top. I then spent a bunch of time trying to modify the layout and Css to do the logo justice..
I. | http://haacked.com/archive/2004/06.aspx | crawl-001 | refinedweb | 729 | 55.54 |
I'm learning C programming and I don't have a background in programming. I want to control the timing of commands for a simple console program and looked up online the sleep() function. However, it requires the <unistd.h> header file. I'm using Visual Studio .Net on my work computer (because my lab has that on all computers.) However, when I put:
#include <unistd.h>
or
"#include "unistd.h" at the beginning of the program, I get the error:
fatal error C1083: Cannot open include file: 'unistd.h': No such file or directory
So I looked up online that the unistd.h header file is part of the POSIX library as opposed to the ANSI library. Perhaps that is meaningless, I don't know.
Question 1: How do I know what header files I have access to with the Visual Studio .net software?
Question 2: Can I download unistd.h from somewhere on the web and put it somewhere so that my program will know where to access it?
thanks in advance. | http://cboard.cprogramming.com/c-programming/82865-header-file-location-confusion.html | CC-MAIN-2015-40 | refinedweb | 174 | 77.94 |
Copy selected layers to new font
- MauriceMeilleur last edited by gferreira
Hi, new to RF and scripting; this is probably a simple question because I can't find it addressed anywhere (or I'm framing the question wrong):
I have a master version of my modular script Kast with all the layers needed to build its various versions:
I'd like to selectively export its layers to new fonts (top right, top left, left right, etc.). I can get the master's layers names, create a new font, but I can't figure out the next steps. Even just a nudge with the proper methods would be appreciated.
Do you need each layer as a separate UFO? if so you can export each glyph to a new font object and work from there
if not:
You can generate manually with the generate menu item and select a specific layer.
or for now:
import os destRoot = 'root/folder/to/store/binaryFont' font = CurrentFont() for layer in font.layers: layerName = layer.name path = os.path.join(destRoot, f"{layerName}-{font.info.familyName}-{font.info.styleName}.otf") # the naked layer object has a separate generate api layer.naked().generate(path=path, format="otf")
This will move in future version to
font.generate(..., layerName="background")where the
layerNameargument will selecte a specific layer.
hope this helps
- MauriceMeilleur last edited by
Thanks, Frederik! But my goal is to generate fonts from different combinations of layers from the master.
So the ideal pseudocode would read like this (I think), for example to create a font with the top-facing sides in the virtual cubes in my design:
font1 = master UFO with all layers font2 = new font with one layer for layer in font1: if layer name = top or top_background: for each glyph in layer: copy paths in glyph to corresponding glyph in font2 else pass
hello @MauriceMeilleur,
if I understand it correctly, you would like to make individual styles by combining some of the layers?
if so, have a look at Boolean Glyph Math. you can get individual glyph layers with
glyph.getLayer(layerName).
hope this helps, good luck!
- MauriceMeilleur last edited by
@gferreira I think those were the clues I needed—thanks! | https://forum.robofont.com/topic/743/copy-selected-layers-to-new-font | CC-MAIN-2020-40 | refinedweb | 367 | 52.7 |
15 February 2011 18:17 [Source: ICIS news]
By Mike Nash
?xml:namespace>
Why are fertilizer prices so strong?
First, commodity prices for corn and wheat are very bullish, so farmers can afford to pay up for agricultural inputs.
This is because food supply is tight, which is driving up demand for fertilizers in order to increase yield from limited land resources.
Second, fertilizer supply is tight, particularly for nitrogen and phosphates. This is due to a combination of strong demand and production outages.
Third, energy and raw material costs – gas and ammonia – are also high, which feed through to higher fertilizer prices.
The future is certainly volatile, as Yara president and CEO Jorgen Ole Haslestad said.
“The significant swing from 2009 to 2010 underlines the short-term volatility in our business, but also shows that deliveries have rebounded quickly in the agricultural business as demand continued to grow robustly even through global economic slowdowns,” Haslestad said.
This last point is worth repeating: the fertilizer market went through the same catastrophic crash as every other market in 2009, when prices plummeted, resulting in many buyers left high and dry with high-priced inventory.
But the fundamentals have stayed firm throughout the economic downturn. Fertilizer demand is fuelled by population growth and demand for more and better quality food.
This is why there is so much interest in the financial performance of the major fertilizer producers and the exceptional level of speculation about how the supply side will shape up in the next few years.
For example, takeover and merger activity in the fertilizer industry – such as the recent speculation surrounding BHP Billiton’s attempts at a hostile takeover of PotashCorp, which eventually failed – has become mainstream news.
The test of Yara’s confidence in the market will be its plans to increase capacity in 2011.
Expanded capacity at the company’s Sluiskil urea plant in the
The short-term outlook for urea is bullish and Yara is well-placed to take advantage of this.
For more on Yara | http://www.icis.com/Articles/2011/02/15/9435667/yaras-fourth-quarter-results-reflect-strong-fertilizer.html | CC-MAIN-2014-10 | refinedweb | 338 | 51.78 |
This is the mail archive of the [email protected] mailing list for the GDB project.
Hi Kevin, That sounds like a very useful feature. See below for comments. On 2019-07-21 7:54 p.m., Kevin Buettner wrote: > This commit introduces a new command line switch to GDB, a -P / > --python switch which is used for executing a python script. > Encountering -P curtails normal argument processing by GDB; any > remaining arguments (after the script name) are passed to the > script. > > This is work that was originally written as part of the Archer > project. The original/primary author is Tom Tromey, but it has > been maintained for the Fedora project by Jan Kratochvil, Sergio > Durigan Junior, and perhaps others too. > > In its original form, and even in the form found within the Fedora > sources, the code implementing the -P switch had several properties > which I found to be surprising: > > 1) After script execution, exit() was called which (obviously) > caused GDB to exit. > 2) The printing of GDB's banner (copyright info, bug reporting > instructions, and help instructions) was suppressed. > 3) Due to the exit() noted above, GDB's CLI was not (automatically) > invoked. If the CLI was desired, it could be run from the Python > script via use of the gdb.cli method, which was added as part of > that work. > > I've changed things so that exit() is no longer called. GDB's CLI > will be invoked after script execution. Also, GDB's banner will be > printed (or not) as normal. I.e, the banner will be printed unless > the -q switch is specified. > > If the script doesn't want the CLI for some reason, it can explicitly > call exit(). It may be the case that the script would be better off > calling a (yet to be written) gdb.exit() method for doing this > instead. Such a method could make sure that GDB shuts down properly. 1. Since it's closely related to "-x", it would be nice if it was possible to use -P with --batch, just like we can with -x. For example, with this Python script: import sys print('args:', sys.argv) $ ./gdb --data-directory=data-directory --batch -x test.py args: [''] $ ./gdb --data-directory=data-directory --batch -P test.py $ If you want your execution to be completely unattended, it sounds more fragile to rely on calling exit at the end of your script. If there's an error and it ends with an exception, your exit won't be called. So I'd prefer if --batch worked for this case. 2. When using "--batch -x" with a gdb script, gdb's exit code will reflect if we sourced the script successfully. Unfortunately, it doesn't work the same way with "--batch -x" and a Python script. I think it would be useful if it did (a Python script would be considered as failing if it ends by raising an exception). If we make that work, then it would be nice for "--batch -P" to work the same way. To be clear, since it's not an existing feature even for -x, I am not asking you to implement that as part of this patch. 3. When you run a program under the standalone Python interpreter, sys.argv[0] is the script name. Perhaps people will expect it to be the same here? It would be confusing if for GDB Python scripts, it's different. Also, the setting of sys.argv remains for the rest of the Python interpreter's lifetime: $ ./gdb --data-directory=data-directory -q -P test.py Hello args: ['Hello'] (gdb) pi >>> import sys >>> sys.argv ['Hello'] I think it's not very likely that people's script would rely on the fact that sys.argv was initially empty, but if we could reset sys.argv to [''] when we are done executing that Python script, it would reduce the chances that we break something. > gdb/ChangeLog: > > * main.c (python/python.h): Include. > (captured_main_1): Add option processing and other support for -P > switch. > (captured_main): Add help messages for -P. > * python/python.h (run_python_script): Declare. > * python/python.c (run_python_script): New function. > --- > gdb/main.c | 48 ++++++++++++++++++++++++++++++++++++++------ > gdb/python/python.c | 49 +++++++++++++++++++++++++++++++++++++++++++++ > gdb/python/python.h | 2 ++ > 3 files changed, 93 insertions(+), 6 deletions(-) > > diff --git a/gdb/main.c b/gdb/main.c > index 678c413021..bc8238e3ce 100644 > --- a/gdb/main.c > +++ b/gdb/main.c > @@ -33,6 +33,7 @@ > > #include "interps.h" > #include "main.h" > +#include "python/python.h" > #include "source.h" > #include "cli/cli-cmds.h" > #include "objfiles.h" > @@ -440,7 +441,7 @@ struct cmdarg > }; > > static void > -captured_main_1 (struct captured_main_args *context) > +captured_main_1 (struct captured_main_args *context, bool &python_script) > { > int argc = context->argc; > char **argv = context->argv; > @@ -658,10 +659,14 @@ captured_main_1 (struct captured_main_args *context) > {"args", no_argument, &set_args, 1}, > {"l", required_argument, 0, 'l'}, > {"return-child-result", no_argument, &return_child_result, 1}, > +#if HAVE_PYTHON > + {"python", no_argument, 0, 'P'}, > + {"P", no_argument, 0, 'P'}, > +#endif > {0, no_argument, 0, 0} > }; > > - while (1) > + while (!python_script) > { > int option_index; > > @@ -679,6 +684,9 @@ captured_main_1 (struct captured_main_args *context) > case 0: > /* Long option that just sets a flag. */ > break; > + case 'P': > + python_script = true; > + break; > case OPT_SE: > symarg = optarg; > execarg = optarg; > @@ -858,7 +866,20 @@ captured_main_1 (struct captured_main_args *context) > > /* Now that gdb_init has created the initial inferior, we're in > position to set args for that inferior. */ > - if (set_args) > + if (python_script) > + { > + /* The first argument is a python script to evaluate, and > + subsequent arguments are passed to the script for > + processing there. */ > + if (optind >= argc) > + { > + fprintf_unfiltered (gdb_stderr, > + _("%s: Python script file name required\n"), > + argv[0]); > + exit (1); > + } > + } > + else if (set_args) > { > /* The remaining options are the command-line options for the > inferior. The first one is the sym/exec file, and the rest > @@ -1157,9 +1178,14 @@ static void > captured_main (void *data) > { > struct captured_main_args *context = (struct captured_main_args *) data; > + bool python_script = false; > > - captured_main_1 (context); > + captured_main_1 (context, python_script); I know it's a heated debate (well, not really but still), but I would prefer the reference to the variable was passed as a pointer, &python_script. I really find it confusing to have it this way, since it looks like you are just constantly passing false to the function (so the variable looks unnecessary). > > +#if HAVE_PYTHON > + if (python_script) > + run_python_script (context->argc - optind, &context->argv[optind]); > +#endif > /* NOTE: cagney/1999-11-07: There is probably no reason for not > moving this loop and the code found in captured_command_loop() > into the command_loop() proper. The main thing holding back that > @@ -1215,9 +1241,12 @@ print_gdb_help (struct ui_file *stream) > fputs_unfiltered (_("\ > This is the GNU debugger. Usage:\n\n\ > gdb [options] [executable-file [core-file or process-id]]\n\ > - gdb [options] --args executable-file [inferior-arguments ...]\n\n\ > -"), stream); > + gdb [options] --args executable-file [inferior-arguments ...]\n"), stream); > +#if HAVE_PYTHON > fputs_unfiltered (_("\ > + gdb [options] [--python|-P] script-file [script-arguments ...]\n"), stream); > +#endif > + fputs_unfiltered (_("\n\ > Selection of debuggee and its files:\n\n\ > --args Arguments after executable-file are passed to inferior\n\ > --core=COREFILE Analyze the core dump COREFILE.\n\ > @@ -1260,6 +1289,13 @@ Output and user interface control:\n\n\ > #endif > fputs_unfiltered (_("\ > --dbx DBX compatibility mode.\n\ > +"), stream); > +#if HAVE_PYTHON > + fputs_unfiltered (_("\ > + --python, -P Following argument is Python script file; remaining\n\ > + arguments are passed to script.\n"), stream); > +#endif > + fputs_unfiltered (_("\ > -q, --quiet, --silent\n\ > Do not print version number on startup.\n\n\ > "), stream); > diff --git a/gdb/python/python.c b/gdb/python/python.c > index 96bee7c3b0..7bd4d1684f 100644 > --- a/gdb/python/python.c > +++ b/gdb/python/python.c > @@ -1276,6 +1276,55 @@ gdbpy_print_stack_or_quit () > > > > +/* Set up the Python argument vector and evaluate a script. This is > + used to implement 'gdb -P'. */ > + > +void > +run_python_script (int argc, char **argv) Even though the surrounding code is not like that, I would suggest following our current conventions, putting /* See python.h. */ here and put the doc in the .h. > +{ > + if (!gdb_python_initialized) > + return; > + > + gdbpy_enter enter_py (get_current_arch (), current_language); > + > +#if PYTHON_ABI_VERSION < 3 We is the IS_PY3K macro throughout. > + PySys_SetArgv (argc - 1, argv + 1); > +#else > + { > + wchar_t **wargv = (wchar_t **) alloca (sizeof (*wargv) * (argc + 1)); I'd suggest using XALLOCAVEC: XALLOCAVEC (wchar_t *, argc + 1) > + int i; You can inline this declaration in the for loop. > + > + for (i = 1; i < argc; i++) > + { > + size_t len = mbstowcs (NULL, argv[i], 0); > + > + if (len == (size_t) -1) > + { > + fprintf (stderr, "Invalid multibyte argument #%d \"%s\"\n", > + i, argv[i]); > + exit (1); > + } I think it would be more gdb-ish to call error () instead of plain exiting. > + wargv[i] = (wchar_t *) alloca (sizeof (**wargv) * (len + 1)); Suggest using XALLOCAVEC. Or even dynamic allocation to be safer, given that args can be actually quite long for alloca. > + size_t len2 = mbstowcs (wargv[i], argv[i], len + 1); > + assert (len2 == len); > + } > + wargv[argc] = NULL; > + PySys_SetArgv (argc - 1, wargv + 1); > + } > +#endif > + > + FILE *input = fopen (argv[0], "r"); Do we want to use gdb_fopen_cloexec? > + if (! input) if (input == nullptr) > + { > + fprintf (stderr, "could not open %s: %s\n", argv[0], strerror (errno)); > + exit (1); error () instead of exiting? > + }> + PyRun_SimpleFile (input, argv[0]); > + fclose (input); If using gdb_fopen_cloexec, the file will get closed automatically as we leave the scope, reducing chances that we leak an open file. > +} > + > + > + > /* Return a sequence holding all the Progspaces. */ > > static PyObject * > diff --git a/gdb/python/python.h b/gdb/python/python.h > index 10cd90d00e..2af0b2934d 100644 > --- a/gdb/python/python.h > +++ b/gdb/python/python.h > @@ -28,4 +28,6 @@ extern const struct extension_language_defn extension_language_python; > /* Command element for the 'python' command. */ > extern cmd_list_element *python_cmd_element; > > +extern void run_python_script (int argc, char **argv); > + > #endif /* PYTHON_PYTHON_H */ > Simon | https://sourceware.org/ml/gdb-patches/2019-07/msg00530.html | CC-MAIN-2020-05 | refinedweb | 1,569 | 65.62 |
Table of Contents
A Bloom filter is a set-like data structure that is highly efficient in its use of space. It only supports two operations: insertion and membership querying. Unlike a normal set data structure, a Bloom filter can give incorrect answers. If we query it to see whether an element that we have inserted is present, it will answer affirmatively. If we query for an element that we have not inserted, it might incorrectly claim that the element is present.
For many applications, a low rate of false positives is tolerable. For instance, the job of a network traffic shaper is to throttle bulk transfers (e.g. BitTorrent) so that interactive sessions (such as ssh sessions or games) see good response times. A traffic shaper might use a Bloom filter to determine whether a packet belonging to a particular session is bulk or interactive. If it misidentifies one in ten thousand bulk packets as interactive and fails to throttle it, nobody will notice.
The attraction of a Bloom filter is its space efficiency. If we want to build a spell checker, and have a dictionary of half a million words, a set data structure might consume 20 megabytes of space. A Bloom filter, in contrast, would consume about half a megabyte, at the cost of missing perhaps 1% of misspelled words.
Behind the scenes, a Bloom filter is remarkably simple. It consists of a bit array and a handful of hash functions. We'll use k for the number of hash functions. If we want to insert a value into the Bloom filter, we compute k hashes of the value, and turn on those bits in the bit array. If we want to see whether a value is present, we compute k hashes, and check all of those bits in the array to see if they are turned on.
To see how this works, let's say we want to insert the
strings
"foo" and
"bar" into a Bloom
filter that is 8 bits wide, and we have two hash
functions.
Compute the two hashes of
"foo", and get
the values
1 and
6.
Set bits
1 and
6 in the bit
array.
Compute the two hashes of
"bar", and get
the values
6 and
3.
Set bits
6 and
3 in the bit
array.
This example should make it clear why we cannot remove an
element from a Bloom filter: both
"foo" and
"bar" resulted in bit 6 being set.
Suppose we now want to query the Bloom filter, to see
whether the values
"quux" and
"baz"
are present.
Compute the two hashes of
"quux", and get
the values
4 and
0.
Check bit
4 in the bit array. It is not
set, so
"quux" cannot be present. We do not
need to check bit
0.
Compute the two hashes of
"baz", and get
the values
1 and
3.
Check bit
1 in the bit array. It is
set, as is bit
3, so we say that
"baz" is present even though it is not. We
have reported a false positive.
For a survey of some of the uses of Bloom filters in networking, see [Broder02].
Not all users of Bloom filters have the same needs. In some cases, it suffices to create a Bloom filter in one pass, and only query it afterwards. For other applications, we may need to continue to update the Bloom filter after we create it. To accommodate these needs, we will design our library with mutable and immutable APIs.
We will segregate the mutable and immutable APIs that we
publish by placing them in different modules:
BloomFilter for the immutable code, and
BloomFilter.Mutable for the mutable code.
In addition, we will create several “helper” modules that won't provide parts of the public API, but will keep the internal code cleaner.
Finally, we will ask the user of our API to provide a function that can generate a number of hashes of an element. This function will have the type a -> [Word32]. We will use all of the hashes that this function returns, so the list must not be infinite!
The data structure that we use for our Haskell Bloom filter is a direct translation of the simple description we gave earlier: a bit array and a function that computes hashes.
-- file: BloomFilter/Internal.hs module BloomFilter.Internal ( Bloom(..) , MutBloom(..) ) where import Data.Array.ST (STUArray) import Data.Array.Unboxed (UArray) import Data.Word (Word32) data Bloom a = B { blmHash :: (a -> [Word32]) , blmArray :: UArray Word32 Bool }
When we create our Cabal package, we will not be exporting
this
BloomFilter.Internal module. It exists purely
to let us control the visibility of names. We will import
BloomFilter.Internal into both the mutable and
immutable modules, but we will re-export from each module only
the type that is relevant to that module's API.
Unlike other Haskell arrays, a UArray contains unboxed values.
For a normal Haskell type, a value can be either fully
evaluated, an unevaluated thunk, or the special value
⊥, pronounced (and sometimes written)
“bottom”. The value ⊥ is a placeholder
for a computation that does not succeed. Such a computation
could take any of several forms. It could be an infinite
loop; an application of
error; or the
special value
undefined.
A type that can contain ⊥ is referred to as
lifted. All normal Haskell types are
lifted. In practice, this means that we can always write
error "eek!" or
undefined in place
of a normal expression.
This ability to store thunks or ⊥ comes with a performance cost: it adds an extra layer of indirection. To see why we need this indirection, consider the Word32 type. A value of this type is a full 32 bits wide, so on a 32-bit system, there is no way to directly encode the value ⊥ within 32 bits. The runtime system has to maintain, and check, some extra data to track whether the value is ⊥ or not.
An unboxed value does away with this indirection. In doing so, it gains performance, but sacrifices the ability to represent a thunk or ⊥. Since it can be denser than a normal Haskell array, an array of unboxed values is an excellent choice for numeric data and bits.
GHC implements a UArray of Bool values by packing eight array elements into each byte, so this type is perfect for our needs.
Back in the section called “Modifying array elements”, we mentioned that modifying an immutable array is prohibitively expensive, as it requires copying the entire array. Using a UArray does not change this, so what can we do to reduce the cost to bearable levels?
In an imperative language, we would simply modify the elements of the array in place; this will be our approach in Haskell, too.
Haskell provides a special monad, named ST[58], which lets us work safely with mutable state. Compared to the State monad, it has some powerful added capabilities.
We can thaw an immutable array to give a mutable array; modify the mutable array in place; and freeze a new immutable array when we are done.
We have the ability to use mutable references. This lets us implement data structures that we can modify after construction, as in an imperative language. This ability is vital for some imperative data structures and algorithms, for which similarly efficient purely functional alternatives have not yet been discovered.
The IO monad also provides these capabilities.
The major difference between the two is that the ST
monad is intentionally designed so that we can
escape from it back into pure Haskell code.
We enter the ST monad via the execution function
runST, in the same way as for most other
Haskell monads (except IO, of course), and we
escape by returning from
runST.
When we apply a monad's execution function, we expect it to
behave repeatably: given the same body and arguments, we must
get the same results every time. This also applies to
runST. To achieve this repeatability, the
ST monad is more restrictive than the
IO monad. We cannot read or write files, create
global variables, or fork threads. Indeed, although we can
create and work with mutable references and arrays, the type
system prevents them from escaping to the caller of
runST. A mutable array must be frozen into
an immutable array before we can return it, and a mutable
reference cannot escape at all.
The public interfaces that we provide for working with Bloom filters are worth a little discussion.
-- file: BloomFilter/Mutable.hs module BloomFilter.Mutable ( MutBloom , elem , notElem , insert , length , new ) where import Control.Monad (liftM) import Control.Monad.ST (ST) import Data.Array.MArray (getBounds, newArray, readArray, writeArray) import Data.Word (Word32) import Prelude hiding (elem, length, notElem) import BloomFilter.Internal (MutBloom(..))
We export several names that clash with names exported by
the Prelude. This is deliberate: we expect users of our modules
to import them with qualified names. This reduces the burden on
the memory of our users, as they should already be familiar with
the Prelude's
elem,
notElem, and
length
functions.
When we use a module written in this style, we might often
import it with a single-letter prefix, for instance as
import qualified BloomFilter.Mutable as M. This
would allow us to write
M.length, which
stays compact and readable.
Alternatively, we could import the module unqualified, and
import the Prelude while hiding the clashing names with
import Prelude hiding (length). This is much less
useful, as it gives a reader skimming the code no local cue that
they are not actually seeing the Prelude's
length.
Of course, we seem to be violating this precept in our own
module's header: we import the Prelude, and hide some of the
names it exports. There is a practical reason for this. We
define a function named
length. If we
export this from our module without first hiding the Prelude's
length, the compiler will complain that it
cannot tell whether to export our version of
length or the Prelude's.
While we could export the fully qualified name
BloomFilter.Mutable.length to eliminate the
ambiguity, that seems uglier in this case. This decision has no
consequences for someone using our module, just for ourselves as
the authors of what ought to be a “black box”, so
there is little chance of confusion here.
We put type declaration for our mutable Bloom filter in the
BloomFilter.Internal module, along with the
immutable Bloom type.
-- file: BloomFilter/Internal.hs data MutBloom s a = MB { mutHash :: (a -> [Word32]) , mutArray :: STUArray s Word32 Bool }
The STUArray type gives us a mutable unboxed
array that we can work with in the ST monad. To
create an STUArray, we use the
newArray function. The
new function belongs in the
BloomFilter.Mutable function.
-- file: BloomFilter/Mutable.hs new :: (a -> [Word32]) -> Word32 -> ST s (MutBloom s a) new hash numBits = MB hash `liftM` newArray (0,numBits-1) False
Most of the methods of STUArray are actually
implementations of the MArray typeclass, which is
defined in the
Data.Array.MArray module.
Our
length function is slightly
complicated by two factors. We are relying on our bit array's
record of its own bounds, and an MArray instance's
getBounds function has a monadic type. We
also have to add one to the answer, as the upper bound of the
array is one less than its actual length.
-- file: BloomFilter/Mutable.hs length :: MutBloom s a -> ST s Word32 length filt = (succ . snd) `liftM` getBounds (mutArray filt)
To add an element to the Bloom filter, we set all of the
bits indicated by the hash function. We use the
mod function to ensure that all of the
hashes stay within the bounds of our array, and isolate our code
that computes offsets into the bit array in one function.
-- file: BloomFilter/Mutable.hs insert :: MutBloom s a -> a -> ST s () insert filt elt = indices filt elt >>= mapM_ (\bit -> writeArray (mutArray filt) bit True) indices :: MutBloom s a -> a -> ST s [Word32] indices filt elt = do modulus <- length filt return $ map (`mod` modulus) (mutHash filt elt)
Testing for membership is no more difficult. If every bit indicated by the hash function is set, we consider an element to be present in the Bloom filter.
-- file: BloomFilter/Mutable.hs elem, notElem :: a -> MutBloom s a -> ST s Bool elem elt filt = indices filt elt >>= allM (readArray (mutArray filt)) notElem elt filt = not `liftM` elem elt filt
We need to write a small supporting function: a monadic
version of
all, which we will call
allM.
-- file: BloomFilter/Mutable.hs allM :: Monad m => (a -> m Bool) -> [a] -> m Bool allM p (x:xs) = do ok <- p x if ok then allM p xs else return False allM _ [] = return True
Our interface to the immutable Bloom filter has the same structure as the mutable API.
-- file: ch26/BloomFilter.hs module BloomFilter ( Bloom , length , elem , notElem , fromList ) where import BloomFilter.Internal import BloomFilter.Mutable (insert, new) import Data.Array.ST (runSTUArray) import Data.Array.IArray ((!), bounds) import Data.Word (Word32) import Prelude hiding (elem, length, notElem) length :: Bloom a -> Int length = fromIntegral . len len :: Bloom a -> Word32 len = succ . snd . bounds . blmArray elem :: a -> Bloom a -> Bool elt `elem` filt = all test (blmHash filt elt) where test hash = blmArray filt ! (hash `mod` len filt) notElem :: a -> Bloom a -> Bool elt `notElem` filt = not (elt `elem` filt)
We provide an easy-to-use means to create an immutable Bloom
filter, via a
fromList function. This
hides the ST monad from our users, so that they
only see the immutable type.
-- file: ch26/BloomFilter.hs fromList :: (a -> [Word32]) -- family of hash functions to use -> Word32 -- number of bits in filter -> [a] -- values to populate with -> Bloom a fromList hash numBits values = B hash . runSTUArray $ do mb <- new hash numBits mapM_ (insert mb) values return (mutArray mb)
The key to this function is
runSTUArray. We mentioned earlier that in
order to return an immutable array from the ST
monad, we must freeze a mutable array. The
runSTUArray function combines execution
with freezing. Given an action that returns an
STUArray, it executes the action using
runST; freezes the STUArray
that it returns; and returns that as a
UArray.
The
MArray typeclass provides a
freeze function that we could use instead,
but
runSTUArray is both more convenient and
more efficient. The efficiency lies in the fact that
freeze must copy the underlying data from
the STUArray to the new UArray, to
ensure that subsequent modifications of the
STUArray cannot affect the contents of the
UArray. Thanks to the type system,
runSTUArray can guarantee that an
STUArray is no longer accessible when it uses it to
create a UArray. It can thus share the underlying
contents between the two arrays, avoiding the copy.
Although provide our users with to need.
If we import both
BloomFilter.Easy and
BloomFilter, you might wonder what will happen if
we try to use a name exported by both. We already know that
if we import
BloomFilter unqualified and try to
use
length, GHC will issue an error
about ambiguity, because the Prelude also makes the name
length available.
The Haskell standard requires an implementation to be able
to tell when several names refer to the same
“thing”. For instance, the Bloom
type is exported by
BloomFilter and
BloomFilter.Easy. If we import both modules and
try to use Bloom, GHC will be able to see that
the Bloom re-exported from
BloomFilter.Easy is the same as the one exported
from
BloomFilter, and it will not report an
ambiguity.
A Bloom filter depends on fast, high-quality hashes for good performance and a low false positive rate. It is surprisingly difficult to write a general purpose hash function that has both of these properties.
Luckily for us, a fellow named Bob Jenkins developed some
hash functions that have exactly these properties, and he
placed the code in the public domain at[59] . He wrote his hash functions in C, so we can
easily use the FFI to create bindings to them. The specific
source file that we need from that site is named
lookup3.c.
We create a
cbits directory and download
it to there.
There remains one hitch: we will frequently need seven or
even ten hash functions. We really don't want to scrape
together that many different functions, and fortunately we do
not need to: in most cases, we can get away with just two. We
will see how shortly. The Jenkins hash library includes two
functions,
hashword2 and
hashlittle2, that compute two hash
values. Here is a C header file that describes the APIs of
these two functions. We save this to
cbits/lookup3.h.
/* save this file as lookup3.h */ #ifndef _lookup3_h #define _lookup3_h #include <stdint.h> #include <sys/types.h> /* only accepts uint32_t aligned arrays of uint32_t */ void hashword2(const uint32_t *key, /* array of uint32_t */ size_t length, /* number of uint32_t values */ uint32_t *pc, /* in: seed1, out: hash1 */ uint32_t *pb); /* in: seed2, out: hash2 */ /* handles arbitrarily aligned arrays of bytes */ void hashlittle2(const void *key, /* array of bytes */ size_t length, /* number of bytes */ uint32_t *pc, /* in: seed1, out: hash1 */ uint32_t *pb); /* in: seed2, out: hash2 */ #endif /* _lookup3_h */
A “salt” is a value that perturbs the hash value that the function computes. If we hash the same value with two different salts, we will get two different hashes. Since these functions compute two hashes, they accept two salts.
Here are our Haskell bindings to these functions.
-- file: BloomFilter/Hash.hs {-# LANGUAGE BangPatterns, ForeignFunctionInterface #-} module BloomFilter.Hash ( Hashable(..) , hash , doubleHash ) where import Data.Bits ((.&.), shiftR) import Foreign.Marshal.Array (withArrayLen) import Control.Monad (foldM) import Data.Word (Word32, Word64) import Foreign.C.Types (CSize) import Foreign.Marshal.Utils (with) import Foreign.Ptr (Ptr, castPtr, plusPtr) import Foreign.Storable (Storable, peek, sizeOf) import qualified Data.ByteString as Strict import qualified Data.ByteString.Lazy as Lazy import System.IO.Unsafe (unsafePerformIO) foreign import ccall unsafe "lookup3.h hashword2" hashWord2 :: Ptr Word32 -> CSize -> Ptr Word32 -> Ptr Word32 -> IO () foreign import ccall unsafe "lookup3.h hashlittle2" hashLittle2 :: Ptr a -> CSize -> Ptr Word32 -> Ptr Word32 -> IO ()
We have specified that the definitions of the functions
can be found in the
lookup3.h header file
that we just created.
For convenience and efficiency, we will combine the 32-bit salts consumed, and the hash values computed, by the Jenkins hash functions into a single 64-bit value.
-- file: BloomFilter/Hash.hs hashIO :: Ptr a -- value to hash -> CSize -- number of bytes -> Word64 -- salt -> IO Word64 hashIO ptr bytes salt = with (fromIntegral salt) $ \sp -> do let p1 = castPtr sp p2 = castPtr sp `plusPtr` 4 go p1 p2 peek sp where go p1 p2 | bytes .&. 3 == 0 = hashWord2 (castPtr ptr) words p1 p2 | otherwise = hashLittle2 ptr bytes p1 p2 words = bytes `div` 4
Without explicit types around to describe what is
happening, the above code is not completely obvious. The
with function allocates room for the salt
on the C stack, and stores the current salt value in there, so
sp is a Ptr Word64. The
pointers
p1 and
p2 are
Ptr Word32;
p1 points at the
low word of
sp, and
p2
at the high word. This is how we chop the single
Word64 salt into two Ptr Word32
parameters.
Because all of our data pointers are coming from the
Haskell heap, we know that they will be aligned on an address
that is safe to pass to either
hashWord2
(which only accepts 32-bit-aligned addresses) or
hashLittle2. Since
hashWord32 is the faster of the two
hashing functions, we call it if our data is a multiple of
4 bytes in size, otherwise
hashLittle2.
Since the C hash function will write the computed hashes
into
p1 and
p2, we only
need to
peek the pointer
sp to retrieve the computed hash.
We don't want clients of this module to be stuck fiddling with low-level details, so we use a typeclass to provide a clean, high-level interface.
-- file: BloomFilter/Hash.hs class Hashable a where hashSalt :: Word64 -- ^ salt -> a -- ^ value to hash -> Word64 hash :: Hashable a => a -> Word64 hash = hashSalt 0x106fc397cf62f64d3
We also provide a number of useful implementations of this typeclass. To hash basic types, we must write a little boilerplate code.
-- file: BloomFilter/Hash.hs hashStorable :: Storable a => Word64 -> a -> Word64 hashStorable salt k = unsafePerformIO . with k $ \ptr -> hashIO ptr (fromIntegral (sizeOf k)) salt instance Hashable Char where hashSalt = hashStorable instance Hashable Int where hashSalt = hashStorable instance Hashable Double where hashSalt = hashStorable
We might prefer to use the Storable typeclass to write just one declaration, as follows:
-- file: BloomFilter/Hash.hs instance Storable a => Hashable a where hashSalt = hashStorable
Unfortunately, Haskell does not permit us to write instances of this form, as allowing them would make the type system undecidable: they can cause the compiler's type checker to loop infinitely. This restriction on undecidable types forces us to write out individual declarations. It does not, however, pose a problem for a definition such as this one.
-- file: BloomFilter/Hash.hs hashList :: (Storable a) => Word64 -> [a] -> IO Word64 hashList salt xs = withArrayLen xs $ \len ptr -> hashIO ptr (fromIntegral (len * sizeOf x)) salt where x = head xs instance (Storable a) => Hashable [a] where hashSalt salt xs = unsafePerformIO $ hashList salt xs
The compiler will accept this instance, so we gain the ability to hash values of many list types[60]. Most importantly, since Char is an instance of Storable, we can now hash String values.
For tuple types, we take advantage of function composition. We take a salt in at one end of the composition pipeline, and use the result of hashing each tuple element as the salt for the next element.
-- file: BloomFilter/Hash.hs hash2 :: (Hashable a) => a -> Word64 -> Word64 hash2 k salt = hashSalt salt k instance (Hashable a, Hashable b) => Hashable (a,b) where hashSalt salt (a,b) = hash2 b . hash2 a $ salt instance (Hashable a, Hashable b, Hashable c) => Hashable (a,b,c) where hashSalt salt (a,b,c) = hash2 c . hash2 b . hash2 a $ salt
To hash ByteString types, we write special instances that plug straight into the internals of the ByteString types. This gives us excellent hashing performance.
-- file: BloomFilter/Hash.hs hashByteString :: Word64 -> Strict.ByteString -> IO Word64 hashByteString salt bs = Strict.useAsCStringLen bs $ \(ptr, len) -> hashIO ptr (fromIntegral len) salt instance Hashable Strict.ByteString where hashSalt salt bs = unsafePerformIO $ hashByteString salt bs rechunk :: Lazy.ByteString -> [Strict.ByteString] rechunk s | Lazy.null s = [] | otherwise = let (pre,suf) = Lazy.splitAt chunkSize s in repack pre : rechunk suf where repack = Strict.concat . Lazy.toChunks chunkSize = 64 * 1024 instance Hashable Lazy.ByteString where hashSalt salt bs = unsafePerformIO $ foldM hashByteString salt (rechunk bs)
Since a lazy ByteString is represented as a
series of chunks, we must be careful with the boundaries
between those chunks. The string
"foobar" can be
represented in five different ways, for example
["fo","obar"] or
["foob","ar"]. This
is invisible to most users of the type, but not to us since we
use the underlying chunks directly. Our
rechunk function ensures that the chunks
we pass to the C hashing code are a uniform 64KB in size, so
that we will give consistent hash values no matter where the
original chunk boundaries lie.
As we mentioned earlier, we need many more than two hashes to make effective use of a Bloom filter. We can use a technique called double hashing to combine the two values computed by the Jenkins hash functions, yielding many more hashes. The resulting hashes are of good enough quality for our needs, and far cheaper than computing many distinct hashes.
--
In the
BloomFilter.Easy module, we use our
new
doubleHash function to define the
easyList function whose type we defined
earlier.
-- file: BloomFilter/Easy.hs module BloomFilter.Easy ( suggestSizing , sizings , easyList -- re-export useful names from BloomFilter , B.Bloom , B.length , B.elem , B.notElem ) where import BloomFilter.Hash (Hashable, doubleHash) import Data.List (genericLength) import Data.Maybe (catMaybes) import Data.Word (Word32) import qualified BloomFilter as B easyList errRate values = case suggestSizing (genericLength values) errRate of Left err -> Left err Right (bits,hashes) -> Right filt where filt = B.fromList (doubleHash hashes) bits values
This depends on a
suggestSizing
function that estimates the best combination of filter size
and number of hashes to compute, based on our desired false
positive rate and the maximum number of elements that we
expect the filter to contain.
-- file: BloomFilter/Easy.hs suggestSizing :: Integer -- expected maximum capacity -> Double -- desired false positive rate -> Either String (Word32,Int) -- (filter size, number of hashes) suggestSizing capacity errRate | capacity <= 0 = Left "capacity too small" | errRate <= 0 || errRate >= 1 = Left "invalid error rate" | null saneSizes = Left "capacity too large" | otherwise = Right (minimum saneSizes) where saneSizes = catMaybes . map sanitize $ sizings capacity errRate sanitize (bits,hashes) | bits > maxWord32 - 1 = Nothing | otherwise = Just (ceiling bits, truncate hashes) where maxWord32 = fromIntegral (maxBound :: Word32) sizings :: Integer -> Double -> [(Double, Double)] sizings capacity errRate = [(((-k) * cap / log (1 - (errRate ** (1 / k)))), k) | k <- [1..50]] where cap = fromIntegral capacity
We perform some rather paranoid checking. For instance,
the
sizings function suggests pairs of
array size and hash count, but it does not validate its
suggestions. Since we use 32-bit hashes, we must filter out
suggested array sizes that are too large.
In our
suggestSizing function, we
attempt to minimise only the size of the bit array, without
regard for the number of hashes. To see why, let us
interactively explore the relationship between array size and
number of hashes.
Suppose we want to insert 10 million elements into a Bloom filter, with a false positive rate of 0.1%.
ghci>
let kbytes (bits,hashes) = (ceiling bits `div` 8192, hashes)
ghci>
:m +BloomFilter.Easy Data.ListCould not find module `BloomFilter.Easy': Use -v to see a list of the files searched for.
ghci>
mapM_ (print . kbytes) . take 10 . sort $ sizings 10000000 0.001<interactive>:1:35: Not in scope: `sort' <interactive>:1:42: Not in scope: `sizings'
We achieve the most compact table (just over 17KB) by computing 10 hashes. If we really were hashing the data repeatedly, we could reduce the number of hashes to 7 at a cost of 5% in space. Since we are using Jenkins's hash functions which compute two hashes in a single pass, and double hashing the results to produce additional hashes, the cost to us of computing extra those hashes is tiny, so we will choose the smallest table size.
If we increase our tolerance for false positives tenfold, to 1%, the amount of space and the number of hashes we need drop, though not by easily predictable amounts.
ghci>
mapM_ (print . kbytes) . take 10 . sort $ sizings 10000000 0.01<interactive>:1:35: Not in scope: `sort' <interactive>:1:42: Not in scope: `sizings'
We have created a moderately complicated library, with four
public modules and one internal module. To turn this into a
package that we can easily redistribute, we create a
rwh-bloomfilter.cabal file.
Cabal allows us to describe several libraries in a single
package. A
.cabal file begins with
information that is common to all of the libraries, which is
followed by a distinct section for each library.
Name: rwh-bloomfilter Version: 0.1 License: BSD3 License-File: License.txt Category: Data Stability: experimental Build-Type: Simple
As we are bundling some C code with our library, we tell Cabal about our C source files.
Extra-Source-Files: cbits/lookup3.c cbits/lookup3.h
The
extra-source-files directive has no effect
on a build: it directs Cabal to bundle some extra files if we
run runhaskell Setup sdist to create a source
tarball for redistribution.
Prior to 2007, the standard Haskell libraries were
organised in a handful of large packages, of which the biggest
was named
base. This organisation tied
many unrelated libraries together, so the Haskell community
split the
base package up into a number
of more modular libraries. For instance, the array types
migrated from
base into a package named
array.
A Cabal package needs to specify the other packages that
it needs to have present in order to build. This makes it
possible for Cabal's command line interface automatically
download and build a package's dependencies, if necessary. We
would like our code to work with as many versions of GHC as
possible, regardless of whether they have the modern layout of
base and numerous other packages. We
thus need to be able to specify that we depend on the
array package if it is present, and
base alone otherwise.
Cabal provides a generic
configurations feature, which we can use
to selectively enable parts of a
.cabal
file. A build configuration is controlled by a Boolean-valued
flag. If it is
True, the
text following an
if flag directive is used,
otherwise the text following the associated
else
is used.
Cabal-Version: >= 1.2 Flag split-base Description: Has the base package been split up? Default: True Flag bytestring-in-base Description: Is ByteString in the base or bytestring package? Default: False
The configurations feature was introduced in version 1.2 of Cabal, so we specify that our package cannot be built with an older version.
The meaning of the
split-base flag should
be self-explanatory.
The
bytestring-in-base flag deals with a
more torturous history. When the
bytestring package was first created,
it was bundled with GHC 6.4, and kept separate from the
base package. In GHC 6.6, it was
incorporated into the
base package,
but it became independent again when the
base package was split before the
release of GHC 6.8.1.
These flags are usually invisible to people building a
package, because Cabal handles them automatically. Before we
explain what happens, it will help to see the beginning of the
Library section of our
.cabal
file.
Library if flag(bytestring-in-base) -- bytestring was in base-2.0 and 2.1.1 Build-Depends: base >= 2.0 && < 2.2 else -- in base 1.0 and 3.0, bytestring is a separate package Build-Depends: base < 2.0 || >= 3, bytestring >= 0.9 if flag(split-base) Build-Depends: base >= 3.0, array else Build-Depends: base < 3.0
Cabal creates a package description with the default
values of the flags (a missing default is assumed to be
True). If that configuration can be built (e.g.
because all of the needed package versions are available), it
will be used. Otherwise, Cabal tries different combinations
of flags until it either finds a configuration that it can
build or exhausts the alternatives.
For example, if we were to begin with both
split-base and
bytestring-in-base
set to
True, Cabal would select the following
package dependencies.
Build-Depends: base >= 2.0 && < 2.2 Build-Depends: base >= 3.0, array
The
base package cannot
simultaneously be newer than
3.0 and older than
2.2, so Cabal would reject this configuration as
inconsistent. For a modern version of GHC, after a few
attempts it would discover this configuration that will indeed
build.
-- in base 1.0 and 3.0, bytestring is a separate package Build-Depends: base < 2.0 || >= 3, bytestring >= 0.9 Build-Depends: base >= 3.0, array
When we run runhaskell Setup configure,
we can manually specify the values of flags via the
--flag option, though we will rarely need to
do so in practice.
Continuing with our
.cabal
file, we fill out the remaining details of the Haskell side of
our library. If we enable profiling when we build, we want
all of our top-level functions to show up in any profiling
output.
GHC-Prof-Options: -auto-all
The
Other-Modules property lists Haskell
modules that are private to the library. Such modules will be
invisible to code that uses this package.
When we build this package with GHC, Cabal will pass the
options from the
GHC-Options property to the
compiler.
The
-O2 option makes GHC optimise our
code aggressively. Code compiled without optimisation is very
slow, so we should always use
-O2 for
production code.
To help ourselves to write cleaner code, we usually add
the
-Wall option, which enables all of
GHC's warnings. This will cause GHC to issue complaints
if it encounters potential problems, such as overlapping
patterns; function parameters that are not used; and a myriad
of other potential stumbling blocks. While it is often safe to
ignore these warnings, we generally prefer to fix up our code
to eliminate them. The small added effort usually yields code
that is easier to read and maintain.
When we compile with
-fvia-C, GHC will
generate C code and use the system's C compiler to compile it,
instead of going straight to assembly language as it usually
does. This slows compilation down, but sometimes the C
compiler can further improve GHC's optimised code, so it can
be worthwhile.
We include
-fvia-C here mainly to show
how to make compilation with it work.
C-Sources: cbits/lookup3.c CC-Options: -O3 Include-Dirs: cbits Includes: lookup3.h Install-Includes: lookup3.h
For the
C-Sources property, we only need to
list files that must be compiled into our library. The
CC-Options property contains options for the C
compiler (
-O3 specifies a high level of
optimisation). Because our FFI bindings for the Jenkins hash
functions refer to the
lookup3.h header
file, we need to tell Cabal where to find the header file. We
must also tell it to install the header
file (
Install-Includes), as otherwise client code
will fail to find the header file when we try to build
it.
Before we pay any attention to performance, we want to establish that our Bloom filter behaves correctly. We can easily use QuickCheck to test some basic properties.
-- file: examples/BloomCheck.hs {-# LANGUAGE GeneralizedNewtypeDeriving #-} module Main where import BloomFilter.Hash (Hashable) import Data.Word (Word8, Word32) import System.Random (Random(..), RandomGen) import Test.QuickCheck import qualified BloomFilter.Easy as B import qualified Data.ByteString as Strict import qualified Data.ByteString.Lazy as Lazy
We will not use the normal
quickCheck
function to test our properties, as the 100 test inputs that it
generates do not provide much coverage.
-- file: examples/BloomCheck.hs handyCheck :: Testable a => Int -> a -> IO () handyCheck limit = check defaultConfig { configMaxTest = limit , configEvery = \_ _ -> "" }
Our first task is to ensure that if we add a value to a Bloom filter, a subsequent membership test will always report it as present, no matter what the chosen false positive rate or input value is.
We will use the
easyList function to
create a Bloom filter. The Random instance for
Double generates numbers in the range zero to one,
so QuickCheck can nearly supply us with
arbitrary false positive rates.
However, we need to ensure that both zero and one are excluded from the false positives we test with. QuickCheck gives us two ways to do this.
By construction: we specify the
range of valid values to generate. QuickCheck provides a
forAll combinator for this
purpose.
By elimination: when QuickCheck
generates an arbitrary value for us, we filter out those
that do not fit our criteria, using the
(==>) operator. If we reject a
value in this way, a test will appear to succeed.
If we can choose either method, it is always preferable to take the constructive approach. To see why, suppose that QuickCheck generates 1,000 arbitrary values for us, and we filter out 800 as unsuitable for some reason. We will appear to run 1,000 tests, but only 200 will actually do anything useful.
Following this idea, when we generate desired false positive rates, we could eliminate zeroes and ones from whatever QuickCheck gives us, but instead we construct values in an interval that will always be valid.
-- file: examples/BloomCheck.hs falsePositive :: Gen Double falsePositive = choose (epsilon, 1 - epsilon) where epsilon = 1e-6 (=~>) :: Either a b -> (b -> Bool) -> Bool k =~> f = either (const True) f k prop_one_present _ elt = forAll falsePositive $ \errRate -> B.easyList errRate [elt] =~> \filt -> elt `B.elem` filt
Our small combinator,
(=~>), lets us
filter out failures of
easyList: if it
fails, the test automatically passes.
QuickCheck requires properties to be monomorphic. Since we have many different hashable types that we would like to test, we would very much like to avoid having to write the same test in many different ways.
Notice that although our
prop_one_present function is polymorphic,
it ignores its first argument. We use this to simulate
monomorphic properties, as follows.
ghci>
:load BloomCheckBloomCheck.hs:9:17: Could not find module `BloomFilter.Easy': Use -v to see a list of the files searched for. Failed, modules loaded: none.
ghci>
:t prop_one_present<interactive>:1:0: Not in scope: `prop_one_present'
ghci>
:t prop_one_present (undefined :: Int)<interactive>:1:0: Not in scope: `prop_one_present'
We can supply any value as the first argument to
prop_one_present. All that matters is
its type, as the same type will be used
for the first element of the second argument.
ghci>
handyCheck 5000 $ prop_one_present (undefined :: Int)<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_one_present'
ghci>
handyCheck 5000 $ prop_one_present (undefined :: Double)<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_one_present'
If we populate a Bloom filter with many elements, they should all be present afterwards.
-- file: examples/BloomCheck.hs prop_all_present _ xs = forAll falsePositive $ \errRate -> B.easyList errRate xs =~> \filt -> all (`B.elem` filt) xs
ghci>
handyCheck 2000 $ prop_all_present (undefined :: Int)<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_all_present'
The QuickCheck library does not provide
Arbitrary instances for ByteString
types, so we must write our own. Rather than create a
ByteString directly, we will use a
pack function to create one from a
[Word8].
-- file: examples/BloomCheck.hs instance Arbitrary Lazy.ByteString where arbitrary = Lazy.pack `fmap` arbitrary coarbitrary = coarbitrary . Lazy.unpack instance Arbitrary Strict.ByteString where arbitrary = Strict.pack `fmap` arbitrary coarbitrary = coarbitrary . Strict.unpack
Also missing from QuickCheck are Arbitrary
instances for the fixed-width types defined in
Data.Word and
Data.Int. We need to
at least create an Arbitrary instance for
Word8.
-- file: examples/BloomCheck.hs instance Random Word8 where randomR = integralRandomR random = randomR (minBound, maxBound) instance Arbitrary Word8 where arbitrary = choose (minBound, maxBound) coarbitrary = integralCoarbitrary
We support these instances with a few common functions so that we can reuse them when writing instances for other integral types.
-- file: examples/BloomCheck.hs integralCoarbitrary n = variant $ if m >= 0 then 2*m else 2*(-m) + 1 where m = fromIntegral n integralRandomR (a,b) g = case randomR (c,d) g of (x,h) -> (fromIntegral x, h) where (c,d) = (fromIntegral a :: Integer, fromIntegral b :: Integer) instance Random Word32 where randomR = integralRandomR random = randomR (minBound, maxBound) instance Arbitrary Word32 where arbitrary = choose (minBound, maxBound) coarbitrary = integralCoarbitrary
With these Arbitrary instances created, we can try our existing properties on the ByteString types.
ghci>
handyCheck 1000 $ prop_one_present (undefined :: Lazy.ByteString)<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_one_present' <interactive>:1:49: Failed to load interface for `Lazy': Use -v to see a list of the files searched for.
ghci>
handyCheck 1000 $ prop_all_present (undefined :: Strict.ByteString)<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_all_present' <interactive>:1:49: Failed to load interface for `Strict': Use -v to see a list of the files searched for.
The cost of testing properties of easyList
increases rapidly as we increase the number of tests to run.
We would still like to have some assurance that
easyList will behave well on huge inputs.
Since it is not practical to test this directly, we can use a
proxy: will
suggestSizing give a sensible
array size and number of hashes even with extreme
inputs?
This is a slightly tricky property to check. We need to
vary both the desired false positive rate and the expected
capacity. When we looked at some results from the
sizings function, we saw that the
relationship between these values is not easy to
predict.
We can try to ignore the complexity.
-- file: examples/BloomCheck.hs prop_suggest_try1 = forAll falsePositive $ \errRate -> forAll (choose (1,maxBound :: Word32)) $ \cap -> case B.suggestSizing (fromIntegral cap) errRate of Left err -> False Right (bits,hashes) -> bits > 0 && bits < maxBound && hashes > 0
Not surprisingly, this gives us a test that is not actually useful.
ghci>
handyCheck 1000 $ prop_suggest_try1<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_suggest_try1'
ghci>
handyCheck 1000 $ prop_suggest_try1<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_suggest_try1'
When we plug the counterexamples that QuickCheck prints
into
suggestSizings, we can see that
these inputs are rejected because they would result in a bit
array that would be too large.
ghci>
B.suggestSizing 1678125842 8.501133057303545e-3<interactive>:1:0: Failed to load interface for `B': Use -v to see a list of the files searched for.
Since we can't easily predict which combinations will cause this problem, we must resort to eliminating sizes and false positive rates before they bite us.
-- file: examples/BloomCheck.hs prop_suggest_try2 = forAll falsePositive $ \errRate -> forAll (choose (1,fromIntegral maxWord32)) $ \cap -> let bestSize = fst . minimum $ B.sizings cap errRate in bestSize < fromIntegral maxWord32 ==> either (const False) sane $ B.suggestSizing cap errRate where sane (bits,hashes) = bits > 0 && bits < maxBound && hashes > 0 maxWord32 = maxBound :: Word32
If we try this with a small number of tests, it seems to work well.
ghci>
handyCheck 1000 $ prop_suggest_try2<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_suggest_try2'
On a larger body of tests, we filter out too many combinations.
ghci>
handyCheck 10000 $ prop_suggest_try2<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:19: Not in scope: `prop_suggest_try2'
To deal with this, we try to reduce the likelihood of generating inputs that we will subsequently reject.
-- file: examples/BloomCheck.hs prop_suggestions_sane = forAll falsePositive $ \errRate -> forAll (choose (1,fromIntegral maxWord32 `div` 8)) $ \cap -> let size = fst . minimum $ B.sizings cap errRate in size < fromIntegral maxWord32 ==> either (const False) sane $ B.suggestSizing cap errRate where sane (bits,hashes) = bits > 0 && bits < maxBound && hashes > 0 maxWord32 = maxBound :: Word32
Finally, we have a robust looking property.
ghci>
handyCheck 40000 $ prop_suggestions_sane<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:19: Not in scope: `prop_suggestions_sane'
We now have a correctness base line: our QuickCheck tests pass. When we start tweaking performance, we can rerun the tests at any time to ensure that we haven't inadvertently broken anything.
Our first step is to write a small test application that we can use for timing.
-- file: examples/WordTest.hs module Main where import Control.Parallel.Strategies (NFData(..)) import Control.Monad (forM_, mapM_) import qualified BloomFilter.Easy as B import qualified Data.ByteString.Char8 as BS import Data.Time.Clock (diffUTCTime, getCurrentTime) import System.Environment (getArgs) import System.Exit (exitFailure) timed :: (NFData a) => String -> IO a -> IO a timed desc act = do start <- getCurrentTime ret <- act end <- rnf ret `seq` getCurrentTime putStrLn $ show (diffUTCTime end start) ++ " to " ++ desc return ret instance NFData BS.ByteString where rnf _ = () instance NFData (B.Bloom a) where rnf filt = B.length filt `seq` ()
We borrow the
rnf function that we
introduced in the section called “Separating algorithm from evaluation” to develop
a simple timing harness. Out
timed action
ensures that a value is evaluated to normal form in order to
accurately capture the cost of evaluating it.
The application creates a Bloom filter from the contents of a file, treating each line as an element to add to the filter.
-- file: examples/WordTest.hs main = do args <- getArgs let files | null args = ["/usr/share/dict/words"] | otherwise = args forM_ files $ \file -> do words <- timed "read words" $ BS.lines `fmap` BS.readFile file let len = length words errRate = 0.01 putStrLn $ show len ++ " words" putStrLn $ "suggested sizings: " ++ show (B.suggestSizing (fromIntegral len) errRate) filt <- timed "construct filter" $ case B.easyList errRate words of Left errmsg -> do putStrLn $ "Error: " ++ errmsg exitFailure Right filt -> return filt timed "query every element" $ mapM_ print $ filter (not . (`B.elem` filt)) words
We use
timed to account for the costs
of three distinct phases: reading and splitting the data into
lines; populating the Bloom filter; and querying every element
in it.
If we compile this and run it a few times, we can see that the execution time is just long enough to be interesting, while the timing variation from run to run is small. We have created a plausible-looking microbenchmark.
$
ghc -O2 --make WordTest[1 of 1] Compiling Main ( WordTest.hs, WordTest.o ) Linking WordTest ...
$
./WordTest0.196347s to read words 479829 words 1.063537s to construct filter 4602978 bits 0.766899s to query every element
$
./WordTest0.179284s to read words 479829 words 1.069363s to construct filter 4602978 bits 0.780079s to query every element
To understand where our program might benefit from some tuning, we rebuild it and run it with profiling enabled.
Since we already built
WordTest and
have not subsequently changed it, if we rerun ghc to enable
profiling support, it will quite reasonably decide to do
nothing. We must force it to rebuild, which we accomplish by
updating the filesystem's idea of when we last edited the
source file.
$
touch WordTest.hs
$
ghc -O2 -prof -auto-all --make WordTest[1 of 1] Compiling Main ( WordTest.hs, WordTest.o ) Linking WordTest ...
$
./WordTest +RTS -p0.322675s to read words 479829 words suggested sizings: Right (4602978,7) 2.475339s to construct filter 1.964404s to query every element
$
head -20 WordTest.proftotal time = 4.10 secs (205 ticks @ 20 ms) total alloc = 2,752,287,168 bytes (excludes profiling overheads) COST CENTRE MODULE %time %alloc doubleHash BloomFilter.Hash 48.8 66.4 indices BloomFilter.Mutable 13.7 15.8 elem BloomFilter 9.8 1.3 hashByteString BloomFilter.Hash 6.8 3.8 easyList BloomFilter.Easy 5.9 0.3 hashIO BloomFilter.Hash 4.4 5.3 main Main 4.4 3.8 insert BloomFilter.Mutable 2.9 0.0 len BloomFilter 2.0 2.4 length BloomFilter.Mutable 1.5 1.0
Our
doubleHash function immediately
leaps out as a huge time and memory sink.
Recall that the body of
doubleHash is
an innocuous list comprehension.
--
Since the function returns a list, it makes some sense that it allocates so much memory, but when code this simple performs so badly, we should be suspicious.
Faced with a performance mystery, the suspicious mind will naturally want to inspect the output of the compiler. We don't need to start scrabbling through assembly language dumps: it's best to start at a higher level.
GHC's
-ddump-simpl option prints out
the code that it produces after performing all of its
high-level optimisations.
$
ghc -O2 -c -ddump-simpl --make BloomFilter/Hash.hs > dump.txt[1 of 1] Compiling BloomFilter.Hash ( BloomFilter/Hash.hs )
The file thus produced is about a thousand lines long.
Most of the names in it are mangled somewhat from their
original Haskell representations. Even so, searching for
doubleHash will immediately drop us at
the definition of the function. For example, here is how we
might start exactly at the right spot from a Unix
shell.
$
less +/doubleHash dump.txt
It can be difficult to start reading the output of GHC's
simplifier. There are many automatically generated names, and
the code has many obscure annotations. We can make substantial
progress by ignoring things that we do not understand,
focusing on those that look familiar. The Core language shares
some features with regular Haskell, notably type signatures;
let for variable binding; and
case for pattern
matching.
If we skim through the definition of
doubleHash, we will arrive at a section
that looks something like this.
__letrec {
go_s1YC :: [GHC.Word.Word32] -> [GHC.Word.Word32]go_s1YC :: [GHC.Word.Word32] -> [GHC.Word.Word32]
[Arity 1 Str: DmdType S] go_s1YC = \ (ds_a1DR :: [GHC.Word.Word32]) -> case ds_a1DR of wild_a1DS { [] -> GHC.Base.[] @ GHC.Word.Word32;[Arity 1 Str: DmdType S] go_s1YC = \ (ds_a1DR :: [GHC.Word.Word32]) -> case ds_a1DR of wild_a1DS { [] -> GHC.Base.[] @ GHC.Word.Word32;
: y_a1DW ys_a1DX ->: y_a1DW ys_a1DX ->
GHC.Base.: @ GHC.Word.Word32GHC.Base.: @ GHC.Word.Word32
(case h1_s1YA of wild1_a1Mk { GHC.Word.W32# x#_a1Mm ->(case h1_s1YA of wild1_a1Mk { GHC.Word.W32# x#_a1Mm ->
case h2_s1Yy of wild2_a1Mu { GHC.Word.W32# x#1_a1Mw -> case y_a1DW of wild11_a1My { GHC.Word.W32# y#_a1MA -> GHC.Word.W32#case h2_s1Yy of wild2_a1Mu { GHC.Word.W32# x#1_a1Mw -> case y_a1DW of wild11_a1My { GHC.Word.W32# y#_a1MA -> GHC.Word.W32#
(GHC.Prim.narrow32Word# (GHC.Prim.plusWord#(GHC.Prim.narrow32Word# (GHC.Prim.plusWord#
x#_a1Mm (GHC.Prim.narrow32Word# (GHC.Prim.timesWord# x#1_a1Mw y#_a1MA)))) } } }) (go_s1YC ys_a1DX)x#_a1Mm (GHC.Prim.narrow32Word# (GHC.Prim.timesWord# x#1_a1Mw y#_a1MA)))) } } }) (go_s1YC ys_a1DX)
}; } in go_s1YC}; } in go_s1YC
(GHC.Word.$w$dmenumFromTo2 __word 0 (GHC.Prim.narrow32Word# (GHC.Prim.int2Word# ww_s1X3)))(GHC.Word.$w$dmenumFromTo2 __word 0 (GHC.Prim.narrow32Word# (GHC.Prim.int2Word# ww_s1X3)))
This is the body of the list comprehension. It may seem daunting, but we can look through it piece by piece and find that it is not, after all, so complicated.
From reading the Core for this code, we can see two interesting behaviours.
We are creating a list, then immediately
deconstructing it in the
go_s1YC
loop.
GHC can often spot this pattern of production followed immediately by consumption, and transform it into a loop in which no allocation occurs. This class of transformation is called fusion, because the producer and consumer become fused together. Unfortunately, it is not occurring here.
The repeated unboxing of
h1 and
h2 in the body of the loop is
wasteful.
To address these problems, we make a few tiny changes to
our
doubleHash function.
-- file: BloomFilter/Hash.hs doubleHash :: Hashable a => Int -> a -> [Word32] doubleHash numHashes value = go 0 where go n | n == num = [] | otherwise = h1 + h2 * n : go (n + 1) !h1 = fromIntegral (h `shiftR` 32) .&. maxBound !h2 = fromIntegral h h = hashSalt 0x9150a946c4a8966e value num = fromIntegral numHashes
We have manually fused the
[0..num]
expression and the code that consumes it into a single loop.
We have added strictness annotations to
h1
and
h2. And nothing more. This has turned
a 6-line function into an 8-line function. What effect does
our change have on Core output?
__letrec { $wgo_s1UH :: GHC.Prim.Word# -> [GHC.Word.Word32] [Arity 1 Str: DmdType L] $wgo_s1UH = \ (ww2_s1St :: GHC.Prim.Word#) -> case GHC.Prim.eqWord# ww2_s1St a_s1T1 of wild1_X2m { GHC.Base.False -> GHC.Base.: @ GHC.Word.Word32 (GHC.Word.W32# (GHC.Prim.narrow32Word# (GHC.Prim.plusWord# ipv_s1B2 (GHC.Prim.narrow32Word# (GHC.Prim.timesWord# ipv1_s1AZ ww2_s1St))))) ($wgo_s1UH (GHC.Prim.narrow32Word# (GHC.Prim.plusWord# ww2_s1St __word 1))); GHC.Base.True -> GHC.Base.[] @ GHC.Word.Word32 }; } in $wgo_s1UH __word 0
Our new function has compiled down to a simple counting loop. This is very encouraging, but how does it actually perform?
$
touch WordTest.hs
$
ghc -O2 -prof -auto-all --make WordTest[1 of 1] Compiling Main ( WordTest.hs, WordTest.o ) Linking WordTest ...
$
./WordTest +RTS -p0.304352s to read words 479829 words suggested sizings: Right (4602978,7) 1.516229s to construct filter 1.069305s to query every element ~/src/darcs/book/examples/ch27/examples $ head -20 WordTest.prof total time = 3.68 secs (184 ticks @ 20 ms) total alloc = 2,644,805,536 bytes (excludes profiling overheads) COST CENTRE MODULE %time %alloc doubleHash BloomFilter.Hash 45.1 65.0 indices BloomFilter.Mutable 19.0 16.4 elem BloomFilter 12.5 1.3 insert BloomFilter.Mutable 7.6 0.0 easyList BloomFilter.Easy 4.3 0.3 len BloomFilter 3.3 2.5 hashByteString BloomFilter.Hash 3.3 4.0 main Main 2.7 4.0 hashIO BloomFilter.Hash 2.2 5.5 length BloomFilter.Mutable 0.0 1.0
Our tweak has improved performance by about 11%. This is a good result for such a small change.
[59] Jenkins's hash functions have
much better mixing properties than
some other popular non-cryptographic hash functions that
you might be familiar with, such as FNV and
hashpjw, so we recommend avoiding
them. | http://book.realworldhaskell.org/read/advanced-library-design-building-a-bloom-filter.html | CC-MAIN-2016-18 | refinedweb | 8,668 | 65.12 |
The QDoubleSpinBox class provides a spin box widget that takes doubles. More...
#include <QDoubleSpinBox>
Inherits QAbstractSpinBox.
The QDoubleSpinBox class provides a spin box widget that takes doubles.
QDoubleSpinBox allows the user to choose a value by clicking the up and down buttons or by pressing Up or Down on the keyboard to increase or decrease the value currently displayed. The user can also type the value in manually. The spin box supports double values but can be extended to use different strings with validate(), textFromValue() and valueFromText().
Every time the value changes QDoubleSpinBox emits the valueChanged() signal. The current value can be fetched with value() and set with setValue().
Note: QDoubleSpinBox will round numbers so they can be displayed with the current precision. In a QDoubleSpinBox with decimals set to 2, calling setValue(2.555) will cause value() to return 2.56.
Clicking the up and down buttons or using the keyboard accelerator's Up and Down arrows will increase or decrease the current value in steps of size singleStep(). If you want to change this behavior you can reimplement the virtual function stepBy(). The minimum and maximum value and the step size can be set using one of the constructors, and can be changed later with setMinimum(), setMaximum() and setSingleStep(). The spinbox has a default precision of 2 decimal places but this can be changed using setDecimals().
Most spin boxes are directional, but QDoubleSpinBox can also operate as a circular spin box, i.e. if the range is 0.0-99.9 and the current value is 99.9,DoubleSpinBox.:
This property holds the maximum value of the spin box.
When setting this property the minimum is adjusted if necessary, to ensure that the range remains valid.
The default maximum value is 99.99.
Note: The maximum value will be rounded to match the decimals property.
Access functions:
See also decimals and setRange().
This property holds the minimum value of the spin box.
When setting this property the maximum is adjusted if necessary to ensure that the range remains valid.
The default minimum value is 0.0.
Note: The minimum value will be rounded to match the decimals property.
Access functions:
See also decimals, setRange(), and specialValueText.
This property holds the spin box's prefix.
The prefix is prepended to the start of the displayed value. Typical use is to display a unit of measurement or a currency symbol. For example:
spinbox-().
This property holds the step value.
When the user uses the arrows to change the spin box's value the value will be incremented/decremented by the amount of the singleStep. The default value is 1.0. Setting a singleStep value of less than 0 does nothing.
Access functions:
This property holds the suffix of the spin box.
The suffix is appended to the end of the displayed value. Typical use is to display a unit of measurement or a currency symbol. For example:
spinbox-().
This property holds the value of the spin box.
setValue() will emit valueChanged() if the new value is different from the old one.
Note: The value will be rounded so it can be displayed with the current setting of decimals.
Access functions:
Notifier signal:
Constructs a spin box with 0.0 as minimum value and 99.99 as maximum value, a step value of 1.0 and a precision of 2 decimal places. The value is initially set to 0.00. The spin box has the given parent.
See also setMinimum(), setMaximum(), and setSingleStep().
Reimplemented from QAbstractSpinBox::fix. Reimplementations may return anything.
Note: QDoubleSpinBox does not call this function for specialValueText() and that neither prefix() nor suffix() should be included in the return value.
If you reimplement this, you may also need to reimplement valueFromText().
See also valueFromText() and QLocale::groupSeparator().
Reimplemented from QAbstractSpinBox::validate().
This signal is emitted whenever the spin box's value is changed. The new value is passed in d.
This is an overloaded function.
The new value is passed literally in text with no prefix() or suffix().
This virtual function is used by the spin box whenever it needs to interpret text entered by the user as a value.
Subclasses that need to display spin box values in a non-numeric way need to reimplement this function.
Note: QDoubleSpinBox handles specialValueText() separately; this function is only concerned with the other values.
See also textFromValue() and validate(). | https://doc.qt.io/archives/qt-4.7/qdoublespinbox.html | CC-MAIN-2021-17 | refinedweb | 729 | 60.31 |
31 January 2013 18:24 [Source: ICIS news]
(updates with Canadian and Mexican data)
HOUSTON (ICIS)--Chemical shipments on Canadian railroads rose by 4.2% year on year in the week ended 26 January, marking their fourth straight increase so far this year, according to data released by a rail industry association on Thursday.
Canadian chemical railcar loadings for the week totalled 10,723, compared with 10,276 in the same week in 2012, the Association of American Railroads (AAR) said.
In the previous week ended 19 January, Canadian chemical railcar shipments rose by 20.1%. From 1 to 26 January, chemical railcar loadings are up 11.9% year on year to 42,326. ?xml:namespace>
US chemical railcar traffic fell by 2.1% year over year, to 29,780 chemical railcar loadings, in the week ended 26 January, marking its fourth straight decline this year.
In the previous week ended 19 January, US chemical car loadings fell by 0.3% year over year. From 1 January to 26 January, chemical car loadings are down 2.3% year over year to 115,349.
Meanwhile, overall US weekly railcar loadings for the week ended 26 January in the freight commodity groups tracked by the AAR fell by 6.3% year over year to 265,8 | http://www.icis.com/Articles/2013/01/31/9636789/canada-chem-railcar-traffic-rises-for-4th-straight-week.html | CC-MAIN-2014-10 | refinedweb | 214 | 65.62 |
As per Microsoft, .Net can now describe in one line and that is as follow.
Great, this is absolutely right. ..
1.Performance
It is now much master than Asp.Net Core 1.x. It is now more than 20% faster than previous version. You can check it now with techempower.com as following URL shows. Just search aspnetcore on this URL, you will get the result.
2.Minimum Code
We need to write few lines of code to achieve the same task. Just for example, Authentication is now easy with minimum line of code. When we talk about Program.cs class, Asp.Net Core 2.0 has minimum line of code in Main method as compare to previous version. With earlier version of Asp.Net Core, we need to setup everything in Main method like your web server “Kestrel”, your current directory, if you would like to use IIS than need to integrate IIS as well. But with Asp.Net Core 2.0, we don’t need to take care of these things; these will handle by CreateDefaultBuilder method automatically to setup everything.
3.Razor Page
Asp.Net Core 2.0 has introduced Razor Page to create dynamic pages in web application. Using Razor Pages, we can create simple and robust application using Razor features like Layout Pages, Tag Helpers, Partials Pages, Templates and Asp.Net features like code behind page, directive etc. Razor Page does follow the standard MVC pattern. Here we use different types of directive like @page, @model, @namespace, @using etc. on view page and respective code behind page inherited with PageModel class which is base class.
Razor page is simple a view with associate code behind class which inherit Page Model class which is an abstract class in “Microsoft.AspNetCore.Mvc.RazorPages“. It doesn’t use controller for view [.cshtml page] as we do in MVC but code behind works as like a controller itself. These pages [.cshtml] are not placed inside the Pages folder.
Choose Web Application as a template when you would like to create Razor Pages application in Asp.Net Core 2.0.
4.Meta Packages and Runtime Store
Asp.Net Core 2.0 comes with “Microsoft.AspNetCore.All” package which is nothing but a MetaPackage for all dependencies which are required when creating Asp.Net Core 2.0 application. It means once you include this, you don’t need to include any other packages or don’t need to dependent on any other packages. It is because “Microsoft.AspNetCore.All” supports .Net Runtime Core Store which contains all the runtime packages which are required for Asp.Net Core development.
Here you can see only one reference is added and that is “Microsoft.AspNetCore.All” with version 2.0.5. So, this meta package will take care for all other packages required on runtime using Runtime Store.
You don’t need to add any other packages from outside; all is here with meta package and don’t need to take care of multiple packages with different version, here only have one version 2.0.5 or 2.x.x.
When you expand this reference section, you will find all the related packages are already referred with this meta package as following image shown.
5..Net Standard 2.0
The .Net Standard is group of APIs which are supported by .Net Framework. As compare to previous version of .Net Standard 2.0 supports dubbed APIs in numbers. It around more than 3200+ APIs supported by .Net Standard 2.0.
Leave the exception cases but .Net Standard 2.0 supports 70% of APIs which are being used or can be used with .Net Framework.
Just for example, .Net Standard didn’t support Logging feature using Log4Net, so we are not able to use it with Asp.Net Core, but with .Net Standard 2.0, this is in. We can now use lots of feature which are part of .Net Framework but we were not using it in Asp.Net Core with .Net Standard 1.x. We can use .Net Framework along with .Net Standard 2.0.
So, now onwards we can use all related APIs with .Net Standard 2.0.
For more about read following article;
6.SPA Template
Asp.Net Core 2.0 comes with new SPA template which can be used with latest version of Angular 4, React.js, and Knockout.js with Redux. By default Angular 4 is implementing with all required pages and React is also same. When we application using SPA template than all required packages automatically will installed using NPM packages. You don’t need to take care of angular packages or typescript packages, it will install and give ready made project from where you can start your coding for next.
7.HTTP.sys
The packages “Microsoft.AspNetCore.Server.WebListener” and “Microsoft.Net.Http.Server” are now merged into one packages and that package is Microsoft.AspNetCore.Server.HttpSys. Respective to this, namespace is also update to implement Microsoft.AspNetCore.Server.HttpSys. So, from now rather than implementing two packages, we only need to implement one.
8.Razor View Engine with Roslyn
Asp.Net Core 2.0 is now supported Roslyn compiler and support C# 7.1 features. So, now we can get the benefit of Roslyn compiler in Asp.Net Core MVC application with Razor View Engine.9.Visual Basic Support
With this new release of .Net Core 2.0, Visual Basic is part of .Net Core programming language. Now we can create different type of application using Visual Basic code as well.
10.Output from Asp.Net Core Web Server
In the output window, now we can trace our application using the “Asp.Net Core Web Server” option. This will show you how our application is started and got rendered on the browser. So, each information from starting to render will get here.
Conclusion
So, today we have learned about top 10 features of Asp.Net Core 2.0. | http://www.mukeshkumar.net/articles/dotnetcore/10-new-features-of-asp-net-core-2-0 | CC-MAIN-2018-22 | refinedweb | 992 | 71.1 |
Integrating React Native, TypeScript, and MobX
In my last article, I psted about getting TypeScript working with React Native. I’m building a flexible, best-practices, Notes App in React Native. This means I need a backing store, and it has to be local for offline capabilities. React has a definite way of building data into the UI and the manipulation of that data is an architecture known as Flux. Flux isn’t a concrete implementation, however. Normally, I would use Redux as the concrete implementation. However, I have recently started working with MobX and I prefer it. This article is about integrating MobX into my application for the storage of the Notes data.
Step 1: Install Packages
MobX splits its functionality for React and React Native across two packages – the mobx package contains all the non-specific stuff and the mobx-react package contains the bindings for React and React Native:
yarn add mobx mobx-react
Step 2: Enable Decorators
MobX uses JavaScript decorators to specify how the store is linked up to the components in your React tree. TypeScript supports decorators, which is a good thing. However, you have to enable it. Edit the tsconfig.json file and add the appropriate line:
{ "compilerOptions": { "target": "es2015", "module": "es2015", "jsx": "react-native", "moduleResolution": "node", "allowSyntheticDefaultImports": true, "experimentalDecorators": true, "noImplicitAny": true } }
Once this is done, you may want to restart Visual Studio Code if you are using it. Visual Studio Code does not generally pick up changes in the
tsconfig.json file so you may notice some red squiggly lines for decorators until you restart.
Step 3: Write a Model
I’m using a small model file to define the shape of my data. Create a file called
src/models/Note.ts with the following content:
/** * Model for the Note */ export default interface Note { noteId: string, title: string, content: string, createdAt: number, updatedAt: number }
Step 4: Write a Store
The observable store is the MobX version of the Flux state store. We can use TypeScript to add type annotations and use the MobX decorators to make the store observable. This is my
src/stores/noteStore.ts file:
import { observable } from 'mobx'; import Note from '../models/Note'; class NoteStore { @observable notes: Note[] = []; saveNote(note: Note) { const idx = this.notes.findIndex((n) => note.noteId === n.noteId); if (idx < 0) { this.notes.push(note); } else { this.notes[idx] = note; } } deleteNote(note: Note) { const idx = this.notes.findIndex((n) => n.noteId === note.noteId); if (idx < 0) { throw new Error(`Note ${note.noteId} not found`); } else { this.notes.splice(idx, 1); } } getNote(noteId: string): Note { const idx = this.notes.findIndex((n) => n.noteId === noteId); if (idx < 0) { throw new Error(`Note ${noteId} not found`); } else { return this.notes[idx]; } } } const observableNoteStore = new NoteStore(); const newNote = (title: string, content: string) => { const note = { noteId: uuid.v4(), title: title, content: content, updatedAt: Date.now(), createdAt: Date.now() }; observableNoteStore.saveNote(note); } newNote('First Note', 'some content'); newNote('2nd Note', 'some content'); newNote('3rd Note', 'some content'); newNote('4th Note', 'some content'); export default observableNoteStore;
Step 5: Write some container components
Since this is going to be a master-detail template, I want to write some common pages. For example, I’m going to write a NoteList component that takes a set of items and displays them, and I’m going to create a NoteListPage that wraps the Note List appropriately for a one-pane view of the NoteList. I’ve previously posted about the NoteList component. The
NoteListPage looks like the following:
import React from 'react'; import { Platform, StyleSheet, View, ViewStyle } from 'react-native'; import { observer, inject } from 'mobx-react/native'; import { NoteStore } from '../stores/noteStore'; import Note from '../models/Note'; import NoteList from './NoteList'; const styles = StyleSheet.create({ container: { marginTop: Platform.OS === 'ios' ? 20 : 0 } as ViewStyle }); interface NoteListPageProperties { /** * The store reference for the notes store. Note that this needs to be optional * because the <Provider> component adjusts things appropriately, which the * code checker won't pick up on. * * @type {NoteStore} * @memberof NoteListPageProperties */ noteStore?: NoteStore } @inject('noteStore') @observer export default class NoteListPage extends React.Component<NoteListPageProperties> { onDeleteItem(item: Note): void { this.props.noteStore.deleteNote(item); } render() { return ( <View style={styles.container}> <NoteList items={this.props.noteStore.notes} onDeleteItem={(item: Note) => this.onDeleteItem(item)} /> </View> ); } }
Line 26 injects the
noteStore provided by the Provider object (more on that in a minute) into the props for this component. It will be available as
this.props.noteStore. Line 27 adds code to re-render the component when the observed store changes. The code inside the container component creates a list and links the
onDeleteItem (which is the swipe-to-delete) to the stores
deleteNote() method. If I swipe to delete, it will effect a change in the store that will then cause the container to re-render because the observed element (the notes) drive the list. I could also add an
onSelectItem() to this, but I haven’t added routing to this application yet, and this would be more of a state change than a store change, so it isn’t germane to the MobX functionality.
Step 6: Wire the store to the components with the Provider
In my
index.tsx file, I need to link the
noteStore to the stack of React components. This is done with the
Provider component:
import React from 'react'; import { StyleSheet, Text, TextStyle, View, ViewStyle } from 'react-native'; import { Provider } from 'mobx-react/native'; import noteStore from './stores/noteStore'; import NoteListPage from './components/NoteListPage'; /** * Production Application Component - this component renders the rest of the * application for us. * * @export * @class App * @extends {React.Component<undefined, undefined>} */ export default class App extends React.Component<undefined, undefined> { /** * Lifecycle method that renders the component - required * * @returns {React.Element} the React Element * @memberof App */ render() { return ( <Provider noteStore={noteStore}> <NoteListPage/> </Provider> ); } }
Note that the Provider has an argument (called noteStore) that is assigned the value noteStore. It is important that the argument name is the same as the string value used in the inject statement from Step 5. Your app will replace the NoteListPage in this example. I use this format to design my page container components. I can replace the NoteListPage with NoteListDetail, for example, to ensure that the display is appropriate for what I am trying to do.
Next Steps
Now that I have the MobX store working, I am going to move onto getting the two-pane version of the application working. I’ll show this in the next article. | https://adrianhall.github.io/react%20native/2017/08/11/integrating-react-native-typescript-mobx/ | CC-MAIN-2019-43 | refinedweb | 1,072 | 57.27 |
Re: What is easier: to delegate or to use ACLs?
From: Joe Richards [MVP] (humorexpress_at_hotmail.com)
Date: 01/17/05
- Next message: Keith Ng: "Re: disjoint namespace configuration"
- Previous message: Scott: "Re: Migrated BDC cannot locate PDC to complete AD install"
- Maybe in reply to: Gera: "What is easier: to delegate or to use ACLs?"
- Messages sorted by: [ date ] [ thread ]
Date: Mon, 17 Jan 2005 13:12:02 -0500
No problem, hope it helps out. We used to have quite a few people that would
contact us and meet with us to see how we did it. Even then I knew we were
unusual on how tight the environment was. I am not a FT consultant for a large
technology company and see many many environments now and still think that was
the tightest best controlled environment I have seen.
Mostly people don't do a lot of this because they don't realize it is possible,
hopefully hearing that it is, helps others reach it.
joe
-- Joe Richards Microsoft MVP Windows Server Directory Services Gera wrote: > Hm... this is probably the biggest reply in this newsgroup ever ;-) > And really overwhelming amount of information. > Thanks, Joe. > > -- > Gera > > "Joe Richards [MVP]" <[email protected]> wrote in message > news:%23l0xdfm%[email protected]... > >>Password resets are handled by the user provisioning system or through an >>auto system that we purchased from MTEC called PSYNCH. It allows password >>resets/unlocks/changes to multiple environments based on RSA token, old >>password, or Q&A profile through a web site. Initially I had a lot of >>issues with them in how their stuff worked for various things like they >>couldn't work with an ID that had basic delegated powers >>(useraccountcontrol, pwdLastSet, set password, lockouttime) but I >>eventually beat them into shape. :o) Took about 18 additional months for >>their product to be launched but I refused to allow them to launch without >>the product working properly. >> >>The reason for the delegation model isn't political. It is for safety and >>change control due to the lack of business rule logic in Active Directory. >>You can't enforce naming standards or other standards through the >>directory so you either need to proxy through some provisioning system >>(which I don't really consider AD delegation) or pass the tickets to some >>other group who funnels all of the requests and makes sure the rules are >>being followed. With the scripts the scripts themselves follow the rules >>so the team with the power isn't even really working it out, they are just >>trusted to always use the scripts. You can't say that if you give them out >>to lots of people with rights, they may or may not use them. If the group >>is small and in slapping distance, they will keep doing it. Especially >>when they know they are ultimately responsible for the stuff being right. >>When I left I was slowly working towards having a provisioning website >>built that actually handled all of our requests that the user provisioning >>system didn't handle. It was going slow because I was yanked into figuring >>out the implementation of Exchange 2000. Had that not come up, I would >>have had it completed before I left and the work could have been done by >>one person most likely and that person would have been responsible for >>keeping the website running and break/fix of AD. >> >>As for the groups, the company uses a lot of shared data that can be >>accessed by people all over the world and company but not necessarily >>whole divisions, groups, departments. Role based security doesn't work >>very well in many companies and ours was one of them as role based >>security is often admin'ed in an 80/20 rule. If 80% of a group needs >>access to something, everyone gets that access instead of having more >>security groups. We had too many financial and other security rules we had >>to deal with to allow that much freedom to access to data. >> >>The project data structure is such that any owner with a top level shared >>folder under a project share for a server will generally have at least two >>security groups. One read-only access group and one read/write group. Some >>folders also had additional permissions such as maybe ADD only access >>rights and some had groups for subfolders under the top level folders say >>like you had a shared web structure that you had people update and it got >>rolled up to a web server from there.... so say you have a project server >>with say 100 top level folders for various things. Then you have one of >>them as a web folder which has subfolders for each group who publishes to >>the web site controlled by that web folder. You would have a read-only >>group for all web authors to get into the top level web folder and a >>read-write for the person who manages the whole structure and then a read >>only and read-write group for each subfolder. A single project share on a >>single server could easily eat up hundreds or thousands of groups on its >>own depending on who was using it and how. Another server may have only >>4-10 groups. There were also groups used for grouping users together for >>the IM software we used which was called, I think, SameTime. >> >> >>The 3 domain admins were just that domain admins, 2/3 level support >>completely. Global operation and our pagers would maybe go off after hours >>once every couple of weeks and that is only for break/fix and usually >>because someone didn't understand how the system worked or troubleshot it >>wrong. You could take all of the group requests or subnet requests say for >>a whole day and do them all in a few minutes with the scripts, that could >>be 10 groups or a 1000 groups. Didn't really matter, the scripts just ran >>a wee bit slower. During initial migrations into AD we were >>creating/importing thousands of groups a day every day while doing our >>normal workload as well. >> >>The help desk itself is huge and spread across the world and is actually >>handled by another company for them. They have NO rights inside of AD >>other than normal user rights so they can look at it. Since there aren't >>people dorking things up all over the place with mistakes, you don't need >>a bunch of people that can run around making changes to correct the >>mistakes. >> >>The biggest downside to having only 3 people was around coverage when >>someone was out for some reason. You have a dual pager system, primary and >>secondary that way if the primary got shot or something, the secondary >>could fill in. It was a pain when someone went on vacation or got sick or >>injured or as it started occurring more and more before I left getting >>pulled off to consult for app developers and integrators in the company so >>they used Active Directory properly. During normal course of things the >>team regularly went out to lunch together or some of them would go golfing >>(with the supervisor) during the day. We could work from home or the >>office pretty much on the schedules we needed. During the blackout (the >>main headquarters and our site was right in the middle of all of that) we >>made our way down to the datacenter, checked out our stuff. Anywhere in >>the world that had power was up and running fine (including our data >>centers as we have massive generators that sound like locomotives). Any >>sites that didn't have power we couldn't do anything about and anyway, >>they couldn't use our stuff anyway. After the power came back, all of the >>replication kicked back in and everything was back to normal. We didn't >>miss a single SLA for break/fix nor new requests due to our redundancy and >>structuring. >> >>AD is a great system because it is very flexible. The people get in >>trouble though when they take that flexibibility and run it in an AD HOC >>way with little or no controls and that simply isn't reasonable to do in a >>large enterprise. Very tight change control and fixed ways of doing things >>(strict processes) are required to have a supportable environment. The >>more out of control things get the more power you have to start giving out >>to more people and the more out of control it will get from there. The >>more people with power to make changes, the more people with power to >>screw things up. >> >>My perfect environment would be one where no users nor local admins have >>any delegated write power in the directory at all and DA's rarely log on >>with their DA account, usually just with their normal user account. >>Everything comes through provisioning systems and has full business logic >>and logging applied to it. It is possible to do with AD, just takes the >>intitial start up work to do it. >> >> joe >> >> >>-- >>Joe Richards Microsoft MVP Windows Server Directory Services >> >> >> >>Gera wrote: >> >>>Well, from all this I see that there are not much rights delegated to >>>those >>>"very local" admins. >>>As far as I understood, probably not because it is difficult or >>>impossible, >>>but because of "political" system of sending such tickets to your support >>>queue, >>>which consists of Domain Admin with full rights everywhere. It is also a >>>type of delegation, "delegation up" ;-) >>>Also ratio of users (and may be +contacts) to number of groups is >>>interesting. Or was there computer accounts grouped?... >>> >>>I liked the idea of scripting delegation process, of course, in medium to >>>large env's. >>> >>>What about passwords resets? Whom this task was delegated (or not) to? >>>Probably, to the provisioning system, and >>>I hope helpdesk wasn't those 3 admins. >>> > > >
- Next message: Keith Ng: "Re: disjoint namespace configuration"
- Previous message: Scott: "Re: Migrated BDC cannot locate PDC to complete AD install"
- Maybe in reply to: Gera: "What is easier: to delegate or to use ACLs?"
- Messages sorted by: [ date ] [ thread ] | http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.server.active_directory/2005-01/1079.html | crawl-002 | refinedweb | 1,696 | 66.17 |
Timothy chose a way to organize the book around common tasks, so the book is in the usual cookbook/recipes style. To me the cookbook style is rather verbose and there are many examples on minute details which make the book suitable for beginners. If you are experienced and have used Apache Commons before, the book is a quick read.
Although the book covers a lot of material, it is just a glimpse of the whole Apache Commons universe. An extensive description of all the Commons projects is impossible so squeeze into a book. Still Timothy wrote about the most important projects like Lang, Collections, IO, Math as well as about some Apache top-level projects like Velocity and Lucene.
The cookbook has been published in 2004 and shows its age. A few Apache projects described in it have retired in the meantime and some parts of the shown APIs are deprecated. Still only a few changes were needed to compile all examples against the newest versions of Commons libraries. So the content in the book is still relevant.
Examples
I wanted to keep all the source code of the book as a quick reference and as a source to copy from. Unfortunately O'Reilly did not provide the code for download or I just did not find it. So I extracted the code samples and expected output from the eBook based on their markup. My script parsed the eBook and saved the Java examples into packages derived from the chapter names and classes derived from the section names. It added a
mainmethod and appended the expected output as a comment after the code. For example the extracted code for recipe 1.11 about finding items in an array looked like that
package com.discursive.jccook.supplements; import org.apache.commons.lang.ArrayUtils; public class FindingItemsInArray { public static void main(String[] args) { String[] stringArray = { "Red", "Orange", "Blue", "Brown", "Red" }; boolean containsBlue = ArrayUtils.contains(stringArray, "Blue"); int indexOfRed = ArrayUtils.indexOf(stringArray, "Red"); int lastIndexOfRed = ArrayUtils.lastIndexOf(stringArray, "Red"); System.out.println("Array contains 'Blue'? " + containsBlue); System.out.println("Index of 'Red'? " + indexOfRed); System.out.println("Last Index of 'Red'? " + lastIndexOfRed); } // Array contains 'Blue'? true // Index of 'Red'? 0 // Last Index of 'Red'? 4 }And then I went overboard with this little project. For a week I worked to have all examples compile and run. This was hard work because the examples contained many syntax errors like missing semicolons or typos in variable names. These mistakes were no problem for human readers, but the compiler complained a lot. For some examples I had to second guess the code not shown, e.g. used Java beans or factory methods which had been omitted for brevity. In hindsight it was a stupid idea, but after successfully cleaning up more than half of the examples, I just had to finish the work. Remember, I am a completionist ;-)
In the end I won and now all the examples are mine! I would like to share them with you, but O'Reilly does not allow that. The examples are distributed under some special "fair use" license that prohibits "reproducing a significant portion of the code".
1 comment:
Yesterday Michael pinged me about these examples. He was surprised because O'Reilly books always include example code. So he sent me this link to Jakarta Commons Cookbook Example Code. I still have to check it out in detail but it pretty much looks like what I wanted to have. Damn, I really should have spent more time on STFW. | http://blog.code-cop.org/2012/10/jakarta-commons-cookbook.html | CC-MAIN-2019-22 | refinedweb | 590 | 66.74 |
Why is the “export” keyword used to make classes and interfaces public in Typescript?
Eg:
module some.namespace.here
{
export class SomeClass{..}
}
Is used as: varsomeVar = new some.namespace.here.SomeClass();
Solution:With the export keyword, the JavaScript adds a line to add the exported item to the module. Like in example: here.SomeClass = SomeClass;
So, visibility as controlled by public and private is just for tooling, whereas the export keyword changes the output.
In Typescript, marking a class member as public or private has no effect on the generated JavaScript. It is simple a design / compile time tool that you can use to stop your Typescript code accessing things it shouldn’t. | http://www.pro-tekconsulting.com/blog/why-is-the-export-keyword-used-to-make-classes-and-interfaces-public-in-typescript/ | CC-MAIN-2019-35 | refinedweb | 113 | 59.6 |
Sell your NFT here.
What is an NFT?
NFTs (Non-Fungible Tokens) can be summed up with one word: "unique". These are smart contracts deployed on a blockchain that represent something unique.. The serial number on the dollar bill might be different, but the bills are interchangeable and they’ll be worth $1 no matter what.
NFTs, on the other hand, are "non-fungible", and they follow their own token standard, the ERC721. For example, the Mona Lisa is "non-fungible". Even though someone can make a copy of it, there will always only be one Mona Lisa. If the Mona Lisa was created on a blockchain, it would be an NFT.
What are NFTs for?
NFTs provide value to creators, artists, game designers and more by having a permanent history of deployment stored on-chain.
You'll always know who created the NFT, who owned the NFT, where it came from, and more, giving them a lot of value over traditional art. In traditional art, it can be tricky to understand what a "fake" is, whereas on-chain the history is easily traceable.
And since smart contracts and NFTs are 100% programmable, NFTs can also have added built-in royalties and any other functionality. Compensating artists has always been an issue, since often times an artist's work is spread around without any attribution.
More and more artists and engineers are jumping on this massive value add, because it's finally a great way for artists to be compensated for their work. And more than just that, NFTs are a fun way to show off your creativity and become a collector in a digital world.
The Value of NFTs
NFTs have come a long way, and we keep seeing record breaking NFT sales, like "Everydays: The First 5,000 Days” selling for $69.3 million.
So there is a lot of value here, and it's also a fun, dynamic, and engaging way to create art in the digital world and learn about smart contract creation. So now I'll teach you everything you need to know about making NFTs.
How to Make an NFT
What we are not going to cover. You can't achieve the unlimited customization, or really utilize any of the advantages NFTs have. But if you're a beginner software engineer, or not very technical, this is the route for you.
If you're looking to become a stronger software engineer, learn some solidity, and have the power to create something with unlimited creativity, then read on!
If you're new to solidity, don't worry, we will go over the basics there as well.
How to Make an NFT with Unlimited Customization
I'm going to get you jump started with this NFT Brownie Mix. This is a working repo with a lot of boilerplate code.
Prerequisites
We need a few things installed to get started:
If you're unfamiliar with Metamask, you can follow this tutorial to get it set up.
Rinkeby Testnet ETH and LINK
We will also be working on the Rinkeby Ethereum testnet, so we will be deploying our contracts to a real blockchain, for free!
Testnets are great ways to test how our smart contracts behave in the real world. We need Rinkeby ETH and Rinkeby LINK, which we can get for free from the links to the latest faucets from the Chainlink documentation.
We will also need to add the rinkeby LINK token to our metamask, which we can do by following the acquire LINK documentation.
If you're still confused, you can following along with this video, just be sure to use Rinkeby instead of Ropsten.
When working with a smart contract platform like Ethereum, we need to pay a little bit of ETH, and when getting data from off-chain, we have to pay a little bit of LINK. This is why we need the testnet LINK and ETH.
Awesome, let's dive in. This is the NFT we are going to deploy to OpenSea.
Quickstart
git clone cd nft-mix
Awesome! Now we need to install the
ganache-cli and
eth-brownie.
pip install eth-brownie npm install -g ganache-cli
Now we can set our environment variables. If you're unfamiliar with environment variables, you can just add them into your
.env file, and then run:
source .env
A sample
.env should be in the repo you just cloned with the environment variables commented out. Uncomment them to use them!
You'll need a
WEB3_INFURA_PROJECT_ID and a
PRIVATE_KEY . The
WEB3_INFURA_PROJECT_ID can be found be signing up for a free Infura account. This will give us a way to send transactions to the blockchain.
We will also need a private key, which you can get from your Metamask. Hit the 3 little dots, and click
Account Details and
Export Private Key. Please do NOT share this key with anyone if you put real money in it!
export PRIVATE_KEY=YOUR_KEY_HERE export WEB3_INFURA_PROJECT_ID=YOUR_PROJECT_ID_HERE
Now we can deploy our NFT contract and create our first collectible with the following two commands.
brownie run scripts/simple_collectible/deploy_simple.py --network rinkeby brownie run scripts/simple_collectible/create_collectible.py --network rinkeby
The first script deploys our NFT contract to the Rinkeby blockchain, and the second one creates our first collectible.
You've just deployed your first smart contract!
It doesn't do much at all, but don't worry – I'll show you how to render it on OpenSea in the advanced part of this tutorial. But first, let's look at the ERC721 token standard.
The ERC721 Token Standard
Let's take a look at the contract that we just deployed, in the
SimpleCollectible.sol file.
// SPDX-License-Identifier: MIT pragma solidity 0.6.6; import "@openzeppelin/contracts/token/ERC721/ERC721.sol"; contract SimpleCollectible is ERC721 { uint256 public tokenCounter; constructor () public ERC721 ("Dogie", "DOG"){ tokenCounter = 0; } function createCollectible(string memory tokenURI) public returns (uint256) { uint256 newItemId = tokenCounter; _safeMint(msg.sender, newItemId); _setTokenURI(newItemId, tokenURI); tokenCounter = tokenCounter + 1; return newItemId; } }
We are using the OpenZepplin package for the ERC721 token. This package that we've imported allows us to use all the functions of a typical ERC721 token. This defines all the functionality that our tokens are going to have, like
transfer which moves tokens to new users,
safeMint which creates new tokens, and more.
You can find all the functions that are given to our contract by checking out the OpenZepplin ERC721 token contract. Our contract inherits these functions on this line:
contract SimpleCollectible is ERC721 {
This is how solidity does inheritance. When we deploy a contract, the
constructor is automatically called, and it takes a few parameters.
constructor () public ERC721 ("Dogie", "DOG"){ tokenCounter = 0; }
We also use the constructor of the
ERC721, in our constructor, and we just have to give it a name and a symbol. In our case, it's "Dogie" and "DOG". This means that every NFT that we create will be of type Dogie/DOG.
This is like how every Pokemon card is still a pokemon, or every baseball player on a trading card is still a baseball player. Each baseball player is unique, but they are still all baseball players. We are just using type
DOG.
We have
tokenCounter at the top that counts how many NFTs we've created of this type. Each new token gets a
tokenId based on the current
tokenCounter.
We can actually create an NFT with the
createCollectible function. This is what we call in our
create_collectible.py script.
function createCollectible(string memory tokenURI) public returns (uint256) { uint256 newItemId = tokenCounter; _safeMint(msg.sender, newItemId); _setTokenURI(newItemId, tokenURI); tokenCounter = tokenCounter + 1; return newItemId; }
The
_safeMint function creates the new NFT, and assigns it to whoever called
createdCollectible , aka the
msg.sender, with a
newItemId derived from the
tokenCounter. This is how we can keep track of who owns what, by checking the owner of the
tokenId.
You'll notice that we also call
_setTokenURI. Let's talk about that.
What are NFT Metadata and TokenURI?
When smart contracts were being created, and NFTs were being created, people quickly realized that it's reaaaally expensive to deploy a lot of data to the blockchain. Images as small as one KB can easily cost over $1M to store.
This is clearly an issue for NFTs, since having creative art means you have to store this information somewhere. They also wanted a lightweight way to store attributes about an NFT – and this is where the tokenURI and metadata come into play.
TokenURI
The
tokenURI on an NFT is a unique identifier of what the token "looks" like. A URI could be an API call over HTTPS, an IPFS hash, or anything else unique.
They follow a standard of showing metadata that looks like this:
{ "name": "name", "description": "description", "image": "", "attributes": [ { "trait_type": "trait", "value": 100 } ] }
These show what an NFT looks like, and its attributes. The
image section points to another URI of what the NFT looks like. This makes it easy for NFT platforms like Opensea, Rarible, and Mintable to render NFTs on their platforms, since they are all looking for this metadata.
Off-Chain Metadata vs On-Chain Metadata
Now you might be thinking "wait... if the metadata isn't on-chain, does that mean my NFT might go away at some point"? And you'd be correct.
You'd also be correct in thinking that off-chain metadata means that you can't use that metadata to have your smart contracts interact with each other.
This is why we want to focus on on-chain metadata, so that we can program our NFTs to interact with each other.
We still need the
image part of the off-chain metadata, though, since we don't have a great way to store large images on-chain. But don't worry, we can do this for free on a decentralized network still by using IPFS.
Here's an example imageURI from IPFS that shows the Chainlink Elf created in the Dungeons and Dragons tutorial.
We didn't set a tokenURI for the simple NFT because we wanted to just show a basic example.
Let's jump into the advanced NFT now, so we can see some of the amazing features we can do with on-chain metadata, have the NFT render on opeansea, and get our Dogie up!
If you want a refresher video on the section we just went over, follow along with the deploying a simple NFT video.
Dynamic and Advanced NFTs
Dynamic NFTs are NFTs that can change over time, or have on-chain features that we can use to interact with each other. These are the NFTs that have the unlimited customization for us to make entire games, worlds, or interactive art of some-kind. Let's jump into the advanced section.
Advanced Quickstart
Make sure you have enough testnet ETH and LINK in your metamask, then run the following:
brownie run scripts/advanced_collectible/deploy_advanced.py --network rinkeby brownie run scripts/advanced_collectible/create_collectible.py --network rinkeby
Our collectible here is a random dog breed returned from the Chainlink VRF. Chainlink VRF is a way to get provable random numbers, and therefore true scarcity in our NFTs. We then want to create its metadata.
brownie run scripts/advanced_collectible/create_metadata.py --network rinkeby
We can then optionally upload this data to IPFS so that we can have a tokenURI. I'll show you how to do that later. For now, we are just going to use the sample tokenURI of:
If you download IPFS Companion into your browser you can use that URL to see what the URI returns. It'll look like this:
{ "name": "PUG", "description": "An adorable PUG pup!", "image": "", "attributes": [ { "trait_type": "cuteness", "value": 100 } ] }
Then we can run our
set_tokenuri.py script:
brownie run scripts/advanced_collectible/set_tokenuri.py --network rinkeby
And we will get an output like this:
Running 'scripts/advanced_collectible/set_tokenuri.py::main'... Working on rinkeby Transaction sent: 0x8a83a446c306d6255952880c0ca35fa420248a84ba7484c3798d8bbad421f88e Gas price: 1.0 gwei Gas limit: 44601 Nonce: 354 AdvancedCollectible.setTokenURI confirmed - Block: 8331653 Gas used: 40547 (90.91%) Awesome! You can view your NFT at Please give up to 20 minutes, and hit the "refresh metadata" button
And we can hit the link given to see what it looks like on Opensea! You may have to hit the
refresh metadata button and wait a few minutes.
The Random Breed
Let's talk about what we just did. Here is our
AdvancedCollectible.sol:
pragma solidity 0.6.6; import "@openzeppelin/contracts/token/ERC721/ERC721.sol"; import "@chainlink/contracts/src/v0.6/VRFConsumerBase.sol"; contract AdvancedCollectible is ERC721, VRFConsumerBase { uint256 public tokenCounter; enum Breed{PUG, SHIBA_INU, BRENARD} // add other things mapping(bytes32 => address) public requestIdToSender; mapping(bytes32 => string) public requestIdToTokenURI; mapping(uint256 => Breed) public tokenIdToBreed; mapping(bytes32 => uint256) public requestIdToTokenId; event requestedCollectible(bytes32 indexed requestId); bytes32 internal keyHash; uint256 internal fee; uint256 public randomResult; constructor(address _VRFCoordinator, address _LinkToken, bytes32 _keyhash) public VRFConsumerBase(_VRFCoordinator, _LinkToken) ERC721("Dogie", "DOG") { tokenCounter = 0; keyHash = _keyhash; fee = 0.1 * 10 ** 18; } function createCollectible(string memory tokenURI, uint256 userProvidedSeed) public returns (bytes32){ bytes32 requestId = requestRandomness(keyHash, fee, userProvidedSeed); requestIdToSender[requestId] = msg.sender; requestIdToTokenURI[requestId] = tokenURI; emit requestedCollectible(requestId); } function fulfillRandomness(bytes32 requestId, uint256 randomNumber) internal override { address dogOwner = requestIdToSender[requestId]; string memory tokenURI = requestIdToTokenURI[requestId]; uint256 newItemId = tokenCounter; _safeMint(dogOwner, newItemId); _setTokenURI(newItemId, tokenURI); Breed breed = Breed(randomNumber % 3); tokenIdToBreed[newItemId] = breed; requestIdToTokenId[requestId] = newItemId; tokenCounter = tokenCounter + 1; } function setTokenURI(uint256 tokenId, string memory _tokenURI) public { require( _isApprovedOrOwner(_msgSender(), tokenId), "ERC721: transfer caller is not owner nor approved" ); _setTokenURI(tokenId, _tokenURI); } }
We use the Chainlink VRF to create a random breed from a list of
PUG, SHIBA_INU, BRENARD. When we call
createCollectible this time, we actually kicked off a request to the Chainlink VRF node off-chain, and returned with a random number to create the NFT with one of those 3 breeds.
Using true randomness in your NFTs is a great way to create true scarcity, and using an Chainlink oracle random number means that your number is provably random, and can't be influenced by the miners.
You can learn more about Chainlink VRF in the documentation.
The Chainlink node responds by calling the
fulfillRandomness function, and creates the collectible based on the random number. We then still have to call
_setTokenURI to give our NFT the appearance that it needs.
We didn't give our NFT attributes here, but attributes are a great way to have our NFTs battle and interact. You can see a great example of NFTs with attributes in this Dungeons and Dragons example.
Metadata from IPFS
We are using IPFS to store two files:
- The image of the NFT (the pug image)
- The tokenURI file (the JSON file which also includes the link of the image)
We use IPFS because it's a free decentralized platform. We can add our tokenURIs and images to IPFS by downloading IPFS desktop, and hitting the
import button.
Then, we can share the URI by hitting the 3 dots next to the file we want to share, hitting
share link and copying the link given. We can then add this link into our
set_tokenuri.py file to change the token URI that we want to use.
Persistance.
I imagine in the future more and more metadata will be stored on IPFS and decentralized storage platforms. Centralized servers can go down, and would mean that the art on those NFTs is lost forever. Be sure to check where the tokenURI of the NFT you use is located!
I also expect down the line that more people will use dStorage platforms like Filecoin, as using a pinning service also isn't as decentralized as it should be.
Going forward
If you'd like a video walkthrough of the advanced NFT, you can watch the advanced NFT video.
Now you have the skills to make beautiful fun, customizable, interactive NFTs, and have them render on a marketplace.
NFTs are fun, powerful ways to have artists accurately compensated for all the hard work that they do. Good luck, and remember to have fun! | https://www.freecodecamp.org/news/how-to-make-an-nft-and-render-on-opensea-marketplace/ | CC-MAIN-2022-05 | refinedweb | 2,679 | 63.39 |
_lwp_cond_reltimedwait(2)
- set or get processor set attributes
#include <sys/pset.h> int pset_setattr(psetid_t pset, uint_t attr);
int pset_getattr(psetid_t pset, uint_t *attr);
The pset_setattr() function sets attributes of the processor set specified by pset. The bitmask of attributes to be set or cleared is specified by attr.
The pset_getattr function returns attributes of the processor set specified by pset. On successful return, attr will contain the bitmask of attributes for the specified processor set.
The value of the attr argument is the bitwise inclusive-OR of these attributes, defined in <sys/pset.h>:
Unbinding of LWPs from the processor set with this attribute requires the {PRIV_SYS_RES_CONFIG} privilege to be asserted in the effective set of the calling process.
The binding of LWPs and processes to processor sets is controlled by pset_bind(2). When the PSET_NOESCAPE attribute is cleared, a process calling pset_bind() can clear the processor set binding of any LWP whose real or effective user ID matches its own real of effective user ID. Setting PSET_NOESCAPE attribute forces pset_bind() to requires the {PRIV_SYS_RES_CONFIG} privilege to be asserted in the effective set of the calling process.
Upon successful completion, these functions return 0. Otherwise, -1 is returned and errno is set to indicate the error.
These function will fail if:
The location pointed to by attr was not writable by the user.
An invalid processor set ID was specified.
The caller is in a non-global zone, the pools facility is active, and the processor is not a member of the zone's pool's processor set.
The pools facility is active. See pooladm(1M) and pool_set_status(3POOL) for information about enabling and disabling the pools facility.
See attributes(5) for descriptions of the following attributes:
pooladm(1M), pooladm(1M), psrset(1M), zoneadm(1M), pset_bind(2), pool_set_status(3POOL), attributes(5) | https://docs.oracle.com/cd/E18752_01/html/816-5167/pset-setattr-2.html | CC-MAIN-2019-09 | refinedweb | 302 | 55.64 |
I'm trying to overload some methods of the string builtin.
I know there is no really legitimate use-case for this, but the behavior still bugs me so I would like to get an explanation of what is happening here:
Using Python2, and the
forbiddenfruit
>>> from forbiddenfruit import curse
>>> curse(str, '__repr__', lambda self:'bar')
>>> 'foo'
'foo'
>>> 'foo'.__repr__()
'bar'
__repr__
>>> 'foo'
'bar'
The first thing to note is that whatever
forbiddenfruit is doing, it's not affecting
repr at all. This isn't a special case for
str, it just doesn't work like that:
import forbiddenfruit class X: repr = None repr(X()) #>>> '<X object at 0x7f907acf4c18>' forbiddenfruit.curse(X, "__repr__", lambda self: "I am X") repr(X()) #>>> '<X object at 0x7f907acf4c50>' X().__repr__() #>>> 'I am X' X.__repr__ = X.__repr__ repr(X()) #>>> 'I am X'
I recently found a much simpler way of doing what
forbiddenfruit does thanks to a post by HYRY:
import gc underlying_dict = gc.get_referents(str.__dict__)[0] underlying_dict["__repr__"] = lambda self: print("I am a str!") "hello".__repr__() #>>> I am a str! repr("hello") #>>> "'hello'"
So we know, somewhat anticlimactically, that something else is going on.
Here's the source for
builtin_repr:
builtin_repr(PyModuleDef *module, PyObject *obj) /*[clinic end generated code: output=988980120f39e2fa input=a2bca0f38a5a924d]*/ { return PyObject_Repr(obj); }
And for
PyObject_Repr (sections elided):
PyObject * PyObject_Repr(PyObject *v) { PyObject *res;
res = (*v->ob_type->tp_repr)(v); if (res == NULL) return NULL;
}
The important point is that instead of looking up in a
dict, it looks up the "cached"
tp_repr attribute.
Here's what happens when you set the attribute with something like
TYPE.__repr__ = new_repr:
static int type_setattro(PyTypeObject *type, PyObject *name, PyObject *value) { if (!(type->tp_flags & Py_TPFLAGS_HEAPTYPE)) { PyErr_Format( PyExc_TypeError, "can't set attributes of built-in/extension type '%s'", type->tp_name); return -1; } if (PyObject_GenericSetAttr((PyObject *)type, name, value) < 0) return -1; return update_slot(type, name); }
The first part is the thing preventing you from modifying built-in types. Then it sets the attribute generically (
PyObject_GenericSetAttr) and, crucially, updates the slots.
If you're interested in how that works, it's available here. The crucial points are:
It's not an exported function and
It modifies the
PyTypeObject instance itself
so replicating it would require hacking into the
PyTypeObject type itself.
If you want to do so, probably the easiest thing to try would be (temporarily?) setting
type->tp_flags & Py_TPFLAGS_HEAPTYPE on the
str class. This would allow setting the attribute normally. Of course, there are no guarantees this won't crash your interpreter.
This is not what I want to do (especially not through
ctypes) unless I really have to, so I offer you a shortcut.
You write:
Then, how would you do to get the expected behaviour:
>>> 'foo' 'bar'
This is actually quite easy using
sys.displayhook:
sys.displayhookis called on the result of evaluating an expression entered in an interactive Python session. The display of these values can be customized by assigning another one-argument function to
sys.displayhook.
And here's an example:
import sys old_displayhook = sys.displayhook def displayhook(object): if type(object) is str: old_displayhook('bar') else: old_displayhook(object) sys.displayhook = displayhook
And then... (!)
'foo' #>>> 'bar' 123 #>>> 123
On the philosophical point of why
repr would be cached as so, first consider:
1 + 1
It would be a pain if this had to look-up
__add__ in a dictionary before calling, CPython is slow as it is, so CPython decided to cache lookups to standard dunder (double underscore) methods.
__repr__ is one of those, even if it is less common to need the lookup optimized. This is still useful to keep formatting (
'%s'%s) fast. | https://codedump.io/share/MtOvAkY328WN/1/python-overload-primitives | CC-MAIN-2017-43 | refinedweb | 606 | 53.71 |
Fairly everything in e(fx)clipse is done with DS-Services when you run in an OSGi-Environment.
Still many of them don’t have any dependency on OSGi at all so most components we have can also run/get used in an ordinary Java-Environment (see the blog post about the code editor framework as an example). Since the beginning we published some of them through the ServiceLoader-API (and we’ll still keep it for those) but that has the draw back that you can not express relations between services.
Tonight I had a crazy idea: Could I read the DS-Component-Registration files in an none-OSGi environment and wire services together without using the ServiceLoader-API.
The result is JavaDSServiceProcessor. Our public service lookup API has been retrovited to use this internal service instead of ServiceLoader so if one now eg looks up our AdapterService like this:
import org.eclipse.fx.core.Util; import org.eclipse.fx.core.adapter.AdapterService; public class Test { public static void main(String[] args) { AdapterService adapterService = Util.getService(AdapterService.class).get(); } }
will get a fully configured AdapterService. We currently don’t support everything from the DS-Spec (eg Properties are not yet support) but I’ll fill this gap soon.
Hi Tom. Another approach to using OSGi services (service reg) and DS and other extensions (e.g. remote services) in java is to use Apache Connect. Connect was formerly known as PojoSR and implements the OSGi service registry without the Bundle layer. I’ve added on a ServiceRegistry API accessed via ServiceLoader in this project:
As you can see from our examples, this allows the use of DS, Remote Services and/or other Service Registry extensions.
There is talk in the EEG of standardizing Apache Connect, but so far it seems to be talk.
Thanks for point me to PojoSR I knew about that but could not remember its name. My scope although is much smaller I just want to get DS services working. Maybe I should add support for that as well to our service lookup.
FWIW: It’s easy to add an scr impl (equinox or apache felix scr) to the set of initial bundles. This is already done by this project:
And then your Java-only would/get all of SCR. None of the examples in this repo use it yet, but it does get resolved and started.
One can also remove the ECF remote services…or other bundles if not needed. For obvious reasons, I’ve been focused on using Pojosr/Apache Connect for remote services examples and use cases.
This blog post made my day.
The ECF approach is interessting, but for now I’ll go with Tom’s solution as it has no boilerplate code on client side.
Not sure what you mean by boilerplate code. The remote service examples in this repo are intentionally not using DS, but if you add either equinox or apache felix SCR implementation bundle you can have full-spec-compliant DS (all running java-only/no OSGi bundle layer)
The remote services need some setup to run, whereas the AdapterService just has above two lines on client side. This seemed suitable to me for a quick solution. However in the long run SCR are the way to go.
Hi Ben.
The examples that I provide do have some code to export remote services, but this is by choice…done to show the use of java code to explicitly export remote services. For these examples, I intentionally left out the SCR bundle. With SCR, there is no setup code needed, as remote services are registered and exported via SCR and the service registry.
But if remote services aren’t to be used at all, the only code needed to access the service registry is:
ServiceRegistry serviceRegistry = ServiceLoader.load(ServiceRegistryFactory.class)
.iterator().next().newServiceRegistry(null);
Everything else in
is either code to enable remote services debugging or the TimeService registration, which when SCR is present is done via SCR instead. | https://tomsondev.bestsolution.at/2015/11/09/bringing-osgi-ds-to-plain-java-applications/?replytocom=70797 | CC-MAIN-2019-51 | refinedweb | 669 | 61.56 |
Testing CherryPy 3 Application with Twill and Nose
I’ve been working on a CherryPy application for a few days, and wanted to write some tests. Surprisingly I could not find any tutorials or documentation on how I should test a CherryPy application. Unfortunately I also missed the last section on CherryPy Testing page; why is CherryPy application testing added as an afterthought? Wouldn’t it make more sense to start the testing section on how people can write tests for their CherryPy applications, rather than first explaining how to test CherryPy itself? Of well, at least I learned something new…
Since I got tests working with Twill first I decided to document my experience, and switch to the CherryPy way later if it makes more sense. The CherryPy Essentials book apparently has a section on testing, so reading that would probably clarify a lot of things.
There is a brief tutorial on how to test CherryPy 2 application with twill, but the instructions need some tweaking to work with CherryPy 3.
On Ubuntu 8.04 I first created a virtualenv 1.3.1 without site packages. I am running Python 2.5.2, and I have the following packages installed in the virtualenv: setuptools 0.6c9, CherryPy 3.1.2, twill 0.9 and nose 0.11.1. The additional packages were installed with
easy_install.
My directory structure is as follows:
hello.py tests/ __init__.py test_hello.py
hello.py contents is simply:
import cherrypy class HelloWorld: def index(self): return "Hello world!" index.exposed = True if __name__ == '__main__': cherrypy.quickstart(HelloWorld())
Running
python hello.py will start the web server and I can see the greeting in my browser at URL.
The tests directory has two files.
__init__.py is empty. The
test_hello.py follows closely the tutorial by Titus, but modified to work with CherryPy 3. The CherryPy 3 Upgrade instructions and CherryPy mod_wsgi instructions showed the way.
from StringIO import StringIO import twill import cherrypy from hello import HelloWorld class TestHelloWorld: def setUp(self): # configure cherrypy to be quiet ;) cherrypy.config.update({ "environment": "embedded" }) # get WSGI app. wsgiApp = cherrypy.tree.mount(HelloWorld()) # initialize cherrypy.server.start() #) def tearDown(self): # remove intercept. twill.remove_wsgi_intercept('localhost', 8080) # shut down the cherrypy server. cherrypy.server.stop() def test_hello(self): script = "find 'Hello world!'" twill.execute_string(script, initial_url='')
Now you’d expect that this would work by simply running
nosetests command. Mysteriously I got import error on twill (and after I removed the line, also import error on cherrypy). I looked at
sys.path which showed that I was somehow picking up the older nosetests I had installed into system Python.
which nosetests claimed it was finding the virtualenv
nosetests. Still, I had to actually give the explicit path to my virtualenv
nosetests before the tests would run without import errors.
All in all testing CherryPy applications turned into a longer adventure than I anticipated. I run into a number of unexpected difficulties, but I finally got it working and learned about twill as a bonus. Thanks for the tip, JJ!
Rene Dudfield:
Hi,
this is neat, thanks for sharing.
One other technique with cherrypy apps is to test them like normal python objects.
For example:
def test_hello(self):
self.assertTrue(“Hello world!” in HelloWorld().index())
Functional tests are really nice to see it is working on a real webserver too 🙂
cheers,November 25, 2009, 4:37 am
Christian Wyglendowski:
Nice post. Here is a bit of code that I put together at one point that uses Twill to test a CherryPy 3 app.
It doesn’t use WSGI intercept – it launches the full CherryPy app server and tests against that.November 25, 2009, 6:32 am
Heikki Toivonen:
Thanks Rene, that was so simple it is embarrassing I did not realize that 🙂
Christian, that looks interesting as well, thanks for sharing!November 25, 2009, 9:17 pm
Wyatt:
I’ve found that when installing scripts into a virtualenv, I’ve often had to deactivate and then re-activate that virtualenv to get things to work properly.November 27, 2009, 12:52 pm | https://www.heikkitoivonen.net/blog/2009/11/24/testing-cherrypy-3-application-with-twill-and-nose/ | CC-MAIN-2020-10 | refinedweb | 683 | 58.79 |
Thanks
thanks dear..it really works.....one single true do the magic.
how open file in append mode and write a object
Thanks, it was useful for me :)
i realy luv u
write a java program which open an existing file and append text file?
write a java program which open an existing file and append text file
this code is useful for me.. thanks u..
Post your Comment
tutorial for file upload in spring - Spring
tutorial for file upload in spring Is there tutorial available for uploading file using spring framework. The example in the spring reference uses... :
FileUploadController.java file
package example;
import
PHP File Manipulation Writing to a File Tutorial
;
In this tutorial we will learn how to write
data to a flat file database...:
This is same file with different text
Example 3 (If you want to append some text.../tutorial/PHP-File-Handling.html
Tutorial of the week
the mapping xml file(hbm.xml)
for the week of Feb 1, 20010
In this tutorial we...Tutorial of the week
Welcome to our Tutorials of the week section. We... used and very
helpful for the programmers.
Our tutorial addresses the major
Hibernate configuration file
This tutorial helps you in understanding the configuration file of Hibernate
PHP File Manipulation File locking Tutorial
PHP File locking
Learn how to use PHP file locking.
You need to lock the file each time when you do any input/output operation with
that file... simple open file and read/write data from it
like this.
$fp=fopen("
Questions |
Site
Map |
Business Software
Services India
PHP Tutorial... PHP |
PHP Cookies |
File
Manipulation in PHP |
PHP
displaying the time... by PHP |
PHP Ajax |
PHP SimpleXML |
PHP Ajax DB
PHP Tutorial Section
; Tutorial Section
Introduction
to Hibernate 3.0 |
Hibernate Architecture... the application
XML Tutorial Section
XML : An
Introduction |
XML - History... Parsing an XML File |
Ignoring Comments
While Parsing an XML File
Map | Business Software
Services India
Adobe Flex 3.0 Tutorial Section... 4.0 Tutorial Section
Getting Started with Flex 4... |
Flex
Looping |
Flex
Arrays |
Flex ArrayCollection
HTML Tutorial
jQuery tutorial for beginners
jQuery tutorial for beginners
Learn jQuery with the help of our tutorial jQuery tutorial for
absolute beginners.
This is complete jQuery tutorial... in quickest possible time. This
tutorial will help you in getting started
Photoshop Tutorial : Text Effect
learn this tutorial.
Let's start learn
New File: Take a new file according... How to make text effect
Design text effect is so easy now, because I have a
tutorial
Tutorial
TIBCO Designer Tutorial - Hello World
TIBCO Designer Tutorial - Hello World This TIBCO tutorial teaches...!" into a file using TIBCO Designer. You will really appreciate how easy this is to do... of the tasks. Take, for example, the syntax for writing to a file so that you can
Photoshop Tutorial :background image
with color and any image. But with this tutorial, you can learn and design your own background image. So let's
try.
Take a new file: Go to file menu and click on
new (CTRL + N key) then adjust file size as you design.
Fill Color: Go
how to parse xml in j2me tutorial
how to parse xml in j2me tutorial i want to parse xml file in j2me wireless toolkit.i saw ur examples & source code for parsing xml file....
i think that application is not to find the source employee.xml file when
PHP File Manipulation File existance checking Tutorial, PHP check file exists
File existence checking
If you want to check whether a file exists, you should use file exists
PHP function.
Syntax:
file_exists($path),
where $path - path to the file, on the local machine.
Example:
$TmpName
Hibernate Configuration File
Hibernate Configuration File
In this tutorial you will learn about the hibernate configuration file.
To define a Hibernate Configuration info a resource file named
hibernate.cfg.xml is used. Using this configuration file various
iPhone Quiz App Tutorial
iPhone Quiz App Tutorial
In this simple iPhone quiz application tutorial... and open iphone_quiz_appViewController.xib
In this ".xib" file we... find the code of button action in ".h" file. like....
- (IBAction
Detecting the file extension
Detecting the file extension How to detect the file extension in PHP?
Read the given tutorial to detect the file extension in PHP.
The fileexists() function, fileexists php, file_exists in php
Spring Form Tags Tutorial
Spring Form Tags Tutorial
Spring framework provides the form specific tags...;File Upload</form:label></td>
<td><form:input type="... textArea;
private String checkBox;
private CommonsMultipartFile file
Photoshop Vector Simplicity Tutorial
org.htmlparser.util.ParserException: C:\downloadingarticles\websitereader\Vector Simplicity (The system cannot find the file specified
Mysql Loader Tutorial
Mysql Loader Tutorial
Mysql Loaded Tutorial is used to import a data to a plain text from Mysql
database. The Plain Text is used as backup file from which the data
Mysql Loader Tutorial
Mysql Loader Tutorial
Mysql Loaded Tutorial is used to import a data to a plain text from Mysql
database.The Plain Text is used as backup file from which the data
Thanks for the coderaviteja August 31, 2011 at 3:33 PM
Thanks
thanksanshul June 6, 2013 at 5:32 PM
thanks dear..it really works.....one single true do the magic.
to open file in append mode and write a object eashwary February 25, 2012 at 2:59 PM
how open file in append mode and write a object
Thanks!Sachid May 2, 2012 at 3:31 PM
Thanks, it was useful for me :)
i love udiva June 5, 2012 at 4:19 PM
i realy luv u
JavaVivek Mathews June 6, 2012 at 4:36 PM
write a java program which open an existing file and append text file?
JavaVivek Mathews June 6, 2012 at 4:37 PM
write a java program which open an existing file and append text file
javacodepriya July 14, 2013 at 10:21 AM
this code is useful for me.. thanks u..
Post your Comment | http://www.roseindia.net/discussion/18354-Append-To-File---Java-Tutorial.html | CC-MAIN-2016-18 | refinedweb | 988 | 56.55 |
A Python wrapper to the Random.Org service.
Project description
A Python Interface to the Random.org web service. Provides a python wrapper to the following Random.org API calls:
- INTEGERS
- SEQUENCE
- STRING
- QUOTA
See below for usage examples.
DEPENDENCIES
- Numpy
This is known to work using IPython (v0.13) and Python (2.7.3).
INSTALLATION
Install with pip by running:
pip install randorg
See below for usage examples.
USAGE EXAMPLES
The package can be used as follows:
import randomorg as ro # Generate 5 integers between 1 and 100 ro.integers(5, minimum=1, maximum=100, base=10) # Generate a random sequence of integers between 1 and 10 ro.sequence(minimum=1, maximum=10) # Generate 5 unique strings (e.g. passwords) of 10 characters each ro.string(num=5, length=10, digits=True, upper=True, lower=True, unique=True) # Check your quota ro.quota()
That pretty much sums up the functions included in the random.org api. More information on the random.org api can be found here.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/RandOrg/ | CC-MAIN-2021-31 | refinedweb | 197 | 53.47 |
Review: Needs Fixing Here are some more comments and questions:
Advertising
* Copyright 2006-2008 The FLWOR Foundation. => * Copyright 2006-2012 The FLWOR Foundation. * Iterate over an instance of the XML Data Model (i.e, a sequence of items). * This class implements the SPL Iterator interface. You can only iterate over a sequence of instances of the XDM. The sequence is not an instance by itself. * The XQueryProcessor class allows to invoke * <a href="">Zorba XQuery Processor</a>. to invoke _the_ ... * Instruction to install the extension can be found at <a href=""></a>. Instruction_s_ * Shutdowns Shuts down * In the following code snippet, the following code snippets imports and execute an <em>Hello World</em> query: confusing sentence * Import a query to execute from its filename. Import a query to execute from a file with the given name. * $xquery->importQueryiFromURI('hello_world.xq'); $xquery->importQueryFromURI('hello_world.xq'); * Filename of the query to execute. Filename containing the query to execute. * Set value for an external variable. Set a value for an external variable. * The following code snippet sets the value of the variable * <em>$i</em> with <em>1</em>. The following code snippet sets the value of the variable <em>$i</em> to <em>1</em> with type xs:integer. * The following code snippet sets the value of the variable <em>$i</em> in * the local namespace with the value <em>1</em>. The following code snippet sets the value of the variable <em>$i</em> in the local namespace to the value <em>1</em>. getIterator and compile() don't have comments. Also, the indentation of compile() and getItem() seem to be broken Why do you repeat the conversion rules from setVariable in the comment of getItem. Why does one rule include SimpleXMLElement and the other one doesn't? Did you drop the streaming execution for execute()? If so, why? -- Your team Zorba Coders is subscribed to branch lp:zorba. -- Mailing list: Post to : [email protected] Unsubscribe : More help : | https://www.mail-archive.com/[email protected]/msg03422.html | CC-MAIN-2016-44 | refinedweb | 330 | 60.92 |
Microblog Headlines
So, after some minutes of trying this and that, it is time to take stock and see what's working and what isn't.:
I'll also be looking at putting together some videos on using Expression Blend as a design surface for programmers -- that is taking our designer tool and using it from a developers point of view.
Shawn Wildermuth creates Linkable SL Apps, Scott Morrison on Getting Started with SL2 and a walk-thru
Pingback from Blog Jocky » Blog Archive » Silverlight Cream for March 21, 2008 - 2 — #231
Pingback from Silverlight Cream for March 21, 2008 - 2 — #231 | Create a Blog
Jesse, you've got to see this:
Kindle Cake for the Geek ;-)
Pingback from re: Innovation, Renovation and Change | My Geek Solutions
Pingback from Silverlight Cream for March 21, 2008 - 2 ??? #231 | Create a Blog | Create a Blog
Pingback from Blog Jocky ?? Blog Archive ?? Silverlight Cream for March 21, 2008 … | Create a Blog
Pingback from re: Innovation, Renovation and Change | My Geek Solutions | My Geek Solutions
Pingback from Silverlight Cream for March 21, 2008 - 2 ??? #231 | Create a Blog … | Create a Blog
Pingback from re: Innovation, Renovation and Change | My Geek Solutions | My … | My Geek Solutions
Jesse, you said:
"I will, however, focus over the next two weeks on making (videos) sure to include the following topics among the others I'll be filming: Binding Controls to data through a Web Service using WCF "
Thank-you. Thank-you. and Thank-you.
oh..and did I mention thank-you?
Please also include...
Since we can't use the tried and true "using/imports System.Data" statements, please include all the necessary steps for the initial 'hook-up' of SL2+DATA (eg):
1) xmlns:data="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data?
2)<data:DataGrid x:Name="myDataGrid”... ?
3) Referencing System.Controls.Data?
We understand it's a Beta and don't mind if there are 100 steps, we just need to know what those repeatable steps are are. ... and then we can happily move on to Control Binding + WCF services.
Thanks Jesse. Looking forward to it.
Pingback from Dew Drop - March 22, 2008 | Alvin Ashcraft's Morning Dew
This link might help the community...
Working with a Datagrid in Silverlight:
blogs.msdn.com/.../using-the-silverlight-datagrid.aspx | http://silverlight.net/blogs/jesseliberty/archive/2008/03/21/innovation-renovation-and-change.aspx | crawl-001 | refinedweb | 385 | 58.21 |
24 March 2011 19:36 [Source: ICIS news]
Correction: In the ICIS story headlined “Dow to stop selling perc into ?xml:namespace>
HOUSTON (ICIS)--Dow Chemical said on Thursday that it will no longer sell perchloroethylene (perc) into the aerosol solvent market in
The announcement was not a surprise to market players, who said they had seen warning signs since November.
A source close to Dow said that perc is “something we don’t want to put into aerosols”.
The source said Dow will target its perc into more environmental and emissions-friendly applications.
The company said it does not have plans to enact similar restrictions for other chlorinated solvents, such as methylene chloride.
Sources said Dow's decision is unlikely to have much of an effect on the perc market, as other producers will take up the slack, and Dow will send its perc into other uses.
Occidental Chemical (OxyChem) and PPG Industries have filled the void left by Dow, the buyer said.
Buyers in the aerosol solvent sector said they had seen decreased offers from Dow as early as November, so they were not caught off-guard.
One buyer said that although Dow was planning to exit the market, it still offered the buyer 2-3 railcars of material.
US spot perc prices are assessed by ICIS at 84-94 cents/lb ($1,852-2,072/tonne, €1,315-1,471/tonne) FOB (free on board).
($1 = €0.71)
For more on perchloro | http://www.icis.com/Articles/2011/03/24/9446973/corrected-dow-to-stop-selling-perc-into-n-americas-aerosol-solvent-market.html | CC-MAIN-2014-52 | refinedweb | 245 | 67.59 |
- 7 Steps to Building a Custom Module - Background
In this article in the multi-part series on building custom modules for Dynamicweb, you'll learn more about the theory behind Custom Modules in Dynamicweb. You’ll see the considerations you need to make up front, what a system name is, how to structure your project and how to retrieve data associated with the module on a paragraph. In part 3 of this article series you see how to put this theory in practice.
Introduction
To create a bare-bones custom module you need to carry out the following steps:
- Make up a system name for your module.
- Create a new folder under the CustomModules folder in the Visual Studio solution and name it after the system name of your new module.
- Create an ASPX file called SystemName_Edit.aspx where SystemName is the system name of your module.
- Create a front end class that inherits from ContentModule and that is able to render the HTML for the module.
- Override the GetContent method of the ContentModule.
- Apply an AddInName attribute to the class.
- Register the module in the Admin interface of Dynamicweb.
Each of these steps is discussed in detail in the next sections. In the part 3 in this series you see the actual steps carried out in a Visual Studio project. For now, this article focuses on the theory.
System Name Considerations
Custom Modules in Dynamicweb rely on what is called the system name, a short name you give to the module and that is used internally to uniquely identify, register and find your custom module. This name should not contain special characters like spaces and should describe the module you are creating. For example, your news module could have a system name of simply News while your dealer search module could be called DealerSearch. You're free to choose your own name as long as it consists of only letters and numbers. When you register your module in the Admin interface you can also supply a friendly name for the module (like Dealer Search with a space) which is what your users will see. You'll see how to supply this name later. To avoid potential (future) problems with name collisions for modules that Dynamicweb may come up with it’s a good idea to prefix your module with something unique to your project or company, such as an abbreviation or the word Custom. For example, for modules developed for De Vier Koeden, I can choose to prefix the system name with Dvk.
The system name is used at a few different locations: the folder name under the CustomModules folder in the Visual Studio project and in the required SystemName_Edit.aspx page which you’ll see later. Additionally, it's used in the AddInName attribute and in the ModuleSettings and ModuleHeader controls to hook up fields used in the module. You'll see more about this in later sections of this article.
Creating the Module Folder
You must create the main folder for your custom module under the CustomModules folder. You also must name the folder after the system name you have chosen for your custom module. Within this folder, you can create your own sub folders to organize the files for your custom module. Figure 1 shows a Solution Explorer for a module called CustomDealerSearch. It has its own Objects folder and has a number of ASPX files that make up the module.
Figure 1
Creating the Module Edit Page
To allow a user to insert your module in your paragraph, you need to supply what is called the Module Edit Page for the module. This page takes the required name of SystemName_Edit.aspx where again SystemName refers to the system name of your module and must be placed directly in the module's main folder (which is named after the module as well).
Your Module Edit Page should at least contain a Dynamicweb ModuleHeader control, a server control that renders a header for the module consistent with the Dynamicweb built-in modules. Additionally, when your module takes user input, you can add other controls, like the Editor or standard ASP.NET controls like the TextBox to accept user input. To group your controls visually, you can use the GroupBox control and wrap your form's content within these controls. Finally, to tell Dynamicweb what data to persist in the database when you save a module you need the ModuleSettings control.
All of these controls are discussed in more detail later in this series.
A very basic Module Edit Page can look like this:
<%@ Register <dw:ModuleSettings <dw:GroupBox <table style="width: 100%;"> <tr> <td style="width: 170px;">Text</td> <td><input id="HelloText" class="std" type="text" runat="server"/></td> </tr> </table> </dw:GroupBox>
This page is the Edit page for a fictitious News module called CustomNews. The page defines a simple HTML table that contains a single input box whose value is saved with the associated paragraph automatically.
Storing Module Related Data in Dynamicweb
This Edit page contains a single text box that allows a user to enter some text. Obviously, you want DW to persist this value for you when users make changes to that field. Fortunately, this is taken care of automatically.
To see how this works, look at the ModuleSettings control at the top of the page:
<dw:ModuleSettings
This ModuleSettings control is used to register the fields that Dynamicweb should save when users save the settings for the module. In this example, CustomNews is the system name of the module. The ModuleSettings control registers the field HelloText through its Value property. This field has a one to one mapping with the input field in the page. Internally, Dynamicweb uses something like Request.Form["FieldName"] to get the data associated with the field.
You can add multiple values in the Value property by separating them with a comma, like this:
<dw:ModuleSettings
The Description field could then come from a Dynamicweb Editor control (or any other UI element), like this:
<dw:Editor
With controls like the input field, the Dynamicweb editor and the ModuleSettings control, Dynamicweb is able to store the user data in the selected paragraph for you. It does not, however, restore the selected data. You'll see how to do this in part 4 of this series.
Creating a Front End Class
The Front End class is responsible for outputting the HTML for your module. It should inherit from ContentModule or a class that ultimately inherits ContentModule and you should override the GetContent method that returns the HTML for the module as a string. The base ContentModule class exposes three important properties: a DataRow, a PageView and a Properties instance:
Within the method that outputs the HTML, you can instantiate Template objects, read template files, set Dynamicweb template tags, access your own classes and databases and so on. You'll see how to do this later.
Your method should return the HTML for the module. The following snippet shows the GetContent method of the CustomHello sample application that is part of the project you can download from the Engage web site:
[AddInName("CustomHello")] public class Frontend : Dynamicweb.ContentModule { public override string GetContent() { //Get an instance of a template object Dynamicweb.Templatev2.Template template = new Dynamicweb.Templatev2.Template("CustomHello/Template.html"); //Set a tag named "Text" with the value from the HelloText property template.SetTag("Text", Properties.Values["HelloText"]); //Return the parsed template to the event handler return template.Output(); } }
This method creates a Template object, loads in the HTML template from disk by passing in the filename in the constructor and then sets a Dynamicweb template tag called Text. This example assumes the template exists in the designated folder under the Templates folder and that it contains a Dynamicweb template tag like <!--@Text-->. Notice how the Dynamicweb template tag looks like an HTML comment, except that it's prefixed with an @ symbol. At run time, Dynamicweb replaces all occurrences of this tag with the value you assigned to the Template instance using template.SetTag("Text", SomeValue). When assigning the value, you need to leave out the @ symbol.
In this example, the template that is used to render the UI is hard coded. In later articles in this series, you see how to use a Dynamicweb control to let a content editor choose a template when adding your module to a paragraph to create maximum flexibility with regard to the look and feel of the output of the module.
Registering the Module in the Custom Solution
To inform Dynamicweb of the existence of the module, you need to give the class an AddInName attribute. You need to pass the ModuleSystemName to its constructor:
[AddInName("MyModuleName")] public class Frontend : Dynamicweb.ContentModule {}
Registering the Module in the Dynamicweb Admin Interface
The final step is registering the module in the Dynamicweb Admin interface. To register a module you need to supply the following details:
- Name - this is what users will see in the Modules list and Modules page
- System name - The system name that is used internally to refer to this module
- Script - Optional path to an ASPX page used to manage the module in /Admin
- Description - Optional description to describe your module.
- Access - whether the module is currently enabled and active
- Paragraph module - whether the module can be inserted as a paragraph module in a page
A typical module registration screen looks like this:
Figure 2 (click to enlarge)
Caveats
The Module Edit Page is not a full blown ASPX web form. That means you can't use controls that require to be rendered in a <form runat="server" />. Using the built-in Dynamicweb controls or simple controls like an HTML input box work fine though. In part 2 a of this series you'll see an alternative that uses User Controls to bring back the ASP.NET postback architecture in your module's Edit page.
In part 3 of this article series you see how to create and register the custom module. Once the module is configured, you can add an instance of it by creating a new page with a paragraph and then inserting the module on the Module tab. You'll see how to do this in the next part as well. | https://devierkoeden.com/articles/custom-modules-part-2-7-steps-to-building-a-custom-module-background | CC-MAIN-2019-39 | refinedweb | 1,712 | 59.74 |
Common Mistakes to Avoid when Using Dask
• February 10, 2022
Using Dask for the first time can be a steep learning curve. After years of building Dask and guiding people through their onboarding process, the Coiled team has recognised a number of common pitfalls.
This post presents the 5 most common mistakes we see people make when using Dask – and strategies for how you can avoid making them.
Let’s jump in.
1. “Dask is basically pandas, right?”
The single-most important thing to do before starting to build things with Dask is to take the time to understand the basic principles of distributed computing first.
The Dask API follows the pandas API as closely as possible, which means you can get started with Dask pretty quickly. But that “as possible” sounds deceptively simple. What that doesn’t tell you is that when you move from pandas to Dask you’re actually entering a whole different universe – with a different language, different basic laws of physics and a different concept of time.
To succeed in Dask you’ll need to know things like why Dask is “lazy”, which kinds of problems are “embarrassingly parallel” and what it means to “persist partitions to cluster memory”. If that all sounds like gibberish to you, read this introduction to basic distributed computing concepts. And don’t worry if this feels slightly overwhelming–take a deep breath and remember that I knew next to nothing about all of this a few months ago, either 😉
The good news is that you don’t need to master any of these new languages or laws or concepts – in most cases, you can navigate your way around with a basic understanding of fundamental concepts. It’s a bit like going on a holiday to a place where you don’t speak the language. You don’t need to be able to hold an entire conversation on the intricacies of the local political system to have a good time. But it’d be helpful if you were able to ask for a telephone or for the directions to your hotel if you needed to.
2. “I’ll just call .compute() whenever I want to see a result.”
One of the most obvious differences between pandas and Dask is that when you call a pandas DataFrame you get, well, the DataFrame:
import pandas as pd df = pd.DataFrame( { "Name": ["Mercutio", "Tybalt", "Lady Montague"], "Age": [3, 2, 4], "Fur": ["Grey", "Grey", "White"] } ) df
..whereas when you call a Dask DataFrame you get the equivalent of the plastic wrapper but not the candy: you’ll see some descriptive information about what’s inside but not the actual contents of your DataFrame:
import dask.dataframe as dd dask_df = dd.from_pandas(df) dask_df
You’ll quickly discover that this is because of Dask’s “lazy evaluation” and that if you add
.compute() to your df call, you will get the results the way you’re used to seeing them in pandas.
But you’ll want to be careful here. Dask evaluates lazily for a reason. Lazy evaluation allows Dask to postpone figuring out how to get you the result until the last moment – i.e. when it has as much knowledge as possible about the data and what you want from it. It can then calculate the most efficient way to get you your result. Calling
.compute() too often or too early interrupts this optimisation procedure. It’s a bit like eating all your half-cooked ingredients midway through the recipe – it won’t taste as good and if you still want the end result, you’ll have to start all over again.
Rules of thumb here: use
.compute() sparingly. Only use it to materialize results that you are sure: 1) will fit into your local memory, 2) are ‘fully cooked’ – in other words, don’t interrupt Dask’s optimization halfway through, and 3) ideally will be reused later. See this post for more details about
compute.
3. “In that case, let’s just .persist() everything!”
Soon after discovering
.compute(), you’ll learn about
.persist().
It can be difficult at first to understand the difference between these two: both of these commands materialize results by triggering Dask computations. This definition will help you tell them apart:
.persist() is basically the same as
.compute() – except that it loads results into cluster memory instead of local memory. If you need a refresher on the difference between local and cluster memory, check out the “cluster” section of this article.
You might be wondering – well, in that case, shouldn’t I just
.persist() everything? Especially if I’m working on a cloud-based cluster with virtually unlimited memory, won’t persisting just make everything run faster?
Yes…until it doesn’t. Just like
.compute(), calling
.persist() tells Dask to start computing the result (cooking the recipe). This means that these results will be ready for you to directly use the next time you need them. But you only have two hands, limited tools and are cooking against a deadline. Just filling all your pans with cooked ingredients you might (or might not) need later is not necessarily a good strategy for getting all your prepared dishes to all of your diners at the right time.
Remote clusters can be a great resource when processing large amounts of data. But even clusters have limits. And especially when running Dask in production environments – where clusters won’t be free but closely cost-monitored – you’ll start caring a lot more about using cluster memory as efficiently as possible.
So the advice here is similar to the one above: use
.persist() sparingly. Only use it to materialize results that you are sure: 1) will fit into your cluster memory, 2) are ‘fully cooked’ – in other words, don’t interrupt Dask’s optimization halfway through, and 3) will be reused later. It’s also best practice to assign a
.persist() call to a new variable, as calling
.persist() returns new Dask objects, backed by in-memory data.
# assign persisted objects to a new variable df = dd.read_parquet(“s3://bucket/file.parquet”) df_p = df.persist()
See this post for more details on
persist.
4. “Let’s dump all data into CSV files!”
We all love our CSV files. They’ve served us well over the past decades and we are grateful to them for their service…but it’s time to let them go.
You’ve now made the daring move to the Universe of Distributed Computing, which means there are entirely new possibilities available to you, like parallel reading and writing of files. CSV is just not made for these advanced processing capabilities and when working with Dask, it’s highly recommended to use the Apache Parquet file format instead. It is a columnar format with high compression options that allows you to perform things like column pruning, predicate pushdown filtering, and avoiding expensive and error-prone schema inference.
Dask allows you to easily convert a CSV file to Parquet using:
# convert CSV file to Parquet using Dask df = dd.read_csv(“file.csv”) df.to_parquet(“file.parquet”)
Matthew Powers has written a great blog about the advantages of Parquet for Dask analyses.
5. “The Dask Dashboard? What’s that?”
If you’re coming from pandas, you’re probably used to having very little visibility into the code that’s running while you wait for a cell to complete. Patiently watching the asterisk next to your running Jupyter Notebook cell is a daily habit for you – perhaps supplemented with a progress bar or notification chime.
Dask changes the game entirely here. Dask gives you powerful and detailed visibility and diagnostics about the computations that it runs for you. You can inspect everything from the number and type of running tasks, the data transfer between workers in your cluster, the amount of memory certain tasks are taking, etc.
Once you get a little more familiar with Dask, you’ll be using the Dashboard regularly to figure out ways to make your code run faster and more reliably. One thing to look out for, for example, is the amount of red space (data transfer) and white space (idle time) that show up – you’ll want to minimize both of those.
You can get a link to the Dask Dashboard with:
# launch a local Dask cluster using all available cores from dask.distributed import Client client = Client() client.dashboard_link >> '
Or use the Dask Dashboard extension for JupyterLab, as described in this post.
Watch this video by Matt Rocklin, the original author of Dask, to learn more about using the Dask Dashboard to your advantage.
6. “I Don’t Need to Ask for Help”
Here’s a ‘bonus mistake’ for you that might actually be one of the most important keys to successfully learning how to use Dask.
It’s admirable (and good practice) to want to try things yourself first. But don’t be afraid to reach out and ask for help when you need it. The Dask Discourse is the place to be whenever you want to discuss anything in more detail and get input from the engineers who have been building and maintaining Dask for years.
What’s next?
You’re now well on your way to avoiding the most common mistakes people make when onboarding with Dask. Our public Dask Tutorial is a good next step for anyone serious about exploring the possibilities of distributed computing.
Once you’ve mastered the fundamentals, you’ll find resources below for using Dask for more advanced ETL, data analysis and machine learning tasks:
- Converting a Dask DataFrame to a pandas DataFrame – or the other way around
- Setting the index of a Dask DataFrame
- Merging Dask DataFrames
- Converting Large JSON Files to Parquet with Dask
- Training XGBoost in Parallel with Dask
Good luck on your Dask journey! | https://coiled.io/blog/common-dask-mistakes/ | CC-MAIN-2022-21 | refinedweb | 1,640 | 61.56 |
One of the side projects that I run is Resrc, a site where I curate useful or interesting resources for software development.
Since the site is typically updated once a day and does not offer complex dynamic features, I decided to go with a static site architecture also known as Jamstack. The actual tech stack that I went with is Airtable for the database and Gatsby for the static site generator. This works extremely well because of Gatsby's data source plugin system to allow pulling in data from Airtable at build time with ease.
However, people tend to question this architecture...
How do I add a dynamic feature, such as search, to a static site?
It is possible, but requires a different set of tools than what you might traditionally be used to. In my case, I already used those tools: Airtable and Netlify.
📊 Storing and querying data with Airtable
Airtable is a service that looks like a spreadsheet but behaves like a database.
The best part is you get access to a full API:
The API has advanced filtering capabilities which allows performing a full text search on various fields of my data, realtime. I got really excited because I thought: now I just build out a search UI, send an ajax request to fetch results, and I'm done!
Hmm, not quite. Airtable currently does not have access control features, meaning that if I exposed my API key on the frontend then anyone could submit a request to delete my data. That is not exactly what I would call secure.
Note that this article intends to be a tutorial, so to continue on I recommend that you create an Airtable base, add some records, and then check out the API.
🔑 Securing the API key with Netlify Functions
Netlify is a service that handles deploys for static sites. Amongst many features that are useful for static sites, they offer serverless functions. While AWS Lambda is used under the hood, you don't have to worry about complex implementation details.
The reason that we'd want to use serverless functions is because they provide a way of proxying our requests to the Airtable API, thus hiding our API key. Instead of the frontend making direct requests to Airtable, it is made to the serverless function.
Note: This tutorial assumes that you already created a site with a static site generator such as Gatsby, Next.js or Eleventy.
To set up Netlify Functions, we first need to create a
netlify.toml file:
[build] functions = "functions"
Let's also store our API key in a
.env file:
AIRTABLE_API_KEY=PLACEHOLDER
Make sure that
.env files are ignored by Git and thus never committed to your repository. You will also have to add this key as an environment variable in Netlify.
Next, create the file
functions/search.js:
const Airtable = require('airtable'); const AIRTABLE_API_KEY = process.env.AIRTABLE_API_KEY; const AIRTABLE_BASE_ID = 'PLACEHOLDER'; // TODO: Replace placeholder. const AIRTABLE_TABLE_NAME = 'PLACEHOLDER'; // TODO: Replace placeholder. const AIRTABLE_PAGE_SIZE = 30; const RESPONSE_HEADERS = { 'Content-Type': 'application/json; charset=utf-8', }; exports.handler = async function (event) { const { query } = event.queryStringParameters; if (!query) { return { statusCode: 422, body: JSON.stringify({ error: 'Query is required.' }), }; } if (!AIRTABLE_API_KEY) { return { statusCode: 500, body: JSON.stringify({ error: 'Airtable API key is missing.' }), }; } const base = new Airtable({ apiKey: AIRTABLE_API_KEY }).base( AIRTABLE_BASE_ID ); const results = await base(AIRTABLE_TABLE_NAME) .select({ pageSize: AIRTABLE_PAGE_SIZE, // TODO: Update to use your field names. filterByFormula: ` OR( SEARCH("${query.toLowerCase()}", LOWER({Name})), SEARCH("${query.toLowerCase()}", LOWER({Description})), SEARCH("${query.toLowerCase()}", LOWER({Category})), SEARCH("${query.toLowerCase()}", LOWER({URL})) ) `, }) .firstPage() .catch((error) => { console.log(`Search error from Airtable API: ${error.message}`); return null; }); const noResults = !Array.isArray(results) || results.length === 0; if (noResults) { return { statusCode: 404, body: JSON.stringify({ error: 'No results.' }), }; } return { statusCode: 200, headers: RESPONSE_HEADERS, body: JSON.stringify({ results }), }; };
Make sure to replace the
// TODO comments with your own keys and fields.
Let's now install the Airtable JavaScript client and Netlify CLI:
npm install airtable npm install netlify-cli --dev
And connect our Netlify account:
npx netlify login
Finally, we can launch our development server:
npx netlify --command="npm run develop"
Replace
npm run develop with the command you normally use to start your server.
Our search results can now be accessed at the following search endpoint:
⚛️ Fetching data efficiently with React Query
React Query is an amazing data fetching library but is optional because you can go ahead and create your frontend however you'd like. For example, you could create an HTML form and send a request to the search endpoint using the Fetch API.
However, I put React Query in the title of this article so I am obligated to share how I implemented a more efficient fetching strategy for Resrc. Let's jump into it.
🔎 The Search component
The component should provide a standard form with state management:
import React, { useState } from 'react'; export default function Search() { const [query, setQuery] = useState(''); const handleSubmit = (event) => { event.preventDefault(); window.location.href = `/search?query=${query}`; }; return ( <form onSubmit={handleSubmit}> <input placeholder="Search..." aria-Submit</button> </form> ); }
For Resrc, I have the search form displayed in the header. This is why I made the decision to navigate to a
/search route whenever the form is submitted. This...
- Allows sharing the search results page URL.
- Simplifies data fetching to be on page load.
Also note that in a single page app you should use a client side route navigation instead. Gatsby provides a navigate helper and Next.js provides a useRouter hook.
⚓️ The useSearch hook
Okay, now let's fetch some data! Create a search page and component in your site:
import React, { useState, useEffect } from 'react'; import { useQuery } from 'react-query'; const SEARCH_API_ENDPOINT = '/.netlify/functions/search'; const fetchSearch = async (key, query) => { if (!query) { throw new Error('Search query is required.'); } return fetch( `${SEARCH_API_ENDPOINT}?query=${encodeURIComponent(query)}` ).then(async (response) => { const data = await response.json(); if (response.status !== 200) { const error = new Error(data.error || 'Unknown error'); error.statusCode = response.status; throw error; } return data; }); }; function useSearch(query) { return useQuery(['search', query], fetchSearch); } function SearchResultsPage() { const [query, setQuery] = useState(null); const { isLoading, isSuccess, isError, data, error } = useSearch(query); useEffect(() => { const query = new URLSearchParams(window.location.search).get('query'); if (query) setQuery(query); }, []); if (isLoading) return 'Loading...'; if (isError && error.statusCode === 404) return 'No results'; if (isError) return error.message; if (isSuccess) { return ( <ul> {data.results.map((result) => ( <li key={result.id}>{JSON.stringify(result)}</li> ))} </ul> ); } return null; }
Note how we abstracted the data fetching into a custom hook called
useSearch.
With that, the search functionality is now finished:
- Type
testinto the search form and press Enter.
- Page is navigated to
/search?query=test
- React Query fetches results from
/.netlify/functions/search?query=test
- Results are rendered depending on loading, success, or error status.
Note that I didn't provide any design here so it's up to you to decide how best to display the data. However, you can quickly spruce up the experience by implement a ready made design component system such as Chakra UI. I use it for Resrc.
🎁 Wrapping up
Let's quickly recap the different layers of our realtime search stack:
- Airtable provides a full text search API to query the data we have stored.
- Netlify Functions proxies our API requests to Airtable and hides the API key.
- React Query fetches search results with some added features such as caching.
If you get stuck, feel free to reference the source code of Resrc on GitHub. You are also always free to send me an email or or a tweet with questions or feedback.
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/sunnysingh/how-i-added-realtime-search-to-my-static-site-lg0 | CC-MAIN-2021-25 | refinedweb | 1,261 | 58.69 |
Raster Library - Write color table of raster map. More...
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <grass/gis.h>
#include <grass/glocale.h>
#include <grass/raster.h>
Go to the source code of this file.
Raster Library - Write color table of raster map.
(C) 1999-2009 by the GRASS Development Team
This program is free software under the GNU General Public License (>=v2). Read the file COPYING that comes with GRASS for details.
Definition in file raster/color_write.c.
Write map layer color table.
Definition at line 109 of file raster/color_write.c.
References getenv(), and Colors::version.
Referenced by Rast3d_write_colors(), Rast_write_colors(), and Ve., Rast_init_colors() to initialize the structure and Rast_add_c_color_rule() to set the category colors. These routines are called by higher level routines which read or create entire color tables, such as Rast_read_colors() or Rast_make_ramp_colors().
Note: The calling sequence for this function deserves special attention. The mapset parameter seems to imply that it is possible to overwrite the color table for a raster map which is in another mapset. However, this is not what actually happens. It is very useful for users to create their own color tables for raster maps in other mapsets, but without overwriting other users' color tables for the same raster map. If mapset is the current mapset, then the color file for name will be overwritten by the new color table. But if mapset is not the current mapset, then the color table is actually written in the current mapset under the
colr2 element as:
colr2/mapset/name.
The rules are written out using floating-point format, removing trailing zeros (possibly producing integers). The flag marking the colors as floating-point is not written.
If the environment variable FORCE_GRASS3_COLORS is set (to anything at all) then the output format is 3.0, even if the structure contains 4.0 rules. This allows users to create 3.0 color files for export to sites which don't yet have 4.0
Definition at line 72 of file raster/color_write.c.
References _, fclose(), fd, G_fatal_error(), G_fopen_new(), G_mapset(), G_name_is_fully_qualified(), G_remove(), GMAPSET_MAX, GNAME_MAX, and Rast__write_colors().
Referenced by create_raster(), IL_output_2d(), and IL_resample_output_2d(). | http://grass.osgeo.org/programming7/raster_2color__write_8c.html | CC-MAIN-2018-13 | refinedweb | 357 | 61.02 |
Java.It is shipped with JDK 1.6 and NetBeans 6.
The aim of this tutorial is to get you started with using Java DB in your Java console applications using NetBeans. I recommend having the JDBC tutorial shipped with The Java Tutorial. The steps outlined there are fast, efficient and easy to follow. So lets bootstrap!
Close all the other projects that may be open in NetBeans, and then proceed.
Go to Tools->Java DB Database and Start the Java DB server
For the purpose of this tutorial, we shall create a simple database in Java DB named as "SimpleDBDemo". To create the database, go to Tools->Java DB Database->Create Database. In the dialog box, fill in the Name of the database you want, and the username/password to access the database.
Now that we have a database, let us create a simple 2-column table "TABLE 1". Press "Ctrl + 5" to bring up the "Services" side-bar.
Now, right click on the entry for "SimpleDBDemo" and choose the option for creating a new table.
Please name the columns as is shown above, so that the subsequent Java code shown works properly.
Fill in some data into the table just created using simple SQL Scripts using the "Design query" option obtained by a right-click on the table name. Eg.
INSERT INTO "DEMO"."TABLE1" values('Amit',23); INSERT INTO "DEMO"."TABLE1" values('Ofilia',23);
So, now our Database and Table is ready. Let us now see the Java program which actually uses the Database just created.
Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
Connection con = DriverManager.getConnection("jdbc:derby://localhost:1527/SimpleDBDemo", "demo", "demo"); // The database URL may not be same for you, lookup the "Services" side-bar.
Statement stmt = con.createStatement(); ResultSet rs = stmt.executeQuery("SELECT * FROM DEMO.Table1"); while (rs.next()) { String s = rs.getString("Name"); float n = rs.getFloat("Age"); System.out.println(s + " " + n); }
You should now see all the records that you have inserted into your table. Of course, we just saw a simple SQl query, but you can go ahead and try complex ones, more suited to your requirements
/* * Main.java * * Created on 17 Sep, 2007, 10:49:17 PM * * To change this template, choose Tools | Templates * and open the template in the editor. */ package javadbdemo; import java.sql.*; /** * * @author amit */ public class Main { /** Creates a new instance of Main */ public Main() { } /** * @param args the command line arguments */ public static void main(String[] args) { // TODO code application logic here try{ Class.forName("org.apache.derby.jdbc.EmbeddedDriver"); }catch(ClassNotFoundException e){ System.out.println(e); } try{ Connection con = DriverManager.getConnection("jdbc:derby://localhost:1527/SimpleDBDemo", "demo", "demo"); Statement stmt = con.createStatement(); ResultSet rs = stmt.executeQuery("SELECT * FROM DEMO.Table1"); while (rs.next()) { String s = rs.getString("Name"); float n = rs.getFloat("Age"); System.out.println(s + " " + n); } }catch(SQLException e){ System.err.println(e); } } }
This tutorial showed you how easy it is to use Java DB in your applications. For further information on Java DB and related topics consult the section below.
--ByAmit Kumar Saha | http://wiki.netbeans.org/GetStartedwithJavaDB | crawl-002 | refinedweb | 512 | 52.97 |
nstfruity — NST Script To Setup Fruity Package For Nagios Management.
nstfruity [
-m
TEXT
|
--mode
TEXT
] [
--dbimport
FILENAME
] [
--passwd
TEXT
] [
--pkg-dir
DIRECTORY
] [
-h
[true]|false
|
[true]|false
] [
-H
[true]|false
|
--help-long
[true]|false
] [
-v
[true]|false
|
--verbose
[true]|false
] [
--version
[true]|false
]
The nstfruity script is used to manage the fruity package (which in turn is used to manage the nagios package).
There are several things to consider when using nstfruity:
You must have the MySQL server up and running first (the setup_mysql script can help with this).
Using this script will not directly affect nagios. One will need to use the fruity interface, review and/or edit the configuration information maintained by fruity, export the information maintained by fruity (to make it accessible to nagios) and finally, restart the nagios service to load in the new configuration tables.
Removing the fruity setup does NOT affect nagios.
Once fruity has been setup, you access it
by pointing a web browser at
(you may subsitute the external IP address of the system for
"
127.0.0.1" if you are accessing it
remotely).
A simplified interface to this script is
provided in the NST WUI (its located on the "
Nagios
Setup" page).
Here is a example of using nstfruity (include the
"
--verbose" option to get additional
output):
[root@probe ~]#
nstfruity --mode setup
New password for 'fruity' database: Retype new password:
[root@probe ~]#
if nstfruity --mode status; then echo "OK"; fi
OK
[root@probe ~]#
nstfruity --mode remove
[root@probe ~]#
Here is a example of using nstfruity to export the
configuration to "
/tmp/fruity.sql". The
exported file can serve as a back up and can be restored via:
"
-m setup --dbimport
/tmp/fruity.sql". Alternatively, it could be copied
to a different NST probe and used to initialize the fruity
database on that system.
[root@probe ~]#
nstfruity --mode dbexport >| /tmp/fruity.sql
[root@probe ~]#
This mode of operation is used to initialize fruity
and make it accessible via a web browser. The following command
line options may be used in "
setup" mode (all
are optional):
Provides verbose diagnostic output about what the script is doing.
By default, we only create the fruity database
- providing a "
clean slate" to work
with. If you specify "
--dbimport
minimal", we will initialize fruity with
data (to save you some tedious setup). Alternatively, if you
have a copy of fruity tables you'd like initialize
the database with, you may specify the fully qualified file
name (like: "
--dbimport
/tmp/myfruity.sql"). The file must contain valid
SQL statements and may optionally be compressed via
bzip2 or gzip.
If you don't want to be prompted to provide the
fruity database password, you may specify the password
on the command line. Since you seldom need to know the
password, you may also specify "
--passwd
RANDOM" and a random password will be chosen
(NOTE: The password is stored in
"
/etc/fruity/config.inc").
This option is intended for NST developers (it allows us to download and try out newer versions of fruity).
This mode of operation is used to tell whether fruity
has been setup yet or not. It is written such that there is no
output unless an error occurs or the
"
--verbose" option is also specified. However,
it does exit with 0 if fruity is setup and 1 if not (making
it useful to other scripts). The following command line option(s)
may be used in "
status" mode (all are
optional):
Provides verbose diagnostic output about what the script is doing.
This mode of operation is used to remove the fruity
setup from the system. This will remove the entire fruity fruity. fruity setup, and then set up fruity and initialize it with the previously exported database:
[root@probe ~]#
nstfruity --mode dbexport | bzip2 -c >| /tmp/fruity.sql.bz2
[root@probe ~]#
nstfruity --mode remove
[root@probe ~]#
nstfruity --mode setup --dbimport /tmp/fruity.sql.bz2
New password for 'fruity' database: Retype new password:
[root@probe ~]#
The following command line options are available:
-m TEXT] | [
--mode TEXT]
This option controls what
nstfruity will do. If you specify
"
status" (the default), it will indicate
whether Fruity has been setup yet or not. If
you specify "
setup" it will remove any previous
setup information and set up Fruity on your NST
system. If you specify "
remove" it will remove
the Fruity setup. If you specify
"
dbexport", the SQL database will be dumped in
a form usable for the "
--dbimport FILE"
option.
--dbimport FILENAME]
By default, the Fruity management system starts
with a clean slate (nothing configured). One would then need to
import the current Nagios configuration (which is often
desirable) or spend a lot of time with the initial setup of the
Fruity management system. This is the default behavior
maintained by this script. Alternatively, one may use the
"
--dbimport FILE" command line option and specify
the name of a initial SQL database (see
"
/usr/local/share/nstfruity/minimalfruity"
directory.
--passwd TEXT]
This option allows one to set the password used for
access to the "
fruity" database that will be
created during setup. By default you will be prompted at the
command line. If you specify a password of
"
RANDOM", we will generate a random password
using the pwgen command.
--pkg-dir DIRECTORY]
Typically you will never need to change this
parameter from its default value of
"
/usr/local/fruity". However, if you've
downloaded and installed a newer version of
Fruity, you can use this option to instruct
the script to use your new installation (we can't guarantee it
will work as this option allows the NST developers to experiment
with newer versions of Fruity).
-h [true]|false] | [
--help [true]|false]
When this option is specified, nstfruity will display a short one line description of nstfruity, followed by a short description of each of the supported command line options. After displaying this information nstfruity will terminate.
-H [true]|false] | [
--help-long [true]|false]
This option will attempt to pull up additional
nstfruity documentation within a text based
web browser. You can force which browser we use setting the
environment variable
TEXTBROWSER, otherwise,
we will search for some common ones.
-v [true]|false] | [
--verbose [true]|false]
When you set this option to true, nstfruity will produce additional output. This is typically used for diagnostic purposes to help track down when things go wrong.
--version [true]|false]
If this option is specified, the version number of the script is displayed.
/usr/local/share/nstfruity
Directory containing resource files used by
nstfruity. You can find the SQL table used for the
"
--dbimport minimal" option.
TEXTBROWSER
This controls what text based browser is used to display help information about the script. If not set, we will search your system for available text-based browsers (Ex: elinks, lynx ...).
setup_mysql(l), fruity, nagios, Network Security Toolkit | http://nst.sourceforge.net/nst/docs/scripts/nstfruity.html | crawl-001 | refinedweb | 1,131 | 60.65 |
19
Joined
Last visited
Community Reputation428 Neutral
About Ollhax
- RankMember
Unity
Ollhax replied to Ollhax's topic in General and Gameplay ProgrammingYes, that's right. I'm just using it to figure out what can potentially do that encryption (or whatever) - or rather, only enable stuff that I'm pretty sure cannot. I'm trying to protect the user's systems. They'll download mods in the form of code, compile it and run it locally. There won't be a central server that keeps the mods, at least not at first, so any security measures have to be done on the users' local machines. As you say, CAS (or whatever the new security model is called) is still useful. I'll probably leave it in place as an added precaution for PC builds. However, I don't want to be limited to only PC releases, so I need an alternative as well. You're completely right about runtime checking via assembly resolves, I have that check in place already. As far as I know, those are the only assemblies you'll be able to touch, in addition to the ones given to the compiler. Reflection is tricky, agreed. Private member access may be hard to stop, so I'll have to think about that closely. I can probably make tools that let you do "safe" reflection, or just disallow it entirely. Peer-reviewing is definitely a safeguard too. If a mod messes with your computer, you will probably report it, or at least not recommend it to others. But this is obviously only a last resort.
Unity
Ollhax replied to Ollhax's topic in General and Gameplay ProgrammingThanks for the replies so far! I should have explained my situation a bit more. It's about the same as BitMaster's example of WC3 maps. I want to use C# for scripting-type of work. Even when limited, I expect it to be very useful. Some points for context: Users will download mods as code and compile+run them locally. There's no downloading/running of arbitrary .exes or other files. I can examine the code thoroughly before running it. I'll examine the actual semantic model of the code through Roslyn, not match raw code strings. Disallowing the unsafe keyword should avoid problems with buffer overruns, etc. (Well, if I haven't missed something, which is why I'm posting this!) Crashing isn't an issue. I can't help if a mod crashes the sandbox process, but it won't bring down the entire application at least. I imagine mods that crashes the game for you won't be that popular. Allowing reflection isn't a requirement. I'm interested to hear about specific ideas/examples for how you'd be able to attack this setup, given the constrains I mentioned above. I know it's a tricky thing to guarantee that something is secure, but at the same time I can't come up with a single concrete example in my setup where this would be an actual problem. If you'd like, consider it a challenge! Side note: I use C# instead of Lua because I prefer that language, and I'm hoping to ride a bit on the XNA/Unity-wave. I can use Roslyn for real-time compiling, giving error messages, providing with intellisense-like tooling, etc. Also, it lets me use a common framework for my engine code and mod code. Basically, it saves me a *ton* of works, which makes this a feasible (more or less...) project for me.
Unity
Ollhax posted a topic in General and Gameplay ProgrammingHi there! I've been working on a proof-of-concept for a game-maker idea I've had for a while. It boils down to running user-written, untrusted C# code in a safe way. I've gone down the path of AppDomains and sandboxes, using Roslyn to build code on the fly and running the code in a separate process. I have a working implementation up and running, but I've hit some snags. My biggest issue is that it seems like Microsoft have given up on sandboxing code. See. They added the "Caution" box a few months back, including this gem: "We advise against loading and executing code of unknown origins without putting alternative security measures in place". To me, it feels like they've deprecated the whole thing. There is also the issue that AppDomain sandboxing isn't very well supported across platforms. There's no support in Mono. I had hopes for a fix from the CoreCLR, but then I found this: - so no luck there. So! I've started exploring whitelisting as a security measure instead. I haven't figured out how big a part of the .NET library I need to include yet, but it feels like I mainly need collections and some reflection stuff (probably limited to messing with public fields). I think I can do all this by examining the code with Roslyn and not allowing namespaces/classes that aren't explicitly listed. I'm comparing my approach with Unity, which does more or less the same thing, e.g exposing only a safe subset of the framework. In their case it's an actual stripped down version of Mono (if I've understood it right), but seems to me the results would be pretty much the same if I get it right. TLDR: If you have experience with these kind of problems, would you say that is a safe approach? Am I missing something big and obvious here?
Ollhax replied to Ollhax's topic in General and Gameplay ProgrammingMm, totally with you on that point. That's what I was looking for; whether this is just a convention or if there's something I've not thought about. Good arguments on either side. But, in line with what you said, I've made my problem easier by just calling it a gameplay-oriented randomizer instead (throwing in some helpers for random directions, colors, etc). I can definitely stand behind inclusive maximums for that use case.
Ollhax replied to Ollhax's topic in General and Gameplay ProgrammingThis pretty much nails it. I agree that neither way (exclusive or inclusive max) is inherently better than the other. So unless I'm missing something, it boils down to my arguments above (counter-intuitive and weird states) vs usefulness for indexing arrays and convention. Thanks for taking your time to answer, I'll go pounder this a bit more
Ollhax replied to Ollhax's topic in General and Gameplay ProgrammingBut you can turn that argument around and ask, why would one design the random function's range only to index stuff in arrays smoothly when it's just as likely to be used for simulating die rolls? Stated differently, I *am* designing for general use, and I get the feel that exclusive maximum is a weirdness designed for indexing arrays, a very specific problem.
Ollhax replied to Ollhax's topic in General and Gameplay ProgrammingThanks for the link! It's pretty much what I expected. I agree with that SO reply, but I'm not really sure if it applies to the interface of a random number generator. Expressing "I want a dice roll between 1-6" as Random(1, 6) just seems more natural to me. Having Random(0, 0) mean Random(0, Int32.MaxValue-1) (or maybe Random(0, Int32.MaxValue)?) does not really feel natural either. If I don't go with a convention of my own, I'll just stick to the one in System.Random, where Random(0, 1) equals Random(0, 0).
Ollhax posted a topic in General and Gameplay?
Ollhax replied to Ollhax's topic in Networking and MultiplayerI think I'm making some progress with queueing up the updates in case they are early, and applying them at their designated time. It definitely removed the worst of the jitter. However, there's some inherent problems with this: * Queuing up = introducing latency. Not much to do about this. * How large queue should one allow? I've set an arbitrary number right now (3!) but I think it needs some more thought... * What happens to packets that are too late? I think I'll try extrapolating the position in case this happens.
Ollhax posted a topic in Networking and Multiplayer?
Ollhax posted a topic in Networking and MultiplayerHi there, I'm working on a 2d platform game using the quake 3 networking model. The client sends inputs 20 times a second, i.e. three inputs per packet. To avoid problems with "bursty" connections (see [url=""]here[/url]), I process the received inputs directly on the server during a single frame. Since the inputs as well as the physics (gravity etc) of the game affects the player entities, I essentially run the entire physical update for the specific entity three times in one frame when the server gets the packet. Now, to my problem. The above model worked very well before I added gravity to the mix, but since then I realized that I need to update the player entity on the server even when there aren't any player inputs queued up. Otherwise, lagging players would just hang in the air, as they do in my current implementation. Running physics without inputs has proven to be troublesome because depending on the timing when inputs are received, the server may be a few physics frames ahead or behind of the client. It may start off with another starting point , causing a misprediction when returning the updated position. I've read a lot of articles and many dance around this subject, for example: [url=""][/url] - The physics is only updated when input is received, ignoring the problem what to do when inputs are missing for a longer period. [url=""][/url] - No physics is applied in this example; all state change depends on input so the problem does not exist. I see some alternatives: 1. Keep track on when the server starts "extrapolating" the entity, i.e. runs updates without fresh client input. When new inputs arrive, reset to the original position and re-simulate the entity with the new inputs for the duration in which it was previously extrapolated. 2. The server stops updating the entity if it runs out of client inputs. Instead, the other clients goes on to extrapolate the movement of the client that is missing inputs. 3. Something entirely different. Number 1 is attractive since it seems the "correct" way to go about this, but I'm having trouble getting it exactly right because of the jitter; i.e. I can't fix the mispredictions entirely. Also, I feel it's somewhat over-complicated. Number 2 is nice since it's basically the model I have, but with the additional extrapolation added on. A problem with this model is that the clients would see a very old position of the lagging player, and since all hit detection etc is done server-side the laggers would become ghosts. Anyone got a number 3? How do you usually solve this? EDIT: Actually, I realized while writing this that #2 is a pretty acceptable solution to this. I'll keep the post up in case someone has a better idea or someone wants to read this for reference.
Ollhax replied to Ollhax's topic in Networking and Multiplayer[quote name='papalazaru' timestamp='1334835016' post='4932766'] Article along those lines. [url=""][/url] Maybe contains some extra stuff that would be useful to you. [/quote] Yup, read them all. It's a good link though!
Ollhax replied to Ollhax's topic in Networking and Multiplayer[quote name='hplus0603' timestamp='1334704303' post='4932309'] [quote]Do you actually send three game states per packet bunched together?[/quote] No, you probably send three sets of simulation inputs, and perhaps one game state. You don't generally want to just dump full network states across the wire at full frame rate, because of the bandwidth usage. However, by sending the inputs, you can get a very good replica of what the entity is doing, without using much bandwidth. Typically, you can get away with sending a baseline/checkpoint for an entity only once every few seconds, and do inputs for all the simulation steps in between. To avoid burstiness, you could send the inputs for all entities for the three steps, and a state dump of one entity, per packet. Keep a queue of entities, and rotate through which one gets state-dumped per packet. [/quote] From the server, I send only the latest game state and all queued (and not yet used) inputs. I decided to fix this and yes, it did improve the responsiveness of the game.
Ollhax replied to Ollhax's topic in Networking and Multiplayer[quote name='hplus0603' timestamp='1334352984' post='4931062'] You have to be careful about your "frames." There may be render frames, physics frames, and network tick frames. The latency in question will be a combination of network tick frames. For example, if you have three physics frames per network frame, then the batching of commands will add three physics frames' worth of latency. Also, if the data for a player arrives "early" on the server, then the server has the choice to immediately forward it to the other clients, or to only forward it after it's been simulated and verified. How much latency there is from player A to player B depends on this choice as well. [/quote] Yes, one of the problem is that network ticks run slower than physic frames, which is why I have to queue up the inputs. Good idea about forwarding inputs immediately, but I'm a bit worried over the overhead in a situation where lots of player inputs arrive spread out over different frames, this would trigger a lot of data sending. I guess I could also let clients send this kind of info directly to all other clients in the game, but that would complicate things a bit I think. Anyhow, am I being paranoid about this thing or is it something you generally need to deal with?
Ollhax posted a topic in Networking and MultiplayerHi there! I'm working on my network code and I've run into a minor snag that someone perhaps can help me with. I went with the [url=""]Quake 3[/url] network model and I've used [url=""]Gabriel Gambetta's[/url] excellent blog posts as a reference when implementing the finer points such as server reconciliation. However, there's a situation that occur when there's more than one client in the picture. Lengthy example ahead: Say that [i]Client A[/i] runs three frames, then decides to send an update to the server. I.e, three input structures are sent to the server. The server receives the update after a few moments, say [i]SendTime(Client A)[/i] or ST(A) time units later. The three inputs are queued for Client A for three server update ticks in the future, meaning that the server will be fully up to date with the client after ST(A) + 3 ticks. This is all fine and dandy, since Client A's prediction and server reconciliation will hide all this latency for Client A. What bothers me is when [i]Client B[/i] enters the picture. One could argue that he should be able to know Client A's newest position after ST(A) + ST(B) time units, but if the system is implemented exactly as described above, the input may not show until ST(A) + ST(B) + 3 ticks. This is because the server would have to update the state in order for the input's effect to show. Exactly how much delay Client B would experience also depends on how often the server sends updates. My question is, do I have a fault in this design or is this how it usually is? One improvement I can see now would be for the server to send over A's remaining inputs to B when doing an update, letting B predict some of A's "future" inputs also. Another thing to try out would be to let Client B just extrapolate A's previous input until the server sends fresher updates. Any more takes on this? | https://www.gamedev.net/profile/194922-ollhax/?tab=topics | CC-MAIN-2017-30 | refinedweb | 2,729 | 63.49 |
This class is used to represent rotational inertias for unit mass bodies. More...
#include <drake/multibody/multibody_tree/unit_inertia.h>
This class is used to represent rotational inertias for unit mass bodies.
Therefore, unlike RotationalInertia whose units are kg⋅m², the units of a UnitInertia are those of length squared. A unit inertia is a useful concept to represent the geometric distribution of mass in a body regardless of the actual value of the body mass. The rotational inertia of a body can therefore be obtained by multiplying its unit inertia by its mass. Unit inertia matrices can also be called gyration matrices and therefore we choose to represent them in source code notation with the capital letter G. In contrast, the capital letter I is used to represent non-unit mass rotational inertias. This class restricts the set of allowed operations on a unit inertia to ensure the unit-mass invariant. For instance, multiplication by a scalar can only return a general RotationalInertia but not a UnitInertia.
Instantiated templates for the following kinds of T's are provided:
They are already available to link against in the containing library.
Default UnitInertia constructor sets all entries to NaN for quick detection of uninitialized values.
Creates a unit inertia with moments of inertia
Ixx,
Iyy,
Izz, and with each product of inertia set to zero.
In debug builds, throws std::logic_error if unit inertia constructed from these arguments violates RotationalInertia::CouldBePhysicallyValid().
Creates a unit inertia with moments of inertia
Ixx,
Iyy,
Izz, and with products of inertia
Ixy,
Ixz,
Iyz.
In debug builds, throws std::logic_error if unit inertia constructed from these arguments violates RotationalInertia::CouldBePhysicallyValid().
Constructs a UnitInertia from a RotationalInertia.
This constructor has no way to verify that the input rotational inertia actually is a unit inertia. But the construction will nevertheless succeed, and the values of the input rotational inertia will henceforth be considered a valid unit inertia. It is the responsibility of the user to pass a valid unit inertia.
Returns the unit inertia for a unit-mass body B for which there exists a line L passing through the body's center of mass
Bcm having the property that the body's moment of inertia about all lines perpendicular to L are equal.
Examples of bodies with an axially symmetric inertia include axisymmetric objects such as cylinders and cones. Other commonly occurring geometries with this property are, for instance, propellers with 3+ evenly spaced blades. Given a unit vector b defining the symmetry line L, the moment of inertia J about this line L and the moment of inertia K about any line perpendicular to L, the axially symmetric unit inertia G is computed as:
G = K * Id + (J - K) * b ⊗ b
where
Id is the identity matrix and ⊗ denotes the tensor product operator. See Mitiguy, P., 2016. Advanced Dynamics & Motion Simulation.
Returns a new UnitInertia object templated on
Scalar initialized from the value of
this unit inertia.
UnitInertia<From>::cast<To>()creates a new
UnitInertia<To>from a
UnitInertia<From>but only if type
Tois constructible from type
From. As an example of this,
UnitInertia<double>::cast<AutoDiffXd>()is valid since
AutoDiffXd a(1.0)is valid. However,
UnitInertia<AutoDiffXd>::cast<double>()is not.
Computes the unit inertia for a unit-mass hollow sphere of radius
r consisting of an infinitesimally thin shell of uniform density.
The unit inertia is taken about the center of the sphere.
Construct a unit inertia for a point mass of unit mass located at point Q, whose location in a frame F is given by the position vector
p_FQ (that is, p_FoQ_F).
The unit inertia
G_QFo_F of point mass Q about the origin
Fo of frame F and expressed in F for this unit mass point equals the square of the cross product matrix of
p_FQ. In coordinate-free form:
\[ G^{Q/F_o} = (^Fp^Q_\times)^2 = (^Fp^Q_\times)^T \, ^Fp^Q_\times = -^Fp^Q_\times \, ^Fp^Q_\times \]
where \( ^Fp^Q_\times \) is the cross product matrix of vector \( ^Fp^Q \). In source code the above expression is written as:
G_QFo_F = px_FQ² = px_FQᵀ * px_FQ = -px_FQ * px_FQ
where
px_FQ denotes the cross product matrix of the position vector
p_FQ (expressed in F) such that the cross product with another vector
a can be obtained as
px.cross(a) = px * a. The cross product matrix
px is skew-symmetric. The square of the cross product matrix is a symmetric matrix with non-negative diagonals and obeys the triangle inequality. Matrix
px² can be used to compute the triple vector product as
-p x (p x a) = -p.cross(p.cross(a)) = px² * a.
Given
this unit inertia
G_BP_E of a body B about a point P and expressed in frame E, this method computes the same unit inertia re-expressed in another frame F as
G_BP_F = R_FE * G_BP_E * (R_FE)ᵀ.
R_FErepresents a valid rotation or not. It is the responsibility of users to provide valid rotation matrices.
Re-express a unit inertia in a different frame, performing the operation in place and modifying the original object.
Sets
this unit inertia from a generally non-unit inertia I corresponding to a body with a given
mass.
massis not strictly positive.
Shifts this central unit inertia to a different point, and returns the result.
See ShiftFromCenterOfMassInPlace() for details.
For a central unit inertia
G_Bcm_E computed about a body's center of mass (or centroid)
Bcm and expressed in a frame E, this method shifts this inertia using the parallel axis theorem to be computed about a point Q.
This operation is performed in place, modifying the original object which is no longer a central inertia.
thisunit inertia, which has now been taken about point Q so can be written as
G_BQ_E.
For the unit inertia
G_BQ_E of a body or composite body B computed about a point Q and expressed in a frame E, this method shifts this inertia using the parallel axis theorem to be computed about the center of mass
Bcm of B.
See ShiftToCenterOfMassInPlace() for details.
For the unit inertia
G_BQ_E of a body or composite body B computed about a point Q and expressed in a frame E, this method shifts this inertia using the parallel axis theorem to be computed about the center of mass
Bcm of B.
This operation is performed in place, modifying the original object.
thisunit inertia, which has now been taken about point
Bcmso can be written as
G_BBcm_E, or
G_Bcm_E.
Bcmabout point Q as: G_Bcm_E = G_BQ_E - G_BcmQ_E = G_BQ_E - px_QBcm_E² Therefore the resulting inertia could have negative moments of inertia if the unit inertia of the unit mass at point
Bcmis larger than
G_BQ_E. Use with care.
Computes the unit inertia for a unit-mass solid box of uniform density taken about its geometric center.
If one length is zero the inertia corresponds to that of a thin rectangular sheet. If two lengths are zero the inertia corresponds to that of a thin rod in the remaining direction.
Computes the unit inertia for a unit-mass solid cube (a box with equal-sized sides) of uniform density taken about its geometric center.
Computes the unit inertia for a unit-mass cylinder B, of uniform density, having its axis of revolution along input vector
b_E.
The resulting unit inertia is computed about the cylinder's center of mass
Bcm and is expressed in the same frame E as the input axis of revolution
b_E.
Computes the unit inertia for a unit-mass cylinder of uniform density oriented along the z-axis computed about a point at the center of its base.
Computes the unit inertia for a unit-mass solid sphere of uniform density and radius
r taken about its center.
Computes the unit inertia for a body B of unit-mass uniformly distributed along a straight, finite, line L with direction
b_E and with moment of inertia K about any axis perpendicular to this line.
Since the mass of the body is uniformly distributed on this line L, its center of mass is located right at the center. As an example, consider the inertia of a thin rod for which its transversal dimensions can be neglected, see ThinRod().
This method aborts if K is not positive.
Computes the unit inertia for a unit mass rod B of length L, about its center of mass, with its mass uniformly distributed along a line parallel to vector
b_E.
This method aborts if L is not positive.
Constructs a unit inertia with equal moments of inertia along its diagonal and with each product of inertia set to zero.
This factory is useful for the unit inertia of a uniform-density sphere or cube. In debug builds, throws std::logic_error if I_triaxial is negative/NaN. | http://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_unit_inertia.html | CC-MAIN-2018-43 | refinedweb | 1,458 | 52.8 |
MathField is a model field for Django that allows you to input LaTeX and store the compiled HTML on your database. It comes with a form for the Django Admin that provides live previews of your rendered LaTeX.
Your server needs to have Python 2.7 and Django 1.7.
Get it installed with:
$ pip install django-mathfield
Add
'mathfield' to your
INSTALLED_APPS in your Django project’s
settings.py.
Add a
MathField to one of your models like this:
from django.db import models import mathfield class Lesson(models.Model): lesson_plan = mathfield.MathField()
Get live previews of the rendered LaTeX while you’re editing in the Django admin
by adding
MathFieldWidget as a widget when registering your model in
admin.py:
from django.contrib import admin from django import forms from yourapp.models import Lesson import mathfield class LessonAdminForm(forms.ModelForm): class Meta: widgets = { 'lesson_plan': mathfield.MathFieldWidget } class LessonAdmin(admin.ModelAdmin): form = LessonAdminForm admin.site.register(Lesson, LessonAdmin)
After adding some data to your database, you can output the rendered HTML to a template:
<!DOCTYPE HTML> <html> <head> {% load staticfiles %} <link rel="stylesheet" type="text/css" href="{% static 'mathfield/css/mathfield.css' %}"> </head> <body> <div> Raw text/LaTeX: {{ lesson.lesson_plan.raw }} </div> <div> Rendered HTML: {{ lesson.lesson_plan.html|safe }} </div> </body> </html>
Make sure that you include the
mathfield.css stylesheet in your template
head, and include
|safe with the MathField HTML value. This will
give Django permission to render the text in that field as HTML. It is safe to
do this provided that you only update the HTML using the form in the Django
admin or the functions provided in the MathField API. Be very careful when
updating the HTML yourself!
You can modify MathField data and compile LaTeX on your server without the admin
form if you would like. To be able to compile LaTeX serverside, you must have
node.js (v0.10+) installed and it must be on
your system path as an executable called
node. Note that this is not
necessary if you just use the admin form, as all compilation will occur in the
browser in this case.
There are two ways to pass data to a MathField: as a string, or as a dictionary
with the keys
raw and
html. If you pass a string, the html will
be rendered for you.
Let’s say you are using the
Lesson model from above, which has a
lesson_plan column that is a MathField. You can create a new instance
with:
new_lesson = Lesson(lesson_plan='One half is $\\frac{1}{2}$.') new_lesson.save()
You can also pass a dictionary that contains the raw text under the key
raw and the already rendered HTML under the key
html. This is
particularly useful if you want to generate the HTML yourself, perhaps because
you can’t install node.js on your server, or because you want to use a typesetting
library other than KaTeX.
The function
store_math provided in the mathfield API is provided for
convenience. If you don’t know the HTML, you don’t have to provide it, and it
will be generated for you. Otherwise, you can pass in the HTML and it will just
use that. For example:
import mathfield # if you already know the HTML: math_data = mathfield.store_math(raw_text, html) # if you don't: math_data = mathfield.store_math(raw_text) new_lesson = Lesson(lesson_plan=math_data) new_lesson.save()
When you look up an existing MathField, you get a dictionary with the keys
raw and
html:
lesson = Lesson.objects.get(id=0) print lesson.lesson_plan['raw'] # One half is $\frac{1}{2}$ print lesson.lesson_plan['html'] # the html for your template...
If you just want to pass in a string and get the HTML, use
render_to_html:
import mathfield html = mathfield.render_to_html('One half is $\\frac{1}. | https://pypi.org/project/django-mathfield/ | CC-MAIN-2016-50 | refinedweb | 629 | 67.76 |
A A
FRESH
agores Tagor Rabindranath T agor es Portrayal of Women as Agents of Change in Society and Culture
AR
Bharati Ray*
Indian Council for Cultural Relations
ABSTRACT
This paper proposes to explore Rabindranath Tagores vision and views on women as reflected in his writings. After a discussion of the writers family background and of his relations with women both in his family and as friends, this essay focuses on Tagores perception of the birth of the new woman, that is, a woman who challenges convention, and seeks to establish a new form of social order.
KEYWORDS
Rabindranath Tagore, women in Tagores writings, the new woman
I NTRODUCTION
A poet, a playwright, a novelist, a musician, an artist, a philosopher, Rabindranath Tagore (1861-1941) was a myriad-minded man. 1 He wrote extensively in various creative genres, took part in the freedom movement, thought about colonialism and nationalism, rural reconstruction, environment and nature, and established a university in West Bengal. Critics have written so much on so many aspects of Rabindranath that it is hard to discover an area to write on. And yet not much attention has been devoted to his thoughts on women, although he wrote extensively about them. This paper proposes to explore his vision and views on women as reflected in his writings. However, in this brief article, my focus will be limited to only to a few of them amidst the vast literature. The reason for my choice is his theories on women and his perceptions of womens future role were best exemplified in them. While Rabindranath was not comfortable with strident assertions of womens rights, and was not a feminist in the sense we use
2 0 11
- maio.-ago. - n. 2 - v. 21 - A L E T R I A
11
the term today, he showed a remarkable understanding of womans psyche, perceived the injustice of an unequal social structure, and advocated for greater freedom and decision-making power for women in the family and larger society. His writings on women can be seen as representing three facets of womens lives: i) the romance between men and women; ii) social oppression of women; iii) the birth of the new woman that is, a woman who challenges convention, and seeks to establish a new form of social order. It is the third theme only that this paper is concerned with. To comprehend fully the main theme of Tagores constantly evolving thoughts on women, we will first need to turn to his family and his time.
AMILY T HE F AMILY
Born in 1861 into an illustrious family, Rabindranath grew up at Jorasanko, in the heart of Calcutta. A few words about the Jorasanko family may not be out of place here, because the Tagores left an indelible stamp on the social and cultural history of Bengal. Indeed, few Bengalis could have claimed to have contributed as much to the cultural efflorescence of Bengal as the Tagores did. Dwarakanath Tagore, Rabindranaths grandfather, was a pioneer industrialist. His son Debendranath was a leading social and religious reformer. Almost all his sons rose to fame. The eldest, Dwijendranath, was a philosopher and a nationalist; a the second, Satyendranath, was the first Indian member of the Indian Civil service, and a champion of womens freedom from the confinement within the four walls of the home; the third son, Hemendranath, was a businessman; the fourth, Jyotirindranath, was a writer, musician, industrialist and nationalist. However, outshining all brothers, reigning over the cultural terrain of India, was Rabindranath. His nephew Abanindranath was a most celebrated painter. Thakurbari, as the Tagore family home at Jorasanko was known, became the meeting point of many a brilliant contemporary mind. The women of the Tagore family also made history. Debendranaths daughter Swarnakumari was a celebrated author; daughter-in-law Gyanadanandini, one of the most modern women of her time, innovated the modern style of wearing saree and started a childrens magazine. Another daughter-in-law, Kadambari Devi, used to ride on horseback wearing tailored clothes, was a highly talented actress participating in family plays and the source of inspiration to young Rabindranth. It was a joint family. In an Indian joint family the number of members depended on how many people a particular family accommodated. Besides the three biological generations, a household would often accommodate, in terms of the relationship to the karta, widowed aunts, grand aunts, siblings, siblings children, cousins, their children, widowed daughters-in-law of siblings or cousins, distant relations, and sometimes even friends/ people acquainted but not related by blood, needing food and lodging. Listen to Hemlata Tagore (b.1873) married to Dwipendranath Tagore, nephew of Debendranath Tagore, describe the Tagore household at Jorasanko:
12
A L E T R I A - v. 21 - n. 2 - maio.-ago. -
2 0 11
After my marriage, I entered our house at Jorasanko. It is an understatement to say that it was a huge family. In his household, there were altogether 116 people. Nobody could dream of separation.2
Saral Devi Chaudhurani, Rabindranths niece, too, tells us in her autobiography that Jorasanko was:
a place of great magnificence, every corner teeming with people, humming with endless activities. The sons and daughters of my maternal grandfather had their separate quarters where they lived with their respective families. [] A dozen Brahmin cooks were kept busy since early morning in the central kitchen cooking for the entire family and other residents. Cooked rice would be piled high almost touching the ceiling on one end of the huge kitchen. 3
In this joint household Debendranath was the head. Usually, the children of the family as well as elders participated in various celebrations and performances. It was among such extraordinarily gifted men and women that Rabindranath had fortune of growing up.
T HE T IMES
He came of age at a time when the currents of three movements had reached the shores of India: i) the religious: Rammohan Roy (1772-1833) had founded the Brahmo Samaj (1828), which emerged as a protest against the prevalent evils of Hindu society, went back to the classical Hindu society as embedded in the Upanishads and at the same time recognised modernity as brought by the West; it had a great impact on a section of bhdrolok (educated middle class) community, including Tagores family; 4 ii) the literary: a literary revolution had been pioneered, especially in Bengal, by men like Iswar Chandra Vidyasagar (1820-1891) and Bankim Chandra Chattopadhyay (1838-1894); iii) the political: a nationalist movement had started to give voice to Indian peoples discontent against British colonial rule. The poets mind and sensibilities were shaped by these influences. Rabindranath lived for eighty eventful years in colonial Bengal, and his views about women changed over time naturally, because there were fast changes in India, which inevitably left a footprint on his thoughts. When Rabindranath was born, India was smarting under British colonial rule. Soon, however, the freedom movement started and originated in Bengal in the form of an agitation against the partition of Bengal, designed by Viceroy Curzon to destroy political opposition in the province, in 1905.5
TAGORE, H. Purano Katha, p. 191. CHAUDHURANI. Amar Balyojibon, (My Childhood). In: Bharati, Baisakh, 1312 BS, CE, 1905. 4 See KOPF, David. The Brahmo Samaj and the Shaping of the Modern Mind. Princeton: Princeton University Press, 1977. 5 See SARKAR, Sumit. The Swadeshi Movement in Bengal, 1903-1905. Delhi: Peoples Publishing House, 1973.
3
2 0 11
- maio.-ago. - n. 2 - v. 21 - A L E T R I A
13
The movement, though initiated as a protest against a political move, was also motivated by the urge of the aspiring Bengali middle class to break British monopoly control over the Indian economy and to create new opportunities for their own participation in commerce and industry. This motivation explains the widespread propaganda against the use of British goods and the promotion of indigenous products (which provided the context for Tagores novel Home and the World). As the movement evolved, its leaders subtly turned the politico-economic struggle against the British into worship of the motherland, which was in its turn transformed into a mother goddess. The intellectuals who helped achieve this transformation included Rabindranath. The Swadeshi upsurge began to decline after 1908, and in 1915 Gandhi arrived from South Africa to engineer and lead the mammoth non-violent mass movement. Tagore did not agree with all of Gandhis strategies, but he was a fervent patriot and supporter of the freedom movement.6 While politics and economy were changing radically in Bengal, the field of literature could not remain static. Rabindranath dominated the literary field in Bengal, but other major writers appeared during his time, including his critics, especially in the 1920s and 1930s. Women, too, had begun to write and publish since the late nineteenth century; the majority of them upheld traditional values. Such was, very briefly, the literary environment during Tagores time
AND
W RITINGS
How do we situate Rabindranath in terms of his own life and literary career from the middle of the nineteenth century? The poet grew up, as he himself recognized, as a lonely child there was no bridge of intimacy between adults and children spending much of his time on the rooftop of his home, until the arrival of Kadambari Devi, the wife of his elder brother Jyotirindranath Tagore. He writes:
In the midst of this monotony, there played one day the flutes of festivity, a new bride came to the house, slender gold bracelets on her delicate brown hands. In the twinkling of an eye the cramping fence was broken, and a new being came into view from the magic land beyond the bounds of the familiarAnd so began a new chapter of my lonely Bedouin life.7
A new chapter had indeed opened for the poet. Kadambari was a great lover of literature and young Rabi became a partner in her literary enterprise. She herself became his chief inspiration, as the little boy, growing into manhood, was composing his first poems. Once he was done with a new poem, he would first read it out to her, and most of his early publications were dedicated to her. We do not know with certainty what the nature of their relationship was. It could have been love or perhaps simply the joy of
For a succinct account of the freedom movement, see BANDOPADHYAY, Sekhar. From Plassey to Partition. New Delhi: Orient Longman, 2004. See also BHATTACHARYA, Sabyasachi. The Mahatma and the Poet. Delhi: National Book Trust, 1997, for letters and debates between Gandhi and Tagore. Despite some differences regarding strategies, the two remained lifelong admirers of each other. 7 In: DAS GUPTA (Ed.). The Oxford India Tagore, p.25-6.
14
A L E T R I A - v. 21 - n. 2 - maio.-ago. -
2 0 11
that rare commodity, understanding and true companionship? But undoubtedly she was his inspiring angel and a strong influence on him. She committed suicide a few months after Rabindranaths marriage to Mrinalini when Rabindranath was 24. With her suicide it felt as though the earth had moved away from under my feet, Rabindranath later wrote to Amiya Chakravarty once his personal secretary and himself a poet and the light had gone out from the sky, my world felt empty. 8 There was darkness all around him. Yet Kadambaris death had, in his own words, enabled him to attain his freedom from life, as he realized that life must be seen through the window of death. Poetic talent combined with philosophical realization to take him forward. He went on writing and his career reached its peak when he was given Nobel Prize for Literature in 1913. The other woman with whom Rabindranath developed a deep friendship and attachment was Victoria Ocampo. The poet met Victoria in 1924 when he visited Argentina. She was the bideshini (foreigner) to whom the poet refers in his writings, and to whom he dedicated his book of poems, Purabi, as well as other poems and songs.9 There were obviously many a woman who was drawn to the poet, and he had a good understanding with his wife, but Kadambari remained throughout his life as his jeevaner dhruvatara, (the pole star of life) and his Muse. She appeared repeatedly through his poems, songs and stories, and most prominently in his paintings. He wrote about her long after her death, You are not before my eyes; you exist within them.10 When Tagore arrived into the world in 1861, the elements of romance found in European literature had become a pervasive theme in Bengali creative writing. Romantic notions permeated almost all Tagores early writings. While young Tagore, imbued with romanticism, looked at women as sources of inspiration and imagination ardhek manabi tumi ardhek kalpana (you are half real and half imagination), he wrote. Romantic love between a man and a woman is the basis of a few of his short stories and womens role as lovers received primacy in many of his poems. In Sonar Tori and Chitra romanticism dominates, and the beautiful woman finds her ultimate expression in the poem Urvashi: For ages you have been the worlds lover,/ Oh you, Urvashi of unparalleled beauty.11 Though this romanticism did not quite leave him altogether, he gradually learned to situate women in their real worlds, to see them as reasoning and desiring subjects who were constrained by social rules and norms. It started mainly from the 1890s. Tagore had been sent by his father to supervise the family estates in Selaidah and other places in what is today Bangladesh. (This is the Bangladesh connection and the reason why Bangladesh considers him as its poet). There the son of the aristocrat landlord came into contact with the common people, the peasants and the middle class and
In: DAS GUPTA (Ed.). The Oxford India Tagore, p. 62. For details, see DYSON, Ketaki Kushari. In Your Blossoming Flower Garden: Rabindranath Tagore and Victoria Ocampo. New Dehi: Sahitya Academy, 1988. 10 TAGORE. Rabindra Rachanavali v. 2, p. 477. All translations from the text are mine, unless otherwise mentioned. 11 TAGORE. Rabindra Rachanavali v. 1, p.511.
9
2 0 11
- maio.-ago. - n. 2 - v. 21 - A L E T R I A
15
came in direct touch with their life, their joys and sorrows, troubles and solutions.12 He also perceived womens status and problems. And apart from other things he began to give voice to womens subjectivities; the theme was developed roughly from 1891-92 onwards.
R ABINDRANATH S V IEWS
ON
W OMEN
As mentioned earlier, one cannot call Tagore a feminist in actual life he did much that feminism does not approve of and yet he showed a remarkable empathy for women, appreciated their sufferings, struggles and sacrifices, and paved the way for an ideology for the womens movement. Primarily a romantic to start with, Tagore increasingly became a candid and forceful spokesman for womens rights. This focus became the emergence of what I call the new woman, that is, a woman who challenges convention, makes decisions about her own life and even seeks to bring about changes in society, and culture. This was powerfully expressed in his poems published in Balaka (1916, A Flight of Swans) and Palataka (1918, The Escaped One). While Balaka was a recognition of the value of women, Palataka embodied the song of liberated women. What Balaka and Palataka announced in poetry, contemporary short stories, novels and plays declared in prose, while his essays laid down theoretical constructs. What were the basic tenets of the thoughts he advocated? There were three. First, he asserted that in Indian especially Hindu society, the relationship of marriage between men and women was utterly unequal. For instance, a chaste and devoted wife occupied a glorious status in Indian society. A cult of veneration surrounded her. But there was little effort to foster the concept of the sanctity of conjugal love on a husband. Overriding the instinct of affection between husband and wife, age-old cultural prescriptions were imposed on a woman. To her husband was an idea to which she surrendered. It was all one- sided, and this discriminatory practice had existed complacently in our society for ages. Men must accept the responsibility for sustaining this.13 It may not be out of context to mention here that in his novel Home and the World Tagore proclaimed that conjugal love was at its best when mingled with freedom.
Bimala was confined to my home, restricted within a small space, ruled by a series of small duties. [] I did not want to decorate my house with her, I wanted to see her against the backdrop of the world fully blossoming out with wisdom, brimming with energy, filled with the intensity of love.14
It is perhaps the most modern construction of love and wifehood articulated by a man in contemporary Bengali literature.
His letters to his niece Indira Devi beautifully describe his experiences. These letters have been published in TAGORE. Rabindra Rachanavali v.11. 13 TAGORE. Rabindra Rachanavali v. 13, p. 24. 14 TAGORE. Rabindra Rachanavali v. 9, p. 428.
12
16
A L E T R I A - v. 21 - n. 2 - maio.-ago. -
2 0 11
Second, men and women were not the same, but merely complementary to each other. But it was the woman who was the source of life, and her strength was indefinable sweetness and tenderness embedded in her heart. Without its touch, a man cannot realise his potential. In the words of Tagore, It is like a beacon, sort of a force. It is not tangible, neither is it measurable, and yet in the absence of this vital elixir human existence cannot find its ultimate fulfilment. To drive home the point, Tagore takes recourse to poetic imagery.
Plants derive their sustenance through moisture and food through their roots. We have an idea about this. But the effect of sunlight cannot be encompassed in a mathematical formula. If that light does not generate any energy, then all the struggles of the plants to survive will flounder.15
Tagore bitterly regretted the fact that men believed themselves to be the only beings that mattered, and had failed to realise that women were the greatest assets of mankind. Perhaps this was at the root of all pervasive social debility. The message was clear. Women would have to be accorded the key role, for social regeneration. Third, women were going to play that key role, because they would be the vanguard of change in the coming age. Tagore visualised: A noticeable powerful modern movement these days is the propensity of those who had long been marginalised to thrust forward and emerge. [] The day has come when women are claiming their full right as human beings.16 Aware of the worldwide womens movement that was gradually taking shape in the UK, USA and France, Tagore realised that women were coming forward to build a new society, a new culture, and for this task they were preparing themselves all over the world. In India, [i]t is not just that they have literally dispensed with the veils that had earlier hid their faces. They have now effectively banished the subliminal veil that shrouded their mind and kept them away from the outside world.17 There was, therefore, scope for hope. As women free from the man-made fetters would begin to go forward and find their own fulfilment, they would also lend men their fulfilment. What he theorised in essays, he articulated through creative writings. I will choose only four pieces to prove my point, two short stories and two plays, although references will be made to a number of others. I chose short stories because next to his poems and music, short stories are widely acknowledged as the best expressions of Tagores creativity. He started writing short stories quite late in life when he was about thirty and had already matured into a powerful writer. It was as if, with this form, he arrived in his own special field. 18 The period of writing these stories stretched from 1891 to 1941, and they captured the experiences during the last fifty years of his life. I have chosen two which portray womens challenge against tradition within the familial set up. The
15
16
TAGORE. Rabindra Rachanavali v. 13, p. 18. TAGORE. Rabindra Rachanavali v. 13, p. 28. 17 TAGORE. Rabindra Rachanavali v. 13, p. 380. 18 BISHI. Rabindranather Chhotogalpo, p. 1-2. Bishi was a reputed author and literary critic and wrote extensively on Tagore.
2 0 11
- maio.-ago. - n. 2 - v. 21 - A L E T R I A
17
choice of plays is because they were written ten years apart from each other and they dealt with womens activism in the social arena.
AND
S TRIR P ATRA
Chronologically the first of the four, written around 1900, before the Balaka-Palataka period, Shasti is considered one of the most brilliant stories in the Bengali literary canon. In the story, Chandara, an innocent village wife much in love with her husband Chhidam, is requested by Chhidam to confess that she killed her sister-in-law, who was in fact killed by her husband (Chhidams elder brother Dukhi) by accident. The idea behind this request was that the courts would be more lenient if a woman had committed the murder. Chandara is stunned by her husbands plea, but without saying a word accedes to the request. In court, before the judge, she refuses to plead not guilty or give the excuses that her husband and the lawyers had provided for her. She says instead that she had deliberately killed her sister-in-law because she disliked her.
When Chhidam appears in court the judge asks her Look at the witness and say what is he to you? Chandara covers her face with her two hands and says, My husband. Doesnt he love you? Chandara answers, OOH, very Much. Pouring as much sarcasm into the three words as she was capable of. When the judge tells her that the punishment for the crime is death penalty, Chandara cries out, I beg you, my lord, give me that... I cannot bear it anymore.
In other words, she chooses death over a return (at some point) home. On the day before she is to be hanged (a sentence handed to her for her deliberate murder), she is asked whether she wants to see anyone. She asks only to see her mother. When told that her husband was waiting to see her, she dismisses him with a single word: maran, she utters. 19 This is a term almost impossible to translate, but can be read here as a contemptuous dismissal of hypocrisy. The story, an unfettered assertion of a womans right to selfhood, was far ahead of its time and created a furore among the reading public. Pramatha Nath Bishi has observed that Shasti depicts a womans quiet courage and that Chandara is an example of a womans quiet fulfilment of duty. [...] Chandara silently sacrificed herself so that she was deprived even of public appreciation. 20 My reading of this story diverges sharply from that of this reputed Tagore-scholar. Chandara did not do her duty; it was certainly not her duty to shoulder a false charge. Nor did she sacrifice herself for the family; surely it was not her intention. Author Anita Desai is more accurate in commenting that she acted out of pride and fury.21 Chandara was outraged by her husbands request.
TAGORE. Rabindra Rachanavali v. 7, p. 188-189. BISHI. Rabindranather Chhotogalpo, p. 90. 21 DESAI. Introduction, p. 11.
20
19
18
A L E T R I A - v. 21 - n. 2 - maio.-ago. -
2 0 11
As Tagore describes it, when her husband asks her to confess to the crime, Chandara is shocked and disbelieving. The idea of husband, the ideal nourished since childhood collapses completely. She glares at her husband; her dark and fiery eyes seem to be scorching him. Her body and mind shrink in horror and every bone in her body rebels against him. The ideal of a husband is broken. The story was later made into a drama under the title of Abhimanini (The Woman with Abhiman), and ran at the popular Star Theatre for many evenings.22 Certainly there was an element of abhiman present, but I argue that Chandaras was much more than mere abhiman. It was a reaction to the bitter realisation that a mans love for his wife could never match his loyalty to his brother. It was this realisation of the patriarchal bond against which she did not matter that drove Chandara to choose the scaffold in place of him. In other words, Chandara left her husband, as Norah in Ibsens Dolls House had done, because they both realised at the time of a serious crisis that the husband they dearly loved did not reciprocate their love. For Norah, it was the revelation of the selfishness of the man she loved; for Chandara it was the revelation of betrayal by the man who had taken an oath during marriage to protect her but was now pushing her to long-time imprisonment. Chandara decided, very much like Nora, to have nothing more to do with the man. Technically, of course, she did not commit suicide, nor did she declare that she had left her husband, but for all practical purposes this is what she did. The title, Punishment, refers to both her undeserved punishment and the punishment she gave to her husband. There is one more point to be noted. Chhidam had hoped that Chandara, being a woman, would be given a light punishment. But the humiliation that Chandara would have to undergo because of this false charge did not occur to him. Tagore was at his most powerful when he wrote:
Chandara, an innocent, bubbly, fun-loving village bride was now being led as a prisoner by constables along the village pathways, past Rathtala, through the market square, skirting the post office and the school, along the edge of the river-ghat, past the house of the Majumdars. A bunch of young boys trailed her, village girls and their friends watched her, some through their veils, some from their doorsteps, others hidden behind the trees. They all drew back from her with a feeling of shame and abhorrence towards a girl who was leaving the village forever, stamped with the stigma of having committed this heinous crime.23
In Shasti, Chandara protests against her personal injury; in Strir Patra, (1914) Mrinal protests the injustice done to another woman and, therefore, the women in general. Mrinal is a housewife, endowed with beauty, brains, courage and an independent power of judgment. Her elder sister-in-law, scared of her husband and completely subservient to him, lacks the courage to welcome her own sister Bindu when she comes to her for shelter. Mrinal comes forward to give Bindu protection, but it is fleeting, for
Abhiman is a word difficult to translate into English. Perhaps sulk with hurt feelings carries the sense to some extent. 23 TAGORE. Rabindra Rachanavali v. 7, p.187.
22
2 0 11
- maio.-ago. - n. 2 - v. 21 - A L E T R I A
19
Bindu is forcibly married off to a man who is mentally deranged. Fleeing from her marital home, Bindu comes back to Mrinal, but upon being compelled to go back, ends her own life. Through Bindus life, Mrinal realizes the truth about women in society: I will not return to your No 27 Makhan Boral lane home. I have seen Bindu and realised the worth(lessness) of a womans life in our society. I want none of it any more. She leaves her home, her husband and his family, and goes alone to Puri. Having left home, facing the unfamiliar world, having abandoned the cage and breathing fresh air, Mrinal discovers herself and her true worth: Aaj baire eshe dekhi amar gaurav aar rakhbar jayega nei, Ami bachbo, ami bachlum. (Now that I am out on my own, I discover that I have my own space now and am gloriously free. I will survive. I have been saved.)24 It has been aptly commented that in terms of its content, the title of the story could have been A Letter from a Woman to a Man. In her letter Mrinal clearly spells out the reasons for her leaving home. She had seen Bindu and realised the worthlessness ascribed to a womans life by the society. She realised that discriminatory practice had existed complacently in our society for ages and that men must accept the responsibility for sustaining this. She wanted to see whether women could have a better place in the world and also whether she herself could achieve a higher purpose in life. Through her suffering and experience, and out on her own, Mrinal comes to realise that the role of a wife was but a fraction of a womans life. The ultimate search was for the development of the whole and not of the part of a human being. The mejobau (second sons wife in a joint family) in Mrinal was dead, but Mrinal was reborn as a woman and had the blissful feeling of liberty.
THE
AND
TASHER D ESH
Rakta Karabi (Red Oleandera common flower in India) was published in 1924. In its context we must remember two things, First, by that time Tagores views on women had progressed far, and he had begun to see women as the source of life and energy in human society. Second, Tagore was not against western civilisation per se, but was always a strong critic of industrialisation and capitalism which looked at the world only as a place for extorting profit and gold. The First World War, a shattering experience, had confirmed his belief. The theme of Rakta Karabi, an allegorical play, is modern industrialisation, its harmful and dehumanising effect and its remedy Here is a land, yakshapuri or the Town of accumulated wealth and greed and power. Its ruler, the King, lives in an impregnable fort, surrounding himself by a wire fence outside which he never shows his face. There is just one small window covered by a net, from behind which he issues his orders and talks, if he so wills. His one passion is to accumulate gold from the bowels of the earth. In his inordinate greed the King has become dead to all human sensibilities. He represses
24
20
A L E T R I A - v. 21 - n. 2 - maio.-ago. -
2 0 11
life and with the help of his Governor and a few bureaucrats controls his workers with inhuman rules, compelling their obedience by the twin use of force and fear. His workers have no names, but are known through numbers like 47 F, 69 T, and so on. Watched by spies, day in and day out, they dig and come out with dead weight of gold on their heads, and have lost the will as well as the way to return home. Nandini, a young girl, the very spirit of life and liberty, comes to the Town. On meeting her, the Professor of the Town, is at once startled and fascinated, and says, In this Town all our treasure is of gold, the secret treasure of the dust. But the gold which is you, beautiful one, is not of the dust, but of the light.25 Nandini is determined to break the oppressive system and free the people as well as the King. She succeeds in awakening in most meneven the King the desire for freedom. It was harvesting season. She called the King: Come out , Kingout in the open field.26 The King falls in love with her, and wants to possess her, because the only thing he knows is possession: Nandini... mystery of beauty. I want to pluck you, to grasp you within my closed fistto scrutinise you or to break you into pieces.27 But you cannot bind or enclose freedom or the spirit of joy. The King slowly realises that he is himself imprisoned within his own net. As the King comes under Nandinis spell the die-hard conservatives in his own ranks rebel against him and without his permission kill Ranjan, who Nandini is in love with and who is, like her, a symbol of freedom. The King is shocked and feels deceived. He invites Nandini to join him against the rebels. Nandini runs in advance of the King against the rebels, and is killed. The King comes out and the network is torn. In this play Tagore wanted to emphasise the power of love to change even Mammon into a human being and the power of youth and joy to humanise inhuman capitalism. Nandini, indeed, is the name for the elixir of life that Tagore discusses in his essays. She symbolises woman as a beacon, sort of a force. It is not tangible, neither is it measurable. That is why the professor calls her Sunlight and the bewildered and fascinated king King cries out, I want to see what is inside you. I want know you.
About his own interpretation of Nandini, Tagore says, Nandini, the heroine of my drama is the type of life and love and joy. [] Man is a seeker, searching without pause, be it for gold in the bowels of the earth, or be it for hidden power in the depths of his mind. Up and down he rushes endlessly [] finding never a gleam of happiness. To add power to his desperate search, he goes on building machines, each one more monstrous than the other. Then comes woman to him, bearing the gift of her love, her centripetal urge balances the centrifugal urge of the male.28
Ultimately, it is the triumph of woman power, her force of creativity and love that wins over the triangular network of power, fear and greed. Listen to the piece:
25
26
TAGORE. Rabindra Rachanavali v. 6, p. 652. TAGORE. Rabindra Rachanavali v. 6, p. 655. 27 TAGORE. Rabindra Rachanavali v. 6, p. 656. 28 ROY. Rabindranath Tagore: The Dramatist, p. 208.
2 0 11
- maio.-ago. - n. 2 - v. 21 - A L E T R I A
21
King I, who am a desert, stretch out my hand to you, a tiny blade of grass, and cry: I am parched. NandiniWhat is it that you see in me? KingThe dance rhythm of the universe.29
The underlying message is that the greed for power and wealth is never satisfied. In their futile pursuit the greedy and the powerful feel unfulfilled and internally exhausted. Ultimately they collapse, but in this collapse and in creating a new happy egalitarian social order women have a crucial creative role to play. Tasher Desh (The Kingdom of Cards) is another short allegorical play published in 1938. It is a satire on the hierarchical and rigid caste system in Indian society of the 1930s. Here a prince yearns for the unexplored and, along with a friend, a Merchant, sails to a distant, unknown land. Their boat sinks on the way and they swim ashore to an unknown land. The inhabitants, the Tash people, are bound by inexorable customs and conventions. They rise and sit, move and walk according to a rigid code of conduct. The Tash people are divided into four classes: Hearts, Diamonds, Spades and Clubs. Everything goes on with precision; obedience is the only virtue and any deviation is punishable. The watchword of the Tash people is convention. Looking at them and their rigid meaningless movements, the Prince tells them,
Prince, What you have been doing is so pointless. Six of Card replies, Pointless? What is the point of a point. Who cares for that? All we need is the rule of regulation and control. Anyway, who are you? Prince, We are from a foreign country. Five of Cards, Enough! That means you have no caste, no clan, no lineage, no kinship, no class, no status.30
The Prince and Merchant sing the song of freedom. The people complained to their king,
Six of Cards, Did you hear him, Your Majesty? This man wants to push ahead, and advance forward. You will not believe me when I tell you that he laughs. Yes, he actually laughs.. Five of Cards, Your Majesty, Just banish him. The King, Banish him? My queen, what do you think? Why are you keeping quiet? Do you agree to banishment? The Queen, No banishment, none at all. All the Ace of Cards maidens one after the other, No banishment. None at all.31
So the Queen did not co-operate. She was the first to respond to the call of change. Then her associates, the cards women of the land, joined her. The exasperated King announced,
King, I will enforce the law of compulsion. The Queen, Let us see who banishes whom.32
TAGORE. Rabindra Rachanavali v. 6, p. 657-8. TAGORE. Rabindra Rachanavali v. 6, p. 1171. 31 TAGORE. Rabindra Rachanavali v. 6, p. 1176-1177. 32 TAGORE. Rabindra Rachanavali v. 6, p. 1176-1177.
30
29
22
A L E T R I A - v. 21 - n. 2 - maio.-ago. -
2 0 11
Thus change was ushered in into the kingdom: The Hearts Woman told the chief priest, You have kept us deluded for a long spell, but no more.;33 And the Cub Woman asserted, I have given up my caste.34 When the rigid order of the island is disturbed, it is women again who come forward in responding to the call of freedom. The Queen gives the lead in changing the tash society and culture and breaks the bondage. Finally, the mechanical structure breaks apart. The island of cards is shattered. In this allegory the land is India and the society of cards is Hindu society; the four types of cards are the four major Indian castes. The unquestioned adherence to old customs and traditions has made their life and society stagnant. The Prince and the Merchant represent social reformers like Swami Vivekananda (1863-1902) and Gandhi and the card men and womens acceptance of the new way of life is the regenerated Hindu society. The poets belief in the value of liberty for the individual and his faith in women as harbingers of change provide the motive force of this play. The system imposed from above by a few interested people centuries back needed to be torn asunder because it was an obstacle to progress and liberty. Tagore was not a soothsayer, nor was he clairvoyant. The caste society is still prevalent in India, though its rigidity is no longer what it used to be, and women are yet to come out to take the lead in breaking it. But lower caste people are coming up, and lower caste women are taking up leadership in more areas than one. Capitalism and industrialism still have their strangle on India, waiting for the arrival of a Nandini. Tagore, a visionary, mapped out the ridiculous and self-destructive adherence to antiquated customs that breed inequality and kill freedom of thought as well as the dehumanising impact of industrialisation, and advocated a new social architecture: Where the mind will be without fear and the head of every individual will be held high, where small conventions will not obstruct justice.35 Tasher Desh as well as Rakta Karabi are dramatic representations of Tagores dream. What is important for us is to note that in both women performed as the vanguards of change. The new age that Tagore foresaw coming may or may not be far behind.
C ONCLUSION
In conclusion, I need to point out that the womens movement in India owed its origin to the freedom movement. I have argued elsewhere that as women joined the political agitation against colonial subjection, they became aware of another form of colonialism: at home under patriarchy. Womens organisations formed and led by women had also begun to appear in India in the early twentieth century. But their focus was
TAGORE. Rabindra Rachanavali v. 6, p. 1179.. TAGORE. Rabindra Rachanavali v. 6, p. 1184. 35 TAGORE. Rabindra Rachanavali v. 1, p. 894 .
34
33
2 0 11
- maio.-ago. - n. 2 - v. 21 - A L E T R I A
23
primarily on education and vocational training. There was no question of a movement for complete social overhaul. 36 The new woman as depicted by Rabindranath was, therefore, fairly new in Bengali society. Rabindranaths early perception of women, as mentioned previously, was born out of romanticism. During the later period of his life he discovered that women were individuals on their own right. True, Tagores views often drifted between the traditional and the modern. He often referred to women as tender and nurturing lovers, but he also saw the rebel and the leader in them. His Chandara ruptures her bond with her husband, Mrinal leaves hers, Nandini humanises the inhuman and the card women break down a stagnant social structure. It will be unhistorical to claim that he alone during his time contemplated the plight of women. Women had been the focus of discussions, writings and reforms since the mid-nineteenth century. But few writers envisioned women leaving their husbands or leading rebellions. Tagores four women were bold in terms of their protest against womens position in the family and the unequal structure in the larger society. Rabindranath created some very powerful women characters. With that he assaulted unobtrusively in his own way the established social system and notions inimical to the advancement of women. He understood the need for change and contributed to it. This was his priceless contribution to the vision and practice of the womens movement in India.
A A
RESUMO
Este trabalho investiga a escrita de Rabindranath Tagore, focalizando sua viso e opinies sobre as mulheres. Aps uma apresentao do contexto familiar do autor e de suas relaes com as mulheres, tanto na famlia quanto no ambiente cultural, este ensaio discute sua percepo do surgimento da nova mulher, isto , aquela que desafia as convenes e busca estabelecer uma nova forma de ordem social.
PALAVRAS-CHAVE
Rabindranath Tagore, mulheres na escrita de Tagore, a nova mulher
36
See my essay, The Freedom Movement and Feminist Consciousness in Bengal, 1905-1929. In: RAY, Bharati (Ed.). From the Seams of History: Essays on Indian Women. Delhi: Oxford University Press, 1995.
24
A L E T R I A - v. 21 - n. 2 - maio.-ago. -
2 0 11
W ORKS C ITED
BISHI, Pramatha Nath. Rabindranather Chhotogalpo (Short Stories By Rabindranath Tagore). Calcutta: Mitra & Ghosh, 1373 BS, CE 1966. CHAUDHURANI, Sarala Devi. Amar Balyojibon (My Childhood). In: Bharati, Baisakh, 1312 BS, CE, 1905. DAS GUPTA, Uma (Ed.). The Oxford India Tagore. Delhi: Oxford UP, 2009. DESAI, Anita. Introduction. In: (Ed.). Selected Short Stories of Rabindranath Tagore. Trans. Krishna Dutt and Mary Logan. Calcutta, Papyrus, 1991. DUTTA, Krishna; ROBINSON, Andrew. Rabindranath Tagore: The Myriad Minded Man. London: Bloomsbury, 1995.. ROY, R. N. Rabindranath Tagore: The Dramatist. Calcutta: A. Mukherjee, 1992. TAGORE, Hemlata. Purano Katha (Stories of Olden Days). In:. TAGORE, Rabindranath. Rabindra Rachanavali (Works of Rabindranath). v. 1, 2, 6, 9, 11, 13. Calcutta: Govt of West Bengal, 1961.
2 0 11
- maio.-ago. - n. 2 - v. 21 - A L E T R I A
25 | https://www.scribd.com/document/126525447/01-Bharati-Ray | CC-MAIN-2019-30 | refinedweb | 7,249 | 62.88 |
Opened 5 years ago
Closed 5 years ago
Last modified 5 years ago
#16415 closed New feature (invalid)
Template block function to strip whitespace from specific chunks
Description
Take the following example:
<div class="PostBlock {% if forloop.last %}Last{% endif %} {% if forloop.first %} Alt {% else %} {% if not forloop.counter|divisibleby:"2" %} Alt {% endif %} {% endif %} "> </div>
That would be outputted in a horrible format. Instead, it would be nice to wrap it in something like {% no_whitespace %} {% end_no_whitespace %}, which would in turn strip all whitespace from the beginning and the end.
Any thoughts?
Cal
Change History (2)
comment:1 Changed 5 years ago by russellm
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to invalid
- Status changed from new to closed
comment:2 Changed 5 years ago by foxwhisper
Omg, how the hell did I miss that, I spent a good 20 minutes looking around for a solution on djangodocs and google.
Really sorry :/
Note: See TracTickets for help on using tickets.
Is there something wrong with {% spaceless %}? | https://code.djangoproject.com/ticket/16415 | CC-MAIN-2016-36 | refinedweb | 170 | 59.33 |
Contents
- How to make RECENT CHANGES go back further in time
- MoinMoin with Fedora 14
- Invalid escape
- Farmconfig matching
- Moving several wikis to a wiki farm - (Problem)
- Standalone server - TCP Interfaces
- Invalid syntax error after editing wikiconfig.py
- Customising quick links and the available actions
- Default User Preferences
- EOL while scanning single-quoted string
- Where is moin_config.py?
- Another port for the DesktopEdition
- Troubleshooting Configuration
- Farm with Desktop Edition
- Force User Logout On Browser Close
- How to configure a wiki farm running MoinMoin via Apache with mod_python
- How to get spaces in the page_front_page
- Disable ErrorLogs in webbrowser, enable only in flat files
- Even when I edit wikiconfig.py, surge protection won't turn off
- wikiconfig.py doesn't seem to be loading
- CGI-Fehler
- Supress URLs
- Trying to enable discussion pages
- Preventing the script name from appearing when using mod_python
- navi_bar and spaces in page names
- Question: How do I remove the Quick links form from the preferences page?
- How can I redirect the wiki pages from http to https?
- Cyrillic pages names
- Macro names appearing in rendered pages
- wiki.py missing
- Configure MoinMoin to use a different server for static stuff
- Configure MoinMoin standalone to run as www-data on port 80
- URL prefix with CGI on IIS
How to make RECENT CHANGES go back further in time
I would like the RECENT CHANGES to go back more than a couple of months. Actually I would like it to go back a year. How do I configure MOINMOIN to do this? I don't see this in the documentation.
MoinMoin with Fedora 14
I've installed MoinMoin and mod_python on my Fedora 14 box. I used yum to install both packages.
moin-1.9.3-2.fc14.noarch mod_python-3.3.1-14.fc14.i686
I've configured the wiki but when I try to access the main page I get what looks like an FTP style directory listing, showing the wiki root folder and all sub-folders. Any idea as to why this occurs? Additionally, will this version of MoinMoin work with the version of python offerred by Fedora 14.
python.i686-2.7-8.fc14.1
Invalid escape
I´ve done everything as described at. When opening I get the Message "Import of configuration file "wikiconfig.py" failed because of ValueError: invalid \x escape (...)".
- Please paste your configuration.
Farmconfig matching
Trying to get farm_config working if anyone has an example I would appeciate it.
- See .../wiki/config/farmconfig.py - that is an example for a working farmconfig setup
I have been working with that farm config. I've added ("adminwiki", r"^adminwiki/.*$"), to the wikis array in farmconfig.py and I keep getting an error saying:
Could not find a match for url: "adminwiki".
Obviously, r"^adminwiki/.*$"does not match "adminwiki". Which URL did you request?
The URL I type in is, but by the time I see it working, it's. I just now got it to work by changing the match string to: r"^adminwiki$" instead.
Moving several wikis to a wiki farm - (Problem)
Suppose I have two different wikis located under a common base-folder:
- base/wiki1
- base/wiki2
Running moin.py in my "base/wiki1"-Directory works fine and gives access to my "wiki1" content. For example, requesting
will try to access "NewPage" in my base/wiki1/data/pages directory. But if I try to turn my two different wikis to a single wiki-farm by using farmconfig.py:
#farmconfig.py: wikis = [ ("wiki1", r"^myserver:8080/wiki1/.*$"), ("wiki2", r"^myserver:8080/wiki2/.*$") ]
requesting
will end up accessing "wiki1(2f)NewPage" in my base/wiki1/data/pages directory. Why?
I have dealt with a similar issue (I believe). You need to go into MoinMoin Request.init and edit the base request. MoinMoin needs PathInfo and ScriptName to look like so:
PathInfo = "/NewPage..." ScriptName = "/wiki1"
In your case instead I believe they look like this:
PathInfo = "/wiki1/NewPage..." ScriptName = "/"
so you need to put in some code to switch it around so it looks like the former.
you should not needed to edit any moinmoin intern files to setup a correct farmwiki. maybe beside the farmwiki.py you have still the wikiconfig.py file in your directory or something else missed... you may also have to check your server configration (e.g. if it's apache with mod-wsgi that your aliases are correct; for example I do for every wiki this stuff in my apache config: WSGIScriptAlias /wikixy /var/www/moinmoin/server/moin.wsgi)
Standalone server - TCP Interfaces
I am trying to make the standalone server accept connections from a limited number of IPs only. To do this I am editing the "interface" field in the moin.py file in the following way "interface = 'some.ip.is.here' ".
- Check if you are binding to an invalid IP, to an invalid port (check that one as well). Especially, look if you want to bind to a port below 1024 and if you run standalone as a root application.
Have the same problem. Trying to make the standalone server accept connections from a limited number of IPs only. To do this I am editing the "interface" field in the wikiserverconfig.py file in the following way "interface = '192.168.1.51' ". IP address is right. I get next error when trying to launch wikiserver.py: "socket.gaierror (11001, "getaddrinfo failed").
Invalid syntax error after editing wikiconfig.py
Import of configuration file "wikiconfig.py" failed because of SyntaxError: invalid syntax (wikiconfig.py, line 123).
Solution
In most cases, you broke the file by adding tab characters. MoinMoin configuration files are Python modules, and does not work with a mix of tabs and spaces. Uses only spaces when you edit moin files.
Customising quick links and the available actions
- Is there any way to totally disable the quick links at the top of the page? (front page is all I need and that I can get from the logo).
see HelpOnConfiguration, try navibar = [] (or None).
- is there any way to tailor what appears in the grey footer at the bottom? (I don't want the spellchecker which is there, and I would like the rename-page facility, which is not there).
- you can simply remove the spellcheck action from action/ directory (or make your own theme to not show it)
some actions must be enabled, e.g. allowed_actions = ['DeletePage', 'AttachFile', 'RenamePage',]
Default User Preferences
Is there a way to set the default options for the anonymous user and for all new users?
You can set some basic options, see HelpOnConfiguration (this includes the theme default, the question marks in front of the links etc.; in MoinMoin 1.2 and 1.3). A few options cannot be set in a global way, though.
EOL while scanning single-quoted string
Import of configuration file "wikiconfig.py" failed because of SyntaxError: EOL while scanning single-quoted string (wikiconfig.py, line ...).Goto line ... and check if you have a backslash at the end of the string. If so, try to avoid it. If it still does not work, paste the line here.
Where is moin_config.py?
moin_config.py was replaced in 1.3 with wikiconfig.py. Please read HelpOnInstalling.
Another port for the DesktopEdition
I can't see an option for changing the port.
Solution
Read the page DesktopEdition/HowToConfigure.
Troubleshooting Configuration
We are having issues with the configuration of MoinMoin. This is the error log. We are setting up a single wiki on the intranet. The farmconfig in moin.cgi is commented out. We have changed the 404 errors as suggested for IIS. We still cannot add NEW PAGES. We are using Moin 1.3.5 Python 2.4.2 and Windows NT. We appreciate your help.
- It looks like your configuration file is not indented correctly. Please upload it in the wiki if you cannot file the fault by yourself.
Farm with Desktop Edition
You may wonder why would anyone need to run a wikifarm on a DesktopEdition but the truth is that I would like to keep my personal wiki and my work wiki completely separate (work wiki can then be synchronized with a remote server, my personal stuff can stay my own). Can anyone give me a brief outline (provided it is possible to run this with the DesktopEdition on Windows) of how to accomplish this? I am a total newbie with python so instructions as simple as the like of "open eye, insert fork, twist, call 911" would be greatly appreciated. Ivaylo, 2006-02-07
Solution
Solution working for Windows XP SP2 / MoinMoin Desktop Edition 1.5.3
Let's suppose you want to create a farm of 2 wikis: wiki_biz and wiki_perso
Step 1: create a wiki instance for each wiki
Create a copy of C:\Program Files\moin-desktop\wiki\data with all its subfolders at
C:\Program Files\moin-desktop\wiki\data_biz
C:\Program Files\moin-desktop\wiki\data_perso
Step 2: create a config file for each wiki
Save these 2 files and farmconfig.py (step 3) under C:\Program Files\moin-desktop\
wiki_biz.py
# -*- coding: iso-8859-1 -*- from farmconfig import FarmConfig class Config(FarmConfig): sitename = 'Wiki Biz' data_dir = 'C:\Program Files\moin-desktop\wiki\data_biz'
wiki_perso.py
# -*- coding: iso-8859-1 -*- from farmconfig import FarmConfig class Config(FarmConfig): sitename = 'Wiki Perso' data_dir = 'C:\Program Files\moin-desktop\wiki\data_perso'
Step 3: create farmconfig.py
# -*- coding: iso-8859-1 -*- wikis = [ ("wiki_biz", r"^wiki_biz.*$"), ("wiki_perso", r"^wiki_perso.*$"), ] from MoinMoin.multiconfig import DefaultConfig class FarmConfig(DefaultConfig): # put here the config you want to share among all your wikis in the farm. For example: tz_offset = 9.0
Step 4: add entries in your hosts file
edit the hosts file at C:\WINDOWS\system32\drivers\etc\hosts
- replace the following line
127.0.0.1 localhost
with
127.0.0.1 localhost wiki_biz wiki_pers
Using the farm
Start the Wiki engine (moin.py)
Now, you should be able to access your 2 wikis by using the following address:
Force User Logout On Browser Close
System: Ubuntu 5.10, latest MoinMoin, Apache, and Python packages. Currently, users that log in remain logged in between browser sessions. Due to the way we will be using this wiki, I'd like to set the cookie so that it expires when the browser closes. I have no idea how to do this. Any suggestions? Thanks, Thane
- You can use a low cookie_lifetime value, so the cookie expires when there is some time no activity. This has nothing to do with whether the window is open or closed, though. Maybe one could somehow catch window close with javascript and remove the cookie?
How to configure a wiki farm running MoinMoin via Apache with mod_python
I have checked the docs at HelpOnConfiguration, HelpOnInstalling/ApacheWithModPython, and HelpOnInstalling for an answer to this. Of all those pages, the only one that begins to address this question is HelpOnInstalling, which simply states:.
This is not explicit enough for me. What particular MoinMoin file do I need to edit for Apache with mod_python, and what specifically do I need to place or alter within that file to point to my farmconfig.py file? Many thanks in advance. -- ChrisLasher 2006-11-09 01:31:23
- You have to modify sys.path to contain the directory that has your configuration files, so Python can import the config file(s) from there.
How about a concrete example? Let's say I have two wikis, WikiA and WikiB. Their files are stored in /home/moin/wikia/ and /home/moin/wikib/, respectively. Each one has its own wikiconfig.py under its directory. There is a farmconfig.py script in /home/moin/. What would one need to place in the <Location> directives of the Apache config file such that goes to WikiA and WikiB, via mod_python using farmconfig.py?
How to get spaces in the page_front_page
My site is set to convert spaces to spaces in page names. This works fine, except for the page set to the variable page_front_page in the moinconfig.py config file. So when starting to visit the site without a page name in the URL, the front page is displayed with underscores in the title, where I would expect spaces. Example: Is there a solution?
- You run an old moin version, can you try if this also happens with the latest one?
I am running debian packages moinmoin-common and python-moinmoin Version: 1.5.3-1.1, the newest version on debian etch = testing.
I fixed this problem without upgrading moin with this line in my apache config (root-wiki style):
RewriteRule ^/$ /Name_with_underscores
just above the rule for ^(.*)$ . I also wrote a note in the moinconfig.py file about this fix, otherwise I might get crazy later, trying in vain to set another front page.
Disable ErrorLogs in webbrowser, enable only in flat files
How to disable displaying of any error logs for ordinary user (or for everybody) in browser? I'd like only to have error logs written to the file (for example to /etc/moin/errors , or /var/log/moin-error.log) ?
Something similiar is planned for 1.6 -- ReimarBauer 2007-04-10 15:59:29
- OK. but is it possible to disable displaying of error informations in webbrowser now? (moinmoin ver. 1.5.x) -- Yanaek
-- Yanaek 2007-04-19
- See CHANGES of moin 1.5.8.
Even when I edit wikiconfig.py, surge protection won't turn off
I added this line
surge_action_limits = None
this only works for 1.5.5+
to the wikiconfig.py file but the surge protection still locks me out. (Desktop Windows version)
What an annoying feature! It's unacceptable to have my software lock me out for working at a fairly normal speed.
Help, anyone?
For 1.5.4 you just have to configure high limits, see the help page.
What limits should I set? I've played with the high limites and I still get locked out. This is very frustrating! Any specific suggestions?
wikiconfig.py doesn't seem to be loading
I changed the value of some of the variables in wikiconfig.py (following the installation help files). I noticed no changes in the wiki itself, in particular the title did not change. So, I made sure that the path to wikiconfig.py was in the server script. However, it wasn't clear to me what script I should actually be editing, since in the server directory there are moin.cgi and moin.py and both have lines asking for the path to wikiconfig.py. Which script should I be editing, and am I missing anything else? Otherwise my wiki seems to be working quite nicely. I'm running Mac OS X 10.4. Thanks! -- Brian Taylor
Maybe you yourself are the only one who can answer this. moin.cgi is the cgi adaptor, usually used with apache or some other web server executing moin as a standard cgi script. moin.py is the standalone server, so if you don't want to install apache or some other web server, you can use this "python builtin" web server. -- ThomasWaldmann 2006-09-14 08:29:45
- I figured out the problem. I was editing a copy of the wikiconfig.py file and not the wikiconfig.py file that was in the actual wiki. I'm new to the unix directory structure and it took me a while to figure out my mistake. Thanks for your feedback, though.
- I found that with mod_wsgi, the wikiconfig.py changes did not take effect until I restarted apache2.
- Maybe read the mod_wsgi documentation, it contains useful hints like that when touching moin.wsgi, it will restart the moin daemons.
CGI-Fehler
Die angegebene CGI-Anwendung hat keinen vollständigen Satz von HTTP-Headern zurückgegeben I'm running Windows 2003, IIS 6. The Configuration seems to be correct, but I always get the cgi-Error. Anyone an idea?
Supress URLs
Hi, can I prevent URLs entered in an article to be transformed into klickable links? I found a variable to add further protocols, but how to remove e.g. http? Thank You, -- mrdslave.
You can use preformatted text {{{}}} or the Verbatim macro [[Verbatim()]]. But most readers expect to be able to click on a URL - making URL useless seems like a bad idea.
Trying to enable discussion pages
Hi there. I've read HelpOnConfiguration/SupplementationPage and I've edited my wikiconfig.py, adding an extra line at the end which says
- supplementation_page = True
I've restarted apache2, but I don't see any Discussion pages in my wiki (single instance). I don't understand what's going wrong. I haven't changed the wikiconfig since I first set up the wiki a year or so ago. It's worked fine ever since (apache2 has been upgraded several times - I'm running Debian Etch). Sorry if I'm being really dim. (Yes, I have logged out and then logged back in - still no discussion pages.)
This is a new 1.6 feature (this wiki and the MoinMaster wiki are running 1.6 now, release 1.6.0 is expected in december).
Thank you - nice to know I'm not stupid. I look forward to the 1.6.0 release.
Preventing the script name from appearing when using mod_python
Hi, using 1.5.8.
I usually use mod_rewrite in cgi mode so that moin.cgi does not appear in the URLs. As I don't want the name to appear in the URLs generated by MoinMoin, I edit moin.cgi and leave it like this:
from MoinMoin.request import RequestCGI #request = RequestCGI() request = RequestCGI(properties={'script_name':'/'}) request.run()
and everything works fine.
Now, I got my own server and I'm trying to set it up to use mod_python... I got everything working (including mod_rewrite), but the stock moinmodpy.py will generate the URLs with it's own name in it...
I guess that there should be something I could do in this part of the code:
from MoinMoin.request import RequestModPy def handler(request): moinreq = RequestModPy(request) return moinreq.run(request)
but I don't know what.
I bearly read Python, so studying request.py is not helping me. Can someone hand me a recipe to do this?
TIA
-- MarianoAbsatz 2007-12-13 20:02:50
BUMP
Anyone can tell me if the above is indeed possible with mod_python???
I just upgraded to 1.6.3 and I'm not able to do this. What I want is to see instead of (and that the links get generated that way internally).
I've done this many times with moin.cgi, but I can't see how to do this with moinmodpy.py.
I'm using moinmoin 1.6.3 with apache 2.2.4 under ubuntu 7.10 (gutsy).
The virtual host config file looks something like this:
<VirtualHost *:80> ServerName wiki.example.com ServerAdmin [email protected] DocumentRoot /var/vhost-www/wiki.example.com/ <Directory /var/vhost-www/wiki.example.com/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all #DirectoryIndex moin.cgi index.html # start the rewrite engine RewriteEngine On RewriteBase / RewriteRule ^moin_static.../ - [last] RewriteRule ^(/.*/)?moinmodpy.py - [last] RewriteRule ^(.*)$ /var/vhost-www/wiki.example.com/moinmodpy.py$1 [type=application/x-httpd-cgi] </Directory> ErrorLog /var/log/apache2/vhost/wiki.example.com/error.log RewriteLog /var/log/apache2/vhost/wiki.example.com/rewrite.log RewriteLogLevel 2 # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/vhost/wiki.example.com/access.log combined ServerSignature On ## WIKI #ScriptAlias /moin.cgi /var/vhost-www/wiki.example.com/moin.cgi <Location /> SetHandler python-program # Add the path of your wiki directory PythonPath "['/var/vhost-www/wiki.example.com', '/usr/local/etc/moin'] + sys.path" PythonHandler moinmodpy PythonDebug On </Location> <LocationMatch "/moin_static..."> SetHandler None </LocationMatch> </VirtualHost>
in /var/vhost-www/wiki.example.com I have a copy of /usr/local/share/moin/htdocs named moin_static163 and a modified copy of moinmodpy.py as follows:
# -*- coding: iso-8859-1 -*- """ MoinMoin - mod_python wrapper for broken mod_python versions add a .htaccess to the path below which you want to have your wiki instance: <Files wiki> SetHandler python-program PythonPath "['/path/to/moin/share/moin/cgi-bin'] + sys.path" PythonHandler moinmodpy </Files> Note: this is a wrapper needed because of a bug in mod_python < 3.1.3 mod_python.apache.resolve_object fails to parse a object with dots. If you have a newer version, take a look at moinmodpy.htaccess to see how to use MoinMoin without this wrapper. You can also look into INSTALL.html to see how you can fix the bug on your own (a simple one line change). TODO: this should be refactored so it uses MoinMoin.server package (see how server_twisted, server_wsgi and server_standalone use it) @copyright: 2004-2005 by Oliver Graf <[email protected]> @license: GNU GPL, see COPYING for details. """ # System path configuration import sys # Path of the directory where wikiconfig.py is located. # YOU NEED TO CHANGE THIS TO MATCH YOUR SETUP. #sys.path.insert(0, '/path/to/wikiconfig') sys.path.insert(0, '/usr/local/lib/python2.5/site-packages/') # Path to MoinMoin package, needed if you installed with --prefix=PREFIX # or if you did not use setup.py. ## sys.path.insert(0, 'PREFIX/lib/python2.3/site-packages') # Path of the directory where farmconfig is located (if different). ## sys.path.insert(0, '/path/to/farmconfig') sys.path.insert(0, '/usr/local/etc/moin') # Debug mode - show detailed error reports ## import os ## os.environ['MOIN_DEBUG'] = '1' # Simple way #from MoinMoin.server.server_modpython import modpythonHandler as handler # Complex way from MoinMoin.server.server_modpython import ModpythonConfig, modpythonHandler class MyConfig(ModpythonConfig): """ Set up local server-specific stuff here """ # Make sure moin will have permission to write to this file! # Otherwise it will cause a server error. logPath = "/var/log/moin/moinlog" # Properties # Allow overriding any request property by the value defined in # this dict e.g properties = {'script_name': '/mywiki'}. ## properties = {} #properties = {'script_name': '/'} def handler(request): return modpythonHandler(request, MyConfig)
/usr/local/etc/moin holds my farmconfig.py.
-- MarianoAbsatz 2008-05-03 15:35:09
SOLVED
Well... it turned out that the mod_rewrite loop was originated by having a <Directory> section within the <VirtualHost> and having the RewriteRule's there... I'll try to elaborate a bit by updating HelpOnInstalling/ApacheWithModPython with some info... but now I'm going to sleep...
-- MarianoAbsatz 2008-05-04 11:10:38
navi_bar and spaces in page names
I have migrated to 1.6 and stumbled upon following: some of the pages have spaces in their names (all well so far, they were renamed from underscores when migrating), but I cannot find out how to write their names into navi_bar. The previous navi_bar = [u'[Hlavná_stránka Hlavná stránka]'] does not work, since Hlavná_stránka is not the same as Hlavná stránka. Replacing the underscore with a space does not work, obviously. Neither does using &x20; or the (semi-expected) pipe [Hlavná_stránka|Hlavná stránka]. Is there any way of escaping the space? I have resorted to renaming the page to Hlavná_stránka (with an underscore), but clearly that is not an universal solution. -- 147.213.138.3 2008-04-30 11:21:49
For navibar, 1.6 still uses ["Free Link"] syntax. Sorry about this inconsistency, but the respective changeset was forgotten when backporting from 1.7. Starting with 1.7.0, it will consistently use the new link markup. -- ThomasWaldmann 2008-04-30 12:37:04
Question: How do I remove the Quick links form from the preferences page?
How do I remove the Quick links form from the page xxx?action=userprefs&sub=prefs completely? I have tried: user_form_remove = ['quick_links'] in wikiconfig.py but that does not work for me.
Thanks. -- JohanZiprus 2012-05-26 20:16:22
How can I redirect the wiki pages from http to https?
I have installed a wiki (1.7.1) under linux (Apache with modpython). This wiki is configured as root wiki and working fine. I would like redirect all wiki pages from http to https. Who does know the recipe for it?
Thanks for your help!
Cyrillic pages names
I have several pages named in cyrillic. The problem is that folders in wich the pages are stored on disk are named in unreadable way: for example page 'поезд' (poezd - train in Russian) is stored in "(d0bfd0bed0b5d0b7d0b4)" folder - it seems like a raw unicode output. This is very inconvenient - for example it's hard to access that folder from my macro. Can I make pages with cyrillic names be stored in folders named as the pages themselves? (поезд - поезд, not поезд - (d0bfd0bed0b5d0b7d0b4))
Currently (as of moin 1.7/1.8/1.9 there is no easy way to do this). We did choose this encoding because it works everywhere (no matter what your FS encoding is, no matter whether linux or windows). We are working on improved backend storage for moin >= 2.0. -- ThomasWaldmann 2008-10-17 02:40:26
Macro names appearing in rendered pages
If accessing a wiki page that doesn't exist, on the new page template we see "Action(edit,Create new empty page)" instead of a link to create a new page. On the RecentChanges page, we see RandomQuote() and Icon(diffrc) rather than a quote and an icon. I believe this behavior started around the time that I updated MoinMoin to 1.7.2. Any ideas how to troubleshoot this problem?
You likely forgot to update your underlay directory with the new one we provide in the distribution archive as wiki/underlay/.
wiki.py missing
I try to follow this guide to enable MathMLSupport (MoinMoin 1.8, DesktopEdition):.
I'm told to "edit the file parser/wiki.py under MoinMoin directory"
- This file does not exist at all. What to do?
Those instructions are obviously for an old moin version. In recent moins, the wiki parser is in MoinMoin/parser/text_moin_wiki.py.
Configure MoinMoin to use a different server for static stuff
I see that when I load a page from moinmo.in, the CSS, Javascript, and images actually come from static.moinmo.in. How do you do that?
Configure MoinMoin standalone to run as www-data on port 80
I'm trying to setup monimoin on an ubuntu box using the built-in server, but running on port 80 as user www-data. I haven't been able to find any examples of these, if you have any please point me in the right direction.
Here's what I've done so far:
- Using the 1.9.3 tarball, I've installed it in /home/wiki like this:
- /home/wiki/
data/
underlay/
config/
- In the config directory I've placed wikiconfig.py, wikiserverconfig.py and wikiserverlogging.conf. I modified the sys.path in wikiserver.py like:
sys.path.insert(0, '/home/wiki/config')
sys.path.insert(0, '/home/wiki/lib/python2.7/site-packages')
..
log.load_config('config/wikiserverlogging.conf')
I've installed all the code from the LanguageSetup page and the wiki appears to be running fine when run as me.
- I then added more command-line configuration to wikiserver.py:
sys.argv = ["moin.py", "server", "standalone", "--user=www-data", "--group=www-data", "--pidfile=/home/wiki/moinmoin.pid", "--start" ]
The server works correctly on port 8080 when run as www-data.
- Next I edited wikiserverconfig.py, setting port=80, user='www-data', group='www-data', interface='192.168.1.2' (my ip address)
- Then I added "--port=80" to sys.argv in wikiserver.py and ran the server as "sudo ./wikiserver.py" and (removing "--start") I get:
mecklen@pippin:/home/wiki$ sudo ./wikiserver.py Traceback (most recent call last): File "./wikiserver.py", line 47, in <module> MoinScript().run() File "/home/wiki/lib/python2.7/site-packages/MoinMoin/script/__init__.py", line 138, in run self.mainloop() File "/home/wiki/lib/python2.7/site-packages/MoinMoin/script/__init__.py", line 261, in mainloop plugin_class(args[2:], self.options).run() # all starts again there File "/home/wiki/lib/python2.7/site-packages/MoinMoin/script/__init__.py", line 138, in run self.mainloop() File "/home/wiki/lib/python2.7/site-packages/MoinMoin/script/server/standalone.py", line 143, in mainloop run_server(**kwargs) File "/home/wiki/lib/python2.7/site-packages/MoinMoin/web/serving.py", line 159, in run_server **kw) File "/home/wiki/lib/python2.7/site-packages/MoinMoin/support/werkzeug/serving.py", line 392, in run_simple inner() File "/home/wiki/lib/python2.7/site-packages/MoinMoin/support/werkzeug/serving.py", line 378, in inner passthrough_errors).serve_forever() File "/home/wiki/lib/python2.7/site-packages/MoinMoin/support/werkzeug/serving.py", line 251, in make_server passthrough_errors) File "/home/wiki/lib/python2.7/site-packages/MoinMoin/support/werkzeug/serving.py", line 207,) socket.error: [Errno 13] Permission denied
It appears that moinmoin is attempting to open port 80 as www-data instead of root. I would have expected it to open port 80 before settting its effective uid/gid.
How do I run the built-in http server on port 80?
Solution
Here is an evil hack that seems to have worked (although it cannot be "checked in" with the source).
- In site-packages/MoinMoin/web/serving.py comment out the switch_user call:
def run_server(hostname='localhost', port=8080, docs=True, debug='off', user=None, group=None, threaded=True, **kw): """ Run a standalone server on specified host/port. """ application = make_application(shared=docs) if port < 1024: if os.name == 'posix' and os.getuid() != 0: raise RuntimeError('Must run as root to serve port number under 1024. ' 'Run as root or change port setting.') # if user: # switch_user(user, group)
- In site-packages/MoinMoin/support/werkzeug/serving.py add the os.setgid and os.setuid function calls:
class BaseWSGIServer(HTTPServer): multithread = False multiprocess = False def __init__(self, host, port, app, handler=None, passthrough_errors=False): if handler is None: handler = BaseRequestHandler HTTPServer.__init__(self, (host, int(port)), handler) self.app = app self.passthrough_errors = passthrough_errors os.setgid(33) os.setuid(33)
The effect of this is to delay the switching the user to www-data until after the HTTPServer code has started listening on port 80. The reason it is unsuitable for checking in is that there is no good way to pass the desired user/group down from run_server through run_simple -> inner -> make_server -> BaseWSGIServer. Also, of course, this may simply be the Wrong Way To Do It.
Please read wikiserverconfig.py -- ReimarBauer 2011-09-29 12:34:08
Thank you for responding. Your answer is a little vague. Of course, I have read those files. In fact, there are two in the 1.9.3 release and they have different contents. The moin-1.9.3/wiki/server/wikiserverconfig.py file describes a variable "interface = 'localhost'", which (as far as I can tell) does nothing, while the moin-1.9.3/wikiserverconfig.py file uses the correct value "hostname = 'localhost'".
As to reading them to solve my problem, I was not able to find any information in these very short files which directly relates. There is, of course, the comment "if you use port < 1024, you need to start as root", which I tried as I described above. There is also the comment, "if you start the server as root, the standalone server can change to this user and group, e.g. 'www-data'". When running moinmoin as root I got the backtrace I described and debugged. The bug is that moinmoin invokes switch_user too soon and the socket open fails. Did you read my post?
Or perhaps you meant the comment beginning "DEVELOPERS! Do not add...", which doesn't relate at all, unless I could use the custom Config class to change how switch_user works. I think so.
So, perhaps you could be more specific in your advice. Can you answer two questions:
- Should running standalone as www-data on port 80 work when wikiserver.py is run as root, i.e., "sudo python wikiserver.py"?
- Does this work for you or anyone you know?
Thanks,
Robert Mecklenburg 2011-10-02 02:24:00
This could be a regression from 1.8 to 1.9 - there have been a few in the general HTTP-related code - because the 1.8 code looks like this (in MoinMoin.server.server_standalone):
httpd = makeServer(config) # Run as a safe user (posix only) if os.name == 'posix' and os.getuid() == 0: switchUID(config.uid, config.gid) httpd.serve_forever()
It's quite possible that the underlying SocketServer code in 1.8 binds to a port when the server is instantiated, whereas the WSGI stuff defers that to a later point in time, when MoinMoin.support.werkzeug.serving.run_simple is called. That means that the above pattern ("initialise server, switch user, start server") doesn't work because the code is more like "initialise application, switch user, initialise and start server". So yes, the last part has to be punctuated with a "switch user" somehow. -- PaulBoddie 2011-10-02 15:07:40
URL prefix with CGI on IIS
Is it possible to hide the url prefix like "/mywiki/moin.cgi/MyPage" when MoinMoin is installed as a IIS CGI ? I have to mention that I had to define something like
url_mappings = {'/mywiki/moin.cgi/mywiki':'/mywiki'}
Is it the right thing to do (the Howto may be uncomplete ?) ? | http://www.moinmo.in/MoinMoinQuestions/ConfigFiles | crawl-003 | refinedweb | 5,532 | 60.11 |
IntroductionIn this post we will see how to fix the exceptions like this which hapens even after adding proper dll(s). Here in this post we will see how to fix exception, "The type or namespace name Data does not exist in the namespace Microsoft.Practices.EnterpriseLibrary (are you missing an assembly reference?)"
BackgroundI was working on a project when I encountered one exception which was The type or namespace name 'Data' does not exist in the namespace 'Microsoft.Practices.EnterpriseLibrary' (are you missing an assembly reference?)
I added the proper dll reference but still I was getting this compilation error. I was wondering what is going wrong. I tried to browse the class inside this dll using object browser. I was able to find classes inside this dll. I tried to create an object of the class which was inside the dll; I was able to create the object while I was writing the code. When i build it, it again failed.
1. Added Proper dll reference but compilation fails.
2. Exception the type or namespace name 'Data' does not exist in the namespace even after adding proper dll references.
Using the code
How did I fix this exceptionFinally when I check the project configuration, I was running .Net Framework 4.0 Client Profile. I made it to .Net Framework 4.0 and everything worked perfectly.
The .NET Framework 4 Client Profile is a subset of the .NET Framework 4 which is optimized for client applications. It provides functionality for most client applications, including Windows Presentation Foundation (WPF), Windows Forms, Windows Communication Foundation (WCF), and ClickOnce features. This enables faster deployment and a smaller install package for applications that target the .NET Framework 4 Client Profile. If you are targeting the .NET Framework 4 Client Profile, you cannot reference an assembly that is not in the .NET Framework 4 Client Profile. Instead you must target the .NET Framework 4. This is what happening in my case. :)
To change the Target Framework, Go to your project in the solution and right click on the project and select properties. In the window that comes up, select the Application tab. There you can see the Target Framework section. Select the .Net Framework 4 instead of .Net Framework Client Profile. :)
Had the same problem, solved the same way :-D
TY
Thanks, Its solved my problem | http://gonetdotnet.blogspot.com/2015/06/solved-type-or-namespace-name-data-does.html | CC-MAIN-2018-43 | refinedweb | 392 | 69.07 |
Glad to see all of you, here again, Welcome again folks, in this module we are going to talk about the Data Types in C Programming, till now we have completed a wonderful journey in this series and also are ready to go ahead.
Let’s start this module and look into details about the same.
Data Types in C Programming
We have talked about variables and constant in the last module and there we were using data type also while declaring a variable and constant, if you haven’t seen the last module yet, then I Will recommend you to first go through the last module and then come into this module.
Data types in C Programming talks about the type and nature of the value that a particular variable is going to hold. i.e., suppose you buy water from a store, now the water bottle is the container i.e., the variable and the brand of water or the liquidity nature of the water is the data type here. In simple we should know what is nature and type of things are being stored in the container or variable in programming.
Data types in C
Data types in C define the specific type of data that a particular variable can hold, for example, an integer variable can hold integer data, a character type variable can hold character data, a floating-point variable can hold the decimal data, etc.
Data Types in C Programming are categorized into mainly 3 groups, i.e., User-defined, Built-in / Primary, and derived data types.
Primary data types include Integer, Character, Boolean, Floating-Point i.e., decimal value, Double Floating Point i.e., large values, void i.e., null value, and wide character.
Derived data types include Function, Array, Pointer, and references.
User-defined data types include class, structure, union, enum, typedef.
In this particular module, we will talk about primary data types in detail and the rest we will cover in the upcoming modules.
Primary Data Types
Primary data types are also called built-in data types, i.e., they are already built and are stored in the C library. Let’s see the function of different built-in data types available:
int data type
It is for integers it takes up to 2 or 4 bytes of memory from the compiler (1 byte = 8 bits) and can only store the integer values like 10, 20, and so on.
Example, int num = 20 ;
char data types
It is for storing a character and takes up to 1 byte of memory from the compiler.
For example, char ch = ‘A’;
Here, in this example, char is the data type and ch is the variable name, which has stored the character A in it. One thing to be noted in this case is that while assigning a character to any variable, it should be in single quotation marks as given in the above example, else it will throw an error.
bool data types
It is for Booleans True or False (0 or 1). It is used for checking various conditions required in the program, will see about this in the later modules.
For example, bool b = true;
float data types
it is used for storing a floating-point number, i.e., a decimal number and takes up to 4 bytes of memory from the compiler.
Example, float num = 12.23 ;
Here, in this example, num is the variable name that is assigned with float data type and is holding a 12.23 value in it.
double data types
It is used to store a large value decimal number and it takes up to 8 bytes of memory.
Example, double num = 10078.10099 ;
Here, if you have to store a large number then you have to go for the double data type.
Let’s see one sample problem demonstrating the data types.
#include <stdio.h> int main( ) { int n1 = 20; float n2 = 45.3; char chr = ‘A’; // Displaying the value of variable n1 printf (“ n1: %d“, n1); // Displaying the value of variable n2 printf (“ n2: %f”, n2); // Displaying the value of variable chr printf (“chr: %c“, chr); return 0; }
The output of the above Program:
Here, in the above program, we have declared 3 types of data types int, float, and char with the n1, n2, and chr variables respectively and thus printing the same with printf() function. Different data types use different format specifiers, we have discussed format specifiers in the last module refer to that if we haven’t.
User-defined Data types
There were some data types that were not defined in the C library, but there was a need for that, so in this, we the user can define our own data type, which is also known as secondary or non -primitive data type. Users can define this as per their requirements. There are many types of user-defined data types such as arrays, pointers, structures, unions, functions, etc. We will talk about this in more detail in the upcoming modules. so stay tuned.
In this module, we have discussed one of the most important concepts of the C programming series i.e., data types. Hope you all are enjoying the programming are excited about the upcoming modules.
Until then, Stay connected, keep learning, Happy coding! | https://usemynotes.com/data-types-in-c-programming/ | CC-MAIN-2021-43 | refinedweb | 885 | 69.11 |
Imagine you have a component with a
render() method like this:
render() { return ( <div> <A/> <B/> </div> ); }
If
<A/> has an
<input> inside of it, and you want to capture any changes to
that input, how would you do it? Turns out it's really easy. The
render()
method that does it looks like this:
render() { return ( <div onChange={this.handleChange.bind(this)}> <A/> <B text={this.state.inputValue}/> </div> ); }
There are a few things missing from this component, of course. But just take a
look at the
render() method. The
<div> has an
onChange handler. This will
capture the
onChange event not just for that
<div> but for all of its
children.
Why do we put
onChange on the
<div> and not on
<A/>? Putting
onChange on
a component won't do anything unless you handle it explicitly. Putting
onChange on an HTML element will listen for that standard DOM event.
Interested in events other than
onChange? Read the React Event System docs.
The example is running here. Type anything into the
<input> (component A) and
see that it shows up in the space below (component B).
Here's the code, in 3 small files:
import React from "react"; // Component A has an <input> and that is all. No need to add an event handler, // the events will 'bubble up'. class A extends React.Component { render() { return <input placeholder="Type Something..." />; } } export default A;
import React from "react"; import PropTypes from "prop-types"; // Component B displays the value it is given. class B extends React.Component { render() { return <p>{this.props.text}</p>; } } B.propTypes = { text: PropTypes.string }; B.defaultProps = { text: null }; export default B;
import React from "react"; import A from "./A"; import B from "./B"; // The Main component listens for input changes in all of its children and // passes the input value to 'B'. class Main extends React.Component { constructor(props) { super(props); this.state = { inputValue: "" }; } handleChange = event => { this.setState({ inputValue: event.target.value }); }; render() { return ( <div onChange={this.handleChange}> <A /> <B text={this.state.inputValue} /> </div> ); } } export default Main;
Hopefully that helped you understand passing events between components in React. If not, let me know! | https://www.javascriptstuff.com/how-to-pass-events-between-components/ | CC-MAIN-2018-51 | refinedweb | 359 | 70.19 |
.toolkits.pointer;21 import soot.*;22 23 /** A very naive pointer analysis that just reports that any points can point24 * to any object. */25 public class DumbPointerAnalysis implements PointsToAnalysis {26 public DumbPointerAnalysis( Singletons.Global g ) {}27 public static DumbPointerAnalysis v() { return G.v().soot_jimple_toolkits_pointer_DumbPointerAnalysis(); }28 29 /** Returns the set of objects pointed to by variable l. */30 public PointsToSet reachingObjects( Local l ) {31 Type t = l.getType();32 if( t instanceof RefType ) return FullObjectSet.v((RefType) t);33 return FullObjectSet.v();34 }35 36 /** Returns the set of objects pointed to by variable l in context c. */37 public PointsToSet reachingObjects( Context c, Local l ) {38 return reachingObjects(null, l);39 }40 41 /** Returns the set of objects pointed to by static field f. */42 public PointsToSet reachingObjects( SootField f ) {43 Type t = f.getType();44 if( t instanceof RefType ) return FullObjectSet.v((RefType) t);45 return FullObjectSet.v();46 }47 48 /** Returns the set of objects pointed to by instance field f49 * of the objects in the PointsToSet s. */50 public PointsToSet reachingObjects( PointsToSet s, SootField f ) {51 return reachingObjects(f);52 }53 54 /** Returns the set of objects pointed to by instance field f55 * of the objects pointed to by l. */56 public PointsToSet reachingObjects( Local l, SootField f ) {57 return reachingObjects(f);58 }59 60 /** Returns the set of objects pointed to by instance field f61 * of the objects pointed to by l in context c. */62 public PointsToSet reachingObjects( Context c, Local l, SootField f ) {63 return reachingObjects(f);64 }65 66 /** Returns the set of objects pointed to by elements of the arrays67 * in the PointsToSet s. */68 public PointsToSet reachingObjectsOfArrayElement( PointsToSet s ) {69 return FullObjectSet.v();70 }71 }72 73
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/soot/jimple/toolkits/pointer/DumbPointerAnalysis.java.htm | CC-MAIN-2017-04 | refinedweb | 301 | 57.67 |
I am confused...XX.....
But how can i add element into this???
like above??
..... sorry
This is a discussion on MAZE Problem within the C++ Programming forums, part of the General Programming Boards category; I am confused...XX..... But how can i add element into this??? like above?? ..... sorry...
I am confused...XX.....
But how can i add element into this???
like above??
..... sorry
i think i figure out now hahah
thx thx
Then you need the first array; copy and paste the whole thing in fromThen you need the first array; copy and paste the whole thing in fromCode:{ {{0}},
. And there you are.. And there you are.Code:{{ lots of numbers up to}} }
these should make sense?these should make sense?Code:int matrix[MATRIX_SIZE][MATRIX,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0}, {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0}, {0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1];
What i need to write in the int main() ?
i think i need to allow the user input and read the data from the array and output.
I am trying to produce the code, but it comes more than 15 errors.
Wt else do u need to modify????
-----------THX---THX-------------
I have put these into the int main() What else is missing? Or i need to define other function??I have put these into the int main() What else is missing? Or i need to define other function??Code:int main() { cout <<"Ender Maze wanted, 0,1,2,3:\n"; cout <<" 0- Random Maze\n"; cout <<" 1- Set Maze (1)\n"; cout <<" 2- Exit\n"; cin >> x; return 0;
plz help
cheers
OIC
This is what i though coz they dun read the input and get the output.
so i need to create a function for the data to process!?
So i need specific 0 is to choose the map [0]
and 1 i choose map[1]??
for(int maze=0; maze<3 ; maze++)
cout<<matrix_size[maze]<<"------"<<matrix_data[n]<<endl;
these should put inside the int main() or void()??
I am really sorry for all these......
You can do it inside main directly, or put it another function, as you wish; but if you put it in another function you do have to call that function from main, or else it won't happen. Notice that there is no function called void().
Sorry
So i think i will put them inside the main...
I have seen the other tread you reply
using printf and scanf
are they more useful for me?? (I know it sounds weired)
I will use that method to see does it get me anywhere.Thanks
I am sorry for all these question, but i really trying to get them work, I have not stopping on these since you repled me this morning ahahaha.
---------Thanks God------------
I have done the following changes in the int main()
still 3 errors, any 1 can point out wt's going on????still 3 errors, any 1 can point out wt's going on????Code:int main() using namespace std; { int maze; cout <<"Enter Maze wanted, 0,1,2,3:\n"; cout <<" 0- Random Maze\n"; cout <<" 1- Set Maze (1)\n"; cout <<" 2- Exit\n"; cin >> maze; { for (maze =0; maze < matrix_data[n]; maze++) cout<<matrix_data[n] <<endl; }
Thanks Thanks
The three errors tell you exactly what's going on:
unclosed brace on line "{" (right after cin >> maze)
variable n not defined
unable to compare int (maze) with int[][] (matrix_data[n]) (although your error might say something about conversions from int[][] to int)
Not a compile error, probably, but cout << matrix_data[n] will probably print out a hex memory address, since matrix_data[n] decays into a pointer.
Still getting syntax error
Code:int main() using namespace std; { int maze[][], n; cout <<" Enter Maze wanted, 0,1,2:\n"; cout <<" 0- Random Maze\n"; cout <<" 1- Set Maze (1)\n"; cout <<" 2- Exit\n"; cin >> maze[][]; matrix_data[n] =maze[][]; { for (maze[][] =0; maze[][] < matrix_data[n]; maze[][]++) cout<<matrix_data[n] <<endl; printf("%d\n", matrix_data[n]); return 0; } }
i am also having error on this line....,
also wt this mean?
"unable to compare int (maze) with int[][] (matrix_data[n]) "
This is the end of it....This is the end of it....Code:. . . . ]; void output_program(struct prog *progp) { char *codep = progp->code; int token; while (token = *codep++) fputs(token_table[token].name, stdout); putchar('\n'); } void print_matrix(void) { int i, j; char symbs[] = " OX"; for (i=0; i < MATRIX_SIZE; i++) { for (j=0; j < MATRIX_SIZE; j++) putchar(symbs[trace_matrix[i][j]]); putchar('\n'); } }
So aboe must be wring...So aboe must be wring...Code:for (maze[][] =0; maze[][] << matrix_data[n]; maze[][]++)
If i can't use integer to a matrix,,,, so how can i retrieve the data?
Thanks Thanks
Notice in your initializer that you've embedded the semicolon in the thing, rather than being at the end like it should.
This is so amazingly wrong that it makes me give up hope.This is so amazingly wrong that it makes me give up hope.Code:for (maze[][] =0; maze[][] << matrix_data[n]; maze[][]++)
matrix_data is a 3-dimensional array.
matrix_data[0] is a 2-dimensional array (namely, a map).
matrix_data[0][1] is a 1-dimensional array (specifically, the second row of the map).
matrix_data[0][1][2] is an integer (the third number in the second row of the map).
You can only deal with specific elements of the array. | http://cboard.cprogramming.com/cplusplus-programming/100918-maze-problem-4.html | CC-MAIN-2015-40 | refinedweb | 953 | 72.05 |
William is chief architect and director of QA at Hipbone Inc. He is also the author of Java RMI (O'Reilly & Associates, 2001). He can be contacted at [email protected].
Aspect-oriented programming extends the object-oriented paradigm by enabling you to write more maintainable code using units of software modularity called "aspects." Aspects encapsulate elements such as performance optimization, synchronization, error checking/handling, monitoring/logging, and debugging, which cut across traditional module or class boundaries.
Aspects are separated ("modularized") from the classes and methods that make up components at design time, then compilers or interpreters create extended classes that combine aspect functionality with application components.
There are a number of tools and languages that support the concept of aspect-oriented programming (AOP). TransWarp ( pje/TransWarp/) and Pythius (http:// pythius.sourceforge.net/), for instance, provide Python-based AOP support, while AspectC++ ( .org/) and AspectC ( labs/spl/projects/ aspectc.html) support C++ and C, respectively. Java Aspect Components or JAC ( .aopsys.com/) is a Java AOP framework, while HyperJ ( HyperJ/HyperJ.htm) provides Java extensions for AOP. In this article, I will focus on AspectJ (http:// aspectj.org/). For a more complete list of AOP tools and languages, see http:// aosd.net/tools.html.
Developed at Xerox PARC, AspectJ is a freely available open-source aspect-oriented extension to Java for the modularization of Java resource sharing, error checking, design patterns, and distribution. As Figure 1 illustrates, AspectJ makes modularization possible by taking code that appears in multiple places in a codebase and putting it in a single place, along with instructions on how to insert it into all the correct places.
The most common example of code that's amenable to this sort of "pulling out" is logging code, which:
- Appears in many places in the codebase and isn't intrinsically a part of any class.
- Is conceptually distinct from the classes it occurs in.
- Isn't an intrinsic part of the program's functionality.
- Gets frequently inserted and removed during development.
- Is often removed from the final shipping codebase.
The first two properties are important, and code that exhibits them is referred to as "crosscutting code." The goal of AOP is to pull out the crosscutting code and put all the related pieces of code together in a single place, along with instructions on how to insert the crosscutting code back into the program (either statically at compile time or dynamically at run time). The last three properties are mostly important for tutorials they tell you that everyone has a good idea of how to pull out logging code (and how to evaluate the resulting program).
AspectJ Overview
AspectJ is both a language for doing aspect-oriented programming and a set of tools. Java programs are valid AspectJ programs and can access all the standard Java libraries. The AspectJ toolset includes a compiler, debugger, ant task (for automating builds), and plug-ins for several Java IDEs (such as JBuilder).
The AspectJ language centers around three main concepts:
- Join points, which are places in a codebase where something happens. For example, there are join points for method calls, for catching exceptions, and for changing the value of a member variable. Join points are mostly an abstraction; when using AspectJ, you use pointcuts.
- Pointcuts, which are named collections of join points that have something in common. AspectJ contains a complete language for describing pointcuts. Understanding how to write pointcuts, and when a particular pointcut applies, is the hardest part of using AspectJ.
- Advice, which is Java code that has been attached to a pointcut. For example, you use pointcuts in AspectJ to specify a set of join points in a program where you want something to happen. You then attach advice to your pointcuts to say what should happen. Advice is almost always fairly simple code. The important thing about advice is not that it's particularly tricky or complex, but that the advice is executed when a particular pointcut applies.
call Pointcuts and execution Pointcuts
The two most commonly used pointcuts are the call pointcut and the execution pointcut. The basic syntax for these two pointcuts is:
pointcut name(): call ([method signature])
pointcut name(): execution([method signature])
where name is the name of the pointcut (and must be unique) and [method signature] is an expression that captures a set of methods. AspectJ has a whole language based on string matching for specifying signatures. For example, * com.wgrosso..*.*(..) matches every method on every class that's in a package whose name begins with com.wgrosso. Listing One presents some signatures along with their explanation.
As Figure 2 shows, you can think of a method call as being on the boundary of two instances. (When a method is invoked, a thread exits the scope of one instance and enters the scope of another one. In doing so, it has crossed through a join point.)
From the point of view of AspectJ, a stack trace (like that in Figure 3) is nothing but a list of join points that the current thread has traversed and will traverse again as the stack unwinds. (While it's convenient to think of call and execution pointcuts as being comprised of pointcuts that are on the boundaries of distinct instances, they're also involved when one method on an instance calls another method on the same instance. It's the method call that matters, not the number of instances involved.)
A call pointcut specifies a set of join points right before (or after) the methods are called. That is, if you attach advice to a call pointcut and a thread is about to call a method that matches the signature of the pointcut, the advice will be executed. On the other hand, execution pointcuts specify a set of join points right after the method is called (or right before the call returns). In short, call pointcuts occur inside the calling object, whereas execution pointcuts occur inside the target object. In most cases, it doesn't matter whether you use advice based on call or execution join points.
There are differences between call and execution pointcuts, however. For one thing, call and execution execute at different times. This means that if more than one piece of advice is attached to a particular method, then whether a particular piece of advice is defined using a call or an execution pointcut can make a difference in how the program executes.
Also, call pointcuts are based on the compile-time object types, as understood by the calling class, while execution pointcuts are based on the run-time type of the target of the method call. Thus, if the calling object doesn't know the exact type of the target, then whether advice is based on a call or execution pointcut could be important. In Listing Two, the instance stringWrapper is declared to be an instance of Object, but at run time, it's an instance of StringWrapper. Even though they have the same signatures, the call pointcut in Listing Two is never applicable, while the execution pointcut is.
The final difference between call and execution pointcuts involves a shortcoming in the current release of AspectJ. Because AspectJ is implemented in the compiler and requires you to have the source code for all the classes, you cannot specify execution pointcuts for classes for which you don't have the source.
cflow Pointcuts
The idea behind a cflow (control flow) pointcut is that you often want to insert advice based on something like a stack trace, not just the currently executing line of code.
cflow pointcuts are defined in terms of other pointcuts. The basic syntax for a cflow pointcut is:
pointcut name(): cflow([name of a different pointcut])
pointcut name(): cflowbelow ([name of a different pointcut])
The current point of execution is in the cflow of a pointcut if the pointcut is below it in the stack trace (for example, if unwinding the stack causes the current thread of execution to pass through the pointcut).
The difference between cflow and cflowbelow is that a pointcut is inside its own cflow, but is not inside its cflowbelow (cflow and cflowbelow are analogous to <= and <). Listing Three has examples of cflow pointcuts.
By themselves, cflow pointcuts aren't that useful. What makes them useful is a pointcut algebra. You can use all the standard Boolean operators (!, &&, ||, and so on) with pointcuts. Combining pointcuts lets you build fairly sophisticated statements.
For example, suppose you wrote a library and, in another program, want to log all calls into the library. That is, you want to log calls coming into your library from other code, not calls from one library object to another and not calls that originate in your library code (even if some nonlibrary classes appear in the middle of the stack trace). This can be done in just two steps: You first write a call pointcut specifying your library's public methods, call it "pointcut A." Then the pointcut defined by the expression A && !cflowbelow(A) captures exactly what you want it picks out, at run time, all the method calls into your library that are not in the cflow of a method call into your library. Listing Four is a complete example of this type of pointcut.
Adding Advice
There are three types of advice: before advice, after advice, and around advice.
before and after advice are straightforward. before advice executes immediately before the point in the code indicated by the join point. For its part, after advice executes immediately after the point in the code indicated by the join point. Listing Five prints the sequence "12345," illustrating the basic syntax of before and after advice. As Figure 4 depicts, the reason Listing Five prints "12345" is that the statements are executed in the following order:
1. The before advice attached to the call pointcut.
2. The before advice attached to the execution pointcut.
3. The actual method body.
4. The after advice attached to the execution pointcut.
5. The after advice attached to the call pointcut.
around advice is more complicated than before or after advice. The way around advice works is that some of the code in the advice executes before the pointcut, and that code must call proceed for the contents of the method to execute; see Listing Six.
A Real-World Example
A few months ago, I had a server that kept running out of memory, and standard memory profiling tools didn't show a memory leak when we ran it on developer machines. What we needed to do was trace memory allocations for a week, while the server was running under real-world conditions.
Luckily, I recalled that Heinz Kabutz addressed the problem of tracking object allocations in issue 38 of his Java Specialist newsletter ( kabutz/). Kabutz's solution involves inserting code into java.lang.Object to track object allocations. While this approach can be made to work, it is tricky and fragile. Furthermore, you can get a similar result with a simple aspect. Listing Seven (available electronically; see "Resource Center," page 5) contains code for tracking the number of currently allocated instances in a program. It doesn't track allocations of objects that aren't in your codebase (for example, instances of java.lang.String are not tracked) because of the way AspectJ is currently implemented. But it does track the number of live objects in your program, printing out a status report to the console at predefined intervals, and vending a current report via an RMI interface.
This approach uses only one aspect, which is called during the constructor of all the objects that are tracked. Every time an object is created, it is added into a hash table using a weak reference. A background thread goes through and cleans the hash table occasionally. And the results are summarized and sent over the wire when a client asks.
The impressive thing about this example is that it just works. It requires absolutely no code changes to the original classes. If you have an existing project, you can change the signature of the pointcut and use the resulting aspect to track memory allocations over time. If you change the output format slightly, you can store the results to a comma-delimited file format, then graph your object allocations in Excel. It's very easy to do.
To further illustrate, consider the logging code in Listing Eight (also available electronically). The idea here is that you have an RMI server that is accepting both remote calls (method calls that originate in another process) and in-process calls. You want to log the remote calls, along with the time they occurred, but not the local calls. Listing Eight includes an aspect that lets you do this. This technique works because of the way RMI is structured. Any remote call must go through the RMI run time that is, through a class in one of the java.rmi.* packages before it gets into the server. Local calls do not go through the RMI run time at all. This means that a clever combination of execution and cflowbelow pointcuts let you spot exactly those threads of execution that originate in a call from a remote process.
This approach works without requiring code changes in the original source code. If you have an existing project, you can change the signature of the pointcut and use the resulting aspect to log remote method calls to your server. And that's the beauty of aspects.
DDJ
Listing One
/* Pulls out calls to any public method that throws a RemoteException */ pointcut example1(): call(public * * (..) throws RemoteException); /* Pulls out calls to a constructor of any subclass of UnicastRemoteObject. The '+' indicates any subclass. */ pointcut example2(): call(UnicastRemoteObject+.new(..)); /* Pulls out executions of public methods of classes in any package below com.wgrosso (including classes in com.wgrosso). The '..' is what indicates any subpackage. */ pointcut example3(): execution(public * com.wgrosso..*.*(..)); /* Pulls out executions of public methods of classes in any package below com.wgrosso (including classes in com.wgrosso). The methods have to be "setter" methods and return void */ pointcut example4(): execution(public void com.wgrosso..*.set*(..));
Listing Two
/* Three
/* Four
/* enteringFromExternalSource picks out join points which are involve public methods and are not in the cflowbelow of a public method invocation. You need to use && and ! to define this pointcut. */ pointcut comWGrossoPulicMethods(): call(public * com.wgrosso..*.*(..)); pointcut enteringFromExternalSource(): comWGrossoPulicMethods() && !cflowbelow(comWGrossoPulicMethods());
Listing Five
/* The entire aspect. */ package com.wgrosso.ddjarticle1; import org.aspectj.lang.*; public aspect Aspect_PrintThree { pointcut CallStringWrapper(): call(* com.wgrosso..*.*(..)); pointcut ExecuteStringWrapper(): execution(* com.wgrosso..*.*(..)); before(): CallStringWrapper() { System.out.println("1"); } before(): ExecuteStringWrapper() { System.out.println("2"); } after(): ExecuteStringWrapper() { System.out.println("4"); } after(): CallStringWrapper() { System.out.println("5"); } } /* The entire class */ package com.wgrosso.ddjarticle1; public class PrintThree { public static void main(String[] args) { System.out.println("3"); } }
Listing Six
/* We still need to define a pointcut */ pointcut callExample(): call(public void com.wgrosso..*.*(..)); /* Around advice must call proceed in order for the original method invocation to occur. Note that around advice needs a return type that matches the pointcut. */ void around(): callExample() { /* Any code before the proceed executes before the method body. */ proceed(); // proceed executes the method body /* Any code after the proceed executes after the method body.*/ } | http://www.drdobbs.com/jvm/aspect-oriented-programming-in-aspectj/184405119 | CC-MAIN-2015-35 | refinedweb | 2,546 | 55.74 |
0
Hello,
I have some problem with os exception.
In the traceback info I get only the errno without the description.
for examplem when I write:
import os; os.listdir("no-exists-dir-name");
I get the exception:
Traceback (most recent call last): File "<interactive input>", line 1, in <module> WindowsError: [Error 3] : 'no-exists-dir-name/*.*'
I findout that [Error 3] in errno is map to the string: 'No such process' (return by os.strerror(3)).
Why I did not get something like:
OSErro: [Error 3] : 'No such process' 'no-exists-dir-name/*.*'
I use ActiveState ActivePython 2.5, but it is also at other version and also at the version from python.org.
Thanks | https://www.daniweb.com/programming/software-development/threads/91928/problem-with-os-exception-only-errno | CC-MAIN-2018-05 | refinedweb | 116 | 67.15 |
INET6
Internet protocol version 6 family
Synopsis:
#include <netinet/in.h> struct sockaddr_in6 { uint8_t sin6_len; sa_family_t sin6_family; in_port_t sin6_port; uint32_t sin6_flowinfo; struct in6_addr sin6_addr; uint32_t sin6_scope_id; };
Description:
Protocols
The INET6 family consists of the:
- IPv6 network protocol
- Internet Control Message Protocol version 6 (ICMP )
- Transmission Control Protocol (TCP )
- User Datagram Protocol (UDP ).
TCP supports the SOCK_STREAM abstraction, while UDP supports the SOCK_DGRAM abstraction. Note that TCP and UDP are common to INET and INET6. A raw interface to IPv6 is available by creating an Internet SOCK_RAW socket. The ICMPv6 message protocol may be accessed from a raw socket.
The INET6 protocol family is an updated version of the INET family. While INET implements Internet Protocol version 4, INET6 implements Internet Protocol version 6.
Addressing
IPv6 addresses are 16-byte quantities, stored in network standard (big-endian) byte order. The header file <netinet/in.h> defines this address as a discriminated union.
Sockets bound to the INET6 family use the structure shown above.
You can create sockets with the local address :: (which is equal to IPv6 address 0:0:0:0:0:0:0:0) to cause wildcard matching on incoming messages. You can specify the address in a call to connect() or sendto() as :: to mean the local host. You can get the :: value by setting the sin6_addr field to 0, or by using the address contained in the in6addr_any global variable, which is declared in <netinet6/in6.h>.
The IPv6 specification defines scoped addresses, such as link-local or site-local addresses. A scoped address is ambiguous to the kernel if it's specified without a scope identifier. To manipulate scoped addresses properly in your application, use the advanced API defined in RFC 2292. A compact description on the advanced API is available in IP6 . If you specify scoped addresses without an explicit scope, the socket manager may return an error.
The KAME implementation supports extended numeric IPv6 address notation for link-local addresses. For example, you can use fe80::1%de0 to specify fe80::1 on the de0 interface. The getaddrinfo() and getnameinfo() functions support this notation. With special programs like ping6 , you can disambiguate scoped addresses by specifying the outgoing interface with extra command-line options.
The socket manager handles scoped addresses in a special manner. In the socket manager's routing tables or interface structures, a scoped address's interface index is embedded in the address. Therefore, the address contained in some of the socket manager structures isn't the same as on the wire. The embedded index becomes visible when using the PF_ROUTE socket or the sysctl() function. You shouldn't use the embedded form.
Interaction between IPv4/v6 sockets
The behavior of the AF_INET6 TCP/UDP socket is documented in the RFC 2553 specification, which states:
- A specific bind on an AF_INET6 socket (bind() with an address specified) should accept IPv6 traffic to that address only.
- If you perform a wildcard bind on an AF_INET6 socket (bind() to the IPv6 address ::), and there isn't a wildcard-bound AF_INET socket on that TCP/UDP port, then the IPv6 traffic as well as the IPv4 traffic should be routed to that AF_INET6 socket. IPv4 traffic should be seen by the application as if it came from an IPv6 address such as ::ffff:10.1.1.1. This is called an IPv4 mapped address.
- If there are both wildcard-bound AF_INET sockets and wildcard-bound AF_INET6 sockets on one TCP/UDP port, they should operate independently: IPv4 traffic should be routed to the AF_INET socket, and IPv6 should be routed to the AF_INET6 socket.
However, the RFC 2553 specification doesn't define the constraint between the binding order, nor how the IPv4 TCP/UDP port numbers and the IPv6 TCP/UDP port numbers relate each other (whether they must be integrated or separated). The behavior is very different from implementation to implementation. It is unwise to rely too much on the behavior of the AF_INET6 wildcard-bound socket. Instead, connect to two sockets, one for AF_INET and another for AF_INET6, when you want to accept both IPv4 and IPv6 traffic.
Because of the security hole, by default, NetBSD doesn't route IPv4 traffic to AF_INET6 sockets. If you want to accept both IPv4 and IPv6 traffic, use two sockets. IPv4 traffic may be routed with multiple per-socket/per-node configurations, but, it isn't recommended. See IP6 for details.
Based on:
RFC 2553, RFC 2292 | https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/i/inet6_proto.html | CC-MAIN-2021-21 | refinedweb | 740 | 55.13 |
Multicore Desktop Application Experimentation, Part 1: Java Threads
Multithreaded programming has been possible in Java for a very long time. That's a good thing, since modern computers and even many mobile phones are multicore today. If an application is primarily interactive, its processing speed is fairly irrelevant today -- the limiting time factor is how fast the user can type or move and click the mouse or touch the screen, etc. But, to increase the speed of applications that do a lot of work between user interactions, utilizing those multiple cores is necessary.
Java 7 introduced the Fork/Join Framework, and Project Lambda will bring Lambda Expressions (closures) into Java 8. So, a question developers will soon be facing is: what's the best way for me to take advantage of multiple cores in my specific application? Should I use traditional Java threads? The Fork/Join Framework? Lambda Expressions (once Java 8 is released)?
I've worked on multithreaded development since late 1993 -- though most of that work has been in C, running on SunOS, and later on Solaris, multiprocessor machines. An eight-processor SunOS machine in 1993 was fairly radically powered at the time. The code was mathematical models and analyses applied to various types of satellite data. The researchers developed their functions as single-thread code (often in Fortran); my job was to convert it such that it could efficiently utilize all eight Sun processors.
It was an adventure, to say the least!
Today I did some experimentation on my quad-core CentOS 6.2 system (and, just for fun, on my dual-core HP Mini, which runs some kind of minimal Ubuntu, as well) with a basic multithreaded Java application. Essentially, the code creates some threads, passes them a data range, has each thread do some work on all values that are in the passed-in range, then requests back a final result. Here's the actual thread class:
import static java.lang.Math.pow; i = 1; i <= 100000; i++) {
for(int j = iVal0; j <= iVal1; j++) {
double val0 = i;
double val1 = j;
double val2 = val0 * val1;
double val3 = pow(val2, 0.5);
lastVal = val3;
}
}
} catch (Exception e) {
System.out.println(name + "error" + e);
}
System.out.println(name + " exiting.");
}
}
The starting point for this was an example in Herbert Schildt's excellent "Java The Complete Reference" which I reviewed a while back.
The class in my sample application is used as follows. For each thread:
- A new
NewThreadis created;
SetWorkRangeis called to specify a data range for the thread's execution;
- The thread is run;
- The main app waits for the thread's processing to complete;
GetLastValueis called to return the last result for the thread's execution.
A pretty useless multithreaded application, no doubt! But, it did let me gather some numbers on the relative performance of running computation-centric code with different numbers of threads on my computers.
I analyzed the data range from 1 to 8000, running with 1, 2, 4, and 8 threads, dividing the work equally between each thread. So, in the single-thread case, I called
SetWorkRange with 1 and 8000; in the two-thread case, I called
SetWorkRange with 1 and 4000 for the first thread, and 4001 and 8000 for the second thread; etc.
Here are the timing results on my CentOS 6.2 quad-core system:
These results imply basically perfect scaling out to the number of processor cores in my system; and they imply no real loss (even a tiny clock-time improvement) from running eight threads on my quad-core processor.
Here's what happened on my HP Mini (which wasn't exactly designed with high-volume computation-intensive processing in mind):
The itty-bitty Mini found the task daunting, to say the least. The results don't make a lot of sense, if I'm right that my HP Mini is a dual-core machine. The fact that the application completed in less time as the number of threads were increased to 4 and 8 is interesting, but my guess is that it reflects upon the uniqueness of the HP Mini's hardware and OS more than anything related to Java or my test app.
What I like about today's experimentation is that it demonstrates some of the complexity that's involved in efficiently utilizing the available processing power in any device. Same app, two different devices, with widely differing processing capability. The result? A quite different performance per number of threads graph for each device... So, how's a Java developer to write an efficient multi-platform "write once, run anywhere" application in this modern multicore world???
I'll be investigating this via a new Java.net project (not yet public) and blogs starting with this one. It's a topic I know lots about from my early work with multithreaded development on C on Sun multiprocessor computers, and also from my work on Intel's open source ThreadingBuildingBlocks project (see my Intel Software Network blogs).
In the end, successful multithreaded programming is all about applying language capabilities to efficiently utilize the available multicore/multiprocessor hardware resources. Java currently offers multiple methodologies for addressing this problem, and new methods will arrive with Java 8. It's going to be exciting to investigate and test what works best in what situations!
Java.net Weblogs
Since my last blog post, several people have posted new java.net blogs:
- Otavio Santana, Knowing more about Easy-Cassandra Project;
- Sahoo, Getting verbose class loading output in GlassFish; and
- Heinz Kabutz, Fibonacci (1000000000) Challenge.
Poll
Our current Java.net poll asks Will you use JavaFX for development once it's fully ported to Mac and Linux platforms?. Voting will be open until Friday, March 2.
Articles
Our latest Java.net article is Michael Bar-Sinai's PanelMatic 101.
Java News
Here are the stories we've recently featured in our Java news section:
- Michael Heinrichs answers Most often asked questions about JavaFX;
- Barry Cranford shares Round up of Tuesday night’s SouJava invasion;
- Alexis Moussine-Pouchkine presents And then there were 14 compatible Java EE 6 implementations;
- Alexis Moussine-Pouchkine announces JAX-RS 2.0 - Jersey Code Rulez;
- Dustin Marx catalogues A Plethora of Java Developments in February 2012;
- Geertjan Wielenga recommends Learn Java with Joel Murach and NetBeans IDE;
- Markus Eisele presents The Heroes of Java: Greg Luck;
- Roger Brinkley presents Java Spotlight Episode 71: Alex Buckley on the Java Language and VM Specifications;
- Micha Kops demonstrates Ordering your JUnit Rules using a RuleChain;
- Dustin Marx demonstrates JavaFX 2: Simultaneous Animated Text Strings;
- Adam Bien demonstrates How To Self-Invoke EJB 3.x with(out) "this";
- Geertjan Wielenga shares Java EE 6 Cheat Sheet;
Spotlights
Our latest Java.netSpotlight is James Sugrue's Which JVM Language Is On Top?:
It’s a well known fact that Java’s prevalence in the software development industry is encouraged by the innovation that surrounds the JVM, and the languages that are built on top of it. Today I’d like to start a poll on what alternate languages you use (or would like to use!) on the JVM...
Previously, we spotlighted the Akka Team Blog's Scalability of Fork Join Pool:
Akka 2.0 message passing throughput scales way better on multi-core hardware than in previous versions, thanks to the new fork join executor developed by Doug Lea. One micro benchmark illustrates a 1100% increase in throughput! The new 48 core server had arrived and we were excited to run the benchmarks on the new hardware, but it was sad to see the initial results. It didn’t scale...790 reads | https://weblogs.java.net/node/883777/atom/feed | CC-MAIN-2014-15 | refinedweb | 1,271 | 51.99 |
Can you make it so that you have to enter a number...
But! If you don't within 2 seconds, it executes a goto command.
Thanks, August
Printable View
Can you make it so that you have to enter a number...
But! If you don't within 2 seconds, it executes a goto command.
Thanks, August
i wouldnt use goto commands, they are basically dead in programming, and arent safe.
Code:
#include<iostream>
#include<windows.h>
#include<string>
#include<conio.h>
using namespace std;
int main()
{
long current_tick, two_second_delay = (GetTickCount()+2000);
string user_input="";
char keydown;
do{
cout << "Enter something: " << user_input;
if(kbhit())
{
keydown=getch();
user_input+=keydown;
}
current_tick = GetTickCount();
clrscr();
}while(current_tick < two_second_delay && keydown!='\n');
if(current_tick => two_second_delay)
cout << "You snooze, you lose.";
//else
//continue processing the rest of the program
return 0;
}
I would use The Brains idea but my compiler doesn't have the include file <windows.h> :(
Why are goto's bad? :confused:
erm, Brain... maybe its the beer but, your example doesnt take any input.
Cool-August: You can check stdin to see if there is anything there waiting for you by using select(). Use a 2 second time out and don't read from stdin unless select() indicates that the file descriptor has changed status.
Hmm. I don't quite understand what you mean.
Could you give me an example to work with?
(I am a begginer at C++.)
The bad thing about goto's is that they usally create hard to read code
and that can mess your program up in the later stages. The only good use for goto's I can think of is if you are in a very very deep nest of if's and loops then you need your program to go back to a earlier loop or if AND that is even arguable if you should do that.
What would you recomed using instead of goto's then?
The program accepts user input here?The program accepts user input here?Quote:
erm, Brain... maybe its the beer but, your example doesnt take any input.
Code:
keydown=getch();
>>The program accepts user input here?
you edited your code an hour and a half after my post. The original just waited 2 seconds and exited without ever taking input :rolleyes:
Good suggestion though. | http://cboard.cprogramming.com/cplusplus-programming/64197-time-limit-cin-printable-thread.html | CC-MAIN-2016-44 | refinedweb | 382 | 75.61 |
02 June 2009 08:55 [Source: ICIS news]
SINGAPORE (ICIS news)--Oil major ExxonMobil announced a $30/tonne (€21.3/tonne) price hike for group I and II base oils for Asian buyers on the back of rising crude and gas oil values, the company’s customers said on Tuesday.
The hikes, which would take prices of SN-150 and SN-500 for contract buyers to the low-to-mid $600s/tonne ex-tank ?xml:namespace>
Prices of bright stock would also be increased by $10/tonne, several buyers said.
“The price increases were in line with the market expectations,” said a
Asian base oils values have been rising over the past few weeks due to higher crude prices and steady regional demand amid tight supply, buyers and sellers said.
Key base oils producers in Asia include ExxonMobil, Shell, Nippon Oil, SK Energy, S-Oil and GS Caltex.
($1 = €0.71). | http://www.icis.com/Articles/2009/06/02/9221127/exxonmobil-announces-asia-base-oils-price-hikes-buyers.html | CC-MAIN-2013-20 | refinedweb | 151 | 65.56 |
Like the highly popular Hello World code that we use to learn as our first program in any programming language; Fizz-Buzz is one step ahead which helps us to understand the basic nitty-gritty of a language. Also, it's a popular interview question that can be asked even to experienced programmers to check if s/he can actually write code!
And you know what, most of them fail to write this even if they are working in the said language for say 4+ years! Has Ctrl+C, Ctrl+V taken a toll on our code writing ability :)
Problem Staement
Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “Fizz-Buzz”.
Expected Output Sample [Shown only till 20 and not 100]
FizzBuzz using If / Else
package main import "fmt" func main() { for i := 1; i <= 100; i++ { if i%15 == 0 { fmt.Println("Fizz-Buzz", i) } else if i%5 == 0 { fmt.Println("Buzz", i) } else if i%3 == 0 { fmt.Println("fizz", i) } else { fmt.Println(i) } } }
FizzBuzz using Switch
package main import "fmt" func main() { for i := 1; i <= 100; i++ { switch { case i%15==0: fmt.Println("FizzBuzz") case i%3==0: fmt.Println("Fizz") case i%5==0: fmt.Println("Buzz") default: fmt.Println(i) } } }
Run this code at Playground and see the Output.
Hope this helps you to get started with the basic control flow of Google Go language. Please share your views.
Cant seem to get the markdown to play well but here is another example of the first way I did it. Just a different iteration of the 1st answer above.
package main
import "fmt"
func main() {
for i := 1; i <= 100; i++ {
if i%5 == 0 {
if i%3 == 0 {
fmt.Println("fizzbuzz")
} else {
fmt.Println("buzz")
}
} else if i%3 == 0 {
fmt.Println("fizz")
} else {
fmt.Println(i)
}
}
}
Yes, this works. Thanks Matt. | http://www.golangpro.com/2015/01/fizz-buzz-program-in-golang.html | CC-MAIN-2017-26 | refinedweb | 345 | 75.71 |
Dmitri,
Please see my comments below.
On 4/17/05, Dmitri Plotnikov <[email protected]> wrote:
> Michael,
>
> You are not the first to raise this issue. See for example the discussion
> in this bug report:
>
I can understand the frustration.
> I think it's time we finally resolve the problem.
>
> We have come up with three alternative solutions so far:
>
> 1. Using "null" for namespace prefix. I don't necessarily like the idea -
> it feels atrificial and it blocks access to elements that have no namespace
> at all.
Right, I don't like it either.
> 2. We introduce this new method on JXPathContext:
>
> context.registerDefaultNamespace(ns);
>
> Then whenever a name is used without prefix in an XPath, this default
> namespace is assumed. This is effectively the same as 1, except that
> aestetically this looks better. Unfortunately this solution has the same
> problem: you can no longer reference elements that really don't have any
> namespace.
I agree.
> 3. We introduce this other method on JXPathContext:
> context.setNamespaceIgnored(ns, boolean)
>
> This would allow us to register one or multiple namespaces that should be
> ignored altogether. Let's say
> - you call context.setNamespaceIgnored("abc", true)
> - you have a document that looks like this:
>
> <foo xmlns:
> <a>y</a>
> <names:a>x</names:a>
> </foo>
>
> - and you try to resolve the xpath "//a"
>
> This path will return both elements: x and y, one because it does not have a
> namespace and the other because its namespace is ignored.
>
> This third option is my favorite.
>
> What do you think?
I think it would be useful and I would certainly make use of this feature.
But I don't think it would solve the problem I mentioned in my
original email. I am not able to entirely ignore namespaces. My
problem is that my root element's namespace is variable (could be
XHTML, a custom namespace, or nothing) - but I still want to make sure
that the child elements' namespace is the same as the root element's.
In other words: I want to distinguish between "the root elements'
namespace" and "no namespace" (and "another namespace"). Your option
#3 as I understand it would not allow that, right?
Can I suggest an option #4? Could JXPathContext (preferrably per
default) make the namespace of the root element the default namespace
for queries? Then I could execute queries reliably and independent of
the actual namespace of the root element, but at the same time ensure
that all queried child elements are in the same namespace as the root
element.
Does this make sense or would this collide with other requirements?
To summarize, I really think that your option #3 is the best of 1-3. I
would be really happy if JXPath would implement both #4 and #3.
-Michael
> ----- Original Message -----
> From: "Michael Nestler" <[email protected]>
> To: <[email protected]>
> Sent: Thursday, April 07, 2005 1:08 PM
> Subject: [JXPath] Context Namespace-Relative Queries (v1.2)
>
> > Hello,
> >
> > I recently had to switch from JXPath 1.1 to 1.2 in order to benefit
> > from some bug fixes. I am using JDOM 1.0. It appears that JXPath's
> > behavior regarding namespaces in JDOM trees changed and became less
> > convenient and flexible.
> >
> > I have documents like this:
> >
> > <html xmlns="">
> > <table>
> > <tr>
> > <td>page</td>
> > <td>type</td>
> > <td>comment</td>
> > </tr>
> > <tr>
> > <td><a href="/Home"/></td>
> > <td>conf</td>
> > <td>Home Page</td>
> > </tr>
> > <tr>
> > <td><a href="/ToolBar"/></td>
> > <td>conf</td>
> > <td>Tool Bar</td>
> > </tr>
> > </table>
> > </html>
> >
> > The namespace of the root element is variable and might be the XHTML
> > namespace, another namespace, or no namespace at all. The table
> > structure is always the same, and cell values vary. And I have some
> > code that executes the following query:
> >
> > SAXBuilder sax = new SAXBuilder();
> > Document doc = sax.build(new StringReader(...);
> > Element rootElement = doc.getRootElement();
> > JXPathContext ctx = JXPathUtil.newContext(rootElement);
> > String value = ctx.getValue("/table/tr[2]/td[2]");
> >
> > This worked fine with JXPath 1.1, but it doesn't work anymore with
> > 1.2. The new JXPath version throws an exception:
> >
> > org.apache.commons.jxpath.JXPathException: No value for xpath:
> > /table/tr[2]/td[2]
> > at
> > org.apache.commons.jxpath.ri.JXPathContextReferenceImpl.getValue(JXPathContextReferenceImpl.java:344)
> > at
> > org.apache.commons.jxpath.ri.JXPathContextReferenceImpl.getValue(JXPathContextReferenceImpl.java:280)
> >
> > Why would this not work, considering that all the DOM nodes live in
> > the same namespace - the namespace of the context bean (root element)?
> > It appears that I can register the XHTML namespace, but then I have to
> > use a prefix for every element name in my query - and it won't work if
> > the namespace is not set or is different (that's possible in my app).
> >
> > Is there a solution to this problem, i.e. is there a way to make
> > JXPath interpret the root element's namespace as the default namespace
> > requiring no prefix in XPath queries?
> >
> > Thanks in advance,
> > -Michael
> >
> > ---------------------------------------------------------------------
> > | http://mail-archives.apache.org/mod_mbox/commons-user/200504.mbox/%[email protected]%3E | CC-MAIN-2018-43 | refinedweb | 819 | 58.28 |
On 2020-04-10 2:24 p.m., Andrew Barnert wrote:
On Apr 10, 2020, at 06:00, Soni L. [email protected] wrote
why's a "help us fix bugs related to exception handling" proposal getting so much pushback? I don't understand.
Because it’s a proposal for a significant change to the language semantics that includes a change to the syntax, which is a very high bar to pass. Even for smaller changes that can be done purely in the library, the presumption is always conservative, but the higher the bar, the more pushback.
There are also ways your proposal could be better. You don’t have a specific real life example. Your toy example doesn’t look like a real problem, and the fix makes it less readable and less pythonic. Your general rationale is that it won’t fix anything but it might make it possible for frameworks to fix problems that you insist exist but haven’t shown us—which is not a matter of “why should anyone trust you that they exist?”, but of “how can anyone evaluate how good the fix is without seeing them?” But most of this is stuff you could solve now, by answering the questions people are asking you. Sure, some of it is stuff you could have anticipated and answered preemptively, but even a perfectly thought-out and perfectly formed proposal will get pushback; it’s just more likely to survive it.
If you’re worried that it’s personal, that people are pushing back because it comes from you and you’ve recently proposed a whole slew of radical half-baked ideas that all failed to get very far, or that your tone doesn’t fit the style or the Python community, or whatever, I don’t think so. Look at the proposal to change variable deletion time—that’s gotten a ton of pushback, and it’s certainly not because nobody respects Guido or nobody likes him.
hm.
okay.
so, for starters, here's everything I'm worried about.
in one of my libraries (yes this is real code. all of this is taken from stuff I'm deploying.) I have the following piece of code:
def _extract(self, obj): try: yield (self.key, obj[self.key]) except (TypeError, IndexError, KeyError): if not self.skippable: raise exceptions.ValidationError
(A Boneless Datastructure Language :: abdl._vm:84-89, AGPLv3-licensed,... @ 34551d96ce021d2264094a4941ef15a64224d195)
this library handles all sorts of arbitrary objects - dicts, sets, lists, defaultdicts, wrappers that are registered with collections.abc.Sequence/Mapping/Set, self-referential data structures, and whatnot. (and btw can we get the ability to index into a set to get the originally inserted element yet) - which means I need to treat all sorts of potential errors as errors. however, sometimes those aren't errors, but intended flow control, such as when your program's config has an integer list in the "username" field. in that case, I raise a ValidationError, and you handle it, and we're all good. (or sometimes you want to skip that entry altogether but anyway.)
due to the wide range of supported objects, I can't expect the TypeError to always come from my attempt to index into a set, or the IndexError to always come from my attempt to index into a sequence, or the KeyError to always come from my attempt to index into a mapping. those could very well be coming from a bug in someone's weird sequence/mapping/set implementation. I have no way of knowing! I also don't have a good way of changing this to wrap stuff in RuntimeError, unfortunately. (and yes, this can be mitigated by encouraging the library user to write unit tests and integration tests and whatnot... which is easier said than done. and that won't necessarily catch these bugs, either. (ugh so many times I've had to debug ABDL just going into an infinite loop somewhere because I got the parser wrong >.< unit tests didn't help me there, but anyway...))
"exception spaces" would enable me to say "I want your (operator/function/whatnot) to raise some errors in my space, so I don't confuse them with bugs in your space instead". and they'd get me exactly that. it's basically a hybrid of exceptions and explicit error handling. all the drawbacks of exceptions, with all the benefits of explicit error handling. which does make it worse than both tbh. it's also backwards compatible. I'm trying to come up with a way to explain how "exception spaces" relate to things like rust's .unwrap() on an Result::Err, or nesting a Result in a Result so the caller has to deal with it instead of you, or whatnot, but uh this is surprisingly difficult without mentioning rust code. but think of this like inverted rust errors - while in rust you handle the errors in a return value, with my proposal you'd handle the errors by passing in an argument. or a global. or hidden state. anyway, this is unfortunately more powerful.
my "toy example" (the one involving my use-case, not the one trying to define the semantics of these "exception spaces") is also real code. (GAnarchy :: ganarchy.config:183-201, AGPLv3-licensed,... @ not yet committed) it's just... it doesn't quite hit this issue like ABDL, template engines, and other things doing more complex things do. I'm sorry I don't have better examples, but this isn't the first time I worry my code is gonna mask bugs. it's not gonna be the last, either.
anyway, I'm gonna keep pushing for this because it's probably the easiest way to retrofix explicit error handling into python, while not being as ugly and limiting as wrapping everything in RuntimeError like I proposed previously. (that *was* a bad proposal, tbh. sorry.) I'll do my best to keep adding more and more real code to this thread showing examples where current exception handling isn't quite good enough and risks masking bugs, as I notice them. which probably means only my own code, but oh well. | https://mail.python.org/archives/list/[email protected]/message/OHPQQEBF7BDSDNXKZQTLT6SETOB5FVRH/ | CC-MAIN-2022-33 | refinedweb | 1,031 | 63.29 |
Some background first (skip next two paragraphs if you don't care):
I have a class (Foo) that reads in an XML-style configuration file and
makes the information available to the caller via a bunch of AUTOLOADed methods.
This worked fine for a while, but then the amount of data within Foo got
larger, and the number of people maintaining Foo also increased,
and it became unmanagable to keep all the parsing and accessing within
the same class. I therefore rewrote Foo as a container, with
various block objects to parse and store the different sections of the
config. In other words, Foo is now little more than an array of block objects
with an enumerator, and the block objects do the real work.
So far, so good.
I now need to take the AUTOLOADed methods that Foo catches and call
them on the current block object. So if a caller does something like
$myFoo->Bar(), I need to call the Bar method of the current block. I should
mention that the blocks exist within a class hierarchy, so the methods
that the class implements might not be in the class package. Ick.
The Problem: I need to call dynamically determined methods
on an object, where the method name is contained in a string. As an additional
complication, the method might not be in the class receiving the call; it can
be in one of its superclasses. An example might help:
Say I have the following class in MyClass.pm:
package MyClass;
use strict;
sub new($)
{
my $invocant = shift;
my $class = ref($invocant) || $invocant; # object or class name
my $self = { };
bless($self, $class);
return $self;
}
sub HelloWorld($)
{
my $self = shift;
print "Hello, World!\n";
}
1;
[download]
use MyClass;
use strict;
my $pack = MyClass->new();
my $string = "HelloWorld";
$string = ref($pack) . "::$string";
no strict 'refs';
&$string($pack);
[download]
package MySubClass;
use strict;
our @ISA = qw(MyClass);
require MyClass;
# All implementation is done by the superclass
1;
[download]
use MySubClass;
use strict;
my $pack = MySubClass->new();
my $string = "HelloWorld";
$string = ref($pack) . "::$string";
no strict 'refs';
&$string($pack);
[download]
So my question is the following:
Is there some way I can get method behavior (looking though
the namespaces in the @ISA package) when I don't know the
name of the method until runtime? Or do I have to look through
the @ISA array myself?
Any help would be appreciated,
-Ton
-----
Be bloody, bold, and resolute; laugh to scorn
The power of man...
$pack->$string if $pack->can($string);
[download]
------/me wants to be the brightest bulb in the chandelier!
Vote paco for President!
if (my $code = $pack->can($string)) {
&$code($pack, $string);
);
[download]
That's what AUTOLOAD is for. Check out perldoc perlt. | http://www.perlmonks.org/index.pl?node_id=108271 | CC-MAIN-2016-18 | refinedweb | 453 | 67.59 |
view raw
I found a solution that must be for a older version then vs2010. I would like to know how to do this for vs2010? Does anyone know?
Let me explain little more detail.
I have a c# generated dataset. How can I change the connection string so I can use the dataset with another (identically structured yet differently populated) database? This has to occur at runtime as I do not know the server or database name at compile time. i AM USING vs2010 and SQL Server 2008 R2 Express
I think there is no simple way and you cannot change the Connection-String programmatically for the entire
DataSet since it's set for every
TableAdapter.
You need to use/create the partial class of the
TableAdapter to change the connection-string since the
Connection property is
internal (if your DAL is in a different assembly). Don't change the designer.cs file since it will be recreated automatically after the next change on the designer. To create it just right-click the DataSet and chose "show code".
For example (assuming the
TableAdapter is named
ProductTableAdapter):
namespace WindowsFormsApplication1.DataSet1TableAdapters { public partial class ProductTableAdapter { public string ConnectionString { get { return Connection.ConnectionString; } set { Connection.ConnectionString = value; } } } }
Now you can change it easily:
var productTableAdapter = new DataSet1TableAdapters.ProductTableAdapter(); productTableAdapter.ConnectionString = someOtherConnectionString;
Here's a screesnhot of my sample DataSet and the created file
DataSet1.cs: | https://codedump.io/share/KcU2Mq7ljXwv/1/changing-dataset-connectionstring-at-runtime-vs2010 | CC-MAIN-2017-22 | refinedweb | 233 | 58.38 |
The AsynFileUpload Ajax control enables you to upload the files asynchronously on server.The uploading of files can be confirmed at both the server and client side. You can save the uploaded images by using SaveAs() method.This control works the uploads the files without doing any post pack. This control shows the Loading image while file uploading is in process .There are different coloring options for showing the file upload.Step 1:- First create a table (image_db) with three columns id,image_name and path as shown below:-
- Green color to indicate the file is uploaded successfully
- Red color to indicate the file upload is not successful
Sql server database is used to save the uploaded image by AsynFileUpload control.Here i have used a DataList control to display the uploaded images.In this Data List Control you can create your friends photo gallery like google+ and Facebook. For this application, you have to install AjaxToolkit first on your visual studio.You can install it easily from Here.
There are different concepts that you can implement by using this concepts.
- You can create your image gallery by using Grid View and Repeater control also in asp.net application.
- You can use this AsynFileUpload control on your asp.net Form like IBPS.
- You can use this concepts in asp.net 3 tier architecture also.
- You can save your file path in .mdf database in asp.net.
- You can use secure connections in your .net applications.
There are some steps to implement this whole concepts as given below:-
Step 3:- Now Drag and Drop ToolkitScriptManager, AsyncFileUpload and SqlDataSource controls from the Toolbox on the page (Default.aspx) as shown below:-
Step 3:- Now Configure your Sql Data source as given in this DOC file.
Note -
- If you are using this connection string then you have to no need to put connection strings in your web.config file separately .This is automatically added in your web.config file when you finished the configurations.
- You can use other connection strings also instead of this sql data source.
<%@"> </style> </head> <body> <form id="form1" runat="server"> <div> <asp:ToolkitScriptManager </asp:ToolkitScriptManager> <br /> <span class="style1"><strong>How to Upload image files and save it into sql database using Ajax AsyncFileUpload control in asp.net<br /> </strong> </span> <span class="style2"><strong><span class="style3">Upload Images</span></strong><span class="style1"><asp:AsyncFileUpload </span> </span> <br /> <asp:SqlDataSource</asp:SqlDataSource> <asp:DataList <ItemTemplate> <table> <tr> <td> <asp:Image </td> </tr> </table> </ItemTemplate> </asp:DataList> </div> </form> </body> </html>Step 5:- Now press Design from the blew of the page--> You will see following template as shown below:-
Note :- Here you can see,there are five images displayed horizontally. You can change it according to your requirements with following properties of DataList control .
- RepeatColumns="5"
- RepeatDirection="Horizontal"
Step 6:- Now Open Default.aspx.cs file and write the C# codes as given below
using System; using System.Web; using System.Web.UI; using System.Data.SqlClient; using System.Configuration; using System.Data; using System.Web.UI.WebControls; public partial class _Default : System.Web.UI.Page { //create connection string.. SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["masterConnectionString"].ConnectionString); protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { //generate a method handler press two time tab or generate by the mouse click on that method Bind_DatalistItems(); } } protected void Ajax_imageUpload(object sender, AjaxControlToolkit.AsyncFileUploadEventArgs e) { try { //first make an Empty variable file_Path string file_Path = string.Empty; // create a new string for image path and save the sting path and image in msdotnet folder in your project. //guid.NewGuid() is used to generate unique name for each image. file_Path = (Server.MapPath("~/msdotnet/") + Guid.NewGuid() + System.IO.Path.GetFileName(e.FileName)); AsyncFileUpload1.SaveAs(file_Path); //add substring in image path string file = file_Path.Substring(file_Path.LastIndexOf("\\")); //Deletes a specified number of characters form this instance beginning at a specified position string file_name = file.Remove(0, 1); // STORE complete image path starting with name 'msdonet'into Complete_FilePath VARIABLE string complete_FilePath = "~/msdotnet/" + file_name; //open connection.. con.Open(); //insert image_name and path in image_db table in your database SqlCommand cmd = new SqlCommand("insert into image_db(image_name,path) values(@b,@c)", con); cmd.Parameters.AddWithValue("@b", file_name); cmd.Parameters.AddWithValue("@c", complete_FilePath); cmd.ExecuteNonQuery(); } catch (Exception ex) { Response.Write(ex.ToString()); } finally { con.Close(); } } private void Bind_DatalistItems() { try { // first open connection con.Open(); SqlCommand cmd = new SqlCommand("select path from image_db", con); SqlDataReader dr = cmd.ExecuteReader(); if (dr.HasRows) { DataList1.DataSource = dr; DataList1.DataBind(); } else { DataList1.DataSource = null; DataList1.DataBind(); } } catch (Exception e) { Response.Write(e.ToString()); } finally { con.Close(); } } }
Step 7:-Now Run the application (Press F5)--> Press Browse Button and select the image from your computer Drive--> then you will see following output as shown below:-
Step 8:-Now Open Solution Explorer window --> open msdotnet folder ,you will see that your uploaded image will be saved in this folder as shown below:-
Step 9:-Now open your Database table(image_db) and you will see ,image_name and image path will be saved in this table as shown below:-
Note - I have used different way to bind the Image in DataList control in Default2.aspx page . This is very easy way to bind the image. You can can download this application from the below link and see default2.aspx page.For More...
- How to create captcha image on asp.net website easily
- How to create complete registration and login page in asp.net
- How to insert data in database and print the grid view data in asp.net
- How to create setup file with database in asp.net
- File Handling real concepts with real life example
- How to host asp.net website on IIS Server
- How to create dll file and use it in asp.net application
- Form based Authentication in asp.net
- Object Oriented Programming in c# .net with real life examples
- How to implement Cookie concepts in asp.net with an example
- .NET Complete Interview Questions and Answers With Examples
- How to build your own media player and install it on your computer
If you want to Run this application directly on your Visual studio,download it from below link.First Drag and Drop ToolkitScriptManager on Default.aspx page as well as change connection strings also.Download whole attached file
Download | http://www.msdotnet.co.in/2016/02/how-to-upload-images-using-ajax.html | CC-MAIN-2017-26 | refinedweb | 1,047 | 51.14 |
Scatter Charts¶
Scatter, or xy, charts are similar to some line charts. The main difference is that one series of values is plotted against another. This is useful where values are unordered.
from openpyxl import Workbook from openpyxl.chart import ( ScatterChart, Reference, Series, ) wb = Workbook() ws = wb.active rows = [ ['Size', 'Batch 1', 'Batch 2'], [2, 40, 30], [3, 40, 25], [4, 50, 30], [5, 30, 25], [6, 25, 35], [7, 20, 40], ] for row in rows: ws.append(row) chart = ScatterChart() chart.title = "Scatter Chart" chart.style = 13 chart.x_axis.title = 'Size' chart.y_axis.title = 'Percentage' xvalues = Reference(ws, min_col=1, min_row=2, max_row=7) for i in range(2, 4): values = Reference(ws, min_col=i, min_row=1, max_row=7) series = Series(values, xvalues, title_from_data=True) chart.series.append(series) ws.add_chart(chart, "A10") wb.save("scatter.xlsx")
Note
The specification says that there are the following types of scatter charts: ‘line’, ‘lineMarker’, ‘marker’, ‘smooth’, ‘smoothMarker’. However, at least in Microsoft Excel, this is just a shortcut for other settings that otherwise no effect. For consistency with line charts, the style for each series should be set manually. | http://openpyxl.readthedocs.io/en/latest/charts/scatter.html | CC-MAIN-2017-30 | refinedweb | 188 | 60.72 |
in reply to (Golf) Nearest Neighbors
sub* (:
print nn(1, 4, 7);
[download]
UPDATE
print nn(1, 5, 7);
print nn2(1, 5, 7);
# Hmmmm
print nn(1, 5, 11);
print nn2(1, 5, 11);
[download]
Just following the (ambiguous) specification. Are you looking for numbers from (5,11) that are close to 1 or numbers from (1,5) that are close to 11? I didn't find an API spec and found others using pop so I went ahead with the 2-character savings.
Or am I only supposed to return one number if the "two closest" are both on "the same side" of the search-for number? That wasn't clear to me either so I just went with "return the two closest" without trying to assume a bunch of extra subtle meaning to that phrase. No, I'm not going to produce a version that sometimes returns only one number. (:
So given (1, 5, 7) it would return (5, 7) (or (7, 5)) while (1, 5, 11) would return (1, 5). And you cannot assume that it is presented in increasing order.
nn(1,5,9,15) returns (1,5)
# or
nn(1,5,9,15) returns (5,9)
# or
nn(1,5,9,15) returns (9,5)
[download]
MeowChow
s aamecha.s a..a\u$&owag.print
Yes
No
Results (278 votes). Check out past polls. | http://www.perlmonks.org/index.pl?node_id=69781 | CC-MAIN-2017-13 | refinedweb | 230 | 78.59 |
This section describes known issues, problems, and workarounds for the compilers in this release.
Errors in the published compiler documentation are listed below.
The cc(1), CC(1), and f95(1) man pages neglect to list the —xarch=sse3a flag, which adds the AMD instruction set, including 3dnow, to the SSE3 instruction set.
The C and C++ documentation neglects to point out that the —xMF option can only be used with —xMD or —xMMD, but not with —xM or —xM1. When specified, it overrides the default .d file name used with those options.
The Apache stdcxx library installed in Solaris 10u10 and earlier and in the initial release of Solaris 11 has a syntax error in header stdcxx4/loc/_moneypunct.h. This error was not seen by earlier compilers, but is caught by the Oracle Solaris Studio 12.3 C++ compiler. There is no way to disable the error detection.
A fix for this bug is available in a patch for Solaris 10, and in the first Solaris 11 SRU. The fix will be included in Solaris 10u11 and Solaris 11u1 when they become available.
Some C++ statements could potentially be interpreted as a declaration or as an expression-statement. The C++ disambiguation rule is that if a statement can be a declaration, it is a declaration.
Earlier versions of template option -instances=static (or -pto) does not work in combination with either of the -xcrossfile or -xipo options. Programs using the combination will often fail to link.
If you use the -xcrossfile or -xipo options, use the default template compilation model, -instances=global instead.
In general, do not use -instances=static (or -pto) at all. It no longer has any advantages, and still has the disadvantages documented in the C++ Users Guide.
The following conditions may cause linking problems.
A function is declared in one place as having a const parameter and in another place as having a non-const parameter.
Example:
void foo1(const int); void foo1(int);
These declarations are equivalent, but the compiler mangles the names differently. To prevent this problem, do not declare value parameters as const. For example, use void foo1(int); everywhere, including the body of the function definition.
A function has two parameters with the same composite type, and just one of the parameters is declared using a typedef.
Example:
class T; typedef T x; // foo2 has composite (that is, pointer or array) // parameter types void foo2(T*, T*); void foo2(T*, x*); void foo2(x*, T*); void foo2(x*, x*);
All declarations of foo2 are equivalent and should mangle the same. However, the compiler mangles some of them differently. To prevent this problem, use typedefs consistently.
If you cannot use typedefs consistently, a workaround is to use a weak symbol in the file that defines the function to equate a declaration with its definition. For example:
#pragma weak "__1_undefined_name" = "__1_defined_name"
Note that some mangled names are dependent on the target architecture. (For example, size_t is unsigned long for the SPARC V9 architecture (-m64), and unsigned int otherwise.) In such a case, two versions of the mangled name are involved, one for each model. Two pragmas must be provided, controlled by appropriate #if directives. namesapce.
namespace foo { #pragma align 8 (a, b, c) // has no effect //use mangled names: #pragma align 8 (__1cDfooBa_, __1cDfooBb_, __1cDfooBc_) static char a; static char b; static char c; }
The following issues should be noted in this release of the f95 compiler:
Blank space before the end of a no advance print line does not affect output position (7087522).X edit descriptor. They are not exactly the same since the blank character string edit descriptor actually causes blank characters to go into the record whereas the nX only skips over the next n characters, usually causing blanks to be in those skipped positions by default.
Valid code rejected when a line consists of two continuation ampersands. (7035243). (6944225)..
Previous releases of the Fortran compiler introduced incompatibilities that carry forward to this release of the compiler and should be noted if you are updating from earlier Fortran compiler releases. The following incompatibilies are worth noting:
Here ANY, ALL, COUNT, MAXVAL, MINVAL, SUM, PRODUCT, DOT_PRODUCT, and MATMUL are highly tuned for the appropriate SPARC platform architectures. As a result, they use the global registers %g2, %g3, and %g4 as scratch registers. Solaris gethrtime(3C) function) to get high resolution real time on Linux platforms will only be accurate on AMD systems with power saving disabled. A reboot of the system might be required to disable the power-saving features. | http://docs.oracle.com/cd/E24457_01/html/E21987/glnzd.html | CC-MAIN-2014-41 | refinedweb | 760 | 53.71 |
Timeit in Python with Examples
This article will introduce you to a method of measuring the execution time of your python code snippets.
We will be using an in-built python library timeit.
This module provides a simple way to find the execution time of small bits of Python code.
Why timeit?
- Well, how about using simple time module? Just save the time before and after the execution of code and subtract them! But this method is not precise as there might be a background process momentarily running which disrupts the code execution and you will get significant variations in running time of small code snippets.
- timeit runs your snippet of code millions of time (default value is 1000000) so that you get the statistically most relevant measurement of code execution time!
- timeit is pretty simple to use and has a command line interface as well as a callable one.
So now, let’s start exploring this handy library!.
Where the timeit.timeit() function returns the number of seconds it took to execute the code.
Example 1
Let us see a basic example first.
- The output of above program will be the execution time(in seconds) for 10000 iterations of the code snippet passed to timeit.timeit() function.
Note: Pay attention to the fact that the output is the execution time of number times iteration of the code snippet, not the single iteration. For a single iteration exec. time, divide the output time by number.
- The program is pretty straight-forward. All we need to do is to pass the code as a string to the timeit.timeit() function.
- It is advisable to keep the import statements and other static pieces of code in setup argument.
Example 2
Let’s see another practical example in which we will compare two searching techniques, namely, Binary search and Linear search.
Also, here I demonstrate two more features, timeit.repeat function and calling the functions already defined in our program.
- The output of above program will be the minimum value in the list times.
This is how a sample output looks like:
- timeit.repeat() function accepts one extra argument, repeat. The output will be a list of the execution times of all code runs repeated a specified no. of times.
- In setup argument, we passed:
from __main__ import binary_search from random import randint
This will import the definition of function binary_search, already defined in the program and random library function randint.
- As expected, we notice that execution time of binary search is significantly lower than linear search!
Example 3
Finally, I demonstrate below how you can utilize the command line interface of timeit module:
Here I explain each term individually:
So, this was a brief yet concise introduction to timeit module and its practical applications.
Its a pretty handy tool for python programmers when they need a quick glance of the execution time of their code snippets. | https://www.geeksforgeeks.org/timeit-python-examples/ | CC-MAIN-2019-30 | refinedweb | 484 | 63.9 |
WordPress: How to disable a plugin on all pages except for a specific one
A few days ago we were struggling to find a way to limit the amount of plugins that load at any point on a WordPress website. We noticed that several plugins enqueue their scripts and their styles in all requests to the website even if they are actually used on a single page only. This issue was important to address as it was making the whole server slower by giving it extra requests from the client that would never provide any actual benefit to the user.
Initially, we tried to selectively enable those plugins on their respective pages but we did not get it right and things would load out of order and break. Instead of following the ‘
enable when needed‘ methodology we decided to follow the ‘
disable unless needed‘ methodology which seemed simpler at the time.
Our changes involved in adding the following code in the
functions.php file of our child theme.
//Register a filter at the correct event add_filter( 'option_active_plugins', 'bf_plugin_control' ); function bf_plugin_control($plugins) { // If we are in the admin area do not touch anything if (is_admin()) { return $plugins; } // Check if we are at the expected page, if not remove the plugin from the active plugins list if(is_page("csv-to-kml-cell-site-map") === FALSE){ // Finding the plugin in the active plugins list $key = array_search( 'csv-kml/index.php' , $plugins ); if ( false !== $key ) { // Removing the plugin and dequeuing its scripts unset( $plugins[$key] ); wp_dequeue_script( 'bf_csv_kml_script' ); } } if(is_page("random-password-generator") === FALSE){ $key = array_search( 'bytefreaks-password-generator/passwordGenerator.php' , $plugins ); if ( false !== $key ) { unset( $plugins[$key] ); } } if(is_page("xml-tree-visualizer") === FALSE){ $key = array_search( 'xmltree/xml-tree.php' , $plugins ); if ( false !== $key ) { unset( $plugins[$key] ); wp_dequeue_script( 'bf_xml_namespace' ); wp_dequeue_style( 'bf_xml_namespace' ); } } return $plugins; }
One day, we will clean the above code to make it tidy and reusable.. one day, that day is not today.
What the code above does is the following:
- Using
is_adminit checks if the Dashboard or the administration panel is attempting to be displayed, in that case it does not do any changes.
- With
is_page, it will additionally check if the parameter is for one of the pages specified and thus disable the plugin if the check fails.
- PHP command
array_search, will see if our plugin file is expected to be executed (all files in
$pluginsare the plugin files that are expected to be executed) .
wp_dequeue_scriptand
wp_dequeue_styleremove the previously enqueued scripts and styles of the plugin as long as you know the handles (or namespaces of the enqueued items).
To get the handles (namespaces) we went through the plugin codes and found all instances of
wp_enqueue_scriptand
wp_enqueue_style.
Please note that several small plugins do not have additional items in queue so no further action is needed. | https://bytefreaks.net/2019/05 | CC-MAIN-2022-27 | refinedweb | 464 | 50.46 |
LA3635 - com-
plaining. N, F 10000: the number of pies and the number
of friends.
One line with N integers ri with 1 ri 10000: the radii of the pies.
Output
For each test case, output one line with the largest possible volume V such that me and my friends can
all get a pie piece of size V . The answer should be given as a oating point number with an absolute
error of at most 10
#include <iostream> #include <stdio.h> #include <cmath> using namespace std; #define PI acos(-1) const int MAXN = 10010; double S[MAXN]; int N, F; bool check(double mid) { int sum = 0; for(int i=0; i<N; i++) { sum += floor(S[i]/mid); } return sum>=F+1; } int main() { int t, r; scanf("%d",&t); while(t--) { scanf("%d%d",&N,&F); for(int i=0; i<N; i++) { scanf("%d",&r); S[i] = r*r*PI; } double min1 = 0, max1 = 1e14, mid; while(max1-min1>1e-5) { mid = (max1+min1)/2; if(check(mid)) { min1 = mid; }else{ max1 = mid; } } printf("%.4f\n",min1); } return 0; } | http://blog.csdn.net/fljssj/article/details/46821969 | CC-MAIN-2017-47 | refinedweb | 183 | 75.84 |
Could someone please help me work these problems out with the following code? Exercise 1: The variable middle is defined as an integer. The program contains the assignment statement middle=first + (last-first)/2. Is the right side of this statement necessarily an integer in computer memory? Explain how the middle value is determined by the computer. How does this line of code affect the logic of the program? Remember that first, last, and middle refer to the array positions, not the values stored in those array positions. Exercise 2: Search the array in the program above for 19 and then 12. Record what the output is in each case. Note that both 19 and 12 are repeated in the array. Which occurrence of 19 did the search find? Which occurrence of 12 did the search find? Explain the difference. Exercise 3: Modify the program to search an array that is in ascending order. Make sure to alter the array initialization. #includeusing namespace std; int binarySearch(int [], int, int); // function prototype const int SIZE = 16; int main() { int found, value; int array[] = {34,19,19,18,17,13,12,12,12,11,9,5,3,2,2,0}; // array to be searched cout << "Enter an integer to search for:" << endl; cin >> value; found = binarySearch(array, SIZE, value); //function call to perform the binary search //on array looking for an occurrence of value if (found == -1) cout << "The value " << value << " is not in the list" << endl; else { cout << "The value " << value << " is in position number " << found + 1 << " of the list" << endl; } return 0; } //******************************************************************* // binarySearch // // task: This searches an array for a particular value // data in: List of values in an orderd array, the number of // elements in the array, and the value searched for // in the array // data returned: Position in the array of the value or -1 if value // not found // //******************************************************************* int binarySearch(int array[],int numElems,int value) //function heading { int first = 0; // First element of list int last = numElems - 1; // last element of the list int middle; 0 // variable containing the current // middle value of the list while (first <= last) { middle = first + (last - first) / 2; if (array[middle] == value) return middle; // if value is in the middle, we are done else if (array[middle] | http://www.chegg.com/homework-help/questions-and-answers/could-someone-please-help-work-problems-following-code-exercise-1-variable-middle-defined--q3313962 | CC-MAIN-2015-06 | refinedweb | 376 | 57.71 |
To do that I have used a computer with Ubuntu 12.04 and the program language C++, but if you want to used Windows the code works too, only have to change the port used to conect with arduino, but this is explain in the next step.
This program is only the first version so it must be improved.
Arduino use the usb port to simulate a serial port so we have to use a usb cable to connect the arduino usb port to computer usb port.
Step 1: Program in C++
This is the code of C++, I have created a main class and a Arduino class, so this is object oriented.
#ifndef ARDUINO_H
#define ARDUINO_H
#include <SerialStream.h>
#include <SerialStreamBuf.h>
#include <SerialPort.h>
#include <string>
class Arduino{
public:
Arduino();
int open);
DataBuffer read();
void close();
private:
string dev = "/dev/ACM0";
SerialPort serial;
};
#endif // ARDUINO_H
This is the header of Arduino class.
There are three functions open, read and close.
Open: Open the conection bewteen arduino and the computer.
Read: Read the bufer where is all dates that arduino has send to the computer.
Close: Close the conection bewteen arduino and the computer.
To connect with arduino I have used the port of my computer "/dev/ACM0", if you use Windows instead of Linux you have to use the port "COM1" or "COM2". But to see what port is using arduino you have to use the JDK of arduino and select a port in "Tools -> Serial Port".
# include <Arduino.h>
Arduino::Arduino(){
serial(dev);
}
int Arduino::abrir(){
int estado = 0;
serial.Open(SerialPort::BAUD_9600,
SerialPort::CHAR_SIZE_8,
SerialPort::PARITY_NONE,
SerialPort::STOP_BITS_1,
SerialPort::FLOW_CONTROL_NONE);
if (serial.IsOpen() == false)
estado = -1;
return estado;
}
void Arduino::cerrar(){
serial.Close();
}
DataBuffer Arduino::leer(){
SerialPort::DataBuffer buffer;
serial.Read(buffer, 10, 1000);
return buffer;
}
This is the code of Arduino class.
#include <iostream>
#include <SerialStream.h>
#include <SerialStreamBuf.h>
#include <SerialPort.h>
#include <string>
using namespace std;
using namespace LibSerial;
int main(int argc, char **argv)
{
Arduino arduino();
return 0;
}
And finally this is the main class.
Step 2: Program in Arduino
void setup(){
Serial.begin(9600);
}
void loop(){
Serial.println("Hello world");
delay(1000);
}
This is the code that you have to do in arduino.
This code is very simple.
"Serial.begin(9600)" : Sets the velocity of dates in bits per second (baud).
Serial.println("Hello world") : Send a message through the serial port.
delay(1000) : Stop by 1000 miliseconds.
Participated in the
Microcontroller Contest
4 Discussions
5 months ago
To facilitate the connection with a PC, I recommend you to use a communication library such as
3 years ago
Hi, I'm trying to run your code on my mac to send my c++ robot commands to my Arduino Uno that controls my robot motor . So the only thing I changed was the port name, mine is "/dev/cu.usbmodem1421." But the problem is my xcode does not have any of these Serial libraries you used. I tried to add SerialLib but it didn't work. Since I'm new to Arduino and c++ can you show me how to add the Serial libraries. One more thing after I got it to work do I need to use serial monitor to see if the code's working.
Thanks in advance
4 years ago on Introduction
I was able to test scm which is an alternative library to rxtx/javaxcomm for serial port communication. The arduino sent and received data correctly.
Wiki :
Repository :...
4 years ago on Introduction
Why would one release the code with structure names not in English? That's a bad practice. | https://www.instructables.com/id/How-to-connect-Arduino-to-a-PC-through-the-serial-/ | CC-MAIN-2019-30 | refinedweb | 602 | 66.13 |
Throughout the last few articles in Java development 2.0, I've been building
upon a simple cloud-to-mobile application. This application, called Magnus, serves as
an HTTP endpoint listening for mobile-device location information. It functions by
receiving
HTTP PUT requests, each one containing a JSON document that indicates an account's location at a given time. So far, I've used the web framework Play to develop and extend Magnus (see Resources).
Play is much like Grails in that it provides an MVC stack. With Play, you can easily define controllers (servlets) that leverage views (JSPs, GSPs, templates, you name it), which in some way manipulate models. Models are implemented using POJOs (plain old Java objects) enhanced with Hibernate, JPA, or some other nifty ORM-like technology.
Although MVC is an older standard, much has changed with the advent of frameworks like Grails and Play. Simply recall the amount of effort it once took to stand up a simple web request-response interaction — say, using Struts — and you will appreciate how far we've come toward rapidly built MVC web apps. Still, not all web apps need MVC infrastructure to work. These days, some web apps don't need an MVC "stack" at all.
Before you close your browser in order to protest such a heretical statement, remember Magnus. While devised strictly for the purpose of demonstration, my cloud-to-mobile app contains no traditional view component and largely models itself off of existing, successful services. Like Twitter or Foursquare, Magnus receives messages from disparate devices around the globe. Broadly speaking, Magnus is a web service, and not every web service needs an MVC stack framework to get the job done. In some cases, all you need is a super lightweight web framework without the web stack.
This month, we'll be looking at one of these: a rapid development framework so bleeding new it doesn't even have its own homepage, and perhaps may not need one. Gretty's lineage and affiliations (including Netty and Groovy, respectively) are respectable enough that it's already part of the Java 2.0 web development family. It fills a need that many developers don't yet know they have (that's the true Web 2.0 style, don't you know?). And it's stable enough for production use — if you're willing to walk on the wild side.
A history of rapid Java development
Those of us old enough to remember when the Servlets API was first introduced have reason to be skeptical of the new "lightweight" paradigm; just a simple servlet lets you build a web service without a lot of code and consequent JAR files, after all. Web service frameworks, such as Restlet or Jersey, take a slightly different approach to development speed, building upon class extensions, annotations, and even standard JSRs to create RESTful web services. Both of these are still good options for some scenarios.
But it turns out that some new lightweight (as opposed to old lightweight) frameworks are making web services, or simple HTTP endpoints (also known as routes), amazingly simple to define. Even more simple than hand jamming a servlet!
These frameworks first emerged on other platforms, notably Sinatra for Ruby and Express for Node.js. But interesting projects have begun to emerge for the Java platform, too. One of them is Gretty, which is of course home-brewed for Groovy and the JVM.
I'm with Gretty
Gretty has at least two things going for it as far as I'm concerned: First is its use of Groovy's Grape (which I'll describe in more detail shortly) to facilitate dependency management. Second is its simple DSL-like syntax for defining endpoints. With Gretty, you can very quickly (in just a few short lines of code) define and deploy a working web routing framework that handles real business logic. As an example, watch me whip up the canonical hello world example in Listing 1:
Listing 1. Hello, World: It's Gretty!
import org.mbte.gretty.httpserver.* @GrabResolver(name='gretty', root='') @Grab('org.mbte.groovypp:gretty:0.4.279') GrettyServer server = [] server.groovy = [ localAddress: new InetSocketAddress("localhost", 8080), defaultHandler: { response.redirect "/" }, "/:name": { get { response.text = "Hello ${request.parameters['name']}" } } ] server.start()
In Listing 1, I created a server listening on port 8080, then set up a simple root endpoint containing the parameter
name. Any request to some other endpoint will be routed back to
/ via the
defaultHandler. The handler basically sends the requesting client an HTTP 301 "moved permanently" code, with the location of
/. All requests will receive a response (with the content-type set to
text/plain) containing the string "Hello" along with the value of any parameter passed; for instance,
/Andy would yield "Hello Andy."
So what's interesting about Listing 1? First and foremost, that what you see in the listing is all you need for the application. There are no configuration files. I didn't need to download or install anything directly (other than Groovy 1.8). To fire this example up, I simply typed
groovy server.groovy.
Now what if your responses need a bit more sophistication than simple text? For this, Gretty has a number of options, two of which are quite easy. First, you could simply set the response type to HTML, like I've done in Listing 2:
Listing 2. HTML responses in Gretty
"/:name": { get { response.html = "Hello ${request.parameters['name']}" } }
In this case, the response content-type would be set to
text/html. Alternately, Gretty makes it possible to leverage static
and dynamic templates. For instance, I could define a template using a simple
JSP/GSP-like construct, like Listing 3:
Listing 3. An HTML template in Gretty
<html> <head> <title>Hello!</title> </head> <body> <p>${message}</p> </body> </html>
I could then reference the template in the body of a response:
Listing 4. A Groovy++ template in Gretty
"/:name" { get { response.html = template("index.gpptl", [message: "Hello ${request.parameters['name']}"]) } }
Dependency management with Gretty and Grape
Some of Gretty's rapid development impressiveness is thanks to Grape (see Resources), which it uses to automatically download binary dependencies, or JAR files. All files are loaded with Maven-style transitive dependencies. All I had to do in Listing 1 was type in the annotation
@Grab('org.mbte.groovypp:gretty:0.4.279') and I got the JAR file associated with Gretty, along with Gretty's dependencies. The annotation
@GrabResolver(name='gretty', root='') indicated where Grape could find the needed files.
Grape may look simple, but that doesn't mean it's unfit for production. In fact, Grape's auto-downloading of required dependencies isn't any different from Maven's. It's just that it is done at runtime, the first time that you run an application, whereas Maven downloads required dependencies at build time. If Grape can find the required dependencies locally, no download is necessary. Required JARs are automatically placed in the application's classpath. As a result, you only pay a performance cost the first time you run a Grape-configured application. Of course, you'll also take a small performance hit any time you change the required version of a specified dependency.
Gretty meets Magnus
Hopefully by now you've seen that Gretty is simple, which makes it easy to do seriously fast development. Plus, Gretty (or frameworks like it) are perfectly suited for applications like Magnus — HTTP endpoints listening for data. So let's see what happens when I completely replace a respectably lightweight framework like Play or Grails with an even more lightweight app written with Gretty.
For this incarnation of Magnus, I'll use Morphia and MongoHQ, both of which you might recall from my introduction to Amazon Elastic Beanstalk. In order to leverage Groovy's Grape utility with the new config, I'll need to add the annotations in Listing 5 to my server:
Listing 5. Adding Morphia and its dependencies
@GrabResolver(name='morphia', root='') @Grab(group='com.google.code.morphia', artifactId='morphia', module="morphia", version='0.99')
My Morphia classes are the same as they have been for earlier incarnations of Magnus: I
have an
Account and a
Location. In this endpoint, I'm simply updating a location for a given account. Because clients to Morphia will send JSON documents to Gretty's endpoint, I'm also going to use Jackson, a nifty JSON-handling framework that already is part of Gretty's internals. Thanks to Grape's handling of transitive dependencies, I've now got access to everything I need to parse the incoming JSON document and transform it into a simple Java
Map.
Listing 6. Updating locations in Gretty
def server = new GrettyServer().localAddress(new InetSocketAddress("localhost", 8080)). "{ new Location(request.parameters['account'], dt, json['latitude'].doubleValue() , json['longitude'].doubleValue() ).save() res['status'] = 'success' }catch(exp){ res['status'] = "error ${exp.message}" } response.json = jacksonMapper.writeValueAsString(res) } } server.start ()
As you can see in Listing 6, a
Map
of the incoming JSON document is created (dubbed
json) and
then correspondingly inserted into MongoDB via my
Location
class in Listing 7:
Listing 7. Creating the location document — Gretty redux
import com.google.code.morphia.annotations.Entity @Entity(value = "locations", noClassnameStored = true) class Location extends AbstractModel { String accountId double latitude double longitude Date timestamp public Location(String accountId, Date timestamp, double lat, double lon) { this.accountId = accountId this.timestamp = timestamp this.latitude = lat this.longitude = lon } }
What's more, my
Location has a Groovy superclass, shown in
Listing 8:
Listing 8. Location's base class
import com.google.code.morphia.Morphia import com.google.code.morphia.annotations.Id import com.mongodb.Mongo import org.bson.types.ObjectId abstract class AbstractModel { @Id private ObjectId id; def save() throws Exception { def mongo = new Mongo("fame.mongohq.com", 32422) def datastore = new Morphia().createDatastore(mongo, "xxxx", "xxxx", "xxxx".toCharArray()) datastore.save(this) return this.id } }
You might remember this code from Listing 3 of "Climb the Elastic Beanstalk." The only change I made for the Gretty implementation was to rename the actual file from
Location.java to
Location.groovy, which means I don't have to compile it before firing up the server. I also added a base class. The location is tied to an account via the incoming parameter
account obtained from the URI.
A response is then sent in JSON indicating success. In the case of an error, another response would be generated.
In conclusion: Gretty is ready
Gretty is as light as light can be. There is no embedded ORM-framework. There is no robust view framework aside from simple templates, but plugging in some other framework is completely doable. Does all of this mean Gretty isn't ready for everyday usage? Does its lack of a testing framework imply the same? The answer is no: First, Gretty is built off of Netty's well-regarded code, so you get some assurance right out of the gate. Second, you can test Gretty just like you would any other web endpoint, automatically or not. In fact, if you want to see how Gretty tests, check its source code — there are plenty of tests in there!
Gretty is the antithesis of the modern full-stack web framework, precisely because sometimes you don't need an entire stack. If you find yourself doing too much with a framework like Gretty, then you might be better off with one of the many full stack, well-documented Java web frameworks. Likewise, if you find yourself wondering why you need an entire stack to handle web service requests and responses, then Gretty could be just what you need.
Resources
Learn
- "Groovy++ in action: Gretty/GridGain/REST/Websockets" (Alex Tkachman, DZone, May 2011): Very little has so far been written about Gretty. This introduction by its author offers a few more example applications implemented with Groovy++.
- Java development 2.0: This dW series explores technologies that are redefining the Java development landscape. Topics include Amazon Elastic Beanstalk (February 2011), JavaScript for Java developers (April 2011), MongoDB (September 2010), and NoSQL (May 2010).
- Grape user guide: Curious about Grape? Get a quick introduction from the Codehaus user guide.
- Netty homepage: Learn about the Java NIO client-server socket framework.
- "Getting started with new I/O (NIO)" (Greg Travis, developerWorks, July 2003): This hands-on tutorial covers the NIO library, including buffers and channels, asynchronous I/O, and direct buffers.
- Knowledge path: Cloud computing fundamentals (March 2011): Introduces cloud computing concepts and the service models IaaS, PaaS, and SaaS.
- The busy Java developer's guide to Scala (Ted Neward, 2008-2009): Ted Neward dives into the Scala programming language, cutting to the chase and providing a look at its linguistic capabilities in action.
- Browse the Java technology bookstore for books on these and other technical topics.
- developerWorks Java technology zone: Find hundreds of articles about every aspect of Java programming.
Get products and technologies
- The Play framework: Focuses on developer productivity and targets RESTful architectures.
- Get Gretty: All you need to get started is Groovy 1.8.0.
- | https://www.ibm.com/developerworks/library/j-javadev2-20/index.html | CC-MAIN-2016-44 | refinedweb | 2,175 | 56.96 |
- 22 Oct 2016 02:17:14 UTC
- Distribution: Mail-IMAPTalk
- Module version: 4.04
- Source (raw)
- Browse (raw)
- Changes
- How to Contribute
- Repository
- Issues (0)
- Testers (324 / 0 / 0)
- KwaliteeBus factor: 2
- % Coverage
- License: perl_5
- Activity24 month
- Tools
- Download (53.84KB)
- MetaCPAN Explorer
- Permissions
- Permalinks
- This version
- Latest versionand 1 contributors
- NAME
- SYNOPSIS
- DESCRIPTION
- CLASS OVERVIEW
- CONSTANTS
- CONSTRUCTOR
- CONNECTION CONTROL METHODS
- IMAP FOLDER COMMAND METHODS
- IMAP MESSAGE COMMAND METHODS
- IMAP CYRUS EXTENSION METHODS
- IMAP HELPER FUNCTIONS
- IMAP CALLBACKS
- FETCH RESULTS
- INTERNAL METHODS
- INTERNAL SOCKET FUNCTIONS
- INTERNAL PARSING FUNCTIONS
- PERL METHODS
- SEE ALSO
- AUTHOR
NAME
Mail::IMAPTalk - IMAP client interface with lots of features
SYNOPSIS
use Mail::IMAPTalk; $IMAP = Mail::IMAPTalk->new( Server => $IMAPServer, Username => 'foo', Password => 'bar', ) || die "Failed to connect/login to IMAP server"; # Append message to folder open(my $F, 'rfc822msg.txt'); $IMAP->append($FolderName, $F) || die $@; close($F); # Select folder and get first unseen message $IMAP->select($FolderName) || die $@; $MsgId = $IMAP->search('not', 'seen')->[0]; # Get message envelope and print some details $MsgEV = $IMAP->fetch($MsgId, 'envelope')->{$MsgId}->{envelope}; print "From: " . $MsgEv->{From}; print "To: " . $MsgEv->{To}; print "Subject: " . $MsgEv->{Subject}; # Get message body structure $MsgBS = $IMAP->fetch($MsgId, 'bodystructure')->{$MsgId}->{bodystructure}; # Find imap part number of text part of message $MsgTxtHash = Mail::IMAPTalk::find_message($MsgBS); $MsgPart = $MsgTxtHash->{text}->{'IMAP-Partnum'}; # Retrieve message text body $MsgTxt = $IMAP->fetch($MsgId, "body[$MsgPart]")->{$MsgId}->{body}; $IMAP->logout();
DESCRIPTION
This module communicates with an IMAP server. Each IMAP server command is mapped to a method of this object.
Although.
CLASS OVERVIEW
The object methods have been broken in several sections.
Sections
- CONSTANTS
Lists the available constants the class uses.
- CONSTRUCTOR
Explains all the options available when constructing a new instance of the
Mail::IMAPTalkclass.
- CONNECTION CONTROL METHODS
These are methods which control the overall IMAP connection object, such as logging in and logging out, how results are parsed, how folder names and message id's are treated, etc.
- IMAP FOLDER COMMAND METHODS
These are methods to inspect, add, delete and rename IMAP folders on the server.
- IMAP MESSAGE COMMAND METHODS
These are methods to retrieve, delete, move and add messages to/from IMAP folders.
- HELPER METHODS
These are extra methods that users of this class might find useful. They generally do extra parsing on returned structures to provide higher level functionality.
- INTERNAL METHODS
These are methods used internally by the
Mail::IMAPTalkobject to get work done. They may be useful if you need to extend the class yourself. Note that internal methods will always 'die' if they encounter any errors.
- INTERNAL SOCKET FUNCTIONS
These are functions used internally by the
Mail::IMAPTalkobject to read/write data to/from the IMAP connection socket. The class does its own buffering so if you want to read/write to the IMAP socket, you should use these functions.
- INTERNAL PARSING FUNCTIONS
These are functions used to parse the results returned from the IMAP server into Perl style data structures.
Method results
All methods return undef on failure. There are four main modes of failure:
- 1. An error occurred reading/writing to a socket. Maybe the server closed it, or you're not connected to any server.
-
- 2. An error occurred parsing the response of an IMAP command. This is usually only a problem if your IMAP server returns invalid data.
-
- 3. An IMAP command didn't return an 'OK' response.
-
- 4. The socket read operation timed out waiting for a response from the server.
-
In each case, some readable form of error text is placed in $@, or you can call the
get_last_error()method. For commands which return responses (e.g. fetch, getacl, etc), the result is returned. See each command for details of the response result. For commands with no response but which succeed (e.g. setacl, rename, etc) the result 'ok' is generally returned.
Method parameters
All methods which send data to the IMAP server (e.g.
fetch(),
search(), etc) have their arguments processed before they are sent. Arguments may be specified in several ways:
- scalar
The value is first checked and quoted if required. Values containing [\000\012\015] are turned into literals, values containing [\000-\040\{\} \%\*\"] are quoted by surrounding with a "..." pair (any " themselves are turned into \"). undef is turned into NIL
- file ref
The contents of the file is sent as an IMAP literal. Note that because IMAPTalk has to know the length of the file being sent, this must be a true file reference that can be seeked and not just some stream. The entire file will be sent regardless of the current seek point.
- scalar ref
The string/data in the referenced item should be sent as is, no quoting will occur, and the data won't be sent as quoted or as a literal regardless of the contents of the string/data.
- array ref
Emits an opening bracket, and then each item in the array separated by a space, and finally a closing bracket. Each item in the array is processed by the same methods, so can be a scalar, file ref, scalar ref, another array ref, etc.
- hash ref
The hash reference should contain only 1 item. The key is a text string which specifies what to do with the value item of the hash.
'Literal'
The string/data in the value is sent as an IMAP literal regardless of the actual data in the string/data.
'Quote'
The string/data in the value is sent as an IMAP quoted string regardless of the actual data in the string/data.
Examples:
# Password is automatically quoted to "nasty%*\"passwd" $IMAP->login("joe", 'nasty%*"passwd'); # Append $MsgTxt as string $IMAP->append("inbox", { Literal => $MsgTxt }) # Append MSGFILE contents as new message $IMAP->append("inbox", \*MSGFILE ])
CONSTANTS
These constants relate to the standard 4 states that an IMAP connection can be in. They are passed and returned from the
state()method. See RFC 3501 for more details about IMAP connection states.
- Unconnected
Current not connected to any server.
- Connected
Connected to a server, but not logged in.
- Authenticated
Connected and logged into a server, but not current folder.
- Selected
Connected, logged in and have 'select'ed a current folder.
CONSTRUCTOR
- Mail::IMAPTalk->new(%Options)
Creates new Mail::IMAPTalk object. The following options are supported.
- Connection Options
- Server
The hostname or IP address to connect to. This must be supplied unless the Socket option is supplied.
- Port
The port number on the host to connect to. Defaults to 143 if not supplied or 993 if not supplied and UseSSL is true.
- UseSSL
If true, use an IO::Socket::SSL connection. All other SSL_* arguments are passed to the IO::Socket::SSL constructor.
- Socket
An existing socket to use as the connection to the IMAP server. If you supply the Socket option, you should not supply a Server or Port option.
This is useful if you want to create an SSL socket connection using IO::Socket::SSL and then pass in the connected socket to the new() call.
It's also useful in conjunction with the
release_socket()method described below for reusing the same socket beyond the lifetime of the IMAPTalk object. See a description in the section
release_socket()method for more information.
You must have write flushing enabled for any socket you pass in here so that commands will actually be sent, and responses received, rather than just waiting and eventually timing out. you can do this using the Perl
select()call and $| ($AUTOFLUSH) variable as shown below.
my $ofh = select($Socket); $| = 1; select ($ofh);
- UseBlocking
For historical reasons, when reading from a socket, the module sets the socket to non-blocking and does a select(). If you're using an SSL socket that doesn't work, so you have to set UseBlocking to true to use blocking reads instead.
- State
If you supply a
Socketoption, you can specify the IMAP state the socket is currently in, namely one of 'Unconnected', 'Connected', 'Authenticated' or 'Selected'. This defaults to 'Connected' if not supplied and the
Socketoption is supplied.
- ExpectGreeting
If supplied and true, and a socket is supplied via the
Socketoption, checks that a greeting line is supplied by the server and reads the greeting line.
- PreserveINBOX
For historical reasons, the special name "INBOX" is rewritten as Inbox because it looks nicer on the way out, and back on the way in. If you want to preserve the name INBOX on the outside, set this flag to true.
- UseCompress
If you have the Compress::Zlib package installed, and the server supports compress, then setting this flag to true will cause compression to be enabled immediately after login.
- Login Options
- Username
The username to connect to the IMAP server as. If not supplied, no login is attempted and the IMAP object is left in the CONNECTED state. If supplied, you must also supply the Password option and a login is attempted. If the login fails, the connection is closed and undef is returned. If you want to do something with a connection even if the login fails, don't pass a Username option, but instead use the login method described below.
The password to use to login to the account.
- AsUser
If the server supports it, access the server as this user rather than the authenticate user.
See the
loginmethod for more information.
- IMAP message/folder options
- Uid
Control whether message ids are message uids or not. This is 1 (on) by default because generally that's how most people want to use it. This affects most commands that require/use/return message ids (e.g. fetch, search, sort, etc)
- RootFolder
If supplied, sets the root folder prefix. This is the same as calling
set_root_folder()with the value passed. If no value is supplied,
set_root_folder()is called with no value. See the
set_root_folder()method for more details.
- Separator
If supplied, sets the folder name text string separator character. Passed as the second parameter to the
set_root_folder()method.
- AltRootRegexp
If supplied, passed along with RootFolder to the
set_root_folder()method.
Examples:
$imap = Mail::IMAPTalk->new( Server => 'foo.com', Port => 143, Username => 'joebloggs', Password => 'mypassword', Separator => '.', RootFolder => 'INBOX', ) || die "Connection to foo.com failed. Reason: $@"; $imap = Mail::IMAPTalk->new( Socket => $SSLSocket, State => Mail::IMAPTalk::Authenticated, Uid => 0 ) || die "Could not query on existing socket. Reason: $@";
CONNECTION CONTROL METHODS
- login($User, $Password, [$AsUser])
Attempt to login user specified username and password.
The actual authentication may be done using the
LOGINor
AUTHENTICATEcommands, depending on what the server advertises support for.
If
$AsUseris supplied, an attempt will be made to login on behalf of that user.
Log out of IMAP server. This usually closes the servers connection as well.
- state(optional $State)
Set/get the current IMAP connection state. Returned or passed value should be one of the constants (Unconnected, Connected, Authenticated, Selected).
- uid(optional $UidMode)
Get/set the UID status of all UID possible IMAP commands. If set to 1, all commands that can take a UID are set to 'UID Mode', where any ID sent to IMAPTalk is assumed to be a UID.
- capability()
This method returns the IMAP servers capability command results. The result is a hash reference of (lc(Capability) => 1) key value pairs. This means you can do things like:
if ($IMAP->capability()->{quota}) { ... }
to test if the server has the QUOTA capability. If you just want a list of capabilities, use the Perl 'keys' function to get a list of keys from the returned hash reference.
- namespace()
Returns the result of the IMAP servers namespace command.
- noop()
Perform the standard IMAP 'noop' command which does nothing.
- enable($option)
Enabled the given imap extension
- is_open()
Returns true if the current socket connection is still open (e.g. the socket hasn't been closed this end or the other end due to a timeout).
- set_root_folder($RootFolder, $Separator, $AltRootRegexp)
Change the root folder prefix. Some IMAP servers require that all user folders/mailboxes live under a root folder prefix (current versions of cyrus for example use 'INBOX' for personal folders and 'user' for other users folders). If no value is specified, it sets it to ''. You might want to use the namespace() method to find out what roots are available.
Setting this affects all commands that take a folder argument. Basically if the foldername begins with root folder prefix, it's left as is, otherwise the root folder prefix and separator char are prefixed to the folder name.
The AltRootRegexp is a regexp that if the start of the folder name matches, does not have $RootFolder preprended. You can use this to protect other namespaces in your IMAP server.
Examples:
# This is what cyrus uses $IMAP->set_root_folder('INBOX', '.', qr/^user/); # Selects 'Inbox' (because 'Inbox' eq 'inbox' case insensitive) $IMAP->select('Inbox'); # Selects 'INBOX.blah' $IMAP->select('blah'); # Selects 'INBOX.Inbox.fred' #IMAP->select('Inbox.fred'); # Selects 'user.john' (because 'user' is alt root) #IMAP->select('user.john'); # Selects 'user.john'
- _set_separator($Separator)
Checks if the given separator is the same as the one we used before. If not, it calls set_root_folder to recreate the settings with the new Separator.
- literal_handle_control(optional $FileHandle)
Sets the mode whether to read literals as file handles or scalars.
You should pass a filehandle here that any literal will be read into. To turn off literal reads into a file handle, pass a 0.
Examples:
# Read rfc822 text of message 3 into file # (note that the file will have /r/n line terminators) open(F, ">messagebody.txt"); $IMAP->literal_handle_control(\*F); $IMAP->fetch(3, 'rfc822'); $IMAP->literal_handle_control(0);
- release_socket($Close)
Release IMAPTalk's ownership of the current socket it's using so it's not disconnected on DESTROY. This returns the socket, and makes sure that the IMAPTalk object doesn't hold a reference to it any more and the connection state is set to "Unconnected".
This means you can't call any methods on the IMAPTalk object any more.
If the socket is being released and being closed, then $Close is set to true.
- get_last_error()
Returns a text string which describes the last error that occurred.
- get_last_completion_response()
Returns the last completion response to the tagged command.
This is either the string "ok", "no" or "bad" (always lower case)
- get_response_code($Response)
Returns the extra response data generated by a previous call. This is most often used after calling select which usually generates some set of the following sub-results.
permanentflags
Array reference of flags which are stored permanently.
uidvalidity
Whether the current UID set is valid. See the IMAP RFC for more information on this. If this value changes, then all UIDs in the folder have been changed.
uidnext
The next UID number that will be assigned.
exists
Number of messages that exist in the folder.
recent
Number of messages that are recent in the folder.
Other possible responses are alert, newname, parse, trycreate, appenduid, etc.
The values are stored in a hash keyed on the $Response item. They're kept until either overwritten by a future response, or explicitly cleared via clear_response_code().
Examples:
# Select inbox and get list of permanent flags, uidnext and number # of message in the folder $IMAP->select('inbox'); my $NMessages = $IMAP->get_response_code('exists'); my $PermanentFlags = $IMAP->get_response_code('permanentflags'); my $UidNext = $IMAP->get_response_code('uidnext');
- clear_response_code($Response)
Clears any response code information. Response code information is not normally cleared between calls.
- parse_mode(ParseOption => $ParseMode)
Changes how results of fetch commands are parsed. Available options are:
- BodyStructure
Parse bodystructure into more Perl-friendly structure See the FETCH RESULTS section.
- Envelope
Parse envelopes into more Perl-friendly structure See the FETCH RESULTS section.
- Annotation
Parse annotation (from RFC 5257) into more Perl-friendly structure See the FETCH RESULTS section.
- EnvelopeRaw
If parsing envelopes, create To/Cc/Bcc and Raw-To/Raw-Cc/Raw-Bcc entries which are array refs of 4 entries each as returned by the IMAP server.
- DecodeUTF8
If parsing envelopes, decode any MIME encoded headers into Perl UTF-8 strings.
For this to work, you must have 'used' Mail::IMAPTalk with:
use Mail::IMAPTalk qw(:utf8support ...)
- set_tracing($Tracer)
Allows you to trace both IMAP input and output sent to the server and returned from the server. This is useful for debugging. Returns the previous value of the tracer and then sets it to the passed value. Possible values for $Tracer are:
- 0
Disable all tracing.
- 1
Print to STDERR.
- Code ref
Call code ref for each line input and output. Pass line as parameter.
- Glob ref
Print to glob.
- Scalar ref
Appends to the referenced scalar.
Note: literals are never passed to the tracer.
- set_unicode_folders($Unicode)
$Unicode should be 1 or 0
Sets whether folder names are expected and returned as perl unicode strings.
The default is currently 0, BUT YOU SHOULD NOT ASSUME THIS, because it will probably change in the future.
If you want to work with perl unicode strings for folder names, you should call $ImapTalk->set_unicode_folders(1) and IMAPTalk will automatically encode the unicode strings into IMAP-UTF7 when sending to the IMAP server, and will also decode IMAP-UTF7 back into perl unicode strings when returning results from the IMAP server.
If you want to work with folder names in IMAP-UTF7 bytes, then call $ImapTalk->set_unicode_folders(0) and IMAPTalk will leave folder names as bytes when sending to and returning results from the IMAP server.
IMAP FOLDER COMMAND METHODS
Note: In all cases where a folder name is used, the folder name is first manipulated according to the current root folder prefix as described in
set_root_folder().
- select($FolderName, @Opts)
Perform the standard IMAP 'select' command to select a folder for retrieving/moving/adding messages. If $Opts{ReadOnly} is true, the IMAP EXAMINE verb is used instead of SELECT.
Mail::IMAPTalk will cache the currently selected folder, and if you issue another ->select("XYZ") for the folder that is already selected, it will just return immediately. This can confuse code that expects to get side effects of a select call. For that case, call ->unselect() first, then ->select().
- unselect()
Performs the standard IMAP unselect command.
- examine($FolderName)
Perform the standard IMAP 'examine' command to select a folder in read only mode for retrieving messages. This is the same as
select($FolderName, 1). See
select()for more details.
- create($FolderName)
Perform the standard IMAP 'create' command to create a new folder.
- delete($FolderName)
Perform the standard IMAP 'delete' command to delete a folder.
- localdelete($FolderName)
Perform the IMAP 'localdelete' command to delete a folder (doesn't delete subfolders even of INBOX, is always immediate.
- rename($OldFolderName, $NewFolderName)
Perform the standard IMAP 'rename' command to rename a folder.
- list($Reference, $Name)
Perform the standard IMAP 'list' command to return a list of available folders.
- xlist($Reference, $Name)
Perform the IMAP 'xlist' extension command to return a list of available folders and their special use attributes.
- id($key = $value, ...)>
Perform the IMAP extension command 'id'
- lsub($Reference, $Name)
Perform the standard IMAP 'lsub' command to return a list of subscribed folders
- subscribe($FolderName)
Perform the standard IMAP 'subscribe' command to subscribe to a folder.
- unsubscribe($FolderName)
Perform the standard IMAP 'unsubscribe' command to unsubscribe from a folder.
- check()
Perform the standard IMAP 'check' command to checkpoint the current folder.
- setacl($FolderName, $User, $Rights)
Perform the IMAP 'setacl' command to set the access control list details of a folder/mailbox. See RFC 4314 for more details on the IMAP ACL extension. $User is the user name to set the access rights for. $Rights is either a list of absolute rights to set, or a list prefixed by a - to remove those rights, or a + to add those)
-
Due to ambiguity in RFC 2086, some existing RFC 2086 server implementations use the "c" right to control the DELETE command. Others chose to use the "d" right to control the DELETE command. See the 2.1.1. Obsolete Rights in RFC 4314 for more details.
- c - create (CREATE new sub-mailboxes in any implementation-defined hierarchy)
-
- d - delete (STORE DELETED flag, perform EXPUNGE)
-
The standard access control configurations for cyrus are
Examples:
# Get full access for user 'joe' on his own folder $IMAP->setacl('user.joe', 'joe', 'lrswipcda') || die "IMAP error: $@"; # Remove write, insert, post, create, delete access for user 'andrew' $IMAP->setacl('user.joe', 'andrew', '-wipcd') || die "IMAP error: $@"; # Add lookup, read, keep unseen information for user 'paul' $IMAP->setacl('user.joe', 'paul', '+lrs') || die "IMAP error: $@";
- getacl($FolderName)
Perform the IMAP 'getacl' command to get the access control list details of a folder/mailbox. See RFC 4314 for more details on the IMAP ACL extension. Returns an array of pairs. Each pair is a username followed by the access rights for that user. See setacl for more information on access rights.
Examples:
my $Rights = $IMAP->getacl('user.joe') || die "IMAP error : $@"; $Rights = [ 'joe', 'lrs', 'andrew', 'lrswipcda' ]; $IMAP->setacl('user.joe', 'joe', 'lrswipcda') || die "IMAP error : $@"; $IMAP->setacl('user.joe', 'andrew', '-wipcd') || die "IMAP error : $@"; $IMAP->setacl('user.joe', 'paul', '+lrs') || die "IMAP error : $@"; $Rights = $IMAP->getacl('user.joe') || die "IMAP error : $@"; $Rights = [ 'joe', 'lrswipcd', 'andrew', 'lrs', 'paul', 'lrs' ];
- deleteacl($FolderName, $Username)
Perform the IMAP 'deleteacl' command to delete all access control information for the given user on the given folder. See setacl for more information on access rights.
Examples:
my $Rights = $IMAP->getacl('user.joe') || die "IMAP error : $@"; $Rights = [ 'joe', 'lrswipcd', 'andrew', 'lrs', 'paul', 'lrs' ]; # Delete access information for user 'andrew' $IMAP->deleteacl('user.joe', 'andrew') || die "IMAP error : $@"; $Rights = $IMAP->getacl('user.joe') || die "IMAP error : $@"; $Rights = [ 'joe', 'lrswipcd', 'paul', 'lrs' ];
- setquota($FolderName, $QuotaDetails)
Perform the IMAP 'setquota' command to set the usage quota details of a folder/mailbox. See RFC 2087 for details of the IMAP quota extension. $QuotaDetails is a bracketed list of limit item/value pairs which represent a particular type of limit and the value to set it to. Current limits are:
Examples:
# Set maximum size of folder to 50M and 1000 messages $IMAP->setquota('user.joe', '(storage 50000)') || die "IMAP error: $@"; $IMAP->setquota('user.john', '(messages 1000)') || die "IMAP error: $@"; # Remove quotas $IMAP->setquota('user.joe', '()') || die "IMAP error: $@";
- getquota($FolderName)
Perform the standard IMAP 'getquota' command to get the quota details of a folder/mailbox. See RFC 2087 for details of the IMAP quota extension. Returns an array reference to quota limit triplets. Each triplet is made of: limit item, current value, maximum value.
Note that this only returns the quota for a folder if it actually has had a quota set on it. It's possible that a parent folder might have a quota as well which affects sub-folders. Use the getquotaroot to find out if this is true.
Examples:
my $Result = $IMAP->getquota('user.joe') || die "IMAP error: $@"; $Result = [ 'STORAGE', 31, 50000, 'MESSAGE', 5, 1000 ];
- getquotaroot($FolderName)
Perform the IMAP 'getquotaroot' command to get the quota details of a folder/mailbox and possible root quota as well. See RFC 2087 for details of the IMAP quota extension. The result of this command is a little complex. Unfortunately it doesn't map really easily into any structure since there are several different responses.
Basically it's a hash reference. The 'quotaroot' item is the response which lists the root quotas that apply to the given folder. The first item is the folder name, and the remaining items are the quota root items. There is then a hash item for each quota root item. It's probably easiest to look at the example below.
Examples:
my $Result = $IMAP->getquotaroot('user.joe.blah') || die "IMAP error: $@"; $Result = { 'quotaroot' => [ 'user.joe.blah', 'user.joe', '' ], 'user.joe' => [ 'STORAGE', 31, 50000, 'MESSAGES', 5, 1000 ], '' => [ 'MESSAGES', 3498, 100000 ] };
- message_count($FolderName)
Return the number of messages in a folder. See also
status()for getting more information about messages in a folder.
- status($FolderName, $StatusList)
Perform the standard IMAP 'status' command to retrieve status information about a folder/mailbox.
The $StatusList is a bracketed list of folder items to obtain the status of. Can contain: messages, recent, uidnext, uidvalidity, unseen.
The return value is a hash reference of lc(status-item) => value.
Examples:
my $Res = $IMAP->status('inbox', '(MESSAGES UNSEEN)'); $Res = { 'messages' => 8, 'unseen' => 2 };
- multistatus($StatusList, @FolderNames)
Performs many IMAP 'status' commands on a list of folders. Sends all the commands at once and wait for responses. This speeds up latency issues.
Returns a hash ref of folder name => status results.
If an error occurs, the annotation result is a scalar ref to the completion response string (eg 'bad', 'no', etc)
- getannotation($FolderName, $Entry, $Attribute)
Perform the IMAP 'getannotation' command to get the annotation(s) for a mailbox. See imap-annotatemore extension for details.
Examples:
my $Result = $IMAP->getannotation('user.joe.blah', '/*' '*') || die "IMAP error: $@"; $Result = { 'user.joe.blah' => { '/vendor/cmu/cyrus-imapd/size' => { 'size.shared' => '5', 'content-type.shared' => 'text/plain', 'value.shared' => '19261' }, '/vendor/cmu/cyrus-imapd/lastupdate' => { 'size.shared' => '26', 'content-type.shared' => 'text/plain', 'value.shared' => '26-Mar-2004 13:31:56 -0800' }, '/vendor/cmu/cyrus-imapd/partition' => { 'size.shared' => '7', 'content-type.shared' => 'text/plain', 'value.shared' => 'default' } } };
- getmetadata($FolderName, [ \%Options ], @Entries)
Perform the IMAP 'getmetadata' command to get the metadata items for a mailbox. See RFC 5464 for details.
If $Options is passed, it is a hashref of options to set.
If foldername is the empty string, gets server annotations
Examples:
my $Result = $IMAP->getmetadata('user.joe.blah', {depth => 'infinity'}, '/shared') || die "IMAP error: $@"; $Result = { 'user.joe.blah' => { '/shared/vendor/cmu/cyrus-imapd/size' => '19261', '/shared/vendor/cmu/cyrus-imapd/lastupdate' => '26-Mar-2004 13:31:56 -0800', '/shared/vendor/cmu/cyrus-imapd/partition' => 'default', } }; my $Result = $IMAP->getmetadata('', "/shared/comment"); $Result => { '' => { '/shared/comment' => "Shared comment", } };
- multigetmetadata(\@Entries, @FolderNames)
Performs many IMAP 'getmetadata' commands on a list of folders. Sends all the commands at once and wait for responses. This speeds up latency issues.
Returns a hash ref of folder name => metadata results.
If an error occurs, the annotation result is a scalar ref to the completion response string (eg 'bad', 'no', etc)
- setannotation($FolderName, $Entry, [ $Attribute, $Value ])
Perform the IMAP 'setannotation' command to get the annotation(s) for a mailbox. See imap-annotatemore extension for details.
Examples:
my $Result = $IMAP->setannotation('user.joe.blah', '/comment', [ 'value.priv' 'A comment' ]) || die "IMAP error: $@";
- setmetadata($FolderName, $Name, $Value, $Name2, $Value2)
Perform the IMAP 'setmetadata' command. See RFC 5464 for details.
Examples:
my $Result = $IMAP->setmetadata('user.joe.blah', '/comment', 'A comment') || die "IMAP error: $@";
Perform the standard IMAP 'close' command to expunge deleted messages from the current folder and return to the Authenticated state.
- idle(\&Callback, [ $Timeout ])
Perform an IMAP idle call. Call given callback for each IDLE event received.
If the callback returns 0, the idle continues. If the callback returns 1, the idle is finished and this call returns.
If no timeout is passed, will continue to idle until the callback returns 1 or the server disconnects.
If a timeout is passed (including a 0 timeout), the call will return if no events are received within the given time. It will return the result of the DONE command, and set $Self->get_response_code('timeout') to true.
If the server closes the connection with a "bye" response, it will return undef and $@ =~ /bye/ will be true with the remainder of the bye line following.
IMAP MESSAGE COMMAND METHODS
- fetch([ \%ParseMode ], $MessageIds, $MessageItems)
Perform the standard IMAP 'fetch' command to retrieve the specified message items from the specified message IDs.
The first parameter can be an optional hash reference that overrides particular parse mode parameters just for this fetch. See
parse_modefor possible keys.
$MessageIdscan be one of two forms:
A text string with a comma separated list of message ID's or message ranges separated by colons. A '*' represents the highest message number.
Examples:
'1' - first message
'1,2,5'
'1:*' - all messages
'1,3:*' - all but message 2
Note that , separated lists and : separated ranges can be mixed, but to make sure a certain hack works, if a '*' is used, it must be the last character in the string.
An array reference with a list of message ID's or ranges. The array contents are
join(',', ...)ed together.
Note: If the
uid()state has been set to true, then all message ID's must be message UIDs.
$MessageItemscan be one of, or a bracketed list of:
uid
flags
internaldate
envelope
bodystructure
body
body[section]<partial>
body.peek[section]<partial>
rfc822
rfc822.header
rfc822.size
rfc822.text
fast
all
full
It would be a good idea to see RFC 3501 for what all these means.
Examples:
my $Res = $IMAP->fetch('1:*', 'rfc822.size'); my $Res = $IMAP->fetch([1,2,3], '(bodystructure envelope)');
Return results:
The results returned by the IMAP server are parsed into a Perl structure. See the section FETCH RESULTS for all the interesting details.
Note that message can disappear on you, so you may not get back all the entries you expect in the hash
There is one piece of magic. If your request is for a single uid, (eg "123"), and no data is return, we return undef, because it's easier to handle as an error condition.
- copy($MsgIds, $ToFolder)
Perform standard IMAP copy command to copy a set of messages from one folder to another.
- append($FolderName, optional $MsgFlags, optional $MsgDate, $MessageData)
Perform standard IMAP append command to append a new message into a folder.
The $MessageData to append can either be a Perl scalar containing the data, or a file handle to read the data from. In each case, the data must be in proper RFC 822 format with \r\n line terminators.
Any optional fields not needed should be removed, not left blank.
Examples:
# msg.txt should have \r\n line terminators open(F, "msg.txt"); $IMAP->append('inbox', \*F); my $MsgTxt =<<MSG; From: blah\@xyz.com To: whoever\@whereever.com ... MSG $MsgTxt =~ s/\n/\015\012/g; $IMAP->append('inbox', { Literal => $MsgTxt });
- search($MsgIdSet, @SearchCriteria)
Perform standard IMAP search command. The result is an array reference to a list of message IDs (or UIDs if in Uid mode) of messages that are in the $MsgIdSet and also meet the search criteria.
@SearchCriteria is a list of search specifications, for example to look for ASCII messages bigger than 2000 bytes you would set the list to be:
my @SearchCriteria = ('CHARSET', 'US-ASCII', 'LARGER', '2000');
Examples:
my $Res = $IMAP->search('1:*', 'NOT', 'DELETED'); $Res = [ 1, 2, 5 ];
- store($MsgIdSet, $FlagOperation, $Flags)
Perform standard IMAP store command. Changes the flags associated with a set of messages.
Examples:
$IMAP->store('1:*', '+flags', '(\\deleted)'); $IMAP->store('1:*', '-flags.silent', '(\\read)');
- expunge()
Perform standard IMAP expunge command. This actually deletes any messages marked as deleted.
- uidexpunge($MsgIdSet)
Perform IMAP uid expunge command as per RFC 2359.
- sort($SortField, $CharSet, @SearchCriteria)
Perform extension IMAP sort command. The result is an array reference to a list of message IDs (or UIDs if in Uid mode) in sorted order.
It would probably be a good idea to look at the sort RFC 5256 details at somewhere like :
Examples:
my $Res = $IMAP->sort('(subject)', 'US-ASCII', 'NOT', 'DELETED'); $Res = [ 5, 2, 3, 1, 4 ];
- thread($ThreadType, $CharSet, @SearchCriteria)
Perform extension IMAP thread command. The $ThreadType should be one of 'REFERENCES' or 'ORDEREDSUBJECT'. You should check the
capability()of the server to see if it supports one or both of these.
Examples
my $Res = $IMAP->thread('REFERENCES', 'US-ASCII', 'NOT', 'DELETED'); $Res = [ [10, 15, 20], [11], [ [ 12, 16 ], [13, 17] ];
- fetch_flags($MessageIds)
Perform an IMAP 'fetch flags' command to retrieve the specified flags for the specified messages.
This is just a special fast path version of
fetch.
- fetch_meta($MessageIds, @MetaItems)
Perform an IMAP 'fetch' command to retrieve the specified meta items. These must be simple items that return only atoms (eg no flags, bodystructure, body, envelope, etc)
This is just a special fast path version of
fetch.
IMAP CYRUS EXTENSION METHODS
Methods provided by extensions to the cyrus IMAP server
Note: In all cases where a folder name is used, the folder name is first manipulated according to the current root folder prefix as described in
set_root_folder().
- xrunannotator($MessageIds)
Run the xannotator command on the given message id's
- xconvfetch($CIDs, $ChangedSince, $Items)
Use the server XCONVFETCH command to fetch information about messages in a conversation.
CIDs can be a single CID or an array ref of CIDs.
my $Res = $IMAP->xconvfetch('2fc2122a109cb6c8', 0, '(uid cid envelope)') $Res = { state => { CID => [ HighestModSeq ], ... } folders => [ [ FolderName, UidValidity ], ..., ], found => [ [ FolderIndex, Uid, { Details } ], ... ], }
Note: FolderIndex is an integer index into the folders list
- xconvmeta($CIDs, $Items)
Use the server XCONVMETA command to fetch information about a conversation.
CIDs can be a single CID or an array ref of CIDs.
my $Res = $IMAP->xconvmeta('2fc2122a109cb6c8', '(senders exists unseen)') $Res = { CID1 => { senders => { name => ..., email => ... }, exists => ..., unseen => ..., ... }, CID2 => { ... }, }
- xconvsort($Sort, $Window, $Charset, @SearchParams)
Use the server XCONVSORT command to fetch exemplar conversation messages in a mailbox.
my $Res = $IMAP->xconvsort( [ qw(reverse arrival) ], [ 'conversations', position => [1, 10] ], 'utf-8', 'ALL') $Res = { sort => [ Uid, ... ], position => N, highestmodseq => M, uidvalidity => V, uidnext => U, total => R, }
- xconvupdates($Sort, $Window, $Charset, @SearchParams)
Use the server XCONVUPDATES command to find changed exemplar messages
my $Res = $IMAP->xconvupdates( [ qw(reverse arrival) ], [ 'conversations', changedsince => [ $mod_seq, $uid_next ] ], 'utf-8', 'ALL'); $Res = { added => [ [ Uid, Pos ], ... ], removed => [ Uid, ... ], changed => [ CID, ... ], highestmodseq => M, uidvalidity => V, uidnext => U, total => R, }
- xconvmultisort($Sort, $Window, $Charset, @SearchParams)
Use the server XCONVMULTISORT command to fetch messages across all mailboxes
my $Res = $IMAP->xconvmultisort( [ qw(reverse arrival) ], [ 'conversations', postion => [1,10] ], 'utf-8', 'ALL') $Res = { folders => [ [ FolderName, UidValidity ], ... ], sort => [ FolderIndex, Uid ], ... ], position => N, highestmodseq => M, total => R, }
Note: FolderIndex is an integer index into the folders list
- xsnippets($Items, $Charset, @SearchParams)
Use the server XSNIPPETS command to fetch message search snippets
my $Res = $IMAP->xsnippets( [ [ FolderName, UidValidity, [ Uid, ... ] ], ... ], 'utf-8', 'ALL') $Res = { folders => [ [ FolderName, UidValidity ], ... ], snippets => [ [ FolderIndex, Uid, Location, Snippet ], ... ] ]
Note: FolderIndex is an integer index into the folders list
IMAP HELPER FUNCTIONS
- get_body_part($BodyStruct, $PartNum)
This is a helper function that can be used to further parse the results of a fetched bodystructure. Given a top level body structure, and a part number, it returns the reference to the bodystructure sub part which that part number refers to.
Examples:
# Fetch body structure my $FR = $IMAP->fetch(1, 'bodystructure'); my $BS = $FR->{1}->{bodystructure}; # Parse further to find particular sub part my $P12 = $IMAP->get_body_part($BS, '1.2'); $P12->{'IMAP->Partnum'} eq '1.2' || die "Unexpected IMAP part number";
- find_message($BodyStruct)
This is a helper function that can be used to further parse the results of a fetched bodystructure. It returns a hash reference with the following items.
text => $best_text_part html => $best_html_part (optional) textlist => [ ... text/html (if no alt text bits)/image (if inline) parts ... ] htmllist => [ ... text (if no alt html bits)/html/image (if inline) parts ... ] att => [ { bs => $part, text => 0/1, html => 0/1, msg => 1/0, }, { ... }, ... ]
For instance, consider a message with text and html pages that's then gone through a list software manager that attaches a header/footer
multipart/mixed text/plain, cd=inline - A multipart/mixed multipart/alternative multipart/mixed text/plain, cd=inline - B image/jpeg, cd=inline - C text/plain, cd=inline - D multipart/related text/html - E image/jpeg - F image/jpeg, cd=attachment - G application/x-excel - H message/rfc822 - J text/plain, cd=inline - K
In this case, we'd have the following list items
text => B html => E textlist => [ A, B, C, D, K ] htmllist => [ A, E, K ] att => [ { bs => C, text => 1, html => 1 }, { bs => F, text => 1, html => 0 }, { bs => G, text => 1, html => 1 }, { bs => H, text => 1, html => 1 }, { bs => J, text => 0, html => 0, msg => 1 }, ]
Examples:
# Fetch body structure my $FR = $IMAP->fetch(1, 'bodystructure'); my $BS = $FR->{1}->{bodystructure}; # Parse further to find message components my $MC = $IMAP->find_message($BS); $MC = { 'plain' => ... text body struct ref part ..., 'html' => ... html body struct ref part (if present) ... 'htmllist' => [ ... html body struct ref parts (if present) ... ] }; # Now get the text part of the message my $MT = $IMAP->fetch(1, 'body[' . $MC->{text}->{'IMAP-Part'} . ']');
- generate_cid( $Token, $PartBS )
This method generates a ContentID based on $Token and $PartBS.
The same value should always be returned for a given $Token and $PartBS
- build_cid_map($BodyStruct, [ $IMAP, $Uid, $GenCidToken ])
This is a helper function that can be used to further parse the results of a fetched bodystructure. It recursively parses the bodystructure and returns a hash of Content-ID to bodystruct part references. This is useful when trying to determine CID links from an HTML message.
If you pass a Mail::IMAPTalk object as the second parameter, the CID map built may be even more detailed. It seems some stupid versions of exchange put details in the Content-Location header rather than the Content-Type header. If that's the case, this will try and fetch the header from the message
Examples:
# Fetch body structure my $FR = $IMAP->fetch(1, 'bodystructure'); my $BS = $FR->{1}->{bodystructure}; # Parse further to get CID links my $CL = build_cid_map($BS); $CL = { '2958293123' => ... ref to body part ..., ... };
- obliterate($CyrusName)
Given a username (optionally username\@domain) immediately delete all messages belonging to this user. Uses LOCALDELETE. Quite FastMail Patchd Cyrus specific.
IMAP CALLBACKS
By default, these methods do nothing, but you can dervice from Mail::IMAPTalk and override these methods to trap any things you want to catch
- cb_switch_folder($CurrentFolder, $NewFolder)
Called when the currently selected folder is being changed (eg 'select' called and definitely a different folder is being selected, or 'unselect' methods called)
- cb_folder_changed($Folder)
Called when a command changes the contents of a folder (eg copy, append, etc). $Folder is the name of the folder that's changing.
FETCH RESULTS
The 'fetch' operation is probably the most common thing you'll do with an IMAP connection. This operation allows you to retrieve information about a message or set of messages, including header fields, flags or parts of the message body.
Mail::IMAPTalkwill always parse the results of a fetch call into a Perl like structure, though 'bodystructure', 'envelope' and 'uid' responses may have additional parsing depending on the
parse_modestate and the
uidstate (see below).
For an example case, consider the following IMAP commands and responses (C is what the client sends, S is the server response).
C: a100 fetch 5,6 (flags rfc822.size uid) S: * 1 fetch (UID 1952 FLAGS (\recent \seen) RFC822.SIZE 1150) S: * 2 fetch (UID 1958 FLAGS (\recent) RFC822.SIZE 110) S: a100 OK Completed
The fetch command can be sent by calling:
my $Res = $IMAP->fetch('1:*', '(flags rfc822.size uid)');
The result in response will look like this:
$Res = { 1 => { 'uid' => 1952, 'flags' => [ '\\recent', '\\seen' ], 'rfc822.size' => 1150 }, 2 => { 'uid' => 1958, 'flags' => [ '\\recent' ], 'rfc822.size' => 110 } };
A couple of points to note:
The message IDs have been turned into a hash from message ID to fetch response result.
The response items (e.g. uid, flags, etc) have been turned into a hash for each message, and also changed to lower case values.
Other bracketed (...) lists have become array references.
In general, this is how all fetch responses are parsed. There is one major difference however when the IMAP connection is in 'uid' mode. In this case, the message IDs in the main hash are changed to message UIDs, and the 'uid' entry in the inner hash is removed. So the above example would become:
my $Res = $IMAP->fetch('1:*', '(flags rfc822.size)'); $Res = { 1952 => { 'flags' => [ '\\recent', '\\seen' ], 'rfc822.size' => 1150 }, 1958 => { 'flags' => [ '\\recent' ], 'rfc822.size' => 110 } };
Bodystructure
When dealing with messages, we need to understand the MIME structure of the message, so we can work out what is the text body, what is attachments, etc. This is where the 'bodystructure' item from an IMAP server comes in.
C: a101 fetch 1 (bodystructure) S: * 1 fetch (BODYSTRUCTURE ("TEXT" "PLAIN" NIL NIL NIL "QUOTED-PRINTABLE" 255 11 NIL ("INLINE" NIL) NIL)) S: a101 OK Completed
The fetch command can be sent by calling:
my $Res = $IMAP->fetch(1, 'bodystructure');
As expected, the resultant response would look like this:
$Res = { 1 => { 'bodystructure' => [ 'TEXT', 'PLAIN', undef, undef, undef, 'QUOTED-PRINTABLE', 255, 11, UNDEF, [ 'INLINE', undef ], undef ] } };
However, if you set the
parse_mode(BodyStructure =1)>, then the result would be:
$Res = { '1' => { 'bodystructure' => { 'MIME-Type' => 'text', 'MIME-Subtype' => 'plain', 'MIME-TxtType' => 'text/plain', 'Content-Type' => {}, 'Content-ID' => undef, 'Content-Description' => undef, 'Content-Transfer-Encoding' => 'QUOTED-PRINTABLE', 'Size' => '3569', 'Lines' => '94', 'Content-MD5' => undef, 'Disposition-Type' => 'inline', 'Content-Disposition' => {}, 'Content-Language' => undef, 'Remainder' => [], 'IMAP-Partnum' => '' } } };
A couple of points to note here:
All the positional fields from the bodystructure list response have been turned into nicely named key/value hash items.
The MIME-Type and MIME-Subtype fields have been made lower case.
An IMAP-Partnum item has been added. The value in this field can be passed as the 'section' number of an IMAP body fetch call to retrieve the text of that IMAP section.
In general, the following items are defined for all body structures:
MIME-Type
MIME-Subtype
Content-Type
Disposition-Type
Content-Disposition
Content-Language
For all bodystructures EXCEPT those that have a MIME-Type of 'multipart', the following are defined:
Content-ID
Content-Description
Content-Transfer-Encoding
Size
Content-MD5
Remainder
IMAP-Partnum
For bodystructures where MIME-Type is 'text', an extra item 'Lines' is defined.
For bodystructures where MIME-Type is 'message' and MIME-Subtype is 'rfc822', the extra items 'Message-Envelope', 'Message-Bodystructure' and 'Message-Lines' are defined. The 'Message-Bodystructure' item is itself a reference to an entire bodystructure hash with all the format information of the contained message. The 'Message-Envelope' item is a hash structure with the message header information. See the Envelope entry below.
For bodystructures where MIME-Type is 'multipart', an extra item 'MIME-Subparts' is defined. The 'MIME-Subparts' item is an array reference, with each item being a reference to an entire bodystructure hash with all the format information of each MIME sub-part.
For further processing, you can use the find_message() function. This will analyse the body structure and find which part corresponds to the main text/html message parts to display. You can also use the find_cid_parts() function to find CID links in an html message.
Envelope
The envelope structure contains most of the addressing header fields from an email message. The following shows an example envelope fetch (the response from the IMAP server has been neatened up here)
C: a102 fetch 1 (envelope) S: * 1 FETCH (ENVELOPE ("Tue, 7 Nov 2000 08:31:21 UT" # Date "FW: another question" # Subject (("John B" NIL "jb" "abc.com")) # From (("John B" NIL "jb" "abc.com")) # Sender (("John B" NIL "jb" "abc.com")) # Reply-To (("Bob H" NIL "bh" "xyz.com") # To ("K Jones" NIL "kj" "lmn.com")) NIL # Cc NIL # Bcc NIL # In-Reply-To NIL) # Message-ID ) S: a102 OK Completed
The fetch command can be sent by calling:
my $Res = $IMAP->fetch(1, 'envelope');
And you get the idea of what the resultant response would be. Again if you change
parse_mode(Envelope =1)>, you get a neat structure as follows:
$Res = { '1' => { 'envelope' => { 'Date' => 'Tue, 7 Nov 2000 08:31:21 UT', 'Subject' => 'FW: another question', 'From' => '"John B" <[email protected]>', 'Sender' => '"John B" <[email protected]>', 'Reply-To' => '"John B" <[email protected]>', 'To' => '"Bob H" <[email protected]>, "K Jones" <[email protected]>', 'Cc' => '', 'Bcc' => '', 'In-Reply-To' => undef, 'Message-ID' => undef, 'From-Raw' => [ [ 'John B', undef, 'jb', 'abc.com' ] ], 'Sender-Raw' => [ [ 'John B', undef, 'jb', 'abc.com' ] ], 'Reply-To-Raw' => [ [ 'John B', undef, 'jb', 'abc.com' ] ], 'To-Raw' => [ [ 'Bob H', undef, 'bh', 'xyz.com' ], [ 'K Jones', undef, 'kj', 'lmn.com' ], ], 'Cc-Raw' => [], 'Bcc-Raw' => [], } } };
All the fields here are from straight from the email headers. See RFC 822 for more details.
Annotation
If the server supports RFC 5257 (ANNOTATE Extension), then you can fetch per-message annotations.
Annotation responses would normally be returned as a a nested set of arrays. However it's much easier to access the results as a nested set of hashes, so the results are so converted if the Annotation parse mode is enabled, which is on by default.
Part of an example from the RFC
S: * 12 FETCH (UID 1123 ANNOTATION (/comment (value.priv "My comment" size.priv "10") /altsubject (value.priv "Rhinoceroses!" size.priv "13")
So the fetch command:
my $Res = $IMAP->fetch(1123, 'annotation', [ '/*', [ 'value.priv', 'size.priv' ] ]);
Would have the result:
$Res = { '1123' => { 'annotation' => { '/comment' => { 'value.priv' => 'My comment', 'size.priv => 10 }, '/altsubject' => { 'value.priv' => '"Rhinoceroses', 'size.priv => 13 } } } }
INTERNAL METHODS
- _imap_cmd($Command, $IsUidCmd, $RespItems, @Args)
Executes a standard IMAP command.
- Method arguments
- $Command
Text string of command to call IMAP server with (e.g. 'select', 'search', etc).
- $IsUidCmd
1 if command involved message ids and can be prefixed with UID, 0 otherwise.
- $RespItems
Responses to look for from command (eg 'list', 'fetch', etc). Commands which return results usually return them untagged. The following is an example of fetching flags from a number of messages.
C123 uid fetch 1:* (flags) * 1 FETCH (FLAGS (\Seen) UID 1) * 2 FETCH (FLAGS (\Seen) UID 2) C123 OK Completed
Between the sending of the command and the 'OK Completed' response, we have to pick up all the untagged 'FETCH' response items so we would pass 'fetch' (always use lower case) as the $RespItems to extract.
This can also be a hash ref of callback functions. See _parse_response for more examples
- @Args
Any extra arguments to pass to command.
- _send_cmd($Self, $Cmd, @InArgs)
Helper method used by the _imap_cmd method to actually build (and quote where necessary) the command arguments and then send the actual command.
- _send_data($Self, $Opts, $Buffer, @Args)
Helper method used by the _send_cmd method to actually build (and quote where necessary) the command arguments and then send the actual command.
- _parse_response($Self, $RespItems, [ \%ParseMode ])
Helper method called by _imap_cmd after sending the command. This methods retrieves data from the IMAP socket and parses it into Perl structures and returns the results.
$RespItems is either a string, which is the untagged response(s) to find and return, or for custom processing, it can be a hash ref.
If a hash ref, then each key will be an untagged response to look for, and each value a callback function to call for the corresponding untagged response.
Each callback will be called with 2 or 3 arguments; the untagged response string, the remainder of the line parsed into an array ref, and for fetch type responses, the id will be passed as the third argument.
One other piece of magic, if you pass a 'responseitem' key, then the value should be a string, and will be the untagged response returned from the function
- _require_capability($Self, $Capability)
Helper method which checks that the server has a certain capability. If not, it sets the internal last error, $@ and returns undef.
- _trace($Self, $Line)
Helper method which outputs any tracing data.
- _is_current_folder($Self, $FolderName)
Return true if a folder is currently selected and that folder is $FolderName
INTERNAL SOCKET FUNCTIONS
- _next_atom($Self)
Returns the next atom from the current line. Uses $Self->{ReadLine} for line data, or if undef, fills it with a new line of data from the IMAP connection socket and then begins processing.
If the next atom is:
An unquoted string, simply returns the string.
A quoted string, unquotes the string, changes any occurances of \" to " and returns the string.
A literal (e.g. {NBytes}\r\n), reads the number of bytes of data in the literal into a scalar or file (depending on
literal_handle_control).
A bracketed structure, reads all the sub-atoms within the structure and returns an array reference with all the sub-atoms.
In each case, after parsing the atom, it removes any trailing space separator, and then returns the remainder of the line to $Self->{ReadLine} ready for the next call to
_next_atom().
- _next_simple_atom($Self)
Faster version of _next_atom() for known simple cases
- _remaining_atoms($Self)
Returns all the remaining atoms for the current line in the read line buffer as an array reference. Leaves $Self->{ReadLine} eq ''. See
_next_atom()
- _remaining_line($Self)
Returns the remaining data in the read line buffer ($Self->{ReadLine}) as a scalar string/data value.
- _fill_imap_read_buffer($Self)
Wait until data is available on the IMAP connection socket (or a timeout occurs). Read the data into the internal buffer $Self->{ReadBuf}. You can then use
_imap_socket_read_line(),
_imap_socket_read_bytes()or
_copy_imap_socket_to_handle()to read data from the buffer in lines or bytes at a time.
- _imap_socket_read_line($Self)
Read a \r\n terminated list from the buffered IMAP connection socket.
- _imap_socket_read_bytes($Self, $NBytes)
Read a certain number of bytes from the buffered IMAP connection socket.
- _imap_socket_out($Self, $Data)
Write the data in $Data to the IMAP connection socket.
- _copy_handle_to_imapsocket($Self, $InHandle)
Copy a given number of bytes from a file handle to the IMAP connection
- _copy_imap_socket_to_handle($Self, $OutHandle, $NBytes)
Copies data from the IMAP socket to a file handle. This is different to _copy_handle_to_imap_socket() because we internally buffer the IMAP socket so we can't just use it to copy from the socket handle, we have to copy the contents of our buffer first.
The number of bytes specified must be available on the IMAP socket, if the function runs out of data it will 'die' with an error.
- _quote($String)
Returns an IMAP quoted version of a string. This place "..." around the string, and replaces any internal " with \".
INTERNAL PARSING FUNCTIONS
- _parse_list_to_hash($ListRef, $Recursive)
Parses an array reference list of ($Key, $Value) pairs into a hash. Makes sure that all the keys are lower cased (lc) first.
- _fix_folder_name($FolderName, %Opts)
Changes a folder name based on the current root folder prefix as set with the
set_root_prefix()call.
Wildcard => 1 = a folder name with % or * is left alone NoEncoding => 1 = don't do modified utf-7 encoding, leave as unicode
- _fix_folder_encoding($FolderName)
Encode folder name using IMAP-UTF-7
- _unfix_folder_name($FolderName)
Unchanges a folder name based on the current root folder prefix as set with the
set_root_prefix()call.
- _fix_message_ids($MessageIds)
Used by IMAP commands to handle a number of different ways that message IDs can be specified.
- Method arguments
The $MessageIds parameter may take the following forms:
- _parse_email_address($EmailAddressList)
Converts a list of IMAP email address structures as parsed and returned from an IMAP fetch (envelope) call into a single RFC 822 email string (e.g. "Person 1 Name" <[email protected]>, "Person 2 Name" <...>, etc) to finally return to the user.
This is used to parse an envelope structure returned from a fetch call.
See the documentation section 'FETCH RESULTS' for more information.
- _parse_envelope($Envelope, $IncludeRaw, $DecodeUTF8)
Converts an IMAP envelope structure as parsed and returned from an IMAP fetch (envelope) call into a convenient hash structure.
If $IncludeRaw is true, includes the XXX-Raw fields, otherwise these are left out.
If $DecodeUTF8 is true, then checks if the fields contain any quoted-printable chars, and decodes them to a Perl UTF8 string if they do.
See the documentation section 'FETCH RESULTS' from more information.
- _parse_bodystructure($BodyStructure, $IncludeRaw, $DecodeUTF8, $PartNum)
Parses a standard IMAP body structure and turns it into a Perl friendly nested hash structure. This routine is recursive and you should not pass a value for $PartNum when called for the top level bodystructure item. Note that this routine destroys the array reference structure passed in as $BodyStructure.
See the documentation section 'FETCH RESULTS' from more information
- _parse_fetch_annotation($AnnotateItem)
Takes the result from a single IMAP annotation item into a Perl friendly structure.
See the documentation section 'FETCH RESULTS' from more information.
- _parse_fetch_result($FetchResult)
Takes the result from a single IMAP fetch response line and parses it into a Perl friendly structure.
See the documentation section 'FETCH RESULTS' from more information.
- _parse_header_result($HeaderResults, $Value, $FetchResult)
Take a body[header.fields (xyz)] fetch response and parse out the header fields and values
- _decode_utf8($Value)
Decodes the passed quoted printable value to a Perl Unicode string.
- _expand_sequence(@Sequences)
Expand a list of IMAP id sequences into a full list of ids
PERL METHODS
- DESTROY()
Called by Perl when this object is destroyed. Logs out of the IMAP server if still connected.
SEE ALSO
Net::IMAP, Mail::IMAPClient, IMAP::Admin, RFC 3501
Latest news/details can also be found at:
Available on github at:
AUTHOR
Rob Mueller <[email protected]>. Thanks to Jeremy Howard <[email protected]> for socket code, support and documentation setup.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Module Install Instructions
To install Mail::IMAPTalk, copy and paste the appropriate command in to your terminal.
cpanm Mail::IMAPTalk
perl -MCPAN -e shell install Mail::IMAPTalk
For more information on module installation, please visit the detailed CPAN module installation guide. | https://metacpan.org/pod/Mail::IMAPTalk | CC-MAIN-2022-40 | refinedweb | 8,738 | 56.05 |
[
]
Bill Blough updated AXIS2C-1591:
--------------------------------
Labels: patch (was: )
> axiom_element_add_attribute and duplicate names
> -----------------------------------------------
>
> Key: AXIS2C-1591
> URL:
> Project: Axis2-C
> Issue Type: Bug
> Components: xml/om
> Affects Versions: 1.7.0
> Reporter: Sebastian Brandt
> Assignee: Nandika Jayawardana
> Priority: Minor
> Labels: patch
> Attachments: om_element.c.patch
>
>
> in axiom_element_add_attribute, the attribute is added to the hash of attributes in om_element->attributes
> based on the name (axutil_qname_to_string).
> If an attribute with the same name is already in the hash, that entry is overwritten
and thus, will never be freed.
> This seems to be the case always if several xml nodes have the same namespace - the attribute
value itself is not used there, or are
> we just talking namespace attributes here?
> I don't even begin to understand what the role of the attributes is, and what happes,
if the attributes are stored with different names or such.
> For the moment, I will try to create two hashes, one for the attributes that are used,
and one for the attributes that just have to be freed. Of course, a list would be enough for
the latter.
--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected] | http://mail-archives.us.apache.org/mod_mbox/axis-c-dev/201908.mbox/%3CJIRA.12553348.1335786110000.192265.1566066120111@Atlassian.JIRA%3E | CC-MAIN-2020-40 | refinedweb | 213 | 62.58 |
Ok, I've already figured out how to get an image and modify it by using the code found here:
Now my question is can I resize that image after all modification is done? This is to make the print-out fit a smaller label. I guess what I want to do is scale it down to the printer.
3 replies to this topic
#1
Posted 13 September 2006 - 05:49 AM
#2
Posted 22 September 2006 - 06:34 AM
I think I found your answer while searching for my answer (how to use an embedded font).
Source:
* This code is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. */ using System; using System.IO; using com.lowagie.text; using com.lowagie.text.pdf; public class Chap0607 { public static void Main(String[] args) { Console.WriteLine("Chapter 6 example 7: Scaling an Image"); // step 1: creation of a document-object Document document = new Document(); // step 2: // we create a writer that listens to the document // and directs a PDF-stream to a file PdfWriter.getInstance(document, new FileStream("Chap0607.pdf", FileMode.Create)); // step 3: we open the document document.open(); // step 4: we add content Image jpg1 = Image.getInstance("myKids.jpg"); jpg1.scaleAbsolute(97, 101); document.add(new Paragraph("scaleAbsolute(97, 101)")); document.add(jpg1); Image jpg2 = Image.getInstance("myKids.jpg"); jpg2.scalePercent(50); document.add(new Paragraph("scalePercent(50)")); document.add(jpg2); Image jpg3 = Image.getInstance("myKids.jpg"); jpg3.scaleAbsolute(194, 101); document.add(new Paragraph("scaleAbsolute(194, 101)")); document.add(jpg3); Image jpg4 = Image.getInstance("myKids.jpg"); jpg4.scalePercent(100, 50); document.add(new Paragraph("scalePercent(100, 50)")); document.add(jpg4); // step 5: we close the document document.close(); } }
Source:
#3
Posted 22 September 2006 - 02:21 PM
When I click on this link...
I don't see anything. :(
Lyte
I don't see anything. :(
Lyte
#4
Guest_CheeseBurgerMan_*
Posted 22 September 2006 - 03:58 PM
The link works for me, but you can try the non-SEO'd URL. | http://forum.codecall.net/topic/35607-resize-an-image/ | crawl-003 | refinedweb | 347 | 53.07 |
Question:
OK I'm banging my head against a wall with this one ;-)
Given tables in my database called Address, Customer and CustomerType, I want to display combined summary information about the customer so I create a query to join these two tables and retrieve a specified result.
var customers = (from c in tblCustomer.All() join address in tblAddress.All() on c.Address equals address.AddressId join type in tblCustomerType.All() on c.CustomerType equals type.CustomerTypeId select new CustomerSummaryView { CustomerName = c.CustomerName, CustomerType = type.Description, Postcode = address.Postcode }); return View(customers);
CustomerSummaryView is a simple POCO
public class CustomerSummaryView { public string Postcode { get; set; } public string CustomerType { get; set; } public string CustomerName { get; set; } }
Now for some reason, this doesn't work, I get an IEnumerable list of CustomerSummaryView results, each record has a customer name and a postcode but the customer type field is always null.
I've recreated this problem several times with different database tables, and projected classes.
Anyone any ideas?
Solution:1
I can't repro this issue - here's a test I just tried:
[Fact] public void Joined_Projection_Should_Return_All_Values() { var qry = (from c in _db.Customers join order in _db.Orders on c.CustomerID equals order.CustomerID join details in _db.OrderDetails on order.OrderID equals details.OrderID join products in _db.Products on details.ProductID equals products.ProductID select new CustomerSummaryView { CustomerID = c.CustomerID, OrderID = order.OrderID, ProductName = products.ProductName }); Assert.True(qry.Count() > 0); foreach (var view in qry) { Assert.False(String.IsNullOrEmpty(view.ProductName)); Assert.True(view.OrderID > 0); Assert.False(String.IsNullOrEmpty(view.CustomerID)); } }
This passed perfectly. I'm wondering if you're using a reserved word in there?
Solution:2
This post seems to be referring to a similar issue...
Solution:3
Yes, the reason Rob's example works is because his projection's property names match exactly, whereas John's original example has a difference between CustomerType and type.Description.
This shouldn't have been a problem, but it was - the Projection Mapper was looking for properties of the same name and wasn't mapping a value if it didn't find a match. Therefore, your projection objects' properties would be default values for its type if there wasn't an exact name match.
The good news is, I got the latest source today and built a new Subsonic.Core.dll and the behavior is now fixed.
So John's code above should work as expected.
Solution:4
I just downloaded the latest build from 3/21/2010, which is about 2 months after the last poster on this thread, and the problem still exists in the packaged binary. Bummer.
Here what I have to do:
var data = (from m in Metric.All() where m.ParentMetricId == parentId select new { m.MetricName, m.MetricId, }) .ToList(); var treeData = from d in data select new TreeViewItem { Text = d.MetricName, Value = d.MetricId.ToString(), LoadOnDemand = true, Enabled = true, }; return new JsonResult { Data = treeData };
If I try to do the projection directly from the Subsonic query, the Text property ends up with the ID, and the Value property ends up with the Name. Very strange.
Note:If u also have question or solution just comment us below or mail us on [email protected]
EmoticonEmoticon | http://www.toontricks.com/2018/10/tutorial-subsonic-3-linq-projection.html | CC-MAIN-2018-43 | refinedweb | 541 | 52.66 |
Opened 5 years ago
Closed 5 years ago
#18946 closed Uncategorized (fixed)
Possible error in vote function
Description
def vote(request, poll_id): p = get_object_or_404(Poll, pk=poll_id) try: selected_choice = p.choice_set.get(pk=request.POST['choice']) except (KeyError, Choice.DoesNotExist): # Redisplay the poll voting form. I tried to generate an exception by opening a blank page in the browser and entering an url such as and received the following response NameError at /polls/1/vote/ global name 'Choice' is not defined. etc. I modified the except line to except KeyError: and the code worked as expected, opening the url
Change History (2)
comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
Sorry, I just failed to add Choice to the list of polls.models
Note: See TracTickets for help on using tickets.
BTW, this pertains to | https://code.djangoproject.com/ticket/18946 | CC-MAIN-2017-17 | refinedweb | 141 | 61.67 |
Update Your Unlocked Package
Update Your Unlocked Package
Your company is going to grow and change over time, and your apps are likely to do the same. Unlocked packages provide a robust and easy way to test, package, and deploy changes to your apps.
It doesn’t really matter what you change. That’s not the point of this quick start.
Here are the basic steps to test your change before you create the updated package version.
- sfdx force:org:create -s -f config/project-scratch-def.json
- sfdx force:source:push
- sfdx force:user:permset:assign -n GIFter
- sfdx force:org:open -p lightning/n/GIFter
Does it look good? Excellent!
- To prepare for creating the package version, open the sfdx-project.json file again, this time to update the versionName to reflect the new version of the package. To reflect that it’s a minor version change, increment the versionNumber to 1.1.0.
{ "packageDirectories": [ { "path": "force-app", "default": true, "package": "GIFter", "versionName": "Summer '18 (new color)", "versionNumber": "1.1.0.NEXT" } ], "namespace": "", "sfdcLoginUrl": "", "sourceApiVersion": "43.0", "packageAliases": { "GIFter": "0Hoxxx", "[email protected]": "04txxx" } }
- Now, let’s create a package version containing your app with the updated source.
sfdx force:package:version:create -p GIFter -d force-app -k test1234 \ --wait 10 -v DevHub
Successfully created the package version [08cxxx]. Subscriber Package Version Id: 04txxx. Package Installation URL: As an alternative, you can use the "sfdx force:package:install" command.
- Install and test the package version in a fresh scratch org.
sfdx force:org:create -s -f config/project-scratch-def.json
sfdx force:package:install --wait 10 --publishwait 10 --package [email protected] \ -k test1234 --noprompt
sfdx force:user:permset:assign -n GIFter
sfdx force:org:open -p lightning/n/GIFter
You’re getting the idea, right?
OK, testing is successful. Let’s install it to our TP.
Install New Package Version in Your TP
Your work is mostly done at this point. Since you’ve done such a great job of testing along the way, we can feel confident about this last step.
- Install the package version in your TP.
sfdx force:package:install -u MyTP --wait 10 --package [email protected] \ -k test1234 --noprompt
You don’t have to assign the perm set again because it’s already in the org from your install of the previous package version.
- And finally, one last time, open your TP and run the app.
sfdx force:org:open -p lightning/n/GIFter -u MyTP
- From Setup, enter Installed Packages in the Quick Find box, and select Installed Packages.
- Select GIFter.
More Info
We hope this quick start got you excited about the next evolution in packaging technology and helped you see the promise of unlocked packages. For more details about the power of unlocked packages, earn another badge by completing the Unlocked Packages for Customers module. | https://trailhead.salesforce.com/en/content/learn/projects/quick-start-unlocked-packages/update-your-unlocked-package?trail_id=sfdx_get_started | CC-MAIN-2019-22 | refinedweb | 484 | 66.84 |
are you allowed to use another char array to save the reversed string? If you are, then start at the end of the original string and work backwards towards the first character, in the loop store each character in the second array from front to last.
For example, if the string is "abc" then start with the last character which is 'c', copy it to the first byte of another string.
There are several ways to accomplish that, but the simplest is probably calling strlen() to get the length of the original string, then a for loop to count from whatever that returns to 0.
for(int i = len; i >= 0; --i)
Edited 4 Years Ago
by Ancient Dragon
Minor issue AD. You are starting at the terminating null byte and then ending at the head of the string. Better this I think:
for(int i = len-1, j = 0; i >= 0; --i, ++j)
{
target[j] = source[i];
}
target[len] = 0; // Properly terminate the target string.
The question does not seem to preclude the use of a human processor so how about
#include <string>
#include <iostream>
int main()
{
std::string text;
std::string reversed;
std::cout << "Please enter the string to reverse: " << std::flush;
std::getline(std::cin, text);
std::cout << std::endl;
std::cout << "Please enter the reverse of \"" << text << "\": " << std::flush;
std::getline(std::cin, reversed);
std::cout << std::endl << std::endl;
std::cout << "The reverse of \"" << text << "\" is \"" << reversed << "\"." << std::endl;
return 0;
}
And let us not forget it does preclude any library so you could just std::reverse(text.begin(), text.end()); :D
std::reverse(text.begin(), text.end());
Another option...
#include <stdio.h>
#include <string.h>
char* reverse_string(char* str)
{
for(int beg = 0, end = strlen(str) - 1; beg < end; ++beg, --end)
{
str[beg] ^= str[end];
str[end] ^= str[beg];
str[beg] ^= str[end];
}
return str;
}
int main(void)
{
char buffer[]="Hello World";
printf("%s\n", reverse_string(buffer));
return 0;
}
As long as we're giving all sorts of answers, how about
for(int i = 0, j = len-1; i < len/2; i++, j--)
{
temp = str[i];
str[i] = str[j];
str[j] = temp;
}
or even shorter:
for (int i = 0; i < strlen(str) / 2; ++i){
char t = str[i];
str[i] = str[strlen(str)-1-i];
str[strlen(str)-1-i] = t;
}
why are you calling strlen() so many times? Do you expect the length of the string to change?
int length = strlen(str);
for (int i = 0; i < length / 2; ++i){
char t = str[i];
int x = length - 1 - i;
str[i] = str[x];
str[x] = t;
} ... | https://www.daniweb.com/programming/software-development/threads/473524/reverse-the-string | CC-MAIN-2018-26 | refinedweb | 434 | 63.93 |
Axis WS Client / WSDL2Java - trying to use CDATA elements (1 messages)
I am trying to create an Apache Axis (1.4) web service client. The webservice client is a bit unusual in the sense that all it does is submit an XML payload of various data to the server, and all it gets back is an 'OK' message. We have a WSDL and used WSDL2Java to create the shell of the client code. So, the relevant code in the generated files looks like this: static { elemField = new org.apache.axis.description.ElementDesc(); elemField.setFieldName("request"); elemField.setXmlName(new javax.xml.namespace.QName("", "Request")); elemField.setXmlType(new javax.xml.namespace.QName("", "string")); elemField = new org.apache.axis.description.ElementDesc(); elemField.setNillable(false); typeDesc.addFieldDesc(elemField); } This "Request" field is our XML payload, and we need it to be wrapped in CDATA[]. However, after hours of searching, I have no idea how to denote to Axis to serialize this element as CDATA, rather than escaped text. So my questions are: 1) How can I do this? Is it even possible in Axis with WSDL2Java? Is there an easier way to do this if I don't use WSDL2Java? 2) We are not married to Axis - is there an alternative library I should try? (It is a very simple webservice to be invoked by Cron job... I'm looking for the simplest possible way to invoke this service) Thank you.
- Posted by: Ben Eirich
- Posted on: March 05 2009 12:37 EST
Threaded Messages (1)
- I had a similar problem dealing with a horrible .NET server by caoilte oconnor on May 17 2010 04:05 EDT
I had a similar problem dealing with a horrible .NET server[ Go to top ]
The irony was that even though it only accepted unencoded XML strings surrounded by CDATA, it would turn around and spit back out fully encoded responses. The hacked solution I came up with (with no help from Google) was to override the org.apache.axis.components.encoding.XMLEncoder with my own implementation that left the characters needed for XML tags well alone (JAD is your friend). To enable your XMLEncoder you need to add a META-INF/services/org.apache.axis.components.encoding.XMLEncoder file containing the FQN of the class (Commons Discovery is another badly documented project).
- Posted by: caoilte oconnor
- Posted on: May 17 2010 04:05 EDT
- in response to Ben Eirich | http://www.theserverside.com/discussions/thread.tss?thread_id=53900 | CC-MAIN-2015-11 | refinedweb | 402 | 55.64 |
This post aims to summarize all the works described in previous posts and shows a consolidated python module that can retrieve multiple stock data sets and act as a simple stock filter. The flowchart below shows the full steps taken to run a filter. If using the alternative time-saving approach as show in the flow chart, the time to scan through around 500 stocks would take less than 15 min. It can generate different series of filtered stocks depending on the list of criteria files created and can be scheduled to run each day prior to the market opening.
The list below described how individual scripts are created at each posts.
- Getting most recent prices and stock info from Yahoo API: “Extracting stocks info from yahoo finance using python (Updates)”
- Criteria filtering: “Filter stocks data using python”
- Historical data/dividend several alternatives:
- Scraping from Yahoo API: “Getting historical stock quotes and dividend Info using python”.
- Scraping using YQL: “Get historical stock prices using Yahoo Query Language (YQL) and Python”.
- Retrieve from database: “Storing and Retrieving Stock data from SQLite database”.
- Company info and company financial data several alternatives:
- Direct scraping: “Direct Scraping Stock Data from Yahoo Finance”
- Scraping using YQL:“Scraping Company info using Yahoo Query Language (YQL) and Python”.
- Web scraping for stock tech analysis. “Basic Stock Technical Analysis with python”.
Below shows a sample run with a few sets of criteria. The qty left after each filtered parameters are displayed. Finally the results sample from one of the run, the “strict” criteria, are shown. Note that the filtered results depends on the accuracy and also whether the particular parameter is present in Yahoo database.
The combined run script is Stock_Combine_info_gathering.py and it is avaliable with rest of the modules at the GitHub.
List of filter for the criteria: lowprice
—————————————-
NumYearPayin4Yr > 3
PERATIO > 4
Qtrly Earnings Growth (yoy) > 0
PERATIO < 15
Pre3rdYear_avg greater OPEN 0 # means current price lower than 3yr ago
Processing each filter…
—————————————-
Current Screen criteria: Greater NumYearPayin4Yr
Modified_df qty: 142
Current Screen criteria: Greater PERATIO
Modified_df qty: 110
Current Screen criteria: Less PERATIO
Modified_df qty: 66
Current Screen criteria: Compare Pre3rdYear_avg,OPEN
Modified_df qty: 19
END
List of filter for the criteria: highdivdend
—————————————-
NumYearPayin4Yr > 3
LeveredFreeCashFlow > -1
TRAILINGANNUALDIVIDENDYIELDINPERCENT > 5
PRICEBOOK < 1.5
TrailingAnnualDividendYieldInPercent < 100
TotalDebtEquity < 50
Processing each filter…
—————————————-
Current Screen criteria: Greater NumYearPayin4Yr
Modified_df qty: 142
Current Screen criteria: Greater LeveredFreeCashFlow
Modified_df qty: 107
Current Screen criteria: Greater TRAILINGANNUALDIVIDENDYIELDINPERCENT
Modified_df qty: 30
Current Screen criteria: Less PRICEBOOK
Modified_df qty: 25
Current Screen criteria: Less TotalDebtEquity
Modified_df qty: 20
END
List of filter for the criteria: strict
—————————————-
CurrentRatio > 1.5
EPSESTIMATECURRENTYEAR > 0
DilutedEPS > 0
ReturnonAssets > 0
NumYearPayin4Yr > 2
PERATIO > 4
LeveredFreeCashFlow > 0
TRAILINGANNUALDIVIDENDYIELDINPERCENT > 2
PERATIO < 15
TotalDebtEquity < 70
PRICEBOOK < 1.5
PEGRatio < 1.2
YEARHIGH greater OPEN 0
Processing each filter…
—————————————-
Current Screen criteria: Greater CurrentRatio
Modified_df qty: 139
Current Screen criteria: Greater EPSESTIMATECURRENTYEAR
Modified_df qty: 42
Current Screen criteria: Greater DilutedEPS
Modified_df qty: 41
Current Screen criteria: Greater ReturnonAssets
Modified_df qty: 37
Current Screen criteria: Greater NumYearPayin4Yr
Modified_df qty: 32
Current Screen criteria: Greater PERATIO
Modified_df qty: 32
Current Screen criteria: Greater LeveredFreeCashFlow
Modified_df qty: 20
Current Screen criteria: Greater TRAILINGANNUALDIVIDENDYIELDINPERCENT
Modified_df qty: 15
Current Screen criteria: Less PERATIO
Modified_df qty: 8
Current Screen criteria: Less TotalDebtEquity
Modified_df qty: 7
Current Screen criteria: Less PRICEBOOK
Modified_df qty: 5
Current Screen criteria: Less PEGRatio
Modified_df qty: 5
Current Screen criteria: Compare YEARHIGH,OPEN
Modified_df qty: 5
END
Results from “strict” criteria:
I am trying to work on these scripts and using ubuntu 15.04. These scripts requried many modules. Is there list of modules and place to get them. Please suggest
Hi Zealny, thanks for the feedback. For each of the sub module, I usually include the modules and links required at each individual posts. You can also install them using pip. However, I will try to write a separate post on the list of modules required as I know it can be confusing.
As for now, the key modules (link found in posts) are as below:
1. Pandas (data frame and data tables)
2. Pattern (web related)
3. simplejson (json handling)
There are some additional modules that are used for more specific modules such as
1. scipy (regression and other scientific function)
2. matplotlib (plotting)
3. difflib (string comparison)
4. pypushbullet (notification)
Below are two modules that can be found in my github (spidezad):
1. Dict_create_fr_text (get dict fr text)
2. xls_table_extract_module (get table fr excel)
Note that the main module “yahoo_finance_data_extract” require Excel (Windows) to extract certain paramters. This can be disabled if not running on window system. You can email me if you need help on this. Thanks.
Hello Kok Hua,
Thank you for helping me.
I installed all modules except difflib. I could not find it online.
I am having problem as follows
>python Stock_Combine_info_gathering.py
Unable to use the GUI function.
Traceback (most recent call last):
File “Stock_Combine_info_gathering.py”, line 40, in
from SGX_stock_announcement_extract import SGXDataExtract
File “SGX_stock_announcement_extract.py”, line 49, in
from xls_table_extract_module import XlsExtractor
ImportError: No module named xls_table_extract_module
Help me.
Hi zealny, not sure if difflib is available in standard python 2.7 library, Can you try to just import it? If not, you may need to use pip install.
xls_table_extract_module can be found in Github as followed:. You also need pyExcel which is available in Github as well.
Hi Kok Hua,
How do I disable it so that I can use this in OSX?
Thank you.
Jessie
Hi Jessie,
You can comment out the module (xls extractor). The module mainly for extracting list of stock symbols. You can replace the lines where it call the module function with appropriate list.
You can refer to my Reply to Alberto on Mar 5, 2016 for more details. Just scrolled further down to it.
Hope it helps.
Hello Kok Hua,
Thank you for help.
I found documents for difflib module but not the source code
pyExcel.py need win32com.client.dynamic module. I think I cant make it work on linux systems.
Hi Zealny, I think you can straight away import the difflib library, think it is already available in the std library.
As for the pyExcel, it is just provide a more convenient way for me to set the parameters that I need for the data retrieval. You can bypass it by setting it directly in yahoo_finance_data_extract module as below;
self.enable_form_properties_fr_exceltable = 0 # set to zero
self.cur_quotes_property_str = ‘nsl1opvkj’ #default list of properties to copy. can set the properties here.
The list of properties can be found in the below url link:
Hope that helps
I have been trying to do something like this. Wow, there are so much details in all your posts. Have to say Thanks very much!
Thank you for your comments. Glad it is helpful to you 🙂
Hello Kok Hua,
Thanks for the great work, this code is amazing!
I’m currently trying to run the combined run script Stock_Combine_info_gathering.py, however I’m getting the following error:
File “Stock_Combine_info_gathering.py”, line 40, in
from SGX_stock_announcement_extract import SGXDataExtract
File “/Users/Alberto/Desktop/yahoo_finance_data_extract-master/SGX_stock_announcement_extract.py”, line 49, in
from xls_table_extract_module import XlsExtractor
File “/Users/Alberto/Desktop/yahoo_finance_data_extract-master/xls_table_extract_module.py”, line 32, in
from pyExcel import UseExcel
File “/Users/Alberto/Desktop/yahoo_finance_data_extract-master/pyExcel.py”, line 126, in
import win32com.client.dynamic
ImportError: No module named win32com.client.dynamic
I’m using a Mac… I read in the comments section that you have a way to avoid this error when you are not using windows. Can you please help me?
Thanks in advance!
Hi Alberto,
Thanks for your comments and feedback. I mainly use the xlsExtractor to retrieve certain settings such as company list or parameter list which can be easily replaced by setting it to be a list. You can remove all instance of xlsExtractor and replace them by a list of the required parameters. For example,
Under SGX_stock_announcement_extract.py (Line 118):
## target stocks for announcements — using excel query
xls_set_class = XlsExtractor(fname = r’C:\data\stockselection_for_sgx.xls’, sheetname= ‘stockselection’,
param_start_key = ‘stock//’, param_end_key = ‘stock_end//’,
header_key = ‘header#2//’, col_len = 2)
xls_set_class.open_excel_and_process_block_data()
self.announce_watchlist = xls_set_class.data_label_list #also get the company name
You can comment all and replace the self.announce_watchlist with a list of stock symbol, replace the self.companyname_watchlist with list of corresponding company name.
self.announce_watchlist = [‘O5RU’, ‘A17U’, ‘B20’]
self.companyname_watchlist = [‘AIMSAMP Cap Reit’, ‘Ascendas Reit’, ‘Biosensors’]
Hope that helps.
I’m getting an error and having a lot of trouble figuring out the problem:
Traceback (most recent call last):
File “Stock_Combine_info_gathering.py”, line 34, in
from Basic_data_filter import InfoBasicFilter
File “/home/corncob/Projects/Stocks/yahoo_finance_data_extract-master/Basic_data_filter.py”, line 46, in
from DictParser.Dict_create_fr_text import DictParser
ImportError: No module named DictParser.Dict_create_fr_text
I installed the DictParser module using “python setup.py install” and checked the dist-package directory, all of the files seem to be present. Any help you can offer is greatly appreciated.
Some amplifying information:
Currently using Ubuntu 14.04, Python 2.7.6
Hi Jake, are you able to find this module DictParser folder in the python site-packages directory? If yes, you might be missing an __init__.py. Can you try create an empty file and rename is as __init__.py? See if it works.
Thanks Kok Hua!
You welcome.
Hi, first of all thanks for this great work !
I am new to python and trying to run your scripts.
Which version of python are you using for the scripts ?
I have different problem when running using different version of python.
Thanks
Hi Wai Kun, thank you for your good feedback. I am using python 2.7. Hope that helps.
I am now having problem on the pyExcel part.
I installed the pyexcel using pip, and the pyexcel from your github too.
however I got this error :
File “c:\yahoo_finance\SGX_stock_announcement_extract.py”, line 49, in
from xls_table_extract_module import XlsExtractor
File “C:\Python27\lib\site-packages\xls_table_extract_module.py”, line 36, in
from pyET_tools.pyExcel import UseExcel
ImportError: No module named pyET_tools.pyExcel
Can you help?
Hi Wai Kun, can you change the line to: from pyExcel import UseExcel and see if it works? Thanks.
Hi Kok Hua, manage to make it pass the last error.
Now I stuck at the stockselection_for_sgx.xls file.
Can you share the format of this xls file, so that i can create my own list.
Thanks
Hi Wai Kun,
I have added the file to the git hub.. Hope it helps.
Hi, great work on this code. I am getting an error “ImportError: cannot import name pyExcel” but I have already installed pyExcel (from Git). Also, there are some references to pyET_tools which does not exist. Can you help?
Hi, are you able to find this module PyExcel folder in the python site-packages directory? If yes, you might be missing an __init__.py. Can you try create an empty file and rename is as __init__.py? See if it works.
Hey kok Hua, I am getting an issue where there is no module named DictParser. I read a previous comment someone else was struggling with it. I downloaded dict Parser module and ran the setup.py install. All ran correctly but it still wont import. After checking out the site-packages folder i have : DictParser-1.0-py2.7.egg-info(file not a folder), Dict_create_fr_text.py all inside just site-packages , there is no DictParser Folder. so it doesnt look right at all. How do i fix this?
Hi geoff, sorry to hear about the issue. Are you able to find this module DictParser folder in the python site-packages directory? If yes, you might be missing an __init__.py. Can you try create an empty file and rename is as __init__.py? See if it works. If there is no DictParser Folder, please create one in site-packages and copy both the __init__.py and the Dict_create_fr_text.py to the folder. Hope that helps. Please let me know if that is not working for you.Thanks
Hello, is there a get started instruction on how to use it?
thanks
as i keep getting” name ‘YFinanceDataExtr’ is not defined” error even though I already imported
Hi IKEL, regarding your question on the get started instruction,unfortunately I do not have an instruction on hand. I will try to write one in near future. For now, perhaps you can start with the Stock_Combine_info_gathering.py which will run the scripts. There is an option to run different parts of the module in line 85.
partial_run = [‘a2′,’b’,’c_pre’,’c’,’d_pre’,’e’,’f’, ‘g’]#e is storing data
For a simple running version, you can try the other script which is the simpler to use.
Hope that helps
Hi IKEL, regarding the error on the ‘YFinanceDataExtr not defined, can you check the yahoo_finance_data_extract.py file is in the same directory as the script (Stock_Combine_info_gathering.py)you running? It should work if they are all under same directory.
Let me know if you still have problems running.
I have some “get blown out” instruction…
Try Running It!
I was not able to run it either.
No DictParser & other stuff.
self.FeelingStupid = True
Hi Avraham, most of the yahoo stock API no longer working so you might have trouble running this. Let me see if I can come up with a new post for a working version.
Hello,
First of all, great job!
Secondly, I’m having the following error while compiling it in anaconda navigator environment:
File “Stock_Combine_info_gathering.py”, line 82
print time.ctime()
^
SyntaxError: invalid syntax
Hi Slasanto, thank you for your compliment. 🙂 For the anaconda environment, are you using Python 3.x? If yes, then the scripts will not work as they are based on python 2.x.
It worked. Thank you. Now I’m having troubles with the DictParser. I installed but I can’t see any package named “DictParser” or something similar. What I do see are the following files:
“DictParser-1.0-py2.7.egg-info
Dict_create_fr_text.py
Dict_create_fr_text.pyc”
All three are in the anaconda site-packages, more detailed:
/Users/user-name/anaconda/envs/py27/lib/python2.7/site-packages
Hi Slasanto, are you able to find this module DictParser folder in the python site-packages directory? If yes, you might be missing an __init__.py. Can you try create an empty file and rename is as __init__.py? See if it works. | https://simply-python.com/2015/03/09/python-integrated-stock-data-retrieval-and-stock-filter/ | CC-MAIN-2019-30 | refinedweb | 2,395 | 59.3 |
26 February 2010 03:18 [Source: ICIS news]
SINGAPORE (ICIS news)--Petrochemical giant Shell said on Friday that commissioning work has begun to start up its 800,000 tonne/year cracker in ?xml:namespace>
Major construction activities at its ethylene cracker complex (ECC) had been completed as scheduled, the company said in a statement.
The new ECC is part of Shell Eastern Petrochemicals Complex (SEPC) in the southeast Asian country.
The cracker was the second plant built at the
Ethylene prices have been weakening even before Shell’s plant commissioning, with buying interest expected to be very thin for March-arrival cargoes, traders said.
“March is already over because tanks are quite full. What happens after that will depend very much on how much exports are coming out from the
In southeast Asia, ethylene prices fell below the $1,300/tonne (€962/tonne) CFR (cost and freight) level this week on the back of availability from the
A deal was heard concluded on Monday at $1,280/tonne CFR SE Asia for arrival before mid March, but another fixture surfaced on Friday at around $1,200/tonne CFR SE Asia for arrival in early April, market sources said.
With additional reporting by Peh Soo Hwee
($1 = €0.74 | http://www.icis.com/Articles/2010/02/26/9338144/shell-gears-for-singapore-cracker-start-up-ethylene-prices-weak.html | CC-MAIN-2015-11 | refinedweb | 208 | 51.92 |
Click the link with dynamic text or javascript:; or <a> tag
By
6105, in AutoIt General Help and Support
Recommended Posts
Similar Content
- By taylansan
Hi All,
I'm using an online translator for Spanish in which you give the verb and website gives the conjugations. The website I'm using is: where "tener" means "to have" in English.
In the screenshot, you can see the present tense (5 yellow highlighted items) and the imperfects (5 blue boxes). I don't need to get the translation for "vosotros", so I didn't make any color on that row. I'm trying to get these 10 translations to be written on the output for my code. But my code is so simple (because I couldn't go into the div / tr / td):
#include <IE.au3> #include <Array.au3> Local $sSpanishWord = "tener" ;to have ;Local $sSpanishWord = "abrir" ;to open Local $oIE = _IECreate ("" & $sSpanishWord) ; ; ;== Try using _IETagNameAllGetCollection Local $oElements = _IETagNameAllGetCollection($oIE) For $oElement In $oElements If $oElement.id Then ConsoleWrite("Tagname: " & $oElement.tagname & @CRLF & "id: " & $oElement.id & @CRLF & "innerText: " & $oElement.innerText & @CRLF & @CRLF) EndIf Next ;== Try using _IETagNameGetCollection Local $sTable Local $oTableCells Local $oTableRows = _IETagNameGetCollection($oIE, "tr") For $oTableRow In $oTableRows $sTable = "" $oTableCells = _IETagNameGetCollection($oTableRow, "td") ;I don't know how to continue from here on Next I used the IE to find out the tr / td stuff, but I think I'm lost.
P.S: The verb "tener" can be difficult, because it has red letters because of irregular. The verb "abrir" can be much easier, because it's a regular verb.
- By Faalamva once this is done, the coordinates of the elements are still what they were before the scroll.
I can manage this problem by keeping track of the number of pixels I have scrolled, and compute the new "real" ($oElementPosX, $oElementPosY).
But I'm pretty sure there's a more efficient / more elegant way to do it.
What's more in some situations, when I click some controls in the webpage, the webpage adds new elements and shifts the controls below by a random number of pixel, so my workaround can't be used...
So here's my question : Is there a way to "refresh" the calculation of label coordinates ($oElementPosX, $oElementPosY) after a scroll ?
Thank you !
EDIT : I forgot to post the _IEfindPosX and _IEfindPosY functions (found somewhere on this forum) :
Func _IEfindPosX($o_object) Local $curleft = 0 Local $parent = $o_object If IsObj($parent) Then While IsObj($parent) $curleft += $parent.offsetLeft $parent = $parent.offsetParent WEnd Else Local $objx = $o_object.x If IsObj($objx) Then $curleft += $objx EndIf Return $curleft EndFunc Func _IEfindPosY($o_object) Local $curtop = 0 Local $parent = $o_object If IsObj($parent) Then While IsObj($parent) $curtop += $parent.offsetTop $parent = $parent.offsetParent WEnd Else Local $objy = $o_object.y If IsObj($objy) Then $curtop += $objy EndIf Return $curtop EndFunc
- By TimothyGirard
I'm trying to create an autoPop tool for AliExpress. When I get to their address page, there are a number of input fields. There are also two drop downs. One is for the country and, depending on what you select, the other dropdown appears with city names or, a standard input box is visible to add a city (See the images). So if I select "United States" the other dropdown is visible with all the states. If I select "France", the input box is visible to enter a French city. Seems cool enough but I'm really struggling trying to get this to work.
This is my test code to work all this out:
#include <MsgBoxConstants.au3> #include <IE.au3> Local $StartPos Local $oIE = _IECreate("") If @error Then Exit MsgBox(16, "openURL Error", @CRLF & "@error = " & @error & ", @extended = " & @extended) ;If $bVerbose == true Then MsgBox(0, "openURL", "IECreate Object Created") ;Get the Collections $oInputs = _IETagNameGetCollection($oIE, "input"); Input Fields $oSelects = _IETagNameGetCollection($oIE, "select"); Select Fields ;Loop through the Selects For $oSelect In $oSelects If $oSelect.name = "country" Then _IEFormElementOptionSelect($oSelect, "France", 1, "byText",1) Next ;Loop through the inputs For $oInput In $oInputs If $oInput.name = "email" Then $oInput.Value = "[email protected]" If $oInput.name = "contactPerson" Then $oInput.Value = "My Full Name" If $oInput.name = "address" Then $oInput.Value = "123 Anystreet" If $oInput.name = "address2" Then $oInput.Value = "NA" If $oInput.name = "province" Then MsgBox(0, "","$oInput.name = " & $oInput.name &@CRLF &"$oInput.style = " & $oInput.style &@CRLF &"$oInput.type = " & $oInput.type&@CRLF &"$oInput.maxLength = " & $oInput.maxlength) $oInput.Value = "FrenchyLand" EndIf If $oInput.name = "city" Then $oInput.Value = "AnyTown" If $oInput.name = "zip" Then $oInput.Value = "12345" If $oInput.name = "mobileNo" Then $oInput.Value = "1-415-555-1212" Next So I open the page on AliExpress and get an "input" collection and a "select" Collection
I first loop through the Selects until I find "country" and then select "France"
I then loop through all the inputs and put the information in the correct inputs.
When the country is one that AliExpress knows the states or provinces for, they swap Styles between the input box and the dropdown to either "display: inline-block;" or "display: none;" which hides one, or the other (See the AliExpDOM Image).
Problem #1: When I use the "_IEFormElementOptionSelect($oSelect, "France", 1, "byText",1)" To make the dropdown selection, It selects it but does not invoke the widget to change the second dropdown to an input box.
Problem #2: I thought a possible solution would be to read the Style of the input box and based on the value, either place text in the input box or go to the dropdown and make a selection. I can't seem to read the Style attribute from the collection ie: $oInput.style returns nothing.
Any help here would be greatly appreciated. If I figure out a solution before and answer here, I will publish it here for anyone else who might be struggling with these kinds of things
Thanks
- By vladtsepesh
Hi.
- By zemkor
Guys need help, why second _IELinkClickByText not working ? First click is ok, but second click is problem.
Warning from function _IELinkClickByText, $_IESTATUS_NoMatch but text is correct.
First click, changed browser adress is this a problem ?
Thanks for answer.
Func zmazanie() $oIE = _IECreate("") $cozmazat = GUICtrlRead($nadpis) Sleep (2000) _IELinkClickByText($oIE, $cozmazat) Sleep (2000) _IELinkClickByText($oIE, "Zmazať/ Editovať/ Topovať") EndFunc | https://www.autoitscript.com/forum/topic/139542-click-the-link-with-dynamic-text-or-javascript-or-a-tag/ | CC-MAIN-2018-26 | refinedweb | 1,032 | 57.16 |
cristian.botauBAN USER
This is actually O(N), because you have the recurrence relation:
T(n) = 2*T(n/2) + O(1)
For O(log N) you should do only one recursive call, i.e:
T(n) = T(n/2) + O(1)
You can obtain that be reusing the result of "power(a, n / 2)" to compute "power(a, n - n / 2)", and avoid the second recursive call.
Cheng Q.'s approach is correct. Here is an implementation for this idea.
I use the array pos, where pos[i] = the left most position where count_1 - count_0 = i
Time complexity: O(N)
Space complexity: additional O(N)
#include <iostream>
#include <algorithm>
#include <cstring>
const int MAX_N = 100;
const int INF = 0x3F3F3F3F;
using namespace std;
int solve(int a[MAX_N], int n) {
int b[2*MAX_N + 1];
int *pos = b + MAX_N; // pos points to the middle of b so that we can use negative indices for accessing pos elements
for (int i = -n; i <= n; ++i)
pos[i] = INF;
int result = 0;
for (int i = 0, count = 0; i < n; ++i) {
count += a[i] ? 1 : -1;
result = max(i - pos[count], result);
pos[count] = min(i, pos[count]);
}
return result;
}
int main() {
int a[MAX_N] = {1,1,1,1,1,0,0,0,0,0,0,0,1};
//int a[MAX_N] = {0,0,1,0,1,0,1,0,1};
cout << solve(a, 13) << endl;
return 0;
}
The decision to take out the a[start] out of the sequence if a[start] != more and a[end] != more doesn't seem to lead to a correct answer for some cases.
Take, for example, a = 1111100000001
@dmxmails: The complexity can be further reduced if you use a clever data structure that supports fast insertion at an arbitrary position.
This can be achieved using a modified skiplist (expected insertion time O(logN)) or a modified balanced binary tree for which we store the in each node the number of nodes in the subtree rooted at that node (worst case insertion time: O(logN)).
So, the final complexity of the algorithm would be O(N*log(N)).
You can solve the problem using the dynamic programming technique.
Construct the matrix match, where match[i][j] is true iff the first i characters of a match the first j characters of b.
The recurrence relation is the following:
match[i][j] =
(match[i-1][j-1] && character_matches(a[i - 1], b[j - 1])) ||
/* eg: (abc, a?c) => (abcd, a?cd) */
(match[i-1][j] && b[j - 1] == '*') ||
/* eg: (ab, a*) => (abc, a*) */
(match[i][j-1] && b[j - 1] == '*')
/* eg: (abc, ab) => (abc, ab*) */
And the basic case is: match[0][0] = true
Time complexity: O(A*B),
Space complexity: O(A*B), can be reduced to O(A+B)
where A,B = length of A, respectively B
Here is the code:
#include <iostream>
using namespace std;
bool isCharMatch(char a, char b) {
return (b == '?') || (b == '*') || (a == b);
}
// match[i][j] = (match[i-1][j-1] && matches(a[i - 1], b[j - 1])) ||
// (match[i-1][j] && b[j - 1] == '*') ||
// (match[i][j-1] && b[j - 1] == '*')
bool isMatch(const string& a, const string& b) {
bool match[a.length() + 1][b.length() + 1];
for (int i = 0; i <= a.length(); ++i)
for (int j = 0; j <= b.length(); ++j) {
match[i][j] = (i == 0) && (j == 0);
if (i > 0 && j > 0)
match[i][j] |= match[i-1][j-1] && isCharMatch(a[i-1], b[j-1]);
if (i > 0 && j > 0)
match[i][j] |= match[i-1][j] && b[j-1] == '*';
if (j > 0)
match[i][j] |= match[i][j-1] && b[j-1] == '*';
}
return match[a.length()][b.length()];
}
void test(const string& a, const string& b) {
cout << "match(" << a << ", " << b << ") = " << isMatch(a, b) << endl;
}
int main() {
test("abab", "*b*");
test("abab", "a**b");
test("abab", "a**b");
test("", "");
test("ab", "");
test("ab", "*");
test("", "**");
test("", "*?");
return 0;
}
I've updated answer to contain the program and tests used for the program. Hopefully I didn't leave any important test cases out. If you find any failing input data please post it ;)
@Apostle: Oh, sorry, I haven't paid attention to the requirement (I thought that the largest square was asked for).
@Apostle:
"1. No. It's an O(N^2) algorithm. each element of the matrix is traversed at most thrice."
I agree with Chih.Chiu.19. It seems to be O(N^3) by your explanation. What happens with your algorithm on an NxN matrix filled all with 1?
While doing the diagonal parsing (looking for corners) what happens after you have processed a corner? Continue parsing the diagonal or maybe you skip the whole diagonal? (otherwise I don't see how your algorithm runs in less than O(N^3)).
That's because it doesn't make sense in modular arithmetic (floating point numbers don't make much sense in modular arithmetic).
Actually, it would make sense if you would compute the modular multiplicative inverse of x^abs(y) if y is negative, but that can be computed only if x^abs(y) and z are coprimes (so the problem might not always have an answer).
Here is an O(log(y)) algorithm. It is the classical fast exponential algorithm. I won't get into details into it because it is a simple algorithm and can be found easily by googling it.
However, the trick to this problem is to watch out for arithmetical overflows:
- since x^y can grow really big, you can't just compute x^y and then apply % z, since it will likely overflow; so you need to apply modulo z operation on each multiplication;
- furthermore for high values of z even one multiplication can overflow (take for instance x = 2 billions - 1, y = 2, z = 2 billions), so you need to use the long long type for each multiplication;
In order to make sure there is no arithmetic overflow happening, I've defined the modMul(x, y, z) operation which performs the operation "(x * y) % z" and guarantees there is no overflow.
inline int modMul(int x, int y, int z) {
int result = ((long long)x * y) % z;
return result;
}
int power(int x, int y, int z) {
if (y == 0)
return 1;
int sqrt = power(x, y / 2, z);
int result = modMul(sqrt, sqrt, z);
if (y % 2 == 1)
result = modMul(result, x, z);
return result;
}
The line:
int y = power(x,n-1/2);
is incorrect "/" takes precedence before "-", so the expression n-1/2 will actually evaluate to n. Btw, since n is odd you could just write n/2 (it will truncate the result, and it is equivalent to (n-1)/2).
Also, your solution will likely overflow since x^y can easily get over 2^31-1. You need to apply "% z" to each multiplication inside the power() function.
What if x and z are very large, like 2^31-10? Even if you use "% z" for each multiplication it is not ok, for example x*x will overflow. So, when performing a multiplication you need to use long long ints (which can hold numbers up to 2^63-1). For example, line "return y*y" should be "return ((long long)y * y) % z"
For a correct implementation, which also works for large int values for x, y, z check my answer.
I am not sorting and I'm not actually making any bucket. The buckets are used for algorithm explanation.
You basically need the following operations:
- find x - the smallest element in input vector that (x >= 1 and x < 2) - easily done in O(N) with a simple pass through the array
- take the smallest element & second smallest element from the input array (and ignore the first element from triple) - doable in O(N)
- find x the highest element s.t. (x >= 0.5 and x < 1) - O(N)
- etc.
To generalize, the algorithm uses operations like: find the first/second lowest/highest element which lies in the interval [a..b). These operations are doable in O(N).
Consider the following buckets: (0..0.5), [0.5..1), [1..2), [2..inf).
Obviously, we ignore numbers in [2..inf).
Now basically we need to treat all cases of choosing 3 numbers from the 3 buckets.
We only need to look at the following cases (the other cases are "worse" or are covered by these):
1. If possible, choose the smallest element from bucket [1..2) => for the 2nd and 3rd we need to take the smallest 2 elements available. If sum < 2 then return true.
2. If possible, choose the two smallest elements from bucket [0.5 .. 1) => for the 3rd we need to take the smallest element available. If sum < 2 then return true;
3. If possible, choose the highest element from bucket [0.5 .. 1) => if possible, for the 2nd and 3rd take the highest and the second highest from bucket (0 .. 0.5). If sum > 1 then return true.
4. If possible, choose the highest 3 elements from bucket (0..0.5). If sum > 1 then return true.
If none of the cases above found a solution then return false.
Space complexity: O(1), you don't need to explicitly store numbers in buckets.
Time complexity: each operation (e.g.: find smallest element from bucket [1..2), etc.) can be done in O(N). There is a constant number of these operations => overall complexity O(N)
LATER EDIT:
Since the answer was down-graded without any question or explanation why it would be wrong, here is the actual code and the associated tests. Hopefully I didn't forget any relevant test case.
The code could be optimized more and be more condensed, but I tried to make it as clear as possible (regarding to the explanations above and the space and time requirements).
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
using namespace std;
const float INF = 1000;
bool inInterval(float x, float st, float end) { return x >= st && x < end; }
bool findFirstSmallest(const vector<float>& a, float start, float end, float &res) {
int found = 0;
res = INF;
for (int i = 0; i < a.size(); ++i)
if (inInterval(a[i], start, end)) {
++found;
res = min(res, a[i]);
}
return found >= 1;
}
bool findFirstHighest(const vector<float>& a, float start, float end, float &res) {
int found = 0;
res = -INF;
for (int i = 0; i < a.size(); ++i)
if (inInterval(a[i], start, end)) {
++found;
res = max(res, a[i]);
}
return found >= 1;
}
bool findSecondSmallestSecondHighestThirdSmallest findThirdHighest solve(const vector<float>& a, float& x, float &y, float& z) {
if (findFirstSmallest(a, 1, 2, x) &&
findFirstSmallest(a, 0, 1, y) &&
findSecondSmallest(a, 0, 1, z))
if (x + y + z < 2) return true;
if (findFirstSmallest(a, 0.5, 1, x) &&
findSecondSmallest(a, 0.5, 1, y) &&
(findFirstSmallest(a, 0, 0.5, z) || findThirdSmallest(a, 0.5, 1, z) ))
if (x + y + z < 2) return true;
if (findFirstSmallest(a, 0.5, 1, x) &&
findFirstHighest(a, 0, 0.5, y) &&
findSecondHighest(a, 0, 0.5, z))
if (x + y + z >= 1) return true;
if (findFirstHighest(a, 0, 0.5, x) &&
findSecondHighest(a, 0, 0.5, y) &&
findThirdHighest(a, 0, 0.5, z))
if (x + y + z >= 1) return true;
return false;
}
void test(const vector<float>& a) {
cout << "Test: ";
copy(a.begin(), a.end(), ostream_iterator<float>(cout, " "));
cout << endl;
float x, y, z;
if (solve(a, x, y, z))
cout << "Solution: " << x << " " << y << " " << z << endl;
else
cout << "Solution not found!" << endl;
cout << endl;
}
#define arrSize(a) (sizeof(a) / sizeof(a[0]))
int main() {
float test1[] = {0.1, 0.2, 0.2, 0.3, 2.0, 3.0};
test(vector<float>(test1, test1 + arrSize(test1)));
float test2[] = {0.1, 0.3, 0.3, 0.4, 2.0, 3.0};
test(vector<float>(test2, test2 + arrSize(test2)));
float test3[] = {0.1, 0.1, 0.2, 0.6, 2.0, 3.0};
test(vector<float>(test3, test3 + arrSize(test3)));
float test4[] = {0.5, 0.6, 0.6, 2.0, 3.0};
test(vector<float>(test4, test4 + arrSize(test4)));
float test5[] = {0.6, 0.6, 2.0, 3.0};
test(vector<float>(test5, test5 + arrSize(test5)));
float test6[] = {0.6, 0.6, 1.0, 2.0, 3.0};
test(vector<float>(test6, test6 + arrSize(test6)));
float test7[] = {0.1, 0.2, 0.5, 1.0, 2.0, 3.0};
test(vector<float>(test7, test7 + arrSize(test7)));
float test8[] = {0.1, 0.7, 0.6, 1.0, 2.0, 3.0};
test(vector<float>(test8, test8 + arrSize(test8)));
float test9[] = {0.5, 0.5, 0.6, 1.0, 2.0, 3.0};
test(vector<float>(test9, test9 + arrSize(test8)));
float test10[] = {1.6, 1.2, 1.0, 2.0, 3.0};
test(vector<float>(test10, test10 + arrSize(test10)));
return 0;
}
It is not 3SUM-hard. The data has other characteristics which might make the problem solvable in linear time:
- numbers are positive
- sum must lie in an interval
Check my answer on how we can "exploit" these relaxed requirements in order to obtain a linear algorithm.
No, it is O(N^2) because the largestArea() function runs in O(N) time. This is because if you count the total operations done in the most inner loop "while (!St.empty())" you'll see it is O(N) (you can't pop more than N elements).
@nitingupta180: Nice & optimal solution.
The algorithm for largestArea() could be made a little more faster, if you see that you only need the first loop (for computing L). That loop can be modified like so: whenever you pop an element from the stack you update the result with the rectangle corresponding to that element (you know where it starts and you know that it ends here at i). You also need to take care of elements not popped at the end of the loop.
However, I think that the solution that computes both L and R is easier to understand.
The solution is correct, but there is a small problem with the notation of the vector:
You are multiplying a 2x1 matrix with a 2x2 matrix, and that is not possible.
Either swap places of vector and matrix, or define the vector horizontally (i.e 1x2 matrix).
| f(n-1) f(n-2) | x | 2 1 | = | f(n) f(n-1) |
| 2 0 |
For that you need to use additional data (like a hash map) for determining effficiently the position in the heap. Try googling for how decrease key is implemented efficiently for a heap.
However I recommend that you use a std::set (actually multiset or map in order to deal with duplicate elements) instead of a heap. That will make implementation of the algorithm much easier. I used the term "heap" in the solution description because of its main purpose (to keep track of the minimum).
@arwin: I hope i understood your question properly. Here is the response:
When we have to delete elements from j to j' it doesn't take O(logN) time. It takes (j' - j)*O(logN).
However, if you count the elements that are deleted for all the steps the algorithm performs then there are at most N elements to delete (because when you delete an element you increase j', it is never decremented and it goes up to N).
Or to put it in another way: throughout the running of the algorithm, you heap.remove() each element of the array at most once.
Like your algorithm: simple, concise and general. However, no vote for you until you put a proper brief description in words of the algorithm.
Yeah, my bad :)
Although, if the order doesn't matter, I don't see how the fact that the list is sorted may be helpful.
I think the key to solving this problem is to use the information that the list of words is sorted (i.e.: you already have the word list preprocessed to help you with the query).
Consider (for complexity computation):
N - number of words in the word list (max. 1 million)
L - size of a word (max. 40)
A - size of the alphabet (= 26)
For 1 letter distance you can use the following algorithm:
1. Generate all the possible words that are 1 letter distance away from the query word
- this has the complexity O(L*A)
2. Look up each of the generated words in the word list using a binary search:
- the look up of an individual word is O(L*logN)
- we have O(L*A) lookups => final complexity is O(L^2*A*logN) which is roughly (not considering the hidden constant) about 832.000 operations which is better than O(L*N) which is roughly 40 million operations.
For distance = 2, this algorithm performs worse than the O(L*N) version.
Later edit: @warrior: in case your last reply was not referring to my comment then just ignore what I just wrote :)
I didn't say that your solution is incorrect (it is actually correct), but i don't like the fact that it uses backtracking.
Regarding to what I proposed you misunderstood one thing: it doesn't permute the remaining digits, it finds the next permutation for the whole number.
Here is an example:
X = [1, 6, 7, 3, 2], Y = [6, 7, 8, 9, 1]
Algorithm:
[6 ?] --> [6, 7, ? ] -- no remaining larger digits? ---> [6, 7, 3, 2, 1] -- next permutation --> [7, 1, 2, 3, 6]
Binary search trees have the property that the inorder traversal of the tree is a sorted array. The reciprocal of this property is also true.
So the easiest algorithm would be:
1. array a = inorder-traversal(tree)
2. check if a is sorted increasingly
Of course, you can merge those two steps into one and not use the additional array.
The algorithm looks incorrect.
Please correct me if I didn't understand it properly: you basically check for every node if (direct left child < parent) and (direct right child > parent).
If so, in the case below your algorithm returns a false positive:
5
/ \
2 7
/ \
1 10
evaluateExpressionPow has side effects. After a call of pow(a, b), if I call pow(c, d) where c != a it will use the pp computed for a (which is obviously wrong).
You don't need to backtrack if you are out of digits that are higher then the one in y.
Why not just generate the largest possible number with the remaining digits (even though it will be lower then y) and then run next permutation algorithm on the result?
You can solve it even more efficiently (O(N)) using a dequeue instead of min-heap. Check my answer, it includes explanation for the min-heap version as well as for the dequeue version.
@bambam:
Using '\0' to end an array of ints is a little bit creepy. Why not just use 0 instead? (it's basically the same value and you don't force the compiler to cast your char to int)
I haven't analyzed your solution in depth but those inner loops makes me a little bit skeptic about the O(N) complexity you're claiming.
(min2 >= min1) and (p + min1 >= k) implies that (p + min2 >= k)
Hence use of min2 is redundant.
This can be done in O(N*log(N)) time using a min-heap or O(N) using a dequeue.
Basically, the algorithm works like this: for each index i in the array computes the longest subarray that ends at position i and satisfies the requested condition.
Now, let's consider we're at index i, and [j ... i-1] is the longest subarray found in the previous iteration (for i-1). In order to compute the subarray for this iteration we need to find the smallest j' >= j such that min(a[j'], .., a[i-1]) + a[i] >= K.
Now, the trick is how to find j' efficiently.
A first approach is to use a min-heap and start with j' = j and then increment j' and remove element a[j'] from heap until the condition holds (or you reach i). Since j is incremented at most N times => there are a total of N calls to heap.remove_element. Since i is incremented N times => there are N calls to heap.insert_element. => final complexity O(N*log(N)).
A second approach, which is a little bit trickier (I suggest getting a pen and paper for this) is using a deque instead of heap. The constructed deque will have these important properties:
- in the front of the deque is index of the minimum element in seq [j..i-1] (just like the heap)
- the second element is the index of the minimum element in the sequence that remains after removing the first minimum along with the elements in front of it;
- and so on.
So basically if dequeue = [m1, m2, ...] then the initial sequence looks like this [j ... m1 ... m2 ... i-1], and:
- m1 is the index of minimum of sequence [j .. i-1],
- m2 is the index of minimum of sequence (m1 .. i-1] (please note that the interval is open at m1)
I won't explain how you perform the operations on the dequeue in order to prserve those properties (try to think them yourself or look at the code / if you have any questions feel free to ask). You have the implementation below for the time-optimal (dequeue) solution. The methods for updating the deque are push(i) - updates the deque by adding element a[i] and popBadMins() which removes minimums from dequeue and returns the new j'.
Friendly advice: If you're not familiar with dequeue trick, I suggest you try to understand it because it proved to be helpful in programming contests.
#include <iostream>
#include <vector>
#include <deque>
using namespace std;
#define MAX_N 10000
struct Sol {
int st, end;
Sol(int s, int e) : st(s), end(e) {};
};
int A[MAX_N], N, K;
vector<Sol> sol;
int maxLen = 0;
deque<int> q;
// adds the [st, end] interval to the solution set if it is maximal so far
void update_sol(int st, int end) {
int len = end - st + 1;
if (len > maxLen) {
maxLen = len;
sol.clear();
}
if (len == maxLen)
sol.push_back(Sol(st, end));
}
void read_data() {
cin >> N >> K;
for (int i = 0; i < N; ++i)
cin >> A[i];
}
void push(int index) {
int val = A[index];
while (!q.empty() && val <= A[q.back()])
q.pop_back();
q.push_back(index);
}
int popBadMins(int prevStart, int endIndex) {
int val = A[endIndex];
int result = prevStart;
while (!q.empty() && val + A[q.front()] < K) {
result = q.front();
q.pop_front();
}
return result;
}
void solve() {
for (int i = 0, j = -1; i < N; ++i) {
j = popBadMins(j, i);
push(i);
update_sol(j+1, i);
}
}
void print_result() {
for (int i = 0; i < sol.size(); ++i) {
const Sol& s = sol[i];
for (int j = s.st; j <= s.end; ++j)
cout << A[j] << " ";
cout << endl;
}
}
int main() {
read_data();
solve();
print_result();
return 0;
}
Note: Didn't test this thoroughly so I might have missed some corner cases.
Oh, and sorry for the long post.
Forgot to mention that there is no solution in case nextArrangment() method fails (i.e. this is the highest arrangement for x digits and yet still lower than y).
Note: I assume x and y have the same number of digits
This is an O(N) algorithm:
Here is pseudocode with explanations:
1. create digits histogram for x in order to be able to efficiently extract a given digit from it
for (int i = 0; i < x.size(); ++i)
++histogram[x[i]];
2. start from most significant digit (assuming its index is 0) and basically use the same digit from y on the same position in result. If at some point you don't have that digit, you select the smallest digit higher than the digit you're looking for and then put the remaining digits in increasing order and you have your answer. If there is no larger digit then put the remaining digits in decreasing order.
In this case you've got yourself the closest number to Y, but lower. So you need to generate the next lexicographic permutation - see step 3 (in C++ there is std::next_permutation that just does that).
for (i = 0; i < y.size(); ++i)
{
try to extract digit y[i] from histogram
if (y[i] found in histogram) then { result[i] = y[i] }
else if (there is a digit d in histogram s.t. d > y[i])
{
result[i] = the smallest digit d from histogram st d>y[i]
// put remaining digits in increasing order
result[(i+1)..y.size()] = histogram.sortIncreasing();
// found the number, woohoo!!
break for loop;
}
else /* there are only digits lower than y[i] */
{
// put remaining digits in decreasing order
result[i..y.size()] = histogram.sortDecreasing();
// found closest number smaller then y
break for loop;
}
}
3. Now the variable result is either:
- the result we're looking for, i.e.: the closest number greater or equal to y
- the closest number less than y, case in which we need to generate the next lexicographic permutation of digits
So we need to do this check:
if (result < y)
result = nextPermutation(result);
The question asks for the *number* of pairs. In order to compute the number of pairs you don't necessarily need to iterate over each pair.- cristian.botau October 04, 2013 | https://careercup.com/user?id=14951783 | CC-MAIN-2020-10 | refinedweb | 4,228 | 62.27 |
[Part 6] Create your own Calendar (Date/Time) library from scratch using Java
Part 6 - Previous issues and in-between
After Part 1 - Part 5, you've probably noticed some flaws in the methods we've written. My famous line from Part 1 was "We will proceed to fixing any problems we may encounter and improve the code as we go." - Disclaimer. We will fix those flaws but not permanently and not in a perfect way. Since we'll be adding more methods until the library is finished (and we're not there yet).
In this part, we will do the following:
- Add a conditional statement for getDay(), getMonth(), and getYear() methods to accept incomplete dates.
- Create a quick formatting method to address the issue of single digit days.
- Add formatting to the value returned by the nextDate() method.
- Double the parameters, double the fun!
- A method to count the number of days between two dates.
The perfect code, the impossible
Why are we not fixing these flaws as soon as we've identified them? Well, flaws are discovered and sometimes "happen" on different times.
I can give you three:
- Before writing the code
- After writing and during tests
- As it's being used
We'll be wasting our time perfecting our methods because some future flaws are obvious and predictable before writing the code, some are discovered only after they are written and tested, and most when they are actually being used. Right now, we have to focus first on how to make things work and how to fix the errors.
Who wrote this thing?!
In the previous parts, you may have noticed that every time we invoke a method, we always pass the full date (e.g. 01-JAN-2015) and this is annoying. I annoyed myself as I write and test it, too but I had to make a point somewhere.
Remember that most of our methods start off by calling the following methods:
- getDay()
- getMonth()
- getYear()
Methodception
This is to ensure that we get the correct value for day, month, and year which are then processed by other methods depending on what we want to do. An example is getting the next date where we use the nextDate() method but inside this method are invocations of other methods that do very specific tasks.
Adding a conditional statement
String[] splitDate = date.split("-");
For the getDay(), getMonth(), and getYear() methods, we split the date into three separate values and assign these values to the variables for day, month, and year. This means that we are limited to passing a complete date to these methods and since we use the delimiter "-", we are also limited to the format "DD-MON-YYYY".
We wouldn't touch the many different types of format just yet but for now, we'll add conditional statements for each method so we can pass an incomplete date and not get an ArrayIndexOutOfBoundsException.
The three methods should look like below:
public int getDay(String date){ int day = 0; String[] splitDate = date.split("-"); if (date.contains("-")) day = Integer.parseInt(splitDate[0]); else day = Integer.parseInt(date); return day; } public String getMonth(String date){ String month = date; String[] splitDate = date.split("-"); if (date.contains("-")) month = splitDate[1]; return month; } public int getYear(String date){ int year = 0; String[] splitDate = date.split("-"); if (date.contains("-")) year = Integer.parseInt(splitDate[2]); else year = Integer.parseInt(date); return year; }
In the code above, we simply added a condition that checks whether the argument passed to the method has a "-" character. If it does, then we will split the date into three parts and return only the value that we need. If it does not, then we simply return whatever we got.
You can test this using the code below, the ArrayIndexOutOfBoundsException should no longer occur.
SampleClass
public class SampleClass { public static void main(String[] args) { MyCalendar myCalendar = new MyCalendar(); System.out.println("Day for '03-JAN-2015': " + myCalendar.getDay("03-JAN-2015")); System.out.println("Day for '03': " + myCalendar.getDay("03")); System.out.println("Month for '03-JAN-2015': " + myCalendar.getMonth("03-JAN-2015")); System.out.println("Month for 'JAN': " + myCalendar.getMonth("JAN")); System.out.println("Year for '03-JAN-2015': " + myCalendar.getYear("03-JAN-2015")); System.out.println("Year for '2015': " + myCalendar.getYear("2015")); } }
What if I input the wrong value?
What if you pass the value 2015 to the getDay() method? Since it does not contain a "-" character, the value itself will be returned and our method has no way to verify if this is a day, month, or year. The value 2015 will be considered as "day" simply because it was passed to the getDay() method.
Why you should not worry
The truth is: you will not input the value.
Most of the methods in our "MyCalendar" class are not supposed to be public or visible to the users. These methods are used by other visible methods inside the class and were only written to do very specific tasks so we wouldn't have to re-write the code into every method that needs them. Three of these "hidden" methods are the getDay(), getMonth(), and getYear() methods.
This prevents the user from passing an invalid value to the methods. The complete date is still required on the user side except this time, the values are chopped into three parts for all the other methods but are still associated with each other.
For example, 28 is an individual value of "day" but in order to get the next day, we need to know what month it is. If the month is "FEB" then we need to know what year it is. If the year is 2015 then the next day is 1 (01-MAR-2015) because 2015 is not a leap year so it does not have a 29th day.
If it's not visible, why should we bother?
This may not make any difference to simply splitting the complete date but it will make sense in the next methods we will create. If you highlight the name of these methods, right-click, and click "Find usages". You will see where they are being used or invoked. As you can see in the screenshots below, they are used in a lot of visible methods in the "MyCalendar" class alone. These three methods are critical to the output of all other methods that the users can use.
A quick formatting method
I have a confession to make. Remember when I said in part 5 that the results of the nextDate() method was right? Well, I lied just a little bit but I'm confessing now so don't hate me. It was for your own good. Let me show you the screenshot of those results again.
Did you notice something?
The value for day is not the same as we've entered it. The results returned for numbers 1 - 9 are always single digit or without the zero prefix. This is because the type we use for all "day" methods is int. This cuts off the zero prefix because the value is treated as an integer and it cannot make sense out of the zero prefix unlike if the data type is string where all characters are preserved.
Why did we not use string instead?
It is a lot easier to compute using integers because well, integers are numbers. If we declared day as a string, we would have to convert it to an integer every time we make computations and it will lose the preceding zero anyway. After that, we'll have to convert that integer into a string again and add a zero prefix before we return it.
That's just too much work so we'll stick to an int data type and since we'll be introducing different formats in the next parts, let's create a quick formatting method that simply adds a zero at the beginning.
The formatDate() method
public String formatDate(String date){ String formattedDate = date; if (date.length() < 11) formattedDate = "0" + date; return formattedDate; }
Fixed length
In the code above, we defined a method named formatDate(). We declared a variable called "formattedDate" and set its initial value to the value of the "date" parameter. Then we wrote an "if" statement that checks whether the length of the "date" parameter is less than 11. Currently, we only have one format, "DD-MON-YYYY" and counting the number of characters would result to 11. The number of characters for month is fixed with three letters, the year will always be four digits for a very long time.
The only value that changes in the number of digits is the day so every time the length of the date passed to the method is less than 11, it assumes that the value of day is the culprit for this shortcoming and it adds a zero at the beginning. That value is then assigned to the variable called "formattedDate".
Notice that I did not have to put an else statement because the value of "formattedDate" has been initialized with the default value of the "date" or whatever is passed to the method and will only change if it satisfies the "if" condition.
In-between
I guess the screenshots above revealed the methods we're about to create. In Part 1 to Part 5, we became comfortable with using only one parameter per method or a single date. Now comes the question "How do I know the difference between two dates?" It could be how many days between two dates, how many months, how many years. How many days from now until your birthday?
For that purpose, we will create three methods:
Counting the number of days between two dates
Below is the definition of the method countDaysBetween(). It takes two parameters; "fromDate" and "toDate" where "fromDate" is the starting point and "toDate" is the end point. The reason why we fixed the format of the date first is that we have to get the dates in their correct and exact format because we will be comparing it to the other date.
Here's how the countDaysBetween method works:
Line 1
It receives two parameters; "fromDate" and "toDate".
Line 2
It sets the default value of "daysBetween" to zero.
Line 4
It compares the two strings; "fromDate" and "toDate" to check if they are not the same. If this is true (if they are not equal), the value of the "daysBetween" variable is incremented and the value of "fromDate" is replaced with fromDate + 1 or the next day. This goes on and on and until the condition is false (fromDate and toDate becomes equal). If the condition is false in the first place, the default value of "daysBetween" will be left unchanged.
Line 9
Returns the value of "daysBetween".
public int countDaysBetween(String fromDate, String toDate){ int daysBetween = 0; while (!fromDate.equals(toDate)){ daysBetween ++; fromDate = nextDate(fromDate); } return daysBetween; }
Modification for the nextDate() method
Now, here's the crucial part. In order to make sure that the comparison works all the time, we need to use the formatDate() method inside the nextDate() method. This is to ensure that when we invoke the nextDate() method inside the "while" loop, the result that will be sent back to us is in the correct format. This is VERY important because that result becomes the new value of the "fromDate" variable which is compared to the value of the "toDate" variable by each loop. If the value of "fromDate" happens to have a day between 1 - 9 and the zero prefix is not there, the condition in the while loop will still be true or "fromDate" and "toDate" will not be considered equal.
Example
fromDate = "01-FEB-2015"
toDate = "02-FEB-2015"
After nextDate(fromDate), "fromDate" becomes "2-FEB-2015". This should terminate the loop but since they are not exactly equal because "toDate" has 02 and "fromDate" only has 2, the loop will keep going even after it surpassed toDate's value "02-FEB-2015". It will go on forever because there is no way to go back, the days, months, and years will just continue to increment and there's no going back to February 2, 2015.
WARNING!
You can try using the method countDaysBetween() with the two parameters in the example above to see what I'm talking about but since you will not really get a feedback on what's happening except a blank console as your program runs forever, you can add a "System.out.println(fromDate);" inside the "while" loop so you can see how many times "fromDate" transforms into different dates but never really reach its goal. A word of warning, though. This is an infinite loop and may crash. Click the "Stop" square button in the console to stop the program.
No infinite loop?
If you're lucky, this method can still work if the other end (toDate) has a day that originally has two digits or numbers between 10 - 31 or if "fromDate" itself contains these numbers as a value for day. That way, the loop will still terminate at some point but let's not risk that. After all, it takes a very small amount of modification in the nextDate() method.
In bold below is the change. We just invoke the formatDate() on the variable to be returned (nextDate) to make sure it is formatted correctly before our countDaysBetween() method receives it and assigns it to the "fromDate" variable.
return formatDate(nextDate);
Your new nextDate() method should look like the code below:
nextDate() formatDate(nextDate); }
You can use the code below to test if the results are correct.
System.out.println("Number of days from 11-MAR-2013 to 02-JUN-2013 is " + myCalendar.countDaysBetween("11-MAR-2013", "02-JUN-2013")); System.out.println("Number of days from 01-FEB-2015 to 02-FEB-2015 is " + myCalendar.countDaysBetween("01-FEB-2015", "02-FEB-2015"));
You should get something like this.
You can count it manually just to check if it's correct. The days between March 11, 2013 and June 2, 2013 is quite a gap so I highlighted the dates in my made-up calendar below:
End of Part 6
We fixed a few flaws in this part. That should get us ready for the next methods we're about to make. I wanted to include all the six methods here but this article has gotten too long and I don't want the readers to be intimidated by the length of a single part since I try to elaborate as much as I can so the information would be easier to digest.
In Part 7, the five remaining methods will be discussed. The other two counting methods; countMonthsBetween() and countYearsBetween(). The three display methods; daysBetween(), monthsBetween(), and yearsBetween().
© 2015 Joann Mistica | https://hubpages.com/technology/Create-your-own-Calendar-DateTime-library-from-scratch-using-Java-part-6 | CC-MAIN-2018-22 | refinedweb | 2,472 | 70.94 |
03 June 2011 08:20 [Source: ICIS news]
SINGAPORE (ICIS)--State-owned Indian Oil is running its polyethylene (PE) and polypropylene (PP) facilities at Panipat at reduced rates as the company’s upstream naphtha cracker has been shut down for a brief maintenance, a company source said on Friday.
Indian Oil is operating its 600,000 tonne/year PP unit and 650,000 tonne/year high density PE (HDPE)/linear low density PE (LLDPE) plants at 70%, the source said.
The company is expected to restart its upstream 857,000 tonne/year naphtha cracker on 6 June.
“The demand in the local PE and PP downstream sector is still looking bad and prices are on a downtrend. We have to reduce our operating rates as a result to prevent a build-up in inventories,” the source added.
Domestic prices in ?xml:namespace>
($1 = Rs44.8) | http://www.icis.com/Articles/2011/06/03/9465875/indian-oil-runs-pe-pp-units-at-reduced-rates-on-cracker.html | CC-MAIN-2015-06 | refinedweb | 145 | 60.95 |
SYNOPSIS
#include <sys/statvfs.h>
int fstatvfs(int fildes, struct statvfs *buf);
int statvfs(const char *path, struct statvfs *buf);
int fstatvfs64(int fildes, struct statvfs64 *buf);
int statvfs64(const char *path, struct statvfs64 *buf);
DESCRIPTION
The
The
PARAMETERS
- fildes
Is the file descriptor for an open file on the file system to be queried.
- path
Specifies the path name of a file within the file system.
- buf.
- EOVERFLOW
One of the values to be returned cannot be represented correctly in the structure pointed to by buf.
CONFORMANCE
UNIX 98, with exceptions.vfs
PTC MKS Toolkit 10.3 Documentation Build 39. | https://www.mkssoftware.com/docs/man3/statvfs.3.asp | CC-MAIN-2021-39 | refinedweb | 101 | 66.44 |
Hi FCC,
I have a strange bug that I just CANT figure out in my recipe box app despite trying all day. Let me describe it. Say I have recipe 1 with ingredients (a,b,c) and recipe 2 with ingredients (d,e,f). If I delete recipe 1 before recipe 2, recipe 2 will take on the state of recipe 1 and end up with recipe 2 (a,b,c) instead of the correct recipe 2 (d,e,f) format.
my deleting logic and ingredients state is as follows:
//in App.js splice the recipe (just a string) out of the array deleteCard(i){ let removedCard = this.state.recipeCards; removedCard.splice(i, 1); this.setState({ recipeCards: removedCard }); } //in Recipecard.js render a <CardBody /> instance and pass it this.state.ingredients as a prop class RecipeCard extends Component { constructor(props){ super(props); this.state = { ingredients: [], hidden: false }; } render(){ if (!this.state.hidden){ return ( {this.props.children} <RecipeHeader reportClick={this.listenForHeaderClick} title={this.props.title}/> <CardBody ingredients={this.state.ingredients} /> ); //in CardBody.js build the recipe ingredients in a list and render them class CardBody extends Component { render(){ let ingredients = this.props.ingredients.map((ingredient, i) => { return (<li key={i}>{ingredient}</li>); }); return ( <ul className="ingredient-list"> {ingredients} </ul> ); } }
Could anyone lend any advice at all? I’ve been trying to fix this bug all day and I feel like I’m at wit’s end. My next step might be to rebuild the entire project from scratch
If you’re curious the full repo is here
Thank you for any and all advice, really appreciate it | https://www.freecodecamp.org/forum/t/unsquashable-react-bug-recipe-box/91550 | CC-MAIN-2019-09 | refinedweb | 266 | 59.8 |
On Thu, Aug 16, 2012 at 1:55 AM, jerome <romjerome@...> wrote:
> It sounds good!
>
> Note, can we imagine a related feature: ability to import (merge) only one table (type of primary object)?
I think that would be trivial; something like:
from gen.merge.diff import import_as_dict
from gen.db import DbTxn
def import_types(db, filename, *types):
newdb = import_as_dict(filename)
db.disable_signals()
with DbTxn(_("Import"), db, batch=True) as trans:
for item in types:
handles = newdb._tables[item]["handles_func"]()
for handle in handles:
obj = newdb._tables[item]["handle_func"](handle)
if item == "Person":
add_func = db.add_person
commit_func = db.commit_person
#...
add_func(obj, trans)
commit_func(obj, trans)
db.enable_signals()
db.request_rebuild()
import_types(db, "data.gramps", "Person")
Currently, we don't enforce a proper manner to import into the
database, and some importers assume the BSDDB internals. We'll need to
clean that up once we start having different internals (like
DictionaryDb, DjangoDb, etc). So, not all file import formats will
work going into DictionaryDb.
> Having a top level filter (factory, hierarchical level, etc ...), then like 'Export screen...' feature from menu into list views, to be able to import/dump a specific table without relation with others primary objects!
>
> Why to do that?
> *Most part of time this could be very useful when one has planned a migration from a closed file format to something more flexible like gramps[1].
> *There is a lot of simple tables (places[2], events[3], sources/citations[4], persons, families, notes/transcriptions) dedicated to genealogy (3rd party indexes, publications, sql tables, etc ...). To have a quick way to handle content without new seizure might be productive (and consistent).
>
> As you said, this may be more difficult to merge without an unique ID.
> Anyway, I guess this is also one limitation on current Gramps CSV import[5]. Users know that and I guess that they have already tested CSV import.
>
> If we go further, with your code, it should be also possible to test consistency of the imported records, right? ie. syntax, type of imported data according to Gramps DB model.
Sure. DictionaryDb (or a filebased TempDb/SqliteDb) can be a staging
area for checking and selecting before adding/merging into the real
database.
-Doug
> [1]
> [2]
> [3]
> [4]
> [5]
>
>
> Thanks!
> Jérôme
>
> --- En date de : Mer 15.8.12, Doug Blank <doug.blank@...> a écrit :
>
>> De: Doug Blank <doug.blank@...>
>> Objet: Re: [Gramps-devel] [Gramps-users] Status report on Gramps-Connect
>> À: "Benny Malengier" <benny.malengier@...>
>> Cc: gramps-devel@..., "Jiri Kastner" <cz172638@...>
>> Date: Mercredi 15 août 2012, 15h56
>> On Wed, Aug 15, 2012 at 9:44 AM,
>> Benny Malengier
>> <benny.malengier@...>
>> wrote:
>> >
>> >
>> > 2012/8/15 Doug Blank <doug.blank@...>
>> >>
>> >> On Tue, Aug 14, 2012 at 7:42 AM, Doug Blank <doug.blank@...>
>> wrote:
>> >>
>> >> [snip]
>> >>
>> >> >> About implementation. I would expect it to
>> be a dictionary of
>> >> >> dictionaries,
>> >> >> and on lowest level strings or integer,
>> but sometimes it is list,
>> >> >> because it
>> >> >> has to be ordered.
>> >> >> Well, I don't like that "citation_list":
>> CitationBase.to_struct(self)
>> >> >> is a
>> >> >> list, but other to_structs are dicts
>> (wrong doc of the method there by
>> >> >> the
>> >> >> way). I think these objects better have no
>> to_struct method, and you
>> >> >> just
>> >> >> write the attribute you need.
>> >> >>
>> >> >> to_struct being list is counterintuitive.
>> >>
>> >> I looked into making the output always be the same
>> type (considered
>> >> dicts, orderdicts, and namedtuples) but I think it
>> is better to see
>> >> this analogous to Object.serialize().
>> Object.serialize() will give you
>> >> a variety of types depending on the Object (list,
>> tuple, int, bool,
>> >> and even dict).
>> >>
>> >> It would be handy to mark some of these items with
>> additional
>> >> metadata. For example, marking a handle as such
>> indicates that it is a
>> >> dependency. But I'll look into other ways to mark
>> it...
>> >
>> >
>> > But why do these Base objects require a to_struct? Just
>> obtain the single
>> > attribute you require.
>> > The fact that it is a list, indicates to me you don't
>> need to_struct.
>>
>> I guess for the same reason that Base objects have a
>> serialize(): each
>> object takes care of its own representation. I think that
>> this will
>> make it easier to maintain and develop.
>>
>> -Doug
>>
>> > Benny
>> >
>> >
>> >>
>> >>
>> >> -Doug
>> >
>> >
>>
>> ------------------------------------------------------------------------------
>> | http://sourceforge.net/p/gramps/mailman/message/29683903/ | CC-MAIN-2015-48 | refinedweb | 686 | 59.7 |
One of the most versatile types of filters that the Routing Service offers is the XPathMessageFilter. Xpath is great because it gives you a full set of navigation and comparison tools, right in one filter. But let’s be honest, writing XPath by hand kinda stinks. Fortunately, XPath has the notion of functions (or shortcuts) which can be used as shorthand for longer XPath statements. As an added benefit, when you use the XPath filter inside the framework, there a whole bunch of functions that are already defined for you. Here’s the ones I’ve found particularly useful. To use these, just write them into the XPath filter, prefaced by the sm: namespace declaration.
In a post below, and in several samples, I use sm:header() to quickly navigate to the header collection in order to determine if a particular header is present. With this data, you can also go ahead and start to write other XPath functions that riff off of these. Have at it!
-Matt | https://blogs.msdn.microsoft.com/routingrules/2010/05/04/servicemodel-xpath-functions/ | CC-MAIN-2017-26 | refinedweb | 168 | 71.75 |
Steven> Anyone know what emacs lisp does ? It has an 'apropos' and
Steven> a 'describe' so I assume there is a similar feature, if
Steven> not the same syntax.
Well, it is the same type of thing. Example:
(defun my-example-defun (arg)
"This explains this function .... bla, bla, bla
more bla bla bla bla ...."
(do-something-useful)
(...)
....
)
This works very well for Emacs but ...
What's bothering me is that python is more of a programming language.
Although it is certainly possible to use python in an interactive way
your better of with having the documentation separated from the
running executable. (Ok, one thing doesn't have to rule out the
other.) At least you should be able to turn the thing off so that you
won't blow up your python more than necessary.
It is a strength though if the documentation is given syntactic
support so that a tool could be used to extract the documentation from
the source. The extracted information could then be used in your
favorite editor (e.g. Emacs), document processing system or even
generate a python documentation module.
The last idea attracts me. If you want documentation for say the
module string you'll do something like this:
import doc-string
print doc-string.<class, method, etc ...>
Maybe this could be done at "module-compile-time". I guess what I'm
really saying is that I want the documentation stored in a separate
file so that you won't have to scan piles of text when you don't want
to have it.
%% Mats | http://www.python.org/search/hypermail/python-1993/0470.html | CC-MAIN-2013-20 | refinedweb | 262 | 74.59 |
Linqer – a nice tool for SQL to LINQ transition
Linqer – a nice tool for SQL to LINQ transition
Join the DZone community and get the full member experience.Join For Free
See why over 50,000 companies trust Jira Software to plan, track, release, and report great software faster than ever before. Try the #1 software development tool used by agile teams.
Almost all .NET developers who have been working in several applications up to date are probably familiar with writing SQL queries for specific needs within the application. Before LINQ as a technology came on scene, my daily programming life was about 60-70% of the day writing code either in the front-end (ASPX, JavaScript, jQuery, HTML/CSS etc…) or in the back-end (C#, VB.NET etc…), and about 30-40% writing SQL queries for specific needs used within the application. Now, when LINQ is there, I feel that the percentage for writing SQL queries got down to about 10% per day. I don’t say it won’t change with time depending what technology I use within the projects or what way would be better, but since I’m writing a lot LINQ code in the latest projects, I thought to see if there is a tool that can automatically translate SQL to LINQ so that I can transfer many queries as a LINQ statements within the code.
Linqer is a tool that I have tested in the previous two weeks and I see it works pretty good. Even I’m not using it yet to convert SQL to LINQ code because I did it manually before I discovered that Linqer could have really helped me, I would recommend it for those who are just starting with LINQ and have knowledge of writing SQL queries.
Let’s pass through several steps so that I will help you get started faster…
1. Go to website and download the version you want. There is a Linqer Version 4.0.1 for .NET 4.0 or Linqer Version 3.5.1 for .NET 3.5.
2. Once you download the zip file, extract it and launch the Linqer4Inst.exe then add install location. In the location you will add, the Linqer.exe will be created.
3. Launch the Linqer.exe. Once you run it for first time, the Linqer Connection Pool will be displayed so that you can create connection to your existing Model
Click the Add button
Right after this, the following window will appear
#1 – The name of the connection string you are creating
#2 – Click “…” to construct your connection string using Wizard window
#3 – Chose your language, either C# or VB
#4 – Model LINQ to SQL or LINQ to Entities
Right after you select LINQ to SQL, the options to select the files for the Model will be displayed. In our case I will select LINQ to SQL, and here is the current progress
So, you can select existing model from your application or you can Generate LINQ to SQL Files so that the *.dbml and *.designer.cs will be automatically filled
#5 – At the end, you can chose your context name of the model which will be used when generating the LINQ code
Once you are done, click OK.
You will get back to the parent window filled with all needed info
and click Close.
Note: You can later add additional connections in your Linqer Connections Pool from Tools –> Linqer Connections
In the root folder where your Linqer.exe is placed, now you have Linqer.ini file containing the Connection string settings.
Ok, now lets go to the interesting part.
Lets create one (first) simple SQL query and try to translate it to LINQ statement.
SQL Query
where a .city = 'Oakland'
If we add this query to Linqer, here is the result:
So, the LINQ code is similar to the SQL code and is easy to read since it’s simple. Also, if you notice, the tool generates class (you can add class name) with prepared code for using in your project. Perfect!
Now, lets try to translate a query with two joined tables (little bit more complex):
SQL Query
select * from employee
left join publishers
on employee.pub_id = publishers.pub_id
where employee.fname like '%a'
The LINQ generated code is:
}
So, if you can notice the where clause, we said in the SQL query: ... like "%a" and the corresponding LINQ code in C# is ... EndsWith("a"); - Excellent!
And the Class automatically generated by the tool is
public class EmployeePubClass
{
private String _Emp_id;
private String _Fname;
private String _Minit;
private String _Lname;
private Int16? _Job_id;
private Byte? _Job_lvl;
private String _Pub_id;
private DateTime? _Hire_date;
private String _Column1;
private String _Pub_name;
private String _City;
private String _State;
private String _Country;
public EmployeePubClass(
String AEmp_id, String AFname, String AMinit, String ALname,
Int16? AJob_id, Byte? AJob_lvl, String APub_id, DateTime? AHire_date,
String AColumn1, String APub_name, String ACity, String AState,
String ACountry)
{
_Emp_id = AEmp_id;
_Fname = AFname;
_Minit = AMinit;
_Lname = ALname;
_Job_id = AJob_id;
_Job_lvl = AJob_lvl;
_Pub_id = APub_id;
_Hire_date = AHire_date;
_Column1 = AColumn1;
_Pub_name = APub_name;
_City = ACity;
_State = AState;
_Country = ACountry;
}
public String Emp_id { get { return _Emp_id; } }
public String Fname { get { return _Fname; } }
public String Minit { get { return _Minit; } }
public String Lname { get { return _Lname; } }
public Int16? Job_id { get { return _Job_id; } }
public Byte? Job_lvl { get { return _Job_lvl; } }
public String Pub_id { get { return _Pub_id; } }
public DateTime? Hire_date { get { return _Hire_date; } }
public String Column1 { get { return _Column1; } }
public String Pub_name { get { return _Pub_name; } }
public String City { get { return _City; } }
public String State { get { return _State; } }
public String Country { get { return _Country; } }
}
public class List: List<EmployeePubClass>
{
public List(Pubs db)
{
var query =
};
foreach (var r in query)
Add(new EmployeePubClass(
r.Emp_id, r.Fname, r.Minit, r.Lname, r.Job_id, r.Job_lvl,
r.Pub_id, r.Hire_date, r.Column1, r.Pub_name, r.City, r.State,
r.Country));
}
}
Great! We have ready-to-use class for our application and we don't need to type all this code.
Besides this way to generate code, you can in same time use this tool to see the db results
I like this tool because mainly it’s very easy to use, lightweight and does the job pretty straight forward.
You can try the tool and send me feedback using the comments in this blog post.
See why over 50,000 companies trust Jira Software to plan, track, release, and report great software faster than ever before. Try the #1 software development tool used by agile teams. }} | https://dzone.com/articles/linqer-%E2%80%93-nice-tool-sql-linq | CC-MAIN-2018-22 | refinedweb | 1,077 | 67.18 |
ELECRAFT K3 HIGH-PERFORMANCE 160 –6 METER TRANSCEIVER OWNER’ S MANUAL Revision D1, July 27, 2008 Copyright © 2008, Elecraft, Inc. All Rights Reserved Contents Buffered I.F. Output...................................... 38 Using Transverters........................................ 38 Scanning ...................................................... 39 A Note to K3 Owners .......................................3 Key to Symbols and Text Styles.......................3 Quick-Start Guide.............................................4 Introduction.......................................................7 M ain and Sub Receiver Antenna Routing...... 40 Basic K3 (no KAT3 or KXV3) ...................... 40 K3 with KXV3 RF I/O Module ...................... 40 K3 with KAT 3 AT U..................................... 41 K3 with KAT 3 and KXV3............................. 42 K3 Features.....................................................7 Specifications..................................................8 Customer Service and Support........................10 Front Panel......................................................11 Control Groups..............................................11 Display .........................................................12 LEDs............................................................13 Front Panel Connectors..................................13 Primary Controls...........................................13 Multi-Function Controls.................................14 VFO T uning Controls....................................14 Keypad.........................................................15 Memory Controls ..........................................16 Message Record/Play Controls.......................16 RIT and XIT Controls....................................16 Remote Control of the K3.............................. 43 Options........................................................... 44 Firmware Upgrades ........................................ 44 Configuration ................................................. 45 Rear Panel .......................................................17 Synthesizer................................................... 48 Wattmeter..................................................... 48 Transmitter Gain........................................... 48 Reference Oscillator...................................... 49 Front Panel Temperature Sensor .................... 50 PA Temperature Sensor................................. 50 S-Meter........................................................ 50 Crystal Filter Setup ....................................... 45 Option Module Enables................................. 46 Miscellaneous Setup...................................... 46 VFO A Knob Friction Adjustment ................. 47 VFO B Knob Friction Adjustment.................. 47 Real T ime Clock Battery Replacement ........... 47 Calibration Procedures ................................... 48 Connector Groups..........................................17 KIO3 Module ................................................18 Basic Operation ..............................................21 Receiver Setup ..............................................23 Reducing Interference and Noise ....................25 Transmitter Setup ..........................................26 Voice Modes (SSB, AM, FM) ........................28 CW Mode .....................................................30 Data Modes...................................................31 M enu Functions.............................................. 51 MAIN Menu................................................. 51 CONFIG Menu............................................. 52 Advanced Operating Features.........................33 Troubleshooting............................................. 59 T ext Decode And Display ..............................33 CW-to-DAT A...............................................34 T uning Aids: CWT and SPOT ........................34 Audio Effects (AFX)......................................35 Dual Passband CW Filtering...........................35 Receive Audio Equalization (EQ)...................35 Transmit Audio Equalization (EQ)..................35 SPLIT and Cross-Mode Operation..................36 Extended Single Sideband (ESSB) ..................36 General-Coverage Receive.............................36 VFO B Alternate Displays..............................36 Alarm and Auto Power-On.............................36 Using the Sub Receiver ..................................37 Receive Antenna In/Out.................................38 Parameter Initialization ................................. 61 Module Troubleshooting ............................... 62 Theory Of Operation...................................... 66 RF BOARD.................................................. 66 KANT3 and KAT 3 ....................................... 68 KIO3............................................................ 68 Front Panel and DSP ..................................... 68 KREF3......................................................... 69 KSYN3 ........................................................ 70 K3 Block Diagram........................................ 71 Appendix A: Crystal Filter Installation.......... 72 Index............................................................... 76 2 A Note to K3 Owners Onbe ha l fofoure nt i r ede s i g nt e am,we ’ dl i ket ot ha nkyouf orc hoos i ngt heEl e c r a f tK3t r a ns c e i ve r . The K3—like its predecessor, the K2—reflects our desire to go beyond what other high-performance t r a ns c e i ve r sha veof f e r e d.I ti s n’ tj us tahome -station rig; at about 8 to 9 pounds, it can accompany you whe r e ve ryoug o,whe t he ri t ’ soutt oyourba c kpor c horha l f wa ya r oundt hewor l d.Andi t ’ st heonl y rig in its class that you can build yourself. Above all, we want the K3 to be ready for any operating s i t ua t i onyoue nc ount e r ,a ndbemor ee nj oya bl et ous et ha na nyt r a ns c e i ve ryou’ vee ve rowne d. In addition to t hi sma nua l ,you’ l lf i ndmuc hmor ei nf or ma t i onont heK3onourwe bs i t e ,i nc l udi ng operating tips, answers to frequently asked questions, and information on firmware upgrades. 73, Wayne, N6KR Eric, WA6HHQ Key to Symbols and Text Styles Important –read care fully Operating tip LS B . . LCD icon or characters LED Enter keypad function X MI T Tap switch function (labeled on a switch) TU N E Hold switch function (labeled below a switch; hold for 1/2 sec. to activate) SQL Rotary control without integral switch PW R Tap switch function of rotary control (labeled above a knob) MO N Hold switch function of rotary control (labeled below a knob; hold for 1/2 sec.) MAIN:VOX GN Typical MAIN menu entry CONFIG:KAT3 Typical CONFIG menu entry 3 Quick-Start Guide To get started using your K3 right away, please read this page and the two that follow, trying each of the controls. The text uses braces to refer to numbered elements in the front- and rear-panel illustrations below. For example, {1} refers to 1 , the mic jack. Later sections provide greater detail on all aspects of K3 operation. The first thing you nee d to know about the K3 is that most switches have two functions. Tap (press briefly) to activate the function labeled on a switch. Hold to activate the function labeled below the switch. In the text, tap functions are shown like this: ME N U . An example of a hold function is C ON FI G . Additional typographical conventions are shown on the previous page. Try tapping ME N U {8}. T his brings up the M AI N menu. Rotating VFO B {19} selects menu entries, while rotating VFO A {22} changes their parameters. Tap ME NU again to exit the menu. 4 Connections Connect a power supply to the DC input jack {26} (see Specifications, pg. 8). On the K3/100, a circuit breaker is provided on the fan panel for the 100-W stage {30}. You can power an accessory device from the switched DC output jack {38} (0.5 A max). Connect an antenna to ANT1 {29}. If you have an ATU installed (pg. 22), you can connect a second antenna to ANT2 {28}. If the KXV3 is installed, you can connect a separate RX antenna to RX ANT IN {34}. The AUX RF connector {27} is optional; see pg. 17. The Basics Filter Controls Press P OW E R {5} to turn on the K3. If there are any error indications, refer to pg. 63. T AP and H O LD Functions: Tapping briefly activates the function labeled on a switch. Holding for about 1/2 second activates the function labeled below a switch. T ap either end of B AN D {7} to select a band, and tap MO D E {6} to select the mode. Set the AF gain using AF {2}. Set RF to max. S UB controls are discussed on pg. 37. The large knob {22} controls VFO A (upper display, {10}). The medium knob {19} controls VFO B (lower display, {11}). VFO A is main RX/T X except in SPLIT (pg. 36). CM P / P WR is one of four multifunction controls {24}. Each has two primary functions, indicated by green LEDs. The knob has a built-in switch; tap it to select either CM P (compression level) or P WR (power output). Hold the knob in to access its secondary function, M O N itor level. Tap again to restore the primary function. Rotate the SHIFT / LOCUT and HICUT / WIDTH controls {23} to adjust the filter passband. Crystal filters FL1 - FL5 are automatically selected as you change the bandwidth. T ap either knob to alternate between shift/width and hicut/locut. Hold SHIFT / LOCUT to NO RM alize the bandwidth (e.g., 400 Hz CW, 2.8 kHz SSB). Hold HICUT / WIDTH to alternate between two filter setups, I and II (per-mode). T ap XF IL {13} to select crystal filters manually; this also removes any passband shift. Voice Modes {1} CW Mode {36} Data Modes {31} Hold ME TE R {8} to see CM P / ALC levels. While talking, set MIC {25} for 4-7 bars of ALC, and CMP for the desired compression. Then return to S WR / P WR (pg. 28). O ptional: Hold TE S T {6} for TX TEST mode; allows off-air TX adjustments (pg. 13). Hold CMP / PWR {24} to set speech MO N itor level; tap to return to CM P / P WR . Hold V O X {7} to select PTT or V OX . Hold SPEED / MIC to set VOX DE LAY . Details: VOX, pg. 29; T X EQ, pg. 35; MIC SEL, pg. 51; SSB/AM/FM, pg. 28. SPEED {25} sets the CW keyer speed. Hold this knob to set semi-break-in DE LAY . Hold Q S K {7} to select full break-in (Q S K icon on) or semi-break-in. (Pg. 30.) Hold P I TCH {18} to set sidetone pitch. Hold CMP / PWR to set sidetone MO N level. T ap C W T {18} for tuning aid {9} (pg. 34). With CWT on, S P O T auto-tunes (pg. 30). To select CW text decode/display mode, hold TE X T D E C {18}; rotate VFO B (pg. 30). CW keying is converted to DATA in FS K D and PS K D modes (below and pg. 34). Hold DU AL P B {13} to turn CW dual-passband filter (pg. 30). T ap MO D E {6} until you see the DATA icon turn on (see Data Modes, pg. 31). Hold D AT A MD {18}. Use VFO B to select from: DATA A (PSK31 & other soundcard-based modes), AFS K A (soundcard-based RTT Y), FS K D (RTT Y via data input or keyer), or P S K D (PSK via data input or keyer). VFO A selects data baud rate for internal encoder/decoder, if applicable. DU AL P B turns on RTTY filter (DT F, pg. 32). Hold P I TCH {18} to select mark tone and shift (for encoder/decoder and RTT Y filter). Hold TE X T D E C {18} to set up text decode. CW T shows tuning aid (pg. 34). 5 VFOs and RIT/XIT R ATE {21} selects 10 or 50 Hz VFO/RIT tuning. See VFO menu entries, pg. 52. F IN E {21} selects 1-Hz steps. C O AR S E selects large steps (MAIN menu, VFO CRS). T ap F R E Q E N T {21} to enter frequency in MHz using numeric keypad & decimal point. T ap return ( ) to complete the entry, or tap F R E Q E N T again to cancel. (Pg. 15.) Hold S C AN to start/stop scanning. SC AN must be preceded by a memory recall (pg. 39). The R I T and X I T offset knob {17} has LEDs that show -/0/+ offset (pg.16). T ap CL R {16} to zero the offset. Hold CL R for > 2 sec. to add the offset to VFO A, then zero it. Transmit, ATU, an d Antenna Controls The TX LED {4} indicates that the K3 is in transmit mode. The ∆f LED turns on if the RX and T X frequencies are unequal ( S PL I T , R I T / X I T , cross-mode, etc.). (Pg. 13.) X MI T {8} is equivalent to PTT {35}. TU N E puts out full CW power in any mode. ATU TU N E {8} initiates antenna matching (pg. 22). ATU enables or bypasss the ATU. AN T selects AN T1 or ANT2 . R X AN T selects main or RX antenna (KXV3). NB, NR, and Notch T ap NB {12} to enable DSP and I.F. noise blanking. Hold L E V E L to set DSP NB level (VFO A) and I.F. NB level (VFO B). Fully CCW is OFF in both cases. (Pg. 25.) T ap N R {12} to turn on noise reduction. Hold AD J to tailor noise reduction for the present band conditions (pg. 25). T ap N TC H {12} once to select auto-notch ( NTCH icon), and a second time to select manual notch (adds icon). Hold MAN to adjust manual notch frequency. (Pg. 25.) S PLIT, BS ET, and S UB Hold S P L I T {13} to enter split mode (RX on VFO A, TX on VFO B). If VFOs A and B are on different frequencies in SPLIT mode, the Delta-F LED ( ∆f ) will turn on (pg. 13). Hold B S E T {13} to adjust VFO B / sub RX settings independently of VFO A (pg. 37). T ap SU B {20} to turn on the sub receiver (pg. 37). VFO B controls its frequency. Hold S UB {20} to link the two VFOs (VFO A is then the master). This allows diversity receive with main and sub if two different antennas are used (pg. 37). Memories, Messages, and DVR To store a frequency memory, tap V M {14}, then: tap M1 - M4 {15} to save a per-band quick memory; or tap 0 - 9 to save a general-purpose quick memory; or rotate VFO A to select from memories 0-99, then tap V M again to save. Tap M V to recall. (Pg. 16.) R E C and M1 - M4 {15} are also used to record & play voice/CW/DAT A messages. T he KDVR3 option is required for voice messages and AF R E C / AF P L AY (pg. 29). Menus ME N U & C ON FI G {8} access the MAIN and CONFIG menus. VFO B selects entries; VFO A changes parameters. In general, CONFIG menu entries are used less often. T apping DI S P {8} within menus shows information about each entry on VFO B (pg 51). Up to 10 menu entries can be assigned to programmable function switches. P F 1 and P F 2 {16} are dedicated programmable functions. Any of M1 - M4 {15} can be used as T ap and/or Hold programmabl ef u nc t i o nsi ft he y ’ r eno tb e i ngu s e df orme s s a g epl a y( pg51). Other Features RX and T X EQ (MAIN menu) provide 8 bands of receive/transmit equalization (pg. 35). T ap AF X {18} to enable the selected audio effect (see CONFIG:AFX MD, pg. 51). T ap D I S P {8} and use VFO B to show time, supply voltage, etc. on VFO B (pg. 36). The ALARM function (MAIN:ALARM menu entry) can be used to remind you about a contest, net, or QSO schedule, and can even turn the K3 on at alarm time (pg. 36). The KIO3 module provides a rich set of AF {33} and digital {32} I/O (pg. 17). 6 Introduction This comprehensive manual covers all the features and capabilities of the Elecraft K3 transceiver. We recommend that you begin with the Quick-Start Guide (pg. 4). T he Front Panel (pg. 11) and Rear Panel (pg. 17) sections are for general reference, while Basic O peration (pg. 21) and Advance d O peration (pg. 33) fill in the details. CW and Digital Modes Built-in digital-mode demodulation with t e x td i s pl a ye do nt heK3 ’ sLCD( CW, RTT Y, PSK31) (pg. 33) Internal CW-to-RTTY or CW-to-PSK31 text decode/encode for casual digital-mode QSOs without a computer (pg. 34) CW text can be decoded and displayed as you send –great for improving CW skills (pg. 33) Automatic CW/data signal spotting and manual fine-tuning display (pg. 30) Your K3, including any installed crystal filters and option modules, should already be configured. Anytime you add new filters or options, refer to Configuration (pg. 45). K3 Features User Interface The K3 offers a number of advanced features that simplify operation and enhance versatility. These are listed below. Refer to the indicated pages for further details. Dual VFOs with independent modes, bands, and filter settings (pg. 14) 100 memories with alphanumeric labels, plus 4 quick-memories per band (pg. 16) Dedicated message play controls for use in CW, data, and voice modes (pg. 30) Real-time clock/calendar with programmable alarm times and automatic power-on (pg. 36) Utility displays show voltage, current drain, RIT /XIT offset, front panel temperature, PA heatsink temperature, etc. (pg. 36) Instructions for menu entries available with one switch tap Receiver Up to five crystal roofing filters with bandwidths as narrow as 200 Hz (pg. 23) High-performance, fully independent sub receiver, also with up to five crystal filters, allows true diversity receive with two antennas (pg. 37) Variable-bandwidth crystal filters that track DSP filter settings Narrow ham-band front-end filters, plus wider band-pass filters for general-coverage receive (pg. 44) Connectivity Enhanced, high-speed remote control interface with many new commands and direct DSP access Firmware upgradeable via the Internet (pg. 44) Isolated PC audio input and stereo outputs (pg. 17) DSP 32-bit I.F. DSP for advanced signal processing, including full stereo and other binaural effects (pg. 35) Passband tuning and programmable DSP/crystal filter presets (pg. 14) 8-band transmit and receive EQ (graphic equalization) (pg. 35) Dual-passband effects for use in contest/pileup conditions (pg. 30) Versatile digital voice recorder (DVR) for incoming/outgoing audio streams (pg. 29) Front and rear mic and headphone jacks Full stereo audio drives two speakers Optional RX antenna in/out, transverter in/out, and buffered IF outputs (KXV3) 7 Specifications Some specifications apply only if the corresponding option modules are installed (see Options, pg. 44). GENERAL Frequency Range Main and Sub Receivers, 500 kHz - 30 MHz and 48-54 MHz. Transmitter: Amateur bands between 1.8 and 54 MHz; transmit limits vary by country. Tuning Step Sizes 1, 10, 20, and 50 Hz; user-configurable coarse tuning steps (per-mode). Direct keypad frequency entry in either MHz or kHz Memories 100ge ne r a lpur pos e ,pl us4“qui c kme mor i e s ”per band Frequency Stability +/- 5 ppm (0-50 C) TCXO standard; +/- 1 ppm TCXO optional Antenna Jacks 50 ohms nominal. One SO-239 supplied (2nd SO-239 jack supplied with KAT3 ATU). BNC jacks for RX antenna in/out and transverter in/out (KXV3 Option). Modes USB, LSB, AM, FM, CW, and DATA. In DATA mode: FSK D (Direct), AFSK A (Audio), PSK D (Direct) and DATA A (Audio; PSK, etc.). Built in PSK, RTTY, and CW text decode/display. VFOs Dual VFOs (A and B) with separate weighted tuning knobs Remote Control Port EIA-232 standard DE-9F; USB adapter option. Full control of all radio functions Audio I/O Line-level isolated TX/RX audio int e r f a c e( s t er e oout put s ) ;f r ont( 1/ 4” )a ndr e a r( 1/ 8” ) stereo headphone jacks; stereo speaker jack Low Level Transverter Interface 0 dBm typ.; BNC connectors (KXV3 Option) Buffered IF output BNC connector (KXV3 Option); see pg. 38 for interface recommendations Other I/O Key/Keyer/Computer, Paddle, PTT In, and KEY Out. Band information output via binary interface and AUXBUS on ACC connector. Real-Time Clock/Cal endar Accuracy: Approx. +/- 20 ppm (+/- 2 seconds/day). U.S. and E.U. date formats. Battery: 3 V coin cell (see pg. 47 for replacement instructions). Supply Voltage /Current 13.8 V nominal (11 V min, 15 V max). 17-22 A typical in TX for K3/100, 3-4 A typical in TX for K3/10. 0.9A typical RX (less sub receiver). Recommended supply: 13.8VDC @ 25A, continuous duty for K3/100; 13.8VDC @ 6A for K3/10. For best results, use the supplied 5 foot (1.53 m) power cable. Accessory DC output Switched, 0.5 A max; 13 V no-load, 12 V max load (@ Vsupply = 13.8 V) Weight (K3/100) Approx. 8.5 lbs. (3.8 kg). With KRX3 sub receiver option, 9.5 lbs. (4.3 kg). Size Enclosure only, 4.0 x 10.7 x 10.0 in., HWD (10.2 x 27.2 x 25.4 cm); with projections, 4.4 x 11.1 x 11.8 in. (11.2 x 28.2 x 30.0 cm) 8 RECEIVER (Main and Sub)* Sensitivity (MDS) -136 dBm (typ.), preamp on, 500 Hz b/w. Reduced sensitivity near 8.2 MHz (first I.F.). 6 m MDS with PR6 option: -143 to -144 dBm (typ.). KBPF3 option required for full general-coverage receive, including broadcast band (0.5 to 1.7 MHz). Note: Sensitivity gradually decreas es below 1.8 MHz due to highpass response of T-R switch. This protects the PIN diodes. IMD3 Dynamic Range > 100 dB typical at 5, 10, and 20 kHz spacing. Blocking Dynamic Range 140 dB typical at 5, 10, and 20 kHz spacing Image and I.F. Rejection > 70 dB Audio Output 2.5 W per channel into 4 ohms; typ. 10% THD @ 1 kHz, 2 W S-Meter Nom. S9 = 50 µV, preamp on; user-adjustable Noise Blanker Adjustable, multi-threshold/multi-width hardware blanker plus DSP blanker Receive AF graphic EQ +/- 16 dB/octave, 8 bands Filter Controls IF Shift/Width & Lo/High Cut with automatic crystal filter selection * Dynamic range measurements based on 400-Hz, 8-pole filter. Other available filters have very similar performance; see for full list. Receive specifications are guaranteed only within ham bands. TRANSMITTER * Output Power K3/100: 0.1 W –100 W typ. (reduced power in AM mode). K3/10 (or K3/100 with PA bypassed): 0.1 W –12 W, HF-10 m; 8W max on 6 m. XVTR OUT (KXV3 option): 0.1 to 1.5 mW (-10 to +1.8 dBm). Duty Cycle CW and SSB modes, 100% 10-min. 100W key-down at 25 C ambient True RF Speech Processor Adjustable compression Transmit AF graphic EQ +/- 16 dB/octave, 8 bands SSB TX Bandwidth 4 kHz max (> 2.8 kHz requires 6 kHz AM filter) SSB TX Monitor Post-DSP filtering/processing VOX DSP-controlled, adjustable threshold, delay, and anti-VOX Full and Semi CW Break-In Adjustable delay; diode T/R Switching SSB Carrier Suppression > 50 dB Harmonic and Spurious Outputs > 50 dB below carrier @ 100W (> 60 dB on 6 meters) CW Offset/Sidetone 300-800 Hz, adjustable (filter center frequency tracks sidetone pitch) Mic Front panel 8 pin mic connector; rear panel 3.5 mm mic connector. Switchable DC bias voltage available for elect ret mics (see MAIN:MIC SEL menu entry) * Transmit spe cifications are guarantee d only within ham bands. 9 Customer Service and Support Technical Assistance You can send e-mail to k3support@ele craft.com and we will respond quickly –typically the same day Monday through Friday. If you need replacement parts, send an e-mail to [email protected]. T elephone assistance is available from 9 A.M. to 5 P.M. Pacific time (weekdays only) at 831-662-8345. Please use e-mail rather than calling when possible since this gives us a written record of the details of your problem and allows us to handle a larger number of requests each day. Repair / Alignment Service If necessary, you may return your Elecraft product to us for repair or alignment. (Note: We offer unlimited email and phone support, so please try that route first as we can usually help you find the problem quickly.) IMPO RTANT: You must contact Ele craft before mailing your product to obtain authorization for the return, what address to ship it to and current information on repair fees and turn around times. (Frequently we can determine the cause of your problem and save you the trouble of shipping it back to us.) Our repair location is different from our factory location in Aptos. We will give you the address to ship your kit to at the time of repair authorization. Packages shipped to Aptos without authorization will incur an additional shipping charge for reshipment from Aptos to our repair depot. Elecraft 1-Year Limited Warranty This warranty is effective as of the date of first consumer purchase. It covers both our kits and fully assembled products. For kits, before requesting warranty service, you should fully complete the assembly, carefully following all instructions in the manual. What is cove re d: During the first year after date of purchase (or if shipped from factory, date product is shipped to customer), Elecraft will replace defective or missing parts free of charge (post-paid). We will also correct any malfunction to kits or assembled units caused by defective parts and materials.. What is not cove re d: T his warranty does not cover correction of kit assembly errors. It also does not cover misalignment; repair of damage caused by misuse, negligence, or builder modifications; or any performance malfunctions involving non-Elecraft accessory equipment. The use of acid-core solder, watersoluble flux solder, or any corrosive or conductive flux or solvent will void this warranty in its entirety. Also not covered is reimbursement for loss of use, inconvenience, customer assembly or alignment time, or cost of unauthorized service. Limitation of incidental or conse quential damages: This warranty does not extend to non-Elecraft equipment or components used in conjunction with our products. Any such repair or replacement is the responsibility of the customer. Elecraft will not be liable for any special, indirect, incidental or consequential damages, including but not limited to any loss of business or profits. 10 Front Panel This reference section describes all front panel controls, the liquid crystal display (LCD), LEDs, and connectors. Operating instructions are covered in later sections. Control Groups Primary Controls (pg 13): These controls provide basic transceiver setup, including power on/off, band, operating mode, AF and RF gain and squelch, AT U and transmit controls, display modes, and menus. Ke ypad (pg. 15): This group of switches is numbered for use during memory store/recall and direct frequency entry, but each switch also has normal tap and hold functions. T he upper row of switches are VFO controls. The remaining rows control receive-mode and miscellaneous functions, such as noise reduction and text decode/display. Display (pg 12): The LCD shows signal levels, VFO A and B frequencies, filter bandwidth, operating mode, and the status of many controls. The VFO B display is alphanumeric, so it can show decoded text from digital modes (CW, RTT Y, PSK31), as well as menus, time and date, help messages, etc. Memories (pg. 16): These switches control frequency memory store/recall, message record/play, and audio record/playback (with the DVR). M1 - M4 can also be used as up to eight tap/hold programmable function switches. Multi-Function Controls (pg. 14): T he upper two knobs set up receiver DSP filtering. The lower two control transmit parameters, including keyer speed, mic gain, speech compression, and power output level. LEDs above each knob show which function is active; tapping the knob alternates between them. Pressing and holding these knobs (1/2 second or longer) provides access to secondary functions. VFOs (pg. 14): The large knob controls VFO A; the smaller knob controls VFO B. The four switches between the VFO knobs select tuning rates and control related functions. RIT/XIT (pg. 16): Three switches control RIT and XIT on/off and clear (offset zero). The knob below the RI T / X I T switches selects the offset. 11 Display Multi-characte r displays: The 7-segment display (upper) shows the VFO A frequency. T he 13segment display (lower) shows VFO B. VFO Icons: The TX icon and two arrows indicate which VFO is selected for transmit as shown below. In T XT EST mode, TX flashes (see TE S T ). Shows that VFO A or B is locked (see L O CK ). Bar graph, re ceive mode : The bar graph normally acts as an S-meter. If CW T is turned on, the right half of the S-meter becomes a tuning aid (pg. 34). A VFO A is the transmit VFO Bar graph, transmit mode: The bar graph normally shows S WR and RF power output. T he RF scale will be either 5 and 1 0 (low power) or 50 and 1 00 (high power). In voice and data modes, transmit scales can be changed to compression ( CM P ) and ALC using ME TE R . TX Filte r Graphic: T his shows the approx. bandwidth a ndpos i t i o noft her e c e i ve r ’ sI . F.p a s s b a nd. See Filte r PassbandControls, pg. 23. Othe r Icons: TX VFO B is the transmit VFO; see SPLIT B CW/data tuning aid on ( CW T , pg. 34) DVR in use ( AF R E C / AF P L AY, pg. 16) V O X VOX enabled ( V O X , pg. 13) Q S K Full break-in CW enabled ( Q SK , pg. 30) NB Noise blanker on ( N B , pg. 15) NR Noise reduction on ( NR , pg. 15) AN T Antenna 1 or 2 ( AN T , pg. 13) RX RX antenna in use ( R X AN T , pg. 13) ATT Attenuator on ( AT T , pg. 15) P RE Preamp on ( PR E , pg. 15) ATU AT U enabled ( ATU , pg. 13) RI T RIT on ( R I T , pg. 16) XIT XIT on ( X I T , pg. 16) S UB Sub receiver on ( S UB , pg. 37) S P LT Split mode in effect ( S P LI T , pg. 36) CW T Filte r Icons: NTCH Notch filtering on ( N TCH , pg. 25) Manual notch ( MAN , pg. 25) I / II Shows selected preset (I /I I , pg. 14) XFIL Crystal filter selection ( FL1 - FL5 ) Mode Icons: Basic modes ( LS B / US B , CW , DATA , AM , or FM ) are selected by tapping either end (Up/Down) of MO D E . Alternate modes ( CW RE V , DATA RE V , AM -S , FM + / - ) are selected by holding AL T . LS B and US B are alternates of each other. T indicates FM/tone, or CW/data text decode. 12 LEDs Primary Controls TX [Re d] T urns on in transmit mode. B AN D T ap the left / right end of this switch to move to the next lower / higher ham band. V O X Selects voice-operated or keying-activated (CW) transmit ( V OX icon on), or PTT-controlled transmit. Also see D EL AY (pg. 30). ∆F [Yellow] The Delta-F LED turns on if transmit and receive frequencies or modes are different due to the use of SPLIT, RIT, or XIT. [Green] Eight LEDs show which functions are in effect for the Multifunction Controls (pg. 14). Selects either full break-in ( QS K icon on) or semi break-in keying, if VOX is selected in CW mode. Also see D E L AY (pg.30). QSK (+ ) RIT/XIT O FFSET If the offset control is centered, or you tap C LR , the green LED turns on (offset = 0). Otherwise, the yellow (-) or (+) LED will be on, indicating the direction of the offset. See RI T , XI T , and C LR . (-) MO D E T ap the left or right end of this switch to select the operating mode. When DATA is selected, the D AT A MD switch is used to specify DATA-A, AFSK A, FSK D, or PSK D (pg. 31). In LS B mode, switches to US B (and viceversa). Also selects alternate modes, including: CW RE V , DATA RE V , and AM -S (pg. 29). In FM mode, selects + /- or simplex (pg. 29). AL T Front Panel Connectors PHO NES You can use either mono or stereo headphones at either the front- or rear-panel headphone jack. Also see AF X (pg. 35). Selects T X NORM or T X TEST ( TX LCD icon flashing). T X TEST allows you to test keying, mic level, etc., without actually transmitting. TE S T MIC An Elecraft MH2, MD2, Proset-K2, or other compatible mic can be used (see pinout below). T o select the front- or rear-panel mic, and to turn bias on/off, use the MAIN:MIC SRC menu entry. Bias must be turned on for electret mics (e.g. MH2, MD2, Proset). It must be off for dynamic mics (e.g. Heil mics using HC4 or HC5 elements). T urns the K3 on or off. Note : To ensure corre ct save of ope rating parame te rs, turn the K3 off before turning the powe r supply off. POW ER ME N U Displays MAIN menu (pg. 21). C O NF I G Displays the CONFIG menu (pg. 21). Manually-operated transmit. Places the K3 into transmit mode (same as PTT, pg. 26). X MI T Puts out a carrier at the present power level. Also TUNE Powe r Le vel (pg. 27). TU N E R X AN T Selects the receive antenna (pg. 22). D I S P Shows an alternate display on VFO B, including time, date, voltage, etc. Use the VFO B knob to select the desired display (pg. 36). ME TE R Selects voice transmit bar graph modes: S WR and RF , or CM P and ALC (pg. 28). Mic jack, viewed from front of K3 1 Mic audio, low-Z (~600 ohms) 2 PTT 3 DOWN button * 4 UP button * 5 FUNCTION button * 6 8V (10 mA max) 7, 8 Ground ATU TU N E Places the K3 into low-power CW transmit mode and matches the antenna using the KAT3 automatic antenna tuner (pg. 22). Puts the ATU into normal mode ( ATU icon on) or bypass mode (pg. 22). ATU * See CONFIG:MIC BTN menu entry (pg. 52) AN T Selects ANT 1 or 2 and recalls the last ATU settings used for that antenna (saved per-band). In BSET mode with the sub receiver on, selects M AI N or AUX antenna for the sub receiver (pg. 37). FP ACC This connector (RJ-45, 6 pins) is located on the bottom of the transceiver, near the VFO B knob. It is used with accessory devices. 13 Dual-Concentric Potentiometers Transmit Controls AF — SUB AF gain controls for main receiver (inner, or smaller knob) and sub receiver (outer ring, or larger knob). The primary functions of the transmit controls are: SPEED MI C RF / SQL — SUB RF gain (and/or squelch) controls for main and sub receiver. C MP PW R T wo menu entries are provided to control squelch directly: CONFIG:SQ MAIN, and SQ SUB. They can also be used to reconfigure the RF gain controls as squelch for either receiver. See the Config Menu listing for details (pg. 52). Keyer speed in WPM, 8-50 Mic gain Speech compression level RF output power in watts (pg. 26) The present transmit mode determines which primary functions normally apply; for example, in CW mode, the S P E E D / MI C control defaults to S P E E D . You can always tap a knob to override the present selection. The secondary functions of these controls are: Multi-Function Controls D E L AY The upper two multi-function controls set up receiver filtering. The lower two controls adjust transmit settings. Each control has two primary functions (white labels) and a secondary function (yellow). Tap a control knob to alternate between its primary functions, indicated by two LEDs. Hold a knob (~1/2 second or longer) to select its secondary function. MO N VFO Tuning Controls The VFO A knob controls the upper frequency display. This is normally the RX and T X frequency. In SPLIT mode, VFO B controls the transmit frequency (pg. 36). VFO B also controls the sub receiver when it is installed and turned on (pg. 37). Filter Controls The primary functions of the filter controls are: S H IF T LO CUT H I CU T W ID TH VOX delay (voice/data) or CW semibreak-in delay, in seconds Voice or data monitor level or CW/data sidetone level The controls to the right of VFO A include: Shift passband either direction Adjust low-frequency response Adjust high-frequency response Adjust width of the passband F R EQ EN T Direct frequency entry (pg. 15) S C AN Start or stop scanning (pg. 39) Select 1 Hz tuning for both VFOs and RIT /XIT offset C O AR S E Select coarse tuning rate (pg. 22) F IN E As these settings change, so does the filter graphic. Crystal filters are selected automatically (or manually using X FI L , pg. 15). Also see Filter Passband Controls (pg. 23). R ATE Select one of two normaltuning rates (10/50 or 10/20 Hz; pg. 22) L O CK Lock VFO A (use B S E T to lock B) SU B T urn sub receiver on/off (pg. 37). Hold this switch to link/unlink VFO A and B on the present band (pg. 37) The secondary functions of these controls are: N OR M Normalize passband Normalizing the passband sets the bandwidth to a fixed, per-mode value (e.g. 400 Hz in CW mode) and centers the passband. (Also see user-defined normal settings, NO RM1 /2 , pg. 24.) I / II VFO A can optionally be coarse-tuned using the RIT /XIT offset control if both RI T and XI T are off . See CONFIG:VFO OFS. Select preset I or II (per mode) Presets I and II each hold a continuously-updated DSP/crystal filter setup (pg. 24). 14 Direct Frequency Entry Receiver Control & Misc. (Lower Rows) To jump to any frequency within the tuning range of the K3, tap FR E Q EN T , then enter 1 to 3 MHz digits, a decimal point, and 0 to 3 kHz digits. Follow this with Enter ( . . ) to accept or F R E Q EN T to cancel. The decimal point is optional if no kHz digits are entered, making it very easy to get to the low end of most ham bands. Receiver control functions normally apply to VFO A. If BS E T is in effect, they apply to VFO B and the sub receiver (if turned on). Examples: 1.825 MHz: F R E Q E N T 1 . 8 2 5 . 1.000 MHz: F R E Q E N T 1 50.100 MHz: FR E Q EN T . . PR E Preamp on/off (6 m: see PR6, pg. 44) ATT Attenuator on/off AG C AGC slo w/fast OF F AGC off/on XF I L Select next available crystal filter (see CONFIG:FLx ON) D U AL P B Dual-passband CW or dual-tone RTT Y filtering (pg. 30) NB Noise blanker on/off (pg. 25) L EVEL Noise blanker levels (pg. 25); use VFO A knob to setup DSP blanker, and VFO B to setup I.F. blanker NR Noise reduction on/off (pg. 25) AD J Noise reduction parameter adjust; use VFO B knob (pg. 25) . 5 0 . 1 . If four or more digits are entered without a decimal point, a value in kHz is assumed. Keypad Each keypad switch has tap and hold functions, listed below. These switches are also used for direct frequency entry; to select quick memories 0-9; and for selecting fields in certain menu entries, such as time, date, filter, and transverter setup. N TC H Notch filter auto/manual/off (pg. 25) VFO Controls (Upper row) M AN The upper row of numeric keypad switches is used to set up VFOs A and B. Their functions are: Manual notch frequency (pg. 25); use VFO B knob SPOT Spot tone on/off (manual), or autospot (if CWT is on; pg. 34) P I TC H CW sidetone PI TCH , PSK center pitch, FSK / AFSK MARK tone and shift (pg. 31), or FM tone setup (pg. 29) CWT CW/data tuning aid on/off (pg. 34); turn on to use auto-spot TE X T D E C Te xt de code, CW or DAT A (pg. 33); use VFO B knob to select mode AF X Audio effects on/off (pg. 35); use CONFIG:AFX MD to set mode D AT A MD DAT A mode selection (pg. 31); use VFO B knob A/ B Exchange VFO A and B contents B SET Set up VFO B and sub receiver R EV Exchange VFO A and B temporarily A B SPL I T Copy VFO A to VFO B (also see CONFIG:VFO B->A) Enable SPLIT receive/transmit Holding B S E T allows VFO B (and the sub receiver, if on) to be set up directly (pg. 37). As long as BS E T is displayed, all VFO-related controls and display elements apply to VFO B. An alternative is to set up VFO A, then A B . 15 Memory Controls Digital Voice/Audio Recorder Frequency Memories T wo switches are dedicated to the DVR (KDVR3 option). The K3 has 100 general-purpose memories (00-99), plus up to 80 per-band memories (M1-M4 on each of 11 regular bands and 9 transverter bands). Each memory holds VFO A and B frequencies, modes, filter presets, antenna selection, and other settings. AF R E C Start / stop audio record AF P L AY Start / stop audio playback When record or playback is active, the appears. It flashes during playback. Memories can have a text label of up to 5 characters (A-Z, 0-9, and various symbols). For example, you might want to label memories associated with nets, callsigns of broadcast stations, or your favorite scanning ranges. icon The DVR is also used for message record and play in voice modes (pg. 29). Message Record/Play Controls To store a gene ral-purpose memory ( 0 0 - 99 ): First tap V M (VFO to Memory), then locate the desired memory using the VFO A knob. The VFO A frequencies stored in each memory will be shown as you scroll through them. When you reach the desired memory number, tap V M again to store, or tap M V to cancel the operation. Five switches provide record and playback of outgoing messages: M1 , M2 , M3 , M4 and REC . These switches provide single-tap play, hold-torepeat, and other functions that are convenient for contests and for sending often-repeated text or voice messages during QSOs. To re call a gene ral-purpose memory: T ap M V , then select memory 0 0 - 9 9 using VFO A. T ap M V again to confirm, or V M to cancel. For details on CW message record/play, see pg. 30. The same messages can be used with CW-to-DAT A (pg. 34). For voice message record/play, see Digital Voice Re corde r (pg. 29). Memories 00-09 are quick memories, accessible with just two switch taps. These could be used to get to a starting point in each of 10 ham bands. Memories M1 –M4 are per-band quick memories. For example, you might set up M1 f o re a c hb a nd ’ s CW segment, M2 for the SSB se gment, etc. M1 through M4 can alternatively be used as tap or hold programmable function switches (pg. 21). RIT and XIT Controls To store or recall quick memorie s: T ap V M or M V as before, but instead of rotating VFO A, tap 0 - 9 or M1 - M4 . To e rase one or more me mories: While scrolling through memories to save or recall, tap C LR . Not applicable to per-band quick memories ( M1 - M4 ). Toaddorc ha ng eame mor y’ st e xtl ab e l : First tap M V , then select a memory ( 00 -9 9 ) using VFO A. Next, rotate VFO B to select each label position in turn as indicated by the flashing cursor. Use VFO A to change characters. After editing, tap M V again. (Labels can be edited at any time, including when you initially store a memory using V M .) RIT RIT (receive incrementaltuning) on/off. PF 1 Programmable function switch (pg. 21) XI T XIT (transmit incremental tuning) on/off. PF 2 Programmable function switch (pg. 21) C LR Sets RIT /XIT offset to 0;tap again to restore offset to previous value. Hold for 2 seconds to copy present RIT offset to VFO A before clearing. The RIT /XIT offset control sets the offset for RI T and X I T . Three LEDs above the control show at a glance whether an offset is in effect (pg. 11). An aste risk (*) at the beginning of a label designates a channel-hopping memory (pg. 39). 16 Rear Panel Connector Groups KIO 3 (pg. 18): T he KIO3 is an upgradeable digital and audio I/O module providing computer and auxiliary control signals, single or dual (stereo) speaker outputs, line level in (mono) / out (stereo), and supplemental headphone (stereo) and mic jacks. The appearance of your rear panel may vary depending upon the options installed. Antennas: ANT1 (SO-239) is standard. ANT2 (SO-239) is supplied with the KAT3 automatic antenna tuner option, which includes an antenna switch controlled from the front panel. Both jacks are nominally 50 ohms when the ATU is bypassed or not installed. T he AUX RF connector is for use with the KRX3 option; see pg. 37 and pg. 40. KXV3: T he KXV3 provides a variety of RF I/O signals, including receive antenna in/out (pg. 40), transverter in/out (pg. 38), and a buffered I.F. output (pg. 38). Ke ying: PADDLE ( 1/ 4”p ho nej a c k)i st hek e ye r paddle input (see CONFIG MENU, CW PDL, pg. 52). KEY ( 1/ 4”p ho nej a c k)c a nb eu s e dwi t ha hand key, external keyer, computer, or other keying device. PTT IN (RCA/Phono) is for use with a footswitch or other external transmit control device. KEY O UT (RCA/Phono) is the amplifier T -R relay keying output, capable of keying up to +200VDC @ 5A. DC: 12 VDC IN jack is an Anderson PowerPole connector rated at 30 amps. (See Specifications, pg. 8, for detailed power requirements.) 12 VDC O UT (RCA/Phono) provides up to 0.5 A (switched) for use with accessory devices. Ground Te rminal: A good station ground is important for safety and to minimize local RFI. KPA3: This option panel is blank in the K3/10. In the K3/100, the blank panel is replaced with the fan panel shown, which includes a circuit breake r. REF IN (SMA): Input for external standard frequency reference (KREF3-EXT option). 17 KIO3 Module ACC (Accessory I/O) The KIO3 provides serial I/O, control signals, audio in/out for use with sound cards, speaker outputs, and auxiliary headphone and mic jacks. ACC connector pinouts are listed below. ACC is not a VGA vide o conne ctor. The K3 doe s not provide video output. RS232 The RS232 port can operate at up to 38,400 baud. A straight-through cable is required. Pin # Description I fy ou ’ r eb u i lding your own cable, you can use as few as three wires (RXD, T XD, and ground; see table below). DT R and RT S are optional. 1 FSK IN (see FSK Input) 2 AUXBUS IN/OUT (see KRC2 or XVSeries transverter instruction manual) This table use s EIA standard descriptions, which are from the pe rspe ctive of the PC. These diffe r from K2 documentation, e ven though the connections are functionally identical. 3 BAND1 OUT (see Band Outputs) 4 PTT IN (in parallel with MIC PTT) 5 Ground (RF isolated) 6 DIGOUT 0 (see Transve rte r Control) 7 K3 ON signal (out) or T X INH (in) (see Transve rter Control, TX INH) Not used 8 POWER ON (see pg. 43) 2 RXD IN (data to PC from K3) 9 BAND2 OUT (see Band Outputs) 3 T XD OUT (data to K3 from PC) 10 KEYOUT -LP (10 mA keying output) 4 DT R (see PTT and Ke ying, below) 11 DIGOUT 1 (see DIGOUT1) 5 Ground (RF isolated) 12 Ground (RF isolated) 7 RT S (see PTT and Ke ying, below) 13 BAND0 OUT (see Band Outputs) 14 BAND3 OUT (see Band Outputs) 15 EXT ALC input (see Exte rnal ALC, pg. 27) Pin # 1,6,8,9 Description RS232 Conne ctor (female , on KIO3 panel) Se rial Port Setup: Set CONFIG:RS232 for the desired baud rate. Software should be set up at the same rate; 8 data bits, no parity, 1 stop bit. ACC Conne ctor (female, on KIO3 panel) DTR and RTS: These are not used as serial I/O handshaking lines. Instead, the K3 can use these as PTT IN or KEY IN (see CONFIG:PTT-KEY). The default for both signals is inactive. Refer to application software documentation to determine if it can use RS232 signal lines for PTT or keying. FSK Input (for FSK D Data Mode) This is a TTL input pulled up to 5V, compatible with TTL-level PC outputs. When used with an RS232 output signal from the PC, a level translator is required (refer to your software manual). If a PC or othe r de vice asse rts RTS or DTR while yo u’ r ei nt hePTT-KEY menu entry, the K3 will enter TEST mode as a pre caution. DIGO UT 1 DIGOUT 1 is a per-band/per-antenna open-drain output for controlling antenna switches, preamps, filters, etc. See CONFIG:DIGOUT1. 18 Band Outputs (BAND0-BAND3) BAND0-3 provide band selection signals. Their behavior is determined by the CONFIG:KIO3 menu entry. (See tables below.) With CONFIG:KIO3 set to HF-TRN , the BAND0-3 outputs follow the NO R table when HF6 m bands are selected, and the TRN table when a transverter band is selected. BAND0-3 are open-drain outputs. The attached device must provide pull-up resistors (typ. 2.2K) to its own supply voltage (usual 5 VDC). Transve rte r Control In the tables below, 0 = 0 VDC, and 1 = device supply voltage. Normally, when the K3 is turned on, a 5-VDC logic signal appears on ACC pin 7 (K3 ON). T his could be used with Elecraft XV transverters as an enable signal (pin 8 of J6 on the transverter). With CONFIG:KIO3 se t to NO R , the BAND0-3 outputs are mapped based on the selected HF-6 m band as shown below. This mapping matches that of some third-party band decoders. On Transverter bands, BAND0-3 will all be set to zero. Band 160 m 80 m 60 m 40 m 30 m 20 m 17 m 15 m 12 m 10 m 6m BAND3 BAND2 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0 1 1 0 1 0 1 0 However, pin 7 can alternatively be configured as a transmit inhibit input line for use in multitransmitter stations. (See TX INH, below.) In this case it is not available as a power-on signal for El e c r a f tt r a ns ve r t e r s . I ns t e a d , t heK3’ s12-VDC switched output line could be used. BAND1 BAND0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 1 1 0 0 0 1 1 0 For transverter keying, you can use KEYOUT -LP signal (pin 10 of the ACC connector) or the KEY OUT jack (RCA). With KIO3 set to TRN or HF-TRN , the DIGOUT 0 line (ACC, pin 6) will output 0 V when low power mode is selected for the current transverter band (CONFIG:XVn PWR). At all other times, DIGOUT 0 will be floating (Hi-Z). TheK3 ’ sBAND0-2 outputs emulate the El e c r a f tK60 XV’ sXVTR0-2 signals when CONFIG:KIO3 is set to TRN or HF-TRN . However, BAND0-2 on the K3 are open-drain signals, while XVT R0-2 on the K60XV are TTL. If CONFIG:KIO3 is set to TRN , BAND0-3 reflect the parameters of CONFIG:XVn ADR as shown below. On HF-6mt he y ’ r es e tt o0 . ADR T R N1 T R N2 T R N3 T R N4 T R N5 T R N6 T R N7 T R N8 T R N9 TX INH ( Transmit Inhibit Signal) BAND3 BAND2 0 0 0 0 0 0 0 1 0 1 0 1 0 1 1 0 1 0 BAND1 BAND0 0 1 1 0 1 1 0 0 0 1 1 0 1 1 0 0 0 1 Pin 7 of the ACC connector can be configured as a transmit inhibit input by setting CONFIG:TX INH to LO = I nh (or HI = I nh ). Holding pin 7 low (or high) will then prevent transmit. An external 2.2 to 10 K pull-up resistor (to 5 VDC) is required. If TX INH is set to O FF, pin 7 reverts to its default output function, K3 ON (see above). Ele craft KRC2 Unive rsal Band De code r An Elecraft KRC2 can be used with the K3 to perform station switching functions; it includes sink and source drivers for all bands. The KRC2 uses the AUXBUS rather BAND0-3 (see CONFIG:KRC2 for 6-meter band mapping). Refer to the KRC2 instruction manual for more information. 19 SPKRS LINE IN ST EREO or MONO; 4 to 8 Ω MONO, transformer-isolated; 600 Ω( n omi na l ) Plugging in external speaker(s) cuts off the internal speaker. A stereo plug is recommended; tip is left speaker, ring is right. If you only have a mono plug, set CONFIG:SPKRS to 1 to disable right-channel audio. (Also see important note below.) This i n pu ts hou l db ec on ne c t e dt oyou rc o mpu t e r ’ s soundcard output. The MI C gain control sets the line input level when the MAIN:MIC SEL menu entry is set to LI NE I N . The LIN IN le vel should be se t carefully to avoid transmit signal distortion due to s a t ur at i o no ft heK3’ si nputaud i ot r a ns f o r me r . In addition, sound card gain should be set 6 to 10 dBb e l owt hel ev e latwhi c ht hes o undc ar d’ s output stage starts clipping. PHONES ST EREO or MONO; 16 Ωmi n .r e c omme nd e d The front and rear-panel headphone jacks are both isolated with series resistors. This allows you to use mono phones on one jack and stereo on the other, if r e q u i r e d .You ’ l lne ed stereo phones for AFX (audio effects) and stereo dual receive (with sub receiver). LINE OUT ST EREO, transformer-isolated; 600 Ω( n omi na l ) The s eou t pu t sc a nb ec o nne c t e dt oyou rc ompu t e r ’ s soundcard inputs. Normally, the left channel is main receiver audio, and the right channel is sub receiver audio (if applicable). In this case the outputs are post-AGC but pre-AF-gain.) You can plug in headphones and speaker(s) at the same time, and hear audio in both, if you set CONFIG:SPKR+PH to Y ES . However, if you set CONFIG:SPKRS to 1 , setting SPKR+PH to YE S will force mono headphone as well as speaker output. You can set SPKRS to 2 if you use a stereo plug at the external speaker jack, or if no external speaker is plugged in. Use CONFIG:LIN OUT to set the level, or to switch from a fixed-level setting to = P HO NE S . LIN O UT settings above 10 are usually not ne cessary, and can in some cases cause ov er l oad i ngof e i t hert heK3’ so ut p ut transformers or the PC soundcard inputs (typically on noise peaks). Eithe r could degrade the pe rformance of digital demodulation software . MIC MONO; hi- or low-Z This jack accommodates an electret or dynamic mic. Use MAIN:MIC SEL to select the rear panel mic ( RP). T ap 1 to turn on Low or High mic gain range. T ap 2 to turn bias on/off (see pg. 28 for recommendations based on mic type). Themi c ’ s PTT signal, if used, must be routed to either the PTT IN jack or the PTT line on the ACC connector (pg. 18). Some laptop compute rs have only ve ry highgain, high-impe dance mic inputs, not line-le vel inputs. This can make it difficult to adjust the K3’ sLI NEOUTle vel, and can also worsen noise pickup. If your laptop has only a mic input, you may want to add a resistive attenuator between the K3 and the laptop to keep the signal-to-noise level high. 20 Basic Operation MAIN Me nu T ap ME NU to access the main menu. (Tapping ME N U again exits the menu.) This section covers the fundamentals of K3 receive a ndt r a ns mi to pe r a t i on. I t ’ l la l s og e ty ous t a r t e d using each of the major operating modes. Use VFO B to scroll through the menu entries, referring to the list on pg. 51 for details. Onc eyou ’ r ef a mi l i a rwi t ht heK3, pl e a s eg oo nt o Advance d Ope rating Fe atures (pg. 33). Change the value (or parameter) of any menu entry using VFO A. Using Tap/Hold Switches CONFIG Menu Most K3 switches have two functions. Tapping (pressing for less than 1/2 second) activates the function labeled on the switch. Holding (pressing for more than 1/2 sec.) activates the function labeled beneath the switch. Hold C O NF I G (hold function of the ME NU switch) to access the CONFIG menu. Use VFO B to scroll through the CONFIG menu entries, referring to the list on pg. 52. Initial Power-Up Menu Help Connect a power supply (pg. 8); antenna or dummy load; key, if used (pg. 16); mic, if used, and station ground (pg. 16). T ap D I S P to show help information about the present menu entry. For most entries, the default parameter value is shown in parentheses at the start of the help text. T ap P OW E R to turn the K3 on. The LCD should illuminate and show VFO A/B frequencies. (Tapping P O W ER again turns power off.) Programmable Functions Menu entri e st ha tyou ’ dl i keq u i c ka c c e s st oc a nb e assigned to any of the 10 programmable function switches, P F 1 , P F 2 , and M1 –M4 (tap or hold). Func ti on menu entries can only be used via such a switch assignment. (Examples, from the CONFIG menu: VFO B->A and TTY LTR.) The VFO B display can show a variety of useful parameters in addition to the normal frequency display. To see these, tap DI S P (left of the display), then rotate the VFO B knob. The VFO B display will cycle through time , date , RIT/XIT offset, supply voltage , current drain, etc. (pg. 36). You can use these displays to make sure the supply voltage is in range (1115 V), and that current drain is about 1 amp (higher with sub receiver installed and turned on). T ap DI S P to return to the normal VFO B frequency display. To set up a programmable function switch, first use ME N U or C ON FI G to locate the target menu entry. Next, hold P F 1 or P F 2 ; or, tap or hold M1 –M4 . For example, if you tap M2 , you ’ l ls e eM2 T S E T (T for tap), while holding M2 would show M 2 H S E T (H for hold). The assigned switch can then be used as a shortcut to access that entry. M1 –M4 can each be assigned a tap and/or hold programmable function. Using the Menus Any M1 –M4 switch that is used as a programmable function switch will not be available for message play. To cancel a programmable switch assignment and restore a previously-saved message, tap R EC , t he nt a pt heb u f f e ry ou ’ dl i ket or e s t o r e ( M1 –M4 ), then tap R E C again. There are two menus: M AI N and CO NFI G . Most entries in the CONFIG menu are used for test, configuration, and alignment, and are used infrequently. Nearly all menu entries appear in alphanumeric order. In the few exceptions to this, adjacent entries are still closely related. 21 Band and Mode Selection Using the VFOs T ap either end of the B AN D switch to select the desired ham band (160 through 6 meters). You can also go directly to any desired frequency using direct frequency entry (pg. 15), or recall a frequency memory (pg. 16). VFO A is both the main receive and transmit frequency, except during SPLIT, in which case VFO B controls the transmit frequency (pg.36). VFO B also controls the sub receiver (pg. 37). T ap R ATE to select 10 / 50 Hz per step. The faster rate can be changed using CONFIG:VFO FST. The number of counts (or steps) per VFO knob turn can be changed using CONFIG:VFO CTS. T apping R ATE briefly flashes either the 10-Hz or 100-Hz digit to indicate slow or fast tuning. T ap either end of MO D E to select the operating mode. Hold AL T to select an alternate mode, if required. This include CW RE V (pg. 30), DATA RE V (pg. 31), AM -S (synchronous detection, pg. 29), and FM + / - (FM repeater split, pg. 29). For 1-Hz steps, tap FI N E ; for wider steps, use C O AR S E (see CONFIG:VFO CRS). When F IN E is in effect, a 1-Hz digit will appear in the VFO A display. When C O AR S E is in effect, the 10-Hz digit is not shown. Antenna Selection and Matching ATU (KAT3) If you have the KAT3 antenna tuner installed, you can select ANT1 or ANT2 by tapping AN T . T ap A B on c et oc o pyVFOA’ sf r e q u e nc yt o VFO B. T apping A B a second time within 2 s e c ond sa l s oc opi e sVFOA’ smod e ,f i l t e r ,a ndo t he r settings to VFO B. Hold ATU to select AUTO (autotune enabled) or BY P AS S . If the ATU icon is on, the antenna can be matched for best SWR by tapping ATU TU N E . AT U settings are saved per-band and per-antenna. exchanges VFO A and B and their settings. (Also see CONFIG:VFO B->A.) Pressing R E V A/ B T apping ATU TU N E a second time within 5 seconds of a match attempt will retry using a more extensive search. This may improve the match when using high-SWR or narrow-band loads. exchanges the VFOs for as long as you hold it in. VFO B and the sub receiver can be set up directly by holding B S E T . While BS E T is in effect, all icons and VFO-related controls apply to VFO B (and to the sub receiver, if turned on; see pg. 37). Holding AN T allows names to be assigned to a n t e n na s( e . g . , ‘ Y AGI ’ ). T hese will be flashed each time you switch antennas. When editing names, VFO B selects the character position to change; VFO A cycles through available characters. Setting t h ef i r s tc ha r a c t e rt o“—”d i s a b l e sna med i s pl a y. Holding S U B links/unlinks the VFOs, whether or not a sub receiver is installed or turned on (pg. 37). RIT and XIT The RIT /XIT offset control, at the far right, sets the offset for RI T and X I T . The offset is shown on the VFO B display as you adjust the control. T hree LEDs show whether the offset is 0, (-) or (+). RX Antenna (KXV3) With the KXV3 installed, you can tap R X to select a receive-only antenna (RX ANT IN). The K3 also has an RX ANT OUT jack for use with in-line filters, the PR6 6-m preamp, etc.; see pg. 38. T ap C LR to zero the RIT /XIT offset. Tapping it a second time restores the offset. To copy the present RIT offset to VFO A, hold C LR for 2 seconds. VFO A will be moved to the new frequency before the offset is zeroed. Sub Re ceive r Antenna (KRX3) If the sub receiver is turned on (by tapping S UB ), its antenna selection can be changed using B S E T . While in BSET , tap AN T to switch between M AI N (sharing the main antenna) and AUX (using the s u b ’ sAUXRFinput). For further details on sub receiver antennas, see pg. 37 and pg. 40. If RIT and XIT are both turned off, the RIT offset can coarse-tune VFO A (CONFIG: VFO CRS). For example, you can select 5, 9, or 10 kHz steps in AM mode. 22 Receiver Setup Filter Passband Controls This section explains howto use basic receiver controls. Setup for specific operating modes is described in later sections; see Voice Modes (pg. 28), CW Mode (pg. 30), and Data Modes (pg. 31). As you rotate the filter controls (shift, width, hicut, locut), the associated parameter value is shown on VFO B. T he filter graphic shows the width and location of the passband, as illustrated below. In these specific examples, segments that turned off as a result of control movement are shown in gray. Also see Text Decode and Display (pg. 30) and Audio Effects (pg. 35). Receiver Gain Controls High Cut Use AF — S UB (pg. 11) to set the desired main and sub receiver volume level. T here are two overall audio volume ranges, LO and HI, which can be selected using CONFIG:AF GAIN. Low Cut Usually, both RF — S UB controls will be set fully clockwise (main and sub receiver RF gain). You may wish to reduce RF gain to optimize receiver response to high signal levels or noise. Width If the sub RF gain knob has been reconfigured as squelch for both receivers, then the main RF gain knob will control RF gain for both receivers. (See CONFIG:SQ MAIN.) Shift To improve weak-signal reception, turn on the preamp using P R E . In the presence of extremely strong signals, you may wish to use the attenuator ( ATT ), or reduce the RF GAIN setting. Each passband control has an integral switch. These are used as follows: Tapping the control alternates between the two primary functions for that control, for example HICUT and WIDT H. This is indicated by the two LEDs above each control. Crystal Filter Selection You can install as many as five crystal roofing f i l t e r si nt heK3’ sma i nr e c e i ve r ,a nda n o t he rf i vei n the sub receiver (KRX3, pg. 37). Holding a control activates its secondary function, labeled below the control. Bandwidths as narrow as 200 Hz and variablebandwidth f i l t e r sa r ea va i l a b l e , t h a n kst ot heK3’ s low first I.F. (intermediate frequency) of 8.215 MHz. See Appendix A for recommended crystal filter bandwidths for each mode. T apping or rotating a control shows the present setting. To see the settings of both knob functions without changing them, just tap the control twice. To select a crystal filter manually, tap X FI L . The FL1 - FL5 icons show the current selection. This sets the DSP passband to match the crystal filter, and removes any passband shift or lowcut/hicut. The secondary functions of the controls are N OR M and I /I I , described in the following sections. The K3 will also select the most appropriate crystal filters automatically as you adjust the S H I F T , W ID TH , L O CU T , and H I CU T controls. 23 Filter Presets (I/II) Custom Settings (NO RM1 and NORM2) Ea c hope r a t i ngmod ep r o vi d e st wo‘ f l oa t i ng ’f i l t e r prese ts, I and II, which store filter settings on a per-VFO, per-mode basis (excluding FM). They are updated continuously as you change filter settings. (Fixed, per-mod e‘ no r ma l ’s e t t i ng sa r ea l s o available; see below.) In addition to the K3's standard "NORM" values, you can save two of your own setups in each mode, then recall them using the N OR M function. These setups are referred to as NORM1 and NORM2. To save a custom normalization setting: set up the filter passband as desired for the current mode You can alternate between the I and II settings by holding I/ II . Thi si se s pe c i a l l yu s e f u lwhe nyou ’ r e using wide and narrow settings during contest or DX operation. hold N OR M until you see < - S AV - > (3 seconds) rotate the knob slightly left or right to save it as NO RM 1 or NO RM 2 . The I and II settings for VFOs A and B are independent. The arrows to the left and right of S AV are a reminder that you can rotate the knob to get to the two user-defined normalization settings.) Filter Normalization (NORM) Standard Settings To recall, hold N OR M until you see < - NO R - > (about 1/2 second), then rotate the knob left or right to recall NO RM 1 or NO RM2 . To get quickly to a standard per-mode bandwidth and reset any passband shift or cut, hold N OR M (normalize). The normalized bandwidth is 400 Hz in CW and DAT A modes, 2.7 or 2.8 kHz in SSB modes, and 6 kHz for AM. Narrow DSP Filter Types Forb a nd wi d t hs e t t i ng sof1 00Hzo rl owe r , t heK3’ s DSP normally uses a type of filter that minimizes ringing: the Finite Impulse Response or FIR filter. Whenever you normalize the filter passband, two small "wings" appear at the left and right ends of the DSP filter passband graphic as shown below. I fy ou ’ dl i kes t e e pe rf i l t e rs ki r t s ,a ndd on’ tmi nda small amount of ringing, you can select Infinite Impulse Re s po ns e ”o rI I Rf i l t e r sf ort he s e bandwidths. Locate CONFIG:FLx BW menu entry, then tap 7 until you see I I R O N . Both main and sub receivers will use the same setting. Moving any DSP control makes the "wings" disappear, as a reminder that the passband is no longer normalized. 24 T he DSP noise blanke r is in the 2nd I.F., where i tc a n’ tb ea c t i va t e db ys i g na l sou t s i d et hec r ys t a l filter passband. It can be used with high-duty-cycle and complex-waveform noise generated by computers, switching power supplies, light dimmers, etc. T he I.F. noise blanke r is in the 1st I.F., where it can use very narrow blanking widths. It is most effective at blanking AC line noise, lightning, and other very broadband noise. Often, a combination of the two is the most effective. Reducing Interference and Noise The K3 provides several ways to cut interference, including DSP noise reduction, manual and auto notch, and noise blanking. Also see Audio Effe cts ( AF X , pg. 35). There are actually two noise blankers: one at the first I.F. (KNB3 module), and the other at the 2nd I.F. (DSP). Noise reduction, noise blanking, and notch filtering should only be used when necessary. T hese signal processing techniques are extremely effective, but can introduce side effects. Sometimes, reducing the filter bandwidth is the most effective interference-reduction strategy. Noise Reduction Noise reduction reduces random background noise while preserving meaningful signals. It adds a c ha r a c t e r i s t i c“h ol l ow”s ou nd to all signals. T ap N R to turn on noise reduction. NR is not applicable in DAT A and FM modes, or with AGC turned off. Noise Blanking First, tap N B to enable I.F. and/or DSP noise blanking. Hold AD J to display the NR setting; use the VFO B knob to tailor NR for the present band conditions. The settings are F1 -1 through F4 -4 . Higher settings can degrade weak signals. T he first part of the number ( Fx ) determines which NR algorithm is used; F1 is the least-aggressive setting. The second part ( -y ) controls how much of the signal is routed through noise reduction, from 1 (50%) to 4 (100%). Next, hold L E V E L to set the DSP level (VFO A) a ndI . F.l e ve l ( VFOB) .You ’ l li ni t i a l l ys e eDS P O FF and I F O FF on the VFO A and B displays. Rotating VFO A clockwise will turn on the DSP NB, showing DS P t1 -1 through DS P t3 -7 . The first number shows the relative pulse integration time, and the second shows the blanking level. The higher the numbers, the more aggressive the DSP blanking action. Notch Filtering Notch filtering removes interfering carriers while leaving the desired signal relatively unaffected. The K3 provides automatic and manual notch tuning. Rotating VFO B clockwise will turn on the IF NB, showing I F NARn , I F M E Dn , or I F WI Dn , where n is 1 -7 . NAR /M E D /WI D refers to narrow/ medium/wide blanking pulse widths, and n is the blanking level. Higher n means more aggressive blanking action. Use NAR width when possible to minimize strong-signal interaction effects. Auto notch will find and remove one carrier, and in some cases more than one. It is only available in SSB modes, and AGC must be turned on. T he NB icon will flash slowly if the I.F. blanker setting is too high for the present signal conditions. If this happens, use a lower setting. Manual notch removes one carrier at a specified pitch, and can be used in CW and DAT A modes as well as voice. Since manual notching sets up a fixed (rather than adaptive) notch, it can even suppress a keyed carrier, i.e. a CW signal. Both the DSP and IF blanking settings are saved on a per-band basis. If CONFIG:NB SAVE is set to Y E S , the on/off status of NB will be also be saved for each band. T ap N TC H to turn on notch filtering ( NTCH icon). This turns on Auto notch if applicable. T ap a second time if necessary to select manual notch (adds icon). T ap again to turn notch off. Hold MAN to adjust the manual notch frequency using VFO B. This also selects manual notch. 25 Transmitter Setup VOX, PTT, and QSK In voice and data modes, use V O X to select VOX (pg. 13) or PTT (push-to-talk). PTT can still be used even with VOX selected. Set VOX gain and anti-vox level using MAIN:VOX GN and ANTIVOX. Transmit Crystal Filter Considerations For each operating mode, you must specify which I.F. crystal filter to use for transmit using the CONFIG:FLTX menu entry. See pg. 46 for recommended per-mode transmit filter bandwidths. In CW mode, use V O X to select either VOX or PTT t r a ns mi t .VOXe n a b l e s“u s e r -a c t i va t e d ”( hi t -thekey) transmit, while PTT requires the use of PTT IN (pg. 17) or X MI T before CW can be sent. T ransmit signals are generated on the RF board, so the set of filters installed on the RF board must meet the transmit bandwidth requirements of all modes you plan to use. (Filters installed on the sub receiver board are used only in receive mode.) When the V OX icon is on in CW mode, you can use Q S K to select full ( Q S K icon on) or semi break-in. For more on break-in keying, see pg. 30. Transmit Status LED s and Icons Transmit Metering Before putting the K3 on the air, you should be familiar with the LEDs and LCD icons that pertain to transmit operation (identified on pgs. 11 and 12). The most important of these are reviewed here. Normally, the transmit bar graph shows S WR and RF (power output). The displayed S WR range is 1:1 to 3:1. The RF control range is 0 to 12 W in 1-W units, or 0 to 120 W in 10-W units. The power scale changes from watts x1 to watts x10 at 13 watts. The TX LED turns on during transmit. The ∆F (Delta-F) LED turns on if the transmit and receive frequencies differ (SPLIT / RIT / XIT). In voice modes, you can use ME TE R to switch to compression ( CMP ) and automatic level control ( ALC ) metering. See pg. 28 for information on adjusting the MI C and C MP controls. The TX LCD icon and associated arrows show which VFO is being use d for transmit. If you plan to use S P L I T mode, See pg. 36. If you have a KXV3 installed, you can use milliwatt-le vel power output. This is intended for use with transverters, but it can also allow the K3 to act as a very stable, very low-noise signal generator. To route RX and T Xthrough the XVT R jacks on all bands, set CONFIG:KXV3 to TE S T. Multifunction Transmit Controls There are two multifunction transmit controls. Their primary functions (mode-dependent) are: SPEED MI C C MP PW R CW keyer speed in WPM Mic gain When milliwatt-level output is in effect, rotating P W R will show milliwatts on VFO A, and dBm (dB relative to 1 milliwatt) on VFO B. Speech compression level in dB RF output power in watts (also see Per-Band Powe r Control, pg. 27) Off-Air Transmit Testing The secondary (hold) functions of these controls are: D E L AY VOX or CW semi-break-in delay MO N Voice/Data monitor or CW sidetone level. The K3 allows you to listen to your CW keying, test your mic and compression settings, or monitor DAT Atones, without transmitting an on-air signal. To do this, hold TE S T (right end of the MO D E control). While you're in TEST mode, the TX icon will flash slowly as a reminder that you're off air. Hold TE S T again to return to normal operation. 26 External ALC Per-Band Power Control Exte rnal ALC should only be use d to prote ct your amplifie r during ope ration into a faile d load, or during a prolonge d ove rdrive condition. ALC should not be use d as a way to clip or compress fast voice pe aks, or as a primary me ans of amplifie r or K3 powe r output control. If the CONFIG:PWR SET menu parameter is set to NO R , power output on all bands follows the present setting of P W R . If you change PWR SET to P E R-BAND , the power level will be saved independently on each band. This is especially useful with transverters and external amplifiers, or for those who use QRP levels on one band and QRO on another. DO NOTs e tt heK3’ spowe rl ev elt o maximum and adjust amp output using the amp ’ sALCc o nt r o l .Thi swi l lr e s ul ti ns p l at t er and key clicks. Instead, adjust the drive on each bands oi t ’ sj us tb e l owALC activation le vel. When per-band power control is used with an external amplifier, you can adjust the drive ideally on each band to prevent external ALC activation during normal operation. Pre paring the K3 for use with Exte rnal ALC TUNE Power Level You may nee d to modify the K3 to use exte rnal ALC, de pending on its date of manufacture. Please refer to our K3 Modifications web page. If you turn on external ALC without making necessary modifications, power will be reduced to a very low level during transmit. If CONFIG:TUN PWR is set to NO R , power output during TU N E will follow the present setting of P W R . If you change the TUN PWR parameter to a fixed power level, that level will be used during TU N E ,whe t he rorn oty ou ’ ves e l e c t e d per-band power control (see above). Exte rnal ALC Setup Transmitter RF Delay External ALC is set up using the CONFIG:EXT ALC menu entry. EXT ALC defaults to O FF . To turn it O N , tap 1 . 6 meter external ALC can be turned on/off separately from HF. Some amplifiers have slow relays whose switching time must be accommodated to prevent key clicks during CW operation. If your amplifier requires more than 8 ms of relay switching time, you can increase the delay from key-down to RF output at the K3 using CONFIG:T X DLY. The EXT ALC menu entry provides a default ALC threshold of -4 . 0 V, used by many amplifiers. If you select CMP /ALC metering at the K3, external ALC activity is indicated by 8 or more bars. If you select S WR /RF metering, the CM P /ALC meter icons will flash during external ALC activity to make you aware of the condition. Use the smallest value of T X DLY that works with your amplifier. Larger values will affect QSK and keying timing at high code speeds. Transmitter Inhibit Some experimentation may be required to d e t e r mi net hepr o pe rs e t t i ngof t hea mpl i f i e r ’ sALC output control, if one is provided. Start with the control set for minimum ALC output. T hen adjust t h eK3 ’ sp owe rou t pu ts u c ht ha tt hea mpl i f i e ri sj u s t reaching its maximum level on voice peaks (in SSB mode) or peak CW power in CW mode. Some multi-transmitter stations require that transmitters be able to mutually inhibit each other in order to prevent simultaneous use of resources. The K3 provides a transmit inhibit input (TX INH) that can be programmed for low- or high-active control. See pg. 19. Ne x t ,a d j u s tt hea mpl i f i e r ’ sALCou t pu tc on t r ol upward until ALC action just begins (or adjust the K3’ sALCt hr e s h ol dwi t ht heEXT ALC menu e n t r y ) .Fi na l l y, r e d u c et heK3’ sd r i v ep owe rj u s t slightly to provide some safety margin. The goal is to have no amplifier ALC action during normal operation. If you see an ALC indication at the K3 or t h ea mpl i f i e r ,r e d u c et heK3 ’ sp owe rou t pu t . 27 Voice Modes (SSB, AM, FM) Mic Gain and Compression Settings Mode Selection To set up MIC gain and compression level: T ap either end of MO D E to select LS B /US B , AM , or FM mode. Holding the left end of this control, AL T , selects an alternate mode. LS B and US B are alternates of each other. The alternate for AM is AM -S (synchronous AM, pg. 29). In FM mode, AL T enables a repeater offset (pg. 29). Set the monitor level as described earlier. Optionally select TX TEST mode (pg. 13) or set power to zero. This will not affect your CM P /ALC bar graph readings. Set C MP to 0 . Hold ME TE R to select CMP /ALC metering. Microphone Selection While speaking into the microphone in a normal voice, adjust MI C for a peak ALC meter indication of about 5-7 bars (see below). The K3 provides both front- and rear-panel mic jacks. Some operators prefer to use the rear-panel jack to minimize cable clutter around the front panel. Use MAIN:MIC SEL to select the front panel ( FP ) or rear-panel ( RP ) jack. This menu entry can also be used select a mic gain range, as well to apply a bias voltage for electret microphones. Adjust C MP for the desired speech compression level while speaking. T he CMP scale shows approximate compression level. The front-panel mic jack is compatible with the Elecraft MH2, MD2, Proset-K2, and some other 8pin mics (see pg. 13 for pinout and bias settings). Hold ME TE R to select S WR /P WR metering. The rear-panel mic jack accommodates a 3.5-mm (1/8") phone plug, and can be used in conjunction with the rear-panel PTT IN jack. If you were in TX T EST mode, return to normal operation by holding TE S T . Voice Monitoring If you had P W R set to 0, set it for the desired level in watts. Key the rig again and verify that you have about the right power output level. TheK3’ svoi c emo ni t o ra l l owsyout ohe a rt hewa y your voice will sound at your selected mic gain, compression, and T X EQ settings ( TX E Q , pg. 35). Onc eyou ’ veset up the MIC and CMP levels as described, you should only need to adjust them if you switch mics or if band conditions change. To set up voice monitoring: Power Level for Voice Modes Hold TE S T to put the K3 in T X TEST mode, s oyouwon ’ tb et r a ns mi t t i ng( pg .13). Voice-mode power output may be slightly lower than the CW power output you see in when you use TU N E . Adjusting mic gain higher cannot correct for this. Instead, use the CONFIG:TXG VCE menu entry (voice transmit gain balance). Typically a value of 0.0 to 1.5 dB will equalize voice modes with TU N E . Set MI C to 15-3 0t oe ns u r et ha tyou ’ l l he a r your voice. You can fine-tune this level later. Pr e s syou rmi c ’ sPTTs wi t c hort a pX MI T . While speaking into the mic, adjust a comfortable listening level. MO N for Exit transmit (release PTT, or tap X MI T again). I fy o u’ r eus i ngan external peak-reading wattmete r, adjust powe r such that spee ch peaks remain at or below your de sire d powe r le vel. T he K3’ sRFb a rg r a p hma yno tc a p t u r ea l ls pe e c h peaks, but your actual output will be close to that set with P W R . You can either leave the K3 in T X TEST mode or go back to normaltransmit (hold TE S T ) as you follow the instructions in the next section. 28 Voice Mode VOX Setup AM Operation selects PTT (push-to-talk) or voice-operated (VOX) transmit ( V OX icon on). A 6 kHz (AM) or 13 kHz (FM) crystal filter is required on the RF board for AM transmit and receive (pg. 46). VOX The VOX delay can be set from 0.05 seconds (50 ms) to 2.00 seconds. The lower the setting, the faster the K3 will return to receive mode after a pause in speech. See D E L AY (pg 14). When listening to AM signals, you can hold AL T to select envelope detection ( AM ) or synchronous detection ( AM -S ). With envelope detection, selective fading can notch out the AM carrier, resulting in severe distortion. The synchronous detector solves this problem by phase-locking an oscillator to the AM carrier. Careful tuning of the signal is required. The MAIN:VOX GN menu entry (VOX gain) should be set so that the K3 enters transmit mode when you speak at a normal level. Setting VOX GN too high will result in the K3 switching into transmit mode in response to incidental noise. You can also listen to AM using LSB or USB modes. If you have a 6-kHz filter installed, voice as well as music will have excellent fidelity. MAIN:ANTIVOX adjusts VOX immunity to signals received through the K3's speaker or headphones. While listening to a loud received signal, and with the mic closer to the speaker than it would be in normal operation, adjust ANTIVOX upward until t h eK3d oe s n’ ts wi t c ht ot r a ns mi tmod e . FM Operation FM mode can be disabled by setting CONFIG:FM MODE to O FF . Digital Voice Recorder (D VR) Me ssage Re cord and Playback An FM-bandwidth crystal filter (at FL1) is required on the RF board for FM transmit and receive (pg. 45). The sub receiver also requires an FM filter at FL1 for FM receive. DSP filter bandwidth is fixed in this mode; presets and filter controls are not used. To start recording, tap R E C , then tap any of M1 M4 . T he remaining buffer time in seconds will be displayed as you speak. Tap R EC again to stop. FM defaults to simplex (transmit and receive on the same frequency). T he following controls and menu entries are used for repeater setup: T ap M1 – M4 to play. To cancel, tap R EC . You can also hit the keyer paddle, key, or any switch besides M1 –M4 to cancel play. Holding AL T switches between simplex, T X (+), and T X (-). This is indicated by the + and icons (near the FM icon). To auto-repeat a message, Hold (rather than tap) M1 –M4 . MAIN:MSG RPT sets the message repeat interval (1 to 255 seconds). Repeater offsets can be programmed on a perband/per-memory basis. See MAIN:RPT OFS. If you have the KDVR3 option installed, you can record and play voice messages as well as capture received audio. Tone encode is set up by holding P I TCH . Rotate VFO A to select a CT CSS tone frequency (or the European repeater access tone, 1750 Hz). Use VFO B to turn tone encode on ( P L TO NE ) or off ( P L O FF ). Re ceive Audio Re cording Hold AF R E C to start / stop audio record. The icon will appear. Recording starts at the beginning of available space each time it is started, and will stop at the end if not terminated sooner. All of the settings described above are saved in frequency memories. Hold AF P L AY to start / stop audio playback. During playback, the icon flashes. T his s e r ve sa sar e mi nd e rt ha ty ou ’ r ehe a r i ngr e c o r d e d rather than live audio. 29 CW Mode SPOT and Auto-Spot When calling another station, you should try to match your frequency to theirs. To facilitate this, the K3 provides both manual and automatic spotting for use with CW and DAT A signals. See Tuning Aids: CWT and SPOT (pg. 34). CW Normal and Reverse Select CW mode by tapping either end of MO D E . Hold AL T to alternate between CW normal and CW re ve rse ( RE V icon). CW reverse differs from CW normal only in receive mode, using the upper rather than lower sideband. CW Text Decode/Display The K3 can decode transmitted and received CW signals, displaying the text on VFO B (pg. 33). This f e a t u r ei se s pe c i a l l yu s e f u lwhe ny ou ’ r el e a r ni ng CW,o ri fs ome onewhod oe s n ’ tk nowCW i s looking over your shoulder while you make CW QS Os .I t ’ sa l s oi nd i s pe ns a b l ef orCW-to-DAT A operation (pg. 34). If you S P O T (or auto-spot) a CW signal (pg. 34), then switch between CW normal and reverse, the pitch of the received signal should stay the same. Basic CW-Mode Controls In CW mode, MO N sets the sidetone volume. Hold P I TC H to adjust the sidetone pitch. The peak in response of all crystal filters will track the sidetone pitch; no filter adjustments are needed. Dual Passband CW Filtering (DUAL PB) T urning on D U AL P B in CW mode allows you to l i s t e nt oana r r owf i l t e rb a nd wi d t h( t he“ f oc u s ” ) , s e t within a wider, attenuated filter bandwidth (the “c on t e x t ” ) . S e epg .35. Hold Q S K to select full break-in ( Q S K icon on) or semi bre ak-in operation. V O X must be turned on in CW mode to enable both full and semi break-in operation. If PTT is selected ( VO X icon off), transmit must be activated using PTT or by tapping X MI T . CW Message Record/Play Messages can only be recorded using the internal keyer, not a hand key or external keyer. Q S K , or full break-in, allows you to better keep track of on-f r e q u e nc ya c t i vi t ye ve nwhi l ey ou ’ r e s e nd i ng .I ta l l owso t he r st o“b r e a k”you rCW transmission by sending one or two characters. If te xt de code is on (pg. 33), CW text sent using the internal keyer is shown on VFO B (pg. 33). Use TE S T to check messages off-air (pg. 13). With semi break-in selected ( QS K icon off), the K3 returns to receive mode after a time delay you set using D EL AY. T his is a compromise between full break-in and fully manual operation using PTT or X MI T . There are 8 message buffers, arranged in two banks of 4. Buffers hold 250 characters each. To switch banks, hold R EC . Me ssage Re cord: To start recording, tap R E C , then M1 - M4 . The remaining buffer space will be displayed as you send. Tap R E C again to stop. Hold TE S T to place the K3 into TEST mode. This allows you to send CW without transmitting a signal on the air. This is helpful for practicing your sending or for off-the-air checking of pre-recorded CW messages. Me ssage Play: Tap M1 – M4 to play. To cancel, tap R EC . You can also hit the keyer paddle, key, or any switch besides M1 –M4 to cancel play. CW-Mode Menu Entries Auto-Re peat: T o auto-repeat a message, Hold (rather than tap) M1 –M4 . MAIN:MSG RPT sets the message repeat interval (1 to 255 seconds). Configuration menu entries are provided to set up CW iambic keying mode (CW IAMB), paddle normal/reverse selection (CW PADL), and keying weight (CW WGHT). Chaining: Tapping M1 –M4 during playback chains another message onto the message being played. Holding a message switch during playback chains a repeating message. 30 Data Modes The following data modes are available: Youd on’ tne e dac ompu ter to get started with data modes on the K3: it can decode and display RTTY and PSK31 on its LCD (pg. 33). You can transmit in data modes using your keyer paddle (see CW-toDATA, pg. 34). DA TA A can be used for all A udio-shift transmit modes, including PSK31, MFSK, AFSK, etc. The VFO displays the suppressedcarrier frequency, just as when SSB modes are u s e df ord a t a . US Bi s“n or ma l ”f orDATAA. Compression is automatically set to 0. Using a computer for data modes is also very c on ve ni e n to nt heK3,a sd e s c r i b e db e l ow.I fy ou ’ r e using AMT OR or PacT OR, also see pg. 32. AFS K A also uses A udio-shift transmit, but is optimized for RTT Y. The VFO displays the RTTYma r kf r e q u e nc y, a ndLS Bi s“ no r ma l ” . The built-in text decoder can be used in this mode (pg. 33), as well as the dual-tone RTTY filter (DT F, pg. 32). Data Mode Connections You can transmit and receive data with a computer in three ways: FS K D is identical to AFSK A, except that D irect modulation is used, via FSK IN, ASCII, or the keyer paddle (pg. 34). The text decoder can be used in this mode (pg. 33), as well as the dual-tone RTT Y filter (DTF, pg. 32). Connect your soundcard I/O to the K3. Use MAIN:MIC SEL to use LINE IN/OUT , frontpanel mic jack, or rear-panel mic jack. You can use VOX or PTT to controltransmit. P S K D is a D irect-transmit mode for PSK31. I t ’ st heonl ymod et ha td e c od e sa ndd i s pl a ys PSK31 signals with the text decoder (pg. 33). Like FSK D, PSK D lets you transmit via FSK IN, ASCII, or the keyer paddle (pg. 34). You can also use auto-spot with PSK D if the tuning aid is displayed ( C W T , pg. 34). Use the soundcard in receive mode, but use a PC I/O line to do direct FSK (or PSK) mod u l a t i o n.Con ne c tt h ePC’ sI / Ol i net ot he “FS KI N”l i neo nt h eK3 ’ sACCc on ne c t or . ( I f this signal originates from an RS232 port, it will require RS232-to-TTL level conversion.) Send and receive ASCII text via the RS232 i nt e r f a c e . Tos e nd ,i ns e r tt e x ti n t oa“ KY” c omma nd( e . g . , “KYCQDEN6KR; ” ) . To r e c e i ve , s e nd“ TT1; ”( t e x t -to-t e r mi na l ) . “ TT0 ; ” t u r nsi tof f .S e et heK3Pr og r a mme r ’ s Reference, available at. The D AT A MD display also shows the data speed in bps on VFO A. This is relevant only if the text decoder is on. Depending on the mode, other data speeds may be available; select them by rotating VFO A. Also shown is the current sideband ( LS B or US B ). I ft hi ss i d e b a ndi sc ons i d e r e d“d a t ar e ve r s e ”f o rt h e present mode, then RE V also appears. You can use AL T to switch to the other sideband if required. Data Mode Selection Soundcard-base d data communications can be done using LSB or USB mode. However, DAT A modes offer several benefits not available in SSB modes. Mark/Shift and Pitch Selection (PITCH) I fy oup r e f e r t ou s eLS BorUS B,you ’ l l ne e dt o manually set C MP to 0 to prevent data signal distortion. Refer to your data communications software manual to determine how to set up the VFO and computer for accurate frequency display. Hold P I TC H to view and change the received mark tone and shift (AFSK/FSK) or center pitch (PSK). In AFSK/FSK modes, you have a choice of mark tone/shift combinations. Use VFO A to select a t o ne / s hi f tc o mb i na t i o nt h a t ’ sc o mpa t i b l ewi t hy ou r software. A lower mark pitch makes signal tuning easier when using the K3’ st e x td e c od e r . To use DAT A modes, tap MO D E until the DATA icon appears. Next, hold D AT A MD . The present data mode is shown on VFO B, and can be changed by rotating the VFO B knob. 31 RTTY Dual-Tone Filter (DTF) AMTOR / PacTOR Hold D U AL P B to turn on the RTT Y dual-tone filter (DT F). This creates two filters, one centered on the mark tone, the other on space, which can often improve RTTY copy. The filter graphic changes to reflect this (see below). AMT OR, PacT OR and similar modes can reliably transfer data –including e-mail –via HF radio networks. New modes are under development that may provide even greater reliability. Applications include maritime mobile and emergency commu ni c a t i o nswhe r et heK3 ’ sl i g htwe i g hta nd excellent receive performance are advantageous. General information regarding K3 set up for these modes appears below. When DT F is on, the range of the WI D TH control is adjusted to better match the characteristics of the filter. SHIFT , LOCUT and HICUT are disabled. Frequency stability is important in these modes. A 1-PPM T CXO is available (KT CXO3-1). Conne c tmod e ma u d i oI / Ot ot heK3 ’ sLI NE OUT and LINE IN jacks (for LINE OUT , use the T IP contact of a stereo plug). A PTT connection is also usually required. If the modem operates from 12 V (0.5 A or less), it c a nb ep owe r e df r o mt heK3 ’ s1 2VDCou t pu t . The dual-tone filter can be used with AFS K A and FS K D . T he on/off state of DT F is saved independently for each of these modes. FSK Transmit Polarity Set up the modem (if applicable). Settings may You can invert the logic level of the FSK IN line in FSK D mode using CONFIG:FSK POL.. vary depending on the data mode being used. Locate CONFIG:LIN OUT and set it to 10. A different level may be better for your modem. Mic Gain, ALC, and Monitor Level TheK3’ sS YNCDATA feature can be used to I fy ou ’ r eu s i nga na u d i o-shift transmit mode ( LS B , US B , DATA A , or AFS K A ) ,y ou ’ l lne e dt os e t the MI C level while watching the ALC meter. You can use the same procedure outlined for voice modes (pg. 28), except that speech compression should not be used. minimize T -R delays (it forces the same crystal filter to be used for both receive and transmit). Locate CONFIG:SYNC DT. Assign it to a programmable function (e.g., by holding P F 1), then exit the menu. T ap MO D E to select DATA . In all cases (SSB modes as well as DAT A), you can optionally use MO N to monitor your data signals. The procedure given for voice modes can be used (pg. 28). Voice-mode and DAT A-mode monitor levels are independent. Select an appropriate data sub-mode by holding D AT A MD , then rotating VFO B. DA TA A (generic data mode, USB) is used in most cases; see pg. 31 for alternatives, such as AFS K A . T ap AF X to exit the parameter display. T he MI C setting does not apply to direct modulation data modes ( FS K D and P S K D ), since no audio is used for transmission. However, you can still use MO N to monitor the signals. Locate MAIN:MIC SEL and set the audio source for data to LI NE I N . Exit the menu. If you wish to use SYNC DAT A, turn it on by holding P F 1 (or the switch used above). The –S icon will appear. A CONFIG:PTT RLS value of 10 to 12 may be ideal in this case. Some modes may have very high duty cycles; use less than full power output if required. Refer to your application software for instructions regarding email set up and other operating details. 32 Advanced Operating Features Text Decode And Display DATA Text Decode Setup The K3 can decode CW, PSK31 and RTTY. Decoded text is displayed on VFO B. In data modes ,y ouc a nu s et heK3 ’ si n t e r na l ke ye rt o transmit PSK31 and RTT Y signals (pg. 34). To set up text decode for DAT A modes: Set MO D E to DATA. Then hold D AT A MD and select either AFS K A , FS K D , or P S K D mode using VFO B. T ap AF X to exit the datamode display. When text decode is enabled, turning the RIT /XIT offset control does not flash the offset value. T his would disrupt the text display. For AFS K A or FS K D , hold P I TC H and select the desired mark/shift setting. T he lowest mark tone selection (915 Hz) may be more pleasant to listen to than higher tones. (The pitch for P S K D mode is fixed at 1000 Hz.) T ap S P O T to exit the pitch display. CW Text Decode Setup To set up CW text decode: Set MO D E to CW. Hold TE X T D E C , then select O N using VFO B. Below the DATA icon you should now see a T, showing that text decode is enabled. If a special VFO B display mode is in effect, cancel it by tapping DI S P . Hold TE X T D E C , then select CW 5 -40 (lower WPM range) using VFO B. Below the CW icon you should see a T, showing that text decode is enabled. The TX O NLY setting decodes only CW you send (internal keyer); the T does not appear in this case. Adjust the threshold ( THR ) using VFO A. Start with THR 0 . Higher settings prevent text decode on weak signals or noise. Tap CW T to exit text-decode setup. You ’ l lpr ob a b l ywa ntt ot u r no nC W T as a tuning aid (pg. 34). T his also enables auto-spot (applicable to PSK31 but not RTTY). Adjust the threshold ( THR ) using VFO A. Start with AUTO . Manual settings ( 1 -3 0 ) improve copy in many cases (see below). T ap C W T to exit text-decode setup. DATA Mode Te xt Decode Tips: Use F IN E tuning with P S K D . S P O T (or auto-spot) a signal first, then tune slowly in 1Hz steps until recognizable words appear. You ’ l lpr ob a b l ywa ntt ot u r no nC W T as a tuning aid (pg. 34). T his also enables auto-spot. If you call CQ using PSK31 mode, keep RIT on so you can fine-tune responding stations without moving your transmit frequency. CW Text De code Tips: S P O T (or auto-spot) a signal first, then tune slowly until recognizable words appear. In difficult conditions, reduce WI D TH to the per-mode minimum (typically 50 Hz for PSK31, 200 Hz for narrow-shift RTT Y). In difficult conditions, reduce WI D TH to as low as 50 Hz (100-200 Hz for faster CW). To optimize text decode, use manual threshold settings. Start with THR 5 . With CW T on, adjust the threshold so that the CWT bar flashes in sync with the received CW signal. In AFS K A and FS K A modes, the RTTY dual-tone filter may help (DT F, pg. 32). RTT Y text may shift to figures due to noise. If you assign CONFIG:TTY LTR to a programmable function switch, you can tap it to quickly shift back to letters. To decode very fast CW, use CW 3 0 -90 . T he K3 uses slow AGC during CW text decode, overriding the selected AGC setting. 33 CW-to-DATA Tuning Aids: CWT and SPOT You can use data modes completely stand-alone (i.e., without a computer). Just turn on text de code (pg. 33), and send CW using the internal keyer. T apping CW T turns the upper half of the S-meter into a CW/DAT A tuning aid. If no bar appears in the tuning area, the threshold may be set too high; hold TE X T D E C and select a lower THR value. When a received CW or PSK31 signal is centered in the passband, the CWT display will appear as shown below. CW messages can also be used for CW-to-DAT A. This makes it easy to answer a CQ, send a contest e xc ha ng e ,o rpl a ya“b r a gt a pe ”d u r i ngaQS O. To set up for CW-to-DATA operation: CWT Referring to pg. 33, use MO D E , D AT A MD , TE X T D E C , and P I TC H to set up text de code . Select either FS K D or PS K D mode. A small T should appear below the DATA icon. S1 3 5 7 SWR 2 3 9 RF 50 100 In RTT Y modes ( AFS K A and FS K D ), mark and space tones are represented by three bars each, with mark to the left of the CWT pointer, and space to the right. When only weak signals are present in the mark/space filters, 1-3 bars will flicker on either side, leading to a “g hos t i ng ”e f f e c t ,a ss h ownhe r e . Try tuning in a few stations (turn on C W T ; pg. 34). T ips for improved copy in tough band conditions are provided on pg. 33. Plug a keyer into the PADDLE jack. The first time you try CW-to-DAT A, set P W R to 0 watts or use T X TE S T mode (pg. 13). CWT All CW you send will be transmitted as data a ndd i s pl a ye donVFOB.You ’ l lhe a raCW sidetone, as well as PSK or FSK tones. Adjust the data monitor volume using MO N . T o adjust the CW sidetone monitor level, temporarily switch back to CW mode. S1 3 5 7 9 SWR 2 3 RF 50 100 As you tune the VFO close to an RTTY signal, the number of bars will initially increase on one side or the other. Keep tuning until you see a rough balance between left and right bars. (Also see DTF, pg. 32, and CONFIG:TTY LTR.) When calling CQ, use RIT to tune in stations that reply (especially important for PS K D ). Whenever you pause, the K3 will remain in a data idle state for about 4 seconds before dropping. To extend the timeout, send BT, which is not transmitted as data. Manual SPOT If C W T is off, you can tap S P O T , then manually t u net heVFOu nt i l t her e c e i ve ds i g na l ’ spi t c h matches the sidetone. If you find pitch matching difficult to do, try auto-SPOT (below). To cut the idle transmit period and exit to receive mode, send ". . - - " (IMmediately exit). This character is not transmitted as data. Auto-SPOT When recording CW messages for use during CW-to-DAT A, you can add ". . - - " at the end to cut the idlet i mewhe nt he y’ r epl a ye db a c k. To use auto-spot, first turn on C W T . Use a narrow bandwidth (200 to 500 Hz). Tapping S P O T will then automatically tune in a received signal that falls within the CWT display range. TheCW a b b r e vi a t i onf o r“a nd ”( ES )i sno tu s e d in data modes and might lead to confusion. Other prosigns can be used, including KN, SK, and AR. Auto-spot may not be usable if more than one signal is in the CWT range, if the signal is extremely weak, or if the code speed is very slow. If you set VFO B for CW mode rather than DAT A mode and use cross-mode S PL I T (pg. 36), your CW will not be converted to DATA. Auto-spot coarse-t u n e sPS K31s i g na l s ,b u ty ou ’ l l need to fine-tune them in 1-Hz steps ( FI N E ). 34 Audio Effects (AFX) Receive Audio Equalization (EQ) If you have stereo headphones or stereo external speakers, y ouc a nt a kea d v a n t a g eo ft h eK3 ’ sDS P audio effects. These create an illusion of greater space, similar to stereo. For many operators, AFX provides a less-fatiguing receiver sound, and it can even improve weak-signal copy. The K3 provides 8 bands of receive audio equalization via the MAIN:RX EQ menu entry. You can use receive equalization to compensate for the physical acoustics of your station (the room, headphones, speakers, etc.), or just to tailor the audio to your personal preference. In the RX EQ menu entry, the VFO A display shows 8 individual vertical bar graphs. The example below shows various amounts of EQ applied to the 8 bands. MAIN:AFX MD is used to select the desired AFX setting. Available selections include DE LAY 1 -5 (quasi-stereo), and BI N, which provides a constant phase shift between the left and right outputs. T ap AF X to turn the selected effect on or off. This can be done even within the AFX MD menu entry. USB A When the sub receiver is turned on, turning AFX on may not have any noticeable effect. This is because main/sub dual receive is already a stereo mode, with different material routed to each audio channel. VOX AGC-S NB ANT 2 ATU PRE XFIL RIT TX B FL2 The center frequencies of the 8 audio EQ bands are 50, 100, 200, 400, 800, 1200, 2400, and 3200 Hz. To select a band to change, tap 1 - 8 on the keypad. For example, tapping 1 selects the 50-Hz band. Dual Passband CW Filtering Dual-passband filtering lets you remain aware of off-frequency CW signals while listening to one signal centered in the passband. T his can be useful during contesting or DXing, as well as when searching a band for weak signals. Next, rotate VFO A to specify an amount of boost or cut (+/- 16 dB). The illustration above shows the 800 Hz EQ band ( 0. 80 kHz) being set to + 1 dB of boost. Hold D U AL P B to turn on dual-passband filtering. This sets up a narrow filter (focus), set within a wider passband (context) that is attenuated by about 20 dB. The filter graphic reflects this: You can tap C LR to reset all of the RX EQ bands to 0 dB (no cut or boost). Transmit Audio Equalization (EQ) Transmit audio equalization is provided to compensate for variations in microphones and your voice. MAIN:TX EQ works exactly the same as RX EQ, and can be used during transmit. The width of the context filter can be varied over a wide range using W ID TH , while the focus filter bandwidth is fixed. T he current preset will keep track of both the state of D U AL P B and the context width. I fy ou ’ r eu s i ngES S B( pg .36), a separate set of transmit EQ settings is provided. When ESSB is on, the TX EQ menu entry name changes to TX*EQ as are reminder of which TX EQ s e tyou ’ r ec ha nging. Hold D U AL P B again to return to normal filtering. While adjusting TX EQ, you can listen to the voice monitor signal using headphones (use MO N to s e tt h el e v e l ) , orl i s t e nt ot heK3’ st r a ns mi t t e ds i g na l on another receiver. 35 SPLIT and Cross-Mode Operation General-Coverage Receive Normally, VFO A is used for both receive and transmit. When S PL I T mode is selected, VFO B becomes the transmit VFO. In this case the S P LT icon turns on, the TX arrow points to B (pg. 12) and the yellow delta-F LED ( ∆f ) turns on if receive and transmit frequencies or modes differ. The KBPF3 option module includes band-pass filters that cover the areas between ham bands. T he K3 will switch between its narrow ham-band filters and the KBPF3 filters as you tune the VFOs. A KBPF3 module can be installed on the RF board (for use with the main receiver), and a second one on the sub receiver module (KRX3). Cross-mode operation is possible in some cases, such as SSB/CW. You can use B S E T to directly change the mode of VFO B (pg. 22). CONFIG:VFO CRS selects C O AR S E VFO tuning rate in each mode. AM coarse tuning rates include 5, 9, and 10 kHz. You can transmit in CW when SSB mode is s e l e c t e db yj u s thi t t i ngt h eke yorpa d d l e ;t he r e ’ sno need to use cross-mode split in this case. The SSB station will hear the signal at your sidetone pitch. See CONFIG:CW WGHT. Sensitivity below 1.8 MHz will be reduced due to the high-pass response of the T-R switch, which protects the PIN diodes. VFO B Alternate Displays Extended Single Sideband (ESSB) The VFO B display can showtime, date, RIT /XIT offset, supply voltage, current drain, KPA3 heatsink temperature (PA), and front panel compartment temperature (FP). T ap DI S P to turn the selected alternate display on or off. Rotate the VFO B knob to select the desired display. An increase in SSB voice bandwidth may improve fidelity and reduce listening fatigue. TheK3’ snor ma lS S Br e c e i veb a nd wi d t hi sa b ou t 2.7-2.8 kHz. If you have a 6 kHz filter installed, you can use WI D TH to select a wider passband. If CONFIG:TECH MD is O N , additional VFO B alternate displays will be available. P LL1 and P LL2 show the main and sub receiver synthesizer PLL voltages; if either is out of range, (*) will appear. AFV shows the true RMS value of receiver AF output (mVp-p). This is not affected by the AF GAIN control. After the AFV reading has stabilized, you can use VFO B to select dBV , which is useful for comparative signal strength measurements. AFV and dBV both apply to the main receiver unless the sub receiver is turned on. Also see CONFIG:AFV TIM. ESSB transmit is set up as follows: In CONFIG:TX ESSB, change the parameter from O FF to one of the provided selections ( 3 . 5 , 4. 0 , etc.). See cautions below. With ESSB enabled, the transmit EQ menu entry name changes to MAIN:TX*EQ to allow you to independently adjust EQ at the selected wider bandwidth. The amount of adjustment required at each EQ band may vary depending on the selected ESSB transmit bandwidth. You can assign TX ESSB to a programmable function switch, to alternate between ESSB T X O FF and the last-selected wider bandwidth. Alarm and Auto Power-On Onc eyou ’ ves e tt heK3’ sr e a l -time clock (CONFIG:TIME), you can use MAIN:ALARM to set an alarm. This can be used to remind you of a schedule or net, or to start warming up for a contest. Carrie r and spurious signal suppression, passband shape, delay characte ristics, fidelity, and othe r aspe cts of ESSB pe rformance are not spe cifie d. Use ESSB only after care fully monitoring your signal. When an alarm is set, (*) appears in the time display. (T ime can be displayed by tapping DI S P .) T he K3 will turn ON automatically if it was off at alarm time. It will be on the last-used band. 36 Using the Sub Receiver Sub Receiver Antenna Selection The KRX3 option adds an independent, highperformance sub receiver to the K3. Sub receiver installation is covered in the KRX3 manual. The sub receiver gets its RF input either from the main receiver (sharing ANT 1-2 or RX ANT IN), or from its auxiliary antenna input. (Also see pg. 40.) You can use the sub receiver to monitor two different frequencies or bands, using different bandwidths or modes. Diversity receive is possible if the main and sub receivers use different antennas. During BS E T, you can tap AN T to switch the sub between M AI N (shared) and AUX ( t hes u b ’ sAUX input). When M AI N is in effect, the 1 - 2 and RX icons show which antenna the sub is sharing with the main receiver. When AUX is in effect, these icons will all be off (if CONFIG:KRX3 is set for AN T= BNC ) or will show the non-transmit AT U antenna, 1 or 2 (if KRX3 is set for ANT= ATU ). Dedicated Sub Receiver Controls SU B turns the sub receiver (and S UB icon) on. Turn the sub re ceive r off when not neede d. This bypasses the main/sub antenna splitte r, improving main re ceive r sensitivity by 3 dB. Thes u br e c e i ve r ’ sAUXa nt e n namu s tb ewe l l isolated from the main (transmit) antenna to avoid activatingt hes u br e c e i ve r ’ sc a r r i e r -operated relay. VFO B con t r ol st hes u b ’ sf r e q u e nc y,a ndi sa l s ot he T X frequency during SPLIT (see details at right). Using the AUX input for the sub receiver slightly improves sensitivity of both the main and sub receivers because the splitter is not used. Holding SU B links the VFOs, allowing diversity receive (details at right). VFO A is the master, moving both VFOs in tandem. VFO B can be offset from VFO A, or set equal to it by tapping A B . Sub Receiver Band Independence Using B S E T , you can set the sub receiver to a different band from main (using B AN D ) if CONFIG:VFO IND is set to Y ES . SUB AF g a i ns e t st hes u b ’ svol u mel e ve l . Wi t h stereo headphones or dual speakers, you'll hear the sub on the right and main on the left. Otherwise main and sub receiver audio are added together. If the two receivers are sharing an AT U antenna, putting the sub receiver on a higher band than the main receiver may result in signal loss in the sub receiver due to the shared low-pass filters. Toa voi dt hi s ,u s et h es u br e c e i ve r ’ sAUXi npu t . SUB RF gainn or ma l l ys e t st h es u br e c e i ve r ’ s RF GAIN level. If this knob is assigned to main/sub squelch (CONFIG:SQ MAIN), then both main and sub RF gain are controlled by MAIN. SPLIT Mode with the Sub Receiver BSET: Additional Sub Receiver Controls Normally, receive controls such as SHIFT and WIDT H, as well as the MO D E control, apply only to the main receiver (VFO A). To change sub receiver settings (VFO B), hold B S E T . VFO A will show BS E T, and the S-meter will show the sub During split ( S P LI T ), VFO A is the receive frequency and VFO B the transmit frequency. If the sub receiver is on, you can listen to both receive and transmit frequencies, making DXing more c on ve ni e n t .Youc a ns e tu pt hes u br e c e i ve r ’ s filtering independently using B S E T . receiver's signal level. Tap AN T to select the sub rece i ve r ’ sa nt e nna( de t a i l sa tr i g ht ) . Diversity Receive Some controls, including B AN D , P R E , ATTN , N B , and NR cannot be set independently for the sub receiver unless CONFIG:VFO IND is set to YES. I fi t ’ ss e tt oNO , you’ l ls e e= M AI N when using these controls during BS E T. Diversity receive can improve signal copy during fading (QSB). A separate antenna must be used for t h es u br e c e i ve r( AUX;s e ea b ove ) .You ’ l la l s o need to link VFO B to VFO A by holding S U B . For best results, use identical crystal filters and DSP settings on the two receivers. (Elecraft can provide 5-pole crystal filters with matched offsets on request. 8-pole filters are already matched.) T ap A/ B or hold B S E T again to exit BS E T. 37 Receive Antenna In/Out Using Transverters The RX ANT IN/O UT jacks, supplied with the KXV3 option, have various uses: Nine user-definable bands are provided for use with transverters. Once enabled, each will appear in the band rotation above 6 m. You can use Elecraft XVSeries or other transverters with the K3. See pg. 18 for transverter control connections. Low-noise re ceiving antenna: Some operators use a Beverage, tuned loop, or other low-noise receiving antenna. You can connect such antenna to the RX ANT IN jack, then tap R X AN T to select it. The RX icon will turn on. Transverter Band Setup Narrowband filters or preamps: You can "patch in" a specialized filter or preamp (e.g., the Elecraft PR6 preamp for 6 meters) between RX ANT IN / OUT . Tap R X AN T to switch the filter in (per-band). It will be in-line only during receive, so you can use low-power devices. Transverter bands are set up using the XV menu entries. Tap 1 –9 within menu entries to select a transverter band to configure. Te st signal inje ction: The RX ANT IN jack is ideal to inject a test signal, because the generator won't be damaged if you transmit. XVn RF selects the transverter operating frequency in MHz. XVn ON must be set to Y E S to enable transverter band n . XVn IF specifies a K3 band to use as the transverter I.F. (7, 14, 21, 28, or 50 MHz). Re ceive r comparisons: If you connect the RX ANT OUT jack to a second receiver, and leave the RX ANT IN jack open, you can A/B test the K3 against the other receiver. When the R X AN T is not selected ( RX icon off), the K3 will be receiving on its main antenna jack, and the other receiver will have no input. If you then tap R X AN T , the K3 will have no receive antenna, while the other receiver will be operating from the K3's main antenna. XVn PWR sets the K3 power output range to be used with this transverter band. L . 01 L1 . 2 7 specifies a power level in milliwatts, which requires the KXV3 option (use the XVT R IN and OUT jacks). H 0 . 0 -H12 . 0 s pe c i f i e sp owe ri nwa t t s , a nds e l e c t st heK3’ s main antenna jack(s) for output. XVn OFS can compensate for frequency offset i nt het r a ns ve r t e r ’ sos c i l l a t o r / mu l t i pl i e rc ha i n. The value shown is in kHz. I fyou ’ r ec ompa r i ngthe K3 to a transceiver and using its transmit/receive antenna, be sure to set its power to 0 so you won't damage the KXV3 when you transmit. XVn ADR specifies an Elecraft XV-Series transverter select address (see XV manual). For we ak-signal work: If you have a KXV3, you can improve isolation between XVT R IN/OUT and RX ANT IN/OUT by removing any antenna connected to RX ANT IN. If you have a KAT3, you should tap AN T to select the antenna (1 or 2) that has lower sensitivity on the transverter I.F. band in use. (Note : The ANT 1 /2 icons are not displayed if XVn PWR is set for L power range. Use H range temporarily to see the icons.) Buffered I.F. Output The KXV3 provides a buffered first I.F. signal at the IF OUT jack. This signal (about 8.215 MHz) is compatible with some panadapters. Refer to panadapter documentation for interfacing and operating instructions. The frequency of the first I.F. for a given mode/filter setting can be queried by a computer using thec omma nd“FI;”( r e f e rt ot he K3Pr o gr a mmer ’ sRe f er e nc e ). CAUTIO N: When possible , use mW-le vel drive with transverte rs (via XVTR O UT). If you use ANT1 or 2, you could accidentally transmit into a low-le vel transve rte r at high power. Use a short, high-quality coax cable be tween the K3 and the panadapte r. Additional isolation circuitry may also be re quire d. 38 Scanning Channel Hopping The K3's scanning features let the K3 tune any band segment continuously, with or without the receiver muted (Some bands may be excluded from scanning for regulatory reasons.) Scanning can be used to monitor any portion of a band, from a 1-2 kHz range where a station or net is expected to appear, to an entire band. Scanning or manually tuning over a numbered memory range, rather than a frequency range, is referred to as channel hopping. This is included in the K3 primarily for use on 60 meters, 6 meters, and transverter bands, although it can be used on any band. T he U.S. 60-meter channel assignments correspond to VFO settings of 5330.5, 5346.5, 5366.5, 5371.5, and 5403.5 kHz. USB is the only mode allowed on this band. S ome t i me sab a ndt ha ta pp e a r s“d e a d ”ma ya c t u a l l y have several stations present. You can use scanning to find these stations for you while accomplishing other tasks. Memories to be used for channel hopping must be consecutive, within the same band, and must be assigned a text label that starts with an asterisk (*). Scanning while muted allows the K3 to ignore stable carriers (key-down signals with no modulation), unmuting only when "interesting" signals are found. Scanning with the receiver "live" (not muted) is especially useful when listening for weak signals on very quiet bands. To set up channel hopping: Set VFO A to the first frequency in the intended channel-hopping range. (VFO B does not have to be set higher than VFO A for channel hopping purposes.) Scanning Setup T ap M V , then select a memory (0 0 - 99 ) using VFO A. For the five 60-meter channels, we sugge st using memories 61-65. Start with memory 61 for 5330.5 kHz (US), or your c ou n t r y ’ sf i r s t60 -meter allocation. To use scanning, you first need to store the desired t u ni ngr a ng ei name mor y .Af t e rt ha t ,y ou ’ l lb ea b l e to simply recall the memory, then start scanning. You can set up scanning ranges for various bands, modes, etc. Rotate VFO B to select each memory label position in turn as indicated by the flashing cursor. To set up a scanning memory: Set VFO A to the starting frequency, and VFO B to the ending frequency. Use VFO A to change characters. The first character must be an asterisk (*); other label characters are optional. Select the operating mode, preamp/attenuator settings, and filter bandwidth. Also select the tuning rate, which affects speed of scanning. After editing, tap M Store this setting in any memory (pg. 16). again. Set up all other memories to be used for channel hopping. To start scanning: Recall the memory using M V V. To enable channel hopping (manually or via scanning), tap M V , use VFO A to locate one of the memories in the sequential range set up earlier, then tap M V again. VFO A will now cycle through this range of memories as you turn it. To disable channel hopping, tap R ATE or F IN E or change bands. Hold S C AN to start scanning. To scan with the receiver live (unmuted), continue to hold S C AN until you see AF O N (about 2 seconds). You can stop scanning by manually rotating VFO A, tapping any switch, hitting the key or keyer, or pressing PTT. T o restart, hold S C AN . To start channel-hop scanning, hold S C AN . You c a na l s ou s e“l i ve ”s c a nni nga sme n t i one da tl e f t . 39 Main and Sub Receiver Antenna Routing The simplified block diagrams in this section show how antennas are routed to the main and sub receivers. Heavy lines show the default RF path. All antennas are protected from electrostatic discharge by surge arrestors. Receive-only antenna inputs, indicated by asterisks (*), include carrier-operated relay circuitry (C.O.R.). Basic K3 (no KAT3 or KXV3) As shown in Figure 1, the basic K3 is supplied with one antenna jack (ANT1, SO239). The signal from ANT 1 is routed through the antenna input module (KANT3) to the main receiver (as well as to the transmitter). The KRX3 sub receiver, if installed, can share the ANT 1 signal via a passive splitter and relay K1. When the sub receiver is off or is switched to its AUX RF input (dotted line), K1 bypasses the splitter so it will have no effect on either receiver. An extra RF I/O connector location is provided (AUX RF, BNC) . Thes u br e c e i v e r ’ sAUXRFi npu tc a nb e r ou t e dt ot hi sc on ne c t o r .K1t he ns e l e c t se i t he r t hema i nRXpa t ho rAUXRFa st hes u br e c e i ve r ’ sRFs ou r c e . Any receiving antenna connected to AUX RF must be isolated from the transmit antennas so the sub receiver ’ s C.O.R. will not be activated during transmit. Note : T he sub receiver has its own full set of ham-band and optional general-coverage band-pass filters (KBPF3), but its image rejection will be best when sharing the main path, which includes the receive/transmit low-pass filters. Ant 1 KANT3 Ant. Input Module Main RX SA1 KRX3 Aux RF Sub RX K1 SA4 * * Includes C.O.R. Figure 1. Basic Main/Sub Receive r Routing (no KAT3 or KXV3) K3 with KXV3 RF I/O Module If the KXV3 option is installed (Figure 2), a separate receiving antenna can be connected to the RX ANT IN jack. Relay K2 then selects either ANT1 or RX ANT for the main receiver. Note: The low-pass filters will not be in the path when RX ANT is selected. T his will rarely be an issue, since the main receiver has a full set of ham-band band-pass filters. You can use external filters with RX ANT IN if required. Re l a yK1a l l owst hes u br e c e i ve rt os h a r et hema i nr e c e i v e r ’ sRFs ou r c e ,o ru s ei t sAUXRFinput. This means that two receiving antennas could be used –one for each receiver. The two inputs could also be joined externally wi t ha‘ Y’a d a p t e r . No ts howni st heRXANTOUTj a c k. TheRXANTI N/ OUTj a c ksc a nb eu s e dt og e t he rt o“pa t c hi n”a n external band-pass or low-pass filter or low-noise preamp such as the Elecraft PR6 (6 meters). If such a device is powered, it can be turned on or off on a per-band, per-antenna basis using the CONFIG:DIGOUT1 menu entry. 40 RX Ant. * Includes C.O.R. SA3 * KXV3 KANT3 Ant. Input Module Ant 1 Main RX K2 SA1 KRX3 Aux RF Sub RX K1 SA4 * Figure 2. Main/Sub Re ceive r Routing with KXV3 Installe d K3 with KAT3 ATU The KAT3 internal AT U, which replaces the KANT3 antenna input module, provides a second SO239 antenna jack (ANT 2). As shown in Figure 3, relay K3 routes either ANT 1 or ANT 2 to the main RF path. T he antenna not routed to the main path (the non-transmit a n t e n na )c a nop t i ona l l yb eu s e da st h es u br e c e i ve r ’ sAUXRF antenna. This requires that the two antennas connected to the KAT3 be well isolated from each other. If not, the s u br e c e i ve r ’ sc a r r i e r -operated relay may turn on during transmit. If this occurs, you must either move the two antennas farther apart, or not connect the sub receiver to the KAT3. It may be preferable to connectt hes u br e c e i ve r ’ sa u xi l i a r yRFinput to the AUX RF connector on the rear panel. A well-isolated receiving antenna can then be used with the sub receiver when required. (See CONFIG:KRX3, for sub receiver antenna setup.) Ant 1 KAT3 ATU SA1 Main RX L-Network Ant 2 K3 KRX3 SA2 K1 Sub RX SA4 * * Includes C.O.R. Aux RF Figure 3. Main/Sub Re ceive r Routing with KAT3 Installe d 41 K3 with KAT3 and KXV3 Figure 4 shows the antenna possibilities with both the KAT3 and KXV3 installed. The main receiver can use ANT 1, 2, or RX ANT IN. The sub receiver can either s ha r et hema i nr e c e i ve r ’ sRFs ou r c e ,o ru s ei t sAUXRF input. The latter can be either the non-transmit KAT 3 antenna or the AUX RF BNC connector, as described e a r l i e r . I ne i t he rc a s e , t hes u br e c e i ve r ’ sa nt e n namu s tb ei s ol a t e df r omt he transmitting antenna. RX Ant. SA3 * Ant 1 * Includes C.O.R. KXV3 KAT3 ATU SA1 Main RX K2 L-Network Ant 2 K3 KRX3 SA2 K1 SA4 * Aux RF Figure 4. Main/Sub Re ceive r Routing with KXV3 and KAT3 . 42 Sub RX Remote Control of the K3 Remote Power On/Off A remote-control system can pull the POWER ON line to ground (ACC connector, pg. 18) to turn the K3 O N. To turn it O FF, the controller must send t h eK3a“PS0;”r e mo t e -control command via the RS232 interface, then deactivate the POWER ON signal. T his sequence ensures that nonvolatile memory is updated correctly before shut-down. With appropriate software, any computer with an RS232 port (or a USB-to-RS232 adapter) can be used to control the K3. Connections needed for RS232 communications are covered on pg. 18. Third-party logging and contesting software is available for various computers and operating systems. Most applications written for the K2 should work with the K3, and some provide K3specific features. Automatic Antenna Control Some antenna control units (e.g., those used with S t e ppI R™ a n t e nna s )c a nt r a c kt heK3’ sb a nda nd f r e q u e nc yb ywa t c hi ngf o r“IF;”( r i gi nf o r ma t i on) packets from the transceiver. Some computer logging/contesting applications set up the K3 to output these messages periodically, allowing the a n t e n nac on t r olu ni tt o“e a ve s d r op . ” For a list of K3-compatible software applications, including configuration requirements, please visit Remote-Control Commands The K3 has a rich set of remote-control commands, including many commands that directly controlthe two DSPs. With appropriate software, various extensions to DSP functionality can be made available to the operator, including customized filters, fine control over noise reduction, per-mode parametric EQ, absolute level metering in dB, and unique tuning aids. I fy ou ’ r eno tu s i ngs u c hs of t wa r e ,o ri fyou ’ r en o t using a computer at all, you can still set up the K3 t oou t pu t“IF;”pa c ke t sp e r i od i c a l l yb ys e t t i ng CONFIG:AUTOINF to AUTO 1 . T he packets are sent once per second while the VFO frequency is being changed, as well as on any band change. I fy o u’ r eus i ngl o gg i ng/ c o nt e s t i ngs o f t wa r e, che ck with the manufacture r before setting AUTOINF to AUTO 1 . Some applications may no tb et o l er a ntofuns o l i c i t ed“I F; ”p ac ket s . K3 remote-control commands use ordinary ASCII text, so they can be easily tested using a terminal emulator.F ore xa mpl e , t hec omma nd“FA;”r e t u r ns the current VFO A frequency. Using the same command, you can set the VFO A frequency, e.g. “FA00007040000;”s e t st heVFOt o7. 0 40MHz . CW/DATA Terminal Applications The K3 directly supports CW/PSK31/RTT Y ASCII text transmit and receive via its RS232 port. Our K3 Utility application includes a simple Terminal function that lets you try out these modes using you rc ompu t e r ’ ske yb oa r dand monitor. Many new commands are provided in addition to the core set of commands supported by the K2. Some existing commands have been updated to d i r e c t l yc o n t r ol t hes u br e c e i ve r( e . g . , “AG$;” , which controls sub AF gain). Please refer to the K3 Pr og r amme r ’ sRe f e r e nc efor further details. You can also use a generic terminal program of any type to emulate this same functionality. Here are the low-l e ve lc o mma nd syou ’ l l ne e d : Panadapter/Spectrum Scope Control Re ceive : I fa“TT1;”( t e x t -to-terminal) command is sent to the K3, it will route received and decoded CW/DAT A text to the a terminal program, in a d d i t i ont os h owi ngt het e x tont heK3’ sVFO B display. TheK3’ s“FI;”r e mo t e -control command can be used from a panadapter (also known as a spectrum scope) to determine the exact freque nc yof t heK3’ s first I.F. T his can automatically compensate for crystal filter offsets, passband shift, etc. Transmit: Te x tc a nb et r a ns mi t t e du s i ng“ke yi ng ” pa c ke t s( e . g . , “KY {text};” ) .Upt o2 4c ha r a c t e r s can be sent in each packet. 43 Options Firmware Upgrades K3 option modules and crystal filters add significant new capabilities to the transceiver. T hey can be installed at any time (see pg. 45). All modules are plug-in, requiring no soldering. New features and improvements are available to all K3 owners via firmware upgrades. Upgrades may also be required when you install option modules. Please visit the Elecraft K3 software page () to obtain our free firmware download application, K3 Utility. This program runs on PCs, Macs, and Linux platforms. In addition to firmware downloading, K3 Utility provides configuration save/restore, crystal filter configuration, and CW/DAT A terminal functions. The presently available options are described briefly below; please refer to our web site for further details on these as well as our full range of 5- and 8-pole crystal filters. KAT3: Wide-range internal 100-W automatic antenna tuner with dual antenna switch. T he ANT2 connector is supplied with this option. Some applications or pe riphe ral de vices may inte rfe re with K3 downloads; check the Help information in K3 Utility if you have difficulty. KPA3: Internal 100-W upgrade for the K3/10, with two large fans and separate circuit breaker. KDVR3: Digital voice recorder, usable both for message record/playback and general audio recording. I fy oud on ’ tha veI n t e r ne ta c c e s s ,y ouc a nob t a i na firmware upgrade on CD. If you don't have a computer, you can send your K3 to Elecraft to be upgraded. See Customer Se rvice, pg. 10. KRX3: High-performance, fully-independent sub receiver with its own set of 5 crystal filter slots, 32bit DSP module, noise blanker, optional generalcoverage band-pass filter array (KBPF3 –see below), and auxiliary antenna input. Checking your Firmware Revision Us et heCONFI Gme nu ’ sFW REVS menu entry to determine your firmware revision. The serial number of your transceiver, if needed, can be obtained using the SER NUM menu entry. KBPF3: General-coverage band-pass filter array that allows the K3 main or sub receiver to cover the entire LF and HF range of 0.5 to 30 MHz. (If you want general coverage in both main and sub receivers, two KPBF3 modules are required.) K3 Firmware Self-Test If the K3 detects an error in its firmware (an incorrect che cksum), it will flash the TX LED and show M CU LD on the LCD (with backlight off). KXV3: RF I/O module, including receive antenna in/out jacks (see pg. 38), transverter interface (pg. 38), and a buffered I.F. output (pg. 38). The RX ANT IN/OUT jacks can be used to patch-in external per-band filters or low-noise preamps. If this occurs, connect the K3 to your computer and reload firmware. While firmware is loading, the Delta-F LED ( ∆f ) will flash. When the download is complete, the K3 should reset and run normally. KTCXO 3-1: High-stability T CXO; 1 PPM, firmware correctable to better than 0.5 ppm (see calibration instructions, pg. 49). Forcing a Firmware Download PR6: High-performance, low-noise 6-meter preamp. T he PR6 can be connected directly to the K3’ sRXANTI Na ndOUTj a c ks( r e q u i r e sKXV3 option). It can then be enabled for receive on 6 meters by tapping R X AN T , and turned on using the DIGOUT 1 signal (ACC jack, pg. 18, and CONFIG:DIGOUT1). BYPASS jacks are provided so the RX ANT IN/OUT functions will be available for use on other bands. If you accidentally load an old or incompatible firmware version and find the K3 unresponsive, do the following: (1) unplug the K3 from the power supply and wait 5 seconds; (2) plug the power s u ppl yb a c ki n;( 3)hol dt heK3’ sP O W ER switch i n;a f t e ra b ou t1 0s e c o nd s ,you ’ l ls e et heTX LED f l a s h( you ’ l la l s os e eM CU LD on the LCD); (4) load the correct firmware version. 44 the filter information table, Appendix A). The default value, 0 . 00 , corresponds to the nominal filter center frequency of 8215.0 kHz. Most 5pol ef i l t e r swi l lha vea nof f s e t ,e . g . “ -0. 91 ” . (T his has no effect on performance; firmware compensates for the offset.) Configuration Configuring your K3 involves installing options and crystal filters, then customizing menu settings. Options come with their own installation manuals. Onc et h e y’ r ei ns t a l l e d , t he ymu s tb ee na b l e du s i ng their associated menu entries (see pg. 52). Select the remaining filters and adjust their frequency offsets as required. Crystal Filter Setup Receive Filter Enables (Per-Mode) Crystal filter installation is covered in detail in Appendix A (pg. 72). Once filters have been installed (or moved), follow the steps below. You must specify which of the five crystal filters is enabled for receive in each mode. Use VFO B to locate the FLx ON menu entry. The K3 Utility software application can also be use d to vie w or change crystal filte r settings; click on Configuration tab / Edit Crystal Filters. T ap SU B i fyou ’ r es e t t i ngu ps u br e c e i ve r filters. T urn the K3 on. T ap MO D E until the LS B icon appears. If you see the US B icon instead, hold AL T (left end of the MO D E switch) to select LS B . Hold C O NF I G to access the CONFIG menu. T ap 1 or use XF IL to select FL1 . Locate the FLx BW menu entry, which will be used in the next step to set up filter bandwidths. “x ”wi l lb er e pl a c e dwi t h1 through 5 , corresponding to crystal filters FL1-FL5. Set FL1 ON to YE S or NO using VFO A. You should enable both narrow and wide filters for use in SSB modes, since they may be used during copy of data, SSB, or AM signals. T ap SU B i fyou ’ r es e t t i ngu ps u br e c e i ve r filters. The S UB icon will flash. Use X F IL to go to FL2-FL5 in turn, and enable or disable these filters for LSB mode. T ap 1 or use XF IL to select FL1 . T ap MO D E to select each of the other modes in turn (USB, CW, DAT A, AM, and FM). For each mode, set up the FL1-FL5 enables. Filter Bandwidth Using VFO A, adjust the bandwidth parameter so that it matches the filter installed at the FL1 position. Use the filter information table you filled out in Appendix A. Filter Loss Compen sation Select the remaining filters by tapping 2 thorugh 5 or XF IL , adjusting their bandwidth parameters according to the table. You can compensate for the greater loss of narrow crystal filters by specifying added per-filter gain. Stay in the menu for the next filter setup step. T ap SU B to set up sub receiver filters. Otherwise, make sure the S UB icon is OFF. Filter Frequency Offset T ap 1 –5 or XF IL to select a filter to modify. Use VFO B to find the FLx FRQ menu entry. Use VFO A to set the gain in dB. In general, you ’ l lwa n tt oa d d1-2 dB for 400-500 Hz filters, and 3-4 dB for 200-250 Hz filters. Use VFO B to find the FLx GN menu entry. I fy ou ’ r es e t t i ngu ps u br e c e i ve rf i l t e r s , ma ke sure the S UB icon is still flashing (tap S UB if necessary). Select any additional filters that require added gain, and adjust their gain amounts. T ap 1 or use XF IL to select FL1 . Adjust VFO A so that the parameter matches FL1’ sma r ke df r e q u e nc yof f s e t( a sr e c o r d e di n 45 Transmit Crystal Filter Selection (Per-Mode) Miscellaneous Setup This step applies only to filters on the RF board. We suggest setting up at least the menu entries below. You may wish to reviewthe other menu entries as well, starting on pg. 51. Select CW mode by tapping MO D E . Use VFO B to find the FLTX CW menu entry. Mic Gain / Bias Rotate VFO A to select a CW transmit filter (2.7 or 2.8 kHz). Note: Key clicks may result if a narrower filter is selected for CW transmit. T ap MO D E to select LS B or US B . The menu entry will become FLTX SB. MAIN:MIC SEL is use d to select either the front- or rear-panel mic, or LI NE I N . If a mic is selected, you can also tap 1 to select Lo or H i mic gain range, and tap 2 to toggle mic bias on/off. See pg. 13 for Elecraft mic bias recommendations. Select the filter to be used during SSB and DAT Atransmit (2.7 or 2.8 kHz). AF Gain Range CONFIG:AF GAIN specifies LO or HI AF gain range. The default is HI . If applicable, select a 6-kHz filter for AM and ESSB (FLTX AM), and 13.0 kHz for FM (FLTX FM). Time and Date I fyou ’ r eu s i nga2. 7-kHz 5-pole filter for SSB transmit, you can optionally fine-tune its FLx FRQ parameter to equalize LSB and USB transmit characteristics. Monitor with a separate receiver and use headphones, or have another station listen. CONFIG:TIME sets the 24-hour real-time-clock (RT C). T ap 1 / 2 / 3 to adjust HH/MM/SS using VFO A. This URL shows UT C as well as all U.S. time zones: tycho.usno.navy.mil/cgi-bin/timer.pl CONFIG:DATE MD selects US ( MM . DD. Y Y ). or EU ( DD. MM .Y Y ) date format using VFO A. Option Module Enables K3 options can be installed at any time. CONFIG:DATE is used to set the date. T ap 1 / 2 / 3 to adjust MM/DD/YY or DD/MM/YY. Once an option has been installe d, use the associate dCO NFIG menu entry to enable it (see below). Then turn the K3 off for 5 seconds, and back on. This allows the K3 to find and test the module . VFO Setup Several CONFIG menu entries are provided to control VFO behavior: VFO CRS sets up the per-mode C O AR S E tuning rate KAT3 AT U module: set KAT3 to BY P . KBPF3 general-coverage band-pass filter module: set KBPF3 to NO R ( i fyou ’ r e installing a KBPF3 on the sub receiver, tap S U B while in the menu entry). KXV3 RF I/O module: set KXV3 to NO R . VFO CTS is used to specify the number of counts per knob turn (VFO A and B): 10 0 , 2 0 0 , or 40 0 VFO FST selects the normal VFO fast tuning rate (2 0 or 5 0 Hz) KRX3 sub receiver: set KRX3 to match your s e l e c t e dwi r i ngf o rt hes u br e c e i ve r ’ sAUX antenna: ANT= A TU (KAT3) or ANT= BNC (AUX RF jack, rear panel). For details on sub receiver antennas, see pg. 22. You may also need to set up crystal filters for the sub receiver. KDVR3 voice recorder: set KDVR3 to NO R . VFO IND, if set to YE S , allows VFO B to be set to a different band than VFO A (only applies if the sub receiver is installed) KPA3 amplifier module: set the KPA3 menu entry to P A NO R . See menu entry listings for information on other settings. 46 VFO A Knob Friction Adjustment VFO B Knob Friction Adjustment TheVFOAk nob ’ ss pi nr a t ec a nb ea d j u s t e db y moving the knob in or out slightly. The rubber f i ng e rg r i po nt heVFOAkn obc ov e r st hek nob ’ s set screw, so it must be removed first. Use the supplied 5/64" (2 mm) Allen wrench to l oos e nt heVFOBk nob ’ ss e ts c r e w. Between the knob and front panel is a felt washer which, when compressed, reduces the spin rate. Move the knob in or out in small increments until the desired rate is obtained, re-tightening the set screw each time. In the following proce dure, use only your finge rnails; a tool may scratch the knob. Using your fingernails at the point identified below, pull the finger grip forward slightly. Rotate the knob and repeat until the grip can be pulled off. Real Time Clock Battery Replacement K3 components or module s can be e asily be damage d by ESD (electrostatic discharge). To avoid this, put on a grounde d wrist strap (with 1 me gohm se ries resistor) or touch a grounde d surface be fore touching anything inside the enclosure. An anti-static work mat is strongly re commende d. The battery for the real time clock/calendar is located on the left side of the RF board. To access it, turn power off, then remove the top cover as described in Appendix A. Remove the sub receiver module (KRX3) if present. Remove the old battery. Use the supplied 5/64" (2 mm) Allen wrench to loosen the set screw. If a KRX3 module (sub receive r) is installe d, the batte ry will be prote cte d by a plastic slee ve, which pre vents shorting to the KRX3 module. Be sure to save this slee ve and re place it when the ne w battery is installe d. Between the knob and front panel are two felt washers which, when compressed, reduce the spin rate. Move the knob in or out in small increments until the desired rate is obtained. (Re-tighten the set screw each time so you can spin the knob.) Then and replace the finger grip. Replace the battery with the same type of 3-V lithium coin cell (CR2032, BR2032, equivalent). The (+) terminal is clearly marked on the battery; it must be oriented as indicated by the (+) symbol on the RF board. Re-install the KRX3 module (if applicable) and the top cover. To set the time, date, and date format, refer to the following CONFIG menu entries: TIME, DATE, and DATE MD. 47 Calibration Procedures High Power (50 W) Wattmeter Calibration This applies to the K3/100 only. Use the same procedure as shown for 5 watts, but set power to 50 W. The wattmeter calibration menu entry name will change to CONFIG:WMTR HP. All calibration proce dures are firmwarebase d. Ple ase do not adjust any of the trimme r capacitors or potentiomete rs inside the K3; they have been carefully aligne d at the factory. 1.0 Milliwatt Meter Calibration (KXV3) Most calibration procedures use Te ch-Mode menu entries. To enable these, set CONFIG:TECH MD to O N . Set TECH MD to O FF afterward. This applies only if you have the KXV3 option. Set the CONFIG:KXV3 menu entry to TE S T, f or c i nga l lb a nd st ou s et heKXV3’ st r a ns ve r t e r output jack. Power will be limited to 0-1.5 mW. The wattmeter calibration menu entry name will change to CONFIG:WMTR MW . Synthesizer This procedure is normally done at assembly time or by the factory. Hold C O NF I G and find the CONFIG:VCO MD menu entry. Set the parameter fully clockwise to CAL. Exit the menu. T he synthesizer will be tested and calibrated. Connect a dummy load and an accurate RF voltmeter to the XVT R OUT jack. Set power to exactly 1.00 milliwatts (0 dBm). Hold TU N E ; adjust the WMTR MW menu parameter for 0.224 Vrms on the external voltmeter. Then tap X MI T to exit T UNE. To calibrate the 2 nd synthesizer (for the sub receiver), locate CONFIG:VCO MD and set the parameter to CAL , tap S UB to turn on the S UB icon, then exit the menu. Set CONFIG:KXV3 back to NO R . Wattmeter Transmitter Gain If desired, power readings shown during TU N E can be adjusted to match an external wattmeter. T o account for all K3 circuitry involved, this must be done at 5.0 W, at 50 W (K3/100 only), and at 1.00 mW if the KXV3 option is installed. This procedure is normally done at assembly time or by the factory. It compensates for per-band transmit gain variation, and must be done on e ve ry band. (The resulting calibration data can be viewed using CONFIG:TXGN, but this is not necessary.) Low-Power (5 W) Wattmeter Calibration Low-Power (5 W) TX Gain Calibration Switch to 20 meters. Switch to 160 meters. Put the ATU into bypass mode (hold ATU ). Put the ATU into bypass mode (hold ATU ). Connect a 50-W capable dummy load (5 W for K3/10) and an accurate wattmeter to ANT1. Connect a dummy load to ANT 1. Switch to ANT 1 by tapping AN T . Switch to ANT 1 by tapping AN T . Set power to exactly 5.0 watts. (Make sure CONFIG:PWR SET is set to NO R so power on all bands can be set one time using the PWR control.) Set power to exactly 5.0 watts. Hold C O NF I G and locate the CONFIG:WMTR LP menu entry. Hold TU N E ; VFO B should show about 5 W Hold TU N E ; adjust menu parameter for a reading of 5.0 W on the external wattmeter. Then tap X MI T to exit T UNE. T ap X MI T to exit TUNE. Re peat this proce dure on 80-6 me ters. T ap ME NU to exit the menu. 48 High Power (50 W) TX Gain Calibration Method 1 (Frequency Counter): This applies to the K3/100 only. Use the same procedure as shown for 5 watts, but set power to 50 W, and use a 50-W dummy load. The TU N E power output indication should be about 50 watts. Calibrate TX gain at 50 W on ALL bands. Locate the CONFIG:REF CAL menu entry. (If the menu entry name is REF xxC, tap 1 to change it to REF CAL). Connect a frequency counter with +/-1 Hz or better accuracy to J1 on the reference oscillator module. Measure the exact frequency in Hz. Milliwatt TX Gain Calibration (KXV3) This applies only if you have the KXV3 option. Using VFO A, set the REF CAL parameter to match this frequency. Then exit the menu. Switch to 160 m. Method 2 (Zero-Beating): Set CONFIG:KXV3 TE S T. T his forces all b a nd st ou s et heKXV3’ st r a ns ve r t e rou t pu t jack, and output to be limited to 0-1.5 mW. Select CW mode. Set W ID TH to about 2.8 kHz. (A wide filter passband is necessary since you may need to move the REF CAL parameter a significant amount.) Set power to exactly 1.00 milliwatts (0 dBm). Hold TU N E ; output power should be about 1 mW. T hen tap X MI T to exit TUNE. T une the K3 to a strong broadcast station or a known-accurate reference signal. Use the highest-fre quency source you can (e.g. WWV at 10, 15 or 20 MHz). Se t the VFO to the spe cifie d fre quency of the signal. Re peat the above proce dure on 80-6 m. Set CONFIG:KXV3 back to NO R . T ap ME NU to exit the menu. Using MO N , set the sidetone monitor level to roughly match the volume level of the received broadcast or reference signal. Reference Oscillator Locate CONFIG:REF CAL. (If the menu entry name is REF xxC, tap 1 to select REF CAL). TheK3’ sr e f e r e nc eos c i l l a t oris a T CXO, or temperature-compensated crystal oscillator. It is normally calibrated at assembly time or by the factory. There are two types: 5 ppm and 1 ppm. T ap S P O T to enable the sidetone. Adjust the REF CAL frequency until the sidetone is zero-beated with the signal. As you a ppr oa c ht hec o r r e c tf r e q u e nc y ,y ou ’ l lh e a ra n u nd u l a t i ng“b e a tno t e ”b e t we e nt hes i g na l s . The slower the beat note, the closer they are. Before attempting re fe rence calibration, allow the transceive r to warm up at room tempe rature for about 15 minutes (cove r on). The T CXO can be calibrated using an accurate frequency counter (Me thod 1), or by zero-beating the sidetone against a reference signal (Method 2). Cancel S P O T and exit the menu. The accuracy of the 1 ppm T CXO can be improved to better than 0.5 ppm by entering supplied calibration data (Method 3). Be sure to keep the data sheet that was supplied with the oscillator. Locate the CONFIG:REF CAL menu entry. T ap 1 to change the name to REF xxC. Method 3 (1 ppm TCXO Option): Locate the calibration data sheet, which shows frequency vs. temperature over a wide range. Me thod 3 could also be used with the 5 ppm oscillator. However, improvement in accuracy is not specified in this case, and the user must first determine the T CXO’ sf r e q u e nc ya tt woo rmor e temperature points using Method 1 or 2. For each data point, tap 2 (down) or 3 (up) to select the temperature, then use VFO A to set the specified oscillator frequency in Hz. T ap ME NU to exit the menu. 49 T ap AG C to select slow AGC ( AG C-S ). Front Panel Temperature Sensor Bypass the ATU, if installed, by holding ATU . T urn the K3 OFF. Allow about 15 minutes for the radio to cool to room temperature. Set RF GAIN to maximum (fully clockwise). (Note: I fyou ’ v ea s s i g ne dt heRF gain control for the present receiver to squelch, its RF gain will default to maximum unless you ’ r ec o n t r ol l i ngRFg a i nf r omar e mo t e control computer application.) T urn the K3 ON. Locate the CONFIG:FP TEMP menu entry. Adjust the parameter to match the reading of a room thermometer. Note: Deg. C = (deg. Fahrenheit - 32) * 0.555. Normalize the DSP filtering (hold N OR M ; pg. 14). Front panel compartment temperature can be monitored continuously. T ap DI S P , then use VFO B to select the FP xx C alternate display. Connect the signal generator to ANT1 and set it for 50 microvolts RF output. T une to the frequency of the signal generator (tune for peak audio response). You can also use auto-spot (pg. 30) to accurately match the pitch of the signal, ensuring that it is centered in the passband. PA Temperature Sensor T urn the K3 OFF. Allow about 15 minutes for the PA heatsink to coolto room temperature. Do not turn the K3 ON during this pe riod. The S-meter has both relative and absolute modes. Refer to the CONFIG:SMTR MD menu entry description if you wish to switch from relative to absolute. (Relative mode is easier to calibrate and is the factory default.) T urn the K3 ON. Locate the CONFIG:PA TEMP menu entry. Adjust the parameter to match the reading of a room thermometer. Note: Deg. C = (deg. Fahrenheit - 32) * 0.555. Locate the CONFIG:SMTR PK menu entry; set it to O FF . Locate the SMTR SC menu entry (S-meter scale). Use the VFO A knob to set it to the default value (14). PA heat sink temperature can be monitored continuously. T ap DI S P , then use VFO B to select the P A x x C alternate display. Adjust SMTR OF (S-meter offset) for an S-9 reading. S-Meter Switch the signal generator to 1-µV output; the S-meter should now indicate about S-2 to S-3. If not, change SMTR SC by 1 unit (try 15 first, then 13, then 16, then 12). After each SMTR SC change, re-adjust the SMTR OF setting for an S-9 indication. S-meter calibration is normally adequate using the default settings for both the main and sub receivers. Calibrating the S-meter requires a 50-ohm, 50microvolt signal (an accurate signal generator such as an Elecraft XG1 or XG2 is recommended). When you have completed this procedure, disconnecting the signal generator should now show NO bars on the S-meter. I fy ou ’ r ec a l i b r a t i ngt hes u br e c e i ve r ’ sS -meter: (1) tap SU B to turn the sub receiver on; (2) hold B S E T to gain access to sub receiver controls; (3) tap AN T until you see M AI N flashed on VFO B; this sets up the sub receiver to share the main RF path (pg. 37). Switch to a band applicable to your signal generator and select CW mode. Se t transmit powe r to 0.0 W using PW R . T urn preamp on ( P R E ), attenuator off ( AT T ). 50 Menu Functions There are two groups of menu functions: MAIN and CO NFIG. Tap ME NU to access the MAIN menu; hold C O NF I G to access the CONFIG menu. You can also hold C ON FI G to switch from one menu to the other. Menu e n t r i e st ha ty ou ’ dl i keq u i c ka c c e s st oc a nb ea s s i g ne dto programmable function switches (pg. 21). T apping DI S P while viewing the menu shows information about the present menu entry in the VFO B display area. For most entries, the default parameter value is shown in parentheses at the start of the help text. Long help text strings can be interrupted by tapping any switch. MAI N Menu Entry Default Description AFX MD ALARM LCD ADJ Delay 5 OFF 8 LCD BRT LED BRT 6 4 MIC SEL FP, low range, bias off Audio Effects. Selections: DE LAY 1 -5 (quasi-stereo); BI N (L/R phase shift) Set alarm/Auto-Power-On time. Tap 1 to turn alarm on/off; tap 2 / 3 to set HH /MM . LCD viewing angle and contrast. Use higher settings if the radio is used at or above eye level. If adjusted incorrectly, bar graphs will be too light or heavy during keying. LCD backlight brightness. Use DAY in bright sunlight, 2 to 8 for indoor lighting. LED brightness (relative to LCD backlight brightness). Exception: if LCD BRT is set to DAY , LEDs are set to their maximum brightness. Mic/line transmit audio source, mic gain range, and mic bias. Source selections: FP (front panel 8-pin MIC jack), RP (rear panel 3.5 mm MIC jack), and LI NE I N (rear-panel LINE IN jack). T ap 1 to toggle between .Low and .H igh mic gain range for the selected mic. T ap 2 to turn mic BI AS on/off (turn on for electret mics). MIC+LIN OFF MSG RPT 6 RPT OFS 600 RX EQ +0 dB, each band T X EQ T X*EQ +0 dB, each band 0 0 VOX GN ANTIVOX If set to O N , and MIC SEL is set for FP or RP , the present mic OR line input can be used for transmit audio. NOT E: Setting MIC SEL to LI NE overrides the MIC+LIN menu entry (its parameter becomes "N-A"). When MIC+LIN is in effect, rotating the MIC control shows MIC gain. The op has to set MIC SEL to LI NE temporarily to adjust LINE IN gain. Message repeat interval in seconds (0 to 25 5 ). To repeat a message, hold M1 –M4 rather than tap. A 6 - 10 sec. interval is about right for casual CQing. Shorter intervals may be needed during contests, and longer for periodic CW beacons. Sets the transmit offset (in kHz) for repeater operation, from 0 to 50 00 kHz. Store per-band and per-memory. Use AL T to select a +/- offset or simplex operation. Receiver audio graphic equalizer. VFO A is used as an 8-band bar graph, where each character shows the boost or cut (-1 6 dB to + 1 6 dB in 1 dB increments) for a given AF band. T he 8 bands are 0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 2.4 and 3.2 kHz. Tap 1 - 8 to select an AF band. VFO A selects boost/cut. T ap C LR to reset all bands to +0 dB. Transmit audio graphic equalizer (voice modes only). Functions the same as RX EQ, above, and can be adjusted while in transmit mode. TX*EQ indicates TX ESSB in effect, which has its own set of transmit equalization settings. Adjusts the sensitivity of the VOX to match your mic and voice. Adjusts immunity of the VOX circuit to false triggering as a result of audio from the speaker or 'leaked' from headphones. 51 CONFIG Menu Tech Mode Entries Menu entries that include [T] are tech mode entries. These are only visible if CONFIG:TECH MD is set to O N . The ya r eno r ma l l yl e f ta tt he i rd e f a u l t s . En t r i e sf u r t he rd e s c r i b e da s“Advance d”o r“Troubleshooting”s hou l d be changed with caution. The default values are strongly recommended for these functions; tap D I S P to see the default value, which appears in parentheses at the start of the help text. Sub Receiver Settings Menu entries marked SUB have two settings: one for the main receiver, and one for the sub receiver. If a sub receiver is installed, the menu entries will change to identify which receiver is being set up by showing RF (main receiver) or S UB (sub receiver) at the left end of the parameter display. Also, in the S UB case, the SUB icon will flash. Prior to adjusting sub receiver menu parameters, you should turn the sub receiver on by tapping S UB . This is e s pe c i a l l yi mp or t a nti fyou ’ r ea d j u s t i ngc r ys t a l f i l t e rs e t t i ng s ,b e c a u s ei twi l la l l owyout ohe a r t hec ha ng e sa s filters are selected and modified. You should also turn SUB AF gain up and main AF gain down. Even if the sub receiver is turned on, when you first enter the menu, RF will be in effect, and the be turned off. T ap S UB to switch to the sub receiver parameter as required. Entry 2 T ONE [T] Default OFF AF GAIN AFV T IM [T] HI 1000 AGC HLD [T] 0 AGC PLS [ T] AGC SLP [T] NOR 12 AGC T HR [T] AGC-F [ T] AGC- S [ T] AUT OINF [T] 5 120 20 NOR BAT MIN 11.0 SUB icon will Description (Troubleshooting.) Enables built-in 2-tone generator for SSB transmit tests. The internal 2-tone generator only works if LSB or USB mode is selected. After setting 2-tone ON, exit the menu and tap XMIT . You can use MI C to to adjust the amplitude of one of the tone s ; t heo t he r ’ sa mpl i t u d ei sf i xe d . Sets AF gain range. Available selections are HI or LO . (Advance d.) Integration time for AFV and dBV displays in ms. See VFO B alternate display information (pg. 36). (Advance d.) AGC“ hol d ”t i mef orvoi c emod e s .S pe c i f i e st henu mb e rof milliseconds that the SLOW AGC value is held after the signal drops below the level that set the AGC. This is often helpful for SSB voice operation. (Advance d.) NO R enables AGC noise pulse rejection. (Advance d.) Hi g he rva l u e sr e s u l ti n‘ f l a t t e r ’AGC( ma ki ngs i g na l sa ta l l amplitudes closer in AF output level). (Advance d.) Sets AGC onset point; a higher number moves the onset up. (Advance d.) Sets fast AGC decay rate; a higher number means faster decay. (Advance d.) Sets slow AGC decay rate; a higher number means faster decay. (Advance d.) If set to AUTO 1 , the K3 will send band data on its RS232 port for u s ewi t hd e vi c e ss u c ha st heS t e pp I R™ a nt e n nao ne ve r yb a ndc ha ng e . ( No t e : This setting may not be compatible with PC software applications that use the “AI ”r e mo t ec o n t r ol command.) Low-battery warning threshold; 1 1 .0 recommended. If the voltage drops below this level, the operator will be alerted with a BAT LO W message. The menu parameter flashes if this occurs within the menu, so the level can be easily tested. 52 CW IAMB A CW PADL CW WGHT TIP=DOT 1.15 Iambic keying mode ( A or B ). Both modes produce self-completing dots and d a s he s .Mod eBi smor ee f f i c i e n tf orope r a t o r swhou s e“s q u e e z e ”ke yi ng (pressing both paddles at once), because an extra dot or dash is inserted on squeeze release. Mode A lacks this feature, which may be more appropriate for t h os ewhoonl ypr e s so nepa d d l ea tat i me( o f t e nc a l l e d“s l a p”ke yi ng ) . S pe c i f i e swhe t he rl e f tke y e rpa d d l e( “t i p”c on t a c to nt h epl u g )i sDO T or DAS H . CW keying weight. Adjusts element/space timing ratio for the internal keyer. Additional functions of this menu entry: T ap 1 to select S S B + CW . (allow CW keying in SSB modes) or S S B -CW (no CW keying in SSB modes; default). When sending CW in SSB modes, the other station will hear your signal at a pitch equal to your sidetone pitch selection. T ap 2 t os pe c i f yh owt he‘ @’c ha r a c t e rs hou l db e ha vewhe ne mb e d d e di nr e mo t e control KY( “k e y” )pa c ke t s .S e l e c t‘ @’= STOPt oa l l owt he‘ @’c ha r a c t e rt o terminate KY-packet transmission (default); select ‘ @’= ‘ AC’t oha ve‘ @’ translated into its Morse equivalent (.--.-. ) ,whi c hi st he‘ @’c h a r a c t e r . T ap 3 to select O LD or NE W Q S K (default). NE W Q S K reduces keying artifacts in the presence of QRN or QRM. O LD mutes/unmutes slightly faster. DATE DATE MD DDS FRQ [ T] DIGOUT 1 EXT ALC [T] FLx BW SUB FLx FRQ SUB FLx GN SUB FLx ON SUB Real-time-clock date, shown as in the format selected by CONFIG:DATE MD (MM.DD.YY or DD.MM.YY). T ap 1 / 2 / 3 to select month / day / year. US Select US (MM.DD.YY) or E U (DD.MM.YY) date formats. {DDS (Troubleshooting.) Controls DDS tuning directly to check DDS XFIL range for freq} synthesizer troubleshooting purposes. Rotate VFO A CCW and CW to find limits where L (lock) changes to U (unlock). Correct DDS frequency is restored after exiting the menu and rotating either VFO. OFF DIGOUT 1 is a general-purpose open-drain output signal on the ACC connector (pin 11). O FF = floating; O N = pull the line to ground. DIGOUT 1 is per-band, and also per-antenna if the KAT 3 AT U is installed. It can be used to turn an Elecraft PR6 preamp on when you switch to 6 meters, control a remote antenna switch, etc. Max. load current ( O N ) is 15 mA; max. load voltage ( O FF ) is 25 VDC. OFF t-4.0 (Advance d) Set to O N only if using external ALC with a high-power amplifier. This ma yr e q u i r emod i f i c a t i o nst oy ou rK3 ’ sRFa ndKI O3mod u l e s(see pg. 27 for details). When set O N , t h eK3 ’ se x t e r na l ALCt h r e s hol d( -4 . 0 V by default) can be varied. 2.70 (FL1) Crystal filter FL1 - 5 bandwidth in kHz, where x=1 to 5 (FL1-FL5). T ap 1 - 5 to select a specific filter. Or tap XF IL ( 6 ) to select the next filter. Note : An alternative to the FLx menu entries is the Edit Crystal Filte rs function of our PC software application, K3 Utility. It shows all filter setups in a single window. N/A T ap 7 to turn IIR DSP filters on (I I R O N ) or off (I I R O FF , default) for the 100 and 50 Hz bandwidths. IIR filters have steeper skirts and slightly more ringing than the default FIR filters. 0.00 (FL1) Crystal filter FLx center freq as offset from nominal (8215.0 kHz). Use the offset va l u es pe c i f i e do nt hec r ys t a lf i l t e r ’ sl a b e lo rPCb oa r d ,i fa n y.F ore xa mpl e ,i fa n Elecraft 5-pole, 200-Hzf i l t e rwe r el a b e l e d“ -0 . 9 1” ,a d j u s tVFOAf or–0 .9 1 . 0 dB Crystal filter FLx loss compensation in dB. 0 dB recommended for wide filters; (FL1) 2 dB for 400 or 500 Hz filters, and 4 dB for 200 or 250 Hz filters. ON (FL1), per-mode Used to specify which filters are available during receive. Each filter must be set to O N or O FF in each mode. You can tap MO D E within the menu entry. 53 FLTX{md} FL1 (all modes) FM DEV FM MODE FP TEMP 5.0 ON N/A FSK POL FW REVS 1 N/A KAT3 Not Inst KBPF3 Not Inst Used to specify which crystal filter to use during T X. {md} = CW /S B /AM /FM . Choose filters with bandwidths as follows: SSB, 2.7 or 2.8 kHz (also applies to data); CW, 2.7 or 2.8 kHz; AM, 6 kHz; FM, 12 kHz or higher. T he FM filter, if present, must be installed in FL1. Note : I fy ou ’ r eu s i nga2 . 7-kHz 5-pole filter for SSB transmit, you can optionally fine-tune its FLx FRQ parameter to equalize LSB / USB transmit characteristics. Monitor your signal on a separate receiver, using headphones. (Advance d) FM deviation in kHz. If set to O FF , FM will be removed from the mode selections. Used to calibrate the front panel temperature sensor. It must be calibrated if you wish to use the REF xxC menu entry to calibrate the optional 1 PPM T CXO. You must convert °F to °C in order to enter the value. Deg. C = (deg. F–32) * 0.555. 0 = Inverted FSK transmit data polarity, 1 = Normal data polarity. Rotate VFO A to see firmware revisions: MCU ( uC ), main DSP ( d1 ), aux DSP ( d2 , if KRX3 is present), flash parameters ( FL), and KDVR3 controller ( dr ). KAT3 ATU mode; normally set to BY P or AUTO (you can alternate between these settings using the ATU switch). Modes L1 -L8 , C1 -C8 , and Ct are used to test KAT3 relays. Mode LCS E T allows manual adjustment of L/C/net settings. When in this mode, tapping ATU TU N E shows the L & C value; C is changed with VFO A, L is changed with VFO B, and AN T toggles between Ca and Ct . If KBPF3 option is installed: set to NO R , exit menu, and turn power off/on. SUB KDVR3 KIO3 KNB3 Not Inst NOR NOR If KDVR3 option is installed: set to NO R , exit menu, and turn power off/on. Determines function of BAND0-3 outputs on ACC connector. See pg. 19. (Troubleshooting) TheK3c a n ’ tb eu s e dwi t hou taKNB3; Not I ns t setting is for troubleshooting only. KPA3 Not Inst KRC2 -- KRX3 Not Inst KXV3 Not Inst Set to P A NO R if KPA3 100-W amp installed. Set to P AI O NO R if KPA3 is not installed, but the KPAIO3 transition PC board is. Other settings include P A BY P (disables KPA3 if installed), PA fan test settings ( P A FN1 - FN4 or P AI O FN1 -FN4 ), and P AIO BY P (if transition board is installed, but not the KPA3 module, this setting can be used to test the high power bypass relay). Con t r ol st heKRC2b a ndd e c od e r ’ sa c c e s s or you t pu ts e t t i ng s .S howsACC O FF or ACC1 -3 if a KRC2 is detected; - - if not. To ensure compatibility with both old and new KRC2 firmware, two different 6 meter band decodes are provided. Tap 1 to select BAND6 = B6 (addr=10) or BAND6 = B1 0 (addr=9). Refer to the KRC2 manual for further details. If KRX3 option (sub receiver) is installed, set the parameter to match your selected sub receiver AUX RF source: ANT= ATU ( t h eKAT3’ sn on -transmit antenna) or ANT= BNC (the AUX RF BNC jack on the rear panel). T urn power off, then back on. If KXV3 option is installed: set to NO R , exit menu, and turn power off/on. This option provides RX ANT IN/OUT jacks, low-level transverter I/O (XVT R IN/OUT ), and a buffered I.F. output. If KXV3 is set to TE S T, the K3 will use low power (0.10 to 1.50 mW) on all bands, including HF and transverter bands. RF input/output is via the XVT R IN/OUT jacks in this case. Used for troubleshooting. Note:T o access the T EST setting, KXV3 must first be set to NO R , then K3 power turned off/on. Changing the parameter turns on all LCD segments. SUB LCD T ST OFF 54 LIN OUT NOR 010 Sets the LINE OUT level. LINE OUT connections go to PC soundcard inputs. Settings above 10 may result in overdrive of the soundcard or saturation of the KI O3’ si s ol a t i ont r a ns f or me r s ; mo ni t ors i g na l su s i ngt hePCt oa voi dt hi s . Note : Normally, LIN OUT sets a fixed-level receive-only output for main/sub (L/R), compatible with digital modes. T apping 1 switches LIN OUT to =PHO NES, where the line outputs match headphone audio, audio level is controlled by AF/SUB gain controls, and both RX and T X audio are available. MIC BT N OFF NB SAVE NO PA TEMP N/A PTT-KEY OFF-OFF PTT RLS 20 PWR SET NOR REF CAL or REF xxC [T] 49380000 Hz Set to O N i fy ou ’ r eu s i ngami ct ha tha sUP/ DOWNb u t t o nsc o mp a t i b l ewi t ht he K3’ sf r o nt -panel mic jack. Not applicable to the Elecraft MH2 or MD2 microphones. Mic FUNCT ION button not presently supported. Tapping UP or DOWN once will move the VFO up or down one step (based on the current tuning rate); holding UP or DOWN will move up or down continuously. If you see the frequency moving up or down continuously, your mic is not compatible, and MIC BTN must be set to O FF . Set to YE S to save noise blanker on/off state per-band. Noise blanker levels, both DSP and I.F., are always saved per-band regardless of this setting. If a KPA3 (100-W PA module) is installed, shows KPA3 heatsink temperature and allows it to be adjusted. See calibration procedure on pg. 50. I fy ou ’ r eop e r a t i nga thi g hpowe rf r o m ab a t t e r y,a ndvol t a g ei sd r op pi nge n ou g h to cause an erroneous HI TE M P indication, tap 1 in this menu entry to select R O NLY (receive only) temperature sensing, rather than the default ( T AND R ). (Advance d) Allows selection of RTS or DTR RS232 lines to activate PTT or key the K3. See pg. 18. Note: If a computer or other device asserts RT S or DT R while you ’ r ei nt hi sme nue n t r y, t heK3wi l ls wi t c ht oTE S T mode (zero power output) as a precaution. The TX icon will flash as a reminder. To avoid this, make sure software applications have flow control and/or keying options turned OFF whi l eyou ’ r ec ha ng i ngt hePTT-KEY selection. (Advance d) Provides a delay between release of PTT and dropping of the transmit carrier; intended for use with fast turn-around data protocols such as AMT OR and PacT OR. (No effect in CW, FSK D, or PSK D modes.) A value of 20 or higher may be needed to ensure accurate data transmission with these protocols. If sync data or –S is in effect (see SYNC DT), a lower value, typically 10 to 12, is optimal. Also see AMT OR/PacT OR (pg. 32). If set to NO R , the power level on each band follows the present setting of the PWR control. If set to P E R-BAND , the power level is saved on each band. This is especially useful with external amplifiers that have varying per-band gain, as well as with transverters. Us e dt opr e c i s e l yc a l i b r a t et heK3’ sr e f e r e nc eos c i l l a t or ,. VFO A is used to set the reference oscillator frequency in Hz. T here should never be a need to set REF CAL outside a range of 49377.000 to 49383.000. Typically it will end up much closer to 49380.000. T ap 1 to alternate between REF CAL (Me thod 1 or 2) and REF xxC (Method 3). Tap 2 or 3 to move the data entry point up or down. See calibration procedure, pg. 49. RFI DET RS232 NOR 4800 b NO R e na b l e sd e t e c t i o nofhi g hRFIa tt heK3’ sa n t e n nai nr e c e i vemod e( s e eHI RFI warning, Trouble shooting). Set to O FF to disable the warning. RS232 communications rate in bits per second (bps). During firmware download (via the K3FW PC program), the baud rate is set automatically to 38400 baud, but it is then restored to the value selected in this menu entry. 55 SER NUM SMT R OF N/A 024 You rK3 ’ ss e r i a lnu mb e r ,e . g . 0 20 0 0 . Cannot be changed. S-Meter offset; see calibration procedure (pg. 50). 014 S-Meter scale; S-9 = 50 uV, S=3 = 1 uV with Preamp = ON, and AGC ON. See calibration procedure (pg. 50). OFF NOR Set to O N for peak-reading S-meter. (Advance d) S-meter mode: When set to NO R , preamp/attenuator settings will affect the S-meter. (The default values of SMTR OF and SMTR SC apply to NO R .) If set to ABS , the S-meter reading will stay fairly constant with different preamp/attenuator settings, but SMTR OF and SMTR SC must be carefully realigned for both main and sub receivers. If set to Y ES , S PL I T , R I T , and X I T on/off states are saved per-band. Set to 2 if using two external speakers. T his enables binaural effects in conjunction with the AF X switch, as well as stereo dual-receive if the sub receiver is installed. For further details, see pg. 35. Y E S = Speaker is ON, even when headphones are plugged into PHONES jack. See detailed discussion on pg. 20. SUB SMT R SC SUB SMT R PK SMT R MD SPLT SV SPKRS NO 1 SPKR+PH NO SQ MAIN 0 SQ SUB 0 SW T EST [T] OFF SW T ONE SYNC DT OFF Function TECH MD TIME OFF N/A TTY LT R Function T UNPWR NOR T X ALC [T] ON This menu entry normally sets the main receiver squelch value ( 0 - 2 9 ). If VFO A is rotated fully clockwise, the parameter changes to =S UB P O T. Squelch for both main and sub receivers will then be controlled by the SUB RF/SQL knob, and both main and sub RF gain will be controlled by the MAIN RF/SQL knob. This menu entry normally sets the sub receiver squelch value (0 -2 9 ). But if SQ MAIN is set to = S UB P O T, then SQ SUB will also change to =S UB P O T. Squelch for the sub receiver will then be controlled by the SUB RF/SQL knob, and both main and sub RF gain will be controlled by the MAIN RF/SQL knob. Changing the parameter displays S CN ADC . Hold any switch to see scan row and switch ADC reading. Used for troubleshooting only. Sets up audible control annunciation (tones or audio Morse code). (Advance d) When SYNC DT (sync data) is activated in either SSB or DAT A modes, T /R switching times are reduced to optimize for modes such as AMT OR a ndPa c TOR. The“ -S ”i c ont u r nson .Do not use SYNC DT for normal SSB/DATA communications. Cannot be changed within the menu; assign to a programmable function switch. Also see CONFIG:PTT RLS (PTT release delay). Set to O N to enable Tech Mode menu entries (those marked with [T] in this list). Real-time-clock view/set. Tap 1 / 2 / 3 to set HH / MM / SS. To see the time and other displays during normal operation, tap D I S P (see pg. 36). Performs an RTT Y FIGS to LT RS shift when the text decoder is enabled in RTT Y modes. Cannot be changed within the menu itself; must be assigned to a programmable function switch. If set to NO R , TU N E power level follows the POWER knob. Otherwise, establishes a fixed power level for TU N E , overriding the present POWER knob setting. Note1: TUN PWR does not pertain to ATU TU N E , which always uses 5 or 10 W and is internally controlled. Note2: see CONFIG:PWR SET for perband power control. (Troubleshooting.) Set to O FF to disable both internal and external transmit ALC (overrides EXT ALC setting). Used when adjusting band-pass filters in TX mode, or for troubleshooting. Set parameter to O N during normal operation. 56 T X DLY NOR 008 T X ESSB [T] OFF T X INH [T] OFF T XGN{pwr} [T] 00 T XG VCE [T] 0.0 dB VCO MD [T] NOR xx (varies w/ VFO freq) SUB VFO B->A Function VFO CRS Per-mode VFO CT S 200 VFO FST 50 Hz VFO IND NO VFO OFS OFF WMT R {pwr} 100 XVx ON NO (Advance d) For use with external amplifiers that have slow relays; sets the time from KEY OUT jack (active low) to first RF in 1-ms steps. To minimize loss of QSK speed, use the shortest delay that works with your amp. Most will work with the default (minimum) setting of 8 ms. (Advance d) Extended SSB transmit bandwidth ( 3 .0 , 3 . 5 , 4 .0 kHz, etc.) or O FF . See pg. 36. (Advance d) If set to LO = I NH or HI = I NH , the operator can supply an external logic signal to inhibit transmit. When inhibited, the T X LCD icon flashes. (Troubleshooting.) Shows transmit gain constant for the present band and power mode, where {pwr} = LP (0-12W), HP (13-120W), or M W (0.1-1.5 mW). The gain constant is updated whenever the TU N E function is activated on a given band at one of three specific power levels: 5.0 W, 50 W, and 1.00 milliwatt. See transmit gain calibration procedure, pg. 48. On 80 m with high power (> 13 W) selected, you should see P R80 as part of the TXGN parameter display. T his indicates that the preamp is turned on during QRO transmit on 80 m, and is the default. It should only be turned off for troubleshooting purposes; this is done by tapping P R E . If TX ALC (above) is O FF , the TXGN parameter can be set manually, at very fine resolution. T his should only be done for troubleshooting purposes. (Advance d) Balances voice transmit peak power in relation to CW peak power in TU N E mode. T ypically set between 0.0 to 1.5 dB. (Troubleshooting.) VCO L-C range view/change/calibrate. Once the VCO is calibrated (pg. 48), the parameter which appears here will include NO R at all times. You can change the setting to troubleshoot VCO L-C ranges. When finished, set the parameter back to NO R 1 27 , then exit the menu and change bands to restore the original setting. Note: In this menu entry only, the main/sub receiver prefix ( RF or S UB ) is not displayed at all times. However, the SUB icon will flash as usual when S UB is tapped. Copi e sVFOB’ sf r e q u e nc yt oVFOA.Ca nn o tb eu s e dwi t hi nt heme nui t s e l f ; must be assigned to a programmable function switch. Per-mode coarse tuning rate (hold C O AR S E and tune VFO A or B). Also applies to the RIT /XIT tuning knob if CONFIG:VFO OFS is set to O N , and both RIT and XIT are turned OFF. VFO counts per turn ( 10 0 , 20 0 , or 4 00 ). Smaller values result in easier finetuning of VFO; larger values result in faster QSY. Specifies the faster of the two VFO tuning rates ( R ATE ). The faster rate is 5 0 Hz per step by default, but can be set to 20 Hz if desired. In this case, VFO CTS = 4 0 0 is recommended to ensure adequate fast-QSY speed. If set to Y ES , VFO B can be set to a different band than VFO A, which allows listening to two bands at once (main/sub). See pg. 37 for independent main/sub band considerations. Note : This menu entry is not available unless the sub receiver is installed (see CONFIG:KRX3). If O N , the RIT /XIT offset control can be used to tune VFO A in large steps when both RI T and X I T are turned off. T he step sizes vary with mode (see VFO CRS), and are the same as the C O AR S E VFO tuning rates. Wattmeter calibration parameter. {pwr} is the power mode: LP (0-12W), HP (13-120W), or M W (0.1-1.5 mW). See calibration procedure (pg. 48). Set to YE S to turn on transverter band x ( 1 - 9 ); tap 1 –9 to select xvtr band. 57 XVx RF 144 XVx IF 28 XVx PWR L .01 XVx OFS 0.00 XVx ADR T RNx Lower edge for xvtr band x ( 1 - 9 ); 0 -9 99 MHz. For frequencies above 999 MHz, use only the MHz digits (e.g., for 1296 MHz, use 29 6 MHz). Tap 1 –9 to select xvtr band. Specify K3 band to use as the I.F. for transverter band x ( 1 - 9 ) . Tap 1 –9 to select xvtr band. I.F. band selections include 7 , 1 4 , 21 , 2 8 , and 50 MHz. Sets upper limit on power level for XVT R band x. T ap 1 –9 to select xvtr band. H x . x ( H igh power level) specifies a value in watts, and use of the main antenna jack(s). T his should be used with caution, as you could damage a transverter left connected to these antenna jacks accidentally. L x . xx ( Low power level) species a value in milliwatts, which requires the KXV3 option. (If CONFIG:PWR SET is set to P E R-BAND , the K3 will also save the last-used power setting on each band. This is especially useful for transverter bands.) Offset (–9 .9 9 to +9 .9 9 kHz) for transverter band x (1 -9 ). Tap 1 –9 to select xvtr band. Compensates for oscillator/multiplier chain errors. Physical decode address ( 1 to 9 ) assigned to transverter band x (1 -9 ). Tap 1 –9 to select xvtr band. Applies to attached Elecraft XV-series transverters and El e c r a f tKRC2 ,whi c ha r ec on t r ol l e dvi at heEl e c r a f t “AUXBUS ”l i ne .Note : Decode address range may vary depending on the type of attached device. For band switching of other transverter types, see pg. 19 and CONFIG:KIO3. 58 Troubleshooting The most common symptoms and their causes are listed below, in three categories (general, transmit, and receive). Most problems are related to firmware or control settings. Subse quent sections cover Paramete r Initializ tion (pg. 61) and Module Trouble shooting (pg. 62). If the problem persists, please contact Elecraft support (see pg. 10) or post a question on our email reflector. General Error me ssage appe ars on the LCD ( E RR P L1 , etc.): Refer to Module Troubleshooting (pg. 62). Can’ tt ur npowe ro f f :An external device or the KIO3 module may be pulling the POWER ON line low. Disconnect all external devic e sonea tat i me .I ft h a td oe s n’ tr e ve a l t h ep r ob l e m, t r yu npl u g g i ngt heKI O3’ s digital I/O board, then the KIO3 main board. Also see Module Troubleshooting (pg. 62). Ge neral problem with transmit and/or receive: Many problems can be caused by low power supply vol t a g eo rb yan oi s yo ri nt e r mi t t e n ts u ppl y.Che c kyou rpowe rs u ppl y’ son/ of fs wi t c h, vol t a g e , f u s e s( i f applicable), and DC cabling. The K3 provides both voltage and current monitoring (pg. 36). Also see Transmit and Receive troubleshooting sections, below. Ge neral problem with firmware behavior: (1) Check all relevant menu settings (see MAIN and CONFIG menu listings in the previous section). In addition to the information in the manual, each menu entry provides help text by tapping DI S P . (2) T ry loading the latest microcontroller and DSP firmware. Review t h er e l e a s en ot e sf o rc ha ng e st ha tma yb er e l a t e dt oy ou rs ymp t o ms .( 3)I ft hea b o ves u g g e s t i onsd o n’ the l p, you can try reinitializing the firmware (pg. 59). Be sure to save important parame ter se ttings first. NE W K3 UTI L S O F TWARE RE Q UI RE D me ssage appe ars on the LCD: This indicates that you must i ns t a l lane wve r s i onof t heK3’ sf i r mwa r eu pg r a d epr og r a m( K3Ut i lity) in order to load the latest K3 firmware. After installing the new version of K3 Utility, reload all new firmware (MCU, DSP, etc.). FP F LO AD P E NDI NG message appears on the VFO A and B displays: Use our K3 Utility software application to load the FPF d a t af i l ef r omaPC.Re f e rt oK3Ut i l i t y’ sh e l pi nf or ma t i onf o rd e t a i l s . N/ A message (Not Applicable): Thef u nc t i o ny ou ’ r et r yi ngt ou s ed oe sn o ta p pl yi nt hepr e s e n tmod eor context. VFO B is blank: You may have CW or DATA text decode display turned on ( TE X T D EC , pg. 30) with the TH R (threshold) control set too high for text decode to proceed. VFO AorBd i s pl aydo e s n’ tc ha ng ewhe nt hec o r r es p o nd i ngknobi sr otate d: You may have the affected VFO locked (pg. 14). Transmit TX LED is on all the time: This could indicate that PTT is being held on by external equipment. (Verify that CONFIG:PTT-KEY is set to O FF-O FF if not keying via the RS232 connector. Try disconnecting everything connected to the ACC and RS232 connectors.) In voice modes, this could be caused by having the VOX gain set too high. Disconnect the microphone, then set the VOX menu parameter lower. HI CUR or HI S WR warning (K3/100) : Check load Z and supply voltage. If voltage is low and/or load Z is under 50 ohms, current can go up for a given requested power level. Reduce power if necessary. HI TE M P warning (K3/100): When operating QRO from a battery, low voltage may cause an erroneous temperature reading (see CONFIG:PA TEMP for details). Otherwise, PA heatsink temperature has exceeded 84C (PA drops into bypass mode). Check fans, power supply voltage and current, and load 59 impedance. Allow heatsink to cool. Reduce power if necessary. Make sure the CONFIG:PA TEMP menu entry is calibrated (allow heatsink to cool to room temperature, then compare menu reading to actual). ALC O FF is displayed on VFO A during transmit : Set CONFIG:TX ALC to O N . ALC should only be turned off during band-pass filter alignment (do not adjust filte rs without consulting Ele craft support). Can’ tt r a ns mi ti nCW mod e:(1) Make sure the key or keyer paddle is plugged into the correct jack. (2) You must have VOX selected ( V OX icon on) in order to use hit-the-key CW. (3) You may be in S PL I T mode, with VFO B set for a voice or data mode. Tap A/ B or use B S E T to chec kVFOB’ smod e . Ke y clicks in QSK CW mode with an external amplifie r:. T his may be due to a slow amplifier relay (use CONFIG:TX DLY) or incorrect application of external ALC (see CONFIG:EXT ALC and pg. 27). Can’ tus et hemi ci nv o i c emod e s :You may be in S PL I T mode, with VFO B set for CW or data mode rather than a voice mode. T ap A/ B or use B S E T t oc he c kVFOB’ smod e . No powe r output: You may haver ou t e dRFt hr ou g ht heKXV3’ sXVTRI N/OUT jacks, either by switching to a transverter band, or by setting CONFIG:KXV3 to TE S T. Another possibility is that power has not been calibrated on the present band (pg. 48). Relay heard switching during keying: If this happens only above a certain power level, transmit signal leakage may be activating the carrier-operated-relay circuitry on either the KXV3 module (RF I/O) or the KRX3 (sub receiver). You must either improve isolation between transmit and receive antennas, or decrease output power. If a relay switches during keying even at very low power levels, it could be due to: (1) SPLIT operation with different bands and/or modes, or the receive VFO tuned outside any ham band; (2) VFO A is tuned close to a point where a relay switches between transmit and receive due to incorrect VCO calibration (re-run VCO CAL using the latest firmware, then tune across the affected band in receive mode to make sure no relay switching occurs). Receive HI RFI warning: A high-p owe rt r a ns mi t t e rma yb ec ou pl i ngi n t ot heK3’ sa n t e n nai nr e c e i vemod e . The warning occurs when the ANT1 or 2 input signal exceeds about 1 to 2 W. You can disable the warning (set CONFIG:RFI DET to O FF ), but eliminating the over-coupling will ensure no damage to K3 components. No re ceive d signal: Possibilities include (1) receiver being squelched (if RF/SQL controls are assigned to squelch via CONFIG:SQ MAIN or SQ SUB, rotate squelch controls fully counter-clockwise); (2) RF GAIN too low (set RF gain controls fully clockwise); (3) filter bandwidth too narrow (set WIDT H or tap X F I L , and also verify filter configuration settings); (4) switching to an open receive antenna on the KXV3 (RX ANT IN); (5) switching the KAT 3 to an open antenna jack; (6) CONFIG:REF CAL parameter not adjusted properly; (7) CONFIG:KXV3 may be set to T EST, which routes all RF through the XVT R IN/OUT jacks. Re ceive d signal le ve l too low: (1) T ry setting CONFIG:AF GAIN to HI ; (2) check headphone and speaker plugs and cables; (3) make sure that CONFIG:RX EQ settings are either flat or have not been set for a large amount of cut; (4) recheck all filter configuration settings, particularly the CONFIG:FLx BW, FLx GN, and FLx FRQ menu entries; (5) verify that CONFIG:REF CAL is properly adjusted; (6) make sure RF GAIN is set to maximum. 60 Parameter Initialization Menu parameters are stored in non-volatile memory (EEPROM and/or FLASH). It is possible, though rare, for parameters to become altered in such a way as to prevent the firmware from running correctly. If you suspect this, you can reinitialize parameters to defaults, then restore a previously-saved configuration (or re-do all configuration steps manually; no test equipment is required). If you have a computer available to do configuration save and restore, run the K3 Utility program, then use the Configuration function to save your present firmware configuration. I fy oud on ’ tha vea c c e s st oac ompu t e r , yous h ou l dwr i t ed ownyou rme nupa r a me t e rs e t t i ng s . Themos t important are CONFIG:FLx BW and CONFIG:FLx FRQ (for each installed filter <x >, also tap SU B to obtain sub receiver crystal filter settings, if applicable). You should also note the settings of option module enables ( a l lCONFI Gme nue n t r i e ss t a r t i ngwi t h‘ K’ ,e . g .CONFIG:KAT3) . If youd on ’ tr e c or dyou rc r ys t a l f i l t e ra ndop t i ons e t t i ng s ,youma yha v et or e movet heK3’ st opc o ve r( a nds u br e c e i ve r ,i fi ns t a l l e d )t o verify which options as well as crystals filters are installed, as well as the frequency offsets noted on the crystal filters (depends on filter type). Tu r nt heK3OFF( u s i ngt heK3’ sPOWERs wi t c h, n oty ou rp owe rs u ppl y ) . While holding in the SHIFT /LO knob (which is also labeled NO RM below), turn power ON by tapping the K3’ sPOWERs wi t c h. Le tg ooft heS HI F T/ LOkn oba f t e rabout 2 seconds. You should now see E E I NI T on the LCD. When EE I NI T completes after a few seconds, you may see E RR P L1 or other error messages due to reinitialization. Tap D I S P to clear each message. If you have a computer, restore all parameters using the Configuration function of the K3 Utility program. I fy oud on ’ tha veac o mpu t e r , ma nu a l l yr e -enter all menu parameters that you wrote down, above, then redo firmware configuration and calibration (starting on pg. 45). You can omit any steps pertaining to pa r a me t e r sy ou ’ vea l r e a d yr e s t o r e dma nu a l l y . See if the original problem has been resolved. 61 Module Troubleshooting The K3 is a highly modular transceiver. With the inf or ma t i onpr ovi d e dhe r e , you ’ l lb et r ou b l e s hoo t i ngt ot h e module level, not to the component level. In many cases, problems can be resolved by changing a menu setting, loading new firmware (pg. 44) or initializing parameters to factory defaults (see below). A full set of schematics can be found on our web site. Due to the use of fine -pitch ICs in the K3, most signal tracing must be done ve ry care fully using fine-tip probes. Please do not attempt this unless you have experience in troubleshooting surface-mount assemblies; otherwise, you could damage your K3. DO NO T ADJUST ANY TRIMMER CAPACITO RS O R PO TENTIO METERS unless you have access to appropriate lab test equipment and have consulted Elecraft support regarding the proper settings. All trimmers have been aligned at the factory, and if mis-adjusted could degrade performance. Error Messages (E RR x x x ) An error message may be displayed on VFO B at power-up or during normal operation. In most cases error messages are due to a problem with a single option module, and may be due to incorrect firmware configuration. If you see an error message on VFO B (E RR X X X): Write down the error message, as well as any associated error data shown on the VFO A display (e.g. E 00 0 05 ). T hen tap any switch to clear the error code. Multiple errors may occur; in this case, write down each of the messages and VFO A data, if any, before you clear them. See Error Msg table (next page ) for details on spe cific E RR messages and their associate d data values. Module Removal TURN O FF TH E POWER SUPPLY OR DISCONNECT TH E POWER SUPPLY CABLE be fore removing or installing modules. If you drop a metal tool inside the K3 with powe r still applie d, you can short a powe r supply or control line , re sulting in damage to the RF board or othe r modules. Module de-installation proce dure : To see if an option module is the cause of an error message, you must deinstall it as described below, or you may not be able to tell if removing the module had any effect: T urn off power Remove the module Set the associated CONFIG menu entry to NO T I NS T. (See CONFIG:KAT3, etc.) Note: If the affected module is on the KRX3 (sub receiver), you must tap S UB to display its configuration setting. Otherwise the setting shown applies to the main receiver. This applies to the KBPF3 and KNB3 modules, as well as crystal filters, all of which are duplicated on the RF and sub receiver boards. T urn power off and wait at least 5 seconds T urn power back on 62 Error Message List * = See module de -installation instructions on pre vious page . Error Msg ERR 12V Problem The circuit bre aker on the KPA3 module may be open. PA drops into bypass mode, fans switch to level 2, and PAtemp display mode is not available. ERR AT3 KAT3 not responding ERR BP1 ERR BP2 ERR BP3 No response from RF board BPF shift registers No response from KBPF3 option shift registers No response from sub RX BPF shift registers ERR BP4 ERR DS1 ERR DS2 ERR DS3 No response from sub RX KBPF3 option No main DSP SPI echo Main DSP SPI echo not inverted No AUX DSP SPI echo ERR DS4 AUX DSP SPI echo not inverted ERR DSE Missing echo from a DSP command ERR DSX Extended DSP command timeout ERR EE1 On-chip EEPROM read/write test failed ERR EE2 External EEPROM read/write test failed ERR FW2 General firmware problem 63 Troubleshooting ste ps Che c kf ors ho r tf r omPAmod u l e ’ s12V l i net og r ou nd .I f t he r e ’ snos hor t , t r y resetting the circuit breaker. If there is a short, remove the KPA3 module. Set CONFIG:KPA3 to P AIO NO R . While waiting for a replacement, you can use the K3 at reduced power. *De -install the KAT3 module (see above). If this eliminates the error message, the KAT3 may be defective. You can substitute a KANT 3 antenna input module temporarily, if available. *De -install option modules one at a time *De -install KBPF3 on RF board *De -install the KRX3 module, including the SUBIN and SUBOUT boards *De -install the KBPF3 module on KRX3 Reload DSP1 firmware Reload DSP1 firmware Reload DSP2 firmware. Note: CONFIG:KRX3 must be set to NO T I NS T unless the KRX3 option is installed, which includes the aux DSP module (DSP2) and 2nd synthesizer. Reload DSP2 firmware. Note: CONFIG:KRX3 must be set to NO T I NS T unless the KRX3 option is installed, which includes the aux DSP module and 2 nd synthesizer. Reload DSP1 firmware (and DSP2 firmware, if applicable). Reload DSP1 firmware (and DSP2 firmware, if applicable). MCU may be defective (front panel). T ry re-loading MCU firmware first;then try initializing parameters (pg. 61). EEPROM may be defective (front panel). However, this message may also appear if power is turned off/on too rapidly, or if the powe rs u ppl yv ol t a g e“b ou nc e s ”d u r i ng turn-on due to inadequate regulation. If the power supply is not at fault, try re-loading MCU firmware first;then try initializing parameters (pg. 61). Try re-loading MCU firmware first; then try initializing parameters (pg. 61). ERR IF1 ERR IF2 ERR IO1 ERR IO3 ERR KEY ERR PTT ERR LPF ERR PA1 ERR PL1/2 ERR SY1/2 ERR SY3/4 ERR SY5/6 ERR VCO ERR VC4 RF board IF GRO UP not responding (A6810 or KNB3-U2) *De -install the KNB3 module on RF board. Note: The K3 cannot be operated without a KNB3, because this module includes T -R switching circuitry. Do not attempt to bypass it using jumpers. Sub RX IF GROUP not responding (A6810 or *De -install the KNB3 module on KRX3 KNB3-U2) first (tap SUB when in the KNB3 menu entry). Note: The sub receiver can be operated without a KNB3 if a jumper is placed between pins 1 and 7 of J78 on the KRX3 module. MISO line stuck low (asserted) *De -install option modules one at a time. If no failing option module can be found, there may be a problem on the RF board. KIO3 not responding The KIO3 may be defective. Note : The K3 can be operated temporarily without the KI O3i ns t a l l e d .You ’ l ln e e dt ou s e headphones, and there will be no computer or AF I/O available on the rear panel. Attempt to key the transmitter or activate PTT Usually caused by an external device during power-on shorting KEY or PTT to ground; d i s c onne c ts u c hd e vi c e su n t i l t he y’ r e initialized properly. Also see CONFIG:PTT-KEY. If necessary, try removing the KIO3 module or its digital I/O daughter board. No response from LPF shift registers *De -install option modules one at a time. If no failing option module can be found, there may be a problem on the RF board. KPAIO3 module not responding *De -install the KPA3 module and set CONFIG:KPA3 to P AIO NO R . If this eliminates the error message, the problem is likely to be on the KPA3 module. If not, the problem may be on the KPAIO3 module; remove it as well, and set CONFIG:KPA3 to NO T I NS T. VPLL out of range on band change (to view Verify that the oscillator can on the actual PLL voltage, set CONFIG: TECH MD KREF3 is fully plugged in and is not in to O N , then tap D I S P and use VFO B to locate backwards. Make sure all internal cables the P LL1 and P LL2 voltage displays). are plugged in, specifically the cables between the KREF3 and KSYN3 modules (synthesizers). Try re-calibrating the General problem with PLL, VCO, or other applicable VCO (CONFIG:VCO MD) (tap circuitry on a synthesizer module. SUB within the menu entry if you saw ERRPL2, t oma kes u r eyou ’ r ec a l i b r a t i ng t h es u br e c e i ve r ’ ss yn t he s i z e r ) .I ft hi s nd d oe s n’ twor k , t r yr e mo vi ngt he2 VCO calibration errors. VFO A will show error synthesizer (for the sub receiver), and set data, e.g. E 0 0 03 9 ; report this value to CONFIG:KRX3 to NO T I NS T. If this Elecraft customer support. eliminates the error, the sub synth may be defective. You can also try swapping it with the main synth to see if it can be calibrated in this slot. 64 ERR REF Missing KREF3 module ERR T XF Invalid transmit crystal filter bandwidth ERR T XG Transmit gain constant out of range ERR XV3 KXV3 not responding 65 Verify that the oscillator can on the KREF3 is fully plugged in. Make sure all internal cables are plugged in between the KREF3a ndo t he rmod u l e s .I ft hi sd oe s n ’ t help, the problem may be on the KREF3 module or the RF board. Note: The K3 cannot be used without a KREF3 module. The crystal filter selected for TX (with CONFIG:FLTX) is either too narrow or too wide. You must specify a filter that is 2.7 or 2.8 kHz wide for CW/DAT A/SSB, 6 kHz for AM, and 13.0 kHz for FM). This usually indicates a problem with band-pass filter alignment or one of the low-pass filters. In either case it could affect one or two bands. Consult Elecraft support before attempting to realign bandpass filters; all settings are aligned at the factory. *De -install KXV3 module Theory Of Operation Please refer to the block diagram of the K3 shown at the end of this section. Schematics and additional details can be found on the Elecraft web site. RF BOARD The RF PCB (Printed Circuit Board) is the heart of the K3 transceiver, both physically and electrically. During assembly, it serves as an attachment point for other PCBs as well as chassis panels, acting as the glue that holds things together. During operation, the RF board provides signal routing to and from all modules. Over two-t hi r d soft heRFb oa r d ’ sc ompo ne n t sa r es u r f a c emou n td e vi c e s( S MDs ) ,l oc a t e do nt he bottom side of the board. These are pre-installed and tested at the factory. The use of SMDs minimizes stray coupling in RF circuits, reduces system cost, and allows the K3 to fit in a modest-size enclosure, compatible with home or field operation. The RF board is divided into several functional areas, which are described below. Low-Pass Filters (LPFs) The relay-switched low-pass filters, used during both transmit and receive, are located in the back-right corner of the RF board. These filters can easily handle 100 watts, and are common to both the K3/10 and K3/100. Some LPFs cover one band, while others cover two bands that are close in frequency. The input to the LPF section comes from the KPA3 100-W amplifier module, if installe d ;i ft he r e ’ snoKPA3, t hei npu tc o me sf r o m the 10-W amplifier (see below). T he output of the low-pass filters is routed through the forward/reflected power bridge, then on to either the antenna input module (KANT3), or the KAT3 automatic antenna tuner, which plugs in at far right. Low-Power Amplifier (LPA) and T/R Switching The large hole near the back-middle area of the RF PCB is where the 10-W low-power amplifier module plugs in. The LPA has three connectors that mate with the RF board, and its power transistors attach to the rear bottom cover, which serves as a heat sink. This construction method allows the 10-W module to be tested separately during produc t i o n.Al s oi nt hi sa r e ai st heT/ R( t r a ns mi t / r e c e i ve )s wi t c h, b u ty ou ’ l lne e dt ot u r nt heRFb oa r d u ps i d ed ownt os e emos toft hec ompo ne n t s . TheK3’ sT/ Rs wi t c hu s e shi g h-power, high-isolation PIN diodes rather than relays, resulting in no switching noise during keying. Low Power Amplifier (LPA) The low-power amplifier module is capable of up to 12 W power output, and in the case of the K3/10, is the final amplifier stage. In the K3/100, it provides drive to the KPA3 module. The LPA has three gain stages, the last two of which use high-power MOSFET transistors to allow coverage up through 6 meters. At the input to the first gain stage is a 5-dB attenuator, which is switched in under firmware control at certain power levels to optimize transmit gain distribution. Band-Pass Filters (BPFs) At back-left is the bank of ham-band BPFs. These filters are just wide enough to cover each ham band, so they provide good rejection of IMD products during both transmit and receive. Hi-Q components, including large toroids, ensure low loss and high signal-handling capability. General coverage receive capability can be added to the K3 with the KBPF3 option, which includes another 8 band-pass filters that cover all of the areas from 0.5 to 28 MHz that are not covered by the filters on the RF board. The KBPF3 module mounts directly above the 66 main BPF array, and due to its very short connections, has no effect on the performance of the main BPFs during ham-band operation. First I.F. Stages The front-left portion of the RF board is dedicated to the receive/transmit first I.F. (intermediate frequency) circuitry, most of which is on the bottom of the board. The first I.F. is 8.215 MHz, which is low enough to permit the construction of high-quality, narrow-band crystal filters, but high enough to offer good image r e j e c t i o n. TheI . F.s t a g e sa r er e ve r s i b l e ;i . e . , t he y ’ r eu s e din one direction in receive mode, and the other during transmit. In receive mode, the filtered signal from the BPFs is first routed through a relay-switched attenuator, then to a low-noise diode-switched preamp, high-level switching mixer, and post-mixer amp. The signal next encounters the noise blanker (KNB3), then the crystal filters (see below), Crystal Filters and 2nd I.F. In either receive or transmit mode, the I.F. signal is routed to one of up to five plug-in, 8.215-MHz crystal filters (FL1-FL5). These can be fixed-bandwidth, or in the case of FL3-FL5, optionally variable-bandwidth. Following the crystal filters is the receive I.F. and second mixer, which mixes the 8.215 MHz down to an I.F. of 15 kHz for use by the digital signal processor module (DSP). Excellent 2nd-I.F. image rejection is obtained by cascading an a d d i t i ona lc r ys t a l f i l t e rj u s ta he a do ft hes e c o ndmi x e r . The r e ’ sa l s oa1 5kHzt r a ns mi tI . F. , whi c hi smixed up to 8.215 MHz on the KREF 3 module, which plugs in near the front-middle of the RF board. Support Circuitry Several other modules plug into the RF board. The KPAIO3, located at the back edge of the RF board, is a vertically mounted board used as an interface between the RF board and the KPA3 100-W amp module. It provides current sensing, bypass relay, and other functions for the KPA3, and eliminates the need for any interconnecting cables. The KIO3 and KXV3, in the back left corner, provide RF, audio, and digital I/O. The main synthesizer, used for the main receiver as well as the transmitter, plugs in at front left and is attached to the front shield. To the right of this is the reference oscillator module (KREF3), as well as the second synthesizer, used for the sub receiver. These also attach to the front shield. T he Front Panel/DSP module plugs in at the very front of the RF board. Finally, at the far right y ou ’ l lf i ndt wol ow-noise linear voltage regulators, one for 5 volts and the other 8 volts. Both are heat-sinked to the right side panel. Noise Blanker There are two noise blanker subsystems in the K3: the KNB3 module, and a DSP-based blanker (see DSP on pg. 69). The KNB3 is a narrow I.F. pulse blanker that plugs into the RF board. Its broad input bandwidth ensures minimu ms t r e t c hi ngoff a s tnoi s epu l s e s ,s oi t ’ si d e a lf o rs u pp r e s s i ngnoi s ef r ompowe rl i ne s , thunderstorms, and auto ignitions. The DSP blanker can be used on many other types of noise, including radar and other noise with complex waveforms that might cause heavy intermodulation if an I.F. blanker were engaged. Using the two blankers in combination is often extremely effective. The KNB3 includes a triple-tuned bandpass/time-delay filter, wide-range AGC, and a noise gate. You can think of the noise gate as a switch that is normally closed, allowing received signals to pass unimpeded. When a noise pulse appears, it is amplified to a high level and used to trigger a one-shot circuit. This opens the noise gate very briefly (from 5 to about 100 microseconds) to blank the noise pulse. Both the threshold at which blanking action occurs and the length of time the gate is opened are under control of the operator. 1st Mixer The 1 st mixer combines signals from the input band-pass filters with the output of the synthesizer to obtain the 1 st I.F., at 8.215 MHz. The mixer is based on a video switching IC with very low ON resistance, resulting in low loss and high signal-handling capability. Since this type of mixer r e q u i r e sl owd r i ve , t he r e ’ sve r yl i t t l el e a k67 through of the local oscillator (synthesizer) signal. The mixer also incorporates a balanced VHF low-pass filter to suppress both internally and externally generated VHF/UHF spurs. T his ke e pst heK3’ sHFs pu rc o mpl e me n t extremely low, despite the use of a down-conversion system architecture. KANT3 and KAT3 The basic K3/10 includes a KANT 3 a n t e nnai n pu tmod u l e .I fy ou ’ veor d e r e daKAT3a n t e nnat u ne r , t he KANT3 is not required and will not be supplied with the kit. In either case, the module plugs into the RF board at the back-right corner. Both the KANT3 and KAT3 provide antenna surge protection, as well as resistors for bleeding off static DC charge. The KAT3 provides a wide-range, switchable C-in/C-out L-network for matching a variety of antennas with SWR as high as 10:1 (100 W) or 20:1 (10 W). T here are 8 inductors and 8 capacitors in the L-network, each switched with a DPDT relay for high reliability. T he KAT3 also includes a second a n t e n naj a c ka nda s s oc i a t e ds wi t c hi ngr e l a y. The r e ’ sa na d d i t i o na lj a c kont heb oa r df orr ou t i ngt heu nu s e d( no ntransmit) antenna to the KRX3 sub receiver module. KIO3 All audio and digital/computer I/O is routed through the KIO3. The KIO3 is made up of three PC boards: Main, Audio IO and Digital IO. The Main KIO3 board plugs directly into the RF board. It includes a relay to disconnect the right speaker channel in case a mono speaker is plugged into the external speaker jack, isolation transformers for Line In and Line Out signals, a connection point for the internal speaker, a low-noise oscillator to provide voltages for the RS232 serial interface, and various control line inputs and outputs for externaltransverters, band decoders, and the like. This board also contains a differential output microphone amplifier to equalize the gain between the front and rear microphone jacks, as well as to provide noise immunity for the microphone signal from the rear panel area. Circuitry to allow use of the serial port RT S or DT R signal lines as PTT and/or KEY inputs is also located on this board. This feature is to support logging and control programs which may use these lines for controlling transmit/receive switching or CW keying. The Digital IO board plugs into the KIO3 Main board. It includes a DE-9 serial port connector for use with an external PC, and a DE-15 accessory connector for external band decoders (such as the KRC-2), transverters (such as the Elecraft XV-series), and similar devices. It is also the connector to which direct FSK or PSK signaling is applied. The Audio IO board includes three stereo outputs: headphone jack, speaker jack, and a transformer-isolated Line Out jack. It also provides two monophonic inputs: microphone and an isolated Line In. The Microphone jack can provide bias for an electret microphone when enabled via the MAIN:MIC SEL menu entry. Both Digital and Audio IO boards include extensive bypassing and decoupling to help prevent RF signals getting into the radio through cables attached to their respective connectors. Front Panel and DSP The Front Panel is a large plug-in module that includes both the Front Panel and DSP boards, as well as the Aux DSP (if a sub receiver is installed) and digital voice recorder module (if the KDVR3 option is installed). Front Panel Board Thi sb oa r dp r o vi d e st heK3’ su s e ri n t e r f a c e : 3 5c u s t om-labeled switches; two dual-concentric potentiometers for gain and squelch control; seven shaft encoders; custom, 240-segment, high-contrast LCD; and 13 discrete LED 68 indicators. Mic and headphones can be plugged into the front panel, or optionally at the rear panel (see KIO3 description, pg. 68). The Front Panel PCB also includes the microcontroller unit (MCU), which manages the operation of the K3. All inputs, whether from a switch, knob or external PC, are recognized and acted on by the MCU. All control outputs –such as switching from transmit to receive, sending a CW code element, adjusting the transmitter power, controlling LED brightness, etc. –are produced by the MCU. The Front Panel also contains a large amount of EEPROM memory for parameter storage, and FLASH memory for program storage. This allows the K3 to be re-programmed with the newest firmware by a simple download from the Internet. It also enables the K3 to remember your favorite settings, particular configuration preferences, and the last setting of controls when power is removed from the radio. DSP Board TheK3’ sDi g i t a lS i g na lPr oc e s s i ng( DS P)c a pa b i l i t i e spr ovi d ear i c hs e toff e a t u r e st ohe l pc omb a tQRM a nd QRN while generating some of the cleanest signals to be found in Amateur radio today. A 32-bit floating point DSP is used for highest performance. In receive, a 15 kHz IF signal from the RF board is buffered and then digitized by a 24-bit Analog to Digital Converter (ADC). This provides over 100 dB of dynamic range within the passband of the selected crystal (roofing) filter. After the ADC, the DSP converts the signal into a floating point value so dynamic range is not compromised during further processing. Noise blanking and limiting, AGC, amplification, IF and AF filtering are all done within the DSP. Several noise blanking algorithms (methods) are available in the DSP, and a sophisticated AGC system is employed. AM, FM, SSB and CW detectors are also implemented by the DSP. Various audio effects, such as Quasi-Stereo and Binaural, are provided here as well as combining the audio signals from the KRX3 (if installed). After processing, the resulting audio signals are generated in a stereo 24-bit Digitalto Analog Converter (DAC) and applied to separate amplifiers for headphones (front and rear) and speaker. A separate 24-bit DAC and amplifier provide Line Out signals that are not affected by the AF Gain control. This output is typically used by sound card digital mode software. In transmit, Line In, rear or front Microphone signals are sent to a 24-bit ADC and then processed by the DSP. In speech modes (SSB, AM and FM) and soundcard-based data modes, VOX is derived from these signals as well as receive audio. Microphone equalization, bandpass limiting, conversion to 15 kHz IF, envelope clipping and filtering (if applicable) are all done in DSP, then the signal is passed to another 24-bit DAC and presented to the RF board as a 15 kHz IF signal. Direct FSK, direct PSK and CW signals are generated within the DSP for those modes. Thus, the DSP is responsible for all signal processing between audio and the 15 kHz IF for both receive and transmit. Like all other modules in the K3, the DSP is managed by the MCU. The DSP board is piggybacked onto the Front Panel board as part of the Front Panel assembly. The Auxiliary DSP (used if the KRX3 Second Receiver Option is installed) and the KDVR3 option plug into the DSP board. KREF3 TheKREF 3mod u l e ’ s49.380-MHz temperature-compensated crystal oscillator (T CXO) is the common signal s ou r c ef o rt h eK3 ’ ss yn t he s i z e r s . Thi ss i g na li sa l s od i vi d e db y6to provide the 8.230-MHz signal used by the second receive and transmit mixers. Firmware is used to compensate for any small drift in the T CXO and its derived signals, resulting in excellent stability (with the high-stability option, better than +/- 0.5 PPM over the 0 to 50 C temperature range). In addition to the T CXO and dividers, the KREF3 provides the 2 nd transmit I.F. mixer, which converts the DSP's 15-kHz transmit I.F. output to 8.215 MHz. This signal passes through a wide crystal filter to ensure good rejection of the carrier and other mixer products before being routed to the RF 69 board. T he KREF3 obtains its DC and low-frequency I/O signals via an 8-pin connector on the RF board, but its RF outputs are fed to the RF board (and sub receiver, if applicable) via coax cable assemblies. KSYN3 Lowpha s enoi s ei ske yt ob o t hr e c e i ve ra ndt r a ns mi t t e rpe r f or ma nc e . I nt heK3’ ss yn t he s i z e rmodule (KSYN3), we start with a clean, wide-range voltage-controlled oscillator (VCO). T he VCO frequency is placed near the desired band of operation using 128 carefully-selected L-C combinations, which keep the ratio of fixed capacitance to tunable capacitance (varactor diodes) as high as possible. The VCO is held exactly on frequency by a phase-locked-loop IC (PLL), which samples the VCO output continuously and compares it to its high-s t a b i l i t yr e f e r e nc ei npu t . ThePLL’ sr e f e r e nc ei n pu ti sob t a i ne df r oma direct-digital-synthesis (DDS) IC, which is tunable in about 0.2-Hz steps. The reference for the DDS itself is the 49.380-MHz signal from the KREF3 module. To keep the synthesizer’ sou t pu ts i g na l vi r t u a l l ys pu r -free, the DDS is followed by a 4-pole crystal filter. This eliminates both directly-occurring spurs and the Nyquist sampling spurs that normally accompany a DDS-driven PLL system. The combination of all of these noise-minimization techniques results in very low phase noise and negligible discrete spur content. 70 K3 Block Diagram 71 Appendix A: Crystal Filter Installation Da ma g et oyo urK3d uet oe l ec t r o s t at i cd i s c har g e( ESD)c a noc c uri fyo ud o n’ tt a kepr op er pre cautions. Such damage is not cove re d by the Ele craft warranty, and could re sult in costly re pairs. We recommend that you use an anti-static mat and we ar a conductive wrist strap with a se ries 1-megohm resistor. An alternative is to touch an unpainte d, grounde d metal surface fre quently while you are working. Do this only when you are not touching any live circuits with your othe r hand or any part of your body. To avoid marring the finish, place a soft cloth unde r cabine t panels; do not lay them dire ctly on your work surface. Also, do not use a powe r scre wdrive r of any kind, as it can slip and gouge the paint. Installation Procedure Disconnect the power cable and all other external cables from the K3. Remove only the top-cover screws identified in the drawing below. Press gently at the indicated point near the back edge ( X), then lift off the top cover at the front. Unplug the speaker, then set the top cover aside in a safe place. T he screws that hold the top cover in place are an importantpa r toft heK3’ ss t r u c t u r a ld e s i g n.Pl e a s eb e sure to re-install all of them afterward. Put on your wrist strap or touch a grounde d surface before touching any K3 components or modules in the following ste ps. If you have the sub receiver installed, refer to its manual for instructions on removing the KRX3 module. 72 Locate the crystal filters you presently have installed in slots FL1 - FL5 on the RF board (or sub receiver). T here may be a mix of 5-pole filters (below left) and 8-pole filters (right). Review the information below to ensure that your crystal filter setup conforms to K3 requirements. You can install up to five crystal filters (FL1-FL5) on the RF board, and five on the sub receiver (KRX3). FM operation requires a 13 kHz wide filter. AM transmit requires a 6 kHz filter, and SSB/DAT A/CW transmit requires a 2.7 or 2.8 kHz filter; other bandwidths can be used for receive in these modes. Filters as narrow as 200 Hz can be used for CW and narrow-band data receive. A mix of 5-pole and 8-pole filters can be used. Ther ear et wor ul e sr e g ar di ngwher et he s ef i l t er sc a nb ei ns t a l l edi nt heK3a ndhowt hey ’ r eus ed: Rule #1: If you plan to use a particular filter for both transmitting and receiving ( ma i nr e c e i ve r ) , you ’ l l ne e dt o install it on the RF board. You can optionally install a filter of the same or similar bandwidth on the sub receiver for receive-only use. (T his is recommended since it will keep the receivers identical.) Rule #2: You can install any filter in any slot, and can leave any slot empty in anticipation of installing a crystal filter there later. However, you should install the widest filter closest to FL1, the next widest to its left, etc. Here are two examples that could each apply to either receiver, assuming you followthe rules above: FL1 FL2 FL3 FL4 FL5 6 kHz (AM) 2.7 kHz (SSB/CW/DAT A) 1.8 kHz (SSB/CW/DAT A) 500 Hz (CW/DAT A) 200 Hz (CW/DAT A) FL1 FL2 FL3 FL4 FL5 73 {saved for FM filter} 6 kHz (AM) 2.8 kHz (SSB/CW/DAT A) {saved for variable-bandwidth filter} 400 Hz (CW/DAT A) Fill in the table below (include sub receiver info, if applicable). Use pencil, since you may change the configuration later. BANDWI DTH can be obtained from the model number of each filter. 5-pole filters have a FRE Q O FFS E T ma r ke do nt hes i d eofoneof t hec r ys t a l s ,e . g . “-0. 8 5” . Theo f f s e tf ora l l 8-pole filters is 0.00. RF BOARD (MAIN RX & TX) POSITION FL1 FL2 FL3 FL4 FL5 BANDWIDTH SUB RECEIVER (RX ONLY) FREQ OFFSET POSITION BANDWIDTH FREQ OFFSET FL1 FL2 FL3 FL4 FL5 I fy ou’ l lb ec ha ng i ngRFboar df i l t er s :T urn the K3 upside down, placing a soft cloth beneath it. Remove the seven black pan head screws retaining the front bottom cover, then lift the cover off. Re mov et hes c r e wshol d i nga n ye xi s t i ngf i l t e r st h a ty ou ’ l lne e dt omovet oob t a i nt heo r d e rl i s t e da b ove( o n both the RF board and sub receiver). 74 T urn the K3 right side up. Unplug all filters to be repositioned (those whose mounting screws have been removed). Lift the filters at each end carefully, first one end then the other, until the connectors separate. Reposition the filters as required. They will only fit one way. If you put one in backwards, it will not fit within its outline, and the standoff will not line up with the screw hole in the RF board (or sub receiver board). T urn the K3 (or sub receiver module) upside down again. Install the mounting hardware shown below. Fi l t er smayb es upp l i edwi t he i t herab l ac k3/ 16 ”orbr i g ht -pl at ed1/ 4”p a n-he ad screw. A screw l o ng ert ha n1/ 4 ”ma ye xt e ndi nt ot he8-pole filte r unit and damage it. Do not over-tighten the scre ws. Excess torque may pull out the threade d standoff. Re-install the bottom cover (if applicable) using seven 4-40x3/ 1 6”b l a c kpa nhe a ds c r e ws .Re pl a c et he screws securely, but do not over tighten them. All screws must be used to maintain shielding performance. The top cover and sub receiver (if applicable) will be re-installed in at later step. T urn to Crystal Filte r Setup (pg. 45). Follow all instructions for the main receiver and transmitter. If you have the KRX3 option, re-install the sub receiver module as described in the KRX3 manual. Then turn to Crystal Filte r Se tup and follow all instructions for the sub receiver. Position the top cover on the K3, with its rear tab inserted under the top edge of the rear panel. Then plug the speaker wire into P25 on the KIO3 board at the left rear of the K3. Secure the top cover with 4-4 0x3/ 16”f l a the a ds c r e wsa ta l ll oc a t i o ns . This completes crystal filter installation. 75 Index 12 VDC IN, 17 12 VDC O UT, 17 1-Hz Tuning, 22 6-Me te r Pre amp (PR6), 44 Accessory 12 VDC Output, 17 Accessory I/O (ACC), 18 AFSK A Mode, 31 AFV (Audio Voltmete r), 36, 52 AFX (Audio Effe cts), 2, 35 AGC, 15, 52 Alarm Function, 6, 36 ALC, 26, 27, 28 ALC, Exte rnal, 18, 27, 53, 60 ALT Switch (Alte rnate Modes), 22, 30 Alternate Displays, 2, 36 AM Mode , 2, 28, 29, 73 Amplifie r Module, 10 W, 66 Amplifie r Module, 100 W, 46, 54 Antenna, 5, 6, 17, 22, 38, 40, 44, 48, 50, 60 Antenna Naming, 22 Audio Effe cts (AFX), 2, 20, 35, 69 Auto Powe r-On, 2, 18, 36, 43, 59 Automatic Antenna Tune r, 22, 54 Automatic Le vel Control, 26, 27, 28 Auto-Spot, 34 AUX RF, 37, 40 AUXBUS, 8, 18, 19, 58 Band Outputs (BAND0-3), 18, 19, 54 Band-Pass Filte rs, 66 Bar Graph, 12 Batte ry Re placement (Clock), 46 Binaural Audio (AFX), 7, 56, 69 Block Diagram, 2, 71 Bre ak-In Delay, 14, 26 Bre ak-In Keying (QSK), 30 Buffe re d I.F. Output, 17, 44, 54 Calibration Proce dures, 2, 48 Channel Hopping, 39 Circuit Breake r, 17, 63 Compression, 14, 26, 28 CONFIG Menu, 21, 45, 51, 52, 56 Configuration, 2, 7, 30, 45, 61 Connector Groups, 2, 17 Control Groups, 2, 11 Cross-Mode O pe ration, 2, 6, 34, 36 Crystal Filte r Bandwidth, 23 Crystal Filte r Cente r Fre quency, 9, 45 Crystal Filte r Enables, 45 Crystal Filte r Installation, 2, 45, 72, 75 Crystal Filte r Setup, 2, 14, 45, 73, 75 Crystal Filte r, Transmit, 26, 46, 65 Crystal Filte r, Variable Bandwidth, 23, 67, 73 Current Drain, 7, 21, 36 Customer Support, 2, 10, 44, 64 CW Ke ying Weight, 30, 36, 53 CW Mode, 30 CW Mode VO X, 13 CW Normal, 30 CW Re ve rse , 12, 13, 22, 30 CW, Break-In (QSK), 30 CW, O ff-Air Ke ying, 26, 52 CW, Sending in SSB mode , 53 CWT (CW/DATA Tuning Aid), 34 CW-to-DATA, 2, 16, 30, 31, 34 DATA A Mode , 31 DATA Mode , 31 DATA Re ve rse , 12, 13, 22 Date , 46 dBV (Audio dB Mete r), 36, 52 Digital Voice Re corde r, 7, 16, 29, 68 DIGO UT0, 18, 19 DIGO UT1, 18, 40, 44, 53 Dire ct Fre quency Entry, 15 Display (LCD), 12, 51 DSP, 7, 44, 69 Dual Passband Filte ring, 5, 7, 15, 35 Error Me ssages (ERR xxx), 62 ESSB, 36, 51, 57 Exte rnal ALC, 18, 27, 53, 60 Exte rnal Refe rence, 17 Fan Panel, 5, 17 Filte r Graphic, 12, 14, 23, 32, 35 FINE Tuning, 22 Firmware Upgrades, 2, 3, 44 FM Mode , 13, 25, 28, 29 FP ACC, 13 Fre quency Entry, Dire ct, 15 Fre quency Memories, 16, 29 Fre quency Range , 8, 39 Friction Adjustment, VFO Knob, 2, 47 Front Panel, 2, 11 Front Panel Acce ssory Conne ctor, 13 FSK, 31, 32 FSK D Mode, 31 FSK IN, 18, 31, 32 Full Break-In (QSK), 5, 13, 30 Ground Te rminal, 17 He adphones, 13, 20, 55, 56 HICUT (High Cut), 5, 23, 32 I/II, 12, 14, 23, 24 76 IF O UT, 38 Inte rme diate Fre quency (I.F.), 23, 67 K3 Utility PC Application, 43, 44, 45, 59, 61 KANT3, 2, 40, 41, 63, 66, 68 KAT3, 2, 8, 22, 41, 46, 53, 54 KBPF3, 9, 36, 54, 63, 66 KDVR3, 16, 29 KEY O UT, 17, 19, 57 Ke yboard, 43 Ke ye r Paddle , 17, 30, 53 Ke ye r Spee d, 5, 11, 26 Ke ying Weight, 30, 53 Ke ypad, 11 KIO 3, 2, 6, 18, 53, 59, 64 KNB3, 25, 54, 62, 64, 67 KPA3, 46, 54 KPAIO 3, 54, 64, 67 KREF3, 69 KREF3-EXT, 17 KRX3, 47, 63, 64, 72 KSYN3, 2, 64, 70 KTCXO 3-1, 32, 44 KXV3, 2, 40 LEDs, 13 LINE IN, 20, 31, 32, 46, 51, 68 LINE O UT, 20, 32, 55 LOCUT (Low-Cut), 5, 32 Low-Pass Filte r, 37, 40, 65, 66, 68 LPA, 66 M1-M4, 6, 11, 16, 29, 30 MAIN Me nu, 2, 21, 51 Memory Label, 16, 39 Menu Help, 21 Me ssage Re cord/Play, 2, 16, 30 Mic Gain, 11, 13, 20, 28, 32, 46, 51 Microphone, 28, 59, 68, 69 Mode Selection, 15, 22, 28, 54 Noise Blanker (NB), 9, 25, 54, 62, 64, 67 Noise Re duction (NR), 6, 11, 25, 43 NO RM1/2 (Filte r Normaliz ation), 14, 24 Normalizing Filte r Passband, 14, 24 Notch Filte ring, 6, 12, 15, 25 Nume ric Ke ypad, 11 O ption Module Enables, 2, 46, 61 O ptions, 2, 8, 44, 45 PA Inte rface Module , 54, 63, 64, 67 Panadapte r, 38, 43 Paramete r Save /Restore , 61, 63, 69 PF1, PF2, 6, 16, 21, 32 PLL voltage, 36, 64 POWER O N Signal, 18, 43, 59 Powe r Supply, 8 PR6 (6-Me ter Preamp), 44 Pre sets, 24 Primary Controls, 2, 11, 13 Programmable Function Switch, 6, 11, 16, 21, 29, 30, 32 PSK D Mode, 31 PSK31, 31 PTT (Push To Talk), 18, 26, 29, 55, 59, 64 QSK, 30 Quasi-Ste reo (AFX), 35, 51, 69 Quick Memorie s, 8, 15, 16 Quick-Start Guide, 2, 4, 7 Real Time Clock, 2, 46, 47 Rear Panel, 2, 7, 17 Re ceive Antenna, 2, 38 Re ceive r Setup, 2, 23 REF IN, 17 Refe rence Oscillator, 2, 17, 49, 55, 64, 65, 67, 69, 70 Refe rence , Exte rnal, 17 Remote Control, 2, 7, 8, 43, 52, 53 Remote Powe r-On, 18, 43, 59 Re peate r Offset, 29, 51 RF Board, 2, 66 RIT, 2, 6, 11, 14, 16, 22, 57 RIT/XIT O ffse t, 7, 14, 16, 21, 22, 33, 36, 57 Roofing Filter, 7, 23, 69 RS232, 18, 31, 43, 52, 55, 59, 68 RTC, 46 RTTY, 31, 32 RX ANT IN/O UT, 2, 38 RX EQ , 35, 51, 60 Scanning, 2, 6, 14, 16, 39 Semi-Break-In, 5, 14, 26 Se rial I/O , 18 SHIFT, 12, 14, 23 Side tone , 30 S-Me ter, 12, 50, 56 Spe aker, 7, 35, 37, 56 Spe cifications, 8 Spe ctrum Scope , 43 Spee ch Compre ssion, 14, 26, 28 SPLIT and Cross-Mode O pe ration, 6, 14, 26, 36, 60 Spot, 15, 30 SSB +CW, 53 SSB Mode , 28 SSB, Sele cting Alternate Side band, 12, 28 Sub Re ceive r, 2, 5, 17, 22, 37, 40, 41, 42, 46, 52, 54 Supply Voltage , 8 Sync Data, 32, 55, 56 Synthesizer, 2, 36, 48, 53, 63, 64, 67, 70 Tap/Hold Switch Functions, 11, 21 77 TCXO , 8, 32, 44, 49, 54, 69 Te mpe rature Sensor, 50, 54, 55, 59 Te rminal Emulator, 43 Te xt De code and Display, 2, 23, 30, 33 The ory Of O peration, 2, 66 Time , 46 Transmit Crystal Filte r, 26, 46, 65 Transmit Inhibit, 18, 19, 27, 57 Transmit Test, 13 Transmitter Se tup, 2, 26 Transve rte r, 2, 8, 9, 18, 19, 38, 48, 54, 60 Troubleshooting, 59 Tuning Aids, 34 Tuning Rate , VFO, 6, 22, 36, 46, 57 TX EQ , 5, 6, 28, 35, 51 TX LED, 6, 26, 44, 59 TX TES T, 5, 12, 18, 26, 28, 30, 34, 55 TXGN, 48, 57 VFO B Alternate Displays, 2, 36 VFO Knob Friction Adjustment, 2, 47 VFO Tuning Controls, 6, 22, 36, 46, 57 Voice Modes, 2, 5, 23, 28 VO X Delay, 14, 29 VO X, CW Mode, 13 VO X, Voice Modes, 13, 29 Wattme ter, 2, 28, 48, 57 WIDTH, 12, 14, 23 XFIL, 15, 45, 53 XIT, 2, 6, 11, 14, 16, 22, 57 XVTR IN/O UT, 9, 38, 48, 54, 60 78 | https://manualzz.com/doc/43690/elecraft-kxv3-specifications | CC-MAIN-2019-04 | refinedweb | 44,445 | 79.8 |
#include <inspsocket.h>
StreamSocket is a class that wraps a TCP socket and handles send and receive queues, including passing them to IO hooks
Close the socket, remove from socket engine, etc
Dispatched from HandleEvent
Dispatched from HandleEvent
Reimplemented in BufferedSocket.
Gets the error message for this socket.
Convenience function: read a line from the socket
Useful for implementing sendq exceeded
Handle event from socket engine. This will call OnDataReady if there is new data in recvq
Implements EventHandler.
Called when new data is present in recvq
Implemented in BufferedSocket, and UserIOHandler.
Called when the socket gets an error from socket engine or IO hook
Implemented in UserIOHandler.
Sets the error message for this socket. Once set, the socket is dead.
Send the given data out the socket, either now or when writes unblock | https://www.inspircd.org/api/2.0/class_stream_socket.html | CC-MAIN-2021-10 | refinedweb | 134 | 56.45 |
Here’ a fun Python challenge involving just a bit of mathematical know-how:
Write a function that takes an argument n and prints a pair of natural numbers x, y such that x² + y² = n
For the purposes of this exercise we will assume that n > 0. So, for example: for n = 10, we can have 1² + 3² = 10, so x = 1 and y = 3.
There is a relationship between this problem and the famous Pythagorean Theorem, which is one of the most important pieces of mathematical knowledge ever discovered. It underpins numerous aspects of the technological world we live in, and it’s also very useful in games for calculating the distance between things. You can read more about that here: How to calculate the distance between two points with Python, and a fun game to play.
Here’s a couple of observations which may help you:
- In Python to calculate the square root of n, we can use math.sqrt(n)
- Since n > 0, x must be at least 1.
- Since x >= 1, y can be at most √(n – 1)
- Let’s assume x <= y, as otherwise we would have duplicate solutions (e.g. 1² + 3² = 10 and 3² + 1² = 10)
- In Python one way to square a number is to use ** 2. E.g 3 ** 2 = 9
- There is a useful function math.floor(n), which gives the greatest integer less than or equal to n
- This is useful because e.g for n = 3, 1² + (√3)² rounded up would be too large. So we use math.floor(math.sqrt(n)) as the upper possible value for x or y
- You may want to read up on nested FOR loops to help you with this challenge
import math def sum_of_squares(n): """ Returns a pair of natural numbers x, y, such that x² + y² = n """ pass assert sum_of_squares(10) == (1, 3) assert sum_of_squares(3) is None
Have a go at completing the above code for yourself using your favorite Python development environment. The assert statements)).
Solution to Python Sum of Two Squares Challenge
Click below for one way to solve the challenge.
import math def sum_of_squares(n): """ Returns a pair of natural numbers x, y, such that x² + y² = n """ max_val = math.floor(math.sqrt(n)) for i in range(1, max_val + 1): for j in range(1, max_val + 1): if i ** 2 + j ** 2 == n: return (i, j) # Returns first correct pair. return None assert sum_of_squares(10) == (1, 3) assert sum_of_squares(3) is None # for i in range(1, 201): # print(sum_of_squares(i))
I hope you found that to be an intersting Python coding challenge. All the time you spend thinking how to solve these kinds of problems with make you a better Python programmer. Happy computing. | https://compucademy.net/python-coding-challenge-sum-of-squares/ | CC-MAIN-2022-27 | refinedweb | 461 | 75.03 |
graphlite 1.0.2
embedded graph datastore
Graphlite is a tiny graph datastore that stores adjacency lists similar to FlockDB but like conventional graph databases, allow you to query them with traversals (graph-walking queries), and works with datasets that you can fit into your SQLite database.
from graphlite import connect, V graph = connect(':memory:', graphs=['knows']) with graph.transaction() as tr: for i in range(2, 5): tr.store(V(1).knows(i)) tr.store(V(2).knows(3)) tr.store(V(3).knows(5)) # who are the friends of the mutual friends # of both 1 and 2? graph.find(V(1).knows)\ .intersection(V(2).knows)\ .traverse(V().knows)
Graphlite is thread safe, meaning that when transactions are comitted (at the end of the with block), a lock is held and only the thread that commits gets to run. Thread safety is emphasised if you look at the test suite.
Installing
$ pip install graphlite/graphlite
- Author: Eugene Eeo
- Package Index Owner: eugene-eeo
- DOAP record: graphlite-1.0.2.xml | https://pypi.python.org/pypi/graphlite/1.0.2 | CC-MAIN-2016-40 | refinedweb | 172 | 60.35 |
PCOC4 isr_1_ClearPending() doesn't seem to compile.ian.perry Aug 18, 2017 3:45 PM
Hi,
I have been converting some PSOC1 to PSOC4 and have come across a problem in compilation. Perhaps someone can explain it.
The chip is CY8C4245AXI-483
I have created a very simple schematic with an input pin connected to an ISR component
When I use isr1_ClearPending inside the generated ISR, the listing file shows no code.
When I comment that line out, the listing file shows what I would expect.
Thanks
Ian
1. Re: PCOC4 isr_1_ClearPending() doesn't seem to compile.user_1377889 Mar 13, 2016 1:32 AM (in response to ian.perry)
Ian, can you please post your complete project, so that we all can have a look at all of your settings? To do so, use
Creator->File->Create Workspace Bundle (minimal)
and attach the resulting file.
Will be much easier that way to check your generated code.
Bob
2. Re: PCOC4 isr_1_ClearPending() doesn't seem to compile.ian.perry Mar 13, 2016 4:15 AM (in response to ian.perry)
Hi Bob,
Pretty simple. It started with a UART, then got down to one pin and one interrupt.
If you go to the CY_ISR(isr_1_Interrupt) section in isr_1.C and remove the comments on isr_1_ClearPending(); and compile, you will see what I mean.
I have included a listing file with some explanation.
It is certainly strange.
Thanks
Ian
- Workspace01.cywrk_.Archive01.zip 314.8 K
- Listing.zip 786 bytes
3. Re: PCOC4 isr_1_ClearPending() doesn't seem to compile.user_1377889 Mar 13, 2016 6:39 AM (in response to ian.perry)
Excerpt from main.lst
40 .LVL0:
23:.\main.c **** isr_1_ClearPending();
41 .loc 1 23 0 discriminator 1
42 0008 FFF7FEFF bl isr_1_ClearPending
43 .LVL1:
24:.\main.c **** /* Place your application code here. */
and from isr_1.lst
381 0006 1A70 strb r2, [r3]
169:.\Generated_Source\PSoC4/isr_1.c ****
382 .loc 1 169 0
383 0008 FFF7FEFF bl isr_1_ClearPending
384 .LVL20:
172:.\Generated_Source\PSoC4/isr_1.c ****
so, both ClearPending() did generate code.
BTW: You do not need to code a Clearpending in main() and putting the isr_1_Start into the main loop is an error.
A .h file is not meant to keep definitions of variables, only declarations.
Additionally .h files need a means to allow them to be included in several files without producing errors due to doubly defined symbols. So in your case:
#ifndef Test_h
#define Test_h
extern uint8 TestVariable;
#endif
include that file in both main.c and isr_1_int.c
in main.c
volatile uint8 TestVariable = 1; // Global vars changed in a handler must always be "volatile"
I would suggest you to use isr_1_StartEx() which allows you to keep the handler in one of your own files. Changes in the generated files might get overwritten by accident.
Bob
4. Re: PCOC4 isr_1_ClearPending() doesn't seem to compile.ian.perry Mar 13, 2016 2:35 PM (in response to ian.perry)
Thanks Bob,
Perhaps I am missing something or we are misunderstanding each other...
I agree with your comments and I would normally use a file 'initialise.c' and a proper header file to call all the start routines and setup. This was just a simplification to try and find what appeared to be the compilation error,
I agree isr_1_ClearPending does generate code but
CY_ISR(isr_1_Interrupt) does not generate code when used in the generated routine. TestVariable is never set.
159:.\Generated_Source\PSoC4/isr_1.c **** CY_ISR(isr_1_Interrupt)
160:. {
161:. #ifdef isr_1_INTERRUPT_INTERRUPT_CALLBACK
162:. isr_1_Interrupt_InterruptCallback();
163:. #endif /* isr_1_INTERRUPT_INTERRUPT_CALLBACK */
164:.
165:. /* Place your Interrupt code here. */
166:. /* `#START isr_1_Interrupt` */
167:.
168: TestVariable = 1; <--------- This line does not get compiled until
169:. isr_1_ClearPending(); <--------- this line is removed
170:.
171:. /* `#END` */
172:.\Generated_Source\PSoC4/isr_1.c **** }
165:.\Generated_Source\PSoC4/isr_1.c **** /* Place your Interrupt code here. */
166:. /* `#START isr_1_Interrupt` */
167:
168:. TestVariable = 1; <---- here we have a compilation
28 .loc 1 168 0
29 0000 0122 mov r2, #1
30 0002 014B ldr r3, .L2
31 0004 1A70 strb r2, [r3]
169:. // isr_1_ClearPending(); <----- This has been commented out
170:.
171:. /* `#END` */
172:.\Generated_Source\PSoC4/isr_1.c **** }
Regards
5. Re: PCOC4 isr_1_ClearPending() doesn't seem to compile.user_1377889 Mar 13, 2016 3:30 PM (in response to ian.perry)
Again isr_1.lst:
168:.\Generated_Source\PSoC4/isr_1.c **** isr_1_ClearPending();
378 .loc 1 168 0
379 0002 0122 mov r2, #1
380 0004 024B ldr r3, .L36
381 0006 1A70 strb r2, [r3] <-- Store TestVariable
169:.\Generated_Source\PSoC4/isr_1.c ****
382 .loc 1 169 0
383 0008 FFF7FEFF bl isr_1_ClearPending <-- ClearPending
384 .LVL20:
Do you really question that the Gnu C Compiler is errornous in such a simple program? When the line for setting the variable is never executed there will be another reason.
Edit: Project attached
Bob | https://community.cypress.com/thread/20749 | CC-MAIN-2017-51 | refinedweb | 788 | 61.33 |
Subsets and Splits