text: string (length 454 to 608k)
url: string (length 17 to 896)
dump: string (length 9 to 15)
source: string (1 value)
word_count: int64 (101 to 114k)
flesch_reading_ease: float64 (50 to 104)
Increasing Interest Manifest In Rangely Field Possibilities

DENVER.—Indications are that the Rangely oil field in northwest Colorado is developing into one of the major oil areas of the nation. Some informed oil men are inclined to hazard estimates that the Rangely field will go far beyond the 100 million barrel mark. Long known for its production of crude Pennsylvania type oil from shallow depth, the Rangely district has become an important producer of paraffin-asphaltic base petroleum during the past few months.

Shallow Wells Produce

At Rangely, the shallow oil obtained at depths of from 700 to 900 feet below surface is not accompanied by gas pressure and the wells have been pumped for years. Most of the wells are "emptied" each day in short pumping periods but fill up overnight and are ready for pumping again each morning. This refilling appears to be capillary in nature. Veterans in the district are of the opinion that the shallow oil ascends through the calcite filled cracks in the formation. One pioneer contends that the calcite purifies the shallow oil to its Pennsylvania grade. Average production per shallow well is about 30 barrels a day and at present the daily output from all the shallow wells is from 2,500 to 3,000 barrels.

Tap Weber Sands

The Weber oil bearing sandstone is encountered at a depth of about 7,500 feet below surface in the Rangely district. It is 700 feet or more in thickness, an extremely thick oil sand. After the initial flow from this sand is put under control, drilling continues for some 600 feet in order to penetrate the deepest practicable source of oil. Sensible regulations obtain in the Rangely field. Wells are drilled in the center only of each 40 acre lease or fee, and free flow of oil is avoided by the practice of balancing gas pressure to output in the effort to maintain steady flow for the estimated life of the field so as to avoid having to pump the wells. One specialist thinks the field is good for 30 years.
At this time discharge lines as small as quarter inch are used in the wells. The producing wells deliver from 300 to 450 barrels a day.

Nine Wells Produce

A recent visit to the Rangely district disclosed the fact that nine wells are producing oil, with others expected daily as a result of drilling in progress with 20 or more modern rotary drilling rigs. Total production at present depends upon the availability of oil trucks which deliver the oil to the 10-inch pipeline at Craig, Colo. The Utah Oil Refining Co. is preparing to build a line from Rangely to the main line at Wamsutter, Wyo.

Companies Listed

In addition to several small companies and lessees, the following companies are operating in the Rangely field: Associated Oil Co., California Co. (Standard of California), California Oil Groups (A. C. McLaughlin et al.), Equity Oil Co., Husky Refining Co., Newton Oil Co., Phillips Petroleum Co., Raven Oil Co., Stanolind Oil & Gas Co., the Union Pacific Railroad Co., Utah Southern Oil Co., and Wasatch-Idaho-Sharpless (Wasatch Oil Co.).

We Make Your Oilfield Requirements Our Speciality. American Pipe and Supply Co., dealers in new and used tubular goods, pumping equipment and drilling supplies. Cut Bank, Casper, Denver.

The world's greatest lubricating experience goes into the making of Gargoyle Lubricants. From lubricating "know-how" garnered over 78 years we create lubricants for every part of every machine made! Give your precious, war-weary machines the finest lubricants and your staff the most skilled lubrication counsel available. Socony-Vacuum Oil Company, Inc., First National Bank Bldg., Great Falls, Montana. Call in Socony-Vacuum.
While new wells continue to come in in the Rangely field, now considered to be about 8 by 4 miles in area, some major companies have been and are conducting geological, geophysical, gravity and seismic surveys on the surface to the northwest, reaching into Utah to a point west of the asphaltic rim to the west and northwest of Vernal. This work is conducted to identify "key" formations where the geology is covered up by surface waste and sediments. In addition, the General Petroleum Co. is prepared to drill about 20 miles south of Rangely, close to the Utah line.

Total supply of oil available to the United Nations as a group, including Russia, during 1944 was 6,887,000 barrels per day.

E. Byers Emrick, Consulting Geologist. Oil and Natural Gas. Examinations, Reports, Appraisals. United States and Canada. Residence: Conrad, Montana.

New Stanolind Pipeline Gives Outlet to Fields in Wyoming

CHEYENNE, Wyo.—The State Public Service Commission has approved construction of one of the biggest 1945 Wyoming pipeline projects. The board granted the Stanolind Pipe Line company a certificate of public convenience and necessity to build 68 miles of eight-inch pipeline and 27 miles of six-inch pipeline in Central Wyoming for transportation of crude oil. The authority covers a pipeline from Maverick Springs, Steamboat Butte, Pilot Butte and Winkleman Dome, near Riverton, to a junction near Lysite with an Elk Basin-Casper pipeline. This certificate also permits construction of a line to take off from the main pipe and terminate at Riverton to furnish crude oil for a Husky Refining company refinery.

Is the Oil Industry To Be "Atomized?"

All the potentialities of an atomic bomb are inherent in some recent developments in Washington, as far as the oil industry is concerned. Most oil men know that congress continued for three more years the administration's authority to negotiate reciprocal trade agreements. Or, in plain English, to slash the import duties on foreign crude oil, among other things.

What most oil men do not fully realize, however, is that congress also wrote into the act permission for the state department to further reduce tariffs by 50 percent of the rate that was in effect at the beginning of 1945. Thus the new secretary of state has more power than any of his predecessors. The normal duty on imports of foreign oil was 21 cents a barrel. In 1939, an agreement with Venezuela cut the rate to 10½ cents a barrel, and this rate cut automatically applied to imports from all other "favored nation" countries. Now, Secretary Byrnes, if he sees fit, can again slash this tariff by 50 percent, or down to 5¼ cents a barrel. Until 1943, the amount of crude that could be imported was limited to 5 percent of domestic refinery runs during a preceding year. Today there is no such quota. So, if Byrnes wishes, he may cut the tariff to 5¼ cents a barrel, and allow unlimited amounts of foreign crude to enter this country. The 5¼ cents a barrel is a ridiculous figure, and so is the 10½ cents now in effect. The road is wide open to tremendous imports of peon produced foreign crude oil, with the blessing of congress, and with the tariff so low as to yield little in revenue to the nation, and to be absolutely no barrier to unrestricted importation. If you, as an oil producer, are interested in this situation with its ruinous implications, it is suggested you clip this ad and send it to your congressman, with suitable comment.

This advertisement is one of a series, sponsored by Montana independent oil producers, to acquaint the producers with facts vital to their welfare, and from time to time to acquaint the public with some of the problems now confronting this vital industry. These advertisements will be continued throughout 1945.

You Need This Map . . . (If you are interested in any aspect of the Montana oil industry.)

This complete and newly revised map, a very clear white print, 50 by 36 inches, gives at a glance information that otherwise would require weeks of research. And the information is CORRECT! Every well shown on the map has been checked against well logs on file with the Montana Oil Conservation board. Here's the information contained on this map: Location of all producing oil and gas fields. Location of all principal anticlines. Location of every wildcat ever drilled in the state, including name of the well, depth to which it was drilled, section in which it was drilled, and results obtained, as indicated by symbol. County boundaries and county seats. Township boundaries. Two cross sections showing geological formations, one taking in the area from Glacier Park eastward through the Bowdoin dome, the other taking in the section eastward from the Elk Basin field through the Baker-Glendive anticline. This map may be seen at our office. Price $5.25 for paper, $8.75 on linen backing. MONTANA OIL JOURNAL SUPPLY DEPARTMENT, 518 First Avenue South, Great Falls, Mont.

When you are down and out something usually turns up . . . your friends' nose.
https://chroniclingamerica.loc.gov/lccn/sn86075103/1945-08-25/ed-1/seq-2/ocr/
CC-MAIN-2022-21
refinedweb
1,544
62.27
Hi - this may be a very simple answer but I just cannot see it. I am also just beginning C programming by book and have no one to turn to but this very helpful forum. Thank you. Here is the source:

#include <stdio.h>

int main(int argc, const char * argv[])
{
    int i;
    for (i = 1; i <= 20; i++) {
        printf("The number %d is ", i);
        if ((i % 2) == 0)
            printf("even");
        else
            printf("odd");
        if ((i % 3) == 0)
            printf(" and is a multiple of 3");
        printf(".\n");
    }
    return 0;
}

Here is my problem with the above code: if the % operator is doing math, and i is being divided by 2 "leaving a remainder", is it that if the remainder is even it will be 0 and if odd it will be 1? Should I look at it like this?

i = 1 ---- so 1/2 = .5, thus .5 = odd --- remainder = odd
i = 2 ---- so 2/2 = 1, thus 1 = odd --- remainder = odd?

I am confused.
http://cboard.cprogramming.com/c-programming/134956-newbie-simple-math-using-%25-help.html
CC-MAIN-2014-52
refinedweb
162
85.73
BleuIO Firmware Update v2.0.5
July 2, 2021

Smart Sensor Devices is announcing firmware update v2.0.5 for BleuIO and Smart USB dongle 2.0. We invite all users to apply the updated firmware. The new firmware will be available to download from 2nd July 2021.

Added features:
- Added a new command, ATASPS, that lets you choose whether SPS responses are shown as ASCII or hex. ASCII is shown by default at startup.

Bug fixes:
- Fixed a bug where sending more than 244 characters at once before sending a carriage return (or pressing Enter) would make the dongle restart.

To meet the demands of users, the BleuIO team will continue to update and add new features. To find out more about the dongle's new firmware 2.0.5, please visit our Getting Started Guide.
https://www.bleuio.com/blog/bleuio-firmware-update-v2-0-5/
CC-MAIN-2022-40
refinedweb
146
82.75
Introduction

This example shows you how to create a WebSocket API server using Oracle Java. Although other server-side languages can be used to create a WebSocket server, this example uses Oracle Java to simplify the example code. This server conforms to RFC 6455, so it only handles connections from Chrome version 16, Firefox 11, IE 10 and higher.

First steps

WebSockets communicate over a TCP (Transmission Control Protocol) connection. Java's ServerSocket class is located in the java.net package.

ServerSocket

Constructor: ServerSocket(int port)

When you instantiate the ServerSocket class, it is bound to the port number you specified by the port argument. Here's how to implement what we have learnt:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Server {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(80);
        System.out.println("Server has started on 127.0.0.1:80.\r\nWaiting for a connection...");
        Socket client = server.accept();
        System.out.println("A client connected.");
    }
}

Socket

Methods:
getInputStream(): returns an input stream for this socket.
getOutputStream(): returns an output stream for this socket.

OutputStream

Methods:
write(byte[] b, int off, int len): writes len bytes from the specified byte array starting at offset off to this output stream.

InputStream

Methods:
read(byte[] b, int off, int len): reads up to len bytes of data from the input stream into an array of bytes.

Let us extend our example.

Socket client = server.accept();
System.out.println("A client connected.");
InputStream in = client.getInputStream();
OutputStream out = client.getOutputStream();
new Scanner(in, "UTF-8").useDelimiter("\\r\\n\\r\\n").next();

Handshaking

When a client connects to a server, it sends a GET request to upgrade the connection to a WebSocket from a simple HTTP request. This is known as handshaking.
import java.security.MessageDigest;
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.xml.bind.DatatypeConverter;

// translate bytes of request to string
String data = new Scanner(in, "UTF-8").useDelimiter("\\r\\n\\r\\n").next();
Matcher get = Pattern.compile("^GET").matcher(data);
if (get.find()) {
    // handshake response goes here
} else {
    // not an HTTP GET request
}

Creating the response is easier than understanding why you must do it in this way. You must:

- Obtain the value of the Sec-WebSocket-Key request header without any leading and trailing whitespace
- Concatenate it with the fixed GUID "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
- Compute the SHA-1 hash of the result, Base64-encode it, and send it back as the value of the Sec-WebSocket-Accept response header

if (get.find()) {
    Matcher match = Pattern.compile("Sec-WebSocket-Key: (.*)").matcher(data);
    match.find();
    byte[] response = ("HTTP/1.1 101 Switching Protocols\r\n"
            + "Connection: Upgrade\r\n"
            + "Upgrade: websocket\r\n"
            + "Sec-WebSocket-Accept: "
            + DatatypeConverter.printBase64Binary(
                    MessageDigest.getInstance("SHA-1")
                            .digest((match.group(1) + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11")
                                    .getBytes("UTF-8")))
            + "\r\n\r\n")
            .getBytes("UTF-8");
    out.write(response, 0, response.length);
}
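As a sanity check on the handshake math (not part of the original article), the accept-key computation can be verified against the sample handshake given in RFC 6455, which pairs the client key "dGhlIHNhbXBsZSBub25jZQ==" with the accept value "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=". This sketch uses java.util.Base64 rather than DatatypeConverter, since the latter was removed from the JDK in Java 11:

```java
// Computes Sec-WebSocket-Accept from a Sec-WebSocket-Key per RFC 6455.
import java.security.MessageDigest;
import java.util.Base64;

public class AcceptKey {
    // Fixed GUID defined by RFC 6455 for the handshake.
    static final String GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    public static String accept(String key) throws Exception {
        byte[] sha1 = MessageDigest.getInstance("SHA-1")
                .digest((key + GUID).getBytes("UTF-8"));
        return Base64.getEncoder().encodeToString(sha1);
    }

    public static void main(String[] args) throws Exception {
        // Sample key from RFC 6455 section 1.3
        System.out.println(accept("dGhlIHNhbXBsZSBub25jZQ=="));
        // prints s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
    }
}
```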
Decoding algorithm:

decoded byte = encoded byte XOR ((position of encoded byte BITWISE AND 0x3)th byte of key)

Example in Java:

byte[] encoded = new byte[] {(byte) 198, (byte) 131, (byte) 130, (byte) 182, (byte) 194, (byte) 135};
byte[] key = new byte[] {(byte) 167, (byte) 225, (byte) 225, (byte) 210};
byte[] decoded = new byte[encoded.length];
for (int i = 0; i < encoded.length; i++) {
    decoded[i] = (byte) (encoded[i] ^ key[i & 0x3]);
}
// decoded now holds the bytes of "abcdef"
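The text above only covers decoding client messages; for completeness, here is a sketch of the reverse direction, which the article does not show. Per RFC 6455, server-to-client frames are not masked, and this sketch assumes short text payloads (under 126 bytes) so the length fits in the second byte:

```java
// Sketch: building an unmasked server-to-client text frame (RFC 6455).
// Assumes the payload is shorter than 126 bytes.
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class FrameWriter {
    public static byte[] textFrame(String message) {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream frame = new ByteArrayOutputStream();
        frame.write(0x81);           // FIN = 1, opcode 0x1 (text)
        frame.write(payload.length); // mask bit 0; length < 126 assumed
        frame.write(payload, 0, payload.length);
        return frame.toByteArray();
    }

    public static void main(String[] args) {
        byte[] f = textFrame("abcdef");
        System.out.println((f[0] & 0xFF) + " " + (f[1] & 0xFF)); // prints 129 6
    }
}
```

Note how the first two bytes mirror the 129 and 134 discussed above: 129 is the same FIN/opcode byte, and 6 is the length without the mask bit that the client would set (134 = 128 + 6).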
https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_a_WebSocket_server_in_Java
CC-MAIN-2017-13
refinedweb
649
52.15
Hello, I don't have any background in development, especially in Java. I would like to learn how to read and use the SoapUI API from the SoapUI API docs: things like testRunner, context, messageExchange. How do I know which method or which variable I can use for, e.g., the context variable or the testRunner variable? Usually I see someone write something like context.expand, or testRunner.testCase.testSuite. Where can I get this information? I would like to understand how people know they can use expand on the context variable, etc. Many thanks.

Solved! Go to Solution.

Context is defined at project, test suite, and test case levels, so the object can vary depending upon where it is being accessed. And similarly TestRunner. To your question, both are not the same; testRunner is a member variable of the context object. Applies to all levels.

Hi, this obviously isn't a complete answer, just my personal view. Before learning about SoapUI and Groovy scripting and writing a book about it, I had a lot of Java / API / frameworks experience, but I would say the online help docs can be quite useful and were where I started. Basically I learned by example: I created lots of Groovy test steps and had a good hack about to learn how to extract / insert data to and from objects, requests and responses... Also, there are various blogs etc., and there was one book at the time. I never did the certification; not sure how much scripting is in that, possibly someone else could say? Any specific examples / explanations of variables / scripts, then let me know... Cheers, Rupert

Though the question is too generic, it is near to my heart as I was in the same situation when I started using it. Initially I searched on the net; sometimes I was lucky to get the info I needed then, and sometimes not. Then one fine day I happened to watch the video, and the thirst for more started from there. Hope you too get the benefit out of it.

Hi Rupert, nmrao, many thanks for your replies; also watching the YouTube vid.
Actually my biggest confusion is related to the path, class and available methods. Like in the above YouTube vid for the 2nd webinar, they write println workspace. I am trying the same syntax in a Groovy test step and got an error: no such property workspace for class. I know in a Groovy test step there are the variables log, context, testRunner; still I don't know the methods I can use for each of them.

I am trying:
log.info context.metaClass.methods*.name
and got the error: cannot get property methods.

I am trying:
log.info testRunner.metaClass.methods*.name
and got the below result:
[equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait, cancel, fail, getLog, getReason, getRunContext, getStartTime, getStatus, getTestRunnable, getTimeTaken, isRunning, setRunContext, start, waitUntilFinished, getResults, getRunContext, getTestCase, gotoStep, gotoStepByName, runTestStep, runTestStepByName, setMockRunContext]

Still I am confused. Usually when I am googling some script, people write like this: testRunner.testCase.testSuite... From the above metaClass result I can't see that after testRunner I can write testRunner.testCase. How do people know about this?

In the video, it is explained how one can explore the SoapUI API. Below is a Groovy script test step. If you notice above, the following variables are accessible: log, context, testRunner. Now, if you want to know details about the testRunner variable, just put the following statement in the Groovy step and execute it, so that you know the type of this variable:

log.info testRunner

You may see output like:
Sun Nov 08 00:10:13 IST 2015:INFO:com.eviware.soapui.impl.wsdl.panels.support.MockTestRunner@43810d51

Now open the class MockTestRunner in SoapUI's API docs, where you can see the allowed methods on testRunner, say getTestCase(). As you mentioned, testRunner.testCase and testRunner.getTestCase() are the same; testRunner.testCase is the groovified version (it drops the explicit get and parentheses), and the other one is the Java way.
So, like above, you can find any object's class and look into the API docs to get the methods. Similarly, if you look at the project load script, test suite setup and teardown, and test case setup and teardown scripts (all of them are Groovy type), different variables are available and are listed like we saw for the Groovy step.

Thanks nmrao, very clear explanation; I am starting to understand how to read the API document. I would like to ask you again:

log.info testRunner.testCase.testSuite.project.workspace.getProjectByName("myProject").testSuites["yourTestSuite"].testCases["TestCase 1"].testSteps["delete"].testRequest.responseContentAsXml

Is there any possibility to shorten the above script?

log.info testRunner.testCase.testSuite.project.workspace.getProjectByName("myProject").testSuites["yourTestSuite"].testCases["TestCase 1"].testSteps["delete"].run(testRunner, context)

But I cannot understand the meaning of (TestCaseRunner runner, TestCaseRunContext runContext) inside the run method.

Regards,

#1. Yes. For example, context.testCase.name and context.getTestCase().getName() are the same, and here there are no arguments involved. The same applies to set, although it mostly takes some argument; it will be a little different if there are arguments to be passed for either set or get methods.

How to set a test case property:
context.testCase.setPropertyValue('PROPERTY_NAME', 'PROPERTY_VALUE')
Can also be expressed as:
context.testCase.properties['PROPERTY_NAME'].value = 'PROPERTY_VALUE'

How to get a test case property:
context.testCase.getPropertyValue('PROPERTY_NAME')
or
context.testCase.properties['PROPERTY_NAME'].value

At least, the above are some variants of the set / get methods when groovified.

#2. Glad to know that. If you want to access artifacts from another test suite, you must at least have the project object (by accessing its successive parents). Here you went one level too far, i.e. workspace, which is not required.
Get the project object:
def currentProject = context.testCase.testSuite.project

Then use currentProject later for all references if you don't want to write the full path:
currentProject.testSuites["yourTestSuite"].testCases["TestCase 1"].testSteps["delete"].testRequest.responseContentAsXml

#3 is also the same as above.

Many thanks nmrao for your explanation. If my understanding is correct, do context and testRunner have the same meaning? When should I use context and when should I use testRunner? Hopefully you're not getting bored with my questions 🙂

Context is defined at project, test suite, and test case levels, so the object can vary depending upon where it is being accessed. And similarly TestRunner. To your question, both are not the same; testRunner is a member variable of the context object. Applies to all levels.

Thanks nmrao for the very long explanation; I have a better understanding now of the API and of the context & testRunner variables 🙂
https://community.smartbear.com/t5/SoapUI-Open-Source/How-to-read-and-use-SoapUI-API-for-non-Dev-people/m-p/109724
CC-MAIN-2020-40
refinedweb
1,081
57.57
C Programming Tutorial-Chapter 1 ============================ Things Covered In Chapter 1 Who this tutorial is for An introduction to C What you will need for this tutorial Your first C program (hello.c) Analysis of the hello program ============================ Who this tutorial is for. This tutorial assumes the following: 1. That you own a computer or have access to one. 2. That you know the basics of using your computer, i.e., you know how to make a folder (directory), how to create, copy and delete files. 3. That you have access to a text editor (edit, notepad, pico, vi, emacs, anything) and you know how to use it. 4. That you have heard a lot about C programming and that you want to learn how to tell your computer what to do. Introduction OK, so you're a hacker. You may be the best there is. You may know Perl, Shell Scripting, Batch File Programming, the lot. But unless you know C, your knowledge is incomplete. Why? Because most serious exploits, nearly all hacking tools (including Linux/UNIX), even MS-Windows are written in C or its successor, C++. C was created in the ancient days of computing at Bell Labs by two people called Brian Kernighan and Dennis Ritchie. It was born through a need for a portable yet flexible and powerful language. In those days most code was written in assembly language. But that meant that it wasn't possible for programmers to port code to other platforms. A C program can be taken and compiled on any system under the sun that has a C compiler and it will work (almost) exactly the same anywhere. This comes at a sacrifice of speed, however, but today's programs are usually way too complex to be written in assembly language and C is arguably the next fastest alternative. So what are you waiting for, read on! What you will need for this tutorial 1. A computer (duh). 2. A C compiler (the program that converts your C program into an executable file).
Decent C compilers can be found for the following platforms:

Compiler: OS
Borland C++ 5.5: Windows
Turbo C/C++ (various versions; signup required): DOS/Windows
Microsoft Visual C++: Windows
GNU C/C++: UNIX/Linux
Bloodshed C++: Windows

Your first C program

Now that you have your C compiler set up, start up your text editor and type in this program exactly as you see it here.

/* hello.c
   Classical first C program */
#include <stdio.h>

void main()
{
    printf("Hello World!\n");
}

That's it! Save the file as "hello.c". Now all you have to do is compile and run the program.

Compile

For Bloodshed C++/Turbo C++/Microsoft Visual C++, select compile from the compile menu. For Borland C++ 5.5, start the MS-DOS prompt, change to the folder where you've saved the program and then type
bcc32 hello.c
For GNU C++, type
gcc -o hello hello.c
If you see any errors, go back and check that the program is exactly like this (you might have forgotten the semicolon at the end of the printf line). Then save and compile again.

Run

For Bloodshed C++/Turbo C++/Microsoft Visual C++, select run from the run menu. For Borland C++ 5.5, start the MS-DOS prompt, change to the folder where you've saved the program (if you haven't already) and then type
hello
For GNU C++, type
./hello

You have just successfully written your first C program! Easy huh?

Analysis of the hello program

Now let us analyze the hello.c program line by line.

Line 1,2: /* hello.c Classical first C program */
This is called a comment. The compiler completely ignores everything between the /* and the */. Comments can be any number of lines long and are used to make your program understandable to yourself.

Line 3: #include <stdio.h>
This tells the compiler to include the contents of the file "stdio.h" in the program. stdio stands for standard input and output, and this is the file which declares the printf function used later in the program.
Without the #include statement (try it) the compiler would give you an error because it doesn't know what printf is.

Line 4: void main()
This is something that will appear in EVERY C or C++ program you ever write. This is called the main function. When the program runs, the computer first looks at the stuff at the beginning (anything beginning with #) and then jumps to the main function. Whatever you write here is what the computer will execute first.

Line 5,7: { .... }
The braces tell the compiler that whatever is contained within them is part of the main function (in this case) or, more generally, that the stuff enclosed in braces is a unit.

Line 6: printf("Hello World!\n");
Prints Hello World! on the screen. The printf function prints whatever is enclosed in quotes on the screen. The \n tells printf to go to the next line.

So that's it for today's lesson. Please feel free to comment on this post. Constructive criticism will be appreciated. Also I'd like a few suggestions for the next chapter. You could post any doubts related to the tutorial here.

Great job. Isn't it amazing that some of the world's most gifted coders started with 'hello'? Well, K&R put that into us, didn't they? Also, check out C for Dummies; the first program there is a masterpiece. It says:
printf("Goodbye, cruel world!\n");

Great stuff, thanks cgkanchi, keep these tutorials coming. I successfully did this one... now onto chapter 2...
Greg
"Do you know what people are most afraid of? What they don't understand. When we don't understand, we turn to our assumptions." -- William Forrester

Great!! As you know, I'm writing the C++ tutorials; it's been a long time, but I've been busy learning it. I have to learn it for Linux too... hmm... I love the format you've written it in... cheers, MR

Great tutorial! But I wonder if anyone can help me.
I got my hands on C++ exploits that include the following libraries: <netdb.h>, <netinet/in.h>, <sys/socket.h>, <unistd.h>. It seems to me that they are only available on Unix systems, but can they also be found in Windows C/C++ compilers? If they can, then which ones? Perhaps there are some equivalents to them? Thank you.

I think, though I'm not sure, that <winsock.h> should be all that you have to include for network programming in C/C++. Sorry if I'm wrong though, I just suck at network programming (mainly coz I've never tried it).

Good job done kanchi... great post... you have given a very good start for newbies... well, if anybody wants more information on the language just click here. Hope this will help... intruder...

oohh that was fun.. I'm really startin to like dis.. lolz can't wait for the next part.. thanx for the tute

.............. bring chapter 2; at least I can understand what u are saying

Genius of the mind is not necessarily from the mind of a genius
http://www.antionline.com/showthread.php?136916-C-Programming-Tutorial-Chapter-1
CC-MAIN-2017-30
refinedweb
1,210
76.42
All, could you give me the Mifare DESFire Features and Hints V1.0 document at my email address [email protected]? I cannot solve reading a DESFire EV1 using ISO 14443. I already select the card and get the 7 digit serial number, and that works. But when I do the RATS command and APDU command it doesn't work. I am using an ACR120U reader and I already succeeded in reading a Mifare Philips 4K using the API, but when I try it on the DESFire EV1 it doesn't work. These are the steps that I have done:

import acs.jni.ACR120U;

public class ConnectDesfire {
    ACR120U deviceReadDesf;
    byte[] rSerialNumber;

    public ConnectDesfire() {
        deviceReadDesf = new ACR120U();
        rSerialNumber = new byte[10];

        short conn = deviceReadDesf.open(ACR120U.ACR120_USB1); // success
        if (conn != 0)
            System.out.println("Error with code " + conn);

        short valueSelect = deviceReadDesf.select(conn, new byte[1], new byte[1], rSerialNumber); // success
        if (valueSelect != 0)
            System.out.println("Error with code " + valueSelect);

        // RATS
        byte fsdi = 1;
        byte[] atslen = new byte[1];
        byte[] ats = new byte[16];
        short resRats = deviceReadDesf.rATS(conn, fsdi, atslen, ats); // fails, don't know why: error -3030
        if (resRats != 0)
            System.out.println("Res Rats failed = " + resRats);

        // APDU
        byte[] xLen = new byte[1];
        byte[] xData = new byte[16];
        byte[] rLen = new byte[1];
        byte[] rData = new byte[16];
        xLen[0] = 0;
        xData[0] = (byte) Integer.valueOf("60", 16).shortValue();
        rLen[0] = 0;
        rData[0] = 0;
        short resAPDU = deviceReadDesf.xchAPDU(ACR120U.ACR120_USB1, true, xLen, xData, rLen, rData);
        if (resAPDU != 0)
            System.out.println("Failed xchAPDU val = " + resAPDU); // fails, error -13... sorry, I forgot the code

        closeConnection(ACR120U.ACR120_USB1);
    }

    public static void main(String[] args) {
        ConnectDesfire a = new ConnectDesfire();
    }
}

So could you share the document Mifare DESFire Features and Hints V1.0 with me?

- Hi all, can someone send me the Mifare DESFire Features and Hints V1.0? My email address is [email protected]. Thanks a lot!! Kelvin

- Hi.
Please send me the Mifare DESFire Features and Hints V1.0. My email address is green_troll(at)rambler.ru. Best regards.

- Hi, I need some help with Mifare DESFire application commands. Can someone please send this document to me: "Mifare DES Fire Features and Hints V1.0". Thanks

- Hi, I'm also interested in the document "Mifare DES Fire Features and Hints V1.0". Can someone give me a hint where to get it, or email it to me? Thanks

- Can you please send me the DESFire documents to [email protected]? "Mifare DES Fire Features and Hints v1.0" also, please.

- Hi, I need to access a DESFire card through the PC/SC interface and I can't find any useful information. It would be nice if somebody could send me the "Mifare DES Fire Features and Hints V1.0" document and some information on which APDUs to use, to s25534 (at) fh-aschaffenburg.de. Regards, Lukas

- Hi choege, can you email me any document about the DESFire card? I need it for my final project. I tried searching Google but I didn't find "Mifare DES Fire Features and Hints V1.0" or anything like that. My email: suriva.25 at gmail dot com. Thanks in advance, Suriva

- M094510 - Mifare DESFire Features and Hints
M075040 - DESFire Data Sheet
Both documents will help you work on the DESFire card. To obtain the documents you should go through NXP's rep or the vendor you got the card from. They are not public documents.

- Hello, can someone please send this document: "Mifare DES Fire Features and Hints V1.0" to email [email protected]? Thanks

This discussion has been closed.
https://community.oracle.com/tech/developers/discussion/comment/7295745/
From: David Abrahams (dave_at_[hidden])
Date: 2004-12-29 14:42:22

Jonathan Turkanis wrote:
> David Abrahams wrote:
>>> );
>>> }
>>
>> Pshaw.
>
> Huzzah!
>
>> After a few pages C++ Template Metaprogramming has an example
>> that begins with:
>>
>>     namespace mpl = boost::mpl; // namespace alias
>>
>> which it then follows with a note that says
>>
>>     "Many examples in this book will use ``mpl::`` to indicate
>>     ``boost::mpl::``, but will omit the alias that makes it legal
>>     C++."
>
> I know how I would explain the convention; I'm just afraid people won't
> notice it, and will copy and paste from the examples and find they don't
> work. It's a bit easier in a printed book, in which there's some
> expectation that the material will be read from the beginning, than in a
> heavily hyperlinked web document, where readers are known to skip
> immediately to somewhere in the middle.

Then use some automated processing to ensure that every one of your
examples begins with the single line:

    namespace io = boost::iostreams;

And then your examples will become much more readable: io::foo instead of
boost::iostreams::foo.

--
Dave Abrahams
Boost Consulting

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/12/78174.php
Malcolm,

Thanks for your review! I have made the necessary changes and will attach the patch shortly in my next message. Please see my comments below, which are reflected in my patch.

> * The usage message could be a lot clearer. I couldn't determine what
> the effect of some of the switches would be without using them.

OK. I have added adequate documentation to those messages.

> * The output Atom file declares the XHTML namespace, but doesn't use it.

Yes, you are correct. I no longer declare the XHTML namespace.

> * There's a fair bit of unnecessary trailing whitespace in the file.

I have corrected this.

> In my testing, the script generated a file called '.atom', not
> REPOS_NAME.atom.

I'm unable to reproduce this issue. Can you please provide a test case?

> The generated URL for each item is ITEM_URL?rev=N. That's a bit odd
> (it assumes a particular URL scheme), but if we decide to keep it,
> we should document it here.

Yes, I have now documented it.

> Atom feeds require both a feed URL and item URL - or perhaps you can drop
> the feed URL if every item has a URL, I'm not exactly sure. Whichever it
> is, there's no point in allowing the user to be able to generate invalid
> Atom feeds (with URLs such as 'None', as currently happens).

The feed URL and item URL are both required in the Atom feed. The domain part of the feed URL and the item URL may differ, so we can't drop the feed URL. If the user doesn't pass the feed URL or item URL, it is set to "".

> * The generated feeds don't pass the feed validator[1], since they lack an
> atom:link entry with rel="self".

Yes, you are correct, but I believe it is just a warning. Moreover, per the Atom syndication format specified in [1], this entry is optional. Anyhow, I tried to add this entry with a 'rel="self"' attribute, but the warning still appears. I'm unsure how to overcome this warning message.

> Finally, I needed to apply the attached patch to get this to work
> at all. I'm assuming you're using something outside of the standard
> Python libraries for XML pretty-printing, so I just disabled that bit.
> I think you should also be using a different method to get at the
> DOMImplementation object (see the patch).

Thanks for the patch. I've incorporated it into mine.

[1] Atom syndication format - Introduction

--
Regards,
Bhuvaneswaran

This is an archived mail posted to the Subversion Dev mailing list.
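The pretty-printing and DOMImplementation points above concern the standard library's xml.dom module. As a hedged illustration (this is not the patch under discussion, and the feed title is invented), the usual way to obtain the DOMImplementation object — rather than instantiating one directly — and build an Atom skeleton is:

```python
from xml.dom import getDOMImplementation

ATOM_NS = "http://www.w3.org/2005/Atom"

# getDOMImplementation() is the standard-library entry point for a
# DOMImplementation object.
impl = getDOMImplementation()
doc = impl.createDocument(ATOM_NS, "feed", None)
feed = doc.documentElement
# minidom does not serialize the namespace from createDocument by itself,
# so declare it explicitly on the root element.
feed.setAttribute("xmlns", ATOM_NS)

title = doc.createElement("title")
title.appendChild(doc.createTextNode("Repository commit feed"))
feed.appendChild(title)

xml = doc.toxml()
print(xml)
```

Entry elements, links and the rel="self" atom:link discussed above would be appended to the feed element in the same way.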
https://svn.haxx.se/dev/archive-2006-07/0739.shtml
waitid(2)

NAME
     waitid - wait for child process to change state

SYNOPSIS
     #include <sys/types.h>
     #include <wait.h>

     int waitid(idtype_t idtype, id_t id, siginfo_t *infop, int options);

DESCRIPTION
     The idtype and id arguments specify which children waitid is to wait
     for. If idtype is P_PID, waitid waits for the child whose process ID
     is equal to (pid_t)id. If idtype is P_PGID, waitid waits for any
     child whose process group ID is equal to (pid_t)id. If idtype is
     P_ALL, waitid waits for any children and id is ignored.

     The options argument is used to specify which state changes waitid
     is to wait for. It is formed by an OR of any of the following flags:

     WEXITED      Wait for process(es) to exit.

     WTRAPPED     Wait for traced process(es) to become trapped or reach
                  a breakpoint (see ptrace(2)).

     WSTOPPED     Wait for and return the process status of any child
                  that has stopped upon receipt of a signal.

     WCONTINUED   Return the status for any child that was stopped and
                  has been continued.

     WNOHANG      Return immediately.

     WNOWAIT      Keep the process in a waitable state.

     infop must point to a siginfo_t structure, as defined in siginfo(5).
     The siginfo_t is filled in by the system with the status of the
     process being waited for.

     waitid fails if one or more of the following is true:

     EFAULT       infop points to an invalid address.

     EINTR        waitid was interrupted due to the receipt of a signal
                  by the calling process.

     EINVAL       An invalid value was specified for options.

     EINVAL       idtype and id specify an invalid set of processes.

     ECHILD       The set of processes specified by idtype and id does
                  not contain any unwaited-for processes.

DIAGNOSTICS
     If waitid returns due to a change of state of one of its children, a
     value of 0 is returned. If WNOHANG was specified, 0 may also be
     returned, indicating no error. Otherwise, a value of -1 is returned
     and errno is set to indicate the error.

SEE ALSO
     exec(2), exit(2), fork(2), intro(2), pause(2), ptrace(2),
     sigaction(2), signal(2), wait(2), siginfo(5)
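On Linux and most Unix systems, Python's os module exposes this same system call as os.waitid, which makes the semantics easy to try interactively. A small Unix-only sketch (it uses os.fork):

```python
import os

# Spawn a child that exits with status 7, then reap it with waitid(2).
pid = os.fork()
if pid == 0:
    os._exit(7)  # child: terminate immediately with status 7

# P_PID + WEXITED: wait for that specific child to exit.
info = os.waitid(os.P_PID, pid, os.WEXITED)
print(info.si_pid == pid)             # True: the child we waited for
print(info.si_status)                 # 7: the child's exit status
print(info.si_code == os.CLD_EXITED)  # True: it terminated normally
```

The returned object mirrors the siginfo_t fields described above (si_pid, si_status, si_code and so on).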
https://nixdoc.net/man-pages/IRIX/man2/waitid.2.html
When to Use a Pie Chart

Jun 10 • 5 min read
Key Terms: pie chart

Pie charts illustrate the relative sizes of data. There should be a finite and small number of labels. What is a label? A label is a response value for a data point collected. For example, let's say we surveyed 50 friends and asked them their favorite type of music. The responses we allow are Hip-Hop, Pop, Country, R&B, Rock, Jazz, Indie and Other. Each response is a type of music and is considered a label.

Below, I'll walk through several practical examples to illustrate the proper use of pie charts.

Import Modules

import matplotlib.pyplot as plt
%matplotlib inline

Example: Favorite Music Responses

To continue with the example above, below is a sample of the responses I received. Given all my responses, I calculated the total count for each music type and will use this data to create a pie chart.

Record Music Responses in Python Lists

music_type = ['Hip-Hop', 'Pop', 'Country', 'R&B', 'Rock', 'Jazz', 'Indie', 'Other']
count_responses_per_music_type = [26, 4, 2, 2, 6, 2, 2, 6]

In the pie chart's legend, I want to include the count of responses for each music type. For each response count, I'll create a string and append the word "responses" so I can clearly annotate my pie chart.

count_responses_per_type = [str(response) + " responses" for response in count_responses_per_music_type]
count_responses_per_type

['26 responses', '4 responses', '2 responses', '2 responses', '6 responses', '2 responses', '2 responses', '6 responses']

Pie Chart of Favorite Music Type

In the pie chart below, the percentage of total responses for each music type is calculated automatically by our Python library with the use of the autopct argument.

colors = ['gainsboro', 'bisque', 'paleturquoise', 'aliceblue', 'plum', 'lemonchiffon', 'springgreen', 'lightsalmon']
fig = plt.gcf()
plt.rcParams['font.size'] = 15.0
fig.set_size_inches(8, 10)
plt.pie(x=count_responses_per_music_type, labels=music_type, autopct='%i%%', radius=15, colors=colors)
plt.title("Favorite Music Responses of My Friends", fontsize=20, y=1.02)
plt.axis('equal')  # makes a circular pie chart (not oval)
plt.legend(bbox_to_anchor=(1, 1), labels=count_responses_per_type);

Explanation of Music Responses Chart

We can easily tell from the pie chart that the most popular music type among my friends is hip-hop: 52% of my friends (26 of 50) said it's their favorite type. Other and Rock are tied for second, with several other types in close running.

Example: Website Sessions by Device

On my website, I use Google Analytics to help me track basic statistics. In the past 31 days, there have been 200 sessions on my site. A session is just a record of someone visiting my site via any device. Google Analytics tracks what device was used to visit my site; a device could be a desktop computer, mobile phone or tablet.

Below is the data over the last 31 days of visits to my site. I want to know what percentage of sessions each device type accounts for out of all sessions. Therefore, I must calculate a new column to show the percentage of sessions by device type. The total number of sessions is 200. For each device type, divide its number of sessions by the total sessions and multiply by 100. This gives us the percentage of sessions by device type. These percentage values are the main focus of pie charts.

Record Website Sessions Data in Python Lists

devices = ['Desktop', 'Mobile', 'Tablet']
number_of_sessions = [170, 20, 10]

In the pie chart's legend, I want to include the number of sessions for each device. For each value by device, I'll create a string and append the word "sessions" so it's clear.

number_of_sessions_with_units = [str(session_value) + " sessions" for session_value in number_of_sessions]
number_of_sessions_with_units

['170 sessions', '20 sessions', '10 sessions']

Pie Chart of Breakdown of Website Sessions by Device

colors = ['sandybrown', 'powderblue', 'thistle']
fig = plt.gcf()
fig.set_size_inches(5, 7)
plt.pie(x=number_of_sessions, labels=devices, autopct='%i%%', radius=7.5, colors=colors)
plt.title("Breakdown of dfrieds.com Website Sessions over Past 31 Days", fontsize=18, y=1.02)
plt.axis('equal')  # makes a circular pie chart (not oval)
plt.legend(bbox_to_anchor=(1, 1), labels=number_of_sessions_with_units);

Explanation of Website Sessions by Device Pie Chart

The pie chart reveals that 85% of sessions over the past 31 days were viewed on desktop.
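The percentage calculation described above — each device's session count divided by total sessions, times 100 — is the arithmetic autopct performs for each wedge, and it is easy to verify in plain Python:

```python
# Sessions by device over the past 31 days: Desktop, Mobile, Tablet.
number_of_sessions = [170, 20, 10]

total_sessions = sum(number_of_sessions)
percentages = [100 * n / total_sessions for n in number_of_sessions]

print(total_sessions)  # 200
print(percentages)     # [85.0, 10.0, 5.0]
```

These match the wedge labels in the chart: 85% desktop, 10% mobile, 5% tablet.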
https://dfrieds.com/data-visualizations/when-use-pie-chart
TL;DR

Intro

Filtering data is one of the most common features of any data-facing application, whether it's a front-end application or a back-end application. A filter function is used to find records in a table or a dataset that meet certain criteria. For example, if you have a list called Books in a webpage, and you want to show only books that are currently on sale, you could accomplish this using a filter function.

What are we building exactly?

In this short tutorial we are building a single-page web app with two parts. The first part will display a list of students, where each record has the following shape:

student: {
  id: unique identifier,
  firstName: string,
  lastName: string,
  dob: date (date of birth),
  gpa: number,
  address: string,
  city: string,
  county: string,
  state: string,
  zip: number
}

The second part will be the filters that the user can use to filter the data. Let's assume that the user can filter by any field displayed in the list. So, to build generic filter functions that can be used with multiple fields, we will group these filters by data type, and each data type will allow certain comparison operators. The following table illustrates this logic:

string: [contains, startsWith]
date: [equal, greaterThan, lessThan, between]
number: [equal, greaterThan, lessThan, between]
lookup: [is, isNot]

So basically we can build 12 comparison functions that we can use with all our fields, or any fields that we may add in the future. So let's get started with our app and see how we can build these features.

Starting your Vue app

To start a new app, you will need to install Vue, open a new terminal window and type the following:

# initiating new Vue app
vue create col-admin

Vue CLI v3.0.0-rc.9
┌───────────────────────────┐
│ Update available: 3.5.1   │
└───────────────────────────┘
? Please pick a preset: Manually select features
? Check the features needed for your project: Babel, Router, Vuex, Linter
? Pick a linter / formatter config: Standard
? Pick additional lint features: Lint on save
? Where do you prefer placing config for Babel, PostCSS, ESLint, etc.? In dedicated config files
? Save this as a preset for future projects? No

# adding other js libraries
vue add vue-cli-plugin-vuetify

? Choose a preset: Default (recommended)
✔ Successfully invoked generator for plugin: vue-cli-plugin-vuetify

# adding the backend library
npm install --save cosmicjs

After this we should have our starter application ready to be customized. If you want to run the app, open a terminal window, type npm run serve, and then open the application in your browser from the app's default url, and you're good to go to the next step.

Setting up your Rest API with Cosmic JS

As we mentioned earlier, the goal of this app is to display a list of students, and then use the filter functionality to narrow down the list. For this project we will use Cosmic JS to store our data, and also to serve the data using the built-in Rest API that comes with Cosmic JS.

- Add a new bucket from the development dashboard.
- Add a new Object Type from the dashboard, and specify the following attributes for this object type.
- From the object type metafields tab, add the following fields: SID: text field, firstName: text field, lastName: text field, DOB: text field, GPA: text field, Address: text field, City: text field, County: text field, State: text field, Zip: text field.
- Add some data to the Students table. If you want, you can copy my data table from Cosmic JS by importing my col-admin-bucket under your account. I have inserted about 300 records, so you don't have to type all this information manually.
- Access your Cosmic JS data via the built-in Rest API from this url:
- Take a look at the Cosmic JS API documentation for a detailed list of all the APIs available to you.

After this you should be able to access your backend data via the Rest API.

Add data store using Vuex

Under our project root folder, let's add a new folder ./src/store/ and move ./src/store.js under the store folder.
We will also need to create a new file, ./src/api/cosmic.js:

const Cosmic = require('cosmicjs')

const config = {
  bucket: {
    slug: process.env.COSMIC_BUCKET || 'col-admin',
    read_key: process.env.COSMIC_READ_KEY,
    write_key: process.env.COSMIC_WRITE_KEY
  }
}

module.exports = Cosmic().bucket(config.bucket)

This small script will be used as the Cosmic JS connection object. We will also need to create a new file, ./src/store/modules/cosmic.js, for all the Cosmic JS data-related functions:

import Cosmic from '../../api/cosmic' // used for Rest API

const actions = {
  async fetchStudents ({commit, dispatch}) {
    const recordLimit = 25
    let skipPos = 0
    let fetchMore = true
    while (fetchMore) {
      try {
        const params = {
          type: 'students',
          limit: recordLimit,
          skip: skipPos
        }
        let res = await Cosmic.getObjects(params)
        if (res.objects && res.objects.length) {
          let data = res.objects.map((item) => {
            return {
              ...item.metadata,
              id: item.metadata.sid,
              firstName: item.metadata.firstname,
              lastName: item.metadata.lastname
            }
          })
          commit('ADD_STUDENTS', data)
          commit('SET_IS_DATA_READY', true)
          // if the fetched record count is less than 25, we reached the end of the list
          if (res.objects.length < recordLimit) fetchMore = false
        } else {
          fetchMore = false
        }
        skipPos += recordLimit
      } catch (error) {
        console.log(error)
        fetchMore = false
      }
    }
    dispatch('fetchStates')
  }
}

export default {
  actions
}

So far, we have only one function, fetchStudents. This function calls Cosmic JS's getObjects to pull 25 records at a time, and it does this inside a while loop until no more records can be found. We can identify the end of the data when the fetched record count is less than 25. After fetching all the data from the Rest API, we call the ADD_STUDENTS mutation to store these records inside a Vuex state variable. For more info about the Vuex store, please read the documentation.

There is another call at the end of this function, to fetchStates. This function simply loops through all student records, collects the unique state codes, and stores them in the states variable. This is used later by the filter-by-state dropdown component. Here is the rest of the Vuex store:

import Vue from 'vue'
import Vuex from 'vuex'
import _ from 'underscore'
import cosmicStore from './modules/cosmic'

Vue.use(Vuex)

export default new Vuex.Store({
  state: {
    isDataReady: false,
    students: [],
    states: []
  },
  getters: {
    students (state) { return state.students },
    isDataReady (state) { return state.isDataReady },
    states (state) { return state.states }
  },
  mutations: {
    SET_STUDENTS (state, value) { state.students = value },
    SET_IS_DATA_READY (state, value) { state.isDataReady = value },
    ADD_STUDENTS (state, value) { state.students.push(...value) },
    SET_STATES (state, value) { state.states = value }
  },
  actions: {
    fetchStates ({commit, state}) {
      let states = []
      states = _.chain(state.students).pluck('state').uniq().sortBy((value) => value).value()
      commit('SET_STATES', states)
    }
  },
  modules: {
    cosmicStore
  }
})

Application styling with Vuetify

For this project we will use Vuetify as our front-end component library. This is very helpful, especially if you like to use Google Material Design in your project without a lot of overhead. Plus, Vuetify is awesome because it has tons of built-in UI components that are fully loaded.

After adding Vuetify to your project using the Vue CLI add command, you can just reference Vuetify components from your page templates. Let's take a look at the App.vue main layout.
<template>
  <v-app>
    <v-toolbar app>
      <v-icon>school</v-icon>
      <v-toolbar-title>{{ title }}</v-toolbar-title>
      <v-spacer></v-spacer>
    </v-toolbar>
    <v-content>
      <router-view/>
    </v-content>
    <v-footer :fixed="fixed" app>
      <span>&copy; 2017</span>
    </v-footer>
  </v-app>
</template>

<script>
export default {
  name: 'App',
  data () {
    return {
      fixed: false,
      title: 'College Query'
    }
  }
}
</script>

In the template above you can see that our application page layout has three sections:

- v-toolbar: the top toolbar component
- v-content: contains the inner content of any page
- v-footer: holds the app copyright and contact info

Adding application views and components

You may notice that under the ./src folder there are two folders:

- ./src/components: used to store all web components that can be used inside any page. Currently we don't use any components yet! But if our app becomes more complex, we could easily break each page into small components.
- ./src/views: used to store views. A view is the equivalent of a web page. Currently we have Home.vue, which is the main page, and About.vue.

Adding data-grid to main page

In the Home.vue page we will have two main sections:

- data filters: contains all the filters that the user can select.
- data grid: the students list, displayed as a data grid component. For our purpose we will use Vuetify's data-table component.

So let's take a look at the home page template:

<template>
  <v-container grid-list-lg>
    <v-layout row wrap>
      <v-flex xs2>
        <v-select :items="filterFields" v-model="filterField"></v-select>
      </v-flex>
      <v-flex xs2>
        <v-select v-model="filterOperator"></v-select>
      </v-flex>
      <v-flex xs2>
        <v-text-field></v-text-field>
      </v-flex>
      <v-flex xs2>
        <v-text-field></v-text-field>
      </v-flex>
      <v-flex xs2>
        <v-autocomplete :items="states"></v-autocomplete>
      </v-flex>
      <v-flex xs2>
        <v-btn>Clear All</v-btn>
      </v-flex>
      <v-flex xs12>
        <v-data-table :headers="headers" :items="filteredStudents">
          <template slot="items" slot-scope="props">
            <td>{{ props.item.id }}</td>
            <td>{{ props.item.firstName }}</td>
            <td>{{ props.item.lastName }}</td>
            <td>{{ props.item.dob | shortDate(dateFilterFormat) }}</td>
            <td>{{ props.item.gpa | gpaFloat }}</td>
            <td>{{ props.item.address }}</td>
            <td>{{ props.item.city }}</td>
            <td>{{ props.item.county }}</td>
            <td>{{ props.item.state }}</td>
            <td>{{ props.item.zip }}</td>
          </template>
          <template slot="pageText" slot-scope="props">
            Total rows: {{ props.itemsLength }}
          </template>
        </v-data-table>
      </v-flex>
    </v-layout>
  </v-container>
</template>

As you can see from the code above, the v-data-table component uses the filteredStudents variable as its data source. Inside the Vuex store we have two state variables:

- students: an array that contains all students fetched from the database.
- filteredStudents: an array that contains only the students matching the filter criteria. Initially, if no filter is selected, this variable has exactly the same value as students.

The data-table component also has three sections:

- headers: currently we store the headers in a data variable called headers
- items: the data section, which feeds off the filteredStudents variable
- footer: displays the pagination controls and the record count info

Adding data filter UI components

As seen in the Home.vue page template, the filter components consist of the following:

- Filter By: currently we have to select one of the available fields, like firstName, lastName, dob...
- Filter Operator: this will be something like Contains, Starts with, Greater than... The operators change based on the field type.
- Filter Term: this is the user input for the selected filter. Currently we have two filter terms in case we need to select a range. For instance, if the user selects "date of birth between", then we need two date input fields.
- Filter Lookup: a dropdown, in case the filter criteria need to be selected from a given list. In our app, when we filter by State, we need to select a value from a dropdown field.

Add filter functionality

We can summarize the filter functionality with these variables:

headers: [
  { text: 'ID', align: 'left', sortable: false, value: 'id' },
  { text: 'First', value: 'firstName' },
  { text: 'Last', value: 'lastName' },
  { text: 'DOB', value: 'dob', dataType: 'Date' },
  { text: 'GPA', value: 'gpa' },
  { text: 'Address', value: 'address' },
  { text: 'City', value: 'city' },
  { text: 'County', value: 'county' },
  { text: 'State', value: 'state' },
  { text: 'Zip', value: 'zip' }
],

These are the data table headers.

filterFields: [
  {text: 'First Name', value: 'firstName', type: 'text'},
  {text: 'Last Name', value: 'lastName', type: 'text'},
  {text: 'DOB', value: 'dob', type: 'date'},
  {text: 'GPA', value: 'gpa', type: 'number'},
  {text: 'Address', value: 'address', type: 'text'},
  {text: 'City', value: 'city', type: 'text'},
  {text: 'County', value: 'county', type: 'text'},
  {text: 'Zip', value: 'zip', type: 'number'},
  {text: 'State', value: 'state', type: 'lookup'}
],

This is the list of filter fields that the user can select. You can also see that I added a type for each filter field. The filter type will be used later to decide which function is called to run the filter operation. Many fields share the same data type, so we don't need a separate function per field; we call the same function for all fields that share the same data type.

filterDefs: {
  text: {
    contains: {display: 'Contains', function: this.filterByTextContains},
    startsWith: {display: 'Starts with', function: this.filterByTextStartsWith}
  },
  number: {
    equal: {display: 'Equal', function: this.filterByNumberEqual, decimalPoint: 1},
    greater: {display: 'Greater than', function: this.filterByNumberGreater, decimalPoint: 1},
    less: {display: 'Less than', function: this.filterByNumberLess, decimalPoint: 1},
    between: {display: 'Between', function: this.filterByNumberBetween, decimalPoint: 1}
  },
  date: {
    equal: {display: 'Equal', function: this.filterByDateEqual, format: 'MM/DD/YYYY'},
    greater: {display: 'Greater than', function: this.filterByDateGreater, format: 'MM/DD/YYYY'},
    less: {display: 'Less than', function: this.filterByDateLess, format: 'MM/DD/YYYY'},
    between: {display: 'Between', function: this.filterByDateBetween, format: 'MM/DD/YYYY'}
  },
  lookup: {
    is: {display: 'Is', function: this.filterByLookupIs},
    isNot: {display: 'Is not', function: this.filterByLookupIsNot}
  }
},

The filterDefs variable stores information that tells our UI which operators to offer for each field type. We also specify in this config which JavaScript function to call when we need to filter by the selected field. This variable is my own take on how the filter function should be configured and designed; you could certainly do without it and use JavaScript code with a lot of if statements instead.

The last piece is the actual JavaScript functions that we call for each filter type. I am not going to list all of them, but let's see a few examples from the Home.vue page:
methods: {
  filterByTextContains (list, fieldName, fieldValue) {
    const re = new RegExp(fieldValue, 'i')
    return this.filterByRegExp(list, fieldName, fieldValue, re)
  },
  filterByTextStartsWith (list, fieldName, fieldValue) {
    const re = new RegExp('^' + fieldValue, 'i')
    return this.filterByRegExp(list, fieldName, fieldValue, re)
  },
  filterByRegExp (list, fieldName, fieldValue, regExp) {
    return list.filter(item => {
      if (item[fieldName] !== undefined) {
        return regExp.test(item[fieldName])
      } else {
        return true
      }
    })
  },
  ...
}

The code above has two functions, filterByTextContains and filterByTextStartsWith, which are called each time the user filters on a text field. Behind those two functions we call filterByRegExp, which is basically a function that uses JavaScript's regular expression support. In a similar way, I have written filter functions for numeric fields, date fields, and lookup fields, using simple logic like date comparison, an array find, or a plain old JS if statement. The most important part is that these functions should be generic enough to work with any field, and should expect a few parameters: the data list to be filtered, the field name, and the field value. I encourage you to take a look at the code for Home.vue for the full details.

Using Vue computed properties, watchers, and filters

You can also find inside ./src/views/Home.vue a couple of methods under computed, watch, and filters. Here is how and why I use each type.

Computed: I have used computed properties for students, filteredStudents, isDataReady and states. These properties update automatically any time the underlying variables, which come from the Vuex store, change. This is useful especially if you bind the computed properties to UI elements, so UI sections update or toggle whenever the data inside the computed properties is updated.
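The per-type filter functions above are not Vue-specific: the same pattern — one generic comparison function per data type, parameterized by field name — works in any language. Here is a minimal illustration in Python, where the sample rows and field values are invented for the demo rather than taken from the tutorial's dataset:

```python
import re

# Invented sample rows, shaped like the tutorial's student records.
students = [
    {"firstName": "Ann", "gpa": 3.9},
    {"firstName": "Bob", "gpa": 2.7},
    {"firstName": "Anton", "gpa": 3.1},
]

def filter_by_text_contains(rows, field, value):
    # Case-insensitive "contains", like the RegExp-based helper above.
    rx = re.compile(value, re.IGNORECASE)
    # Rows missing the field pass through, mirroring the Vue version.
    return [r for r in rows if field not in r or rx.search(str(r[field]))]

def filter_by_number_between(rows, field, low, high):
    return [r for r in rows if field not in r or low <= r[field] <= high]

print([r["firstName"] for r in filter_by_text_contains(students, "firstName", "an")])
# ['Ann', 'Anton']
print([r["firstName"] for r in filter_by_number_between(students, "gpa", 3.0, 4.0)])
# ['Ann', 'Anton']
```

Because each function only takes the list, the field name, and the value(s), one function per data type covers every field of that type — the same reason the tutorial needs only 12 comparison functions for 10 fields.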
For instance, isDataReady is used by the data table: whenever it's false, the UI plays a waiting-animation progress bar that tells the user the data is loading. Once isDataReady is updated to true, the loading progress bar disappears and the table shows the actual data.

Watchers: I have used watchers for the filterField and filterOperator properties. The difference is that watched properties do not cache a value; each time the underlying data changes, the watcher function is called. I use this to update the filter UI elements on the home page.

Filters: don't confuse Vue filters with data filtering. Filters are functions that you define in the logic and then use inside the HTML template to format a field value. For instance, I have the shortDate and gpaFloat functions, which format date and float values for display. You can call a filter function from the HTML template using this syntax: <td>{{ props.item.gpa | gpaFloat }}</td>.

I also want to mention that I have used Vue lifecycle hooks to initiate the data fetching from the back end whenever the application starts. I am doing that from the ./main.js file:

new Vue({
  store,
  router,
  render: h => h(App),
  created () {
    this.$store.dispatch('fetchStudents')
  }
}).$mount('#app')

As you can see, on the app's created event we trigger a Vuex action by invoking the dispatch method. This is very useful if you want to trigger actions automatically, without waiting for user actions.

And this is the final result. Please take a look at the application demo for a test drive.

Conclusion

In the end, I want to mention that building a simple application like this sounds easy; however, building an expandable app takes some thinking and a little planning to make sure your code allows future expansion and changes without the need to rewrite the app. It's also worth mentioning that using an API-ready back-end certainly saved us a lot of time.

Lastly, I want to add that after finishing the app I realized that the Home.vue page could certainly be broken up into small components to make it more readable and maintainable. That would probably be the next step if you ever want to make use of Vue components.

So, try the application demo, take a look at the source code, and let me know what you think.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/mtermoul/add-dynamic-filters-to-your-data-with-ease-using-vue-cosmic-js-rest-api-445g
Getting Started

First off, sign up on the site; you can then register your application and get a unique key for it. Download the Windows Phone Flurry SDK and add a reference to their assembly in your project. Add a couple of calls to your Application class per their documentation, and you already have the basics: you will now know how many new users you have, how long they use your app, in what country they are, what phone and what version they are using, and lots of other information. Those of us accustomed to the glacial performance of the App Hub system and web site, with its inexplicable reporting delays, will be especially pleased with all aspects of the Flurry system.

Apart from the default data collection, the data that I find most important is ship-asserts and basic analytics.

Basic Analytics

I report when any page in the app is visited, sometimes with an additional argument. I also report which buttons are clicked, to determine which features customers are using and which they are not. I use the Flurry LogEvent API for this, with arguments of "Button", the page name ("LittleWatson" in the example below) and then the name of the button itself.

Ship Asserts

There are points in my code where I have ship-asserts: they are similar to regular asserts, but actually exist in release builds. Before Flurry they would generate emails, based on an extended version of my LittleWatson code, but I learned when I first added Flurry to the beta version of my app that even beta testers often don't bother to send those emails (I instrumented LittleWatson itself, see the graph above). So now I report ship-asserts, often with additional useful data, via Flurry.
Flurry has a limit of 255 characters per argument, but I often have data larger than that which I want to see, so I use this code so I can still see all of the data:

internal static void ShipAssert(string name, string item, string value)
{
    const int max = 250;
    List<FlurryWP7SDK.Models.Parameter> items = new List<FlurryWP7SDK.Models.Parameter>();
    if (value == null)
        value = "<null>";
    if (value.Length < max)
    {
        items.Add(new FlurryWP7SDK.Models.Parameter(item, value));
    }
    else
    {
        // chop up long values
        int chunk = 0;
        for (int i = 0; i < value.Length; i += max)
        {
            int len = Math.Min(value.Length - i, max);
            string key = item;
            if (chunk != 0)
                key = item + "_" + chunk.ToString();
            items.Add(new FlurryWP7SDK.Models.Parameter(key, value.Substring(i, len)));
            chunk++;
        }
    }
    FlurryWP7SDK.Api.LogEvent(name, items, false);
}

On the Flurry web site, if you Export as CSV on an Event, it will gather together the _1, _2, etc. chunks for you (though not in order); you can't see this in the default view.

Two of my ship-asserts led me to find bugs that I had no idea existed: no-one had emailed me directly or via LittleWatson, and no-one reported them in Reviews (which is a terrible way to report bugs, IMHO).

Version

By default, Flurry reports the version of the app by reading it from AppManifest.xml. However, in my apps I always leave that at 1.0.0.0 and update the versions in AssemblyInfo.cs, so I set the version explicitly via their API. It is important to do this as soon as possible, else some data will be reported as coming from version 1.0.0.0 erroneously.
I also change the version number reported for Debug, Beta and Trial apps, and Flurry produces handy graphs based on the version:

```csharp
// Get the Version, as set in AssemblyInfo.cs
private static string _version;
internal static string Version
{
    get
    {
        if (_version == null)
        {
            string name = System.Reflection.Assembly.GetExecutingAssembly().FullName;
            _version = new System.Reflection.AssemblyName(name).Version.ToString();
        }
        return _version;
    }
}

private string FLURRY_KEY;

// call this from Application_Launching and Application_Activated
private void InitFlurry()
{
    FlurryWP7SDK.Api.StartSession(FLURRY_KEY);
    string ver;
#if DEBUG
    ver = "D" + App.Version;
#elif BETA
    ver = "B" + App.Version;
#else
    ver = App.Version;
#endif
    FlurryWP7SDK.Api.SetVersion(ver);
}
```

LittleWatson

Flurry also has a mechanism to report crashes, so I added it to my existing LittleWatson code. I will probably disable LittleWatson-generated emails in the next update, except in beta builds, as beta users sometimes add additional information (e.g. the repro) when they send them.

Beta testing

I think it is very important to beta test with Flurry enabled, so you can be sure you are reporting the right things in the right way, and that you find the resulting data genuinely useful before you ship your app. I use a different application key for beta versions and release versions, as I have far fewer users of the former than the latter, and it is easier to detect anomalies (e.g. spikes in asserts) in the beta version when the amount of data (and users) is small. It also means that I can easily remove the beta data when it becomes obsolete or was generated by a bug in my reporting code in the beta itself.

Opting Out

As with any app that collects data from users, I highly recommend letting the user opt out of this data collection. Although all data collected is anonymous, it is a simple courtesy to allow customers to not send it. I have a simple checkbox in my app to do this.
Sure, you lose data from those who choose to opt out, but those same users can hardly complain if you don't fix an issue that they didn't even inform you about by disabling the option.

Support

The support folks at Flurry were very responsive on the only occasion I had need of them: I had a beta tester in Europe telling me the app crashed on startup after a timezone change. I could not repro this myself, so I took a look at the callstacks from the App Hub (after the requisite two-day delay) and could see that the crash was in Flurry itself, so I reported it to them. In a few weeks there was an update to the Flurry SDK to fix this issue.

Conclusion

I highly recommend adding Flurry support to your Windows Phone application if you want to know more about your users and how they use your application, so you can fix and improve it.

Thanks for a great article. Are you able to see the sequence of navigation events that the user goes through, or do you just get counts of the different trigger events? I'm using an analytics service that does the latter at the moment, which makes it hard for me to drill down and see how a particular user might have traversed through my app and then "given up" and gone back up the page stack without going any further. Simple counts don't necessarily tell me that story, so I'm trying to find an analytics service that will help.

Philip, you can track the sequence of events, yes. I occasionally do this to look at how a ship assert fired, but I haven't used it much myself. The navigation story in my app is pretty simple.

The Flurry method to report errors turns out to be almost useless: LittleWatson will live for a while yet. Flurry just records the function name that called the method, and the exception type. If the callstack is captured, I haven't found it anywhere.

Do you have the solution for download? Or some example? Thanks

Just a quick question, did you add the Flurry WindowsPhone SDK v2.0.4 folder into your project for referencing it?
I just put the Flurry.dll in the project directory and added a Reference to it
https://blogs.msdn.microsoft.com/andypennell/2012/04/17/using-flurry-analytics-on-windows-phone/
Red Hat Bugzilla – Bug 377001
Eric won't start. The qt module throws a runtime error upon loading.
Last modified: 2007-11-30 17:12:22 EST

Description of problem:
Typing 'eric3' from a terminal window gets this error message:

```
[alex@core2duo ~]$ eric3
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/eric3/eric3.py", line 16, in <module>
    from qt import QTextCodec, SIGNAL, SLOT, qApp
RuntimeError: the sip module supports API v3.0 to v3.4 but the qt module requires API v3.5
```

Version-Release number of selected component (if applicable):

```
[alex@core2duo ~]$ rpm -qi eric
Name        : eric                Relocations: (not relocatable)
Version     : 3.9.2               Vendor: Fedora Project
Release     : 3.fc8               Build Date: Mon 27 Aug 2007 08:35:36 AM CDT
Install Date: Sun 11 Nov 2007 10:21:56 AM CST   Build Host: hammer2.fedora.redhat.com
Group       : Development/Tools   Source RPM: eric-3.9.2-3.fc8.src.rpm
Size        : 12241207            License: GPL+
Signature   : DSA/SHA1, Wed 24 Oct 2007 08:43:54 PM CDT, Key ID b44269d04f2a6fd2
Packager    : Fedora Project
URL         :
Summary     : Python IDE
Description : eric3 is a full featured Python IDE.
```

How reproducible: / Steps to Reproduce: / Actual results: / Expected results: / Additional info: (template left empty)

I can't reproduce (it works for me on x86_64 anyway). More details please: rpm -q sip PyQt

```
[alex@core2duo ~]$ rpm -q sip PyQt
sip-4.6-3.fc8
PyQt-3.17.3-1.fc9
```

Just saw that PyQt is for fc9; is that a development package?

Yes, downgrade back to fc8's stock PyQt-3.17.1. Quick-n-dirty method:

```
rpm -e --nodeps PyQt
yum install PyQt
```

(An update to PyQt-3.17.3 is in the works, but for now, 3.17.1 is the latest.)

Reverting PyQt to the original PyQt-3.17.1-1.fc7 on disk solved the issue. Thanks. It can be closed.

good news!
https://bugzilla.redhat.com/show_bug.cgi?id=377001
Red Hat Bugzilla – Bug 469271
yum ia64 no longer supports i386/i686/etc.
Last modified: 2014-01-21 01:11:07 EST

Description of problem:
Since yum-3.2.10 or so, upstream yum no longer supports anything but ia64 and noarch packages on ia64 (on the advice of Fedora people), but RHEL-5 still ships at least .i386 and .i686 packages in the ia64 RHN channels. Changing this is a trivial one-line fix, but this BZ is mainly a "which is right" question (I'm guessing RHEL/RHN) ... but also so I can get the flags to fix it.

This bugzilla has Keywords: Regression. Since no regressions are allowed between releases, it is also being proposed as a blocker for this release. Please resolve ASAP.

*** Bug 469343 has been marked as a duplicate of this bug. ***

Also with respect to:

```diff
diff -u /usr/lib/python2.4/site-packages/rpmUtils/arch.py{.orig,}
--- /usr/lib/python2.4/site-packages/rpmUtils/arch.py.orig  2008-10-31 07:38:13.000000000 -0400
+++ /usr/lib/python2.4/site-packages/rpmUtils/arch.py       2008-10-31 07:38:44.000000000 -0400
@@ -67,7 +67,7 @@
     "sh3": "noarch",
 
     #itanium
-    "ia64": "noarch",
+    "ia64": "i386",
 }
 
 def legitMultiArchesInSameLib(arch=None):
```

...I assume we need the above to be i686 so we can get the glibc/openssl .i686 packages?

Hello Denis, what are the correct secondary archs for an ia64 RHEL-5 system? Is i386 and noarch enough? Thank you in advance, Jan

I'd think just a patch to revert rpmUtils/arch.py to the prior behaviour should be fine; there's no need to worry about optimizations w.r.t. i686.
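The one-line change in the diff above touches the arch-compatibility dictionary that yum walks to decide which package architectures a machine can install. A simplified sketch of that walk (my own reconstruction for illustration, not the actual rpmUtils code) shows why mapping "ia64" to "i386" re-enables 32-bit packages on ia64, and also why the follow-up comment asks whether the target should really be "i686":

```python
# Simplified sketch of rpmUtils.arch: each arch maps to the next
# compatible (less specific) arch, ending at noarch.
arch_compat = {
    "ia64": "i386",   # the patched line; upstream had "ia64": "noarch"
    "i686": "i586",
    "i586": "i486",
    "i486": "i386",
    "i386": "noarch",
}

def compatible_arches(arch):
    """Return the list of arches installable on `arch`, most specific first."""
    arches = [arch]
    while arch in arch_compat:
        arch = arch_compat[arch]
        arches.append(arch)
    return arches
```

With the patched mapping, compatible_arches("ia64") yields ia64, i386, noarch; chaining through "i686" instead would additionally pick up the .i686 glibc/openssl packages mentioned in the comment.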
https://bugzilla.redhat.com/show_bug.cgi?id=469271
21 February 2013 17:16 [Source: ICIS news]

WASHINGTON (ICIS)--US sales of existing homes rose marginally in January from December, the National Association of Realtors (NAR) said on Thursday, noting that the market for previously owned homes is running 9% ahead of January 2012.

In its monthly report, the NAR said that existing home sales rose by about 0.4% in January to a seasonally adjusted annual rate of 4.92m units, compared with the downwardly revised sales pace of 4.90m in December. December's sales of existing homes had initially been put at 4.94m units. Had December's figure not been revised downward, the January total would have shown a decline.

While sales growth in the existing homes market remains modest, the NAR noted that January's sales pace was 9.1% ahead of the same month last year, a sign that the housing recovery is gaining strength. Sales of existing homes would be stronger, the NAR said, if there were more single-family homes, condos and co-ops available for sale.

"Tight inventory is a major factor in the market," said NAR chief economist Lawrence Yun. "Buyer traffic is continuing to pick up, while seller traffic is holding steady," he said. "In fact, buyer traffic is 40% above a year ago, so there is plenty of demand but insufficient inventory to improve sales more strongly."

The availability of existing homes for sale was lower in January, the NAR said, with the inventory of for-sale homes off by nearly 5% from December to 1.74m units. At the current sales pace, those 1.74m homes for sale represent a 4.2-month supply. That inventory level marks the lowest housing availability since April 2005, when the US housing boom was still in full flower. In January 2012, the inventory of for-sale existing homes constituted a 6.2-month supply.
http://www.icis.com/Articles/2013/02/21/9643337/us-existing-home-sales-rise-marginally-in-jan-up-9-from-2012.html
Re: variadic functions

- From: Paul N <gw7rib@xxxxxxx>
- Date: Sat, 3 Oct 2009 09:01:08 -0700 (PDT)

On 3 Oct, 10:17, "io_x" <a...@xxxxxxxxxxx> wrote:
> "Frank" <merr...@xxxxxxxxxxxxxxxxx> wrote in message news:23c2ebd2-dd31-43b0-8de1-0050cc007de9@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>
>     #include <string.h>
>
>     int main(void)
>     {
>         int printf(const char *, ...);
>
> in what i know here you say to the linker to find "int printf(const char *, ...);" that could be different from printf in <stdio.h>?

I think you (io_x) could be making the same mistake here that I did when I first learnt C. I thought that printf was "in" stdio.h, i.e. that stdio.h contained the code to do printing. This is not the case. Headers, such as stdio.h, contain text. This text can be included into your program, and will have the same effect as if you included that same text yourself in your program file. In the case of printf, stdio.h contains a prototype, so that your program knows how to call printf properly, but it doesn't have a clue what printf actually does. Somewhere else, for example in a file called something.lib, there is the actual machine code for printf, and when you link the program the linker will find this and do the right thing with it, which will normally involve copying the code into your executable.

My first compiler had six different .LIB files, but this wasn't so that each only covered some of the functions, the way that .h files do. They were because the computer had a choice of six different "memory models" which each required slightly different versions of the functions. Each .LIB file contained all of the functions.

Hope that helps.
Paul.
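Paul's point, that the header supplies only the prototype while the library supplies the machine code, can be demonstrated directly. The following is my own minimal illustration, not code from the thread: it never includes stdio.h, yet it still links against the C library's printf because the hand-written declaration matches the real one:

```c
/* No #include <stdio.h>: we declare printf's prototype ourselves.
   The actual machine code still comes from the C library at link time. */
int printf(const char *format, ...);

/* printf returns the number of characters written, which gives us
   something checkable: "42\n" is three characters. */
int print_answer(void)
{
    return printf("%d\n", 42);
}
```

Compiled together with a main that calls print_answer(), this links and runs fine, which is exactly the header-versus-library distinction being made above.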
http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2009-10/msg00512.html
Scala GraphQL implementation

```scala
def getFriendship(userA: String, userB: String): Future[Boolean] =
  Future.successful(true)

resolve = _ => {
  getFriendship(userA, userB).map { usersAreFriends =>
    if (usersAreFriends) {
      fetchProfile.deferOpt(userB)
    } else {
      // TODO: How do I allow this to be None?
      None
    }
  }
}
```

Future[Boolean] is a terrible idea; really it should be Future[Unit], and use .map(successResponse).recover(failureResponse)

Hello, I'm pretty new to GraphQL and I'm encountering an issue with updating my entities in my mutation schema. The entity looks like this:

```graphql
Place {
  id: ID!
  name: String!
  city: String!
  pictures: [String!]!
  address: String
  description: String
}
```

and the update mutation looks like this:

```graphql
updatePlace(
  id: ID!
  city: String
  pictures: [String!]
  address: String
  description: String
): Void!
```

Now my issue is: since address and description are optional fields, is there a way to differentiate in the mutation whether I'm not providing arguments for those fields (and therefore don't want to change their values), or if I do but I want to set them to null? Because as far as I know, in both cases I'll get None when calling c.arg() on my argument. Thanks in advance!

Has anyone tried a fieldType being an InterfaceType, instead of a concrete type that interfaces some InterfaceType? It compiles just fine, but sangria then complains with the "Can't find appropriate subtype of an interface type" error. Any help would be most appreciated!

New Question: Is it possible for an Action to be a Future UpdateCtx? The basic idea is I want to return a Future, but also update the Ctx.

Hi, I am back with another question: I need to use a schema-first approach and have a schema like:

```graphql
type Query {
  item(id: [String]!): [Item]
}

type Item {
  itemId: String
  product: Product
}

type Product {
  productId: String
}
```

How do I write resolvers for the list of input ids above with nested entities using AstSchemaBuilder?

Hi, we've been using Sangria to create our company's first GraphQL APIs.
We're a Scala functional programming shop, so we like doing everything monadic. We're doing our first schema that does a large number of fetches from DynamoDB tables to realize the result schema, and I'm trying to use the Fetcher type for each sub-table. This method expects a Future returned from the repository function, while ours is returning a higher-kinded type of F[Seq[Software]]. Have others been using functional types with the Fetchers and Resolver caches? Sample code block below:

```scala
def softwareFetcher[F[_]: Effect] =
  Fetcher((repo: MasterRepo[F], ids: Seq[String]) =>
    repo.devicesRepo.softwareByDeviceGuids(ids)
  )(HasId(_.deviceGuid))
```

Correct me if I'm wrong, but... I get the impression that it is using graphql-playground, and that the latter has been ended in favor of GraphiQL: graphql/graphql-playground#1143. I've incorporated GraphiQL into a Sangria server before using webjars. Is this something that needs to be done?

Following up on my previous question on union types: I have made changes to return all possible types of the union as a default case. But I wanted to add a check based on the __typename requested. I see that in JS, objects are resolved based on the unique field name that each type has. I am looking for something equivalent in sangria:

```javascript
__resolveType: obj => {
  if (obj.reason) {
    return "TimeoutError";
  }
  if (obj.field) {
    return "ValidationError";
  }
  return null;
}
```

... someUnionField or, better yet, split this into two distinct fields rather than one that is a union type. Am I understanding correctly?

```graphql
{
  someUnionField {
    ... on UnionTypeA {
      someFieldOfTypeA
    }
    ... on UnionTypeB {
      someFieldOfTypeB
    }
  }
}
```
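The __resolveType snippet quoted above is just a function from a runtime value to a type name. As a self-contained illustration (plain JavaScript with the same invented member shapes, no GraphQL library involved), the shape-based dispatch can be written and exercised like this:

```javascript
// Resolve a union member's type name from the shape of the value,
// mirroring the __resolveType pattern quoted in the chat above.
function resolveType(obj) {
  if (obj && obj.reason !== undefined) {
    return "TimeoutError";
  }
  if (obj && obj.field !== undefined) {
    return "ValidationError";
  }
  return null; // unknown member: GraphQL treats this as a resolution error
}
```

This is exactly why each union member needs at least one distinguishing field; if two members share the same shape, this style of dispatch cannot tell them apart.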
https://gitter.im/sangria-graphql/sangria?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
std::basic_ios::clear

From cppreference.com

Sets the stream error state flags by assigning them the value of state. By default, assigns std::ios_base::goodbit, which has the effect of clearing all error state flags. If rdbuf() is a null pointer (i.e. there is no associated stream buffer), then state | badbit is assigned. May throw an exception.

Parameters

Return value

(none)

Exceptions

Example

clear() without arguments can be used to unset the failbit after unexpected input:

```cpp
#include <iostream>
#include <string>

int main()
{
    double n;
    while (std::cout << "Please, enter a number\n" && !(std::cin >> n))
    {
        std::cin.clear();
        std::string line;
        std::getline(std::cin, line);
        std::cout << "I am sorry, but '" << line << "' is not a number\n";
    }
    std::cout << "Thank you for entering the number " << n << '\n';
}
```
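A complementary sketch (my own addition, not from the reference page): calling clear() with an explicit argument assigns that state rather than clearing it, which is easy to observe on a string stream without any interactive input:

```cpp
#include <sstream>
#include <ios>

// Records the stream state at three points: after a failed extraction,
// after clear() with no argument, and after clear(failbit).
struct ClearDemo { bool failed; bool cleared_good; bool refailed; };

ClearDemo run_clear_demo()
{
    std::istringstream in("not-a-number");
    int n = 0;
    in >> n;                          // extraction fails, failbit is set
    bool failed = in.fail();

    in.clear();                       // clear() assigns goodbit: all flags reset
    bool cleared_good = in.good();

    in.clear(std::ios_base::failbit); // with an argument, clear() *sets* state
    bool refailed = in.fail() && !in.bad();

    return ClearDemo{failed, cleared_good, refailed};
}
```

So despite its name, clear(state) is really an assignment of the error state; only the default argument makes it a "clear".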
https://en.cppreference.com/w/cpp/io/basic_ios/clear
I'm up to date with everything: Xamarin 8.7.3 (build 13), Xam.Plugin.Media 5.0.1. The code hasn't changed for many months and used to work, but now never gets to the "if (file == null)" statement:

```csharp
var file = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions
{
    SaveMetaData = false,
    SaveToAlbum = false,
    PhotoSize = PhotoSize.Small,
    CompressionQuality = 50,
    DefaultCamera = CameraDevice.Rear
});

if (file == null)
{
    attachment = null;
    InvokeOnMainThread(resetSelected);
    return;
}
```

Picking a photo from the library still works, but taking a picture does not.

Answers

Have you tested the sample here: I ran it on my iOS 13 device and it worked fine:

I'm trying to now, but am getting an error when I build: "The type or namespace name 'App' could not be found (are you missing a using directive or an assembly reference?)" 2 Warning(s). I have updated the packages, but I don't know what directive or reference it is looking for.

OK, I got it to run with debug (I would still like to know how to solve the "missing reference" problem). Yes, that sample is working on my iOS 13 device, so there is something I'm doing that works with iOS 12 but not iOS 13, and I will have to find the difference between his code and mine. Thanks.

Feel free to ask here if you have questions about it.
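One thing worth ruling out on iOS (an assumption on my part, not a confirmed diagnosis of the problem above): the media plugin's documentation requires privacy usage keys in Info.plist, and a missing or incomplete set is a common reason the camera path fails while the library picker still works. Something along these lines, with the description strings being placeholders:

```xml
<key>NSCameraUsageDescription</key>
<string>This app takes photos for attachments.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app picks photos for attachments.</string>
<key>NSMicrophoneUsageDescription</key>
<string>Required by the media plugin for video capture.</string>
```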
https://forums.xamarin.com/discussion/comment/420594
From Least Squares Benchmarks to the Marchenko–Pastur Distribution

In this blog post, I tell the story of how I learned about a theorem for random matrices by the two Ukrainian mathematicians Vladimir Marchenko and Leonid Pastur. It all started with benchmarking least squares solvers in scipy.

Setting the Stage for Least Squares Solvers

Least squares starts with a matrix A \in \mathbb{R}^{n,p} and a vector b \in \mathbb{R}^{n}, and one is interested in solution vectors x \in \mathbb{R}^{p} fulfilling

x^\star = \argmin_x ||Ax-b||_2^2 \,.

You can read more about least squares in our earlier post Least Squares Minimal Norm Solution. There are many possible ways to tackle this problem and many algorithms are available. One standard solver is LSQR, with the following properties:

- Iterative solver, which terminates when some stopping criteria are smaller than a user-specified tolerance.
- It only uses matrix-vector products. Therefore, it is suitable for large and sparse matrices A.
- It effectively solves the normal equations A^T A x = A^T b based on a bidiagonalization procedure of Golub and Kahan (so never really calculating A^T A).
- It is, unfortunately, susceptible to ill-conditioned matrices A, which we demonstrated in our earlier post.

Wait, what is an ill-conditioned matrix? This is most easily explained with the help of the singular value decomposition (SVD). Any real valued matrix permits a decomposition into three parts:

A = U S V^T \,.

U and V are orthogonal matrices, but not of further interest to us. The matrix S, on the other hand, is (rectangular) diagonal with only non-negative entries s_i = S_{ii} \geq 0. A matrix A is said to be ill-conditioned if it has a large condition number, which can be defined as the ratio of largest and smallest singular value,

\mathrm{cond}(A) = \frac{\max_i s_i}{\min_i s_i} = \frac{s_{\mathrm{max}}}{s_{\mathrm{min}}} \,.
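The definition of cond(A) as a ratio of extreme singular values can be checked directly in NumPy (a small illustration added here; numpy.linalg.cond uses exactly this 2-norm definition by default):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))

# Singular values from the SVD ...
s = np.linalg.svd(A, compute_uv=False)
cond_from_svd = s.max() / s.min()

# ... agree with NumPy's built-in condition number (2-norm by default)
cond_builtin = np.linalg.cond(A)
```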
Very often, large condition numbers make numerical computations difficult or less precise.

Benchmarking LSQR

One day, I decided to benchmark the computation time of least squares solvers provided by scipy, in particular LSQR. I wanted results for different sizes n, p of the matrix dimensions. So I needed to somehow generate different matrices A. There are a myriad of ways to do that. Naive as I was, I did the most simple thing and used standard Normal (Gaussian) distributed random matrices A_{ij} \sim \mathcal{N}(0, 1) and ran benchmarks on those. Let's see how that looks in Python.

```python
from collections import OrderedDict
from functools import partial

import matplotlib.pyplot as plt
import numpy as np
from scipy.linalg import lstsq
from scipy.sparse.linalg import lsqr
import seaborn as sns
from neurtu import Benchmark, delayed

plt.ion()

p_list = [100, 500]
rng = np.random.default_rng(42)
X = rng.standard_normal(max(p_list) ** 2 * 2)
y = rng.standard_normal(max(p_list) * 2)

def bench_cases():
    for p in p_list:
        for n in np.arange(0.1, 2.1, 0.1):
            n = int(p * n)
            A = X[:n * p].reshape(n, p)
            b = y[:n]
            for solver in ['lstsq', 'lsqr']:
                tags = OrderedDict(n=n, p=p, solver=solver)
                if solver == 'lstsq':
                    solve = delayed(lstsq, tags=tags)
                elif solver == 'lsqr':
                    solve = delayed(
                        partial(
                            lsqr, atol=1e-10, btol=1e-10, iter_lim=1e6
                        ),
                        tags=tags)
                yield solve(A, b)

bench = Benchmark(wall_time=True)
df = bench(bench_cases())

g = sns.relplot(x='n', y='wall_time', hue='solver', col='p', kind='line',
                facet_kws={'sharex': False, 'sharey': False},
                data=df.reset_index(), marker='o')
g.set_titles("p = {col_name}")
g.set_axis_labels("n", "Wall time (s)")
g.set(xscale="linear", yscale="log")
plt.subplots_adjust(top=0.9)
g.fig.suptitle('Benchmark LSQR vs LSTSQ')
for ax in g.axes.flatten():
    ax.tick_params(labelbottom=True)
```

The left plot already looks a bit suspicious around n=p. But what is happening on the right side? Where does this spike of LSQR come from?
And why does the standard least squares solver, SVD-based lstsq, not show this spike? When I saw these results, I thought something might be wrong with LSQR and opened an issue on the scipy GitHub repository. The community there is really fantastic. Brett Naul pointed me to ...

The Marchenko–Pastur Distribution

The Marchenko–Pastur distribution is the distribution of the eigenvalues (singular values of square matrices) of certain random matrices in the large sample limit. Given a random matrix A \in \mathbb{R}^{n,p} with i.i.d. entries A_{ij} having zero mean, \mathbb{E}[A_{ij}] = 0, and finite variance, \mathrm{Var}[A_{ij}] = \sigma^2 < \infty, we define the matrix

Y_n = \frac{1}{n}A^T A \in \mathbb{R}^{p,p} \,.

As a square and even symmetric matrix, Y_n has a simpler SVD, namely Y_n = V \Sigma V^T. One can in fact show that V is the same as in the SVD of A, and that the diagonal matrix \Sigma = \frac{1}{n}S^T S contains the squared singular values of A and \max(0, p-n) extra zero values. The (diagonal) values of \Sigma are called eigenvalues \lambda_1, \ldots, \lambda_p of Y_n. Note that the eigenvalues are themselves random variables, and we are interested in their probability distribution or probability measure. We define the (random) measure \mu_p(B) = \frac{1}{p} \#\{\lambda_j \in B\} for all intervals B \subset \mathbb{R}.

The theorem of Marchenko and Pastur then states that for n, p \rightarrow \infty with \frac{p}{n} \rightarrow \rho, we have \mu_p \rightarrow \mu, where

\mu(B) = \begin{cases} (1-\frac{1}{\rho})\mathbb{1}_{0\in B} + \nu(B), \quad &\rho > 1 \\ \nu(B), \quad & 0\leq\rho\leq 1 \end{cases} \,,

\nu(x) = \frac{1}{2\pi\sigma^2} \frac{\sqrt{(\rho_+ - x)(x - \rho_-)}}{\rho x} \mathbb{1}_{x \in [\rho_-, \rho_+]} dx \,,

\rho_{\pm} = \sigma^2 (1\pm\sqrt{\rho})^2 \,.
We can at least derive the point mass at zero for \rho>1 \Leftrightarrow p>n: we said above that \Sigma contains p-n extra zeros, and those correspond to a density of \frac{p-n}{p} = 1 - \frac{1}{\rho} at zero. A lot of math so far. Just note that the assumptions on A are exactly met by the ones in our benchmark above. Also note that the normal equations can be expressed in terms of Y_n as n Y_n x = A^T b.

Empirical Confirmation of the Marchenko–Pastur Distribution

Before we come back to the spikes in our benchmark, let us have a look and see how well the Marchenko–Pastur distribution is approximated for finite sample size. We choose n=1000, p=500, which gives \rho=\frac{1}{2}. We plot a histogram of the eigenvalues next to the Marchenko–Pastur distribution.

```python
def marchenko_pastur_mu(x, rho, sigma2=1):
    x = np.atleast_1d(x).astype(float)
    rho_p = sigma2 * (1 + np.sqrt(rho)) ** 2
    rho_m = sigma2 * (1 - np.sqrt(rho)) ** 2
    mu = np.zeros_like(x)
    is_nonzero = (rho_m < x) & (x < rho_p)
    x_valid = x[is_nonzero]
    factor = 1 / (2 * np.pi * sigma2 * rho)
    mu[is_nonzero] = factor / x_valid
    mu[is_nonzero] *= np.sqrt((rho_p - x_valid) * (x_valid - rho_m))
    if rho > 1:
        mu[x == 0] = 1 - 1 / rho
    return mu

fig, ax = plt.subplots()
n, p = 1000, 500
A = X.reshape(n, p)
Y = 1/n * A.T @ A
eigenvals, _ = np.linalg.eig(Y)
ax.hist(eigenvals.real, bins=50, density=True, label="histogram")
x = np.linspace(0, np.max(eigenvals.real), 100)
ax.plot(x, marchenko_pastur_mu(x, rho=p/n), label="MP distribution")
ax.legend()
ax.set_xlabel("eigenvalue")
ax.set_ylabel("probability")
ax.set_title("Empirical evidence for n=1000, p=500, rho=0.5")
```

I have to say, I am very impressed by this good agreement for n=1000, which is far from being huge.
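The point-mass argument for \rho > 1 above can also be checked numerically: for p > n, the matrix Y_n = A^T A / n has rank at most n, so at least p - n of its p eigenvalues are (numerically) zero. A quick self-contained check along the lines of the post's code (my own addition):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 250                      # rho = 2.5 > 1
A = rng.standard_normal((n, p))
Y = A.T @ A / n

eigenvalues = np.linalg.eigvalsh(Y)  # Y is symmetric, so eigvalsh is appropriate
num_zero = np.sum(np.abs(eigenvalues) < 1e-10)
```

With these dimensions, exactly p - n = 150 eigenvalues are zero up to rounding, matching the point mass of weight 1 - 1/\rho at zero.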
Conclusion

Let's visualize the Marchenko–Pastur distribution for several ratios \rho, fixing \sigma=1:

```python
fig, ax = plt.subplots()
rho_list = [0.5, 1, 1.5]
x = np.linspace(0, 5, 1000)[1:]  # exclude 0
for rho in rho_list:
    y = marchenko_pastur_mu(x, rho)
    line, = ax.plot(x, y, label=f"rho={rho}")
    # plot zero point mass
    if rho > 1:
        ax.scatter(0, marchenko_pastur_mu(0, rho), color=line.get_color())
ax.set_ylim(None, 1.2)
ax.legend()
ax.set_title("Marchenko-Pastur Distribution")
ax.set_xlabel("x")
ax.set_ylabel("dmu/dx")
```

From this figure it becomes obvious that the closer the ratio \rho is to 1, the higher the probability for very tiny eigenvalues. This results in a high probability for an ill-conditioned matrix A^T A coming from an ill-conditioned matrix A. Let's confirm that:

```python
p = 500
n_vec = []
c_vec = []
for n in np.arange(0.1, 2.05, 0.05):
    n = int(p * n)
    A = X[:n * p].reshape(n, p)
    n_vec.append(n)
    c_vec.append(np.linalg.cond(A))

fig, ax = plt.subplots()
ax.plot(n_vec, c_vec)
ax.set_xlabel("n")
ax.set_ylabel("condition number of A")
ax.set_title("Condition Number of A for p=500")
```

As a result of the ill-conditioned A, the LSQR solver has problems achieving its tolerance criterion, needs more iterations, and takes longer. This is exactly what we observed in the benchmark plots: the peak occurred around n=p. The SVD-based lstsq solver, on the other hand, does not use an iterative scheme and does not need more time for ill-conditioned matrices. You can find the accompanying notebook here:
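The Marchenko–Pastur edges also predict the condition number away from \rho = 1: for \sigma = 1 and \rho < 1, the singular values of A/\sqrt{n} concentrate in [1-\sqrt{\rho}, 1+\sqrt{\rho}], so \mathrm{cond}(A) \approx (1+\sqrt{\rho})/(1-\sqrt{\rho}). A quick numerical check (my own addition, using the same Gaussian setup as the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 500          # rho = 0.25, safely away from 1
rho = p / n
A = rng.standard_normal((n, p))

observed = np.linalg.cond(A)
# Edge-based prediction: ratio of the MP support edges for A / sqrt(n)
predicted = (1 + np.sqrt(rho)) / (1 - np.sqrt(rho))
```

For these dimensions the prediction is (1 + 0.5) / (1 - 0.5) = 3, and the measured condition number lands very close to it; only near \rho = 1 does the lower edge collapse and the condition number blow up.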
https://python-bloggers.com/2022/05/from-least-squares-benchmarks-to-the-marchenko-pastur-distribution/
"I guess the developer wanted to make make sure that all his bases were covered!" wrote Ryan. /// <summary> /// Returns True if the input string is /// null, empty, "undefined", or "null". /// </summary> /// String to check /// Booleanpublic static bool IsEmpty(string s) { return string.IsNullOrEmpty(s) || s == "undefined" || s == "null"; } Grahame wrote, "From the Javascript behind the download page (badly minified so that swapping eval for console.log reveals all the comments) of the 2011 Census of a certain country. Nice of them to document their naffness!" //---------------------------------------------------------------------------------------------------------------- // Function: guidGenerator // Description:returns a pseudo-random GUID //This is appended to a url for 2 reasons //1. to make the URL unique, so that the browser always gets it and doesn't use a cached version //2. to make a URL look like its got a unique key, in a naive attempt to fool a not-so-wily hacker //into thinking they can't download a datapack directly if they know the URL pattern, because they //need a unique key. //---------------------------------------------------------------------------------------------------------------- function guidGenerator() { var S4 = function() { return (((1+Math.random())*0x10000)|0).toString(16).substring(1); }; return (S4()+S4()+"-"+S4()+"-"+S4()+"-"+S4()+"-"+S4()+S4()+S4()); } "I grabbed latest from our codebase the other day and ran across a method that I didn't recognize. It had a comment block above, with what I believe is the best comment I've ever seen," wrote Nathan "I found this snippet of code when debugging an application developed by the software team I'm a part of," writes Goatie, " I happen to know this functionality was written by my boss. He has a habit of using rather unusual variable names." 
Case "SecretSquirrel" c_IsMemberOfSecretSquirrelClub = e.Result c_SecretSquirrelCommand.RaiseCanExecuteChanged() End Select End Sub Public ReadOnly Property CurrentSecretSquirrelShowiness As Boolean Get Return c_CurrentSecretSquirrelShowiness End Get End Property "I was recently tasked to fix some 508 compliance issues in a C#, ASP.NET application at the company I work for," writes Dan Johnson, "I was warned that the code may be a little, let's say, not well matured. That's fine, I've seen bad. Architecture, style and good practices problems all aside, I began to see comments throughout the code like this:" //don't do this -- too slow when there are many rows in the grid //return; //update: actually it's pretty fast now - use it if desired and //don't rebuild the grid on the client because using setTargetURL DOES NOT WORK //go ahead and do it - we're not using links for the app names now "The frightening thing is that I'm not sure if there was more than a single developer on this project or not. At the time it was originally built and maintained, our department operated on a one-person-per-project rule." "I was working on Java/Delphi code and I could not believe my eyes when I found this comment," writes Vladamir P. /** java uses BigIndian, Delphi uses little indian, while the two co-exist, need to convert back and forth when reading data in. */ "At first, I thought that maybe it was an isolated typo, that is until I looked further into the code..." /** convert from big indian to little indian: Remove when not using legacy databases or Delphi code.*/ private final boolean doIndianConversion() "I work at a medium large IT company with a handful of developers. A few months ago one of my colleagues left. He was a developer-slash-designer and was the only on in the company who used a Mac computer (a disused old Mac Mini)," writes Jan V., "His design skills were significantly better than his developing talents. Not that he was a bad developer, but he always had a ... 
let's say ... his own perspective on things, including his way of setting a debug flag."

```php
Class Controller_WTF Extends Controller_Base {
    function index() {
        $registry = Registry::getInstance();
        if ($_SERVER['HTTP_USER_AGENT'] ==
                "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_4_11; nl-nl) AppleWebKit/533.19.4 ".
                "(KHTML, like Gecko) Version/4.1.3 Safari/533.19.4") {
            $debug = true;
        }
        ...
    }
    ...
}
```
http://thedailywtf.com/articles/The-Secret-Squirrel-Club%2c-a-Gun%2c-and-More
Hi, does it support normal File Content Conversion? I believe third-party adapters like Aeadaptive and Seeburger do support it. Does it have the same set of features as other vendors? Did you have a chance to look at the Receiver SFTP Adapter as well? Regards, Anandh.B

Just updated the blog with my thoughts on the FCC bit of things. About the receiver adapter: yes, as I mentioned, it is pretty straightforward and hence I haven't put up any detailed info. Hope this clarifies.

Nice blog Shabz... It might not be that important but heck, this darn thing gave me a headache... I was seeing green lights in the channel monitor for the sender SFTP channel, everything polling nicely but nothing being picked up! Maybe add to your blog: the Filename ONLY uses regular expressions. I guess I should have read the configuration guide more carefully, especially the section that said: "The input to the field is a regular expression."

Hi Shabarish, thank you so much for introducing us to the SFTP adapter features. Regards, Anupam

Hi Shabarish, I need to use the Private Key as authentication. I did not find any documentation on how the keys are stored and generated for both receiver and sender scenarios. Can you please help with any of the documentation to use this option, specifically where to perform key generation and key storage? Thanks for your help!!

Shabz, currently we are on PI 7.31 SP4, but I don't see the SFTP adapter in the list. Do you think we need to do any setup for this? Your response is greatly appreciated. Thanks, Madhu
Once the XI Content (containing the SFTP adapter metadata) is imported into your ESR, you will see the SFTP adapter in your directory. There is no additional cost for SFTP or PGP functionality. The B2B and SFTP add-ons are separate; I recall an additional cost for the B2B functionality only. Do have a look at this blog to understand how you can install the adapters on your server. The PGP and SFTP solutions are free from SAP, but the B2B add-on has license implications. Hi Shabarish, In SFTP 3.0 from advantco.com, I got the option “Duration in minutes to keep a file duplicate” and the default value is 1440. Do you know what this feature is? Does it mean that when the same file is found on the server, it is kept for 1440 minutes? Hi Shabarish, I read your blog above and it is very informative. I am currently working on a scenario for my project where I need to do a dynamic configuration for the file name, use the SFTP adapter, and post the message to SWIFTNet. Here is the description in detail: – SAP ECC sends me a message (encrypted payment + digital signature as attachments, enclosed in a SOAP envelope). – In PI (7.3) I receive this message and do a dynamic configuration to generate the file name based on the SOAP message (from one particular field). I have used ASMA. – This message is then routed to another XI system (this is a global XI environment). Here I need to use the SFTP receiver adapter and post the SOAP content as a zipped file onto the SWIFT gateway. There is no mapping transformation required in this flow. It is a passthrough interface. I was surfing the net and found the “GetPayloadValueBean” and “PutPayloadValueBean” module parameters, which are used to ‘extract values from the message and store them temporarily in the module context’ and ‘enrich the message with values from a module context’ respectively. However, I am unable to make any progress with this.
My feeling is that if the “GetPayloadValueBean” and “PutPayloadValueBean” module parameters did the storing of the value and the enriching, then I would not need to do a Dynamic Configuration in the first XI box, but could rather create the file name dynamically at runtime. Note: due to a few reasons, we are not allowed to create any mapping in the second XI box. Somehow, there isn't much help on how to use these module parameters. Could you let me know if there is a way to use these module parameters in SFTP for my scenario? Thanks for your time in advance. Regards, Lakshmi Naik Did you try the ASMA feature? Refer to the documentation on adapter-specific message attributes: If you want to include the adapter-specific metadata with the file, select the Set adapter-specific message attributes checkbox. Enter the namespace in the text box and select the other appropriate attributes. The other attributes include: • File name • Directory Hi Shabarish, Thanks for your quick reply. I have used the ASMA. It has not been of much use. Do you have any documentation for the Receiver SFTP adapter and module parameters that can be of any help? Or have you come across these “GetPayloadValueBean” and “PutPayloadValueBean” module parameters? I get the following error: “MP: exception caught with cause com.sap.engine.services.jndi.persistent.exceptions.NameNotFoundException: Object not found in lookup of PutPayloadValueBean.” Regards, Lakshmi Naik Hi Lakshmi, I am facing a problem with ASMA for the SFTP adapter. In the mapping I placed the dynamic configuration code for file and folder, but the receiver SFTP adapter is not able to pick up these values even though I set the ASMA and checked filename and folder. Could you please help me: what else do I need to configure, or am I missing anything? Regards, Sri Hi Shabarish, Archive file doesn't work in SFTP!
Hi Shabarish, I have PI 7.31 with IDoc to SFTP working fine for two months in the development environment, and now all interfaces work in the production environment. But now I have some issues in the SFTP adapter (sender and receiver): all messages stay in the queue without the transmission ever finishing. I have restarted PI and Windows and started/stopped the channels, but the issue is not solved. The sender SFTP channel doesn't connect to the target server to get the files to be processed; the log stays paused. The receiver SFTP channel doesn't connect to the target system to put the result message, and the messages are in "to be delivered" status in the Adapter Message Monitor. Hi Oscar, This can happen in two cases. Please check these two things. Regards, Lakshmi Naik I can connect with the FileZilla sFTP client and Pageant with the same user/key from the Windows Server in our PID. I think the problem is not in the certificates. I removed some old messages and the connection works to the SFTP server on my laptop. But after resending another IDoc from SAP to PI, the message remained in Delivering status in the Message Monitor. I think the issue is in the original SFTP target server, but now this message can't be cancelled. Finally, the messages sent in the last hour have an error in the SFTP adapter, but the error is not solved. The next lines are the log on the SFTP server from some days ago, when I can't transfer files:

sftp_user [22/Jan/2013:15:33:29 +0100] “USERAUTH_REQUEST sftp_user publickey” – – 83.231.xxx.xxx UNKNOWN
sftp_user [22/Jan/2013:15:33:29 +0100] “PASS (hidden)” – – 83.231.xxx.xxx UNKNOWN
nobody [22/Jan/2013:15:33:29 +0100] “USER sftp_user” – –
sftp_user [22/Jan/2013:15:33:29 +0100] “USERAUTH_REQUEST sftp_user publickey” – – 83.231.xxx.xxx UNKNOWN
sftp_user [22/Jan/2013:15:33:29 +0100] “CHANNEL_OPEN session” – –

Hi Oscar, From the log above, it is clear that the issue is because of user authentication. “USERAUTH_REQUEST sftp_user none”: there is no authentication available for the user id you are using to connect via the SFTP adapter.
I have had a similar issue in the past and resolved it. Could you check it? Regards, Lakshmi Naik “USERAUTH_REQUEST sftp_user none” The user sftp_user is the name of the SFTP user used to authenticate to the SFTP server. On the SFTP server side, the administrator tells me that no configuration changes have been made since last year. I created another certificate and tested from PID to my own SFTP server (Bitvise SSH Server), and it works fine. After updating the user on the SFTP server with the new certificate, the issue continues. I think the problem is in the SFTP server, do you agree? The same issue occurs with SFTP and user/password without a certificate. The next lines are the SFTP server log:

Jan 23 18:20:01 [28037] <ssh2:3>: received SSH_MSG_USER_AUTH_REQUEST (50) packet
Jan 23 18:20:01 [28037] <ssh2:10>: auth requested for user ‘sftp_user_test’, service ‘ssh-connection’, using method ‘password’
Jan 23 18:20:01 [28037] <ssh2:3>: sent SSH_MSG_USER_AUTH_SUCCESS (52) packet (32 bytes)
Jan 23 18:20:01 [28037] <ssh2:20>: SSH2 packet len 44 bytes
Jan 23 18:20:01 [28037] <ssh2:20>: SSH2 packet padding len 19 bytes
Jan 23 18:20:01 [28037] <ssh2:20>: SSH2 packet payload len 24 bytes
Jan 23 18:20:01 [28037] <ssh2:20>: SSH2 packet MAC len 16 bytes
Jan 23 18:20:01 [28037] <ssh2:3>: received SSH_MSG_CHANNEL_OPEN (90) packet
Jan 23 18:20:01 [28037] <ssh2:8>: open of ‘session’ channel using remote ID 4 requested: initial client window len 1048576 bytes, client max packet size 16384 bytes
Jan 23 18:20:01 [28037] <ssh2:8>: confirm open channel remote ID 4, local ID 0: initial server window len 4294967295 bytes, server max packet size 32768 bytes
Jan 23 18:20:01 [28037] <ssh2:3>: sent SSH_MSG_CHANNEL_OPEN_CONFIRMATION (91) packet (48 bytes)
Jan 23 18:20:01 [28037] <ssh2:20>: SSH2 packet len 44 bytes
Jan 23 18:20:01 [28037] <ssh2:20>: SSH2 packet padding len 16 bytes
Jan 23 18:20:01 [28037] <ssh2:20>: SSH2 packet payload len 27 bytes
Jan 23 18:20:01 [28037] <ssh2:20>: SSH2
packet MAC len 16 bytes
Jan 23 18:20:01 [28037] <ssh2:3>: received SSH_MSG_CHANNEL_REQUEST (98) packet
Jan 23 18:20:01 [28037] <ssh2:7>: received ‘subsystem’ request for channel ID 0, want reply true
Jan 23 18:20:01 [28037] <ssh2:3>: sent SSH_MSG_CHANNEL_SUCCESS (99) packet (32 bytes)
Jan 23 18:20:08 [4614] <ssh2:9>: sending CHANNEL_REQUEST (remote channel ID 0, [email protected])
Jan 23 18:20:08 [4614] <ssh2:3>: sent SSH_MSG_CHANNEL_REQUEST (98) packet (64 bytes)
Jan 23 18:20:08 [4614] <ssh2:20>: SSH2 packet len 12 bytes
Jan 23 18:20:08 [4614] <ssh2:20>: SSH2 packet padding len 6 bytes
Jan 23 18:20:08 [4614] <ssh2:20>: SSH2 packet payload len 5 bytes
Jan 23 18:20:08 [4614] <ssh2:20>: SSH2 packet MAC len 16 bytes
Jan 23 18:20:08 [4614] <ssh2:3>: received SSH_MSG_CHANNEL_FAILURE (100) packet
Jan 23 18:20:08 [4614] <ssh2:12>: client sent SSH_MSG_CHANNEL_FAILURE message, considering client alive
Jan 23 18:20:12 [28037] <ssh2:20>: SSH2 packet len 44 bytes
Jan 23 18:20:12 [28037] <ssh2:20>: SSH2 packet padding len 17 bytes
Jan 23 18:20:12 [28037] <ssh2:20>: SSH2 packet payload len 26 bytes
Jan 23 18:20:12 [28037] <ssh2:20>: SSH2 packet MAC len 16 bytes

From the trace I see that the login via password was a success. But I see a CHANNEL_FAILURE coming from the client, and afterwards we enter an endless loop of the following excerpt:

Jan 23 18:20:12 [28037] <ssh2:3>: received SSH_MSG_GLOBAL_REQUEST (80) packet
Jan 23 18:20:12 [28037] <ssh2:3>: sent SSH_MSG_REQUEST_FAILURE (82) packet (32 bytes)
Jan 23 18:20:22 [28037] <ssh2:20>: SSH2 packet len 44 bytes
Jan 23 18:20:22 [28037] <ssh2:20>: SSH2 packet padding len 17 bytes
Jan 23 18:20:22 [28037] <ssh2:20>: SSH2 packet payload len 26 bytes
Jan 23 18:20:22 [28037] <ssh2:20>: SSH2 packet MAC len 16 bytes

With user/password and no certificate validation, the result is the same issue.
I think it isn't a firewall issue, because I can connect with FileZilla from the operating system on PID, but I don't know exactly how PI makes the connection. The firewall in Windows 2008 Server is disabled… Hi Shabarish, I am facing an issue with the sender: when I use SAP*.TXT the sender is not finding any files; however, when I tried your solution with [0-9].txt I can see files being picked up. Any thoughts? Thanks, Arvind Check whether the extension of your file is in capital letters: SAP*.txt is not equal to SAP*.TXT for the adapter. If you use SAP*.txt, files like SAP_mytextFILE.txt can be read by the adapter. Does the number of files in the folder make any difference? I had more than 1000 files with different dates. I tried to copy a few files to another directory and I saw files moving with [SAP].txt, but with the same pattern nothing is picked up from the folder which has the large number of files. Thank you, Arvind
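The filename-pattern pitfall mentioned in this thread (the sender channel's Filename field is interpreted as a regular expression, not a shell glob, and matching is case-sensitive) is easy to demonstrate. The helper below only illustrates regex semantics; it is not the adapter's actual matching code, and full-match behavior is an assumption for the sketch:

```python
import re

def matches(pattern, filename):
    # treat the pattern as a regular expression matched against the
    # whole file name (illustration only, not the adapter's code)
    return re.fullmatch(pattern, filename) is not None

# read as a regex, the glob "SAP*.TXT" means "SA", zero or more "P",
# one arbitrary character, then the literal "TXT" - so it misses the files
assert not matches(r"SAP*.TXT", "SAP_invoice_01.TXT")

# the regex equivalent of the intended glob is "SAP.*\.TXT"
assert matches(r"SAP.*\.TXT", "SAP_invoice_01.TXT")

# and matching is case-sensitive: .txt files need their own pattern
assert not matches(r"SAP.*\.TXT", "SAP_invoice_01.txt")
```

This is consistent with the symptom above: a glob-style `SAP*.TXT` compiles as a valid regex, so the channel shows green, but it matches nothing.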
https://blogs.sap.com/2012/04/11/sap-sftp-sender-adapter-a-quick-walkthrough/
CC-MAIN-2017-09
refinedweb
2,173
70.23
This article is for programmers with the following requirement: before you start learning socket programming, make sure you already have certain basic networking knowledge, such as understanding what an IP address is and what TCP and UDP are. Before we start the tutorial, keep in mind that it only works in a Linux OS environment. If you are using Windows, I have to apologize, because Windows has its own socket programming and it is different from Linux, even though the connection concept is the same. Well, first copy and paste the following code and run it on the server and client, respectively. Both programs can be run on the same computer. It is always easier to understand after getting the code to work.

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>

int main(void)
{
    int listenfd = 0, connfd = 0;
    struct sockaddr_in serv_addr;
    char sendBuff[1025];

    listenfd = socket(AF_INET, SOCK_STREAM, 0);
    printf("socket retrieve success\n");

    memset(&serv_addr, 0, sizeof(serv_addr));
    memset(sendBuff, 0, sizeof(sendBuff));

    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    serv_addr.sin_port = htons(5000);

    bind(listenfd, (struct sockaddr*)&serv_addr, sizeof(serv_addr));

    if(listen(listenfd, 10) == -1) {
        printf("Failed to listen\n");
        return -1;
    }

    while(1)
    {
        connfd = accept(listenfd, (struct sockaddr*)NULL, NULL); // accept awaiting request
        strcpy(sendBuff, "Message from server");
        write(connfd, sendBuff, strlen(sendBuff));
        close(connfd);
        sleep(1);
    }
    return 0;
}

#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <arpa/inet.h>

int main(void)
{
    int sockfd = 0, n = 0;
    char recvBuff[1024];
    struct sockaddr_in serv_addr;

    memset(recvBuff, 0, sizeof(recvBuff));
    if((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
    {
        printf("\n Error : Could not create socket \n");
        return 1;
    }

    serv_addr.sin_family = AF_INET;
    serv_addr.sin_port = htons(5000);
    serv_addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    if(connect(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0)
    {
        printf("\n Error : Connect Failed \n");
        return 1;
    }

    while((n = read(sockfd, recvBuff, sizeof(recvBuff) - 1)) > 0)
    {
        recvBuff[n] = 0;
        fputs(recvBuff, stdout);
        printf("\n");
    }
    if(n < 0)
    {
        printf("\n Read Error \n");
    }
    return 0;
}

After compiling both source files, run Socket-server.out first, then run Socket-client.out. Pay attention here: never mix up the order of executing Socket-server.out and Socket-client.out. Socket-server must be executed first, then Socket-client.out, and never try to break Socket-server's forever loop. This means you need to open two terminals, one for each program. When you execute Socket-client.out, I guess you will get the following result: If you see the message above, congratulations, you have succeeded with your first step into network programming. Otherwise, do some checking of your development environment, or try to run some simpler code, for instance hello world. The server and client are both software, not hardware. What is happening above is that two different pieces of software are executed; to be more precise, the server and client are two different processes with different jobs. If you have experience constructing a server, you might know that a server can be built on a home computer by installing a server OS; that is because a server is a kind of software. Imagine a socket as a seaport that allows a ship to load and unload cargo: a socket is the place where a computer gathers data from, and puts data onto, the internet. Things that need to be initialized are listed as follows: int socket(int domain, int type, int protocol) Next, decide which struct needs to be used based on which domain is used above. struct sockaddr_un { sa_family_t sun_family; char sun_path[]; }; struct sockaddr_in { short int sin_family; int sin_port; struct in_addr sin_addr; }; In this article, I will explain sockaddr_in, which is shown in the code above. serv_addr.sin_family = AF_INET; serv_addr.sin_addr.s_addr = htonl(INADDR_ANY); serv_addr.sin_port = htons(5000); Based on the example above, the server is using port 5000.
You can check it with the following command: sudo netstat -ntlp Then you will see the following list. Inside the red bracket, you will find 0.0.0.0:5000 and Socket-server; it means port 5000 is in use and listening for any valid incoming address. On the client side, serv_addr.sin_addr.s_addr = inet_addr("127.0.0.1") is declared in order to connect to the local (loopback) network. The flow chart below shows the interaction between client and server. The flow chart might look complicated, but don't lose patience with it: every step in the flow chart is needed, and each plays a very important role in the network connection. After all the setup on struct sockaddr_in is done, call the bind function. Bind associates the socket with an address; it is required on the server, while the client can usually skip it, since connect binds implicitly. Server and client start interacting with each other after that, and this is the most important session. As the flow chart shows, the three functions listen, accept, and connect play very important roles. Imagine that the server is like an ATM that only one person can use at a time. So what happens if two or more people come at the same time? The answer is simple: line up and wait for the people in front to finish using the ATM. It is exactly the same as what happens in the server. The listen function acts as the waiting room, asking the traffic to wait there. The accept function acts as the person who asks the traffic waiting inside the waiting room to get ready for its meeting with the server. Last, the connect function acts as the person who wants to carry out some work with the server. This article was published on 2013/5/1, when I was still new to network programming. Maybe some points are not made clear enough, but I have tried my best to present all my knowledge in this article. I hope you can get a good basic beginning here.
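The same socket() → bind() → listen() → accept() / socket() → connect() → read flow from the chart can be exercised in a few lines of Python, which makes it easy to verify on any machine. This sketch mirrors the C example's message; it binds to port 0 so the OS picks a free port instead of assuming 5000 is available:

```python
import socket
import threading

# server side: socket() -> bind() -> listen() -> accept() -> write() -> close()
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(10)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()          # accept the awaiting request
    conn.sendall(b"Message from server")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# client side: socket() -> connect() -> read until the server closes
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
chunks = []
while True:
    data = cli.recv(1024)
    if not data:                    # empty read: server closed the connection
        break
    chunks.append(data)
reply = b"".join(chunks)
cli.close()
t.join()
srv.close()
print(reply.decode())
```

Running the server in a thread stands in for the "two terminals" step; the ordering constraint is the same, which is why listen/bind happen before the client thread could possibly connect.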
http://www.codeproject.com/Articles/586000/Networking-and-Socket-programming-tutorial-in-C
CC-MAIN-2014-35
refinedweb
951
67.15
16179/how-can-i-expose-callbacks-to-fortran-using-python [Callback functions] may also be explicitly set in the module. Then it is not necessary to pass the function in the argument list to the Fortran function. This may be desired if the Fortran function calling the Python callback function is itself called by another Fortran function. That's what has been stated in the scipy documentation. However, I can't seem to find an example of how this would be done. Let's take this particular Fortran/Python combination as an example: test.f:

subroutine test(py_func)
use iso_fortran_env, only : stdout => output_unit
!f2py intent(callback) py_func
external py_func
integer py_func
!f2py integer y,x
!f2py y = py_func(x)
integer :: a
integer :: b
a = 12
write(stdout, *) a
end subroutine

call_test.py:

import test

def func(x):
    return x * 2

test.test(func)

Compiled with the following command (Intel compiler): python f2py.py -c test.f --fcompiler=intelvem -m test

What changes would I have to make to expose the function to the entire Fortran program in the form of a module, so that I could call the function from inside the subroutine test, or any other subroutine in any other Fortran file in the project? The code that I've written is below. The important thing to note here is the absence of any parameters passed to test.
subroutine test()
use iso_fortran_env, only : stdout => output_unit
!f2py intent(callback) py_func
external py_func
integer py_func
integer y,x
!f2py y = py_func(x)
integer :: a
integer :: b
a = 12
write(stdout, *) a
end subroutine

As an aside, I then wrapped py_func in a subroutine so that I could call it without having to declare the following in every file/function where I use it:

integer y
y = py_func(x)
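For reference, the documented f2py pattern for module-level callbacks is to assign the callback as an attribute of the compiled extension module before the call, e.g. `test.py_func = func` followed by `test.test()`, with no callback in the argument list. Since the compiled extension isn't available here, the following pure-Python sketch only mimics that shape: `FakeF2pyModule` and its members are hypothetical stand-ins, not the f2py API.

```python
# FakeF2pyModule is a hypothetical stand-in for a compiled f2py extension:
# f2py exposes an "intent(callback)" external as a settable module attribute.
class FakeF2pyModule:
    py_func = None                      # slot for the module-level callback

    def test(self):
        # the "Fortran side" invokes py_func even though test() takes no
        # callback in its argument list
        return self.py_func(21)

mod = FakeF2pyModule()
mod.py_func = lambda x: x * 2           # explicit module-level assignment
result = mod.test()                     # no callback passed as an argument
```

The point of the shape: once the callback lives on the module, any routine "inside" the module can reach it, which is exactly what the question asks for.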
https://www.edureka.co/community/16179/how-can-i-expose-callbacks-to-fortran-using-python?show=16181
CC-MAIN-2019-43
refinedweb
406
67.65
If the following events occur, DPM might not update the status of a failed path when it comes back online: A monitored-path failure causes a node reboot. The device under the monitored DID path does not come back online until after the rebooted node is back online. The incorrect disk-path status is reported because the monitored DID device is unavailable at boot time, and therefore the DID instance is not uploaded to the DID driver. When this situation occurs, manually update the DID information. From one node, update the global-devices namespace. On each node, verify that command processing has completed before you proceed to the next step. The command executes remotely on all nodes, even though the command is run from just one node. To determine whether the command has completed processing, run the following command on each node of the cluster. Verify that, within the DPM polling time frame, the status of the faulted disk path is now Ok.
http://docs.oracle.com/cd/E19787-01/820-7358/gfxmn/index.html
CC-MAIN-2016-44
refinedweb
163
61.87
Here's some very good news about IBM and Patents. Three new sites to follow: For my own Atom hacking projects I've been using a home-grown Atom parser that's been nudged along through the various Atom drafts and had gotten quite nasty internally... mostly because I never really took the time to implement stuff neatly... I just tweaked whatever I needed to make it work with whatever the current draft was. Today I reimplemented the stack completely -- once in Java and once in Ruby. I now have two very clean, pure Atom 1.0 implementations. What is striking about the two is how much easier and faster it was to write the implementation in Ruby than it was in Java. Start to finish, the Ruby implementation took me about an hour; the Java implementation, about five hours*. Also, the Ruby code is much nicer and easier to follow than the Java code. The more I use Ruby, the more impressed I get. * Event-based API implemented as a thin layer on top of SAX. Provides transparent Base URL support, conversion of ISO 8601 dates, proper handling of text and content constructs, some basic structural validation, support for namespace extensions, etc... lots more than just parsing XML.
https://www.ibm.com/developerworks/community/blogs/jasnell?sortby=0&page=4&maxresults=15&lang=en
CC-MAIN-2015-27
refinedweb
208
65.32
Enterprise Server log records follow a uniform format: [# and #] mark the beginning and end of the record. The vertical bar (|) separates the fields of the record. yyyy-mm-ddThh:mm:ss.SSSS-Z specifies the date and time that the record was created. For example: 2006-10-21T13:25:53.852-0400 Log Level specifies the desired log level. You can select any of the following values: SEVERE, WARNING, INFO, CONFIG, FINE, FINER, and FINEST. The default is INFO. ProductName-Version refers to the current version of the Enterprise Server. For example: glassfish LoggerName is a hierarchical logger namespace that identifies the source of the log module. For example: javax.enterprise.system.core Key Value Pairs refers to pairs of key names and values, typically a thread ID. For example: _ThreadID=14; Message is the text of the log message. For all Enterprise Server SEVERE and WARNING messages and for many INFO messages, the message begins with a message ID that consists of a module code and a numerical value. For example: CORE5004 An example log record might look like this: [#|2006-10-21T13:25:53.852-0400|INFO|GlassFish10.0|javax.enterprise.system.core|_ThreadID=13;|CORE5004: Resource Deployed: [cr:jms/DurableConnectionFactory].|#] The Administration Console presents log records in a more readable display.
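Because the fields are delimited by vertical bars inside the `[#| ... |#]` markers, a record like the example above can be pulled apart mechanically. The dictionary keys in this sketch are my own choices, not names from the product:

```python
RECORD = ("[#|2006-10-21T13:25:53.852-0400|INFO|GlassFish10.0|"
          "javax.enterprise.system.core|_ThreadID=13;|"
          "CORE5004: Resource Deployed: [cr:jms/DurableConnectionFactory].|#]")

def parse_record(rec):
    # strip the "[#|" prefix and "|#]" suffix, then split the six
    # documented fields; maxsplit=5 keeps any "|" inside the message intact
    body = rec[3:-3]
    ts, level, product, logger, key_values, message = body.split("|", 5)
    return {"timestamp": ts, "level": level, "product": product,
            "logger": logger, "key_values": key_values, "message": message}

rec = parse_record(RECORD)
```

Splitting with a maximum of five splits is the design choice that matters: the first five fields cannot contain a bar, but the free-text message conceivably could.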
http://docs.oracle.com/cd/E19226-01/820-7692/ablul/index.html
CC-MAIN-2016-40
refinedweb
215
51.55
I'm going to reply to this in. -scott

Donald Ball <balld@webslingerZ.com>
To: <[email protected]>
cc: <[email protected]>, <[email protected]>
Subject: Re: something funny with namespaces and xalan2.2dev (fwd)
07/06/2001 02:42 AM

On Mon, 2 Jul 2001 [email protected] wrote: > > I'm not 100% sure, but it looks like the current > > SAX2DTM code expects to be passed both, and I can imagine that Cocoon > might > > be trying to take the shortcut... > > The code should work fine if passed only startPrefixMapping and > endPrefixMapping events. I just wrote a small test for this, and > everything seems pretty happy (though it gets confused for local-name() if > you pass null instead of "" for startPrefixMapping...). > > My suspicion is that whoever is generating SAX events within the body > statement, i.e. the form, input, etc., is not generating the namespaceURI > argument for startElement. The SAX2DTM will not try and resolve the > namespace itself, as per: (sorry for the lateness of this response) all of the elements in the form were created using SAX by a custom component. the component strictly creates nodes in the default namespace: handler.startElement("","form","form",attributes); the other elements on the page were generated using SAX by cocoon's FileGenerator, which is ultimately using jaxp (xerces) to parse the file. > Since a ContentHandler doesn't have a way to set the namespace property by > itself, SAX2DTM assumes this property is always true. (It is still in > error in that it requires the qName argument). (This optionality on SAX2, > in my opinion, is really awful.) > > Donald, is this making any sense? a bit. would it be helpful if i got a dump of the SAX events that are being given to xalan for debugging? if so, let me know. for the time being, i've simply removed the namespace from that layer. i'm actually finding that the more i use namespaces, the more i dislike certain aspects of working with them. 
- donald
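The behavior under discussion, whether startElement events carry a namespace URI, can be observed with any SAX2 parser that has the namespaces feature enabled. A small sketch using Python's xml.sax, chosen purely for illustration (the Java SAX API behaves analogously): with the feature on, each startElement event carries a `(namespaceURI, localName)` pair, and elements created without a namespace show up with a `None` URI, which matches the symptom described in the thread.

```python
import xml.sax
from io import StringIO

class Collector(xml.sax.ContentHandler):
    def __init__(self):
        self.events = []
    def startPrefixMapping(self, prefix, uri):
        # fired once per xmlns declaration coming into scope
        self.events.append(("prefix", prefix, uri))
    def startElementNS(self, name, qname, attrs):
        # name is a (namespaceURI, localName) tuple when the feature is on
        self.events.append(("start", name))

parser = xml.sax.make_parser()
parser.setFeature(xml.sax.handler.feature_namespaces, True)
handler = Collector()
parser.setContentHandler(handler)
parser.parse(StringIO('<f:form xmlns:f="urn:forms"><input/></f:form>'))
```

Inspecting `handler.events` shows the prefix mapping arriving before the element, and the un-prefixed `<input/>` reported with no namespace URI at all.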
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200107.mbox/%[email protected]%3E
CC-MAIN-2015-14
refinedweb
357
58.08
When writing a script, it may be unwieldy to call the script defining the whole classpath at the command line, e.g. groovy -cp %JAXB_HOME%\bin\activation.jar;%JAXB_HOME%\bin\... myscript.groovy You can go the other way: let the script itself find the jars it needs and add them to the classpath before using them. To do this, you need to: 1. Get the Groovy root loader: def loader = this.class.classLoader.rootLoader 2. Introduce the necessary URLs to the Groovy root loader. Use whatever logic suits your situation to find the jars / class directories. 3. Load the classes you need. 4. To instantiate the classes, use the newInstance method. Note that newInstance is on steroids when called from Groovy. In addition to being able to call the parameterless constructor (as with Java's Class.newInstance()), you can give any parameters to invoke any constructor. You can also pass a map to initialize properties. The downside of using this approach is that you can't inherit from the classes you load this way: the classes you inherit from need to be known before the script starts to run.
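As a rough cross-language analogue of the four steps above (this is not the Groovy API, just the same pattern expressed with Python's import machinery): extend the loader's search path at runtime, load a module by name, then instantiate a class with named properties. The `mylib`/`Greeter` names are fabricated for the sketch.

```python
import importlib
import os
import sys
import tempfile

# fabricate a small "library" on disk, standing in for a jar/class directory
libdir = tempfile.mkdtemp()
with open(os.path.join(libdir, "mylib.py"), "w") as f:
    f.write(
        "class Greeter:\n"
        "    def __init__(self, name='world'):\n"
        "        self.name = name\n"
        "    def greet(self):\n"
        "        return 'hello ' + self.name\n"
    )

sys.path.append(libdir)                  # step 2: add the location to the loader
mod = importlib.import_module("mylib")   # step 3: load the classes you need
g = mod.Greeter(name="groovy")           # step 4: instantiate with named properties
```

The inheritance caveat has a parallel here too: a class definition in your script cannot extend `mod.Greeter` at module-definition time if the path is only added later at run time.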
http://docs.codehaus.org/pages/diffpages.action?originalId=228170514&pageId=233050100
CC-MAIN-2014-15
refinedweb
188
66.44
Support for new Arduino Hardware platform: WavGat UNO R3 compatible board Hi, I have already used clone Arduinos from AliExpress for several projects without any problems regarding compatibility with the Arduino mainstream. For the last order, I didn't look very closely at the brand of the clone, leaving me now with some problems. I ordered some WavGat Arduino Uno boards. If I can believe the information given, they are using an ATMEGA328P clone as processor. After some tweaking of the specific WavGat Arduino drivers (they are not really maintained by the manufacturer to follow the new Arduino 1.8.x branch), I could compile the sketches without errors. The blinking-LED sketches worked without any problem. The next step was an example with MySensors, drivers, and AltSoftSerial; the serial monitor was outputting data (the example uses 9600 baud). So far, so good. The next example was a distance sensor (example from the site): the result was garbage in the serial monitor. I activated MY_DEBUG, same result, garbage in the serial monitor. Tested at several baud rates (300, 2400, 9600, …): all with the same result, garbage. The same sketch works like a charm on a real Arduino board, so it is not a programming error in the sketch. So for testing, I created a minimal sketch with only a serial print of "hello world". Works! No garbage in the serial monitor. Then I started to add the first lines specific to MySensors:

#define MY_DEBUG
#define MY_RADIO_RFM69
#define MY_BAUD_RATE (9600ul)

#include <SPI.h>
#include <MySensors.h>

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
}

void loop() {
  // put your main code here, to run repeatedly:
  Serial.print("hello world");
  Serial.println();
}

When adding the #include <MySensors.h>, the serial monitor begins outputting garbage. So clearly, somewhere in the MySensors library something happens to produce this. Very probably it is due to the WavGat Arduino not being 100% compatible. The problem: where to begin the debugging? 
I searched the site, but until now didn't find any information to guide me in the right direction. @evb I ran into the same trap and ordered some nanos with fake AVRs a while ago. These require separate board files etc. Too much hassle IMHO. Better count your losses and order a board with a real ATmega. @evb Yes, as Yveaux suspects, it might be a fake/counterfeit Atmega328 CPU on your Uno R3 device. Sparkfun have a good story to read about this topic, and those ATmega328s were actually something else. So basically you can have no trust in a China supplier, but you can also be lucky and actually find a good supplier and real, good ATmega328s. I don't think they are fake, otherwise they wouldn't do anything. They work with normal sketches; I even used several MySensors example sketches, running without any problem, provided that I don't include the <MySensors.h> line. Because the board needs its own files to work in the Arduino IDE, the ATmega clone used is too different from the normal line of ATmegas (clones). Searching on Google revealed that this board is probably using a LogicGreen LGT8Fx8 chip instead of an ATmega328P. When I go into the folders of the board-specific driver files for the IDE, I see folders with names like lgt8f88a, etc. Meanwhile, I tested the setup with another Chinese brand of Arduino clone. Those clones don't need specific driver files; you can just use the Arduino/Genuino Uno board definition. Everything is working. It is just a challenge to see whether I could make it work with this WavGat brand of so-called Arduino Unos. @evb said in Support for new Arduino Hardware platform: WavGat UNO R3 compatible board: It is just a challenge to see whether I could make it work with this WavGat brand of so-called Arduino Unos. Yes, you probably can. However, is it worth the effort ?...
https://forum.mysensors.org/topic/10170/support-for-new-arduino-hardware-platform-wavgat-uno-r3-compatible-board/5
CC-MAIN-2019-39
refinedweb
660
64.61
in reply to aXML vs TT2 Ok. It's not the worst solution, but it's inherently fragile. It requires that I know every element name that can be found in an XML document. Some formats are extensible (e.g. XHTML), so there is no such list for them. And even if we assume that list can be found, most people won't bother trying to create it. It requires that I cross-check that list against the list of keywords in a hash. How is that possible to do reliably if the list is as dynamic as you say? This will lead to errors that can be very subtle. And given how templates are typically used, the errors will not be seen by the dev and they will be seen by end users. The thing is, XML has already solved that problem. The mechanism is called "namespaces".

<root xmlns:a="...">
<a:inc>header</a:inc>
<table border=0>
<tr>
<th>User ID</th>
<th>Name</th>
<th>Email</th>
</tr>
<a:db_select>
<a:query> SELECT * FROM user ORDER BY id </a:query>
<a:mask>
<tr>
<td><a:d>id</a:d></td>
<td>[hlink action="user_profile" user_id="<a:d>id</a:d>"]<a:d>name</a:d>[/hlink]</td>
<td><a:d>email</a:d></td>
</tr>
</a:mask>
</a:db_select>
</table>
<a:inc>footer</a:inc>
</root>

PS — I personally prefer <inc name="header"/> over <inc>header</inc>. ...It requires that I know every element name that can be found in an XML document... Ok then, we add a new sub called something like <no_parse> which takes a path to the XML to be included in the output. The file is loaded, stored in memory, and inserted after the parser has exited. That way it doesn't matter if the file to be included contains tags which match the plugins, because they won't be seen by the parser. The original version has something similar called <ignore>, but I haven't got round to creating that functionality in the new one yet. Oh, and regarding shortened tags like <inc name="header"/>: I like that, however supporting those sorts of tags as well as the standard ones will require extra compute time under the current methodology. 
It's perfectly possible to do, but I'm not sure the gain would be worth the overhead. That's not to say that the current methodology is the be-all and end-all; Corion once suggested writing a compiler for aXML, which would solve that problem and further improve overall performance. However, such a solution is currently over my head, and given how blazing fast the current version is, I just can't feel the desire to try and implement it (again).
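The namespace mechanism described above can be sketched outside Perl as well. Here is a minimal Python illustration (the `http://example.com/axml` URI is made up for the example, and this is not aXML's actual parser) showing that a processor can dispatch on the namespace URI instead of maintaining a list of every element name that might appear in the document:

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace URI for the templating plugins; a real
# deployment would pick its own.
AXML_NS = "http://example.com/axml"

doc = f"""<root xmlns:a="{AXML_NS}">
  <a:inc>header</a:inc>
  <p>ordinary XHTML content is left alone</p>
  <a:inc>footer</a:inc>
</root>"""

root = ET.fromstring(doc)
for el in root:
    # ElementTree exposes namespaced tags as "{uri}localname",
    # so plugin elements are unambiguous without any keyword list.
    if el.tag.startswith("{" + AXML_NS + "}"):
        plugin = el.tag.split("}", 1)[1]
        print("plugin call:", plugin, el.text)
    else:
        print("plain markup:", el.tag)
```

Anything outside the plugin namespace passes through untouched, so an extensible format like XHTML never collides with the template vocabulary.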
https://www.perlmonks.org/?node_id=932959
ServletContextListener example

Before going into the details of ServletContextListener we should understand what ServletContext is. The ServletContext is common to all the components of a web application, while each servlet has its own ServletConfig.

ServletContextListener is an interface which contains two methods; if we implement the interface, we have to implement both of them. This listener helps an application run code when the web application is started up or shut down, and it can be combined with a timer, for example to run a code snippet every minute once the application is up.

What is the architecture of the servlet package? The javax.servlet package provides the core interfaces, and the central abstraction in the Servlet API is the Servlet interface. All servlets implement this interface, usually by extending a class such as HttpServlet, which provides a default implementation for all the methods a subclass does not override.

Why do we require wrappers in servlets? Wrappers let you modify the request/response as it passes through, e.g. for compression, encryption, or XSLT transformation.

Advantages of servlets over CGI: servlets inherit Java's platform independence and object orientation and run very fast, because initialization takes place only the very first time a servlet is loaded; after that, requests are handled without the per-request startup overhead of CGI, so servlets give all the benefits of CGI scripting languages without that overhead.
http://www.roseindia.net/tutorialhelp/comment/82325
rogloh wrote: »
autobaud
        ...
        mov     a, waitbit
        shl     a, #3
        waitx   a
        ret

rogloh wrote: »
Interesting dgately. How does your machine compare? I have OS X 10.10.5 with a 2.9 GHz Intel Core i5 dual core CPU: Intel(R) Core(TM) i5-5287U CPU @ 2.90GHz

Looking at the MainLoader1.Spin2 code in some more detail, here is the fix I've found that I now think solves the issue... using it I now have 100% success with the download at both 2Mbps and the default (which I think might be 921600bps). This was with 100 iterations each. A slower machine that introduces longer gaps into the serial transmission between these space characters, or one with no buffering, may not see this problem.

ersmith wrote: »
Thanks for testing this, guys. Here's a revised binary (built with the latest github sources) that incorporates @rogloh's suggestions and @jmg's idea for more robust autobaud detection. It works fine on my Mac Mini, but OTOH the older version of the code did too, so evidently there is some OS/hardware variation. If you get a chance please give it a try and let me know how it works.

% for (( i=1; i<50; i++ )); do echo -n "$i " && ./loadp2.mac main.binary -p /dev/cu.usbserial-DN43WMDD && echo SUCCESS; done
1 SUCCESS
2 SUCCESS
...
48 SUCCESS
49 SUCCESS

RLs-MacBook-Pro:loadp2 roger$ make
mkdir -p ./build
gcc -Wall -O -g -DMACOSX -o build/loadp2 loadp2.c loadelf.c osint_linux.c u9fs/u9fs.c u9fs/authnone.c u9fs/print.c u9fs/doprint.c u9fs/rune.c u9fs/fcallconv.c u9fs/dirmodeconv.c u9fs/convM2D.c u9fs/convS2M.c u9fs/convD2M.c u9fs/convM2S.c u9fs/readn.c
osint_linux.c:215:25: error: use of undeclared identifier 'B921600'
        cfsetospeed(&sparm, B921600);    // dummy speed, overridden later
                            ^
osint_linux.c:216:25: error: use of undeclared identifier 'B921600'
        cfsetispeed(&sparm, B921600);    // dummy speed
                            ^

rogloh wrote: »
@ersmith After checking out the latest github release I wasn't able to make the latest loadp2 to test.
Looks like this B921600 related change broke the Mac build (on my system anyway).

~/flexgui$ make install
make -C spin2cpp
make[1]: Entering directory '/home/jim/flexgui/spin2cpp'
mkdir -p ./build
bison -p spinyy -t -b ./build/spin -d frontends/spin/spin.y
frontends/spin/spin.y: warning: 25 shift/reduce conflicts [-Wconflicts-sr]
bison -p basicyy -t -b ./build/basic -d frontends/basic/basic.y
frontends/basic/basic.y: warning: 10 shift/reduce conflicts [-Wconflicts-sr]
bison -p cgramyy -t -b ./build/cgram -d frontends/c/cgram.y
frontends/c/cgram.y: warning: 3 shift/reduce conflicts [-Wconflicts-sr]
gcc -g -Wall -I. -I./build -DFLEXSPIN_BUILD -o build/lexer.o -c frontends/lexer.c
frontends/lexer.c:7:10: fatal error: string.h: No such file or directory
 #include <string.h>
          ^~~~~~~~~~
compilation terminated.
Makefile:141: recipe for target 'build/lexer.o' failed
make[1]: *** [build/lexer.o] Error 1
make[1]: Leaving directory '/home/jim/flexgui/spin2cpp'
Makefile:200: recipe for target 'spin2cpp/build/fastspin' failed
make: *** [spin2cpp/build/fastspin] Error 2

dpkg --status build-essential

That matters more on Boot code, where RCFAST is not a large multiple of BAUD, and so getting the correct centred capture is more important. The boot code carefully selects a unique autobaud char, that cannot have a false skew-sample in the smart pin hardware used for capture. SW polled autobaud could select another char - eg '|' is not used in Base64, and has 3L+5H for an 8bw capture possible.

Not a slower machine...

MacBook Pro (15-inch, 2019)
Processor: 2.3 GHz 8-Core Intel Core i9
Memory: 32 GB 2400 MHz DDR4
Graphics: Intel UHD Graphics 630 1536 MB
macOS Catalina 10.15.2 (19C57)

dgately

I have kept this older OS X so I can still use BST and PropellerIDE with some legacy P1 projects I developed. If it wasn't for that (and a bunch of other tool customisations I set up over the years with homebrew etc), I'd want to upgrade this machine to the newer OS.
But then I'm in another world of hurt upgrading all the tools and seeing what is broken/missing. Almost better off to just buy another Mac I guess, unless I set up some type of dual boot machine. Ugh.

Presumably Git can be used to wind things back but I wouldn't know how. Here's an old set of sources from July last year.

dgately

There probably was a good reason that code was #ifdef'd out originally in osint_linux.c to stop this problem.

Tested it with 50 using -l2000000, and 50 without (which now defaults to 2Mbps as well I think). Seems good so far.

I tried your suggestion for getting flexgui running on Mint. Here is the error message I received: where did the wheels come off? Jim

Thanks for the replies. I will check to see if build-essential is installed. The C compiler is not installed as I don't normally work in C. Which one do you recommend, or is there a standard one that will come with my linux distribution? Jim

Strange that it can't find the basic headers.
http://forums.parallax.com/discussion/comment/1488828/
Tower of Hanoi

At a rate of one move per second, that is years! Clearly there is more to this puzzle than meets the eye.

The animation below demonstrates a solution to the puzzle with four disks. Notice that, as the rules specify, the disks on each peg are stacked so that smaller disks are always on top of the larger disks. If you have not tried to solve this puzzle before, you should try it now. You do not need fancy disks and poles; a pile of books or pieces of paper will work.

How do we go about solving this problem recursively? How would you go about solving this problem at all? What is our base case? Let's think about this problem from the bottom up.

Suppose you have a tower of five disks, originally on peg one. If you already knew how to move a tower of four disks to peg two, you could then easily move the bottom disk to peg three, and then move the tower of four from peg two to peg three. But what if you do not know how to move a tower of height four? Suppose that you knew how to move a tower of height three to peg three; then it would be easy to move the fourth disk to peg two and move the three from peg three on top of it. But what if you do not know how to move a tower of three? How about moving a tower of two disks to peg two and then moving the third disk to peg three, and then moving the tower of height two on top of it? But what if you still do not know how to do this? Surely you would agree that moving a single disk to peg three is easy enough, trivial you might even say. This sounds like a base case in the making.

Here is a high-level outline of how to move a tower from the starting pole to the goal pole, using an intermediate pole:

- Move a tower of height-1 to an intermediate pole, using the final pole.
- Move the remaining disk to the final pole.
- Move the tower of height-1 from the intermediate pole to the final pole using the original pole.
As long as we always obey the rule that the larger disks remain on the bottom of the stack, we can use the three steps above recursively, treating any larger disks as though they were not even there. The only thing missing from the outline above is the identification of a base case. The simplest Tower of Hanoi problem is a tower of one disk. In this case, we need move only a single disk to its final destination. A tower of one disk will be our base case. In addition, the steps outlined above move us toward the base case by reducing the height of the tower in steps 1 and 3. Below we present a possible Python solution to the Tower of Hanoi puzzle.

def move_tower(height, from_pole, to_pole, with_pole):
    if height >= 1:
        move_tower(height - 1, from_pole, with_pole, to_pole)
        move_disk(from_pole, to_pole)
        move_tower(height - 1, with_pole, to_pole, from_pole)

Notice that the code above is almost identical to the English description. The key to the simplicity of the algorithm is that we make two different recursive calls, the first to move all but the bottom disk of the initial tower to an intermediate pole. Before we make the second recursive call, we simply move the bottom disk to its final resting place. Finally, we move the tower from the intermediate pole onto the top of the largest disk. The base case is detected when the tower height is 0; in this case there is nothing to do, so the move_tower function simply returns. The important thing to remember about handling the base case this way is that simply returning from move_tower is what finally allows the move_disk function to be called.
If we implement this simple move_disk function, we can then illustrate the required moves to solve the problem:

def move_disk(from_pole, to_pole):
    print('moving disk from {} to {}'.format(from_pole, to_pole))

Now, calling move_tower with the arguments 3, 'A', 'B', 'C' will give us the output:

moving disk from A to B
moving disk from A to C
moving disk from B to C
moving disk from A to B
moving disk from C to A
moving disk from C to B
moving disk from A to B

Now that you have seen the code for both move_tower and move_disk, you may be wondering why we do not have a data structure that explicitly keeps track of what disks are on what poles. Here is a hint: if you were going to explicitly keep track of the disks, you would probably use three Stack objects, one for each pole. The answer is that Python provides the stacks that we need implicitly through the call stack.
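The amount of work this solution does grows quickly: each extra disk doubles the moves of the smaller tower and adds one more, which gives 2**h - 1 moves for a tower of height h. A small sketch confirming that recurrence:

```python
def count_moves(height):
    """Moves made by move_tower: two recursive calls plus one move_disk."""
    if height == 0:
        return 0
    return 2 * count_moves(height - 1) + 1

# Compare the recurrence against the closed form 2**h - 1.
for h in range(1, 6):
    print(h, count_moves(h), 2 ** h - 1)
```

For the 64-disk tower of the original legend, 2**64 - 1 moves at one move per second works out to hundreds of billions of years, which is why no one has ever finished the puzzle by hand.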
https://bradfieldcs.com/algos/recursion/tower-of-hanoi/
Static Methods

The Math class has a lot of static methods. These methods can be used without creating an object of the Math class: no object is needed, no heap space is spent, and the methods can be called any number of times. They do not use instance variables.

int x = Math.round(42.2f);
int y = Math.min(56, 12);
int z = Math.abs(-343); // use the class name instead of a reference

Because these methods don't use instance variables, their behavior doesn't need to know about a specific object. A class with one or more static methods can still be instantiated. Static methods cannot use non-static variables, as they have no access to an instance of the class; likewise, they cannot call non-static methods. It is possible to call a static method through a reference variable, but that is not ideal; use the class name.

A static variable's value is the same for all instances of the class: one value per class. A static variable is initialized only when the class is first loaded.

public class Duck {
    private int size;
    private static int duckCount = 0;

    public Duck() {
        duckCount++;
    }
}

duckCount will keep incrementing each time the Duck constructor runs, because duckCount is static and won't reset to 0. This is useful for knowing how many instances of Duck have been created while the program is running. A Duck object does not keep its own copy of duckCount; because duckCount is static, all Duck objects share a single copy of it. The static variable lives in the class, not the object.

Static variables are initialized when the class is loaded. But when is a class loaded? When `new` is called on it, or when somebody uses a static method or static variable of the class. Either way, static variables are initialized before any object of that class is created, and before any static method runs. Static variables get default values just like instance variables do: `static int player;` gives player a default value of 0.
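The duck counter described above can be run end to end; here is a small sketch (the class name is capitalized to Duck, and the getter and main method are my additions for illustration):

```java
public class Duck {
    private int size;
    private static int duckCount = 0; // one copy, shared by every Duck

    public Duck() {
        duckCount++; // runs once per new Duck; never reset between objects
    }

    public static int getDuckCount() {
        return duckCount;
    }

    public static void main(String[] args) {
        new Duck();
        new Duck();
        new Duck();
        // all three constructors bumped the same shared counter
        System.out.println(Duck.getDuckCount());
    }
}
```

Note that getDuckCount() is itself static, so it can be called as Duck.getDuckCount() without any Duck instance in hand.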
A variable can be made to remain constant by using the keyword final:

public static final double PI = 3.141592653589793;

The variable is marked public so that any code can access it. It is marked static so that you don't need an instance of class Math (which, remember, you're not allowed to create). It is marked final because PI doesn't change (as far as Java is concerned). Static final variables are constants, so by convention the name should be upper case, with underscores separating the words. A static final variable must be initialized; if it is not, the compiler will throw an error. A plain static variable, however, may be left uninitialized. You also can't declare a static variable inside a static method; static variables must be declared at class level.

A final variable means you cannot change its value. A final method means you cannot override the method. A final class means you cannot extend the class. A class may need to be marked final for security purposes. For instance, the String class is final because nobody should be allowed to create a subclass of it: if extended String classes could be substituted with their own string subclass objects, programs could be made to fail. If you don't want a class to be instantiated, you can mark the constructor of the class as private.

A few Math methods:

Math.random() returns a random value of type double, between 0.0 and 1.0:

double r = Math.random();
int r2 = (int) (Math.random() * 5);

Math.abs() returns the absolute value; it takes arguments such as int or float:

int x = Math.abs(-240);       // returns 240
double d = Math.abs(240.45);  // returns 240.45

Math.round() rounds the value to an int (or a long, for a double argument):

int x = Math.round(-24.8f);  // returns -25
int y = Math.round(24.45f);  // returns 24

Math.min() returns the smaller of two values:
int x = Math.min(24, 240);        // returns 24
double d = Math.min(90.5, 90.4);  // returns 90.4

Math.max() returns the larger of two values:

int x = Math.max(24, 240);        // returns 240
double d = Math.max(90.5, 90.4);  // returns 90.5

To treat a primitive data type as an object, we need to wrap it, and there are wrapper classes that do the job:

int x = 32;
ArrayList list = new ArrayList();
list.add(x);

The code above will not work on systems running versions earlier than Java 5.0: the list will accept only objects, which can be obtained by the method below.

Wrapping (also called boxing):

Integer iwrap = new Integer(x); // Integer is the wrapper class

Unwrapping:

int unwrapped = iwrap.intValue();

In the same way there are wrapper classes for Boolean, Character, Byte, Short, Long, etc., all with the same objective of providing an object form of a primitive.

Even though it is not so relevant now, it's worth knowing how to deal with Integer/Object conversion for a List prior to Java 5.0:

public void doOldWay() {
    ArrayList listOfNumbers = new ArrayList();
    // an int cannot be added directly; primitives and objects cannot be
    // used interchangeably, so convert to an object first
    listOfNumbers.add(new Integer(3));
    // to get the integer value back out of the list:
    Integer one = (Integer) listOfNumbers.get(0); // get the object first
    int intOne = one.intValue();                  // then get the int from the object
}

With autoboxing (Java 5.0 and later):

ArrayList<Integer> listOfNumbers = new ArrayList<Integer>();
listOfNumbers.add(3); // just add it; the compiler does the wrapping
// in other words, the compiler actually stores an Integer object in the
// list, but you get to add it as a primitive
int num = listOfNumbers.get(0); // the compiler automatically unwraps (unboxes) the object

More examples of autoboxing:

Integer i = new Integer(42);
i++;

Here is why it matters: since the compiler does the autoboxing, we can use the Integer object and the int primitive interchangeably.
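The boxing behavior discussed above can be seen in one small program (the class name AutoboxDemo is my own):

```java
import java.util.ArrayList;
import java.util.List;

public class AutoboxDemo {
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>();
        numbers.add(3);              // int is autoboxed to Integer by the compiler
        int first = numbers.get(0);  // Integer is auto-unboxed back to int

        Integer i = 42;              // autoboxing on assignment
        i++;                         // unbox, increment, rebox

        System.out.println(first + " " + i);
    }
}
```

Note that i++ on an Integer is really three steps under the hood (unbox, increment, box a new Integer), which is exactly the interchangeability the compiler provides.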
Local variables should be initialized, which is why the code below will not compile:

public class textbox {
    public static void main(String[] args) {
        int j;
        System.out.println(j);
    }
}

or

public class textbox2 {
    public static void main(String[] args) {
        textbox2 t = new textbox2();
        t.go();
    }
    void go() {
        int x;
        System.out.println(x);
    }
}

Class variables (fields), on the other hand, take default values:

public class textbox2 {
    int x;
    public static void main(String[] args) {
        textbox2 t = new textbox2();
        t.go();
    }
    void go() {
        System.out.println(x);
    }
}

Other wrapper-class methods:

String s = "2";
int x = Integer.parseInt(s);
double d = Double.parseDouble("4230.42");

** This won't work **
String t = "two";
int x = Integer.parseInt(t);

It will compile, but it will fail at run time (with a NumberFormatException).

Boolean is a little different:

boolean b = new Boolean("true").booleanValue();

The Boolean constructor gives you the object, and then you need to unwrap it with the booleanValue() method.

Next: page 314. More in the next part.

References: Head First Java, 2nd Edition
https://knowingofnotknowing.wordpress.com/2016/06/19/java-beginners-part-13/
im looooost freaking obj will not texture for nothing someone point me in the right direction please I have texutres in my folder but don’t know how to add it on my obj

neois82 - 12 June 2013 03:10 PM: im looooost freaking obj will not texture for nothing someone point me in the right direction please I have texutres in my folder but don’t know how to add it on my obj

import the .OBJ and you will see a grey shape. Then highlight the surface you want and in the surfaces tab go to “diffuse colour” and select a texture file for that channel

Working from memory, so please bear with me…
1) Find the Surfaces Tab.
2) Select the Editor Tab.
3) Use the Surface Tool to select the part of your object you want to apply a texture to - that part should then get an orange outline around it.
4) Options will appear for things like Diffuse Color, Specular Color, etc, in the Surfaces Menu. What you want is Diffuse Color.
5) Click the white square next to the Diffuse Color box.
6) A pop-up box will appear - the top option, I believe, is something like ‘select file’. That’s the one you want.
7) A dialog box will open. Use it to navigate to the file you want.
8) Click Enter, and the selected image should become the texture on the selected part of the object.

If that doesn’t work - if you get to the last step and the texture doesn’t appear, but the object changes color - the object hasn’t been UV mapped. UV mapping is basically unwrapping the 3d object into a 2d plane, letting a texture be applied from a 2d image. There are various programs that let you UV map, but I’m hoping you won’t need that because I don’t know much about that. Best of luck with this!

jerriecan’s given you the detail. I only gave you the overview (as I wasn’t sure how familiar you are with DAZ). Does your object have materials and is it UV mapped? Push buttons, see what happens!
http://www.daz3d.com/forums/viewthread/23687/
Christine Dodrill - Blog

Reading this webpage is possible because of millions of hours of effort by tens of thousands of actors across thousands of companies. At some level it's a minor miracle that this all works at all. Here's a preview of the madness that goes into hitting enter on christine.website and this website being loaded.

The user types christine.website into the address bar and hits enter on the keyboard. This sends a signal over USB to the computer and the kernel polls the USB controller for a new message. It's recognized as coming from the keyboard. The input is then sent to the browser through an input driver talking to a windowing server talking to the browser program. The browser selects the memory region normally reserved for the address bar. The browser then parses this string as an RFC 3986 URI and scrapes out the protocol (https), hostname (christine.website) and path (/). The browser then uses this information to create an abstract HTTP request object with the Host header set to christine.website, the HTTP method set to GET, and the path set to the path. This request object then passes through various layers of credential storage and middleware to add the appropriate cookies and other headers in order to tell my website what language it should localize the response to, what compression methods the browser understands, and what browser is being used to make the request.

The browser then checks if it has a connection to christine.website open already. If it does not, then it creates a new one. It creates a new connection by figuring out the IP address of christine.website using DNS. A DNS request is made over UDP on port 53 to the DNS server configured in the operating system (such as 8.8.8.8, 1.1.1.1 or 75.75.75.75). The UDP connection is created using operating system-dependent system calls and a DNS request is sent.
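The RFC 3986 parsing and request-object steps described above can be sketched with Python's standard library (a simplification of what a browser does, not its actual code; the header values are illustrative):

```python
from urllib.parse import urlsplit

# Split the typed string into RFC 3986 components.
parts = urlsplit("https://christine.website/")
print(parts.scheme)    # https
print(parts.hostname)  # christine.website
print(parts.path)      # /

# An "abstract request object" is then built from those pieces,
# before cookies and other middleware layers decorate it.
request = {
    "method": "GET",
    "path": parts.path or "/",
    "headers": {
        "Host": parts.hostname,
        "Accept-Encoding": "gzip",            # compression the client understands
        "User-Agent": "example-browser/1.0",  # illustrative value
    },
}
```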
The packet that was created is destined for the DNS server and is added to the operating system's output queue. The operating system then looks in its routing table to see where the packet should go. If the packet matches a route, it is queued for output to the relevant network card. The network card layer then checks the ARP table to see what MAC address the Ethernet frame should be sent to. If the ARP table doesn't have a match, then an ARP probe is broadcast to every node on the local network. Then the driver waits for an ARP response to be sent to it with the correct IP -> MAC address mapping. The driver then uses this information to send out the Ethernet frame to the node that matches the IP address in the routing table. From there the packet is validated on the router it was sent to. The router then unwraps the packet to the IP layer to figure out the destination network interface to use. If this router also does NAT termination, it creates an entry in the NAT table for future use for a site-configured amount of time (for UDP at least). It then passes the packet on to the correct node and this process is repeated until it gets to the remote DNS server. The DNS server then unwraps the Ethernet frame into an IP packet, then a UDP packet, and then a DNS request. It checks its database for a match and if one is not found, it attempts to discover the correct name server to contact by making an NS record query to its upstreams or to the authoritative name server for the WEBSITE namespace. This triggers another round of Ethernet frames and UDP packets until it reaches the upstream DNS server, which hopefully replies with the correct address. Once the DNS server gets the information it needs, it sends the results back to the client as a wire-format DNS response. UDP is unreliable by design, so this packet may or may not survive the entire round trip.
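The wire format of the DNS query described above is small enough to build by hand; a minimal A-record query builder in Python (the transaction ID is fixed here purely for illustration, real resolvers randomize it):

```python
import struct

def build_dns_query(name: str, tid: int = 0x1234) -> bytes:
    # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1,
    # then zero answer/authority/additional counts.
    header = struct.pack(">HHHHHH", tid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split("."))
    qname += b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN).
    question = qname + struct.pack(">HH", 1, 1)
    return header + question

packet = build_dns_query("christine.website")
```

Sent over UDP to port 53, this is essentially the datagram the stub resolver emits.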
It may take one or more retries for the DNS information to get to the remote server and back, but it usually works the first time. The response to this request is cached based on the time-to-live specified in the DNS response. The response also contains the IP address of christine.website. The protocol used in the URL determines which TCP port the browser connects to. If it is http, it uses port 80. If it is https, it uses port 443. The user specified HTTPS, so port 443 on whatever IP address DNS returned is dialed using the operating system's network stack system calls. The TCP three-way handshake is started with that target IP address and port. The client sends a SYN packet, the server replies with a SYN ACK packet and the client replies with an ACK packet. This indicates that the entire TCP session is active and data can be transferred and read through it. However, this data is UNENCRYPTED by default. Transport Layer Security is used to encrypt this data so prying eyes can't look into it. TLS has its own handshake too. The session is established by sending a TLS ClientHello packet with the domain name (christine.website), the list of ciphers the client supports, any application layer protocols the client supports (like HTTP/2) and the list of TLS versions that the client supports. This information is sent over the wire to the remote server using that entire long and complicated process that I spelled out for how DNS works, except a TCP session requires the other side to acknowledge when data is successfully received. The server on the other end replies with a ClientHelloResponse that contains a HTTPS certificate and the list of protocols and ciphers the server supports. Then they do an encryption session setup rain dance that I don't completely understand and the resulting channel is encrypted with cipher (or encrypted) text written and read from the wire and a session layer translates that cipher text to clear text for the other parts of the browser stack. 
The browser then uses the information in the ClientHelloResponse to decide how to proceed from here. If the browser notices the server supports HTTP/2 it sets up a HTTP/2 session (with a handshake that involves a few roundtrips like what I described for DNS) and creates a new stream for this request. The browser then formats the request as HTTP/2 wire format bytes (binary format) and writes it to the HTTP/2 stream, which writes it to the HTTP/2 framing layer, which writes it to the encryption layer, which writes it to the network socket and sends it over the internet. If the browser notices the server DOES NOT support HTTP/2, it formats the request as HTTP/1.1 wire formatted bytes and writes it to the encryption layer, which writes it to the network socket and sends it over the internet using that complicated process I spelled out for DNS. This then hits the remote load balancer which parses the client HTTP request and uses site-local configuration to select the best application server to handle the response. It then forwards the client's HTTP request to the correct server by creating a TCP session to that backend, writing the HTTP request and waiting for a response over that TCP session. Depending on site-local configuration there may be layers of encryption involved. Now, the request finally gets to the application server. This TCP session is accepted by the application server and the headers are read into memory. The path is read by the application server and the correct handler is chosen. The HTML for the front page of christine.website is rendered and written to the TCP session and travels to the load balancer, gets encrypted with TLS, the encrypted HTML gets sent back over the internet to your browser and then your browser decrypts it and starts to parse and display the website. The browser will run into places where it needs more resources (such as stylesheets or images), so it will make additional HTTP requests to the load balancer to grab those too. 
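The HTTP/1.1 "wire formatted bytes" mentioned above are just CRLF-delimited text; a hedged sketch of what the browser writes to the encrypted channel (header choices are illustrative):

```python
def format_http11_request(host: str, path: str) -> bytes:
    # Request line, then headers, then a blank line to terminate the request.
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        "Accept-Encoding: gzip",
        "Connection: close",
        "",
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

raw = format_http11_request("christine.website", "/")
```

HTTP/2 carries the same semantic fields, but as binary frames with compressed headers rather than this plain text.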
The end result is that the user sees the website in all its glory. Given all these moving parts it's astounding that this works as reliably as it does. The TCP, ARP and DNS exchanges also happen at each level of the stack. There are layers upon layers upon layers of interacting protocols and implementations. This is why it is hard to reliably put a website on the internet. If there is a god, they are surely the one holding all these potentially unreliable systems together to make everything appear like it is working.

This article was posted on May 19, 2020. Facts and circumstances may have changed since publication. Please contact me before jumping to conclusions if something seems wrong or unclear.
https://christine.website/blog/how-http-requests-work-2020-05-19
On Sat, 2012-09-22 at 01:25 +0200, Bernhard R. Link wrote:
> * peter green <[email protected]> [120921 21?
>
> I'm quite suprised to see /sys to be mounted in chroots. Wasn't one
> of the reasons to start /sys and not put the info there in /proc to
> not have to have it available in chroots?

I've never heard that claimed.

> Shouldn't that information about hardware usually be kept away from
> chroots?

Chroots aren't containers. A chrooted environment can use all CPUs and all network devices, and programs may expect to find information about them under sysfs. If you're concerned about leaking sensitive information to untrusted processes then procfs is a far, far bigger problem (somewhat mitigated by hidepid or pid namespaces).

Ben.

--
Ben Hutchings
Once a job is fouled up, anything done to improve it makes it worse.

Attachment: signature.asc
Description: This is a digitally signed message part
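As an aside, the hidepid mitigation mentioned in the reply is a procfs mount option; an illustrative /etc/fstab entry (not from the thread itself) might look like:

```
# Remount /proc so users can only see their own processes (hidepid=2).
proc  /proc  proc  defaults,hidepid=2  0  0
```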
https://lists.debian.org/debian-devel/2012/09/msg00540.html
WL#7784: Store temporary table metadata in memory

Affects: Server-8.0 — Status: Complete — Priority: Medium

Temporary table metadata is now stored in FRM files. There are two main reasons for that:

1. To know the list of tables which should be deleted on server restart, and to pass that knowledge to the SE.
2. Originally, a valid fully initialized TABLE_SHARE object could be constructed from an FRM file only.

These reasons become obsolete in 5.7 with the New DD:

- InnoDB (the main SE) stores temporary tables in a dedicated tablespace, which is discarded on startup. So, there is no need to pass a list of individual table names. Other SEs can implement the same logic themselves.
- There are no FRM files. Storing temporary table metadata in the persistent DD is wrong by design, and it also creates more problems than it solves.

This WL is to avoid storing temporary table metadata in the persistent DD.

NOTE: MySQL's temporary table implementation differs from the SQL Standard in that MySQL temporary tables are not shown in INFORMATION_SCHEMA. That's why this WL is possible.

NF1: No user-visible changes.

Types of temporary tables
=========================

There are several types of temporary tables in the server:

1) Implicit temporary tables created by the optimizer for query execution. These tables do not have an FRM and are already represented by in-memory TABLE/TABLE_SHARE structures. They don't need to be represented in the on-disk data dictionary or have in-memory dd::Table objects. In the scope of this WL they are relevant only because on start-up we need to remove orphan tables of this kind, which remain after a previous server run aborted due to a crash (as is done now). It is fairly easy to do so, as these tables are created in tempdir with a #sql prefix, or in the temporary tablespace.

2) Explicit temporary tables. These are tables created by the user with the CREATE TEMPORARY TABLE statement. Currently these tables have FRMs.
The goal of this WL is to change the implementation so that these tables are not represented in the on-disk DD. Instead they should be represented by an in-memory dd::Table object which is associated with and owned by the temporary table's TABLE_SHARE, and which is not present in the in-memory DD. Similarly to 1), on start-up we need to remove orphan tables of this kind which were left over from previous server runs. As in 1), these tables have a #sql prefix and reside in tempdir or the temporary tablespace.

3) Implicit temporary tables created by the ALTER TABLE implementation. There are two subclasses:

a) Implicit temporary tables representing new versions of user-created temporary tables. This case is similar to case 2) and should be handled in the same fashion.

b) Implicit temporary tables representing new versions of user-created non-temporary tables. Such tables now have an FRM file and reside in datadir or in general/system tablespaces. With the new DD, information about these tables will end up in the on-disk and in-memory DD when the new table version replaces the old one. In theory there is no need to store information about such tables in the DD before that moment. But to limit the scope of this task we won't change the fact that information about such temporary tables (i.e. about the new version of the table) is stored in the on-disk DD even before they replace the old version. For the same reason we won't keep an in-memory dd::Table for such tables bound to the TABLE_SHARE. Orphan tables of this kind should not be automatically removed on server start-up, as in some scenarios they might be the only chance to recover data if the server crashes in the middle of DDL. Once crash-safe DDL is implemented this problem will go away.

Cleaning up orphaned temporary tables on server start-up
========================================================

The handlerton interface should be extended with an operation which instructs the SE to discard all temporary tables of types 1), 2) and 3.a) it has.
That operation should be called at server startup. Supported SEs should be updated:

- InnoDB -- should discard the temporary table tablespace;
- MyISAM, CSV, Archive -- should apply the same logic as is now done for FRM files, i.e. look for files with the #sql prefix in the tmpdir directory and remove them. This logic should be generalized so that different SEs can reuse the same code. Note that for MyISAM tables with the DATA/INDEX DIRECTORY option such files will be symlinks to other directories. In this case we need to remove both the symlink and the file it points to. For security reasons we should not do this if the symlink points to a file within the data directory (it is impossible to create tables with such symlinks without manual intervention anyway).
- Blackhole, Heap, Example -- no changes.

ID and namespace issue
======================

- Since temporary tables are not referenced from other DD objects, it is OK to have -1 IDs for all tmp tables.
- Temporary tables shadow normal tables. This is sorted out in the layer above the DD, so tmp tables don't have to be present in the general in-memory/on-disk DD in any form.

Copyright (c) 2000, 2017, Oracle Corporation and/or its affiliates. All rights reserved.
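The generalized tmpdir scan described for MyISAM, CSV and Archive could look roughly like the standalone C++17 sketch below (not actual MySQL server code; function and variable names are mine):

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Remove leftover "#sql..." temporary-table files from tmpdir.
// For symlinks (DATA/INDEX DIRECTORY case) the target is removed too,
// unless it points inside the data directory.
int remove_orphan_tmp_files(const fs::path& tmpdir, const fs::path& datadir) {
    std::vector<fs::path> victims;
    for (const auto& entry : fs::directory_iterator(tmpdir)) {
        // only files created with the #sql prefix are orphaned temp tables
        if (entry.path().filename().string().rfind("#sql", 0) == 0)
            victims.push_back(entry.path());
    }
    int removed = 0;
    for (const auto& p : victims) {
        if (fs::is_symlink(fs::symlink_status(p))) {
            fs::path target =
                fs::weakly_canonical(p.parent_path() / fs::read_symlink(p));
            fs::path data = fs::weakly_canonical(datadir);
            // refuse to delete a target that lives inside the data directory
            if (target.string().rfind(data.string(), 0) != 0)
                fs::remove(target);
        }
        fs::remove(p);  // remove the #sql file (or the symlink itself)
        ++removed;
    }
    return removed;
}
```

Real server code would hang this off the handlerton callback rather than a free function, but the scanning and symlink-safety logic is the same.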
https://dev.mysql.com/worklog/task/?id=7784
Opened 6 years ago Closed 4 years ago

#14976 closed New feature (fixed)

Add is_html flag to contrib.messages

Description (last modified by )

I would like to add a message.is_html flag to the Message model of the contrib.messages app. The flag would be set to False by default and could be explicitly overridden for messages that are HTML. There are times when it would be helpful to the end user to include an HTML link in a message ("Welcome, click here to create a profile", "You've sent 25 points to user_b, click here to see your balance," etc.), and with the current message system there is not a good way to do this. Adding the is_html flag would require a minor set of backward compatible changes:

def success(request, message, extra_tags='', fail_silently=False):
to
def success(request, message, extra_tags='', fail_silently=False, is_html=False):

def add_message(request, level, message, extra_tags='', fail_silently=False):
to
def add_message(request, level, message, extra_tags='', fail_silently=False, is_html=False):

def __init__(self, level, message, extra_tags=None):
to
def __init__(self, level, message, extra_tags=None, is_html=False):
# add to __init__
self.is_html = is_html

Then in the template: {% if message.is_html %}{{ message|safe }}{% else %}{{ message }}{% endif %}.

Alternative ways to do this:

- Run all messages through the safe filter

This would require a code-wide policy of "make sure you escape anything in a message that might have user input", such as if my message is "your post %s is now published" % blog.post or "%s has sent you the message %s" % (user, message.content). I would then have to worry about every variable I use in a message string, whether it could contain script, and whether it is already escaped (or escape everything again). I would also have to worry whether everyone else working on the codebase is doing this correctly.
- Use a tag I could have a policy of adding "html" to the tags I want to run through the safe filter, but this is also fraught with downsides. Since all tags get output into html, the safe flag would end up output to the end user. The template logic is less clear and error prone. If this isn't violating a core django design precept, I'll get started on a patch in the next few days. Attachments (1) Change History (12) comment:1 Changed 6 years ago by - would like to add comment:2 Changed 6 years ago by It isn't at all obvious what 'is_safe' refers to. I thought you were talking about trusted vs untrusted messages. 'is_html' would be much clearer - so I've changed that. I also fixed up some other things in the description where you seemed to switch from "safe" to "test" - it was a bit confusing. Other than that, I can see the case for this request. We need to think about XSS, but AFAICS there is no issue. The Cookie backend for Messages is potentially vulnerable, but 1) Cookies are a very poor vector for XSS, and 2) we are signing and checking all Messages using HMAC. With regards to compatibility, we would also need to ensure that messages pickled before the change can be unpickled after it. So I've accepted this ticket, assuming we can find a fully backwards compatible solution. comment:3 Changed 6 years ago by Not sure where "test" came from... probably end of the day brain fog. Agree with the change to "is_html" Cookie backend. The current method stores the message as a 3 or 4 item list in json: [flag, level, message, optionally extra_tags]. The decoding method relies on this information being a list of 3 or 4 items to recreate the message object: #obj is the json object transformed into a list, obj[0] is a flag set to '__json_message' return Message(*obj[1:]) Adding another optional argument creates a problem. 
If we have [flag, level, message, extra_tags, is_html] the message decoding works, but if we have [flag, level, message, is_html] then the is_html tag is positionally interpreted as the value of extra_tags. The solution I see to this is always store extra_tags and optionally store is_html. In the case where there aren't extra tags, store an empty string. class MessageEncoder(json.JSONEncoder): """ Compactly serializes instances of the ``Message`` class as JSON. """ message_key = '__json_message' def default(self, obj): if isinstance(obj, Message): message = [self.message_key, obj.level, obj.message] if obj.extra_tags: message.append(obj.extra_tags) else: #New message.append(str()) #New if obj.is_html: #New message.append(obj.is_html) #New return message return super(MessageEncoder, self).default(obj) In the no extra_tags scenario, this solution has additional storage overhead of 4 characters - ,"", - IMHO that's acceptable. Legacy cookies will continue to pass the hash check because we haven't changed the hashing algorithm or what was stored. Since legacy messages are lists of length 3 or 4 and will never have the is_html flag set to true, the Message() call would still behave as expected with legacy cookies - the call would be either Message(level, message) or Message(level, message, extra_tags). XSS To use cookie stored messages in an xss attack, the attacker would have to know the site's secret key, because if the hash doesn't match the cookie backend discards the messages. Putting user input into an un-escaped output always has the possibility to open up an xss hole, but no more here than in any other feature. By having is_html as an optional, False-by-default variable I think we make it pretty hard to accidentally display a message as html. Going to take a look at the session backend tonight. comment:4 Changed 6 years ago by Took a look at the session backend last night. 
So long as is_html is an optional parameter on init we shouldn't run into any trouble unpickling legacy messages. However, depending on implementation there is a chance we would end up with a heterogeneous set of message objects: some with .is_html set, some without a .is_html property. In tests last night with pickling, just adding is_html=False in the method declaration was not sufficient to get an is_html property on unpickled legacy messages. Option 1: use a class variable to ensure there is always a value for is_html class Message(StrAndUnicode): is_html = False #NEW, could be deprecated in a future release def __init__(self, level, message, extra_tags=None, is_html=False): #Modified self.level = int(level) self.message = message self.extra_tags = extra_tags self.is_html = is_html #NEW Because there is the class level is_html, even though the instance may not have the attribute any attempt to access my_message.is_html will first check the instance then not finding it fall back to the class. Option 2: Omit the class level variable and live with heterogeneous messages. The primary use case for is_html is {% if message.is_html %} in the template. The template handles a missing attribute as False, so heterogeneity doesn't cause any issues there. I struggle to come up with a scenario where someone would be interacting with the messages in python code, perhaps some kind of check to see if a message is already queued and to add it if not. Even though I struggle to find a way Option2 would cause trouble, it has the potential to do so, so I'm in favor of option 1. comment:5 Changed 6 years ago by I would propose that you change the serialisation in the cookie backend to a list that is always 5 items. That way we can easily tell the old from the new, and the new always have the full set of data. We can live with the overhead, and it is more robust going forward. We can encode the boolean is_html as 0 or 1 for compactness. 
For the session backend, a better option is to simply fix up the defective instances of Message after unpickling. Changed 6 years ago by comment:6 Changed 6 years ago by I coded up a patch, and it is passing previous tests on my system. Need to write new tests: - ensure it is accurately recovering legacy messages - test that the is_html tag is being set and retrieved as expected Passing legacy tests says to me that api backward compatibility has been achieved. Of note: In base test_multiple_posts(): had to reduce the number of test messages per level from 10 to 9 (from 50 to 45 messages total. 10 appeared to be pushing past the cookie size limit.) Also of note: To get backward compatibility I had to write the api functions as def debug(request, message, extra_tags='', fail_silently=False, is_html=False): instead of def debug(request, message, extra_tags='', is_html=False, fail_silently=False): The later would be more intuitive [request], [message_content], [optional_message_content], [optional_message_content], [behavior_flag]. Given that positional arguments are now non-intuitively ordered, the docs should probably encourage or at least show examples using keyword arguments: messages.debug(request, "my message", extra_tags="markup this_way", is_html=True) comment:7 Changed 6 years ago by comment:8 Changed 6 years ago by comment:9 Changed 6 years ago by The requested feature already works if someone doesn't use the cookie storage but the session storage and uses mark_safe for the messages (since the session storage uses pickle to serialize the data and not json like the cookie storage does). Now the question is whether or not to add this new feature or just to extend the cookie storage to support SafeUnicode/SafeString. Granted; is_html is probably more explicit, but I just wanted to raise this as an option… comment:10 Changed 5 years ago by An alternative naming could be 'allow_tags' to be consistent with list_display methods.
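The class-level fallback discussed in the comments (Option 1) can be demonstrated outside Django; the class below is a cut-down stand-in for contrib.messages' Message, not the real implementation:

```python
import pickle

class Message:
    is_html = False  # class-level default; legacy instances fall back to this

    def __init__(self, level, message, extra_tags=None, is_html=False):
        self.level = int(level)
        self.message = message
        self.extra_tags = extra_tags
        self.is_html = is_html

# Simulate a "legacy" pickle made before the attribute existed.
old = Message(20, "hello")
del old.is_html                 # instance attribute absent, as in old pickles
restored = pickle.loads(pickle.dumps(old))

print("is_html" in restored.__dict__)  # False: not on the instance...
print(restored.is_html)                # False: ...lookup falls back to the class
```

Because attribute lookup falls through to the class, even unpickled instances that predate the change behave as if is_html were False, with no fix-up step required.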
https://code.djangoproject.com/ticket/14976
Well, this is only beginning and kinda beta, but this is just awesome! This is 100% secured, but yet unfinished. This is huuuuuge! I've created temporal apache server on my computer to show how it works:

Last edited by StrangeCoder; 11-24-2010 at 10:45 PM.

This is the code for main functions:

Code:
<script>
function alpha(e) {
    var k;
    document.all ? k = e.keyCode : k = e.which;
    if (e.keyCode == 13) {
        search();
    } else {
        return ((k > 64 && k < 91) || (k > 96 && k < 123) || k == 8 || k == 32 || (k > 47 && k < 58))
    }
}
function search() {
    var srchs1 = document.getElementById('search').value.toUpperCase();
    var srchsspace = srchs1.replace(/ /gi, ". .");
    var srchs2 = srchsspace.replace(/A/gi,".A.");
    var srchs3 = srchs2.replace(/B/gi,".B.");
    var srchs4 = srchs3.replace(/C/gi,".C.");
    var srchs5 = srchs4.replace(/D/gi,".D.");
    var srchs6 = srchs5.replace(/E/gi,".E.");
    var srchs7 = srchs6.replace(/F/gi,".F.");
    var srchs8 = srchs7.replace(/G/gi,".G.");
    var srchs9 = srchs8.replace(/H/gi,".H.");
    var srchs10 = srchs9.replace(/I/gi,".I.");
    var srchs11 = srchs10.replace(/J/gi,".J.");
    var srchs12 = srchs11.replace(/K/gi,".K.");
    var srchs13 = srchs12.replace(/L/gi,".L.");
    var srchs14 = srchs13.replace(/M/gi,".M.");
    var srchs15 = srchs14.replace(/N/gi,".N.");
    var srchs16 = srchs15.replace(/O/gi,".O.");
    var srchs17 = srchs16.replace(/P/gi,".P.");
    var srchs18 = srchs17.replace(/Q/gi,".Q.");
    var srchs19 = srchs18.replace(/R/gi,".R.");
    var srchs20 = srchs19.replace(/S/gi,".S.");
    var srchs21 = srchs20.replace(/T/gi,".T.");
    var srchs22 = srchs21.replace(/U/gi,".U.");
    var srchs23 = srchs22.replace(/V/gi,".V.");
    var srchs24 = srchs23.replace(/W/gi,".W.");
    var srchs25 = srchs24.replace(/X/gi,".X.");
    var srchs26 = srchs25.replace(/Y/gi,".Y.");
    var srchs27 = srchs26.replace(/0/gi,".0.");
    var
srchs28 = srchs27.replace(/1/gi,".1.");
    var srchs29 = srchs28.replace(/2/gi,".2.");
    var srchs30 = srchs29.replace(/3/gi,".3.");
    var srchs31 = srchs30.replace(/4/gi,".4.");
    var srchs32 = srchs31.replace(/5/gi,".5.");
    var srchs33 = srchs32.replace(/6/gi,".6.");
    var srchs34 = srchs33.replace(/7/gi,".7.");
    var srchs35 = srchs34.replace(/8/gi,".8.");
    var srchs36 = srchs35.replace(/9/gi,".9.");
    var srchs = srchs36.replace(/Z/gi,".Z.");
    var srch = document.getElementById('lmao').innerHTML;
    var matchPos = srch.search(srchs);
    if((!srchs) || (srchs == ".W..R..I..T..E.. ..K..E..Y..W..O..R..D..S....")) {
        return;
    } else if(matchPos != -1) {
        var match = srch.replace(srchs, '<b id="result">' + srchs.fontcolor("Red") + '</b>');
        document.getElementById('lmao').innerHTML=match;
        document.getElementById('lolz').innerHTML='<IMG SRC="images/backbutton.gif" onclick="window.location=' + location + '" onmouseover="this.style.cursor=' + cursor + '"><br><br><br>';
        setTimeout("tests()", 1);
    } else {
        alert("Nothing found, please, try again");
    }
}
function tests() {
    var posit = document.getElementById('result');
    var posit1 = posit.offsetTop - 200;
    window.scroll(0,posit1);
}
</script>

Yeah, I'd say "lmao" is about right. Maybe "lmfao" would be more accurate. I assume this was meant as a joke, else it's pretty pointless.

Be yourself. No one else is as qualified.

well, i didn't think about div id's so much ^^ you can change em if you don't like ^^ And why pointless? This is not php engine, this is JAVASCRIPT only search engine, no google.com redirect, just pure javascript search script. Client-side only.

(1) Why would anybody *NEED* a search engine in order to search for only 6 different things...especially when the 6 things are sitting right there on the page in front of you.
(2) You are going to a lot of work to find some strangely formatted text. And it won't work to find ordinary text.
(3) You can do all those replaces in one easy line:
Code:
var srchs = document.getElementById('search').value.replace(/([A-Z0-9\s])/gi, ".$1.");
or I think you could do it as
Code:
var srchs = "." + document.getElementById('search').value.split("").join("..") + ".";
But truly, reason (1) was why I thought it was all a joke.

Be yourself. No one else is as qualified.

(1) Well, i didn't think a lot about content yet, only making a driver.
(2) Actually, I've found UPPERCASE format, but because IE is being retarded, have to replace characters, maybe i'll have any new idea.
(3) Thank you, will test this one. However, "optimizations" don't bother me for now.

My main problems for now are:
- long list of downloads will look like retarded.
- it will load too slowly for the first time.
- .T..H..I..S.. ..S..H..O..U..L..D.. ..B..E.. ..F..I..X..E..D.. ..T..O..O..
- ****ing IE :\

But, agree, the idea is kinda nice. Just guess, when I make it work same like php search engine. This is the first step.

Last edited by StrangeCoder; 11-25-2010 at 02:13 AM.
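For comparison, the whole dotted-encoding step from the thread collapses to a few lines (a standalone function based on the split/join suggestion above; the function name is mine):

```javascript
function dotEncode(text) {
  // ".A..B." style: every character wrapped in dots, neighbours share dots
  return "." + text.toUpperCase().split("").join("..") + ".";
}

console.log(dotEncode("write")); // .W..R..I..T..E.
```

This reproduces what the 27 chained replace() calls compute, for any character rather than only the hard-coded set.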
http://www.codingforums.com/javascript-programming/210220-ive-coded-strong-javascript-search-engine.html
Add this to your package's pubspec.yaml file:

dependencies:
  google_pay: ^0.0.1

You can install packages from the command line with Flutter:

$ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

import 'package:google_pay/google_pay.dart';

Fix lib/google_pay.dart. (-3.45 points)
Analysis of lib/google_pay.dart reported 7 hints, including:
line 2 col 8: Unused import: 'dart:convert'.
line 3 col 8: Unused import: 'dart:developer'.
line 6 col 8: Unused import: 'payment_details.dart'.
line 16 col 35: Don't explicitly initialize variables to null.
line 17 col 35: Don't explicitly initialize variables to null.
Format lib/payment_details.dart. Run flutter format to format lib/payment_details.dart.
https://pub.dev/packages/google_pay
nickle (1) - Linux Man Pages

NAME
nickle - a desk calculator language

SYNOPSIS
nickle [--help|--usage] [-f file] [-l library] [-e expr] [ script ] [--] [arg ...]

DESCRIPTION
Nickle is a desk calculator.

USAGE
An un-flagged argument is treated as a Nickle script, and replaces standard input. Any remaining arguments following the script are placed in the Nickle string array argv for programmatic inspection. When invoked without an expression or script argument, Nickle reads from standard input, and writes to standard output. Options are as follows:
- --help, --usage - Print a help/usage message and exit. This is a built-in feature of Nickle's ParseArgs module, and thus will also be true of Nickle scripts that use this library.
- -f, --file file - Load file into Nickle before beginning execution.
- -l, --library library - Load library into Nickle before beginning execution. See below for a description of the library facility.
- -e, --expr expr - Evaluate expr before beginning execution.
- -- - Quit parsing arguments and pass the remainder, unevaluated, to argv.

SYNTAX
To make the input language more useful in an interactive setting, newline only terminates statements at ``reasonable'' times. Newline terminates either expressions or single statements typed by the user (with the exception of a few statements which require lookahead: notably if() and twixt(), which have an optional else part). Inside compound statements or function definitions, only a ; terminates statements. This approach is convenient and does not appear to cause problems in normal use. The syntax of Nickle programs is as follows. In this description, name denotes any sequence of letters, digits and _ characters not starting with a digit; E denotes any expression; S denotes any statement; and T denotes any type. The syntax X,X,...,X denotes one or more comma-separated Xs, unless otherwise indicated.
C-style comments are enclosed in /* and */, and shell-style comments are denoted by a leading # at the start of a line.

Operands:
- real number - Can include an exponent, and need not include a decimal point or sign. These will be treated as exact rationals. If a trailing decimal part contains an opening curly brace, the brace is silently ignored; if it contains a curly-bracketed trailing portion, it is treated as a repeating decimal. ``Floating point'' constants are currently represented internally as rationals: for floating constants with a given precision (and an infinite-precision exponent), use the imprecise() builtin function described below.
- octal number - Starts with a 0 (e.g., 014 is the same as 12).
- hexadecimal number - Starts with "0x" (e.g., 0x1a is the same as 26).
- string - As in C. String constants are surrounded by double-quotes. Backslashed characters (including double-quotes) stand for themselves, except "\n" stands for newline, "\r" for carriage return, "\b" for backspace, "\t" for tab and "\f" for formfeed.
- name - A variable reference.
- name() name(E,E,...,E) - A function call with zero or more arguments. Functions are fully call-by-value: arrays and structures are copied rather than being referenced as in C.
- desc name T name = value - Definition expressions: a new name is made available, with the value of the definition being the value of the initializer in the second form, and uninitialized in the first form. The descriptor desc is not optional: it consists of any combination of visibility, storage class or type (in that order). See QUALIFIERS immediately below for a description of these qualifiers. A structured value expression is also possible: see VALUES below.
- In addition to being able to initialize a definition with a Nickle value, C-style array, structure, and union definitions are also allowed. For example, the following
int[*,*] name = {{0,1},{2,3}}
int[2,2] name = {{0...}...}
are permitted with the obvious semantics.
This is the context in which the dimensions in a type may be expressions: see the discussion of array types above. See the discussion of array and structure values for array and structure initializer syntax. QUALIFIERS A declaration or definition may be qualified, as in C, to indicate details of programmatic behavior. Unlike in C, these qualifiers, while optional, must appear in the given order. Visibility: - public - Any definition expression (function definition, variable definition, type definition) can be qualified with public to indicate that the name being defined should be visible outside the current namespace, and should be automatically imported. See Namespaces below for further info. - protected - Any definition expression (function definition, variable definition, type definition) can be qualified with protected to indicate that the name being defined should be visible outside the current namespace, but should not be made available by import declarations. See Namespaces below for further info. Lifetime: - auto - An auto object is local to a particular block: its lifetime is at least the lifetime of that block. An auto object with an initializer will be re-initialized each time it is evaluated. This is the default lifetime for local objects. - static - A static object is local to a particular function definition: its lifetime is at least the lifetime of that definition. A new static object will be created each time its enclosing function definition is evaluated. - In Nickle, the keyword static has to do only with lifetime (like the use of static inside C functions), not with visibility (which is handled by separate qualifiers as described above, not like the use of static in global scope in C). - global - A global object is global to the entire program: its lifetime is the lifetime of the program. A global object will be created and initialized when its definition is first seen. This is the default lifetime for global objects. 
- The distinction between static and global lifetime in Nickle is not possible in C, because C functions are not first class objects with nested scope. When deciding which to use in a Nickle program, think about what should happen if a definition is re-evaluated. OPERATORS Here are the basic Nickle operators, grouped in order of decreasing precedence: - A[E,E,...,E] - Refers to the E'th element of the array expression A, or the E1'th/E2'th/etc element of a multi-dimensional array. Both arrays of arrays ala C and multidimensional arrays ala NAWK are possible. - struct.tag - Structure dereference. - struct->tag - Structure pointer dereference ala C. - ============= - - ++ -- - Unary increment/decrement. May be either postfix or prefix. - - - Unary negate - ! E - Logical negation. - E ! - Factorial. Requires a non-negative integer argument. - * E - Pointer dereference. - & E - Reference construction. - ============= - - (U) E - Construct a value of union type with tag U and value E. - ============= - - ** - Exponentiation. Both operands may be fractional. The left operand must be non-negative unless the right operand is integer. The result type is the type of the left operand if the right operand is integer, and real otherwise. - This is the only known type-unsound feature of Nickle: an expression like 2 ** -3 will statically be of type integer, but dynamically will generate a rational result. This may cause a runtime type error later on: consider int x = 2 ** -3; - ============= - * / // % - Times, divide, integer divide, and remainder. The right operand of the last three operators must be nonzero. The result type of the division operator will always be at least rational: the result type of the integer division operator will always be int. This is a notable departure from C, where integer division is implied by integer operands. Integer division is defined by x // y == y > 0 ? 
floor (x / y) : ceil(x / y)
The remainder is always non-negative and is defined by x % y = x - (x // y) * y
- =============
- + - - Addition and subtraction.
- =============
- << >> - Bitwise left and right shift with integer operands. Negative right operands work as expected. These operators are defined by x << y = x * 2 ** y and x >> y = x // 2 ** y. Another way to look at this is that negative left operands are considered to be in an infinite twos-complement representation (i.e., sign-extended to infinity), with right shift sign-extending its left operand.
- =============
- <= >= < > - Relational operators.
- =============
- == != - Equality operators.
- =============
- Finally, in order of decreasing precedence:
- & - Bitwise AND. Negative operands are considered to be in an infinite twos-complement representation (i.e., sign-extended to infinity).
- ^ - Bitwise XOR. Negative operands as in bitwise AND.
- | - Bitwise OR. Negative operands as in bitwise AND.
- && - Short-circuit logical AND.
- || - Short-circuit logical OR.
- E ? E : E - Conditional expression: if the first expression is logical true, the value is the second expression, else the third.
- fork E - Create (and return) a thread. See Thread below for details.
- = += -= *= /= //= %= **= <<= >>= ^= &= |= - Assignment operators. Left-hand-side must be assignable. x <op>= y is equivalent to x = x <op> y
- E , E - Returns the right-hand expression.

TYPES
The type declaration syntax of Nickle more strongly resembles the ``left'' variant of the Java syntax than the C syntax. Essentially, a type consists of:
- poly integer rational real string continuation void - A base type of the language. Type void is actually only usable in certain contexts, notably function returns. It is currently implemented as a ``unit'' type ala ML, and thus has slightly different behavior than in C.
Type poly is the supertype of all other types (i.e., it can be used to inhibit static type checking), and is the default type in most situations where a type need not appear.
- file semaphore thread - Also builtin base types, but integral to the File and Thread ADTs: see below.

More About Types: Nickle supports polymorphic data: as an expression is evaluated, a data type is chosen to fit the result. Any Nickle object may be statically typed, in which case bounds violations will be flagged as errors at compile time. Polymorphic variables and functions do not place restrictions on the assigned data type; this is the default type for all objects.
- poly - This describes the union of all datatypes. A variable with this type can contain any data value.
- int - Arbitrary precision integers.
- rational - Arbitrary precision rational numbers.
- real - Arbitrary exponent precision floating point numbers. As many computations cannot be carried out exactly as rational numbers, Nickle implements non-precise arithmetic using its own machine-independent representation for floating point numbers. The builtin function imprecise(n) generates a real number with 256 bits of precision from the number n, while imprecise(n,p) generates a real number with p bits of precision.
- T[] - An array of type T, of one or more dimensions. There are no zero-dimensional arrays in Nickle.
- T[*] - A one-dimensional array of type T. Unlike in C, the dimension of an array is never part of its type in Nickle. Further, arrays and pointers are unrelated types in Nickle.
- T[*,*,...,*] - A two or more dimensional array of type T. The stars ``*'' are not optional. As the previous paragraphs make clear, ``T[]'' is not a zero-dimensional array.
- T[E,E,...,E] - In definition contexts, integer values may be given for each dimension of an array. These are strictly for value-creation purposes, and are not part of the type.
An array type is determined only by the base type and number of dimensions of the array.
- T0() T0(T,T,...,T) - A function returning type T0. A function accepts 0 or more arguments.
- T0() T0(T,T,...,T ...) - A function accepting zero or more required arguments, plus an arbitrary number of optional arguments. The second sequence of three dots (ellipsis) is syntax, not metasyntax: see the description of varargs functions for details.
- *T - A pointer to a location of type T. Pointer arithmetic in Nickle operates only upon pointers to arrays: the pointer must be of the correct type, and may never stray out of bounds. A pointer may either point to some location or be null (0). As in C, the precedence of ``*'' is lower than the precedence of ``[]'' or ``()'': use parentheses as needed.
- struct {T name; T name; ...} - A structure with fields of the given name and type. The types T are optional: in their absence, the type of the field is poly.
- union {T name; T name; ...} - A ``disjoint union'' of the given types. This is more like the variant record type of Pascal or the datatype of ML than the C union type: the names are tags of the given type, exactly one of which applies to a given value at a given time.
- (T) - Parentheses for grouping.

Typedef: As in C, new type names may be created with the typedef statement. The syntax:
typedef T name;
where T is a Nickle type. The resulting typename may be used anywhere a type is expected.

VALUES

STATEMENTS
The statement syntax very closely resembles that of C. Some additional syntax has been added to support Nickle's additional functionality.

namespace::namespace::...::namespace::name
refers to the given name as defined inside the given set of namespaces. The double-colon syntax is unfortunate, as it is slightly different in meaning than in C++, but all the good symbols were taken, and it is believed to be a feature that the namespace separator is syntactically different than the structure operator. In Java, for example, the phrase name.name.name is syntactically ambiguous: the middle name may be either a structure or a namespace.

BUILTINS

COMMANDS
Nickle has a set of commands which may be given at the top level.

DEBUGGER
When an unhandled exception reaches top level during execution, the user receives a dash prompt, indicating that debug mode is active. In this mode, the command-line environment is that in which the unhandled exception was raised. In addition, a number of debugging commands are available to the user:

ENVIRONMENT

EXAMPLES

    real function exponent(real x) {
        real a = 1;
        int b = 1;
        real s = 1;
        int i = 1;
        while (1) {
            a = a * x;
            b = b * i;
            real c = a / b;
            if (abs(c) < 1e-6)
                return s;
            s = s + c;
            i++;
        }
    }

defines a function to compute an approximate value of the exponential function e ** x, and

    for (i = 1; i < 10; i++)
        printf ("%g\n", exponent (i));

prints approximate values of the exponential function of the first ten integers.

VERSION

BUGS

> int x = 0;
> (int[*]){x = 1}
-> (int[*]) { x = 1 }
Non array initializer

The workaround is to parenthesize the assignment expression:

> (int[*]){(x = 1)}
[1]{1}

Because this is so rare, so hard to fix, and so easy to work around, this bug is unlikely to be fixed anytime soon.
AUTHOR
Nickle is the work of Keith Packard <keithp [at] keithp.com> and Bart Massey <bart_massey [at] iname>.

Arrays:
- [E] - Creates a (zero-based) array with E elements. E must be non-negative.
- [E]{V,V,...,V} - Creates an array with E elements, initialized to the Vs. If there are too few initializers, remaining elements will remain uninitialized.
- [E]{V,V,...,V...} - The second ellipsis (three dots) is syntax, not metasyntax. Creates an array with E elements. The first elements in the array will be initialized according to the Vs, with any remaining elements receiving the same value as the last V. This syntax may be used in the obvious fashion with any of the array initializers below.
- [*]{V,V,...,V} - Creates an initialized array with exactly as many elements as initializers. There must be at least one initializer.
- [E,E,...,E] [*,*,...,*] - Creates multidimensional arrays. Integer expressions and "*" cannot be mixed: an array's dimensions are entirely either specified or unspecified by the definition. These arrays may also be created initialized: see the next paragraph for initializer syntax.
- (T[E]) (T[E,E,...,E]) (T[E]){E,E,...,E} (T[E,E,...,E]){{E,...},...,{E,...}} - Alternate syntax for creating arrays of type T. The initializers, in curly braces, are optional. The number of initializers must be less than or equal to the given number of elements in each dimension. For multidimensional arrays, the extra curly braces per dimension in the initializer are required; this is unlike C, where they are optional.
- (T[*]){E,E,...,E} (T[*,*,...,*]){{E,...},...,{E,...}} - Creates arrays of type T, with each dimension's size given by the maximum number of initializers in any subarray in that dimension.

Pointers:
- 0 - The null pointer, in contexts where a pointer is required.
- &V &A[E,E,...,E] &S.N - Creates a pointer to the given variable, array element, or structure member. The type of the pointer will be *T, where T is the type of the object pointed to.
- *P - The value pointed to by pointer P. This can be viewed or modified as in C. Functions: - (T func(){S;S;...S;}) (T func(T name,T name,...T name){S;S;...S;}) - Function expression: denotes a function of zero or more formal parameters with the given types and names, returning the given result type. The function body is given by the curly-brace-enclosed statement list. All types are optional, and default to poly. As noted above, functions are strictly call-by-value: in particular, arrays and structures are copied rather than referenced. - T function name(T name,T name,...,T name){S;S;...S;} - Defines a function of zero or more arguments. Syntactic sugar for T(T,T,...T) name = (T func(T name,T name,...T name){S;S;...S;}); - T function name(T name, T name ...) - The ellipsis here is syntax, not metasyntax: if the last formal argument to a function is followed by three dots, the function may be called with more actuals than formals. All ``extra'' actuals are packaged into the array formal of the given name, and typechecked against the optional type T of the last argument (default poly). Structures: - (struct { T name; T name; ...T name; }){name = E; name = E; ...name=E;} - Create a value of a structured type. The named fields are initialized to the given values, with the remainder uninitialized. As indicated, initialization is by label rather than positional as in C. Unions: - (union { T name; T name; ...T name; }.name) E - Create a value of the given union type, the variant given by .name, and the value given by E. E must be type-compatible with name. - E; - Evaluates the expression. - {S ... S} - Executes the enclosed statements in order. - if (E) S - Basic conditional. - if (E) S - Conditional execution. - else S - Else is allowed, with the usual syntax and semantics. In particular, an else binds to the most recent applicable if() or twixt(). - while (E) S - C-style while loop. - do S while (E); - C-style do loop. - for (opt-E; opt-E; opt-E) S - C-style for loop. 
- switch (E) { case E: S-list case E: S-list ... default: S-list } - C-style case statement. The case expressions are not required to be constant expressions, but may be arbitrary. The first case evaluating to the switch argument is taken, else the default if present, else the switch body is skipped. - twixt(opt-E; opt-E) S - - twixt(opt-E; opt-E) S else S - If first argument expression evaluates to true, the body of the twixt() and then the second argument expression will be evaluated. If the first argument expression evaluates to false, the else statement will be executed if present. Otherwise, the entire twixt() statement will be skipped. The twixt() statement guarantees that all of these events will happen in the specified order regardless of the manner in which the twixt() is entered (from outside) or exited, including exceptions, continuations, and break. (Compare with Java's ``finally'' clause.) - try S; - - try S catch name (T name, ...) { S; ... }; - - try S catch name (T name, ...) { S; ... } ... ; - Execute the first statement S. If an exception is raised during execution, and the name matches the name in a catch block, bind the formal parameters in the catch block to the actual parameters of the exception, and execute the body of the catch block. There may be multiple catch blocks per try. Zero catches, while legal, is relatively useless. After completion of a catch block, execution continues after the try clause. As with else, a catch binds to the most recent applicable try-catch block. - raise name(name, name, ..., name) - Raise the named exception with zero or more arguments. - ; - The null statement - break; - Discontinue execution of the nearest enclosing for/do/while/switch/twixt statement. The leave expression will be executed as the twixt statement is exited. - continue; - Branch directly to the conditional test of the nearest enclosing for/do/while statement. - return E; - Return value E from the nearest enclosing function. 
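The statement forms above can be combined as in the following sketch. This example is illustrative only (the names are invented, and output formatting may differ), written against the exception, raise, try/catch and twixt() syntax described above:

    exception overdraft(string msg, int balance);
    int balance = 10;
    void function withdraw(int n) {
        if (n > balance)
            raise overdraft("insufficient funds", balance);
        balance = balance - n;
    }
    try {
        /* "end" prints even though withdraw() raises */
        twixt(printf("begin\n"); printf("end\n"))
            withdraw(25);
    } catch overdraft(string msg, int b) {
        printf("%s: balance %d\n", msg, b);
    }

Because twixt() guarantees its second expression runs however its body is exited, "end" is printed even though the exception propagates out of the body to the catch block.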
Namespaces: Like Java and C++, Nickle has a notion of namespace, a collection of names with partially restricted visibility. In Nickle, namespaces are created with the namespace command.
- opt-P namespace N { S ... } - Places all names defined in the statements S into a namespace named N. The optional qualifier P may be the keyword public, but beware: this merely indicates that the name N itself is visible elsewhere in the current scope, and has nothing to do with the visibility of items inside the namespace.
- extend namespace N { S ... } - Reopen the given namespace N, and extend it with the names defined as public in the given statements S.
- Names defined inside the namespace are invisible outside the namespace unless they are qualified with the keyword public. Public names may be referred to using a path notation: namespace::namespace::...::namespace::name
- import N; - The name N must refer to a namespace: all public names in this namespace are brought into the current scope (scoping out conflicting names).
- int printf(string fmt, poly args...) - Calls File::fprintf(stdout, fmt, args ...) and returns its result.
- string function gets () - Calls File::fgets(stdin) and returns its result.
- string function scanf (string fmt, *poly args...) - Calls File::vfscanf(stdin, fmt, args) and returns its result.
- string function vscanf (string fmt, (*poly)[*] args) - Calls File::vfscanf(stdin, fmt, args) and returns its result.
- real imprecise(rational value) - See the discussion of type real above.
- real imprecise(rational value, int prec) - See the discussion of type real above.
- int string_to_integer(string s)
- int atoi(string s) - The argument s is a signed digit string, and the result is the integer it represents. If the string s is syntactically a hexadecimal, octal, binary, or explicit base-10 constant, it is treated as such.
- int string_to_integer(string s, int base) - - int atoi(string s, int base) - Treat s as a string of digits in the given base. A base of 0 acts as with no base argument. Otherwise, base specification syntax in the string is ignored. - int putchar(int c) - Place the given character on the standard output using File::putc(c, stdout), and return its result. - int sleep(int msecs) - Try to suspend the current thread for at least msecs milliseconds. Return 1 on early return, and 0 otherwise. - int exit(int status) - Exit Nickle with the given status code. Do not return anything. - int dim(poly[*] a) - Given a one-dimensional array a, dim() returns the number of elements of a. - int[] dims(poly[] a) - Given an arbitrary array a, dims() returns an array of integers giving the size of each dimension of a. Thus, dim(dims(a)) is the number of dimensions of a. - *poly reference(poly v) - Given an arbitrary value v, ``box'' that value into storage and return a pointer to the box. - rational string_to_real(string s) - - rational atof(string s) - Convert the real constant string s into its associated real number. - number abs(real v) - Return the absolute value of v. The result type chosen will match the given context. - int floor(real v) - Return the largest integer less than or equal to v. This will fail if v is a real and the precision is too low. - int ceil(real v) - Return the smallest integer greater than or equal to v. This will fail if v is a real and the precision is too low. - int exponent(real v) - Return the exponent of the imprecise real v. - rational mantissa(real v) - Return the mantissa of the imprecise real v, as a rational m with 0 <= m <= 0.5 . - int numerator(rational v) - Return the numerator of the rational number v: i.e., if v = n/d in reduced form, return n. - int denominator(rational v) - Return the denominator of the rational number v: i.e., if v = n/d in reduced form, return d. 
- int precision(real v) - Return the number of bits of precision of the mantissa of the imprecise real number v. - int sign(real v) - Return -1 or 1 as v is negative or nonnegative. - int bit_width(int v) - Return the number of bits required to represent abs(v) internally. - int is_int(poly v) - Type predicate. - int is_rational(poly v) - Numeric type predicates are inclusive: e.g., is_rational(1) returns 1. - int is_number(poly v) - Type predicate. - int is_string(poly v) - Type predicate. - int is_file(poly v) - Type predicate. - int is_thread(poly v) - Type predicate. - int is_semaphore(poly v) - Type predicate. - int is_continuation(poly v) - Type predicate. - int is_array(poly v) - Type predicate. - int is_ref(poly v) - Type predicate: checks for pointer type. This is arguably a misfeature, and may change. - int is_struct(poly v) - Type predicate. - int is_func(poly v) - Type predicate. - int is_void(poly v) - Type predicate. - int gcd(int p, int q) - Return the GCD of p and q. The result is always positive. - int xor(int a, int b) - Return a ^ b . This is mostly a holdover from before Nickle had an xor operator. - poly setjmp(continuation *c, poly retval) - The setjmp() and longjmp() primitives together with the continuation type form an ADT useful for nearly arbitrary transfers of flow-of-control. The setjmp() and longjmp() builtins are like those of C, except that the restriction that longjmp() always jump upwards is removed(!): a continuation saved via setjmp() never becomes invalid during the program lifetime. - The setjmp() builtin saves the current location and context into its continuation pointer argument, and then returns its second argument. - void longjmp(continuation c, poly retval) - The longjmp() builtin never returns to the call site, but instead returns from the setjmp() that created the continuation, with return value equal to the second argument of longjmp(). - string prompt - The prompt printed during interactive use when at top-level. 
Default "> ".
- string prompt2 - The prompt printed during interactive use when waiting for the rest of a statement or expression. Default "+ ".
- string prompt3 - The prompt printed during interactive use when debugging. Default "- ".
- string format - The printf() format for printing top-level values. Default "%g".
- string version - The version number of the Nickle implementation currently being executed.
- string build - The build date of the Nickle implementation currently being executed, in the form "yyyy/mm/dd", or "?" if the build date is unknown for some reason.
- file stdin - Bound to the standard input stream.
- file stdout - Bound to the standard output stream.
- file stderr - Bound to the standard error stream.

Exceptions: A few standard exceptions are predeclared and used internally by Nickle.
- exception uninitialized_value(string msg) - Attempt to use an uninitialized value.
- exception invalid_argument(string msg, int arg, poly val) - The arg-th argument to a builtin function had invalid value val.
- exception readonly_box(string msg, poly val) - Attempt to change the value of a read-only quantity to val.
- exception invalid_array_bounds(string msg, poly a, poly i) - Attempt to access array a at index i, which is out of bounds.
- exception divide_by_zero(string msg, real num, real den) - Attempt to divide num by den with den == 0.
- exception invalid_struct_member(string msg, poly struct, string name) - Attempt to refer to member name of the object struct, which does not exist.
- exception invalid_binop_values(string msg, poly arg1, poly arg2) - Attempt to evaluate a binary operator with args arg1 and arg2, where at least one of these values is invalid.
- exception invalid_unop_values(string msg, poly arg) - Attempt to evaluate a unary operator with invalid argument arg.
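As a short sketch of the continuation primitives described under the builtins above (illustrative only, using setjmp() and longjmp() with the signatures given):

    continuation again;
    int i = setjmp(&again, 0);
    printf("%d\n", i);
    if (i < 3)
        longjmp(again, i + 1);

Each longjmp() returns from the original setjmp() with the new value, so this prints 0, 1, 2 and 3 in turn before falling through.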
Builtin Namespaces: - Math - The math functions available in the Math namespace are implemented in a fashion intended to be compatible with the C library. Please consult the appropriate manuals for further details. - real pi - Imprecise constant giving the value of the circumference/diameter ratio of the circle to the default precision of 256 bits. - protected real e - Imprecise constant giving the value of the base of natural logarithms to the default precision of 256 bits. Since e is protected, it must be referenced via Math::e, in order to avoid problems with using the fifth letter of the alphabet at top level. - real function sqrt(real v) - Returns the square root of v. - real function cbrt(real v) - Returns the cube root of v. - real function exp(real v) - Returns e**v. - real function log(real a) - Returns v such that e**v == a. Throws an invalid_argument exception if a is non-positive. - real function log10(real a) - Returns v such that 10**v == a. Throws an invalid_argument exception if a is non-positive. - real function log2(real a) - Returns v such that 2**v == a. Throws an invalid_argument exception if a is non-positive. - real function pi_value(int prec) - Returns the ratio of the circumference of a circle to the diameter, with prec bits of precision. - real function sin(real a) - Returns the ratio of the opposite side to the hypotenuse of angle a of a right triangle, given in radians. - real function cos(real a) - Returns the ratio of the adjacent side to the hypotenuse of angle a of a right triangle, given in radians. - void function sin_cos(real a, *real sinp, *real cosp) - Returns with sin(a) and cos(a) stored in the locations pointed to by sinp and cosp respectively. If either pointer is 0, do not store into that location. May be slightly faster than calling both trig functions independently. - real function tan(real a) - Returns the ratio of the opposite side to the adjacent side of angle a of a right triangle, given in radians. 
Note that tan(pi/2) is not currently an error: it will return a very large number dependent on the precision of its input. - real function asin(real v) - Returns a such that sin(a) == v. - real function acos(real v) - Returns a such that cos(a) == v. - real function atan(real v) - Returns a such that tan(a) == v. - real function atan2(real x, y) - Returns a such that tan(a) == x / y. Deals correctly with y == 0. - real function pow(real a, real b) - The implementation of the ** operator. - File - The namespace File provides operations on file values. - int function fprintf(file f, string s, ....) - Print formatted values to a file, as with UNIX stdio library fprintf(). fprintf() and printf() accept a reasonable sub-set of the stdio library version: %c, %d, %e, %x, %o, %f, %s, %g work as expected, as does %v to smart-print a value. Format modifiers may be placed between the percent-sign and the format letter to modify formatting. There are a lot of known bugs with input and output formatting. Format Letters: - %c - Requires a small integer argument (0..255), and formats as an ASCII character. - %d - Requires an integer argument, and formats as an integer. - %x - Requires an integer argument, and formats as a base-16 (hexadecimal) integer. - %o - Requires an integer argument, and formats as a base-8 (octal) integer. - %e - Requires a number argument, and formats in scientific notation. - %f - Requires a number argument, and formats in fixed-point notation. - %s - Requires a string argument, and emits the string literally. - %g - Requires a number, and tries to pick a precise and readable representation to format it. Format Modifiers: - digits - All format characters will take an integer format modifier indicating the number of blanks in the format field for the data to be formatted. The value will be printed right-justified in this space. 
- digits.digits - The real formats will take a pair of integer format modifiers indicating the field width and precision (number of chars after the decimal point) of the formatted value. Either integer may be omitted. - - (a bare hyphen) - A precision value indicating infinite precision. - * - The next argument to fprintf() is an integer indicating the field width or precision of the formatted value. - file function string_write() - Return a file which collects written values into a string. - int function close(file f) - Close file f and return an indication of success. - int function flush(file f) - Flush the buffers of file f and return an indication of success. - int function getc(file f) - Get the next character from file f and return it. - int function end(file f) - Returns true if file f is at EOF, else false. - int function error(file f) - Returns true if an error is pending on file f, else false. - int function clear_error(file f) - Clears pending errors on file f, and returns an indication of success. - file function string_read(string s) - Returns a virtual file whose contents are the string s. - string function string_string(file f) - Return the string previously written into the file f, which should have been created by string_read() or string_write(). Behavior on other files is currently undefined. - file function open(string path, string mode) - Open the file at the given path with the given mode string, ala UNIX stdio fopen(). Permissible modes are as in stdio: "r", "w", "x", "r+", "w+" and "x+". - integer function fputc(integer c, file f) - Output the character c to the output file f, and return an indication of success. - integer function ungetc(integer c, file f) - Push the character c back onto the input file f, and return an indication of success. - integer function setbuf(file f, integer n) - Set the size of the buffer associated with file f to n, and return n. - string function fgets(file f) - Get a line of input from file f, and return the resulting string.
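A short sketch tying a few of the File functions above together, using only the operations documented in this section (Nickle syntax; the exact formatting produced by %g may vary between versions, so no output is claimed here):

```
/* Collect formatted output into an in-memory file created by
 * string_write(), then recover it as a string with string_string(). */
file f = File::string_write();
File::fprintf(f, "%d items, ratio %g\n", 3, 1/4);
string s = File::string_string(f);
File::close(f);
```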
- file function pipe(string path, string[*] argv, string mode) - Start up the program at the given path, returning a file which is one end of a "pipe" to the given process. The mode argument can be "r" to read from the pipe or "w" to write to the pipe. The argv argument is an array of strings giving the arguments to be passed to the program, with argv[0] conventionally being the program name. - int function print(file f, poly v, string fmt, int base, int width, int prec, string fill) - Print value v to file f in format fmt with the given base, width, prec, and fill. Used internally by File::fprintf(). - int function fscanf(file f, string fmt, *poly args...) - Fill the locations pointed to by args with values taken from file f according to the string fmt. The format specifiers are much as in UNIX stdio scanf(): the "%d", "%e", "%f", "%c" and "%s" specifiers are supported, with the expected modifiers. - int function vfscanf(file f, string fmt, (*poly)[*] args) - Given the file f, the format fmt, and the array of arguments args, performs fscanf() appropriately. - Thread - The namespace Thread supports various operations useful for programming with threads, which provide concurrent flow of control in a shared address space. There is one piece of special syntax associated with threads. - fork(E) - Accepts an arbitrary expression, and evaluates it in a new child thread. The parent thread receives the thread as the value of the fork expression. - The remainder of the Thread functions are fairly standard. - int function kill(thread list...) - Kills every running thread in the array list. With no arguments, kills the currently running thread. Returns the number of threads killed. - int function trace(poly list...) - Shows the state of every running thread in the array list. With no arguments, traces the default continuation. Returns the number of threads traced.
- int function cont() - Continues execution of any interrupted threads, and returns the number of continued threads. - thread function current() - Return the current thread. - int function list() - Reports the currently running thread to stdout. - int function get_priority(thread t) - Reports the priority of the given thread. - thread function id_to_thread(int id) - Returns the thread with the given id, if found, and 0 otherwise. - poly function join(thread t) - Waits for thread t to terminate, and returns whatever it returns. - int function set_priority(thread t, int i) - Attempts to set the priority of thread t to level i, and returns the new priority. Larger priorities mean more runtime: a task with higher priority will always run instead of a lower priority task. Threads of equal highest priority will be pre-emptively multitasked. - Semaphore - The Semaphore namespace encapsulates operations on the semaphore built-in ADT. A semaphore is used for thread synchronization. Each signal() operation on the semaphore awakens the least-recent thread to wait() on that semaphore. The ``count'' of waiting processes may be set at semaphore creation time. - semaphore function new(int c) - Create a new semaphore with an initial count c of waiting processes. If c is positive, it means that c threads may wait on the semaphore before one blocks. If c is negative, it sets a count of threads which must be waiting on the semaphore before further waits will not block. - semaphore function new() - Call semaphore(0) and return its result. - int signal(semaphore s) - Increment semaphore s. If s is non-positive, and some thread is blocked on s, release the least-recently-blocked thread. Return 1 on success. - int wait(semaphore s) - Decrement semaphore s. If s is negative, block until released. Return 1 on success. - int test(semaphore s) - Test whether wait() on semaphore s would cause the current thread to block. If so, return 0. 
Otherwise, attempt to decrement s, and return 1 if successful. - String - The String namespace contains a few basic operations on the string ADT. - int function length(string s) - Returns the number of characters in s. - string function new(int c) - Returns as a string the single character c. - string function new(int cv[*]) - Returns a string comprised of the characters of cv. - int function index(string t, string p) - Returns the integer index of the pattern string p in the target string t, or -1 if p is not a substring of t. - string function substr(string s, int i, int l) - Returns the substring of string s starting with the character at offset i (zero-based) and continuing for a total of l characters. If l is negative, the substring will consist of characters preceding rather than succeeding i. - PRNG - The PRNG namespace provides pseudo-random number generation and manipulation. The core generator is the RC4 stream cipher generator, properly bootstrapped. This provides a stream of cryptographically-secure pseudo-random bits at reasonable amortized cost. (But beware: initialization is somewhat expensive.) - void function srandom(int s) - Initialize the generator, using the (arbitrarily-large) integer as a seed. - void function dev_srandom(int nbits) - Initialize the generator, using nbits bits of entropy obtained from some reasonable entropy source. On UNIX systems, this source is /dev/urandom. Asking for more initial entropy than the system has may lead either to bootstrapping (as on UNIX) or to hanging, so use cautiously. - int function randbits(int n) - Returns an n-bit pseudo-random number, in the range 0..(2**n)-1. Useful for things like RSA. - int function randint(int n) - Returns a pseudo-random number in the range 0..n-1. - void function shuffle(*(poly[*]) a) - Performs an efficient in-place true shuffle (cf. Knuth) of the array a. - Command - The Command namespace is used by the top-level commands as described below.
It is also occasionally useful in its own right. - string library_path - Contains the current library search path, a colon-separated list of directories to be searched for library files. - int function undefine(string name) - Implements the top-level undefine command. Remove the name denoted by string name from the namespace. This removes all visible definitions of the name. - int function undefine(string[*] names) - Remove each of the names in the array names from the namespace. This removes all visible definitions of each name. - int function delete(string name) - Attempt to remove the command with the given string name from the top-level command list, and return 1 if successful. - int function lex_file(string path) - Attempt to make the file at the given path the current source of Nickle code, and return 1 if successful. Note that this effectively ``includes'' the file by pushing it onto a stack of files to be processed. - int function lex_library(string filename) - Like lex_file(), but searches the directories given by the library_path variable for the first file with the given filename. - int function lex_string(string code) - Attempt to make the Nickle code contained in the string code be the next input. - int function edit(string[*] names) - Implements the top-level edit command. The names in the array are a path of namespace names leading to the symbol name, which is last. - int function new(string name, poly func) - Binds function func to the top-level command string name: i.e., makes it part of the top-level command vocabulary. - int function new_names(string name, poly func) - Binds function func to the top-level command string name: i.e., makes it part of the top-level command vocabulary. Unlike new(), the string names given to func at the top level are passed unevaluated as an array of string names or as a single string name. - int function pretty_print(file f, string[*] names) - Implements the top-level print command. 
Each of the passed name strings is looked up and the corresponding code printed to file f. - int function display(string fmt, poly val) - Uses printf() to display the value val in format fmt. - History - Nickle maintains a top-level value history, useful as an adjunct to command-line editing when calculating. The History namespace contains functions to access this history. - int function show(string fmt) - Implements the history top-level command with no arguments. Show the most recent history values with format fmt. - int function show(string fmt, int count) - Implements the history top-level command with one argument. Show the last count history values with format fmt. - int function show(string fmt, int first, int last) - Implements the history top-level command with two arguments. - poly function insert(poly val) - Insert val in the history list, and return it. - Environ - Many operating systems have some notion of ``environment variables.'' The Environ namespace contains functions to manipulate these. - int function check(string name) - Returns 1 if the variable with the given name is in the environment, and 0 otherwise. - string function get(string name) - Attempts to retrieve and return the value of the environment variable with the given name. Throws an invalid_argument exception if the variable is not available. - int function unset(string name) - Attempts to unset the environment variable with the given name, and returns an indication of success. - string function set(string name, string value) - Attempts to set the environment variable with the given name to the given value, and returns an indication of success. Nickle has a set of commands which may be given at the top level. - quit - Exit Nickle. - quit E - Exit Nickle with integer status code E. - undefine NAME {,NAME} - Remove these names from the system. - load E - Load a file given by the string name E. - library E - Load a library given by the string name E. 
See the discussion of the NICKLEPATH environment variable in ENVIRONMENT below, and the discussion of Command::library_path above. - E # E - Print the first expression in the base given by the second. - print NAME - Display a formatted version of the object denoted by NAME. Comments and original formatting are lost. If NAME is a variable, print the type as well as the value. - edit NAME - Invoke $EDITOR on the named object, and re-incorporate the results of the edit. This is most useful with functions. - history - Display the 10 most recently printed values. They can be accessed with $n, where n is the number displayed to the right of the value in this list. - history E - Display the E most recent history values. - history E,E - Display history values from the first integer E through the second. - trace - Get a stack backtrace showing the current state, as with the GDB where command. - up - Move up the stack (i.e., toward the top-level expression) ala GDB. - down - Move down the stack (i.e., toward the current context) ala GDB. - done - Leave debugging mode, abandoning execution. - In addition, the Debug namespace is brought into scope in debugging mode. This is primarily of use in debugging Nickle itself. - collect() - Run the garbage collector. - dump(function) - Print the compiled byte code for function. - EDITOR - The editor used by the edit command, described in COMMANDS above. - NICKLERC - The location of the user's .nicklerc file, which will be loaded at the beginning of nickle execution if possible. - HOME - Used to find the user's .nicklerc if NICKLERC is not set. - NICKLEPATH - A colon-separated path whose elements are directories containing Nickle code. The library command and the -l flag, described above, search this path for a filename matching the given file. The default library path in the absence of this variable is /usr/share/nickle. - NICKLESTART - The filename of the file that should be loaded as a bootstrap on Nickle startup.
The default in the absence of this variable is to load /usr/share/nickle/builtin.5c.
https://www.systutorials.com/docs/linux/man/1-nickle/
Managing Warnings from Compilers (and other tools) Avoid, eliminate, or suppress warnings, or document them. Boost aims to avoid warning messages as far as is reasonably practicable, even when compiled with the highest and most pedantic warning level, avoiding vendor-specific extensions if possible. Warnings often indicate real problems. Sometimes they only manifest on a particular platform, revealing a portability issue. Sometimes they indicate that the code doesn't account for a runtime condition, like overflow, which the warning can only suggest as a possibility. Suppressing a warning without altering code may simply mask a problem. The right approach is to determine why the warning occurs, to decide whether it is correct in the context, and if so, apply appropriate remediation. Only if the warning is not correct in the context should it be suppressed. Suppressing warnings on all platforms is tedious, hard work, because every compiler provides different warnings and different ways of suppressing them. Compiler-provided predefined macros can help in pre-processing to choose the right method, but it is usually less trouble to eliminate the problem by better coding. Because developers, even among Boost developers, don't all have the same knowledge, Boost is amassing information to help them judge when a warning is significant and when it is not. That information can show cases in which a warning is legitimate and cases in which it isn't. For the former, there is help on how to change the code portably to account for the problem revealed by the warning. For the latter, there is information on how to suppress the warning in a portable way. Changing code to eliminate a warning can itself introduce a bug, which is unfortunate. From a maintenance standpoint, however, most would prefer to see altered code rather than a glob of preprocessor and pragma line noise that suppresses a warning in the unchanged code. Testing will reveal the bug.
If it doesn't, the testing is insufficient. If the bug appears on an untested platform, then more testers are needed to be able to detect such bugs in the future. Briefly, for some users, any warning is a bug. See also the progress made so far on warning fixes/suppression in Boost libraries.

Reasons to eliminate or suppress warnings:
1. To allow users whose environment requires no warnings to use Boost code.
2. To avoid the nuisance, perhaps overwhelming, of spurious warning messages.
3. To improve code quality by focusing library writers' attention on potential problems.
4. To improve portability by focusing developer attention on potentially non-portable code.
5. To improve compliance with the C++ Standard.
6. To permit users of Boost libraries to set high warning levels for their own code without being overwhelmed by a barrage of library warnings.
7. To document that warnings have been considered by the library author or maintainer and are considered not significant.
8. For Boost, making the suppression local is more important than only suppressing a specific warning.

What to do

1. Test compilation with the most pedantic setting for the compiler, in non-debug mode. For Microsoft Visual Studio, this means setting the warning level to 4 (command line /W4). You might imagine that it would be a good idea to compile with the /Za option to disable MS extensions, but this will not be as useful as one might hope, for reasons explained by Stephan T. Lavavej of the Microsoft STL team in an authoritative post, "/Za and Warning Guidelines for VC". More seriously, when using Boost libraries with /Za, standard-conforming name lookup is broken; note that this bug has 'won't fix' status. Briefly, don't use /Za for ANYTHING. It's broken. There are several problems:
- Firstly, the latest compilers, especially VC10, are "pretty conformant", so code is likely to be portable anyway.
- Secondly, there are compiler bugs which mean that valid standard-conforming code fails to compile, and these are marked 'won't fix'.
- Thirdly, there is the probable need to link to other libraries (for example regex or serialization) that must also be built with the /Za option, but some will fail to compile.
- Finally, there is no way to distinguish by filename libraries built with and without this option, so they cannot co-exist: a recipe for confusion and trouble!
The ultimate test of C++ code portability is still testing on as wide a variety of platforms as possible. You might also consider using Microsoft /Wall if available. A guide to the Byzantine complexity of Microsoft's warnings may be helpful. Having said that, if you only have access to a Microsoft platform, before launching code on the full array of testers you might consider checking whether any indications of non-portability emerge from testing with language extensions disabled. But you will probably just find spurious compilation failures. To do this (for code that doesn't deliberately use Microsoft language extensions), use the VS IDE to disable them with Disable MS extensions = Yes, which adds the command line option /Za. To a jamfile, add <toolset>msvc:<cxxflags>/Za # disable MS extensions. If it proves impossible to compile with the /Za option (it causes trouble with some MS compilers, sometimes), just document this, including a comment in the build Jamfile. If only one (or a few) modules in a test (or other) build require MS extensions, you can selectively 'switch off' this 'disable' in the 'requirements' in the Jamfile, for example:

# pow test requires type_of, which requires MS extensions.
run pow_test.cpp ../../test/build//boost_test_exec_monitor
  : # command line
  : # input files
  : # requirements
  -<toolset>msvc:<cxxflags>/Za # Requires type_of which requires MS extensions, so cancel /Za to enable extensions.
  : test_pow ;

Note that the leading - switches '''off''' the compiler switch /Za, producing output thus:

compile-c-c++ ..\..\..\bin.v2\libs\math\test\test_pow.test\msvc-9.0\debug\asynch-exceptions-on\threading-multi\pow_test.obj
pow_test.cpp
using native typeof

(If you really wish to make all of your VS projects in all VS solutions compile with other options by default, you might consider (carefully) editing, with a text editor, very carefully, after keeping a copy of the original, the file C:\Program Files\Microsoft Visual Studio 9.0\VC\VCWizards\1033\common.js (or similarly C:\Program Files\Microsoft Visual Studio 10.0\VC\VCWizards\1033\common.js for VC 10) to change from the Microsoft defaults. This does not seem to be officially documented as far as I know. It may save you changing the project properties repeatedly (and is especially useful if you find that, because of previous choices, the project wizard's default is to disable extensions). In function AddCommonConfig(oProj, strProjectName, bAddUnicode, bForEmptyProject), in the sections for debug and/or release:

CLTool.DisableLanguageExtensions = true;  // add to make the default NO MS extensions (but you really don't want this for Boost code).
CLTool.DisableLanguageExtensions = false; // default, enable language extensions (strongly recommended for all Boost code).
LinkTool.LinkIncremental = linkIncrementalNo; // add
CLTool.WarningLevel = WarningLevel_4; // change from 3 to 4, making the default warning level 4
)

For GCC this means -Wall -pedantic, but you might consider adding specific warnings that are to be suppressed, for example: -pedantic -Wall -Wno-long-long -Wno-unused-value. But this is a global setting, so you need to document that these warnings must be suppressed by users of this module. This may be acceptable if they are building a library of entirely Boost modules. Using bjam, add warnings=all to the invocation command line.
Putting options in jam files allows settings to be made for one or more specified compiler(s). Some sample options in Jamfile.v2 are:

project : requirements
  <toolset>gcc:<cxxflags>-Wno-missing-braces
  <toolset>clang:<cxxflags>-Wno-missing-braces
  <toolset>darwin:<cxxflags>-Wno-missing-braces
  <toolset>acc:<cxxflags>+W2068,2461,2236,4070,4069 # Comments on warning messages help readers who do not have ACC documentation to hand.
  <toolset>intel:<cxxflags>-nologo
  <toolset>msvc:<warnings>all # == /W4
  -<toolset>msvc:<define>_DEBUG # Undefine DEBUG.
  <toolset>msvc:<define>NDEBUG # Define NO debug, or release.
  <toolset>msvc:<asynch-exceptions>on # Needed for Boost.Test
  # <toolset>msvc:<cxxflags>/Za # Disable MS extensions, if required, but see above.
  #-<toolset>msvc:<cxxflags>/Za # (Re-)Enable MS extensions if these are definitely required for a specific module.
  # The defines of the macros below prevent warnings about the checked versions of the SCL and CRT libraries.
  # Most Boost code does not need these versions (as they are markedly slower).
  <toolset>msvc:<define>_SCL_SECURE_NO_WARNINGS
  <toolset>msvc:<define>_SCL_SECURE_NO_DEPRECATE
  <toolset>msvc:<define>_CRT_SECURE_NO_WARNINGS
  <toolset>msvc:<define>_CRT_SECURE_NO_DEPRECATE
  <toolset>msvc:<define>_CRT_NONSTDC_NO_DEPRECATE # Suppresses other warnings about using standard POSIX and C9X.
  # Alternatively, you can just suppress the warnings (perhaps not the best way).
  <toolset>msvc:<cxxflags>/wd4996 # 'putenv': The POSIX name for this item is deprecated.
  <toolset>msvc:<cxxflags>/wd4512 # assignment operator could not be generated.
  <toolset>msvc:<cxxflags>/wd4224 # nonstandard extension used : formal parameter 'arg' was previously defined as a type.
  <toolset>msvc:<cxxflags>/wd4127 # expression is constant.
  <toolset>msvc:<cxxflags>/wd4701 # unreachable code - needed for lexical cast - temporary for Boost 1.40 & earlier.
  ...
  ;

Defining warnings as errors

Treating warnings as errors will make it hard to miss or ignore warnings.
Microsoft's command line option /WX makes all warnings into errors, but this may be too drastic a step. The following have been recommended by Pete Bartlett to make the MSVC and GCC compilers behave as similarly as possible:
/we4288 - for-loop scoping (this is the default)
/we4238 - don't take the address of temporaries
/we4239 - don't bind temporaries to non-const references (Stephan's "Evil Extension")
/we4346 - require "typename" where the standard requires it
and compile options:
/Zc:forScope - for-loop scoping again (this is the default)
/DNOMINMAX - don't define min/max macros (usually only applies if windows.h is involved)
(Note that suitable bracketing, for example (std::numeric_limits<result_type>::max)(), will avoid clashes with the min/max macros, and is required to avoid being named'n'shamed by the Boost inspection program. So it may be better to ensure that clashes are correctly avoided by bracketing, because others may use your code when the min/max macros are defined, rather than relying on this compiler option.)
gcc option -Werror (make all warnings into errors)
gcc option -pedantic-errors
2. Consider each warning, and:
a. Rewrite the code to avoid the warning, if possible. For example, adding a static_cast will indicate that any warning about loss of accuracy has been judged not possible or not significant. Remove or comment out parameter names to avoid warnings about unused parameters; placing /* comment */ around an unused parameter's name allows the name to remain as useful documentation.
b. Use the compiler-specific mechanism to suppress the warning message, but try hard to ensure that this is as local to the package as possible, so that users can still get warnings from *their code*. For MSVC, this involves pushing to save the current warning state, disabling the warning, and later popping to restore it (abbreviated as push'n'pop). There is reportedly a limit to how many times (56) this can be used, so it is always much better to 'fix' than to suppress warnings.
"The compiler only supports up to 56 #pragma warning statements in a compiland" This limit might be reached in practice with Boost code. But it is now believed that this limit value is incorrect and that there is no practical limit, see post by Stephan T. Lavavej On GCC there does not appear to be a practical limit, see post by Patrick Horgan. #if defined(_MSC_VER) #pragma warning(push) // Save warning settings. #pragma warning(disable : 4996) // Disable deprecated localtime/gmtime warning. #endif ... #if defined(_MSC_VER) #pragma warning(pop) // Restore warnings to previous state. #endif Adding some or all of the warning message as a comment is helpful to tell readers what warning is being ignored, for example: # pragma warning(disable: 4510) // default constructor could not be generated If the warning is only for a specific compiler version, use this approach: #if BOOST_WORKAROUND(BOOST_MSVC, >= 1400) #pragma warning(push) #pragma warning(disable:4512) //assignment operator could not be generated #endif ... #if BOOST_WORKAROUND(BOOST_MSVC, >= 1400) #pragma warning(pop) #endif c Repeat this process with other compilers and use their specific suppression methods. d If a warning cannot be eliminated or suppressed, explain why in the code and the documentation. If appropriate, document in build files, as well. Consider indicating the highest warning level possible or compiler-specific settings that will provide a warning-free build. e If a warning only appears as false positive because is it in a #ifdef, for example: void f(int x) { // ..code.. #ifdef XYZ dostuff(x); #endif // ..code.. then the compiler cannot detect that it is a false positive (FP). then it is recommended by Gabor Horvath to add a line like "(void) x;" to suppress the warning. Other possible workarounds (in case you have several unused variables): - You can use pragmas to disable some diagnostics for certain regions of code. 
- You can use compiler flags to disable some diagnostics for certain translation units.
- You can always factor out platform-dependent code into multiple files and put the #ifdef around the includes.
Specific Warnings and Suggested Actions These are just a few for starters, and more are needed, especially for GCC. Suggestions may be wrong! The pragma or other code required to suppress each warning is in the right-hand box, ready for copy and paste. Microsoft If you choose to suppress (rather than fix by recoding), localise the warnings as far as possible by using push'n'pop pragmas (see above). Warnings for GCC With older versions of GCC it is more difficult to deal with warnings, because there is no way to locally silence individual warnings. As of GCC version 4.2, suppression via pragma is allowed (actually, you can choose whether a particular problem is a warning, an error, or ignored). Initially there was no way to save the current state of affairs, so after suppressing a warning you wouldn't know whether the user had it on, off, or causing an error. You can imagine that this would be quite annoying for users who care about these things. Also, the pragmas had to be at file scope rather than inside your code. As of version 4.6, pragmas were added to push and pop the state of diagnostics, and the pragmas now take effect from the line on which they are specified. It is considered important not to leave warnings switched off when meeting user code; this is an unresolved difficulty when using GCC versions prior to 4.6. On the other hand, the number of unhelpful warnings seems fewer, so more emphasis should be placed on fixing warnings, for example by using static_cast or providing missing items. Turning on Warnings GCC uses command line options to turn on warnings. If you want to turn on warnings about unused variables you would add -Wunused-variable to the command line.
Some of the command line options, such as -Wall, -Wextra, and -pedantic, turn on groups of warnings, as detailed below. Sometimes a group is almost right, but not quite. If you want -pedantic, for example, but without warnings about the use of long long in C++98, you could say -pedantic -Wno-long-long. You could also say -pedantic -std=c++0x, since long long is in that version of the spec. In general, any diagnostic option specified as -Woption-name can be turned off with -Wno-option-name. -Wall For specific information on any of these warnings, please see the GCC documentation. -Wall turns on warning flags including -Waddress, -Warray-bounds (only with -O2), -Wc++0x-compat, -Wchar-subscripts, and others. -Wextra adds further warnings, including (C++ only) taking the address of a variable which has been declared `register', and (C++ only) a base class that is not initialized in a derived class's copy constructor. -pedantic Tells GCC that you intend to adhere to a particular version of a language standard. Causes GCC to issue all diagnostics required by the version of the C or C++ language standard given by the -std=xxxx option (the default is -std=gnu++98 for C++). Also warns about the use of GCC extensions. (This would, for instance, warn about the use of #ident, which is a non-standard GCC extension.) Some of the warnings this option triggers can be suppressed, such as the use of long long in C++98 code with -Wno-long-long, but many cannot. Using -pedantic makes it easier to write portable code. It is best used with an explicit -std=xxxx. Much of Boost uses facilities from more recent versions of the C++ standard, so it might make more sense to explicitly specify -std=c++0x or -std=gnu++0x; both have the same effect on -pedantic. See the GCC manual for more information. Specific Warnings and Suggested Actions comparison between signed and unsigned integer expressions -Wsign-compare - enable the warning (also turned on by -Wextra) -Wno-sign-compare - disable the warning Almost always points to a real issue.
The ranges of these types are obviously different: for example, signed char runs from -128 to 127 and unsigned char from 0 to 255. In the range from 128 to 255, a comparison between one and the other makes no sense: they can have the same bit pattern, yet test as not equal. This is the source of many subtle and hard-to-find bugs. The same problem can occur with all of the integral types. For example, with a 16-bit short int, the signed short ranges from -32768 to 32767, and the unsigned short from 0 to 65535.

To fix: if you are sure that all the values of both things being compared fall into the overlap where their values are the same, you can static_cast one type to the other, if you must, but if you can change one of the types it's a better solution. If you are sure that you can fit all values into the shared non-negative range, i.e. from 0 to (1<<(sizeof(IntegralType)*8-1))-1, it would make more sense to declare the signed one as the unsigned equivalent (perhaps size_t). In general, if you don't intend for a variable to ever have negative values, take advantage of the diagnostic capabilities of the compiler and declare it as an unsigned type, often size_t. After you've done this, if you see the warning again for one of these same variables, it is warning you of the same danger again. This often shows up with loop counters. We all learned to use int, but if it's truly counting only non-negative numbers, size_t is better.

missing braces around initializer for ‘main()::X’

-Wmissing-braces - enable the warning (also turned on by -Wall)
-Wno-missing-braces - disable the warning

This warning is trying to let you know that what you think you're initializing may not actually be what you're initializing.

struct X { int i; int j; };
struct Y { struct X x; int j; };
Y y = {1,2};

The above example initializes both of the elements of X inside Y, but doesn't initialize Y's own j.
Change to:

Y y = {{1},2};

to initialize half of X and all of Y, or, better, something like:

Y y = {{1,2},3};

The same kind of problem can come up with:

int a[2][2] = { 0, 1, 2 };

versus:

int a[2][2] = { {0,1}, 2 };

or

int a[2][2] = { 0, {1,2} };

deprecated conversion from string constant to ‘char*’

-Wwrite-strings - enable this warning (disabled by default for C and enabled by default for C++)
-Wno-write-strings - disable this warning

For C, this helps find code that may try to write into a string constant, but only if you have been careful about using const in declarations and prototypes. For C++, it is often symptomatic of a real bug, since string literals are not writable in C++.

This one shows up when you have a string literal, like "Hello world!", which is arguably const data, and you assign it to a non-const char*, either directly or by passing it as an argument to a function that expects a non-const char *. Both:

char* cp = "Hello world!";

and

void foo(char *cp);
foo("Hello world!");

will get this warning, because you're using a string literal where a char* is expected. If you say "so what?", see if you can predict what will print in the example below.

#include <iostream>

void foo(char *cp) { cp[0]='W'; }

int main()
{
    char *str="Hello, world";
    foo(str);
    std::cout << str << std::endl;
}

If you said that Wello, world will print, think again. Your most likely output is something like:

Segmentation fault (core dumped)

A string literal is put into a section that is read-only, and trying to write to it makes things blow up! The right fix is to make anything that a string literal can be assigned to a const char*. Unfortunately, sometimes you can't fix the problem at the source, because it's someone else's code, and you can't change the declaration.
For example, if you had this function:

int PyArg_ParseTupleAndKeywords(PyObject *arg, PyObject *kwdict, char *format, char **kwlist, ...);

and you had a keyword list to pass it that looked like this:

static char *kwlist[] = {"fget", "fset", "fdel", "doc", 0};
if (!PyArg_ParseTupleAndKeywords(args, kwds, "|OOOO:property", kwlist, &get, &set, &del, &doc))

you would get the warning on the kwlist assignment, because you're assigning a const char * to a char *, but when you correctly change it to:

static const char *kwlist[] = {"fget", "fset", "fdel", "doc", 0};

to get rid of the warning, you'll get a warning on the call to PyArg_ParseTupleAndKeywords:

error: invalid conversion from ‘const char**’ to ‘char**’

because the library function was declared as taking a char ** for the fourth argument. Your only recourse is to cast away the constness with a const_cast:

if (!PyArg_ParseTupleAndKeywords(args, kwds, "|OOOO:property", const_cast<char **>(kwlist), &get, &set, &del, &doc))

Annoying, yet more correct than before, and your code maintains type correctness. You just cast away the correct constness for the brain-damaged external routine.

dereferencing type-punned pointer will break strict-aliasing rules

This warning is only active when the strict-aliasing optimization is active.

-Wstrict-aliasing - equivalent to -Wstrict-aliasing=3 (included in -Wall)
-Wstrict-aliasing=N, where N is in {1,2,3} - higher N implies fewer false positives, at the expense of more work. Currently there are 3 levels.

Level 1: Most aggressive, quick, very few false negatives, but many false positives. Might be used if higher levels don't warn, but turning on -fstrict-aliasing breaks the code. Warns about pointer conversions even if the pointer is not dereferenced.

Level 2: Aggressive, quick, not too precise. Fewer false positives than Level 1, but more false negatives. Only warns if the pointer is dereferenced.

Level 3: Very few false positives and very few false negatives; this is the default.
-fstrict-aliasing - also turned on by -O2, -O3 and -Os. Tells the compiler that it's OK to do a certain class of optimization based on the type of expressions. In particular, you're promising by using this flag that an object of one type won't reside at the same address as an object of an incompatible type.

-fno-strict-aliasing - turns off this optimization. If this changes the behavior of your code, you have a problem in your code.

This warning is trying to let you know that you are asking the compiler to perform undefined behavior, and it may not do what you think it will do. As the optimization level increases, the likelihood that you won't like what it does will increase. I show a simple example later that surprisingly generates the wrong result when optimization at any level is turned on. Ignore this warning at your own peril; you are unlikely to care for the undefined behavior that results.

From the C++ Standard, section 3.10 Lvalues and rvalues:

If a program attempts to access the stored value of an object through an lvalue of other than one of the following types the behavior is undefined:
— the dynamic type of the object,
— a cv-qualified version of the dynamic type of the object,
— a type that is the signed or unsigned type corresponding to the dynamic type of the object,
— a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
— an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union),
— a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
— a char or unsigned char type.

The following program generates 6 warnings about breaking strict-aliasing rules, and many would dismiss them. The correct output of the program is:

00000020
00200000

but when optimization is turned on it's:

00000020
00000020

THAT's what the warning is trying to tell you: the optimizer is going to do things that you don't like. In this case, seeing that acopy is set from a and (as far as the aliasing rules are concerned) never touched again, strict aliasing lets it optimize by just returning the original value of a at the end.

Broken version

#include <stdint.h>
#include <iostream>
#include <iomanip>
using namespace std;

uint32_t swaphalves(uint32_t a)
{
    uint32_t acopy = a;
    uint16_t *ptr = (uint16_t*)&acopy; // can't use static_cast<>, not legal.
                                       // you should be warned by that.
    uint16_t tmp = ptr[0];
    ptr[0] = ptr[1];
    ptr[1] = tmp;
    return acopy;
}

int main()
{
    uint32_t a;
    a = 32;
    cout << hex << setfill('0') << setw(8) << a << endl;
    a = swaphalves(a);
    cout << setw(8) << a << endl;
}

So what goes wrong? Since a uint16_t can't alias a uint32_t under the rules, the stores through ptr are ignored in considering what to do with acopy. Since the compiler sees that (by those rules) nothing is done with acopy inside the swaphalves function, it just returns the original value of a. Here's the (annotated) x86 assembler generated by gcc 4.4.1 for swaphalves; let's see what went wrong:

_Z10swaphalvesj:
	pushl	%ebp
	movl	%esp, %ebp
	subl	$16, %esp
	movl	8(%ebp), %eax	# get a in %eax
	movl	%eax, -8(%ebp)	# and store it in acopy
	leal	-8(%ebp), %eax	# now get eax pointing at acopy (ptr=&acopy)
	movl	%eax, -12(%ebp)	# save that ptr at -12(%ebp)
	movl	-12(%ebp), %eax	# get the ptr back in %eax
	movzwl	(%eax), %eax	# get 16 bits from ptr[0] in eax
	movw	%ax, -2(%ebp)	# store the 16 bits into tmp
	movl	-12(%ebp), %eax	# get the ptr back in eax
	addl	$2, %eax	# bump up by two to get to ptr[1]
	movzwl	(%eax), %edx	# get that 16 bits into %edx
	movl	-12(%ebp), %eax	# get ptr into eax
	movw	%dx, (%eax)	# store the 16 bits into ptr[0] (ptr[0]=ptr[1])
	movl	-12(%ebp), %eax	# get the ptr again
	leal	2(%eax), %edx	# get the address of ptr[1] into edx
	movzwl	-2(%ebp), %eax	# get tmp into eax
	movw	%ax, (%edx)	# store into ptr[1]
	movl	-8(%ebp), %eax	# forget all that, return original a.
	leave
	ret

Scary, isn't it? Of course, if you are using gcc, you could use -fno-strict-aliasing to get the right output, but the generated code won't be as good. A better way to accomplish the same thing, without the warnings or the incorrect output, is to define swaphalves as below. N.B. union-based type punning is supported in C99 and later C specs, as noted in footnote 85 to 6.5.2.3 Structure and union members, but your mileage may vary in C++: almost all compilers support it, but the spec doesn't allow it.
Right after this discussion there is another solution, using memcpy, that may be slightly less efficient (but probably not) and is supported by both C and C++.

Union version. Fixed for C, but not guaranteed portable to C++:

uint32_t swaphalves(uint32_t a)
{
    typedef union
    {
        uint32_t as32bit;
        uint16_t as16bit[2];
    } swapem;
    swapem s = {a};
    uint16_t tmp;
    tmp = s.as16bit[0];
    s.as16bit[0] = s.as16bit[1];
    s.as16bit[1] = tmp;
    return s.as32bit;
}

The C++ compiler knows that members of a union alias, and this helps the compiler generate MUCH better code:

_Z10swaphalvesj:
	pushl	%ebp		# save the original value of ebp
	movl	%esp, %ebp	# point ebp at the stack frame
	movl	8(%ebp), %eax	# get a in eax
	popl	%ebp		# get the original ebp value back
	roll	$16, %eax	# swap the two halves of a and return it
	ret

So: do it wrong via strange casts and get incorrect code, turn off strict-aliasing and get inefficient code, or do it right and get efficient code. You can also accomplish the same thing by using memcpy with char* to move the data around for the swap, and it will probably be just as efficient. Wait, you ask, how can that be? There will be at least two calls to memcpy added to the mix! Well, gcc and other modern compilers have smart optimizers and will, in many cases (including this one), elide the calls to memcpy. That makes it the most portable method, and as efficient as any other. Here's how it would look:

memcpy version, compliant with the C and C++ specs and efficient:

#include <string.h>

uint32_t swaphalves(uint32_t a)
{
    uint16_t as16bit[2], tmp;
    memcpy(as16bit, &a, sizeof(a));
    tmp = as16bit[0];
    as16bit[0] = as16bit[1];
    as16bit[1] = tmp;
    memcpy(&a, as16bit, sizeof(a));
    return a;
}

For the above code, a C compiler will generate code similar to the previous solution, but with the addition of two calls to memcpy (possibly optimized out). gcc generates code identical to the previous solution. You can imagine other variants that substitute reading and writing through a char pointer locally for the calls to memcpy.
Suppressing Warnings in GCC

Suppressing warnings for a file by making GCC see it as a system header

Using a pragma to make GCC treat a file (or the rest of a file) as a system header

Beginning with GCC 3.1, for a particular file, you can turn off all warnings (including most warnings generated by the expansion of macros specified in that file) by putting the following in the file:

#pragma GCC system_header

From the pragma onward, the file is considered a system header and its warnings are suppressed. Use this sparingly: most warnings point to real issues and should be dealt with appropriately.

Using -isystem to make GCC treat all files in a directory as system headers

You can also turn off warnings for all files in a directory by putting the directory into the include search path with -isystem instead of -I. The -isystem directoryName command line option adds its argument to the list of directories to search for headers, just like -I. Any headers found in that directory will be considered system headers. It also has the side effect of changing the inclusion order.

-Wsystem-headers

This makes GCC print warning messages for constructs found in system header files that would normally not be seen. Using -Wall in conjunction with this option will not warn about unknown pragmas in system headers; for that, -Wunknown-pragmas must also be used.

Turning off warnings locally with GCC

Finding out what option controls the warning

-fdiagnostics-show-option - In GCC, for versions 4.2 and higher, this option instructs the diagnostic machinery to add text to each diagnostic emitted, indicating which command line option directly controls that diagnostic, when such an option is known to the diagnostic machinery. The added text will look similar to [-Wsign-compare]. If you see this, it tells you that the -Wsign-compare command line option turns this warning on.

Turning the warnings off and on

GCC provides the following pragmas to control warnings:

- #pragma GCC diagnostic push - saves the current diagnostic state (GCC 4.6 and later).
- #pragma GCC diagnostic pop - restores the previously saved state. Be careful, though, that you don't pop too soon.
In this example:

foo()
{
    int unused, i;
    i = 3;
}

we might want to suppress the unused-variable warning like this:

foo()
{
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-variable"
    int unused, i;
#pragma GCC diagnostic pop
    i = 3;
}

and then be surprised that we still get a warning about an unused variable. The reason is that GCC doesn't know the variable is unused until it hits the closing brace. That means the pop has to come after the closing brace:

foo()
{
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-variable"
    int unused, i;
    i = 3;
}
#pragma GCC diagnostic pop

- #pragma GCC diagnostic [warning|error|ignored] OPTION - sets how OPTION is treated, for example:

#pragma GCC diagnostic ignored "-Wdeprecated-declarations"

version 4.2

You can turn warnings off, but then what? Starting from 4.2 but before 4.6, just put something like this near the top of the file:

#pragma GCC diagnostic ignored "-Wdeprecated-declarations"

version 4.6

Now you can restore the user's flags. For version 4.6 or later, you can save the state of the user's diagnostic flags and insert this around the line that causes the spurious warning:

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
// Next you would have any amount of code for which you'd like to suppress that warning
#pragma GCC diagnostic pop

Of course this could cover everything from a single line up to the whole file, and in between the push and the pop you could make multiple changes to each of multiple options.

A handy macro to help you do some of this

Jonathan Wakely came up with a nice macro set to control this, and a slightly modified version of it is shared here. It defines:

- GCC_DIAG_OFF(FLAG) - turn the given warning off.
- GCC_DIAG_ON(FLAG) - restore the previous diagnostic state (or, on pre-4.6 compilers, turn the warning back on).

For example:

GCC_DIAG_OFF(sign-compare);
if (a < b) {
GCC_DIAG_ON(sign-compare);

to turn off warnings that you know are spurious. (Probably a cast of one to the other's type, or changing the declaration of the type of one to the other's, would be a better fix.)
#if ((__GNUC__ * 100) + __GNUC_MINOR__) >= 402
#define GCC_DIAG_STR(s) #s
#define GCC_DIAG_JOINSTR(x,y) GCC_DIAG_STR(x ## y)
# define GCC_DIAG_DO_PRAGMA(x) _Pragma (#x)
# define GCC_DIAG_PRAGMA(x) GCC_DIAG_DO_PRAGMA(GCC diagnostic x)
# if ((__GNUC__ * 100) + __GNUC_MINOR__) >= 406
#  define GCC_DIAG_OFF(x) GCC_DIAG_PRAGMA(push) \
          GCC_DIAG_PRAGMA(ignored GCC_DIAG_JOINSTR(-W,x))
#  define GCC_DIAG_ON(x) GCC_DIAG_PRAGMA(pop)
# else
#  define GCC_DIAG_OFF(x) GCC_DIAG_PRAGMA(ignored GCC_DIAG_JOINSTR(-W,x))
#  define GCC_DIAG_ON(x)  GCC_DIAG_PRAGMA(warning GCC_DIAG_JOINSTR(-W,x))
# endif
#else
# define GCC_DIAG_OFF(x)
# define GCC_DIAG_ON(x)
#endif

These macro names won't collide with GCC macros, since theirs start with one or two underscores. (A list of GCC pre-defined macros can be found in the GCC documentation.)

Boost 1.46.1 libraries on a Fedora 7 Linux system with the default g++ 4.1.2

The warnings from the math library produce zillions of lines of output. Each warning has a long description of the instantiation and then concludes with a couple of lines like this:

./boost/math/special_functions/binomial.hpp:65: instantiated from here
./boost/mpl/assert.hpp:79: warning: lowering visibility of 'int mpl_::assertion_failed(typename mpl_::assert<C>::type) [with bool C = false]' to match its type

The workaround for this is to add cxxflags=-Wno-attributes to the <compileflags> and <linkflags> in the user-config file. Fixed by Redhat for g++ 4.5.1 on Fedora 14.

Clang

Clang supports most GCC warnings; see the Clang documentation. The options -Wpedantic, -Wall, and -Wextra will enable nearly all warnings: a good start, and a guide to eliminating warnings by recoding as far as possible.
However, it may still be necessary to suppress those that are not providing any useful guidance and cannot reasonably be eliminated. Many of the methods discussed above for GCC should be useful. (Clang is more recent, and its pragma support matches GCC >= 4.6, so the complexity of early GCC versions is absent.)

To ignore a warning via the command line, add a preceding "no-" to an option like "-Wunused-variable", thus: "-Wno-unused-variable". In a jamfile, add, for example:

<toolset>clang:<cxxflags>-Wno-reorder
<toolset>clang:<cxxflags>-Wno-unused-variable
<toolset>clang:<cxxflags>-Wno-maybe-uninitialized
...

Clang also supports suppressing warnings by pragmas, for example:

#pragma clang diagnostic ignored "-Wunused-variable"

As with MSVC, it is usually desirable to localize warning suppression by bracketing it with push'n'pop pragmas. Localization is especially important in included .hpp library files, to avoid quietly suppressing warnings in user code: the user may need to see warnings in his code. A contrived example is:

#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wmultichar"
char b = 'df'; // Avoid a warning that more than one char is provided.
#pragma clang diagnostic pop

But this is not portable and will produce 'unknown pragma' warnings on other platforms.
Macros that allow this to be done without causing complaints from other compilers are:

#ifdef __clang__
# define CLANG_DIAG_STR(s) # s // stringize s, e.g. to "unused-variable"
# define CLANG_DIAG_JOINSTR(x,y) CLANG_DIAG_STR(x ## y) // join -W with the flag name, e.g. "-Wunused-variable"
# define CLANG_DIAG_DO_PRAGMA(x) _Pragma (#x) // _Pragma is the unary operator form of #pragma
# define CLANG_DIAG_PRAGMA(x) CLANG_DIAG_DO_PRAGMA(clang diagnostic x)
# define CLANG_DIAG_OFF(x) CLANG_DIAG_PRAGMA(push) \
      CLANG_DIAG_PRAGMA(ignored CLANG_DIAG_JOINSTR(-W,x)) // For example: #pragma clang diagnostic ignored "-Wunused-variable"
# define CLANG_DIAG_ON(x) CLANG_DIAG_PRAGMA(pop) // Restores the saved diagnostic state
#else // Ensure these macros do nothing for other compilers.
# define CLANG_DIAG_OFF(x)
# define CLANG_DIAG_ON(x)
# define CLANG_DIAG_PRAGMA(x)
#endif

/* Usage:
CLANG_DIAG_OFF(unused-variable)
CLANG_DIAG_OFF(unused-parameter)
CLANG_DIAG_OFF(uninitialized)
*/

#include <iostream>
#include <boost/assert.hpp>

int main()
{
#ifdef __clang__
    std::cout << "Clang " << __clang_major__ << '.' << __clang_minor__ << '.' << __clang_patchlevel__ << std::endl;
#endif

    // Clang method of saving and restoring the current warning state.
#pragma clang diagnostic push
#pragma clang diagnostic pop

    // Example of using Clang push'n'pop around several ignored warnings.
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wmultichar"
#pragma clang diagnostic ignored "-Wconstant-conversion"
#pragma clang diagnostic ignored "-Wunused-variable"
    char b = 'df'; // Most unwisely ignoring ALL of these warnings.
#pragma clang diagnostic pop

    // Example of using the macro for push and pop pragmas.
    CLANG_DIAG_PRAGMA(push);
#pragma clang diagnostic ignored "-Wunused-variable"
    int unused;
    CLANG_DIAG_PRAGMA(pop);

    // Example of using the macros to push the warning state, suppress a warning, and restore the state.
    CLANG_DIAG_OFF(unused-variable)
    int unused_two; // No warning from unused variable.
    CLANG_DIAG_ON(unused-variable)
    // int unused_too; // Expect an unused-variable warning!
    return 0;
}

/* Output:
Clang 3.1.0
clang++.exe -dD -Wall -Wextra -c -g -Wall -I/i/boost-trunk -MMD -MP -MF build/Debug/MinGW_Clang-Windows/warnings_1.o.d -o build/Debug/MinGW_Clang-Windows/warnings_1.o warnings_1.cpp
warnings_1.cpp:97:7: warning: unused variable 'unused_too' [-Wunused-variable]
int unused_too; // Expect an unused-variable warning!
^
1 warning generated.

* * * but for MSVC * * *

ClCompile:
warning_1.cpp
warning_1.cpp(77): warning C4068: unknown pragma
warning_1.cpp(78): warning C4068: unknown pragma
warning_1.cpp(81): warning C4068: unknown pragma
warning_1.cpp(82): warning C4068: unknown pragma
warning_1.cpp(83): warning C4068: unknown pragma
warning_1.cpp(84): warning C4068: unknown pragma
warning_1.cpp(85): warning C4305: 'initializing' : truncation from 'int' to 'char'
warning_1.cpp(85): warning C4309: 'initializing' : truncation of constant value
warning_1.cpp(86): warning C4068: unknown pragma
warning_1.cpp(90): warning C4068: unknown pragma
warning_1.cpp(99): warning C4101: 'unused_too' : unreferenced local variable
warning_1.cpp(96): warning C4101: 'unused_two' : unreferenced local variable
warning_1.cpp(85): warning C4189: 'b' : local variable is initialized but not referenced
warning_1.cpp(91): warning C4101: 'unused' : unreferenced local variable
j:\cpp\svg\warning_1\warning_1.cpp(66): warning C4701: potentially uninitialized local variable 'x' used
*/

showing that eliminating warnings by recoding is usually much less trouble. More information and experience on Clang is needed.

Intel

Information needed here.

Darwin / MacOS

Information needed here.

ACC

Information needed here.

Borland

Information needed here.

Sun

Some Sun workarounds to some Sun compiler errors/warnings.
https://svn.boost.org/trac/boost/wiki/Guidelines/WarningsGuidelines
// $Header: /home/cvs/jakarta-jmeter/src/protocol/http/org/apache/jmeter/protocol/http/util/DOMPool.java,v 1.5.2.1 2004/06/12 20:29:05

package org.apache.jmeter.protocol.http.util;

import java.util.HashMap;

import org.w3c.dom.Document;

/**
 * The purpose of this class is to cache the DOM Documents in memory and
 * by-pass parsing. For old systems or laptops, it's not practical to parse the
 * XML documents every time. Therefore using a memory cache can reduce the CPU
 * usage.
 * <p>
 * For now this is a simple version to test the feasibility of caching. If it
 * works, this class will be replaced with an Apache commons or something
 * equivalent. If I was familiar with Apache Commons Pool, I would probably
 * use it, but since I don't know the API, it is quicker for Proof of Concept
 * to just write a dumb one. If the number of documents in the pool exceeds
 * several hundred, it will take a long time for the lookup.
 * <p>
 * Created on: Jun 17, 2003<br>
 *
 * @author Peter Lin
 * @version $Revision: 1.5.2.1 $
 */
public final class DOMPool
{
    /**
     * The cache is created with an initial size of 50. Running a webservice
     * test on an old system will likely run into memory or CPU problems long
     * before the HashMap is an issue.
     */
    private static HashMap MEMCACHE = new HashMap(50);

    /**
     * Return a document.
     * @param key
     * @return Document
     */
    public static Document getDocument(Object key)
    {
        return (Document) MEMCACHE.get(key);
    }

    /**
     * Add an object to the cache.
     * @param key
     * @param data
     */
    public static void putDocument(Object key, Object data)
    {
        MEMCACHE.put(key, data);
    }

    /**
     * Private constructor to prevent instantiation.
     */
    private DOMPool()
    {
    }
}
http://kickjava.com/src/org/apache/jmeter/protocol/http/util/DOMPool.java.htm
Image Subtract

This function is used to subtract the values of one gray-scale image array from another gray-scale image array. The resulting gray-scale image array has a minimum element value of zero; that is, all negative values resulting from the subtraction are forced to zero.

plantcv.image_subtract(gray_img1, gray_img2)

returns new_img

- Parameters:
  - gray_img1 - Grayscale image data from which gray_img2 will be subtracted
  - gray_img2 - Grayscale image data to be subtracted from gray_img1
- Context:
  - Returns the difference in pixel values of two images
- Example use:

Gray_Img1

Gray_Img2

from plantcv import plantcv as pcv

# Set global debug behavior to None (default), "print" (to file), or "plot" (Jupyter Notebooks or X11)
pcv.params.debug = "print"

# Subtract one image from another image.
subtracted_img = pcv.image_subtract(gray_img1, gray_img2)

Result
https://plantcv.readthedocs.io/en/latest/image_subtract/
Microchip Technology's "PIC" microcontrollers are characterized by the diverse suite of on-board peripherals they contain. Many PIC devices include an on-chip transceiver capable of interfacing with a variety of serial networks. Microchip refers to this transceiver as an "Enhanced Universal Synchronous/Asynchronous Receiver/Transmitter," abbreviated EUSART. The EUSART is a general device, meaning that a bit of design work is necessary to interface it with a given real-world network. The article at hand describes a quick technique, including schematics and code, for interfacing many PIC chips directly with RS-232 devices such as a PC serial port or dumb terminal. This capability greatly enhances the utility of the PIC chip, which can now do such things as expose a user interface for configuration / control. The category "serial communications" encompasses a wide variety of basically similar technologies, which are all based on the use of a voltage differential to transmit one bit of digital data at a time. These serial technologies differ greatly in their specifics, however. In particular, serial devices of different types exhibit big differences in such areas as the expected voltage levels, the speed at which data is transmitted, and the absence or presence of a definite clock signal for the transmission. A common problem faced by the designer of serial communications circuitry is the relatively wide voltage range specified by RS-232 and related standards in comparison with most digital computing hardware. RS-232 specifies a waveform residing between -25V (defined as a digital 1) and +25V (digital 0). Digital devices typically operate in a 0V to +5V range, or less. PIC chips are no exception. While the EUSART present on most PICs is (quite reasonably) advertised as compatible with RS-232, it is still incumbent upon the circuit designer to respect the electrical maxima of the PIC device.
In practice, this means translating from an RS-232 serial waveform of ±15V to a so-called "TTL" waveform ranging from about 0V to 5V. It is quite typical to use an IC such as Maxim's MAX232 to perform this translation. The MAX232 is a full-featured, reliable device. This article presents an alternative approach to the problem, based around a single, large PNP transistor. This is a handy technique when something like a MAX232 is not readily available; it is also noteworthy that the MAX232 requires auxiliary capacitors, which are not used in the design presented below. Nevertheless, for critical production circuits, a dedicated device like the MAX232 should certainly be considered. For development work - especially work involving the development of assembly-language-based firmware - the techniques given here should generally be adequate. The assembly language code shown here is valid for all cases, regardless of whether or not the MAX232 is used. Much of the explanatory text below involves digital DC logic of the sort typically found in modern computers and electronics. The article aims to be as self-contained as possible. For example, photos and diagrams designed for easy accessibility to the layman are included in the hardware setup sections below. However, it is not possible for the article to discuss every aspect of DC-based logic. Many other online resources address the topic of basic digital logic, including this article by the same author. The target platform for the demo as provided is Microchip's "Low Pin Count" (LPC) demo board, part number DM164120-1. This demo board consists of a PIC microcontroller placed onto a small prototyping board, without much else. This is sufficient to run a demo of PIC serial communications without much extra hardware. The LPC board comes with a PIC 16F690 microcontroller, a 20-pin device with ample serial communications ability. 
The LPC board is typically purchased as a part of the "PICKit 2 Starter Kit", which has part number DV164120. The starter kit includes the PICKit 2 programmer, which is used to write programs to the PIC device. The programmer also powers the device, so that programs can execute after programming is completed. This PICKit 2 programmer connects to a standard Windows PC using a supplied USB cable. An installation CD for the MPLAB Integrated Development Environment is also included in the starter kit. This allows the developer to build object code from assembly language, and to write it to a variety of PIC devices using the PICKit 2 programmer. While the discussion below gives specifics designed around the LPC board, information about portability is also given as appropriate, and the demonstration program should be capable of running on any PIC with an EUSART with only minimal changes. When targeting other devices, for example, it will be necessary to #include the correct .inc file instead of "16F690.INC." On some setups, it may be required to reduce the baud rate to 9,600, and instructions to do so are provided. Some applications will need to locate the code provided differently, meaning that the .org directive will need to be edited. Full-featured implementations of the techniques given here might even place object code into a relocatable block using a CODE directive. Finally, many pins have multiple functions on the various PICs; on some devices, it will be necessary to disable specific functions in order to allow for serial communications. The datasheet often summarizes the necessary steps in pseudocode format, in the section dedicated to the EUSART.
It does not, for instance, allocate any SRAM or EEPROM storage, relying purely on the accumulator for storage. Also, care has been taken to make proper use of the banksel macro, in order to provide for portability to other PIC devices with their own register configurations. banksel The wiring techniques used below should also be applicable to a wide array of hardware, encompassing most families of digital logic, at least in the development realm. The basic need to translate between RS-232 and TTL voltage levels is inherent to any digital computing or control device which must perform RS-232 communications. Finally, it should be noted that Microchip has recently released the PICKit 3 programmer. This is very similar to the PICKit 2, and can substitute for it without major changes in the demonstration below. However, the PICKit3 is packaged with a different demo board from the PICKit2 (in Microchip Part DV164131), and this requires some different wiring techniques (discussed below). This section of the article describes in detail how to wire up and run a one-way (PIC to terminal) communications demo. In the narrative below, the physical connection of the RS-232 circuit is described first. Then, instructions are given to program the PIC to run the demo program, and to view the PIC's transmissions in a terminal program. The suggested hardware setup for the demo consists of the LPC demo board and PICKit 2 programmer, plus a PC. The PC will serve as host for the MPLAB IDE, and will thus be responsible for building the demo program from assembly language, and writing it to the Flash memory that the PIC uses to hold its firmware. The PC will also serve as the terminal communicating with the PIC. RS-232 ports are no longer ubiquitous on general-purpose computers, but it is not difficult to find a USB-based adapter. During development, the Dynex UBDB9 adapter was used. RS-232 has also made a resurgence among business computers recently, e.g. 
in some iterations of the Toshiba Tecra laptop. The simple, deterministic nature of RS-232 and its related protocols is still the best choice for many applications. As shown below, the ease with which RS-232 interfaces with the PIC 16F690 IC is impressive, and this is a point in favor of the RS-232 protocol and its revival. In all cases, it is advisable to disconnect power from both sides of the RS-232 connection prior to performing any wiring. Under the typical setup, where a USB / RS-232 adapter is used, it is possible to safely wire up the RS-232 link for the demo by first unplugging the USB connectors for: This serves to remove power from both sides of the RS-232 connection, and allows the developer to wire up this connection without worrying about causing electrical damage. In cases where the PC has a true RS-232 port, it is prudent to power down the PC before wiring up the RS-232 link, at least during the initial setup. For a more robust application, a custom cable should eventually be fabricated. If made correctly, this cable will be sufficiently well-insulated to allow for a "hot swap" capability. Two pins of the PC or terminal serial port are connected to the PIC in the demo below. The figures and explanations below refer to these two pins as they are typically located on a 9-pin "DB9" connector. It should be possible to apply these techniques to any RS-232 (or similar) device, though, so long as the necessary pins can be located. To transmit from the PIC to a terminal typically requires only two wires. A MAX232 or other device is not generally necessary in practice, especially for this simple one-way transmission. This is possible because, although the RS-232 standard allows for wide voltage swings in the transmission waveform, it also specifies that the terminal must be able to distinguish a wave ranging from -3V to +3V. The PIC naturally generates a serial waveform ranging from 0 to +5V, with +5V serving as a digital one.
So, beyond the issue of voltage range, the PIC waveform is inverted 180° compared to the equivalent RS-232 waveform, in which the bottom of the waveform (e.g. -15V) represents a logical one. Fortunately, the PIC EUSART offers an "inverted" mode that corrects this problem; and although inverted mode does nothing to translate the 0-to-5V waveform to a ±3V waveform, it has been the author's experience that an RS-232 terminal will typically tolerate this slight shift in the expected waveform, so long as the EUSART is set to inverted mode. Again, in critical production applications, something like Maxim's MAX232 should be considered, in order to ensure a waveform that is fully standards-compliant. Also, it should be noted that the maximum cable lengths specified by the RS-232 standard will not necessarily be achievable using the techniques described here. The PIC 16F690 emits serial transmissions from pin RB7. This is on the same side as pin 1 (which is marked with a dimple in most packages), but at the opposite corner. With pin 1 positioned at the top left of the assembly, as is the case in the PICKit 2 LPC demo board, pin RB7 is found at the bottom left corner of the microcontroller. In addition, a ground connection must be made; conveniently, the ground pin (Vss) of the PIC 16F690 is found at the exact opposite corner from pin RB7. In the case of the PICKit 2 LPC demo board, this will be the top right corner pin. The wiring diagram below shows a diagram of the LPC demo board, a hypothetical 9-pin RS-232 device, and the connections necessary between them: In interpreting this diagram, note that the pins of the serial port are numbered from one, beginning with the top left pin if one holds the connector with the wide side up and facing toward one's self. Pin 2 is the next pin to the right in the top row of pins, and so on, with the smaller, bottom row of pins beginning with pin 6 at its left side. 
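To make the signal-inversion point above concrete, here is a small illustrative model (my own sketch, not part of the original article) of one 8-N-1 serial frame at the logic level. It shows why the PIC's natural 0V/+5V output must be inverted (which is what the SCKP setting accomplishes) before a standard RS-232 receiver will read it correctly. The specific voltage values are assumptions for illustration.

```python
def frame_8n1(byte):
    """Logic-level 8-N-1 frame: start bit (0), 8 data bits LSB first, stop bit (1)."""
    return [0] + [(byte >> i) & 1 for i in range(8)] + [1]

def to_rs232_levels(bits):
    """RS-232 signals a logical 1 with a negative voltage and a 0 with a positive one."""
    return [-12 if b == 1 else +12 for b in bits]

def to_ttl_levels(bits):
    """A 0V/+5V device like the PIC signals a logical 1 with +5V."""
    return [5 if b == 1 else 0 for b in bits]

frame = frame_8n1(ord('.'))       # the '.' character the first demo transmits
rs232 = to_rs232_levels(frame)
ttl = to_ttl_levels(frame)

# The two waveforms are mirror images: wherever the TTL waveform is high,
# the RS-232 waveform is negative. This is the 180-degree inversion the
# text describes, which inverted (SCKP) mode corrects on the PIC side.
for t, r in zip(ttl, rs232):
    assert (t == 5) == (r < 0)
```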
Pin 1 of the serial port is marked with a small "1" in the diagram above. Note also that, on the LPC demo board, the three holes adjacent to each PIC pin are connected to that pin. The connections in the diagram go to a convenient hole, but in a real application, any one of the three holes can be used. Or, in some cases, a connection onto the pin itself can be made. In any case, pin 2 of the serial port ("receive" or "RX") is connected to pin RB7 of the PIC. Pin 5 of the serial port (common ground) is connected to the Vss pin of the PIC. Any convenient small-gauge wire can be used to make the actual connections necessary for communications. One approach is to use copper wire of about 20 AWG, together with female "D-Sub crimp pins" (Black Box part number FA820, or equivalent). The D-Sub connectors are designed to connect to the male pins of a terminal serial port, and a jumper wire can be crimped, or simply wrapped, onto them. The other end of the jumper wire can be threaded through the appropriate demo board hole and soldered to the board back. Another approach is to use D-Sub connectors on the PIC side of the connection as well. These connectors actually slide easily onto the RB7 and Vss pins of the 16F690. This works for the 16F690 in the PICKit 2 Starter Kit, with its very accessible transmit and receive pins. In other applications - for example, if the PICkit 3 Debug Express Starter Kit, with its "surface mount" microcontroller, is used - this will not be possible. Instead, it will probably be necessary to solder connections to the demo board. The two photographs below this paragraph show some details of the wiring setup described. The first image shows the pinout of the DX-UBDB9, with D-Sub crimp pins connected. The two necessary jumper wires are wrapped around the crimp pins. The second photo below shows a detail of the LPC demo board, around the PIC device itself. 
In the implementation pictured above, D-Sub connectors are slipped directly onto the appropriate PIC pins to complete the two necessary connections. More robust techniques for making this connection are explored later in this article; the technique shown in the pictures above is adequate for a quick, one-way connection, but is not workable for anything more advanced. After the RS-232 link has been wired, it is next necessary to set up the LPC board and PICKit 2 programmer for development. Instructions to do this are provided with the PICKit 2 starter kit. The starter kit includes a series of lessons in assembly language programming, and instructions for building each lesson's program and executing it on the LPC demo board. If you have obtained the PICKit2 and set it up correctly to program and run these lessons, then you should be set up for the demo program presented here. In summary, though, a USB cable connects the PICKit2 programmer to the PC, at the back of the programmer. At the other end of the programmer, a six-pin plug connects the programmer to the demo board. This setup also serves to power the 16F690, and the rest of the demo board, so that programs naturally execute after they are written to the microcontroller. The MPLAB development environment must be installed as well, and will be the central location for development activities, including writing the assembled program to the PIC device. Again, instructions to do this are provided with the starter kit, and if you have obtained the PICKit2 and set it up correctly to program and run Microchip's assembly language lessons, then you should be set up for the demo program presented here as well. The main configuration menu item in MPLAB in need of attention on an unconfigured system will be the "Select Device..." submenu of the "Configure" menu, where the 16F690 (or other device, as appropriate) will need to be selected from the central drop-down control. 
Also, note that each time MPLAB is opened, the correct programmer (e.g. "PICKit 2") will need to be selected from the "Programmer" menu. When this is done, MPLAB attempts to establish a connection to the programmer. The programmer, in turn, checks for the presence of the configured device (e.g. the PIC 16F690), and if there are any breakdowns in this chain of communication, this is reported by MPLAB. Once MPLAB is up and running, and connectivity has been established, it should be possible to open the first demo program (PICRS232.ASM) and then build it using the "Quickbuild" menu item of the "Project" menu. Then, assuming everything is connected correctly, "Program" should be selected from the "Programmer" menu. This loads the object program into the Flash memory present on the PIC. Then, the PIC will begin execution at code location 0, where the demo begins. This first demo program transmits a constant stream of data to the terminal. The program will persist in the Flash memory of the PIC, even if power is removed from the device. The final setup step for the first demo is to configure the terminal, or terminal program. In the development setup originally used to write the code presented here, the PICKit 2 Starter Kit communicated with Windows XP and Vista machines running the Hyperterminal terminal emulator program. This program was included with most versions of Windows prior to Vista; users of other versions of Windows can download Hyperterminal from Hillgraeve Software. Virtually every other general-purpose OS offers some sort of terminal program, and all of these should work for the demonstration given below. The communications settings in all cases will be 57,600 baud, 8 bit bytes, "No" parity, and one stop bit. "VT100" terminal emulation should be specified, although in fact any ASCII-based terminal or terminal emulator ought to work well with the setup provided in this article. 
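As a quick sanity check on the 57,600 / 8-N-1 settings above, note that each transmitted character costs 10 bits on the wire (1 start bit, 8 data bits, no parity, 1 stop bit), so character throughput is simply the baud rate divided by 10. The small sketch below (my own illustration, not from the article) makes that calculation explicit:

```python
def chars_per_second(baud, data_bits=8, parity_bits=0, stop_bits=1):
    """Character throughput of an asynchronous serial link."""
    bits_per_char = 1 + data_bits + parity_bits + stop_bits  # 1 start bit
    return baud / bits_per_char

print(chars_per_second(57600))  # 5760.0 characters per second
print(chars_per_second(9600))   # 960.0 characters per second
```

This is why the stream of dot characters in the first demo appears so rapidly in the terminal window.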
As a convenience for users of Hyperterminal, the ".HT" connection file appropriate for the demo has been included in the source code download ("PIC576.HT"). This can be opened in Hyperterminal (using "File, Open" or the file folder icon) to ensure that settings are correct. Once the terminal is set up, a high-speed stream of incoming dot characters ('.') should be evident. An example of Hyperterminal receiving the output of the demo is shown below: The beginning of "PICRS232.ASM" is shown below this paragraph. This passage of code includes some preliminary directives, plus the entry point and first few instructions of the demo program:

        #include <p16F690.inc>
        __config (_INTRC_OSC_NOCLKOUT & _WDT_OFF & _PWRTE_OFF & _MCLRE_OFF & _CP_OFF & _BOR_OFF & _IESO_OFF & _FCMEN_OFF)

        org     0
prog:
        banksel OSCCON
        movlw   B'01110000'     ; full 8mhz internal osc
        iorwf   OSCCON,f

The first #include directive brings in definitions related to the 16F690 and its particular register configuration. This can be adjusted for applications with other processors. The __config directive begins by turning on the internal oscillator, with no clock signal out. This is done using the setting _INTRC_OSC_NOCLKOUT. This is the simplest setting, since it removes any need to wire up an external oscillator, and since a clock signal out is not needed. The remaining settings disable several features that are not used in this demo. In order, these are the watchdog timer, the power-up timer, the master clear feature, code protection, brownout reset, internal / external mode switchover, and the fail-safe clock monitor. In general, these features can be used with the communications techniques described here; but they are not used in this demo. After these directives, the program proper begins at location 0. To start, three bits of the OSCCON register (the oscillator controller) are turned on, to indicate full 8mhz speed.
This is done by the instruction iorwf OSCCON,f, which performs an inclusive OR operation between 01110000 binary and OSCCON. The result is stored back in OSCCON, as indicated by the final ,f clause. Note that banksel is used to ensure that the proper SRAM bank is selected for these operations. This is done generally throughout the demo code, whenever a special-purpose register is accessed. This practice reflects Microchip guidelines; banksel abstracts over the details of an evolving product line, where special-purpose registers are moving from page to page in different designs. The nature of the page selection instructions used to implement that macro is also evolving; see here or here for a Microchip presentation containing a description of the evolution of banksel. Pseudocode for much of the remainder of the transmission demo program can be found in the PIC 16F690 data sheet. In the data sheet used during development (publication DS41262A), this could be found on page 141:

To set up an Asynchronous Transmission:
1. Initialize the SPBRGH:SPBRG registers for the appropriate baud rate. Set or clear the BRGH and BRG16 bits, as required, to achieve the desired baud rate.
2. Enable the asynchronous serial port by clearing bit SYNC and setting bit SPEN.
3. If interrupts are desired, set enable bit TXIE.
4. If 9-bit transmission is desired, set transmit bit TX9. Can be used as address/data bit.
5. Enable the transmission by setting bit TXEN, which will also set bit TXIF.
6. If 9-bit transmission is selected, the ninth bit should be loaded in bit TX9D.
7. Load data to the TXREG register (starts transmission).
8. If using interrupts, ensure that the GIE and PEIE bits in the INTCON register (INTCON<7:6>) are set.
--Microchip Publication DS41262A, page 141, section 12.3.1, "EUSART ASYNCHRONOUS TRANSMITTER"

The initialization of SPBRGH:SPBRG is handled by the snippet below:

        banksel SPBRG
        movlw   .34             ;34 -> ~57.6kbps@8mhz (207 for 9600bps)
        movwf   SPBRG
        banksel SPBRGH
        movlw   .0
        movwf   SPBRGH

The code above sets the SPBRG register to 34 (decimal) and sets the SPBRGH register to 0. These values were obtained from the table titled "12-3: BAUD RATES FOR ASYNCHRONOUS MODES" in the PIC 16F690 data sheet. According to this table, when SYNC = 0, BRGH = 1, and BRG16 = 1, an SPBRG value of 34 decimal, at 8 mhz, will yield a transmission rate of 57,142 baud. Note that the "SPBRG" number quoted in the tables refers to the 16-bit value formed by SPBRGH:SPBRG. A value of 34 thus implies that SPBRGH is 0, and this is ensured in the code above. The 57,142 baud rate is close enough to 57,600 for a typical terminal to correctly interpret the data. As noted in the comment in the last code block, replacing the value of 34 with a value of 207 will result in a transmission rate of ~9,600 baud. Making this change (in conjunction with an appropriate terminal configuration change) has several potential advantages. Certain hardware may not support the higher baud rate. Also, the 9,615 nominal rate given in the data sheet table is significantly closer to the ideal 9,600 baud rate than 57,142 is to 57,600. In some cases, it may thus be necessary or preferable to use a 9,600 baud channel. Finally, many standards specify 9,600 baud, e.g. the MDB protocol used in vending machines. The demo code continues with the section shown below:

        banksel TXSTA
        bcf     TXSTA,SYNC      ;async, i.e.
timed by bits in the xmit stream
        banksel RCSTA
        bsf     RCSTA,SPEN
        banksel TXSTA
        bsf     TXSTA,TXEN      ;enable TX
        bcf     TXSTA,TX9       ;we want 8 bit
        bsf     TXSTA,BRGH      ;enable *64 baud generator w/o using SPBRGH
        banksel BAUDCTL
        bsf     BAUDCTL, BRG16
        bsf     BAUDCTL, SCKP   ;reverse polarity

This code first sets up asynchronous (i.e. unclocked) mode by turning off SYNC, then enables the serial port by asserting SPEN. Next, TXEN is set to true in order to enable transmission capabilities. In this demo, an 8-bit word is used, so TX9 is turned off. Some protocols use a 9-bit word, and for these TX9 should be set to true; the MDB protocol mentioned earlier is in fact an example of a 9-bit protocol. Finally, the snippet above asserts SCKP to reverse the polarity of the transmission. The reasons for this were discussed above. Just a bit more setup code is necessary before entering the transmission loop. This code configures the interrupt enable bits: the transmit interrupt is enabled by setting bit TXIE of the PIE1 register, and interrupts for peripheral devices are enabled by setting bit PEIE of the INTCON register. Note, however, that because this demo polls the TXIF flag rather than installing an interrupt service routine, the global enable bit GIE of INTCON is actually cleared by the code below:

        banksel PIE1
        bsf     PIE1,TXIE       ;Xmit interrupt on
        banksel INTCON          ;General and peripheral interrupt enable
        bsf     INTCON,PEIE
        bcf     INTCON,GIE

The demo code concludes with the following snippet:

userprog:
        movlw   '.'
        call    printch
        goto    userprog

printch:
        banksel TXREG
        movwf   TXREG
        nop
        btfss   PIR1,TXIF
        goto    $-1
        return

        end

At userprog, the code enters an infinite loop. First, the ASCII character '.' (code 46 decimal) is moved to the accumulator. Then, function printch, which prints the ASCII character in the accumulator, is called. Finally, a goto instruction returns to the top of the loop.
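The SPBRG values quoted from the data sheet table can be sanity-checked from first principles. In high-speed 16-bit mode (BRGH = 1, BRG16 = 1), the EUSART baud rate is Fosc / (4 * (SPBRGH:SPBRG + 1)). The short Python sketch below (my own illustration, not part of the original article) reproduces the numbers discussed above:

```python
FOSC = 8_000_000  # 8 MHz internal oscillator, as configured in the demo

def spbrg_for(baud, fosc=FOSC):
    """Closest SPBRGH:SPBRG value for a desired baud rate (BRGH=1, BRG16=1)."""
    return round(fosc / (4 * baud)) - 1

def actual_baud(spbrg, fosc=FOSC):
    """Actual baud rate produced by a given SPBRGH:SPBRG value."""
    return fosc / (4 * (spbrg + 1))

print(spbrg_for(57600), actual_baud(34))   # 34, ~57142.9 (the "57,142" in the text)
print(spbrg_for(9600), actual_baud(207))   # 207, ~9615.4 (the "9,615" in the text)
```

The small error at 57,600 baud (about 0.8%) is what the text means by "close enough for a typical terminal to correctly interpret the data."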
Function printch operates by moving the value in the accumulator into the TXREG register, which is a latch for the output of the EUSART. This will take at least one instruction cycle on the 16F690 at 8mhz, per the datasheet, so a NOP is executed. Then, the code enters a polling process, where it essentially blocks until the TXIF bit is set. Specifically, instruction btfss ("bit test file and skip if set") will execute the goto that follows it (i.e. will jump back one instruction and thus continue the polling loop) unless its operand bit is set. In this case, the operand bit is TXIF, which gets set when the EUSART has completed the serial output process. Once this has happened, the code is free to resume transmission, and printch therefore returns to its caller, i.e. the main loop of the program. The second demonstration shows an echo program, i.e., a PIC program which receives input from the terminal device and repeats it back to that device. While this demo is running, pressing a key at the terminal results in the associated character being displayed at the cursor position on the terminal. This result is only evident once the character has been transmitted to the PIC, processed by it, and echoed back to the terminal on a second channel. At 57,600 baud, though, this does not take long at all. The code for the PIC firmware for this demonstration is present in the same archive as the first demonstration, and has file name "ECHO.ASM." Before the demo can be successfully executed, a two-way wiring scheme must be implemented. The main obstacle to adding two-way communication to the setup described above is the electrical issue. The ±15V waveform typical of RS-232 cannot be fed to the PIC without destroying it. The PIC is a generally durable device, but it has little ability to handle negative voltage.
In fact, section 17.0 of the datasheet ("Electrical Specifications") lists a maximum negative voltage on any pin compared to Vss of -0.3V. The RS-232 waveform can include sustained negative voltages of many times greater magnitude than -0.3V. Beyond this voltage difference, the receive circuit must also deal with the fact that the RS-232 and TTL waveforms are inverted (i.e. differ by 180°) compared to each other. The SCKP bit used to correct this issue for transmission does not work for incoming data. In the circuit described here, a PNP transistor is employed to deal with these issues. This transistor is Fairchild Semiconductor part TIP42, or equivalent; most electronics manufacturers sell a "TIP42" transistor. The PNP transistor is a device which closes a switch-like circuit between two pins in response to a negative voltage applied to a third pin. This switch-like circuit is designed to flow negative voltage from a "collector" pin to an "emitter" pin. However, it can just as easily flow a positive voltage from the emitter to the collector; if the collector is connected to ground, the resultant backward flow will be electrically equivalent to a negative flow from collector to ground. The third pin, which controls the quasi-switch at the heart of the transistor, is termed the "base" pin. In the circuit presented here, the collector is connected to the receive pin of the PIC (pin "RB5" or "12"), and the emitter to a +5V power source. This allows for a 0V / +5V incoming waveform, which is exactly what the EUSART expects. Furthermore, since the PNP transistor closes its circuit (i.e. supplies positive voltage) in response to negative voltage, it properly inverts the RS-232 signal, or at least the negative portion of it (which is all that is necessary to establish a square wave). Because the transistor connects to a +5V power source at the emitter, the PNP transistor will supply a maximum of +5V, i.e. it solves the voltage level problem as well.
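The behavior of this level-shifter stage can be summarized with a small idealized model. The sketch below is my own illustration, not from the article, and the switching threshold is an assumed round figure for a PNP base-emitter junction; a real transistor's behavior is continuous rather than a hard switch.

```python
V_EMITTER = 5.0   # emitter tied to the +5V supply
V_BE_ON = -0.6    # assumed base-emitter voltage that switches the PNP on

def rx_pin_voltage(v_rs232):
    """Idealized voltage seen at the PIC's RB5 pin for a given RS-232 input."""
    if v_rs232 - V_EMITTER < V_BE_ON:  # base sufficiently negative w.r.t. emitter
        return 5.0                     # transistor on: RX pin pulled to +5V
    return 0.0                         # transistor off: pull-down resistor wins

# RS-232 mark (logical 1, negative voltage) becomes +5V TTL, and space
# (logical 0, positive voltage) becomes 0V: level shift plus inversion.
assert rx_pin_voltage(-12.0) == 5.0
assert rx_pin_voltage(+12.0) == 0.0
# The edge case discussed in the text: +4V is seen as -1V relative to the
# emitter, enough in this simple model to switch the transistor on and
# corrupt the bit, whereas a full +5V space is read correctly.
assert rx_pin_voltage(+4.0) == 5.0
assert rx_pin_voltage(+5.0) == 0.0
```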
Another electrical component is necessary to provide a reliable signal to the PIC: a pull-down resistor. When negative voltage is not present at the base of the PNP transistor, the quasi-switch connecting it to the +5V voltage source is open. The receive pin is thus connected to nothing at all. The logical value perceived by the PIC is therefore undefined. The pull-down resistor creates a relatively weak connection between the receive pin and ground. When no other connection is present, this resistor serves to place a logical zero value on the receive pin. The resistor can be connected between any of the holes connected to pin RB5 and the row of ground holes beneath (and to the right of) the PIC. A 1,000 Ohm resistor is suggested for the demo circuit. A wide variety of resistors will in fact work for this purpose, though. So long as the connection provided by the resistor is weaker than that provided by the transistor, the desired effect will be obtained. One final electrical consideration bears mention: it is the voltage level at the emitter of the PNP transistor that serves as its reference voltage. Because +5V is present at the emitter in the circuit described here, this means that, for example, +4V will be perceived as -1V by the transistor. After all, relative to this reference, +4V is one negative volt. In theory, this introduces the possibility of misinterpreting a logical zero as a one. This will not be a problem, though, so long as the positive RS-232 signal used for a zero is at least +5V. Most devices should meet this requirement; signal levels of ±10 V, ±12 V, and ±15 V are typical. Even the ad hoc transmittal circuit described in the first demo uses a +5V signal for zero. The diagram below this paragraph has the same format as the wiring diagram presented for the first demo, but adds the necessary circuitry for the receive function. The resistor is indicated by a line labelled "1000Ω."
In all cases, when looking at the front of the transistor, the leftmost pin will be the base, the rightmost pin will be the emitter, and the middle pin will be the collector. The next figure shows an electronic schematic of the total PIC / RS-232 interface as described thus far. This is done in the format typically used for such diagrams. More information about this format is available from many sources, including here and here. A photograph of the entire setup is shown below this paragraph. Note that the use of unshrouded D-Sub connectors has been eliminated from this implementation. A neat DB-9 connector now surrounds the crimp pins used at the serial port. Several manufacturers make such hardware; examples include NorComp Part #170-009-273L000 and Tyco Part #205203-8. After being properly crimped down, the pins are simply pushed into the back of the appropriate hole of this enclosure. "Solder cup" D-Sub pins can be used instead of the crimp pins for an even more permanent connection. On the PIC side of the connection, each jumper wire is routed through one of the holes connecting to the appropriate pin, and then soldered to the back of the demo board. In the implementation shown above, the TIP-42 transistor is held in the air by the three wires soldered to its pins, and is visible near the top left corner of the photo. It would be cleaner to solder this transistor's pins through holes in the LPC demo board's prototyping area. One obstacle to this approach is the fact that the circular soldering points in this prototyping area do not connect to each other at all. So, the TIP-42 could be soldered down to the prototyping area, but jumper wires would then have to be added to connect each pin of the transistor appropriately. The code for demonstration 2 is contained in file "ECHO.ASM." Its additions to "PICRS232.ASM" (the first demo) are discussed in this section, from the top of the file down. 
First, when RCSTA is being adjusted, some new code is necessary to enable receive mode:

        banksel RCSTA
        bcf     RCSTA,CREN      ;NEW
        bsf     RCSTA,CREN      ;NEW
        bsf     RCSTA,SPEN

Cycling CREN in this way serves first to clear any errors associated with the receive function of the EUSART, and then to enable continuous (versus clock-driven) receiving. In addition, code to disable analog input AN11 must be executed. This is necessary because AN11 uses the same pin as the receive function (RB5). In the demo code, this is done immediately before interrupts are enabled:

        banksel ANSELH          ;NEW
        bcf     ANSELH,ANS11    ;NEW

Then, when interrupts are enabled, bit RCIE must be set in addition to TXIE. This enables the "receive" interrupt:

        banksel PIE1
        bsf     PIE1,RCIE       ;NEW
        bsf     PIE1,TXIE

In the main loop of the program, the literal load movlw '.' is replaced with a call to function getch. This waits for a character to arrive on the serial port, and places it in the accumulator. Because the call to getch is followed by a call to printch, the net effect will be a character-for-character echo back of whatever is typed at the terminal. The code for getch is inserted just before the end directive, and takes the following form:

getch:
        banksel PIR1
        btfss   PIR1,RCIF
        goto    getch
        movf    RCREG,w
        return

First, this function polls RCIF to await data. This blocks the thread-of-execution in a fashion similar to that seen in printch. There, the program had to wait for output to complete. Here, the program must wait for input to arrive, and although the associated bit is different, the technique (skippable goto) is the same. Once the data has arrived, it is moved from register RCREG to the accumulator, and a return back to the main program loop is executed. There, a call to printch completes the echo functionality. If this article is well-received, I hope to follow it with other articles that make use of the capabilities described here, and extend them.
This article, along with any associated source code and files, is licensed under The GNU General Public License (GPLv3)
It is quite common to add authentication to your web or mobile app. The most common method, and the easiest to set up for mobile apps and JavaScript clients that connect to Rails APIs, is using JWT (JSON Web Tokens). They can be stored by your JavaScript client (such as Angular, React, or Vue) and sent with each request to authenticate with your API. In this tutorial, I'll show you how to authenticate using this method. You can also check out our repo here or check out some of our courses that involve JWT authentication and SPA development.

Project Setup

Let's get started by scaffolding our application. We'll scaffold a TodoList that we can later use to test protecting our requests from unauthorized users with a JWT generated using Knock. Create the new Rails app with

rails new knock-todo --api --skip --database=postgresql

Then generate the todolist with

rails generate scaffold Todo title:string finished:boolean

You'll get a Rails migration that looks like this

class CreateTodos < ActiveRecord::Migration[5.2]
  def change
    create_table :todos do |t|
      t.string :title
      t.boolean :finished

      t.timestamps
    end
  end
end

Your Rails JSON templates for your Todo API and routes will also be generated.

Generate Knock Users

Next, we will generate our user to be used with Knock.

rails generate scaffold User email:string password_digest:string

password_digest is a field that will be used by Knock to store hashed passwords

class CreateUsers < ActiveRecord::Migration[5.2]
  def change
    create_table :users do |t|
      t.string :email
      t.string :password_digest

      t.timestamps
    end
    add_index :users, :email
  end
end

Now you can add knock and bcrypt to your Gemfile and install them on the command line

gem 'knock'
gem 'bcrypt'

Now, inside of your Rails User model, make sure you have has_secure_password as your authentication method. We'll also validate the presence of an email and create a token payload function to return the user id and the email inside of the token.
class User < ApplicationRecord
  has_secure_password
  validates :email, presence: true

  def to_token_payload
    { sub: id, email: email }
  end
end

bundle install
rails generate knock:install

This will generate a file "config/initializers/knock.rb" that is used to modify the default configuration of your token. You can change things such as its signature algorithm and key. Now, in order to create an endpoint to generate your tokens, you must run

rails generate knock:token_controller user

This creates the Ruby on Rails controller user_token_controller.rb and adds a post route to your controller for creating the JWT token.

class UserTokenController < Knock::AuthTokenController
end

post 'user_token' => 'user_token#create'

One last thing will be to change the user params inside of your user controller. Change

def user_params
  params.require(:user).permit(:email, :password_digest)
end

to

def user_params
  params.require(:user).permit(:email, :password, :password_confirmation)
end

If you don't do this, you'll end up with a 422 error.

Creating Rails Users and Returning a Knock JWT Token

Let's test out creating our first user. Using an API testing client like Postman or Insomnia, post a JSON object to the create users route with new user credentials. Let's create an object that looks like this

{
  "user": {
    "email": "[email protected]",
    "password": "Pokemon43!"
  }
}

You should get a 201 status returned to you. Now we can try creating a Knock JWT using those same credentials.

{
  "auth": {
    "email": "[email protected]",
    "password": "Pokemon43!"
  }
}

Oops! If you get a 422 error here, that most likely means you're using Rails 5.2 or above. This is caused by protect_from_forgery being included in ActionController::Base by default.
We can fix this error by adding a skip_before_action

class UserTokenController < Knock::AuthTokenController
  skip_before_action :verify_authenticity_token, raise: false
end

Also, inside of our Knock config, we want to set how we sign our token.

config.token_secret_signature_key = -> { Rails.application.credentials.fetch(:secret_key_base) }

We should be able to get our token once this change is added.

Guarding Our Routes

Now that we have token auth set up, we can finally protect our routes from unauthenticated users. Let's go to our todos controller and add

before_action :authenticate_user

This will protect the controller from unauthenticated users. Now let's try hitting our route. We'll get a 401 error. Now let's try again with a Bearer Token in the header. We'll now be able to get our JSON.

Conclusion

Using JWT is one of the best ways to authenticate your API against an SPA (Single Page Application) like React or Vue. JWT authentication is also a technique that can be used with mobile applications. If you're interested in learning more, check out our repo here or check out some of our courses that involve JWT authentication and SPA development.
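To see what Knock actually hands back from the user_token endpoint, it helps to look at the token's structure. The sketch below builds an HS256 JWT by hand using only the Python standard library; it is my own illustration of the format (Knock does the equivalent in Ruby), with the payload mirroring the to_token_payload method above: a "sub" claim holding the user id plus the custom "email" claim. The secret shown is a placeholder, not a real key.

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    # JWTs use unpadded, URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_jwt(payload, secret):
    """Build a header.payload.signature token signed with HMAC-SHA256."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                     + b"."
                     + b64url(json.dumps(payload, separators=(",", ":")).encode()))
    signature = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return (signing_input + b"." + b64url(signature)).decode()

token = make_jwt({"sub": 1, "email": "[email protected]"}, b"placeholder-secret")
print(token.count("."))  # 2: a JWT is three dot-separated base64url segments
```

This is the string your JavaScript client stores and sends back in the Authorization: Bearer header on each request.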
Hi! I want to fill an area based on y values: red for negatives, green for positives. How can I do that?

I don't know how to do it with one single trace, but here is a solution using numpy masks:

import plotly.graph_objects as go
import numpy as np

x = np.linspace(0, 2*np.pi, 100)
y = np.sin(x)
mask = y >= 0

fig = go.Figure(go.Scatter(x=x[mask], y=np.sin(x)[mask], mode='lines',
                           fill='tozeroy', fillcolor='green'))
fig.add_trace(go.Scatter(x=x[~mask], y=y[~mask], mode='lines',
                         fill='tozeroy', fillcolor='red'))
fig.show()

Thank you for your answer! But I want to do it with a single trace.

I don't think you can. That would be a new mode for the fill argument of Scatter traces.

Ok, thanks for your help!!!

@yuricda96 If your usecase of having only one trace is so that the legend behaves as a single trace, you can set showlegend=False on one trace and give both traces the same legendgroup (e.g. legendgroup="my-trace"). This way you will only see one trace for the two, and clicking on the legend shows/hides both traces at once. You can also set the line color and fillcolor differently to have continuity of the line with different fill colors if necessary. Hope this can be of help.

I'll just add this here: Fill area between 2 lines [SOLVED]. This saved me from a great deal of frustration.

Great tips, I like it. However, it has some issues when the values are equal to the threshold (zero in your example), so it leaves a gapped space with no color, and if the steps are few the gap will be bigger and make an awful plot.
So I tried to solve this, and got the following; maybe someone needs it (like me a few hours ago). The idea is to insert an interpolated point wherever the curve crosses the threshold:

    import plotly.graph_objects as go
    import numpy as np

    x = np.linspace(0, 2*np.pi, 100)
    y = np.sin(x)
    threshold = 0  # I just add a threshold

    x2, y2 = [], []
    for i in range(len(y) - 1):
        x2 += [x[i]]
        y2 += [y[i]]
        if y[i] > threshold > y[i+1] or y[i] < threshold < y[i+1]:
            # linear interpolation of the crossing point
            xi = x[i] + (threshold - y[i]) * (x[i+1] - x[i]) / (y[i+1] - y[i])
            x2 += [xi]
            y2 += [threshold]
    x2 += [x[-1]]
    y2 += [y[-1]]

    x2 = np.array(x2)
    y2 = np.array(y2)
    mask = y2 >= threshold
    mask2 = y2 <= threshold
    fig = go.Figure(go.Scatter(x=x2[mask], y=y2[mask], mode='lines', fill='tozeroy', fillcolor='green'))
    fig.add_trace(go.Scatter(x=x2[mask2], y=y2[mask2], mode='lines', fill='tozeroy', fillcolor='red'))
    fig.show()

and it works. You can set the threshold as you wish (e.g. 0.2). All my best.
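The interpolation step in the post above can be checked independently of plotly. Here is a small, self-contained helper (the function name is my own) that inserts a point at each threshold crossing, which is exactly what closes the gap in the filled plot:

```python
def insert_crossings(xs, ys, threshold=0.0):
    """Return new point lists with a point added at each threshold crossing."""
    out_x, out_y = [], []
    for i in range(len(ys) - 1):
        out_x.append(xs[i])
        out_y.append(ys[i])
        if ys[i] > threshold > ys[i+1] or ys[i] < threshold < ys[i+1]:
            # linear interpolation between the two samples
            t = (threshold - ys[i]) / (ys[i+1] - ys[i])
            out_x.append(xs[i] + t * (xs[i+1] - xs[i]))
            out_y.append(threshold)
    out_x.append(xs[-1])
    out_y.append(ys[-1])
    return out_x, out_y

xs, ys = insert_crossings([0, 1], [-1, 1])
assert ys == [-1, 0, 1] and xs[1] == 0.5  # crossing at the midpoint
```

The two masks `y2 >= threshold` and `y2 <= threshold` then both include the inserted points, so the green and red fills meet exactly at the threshold line.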
Function List Index

The CryptoSys API gives you the ability to call fast, efficient cryptographic functions in Visual Basic, VBA, VB.NET, C/C++, C#, and ASP. It can be called from VBA applications like Access, Excel and Word. It provides four of the major block cipher algorithms, a stream cipher algorithm, all the major secure message digest hash algorithms, the HMAC and CMAC message authentication algorithms, a data compression facility, a password-based key derivation function (PBKDF2), and a secure random number generator.

The block cipher algorithms provided are the Advanced Encryption Standard (AES) as specified in FIPS PUB 197; the original Data Encryption Standard (DES); Triple DES (TDEA, 3DES, DES-EDE3); and Bruce Schneier's Blowfish. Key Wrap algorithms for AES and Triple DES are provided [new in version 4.1]. The message digest hash functions are MD5, MD2, RIPEMD-160, SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512. The HMAC algorithm is provided for all these hash algorithms, and the CMAC algorithm for the block ciphers Triple DES and AES. The PC1 stream cipher is fully compatible with RC4.

The random number generator (RNG) generates cryptographically-secure random numbers to the strict NIST SP800-90 standard, now an Approved Random Number Generator for FIPS PUB 140-2. You can generate random keys and nonces in a secure manner. All functions are thread-safe. [Contents] [Index]

The CryptoSys API provides functions to carry out primitive cryptographic operations intended to be used as part of a security-related application. It is up to you the programmer to ensure that keys, passwords and other private data in your application are kept secret, and to ensure that appropriate security policies and procedures are followed by end users. This manual assumes you are familiar with the basics of cryptography and can program to a reasonably advanced level.
To get started, read the sections on Installation and General Programming Issues and the section on your programming language; if you are a Visual Basic user, please read Visual Basic or Visual Basic: VB6 vs VB.NET. [Contents] [Index]

[Contents] [Index]

Changes in Version 4.2:
- GCM_Encrypt.
- WIPE_File function - up to three times faster for large files.

Changes in Version 4.1:

Changes in Version 4.0:

Changes in Version 3.2:

[Contents] [Index]

Code in classic Visual Basic (VB6/VBA) is shown shaded as follows (if your browser supports shading and colours):

    Dim strData As String
    Dim nRet As Long
    strData = "Hello world"
    Debug.Print strData

Code in C/C++ is shown as:

    char *str = "Hello world";
    printf("%s\n", str);

Code in VB.NET (VB200x) is shown as:

    Dim nDataLen As Integer
    Dim abData() As Byte
    If strData.Length = 0 Then Exit Function
    abData = System.Text.Encoding.Default.GetBytes(strData)
    nDataLen = abData.Length

Code in C# is shown as:

    public static string ToHex(byte[] binaryData)
    {
        int nBytes = binaryData.Length;
        Int32 nChars = 2 * nBytes;
        if (nBytes == 0) return String.Empty;
        StringBuilder sb = new StringBuilder(nChars);
        nChars = CNV_HexStrFromBytes(sb, nChars, binaryData, nBytes);
        return sb.ToString(0, nChars);
    }

Code in VBScript/ASP is shown as:

    Dim oGen
    Set oGen = Server.CreateObject("diCryptOCX.gen")
    Response.Write "Version=" & oGen.Version & Chr(13) & Chr(10)

Output from code samples is shown as:

    Result=OK

All functions called directly in the CryptoSys API begin with 3 or 4 capital letters followed by an underscore "_", e.g.

    nRet = API_ErrorLookup(strMsg, Len(strMsg), nCode)

For VB users, there are some wrapper functions provided in the module basCryptoSys.bas which avoid the complications of having to pre-dimension strings, etc. These begin with lowercase letters and have no underscore.
They are shown in our examples as follows:

    strErrMsg = apiErrorLookup(nCode)

[Contents] [Index]

Except where otherwise noted, the CryptoSys API executable, sample source code and this manual were written by David Ireland and are copyright (c) 2001-9 by DI Management Services Pty Limited, all rights reserved. They may not be distributed or reproduced separately by any means whatsoever without express permission. Users holding a valid developer's licence are permitted to distribute the executable as part of a value-added application according to the terms of their licence. You may obtain the latest version of CryptoSys API from <>. [Contents] [Index]

For a good introduction to the principles of cryptography refer to Bruce Schneier's Applied Cryptography [SCHN] or William Stallings' Cryptography and Network Security [STAL]. For a more advanced treatment, see Handbook of Applied Cryptography by Menezes, van Oorschot and Vanstone [MENE].

All block cipher algorithms operate on a fixed-length block of data to produce a seemingly-random output of the same size. The security of the encryption process depends on a secret key, the length of which depends on the particular algorithm. It should be impossible (strictly, computationally infeasible) to derive the plaintext from the resultant ciphertext without knowing the key. The block and key lengths supported in the CryptoSys API package are as follows:

* The deprecated older-style AES functions still support 192- and 256-bit block lengths but these were not adopted in the FIPS 197 standard.

The block cipher confidentiality modes in this module comply with Recommendation for Block Cipher Modes of Operation [SP80038A]. To quote from Section 5.3 of that document: The input to the encryption processes of the CBC, CFB, and OFB modes includes, in addition to the plaintext, a data block called the initialization vector (IV), denoted IV.
The IV is used in an initial step in the encryption of a message and in the corresponding decryption of the message. The IV need not be secret; however, for the CBC and CFB modes, the IV for any particular execution of the encryption process must be unpredictable, and, for the OFB mode, unique IVs must be used for each execution of the encryption process.

In Electronic Codebook (ECB) mode, each block is encrypted independently. In Cipher Block Chaining (CBC) mode, an initialization vector (IV) is added to the first block of plaintext before encryption and the resultant ciphertext is added to the next block of plaintext before encryption, and so on. Decryption is the reverse process. The IV does not need to be kept secret and must be communicated to the receiving party along with the ciphertext.

Block ciphers in ECB or CBC mode require their input to be an exact multiple of the block length. Any odd bytes need to be padded to the next multiple. In general, this is the user's responsibility. However, for encrypting files, CryptoSys API adds a padding string using the convention described in Padding below. For all other encryption functions in this API, it is the user's responsibility to provide and handle appropriate padding where necessary for ECB and CBC modes. See Padding.

Cipher Feedback mode (CFB), Output Feedback mode (OFB) and Counter mode (CTR) do not require padding. We include CFB and OFB modes here for completeness where users may need to communicate with a system that requires it. We recommend using either CBC or CTR modes if you have the choice. CBC mode is supported in most encryption systems.

The CTR mode in this package treats the entire "IV" as a 64- or 128-bit "counter". So if the IV provided is, say, 0xFFFFFFFFFFFFFFFD for a 64-bit block cipher, then the counter values used will be:

    fffffffffffffffd
    fffffffffffffffe
    ffffffffffffffff
    0000000000000000
    0000000000000001
    ...
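The counter wrap-around shown above (the 64-bit counter rolls over to zero after 0xFFFFFFFFFFFFFFFF) can be sketched in a few lines. This is an illustrative helper of my own, not the API's code:

```python
def next_counter(ctr: int, block_bits: int = 64) -> int:
    # increment modulo 2^block_bits, so the counter wraps to zero
    return (ctr + 1) & ((1 << block_bits) - 1)

ctr = 0xFFFFFFFFFFFFFFFD
seq = []
for _ in range(4):
    seq.append(f"{ctr:016x}")
    ctr = next_counter(ctr)
# seq == ['fffffffffffffffd', 'fffffffffffffffe', 'ffffffffffffffff', '0000000000000000']
```

The masking step is what makes the wrap happen; without it a big-integer counter would simply keep growing past the block size.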
It's up to the programmer to ensure that unique IV or counter values are provided for each message encrypted with the same key. Using an IV generated each time with the RNG_NonceData or RNG_NonceDataHex function should be perfectly adequate as the odds against producing a duplicate value are billions to one. [Contents] [Index]

Before encrypting random-length plaintext with a block cipher algorithm in ECB or CBC mode it needs to be padded to an exact multiple of the block length. There are many conventions used in practice; see our web page Using Padding in Encryption. It's up to you and your recipient which method you use, but you must agree on one method and use it consistently. If your data is always an exact multiple of the block length and the sender and the recipient agree then you can omit the padding string.

We recommend the convention from section 6.3 of RFC 3852 [CMS] (formerly RFC 3369 and RFC 2630), PKCS #5 [PKCS5] and PKCS #7 [PKCS7]; namely:

For a 64-bit block size: Append a padding string of between 1 and 8 bytes to make the total length an exact multiple of 8 bytes. The value of each byte of the padding string is set to the number of bytes added; namely, 8 bytes of value 0x08, 7 bytes of value 0x07, ..., 2 bytes of 0x02, or one byte of value 0x01. The length of the plaintext to be encrypted thus will be a multiple of 8 bytes and it will be possible to recover the message unambiguously from the decrypted ciphertext.

For a 128-bit block size (e.g. for AES), replace "8 bytes" in the above paragraph with "16 bytes" and replace "0x08" with "0x10", and reword accordingly.

See the functions PAD_BytesBlock, PAD_UnpadBytes, PAD_HexBlock and PAD_UnpadHex. [Contents] [Index]

A stream cipher operates on streams of plaintext one bit or byte at a time. The PC1 stream cipher in this API has a variable key size and operates on one byte at a time.
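The PKCS #5/#7 padding convention described earlier in this section can be sketched as stand-alone code. This is an illustrative version of the rule, not the API's own PAD_* functions:

```python
def pad(data: bytes, block_size: int) -> bytes:
    # append 1..block_size bytes, each equal to the number of bytes added
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

def unpad(padded: bytes) -> bytes:
    n = padded[-1]
    if n < 1 or n > len(padded) or padded[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return padded[:-n]

assert pad(b"", 8) == b"\x08" * 8           # an exact multiple gets a full extra block
assert pad(b"12345", 8).endswith(b"\x03\x03\x03")
assert unpad(pad(b"hello world", 16)) == b"hello world"
```

Note that even input that is already an exact multiple of the block length gets a full block of padding appended; that is what makes the unpadding unambiguous.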
By an amazing coincidence the PC1 algorithm produces identical results to the proprietary RC4 stream cipher which is in common use, for example, in encrypting PDF files. So you can substitute "RC4" wherever you find "PC1" in this document. The key can be any number of bytes long. The algorithm is very fast. The output is always the same length as the input. There is no 'decrypt' mode: to decipher just encrypt again with the same key. [Contents] [Index]

A message digest hash function is a cryptographic primitive used for digital signatures and password protection. It maps a message of arbitrary length to a fixed-length hash value or "message digest". A cryptographic hash should be one-way and collision-resistant. "One-way" means that, given an n-bit hash value, it should require work equivalent to about 2^n hash computations to find any message that hashes to that value. "Collision-resistant" means that finding any two messages which hash to the same value should require work equivalent to about 2^(n/2) hash computations. In other words, it should be computationally infeasible to find the original message from the digest or to create another message that produces the same result.

SHA-1 is a 160-bit (20-byte) hash function specified in FIPS PUB 180-2 Secure Hash Standard [FIPS180]. SHA-256 is the newer standard intended as a companion for the new Advanced Encryption Standard (AES) to provide a similar level of enhanced security. SHA-256 is a 256-bit (32-byte) hash and is meant to provide 128 bits of security against collision attacks. SHA-256 is also specified in FIPS PUB 180-2 [FIPS180]. SHA-384 and SHA-512 provide greater levels of security, but at a greater computing cost. Both use the same algorithm but SHA-384 has a different starting value and a shorter digest value. MD5 is an older, less-secure but faster hash algorithm still in common use.

A message authentication code (MAC) is used to establish the authenticity and, hence, the integrity of a message.
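As a quick illustration of the digest sizes quoted above, Python's hashlib implements the same FIPS 180 algorithms; the "abc" test vector for SHA-1 is the standard one also used later in this manual's C example:

```python
import hashlib

msg = b"abc"
assert hashlib.sha1(msg).hexdigest() == "a9993e364706816aba3e25717850c26c9cd0d89d"
assert hashlib.sha1(msg).digest_size == 20    # 160 bits
assert hashlib.sha256(msg).digest_size == 32  # 256 bits
assert hashlib.sha384(msg).digest_size == 48  # 384 bits
assert hashlib.sha512(msg).digest_size == 64  # 512 bits
```

Checking a library against a published test vector like this is a good habit whenever you wire up a new hash implementation.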
MACs have two functionally distinct parameters, a message input (data, text) and a secret key known only to the message originator and intended receiver. MACs based on cryptographic hash functions are known as HMACs. The HMAC algorithm is described in [FIPS198] and RFC 2104 [HMAC]. HMACs can have a key of any length and produce a digest of the same length as the underlying message digest function.

The CMAC algorithm is based on a symmetric key block cipher and is equivalent to the one-key CBC MAC1 (OMAC1) algorithm. CMAC is described in SP800-38B [SP80038B]. CMAC uses either Triple DES or one of the three AES functions. The key length for CMAC is the same as the underlying block cipher. The output is the same length as the block length: 64 bits for Triple DES and 128 bits for AES. It is permissible to truncate a MAC value, at a cost of reduced security.

Galois/Counter Mode (GCM) is a block cipher mode of operation providing both confidentiality and data origin authentication. It was designed by McGrew and Viega [MCGR05], is free of patents, and is recommended by NIST [SP800-38D].

Security considerations: There are some security weaknesses if GCM mode is used incorrectly and the user is referred to NIST Special Publication 800-38D for more guidance. GCM is not suited for use with short tag lengths or a very long message (>64 GB - so not an issue here). The user should monitor and limit the number of unsuccessful verification attempts for each key. It is strongly recommended to use all 16 bytes for the tag, and generally no less than 8 (we impose a minimum requirement of 4 bytes). The same length of tag must always be used for a given key. The IV must be unique for each operation for a given key. Security is destroyed for all text encrypted with the same key if you ever use the same IV for different plaintext.
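The HMAC construction described earlier in this section is also available in Python's standard library, which is handy for cross-checking results; the key and message below are arbitrary examples of my own:

```python
import hashlib, hmac

key = b"my-secret-key"
msg = b"message to authenticate"
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# the tag length equals the underlying digest length (SHA-256: 32 bytes)
assert len(bytes.fromhex(tag)) == 32
# the same key and message always give the same tag...
assert tag == hmac.new(key, msg, hashlib.sha256).hexdigest()
# ...a different key gives a different tag...
assert tag != hmac.new(b"other-key", msg, hashlib.sha256).hexdigest()
# ...and verification should use a constant-time comparison
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())
```

The constant-time comparison matters in practice: comparing tags with ordinary string equality can leak timing information to an attacker submitting forged tags.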
Using a 12-byte randomly-generated IV is OK, and so is a counter that you have control over so that it can never be repeated (even after a power recycle or system crash!). There is more guidance on constructing unique IVs in section 8 of [SP800-38D]. We give you the basic tool; it's sharp; use it carefully.

Compressing data before encryption not only makes for shorter messages to be transmitted or stored, but also improves security by reducing the redundancy in the plaintext and making cryptanalysis harder. CryptoSys API includes compression (deflate) and decompression (inflate) functions based on Jean-loup Gailly's excellent zlib product. You will need to devise a packet format to store the uncompressed length of the data along with the compressed data itself. There is an example of this posted on our web site - see Using Compression with CryptoSys. Just remember to compress before you encrypt. [Contents] [Index]

Many procedures use a random session key to encrypt the body of the message. If this key is ever compromised - because the random numbers are predictable or can be manipulated before being generated - an opponent who has had access to your encrypted messages can decipher them at his leisure. Never use the standard VB6 Rnd() or C stdlib rand() functions to generate your keys! For more examples of potential problems see [GUTM] and [KELS98].

The random number generator used in the CryptoSys API has been redesigned as of Version 4.0 to conform to the more conservative NIST Special Publication 800-90 Recommendation for Random Number Generation Using Deterministic Random Bit Generators [SP80090], first published June 2006. Entropy is accumulated in "Fortuna" pools as described in Ferguson and Schneier, Practical Cryptography, [FERG03]. Under any circumstances, this algorithm provides more secure random numbers than ANSI X9.31 Appendix A [AX931]. The full technical details are published on our web site.
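Earlier in this section the manual recommends compressing before encrypting and devising a packet format that stores the uncompressed length with the compressed data. A sketch with Python's zlib, where the 4-byte big-endian length prefix is just one possible format of my own choosing:

```python
import struct, zlib

def deflate_packet(data: bytes) -> bytes:
    # 4-byte big-endian uncompressed length, then the deflated data
    return struct.pack(">I", len(data)) + zlib.compress(data)

def inflate_packet(packet: bytes) -> bytes:
    (orig_len,) = struct.unpack(">I", packet[:4])
    data = zlib.decompress(packet[4:])
    if len(data) != orig_len:
        raise ValueError("length field does not match inflated data")
    return data

plain = b"to be compressed before encryption " * 40
packet = deflate_packet(plain)
assert inflate_packet(packet) == plain
assert len(packet) < len(plain)  # repetitive data compresses well
```

In a real pipeline the whole packet (length prefix included) would then be encrypted, and decryption would be followed by inflation.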
The underlying RNG functions use the algorithms recommended in NIST SP 800-90 [SP80090] (the "DRBG Standard") to provide a Deterministic Random Bit Generator (DRBG). The HMAC_DRBG mechanism is used with SHA-1 as the underlying hash function. This outputs a sequence of binary bits that appears to be statistically independent and unbiased. The output is effectively random so long as internal actions of the process are hidden from observation. In particular the algorithm provides good Backtracking Resistance and, depending on how it is used, good Prediction Resistance.

Entropy is accumulated at startup and whenever certain functions in the library are called. Only unobtrusive methods of collecting entropy are used, so you can use the API safely in any application. The "Fortuna" method of pooling is used to prevent certain attacks from someone who controls some but not all of the entropy sources (see chapter 10 of [FERG03]). The more times your application calls the functions in the library before needing some random data, the more entropy will be accumulated. The user cannot control how or when the Fortuna entropy is added to the RNG process - this is by design. The advantage of the Fortuna system is that the level of entropy does not need to be measured. There is, however, a period of vulnerability just after start up when there may not be sufficient entropy in the pools. This can be overcome by initializing with a seed file. We strongly recommend that you use and initialize with a seed file wherever possible.

- Use the RNG_Initialize function to specify a seedfile with a known minimum amount of entropy to initialise the PRNG. This seed file is updated automatically when used. You can optionally call the RNG_UpdateSeedFile from time to time in your application, and use RNG_MakeSeedFile to create a new one. The security of this method is as good as the security you have over the seed file. If an attacker controls the seed file, it does not mean they control the random output data; it just means that using a seedfile does not increase the security strength of the PRNG.

- Use the RNG_BytesWithPrompt function when generating random data to force the user to generate entropy using random keystrokes and mouse movements. RNG_MakeSeedFile also uses such a prompt. This works provided you know the user's keyboard strokes and mouse movements are secure (e.g. are not being transmitted over a network).

- Use the RNG_KeyBytes function. If you assume zero security strength for the internally-generated entropy and you add input with, say, 128 bits of security strength, then the output from the RNG will have at least 128 bits of security strength. User-supplied entropy (a.k.a. a "seed") is added as "additional input" to the generation process. It does not affect the accumulation pools and cannot be used by an attacker to control the output. Remember it's not how "random" your user-supplied entropy is, but how little an attacker knows about it. Using the current time is no use. If you can provide 32 bytes* of data of which an attacker knows nothing and cannot later discover, then you have added 128 bits of security strength.

* The bytes must have been selected randomly from the range 0 to 255.

For more details on the security aspects of the random number generator, see the technical details published on our web site.

CryptoSys API also lets you generate nonces - a term used in security engineering meaning "number used once". Use a nonce where random but not-necessarily-unpredictable numbers are required: e.g. for initialization vectors, SSL cookies and random padding data. [Contents] [Index]

Use the setup.exe program to install CryptoSys API onto your computer. To uninstall, use the Start>Settings>Control Panel>Add/Remove Programs option from Windows.
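To round off the random-number discussion above: in Python the stdlib secrets module provides the same kind of CSPRNG service that the RNG_* functions provide here (this is an illustration of the principle, not the CryptoSys API itself):

```python
import secrets

key = secrets.token_bytes(32)      # 256-bit key
iv = secrets.token_bytes(16)       # 128-bit IV
hex_nonce = secrets.token_hex(12)  # 12-byte nonce as 24 hex chars

assert len(key) == 32 and len(iv) == 16 and len(hex_nonce) == 24
# two independent draws colliding is astronomically unlikely
assert secrets.token_bytes(16) != secrets.token_bytes(16)
```

The point mirrors the manual's warning: a general-purpose PRNG such as `random.random()` is predictable from its state and must never be used for keys; a CSPRNG backed by OS entropy is the right tool.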
Instructions on how to distribute the product to third-party users are provided with the Licensed Developer version - see the file distrib.txt for more details. Important: You must use the setup.exe program to install the Personal and Server Trial versions on your system and you must have administrator rights when installing or uninstalling either of these versions.

The core executable file is diCryptoSys.dll which is a Win32 DLL. This file must exist in the library search path on the user's system for all programming language interfaces. The executable diCryptoSys.dll is not registered with RegSvr32 (it can't be). The VB6/VBA and C/C++ interfaces access this core executable directly. Two wrapper executables are provided to allow COM and .NET programming access to the core executable. Both require the core Win32 DLL to exist in the library search path.

- diCryptOCX.dll is an ActiveX DLL which exposes various classes to allow programming access from ASP, VBScript and other COM applications including VB. This DLL does require registering. To register, copy diCryptOCX.dll to a convenient folder, open a command-line window, and type

    REGSVR32 diCryptOCX.dll

- diCrSysAPINet.dll is a .NET Class Library which exposes various classes to allow programming access from C# and VB.NET projects. This file is referenced from your .NET project. It is not registered.

For more information on the executables, see Technical Details. [Contents] [Index]

When we refer to "Visual Basic" in this document we probably mean the old "classic" Visual Basic, VB6. This language, now dropped by MS, is almost completely compatible with Visual Basic for Applications (VBA) that still comes with Microsoft Office products like MS Access and Excel. When we refer to "VB.NET" we mean "Visual Basic .NET" or "Visual Basic 2005" or "Visual Basic 2008" or "Visual Fred" or whatever is the latest incarnation coming out of Redmond. This is the one with the 20+ MB runtime instead of the 5 MB one for the old VB6.
Everything we do here in VB.NET should be compatible with every version since it was introduced in 2002. It would have been less confusing if MS had chosen a completely different name for the VB.NET version (like, er, "Java"). The languages are similar, but programs need to be approached in a completely different manner, at least when using our stuff.

In CryptoSys API there is a completely different set of VB.NET (VB2005/VB200x) methods that can be called using the VB.NET wrapper diCrSysAPINet.dll. See the sections marked .NET Equivalent under each function and the .NET Help File. In general, the function FOO_DoThis in VB6 is replaced by a method Foo.DoThis in VB.NET. The arguments and result will probably be different - there will be fewer arguments and the result probably just gets returned directly without any pre-dimensioning or GPF errors. Funnily enough, the same function FOO_DoThis in C/C++ is replaced by a method Foo.DoThis in C# which is identical to the VB.NET method. This is not a coincidence. [Contents] [Index]

Ciphertext is not text! The input to and output from an encryption function is, strictly speaking, a bit string. A `bit string' is an ordered sequence of `bits', each of value either `0' or `1'. There isn't a convenient `bit string' type in the usual programming languages so we use an ordered sequence of `bytes' instead, 8 bits to one byte, and we almost always choose to work with values that are an exact multiple of 8 like 64 bits or 256 bits to make life easier.

The input to an encryption function is usually a representation of a text string like "Hello world!". Different systems store text in different ways. You need to convert the text to an unambiguous sequence of bytes before you encrypt it. For ECB and CBC modes you need to add padding bytes as well to ensure the input block is an exact multiple of the cipher block size. Do not store ciphertext bytes in a string.
Once encrypted, the output is another sequence of bytes known as ciphertext. This sequence of bytes is generally not printable - it shows up as garbage. You can safely save this sequence of bytes directly to a binary file. It's often more convenient to encode the ciphertext bytes into a hexadecimal or base64 string, which is much easier to handle. But you do not convert the ciphertext back to text. It won't work.

When decrypting, after you've deciphered the ciphertext back to plaintext, you still have a sequence of bytes. You need to convert these bytes back to a string of text before you can read it, provided your decryption was successful. Use the functions below to convert a string of text to an unambiguous array of bytes and vice versa. You could be more explicit by replacing .Default with .GetEncoding(1252), and then use the appropriate code page for your character set (1252 is Western Europe).

In C and C++, the distinction between a string and an array of bytes is often blurred. A string is a zero-terminated sequence of char types and bytes are stored in the unsigned char type. A string needs an extra character for the null terminating character; a byte array does not, but it needs its length to be stored in a separate variable. If your string is a Unicode string, then it consists of a sequence of wchar_t types. Converting wide-character strings to a sequence of bytes in C is more problematic. You can either convert the Unicode string directly to a string of bytes (in which case every second byte will be zero for US-ASCII characters), or use the stdlib wcstombs function or the Windows WideCharToMultiByte function to convert to a sequence of multi-byte characters (some will be one byte long, some two) and then convert the multi-byte string to bytes (you can do this with a simple cast). Each party encrypting and decrypting must agree on which way to do it. [Contents] [Index]

Almost all the functions in CryptoSys API have a `Bytes' and a `Hex' version.
The Bytes version expects its input as an array of bytes (unsigned char* in C/C++). The Hex version expects its input as a hexadecimal-encoded string consisting only of the characters [0-9A-Fa-f] where two hex characters represent an 8-bit byte. In all these cryptographic algorithms, the underlying operations are carried out on 8-bit bytes (sometimes referred to as octets). See Storing and representing ciphertext.

In practice, especially with VB and VBScript, you may find it more convenient to use the 'Hex' versions and pass all your data to the API functions as hexadecimal-encoded strings. Use the CNV_BytesFromHexStr and CNV_HexStrFromBytes functions to convert between bytes and hexadecimal strings. Use the Visual Basic StrConv function to convert between a String and an array of Byte values. See Converting strings to bytes and vice versa.

Note that the CNV_BytesFromHexStr function will - by design - filter invalid hex characters and return the resulting bytes from whatever is left without error. The hex versions of the encryption functions are stricter and will fail if any invalid hex characters are found in the input.

If your input data is in Unicode or UTF-16 format (e.g. your operating system is set up for CJK characters), then you are strongly recommended to convert your input data to unambiguous hexadecimal format before trying to use the functions in this API. Do not try and use the Visual Basic String type or it will end in tears.

For various historical reasons the Hex encryption functions return their results in upper case and the hash digest functions in lower case. Just be careful if you use the case-sensitive strcmp() function in the C string.h library or if your Visual Basic options are set to Option Compare Binary. It's your decision which way you do it, but please be consistent.
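The text-to-bytes and bytes-to-hex data flow that the CNV_* functions handle can be illustrated compactly in Python (this shows the general conversions only, not the API itself):

```python
text = "Hello world!"
data = text.encode("utf-8")   # text -> unambiguous bytes
hexstr = data.hex().upper()   # bytes -> hex string (upper case, like the Hex encrypt functions)

assert hexstr == "48656C6C6F20776F726C6421"
assert bytes.fromhex(hexstr) == data            # hex string -> bytes
assert bytes.fromhex(hexstr).decode("utf-8") == text
# compare hex case-insensitively, since digest functions return lower case
assert hexstr.lower() == data.hex()
```

The last assertion echoes the manual's warning about case: normalise before comparing, or use a case-insensitive comparison.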
[Contents] [Index]

All the core VB6/C functions in this API return a 32-bit signed integer value, that is a Long in VB6/VBA and a long in C/C++, but an Integer in VB.NET and an int in C#. The wrapper functions provided in the .NET and ActiveX interfaces behave differently (and more conveniently) - please refer to the detailed documentation on those interfaces. Functions either succeed or return an error code; the exception is the _Init functions which return a non-zero context handle on success or zero if an error has occurred. Always check the return value before continuing. Use the _InitError function to find out the code for the error that has occurred. The value itself of the context handle is unimportant, but do not change its value. [Contents] [Index]

To call the CryptoSys API functions from a classic Visual Basic project or VBA application, just add the module basCryptoSys.bas to your project (VBA users should delete the first line Attribute VB_Name = "basCryptoSys"). The VB functions work in the same way as you would call the Win32 API functions from VB. You must use the correct variable types and must pre-dimension strings and byte arrays that are to receive output or you will suffer the wrath of the great god Gee-pee-eff. For examples, see the test code provided in the distribution in the folder C:\Program Files\CryptoSys\VB6.

To create a string of, say, length 40 characters, do either:

    Dim sData As String * 40

or

    Dim sData As String
    sData = String(40, " ")

If you know the output string needs to be the same size as the input, do this:

    Dim strInput As String
    Dim strOutput As String
    strInput = "......"
    strOutput = String(Len(strInput), " ")

To create a byte array of a given length:

    Dim abData() As Byte
    Dim nLen As Long
    nLen = 40
    ReDim abData(nLen - 1)

Note that byte arrays in VB are indexed from 0 to nLen - 1.
To create a byte array of the same length as an existing array, do this:

    Dim abOutput() As Byte
    ReDim abOutput(Ubound(abInput))

To find the length of an existing byte array:

    nLen = UBound(abData) - LBound(abData) + 1

Be careful, as this will cause a run-time error if abData() has not been ReDim'd. Look at the code for the function cnvHexStrFromBytes in basCryptoSys.bas to see how this can be handled.

Integer types are never used in CryptoSys API. Only Long types are used.

If you get

    Compile error: ByRef argument type mismatch

it means you have omitted the "(0)" after a Byte parameter, e.g.

    Dim abData() As Byte
    ' ...
    nRet = CRC_Bytes(abData, nBytes, 0)     ' WRONG: compile error
    nRet = CRC_Bytes(abData(0), nBytes, 0)  ' CORRECT

[Contents] [Index]

To use with a C or C++ program, include the diCryptoSys.h file with your source code and link with the diCryptoSys.lib library. The only non-ANSI C requirement is that your compiler supports the __stdcall calling convention to call Win32 API functions from a Win32 DLL. The sample C code API_Examples.c provided in the distribution carries out a variety of tests using known test vectors. There are examples given for each function.

You'll also realise that this manual is written primarily for Visual Basic programmers. Apologies, sort of, because we know you can easily work out what to do from the examples given in the sample C programs and the rest of this manual. The VB6/VBA functions and C/C++ functions are identical in form. The function names are identical, the arguments are the same (even though we may have used different parameter names in the VB and C syntaxes), and the same warnings given in the Remarks section apply to both. Just be careful to add an extra character for output string types in C. The following type conversions apply:-

Earlier versions used the Boolean type for the bEncrypt variable.
This has been changed [Version 4.0] in the Declarations to take a 32-bit Long type, which still works in VB6 even if the variable passed has been dimensioned as a Boolean type, so this should not break any existing code. We recommend that you use the defined constants ENCRYPT and DECRYPT in your VB6 and C code.

The API has been compiled and tested successfully with Microsoft Visual C++ (versions 5, 6, 7, 8 (VS2005/Express 2005) and VS2008) and Borland C++Builder version 5.5. Here is some minimal code:

    /* myapisource.c */
    #include <stdio.h>
    #include "diCryptoSys.h"
    int main()
    {
        char *message = "abc";
        long ret;
        char digest[41];
        ret = SHA1_StringHexHash(digest, message);
        printf("SHA1(%s)=\t%s\n", message, digest);
        printf("Correct =\t%s\n", "a9993e364706816aba3e25717850c26c9cd0d89d");
        return 0;
    }

To create myapisource.exe with MSVC++:

    cl myapisource.c diCryptoSys.lib

Running this program should result in

    SHA1(abc)= a9993e364706816aba3e25717850c26c9cd0d89d
    Correct =  a9993e364706816aba3e25717850c26c9cd0d89d

The lib file distributed with the program is made using MSVC. With Borland you need to generate a new .lib file directly from the DLL using the IMPLIB utility:

    implib diCryptoSys.lib diCryptoSys.dll

To compile with Borland C++:

    bcc32 myapisource.c diCryptoSys.lib

char strings require an extra char for the NUL terminating character, so add one more to the size a VB programmer would use, e.g.

    char sDigest_sha1[41]; /* SHA-1 digest is 40 hex chars plus NUL */
    char sDigest_sha2[65]; /* SHA-256 digest is 64 hex chars plus NUL */

or

    #include <stdlib.h>
    char *buf;
    buf = malloc(API_MAX_HASH_CHARS + 1);
    /* ... */
    free(buf);

We recommend using the defined constants like API_SHA1_CHARS and API_SHA1_BYTES rather than hard-coded numbers, e.g.

    char sDigest_sha1[API_SHA1_CHARS + 1];

See the constants in diCryptoSys.h. For examples, see the test code API_Examples.c and APICheck.c provided in the distribution.
[Contents] [Index] To use the .NET interface with C# and VB.NET: copy the diCrSysAPINet.dll library file into a convenient folder; add a reference in your project to diCrSysAPINet.dll; then add using CryptoSysAPI; (C#) or Imports CryptoSysAPI (VB.NET) to your code. Alternatively, with C#, you can just include the source code module CryptoSysAPI.cs in your project and there is no need to reference the class library DLL. There are two different types of methods used in the .NET interface: static methods and instance methods. We recommend that you use the static methods if you only have a single block of data to operate on. For examples, see the test code TestAPIcsharp.cs and TestAPIvbnet.vb provided in the distribution. In Version 3.1 we suggested using direct upgrades of the VB6 code in VB.NET with all the pre-dimensioning and other unsafe practices. The .NET Class Library interface is cleaner, safer and more convenient. If you are writing VB.NET code from scratch, please use the .NET Class Library interface. If you need to upgrade old VB6 code, see Upgrading VB6 to VB.NET. The .NET classes and methods have different parameters and return values to the VB/C functions described in this manual. Full details are provided in the separate help file CryptoSysAPI.chm supplied with the distribution (Hint: it should be in the folder C:\Program Files\CryptoSys\DotNet). [Contents] [Index] The file diCryptOCX.dll is an ActiveX wrapper that provides an interface to most functions in the CryptoSys API. Strictly it's an ActiveX DLL, not an OCX, but we think three-letter acronyms with an "X" in them are cool. Copy diCryptOCX.dll into a convenient directory on the target machine and register it with

regsvr32 diCryptOCX.dll

In VB6, add a reference to diCryptOCX.dll (Project>References>Browse...) then

Dim oGen As New diCryptOCX.gen
Debug.Print "Version=" & oGen.Version

In ASP/VBScript:

Dim oGen
Set oGen = Server.CreateObject("diCryptOCX.gen")
Response.Write "Version=" & oGen.Version

Refer to the syntax and details of the classes and methods of the ActiveX interface.
Note: Almost all the COM/ASP methods require the data to be encoded in hexadecimal. Handling byte array types in VBScript is fraught with problems so we don't even offer the option. For examples, see the test code apiocxtests.asp and other ASP test pages provided in the distribution (look in the folder C:\Program Files\CryptoSys\COM). [Contents] [Index] The equivalent of the "Hello world" program for CryptoSys API is to call the API_Version function. The function will demonstrate that the API is properly installed. Here is some example source code in VB6, C, C#, VB.NET and ASP, respectively.

Public Sub ShowVersion()
    Dim nRet As Long
    nRet = API_Version()
    Debug.Print "Version=" & nRet
End Sub

#include <stdio.h>
#include "diCryptoSys.h"
int main(void)
{
    long version;
    version = API_Version();
    printf("Version=%ld\n", version);
    return 0;
}

using CryptoSysAPI;
static void ShowVersion()
{
    int ver;
    ver = General.Version();
    Console.WriteLine("Version={0}", ver);
}

Imports CryptoSysAPI
Shared Sub ShowVersion()
    Dim ver As Integer
    ver = General.Version()
    Console.WriteLine("Version={0}", ver)
End Sub

<%
Dim oGen
Set oGen = Server.CreateObject("diCryptOCX.gen")
Response.Write "Version=" & oGen.Version & Chr(13) & Chr(10)
%>

[Contents] [Index] The cryptographic module is the core DLL diCryptoSys.dll. It is intended to meet FIPS 140-2 security level 1. The cryptographic boundary for CryptoSys API is defined as the enclosure of the computer on which the cryptographic module is installed. As a pure software product, CryptoSys API provides no physical security by itself. The computer itself must be appropriately physically secured. The module stores registry entries under HKCU\Software\DI Management and HKLM\Software\DI Management. Do not attempt to change these entries. [Contents] [Index] The module performs power-up self-tests and conditional self-tests to ensure that it is functioning properly. Power-up self-tests are performed when the module is powered up, i.e. when the DLL is first attached to the parent Windows process.
Conditional self-tests are performed when certain security functions are invoked. The following power-up self-tests are performed:- The integrity of the software module is tested using a 32-bit error detection code (EDC). The value of this EDC is set and stored when the module is created. On testing, the EDC is re-computed for the DLL module file being used and compared with the stored value. If the values do not match, the test fails. In addition to this automatic software integrity test, the integrity of the entire DLL file can be independently verified by the user using published SHA-1 and MD5 message digest and CRC-32 values before and after installation. The following conditional tests are performed:- When the module is first loaded or instantiated in a new thread, the RNG generates a 64-bit block which is not used but is saved in thread-safe memory for comparison with the next 64-bit block to be generated. Each subsequent generation of a 64-bit block is compared with the previously generated block. The test fails if any two compared 64-bit blocks are equal. In addition, each time the RNG function is called it compares the first 64-bit block generated with the first 64-bit block generated on the previous call. The test fails if these blocks are equal. No blocks are saved that have actually been previously output by the generator. Any failure of a power-up test or conditional test will cause the following actions to take place: * The error log file will be given a filename "apierr.log". If the process does not have permissions to write to that directory, no log file will be created. You can make settings in the machine's registry to prevent the message box displaying and to change the destination directory of the log file. See Optional Registry Settings. It is not possible to prevent the DLL from exiting if a critical error happens. The user may call the power-up self-tests on demand with the API_PowerUpTests function.
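The continuous RNG test described above follows the usual FIPS 140-2 pattern: the first 64-bit block is never output, and every newly generated block is compared with its predecessor. As an illustration of the pattern only, not CryptoSys API's actual implementation, here is a minimal Python sketch using os.urandom as a stand-in generator:

```python
import os

class ContinuousRngTest:
    """FIPS 140-2-style continuous test: generate 64-bit (8-byte)
    blocks and fail if any block equals the previous one.
    The very first block is saved for comparison but never output."""

    def __init__(self):
        # Prime the comparison with a block that is never output.
        self._last = os.urandom(8)

    def next_block(self):
        block = os.urandom(8)
        if block == self._last:
            # Two identical consecutive 64-bit blocks occur with
            # probability ~2^-64 for a healthy generator, so treat
            # a match as a failure of the generator.
            raise RuntimeError("continuous RNG test failed")
        self._last = block
        return block

rng = ContinuousRngTest()
out = [rng.next_block() for _ in range(4)]
assert all(len(b) == 8 for b in out)
assert all(a != b for a, b in zip(out, out[1:]))
```

Note that, as in the manual's description, the sketch only ever retains the most recent block for comparison; blocks that have been output are not kept.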
In the event that such an "on demand" test fails, the module will log the error event and return an error code but will not terminate the process. Note that the automatic self-tests fail only in exceptional circumstances. You should never see one in practice unless the software module has been tampered with. [Contents] [Index] The following optional registry settings may be made to change the behaviour of the module if a critical error occurs. Disclaimer: Modifying the registry can cause serious problems that may require you to reinstall your operating system. We cannot guarantee that problems resulting from the incorrect use of the registry can be solved. Use the information provided at your own risk. The default behaviour is to display a pop-up MessageBox if a critical error occurs. Users running the toolkit on an unattended server or within an IIS application can disable pop-ups to prevent problems (IIS gets a bit upset if an application displays a pop-up). Set the value to '1' to disable pop-up messages from appearing. Note that this will not prevent the 'first user' or expiry dialogs appearing with the Test Version.

[HKEY_LOCAL_MACHINE\Software\DI Management\CryptoSys\Options]
NoMessageBox

The default behaviour if a critical error occurs is to try to write to an error log file in the same directory as the parent executable file that called the DLL. To change this directory, create a REG_SZ value of 'ErrorLogDir' in the key below and set the value to the directory you want, e.g. "C:\myfolder\subdir". The directory name should not have a trailing slash character.

[HKEY_LOCAL_MACHINE\Software\DI Management\CryptoSys\Options]
ErrorLogDir

To disable the creation of an error log file altogether, create a REG_DWORD value of 'NoErrorLog' in the key below. Set the value to '1' to disable.

[HKEY_LOCAL_MACHINE\Software\DI Management\CryptoSys\Options]
NoErrorLog

The following registry entry is required for the Event Log messages to be recorded properly.
If this entry is not present, or the path to the DLL is wrong, the event log entries will be of the form: "The description for Event ID ( 8xxx ) in Source ( diCryptoSys ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer." For correct formatting of the message, create the REG_SZ and REG_DWORD values in the key below. The message will still be recorded even if this entry is not present.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Application\diCryptoSys]
EventMessageFile
TypesSupported

[Contents] [Index] If you are writing VB.NET code from scratch, we recommend you use the classes and methods in the .NET Class Library. If you need to upgrade old VB6 code to VB.NET, please note the following:

Change Long to Integer. CryptoSys API always uses 32-bit integers.

Change Declare Function Foobar to Declare Ansi Function Foobar.

Change strBuffer = String(n, " ") to strBuffer = New String(" "c, n). Note the 'c' after the " ". This is required for the Strict On option.

Use System.Text.Encoding.Default.GetBytes(Str) and System.Text.Encoding.Default.GetString(abData) to convert between text strings and bytes instead of StrConv(Str, vbFromUnicode) and StrConv(abData, vbUnicode).

Faster AES encryption functions were added in version 3, which have standardised on the 128-bit block size specified in FIPS 197. The additional 192- and 256-bit block sizes in the original Rijndael proposal were not adopted in the standard (CryptoSys API was first issued before the standard was published). There is a complete set of functions for each key size - AES128, AES192 and AES256 - rather than specifying it as a parameter.
Old-style:

AES_Hex(strOutput, strInput, strHexKey, 128, 128, True) ' Deprecated
AES_Hex(strOutput, strInput, strHexKey, 192, 128, True) ' Deprecated
AES_Hex(strOutput, strInput, strHexKey, 256, 128, True) ' Deprecated

Replace with:

AES128_Hex(strOutput, strInput, strHexKey, True)
AES192_Hex(strOutput, strInput, strHexKey, True)
AES256_Hex(strOutput, strInput, strHexKey, True)

The older-style but deprecated AES functions with variable 128/192/256 block sizes have been retained for compatibility with earlier versions. [Contents] [Index] In Version 3 we have tightened up the requirements for input to the block encipher functions (AES, DES, Triple DES and Blowfish) to prevent the functions returning a seemingly-valid result for incorrect input. Earlier versions of the API would have substituted zeroes (or random values) for missing data or ignored trailing bytes. Input Data: Input data to the block encipher functions in ECB or CBC mode must be an exact multiple of the block length (16 bytes for AES, 8 bytes for the others) or the functions will return an error. It is the user's responsibility to provide padding where necessary and to remove the padding after decryption. Failure to do this will result in one of these errors:-

Input not multiple of 8 bytes long
Input not multiple of block size

See Padding for more guidance on preparing your plaintext for encryption. This rule on input lengths does not apply to the file encryption functions which automatically provide their own padding. Keys and IV: All the keys and initialization vectors provided in hex format must be of the exact required length or the function will return an error:-

Invalid key length
Invalid initialization vector

Note also that the CNV_BytesFromHexStr function will - by design - filter invalid hex characters and return the resulting bytes from whatever is left without error. The hex versions of the encryption functions are stricter and will fail if any invalid hex characters are found in the input.
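The padding the manual asks you to supply is the PKCS#5/PKCS#7 convention described under Padding: append n bytes, each of value n, to fill out the final block. A minimal Python sketch of that convention (an illustration of the rule, not the API's own PAD_ functions):

```python
def pad_block(data: bytes, block_size: int = 16) -> bytes:
    """Append PKCS#7 padding: n bytes of value n, 1 <= n <= block_size.
    The output is always longer than the input, so a whole extra block
    of padding is added when the input is already block-aligned."""
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

def unpad_block(data: bytes, block_size: int = 16) -> bytes:
    """Strip PKCS#7 padding, raising on anything malformed
    (the PAD_Unpad* functions signal a 'decryption error' instead)."""
    if not data or len(data) % block_size:
        raise ValueError("decryption error")
    n = data[-1]
    if not 1 <= n <= block_size or data[-n:] != bytes([n]) * n:
        raise ValueError("decryption error")
    return data[:-n]

padded = pad_block(b"hello")          # 5 bytes -> 16, padded with 11 x 0x0B
assert len(padded) == 16 and padded[-1] == 11
assert unpad_block(padded) == b"hello"
```

Padding before encryption and unpadding after decryption in this way keeps the input an exact multiple of the block length, which is exactly what the stricter Version 3 checks require.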
See Hexadecimal versus Bytes for more details of the hex conversion functions. [Contents] [Index] The core executable diCryptoSys.dll is a Win32 DLL compiled with Microsoft Visual C++ 2008. There are versions compiled for both Win32 (x86) and X64 platforms. The x86 version has been fixed using LegacyExtender so it will still work on W9x and NT4 systems. All programming language interfaces require this DLL to exist in the library search path of the user's system. The core cryptographic functions are written in pure ANSI C with extensive internal checks for memory leaks and overflow issues. The executable diCryptoSys.dll is compatible with all versions of 32-bit Windows (95/98/Me/NT4/2K/XP/2003/Vista). It does not require any other special libraries to work apart from the standard Win32 libraries available in all 32-bit versions of Windows. It is totally independent of the Microsoft Cryptographic API. The wrapper executable diCryptOCX.dll is an ActiveX DLL created with Microsoft Visual Basic 6.0 SP6. It calls the functions in the core Win32 DLL via diCryptoSys.tlb, which is available in the ActiveX source code. The source code for this wrapper DLL is provided in the distribution. The executable diCryptOCX.dll must be registered on the end-user's system using REGSVR32.EXE. For more details on its use, see Using with COM/ASP. The wrapper executable diCrSysAPINet.dll is a .NET Class Library created with Microsoft Visual C# 2008 set to target Microsoft .NET Framework 2.0. It is compiled for all target systems: Win32 and X64. This requires at least .NET 2.0 SP1 and should be upwardly compatible with all later .NET versions. It can be called from programs written in C# and VB.NET (aka VB2005/VB2008/VB200x). The .NET class library calls the functions in the core Win32 DLL using System.Runtime.InteropServices. The source code for this wrapper DLL is provided in the distribution. For more details on its use, see Using with .NET. 
[Contents] [Index] AES complies with: Triple DES (TDEA, 3DES, des-ede3) complies with: DES complies with these now-withdrawn standards: The block cipher modes of operation comply with AES-GCM complies with: The SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512 algorithms comply with: The MD5 algorithm complies with: The PBKDF2 algorithm complies with: The random number generator conforms to [Contents] [Index] [Contents] [Index] AES Blowfish DES Triple DES (TDEA) PC1 (RC4) Key Wrap GCM Hash SHA-1 SHA-256 MD5 MAC RNG Compression PBE Padding Conversion CRC Misc ActiveX The AES functions have a separate set of functions for each of the three key sizes. All operate with a block size of 16 bytes. All expect a key of the respective length to be provided. CAUTION: all three sets share the same context for the Init-Update-Final functions, so don't start using AES-128 and then switch to AES-256 part way. [Contents] [Index] Blowfish can have a key of variable length from one to 56 bytes. The Byte variants need you to specify the key length, but the Hex variants will use whatever length is provided in the key hex string. The block and IV are always 8 bytes long. BLF_Hex BLF_HexMode BLF_Bytes BLF_BytesMode BLF_B64Mode BLF_File BLF_FileHex BLF_Init BLF_InitHex BLF_InitError BLF_Update BLF_UpdateHex BLF_Final [Contents] [Index] The key for DES is always 8 bytes long; the block length is 8 bytes. There is no need to specify the key length when calling the DES functions, but the input variables must be long enough. The parity bits in the key are ignored. DES_Hex DES_HexMode DES_Bytes DES_BytesMode DES_B64Mode DES_File DES_FileHex DES_Init DES_InitHex DES_InitError DES_Update DES_UpdateHex DES_Final DES_CheckKey DES_CheckKeyHex [Contents] [Index] The key for TDEA is always 24 bytes long; the block length is 8 bytes. There is no need to specify the key length when calling the TDEA functions, but the input variables must be long enough. The parity bits in the key are ignored. 
TDEA_Hex TDEA_HexMode TDEA_Bytes TDEA_BytesMode TDEA_B64Mode TDEA_File TDEA_FileHex TDEA_Init TDEA_InitHex TDEA_InitError TDEA_Update TDEA_UpdateHex TDEA_Final [Contents] [Index] PC1 is a variable-key-size stream cipher. It produces identical output to the proprietary RC4 (TM) stream cipher. It operates on individual bytes. To decrypt, just encrypt again with the same key. PC1_Bytes PC1_File PC1_Hex [Contents] [Index] CIPHER_KeyWrap - Wraps (encrypts) a content-encryption key with a key-encryption key. CIPHER_KeyUnwrap - Unwraps (decrypts) a content-encryption key with a key-encryption key. [Contents] [Index] GCM_Encrypt GCM_Decrypt GCM_InitKey GCM_NextEncrypt GCM_NextDecrypt GCM_FinishKey [Contents] [Index] HASH_Bytes HASH_File HASH_HexFromBytes HASH_HexFromFile HASH_HexFromHex [Contents] [Index] MAC_Bytes MAC_HexFromBytes MAC_HexFromHex [Contents] [Index] All the SHA-1 functions (except SHA1_BytesHash) output the message digest as a String in hexadecimal format. You must set the length of the output string to a minimum of 40 characters before calling the digest functions (41 characters in a C program). See Pre-dimensioning a string for instructions on how to do this. SHA1_StringHexHash SHA1_BytesHexHash SHA1_BytesHash SHA1_FileHexHash SHA1_Init SHA1_AddBytes SHA1_AddString SHA1_HexDigest SHA1_Reset SHA1_Hmac SHA1_HmacHex [Contents] [Index] The SHA-256 functions are identical in syntax and usage to their SHA-1 equivalents, except the string to receive the message digest must be set to a minimum of 64 characters (65 in a C program). See Pre-dimensioning a string. SHA2_StringHexHash SHA2_BytesHexHash SHA2_BytesHash SHA2_FileHexHash SHA2_Init SHA2_AddBytes SHA2_AddString SHA2_HexDigest SHA2_Reset SHA2_Hmac SHA2_HmacHex [Contents] [Index] The MD5 functions are identical in syntax and usage to their SHA-1 equivalents, except the string to receive the message digest must be set to a minimum of 32 characters (33 in a C program). See Pre-dimensioning a string. 
MD5_StringHexHash MD5_BytesHexHash MD5_BytesHash MD5_FileHexHash MD5_Init MD5_AddBytes MD5_AddString MD5_HexDigest MD5_Reset MD5_Hmac MD5_HmacHex [Contents] [Index] RNG_KeyBytes RNG_KeyHex RNG_BytesWithPrompt RNG_HexWithPrompt RNG_NonceData RNG_NonceDataHex RNG_Test RNG_Number (supersedes RNG_Long) RNG_Initialize RNG_MakeSeedFile RNG_UpdateSeedFile [Contents] [Index] Remember what happens to a football - you deflate it to make it smaller, and inflate it to make it bigger again. See also Data compression in cryptography. ZLIB_Deflate ZLIB_Inflate [Contents] [Index] PBE_Kdf2 PBE_Kdf2Hex [Contents] [Index] These functions add and remove padding according to the convention in PKCS#5, PKCS#7 and CMS - see Padding for more details. The outputs from the padding functions are always longer than the input, and the outputs from the 'Unpad' functions are always shorter. The 'Unpad' functions return a "decryption error" if the padding is not valid. PAD_BytesBlock PAD_UnpadBytes PAD_HexBlock PAD_UnpadHex [Contents] [Index] These functions carry out conversions between bytes and hexadecimal-encoded strings, and bytes and base64-encoded strings. CNV_BytesFromHexStr CNV_HexStrFromBytes CNV_HexFilter CNV_BytesFromB64Str CNV_B64StrFromBytes CNV_B64Filter [Contents] [Index] These functions compute a CRC-32 checksum of some given data. They all return the value of the checksum directly. CRC_Bytes CRC_File CRC_String [Contents] [Index] These functions allow you to check the version and other module details, carry out the self-tests on demand, and securely wipe data. API_CompileTime API_ErrorLookup API_LicenceType API_ModuleName API_PowerUpTests API_Version WIPE_Data WIPE_File [Contents] [Index] See ActiveX Classes and Methods. These still work but are no longer documented in this manual. See. 
AES_Bytes AES_BytesMode AES_Ecb AES_EcbHex AES_File AES_FileHex AES_Final AES_Hex AES_HexMode AES_Init AES_InitError AES_InitHex AES_Update AES_UpdateHex bf_BlockDec bf_BlockEnc bf_FileDec bf_FileEnc bf_Final bf_Init bf_StringDec bf_StringEnc BLF_Ecb BLF_EcbHex RAN_DESKeyGenerate RAN_DESKeyGenHex RAN_KeyGenerate RAN_KeyGenHex RAN_Long RAN_Nonce RAN_NonceHex RAN_Seed RAN_TDEAKeyGenerate RAN_TDEAKeyGenHex RAN_Test RNG_KeyGenerate RNG_KeyGenHex RNG_Long [Contents] [Index] AES128_B64Mode encrypts or decrypts data represented as a base64 string using a specified mode. The key and initialization vector are represented as base64 strings.

Public Declare Function AES128_B64Mode Lib "diCryptoSys.dll" (ByVal strOutput As String, ByVal strInput As String, ByVal strKey As String, ByVal bEncrypt As Boolean, ByVal strMode As String, ByVal strIV As String) As Long

nRet = AES128_B64Mode(strOutput, strInput, strKey, bEncrypt, strMode, strIV)

strOutput - String of sufficient length to receive the output.
strInput - String containing the input data in base64.
strKey - String containing the key in base64.
bEncrypt - Boolean direction flag: set as True to encrypt or False to decrypt.
strMode - String specifying the confidentiality mode, e.g. "ECB" or "CBC".
strIV - String containing the initialization vector (IV), if required, in base64.

long _stdcall AES128_B64Mode(char *strOutput, const char *strInput, const char *strKey, int bEncrypt, const char *strMode, const char *sIV);

Long: If successful, the return value is 0; otherwise it returns a non-zero error code.
public static string Decrypt(string inputStr, string keyStr, Mode mode, string ivStr, EncodingBase encodingBase);
public static string Encrypt(string inputStr, string keyStr, Mode mode, string ivStr, EncodingBase encodingBase);

Public Shared Function Decrypt(ByVal inputStr As String, ByVal keyStr As String, ByVal mode As Mode, ByVal ivStr As String, ByVal encodingBase As EncodingBase) As String
Public Shared Function Encrypt(ByVal inputStr As String, ByVal keyStr As String, ByVal mode As Mode, ByVal ivStr As String, ByVal encodingBase As EncodingBase) As String

Refer to the .NET Help File for more details of the .NET equivalent methods. The length of the input string strInput must represent a multiple of the block size (16 bytes) when decoded. If not, an error will be returned. The initialization vector string strIV must represent exactly 16 bytes unless strMode is ECB, in which case strIV is ignored (use ""). The key strKey must also represent exactly 16 bytes, the required key length. The output string strOutput must be set up with at least the same number of characters as the input string before calling. The variables strOutput and strInput should be different. This example is the same data as for AES128_HexMode, except the data is in base64 format.
Dim nRet As Long
Dim strOutput As String
Dim strInput As String
Dim strKey As String
Dim strIV As String
Dim bEncrypt As Boolean
Dim sCorrect As String

' Case #4: Encrypting 64 bytes (4 blocks) using AES-CBC with 128-bit key
' Key : 0x56e47a38c5598974bc46903dba290349
strKey = "VuR6OMVZiXS8RpA9uikDSQ=="
' IV : 0x8ce82eefbea0da3c44699ed7db51b7d9
strIV = "jOgu776g2jxEaZ7X21G32Q=="
strInput = "..."  ' base64 plaintext (value lost in this copy)
sCorrect = "..."  ' expected base64 ciphertext (value lost in this copy)
' Set strOutput to be same length as strInput
strOutput = String(Len(strInput), " ")
Debug.Print "KY="; strKey
Debug.Print "IV="; strIV
Debug.Print "PT="; strInput
nRet = AES128_B64Mode(strOutput, strInput, strKey, ENCRYPT, "CBC", strIV)
Debug.Print "CT="; strOutput; nRet
Debug.Print "OK="; sCorrect
strInput = strOutput
nRet = AES128_B64Mode(strOutput, strInput, strKey, DECRYPT, "CBC", strIV)
Debug.Print "P'="; strOutput; nRet

This should result in output as follows:

KY=VuR6OMVZiXS8RpA9uikDSQ==
IV=jOgu776g2jxEaZ7X21G32Q==
PT==
CT== 0
OK==
P'== 0

AES128_HexMode AES128_BytesMode [Contents] [Index] AES128_Bytes encrypts or decrypts an array of Bytes in one step in Electronic Codebook (ECB) mode.

Public Declare Function AES128_Bytes Lib "diCryptoSys.dll" (ByRef abOutput As Byte, ByRef abData As Byte, ByVal nDataLen As Long, ByRef abKey As Byte, ByVal bEncrypt As Boolean) As Long

nRet = AES128_Bytes(abOutput(0), abData(0), nDataLen, abKey(0), bEncrypt)

abOutput - Byte array of sufficient length to receive the output.
abData - Byte array containing the input data.
nDataLen - Long equal to the length of the input data in bytes.
abKey - Byte array containing the key.
bEncrypt - Boolean direction flag: set as True to encrypt or False to decrypt.

long _stdcall AES128_Bytes(unsigned char *output, const unsigned char *input, long nbytes, const unsigned char *key, int bEncrypt);

Long
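The base64 key and IV in the AES128_B64Mode example above can be cross-checked against the hex values given in its comments using nothing more than a standard base64 decoder. A quick Python check (again, an independent verification, not part of the API):

```python
import base64

# Key and IV from the AES128_B64Mode example, with their documented hex values.
key_b64 = "VuR6OMVZiXS8RpA9uikDSQ=="
iv_b64 = "jOgu776g2jxEaZ7X21G32Q=="

key = base64.b64decode(key_b64)
iv = base64.b64decode(iv_b64)

# Both must decode to exactly 16 bytes: the AES-128 key length
# and the AES block length respectively.
assert len(key) == 16 and len(iv) == 16
assert key.hex() == "56e47a38c5598974bc46903dba290349"
assert iv.hex() == "8ce82eefbea0da3c44699ed7db51b7d9"
```

This is the "must represent exactly 16 bytes" rule from the Remarks made concrete: the function validates the decoded length, not the length of the base64 string itself.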
python-vzaar

This is a wrapper for Vzaar's version 1.0 API. As of this repository's initial commit, all documented API methods are implemented. The most up-to-date version of this code will be available at The original API documentation can be found at

Usage:

Install using setuptools, pip, or by cloning the repository and running:

python setup.py install

Once installed, it can be used with:

from vzaar import Vzaar
api = Vzaar(username, api_key, video_sucess_redirect, max_video_size)
api.user_details(username)

If you're using django, you can define the values in your settings object instead of having to set them when you instantiate the api object. Add each of the following to your settings, then import DjangoVzaar instead of importing Vzaar.

VZAAR_USERNAME - string - your username (not your email)
VZAAR_KEY - string - Vzaar API key
VIDEO_SUCCESS_REDIRECT - string - where to redirect a user after upload ex:
MAX_VIDEO_SIZE - integer - size (in bytes) of maximum allowed upload size

Once the above keys are defined in your settings.py file, you can use the api like so:

from vzaar import DjangoVzaar
api = DjangoVzaar()

Dependencies

- oauth2 -
- httplib2 -
- python2.6 - This has been tested using python 2.6. You may be able to get it to work in 2.5 by installing the json module that comes packaged with python 2.6, or the simplejson module.
Here is another practice that I would like to contribute. It's been a while since I took over the administration of SMW from a prior developer, and sometimes I have to refresh my own memory of what I did. Declare all properties. I say again: declare all properties. Yes, I know: by default, an undeclared property is assumed to be of type Page. (The old Relations were all of that type.) But I still adhere rigorously to that practice. Decide ahead of time what are the most common custom types and properties that you will need, and implement those right away. It's a lot easier to create a bunch of types and properties in advance than it is to reannotate a bunch of articles after the fact. As I said before: Whenever possible, develop templates that handle semantic annotation within their own code. Make those templates easy to use; otherwise, users won't use them. It's that simple. In this connection: Absolutely any sort of data that is repeatedly mentioned in multiple articles is a candidate for semantic annotation. (And don't be afraid to use Type:String as a datatype; not all annotated text needs to be a separate page.) The best time to implement SMW is when the wiki is just getting started. Failing that: Make sure that you have the full involvement of the user community and your colleagues and superiors in administration! Different people have different ideas on what sort of data is proper to annotate, and also on the new theory of semantic concepts: some administrators prefer to assign articles directly to MediaWiki's categories, while others, with a little persuasion and a demonstration project, might let you create a semantic Concept and watch as it acts like a "dynamic category." If you /do/ use concepts, then try to invent some way of referring people to your concepts so that they can browse them whenever they need to.
(I haven't yet used Semantic Drilldown; that sounds very much as though it would provide a "concept tree" or something similar.) Expand on the definitions of all standard types and special properties. This serves two purposes: 1. In many cases it constitutes encyclopedic knowledge and is therefore valuable in that context alone. 2. It lets your users know what those types and special properties are good for and how they work. As such it is part of the help system. Good help, like good security, is redundant. Similarly, you should expand on your custom-type definitions. A good custom type definition must contain more than several [[Corresponds to::x number of units]] declarations. It should also have a clear statement of its intent. Expand on property declarations for the same reason. I always try to work my [[has type::x]] declaration (and, where applicable, my [[units of measure::x,y,z]] declaration) into a paragraph describing the property and its applicability. On my wiki, I have Length defined as a type. My base unit is the meter, but I support all reasonable divisions (and multiples) of the meter, and also US Customary units, some ancient units, and even the classical Astronomical Unit, the light-year, and the parsec. I then can declare multiple properties of this type, each of which has its own places to apply it. I already have separate properties for the semi-major axis and peri- and apo- apsides of an orbiting body, and also the length, breadth, and height of a building, or the standing height of a man. Now obviously I am not going to describe a man's standing height in fractions of an AU, or the semi-major axis of the dwarf planet Eris or the (alleged) companion star Nemesis in meters! That's what the selective units-of-measure special property is for--but this requires a bit more explanation than the minimum code required. This is actually analogous to the difference between code and comment in a program (or a PHP script). 
Comments are the means by which we remind our users, or our successors and assistants in SMW management, why we declared our properties and custom types as we did, and how we understand the proper uses of standard types and special properties. I do have one special situation: I built a nonlinear custom type, and until SMW 1.4.3 I had to alter the DataValueFactory script file to recognize a "new standard type." Now that I have enhanced the actual standard type, I simply redeclared all my properties that used the custom type, to be properties of the standard type. But now I have an "article" in the Type namespace for a non-standard type that lacks any definition code appropriate to custom types. For that reason, this "type" shows up in Special:Types with an error flag. But I don't want to discard the text completely. So I shall probably move that text to another namespace that I have created for "essays," or simply to the Help namespace. I'm not prepared at this point to create a special namespace for obsolete custom types! But one way that I could leave the article where it is, and make the error flag go away, would be if I could explicitly /define/ the old custom type as an /alias/ of the standard type that now subsumes it. If I have one criticism of the initial development of SMW, it is this: the only way to get SMW to recognize a non-standard alias is to hack the language script file to support that alias. I think we should be allowed to declare aliases within our wiki, and would propose an additional special property [[alias of::x]] or [[alternate name for::x]] to perform that function. Either that, or a special page, available only to sysops, that permits direct access to the language file involved. One more thing: Back up, back up, back up. On our wiki, we have installed a script that performs daily backups at an hour of the day in which most of our users are fast asleep. 
Then I try to do my upgrades and enhancements early in the morning, because I am three hours ahead of most of my other users and can probably get the upgrade done before I end up interfering with anyone else. And if something goes wrong, I can always restore. And when I say "back up," I also mean to back up your existing SMW code. Run

tar -czf semediawiki-lng.tar.gz SemanticMediaWiki

where "lng" stands for "Last Known Good." Then if you install a new version of SMW, and the code hangs up or otherwise gives an indication of being unsafe, you can revert it easily:

rm -Rf SemanticMediaWiki
tar -xzf semediawiki-lng.tar.gz

You might in fact wish to automate these commands in shell scripts that you can invoke when you need to. But: make sure that you keep those shell scripts in "600" mode (that is, user has read and write, but not execute, privileges, and "group" and "everyone" have no privileges) until you need to use them. Then change the mode to "700", execute the script, and change the mode back to "600" when you're done with it. And now a more definitive "war story," to explain how and why I developed these practices: Our wiki, <>, is a multinational wiki. On our server we have one primary directory having our wiki in English, plus several subsidiary wikis, each in a different language, and finally a "pool" wiki containing images that are available for all articles in all our wikis. When I joined as an editor, and then as an administrator, my predecessor as the chief developer had installed SMW 0.6. That, if anyone still remembers it, had the old Relation and Attribute namespaces. For reasons that I never investigated, no one /ever/ declared any "attributes" or even paid attention to any /types/ that SMW had available to it in those days. The only thing we used, in short, were relations. On the day that I took over as chief developer, SMW 1.0 came out. That's when I found out how much richer it was than something to use to track relationships among articles.
But in installing it (actually migrating from SMW 0.7 to 1.0), I had to migrate all of my Relation declarations to Property declarations of type Page. Once I had done that, a lot of frankly weird Factbox entries vanished. Then I also plunged myself into three parallel efforts:

1. Writing Help pages for semantic usage and development.
2. Extending the development of SMW to go beyond simply page-type properties.
3. Demonstrating the utility of SMW to the site's founding Bureaucrat, so that he would authorize its installation in the other languages.

The third part turned out to be easy; once I showed him what SMW could do with queries (especially the building of dynamic tables), he approved the extensions. That, of course, required me to write my own language script for Korean, find an appropriate language script for Chinese, and eventually (though this came much later) to collaborate with a Brazilian native to create language support for Portuguese.

But soon I had a problem. SMW had a means by which to annotate dates. But those dates used Unix-derived date-and-time-parsing functions, and stored the date as the number of seconds on either side of the Unix Epoch--midnight UTC at the start of New Year's Day, 1970. At a minimum, this would be unsuitable for annotating, say, the dates of birth and death of men like Galileo Galilei and Sir Isaac Newton. For dates before the common era--well, as we say in American English, forget it! About all that I'd be able to annotate were launch and flyby dates for rocket probes to various celestial bodies--and I would be completely at a loss to annotate the date of discovery of any celestial body discovered earlier than the dwarf planet Pluto (1930). Worse yet, in 2038, I would run out of time.

So I developed a custom datatype to handle dates in the far-distant past. In retrospect, it was a bad solution.
Getting SMW to recognize that custom type required modifications in at least two other files--and every time SMW came out with a new version, the story would be the same! Finally, with SMW 1.4.2, the situation became untenable, with the abandonment of the old getXSDValue() and parseXSDValue($value,$unit) functions in favor of getDBkeys() and parseDBkeys($args).

Recently I've done what I should have done earlier: enhanced the standard Date datatype with my own modification. Happily, I had already suggested to the developers that they ought to scrap their implementation of dates in favor of a different base model--the Julian Day--that would be precise enough to store a time down to the second, yet broad enough to support any date in recorded history. Beginning with SMW 1.4.0, Fabian Howahl came out with his new Date type. This summer, I began systematically to enhance that type to support the different calendar models that I routinely used, and also to achieve certain goals that the developers had set out for themselves.

Now this has required redeclaring many properties, and in some cases reannotating certain dates that were in a format that I no longer cared to support. That created a different problem: if I changed the format before changing the type, then dates that were not recognized before would be recognized after the change--but would not be stored in the database. That I installed this new version of the type concurrently with an upgrade of SMW to version 1.4.3 gave me the perfect opportunity to do the one thing that would solve that problem: rebuild the SMW database.

So here's another memo: if you're going to make a major custom change (assuming that you want to hack the PHP code for that), try to time it with a major upgrade--because you're going to have to run

  php SMW_refreshData.php -ftpv

and

  php SMW_refreshData.php -v [-s xxxx]

anyway.

Since taking over as developer, I have had to upgrade SMW many times, and MediaWiki itself almost as often.
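The Julian Day model is straightforward to compute. Here is a sketch (not SMW's actual code) of the standard Fliegel-Van Flandern conversion from a proleptic Gregorian calendar date to a Julian Day Number:

```python
# A sketch (not SMW's implementation) of the Fliegel-Van Flandern
# conversion from a proleptic Gregorian date to a Julian Day Number:
# the count of days since the epoch of the Julian Day system
# (noon on 1 January 4713 BCE, proleptic Julian calendar).
def gregorian_to_jdn(year, month, day):
    a = (14 - month) // 12          # 1 for January/February, else 0
    y = year + 4800 - a             # shift so the "year" starts in March
    m = month + 12 * a - 3          # March = 0 ... February = 11
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)
```

Because the result is a plain integer (with the time of day carried as a fraction if needed), dates far before the Unix Epoch--or before the common era--pose no problem, and there is no year-2038 limit.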
Naturally I have developed some rather elaborate shell scripts to accomplish this. One other thing I have learned: simply creating "soft links" to SMW in the other directories will not work. The maintenance scripts do not resolve properly when invoked through the soft link. What happens is that SMW_refreshData.php, for example, operates on the database relative to the location of the /target file/ and /not/ to the location of the soft link. I have, therefore, had no choice but to install actual copies of SMW in every extensions subdirectory of every second-language site. Without shell scripts, this would be far too cumbersome.

I have also developed many templates that accomplish semantic annotation by themselves. I have noticed, in passing, that many of my users have taken it upon themselves to "annotate" certain "properties" without declaring them. So now I have to make a regular "sweep" of the Properties list, and retroactively declare certain properties according to what I find that they are actually trying to annotate.

If I could sum up the one thing that any semantic developer needs, it is: Dedication. Semantic annotation is something new, and that Wikipedia has taken so long even to /consider/ it for their own project speaks volumes. Those of us who know what annotation can do, and how to do it, have to teach others, and do most of the "heavy lifting" to realize the full value of the technique.

Temlakos

Mike Axelrod wrote:
> I am putting together a presentation on best practices for SMW. I'd
> like to focus this work on pragmatic methods of developing a semantic
> wiki, this would include planning, design, leading to early
> implementation/prototyping etc.. I'd also like to include
> social/community aspect if I can.
>
> Any contributions in the form of links, tid bits of knowledge, war
> stories, etc.. would be welcome. In exchange I volunteer to post some
> of my work and/or contribute to a best practices area on whichever SMW
> website is appropriate.
> Mike
>
> Mike Axelrod
>
> _______________________________________________
> Semediawiki-user mailing list
> Semediawiki-user@...

Hi Dominika,

Thanks a million for the reply. I'll look into it and get back with results.

Thanks again,
j

2009/8/21 Wloka <wloka@...>
> Hello John,
>
> Perhaps the graphical query interface of the Halo Extension may solve your
> concern. There you can define queries and assign various templates to the
> results, for example google pie or bar. A demonstration can be found here:

Consider this test case demo'd at the semantic mediawiki sandbox<>. Pages Saale and Saale2 do not exist. Unlike Saale2, Saale is referenced by page [[Halle (Saale)<>]] within [[Property:Adjacent to<>]].

"{{#show:Saale2| default=expected behavior. | ?Located in= }}" results in -> "expected behavior."
"{{#show:Saale| default=expected behavior. | ?Located in= }}" results in -> ""

Bug, no?

-Phlox.

Gentlemen:

Please ignore the previous message about a timeline failing to appear in my French site. The fact of the matter is that I had misspelled a key property name in my query. Once I corrected the spelling, the timeline appeared, exactly as I had designed it.
Temlakos

Gentlemen:

I have another difficulty with timelines, this time with regard to their use in multiple languages. Why is it that, although I can easily build a timeline in English, I cannot do so in French? I have already rebuilt and refreshed all my semantic data. (I found it full of the most appalling mistakes after I triggered an upgrade of the database structure with SMW 1.4.3.) After that, I was able to restore tables in French. But timelines will not appear.

Here are two versions of an article where I am attempting to build a timeline. The title is "Planet" in English and "Planète" in French. What I seek to do is create a timeline of the discovery of all objects that I have written up that have the Sun as a primary, sorted by date of discovery (except for the earth itself and all objects "known to the ancients"), listing the astronomer(s) involved and also the origin of the name and the celestial class(es) to which the object belongs.

Here is the article in English: <>
And in French: <>

One thought I had: several of the article and property names have accents and other diacriticals. Does that matter? If so, how do I work around that issue?

Any help would be appreciated--especially since I have registered my wiki in all its languages.

Temlakos

To the developers of Semantic Timelines, part of Semantic Result Formats:

Well, I've switched my timestamp properties to the standard Date datatype (after my long and intense project to enhance it to satisfy my requirements), and upgraded SMW to version 1.4.3 and SRF to version 1.4.5. Now I'm experimenting with timelines. And I've found out some very interesting things--or rather, I have some questions.

Why can't I build a timeline using events that took place before the birth of Christ, or the "common era," if you prefer that notation? Does Timeline have an "oldest allowable year"? My guess is the year 1000 AD/CE in the proleptic Gregorian calendar.
All I know is this: when I asked for a timeline of the Hebrew Kings of Judah, I got a notice: "Use a Javascript-enabled browser to view this element." In other words, no soap. But when I simply queried "[[Born > 1 January 1000]]" and set timelineposition = middle, I got a point at the birth of James Ussher, during the lifetimes of Johannes Kepler and Galileo Galilei.

However, even that display had a problem: why is it that, when I ask for a timeline or an eventline that creates bands between two dates for one article (typically Born-Died, or Began-Ended), the timeline markers are hard-limited to YEAR and MONTH? I tried to change them to DECADE and CENTURY (these were biographical articles), and no soap. However, single-event timelines (like querying celestial bodies having a common primary and sorting by Date of Discovery) do allow DECADE and CENTURY as timeline markers.

Don't get me wrong; timelines are now very useful to me. I rather enjoyed the feature that allows me to display non-date properties in a dialog balloon with any given event. For example, I can query the planets that were not known to the ancients, plot them on a timeline by date of discovery, and for each planet I can have the astronomer's name and any other properties I want show up when I click on the dot that represents that planet.

But I'd still like to know what the limitations of Timeline are, so that I can advise my editors on how and when to use them. For that matter: I have the PDF user's manual for SMW. Now does someone have a user's manual for SRF? Specifically, how can I use Calendar? What does Exhibit do? What is the difference between Timeline and Eventline?

Temlakos
https://sourceforge.net/p/semediawiki/mailman/semediawiki-user/?viewmonth=200908&viewday=21
Hello, Sage community.

I was able to adapt @Juanjo's answer, but then I did some incantation (which I don't completely understand) that generalized his method. The only problem is that I had to modify the code in the file sage.misc.latex.latex. Here I describe what I did.

First of all, I made a copy of the file I just mentioned, and added a small modification of part of your code. Here is the patch:

@@ -36,6 +36,7 @@
 from sage.misc.sage_ostools import have_program
 from sage.misc.temporary_file import tmp_dir
+from sage.all import RR, CC

 EMBEDDED_MODE = False

 COMMON_HEADER = \
@@ -921,6 +922,10 @@
         sage: latex((x,2), combine_all=True)
         x 2
     """
+    if x in RR:
+        return LatexExpr(r'\num{' + str(x) + '}')
+    elif x in CC:
+        return LatexExpr(r'\num{' + str(x.real_part()) + '+' + str(x.imag_part()) + 'i}')
     if has_latex_attr(x):
         return LatexExpr(x._latex_())
     try:

I saved this as LaTeX.py, then I made an import:

from LaTeX import latex

And here is the part I don't understand:

sage.misc.latex.latex = latex

This last line makes my new latex() function work on matrices, tuples, lists, etc. automatically. (Why?)
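A plausible explanation of the closing "(Why?)": the line rebinds the module attribute sage.misc.latex.latex, and code that formats containers reaches the latex function through that name, looking it up at call time, so it picks up the replacement automatically. Here is a minimal, self-contained sketch of that mechanism (plain Python, not Sage; in Sage the name is actually bound to a callable Latex object, but the rebinding works the same way):

```python
import types

# A toy module standing in for sage.misc.latex (this is not Sage code).
mod = types.ModuleType("fakelatex")

def original_latex(x):
    return "old:" + str(x)

mod.latex = original_latex

# Client code that, like Sage's container formatting, fetches the
# function through the module attribute at call time:
def render(x):
    return mod.latex(x)

def my_latex(x):
    return "new:" + str(x)

# The monkey-patch, analogous to: sage.misc.latex.latex = latex
mod.latex = my_latex

print(render(3))  # -> new:3
```

Because `render` resolves `mod.latex` anew on every call, every caller sees the new function without being touched itself.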
https://ask.sagemath.org/answers/47066/revisions/
Repeater level limit (NRF24L01 transport)

Could someone please confirm that repeater mode can be "chained" (I mean that there can be more than one repeater along the path between a leaf node and the gateway node). I've a bunch of sensors but I have problems with tree mode. The working setup for me is GW->RN->LF(RN), but I can't get GW->RN->RN->LF working.

RN - repeater node
RN(LF) - repeater node connected to GW or RN, last in chain
LF - leaf node (last in chain)

Second question. Generally (in most sensors) I'm using NRF24L01 modules with PCB antenna (not the "SMD" version; without PA). Is it OK to use the RF24_PA_LOW power mode setting, or is RF24_PA_MIN the only option for these modules?

Best regards
Piotr

You can chain more repeaters. You could set a fixed parent ID in the repeaters so they don't try to jump one. About NRF24 power, you can also use pa_max and pa_high, it just depends on the quality of the radio module.

@gohan said in Repeater level limit (NRF24L01 transport):

About NRF24 power, you can also use pa_max and pa_high, it just depends on the quality of the radio module.

Even versions without PA and a rubber duck antenna?

Yes, it works, but it depends on the chip quality: some perform better with lower power. It is a matter of doing tests to figure it out; there is no rule to follow with all those counterfeit nrf24 chips around.

@gohan said in Repeater level limit (NRF24L01 transport):

You can chain more repeaters.

I've managed to "chain repeaters", so your statement is 100% true. Still fighting with range, but "there is a hope" for my project.

@gohan said in Repeater level limit (NRF24L01 transport):

You could set a fixed parent ID in the repeaters so they don't try to jump one.

Is there a way to change the fixed parent programmatically (store it as a custom value in EEPROM and call a function setting it on startup, rather than as a #define requiring a recompile)? I've about 20 devices of the same type and I prefer to have the same firmware for all.
Second question (I've looked at the source and the answer is probably: for now it's very hard to do): is there a way to do the same with the transmit power of the NRF module?

@bilbolodz first question: it might be possible using the method suggested here: If you try it, please report back with the results. If it works we should document it somewhere where it is easier to find.

Second question: it might be possible using the same technique as above, but I think you would need to reset the node or trigger re-initialization of the radio. Maybe sleep is enough; I'm not sure if the radio is re-initialized after sleep.

Adding a reference for the second question: the radio object can be accessed through the global variable _radio (see here). Maybe you can just call the appropriate function?

@mfalkvidd said in Repeater level limit (NRF24L01 transport):

@bilbolodz first question: it might be possible using the method suggested here: If you try it, please report back with the results. If it works we should document it somewhere where it is easier to find.

For sure there is a possibility to change the ID of a node "on the fly": set

#define MY_NODE_ID AUTO

and send a C_INTERNAL I_ID_RESPONSE message with the new node ID. Working even without a node reboot. I will check your method with the parent ID. I will also try this method with the power level, but I don't know how to check the current NRF power programmatically.

@bilbolodz the level is RF24_PA_HIGH unless you have an override in your sketch. See

@mfalkvidd said in Repeater level limit (NRF24L01 transport):

The radio object can be accessed through the global variable _radio (see here). Maybe you can just call the appropriate function?

I've made a quick grep. It looks like _radio is available only for the RFM69 transport.
For NRF there is a function RF24_setRFSetup, but as parameter it takes MY_RF24_RF_SETUP, defined as:

#define MY_RF24_RF_SETUP (uint8_t)( ((MY_RF24_DATARATE & 0b10 ) << 4) | ((MY_RF24_DATARATE & 0b01 ) << 3) | (MY_RF24_PA_LEVEL << 1) ) + 1 // +1 for Si24R1

It's problematic to use it for the power level without messing up other things!

See if you can change the value and call _radio.RF24_initialize(); But I am not familiar with the inner workings of the radio code. Maybe running RF24_initialize() will have unintended side effects.

@mfalkvidd said in Repeater level limit (NRF24L01 transport):

@bilbolodz the level is RF24_PA_HIGH unless you have an override in your sketch.

That's obvious, I'm using it. The problem is that it's "#define'd", so the value of this "variable" is set during compilation and it's not possible to change it in the program!

@bilbolodz sorry, I must have misunderstood your question. You asked how to get the current power setting. I said that the current power setting will be the default value, unless you set it to something else. If you set it to something else, it will be set to the value you set it to. So just use the value you set it to? No need to get it from the radio?

Enabling MY_DEBUG_VERBOSE_RF24 would print the registers though. Maybe that's what you are asking for?
@mfalkvidd The problem is: my sensors network is over 30 nodes, a few repeaters, and so on; the majority of nodes are the same hardware devices. I want to "tune" my network, changing the NRF power level and MY_PARENT_NODE_ID "on the fly" (now they are "hard defined"). You've suggested using the method:

int8_t myNodeId;
#define MY_NODE_ID myNodeId

void before () {
  // read I/O pins and set myNodeId
  myNodeId = 32;
  Serial.println( myNodeId );
}

but with MY_PARENT_NODE_ID and MY_RF24_PA_LEVEL. Good idea, I will check if it works, BUT: there MUST be a way to check remotely IF it's working. There is no problem with MY_PARENT_NODE_ID because it affects the network topology and can be easily checked.

I don't know how to check the current power level "from the program". The ideal solution would be for the sketch to detect the current power level and suffix the sketch name with _MIN, _LOW, _HIGH or _MAX. It would allow me to have only two versions of the "HEX firmware" for OTA: with or without repeater mode. Other things (parent level and ID) I could change "on the fly" by sending a command (and maybe rebooting) to the sensor. Is it clear now?

@bilbolodz yup. Thanks for clarifying. And thanks for noticing that _radio is only available for rfm69. Seems a bit inconsistent to me, but there are probably reasons for it. It looks like transportGetTxPowerLevel() / RF24_getTxPowerLevel() will return some sort of mapping of the current power level. But these functions are internal/private, so I don't know how to access them. Verifying the parent node id should be possible by calling transportGetParentNodeId() or simply getParentNodeId();

@bilbolodz how about this? Sending a I_SIGNAL_REPORT_REQUEST message with the command "T" could be what you need?

@mfalkvidd said in Repeater level limit (NRF24L01 transport):

Sending a I_SIGNAL_REPORT_REQUEST message with the command "T" could be what you need?

Unfortunately MYSController in its current version doesn't support the I_SIGNAL_REPORT_REQUEST message. I will try it in code, thanks.

/**
 * @brief Get transport signal report
 * @param command:
 *   R  = RSSI (if available) of incoming @ref I_SIGNAL_REPORT_REQUEST message (from last hop)
 *   R! = RSSI (if available) of ACK to @ref I_SIGNAL_REPORT_REVERSE message received from last hop
 *   S  = SNR (if available) of incoming @ref I_SIGNAL_REPORT_REQUEST message (from last hop)
 *   S! = SNR (if available) of ACK to @ref I_SIGNAL_REPORT_REVERSE message received from last hop
 *   P  = TX powerlevel in %
 *   T  = TX powerlevel in dBm
 *   U  = Uplink quality (via ACK from parent node), avg. RSSI
 * @return Signal report (if report is not available, INVALID_RSSI, INVALID_SNR, INVALID_PERCENT, or INVALID_LEVEL is sent instead)
 */
int16_t transportSignalReport(const char command);

/**
 * @brief Get transport signal report
 * @param signalReport
 * @return report
 */
int16_t transportGetSignalReport(const signalReport_t signalReport);

These functions are available only in the development branch; they are not implemented in the stable code.

@bilbolodz yes. Everything in MySensors started off in the development branch. It takes time and effort to code and test. What you're asking for is interesting, but historically it has not been interesting enough for anyone to "man up" and spend the time needed.

It looks like I've achieved my goal. I'm able to dynamically change the parent node id (proved in the lab) and the power level (though I have no idea how to verify the latter). You can send a V_VAR1 message with the number of the preferred parent, or V_VAR2 with the requested power (values: 0 - RF24_PA_MIN, 1 - RF24_PA_LOW, 2 - RF24_PA_HIGH, 3 - RF24_PA_MAX), and then REBOOT the node (required for the changes to take effect). The sketch version reported to the controller changes dynamically depending on the set power level.
My "repeater" code below:

#define SKETCH_NAME "Repeater Node L1"
#define SV "1.0.1"
#define MY_RADIO_NRF24
#define MY_REPEATER_FEATURE

int8_t myParrentNodeId;
int8_t myRF24PALevel;
#define MY_PARENT_NODE_ID myParrentNodeId
#define MY_RF24_PA_LEVEL myRF24PALevel

#include <MySensors.h>

#ifndef MY_REPEATER_FEATURE
#define SKETCH_VERSION1 SV
#else
#define SKETCH_VERSION1 SV "R"
#endif
#define SKETCH_VERSION SKETCH_VERSION1 " "

uint8_t SketchVersion[sizeof(SKETCH_VERSION)];

#define EEPROM_PARENT 0
#define EEPROM_PA_LEVEL 1

void before() {
  int8_t tmpui = loadState(EEPROM_PARENT);
  if (tmpui == 0xFF) {
    myParrentNodeId = 0;
  } else {
    myParrentNodeId = tmpui;
  }
  memcpy(SketchVersion, SKETCH_VERSION, sizeof(SKETCH_VERSION));
  tmpui = loadState(EEPROM_PA_LEVEL);
  switch (tmpui) {
    case 0:
      myRF24PALevel = RF24_PA_MIN;
      memcpy(SketchVersion + sizeof(SKETCH_VERSION) - 4, "MIN", 3);
      break;
    case 1:
      myRF24PALevel = RF24_PA_LOW;
      memcpy(SketchVersion + sizeof(SKETCH_VERSION) - 4, "LOW", 3);
      break;
    case 2:
      myRF24PALevel = RF24_PA_HIGH;
      memcpy(SketchVersion + sizeof(SKETCH_VERSION) - 4, "HIG", 3);
      break;
    case 3:
      myRF24PALevel = RF24_PA_MAX;
      memcpy(SketchVersion + sizeof(SKETCH_VERSION) - 4, "MAX", 3);
      break;
    default:
      myRF24PALevel = RF24_PA_MAX;
      break;
  }
}

void setup() {
}

void presentation() {
  // Send the sensor node sketch version information to the gateway
  //sendSketchInfo("Repeater Node L1 MAX", "1.0");
  sendSketchInfo(SKETCH_NAME, SketchVersion);
}

void loop() {
}

void receive(const MyMessage &message) {
  if (!message.isAck()) {
    if (message.type == V_VAR1) {
      uint8_t tmpui = message.getByte();
      uint8_t tmpui1 = loadState(EEPROM_PARENT);
      if (tmpui != tmpui1) {
        saveState(EEPROM_PARENT, tmpui);
      }
    } else if (message.type == V_VAR2) {
      uint8_t tmpui = message.getByte();
      uint8_t tmpui1 = loadState(EEPROM_PA_LEVEL);
      if (tmpui >= 0 && tmpui <= 3) {
        if (tmpui != tmpui1) {
          saveState(EEPROM_PA_LEVEL, tmpui);
        }
      }
    }
  }
}

@mfalkvidd said in Repeater level limit (NRF24L01 transport):

Everything in MySensors
started off in the development branch

If you are using the dev branch, please test my sketch and check changing the power mode with an I_SIGNAL_REPORT_REQUEST message or the transportGetSignalReport(const signalReport_t signalReport) function.
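For controller-side testing of that V_VAR1/V_VAR2 scheme, one can compose raw MySensors serial-API lines by hand. The helper below is hypothetical (the node and child ids are made up); the constants V_VAR1 = 24, V_VAR2 = 25 and command 1 = SET follow the MySensors serial protocol:

```python
# Hypothetical controller-side helper: compose raw MySensors serial-API
# lines ("node-id;child-sensor-id;command;ack;type;payload") to push a
# new parent id (V_VAR1) or power level (V_VAR2) to a node.
V_VAR1, V_VAR2 = 24, 25   # set/req types per the MySensors serial protocol
C_SET = 1                 # command 1 = SET

def mysensors_set(node_id, child_id, v_type, payload):
    return "%d;%d;%d;0;%d;%s\n" % (node_id, child_id, C_SET, v_type, payload)

# e.g. tell (made-up) node 20 to use parent 5 and power level 2 (HIGH):
line_parent = mysensors_set(20, 0, V_VAR1, "5")
line_power = mysensors_set(20, 0, V_VAR2, "2")
```

Written to the gateway's serial port (or sent to an Ethernet gateway), each line would reach the node's receive() callback; per the sketch above, a reboot is still needed afterwards.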
https://forum.mysensors.org/topic/7213/repeatrer-level-limit-nrf24l01-transport
In this article a few machine learning problems from a few online courses will be described.

1. Fitting the distribution of heights data

This problem appeared as an assignment problem in the coursera course Mathematics for Machine Learning: Multivariate Calculus. The description of the problem is taken from the assignment itself. This video explains this problem and the solution in detail.

In this assessment, steepest descent will be used to fit a Gaussian model to the distribution of heights data that was first introduced in Mathematics for Machine Learning: Linear Algebra. The algorithm is the same as gradient descent, but this time, instead of descending a pre-defined function, we shall descend the χ2 (chi squared) function, which is a function both of the parameters that we are to optimize and of the data that the model is to fit to.

Background

We are given a dataset with 100 data-points for the heights of people in a population, with x as heights (in cm.) and y as the probability that there is a person with that height; the first few datapoints are shown in the following table:

The dataset can be plotted as a histogram, i.e., a bar chart where each bar has a width representing a range of heights, and an area which is the probability of finding a person with a height in that range, using the following code.

import matplotlib.pylab as plt

plt.figure(figsize=(15,5))
plt.bar(x, y, width=3, color=greenTrans, edgecolor=green)
plt.xlabel('x')
plt.ylabel('y')
plt.show()

We can model that data with a function, such as a Gaussian, which we can specify with two parameters, rather than holding all the data in the histogram. The Gaussian function is given as

f(x; μ, σ) = exp(−(x − μ)² / (2σ²)) / (σ√(2π))

By definition, χ2 is the squared difference between the data and the model, i.e., χ2 = |y − f(x; μ, σ)|^2. Here x and y are represented as vectors, as these are lists of all of the data points; the |abs-squared|^2 encodes the squaring and summing of the residuals on each bar.
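The model f and its partial derivatives, which the steepest-descent step relies on, are not shown in the extract. Assuming the normalized Gaussian form f(x; μ, σ) = exp(−(x−μ)²/(2σ²))/(σ√(2π)), a sketch (not the course notebook's exact code) is:

```python
import numpy as np

# Sketch, assuming a normalized Gaussian model (not the course's exact code),
# of the model and the partial derivatives used by the steepest-descent step.
def f(x, mu, sig):
    return np.exp(-(x - mu)**2 / (2 * sig**2)) / (sig * np.sqrt(2 * np.pi))

def dfdmu(x, mu, sig):
    # df/dmu = f * (x - mu) / sig^2
    return f(x, mu, sig) * (x - mu) / sig**2

def dfdsig(x, mu, sig):
    # df/dsig = f * ((x - mu)^2 / sig^3 - 1 / sig)
    return f(x, mu, sig) * ((x - mu)**2 / sig**3 - 1 / sig)
```

Both derivatives follow from differentiating log f and multiplying back by f, which keeps the expressions compact and numerically stable.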
To improve the fit, we shall want to alter the parameters μ and σ, and ask how that changes the χ2. That is, we will need to calculate the Jacobian

J = [ ∂(χ2)/∂μ , ∂(χ2)/∂σ ]

Let's look at the first term, ∂(χ2)/∂μ; using the multi-variate chain rule, this can be written as

∂(χ2)/∂μ = −2 (y − f(x; μ, σ)) · ∂f/∂μ

A similar expression for ∂(χ2)/∂σ can be obtained as follows:

∂(χ2)/∂σ = −2 (y − f(x; μ, σ)) · ∂f/∂σ

The Jacobians rely on the derivatives ∂f/∂μ and ∂f/∂σ. It's pretty straightforward to implement the python functions dfdmu() and dfdsig() to compute the derivatives.

Next, recall that steepest descent shall move around in parameter space proportional to the negative of the Jacobian, i.e., δ(μ, σ) = −(aggression) · J, with the constant of proportionality being the aggression of the algorithm. The following function computes the expression for the Jacobian.

def steepest_step (x, y, mu, sig, aggression) :
    J = np.array([
        -2*(y - f(x,mu,sig)) @ dfdmu(x,mu,sig),
        -2*(y - f(x,mu,sig)) @ dfdsig(x,mu,sig)
    ])
    step = -J * aggression
    return step

We need to run a few rounds of steepest descent to fit the model. The next piece of code builds the model with steepest descent to fit the heights data:

# Do a few rounds of steepest descent.
for i in range(50) :
    dmu, dsig = steepest_step(x, y, mu, sig, 2000)
    mu += dmu
    sig += dsig
    p = np.append(p, [[mu,sig]], axis=0)

The following animations show the steepest descent path for the parameters and the model fitted to the data, respectively. The data is shown in orange, the model in magenta, and where they overlap it's shown in green. χ2 is represented in the figure as the sum of the squares of the pink and orange bars. This particular model has not been fit well with the initial guess--since there is not a strong overlap. But gradually the model fits the data better as more and more iterations of steepest descent are run. Note that the path taken through parameter space is not necessarily the most direct path, as with steepest descent we always move perpendicular to the contours.

2. Back-propagation
Feed forward

The following figure shows the feed-forward equations.

sigma = lambda z : 1 / (1 + np.exp(-z))
d_sigma = lambda z : np.cosh(z/2)**(-2) / 4

# This function initialises the network with its structure; it also resets any training already done.
def reset_network (n1 = 6, n2 = 7, random=np.random) :
    global W1, W2, W3, b1, b2, b3
    W1 = random.randn(n1, 1) / 2
    W2 = random.randn(n2, n1) / 2
    W3 = random.randn(2, n2) / 2
    b1 = random.randn(n1, 1) / 2
    b2 = random.randn(n2, 1) / 2
    b3 = random.randn(2, 1) / 2

# This function feeds forward each activation to the next layer. It returns all weighted sums and activations.
def network_function(a0) :
    z1 = W1 @ a0 + b1
    a1 = sigma(z1)
    z2 = W2 @ a1 + b2
    a2 = sigma(z2)
    z3 = W3 @ a2 + b3
    a3 = sigma(z3)
    return a0, z1, a1, z2, a2, z3, a3

# This is the cost function of a neural network with respect to a training set.
def cost(x, y) :
    return np.linalg.norm(network_function(x)[-1] - y)**2 / x.size

Backpropagation

Next we need to implement the functions for the Jacobian of the cost function with respect to the weights and biases. We will start with layer 3, which is the easiest, and work backwards through the layers. The cost function C is the sum (or average) of the squared losses over all training examples. The following python code shows how the J_W3 function can be implemented.

# Jacobian for the third layer weights.
def J_W3 (x, y) :
    # First get all the activations and weighted sums at each layer of the network.
    a0, z1, a1, z2, a2, z3, a3 = network_function(x)
    # We'll use the variable J to store parts of our result as we go along, updating it in each line.
    # Firstly, we calculate dC/da3, using the expressions above.
    J = 2 * (a3 - y)
    # Next multiply the result we've calculated by the derivative of sigma, evaluated at z3.
    J = J * d_sigma(z3)
    # Then we take the dot product (along the axis that holds the training examples) with the final partial derivative,
    # i.e.
    # dz3/dW3 = a2
    # and divide by the number of training examples, for the average over all training examples.
    J = J @ a2.T / x.size
    # Finally return the result out of the function.
    return J

The following python code snippet implements the Gradient Descent algorithm (where the parameter aggression represents the learning rate and noise acts as a regularization parameter here):

while iterations < max_iteration:
    j_W1 = J_W1(x, y) * (1 + np.random.randn() * noise)
    j_W2 = J_W2(x, y) * (1 + np.random.randn() * noise)
    j_W3 = J_W3(x, y) * (1 + np.random.randn() * noise)
    j_b1 = J_b1(x, y) * (1 + np.random.randn() * noise)
    j_b2 = J_b2(x, y) * (1 + np.random.randn() * noise)
    j_b3 = J_b3(x, y) * (1 + np.random.randn() * noise)
    W1 = W1 - j_W1 * aggression
    W2 = W2 - j_W2 * aggression
    W3 = W3 - j_W3 * aggression
    b1 = b1 - j_b1 * aggression
    b2 = b2 - j_b2 * aggression
    b3 = b3 - j_b3 * aggression
    iterations += 1

3. The Kernel Perceptron

import numpy as np

def kernel(x, z, type, s):
    if type == 'rbf':
        return np.exp(-np.dot(x-z, x-z)/s**2)
    if type == 'quadratic':
        return (1 + np.dot(x, z))**2
    return np.dot(x, z)

Results

Results with Kernel SVM Classifier (sklearn)

The following code and the figures show the decision boundaries and the support vectors (datapoints with larger size) learnt with sklearn SVC.

from sklearn.svm import SVC

x = data[:,0:2]
y = data[:,2]
clf = SVC(C=C, kernel='rbf', gamma=1.0/(s*s))
clf.fit(x,y)
clf.support_

Dataset 1: with polynomial kernel (degree=2, C=1); with RBF kernel (C=10, σ = 10)
Dataset 2: with polynomial kernel (degree=2, C=1); with RBF kernel (C=10, σ = 10)
Dataset 3: with RBF kernel (C=10, σ = 10)

4. Models for handwritten digit classification

This problem is taken from a few assignments from the edX course Machine Learning Fundamentals by UCSD (by Prof. Sanjay Dasgupta). The problem description is taken from the course itself.

In this assignment we will build a few classifiers that take an image of a handwritten digit and output a label 0-9.
We will start with a particularly simple strategy for this problem, known as the nearest neighbor classifier; then a Gaussian generative model for classification will be built, and finally an SVM model will be used for classification.

The MNIST dataset

MNIST is a classic dataset in machine learning, consisting of 28x28 gray-scale images of handwritten digits. The original training set contains 60,000 examples and the test set contains 10,000 examples. Here we will be working with a subset of this data: a training set of 7,500 examples and a test set of 1,000 examples. The following figure shows the first 25 digits from the training dataset along with the labels. Similarly, the following figure shows the first 25 digits from the test dataset along with the ground truth labels.

Nearest neighbor for handwritten digit recognition

Squared Euclidean distance

To compute nearest neighbors in our data set, we need to first be able to compute distances between data points. A natural distance function is Euclidean distance: for two vectors x, y ∈ ℝ^d, their Euclidean distance is defined as

‖x − y‖ = √( Σ_i (x_i − y_i)² )

Often we omit the square root, and simply compute squared Euclidean distance. For the purposes of nearest neighbor computations, the two are equivalent: for three vectors x, y, z ∈ ℝ^d, we have ∥x−y∥ ≤ ∥x−z∥ if and only if ∥x−y∥^2 ≤ ∥x−z∥^2. The following python function computes squared Euclidean distance.

## Computes squared Euclidean distance between two vectors.
def squared_dist(x,y):
    return np.sum(np.square(x-y))

Computing nearest neighbors

Now that we have a distance function defined, we can turn to (1-)nearest neighbor classification, with the following naive implementation with 0 training / pre-processing time.
## Takes a vector x and returns the index of its nearest neighbor in train_data
def find_NN(x):
    # Compute distances from x to every row in train_data
    distances = [squared_dist(x,train_data[i,]) for i in range(len(train_labels))]
    # Get the index of the smallest distance
    return np.argmin(distances)

## Takes a vector x and returns the class of its nearest neighbor in train_data
def NN_classifier(x):
    # Get the index of the nearest neighbor
    index = find_NN(x)
    # Return its class
    return train_labels[index]

The following figure shows a test example correctly classified by finding the nearest training example and another incorrectly classified.

Processing the full test set

Now let's apply our nearest neighbor classifier over the full data set. Note that to classify each test point, our code takes a full pass over each of the 7500 training examples. Thus we should not expect testing to be very fast.

## Predict on each test data point (and time it!)
t_before = time.time()
test_predictions = [NN_classifier(test_data[i,]) for i in range(len(test_labels))]
t_after = time.time()

## Compute the error
err_positions = np.not_equal(test_predictions, test_labels)
error = float(np.sum(err_positions))/len(test_labels)
print("Error of nearest neighbor classifier: ", error)
print("Classification time (seconds): ", t_after - t_before)

('Error of nearest neighbor classifier: ', 0.046)
('Classification time (seconds): ', 41.04900002479553)

The next figure shows the confusion matrix for the classification.

Faster nearest neighbor methods

Performing nearest neighbor classification in the way we have presented requires a full pass through the training set in order to classify a single point. If there are N training points in ℝ^d, this takes O(Nd) time. Fortunately, there are faster methods to perform nearest neighbor look-up if we are willing to spend some time pre-processing the training set.
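Before reaching for tree structures, a large constant-factor speedup is already available by vectorizing the distance computation with NumPy broadcasting, so that each query makes one C-level pass over the training matrix instead of a Python-level loop. A minimal sketch on synthetic data (the arrays and helper names here are illustrative, not from the assignment):

```python
import numpy as np

def squared_dists_vectorized(x, train_data):
    # Broadcast (N, d) - (d,) -> (N, d), then sum squares along each row.
    diffs = train_data - x
    return np.sum(diffs * diffs, axis=1)

def find_NN_vectorized(x, train_data):
    # Index of the training row closest to x under squared Euclidean distance.
    return int(np.argmin(squared_dists_vectorized(x, train_data)))

# Tiny synthetic check: the vectorized result matches the naive loop.
rng = np.random.RandomState(0)
train = rng.randn(100, 4)
query = rng.randn(4)
naive = min(range(len(train)), key=lambda i: np.sum(np.square(query - train[i])))
assert find_NN_vectorized(query, train) == naive
```

This changes only the constant factor, not the O(Nd) asymptotics; the tree structures below attack the asymptotics themselves.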
scikit-learn has fast implementations of two useful nearest neighbor data structures: the ball tree and the k-d tree.

from sklearn.neighbors import BallTree

## Build nearest neighbor structure on training data
t_before = time.time()
ball_tree = BallTree(train_data)
t_after = time.time()

## Compute training time
t_training = t_after - t_before
print("Time to build data structure (seconds): ", t_training)

## Get nearest neighbor predictions on testing data
t_before = time.time()
test_neighbors = np.squeeze(ball_tree.query(test_data, k=1, return_distance=False))
ball_tree_predictions = train_labels[test_neighbors]
t_after = time.time()

## Compute testing time
t_testing = t_after - t_before
print("Time to classify test set (seconds): ", t_testing)

('Time to build data structure (seconds): ', 0.3269999027252197)
('Time to classify test set (seconds): ', 6.457000017166138)

Similarly, with the KDTree data structure we have the following runtime:

('Time to build data structure (seconds): ', 0.2889997959136963)
('Time to classify test set (seconds): ', 7.982000112533569)

Next let's use sklearn's KNeighborsClassifier to compare the runtimes.

from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=1)
neigh.fit(train_data, train_labels)
predictions = neigh.predict(test_data)

('Training Time (seconds): ', 0.2999999523162842)
('Time to classify test set (seconds): ', 8.273000001907349)

The next figure shows the error rate on the test dataset with the k-NearestNeighbor classifier for different values of k.
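The k > 1 case behind the error-rate figure is a small extension of the 1-NN idea: classify by majority vote among the k closest training points. A minimal, sklearn-free sketch on a toy two-cluster dataset (the data and function name are illustrative, not part of the assignment):

```python
import numpy as np
from collections import Counter

def knn_predict(x, train_data, train_labels, k=1):
    # Squared Euclidean distances from x to every training point.
    d2 = np.sum((train_data - x) ** 2, axis=1)
    # Indices of the k smallest distances, then a majority vote on labels.
    nearest = np.argsort(d2)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Two well-separated clusters: label 0 near the origin, label 1 near (10, 10).
train = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0], [10.1, 10.0]])
labels = np.array([0, 0, 1, 1])
assert knn_predict(np.array([0.2, 0.1]), train, labels, k=3) == 0
assert knn_predict(np.array([9.8, 10.2]), train, labels, k=3) == 1
```

With k = 1 this reduces to the NN_classifier above; larger k trades a little bias for robustness to mislabeled neighbors, which is what the figure explores.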
Training the 1-NN classifier on the entire training dataset with 60k images and testing on the entire test set with 10k images yields the following results:

('Training Time (seconds): ', 19.694000005722046)
('Time to classify test set (seconds): ', 707.7590000629425)

with the following accuracy on the test dataset and the confusion matrix:

accuracy: 0.9691 (error 3.09%)

Gaussian generative models for handwritten digit classification

Recall that the 1-NN classifier yielded a 3.09% test error rate on the MNIST data set of handwritten digits. We will now see that a Gaussian generative model does almost as well, while being significantly faster and more compact. For this assignment we shall be using the entire MNIST dataset: the training dataset contains 60k images and the test dataset contains 10k images.

Fit a Gaussian generative model to the training data

The following figure, taken from the lecture videos of the same course, describes the basic theory.

Let's define a function, fit_generative_model, that takes as input a training set (data x and labels y) and fits a Gaussian generative model to it. It should return the parameters of this generative model; for each label j = 0,1,...,9, we have:

pi[j]: the frequency of that label
mu[j]: the 784-dimensional mean vector
sigma[j]: the 784×784 covariance matrix

This means that pi is 10×1, mu is 10×784, and sigma is 10×784×784. We need to fit a Gaussian generative model. The parameters pi, mu and sigma are computed with the corresponding maximum likelihood estimates (MLE): the empirical count, mean and covariance matrix for each of the class labels from the data. However, now there is an added ingredient. The empirical covariances are very likely to be singular (or close to singular), which means that we won't be able to do calculations with them. Thus it is important to regularize these matrices. The standard way of doing this is to add cI to them, where c is some constant and I is the 784-dimensional identity matrix.
(To put it another way, we compute the empirical covariances and then increase their diagonal entries by some constant c.) This modification is guaranteed to yield covariance matrices that are non-singular, for any c > 0, no matter how small. But this doesn't mean that we should make c as small as possible. Indeed, c is now a parameter, and by setting it appropriately, we can improve the performance of the model. We will study regularization in greater detail over the coming weeks.

The following python code snippet shows the function:

def fit_generative_model(x,y):
    k = 10  # labels 0,1,...,k-1
    d = (x.shape)[1]  # number of features
    mu = np.zeros((k,d))
    sigma = np.zeros((k,d,d))
    pi = np.zeros(k)
    c = 3500  # regularizer
    for label in range(k):
        indices = (y == label)
        pi[label] = ...     # empirical count
        mu[label] = ...     # empirical mean
        sigma[label] = ...  # empirical regularized covariance matrix
    return mu, sigma, pi

Now let's visualize the means of the Gaussians for the digits.

Time taken to fit the generative model (in seconds): 2.60100007057

Make predictions on test data

# Compute log Pr(label|image) for each [test image,label] pair.
k = 10
score = np.zeros((len(test_labels),k))
for label in range(0,k):
    rv = multivariate_normal(mean=mu[label], cov=sigma[label])
    for i in range(0,len(test_labels)):
        score[i,label] = np.log(pi[label]) + rv.logpdf(test_data[i,:])
predictions = np.argmax(score, axis=1)

# Finally, tally up score
errors = np.sum(predictions != test_labels)
print("The model makes " + str(errors) + " errors out of 10000")

Time taken to classify the test data (in seconds): 19.5959999561

The following figure shows the confusion matrix.

SVM for handwritten digit classification

The entire training dataset from the MNIST dataset is used to train the SVM model: the training dataset contains 60k images and the test dataset contains 10k images.
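The assignment deliberately leaves the three MLE estimates as `...` placeholders. One possible completion, following the description above (empirical class frequency, empirical mean, and empirical covariance with cI added), can be sketched and sanity-checked on synthetic data; note that the small k, d and c below are for the check only, not the MNIST values:

```python
import numpy as np

def fit_generative_model(x, y, c=1.0, k=2):
    # MLE estimates per class, with each covariance regularized by adding c*I.
    # This is a hypothetical completion of the '...' placeholders above.
    d = x.shape[1]
    mu = np.zeros((k, d))
    sigma = np.zeros((k, d, d))
    pi = np.zeros(k)
    for label in range(k):
        indices = (y == label)
        pi[label] = indices.sum() / float(len(y))   # empirical class frequency
        mu[label] = x[indices].mean(axis=0)         # empirical mean
        # bias=True gives the MLE (1/n) covariance; add c*I to regularize.
        sigma[label] = np.cov(x[indices], rowvar=False, bias=True) + c * np.eye(d)
    return mu, sigma, pi

# Synthetic two-class data: class frequencies must sum to 1 and every
# regularized covariance must be non-singular.
rng = np.random.RandomState(1)
x = np.vstack([rng.randn(30, 3), rng.randn(20, 3) + 5.0])
y = np.array([0] * 30 + [1] * 20)
mu, sigma, pi = fit_generative_model(x, y)
assert abs(pi.sum() - 1.0) < 1e-12
assert all(np.linalg.det(sigma[j]) > 0 for j in range(2))
```

Since the empirical covariance is positive semi-definite, adding cI with c > 0 makes every eigenvalue at least c, which is exactly the non-singularity guarantee discussed above.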
First let's try a linear SVM, with the following python code:

from sklearn.svm import LinearSVC
clf = LinearSVC(C=C, loss='hinge')
clf.fit(train_data,train_labels)
score = clf.score(test_data,test_labels)

The following figure shows the training and test accuracies of LinearSVC with different values of the hyper-parameter C.

Next let's try an SVM with the quadratic kernel; as can be seen, it gives 98.06% accuracy on the test dataset with C=1.

from sklearn.svm import SVC
clf = SVC(C=1., kernel='poly', degree=2)
clf.fit(train_data,train_labels)
print(clf.score(train_data,train_labels))
print(clf.score(test_data,test_labels))

training accuracy: 1.0
test accuracy: 0.9806 (error: 1.94%)

The following figure shows the confusion matrix:
https://sandipanweb.wordpress.com/2018/05/
Question: I need to convert an ASCII string like "hello2" into its decimal and/or hexadecimal representation (a numeric form, the specific kind is irrelevant). So, "hello2" would be: 68 65 6c 6c 6f 32 in hex. How do I do this in C++ without just using a giant if statement?

EDIT: Okay so this is the solution I went with:

int main()
{
    string c = "B";
    char *cs = new char[c.size() + 1];
    std::strcpy(cs, c.c_str());
    cout << cs << endl;
    char a = *cs;
    int as = a;
    cout << as << endl;
    delete[] cs;  // free the copy (the original snippet leaked it)
    return 0;
}

Solution:1 You can use printf() to write the result to stdout or you could use sprintf / snprintf to write the result to a string. The key here is the %X in the format string.

#include <cstdio>
#include <cstring>

int main(int argc, char **argv)
{
    const char *string = "hello2";
    int i;
    for (i = 0; i < strlen(string); i++)
        printf("%X", string[i]);
    return 0;
}

If dealing with a C++ std::string, you could use the string's c_str() method to yield a C character array.

Solution:2 Just print it out in hex, something like:

for (int i=0; i<your_string.size(); i++)
    std::cout << std::hex << (unsigned int)your_string[i] << " ";

Chances are you'll want to set the precision and width to always give 2 digits and such, but the general idea remains the same. Personally, if I were doing it I'd probably use printf("%.2x");, as it does the right thing with considerably less hassle.

Solution:3 A string is just an array of chars, so all you need to do is loop from 0 to strlen(str)-1, and use printf() or something similar to format each character as decimal/hexadecimal.
Solution:4

#include <algorithm>
#include <iomanip>
#include <iostream>
#include <iterator>
#include <string>

int main()
{
    std::string hello = "Hello, world!";
    std::cout << std::hex << std::setw(2) << std::setfill('0');
    std::copy(hello.begin(), hello.end(),
              std::ostream_iterator<unsigned>(std::cout, " "));
    std::cout << std::endl;
}

Solution:5

for(int i = 0; i < string.size(); i++) {
    std::cout << std::hex << (unsigned int)string[i];
}
http://www.toontricks.com/2019/06/tutorial-convert-ascii-string-into.html
Wa-Tor is a population dynamics simulation described by Alexander Dewdney in Scientific American (1984): "Computer Recreations: Sharks and fish wage an ecological war on the toroidal planet Wa-Tor".

In the above animation, fish (prey) are pink and sharks (predators) are yellow. In this case, the populations of each fluctuate over time, but don't lead to extinction of either species.

The rules of the simulation are well-described in the Wikipedia article on Wa-Tor, but in summary: the Wa-Tor world consists of a two-dimensional grid of cells which may be empty, contain a fish or contain a shark. The grid wraps top-to-bottom and left-to-right so may also be thought of as a torus (hence Wa-Tor). Time progresses in discrete ticks, called chronons. At each chronon, each creature's state evolves according to the following rules (partially reconstructed here from the code below, where the original list was truncated):

- A fish moves at random to an adjacent unoccupied cell. Once its fertility reaches a threshold (fertility_threshold in the code below), it reproduces by leaving another fish behind on its previous cell after it moves. Afterwards, its fertility is reset to zero;
- A shark eats a randomly-chosen adjacent fish if there is one; otherwise it moves at random to an adjacent unoccupied cell, losing one unit of energy. A shark whose energy runs out dies, and a shark reproduces in the same way as a fish once its fertility reaches its threshold.

Here is the code. Various parts can be customized where indicated. It produces a sequence of PNG files which can be converted into an animation as above with e.g. ffmpeg or ImageMagick's convert. Some care is needed to adjust the simulation parameters to avoid the common outcomes of (a) extinction of all species (first the sharks eat all the fish, then they die of starvation) or (b) extinction of all the sharks (too few fish to begin with so the sharks die out before the fish breed to saturation).

import random
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

EMPTY = 0
FISH = 1
SHARK = 2

# Colour the cells for the above states in this order:
colors = ['#00008b', '#ff69b4', '#ffd700']
n_bin = 3
cm = LinearSegmentedColormap.from_list('wator_cmap', colors, N=n_bin)

# Run the simulation for MAX_CHRONONS chronons (time intervals).
MAX_CHRONONS = 400
# Save every SAVE_EVERYth chronon iteration.
SAVE_EVERY = 5

# PRNG seed.
SEED = 10
random.seed(SEED)

initial_energies = {FISH: 20, SHARK: 3}
fertility_thresholds = {FISH: 4, SHARK: 12}

class Creature():
    """A sea creature living in Wa-Tor world."""

    def __init__(self, id, x, y, init_energy, fertility_threshold):
        """Initialize the creature.

        id is an integer identifying the creature.
        x, y is the creature's position in the Wa-Tor world grid.
        init_energy is the creature's initial energy: this decreases by 1
            each time the creature moves and if it reaches 0 the creature
            dies.
        fertility_threshold: each chronon, the creature's fertility
            increases by 1. When it reaches fertility_threshold, the
            creature reproduces.

        """
        self.id = id
        self.x, self.y = x, y
        self.energy = init_energy
        self.fertility_threshold = fertility_threshold
        self.fertility = 0
        self.dead = False

class World():
    """The Wa-Tor world."""

    def __init__(self, width=75, height=50):
        """Initialize (but don't populate) the Wa-Tor world."""
        self.width, self.height = width, height
        self.ncells = width * height
        self.grid = [[EMPTY]*width for y in range(height)]
        self.creatures = []

    def spawn_creature(self, creature_id, x, y):
        """Spawn a creature of type ID creature_id at location x,y."""
        creature = Creature(creature_id, x, y,
                            initial_energies[creature_id],
                            fertility_thresholds[creature_id])
        self.creatures.append(creature)
        self.grid[y][x] = creature

    def populate_world(self, nfish=120, nsharks=40):
        """Populate the Wa-Tor world with fish and sharks."""
        self.nfish, self.nsharks = nfish, nsharks

        def place_creatures(ncreatures, creature_id):
            """Place ncreatures of type ID creature_id in the Wa-Tor world."""
            for i in range(ncreatures):
                while True:
                    x, y = divmod(random.randrange(self.ncells), self.height)
                    if not self.grid[y][x]:
                        self.spawn_creature(creature_id, x, y)
                        break

        place_creatures(self.nfish, FISH)
        place_creatures(self.nsharks, SHARK)

    def get_world_image_array(self):
        """Return a 2D array of creature type IDs from the world grid."""
        return [[self.grid[y][x].id if self.grid[y][x] else 0 for x in
range(self.width)]
                for y in range(self.height)]

    def get_world_image(self):
        """Create a Matplotlib figure plotting the world."""
        im = self.get_world_image_array()
        fig = plt.figure(figsize=(8.3333, 6.25), dpi=72)
        ax = fig.add_subplot(111)
        ax.imshow(im, interpolation='nearest', cmap=cm)
        # Remove ticks, border, axis frame, etc
        ax.set_xticks([])
        ax.set_yticks([])
        ax.axis('off')
        return fig

    def show_world(self):
        """Show the world as a Matplotlib image."""
        fig = self.get_world_image()
        plt.show()
        plt.close(fig)

    def save_world(self, filename):
        """Save a Matplotlib image of the world as filename."""
        fig = self.get_world_image()
        # NB Ensure there's no padding around the image plot
        plt.savefig(filename, dpi=72, bbox_inches='tight', pad_inches=0)
        plt.close(fig)

    def get_neighbours(self, x, y):
        """Return a dictionary of the contents of cells neighbouring (x,y).

        The dictionary is keyed by the neighbour cell's position and
        contains either EMPTY or the instance of the creature occupying
        that cell.

        """
        neighbours = {}
        for dx, dy in ((0,-1), (1,0), (0,1), (-1,0)):
            xp, yp = (x+dx) % self.width, (y+dy) % self.height
            neighbours[xp,yp] = self.grid[yp][xp]
        return neighbours

    def evolve_creature(self, creature):
        """Evolve a given creature forward in time by one chronon."""
        neighbours = self.get_neighbours(creature.x, creature.y)
        creature.fertility += 1
        moved = False
        if creature.id == SHARK:
            try:
                # Try to pick a random fish to eat.
                xp, yp = random.choice([pos for pos in neighbours
                                if neighbours[pos]!=EMPTY
                                    and neighbours[pos].id==FISH])
                # Eat the fish. Yum yum.
                creature.energy += 2
                self.grid[yp][xp].dead = True
                self.grid[yp][xp] = EMPTY
                moved = True
            except IndexError:
                # No fish to eat: just move to a vacant cell if possible.
                pass
        if not moved:
            # Try to move to a vacant cell
            try:
                xp, yp = random.choice([pos for pos in neighbours
                                            if neighbours[pos]==EMPTY])
                if creature.id != FISH:
                    # The shark's energy decreases by one unit when it moves.
                    creature.energy -= 1
                moved = True
            except IndexError:
                # Surrounding cells are all full: no movement.
                xp, yp = creature.x, creature.y
        if creature.energy < 0:
            # Creature dies.
            creature.dead = True
            self.grid[creature.y][creature.x] = EMPTY
        elif moved:
            # Remember the creature's old position.
            x, y = creature.x, creature.y
            # Set new position
            creature.x, creature.y = xp, yp
            self.grid[yp][xp] = creature
            if creature.fertility >= creature.fertility_threshold:
                # Spawn a new creature and reset fertility.
                creature.fertility = 0
                self.spawn_creature(creature.id, x, y)
            else:
                # Leave the old cell vacant.
                self.grid[y][x] = EMPTY

    def evolve_world(self):
        """Evolve the Wa-Tor world forward in time by one chronon."""
        # Shuffle the creatures list so that we don't always evolve the same
        # creatures first.
        random.shuffle(self.creatures)
        # NB The self.creatures list is going to grow as new creatures are
        # spawned, so loop over indices into the list as it stands now.
        ncreatures = len(self.creatures)
        for i in range(ncreatures):
            creature = self.creatures[i]
            if creature.dead:
                # This creature has been eaten so skip it.
                continue
            self.evolve_creature(creature)
        # Remove the dead creatures
        self.creatures = [creature for creature in self.creatures
                                if not creature.dead]

world = World()
world.populate_world()
for chronon in range(MAX_CHRONONS):
    if not chronon % SAVE_EVERY:
        print('{}/{}: {}'.format(chronon+1, MAX_CHRONONS,
                                 len(world.creatures)))
        world.save_world('world-{:04d}.png'.format(chronon))
    world.evolve_world()

Mr. Lauris Grundmanis, 1 year, 3 months ago: I read this article during my last year at the University of Minnesota and did an independent study class with a professor, writing this simulation using PASCAL. It was a great experience!
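The toroidal wrapping that makes Wa-Tor a torus lives entirely in the modular arithmetic of get_neighbours, and it is easy to check in isolation. Here is a standalone sketch of the same logic (a hypothetical helper mirroring the method, not part of the original script):

```python
def get_neighbour_positions(x, y, width, height):
    # Von Neumann neighbourhood with toroidal wrapping, as in
    # World.get_neighbours: coordinates wrap modulo the grid dimensions.
    positions = []
    for dx, dy in ((0, -1), (1, 0), (0, 1), (-1, 0)):
        positions.append(((x + dx) % width, (y + dy) % height))
    return positions

# A corner cell on a 5x4 grid wraps to the opposite edges.
neighbours = get_neighbour_positions(0, 0, width=5, height=4)
assert (0, 3) in neighbours   # up wraps to the bottom row
assert (4, 0) in neighbours   # left wraps to the rightmost column
assert (1, 0) in neighbours and (0, 1) in neighbours
```

Because Python's % operator always returns a non-negative result for a positive modulus, (0 - 1) % width correctly wraps to width - 1, so no special-casing of the grid edges is needed anywhere in the simulation.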
https://scipython.com/blog/wa-tor-world/
Below we cover the most commonly used parts of the API. The Go agent is documented using standard godoc. For complete documentation, refer to the documentation at godoc.org/go.elastic.co/apm, or use the "godoc" tool.

Tracer API

The initial point of contact your application will have with the Go agent is the apm.Tracer type, which provides methods for reporting transactions and errors.

To make instrumentation simpler the Go agent provides a pre-initialized tracer, apm.DefaultTracer. This tracer is always initialized and available for use. This tracer is configured with environment variables; see Configuration for details.

import (
    "go.elastic.co/apm"
)

func main() {
    tracer := apm.DefaultTracer
    ...
}

Transactions

func (*Tracer) StartTransaction(name, type string) *Transaction

StartTransaction returns a new Transaction with the specified name and type, and with the start time set to the current time. If you need to set the timestamp or set the parent trace context, you should use Tracer.StartTransactionOptions. This method should be called at the beginning of a transaction such as a web or RPC request, e.g.:

transaction := apm.DefaultTracer.StartTransaction("GET /", "request")

Transactions will be grouped by type and name in the Elastic APM app. After starting a transaction, you can record a result and add context to further describe the transaction.

transaction.Result = "Success"
transaction.Context.SetLabel("region", "us-east-1")

See Context for more details on setting transaction context.

func (*Tracer) StartTransactionOptions(name, type string, opts TransactionOptions) *Transaction

StartTransactionOptions is essentially the same as StartTransaction, but also accepts an options struct. This struct allows you to specify the parent trace context and/or the transaction's start time.
opts := apm.TransactionOptions{
    Start: time.Now(),
    TraceContext: parentTraceContext,
}
transaction := apm.DefaultTracer.StartTransactionOptions("GET /", "request", opts)

func (*Transaction) End()

End enqueues the transaction for sending to the Elastic APM server. The Transaction must not be modified after this, but it may still be used for starting spans.

The transaction's duration will be calculated as the amount of time elapsed since the transaction was started until this call. To override this behaviour, the transaction's Duration field may be set before calling End.

transaction.End()

func (*Transaction) TraceContext() TraceContext

TraceContext returns the transaction's trace context.

func (*Transaction) EnsureParent() SpanID

EnsureParent returns the transaction's parent span ID, generating and recording one if it did not previously have one.

EnsureParent enables correlation with spans created by the JavaScript Real User Monitoring (RUM) agent for the initial page load. If your backend service generates the HTML page dynamically, you can inject the trace and parent span ID into the page in order to initialize the JavaScript RUM agent, such that the web browser's page load appears as the root of the trace.

var initialPageTemplate = template.Must(template.New("").Parse(`
<html>
<head>
<script src="elastic-apm-js-base/dist/bundles/elastic-apm-js-base.umd.min.js"></script>
<script>
elasticApm.init({
    serviceName: '',
    serverUrl: '',
    pageLoadTraceId: {{.TraceContext.Trace}},
    pageLoadSpanId: {{.EnsureParent}},
    pageLoadSampled: {{.Sampled}},
})
</script>
</head>
<body>...</body>
</html>
`))

func initialPageHandler(w http.ResponseWriter, req *http.Request) {
    err := initialPageTemplate.Execute(w, apm.TransactionFromContext(req.Context()))
    if err != nil {
        ...
    }
}

See the JavaScript RUM agent documentation for more information.
func ContextWithTransaction(context.Context, *Transaction) context.Context

ContextWithTransaction adds the transaction to the context, and returns the resulting context. The transaction can be retrieved using apm.TransactionFromContext. The context may also be passed into apm.StartSpan, which uses TransactionFromContext under the covers to create a span as a child of the transaction.

func TransactionFromContext(context.Context) *Transaction

TransactionFromContext returns a transaction previously stored in the context using apm.ContextWithTransaction, or nil if the context does not contain a transaction.

func DetachedContext(context.Context) context.Context

DetachedContext returns a new context detached from the lifetime of the input, but which still returns the same values as the input. DetachedContext can be used to maintain the trace context required to correlate events, but where the operation is "fire-and-forget" and should not be affected by the deadline or cancellation of the surrounding context.

func TraceFormatter(context.Context) fmt.Formatter

TraceFormatter returns an implementation of fmt.Formatter that can be used to format the IDs of the transaction and span stored in the provided context. The formatter understands the following formats:

- %v: trace ID, transaction ID, and (if in the context of a span) span ID, space separated
- %t: trace ID only
- %x: transaction ID only
- %s: span ID only

The "+" option can be used to format the values in "key=value" style, with the field names trace.id, transaction.id, and span.id. For example, using "%+v" as the format would yield "trace.id=… transaction.id=… span.id=…". For a more in-depth example, see Manual log correlation (unstructured).

Spans

To describe an activity within a transaction, we create spans. The Go agent has built-in support for generating spans for some activities, such as database queries. You can use the API to report spans specific to your application.
func (*Transaction) StartSpan(name, spanType string, parent *Span) *Span

StartSpan starts and returns a new Span within the transaction, with the specified name, type, and optional parent span, and with the start time set to the current time. If you need to set the timestamp or parent trace context, you should use Transaction.StartSpanOptions.

If the span type contains two dots, they are assumed to separate the span type, subtype, and action; a single dot separates span type and subtype, and the action will not be set.

If the transaction is sampled, then the span's ID will be set, and its stacktrace will be set if the tracer is configured accordingly. If the transaction is not sampled, then the returned span will be silently discarded when its End method is called. You can avoid any unnecessary computation for these dropped spans by calling the Dropped method.

As a convenience, it is valid to create a span on a nil Transaction; the resulting span will be non-nil and safe for use, but will not be reported to the APM server.

span := tx.StartSpan("SELECT FROM foo", "db.mysql.query", nil)

func (*Transaction) StartSpanOptions(name, spanType string, opts SpanOptions) *Span

StartSpanOptions is essentially the same as StartSpan, but also accepts an options struct. This struct allows you to specify the parent trace context and/or the span's start time. If the parent trace context is not specified in the options, then the span will be a direct child of the transaction. Otherwise, the parent trace context should belong to some span descended from the transaction.

opts := apm.SpanOptions{
    Start: time.Now(),
    Parent: parentSpan.TraceContext(),
}
span := tx.StartSpanOptions("SELECT FROM foo", "db.mysql.query", opts)

func StartSpan(ctx context.Context, name, spanType string) (*Span, context.Context)

StartSpan starts and returns a new Span within the sampled transaction and parent span in the context, if any.
If the span isn't dropped, it will be included in the resulting context.

span, ctx := apm.StartSpan(ctx, "SELECT FROM foo", "db.mysql.query")

func (*Span) End()

End marks the span as complete. The Span must not be modified after this, but may still be used as the parent of a span.

The span's duration will be calculated as the amount of time elapsed since the span was started until this call. To override this behaviour, the span's Duration field may be set before calling End.

func (*Span) Dropped() bool

Dropped indicates whether or not the span is dropped, meaning it will not be reported to the APM server. Spans are dropped when created with a nil or non-sampled transaction, or one whose max spans limit has been reached.

func (*Span) TraceContext() TraceContext

TraceContext returns the span's trace context.

func ContextWithSpan(context.Context, *Span) context.Context

ContextWithSpan adds the span to the context, and returns the resulting context. The span can be retrieved using apm.SpanFromContext. The context may also be passed into apm.StartSpan, which uses SpanFromContext under the covers to create another span as a child of the span.

func SpanFromContext(context.Context) *Span

SpanFromContext returns a span previously stored in the context using apm.ContextWithSpan, or nil if the context does not contain a span.

Context

When reporting transactions and errors you can provide context to describe those events. Built-in instrumentation will typically provide some context, e.g. the URL and remote address for an HTTP request. You can also provide custom context and tags.

func (*Context) SetTag(key, value string)

SetTag is equivalent to calling SetLabel with a string value. This function is deprecated, and will be removed in a future major version of the agent.

func (*Context) SetLabel(key string, value interface{})

SetLabel labels the transaction or error with the given key and value.
If the key contains any special characters (., *, "), they will be replaced with underscores.

If the value is numerical or boolean, then it will be sent to the server as a JSON number or boolean; otherwise it will be converted to a string, using fmt.Sprint if necessary. Numerical and boolean values are supported by the server from version 6.7 onwards. String values longer than 1024 characters will be truncated. Labels are indexed in Elasticsearch as keyword fields. Avoid defining too many user-specified labels: defining too many unique fields in an index is a condition that can lead to a mapping explosion.

func (*Context) SetCustom(key string, value interface{})

SetCustom is used to add custom, non-indexed, contextual information to transactions or errors. If the key contains any special characters (., *, "), they will be replaced with underscores.

Non-indexed means the data is not searchable or aggregatable in Elasticsearch, and you cannot build dashboards on top of the data. However, non-indexed information is useful for other reasons, like providing contextual information to help you quickly debug performance issues or errors. The value can be of any type that can be encoded using encoding/json.

func (*Context) SetUsername(username string)

SetUsername records the username of the user associated with the transaction.

func (*Context) SetUserID(id string)

SetUserID records the ID of the user associated with the transaction.

func (*Context) SetUserEmail(email string)

SetUserEmail records the email address of the user associated with the transaction.

Errors

Elastic APM provides two methods of capturing an error event: reporting an error log record, and reporting an "exception" (either a panic or an error in Go parlance).

func (*Tracer) NewError(error) *Error

NewError returns a new Error with details taken from err. The exception message will be set to err.Error().
The exception module and type will be set to the package and type name of the cause of the error, respectively, where the cause has the same definition as given by github.com/pkg/errors.

e := apm.DefaultTracer.NewError(err)
...
e.Send()

The provided error can implement any of several interfaces to provide additional information:

// Errors implementing ErrorsStacktracer will have their stacktrace
// set based on the result of the StackTrace method.
type ErrorsStacktracer interface {
    StackTrace() github.com/pkg/errors.StackTrace
}

// Errors implementing Stacktracer will have their stacktrace
// set based on the result of the StackTrace method.
type Stacktracer interface {
    StackTrace() []go.elastic.co/apm/stacktrace.Frame
}

// Errors implementing Typer will have a "type" field set to the
// result of the Type method.
type Typer interface {
    Type() string
}

// Errors implementing StringCoder will have a "code" field set to the
// result of the Code method.
type StringCoder interface {
    Code() string
}

// Errors implementing NumberCoder will have a "code" field set to the
// result of the Code method.
type NumberCoder interface {
    Code() float64
}

Errors created with NewError will have their ID field populated with a unique ID. This can be used in your application for correlation.

func (*Tracer) NewErrorLog(ErrorLogRecord) *Error

NewErrorLog returns a new Error for the given ErrorLogRecord:

type ErrorLogRecord struct {
    // Message holds the message for the log record,
    // e.g. "failed to connect to %s".
    //
    // If this is empty, "[EMPTY]" will be used.
    Message string

    // MessageFormat holds the non-interpolated format
    // of the log record, e.g. "failed to connect to %s".
    //
    // This is optional.
    MessageFormat string

    // Level holds the severity level of the log record.
    //
    // This is optional.
    Level string

    // LoggerName holds the name of the logger used.
    //
    // This is optional.
    LoggerName string

    // Error is an error associated with the log record.
    //
    // This is optional.
    Error error
}

The resulting Error's log stacktrace will not be set. Call the SetStacktrace method to set it, if desired.

e := apm.DefaultTracer.NewErrorLog(apm.ErrorLogRecord{
    Message: "Somebody set up us the bomb.",
})
...
e.Send()

func (*Error) SetTransaction(*Transaction)

SetTransaction associates the error with the given transaction.

func (*Error) SetSpan(*Span)

SetSpan associates the error with the given span, and the span's transaction. When calling SetSpan, it is not necessary to also call SetTransaction.

func (*Error) Send()

Send enqueues the error for sending to the Elastic APM server.

func (*Tracer) Recovered(interface{}) *Error

Recovered returns an Error from the recovered value, optionally associating it with a transaction. The error is not sent; it is the responsibility of the caller to set the error's context as desired, and then call its Send method.

tx := apm.DefaultTracer.StartTransaction(...)
defer tx.End()
defer func() {
    if v := recover(); v != nil {
        e := apm.DefaultTracer.Recovered(v)
        e.SetTransaction(tx)
        e.Send()
    }
}()

func CaptureError(context.Context, error) *Error

CaptureError returns a new Error related to the sampled transaction and span present in the context, if any, and sets its exception details using the given error. The Error.Handled field will be set to true, and a stacktrace set. If there is no transaction in the context, or it is not being sampled, CaptureError returns nil. As a convenience, if the provided error is nil, then CaptureError will also return nil.

if err != nil {
    e := apm.CaptureError(ctx, err)
    e.Send()
}

Trace Context

Trace context contains the ID for a transaction or span, the ID of the end-to-end trace to which the transaction or span belongs, and trace options such as flags relating to sampling. Trace context is propagated between processes, e.g. in HTTP headers, in order to correlate events originating from related services.
Elastic APM’s trace context is based on the W3C Trace Context draft. Error Contextedit Errors can be associated with context just like transactions. See Context for details. In addition, errors can be associated with an active transaction or span using SetTransaction or SetSpan, respectively. tx := apm.DefaultTracer.StartTransaction("GET /foo", "request") defer tx.End() e := apm.DefaultTracer.NewError(err) e.SetTransaction(tx) e.Send() Tracer Configedit Many configuration attributes can be be updated dynamically via apm.Tracer method calls. Please refer to the documentation at godoc.org/go.elastic.co/apm#Tracer for details. The configuration methods are primarily prefixed with Set, such as apm#Tracer.SetLogger.
https://www.elastic.co/guide/en/apm/agent/go/current/api.html
Percentages in graph view

Is there a way in Odoo v8 to show percentages in the graph view? For example, if I filter a model by sex, I want to see 80% women and 20% men, and if I apply another filter such as age > 43, keep the result as a percentage. Currently it only graphs the sum of the filtered values.

The thing is that the results rendered in the graph view are tied to the read_group method, which uses the group_operator argument of the field definition; by default this is sum when you don't define a group_operator. That field argument only works with aggregate functions in PostgreSQL, and percent is not an aggregate function in PostgreSQL. You could do it by overriding the read_group method of the graph view model to manually change the field value in the result. Like:

```python
def read_group(self, cr, uid, domain, fields, groupby, offset=0, limit=None,
               context=None, orderby=False, lazy=True):
    read_group_res = super(your_model, self).read_group(
        cr, uid, domain, fields, groupby, offset=offset, limit=limit,
        context=context, orderby=orderby, lazy=lazy)
    for res in read_group_res:
        domain_filter = res.get('__domain', [])
        line_ids = self.search(cr, uid, domain_filter, context=context)
        if line_ids:
            res_id = self.browse(cr, uid, line_ids[0], context=context)
            res['results'] = res_id.results
    return read_group_res
```

As the docstring of the read_group method states:

    :return: list of dictionaries (one dictionary for each record) containing:
        * the values of fields grouped by the fields in ``groupby`` argument
        * __domain: list of tuples specifying the search criteria
        * __context: dictionary with arguments like ``groupby``

Use this example to build your own solution based on your needs.

Ibra, thanks very much for your answer, but the problem is that I need to see the percentage relative to all my records in the graph view. For example, if I have 5 employees, 3 male and 2 female, when I filter by sex in the graph view for the hr_employee model I want to see the result across all records as a percentage, not the quantity, which is how Odoo works by default.

Hello Oscar, you can use a widget called progressbar for this:

```xml
<field name="your_field" widget="progressbar"/>
```

To show percentages, your field should be computed. Example: add the percentage of taken seats.

Your .py:

```python
seat = fields.Integer(string="Number of seats")
attendees_ids = fields.Many2many('res.partner', string="Attendees")
taken_seats = fields.Float(string="Taken seats", compute='_taken_seats')
```

Then you add the depends decorator, because taken_seats depends on both seat and attendees_ids:

```python
@api.depends('seat', 'attendees_ids')
def _taken_seats(self):
    for r in self:
        if not r.seat:
            r.taken_seats = 0.0
        else:
            r.taken_seats = 100.0 * len(r.attendees_ids) / r.seat
```

Your .xml:

```xml
<field name="taken_seats" widget="progressbar"/>
```

This is a simple example of how to display a percentage. Try to apply the same logic to your own example and tell us if it works.
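The override shown above swaps in a per-record value rather than computing a share. As a minimal, Odoo-independent sketch of the percentage idea itself (the helper name `to_percentages` and the plain-dict result shape are illustrative assumptions, not Odoo API), the grouped sums returned by a read_group-style call can be normalized against their grand total before being handed to the view:

```python
def to_percentages(read_group_res, field):
    """Rescale the summed values of `field` so each group holds its
    share of the grand total across all groups, in percent."""
    total = sum(res.get(field) or 0 for res in read_group_res)
    if not total:
        return read_group_res  # nothing to rescale
    for res in read_group_res:
        res[field] = 100.0 * (res.get(field) or 0) / total
    return read_group_res

# Example: 5 employees grouped by sex (3 male, 2 female).
groups = [{'sex': 'male', 'employee_count': 3},
          {'sex': 'female', 'employee_count': 2}]
to_percentages(groups, 'employee_count')
print(groups)  # shares now sum to 100: 60.0 and 40.0
```

In a real override, this kind of post-processing would run at the end of read_group, just before returning the result, so every filter the user applies still yields shares that sum to 100%.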
https://www.odoo.com/forum/help-1/question/porcentages-in-graph-view-99098
29 January 2009 14:22 [Source: ICIS news]

WASHINGTON - The December decline followed a drop of 3.7% in November. Excluding transportation equipment, new orders for durable goods were down more sharply, at 3.6% below November's figures. The data for transportation equipment, which is heavily influenced by orders for commercial aircraft, is often backed out of the overall durable goods monthly figures. Aircraft orders often are made in multiple-plane purchases, and in any given month those commitments - or their lack - can affect manufacturing data disproportionately.

In its monthly report, the department said unfilled orders for manufactured durable goods also declined in December for the third consecutive month, falling 1.3% to $803bn. This followed a 0.9% fall in November.

Inventories of manufactured durable goods increased by 0.4% in December to $343.5bn, the 17th increase in the last 18 months. The inventory increase last month followed a 0.3% gain in November.

The continuing decline in unfilled orders, combined with increasing inventories, indicates that manufacturers are having greater difficulty selling the goods they produce.

US durable goods orders and inventories (seasonally adjusted; r: revised)
http://www.icis.com/Articles/2009/01/29/9188528/us-durable-goods-fell-in-dec-for-fifth-straight-month.html
Hi, I am looking at creating a calculated scripted field in Script Runner based on the days:hours:minutes to resolution. I have managed to do this calculation based on the number of days, but not down to the hours and minutes.

```groovy
return (issue.resolutionDate != null)
    ? String.format("%d day(s)",
        (int) ((issue.resolutionDate.getTime() - issue.getCreated().getTime()) / (1000 * 60 * 60 * 24)))
    : "N/A";
```

Thanks in advance.

Use JiraDurationUtils instead to format the duration.

Hello! You may do something similar with CelesteCS Math for Confluence, which allows calculations in Confluence, including manipulations of table data. Date parameters and functions, like NOW() and DATE(), are supported.

Hi @djamal, I am trying to do the same. Can you help me with getting the hours and days? I was able to get the number of days by using resolvedDate - createdDate. Thanks.
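The question above is about Groovy in Script Runner, but the days:hours:minutes breakdown is plain integer arithmetic on the millisecond gap between the created and resolved timestamps, so the logic is language-neutral. Here it is sketched in Python; the function name is illustrative:

```python
def format_duration(millis):
    """Break a millisecond duration into 'D day(s) HH:MM'."""
    total_minutes, _seconds = divmod(millis // 1000, 60)
    hours, minutes = divmod(total_minutes, 60)
    days, hours = divmod(hours, 24)
    return "%d day(s) %02d:%02d" % (days, hours, minutes)

# 1 day, 2 hours, 3 minutes and 4 seconds, expressed in milliseconds:
print(format_duration(93784000))  # 1 day(s) 02:03
```

A Groovy version follows the same shape: divide (issue.resolutionDate.time - issue.created.time) by 1000, then peel off minutes, hours and days with successive divisions and remainders.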
https://community.atlassian.com/t5/Marketplace-Apps-questions/Calculating-time-to-resolution-using-a-scripted-field-in-Script/qaq-p/357041
I have an async function and need to run it with APScheduler every N minutes. Here is the Python code:

```python
URL_LIST = ['<url1>',
            '<url2>',
            '<url2>',
            ]

def demo_async(urls):
    """Fetch list of web pages asynchronously."""
    loop = asyncio.get_event_loop()  # event loop
    future = asyncio.ensure_future(fetch_all(urls))  # tasks to do
    loop.run_until_complete(future)  # loop until done

async def fetch_all(urls):
    tasks = []  # dictionary of start times for each url
    async with ClientSession() as session:
        for url in urls:
            task = asyncio.ensure_future(fetch(url, session))
            tasks.append(task)  # create list of tasks
        _ = await asyncio.gather(*tasks)  # gather task responses

async def fetch(url, session):
    """Fetch a url, using specified ClientSession."""
    async with session.get(url) as response:
        resp = await response.read()
        print(resp)

if __name__ == '__main__':
    scheduler = AsyncIOScheduler()
    scheduler.add_job(demo_async, args=[URL_LIST], trigger="interval", seconds=15)
    scheduler.start()
    print('Press Ctrl+{0} to exit'.format('Break' if os.name == 'nt' else 'C'))

    # Execution will block here until Ctrl+C (Ctrl+Break on Windows) is pressed.
    try:
        asyncio.get_event_loop().run_forever()
    except (KeyboardInterrupt, SystemExit):
        pass
```

But when I tried to run it I got the following error:

```
Job "demo_async (trigger: interval[0:00:15], next run at: 2017-10-12 18:21:12 +04)" raised an exception
...
..........\lib\asyncio\events.py", line 584, in get_event_loop
    % threading.current_thread().name)
RuntimeError: There is no current event loop in thread '<concurrent.futures.thread.ThreadPoolExecutor object at 0x0356B150>_0'.
```

Could you please help me with this? Python 3.6, APScheduler 3.3.1.

In your def demo_async(urls), try to replace:

```python
loop = asyncio.get_event_loop()
```

with:

```python
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
```

The important thing that hasn't been mentioned is why the error occurs. For me personally, knowing why the error occurs is as important as solving the actual problem. Let's take a look at the implementation of get_event_loop in BaseDefaultEventLoopPolicy:

```python
class BaseDefaultEventLoopPolicy(AbstractEventLoopPolicy):
    ...
    def get_event_loop(self):
        """Get the event loop.

        This may be None or an instance of EventLoop.
        """
        if (self._local._loop is None and
                not self._local._set_called and
                isinstance(threading.current_thread(), threading._MainThread)):
            self.set_event_loop(self.new_event_loop())
        if self._local._loop is None:
            raise RuntimeError('There is no current event loop in thread %r.'
                               % threading.current_thread().name)
        return self._local._loop
```

You can see that self.set_event_loop(self.new_event_loop()) is only executed if all of the conditions below are met:

- self._local._loop is None (_local._loop is not set)
- not self._local._set_called (set_event_loop hasn't been called yet)
- isinstance(threading.current_thread(), threading._MainThread) (the current thread is the main one; this is not True in your case)

Therefore the exception is raised, because no loop is set in the current thread:

```python
if self._local._loop is None:
    raise RuntimeError('There is no current event loop in thread %r.'
                       % threading.current_thread().name)
```

Just pass fetch_all to scheduler.add_job() directly. The asyncio scheduler supports coroutine functions as job targets. If the target callable is not a coroutine function, it will be run in a worker thread (due to historical reasons), hence the exception.

Use asyncio.run() instead of directly using the event loop. It creates a new loop and closes it when finished. This is how 'run' looks:

```python
if events._get_running_loop() is not None:
    raise RuntimeError(
        "asyncio.run() cannot be called from a running event loop")
if not coroutines.iscoroutine(main):
    raise ValueError("a coroutine was expected, got {!r}".format(main))

loop = events.new_event_loop()
try:
    events.set_event_loop(loop)
    loop.set_debug(debug)
    return loop.run_until_complete(main)
finally:
    try:
        _cancel_all_tasks(loop)
        loop.run_until_complete(loop.shutdown_asyncgens())
    finally:
        events.set_event_loop(None)
        loop.close()
```

Since this question continues to appear on the first page, I will write my problem and my answer here. I had a RuntimeError: There is no current event loop in thread 'Thread-X' when using flask-socketio and Bleak.

Edit: well, I refactored my file and made a class. I initialized the loop in the constructor, and now everything is working fine:

```python
class BLE:
    def __init__(self):
        self.loop = asyncio.get_event_loop()

    # function example, improvement of:
    def list_bluetooth_low_energy(self) -> list:
        async def run() -> list:
            BLElist = []
            devices = await bleak.discover()
            for d in devices:
                BLElist.append(d.name)
            return 'success', BLElist
        return self.loop.run_until_complete(run())
```

Usage:

```python
ble = path.to.lib.BLE()
list = ble.list_bluetooth_low_energy()
```

Original answer: The solution was stupid. I did not pay attention to what I did, but I had moved some imports out of a function, like this:

```python
import asyncio, platform
from bleak import discover

def listBLE() -> dict:
    async def run() -> dict:
        # my code that keeps throwing exceptions
        ...

    loop = asyncio.get_event_loop()
    ble_list = loop.run_until_complete(run())
    return ble_list
```

So I thought that I needed to change something in my code, and I created a new event loop using this piece of code just before the line with get_event_loop():

```python
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
```

At this moment I was pretty happy, since I had a loop running. But it was not responding. And my code relied on a timeout to return some values, so it was pretty bad for my app. It took me nearly two hours to figure out that the problem was the import, and here is my (working) code:

```python
def list() -> dict:
    import asyncio, platform
    from bleak import discover

    async def run() -> dict:
        # my code running perfectly
        ...

    loop = asyncio.get_event_loop()
    ble_list = loop.run_until_complete(run())
    return ble_list
```

Reading the given answers, I only managed to fix my websocket thread by using the hint (try replacing) on this page:

```python
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
```

The documentation of BaseDefaultEventLoopPolicy explains:

    Default policy implementation for accessing the event loop. In this policy, each thread has its own event loop. However, we only automatically create an event loop by default for the main thread; other threads by default have no event loop.

So when using a thread, one has to create the loop. And I had to reorder my code, so my final code is:

```python
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# !!! Place code after setting the loop !!!
server = Server()
start_server = websockets.serve(server.ws_handler, 'localhost', port)
```

I had a similar issue where I wanted my asyncio module to be callable from a non-asyncio script (which was running under gevent… don't ask…). The code below resolved my issue because it tries to get the current event loop, but will create one if there isn't one in the current thread. Tested in Python 3.9.11.

```python
try:
    loop = asyncio.get_event_loop()
except RuntimeError as e:
    if str(e).startswith('There is no current event loop in thread'):
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
    else:
        raise
```
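The per-thread fix that several answers above converge on (create a loop with new_event_loop and register it with set_event_loop in any thread other than the main one) can be seen end to end in a small self-contained script; the thread and coroutine names here are illustrative:

```python
import asyncio
import threading

async def work():
    await asyncio.sleep(0)
    return "done"

results = {}

def worker():
    # Worker threads get no event loop automatically; calling
    # asyncio.get_event_loop() here would raise the RuntimeError
    # from the question. Create and register one instead:
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        results["value"] = loop.run_until_complete(work())
    finally:
        loop.close()

t = threading.Thread(target=worker)
t.start()
t.join()
print(results["value"])  # done
```

Closing the loop in the finally block matters in long-running schedulers like APScheduler: each worker-thread invocation that creates a loop and never closes it leaks file descriptors.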
https://techstalking.com/programming/python/runtimeerror-there-is-no-current-event-loop-in-thread-in-async-apscheduler/
Node backend for Angular using Nest JS

As per the official documentation of Nest JS, it is a framework for building efficient, scalable Node.js server-side applications. It is built with full support for TypeScript and combines features of object-oriented, functional and reactive programming.

Nest JS is heavily inspired by Angular, the popular frontend JavaScript framework for creating single-page client-side applications. It uses features such as dependency injection, modules and decorators, which are used frequently in Angular. Nest JS enables the construction of fast, testable and extensible backend applications.

Nest JS can easily be integrated with an existing Angular app so that your backend feels just like your frontend. The ng add command provides a set of schematics, with the help of the ng-universal package, that smoothly integrates Nest JS into an existing Angular app.

Step 1 - Navigate inside your existing Angular app and run the following command:

```
ng add @nestjs/ng-universal
```

Step 2 - What this command does is install Nest JS and configure it for your Angular app behind the scenes. It creates a server folder at the root level of your application, which includes the application-level module and a main.ts file that is responsible for bootstrapping your Nest application. The main.ts file looks something like this:

```typescript
enableProdMode();

async function bootstrap() {
  const app = await NestFactory.create(ApplicationModule);
  app.setGlobalPrefix('api');
  await app.listen(4200);
}
bootstrap();
```

It will bootstrap the application module, automatically add the api prefix for incoming HTTP requests, and configure the app to listen for incoming requests on port 4200.

Nest JS provides a bunch of functionality: controllers, services, middleware, exception filters, support for WebSockets, GraphQL and much more.

Just like the Angular CLI, which uses ng generate commands to generate files automatically, the Nest CLI uses nest generate commands to generate its required files.

Demo - To test this, let us generate a simple controller which will handle a GET request at localhost:4200/api/greet and will return a JSON response with "Hello World" as a greeting.

Step 1 - From inside the Angular app, run the following commands:

```
cd server
npm install -g @nestjs/cli
nest generate controller greetings
```

These commands install the Nest CLI globally, create the greetings controller and update the controllers array in app.module. The code of the greetings controller looks something like this:

```typescript
import { Controller, Get } from '@nestjs/common';

@Controller('greet')
export class GreetingsController {
  @Get()
  greet() {
    return { greeting: 'Hello World' };
  }
}
```

Step 2 - To test this, run npm run serve. It will serve your Angular app and your Nest app on the same port, i.e. 4200. The routing of the Nest app will be prefixed by api.

For more details, dive deep into the official documentation of Nest JS.
https://medium.com/@naimishverma50/node-backend-for-angular-using-nest-js-f9d1528ef2dc?source=post_recirc---------1------------------
[ Revised with C# and VB.NET code]

Okay, first, understand that I'm in the position of running through the streets yelling at folks "c'mere! ya' gotta see this!" and what I'm pointing to is the incredible new invention of… a laptop computer. Something that is undeniably amazing and cool, but everyone else on my block has already got one.

Second, and much worse, I'm about to show you how I used a "pattern" that you either have already embraced, or that you've been avoiding like the plague because the folks who are running around shouting "MVVM! MVVM!" sound just like the folks who were running around shouting "MVC! MVC!" and "OOP! OOP!" and "COM! COM!"… you get the idea. Many of us are still recovering from the last five fads that caused us to go out and buy dozens of books and break our heads on the latest/greatest trend, only to have it be "oh so last year" by the time we fully grokked it.

Drinking The Kool Aid*

But this is different. Honest. Here are three heretical assertions about MVVM:

- It is not all that different from what you are already doing
- It is not hard to understand or to do
- You will write better code, and you'll write less code.

Typically, when a pattern or practice comes along, there is a steep learning curve, and the cognoscenti will tell you that it takes a very long time to truly master the approach. Feh. And not so, at least not this time. Let's go over the assertions above, and I'll explain, briefly, what you need to know to profit from MVVM.

It's Not All That Different and It's Not That Hard

Of the three assertions, only the third is new, but that difference is all the difference in the world when you're writing a Silverlight application that you want to be able to maintain and create unit tests for (yes, I owe a blog post on Test-Driven design, but one glass of grape juice at a time).

Core Concept #1: Separate your User Interface concerns (View) from your Business objects and behaviors (View Model) and from your data/persistence layer (Model).

Core Concept #2: Don't Look Up. We tend to conceptualize the View (User Interface objects) at the top, the ViewModel (objects that provide the UI with its data and behaviors) in the middle and the Model (often the persistence layer) at the bottom.

Why Would You Want To Do That, & What Does It Cost?

The huge advantage of using binding and making the VM the DataContext for the View is that you write less code and, equally important, your behaviors and state are all separated from the UI and thus testable.

Names

Simplifying is one thing, over-simplifying another, and there is an elegance in the chosen names. The Model is that which the application is modeling. Calling this the database layer or the persistence layer loses sight of the fact that the model might be virtually any type of information in virtually any format. Calling the top layer the View, rather than the User Interface, is important both to emphasize that it is just one of many possible views of the model, and to keep clear that the User Interface comprises both the appearance and the behaviors, while the View is concerned only with the appearance. The ViewModel is the bridge between the Model and the View; the ViewModel thus owns responsibility for binding the relevant data to the view and for handling user actions appropriately, whether the response is in the widget, elsewhere in the View or in other parts of the application.

A Practical Example

To see this at work, we'll start by creating a new Silverlight Application. (Complete source code is available here.) Immediately add a new UserControl and name it PeopleView. Here is the Xaml for the UserControl:

C# | VB.NET

We start by defining a Person (lines 6-15) and then we give the PeopleViewModel class a property People, which is an ObservableCollection of Person objects. The constructor (lines 24-27) calls a method that mocks up getting data from the Model; in this case I just generate ten names on lines 30-48. The final code implements INotifyPropertyChanged, which is not required yet, as the only property we have is an ObservableCollection.

Binding

ItemsSource="{Binding People}" DisplayMemberPath="Name"

and then add this to the constructor in the code behind:

C# | VB.NET

Behavior

c:\Program Files (x86)\Microsoft SDKs\Expression\Blend Preview for .NET 4\Interactivity\Libraries\Silverlight\System.Windows.Interactivity.dll
C:\Program Files (x86)\Microsoft Expression\Blend 3 Samples\Silverlight\Design\Expression.Samples.Interactivity.dll

xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
xmlns:si="clr-namespace:Expression.Samples.Interactivity;assembly=Expression.Samples.Interactivity"

and finally, you modify the control (the listbox) to add the trigger and behavior. The EventTrigger has a property for the EventName, indicating that this trigger will fire when the event SelectionChanged is raised for the ListBox. The CallDataMethod has a property Method for the name of the method to invoke. Since the ViewModel class is the data context, you don't need to indicate which class supplies the method, any more than you need to indicate which class has the property People to which the ListBox's ItemsSource is binding. Just add that method to the ViewModel class…

C# | VB.NET

… and start your program. When you change the selection, the method you've bound to will be called. In the Silverlight HVP this will allow us to bind the event notification to the SelectionChanged event, further decoupling the control from its data and logic. The technical term for this is "good."

Overall Impact of Refactoring for MVVM

While this is not an exhaustive understanding of MVVM by any means, with these fundamentals it became obvious how to break up my code, and I found that there was less of it, and it was more intention-revealing. In fact, the code-behind files for my View classes have no code at all except setting the DataContext, and the ViewModel code is short and extremely readable. All in all, I find MVVM well worth the small cognitive startup costs, yielding a very natural separation of concerns and, perhaps equally important, exposing far more of the program to unit tests, and thus driving down the overall time to release.

———————————–

* In 1978 cult leader Jim Jones induced 900 followers to commit "revolutionary suicide" by knowingly drinking cyanide-laced grape Flavor Aid. To the chagrin of General Foods, the cultural memory of the event is that they drank Kool Aid, and the expression "to drink the Kool Aid" has evolved to mean "to embrace without reservations, the ideas of a strong leader."

Small grammar correction – 'loses site of the fact' should be 'loses *sight* of the fact' and 'any time of information in virtually any format' should be 'any *type* of information in virtually any format' (I think)

"you don't work harder, you work less, and in exchange for doing less, your code is easier to write, to read, to maintain and to test" IS BS! You WILL always write more code using MVVM rather than using code behind. You WILL be working harder (because you write more code). It is NOT easier to read. Debatable if more code is ever simpler to maintain. Testing is simpler because the VM tier is unit testable, which has always been a challenge for WinForms & WPF with code behind. For a button using code behind: OnClick(TheMethodName). Done. Right click it, go to code, and BAM you are in the event that gets raised. Going to do that with MVVM? NO WAY! Impossible. It's a handful of extra code, and you can not go directly to the event. Want a dialog window? using (FormObject form = new FormObject()) { form.ShowDialog(); } and it disposes itself automatically. MVVM equivalent? There is NONE unless you write a form controller and all of the code necessary to manage a dialog window (any sizing, title bar, moving events etc etc etc… good luck under 150 lines of code). MVVM is great, and I use it with WPF all the time. But it's not simpler. It's not faster to write. And, it is not easier to read.

Thanks very much, helps a lot ; )

```vbnet
Dim custID As Guid = (CType(V2_CustomerDataGrid.SelectedItem, _
    v2_Customer)).Cust_UUID
Dim loadOp = context.Load(context.GetCustDetailByIDQuery(custID))
'CustomerDataForm1.ItemsSource = loadOp.Entities

Dim taskList As ObservableCollection(Of v2_Customer) = New ObservableCollection(Of v2_Customer)
' Generate some task data and add it to the task list.
For index = 0 To taskList.Count - 1
    taskList.Add(New v2_Customer() With _
        {.Cust_UUID = custID, .Billing_Zip = .Billing_Zip})
Next

Dim taskListView As New PagedCollectionView(taskList)
Me.CustomerDataForm1.ItemsSource = taskListView
```

Great post with a simple approach to raising data events. Just what I needed. For VS2010 and Silverlight 4, Expression.Samples.Interactivity needs to be changed to Microsoft.Expression.Interactivity.Core, and you need to include that reference as well. The CallDataMethod has been changed to CallMethodAction, so change it to si:CallMethodAction MethodName="TrimestersClicked". The MSDN for this is here –>

It cut off my code. take 2:

```xml
<i:Interaction.Triggers>
  <i:EventTrigger EventName="SelectionChanged">
    <si:CallDataMethod Method="HandleSelectionChanged" />
  </i:EventTrigger>
</i:Interaction.Triggers>
```

I downloaded Jason Quinn's code and made the following change to PeopleView.xaml and got it to work:

@Jason Quinn: Ok, I downloaded your version and it now builds (using VS2010 Pro), but the selection changed event does nothing? In the original example it calls the method HandleSelectionChanged on the VM, but yours calls "ViewModelMethod", which does nothing. I tried changing this to the method "HandleSelectionChanged" but it still does not fire. Has anyone got a working version of this in 2010? Thanks.

I could not get the source to build (using SL4 and VS2010) – Error 1: The tag 'CallDataMethod' does not exist in XML namespace 'clr-namespace:Expression.Samples.Interactivity;assembly=Expression.Samples.Interactivity', and Error 2: A value of type 'CallDataMethod' cannot be added to a collection or dictionary of type 'TriggerActionCollection'. But when I browse with the Object Browser it is there?

It works with VS EX 2010 and Silverlight 4 with the following changed PeopleView:

I've fixed the code sample so it works now with Visual Studio 2010 and Expression Blend 4. It can be found at

Can you give me more here? You have 4 projects there and none are really labeled. Or just give the information. Thanks!

@Per H Thanks for taking the time to post this after working it out. Your comment was cut off; if you get a chance to fill in the details, that will be terrific.

Download code doesn't build.

It's a well written article. Thanks for the effort. I'll see if I can get it to compile.

@Per H, where are these libraries found? I'm using VS 2010 and Expression Blend 4.

@Jesse Liberty That would be Kool 😉 I'll try and see if I can get something done in the next week. If I don't then I'll fess up to being lazy and leave it for you to take it on. Cheers.

Hi Jesse, Good post, particularly on how simply you can use the Blend Samples to extend the commanding. I know you weren't trying to write the definitive guide to MVVM, but it would be good to have a little more on the View Model/Model relationship, as mocking up the data within the VM avoids that somewhat. We've been on the MVVM kool aid since last year and it's definitely the way to go, although sometimes I think the MVVM police might be about to storm in. Some of the team got to see you on your UK visit – thanks again for that.

I agree, and if you would like to write something and have me chime in and send it back, and once done publish here with joint authorship, that would be a gas. If not, it is definitely on my follow-up list. 🙂

Exactically!

@Jeff Ballard @Ken Byrd The problem is that I didn't explain what I wanted to do, and so you are totally right with respect to what I wrote, but not what I meant. What I was trying to show was an example where a change in the state causes you to want to invoke a method (not just update the view).

Jesse, Are you breaking any MVVM rule when calling System.Windows.MessageBox.Show in the ViewModel? Regards,

Yes, calling MessageBox.Show from the VM would be a temporary hack until you move the UI to the View.

Why would this method be preferable to simply implementing a SelectedPerson property on the ViewModel?

<ListBox ItemsSource="{Binding People}" SelectedItem="{Binding SelectedPerson, Mode=TwoWay}" />

Within your ViewModel, you could simply watch for changes to the SelectedPerson property, which wouldn't require 3rd party CodePlex code!

Hi. This is a great example, and I agree that one has to apply a little KISS to MVVM in order to get your brain around it! However, using VS2010 RTM and Blend 4 beta, I think the following needs to be changed? 1) I updated the references and the XAML: xmlns:si="" xmlns:sa="" 2) and changed the CallDataMethod to CallMethodAction like this:

How was the CallDataMethod changed? thnx.

And I thought references to drinking Kool Aid were more to do with the Kool Aid Acid Test… which is more to do with seeing things that aren't there! 😉

Heh 😉

Hi Jesse, I like your intro in this post very much. Trends are killing our precious time.

Jesse, Am I correct in stating that the part with the CodePlex Expression samples and Interactivity is basically a way to implement commanding for non-button and non-hyperlink based controls in MVVM? I was trying to do something similar using Prism and was having trouble getting it to work, although I did have only a few minutes to troubleshoot before I had to leave for a family event. 🙂

Great article!

http://jesseliberty.com/2010/05/08/mvvm-its-not-kool-aid-3/
. - What We’re Building - Building the React Switch Element using HTML - Styling our React Switch with CSS - Using the React Switch Component - Handling onChange and Passing a Value Through Props - Changing the Background Color onChange - Specifying the Switch Color What We’re Building Long before iOS introduced the switch, the web’s boolean input was the trusty checkbox. Checkboxes are of course still used to this day, but the switch improved on the checkbox by emulating physical switches found in the real world. A switch feels tangible. Clicking or tapping it feels like you’re actually using a real-life switch as opposed to clicking a checkbox. Therefore, in this tutorial, we’re going to build a new React switch component that piggybacks on the native HTML checkbox input. And, using some CSS, we’re going to turn that simple, age-old checkbox into a snazzy looking switch! Building the React Switch Element using HTML Whenever I’m building a brand-new React component, I’ll always, always begin by scaffolding it out in HTML and CSS and when I’m happy with the look and feel, then I’ll move over to writing the JavaScript. Create a new file called Switch.js and add the following code to it: import React from 'react'; import './Switch.css'; const Switch = () => { return ( <> <input className="react-switch-checkbox" id={`react-switch-new`} <label className="react-switch-label" htmlFor={`react-switch-new`} > <span className={`react-switch-button`} /> </label> </> ); }; export default Switch; If you saved the component at this point, you’d see a simple checkbox. That’s because we use the checkbox input as the basis for our React switch component. There’s no need for us to re-invent the wheel here. A switch is, after all, another way of representing a boolean value (true or false). The checkbox input is a native input to handle boolean values. Styling our React Switch with CSS Create a new file under the same directory as the component file, called Switch.css. 
Drop in the following CSS. Feel free to take a look at each class; I’m not going to explore the CSS in this tutorial, as the focus is on JavaScript and React.

.react-switch-checkbox {
  height: 0;
  width: 0;
  visibility: hidden;
}

.react-switch-label {
  display: flex;
  align-items: center;
  justify-content: space-between;
  cursor: pointer;
  width: 100px;
  height: 50px;
  background: grey;
  border-radius: 100px;
  position: relative;
  transition: background-color .2s;
}

.react-switch-label .react-switch-button {
  content: '';
  position: absolute;
  top: 2px;
  left: 2px;
  width: 45px;
  height: 45px;
  border-radius: 45px;
  transition: 0.2s;
  background: #fff;
  box-shadow: 0 0 2px 0 rgba(10, 10, 10, 0.29);
}

.react-switch-checkbox:checked + .react-switch-label .react-switch-button {
  left: calc(100% - 2px);
  transform: translateX(-100%);
}

.react-switch-label:active .react-switch-button {
  width: 60px;
}

Using the React Switch Component

There’s one last step required in order for us to use the React switch component, and that’s importing it into another component file and declaring it:

import React from 'react';
import Switch from "./Switch";

function App() {
  return (
    <div className="app">
      <Switch />
    </div>
  );
}

export default App;

Save the file, jump over to your browser, and watch the simple checkbox input transform into a rather beautiful-looking switch input!

Handling onChange and Passing a Value Through Props

Although our React switch may look like it’s functional, behind the scenes it’s not actually changing its value. That’s because our switch’s checkbox input does not have two very important attributes. These are:

- checked
- onChange

The checked attribute stores the input’s current value. In our case, it would be true or false. The onChange attribute is an event handler that triggers whenever the switch toggles. We’ll use this event handler to change the component’s current value. Before we jump into some code, let’s talk about ‘stateless’ components and ‘stateful’ components.
A stateless component, or ‘dumb’ component, is a component that does not control its own state. As a result, it requires another component to keep track of the React switch component’s state. Our React switch component is going to be a stateless component, so it requires us to pass a value in from a parent component through its props.

Open up Switch.js and modify it with the following:

import React from 'react';
import './Switch.css';

const Switch = ({ isOn, handleToggle }) => {
  return (
    <>
      <input
        checked={isOn}
        onChange={handleToggle}
        className="react-switch-checkbox"
        id={`react-switch-new`}
        type="checkbox"
      />
      <label
        className="react-switch-label"
        htmlFor={`react-switch-new`}
      >
        <span className={`react-switch-button`} />
      </label>
    </>
  );
};

export default Switch;

The code above makes four new additions:

- isOn is passed in through props
- handleToggle function is passed in through props
- checked attribute is added, and uses the value from the isOn prop
- onChange event handler is added and uses the handleToggle function prop

Finally, open up the parent component (I’m using App.js) and modify the React Switch component declaration to match the following code:

import React, { useState } from 'react';
import './App.css';
import Switch from "./Switch";

function App() {
  const [value, setValue] = useState(false);
  return (
    <div className="app">
      <Switch
        isOn={value}
        handleToggle={() => setValue(!value)}
      />
    </div>
  );
}

export default App;

Notice how this parent component now has state from using the useState Hook. That means that this component passes the state value down into our React switch component’s isOn prop. We also pass down the state setter function, setValue, into the handleToggle prop. As a result, when the Switch component is toggled and changes its value, it calls whatever was passed to the handleToggle prop.

Changing the Background Color onChange

If you saved the React switch component and toggled it in the UI, you’d see that there is no visual difference 🤔…

…yet.
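Even though nothing changes visually yet, the state round-trip is already working: the switch reads a value and calls the handler it was given, and only the parent mutates anything. Stripped of React entirely, that contract can be sketched in plain JavaScript (all names here are illustrative, not part of the tutorial's code):

```javascript
// Minimal stand-in for the parent's useState pair
function createToggleState(initial) {
  let value = initial;
  return {
    get: () => value,
    // what the parent passes down as handleToggle: flip the stored value
    handleToggle: () => { value = !value; },
  };
}

// The "switch" is dumb: it only mirrors the state it is handed
// and exposes the callback it was given
function renderSwitch(state) {
  return { checked: state.get(), onChange: state.handleToggle };
}

const state = createToggleState(false);
let ui = renderSwitch(state);
ui.onChange();            // the user clicks the switch
ui = renderSwitch(state); // the parent re-renders with the new value
console.log(ui.checked);  // true
```

The switch never owns the boolean; it can be dropped into any parent that supplies the pair, which is exactly what makes the stateless design reusable.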
We only have to make one simple change to the React switch component in order to change the background color of the switch. That’s because we have access to the switch’s state through the isOn prop. Modify the label HTML element inside Switch.js to the following code:

...
<label
  style={{ background: isOn && '#06D6A0' }}
  className="react-switch-label"
  htmlFor={`react-switch-new`}
>
...

Save the component, jump on over to your browser, and you’ll see a fully working Switch component that lights up green when it’s turned on!

And there we have it! We’ve made a complete React Switch component that toggles, changes value, and lights up green when it’s on. Read on if you want to learn how to expand our Switch’s functionality by specifying the on color.

Specifying the Switch Color

It’s good practice to build flexible React components so that they may be used in a variety of scenarios. For example, we might want to use a switch component:

- In a sign-in form, as a way to tell the site to remember your user credentials
- On a settings page
- In a modal dialog for deleting a user account

Those are three examples; however, there are countless implementations for a switch. Here’s the thing: right now, our React Switch component only lights up green. What if we wanted it to light up red when we use it in that modal for deleting a user account?

Let’s add another prop to our switch component, called onColor:

import React from 'react';
import './Switch.css';

const Switch = ({ isOn, handleToggle, onColor }) => {
  return (
    <>
      <input
        checked={isOn}
        onChange={handleToggle}
        className="react-switch-checkbox"
        id={`react-switch-new`}
        type="checkbox"
      />
      <label
        style={{ background: isOn && onColor }}
        className="react-switch-label"
        htmlFor={`react-switch-new`}
      >
        <span className={`react-switch-button`} />
      </label>
    </>
  );
};

export default Switch;

onColor will be a string value representing a hex color. Save that.
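The background: isOn && onColor pattern leans on JavaScript's short-circuit evaluation: when isOn is false the whole expression evaluates to false, and React skips boolean values when building inline styles, so no background is set. A plain-JS illustration of the expression itself (pulled out as a hypothetical helper, not code from the tutorial):

```javascript
// The exact expression used in the style prop, extracted as a function
function labelBackground(isOn, onColor) {
  // true  && '#06D6A0' -> '#06D6A0'  (the color string is applied)
  // false && '#06D6A0' -> false      (boolean style values are ignored)
  return isOn && onColor;
}

console.log(labelBackground(true, '#06D6A0'));  // '#06D6A0'
console.log(labelBackground(false, '#06D6A0')); // false
```

This is why the switch falls back to the grey background from the CSS when it is off: the inline style contributes nothing in that state.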
Jump over to the parent component and add the new onColor prop to the Switch declaration:

import React, { useState } from 'react';
import './App.css';
import Switch from "./Switch";

function App() {
  const [value, setValue] = useState(false);
  return (
    <div className="app">
      <Switch
        isOn={value}
        onColor="#EF476F"
        handleToggle={() => setValue(!value)}
      />
    </div>
  );
}

export default App;

Now we’ve got a flexible, modular React switch component!

Thanks for reading, and I really hope you enjoyed this tutorial. I write all of the tutorials on here, and I’m going to continue adding more and more tutorials each month. If there’s a particular tutorial that you want to see, I want to hear from you! Contact me through the about page, even if you aren’t suggesting a tutorial; I’d love to hear from you just the same!
https://upmostly.com/tutorials/build-a-react-switch-toggle-component
My gripe with Prototype

Many JavaScript libraries allow you to wrap other types in their own object. jQuery, for example, lets you wrap a DOM node in a jQuery object:

var myElem = $(myDomNode);

This augments the underlying variable with jQuery functionality. Besides the '$' (which can be turned off), jQuery pretty much keeps its hands off your global namespace.

Same with YUI. All the functionality is imported through the YUI object:

YUI().use('node-base', function(Y) {
  Y.on("domready", function() {
    console.log('ready!');
  });
});

This is a stark contrast with Prototype. As soon as you include it, it changes basic types such as strings, arrays and numbers. An example:

alert( [1, 2, 3].toJSON() ); // outputs "[1, 2, 3]"

The standard JSON API also reserves the name toJSON(): JSON.stringify() will call an object's toJSON() method, if one exists, and serialize its return value instead of the object itself:

var test = {
  prop1: 'val1',
  privateProp: 'hidden',
  toJSON : function() {
    return { prop1: this.prop1 };
  }
}

alert(JSON.stringify(test));

The output of this last example will be:

{"prop1":"val1"}

I would argue that this functionality is not a great design decision (separation of concerns!). However, it's there and it's standard. Prototype, however, adds a toJSON() method to every Array, Object and String, and in Prototype this name has a different meaning: the Prototype methods actually json-encode themselves and return a string. From an API perspective this is as bad a choice as JSON.stringify defining toJSON(), and this problem highlights exactly why it's a bad idea, as these two libraries both define a global toJSON and attach their own meaning to it. Example of how this fails:

JSON.stringify({ prop : [1, 2, 3, 4] });

The normal result:

{"prop":[1,2,3,4]}

The result with Prototype:

{"prop":"[1, 2, 3, 4]"}

The easy fix is to simply get rid of the toJSON functions, as such:

delete Object.prototype.toJSON;
delete Array.prototype.toJSON;
delete Hash.prototype.toJSON;
delete String.prototype.toJSON;

There's even a comment on Stack Overflow that fixes the issue and keeps Prototype's methods intact, but I know that as long as I will maintain applications that use Prototype, I'll have to deal with API collisions and incompatibilities.
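To see the collision end to end, here is a small script that simulates Prototype's patch, shows the broken output, and then applies the delete fix. The fake toJSON below only mimics Prototype's string-returning behaviour; it is not Prototype's actual implementation:

```javascript
// Simulate what including Prototype does: a global toJSON on Array
// that returns a *string*, not a structure to serialize
Array.prototype.toJSON = function () {
  return '[' + this.join(', ') + ']';
};

// JSON.stringify finds the method and serializes its string result,
// so the array ends up double-encoded as a quoted string
console.log(JSON.stringify({ prop: [1, 2, 3, 4] }));
// {"prop":"[1, 2, 3, 4]"}

// The easy fix from above
delete Array.prototype.toJSON;

console.log(JSON.stringify({ prop: [1, 2, 3, 4] }));
// {"prop":[1,2,3,4]}
```

Running this in any modern engine reproduces the mangled output without needing Prototype itself, which is exactly the danger: the breakage follows the global patch, not the library.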
Therefore, Prototype will never be the choice of JS library for me.
https://evertpot.com/my-gripe-with-prototype/
AutoVersioning plugin

An application versioning plugin that increments the version and build number of your application every time a change has been made, and stores them in version.h with easy-to-use variable declarations. It also has a feature for committing changes SVN style, a version scheme editor, a change log generator and more...

Introduction

The idea for the AutoVersioning plugin came up during the development of a pre-alpha software that required version info and status. Being too busy coding, without time to maintain the version number, I decided to develop a plugin that could do the job with as little intervention as possible.

Features

Here is a summarized list of the features the plugin covers:

- Supports C and C++.
- Generates and auto-increments version variables.
- Software status editor.
- Integrated scheme editor for changing the behavior of the auto-incrementation of version values.
- Date declarations as month, date and year.
- Ubuntu style version.
- Svn revision check.
- Change log generator.
- Works on Windows and Linux (no Mac available for testing).

Sources

Source code and news about new releases are usually posted on the Code::Blocks forums, in the plugins development section. You can access this plugin's topic over [/index.php/topic,6294.msg48225.html#msg48225 here]; recent sources are attached to the first post (you have to be logged in to download attachments on the forum). The plugin also has a project page. Submit all feature requests and bugs on the project page.

Svn check out: svn://svn.berlios.de/autoversioning

Usage

After downloading the sources, compiling them and installing the plugin into Code::Blocks, just go to the Project->Autoversioning menu. A pop up window like this will appear:

When hitting yes on the ask-to-configure message box, the main auto versioning configuration dialog will open, to let you configure the version info of your project.
Each editable control has a tool tip with information that explains its features; if you hover the mouse over a control, the tool tip will pop up. Below is a low quality screenshot showing all the tabs:

After configuring your project for auto versioning, the settings that you entered in the configuration dialog are stored in the project file, and a version.h file is created. For now, every time you hit the Project->Autoversioning menu, the configuration dialog will pop up to let you edit your project version and versioning-related settings, unless you don't save the new changes made by the plugin to the project file.

Dialog notebook tabs

Version Values

Here you just enter the corresponding version values, or let the auto versioning plugin increment them for you.

- Major - Increments by 1 when the minor version reaches its maximum.
- Minor - Increments by 1 when the build number passes the barrier of build times; the value is reset to 0 when it reaches its maximum.
- Build number (also equivalent to Release) - Increments by 1 every time that the revision number is incremented.
- Revision - Increments randomly when the project has been modified and then compiled.

Status

Some fields to keep track of your software status, with a list of predefined values for convenience.

- Software Status - The typical example would be v1.0 Alpha.
- Abbreviation - Same as software status, but like this: v1.0a

Scheme

Lets you edit how the plugin will increment the version values.

- Minor maximum - The maximum number that the Minor value can reach. After this value is reached, Major is incremented by 1, and the next time the project is compiled, Minor is set to 0.
- Build Number maximum - When this value is reached, Build Number is set to 0 the next time the project is compiled. Put a 0 for unlimited.
- Revision maximum - Same as Build Number maximum. Put a 0 for unlimited.
- Revision random maximum - The revision increments by random amounts up to the value you decide; if you put 1 here, the revision will obviously increment by 1.
- Build times before incrementing Minor - After successful changes to code and compilation, the build history increments; when it reaches this value, Minor is incremented.

Settings

Here you can set some settings of the auto versioning behavior.

- Autoincrement Major and Minor - Lets the plugin increment these values using the scheme. If not checked, only the Build Number and Revision are incremented.
- Create date declarations - Creates entries in the version.h file with dates and an Ubuntu style version.
- Do Auto Increment - Tells the plugin to automatically increment the values when a modification is made; this incrementation occurs before compilation.
- Header language - Selects the language output of version.h.
- Ask to increment - If checked (together with Do Auto Increment), asks you before compilation (if changes have been made) whether to increment the version values.
- svn enabled - Searches for the svn revision and date in the current folder and generates the correct entries in version.h.

Changes Log

This lets you enter every change made to the project, to generate a ChangesLog.txt file.

- Show changes editor when incrementing version - Pops up the changes log editor when incrementing the version.
- Title Format - A formattable title with a list of predefined values.

Including in your code

To use the variables generated by the plugin, just #include <version.h> (or the real name, in case you changed the default). An example would be the following:

#include <iostream>
#include "version.h"

int main(){
    std::cout << AutoVersion::MAJOR << std::endl;
    return 0;
}

Output of version.h

The generated header file.
Here is a sample content of the file on c++ mode: #ifndef VERSION_H #define VERSION_H namespace AutoVersion{ / On C mode is the same as C++ but without the namespace: #ifndef VERSION_H #define VERSION_H / Change log generator This dialog is accessible from the menu Project->Changes Log. Also if checked Show changes editor when incrementing version on the changes log settings, the window will open to let you enter the list of changes after a modification to the project sources or an incrementation event. Below is an example screen shot: Buttons Summary - Add - appends a row in to the data grid - Edit - enables the modification of the selected cell - Delete - removes the current row from the data grid - Save - stores into a temporary file (changes.tmp) the actual data for later procesing into the changes log file - Write - process the data grid data to the changes log file - Cancel - just closes the dialog without taking any action Here is an example of the output generated by the plugin to the ChangesLog.txt file: 03 September 2007 released version 0.7.34 of AutoVersioning-Linux Change log: -Fixed: pointer declaration -Bug: blah blah 02 September 2007 released version 0.7.32 of AutoVersioning-Linux Change log: -Documented some areas of the code -Reorganized the code for readability 01 September 2007 released version 0.7.30 of AutoVersioning-Linux Change log: -Edited the change log window -If the change log windows is leave blank no changes.txt is modified Removing AutoVersioning from a project There is no way to disable AutoVersioning through its GUI, you will have to manually edit the Code::Blocks project (.cbp) file. - Close the project. - Open the cbp file in a simple text editor like Notepad, gedit or TextEdit. - Find the <AutoVersioning> [...] </AutoVersioning> section. Remove it including the opening and closing tags, as well as everything in between. Save the file. - Reopen the project and remove the version.h file from it. 
- Delete the version.h file from your file system.
https://wiki.codeblocks.org/index.php/AutoVersioning_plugin
After your servers have been installed and configured with SharePoint and your farm topology has been created, the next step is to create and manage the Web applications that will be used by the SSP and site collections. A site collection is a group of Web sites on a virtual server; the Web sites have the same owner and administrative settings. An SSP is a collection of farm services, such as profiles, search, and audiences, that are made available and consumed by the associated Web applications and site collections. This section covers creating, deleting, and managing these Web applications, and establishes why Web application management knowledge is critical to the successful running of your farm's services and applications.

To host site collections and SSPs, you need to have a hosting application to manage those services. This hosting application for SharePoint Server 2007 is IIS 6.0. Although "Web applications" is the term we use to describe this functionality in SharePoint Server 2007, you might have heard Web applications referred to by other names, for example:

- An IIS administrator might refer to them as "Web sites."
- A SharePoint Portal 2003 administrator might refer to them as "virtual servers."

IIS 6.0 is a set of software services that enables Microsoft Windows Server 2003 to host Web sites, Simple Mail Transfer Protocol (SMTP) servers, File Transfer Protocol (FTP) sites, and Network News Transfer Protocol (NNTP) services. The type of Web sites that IIS 6.0 can host depends on the software that is installed and enabled on top of IIS. In the case of SharePoint Server 2007, you have installed the .NET Framework, which includes Windows Workflow Foundation. With these two components installed on top of IIS, you have a very feature-rich set of components with which to create the Web applications and SSPs. A Web application is defined by a port number or host headers, or both, and includes an application pool and authentication method.
As part of the creation process, a database is also created that is used for storing the content from the sites associated with the Web application. (You will learn about each of these options in the "Creating a New Web Application" section.) Once a Web application has been created, you can do one of three things:

- Extend it to create a new site collection.
- Extend and map it to an existing site collection.
- Leave it as is, to be used for hosting an SSP.

Each of these options will be covered in more detail later in this chapter, in the "Provisioning a Web Application" section. If you choose to extend the Web application and create a new site collection, you can associate the top-level site with a template such as a corporate portal or team site. By creating multiple Web applications and extending them with different templates, you can create standalone site collections that are all associated with a single SSP. All the sites in these Web applications can consume the available services from the SSP, such as profiles, audiences, and search. Within SharePoint Portal Server 2003, it was not possible to have standalone site collections and have them participate in shared services. This feature is a big step forward for Web applications.

An SSP is required before you create Web applications that will host site collections. This is because the Web application hosting a site collection needs to know which SSP it is associated with in order to consume the services provided by the SSP. The services that can be configured in an SSP and consumed by a Web application include some or all of the following:

- Profiles and My Sites
- Audiences
- Search
- Business Data Catalog
- Excel Services
- Usage Reporting

For more detailed information on configuring and managing the SSP, see Chapter 18. In SharePoint Server 2007, you can also create multiple SSPs, which can then provide different services to any Web applications associated with them.
This enables administrators to choose which Web applications belong to which SSPs, and therefore, what services are provided to the sites housed by the Web application. When creating Web applications, you need to decide whether you want to associate each Web application with its own application pool in IIS. There are several reasons to consider this issue:

- Each application pool runs in its own memory space using a worker process, which means that if an application pool fails, it does not affect other Web applications running in their own application pools.
- Each application pool requires additional resources and can easily consume up to 100 MB of physical memory without any connected users. This extra resource use is offset by the advantage gained by running each Web application in its own memory process.
- Multiple worker processes can be associated with a single application pool for resilience.

A final reason for creating multiple Web applications is that you can create a database for each one. This helps with managing database size and in situations such as performing disaster recovery and SQL maintenance jobs.

After completing your initial Central Administration tasks and configuring the server farm, the next task is to create a new SSP. As part of this process, you must create a new Web application, either through the Application Management section of Central Administration or during the creation of an SSP. Both methods take you to the Create New Web Application page (shown in Figure 7-4) in Central Administration. This section shows you both methods for accessing this page, and it describes the choices available to you for creating your Web application. If you need to create a new site collection or extend a Web application, use the Application Management page. If you are creating a new Web application for an SSP, use the New Shared Services Provider page.
To create a Web application through the Central Administration Application Management tab, complete the following steps:

- On the Central Administration Home page, select the Application Management tab.
- On the Application Management page (shown in Figure 7-1), in the SharePoint Web Application Management section, select Create Or Extend Web Application to open the Create Or Extend Web Application page, shown in Figure 7-2.
- Select Create A New Web Application to open the Create A New Web Application page.

Figure 7-2: Creating a new Web application on the Application Management page

To create a Web application when you create a new SSP, complete the following steps:

- On the Application Management page in Central Administration, in the Office SharePoint Server Shared Services section, select Create Or Configure This Farm's Shared Services to open the Manage This Farm's Shared Services page.
- To create an SSP, select New SSP.
- On the New Shared Services Provider page (shown in Figure 7-3), assign a unique identifier to the provider in the SSP Name box, click Create A New Web Application, and click OK to open the Create New Web Application page.

Figure 7-3: Creating a new Web application when creating the SSP

On the Create New Web Application page (see Figure 7-4), your application is automatically allocated a random port number, a description, and a folder location in the default local path. By default, this path is C:\Inetpub\wwwroot\wss\VirtualDirectories\portnumber. You are not, by default, given a host header value, so if you want users to reach this Web application through a fully qualified domain name rather than the server name and random port number, add that name in the Host Header box. You must ensure that this host header URL resolves for your users. Normally, this is achieved by adding an entry into DNS pointing the URL to the Web server.
Figure 7-4: Choosing a port and host header

On the Create New Web Application page, there are two authentication protocols available for a Web application: Kerberos and NTLM. By default, it is set to NTLM authentication for maximum compatibility with mixed-domain models and user account permissions, as shown in Figure 7-5. Web applications use these security mechanisms when they communicate with other servers and applications in the network, such as when communicating with the Microsoft SQL server hosting the databases.

Figure 7-5: Security configuration options for a Web application

Kerberos authentication is more secure than NTLM authentication, but it requires a service principal name (SPN) for the domain account that SharePoint is using. This SPN, which must be added by a member of the domain administrators group, enables the SharePoint account to use Kerberos authentication. When you choose NTLM authentication, it does not matter which domain account is being used by the Web application to communicate with the application pool, because the application pool will run as long as it has the required permissions to access the SQL server and the Web server. The required SQL permissions for a Web application account are configured on the Security Logins page in the SQL server's Enterprise Manager console. The required roles are as follows:

- Database Creator role
- Security Administrator role

Also in the Security Configuration section of the Create New Web Application page, you can enable anonymous access on the Web application, which enables users to gain access to the sites hosted on the Web application without authenticating. You must, however, also enable anonymous access on the site itself, because enabling it on the Web application only gets the users past IIS authentication. This is a useful configuration for any Internet-facing sites, such as a company Web site. To enable anonymous access in a site, follow these steps:

- Click Site Actions.
- Click Site Settings.
- Click Modify all Settings.
- Click Advanced Permissions.
- Click Settings.
- Click Anonymous Access to define the access rights for anonymous users.

For added security, you can also enable Secure Sockets Layer (SSL) certificates on the Web application. You can choose to use certificates from either your internal certificate authority or an authorized certificate authority such as Thawte or VeriSign. You must, however, install the SSL certificate on all servers where users will be accessing the Web application, or their access attempts will fail.

When you configure a load-balanced URL, it becomes the default URL with which users access the sites hosted on this Web application. To add a load-balanced URL, complete the following steps:

- On the Create New Web Application page, scroll down to the Load Balanced URL section, shown in Figure 7-6.
- Add your load-balanced URL, using the fully qualified domain name that will be used by your users.
To configure the URL mappings, complete the following steps: On the Central Administration Home page, click the Operations tab. On the Operations page, click Alternate Access Mappings in the Global Configuration section to open the Alternate Access Mappings Management page. Click Add Incoming URLs. Select the Web application hosting the load-balanced URL. Add the load-balanced URL to the incoming URL. Leave the zone set to Default. Click Save. An application pool is used to configure a level of isolation between different Web applications and their hosted sites. Each application pool is serviced by its own worker process (w3wp.exe). This means if one worker process hangs it will not affect other worker processes hosting different application pools. What type of install you have chosen determines how many application pools are created by default. Unless you have created a Standalone (Basic) installation, you will need to create at least one application pool for hosting the SSP and one for hosting the first Web application and its associated sites. When creating a new application pool, use a meaningful descriptive name to make it easy to identify in IIS. This naming strategy is especially useful in a disaster recovery scenario when you might have multiple application pools and random port numbers. When selecting a security account that will be used by the application pool, you can either choose a predefined local or network service account, or you can create and assign your own service account, as shown in Figure 7-7. In most cases, you will want to create and assign your own service account because it gives you the most flexibility for scaling out a server farm: Local Service is an account that has low-level access rights on the server and is useful when you do not need to connect to resources on remote computers. This is suitable only on a standalone installation with SharePoint and SQL Server on the same server. 
Network Service is also a low-level access rights account, but it also has the ability to connect to remote resources. Configurable allows you to assign a domain user account you created as the service account that will be used by the application pool to access the necessary services and servers, such as an SQL database. This account should be configured with the following rights: Figure 7-7: Creating a new application pool Select Restart IIS Automatically so that an iisreset is performed on all Web servers after the new Web application has been replicated. See Figure 7-8. Figure 7-8: Reseting Internet Information Services on the Web servers By default, the database server name presented is the SQL server configured in Central Administration and is the one used when you first installed the product and configured your farm. It is possible to specify a different SQL Server instance for a Web application. To configure the database name and authentication method, complete the following steps: Scroll down the Create New Web Application page to the Database Name And Authentication section, shown in Figure 7-9. Change the Database Server if it is different than the default. Choose a name for the new database, and type it in the Database Name box. Select a Database Authentication method. The default is Windows Authentication. Figure 7-9: Specifying the database server and name When configuring the database account, use Windows authentication and, by default, your SQL server will be set to accept only Windows authentication for security purposes. This account must have Create and Modify database rights in the SQL server and use the format of domainname\username. As shown in Figure 7-10, a Web application will use the search server that has been configured for the Office SharePoint Server Search service that is configured on the Services On Server page, as discussed in Chapter 6. 
Figure 7-10: The search server using the Office SharePoint Server Search service

A simple way to add resilience and enhance performance for an application pool is to create additional worker processes associated with that application pool. All Web applications and their sites will benefit from this additional availability of resources. In IIS 6.0, an application pool configuration that is supported by multiple worker processes is known as a Web garden. Creating additional worker processes creates additional w3wp.exe processes running in your operating system. You can see in Figure 7-11 that currently there are two w3wp.exe worker processes running on the SharePoint Server 2007 server.

Figure 7-11: Two worker processes

To create a Web garden and see the effect it has on the number of available w3wp.exe processes, complete the following steps:

1. Open Internet Information Services Manager from your administrative tools on the SharePoint Server 2007 server where you want the additional worker processes to be created. This should be the server your users are connecting to, such as a front-end server.
2. Expand Application Pools.
3. Right-click the application pool to be configured and select Properties.
4. Select the Performance tab, shown in Figure 7-12.
5. Under the Web Garden section, set the Maximum Number Of Worker Processes to 4.
6. Click Apply and then click OK.
7. Close the IIS Manager.
8. Open a command prompt and type IISRESET.

Figure 7-12: Adding more worker processes

Configuration is now complete. When the Web application that uses the application pool has multiple connections associated with it, multiple worker processes will be launched, up to a maximum of four, as shown in Figure 7-13.

Figure 7-13: A Web garden with four worker processes running

When a Web garden is running, each process is allocated its own memory space.
This means that if you allocate 800 MB of memory to the application pool and then set up a Web garden with three processes, the application pool will divide the 800 MB of memory usage between the three processes.

After you create a Web application, you have three options for provisioning it:

- Option 1: Extend the Web application, and create a new site collection.
- Option 2: Return to Central Administration, and create a new SSP.
- Option 3: Extend the Web application, and map it to an existing site collection.

Creating a new site collection allows you to select a template and extend the Web application with a site template. There are many new templates included with SharePoint Server 2007, and they are divided into four tabbed choices. Table 7-1 describes each of the tabs. To create a new site collection on a free Web application and choose a template, complete the following steps:

1. Go to Central Administration, and select the Application Management page.
2. Select Create Site Collection in the SharePoint Site Management section.
3. Give the site collection a title, URL, and administrator account.
4. Choose a template from the available four tabs (described in Table 7-1), and click OK.

Creating a new SSP allows you to configure and provide a new set of services to Web applications associated with that SSP. The new SSP will have its own management page. To create a new SSP on the Web application, refer to Chapter 18.

If you choose to extend and map the Web application, you select an unused Web application and, by extending it, redirect requests made to that Web application to another Web application already provisioned with a site collection. This allows you to change the authentication mechanism on the new Web application, for example to Basic authentication with an SSL certificate to support external users connecting from the Internet.
This method enables both Windows authenticated users and Basic authenticated users to access the same site collection and content, but to do so by using unique URLs from both internal and external networks.

After a Web application has been created, several additional management settings are available to configure. These options allow fine-tuning of the Web application, or even removing SharePoint altogether from the Web application. To configure the additional Web application settings, go to Central Administration and click the Application Management page. The following management options are listed under the SharePoint Web Application Management section:

- Remove SharePoint From IIS Web Site
- Delete Web Application
- Define Managed Paths
- Web Application Outgoing E-Mail Settings
- Web Application General Settings
- Content Databases
- Manage Web Application Features
- Web Application List

This section describes each of these management options.

The Remove SharePoint From IIS Web Site option allows you to unextend the Web application and remove SharePoint Services from using the Web application in IIS. The Web application itself is not deleted and can be used for reprovisioning a new site collection or SSP at any time. To use this option, complete the following steps:

1. Go to Central Administration, and click the Application Management page.
2. Click Remove SharePoint From IIS Web Site under the SharePoint Web Application Management section.
3. On the Unextend Windows SharePoint Services From IIS Web Site page, select a Web application by clicking the Web Application Name drop-down arrow and choosing to change the Web application.
4. Confirm the Web site and zone to be unextended.
5. If you want to delete the Web site from IIS and the IIS metabase, select Yes.
6. Click OK.

The Delete Web Application option allows you to delete the Web application and also the databases associated with it. Before deciding to delete a Web application, make sure you have a working backup.
To use this option, complete the following steps:

1. Go to Central Administration, and click the Application Management page.
2. Click Delete Web Application under the SharePoint Web Application Management section.
3. On the Delete Web Application page, select a Web application by clicking the Web Application Name drop-down arrow and choosing to change the Web application.
4. Select Yes to delete the content databases from the database server.
5. Select Yes to delete all the IIS Web sites and IIS metabase entries associated with the Web application.
6. Click OK.

The Define Managed Paths option lets you identify the managed paths that indicate which parts of the URL namespace are managed by each Web application in IIS. On the Define Managed Paths page, shown in Figure 7-14, you can also define a taxonomy within the namespace by creating new sites and associating them with specific namespaces such as customers or projects.

Figure 7-14: Adding a new managed path

There are two types of managed paths: explicit and wildcard. Use an explicit managed path to allow only a particular namespace, such as /customers, or use a wildcard managed path to include that namespace and all URL namespaces that come after it, such as /customers/*. To define managed paths, complete the following steps:

1. Go to Central Administration, and click the Application Management page.
2. Scroll to the SharePoint Web Application Management section, and click Define Managed Paths.
3. On the Define Managed Paths page, select a Web application by clicking the Web Application Name drop-down arrow and choosing to change the Web application.
4. Add the new path by typing the name in the Path box. If the new path is to be at the root of the Web application, start the path with a forward slash (/).
5. Click Check URL to ensure that the path is not already in use for the selected Web application.
6. Click OK.

By default, a global e-mail server for outgoing mail is defined on the Operations page of Central Administration.
You can, however, also specify a unique outgoing mail server for each Web application. This could be useful in a situation where a Web application that is hosting an extranet-facing site collection needs to use an SMTP server that resides within the screened subnet rather than on the internal network. To update the e-mail settings, complete the following steps:

1. Go to Central Administration, and click the Application Management page.
2. Scroll to the SharePoint Web Application Management section, and click E-mail Settings.
3. On the Web Application E-Mail Settings page, select a Web application by clicking the Web Application Name drop-down arrow and choosing to change the Web application.
4. Specify the Outbound SMTP server using the fully qualified domain name.
5. Choose a From and Reply To SMTP address.
6. Choose a localized character set.
7. Click OK.

Each Web application has individual settings that affect all the sites being hosted by the Web application. Table 7-2 describes each of these settings. To access the Web application settings, complete the following steps:

1. Go to Central Administration, and click the Application Management page.
2. Scroll to the SharePoint Web Application Management section, and click Web Application Settings.
3. On the Web Application Settings page, select a Web application by clicking the Web Application Name drop-down arrow and choosing to change the Web application.
4. Use Table 7-2 for information on each available field for the Web application settings.

You can associate multiple databases with each Web application. By default, only one database is displayed here, and it is the one specified when creating the Web application. The default content database is limited to 15,000 top-level sites that can be created within it. By creating additional databases, however, you can control database growth by spreading the quantity of sites across the databases.
You can then limit the total number of sites each database can hold, even as low as one site per database. You can give each top-level site its own database or group top-level sites by database. For example, if you have 20 top-level team sites, you can give each site its own database or put five sites together in each database. For this to work effectively, limit each new database for the site collections to one site, and create the new site collection after its associated database has been created. By doing this, you can name each database after its respective site collection, for example, projects_db for the projects team site. You will also benefit from creating additional content databases for the Web application hosting users' My Sites. It is recommended to have a dedicated Web application for hosting My Sites. Alternatively, you could store the users' My Sites portals on another Web application.

To configure the database settings for a Web application, complete the following steps:

1. Go to Central Administration, and click the Application Management page.
2. Scroll to the SharePoint Web Application Management section, and click Content Databases.
3. On the Manage Content Databases page, select a Web application by clicking the Web Application Name drop-down arrow and choosing to change the Web application.
4. To modify an existing database, click the database name in the list of available database names. For each database, you can modify the following settings:
   - Change the database status to offline or ready.
   - Change the number of top-level sites that can be created in the database and the number of sites at which a warning event is generated.
   - Select the Windows SharePoint Services search server.
   - Choose to remove the content database and all data in it.
5. Click Add A Content Database, and configure each of the following options for the new database:
   - Select a Web application for the new content database.
   - Choose a database server to host the content database.
   - Select the Authentication method to access the database server. The default and recommended method is Windows authentication. If you are using SQL authentication, add the SQL account name and password.
   - Define the total number of top-level sites that can be created in the database and the number of sites that can be created before a warning event is generated.

Click OK.

You can disable any of the features that are enabled by default on a Web application. This enables an administrator to restrict all the sites associated with that Web application from using a standard feature, such as the ability to convert a document for publishing from .docx to HTML. If you also have premium-level features available, such as the Business Data Catalog (BDC), you can choose to disable these features on a Web application as well. To manage Web application features, complete the following steps:

1. Go to Central Administration, and click the Application Management page.
2. Scroll to the SharePoint Web Application Management section, and click Manage Web Application Features.
3. On the Manage Web Application Features page, select a Web application by clicking the Web Application Name drop-down arrow and choosing to change the Web application.
4. Next to the feature, click Deactivate or Activate to disable or enable the feature.

The Web Application List allows you to get a quick view of all your configured Web applications and the associated URLs and port numbers for each Web application.
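The site-distribution arithmetic described earlier (20 top-level team sites, five per content database) can be sketched in plain Python. This is an illustration only, not a SharePoint API; the function name and structure are our own:

```python
# Illustration only (not a SharePoint API): group top-level sites into
# content databases, a fixed number of sites per database.
def group_sites(sites, sites_per_db):
    """Split a list of site names into per-database groups."""
    return [sites[i:i + sites_per_db] for i in range(0, len(sites), sites_per_db)]

teams = [f"team{i}" for i in range(1, 21)]  # 20 top-level team sites
databases = group_sites(teams, 5)
print(len(databases))      # → 4 content databases
print(len(databases[0]))   # → 5 sites in each
```

Setting sites_per_db to 1 gives the one-site-per-database layout recommended above for named databases such as projects_db.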
In this guide, we're going to build a simple Flask application using Docker, more specifically with Docker Compose, a powerful and convenient way to work with and configure Docker containers & services. We'll be using Nginx as our HTTP webserver and uWSGI as our application server, just like in the previous guide to deploying a Flask app on a virtual machine.

We're going to cover everything from scratch, so don't worry if you've never worked with Docker or Docker Compose before. Just make sure you've got them both installed on your machine and we'll try our best to cover the concepts as we go.

To install Docker, head to the link below:

To install Docker Compose, head to the link below:

Armed and ready with Docker and Docker Compose? Let's get started!

Application structure

Here's an overview of how our application is going to look. We'll get started by creating a new directory for our new project and move into it. We'll call ours app but feel free to name it whatever you like:

```shell
mkdir app && cd app
```

Inside of our app parent directory, we're going to create a couple more directories, one for each container: flask and nginx:

```shell
mkdir flask nginx
```

While we're here, let's create our docker-compose.yml file here in the app directory. We'll use this file to define our application services shortly:

```shell
touch docker-compose.yml
```

Feel free to create a .gitignore and readme.md too if you're going to be pushing this project up to Github!

```shell
touch .gitignore readme.md
```

Your project should now look something like this:

```
app
├── docker-compose.yml
├── flask/
├── nginx/
├── .gitignore
└── readme.md
```

Let's start off by building the basic Flask app and testing it locally before we do anything else.
The Flask app

Move into the flask directory:

```shell
cd flask
```

We'll start by creating a typical Flask project structure that you're likely to be familiar with, but first, let's create a virtual environment:

```shell
python -m venv env
```

Activate it:

```shell
source env/bin/activate
```

The only dependencies we require are flask and uwsgi; let's install them with pip:

```shell
pip install flask uwsgi
```

Once the packages are installed, we'll generate a requirements.txt:

```shell
pip freeze > requirements.txt
```

Now we can move on to building out our app! Go ahead and create a file called run.py. This will be the entrypoint to our app:

```shell
touch run.py
```

Open up run.py and add the following:

run.py

```python
from app import app

if __name__ == "__main__":
    app.run()
```

We also need an ini file for our uWSGI configuration. Go ahead and create app.ini and add the following:

app.ini

```ini
[uwsgi]
wsgi-file = run.py
callable = app
socket = :8080
processes = 4
threads = 2
master = true
chmod-socket = 660
vacuum = true
die-on-term = true
```

Let's quickly touch on a few of the settings in app.ini:

- wsgi-file = run.py - The file containing the callable (app)
- callable = app - The callable object itself
- socket = :8080 - The socket uwsgi will listen on (more on that later!)

You can read more about the uwsgi config here.

We need a directory for our application module to live in. Go ahead and create a new directory called app and move into it:

```shell
mkdir app && cd app
```

We need an __init__.py file. Create it, open it up and add the following:

app/__init__.py

```python
from flask import Flask

app = Flask(__name__)

from app import views
```

__init__.py simply imports Flask, creates the app object and imports our views.py file. The last file we need here is views.py, containing our app routes. Go ahead and create it and add the following:

app/views.py

```python
from app import app
import os


@app.route("/")
def index():
    # Use os.getenv("key") to get environment variables
    app_name = os.getenv("APP_NAME")
    if app_name:
        return f"Hello from {app_name} running in a Docker container behind Nginx!"
    return "Hello from Flask"
```

It's a simple route with the addition of app_name = os.getenv("APP_NAME"), which will become apparent later when we use docker-compose.yml to set some environment variables. Your flask directory should now look like this:

```
flask
├── app
│   ├── __init__.py
│   └── views.py
├── app.ini
├── requirements.txt
└── run.py
```

At this point, we're ready to test our application locally!

Testing the Flask app locally

Move back up to the flask directory:

```shell
cd ..
```

Just like when we normally run a Flask app, we need to set a few environment variables. Make sure you're in the flask directory and run the following:

```shell
export FLASK_APP=run.py
export FLASK_ENV=development
```

Now, run the app:

```shell
flask run
```

Head to the development server address in your browser and you should see:

```
Hello from Flask
```

If everything worked, use Ctrl + c to kill the Flask development server.

Flask Dockerfile

A Dockerfile is a special type of text file that Docker will use to build our containers, following a set of instructions that we provide. We need to create a Dockerfile for every image we're going to build: in this case, one for Flask and one for Nginx. Make sure you're in the flask directory and create a Dockerfile:

```shell
touch Dockerfile
```

Add the following:

flask/Dockerfile

```dockerfile
# Use the Python 3.7.2 image
FROM python:3.7.2-stretch

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
ADD . /app

# Install the dependencies
RUN pip install -r requirements.txt

# Run the command to start uWSGI
CMD ["uwsgi", "app.ini"]
```

Let's touch on the contents of our Dockerfile:

- FROM python:3.7.2-stretch - Pulls the python:3.7.2-stretch container base image
- WORKDIR /app - Sets our working directory as /app
- ADD . /app - Copies everything in the current directory to /app in our container
- RUN pip install -r requirements.txt - Installs the packages from requirements.txt
- CMD ["uwsgi", "app.ini"] - Starts the uwsgi server

You can read the Dockerfile reference here.

At this point, your flask directory should look like this:

```
flask
├── Dockerfile
├── app
│   ├── __init__.py
│   └── views.py
├── app.ini
├── requirements.txt
└── run.py
```

The ADD . /app instruction in our Dockerfile is going to copy everything in the flask directory into the new container, but we don't need it to copy our virtual environment or any cached Python files. To prevent this, we can create a .dockerignore file, just like how you'd create a .gitignore file, providing a list of file or directory names that we don't want Docker to copy over to our image. Go ahead and create it:

```shell
touch .dockerignore
```

Add the following:

```
env/
__pycache__/
```

At this point, your finished flask directory should look like this:

```
flask
├── Dockerfile
├── .dockerignore
├── app
│   ├── __init__.py
│   └── views.py
├── app.ini
├── requirements.txt
└── run.py
```

We're going to use the docker-compose.yml file to instruct Docker to build this image shortly; however, we still need to set up Nginx.

Nginx

Go ahead and move into the nginx directory we created in the base of our application:

```shell
cd ../nginx/
```

We'll start by creating an Nginx config file that we'll use to set up a server block and route traffic to our application.
Go ahead and create a new file called nginx.conf:

```shell
touch nginx.conf
```

Open it up and add the following:

nginx/nginx.conf

```nginx
server {
    listen 80;

    location / {
        include uwsgi_params;
        uwsgi_pass flask:8080;
    }
}
```

Inside of the server block:

- listen 80; - Instructs Nginx to listen for requests on port 80 (HTTP)
- include uwsgi_params; - Includes the uwsgi_params file

To map the server block to an IP address, you'd set the server_name value to your server's IP and place it just under listen 80;:

```nginx
server_name 1.2.3.4;
```

To map the server block to a domain:

```nginx
server_name example.com;
```

uwsgi_pass flask:8080; is where things get a little interesting! In our last Flask guide, we deployed a Flask app to a virtual machine, also with Nginx and uWSGI, and our Nginx server block looked like this:

```nginx
server {
    listen 80;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/username/app/app.sock;
    }
}
```

You'll notice the uwsgi_pass value is different between the two. On the virtual machine (i.e. not using Docker containers), the value for uwsgi_pass is unix:/home/username/app/app.sock, a full path to a local socket file. In that case, HTTP requests coming in are handled by Nginx and proxied to unix:/home/username/app/app.sock, where a uwsgi application server is listening and waiting to handle requests.

However, in the case of our Docker example, we have both Flask and Nginx each in their own container. So how do they communicate?

Docker Compose sets up a single network for our application, where each container is freely reachable by any other container on the same network, and discoverable by its container name. Naming our Flask container flask will give it a hostname of flask. Naming our Nginx container nginx will give it that hostname, and so on. We configured uwsgi to listen on socket :8080 in the app.ini file, so now any HTTP requests received by Nginx are proxied to the Flask container using uwsgi_pass flask:8080;. We name our containers in docker-compose.yml, so we'll touch more on that shortly.
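As a small aside, the flask:8080 target is simply a hostname and port pair, where the hostname is the Compose service name that resolves to the Flask container on the shared network. The helper below is our own, purely to illustrate how such a target splits into its two parts:

```python
# Illustration only: split a host:port target like the one in uwsgi_pass.
# "flask" is the hostname Docker Compose gives the Flask container.
def split_target(target):
    host, port = target.rsplit(":", 1)
    return host, int(port)

print(split_target("flask:8080"))  # → ('flask', 8080)
```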
On a side note, we'll cover how to set up a custom domain and HTTPS using certbot in a future guide. You can learn more about networking in Compose here.

To finish up our basic Nginx container, we need to create a Dockerfile to build the image.

Nginx Dockerfile

Make sure you're in the nginx directory and create a Dockerfile:

```shell
touch Dockerfile
```

Open it up and add the following:

nginx/Dockerfile

```dockerfile
# Use the Nginx image
FROM nginx

# Remove the default nginx.conf
RUN rm /etc/nginx/conf.d/default.conf

# Replace with our own nginx.conf
COPY nginx.conf /etc/nginx/conf.d/
```

We've only got 3 instructions in our Nginx Dockerfile:

- FROM nginx - Pulls the official Nginx container image
- RUN rm /etc/nginx/conf.d/default.conf - Removes the default Nginx config file
- COPY nginx.conf /etc/nginx/conf.d/ - Copies the nginx.conf file we just created into the container

At this point, your nginx directory should look like this:

```
nginx
├── Dockerfile
└── nginx.conf
```

The last thing to do now is create our docker-compose.yml.

Docker Compose

Docker Compose is a powerful tool with lots of features and configuration options available, allowing us to define, build and configure our services & containers all from one place. We're only touching on some of the basics in this guide, but we'd definitely recommend having a read through the documentation here to get a better understanding of the features and options of Compose.

Open up the docker-compose.yml file you created earlier and add the following:

docker-compose.yml

```yaml
version: "3.7"

services:

  flask:
    build: ./flask
    container_name: flask
    restart: always
    environment:
      - APP_NAME=MyFlaskApp
    expose:
      - 8080

  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
```

Let's go through what we've got here and briefly touch on some of the concepts. Every docker-compose.yml file must start with the version, followed by the services we want to build using services:, indenting any individual services thereafter. In our case, that's version: "3.7".
We've defined 2 individual services in the services block and named them flask and nginx respectively:

- build: ./flask - Instructs Docker to build the image using the Dockerfile found in the flask directory (relative to the docker-compose.yml file)
- container_name: flask - Gives our container the name of flask, also assigning it that hostname as we mentioned earlier
- restart: always - Makes the container always restart
- environment - A place for us to define environment variables for the container
- expose - Exposes internal ports to other containers and services on the same network

You'll see we have a similar setup with the nginx service, with the only difference being ports instead of expose, the two of which serve a fundamentally different purpose. ports are mapped as HOST:CONTAINER and will expose ports to the outside world, which in our case has mapped port 80 on the host machine to port 80 of our Nginx container:

```yaml
ports:
  - "80:80"
```

expose, on the other hand, has simply opened up port 8080 internally, allowing our Nginx container to communicate with the uwsgi server inside of the flask container listening on socket 8080:

```yaml
expose:
  - 8080
```

Right now, your project should be looking like so:

Save the file and jump back into your terminal. We're going to build and test out our app.

Building & testing

Run the following command in the same directory as docker-compose.yml to build the services:

```shell
docker-compose build
```

And now run the following to create and start the containers:

```shell
docker-compose up
```

You can optionally run the following to achieve the same result:

```shell
docker-compose up --build
```

You'll notice we didn't have to pass a filename to the command; Compose will look in the current directory for a docker-compose.yml file to build! Open the app in your browser to see your Flask app in action. You should see:

```
Hello from MyFlaskApp running in a Docker container behind Nginx!
```

To stop the service, hit Ctrl + c.
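The greeting above comes from the APP_NAME variable set in the environment section of docker-compose.yml. The same fallback logic we wrote in views.py can be exercised outside Flask as a quick sanity check (plain Python, no container needed):

```python
import os

# Same fallback logic as the index() route in views.py: use APP_NAME
# if it is set, otherwise fall back to a plain greeting.
def greeting():
    app_name = os.getenv("APP_NAME")
    if app_name:
        return f"Hello from {app_name} running in a Docker container behind Nginx!"
    return "Hello from Flask"

os.environ["APP_NAME"] = "MyFlaskApp"  # docker-compose sets this for us
print(greeting())  # → Hello from MyFlaskApp running in a Docker container behind Nginx!
```

Unset APP_NAME and the function falls back to "Hello from Flask", which is exactly what you saw when testing locally without Compose.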
Docker Compose commands

docker-compose comes with quite a large number of commands and options, which can be found by running:

```shell
docker-compose
```

We're not going to cover many of the commands in this guide, so be sure to read the documentation or at least run docker-compose and have a read.

Run the following to list any running images:

```shell
docker-compose images
```

```
Container   Repository     Tag      Image Id       Size
---------------------------------------------------------
flask       docker_flask   latest   d7ea63adcedf   899 MB
nginx       docker_nginx   latest   8b3031c24599   104 MB
```

docker-compose ps will list any running containers:

```shell
docker-compose ps
```

```
Name    Command                State   Ports
---------------------------------------------------------
flask   uwsgi app.ini          Up      8080/tcp
nginx   nginx -g daemon off;   Up      0.0.0.0:80->80/tcp
```

To stop our services:

```shell
docker-compose stop
```

Making changes

If you make any changes to the application, you'll have to rebuild the images by running:

```shell
docker-compose up --build
```

The best way to build and test your app is to use the virtual environment we created earlier and the Flask development server. Once you're happy with how your application is running on the development server, you can test it locally by building the services with Docker Compose.

Next steps

This guide was designed as a gentle introduction to building a basic Flask application and running it behind Nginx and uWSGI using containers and Docker Compose, along with some of the Docker and Compose concepts. Practically speaking, most applications are likely to have more services running alongside them, such as a database, cache, or task queue, in addition to a domain name, certificates, and being served over HTTPS. Some services are also likely to require persistent data, which can be achieved by configuring volumes for things such as database data, logs or any other data we'd like to make available between containers.
Any data stored in a container will be destroyed when we rebuild it; however, we can set up a volume on the host machine as a safe and persistent place to store it. When the containers are destroyed and rebuilt, our data remains!

The great thing about Docker is that now we can simply bolt on additional services, build new images and bring them together using Docker Compose. In a future part of this series we'll be adding additional services, including:

- Setting up a MongoDB database container
- Setting up a Redis container as a cache
- Setting up Certbot to generate a self-signed certificate
- Deploying the application

Thanks for reading!
Chapter 9: Computed Fields And Onchanges

The relations between models are a key component of any Odoo module. They are necessary for the modelization of any business case. However, we may want links between the fields within a given model. Sometimes the value of one field is determined from the values of other fields, and other times we want to help the user with data entry. These cases are supported by the concepts of computed fields and onchanges. Although this chapter is not technically complex, the semantics of both concepts is very important. This is also the first time we will write Python logic. Until now we haven't written anything other than class definitions and field declarations.

Computed Fields

Reference: the documentation related to this topic can be found in Computed Fields.

Note

Goal: at the end of this section:

- In the property model, the total area and the best offer should be computed.
- In the property offer model, the validity date should be computed and can be updated.

In our real estate module, we have defined the living area as well as the garden area. It is then natural to define the total area as the sum of both fields. We will use the concept of a computed field for this, i.e. the value of a given field will be computed from the value of other fields.

So far, fields have been stored directly in and retrieved directly from the database. Fields can also be computed. In this case, the field's value is not retrieved from the database but computed on the fly by calling a method of the model. To create a computed field, create a field and set its attribute compute to the name of a method. The computation method should set the value of the computed field for every record in self. By convention, compute methods are private, meaning that they cannot be called from the presentation tier, only from the business tier (see Chapter 1: Architecture Overview). Private methods have a name starting with an underscore _.
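Before writing the Odoo version, the idea of a computed field can be pictured with an ordinary Python property. This is only an analogy (no ORM, no dependency tracking), reusing the field names from the exercise:

```python
# Plain-Python analogy of a computed field: total_area is never stored;
# it is recomputed from living_area and garden_area on every access.
class EstateProperty:
    def __init__(self, living_area=0, garden_area=0):
        self.living_area = living_area
        self.garden_area = garden_area

    @property
    def total_area(self):
        return self.living_area + self.garden_area

prop = EstateProperty(living_area=120, garden_area=30)
print(prop.total_area)  # → 150
prop.garden_area = 50   # changing a dependency changes the computed value
print(prop.total_area)  # → 170
```

In Odoo, the same effect is achieved with a compute method, plus an explicit declaration of the dependencies, as shown next.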
Dependencies

The value of a computed field usually depends on the values of other fields, so the dependencies are declared on the compute method with the @api.depends() decorator:

```python
from odoo import api, fields, models


class TestComputed(models.Model):
    _name = "test.computed"

    total = fields.Float(compute="_compute_total")
    amount = fields.Float()

    @api.depends("amount")
    def _compute_total(self):
        for record in self:
            record.total = 2.0 * record.amount
```

Note

self is a collection. The object self is a recordset, i.e. an ordered collection of records. It supports the standard Python operations on collections, e.g. len(self) and iter(self), plus extra set operations such as recs1 | recs2. Iterating over self gives the records one by one, where each record is itself a collection of size 1. You can access/assign fields on single records by using the dot notation, e.g. record.name.

Many examples of computed fields can be found in Odoo. Here is a simple one.

Exercise

Compute the total area.

- Add the total_area field to estate.property. It is defined as the sum of the living_area and the garden_area.
- Add the field in the form view as depicted on the first image of this section's Goal.

For relational fields it's possible to use paths through a field as a dependency:

```python
description = fields.Char(compute="_compute_description")
partner_id = fields.Many2one("res.partner")

@api.depends("partner_id.name")
def _compute_description(self):
    for record in self:
        record.description = "Test for partner %s" % record.partner_id.name
```

The example is given with a Many2one, but it is valid for a Many2many or a One2many. An example can be found here.

Let's try it in our module with the following exercise!

Exercise

Compute the best offer.

- Add the best_price field to estate.property. It is defined as the highest (i.e. maximum) of the offers' price.
- Add the field to the form view as depicted in the first image of this section's Goal.

Tip: you might want to try using the mapped() method. See here for a simple example.

Inverse Function

You might have noticed that computed fields are read-only by default.
This is expected since the user is not supposed to set a value. In some cases, it might be useful to still be able to set a value directly. In our real estate example, we can define a validity duration for an offer and set a validity date. We would like to be able to set either the duration or the date, with one impacting the other. To support this Odoo provides the ability to use an inverse function:

from odoo import api, fields, models

class TestComputed(models.Model):
    _name = "test.computed"

    total = fields.Float(compute="_compute_total", inverse="_inverse_total")
    amount = fields.Float()

    @api.depends("amount")
    def _compute_total(self):
        for record in self:
            record.total = 2.0 * record.amount

    def _inverse_total(self):
        for record in self:
            record.amount = record.total / 2.0

An example can be found here.

A compute method sets the field while an inverse method sets the field's dependencies. Note that the inverse method is called when saving the record, while the compute method is called at each change of its dependencies.

Exercise

Compute a validity date for offers.
- Add the following fields to the estate.property.offer model, where date_deadline is a computed field defined as the sum of two fields from the offer: the create_date and the validity. Define an appropriate inverse function so that the user can set either the date or the validity.
- Tip: the create_date is only filled in when the record is created, therefore you will need a fallback to prevent crashing at time of creation.
- Add the fields in the form view and the list view as depicted on the second image of this section's Goal.

Additional Information

Computed fields are not stored in the database by default. Therefore it is not possible to search on a computed field unless a search method is defined. This topic is beyond the scope of this training, so we won't cover it. An example can be found here.

Another solution is to store the field with the store=True attribute.
While this is usually convenient, pay attention to the potential computation load added to your model. Let's reuse our example:

description = fields.Char(compute="_compute_description", store=True)
partner_id = fields.Many2one("res.partner")

@api.depends("partner_id.name")
def _compute_description(self):
    for record in self:
        record.description = "Test for partner %s" % record.partner_id.name

Every time the partner name is changed, the description is automatically recomputed for all the records referring to it! This can quickly become prohibitive when millions of records need recomputation.

It is also worth noting that a computed field can depend on another computed field. The ORM is smart enough to correctly recompute all the dependencies in the right order... but sometimes at the cost of degraded performance.

In general, performance must always be kept in mind when defining computed fields. The more complex your field is to compute (e.g. with a lot of dependencies, or when a computed field depends on other computed fields), the more time it will take to compute. Always take some time to evaluate the cost of a computed field beforehand. Most of the time it is only when your code reaches a production server that you realize it slows down a whole process. Not cool :-(

Onchanges

Reference: the documentation related to this topic can be found in onchange().

Note

Goal: at the end of this section, enabling the garden will set a default area of 10 and an orientation to North.

In our real estate module, we also want to help the user with data entry. When the 'garden' field is set, we want to give a default value for the garden area as well as the orientation. Additionally, when the 'garden' field is unset we want the garden area to reset to zero and the orientation to be removed. In this case, the value of a given field modifies the value of other fields.
The 'onchange' mechanism provides a way for the client interface to update a form without saving anything to the database whenever the user has filled in a field value. To achieve this, we define a method where self represents the record in the form view and decorate it with onchange() to specify which field it is triggered by. Any change you make on self will be reflected on the form:

from odoo import api, fields, models

class TestOnchange(models.Model):
    _name = "test.onchange"

    name = fields.Char(string="Name")
    description = fields.Char(string="Description")
    partner_id = fields.Many2one("res.partner", string="Partner")

    @api.onchange("partner_id")
    def _onchange_partner_id(self):
        self.name = "Document for %s" % (self.partner_id.name)
        self.description = "Default description for %s" % (self.partner_id.name)

In this example, changing the partner will also change the name and the description values. It is up to the user whether or not to change the name and description values afterwards. Also note that we do not loop on self: this is because the method is only triggered in a form view, where self is always a single record.

Exercise

Set values for garden area and orientation.

Create an onchange in the estate.property model in order to set values for the garden area (10) and orientation (North) when garden is set to True. When unset, clear the fields.

Additional Information

Onchange methods can also return a non-blocking warning message (example).

How to use them?

There is no strict rule for the use of computed fields and onchanges. In many cases, both computed fields and onchanges may be used to achieve the same result. Always prefer computed fields since they are also triggered outside of the context of a form view. Never ever use an onchange to add business logic to your model. This is a very bad idea since onchanges are not automatically triggered when creating a record programmatically; they are only triggered in the form view.
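The "computed fields fire everywhere, onchanges only fire in the form view" point can be sketched outside Odoo in plain Python. The class and field names below are illustrative stand-ins, not Odoo API:

from odoo import api, fields, models  # Odoo-style code shown above; the sketch below is framework-free

```python
# Plain-Python sketch (not Odoo code): a "computed" value derived on every
# access stays correct for records created programmatically, while an
# "onchange"-style hook only runs when the form view explicitly calls it.
class PropertySketch:
    def __init__(self, living_area=0, garden_area=0):
        self.living_area = living_area
        self.garden_area = garden_area
        self.total_onchange = 0  # stays stale unless the "form" calls the hook

    @property
    def total_computed(self):
        # "computed field": re-derived from its dependencies on every access
        return self.living_area + self.garden_area

    def _onchange_area(self):
        # "onchange": only the form view would ever trigger this
        self.total_onchange = self.living_area + self.garden_area


# Record created programmatically -- no form view, so no onchange fired:
rec = PropertySketch(living_area=30, garden_area=10)
print(rec.total_computed)   # 40 -- always consistent
print(rec.total_onchange)   # 0  -- stale: business logic here would be lost
```

This is exactly why business logic belongs in computed fields (or ordinary model methods), never in an onchange.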
The usual pitfall of computed fields and onchanges is trying to be 'too smart' by adding too much logic. This can have the opposite result of what was expected: the end user is confused by all the automation.

Computed fields tend to be easier to debug: such a field is set by a given method, so it's easy to track when the value is set. Onchanges, on the other hand, may be confusing: it is very difficult to know the extent of an onchange. Since several onchange methods may set the same fields, it easily becomes difficult to track where a value is coming from.

When using stored computed fields, pay close attention to the dependencies. When computed fields depend on other computed fields, changing a value can trigger a large number of recomputations. This leads to poor performance.

In the next chapter, we'll see how we can trigger some business logic when buttons are clicked.
https://www.odoo.com/documentation/master/developer/howtos/rdtraining/09_compute_onchange.html
1999.08.16 16:40 "BYTE tag bug"

Hi,

libtiff V3.4Beta 032, NT system (although I don't believe the bug is system dependent).

Summary: Adding extra private tags of type BYTE (or BYTE array) doesn't work properly. I think I have identified the problem and I have included a possible fix.

Background: I added some new tags to the library, using the xtiff_dir method contributed by Niles Ritter. The tags I added included LONG, ASCII, BYTE and variable-array-of-BYTE type tags. The method worked well for the LONG and ASCII type tags, but BYTE and BYTE array tags didn't write properly.

Solution: I have tracked the problem down to TIFFWriteNormalTag in dir_write.c. The basic problem is that the switch statement in there doesn't handle the BYTE type case - it falls straight through. My initial thought was that the UNDEFINED case might do pretty much what I wanted, but that isn't quite right either. I think adding the following case fixes the problem:

case TIFF_BYTE:
    if (wc > 1) {
        char* cp;
        if (wc == (u_short) TIFF_VARIABLE) {
            TIFFGetField(tif, fip->field_tag, &wc, &cp);
            dir->tdir_count = wc;
        } else
            TIFFGetField(tif, fip->field_tag, &cp);
        if (!TIFFWriteByteArray(tif, dir, cp))
            return (0);
    } else {
        char cv;
        TIFFGetField(tif, fip->field_tag, &cv);
        if (!TIFFWriteByteArray(tif, dir, &cv))
            return (0);
    }
    break;

I think that the UNDEFINED case might also be incorrect, and perhaps needs to use the same code as above, but I am not sure.

Regards
Martin McBride
https://www.asmail.be/msg0055471562.html
Okay, I have a slight problem here. My final project was pretty much open to anything. We just had to make a proposal and do it. So, I proposed a bank-type program. I had it done and in the bag (or so I thought) until I was told to include linked lists somewhere in it. So, I decided to sorta keep track of the deposits in one linked list and keep track of withdrawals in another linked list. However, I keep getting errors that say "jump to case label - crosses initialization of Nodetype newNodePtr". I know it has something to do with me trying to use linked lists inside a switch statement. I really do not want to change the format of my entire program. Is there any way around this? My code is below:

//**************************************************************************
//This program simulates the services offered by a bank. The user can make
//deposits, withdraw, check their balance.
//***************************************************************************
#include <iostream>
#include <string>
#include <cstddef> //to access NULL
using namespace std;

struct NodeType; //forward declaration
typedef NodeType* NodePtr;

struct NodeType
{
    char transaction;
    float amt;
    float total;
    NodePtr next;
};

void chooseService();

float balance = 0.00;
NodePtr listPtr = NULL; //external pointer

int main()
{
    string usernameFirst, usernameLast;
    char answer;

    cout<<"\n ***Welcome to the Bank of AA&J***\n\n\n";
    cout<<"Please enter your name: ";
    cin>>usernameFirst>>usernameLast;
    cout<<"\n\nWelcome "<<usernameFirst<<" "<<usernameLast<<"!"<<endl;
    cout<<"\n";
    chooseService();

action:
    cout<<"\nWould you like to perform another action? (y or n)";
    cin>>answer;
    if(answer=='n')
    {
        cout<<"\nThank you! Goodbye!\n\n";
    }
    if(answer=='y')
    {
        chooseService();
        goto action;
    }
    system("pause");
    return 0;
}

//***************************************************************************
void chooseService()
{
    char service;
    float add;
    float subtract;

    cout<<"\nPlease choose a service: "<<
        "\n D-to deposit"<<
        "\n W-to withdraw"<<
        "\n C-to check balance"<<
        "\n H-to see account history\n";
    cin>>service;

    switch(service)
    {
    case 'D' :
        cout<<"Enter amount to deposit: $";
        cin>>add;
        balance=balance+add;
        NodePtr newNodePtr=new NodeType;
        newNodePtr->amt=add;
        newNodePtr->total=balance;
        listPtr=newNodePtr;
        break;
    case 'W' :
        cout<<"Enter amount to withdrawal: $";
        cin>>subtract;
        balance=balance-subtract;
        break;
    case 'C' :
        cout<<"Your balance is: ";
        cout<<"$"<<balance;
        break;
    case 'H' :
        NodePtr currNodePtr;
        currNodePtr = listPtr;
        while (currNodePtr != NULL)
        {
            cout<<"The transaction amount was: $"<< currNodePtr->amt<< endl;
            cout<<"Balance: $"<<currNodePtr->total<<endl;
            currNodePtr = currNodePtr -> next;
        }
        break;
    default :
        cout<<"error()";
        break;
    }
}
//****************************************************************************

If you take out everything pointer/linked list related, it compiles perfectly. Just need ideas on how to do a linked list inside a switch. Thanks for any help as usual!
https://www.daniweb.com/programming/software-development/threads/279474/linked-list-inside-switch
Objective

This example project demonstrates how to configure and use an internal PIC24F timer using an interrupt. You will be shown how to configure a PIC24 timer to generate an overflow interrupt every half-second. You will write a simple C function to toggle an LED. You will register the C function as the Interrupt Service Routine (ISR) for the timer, ensuring that each time the timer overflows, the LED on the development board will toggle.

This lab uses interrupts. If you are looking for an example of using a PIC24F which polls the timer overflow flag, please refer to the "Programming a Microchip PIC24F Timer" example.

With the use of MPLAB Code Configurator (MCC), this project demonstrates the following:
- Creating a project for the 16-bit Microchip MCU on the development board
- Configuring the MCU system oscillator to run off the internal RC oscillator at 4 MHz
- Configuring one of the I/O pins connected to an LED as an output
- Adding a timer to the list of peripherals used by the application
- Configuring the added timer to overflow every half-second
- Configuring the timer interrupt priority
- Generating the MCC code
- Manually editing the MCC-generated code to:
  - Create a function which toggles the LED
  - Register the just-written function as the ISR for the timer
  - Start the timer operation when the application begins execution
- Building and programming the completed application into the development board

As a result of this lab, the selected LED will change state every half-second. It is expected that after completing this lab, you will be able to use MCC to set a timer period of any value on a 16-bit PIC MCU.

This project uses the PIC24F Curiosity Development Board. This board has two LEDs and two input switches:

LED1 - connected to pin RA9 (PORTA pin 9). RA9 is set as an output pin.
- Rename RA9 as LED1.
5 Configure Timer1 for Half-second Period with the Interrupt Enabled

With Fosc set at 2 MHz, Timer1 will be unable to generate a half-second period, as the 16-bit counter will overflow before a half-second elapses. To lengthen the period window for this timer, we will prescale the system clock by 64 to slow the timer down. Once the timer is slowed, we will be able to select the half-second timer period.

Select TMR1 from the project window and select the following settings. Ensure that the Enable Timer Interrupt box is checked.

6 Verify the Priority of the Timer Interrupt

Select the Interrupt Module icon in the Project Resources window to verify that the Timer1 interrupt has been enabled. For this example, Timer1 should be shown as the only enabled interrupt with the default priority of '1'. Please refer to the 16-bit Interrupts page for details on how interrupts are implemented and programmed on this device.

7 Generate the MCC Code

The generated functions (including TMR1_Initialize()) initialize Timer1, set up the output pin and establish the interrupt by writing to the device's function registers. If you are interested in learning more about the details of device initialization, please consult the PIC24FJ128GA204 datasheet for the specific registers and settings used to configure the I/O pins and Timer1.

8 Modify the MCC-generated Code to Complete the Application

We will now modify main.c.
An inspection of the MCC-generated header files pin_manager.h and tmr1.h shows MCC has created several control functions useful for our application.

pin_manager.h function prototype:
- LED1_Toggle(): changes the value of the I/O pin connected to LED1

tmr1.h function prototypes:
- TMR1_Start(): starts the operation of Timer1
- TMR1_SetInterruptHandler(): registers a user-defined function as the ISR for Timer1

Make the following modifications to main.c:
- Insert the text #include "mcc_generated_files/mcc.h" near the top of the file
- Create a function void My_ISR(void); with one line of code: LED1_Toggle();
- Insert TMR1_Start() and TMR1_SetInterruptHandler(My_ISR) into main.c as follows:

main.c

#include "mcc_generated_files/system.h"
#include "mcc_generated_files/mcc.h"   //##### must be added #####

void My_ISR(void)                      //#############
{
    LED1_Toggle();
}

int main(void)
{
    SYSTEM_Initialize();

    TMR1_SetInterruptHandler(My_ISR);  //##########
    TMR1_Start();                      //##########

    while (1)
    {
    }
    return 1;
}

#include "mcc_generated_files/mcc.h" is required to be placed in any application source file which accesses MCC-generated functions.

8 Build, Download and Run the Code

To run the program on the development board, click on the Make and Program Device Main Project button.

Results

When the application is built and programmed into the MCU, LED1 will change state every half-second.
http://microchip.wikidot.com/projects:16bit-interrupts
On Wed, 2009-07-01 at 18:40 +0200, Guido Günther wrote:
> Hi [...]

Ok perfect thanks.

> [...]

No matter what, I can't trigger this segfault. Do you have a build log for the package? And could you send me the make/defines.mk in the build tree? gcc versions and usual toolchain info... maybe it's a gcc bug or maybe it's an optimization that behaves differently between Debian and Fedora.

I have attached a small test case to simply test libccs. At this point I don't believe it's a problem in libfence. Could you please run it for me and send me the output? If the bug is in libccs this would start isolating it.

[root@fedora-rh-node4 ~]# gcc -Wall -o testccs main.c -lccs
[root@fedora-rh-node4 ~]# ./testccs
-hopefully some output-

And please check the XPath query at the top of main.c as it could be slightly different given your config.

Thanks
Fabio

#include <stdio.h>
#include <stdlib.h>
#include <ccs.h>
#include <strings.h>
#include <limits.h>

static void dump()
{
    int cd, error;
    const char *path = "/cluster/fencedevices/fencedevice[ name=\"fence2\"]/@*";
    char *str;

    cd = ccs_connect();
    if (cd < 0)
        return;

    for (;;) {
        error = ccs_get_list(cd, path, &str);
        if (error || !str)
            break;
        printf("%s\n", str);
        free(str);
    }

    ccs_disconnect(cd);
}

int main()
{
    printf("xpathlite\n");
    dump();

    printf("fullxpath\n");
    fullxpath = 1;
    dump();

    return 0;
}
https://www.redhat.com/archives/linux-cluster/2009-July/msg00010.html
Missing and/or Incorrect Docs - Ext 1.1.x

This thread is solely for reporting bugs in the Ext 1.1.x documentation (currently available here). I'll be consolidating all the unresolved reports from the old 1.0 - 1.1RC1+ threads into this thread. I'll officially say so (in here) when I've completed this task. In the meantime, be sure to search in the old doc bugs threads for issues before making a report here.

p.s. stupid me accidentally went and deleted the original 1.1 doc bugs thread instead of a post. Thankfully there was only 1 bug reported, and I've reposted that below.

[edit] I've completed consolidating all unresolved doc issues from Ext 1.0 - 1.1RC1 into a single list, available below. I've also checked all of them against the current Ext 1.1 docs to ensure they're still unresolved. Where possible, I've included a [link] pointing to the original post / a place in the docs where the problem is evident.

Some things to note:
- post only documentation bugs
- always CTRL+F BEFORE posting
- I will regularly update the list below to reflect all verified doc issues reported in this thread
- you might also want to take a look at this thread before posting

Ext Documentation Team members: please mark any resolved issues in the list below with a strikethrough using the [s][/s] tags. Thanks.

[edit] As the list has grown really long, all classes with unresolved issues will be marked in red.

Ext 1.1.x Documentation Bug List

All Classes
- main constructor in "Public Methods" section is not hyperlinked to corresponding description in "Method Details" section - This will no longer be an issue in the 2.0 doc center and will not be fixed for 1.x

Class Ext
- Old reference to Yahoo in description for method namespace [link]: "Creates namespaces but does not assume YAHOO is the root."
- missing docs for method destroy
- missing docs for method num

Class Ext.BorderLayout
- missing method create [link]
- missing all properties except monitorWindowResize [link] - Skipped - This must be referring to a different object because BorderLayout is fully doc'd
- missing config factory

Class Ext.Component
- invalid reference to "Ext.Container" in description for method destroy [link] (all subclasses of Ext.Component are also affected)

Class Ext.data.DataReader
- missing docs [link] - This is a base class that is not intended to be used directly, so private comments were added to that effect.

Class Ext.data.Record
- config dateFormat for method create shows "{@link Date#Date.parseDate}" [link] - (removed link)
- incorrect return type for method create [link] -- should be Ext.data.Record, not void
- missing square brackets in example for method create [link] -- should be

Code:
var TopicRecord = Ext.data.Record.create([
    {name: 'title', mapping: 'topic_title'},
    {name: 'author', mapping: 'username'},
    {name: 'totalPosts', mapping: 'topic_replies', type: 'int'},
    {name: 'lastPost', mapping: 'post_time', type: 'date'},
    {name: 'lastPoster', mapping: 'user2'},
    {name: 'excerpt', mapping: 'post_text'}
]);

Class Ext.data.Store
- incorrect description for method getModifiedRecords: "...when you load a data store, its modified records do NOT get cleared" [link]
- incorrect punctuation for description of event metachange [link]
- missing method startAutoRefresh [link] - Skipped - This is not a method of Store. The OP was referring to Animal's Store override which I think he mistakenly thought had been added to Ext (?)
- incorrect description for event remove [link] -- should say "Fires when a Record has been removed from the Store"
- incorrect description for event update [link] -- should say "Fires when a Record has been updated"
- incorrect description [link] -- "This is currently only support for JsonReaders." should be "This is currently only supported for JsonReaders."
Class Ext.data.Tree
- missing method proxyNodeEvent - (marked private)
- missing method registerNode - (marked private)
- missing method unregisterNode - (marked private)
- missing method toString - (marked private)
- missing config pathSeparator [link]

Class Ext.data.XmlReader

Class Ext.DatePicker
- missing config value [link] - Skipped - This is not a config -- it's a private variable that can only be set/retrieved via its accessor methods

Class Ext.dd.DragDrop
- incorrect return type of String for property groups [link] -- should be an Object of type {groupName1 : true, groupName2 : true}
- incorrect return type of void for method padding [link] -- should be an Object with array values 0, 1, 2 or 3 denoting the padding
- missing configs (as with all subclasses of Ext.dd.DragDrop)

Class Ext.DialogManager

Class Ext.DomHelper
- method createTemplate [link]
  -- incorrect description: should be "Creates a new Ext.Template from the DOM object spec"
  -- incorrect return type: should be Ext.Template

Class Ext.Editor

Class Ext.Element
- invalid return type /HTMLElementElement for method wrap [link]
- missing method uncache [link] - (Marked private)
- missing method garbageCollect [link] - (Marked private)
- method setLeftTop missing arguments [link]
- method translatePoints -- param An should be the return object instead [link]
- incorrect description for method boxWrap [link]
  -- currently says "class: A base CSS class to apply to the containing wrapper element (defaults to 'x-box')"
  -- should be "class: A class name for the containing wrapper element, and a class name prefix for it's related child elements. (defaults to 'x-box')"
  -- docs should also give an example usage as outlined in this post.

Class Ext.EventManager

Class Ext.form.Action

Class Ext.form.BasicForm
- missing config options for method load [link]
- missing property el [link] - el is private.
  Public method getEl() was added for this purpose
- method doAction -- options object missing config success [link]

Class Ext.form.Checkbox
- incorrect description for config autoCreate's default value. [link] should be "(defaults to {tag: "input", type: 'checkbox', autocomplete: "off"})".

Class Ext.form.ComboBox
- missing config store
- missing config title
- missing config tpl [link] - Skipped - This is not really a config. It's more like a protected property that can be overridden by subclasses.
- incorrect state name abbreviations in states.js - (also changed Tennessee TE -> TN)
- incorrect signature for event beforequery [link]
  -- currently "beforequery : ( Ext.form.ComboBox combo, String query, Boolean forceAll, Boolean cancel, Object e )", should be "beforequery : ( Ext.form.ComboBox combo, Object qe )"
  -- currently "Fires before all queries are processed. Return false to cancel the query or set cancel to true. The event object passed has these properties:...", should be "Fires before all queries are processed. Return false to cancel the query or set cancel to true. The query event object passed has these properties:...". Also note that the actual qe object only has four attributes, namely:
    - combo
    - query
    - forceAll
    - cancel
- config minChars - "The docs say that it defaults to 4. In the code if mode is local and minChars is undefined it gets set to 0. Hence, docs need modifying to take this into account." [link]

Class Ext.form.DateField
- method formatDate should be marked public & doc'ed? [link] - Skipped - The post by Jack that was referred to mentions these methods in the context of being able to override them if necessary to implement custom parsing. In that sense, they could be considered protected, but they are definitely not public (should never be called directly from external code).
- method parseDate should be marked public & doc'ed? [link] - Skipped - Ditto.
Class Ext.form.Field

Class Ext.form.Form

Class Ext.form.HtmlEditor
- "it's not written in doc that there is a limit of one editor per page" [link] - (added note to header)
- missing config autoCreate - Skipped - This is explicitly hidden at the bottom of the class as an unsupported property

Class Ext.form.Layout

Class Ext.form.TextArea
- incorrect description for config autoCreate's default value. [link] should be "(defaults to {tag: "textarea", style:"width:300px;height:60px;", autocomplete: "off"})".

Class Ext.form.TextField
- missing config vtypeText (all subclasses of TextField also suffer from the same problem) [link]
- incorrect type for config maskRe [link] -- should be RegExp, not String

Class Ext.form.TriggerField
- incorrect description for config autoCreate's default value. [link] should be "(defaults to {tag: "input", type: "text", size: "16", autocomplete: "off"})".
- the following incompatible configs are marked as hidden, but still appear in the docs (also affects all subclasses of TriggerField): [link] - Doc parse issue, should be fixed for 2.0 beta 1
  - grow
  - growMax
  - growMin

Class Ext.form.TwinTriggerField
- missing docs - Skipped - This is an abstract base class that can be extended, not a public component to be used directly. Comments have been added to that effect, but this class will not show up in the API docs.

Class Ext.grid.CellSelectionModel

Class Ext.grid.ColumnModel
- missing config fixed [link]
- incorrect description for config sortable [link] -- "Defaults to true." should instead read "Defaults to the value of the defaultSortable property."
- missing config css [link] (and possibly missing configs attr, cellId, id and value too) - See note below...
- method setRenderer > parameter fn > cell metadata description -- missing the following parameters (refer to /ext/src/widgets/grid/GridView.js lines 834-837) [link]
  - cellId
  - id
  - value
  "I assumed it meant a style like 'width:195px', but the code in GridView adds the CSS to the cell's class list, not the style." - See note below...
- event columnlockchange is missing the "n". [link]
- event headerchange - param newText should be a String, not a Number. [link]
- missing method isLocked [link] - Added in 1.1, but not 2.0 since it is no longer supported

Notes regarding CM configs:

The css column property is supported (and I added it), but probably NOT for the reason you think. The GridView class is fairly complex, and you cannot simply look at everything in the cell template enclosed by a {} and assume that it's a valid config placeholder. E.g., even though there is a {css} in the template, it is in the class="" part of the template, so it is a class name, not a style config (as noted above in the setRenderer function, which I have updated). You have to dig into the actual rendering code to find the line that applies a .css property from the column object directly onto the style attribute of the cell -- the template has nothing to do with it and the css template token is only overwritten by the cell renderer.

Likewise, attr is not a valid column config value, but it IS a valid parameter to setRenderer (as already noted). Column id is a supported config option, and is already on the CM config list -- it is not a valid param to setRenderer however. cellId and value, while in the template, are only set internally by GridView code and are not valid config options.
Class Ext.grid.EditorGrid
- missing config clicksToEdit [link]
- missing config autoWidth [link] - (Not actually added, as it will be inherited from Grid)

Class Ext.grid.Grid
- missing config autoWidth [link]
- missing config enableCtxMenu [link]
- missing config enableColLock [link]
- missing config enableColumnResize [link]
- missing config sm or selModel (or both) [link]
- missing config ds or dataSource (or both) [link]
- missing config cm or colModel (or both) [link]
- incorrect return type for method getDataSource [link] -- should be Store, not DataSource
- ambiguous parameter name for 2nd argument of method reconfigure [link] -- should be colModel, not The

Class Ext.grid.GridView
- missing configs like:
  - rowClass
  - cellClass
  - tdClass
  - hdClass
- missing method getRow [link] (and possibly a whole host of other methods, all found in source but not in docs) - Most are private

Class Ext.grid.PropertyColumnModel
Class Ext.grid.PropertyGrid
Class Ext.grid.PropertyRecord
Class Ext.grid.PropertyStore
Class Ext.grid.RowSelectionModel
Class Ext.JsonView
Class Ext.KeyNav

Class Ext.LayoutManager
- missing config allowScroll

Class Ext.LayoutRegion
Class Ext.menu.Adapter
Class Ext.menu.BaseItem
Class Ext.menu.CheckItem
Class Ext.menu.Item
Class Ext.menu.Menu
Class Ext.menu.MenuMgr

Class Ext.MessageBox
- method show missing the following config options:
  - width (as seen in the example)
  - minWidth [link]
  - maxWidth [link]
  - minProgressWidth [link] - Skipped - This is not a general config option, only a property that is used as the minWidth within progress() and wait(). Passing it into show() directly would have no effect (minWidth should always be used).
  - defaultTextHeight [link]
  - animEl [link]
  - fn [link]

Class Ext.PagingToolbar

Class Ext.QuickTips
- missing config target for method register [link]
- incorrect reference to config option "True" [link] - should be autoDismiss - source code comment should be "@cfg {Boolean} autoDismiss"
- missing config autoDismissDelay [link] - source code comment should be "@cfg {Number} autoDismissDelay"

Class Ext.Shadow

Class Ext.state.CookieProvider
- error in example [link]

Code:
var cp = new Ext.state.CookieProvider({
    path: "/cgi-bin/",
    expires: new Date(new Date().getTime()+(1000*60*60*24*30)); // should be a comma, not a semicolon
    domain: "extjs.com"
}) // missing semicolon

Class Ext.SplitLayoutRegion
- missing config splitTip
- missing config collapsibleSplitTip
- missing config useSplitTips
- missing various methods (no indication of private methods, although Ext.SplitLayoutRegion description alludes to this: "Adds a splitbar and other (private) useful functionality to a {@link Ext.LayoutRegion}.")

Class Ext.TabPanel
- "old reference to YUI in the introduction text: 'Creates a lightweight TabPanel component using Yahoo! UI.'" [link]
- missing config disableTooltips [link]
- missing config docs

Class Ext.TabPanelItem
- missing property textEl - Skipped - This is a private property
- missing config docs?

Class Ext.TaskMgr

Class Ext.Toolbar
- method add missing the following shorthand arguments - (Rewrote the entire doc for add)

Class Ext.tree.AsyncTreeNode
Class Ext.tree.TreeDragZone

Class Ext.tree.TreeDropZone
- missing docs

Class Ext.tree.TreeEditor
- missing config editDelay

Class Ext.tree.TreeLoader
- "children option not documented in description of the 'node definition object' at the beginning of the description of the TreeLoader class. This option allows child nodes to be preloaded via JSON either dynamically or statically."
  [link]
- typo in description for config dataUrl [link]
  - should read "The URL from which to request a Json string which specifies an array of node definition objects representing the child nodes to be loaded."

Class Ext.tree.TreeNode
- missing config checked [link]
- missing config draggable [link]
- missing config allowChildren
- missing config isTarget
- missing class description [link]

Class Ext.tree.TreeNodeUI

Class Ext.tree.TreePanel
- missing config pathSeparator [link]
- missing method getTreeEl
- incorrect type for config loader [link] -- should be an Ext.tree.TreeLoader, or an object with a load method with params identical to Ext.tree.TreeLoader's load method
- missing class description [link]

Class Ext.tree.RootTreeNodeUI
Class Ext.UpdateManager
Class Ext.util.Observable

Class Ext.View
- missing config emptyText [link]
- missing config multiSelect [link]
- incorrect reference to dataModel in the example at the top [link]
Code:
dataModel.load("foobar.xml"); // incorrect
store.load("foobar.xml"); // correct
- double quotes in the note below the example should be single quotes
  Note: The root of your template must be a single node. Table/row implementations may work but are not supported due to IE"s limited insertion support with tables and Opera"s faulty event bubbling.
  -- this note should also appear in Ext.JsonView, and should also explicitly mention "select/option" implementations i.e. "...Table/row or select/option implementations may work but..." [link]

Class Ext.XTemplate

__________________________________________________

Class Date
- incorrect description for method getLastDateOfMonth [link]
- incorrect Date.parseDate examples
Code:
// incorrect
dt = Date.parseDate("2006-1-15", "Y-m-d");
dt = Date.parseDate("2006-1-15 3:20:01 PM", "Y-m-d h:i:s A" );
// should be
dt = Date.parseDate("2006-01-15", "Y-m-d");
dt = Date.parseDate("2006-01-15 03:20:01 PM", "Y-m-d h:i:s A");

Note: The following are documented correctly.
There appears to be an issue with the documentation parser's ability to distinguish these as separate classes for some reason. We are looking into it, but no doc changes are necessary.

Class Array
- missing docs (see notes on the String class below)
  - Skipped

Class Number
- missing docs (see notes on the String class below)
  - Skipped

Class String
- methods indexOf and remove are incorrectly listed as belonging to Class String. [link] these methods should belong to the Array class [link] (refer to src\core\Ext.js)
  - Skipped
- method constrain is incorrectly listed as belonging to Class String. [link] this method should belong to the Number class (refer to src\core\Ext.js)
  - Skipped

__________________________________________________

Misc/Requests
- "Would it be possible to specify the return value (null, undefined, this?) when finder methods don't find the element(s) like Element.fly, Element.get, Ext.CompositeElement.query, Ext.CompositeElement.select?" [link]
  - Added for El.get/fly. CompEl.select/query always return a valid CompEl that may contain 0 or more elements internally, so they are already correct.
- "I can't find any documentation that describes the CSS file(s) to use. There's good descriptions of the JS files to include but nothing about CSS at all." [link]
  - Skipped - This is not an API doc issue. Perhaps a page in the wiki manual would be more appropriate, although I'm not sure exactly what the contents would be ("use ext-all.css"?)
- Class Ext.Ajax
  "Suggest adding the code example and other info from Jack found here to the description of this class. An explicit example like this would help tremendously in my opinion to see how to use what is likely to be a popular class." [link]
- Class Ext.ContentPanel
  "The doco for [ContentPanel] should include information about or a reference to the IE scroll position relative bug as outlined in" [link]
  - Skipped - We can't post every possible workaround to the API docs.
This one is a bit too general to document effectively in the API and only affects certain cases, so it will remain in the forums.
- Class Ext.data.DataReader
  "There is no documentation-browser-friendly documentation for the DataReader class." [link]
- Class Ext.data.JsonReader
  "'After any data loads, the raw JSON data is available for further custom processing.' The docs are not clear on this. This property only exists if there is valid JSON data. If there is a data store load exception, for example, the property does not exist, rather than being null or undefined. If this is the correct intention, then it should be made clear for those making use of the load exception event so that they can apply the correct programming logic." [link]
- Class Ext.DomQuery
  "Could you add a link to the CSS3 Simple Selector chapter in DomQuery..." [link]
- Class Ext.Element
  "I've recently run into an issue with the Ext.Element.alignTo animation config docs and Ext.Fx, both refer to animation config objects, though Ext.Element.alignTo doesn't specify which properties of the animation config object the method actually uses..." [link]
  - It is somewhat confusing, but the headers of both Element and Fx discuss the differences. I added a little more explanation in the Element header to try and clarify the difference, but I'm not going to add additional comments to every single Element method that can take an anim argument.
- Ext.form.BasicForm/Form
  "I think JSON response format () should be included in docs." [link]
  - Skipped - The existing explanations and example code for Ext.form.Action.Submit and Load seem to be sufficient
- Class Ext.grid.Grid
  "I spent a couple hours figuring out that the reason DnD from a grid to a tree failed with a rather obscure error was because I needed to set the selection model for the grid..." [link]
  - Skipped - This seems like possibly a bug, but not something that would go into the API docs?
- Class Ext.form.ComboBox
  ComboBox config hiddenName - "Can we have this state a word smith'ed version of "required for form.submit"?" [link]
- Class Ext.MessageBox
  - "It would save people a lot of grief if the documentation for the alert and confirm methods would highlight that the message boxes are displayed asynchronously (they are not a direct substitute for the regular javascript alert and confirm methods) and to provide examples of how to use these in the most common scenarios (e.g. do something after an alert(), potentially cancel a form submit following a confirm())..." [link]
  - "it is important that the docs indicate this" -- that "there is nothing in javascript land that can replace the blocking js alert() and confirm() methods." [link]
  - "the docs should show how to emulate this [blocking alert / confirm] - i.e. by having Ext.Msg.alert() invoke a callback function. I can see how to do this for a simple alert - where I came unstuck was with a confirm() that needed to control whether or not a form would be submitted." [link]
  - "...The alert() case is pretty trivial though, just pass a function as the third argument to Ext.Msg.alert() that triggers the activity to occur after OK is clicked." [link]
  - "...The simple form submission using confirm() is trivial also - as for alert but with something like the following in the 3rd arg function..." -- example posted in thread [link]
- Class Ext.util.MixedCollection
  "The difference between those methods is not really clear. When to use key?" [link]
- Class Ext.util.Observable
  - sample code for method addListener is now correctly formatted, but misaligned
  - "In the addListener method, there is an explanation for Combining Options. The examples in this section use the el.on() shortcut method. I think there needs to be an introduction describing exactly what on() is just before the source code block that uses it to avoid confusion. Another possibility is that maybe this example is misplaced.
Should it instead be in the on() method and a link to the on() shortcut method be placed in addListener?" [link]
  - "Perhaps a mention in the docs that if you use a logger to monitor the progress of your application you must create a null function reference to the Ext.log() function when you remove the -debug from the script call otherwise your application will break." [link]
  - Skipped - There's not really anywhere in the API docs where this would go. How about a FAQ entry or wiki page?

Last edited by mystix; 24 Oct 2007 at 3:29 AM. Reason: doc update

Class Ext.form.BasicForm: addButton
Class Ext.form.BasicForm
Missing method: addButton

Last edited by mystix; 8 Aug 2007 at 4:11 AM. Reason: not a bug. not added to the list.

Ah, OK, I see that in the code now. Thanks for keeping the list updated and in alpha order... it makes it much easier to see what's been reported.

Class Ext.grid.ColumnModel
- Incorrect sortable config option description: really defaults to false, but described as "Defaults to true".

the value of sortable, however, shouldn't default to false though. it should default instead to the value of the ColumnModel's defaultSortable property. i've updated the doc bug list accordingly.

[edit] i've moved your question to the Help forum. here's the link

Thread Participants: 32
- Animal (1 Post)
- thejoker101 (4 Posts)
- sjivan (5 Posts)
- Wolfgang (1 Post)
- cwells (1 Post)
- aconran (1 Post)
- Jul (12 Posts)
- dolittle (3 Posts)
- K0bo (1 Post)
- FritFrut (2 Posts)
- MarkT (1 Post)
- MaxT (5 Posts)
- jsakalos (2 Posts)
- ksachdeva (2 Posts)
- evant (1 Post)
- Troy Wolf (1 Post)
- Phenothiasine (2 Posts)
- gimbles (1 Post)
- asgillett (1 Post)
- GArrow (3 Posts)
- baroncelli (1 Post)
- seade (4 Posts)
- andrei.neculau (1 Post)
- OneManArmy (1 Post)
- mscdex (2 Posts)
- Comma (1 Post)
- gimler (1 Post)
- eneko (1 Post)
- keeper (1 Post)
- balou (2 Posts)
- Greenosity (2 Posts)
- brian.moeskau (1 Post)
https://www.sencha.com/forum/showthread.php?10476-Missing-and-or-Incorrect-Docs-Ext-1.1.x
"In this contest, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy. This Kaggle Getting Started Competition provides an ideal starting place for people who may not have a lot of experience in data science and machine learning." From the competition homepage.

Show a simple example of an analysis of the Titanic disaster in Python using a full complement of PyData utilities. This is aimed at those looking to get into the field, or those who are already in the field and would like to see an example of an analysis done with Python. To run this notebook interactively, get it from my Github here. The competition's website is located on Kaggle.com.

import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.nonparametric.kde import KDEUnivariate
from statsmodels.nonparametric import smoothers_lowess
from pandas import Series, DataFrame
from patsy import dmatrices
from sklearn import datasets, svm

from KaggleAux import predict as ka # see github.com/agconti/kaggleaux for more details

df = pd.read_csv("data/train.csv")

Show an overview of our data:

df

891 rows × 12 columns

Above is a summary of our data contained in a Pandas DataFrame. Think of a DataFrame as Python's supercharged version of the workflow in an Excel table. As you can see, the summary holds quite a bit of information. First, it lets us know we have 891 observations, or passengers, to analyze here:

Int64Index: 891 entries, 0 to 890

Next it shows us all of the columns in the DataFrame. Each column tells us something about each of our observations, like their name, sex or age. These columns are called the features of our dataset. You can think of the words column and feature as interchangeable for this notebook. After each feature it lets us know how many values it contains.
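To see in code what those per-feature counts mean, here is a minimal sketch on a toy frame; the rows and values below are illustrative stand-ins, not the real Titanic data:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the Titanic frame (illustrative values, not the real data).
toy = pd.DataFrame({
    "Survived": [0, 1, 1, 0],
    "Age": [22.0, np.nan, 26.0, 35.0],
    "Cabin": [np.nan, "C85", np.nan, np.nan],
})

# Per-feature non-null counts -- the same numbers the DataFrame summary reports.
counts = toy.count()
print(counts["Survived"])  # 4: complete on every observation
print(counts["Age"])       # 3: one missing value (a NaN)
print(counts["Cabin"])     # 1: mostly missing
```

Features with low counts, like Cabin in this toy frame, are the kind that get dropped before calling .dropna().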
While most of our features have complete data on every observation, like the survived feature here:

survived 891 non-null values

some are missing information, like the age feature:

age 714 non-null values

These missing values are represented as NaNs. The features ticket and cabin have many missing values and so can't add much value to our analysis. To handle this we will drop them from the dataframe to preserve the integrity of our dataset. To do that we'll use this line of code to drop the features entirely:

df = df.drop(['ticket','cabin'], axis=1)

While this line of code removes the NaN values from every remaining column / feature:

df = df.dropna()

Now we have a clean and tidy dataset that is ready for analysis. Because .dropna() removes an observation from our data even if it only has 1 NaN in one of the features, it would have removed most of our dataset if we had not dropped the ticket and cabin features first.

df = df.drop(['Ticket','Cabin'], axis=1)
# Remove NaN values
df = df.dropna()

For a detailed look at how to use pandas for data analysis, the best resource is Wes McKinney's book. Additional interactive tutorials that cover all of the basics can be found here (they're free). If you still need to be convinced about the power of pandas, check out this whirlwind look at all that pandas can do.

# specifies the parameters of our graphs
fig = plt.figure(figsize=(18,6), dpi=1600)
alpha=alpha_scatterplot = 0.2
alpha_bar_chart = 0.55
# lets us plot many different shaped graphs together
ax1 = plt.subplot2grid((2,3),(0,0))
# plots a bar graph of those who survived vs those who did not.
df.Survived.value_counts().plot(kind='bar', alpha=alpha_bar_chart)
# this nicely sets the margins in matplotlib to deal with a recent bug 1.3.1
ax1.set_xlim(-1, 2)
# puts a title on our graph
plt.title("Distribution of Survival, (1 = Survived)")

plt.subplot2grid((2,3),(0,1))
plt.scatter(df.Survived, df.Age, alpha=alpha_scatterplot)
# sets the y axis label
plt.ylabel("Age")
# formats the grid line style of our graphs
plt.grid(b=True, which='major', axis='y')
plt.title("Survival by Age, (1 = Survived)")

ax3 = plt.subplot2grid((2,3),(0,2))
df.Pclass.value_counts().plot(kind="barh", alpha=alpha_bar_chart)
ax3.set_ylim(-1, len(df.Pclass.value_counts()))
plt.title("Class Distribution")

plt.subplot2grid((2,3),(1,0), colspan=2)
# plots a kernel density estimate of the subset of the 1st class passengers' age
df.Age[df.Pclass == 1].plot(kind='kde')
df.Age[df.Pclass == 2].plot(kind='kde')
df.Age[df.Pclass == 3].plot(kind='kde')
# plots an axis label
plt.xlabel("Age")
plt.title("Age Distribution within classes")
# sets our legend for our graph.
plt.legend(('1st Class', '2nd Class','3rd Class'),loc='best')

ax5 = plt.subplot2grid((2,3),(1,2))
df.Embarked.value_counts().plot(kind='bar', alpha=alpha_bar_chart)
ax5.set_xlim(-1, len(df.Embarked.value_counts()))
plt.title("Passengers per boarding location")

<matplotlib.text.Text at 0x119c1dd90>

The point of this competition is to predict if an individual will survive based on the features in the data. Let's see if we can gain a better understanding of who survived and died. First let's plot a bar graph of those who survived vs. those who did not.
plt.figure(figsize=(6,4))
fig, ax = plt.subplots()
df.Survived.value_counts().plot(kind='barh', color="blue", alpha=.65)
ax.set_ylim(-1, len(df.Survived.value_counts()))
plt.title("Survival Breakdown (1 = Survived, 0 = Died)")

<matplotlib.text.Text at 0x119a90fd0>
<matplotlib.figure.Figure at 0x119931f10>

fig = plt.figure(figsize=(18,6))

# create a plot of two subsets, male and female, of the survived variable.
# After we do that we call value_counts() so it can be easily plotted as a bar graph.
# 'barh' is just a horizontal bar graph
df_male = df.Survived[df.Sex == 'male'].value_counts().sort_index()
df_female = df.Survived[df.Sex == 'female'].value_counts().sort_index()

ax1 = fig.add_subplot(121)
df_male.plot(kind='barh',label='Male', alpha=0.55)
df_female.plot(kind='barh', color='#FA2379',label='Female', alpha=0.55)
plt.title("Who Survived? with respect to Gender, (raw value counts) "); plt.legend(loc='best')
ax1.set_ylim(-1, 2)

# adjust graph to display the proportions of survival by gender
ax2 = fig.add_subplot(122)
(df_male/float(df_male.sum())).plot(kind='barh',label='Male', alpha=0.55)
(df_female/float(df_female.sum())).plot(kind='barh', color='#FA2379',label='Female', alpha=0.55)
plt.title("Who Survived proportionally? with respect to Gender"); plt.legend(loc='best')
ax2.set_ylim(-1, 2)

(-1, 2)

Here it's clear that although more men died and survived in raw value counts, females had a greater survival rate proportionally (~25%) than men (~20%).

Can we capture more of the structure by using Pclass? Here we will bucket classes as lowest class or any of the higher classes (classes 1 - 2); 3 is the lowest class. Let's break it down by Gender and what Class they were traveling in.

fig = plt.figure(figsize=(18,4), dpi=1600)
alpha_level = 0.65

# building on the previous code, here we create an additional subset within the gender subset
# we created for the survived variable. I know, that's a lot of subsets. After we do that we call
# value_counts() so it can be easily plotted as a bar graph. this is repeated for each gender
# class pair.
ax1=fig.add_subplot(141)
female_highclass = df.Survived[df.Sex == 'female'][df.Pclass != 3].value_counts()
female_highclass.plot(kind='bar', label='female, highclass', color='#FA2479', alpha=alpha_level)
ax1.set_xticklabels(["Survived", "Died"], rotation=0)
ax1.set_xlim(-1, len(female_highclass))
plt.title("Who Survived? with respect to Gender and Class"); plt.legend(loc='best')

ax2=fig.add_subplot(142, sharey=ax1)
female_lowclass = df.Survived[df.Sex == 'female'][df.Pclass == 3].value_counts()
female_lowclass.plot(kind='bar', label='female, low class', color='pink', alpha=alpha_level)
ax2.set_xticklabels(["Died","Survived"], rotation=0)
ax2.set_xlim(-1, len(female_lowclass))
plt.legend(loc='best')

ax3=fig.add_subplot(143, sharey=ax1)
male_lowclass = df.Survived[df.Sex == 'male'][df.Pclass == 3].value_counts()
male_lowclass.plot(kind='bar', label='male, low class',color='lightblue', alpha=alpha_level)
ax3.set_xticklabels(["Died","Survived"], rotation=0)
ax3.set_xlim(-1, len(male_lowclass))
plt.legend(loc='best')

ax4=fig.add_subplot(144, sharey=ax1)
male_highclass = df.Survived[df.Sex == 'male'][df.Pclass != 3].value_counts()
male_highclass.plot(kind='bar', label='male, highclass', alpha=alpha_level, color='steelblue')
ax4.set_xticklabels(["Died","Survived"], rotation=0)
ax4.set_xlim(-1, len(male_highclass))
plt.legend(loc='best')

<matplotlib.legend.Legend at 0x10af39ed0>

Awesome! Now we have a lot more information on who survived and died in the tragedy. With this deeper understanding, we are better equipped to create better, more insightful models. This is a typical process in interactive data analysis: first you start small and understand the most basic relationships, and slowly increase the complexity of your analysis as you discover more and more about the data you're working with.
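The proportional comparison above can also be computed numerically rather than read off a chart. A minimal sketch on a toy frame (illustrative rows, not the real Titanic data):

```python
import pandas as pd

# Toy subset standing in for the Titanic data (illustrative rows only).
toy = pd.DataFrame({
    "Survived": [1, 0, 1, 0, 0, 1],
    "Sex": ["female", "male", "female", "male", "male", "female"],
})

# Survival rate per gender: the mean of a 0/1 column is the proportion that survived.
rates = toy.groupby("Sex")["Survived"].mean()
print(rates["female"])  # 1.0 in this toy example
print(rates["male"])    # 0.0 in this toy example
```

On the real frame, `df.groupby("Sex")["Survived"].mean()` would give the same proportions the bar charts display.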
Below is the progression of the process laid out together:

fig = plt.figure(figsize=(18,12), dpi=1600)
a = 0.65

# Step 1
ax1 = fig.add_subplot(341)
df.Survived.value_counts().plot(kind='bar', color="blue", alpha=a)
ax1.set_xlim(-1, len(df.Survived.value_counts()))
plt.title("Step. 1")

# Step 2
ax2 = fig.add_subplot(345)
df.Survived[df.Sex == 'male'].value_counts().plot(kind='bar',label='Male')
df.Survived[df.Sex == 'female'].value_counts().plot(kind='bar', color='#FA2379',label='Female')
ax2.set_xlim(-1, 2)
plt.title("Step. 2 \nWho Survived? with respect to Gender."); plt.legend(loc='best')

ax3 = fig.add_subplot(346)
(df.Survived[df.Sex == 'male'].value_counts()/float(df.Sex[df.Sex == 'male'].size)).plot(kind='bar',label='Male')
(df.Survived[df.Sex == 'female'].value_counts()/float(df.Sex[df.Sex == 'female'].size)).plot(kind='bar', color='#FA2379',label='Female')
ax3.set_xlim(-1,2)
plt.title("Who Survived proportionally?"); plt.legend(loc='best')

# Step 3
ax4 = fig.add_subplot(349)
female_highclass = df.Survived[df.Sex == 'female'][df.Pclass != 3].value_counts()
female_highclass.plot(kind='bar', label='female highclass', color='#FA2479', alpha=a)
ax4.set_xticklabels(["Survived", "Died"], rotation=0)
ax4.set_xlim(-1, len(female_highclass))
plt.title("Who Survived? with respect to Gender and Class"); plt.legend(loc='best')

ax5 = fig.add_subplot(3,4,10, sharey=ax1)
female_lowclass = df.Survived[df.Sex == 'female'][df.Pclass == 3].value_counts()
female_lowclass.plot(kind='bar', label='female, low class', color='pink', alpha=a)
ax5.set_xticklabels(["Died","Survived"], rotation=0)
ax5.set_xlim(-1, len(female_lowclass))
plt.legend(loc='best')

ax6 = fig.add_subplot(3,4,11, sharey=ax1)
male_lowclass = df.Survived[df.Sex == 'male'][df.Pclass == 3].value_counts()
male_lowclass.plot(kind='bar', label='male, low class',color='lightblue', alpha=a)
ax6.set_xticklabels(["Died","Survived"], rotation=0)
ax6.set_xlim(-1, len(male_lowclass))
plt.legend(loc='best')

ax7 = fig.add_subplot(3,4,12, sharey=ax1)
male_highclass = df.Survived[df.Sex == 'male'][df.Pclass != 3].value_counts()
male_highclass.plot(kind='bar', label='male highclass', alpha=a, color='steelblue')
ax7.set_xticklabels(["Died","Survived"], rotation=0)
ax7.set_xlim(-1, len(male_highclass))
plt.legend(loc='best')

<matplotlib.legend.Legend at 0x11bdf3ed0>

I've done my best to make the plotting code readable and intuitive, but if you're looking for a more detailed look at how to start plotting in matplotlib, check out this beautiful notebook here.

Now that we have a basic understanding of what we are trying to predict, let's predict it.

As explained by Wikipedia:

In statistics, logistic regression or logit regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable (a dependent variable that can take on a limited number of values, whose magnitudes are not meaningful but whose ordering of magnitudes may or may not be meaningful) based on one or more predictor variables. [...] Problems with more than two categories are referred to as multinomial logistic regression or, if the multiple categories are ordered, as ordered logistic regression.
Logistic regression measures the relationship between a categorical dependent variable and one or more independent variables, which are usually (but not necessarily) continuous, by using probability scores as the predicted values of the dependent variable.[1] As such it treats the same set of problems as does probit regression using similar techniques.

Our competition wants us to predict a binary outcome. That is, it wants to know whether someone will die (represented as 0) or survive (represented as 1). A good place to start is to calculate the probability that an individual observation, or person, is likely to be a 0 or 1. That way we would know the chance that someone survives, and could start making somewhat informed predictions. If we did, we'd get results like this:

(Y axis is the probability that someone survives, X axis is the passenger's number from 1 to 891.)

While that information is useful, it doesn't let us know whether someone ended up alive or dead. It just lets us know the chance that they will survive or die. We still need to translate these probabilities into the binary decision we're looking for. But how? We could arbitrarily say that our survival cutoff is anyone with a probability of survival over 50%. In fact, this tactic would actually perform pretty well for our data and would allow you to make decently accurate predictions. Graphically it would look something like this:

If you're a betting man like me, you don't like to leave everything to chance. What are the odds that setting that cutoff at 50% works? Maybe 20% or 80% would work better. Clearly we need a more exact way to make that cutoff. What can save the day? In steps the Logistic Regression.

A logistic regression follows all the steps we took above but mathematically calculates the cutoff, or decision boundary (as stats nerds call it), for you. This way it can figure out the best cutoff to choose, perhaps 50% or 51.84%, that most accurately represents the training data.
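The cutoff idea can be sketched in a few lines; the scores below are made-up illustrations, not output from the fitted model:

```python
import numpy as np

def sigmoid(z):
    """Map a linear score to a probability between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear scores for three passengers (illustrative values).
scores = np.array([-2.0, 0.1, 3.0])
probs = sigmoid(scores)

# Turn probabilities into 0/1 predictions with a 50% cutoff.
cutoff = 0.5
predictions = (probs > cutoff).astype(int)
print(predictions)  # [0 1 1]
```

The logistic regression below does this same probability-to-decision translation, but it learns where the boundary should sit from the training data instead of us hard-coding 0.5.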
The three cells below show the process of creating our Logistic regression model, training it on the data, and examining its performance.

First, we define our formula for our Logit regression. In the next cell we create a regression-friendly dataframe that sets up boolean values for the categorical variables in our formula and lets our regression model know the types of inputs we're giving it. The model is then instantiated and fitted before a summary of the model's performance is printed. In the last cell we graphically compare the predictions of our model to the actual values we are trying to predict, as well as the residual errors from our model to check for any structure we may have missed.

# model formula
# here the ~ sign is an = sign, and the features of our dataset
# are written as a formula to predict survived. The C() lets our
# regression know that those variables are categorical.
# Ref:
formula = 'Survived ~ C(Pclass) + C(Sex) + Age + SibSp + C(Embarked)'

# create a results dictionary to hold our regression results for easy analysis later
results = {}

# create a regression friendly dataframe using patsy's dmatrices function
y,x = dmatrices(formula, data=df, return_type='dataframe')

# instantiate our model
model = sm.Logit(y,x)

# fit our model to the training data
res = model.fit()

# save the result for outputting predictions later
results['Logit'] = [res, formula]
res.summary()

Optimization terminated successfully.
Current function value: 0.444388
Iterations 6

# Plot Predictions Vs Actual
plt.figure(figsize=(18,4));
plt.subplot(121, axisbg="#DBDBDB")
# generate predictions from our fitted model
ypred = res.predict(x)
plt.plot(x.index, ypred, 'bo', x.index, y, 'mo', alpha=.25);
plt.grid(color='white', linestyle='dashed')
plt.title('Logit predictions, Blue: \nFitted/predicted values: Red');

# Residuals
ax2 = plt.subplot(122, axisbg="#DBDBDB")
plt.plot(res.resid_dev, 'r-')
plt.grid(color='white', linestyle='dashed')
ax2.set_xlim(-1, len(res.resid_dev))
plt.title('Logit Residuals');

fig = plt.figure(figsize=(18,9), dpi=1600)
a = .2

# Below are examples of more advanced plotting.
# If it looks strange check out the tutorial above.
fig.add_subplot(221, axisbg="#DBDBDB")
kde_res = KDEUnivariate(res.predict())
kde_res.fit()
plt.plot(kde_res.support,kde_res.density)
plt.fill_between(kde_res.support,kde_res.density, alpha=a)
plt.title("Distribution of our Predictions")

fig.add_subplot(222, axisbg="#DBDBDB")
plt.scatter(res.predict(),x['C(Sex)[T.male]'] , alpha=a)
plt.grid(b=True, which='major', axis='x')
plt.xlabel("Predicted chance of survival")
plt.ylabel("Gender Bool")
plt.title("The Change of Survival Probability by Gender (1 = Male)")

fig.add_subplot(223, axisbg="#DBDBDB")
plt.scatter(res.predict(),x['C(Pclass)[T.3]'] , alpha=a)
plt.xlabel("Predicted chance of survival")
plt.ylabel("Class Bool")
plt.grid(b=True, which='major', axis='x')
plt.title("The Change of Survival Probability by Lower Class (1 = 3rd Class)")

fig.add_subplot(224, axisbg="#DBDBDB")
plt.scatter(res.predict(),x.Age , alpha=a)
plt.grid(True, linewidth=0.15)
plt.title("The Change of Survival Probability by Age")
plt.xlabel("Predicted chance of survival")
plt.ylabel("Age")

<matplotlib.text.Text at 0x10dc26350>

test_data = pd.read_csv("data/test.csv")
test_data

418 rows × 11 columns

test_data['Survived'] = 1.23

Our binned results data:

results

{'Logit': [<statsmodels.discrete.discrete_model.BinaryResultsWrapper at
0x10b1e8550>,
  'Survived ~ C(Pclass) + C(Sex) + Age + SibSp + C(Embarked)']}

# Use your model to make predictions on our test set.
compared_results = ka.predict(test_data, results, 'Logit')
compared_results = Series(compared_results) # convert our model to a series for easy output

# output and submit to kaggle
compared_results.to_csv("data/output/logitregres.csv")

# Create an acceptable formula for our machine learning algorithms
formula_ml = 'Survived ~ C(Pclass) + C(Sex) + Age + SibSp + Parch + C(Embarked)'

"So uhhh, what if a straight line just doesn't cut it."

Wikipedia:

In machine learning, support vector machines (SVMs, also support vector networks[1]) are.. In addition to performing linear classification, SVMs can efficiently perform non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.

The logit model we just implemented was great in that it showed exactly where to draw our decision boundary, or our 'survival cutoff'. But if you're like me, you could have thought, "So uhhh, what if a straight line just doesn't cut it". A linear line is okay, but can we do better? Perhaps a more complex decision boundary like a wave, circle, or maybe some sort of strange polygon would describe the variance observed in our sample better than a line.

Imagine if we were predicting survival based on age. It could be a linear decision boundary, meaning each additional time you've gone around the sun you were 1 unit more or less likely to survive. But I think it could be easy to imagine some sort of curve, where a young healthy person would have the best chance of survival, and sadly the very old and very young alike: a poor chance. Now that's an interesting question to answer. But our logit model can only evaluate a linear decision boundary. How do we get around this? With the usual answer to life, the universe and everything: $MATH$.
The answer: we could transform our logit equation from expressing a linear relationship like so:

$survived = \beta_0 + \beta_1pclass + \beta_2sex + \beta_3age + \beta_4sibsp + \beta_5parch + \beta_6embarked$

Which we'll represent for convenience as:

$y = x$

to expressing a linear expression of a non-linear relationship:

$\log(y) = \log(x)$

By doing this we're not breaking the rules. Logit models are only efficient at modeling linear relationships, so we're just giving it a linear relationship of a non-linear thing. An easy way to visualize this is by looking at the graph of an exponential relationship, like the graph of $x^3$:

Here it's obvious that this is not linear. If we used it as an equation for our logit model, $y = x^3$, we would get bad results. But if we transformed it by taking the log of our equation, $\log(y) = \log(x^3)$, we would get a graph like this:

That looks pretty linear to me. This process of transforming models so that they can be better expressed in a different mathematical plane is exactly what the Support Vector Machine does for us. The math behind how it does that is not trivial, so if you're interested, put on your reading glasses and head over here.

Below is the process of implementing an SVM model and examining the results after the SVM transforms our equation into three different mathematical planes. The first is linear, and is similar to our logit model. Next is an exponential, polynomial, transformation and finally a blank transformation.

# set plotting parameters
plt.figure(figsize=(8,6))

# create a regression friendly data frame
y, x = dmatrices(formula_ml, data=df, return_type='matrix')

# select which features we would like to analyze
# try changing the selection here for different output.
# Choose : [2,3] - pretty sweet DBs [3,1] --standard DBs [7,3] -very cool DBs,
# [3,6] -- very long complex dbs, could take over an hour to calculate!
feature_1 = 2
feature_2 = 3

X = np.asarray(x)
X = X[:,[feature_1, feature_2]]

y = np.asarray(y)
# needs to be 1 dimensional so we flatten. it comes out of dmatrices with a shape.
y = y.flatten()

n_sample = len(X)

np.random.seed(0)
order = np.random.permutation(n_sample)

X = X[order]
y = y[order].astype(np.float)

# do a cross validation
ninety_percent_of_sample = int(.9 * n_sample)
X_train = X[:ninety_percent_of_sample]
y_train = y[:ninety_percent_of_sample]
X_test = X[ninety_percent_of_sample:]
y_test = y[ninety_percent_of_sample:]

# create a list of the types of kernels we will use for our analysis
types_of_kernels = ['linear', 'rbf', 'poly']

# specify our color map for plotting the results
color_map = plt.cm.RdBu_r

# fit the model
for fig_num, kernel in enumerate(types_of_kernels):
    clf = svm.SVC(kernel=kernel, gamma=3)
    clf.fit(X_train, y_train)

    plt.figure(fig_num)
    plt.scatter(X[:, 0], X[:, 1], c=y, zorder=10, cmap=color_map)

    # circle out the test data
    plt.scatter(X_test[:, 0], X_test[:, 1], s=80, facecolors='none', zorder=10)

    plt.axis('tight')
    x_min = X[:, 0].min()
    x_max = X[:, 0].max()
    y_min = X[:, 1].min()
    y_max = X[:, 1].max()

    XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
    Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])

    # put the result into a color plot
    Z = Z.reshape(XX.shape)
    plt.pcolormesh(XX, YY, Z > 0, cmap=color_map)
    plt.contour(XX, YY, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
                levels=[-.5, 0, .5])

    plt.title(kernel)

plt.show()

<matplotlib.figure.Figure at 0x10d4820d0>

Any value in the blue survived while anyone in the red did not. Check out the graph for the linear transformation. It created its decision boundary right on 50%! That guess from earlier turned out to be pretty good. As you can see, the remaining decision boundaries are much more complex than our original linear decision boundary.
These more complex boundaries may be able to capture more structure in the dataset, if that structure exists, and so might create a more powerful predictive model. Pick a decision boundary that you like, adjust the code below, and submit the results to Kaggle to see how well it worked!

# Here you can output whichever result you would like by changing the kernel and clf.predict lines
# Change kernel here to poly, rbf or linear
# adjusting the gamma level also changes the degree to which the model is fitted
clf = svm.SVC(kernel='poly', gamma=3).fit(X_train, y_train)
y, x = dmatrices(formula_ml, data=test_data, return_type='dataframe')

# Change the integer values within x.ix[:,[6,3]].dropna() to explore the relationships between
# other features. The ints are column positions, i.e. [6,3] means the 6th and the 3rd columns
# are evaluated.
res_svm = clf.predict(x.ix[:, [6, 3]].dropna())

res_svm = DataFrame(res_svm, columns=['Survived'])
res_svm.to_csv("data/output/svm_poly_63_g10.csv")  # saves the results for you, change the name as you please

"Well, what if this line / decision boundary thing doesn't work at all?"

Wikipedia, crystal clear as always:

Random forests are an ensemble learning method for classification (and regression) that operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes output by the individual trees.

Once again, the skinny and why it matters to you:

There are always skeptics, and you just might be one about all the fancy lines we've created so far. Well, for you, here's another option: the random forest. This technique is a form of non-parametric modeling that does away with all those equations we created above, and uses raw computing power and a clever statistical observation to tease the structure out of the data. An anecdote to explain how the forest works starts with the lowly gumball jar.
We've all guessed how many gumballs are in that jar at one time or another, and odds are not a single one of us guessed exactly right. Interestingly though, while each of our individual guesses was probably wrong, the average of all of the guesses, if there were enough of them, usually comes out pretty close to the actual number of gumballs in the jar. Crazy, I know. That idea is the clever statistical observation that lets random forests work.

How do they work? A random forest algorithm randomly generates many extremely simple models to explain the variance observed in random subsections of our data. These models are like our gumball guesses. They are all awful individually. Really awful. But once they are averaged, they can be powerful predictive tools. The averaging step is the secret sauce. While the vast majority of those models were extremely poor, they were all about as bad as each other on average, so when their predictions are averaged together, the bad ones cancel out to zero. What remains, if anything, is the contribution of the one or the handful of models that have stumbled upon the true structure of the data.

The cell below shows the process of instantiating and fitting a random forest, generating predictions from the resulting model, and then scoring the results.
# import the machine learning library that holds the random forest
import sklearn.ensemble as ske

# create a regression friendly data frame
y, x = dmatrices(formula_ml, data=df, return_type='dataframe')

# RandomForestClassifier expects a 1 dimensional NumPy array, so we convert
y = np.asarray(y).ravel()

# instantiate and fit our model
results_rf = ske.RandomForestClassifier(n_estimators=100).fit(x, y)

# score the results
score = results_rf.score(x, y)
print "Mean accuracy of Random Forest Predictions on the data was: {0}".format(score)

Mean accuracy of Random Forest Predictions on the data was: 0.945224719101

Our random forest scored about 94.5% — but be careful: we scored it on the same data it was trained on. A random forest can look nearly perfect in-sample even when, out of sample, it does only slightly better than a thumb wave (randomly assigning 1s and 0s by waving your thumb up and down). To know whether the forest has actually stumbled on the true structure of the data, evaluate it on held-out data.

These are just a few of the machine learning techniques that you can apply. Try a few for yourself and move up the leaderboard!

Ready to see an example of a more advanced analysis? Check out these notebooks:
http://nbviewer.jupyter.org/github/agconti/kaggle-titanic/blob/master/Titanic.ipynb
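The caveat about scoring on training data can be made concrete with a small sketch (not part of the original notebook; `make_classification` is an assumed stand-in for the Titanic frame):

```python
# contrast a random forest's training-set accuracy with a cross-validated
# estimate of its out-of-sample accuracy
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)

train_score = rf.fit(X, y).score(X, y)             # accuracy on the data it memorized
cv_score = cross_val_score(rf, X, y, cv=5).mean()  # fairer held-out estimate

print(train_score, cv_score)
```

The training score comes out near-perfect while the cross-validated score is noticeably lower — the gap is the overfitting that an in-sample score hides.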
What is it, why is it used? Examples …

Answer 1, Authority 100%

Lambda functions are functions that do not have a name. They come from mathematics: lambda calculus was an attempt to formalize computation in general.

λx.x

λ means that this is a lambda function. What follows it is the list of arguments — ideally of any type, including another lambda function. After the dot comes the "body" of the function, and then, finally, the argument that will be passed in. So:

λx.x + 2 2 // returns 4

A more complicated example:

(λx.x 2) (λy.y + 1) // result 3

Here another lambda function, λy.y + 1, is the argument, and the parameter 2 is then passed to it. That is, any lambda function is a higher-order function: it can take another function as an argument and return a function:

λx.λy.y + x + 3 2 // returns λy.y + 5, because x was equal to two
λx.λy.y + x + 3 2 3 // returns 8

In fact, this is currying: first the function takes the argument 2 and returns a function that takes another argument and returns the result. If you're interested, I once wrote similar things in C#.

Now let's see how all our examples look in C#. Here, as a lambda function, I use Func<T, U>, where T is the type of the argument and U is the type of the return value:

1) Func<int, int> func = x => x;
2) var result = new Func<int, int>(x => x + 2)(2);
3) var result = new Func<Func<int, int>, int>(x => x(2))(new Func<int, int>(y => y + 1));
4) var result = new Func<int, Func<int, int>>(x => y => y + x + 3)(2);
5) var result = new Func<int, Func<int, int>>(x => y => y + x + 3)(2)(3);

The only added complexity is the explicit declaration of the argument and return types.
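The same λ-calculus examples can be transcribed into Python lambdas (an illustrative sketch; the variable names are my own):

```python
# λx.x — the identity function
identity = lambda x: x
print(identity(2))  # 2

# λx.x + 2 applied to 2
add_two = lambda x: x + 2
print(add_two(2))  # 4

# currying: λx.λy.y + x + 3
curried = lambda x: (lambda y: y + x + 3)
step = curried(2)   # behaves like λy.y + 5, because x is bound to 2
print(step(3))      # 8
```

Because each lambda returns another lambda, partial application falls out for free — exactly the currying described above.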
Answer 2, Authority 33%

In simple terms, a lambda is a function that lives right in a variable. If we want to store a function in a variable, we would normally first define it:

def main(a, b):
    return a + 1, b - 1

tools = main(3, 4)  # (4, 3)
print(tools)

Using lambda:

tools = lambda a, b: (a + 1, b - 1)
print(tools(3, 4))  # (4, 3)

Thanks to the one-line form, it can conveniently be inserted anywhere.

Answer 3, Authority 30%

Lambda expressions are anonymous functions. The notation is calqued from mathematics, where a special form of writing functions was used to eliminate the ambiguity between a function and its value. Thanks to the efforts of typesetters (typographic, not HTML) the form $\hat{x}x$ was transformed into $\wedge x.x$, and then, naturally, into $\lambda x.x$.

That is, the value of a lambda expression is a function that can be applied to some argument or arguments.

Answer 4

Please indicate the details (the language in particular). Lambda expressions are anonymous functors (variables containing a whole function). A lambda expression can capture the lexical context in which it is used. In PHP, for example, such an expression is created as:

$func = create_function('$a, $b', 'return $a + $b;'); // lambda-style function
echo $func(1, 2); // 3
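Beyond definitions, a typical everyday use of lambdas — shown here in Python as an illustrative sketch — is as inline key functions passed to other functions:

```python
# sort the same list two ways without defining named helper functions
pairs = [(3, 'three'), (1, 'one'), (2, 'two')]

by_number = sorted(pairs, key=lambda p: p[0])
by_word = sorted(pairs, key=lambda p: p[1])

print(by_number)  # [(1, 'one'), (2, 'two'), (3, 'three')]
print(by_word)    # [(1, 'one'), (3, 'three'), (2, 'two')]
```

This is the common case where a lambda beats a named function: the logic is a single throwaway expression used exactly once at the call site.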
sources / mutt / 2.0

mutt (2.0.2-1) unstable; urgency=medium

  * New upstream release.
    + contains important fix for CVE-2020-28896.
    + some default values were changed as mutt switched to 2.x, check
      NEWS.Debian for more details.
  * debian/patches: all refreshed; the following were removed because they
    are already upstream:
    + upstream/433829-reply-to-before-reply-self.patch
    + upstream/521405-decode-clobber-weed.patch
    + upstream/802613-space-stuffing-flowed/attachment-handling.patch
    + upstream/802613-space-stuffing-flowed/mutt-pipe-attachment.patch
    + upstream/802613-space-stuffing-flowed/mutt-print-attachment.patch
    + upstream/802613-space-stuffing-flowed/mutt-save-attachment.patch
    + upstream/802613-space-stuffing-flowed/mutt-view-attachment.patch
    + upstream/861572-pgp-sign-inline-message.patch

 -- Antonio Radici <[email protected]>  Sun, 22 Nov 2020 11:59:19 +0100

mutt (1.14.6-1) unstable; urgency=medium

  * New upstream release.
  * debian/rules:
    + removed --enable-exact-address, it introduced a regression
      (Closes: 965200, 966456).
  * debian/patches:
    + added upstream/861572-pgp-sign-inline-message.patch to deal with
      verification of inline-signed emails (Closes: 861572).
    + added upstream/433829-reply-to-before-reply-self.patch to respect
      reply-to when replying to yourself (Closes: 433829).
    + added upstream/521405-decode-clobber-weed.patch not to clobber header
      when decode-save/copy is called and weed is set to yes (Closes: 521405).
    + added 802613-space-stuffing-flowed/*.patch not to revert to
      space-stuffing when saving format=flowed messages (Closes: 802613).
    + added debian-specific/530584-default-tmpdir-to-var-tmp{,-docs}.patch
      to switch the default draft dir to /var/tmp (Closes: 530584).

 -- Antonio Radici <[email protected]>  Sun, 02 Aug 2020 08:46:21 +0200

mutt (1.14.5-1) unstable; urgency=medium

  [ Debian Janitor ]
  * Trim trailing whitespace.
  * Set upstream metadata fields: Bug-Database, Bug-Submit, Repository,
    Repository-Browse.

  [ Antonio Radici ]
  * New upstream release.
  * debian/rules:
    + enabled --enable-exact-address so that we follow RFC2822
      (Closes: 698267)
    + switched to libidn2 rather than libidn (Closes: 932698)
  * debian/patches:
    + removed upstream/imap-preauth-and-ssh-tunnel.patch, no longer needed.

 -- Antonio Radici <[email protected]>  Mon, 29 Jun 2020 16:49:31 +0200

mutt (1.14.4-2) unstable; urgency=medium

  * debian/patches:
    + added imap-preauth-and-ssh-tunnel.patch from upstream, which does not
      check IMAP preauth in SSH tunnels (Closes: 963107)
  * debian/control:
    + Standards-Version bumped to 4.5.0, no change required.
    + Set debhelper-compat in B-D.
  * debian/upstream/metadata: added the correct repository.
  * debian/watch:
    + switched from FTP on mutt.org to HTTPS on bitbucket.org, both URLs are
      listed on the website and are officially endorsed by the mutt
      maintainer.
  * debian/upstream/signing-key.asc: re-exported to be minimal
    (i.e. export-options=export-minimal,export-clean)
  * debian/rules:
    + remove --enable-nntp as there is no NNTP patch in mutt (Closes: 948328)

 -- Antonio Radici <[email protected]>  Sat, 20 Jun 2020 16:41:32 +0200

mutt (1.14.4-1) unstable; urgency=medium

  * New upstream release.
    + Contains a fix for a possible MITM response injection attack when
      using STARTTLS with IMAP, POP3 and SMTP. The CVE is not yet released.

 -- Antonio Radici <[email protected]>  Fri, 19 Jun 2020 06:45:02 +0200

mutt (1.14.3-1) unstable; urgency=medium

  * New upstream release.
    + Fixes CVE-2020-14093 (Closes: 962897)
    + Fixes failures on non-linear cert chains (Closes: 961894)

 -- Antonio Radici <[email protected]>  Wed, 17 Jun 2020 10:46:34 +0200

mutt (1.14.0-1) unstable; urgency=medium

  * New upstream release.
  * debian/patches:
    + all patches refreshed.

 -- Antonio Radici <[email protected]>  Sat, 16 May 2020 07:31:08 +0200

mutt (1.13.2-1) unstable; urgency=medium

  * New upstream release.
  * debian/patches:
    + all patches refreshed.

 -- Antonio Radici <[email protected]>  Thu, 19 Dec 2019 07:00:56 +0100

mutt (1.13.0-1) unstable; urgency=medium

  * New upstream release.
  * debian/patches:
    + all patches refreshed.

 -- Antonio Radici <[email protected]>  Thu, 12 Dec 2019 09:22:41 +0100

mutt (1.12.2-2) unstable; urgency=medium

  * debian/control:
    + Standards-Version bumped to 4.4.1, no change required.
    + Added sensible-utils in Recommends due to /etc/Muttrc binaries which
      were previously in debianutils.

 -- Antonio Radici <[email protected]>  Mon, 11 Nov 2019 06:35:33 +0100

mutt (1.12.2-1) unstable; urgency=medium

  * New upstream release.
  * debian/patches:
    + all patches refreshed.
    + debian-specific/467432-write_bcc.patch slightly modified to fit the
      new function calls.
    + removed the following patches that are already upstream:
      - upstream/905551-oauthbearer-imap.patch
      - upstream/905551-oauthbearer-refresh.patch
      - upstream/905551-oauthbearer-smtp.patch
      - upstream/929017-atoi-undefined-behavior.patch
  * debian/mutt.install: renamed pgpring to mutt_pgpring as per upstream
    name change.

 -- Antonio Radici <[email protected]>  Fri, 25 Oct 2019 08:45:15 +0200

mutt (1.10.1-2.1) unstable; urgency=medium

  * Non-maintainer upload.
  * Apply patch from upstream to prevent undefined behaviour when parsing
    invalid Content-Disposition mail headers. The atoi() function was being
    called on a number which can potentially overflow and thus can have
    security implications depending on the atoi() implementation.
    (Closes: #929017)

 -- Chris Lamb <[email protected]>  Sat, 25 May 2019 09:57:12 +0100

mutt (1.10.1-2) unstable; urgency=low

  [ Jonathan Nieder ]
  * debian/patches:
    + added upstream patches for OAUTHBEARER support by Brandon Long
      (Closes: #905551).
      + upstream/905551-oauthbearer-imap.patch
      + upstream/905551-oauthbearer-smtp.patch
      + upstream/905551-oauthbearer-refresh.patch

  [ Antonio Radici ]
  * New release to include the patch above.
 -- Antonio Radici <[email protected]>  Tue, 07 Aug 2018 09:31:52 +0100

mutt (1.10.1-1) unstable; urgency=medium

  * New upstream release.
  * This release includes important patches to security bugs; especially
    important for IMAP and POP users.

 -- Antonio Radici <[email protected]>  Wed, 18 Jul 2018 22:05:56 +0100

mutt (1.10.0-1) unstable; urgency=medium

  * New upstream release.
  * debian/patches:
    + all patches refreshed.
    + removed patches already upstream:
      + upstream/383769-score-match.patch
      + upstream/fast-imap-flag-handling.patch
      + upstream/imap-poll-timeout.patch
      + upstream/maildir-hash.patch
      + upstream/886104-abort_noattach.patch

 -- Antonio Radici <[email protected]>  Sun, 27 May 2018 16:23:01 +0100

mutt (1.9.5-2) unstable; urgency=medium

  * Restore the abort_noattach patch which is upstream but not in the branch
    used for the 1.9.5 release (Closes: 895861, 895794).

 -- Antonio Radici <[email protected]>  Thu, 19 Apr 2018 06:39:19 +0100

mutt (1.9.5-1) unstable; urgency=medium

  * New upstream release.
    + debian/patches/upstream/886104-abort_noattach.patch removed (already
      upstream).

 -- Antonio Radici <[email protected]>  Sun, 15 Apr 2018 08:58:37 +0100

mutt (1.9.4-3) unstable; urgency=medium

  * debian/patches:
    + added upstream/886104-abort_noattach.patch which replaces the patch in
      the previous version

 -- Antonio Radici <[email protected]>  Tue, 13 Mar 2018 21:12:25 +0000

mutt (1.9.4-2) unstable; urgency=medium

  * debian/patches:
    + added debian-specific/886104-abort_noattach.patch (Closes: 886104).

 -- Antonio Radici <[email protected]>  Tue, 06 Mar 2018 21:05:29 +0000

mutt (1.9.4-1) unstable; urgency=medium

  * New upstream release.
  * debian/control:
    + Standards-Version upgraded to 4.1.3, no changes required.
    + VCS-* fields changed as we moved to salsa.debian.org.
    + Set Rules-Requires-Root: binary-targets as mutt_dotlock requires the
      suid bit.
  * debian/copyright:
    + Moved wildcard to first entry as requested by lintian.
    + Updated year for my contributions to 2018.

 -- Antonio Radici <[email protected]>  Sun, 04 Mar 2018 18:47:58 +0000

mutt (1.9.3-1) unstable; urgency=medium

  * New upstream release.

 -- Antonio Radici <[email protected]>  Sat, 27 Jan 2018 21:12:29 +0000

mutt (1.9.2-1) unstable; urgency=medium

  * New upstream release.

 -- Antonio Radici <[email protected]>  Sat, 16 Dec 2017 09:13:04 +0000

mutt (1.9.1-5) unstable; urgency=medium

  * debian/NEWS: adding a more detailed message about the new mailcap
    behavior for text/html content.
  * debian/{rules,control}: removed any reference to notmuch.
  * Bumped Standards-Version to 4.1.1; no changes required.
  * debian/patches:
    + rewrote Md.etc_mailname_gethostbyname.patch and renamed it to
      882690-use_fqdn_from_etc_mailname.patch (Closes: 882690)
  * bumped dh compat to 10, removed B-D on dh_autoreconf (now it's a
    dependency of dh).

 -- Antonio Radici <[email protected]>  Sat, 25 Nov 2017 18:23:05 +0000

mutt (1.9.1-4) unstable; urgency=medium

  * debian/control: remove a reference to Neomutt in the description.

 -- Antonio Radici <[email protected]>  Wed, 22 Nov 2017 21:03:20 +0000

mutt (1.9.1-3) unstable; urgency=medium

  * debian/mutt.maintscript: remove notmuch config file, this time for real
    (Closes: 882320)

 -- Antonio Radici <[email protected]>  Wed, 22 Nov 2017 20:40:44 +0000

mutt (1.9.1-2) unstable; urgency=medium

  * debian/patches:
    + upstream/749483-conststrings.patch: removed, already upstream, albeit
      in a different way
    + performance improvements for imap and maildir open times backported
      from mutt head:
      * upstream/fast-imap-flag-handling.patch
      * upstream/imap-poll-timeout.patch
      * upstream/maildir-hash.patch
  * debian/extra/rc/notmuch.rc: removed as notmuch is not in the upstream
    source code yet (Closes: 882320)

 -- Antonio Radici <[email protected]>  Tue, 21 Nov 2017 21:51:31 +0000

mutt (1.9.1-1) unstable; urgency=medium

  * New upstream release
    + switched to the official mutt.org source code (Closes: 870635)
    + added more information in the NEWS file explaining the changes.
  * debian/patches:
    + upstream/865822-restore-defaults.patch updated so that the debug file
      is correctly named and initialized.
    + removed the following patches because they were either applied
      upstream or not relevant:
      * upstream/693993-manpage-corrections.patch
      * upstream/749483-conststrings.patch
      * upstream/fix_doc_builddir.patch
      * upstream/fix_spelling_error.patch
      * upstream/865822-restore-defaults.patch
      * neomutt-devel/866366-index_format-corruption.patch
    + all remaining patches refreshed.
  * debian/control:
    + ensure correct attribution to the mutt source code for the sidebar and
      the compressed folder features.
  * debian/extra/lib/mailspell: stop using the deprecated tmpnam()
    (Closes: 866312).

 -- Antonio Radici <[email protected]>  Mon, 20 Nov 2017 21:38:53 +0000

mutt (1.8.3+neomutt20170609-2) unstable; urgency=medium

  * debian/patches:
    + upstream/644992-ipv6-literal.patch removed, already upstream.
    + upstream/771125-CVE-2014-9116-jessie.patch removed, already upstream.
    + upstream/865822-restore-defaults.patch created to set the default
      values of variables after they have been set in the init function
      (Closes: 865842, 865822).

 -- Antonio Radici <[email protected]>  Sun, 25 Jun 2017 10:00:09 +0100

mutt (1.8.3+neomutt20170609-1) unstable; urgency=medium

  * Switching upstream code to NeoMutt.
    + due to code formatting changes the neomutt patch would have been
      bigger than the mutt code.
  * New upstream NeoMutt release, 2017-06-09 (1.8.3) (Closes: 857687, 864024)
  * debian/patches:
    + neomutt patch removed.
    + debian-specific/Md.etc_mailname_gethostbyname.patch slightly rewritten.
    + debian-specific/467432-write_bcc.patch refreshed.
    + upstream/228671-pipe-mime.patch removed, already upstream
      (Closes: 860176).
    + upstream/611410-no-implicit_autoview-for-text-html.patch removed,
      already upstream.
    + upstream/644992-ipv6-literal.patch partially superseded by libidn.
    + upstream/fix_doc_builddir.patch created, to source html files from the
      builddir.
    + upstream/fix_spelling_error.patch created.
    + general patch refresh where applicable.
  * debian/control:
    + Standards-Version updated to 4.0.0, no changes required.
  * debian/rules:
    + removed NEWS, does not exist anymore.
    + build mutt_dotlock with --enable-dotlock.
    + enabled lua scripting with --enable-lua.
  * debian/mutt.docs: removed some docs that do not exist anymore.
  * debian/mutt.lintian-overrides: masked testsuite warning, not available.

 -- Antonio Radici <[email protected]>  Sat, 24 Jun 2017 09:32:46 +0100

mutt (1.8.0-1) unstable; urgency=medium

  * New upstream Mutt release.
  * New upstream NeoMutt release, 2017-03-06 (Closes: 857687)
  * debian/patches:
    + neomutt-devel/832971-reset-xlabel.patch removed, already upstream.
    + upstream/383769-score-match.patch: changed slightly to be applied.
    + all other patches refreshed.

 -- Antonio Radici <[email protected]>  Tue, 14 Mar 2017 21:30:19 +0000

mutt (1.7.2-1) unstable; urgency=medium

  * New upstream Mutt release.
  * New upstream NeoMutt release, 2017-01-13.
  * debian/patches:
    + all patches refreshed.
    + upstream/gpgme-set-sender.patch removed because it is already upstream.

 -- Antonio Radici <[email protected]>  Fri, 20 Jan 2017 21:47:49 +0000

mutt (1.7.1-5) unstable; urgency=medium

  * debian/patches:
    + add upstream/gpgme-set-sender.patch otherwise the package does not
      build due to conflicting declarations.

 -- Antonio Radici <[email protected]>  Thu, 01 Dec 2016 20:41:44 +0000

mutt (1.7.1-4) unstable; urgency=medium

  * New upstream NeoMutt release, 2016-11-26.
    + All patches refreshed.
    + upstream/549204-clear-N-on-readonly-imap-folders.patch pulled from the
      repo because it could cause bad interactions with the pager (I will
      revisit the decision in the next release).
  * debian/rules:
    + explicitly added --with-tokyocabinet otherwise the package does not
      build.

 -- Antonio Radici <[email protected]>  Thu, 01 Dec 2016 08:28:54 +0000

mutt (1.7.1-3) unstable; urgency=medium

  * Team upload.

  [ Antonio Radici ]
  * debian-specific/828751-pinentry-gpg2-support.patch: moved
    --pinentry-loopback within the conditional that checks the existence of
    PGPPASSFD.

  [ Evgeni Golov ]
  * update neomutt patch to 20161104
  * do not apply 835421-pop-digest-md5.patch, it's part of neomutt 20161104
  * refresh gpg.rc-paths.patch after 828751-pinentry-gpg2-support.patch

  [ Christoph Berg ]
  * Remove myself from Uploaders. Thanks for the fish!

 -- Evgeni Golov <[email protected]>  Mon, 07 Nov 2016 19:17:50 +0100

mutt (1.7.1-2) unstable; urgency=medium

  * Dropped neomutt-devel/837601-do-not-segfault-on-new-mails.patch which
    caused an extra empty line to be added to the pager.

 -- Antonio Radici <[email protected]>  Sun, 16 Oct 2016 20:17:38 +0100

mutt (1.7.1-1) unstable; urgency=medium

  * New upstream Mutt release.
    + Dropped two patches already applied upstream:
      - upstream/827189-opportunistic-encryption-crash.patch
      - upstream/837372-do-not-color-gpgme-output.patch
  * New upstream NeoMutt release, 2016-10-14
    + Refreshed all the patches to apply cleanly.
    + dropped neomutt-devel/drop-neomutt-syntax.patch

 -- Antonio Radici <[email protected]>  Sat, 15 Oct 2016 10:35:20 +0100

mutt (1.7.0-6) unstable; urgency=medium

  * New upstream NeoMutt release, 2016-09-16
    + Refreshed some patches to apply cleanly.
  * Dropped the following patches as a result of the above release:
    + neomutt-devel/fix-array-bounds-error.patch
    + neomutt-devel/fix-tarname-in-ac-init.patch
    + neomutt-devel/837416-avoid-segfault-when-listing-mailboxes-on-startup.patch
    + upstream/833192-preserve-messageid-for-postponed-emails.patch
    + upstream/openssl-1.1-build.patch
    + upstream/837673-fix-gpgme-sign-bindings.patch
  * debian/NEWS:
    + added an info about the deprecation of --encrypt-to in hardcoded gnupg
      command (Closes: 838352).
    + added a note about $attribution_locale and the removal of $locale,
      introduced with the latest Neomutt version (Closes: 414828).
  * debian/patches:
    + neomutt-devel/drop-neomutt-syntax.patch: remove neomutt-syntax.vim
      otherwise the package does not build.
    + debian-specific/Muttrc.patch: add three more headers to mailto_allow
      (Closes: 834765).

 -- Antonio Radici <[email protected]>  Sat, 24 Sep 2016 23:29:56 +0100

mutt (1.7.0-5) unstable; urgency=medium

  * debian/patches:
    + neomutt-devel/837601-do-not-segfault-on-new-mails.patch: updated to
      prevent crash when exiting from the pager while viewing a composed
      email (Closes: 837634).
    + upstream/827189-opportunistic-encryption-crash.patch: do not crash
      when doing opportunistic encryption with long addresses
      (Closes: 827189).
    + upstream/837673-fix-gpgme-sign-bindings.patch: to use correct key
      bindings if the pgp sign message is not translated (Closes: 837673).

 -- Antonio Radici <[email protected]>  Tue, 13 Sep 2016 14:57:35 +0100

mutt (1.7.0-4) unstable; urgency=medium

  * debian/patches:
    + neomutt-devel/837416-avoid-segfault-when-listing-mailboxes-on-startup.patch:
      to prevent segfaulting when mutt -y is launched (Closes: 837416).
    + neomutt-devel/fix-array-bounds-error.patch: fix an off-by-one in
      neomutt.
    + neomutt-devel/837601-do-not-segfault-on-new-mails.patch: do not crash
      when a new mail arrives (Closes: 837601).
    + upstream/837372-do-not-color-gpgme-output.patch: to maintain the same
      behavior as pgp.c when it comes to use the body color for the gpg
      output (Closes: 837372).

 -- Antonio Radici <[email protected]>  Sun, 11 Sep 2016 14:38:40 +0100

mutt (1.7.0-3) unstable; urgency=medium

  * New upstream NeoMutt release, 2016-09-10.
    + added neomutt-devel/fix-tarname-in-ac-init.patch to set the right
      location for locale and docs.
  * Dropped the following patches as a result of the above release:
    + neomutt-devel/834448-restore-i-pager-binding.patch
    + neomutt-devel/836812-user-agent-temp-fix.patch
    + neomutt-devel/837212-fix-duplicate-saved-messages.patch
      (Closes: 837212)
    + neomutt-devel/sensible-browser.patch
    + upstream/569038-interrupt-socket-read-write.patch
    + upstream/741213-dsa-elgamal-keys-length.patch
    + upstream/757141-date-format-length.patch
    + upstream/819196-disable-X-in-message-scoring.patch

 -- Antonio Radici <[email protected]>  Sat, 10 Sep 2016 08:10:02 +0100

mutt (1.7.0-2) unstable; urgency=medium

  * debian/patches:
    + upstream/833192-preserve-messageid-for-postponed-emails.patch: do not
      remove the message-id of postponed emails (Closes: 833192).
    + upstream/819196-disable-X-in-message-scoring.patch: to disable ~X in
      message scoring, as upstream requested (Closes: 819196).
    + upstream/757141-date-format-length.patch: allow more space for
      date_format (Closes: 757141).
    + upstream/644992-ipv6-literal.patch: to parse ipv6 literal addresses
      properly (Closes: 644992).
    + upstream/741213-dsa-elgamal-keys-length.patch: to correctly extract
      the length of DSA and Elgamal keys (Closes: 741213).
    + upstream/549204-clear-N-on-readonly-imap-folders.patch: to clear the N
      flag on readonly IMAP mailboxes (Closes: 549204).
    + upstream/569038-interrupt-socket-read-write.patch: allow the
      interruption of operations which can be long-running
      (Closes: 569038, 774746, 423931, 599136, 618425).
    + upstream/openssl-1.1-build.patch: to build against openssl 1.1
    + neomutt-devel/832971-reset-xlabel.patch to reset X-Label properly for
      newer versions of mutt (Closes: 832971).
    + neomutt-devel/836812-user-agent-temp-fix.patch: hardcode the NeoMutt
      version, it will be fixed in the next NeoMutt release (Closes: 836812).
    + neomutt-devel/834448-restore-i-pager-binding.patch: restored the 'i'
      binding to exit from the pager (Closes: 834448).
    + debian-specific/828751-pinentry-gpg2-support.patch: enable gpgme by
      default, delegating all crypto to gnupg
      (Closes: 96144, 828751, 824832).
    + misc/smime.rc.patch: switch to 'openssl cms' for decrypt (superset of
      smime) (Closes: 639533).
  * debian/extra/rc/notmuch.rc: restored the notmuch keybindings
    (Closes: 836148).
  * debian/NEWS: added information about GPGME being enabled by default.

 -- Antonio Radici <[email protected]>  Mon, 29 Aug 2016 21:27:08 +0100

mutt (1.7.0-1) unstable; urgency=medium

  * New upstream release.
  * New upstream NeoMutt release, 2016-08-27.
    - neomutt-devel/restore-docfile-installation.patch removed (already
      upstream).
  * debian/patches:
    + some patches refreshed.
    + debian-specific/document_debian_defaults.patch updated to remove an
      incorrect reference to a default variable (Closes: 741166).
    + upstream/611410-no-implicit_autoview-for-text-html.patch restored, it
      was incorrectly dropped (Closes: 823971).
    + upstream/835421-pop-digest-md5.patch to fix incorrect handling of pop
      DIGEST-MD5 auth (Closes: 835421).
    + upstream/693993-manpage-corrections.patch with some fixes to the
      manpage (Closes: 693993).
    + upstream/749483-conststrings.patch fixes a conflicting declaration
      (Closes: 749483)

 -- Antonio Radici <[email protected]>  Sun, 28 Aug 2016 15:10:08 +0100

mutt (1.6.2-3) unstable; urgency=medium

  [ Faidon Liambotis ]
  * Pass --disable-fmemopen to ./configure. fmemopen has been the cause for
    some issues, such as reported crashes and incompatibility with torify.
    (Closes: #834408).

  [ Antonio Radici ]
  * New NeoMutt release, 2016-08-21.
    - Fixes data corruption in compressed folders. (Closes: #834818)
    - Reverts to the vanilla mutt keybindings. (Closes: #834448, #834500)
    - Hide the progress bar if it wasn't given a color. (Closes: #834450)
    - Drop neomutt-devel/*sidebar* patches, already included upstream.
    - Drop getrandom patch already included upstream.
    - Some patches, including sensible-browser, refreshed.
    - Slightly modified the AC_INIT in configure.ac so that the package
      builds.
  * debian/patches:
    + created neomutt-devel/restore-docfile-installation.patch to restore
      the installation of some docfiles dropped by neomutt.

 -- Antonio Radici <[email protected]>  Tue, 23 Aug 2016 21:48:35 +0100

mutt (1.6.2-2) unstable; urgency=medium

  * New NeoMutt release, 2016-08-08.
    - Fixes crash when closing compressed mbox files. (Closes: #834024)
    - Drop patch features/809802_timeout_hook.patch, merged upstream.
    - Drop patch debian/patches/features/multiple-fcc.patch, merged upstream.
    - Drop patch imap-sidebar-update-bug.patch, was an upstream backport.
    - Refresh patch neomutt-devel/sensible-browser.patch.
  * Backport three sidebar fixes, courtesy of upstream vanilla mutt
    maintainer Kevin J. McCarthy.
  * Add a patch to avoid a warning message if getrandom() fails. This should
    help not scare off users that are running a jessie kernel, among others.
    Thanks to Adam Borowski for the fix. (Closes: #833593)

 -- Faidon Liambotis <[email protected]>  Sun, 14 Aug 2016 18:40:11 +0300

mutt (1.6.2-1) unstable; urgency=medium

  * New upstream release.
  * New upstream NeoMutt release, 2016-07-23.
    - Adds SMIME encrypt to self patch. (Closes: #688970)
  * Backport a fix for the sidebar from neomutt git/mutt hg, patch
    imap-sidebar-update-bug.patch.
  * Update NEWS.Debian and (unfortunately) rewrite history in order to make
    it a little more consistent and easier to read for users upgrading from
    jessie. (Closes: #832761)
  * The sidebar patch has been stabilized with this release, with the option
    names also having been stable enough to be included into upstream mutt
    (what will become 1.7.0). All the known Debian bugs have been fixed and
    changes have been documented in NEWS. (Closes: #499596, #741853,
    #777127, #821748, #823142, #823454, #823654, #823655)
  * Remove the /etc/Muttrc.d/sidebar.rc conffile which enabled sidebar by
    default.
Sidebar is now OFF by default, in order to stick with upstream's defaults and what most mutt users expect. Document this in NEWS.Debian. * Ship our patched Muttrc instead of the stock, non-generated Muttrc, a regression from 1.6.1-2. (Closes: #830692, #830695) * Remove the assumed_charset-compat.patch and inform users of the renamed option ("file_charset" -> "attach_charset") via NEWS.Debian. -- Faidon Liambotis <[email protected]> Fri, 29 Jul 2016 16:43:06 +0300 mutt (1.6.1-2) experimental; urgency=medium * Fold mutt-patched into mutt, as the line between the wo has became more blurry since its introduction. Upstream mutt has merged the sidebar and the Debian mutt binary package already carries a lot of feature patches. - Remove the mutt-patched binary from debian/control. - Add Breaks/Replaces: mutt-patched on the mutt binary package. - Remove mutt-patched rules from debian/rules. - Move multiple-fcc (the only remaining patch) under features/. * Similarly, fold mutt-kz, a binary from a different source package until now, into mutt. mutt-kz upstream has joined forces with NeoMutt, so this is now a matter of just passing --enable-notmuch to our ./configure. * Remove update-alternatives support for /usr/bin/mutt and associated manpages and docs. This only existed for the benefit of mutt-patched and mutt-kz and is thus moot now. * Update neomutt to a newer version, 20160611. - Enable *all* the neomutt patches now, not just a hand-picked selection. - Use the single patch, as the broken out patches conflict with each other. - Drop the compressed-folders, NNTP and path_max patches, as they are now part of neomutt. - Update sensible-browser and multiple-fcc patches to adjusted versions from the neomutt-upstream. - Adjust the package's description to mention some of neomutt's features. * Enable all hardening build flags. (Closes: #823295) * Remove README.Patches from /usr/share/doc. 
    Including a unified diff in /usr/share/doc isn't very useful; the source package can always be used instead.
  * Remove mentions of gpg-2comp from gpg.rc and README.Debian and ship the resulting gpg.rc into /etc/Muttrc.d pristine, with the comments included.
  * Switch to tokyocabinet (from qdbm) in hurd-i386 as well, as it is nowadays available there too.
  * Lower Priority to optional (from standard), as this is currently the value in the archive, as overridden by ftp-masters.
  * Remove postinst code that handles migrations from pre-1.5.20-9 versions, as it's too old (even wheezy shipped with 1.5.21-6.2).
  * Remove preinst code that handles an obsolete conffile from pre-1.5.19-2 versions, for the same reasons.
  * Remove Conflicts/Replaces mutt-utf8, it was last shipped with version 1.5.5.1-20040105+1, released over 12 years ago.
  * Remove statically-linked-binary lintian override for mutt_dotlock, not the case anymore.
  * Remove quilt.mk and manual quilt invocations. Rely on the native 3.0 (quilt) source package format instead for applying debian/patches.
  * Use dh_bugfiles instead of manual install invocations.
  * Migrate from our own -dbg package to the automatic -dbgsym package.
  * Migrate to dh, instead of our hand-crafted old-style debhelper d/rules.
  * Migrate to dh-autoreconf instead of autotools-dev.
  * Revamp debian/copyright: use copyright-format 1.0 (aka DEP5), update the list of copyright holders, add a debian/* stanza with all the past maintainers for the period they were maintainers etc.
  * Change Maintainer to the newly created pkg-mutt-maintainers list.
  * Add myself to Uploaders.

 -- Faidon Liambotis <[email protected]>  Sat, 09 Jul 2016 00:05:49 +0300

mutt (1.6.1-1) experimental; urgency=medium

  * Team upload.
  [ Sebastian Ramacher ]
  * Update link to sidebar documentation and update the sample (Closes: #822729)

  [ James McCoy ]
  * fix compilation of nntp.patch and re-enable it again (Closes: #822893)

  [ Evgeni Golov ]
  * document sidebar changes via debian/NEWS (Closes: #822910)

  [ Matteo F. Vescovi ]
  * Imported Upstream version 1.6.1
  * debian/README.source: file dropped

  [ Faidon Liambotis ]
  * Remove patch define-pgp_getkeys_command. According to the upstream bug report, it was never supposed to work like our patch (adding a commented-out setting) expected it to.
  * Import neomutt 20160502 under debian/patches/neomutt. To be used soon.
  * Replace ifdef.patch with neomutt's
  * Replace the three trash patches with neomutt's
    - features/imap_fast_trash.patch
    - features/purge-message.patch
    - features/trash-folder.patch
  * Replace our sidebar patches with neomutt's. This replaces all 4 of our sidebar patches (and more!). Unfortunately, needed a tiny fix over neomutt's sidebar because of our opposite ordering vis-a-vis 11-ifdef.
  * Remove the sidebar.muttrc sample, new patch has proper docs
  * Update NEWS with the new (NeoMutt's) sidebar changes
  * Replace NNTP patch with neomutt's
  * Replace sensible-browser patch with neomutt's
  * Manually resolve some patch fuzz
  * Restore PATCHES before quilt push/pops. Our new neomutt patches modify PATCHES, which results in conflicts with our debian/rules code that also modifies it. Back PATCHES up and restore it before any quilt operations.

 -- Matteo F. Vescovi <[email protected]>  Tue, 17 May 2016 22:02:52 +0200

mutt (1.6.0-1) unstable; urgency=medium

  * New upstream release.
    + adds the -E option to modify the draft file (Closes: #695220, #434235)
    + does not crash while managing attachments (Closes: #677687)
    + allows setting the signing digest for S/MIME (Closes: #741147)
    + properly parses Outlook's S/MIME signatures (Closes: #701013)

  [ Antonio Radici ]
  * debian/control: moved the MTA from Recommends to Suggests (Closes: #670769)
  * debian/extra/mutt.desktop: set NoDisplay to false (Closes: #678596)

  [ Matteo F. Vescovi ]
  * debian/patches/: patchset updated
    - upstream/809802_timeout_hook.patch added (Closes: #809802)
    As stated by the upstream maintainer, the following patches can be safely dropped: (Closes: #816706)
    - misc/fix-configure-test-operator.patch
    - upstream/531430-imapuser.patch
    - upstream/543467-thread-segfault.patch
    - upstream/548577-gpgme-1.2.patch
    - upstream/553321-ansi-escape-segfault.patch
    - upstream/603288-split-fetches.patch
    - upstream/611410-no-implicit_autoview-for-text-html.patch
  * debian/rules: Glob expansions added to make mutt reproducible. Thanks to Daniel Shahaf for the patch (Closes: #818419)
  * debian/control: S-V bump 3.9.6 -> 3.9.8 (no changes needed)
  * debian/control: Vcs-* fields updated for https:// usage
  * debian/control: add myself to Uploaders
  * debian/mutt.menu: file dropped

  [ Evgeni Golov ]
  * update sidebar patch to the 20151111 version
  * update nntp patch to the 1.6.0 version
  * drop patches applied upstream
  * refresh patches against 1.6.0

 -- Matteo F. Vescovi <[email protected]>  Tue, 26 Apr 2016 16:46:49 +0200

mutt (1.5.24-1) unstable; urgency=medium

  * Team upload.

  [ Evgeni Golov ]
  * Fix implicit-function-declaration warnings during compile

  [ Matteo F. Vescovi ]
  * Imported Upstream version 1.5.24 (Closes: #763522)
  * debian/patches/: patchset re-worked against v1.5.24
    - features-old/patch-1.5.4.vk.pgp_verbose_mime.patch dropped (applied upstream)
    - features/xtitles.patch dropped (different approach by upstream)
    - upstream/542817-smimekeys-tmpdir.patch dropped (applied upstream)
    - upstream/547980-smime_keys-chaining.patch dropped (applied upstream)
    - upstream/624058-gnutls-deprecated.patch dropped (applied upstream)
  * debian/control: S-V bump 3.9.5 => 3.9.6 (no changes needed)
  * debian/: GnuPG signature verification added
  * debian/watch: path to release tarballs updated

 -- Christoph Berg <[email protected]>  Sun, 20 Sep 2015 17:58:34 +0200

mutt (1.5.23-3.1) unstable; urgency=low

  * Non-maintainer upload.
  * upstream/624058-gnutls-deprecated.patch: Use gnutls_priority_set_direct() instead of gnutls_protocol_set_priority() together with gnutls_set_default_priority(). Cherry-pick the relevant parts from upstream HG, without the compatibility stuff for ancient (< 2.2.0) GnuTLS. Closes: #624058

 -- Andreas Metzler <[email protected]>  Sat, 01 Aug 2015 13:54:03 +0200

mutt (1.5.23-3) unstable; urgency=medium

  * Fixed upstream/771125-CVE-2014-9116-jessie.patch thanks to Salvatore Bonaccorso; now it correctly fixes the CVE and does not affect other functionality of mutt (Closes: 771674)

 -- Antonio Radici <[email protected]>  Thu, 04 Dec 2014 21:09:07 +0000

mutt (1.5.23-2) unstable; urgency=medium

  * Created upstream/771125-CVE-2014-9116-jessie.patch to address CVE-2014-9116; the patch prevents mutt_substrdup from being used in a way that can lead to a segfault.

 -- Antonio Radici <[email protected]>  Sat, 29 Nov 2014 18:13:56 +0000

mutt (1.5.23-1.1) unstable; urgency=medium

  * Non-maintainer upload.
  * Rebuild against GnuTLS v3. Closes: #668816

 -- Andreas Metzler <[email protected]>  Sun, 17 Aug 2014 13:42:50 +0200

mutt (1.5.23-1) unstable; urgency=medium

  * Team upload.

  [ Matteo F. Vescovi ]
  * New upstream security bugfix release
    - debian/patches/: patchset refreshed against v1.5.23
    - debian/patches/: CVE-2014-0467.patch dropped (applied upstream)
  * debian/rules: --enable-exact-address parameter dropped again (Closes: #741525, #741527, #741551)

  [ Evgeni Golov ]
  * update nntp patch to the latest upstream version

 -- Evgeni Golov <[email protected]>  Sun, 16 Mar 2014 23:16:31 +0100

mutt (1.5.22-2) unstable; urgency=high

  * Team upload.

  [ Matteo F. Vescovi ]
  * debian/gbp.conf: config file added
  * debian/patches/: patchset re-worked using right gbp-pq option
  * debian/rules: --enable-exact-address parameter added. Thanks to Jari Aalto for the patch. (Closes: #698267)

  [ Evgeni Golov ]
  * SidebarDelim can be NULL and strlen(NULL) is a bad idea (Closes: #696145)
  * Use mbstowcs instead of strlen for the sidebar_delim (Closes: #663883)
  * Install mutt-patched docs (HTML, TXT, manpages) with a -patched suffix. Properly link the correct docs for the running mutt flavour using update-alternatives. (Closes: #740887)
  * update package description of mutt-patched (Closes: #700365)
  * fix drawing of no sidebar when starting in compose mode (Closes: #502627)
  * update sidebar-newonly patch so the keybindings work properly. Thanks to gregor herrmann <[email protected]> (Closes: #546627)
  * Fix buffer overrun caused by not updating a string length after address expansion. (Closes: #708731) Fixes: CVE-2014-0467
  * Update lintian overrides, as mutt-{org,patched} now have own manpages and the doc-base references get created in the postinst.

 -- Evgeni Golov <[email protected]>  Wed, 12 Mar 2014 13:16:21 +0100

mutt (1.5.22-1) unstable; urgency=low

  Many thanks to Matteo and Evgeni for preparing this release!

  [ Matteo F. Vescovi ]
  * Imported Upstream version 1.5.22 (Closes: #732859)
    - debian/patches/: patchset re-worked against v1.5.22 via gbp
    - __[tag]__ classification has been introduced with this release (gbp pq format)
    - all already-applied "upstream" patches were dropped
    - most of the patches required simple renaming
    - debian/rules: patch usage modified
    - mutt.org patch popping updated
    - patch path for README.patches updated
  * debian/control: Vcs-* fields updated
  * debian/: dh bump version 7 => 9
  * debian/source/format: 1.0 => 3.0 (quilt)
  * debian/control: S-V bump 3.9.2 => 3.9.5 (no changes needed)

  [ Evgeni Golov ]
  * Refresh sidebar related patches. Update the main sidebar patch from OpenBSD, who have ported the latest upstream version of the patch to mutt 1.5.22.
  * Drop our sidebar-sorted patch, as upstream has support for sorting now.
  * Drop our sidebar-dotted in favor of Gentoo's sidebar-dotpathsep patch. Gentoo's patch has a configurable list of delimiters, which is nice.
  * Re-add sidebar-newonly patch
  * Do not segfault in sidebar-newonly (Closes: #546591)
  * Add nntp patch
  * Fix a warning during configure: checking for idna_to_ascii_from_locale...
    no ../configure: line 12285: test: =: unary operator expected

  [ Christoph Berg ]
  * Bugs fixed upstream:
    Closes: #631017: crash on group reply
    Closes: #674245: mutt silently truncates IMAP passwords longer than 63 bytes
    Closes: #413688: GnuPG and GnuPG clients unsigned data injection vulnerability
    Closes: #541241: attachment type misdetection for small .tar.gz
    Closes: #580677: default keybindings override user defined ones
    Closes: #592874: User-defined settings are overridden by /etc/Muttrc
    Closes: #602145: Display problems for mbox-files > 2GiB
    Closes: #668583: SegFault on verifying gpg key for a message
    Closes: #675464: almost all colors are bright if bright is used for "normal"
    Closes: #172960: %r contains e-mail address instead of key id
    Closes: #482883: removes custom headers on postpone+resume
    Closes: #509980: Mail-Followup-To removed when recalling postponed messages
  * Patches applied upstream are now removed:
    537061-dont-recode-saved-attachments.patch
    537694-segv-imap-headers.patch
    537818-emptycharset.patch
    568295-references.patch
    578087-header-strchr.patch
    584138-mx_update_context-segfault.patch
    608706-fix-spelling-errors.patch
    611412-bts-regexp.patch
    619216-gnutls-CN-validation.patch
    620854-pop3-segfault.patch
    624058-gnutls-deprecated-set-priority.patch
    624085-gnutls-deprecated-verify-peers.patch
  * Use autotools-dev to update configure.{sub,guess} (Closes: #727264)
  * Remove obsolete configure switches: --enable-inodesort --with-sharedir.

 -- Christoph Berg <[email protected]>  Wed, 05 Mar 2014 13:51:33 +0100

mutt (1.5.21-6.4) unstable; urgency=low

  * Non-maintainer upload with maintainer approval.
  * Update 584138-mx_update_context-segfault.patch
    Stop segfaulting when listing folders with new mails over imap.
    Thanks: Nikolaus Schulz <[email protected]>
    Closes: #626294
  * Update features/imap_fast_trash
    Don't send saved messages to trash
    Thanks: Chow Loong Jin <[email protected]>
    Closes: #721860

 -- Evgeni Golov <[email protected]>  Fri, 13 Sep 2013 08:34:05 +0200

mutt (1.5.21-6.3) unstable; urgency=low

  * Build-Depend on libtool, as used by autoreconf.
  * Cherry-pick 6205:0488deb39a35 to resolve FTBFS due to "automatic de-ANSI-fication support" being removed. (Closes: #711786)

 -- Iain Lane <[email protected]>  Mon, 17 Jun 2013 09:54:24 +0100

mutt (1.5.21-6.2) unstable; urgency=low

  * Non-maintainer upload.
  * debian/rules: Use xz compression for binary packages. (Closes: #683893)

 -- Ansgar Burchardt <[email protected]>  Sun, 05 Aug 2012 10:07:14 +0200

mutt (1.5.21-6.1) unstable; urgency=low

  * Non-maintainer upload.
  * (Mainly) L10n NMU targeting wheezy
    + German translation of desktop file (Closes: #658627)
    + Fixes when using the desktop file with dropped URL (Closes: #628240)
    + Fixes for syntax of desktop file (Closes: #639001)
    + Update de.po, reviewed by debian-l10n-german (Closes: #579967) and fix a typo in the compressed folder de.po translation

 -- Helge Kreutzmann <[email protected]>  Fri, 29 Jun 2012 12:48:12 +0200

mutt (1.5.21-6) unstable; urgency=low

  * debian/rules: enabled hardened flags; patch by [email protected] (Closes: 654148).
  * debian/mutt.lintian-overrides: adding an override for the statically linked mutt_dotlock

 -- Antonio Radici <[email protected]>  Fri, 22 Jun 2012 22:47:28 +0100

mutt (1.5.21-5) unstable; urgency=low

  * debian/control: Standards-Version moved from 3.9.2.0 to 3.9.2 for cosmetic reasons
  * debian/patches/mutt-patched:
    + sidebar: patch replaced with the one written by Stuart Henderson (Closes: 619822)
    + sidebar: don't overwrite the status if status_on_top is enabled (Closes: 494735)
    + sidebar-sorted: use strcoll() to sort the sidebar using the locale settings of the system, patch by Arnaud Riess (Closes: 589240)
    + multiple-fccs: added a patch that allows multiple FCCs separated by commas, written by Omen Wild (Closes: 586454)
    + sidebar-utf8: rewrites make_sidebar_entry() to allow correct padding of utf-8 strings and also prevents segfaults due to overflows (Closes: 584581, 603287)
  * debian/patches/debian-specific:
    + Muttrc: remove a hook for application/octet-stream, already upstream (Closes: 611405)
  * debian/patches/upstream:
    + 611412-bts-regexp.patch: fixes a regexp for BTS in the mutt manual (Closes: 611412)
    + 624058-gnutls-deprecated.patch: deprecate gnutls_protocol_set_priority() (Closes: 624058)
    + 624085-gnutls-deprecated-verify-peers.patch: deprecate gnutls_certificate_verify_peers() (Closes: 624085)
    + 584138-mx_update_context-segfault.patch: fix a segfault due to holes in IMAP headers, 537694-segv-imap-headers.patch is removed as part of this fix (Closes: 584138)
    + 619216-gnutls-CN-validation.patch: fix the validation of the commonname in the gnutls code (Closes: 619216)
    + 611410-no-implicit_autoview-for-text-html.patch: blacklist text/html from implicit_autoview, patch by Loïc Minier (Closes: 611410, 620945)
  * debian/patches/compressed-folders: remove partially uncompressed folder if the open fails (Closes: 578098)
  * debian/extra/samples/sidebar.muttrc: documented the options that the sidebar-{sorted,dotted} patches are introducing; documentation submitted by Julien Valroff (Closes: 603186)
  * added mutt.desktop to debian/extra and installed through mutt.install, it contains a MimeType handler for mailto (Closes: 613781)

 -- Antonio Radici <[email protected]>  Thu, 05 May 2011 15:00:56 +0000

mutt (1.5.21-4) unstable; urgency=low

  * debian/patches:
    + mutt-patched/sidebar: added a closedir() so the fds will not be starved (Closes: 620854)
    + upstream/620854-pop3-segfault.patch: prevent segfault when $message_cachedir is set and a pop3 mailbox is open (Closes: 620854)
  * debian/control:
    + Standards-Version upgraded to 3.9.2.0, no changes required

 -- Antonio Radici <[email protected]>  Mon, 11 Apr 2011 16:23:35 +0100

mutt (1.5.21-3) unstable; urgency=low

  * Uploading to unstable

 -- Antonio Radici <[email protected]>  Sun, 20 Mar 2011 23:53:02 +0000

mutt (1.5.21-2) experimental; urgency=low

  [ Christoph Berg ]
  * debian/patches: features/imap_fast_trash: Support purging of messages.

  [ Antonio Radici ]
  * debian/patches:
    + upstream/578087-header-strchr.patch: prevent from segfaulting on malformed messages (Closes: 578087, 578583)
    + upstream/603288-split-fetches.patch: split FETCH's into smaller chunks, workaround for Exchange 2010 (Closes: 603288)
    + upstream/537061-dont-recode-saved-attachments.patch: as the patch says, see the patch for more info (Closes: 537061)
    + upstream/608706-fix-spelling-errors.patch: to fix some spelling errors (Closes: 608706)
    + debian-specific/566076-build_doc_adjustments.patch: use w3m to build the manual (Closes: 566076)
  * debian/extra/lib/mailto-mutt: replaced by a wrapper, added the reason to NEWS.Debian (Closes: 576313)
  * debian/extra/rc/compressed-folders.rc: added support for xz-compressed folders (Closes: 578099)
  * debian/*.lintian-overrides: ignore 'binary without manpage' errors due to the diversions to mutt-org/mutt-patched

 -- Antonio Radici <[email protected]>  Sun, 02 Jan 2011 21:05:25 +0000

mutt (1.5.21-1) experimental; urgency=low

  * New upstream version (Closes: 597487)
  * debian/patches:
    + refreshed all patches
    + removed patches applied upstream
  * debian/control:
    + Standards-Version bumped to 3.9.1, no change required

 -- Antonio Radici <[email protected]>  Sun, 03 Oct 2010 22:48:50 +0100

mutt (1.5.20-10) experimental; urgency=low

  * debian/patches: features/imap_fast_trash: Make "move to trash folder" use IMAP COPY, by Paul Miller (jettero).

 -- Christoph Berg <[email protected]>  Wed, 25 Aug 2010 11:11:43 +0200

mutt (1.5.20-9) unstable; urgency=low

  *.

 -- Christoph Berg <[email protected]>  Sat, 12 Jun 2010 10:33:03 +0200

mutt (1.5.20-8) unstable; urgency=low

  *)

 -- Christoph Berg <[email protected]>  Tue, 01 Jun 2010 23:22:26 +0200

mutt (1.5.20-7) unstable; urgency=low

  *

 -- Antonio Radici <[email protected]>  Mon, 08 Feb 2010 00:27:55 +0000

mutt (1.5.20-6) unstable; urgency=low

  * debian/patches:
    + debian-specific/467432-write_bcc.patch: do not write Bcc headers even if write_bcc is set (Closes: 467432, 546884, 467432)

 -- Antonio Radici <[email protected]>  Tue, 19 Jan 2010 21:57:48 +0000

mutt (1.5.20-5) unstable; urgency=low

  *)

 -- Antonio Radici <[email protected]>  Wed, 02 Dec 2009 22:38:00 +0000

mutt (1.5.20-4) unstable; urgency=low

  * Backing out the broken mutt-patched/sidebar-newonly patch (Closes: 546591, 546592)

 -- Antonio Radici <[email protected]>  Mon, 14 Sep 2009 18:49:29 +0100

mutt (1.5.20-3) unstable; urgency=low

  [.

 -- Antonio Radici <[email protected]>  Sun, 13 Sep 2009 18:34:48 +0100

mutt (1.5.20-2) unstable; urgency=low

  [ Antonio Radici ]
  * debian/patches/series:
    + upstream/533209-mutt_perror.patch: better error reporting if a mailbox cannot be opened (Closes: 533209)
    + upstream/533459-unmailboxes.patch: fixes a segfault with the unmailboxes command (Closes: 533459)
    + upstream/533439-mbox-time.patch: do not corrupt the atime/mtime of mboxes when opened (Closes: 533439)
    + upstream/531430-imapuser.patch: ask the user for the right information while logging in on IMAP servers (Closes: 531430)
    + upstream/534543-imap-port.patch: correctly parse the port in an IMAP url (Closes: 534543)
    + added the right copyright misc/smime_keys-manpage.patch
    + mutt-patched/sidebar: refreshed
    + mutt-patched/sidebar-{dotted,sorted} added (Closes: 523774)
  * debian/control:
    + Debian policy bumped to 3.8.2
  * debian/mutt.install and debian/extra/lib/mailto-mutt:
    + added the firefox mailto handler (Closes: 406850)

  [ Christoph Berg ]
  * Remove maildir-mtime patch, upstream has a different implementation (though with different results; Closes: 533471)
  * Use elinks-lite (with an alternative Build-Dependency on elinks) for rendering the manual. Thanks to Ahmed El-Mahmoudy for the suggestion. (Closes: 533445)

 -- Christoph Berg <[email protected]>  Sat, 20 Jun 2009 15:00:50 +0200

mutt (1.5.20-1) unstable; urgency=low

  * New upstream release, includes the following features:
    + Bounced messages contain From: headers (Closes: 93268)
    + Attachments displayed based on Content-Disposition (Closes: 199709)
    + fcc to a mailbox does not raise the 'new' flag (Closes: 209390)
    + '!'
      supported as suffix in gpg keys (Closes: 277945)
    + failed attachment saving shows an error message (Closes: 292350)
    + inline signed messages sent honouring $send_charset (Closes: 307819)
    + support for <clear-flag> and <set-flag> in the pager (Closes: 436007)
    + fcc_attach is a quad option (Closes: 478861)
    + Content-Description header not included in reply (Closes: 500766)
    + imap_sync_mailbox fix for a segfault (Closes: 516364)
    + better threading support with particular Message-ID's (Closes: 520735)
    + no crash on IMAP folder refresh (Closes: 528465)
    + undisclosed-recipients not passed in the envelope (Closes: 529090)
  * debian/patches/series:
    + commented all references to upstream/*, they should be included in 1.5.20
    + removed debian-specific/529838-gnutls-autoconf.patch, ditto
    + removed misc/manpage-typos.patch, ditto
    + modified misc/hyphen-as-minus.patch, a big part was integrated upstream
    + features/trash-folder: do not reupload messages to $trash if IMAP is used (Closes: #448241)
    + added misc/hg.pmdef.debugtime, see upstream #3263
  * debian/control: added DM-Upload-Allowed: yes

 -- Antonio Radici <[email protected]>  Sun, 14 Jun 2009 20:53:18 +0100

mutt (1.5.19-4) unstable; urgency=low

  * debian/rules:
    + disable tokyocabinet as backend so it won't be used (Closes: 530670)
    + enable gpgme support (Closes: 263443)
  * debian/control:
    + added pkg-config to Build-Depends
    + add libgpgme11-dev to Build-Depends and libgpgme11 to Depends
  * patches/debian-specific/529838-gnutls-autoconf.patch:
    + pkg-config to detect gnutls rather than libgnutls-config (Closes: 529838)
  * patches/upstream/530661-mandatory-doubledash.patch
    + document the mandatory usage of -- with the -a option (Closes: 530661)
  * patches/features/sensible_browser_position
    + mutt does not segfault when the last mailbox is removed (Closes: 439387)
  * patches/upstream/375530-index-weirdness.patch
    + fix index weirdness if mailbox is emptied (Closes: 375530)
  * patches/upstream/493719-segfault-imap-close.patch
    + IMAP: only close socket when not already disconnected (Closes: 493719)
  * patches/upstream/514960-certificate-insecure-algorithm.patch
    + allow certs generated with insecure algorithms if they are in cache (Closes: 514960)
  * patches/misc/manpage-typos.patch
    + fixes some typos in the manpage (Closes: 428017)
  * patches/upstream/524420-segfault-reconnect-sasl.patch
    + sasl, mutt segfaults on reconnect to IMAPS server (Closes: 524420)
  * patches/upstream/350957-postponed-to-bcc.patch
    + display bcc for postponed message if there is no To (Closes: 350957)
  * patches/upstream/502628-attach_charset-doc.patch
    + doc update: clarify what attach_charset does (Closes: 502628)
  * patches/upstream/504530-stunnel-account_hook-doc.patch
    + doc update: mention account-hook in the docs for $tunnel (Closes: 504530)
  * patches/upstream/530887-dovecot-imap.patch
    + fixes two problems with subdirs on dovecot (Closes: 530671, 530887)

 -- Antonio Radici <[email protected]>  Tue, 26 May 2009 23:42:51 +0100

mutt (1.5.19-3) unstable; urgency=low

  * debian/control:
    + Xs- removed from VCS headers
    + removed a duplicate "priority" in the binary package
    + Section: debug for mutt-dbg
    + debhelper dependency updated to support dh_lintian
    + Standards-Version bumped to 3.8.1
    + widened mutt-dbg extended description
  * debian/compat: bumped to 7
  * debian/patches
    + added a small description to all patches missing it
    + the following patches were refreshed against upstream/1.5.19:
      * features/{ifdef,maildir-mtime,xtitles,trash-folder,purge-message}
      * features-old/patch-1.5.4.vk.pgp_verbose_mime
      * debian-specific/{Md.etc_mailname_gethostbyname.diff, correct_docdir_in_manpage.diff, assumed_charset-compat}
      * mutt-patched/*
    + mutt-patched/sidebar: added the new sidebar patch for 1.5.19
    + misc/hyphen-as-minus: sub hyphen with minus in the mutt manpages to make lintian happy
    + misc/smime_keys-manpage.patch: add a missing manpage (Closes: 528672)
  * debian/rules
    + re-enabled building of mutt-patched for 1.5.19
    + replacing the deprecated "dh_clean -k" with dh_prep
  * debian/mutt-patched.lintian-overrides: mutt can be w/out manpage
  * debian/mutt.lintian-overrides: excluding arch-dep-package-has-big-usr-share because there are many locales
  * debian/mutt.preinst: added "set -e" to abort if there are errors
  * debian/clean: remove aclocal.m4 so it will not appear in the .diff.gz

 -- Antonio Radici <[email protected]>  Sun, 24 May 2009 17:24:18 +0100

mutt (1.5.19-2) experimental; urgency=low

  * Recommends: libsasl2-modules. Technically, we depend on libsasl2-2 which already recommends this package, but not having it installed just confuses too many users.
  * Use upstream's smime.rc file, hereby fixing S/MIME encryption. (Closes: #315319)
  * Grab two patches from upstream that should also go into lenny:
    + Always sort inode list for accessing header cache. (Closes: #508988)
    + Delete partially downloaded files in message cache. (Closes: #500016)
  * Add Antonio Radici to Uploaders. Thanks for the BTS triaging!

 -- Christoph Berg <[email protected]>  Thu, 05 Feb 2009 23:26:41 +0100

mutt (1.5.19-1) experimental; urgency=low

  * New upstream version.
    + Header weeding changed in default config; now we ignore * and unignore from: subject to cc date x-mailer x-url user-agent. (Mutt: #286)
    + $move now defaults to "no" instead of "ask-no".
  * Upstream dropped changelog.old, so do we.
  * Temporarily disable building mutt-patched until an updated sidebar patch is available.

 -- Christoph Berg <[email protected]>  Thu, 15 Jan 2009 23:47:29 +0100

mutt (1.5.18-3) unstable; urgency=low

  * Pull patch from upstream to fix multipart decoding. (Closes: #489283)
  * Add example sidebar config, thanks Stefano Zacchiroli. (Closes: #460452)
  * (Finally) compile with native Kerberos GSSAPI support. (Closes: #469483)
  * Add a switch in debian/rules to make building mutt-patched configurable.
 -- Christoph Berg <[email protected]>  Sun, 20 Jul 2008 01:35:03 +0200

mutt (1.5.18-2) unstable; urgency=low

  * Updated sidebar patch, does not display (NULL) anymore. (Closes: #483151)
  * Install reportbug script to inform us about the status of installed mutt packages.
  * Use dh_lintian (prefix with '-' so we do not need to bump the DH level).
  * Register mutt as message/rfc822 application in /etc/mailcap. (Closes: #474539)
  * Refresh some patches to get rid of -p0 in series file.
  * Bump Standards-Version; add debian/README.source.
  * Switch Maintainer and Uploader as suggested by Dato.

 -- Christoph Berg <[email protected]>  Thu, 12 Jun 2008 23:53:46 +0200

mutt (1.5.18-1) unstable; urgency=low

  * New upstream version.
    + Query menu format is configurable. (Closes: #66096, Mutt: #170)
    + Quote attachment filenames starting with '='. (Closes: #351890, Mutt: #1719)
    + Mention that References: and Date: cannot be changed in editor. (Closes: #191850, Mutt: #1234).
  * Refreshing patches from upstream:
    + compressed-folders.
    + sidebar. (Closes: #470657)
  * Update doc-base section.

 -- Christoph Berg <[email protected]>  Sat, 24 May 2008 19:36:44 +0200

mutt (1.5.17+20080114-1) unstable; urgency=low

  * New upstream snapshot (hg 130aa0517251), and this time build a proper orig.tar.gz tarball.
    + Fixes message corruption/duplication. (Closes: #459739)

 -- Christoph Berg <[email protected]>  Mon, 14 Jan 2008 23:26:14 +0100

mutt (1.5.17-2) unstable; urgency=low

  * Build a mutt-patched package to apply the sidebar patch. Thanks to Dato who had the right idea for the necessary debian/rules magic during the recent debian-qa meeting in Extremadura. (Closes: #277637)
  * Build a mutt-dbg package, and bump DH level to 5.
  * Grab current hg tip from upstream (68a9c3e74f9a).
    + Fixes "mailto:" URL parsing. (Closes: #426148, #426158, #446016, Mutt: #2968, #2980)
    + 'set folder= =' won't segfault. (Closes: #448728)
    + Improve DSN docs. (Closes: #436228)
  * Bump Standards-Version, add Homepage field.
 -- Christoph Berg <[email protected]>  Tue, 01 Jan 2008 20:00:33 +0100

mutt (1.5.17-1) unstable; urgency=low

  [ Adeodato Simó ]
  * Move the packaging back to Bazaar, adjust X-VCS-* accordingly.

  [ Christoph Berg ]
  * Mention libsasl2-modules-gssapi-mit in README.Debian. (Closes: #433425)
  * Call autoreconf at build time, drop the autotools-update patch.
  * Update menu file, add lintian override file.
  * Refresh patches.
  * New upstream version:
    + fix segfaults with single byte 8-bit characters in index_format. (Closes: #420598, Mutt: #2882)
    + properly render subject headers with encoded linefeeds. (Closes: #264014, Mutt: #1810)
    + only calls gnutls_error_is_fatal when gnutls_record_recv returns a negative value. (Closes: #439775, Mutt: #2954)
    + Large file support for mutt_pretty_size(). (Closes: #352478, #416555, Mutt: #2191)
    + Do not consider empty pipes for filtering in format strings. (Closes: #447340)

 -- Christoph Berg <[email protected]>  Sat, 03 Nov 2007 23:00:04 +0100

mutt (1.5.16-3) unstable; urgency=medium

  * Fix the maildir-mtime patch change in 1.5.14+cvs20070403-1 that broke new mail message count in IMAP folders. (Closes: #421468, #428734, #433275)

 -- Adeodato Simó <[email protected]>  Thu, 19 Jul 2007 23:41:02 +0200

mutt (1.5.16-2) unstable; urgency=low

  * Finally a new unstable version :)
  * Disable gpgme backend again, it needs two "optional" libs we do not want to pull into "standard" now, and it is still somewhat buggy. Reopens: #263443.
  * Use gdbm instead of bdb for the cache files.
  * Enable sensible_browser_position patch.

 -- Christoph Berg <[email protected]>  Thu, 28 Jun 2007 21:58:47 +0200

mutt (1.5.16-1) experimental; urgency=low

  * New upstream version.
  * compressed-folders: grab updated patch, thanks Roland.

 -- Christoph Berg <[email protected]>  Thu, 14 Jun 2007 10:54:56 +0200

mutt (1.5.15+20070608-1) experimental; urgency=low

  * Muttrc.head: Temporarily set pipe_decode in the \cb urlview macro. Closes: #423640.
  * Apply patch by [email protected] to strdup strings when sorting. Mutt: #2515, Closes: #196545.

 -- Christoph Berg <[email protected]>  Fri, 08 Jun 2007 11:19:08 +0200

mutt (1.5.15+20070515-1) experimental; urgency=low

  * New snapshot.
    + Removed hardcoded pager progress indicator and add %P format code to $pager_status which contains the same information. Mutt: #2087, Closes: #259145.
  * $smime_verify_opaque_command: fallback to -noverify. Mutt: #2428, Closes: #420014.

 -- Christoph Berg <[email protected]>  Thu, 17 May 2007 14:15:48 +0200

mutt (1.5.15+20070412-1) experimental; urgency=low

  * New snapshot:
    + Avoid altering the argument to mutt_complete() when completion fails. Mutt: #2871, Closes: #367405
    + Allow reply-hook to use ~h when replying from the index. Mutt: #2866, Closes: #362919
    + Exit with a nonzero value if sending a message in batch mode fails. Mutt: #2709, Closes: #273137
    + Make mutt more posixly-correct. Mutt: #1615, Closes: #204904

 -- Christoph Berg <[email protected]>  Thu, 12 Apr 2007 17:04:05 +0000

mutt (1.5.14+cvs20070403-1) experimental; urgency=low

  * set ssl_ca_certificates_file="/etc/ssl/certs/ca-certificates.crt".
  * Use /etc/ssl/certs/ca-certificates.crt as smime_ca_location if there is none in ~/.smime/ (Closes: #255653).
  * New snapshot:
    + Make mutt_edit_file display error if editor return is non-zero (Closes: #209244).
    + Use ~/.muttrc as the default alias_file if no user muttrc exists (Closes: #226500).
    + Reset list.name before each list response in folder browser (Mutt: #2444, Closes: #377783).
    + Fix segfault when trying to read header cache if cwd does not exist (Mutt: #2714, Closes: #387560).
    + Make message cache write to temporary location until file is complete (Closes: #394383).
    + Use RECENT for first mailbox check if header cache check fails (Closes: #372512).
  * Patches:
    + maildir-mtime: refreshed.
 -- Christoph Berg <[email protected]>  Tue, 3 Apr 2007 20:54:35 +0200

mutt (1.5.14+cvs20070321-1) experimental; urgency=low

  * Move source package to.
  * debian/control: Add XS-Vcs fields.
  * New snapshot:
    + More space for the "help" string (Closes: #415277).
    + --buffy-size is a config option, $check_mbox_size (Closes: #379472).
  * Patches:
    + gpg.rc: upstream ships without absolute paths, our patch is much simpler now.
    + compressed-folders: refreshed.
  * Mention /etc/Muttrc defaults in documentation (Closes: #388667).
  * debian-ldap-query: Support middle names (Closes: #415653).

 -- Christoph Berg <[email protected]>  Wed, 21 Mar 2007 21:54:08 +0100

mutt (1.5.14+cvs20070315-1) experimental; urgency=low

  * New upstream snapshot (now from mercurial).
    + send_charset supports charset-hook'd charsets (Closes: #152444).
    + Regex for color patterns can be > 256 chars long (Closes: #229801).
    + Reduces massive strcat use (Closes: #290701).
    + Uses realpath of folders in the cache (Closes: #298121).
    + Wraps help correctly on utf-8 terminals (Closes: #328921).
    + Fixes typos in muttrc.5 (Closes: #366413).
    + Requery IMAP capabilities after login (Closes: #384076).
    + Various mutt.1 updates (Closes: #332803, #355912, #366413, #394256).
    + The key binding documentation is now auto-generated, thereby documenting some missing functions (Closes: #413144).
    + Previously fixed: IMAP hangs (Closes: #413715).
  * Split up Muttrc into separate files in /etc/Muttrc.d/.
  * charset.rc: iconv-hooks for some commonly misused charsets (Closes: #402027).
  * Add compatibility alias file_charset for attach_charset (got renamed when the assumed-charset patch went upstream).
  * Patches:
    + compressed-folders: synced with upstream.
    + compressed-folders.ranty-fix: removed, went upstream.
  * Packaging:
    + Use quilt.make.
    + Move patchlist sorting into patchlist.sh.

 -- Christoph Berg <[email protected]>  Thu, 15 Mar 2007 14:11:31 +0100

mutt (1.5.14+cvs20070301-1) experimental; urgency=low

  * New upstream snapshot.
    Highlights:
    + Now features ESMTP support, yay!
    + PKA support via gpgme.
    + Ability to save history.
  * Enable the gpgme backend (Closes: #263443).
  * Move mail-transport-agent from Depends to Recommends (Closes: #356297).
  * /etc/Muttrc:
    + Do not unset write_bcc (Closes: #304718).
    + Do not unset use_from and use_domain (Closes: #283311, #398699).
    + Add quotes for compressed folder hooks (Closes: #238034).
    + mime_lookup application/octet-stream.
  * Patches:
    + assumed-charset: removed, applied upstream.
    + xtitles: Removed a comment on the default of xterm_set_titles
      (mentioned in #366413).
  * colors.angdraug: Fix spelling (Closes: #295241).
  * gpg.rc: add full path for pgpewrap (Closes: #396207).
  * Update copyright holders.

 -- Christoph Berg <[email protected]>  Thu, 1 Mar 2007 22:48:53 +0100

mutt (1.5.13+cvs20070215-1) experimental; urgency=low

  * Update to a CVS snapshot:
    Closes: #47284: newlines/spaces are removed from custom multiple
      header lines
    Closes: #397858: /usr/bin/mutt_dotlock: off-by-one error in
      mutt_dotlock.c
    Closes: #400831: logic error in mutt-1.5.13/account.c
    Closes: #404916: sort-mailbox by spam tag score sorting strangeness
    Closes: #410678: crash when IMAP server skips messages during a FETCH
      without a cast.
  * Patches:
    + Reshuffle patches to move autotools-needing/updating to the front.
    + compressed-folders: refreshed.
    + autotools-update: updated.
    + tempfile-race, thread_pattern_in_UPDATING: removed, included
      upstream.
    New patches:
    + ifdef: test for presence of features, patch by Cedric Duval.
    + trash-folder, purge-message: trash folder support, also by Cedric
      Duval (Closes: #263204).
    Patches shipped but not applied by default:
    + chdir: change working directory.
    + indexcolor: color index columns.
    + w3mface: display X-Face headers using w3mimgdisplay.
  * Create /etc/Muttrc.d/ (Closes: #391961).
  * Make sure reldate.h is found while building the docs.
 -- Christoph Berg <[email protected]>  Fri, 16 Feb 2007 02:04:35 +0100

mutt (1.5.13-2) experimental; urgency=low

  * Adding myself as uploader, thanks Dato.
  * debian/rules:
    + Actually support DEB_BUILD_OPTIONS=noopt.
    + Do not touch stamp-h.in, touch PATCHES in clean.
  * Patches:
    + Moved xtitles to features/ and fixed a segfault (Closes: #365683).

 -- Christoph Berg <[email protected]>  Mon, 12 Feb 2007 18:37:44 +0100

mutt (1.5.13-1.1etch1) stable; urgency=low

  * Stable update.
  * Grab patch from upstream: Add imap_close_connection to fully reset
    IMAP state (Closes: #413715).
  * Add myself to Uploaders, thanks Dato.

 -- Christoph Berg <[email protected]>  Tue, 15 May 2007 09:59:24 +0200

mutt (1.5.13-1.1) unstable; urgency=high

  * Non-maintainer upload.
  * Add upstream patch to fix insecure temp file generation
    (Closes: #396104, CVE-2006-5297, CVE-2006-5298).

 -- Christoph Berg <[email protected]>  Tue, 12 Dec 2006 14:49:24 +0100

mutt (1.5.13-1) unstable; urgency=low

  * New upstream release, with a new pattern to match full threads (see
    NEWS.gz).

 -- Adeodato Simó <[email protected]>  Wed, 16 Aug 2006 15:22:53 +0200

mutt (1.5.12-1) unstable; urgency=low

  * New upstream release. Ship upstream's UPDATING file as NEWS.gz in
    /usr/share/doc/mutt.

 -- Adeodato Simó <[email protected]>  Sat, 15 Jul 2006 02:49:50 +0200

mutt (1.5.11+cvs20060403-2) unstable; urgency=high

  * Fix CVE-2006-3242, stack-based buffer overflow when processing an
    overly long namespace from the IMAP server. (Closes: #375828)

 -- Adeodato Simó <[email protected]>  Fri, 7 Jul 2006 15:01:28 +0200

mutt (1.5.11+cvs20060403-1) unstable; urgency=low

  * Update to CVS 2006-04-03, which finally:
    + fixes segfault when changing to an IMAP folder and the mailbox name
      is implicitly INBOX. (Closes: #351337, #353550)

 -- Adeodato Simó <[email protected]>  Tue, 4 Apr 2006 06:10:12 +0200

mutt (1.5.11+cvs20060330-1) unstable; urgency=low

  * Update to CVS 2006-03-30, which fixes the following bugs:
    + IMAP cache works again with Courier.
      (Closes: #351220)
    + does not segfault if external query command output contains spaces.
      (Closes: #351258)
    + does not segfault when replying from the view-attachments menu when
      a reply-hook is in use. (Closes: #352357)
    + the default save location for attachments which specify a path in
      their name is no longer `dirname $attachment`, but $CWD.
      (Closes: #301236)
  * Switch to libdb4.4. (Closes: #355433)

 -- Adeodato Simó <[email protected]>  Mon, 3 Apr 2006 02:41:15 +0200

mutt (1.5.11+cvs20060126-2) unstable; urgency=medium

  * Make imap_idle default to off, since it does not work with dovecot
    from stable, which a lot of people use; upstream will make this change
    before 1.5.12. (Closes: #351263, #354902)
  * Ignore DKIM-Signature by default in /etc/Muttrc. (Closes: #354907)

 -- Adeodato Simó <[email protected]>  Thu, 2 Mar 2006 22:42:34 +0100

mutt (1.5.11+cvs20060126-1) unstable; urgency=low

  * Update to CVS 2006-01-26; since this includes a huge diff between
    ChangeLog and ChangeLog.old (moved entries), prepare a new tarball.
    Some worth-mentioning changes:
    + Mutt can now expand its own variables as it does with envvars; for
      example, it's now possible to put something like this into a hook:
      set sendmail="mysmtp -f $from".
    + Support for user-defined variables starting with my_; environment
      variables take precedence, and expansion does not occur in
      shell-escape.
    + Pattern group support, as explained (only!) in: <>
    + Lots of improvements in the IMAP code, including sync speed-ups
      (through pipelining), hcache stuff (e.g. $imap_cachedir), and things
      like $imap_idle and support for the "old" flag in IMAP folders.
  * Rework the package build system to fit personal preference:
    + debhelperize debian/rules a bit more.
    + drop dbs in favor of quilt; reorganize patches a bit.
      (NOTE: quilt means that dropping patches into debian/patches is no
      longer enough to get them applied; they must be listed in the
      debian/patches/series file.)
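The variable expansion and my_ features described in the 1.5.11+cvs20060126-1 entry above can be sketched in muttrc terms. Only the `set sendmail="mysmtp -f $from"` line comes from the entry itself; the my_sig_dir name and the folder-hook pattern are illustrative:

```
# User-defined variable (my_ prefix, per the entry above); name is made up.
set my_sig_dir="~/.mutt/signatures"

# Mutt expands its own variables the way it expands environment variables:
set signature="$my_sig_dir/default"

# The hook example given in the entry, rewriting the envelope sender
# per folder; "work" is an illustrative folder pattern.
folder-hook work 'set sendmail="mysmtp -f $from"'
```

Note the single quotes around the hook body: they delay expansion of $from until the hook fires, rather than at muttrc parse time.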
  * Adjustments to debian/control:
    + use '*' for the bulleted list, instead of 'o'.
    + build-depend on gawk instead of mawk, to have "nextfile".
    + drop conflicts and replaces on packages that are not in woody.
  * Updated debian/copyright.
  * Added debian/watch.

 -- Adeodato Simó <[email protected]>  Thu, 2 Feb 2006 05:12:18 +0100

mutt (1.5.11-5) unstable; urgency=medium

  * Unbreak Mutt in Turkish locales (tr_TR): include patch from CVS to use
    the proper strcmp function in several places. Upstream bug #2144,
    reported in both BTSes by Recai Oktas. (Closes: #343655)
  * Apply patch from Nik A. Melchior to fix formatting problem in
    muttrc(5). (Closes: #343030)

 -- Adeodato Simó <[email protected]>  Fri, 23 Dec 2005 23:18:44 +0100

mutt (1.5.11-4) unstable; urgency=low

  * Update to CVS 2005-11-24 to fix the following bug (yay):
    + does not fail to open messages that contain base64-encoded inline
      PGP bits (signature, encrypted hunk, or a key). (Closes: #340116)
    Also, do not report success decrypting an inline PGP message when
    decryption actually failed.
  * Again, update my e-mail address in debian/control, yada yada.

 -- Adeodato Simó <[email protected]>  Fri, 25 Nov 2005 02:50:20 +0100

mutt (1.5.11-3) unstable; urgency=low

  * Update to CVS 2005-11-01, with the following worth-mentioning goodies
    (among others):
    + full read/write >2 GB mbox support.
    + attachment counting patch merged upstream (%X in index_format);
      check the "Attachment Searching and Counting" section in the manual
      for more information.
    And the following bugs are fixed as well:
    + S/MIME keys can be selected from the menu. (Closes: #318470)
    + clarified description of pop_checkinterval. (Closes: #320642)
  * Update my e-mail address in debian/control.
 -- Adeodato Simó <[email protected]>  Fri, 11 Nov 2005 02:16:11 +0100

mutt (1.5.11-2) unstable; urgency=low (but fixes critical bug not in testing)

  * The fix for coping with mboxes bigger than 2 GB introduced a bug
    affecting at least powerpc (but not i386) which made mutt write
    Content-Length: 0 in mboxes due to an un-updated %ld format specifier.
    This caused mail to be lost on the next mbox write. Apply a patch
    quickly provided by upstream (thanks, Brendan Cully!) that makes mutt
    use the right format specifier. (Closes: #330474)
  * Update the compressed-folders patch to 1.5.11, which includes
    documentation in XML format.

 -- Adeodato Simó <[email protected]>  Fri, 30 Sep 2005 01:15:28 +0200

mutt (1.5.11-1) unstable; urgency=low

  * New upstream release, fixing the following bugs:
    + ~h can match folded headers. (Closes: #319654)
    + implements progress indication when uploading messages to an imap
      folder. (Closes: #228713)
    + the limit pattern is properly displayed when zero messages matched.
      (Closes: #242398)
    A further CVS pull (2005-09-24) fixes the following bugs:
    + can open mboxes bigger than 2 GB. (Closes: #296940)
    + does not require GPG_TTY to be set in order to accept using the
      GnuPG agent: it'll set the variable itself if not present.
      (Closes: #316388)
    + does not segfault when replying to a message if content_type is
      unset. (Closes: #329306)
    + does not segfault on IMAP folder completion. (Closes: #329442)
    Packaging changes needed:
    + Upstream documentation now comes in XML, so changed the
      Build-Dependency on linuxdoc-tools-text and groff to xsltproc,
      docbook-xml, docbook-xsl and links. Added patch
      debian/patches/doc_build_adjustments.diff to force the use of links
      instead of lynx for generating the text version of the manual, and
      to not ignore errors from links and xsltproc.
    + Changed --with-sasl2 to its new name --with-sasl in debian/rules;
      removed an extra hunk on debian/patches/patch-1.5.4.Md.sasl2-1arc.
    + Rediffed 080_Md.Muttrc, removed patch.asp.fix-bug-266493.1 (applied
      upstream).
    + Temporarily removed documentation from the compressed-folders patch,
      until upstream reacts to the move to XML.
  * Build against libgnutls12 (build-depend on libgnutls-dev instead of
    libgnutls11-dev). (Closes: #323279)
  * Remove spurious dash in the argument to -encrypt in
    smime_encrypt_command. (Closes: #315319)

 -- Adeodato Simó <[email protected]>  Sun, 25 Sep 2005 23:11:59 +0200

mutt (1.5.10-1) unstable; urgency=low

  * New upstream release, fixing the following bugs:
    + does not store the gpg passphrase when signing or decrypting has
      failed, since that probably means it was wrong. (Closes: #132548)
    + does not fail to delete attachments in unencrypted mails.
      (Closes: #302500)
  * New functionality and otherwise noticeable news:
    + $imap_check_subscribed variable to add the list of subscribed
      folders to the buffy list.
    + $braille_friendly variable to make Mutt more usable for blind users.
    + $imap_login variable in case the login name on the IMAP server is
      different from the name of the account ($imap_user).
    + -D command line option to dump the current configuration, after all
      initialization files have been read.
    + $imap_force_ssl gone.
    + an empty limit is now interpreted as a request to cancel the current
      limit.
  * Patches:
    + 080_Md.paths_mutt.man: adjusted; the upstream build system now puts
      the right paths in mutt.1 using @bindir@. Install mutt.1 instead of
      mutt.man in debian/rules.
    + 080_Md.Muttrc: don't set menu_move_off in /etc/Muttrc since the
      compile-time default is now what we want (pre-1.5.7 compatible).
    + edit-threads, current-shortcut, incomplete-mbyte: removed,
      integrated upstream.
    + maildir-mtime: s/if/ifdef/ to get it to apply.
    + compressed-folders: updated to 1.5.10.
  * Upstream now builds "complete" documentation, i.e., for all features
    whether enabled or not. Disable that for Debian.
    [patch.docs-match-config.h]
  * Build-Depend on autotools-dev and use updated config.{guess,sub} at
    build time to fix FTBFS on GNU/kFreeBSD. (Closes: #302735)
  * Update Standards-Version to 3.6.2 (no changes required).
  * Set myself as the maintainer, and remove Marco from Uploaders as
    agreed with him.

 -- Adeodato Simó <[email protected]>  Mon, 15 Aug 2005 15:51:55 +0200

mutt (1.5.9-2sarge2) stable-security; urgency=high

  * Fix buffer overflow in IMAP parsing code.

 -- Moritz Muehlenhoff <[email protected]>  Wed, 28 Jun 2006 17:12:05 +0000

mutt (1.5.9-2sarge1) stable; urgency=low

  * For attachments marked for deletion after the message is sent, don't
    remove them if the message is finally cancelled, or if the attachments
    are dropped from the message prior to sending. (Closes: #332972)

 -- Adeodato Simó <[email protected]>  Tue, 31 Jan 2006 01:23:28 +0100

mutt (1.5.9-2) unstable; urgency=high

  * Added a missing Build-Depend on mawk. (Closes: #310039)
  * Updated the Swedish translation.

 -- Adeodato Simó <[email protected]>  Sun, 22 May 2005 17:29:25 +0200

mutt (1.5.9-1) unstable; urgency=medium

  * New upstream release, though the previous upload already included most
    of it because of the CVS pull. Do another one now (2005-04-03),
    including the following bits from 1.5.10:
    + several translation updates (de, id, nl, pl, ru).
    + a patch by Daniel Jacobowitz to synchronise message flags before
      moving messages. (Closes: #163616)
  * Also, the header cache patch is now fully integrated upstream, so drop
    it.
  * Don't set pipe_default in debian/patches/080_Md.Muttrc, and stick to
    upstream's default (unset). (Closes: #300830)
  * Updated the compressed folders patch to version 1.5.9.
  * Updated patch 080_Md.Muttrc to restore the old behaviour of the index.

 -- Adeodato Simó <[email protected]>  Sun, 03 Apr 2005 20:08:39 +0200

mutt (1.5.8-1) unstable; urgency=low

  * New upstream release, with a CVS pull to get all the translation
    updates that happen right after a release.
    New features worth mentioning:
    + the PGP auto decode patch by Derek Martin has been accepted
      upstream, so inline PGP messages are now automatically
      verified/decrypted if $pgp_auto_decode is set. (Closes: #269699)
    + IDN decoding can be disabled by unsetting $use_idn (set by default).
    + new hook 'send2-hook', which gets executed each time there is a
      change in a message being composed. This makes it possible, for
      example, to match against recipients added manually after writing
      the mail, which wasn't possible with 'send-hook' alone.
    + Christoph Berg's menu_context patch is also included. Check the
      $menu_context and $menu_move_off variables.
  * This version also includes the following fixes:
    + message flags are not lost after editing a message.
      (Closes: #275060)
    + IMAP folder paths ending with the delimiter are trimmed so that they
      don't fail to open with some servers, e.g. Courier.
      (Closes: #277665)
    + the correct charset is used when signing a forwarded message.
      (Closes: #295528)
    + correctly forget the S/MIME passphrase. (Closes: #300516)
  * Explicitly pass --enable-inodesort to ./configure, since upstream has
    disabled it by default in this version.
  * Updated the compressed folders patch to version 1.5.8.
  * Dropped the adjust_line and adjust_edited_file patches from
    extra-patches/mutt-ja-compat, incorporated upstream. Renamed
    mutt-ja-compat to assumed-charset, since that's the only patch that
    remains.
  * Lots of patches in the Debian package have been applied upstream;
    drop them (16 in total). Worth mentioning is the gnutls patch. The
    maildir_inode_sort patch has been adopted too, with the static
    functions no longer being nested, which closes: #287744 (FTBFS with
    gcc-4.0).
  * Implemented a conf.d style directory for mutt: other packages or local
    admins may now drop configuration snippets into /etc/Muttrc.d/*.rc and
    have them sourced at the end of the default Muttrc. (Closes: #285574)
  * Updated the header cache patch to version 28.
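A minimal sketch of the conf.d mechanism described in the 1.5.8-1 entry above. Only the /etc/Muttrc.d/*.rc location and the "sourced at the end of the default Muttrc" behaviour come from this changelog; the snippet filename and its contents are illustrative:

```
# /etc/Muttrc.d/local.rc -- hypothetical site-wide snippet. Any *.rc file
# dropped into /etc/Muttrc.d/ is sourced after the stock /etc/Muttrc, so
# directives here override the packaged defaults.
set record="+sent"          # illustrative local override
ignore X-Spam-Level         # illustrative site-wide header filtering
```

Because these snippets run last, a local admin can override packaged defaults without editing /etc/Muttrc itself, and package upgrades leave the local settings untouched.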
    The size of this patch has been drastically reduced, since the generic
    code and the IMAP support have been incorporated upstream.
  * Use mixmaster-filter by default. (Closes: #299060)

 -- Adeodato Simó <[email protected]>  Fri, 25 Mar 2005 21:55:52 +0100

mutt (1.5.6-20040907+3) unstable; urgency=high

  * Upload targeted at sarge to include some must-have fixes.
  * Include small patch to fix imap-related segfaults on ia64, due to a
    buffer length being declared as int instead of size_t in the gnutls
    patch. Thanks to David Mosberger for spotting the problem.
    (Closes: #234783, #285001)
    [New file: upstream/extra-patches/gnutls.59.size_t-fix]
  * Include (finally!) a patch that really prevents decrypt-save from
    deleting the message if the supplied password was wrong.
    (Closes: #275188)
    [New file: upstream/patches/decrypt-save_non-empty-output]
  * Updated the header-cache patch to version 25, which includes a fix to
    make it possible for hcache to talk to broken Lotus IMAP servers.
    (Closes: #282451)
    [Modified file: upstream/extra-patches/header-cache]
  * The mutt BTS has been closed due to excessive spam in their debbugs
    installation. Included the patch that substitutes flea and flea.1 with
    a note that states so.
    [New file: upstream/patches/empty-muttbug, removed file:
    debian/patches/080_Md.muttbug]
  * Removed /usr/share/bug/mutt/presubj, now useless.
  * Added Domainkey-Signature to the list of ignored headers.
    [Modified file: debian/patches/080_Md.Muttrc]

 -- Adeodato Simó <[email protected]>  Fri, 28 Jan 2005 19:02:58 +0100

mutt (1.5.6-20040907+2) unstable; urgency=medium

  * A "Let's procrastinate some important stuff and fix a bunch of mutt
    bugs instead" release.
  * Include small patch to fix the Swedish translation, which made it
    impossible to turn off pgp signing and/or encrypting.
    (Closes: #281265)
    [New file: upstream/patches/i18n-sv-fix.diff]
  * Make the default Muttrc work out of the box for people using
    gnupg-agent.
    Wrote and applied a one-line patch to make the %?p? conditional escape
    work correctly; patch forwarded upstream. (Closes: #277646)
    [New file: debian/patches/patch.asp.%p-escape-agent-compatible.1]
  * Relocate the definition of the USE_GNUTLS macro, so that it gets
    passed to the documentation build process too. Otherwise, options that
    end up in the manual wouldn't match those that are really compiled in.
    (Closes: #278124)
    [Modified files: debian/rules, upstream/extra-patches/gnutls.debian]
  * Honour /etc/alternatives/pager in the muttbug script.
    (Closes: #275448)
    [Modified file: debian/patches/080_Md.muttbug]
  * Include patch by Nicolas François <[email protected]> to fix
    a typo in muttrc.5. (Closes: #272579)
    [New file: debian/patches/patch.nf.fix-bug-272579.1]
  * Updated the (formerly unmaintained) current-shortcut patch with a new
    version by Christoph Berg <[email protected]>. Now the actual Fcc used
    will be shown instead of '^' when you folder-hook . 'set record="^"'.
    [Modified file: upstream/extra-patches/current-shortcut]
  * Updated the header-cache patch to version 24.

 -- Adeodato Simó <[email protected]>  Wed, 17 Nov 2004 15:17:14 +0100

mutt (1.5.6-20040907+1) unstable; urgency=low

  * Updated to CVS snapshot 20040907 (includes updated ja translation).
  * The maildir-mtime patch is now NOT enabled by default; you need to set
    the maildir_mtime variable in your ~/.muttrc. This change became
    necessary because people with large maildirs over NFS experienced a
    large performance impact with the mtime patch enabled.
    (Closes: #253261)
  * Updated the header cache patch to version 21. This fixes a file
    descriptor leak which could cause problems for people who keep their
    mutt open for a long time. Also includes support for a per-folder
    cache: setting $header_cache to a directory will enable it, and you
    should experience some performance gains.
  * Included the current shortcut patch.
    It's completely unintrusive and allows you to specify ^ as a shortcut
    for the current folder (e.g., in an Fcc). (Closes: #253104)
  * Updated the autotools stuff. Include in it also stuff from patches, so
    that --enable-foo options can be used in debian/rules. Put files
    directly in extra/autotools and cp from there in debian/rules instead
    of using a patch file (which was too big due to the
    automake1.4/autoconf2.13 => 1.8/2.50 migration).
  * debian/rules:
    + use --enable-compressed --enable-hcache --without-gdbm in configure.
    + copy autotools files from extra/autotools when unpacking, and
      emulate the 000_Md.config.h.in patch.
  * debian/patches/:
    + removed 000_Md.config.h.in, no longer needed since config.h.in is
      now overwritten from extra/autotools.
    + added patch.asp.fix-bug-266493.1, which makes mutt not wait for a
      keypress to handle SIGWINCH in certain situations.
      (Closes: #123943, #266493)

 -- Adeodato Simó <[email protected]>  Tue, 21 Sep 2004 01:39:22 +0200

mutt (1.5.6-20040818+1) unstable; urgency=low

  * The post-Sarge era officially begins for mutt. This mostly means that
    the patch inclusion policy will untighten a bit.
  * Added the maildir/imap header caching patch by Thomas Glanzmann, see:
    <>. For a quick start, read the documentation for the $header_cache
    variable. (Closes: #242762, #255475)
  * Added the maildir-mtime patch by Dale Woolridge, see <>. This patch
    should make users who use maildir and have $sort_browser=reverse-date
    happy. (Closes: #253261)
  * Reorganized patches location:
    + patches not written by the mutt maintainers are now in the
      upstream/extra-patches directory.
    + each patch in that directory now contains a preamble listing:
      - the patch author
      - the patch home page
      - last time the patch was updated
      - the exact URL to the patch file
      - applied changes, if any
    + all preambles are available in the doc/README.Patches file, and
      debian/copyright now points to this file too.
  * Other changes in upstream/extra-patches/:
    + updated the edit-threads patch.
    + updated the compressed-folders patch.
  * Updated to CVS snapshot 20040818:
    + various memory leaks spotted and fixed.
    + several translations updated: pl, sv, fr, id, nl, de, ca, cs. The
      Czech translation addresses a badly chosen shortcut in the crypt
      menu, and thus closes: #140639. The updated German translation
      closes: #265120.
    + fix some UI flaws in the new PGP and S/MIME menus which could easily
      make the user send in clear mail which was meant to be signed and/or
      encrypted (the (e)ncrypt, (s)ign and (b)oth commands were toggles).
      Also renamed the (f)orget action to (c)lear for newbies' benefit;
      still accept the (f) key for long-time users' benefit.
    + make mutt not hang if STARTTLS fails to complete the SSL handshake.
    + try all methods in $imap_authenticators when one of them fails;
      previously mutt would give up upon the first of them failing.
  * debian/:
    + scripts/vars: add upstream/extra-patches to SRC_PATCH_DIR.
    + control: Build-Depend on libdb4.2-dev for the header-cache patch.
    + rules: call scripts/patch-preamble to create the README.Patches
      file.
    + copyright: add pointer to README.Patches, where patch authors are
      listed.
  * debian/patches/:
    + updated 000_VERSION to reflect the new snapshot date.
    + removed obsolete #defines from 000_Md.config.h.in. Added
      #include "debian-config.h" there, which is used by
      upstream/extra-patches/*.debian.
    + removed patch.asp.fix-bug-260578.1, included upstream.
  * debian/rules: sort the PATCHES file, which is printed by `mutt -v`.

 -- Adeodato Simó <[email protected]>  Sat, 21 Aug 2004 20:53:39 +0200

mutt (1.5.6-20040803+1) unstable; urgency=low

  * Updated to CVS snapshot 20040803:
    + fixes the code that closed #213412.
  * debian/control:
    + Rebuilt against gnutls11. (Closes: #263067, #263625)
    + List myself in the Uploaders field.
  * debian/patches/:
    + updated 000_VERSION to reflect the new snapshot date.
    + removed patch-1.5.6.tt.compat.1.asp.fix.1, which was not meant to be
      included in the last upload.
      (Closes: #261951)
    + update the gnutls patch to include TLS support for POP3 as well.
      Patch provided by Alexander Neumann <[email protected]>.
      (Closes: #260638)

 -- Adeodato Simó <[email protected]>  Thu, 5 Aug 2004 18:13:33 +0200

mutt (1.5.6-20040722+1) unstable; urgency=high

  * Updated to CVS snapshot 20040722:
    + bugfixes:
      - does not segfault when chdir'ing to a directory without read
        permission. (Closes: #237426)
      - does not segfault when applying check-traditional-pgp to multiple
        messages. (Closes: #257277)
      - uses the right From address when composing a new message from the
        pager and $reverse_name is set. (Closes: #249870)
      - the initial IMAP header download no longer takes quadratic time in
        the number of messages. (Closes: #213412)
    + new functionality:
      - support for spam-scoring filters (see §3.24 of the fine manual).
      - $include_onlyfirst: controls whether or not Mutt includes only the
        first attachment of the message you are replying to.
      - $hide_thread_subject: when unset, mutt will show the subject for
        all messages in a thread.
      - uses the List-Post header when doing a list-reply. (Initial
        RFC 2369 support, closes: #49048)
  * debian/patches/:
    + updated 000_VERSION to reflect the new snapshot date.
    + updated the following patches to apply cleanly:
      - 001_patch-1.5.4.rr.compressed.1 [Makefile.in]
      - 002_patch-1.5.5.1.admcd.gnutls.59 [globals.h]
      - patch-1.5.3.cd.edit_threads.9.2-1arc [mutt.h]
    + updated patch-1.5.6.tt.compat.1 (does not close #259145).
    + removed patch-1.5.3.Md.gpg-agent, issue fixed upstream.
    + added patch-1.5.6.helmersson.incomplete-mbyte.2 by Anders Helmersson
      to avoid passing incomplete multibyte sequences to regexec(), which
      can cause segfaults due to libc6 Bug#261135.
      (Closes: #254314, #260623) [Yes, this is a sequel, not a déjà vu.]
    + added patch.asp.fix-bug-{210679,254294,258621,260578}.1, which fix
      several minor issues unaddressed by upstream for some time. All
      patches submitted upstream.
      (Closes: #210679, #254294, #258621, #260578)
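The spam-scoring filters mentioned in the 20040722 entry above are driven by mutt's `spam` command; a minimal sketch, assuming a SpamAssassin-style X-Spam-Status header (the header name, regex, and label are illustrative, not from this changelog):

```
# Extract a numeric score from the spam header into mutt's spam tag;
# %1 is the first capture group of the regex.
spam "X-Spam-Status: Yes, score=([0-9.]+)" "SA %1"

# The resulting tag can then drive sorting, as in the sort-mailbox by
# spam tag score mentioned elsewhere in this changelog:
set sort=spam
```

The tag string ("SA %1" here) is what mutt displays and compares when sorting, so putting the numeric score in it is what makes score-based ordering work.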
  * debian/rules:
    + be robust to any locale by exporting LC_ALL=C. (Closes: #253048)
    + touch some autotools files to prevent them from being built again.
  * Using "urgency=high" at the maintainer's discretion.

 -- Adeodato Simó <[email protected]>  Wed, 21 Jul 2004 19:31:55 +0200

mutt (1.5.6-20040523+2) unstable; urgency=low

  * Renamed patch-1.5.5.1.tt.compat-fix to patch-1.5.5.1.tt.compat.2-fix.
    (Closes: #253048)

 -- Marco d'Itri <[email protected]>  Mon, 7 Jun 2004 00:45:40 +0200

mutt (1.5.6-20040523+1) unstable; urgency=low

  * This release is based on the work of Adeodato Simó <[email protected]>.
  * Updated to CVS snapshot 20040523:
    + mutt now includes better support for inline/traditional signing and
      encrypting. See for details. (Closes: #190204)
    + sourcing the output of a command works again. (Closes: #247007)
    + corrected .PP usage in flea.1 and mbox.5. (Closes: #237827)
    + do not eat chars on rfc822-valid From: address lines (i.e., when
      there is no space between the colon and the address; fixes the
      already archived #226759).
  * debian/patches/:
    + added 000_VERSION: reflect the CVS snapshot date.
    + removed 081_nbrown.auth_imap_plain: included upstream.
    + added patch-1.5.5.1.tt.compat-fix: introduces a fix for the compat
      patch which prevents mutt from segfaulting when there is binary junk
      in headers. (Closes: #233315, #247366, #249588)
    + removed patch-1.5.4.Md.gpg_by_keyid-1arc: no longer needed.
      (Closes: #250108, #248994)
    + added 004_ranty.fix-compressed: written some time ago by Manuel
      Estrada to prevent mutt from deleting a message if saving to a
      compressed folder fails. In memoriam. (Closes: #210429)
    + added patch-1.5.6.asp.pgp_getkeys: set pgp_getkeys_command in
      Muttrc. Currently commented out because of #172960.
      (Closes: #237691)
    + 080_Md.Muttrc: make colors respect the terminal scheme.
      (Closes: #86393)
  * debian/rules: pass --enable-debug to configure.
    (Closes: #198073)

 -- Marco d'Itri <[email protected]>  Sun, 6 Jun 2004 01:17:14 +0200

mutt (1.5.6-1) unstable; urgency=low

  * New upstream release. Old configuration files may become incompatible;
    see NEWS.Debian.gz for details.
  * Added debian/NEWS.
  * debian/patches/:
    - updated patch-1.5.4.Md.gpg_by_keyid-1arc - pgp_export_command uses
      %r instead of %k. (Closes: #223960)
    - updated 002_patch-1.5.5.1.admcd.gnutls.59 - mutt can save the
      server's certificate. (Closes: #228607, #234623, #236886)
    - removed 000_VERSION (not needed in this release).
    - removed 004_patch-1.5.5.1.Md.libgnutls10 - included in
      002_patch-1.5.5.1.admcd.gnutls.59.
    - added 081_nbrown.auth_imap_plain - mutt can authenticate itself to
      an imap server via sasl2 using the PLAIN method, thanks to Neil
      Brown. (Closes: #206078, #214758)
    - removed patch-1.5.4.helmersson.incomplete_multibyte because it's
      broken. (Closes: #244549)
    - removed 003_patch-1.4.admcd.gnutlsdlopen.53; now the mutt binary
      will be linked with libgnutls. (Closes: #228279, #228323, #230287)
    - updated patch-1.3.27.bse.xtitles.1 with
      patch-1.5.5.1.nt.xtitles.3.ab.1.
  * Update the default MTA: Depend on exim4 | mail-transport-agent.
    (Closes: #228560)
  * This release is the work of Artur R. Czechowski.

 -- Marco d'Itri <[email protected]>  Sun, 2 May 2004 18:02:10 +0200

mutt (1.5.5.1-20040112+1) unstable; urgency=medium

  * Build-Depend on libidn11-dev instead of libidn9-dev and
    libgnutls10-dev instead of libgnutls7-dev.
    (Closes: #226910, #227426)
  * Updated to CVS snapshot 20040112:
    + fixed manual (Closes: #226936)
    + fixed pgp_retainable_sigs (Closes: #226424)
  * New patch patch-1.4.1.tt.compat.1-ter, a part of mutt-ja which allows
    configuring a default character set to be used for non-MIME messages.
    (Closes: #222191)
  * Added a note about temporary files to README.Debian.
    (Closes: #141143, #222125)
  * New patch 100_arc.smime_descripitive_messages which adds some error
    messages to smime_keys.pl. (Closes: #226696)
  * Conflicts/Replaces mutt-utf8.
  * New patch 004_patch-1.5.5.1.Md.libgnutls10.
  * This release is the work of Artur R. Czechowski.

 -- Marco d'Itri <[email protected]>  Sat, 17 Jan 2004 17:50:16 +0100

mutt (1.5.5.1-20040105+1) unstable; urgency=low

  * New upstream release:
    + fixed an infinite loop during attachment saving.
      (Closes: #219314, #224654)
  * Updated to CVS snapshot 20040105:
    + fixed a lot of crashes/coredumps.
      (Closes: #141214, #141468, #192341, #197322, #219499, #223663)
    + honor Reply-To while generating Mail-Followup-To headers.
      (Closes: #182526)
    + improved colouring of the thread tree. (Closes: #219594)
    + fixed retrieving mail via preauth imap over ssh. (Closes: #209025)
  * debian/rules: added an extra-clean target to delete *.orig and *.rej
    files when debian/sys-build.mk make-diff is called.
  * Modified patches to apply without conflicts:
    + 001_patch-1.5.4.rr.compressed.1
    + 003_patch-1.4.admcd.gnutlsbuild.53
  * Suggests libgnutls7 instead of libgnutls5 (Closes: #217716); updated
    README.Debian.
  * Added README.SMIME. (Closes: #222903)
  * smime_keys moved to /usr/bin. (Closes: #222905)
  * Suggests ca-certificates and openssl.
  * Killed mutt-utf8.
  * This release is the work of Artur R. Czechowski.

 -- Marco d'Itri <[email protected]>  Tue, 6 Jan 2004 15:38:58 +0100

mutt (1.5.4+20031024-1) unstable; urgency=medium

  * New CVS snapshot.
    (Closes: #133021, #207242, #208430, #213007, #213917)
  * Fix FTBFS bug in debian/control. (Closes: #216508)
  * Compiled with libgnutls7. (Closes: #209722)
  * New patch patch-1.5.4.fw.maildir_inode_sort. (Closes: #212664)
  * New patch patch-1.5.4.helmersson.incomplete_multibyte.
    (Closes: #187991, #188605)
  * New patch patch-1.5.4.Md.gpg_by_keyid. (Closes: #210668)
  * Removed README.NFS, as it talks about 2.0 and 2.2 kernels.
  * Removed the reference to $AGENT_SOCKET from README.Debian.
    (Closes: #215412)

 -- Marco d'Itri <[email protected]>  Fri, 24 Oct 2003 15:06:01 +0200

mutt (1.5.4+20030913-1) unstable; urgency=medium

  * New CVS snapshot.
    (Closes: #210354, #210423)
  * Added patch-1.5.3.vk.pgp_verbose_mime. (Closes: #201306)

 -- Marco d'Itri <[email protected]>  Sat, 13 Sep 2003 15:59:49 +0200

mutt (1.5.4+20030817-1) unstable; urgency=medium

  * New CVS snapshot, packaged with the great help of [email protected]
    (co-maintainer). (Closes: #138966)
  * Switched to libsasl2. (Closes: #201210)
  * Removed the hcache patch.
    (Closes: #189999, #194843, #196832, #196978, #197182, #199052)
  * Updated the gnutls patch to 002_patch-1.5.4.admcd.gnutls.56.
    (Closes: #196117)
  * Removed libdb4.0-dev from Build-Depends. (Closes: #204015)
  * /etc/Muttrc: call gpg without a path. (Closes: #193756)
  * locales upgraded to Recommended status.
  * Added an icon. (Closes: #188726)
  * Make pgp_import_command nonverbose. (Closes: #195310)

 -- Marco d'Itri <[email protected]>  Sun, 17 Aug 2003 15:56:55 +0200

mutt (1.5.4-1) unstable; urgency=high

  * New upstream release. (Closes: #142266, #148858, #169740, #178563)
  * Fixes a remotely exploitable buffer overflow in the IMAP code.
    (Core Security Technologies Advisory CORE-2003-03-04-02.)
  * Removed BUFFY_SIZE support again, too many people complained. Added a
    note to README.Debian.
  * Provides: imap-client. (Closes: #183351)
  * Always include the whole certificate chain in S/MIME mail to comply
    with the spirit of RFC 2315. (Closes: #182477)
  * Applied ME's header caching patch (provided by Nicolas Bougues).
  * Fixed a pgpewrap core dump. (Closes: #170666)

 -- Marco d'Itri <[email protected]>  Thu, 20 Mar 2003 15:06:13 +0100

mutt (1.5.3-3) unstable; urgency=medium

  * Recompiled to fix missing dependency information. (Closes: #181167)

 -- Marco d'Itri <[email protected]>  Sun, 16 Feb 2003 11:46:46 +0100

mutt (1.5.3-2) unstable; urgency=medium

  * Compiled with BUFFY_SIZE. (Closes: #179970)
  * Stop generating escape codes in the manual. (Closes: #167006)
  * Set the default editor as specified by policy.
(Closes: #177245) -- Marco d'Itri <[email protected]> Fri, 14 Feb 2003 19:13:15 +0100 mutt (1.5.3-1) unstable; urgency=low * New upstream release (Closes: #112865, #165397, #168907, #169018). * Suggests ispell | aspell (Closes: #175324). * Add & and = to the URL coloring regex (Closes: #169646). * Removed message-hook for clearsigned PGP messages (Closes: #168275). * Removed obsolete patch-1.4.0.dw.pgp-traditional.2. * By popular (?) demand, use /etc/mailname and only if it fails fall back to gethostbyname(3) and then to uname(2) (Closes: #167549). -- Marco d'Itri <[email protected]> Tue, 14 Jan 2003 18:31:01 +0100 mutt (1.4.0-5) unstable; urgency=medium * Try a different strategy to find the FQDN. Stop using /etc/mailname. (Closes: #166060). * Suggests: mixmaster (Closes: #166360). * Updated copyright file (Closes: #163783). * *Really* enable --enable-imap-edit-threads (Closes: #162352). * Ignore Microsoft Thread-* header (Closes: #161473). * Do not ask the password if gpg-agent or quintuple-agent are active (Closes: #161508). * Applied patch-1.5-me_editor.1 (Closes: #72318). * Applied patch-1.5.1.tlr.mailboxes-overflow.1 (Closes: #153751). * Applied patch-1.4-me.regex_doc.1 (Closes: #162550). * Removed patch-1.3.15.sw.pgp-outlook.1 in favour of patch-1.4.0.dw.pgp-traditional.2. All "traditional" PGP messages now use text/plain. -- Marco d'Itri <[email protected]> Tue, 29 Oct 2002 14:38:52 +0100 mutt (1.4.0-4) unstable; urgency=medium * Recompile with newer libgnutls5 (Closes: #160114). * Updated contrib/colors.angdraug. * Updated 010_patch-1.4.admcd.gnutls.55. -- Marco d'Itri <[email protected]> Tue, 17 Sep 2002 16:07:50 +0200 mutt (1.4.0-3) unstable; urgency=medium * *Really* compile mutt with --enable-imap-edit-threads (Closes: #154864). * *Really* merge the Maildirs new messages patch (Closes: #151582). * Recompile with libgnutls5 (Closes: #152787, #157120). 
-- Marco d'Itri <[email protected]> Sun, 18 Aug 2002 16:59:06 +0200 mutt (1.4.0-2) unstable; urgency=low * Update GNUTLS patch and link against gnutls4. (Closes: #152141). * Link mutt-utf8 against libncursesw5 instead of slang1a-utf8. * set pipe_decode=yes in /etc/Muttrc (Closes: #151460). * Compile mutt with --enable-imap-edit-threads (Closes: #150274). * Make debian/scripts/lib work even if $CDPATH is set (Closes: #152678). * Merged patch to fix spurious new message notifications with Maildirs (Closes: #151582). -- Marco d'Itri <[email protected]> Fri, 12 Jul 2002 03:34:48 +0200 mutt (1.4.0-1) unstable; urgency=medium * New upstream release (Closes: #146889, #149348, #148558). * Updated patch edit_threads.9.2 (Closes: #146451). * Priority of mutt-utf8 changed from optional to extra. -- Marco d'Itri <[email protected]> Mon, 10 Jun 2002 21:26:08 +0200 mutt (1.3.28-2) unstable; urgency=medium * Moved into main. * Suggests: libgcrypt1, gnutls3 (Closes: #140970). * Added patch from CVS to fix crash with UTF-8 locales (Closes: #126336). -- Marco d'Itri <[email protected]> Sat, 6 Apr 2002 18:35:01 +0200 mutt (1.3.28-1) unstable; urgency=medium * New upstream release (Closes: #138200). * Updated GNUTLS patch. * Make flea(1) work even if bug(1) is not installed (Closes: #138273). * Added /usr/share/bug/mutt/presubj file (Closes: #138274). -- Marco d'Itri <[email protected]> Tue, 19 Mar 2002 14:42:39 +0100 mutt (1.3.27-5) unstable; urgency=medium * Added build dependancy on linuxdoc-tools-text (Closes: #137890). * Use sensible-pager instead of zless to read the manual (Closes: #136206). * Added example colors scheme contributed by Dmitry Borodaenko. -- Marco d'Itri <[email protected]> Mon, 11 Mar 2002 19:40:20 +0100 mutt (1.3.27-4) unstable; urgency=high * Recompiled against new slang packages (Closes: #133255, #134115). * Added patch-1.3.27.me.aliasdups.1 (Closes: #133559). * Updated GNUTLS patch. * Added missing flea(1) symlink (Closes: #133646). 
-- Marco d'Itri <[email protected]> Sun, 17 Feb 2002 03:27:50 +0100 mutt (1.3.27-3) unstable; urgency=high * Recompiled against new slang and packages (Closes: #132644). * Title bar is changed on more xterm variants (Closes: #131178). * Removed obsolete advice about shred from README.Debian (Closes: #132786). -- Marco d'Itri <[email protected]> Sun, 10 Feb 2002 13:26:20 +0100 mutt (1.3.27-2) unstable; urgency=high * Updated GNUTLS patch (Closes: #131386, #131424). * Added patch-1.3.27.me.listfrom_doc.1 (Closes: #45706). * Added missing -fPIC (Closes: #131209). * Added missing commas in charset.c (Closes: #130481). -- Marco d'Itri <[email protected]> Thu, 31 Jan 2002 15:23:34 +0100 mutt (1.3.27-1) unstable; urgency=medium * New upstream release. * Small fix to pt_BR translation (Closes: #130416). * Hide gpg status messages (Closes: #127519). -- Marco d'Itri <[email protected]> Tue, 22 Jan 2002 20:18:21 +0100 mutt (1.3.26-1) unstable; urgency=medium * New upstream release. * Removed patch-1.3.25.chip.fast-limited-threads because the patched code has changed. -- Marco d'Itri <[email protected]> Sat, 19 Jan 2002 19:30:13 +0100 mutt (1.3.25-5) unstable; urgency=high * Added build dependancy on groff (Closes: #129605, #129698). -- Marco d'Itri <[email protected]> Thu, 17 Jan 2002 19:03:29 +0100 mutt (1.3.25-4) unstable; urgency=high * Forced build dependancy on newer gnutls-dev (Closes: #129283). * Updated GNUTLS patch (Closes: #129291). -- Marco d'Itri <[email protected]> Wed, 16 Jan 2002 19:47:45 +0100 mutt (1.3.25-3) unstable; urgency=medium * Force documentation rebuilding (Closes: #128758, #129045). * TLS patch update from Andrew McDonald (Closes: #125924, #128718, #129039). * Suggests gnutls0. * Fixed typo in manual (Closes: #128836). * Added patch-1.3.25.chip.fast-limited-threads, which is supposed to speed up limited threaded display (Closes: #128174). * Added patch-1.3.25.tlr.attach_overwrite.1 (Closes: #126122). 
-- Marco d'Itri <[email protected]> Sun, 13 Jan 2002 17:18:21 +0100 mutt (1.3.25-2) unstable; urgency=low * Force dependancy on slang1-utf8 (Closes: #127938). * Enable again optimization (Closes: #127653, #127682). * Enable {open,close,append}-hook by default again (Closes: #127894). -- Marco d'Itri <[email protected]> Sun, 6 Jan 2002 19:35:57 +0100 mutt (1.3.25-1) unstable; urgency=high * New upstream release, fixes remotely exploitable buffer overflow. * Fixed mutt_dotlock permissions (Closes: #127264, #127265, #127278, #127308, #127312, #127357). -- Marco d'Itri <[email protected]> Wed, 2 Jan 2002 18:49:54 +0100 mutt (1.3.24-3) unstable; urgency=medium * A new mutt-utf8 package is generated (Closes: #99898). * Added patch-1.3.24.de.new_threads.3 to fix segfaults while sorting thread (Closes: #123658). * Added --status-fd option to gpg command line and working $pgp_good_sign variable to default /etc/Muttrc (see #110414 and #123273 for details). * Updated thread editing patch to patch-1.3.24.cd.edit_threads.8. * Added patch-1.3.24.appoct.2 to lookup application/octet-stream files extensions in mime.types. * Fixed coredump processing in flea(1) (Closes: #123081). * Removed obsolete contrib/pgp-macros. -- Marco d'Itri <[email protected]> Thu, 27 Dec 2001 03:32:16 +0100 mutt (1.3.24-2) unstable; urgency=medium * Added better GNUTLS code from Andrew McDonald. * Added again threads editing patch. * Enable {open,close,append}-hook by default again (Closes: #115473). * Removed default $pgp_good_sign from Muttrc (related to #110414). -- Marco d'Itri <[email protected]> Wed, 5 Dec 2001 02:26:13 +0100 mutt (1.3.24-1) unstable; urgency=high * New upstream release (Closes: #74484, #98630, #114938). * Added conflict with gnutls < 0.2.9 (Closes: #121645). * Fixed mailspell script (Closes: #120446). * Removed 000_patch-1.3.22.1.cd.edit_threads-5.1. * Added 000_patch-1.3.23.1.ametzler.pgp_good_sign (Closes: #110414). 
-- Marco d'Itri <[email protected]> Fri, 30 Nov 2001 22:52:42 +0100 mutt (1.3.23-4) unstable; urgency=high * Added Build Dependandcies on libgcrypt-dev and zlib1g-dev (Closes: #119309). * Added a comment to README.Debian about stunnel (Closes: #115421). * Removed safefilter script (Closes: #118630). * Added the broken-outlook-pgp-clearsigning patch (Closes: #120090). -- Marco d'Itri <[email protected]> Fri, 30 Nov 2001 20:49:02 +0100 mutt (1.3.23-3) unstable; urgency=high * Moved to non-US (Closes: #118294). -- Marco d'Itri <[email protected]> Mon, 5 Nov 2001 12:06:43 +0100 mutt (1.3.23-2) unstable; urgency=medium * Added SSL support using GNUTLS. WARNING: requires the CVS library! * Added unpack target to debian/rules (Closes: #115765). * Fixed account-hook (Closes: #117125). * Added default compression hooks to /etc/Muttrc (Closes: #115473). -- Marco d'Itri <[email protected]> Sun, 4 Nov 2001 13:59:25 +0100 mutt (1.3.23-1) unstable; urgency=medium * New upstream release (Closes: #106864, #106229, #110414) (Closes: #69135, #89195, #92651, #97319, #98627). * Added README.NFS note from the liblockfile maintainer (Closes: #96788). * Fixed gpg operations (Closes: #113458, #114163, #114938). * Fixed compressed folder patch (Closes: #114199). * Highlight https URLs too (Closes: #113791). -- Marco d'Itri <[email protected]> Wed, 10 Oct 2001 02:30:15 +0200 mutt (1.3.22-2) unstable; urgency=medium * Renamed dotlock.1 (Closes: #112545). * Fixed the threads editing patch (Closes: #112554). -- Marco d'Itri <[email protected]> Wed, 19 Sep 2001 12:18:47 +0200 mutt (1.3.22-1) unstable; urgency=low * New upstream release (Closes: ). * Old bugs fixed in the last NMU (Closes: #29884, #101075, #101484) (Closes: #101890, #101890, #102439, #106863, #104469, #105391) (Closes: #110262). * Fixed bashism in vars.build (Closes: #104137). * Updated libssl package name in README.Debian (Closes: #96564). * Added a note about temporary files in README.Debian (Closes: #89277). 
* Added a note about locales in README.Debian (Closes: #105545). * Included threads editing patch (Closes: #111291). * Fixed paths in mutt.man (Closes: #110462). -- Marco d'Itri <[email protected]> Sat, 15 Sep 2001 17:33:51 +0200 mutt (1.3.19-1) unstable; urgency=low * New upstream release (Closes: #81155, #93830, #95426, #100298, #101075). (Closes: #101451). * Suggests locales instead of i18ndata (Closes: #98814). -- Marco d'Itri <[email protected]> Tue, 22 May 2001 13:42:34 +0200 mutt (1.3.18-1) unstable; urgency=low * New upstream release (Closes: #81155, #92234, #90400, #92860, #95426) (Closes: #88358, #92846, #92847, #91979, #97658, #98014). -- Marco d'Itri <[email protected]> Tue, 22 May 2001 13:42:34 +0200 mutt (1.3.17-1) unstable; urgency=high * New upstream release (Closes: #89011, #82372, #86228, #83187). -- Marco d'Itri <[email protected]> Sun, 1 Apr 2001 22:09:27 +0200 mutt (1.3.15-2) unstable; urgency=high * Built again without linking libiconv. -- Marco d'Itri <[email protected]> Wed, 14 Feb 2001 23:02:23 +0100 mutt (1.3.15-1) unstable; urgency=low * New upstream release (Closes: #81873, #81155, #81640, #76922). * Added more headers to the ignore list. * Removed dh_suidregister (Closes: #84826). * Removed US-ASCII charset-hook (Closes: #81240). * Commented all "color header" lines in the default /etc/Muttrc. * Fixed default colors on white background xterm. -- Marco d'Itri <[email protected]> Mon, 12 Feb 2001 23:34:41 +0100 mutt (1.3.12-2) unstable; urgency=low * Fixed typo in muttbug (Closes: #79230). * Added menu hints (Closes: #80067). * Compiled with libsasl (Closes: #78746). -- Marco d'Itri <[email protected]> Sun, 24 Dec 2000 12:18:23 +0100 mutt (1.3.12-1) experimental; urgency=low * Packaged the development tree (Closes: #60459, #73050, #75885, #77856). * Documented the fact that pgp_encryptself is gone and will not be back (Closes: #47833, #69221). 
-- Marco d'Itri <[email protected]> Tue, 28 Nov 2000 02:25:50 +0100 mutt (1.2.5-5) unstable; urgency=low * Added support for libc6 2.2 compressed charmaps (Closes: #74975). * Updated README.Debian about SSL support (Closes: #75895). * Added the compressed folder patch (Closes: #76224). * Removed some colorization (Closes: #77976). -- Marco d'Itri <[email protected]> Mon, 27 Nov 2000 18:46:25 +0100 mutt (1.2.5-4) stable unstable; urgency=high * Typo in debian-ldap-query prevented it from running (#74575). -- Marco d'Itri <[email protected]> Sun, 29 Oct 2000 13:09:42 +0100 mutt (1.2.5-3) stable unstable; urgency=high * ====> Removed non GPL-compatible SHA code (patch-1.3.9.tlr.sha.1). <==== * ====> Disabled linking with the GPL-incompatible openssl library. <==== * Update debian-ldap-query for new libnet-ldap-perl package (Closes: #74575). -- Marco d'Itri <[email protected]> Thu, 12 Oct 2000 10:28:10 +0200 mutt (1.2.5-2) unstable; urgency=low * Disallow suspend by default if mutt is the session leader (Closes: #64169). * Fixed the check for optional crypto libraries (Closes: #68518). * Added build dependency to debhelper (Closes: #68401). * Added some info to README.NFS (Closes: #71163). * Added ispell wrapper. -- Marco d'Itri <[email protected]> Thu, 21 Sep 2000 19:43:57 +0200 mutt (1.2.5-1) unstable; urgency=low * New upstream release (Closes: #67885, #65999, #67420, #65638, #62580). (Closes: #67420). * Fixed charmaps handling for autobuilders (Closes: #67609).\ * Added debian-ldap-query script. * mutt_imap_*_.so and pgp* moved to /usr/lib/mutt/. * Added safefilter contributed script. -- Marco d'Itri <[email protected]> Tue, 1 Aug 2000 18:22:58 +0200 mutt (1.2-1) unstable; urgency=low * New upstream release. * Fixed manual.txt.gz path in F1 macro (Closes: #63384). -- Marco d'Itri <[email protected]> Sun, 7 May 2000 19:51:09 +0200 mutt (1.1.12-1) unstable; urgency=low * New upstream release. * Removed duplicated install-docs commands (Closes: #60788). 
-- Marco d'Itri <[email protected]> Mon, 20 Mar 2000 22:14:53 +0100 mutt (1.1.9-1) unstable; urgency=low * New upstream release (Closes: #60139, #57965, #60139). * pgp_sign_micalg=pgp-sha1 added to the default Muttrc (Closes: #59765). -- Marco d'Itri <[email protected]> Wed, 15 Mar 2000 10:57:49 +0100 mutt (1.1.5-1) unstable; urgency=low * New upstream release (Closes: #56011, #58703). -- Marco d'Itri <[email protected]> Sat, 26 Feb 2000 15:13:09 +0100 mutt (1.1.4-1) unstable; urgency=low * New upstream release. -- Marco d'Itri <[email protected]> Wed, 16 Feb 2000 01:14:31 +0100 mutt (1.1.3-1) unstable; urgency=low * New upstream release (Closes: #57373, #57155, #57533, #56970). * Fixed Pine.rc (Closes: #57647). -- Marco d'Itri <[email protected]> Wed, 16 Feb 2000 01:14:20 +0100 mutt (1.1.2-2) unstable; urgency=low * README.UPDATE installed in the documentation directory (Closes: 56970). * Patched for run time loading of SSL and Kerberos libraries. -- Marco d'Itri <[email protected]> Thu, 10 Feb 2000 23:56:21 +0100 mutt (1.1.2-1) unstable; urgency=low * New upstream release (Closes: #30639, #28727). * Fixed color problems in xterms. -- Marco d'Itri <[email protected]> Tue, 1 Feb 2000 12:31:26 +0100 mutt (1.1.1-1) experimental; urgency=low * My christmas present: packaged the development tree. -- Marco d'Itri <[email protected]> Sun, 19 Dec 1999 12:08:53 +0100
https://sources.debian.org/src/mutt/2.0.2-1/debian/changelog/
Event Details

jQuery can be as simple or as complex as you want it to be. The course will cover more advanced jQuery concepts, including:

- Working with Event Handlers
- Creating your own Events and Event namespaces
- Working with forms, including validation
- Using AJAX to make your pages responsive and interactive
- AJAX Deferreds and Callbacks

Learning materials will be provided for the class, and access to recorded audio/visual screencasts will be available shortly after the class.

Requirements: Familiarity with jQuery, including working with the DOM, or successful completion of my jQuery 101 class. You may want to read Professional JavaScript for Web Developers before the meetup.

The class will run from 9am to 4pm. There will be an hour-long lunch (BYOB). It will be held in downtown Oakland, CA near City Center. The location is above the 12th St. BART station. There are plenty of food places in the area.

When & Where
MightyMinnow
1440 Broadway Suite 711
Oakland, CA 94612

Saturday, April 21, 2012 from 9:00 AM to 4:00 PM (PDT)
https://www.eventbrite.com/e/jquery-201-events-and-ajax-tickets-3284809955
Update: A late bug fix for 1.3.7 introduced another bug (broken keyboard shortcuts for some items under the Geometry menu). The 1.3.7a release has a fix for this.

Development version 1.3.6 has been released. This release adds support for the Cal3D format (import and export), multiple bone joints in MS3D format, and better MD3_PATH handling for MD3 models. Many bug fixes are also included. - Kevin

Development version 1.3.5 has been released.

Development version 1.3.4 has been released. This release includes i18n support and an automatic vertex bone joint assignment feature. Several MD2 import/export bugs have been fixed.

Stable version 1.2.3 has been released.

Development version 1.3.2 has been released. This release includes multiple bone joint assignments, free-floating vertices, cylinder and sphere texture mapping, and other texture mapping improvements. Bug fixes include MD3 import and export changes for texture paths and non-animated models.

Stable version 1.2.2 has been released. This release fixes an MD2 texture coordinate bug for non-square textures.

Stable version 1.2.1 has been released. This release includes bug fixes for Quake MD2 export, an animation looping bug, and various minor fixes.

Development version 1.3.1 has been released. This release includes Quake MD3 import/export, boolean operations, direct edit of translation and rotation keyframes, and tinting selected faces. This release includes snap to vertex and snap to grid, direct vertex coordinate editing, edge turn, edge divide, and initial support for bolt points.
The animation window has been converted into a dockable toolbar. New model filters include an import filter for COB and an import/export filter for DXF. Support for 64-bit architectures has been added but is not thoroughly tested. Many other minor feature enhancements and bug fixes are also included in this release. For more details see the ChangeLog.

This release fixes a minor OBJ export bug on Win32 and corrects documentation in a comment for the MM3D filter. For more details see the ChangeLog. There is also a User Survey on the website for anyone who is interested in providing feedback. - Kevin

Stable Candidate 1.2.0-beta2 Released

This release fixes JPEG support for Qt 4.x and a parent joint endian bug on Mac OS X. Some minor documentation errors have been corrected. For more details see the ChangeLog.

Stable Candidate 1.2.0-beta1 has been released. This stable release adds support for Windows XP/2000 and Macintosh OS X, canvas background images, 3D preview for materials, and many new tools and commands. For more details see the ChangeLog.

This release adds alpha blending (model edit mode only); buttons for viewport scroll, pan, and zoom; and an improved materials dialog with 3D preview. The Win32 port was migrated to Qt 4.0. Many other minor feature enhancements and bug fixes are also included in this release. For more details see the ChangeLog.

This release includes an OS X port, a "select connected mesh" tool, and the option to show the selected mesh region in 3D viewports. Bug fixes include various Win32 UI issues, a memory leak in normal calculation, and elimination of some GCC 4.0 warnings.

This release adds copy and paste between models, loading and saving of model meta data, and plugin versioning (do not load plugins for the wrong version of mm3d). This release prevents the stable version from loading development plugins. If you are not using development and stable versions of mm3d at the same time you do not need to upgrade.
All Misfit Model 3D plugins on the official site have been updated. The update is due to changes in the way plugins will be initialized in the next development release of mm3d (plugins that do not report which version of mm3d they were compiled with will not be initialized). Other than this, the plugins behave identically to their previously released versions. New stable and development versions of Misfit Model 3D will be released in the near future to take advantage of the plugin loading changes.

The wrong DLL was included with the Win32 installer. There is a 1.1.5a release that fixes this problem. Oops, sorry about that. - Kevin

Added a torus tool and the ability to export animations as a series of PNG or JPEG images. Fixed a bug with OBJ import creating unnecessary groups for some textured models. The initial Qt4 port has begun. Misfit Model 3D will compile with Qt4 but it does not run correctly. If you have Qt4 installed you may need to pass arguments to ./configure to explicitly build using Qt3. For more details see the ChangeLog.

Added support for Win32 platforms. The Win32 version requires the kde-cygwin build of Qt and MinGW. A binary installer is available. Plugins are disabled. For more details see the ChangeLog.

There is a Win32 beta version of Misfit Model 3D available for download. This release was made possible because of a patch sent by Georg Hennig. I integrated his patch, fixed some other win32-related issues, and did some testing to verify that the program was relatively stable. It is ready for wider use. Try it out and let me know if you find any bugs. The Windows version is available at.

Development version 1.1.3 has been released. New features were added, including the option to draw textured polygons in orthographic projections, a command to flatten the current selection on an axis, and the ability to change bone joint display. Added limited texture support to the Lightwave import filter (lwo2 uv maps only).
A normal transformation bug with skeletal animations was fixed. This release breaks binary compatibility with plugins from previous versions. For more details see the ChangeLog.

New features were added, including color material support, proper lighting on material previews, and undo and redo for background image changes. Background images are saved and restored (mm3d format only). The OBJ import and export filter was completed. The LWO color material import was completed. Fixed bugs with group normal smoothing for skeletal animations. This release breaks binary compatibility with plugins from previous versions. For more details see the ChangeLog.
http://sourceforge.net/p/misfitmodel3d/mailman/misfitmodel3d-announce/
Can anybody suggest a decent quality MIDI library for me to upload? 6 analog ins, 14 digital ins; note names and CCs immaterial, as I can assign them in the PC DAW.

Yes, use a Leonardo or Micro, and that can simply be used to look like a MIDI input/output device from your computer. Use this:

#include <MIDI_Controller.h>  // Include the library

const uint8_t velocity = 0b1111111; // Maximum velocity (0b1111111 = 0x7F = 127)
const uint8_t channel = 1;          // MIDI channel 1

// Create an array of 14 new instances of the class 'Digital', called 'buttons',
// on pins 0, 1, ..., 13 that send MIDI messages with notes
// 12, 13, ..., 25 on MIDI channel 1, with maximum velocity (127)
Digital buttons[] = {
  { 0, 12, channel, velocity }, // button connected to pin 0, sends MIDI note 12 on channel 1 with velocity 127
  { 1, 13, channel, velocity },
  { 2, 14, channel, velocity },
  { 3, 15, channel, velocity },
  { 4, 16, channel, velocity },
  { 5, 17, channel, velocity },
  { 6, 18, channel, velocity },
  { 7, 19, channel, velocity },
  { 8, 20, channel, velocity },
  { 9, 21, channel, velocity },
  {10, 22, channel, velocity },
  {11, 23, channel, velocity },
  {12, 24, channel, velocity },
  {13, 25, channel, velocity }
};

// Create an array of 6 new instances of the class 'Analog', called 'potentiometers',
// on pins A0, A1, ..., A5 that send MIDI CC messages with controller numbers
// 16, 17, ..., 21 on MIDI channel 1
Analog potentiometers[] = {
  { A0, 16, channel }, // potentiometer connected to pin A0, sends CC #16 on MIDI channel 1
  { A1, 17, channel },
  { A2, 18, channel },
  { A3, 19, channel },
  { A4, 20, channel },
  { A5, 21, channel }
};

void setup() {} // nothing to set up

void loop() {
  // Refresh the buttons and potentiometers (check whether a button's state or a
  // potentiometer's position has changed since last time; if so, send it over MIDI)
  MIDI_Controller.refresh();
}

Am I correct in thinking that the Arduino can be powered from the USB bus? It seems to me that all I need to change is the analog CC numbers? Is it really that simple?
As a complete beginner with no prior knowledge, would it be suitable?

Fair comment, but how and where to purchase? As you said, these clones are sold under the Arduino banner, which is illegal. How does one deduce which is legal or not? Me, I want the quality of a known product. The Teensy is too small for me, methinks! Which Arduino is the easiest to connect to mechanically/electrically? Can a pin/socket arrangement be used, or breadboard headers? How do you connect the Arduino to in/out pins etc.?

Pieter, the example code: is it ready to go? Is there more for me to learn? It seems to me that all I need to change is the analog CC numbers? Is it really that simple?
http://forum.arduino.cc/index.php?PHPSESSID=ch7i7jghdfavcmdh228a3h1gn0&topic=505099.0
Getting started is simple

Grab the Sentry Java SDK:

compile 'io.sentry:sentry-android:VERSION'

Configure your SDK:

import io.sentry.Sentry

object Application {
    fun main(args: Array<String>) {
        Sentry.init("your dsn")
        try {
            runSomething()
        } catch (e: Exception) {
            Sentry.capture(e)
        }
    }
}

That's it! Check out the Sentry for Java documentation for more information.

Get Kotlin error monitoring with complete stack traces

See details like filename and line number so you never have to guess. Filter and group Kotlin exceptions intuitively to eliminate noise. Monitor errors at scale without impacting throughput in production.

Fill in the blanks about Kotlin errors

Expose the important events that led to each Kotlin exception: network requests, debug logs, database queries, past errors. See the full picture of any Kotlin exception:
- "When did the bug occur?"
- "Did misaligned chakras cause the problem?"
- "Where in the checkout process were they?"

Sentry also supports your frontend

Resolve Kotlin errors.
https://sentry.io/for/kotlin/
Can anyone help me with this issue: is there an inbuilt sorting function to sort rows of an Excel sheet with respect to a particular column, using jxl in Java?

I haven't used jxl for a long time (been using a significantly more powerful Excel library instead), but from what I remember that isn't supported. Your best option may be to store all the data in an array, sort the array, and then overwrite the data in the Excel file.

hi, sorting using an array is not feasible in the case of large Excel data; I was trying to take an approach where I can read and update data in a single Excel file. I am able to get the data for switching, but I guess the data is not getting updated for the next element read. Below is the Java code I am using. There should be one Excel file with the name ExtractedData.xls, and the sorting will be done according to the 4th column.
import java.io.File;
import java.io.IOException;
import java.util.Locale;

import org.apache.poi.extractor.ExtractorFactory;
import reader.ReadExcel;
import src.ExtrectingData;

import jxl.Cell;
import jxl.CellType;
import jxl.CellView;
import jxl.Sheet;
import jxl.Workbook;
import jxl.WorkbookSettings;
import jxl.format.UnderlineStyle;
import jxl.read.biff.BiffException;
import jxl.write.*; // WritableWorkbook, WritableSheet, Label, WriteException

public class sortExcel {

    private String inputFile;
    public static WritableWorkbook workbook;
    public static WritableSheet s;
    Sheet sheet;
    static Workbook w;
    Workbook w1;
    private WritableCellFormat timesBoldUnderline;
    private WritableCellFormat times;
    private String outFile;
    private int searchValue = 16576;
    public static int rowcounter = 0;
    public static int rowCount = 1;

    public void read() throws IOException, WriteException {
        // Get the first sheet
        Sheet sheet = w.getSheet(0);
        int j = 3; // sort key: the 4th column
        for (int i = 0; i < sheet.getRows() - 1; i++) {
            // Loop once for each element in each row.
            for (int index = 0; index < sheet.getRows() - 1 - i; index++) {
                Cell cell = sheet.getCell(j, index);
                Cell cell2 = sheet.getCell(j, index + 1);
                System.out.println("I got a number " + cell.getContents());
                System.out.println("I got another number " + cell2.getContents());
                if (Integer.parseInt(cell.getContents()) > Integer.parseInt(cell2.getContents())) {
                    int rowNo = index;
                    for (int k = 1; k < sheet.getColumns(); k++) {
                        Cell cell1 = sheet.getCell(k, rowNo);
                        Cell cell3 = sheet.getCell(k, rowNo + 1);
                        Cell cell23 = sheet.getCell(k, rowNo);
                        System.out.println("before swap: " + cell23.getContents());
                        String data = cell1.getContents();
                        String data2 = cell3.getContents();
                        writeDataSheet(rowNo, k, data2);
                        writeDataSheet(rowNo + 1, k, data);
                        System.out.println("after swap: " + cell23.getContents());
                    }
                }
            }
        }
        writeClose();
        System.out.println("columns " + sheet.getColumns());
        System.out.println("rows " + sheet.getRows());
    }

    private static synchronized void writeDataSheet(int r, int c, String data) throws WriteException {
        Label l = new Label(c, r, data);
        s.addCell(l);
        System.out.println("column " + c);
        System.out.println("row " + r);
    }

    private void writeClose() throws IOException, WriteException {
        workbook.write();
        workbook.close();
    }

    public static void main(String[] args) throws IOException, WriteException, BiffException {
        sortExcel testread = new sortExcel();
        w = Workbook.getWorkbook(new File("ExtractedData.xls"));
        workbook = Workbook.createWorkbook(new File("sorted12.xls"), w);
        s = workbook.getSheet(0);
        testread.read();
    }
}

2 things:
1) No one will be able to help you with your code unless you wrap it in code tags, especially with this much code.
2) I can't guarantee I can help too much. I stopped using jxl because of the overhead (with WritableWorkbooks and crap) and I'm using SmartXLS now (so much easier to use). However, I did used to use jxl, so I can see what I can remember from it if you wrap your code for me.

@aussiemcgr, which API are you using? Does it have a function to sort a .xls file directly?

I use SmartXLS (Java Excel library: Java Excel Components).
Two downsides:
1) There is very little documentation. However, I have used it extensively, enough that I would be able to help you out with anything you want to do.
2) For your use, you would want the Trial Version. This is not practical if you plan on distributing your program, because the Trial Library expires after two or three months. However, for your personal machine and for making programs, you can just redownload the Trial Version when it expires and all is good (that's what I've been doing).

I have not used the sort method the Library provides, so I don't know what to put for "keys", but you could try it if you want to download the library. Here are the API details for that method:

Quote:
sort
public void sort(int row1, int col1, int row2, int col2, boolean byRows, int key1, int key2, int key3)
Sorts a specified range of data by up to three keys. If by rows, each row of data in the range is considered a record. If by columns, each column in the range is considered a record.
Parameters:
row1 - first row.
col1 - first column.
row2 - last row.
col2 - last column.
byRows - if true, data is sorted by row.
key1 - the primary key.
key2 - the second key.
key3 - the last key.
Keys are the number of the row/column; 0 indicates no key.

Tell me if that helps.
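Whichever library ends up being used, the pairwise cell-swapping in the original post can be replaced by reading the rows into memory and sorting them with a comparator, which sidesteps the read/write juggling entirely. A library-agnostic sketch of that idea (the sample rows and the key column are invented for illustration; in jxl the strings would come from sheet.getCell(col, row).getContents()):

```java
import java.util.Arrays;
import java.util.Comparator;

public class SortRows {
    public static void main(String[] args) {
        // Each inner array is one spreadsheet row; column 0 holds the
        // numeric key the thread sorts on (sample data, not from a file).
        String[][] rows = {
            {"30", "carol"},
            {"10", "alice"},
            {"20", "bob"},
        };

        // Sort whole rows at once by the parsed integer key, instead of
        // swapping cells pair-by-pair inside nested loops.
        Arrays.sort(rows, Comparator.comparingInt((String[] r) -> Integer.parseInt(r[0])));

        // After sorting, write each row back out (here: just print it).
        for (String[] r : rows) {
            System.out.println(String.join(" ", r));
        }
    }
}
```

The sorted rows can then be written to the output sheet in a single pass, one writeDataSheet-style call per cell.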
http://www.javaprogrammingforums.com/%20member-introductions/5283-sorting-data-excell-printingthethread.html
How to get the directory path and file name from an absolute path in C on Linux?

For example, with "/foo/bar/baz.txt", it will produce "/foo/bar" and "baz.txt".

You can use the APIs basename and dirname to parse the file name and directory name. A piece of C code:

#include <libgen.h>
#include <string.h>

char* local_file = "/foo/bar/baz.txt";
char* ts1 = strdup(local_file);
char* ts2 = strdup(local_file);

char* dir = dirname(ts1);
char* filename = basename(ts2);

// use dir and filename now
// dir: "/foo/bar"
// filename: "baz.txt"

Note:
- dirname and basename return pointers to null-terminated strings. Do not try to free them.
- There are two different versions of basename: the POSIX version and the GNU version.
- The POSIX versions of dirname and basename may modify the content of the argument. Hence, we need to strdup the local_file.
- The GNU version of basename never modifies its argument.
- There is no GNU version of dirname().

One comment

Hello. I'm afraid you forgot to free the allocated memory.
https://www.systutorials.com/how-to-get-the-directory-path-and-file-name-from-a-absolute-path-in-c-on-linux/
#include <hallo.h>
* Thomas Bushnell BSG [Mon, Feb 20 2006, 09:35:14AM]:

> > No, it will never end. A few people managed to change the definition of
> > freedom which was commonly accepted when I joined the project nine years
> > ago and I do not feel a moral need to support their position.
>
> This is, I believe, unacceptable.
>
> GR's are the way we make decisions as a project. Maintainers are
> expected to abide by those decisions when they do their work.

I know you are not that naive, did you really think the holy license war
against the non-pure-pure-pure-freedom-followers can be won by a single
coup?

Eduard.

--
<youam> btwotch: stable is not testing.
<btwotch> youam, I mix them together. That's not a problem, as long as
unstable doesn't get in, right?
https://lists.debian.org/debian-devel/2006/02/msg00790.html
Joel Bijapurkar wrote:
I have written the following code to create the .class file:

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompilerDemo {"); } } }

This code runs properly and the HelloWorld.class file does get generated. How do I execute this .class file?

Joel Bijapurkar wrote:
theProcess = re.exec(System.getProperty("java.home").toString() + "/bin/java C:/HelloWorld");

Joel Bijapurkar wrote:
The HelloWorld program executes fine through the command prompt.

Joel Bijapurkar wrote:
Thank you Jeff. I tried executing the following command:
theProcess = re.exec("java -cp C:/HelloWorld");
But the file is not executing.

Joanne Neal wrote:
> Joel Bijapurkar wrote:
> Thank you Jeff. I tried executing the following command:
> theProcess = re.exec("java -cp C:/HelloWorld");
> But the file is not executing.
Unfortunately in Jeff's posts a space coincided with a newline. The command Jeff was suggesting is
java -cp C:/ HelloWorld
with a space after the C:/

Tim Moores wrote:
Try using an absolute path (for "java").

Rob Spoor wrote:
Which you can use System.getProperty("java.home") for. Add the bin folder, add the java executable, and you have the full path to the java executable of the currently used JVM.
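Putting the answers together — java.home for the launcher path, -cp pointing at the directory that contains the class file, and the bare class name as the last argument — the command line can be built like this. The C:/ location and the HelloWorld name come from the thread; the actual start() call is left commented out since the class file may not exist on this machine:

```java
import java.io.File;

public class RunClassFile {
    public static void main(String[] args) throws Exception {
        // Path to the java launcher of the currently running JVM
        String javaBin = System.getProperty("java.home")
                + File.separator + "bin" + File.separator + "java";

        // -cp names the DIRECTORY holding HelloWorld.class;
        // the class name itself has no path and no .class extension.
        ProcessBuilder pb = new ProcessBuilder(javaBin, "-cp", "C:/", "HelloWorld");

        // Process p = pb.start();  // uncomment once C:/HelloWorld.class exists
        // p.waitFor();

        // Show the command that would be executed
        System.out.println(String.join(" ", pb.command()));
    }
}
```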
http://www.coderanch.com/t/572152/java/java/execute-java-class-file-programmatically
heey guys.. i have this h.w.. i solved the first part which is based on the second part.. but not sure whether i did it right or not.. can someone check plz.. this is the Q.. i only did the first part!

First create a one dimensional array called numbers that contains the following data (initialize the array with the numbers listed below):
6, 7, -3, -4, 5, -5, 10
Output the array data to a file called data.txt.
==============================================
Then:
1) Calculate how many items there are in the file.
2) Calculate the number of positive numbers and negative numbers.
3) Calculate the sum of the positive numbers only.
4) Get a target from the user to search for; e.g. 7 (as an input from the screen).
5) Sort the array's elements in ascending order and follow exactly the file snapshot next page.
6) Then append the results of 1) to 5) to the same file.
==============================================
now i created the input file and added the numbers to it.. but after running the program.. the numbers didn't appear in the output file.. don't know why.. is something wrong with my code?

#include <iostream>
#include <cstdlib>
#include <fstream>
using namespace std;

int main ()
{
    int NUMBERS [7] = {6,7,-3,-4,5,-5,10};

    //declare file:
    ifstream read_in;
    ofstream data_out;

    //initialize file:
    read_in.open ("read_data.txt");
    data_out.open ("data.txt");

    //check file:
    if (data_out.fail())
        cout<<"Exit data";
    if (read_in.fail())
        cout<<"Exit read_data";

    //use file:
    for (int i=0;i>=6;i++)
    {
        read_in>>NUMBERS [i];
        data_out<<NUMBERS [i];
    }

    //close file:
    read_in.close();
    data_out.close ();

    return 0;
}
https://www.daniweb.com/programming/software-development/threads/196027/need-help-on-h-w-plz
In the Versatile PB manual there's a section, called Memory Map, that includes the absolute addresses of the mapped peripherals. For example, the UART 0, 1 and 2 interfaces are placed at addresses 0x101F1000, 0x101F2000 and 0x101F3000 respectively. Inside the manual, the programmer's model for the UART peripherals indicates the ARM PrimeCell UART (PL011) Technical Reference Manual as reference. In the PL011 manual we can find a detailed description of the UART memory mapped registers. From that description I implemented a C struct that makes the serial ports easy to use. The complete program is the following:

#include <stdint.h>

typedef volatile struct {
    uint32_t DR;
    uint32_t RSR_ECR;
    uint8_t reserved1[0x10];
    const uint32_t FR;
    uint8_t reserved2[0x4];
    uint32_t LPR;
    uint32_t IBRD;
    uint32_t FBRD;
    uint32_t LCR_H;
    uint32_t CR;
    uint32_t IFLS;
    uint32_t IMSC;
    const uint32_t RIS;
    const uint32_t MIS;
    uint32_t ICR;
    uint32_t DMACR;
} pl011_T;

enum {
    RXFE = 0x10,
    TXFF = 0x20,
};

pl011_T * const UART0 = (pl011_T *)0x101f1000;
pl011_T * const UART1 = (pl011_T *)0x101f2000;
pl011_T * const UART2 = (pl011_T *)0x101f3000;

static inline char upperchar(char c) {
    if((c >= 'a') && (c <= 'z')) {
        return c - 'a' + 'A';
    } else {
        return c;
    }
}

static void uart_echo(pl011_T *uart) {
    if ((uart->FR & RXFE) == 0) {
        while(uart->FR & TXFF);
        uart->DR = upperchar(uart->DR);
    }
}

void c_entry() {
    for(;;) {
        uart_echo(UART0);
        uart_echo(UART1);
        uart_echo(UART2);
    }
}

The pl011_T structure implements the memory map of the PL011 serial port. I used the standard sized types that are defined inside "stdint.h" because the important part of the definition is to put each register at the right offset from the start of the struct. The purpose of the reserved byte arrays is to put the subsequent register at the right offset as specified in the PL011 manual.
The definition is marked as volatile because I don't want the compiler to assume anything about the values of the registers, since they can be changed by the hardware. Some registers are marked const because they are read-only, but the volatile specifier indicates that they can still change outside the execution of the program. At last, the three pointers UART0, UART1 and UART2 point to the addresses where the peripherals are mapped in the Versatile platform.

The code itself does nothing more than echoing the received character, and if the character is a letter it converts it to uppercase. The echo is done by polling the RXFE flag of the Flag Register (FR) until the receive FIFO is not empty. Then, we wait until the transmit FIFO is not full (TXFF flag) and use the Data Register to read and write the byte.

To try the program I used the "startup.s" and "test.ld" files that I prepared in my previous post. I'm assuming the main C code is a file called "test.c". Then I compiled the program using the CodeSourcery bare metal ARM toolchain, and I emulated it with the qemu-system-arm emulator:

$ arm-none-eabi-gcc -c -mcpu=arm926ej-s test.c -o test.o
$ arm-none-eabi-as -mcpu=arm926ej-s startup.s -o startup.o
$ arm-none-eabi-ld -T test.ld test.o startup.o -o test.elf
$ arm-none-eabi-objcopy -O binary test.elf test.bin
$ qemu-system-arm -M versatilepb -m 128M -kernel test.bin -serial stdio -serial telnet:localhost:1235,server -serial telnet:localhost:1236,server
QEMU waiting for connection on: telnet:127.0.0.1:1235,server
QEMU waiting for connection on: telnet:127.0.0.1:1236,server

The QEMU serial options open two telnet ports, and the execution stops until there's a telnet client connected to each of the ports. So, in another terminal, the commands:

$ telnet 127.0.0.1 1235

and:

$ telnet 127.0.0.1 1236

open the connections and the program starts. All three UARTs are active and respond to characters sent by the host.
Thomas 2012/09/27
Shouldn't you do all the stuff to enable the UART? Or do you rely on QEMU doing it?

Balau 2012/09/27
You are right, on the real hardware it should not have worked because I didn't set the UARTEN bit of the UARTCR register. It seems QEMU ignores that bit, though... I don't know what you mean by "all the stuff"; if I do not touch any other configuration (baud rate, parity, etc.) it means the default values are used, so it's not necessary to do it.

minghuasweblog 2013/05/27
Reblogged this on minghuasweblog and commented: Really great and very detailed series of artistic work on ARM and emulation.

Franc 2013/06/11
Hi, in QEMU 1.4.0 do I have to add the drivers of the PL011 UART to my beagleboard-xm image, or should a normal 3.2.8 kernel config for beagle be good enough?

Balau 2013/06/12
The beagleboard xm does not have a PL011 UART, it has a UART/IrDA/CIR module. The image you have should be good enough.

Franc 2013/06/12
Thank you for your answer. The thing is that I'm trying to communicate with the host machine across the serial port using -serial pts, and when doing echo "test" > ttyO1 I get no output on the host pty that was created. I did some troubleshooting that I posted on Stack Exchange. That is where I thought it was a kernel config issue, since on QEMU's web site they say PL011 UARTs were emulated for Cortex-A8 CPUs. Could you take a look at my troubleshooting and tell me what you think the issue may be? Thanks again.
PS: I also wrote about the same matter in another of your threads, thinking it was more appropriate.

Balau 2013/06/13
I saw your Stack Exchange post. You should try replacing all your "-serial stdio" and "-serial pty" with "-serial telnet:localhost:XXXX" where XXXX are different TCP ports, and then connect from the host with "telnet localhost XXXX". I don't think you need to change the kernel configuration or the guest setserial.

Franc 2013/06/14
Once again thank you for your time. I'm not really familiar with the telnet protocols but I will try this for sure. Meanwhile, could you tell me where I can download an image, a config file or a defconfig for beagleboard with which serial communication across multiple ttys is known to work? I wrote some programs that absolutely need to open a tty on both guest and host to communicate.

Balau 2013/06/14
I never tried to emulate beagleboard, but I think the best bet is taking the mainline kernel, configuring with omap2plus_defconfig (that seems to generate a .config with CONFIG_MACH_OMAP3_BEAGLE=y) and compiling.

Michael Rupp 2015/05/28
I'm using Windows 7 and have had success with other tutorials here using qemu. I'm having issues with the client telnet windows saying that they are connected... netstat shows that the connections are established, but the clients just freeze and I can't complete the demo by typing and having the UART echo out to the console. Any ideas that I may have overlooked, or which direction to go from here? Thanks

Michael Rupp 2015/05/29
Ok, I think it is working. The Windows telnet client will continue if you hit the Enter key. I was then able to type, and when I did, every character was capitalized in the telnet console. There was no echo out at the qemu server. I suppose that was normal? I'm still learning how to use QEMU.

Balau 2015/05/30
It seems normal to me. QEMU emulates many serial ports (UART peripherals), as many as the Versatile PB offers. If I understand correctly you are using the nographic option, so the first UART (UART0) is connected to the terminal you open with Ctrl-Alt-3. Then the next telnet option tells QEMU to connect the second UART (UART1) to a telnet server. Then the last telnet option tells QEMU to connect the third UART (UART2) to another telnet server. If you want to echo something in QEMU you have to code the part that, when a byte is read from UART2, writes it into UART0.
I don't know about the behaviour of Windows telnet; I suppose that's the way it works with that Enter key at the beginning.

younes 2017/07/25
Hi, I am using your code in gem5. I changed the memory addresses of the UARTs and corrected the offsets against the pl011 class of gem5. It is working, but not well. Which part should I modify? The output is not correct. Thank you in advance.
https://balau82.wordpress.com/2010/11/30/emulating-arm-pl011-serial-ports/
Hi

I'm trying to learn Silex/Twig, but I've come across a small issue... In every tutorial I've seen, the speaker uses the "asset" function to include JS and CSS. Unfortunately, this doesn't seem to work for me...

<link href="{{ asset('/css/index.css') }}" rel="stylesheet" media="all" />

I get the following error:
"Twig_Error_Syntax: The function "asset" does not exist in "base.html" at line 8"

I can replace the css include with the following and it works:

<link href="{{ app.request.basepath }}/css/mails.css" rel="stylesheet" media="all" />

Thx for the reply. The twig-bridge is in my composer.json (also in my autoload-namespaces.php):

"symfony/twig-bridge": "2.1.*",

It's not a show-stopper though; I've used the code below, which seems to work. I'm just curious why the asset function doesn't work...

<link href="{{ app.request.basepath }}/css/index.css" rel="stylesheet" media="all" />
http://community.sitepoint.com/t/silex-twig-asset-function/23299
You will need the following components −

For this project, you just need the usual Arduino IDE, Adafruit's CC3000 library, and the CC3000 MDNS library. We are also going to use the aREST library to send commands to the relay via WiFi.

Follow the circuit diagram and make the connections as shown in the image given below. The hardware configuration for this project is very easy.

Let us now connect the relay. After placing the relay on the breadboard, you can start identifying the two important parts on your relay: the coil part which commands the relay, and the switch part where we will attach the LED. You also have to place the rectifier diode (anode connected to the ground pin) over the pins of the coil to protect your circuit when the relay is switching.

Connect the +5V of the Arduino board to the common pin of the relay's switch. Finally, connect one of the other pins of the switch (usually, the one which is not connected when the relay is off) to the LED in series with the 220 Ohm resistor, and connect the other side of the LED to the ground of the Arduino board.

You can test the relay with the following sketch −

const int relay_pin = 8; // Relay pin

void setup() {
  Serial.begin(9600);
  pinMode(relay_pin, OUTPUT);
}

void loop() {
  // Activate relay
  digitalWrite(relay_pin, HIGH);
  // Wait for 1 second
  delay(1000);
  // Deactivate relay
  digitalWrite(relay_pin, LOW);
  // Wait for 1 second
  delay(1000);
}

The code is self-explanatory. You can just upload it to the board and the relay will switch states every second, and the LED will switch ON and OFF accordingly.

Let us now control the relay wirelessly using the CC3000 WiFi chip. The software for this project is based on the TCP protocol. For this project, the Arduino board will be running a small web server, so we can "listen" for commands coming from the computer.
We will first take care of the Arduino sketch, and then we will see how to write the server-side code and create a nice interface.

First, the Arduino sketch. The goal here is to connect to your WiFi network, create a web server, check if there are incoming TCP connections, and then change the state of the relay accordingly.

#include <Adafruit_CC3000.h>
#include <SPI.h>
#include <CC3000_MDNS.h>
#include <Ethernet.h>
#include <aREST.h>

You need to define inside the code what is specific to your configuration, i.e. WiFi name and password, and the port for TCP communications (we have used 80 here).

// WiFi network (change with your settings!)
#define WLAN_SSID "yourNetwork" // cannot be longer than 32 characters!
#define WLAN_PASS "yourPassword"
#define WLAN_SECURITY WLAN_SEC_WPA2 // This can be WLAN_SEC_UNSEC, WLAN_SEC_WEP,
                                    // WLAN_SEC_WPA or WLAN_SEC_WPA2

// The port to listen for incoming TCP connections
#define LISTEN_PORT 80

We can then create the CC3000 instance, server and aREST instance −

// Server instance
Adafruit_CC3000_Server restServer(LISTEN_PORT);
// DNS responder instance
MDNSResponder mdns;
// Create aREST instance
aREST rest = aREST();

In the setup() part of the sketch, we can now connect the CC3000 chip to the network −

cc3000.connectToAP(WLAN_SSID, WLAN_PASS, WLAN_SECURITY);

How will the computer know where to send the data? One way would be to run the sketch once, then get the IP address of the CC3000 board, and modify the server code again. However, we can do better, and that is where the CC3000 MDNS library comes into play. We will assign a fixed name to our CC3000 board with this library, so we can write down this name directly into the server code. This is done with the following piece of code −

if (!mdns.begin("arduino", cc3000)) {
  while(1);
}

We also need to listen for incoming connections.

restServer.begin();

Next, we will code the loop() function of the sketch that will be continuously executed. We first have to update the mDNS server.
mdns.update();

The server running on the Arduino board will wait for incoming connections and handle the requests.

Adafruit_CC3000_ClientRef client = restServer.available();
rest.handle(client);

It is now quite easy to test the project via WiFi. Make sure you updated the sketch with your own WiFi name and password, and upload the sketch to your Arduino board. Open your Arduino IDE serial monitor, and look for the IP address of your board. Let us assume for the rest here that it is something like 192.168.1.103.

Then, simply go to your favorite web browser, and type −

192.168.1.103/digital/8/1

You should see that your relay automatically turns ON.

We will now code the interface of the project. There will be two parts here: an HTML file containing the interface, and a client-side Javascript file to handle the clicks on the interface. The interface here is based on the aREST.js project, which was made to easily control WiFi devices from your computer.

Let us first see the HTML file, called interface.html. The first part consists of importing all the required libraries for the interface −

<head>
  <meta charset = utf-8 />
  <title> Relay Control </title>
  <link rel = "stylesheet" type = "text/css" href = "">
  <link rel = "stylesheet" type = "text/css" href = "style.css">
  <script type = "text/javascript" src = ""></script>
  <script type = "text/javascript" src = ""></script>
  <script type = "text/javascript" src = ""></script>
  <script type = "text/javascript" src = "script.js"></script>
</head>

Then, we define two buttons inside the interface, one to turn the relay on, and the other to turn it off again.
<div class = 'container'>
  <h1>Relay Control</h1>
  <div class = 'row'>
    <div class = "col-md-1">Relay</div>
    <div class = "col-md-2">
      <button id = 'on' class = 'btn btn-block btn-success'>On</button>
    </div>
    <div class = "col-md-2">
      <button id = 'off' class = 'btn btn-block btn-danger'>Off</button>
    </div>
  </div>
</div>

Now, we also need a client-side Javascript file to handle the clicks on the buttons. We will also create a device that we will link to the mDNS name of our Arduino device. If you changed this in the Arduino code, you will need to modify it here as well.

// Create device
var device = new Device("arduino.local");

// Button
$('#on').click(function() {
  device.digitalWrite(8, 1);
});

$('#off').click(function() {
  device.digitalWrite(8, 0);
});

The complete code for this project can be found on the GitHub repository. Go into the interface folder, and simply open the HTML file with your favorite browser. You should see something similar inside your browser −

Try to click a button on the web interface; it should change the state of the relay nearly instantly.

If you managed to get it working, bravo! You just built a WiFi-controlled light switch. Of course, you can control much more than lights with this project. Just make sure your relay supports the power required for the device you want to control, and you are good to go.
https://www.tutorialspoint.com/arduino/arduino_network_communication.htm
Cool, I didn't know about the locale module. I had to change line 3 to "locale.setlocale(locale.LC_ALL, 'en_US')" for this to work on a shared server I use.
#
This works great on my ubuntu-hardy laptop but I can't get it to work on my ubuntu-gutsy server. I get some kind of a Locale C error.
#
FIGURED IT OUT!!! I had to set up the language pack on my ubuntu server... Apparently this doesn't get done by default on the server version of ubuntu.
sudo ./install-language-pack en_US
then restart x and you're golden.
#
Thanks for this filter. I also had to change line 3 to "locale.setlocale(locale.LC_ALL, 'en_US')"
#
hmm this doesn't seem to be working for me. Nothing is displayed on the page?
#
Not really sure why, but I had to change line 9 to; and then I added the GBP symbol (£) myself. Otherwise nothing was ever displayed.
#
Hm... sometimes 'value' in context may be a string. This will be better, imho:
#
My code above lost its formatting. The line numbers are included. So, maybe you can decipher it.
#
i modified your snippet to show the money symbol:
from django.utils.translation import ugettext
from django.utils.safestring import mark_safe
I use the first import to translate the currency symbol (in my django.po):
msgid "currency"
msgstr "[HTML_REMOVED]"
I use the second import to transform "&[HTML_REMOVED]" to "[HTML_REMOVED]", and return this:
return mark_safe('%s %s' % (locale.currency(value, symbol=False, grouping=True), ugettext('currency')))
{{ "1234.56"|currency }} returns 1.234,55 €
#
I used the currently configured language for the locale. Here's what worked for me:
#
https://djangosnippets.org/snippets/552/
Provided by: libhtml-strip-perl_2.10-1build3_amd64

NAME
       HTML::Strip - Perl extension for stripping HTML markup from text.

SYNOPSIS
       use HTML::Strip;

       my $hs = HTML::Strip->new();
       my $clean_text = $hs->parse( $raw_html );
       $hs->eof;

DESCRIPTION
       This module simply strips HTML-like markup from text rapidly and brutally. It could easily be used to strip XML or SGML markup instead; but as removing HTML is a much more common problem, this module lives in the HTML:: namespace. It is written in XS, and thus about five times quicker than using regular expressions for the same task.

       It does not do any syntax checking (if you want that, use HTML::Parser); instead it merely applies the following rules:

       1) No parsing for quotes is performed within comments, so for instance "<!-- comment with both ' quote types " -->" would be entirely stripped.

       2) Anything that appears within what we term strip tags is stripped as well. By default, these tags are "title", "script" and "style".

       Alternatively, you may set "auto_reset" to true on the constructor, or at any time after with "set_auto_reset", so that the parser will always operate on a one-shot basis (resetting after each parsed chunk).

METHODS
       new()
           Constructor. Can optionally take a hash of settings (with keys corresponding to the settings below).

       clear_striptags()
           Clears the current set of strip tags.

       add_striptag()
           Adds the string passed as an argument to the current set of strip tags.

       set_striptags()

       filter_entities()
           If HTML::Entities is available, this method behaves just like invoking HTML::Entities::decode_entities, except that it respects the current setting of 'decode_entities'.

       set_filter()
           Sets a filter to be applied after tags were stripped. It may accept the name of a method (like 'filter_entities') or a code ref. By default, its value is 'filter_entities' if HTML::Entities is available or "undef" otherwise.

       set_auto_reset()
           Takes a boolean value. If set to true, "parse" resets after each call (equivalent to calling "eof"). Otherwise, the parser remembers its state from one call to "parse" to another, until you call "eof" explicitly. Set to false by default.

       set_debug()
           Outputs extensive debugging information on internal state during the parse. Not intended to be used by anyone except the module maintainer.

       decode_entities()
       filter()
       auto_reset()
       debug()
           Read-only accessors for their respective settings.

LIMITATIONS
       Whitespace

       Entities
           HTML::Strip will only attempt decoding of HTML entities if HTML::Entities is installed.

EXPORT
       None by default.

AUTHOR
       Alex Bowley <[email protected]>

SEE ALSO
       perl, HTML::Parser, HTML::Entities

LICENSE
       This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://manpages.ubuntu.com/manpages/disco/man3/HTML::Strip.3pm.html
[1.1.14 Jython] App() crashes with Java NPE --- sikulixapi.jar MUST be on Java classpath

My Script:
========
# sandbox.py
import sys
sys.path.append('sikulixapi.jar')
from org.sikuli.script import App
app = App('test')    << Here the Script crashes
app.open()

Used Versions:
============
Jython 2.7.0
SikuliXApi.jar Version 1.1.4
Windows 10

Stack Trace:
==========
C:\_Daten\
Traceback (most recent call last):
  File "sandbox.py", line 4, in <module>
    app = App('test')
  at org.sikuli.
  at org.sikuli.
  at org.sikuli.
  at org.sikuli.
  at org.sikuli.
  at org.sikuli.
  at org.sikuli.
  at org.sikuli.
  at sun.reflect.
  at sun.reflect.
  at sun.reflect.
  at java.lang.
  at org.python.
java.lang.

Research:
========
When I execute those steps from the script (sandbox.py) within the Jython console, it still works after a second attempt. On the first try it also crashes with a NullPointerException.

Question information
Language: English
For: Sikuli
Status: Answered
Assignee: No assignee
Last query: 2019-02-08
Last reply: 2019-02-11

Due to the special relation between the Python API level and the Java API level (historical reasons), your approach does not lead to a correct initialization. Your "solution" apparently triggers the initialization only on the second try.

You might try:
Screen() # just to trigger the initialization before doing something else with SikuliX.

In the next days I will set up a Python/pyjnius environment myself and also test the plain Jython environment. I will fix possible startup oddities asap. BTW: thanks for the testing, evaluation and documentation.

I made some tests with different scenarios.

--- pyjnius
In the current shape of SikuliX it is not usable with jnius from Python. I now have it on the list to make it usable though. Might take a while, but it seems to be worth it.

--- plain Jython
The sikulixapi.jar MUST be on the Java classpath (environment CLASSPATH) at the time of first use of any SikuliX feature. Simply having it on sys.path does not work (causes the NPE with App("test") as mentioned above).
Simply having it on sys.path does not work (causes the NPE with App("test") as mentioned above) Found a Solution. Not happy with that but it works. Possible Solution: ============== # sandbox.py import sys append( 'sikulixapi. jar') sys.path. from org.sikuli.script import App try: app = App('test') except: pass finally: app = App('test') app.open()
https://answers.launchpad.net/sikuli/+question/678428
Thanks for the patch, but unfortunately the problem is still the same.

I presume the change from "sti()" to "__sti()" was a semantic (or SMP) thing, since the former is #defined to the latter anyway?

Please note also the following modification which was required for 2.4.19:

diff -u -p -r1.1 -r1.2
--- scratch/include/asm-mips/hardirq.h  2002/09/26 15:58:11  1.1
+++ scratch/include/asm-mips/hardirq.h  2002/12/12 13:15:03  1.2
@@ -42,7 +42,7 @@
 #define irq_exit(cpu, irq) (local_irq_count(cpu)--)
 #define synchronize_irq() barrier();
-
+#define release_irqlock(cpu) do { } while (0)
 #else
 #include <asm/atomic.h>

(We had a look at the 2.5 (head) kernel, but this seems to have some wrong coding, and doesn't build straight off. Things like duplicate #defines of ALIGN, and a conditional branch instruction with only one operand!)

Jun Sun <[email protected]>
To: [email protected]
cc: [email protected], [email protected], [email protected]
Sent by: [email protected]
Subject: Re: Problems with CONFIG_PREEMPT
17-Dec-2002 06:03 PM

On Tue, Dec 17, 2002 at 08:27:16AM +0000, [email protected] wrote:
> > NEW_TIME_C is set. URL to the patch is:

There are some bits missing. Not sure if it is related to your problem or not. Robert, please take the MIPS preemptible kernel update patch.

> We ultimately want to add in real-time support, such as Ingo's O(1)
> scheduler - if this is 'complete' for MIPS.

O(1) shortens process dispatching time, usually not a big time saver unless you have *lots* of processes and/or you are doing frequent context switches. Another patch which is probably more important is Ingo's patch breaking up selected big locks. That will make preemption work more effectively.

> I don't know if it would be
> better just to go for this in one hit, or if we'd need the preemption
> sorted out anyway first.

You do have to sort them out one by one. (Or you get them all by becoming an mvista customer. :-0)

> Or should we just go to a 2.5.x kernel instead?

We'd love to have more people bang on 2.5 MIPS *grin* ...

Jun
http://www.linux-mips.org/archives/linux-mips/2002-12/msg00262.html
I've got a problem setting a texture from an image on a sphere. The problem is that texture.Load or texture.SetData always returns false. I tried different methods like SetData, Load, resizing the texture and image (to a power-of-2 size) and so on, but none of them worked. Here is my code:

async void CreateScene()
{
    Input.SubscribeToTouchEnd(OnTouched);

    _scene = new Scene();
    _octree = _scene.CreateComponent<Octree>();
    _plotNode = _scene.CreateChild();

    var baseNode = _plotNode.CreateChild().CreateChild();
    var plane = baseNode.CreateComponent<StaticModel>();
    plane.Model = CoreAssets.Models.Sphere;

    var cameraNode = _scene.CreateChild();
    _camera = cameraNode.CreateComponent<Camera>();
    cameraNode.Position = new Vector3(10, 15, 10) / 1.75f;
    cameraNode.Rotation = new Quaternion(-0.121f, 0.878f, -0.305f, -0.35f);

    Node lightNode = cameraNode.CreateChild();
    var light = lightNode.CreateComponent<Light>();
    light.LightType = LightType.Point;
    light.Range = 100;
    light.Brightness = 1.3f;

    int size = 3;
    baseNode.Scale = new Vector3(size * 1.5f, 1, size * 1.5f);

    var imageStream = await new HttpClient().GetStreamAsync("some 512 * 512 jpg image");
    var ms = new MemoryStream();
    imageStream.CopyTo(ms);

    var image = new Image();
    var isLoaded = image.Load(new MemoryBuffer(ms));
    if (!isLoaded)
    {
        throw new Exception();
    }

    var texture = new Texture2D();
    //var isTextureLoaded = texture.Load(new MemoryBuffer(ms.ToArray()));
    var isTextureLoaded = texture.SetData(image);
    if (!isTextureLoaded)
    {
        throw new Exception();
    }

    var material = new Material();
    material.SetTexture(TextureUnit.Diffuse, texture);
    material.SetTechnique(0, CoreAssets.Techniques.Diff, 0, 0);
    plane.SetMaterial(material);

    try
    {
        await _plotNode.RunActionsAsync(new EaseBackOut(new RotateBy(2f, 0, 360, 0)));
    }
    catch (OperationCanceledException) { }
}

Please help!

@ShahramShobeiri For me the below works. As an image I used this (put to my Assets/Data/Textures folder).
using Urho;
using Urho.Shapes;

var app = SimpleApplication.Show(new ApplicationOptions("Data") { Width = 1280, Height = 800 });
app.Viewport.SetClearColor(Color.Black);
app.Renderer.MaterialQuality = 15;
var sphere = app.RootNode.GetOrCreateComponent<Sphere>();
var i = app.ResourceCache.GetImage("Textures/world.topo.bathy.200401.3x5400x2700.png");
var m = Material.FromImage(i);
sphere.SetMaterial(m);

@puneetmahali Nothing to explain. There are use cases where storing resources (images, textures, etc.) is better than managing images dynamically. But below is a (not optimized) example of how to download an image and use it as a texture/material on the fly:

using Urho;
using Urho.Resources;
using Urho.Shapes;
using System.Net;
using System.Text;

var url = "";
var wc = new WebClient() { Encoding = Encoding.UTF8 };
var app = SimpleApplication.Show(new ApplicationOptions("Data") { Width = 1280, Height = 800 });
app.Viewport.SetClearColor(Color.Black);
app.Renderer.MaterialQuality = 15;
try
{
    var mb = new MemoryBuffer(wc.DownloadData(url));
    var sphere = app.RootNode.GetOrCreateComponent<Sphere>();
    var img = new Urho.Resources.Image(app.Context) { Name = "MyImage" };
    img.Load(mb);
    var m = Material.FromImage(img);
    sphere.SetMaterial(m);
}
catch (Exception ex)
{
    // do something when an error occurs
}

Answers

Hi Shahram, can you please send the full project?

Hi @puneetmahali, here is the full project:

What about using the Material.FromImage(string) method?

Thanks @laheller. It didn't work either; after Material.FromImage(string) the sphere disappears and everything goes black.

@ShahramShobeiri For me the below works. As an image I used this (put to my Assets/Data/Textures folder).

@laheller Can you please explain more about "(put to my Assets/Data/Textures folder)"? Because if the image will be fetched from a URL, then why do we also need to put the image into the folder? I want to show multiple images, or we can say, dynamically.
@laheller It would be so helpful if you shared a working demo project.

@puneetmahali Nothing to explain. There are use cases where storing resources (images, textures, etc.) is better than managing images dynamically. But below is a (not optimized) example of how to download an image and use it as a texture/material on the fly:

@laheller Thanks for the quick reply. Sounds cool. Is the above solution working/executable for both iOS and Android for you? Because it's not working and shows an error like "failed to add resource path 'Data', check the documentation". I already set the build action to BundleResource for iOS and EmbeddedResource for Android. So it would be very helpful if you could share an executable demo. Thanks in advance.

@puneetmahali The app in my example is an instance of Urho.Application or Urho.SimpleApplication. You have to create your own and use the rest from my example. It should work on all platforms because of Xamarin.

Thank you @laheller, you helped a lot. My main goal was to create a 360-degree image viewer. I found a NuGet package (Swank.FormsPlugin) for this goal but it didn't work for me, so based on Swank.FormsPlugin and @laheller's solution, I created a very simple project to show 360-degree images that works! Here is the project for @laheller

@ShahramShobeiri This is not working on iOS; it gives the OpenGL error "Error: Could not create OpenGL context, root cause 'failed to create OpenGL ES drawable'".

@puneetmahali Is my second example working for you? BTW I never tried Xamarin.iOS; I develop only for Windows and Android.

Actually I am working on indoor maps. I want to load the 2D indoor map images into a 3D viewer with 360-degree movement. I am using a texture with a Sphere shape, but it is not displayed in full screen, and a light shade also appears. Can you please help me solve this problem? Here is my example-

@puneetmahali Are you working on a Windows Forms/WPF application, while you have no fullscreen?
If yes, you have to initialize your app using the ApplicationOptions class. Regarding the second problem ("a light shade also appears"), please run your current app, make a screenshot and post it here, because it is not clear to me what the problem is.

@laheller I am not familiar with Xamarin.iOS; I've never used it. I only develop on the Xamarin.Android and WinForms platforms. On Android, if you want:

1. A fullscreen app in general, you have to specify a theme without an ActionBar for your activity, for example @android:style/Theme.Holo.Light.NoActionBar or your custom style where you disable the action bar. Then the whole device screen will be available for activity layout(s) or other views.
2. For UrhoSharp, you have to create the UrhoSurface within a fullscreen layout.

BTW I can't see your screenshots here.

Sorry @puneetmahali, I never tried Xamarin.iOS, but it works on Android without problems.

One more thing: in the sample project, if you want to load an image from a URL it's better to use

var mb = new MemoryBuffer(wc.DownloadData(new Uri(url)));

because

var mb = new MemoryBuffer(wc.DownloadData(url));

didn't work for me.

@laheller I still cannot see your screenshots. Upload them somewhere and share the links here.

@laheller @ShahramShobeiri We have some Urho shapes (Urho.Shapes) like Box, Sphere, Plane, Cone, Cylinder, Dome, etc. So I need to create a StaticModel component and load the models into the above shapes, like below:

var plane = baseNode.CreateComponent<StaticModel>();
plane.Model = CoreAssets.Models.Box;

That box provides the 3D view, but now I want to load the image in a square like a simple ImageView, through the texture. So, which shape will give me that? I also need to think about the Vector3 position that returns the view in the center. To make it more simple: I need to load an image with a texture in a 2D view like a normal ImageView (square shape) in Xamarin.Forms, and it should work on both iOS and Android.

@puneetmahali Still not sure what your goal is.
You want to display a Box shape and render different textures on its 6 surfaces?

@laheller Are you able to download the images? No, I want to display the image in a square shape and render a different texture, without 6 surfaces. Please check your personal message also.

@puneetmahali Then you have to: Finally, you have to transform (rotate/translate/scale) your node to the final orientation/position/size.
https://forums.xamarin.com/discussion/comment/345197/
On Fri, Oct 17, 2008 at 12:27 PM, Jeff Layton <[email protected]> wrote:
> On Fri, 17 Oct 2008 10:24:29 -0500
> "Steve French" <[email protected]> wrote:
>
>> On Fri, Oct 17, 2008 at 10:09 AM, Steve French <[email protected]> wrote:
>> > Even when a file is open by another process, posix allows the file to
>> > be deleted (the file is removed from the namespace and eventually the
>> > data is removed from disk). Unfortunately due to problems in some
>> > NAS filers/servers this can be hard to implement, and I am not sure
>> > what the "best" behavior is in that case.
>
> My argument is that the primary function of unlink() is to remove a
> filename from the filesystem. If we return success from such a call
> without actually having freed up that filename, then that's a bug. It's
> unfortunate that some servers don't support the calls we need to make
> this work all the time.

The filename will be freed - and the trade-off is which breaks fewer apps. Both the open and unlink man pages list plausible return codes, but I am worried that the sequence of file operations open/unlink/close (I think we see both dbench and connectathon do this, IIRC) is as common a sequence as open/unlink/create/close, thus we could break more apps your way than by leaving it as is.

> We can't however make assumptions about what applications want. We
> could, in principle, fix up the situation where a server does
> open->unlink->create by truncating the old file and pretending that
> it's a new one.

That could corrupt data - the original opener may need that data up to the moment they close that handle.

> All we can reasonably do is try to have each syscall give us the
> expected end result or an error if it can't be done.

The open syscall is allowed to fail with ETXTBUSY (or even access denied, among others). Although this type of situation is not common on open, it is more common on open (and create) than on unlink, and thus it is likely that an app could deal with an open error better than with an error on the unlink that preceded it.
The other argument here is that whether or not we allow unlink (when it can be marked for deletion but not silly-renamed), we have apps that will get the same error on open due to Windows, MacOS and other non-Linux clients setting the flag (i.e. open/create failures for a filename that was marked delete-on-close can still happen even if we aren't the ones who set the flag on the file, since Windows and various other OSes can and do set this flag on the server or remotely).

-- 
Thanks,
Steve
http://lkml.org/lkml/2008/10/17/357
On Monday 08 March 2004 2:25 am, Takayoshi Kochi wrote:
> I think that's still true for IDE / serial port drivers.
> Kaneshige-san, could you confirm your changes are compatible
> with probe_irq_on()?
>
> Itanium-generation machines (such as BigSur) depend on
> probe_irq_on() for finding the serial port IRQ.

Strictly speaking, since ACPI tells us about IRQs, we shouldn't need probe_irq_on() on ia64, should we?

I don't see any ACPI smarts in the IDE driver, but I think the serial driver needs only the attached patch to make it avoid the use of probe_irq_on(). I tested this on i2k and various HP zx1 boxes, and it works fine.

Russell, if you agree, would you mind applying this? ACPI and HCDP tell us what IRQ the serial port uses, so there's no need to have the driver probe for the IRQ.

===== drivers/serial/8250_acpi.c 1.7 vs edited =====
--- 1.7/drivers/serial/8250_acpi.c	Fri Jan 16 15:01:45 2004
+++ edited/drivers/serial/8250_acpi.c	Mon Mar  8 11:14:51 2004
@@ -134,8 +134,7 @@
 	}

 	serial_req.baud_base = BASE_BAUD;
-	serial_req.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF |
-			   UPF_AUTO_IRQ | UPF_RESOURCES;
+	serial_req.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF | UPF_RESOURCES;
 	priv->line = register_serial(&serial_req);
 	if (priv->line < 0) {
===== drivers/serial/8250_hcdp.c 1.2 vs edited =====
--- 1.2/drivers/serial/8250_hcdp.c	Sun Jan 11 16:27:13 2004
+++ edited/drivers/serial/8250_hcdp.c	Mon Mar  8 11:28:27 2004
@@ -186,8 +186,6 @@
 	port.irq = gsi;
 #endif
 	port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF | UPF_RESOURCES;
-	if (gsi)
-		port.flags |= ASYNC_AUTO_IRQ;

 	/*
 	 * Note: the above memset() initializes port.line to 0,
http://www.gelato.unsw.edu.au/archives/linux-ia64/0403/8680.html
kick

v., kicked, kick·ing, kicks.

v.intr.
- To strike out with the foot or feet.
- Sports.
  - To score or gain ground by kicking a ball.
  - To punt in football.
  - To propel the body in swimming by moving the legs, as with a flutter kick or frog kick.
- To recoil: The powerful rifle kicked upon being fired.
- Informal.
  - To express negative feelings vigorously; complain.
  - To oppose by argument; protest.

v.tr.
- To strike with the foot.
- To propel by striking with the foot.
- To spring back against suddenly: The rifle kicked my shoulder when I fired it.
- Sports. To score (a goal or point) by kicking a ball.

n.
- A vigorous blow with the foot.
- Sports. The motion of the legs that propels the body in swimming.
- A jolting recoil: a rifle with a heavy kick.
- Slang. A complaint; a protest.
- Slang. Power; force: a car engine with a lot of kick.
- Slang.
  - A feeling of pleasurable stimulation: got a kick out of the show.
  - kicks Fun: went bowling just for kicks.
- Slang. Temporary, often obsessive interest: I'm on a science fiction kick.
- Slang. A sudden, striking surprise; a twist.
- Sports.
  - The act or an instance of kicking a ball.
  - A kicked ball.
  - The distance spanned by a kicked ball.

kick about
- To move from place to place.
- To treat badly; abuse.

kick around
- To move from place to place: "spent the next three years in Italy, kicking around the country on a motor scooter" (Charles E. Claffey).
- To give thought or consideration to; ponder or discuss.

kick back
- To recoil unexpectedly and violently.
- Informal. To take it easy; relax: kicked back at home and watched TV.
- Slang. To return (stolen items).
- Slang. To pay a kickback.

kick in
- Informal. To contribute (one's share): kicked in a few dollars for the office party.
- Informal. To become operative or take effect: "His pituitary kicked in, and his growth was suddenly vertical" (Kenneth Browser).
- Slang. To die.

kick off
- Sports. To begin or resume play with a kickoff.
- Informal. To begin; start: kicked off the promotional tour with a press conference.
- Slang. To die.

kick out
- To throw out; dismiss.

kick over
- To begin to fire: The engine finally kicked over.

kick up
- To increase in amount or force; intensify: A sandstorm kicked up while we drove through the desert.
- To stir up (trouble): kicked up a row.
- To show signs of disorder: His ulcer has kicked up again.

kick ass (or butt) Vulgar Slang.
- To take forceful or harsh measures to achieve an objective.

kick the bucket Slang.
- To die.

kick the habit Slang.
- To free oneself of an addiction, as to narcotics or cigarettes.

kick up one's heels Informal.
- To cast off one's inhibitions and have a good time.

kick upstairs Slang.
- To promote to a higher yet less desirable position.

[Middle English kiken, perhaps of Scandinavian origin.]
http://www.answers.com/topic/kick