The curves vary with respect to the parameter t, and their appearance is determined by the ratio a/b and the value of δ.
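In the standard parametrization (the one used in the snippet below), the curves are given by x(t) = sin(a·t + δ) and y(t) = sin(b·t).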
As usual, I made a snippet to visualize them:
from numpy import sin, pi, linspace
from pylab import plot, show, subplot

a = [1, 3, 5, 3]  # plotting the curves for
b = [1, 5, 7, 4]  # different values of a/b
delta = pi / 2
t = linspace(-pi, pi, 300)

for i in range(0, 4):
    x = sin(a[i] * t + delta)
    y = sin(b[i] * t)
    subplot(2, 2, i + 1)
    plot(x, y)

show()

This is the result, and setting delta = pi/4 we have
It is a beautiful thing to see these patterns on an oscilloscope.
Thanks a lot. Very useful article.
Hi.
I wonder if I could use a spanish free translation of some of your articles (of course, with attribution) in a possible future blog of scientific python (in spanish and if I find some time).
Hi basuradek,
you can consider every post released under the CC BY-NC-SA 3.0 licence. So, feel free to publish translations of my posts, but you have to link to the original versions on this website.
By the way, I would love to see some of my work translated into Spanish. If you'll do it, let me know :)
I'm trying to analyse a method, and when debugging it has a return type of ??? even though the type is declared in the .NET Framework (4.0); the return type should be IObservable<T>.
Why is the return type unknown for this method declaration (IMethodDeclaration)?
Cheers
Ollie Riches
I'm trying to analyse a method, and when debugging it has a return type of ??? even though the type is declared in the .NET Framework (4.0); the return type should be IObservable<T>.
This usually means the file isn't "correct" - there's a syntax or reference error somewhere. Perhaps the method isn't declared properly (no braces, no body, missing brackets, that sort of thing), or the return type isn't referenced correctly. In other words, ReSharper knows there should be a declared element at that point, but doesn't know what that element is. Instead, it gives you a declared element with a name of "???" - this is the constant SharedImplUtil.MISSING_DECLARATION_NAME. In your code, you can check for declaredElement.ShortName == SharedImplUtil.MISSING_DECLARATION_NAME to know that there's code you can't analyse, and break out of your processing early.
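A minimal sketch of that check (the surrounding method and its name are made up for illustration; only the ShortName comparison comes from the ReSharper API):

private void ProcessMethod(IMethodDeclaration methodDeclaration)
{
    var declaredElement = methodDeclaration.DeclaredElement;
    if (declaredElement == null ||
        declaredElement.ShortName == SharedImplUtil.MISSING_DECLARATION_NAME)
    {
        // Syntax or reference error at this point - skip this declaration.
        return;
    }
    // ... normal analysis continues here ...
}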
Matt,
Thanks for the info...
I thought it might be an issue with the test file I'm using and made sure this compiled as expected in the unit test project.
I'll double check this later and report back.
Thanks for the help
Ollie
This is the test file I am using for the test fixture. I even included this file as a compiled file in the test fixture and this compiled perfectly fine.
The type IObservable<T> is defined in the core .NET Framework; it is not part of the Reactive Extensions add-on - so why is ReSharper not able to resolve the return type for IsReady?
The only other consideration is this is a .Net 4.0 assembly and test assembly.
Cheers
Ollie
namespace Resharper._7._1.ReactivePlugin.Tests
{
using System;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
class Program
{
static void Main(string[] args)
{
Observable.Return(42);
}
static int Number()
{
return 42;
}
static IObservable<bool> IsReady()
{
return Observable.Return(true);
}
static IObservable<bool> IsReadyWithScheduler(IScheduler scheduler = null)
{
return Observable.Return(true, scheduler);
}
}
}
Yep, the issue is that you're trying to test a .net 4 project. The test system creates a temp solution and project to host your test files in, allowing ReSharper to parse or otherwise handle the file in the right context. By default, it sets up a .net 3.5 project, and .net 3.5 doesn't know about IObservable. This is easily fixed - just add the [TestNetFramework4] attribute to your test method or class.
There are a bunch of other attributes you can add to change behaviour like this - look for the derived classes of ITestPlatformProvider (such as TestNetFramework4, TestSilverlight2, TestWindowsPhone70, TestMvc4, etc) or derived classes of ITestFlavoursProvider, which can provide the extra guids that change a normal C# project into a Sharepoint or Windows Phone project. There's also ITestMsCorLibFlagProvider, ITestSdkReferencesProvider and ITestLibraryReferencesProvider, which affect what assembly references are made in the project.
Thanks
Matt
Matt - thanks you're a star!
Ollie
Matt,
I suppose this is a general question about the way R# PSI relates to .Net type system.
I can see the return type, but how can I validate the return type - I want to make sure it is from a specific .Net assembly?
The IType returned has an inherited property of Module, which is an IPsiModule. This represents either the Visual Studio project or assembly that the type is defined in. You can try to downcast to IAssemblyPsiModule, which will then give you an IPsiAssembly with all the info you need to get at. Or you can use IPsiModule.ContainingProjectModule, which will return you an IModule which you can then downcast to IAssembly or IProject (these types are part of the Project Model, rather than the PSI. The hierarchies are similar, but intended for different purposes - these are for loading and building projects, while the PSI is for introspection of abstract syntax trees and references).
Alternatively, and I'm less sure on this one, you can downcast the IType to IDeclaredType (this might return null!) and that then gets you an AssemblyNameInfo which should give you the info you need. I'm less sure on this because I don't know if you can always downcast that IType to an IDeclaredType.
Hi there,
I made a project, where I want to pull a thread with two Lego-Wheels. The hardware all works as it should, but I got a problem using the stepper, which I could not solve by myself and google didn’t help either:
If the speed of the stepper is set too low, it decreases the speed even more.
In this program I calculated the speed and steps with the distance to pull and the time-span, in which the thread should be pulled this distance:
#include <Stepper.h>

int SPU = 2048;
Stepper Motor(SPU, 3, 5, 4, 6);

double roundsPerMinute = 0;
double rounds = 0;
double distance = 20;        // in cm
double timeSpan = 1;         // in min
double wheelDiameter = 23.3; // in mm

void setup() {
  Serial.begin(9600);
  double distancePerRound = (wheelDiameter / 10) * PI;
  rounds = distance / distancePerRound;
  roundsPerMinute = rounds / timeSpan; // calculate speed and needed rotations
}

void loop() {
  Serial.println("Rounds per minute: " + String(roundsPerMinute));
  Serial.println("Rounds: " + String(rounds));
  Motor.setSpeed(roundsPerMinute);
  long steps = rounds * 2048; // calculate steps from rounds
  Serial.println("Steps: " + String(steps));
  long start = millis(); // start measuring time
  // I tested that the .step function can only receive values up to 32767,
  // so I call it multiple times
  for (int i = 0; i < int(steps / 10000); i++) {
    Motor.step(10000);
  }
  Motor.step(steps % 10000);
  long neededTime = millis() - start; // stop measuring time
  Serial.println("Needed Time: " + String(neededTime / 1000.0));
  Serial.println();
}
This all works fine if roundsPerMinute is higher than about 5, but when it is lower, the problem I explained appears.
To name an example:
distance = 147 cm
timeSpan = 6 min
→ real distance: correct
→ real time: 6 min 41 sec
I hope you can understand what I mean.
Stepper: 28BYJ-48
Driver: ULN2003
What do I do wrong?
Thank you in advance!
In my first article using the Arduino 2009 board, I described a simple temperature sensor interfaced using Visual Basic.
I have developed the board and Visual Basic code to give a fairly usable indoor weather station.
The Arduino 2009 acts as a standalone weather station. It does not display the data. It can operate independently for months. It samples the sensors until the RAM is full. The RAM samples are stored on a sector of an SD card. Eventually, a year's weather samples could be stored on the SD card.
Each time the PC is connected, any unstored sectors are uploaded to files on the PC's hard drive. The data can then be displayed using all the facilities of the PC's display.
The central idea is that the PC has a file copy of each SD card sector, stamped with the date and time it was recorded.
The activity diagram for the program:
The Arduino IDE uses C++, and the actual program for this activity diagram is straightforward.
The SD card current sector is held in EEPROM so that after power off or reset, it can be reused. The Arduino has an EEPROM class which only allows reads and writes of bytes. To read a long (32 bits on the Arduino) requires some work:
inline void ready()
{
unsigned long lastblock = 0; //the last block number saved in the sd card
unsigned long tempblock = 0;
tempblock = EEPROM.read(0); // remember the LSB of the last saved block
lastblock |= tempblock;
tempblock = EEPROM.read(1); // remember the next LSB of the last saved block
lastblock |= tempblock << 8;
tempblock = EEPROM.read(2); // remember the next LSB of the last saved block
lastblock |= tempblock << 16;
tempblock = EEPROM.read(3); // remember the next MSB of the last saved block
lastblock |= tempblock << 24;
Serial.println("ready"); //send computer the ready to reset message
Serial.println(lastblock); //send computer the last saved block number
delay(10000); //every 10 seconds
}//end of ready
The Arduino does not have a class to read and write to SD cards, so I wrote my own. This is the .h file:
/* Card type: Ver2.00 or later Standard Capacity SD Memory Card
1.0 and 2.0 GB cards purchased in 2009 work well.
Usage: Must have global variable.
volatile unsigned char buffer[512];
Function calls.
unsigned char error = SDCARD.readblock(unsigned long n);
unsigned char error = SDCARD.writeblock(unsigned long n);
error is 0 for correct operation
read copies the 512 bytes from sector n to buffer.
write copies the 512 bytes from buffer to the sector n.
References: SD Specifications. Part 1. Physical Layer Simplified Specification
Version 2.00 September 25, 2006 SD Group.
Code examples: search "sd card"
Operation: The code reads/writes direct to the sectors on the sd card.
It does not use a FAT. If the card has been formatted the
FAT at the lowest sectors and files at the higher sectors
can be written over.
The card is not damaged but will need to be reformatted at
the lowest level to be used by windows/linux.
Timing: readblock or writeblock takes 44 msec.
Improvement: Could initialize so that can use version 1 sd and hc sd.
Instead of CMD1 need to use CMD8, CMD58 and CMD41.
*/
#ifndef SDCARD_h
#define SDCARD_h
#define setupSPI SPCR = 0x53; //Master mode, MSB first,
//SCK phase low, SCK idle low, clock/64
#define deselectSPI SPCR = 0x00; //deselect SPI after read write block
#define clearSPI SPSR = 0x00; // clear SPI interrupt bit
#define setupDDRB DDRB |= 0x2c; //set SS as output for cs
#define selectSDCARD PORTB &= ~0x04; //set the SS to 0 to select the sd card
#define deselectSDCARD PORTB |= 0x04; //set the SS to 1 to deselect the sd card
#include "WProgram.h"
class SDCARDclass
{
public:
unsigned char readblock(unsigned long Rstartblock);
unsigned char writeblock(unsigned long Wstartblock);
private:
unsigned char SD_reset(void);
unsigned char SD_sendCommand(unsigned char cmd, unsigned long arg);
unsigned char SPI_transmit(unsigned char data);
};//end of class SDCARDclass
extern SDCARDclass SDCARD;
#endif
So, when we need to save a sector of data, we do:
inline void lastblocksave()
{
unsigned int e = 0; //the error code from the sd card
e = SDCARD.writeblock(currentblock); //save this block of 256 integer samples
while (e != 0) //cant continue if sd card not working
{
Serial.println("writesderror"); //send computer sd card error
Serial.println(e); //send computer the error number
digitalWrite(8, HIGH); //turn led on to show sd card error
delay(10000); //every 10 seconds
}//end of sd card not working
currentblock +=1; //go to the next block in sd card
EEPROM.write(0,currentblock); //write the LSB of saved block to EEPROM
EEPROM.write(1,currentblock >> 8); //write the next LSB of saved block to EEPROM
EEPROM.write(2,currentblock >> 16); //write the next LSB of saved block to EEPROM
EEPROM.write(3,currentblock >> 24); //write the MSB of saved block to EEPROM
ramaddress = 0; //we can now start again to save samples in RAM
}//end of sd save
The PC program was written using Microsoft's Visual Basic Express IDE.
When the display program loads and before it is activated, we create a startup form which has all the routines to upload the data samples.
Private Sub ArduinoWeather_Load(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles MyBase.Load
Dim f As New showstart
f.ShowDialog() 'startup form to connect to arduino
While My.Computer.FileSystem.FileExists(cd & "\" & last_sector)
last_sector += 1 'find the next block
End While
If My.Computer.FileSystem.FileExists(cd & "\archive") Then
Dim st As Stream = File.Open(cd & "\archive", FileMode.Open, FileAccess.Read)
Dim br As New BinaryReader(st) 'to get samples from stream
Try
Dim n As Integer = 0 'a counter
Do
archived(n) = br.ReadInt64()
archivedisplay.Items.Add(archived(n))
If archived(n) > lastblock Then lastblock = archived(n)
n += 1 'we get the largest archive block
If n = 100 Then Exit Do 'no more room
Loop
Catch ex As Exception 'exception of none left
Finally
br.Close() 'must close
st.Close()
End Try 'exit try when all read
End If
fill_buffers() 'get all the samples into their buffers
If overflow.Text = "" Then 'enable displays
Try
com8 = My.Computer.Ports.OpenSerialPort("com8", 9600)
readthread = New Threading.Thread(AddressOf read)
readthread.Start() 'thread runs for whole program
'to get samples every 10 sec
Catch ex As Exception
comdisplay.Text = "no connection" & vbNewLine & _
"or" & vbNewLine & "no communication"
display_noconnect.Enabled = True 'just use a timer to display
End Try
End If
End Sub
The activity diagram for the start form is shown here:
The whole purpose of this startup is to ensure that every sector on the SD card is recorded on the hard drive with its real sample time. I start a thread which is used to upload any required data. The following code is the core of the procedure:
Dim st As Stream = File.Open(save_sd, FileMode.Create, FileAccess.Write)
Dim bw As New BinaryWriter(st) 'to send samples to stream
For i = 0 To 255 'all old samples
bw.Write(sd_samples(i)) 'send all the samples
Next 'sector stored in file
bw.Write(date_stamp.ToString("F")) 'add date to file
date_stamp = date_stamp.Add(TimeSpan.FromSeconds(850))
bw.Close() 'sends all samples to file
st.Close()
lastblock = j 'we have uploaded one sector
upload = "uploading"
Invoke(New messdelegate(AddressOf showmessage)) 'show the progress
The user will see a form center screen:
Next, in the user form load, we determine the number of sector files. Then, the archived files are stored in their own file. We then fill the display buffers from the relevant sector files.
Finally, we start a thread which will read the new data samples that will be displayed in the current display.
At last, the user form is displayed:
The display images are generated in their own class. The data is passed to the class and then the image is added to the user form. If the display is as detailed as possible, i.e., each 10-second sample has its own pixel, the full display span will be 4 hours (the display is 1,440 pixels wide, and 1,440 × 10 seconds = 4 hours). The data can be averaged to give a maximum of 4 weeks per display span.
The start time of the display can be offset to allow a view of any section of the 4 weeks data. This can be at the maximum resolution (10 sec. per sample).
This code implements the operation:
Private Sub display_all()
Try
Dim Tinterrim_buffer(241919) As Int32 'interim buffer for temperature display
Dim Hinterrim_buffer(241919) As Int32 'interim buffer for humidity display
Dim Ainterrim_buffer(241919) As Int32 'interim buffer for air pressure display
Dim Cdisplay_start_time As DateTime 'the current display start time
Select Case True
Case RadioButton1.Checked
display_span = span_define(0) '4 hours
Case RadioButton2.Checked
display_span = span_define(1) '8 hours
Case RadioButton3.Checked
display_span = span_define(2) '12 hours
Case RadioButton4.Checked
display_span = span_define(3) '24 hours
Case RadioButton5.Checked
display_span = span_define(4) '2 days
Case RadioButton6.Checked
display_span = span_define(5) '4 days
Case RadioButton7.Checked
display_span = span_define(6) '7 days
Case RadioButton8.Checked
display_span = span_define(7) '2 weeks
Case RadioButton9.Checked
display_span = span_define(8) '4 weeks
End Select
For i = 0 To 241919
If i < last_pointer + 1 Then
Tinterrim_buffer(241919 - i) = temp_buffer(last_pointer - i)
Hinterrim_buffer(241919 - i) = humid_buffer(last_pointer - i)
Ainterrim_buffer(241919 - i) = air_buffer(last_pointer - i)
Else
Tinterrim_buffer(241919 - i) = 999999
Hinterrim_buffer(241919 - i) = 999999
Ainterrim_buffer(241919 - i) = 999999
End If
Next
d.display_span_time = TimeSpan.FromMinutes(240 * display_span)
Dim number_display As Integer = _
1440 * display_span - 1 'the width of current span
If cursor_time + number_display < 241920 Then
Cdisplay_start_time = display_start_time.AddDays(-28 * cursor_time / 241919)
Dim counter As Integer = 0
For i = 241919 - cursor_time To 0 _
Step -display_span 'copy working to display
Dim average = 0
For j = 0 To display_span - 1
average += Tinterrim_buffer(i - j)
Next 'straight average of the number of samples required
d.temperature(1439 - counter) = average / display_span
average = 0
For j = 0 To display_span - 1
average += Hinterrim_buffer(i - j)
Next 'straight average of the number of samples required
d.humidity(1439 - counter) = average / display_span
average = 0
For j = 0 To display_span - 1
average += Ainterrim_buffer(i - j)
Next 'straight average of the number of samples required
d.airpressure(1439 - counter) = average / display_span
counter += 1 'we have done one
If counter = 1440 Then Exit For
Next
Else
hasbeenoffset.Text = "selected offset out of range"
cursor_time = 0 'reset the value
End If
d.display_start_time = Cdisplay_start_time
If zoom_TH Or zoom_AR Then
If zoom_TH Then
d.full_temp_hum() 'expand temp humid
Else
d.full_air_rain() 'expand air rain
End If
Else 'normal display
d.scale_temp_hum()
d.scale_air_rain()
End If
Catch ex As Exception
End Try
End Sub
The user form can start a file management form:
Here is the code to archive a file:
Private Sub archivefile_Click(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles Button5.Click
If My.Computer.FileSystem.FileExists(cd & "\archive") Then
Dim words() As String = archived.Text.Split(vbNewLine)
Dim x = CLng((words.Length - 1))
If archiveblock > CLng(words(words.Length - 2)) + 100 Then
Dim str As Stream = File.Open(cd & "\archive", _
FileMode.Append, FileAccess.Write)
Dim bwr As New BinaryWriter(str) 'to send samples to stream
bwr.Write(archiveblock) 'send all the samples to disk
bwr.Close() 'sends all samples to file
str.Close()
Else
MsgBox("The archived block must be at least" & vbNewLine & _
"one day -that is 100 bigger than last")
End If
Dim st As Stream = File.Open(cd & "\archive", FileMode.Open, FileAccess.Read)
Dim br As New BinaryReader(st) 'to get samples
archived.Text = ""
Try
Do
archived.Text = archived.Text & (br.ReadInt64()) & vbNewLine
Loop
Catch ex As Exception 'exception of none left
Finally
br.Close() 'must close
st.Close()
End Try 'exit try when all read
End If
End Sub
linecache module in Python: cache a text file
linecache is one of the most useful modules of the Python standard library; it can be used to get random access to any text file. It is used by the traceback module (which extracts, formats and prints the stack traces of Python programs, including the error messages or exceptions raised in the program) to retrieve source lines and include them in the formatted traceback.
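For example, the source line printed under each frame of a traceback is fetched through linecache (a small, self-contained illustration; the function name is made up):

import traceback

def boom():
    raise ValueError("demo")

try:
    boom()
except ValueError:
    # The 'raise ValueError("demo")' source line shown in the printed
    # traceback is retrieved through linecache.
    traceback.print_exc()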
This module is used extensively when dealing with Python source files. It returns the requested line(s) by indexing into a list of the file's lines. Because the lines are cached, repeated reads and parses of the file are avoided, which makes repeated access efficient.
Importing the module
The linecache module is imported like this:
>>> import linecache
Usage of linecache module
Following is a list of the linecache module's methods with example code for each:
1. getline():
getline() is the most important and commonly used method of linecache module. It has two parameters. The first parameter is the file name and the second parameter is the line number that has to be displayed, i.e. getline(file, n). It returns the nth line from the file that is passed through the file parameter. It will return '' on errors or when the line or file in not found. The terminating newline character will be included for lines that are found.
>>> linecache.getline('/folder/file.txt', 2)
Hello World!
In the above code, the getline method is used to display the 2nd line of the text file 'file.txt' in the 'folder' directory.
When multiple lines in a range need to be displayed, the getlines() function is used, and the result can be sliced:
>>> linecache.getlines('/folder/file.txt')[1:3]
['Hello World!\n', 'Hello!\n']
The above code returns the 2nd and 3rd lines of the text file 'file.txt' in the 'folder' directory.
2. clearcache():
When the lines from files previously read using getline() are no longer needed, we can clear the cache with the clearcache() function. It has no parameters.
>>> linecache.clearcache()
3. checkcache():
It checks the validity of the cache. It is used when the files in the cache may have changed on disk and a fresh version is needed. If filename is omitted, it will check all the entries in the cache.
>>> linecache.checkcache()
4. lazycache():
It captures details about non-file-based modules to allow their lines to be accessed by getline() even if module_globals is None during that later call. This avoids doing I/O until a line is actually needed: the module loader is asked for the source only when getlines() is called, not immediately.
It returns True only when a lazy-load entry is actually registered, which requires that the filename is cachable, is not already cached, and that the supplied module_globals contains a usable loader. With module_globals=None there is no loader to register, so the call below returns False.
>>> linecache.lazycache('file.txt', module_globals=None)
Output:
False
Applications:
Let's see some applications of the linecache module:
1. Reading Specific Lines
As we have seen above, using this module we can read lines from a text file. Line numbers in the linecache module start at 1, but if we split the result into a list ourselves, we index it from 0. We may also need to strip the trailing newline from the value returned from the cache. Let's see an example:
Let's take a text file 'Hello.txt', the content of which is shown below (note the blank 4th line):
HelloA
HelloB
HelloC

HelloD
Now let's try to read and display the 3rd line.
import linecache

line = linecache.getline('Hello.txt', 3)
print(line)
Output:
HelloC
2. Handling Empty Lines
Let's see an example of the output when the line to be displayed is empty.
import linecache

# Blank lines include the newline
print('"%s"' % linecache.getline("Hello.txt", 4))
Output:
"
"
The 4th line of the file "Hello.txt" is blank, so the string printed between the quotes is just a newline character.
3. Error Handling
If the requested line number falls out of the range of valid lines in the file, linecache returns an empty string too.
import linecache

# Line 7 is past the end of the file
print('"%s"' % linecache.getline("Hello.txt", 7))
Output:
""
Source code:
A cache is used extensively in the operations of the linecache module. A cache is a component or temporary storage that stores data so that future requests for that data can be served faster. The data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere.
Now let's dive deep into the backend code of the different functions of the linecache module (adapted from the CPython source).
Let's take a sample file with some sample content:
filename.txt
Hello World 1
Hello World 2
Hello World 3
Let's now see the source code of different linecache module methods-
Importing important modules-
import functools
import sys
import os
import tokenize
1. clearcache():
cache = {}  # Initializing the cache

def clearcache():
    """Clear the cache entirely."""
    cache.clear()
The .clear() method empties the cache dictionary entirely.
2. getline():
def getline(filename, lineno, module_globals=None):
    lines = getlines(filename, module_globals)
    if 1 <= lineno <= len(lines):
        return lines[lineno - 1]
    return ''
It reads a line from the file via the cache, updating the cache if it doesn't have an entry for this file already. If lineno is greater than or equal to 1 and less than or equal to the total number of lines available, it returns that particular line with lines[lineno - 1].
3. getlines():
def getlines(filename, module_globals=None):
    if filename in cache:
        entry = cache[filename]
        if len(entry) != 1:
            return cache[filename][2]
    try:
        return updatecache(filename, module_globals)
    except MemoryError:
        clearcache()
        return []
Similarly, for the getlines() method: if the cache already holds a full entry for the file (the entry length is not 1, i.e. it is not a lazy entry), the cached list of lines is returned. Otherwise it updates the cache, clearing it entirely if memory runs out.
4. checkcache():
def checkcache(filename=None):
    """Discard cache entries that are out of date.
    (This is not checked upon each call!)"""
    if filename is None:
        filenames = list(cache.keys())
    elif filename in cache:
        filenames = [filename]
    else:
        return

    for filename in filenames:
        entry = cache[filename]
        if len(entry) == 1:
            continue  # lazy cache entry
        size, mtime, lines, fullname = entry
        if mtime is None:
            continue  # no-op for files loaded via a __loader__
        try:
            stat = os.stat(fullname)
        except OSError:
            del cache[filename]
            continue
        if size != stat.st_size or mtime != stat.st_mtime:
            del cache[filename]
If filename is None, list(cache.keys()) selects every cached entry to check; otherwise only the given file is checked. An entry of length 1 is a lazy entry, so it is skipped. Otherwise the entry is unpacked into the variables size, mtime, lines and fullname as shown above. If mtime is None, the file was loaded via a __loader__, so it is skipped too. If the file can no longer be stat-ed, or its size or modification time on disk has changed, the entry is deleted from the cache with the del keyword.
5. updatecache():
def updatecache(filename, module_globals=None):
    if filename in cache:
        if len(cache[filename]) != 1:
            del cache[filename]
    if not filename or (filename.startswith('<') and filename.endswith('>')):
        return []

    fullname = filename
    try:
        stat = os.stat(fullname)
    except OSError:
        basename = filename

        if lazycache(filename, module_globals):
            try:
                data = cache[filename][0]()
            except (ImportError, OSError):
                pass
            else:
                if data is None:
                    # No luck, the PEP302 loader cannot find the source
                    # for this module.
                    return []
                cache[filename] = (
                    len(data),
                    None,
                    [line + '\n' for line in data.splitlines()],
                    fullname
                )
                return cache[filename][2]

        # Try looking through the module search path, which is only useful
        # when handling a relative filename.
        if os.path.isabs(filename):
            return []

        for dirname in sys.path:
            try:
                fullname = os.path.join(dirname, basename)
            except (TypeError, AttributeError):
                # Not sufficiently string-like to do anything useful with.
                continue
            try:
                stat = os.stat(fullname)
                break
            except OSError:
                pass
        else:
            return []
    try:
        with tokenize.open(fullname) as fp:
            lines = fp.readlines()
    except OSError:
        return []
    if lines and not lines[-1].endswith('\n'):
        lines[-1] += '\n'
    size, mtime = stat.st_size, stat.st_mtime
    cache[filename] = size, mtime, lines, fullname
    return lines
It updates the cache entry and returns its list of lines. If the filename is empty or is a synthetic name wrapped in angle brackets, it returns an empty list. The os.stat() method asks the operating system for the file's metadata. If the file cannot be stat-ed directly, a lazy cache entry is tried first: the stored loader callable is invoked, and if it yields source text, the entry is rebuilt from that data and the lines are returned. Otherwise the module search path (sys.path) is scanned, joining each directory with the file's base name; this is only useful for relative filenames, so an absolute path that failed returns an empty list. If a candidate file is found, tokenize.open() opens it with its encoding detected and the lines are read; an OSError again yields an empty list. If at least one line was read but the last line does not end with '\n', one is appended with lines[-1] += '\n'. Finally the size, modification time, lines and full name are stored in the cache and the lines are returned.
6. lazycache():
def lazycache(filename, module_globals):
    if filename in cache:
        if len(cache[filename]) == 1:
            return True
        else:
            return False
    if not filename or (filename.startswith('<') and filename.endswith('>')):
        return False
    # Try for a __loader__, if available
    if module_globals and '__loader__' in module_globals:
        name = module_globals.get('__name__')
        loader = module_globals['__loader__']
        get_source = getattr(loader, 'get_source', None)
        if name and get_source:
            get_lines = functools.partial(get_source, name)
            cache[filename] = (get_lines,)
            return True
    return False
It seeds the cache for filename with module_globals; when getline() is later called, the module loader will be asked for the source. If there is an entry in the cache already, it is not altered. True is returned if a lazy load is registered in the cache, otherwise False. The function checks that module_globals exists and contains a '__loader__', and then retrieves the module name, the loader, and the loader's get_source method, as shown above. If both the name and a get_source method exist, a one-element tuple holding the bound callable is stored in the cache under the filename. functools.partial derives a new callable from get_source with the module name already fixed as its first argument.
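As a minimal illustration of that last point (the function body and module name below are made up for the example):

from functools import partial

def get_source(name):
    return "source of " + name

# Bind the module name now; the resulting callable needs no arguments.
get_lines = partial(get_source, "mymodule")
print(get_lines())  # prints: source of mymodule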
These are some of the functions and uses of the linecache module. See the module's documentation for more details.
As mentioned in the post VSTS Rangers Projects – TFS Migration: A state of the nation status update, we are busy reviewing some of the new and the existing re-factored artefacts. The following are some of the questions we have received, with the associated answers. The references for the questions are listed below:
- AIT. (2009). TFS Migration and Synchronization "Toolkit How To Guide". AIT Applied Information Technologies AG.
- Olausson, M. (2009). Project Correspondence. VSTS Rangers.
- VSTS-Rangers. (2009). TFS2TFS project Copy VSTS Rangers Team.
Some of the top questions
What adapters are planned by the TFS Migration Tools team? (Olausson, 2009)
All of the current Microsoft solutions, like CC to TFS and QC to TFS are good candidates for adapters that may be ported to the new platform. The current goal, however, is to enable partners to build solutions and for the TFS Migration and Synchronization Toolkit team not to develop any further adapters. Currently the TFS2TFS and WSS2TFS adapters are tested samples of custom adapters, based on the new toolkit.
Will a User Interface be released or are we stuck with the console application? (Olausson, 2009)
A graphical user interface is planned and will be released as part of future TFS Migration and Synchronization Toolkit releases.
Will there be a windows service, in place of the console application, which will allow continuous synchronization? (AIT, 2009)
In the new toolkit, there will be a windows service in place of the console application, which will allow users to implement continuous synchronization.
Can I perform concurrent, more than one, synchronization pipeline? (AIT, 2009)
The new Windows Service host will be able to handle this requirement. The Windows Service will be looking to the DB for active sessions, will restart/resume ones that need that sort of treatment and will pick up new ones concurrently.
Will v2 allow me to trigger synchronization other than by a timeout event as is the case in v1? (AIT, 2009)
The toolkit is currently timeout based, but there are plans to expose a refresh trigger on a WCF endpoint exposed by the Windows Service. Using the refresh trigger, TFS check-in events could be used as a trigger condition.
Why are all classes, i.e. the classes used for work item tracking synchronization, in one namespace? (AIT, 2009)
The V0.1 toolkit is TFS-biased and the adapters, especially the WIT Adapter, are tightly coupled with the toolkit. In the new toolkit there is clear separation of the toolkit and the adapters, i.e. the toolkit won’t have any reference to the TFS assemblies and adapter classes are intentionally not packaged in the Toolkit namespace.
Why is the WIT Adapter called “TfsWitAdapter” and the VC Adapter “TfsAdapter”, and not TfsVCAdapter? (VSTS-Rangers, 2009)
The new version of the toolkit has a re-factored code base, which features a TfsWitAdapter and a TfsVcAdapter, for example.
What is your guidance on how to handle conflicts when an ongoing synchronization is taking place - is the synchronization paused until the conflict is resolved? (VSTS-Rangers, 2009)
- For VC, the default policy is to stop on any conflict and that is what we do. VC would typically only have one conflict to resolve in a real run.
- For WIT, the default policy is to continue.
If the TFS2TFS tool/runtime and process fail during an ongoing synchronization does the tool have any idea of the failure the next time it launches that session? (VSTS-Rangers, 2009)
All unresolved conflicts are persisted to the Migration DB. If power is shut off to one of the servers in a mirroring relationship, for instance, it would currently translate into one of the built-in conflict types - a general toolkit conflict. This is used to wrap exceptions and other runtime events that the migration framework does not understand as a conflict, whereby the only resolution action at the moment is "manual resolution". That basically means ... “I'm the user and I've fixed the problem - try again”.
The idea with conflict types is that adapter writers might selectively pick classes of issues out of that general pool and name them to associate more specific and relevant resolution actions. That network communication failure, for instance, might be raised as a GeneralNetworkCommunicationFailure conflict in the future and the user might have a resolution rule in scope for that conflict type that says something like ... “any time you see this, I want you to try up to N times over M minutes before giving up and calling this an unresolved conflict”.
The End … for today
Please send us additional questions which we can answer and include in our TFS Migration guidance.
Until next time, cheers.
Hi, I couldn't resist and translated this post; it's really interesting information about the roadmap.
Can't resist … I have to translate this information, it's great information.
regards from Spain
El Bruno, great! I have no idea what the translated version says, but I am very happy that this information is proving valuable 🙂
Is there a way to migrate from a specific changeset and on? | https://blogs.msdn.microsoft.com/willy-peter_schaub/2009/06/17/vsts-rangers-projects-tfs-migration-questions-and-answers/ | CC-MAIN-2017-39 | refinedweb | 882 | 60.95 |
Is it possible to blink 3 LEDs at the same time, but with all 3 LEDs blinking at different rates?
Yes.
Do a site search on things like "blink without delay" or "using millis".
millis() timing tutorials:
Several things at a time.
Beginner's guide to millis().
Blink without delay().
The initial examples delivered with the Arduino IDE are, IMO, leading newcomers to a wrong idea of how coding on a microcontroller works.
That's because these initial examples use the command delay().
Jump over delay(). delay is delaying your progress in learning how real programming works.
Of course, there are different ways different people learn.
Some learn by reading, some with videos. Just to extend the ways to learn it:
Non-blocking timing enables you to have hundreds of things each "blinking" independently of all the others. Non-blocking timing is based on the function millis().
These two videos explain non-blocking timing based on function millis()
best regards Stefan
Of course you could do it with millis() directly.
But it may be easier to use a library.
Here is an example from my MobaTools library that does what you want with 2 LEDs. It should be easy to extend to 3 LEDs.
(The original example in the lib is commented in German.)
#include <MobaTools.h>

/* Demo: Time delays without delay command.
   In principle, the 'MoToTimer' works like a kitchen timer: you wind it up
   to a certain time, and then it runs back to 0. Unlike the kitchen alarm
   clock, however, it does not ring. You have to check cyclically to see if
   the time has elapsed. But that fits perfectly with the principle of the
   'loop', i.e. an endless loop in which one cyclically queries for events.

   Method calls:
   MoToTimer.setTime( long Runtime );  sets the time in ms
   bool = MoToTimer.running();  == true as long as the time is still running,
                                == false when expired

   In contrast to the procedure with delay(), this allows for several
   independent and asynchronous cycle times. In this demo 2 leds are
   flashing with different clock rates. */

const int led1P = 5;
const int led2P = 6;
MoToTimer flashTime1;
MoToTimer flashTime2;

void setup() {
  pinMode(led1P, OUTPUT);
  pinMode(led2P, OUTPUT);
}

void loop() {
  // -------- Blinking of the 1st Led ------------------
  // This led flashes with a non-symmetrical clock ratio
  if ( flashTime1.running() == false ) {
    // Flashing time expired, toggle output and rewind time
    if ( digitalRead( led1P ) == HIGH ) {
      digitalWrite( led1P, LOW );
      flashTime1.setTime( 600 );
    } else {
      digitalWrite( led1P, HIGH );
      flashTime1.setTime( 300 );
    }
  }

  // -------- Blinking of the 2nd Led ------------------
  // This led flashes symmetrically
  if ( flashTime2.running() == false ) {
    // Flashing time expired, toggle output and rewind time
    if ( digitalRead( led2P ) == HIGH ) {
      digitalWrite( led2P, LOW );
    } else {
      digitalWrite( led2P, HIGH );
    }
    flashTime2.setTime( 633 );
  }
}
However, the advice on how to write non-blocking code is important in any case.
Yes, it is possible. Make one object per LED containing the pin address and the timing information. The BWOD example from the IDE is used to process the timing information from the object.
/* BLOCK COMMENT
   ATTENTION: This Sketch contains elements of C++. */
#define ProjectName "Blinking 3 leds concurrently and independently?"

// hardware and timer settings
constexpr byte LedOnePin {2};   // portPin o---|220|---|LED|---GND
constexpr byte LedTwoPin {3};   // portPin o---|220|---|LED|---GND
constexpr byte LedThreePin {4}; // portPin o---|220|---|LED|---GND

struct GROUP {
  byte pin;
  unsigned long stamp;
  unsigned long duration;
} groups[] {
  {LedOnePin, 0, 1000},
  {LedTwoPin, 0, 1333},
  {LedThreePin, 0, 2222},
};

unsigned long currentTime;
// ------------------------------------------------
void setup() {
  Serial.begin(9600);
  Serial.println(F("."));
  Serial.print(F("File   : ")), Serial.println(__FILE__);
  Serial.print(F("Date   : ")), Serial.println(__DATE__);
  Serial.print(F("Project: ")), Serial.println(ProjectName);
  pinMode(LED_BUILTIN, OUTPUT); // used as heartbeat indicator
  for (auto group : groups) pinMode(group.pin, OUTPUT);
  // check outputs
  for (auto group : groups) digitalWrite(group.pin, HIGH), delay(1000);
  for (auto group : groups) digitalWrite(group.pin, LOW), delay(1000);
}

void loop () {
  currentTime = millis();
  digitalWrite(LED_BUILTIN, (currentTime / 500) % 2);
  for (auto &group : groups) {
    if (currentTime - group.stamp >= group.duration) {
      group.stamp = currentTime;
      digitalWrite(group.pin, !digitalRead(group.pin));
    }
  }
}
Have a nice day and enjoy coding in C++.
My "2-cents"...
I'm not going to write the code but this isn't too hard if you understand how Blink Without Delay works. You can start with Blink Without Delay and make some modifications to add one LED, then another.
You're going to need 3 if-statements/structures, one for each LED.
You'll also need a few separate variables for each LED to keep track of the 3 on/off states and the 3 times... These are variables that you define.
Of course, there is only one currentMillis.
But instead of ledState we can have ledState1, ledState2, and ledState3. (Or if you have different color LEDs you can have RedState, BlueState, GreenState, or whatever is the most descriptive.)
Instead of Interval we can have Interval1, Interval2, Interval3.
Finally we need previousMillis1, previousMillis2, previousMillis3.
My take (unless I misunderstand the requirement). Super simple use of BWOD and Several things at a time.
const byte redPin = 4;
const byte bluePin = 5;
const byte greenPin = 6;

unsigned long redInterval = 500;
unsigned long blueInterval = 100;
unsigned long greenInterval = 2000;

unsigned long currentMillis = 0;

void setup() {
  Serial.begin(115200);
  pinMode(redPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
  pinMode(greenPin, OUTPUT);
}

void loop() {
  currentMillis = millis();
  flashRed();
  flashBlue();
  flashGreen();
}

void flashRed() {
  static unsigned long timer = 0;
  if (currentMillis - timer >= redInterval) {
    timer = currentMillis;
    digitalWrite(redPin, !digitalRead(redPin));
  }
}

void flashBlue() {
  static unsigned long timer = 0;
  if (currentMillis - timer >= blueInterval) {
    timer = currentMillis;
    digitalWrite(bluePin, !digitalRead(bluePin));
  }
}

void flashGreen() {
  static unsigned long timer = 0;
  if (currentMillis - timer >= greenInterval) {
    timer = currentMillis;
    digitalWrite(greenPin, !digitalRead(greenPin));
  }
}
Wow! Thank you so much for the replies. I will read everyone’s comment slowly and test it out and upload the result when I am free!
consider a more generic approach
struct Led {
  const byte          Pin;
  const unsigned long Period;
  unsigned long       msecLst;
} leds [] = {
  { 10, 150 },
  { 11, 300 },
  { 12, 400 },
  { 13, 550 },
};

const int N_LEDS = sizeof(leds) / sizeof(Led);

// -----------------------------------------------------------------------------
void loop (void)
{
  unsigned long msec = millis ();

  Led *p = leds;
  for (unsigned n = 0; N_LEDS > n; n++, p++) {
    if ( (msec - p->msecLst) > p->Period) {
      p->msecLst = msec;
      digitalWrite (p->Pin, ! digitalRead (p->Pin));
    }
  }
}

// -----------------------------------------------------------------------------
void setup (void)
{
  Serial.begin (9600);

  for (unsigned n = 0; N_LEDS > n; n++)
    pinMode (leds [n].Pin, OUTPUT);
}
Update: It turns out this feature has been implemented to test n3386. You can read the discussion and even see the patch on the mailing list:
Are your two favourite C++11 features decltype and declval? I have mixed feelings. On one hand, it lets me write code like this
template< class X, class Y > constexpr auto add( X&& x, Y&& y ) -> decltype( std::declval<X>() + std::declval<Y>() ) { return std::forward<X>(x) + std::forward<Y>(y); }
and know that it will work on any type and do the optimal thing if x or y should be moved or copied (like if X=std::string). On the other hand, it's tedious. "forward" and "declval" are both seven letter words that have to be typed out every time for every function, per variable. Then there's the std:: prefix and <X>(x) suffix. The only benefit of using declval over forward is a savings of one letter not typed.
But someone must have realized there's a better way. If the function is only one statement, and the return type is the declval of that statement, couldn't the compiler just figure out what I mean when I say this?
template< class X, class Y > constexpr auto add( X&& x, Y&& y ) { return std::forward<X>(x) + std::forward<Y>(y); }
March of this year, one Jason Merrill proposed just this (n3386) and GCC has already implemented it in 4.8 (change log), though it requires compiling with -std=c++1y. One can play with 4.8 on Ubuntu with gcc-snapshot. (Note that it doesn't modify your existing gcc install(s) and puts it in /usr/lib/gcc-snapshot/bin/g++. Also, I have been unable to install any but the third-to-most recent package.) I hope it is not too much trouble to install on other distros/platforms.
So if your favourite c++11 feature is decltype and declval, prepare to never use them again. The compiler can deduce the type for you implicitly, and better, and it works even if the function is longer than one line. Take for example, reasonably complex template functions like the liftM function I wrote for "Arrows and Keisli":
constexpr struct LiftM { template< class F, class M, class R = Return<typename std::decay<M>::type> > constexpr auto operator () ( F&& f, M&& m ) -> decltype( std::declval<M>() >>= compose(R(),std::declval<F>()) ) { return std::forward<M>(m) >>= compose( R(), std::forward<F>(f) ); } template< class F, class A, class B > constexpr auto operator () ( F&& f, A&& a, B&& b ) -> decltype( std::declval<A>() >>= compose ( rcloset( LiftM(), std::declval<B>() ), closet(closet,std::declval<F>()) ) ) { return std::forward<A>(a) >>= compose ( rcloset( LiftM(), std::forward<B>(b) ), closet(closet,std::forward<F>(f)) ); } } liftM{};
Could be written instead:
constexpr struct LiftM { template< class F, class M > constexpr auto operator () ( F&& f, M&& m ) { using R = Return< typename std::decay<M>::type >; return std::forward<M>(m) >>= compose( R(), std::forward<F>(f) ); } template< class F, class A, class B > constexpr auto operator () ( F&& f, A&& a, B&& b ) { return std::forward<A>(a) >>= compose ( rcloset( LiftM(), std::forward<B>(b) ), closet(closet,std::forward<F>(f)) ); } } liftM{};
Automatic type deduction didn't exactly make this function more obvious or simple, but it did remove the visual cruft and duplication of the definition. Now, if I improve this function to make it more clear, I won't have a decltype expression to have to also edit.
To be fair, this doesn't entirely replace decltype. auto doesn't perfect forward. But it seems to work as expected, most of the time.
For another example of the use-case of auto return type deduction, consider this program:
#include <tuple> int main() { auto x = std::get<0>( std::tuple<>() ); }
This small, simple, and obviously wrong program generates an error message 95 lines long. Why? GCC has to check and make sure this isn't valid for the std::pair and std::array versions of get, and when it checks the tuple version, it has to instantiate std::tuple_element recursively to find the type of the element. It actually checks for the pair version first, so one has to search the message for the obviously correct version and figure out why it failed. A simple one-off bug in your program could cause a massive and difficult-to-parse error message. We can improve this simply.
#include <tuple> template< unsigned int i, class T > auto get( T&& t ) { using Tup = typename std::decay<T>::type; static_assert( i < std::tuple_size<Tup>::value, "get: Index too high!" ); return std::get<i>( std::forward<T>(t) ); } int main() { int x = get<0>( std::tuple<>() ); }
How much does this shrink the error by? Actually, it grew to 112 lines, but right at the top is
auto.cpp: In instantiation of 'auto get(T&&) [with unsigned int i = 0u; T = std::tuple<>]': auto.cpp:13:36: required from here auto.cpp:7:5: error: static assertion failed: get: Index too high!
The error message might be a little bigger, but it tells you right off the bat what the problem is, meaning one has less to parse.
Similar to this static_assert example, typedefs done as default template arguments can be moved to the function body in many cases.
template< class X, class Y = A<X>, class Z = B<Y> > Z f( X x ) { Z z; ... return z; }
can now be written
template< class X > // Simpler signature. auto f( X x ) { using Y = A<X>; // Type instantiation logic using Z = B<Y>; // done in-function. Z z; ... return z; }
Looking forward. This release of GCC also implements inheriting constructors, alignas, and attribute syntax. It also may have introduced a few bugs; for example, my library, which compiles with 4.7, does not with 4.8, producing many undefined references.
The other features of this release might not be quite so compelling, but automatic type deduction alone is powerful enough to change the way coding is done in C++--again. Heavily templated code will become a breeze to write and maintain, as figuring out the return type is often the hardest part. I find it encouraging that this has been implemented so quickly. Of course, it's not standard, at least not yet.
On a final note, I wonder how this will interact with the static if. It would be nice, were the following well-formed:
template< class X > auto f( X x ) { if( std::is_same<int,X>::value ) return "Hi"; // Int? Return string. else return x / 2; // Other? halve. } | http://yapb-soc.blogspot.com/2012/12/gcc-48-has-automatic-return-type.html | CC-MAIN-2015-48 | refinedweb | 1,098 | 61.36 |
Robotics - Writing and Testing VPL Services for Serial Communication
By Trevor Taylor | February 2010
Microsoft Robotics Developer Studio (RDS) is, as you’d expect, a platform for programming robots. RDS first shipped in 2006 and the latest version, RDS 2008 R2, was released in June 2009.
RDS consists of four major components: the Concurrency and Coordination Runtime (CCR), Decentralized Software Services (DSS), Visual Programming Language (VPL) and Visual Simulation Environment (VSE). Sara Morgan wrote about the VSE in the June 2008 issue of MSDN Magazine (msdn.microsoft.com/magazine/cc546547).
VPL, however, is more pertinent to this article. VPL is a dataflow language, which means you create programs by drawing diagrams on the screen. At run time, messages flow from one block to another in the diagram, and this dataflow is effectively the execution path of the program. Because VPL is built on top of CCR and DSS, dataflows can occur asynchronously (as a result of notifications from services) and can also run in parallel. While VPL is intended for novice programmers, experienced coders can also find it useful for for prototyping.
This article outlines a simple RDS service that allows you to send and receive data using a serial port (also known as a COM port). The example code illustrates some of the key concepts involved in writing reusable RDS services.
To learn more about RDS and download the platform, go to microsoft.com/robotics. The package includes a useful help file, which is also available at msdn.microsoft.com/library/dd936006. You can get further help by posting questions to the various Robotics Discussion Forums at social.msdn.microsoft.com/Forums/en-us/category/robotics.
RDS Services
RDS services are built using CCR and DSS. They are conceptually similar to Web services because RDS has a service-oriented architecture (SOA) based on a Representational State Transfer (REST) model using the Decentralized Software Services Protocol (DSSP) for communication between services. Wading through all this alphabet soup, what this means is that you don’t talk directly to RDS services. Instead, you send messages to a proxy, which acts as the external interface to the service (an approach Web developers will be familiar with). It also means that services can be distributed anywhere on the network.
Using a proxy has two effects. First, messages sent between services are serialized before transmission and deserialized at the other end. XML is the usual format for the data being transmitted. Second, the proxy defines a contract—effectively the set of APIs that are exposed to other services.
Every service has an internal state, which you can retrieve or modify by operations on the service. These involve sending a request message and waiting for a response message in a process similar to that used by most Web services.
When a service’s state changes, it can send out notifications to subscribers. This publish-and-subscribe approach makes RDS services different from traditional Web services because notification messages are sent asynchronously to subscribers.
When you build a new service, it automatically becomes visible in VPL and you can start using it immediately. This is one of the key RDS features, and it makes testing and prototyping very easy—you don’t have to write test harnesses in C#, because you can use VPL instead.
Controlling a Robot Remotely
Many simple educational robots have 8- or 16-bit microcontrollers for their on-board brains. But because RDS runs under the .NET Framework on Windows, it doesn't generate code that can run directly on these robots. They must be controlled remotely via a communications link instead. (The alternative is to have an on-board PC, as on the MobileRobots Pioneer 3DX.)
Since the majority of microcontrollers support serial ports, a serial connection is the obvious solution. However, supplying the link using a cable isn’t ideal—it limits the robot’s mobility.
As an alternative, you can use Bluetooth to create a wireless connection by installing a serial-to-Bluetooth device on the robot. Some robots, such as the LEGO NXT, come with Bluetooth built in. Others, such as the RoboticsConnection Stinger, have optional Bluetooth modules. Bluetooth is a good choice, given that it’s standard on most laptops these days. Even if your PC doesn’t have Bluetooth, you’ll find inexpensive, readily available USB Bluetooth devices.
The good news is that you don’t have to know anything about programming Bluetooth because the connection to the robot will appear as a virtual serial port. You can use the same code as you would if a physical cable were providing the serial connection. The only difference is that you have to establish a pairing between your PC and the Bluetooth device on the robot so a virtual COM port will be created.
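For illustration, opening a Bluetooth virtual COM port from .NET looks exactly like opening a physical one; the port name, baud rate and command below are assumptions that depend on your pairing and your robot:

using System.IO.Ports;

class BluetoothSerialDemo
{
    static void Main()
    {
        // COM5 and 57600 baud are assumptions; use the virtual port created
        // when you paired the robot, and the robot's configured rate.
        SerialPort port = new SerialPort("COM5", 57600, Parity.None, 8, StopBits.One);
        port.Open();
        port.Write("r\r");                         // send a command to the robot
        System.Console.WriteLine(port.ReadLine()); // print its response
        port.Close();
    }
}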
Some robot controller boards have firmware that can accept commands using a simple command language. For example, with the Serializer from RoboticsConnection (which gets its name from its use of a serial protocol), you can type commands to the controller via a terminal program like HyperTerminal. Commands are all human-readable, and you terminate each by pressing Enter.
If you’re designing your own protocol for use with a robot, you need to make a few choices. First, you must decide whether you’ll send binary data. Converting binary values to hexadecimal or decimal for transmission requires more bandwidth and increases the processing overhead for the on-board CPU. On the other hand, it makes reading the messages much easier, and you won’t experience any strange behavior due to misinterpreted control characters.
The next question is whether you want to use fixed-length packets or a more flexible variable-length format. Fixed-length is easier to parse and works best with hexadecimal.
You should also consider whether you’ll use a checksum. For computer-to-computer communication calculating check digits is easy. If, however, you want to test your robot by typing in commands manually, figuring out the check digits gets very tricky. When using checksums, typically the receiver sends back an acknowledgement (ACK) or negative acknowledgement (NAK) depending on whether the command came across successfully or not. Your decision about using a checksum really comes down to the reliability of the connection.
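For illustration, here is a sketch of a simple one-byte XOR checksum in C#; it is not the protocol of any particular robot, just the general idea:

class Protocol
{
    // XOR all payload bytes into a single check byte.
    static byte Checksum(byte[] packet, int length)
    {
        byte check = 0;
        for (int i = 0; i < length; i++)
            check ^= packet[i];
        return check;
    }
}

The receiver recomputes the checksum over the bytes it received and replies with ACK if the values match, or NAK if they don't.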
The SerialPort Service
It should be apparent by now why a serial-port service would be useful. The RDS package, however, doesn’t include such a service, although many of the robot samples use serial communication links. If you explore the RDS sample code, you’ll find that each example handles the serial port differently. This article outlines one approach to using a serial port. It’s not the only way to do it and not necessarily the best way.
Before going any further, make sure you’ve downloaded and installed RDS. The download for this article contains the source code of the SerialPort service. Unzip it into a folder under your RDS installation directory (also known as the mount point). Note that you should keep your code separate from the samples folder that comes with RDS so that you don’t mix up your code with the Microsoft code. Also, I recommend placing your code under the RDS mount point rather than elsewhere on your hard drive, because it simplifies development. I have a Projects folder where I keep all my own code in subfolders, making it easy to back up.
The main files in the SerialPort source code are SerialPort.cs, SerialPortTypes.cs and SerialPort.manifest.xml.
SerialPort.cs holds the implementation of the service, or the behavior. It consists of the service class itself, which contains the operation handlers, plus any required supporting methods and variables.
SerialPortTypes.cs contains the definitions of the service state, the operation types, and message types. In effect, it describes the interface to the service, and it should not contain any executable code. This is a key point about a service contract—it’s a data definition only.
SerialPort.manifest.xml is called the manifest and describes how services are combined, or orchestrated, to run an application. This file is used as input to the DssHost utility, which creates a DSS node that services run inside. In this case the manifest only runs the SerialPort service, which is fairly useless. A useful manifest would also specify other services that rely on the SerialPort service to make connections.
Before using the SerialPort service, you must run the DssProjectMigration tool and then recompile the service. Open a DSS Command Prompt (look under RDS in the Start menu) and make sure your paths are set up so you can execute files in the \bin folder. Then change to the directory where you unzipped the code and enter the following command:
The /b- option means don’t create a backup copy and the period character (.) refers to the current directory.
You can compile the service in Visual Studio or do so from the command line by typing:
Compiling the service generates the service DLL, a proxy DLL, and a transform DLL (which translates data types between the service DLL and the proxy). These will all be placed into the \bin folder under the RDS mountpoint. You should also keep the manifest and config files together by copying them into the \samples\config folder (though this isn’t essential).
The Service Contract
Every service has a unique contract identifier, which is declared in the Types.cs file and looks like a URL. Note, however, that it has no significance on the Web, and using a URL-style identifier is just a convenient way to guarantee uniqueness by allowing organizations to create their own namespaces. (For your projects you’ll need to use a namespace other than microsoft.com/robotics.) SerialPortTypes.cs contains the following definition:
A service contract also includes the state and operations that define what properties other services can manipulate and how. SerialPortState (see Figure 1) includes the serial port configuration, some parameters for timeouts, and the last byte received when operating asynchronously.
[DataContract] public class SerialPortState { private bool _openOnStart; private bool _asynchronous; private SerialPortConfig _config; private byte _lastByteReceived; private bool _isOpen; // Open the COM port when the service starts // Must be set in config file [DataMember] public bool OpenOnStart { get { return _openOnStart; } set { _openOnStart = value; } } // Operate in Asynchronous mode [DataMember] public bool Asynchronous { get { return _asynchronous; } set { _asynchronous = value; } } // Configuration parameters for the serial port [DataMember] public SerialPortConfig Config { get { return _config; } set { _config = value; } } // Last byte received from the serial port [DataMember] public byte LastByteReceived { get { return _lastByteReceived; } set { _lastByteReceived = value; } } // Indicates if the port is currently open [DataMember] public bool IsOpen { get { return _isOpen; } set { _isOpen = value; } } }
You can define your own data types for use in the state and message types. In this service, there’s a SerialPortConfig class. To make it visible in the proxy, it must be public and marked with the [DataContract] attribute. Each property in the class must be declared public and also be tagged using the [DataMember] attribute. If this isn’t done, the property will not be accessible.
You can also expose public enums—that way other programmers don’t have to use magic numbers in code that uses your service. It’s good programming practice, too, because it allows datatype checking.
When you design a service, you also have to decide how other services will interact with it. This SerialPort service supports the following operations, listed below in logical groups:
- Get, Subscribe
- ReceiveByte
- SetConfig, Open, Close, ClearBuffers
- ReadByte, ReadByteArray, ReadString
- WriteByte, WriteByteArray, WriteString
Note that the ReceiveByte operation is for internal use by the service itself and shouldn’t be used by other services. More on this later.
The Read and Write operations are all synchronous—the service does not respond until the operation has completed. If you open the serial port in asynchronous mode (explained below), the service will send you a ReceiveByte notification for every byte received and you should not use the Read operations. The Write operations, however, are always synchronous.
Each operation has a request message type and a response message type. Some of these might have properties and others will be empty—the datatype itself is sufficient to convey the appropriate message.
Service Behavior
As noted earlier, the executable code of the service is in SerialPort.cs. All services are derived from DsspServiceBase, which provides a number of helper methods and properties. When a service is started, its Start method is called:
Start takes care of initializing the state (if necessary) and then opening the serial port automatically if OpenOnStart was specified in the config file (explained below).
Each operation supported by a service must have a service handler. But some operations, such as Drop and Get, are handled by the infrastructure (unless you wish to override the default behavior).
The OpenHandler shows a very simple handler:
This handler calls OpenPort, an internal method. If this is successful, a response is posted back. Because no information has to be returned, this is just a default response provided by DSS.
If the open fails, then an exception is thrown. DSS catches the exception and converts it to a fault, which is sent back as the response. Though not immediately obvious, if an exception occurs in OpenPort, it will also bubble up and return a fault.
There’s no need to explain all of the OpenPort method, but one point is important—you can open the serial port for either synchronous or asynchronous operation. In synchronous mode, all read and write requests complete before returning a response. However, if you open in asynchronous mode, notifications are sent for every character received. To make this work, the code sets up a conventional event handler:
The event handler is shown in Figure 2. This handler runs on a .NET thread. But to interoperate with DSS, it must be switched to a CCR thread, so the code posts a ReceiveByte request to the service’s own main operations port. After posting a request, the code should receive the response, otherwise there will be a memory leak. This is the purpose of Arbiter.Choice, which uses the C# shorthand notation for anonymous delegates to handle the two possible responses. An Activate is necessary to place the Choice receiver into the active queue. Only a fault is relevant in this case, and a successful result does nothing.
void DataReceivedHandler(object sender, SerialDataReceivedEventArgs e) { int data; while (sp.BytesToRead > 0) { // Read a byte - this will return immediately because // we know that there is data available data = sp.ReadByte(); // Post the byte to our own main port so that // notifications will be sent out ReceiveByte rb = new ReceiveByte(); rb.Body.Data = (byte)data; _mainPort.Post(rb); Activate(Arbiter.Choice(rb.ResponsePort, success => { }, fault => { LogError("ReceiveByte failed"); } )); } }
The ReceiveByte handler will be executed next to process the new character:
The [ServiceHandler] attribute lets DSS hook the handler to the operations port during service initialization. The handler sends a notification to subscribers, then posts back a response saying the operation is complete. (Services that wish to receive notifications must send a Subscribe operation to the SerialPort service).
Service handlers for the other read and write operations are fairly self-explanatory. The WriteStringHandler (see Figure 3) contains one small wrinkle, though—it can insert a small delay between sending characters. This is designed to help slower microcontrollers that might not be able to keep up if the data is sent at full speed, especially devices like the BasicStamp that do bit banging and don’t have a hardware Universal Asynchronous Receiver/Transmitter (UART), so they perform serial I/O in software .
[ServiceHandler] public virtual IEnumerator<ITask> WriteStringHandler( WriteString write) { if (!_state.IsOpen) { throw (new Exception("Port not open")); } // Check the parameters - An empty string is valid, but not null if (write.Body.DataString == null) throw (new Exception("Invalid Parameters")); // NOTE: This might hang forever if the comms link is broken // and you have not set a timeout. On the other hand, if there // is a timeout then an exception will be raised which is // handled automatically by DSS and returned as a Fault. if (_state.Config.InterCharacterDelay > 0) { byte[] data = new byte[1]; for (int i = 0; i < write.Body.DataString.Length; i++) { data[0] = (byte)write.Body.DataString[i]; sp.Write(data, 0, 1); yield return Timeout(_state.Config.InterCharacterDelay); } sp.WriteLine(""); } else sp.WriteLine(write.Body.DataString); // Send back an acknowledgement now that the data is sent write.ResponsePort.Post(DefaultSubmitResponseType.Instance); yield break; }
Another point about this handler is that it is an Iterator. Notice the declaration of the method as IEnumerator<ITask> and the fact that it uses yield return and yield break. This feature of the C# language allows CCR to suspend tasks without blocking a thread. When the yield return is executed, the thread executing the method is returned to the pool. Once the timeout has completed, it posts back a message that causes execution to resume, though possibly on a different thread.
Configuring the Service
A configuration file is an XML serialized version of the service state that’s loaded during service initialization. It’s a good idea to support a config file because it allows you to change the behavior of the service without recompiling. This is especially important for setting the COM port number.
You can specify the name of the file in the service when you declare the state instance (in SerialPort.cs):
In this case we’ve declared the config as optional; otherwise, the service won’t start if the config file is missing. On the other hand, this means you have to check the state in the service’s Start method and initialize it to some sensible defaults if necessary.
Because no path is specified for the ServiceUri, DSS will assume that the file is in the same folder as the manifest used to start the service. Figure 4 shows the contents of a typical config file.
<?xml version="1.0" encoding="utf-8"?> <SerialPortState xmlns:xsd="" xmlns:xsi="" xmlns= > <OpenOnStart>false</OpenOnStart> <Asynchronous>false</Asynchronous> <Config> <PortNumber>1</PortNumber> <BaudRate>57600</BaudRate> <Parity>None</Parity> <DataBits>8</DataBits> <StopBits>One</StopBits> <ReadTimeout>0</ReadTimeout> <WriteTimeout>0</WriteTimeout> <InterCharacterDelay>0</InterCharacterDelay> </Config> <LastByteReceived>0</LastByteReceived> <IsOpen>false</IsOpen> </SerialPortState>
Note that this config file does not request the service to open the COM port automatically, so you’ll have to send an Open request.
If you want your service to have a professional look, you need to pay attention to details such as the service description and icons. Even if you don’t plan to use VPL yourself, it’s a good idea to test your service in VPL and make it VPL-friendly so other people can use it easily.
You can decorate your service class with two attributes that describe the service:
The first attribute displays as the name of the service in the VPL service list (see Figure 5), and the second attribute appears as a tooltip if you mouse over the name of the service in the list.
Figure 5 VPL Service List
.png)
If you have a Web site and you want to document your services online, you can attach another attribute to the service class to provide the hyperlink:
When VPL sees this attribute, it adds a small information icon (a white “i” on a blue disc) beside the service in the service list.
If you add icons to your service, which you can easily do, they’ll appear in VPL diagrams and the service list. Notice the small icon, called a thumbnail, beside the name of the service in Figure 5. Figure 6 illustrates the larger version of the icon, which will show up inside the block that appears in VPL diagrams.
Figure 6 Service Block
.png)
When you add these images to a project, make sure you change the file properties so that Build Action is set to Embedded Resource. PNG is the preferred file format because it supports an alpha channel. This lets you create an icon with a transparent background by setting the alpha value on the background color to zero.
The service icon image should be 32 by 32 pixels and the thumbnail 16 by 16. The file names of the images must begin with the class name of the service, in this case SerialPortService. Thus I made the file names for my example SerialPortService.Image.png and SerialPortService.Thumbnail.png.
Using the Service
The service is quite flexible. You can specify the serial port configuration in a config file (which is the most common way) or by sending a SetConfig request. Once you’ve set the config, you can call the Open operation. For convenience, a flag in the config file will cause the service to open the port automatically on startup. If the port is already open, the Open call will first close, then re-open it.
You need to decide how you want to use the service: synchronously or asynchronously. When operating synchronously, each read or write request will wait until the operation completes before sending back a response. In terms of VPL, this is a simple approach, because the message flow will pause at the SerialPort block in the diagram. Note that asynchronous operation is not fully asynchronous—write operations still occur synchronously. But every new byte received is sent as a notification to subscribers.
In theory, every state-modifying operation on a service should cause a notification to be sent so that subscribers can keep their own cached version of the service state. This means all operations based on the DSSP Replace, Update, Insert, Delete and Upsert operations should send a corresponding notification. However, developers often play fast and loose with this requirement.
For simplicity, the SerialPort service only sends notifications using the ReceiveByte operation type when in asynchronous mode. The Open, Close, and SetConfig operations also cause notifications to be sent.
Because the Read and Write operations do not modify the state, they are subclassed off the DSSP Submit operation. Of course, they have a side-effect, which is to receive and send data over the serial link.
Testing with VPL
The download for this article includes two sample VPL programs, EchoAsynch and EchoSynch, which show how to use the service in both asynchronous mode (via notifications) and synchronous mode. The VPL samples use config files to set the initial parameters for the COM port, including the port number (which is set to 21 in the config files and must be changed to match your computer’s COM port address).
Note that to test the service you will need a null modem cable and either two computers with serial ports or two ports on the same computer. USB-to-serial devices are readily available, so it’s quite feasible to have multiple serial ports on a single PC. When you have your COM ports connected together, run a terminal emulator such as HyperTerminal and connect to one of the COM ports. Start by running the EchoAsynch VPL program on the other serial port. (You can find VPL in the Start menu under RDS). When you type into the terminal emulator window, you should see the characters echoed.
You can use VPL to create config files. Click on a SerialPort service block in the diagram and look in the Properties panel. You should see something like Figure 7. Make sure the PortNumber is set correctly for your PC. (This must be the other serial port, not the one you opened in the terminal emulator). You’ll notice that Parity and StopBits are drop-down lists. The entries in these lists come directly from the enums defined in SerialPortTypes.cs.
Figure 7 Serial Port Service Configuration
.png)
This is one place where XML doc comments come in handy. When you hover over a config parameter with the mouse cursor, a tooltip will pop up showing the corresponding comment for each state member.
When you run the EchoAsynch VPL program, the first part of the program in Figure 8 opens the serial port in asynchronous mode.
Figure 8 Opening the Serial Port in Asynchronous Mode
.png)
If you’ve entered invalid values in the config, such as a bad baud rate, the Open will fail. (It might also fail if the COM port is in use, the port doesn’t exist, or you don’t have appropriate permissions). This is why the program checks for a fault and displays it.
The rest of the program (see Figure 9) just echoes every character received. It does this by taking the characters from ReceiveByte notifications and sending them back using WriteByte.
To make the output easier to read, a carriage return character (ASCII 13, decimal) has a linefeed (10) appended so the cursor will move to the next line in your terminal emulator window. Note that all of the SerialPortService blocks in the Figure 9 diagram refer to the same instance of the service.
Figure 9 EchoAsynch Program
.png)
The EchoSynch VPL program (see Figure 10) uses the synchronous mode of operation—it doesn’t use notifications. (In fact, notifications are never sent in synchronous mode).
Figure 10 The EchoSynch Program
.png)
Unlike the previous program, this one uses ReadString and WriteString to echo data. These operations perform functions similar to those of ReadByteArray and WriteByteArray, However, strings are easier to handle in VPL than byte arrays.
The string operations use a linefeed (or newline) character as an end-of-line marker. So ReadString completes when you press linefeed (Ctrl-J) not when you press Enter. This can be confusing the first time you test the program using a terminal emulator, because you might wonder why nothing is echoing. WriteString adds a carriage return and linefeed to the output so each string appears on a separate line.
Note that the config file for EchoSynch has an InterCharacterDelay of 500ms. This causes the strings to be sent very slowly. You can try changing this value.
Trevor Taylor, Ph.D., is the program manager for Robotics Developer Studio at Microsoft. He’s the coauthor, with Kyle Johns, of “Professional Robotics Developer Studio” (Wrox, 2008).
Thanks to the following technical experts for reviewing this article:Kyle Johns
MSDN Magazine Blog
More MSDN Magazine Blog entries >
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | https://msdn.microsoft.com/en-us/magazine/ee309885.aspx | CC-MAIN-2016-44 | refinedweb | 4,398 | 53.71 |
There were feature requests and bug reports. Much to do. Sadly, I'm really slow at doing it.
Thursday, December 30, 2010
pyWeb Literate Programming Tool | Download pyWeb Literate Programming Tool software for free at SourceForge.net
I've (finally) updated the pyWeb Literate Programming Tool.
Top Language Skills
Check out this item on eWeek: Java, C, C++: Top Programming Languages for 2011 - Application Development - News & Reviews - eWeek.com.
The presentation starts with Java, C, C++, C# -- not surprising. These are clearly the most popular programming languages. These seem to be the first choice made by many organizations.
In some cases, it's also the last choice. Many places are simply "All C#" or "All Java" without any further thought. This parallels the "All COBOL" mentality that was so pervasive when I started my career. The "All Singing-All Dancing-All One Language" folks find the most shattering disruptions when their business is eclipsed by competitors with language and platform as a technical edge.
The next tier of languages starts with JavaScript, which is expected. Just about every web site in common use has some JavaScript somewhere. Browsers being what they are, there's really no viable alternative.
Weirdly, Perl is 6th. I say weirdly because the TIOBE Programming Community Index puts Perl much further down the popularity list.
PHP is next. Not surprising.
Visual Basic weighs in above Python. Being above Python is less weird than seeing Perl in 6th place. This position is closer to the TIOBE index. It is distressing to think that VB is still so wildly popular. I'm not sure what VB's strong suit is. C# seems to have every possible advantage over VB. Yet, there it is.
Python and Ruby are the next two. Again, this is more-or-less in the order I expected to see them. This is is the second tier of languages: really popular, but not in the same league as Java or one of the innumerable C variants.
After this, they list Objective-C as number 11. This language is tied to Apple's iOS and MacOS platforms, so it's popularity (like C# and VB) is driven in part by platform popularity.
Third Tier
Once we get past the top 10 Java/C/C++/C#/Objective C and PHP/Python/Perl/Ruby/Javascript tier, we get into a third realm of languages that are less popular, but still garnering a large community of users.
ActionScript. A little bit surprising. But -- really -- it fills the same client-side niche as JavaScript, so this makes sense. Further, almost all ActionScript-powered pages will also have a little bit of JavaScript to help launch things smoothly.
Now we're into interesting -- "perhaps I should learn this next" -- languages: Groovy, Go, Scala, Erlang, Clojure and F#. Notable by their absence are Haskell, Lua and Lisp. These seem like languages to learn in order to grab the good ideas that make them both popular and distinctive from Java or Python.
Tuesday, December 28, 2010
Amazing Speedup
A library had unit tests that ran for almost 600 seconds. Two small changes dropped the run time to 26 seconds.
I was amazed.
Step 1. I turned on the cProfile. I added two methods to the slowest unit test module.
def profile(): import cProfile cProfile.run( 'main()', 'the_slow_module.prof' ) report() def report(): import pstats p = pstats.Stats( 'the_slow_module.prof' ) p.sort_stats('time').print_callees(24)
Now I can add profiling or simply review the report. Looking at the "callees" provided some hints as to why a particular method was so slow.
Step 2. I replaced ElementTree with cElementTree (duh.) Everyone should know this. I didn't realize how much this mattered. The trick is to note how much time was spent doing XML parsing. In the case of this unit test suite, it was a LOT of time. In the case of the overall application that uses this library, that won't be true.
Step 3. The slowest method was assembling a list. It did a lot of list.append(), and list.__len__(). It looked approximately like the following.
def something( self ): result= [] for index, value in some_source: while len(result)+1 != index: result.append( None ) result.append( SomeClass( value ) ) return result
This is easily replaced by a generator. The API changes, so every use of this method function may need to be modified to use the generator instead of the list object.
def something_iter( self ): counter= 0 for index, value in some_source: while counter+1 != index: yield None counter += 1 yield SomeClass( value ) counter += 1
The generator was significantly faster than list assembly.
Two minor code changes and a significant speed-up.
Thursday, December 23, 2010
The Anti-IF Campaign
Check this out:.
I'm totally in favor of reducing complexity. I've seen too many places where a Strategy or some other kind of Delegation design pattern should have been used. Instead a cluster of if-statements was used. Sometimes these if-statements suffer copy-and-paste repetition because someone didn't recognize the design pattern.
What's important is the the if statement -- in general -- isn't the issue. The anti-if folks are simply demanding that folks don't use if as a stand-in for proper polymorphism.
Related Issues
Related to abuse of the if statement is abuse of the else clause.
My pet-peeve is code like this.
When the various conditions share common variables it can be very difficult to deduce the condition that applies for the else clause.When the various conditions share common variables it can be very difficult to deduce the condition that applies for the else clause.if condition1:
work
elif condition2:
work
elif condition3:
work
else:
what condition applies here?
My suggestion is to Avoid Else.
Write it like this.
if condition1:
work
elif condition2:
work
elif condition3:
work
elif not (condition1 or condition2 or condition3)
work
else:
raise AssertionError( "Oops. Design Error. Sorry" )
Then you'll know when you've screwed up.
[Update]
Using an assert coupled with an else clause is a kind of code-golf optimization that doesn't seem to help much. An elif will have the same conditional expression as the assert would have. But the comment did lead to rewriting this to use AssertionError instead of vague, generic Exception.
Tuesday, December 14, 2010
Code Base Fragmentation -- Again
Check this out: "Stupid Template Languages".
Love this: "The biggest annoyance I have with smart template languages (Mako, Genshi, Jinja2, PHP, Perl, ColdFusion, etc) is that you have the capability to mix core business logic with your end views, hence violating the rules of Model-View-Controller architecture."
Yes, too much power in the template leads to code base fragmentation: critical information is not in the applications, but is pushed into the presentation. This also happens with stored procedures and triggers.
Thursday, December 9, 2010
The Wrapper vs. Library vs. Aspect Problem
Imagine that we've got a collection of applications used by customers to provide data, a collection of applications we use to collect data from vendors. We've got a third collection of analytical tools.
Currently, they share a common database, but the focus, use cases, and interfaces are different.
Okay so far? Three closely-related groups or families of applications.
We need to introduce a new cross-cutting capability. Let's imagine that it's something central like using celery to manage long-running batch jobs. Clearly, we don't want to just hack celery features into all three families of applications. Do we?
Choices
It appears that we have three choices.
- A "wrapper" application that unifies all the application families and provides a new central application. Responsibilities shift to the new application.
- A site-specific library that layers some common features so that our various families of applications can be more consistent. This involves less of a responsibility shift.
- An "aspect" via Aspect-Oriented programming techniques. Perhaps some additional decorators added to the various applications to make them use the new functionality in a consistent way.
Lessons Learned
Adding a new application to be an overall wrapper turned out to be a bad idea. After implementing it, it was difficult to extend. We had two dimensions of extension.
- The workflows in the "wrapper" application needed constant tweaking as the other applications evolved. Every time we wanted to add a step, we had to update the real application and also update the wrapper. Python has a lot of introspection, but these aren't technical changes, these are user-visible workflow changes.
- Introducing a new data types and file formats was painful. The responsibility for this is effectively split between the wrapper and the underlying applications. The wrapper merely serves to dilute the responsibilities.
Libraries/Aspects
It appears that new common features are almost always new aspects of existing applications.
What makes this realization painful is the process of retrofitting a supporting library into multiple, existing applications. It seems like a lot of cut-and-paste to add the new
importstatements, add the new decorators and lines of code. However, it's a pervasive change. The point is to add the common decorator in all the right places.
Trying to "finesse" a pervasive change by introducing a higher-level wrapper isn't a very good idea.
A pervasive change is simply a lot of changes and regression tests. Okay, I'm over it.
Tuesday, December 7, 2010
Intuition and Experience
First, read EWD800.
It has harsh things to say about relying on intuition in programming.
Stack Overflow is full of questions where someone takes their experience with one language and applies it incorrectly and inappropriately to another language.
I get email, periodically, also on this subject. I got one recently on the question of "cast", "coercion" and "conversion" which I found incomprehensible for a long time. I had to reread EWD800 to realize that someone was relying on some sort of vague intuition; it appears that they were desperate to map Java (or C++) concepts on Python.
Casting
In my Python 2.6 book, I use the word "cast" exactly twice. In the same paragraph. Here it is.
This also means the "casting" an object to match the declared typeof a variable isn't meaningful in Python. You don't use C++ or Java-stylecasting.
I though that would be enough information to close the subject. I guess not. It appears that some folks have some intuition about type casting that they need to see reflected in other languages, no matter how inappropriate the concept is.
The email asked for a "a nice summary with a simple specific example to hit the point home.".
Coercion
The words coercion (and coerce) occur more often, since they're sensible Python concepts. After all, Python 2 has formal type coercion rules. See "Coercion Rules". I guess my summary ("Section 3.4.8 of the Python Language Reference covers this in more detail; along with the caveat that the Python 2 rules have gotten too complex.") wasn't detailed or explicit enough.
The relevant quote from the Language manual is this: "As the language has evolved, the coercion rules have become hard to document precisely; documenting what one version of one particular implementation does is undesirable. Instead, here are some informal guidelines regarding coercion. In Python 3.0, coercion will not be supported."
I guess I could provide examples of coercion. However, the fact that it is going to be expunged from the language seems to indicate that it isn't deeply relevant. It appears that some readers have an intuition about coercion that requires some kind of additional details. I guess I have to include the entire quote to dissuade people from relying on their intuition regarding coercion.
Further, the request for "a nice summary with a simple specific example to hit the point home" doesn't fit well with something that -- in the long run -- is going to be removed. Maybe I'm wrong, but omitting examples entirely seemed to hit the point home.
Conversion
Conversion gets it's own section, since it's sensible in a Python context. I kind of thought that a whole section on conversion would cement the concepts. Indeed, there are (IMO) too many examples of conversions in the conversion section. But I guess that showing all of the numeric conversions somehow wasn't enough. I have certainly failed at least one reader. However, I can't imagine what more could be helpful, since it is -- essentially -- an exhaustive enumeration of all conversions for all built-in numeric types.
What I'm guessing is that (a) there's some lurking intuition and (b) Python doesn't match that intuition. Hence the question -- in spite of exhaustively enumerating the conversions. I'm not sure what more can be done to make the concept clear.
It appears that all those examples weren't "nice", "simple" or "specific" enough. Okay. I'll work on that.
Thursday, December 2, 2010
More Open Source and More Agile News
Computer. | http://slott-softwarearchitect.blogspot.com/2010_12_01_archive.html | CC-MAIN-2018-26 | refinedweb | 2,169 | 58.79 |
The <%python> tag is also capable of specifying Python code that occurs within a specific scope of a template's execution, using the scope attribute. Scopes are provided that specify Python code as always the first thing executed within a component, always the last thing executed, or executes once per request, once per application thread, or once per application process.
Originally, these scopes had their own specific tag names, which are still available. These names include all those that are familiar to HTML::Mason users, as well as some new names. All synonyms are noted in each section.
Usage: <%python
Also called: <%python>
A component-scoped block is a regular block of Python that executes within the body of a template, as illustrated in the previous section Python Blocks . Many component-scoped blocks may exist in a template and they will all be executed in the place that they occur. Component scope has two variants, init and cleanup scope, which while still being component scope, execute at specific times in a component's execution regardless of their position in a template. They are described later in this section.
Usage: <%python
Also called: <%global>, <%once>
A global-scoped block refers to a section of Python code that is executed exactly once for a component, within the scope of its process. In reality, this usually means code that is executed each time the module is newly loaded or reloaded in the current Python process, as it is called as part of the module import process. Variables that are declared in a global-scoped block are compiled as global variables at the top of the generated module, and thus can be seen from within the bodies of all components defined within the module. Assignment to these variables follows the Python scoping rules, meaning that they will magically be copied into local variables once assigned to unless they are pre-declared in the local block with the Python keyword global.
Global-scoped Python executes only once when a component module is first loaded, as part of its import. As such, it is called slightly outside of the normal request call-chain and does not have the usual access to the the built-in request-scoped variables m, r, ARGS, etc. However, you can access the Request that is corresponding to this very first execution via the static call request.instance(), which will give you the current request instance that is servicing the global-scoped block.
<%python # declare global variables, accessible # across this component's generated module message1 = "this is message one." message2 = "this is message two." message3 = "doh, im message three." </%python> <%python> # reference the global variables m.write("message one: " + message1) m.write("message two: " + message2) # we want to assign to message3, # so declare "global" first global message3 message3 = "this is message three." m.write("message three: " + message3) </%python>
Use a global-scoped Python block to declare constants and resources which are shareable amongst all components within a template file. Note that for a threaded environment, global-scoped section applies to all threads within a process. This may not be appropriate for certain kinds of non-threadsafe resources, such as database handles, certain dbm modules, etc. For thread-local global variable declaration, see the section Thread Scope, or use the Myghty ThreadLocal() object described in Alternative to Thread Scope: ThreadLocal(). Similarly, request-scoped operation is provided as well, described in the section Request Scope.
A global-scoped block can only be placed in a top-level component, i.e. cannot be used within %def or %method.
Usage: <%python
Also called: <%init>
Init-scoped Python is python code that is executed once per component execution, before any other local python code or text is processed. It really is just a variant of component scope, since it is still technically within component scope, just executed before any other component-scoped code executes. One handy thing about init scope is that you can stick it at the bottom of a big HTML file, or in any other weird place, and it will still execute before all the other local code. Its recommended as the place for setting up HTTP headers which can only be set before any content and/or whitespace is written (although if autoflush is disabled, this is less of an issue; see The Autoflush Option for more details).
In this example, a login function is queried to detect if the current browser is a logged in user. If not, the component wants to redirect to a login page. Since a redirect should occur before any output is generated, the login function and redirect occurs within an init-scoped Python block:
<%python # check that the user is logged in, else # redirect them to a login page if not user_logged_in(): m.send_redirect("/login.myt", hard = True) </%python> <%doc>rest of page follows....</%doc> <html> <head> ....
Usage: <%python
Also called: <%cleanup>
Cleanup-scoped Python is Python code executed at the end of everything else within a component's execution. It is executed within the scope of a try..finally construct so its guaranteed to execute even in the case of an error condition.
In this example, a hypothetical LDAP database is accessed to get user information. Since the database connection is opened within the scope of the component inside its init-scoped block, it is closed within the cleanup-scoped block:
<%args> userid </%args> <%python # open a connection to an expensive resource ldap = Ldap.connect() userrec = ldap.lookup(userid) </%python> name: <% userrec['name'] %><br/> address: <% userrec['address'] %><br/> email: <% userrec['email'] %><br/> <%python # insure the expensive resource is closed if ldap is not None: ldap.close() </%python>
Usage: <%python
Also called: <%requestlocal>, <%requestonce> or <%shared>
A request-scoped Python block has similarities to a global-scoped block, except instead of executing at the top of the generated component's module, it is executed within the context of a function definition that is executed once per request. Within this function definition, all the rest of the component's functions and variable namespaces are declared, so that when variables, functions and objects declared within this function are referenced, they are effectively unique to the individual request.
<%python context = {} </%python> % context['start'] = True <& dosomething &> <%def dosomething> % if context.has_key('start'): hi % else: bye </%def>
The good news is, the regular Myghty variables m, ARGS, r etc. are all available within a request-scoped block. Although since a request-scoped block executes within a unique place in the call-chain, the full functionality of m, such as component calling, is not currently supported within such a block.
Request-scoped sections can only be used in top-level components, i.e. cannot be used within %def or %method.
While the net result of a request-scoped Python block looks similar to a global-scoped block, there are differences with how declared variables are referenced. Since they are not module-level variables, they can't be used with the Python global keyword. There actually is no way to directly change what a request-scoped variable points to; however this can be easily worked around through the use of pass-by-value variables. A pass-by-value effect can be achieved with an array, a dictionary, a user-defined object whose value can change, or the automatically imported Myghty datatype Value(), which represents an object whose contents you can change:
<%python status = Value("initial status") </%python> <%python> if some_status_changed(): status.assign("new status") </%python> the status is <% status() %>
An alternative to using request-scoped blocks is to assign attributes to the request object, which can then be accessed by any other component within that request. This is achieved via the member attributes:
<%python> m.attributes['whatmawsaw'] = 'i just saw a flyin turkey!' </%python> # .... somewhere in some other part of the template JIMMY: Hey maw, what'd ya see ? MAW: <% m.attributes['whatmawsaw'] %>
Produces:
JIMMY: Hey maw, what'd ya see ? MAW: i just saw a flyin turkey!
Also see The Request which lists all request methods, including those used for attributes.
Usage: <%python
Also called: <%threadlocal>, <%threadonce>
A thread-scoped Python block is nearly the same as a Request Scope block, except its defining function is executed once per thread of execution, rather than once per request. In a non-threaded environment, this amounts to once per process. The standard global variables m, r etc. are still available, but their usefulness is limited, as they only represent the one particular request that happens to be the first request to execute within the current thread. Also, like request-scope, variables declared in a thread-scoped block cannot be changed except with pass-by-value techniques, described in Using Pass-By-Value
In this example, a gdbm file-based database is accessed to retreive weather information keyed off of a zipcode. Since gdbm uses the flock() system call, it's a good idea to keep the reference to a gdbm handle local to a particular thread (at least, the thread that opens the database must be the one that closes it). The reference to gdbm is created and initialized within a thread-scoped block to insure this behavior:
<%python # use GNU dbm, which definitely doesnt work in # multiple threads (unless you specify 'u') import gdbm db = gdbm.open("weather.dbm", 'r') </%python> <%args> zipcode </%args> temperature in your zip for today is: <% db[zipcode] %>
Use a thread-scoped block to declare global resources which are not thread-safe. A big candidate for this is a database connection, or an object that contains a database-connection reference, for applications that are not trying too hard to separate their presentation code from their business logic (which, of course, they should be).
Thread-scoped sections can only be used in top-level components, i.e. cannot be used within %def or %method.
A possibly higher-performing alternative to a thread-scoped section is to declare thread-local variables via the automatically imported class ThreadLocal(). The ThreadLocal() class works similarly to a Value() object, except that assigning and retrieving its value attaches the data internally to a dictionary, keyed off of an identifier for the current thread. In this way each value can only be accessed by the thread which assigned the value, and other threads are left to assign their own value:
<%python x = ThreadLocal() </%python> <%python> import time x.assign("the current time is " + repr(time.time())) </%python> value of x: <% x() %>
ThreadLocal() also supports automatic creation of its value per thread, by supplying a pointer to a function to the parameter creator. Here is the above gdbm example with ThreadLocal(), using an anonymous (lambda) creation function to automatically allocate a new gdbm handle per thread:
<%python import gdbm db = ThreadLocal(creator = lambda: gdbm.open("weather.dbm", 'r')) </%python> <%args> zipcode </%args> temperature in your zip for today is: <% db()[zipcode] %> | http://packages.python.org/Myghty/scopedpython.html | crawl-003 | refinedweb | 1,804 | 50.87 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
Generating a Service6:20 with James Churchill
Now let's use the CLI to generate a service that we'll use within our new component.
Available Schematics
The following schematics are available for use with the
ng generate command:
- Class
- Component
- Directive
- Enum
- Guard
- Interface
- Module
- Pipe
- Service
Updating Unit Tests
After adding new components or services, you might need to manually update previously generated spec files in order to keep all of your tests passing.
For example, if you add a new component named
MyTest to your app and update the
app.component.html template to render that component, you'll need to update the
app.component.spec.ts file in order to keep your unit tests passing. Just import your new component and add it to the test bed's
declarations array.
import { TestBed, async } from '@angular/core/testing'; import { AppComponent } from './app.component'; import { MyTestComponent } from './new-test/new-test.component'; describe('AppComponent', () => { beforeEach(async(() => { TestBed.configureTestingModule({ declarations: [ AppComponent, NewTest!'); })); });
Custom Schematics
If the provided built-in schematics don't work for you, it's possible to configure the Angular CLI to use your own custom schematics. Here are a couple of resources that can help get you started.
Additional Learning
- 0:00
Now let's take a look at using the CLI to generate a service for our app.
- 0:05
If you still have the ng serve command running in the terminal,
- 0:09
press Ctrl + C to stop it.
- 0:11
Just like we did when generating a component,
- 0:13
we'll use the ng generate command.
- 0:16
This time specifying we want to generate a service.
- 0:20
And we need to add the for our servers, how about test-content, let's also
- 0:25
add the dry run option in other to preview the changes that the CLI will make.
- 0:34
Notice that the CLI created two files, but didn't update the app.module.ts file.
- 0:40
By default, when generating the service,
- 0:43
the CLI won't provide the service in Angular dependency injection system.
- 0:49
That's something that you need to take care of
- 0:51
before you can inject the service into a component.
- 0:54
Lucky the seal lie provides an option to specify which app module to provide
- 0:59
the service in.
- 1:03
--module=app.
- 1:10
Now we can see that the CLI will update our app module.
- 1:14
Let's make the changes for real by removing the --dry --run option and
- 1:17
running the command again.
- 1:25
Here's our service.
- 1:30
And here in our app module, is the import statement for our service.
- 1:37
And here's where the CLI added our service to the ng module provider's array.
- 1:43
In order to test our service, let's add a method to the test content service class
- 1:48
that returns a string literal value.
- 1:53
GetContent, then return,
- 1:57
how about hello from our service!
- 2:05
Now let's update the my-test component to make use of our service.
- 2:18
Let's start with importing the TestContentService class.
- 2:32
And then inject our service into the MyTestComponent class constructor.
- 2:47
Now let's add a property for the service content.
- 2:53
How about testContent of type string?
- 2:58
Then in the ngOnInit method, let's set the test content property
- 3:02
the return value from the testContentService gitContent method.
- 3:15
Then in the component template,
- 3:22
Let's buy the test content property using interpolation in order to render
- 3:27
the value to the view.
- 3:34
Now, we're ready to run and test our app again by running the ng serve command.
- 3:45
Remember, you can include the -o option to have the CLI open our app in our
- 3:50
default browser.
- 3:57
And here's the content for our service.
- 4:01
The placement is a little odd, so lets double check the markup in our template.
- 4:09
We didn't surround it in a paragraph let's try that.
- 4:23
That looks better.
- 4:26
Now that you've seen how easy it is to generate components and services,
- 4:30
go ahead and take some time to experiment with adding more components and services
- 4:34
to our test app, or you can try using one of the other available schematics.
- 4:39
If you find yourself using the same ng generate options every time that you
- 4:43
generate an item, you can save yourself some time by customizing the default
- 4:48
values for specific schematics via the Angular CLI's configuration file.
- 4:58
For example, we can set the default values for
- 5:01
the component schematic's inline style and inline template options to true.
- 5:25
Doing this will keep the CLI from generating separate files for
- 5:30
component CSS styles and HTML templates.
- 5:32
Instead referring to use inline styles and templates.
- 5:39
Let's stop the ng serve command process, And see this in action.
- 5:45
We can test our configuration changes by running the ng generate
- 5:49
component command I'll include the --dry --run option to keep
- 5:53
the CLI from actually generating the component files.
- 6:03
And we can see that the CLI only generated two files,
- 6:06
one for the component class, and another for the component spec.
- 6:13
It's also possible to customize the schematics that are included with the CLI,
- 6:17
for more information, see the teacher's notes. | https://teamtreehouse.com/library/generating-a-service | CC-MAIN-2018-22 | refinedweb | 985 | 62.98 |
I have this program but a liitle confused about getting the loop to work. i have to write a program that inputs a text and a search string from the keyboard. using a function strstr locate the first occurrence of the search string of the line of text, and assign the location to a variable searchptr of type char* if the search string is found print the remaninder of the line of the text beginning with the search string then use strstr again to locate the next occurrence of the search string in the line of text. if a second occurrence is found print the remainder of the line beiginning with the second occurrence. hint: the second call to strstr should contain searchptr plus 1 as its first argument.
so far i have this, can somebody help me. Thanks
#include <stdio.h> #include <string.h> #include <stdlib.h> int main() { char line1[80]; char line2[80]; char *searchptr; int i; char found[80]; printf( "Enter Sentence\n"); gets(line1); printf(" Enter String to Search: "); gets(line2); searchptr=strstr(line1, line2); printf ("%s\n",searchptr); while (searchptr!=NULL) { searchptr=strstr(line1, line2)+1; } return 0; } | https://www.daniweb.com/programming/software-development/threads/14236/need-help-with-string-loop | CC-MAIN-2017-17 | refinedweb | 194 | 72.05 |
What (1926-1997)
Allen Ginsberg
At one web site a person wonders:
Another misunderstanding souls relates:
Written in the American poetic tradition, with Walt Whitman and William Carlos Williams as major influences. His poetry possess an improvised quality in its informality, discursive and repetitiveness. It conveys immediacy and honesty.
Ginsberg had written "Supermarket in California" in a grocery store on College Avenue in Berkeley, after reading Garcia Lorca's Ode to Walt Whitman.
'Supermarket in California' is a crafted criticism of literary figures as ode; addressed by poet, to those who cannot, or will not, answer. The narrator begins by creating a postmodern holy soul in a Blakean-like Preface to Milton ' from mental fight,' longing for the return of Whitman. Carl Sandburg's canonical Hog Butcher for the World echoes in Whitman's empty questions:Who killed the pork chops? What price bananas? Are you my Angel? And the question posed in the last refrain,
(Ginsberg) wanted to create a poetry that would not be literary, but would
make full use of everything in our daily lives. "When you approach the Muse,
talk as frankly as you would with yourself or your friends".
-- 20th Century Poetry and Poetics, ed. Gary Geddes
Dear Dr. Williams:
I enclose finally some of my recent work.
Am reading Whitman through, note enclosed poem on same, saw your essay a few days ago, you do not go far enough, look what I have done with the long line. In some of these poems it seems to answer your demand for a relatively absolute line with a fixed base, whatever it is (I am writing this in a hurry finally to get it off, have delayed for too long) -- all held within the elastic of the breath, though of varying lengths. The key is in Jazz choruses to some extent; also to reliance on spontaneity & expressiveness which long line encourages; also to attention to interior unchecked logical mental stream. With a long line comes a return (to) expressive human feeling, it's generally lacking in poetry now, which is inhuman. The release of emotion is one with rhythmical buildup of long line. The most interesting experiment here is perhaps the sort of bachlike fuge built up in part III of the poem called Howl.
This is not all I have done recently, there is one other piece which is nakeder than the rest and passed into prose. I'll send that on if you're interested -- also I have a whole book building up since 1951 when you last saw my work. I wish you would look at it but have not sent it on with these, is there enough time?
Enclosed poems are all from the last few months.
I hope these answer somewhat what you were looking for.
As ever,
Allen
No time to write a weirder letter.
There's a supermarket just like it down the street, only instead of a Californian supermarket shelf and Ginsberg, Whitman, and Lorca walking by neon fruit where they talked; all three are dead forty years and now cool themselves by Lethe's breeze.
There is a woman in the cold yellow glare of the deli case rehearsing words; it's in different languages, different locations and with different moods, adding different layers of dynamics between the produce landscapes. How can people live with these artificial products? Dreaming of the lost America of love past, in the seamy underbelly of consumption, a vain attempt to interpret the numbing fount of consumerism.
Shopping is not usually likened to that of a nice verse of poetry. Uninspiring and charmless clusters, supermarkets tend to offer the same; shopping as sport. Fueling the deterioration of meaningful contact between people. The days of casual, yet personal relationships between the shopper and their local storekeepers have come and gone. One has to face the fact, for it is a fact, that there is no arm to cling to, but that we go alone, and that our relation is to the world of reality of walking home, past blue automobiles in driveways to a silent cul de sac.
Sources:
The Beat Begins: America in the 1950s:
Accessed February 27, 2007.
Poetry:
Accessed April 7, 2002.
The Wondering Minstrels:
Accessed February 27, 2007.
Log in or register to write something here or to contact authors.
Need help? [email protected] | https://m.everything2.com/title/A+Supermarket+in+California | CC-MAIN-2022-05 | refinedweb | 726 | 61.36 |
Table of Contents
Tool Chain Requirements
OpenMP support for RTEMS is available via the GCC provided libgomp. To enable
the OpenMP support for GCC add the
--enable-libgomp option to the GCC
configure command line.. Make sure that RTEMS is configured with the
--enable-smp
option.
Configuration.
Open Issues
- Atomic operations and OpenMP are not supported in GCC, see Bug 65467. Due to this a
#include <rtems.h>is not possible in files compiled with
-fopenmpif used by the C compiler (C++ works). Problem is solved in GCC 7.
- Earlier versions of libgomp used dynamic memory in the hot path. This was solved by GCC r225811.
- The libgomp uses the standard heap for dynamic memory. Since a
gomp_free()function is missing it is hard to replace this with a dedicated heap.
OpenMP Validation
Originally the OpenMP 3.1 Validation Suite was selected to validate the OpenMP support. Due to some problems with this,the test suite results are currently not available. Tests that failed on RTEMS failed also on Linux. There were two related bug reports to GCC Bug 65385 and Bug 65386. The quality of the test suite itself turned out to be quite bad. | https://devel.rtems.org/wiki/OpenMP?version=10 | CC-MAIN-2020-40 | refinedweb | 196 | 68.67 |
Opened 8 years ago
Closed 8 years ago
#3920 enhancement closed duplicate (duplicate)
Socks4a now with Asynchronous hostname resolution
Description
A short while ago I implemented the socks4a feature. It used a blocking gethostbyname to resolve hostnames which was a fast and easy update.
With this new slightly longer fix, the gethostbyname is replaced with reactor.resolve so it will not be blocking anymore. I have also updated the unit with help from Exarkun.
If you would like an easy way to test it on Firefox, here is how to do it.
Run this code:
from twisted.internet import reactor from twisted.protocols.socks import SOCKSv4Factory class SOCKSFactory(SOCKSv4Factory): def __init__(self): SOCKSv4Factory.__init__(self, '') reactor.listenTCP(1080, SOCKSFactory()) reactor.run()
In Firefox, go to "about:config" and set "network.proxy.socks_remote_dns" to true. Then set the Firefox socks proxy to localhost:1080. This is also used in other programs such as utorrent.
Attachments (1)
Change History (6)
Changed 8 years ago by
comment:1 Changed 8 years ago by
comment:2 Changed 8 years ago by
comment:3 Changed 8 years ago by
Put ticket correctly in review queue.
comment:4 Changed 8 years ago by
comment:5 Changed 7 years ago by
Note: See TracTickets for help on using tickets.
diff | http://twistedmatrix.com/trac/ticket/3920 | CC-MAIN-2017-51 | refinedweb | 213 | 59.19 |
New to Typophile? Accounts are free, and easy to set up.
Dear Bezier and Math Masers,
for a piece of software I'm writing for some time now I need to calculate points on a Bezier curve. These points need to be of exactly identical absolute distance from each other.
At the moment I'm using the common SplitCubicAtT function (T of which is a float between 0 and 1) to part the curve into two with T being 0.5 at first, then calculate the distance between P1 and P1,4 using the Pythagoreic Triangle. Then I repeat the calculation with increasing or decreasing T until the difference in distance is below a certain threshold (= binary search). But this method is 1) slow and 2) not precise.
Step 1: SplitCubicAtT, result is (among other points) the point P1,4. The difference of position in x and y direction between P1,4 and P1 is used in
Step 2: to calculate the absolute distance between P1 and P1,4 using the Pythagoreic Triangle.
From what I understand there should be a possibility to invert the combination of these two funtions so one could hand over the absolute distance and receive the positions of the P1,4.
From what I can see from here this classifies as rocket science, but I thought that someone here has already stepped across this problem earlier and might have found a solution.
As SplitCubicAtT I'm using a simple interpolation function
def SplitCubicAtT(p1, p2, p3, p4, t): u"""\ Split cubic Beziers curve at relative value t, return the two resulting segments. """ from ynlib.maths import Interpolate p12 = (Interpolate(p1[0], p2[0], t), Interpolate(p1[1], p2[1], t)) p23 = (Interpolate(p2[0], p3[0], t), Interpolate(p2[1], p3[1], t)) p34 = (Interpolate(p3[0], p4[0], t), Interpolate(p3[1], p4[1], t)) p123 = (Interpolate(p12[0], p23[0], t), Interpolate(p12[1], p23[1], t)) p234 = (Interpolate(p23[0], p34[0], t), Interpolate(p23[1], p34[1], t)) p1234 = (Interpolate(p123[0], p234[0], t), Interpolate(p123[1], p234[1], t)) return (p1, p12, p123, p1234), (p1234, p234, p34, p4) def SameLengthSegments(segment, distance, precision, firstpoint = None): u"""\ Finds points on a curve segment with equal distance (approximated through binary search, with given precision). If firstpoint is given, that would in most cases be the second last calculated point of the previous segment (to avoid gaps between smooth connection segments), this point is used as the starting point instead of p1. The distance from firstpoint to p1 should then be less than 'distance'. Returns a list with calculated points and the position of the last calculated point. """ from ynlib.maths import Distance from ynlib.beziers import SplitCubicAtT points = [] p1, p2, p3, p4 = segment l = distance t = None segments = SplitCubicAtT(p1, p2, p3, p4, .5) # Use firstpoint firstrun = True if firstrun and firstpoint != None: d = Distance(firstpoint, segments[0][3]) else: d = Distance(p1, segments[0][3]) count = 0 while Distance(segments[1][0], p4) > l: min = 0 max = 1 # if t != None: # min = t t = min + (max - min) / 2.0 segments = SplitCubicAtT(p1, p2, p3, p4, t) if firstrun and firstpoint != None: d = Distance(firstpoint, segments[0][3]) else: d = Distance(p1, segments[0][3]) # Binary search while (d - l) > precision or (d - l) < (precision * -1): if (d-l) > 0: max = t elif (d-l) < 0: min = t t = min + (max - min) / 2.0 segments = SplitCubicAtT(p1, p2, p3, p4, t) # Use last point of previous curve as first point if firstrun and firstpoint != None: d = Distance(firstpoint, segments[0][3]) else: d = Distance(segments[0][0], segments[0][3]) count += 1 p1 = segments[1][0] p2 = segments[1][1] p3 = segments[1][2] points.append(segments[0][3]) firstrun = False # List of points excluding, last point return points, segment[3], count def Distance(p1, p2): u"""\ Return distance between two points definded as (x, y). """ import math return math.sqrt( (p2[0] - p1[0]) ** 2 + (p2[1] - p1[1]) ** 2 )
instead of the probably more correct (but much slower) equation found in fontTools (results are identical):
def splitCubicAtT(pt1, pt2, pt3, pt4, *ts): """Split the cubic curve between pt1, pt2, pt3 and pt4 at one or more values of t. Return a list of curve segments. >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5)) ((0.0, 0.0), (12.5, 50.0), (31.25, 75.0), (50.0, 75.0)) ((50.0, 75.0), (68.75, 75.0), (87.5, 50.0), (100.0, 0.0)) >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5, 0.75)) ((0.0, 0.0), (12.5, 50.0), (31.25, 75.0), (50.0, 75.0)) ((50.0, 75.0), (59.375, 75.0), (68.75, 68.75), (77.34375, 56.25)) ((77.34375, 56.25), (85.9375, 43.75), (93.75, 25.0), (100.0, 0.0)) """ a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4) return _splitCubicAtT(a, b, c, d, *ts) def _splitCubicAtT(a, b, c, d, *ts): ts = list(ts) ts.insert(0, 0.0) ts.append(1.0) segments = [] for i in range(len(ts) - 1): t1 = ts[i] t2 = ts[i+1] delta = (t2 - t1) # calc new a, b, c and d a1 = a * delta**3 b1 = (3*a*t1 + b) * delta**2 c1 = (2*b*t1 + c + 3*a*t1**2) * delta d1 = a*t1**3 + b*t1**2 + c*t1 + d pt1, pt2, pt3, pt4 = calcCubicPoints(a1, b1, c1, d1) segments.append((pt1, pt2, pt3, pt4)) return segments def calcQuadraticParameters(pt1, pt2, pt3): pt1, pt2, pt3 = numpy.array((pt1, pt2, pt3)) c = pt1 b = (pt2 - c) * 2.0 a = pt3 - c - b return a, b, c def calcCubicPoints(a, b, c, d): pt1 = d pt2 = (c / 3.0) + d pt3 = (b + c) / 3.0 + pt2 pt4 = a + d + c + b return pt1, pt2, pt3, pt4
1 Nov 2011 — 10:03pm
Just ask Raph Levien.
hhp
2 Nov 2011 — 7:02am
You can ask the question on math forum:. Basically you have to solve the equation "absolute distance(P1, P1.4) = constant" and express P1.4 point with Bezier equation with one variable t.
In case of unusual bezier segments there may be more (maybe even up to 4?) resulting points with equal distance to the P1.
2 Nov 2011 — 8:33am
From what I understand there should be a possibility to invert the combination of these two funtions so one could hand over the absolute distance and receive the positions of the P1,4.
The set of all points of distance K from point P1 is a circle of radius K. If you compute the intersection of that circle with bezier P1234, that's your point.
2 Nov 2011 — 10:28pm
The cubic Bezier can be written as a vector-valued function (x(t),y(t)) parameterized by the single variable t for 0<=t<=1. You want to find the value of t such that the distance from (x(t),y(t)) to P1 is a fixed value. Thus, (as Miha said) you wish to solve the equation sqrt((x(t)-P1.x)^2+(y(t)-P1.y)^2)=constant (*). Moving the constant to the left side, you want to find when the function f(t)=sqrt((x(t)-P1.x)^2+(y(t)-P1.y)^2)-constant is equal to 0 for 0<=t<=1. This can be efficiently solved using Newton's Method; this will be significantly faster than the binary search approach you described above since it uses information from the derivative. Newton's Method is iterative but converges quickly, so you can quickly compute the answer to the level of precision that you need.
One trick to simplify applying Newton's Method is to square both sides of equation (*) first. This will give the same t value, but then the function f(t) becomes a polynomial in t. This is easier to differentiate. The resulting polynomial has degree 6, and so there is no closed form solution (in general) for the roots. However, Newton's Method does converge quadratically.
I'm curious as to what you are using this for. This is not one of the typical computations done with Bezier curves.
Best wishes,
Stephen
4 Nov 2011 — 2:16pm
@hrant: heh, thanks.
@sgh: You are absolutely correct, the convergence of Newton's method is much faster than bisection. However, Newton's method can fail to converge when the relationship is not approximately linear, which can definitely happen with cubic Bezier segments when the control points are "kinky." Bisection is much more robust in that case, as it's pretty much guaranteed to converge at _something_ in range of the solution.
One approach which might be more robust than Newton but converge faster than bisection is the secant method. Actually this has the added advantage that you don't need to analytically compute the derivatives, as well. I've used it quite a bit in my spline foo.
There are other approaches to the problem which could work, but they could get hairy. If you can decompose your beziers into circular arcs, then the problem reduces to computing the intersection between two circles, which is quite simple. Here's a reference on that:
And, what @sgh said, it's hard for me to imagine why you really need this.
Hope this helps.
5 Nov 2011 — 1:59pm
QED. ;-)
But it's great to see we have at least a few more
math experts around here. I myself have a minor in
Numerical Analysis, but there's now more rust than metal...
hhp
5 Nov 2011 — 2:52pm
Hi people,
thanks four your responses. Understanding them is already hairy for me.
So let me continue with answering the question to the why.
I'm working on a type design software plugin that illustrates curve speed on top of bezier outlines, live while editing outlines.
In order to calculate the amount of curvature I'm calculating on-curve points in equal distance, then calculating the angle between them. Just cutting a curve segment into equal parts using SplitCubicAtT doesn't work because distance and hence angle will change when a curve becomes tighter, so angle calculation is useless. They need to be in equal absolute distance from each other.
But, thinking about it, i just had another idea. When distance between on-curve points (returned by SplitCubicAtT) decreases with increasing curvature, there's already the numerical basis for my illustration, right?
Here's another problem, though: This might work only per segment. How to sync illustrations between segments, when an equal amount of splits per segment will always return different distances, since segment lengths differ?
5 Nov 2011 — 2:56pm
Cool idea. Wouldn't the first derivative of
the equation give you this relatively easily?
BTW, related and perhaps helpful:
Do I remember correctly that there's a certain type
of spline equation that ensures constant (or maybe
controllable) velocity of change?
hhp
22 Nov 2011 — 10:38am
Dear Bezier masters,
I've tried to approach my problem (calculating curve speed) in a more general way, using the general cubic Bezier equation.
I've constructed the first (and to be sure, also the second) derivative, as suggested. Please correct me, if I made mistakes there. School was a looong time ago.
The function solveCubic return as the first argument p1234 (identical to the middle on-curve point calculated by the known splitCubicAtT function, I just skipped the other points for now), then the result of the first derivative, then the second.
All are vectors (or coordinates).
The segment used for the results posted below (with t as 0.1 steps) is a quarter of a circle, or a quarter of as close a circle as one can construct with Beziers. The first derivative should then have all equal (more or less) values, right, in case of a circle? In other words, curve speed should be steady at all positions in a circle.
But the results differ by 8%, even more in the second derivative. This doesn't look right. I've double-checked for a correct circle pie piece.
But maybe I have a misunderstanding of what to do with the results of the equations that are actually x/y-coordinates. So far I calculate the absolute value of the vectors (rightmost in the results, respectively).
The results are:
24 Nov 2011 — 6:33am
I think the first (or second) derivative is conceptually wrong answer and what you are looking for is “curvature”, which you already compute through its “natural” definition … but you said it’s slow. There are also other types of formulas for computing curvature, but others might write more detailed answers.
When I saw your image I remembered this fine illustration by Tim Ahrens: What constitutes a "bad curve"?
1 Dec 2011 — 2:23am
I found it. It's an equation that involves the first and second derivative of the general Bezier equation.
See it in action: | http://www.typophile.com/node/86965 | CC-MAIN-2014-10 | refinedweb | 2,175 | 62.48 |
22 June 2012 08:25 [Source: ICIS news]
SINGAPORE (ICIS)--India’s Reliance Industries Ltd (RIL) has adjusted its list prices by Indian rupees (Rs) 1.00-1.50/kg (1,000-1,500/tonne) ($17.76-26.64/tonne) lower for high density polyethylene (HDPE) monofilament yarn and low density PE (LDPE) grades, industry players said on Friday.
The new list prices took effect from 21 June, they said.
This is a second consecutive price reduction for LDPE grades within June.
Both LDPE film and HDPE monofilament yarn list prices were cut by Rs1.00/kg, while LDPE coating was lowered by Rs1.50/kg, a source close to RIL said.
“We have adjusted prices to keep parity, as we know that the import offers have been drifting downwards,” the source added.
The new LDPE film list prices are now at Rs92.50-93.50/kg ?xml:namespace>
Other PE and PP list prices were unchanged at Rs92.00-93.00/kg
The local major quoted its equivalent import parity at $1,335/tonne CFR (cost and freight) Mumbai for LDPE film, $1,460/tonne CFR Mumbai for LDPE coating and $1,385/tonne CFR Mumbai for HDPE monofilament yarn.
Sporadic discussion levels for LDPE film imports for July shipments were at close to $1,300/tonne CFR Mumbai, $1,450/tonne CFR Mumbai for LDPE coating and $1,340/tonne CFR Mumbai for HDPE monofilament yarn, local trader said.
($1 = Rs56 | http://www.icis.com/Articles/2012/06/22/9571828/indias-reliance-cuts-again-pe-list-prices-on-lower-import.html | CC-MAIN-2015-11 | refinedweb | 243 | 72.56 |
Hi,
I have an input file which contains 3 sentences:
Jane likes chicken.
Alex likes chicken too.
both like chicken.
At the beginning of the 3rd sentence I want to add a word "They" while the reference line is the 1st line i.e "Jane likes chicken".
The code I have tried to develop can do so if reference is made to the exact line where I want to add the word but this is not what I want. Below is the program that I have developed but as I said before, it adds the word when reference is made to the exact line of insertion.
Is there a way whereby I can do something like this:Is there a way whereby I can do something like this:Code:#include <iostream> #include <fstream> using namespace std; string line; int main() { /*opens the input file*/ ifstream infile("../testing_in.txt"); if (!infile.is_open()) { cout<<"problem!!!!"<<endl; return 1; } /*this part opens the temperate output file*/ ofstream outfile ("testing_out.txt"); /*this reads the lines in the input file and inserts a word "They" in one of the lines specified*/ while(getline(infile,line)) { if(line.substr(0,line.find(" "))== "both") { line.insert(0,"They "); outfile<<line<<endl; } } }
(1) Find a line that begins with "Jane".
(2) Then two (2) line after that line with Jane (which brings me to the line "both like chicken" in this particular case, insert the word "They".
Thanks for your help. | https://cboard.cprogramming.com/cplusplus-programming/142344-how-insert-word-different-line-than-one-being-referenced.html | CC-MAIN-2017-43 | refinedweb | 243 | 72.46 |
dru-webware@... wrote:
>BTW, also, what is the best way to get some code to run
>when a context is started up?
Stuff your code in the __init__.py file of the context in question. Also,
...Edmund.
BTW, also, what is the best way to get some code to run
when a context is started up?
I need to reload the TaskKit schedule.
Dru?
Dru
Thanks Ian, I'll take a look.
btw, the quick test case for the fieldstorage bug is to call an rpc servlet
with params on the query string, e.g:
from WebKit.XMLRPCServlet import XMLRPCServlet
class RpcTest(XMLRPCServlet):
def exposedMethods(self):
return ['add']
def add(self, a,b):
return a+b
and then from other python code:
import xmlrpclib
test = xmlrpclib.Server('';)
test.add(1,2)
Chris
> -----Original Message-----
> From: Ian Bicking [mailto:ianb@...]
> Sent: Saturday, June 08, 2002 8:17 PM
> To: cprinos@...
> Cc: webware-devel@...
> Subject: Content-type handling in HTTPRequest
>
>
>
>
On Monday 03 June 2002 09:36 am, Ingo Luetkebohle wrote:
> Hi,
>
> attached is a patch against MiddleKit to provide preliminary support
> for PostgreSQL. We are using the patch locally, it seems to work
> just fine. Its a bit unpolished but, well,... here it is :-)
Were you able to run the MiddleKit test suite by any chance?
> cd Webware/MiddleKit/Tests
> python Test.py > r
(The reason I redirect to 'r' is that any exceptions go to stderr and
will therefore show up (or not)).
I have an old MK/Postgres patch lying about. I guess I have the task of
reviewing both patches and merging them in. :-O :-)
-Chuck
This is a fairly minor change coming up, and I think we agreed on it
earlier. Just thought I'd describe it in case anyone has any comments:
- Rename *.config to *.config.default.
- Tweak installer to copy *.config.default to *.config.
In this way, you can modify your *.config files in a cvs version of
Webware without ever having to experience a conflict or accidental
checkin with cvs.
As usual, you'll be able to continue looking at *.config.default to see
what the original settings were.
-Chuck
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/webware/mailman/webware-devel/?viewmonth=200206&viewday=9 | CC-MAIN-2017-17 | refinedweb | 399 | 75.91 |
Cells
View Components for Ruby and Rails.
Overview
Cells allow you to encapsulate parts of your UI into components into view models. View models, or cells,, caching, and integrate with Trailblazer.
This is not Cells 3.x!
Temporary note: This is the README and API for Cells 4. Many things have improved. If you want to upgrade, follow this guide. When in trouble, join us on the IRC (Freenode) #trailblazer channel.
Rendering Cells
You can render cells anywhere and as many as you want, in views, controllers, composites, mailers, etc.
Rendering a cell in Rails ironically happens via a helper.
<%= cell(:comment, @comment) %>
This boils down to the following invocation, that can be used to render cells in any other Ruby environment.
CommentCell.(@comment).()
In Rails you have the same helper API for views and controllers.
class DasboardController < ApplicationController def dashboard @comments = cell(:comment, collection: Comment.recent) @traffic = cell(:report, TrafficReport.find(1)).() end
Usually, you'd pass in one or more objects you want the cell to present. That can be an ActiveRecord model, a ROM instance or any kind of PORO you fancy.
Cell Class
A cell is a light-weight class with one or multiple methods that render views.
class Comment::Cell < Cell::ViewModel property :body property :author def show render end private def link_to "#{.email}", end end
Here,
show is the only public method. By calling
render it will invoke rendering for the
show view.
Logicless Views
Views come packaged with the cell and can be ERB, Haml, or Slim.
<h3>New Comment</h3> <%= body %> By <%= author_link %>
The concept of "helpers" that get strangely copied from modules to the view does not exist in Cells anymore.
Methods called in the view are directly called on the cell instance. You're free to use loops and deciders in views, even instance variables are allowed, but Cells tries to push you gently towards method invocations to access data in the view.
File Structure
In Rails, cells are placed in
app/cells or
app/concepts/. Every cell has their own directory where it keeps views, assets and code.
app ├── cells │ ├── comment_cell.rb │ ├── comment │ │ ├── show.haml │ │ ├── list.haml
The discussed
show view would reside in
app/cells/comment/show.haml. However, you can set any set of view paths you want.
Invocation Styles
In order to make a cell render, you have to call the rendering methods. While you could call the method directly, the prefered way is the call style.
cell(:comment, @song).() # calls CommentCell#show. cell(:comment, @song).(:index) # calls CommentCell#index.
The call style respects caching.
Keep in mind that
cell(..) really gives you the cell object. In case you want to reuse the cell, need setup logic, etc. that's completely up to you.
Parameters
You can pass in as many parameters as you need. Per convention, this is a hash.
cell(:comment, @song, volume: 99, genre: "Jazz Fusion")
Options can be accessed via the
@options instance variable.
Naturally, you may also pass arbitrary options into the call itself. Those will be simple method arguments.
cell(:comment, @song).(:show, volume: 99)
Then, the
show method signature changes to
def show(options).
Testing
A huge benefit from "all this encapsulation" is that you can easily write tests for your components. The API does not change and everything is exactly as it would be in production.
html = CommentCell.(@comment).() Capybara.string(html).must_have_css "h3"
It is completely up to you how you test, whether it's RSpec, MiniTest or whatever. All the cell does is return HTML.
In Rails, there's support for TestUnit, MiniTest and RSpec available, along with Capybara integration.
Properties
The cell's model is available via the
model reader. You can have automatic readers to the model's fields by uing
::property.
class CommentCell < Cell::ViewModel property :author # delegates to model.author def link_to .name, end end
HTML Escaping
Cells per default does no HTML escaping, anywhere. Include
Escaped to make property readers return escaped strings.
class CommentCell < Cell::ViewModel include Escaped property :title end song.title #=> "<script>Dangerous</script>" Comment::Cell.(song).title #=> <script>Dangerous</script>
Properties and escaping are documented here.
Installation
Cells.
module Admin class CommentCell < Cell::ViewModel
Invocation in Rails would happen as follows.
cell("admin/comment", @comment).()
Views will be searched in
app/cells/admin/comment per default.
Rails Helper API
Even in a non-Rails environment, Cells provides the Rails view API and allows using all Rails helpers.
You have to include all helper modules into your cell class. You can then use
link_to,
simple_form_for or whatever you feel like.
class CommentCell < Cell::ViewModel include ActionView::Helpers::UrlHelper include ActionView::Helpers::CaptureHelper def author_link content_tag :div, link_to(author.name, author) end
As always, you can use helpers in cells and in views.
You might run into problems with wrong escaping or missing URL helpers. This is not Cells' fault but Rails suboptimal way of implementing and interfacing their helpers. Please open the actionview gem helper code and try figuring out the problem yourself before bombarding us with issues because helper
xyz doesn't work.
View Paths
In Rails, the view path is automatically set to
app/cells/ or
app/concepts/. You can append or set view paths by using
::view_paths. Of course, this works in any Ruby environment.
class CommentCell < Cell::ViewModel self.view_paths = "lib/views" end
Asset Packaging
Cells can easily ship with their own JavaScript, CSS and more and be part of Rails' asset pipeline. Bundling assets into a cell allows you to implement super encapsulated widgets that are stand-alone. Asset pipeline is documented here.
Render API
Unlike Rails, the
#render method only provides a handful of options you gotta learn.
def show render end
Without options, this will render the state name, e.g.
show.erb.
You can provide a view name manually. The following calls are identical.
render :index render view: :index
If you need locals, pass them to
#render.
render locals: {style: "border: solid;"}
Layouts
Every view can be wrapped by a layout. Either pass it when rendering.
render layout: :default
Or configure it on the class-level.
class CommentCell < Cell::ViewModel layout :default
The layout is treated as a view and will be searched in the same directories.
Nested Cells
Cells love to render. You can render as many views as you need in a cell state or view.
<%= render :index %>
The
#render method really just returns the rendered template string, allowing you all kind of modification.
def show render + render(:additional) end
You can even render other cells within a cell using the exact same API.
def about cell(:profile, model.).() end
This works both in cell views and on the instance, in states.
View Inheritance
Cells can inherit code from each other with Ruby's inheritance.
class CommentCell < Cell::ViewModel end class PostCell < CommentCell end
Even cooler,
PostCell will now inherit views from
CommentCell.
PostCell.prefixes #=> ["app/cells/post", "app/cells/comment"]
When views can be found in the local
post directory, they will be looked up in
comment. This starts to become helpful when using composed cells.
If you only want to inherit views, not the entire class, use
::inherit_views.
class PostCell < Cell::ViewModel inherit_views Comment::Cell end PostCell.prefixes #=> ["app/cells/post", "app/cells/comment"]")
Builder
Often, it is good practice to replace decider code from views or classes into separate sub-cells. Or in case you want to render a polymorphic collection, builders come in handy.
Builders allow instantiating different cell classes for different models and options.
class CommentCell < Cell::ViewModel builds do |model, options| PostCell if model.is_a?(Post) CommentCell if model.is_a?(Comment) end
The
#cell helper takes care of instantiating the right cell class for you.
cell(:comment, Post.find(1)) #=> creates a PostCell.
This also works with collections.
cell(:comment, collection: [@post, @comment]) #=> renders PostCell, then CommentCell.
Multiple calls to
::builds will be ORed. If no block returns a class, the original class will be used (
CommentCell). Builders are inherited.
Caching
For every cell class you can define caching per state. Without any configuration the cell will run and render the state once. In following invocations, the cached fragment is returned.
class CommentCell < Cell::ViewModel cache :show # .. end
The
::cache method will forward options to the caching engine.
cache :show, expires_in: 10.minutes
You can also compute your own cache key, use dynamic keys, cache tags, and Global Partials
Although not recommended, you can also render global partials from a cell. Be warned, though, that they will be rendered using our stack, and you might have to include helpers into your view model.
This works by including
Partial and the corresponding
:partial option.
class Cell < Cell::ViewModel include Partial def show render partial: "../views/shared/map.html" # app/views/shared/map.html.haml end
The provided path is relative to your cell's
::view_paths directory. The format has to be added to the file name, the template engine suffix will be used from the cell.
You can provide the format in the
render call, too.
render partial: "../views/shared/map", formats: [:html]
This was mainly added to provide compatibility with 3rd-party gems like Kaminari and Cells that rely on rendering partials within a cell.
LICENSE
Copyright (c) 2007-2008, Solide ICT by Peter Bex and Bob Leers
Released under the MIT License. | http://www.rubydoc.info/github/apotonick/cells/ | CC-MAIN-2015-35 | refinedweb | 1,551 | 59.7 |
NAMEsetns - reassociate thread with a namespace
SYNOPSIS
#define _GNU_SOURCE /* See feature_test_macros(7) */ #include <sched.h>
int setns(int fd, int nstype);
DESCRIPTIONThe setns() system call allows the calling thread to move into different namespaces. The fd argument is one of the following:
- a file descriptor referring to one of the magic links in a /proc/[pid]/ns/ directory (or a bind mount to such a link);
- a PID file descriptor (see pidfd_open(2)).
The nstype argument is interpreted differently in each case.
fd refers to a /proc/[pid]/ns/ linkIf fd refers to a /proc/[pid]/ns/ link, then setns() reassociates the calling thread with the namespace associated with that link, subject to any constraints imposed by the nstype argument. In this usage, each call to setns() changes just one of the caller's namespace memberships.TIME (since Linux 5.8)
- fd must refer to a time.)
fd is a PID file descriptorSince Linux 5.8, fd may refer to a PID file descriptor obtained from pidfd_open(2) or clone(2). In this usage, setns() atomically moves the calling thread into one or more of the same namespaces as the thread referred to by fd.
The nstype argument is a bit mask specified by ORing together one or more of the CLONE_NEW* namespace constants listed above. The caller is moved into each of the target thread's namespaces that is specified in nstype; the caller's memberships in the remaining namespaces are left unchanged.
For example, the following code would move the caller into the same user, network, and UTS namespaces as PID 1234, but would leave the caller's other namespace memberships unchanged:
int fd = pidfd_open(1234, 0); setns(fd, CLONE_NEWUSER | CLONE_NEWNET | CLONE_NEWUTS);
Details for specific namespace typesNote the following details and restrictions when reassociating with specific namespace types:
- User namespaces
- A process reassociating itself with a user namespace must have the CAP_SYS_ADMIN capability in the target user namespace. (This necessarily implies that it is only possible to join a descendant).
- Mount namespaces
- Changing the mount namespace requires that the caller possess both CAP_SYS_CHROOT and CAP_SYS_ADMIN capabilities in its own user namespace and CAP_SYS_ADMIN in the user namespace that owns the target mount namespace.
- A process can't join a new mount namespace if it is sharing filesystem-related attributes (the attributes whose sharing is controlled by the clone(2) CLONE_FS flag) with another process.
- See user_namespaces(7) for details on the interaction of user namespaces and mount namespaces.
- PID namespaces
- In order to reassociate itself with a new PID namespace, the caller must have the CAP_SYS_ADMIN capability both in its own user namespace and in the user namespace that owns the target PID namespace.
- Reassociating the PID namespace target PID namespace is a descendant (child, grandchild, etc.) of, or is the same as, the current PID namespace of the caller.
- For further details on PID namespaces, see pid_namespaces(7).
- Cgroup namespaces
- In order to reassociate itself with a new cgroup namespace, the caller must have the CAP_SYS_ADMIN capability both in its own user namespace and in the user namespace that owns the target cgroup namespace.
- Using setns() to change the caller's cgroup namespace does not change the caller's cgroup memberships.
- Network, IPC, time, and UTS namespaces
- In order to reassociate itself with a new network, IPC, time, or UTS namespace, the caller must have the CAP_SYS_ADMIN capability both in its own user namespace and in the user namespace that owns the target namespace.
RETURN VALUE.
- EINVAL
- fd is a PID file descriptor and nstype is invalid (e.g., it is 0).
- ENOMEM
- Cannot allocate sufficient memory to change the specified namespace.
- EPERM
- The calling thread did not have the required capability for this operation.
- ESRCH
- fd is a PID file descriptor but the process it refers to no longer exists (i.e., it has terminated and been waited on).
VERSIONSThe setns() system call first appeared in Linux in kernel 3.0; library support was added to glibc in version 2.14.
CONFORMING TOThe setns() system call is Linux-specific.
NOTESFor further information on the /proc/[pid]/ns/ magic links, see namespaces(7).
Not all of the attributes that can be shared when a new thread is created using clone(2) can be changed using setns().
EXAMPLEST); } /* Get file descriptor for namespace; the file descriptor is opened with O_CLOEXEC so as to ensure that it is not inherited by the program that is later executed. */ fd = open(argv[1], O_RDONLY | O_CLOEXEC); if (fd == -1) errExit("open"); if (setns(fd, 0) == -1) /* Join that namespace */ errExit("setns"); execvp(argv[2], &argv[2]); /* Execute a command in namespace */ errExit("execvp"); } | https://man.archlinux.org/man/setns.2.en | CC-MAIN-2022-21 | refinedweb | 771 | 60.55 |
Tail recursion
Let's start this one with a formal definition which we'll borrow from Wikipedia:.
Putting it in simpler terms: when you call a procedure as the final action of your own procedure (function, method, etc) you have a tail call. If that call happens to be called again down the call chain, it is tail-recursive.
This is all very interesting you might say, but why is this important? Well, this is important because these type of calls can be implemented without adding new stack frames to the call stack. That means that we don't need to fill up the call stack and eat up a lot a memory in the process. This is particularly relevant for functional languages (for example Erlang, Scala) where for and while loops don't exist and so recursion is mandatory.
As I am more of a hands on guy let's illustrate this with some examples to understand the concept. Let's use a recursive function by nature: factorial. Straight from it's definition , we can code as an Erlang function to calculate it (we'll borrow it from Learn You Some Erlang for great good!) :
fac(0) -> 1;fac(N) when N > 0 -> N*fac(N-1).
Let's do a small trace for this function with 4 as an input:
fac(4) = 4 * fac(4 - 1)= 4 * 3 * fac(3 - 1)= 4 * 3 * 2 * fac(2 - 1)= 4 * 3 * 2 * 1 * fac(1 - 1)= 4 * 3 * 2 * 1 * 1= 4 * 3 * 2 * 1= 4 * 3 * 2= 4 * 6= 24
While this works fine for a small number like 4, as you can see there are quite a lot of fac calls. Imagine that for big numbers: we would be in trouble. Because the last call for this function involves a multiplication between a number a a recursive call to fac, a new stack frame has to be placed in the call stack so that at the end we can go back and multiply the results.
Let's try and change our function to be tail recursive. What that means is that we will implement it in a way that the same stack frame is used. And that can be achieve if the last last call of our function is only, you guessed it, a funciton. Let's see how it goes:
tail_fac(N) -> tail_fac(N,1).tail_fac(0,Acc) -> Acc;tail_fac(N,Acc) when N > 0 -> tail_fac(N-1,N*Acc).
Again, let's do a small trace and check what's happening:
Well, well, well! It seems that we don't need to fill up the call stack with new frames after all. And this is all pretty with functional languages, but what about imperative one's? Let's make use a Python example.
def fac(n):if n == 0:return 1else:return n * fac(n-1)
Again, straight from the definition, and fairly simple. If we try to run with numbers bigger than 998 we get:
RuntimeError: maximum recursion depth exceeded
This is because Python, as well as other languages, has a limit (Python's default is 1000) to the number of recursive calls you can make. Faced with this, you can try and solve 3 ways:
- increase the limit (you still would be limited to the maximum amount you could increase it to, and also would be increasing the calls stack size a lot and memory consumption
- write an imperative alternative
- write a tail recursive alternative
Because we're illustrating tail recursion, let's go with the third approach:
def fac(n):return tail_fac(n, 1)def tail_fac(n, acc):if n == 0:return accelse:return tail_fac(n - 1, acc * n)
If we try to run with numbers bigger than 998 the result will be:
RuntimeError: maximum recursion depth exceeded
Wait, what?!? How's this possible? I'm using a tail recursive solution. This shouldn't have happening. I've I been talking nonsense up until now? No I didn't, and I didn't choose Python by chance.
The fact is, although you might implement your function in a tail recursive fashion, it all comes down to the implementation of the language. The fact that the same stack frame is used for consecutive recursive calls depends on the language implementation. Python does not implement it. You can check the links bellow on, from Guido himself, explaining why:
-
-
P.S. I want to thank Ricardo Sousa and Nuno Silva for the reviews. | https://mccricardo.com/tail-recursion/ | CC-MAIN-2021-25 | refinedweb | 747 | 69.21 |
import ssh_host_key_file
Function
The import ssh_host_key_file command is used to replace the public key file and private key file on the SSH server.
Format
import ssh_host_key_file key_type=? ip=? user=? password=? public_key_file=? private_key_file=? [ protocol=? ] [ port=? ]
Parameters
Level
Super administrator
Usage Guidelines
If you want to use your own SSH public key file and private key file, perform the following steps:
- Use the ssh-keygen tool to generate a public key file and private key file encrypted with RSA, DSA, or ECDSA algorithm.
- Run the import ssh_host_key_file command to import the public key file and private key file.
If a public key file or private key file is encrypted with a non-RSA, non-DSA, and non-ECDSA algorithm, or an illegal public key file and an illegal private key file are imported, the RSA, DSA, or ECDSA algorithm will not be used for connection encryption when an SSH connection is set up the next time. password=******.
System Response
None | https://support.huawei.com/enterprise/en/doc/EDOC1100049140/b4d9f8d0 | CC-MAIN-2019-39 | refinedweb | 158 | 53.85 |
i'm working on my proyect with codewarrior 10.2 and when i want to make the proyect, there are 3 errors that say:
and when i click on any of those errors, the focus goes on ccstddef.h section:
#ifdef __GNUG__
#pragma interface "stddef.h"
#endif
#include <Gconfig.h>
#include <cstddef.h>
extern "C++"
{
#ifndef _LIBRARY_WORKAROUND /* dont't DEFINE vars in header files !! */
const size_t NPOS = (size_t)(-1);
#endif
typedef void fvoid_t();
#ifndef _LIBRARY_WORKAROUND
#ifndef _WINT_T
#define _WINT_T
typedef _G_wint_t wint_t;
#endif
#endif
} // extern "C++"
#endif
--------------------------------------------------------------------
but i had never modify any of that part..
i wish you can help me please! thank!
Thanks for helping me, it was my bad! haha, the problem was that I was including the "iostream.h" library in a C proyect, and iostream it's a C++ library! so thanks again and if anyone has the same dummy problem, here there is the solution! | https://community.nxp.com/thread/308912 | CC-MAIN-2018-22 | refinedweb | 151 | 76.42 |
This pile of codes here has no errors in it when being compiled. However, when I tried to build it, it says:
Here are my codes:Here are my codes:Code:-------------- Build: Debug in Funtion2 --------------- Linking console executable: bin\Debug\Funtion2.exe collect2: cannot find `ld' Process terminated with status 1 (0 minutes, 0 seconds) 0 errors, 0 warnings
Please someone help me out. Thanks in advance.Please someone help me out. Thanks in advance.Code:#include <iostream> #include <string> using namespace std; void Test( string language, string hugeNumber); int main() { string favourite = "C++"; string favourite2 = "Java"; Test( favourite, favourite2); return 0; } void Test ( string language, string hugeNumber) { cout<< "I like "<< language<< hugeNumber; }
EDIT: I tried the following, but it still couldn't not work:
Code:#include <iostream> #include <string> using namespace std; void Test(std::string language, std::string hugeNumber); int main() { std::string favourite = "C++"; std::string favourite2 = "Java"; Test(favourite,favourite2); return 0; } void Test ( std::string language, std::string hugeNumber) { std::cout<< "I like "<< language<< hugeNumber; } | http://cboard.cprogramming.com/cplusplus-programming/107166-unknown-error-%60ld%27.html | CC-MAIN-2015-35 | refinedweb | 170 | 58.21 |
Making adept use of threads on Android can help you boost your app’s performance. This page discusses several aspects of working with threads: working with the UI, or main, thread; the relationship between app lifecycle and thread priority; and, methods that the platform provides to help manage thread complexity. In each of these areas, this page describes potential pitfalls and strategies for avoiding them.
Main thread
When the user launches your app, Android creates a new Linux process along with an execution thread. This main thread, also known as the UI thread, is responsible for everything that happens onscreen. Understanding how it works can help you design your app to use the main thread for the best possible performance.
Internals
The main thread has a very simple design: Its only job is to take and execute blocks of work from a thread-safe work queue until its app is terminated. The framework generates some of these blocks of work from a variety of places. These places include callbacks associated with lifecycle information, user events such as input, or events coming from other apps and processes. In addition, app can explicitly enqueue blocks on their own, without using the framework.
Nearly any block of code your app executes is tied to an event callback, such as input, layout inflation, or draw. When something triggers an event, the thread where the event happened pushes the event out of itself, and into the main thread’s message queue. The main thread can then service the event.
While an animation or screen update is occurring, the system tries to execute a block of work (which is responsible for drawing the screen) every 16ms or so, in order to render smoothly at 60 frames per second. For the system to reach this goal, the UI/View hierarchy must update on the main thread. However, when the main thread’s messaging queue contains tasks that are either too numerous or too long for the main thread to complete the update fast enough, the app should move this work to a worker thread. If the main.
Moving numerous or long tasks from the main thread, so that they don’t interfere with smooth rendering and fast responsiveness to user input, is the biggest reason for you to adopt threading in your app.
Threads and UI object references
By design, Android View objects are not thread-safe. An app is expected to create, use, and destroy UI objects, all on the main thread. If you try to modify or even reference a UI object in a thread other than the main thread, the result can be exceptions, silent failures, crashes, and other undefined misbehavior.
Issues with references fall into two distinct categories: explicit references and implicit references.
Explicit references
Many tasks on non-main threads have the end goal of updating UI objects. However, if one of these threads accesses an object in the view hierarchy, application instability can result: If a worker thread changes the properties of that object at the same time that any other thread is referencing the object, the results are undefined.
For example, consider an app that holds a direct reference to a UI object on a
worker thread. The object on the worker thread may contain a reference to a
View; but before the work completes, the
View is
removed from the view hierarchy. When these two actions happen simultaneously,
the reference keeps the
View object in memory and sets properties on it.
However, the user never sees
this object, and the app deletes the object once the reference to it is gone.
In another example,
View objects contain references to the activity
that owns them. If
that activity is destroyed, but there remains a threaded block of work that
references it—directly or indirectly—the garbage collector will not collect
the activity until that block of work finishes executing.
This scenario can cause a problem in situations where threaded work may be in
flight while some activity lifecycle event, such as a screen rotation, occurs.
The system wouldn’t be able to perform garbage collection until the in-flight
work completes. As a result, there may be two
Activity objects in
memory until garbage collection can take place.
With scenarios like these, we suggest that your app not include explicit references to UI objects in threaded work tasks. Avoiding such references helps you avoid these types of memory leaks, while also steering clear of threading contention.
In all cases, your app should only update UI objects on the main thread. This means that you should craft a negotiation policy that allows multiple threads to communicate work back to the main thread, which tasks the topmost activity or fragment with the work of updating the actual UI object.
Implicit references
A common code-design flaw with threaded objects can be seen in the snippet of code below:
Kotlin
class MainActivity : Activity() { // ... inner class MyAsyncTask : AsyncTask<Unit, Unit, String>() { override fun doInBackground(vararg params: Unit): String {...} override fun onPostExecute(result: String) {...} } }
Java
public class MainActivity extends Activity { // ... public class MyAsyncTask extends AsyncTask<Void, Void, String> { @Override protected String doInBackground(Void... params) {...} @Override protected void onPostExecute(String result) {...} } }
The flaw in this snippet is that the code declares the threading object
MyAsyncTask as a non-static inner class of some activity (or an inner class
in Kotlin). This declaration creates an implicit reference to the enclosing
Activity
instance. As a result, the object contains a reference to the activity until the
threaded work completes, causing a delay in the destruction of the referenced activity.
This delay, in turn, puts more pressure on memory.
A direct solution to this problem would be to define your overloaded class instances either as static classes, or in their own files, thus removing the implicit reference.
Another solution would be to always cancel and clean up background tasks in the appropriate
Activity lifecycle callback, such as
onDestroy. This approach can be
tedious and error prone, however. As a general rule, you should not put complex, non-UI logic
directly in activities. In addition,
AsyncTask is now deprecated and it is
not recommended for use in new code. See Threading on Android
for more details on the concurrency primitives that are available to you.
Threads and app activity lifecycles
The app lifecycle can affect how threading works in your application. You may need to decide that a thread should, or should not, persist after an activity is destroyed. You should also be aware of the relationship between thread prioritization and whether an activity is running in the foreground or background.
Persisting threads
Threads persist past the lifetime of the activities that spawn them. Threads continue to execute, uninterrupted, regardless of the creation or destruction of activities, although they will be terminated together with the application process once there are no more active application components. In some cases, this persistence is desirable.
Consider a case in which an activity spawns a set of threaded work blocks, and is then destroyed before a worker thread can execute the blocks. What should the app do with the blocks that are in flight?
If the blocks were going to update a UI that no longer exists, there’s no reason for the work to continue. For example, if the work is to load user information from a database, and then update views, the thread is no longer necessary.
By contrast, the work packets may have some benefit not entirely related to the
UI. In this case, you should persist the thread. For example, the packets may be
waiting to download an image, cache it to disk, and update the associated
View object. Although the object no longer exists, the acts of downloading and
caching the image may still be helpful, in case the user returns to the
destroyed activity.
Managing lifecycle responses manually for all threading objects can become
extremely complex. If you don’t manage them correctly, your app can suffer from
memory contention and performance issues. Combining
ViewModel
with
LiveData allows you to
load data and be notified when it changes
without having to worry about the lifecycle.
ViewModel objects are
one solution to this problem. ViewModels are maintained across configuration changes which
provides an easy way to persist your view data. For more information about ViewModels see the
ViewModel guide, and to learn more about
LiveData see the LiveData guide. If you
would also like more information about application architecture, read the
Guide To App Architecture.
Thread priority
As described in Processes and the Application Lifecycle, the priority that your app’s threads receive depends partly on where the app is in the app lifecycle. As you create and manage threads in your application, it’s important to set their priority so that the right threads get the right priorities at the right times. If set too high, your thread may interrupt the UI thread and RenderThread, causing your app to drop frames. If set too low, you can make your async tasks (such as image loading) slower than they need to be.
Every time you create a thread, you should call
setThreadPriority().
The system’s thread
scheduler gives preference to threads with high priorities, balancing those
priorities with the need to eventually get all the work done. Generally, threads
in the foreground
group get about 95% of the total execution time from the device, while the
background group gets roughly 5%.
The system also assigns each thread its own priority value, using the
Process class.
By default, the system sets a thread’s priority to the same priority and group
memberships as the spawning thread. However, your application can explicitly
adjust thread priority by using
setThreadPriority().
The
Process
class helps reduce complexity in assigning priority values by providing a
set of constants that your app can use to set thread priorities. For example,
THREAD_PRIORITY_DEFAULT
represents the default value for a thread. Your app should set the thread's priority to
THREAD_PRIORITY_BACKGROUND
for threads that are executing less-urgent work.
Your app can use the
THREAD_PRIORITY_LESS_FAVORABLE
and
THREAD_PRIORITY_MORE_FAVORABLE
constants as incrementers to set relative priorities. For a list of
thread priorities, see the
THREAD_PRIORITY constants in
the
Process class.
For more information on
managing threads, see the reference documentation about the
Thread and
Process classes.
Helper classes for threading
For developers using Kotlin as their primary language, we recommend using coroutines. Coroutines provide a number of benefits, including writing async code without callbacks as well as structured concurrency for scoping, cancellation and error handling.
The framework also provides the same Java classes and primitives to facilitate
threading, such as the
Thread,
Runnable
, and
Executors classes,
as well as additional ones such as HandlerThread.
For further information, please refer to Threading on Android.
The HandlerThread class
A handler thread is effectively a long-running thread that grabs work from a queue and operates on it.
Consider a common challenge with getting preview frames from your
Camera object.
When you register for Camera preview frames, you receive them in the
onPreviewFrame()
callback, which is invoked on the event thread it was called from. If this
callback were invoked on the UI thread, the task of dealing with the huge pixel
arrays would be interfering with rendering and event processing work.
In this example, when your app delegates the
Camera.open() command to a
block of work on the handler thread, the associated
onPreviewFrame()
callback
lands on the handler thread, rather than the UI thread. So, if you’re going to be doing long-running
work on the pixels, this may be a better solution for you.
When your app creates a thread using
HandlerThread, don’t
forget to set the thread’s
priority based on the type of work it’s doing. Remember, CPUs can only
handle a small number of threads in parallel. Setting the priority helps
the system know the right ways to schedule this work when all other threads
are fighting for attention.
The ThreadPoolExecutor class
There are certain types of work that can be reduced to highly parallel,
distributed tasks. One such task, for example, is calculating a filter for each
8x8 block of an 8 megapixel image. With the sheer volume of work packets this
creates,
HandlerThread isn’t the appropriate class to use.
ThreadPoolExecutor is a helper class to make
this process easier. This class manages the creation of a group of threads, sets
their priorities, and manages how work is distributed among those threads.
As workload increases or decreases, the class spins up or destroys more threads
to adjust to the workload.
This class also helps your app spawn an optimum number of threads. When it
constructs a
ThreadPoolExecutor
object, the app sets a minimum and maximum
number of threads. As the workload given to the
ThreadPoolExecutor increases,
the class will take the initialized minimum and maximum thread counts into
account, and consider the amount of pending work there is to do. Based on these
factors,
ThreadPoolExecutor decides on how many
threads should be alive at any given time.
How many threads should you create?
Although from a software level, your code has the ability to create hundreds of threads, doing so can create performance issues. Your app shares limited CPU resources with background services, the renderer, audio engine, networking, and more. CPUs really only have the ability to handle a small number of threads in parallel; everything above that runs into priority and scheduling issue. As such, it’s important to only create as many threads as your workload needs.
Practically speaking, there’s a number of variables responsible for this, but picking a value (like 4, for starters), and testing it with Systrace is as solid a strategy as any other. You can use trial-and-error to discover the minimum number of threads you can use without running into problems.
Another consideration in deciding on how many threads to have is that threads aren’t free: they take up memory. Each thread costs a minimum of 64k of memory. This adds up quickly across the many apps installed on a device, especially in situations where the call stacks grow significantly.
Many system processes and third-party libraries often spin up their own threadpools. If your app can reuse an existing threadpool, this reuse may help performance by reducing contention for memory and processing resources. | https://developer.android.com/topic/performance/threads?hl=nl | CC-MAIN-2021-04 | refinedweb | 2,392 | 52.6 |
Member Since 5 Months Ago
Sinnbeck, I had to go back and edit my question since all of the code was hidden. Thanks in advance.
Started a new Conversation Display Oracle Blob (Image)
I'm new to Laravel, but I have the following code in my controller:

```php
namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Validator;
use DB;
use App\Images;
use Image;

class DisplayImageController extends Controller
{
    public function index()
    {
        $data = DB::table('image_pic_table')->get();
        return view('displayimage', compact('data'));
    }
}
```
Now in my displayimage.blade.php, I just want to display the jpg images from the blob column using a foreach loop. The image_pic_table has two columns: contact_id (integer) and picture, which is a blob column, so I want to display both the contact_id and the picture in a table format. Something like the sketch below is what I'm picturing.
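A rough sketch of the Blade loop I'm aiming for (assuming $row->picture comes back as raw JPEG bytes I can base64-encode; with Oracle it may actually be a stream/OCI-Lob that needs reading first):

```blade
<table>
    <tr><th>Contact ID</th><th>Picture</th></tr>
    @foreach ($data as $row)
        <tr>
            <td>{{ $row->contact_id }}</td>
            <td>
                {{-- inline the blob as a data URI; may need stream_get_contents($row->picture) on Oracle --}}
                <img src="data:image/jpeg;base64,{{ base64_encode($row->picture) }}"
                     alt="Contact {{ $row->contact_id }}" width="100">
            </td>
        </tr>
    @endforeach
</table>
```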
Thanks in advance.
Replied to Need To Track And Monitor Oracle DB Connection
We are migrating from a legacy system to a web application and it's a requirement. I need to be able to see the database connection for each Oracle database user in v$session.
Started a new Conversation Need To Track And Monitor Oracle DB Connection
I'm new to Laravel, and want to perform the following:
Currently, my development website connects as a single 'Admin' user, but I want to know if there is a way to log users into the web application with their own database username and password, so that when they enter their credentials, the application connects to the database as that database user.
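The closest I've come up with is swapping the connection credentials at runtime right after the login form validates. A sketch (the 'oracle' connection name and config keys are my assumptions; I don't know if this is the recommended approach):

```php
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

// Point the 'oracle' connection at the credentials the user typed in.
Config::set('database.connections.oracle.username', $request->input('username'));
Config::set('database.connections.oracle.password', $request->input('password'));

// Drop any cached connection so the next query reconnects as that user,
// which should then show up under their own name in v$session.
DB::purge('oracle');
DB::reconnect('oracle');
```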
The reason is that we need to monitor user activity at the database level, and we would like to know who is currently connected to the database. Thank you in advance.
Awarded Best Reply

Replied
Started a new Conversation Multiple Queries
I'm new to Laravel.
I have a data entry form that requires multiple queries to populate the data entry form but also the drop down from different tables. What is the best way of populating the form including the drop downs without using AJAX. Lets say that I have one controller to display information for the main form, and have 5 different drop downs that requires 5 different queries using Controller/Query Builder. How can this be accomplished? How would I set up the routes? Thank you advance.
Replied to Basic Drop Down
Michael,
I got the following error: Call to a member function isNotEmpty() on string.
I know this can't be this hard, but I'm new to Laravel....
How can I test whether it's retrieving data from $products from main view. Do I need to include this in my web.php route? Thanks in advance.
Replied to Basic Drop Down
James, thanks for you assistance. When I ran the code, I received the following error:
Invalid argument supplied for foreach()
So do I need to add this route to my web.php. Apologize for asking basic question... I have years of programming experience, but new to Laravel.
Started a new Conversation Basic Drop Down
I'm new to Laravel and is currently using Laravel 7. I just need to create a dropdown (product_id, description) , not using ajax, so users can select and eventually update then entire form.
So if I'm using this example controller:
public function test (Request $request)
{
$products = Product::pluck('name', 'id');
$selectedID = 2;
return view('main.index', compact('id', 'products'));
}
My main.blade.php I have the drop downSelect Product @foreach ($products as $key => $value)
<option value="{{ $key }}" {{ ( $key == $selectedID) ? 'selected' : '' }}> {{ $value }} </option>
when I try to run this, it cannot find $products on my main.blade.php and end up with an error. How does it know that $products is coming from this controller? Doesn't seem like it's calling the controller.
Also, how can I dynamically pass the selectedID, so I can set it to the initial value on a form retrieve.
So do I need to add additional routes for this? I'm at a lost. Thanks!
Replied to Encryption Of ID
I think I'm going to use a combination of methods: Authorization and UUID. Really appreciate your assistance!
Replied to Encryption Of ID
I understand but since this is considered a "unique value", I believe I then will need to encrypt this value.
Replied to Encryption Of ID
In an environment that I'm working on, sensitive data needs to be encrypted including the primary key (ID) and other sensitive data such as SSN. That is just a requirement. If possible, I really don't want to pass the primary key value, but I would need to delete and update data without the primary key. So when using a database, I currently have a hidden ID field for each row and when an user updates the form, it passes the primary key (ID) to update and delete. So if I can just update records without passing any data which identifies that particular row (ID) that is preferable. If the ID needs to be passed, then it needs to be encrypted.
Replied to Encryption Of ID
Thank you for the feedback, but lets say I'm trying to update a User table I want to update an user with id = 3 via form, then how would I implement this? In my controller, I have an update statement:
update users set first_name='Test' where ID = ID.
Thanks in advance.
Started a new Conversation Encryption Of ID
I'm new to Laravel, and is currently using Laravel's Datatable. It's a requirement that I encrypt the value of a primary key (ie. ID) which could be either passed as a hidden field or in an URL. What is the best way to implement this. Currently, in the Datatable, I have it hidden, but one can still view the passed value of an ID via browser's Developer's Tool.
Even when I'm not using Laravel's DataTable, I like to encrypt certain fields including the primary key for the entire web site. Thank you in advance.
Replied to Track User Links
Thank you !
Started a new Conversation Track User Links
I'm relatively new to Laravel so this is likely a simple questions. I have a database driven menu and each of the menu have an id associate with it. There are many links. I want to keep track of all of the links that a user visited using an insert statement. What is the best way to implement this? Do I create a listener and pass the ID so I can perform an insert statement? Thanks in advance.
Started a new Conversation Oracle Sequence Number
I'm new to Laravel, and I need to insert an oracle sequence number as the product_id then insert them in Test_Table.
$test_key= DB::select('SELECT new_sequence.NEXTVAL as seq_gen FROM DUAL');
DB::table('Test_Table')->insert(['product_id'=>$test_key, 'second'=>$second, 'another_field'=>$third);
I'm getting error when trying to pass $test_key... it's an array and not an integer. How can I just convert it to an integer. Thanks
Replied to How To Connect And Login As An Oracle User
I'm using the Yajra OCI-8 driver, your script did work.
DB::connection('oracle')->raw('DBMS_Session.Set_Role('ALL')');
However, I want the system to connect as an actual Oracle database User so the system can have database level audit trail for each user.
Started a new Conversation How To Connect And Login As An Oracle User
I'm new to Laravel, and is currently using Laravel 7.
Immediately after a successful user login, how can I immediately connect/reconnect as an actual Oracle User rather than as a web application user then execute DBMS_Session.Set_Role('ALL') which will grant the current user all roles.
It's a requirement and also want to be able to keep an audit trail on each Oracle users until the user logs out. The audit must be performed at the database level. Thanks in advance. | https://laracasts.com/@johnw65 | CC-MAIN-2020-40 | refinedweb | 1,334 | 64 |
Opened 9 years ago
Closed 9 years ago
Last modified 9 years ago
#1652 closed defect (fixed)
sqlite backend executemany -> typo bug
Description
There is a typo in
backends/sqlite/base.py
note how the last line uses a non existing parameter param instead of param_list
def executemany(self, query, param_list): query = self.convert_query(query, len(param_list[0])) return Database.Cursor.executemany(self, query, param)
Change History (1)
comment:1 Changed 9 years ago by adrian
- Resolution set to fixed
- Status changed from new to closed
Note: See TracTickets for help on using tickets.
(In [2710]) magic-removal: Fixed #1652 -- Fixed typo in SQLite executemany() | https://code.djangoproject.com/ticket/1652 | CC-MAIN-2015-18 | refinedweb | 105 | 52.8 |
This control, we can eliminate the necessity of controlling logon through IIS, and enable it through our code. This opens up a considerable area of control through code. We can hence have a control on which user login is requested, and on which domain and all that.
We will discuss the creation of the project and the logic I had in mind while developing it. The completed and tested code, that was developed using VS.NET is attached to this article.
We create two web user controls.
WindowsLoginControl
This has the implementation and UI for the login pane. It has two UIs, one for new users, one for already logged in users. A session variable maintains the state of the login to determine which UI to show. Code in this control calls the
logInUser class' shared method to process the login.
ErrorControl
This has implementation and UI for an error reporting pane. When errors occur, other controls on the page update a session variable, which is checked when this control loads. When there's no error, we display an 'Under construction' message (This may be removed in release versions).
NOTE: We could have implemented the logic of this control also into the
WindowsLoginControl, but having this as a separate control allows us to easily move the control on the UI of a target page in VS.NET.
LoginUserclass with a shared method for processing the login,
This has implementation of the login process. A shared method takes username, password and domain as parameters and tries a Windows logon with the data, and we impersonate the user.
This is to cleanup the sessions, and make the
totalActiveUser count on the system more reliable.
Open an ASP.NET project. Select the project in the Solution Explorer and create a new 'web user control' item. Develop the UI for it.. probably two text boxes.. for username and password and a 'Login' button.
We make another UI, which shows a viewpane with the details of user login.
We show the login form when the user hasn't logged into the system, and show a login details view after the user logs in. Users login with their Windows authentication (this means that we should have created users on the server and the domain for this to work).
The
windowsLoginControl calls shared function
LogInThisUser() of the
LogInUser class which logs-in the user and impersonates the logged-on user. The code that does this is as below.
Dim loggedOn As Boolean = LogonUser(username, _ domainname, password, 3, 0, token1) 'impersonate user Dim token2 As IntPtr = New IntPtr(token1) Dim mWIC As WindowsImpersonationContext = _ New WindowsIdentity(token2).Impersonate
For this, we declare the
loginuser class with the proper namespaces, and in a manner to include unmanaged code. We need unmanaged code to be written, because I believe we don't have a managed code implementation of the
LogonUser function of Windows to do the same.
'include permissions namespace for security attributes 'include principal namespace for windowsidentity class 'include interopservices namespace for dllImports. Imports System.Security.Principal Imports System.Security.Permissions Imports System.Runtime.InteropServices <Assembly: SecurityPermissionAttribute (SecurityAction.RequestMinimum, UnmanagedCode:=True)> Public Class LogInUser <DllImport("C:\\WINDOWS\\System32\\advapi32.dll")> _ Private:\\WINDOWS\\System32\\Kernel32.dll")> _ Private Shared Function GetLastError() As Integer End Function
We can also find whether the
logonuser function generated errors, by calling the
GetLastError method.
We use session variables to keep track of the user's login information and last access. We use an application variable to keep track of the total active users in the system.
Below code is a part of this implementation (can be found in windowsLoginControl.ascx.vb)
Session.Add("LoggedON", True) Session.Add("Username", sRetText) Application.Item("TotalActiveUsers") += 1 lblUserName.Text = Session("Username") lblLastSession.Text = Session("LastActive") lblTotalUsers.Text = Application("TotalActiveUsers")
We keep track of the no. of active users by simply incrementing the value every time the login method succeeds, and decrementing the value every time
session_end event occurs.
Better means to do this can also be used. The idea of this article is only to communicate the logic.
Before testing the project, we should check the following.
We keep the
domainname as constant, rather than taking it from the user as an input. Check whether proper domain name is assigned to the constant.
Private Const domainName = "TestDomain"
Check whether location of the DLLs that are being imported are proper.
<DllImport("C:\\WINDOWS\\System32\\advapi32.dll")>
Check whether the logoff page has the correct page name and path to transfer the user, once cleanup is done.
Server.Transfer("webform1.aspx")
It has to be taken care that code implemented doesn't allow for inappropriate usage through various
userLogins.
I preferred to keep the domain name hard-coded into the application through a constant rather than accept it as an user input... so that it's easy to limit or monitor user login sessions.
In case of intranet projects, we can create a separate domain, and user group for the project and use the above logic to allow users to login to the system only on the particular domain. May be you can call this an 'Idea' :o)
To implement the web user controls in a web project, we simply copy the files related to the two controls, the
loginuser class, the logoff user page, to our new web project, and also copy the code from our global.asax.vb to the new project's global.asax.vb.
In VS.NET, these copied files can easily be included in the target project by right clicking and selecting 'Include in Project' in Solution Explorer.
The code that's been worked out in this article will authenticate users on only one page of the web application. Normally, a web application will have content inside the site to be viewed by authenticated users.. in this case, the controls will have to have a mechanism of holding the user's authentication across page requests. This can be done by holding the
windowsIdentity object of the authenticated user in a session variable, and allowing users rights on pages by using
FileIOPermission and other classes in the
System.Security namespace.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/web-security/ASPdotnet_LoginControl.aspx | crawl-002 | refinedweb | 1,033 | 56.76 |
View Source MyXQL
MySQL driver for Elixir.
Documentation:
Features
- Automatic decoding and encoding of Elixir values to and from MySQL text and binary protocols
- Supports transactions, prepared queries, streaming, pooling and more via DBConnection
- Supports MySQL 5.5+, 8.0, and MariaDB 10.3
- Supports
mysql_native_password,
sha256_password, and
caching_sha2_passwordauthentication plugins
Usage
Add
:myxql to your dependencies:
def deps() do [ {:myxql, "~> 0.6.0"} ] end
Make sure you are using the latest version!
iex> {:ok, pid} = MyXQL.start_link(username: "root") iex> MyXQL.query!(pid, "CREATE DATABASE IF NOT EXISTS blog") iex> {:ok, pid} = MyXQL.start_link(username: "root", database: "blog") iex> MyXQL.query!(pid, "CREATE TABLE posts IF NOT EXISTS (id serial primary key, title text)") iex> MyXQL.query!(pid, "INSERT INTO posts (`title`) VALUES ('Post 1')") %MyXQL.Result{columns: nil, connection_id: 11204,, last_insert_id: 1, num_rows: 1, num_warnings: 0, rows: nil} iex> MyXQL.query(pid, "INSERT INTO posts (`title`) VALUES (?), (?)", ["Post 2", "Post 3"]) %MyXQL.Result{columns: nil, connection_id: 11204, last_insert_id: 2, num_rows: 2, num_warnings: 0, rows: nil} iex> MyXQL.query(pid, "SELECT * FROM posts") {:ok, %MyXQL.Result{ columns: ["id", "title"], connection_id: 11204, last_insert_id: nil, num_rows: 3, num_warnings: 0, rows: [[1, "Post 1"], [2, "Post 2"], [3, "Post 3"]] }}
It's recommended to start MyXQL under supervision tree:
defmodule MyApp.Application do use Application def start(_type, _args) do children = [ {MyXQL, username: "root", name: :myxql} ] Supervisor.start_link(children, opts) end end
and then we can refer to it by its
:name:
iex> MyXQL.query!(:myxql, "SELECT NOW()").rows [[~N[2018-12-28 13:42:31]]]
Mariaex Compatibility
See Mariaex Compatibility page for transition between drivers.
Data representation
MySQL Elixir ----- ------ NULL nil bool 1 | 0 int 42 float 42.0 decimal #Decimal<42.0> # (1) date ~D[2013-10-12] # (2) time ~T[00:37:14] # (3) datetime ~N[2013-10-12 00:37:14] # (2), (4) timestamp ~U[2013-10-12 00:37:14Z] # (2), (4) json %{"foo" => "bar"} # (5) char "é" text "myxql" binary <<1, 2, 3>> bit <<1::size(1), 0::size(1)>> point, polygon, ... %Geo.Point{coordinates: {0.0, 1.0}}, ... # (6)
Notes:
When using SQL mode that allows them, MySQL "zero" dates and datetimes are represented as
:zero_dateand
:zero_datetimerespectively.
Values that are negative or greater than
24:00:00cannot be decoded
Datetime fields are represented as
NaiveDateTime, however a UTC
DateTimecan be used for encoding as well
MySQL added a native JSON type in version 5.7.8, if you're using earlier versions, remember to use TEXT column for your JSON field.
See "Geometry support" section below
JSON support
MyXQL comes with JSON support via the Jason library.
To use it, add
:jason to your dependencies:
{:jason, "~> 1.0"}
You can customize it to use another library via the
:json_library configuration:
config :myxql, :json_library, SomeJSONModule
Geometry support
MyXQL comes with Geometry types support via the Geo package.
To use it, add
:geo to your dependencies:
{:geo, "~> 3.3"}
Note, some structs like
%Geo.PointZ{} does not have equivalent on the MySQL server side and thus
shouldn't be used.
If you're using MyXQL geometry types with Ecto and need to for example accept a WKT format as user input, consider implementing an custom Ecto type.
Contributing
Run tests:
git clone [email protected]:elixir-ecto/myxql.git cd myxql mix deps.get mix test
See
scripts/test-versions.sh for scripts used to test against different server versions.
License
The source code is. | https://hexdocs.pm/myxql/readme.html | CC-MAIN-2022-33 | refinedweb | 567 | 58.89 |
The Qt namespace contains miscellaneous identifiers used throughout the Qt library. More...
#include <Qt>
The Qt namespace contains miscellaneous identifiers used throughout the Qt library..
An anchor has one or more of the following attributes:
This enum describes attributes that change the behavior of application-wide features. These are enabled and disabled using QCoreApplication::setAttribute(), and can be tested for with QCoreApplication::testAttribute().
Menus that are currently open or menus already created in the native Mac OS X menubar may not pick up a change in this attribute. Changes in the QAction::iconVisibleInMenu property will always be picked up..
On X11 this value is used to do a move.
TargetMoveAction is not used on the Mac.
The DropActions type is a typedef for QFlags<DropAction>. It stores an OR combination of DropAction values.(). describes how the items in a widget are sorted.().Application:()..
This enum type is used to specify the current state of a top-level window.
The states are
The WindowStates type is a typedef for QFlags<WindowState>. It stores an OR combination of WindowState values..().. | http://doc.qt.nokia.com/4.5-snapshot/qt.html#ConnectionType-enum | crawl-003 | refinedweb | 178 | 59.19 |
/* prot.c Protocol support routines to move commands and data around. Copyright (C) 1991, 1992, 1994_rcsid[] = "$Id: prot.c,v 1.33 2002/03/05 19:10:41 ian Rel $"; #endif #include <errno.h> #include "uudefs.h" #include "uuconf.h" #include "system.h" #include "conn.h" #include "prot.h" /* Variables visible to the protocol-specific routines. */ /* Buffer to hold received data. */ char abPrecbuf[CRECBUFLEN]; /* Index of start of data in abPrecbuf. */ int iPrecstart; /* Index of end of data (first byte not included in data) in abPrecbuf. */ int iPrecend; /* We want to output and input at the same time, if supported on this machine. If we have something to send, we send it all while accepting a large amount of data. Once we have sent everything we look at whatever we have received. If data comes in faster than we can send it, we may run out of buffer space. */ boolean fsend_data (qconn, zsend, csend, fdoread) struct sconnection *qconn; const char *zsend; size_t csend; boolean fdoread; { if (! fdoread) return fconn_write (qconn, zsend, csend); while (csend > 0) { size_t crec, csent; if (iPrecend < iPrecstart) crec = iPrecstart - iPrecend - 1; else { crec = CRECBUFLEN - iPrecend; if (iPrecstart == 0) --crec; } if (crec == 0) return fconn_write (qconn, zsend, csend); csent = csend; if (! fconn_io (qconn, zsend, &csent, abPrecbuf + iPrecend, &crec)) return FALSE; csend -= csent; zsend += csent; iPrecend = (iPrecend + crec) % CRECBUFLEN; } return TRUE; } /* Read data from the other system when we have nothing to send. The argument cneed is the amount of data the caller wants, and ctimeout is the timeout in seconds. The function sets *pcrec to the amount of data which was actually received, which may be less than cneed if there isn't enough room in the receive buffer. If no data is received before the timeout expires, *pcrec will be returned as 0. If an error occurs, the function returns FALSE. If the freport argument is FALSE, no error should be reported. */ boolean freceive_data (qconn, cneed, pcrec, ctimeout, freport) struct sconnection *qconn; size_t cneed; size_t *pcrec; int ctimeout; boolean freport; { /* Set *pcrec to the maximum amount of data we can read. fconn_read expects *pcrec to be the buffer size, and sets it to the amount actually received. */ if (iPrecend < iPrecstart) *pcrec = iPrecstart - iPrecend - 1; else { *pcrec = CRECBUFLEN - iPrecend; if (iPrecstart == 0) --(*pcrec); } #if DEBUG > 0 /* If we have no room in the buffer, we're in trouble. The protocols must be written to ensure that this can't happen. */ if (*pcrec == 0) ulog (LOG_FATAL, "freceive_data: No room in buffer"); #endif /* If we don't have room for all the data the caller wants, we simply have to expect less. We'll get the rest later. */ if (*pcrec < cneed) cneed = *pcrec; if (! fconn_read (qconn, abPrecbuf + iPrecend, pcrec, cneed, ctimeout, freport)) return FALSE; iPrecend = (iPrecend + *pcrec) % CRECBUFLEN; return TRUE; } /* Read a single character. Get it out of the receive buffer if it's there, otherwise ask freceive_data for at least one character. This is used because as a protocol is shutting down freceive_data may read ahead and eat characters that should be read outside the protocol routines. We call freceive_data rather than fconn_read with an argument of 1 so that we can get all the available data in a single system call. The ctimeout argument is the timeout in seconds; the freport argument is FALSE if no error should be reported. This returns a character, or -1 on timeout or -2 on error. 
*/ int breceive_char (qconn, ctimeout, freport) struct sconnection *qconn; int ctimeout; boolean freport; { char b; if (iPrecstart == iPrecend) { size_t crec; if (! freceive_data (qconn, sizeof (char), &crec, ctimeout, freport)) return -2; if (crec == 0) return -1; } b = abPrecbuf[iPrecstart]; iPrecstart = (iPrecstart + 1) % CRECBUFLEN; return BUCHAR (b); } /* Send mail about a file transfer. We send to the given mailing address if there is one, otherwise to the user. */ boolean fmail_transfer (fsuccess, zuser, zmail, zwhy, zfromfile, zfromsys, ztofile, ztosys, zsaved) boolean fsuccess; const char *zuser; const char *zmail; const char *zwhy; const char *zfromfile; const char *zfromsys; const char *ztofile; const char *ztosys; const char *zsaved; { const char *zsendto; const char *az[20]; int i; if (zmail != NULL && *zmail != '\0') zsendto = zmail; else zsendto = zuser; i = 0; az[i++] = "The file\n\t"; if (zfromsys != NULL) { az[i++] = zfromsys; az[i++] = "!"; } az[i++] = zfromfile; if (fsuccess) az[i++] = "\nwas successfully transferred to\n\t"; else az[i++] = "\ncould not be transferred to\n\t"; if (ztosys != NULL) { az[i++] = ztosys; az[i++] = "!"; } az[i++] = ztofile; az[i++] = "\nas requested by\n\t"; az[i++] = zuser; if (! fsuccess) { az[i++] = "\nfor the following reason:\n\t"; az[i++] = zwhy; az[i++] = "\n"; } if (zsaved != NULL) { az[i++] = zsaved; az[i++] = "\n"; } return fsysdep_mail (zsendto, fsuccess ? "UUCP succeeded" : "UUCP failed", i, az); } | http://opensource.apple.com/source/uucp/uucp-11/uucp/prot.c | CC-MAIN-2014-52 | refinedweb | 781 | 73.27 |
Printing barcodes in Windows Service projects using C# or VB.NET
November 10, 2010 Leave a comment
Some weeks ago, a customer contacted us about some intermittent issues he had when printing barcodes in a Windows Service using our Barcode Professional SDK for .NET
Barcode Pro SDK for .NET is written using 100% managed code based on the “standard” drawing engine provided by .NET Framework which is basically a wrapper around GDI+, that’s System.Drawing classes.
The printing functionality using the “standard” drawing engine is mainly under System.Drawing.Printing namespace and the key class behind it is PrintDocument which we used in some guides demoing barcode printing using our Barcode Pro SDK for .NET
Although this approach seems to be working fine in Windows Forms applications, it seems that in Windows Service it has some limitation or “not supported” tag as is stated in this page which basically says:
Caution
Classes within the System.Drawing.Printing namespace are not supported for use within a Windows service or ASP.NET application or service. Attempting to use these classes from within one of these application types may produce unexpected problems, such as diminished service performance and run-time exceptions.
Sadly, there’s no solution to that if try to use our Barcode Pro SDK for .NET (or Barcode Pro for Windows Forms) in Windows Service scenarios. However, one possible alternate approach to bypass this issue is to replace the “drawing” engine. Another drawing engine in .NET Framework is the one based on WPF (Windows Presentation Foundation) which was introduced in .NET 3.0
WPF drawing engine is NOT based on GDI+ but on DirectX and the printing classes is available under System.Printing
It seems that WPF printing can be safely used in Windows Service projects. Taken advantage of this, we wrote a guide for printing barcodes in Windows Service using our Barcode Professional for WPF (which is 100% based on WPF drawing engine and NOT on GDI+) In that guide we tried to reproduce two common scenarios i.e. printing a single page document as well as a multipage document with barcodes.
We hope that guide helps you as a starting point for more complex printing scenarios. | https://neodynamic.wordpress.com/2010/11/10/printing-barcodes-in-windows-service-projects-using-c-or-vb-net/ | CC-MAIN-2017-34 | refinedweb | 368 | 54.32 |
Since I published A quest for pattern-matching in Ruby 3 years ago, I’ve been called “pattern matching guy" more than once. So obviously, when I learned that PM is inevitably coming to the Ruby core, I was curious to check it out. First Impressions have already been published, so this is "Second Impressions", from my point of view.
Heads up: it’s very subjective.
I’m mostly judging it by examples provided in the original Redmine ticket, such as:
class Array
alias deconstruct itself
end
case [1, 2, 3, d: 4, e: 5, f: 6]
in a, *b, c, d:, e: Integer | Float => i, **f
p a #=> 1
p b #=> [2]
p c #=> 3
p d #=> 4
p i #=> 5
p f #=> {f: 6}
e #=> NameError
end
First of all, the code is ugly in a way that makes it hard to reason about. It looks like being added on top of a language which was not designed to support pattern matching (which is exactly the case). This might not be important in the long run, when people get used to it – but here it is, in the second impressions round.
Destructuring (why was it called deconstruction?) looks nice, but I would remove the pipe thingy. Instead of e: Integer | Float => i (which is terribly ambiguous – is it e: (Integer | Float) => i or ((e: Integer) | (Float)) => i, or something else?) it would be better to have a possibility to define a type union like in Pony. For example:
number = typeunion(Integer | Float) # hypothetic keyword typeunion
case n
in number
puts "I’m a number"
in String
puts "I’m string"
end
Besides that it’s good, especially for getting things out of hashes.
But probably my most important problem with this proposal is that it does not let me have multiple functions defined with different type signatures, ruled by pattern matching. This is what I’m mostly missing on a daily basis working with Ruby, while having it available in Erlang or Elixir. To give you a taste of what I’m talking about:
class Writer
def write_five_times(text => String)
puts text * 5
end
def write_five_times(text => Integer)
puts text.to_s * 5
end
def write_five_times(text => Object)
raise NotImplementedError
end
end
Of course, to achieve what’s in code listing above, it would be much larger and complicated change. Basically it would be like introducing proper types to Ruby. It needs to allow having one method defined mutiple times in one class, but without shadowing previous definitions. I don’t think that Ruby will ever go this way, yet this is something that would clean up my code in few places significantly.
I also realised that while working on Noaidi – my implementation of pattern matching. I don’t really want plain pattern matching somewhere in the code, as I can make most cases work with good old case in Ruby. But I would like to be able to write modules that behave kind of like the ones in Elixir.
And this is being made possible in Noaidi. I have an experimental branch enabling this and I hope I will be able to finish it some day. Such module would look like this:
module Math
extend Noaidi::DSL
fun(:fib, 0..1) { 1 }
fun(:fib, Integer) { |n| add(fib(n-1), fib(n-2)) }
fun(:fib, Object) { raise NotImplementedError }
funp(:add, Integer, Integer) { |a,b| a + b }
end
Math.fib(1) #=> 1
Math.fib(20) #=> 10946
Math.fib("test") #=> NotImplementedError
Math.add(1,3) #=> NoMethodError (private method `add’ called for Math:Module)
Verdinct: I’m kind of disappointed. The quest is not over yet.
This has been also posted on my personal blog.
Link: | https://jsobject.info/2019/04/19/ruby-pattern-matching-second-impressions/ | CC-MAIN-2019-35 | refinedweb | 615 | 70.13 |
Using.
Feeds displayed in Outlook look just like mail folders, and show the unread count next to the feed name. Michael describes how to setup this type of Search Folder in this OfficeHours article.
Another handy feature is that RSS in Outlook is that the Outlook feed list can be synchronized with the Windows Common Feed List, which is the list of RSS feeds maintained by Internet Explorer 7 (or later). Outlook prompts you to enable this functionally on first startup, and you can later change that decision through Tools, Options.
Feeds from Internet Explorer and other Common Feeds List clients automatically appear in Outlook
When the synchronize option is turned on, any feed added to either IE or Outlook is added to the other automatically. Deleting a feed from Outlook, however, does not delete the feed from IE, so if you have a feed that is publishing a large volume of posts or very large posts that you do not want filling up your mailbox, you can delete the feed from Outlook but still read it in Internet Explorer. This way you can keep your favorite feeds - the feeds that you always want to download – in Outlook and any feeds you just want to check on occasionally in Internet Explorer. Doing so is especially handy for those who have slow internet connections and who want to minimize RSS-related network activity from Outlook.
The option to download enclosures is also very convenient. Some RSS feeds have attachments to their RSS items to supplement the content of the article, like a song file, picture, or Office document. Also, some RSS feeds only post minimal data to their RSS items, such as an article summary and a link to the full article. Outlook makes it possible to download the article text as an attachment to the RSS item for viewing offline. By default, both of these features are turned off, but you may activate either feature by editing a feed’s properties. To edit the properties of a feed, select Tools, then Account Manager from the main Outlook window. Select the RSS Feeds tab to see a list of the RSS Feeds Outlook is downloading for you, and use the Change button to modify a feed’s properties.
You can also choose to download attachments or update an item directly from the RSS post in Outlook. Click the information bar at the top of the item to view the full article in your web browser, or download additional content.
A number of RSS related improvements were made in the 2007 Microsoft Office Suite Service Pack 1 (SP1).
Most notably, the downloading of duplicate items was the number one customer issue with RSS in Outlook 2007 and we have made improvements to drastically reduce the number and frequency of duplicates.
Another common issue that was resolved in SP1 is that certain feeds would show all items were posted on 12/31/2006 or 1/1/2007.
Additionally we made performance improvements when synchronizing RSS items to the Common Feed List managed by Internet Explorer. This will speed up Outlook if you keep your Outlook feeds synchronized to the Common Feed List.
There are certainly lots of different ways you can use RSS inside of Outlook 2007, and I hope you find some of these tips and tricks useful as you explore on your own.
Thanks for taking a look at RSS in Outlook,
Christopher Stuart Outlook Software Design Engineer in Test
Hi,
I've written a little IE Plugin to subscribe per one click to a RSS Feed. This feature I missed in the Windows RSS Plattform.
Here is the download:
If it is interesting for you I can translate it to English. At the moment the post is in German. ;-)
This was one of the features I really was looking forward to i Office 2007 and I used it alot in the beginning. Now I've switched back to Google Reader because of to things:
1. I kept getting duplicate items. Hopefully this is fixed in SP1.
2. Search folders like Unread email counted unread feed items as well, which made in unusable. Feeds are not email. Period. Email have a much higher priority than feeds and should not be included in the unread email search folder.
I ended up with a custom search folder, but it didnt work that well since my custom search folder kept counting unread mail in deleted items, drafts and junk email.
If you fix the search folders I would be glad to use Outlook as a rss reader again.
I have been using the RSS features of IE7 for my blogs.msdn tracking needs. I may need to give Outlook a second look.
JamesNT
Christopher,
Thanks for the interesting article.
I have a couple of questions regarding RSS in Outlook 2007.
1. Why doesn't it use the common feed list directly? I'm really bewildered by this. Microsoft introduces common feed engine for Windows ... Microsoft does not use it in Office ?
2. For exchange users, why are RSS subscriptions local rather than server based? If you have, say, a laptop and a desktop which sync to the same Exchange mailbox, the RSS feeds get messed up (in my experience). Now if you had made RSS a feature of Exchange, rather than Outlook, that would be a good reason *not* to use the common feed list.
We seem to have the worst of both worlds.
Tim
The biggest thing that bugs me about RSS feeds in Outlook is that you can't edit the feed URL without deleting the feed and then recreating it. And then it re-downloads all of the items that it's already got.
Oh, and logging into someone else's mailbox once you've set up the RSS feeds in Outlook, downloads all the RSS items into that mailbox. Which is highly annoying if it's an administrative mailbox, or the MD's mailbox.
Basically, I think that the RSS implementation in Outlook is only partially thought through and because of this it has some serious flaws in it.
I see 2 problems with RSS feeds in Outlook.
1) You can't edit them and change the URL (I subscribe to a feed that regularly has it's URL changed). In order to do this at present, you have to delete the subscription and then recreate it. And then it downloads everything again.
2) Outlook downloads all the RSS items even if you're logged into someone elses mailbox. I have to regularly check an administration mailbox (and our MD's mailbox). Logging into them results in RSS items from the feeds I'm subscribed to being downloaded into that mailbox.
I see these as serious flaws in the RSS features of Outlook.
Johan Nordberg:
We have made significant improvements with regards to RSS in Outlook dealing with duplicates. I would also like to mention that there are some feeds that have a habit of re-posting their posts, effectively creating duplicates on their end.
As for your concern when using search folders, you can tell a search folder to ignore RSS posts. To do this, go to the search folder in question and right-click on it. From the right-click menu, select "Customize this search folder...", then from the dialog that appears, select "browse" from "Mail from these folders will be included in this search folder". From there, you can select which folders to aim the search folder at, and exclude the RSS folder.
Tim Anderson:
The Common feeds list is what IE (Internet Explorer) uses to check RSS feeds. The separate list for Outlook was introduced so that you could have feeds appearing in Outlook but not in IE and vice versa. One scenario where this might be of good use is if you are subscribed to a feed that makes a lot of very large posts. For performance reasons, you may not want to have this feed downloading to Outlook all the time, but you may still want to look at it occasionally in IE. Also, Outlook saves any RSS feeds to your exchange account, so if you have multiple accounts on multiple machines, RSS feeds will propagate from one account to another. If you did not have the ability to turn that off (which is what relying on the CFL would cause), then any feed you subscribed to on IE on any computer with your account would immediately appear on any other computer without your ability to control it.
RSS is propagated on exchange. Why is it that you believe that it is not?
Mark:
How many RSS feeds do you subscribe to that frequently change URLs? We weren't really looking at that as a common user scenario when we were working on the feature.
As for your administrative mailbox picking up RSS feeds, when you want to log on to that account, just turn off sync to the Common Feeds List and it won't pick up any additional RSS feeds. To do that, please go to tools->Options...->"Other" tab->"Advanced Options..." button. On the dialog that appears, look for "Sync RSS Feeds to the Common Feeds List" and turn it off. That will stop unwanted feeds from appearing in your other accounts.
Regards,
-Chris Stuart
I cannot believe this type of lousy, clunky and kludgy feature is actually released in mainstream Office product. I tried to use this feature and I'm so infinitely frustrated that I never ever gonna touch it with 10 foot pole in the future. I trusted the Outlook product team and imported all of my 2500 feeds. Now I'm trying to get read of those feeds and guess what? I'm supposed to delete this one by one whole day long. I'm not sure who even coded this thing and dared to release it out. Seriously. Outlook is frustratingly outdated as it is. You don't have to top this off.
I want to display the latest three rss pages (title as a link) in an email. Basically I want this to function like a google gadget in my email. How do I do this?
I have to agree with Mark. The RSS URL's don't change often but they change enough to make you aware of the lack of feature and piss you off when it does change. For this reason I've dropped outlook as my preferred RSS reader.
Also fact that for some reason it seems to just stop getting the feeds even though it's running and patched to the latest level. The fact you can't get to the URL's just compounds the problem of recreating the RSS feed to overcome the bug.
Oh well it was a good idea but not executed with user flexibility in mind nor stability. Back to a third party free reader without the issues or limitations.
The feeds just stopping downloading/displaying new items seems to be a common thing too.
Lost syncronization. Some of the feeds are no longer updated in Outlook 2007. There is no easy way to fix this without getting duplicat feeds.
Why wasn't this fixed? there are loads of us that have this problem.
my #1 complaint for 07 outlook rss is the fact that i cant "merge" rss folders. for example, i want to have all of my gaming rss sites in one folder and all of my tech/work rss in another.
I've tried to find away around this (editing the outlook today layout) but i cant find one solid way to fix this.
maybe a patch? :)
I have so many feeds hundreds and now I want to delete them. Why can't I just hit CTRL and click each RSS folder and delete them ALL? One by one will take hours!!
I keep getting the following error:
Task 'RSS Feeds' reported error (0xE7240155) : 'Synchronization to RSS Feed:""">" has failed. Outlook cannot download the RSS content from because of a problem connecting to the server.'
Task 'RSS Feeds' reported error (0xE7240155) : 'Outlook cannot download the RSS content from because of a problem connecting to the server. An OLE registration error occurred. The program is not correctly installed. Run Setup again for the program.'
I've noticed that one of my RSS feed "disappeared" from the folder list (below RSS Feed). However, when I add it (again) I'm told it's already there. How can I enable the folder to reappear?
i am experiencing failure of Outlook 2007 SP1 to re-check the feed. it checks the feed when created, when the program re-starts and when you manually do a send/receive; but, otherwise does not check the feed during the automatic send/receives. it recognizes the feed's download interval. i have tried this with 4 different rss feeds. same results. any ideas?
I understand why feeds deleted in IE are not deleted from Outlook (and those deleted from Outlook not deleted from IE) but why isn't there an option to turn this feature off ?
Actually I'd prefer having only one feed list for all programs (IE, Outlook, Sidebar Gadget, etc. ..) in my computer.
I would like to be able to retrieve posts that are older than x number of days. As i just started using outlook 2007, i only have feeds since a few days before i started using the software. How can retrive all those archived posts?
Jeremy Behmoaras > I don't think you can : feeds are published by their own editors in an XML format (the feed itself).
This XML Document contain only the latest headlines. The exact number of these headlines is choosed by the editor, and, each time a new entry is added in the feed, the oldest one is removed (it's a FIFO behaviour).
Some RSS clients (like Outlook and the Windows RSS Platform) keep the older feed entries in some kind of local store, so you can always see these entries, but these are no longer publish by their original editors.
Hi Chris. I really like the Outlook RSS implementation - very convenient. One problem I'm having though and maybe I just set it up wrong.
I created subforlders in Outlook for various groups of feeds (Microsoft Feeds, etc). But while the feeds themselves seem to be present in both IE7 and Outlook 07, the folder strucure isn't synced. So, when I try to subscribe to a feed with IE, I can't simply add it to the appropriate folder that I set up in Outlook. Any thoughts?
What I have to do now is hit the subscribe button in IE, copy the feed address from the address line, switch to Outlook, go to tools/account/rss/new, add it and then change to the folder where I want it to live.
I'm sure I probably just got into a bad habit and maybe set things up wrong, but any thoughts on how I can streamline adding feeds to Outlook from IE? Thanks.
I've used RSS feeds with Outlook 2007 with no problem. I have now loaded up a "corporate image" on a notebook with everything setup for me, kind of. The RSS Feeds folder is still in the list underneath the Exchange mailbox, but none of the feeds are getting updated.
Upon further review, I have found that RSS is no longer setup as an account. I tried going into mail setup to add RSS but there isn't an option for it.
Could the "corporate image" be built without RSS support? Is there anyway to add it?
Thanks!
Chris, I use this all the time and really like it. My problem is that one of my RSS feeds is from a SharePoint list and for some reason, Outlooks is marking five items in the list as unread over and over again, even though the feed in IE does not. Any ideas?
I went on vacation last week and when I came back all of my RSS feeds are not being updated. I have spent a considerable amount of time on this and finally found the Send/Receive Group option. When I go to edit my default send/receive group and click on RSS there are only new ones added. It completely removed my regular RSS feeds.
The feeds are still in the Common Feeds List because I can see them in Internet Explorer. They are all being updated and everything looks good there but Outlook seems to have totally dropped them.
The RSS capability seems like such a basic feature to include in Outlook and I am really disappointed that it functions so poorly. I will have to explore other RSS reader options until this product matures.
Regarding deleting multiple rss items. Am I missing something? I just press control+A to select all items, then just place them in deleted items.
I'm very frustrated with the Outlook RSS integration. I currently do not receive any updates to any of the RSS feeds to which I am subscribed. Looking at my Groups, all the check marks are set to down load updates, I don't have any synchronization filters set. All in all, this is a bad user experience and it does not do the job it was intended to do - download posts.
I run Outlook 2007 SP1 on Windows Server 2003 and manually add the RSS URL's using the "Add a New RSS Feed..." menu item.
Hi Chris: RSS feed option does not show in my account settings/ options. When I try to add RSS feed, I get a message stating that administrator has turned off this option. It is my personal computer, and there is no other administrator. I suspect the problem originates during a disk clean up. It was working before that. could you or anyone on this forum advise me what to do? Thanks
Can someone explain what makes a feed show up as unread? Is it the pubDate or guid or the fact that the content has changed? Thanks.
I've noticed that Outlook 2007 never updates most of my feeds. I found this post by searching for the problem, and saw that there was an "Update Limit" setting. Since most RSS publishers don't include their update rate (which I discovered is set by a sy namespace setting below), I unchecked this box. But how do I force Outlook to refresh a feed, as even after unchecking this option, the feeds haven't refreshed.
xmlns:sy=""
<sy:updatePeriod>daily</sy:updatePeriod>
<sy:updateFrequency>6</sy:updateFrequency>
<sy:updateBase>2000-01-01T12:00+00:00</sy:updateBase>
Hi
I need to read feeds offline quite often, and most of the feeds I read have only minimal data in the feed itself, so I have set up Outlook to download whole article as HTML. But when I am offline, I find that none of teh images have been downloaded with the whole article.
Any suggestions?
Regards.
Kinshuk
Punit, I'm having the same problem with the RSS Feed option tab missing from the Account Settings. Although the RSS Feeds folders is listed in my Mailbox. We're setup a little different because we do have an Exchange server. However I'm not sure that is the problem, because the RSS Feeds works on other computers on the network. I pretty sure it's specific to the computer b/c no matter who logs on to it and setups Outlook 2007 client, we get the same message: administrator has turned off this option.
Did you find a solution?
There seems to be a lot of chatter on the net about RSS feeds in Outlook failing to update. I too have encountered this problem.
I believe my problems may stem from installing Windows XP on another partition and then accessing the Outlook PST file from Outlook 2007 installed on the new partition. When I return to the old partition my existing RSS feeds were no longer updating. Now only RSS feeds created after that date are being processed.
Hope this helps you sort this problem out. Outlook 2007's RSS feature is nice, but the glitches are a real killer.
Trademarks |
Privacy Statement | http://blogs.msdn.com/outlook/archive/2008/02/08/using-rss-feeds-in-outlook-2007.aspx | crawl-002 | refinedweb | 3,382 | 71.55 |
Windows Azure Accelerator for Web Roles (Year of Azure Week 2)
Tired of it taking 10-30 minutes to deploy updates to your website? Want a faster solution that doesn’t introduce issues like losing changes if the role instance recycles? Then, you need to look into the newly released Windows Azure Accelerator for Web Roles.
The short version is that this accelerator allows you to easily host and quickly update multiple web sites in Windows Azure. The accelerator will host the web site’s for us and also manage the host headers for us. I will admit I’m excited to start using this. We’re locking down details for a new client project and this will fit RIGHT into what we’re intending to do. So the release of this accelerator couldn’t be more timely. At least for me.
Installation
I couldn’t be more pleased with the setup process for this accelerator. It uses the web platform installer complete with dependency checking. After downloading the latest version of the Web Platform Installer (I’d recently rebuilt my development machine), I was able to just run the accelerator’s StartHere.cmd file. It prompted me to pull down a dependency (like MVC3, hello!) and we were good to go. The other dependency, Windows Azure Tools for VS2010 v1.4 I already had.
I responded to a couple y/n prompts and things were all set.
Well, almost. You will need a hosted Windows Azure Storage account and a hosted service namespace. If you don’t already have one, take advantage of my Azure Pass option (to the right). If you want to fully use this, you’ll also need a custom domain and the ability to create a forwarding entry (aka c-name) for the new web sites.
Creating our first accelerated site
I fired up Visual Studio and kicked off a new cloud project. We just need to note that we now have a new option, a “Windows Azure Web Deploy Host” as shown below.
Now to finish this, you’ll also need our Azure Storage account credentials. Personally, I set up a new one specifically to hold any web deploys I may want to do.
Next, you’ll be asked to enter in a username and password to be used for administering the accelerator via its web UI. A word of warning, there does not appear to be any password strength criteria, so please enter in a solid, strong password.
Once done, the template will generate a cloud service project that contains a single web role for our web deploy host/admin site. The template will also launch a Readme.htm file that explains how to go about deploying our service. Make sure not to miss the steps for setting up the remote desktop. Unfortunately, this initial service deployment will take just as long as always. It also deploys 2 instances of the admin portal, so if you want to save a couple bucks, you may want to back this off to 1 instance before deploying.
You’ll also want to make sure when publishing the web deploy host, that its setup for remote desktop. If you haven’t done this previously, I recommend look at Using Remote Desktop with Windows Azure Roles on MSDN.
NOTE: Before moving on to actually doing a web deploy, I do want to toss out another word of caution. The Windows Azure storage credentials and admin login we entered were both stored into the service configuration for the admin site role. And these are stored in CLEAR TEXT. So I you have concerns about security of these types of things, you may want to make some minor customizations here.
Deploying our web site
There are two steps to setting up a new site that we’ll managed via the accelerator, we will have to define the site in the accelerator admin portal, and also create our web site project in Visual Studio. Its important to note that its not required that this site be a Windows Azure Web Role. But not using that template will limit you a bit. Namely, if you plan to leverage Windows Azure specific features you will have a little extra work to do. It could be a simple as adding assembly references manually, to having to write some custom code. So pick what’s right fo
r your needs.
I started by following the accelerator’s guidance and defining my web site in the accelerator host project I just deployed. I enter a name and description and just keep the rest of the fields at their defaults.
A quick check of its status shows that its been created, but isn’t yet ready for use. You’ll also see one row on the ‘info’ page for each instance of our service (in my case only one because I’m cheap *grin*).
One thing I don’t get about the info page is that the title reads “sync status”. I would guess this is because it shows me the status of any deployments being sync’d. I agree with the theory of this, but I think the term could mislead folks. Anyways… moving on, we have a site to create.
I fire up Visual Studio 2010 and do a file->new->project. Unless the accelerator documentation, I’m going to create a web role instead (I use the MVC2 template). Next, we’ll build and publish it. You will get prompted for a user name and password, so use the remote desktop credentials you used when publishing the service host initially.
Things should go fairly smooth, but in my case I get a 500 error (internal error in the service) when trying to access to the MSDEPLOYAGENTSERVICE. So I RDP’d into the service to see if I could figure out what went wrong. I’m still not certain what is wrong, but eventually it started working. It may have been me using the Windows Azure portal to re-configure RDP. I could previously RDP into the box just fine, but I couldn’t seem to get the publish to work. I just kept getting a internal server error (500). Oh well.
Once everything is ok, you’ll see that that the site’s status has changed to “deployed”.
Its fast, but so what?
So, within 30 seconds, we should be seeing the new site up and running. This is impressive, but is that all? Actually no. What the accelerator is doing is managing the IIS host header stuff, making it MUCH easier to do multiple web sites. This could be just managing your own blog and a handful of other sites. But say you have a few, or a hundred web sites that you need to deploy to Azure, this can make it pretty easy. ISV’s would eat this up.
I could even see myself using this for doing demos.
Meanwhile, feel free to reverse engineer the service host project. There’s certainly some useful gems hidden in this code.
Pingback: JrzyShr Dev Guy | https://brentdacodemonkey.wordpress.com/2011/07/13/windows-azure-accelerator-for-web-roles-year-of-azure-week-1/ | CC-MAIN-2018-26 | refinedweb | 1,181 | 72.76 |
10.2. Applying a linear filter to a digital signal
Linear filters play a fundamental role in signal processing. With a linear filter, one can extract meaningful information from a digital signal.
In this recipe, we will show two examples using stock market data (the NASDAQ stock exchange). First, we will smooth out a very noisy signal with a low-pass filter to extract its slow variations. We will also apply a high-pass filter to the original time series to extract the fast variations. These are just two common examples among a wide variety of applications of linear filters.
How to do it...
1. Let's import the packages:
import numpy as np
import scipy as sp
import scipy.signal as sg
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
2. We load the NASDAQ data with pandas:
nasdaq_df = pd.read_csv(
    'https://github.com/ipython-books/'
    'cookbook-2nd-data/blob/master/'
    'nasdaq.csv?raw=true',
    index_col='Date',
    parse_dates=['Date'])
nasdaq_df.head()
3. Let's extract two columns: the date and the daily closing value:
date = nasdaq_df.index
nasdaq = nasdaq_df['Close']
4. Let's take a look at the raw signal:
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
nasdaq.plot(ax=ax, lw=1)
5. Now, we will follow the first approach to get the slow variations of the signal. We will convolve the signal with a triangular window, which corresponds to a FIR filter. We will explain the idea behind this method in the How it works... section of this recipe. For now, let's just say that we replace each value with a weighted mean of the signal around this value:
# We get a triangular window with 60 samples.
h = sg.get_window('triang', 60)
# We convolve the signal with this window.
fil = sg.convolve(nasdaq, h / h.sum())
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
# We plot the original signal...
nasdaq.plot(ax=ax, lw=3)
# ... and the filtered signal.
ax.plot_date(date, fil[:len(nasdaq)], '-w', lw=2)
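By default, sg.convolve() returns the full convolution, of length len(nasdaq) + len(h) - 1, which is why we keep only the first len(nasdaq) samples when plotting. As a side note (this variant is not part of the original recipe), passing mode='same' returns an output already aligned with the input, with the window centered on each sample instead of trailing it:

# Variant: mode='same' returns an output with the same length as
# the input, so no slicing is needed.
fil_same = sg.convolve(nasdaq, h / h.sum(), mode='same')
assert len(fil_same) == len(nasdaq)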
6. Now, let's use another method. We create an IIR Butterworth low-pass filter to extract the slow variations of the signal. The filtfilt() method allows us to apply a filter forward and backward in order to avoid phase delays:
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
nasdaq.plot(ax=ax, lw=3)
# We create a 4-th order Butterworth low-pass filter.
b, a = sg.butter(4, 2. / 365)
# We apply this filter to the signal.
ax.plot_date(date, sg.filtfilt(b, a, nasdaq), '-w', lw=2)
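Two remarks on this code. First, the second argument of sg.butter() is the cutoff frequency normalized by the Nyquist frequency (half the sampling rate); with one sample per day, 2. / 365 corresponds to a cutoff period of roughly one year. Second, to see what filtfilt() buys us, here is a sketch (an illustration added here, not part of the original recipe) comparing it with a single forward pass of sg.lfilter(), which applies the same filter causally and therefore lags behind the signal:

# A single forward pass with lfilter() introduces a phase delay:
# the filtered curve lags behind the data. filtfilt() runs the
# filter forward, then backward, which cancels this delay (at the
# cost of being non-causal).
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
nasdaq.plot(ax=ax, lw=3)
b, a = sg.butter(4, 2. / 365)
ax.plot_date(date, sg.lfilter(b, a, nasdaq), '-w', lw=2)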
7. Finally, we use the same method to create a high-pass filter and extract the fast variations of the signal:
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
nasdaq.plot(ax=ax, lw=1)
b, a = sg.butter(4, 2 * 5. / 365, btype='high')
ax.plot_date(date, sg.filtfilt(b, a, nasdaq), '-', lw=1)
The fast variations around 2000 correspond to the dot-com bubble burst, reflecting the high market volatility and the fast fluctuations of the stock market indices at that time. For more details, refer to https://en.wikipedia.org/wiki/Dot-com_bubble.
How it works...
In this section, we explain the very basics of linear filters in the context of digital signals.
A digital signal is a discrete sequence \((x_n)\) indexed by \(n \geq 0\). Although we often assume infinite sequences, in practice, a signal is represented by a vector of the finite size \(N\).
In the continuous case, we would rather manipulate time-dependent functions \(f(t)\). Loosely stated, we can go from continuous signals to discrete signals by discretizing time and transforming integrals into sums.
What are linear filters?
A linear filter \(F\) transforms an input signal \(x = (x_n)\) to an output signal \(y = (y_n)\). This transformation is linear—the transformation of the sum of two signals is the sum of the transformed signals: \(F(x+y) = F(x)+F(y)\).
In addition to this, multiplying the input signal by a constant yields the same output as multiplying the original output signal by the same constant: \(F(\lambda x) = \lambda F(x)\).
A Linear Time-Invariant (LTI) filter has an additional property: if the signal \((x_n)\) is transformed to \((y_n)\), then the shifted signal \((x_{n-k})\) is transformed to \((y_{n-k})\), for any fixed \(k\). In other words, the system is time-invariant because the output does not depend on the particular time the input is applied.
From now on, we will only consider LTI filters.
Linear filters and convolutions
A very important result in the LTI system theory is that LTI filters can be described by a single signal: the impulse response \(h\). This is the output of the filter in response to an impulse signal. For digital filters, the impulse signal is \((1, 0, 0, 0, ...)\).
It can be shown that \(x=(x_n)\) is transformed to \(y=(y_n)\) defined by the convolution of the impulse response \(h\) with the signal \(x\):
The convolution is a fundamental mathematical operation in signal processing. Intuitively, and considering a convolution function peaking around zero, the convolution is equivalent to taking a local average of the signal (\(x\) here), weighted by a given window (\(h\) here).
It is implied, by our notations, that we restrict ourselves to causal filters (\(h_n = 0\) for \(n < 0\)). This property means that the output of the signal only depends on the present and the past of the input, not the future. This is a natural property in many situations.
The FIR and IIR filters
The support of a signal \((h_n)\) is the set of \(n\) such that \(h_n \neq 0\). LTI filters can be classified into two categories:
- A Finite Impulse Response (FIR) filter has an impulse response with finite support
- A Infinite Impulse Response (IIR) filter has an impulse response with infinite support
A FIR filter can be described by a finite impulse response of size \(N\) (a vector). It works by convolving a signal with its impulse response. Let's define \(b_n = h_n\) for \(n \leq N\). Then, \(y_n\) is a linear combination of the last \(N+1\) values of the input signal:
On the other hand, an IIR filter is described by an infinite impulse response that cannot be represented exactly under this form. For this reason, we often use an alternative representation:
This difference equation expresses \(y_n\) as a linear combination of the last \(N+1\) values of the input signal (the feedforward term, like for a FIR filter) and a linear combination of the last \(M\) values of the output signal (feedback term). The feedback term makes the IIR filter more complex than a FIR filter in that the output depends not only on the input but also on the previous values of the output (dynamics).
Filters in the frequency domain
We only described filters in the temporal domain. Alternate representations in other domains exist such as Laplace transforms, Z-transforms, and Fourier transforms.
In particular, the Fourier transform has a very convenient property: it transforms convolutions into multiplications in the frequency domain. In other words, in the frequency domain, an LTI filter multiplies the Fourier transform of the input signal by the Fourier transform of the impulse response.
The low-, high-, and band-pass filters
Filters can be characterized by their effects on the amplitude of the input signal's frequencies. They are as follows:
- A low-pass filter attenuates the components of the signal at frequencies higher than a cutoff frequency
- A high-pass filter attenuates the components of the signal at frequencies lower than a cutoff frequency
- A band-pass filter passes the components of the signal at frequencies within a certain range and attenuates those outside
In this recipe, we first convolved the input signal with a triangular window (with finite support). It can be shown that this operation corresponds to a low-pass FIR filter. It is a particular case of the moving average method, which computes a local weighted average of every value in order to smooth out the signal.
Then, we applied two instances of the Butterworth filter, a particular kind of IIR filter that can act as a low-pass, high-pass, or band-pass filter. In this recipe, we first used it as a low-pass filter to smooth out the signal, before using it as a high-pass filter to extract fast variations of the signal.
There's more...
Here are some general references about digital signal processing and linear filters:
- Digital signal processing on Wikipedia, available at
- Linear filters on Wikipedia, available at
- LTI filters on Wikipedia, available at
See also
- Analyzing the frequency components of a signal with a Fourier transform | https://ipython-books.github.io/102-applying-a-linear-filter-to-a-digital-signal/ | CC-MAIN-2019-09 | refinedweb | 1,439 | 55.34 |
Timeline
11/13/13:
- 19:22 Ticket #21435 (Improved error message for reverse v. reverse_lazy) created by
- […] Uninformative foremost as %s is top-level urls (problem …
- 19:04 BetterErrorMessages edited by
- (diff)
- 16:38 Ticket #21434 (IN clause not supporting the to_field) created by
- I think this is similar to bug 17972. If I have a model defined: […] …
- 10:29 Ticket #15107 (Convert core commands to use self.std(out|err) instead of ...) closed by
- fixed: Let's close this one in favour of #21429
- 09:25 Changeset [5b8c8c46]stable/1.6.x by
- [1.6.x] Added release note for #21410; thanks Loic. Backport of …
- 09:24 Changeset [94d567b]stable/1.7.x by
- Added release note for #21410; thanks Loic.
- 07:36 Ticket #20125 (get_language_from_request() - inconsistent use of fallback language) closed by
- fixed: I haven't switched my projects to 1.6 yet, but after looking at the code …
- 05:55 Ticket #21433 (Stupid error handling in translation which block to solve error :)) closed by
- duplicate: Hi, Please choose your words more carefully when opening a ticket (Django …
- 04:44 Ticket #21433 (Stupid error handling in translation which block to solve error :)) created by
- See this code: if status != STATUS_OK: …
- 03:11 Ticket #21432 (datetimes method always raise AttributeError) created by
- NewsItem.objects.all().datetimes('date_time', 'month', order='DESC') …
11/12/13:
- 23:54 Ticket #21431 (Django 1.6 GenericRelation admin list_filter regression) created by
- Previously in 1.5 it was possible to have a model A with a GenericRelation …
- 23:26 Ticket #21430 (Raise explicit error when unpickling QuerySet from incompatible version) created by
- Currently the errors from doing this are not obvious at all, and make …
- 21:41 Changeset [b107421]stable/1.6.x by
- [1.6.x] Fixed #21410 -- prefetch_related() for ForeignKeys with …
- 21:36 Ticket #21410 (Error when trying to ignore reverse relationships with related_name using ...) closed by
- fixed: In cb83448891f2920c926f61826d8eae4051a3d8f2: […]
- 21:35 Changeset [cb83448]stable/1.7.x by
- Fixed #21410 -- prefetch_related() for ForeignKeys with related_name='+' …
- 18:33 Ticket #21429 (BaseCommand should use logging instead of custom output wrappers) created by
- Since python offers the powerful logging class, …
- 12:05 Changeset [8d1f339]stable/1.5.x by
- [1.5.x] Removed a mention of Form._errors from the documentation. Also …
- 11:54 Changeset [b6acc4f]stable/1.6.x by
- [1.6.x] Removed a mention of Form._errors from the documentation. Also …
- 11:42 Changeset [0048ed7]stable/1.7.x by
- Fixed typos in previous commit (9aa6d4bdb6618ba4f17acc7b7c0d1462d6cbc718).
- 11:27 Changeset [9aa6d4b]stable/1.7.x by
- Removed a mention of Form._errors from the documentation. Also removed …
- 11:14 Ticket #21428 (Django 1.6 GenericRelation editable regression) created by
- Prior to Django 1.6 it was possible to mark a GenericRelation as editable …
- 09:45 Changeset [8ed96464]stable/1.7.x by
- Fixed typo in lru_cache.py; refs #21351.
- 09:07 Ticket #21427 (Clearly state the value range of all integer type fields in the model ...) created by
- The documentation for BigIntegerField, PositiveSmallIntegerField and …
- 05:41 Ticket #21426 (No indexes are created on syncdb) created by
- the model is the following: […] the error message on syncdb is the …
- 04:12 Ticket #21425 (Logging documentation improvement) created by
- Now says: " …
- 01:47 Ticket #21424 (ERROR in writing your first django app,part 3) closed by
- invalid: Hi, The namespace attribute to the url function is introduced at the …
11/11/13:
- 17:50 Ticket #21424 (ERROR in writing your first django app,part 3) created by
- from django.conf.urls import patterns, include, url from django.contrib …
- 14:05 Changeset [bc742ca]stable/1.7.x by
- Flake8 fixes -- including not runnign flake8 over a backported file
- 09:34 Ticket #21388 (Language code is incorrect for Frisian) closed by
- fixed: In 4142d151025d6169d67b513c4569603d0e751a6d: […]
- 09:33 Changeset [0be7f57a]stable/1.7.x by
- Merge pull request #1907 from Bouke/tickets/21388 Fixed #21388 -- …
- 09:11 Changeset in djangoproject.com [1bbdbde]django1.6 by
- Improved the documentation search form and view. The search form was …
- 09:03 Ticket #21423 (comment typo.) closed by
- fixed: In 5fda9c9810dfdf36b557e10d0d76775a72b0e0c6: […]
- 09:03 Changeset [5fda9c9]stable/1.7.x by
- Fixed #21423 -- Fixed typo in widgets.py.
- 08:59 Ticket #21421 (Expose level tag (debug, info, warning) on messages) closed by
- fixed: In d87127655f540747e7dc83badc015ea520b880f5: […]
- 08:58 Changeset [d871276]stable/1.7.x by
- Fixed #21421 -- Added level_tag attribute on messages. Exposing the level …
- 06:49 Ticket #21420 (Runserver autoreload defect on OSX) closed by
- fixed: In f67cce0434907ef00db25577d46fdd04c0ad765d: […]
- 06:48 Changeset [f67cce04]stable/1.7.x by
- Fixed #21420 once more.
- 05:21 Ticket #21423 (comment typo.) created by
- In django.forms.widgets line 659: propogate instead of propagate
- 04:34 Changeset [4142d15]stable/1.7.x by
- Fixed #21388 -- Corrected language code for Frisian
- 04:32 Ticket #21422 (prefetch_related does not document restriction) created by
- I have just been trying to track down a performance issue in my …
- 02:51 Ticket #20990 (compilemessages fails) closed by
- fixed: In dffcc5e97988d92b2b8a3bba23a49bcb3cf5d040: […]
- 02:51 Changeset [dffcc5e]stable/1.7.x by
- Fixed #20990 -- Ensured unicode paths in compilemessages Thanks Gregoire …
- 02:43 Ticket #9523 (Restart runserver after translation MO files change) closed by
- fixed: In 2397daab4a1b95a055514b009818730f4dfc4799: […]
- 02:43 Changeset [2397daab]stable/1.7.x by
- Fixed #9523 -- Restart runserver after compiling apps translations Django …
- 01:54 Changeset [15592a04]stable/1.7.x by
- Merge pull request #1906 from DanSears/master Added description of MySQL …
- 01:47 Ticket #21420 (Runserver autoreload defect on OSX) closed by
- fixed: In dbbd10e75f2f078909d231b2fd5ca1a351726faa: […]
- 01:46 Changeset [dbbd10e7]stable/1.7.x by
- Fixed #21420 -- Autoreloader on BSD with Python 3. Thanks Bouke Haarsma …
- 01:43 Changeset [6010b53]stable/1.7.x by
- Fix syntax error under Python 3.2.
- 00:34 Ticket #21421 (Expose level tag (debug, info, warning) on messages) created by
- Messages in the messages framework have a level, like debug, info or … …
Note: See TracTimeline for information about the timeline view. | https://code.djangoproject.com/timeline?from=2013-11-13T14%3A44%3A59-08%3A00&precision=second | CC-MAIN-2014-15 | refinedweb | 983 | 58.08 |
Merge sort is another comparison based sorting algorithm. Also, it is a divide and conquer algorithm, which means the problem is split into multiple simplest problems and the final solution is represented by putting together all the solutions.
How it works:
The algorithm divides the unsorted list into two sub-lists of about half the size. Then sort each sub-list recursively by re-applying the merge sort and then merge the two sub-lists into one sorted list.
Step by step example :
Having the following list, let’s try to use merge sort to arrange the numbers from lowest to greatest:
Unsorted list: 50, 81, 56, 32, 44, 17, 99
Divide the list in two: the first list is 50, 81, 56, 32 and the second is 44, 17, 99 .
Divide again the first list in two which results: 50, 81 and 56, 32.
Divide one last time, and will result in the elements of 50 and 81. The element of 50 is just one, so you could say it is already sorted. 81 is the same one element so it’s already sorted.
Now, It is time to merge the elements together, 50 with 81, and it is the proper order.
The other small list, 56, 32 is divided in two, each with only one element. Then, the elements are merged together, but the proper order is 32, 56 so these two elements are ordered.
Next, all these 4 elements are brought together to be merged: 50, 81 and 32, 56. At first, 50 is compare to 32 and is greater so in the next list 32 is the first element: 32 * * *. Then 50 is again compared to 56 and is smaller, so the next element is 50: 32 50 * *. The next element is 81, which is compared to 56, and being greater, 56 comes before, 32 50 56 *, and the last element is 81, so the list sorted out is 32 50 56 81.
We do the same thing or the other list, 44, 17, 99, and after merging the sorted list will be: 17, 44, 99.
The final two sub-lists are merged in the same way: 32 is compared to 17, so the latter comes first: 17 * * * * * *. Next, 32 is compared to 44, and is smaller so it ends up looking like this : 17 32 * * * * *. This continues, and in the end the list will be sorted.
Sample code:
#include < iostream >
using namespace std;
void Merge(int *a, int low, int mid, int high)
{
int i = low, j = mid + 1, k = low;
int b[100];
while ((i <= mid) && (j <= high))
{
if (a[i] < a[j])
{
b[k] = a[i];
i++;
}
else
{
b[k] = a[j];
j++;
}
k++;
}
while (i <= mid)
{
b[k] = a[i];
i++;
k++;
}
while (j <= high)
{
b[k] = a[j];
j++;
k++;
}
for (k = low; k <= high; k++)
a[k] = b[k];
}
void MergeSort(int *a, int low, int high)
{
int mid;
if(low >= high)
return;
else
{
mid =(low + high) / 2;
MergeSort(a, low, mid);
MergeSort(a, mid+1, high);
Merge(a, low, mid, high);
}
}
int main()
{
int *a;
int n;
cout << "The number of elements is: ";
cin>>n;
a = (int*) calloc (n,sizeof(int));
for(int i=0;i < n;i++)
{
cout << "Input " << i << " element: ";
cin >> a[i]; // Adding the elements to the array
}
cout << "nUnsorted list:" << endl; // Displaying the unsorted array
for(int i=0;i < n;i++)
{
cout << a[i] << " ";
}
MergeSort(a, 0, n-1);
cout<<"nThe sorted list is: " << endl;
for (int i=0;i < n;i++)
cout << a[i] << " ";
return 0;
}
Output:
Code explanation:
The MergeSort procedure splits recursively the lists into two smaller sub-lists. The merge procedure is putting back together the sub-lists, and at the same time it sorts the lists in proper order, just like in the example from above.
Complexity:
Merge sort guarantees O(n*log(n)) complexity because it always splits the work in half. In order to understand how we derive this time complexity for merge sort, consider the two factors involved: the number of recursive calls, and the time taken to merge each list together..
Advantages:
- is stable;
- It can be applied to files of any size.
- Reading through each run during merging and writing the sorted record is also sequential. The only seeking necessary is as we switch from run to run.
Disadvantages:
- it requires an extra array;
- is recursive;
Conclusion:
Merge sort is a stable algorithm that performs even faster than heap sort on large data sets. Also, due to use of divide-and-conquer method merge sort parallelizes well. The only drawback could be the use of recursion, which could give some restrictions to limited memory machines.
.
- – Counting Sort
- C Algorithms – Merge Sort
- C Algorithms – Heap Sort
- C Algorithms – Selection Sort
- C Algorithms – Radix Sort
- C Algorithms – Shell Sort
- C Algorithms – Insertion Sort
- C Algorithms – Bubble Sort
- C Algorithms – The Problem of Sorting the Elements | http://www.exforsys.com/tutorials/c-algorithms/merge-sort.html | CC-MAIN-2014-42 | refinedweb | 819 | 63.32 |
DC 2004-03-22 - I've submitted a TIP to include a megawidget package in the core. This package will not consist of an actual framework for defining widgets but will instead include several helper functions that a megawidget package can use to control the basic functions of megawidgets. The commands provided by the TIP are::Vale> <type> can be: boolean, int, double, string, enum, color, font, bitmap, border, relief, cursor, justify, anchor, synonym, pixels, window:'.Everything else is defined by the author of the megawidget package. These commands are merely meant to speed up the common functions that all megawidget packages must reproduce (like cget and configure) by implementing versions of them in C. If this becomes a core package, megawidget writers can count on at least these basic functions being available in a Tcl release and can use them to write widgets that are much faster than just pure-Tcl alternatives.escargo 27 May 2005 - This might be a late comment here, but I thought I should inject this: As long as you are providing for ways to define things, you should also provide a way to inspect things. E.g., if there is a ::megawidget::class <widgetClass> there should be a ::megawidget::info classes ?pattern? that returns the list of (programmer-declared) classes. Similarly for all the other defining operations. These are useful for extension writers and for test writers as well......NEM Sorry to interrupt, but surely the argument "some people didn't want to depend on an external library" is not a reason to write another external library? I'm really slightly confused as to what the difference is between snit and your code - surely neither is in the core, and so both are external libraries. snit is in tcllib, however. Wouldn't it be better to concentrate on submitting improvements to the snit maintainers?DC 2004-03-03 - Well, the point of all of this was to, in fact, develop an extension that could be merged into the core with a TIP. I'm pretty sure that the inclusion of SNIT in the core is not gonna' happen. I developed the new framework as a proof-of-concept and modeled the syntax after SNIT because I liked it. That's all. 0-]NEM 4Mar04 - OK. I'd quite like to see a framework for creating widgets go into the core. However, getting an object system into the core is going to be hard. Firstly, itcl has already been officially bundled with the core (ok, noone has bothered to do the work, so it could be argued that that TIP has lapsed) -- I imagine a number of people might start saying "use itk/iwidgets, or at the very least, use itcl". Secondly, if we're going to put an object system into Tk (I assume that's what you're proposing), then why not put it into Tcl, and take care of all those object-like things which all behave slightly differently (interps, channels, fonts, etc etc)? I made a list of them all once, and it was quite long, and almost all of them worked in different ways (e.g. should [fconfigure $chan -options ..] be [$chan configure -options...]?). I think this would be a good thing, from a consistency point of view, but it comes back to the heated debate of which object system to "bless" with core inclusion.Now, you could argue that this is simply a megawidget framework for Tk - no more, no less. However, I'm pretty sure these questions will be raised. If you can bear the arguments that will probably ensue, then more power to you..... 
And, for the record, the new framework creates a single ::megawidget namespace, and everything else is created under that, and it has the added bonus of handling options and error messages correctly so that a resulting mega-widget behaves exactly like a standard widgetwhich:
- a source for useful widgets without having to code and debug them
- source code examples for cases when the useful widgets do not quite meet the specifications.Vince very much agrees with these last comments. This suggests that any new megawidget package should be written as, perhaps, a composition of two things:
- a wrapper around Tk to emulate the stuff we'd like in C in the future (whatever that may be -- user data in widgets, better focus handling, etc)
- a new megawidget package which makes use of the wrapper's functionality
WHD 2004-03-03: Some thoughts about Snit and the dependence of megawidgets on external libraries: de facto, any set of megawidgets is going to rely on some kind of megawidget framework. Users of the megawidgets shouldn't have to know about that framework--but surely it should be available for those who'd like to use it?That said, I hate using packages that depend on other packages. But there's no reason why a megawidget set couldn't include a private copy of Snit, slightly modified to load into the megawidget set's namespace. Snit's implemented in such a way that two such copies could happily coexist.It doesn't surprise me that Snit's slower than BWidget; it adds quite a lot of sugar, and sugar can be expensive. And to date, I've not had time or energy to do any amount of performance tuning--especially since I hope to reimplement Snit using ensembles when 8.5 is released.A question for DC: Does your partial reimplementation of Snit pass the Snit test suite? There are many, many little details, especially regarding widget destruction, that are quite tricky to get right. But if it does, and it's faster, I'm not at all averse to updating Snit to use your techniques.Also, you say your version produces widgets which look more like built-in widgets than Snit does--could you send me a list of the specifics?
DC 2004-03-03 - My original plan was to just create a new megawidget library that was BWidgets written in SNIT. Upon testing, SNIT was way slower than BWidgets, so I started off on a quest to create a framework that was both faster than BWidgets and syntactically like SNIT (since, as I've said, I really like it).The "widgets look more like built-in widgets" comment was basically adding more "sugar." The widgets created in the framework I have now return proper error messages as well as handle shorthand commands and options. This is something lacking in both BWidgets and SNIT widgets. Both would choke if I tried to do:
ButtonBox .b .b conf -def 0 ; ## shorthand for .b configure -default 0Because neither one can pick up shorthand subcommands or options. The error messages were another thing that bothered me. When I'm testing, I like to be able to see all of the available subcommands of a widget. Neither BWidgets or SNIT offer error messages that are helpful in this regard.(WHD: Snit doesn't, because (in the presence of "delegate method *") it doesn't know what the available subcommands are.)I just added more "sugar." 0-]After having played around with it more, I'm still not happy with the results. The object creation is as fast as BWidgets (which still ain't that great when compared with widgets written in C), but the configure and cget methods are much, much slower. Now, we're talking 100 microseconds here, which ain't much, but it still bugs me. Especially since the Tk core widgets can handle a configure or cget request in about 2 microseconds.So, I'm now working on a C / Tcl combination where the extension will use Tcl's built-in functions for handling options and subcommands at the C level while the bulk of the code will still be pure Tcl. We'll see how it goes. 0-]
I am thinking building an OO extension agnostic framework (that is to say, a megawidget framework which doesn't care whether you are using snit, incr tk, etc.) would be the most valuable thing; otherwise, the ego/territorial/pick your favorite term issues overwhelm the technical issues.
Anonymous comment on 27 May 2005: I suggest that the megawidget framework being developed be as much like Tk as possible, so that the merging of it into Tk move as rapidly as possible. This means that coding style, comment style, variable and function naming all match the Tk styles. Test suites and docs need to develop. And taking a discussion of the goals and status of this work to the Tcl core mailing list (see TCT) would be very useful at this point, since you don't want to spend weeks on this effort only to have basic concerns from the TCT slow down the process.Lars H: Hasn't DC already done that? See TIP 180
| http://wiki.tcl.tk/10986 | CC-MAIN-2017-04 | refinedweb | 1,477 | 68.5 |
Mono Contribution HOWTO
From Mono
A little help for mono newbie coders
For those who are new to Mono and are impatient to contribute with code (uhh... you are brave!!) here is the document you should read.
You will see all Mono hackers say the same (great minds have similar way of thinking): First, DO WRITE TESTS!!!. In order to do that:
- Start with the NUnit Tests Guidelines. In the cvs they are located at: mcs/class/doc/NUnitGuideli...
- But wait, this is a document for impatient people. So EVERYTHING should be here. Well, it is.
The NUnit Tests Guidelines document
Mono NUnit Test Guidelines and Best Practices
Authors: Nick Drochak <[email protected]> Martin Baulig <[email protected]> Last Update: 2002-03-02 Rev: 0.3
Purpose
This document captures all the good ideas people have had about writing NUnit tests for the mono project. This document will be useful for anyone who writes or maintains unit tests.
Other resources
- mcs/class/README has an explanation of the build process and how it relates to the tests.
- is the place to find out about NUnit
Getting Started
- If you are new to writing NUnit tests, there is a template you may use to help get started. The file is: TemplateTest.cs
- Just a reminder to copy/paste this file it in another buffer. And keep reading. You can get it here or at the end of the guidelines in the Testing Tips section)
Save a copy of this file in the appropriate test subdirecty (see below), and replace all the [text] markers with appropriate code. Comments in the template are there to guide you. You should also look at existing tests to see how other people have written them. your test class is complete, you need to add it to the AllTests.cs file in the same directory as your new test. Add a call to "suite.AddTest()" passing the name of your new test class's suite property as the parameter. You will see examples in the AllTests.cs file, so just copy and paste inside there.
Once all of that is done, you can do a 'make test' from the top mcs directory. Your test class will be automagically included in the build and the tests will be run along with all the others.
Provide an unique error message for Assert()
Include an unique message for each Assert() so that when the assert fails, it is trivial to locate the failing one. Otherwise, it may be difficult to determine which part of the test is failing. A good way to ensure unique messages is to use something like #A01, #A02 etc.
Bad:
AssertEquals("array match", compare[0], i1[0]); AssertEquals("array match", compare[1], i1[1]); AssertEquals("array match", compare[2], i1[2]); AssertEquals("array match", compare[3], i1[3]);
Good:()
Never compare two values with Assert() - if the test fails, people have no idea what went wrong while AssertEquals() reports the failed value.
Bad:
Assert ("A01", myTicks[0] == t1.Ticks);
Good:
AssertEquals ("A01", myTicks[0], t1.Ticks);
Constructors
When writing your testcase, please make sure to provide a constructor which takes no arguments:
public class DateTimeTest : TestCase { public DateTimeTest() : base ("[MonoTests.System.DateTimeTest]") {} public DateTimeTest (string name): base(name) {} public static ITest Suite { get { TestSuite suite = new TestSuite (); return suite; } } }
Namespace
Please keep the namespace within each test directory consistent - all tests which are referenced in the same AllTests.cs must be in the same namespace. Of course you can use subnamespaces as you like - especially for subdirectories of your testsuite. For instance, if your AllTests.cs is in namespace "MonoTests" and you have a subdirectory called "System", you can put all the tests in that dir into namespace "MonoTests.System".
Test your test with the microsoft runtime
If possible, try to run your testsuite with the Microsoft runtime and do also report it to the list - we'll forward this to the Microsoft people from time to time to help them fix their documentation and runtime.
On Windows, you can simply use "make run-test-ondotnet" to run class libraries tests under Microsoft runtime.
Miscellaneous Tips
- If you use Emacs, you might want to use the .emacs file and the package developed by Brad Merrill mailto:[email protected]. It will allow you to highlight and indent in C# style in your Emacs editor. (XEmacs will still work but it'll also complain).
- MonoDevelop () is a GPLed IDE developed by IC#Code (SharpDevelop) and ported to Mono/Gtk#.
- For those who Java: A comparison of Microsoft's C# programming language to Sun Microsystem's Java Programming language () by Dare Obasanjo is a really good (very complete) text to read.
- Suggest this point and more, now I can't think of anything more.
Enjoy!!.
(c) 2002, Jaime Anguiano Olarra (mailto:[email protected]).
The parts included in this document are property of their respective authors.
Note: The identation of the source code has been changed a bit so it could fit better in the website. Anyway, as nothing more changed, the files should work as expected. | http://www.mono-project.com/Mono_Contribution_HOWTO | crawl-001 | refinedweb | 851 | 73.68 |
I was looking for a StringBuilder type of thing to use in Python (I've been working in PHP, at work, recently, and cStringIO had momentarily slipped my mind), and found StringIO and cStringIO, but in doing so, I found a post that claimed that they both performed very poorly in comparison with plain old, naive string concatenation. (!)
Here's the test program they posted to prove this:
def test_string(nrep, ncat): for i in range(nrep): s = '' for j in range(ncat): s += 'word' def test_StringIO(nrep, ncat): for i in range(nrep): s = StringIO.StringIO() for j in range(ncat): s.write('word') s.getvalue() def test_cStringIO(nrep, ncat): for i in range(nrep): s = cStringIO.StringIO() for j in range(ncat): s.write('word') s.getvalue() test_string(10, 10) test_StringIO(10, 10) test_cStringIO(10, 10) profile.run('test_string(10, 1000)') profile.run('test_StringIO(10, 1000)') profile.run('test_cStringIO(10, 1000)') # sample execution and output: # ~> python stringbuf.py | grep seconds # 15 function calls in 0.004 CPU seconds # 50065 function calls in 0.920 CPU seconds # 10035 function calls in 0.200 CPU seconds
As you can see from their output, the profiler shows a clear preference for naive string concatenation. (Way fewer calls, much less CPU time.)
Well, this seemed naive to me (to pummel a pun). It seemed likely to me that the profiler was picking apart the calls to the string io modules, and making calls individually, and counting the time surrounding making them, etc., while it wasn't really doing that for the built-in, naive concatenation call, so I tried simply timing the test functions, like this:
. . def timeMethod(method, *args): from time import time t1 = time() method(*args) t2 = time() return t2 - t1 print "test_string:\t%.4f" % timeMethod(test_string, 10, 1000000) print "test_stringIO:\t%.4f" % timeMethod(test_string, 10, 1000000) print "test_cStringIO:\t%.4f" % timeMethod(test_string, 10, 1000000) # sample execution and output: # -> python test.stringbuf.py # test_string: 1.0545 # test_stringIO: 1.0005 # test_cStringIO: 0.9869
From this output, it appears to me that I was correct, that the profiler doesn't pick apart built-in calls to the same degree that it picks apart module calls, and that cStringIO is actually slightly faster than naive string concatenation. (Surprise, surprise.)
Surprising to me still, however, is how slight the difference is - it seems like we're looking at about a 6% difference, even after 1,000,000 concatenations of the word 'word'. So it does seem like cStringIO is hardly worth the bother, in most applications.
It seems like Python must be using some sort of StringBuilder-like pattern internally, at this point, for string concatenation, or at least for appending to the end of a string. I can't imagine that Python is actually making a copy of the entire string for every += call, and still coming in at around 1 second for this test. I mean, after 250,000 concatenations of the word 'word', we have 1 million character string, right? So at the very least, we're talking about copying a buffer that is 1 million bytes or larger, 750,000 times! That would be like moving more than 750 gigs of memory from one spot to another. (10 times, in this test, actually.) In one second? I don't think so, not on this computer! So Python must not be doing that anymore, if it ever did.
__ | https://www.daniweb.com/programming/software-development/threads/287700/profile-results-misleading | CC-MAIN-2017-26 | refinedweb | 568 | 74.19 |
Probing a Hidden .NET Runtime Performance Enhancement
Jomo Fisher--Matt Warren once told me that the runtime had a performance optimization involving calling methods through an interface. If you only had a small number of implementations of a particular interface method the runtime could optimize the overhead of those calls. Coming from the C++ world where a vtable is a vtable this seemed a little odd to me. I finally got around to trying this out myself and he was right. Here's the code so you can try it for yourself:
using System; using System.Diagnostics; class Program { static void Main(string[] args) { Stopwatch sw = new Stopwatch(); sw.Start(); DoManyTimes(new Call1()); DoManyTimes(new Call2()); DoManyTimes(new Call3()); sw.Stop(); Console.WriteLine(sw.ElapsedMilliseconds); } interface ICall { void Do(); } class Call1 : ICall { public void Do() { } } class Call2 : ICall { public void Do() { } } class Call3 : ICall { public void Do() { } } static void DoManyTimes(ICall ic) { for (int i = 0; i < 100000000; ++i) ic.Do(); } }
On my machine this code reports values around ~2300 ms. Now, make a slight change and only use the Call1 class:
DoManyTimes(new Call1()); DoManyTimes(new Call1()); DoManyTimes(new Call1());
Now I get numbers like ~1800 ms. Generally, I observed the following:
- It doesn't seem to matter how many implementations of ICall there are. Its only whether there are many implementations of 'Do' called.
- One implementation of 'Do' performs better than two implementations. Two implementations performs better than three. After three, it doesn't seem to matter how many there are.
- Delegate calls don't have an equivalent behavior.
This posting is provided "AS IS" with no warranties, and confers no rights. | https://docs.microsoft.com/en-us/archive/blogs/jomo_fisher/probing-a-hidden-net-runtime-performance-enhancement | CC-MAIN-2021-31 | refinedweb | 274 | 57.16 |
Anatomy of a SQL Injection Attack
Use a persistence library (Score:3, Informative)
One should definitely use a persistence library instead of concatenating strings, to help mitigate the possibility of being a victim of SQL injection. They are pretty good at it. Hibernate is a widely used one.
Re:Use a persistence library (Score:5, Informative)
One should use positional/named bindings and let the driver handle escape sequences, and make sure the Web user only has access to what is needed, rather than running everything as root. Use procedures/views where possible and never allow dynamically created queries.
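For illustration, here is what that looks like in practice; a minimal sketch using Python's built-in sqlite3 module, with made-up table and column names (with a server database you would also grant the web account only the rights it needs):

import sqlite3

email = "o'brien@example.com"   # pretend this arrived from a web form
new_name = "O'Brien"            # the apostrophes are harmless once bound

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)')
conn.execute('INSERT INTO users (name, email) VALUES (?, ?)', ('Alice', email))

# Positional binding: the driver keeps the data out of the SQL text entirely,
# so a quote inside a value can never terminate the string literal.
conn.execute('UPDATE users SET name = ? WHERE email = ?', (new_name, email))

# Named binding does the same job and reads better in longer statements.
rows = conn.execute('SELECT id, name FROM users WHERE email = :email',
                    {'email': email}).fetchall()
print(rows)   # [(1, "O'Brien")]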
Re:Use a persistence library (Score:5, Insightful)
Re: (Score:2)
Use parameterized SQL, be it with dynamic SQL, procedures or views.
Re: (Score:3, Informative)
That really depends on your database flavour. In SolidDB, which I primarily work with, it is impossible to construct dynamic queries within a procedure.
Also, your claim that procedures only slow down databases is just plain wrong. Databases with procedures where the SQL is immutable will generally run much faster than your dynamically generated versions. Philippe Bonnet and Dennis Sasha claim (in their book, "Database Tuning") that as much as 9/10 of your average query time spent in the database is spent on the que
Re: (Score:2)
The statement to use stored procedures usually comes from MS-SQL Server people who read and continue to repeat information from over a decade ago, when MS-SQL Server treated procedures and dynamic SQL differently; unfortunately this has spread into Oracle space. With the query optimizer under MS-SQL Server and Oracle you don't have to worry about using procedures since dynamic SQL is treated the same way, especially if you use
Re: (Score:3, Informative)
There's an excellent article on dynamic queries and a little bit about SQL injections here, but it's SQL Server specific so I don't know if it's any good for the Slashdot crowd: [sommarskog.se]
Re: (Score:2)
Vendor products tend to shy away from stored procs and views because it ties them to a particular back-end (and can limit sales). Instead of spending time writing database code, they just shove it all into the front-end. That doesn't mean they can't take steps to prevent SQL injection.
Re:Use a persistence library (Score:4, Interesting)
For PHP + *SQL, use PDO, the first proper interface for databases in PHP IMO.
Where I work there is no interface to the database other than stored procedures. Yes, writing programs takes longer and requires one of the DBAs to make the procedure; however, we have never had a single incident of some cowboy programmer forgetting to add a where clause to an update/delete, nor some insane environment where random pageviews clobber the databases.
Re: (Score:2)
PDO has been around for years, and offers standardized escaping and binding for all the major db platforms. If you're stuck with an "old PHP ways" host, they probably are still using PHP4 and have register_globals set to on - IOW, time to move to a modern host. Just like you wouldn't stick with a Java host only offering 1.3 or 1.4, it's time to vote with your wallet and move to modern hosting operations.
Re:Use a persistence library (Score:4, Insightful)
Re: (Score:3, Informative)
??
You can write stored procedures that take variables, for instance, one that will change the ORDER BY clause to what you want. Also, it isn't much of a problem to overload stored procedures...same name, but behaves differently by the number or types of parameters you call it with.
Re:Use a persistence library (Score:4, Interesting)
My only issue w/ stored procedures comes from an abstraction quarrel:
Where should the logic be? The code? The DB?
What if I need to debug, what if someone else needs to debug?
I've seen way too many nasty examples of shit going awry in databases, because someone has crazy triggers or stored procedures in place without documentation..
Re:Use a persistence library (Score:4, Interesting)
The logic for the dataset should be in the database where it belongs.
Crazy trigger/crazy procedure problems are the same as everywhere else: if it's undocumented, the code is hard to maintain.
Not sure what your problem with debugging a procedure is; most databases have interfaces for tracing procedures, and I actually find SolidDB procedure tracing preferable to normal print statements.
Re:Use a persistence library (Score:4, Interesting)
I was going to MOD this up as super informative but I had to pipe in myself
;)
Having worked in a small startup, a major Fortune 500, and in between companies this kind of thing is by far the best approach over the long run. The places where the DB/Code guys are separate always end up with a better product. Simply because it allows people who excel at something to really apply that benefit to what their doing. I love writing code but hate writing SQL and maintaining databases. So I tend to focus on the code and the DB stuff gets done but pretty half assed. Now people could say you should do it all equally well but in real life that never happens. Let the database go do his thing and the programmer guy do his thing. Get them talking together and your product will benefit greatly.
Also when logic is in procs, views, whatever you don't need to redeploy anything to achieve results. Simply change the database and it's done.
Re: (Score:3, Interesting)
Re:Use a persistence library (Score:4, Informative)
Since there is no user input used in generating the query, you can never have an SQL injection attack, and still use dynamic queries. There are ways to do dynamic queries without opening yourself up to attacks.
Re:Use a persistence library (Score:4, Informative)
What if "item"s came from the users in the first place? Most databases don't return the strings pre-escaped for reuse in the database.
Personally, web programming is where "Hungarian Notation" style variable names shine: I have htVariableName, dbVariableName and the original inVariableName, and it's blindingly obvious when I'm using the wrongly-escaped string in the wrong place or re-escaping something I already escaped.
Re: (Score:3)
I cringe at this particular example of boilerplate code, b/c I have seen it so often lately and it's such an obvious choice for refactoring. If your language of choice has a string join operator, the above code can be expressed in two lines:
items = list.join(", ")
sql = "SELECT item FROM table WHERE keyword IN (" + items + ")"
Much more readable with a better chance to spot the bug in your code: What happens if list is empty?
IHMO, better yet is to put the values in a temporary table using a batch insert and th
Re:Use a persistence library (Score:5, Insightful)
Persistence is just a bad idea, it hides the real performance issues of how databases work, and limits how you can easily manipulate the data. A better idea is just to always use bind variables. Problem solved.
Re: (Score:2, Informative)
I have found that, if used correctly, Hibernate can be quite powerful; you can still run native and database-independent HQL queries if you like.
You can also map your native queries to objects; it is quite easy, and I believe it is the same as binding to variables.
The entity manager also helped me to reduce the number of queries that I hard-code into my DAOs; you can query for objects based on their class and ID (yes, it does support composite IDs).
It also provides control for optimizations, and will automaticall
Re:Use a persistence library (Score:4, Informative) [owasp.org]
Re: (Score:2)
Yeah, until someone comes at it with a cross-site scripting attack. ^^
Re: : (Score:3, Funny)
It's very simple. Don't use any of the mysql_* functions.
Use the PDO prepare function and remember never to pass any input you got from the user directly into the string you give to prepare.
In most cases (99%) the string you give to the prepare function should really be constant and not depend on user input at all.
Re: (Score:2)
That assumes performance is somebody's number 1 priority. An app might use something like OpenJPA or Hibernate because code correctness, scalability, time to market or portability are more important than performance. Besides, I bet for typical database queries, the performance boost from handwriting SQL vs Hibernate (HQL) / OpenJPA (JPQL) generating it would be negligible.
Re:Use a persistence library (Score:5, Insightful)
A simpler way is to use a parametrized statement [wikipedia.org]. No extra libraries are needed if you are using Java, .NET, or PHP5.
Re: (Score:2)
Sure you're aware of this, but to make it clear for everyone: Python, Perl and other languages don't require extra libraries to do parameterized queries either. In Python the pattern is
import db_module
conn = db_module.connect('user/pass@host')
curs = conn.cursor()
curs.execute('select field1, field2 from table1 where field3 = ? and field4 = ?', ('foo', 7.6))
curs.fetchall()
Exactly the same number of lines as doing it with string munging, but type safe and zero chance of sql injection.
Re: (Score:3, Informative)
Except that Python's DB-API [python.org] is a horrible mess. Depending on what db_module is, you might need to spell your query as:
curs.execute('select field1, field2 from table1 where field3 = ? and field4 = ?', ('foo', 7.6))
curs.execute('select field1, field2 from table1 where field3 = %s and field4 = %s', ('foo', 7.6))
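One hedged way to cope, sticking with the thread's hypothetical db_module: ask the driver which paramstyle it declares and build the placeholder from that (this sketch only handles the two positional styles; the named styles need dict parameters):

import db_module   # any DB-API 2.0 driver, as in the example above

marker = {'qmark': '?', 'format': '%s'}[db_module.paramstyle]
sql = ('select field1, field2 from table1 '
       'where field3 = {m} and field4 = {m}').format(m=marker)

conn = db_module.connect('user/pass@host')
curs = conn.cursor()
curs.execute(sql, ('foo', 7.6))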
Re: (Score:2)
Re: (Score:2)
Nah, just interpret the arguments and stop your program when it goes out of type or range, never use arguments directly.
Re: (Score:3, Informative)
Yeah, all this SQL stuff always confuses me. Partially because I often am in the Joomla framework, which doesn't let you do parameterized queries, and, while I guess you could do stored procedures, I've never seen the need.
Instead, I simply take all input and make sure it is sane. Is it supposed to be a number? Put an (int) before assigning it out of $_POST. (Now there's a JRequest::getInt that I'm learning to use instead.) Am I putting a string from a user into a database? I use $db->getEscaped(). When
Re: (Score:3, Interesting)
One should definitely use a persistence library instead of concatenating strings to help mitigate the possibilities of being victim of SQL injections. They are pretty good at it. Hibernate is a widely used one.
Speaking as someone who has used both approaches, Hibernate is a lot of overhead for, in many cases, very little gain, and having used it on a number of large projects my team has decided not to use it in future. Of course you must sanitise all values passed in from untrusted clients carefully before they are spliced into any SQL string, but there are a number of frameworks which do this which are far lighter weight than Hibernate.
A cautionary tale' OR 1=1 (Score:5, Funny)
...for these modern times.: (Score:2)
Please provide an example of how would it work.
For instance, in Perl I can do a query safely like this:
But, I also have a bit like this:
The second bit is also safe, but it creates a query by concatenation which could be used unsafely.
Re: (Score:2)
It's not such a big deal to filter all user input in order to prevent SQL injection. It's simply a habit you need to learn and stick to.
It is more difficult to make a site that allows some people to provide content including HTML and script, and still prevent evil content from entering your database / pages.
And it is difficult to enforce a strict password regime, because many a client has asked to remove the safety measures for convenience's sake. I guess we all know examples of dumb passwords. Like 'coconuts' for
Re: (Score:2)
It is more difficult to make a site that allows some people to provide content including HTML and script, and still prevent evil content from entering your database / pages.
The issue there is that you're allowing that at all (see CWE-79 [mitre.org]). The solution is to not allow general HTML/script input from non-trusted sources (i.e., they can upload new HTML with sftp, but not through a web form) and instead support some greatly restricted syntax (e.g., bbcode or wikisyntax) that is easy to convert to guaranteed fang-free content. And use a proper templating library for output of content from the database instead of hacking things.
Re: (Score:3, Interesting)
I think you didn't understand my question. The grandparent said: "That is, the only possible way to get or insert data from a database should be the correct one". That excludes any kind of "habit you need to learn and stick to", it must simply be impossible to do otherwise.
My question is, how do you actually implement a system like that? I'd like an example code of a hypothetical system that would allow me to compose an arbitrary SQL query with variable amounts of selected columns, JOIN and WHERE clauses, etc.
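One minimal answer, sketched in Python with invented names: values are always bound, and identifiers, which cannot be bound, are checked against a whitelist, so no code path ever splices raw user input into the SQL text. JOINs would extend the same whitelist idea to table names:

import sqlite3

ALLOWED = {'id', 'name', 'email'}   # identifiers can't be bound as parameters

def build_select(columns, filters):
    for ident in list(columns) + list(filters):
        if ident not in ALLOWED:
            raise ValueError('unknown column: %r' % ident)   # fail closed
    sql = 'SELECT %s FROM users' % ', '.join(columns)
    if filters:
        sql += ' WHERE ' + ' AND '.join('%s = ?' % c for c in filters)
    return sql, list(filters.values())

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER, name TEXT, email TEXT)')
conn.execute('INSERT INTO users VALUES (1, ?, ?)', ('Alice', 'a@example.com'))

sql, params = build_select(['id', 'name'], {'email': "a@example.com' OR '1'='1"})
print(conn.execute(sql, params).fetchall())   # [] -- the value stays a value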
Re: (Score:2)
Then what needs to be done is make the libraries have this security implemented *by design*.
Libraries do, but they're powerless against string concatenation unless it's impossible to run raw SQL. I think the only thing you could do is deny non-parameter values at all, but it'd make everything a lot more annoying and probably have a performance impact. Like you couldn't say "WHERE is_active = 1" but had to use "WHERE is_active = ?" and bind the value.
Re: (Score:2)
Is there really a need for "interesting solutions" in yet another 3-layer web app? It's a serious question as I don't do this kind of work. But it seems to me that this stuff is already so well known that production sites shouldn't be looking for new interesting (and thus untested) ways of hacking together queries. Forcing programmers to do things "the right way" for established designs and purposes doesn't really seem like a problem to me, though I'm sure it takes some of the fun out of it.
Re: (Score:3, Insightful)
Re:Aarghhhh (Score:5, Interesting)
This is a very valid point, yet "programmer" and "webdev" are often seen as very closely related with a blurry line; in my experience a "webdev" is a programmer who's proficient with web technologies, but usually has a blind spot for design (or the inability to be visually creative and create pretty interfaces, but might be brilliant with logical creativity and finding solutions). The agencies I've worked for had the design part done by "designers" who drew a few designs, shook hands on one and had a "webdev" implement it. They never touched the websites, just sliced up images when they were done.
Maybe my strong reaction was rather based on the difference of concept we have from "webdev" and "programmer", for me they're very closely related wheras you seem to see the "webdev" as a designer with a course of HTML or something alike
:)
SQL is the problem, really. (Score:2)
I have been doing a bit of work with sqlite lately and I am surprised to find that the C api is basically a way to pass in strings containing SQL commands. Now even in C I could imagine an API which allows you to build up queries to do everything SQL does without using commands in text strings.
With an OO language it should be dead easy.
Re: (Score:2)
Re: (Score:2)
Sounds like a problem with sqlite, not SQL in general.
So why can sql code ever be injected on other platforms?
Instead of execute_command("create table X")
I want to see create_table("X")
Re: (Score:3, Informative)
long SomeNumericValue;
char SomeStringValue[SOME_SIZE];
StatementHandle Statement = Parse("INSERT INTO TableName (Col1, Col2) VALUES (?, ?)");
BindNumericVar(Statement, 0, &SomeNumericValue);
BindStringVar(Statement, 1, SomeStringValue, SOME_SIZE);
didn't you ever watch startrek? (Score:5, Insightful)
learn from Scotty. always double your estimates... Especially when they ask for an honest estimate.
I'm up to a multiple of 16 now.
Re: (Score:3, Insightful)
The counterpart to accepting any input is sanitizing any output. It's really very easy if you have centralized DB fetching (and you should).
Re: (Score:2)
Who, in their right mind, thought it was a good idea to expose SQL query inputs on the Web?
Most people are not doing it because they want to, but because the software they use allows such things to silently happen behind their back. It is a classic case of in-band signaling: you are pumping data through the same pipe as code, and when the data isn't properly escaped, things break in bad and unexpected ways. To get rid of this once and for all you need to separate the pipes, separate the data and the code and don't allow them to be mixed. LINQ for example does that by moving the query language into
Obligatory xkcd (Score:4, Funny) [xkcd.com]
Re:Obligatory xkcd (Score:5, Funny)
Lemme be the first to say (Score:2)
Use perl. Because the support both in java and php for applying regexes and preparing SQL statements has been late, convoluted and lacking.
Re: (Score:2)
I used Perl in the '90s. Then switched over to PHP.
I remember that Perl was not too good for web programming. It was unstable in a sense that variables sometimes got strange values inexplicably.
And also the architecture of the language was not suited for web pages. When I saw PHP3, I switched to it immediately and never looked back.
PHP also has its minuses (why can't I create a RAR or ZIP archive locked with a password on a website?), but in general it is OK, if one pays attention to what he gets from users.
I
Re: (Score:3, Informative)
I remember that Perl was not too good for web programming. It was unstable in a sense that variables sometimes got strange values inexplicably.
Funny, the thing I -like- about Perl is that it is very stable in the sense that variables never get strange values inexplicably. It is a very deterministic environment, set it up and it just works as promised.
And also the architecture of the language was not suited for web pages. When I saw PHP3, I switched to it immediately and never looked back.
There are packages that make it very well suited for web pages. OK, you can't really just sprinkle code into your html like you can with php (or maybe you can, but really, why the hell would you want to do that?) but it generates web pages just fine.
I totally agree with you about sanity checking in
Re:Lemme be the first to say (Score:4, Funny)
I remember that Perl was not too good for web programming. It was unstable in a sense that variables sometimes got strange values inexplicably.
Perhaps less (or more) drinking would help?
Re: (Score:3, Informative)
Re:Lemme be the first to say (Score:4, Informative)
It was unstable in a sense that variables sometimes got strange values inexplicably.
Perl doesn't stop you from programming like a rodeo clown (for those who don't even qualify as cowboys...).
If you're going to make zealous use of globals and then use mod_perl you will get hurt.
Universities teach about something called "coupling". Every professional programmer will talk about something called "use strict". If either of these concepts is too difficult, you're better off with a language that does its best to save you from yourself (but be aware Java threads are not going to stop any determined doofus from causing real pain).
SQL Injections SHOULD NEVER WORK (Score:4, Interesting)
If the attacker can still input SQL commands, they can display the views, tables, procedures, etc. that the account accessing the database can access. Besides, most current databases allow you to use views for update and insert.
That means you need to implement a solution using multiple database credentials; that way, when they attempt to access something, the account used to access the database has the least permissions needed for the specific page and the rights of that current user. There are very few tools that understand using multiple database credentials, and those that do are expensive and a pain; it's been a few years, so maybe they are better.
So that leaves you having to write your own code and adding a lot of code to handle the switching of database credentials, or having different areas, including duplicate pages, that handle the different database credentials.
The user can see the table structure, perhaps the view definition, but not the data they have no rights to.
You deny select on the table, and grant access to the view. The view contains a constraint that forces the view only to return the data the connecting user is allowed to see.
I have implemented this in Postgres/PHP.
You have a group role that has read access to the public tables (eg products). The webserver runs, by default, at this user level.
When a user logs in, they reconnect to the database. They are
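Roughly, that setup looks like the following sketch (Python with psycopg2; it assumes a running Postgres with these roles and tables, all invented for illustration):

import psycopg2

# One-time schema setup, run as the owner.
setup = psycopg2.connect('dbname=shop user=owner')
cur = setup.cursor()
cur.execute('CREATE VIEW my_orders AS '
            'SELECT * FROM orders WHERE customer = current_user')
cur.execute('REVOKE ALL ON orders FROM PUBLIC')
cur.execute('GRANT SELECT ON my_orders TO web_users')
setup.commit()

# Per request: reconnect as the logged-in user. Whatever SQL a compromised
# page manages to run, the view only ever returns that user's own rows.
login, password = 'alice', 'secret'   # from the session, in reality
conn = psycopg2.connect('dbname=shop user=%s password=%s' % (login, password))
cur = conn.cursor()
cur.execute('SELECT * FROM my_orders')
print(cur.fetchall())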
Re: (Score:2)
How does your design method apply to blog/Twitter-style web sites where just about everyone inserts into tables and just about everyone reads everyone else's inserts?
Your view method may work well in your Postgres DB setup, but in the MS world, a view causes a full table scan and cannot be indexed unless you fork out $25k per socket for enterprise ed. Using views sounds bad to me.
Re: (Score:2)
Uhm. No.
Well, yes, but it doesn't help much. True, the web-SQL-user should only have access to information it needs to see. But that doesn't help you at all against the fact that a single web user shouldn't necessarily be able to see everything and do everything the web server as such can see and do.
To make a concrete example: if you're making an internet bank, then the web frontend needs to be able to see the account balance and movements of everyone who has internet banking; it also needs to be able to put in
SQL is not always the answer ... (Score:2)
You're right -- because it's SQL, which has assumptions about how it's used.
LDAP, on the other hand, you can set up to bind as the individual user, and you adjust which attributes a user is allowed to see or modify in their own entry, and which entries they can see in other entries.
So, part of the solution is using the correct data store for the situation, and SQL isn't always it. (I haven't played with any of the "NoSQL" stuff yet, but much of the behaviour with replication and and flexibility of storage
Re: (Score:2)
The idea is that instead of creating a "users" table and filling it with your users, the user is created as a database user, and their username and password is handed straight to the database during the connection process. If it connects, the user had a valid username/password. If it doesn't connect, the user didn't. If you have a million users, then your database server would need to be able to handle having a million different users each with different levels of access on different tables/rows/columns/
Re:SQL Injections SHOULD NEVER WORK (Score:4, Interesting)
If your code is running at the correct privilege level, SQL injections should be completely irrelevant.
True, if you run your web app at the correct privilige level, there is no way an SQL injection can be used to root the machine.
But it can still be used to corrupt the application itself, which is often more valuable that the system.
Example: a gaming application that wants to store a score per user. Even if the app uses a separate DB user per game user, and even if the DB only allows the user himself to update his score, this would not be good enough, because SQL injection might allow a player to assign himself an arbitrary score of his chosing.
Re:
USDA likes to put SQL strings in their URLS (Score:3, Informative)
If you look for a while you'll find them. The developers replied to me with "It's perfectly fine". While it seems they do parse this information isn't that screaming "Exploit me!"
Slash Dot Virus Sequel Injected in You (Score:5, Funny)
Re: (Score:2)
you will need to reformat your brain.
Does this mean I have to download the internet again?
It is a sad world we live in. (Score:5, Informative)
I go through this all of the time. Though I call it laziness, it is actually a combination of ignorance, indignation, and laziness.
Here is a very, very, very simple and very, very, very standard way of keeping SQL injections out. Validate everything at every level. There you go. Done.
1) Client side matters. Check input, validate it and pass it through to the application layer.
2) Application layer matters. Check variable, strictly type it, validate it and pass it through to your data layer.
3) Data layer matters. Check argument against strict type, validate it, paramaterize it, and pass it off to the database.
4) Database matters. Check paramater against strict type, validate it, and run it.
You run into problems when someone only follows any one of the steps above. You could handle it with a medium level of confidence in areas 2 and 3 (and if you're asking why not 1 and 4, go sit in the corner while the grown-ups talk), but good practice for keeping it clean is validate it at every layer. That doesn't mean every time you touch the information you have to recheck the input, but every time it moves from one core area of the platform to another or hits an area it could be compromised, you do.
As I said above, the only reason for not following 1-4 is laziness, ignorance, or indignation. SQL injections aren't hard to keep out.
We're in an age where web development IS enterprise level programming and developers need to treat it as such.
There, I just saved your organization millions of dollars. Go get a raise on my behalf or something.
Re: (Score:3, Interesting)
5. Output matters. Check data from the layer below, ensuring any characters that might carry unintended meaning but need to be in the data are escaped as required.
Always check the data on the way out as well as on the way in, in case something malicious got in by any means (due to a failure in steps 1 through 4, or direct database access by other means). This is implied by your supplementary text, but I thin
Re: (Score:3, Interesting)
I am with you on thee through 4, and you probably should or are doing 1 because you want to be able to help the user put the right information in fields, check onblur an give some useful feedback but spending allot of time on careful input validation at the client level with web is pretty pointless. Anyone doing something malicious does not have to use your interface at all.
I produced a video on SQL injections - (Score:3, Informative)
I wanted it to be short, easy for management to understand (even non-technical). Definitely worth watching, IMHO. [youtube.com]
Re: (Score:2)
Little Bobby Tables: [xkcd.com]
The author should be more careful... (Score:3, Insightful)
Re: '!='
This lovely programmer has sold his code around (Score:5, Insightful)
Took me 2 minutes with Google to find other sites that are apparently using the same crappy code with the same vulnerabilities. "inurl:" does wonders.
Re:limit the length and content of what you accept (Score:5, Insightful)
Re: (Score:3, Interesting)
I agree. Just like any regular program, input must be reduced to an EXPECTED set of values. Bounds checking must be performed. Anything outside that strict set of values must be rejected offhand and an error message provided. This is programming 101.
Unfortunately when HTML, PHP and SQL went "mainstream", these core programming concepts didn't get passed along. Frankly I say let "evolution" take careof/teach sloppy web developers - the smarter ones will have backups and be able to fix their problems. What re
Re: .
Re: th | http://developers.slashdot.org/story/10/02/26/0542206/anatomy-of-a-sql-injection-attack?sdsrc=next | CC-MAIN-2014-10 | refinedweb | 4,840 | 61.36 |
Introduction
C++ or ‘the New C,’ as it is based on C’s framework and additional features. C++ is also credited to influence several languages such as C# and other newer editions of C. It is also recognized with the introduction of Object-Oriented Programming. This establishes the fact about how essential C++ has been for the programming world.
This article is about one of the most basic yet crucial tasks, file handing in C++. Now, files are significant for programming as well as for other sectors as they are the storage sectors. This is where the entire data is assembled. The whole concept of file handling can be divided into four sections –
- Opening a File
- Writing to a File
- Reading from a File
- Close a file
Importance of File Handling in C++
Before we embark on this journey of C++, let’s take a few moments to understand why do we need file handling. In simple terms, it offers a mechanism through which you can collect the output of a program in a file and then perform multiple operations on it.
There is one more term, “Stream,” which we’ll be using quite frequently. So, let’s get acquainted with it, as well. A stream is a process that indicates a device on which you are performing the input and output operations. In other words, the stream can be represented as an origin or target of characters of unspecified length based on its function.
ifstream, ofstream, and fstream make the set of file handling methods in C++. A brief description of these three objects –
- ofstream – In C++, ofstream is used to create and write in files. It signifies the output file stream.
- ifstream – Programmers use ifstream to read from files. It signifies the input file stream.
- fstream – fstream can be said as a combination of ofstream and ifstream. It is used to create, read, and write files.
Each one of them helps to manage disk files and, therefore, is specifically designed to manage disk files.
These are the operations used in File Handling in C++ –
- Creating a file: open()
- Reading data: read()
- Writing new data: write()
- Closing a file: close()
Must Read: Top 8 Project Ideas in C++
Let’s discuss them thoroughly to understand how file handling in C++ works –
- Opening a File
Before you can take action on the file, be it read or write, you need to open it first. Ofstream or fstream objects can be applied to initiate a file for writing. Similarly, the ifstream object can be used if you want to read the file.
You can use the following procedures to open a file –
- At the time of object creation, bypass the file name.
- Or you can use the open() function. It is a member if ifstream, ofstream, fstream objects.
For example
void open(const char *nameofthefile, ios::openmode mode);
The first argument in the above defines the name and location of the file which you want to open. The second argument specifies the method by which your target file should be opened.
Here are the Mode Flag & Description –
- ios::app – Append mode. All output to that file to be attached to the end.
- ios::in – Open a file for reading.
- ios::ate – Open a file for output and move the read/write control to the end of the file.
- ios::out – Open a file for writing.
- ios::trunc – If the file already exists, its contents will be truncated before opening the file.
You can create multiple values using the above modes by using the OR. For instance, if you wish to open a file for reading or writing purpose, use-
fstream newfile;
newfile.open (“file.dat”, ios::out | ios::in );
Similarly, if you wish to open a file in write mode and wish to truncate it if it already exists –
ofstream newfile;
newfile.open (“file.dat”, ios::out | ios::trunc );
- Writing a file
While working on a C++ programming file, use either ofstream or fstream object along with the name of the file. It would be best to use the stream insertion operator (<<) to write information to a file.
#include <iostream>
#include <fstream>
Utilize namespace std;
int main() {
// Create and open a text file
ofstream newFile(“filename.txt”);
// Write to the file
NewFile << “Learning files can be challenging, but the result is satisfying enough!”;
// Close the file
NewFile.close();
}
- Reading a file
For reading a C++ programming file, you use either the fstream or ifstream object. In case you want to read the file line by line, and to print the content of the file, use a while loop along with the getline () function.
To read information from a file, you need to use the stream extraction operator (>>).
Example
// Construct a text string, which is managed to output the text file
string newText;
// Read from the text file
ifstream newReadFile(“filename.txt”);
// Now use a while loop together with the getline() function to read the file line by line
while (getline (MyReadFile, myText)) {
// Output the text from the file
cout << myText;
}
// Close the file
MyReadFile.close();
- Closing a file
By default, when a C++ program closes, it expels all the teams, lets out all the designated memory, and concludes all the opened files. But it is considered to be a healthy practice in terms of file handling in C++ that one should close all the opened files prior to the termination of the program. It also cleans up unnecessary space.
This is the standard syntax for close() function. It is a member of fstream, ifstream, and ofstream objects.
void close();
Also Read: Data Structure Project Ideas
Conclusion
That concluded the lesson on ways in which you can do file handling in C++. Remember, C++ is one of the most predominant languages in the programming world to create both technical and commercial softwares.
Therefore, the more you understand, the more you can explore using this versatile language.If you are interested to learn more and need mentorship from industry experts, check out upGrad & IIIT Banglore’s PG Diploma in Full-Stack Software Development. | https://www.upgrad.com/blog/file-handling-in-c-plus-plus/ | CC-MAIN-2021-31 | refinedweb | 1,010 | 70.94 |
Say you have the following Java class:
public class MyClass
{
private String name = "";
public String getName()
{
return name;
}
}
Then you can do the following in Kotlin:
println(MyClass().getName())
However, IntelliJ recognizes the prefix get and will suggest you use property notation:
get
println(MyClass().name)
Furthermore, IntelliJ offers code completion for the latter option only
So far, this is great: name is a property and should be accessed like one, using the concise syntax.
This becomes a problem when there is a Java method with prefix get that is actually not a property. Sometimes, the line between methods and properties may be blurry, but sometimes it is quite clear. E.g., javax.persistence.Query.getResultList() is clearly a method:
javax.persistence.Query.getResultList()
So, writing query.resultList would actually be misleading and masquerading a potentially expensive method call. It is better to write query.getResultList(), but IntelliJ will discourage you to do so.
query.resultList
query.getResultList()
The ideal solution to this problem would involve a new annotation @NoProperty that gets added to all Java methods like Query.getResultList(). Since this is not going to happen, I suggest that IntelliJ should allow you to configure get-prefixed methods that are supposed to be called like methods. A default list could be community-curated. It is very unlikely that there will be versioning issues. Another option could be bytecode analysis to find out whether get-prefixed methods ultimately only access fields (possibly via a chain of get-prefixed methods). If so, they are properties, otherwise the are not.
@NoProperty
Query.getResultList()
For background information, you might want to check out the guide on properties vs. methods in .NET.
Update:
The Kotlin reference also has a small section on functions vs. properties, giving basically the same advice as the .NET guide:
IntelliJ IDEA already offers this configuration possibility.
Indeed that looks exactly like what I have been searching for. Unfortunately, I just cannot seem to be able to persist my changes to that list of excluded methods.
Works: When I navigate to that option from the settings menu and disable the rule (in profile “Default IDE”), the “Apply” button becomes enabled, as expected. If I click “OK”, the window closes and the rule is now disabled, as expected. If I reopen the settings, it is still disabled, as expected.
Does not work: When I add a method to the list, the “Apply” button stays disabled. If I close the window, the method is not excluded by the rule. If I reopen the settings, the new method is gone. This looks like a bug to me. Should I create an issue in your tracker?
IntelliJ IDEA 2017.2.4
Build #IC-172.4155.36, built on September 11, 2017
JRE: 1.8.0_152-release-915-b11 amd64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Windows 7 6.1
See: | https://discuss.kotlinlang.org/t/java-interop-intellij-property-syntax-for-method-invocations/4611 | CC-MAIN-2017-43 | refinedweb | 481 | 58.58 |
#include <aflibEnvFile.h>
List of all members.
This class will create a .<name> directory in the users home directory or a directory for the users choice. It will then manage a set of key values that the programmer can search for. This allows programmers to save a users settings and then retrieve then at a latter time. It currently supports upto 1024 different settings. Two constructors are provided. They will dertermine what directory to look for the settings file and what the name of the file is. If the directory is not specified then a default directory of .<DEFAULT_DIR> will be created in the users home directory. A file will be created in this directory. The name will be the one specified by the caller. If none is specified then DEFAULT_FILE will be created. To retrieve and store strings into the file readValueFromFile and writeValueToFile should be called. | http://osalp.sourceforge.net/doc/html/class_aflibEnvFile.html | crawl-001 | refinedweb | 148 | 69.28 |
Google Cardboard continues to be one of the primary ways that many creators and the general public are getting their first taste of virtual reality. In this three part series for Make:, I have been exploring how people used to building things in the physical world can get involved with the exciting virtual one that is emerging. We have reached the last piece of the puzzle — allowing our user to interact with the virtual world they see!
If you are completely new to Unity, I’d recommend reading the first article on getting started with Unity. If you’ve already tinkered with a bit of Unity in the past, but haven’t tried out the Google Cardboard SDK, the second article on building for Google Cardboard in Unity is the place to start! If you’ve got those bases covered, this final article in the series will cover adding interactivity to your Google Cardboard virtual reality apps via gaze and touch input.
Adding a physics raycaster to the camera
In order to determine what object a user is looking at, we need to add a physics raycaster to our Cardboard SDK’s camera. Imagine a straight line that shoots out from the camera and intersects whatever the camera is facing — that’s the idea of a raycaster. We then use that to trigger either a touch or gaze input action, which we’ll explore further below.
To add a physics raycaster to our camera, we go to our object hierarchy on the left and choose CardboardMain > Head > Main Camera. This is our camera object that follows the user’s headset viewing angle and direction. When you have this selected, go to the inspector column on the right, scroll to the bottom and click the “Add Component” button:
In the menu that appears, you will see a search box that will let you filter the objects inside it. Type in “physics” and then click “Physics Raycaster” (Note: we do not want “Physics 2D Raycaster,” so be careful with which one you choose here!).
When you click that, it should appear within your inspector like so:
Setting up objects so we can interact with them
Now that we have a physics raycaster, we need to ensure that the raycaster can detect the objects in our scene. To do this, we need to add a “collider” to these objects.
When you create some objects in Unity, they will already come with a collider by default. For example, adding in a cylinder into a Unity project will give it a default capsule collider component. This wraps tightly around the shape so that any time something touches this collider, we know for certain it is touching the object itself. The collider is visible as the green lined shape that appears when you click the object:
If we look at our collider’s settings inside the “Capsule Collider” component on the right, we can see quite a few things that are editable. The main ones that you’ll tend to change are:
- The center point — This moves the collider around the object, which can be useful if you want only a part of the object to trigger events.
- The radius — This makes the collider wider or smaller, giving it more/less buffer room around the object itself.
- The height — This makes the collider taller or shorter, giving it more/less buffer room on the other axis.
For example, if we enlarge the cylinder collider’s radius and height, we get the following:
Chances are high that most of the time you won’t need to change the collider values for these sorts of objects. However, custom objects (which you are likely to be using in your projects!), do not have colliders by default. If I click on our Makey robot, you will see it doesn’t have any collider component at all. This is an issue which many beginner developers miss and is one of the more common questions I’m asked by people stuck trying to get started with Google Cardboard and Unity. Make sure that every object you want to be able to interact with has a collider!
To add a collider to our custom object, we go to “Add Component” once again and type in “collider” to filter to the different collider types. There are a range of different collider shapes to suit different objects. Feel free to experiment and choose the one that you feel fits your object the nicest.
Often, having a bit of a buffer around the object isn’t a bad thing — it can make it easier for your user to select or interact with the object. For the Makey robot, I’ve chosen a “Box Collider” because it was able to cover the robot’s overall physical space and a little bit extra in case the user wasn’t quite accurate enough with their glance.
When first creating a collider, you may struggle to actually see the collider itself! Colliders aren’t automatically sized to cover the object — you need to change the center point and size to cover it yourself. When a box collider first appears, it appears as a small 1×1×1 cube:
We then make it bigger, setting the size to 30×20×30 (adjust these values for your own custom object, you will see how well it fits the object by watching the green lines grow around it). You may have to move the actual collider a little bit off center to cover the whole object too — I had to move the robot’s collider up by 4:
Our interactivity script
To try out the capabilities of touch and gazing at objects, we need something to actually happen when we do these in the VR app. What we are going to do is make the bot move closer to us when we look at it and move further away when we click it. You could set up any number of other responses to object interactions — change its material so that its color changes, change the whole object into something else, change the background scene or other objects… the sky is the limit!
To set up our moving back and forth responses, we need to attach some basic coding to our Unity object (in my example, we will be adding it to the Maker Faire Bot). To attach a script to your object, start by clicking the object, going to the inspector window and clicking “Add Component” once more. Search for “script” and choose “New Script”:
Name your script something that makes sense (e.g. “MakerFaireBot” or “MakerFaireBotController”), stick with “C Sharp” as the language (unless you’re already familiar with Unity and want to use UnityScript) and then click “Create and Add”:
It’s much neater to have all of our assets in categorized folders, including scripts, so let’s right click on the “Assets” window and choose Create > Folder:
From there, we call that new folder “Scripts” and then drag our new script which you will see in the Assets window into that folder. Unity will still have it linked correctly within your object without you needing to change it which is nice!
The initial script will look like so:
using UnityEngine; using System.Collections; public class MakerFaireBot : MonoBehaviour { // Use this for initialization void Start () { } // Update is called once per frame void Update () { } }
Let’s change it to; } } }
In our code changes, we have added a variable called
lastPosition which remembers where the robot was last. Within the
Start() function, we set this variable to be the current position of our robot via
transform.position.
In a new function called
lookAtBot(), we move our bot towards the camera by 0.1 on the Z axis each time it runs until we reach –8. We stop at –8 because our camera is positioned at –10 on the Z axis and we don’t want the bot going through us! Now, we just need a way to get this to happen when the user gazes at the robot.
Gaze input
For gaze functionality to work, we need to add one final component to our object — an event trigger. To do so, click “Add Component” once again while you have your custom object selected, find and select the “Event Trigger” component:
Once it is added, you will see an option to “Add New Event Type”. Click that button and choose “PointerEnter”:
Click the “+” icon to add in a script that will be triggered any time that the “PointerEnter” event is fired. A series of options will appear:
Drag your custom object from your hierarchy on the left into the small area just underneath the “Runtime Order” menu (bottom left of the Event Trigger window):
We can now select public functions within that object to be called as event triggers. So we choose
MakerFaireBot and then find our
LookAtBot() function (you will choose your own function name!):
We are almost at a point where our event triggers will run! Before Unity’s scene knows to look out for them, we need to add an event system. To do this, go to GameObject > UI > Event System:
Within the event system, we can teach it to look out for gaze events triggered by the Google Cardboard SDK. To do so, click on “Add Component” while you have “EventSystem” selected and find the “GazeInputModule”. This is from the Google Cardboard SDK. Add that to the “EventSystem” object.
Once you have the GazeInputModule, untick the “Standalone Input Module” within EventSystem too. If this is enabled, your Cardboard device’s click events will not trigger!
If you run the app now in Unity’s test mode and keep looking at your object, it will move closer and closer towards you!
Touch input
Let’s set up one final response for any time the user clicks the object via the Cardboard clicker (not all of the Cardboard headsets out there have one but quite a few do). We will update our code in the custom object to have another function for moving the robot away from us. Let’s change it; } } public void ClickBot() { if (lastPosition.z < 5) { lastPosition = new Vector3(lastPosition.x, lastPosition.y, lastPosition.z + 2f); transform.position = lastPosition; } } }
This adds in a
ClickBot() function which moves the object away from us as long as they are still less than 5 on our Unity scene’s Z axis (this limit is so the object doesn’t get beyond our reach!).
We then go back to our robot (or your custom object), go to the “Event Trigger” component and click “Add New Event Type” once more. This time, we choose “PointerClick”:
It adds our
LookAtBot() function as the click function automatically, which is lovely but not quite what we want. Click that function and select MakerFaireBot >
ClickBot():
If you play the scene again and click your custom object, it will now be pushed back, defending your personal space!
How can I tell if I’m looking at it?
We are missing one rather important thing — a way of telling the user where they are looking. It isn’t always clear whether you are looking at the right spot, especially when it comes to things like menus. We can add a crosshair (called a reticle within Google Cardboard’s SDK…) by going to your Assets and finding Assets > Cardboard > Prefabs > UI > CardboardReticle. This is a prefab Google have provided that will give your experience a crosshair.
Drag this prefab onto your Main Camera within CardboardMain:
If we play our scene now, we have a nice circle which grows when you have an object within your sights:
That’s all folks!
We now have all three aspects of the basics for building Google Cardboard experiences covered — understanding a bit of Unity, setting up a scene to be viewable in virtual reality and finally, setting up interactivity in the scene via Google Cardboard. With these basic concepts, there are a whole range of possibilities available to you!
Makers everywhere — there is no reason not to give this new medium a go. Who knows what your unique view of the world from the perspective of a maker could produce? Build something wonderful. Make someone smile. Make something that influences the world in a positive way. Give someone a thrill. Teach someone a valuable lesson or skill. The territory of VR is still so new and unexplored that it really is up to you. Go your own path. Try things.
If you have found this series useful, I’d love to see what you build and create! You can get in touch with me on Twitter at @thatpatrickguy. For those who are keen to develop in virtual reality, I have more insight into building for VR on my website — Dev Diner. I’ve also written other developer focused VR tutorials over at SitePoint that might help — check those out! The virtual world is your oyster! Please do not hesitate to get in touch if you need some advice on getting started, I’m here to help. Thank you to those who’ve come along for the ride in this series! | http://makezine.com/projects/getting-started-virtual-reality-add-interactivity-google-cardboard/ | CC-MAIN-2017-04 | refinedweb | 2,194 | 66.27 |
SWF Tag DefineEditText (37). More...
#include <DefineEditTextTag.h>
SWF Tag DefineEditText (37).
Virtual control tag for syncing streaming sound to playhead
Gnash will register instances of this ControlTag in the frame containing blocks of a streaming sound, which is occurrences of SWF Tag StreamSoundBlock (19).
The tag will then be used to start playing the specific block in sync with the frame playhead.
Get text alignment.
Is border requested ?
Get color of the text.
Create a DisplayObject with the given parent.
This function will determine the correct prototype and associated object using the passed global.
Implements gnash::SWF::DefinitionTag.
References gnash::createTextFieldObject(), and getFont().
Return a reference to the default text associated with this EditText definition.
Referenced by gnash::TextField::TextField().
Referenced by createDisplayObject(), and gnash::TextField::TextField().
Return true if this DisplayObject definition requested use of device fonts.
Used by TextFielf constructor to set its default.
Has text defined ?
Return true if HTML was allowed by definition.
Get indentation in twips.
Get extra space between lines (in twips).
This is in addition to default font line spacing.
Get left margin in twips.
Load an SWF::DEFINEEDITTEXT (37) tag.
References gnash::movie_definition::addDisplayObject(), gnash::SWF::DEFINEEDITTEXT, gnash::SWFStream::ensureBytes(), and gnash::SWFStream::read_u16().
Return the maximum length of text this widget can hold.
If zero, the text length is unlimited.
Get right margin in twips.
Get height of font in twips.
Return a reference to the "VariableName" associated with this EditText definition. The variable name is allowed to contain path information and should be used to provide an 'alias' to the 'text' member of instances.
Word wrap requested ? | http://gnashdev.org/doc/html/classgnash_1_1SWF_1_1DefineEditTextTag.html | CC-MAIN-2015-32 | refinedweb | 268 | 54.49 |
Typed table block¶
The
typed_table_block module provides a StreamField block type for building tables consisting of mixed data types. Developers can specify a set of block types (such as
RichTextBlock or
FloatBlock) to be available as column types; page authors can then build up tables of any size by choosing column types from that list, in much the same way that they would insert blocks into a StreamField. Within each column, authors enter data using the standard editing control for that field (such as the Draftail editor for rich text cells).
Installation¶
Add
"wagtail.contrib.typed_table_block" to your INSTALLED_APPS:
INSTALLED_APPS = [ ... "wagtail.contrib.typed_table_block", ]
Usage¶
TypedTableBlock can be imported from the module
wagtail.contrib.typed_table_block.blocks and used within a StreamField definition. Just like
StructBlock and
StreamBlock, it accepts a list of
(name, block_type) tuples to use as child blocks:
from wagtail.contrib.typed_table_block.blocks import TypedTableBlock from wagtail.core import blocks from wagtail.images.blocks import ImageChooserBlock class DemoStreamBlock(blocks.StreamBlock): title = blocks.CharBlock() paragraph = blocks.RichTextBlock() table = TypedTableBlock([ ('text', blocks.CharBlock()), ('numeric', blocks.FloatBlock()), ('rich_text', blocks.RichTextBlock()), ('image', ImageChooserBlock()) ])
To keep the UI as simple as possible for authors, it’s generally recommended to use Wagtail’s basic built-in block types as column types, as above. However, all custom block types and parameters are supported. For example, to define a ‘country’ column type consisting of a dropdown of country choices:
table = TypedTableBlock([ ('text', blocks.CharBlock()), ('numeric', blocks.FloatBlock()), ('rich_text', blocks.RichTextBlock()), ('image', ImageChooserBlock()), ('country', ChoiceBlock(choices=[ ('be', 'Belgium'), ('fr', 'France'), ('de', 'Germany'), ('nl', 'Netherlands'), ('pl', 'Poland'), ('uk', 'United Kingdom'), ])), ])
On your page template, the
{% include_block %} tag (called on either the individual block, or the StreamField value as a whole) will render any typed table blocks as an HTML
<table> element.
{% load wagtailcore_tags %} {% include_block page.body %}
Or:
{% load wagtailcore_tags %} {% for block in page.body %} {% if block.block_type == 'table' %} {% include_block block %} {% else %} {# rendering for other block types #} {% endif %} {% endfor %} | https://docs.wagtail.org/en/v2.15.1/reference/contrib/typed_table_block.html | CC-MAIN-2022-21 | refinedweb | 319 | 51.75 |
PROBLEM LINK:
Setter- Erfan Alimohammadi
Tester- Roman Bilyi
Editorialist- Abhishek Pandey
DIFFICULTY:
EASY
PRE-REQUISITES:
Binary Search, Two Pointers
PROBLEM:
Given an array A of size N, find number of ways to delete a non-empty subarray from it so that the remaining part of A is strictly increasing.
QUICK-EXPLANATION:
Key to AC- Don’t over-complicate the solution. Think in terms of, that, “If I start deleting from index i, then how many ways are possible?”
Maintain 2 arrays, Prefix[i] and Suffix[i] - where Prefix[i] will store true if subarray from [1,i] is strictly increasing, and similarly Suffix[i]=true if the subarray from [i,N] is strictly increasing. Note that, once Prefix[i]=false for any index i, it will remain false for every index after i as well. A vice-versa observation holds for Suffix[] array as well.
Notice that if Prefix[i]=false, then deleting a sub-array starting from index i will not yield a strictly increasing array (as its falsified before this index already). Hence, for every index starting from 1 (1-based indexing), as long as Prefix[i] is true, Binary search for the least index j such that A_j>A_i and array from index j onwards is strictly increasing. (This can be easily achieved by using our Suffix[j] array) . We can now delete N-j+1 subarrays from index (i+1) to index [j-1,N]. Note that, it is implicitly implied that j must be greater than (i+1).
Make sure to account for cases where you might delete an empty subarray (for some test cases) when i=0, or when you are deleting the first element of the array in operation as well.
(Note- I said the range [j-1,N] because you have the option to keep entire range [j,N] as well. In other words, starting from index i, you have to compulsorily delete upto index j-1. Thats the reason for +1 in the expression)
EXPLANATION:
We will first discuss the importance of Prefix[] and Suffix[] array. Once that is done, we will move on to the binary search part, touching upon little implementation details and summarize the editorial. The editorial uses 1-based \ indexing unless explicitly mentioned otherwise.
1. Maintaining the Prefix[] and Suffix[] array-
We define Prefix[i] to be true if the subarray from [1,i] is strictly increasing. Similarly, we maintain another array Suffix[], where Suffix[i] is true if the array from [i,N] is strictly increasing. The first step towards solution is to realize why they are needed in first place - hence give some time thinking of it. In case you are unable to grasp, the solution is below.
If you are still not clear about Role of Prefix & Suffix Arrays here
The rationale behind these two, is that, we only have those many starting positions to consider where Prefix[i]=true. Once Prefix[i] becomes false, it will never be true again because if subarray from [1,i] is not strictly increasing, then the subarray from [1,j] will never be strictly increasing either, for all j > i.
Hence, if we start from an index where Prefix[i]=false, then we cannot make the resulting array strictly increasing no matter what. Hence, starting from index 1, we can go upto only that index i till where Prefix[i] is true.
Similar reasoning holds for Suffix[] array as well. Try to think of it a little, in case you face any issue, the reasoning is there in bonus section. Look at it before proceeding to the next section if the purpose is not clear.
Once you are clear with the idea that what is the need of these arrays, or what do these arrays denote, proceed to the next section to see how they will be used.
2. Binary Searching-
Now, the things to note are that we can perform the operation only once, and the operation must remove a contiguous subarray. Starting from index 1, iterate along all indices i till which Prefix[i]=true. For all such indices, if we can find the index j, such that A_j>A_i and Suffix[j]=true (i.e. the subarray from index [j,N] is strictly increasing), then we are done! That is because, we can then claim that "We get (N-j+1) ways of removing sub-array from range [j-1,N]. Note that it is (j-1) here because A_j>A_i and Suffix[j]=true, which allow the final array formed by elements in range [1,i] \cup [j,N] to be strictly increasing as well! Iterating for all such valid i's will give us the final answer!
Now, the question reduces to, how will we find such a valid index j ? Easy, but perhaps unexpected for some newcomers - Binary Search!
Have a look at your Prefix[] and Suffix[] arrays. Prefix[] is initially true at i=1, and once it becomes false, it remains false till the end. Vice Versa holds for the Suffix[] array. Because of this, we can apply Binary Search on the array!
For Binary Search, we look for 2 things-
- Suffix[j] must be true.
- A_j must be strictly greater than A_i
The second part is very much valid even if original array A is unsorted (Why?). With this, all that is left is to code our binary search. Setter’s implementation is given below-
Setter's Binary Search
int lo = i + 1, hi = n + 1; while (hi - lo > 1){ int mid = (lo + hi) >> 1; if (pd[mid] == 1 and a[mid] > a[i]) //pd = Suffix Array hi = mid; else lo = mid; } answer += n - lo + 1;
However, one thing is still left!!
Recall our interpretation! For every index i, we are fixing the starting point of the to-be-deleted subarray to (i+1). Got what I am trying to imply? Take it like this - For every index i, finding a j such that the array formed by [1,i] \cup [k,N], where j-1 \leq k \leq N. This means that the deleted sub-array is [i+1,k-1]. Can you think the corner case we are missing here?
We are not deleting the first element here!! To account for this, just do another binary search without the A_{mid} > A_i condition, as all elements from index 1 upto index k will be deleted. (Alternate - What if we simply see for upto how many indices the Suffix[i] is 1 ?)
With this done, all that is left is to take care of implementation issues, for instance, if your implementation needs to handle the case of deleting an empty subarray, or getting an empty sub-array &etc.
SOLUTION
Setter
#include <bits/stdc++.h> using namespace std; typedef long long ll; const int maxn = 2e5 + 10; const int inf = 1e9 + 10; bool dp[maxn], pd[maxn]; int a[maxn], t, n; int main(){ ios_base::sync_with_stdio (false); cin >> t; while (t--) { cin >> n; dp[0] = 1; a[0] = -inf; for (int i = 1; i <= n; i++){ cin >> a[i]; dp[i] = (dp[i - 1] & (a[i] > a[i - 1])); } pd[n + 1] = 1; a[n + 1] = inf; for (int i = n; i >= 1; i--) pd[i] = (pd[i + 1] & (a[i] < a[i + 1])); ll answer = 0; int lo = 0, hi = n + 1; while (hi - lo > 1){ int mid = (lo + hi) >> 1; if (pd[mid]) hi = mid; else lo = mid; } answer = (n - lo); for (int i = 1; i <= n - 1; i++){ if (dp[i] == 0) break; int lo = i + 1, hi = n + 1; while (hi - lo > 1){ int mid = (lo + hi) >> 1; if (pd[mid] == 1 and a[mid] > a[i]) hi = mid; else lo = mid; } answer += n - lo + 1; } cout << answer - dp[n] << endl; } }
Tester
#include "bits/stdc++.h" using namespace std; #define FOR(i,a,b) for (int i = (a); i < (b); i++) #define RFOR(i,b,a) for (int i = (b) - 1; i >= (a); i--) #define ITER(it,a) for (__typeof(a.begin()) it = a.begin(); it != a.end(); it++) #define FILL(a,value) memset(a, value, sizeof(a)) #define SZ(a) (int)a.size() #define ALL(a) a.begin(), a.end() #define PB push_back #define MP make_pair typedef long long Int; typedef vector<int> VI; typedef pair<int, int> PII; const double PI = acos(-1.0); const int INF = 1000 * 1000 * 1000; const Int LINF = INF * (Int) INF; const int MAX = 100007; const int MOD = 998244353; long long readInt(long long l,long long r,char endd){ long long x=0; int cnt=0; int fi=-1; bool is_neg=false; while(true){ char g=getchar(); if(g=='-'){ assert(fi==-1); is_neg=true; continue; } if('0'<=g && g<='9'){ x*=10; x+=g-'0'; if(cnt==0){ fi=g-'0'; } cnt++; assert(fi!=0 || cnt==1); assert(fi!=0 || is_neg==false); assert(!(cnt>19 || ( cnt==19 && fi>1) )); } else if(g==endd){ assert(cnt>0); if(is_neg){ x= -x; } assert(l<=x && x<=r); return x; } else { assert(false); } } } string readString(int l,int r,char endd){ string ret=""; int cnt=0; while(true){ char g=getchar(); assert(g!=-1); if(g==endd){ break; } cnt++; ret+=g; } assert(l<=cnt && cnt<=r); return ret; } long long readIntSp(long long l,long long r){ return readInt(l,r,' '); } long long readIntLn(long long l,long long r){ return readInt(l,r,'\n'); } string readStringLn(int l,int r){ return readString(l,r,'\n'); } string readStringSp(int l,int r){ return readString(l,r,' '); } void assertEof(){ assert(getchar()==-1); } int main(int argc, char* argv[]) { // freopen("in.txt", "r", stdin); //ios::sync_with_stdio(false); cin.tie(0); int t = readIntLn(1, 10); FOR(tt,0,t) { int n = readIntLn(1, 100000); VI A(n); FOR(i,0,n) { if (i + 1 < n) A[i] = readIntSp(-INF, INF); else A[i] = readIntLn(-INF, INF); } VI X; int val = INF + 47; X.push_back(val); RFOR(i, n, 0) { if (A[i] >= val) break; val = A[i]; X.push_back(val); } Int res = min(SZ(X) - 1, n - 1); val = -INF - 47; FOR(i,0,n) { if (A[i] <= val) break; val = A[i]; while (SZ(X) && X.back() <= A[i]) X.pop_back(); res += min(SZ(X), n - 1 - i); } cout << res << endl; } assertEof(); cerr << 1.0 * clock() / CLOCKS_PER_SEC << endl; }
Time Complexity=O(NLogN) (Binary Search)
Space Complexity=O(N)
CHEF VIJJU’S CORNER
Why we need the Suffix array
Similar to the Prefix[] array, the Suffix[] array highlights about number of suitable ending positions we have for sub-array to delete.
Exactly opposite to Prefix[] array, Suffix[] array is initially 0, and once it takes a value of 1, it will always be 1. This is because if the subarray in range [i,N] is strictly increasing, then so is the subarray in range [i+1,N]
If we put the ending point of the to-be-deleted subarray at some point where Suffix[i]=false AND Suffix[i+1] is NOT true, then the resulting subarray cannot be strictly increasing as the array in range [i+1,N] is not strictly increasing.
"Array A is not sorted, then how is Binary search Possible
Simple! Because this condition matters only if Suffix[i] is true! This means that the range the continuously increasing sub-array ending at index N. In other words the sub-array in which we are binary searching is strictly increasing, or sorted. Hence we are able to avoid wrong answer. Remember, we are not binary searching for some value in the entire array - we are binary searching in the sorted part of the array for some value greater than A[i]!
3.This Question is also solvable by using two-pointer technique! Give it a try and share your approach here
4.What modifications can you think of, if I modify the problem as “the resulting sub-array must be strictly increasing, however, an error of 1 element is allowed”. Meaning, its ok if ignoring/deleting atmost 1 element, the resulting array after the operation becomes strictly increasing. How can we solve this problem?
Setter's Notes
We can solve the problem using Binary Search for appropriate index, or by two-pointers. | https://discuss.codechef.com/t/delarray-editorial/32453 | CC-MAIN-2021-10 | refinedweb | 2,034 | 59.94 |
On Fri, May 07, 1999 at 01:58:28PM -0700, Chris Waters wrote: > > 3. On the issue of QT. please note that I don't believe that Debian > > should use this. I would rather see your UI based around something > > like GTK. When I mention QT it is only to do with Corel's version of > > the setup UI , which to point out again, is not built into the setup > > API . > > Fair enough, and pretty much what I expected. However, I'd just like > to make sure that we aren't limited to GTK and QT, or even GTK, QT, > and console. I'd like to keep our options open. I still have an > ongoing interest in GNUStep, and heck, if someone wants to build a > Debian-with-CDE system, I think we should leave room for it. Oh, and > don't forget the Athena-Desktop-Environment project! :-) xtifr, you should be fwopped for suggesting the xaw thing.... => The rest of the above paragraph makes perfect sense and I agree completely. > > 4.Frame buffer. Again if it can be demonstrated that it works under > > the requirements that I have been given(a nice GUI based front end > > -> GTK/QT2 widgets) then yes we'll use it. > > Again, I think flexibility is probably the best approach. If the > VGA16 server works best for x86, but framebuffer works best for some > other platforms, why not make it configurable (by the person building > the CD images, not necessarily by the end users). At a minimum, maybe > something like: > > #ifndef DEFAULT_CONFIG_X_SERVER > # ifdef __i386__ > # define DEFAULT_CONFIG_X_SERVER "XF86_VGA16" > # else > # define DEFAULT_CONFIG_X_SERVER "XF86_FB" > # endif > #endif Using the framebuffer might be the best idea for x86 too if the VGA framebuffer works like the matrox one and can still let you run things like xgvalib and XF86_* servers... --p0viTf1aGBI.pgp
Description: PGP signature | https://lists.debian.org/debian-devel/1999/05/msg00367.html | CC-MAIN-2017-22 | refinedweb | 305 | 71.24 |
02, 2021
Release 1.7.2
Anthos clusters on bare metal release 1.7.2 is now available. To upgrade, see Upgrading Anthos on bare metal. Anthos clusters on bare metal 1.7.2 runs on Kubernetes 1.19.
Fixes:
- Fixed CVE-2021-25735 that could allow node updates to bypass a Validating Admission Webhook. For more details, open the Anthos clusters on bare metal tab of the GCP-2021-003 security bulletin.
- Resolved the
bmctl snapshotcommand failure when the user creates a custom cluster namespace omitting "cluster-" prefix from the cluster config file. The prefix is no longer required for a custom cluster namespace.
- Added webhook blocks to prevent users from modifying control plane node pool and load balancer node pool resources directly. Control plane and load balancer node pools for Anthos clusters on bare metal are specified in the cluster resource, using the
spec.controlPlane.nodePoolSpecand
spec.LoadBalancer.nodePoolSpecsections of the cluster config file respectively.
- Fixed the cluster upgrade command,
bmctl upgrade cluster, to prevent it from interfering with user-installed Anthos Service Mesh (ASM).
Functionality changes:
- Updated the
bmctl check snapshotcommand so that it includes certificate signing requests in the snapshot.
- Changed the upgrade process to prevent node drain issues from blocking upgrades. The upgrade process triggers a node drain. Now, if the node drain takes longer than 20 minutes, the upgrade process carries on to completion even when the draining hasn't completed. In this case, the upgrade output reports the incomplete node drain. Excessive drain times signal a problematic with pods. You may need to restart problem pods.
- Updated cluster creation process,
bmctl create cluster, to display logged errors directly on the command line. Prior to this release, detailed error messages were only available in the log files.
Known issues:
- Node logs from nodes with a dot (".") in their name are not exported to Cloud Logging. For workaround instructions, see Node logs aren't exported to Cloud Logging in Anthos clusters on bare metal known issues.
For information about the latest known issues, see Anthos clusters on bare metal known issues in the Troubleshooting section..
April 30, 2021
Anthos clusters on bare metal release 1.7.1 is now available. To upgrade, see Upgrading Anthos clusters on bare metal. Anthos clusters on bare metal 1.7.1 runs on Kubernetes 1.19.
Functionality changes:
- Customers can now take cluster snapshots regardless of whether the admin cluster control plane is running. This is helpful for diagnosing installation issues.
- Deploying Anthos clusters on bare metal with SELinux is now fully supported on supported versions of Redhat Enterprise Linux. This applies for new installations of Anthos clusters on bare metal cases only.
- User cluster creation with
bmctlsupports credential inheritance from the admin cluster by default. Credential overrides for the user cluster can be specified in the config file during cluster creation.
Fixes:
- (Updated May 12, 2021) Fixed CVE-2021-28683, CVE-2021-28682, CVE-2021-29258. For more details, see the GCP-2021-004 security bulletin.
- Fixed potential stuck upgrade from 1.6.x to 1.7.0. The bug was caused by a rare race condition when the coredns configmap failed to be backed up and restored during the upgrade.
- Fixed potential missing GKE connect agent during installation due to a rare race condition.
- Fixed issue that prevented automatic updates to the control plane load balancer config when adding/removing node(s) from the control plane node pool.
- Addressed problem with syncing NodePool taints and labels that resulted in deletion of pre-existing items. Syncs will now append, update, or delete items that are added by taints and labels themselves only.
Known issues:
- Upgrading the container runtime from containerd to Docker will fail in Anthos clusters on bare metal release 1.7.1. This operation is not supported while the containerd runtime option is in preview.
bmctl snapshotcommand fails when the user creates a custom cluster namespace omitting
cluster-prefix from the cluster config file. To avoid this issue, the cluster namespace should follow the
cluster-$CLUSTER_NAMEnaming convention.
For information about the latest known issues, see Anthos on bare metal known issues in the Troubleshooting section. on bare metal 1.7.0 is now available. To upgrade, see Upgrading Anthos on bare metal. Anthos on bare metal 1.7.0 runs on Kubernetes 1.19.
Extended installation support:
Added requirement for Anthos clusters on bare metal connectivity with Google Cloud for install and upgrade operations. As of 1.7.0 preflight checks will check for connectivity to Google Cloud, enabled APIs, and permissions for service accounts. Existing clusters need to be registered in Google Cloud before upgrading. The connectivity checks are not overridable by the
--forceflag. For details, see the cluster creation and cluster upgrade documentation.
Added support for installing Anthos clusters on bare metal on OpenStack. For configuration instructions, see Configure your clusters to use OpenStack.
Added support for installing Anthos clusters on bare metal, using a private package repository instead of the default Docker APT repository. For instructions and additional information, see Use a private package repository server.
Removed installation prerequisite for setting Security-Enhanced Linux (SELinux) operational mode to be permissive. The related preflight check has been removed, as well.
Removed installation prerequisite for disabling firewalld . The related preflight check has also been removed. For information on configuring ports to use firewalld with Anthos clusters on bare metal, see Configuring firewalld ports on the Network requirements page.
Updated requirements for installing behind a proxy server and removed restriction on system-wide proxy configurations. For a detailed list of prerequisites, see Installing behind a proxy.
Improved upgrade:
Updated cluster upgrade routines to ensure worker node failures do not block cluster upgrades, providing a more consistent user experience. Control plane node failures will still block cluster upgrades.
Added
bmctlsupport for running upgrade preflight checks.
bmctl check preflightwill run upgrade preflight checks if users specify the
--kubeconfigflag. For example:
bmctl check preflight --kubeconfig bmctl-workspace/cluster1/cluster1-kubeconfig
Updated user cluster lifecycle management:
Added support in
bmctlfor user cluster creation and upgrade functions.
Improved resource handling. Anthos clusters on bare metal now reconciles node pool taints and labels to nodes unless the node has a
baremetal.cluster.gke.io/label-taint-no-syncannotation.
Enhanced monitoring and logging:
Preview: Added out-of-the-box alerts for critical cluster metrics and events. For information on working with alerting policies and getting notified, see Creating alerting policies.
Added support for collecting ansible job logs in admin and hybrid clusters by default.
Expanded support for newer versions of operating systems:
- Added support for installing Anthos clusters on bare metal on Red Hat Enterprise Linux (RHEL) 8.3 and CentOS 8.3.
Functionality changes:
- Added support for configuring the number of pods per node. New clusters can be configured to run up to 250 pods per node. For more information about configuring nodes, see Pod networking. You can find additional information for configuring pods in the cluster creation documentation.
- Preview: Added support to use containerd as the container runtime. Anthos clusters on bare metal 1.6.x supports only Docker for container runtime (dockershim). In 1.7.0, Kubelet can be configured to use either Docker or containerd, using the new
containerRuntimecluster config field. You must upgrade existing clusters to 1.7.0 to add or update the
containerRuntimefield.
- Added support for more load balancer
addressPoolentries under
cluster.spec.loadBalancer.addressPools. For existing
addressPools, users can use
cluster.spec.loadBalancer.AddressPools[].manualAssignspecify additional
addressPoolentries.
Known issues:
Under rare circumstances,
bmctl upgrademay become stuck at the
Moving resources to upgraded clusterstage after finishing upgrading all nodes in the cluster. The issue does not affect cluster operation, but the final step needs to be finished.
If
bmctldoes not move forward after 30 minutes in this state, re-run the
bmctl upgradecommand to complete the upgrade.
The issue is captured in the
upgrade-cluster.logfile located in
.../bmctl-workspace/<cluster name>/log/upgrade-cluster-<timestamp>. The following log entry shows how the failure is reported:
Operation failed, retrying with backoff. Cause: error creating "baremetal.cluster.gke.io/v1, Kind=Cluster" <cluster name>: Internal error occurred: failed calling webhook "vcluster.kb.io": Post "? timeout=30s": net/http: TLS handshake timeout
For information about the latest known issues, see Anthos on bare metal known issues in the Troubleshooting section. | https://cloud.google.com/anthos/clusters/docs/bare-metal/1.7/release-notes-1.7?hl=es-ES&skip_cache=true | CC-MAIN-2021-25 | refinedweb | 1,382 | 50.94 |
Hi all,
I have the following situation any help will be appreciated.
the user supplys two values a1 and a2, based on the value of a1 I show him a different form where one of the fields is populated. the user then edits it along with the rest of the form and hits submit.
i have successufully achieved to display the second form but with blank fields. Only when the submit button is pressed I see one of them populated. I am not sure where the problem is. Here's the relevant code of the two actions
A.jsp {
field a1;
field a2;
}
Aform.java{
prop a1;
prop a2;
}
Aaction.java {
ActionForward execute(..., ActionForm form){
Aform af = (Aform) form;
string a1 = ar.getA1();
string id = someotherclass(a1);
if (id == null) {
session.setAttribute("A1", a1);
request.setAttribute("A1", a1);
return (mapping.findforward("success");--> B.jsp
} else { do somthing else }
//=================
B.jsp {
b1<html:text
b2<html:text
}
Bform.java {
prop b1, prop b2
}
Baction.java {
ActionForward execute(...form) {
Object obj = session.get("A1");
String var = (String)obj;
String val = someclass(var);
Bform bf = (Bform)form;
bf.setB2(val);
.......
.......
return (success/failure)
}
}
Action Chanining, Help (3 messages)
- Posted by: ahmed tantan
- Posted on: February 26 2003 12:49 EST
Threaded Messages (3)
- Redirecting to jsp by brad gallagher on February 27 2003 03:52 EST
- Redirecting to jsp by ahmed tantan on February 27 2003 12:01 EST
- Actions Handling Different Operations by brad gallagher on February 27 2003 09:51 EST
Redirecting to jsp[ Go to top ]
You say your mapping is to B.jsp?
- Posted by: brad gallagher
- Posted on: February 27 2003 03:52 EST
- in response to ahmed tantan
>>return (mapping.findforward("success");--> B.jsp
If you are mapping to B.jsp, the execute method in your Baction.java will not be called until you submit B.jsp. If you want to trigger a call to the execute method in Baction.java, you will need to set your mapping to do so. One way is to modify "success" in your struts config - change it from B.jsp to B.do (or whatever XXX.do is associated with your action.)
Redirecting to jsp[ Go to top ]
Brad,
- Posted by: ahmed tantan
- Posted on: February 27 2003 12:01 EST
- in response to brad gallagher
thank you again for your help.
well, if the mapping were to change to xxx.do to invoke the execute method of Baction.java so B.jsp would be semi-populated then the user will have to fill out the rest of the form and hits submit. My question is how would the execute method of Baction perform two disttinct operations. view info from db to jsp and read data from jsp to db? or do I have to use a different Action class for each operation?
Actions Handling Different Operations[ Go to top ]
Like everything else in struts, it seems there are several ways to handle this :)
- Posted by: brad gallagher
- Posted on: February 27 2003 21:51 EST
- in response to ahmed tantan
If you need an Action that can handle different operations, one approach might be to extend DispatchAction rather than Action. You could also use different action classes for each operation. I've seen both approaches used and they both worked. | http://www.theserverside.com/discussions/thread.tss?thread_id=18083 | CC-MAIN-2015-32 | refinedweb | 550 | 67.86 |
What
>>52622464
Thanks, I thought it'd be nice if we started posting programming-related news pieces in the OP
Is it worth implementing a 2d vector library using SIMD?
>>52622511
maybe if it will operate on many vectors at once
git rebase master
>>52622511
no
i fell for the linux meme
what do i do now?
>>52622461.
>>52622984
You install a proper desktop environment like Unity.
>>52622459
This is the output I'm getting$ ./main Hello
hELLO, wORLD
Hello, World
$
Where's the extra newline?
I don't understand what you're doing. You have two arrays ("ptoc" and "ctop") but you only initialise one. You also have code after the parent child if. Won't both try to execute it? But it seems to be meant for the parent only. I'm not familiar with pipe so maybe that's why, but I don't think that's right.
M$ just open-sourced their machine learning framework:
Where were you when F# won the functional language bowl?
>>52622459
>>52623036
Oops, ignore the argument to main, I was trying pass the argument through the pipe, but it broke everything. That output is from your unmodified code though.
>>52622677
>>52622956
Okay, asking since I've started working on a 2D phyiscs library and then I have an idea for a small 2D game I'll make just for learning purposes. But that begs the question, is a 4d Vector the only one worth SIMDizing or would it be worth it for a 3D one if you add a padding?
>>52622984
install emacs and go make something
>>52622954
oh my.
>>52622984
gcc g++ python cmake and be free
>>52623135
>>52623113
i wouldn't bother with SIMD at all unless you're a very advanced programmer and you're working on something that could really benefit from a slight boost in performance.
>>52622511
>>52623291
Even then there are tons of extremely optimized and extremely well-tested libraries that a single man in their basement couldn't possibly hope to match, let alone surpass, so there's no point.
>>52623361
>a single man in their basement couldn't possibly hope to match, let alone surpass, so there's no point.
Damn if that ain't the story of life. Time to drink another beer.
>>52623361
That is true, but I like I said, I'm just doing this for learning purposes. It's not like I'm actually trying to outdo any of these libraries
>>52623460
still, for learning purposes i think you should focus on more conventional things first and foremost because SIMD is kinda shitty
Should i read sicp before learning C?
>>52623460
For learning purposes, a GPU-accelerated tensor library is a lot more interesting, impressive, and useful. You can, in fact, alone in your basement, make a new such library that's better than the competition's because of the wide variety of optimizations and code generation techniques that can be tried.
>>52623558
Absolutely.
>>52623558
It won't help you to become a better C programmer. Go ahead and read it for pleasure if you want though.
>>52623595
>It won't help you to become a better C programmer.
Spoken like someone who truly never even tried to read SICP!
>>52623609
but it won't help you to become a better C programmer. Go ahead and read it for pleasure if you want though.
What's the best language to learn scripting in? I know c++ and want to learn how this shit works
>>52623609
It'll help you become a better programmer
Not a better C programmer
>>52623595
>>52623624
Then what will SICP make me better at?
made a bot crawler for a forum to parse redirect links and feed me back the real file host links. works gud.
i would post it but i'd get banned
>>52623558
Absolutely! At a minimum, there are some great video lectures on youtube as well.
DrRacket is a good IDE to use to work through the examples.
>>52623627
>scripting
Define "scripting", what platform you'll be working on and what your going to be scripting.
>>52622420
Reposting from the Ruby thread.?
>>52622420
Don't start putting links in the OP.
It's a slippery slope.
A book which teaches Scheme and functional programming paradigms cannot be expected to be particularly useful for someone trying to learn a systems language, which is not typically programmed in the same ways as Scheme.
>>52623796
How the fuck do you sit through lectures for 3.5 consecutive years and retain nothing?
>>52623796
Depending on where you live like 80% of all programming jobs are C# or Java, choose one.
>>52623822
No job = no practice = no retention.
>>52623823
Iowa, so Java will be easiest to resume I suppose. Thank you very much.
>>52623822
Just because you went to class doesn't mean you learned anything.
>>52623822
Extremely easily? you realize that after completing a undergraduate degree you know next to nothing about your subject right?
For someone with almost zero prior exposure to programming, is the following a logical learning progression
>Ruby to Python to Java to C++
for someone with an interest in game development and freelance oddjobs?
>>52623806
Almost every general on every board has pastebin, resources and on-topic articles in the OP
thinking of designing and building a basic circuit and software to interface PIC chips with USB.
It would be pretty basic but I think there'd be a lot of really low budget projects that could use it especially for control circuits.
DO you think this is a good idea, dpt?
>>52623921
>he studied a worthless degree
>he didn't study on his own time
>he has no motivation to think about his subject outside of class
i am literally lmaoing at your life
>>52623935
C > C++/Java > Python/Java
>>52623935
>freelance oddjobs
PHP
>game development
C#/C++
>>52623981
Is C not made obsolete by C++, or are you suggesting it purely as a stepping stone?
Find a flaw.int to_digit(const char c)
{
return (int) c - '0';
}
>>52623646
SICP will probably make you a better Scheme programmer, and perhaps also a better programmer in general... at least when it comes to abstractions.
Systems programming is a different ball game though. A good systems programmer must be able to think both about higher level constructs (to make larger programs easier to maintain) and also lower level constructs (to make them not run like dogshit despite not needing an interpreter/VM).
C is a systems programming language, and is used for tasks where performance is of upmost importance. Many practices common in Schem may not be the best choices for a C programmer.
>>52624039to_digit('a');
>>52624039
No colon after variable definition.
>>52624025
C is it's own programming language.
C++ adds a bunch of things and it's own way of doing things that look absolutely nothing like idomatic C.
I recommend C because it's a small language with very few abstractions that get in the way of learning the basics of programming.
It also helps you to appreciate the massive featureset of modern languages even more because C has almost none of them.
>>52623935
MY SIDES
>>52624068
Scheme is a simple language with few abstractions that doesn't get in the way of learning.
C is "how to learn nothing in 20 hours of debugging": the language.
>>52624087
>C is "how to learn nothing in 20 hours of debugging": the language.
In 20 hours of C, you'll learn how sweet real strings are.
>>52623935
literally laughed out loud
>python -> C -> Java -> C++
or if you have a brain
>C -> Java -> Python -> C++
When should I use structures instead of classes in C++?
Working on poor man's TCP for my MMO.
I realized that implementing reliable and in order at the same time would require a bit of refactoring, but then I also couldn't think of a situation in which I would need both at the same time, so for now I'm going for only one or the other at a time.
Screenshot from a while ago
>>52624049
Will reading SICP as my first programming book hurt me in the long run?
>>52623958
And /dpt/ is not going to be one of those generals.
>>52624118
Or, if you're not clinically retarded,
>scheme -> OCaml
>>52622420
>>Andrei continues his goal of being able to use the standard library without GC
lmfao this should already have been the case
>>52624127
Wew
>>52624119
For C-related backwards compatibility and POD.
>>52624119
Same shit in C++.
>>52622420
>>C# gains new features such as non-nullable reference types, local functions and immutable types
more pointless "features"
>>52623958
fuck off
/dpt/ isn't one of your consumerism generals that spams links to buy things in the OP.
In here, we spam traps and umaru.
>>52623935
java -> C++
and THEN maybe python if you want to do low-quality freelance oddjobs. absolutely don't start with python with ruby
>>52624057
I am not sure what language you are thinking this is, but we do not do that in C.
>>52624068
These are the reasons why I recommend C over C++ to beginners. That and not wanting to have to have to explain what the fuck cout is, because that involves also explaining what an object is, what a method is, and what operator overloading is. It's much easier to explain printf. It's a function, and it looks like a function call just from the syntax.
>>52624188
Well, better late than never
>>52623806
>>52624173
>not wanting to know what's going on in the programming world
>>52624206
Sort of. The more interesting features are the record types and pattern matching that were discussed in part one of the C# 7 news.
>>52624197
almost shat myself laughing
procedurally generated robots are my plan for February. The placeholders will have to do for now.
>>52624263
The programming world moves very slowly.
If a new language comes out tomorrow and everyone decides to back it, it will take at least 5 years before it's in a state in which you could recommend it to people.
>>52624283
What's it done in?
>>52624218
There is no "we", it's just you.
>>52624302
plain C# for the server
C# and Unity for the client
>>52624300
No, you don't understand, everybody will be writing everything in Rust by the end of next week, HN told me so.
>>52624320
I puked.
>>52624320
Nice. C# is goat.
>>52624332
this
have fun being stuck with that crayon shader unless you cough up some more money for the jew
>>52624333
>tripfag approved
Confirmed for garbage.
>>52624349
You can write custom shaders in Unity
>>52624332
I've implemented graphics engines in DirectX and OpenGL, and don't care to again
>>52624333
Damn right
>>52624349
If only that was the only issue. The ridiculously low performance and high resources usage is the main issue really. No matter how low-tech you make your game, people need $1000 machines to get more than 20 FPS.
>>52624349
>have fun being stuck with that crayon shader unless you cough up some more money for the jew
Unity pricing changes fucking ages ago m8
>>52624371
>You can write custom shaders in Unity
You aren't free to use whatever shader you want unless you buy it.
Any traps want to be my gf?
>>52624391
see
>>52624380
I've spent the last 3 weeks learning to program and I haven't done anything else.
What anime should I watch?
>>52624371
>>52624380
ok i guess you can have custom shaders without any special license, must have been outdated info. most unity games still look like ass though
>>52624378
this. unity is so shit in so many ways. just because it's a game engine doesn't mean you should use it. especially for 2d games you don't really gain anything at all as a developer
>>52624417
Developers use Unity because it brings development time and costs to sane levels compared to engines like Unreal and cry.
>>52624411
Spice and Wolf.
>>52624459
agdg kids use unity because they're too afraid and too lazy to learn how to program for real
>>52624463
I just started watching that with my boyfriend. Spooky.
>>52623115
I installed emacs a few months ago but now all I do is work on my emacs config. Fucking emacs.
>>52624476
Or to use a non-shit engine anyway.
Give this code in C++:A a = new A();
delete a;
a = new A();
What's being called on the third line? The normal constructor or copy constructor?
Also, I have read about the rule of three in C++, but should I bother with copy constructor, etc if I don't plan on using the object in any way that would require it?
>>52624459
Cry and unreal have lesser royalties and perform a fuckload better. That's called "bringing the development time and costs to sane levels". Devs all tried unity once and only the incompetent (mostly indie) retards stuck with it.
>>52624578
Both.
>>52624580
>unreal
Enjoy your 400mb binary for rendering a triangle.
>>52624580
>perform a fuckload better
>bringing down development time
Shitfirstyearcsstudentssay.jpg
>>52624616
Lrn2compact, noob. And nobody cares about a 400mb that renders a triangle because videogames aren't simply the rendering of a triangle.
>>52624629
Found the paid unity shill.
>>52624637
>missing the point
A binary that size for a simple 2D game is absurd.
>>52624648
how did you found me?
>>52624702
There's no reason to use a 3d game engine for a 2d game.
As a filthy wretch that only knows JavaScript, what are classes?
I know they're not in JS, but I'm reading about closures and the author mentions them as an explanatory aside aimed at users coming from other languages.
Alright, a bit of a strange question here
I wanted to know, when it comes to constructors (or things in general, including return types), when variables are set are they an alias of the previous variable, or are they just set to what that variable was at that time?
Take the following code.public class StupidShittery {
int foo = 4;
void main() {
int integer = 4;
Foo(integer);
integer = 5;
}
public int Foo(int integer)
{
this.foo = integer;
return integer;
}
}
Would foo's value change after integer's value is changed, or would it stay at 4? Someone was trying to convince me otherwise. If this isn't true, does anyone know what they might have been trying to say? I'm not sure of anything that reflects the changes of variables live unless the variables themselves are static.
>>52624760
That's irrelevant since Unreal is marketed as having 2D support. Also, I hope you realise most 2D game engines are just OpenGL/DirectX with an orthographic projection.
So if I'm reading this documentation correctlypublic synchronized void createNewThing()
{
}
will NEVER run concurrently with another of the same type, even if they're called at the exact same time, yes?
>>52624840
Use of opengl != 3D. Support for 2D != optimal for 2D. After all, unity is marketed as a game engine.
>>52624767
>I know they're not in JS
Yes they are.
>>52624840
>most 2D game engines are just OpenGL/DirectX with an orthographic projection
so there is not much of a reason whatsoever to use any game engine but your own for a 2d game
p = np
>>52624888p = p/n
>>52624888false
>>52624854
>
>ES6
Shiny new toys
I've heard a lot of talk about how these classes are not "true" classes and are only going to make shit a nightmare in actuality.
What is a class?
>>52624903
>>52624904
trips don't lie niggas
How long to learn C++'s nuances and subtleties well enough to work in UE4?
>>52624935
>How long to learn C++'s nuances and subtleties
Several human lifetimes.
don't do this
>>52624913
>In object-oriented programming, a class is an extensible program-code-template for creating objects, providing initial values for state (member variables) and implementations of behavior (member functions or methods).
>>52624794
foo will be changed because the method has access to it.
What they probably meant to say is that you can't modify the value of a parameter passed to the method if the method doesn't already have access.
This is because primitives are passed by value, not by reference.
You'd need an example with multiple classes to show this.
Also, your constructor should have the same name as your class i.e. StupidShittery.
As far as I know, you cannot return from a constructor, since you are already returning the newly created object itself.
>>52624956
OO is much maligned, from what little I've read. Is there much merit to that kind of viewpoint? What are the pros of OO-programming?
>>52622459
I think your final call to read is reading the newline as part of the string, so that when it calls the last printf, there are two newlines -- one that was read with the string, and one that you included in the printf call. See:mini$ ./a.out
hELLO, wORLD
0123456789012345678901234
0123456789012345678901234
mini$
mini$ ./a.out
hELLO, wORLD
012345678901234567890123
012345678901234567890123
mini$
>>52624996
It maps easily to many real-world concepts
>>52624996
>OO is much maligned, from what little I've read.
on /g/, by memers.
wew
Not that many, but I still haven't learned to commit after every change that doesn't break the build.
>>52624968
Sorry, that code was a bit disgusting because it was just pseudocode; here's a better examplepublic class StupidShittery {
int foo = 4;
public StupidShittery() {
int integer = 4;
Foo(integer);
integer = 5;
}
public int Foo(int integer)
{
this.foo = integer;
return foo;
}
}
So foo would end up being 5, even though Foo would return 4 because int foo isn't private?
>>52625090
wew pt. 2
>>52624578
Normal constructor is called in both cases. But the type of a should be A* not A. Also your syntax on the first line is weird, you don't need the empty parens.
struct foo {
char * const bar;
};
int main(void)
{
struct foo *x = malloc(sizeof (struct foo));
x->bar /* how to assign? */
}
Anyone got tips/recommendations for vb.net+sql?
I have to learn that shit for the job. Looked at it a bit and it reminded me that I am not very smart.
Hi /g/, I've been trying to learn C++ by working through "Accelerated C++ practical programming by example" and I'm having a bit of trouble.
In section 3.2.2 it stores the length of a vector called "homework" in a variable called "size" using the code:typedef vector<double>::size_type vec_sz;
vec_sz size = homework.size();
When I try to compile the code after adding this it returns the error:
>'vector' does not name a type
>typedef vector<double>::size_type vec_sz;
I am using Notepad++ and the Mingw-w64 compiler, which has been working so far. Any idea what could be causing this?
>>52625212
All you have inside of struct foo is a pointer that can't be changed (because it's const). Normally malloc will give you back memory at some random location, but since your pointer can't be changed, you can't set it to the location returned by malloc. So unless you can find some way to force allocation to happen at a specific address (i.e. your const pointer), it's pretty useless.
>>52625298
either addusing namespace std;in your file or usestd::vector
>>52625298
Looks like it should work, maybe the mingw compiler is very old? Works for me with VC2010. Maybe you could try just:vector<double>::size_type size = homework.size();
and see if that works.
>>52625365
Thank you, it worked.
I had using std::vector, but I guess I must have deleted it.
Thanks again.
>>52625212
Does it need to be on the heap?
>>52625504
I put ur mom on the heap ;)
>>52625556
In any case, there's a trick to it (which I was not aware of).
Memcpy'll do it.
i got a 6 figure job a month ago but it's shit.
our main app is a "c# web app", but it's actually:
-javascript
-stored procedures
literally everything is one of those two. the most SIMPLE select in the world is done via a stored procedure (select x from y where [email protected]), c# calls it and passes it to the view or javascript.
the C# does nothing but call stored procedures (they're scared of C#)
the javascript is the worst in the world, it's jquery with no other organization, not even javascript objects. it takes up pages and pages of shit with global variables, and huge swathes of it are copy and pasted across different html views.
i know so much more, and how to do everything better, but i don't know how receptive they will be.
should i stay for the money or just find another job even if it pays less?
>>52625667
Take them out for a nice date, and then fuck them hard with ASP.NET. They'll learn to love it.
>>52625667
This sounds interesting actually. How many different stored procedures are there?
>>52625152
nice m8. what are you working on?
It's coming along kind of nicely now.
>>52625880
500
>>52625900
>>52625667
most software is shit just stay for the money and don't antagonize them too much with your own ideas
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
int count_words(char* line){
char *w_token;
int words = 0;
w_token = strtok(line, " ");
while(w_token != NULL && w_token[0] != '\n'){
words++;
w_token = strtok(NULL, " ");
}
printf("%d\n", words);
return words;
}
int main(int argc, char *argv[]){
FILE *f;
char file_name[80];
int words, lines;
printf("Input a file name: \n");
scanf("%s", file_name);
f = fopen(file_name, "r");
if(f == NULL){
printf("Error opening file\n");
return 1;
}
words = lines = 0;
while (1) {
char buffer[80];
lines++;
fgets(buffer, 80, f);
words += count_words(buffer);
if (feof(f))
break;
}
printf("%d words and %d lines total\n", words, lines);
fclose(f);
return 0;
}
this program works well to count the number of words and lines in a file except for the one case where only the last line is empty. In this case for some reason it counts an extra word. If there are more than one empty lines at the end it won't count any extra words and will work correctly. It also doesn't count empty lines that are not a single line at the end of the file as words.
why
>>52626112
I know, I have 9 years of experience, this is a whole new level of shit for me though.
>>52626016
holy shit
>>52622420
>Andrei continues his goal of being able to use the standard library without GC
EXCELLENT
>>52625105
Whaaaaat
No, foo will be set to integer, which is 4, so it stays the same. Why do you think it would be 5?
>>52624578
> Give this code in C++:
That won't compile; the value returned from new is a A*, not A.
> What's being called on the third line? The normal constructor or copy constructor?
You don't get that far, because your code won't compile. If you fix the type, it's the normal constructor. There's no copy being made (you're copying a pointer, and pointers don't have constructors).
> should I bother with copy constructor, etc if I don't plan on using the object in any way that would require it?
If the default implementation won't work, either implement it, "delete" it (C++11), or make it private. Don't rely upon it not being called.
>>52625105
foo = 4 because pass by value, not pass by reference
wtf is going on here? I rebased the program and it still won't work.
>>52626358
Why are you using XP
>>52626374
because it's a VM and it runs fast
>>52625667
The JS part sucks, but using stored procedures for everything is a perfectly reasonable approach. It's certainly better than letting back-end web code execute raw queries.
How do I write a C++ for loop in C?
I'm trying to convert one of my programs to C, but I never learned the languagefor (int count = 0; count < size; count++)
cout << array[count] << " ";
cout << endl;
>>52626429{
}
>>52626429
use printf();
>>52626429
you should be using printf and \n
>>52626445printf("array[count] ");would this print the space as well?
What are my nimfriends up to?
>>52626389
Then go catch it!
>>52626494
yeah. just add \n to the end for a newline.printf("array[count] \n");
>>52626494
of course
>>52626494
shiggy doodely
>>52626514
it's better than window 8
I don't have a key for W7 so I'm stuck with XP. I really haven't had any problems with it and I run a shit load of hardware and software analysis programs on it.
>>52626551
>I don't have a key for W7
daz loader my nigga
>>52626551
Install Gentoo
>>52626538
>>52626532
>>52626531
thank you
>>52626163
debug my code /g/
>>52626572
probably will in the future but I'm satisfied with the setup right now. Even Pspice works on it ffs lol.
>>52626494
>>52626600
Err, might want something likeprintf("%d \n", array[count]);
Or whatever format specifier would be applicable to your situation.
>>52626600
>>52626659
this is the correct thing right
Starting with Heap h, I want to perform a level order traversal, save it in the string dynArray, then return it. But my program keep crashing.
...help?struct Heap {
string name;
Heap *left = nullptr;
Heap *right = nullptr;
};
The declaration of the function is: */
string *printLinear(Heap h);
string *printLinear(Heap h) {
Heap *root;
// Base Case
if (root == nullptr) return dynArray;
// Create an empty queue for level order tarversal
queue<Heap*> q;
// Enqueue Root and initialize height
q.push(root);
while (1)
{
// nodeCount (queue size) indicates number of nodes
// at current lelvel.
int nodeCount = q.size();
if (nodeCount == 0)
break;
// Dequeue all nodes of current level and Enqueue all
// nodes of next level
while (nodeCount > 0)
{
Heap *node = q.front();
cout << node->name << " ";
q.pop();
if (node->left != nullptr){
q.push(node->left);
}
if (node->right != nullptr){
q.push(node->right);
}
nodeCount--;
}
cout << endl;
}
return dynArray; //I have yet to initalize the heap into an array based on level order transversal
What's wrong with this js function?function byTagName(node, tagName) {
var found = [];
tagName = tagName.toUpperCase();
if(node.nodeName==tagName){
found.push(node);
}
for (var i = 0; i < node.childNodes.length; i++) {
var child = node.childNodes[i];
if (child.nodeType == document.ELEMENT_NODE) {
found.concat(byTagName(child,tagName));
}
}
return found;
}
>Illinois State doesn't let me major in InfoSec and major or minor in CS
Feels bad man
>>52626724
yes if you're printing ints
>>52626767
>InfoSec
>>52626806
fite me
ain't no hackers getting by me
>working on porting and fixing some embedded code to another architecture at work
>the old developer is gone and the code he left behind is disgusting
>tons of "clever" hacks and magic numbers, with no documentation
>stumble on another hack he wrote today which saved 1 (one) byte of memory
>fix it to what it should have been
>2 previous bugs that have been around for 10 years are magically fixed
I get why you'd want to save on memory, but I don't understand why embedded developers feel the need to make everything hacky and sketchy.
>>52622420
1. Search engine and NLP-based ranking system for a startup I work with.
2. External filesystem application (and interface) achieving access control through modern public key cryptography variants
3. Implementation of a recursive divide-and-conquer heuristic for approximating maximum clique search in arbitrary graphs
4. Personality-trained conversational agent using deep, sequential recursive neural network chains
5. Example implementation of a generalized semi-self-supervised neural network architecture capable of differentiating to function in multiple domains
Needless to say, I'm becoming stretched a little thin.
>>52626806
What's wrong with infosec
>>52626163
>>52626601
The line with "break" starts with a tab and then a space, the tab is being counted as a word because of how you wrote count_words.
For the lines, you're getting a bad count because of where you're checking for eof. Try it like this:while (1) {
char buffer[80];
fgets(buffer, 80, f);
if (feof(f))
break;
lines++;
words += count_words(buffer);
}mini$ echo test.c |./a.out
Input a file name:
108 words and 55 lines total
mini$ wc -w -l test.c
55 108 test.c
mini$
>>52626924
>infosec
to understand security you need to have a deep understanding of mathematical encryption, good luck not knowing anything in the future
>>52625667
>6 figure job
>code isn't to your liking
>actually considering quitting because of this
>>52627040
uh.... i have a lot of experience and good programmers are hard to find. it's not like i couldn't find another one
>>52627016
There are many facets to information security beyond encryption.
>>52627059
If the main application is in shitty c# I'm pretty sure a university student intern would do just as well
>>52627061
Lol no there isn't
>>52624051
This
>>52627061
infosec is like 1 notch above web dev there is not much depth to the field
>>52627070
the main app isn't in C#, c# is just the glue. it's all stored procedures (T-sql).
>>52627116
What is the mightiest subfield of CS to go into then?
posted this yesterday to /no reply/
is a career as a programmer possible without a degree? i'm learning web dev and ruby and much prefer the ruby work to the web dev.
>>52625667
leave and give me the job instead
>>52627138
Computer Engineering
>>52627138
Chinese hacker, North Korean hacker, Russian Hacker > US Hacker > EU Hacker
>>52626727
Assuming there's more code you haven't posted (i.e. I don't see dynArray declared anywhere). But this looks like a problem:Heap *root;
...
q.push(root);
Since you haven't initialized root, it could be pointing to anything. There's no actual Heap structure anywhere yet. So later on when you get this out of the queue and try to get name/left/right out of it, you're going into random memory locations.
>>52627145
yes BUT you have to make things. Instead of having on your resume "graduated from X with Y degree", you need to have *legit* project links. Like, live links that take you to a page that you made that does something. It doesn't have to look nice. Hell, it can be some ascii-art game you made with javascript. Just showing that you actually made something, and that it works, and that it's live, is important. What I'd recommend is experimenting with making your own web server. This'll help you understand the many layers. Also the book Eloquent Javascript (google it) is available for free and is an amazing resource. I used to hate javascript, but after reading that I loved it.
>>52623035
>proper desktop environment like Unity
>>52627201
What does this mean? Can you give me an example project? I'm currently in my third year of undergrad and my major is described as "Computer Science and Engineering", but I don't really know what the difference is between the "computer science" part and the "computer engineering" part, and I don't know which category the code-monkeying I do falls into.
>>52627217
awesome, thank you anon
>>52627263
NP and when I mention setting up your own web server, it's not all that complicated. To get you started, all you really need is an old laptop and the ability to google "how to set up my own custom web server".
Also, although you don't necessarily need a degree, textbooks are still really nice resources, as well as online courses. I'd recommend learning about the ins and outs of networking, as it fills in a lot of the grey area. I really enjoyed the textbook "Networking: A top down approach". Also, data structures and algorithms are important too. If anything on this [] page doesn't look familiar, learn about it.
Godspeed my anonymous friend.
>>52627234
Are you B.Sc, B.Eng, B.A
B.Eng > B.Sc = B.A
>>52627234
Usually two separate majors. The engineering part has to do with building and programming hardware. Anywhere from microchips to whatever the fuck else as long as it's digital.
>Concurrent programming
>>52627317
>>52627319
The major are normally separate but my school (osu life) offers a combined major under the engineering department (so I'm B.Eng), even though the coursework is pretty much the same throughout all of these majors: [software dev, comp sci, comp sci & eng, comp eng, info tech]
>>52627215
Bless you my good man, you have saved me. I can finally stop crying.
Q: do government jobs stay away from technical interviews or no?
>>52627353
Also, anyone wanna recommend a good place to apply for a summer internship? I have an offer from cisco already, wondering if anyone knows of any more l33t places I could go to.
If I theoretically were autistic enough to write a program in machine language, how would I run it?
I need to compile/run simple c++ programs on android, assuming it's reasonably possible (cs lab requiring us to bring a laptop, I don't own a laptop).
>>52627663
Termux, but it'll be a major pain in the ass
>>52627663
get a cheapo laptop on craigslist
>>52627670
Thanks. Going to try that out.
>>52627796
And that's definitely a fallback. I'd rather not if I don't have to though. I really don't have any other use for a laptop.
>>52622420
I'm starting to not care about differences in languages that much as long as you have like static typing and some level of sanity and coherence
Whats more important is the free tools that the languages offer
I want UI toolkits, image and video processing, audio tools, statistical and numerical libraries, etc
And C++ is still the best language for that (even though a lot of its tools still kind of suck)
It feels like we have a new language every few weeks -- Rust, Swift, Nim, Crystal
Get back to me when you have a full featured set of tools for quickly creating applications without me having to roll my own libraries for everything from the ground up.
>>52627663
There are C++ ides on Android, just look on the app store. I can't imagine a CS lab would require you to bring your own laptop though, your school seems like its shit.
I'm using C++ to read a file line-by-line backwards.
So I use seekg(-1, file.end); where file.end is a streampos located at EOF, and file is an fstream.
I then do file.seekg(file.cur, -8), because the size of the line I want is exactly -8.
Then I do file.read(out, 8), where out is a char array of size 8.
And for some reason, char remain garbage data.
The fuck.
What am I doing wrong?
>>52624119
Use structs when you don't need or want data hiding. If something has no private fields, make it a struct. If it has at least one private field, make it a class.
>>52627670
I would like to second the fact that it is indeed a pain in the ass to use Termux. If you can get any chroot application to work (i.e. Lil' Debi, Linux Deploy, Debian Kit, etc...), definitely use that.
If one does have to make use of Termux, it is probably best to use an editor that isn't Vim or Emacs, as they run like dogshit on Termux.
Working on a variant utility for C++template <typename T>
using Maybe = variant<none, T>;
int main ()
{
auto var1 = Maybe<std::string>::make<0>();
auto var2 = Maybe<std::string>::make<1>("something!");
assert(var1.get<1>() == nullptr);
if (auto ptr = var2.get<1>())
{
assert(*ptr == "something!");
}
using V = variant<char, int, double>;
auto var3 = V::make<2>(4.5);
// auto var4 = V::make<3>("Compile error!");
assert(var3.get<0>() == nullptr);
assert(var3.get<1>() == nullptr);
assert(*(var3.get<2>()) == 4.5);
return 0;
}
Yes I know boost probably has a bloated one of their own but I just wanted to try out my C++-fu and template metaprogramming fuckery
Does anybody have any idea how to do stuff like this in C or C++? It looks like it's in 80x25 text mode, but I'm not sure.
If I need to use extended ascii, how do I do so?
>>52628030
Google "ncurses"
>>52628039
NO
ncurses is a fucking bitch and the API is horrible and it's super outdated and confusing.
Use "termbox" it's seriously great, SUPER simple API, very effective, very nice.
>>52628045
this, ncurses is overkill 90% of the time
just use termbox if you need to do some simple TUIs
>>52627974
Unrelated, but File::next_line has a memory leak.
>>52628045
It's also not portable.
>>52628112
You mean deleting the char, right? I suppose I should use a unique pointer in that case.
>>52627930
the idea of at least some of those new languages is that you'll be able to easily and efficiently interface with libraries from C/C++ and other languages while still being able to write the bulk of your application's code in a "better" language
>>52628155
Since you mentioned in a previous thread that the lines were of constant length, and you were supposed to read this in line by line, why not just use std::getline()? No intermediate char[] buffer needed.
>>52627074
InfoSec is literally everything and anything related to securing information. A large part of it relies on encryption, but the field of information security is far, far, wider than encryption.
>>52628297
Yeah, I tried that yesterday while experimenting with the I/O commands. It doesn't work either.
Actually, nothing seems to work and I don't seem to really have any way of verifying that anything I'm doing is actually having any effect.
>>52627942
>C++ ides on Android
Any anyone would recommend? I just tried two and the first one didn't seem to have keyboard support and I couldn't get the second to work right?
>>52628352
That's not a complete sentence. Try again.
>>52628352
It's great that you're doing research on your own, but try not to assume that everyone has read every book under the sun.
>>52628360
lets try that again
Guys is the book i want to get called Beginning C# Game Programming Ron Penton and Beginning C++ Game Programming Michael Dawson any good
>>52628368
o sorry, but if there is any books anybody thinks can benefit me i would gladly take it too.
>>52628390
check the gentoomen library, there's lots of great stuff in there. both are in there so you can try either one and see which looks better to you before making a purchase (if you want the hardcover)
>>52622954
>gnome 3
>adwaita
>Using Spring
>Getting this error when generating a pageFailed to instantiate [controller.deposit.Deposit]: No default constructor found; nested exception is java.lang.NoSuchMethodException: controller.deposit.Deposit.<init>()
>I'm not calling a default constructor, I'm calling it with arguments
Here's the page's controller@Controller
public class DepositController {
private String ip;
@RequestMapping(value = "/deposit", method = RequestMethod.GET)
public String depositForm(Model model, HttpServletRequest request) throws UnknownHostException {
ip = request.getRemoteAddr();
System.out.println("connection from " + request.getRemoteAddr());
model.addAttribute("deposit", new Deposit(ip));
return "depositSubmit";
}
@RequestMapping(value = "/deposit", method = RequestMethod.POST)
//Grabs the existing status from the model attributes
public String depositSubmit(Deposit deposit, Model model) throws UnknownHostException {
model.addAttribute("deposit", deposit);
return "depositResult";
}
}
I was looking into the error and some people said that this issue could be fixed by editing some XML file that I can't find, but all their examples had static constructors and static constructors are fucking pointless.
Does anyone know how I could solve this problem?
Why is spring so weirdly complicated with shitty documentation?
>>52624996
Only use inheritance for polymorphism (even then I prefer interfaces). If you want to reuse code then just use composition god dammit.
>>52626750
document.querySelector exists
>>52627212
To be honest if there is a 1 in one million chance to give birth to a super hacker that means china has a thousand of those.
>>52627974
lel something is clearly wrong with the way I set up the File object.
I wrote a minimal working example even featuring the more advanced C++14 crap I used--essentially a copy paste--and it works.
I don't know what I did wrong though. Pretty sure I did everything right.
>>52623035
this is bait
the function keys are ridiculously unresponsive
how should i map keys on a computer keyboard to musical notes?
>>52628462
>found answers from all over the ner
>every single fucking one has the solution to be to hardcode the constructor values in the XML
Why the absolute fuck is this a thing? Why would you have a constructor with static values? That's fucking retarded. Why doesn't spring just let you use constructors like a normal person?
Anyone know of decently featured GUI toolkits for C++ that aren't Qt? I want to see how they implement MVC.
>>52628634
>tfw you don't even change the object, just dick around with pointers and everything suddenly works
Hey guys, I'm learning JavaScript instead of C! Great decision huh!
>>52628917
>>52628928
Is it easy to make your browser's file manager to be ranger, /g/?
I want to select my memes in a terminal.
>>52624039
You need to check if the input is only from ranges '0' to '9' like the other anons said.
Also, that function is nice and all, but a function that could translate multiple char digits to integers would be more useful overall.
>>52628928
Dumb Reddit Fish poster
>>52629044
you mean atoi?
Will I get bullied for using shell scripts to build my C/C++ projects instead of Makefiles?
>>52629109
yes kill yourself
>>52629109
why would you even do that
>>52629109
No. I do that too.
Build systems are just ricing shit that distract you from getting shit done
>>52629109
Dunno about "bullied", but expect people to point out what a dumb idea that is.
>>52629157
in today's sensitive touchy politically correct climate, simply disagreeing with someone is considered "abusive" bullying behavior.
>>52629153
but you're just going to end up with something literally nobody except you can compile
Hey guys, pretty new to programming.
I picked python because i got free access to Automate the Boring Stuff with Python on Udemy and all that and I'm almost done going through and doing all the practices + projects at the end.
I'll be going to university next year for programming, and I sort of wanted to get a head start. I want to work my way towards being able to program a basic emulator such as CHIP8 and work my way to NES and then a GBC emulator. After getting used to Python a bit more, I want to start learning C++ to start working my way towards this goal of making a GBC emulator.
What I want to ask is if C++ is a logical step? Should I be doing C next first and then easing into C++? For my emulator project (if i ever get around to it) I would also probably have to learn assembly and what not of the chips for each of the systems. How difficult compared to stuff like Python and C++ is it to learn assembly languages?
Sorry for the bunch of questions, but I'm really interested in programming as a hobby and a field and I'm pretty excited at the moment.
Also, I don't want to build the emulators to have them as a 100% ACCURATE SUPER BEST EMULATORS COMPETING WITH THE TOP EMULATORS type deal, I just want to understand how to write them and understand them better and get a better understanding of how the consoles I liked work at a base level. I eventually want to be able to write hobby code for larger emulators like the WIP WiiU and 3DS emulators.
Thanks to anyone who reads/replys to this.
>>52629241
Skip python and move directly to C.
It'll make you appreciate python and C++ more.
>>52629241
C is the logical next step
if you've never programmed in a language with C-like syntax, learning C will help you enormously before you go on to learning C++
i learned assembly first, c second.
wouldn't bother with chip8. what makes emulators interesting is emulating the games you want to emulate. start with the system you like better of nes or gb.
c is enough to make a good emulator. c++ is way overcomplicated and not worth using for personal projects. but that's just my opinion.
>>52629289
what's the point to learn c before c++ ?
>>52629308
Good C++ is just C with Classes.
Don't bother with idiomatic C++.
Always strive to write C-style code when writing C++.
>>52629289
you do realize python is a c-like syntax, right?
Working on some shitty lcds adapters for a bloody banking system. Please kill me slowly.
>>52629272
>>52629289
Thanks for the input! I'll do C next then to get a good grasp on it as I'll probably be using it for my university courses as well.
Do you guys have any recommended resources for learning C? I don't mind reading large books or anything as long as the methods are good.
>>52629305
Thanks for the insight. I was wary about making a CHIP8 emulator because I wasn't interested in a single game on it, and I heard conflicting opinions on if you should start with it or not.
Was there any resources you used to learn general assembly outside of the documentation on the consoles themselves?
>>52622420
I'm making a video game for fun in C++ using the SDL library.
Anyone got an input on how vertical sync works and how to manage it ?
If I activate it using the SDL flag, it just slow down everything in my game. how should I manage that ?
project euler in haskell. up to #4
>>52629314
>Good C++ is just C with Classes.
C with Classes is bad c++. good c++ is about generic programming, raii, ownership, abstraction.
>>52629318
I mean as in, a language with brackets instead of whitespace
aside from syntax, the fact that it's statically typed. That didn't really occur to me before but come to think of it that's probably going to be the biggest change
>>52629153
how is a make file that you barely have to adjust at all a distraction from getting shit done?
>>52629318
not at all, they have little in common.
>>52622420
>What are you working on
new ways to kill myself
have to make some kind of instant messenger and a 2k word paper on it by the end of this week
>>52629308
C is a simpler language, and makes explaining C++ concepts easier. It's easier to understand this shit better if you go from the lower level on up, rather than the other way around.
>>52629335
my input about learning C++ and C is a bit late since you already choose to learn C, but learning C or C++ will not make a lot of difference in the end.
You just have to know that C can be hard and I find it force you to think a lot about what you're doing, how to manage your project and structure it (which files to create, which structures, what to put in, where the functions goes, etc.).
Also you'll have to deal with the memory which is a great thing to know how stuff works in general, and permits you to do a lot of things. But if you can't into pointers, you're kinda fucked.
In the other hands, C++ is easier to maintain with the classes (it also depends on the project you're working on). But if you failed at making a good and maintainable code, you're doomed and will take a lot of time to do simple things.
>C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do, it blows away your whole leg. – Bjarne Stroustrup
Whatever the language you pick in the end, what is important is to make projects, try, fail a lot, understand your mistakes and how to structure your code. Once you know how to make a nice project, even if you're not "that" good at programming you can achieve a lot of things.
>>52626750
concat returns a new list, it doesn't modify the old one.
>>52627221
Literally nothing wrong with unity, it is the best desktop environment for Linux after KDE. Cinnamon is literally gnome shell trying to be KDE, gnome shell is too pretentious, xfce/lxde is for poorfags.
>>52629241
java/C++ is the best start imo
>>52629314
kill yourself Ctard
Ctards never learn how to program in C++ or use OOP correctly. you're a prime example of that.
>>52629437
doesn't make any sense, c++ can go as lower as c. also, learning low level stuffs first is one of the most shitty thing to do.
>>52629441
I appreciate the input still. I don't mind it being hard. In fact, I actually prefer a more challenging language with more rigid syntax to learn rather than what I've been doing in Python.
Every example and program I've wrote in Python so far as "example projects" feel rather simple and feel as if they're shying away from the more intense concepts of programming, as if Python is only meant for simple tasks and it's offputting.
The only struggle I'm going to have is what resources to learn C. I hear outside of books, there isn't much to use and this this:
is the de facto standard of C programming books, but it seems a lot of people say it isn't the best starter tutorial (compared to those with reviews from 1997 and 2000 saying it is a sharp dive into C but still a good learning resource).
Any recommendations? I'm grateful for all the input you guys are giving me.
>>52629471
I implement OOP in C whenever it's necessary.
You should never write OOP for the sake of using OOP, that's cargo cultism and the sign of a terrible programmer.
Ask your beloved programming literate anything.
>>52629491
>I implement OOP in C whenever it's necessary.
You can not implement true oop in c.
>>52629511
not all aspects of OOP are worth using
like member functions
sure, you can implement them in C, but since you have to explicitly pass a pointer to it's own member function, what's the point?
Why couldn't that function be kept outside of the struct?
>>52629511
>true oop
HERE WE GO AGAIN
MY MY
>>52629484
Can't tell you I'm not english. I started to learn online. There's a lot of tutorials and online books to learn.
>>52629511
Are you a girl?
>>52629564
No worries, I'm sure I'll figure it out. Thanks again.
Is he right? Should tabs be 8 characters long?
>>52629655
no
4
and you should use tabs for indentation not spaces
spaces are fucking disgusting
>>52629655
Most non-trivial code requires at least 4 indents.
he's just a hack
NEW THREAD!
>>52629675
>>52629682
Who the fuck use tabs like this in 2016 ?
Didn't we all agree that tabs should indent using 4 spaces so it doesn't fuck up whatever the IDE / text editor you use ?
>>52629703
your editor can display tabs as any number of spaces you want.
Don't be dumb.
4 space tabs is master race tho.
>>52629703
the source code should contain tabs
the width of the tabs should be equivalent to 4 spaces
it shouldn't put 4 actual spaces, it should be an actual tab just with the width of 4 spaces
Want to learn a language that will be useful in finding decent paying job with no related degree without resorting to handmade minecraft mods and donations from children, what's the best choice?
Or would that actually be feasible?
>>52630415
If you want to just exploit suckers for cash like that, learn to draw furry porn and set up a patreon
>>52630448
Unfortunately my drawing skills are shit and I'd learn how to code quicker. I don't get much free time outside of work but I want a language I can do something with that will eventually require less effort or just let me work from home so I can perpetually fund my travel addiction instead of work 6 weeks and get 2 weeks off.
>>52629703
>2016
>Using text editors that can't display tabs as however many fucking spaces you like
POO IN LOO
>>52627040
try management?
>>52629655
No, even giants can be wrong about things though. Bjarne for example is wrong about using tabs for indentation.
>>52630740
there is no argument for using spaces other than that the most primitive editors such as literally notepad might have tabs at 8 spaces wide with no option to change it
>>52630740
>Using spaces for indentation
>Literally forcing your shitty, incorrect opinions on everyone who has to look at your code
>>52629655
Kerneldoc says use tabs, but tabs should be displayed as 8 spaces.
You can display things however the fuck you like in your own editor, all it means is that you have to keep within 80 chars/line assuming some people will be using 8 space tabs.
This is pretty logical since most terminals default to 8 space tabs, and wrapping lines in terminals is fucking hideous. | http://4archive.org/board/g/thread/52622420 | CC-MAIN-2017-04 | refinedweb | 9,042 | 71.44 |
uswitch - Get or set compatibility environment specific
behavior for a calling process through the uswitch value.
#include <sys/uswitch.h>
long uswitch(
long cmd,
long mask );
Specifies the requested actions. The valid cmd values are:
Returns the current uswitch value for the calling process.
If mask is non-zero, it returns the status of specific
uswitch bit-mask(s). Changes the current uswitch value
for the calling process as specified by the mask bitmask(s).
The following bit-masks are valid when specified
with either of the values for the cmd parameter: Specifies
System V NULL pointer behavior. Specifies process
requests enhanced core file naming.
The uswitch system call is used to get or change the compatibility
environment specific behavior in Tru64 UNIX.
Any changes affect the calling process and its children.
When the USW_NULLP bit of uswitch is set to 1, the System
V method of treating NULL pointers is applied. In this
method, references to a NULL pointer always returns zero
(0). When this bit-mask is reset to zero (0), subsequent
references to a NULL pointer generate a segmentation violation
signal (SIGSEGV).
When the USW_CORE bit of uswitch is set to 1, the process
requests enhanced core file naming. The bit-mask, when
set, can be inherited when the process forks. The bitmask
is cleared when an exec system call is executed. See
core(4) for more information about core files.
Any write(2) references to NULL pointers generate a segmentation
violation signal (SIGSEGV) regardless of the
uswitch value.
Usage of this system call may make the application nonportable.
Upon successful completion, either the current or new
uswitch value for mask is returned. Otherwise, a value of
-1 is returned and errno is set to indicate the error.
If the uswitch system call fails, the uswitch value
remains unchanged and errno is set to the following: The
mask is greater than USW_MAX or less than USW_MIN.
The following code sample sets the bit mask for System V
NULL pointer behavior:
long uswitch_val;
... uswitch_val = uswitch(USC_GET,0); /*
Gets current value*/ uswitch(USC_SET, uswitch_val |
USW_NULLP); /* Sets USW_NULLP bit */ The following
code sample sets the bit mask for enhanced core
file names:
long uswitch_val;
... uswitch_val = uswitch(USC_GET,0); /*
Gets current value*/ uswitch(USC_SET, uswitch_val |
USW_CORE); /* Sets USW_CORE bit */
uswitch(2) | https://nixdoc.net/man-pages/Tru64/man2/uswitch.2.html | CC-MAIN-2020-45 | refinedweb | 380 | 64.61 |
In the previous example, we have used only public members for the base class.
However, there are 3 types of data members for a class, public, private and protected.
If a data member is declared as private, this data member could only be accessible by its member functions.
Consider the following example.
#include <cstdlib> #include <iostream> using namespace std; // Base class class Music { private: string singer; string title; public: string printOut() { this->singer = "Katy Perry"; return this->singer; } }; // Derived class class Rock : public Music { public: string album; }; int main(void) { Rock rock; cout << rock.printOut(); return 0; }
Rock is a derived class of Music. For it to access the private members of the base class, in this case, the singer variable, we have another public method called printOut.
It is through this printOut method that we are able to access the private member singer. | http://codecrawl.com/2015/02/07/cplusplus-private-members-of-the-base-class/ | CC-MAIN-2016-44 | refinedweb | 144 | 62.17 |
I have a Python 2.7 code which is running two piped suprocess
Popens with multiprocessing
Pool as follows
import sys, shlex import subprocess as sp import multiprocessing as mp try: def run(i): p0 = sp.Popen(shlex.split('myapp'), stdout=sp.PIPE, stderr=sp.PIPE, shell=False) p1 = sp.Popen(shlex.split('cat'), stdin=p0.stdout, stdout=sp.PIPE, stderr=sp.PIPE) out, err = p1.communicate() return i pool = mp.Pool(1) pool.map(run, range(1, 100)) except: print 'CATCHED'
The issue is that, when
myapp is failing with return code
1, an error
IOError: [Errno 32] Broken pipe is raised but it does not get caught by the
try ...
except block which covers everything.
Does anyone know how this is possible? How can I capture this error?
Additional information:
myapp corresponds to
bwa mem -M hg19.fa A B where
bwa is failing because it does not find a needed index file within current directory where file
hg19.fa exists.
I have also noticed that after failing
myapp (i.e.
bwa) is taking a while to terminate, probably causing the broken pipe as
cat has probably already terminated.
The broken pipe seems to occurs when the number of parallel Pool workers is higher than available processors.
Substituting
p1.communicate() with
p0.wait() and
p1.wait() does not change the outcome.
Running with Python 3 (substituting the print statement) seems to be taking ages to terminate but it does not end with the same uncaught error; thus
myapp seems to have troubles terminating but Python 3 correctly handles the situation.
$ store user notification preferencesI do it like this:? | https://cmsdk.com/python/python-uncatchable-ioerror-errno-32-broken-pipe-from-popen-within-pool.html | CC-MAIN-2021-04 | refinedweb | 270 | 58.79 |
How to Develop a RESTful Web Service in ASP .NET Web API
How to Develop a RESTful Web Service in ASP .NET Web API
Let's take a look at a tutorial that explains how to develop a RESTful web service in ASP .NET with a web API.
One of the most popular distributed architectures in the digital world nowadays is RESTful web service, which is widely used in this sector. As ASP.NET is one of the most popular and widely used technologies in the financial and digital sectors of the UK, I have decided to write this article for those interested in distributed technology. You can download the project from here.
I am going to explain, step-by-step, how to develop a RESTful Web service in ASP .NET with a Web API.
First, download the latest visual studio in your system. This is free for learning purposes.
Also, download SOAPUI for testing our application from here.
I have written one more article in .net Core, you might like that too.
Let’s start our project:
Step 1
First, create an ASP.NET Web Application project in Visual Studio and name it StudentRegistrationDemo2. For that, select File->New->Project->ASP.NET Web Application (see below window) and click OK.
Once you click the OK button, you can see the below window from where you need to select Web API and click the OK button.
Once you click the OK Button, it will create the below project structure:
Step 2
Now we will create the below resource classes for handling our GET, POST, PUT, and DELETE services. Right-click on the Models folder from the project explorer window and select Add=>Class (see below).
Modify Class Student.cs like below:
using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace StudentRegistrationDemo2.Models { public class Student { String name; public String Name { get { return name; } set { name = value; } } int age; public int Age { get { return age; } set { age = value; } } String registrationNumber; public String RegistrationNumber { get { return registrationNumber; } set { registrationNumber = value; } } } }
Step 3
Follow the above step 2 to create and add below two classes in Models folder:
The first one is StudentRegistration, this is a singleton class, and it will hold the list of registered students including all the operations for GET, POST, PUT, and DELETE requests.
using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace StudentRegistrationDemo2.Models { public class StudentRegistration { List<Student> studentList; static StudentRegistration stdregd = null; private StudentRegistration() { studentList = new List<Student>(); } public static StudentRegistration getInstance() { if (stdregd == null) { stdregd = new StudentRegistration(); return stdregd; } else { return stdregd; } } public void Add(Student student) { studentList.Add(student); } public String Remove(String registrationNumber) { for (int i = 0; i < studentList.Count; i++) { Student stdn = studentList.ElementAt(i); if (stdn.RegistrationNumber.Equals(registrationNumber)) { studentList.RemoveAt(i);//update the new record return "Delete successful"; } } return "Delete un-successful"; } public List<Student> getAllStudent() { return studentList; } public String UpdateStudent(Student std) { for (int i = 0; i < studentList.Count; i++) { Student stdn = studentList.ElementAt(i); if (stdn.RegistrationNumber.Equals(std.RegistrationNumber)) { studentList[i] = std;//update the new record return "Update successful"; } } return "Update un-successful"; } } }
The second one is class StudentRegistrationReply, this class will be used to reply message to the client application as response.
using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace StudentRegistrationDemo2.Models { public class StudentRegistrationReply { String name; public String Name { get { return name; } set { name = value; } } int age; public int Age { get { return age; } set { age = value; } } String registrationNumber; public String RegistrationNumber { get { return registrationNumber; } set { registrationNumber = value; } } String registrationStatus; public String RegistrationStatus { get { return registrationStatus; } set { registrationStatus = value; } } } }
Step 4
Now is the time to introduce controller classes to handle GET, POST, PUT, and DELETE web requests. We will create separate controllers for GET, POST, PUT, and DELETE requests in this example even though it's not necessary, but I am using separate controllers for more clarity. Even one controller would suffice for all the above services, but as per good design principle, we should have a separate controller so that it’s easy to maintain and debug the application too.
Let’s start with the GET and POST request first. Right-click on the Controllers folder and select Add=>Controller. From the below window, select Web API 2 Controller — Empty
Name the first controller as StudentRetriveController and click the Add button (See below).
Now modify StudentRetriveController like below:
using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using StudentRegistrationDemo2.Models; namespace StudentRegistrationDemo2.Controllers { //GET api/studentretrive public class StudentRetriveController : ApiController { public List<Student> GetAllStudents() { return StudentRegistration.getInstance().getAllStudent(); } } }
If you look into the code, it is so simple. We don't need to exclusively mention whether it is a GET request method or not. It even does not require to mention the resource path. The Web API automatically considers it a GET request as the method name starts with a keyword "Get" (GetAllStudent) and in resource path, it will add "api" at the front and the name of the controller (all in small letters) at its back. So any GET call with resource path "/api/studentretrive" will invoke the above "GetAllStudents" method. But if you implement more than one GET method, you have to clearly mention the resource path.
Step 5
Now is the time to introduce a controller to handle a POST request. Just following step 4 and create StudentRegistrationController and modify it like below:
using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using StudentRegistrationDemo2.Models; namespace StudentRegistrationDemo2.Controllers { public class StudentRegistrationController : ApiController { public StudentRegistrationReply registerStudent(Student studentregd) { Console.WriteLine("In registerStudent"); StudentRegistrationReply stdregreply = new StudentRegistrationReply(); StudentRegistration.getInstance().Add(studentregd); stdregreply.Name = studentregd.Name; stdregreply.Age = studentregd.Age; stdregreply.RegistrationNumber = studentregd.RegistrationNumber; stdregreply.RegistrationStatus = "Successful"; return stdregreply; } } }
Now we are done with our first stage, and it is the time to test the application.
Step 6
From the menu bar, you can see a green arrow button and you can select a browser installed in your system and click it. It will start your web server and run your web service application.
Wait until you see the browser like below:
Now the server is running and we will do our first web service call i.e. GET service call first.
Step 7
I hope you already installed SOAPUI in your system. If not, download SOAPUI from here. Now open the application, and from the File menu, select New REST Project (File=>New REST Project) and copy and paste the below URL and change the port number 63053 if it is different in your system. Then click the OK button. (Notice the URL. We are using the controller name studentretrive (StudentRetriveController) as resource locator)
Once the project is created, just click the green arrow button and you can see an empty record like below:
The reason is obvious, as our Student list is empty. We have to insert a few records here. To add records, we will use our POST service. Let's test our POST service now.
Step 8
Just follow the step 7 and create a new REST project and add the below URL:
But here, we need to do some extra configuration. First, select POST from the methods list and add the record in Media Type to insert into the application. Now click the green arrow button:
Now repeat step 7 and see:
Now repeat step 8 and insert a few more records and repeat step 7 to check the result.
So far so good. Now we are going to complete our last part of this project by adding PUT and DELETE services.
Step 9
Follow step 4 and add two controllers respectively: StudentUpdateController and StudentDeleteController. Modify both like below:
using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using StudentRegistrationDemo2.Models; namespace StudentRegistrationDemo2.Controllers { public class StudentUpdateController : ApiController { public String PutStudentRecord( Student stdn) { Console.WriteLine("In updateStudentRecord"); return StudentRegistration.getInstance().UpdateStudent(stdn); } } }
using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using StudentRegistrationDemo2.Models; namespace StudentRegistrationDemo2.Controllers { public class StudentDeleteController : ApiController { [Route("student/remove/{regdNum}")] public String DeleteStudentRecord(String regdNum) { Console.WriteLine("In deleteStudentRecord"); return StudentRegistration.getInstance().Remove(regdNum); } } }
Now, look at the above controller class. What kind of differences do you notice? Route: here, we are using Route to specifically mention the resource location. You might have a question about if we need to add multiple POST or GET services and how to differentiate between each method. No worries, we can do it like below:
First with a different method signature
With different method signature:
GetAllStudents(){}//default
GetStudent(Student object){}
GetStudentRec(String registrationNumber){}
and by using Route
[Route("student/remove/{regdNum}")]
DeleteStudent(String regdNum){}
[Route("student/removeall")]
DeleteAllStudent(){}
Now let's test all the services that we have implemented so far.
Step 10
Now use POST service and add three records. In one of the records, make age=270, which is a wrong entry of course. With a GET call, check the records first. Now we are going to correct the above value with our PUT request test. Create a New REST Project and add the below URL. This time, select the method as PUT and add the record to be modified and click the green arrow button.
Now verify the records:
We have reached the end of our project. Now let's test the DELETE request. Create a new project and add the below URL, and this time, select method type DELETE.
Now just repeat the GET request call and check the result.
Hope you liked this article. Let me know any questions you might have in the comments section. Thanks! }} | https://dzone.com/articles/step-by-step-how-to-develop-a-restful-web-service | CC-MAIN-2018-47 | refinedweb | 1,641 | 51.14 |
Docx is an opensource library which provide creation and manipulation of Microsoft Word Document. It is one of of the easiest way to create documents in your C# application. Above all no need to install Office word on your system and it is fast.
DocX is best alterate for Office.Interop.Word which require Office Installation in your system.
The developer made a good and fruitfull effort to deliver such a fast library, thanks lot
First you need to add the library to your project and using it. Use Nuget Package Manager to add the Library for that.
using Xceed.Document.NET;
Now you can create you document with Docx. Create method will perform file operation adfter you called the Save. Here is the sample code
string fileName=@"/doc/sample.docx"; using (var document = DocX.Create(fileName)) { // Add a title document.InsertParagraph("This is Sample Text").FontSize(15d).SpacingAfter(50d).Alignment = Alignment.center; document.InsertParagraph("This is last line of this Document"); document.Save();
You can also work on tables,List and styles as you do on Word Application. Docx is faster than Interop.Word
One thought on “Create Word Document programatically with Docx in C#” | https://developerm.dev/2020/05/31/create-word-document-programatically-with-docx-in-c/ | CC-MAIN-2021-17 | refinedweb | 196 | 52.66 |
GLOB(3) BSD Programmer's Manual GLOB(3)
NAME
glob, globfree - generate pathnames matching a pattern
SYNOPSIS
#include <<glob.h>>
int
glob(const char *pattern, int flags,
const int (*errfunc)(const char *, int), glob_t *pglob);
void
globfree(glob_t *pglob);
DESCRIPTION
The glob() function is a pathname generator that implements the rules for
file name pattern matching used by the shell.
The include file glob.h defines the structure type glob_t, which contains de-
fined oth-
er is set to 1, and the number of
matched pathnames set to 0. If GLOB_QUOTE is set, its
effect is present in the pattern returned.
GLOB_NOSORT By default, the pathnames are sorted in ascending ASCII
order; this flag prevents that sorting (speeding up
glob()).
The following values may also be included in flags, however, they are
non-standard extensions to IEEE Std1003.2 (``POSIX''). re-
store Use the backslash (`\') character for quoting: every oc-
currence of a backslash followed by a character in the
pattern is replaced by that character, avoiding any spe-
cial interpretation of the character.
GLOB_TILDE Expand patterns that start with `~' to user name home
directories._ABEND.
GLOB_ABEND The scan was stopped because an error was encountered and
either GLOB_ERR was set or (*errfunc)() returned non-zero.
The arguments pglob->gl_pathc and pglob->gl_pathv are still set as speci-
fied above.
EXAMPLEp(3)
STANDARDS
The glob() function is expected to be IEEE Std1003.2 (``POSIX'') compati-
ble with the exception that the flags GLOB_ALTDIRFUNC, GLOB_BRACE
GLOB_MAGCHAR, GLOB_NOMAGIC, GLOB_QUOTE, and GLOB_TILDE, and the fields
gl_matchc and gl_flags should not be used by applications striving for
strict POSIX).
4.4BSD April 16, 1994 4 | http://modman.unixdev.net/?sektion=3&page=globfree&manpath=4.4BSD-Lite2 | CC-MAIN-2017-39 | refinedweb | 271 | 61.56 |
Remarks
scanf("%d", &n)
scanf("%d", n)
field = %x
field = 5218
field=5218
field= 5218
field =5218
fiel d=5218
The format specification string for the output of information can
contain:
A conversion specification consists of the following, in the order
listed:
For examples of conversion specifications, see the sample programs in
Section 2.6.
Table 2-4 shows the characters you can use between the percent sign
(%) (or the sequence %n$) and the conversion specifier. These
characters are optional, but if specified, they must occur in the order
shown in Table 2-4.
For the o (octal) conversion, the precision is increased to force
the first digit to be a zero.
For the x (or X) conversion, a nonzero result is prefixed with 0x
(or 0X).
For e, E, f, g, and G conversions, the result contains a decimal
point even at the end of an integer value.
For g and G conversions, trailing zeros are not trimmed.
For other conversions, the effect of # is undefined.
The minimum field width is considered after the conversion is done
according to the all other components of the format directive. This
component affects padding the result of the conversion as follows:
If the result of the conversion is wider than the minimum field,
write it out.
If the result of the conversion is narrower than the minimum width,
pad it to make up the field width. Pad with spaces by default. Pad with
zeros if the 0 flag is specified; this does not mean that the width is
an octal number. Padding is on the left by default, and on the right if
a minus sign is specified.
For the wide-character output functions, the field width is measured
in wide characters; for the byte output functions, it is measured in
bytes.
If a precision appears with any other conversion specifier, the
behavior is undefined.
Precision can be designated by a decimal integer constant, or by an
output source. To specify an output source, use an asterisk (*) or the
sequence *
n$, where
n refers to the
nth output source listed after the format specification.
If only the period is specified, the precision is taken as 0.
An l (lowercase ell) specifies that a following d, i, o, u, x, or X
conversion specifier applies to a
long int
or
unsigned long int
argument; an l can also specify that a following n conversion specifier
applies to a pointer to a
long int
argument.
On
OpenVMS Alpha systems, an L or ll (two lowercase ells)
specifies that a following d, i, o, u, x, or X conversion specifier
applies to an
__int64
or
unsigned __int64
argument. (ALPHA ONLY)
An L specifies that a following e, E, f, g, or G conversion
specifier applies to a
long double
argument.
An l specifies that a following c or s conversion specifier applies
to a
wchar_t
argument.
If an h, l, or L appears with any other conversion specifier, the
behavior is undefined.
On
OpenVMS VAX and
OpenVMS Alpha systems, Compaq C
int
values are equivalent to
long
values.
Table 2-5 decribes the conversion specifiers for formatted output.
The value is rounded to the appropriate number of digits.
If the optional character l (lowercase ell) precedes this conversion
specifier, then the specifier converts a
wchar_t
argument to an array of bytes representing the character, and writes
the resulting character. If the field width is specified and the
resulting character occupies fewer bytes than the field width, it will
be padded to the given width with space characters. If the precision is
specified, the behavior is undefined.
If an l (lowercase ell) precedes the c specifier, then the specifier
converts a
wchar_t
argument to an array of bytes representing the character, and writes
the resulting character. If the field width is specified and the
resulting character occupies fewer characters than the field width, it
will be padded to the given width with space characters. If the
precision is specified, the behavior is undefined.
If the optional character l (lowercase ell) precedes this
conversion specifier, then the specifier converts an array of
wide-character codes to multibyte characters, and writes the multibyte
characters. Requires an argument that is a pointer to an array of wide
characters of type
wchar_t
. Characters are written until a null wide character is encountered or
until the number of bytes indicated by the precision specification is
exhausted. If the precision specification is omitted or is greater than
the size of the array of converted bytes, the array of wide characters
must be terminated by a null wide character.
If an l precedes this conversion specifier, then the argument is a
pointer to an array of
wchar_t
. Characters from this array are written until a null wide character is
encountered or the number of wide characters indicated by the precision
specification is exhausted. If the precision specification is omitted
or is greater than the size of the array, the array must be terminated
by a null wide character.
Compaq C defines three file pointers that allow you to perform I/O
to and from the logical devices usually associated with your terminal
(for interactive jobs) or a batch stream (for batch jobs). In the
OpenVMS environment, the three
permanent process files SYS$INPUT, SYS$OUTPUT, and SYS$ERROR perform
the same functions for both interactive and batch jobs. Terminal I/O
refers to both terminal and batch stream I/O. The file pointers stdin,
stdout, and stderr are defined when you include the
<stdio.h>
header file using the
#include
preprocessor directive.
The stdin file pointer is associated with the terminal to perform
input. This file is equivalent to SYS$INPUT. The stdout file pointer is
associated with the terminal to perform output. This file is equivalent
to SYS$OUTPUT. The stderr file pointer is associated with the terminal
to report run-time errors. This file is equivalent to SYS$ERROR.
There are three file descriptors that refer to the terminal. The file
descriptor 0 is equivalent to SYS$INPUT, 1 is equivalent to SYS$OUTPUT,
and 2 is equivalent to SYS$ERROR.
When performing I/O at the terminal, you can use Standard I/O functions
and macros (specifying the pointers stdin, stdout, or stderr as
arguments), you can use UNIX I/O functions (giving the corresponding
file descriptor as an argument), or you can use the Terminal I/O
functions and macros. There is no functional advantage to using one
type of I/O over another; the Terminal I/O functions might save
keystrokes since there are no arguments.
This section gives some program examples that show how the I/O
functions can be used in applications.
Example 2-1 shows the
printf
function.
/* CHAP_2_OUT_CONV.C */
/* This program uses the printf function to print the */
/* various conversion specifications and their affect */
/* on the output. */
/* Include the proper header files in case printf has */
/* to return EOF. */
#include <stdlib.h>
#include <stdio.h>
#include <wchar.h>
#define WIDE_STR_SIZE 20
main()
{
double val = 123345.5;
char c = 'C';
int i = -1500000000;
char *s = "thomasina";
wchar_t wc;
wchar_t ws[WIDE_STR_SIZE];
/* Produce a wide character and a wide character string */
if (mbtowc(&wc, "W", 1) == -1) {
perror("mbtowc");
exit(EXIT_FAILURE);
}
if (mbstowcs(ws, "THOMASINA", WIDE_STR_SIZE) == -1) {
perror("mbstowcs");
exit(EXIT_FAILURE);
}
/* Print the specification code, a colon, two tabs, and the */
/* formatted output value delimited by the angle bracket */
/* characters (<>). */
printf("%%9.4f:\t\t<%9.4f>\n", val);
printf("%%9f:\t\t<%9f>\n", val);
printf("%%9.0f:\t\t<%9.0f>\n", val);
printf("%%-9.0f:\t\t<%-9.0f>\n\n", val);
printf("%%11.6e:\t\t<%11.6e>\n", val);
printf("%%11e:\t\t<%11e>\n", val);
printf("%%11.0e:\t\t<%11.0e>\n", val);
printf("%%-11.0e:\t\t<%-11.0e>\n\n", val);
printf("%%11g:\t\t<%11g>\n", val);
printf("%%9g:\t\t<%9g>\n\n", val);
printf("%%d:\t\t<%d>\n", c);
printf("%%c:\t\t<%c>\n", c);
printf("%%o:\t\t<%o>\n", c);
printf("%%x:\t\t<%x>\n\n", c);
printf("%%d:\t\t<%d>\n", i);
printf("%%u:\t\t<%u>\n", i);
printf("%%x:\t\t<%x>\n\n", i);
printf("%%s:\t\t<%s>\n", s);
printf("%%-9.6s:\t\t<%-9.6s>\n", s);
printf("%%-*.*s:\t\t<%-*.*s>\n", 9, 5, s);
printf("%%6.0s:\t\t<%6.0s>\n\n", s);
printf("%%C:\t\t<%C>\n", wc);
printf("%%S:\t\t<%S>\n", ws);
printf("%%-9.6S:\t\t<%-9.6S>\n", ws);
printf("%%-*.*S:\t\t<%-*.*S>\n", 9, 5, ws);
printf("%%6.0S:\t\t<%6.0S>\n\n", ws);
}
Running Example 2-1 produces the following output:
$ RUN EXAMPLE
%9.4f: <123345.5000>
%9f: <123345.500000>
%9.0f: < 123346>
%-9.0f: <123346 >
%11.6e: <1.233455e+05>
%11e: <1.233455e+05>
%11.0e: < 1e+05>
%-11.0e: <1e+05 >
%11g: < 123346>
%9g: < 123346>
%d: <67>
%c: <C>
%o: <103>
%x: <43>
%d: <-1500000000>
%u: <2794967296>
%x: <a697d100>
%s: <thomasina>
%-9.6s: <thomas >
%-*.*s: <thoma >
%6.0s: < >
%C: <W>
%S: <THOMASINA>
%-9.6S: <THOMAS >
%-*.*S: <THOMA >
%6.0S: < >
$
Example 2-2 shows the use of the
fopen
,
ftell
,
sprintf
,
fputs
,
fseek
,
fgets
, and
fclose
functions.
/* CHAP_2_STDIO.C */
/* This program establishes a file pointer, writes lines from */
/* a buffer to the file, moves the file pointer to the second */
/* record, copies the record to the buffer, and then prints */
/* the buffer to the screen. */
#include <stdio.h>
#include <stdlib.h>
main()
{
char buffer[32];
int i,
pos;
FILE *fptr;
/* Set file pointer. */
fptr = fopen("data.dat", "w+");
if (fptr == NULL) {
perror("fopen");
exit(EXIT_FAILURE);
}
for (i = 1; i < 5; i++) {
if (i == 2) /* Get position of record 2. */
pos = ftell(fptr);
/* Print a line to the buffer. */
sprintf(buffer, "test data line %d\n", i);
/* Print buffer to the record. */
fputs(buffer, fptr);
}
/* Go to record number 2. */
if (fseek(fptr, pos, 0) < 0) {
perror("fseek"); /* Exit on fseek error. */
exit(EXIT_FAILURE);
}
/* Read record 2 in the buffer. */
if (fgets(buffer, 32, fptr) == NULL) {
perror("fgets"); /* Exit on fgets error. */
exit(EXIT_FAILURE);
}
/* Print the buffer. */
printf("Data in record 2 is: %s", buffer);
fclose(fptr); /* Close the file. */
} | http://h71000.www7.hp.com/commercial/c/docs/5763p008.html | CC-MAIN-2014-35 | refinedweb | 1,704 | 56.15 |
Operators
Operators are a basic feature of the C++ language, which, similar to operators in mathematics, allow the production of a result of computation from one, or a combination of two variables. There are roughly 60 operators in C++; fortunately, you only need to know a few of them to get started writing programs.
[edit] Assignment operator
The assignment operator (
=) assigns a value to a variable. For example:
b = 14;
This statement assigns the integer value 14 to the variable
b. The assignment operator always works from right to left. For example:
c = b;
Here the variable
c is assigned the value that is held in
b. The value stored in
b is left unmodified, whereas the previous value of
c is lost.
Below is an example that shows how to use assignment operator to swap two values:
Output:
x: 10; y: 20 x: 20; y: 10
[edit] Arithmetic operators
The arithmetic operators compute a new result from two given values. The following arithmetic operators are available in C++:
- addition. Example: a + b. Here the sum of
aand
bis calculated.
- subtraction. Example: a - b. Here
bis subtracted from
a.
- multiplication. Example: a * b. Here the multiplication of
aand
bis performed.
- division. Example: a / b. Here
ais divided by
b. For integer types, non-integer results are rounded towards zero (truncated).
- modulo. Example: a % b. Here the remainder of the division of
aby
bis calculated.
The below example demonstrates use of the arithmetic operators:
#include <iostream> int main() { int a = 14; int b = 5; int c = 12; std::cout << "a: " << a << "; b: " << b << ";"; return 0; }
Output:
a: 14; b: 5; c: 12 a+b: 19; b+c: 17; a+c: 26 a-b: 9; b-c: -7; a-c: 2 a*b: 70; b*c: 60; a*c: 168 a/b: 2; b/c: 0; a/c: 1 a%b: 4; b%c: 5; a%c: 2
[edit] Bitwise logical operators
[edit] Bitwise shift operators
[edit] Compound assignment operators
[edit] Increment and decrement operators
[edit] Logical operators
[edit] Comparison operators
Comparison operators allow to determine the relation of two different values. The following operators are available in C++:
- less-than. Example: a < b: Yields true if the value on the left (
a) side is less (smaller) than the value on the right side (
b).
- less-or-equal. Example: a <= b: Yields true if the value of
ais less than or equal to the value of
b.
- equality. Example: a == b: Yields true if the value of
ais equal to the value of
b.
- greater-or-equal. Example: a >= b: Yields true if the value of
ais greater than or equal to the value of
b.
- greater-than. Example: a > b: Yields true if the value of
aside is greater than the value of
b.
- non-equality. Example: a != b: Yields true if the value of
ais not equal to the value of
b.
The following example demonstrates use of the comparison operators:
Output:
14 is greater than 5
Be aware that comparison of floating point values may sometimes yield unexpected results due to rounding effects. Therefore, it is recommended to always use <= or >= comparison with floating point values, instead of checking for equality with ==.
[edit] Other operators
There are several other operators that we will learn about later. | https://en.cppreference.com/book/intro/operators | CC-MAIN-2018-39 | refinedweb | 547 | 54.32 |
New in spot 1.1.4a (not released)
* Changes to command-line tools:
- ltlcross has a new option --color to color its output. It is
enabled by default when the output is a terminal.
  - ltlcross will give an example of an infinite word accepted by
    both automata whenever the product of a positive automaton and a
    negative automaton is non-empty.
- ltlcross can now read the Rabin and Streett automata output by
ltl2dstar. This type of output should be specified using '%D':
ltlcross 'ltl2dstar --ltl2nba=spin:path/to/ltl2tgba@-s %L %D'
However because Spot only supports Büchi acceptance, these Rabin
and Streett automata are immediately converted to TGBA before
further processing by ltlcross.  This is still useful to
search for bugs in translators to Rabin or Streett automata, but
the statistics might not be very relevant.
- When ltlcross obtains a deterministic automaton from a
translator it will now complement this automaton to perform
additional intersection checks.  This complementation is done
only for deterministic automata (because that is cheap) and can
be disabled with --no-complement.
- To help with debugging problems detected by ltlcross, the
environment variables SPOT_TMPDIR and SPOT_TMPKEEP control where
temporary files are created and whether they should be erased.  Read
the man page of ltlcross for details.
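    For instance, a debugging session might look like the following
    sketch (the translator specification and formula are only
    examples; the exact semantics of both variables are described
    in the man page):
    % SPOT_TMPDIR=/tmp/spot-debug SPOT_TMPKEEP=1 \
      ltlcross 'ltl2tgba -s %f >%N' -f 'GFa'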
- There is a new command, named dstar2tgba, that converts a
deterministic Rabin or Streett automaton (expressed in the
output format of ltl2dstar) into a TGBA, BA or Monitor.
In the case of Rabin acceptance, the conversion will output a
deterministic Büchi automaton if one such automaton exists.  Even
if no such automaton exists, the conversion will actually
preserve the determinism of any SCC that can be kept
deterministic.
In the case of Streett acceptance, the conversion produces
non-deterministic Büchi automata with Generalized acceptance.
These are then degeneralized if requested.
See the man page for some examples and further reference.
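    A minimal sketch of such a pipeline, with hypothetical file
    names (formula.ltl holds a formula in ltl2dstar's input syntax):
    % ltl2dstar --ltl2nba=spin:ltl2tgba@-s formula.ltl out.dra
    % dstar2tgba out.dra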
- The %S escape sequence used by ltl2tgba --stats to display the
number of SCCs in the output automaton has been renamed to %c.
This makes it more homogeneous with the --stats option of the
new dstar2tgba command.
Additionally, the %p escape can now be used to show whether the
output automaton is complete, and the %r escape will give the
number of seconds spent building the output automaton (excluding
the time spent parsing the input).
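    For instance, using only the escapes mentioned above (the
    formula is arbitrary), the following prints one line of
    statistics instead of the automaton:
    % ltl2tgba --stats='SCCs=%c complete=%p time=%rs' 'GFa'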
* All the parsers implemented in Spot now use the same type
to store locations.
* Degeneralization was not idempotent on automata with an accepting
  initial state that was on a cycle, but without a self-loop.
* Cleanup of exported symbols
All symbols in the library now have hidden visibility on ELF systems.
Public classes and functions have been marked explicitly for export
with the SPOT_API macro.
During this massive update, some functions that should not
have been made public in the first place have been moved away so that
they can only be used from within the library.  Some old or unused
functions have been removed.
removed:
- class loopless_modular_mixed_radix_gray_code
hidden:
- class acc_compl
- class acceptance_convertor
- class bdd_allocator
- class free_list
New in spot 1.1.4 (2013-07-29)
* Bug fixes:
- The parser for neverclaim, updated in 1.1.3, would fail to
parse guards of the form (a) || (b) output by ltl2ba or
ltl3ba, and would only understand ((a) || (b)).
- When used from ltlcross, the same parser would fail to
parse further neverclaims after the first failure.
  - Add a missing newline in some error messages of ltlcross.
  - Expressions like {SERE} were incorrectly translated and simplified
    for SEREs that accept the empty word: they were wrongly reduced
    to true.  Simplification and translation rules have been fixed,
and the doc/tl/tl.pdf specifications have been updated to better
explain that {SERE} has the semantics of a closure operator that
is not exactly what one could expect after reading the PSL
standard.
- Various typos.
New in spot 1.1.3 (2013-07-09)
* New feature:
- The neverclaim parser now understands the new style of output
used by Spin 6.24 and later.
* Bug fixes:
  - The scc_filter() function could abort with a BDD error if all
    the acceptance sets of an SCC but the first one were useless.
- The script in bench/spin13/ would not work on MacOS X because
of some non-portable command.
- A memory corruption in ltlcross.
New in spot 1.1.2 (2013-06-09)
* Bug fixes:
  - Uninitialized variables in ltlcross (affecting the count of
    terminal, weak, and strong SCCs).
  - Work around an old GCC bug to allow compilation with g++ <= 4.5
- Fix several Doxygen comments so that they display correctly.
New in spot 1.1.1 (2013-05-13):
* New features:
- lbtt_reachable(), the function that outputs a TGBA in LBTT's
format, has a new option to indicate that the TGBA being printed
is in fact a Büchi automaton. In this case it outputs an LBTT
automaton with state-based acceptance.
The output of the guards has also been changed in two ways:
1. atomic propositions that do not match p[0-9]+ are always
double-quoted. This avoids issues where t or f were used as
atomic propositions in the formula, output as-is in the
automaton, and read back as true or false. Other names that
correspond to LBT operators would cause problems as well.
2. formulas that label transitions are now output as
irredundant-sums-of-products.
- 'ltl2tgba --ba --lbtt' will now output automata with state-based
acceptance. You can use 'ltl2tgba --ba --lbtt=t' to force the
output of transition-based acceptance like in the previous
versions.
Some illustrations of this point and the previous one can be
found in the man page for ltl2tgba(1).
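    For instance, the first command below outputs state-based
    acceptance while the second forces the old transition-based
    output (the formula is arbitrary):
    % ltl2tgba --ba --lbtt 'GFa'
    % ltl2tgba --ba --lbtt=t 'GFa'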
- There is a new function scc_filter_states() that removes all
useless states from a TGBA.  It is actually an abridged version
of scc_filter() that does not alter the acceptance conditions of
the automaton. scc_filter_state() should be used when
post-processing TGBAs that actually represent BAs.
- simulation_sba(), cosimulation_sba(), and
iterated_simulations_sba() are new functions that apply to TGBAs
that actually represent BAs.  They preserve the important
property that if a state of the BA is accepting, the outgoing
transitions of that state are all accepting in the TGBA that
represents the BA.  This is something that was not preserved by
functions cosimulation() and iterated_simulations() as mentioned
in the bug fixes below.
- ltlcross has a new option --seed, that makes it possible to
change the seed used by the random graph generator.
- ltlcross has a new option --products=N to check the result of
each translation against N different state spaces, and average
the statistics of these N products.  N defaults to 1; larger
values increase the chances to detect inconsistencies in the
translations, and also make the average size of the product
built against the translated automata a more pertinent
statistic.
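    For instance, the following sketch (the translator
    specification and formula are only examples) checks each
    translation against 5 state spaces generated from seed 42:
    % ltlcross --products=5 --seed=42 'ltl2tgba -s %f >%N' -f 'GFa'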
- bdd_dict::unregister_all_typed_variables() is a new function,
making it easy to unregister all BDD variables of a given type
owned by some object.
* Bug fixes:
- genltl --gh-r generated the wrong formulas due to a typo.
- ltlfilt --eventual and --universal were not handled properly.
- ltlfilt --stutter-invariant would trigger an assert on PSL formulas.
  - ltl2tgba, ltl2tgta, ltlcross, and ltlfilt would all choke on empty
lines in a file of formulas. They now ignore empty lines.
- The iterated simulation applied on degeneralized TGBA was bogus
    for two reasons: one was that cosimulation was applied using the
    generic cosimulation for TGBA, and the other was that
    SCC-filtering, performed between iterations, was also a
    TGBA-based algorithm.  Both of these algorithms could lose the
    property that if a TGBA represents a BA, all the outgoing
    transitions of an accepting state should be accepting.  As a
    consequence, some formulas were translated to incorrect Büchi
    automata.
New in spot 1.1 (2013-04-28):
Several of the new features described below are presented in
Tomáš Babiak, Thomas Badie, Alexandre Duret-Lutz, Mojmír
Křetínský, Jan Strejček: Compositional Approach to Suspension and
Other Improvements to LTL Translation. To appear in the
proceedings of SPIN'13.
* New features in the library:
- The postprocessor class now takes an optional option_map
argument that can be used to specify fine-tuning options, making
it easier to benchmark different scenarios while developing new
postprocessings.
- A new translator class implements a complete translation chain,
from LTL/PSL to TGBA/BA/Monitor. It performs pre- and
post-processings in addition to the core translation, and offers
an interface similar to that used in the postprocessor class, to
specify the intent of the translation.
- The degeneralization algorithm has learned three new tricks:
level reset, level caching, and SCC-based ordering.  The first
two are enabled by default.  Benchmarking has shown that the
latter one does not always have a positive effect, so it is
disabled by default. (See SPIN'13 paper.)
- The scc_filter() function, which removes dead SCCs and also
simplifies acceptance conditions, has learnt how to simplify
acceptance conditions in a few tricky situations that were not
simplified previously. (See SPIN'13 paper.)
- A new translation, called compsusp(), for "Compositional
Suspension" is implemented on top of ltl_to_tgba_fm().
(See SPIN'13 paper.)
  - Some experimental LTL rewriting rules that try to gather
suspendable formulas are implemented and can be activated
with the favor_event_univ option of ltl_simplifier. As
always please check doc/tl/tl.tex for the list of rules.
- An experimental "don't care" (direct) simulation has been
implemented.  This simulation considers the acceptance
of out-of-SCC transitions as "don't care". It is not
enabled by default because it currently is very slow.
  - remove_x() is a function that takes a formula and rewrites it
    without the X operator.  The rewriting is only correct for
    stutter-insensitive LTL formulas (see K. Etessami's paper in IPL
    vol. 75(6), 2000).  This algorithm is accessible from the
command-line using ltlfilt's --remove-x option.
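    For instance, the following sketch rewrites a classic
    stutter-insensitive formula written with X; the rewritten
    formula is printed on standard output:
    % ltlfilt --remove-x -f 'F(a & X(!a & b))'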
  - is_stutter_insensitive() takes any LTL formula and checks
whether it is stutter-insensitive. This algorithm is accessible
from the command-line using ltlfilt's --stutter-insensitive
option.
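    For instance, assuming several -f options may be combined, the
    following should echo only the first formula, since GFa is
    stutter-insensitive while Xa is not:
    % ltlfilt --stutter-insensitive -f 'GFa' -f 'Xa'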
- Several functions have been introduced to check the
strength of an SCC.
is_inherently_weak_scc()
is_weak_scc()
is_syntactic_weak_scc()
is_complete_scc()
is_terminal_scc()
is_syntactic_terminal_scc()
Beware that the costly is_weak_scc() function introduced in Spot
1.0, which is based on a cycle enumeration, has been renamed to
is_inherently_weak_scc() to match established vocabulary.
* Command-line tools:
- ltl2tgba and ltl2tgta now honor a new --extra-options (or -x)
flag to fine-tune the algorithms used. The available options
are documented in the spot-x (7) manpage. For instance use '-x
comp-susp' to use the afore-mentioned compositional suspension.
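    For example, to translate a formula with compositional
    suspension enabled:
    % ltl2tgba -x comp-susp 'GFa -> GFb'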
- The output format of 'ltlcross --json' has been changed slightly.
In a future version we will offer some reporting scripts that turn
such JSON output into various tables and graphs, and these changes
are required to make the format usable for other benchmarks (not
just ltlcross).
- ltlcross will now count the number of non-accepting, terminal,
weak, and strong SCCs, as well as the number of terminal, weak,
and strong automata produced by each tool.
* Documentation:
- org-mode files used to generate the documentation about
command-line tools are distributed in doc/org/.  The resulting
html files are also
in doc/userdoc/.
* Web interface:
- A new "Compositional Suspension" tab has been added to experiment
with compositional suspension.
* Benchmarks:
- See bench/spin13/README for instructions to reproduce our Spin'13
benchmark for the compositional suspension.
* Bug fixes:
  - There was a memory leak in the LTL simplification code that could
only be triggered when disabling advanced simplifications.
- The translation of the PSL formula !{xxx} was incorrect when xxx
simplified to false.
- Various warnings triggered by new compilers.
New in spot 1.0.2 (2013-03-06):
* New features:
- the on-line ltl2tgba.html interface can output deterministic or
non-deterministic monitors. However, and unlike the ltl2tgba
command-line tool, it doesn't offer different output formats.
- the class ltl::ltl_simplifier now has an option to rewrite Boolean
subformulas as an irredundant sum-of-products during the simplification
of any LTL/PSL formula. The service is also available as a method
ltl_simplifier::boolean_to_isop() that applies this rewriting
to a Boolean formula and implements a cache.
ltlfilt has a new option --boolean-to-isop to try to apply the
above rewriting from the command-line:
% ltlfilt --boolean-to-isop -f 'GF((a->b)&(b->c))'
GF((!a & !b) | (b & c))
This is currently not used anywhere else in the library.
* Bug fixes:
- 'ltl2tgba --high' is documented to be the same as 'ltl2tgba',
but by default ltl2tgba forgot to enable LTL simplifications based
on language containment, which --high does enable.  They are now
enabled by default.
- the on-line ltl2tgba.html interface failed to output monitors,
testing automata, and generalized testing automata due to two
issues with the Python bindings. It also used to display
Testing Automaton Options when the desired output was set to Monitor.
- bench/ltl2tgba would not work in a VPATH build.
- a typo caused some .dir-locals.el configuration parameters to be
silently ignored by emacs.
- improved Doxygen comments for formula_to_bdd, bdd_to_formula,
and bdd_dict.
- src/tgbatest/ltl2tgba (not to be confused with src/bin/ltl2tgba)
would have a memory leak when passed the conflicting options -M
and -O. It probably has many other problems. Do not use
src/tgbatest/ltl2tgba if you are not writing a test case for
Spot. Use src/bin/ltl2tgba instead.
New in spot 1.0.1 (2013-01-23):
* Bug fixes:
- Two executions of the simulation reductions could produce
two isomorphic automata, but with transitions in a different
order.
- ltlcross did not diagnose write errors to temporary files,
and certain versions of g++ would warn about it.
- "P0.init" is parsed as an atomic even without the double quotes,
but it was always output with double quotes. This version will
not quote this atomic proposition anymore.
- "U", "W", "M", "R" were correctly parsed as atomic propositions
(instead of binary operators) when placed in double quotes, but
on output they were printed without quotes, making the result
unparsable.
  - the to_lbt_string() function would always output a trailing space.
This is not the case anymore.
- tgba_product::transition_annotation() would segfault when
called in a product against a Kripke structure.
* Minor improvements:
  - Four new LTL simplification rules:
GF(a|Xb) = GF(a|b)
GF(a|Fb) = GF(a|b)
FG(a&Xb) = FG(a&b)
FG(a&Gb) = FG(a&b)
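    For instance, assuming ltlfilt's -r (simplify) option, the first
    rule can be observed from the command line:
    % ltlfilt -r -f 'GF(a|Xb)'
    GF(a | b)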
- The on-line version of ltl2tgba now displays edge and
transition counts, just as the ltlcross tool.
- ltlcross will display the number of timeouts at the end
of its execution.
- ltlcross will diagnose tools with missing input or
output %-sequences before attempting to run any of them.
- The parser for LBT's prefix-style LTL formulas will now
read atomic propositions that are not of the form p1, p2...
This makes it possible to process formulas written in
ltl2dstar's syntax.
* Pruning:
- lbtt has been removed from the distribution. A copy of the last
version we distributed is still available,
and our test suite will use it if it is installed, but the same
tests are already performed by ltlcross.
  - the bench/ltl2tgba/ benchmark, which used lbtt to compare various
    LTL-to-Büchi translators, has been updated to use ltlcross.  It
    now outputs summary tables in LaTeX.  Support for Modella (no
    longer available online) and Wring (requires a too old Perl
    version) has been dropped.
- the half-baked and underdocumented "Event TGBA" support in
src/evtgba*/ has been removed, as it was last worked on in 2004.
New in spot 1.0 (2012-10-27):
* License change: Spot is now distributed using GPL v3+ instead
of GPL v2+. This is because we started using some third-party
files distributed under GPL v3+.
* Command-line tools
Useful command-line tools are now installed in addition to the
library. Some of these tools were originally written for our test
suite and had evolved organically into useful programs with crappy
interfaces: they have now been rewritten with better argument
parsing, saner defaults, and they come with man pages.
- genltl: Generate LTL formulas from scalable patterns.
This offers 20 patterns so far.
- randltl: Generate random LTL/PSL formulas.
- ltlfilt: Filter lists of formulas according to several criteria
(e.g., match only safety formulas that are larger than
some given size). Besides being used as a "grep" tool
for formulas, this can also be used to convert
files of formulas between different syntaxes, apply
some simplifications, check whether two formulas are
equivalent, ...
- ltl2tgba: Translate LTL/PSL formulas into Büchi automata (TGBA,
BA, or Monitor). A fundamental change to the
interface is that you may now specify the goal of the
translation: do you favor deterministic or smaller
automata?
- ltl2tgta: Translate LTL/PSL formulas into Testing Automata.
- ltlcross: Compare the output of translators from LTL/PSL to
Büchi automata, to find bugs or for benchmarking.  This
is essentially a Spot-based reimplementation of LBTT
that supports PSL in addition to LTL, and that can
output more statistics.
An introduction to these tools can be found on-line.
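  As a quick sketch of how these tools combine (the exact options
  are detailed in the respective man pages), the following
  generates five random formulas over a and b and keeps only the
  safety ones:
  % randltl -n 5 a b | ltlfilt --safety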
The former test versions of genltl and randltl have been removed
from the source tree. The old version of ltl2tgba with its
gazillion options is still in src/tgbatest/ and is meant to be
used for testing only. Although ltlcross is meant to replace
LBTT, we are still using both tools in this release; however this
is likely to be the last release of Spot that redistributes LBTT.
* New features in the Spot library:
- Support for various flavors of Testing Automata.
The flavors are:
+ "classical" Testing Automata, as used for instance by
Geldenhuys and Hansen (Spin'06), using Büchi and
livelock acceptance conditions.
+ Generalized Testing Automata, extending the previous
with multiple Büchi acceptance sets.
+ Transition-based Generalized Testing Automata moving Büchi
acceptance to transitions, and getting rid of livelock
acceptance conditions by making stuttering self-loops explicit.
Supporting algorithms include anything required to run
the automata-theoretic approach using testing automata:
+ dedicated synchronized product
+ dedicated emptiness-check for TA and GTA, as these
may require two passes because of the two kinds of
acceptance, while a TGTA can be checked for emptiness
with the same one-pass algorithm as a TGBA.
    + conversion from a TGBA to any of the above kinds, with
      options to reduce these automata with bisimulation,
      and to produce a BA/GBA that requires a single pass
(at the expense of determinism).
+ output in dot format for display
A discussion of these automata, part of Ala Eddine BEN SALEM's
PhD work, should appear in ToPNoC VI (LNCS 7400). The web-based
interface and the aforementioned ltl2tgta tool can be used
to build testing automata.
- TGBA can now be reduced by Reverse Simulation (in addition to
the Direct Simulation introduced in 0.9). A function called
iterated_simulations() will alternate direct and reverse
simulations in a loop as long as it diminishes the size of the
automaton.
- The enumerate_cycles class implements the Loizou-Thanisch
algorithm to enumerate elementary cycles in an SCC.  As an
example of use, is_weak_scc() will tell whether an SCC is
inherently weak (all its cycles are accepting, or none of them
are).
- parse_lbt() will parse an LTL formula expressed in the prefix
syntax used (at least) by LBT, LBTT and Scheck.
to_lbt_string() can be used to print an LTL formula using this
syntax.
- to_wring_string() can be used to print an LTL formula into
Wring's syntax.
- The LTL/PSL parser now has a lenient mode that can be useful
    to interpret atomic propositions with language-specific constructs.
    In lenient mode, any (...) or {...} block that cannot be parsed
    as a formula will be assumed to be an atomic proposition.
For instance the input (a < b) U (process[2]@ok), normally
flagged as a syntax error, is read as "a < b" U "process[2]@ok"
in lenient mode.
- minimize_obligation() has a new option to disable WDBA
    minimization in cases where it would produce a deterministic
    automaton that is bigger than the original TGBA.  This can help
    choosing between fewer states or more determinism.
- new functions is_deterministic() and count_nondet_states()
(The count of nondeterministic states is now displayed on
automata generated with the web interface.)
- A new class, "postprocessor", makes it easier to apply
all available simplification algorithms on a TGBA/BA/Monitor.
* Minor changes:
- The '*' operator can (again) be used as an AND in LTL formulas.
    This is for compatibility with formulas written in Wring's
    syntax.  However, inside a SERE it is interpreted as the Kleene
    star.
- When printing a formula using Spin's LTL syntax, we don't
double-quote complex atomic propositions (that was not valid
Spin input anyway). For instance F"foo == 2" used to be
output as <>"foo == 2". We now output <>(foo == 2) instead.
The latter syntax is understood by Spin 6. It can be read
back by Spot in lenient mode (see above).
- The gspn-ssp benchmark has been removed.
New in spot 0.9.2 (2012-07-02):
* New features to the web interface.
- It can run ltl3ba (Babiak et al., TACAS'12) where available.
- "a loading logo" is displayed when result is not instantaneous.
* Speed improvements:
  - The unicity hash table of BuDDy has been separated from the
    node table for better cache-friendliness.  The resulting speedup
is around 5% on BDD-intensive algorithms.
- A new BDD operation, called bdd_implies() has been added to
BuDDy to check whether one BDD implies another. This benefits
mostly the simulation and degeneralization algorithms of Spot.
  - A new offline implementation of the degeneralization (which
    had always been performed on-the-fly so far) is available.  This
especially helps the Safra complementation.
* Bug fixes:
- The CGI script running for ltl2tgba.html will correctly timeout
after 30s when Spot's translation takes more time.
- Applying WDBA-minimization on an automaton generated by the
Couvreur/LaCIM translator could lead to an incorrect automaton
due to a bug in the definition of product with symbolic
automata.
- The Makefile.am of BuDDy, LBTT, and Spot have been adjusted to
accommodate Automake 1.12 (while still working with 1.11).
- Better error recovery when parsing broken LTL formulae.
- Fix errors and warnings reported by clang 3.1 and the
upcoming g++ 4.8.
New in spot 0.9.1 (2012-05-23):
* The version of LBTT we distribute includes a patch from Tomáš
Babiak to count the number of non-deterministic states, and the
number of deterministic automata produced.
See lbtt/NEWS for the list of other differences with the original
version of LBTT 1.2.1.
* The Couvreur/FM translator has learned two new tricks. These only
help to speed up the translation by not issuing states or
acceptance conditions that would be later suppressed by other
optimizations.
- The translation rules used to translate subformulae of the G
operator have been adjusted not to produce useless loops
already implied by G. This generalizes the "GF" trick
presented in Couvreur's original FM'99 paper.
  - Promises generated for formulas of the form P(a U (b U c))
are reduced into P(c), avoiding the introduction of many
promises that imply each other.
* The tgba_parse() function is now available via the Python
bindings.
* Bug fixes:
- The random SERE generator was using the wrong operators
for "and" and "or", mistaking And/Or with AndRat/OrRat.
- The translation of !{r} was incorrect when this subformula
was recurring (e.g. in G!{r}) and r had loops.
- Correctly recognize ltl2tgba's option -rL.
- Using LTL simplification rules based on syntactic implication,
or based on language containment checks, caused BDD variables
to be allocated in an "unnatural" order, resulting in a slower
translation and a less optimal degeneralization.
- When ltl2tgba reads a neverclaim, it now considers the resulting
TGBA as a Büchi automaton, and will display double circles in
the dotty output.
New in spot 0.9 (2012-05-09):
* New features:
- Operators from the linear fragment of PSL are supported. This
basically extends LTL with Sequential Extended Regular
Expressions (SERE), and a couple of operators to bridge SERE and
LTL. See doc/tl/tl.pdf for the list of operators and their
semantics.
- Formula rewritings have been completely revamped, and augmented
with rules for PSL operators (and some new LTL rules as well).
See doc/tl/tl.pdf for the list of the rewritings implemented.
- Some of these rewritings that may produce larger formulas
(for instance to rewrite "{a;b;c}" into "a & X(b & Xc)")
may be explicitly disabled with a new option.
- The src/ltltest/randltl tool can now generate random SEREs
and random PSL formulae.
- Only one translator (ltl2tgba_fm) has been augmented to
translate the new SERE and PSL operators. The internal
translation from SERE to DFA is likely to be rewritten in a
future version.
- A new function, length_boolone(), computes the size of an
LTL/PSL formula while considering that any Boolean term has
length 1.
- The LTL/PSL parser recognizes some UTF-8 characters (like ◇ or
∧) as operators, and some output routines now have an UTF-8
output mode. Tools like randltl and ltl2tgba have gained an -8
option to enable such output. See doc/tl/tl.pdf for the list
of recognized codepoints.
- A new direct simulation reduction has been implemented. It
works directly on TGBAs.  It is in src/tgbaalgos/simulation.hh,
and it can be tested via ltl2tgba's -RDS option.
- unabbreviate_wm() is a function that rewrites the W and M operators
of LTL formulae using R and U. This is called whenever we output
a formula in Spin syntax. By combining this with the aforementioned
PSL rewriting rules, many PSL formulae that use simple SERE can be
converted into LTL formulae that can be fed to tools that only
understand U and R. The web interface will let you do this.
- changes to the on-line translator:
+ SVG output is available
+ can display some properties of a formula
+ new options for direct simulation, larger rewritings, and
utf-8 output
- configure --without-included-lbtt will prevent LBTT from being
configured and built. This helps on systems (such as MinGW)
where LBTT cannot be built. The test-suite will skip any
LBTT-based test if LBTT is missing.
* Interface changes:
- Operators ->, <->, U, W, R, and M are now parsed as
right-associative to better match the PSL standard.
- The constructors for temporal formulae will perform some trivial
simplifications based on associativity, commutativity,
idempotence, and neutral elements. See doc/tl/tl.pdf for the
list of such simplifications.
- Formula instances now have many methods to inspect their
properties (membership to syntactic classes, absence of X
operator, etc...) in constant time.
- LTL/PSL formulae are now handled everywhere as 'const formula*'
and not just 'formula*'. This reflects the true nature of these
(immutable) formula objects, and cleans up a lot of code.
Unfortunately, it is a backward incompatible change: you may have
to add 'const' to a couple of lines in your code, and change
'ltl::const_visitor' into 'ltl::visitor' if you have written a
custom visitor.
- The new entry point for LTL/PSL simplifications is the function
ltl_simplifier::simplify() declared in src/ltlvisit/simplify.hh.
The ltl_simplifier class implements a cache.
Functions such as reduce() or reduce_tau03() are deprecated.
- The old game-theory-based implementations for direct and delayed
simulation reductions have been removed. The old direct
simulation would only work on degeneralized automata, and yet
produce results inferior to the new direct simulation introduced
in this release. The implementation of delayed simulation was
unreliable. The function reduc_tgba_sim() has been kept
for compatibility (it calls the new direct simulation whatever
the type of simulation requested) and marked as deprecated.
ltl2tgba's options -Rd, -RD are gone. Options -R1t, -R1s,
-R2s, and -R2t are deprecated and all made equivalent to -RDS.
- The tgba_explicit hierarchy has been reorganized in order to
make room for sba_explicit classes that share most of the code.
The main consequence is that the tgba_explicit type no longer
exists. However the tgba_explicit_number,
tgba_explicit_formula, and tgba_explicit_string still do.
New in spot 0.8.3 (2012-03-09):
* Support for both Python 2.x and Python 3.x.
(Previous versions would only work with Python 2.x.)
* The online ltl2tgba.html now stores its state in the URL so that
history is preserved, and links to particular setups can be sent.
* Bug fixes:
- Fix a segfault in the compression code used by the -Z
option of dve2check.
- Fix a race condition in the CGI script.
- Fix a segfault in the CGI script when computing a Büchi run.
New in spot 0.8.2 (2012-01-19):
* configure now has a --disable-python option to disable
the compilation of Python bindings.
* Minor speedups in the Safra complementation.
* Better memory management for the on-the-fly degeneralization
algorithm. This mostly benefits to the Safra complementation.
* Bug fixes:
- spot::ltl::length() forgot to count the '&' and '|' operators
in an LTL formula.
- minimize_wdba() could fail to mark some transiant SCCs as accepting,
producing an automaton that was not fully minimized.
- minimize_dfa() could produce incorrect automata, but it is not
clear whether this could have had an inpact on WDBA minimization
(the worse case is that some TGBA would not have been minimized
when they could).
- Fix a Python syntax error in the CGI script.
- Fix compilation with g++ 4.0.
- Fix a make check failure when valgrind is missing.
New in spot 0.8.1 (2011-12-18):
* Only bug fixes:
- When ltl2tgba is set to perform both WDBA minimization and
degeneralization, do the latter only if the former failed.
In previous version, automata were (uselessly) degeneralized
before WDBA minimization, causing important slowdowns.
- Fix compilation with Clang 3.0.
- Fix a Makefile setup causing a "make check" failure on MacOS X.
- Fix an mkdir error in the CGI script.
New in spot 0.8 (2011-11-28):
* Major new features:
- Spot can read DiVinE models. See iface/dve2/README for details.
- The genltl tool can now output 20 different LTL formula families.
It also replaces the LTLcounter Perl scripts.
- There is a printer and parser for Kripke structures in text format.
* Major interface changes:
- The destructor of all states is now private. Any code that looks like
"delete some_state;" will cause an compile error and should be
updated to "some_state->destroy();". This new syntax is supported
since version 0.7.
- The experimental Nips interface has been removed.
* Minor changes:
- The dotty_reachable() function has a new option "assume_sba" that
can be used for rendering automata with state-based acceptance.
In that case, acceptance states are displayed with a double
circle. ltl2tgba (both command line and on-line) Use it to display
degeneralized automata.
- The dotty_reachable() function will also display transition
annotations (as returned by the tgba::transitition_annotation()).
This can be useful when displaying (small) state spaces.
- Identifiers used to name atomic proposition can contain dots.
E.g.: X.Y is now an atomic proposition, while it was understood
as X&Y in previous versions.
- The Doxygen documentation is no longer built as a PDF file.
* Internal improvements:
- The on-line ltl2tgba CGI script uses a cache to produce faster
answers.
- Better memory management for the states of explicit automata.
Thanks to the aforementioned ->destroy() change, we can avoid
cloning explicit states.
- tgba_product has learned how to be faster when one of the operands
is a Kripke structure (15% speedup).
- The reduction rule for "a M b" has been improved: it can be
reduced to "a & b" if "a" is a pure eventuallity.
- More useless acceptance conditions are removed by SCC simplifications.
* Bug fixes:
- Safra complementation has been fixed in cases where more than
one acceptance conditions where needed to convert the
deterministic Streett automaton as a TGBA.
- The degeneralization is now idempotent. Previously, degeneralizing
an already degeneralized automaton could add some states.
- The degeneralization now has a deterministic behavior. Previously
it was possible to obtain different output depending on the
memory layout.
- Spot now outputs neverclaims with fully parenthesized guards.
I.e., instead of
(!x && y) -> goto S1
it now outputs
((!(x)) && (y)) -> goto S1
This prevents problems when the model defines `x' as
#define x flag==0
because !x then evaluated to (!flag)==0 instead of !(flag==0).
New in spot 0.7.1 (2011-02-07):
* The LTL parser will accept operator ~ (for not) as well
as --> and <--> (for implication and equivalence), allowing
formulae from the Büchi Store to be read directly.
* The neverclaim parser will accept guards of the form
:: !(...) -> goto ...
instead of the more commonly used
:: (!(...)) -> goto ...
This makes it possible to read neverclaims provided by the Büchi Store.
* A new ltl2tgba option, -kt, will count the number of "sub-transitions".
I.e., a transition labelled by "true" counts for 4 "sub-transitions"
if the automaton uses 2 atomic propositions.
* Bugs fixed:
- Fix segfault during WDBA minimization on automata with useless states.
- Use the included BuDDy library if the one already installed
is older than the one distributed with Spot 0.7.
- Fix two typos in the code of the CGI scripts.
New in spot 0.7 (2011-02-01):
* Spot is now able to read an automaton expressed as a Spin neverclaim.
* The "experimental" Kripke structure introduced in Spot 0.5 has
been rewritten, and is no longer experimental. We have a
developement version of checkpn using it, and it should be
released shortly after Spot 0.7.
* The function to_spin_string(), that outputs an LTL formula using
Spin's syntax, now takes an optional argument to request
parentheses at all levels.
* src/ltltest/genltl is a new tool that generates some interesting
families of LTL formulae, for testing purpose.
* bench/ltlclasses/ uses the above tool to conduct the same benchmark
as in the DepCoS'09 paper by Cichoń et al. The resulting benchmark
completes in 12min, while it tooks days (or exhausted the memory)
when the paper was written (they used Spot 0.4).
* Degeneralization has again been improved in two ways:
- It will merge degeneralized transitions that can be merged.
- It uses a cache to speed up the improvement introduced in 0.6.
* An implementation of Dax et al.'s paper for minimizing obligation
formulae has been integrated. Use ltl2tgba -Rm to enable this
optimization from the command-line; it will have no effect if the
property is not an obligation.
* bench/wdba/ conducts a benchmark similar to the one on Dax's
webpage, comparing the size of the automata expressing obligation
formula before and after minimization. See bench/wdba/README for
results.
* Using similar code, Spot can now construct deterministic monitors.
* New ltl2tgba options:
-XN: read an input automaton as a neverclaim.
-C, -CR: Compute (and display) a counterexample after running the
emptiness check. With -CR, the counterexample will be
replayed on the automaton to ensure it is correct
(previous version would always compute a replay a
counterexample when emptiness-check was enabled)
-ks: traverse the automaton to compute its number of states and
transitions (this is faster than -k which will also count
SCCs and paths).
-M: Build a deterministic monitor.
-O: Tell whether a formula represents a safety, guarantee, or
obligation property.
-Rm: Minimize automata representing obligation properties.
* The on-line tool to translate LTL formulae into automata
has been rewritten and is now at
It requires a javascript-enabled browser.
* Bug fixes:
- Location of the errors messages in the TGBA parser where inaccurate.
- Various warning fixes for different versions of GCC and Clang.
- The neverclaim output with ltl2tgba -N or -NN used to ignore any
automaton simplification performed after degeneralization.
- The formula simplification based on universality and eventuality
had a quadratic run-time.
New in spot 0.6 (2010-04-16):
* Several optimizations to improve some auxiliary steps
of the LTL translation (not the core of the translation):
- Better degeneralization
- SCC simplifications has been tuned for degeneralization
(ltl2tgba now has two options -R3 and -R3f: the latter will
remove every acceptance condition that used to be removed
in Spot 0.5 while the former will leave useless acceptance conditions
going to accepting SCC. Experience shows that -R3 is more
favorable to degeneralization).
- ltl2tgba will perform SCC optimizations before degeneralization
and not the converse
- We added a syntactic simplification rule to rewrite F(a)|F(b) as F(a|b).
We only had a rule for the more specific FG(a)|FG(b) = F(Ga|Gb).
- The syntactic simplification rule for F(a&GF(b)) = F(a)&GF(b) has
be disabled because the latter formula is in fact harder to translate
efficiently.
* New LTL operators: W (weak until) and its dual M (strong release)
- Weak until allows many LTL specification to be specified more
compactly.
- All LTL translation algorithms have been updated to
support these operators.
- Although they do not add any expressive power, translating
"a W b" is more efficient (read smaller output automaton) than
translating the equivalent form using the U operator.
- Basic syntactic rewriting rules will automatically rewrite "a U
(b | G(a))" and "(a U b)|G(a)" as "a W b", so you will benefit
from the new operators even if you do not use them. Similar
rewriting rules exist for R and M, although they are less used.
* New options have been added to the CGI script for
- SVG output
- SCC simplifications
* Bug fixes:
- The precedence of the "->" and "<->" Boolean operators has been
adjusted to better match other tools.
Spot <= 0.5 used to parse "a & b -> c & d" as "a & (b -> c) & d";
Spot >= 0.6 will parse it as "(a & b) -> (c & d)".
- The random graph generator was fixed (again!) not to produce
dead states as documented.
- Locations in the error messages of the LTL parser were off by one.
New in spot 0.5 (2010-02-01):
* We have setup two mailing lists:
- <[email protected]> is read-only and will be used to
announce new releases. You may subscribe at
- <[email protected]> can be used to discuss anything related
to Spot. You may subscribe at
* Two new LTL translations have been implemented:
- eltl_to_tgba_lacim() is a symbolic translation for ELTL based on
Couvreur's LaCIM'00 paper. For this translation (available with
ltl2tgba's option -le), all operators are described as finite
automata. A default set of operators is provided for LTL
(option -lo) and user may add more automaton operators.
- ltl_to_taa() is a translation based on Tauriainen's PhD thesis.
LTL is translated to "self-loop" alternating automata
and then to Transition-based Generalized Automata. (ltl2tgba's
option -taa).
The "Couvreur/FM" translation remains the best LTL translation
available in Spot.
* The data structures used to represent LTL formulae have been
overhauled, and it resulted in a big performence improvement
(in time and memory consumption) for the LTL translation.
* Two complementation algorithms for state-based Büchi automata
have been implemented:
- tgba_kv_complement is an on-the-fly implementation of the
Kupferman-Vardi construction (TCS'05) for generalized acceptance
conditions.
- tgba_safra_complement is an implementation of Safra's
complementation. This algorithm takes a degeneralized Büchi
automaton as input, but our implementation for the Streett->Büchi
step will produce a generalized automaton in the end.
* ltl2tgba has gained several options and the help text has been
reorganized. Please run src/tgbatest/ltl2tgba without arguments
for details. Couvreur/FM is now the default translation.
* The ltl2tgba.py CGI script can now run standalone. It also offers
the Tauriainen/TAA translation, and some options for SCC-based
reductions.
* Automata using BDD-encoded transitions relation can now be pruned
for useless states symbolically using the delete_unaccepting_scc()
function. This is ltl2tgba's -R3b option.
* The SCC-based simplification (ltl2tgba's -R3 option) has been
rewritten and improved.
* The "*" symbol, previously parsed as a synonym for "&" is no
longer recognized. This makes room for an upcoming support of
rational operators.
* More benchmarks in the bench/ directory:
- gspn-ssp/ some benchmarks published at ACSD'07,
- ltlcounter/ translation of a class of LTL formulae used by
Rozier & Vardi at SPIN'07
- scc-stats/ SCC statistics after translation of LTL formulae
- split-product/ parallelizing gain after splitting LTL automata
* An experimental Kripke interface has been developed to simplify
the integration of third party tools that do not use acceptance
conditions and that have label on states instead of transitions.
This interface has not been used yet.
* Experimental interface with the Nips virtual machine.
It is not very useful as Spot isn't able to retrieve any property
information from the model. This will just check assertions.
* Distribution:
- The Boost C++ library is now required.
- Update to Autoconf 2.65, Automake 1.11.1, Libtool 2.2.6b,
Bison 2.4.1, and Swig 1.3.40.
- Thanks to the newest Automake, "make check" will now
run in parallel if you use "make -j2 check" or more.
* Bug fixes:
- Disable warnings from the garbage collection of BuDDy, it
could break the standard output of ltl2tgba.
- Fix several C++ constructs to ensure Spot will build with
GCC 4.3, 4.4, and older 3.x releases, as well as with Intel's
ICC compiler.
- A very old bug in the hash function for LTL formulae caused Spot
to sometimes (but very rarely) consider two different LTL formulae
as equal.
New in spot 0.4 (2007-07-17):
* Upgrade to Autoconf 2.61, Automake 1.10, Bison 2.3, and Swig 1.3.31.
* Better LTL simplifications.
* Don't initialize Buddy if it has already been initialized (in case
the client software is already using Buddy).
* Lots of work in the greatspn interface for our ACSD'05 paper.
* Bug fixes:
- Fix the random graph generator not to produce dead states as documented.
- Fix synchronized product in case both side use acceptance conditions.
- Fix some syntax errors with newer versions of GCC.
New in spot 0.3 (2006-01-25):
* lbtt 1.2.0
* The CGI script for LTL translation also offers emptiness check algorithms.
* tau03_opt_search implements the "ordering heuristic".
(Submitted by Heikki Tauriainen.)
* A couple of bugs were fixed into the LTL or automata simplifications.
New in spot 0.2 (2005-04-08):
* Emptiness checks:
- the new spot::option_map class is used to pass options to
emptiness-check algorithms.
- the new emptiness_check_instantiator class is used to turn a
string such as `algorithm(option1, option2)' into an actual
instance of this emptiness-check algorithm with the given
options. All tools use this.
- tau03_opt_search implements the "condition heuristic".
(Suggested by Heikki Tauriainen.)
* Minor bug fixes.
New in spot 0.1 (2005-01-31):
* Emptiness checks:
- They all follow the same interface, and gather statistical data.
- New algorithms: gv04.hh, se05.hh, tau03.hh, tau03opt.hh
- New options for couvreur99: poprem and group.
- reduce_run() try to reduce accepting runs produced by emptiness checks.
- replay_run() ensure accepting runs are actually accepting runs.
* New testing tools:
- ltltest/randltl: Generate random LTL formulae.
- tgbatest/randtgba: Generate random TGBAs. Optionally multiply them
against random LTL formulae. Optionally check them for emptiness
with all available algorithms. Optionally gather statistics.
* bench/emptchk/: Contains scripts that benchmark emptiness-checks.
* Split the degeneralization proxy in two:
- tgba_tba_proxy uses at most max(N,1) copies
- tgba_sba_proxy uses at most 1+max(N,1) copies and has a
state_is_accepting() method
* tgba::transition_annotation annotate a transition with some string.
This comes handy to associate that transition to its high-level name.
* Preliminary support for Event-based GBA (the evtgba*/ directories).
This might as well disappear in a future release.
* LTL formulae are now sorting using their string representation, instead
of their memory address (which is still unique). This makes the output
of the various functions more deterministic.
* The Doxygen documentation is now organized using modules.
New in spot 0.0x (2004-08-13):
* New atomic_prop_collect() function: collect atomic propositions
in an LTL formula.
* Fix several typos in documentation, and some warnings in the code.
* Now compiles on Darwin and Cygwin.
* Upgrade to Automake 1.9.1, and lbtt 1.1.2.
(And drop support for older lbtt versions.)
* Support newer versions of Valgrind (>= 2.1.0).
New in spot 0.0v (2004-06-29):
* LTL formula simplifications using basic rewriting rules,
a-la Wring syntactic approximations, and Etessami's universal
and existential classes.
- Function reduce() in ltlvisit/reduce.hh is the main interface.
- Can be tested with the CGI script.
* TGBA simplifications using direct simulation, delayed simulation,
and SCC-based simplifications. This is still experimental.
* The LTL parser will now read LTL formulae written using Wring's syntax.
* ltl2tgba_fm() now has options for on-the-fly fair-loop approximations,
and Modella-like branching-postponement.
* GreatSPN interface:
- The `declarative_environment' is now part of Spot itself rather than
part of the interface with GreatSPN.
- the RG and SRG interface can deal with dead markings in three
ways (omit deadlocks from the state graph, stutter on the deadlock
and consider as a regular behavior, or stutter and distinguish the
deadlock with a property).
- update SSP interface to Soheib Baarir latest work.
* Preliminary Python bindings for BuDDy's FDD and BVEC.
* Upgrade to BuDDy 2.3.
New in spot 0.0t (2004-04-23):
* `emptiness_check':
- fix two bugs in the computation of the counter example,
- revamp the interface for better customization.
* `never_claim_reachable': new function.
* Introduce annonymous BDD variables in `bdd_dict', and start
to use it in `ltl_to_tgba_fm'.
* Offer never claim in the CGI script.
* Rename EESRG as SSP, and offer specialized variants of the
emptiness_check.
New in spot 0.0r (2004-03-08):
* In ltl_to_tgba_fm:
- New option `exprop' to optimize determinism.
- Make the `symbolic indentification' from 0.0p optional.
* `nonacceptant_lbtt_reachable' new function to help getting
accurate statistics from LBTT.
* Revamp the cgi script's user interface.
* Upgrade to lbtt 1.0.3, swig 1.3.21, automake 1.8.3
New in spot 0.0p (2004-02-03):
* In ltl_to_tgba_fm:
- identify states with identical symbolic expansions
(i.e., identical continuations)
- use Acc[b] as acceptance condition for Fb, not Acc[Fb].
* Update and speed-up the cgi script.
* Improve degeneralization.
New in spot 0.0n (2004-01-13):
* emptiness_check::check2() is a variant of Couvreur's emptiness check that
explores visited states first.
* Build the EESRG supporting code condinally, as the associated
GreatSPN changes have not yet been contributed to GreatSPN.
* Add a powerset algorithm (determinize TGBA ignoring acceptance
conditions, i.e., as if they were used to recognize finite languages).
* tgba_explicit::merge_transitions: merge transitions with same source,
destination, and acceptance condition.
* Run test cases within valgrind.
* Various bug fixes.
New in spot 0.0l (2003-12-01):
* Computation of prime implicants. This simplify the output of
ltl_to_tgba_fm, and allows conditions to be output as some of
product in dot output.
* Optimize translation of GFy in ltl_to_tgba_fm.
* tgba_explicit supports arbitrary binary formulae on transitions
(only conjunctions were allowed).
New in spot 0.0j (2003-11-03):
* Use hash_map's instead of map's almost everywhere.
* New emptiness check, based on Couvreur's algorithm.
* LTL propositions can be put inside doublequotes to disambiguate
some constructions.
* Preliminary support for GreatSPN's EESRG.
* Various bug fixes.
New in spot 0.0h (2003-08-18):
* More python bindings:
- "import buddy" works (see wrap/python/tests/bddnqueen.py for an example),
- almost all the Spot API is now available via "import spot".
* wrap/python/cgi/ltl2tgba.py is an LTL-to-Büchi translator that
work as as a cgi script.
* Couvreur's FM'99 ltl-to-tgba translation.
New in spot 0.0f (2003-08-01):
* More python bindings, still only for the spot::ltl:: namespace.
* Functional GSPN interface. (Enable with --with-gspn=directory.)
* The LTL scanner recognizes /\, \/, and xor.
* Upgrade to lbtt 1.0.2.
* tgba_tba_proxy is an on-the-fly degeneralizer.
* Implements the "magic search" algorithm.
(Works only on a tgba_tba_proxy.)
* Tgba's output algorithms (save(), dotty()) now non-recursive.
* During products, succ_iter will optimize its set of successors
using information computed from the current product state.
* BDD dictionnaries are now shared between automata and. This
gets rid of all the BDD-variable translating machinery.
New in spot 0.0d (2003-07-13):
* Optimize translation of G operators occurring at the root
of a formula (or its immediate children when the root is a
conjunction). This saves two BDD variables per G operator.
* Distribute lbtt, and run it during `make check'.
* First sketch of GSPN interface.
* succ_iter_concreate::next() completely rewritten.
* Transitions are now labelled by boolean formulae (not only
conjunctions).
* Documentation:
- Output collaboration diagrams.
- Build and distribute PDF manual.
* Many bug fixes.
New in spot 0.0b (2003-06-26):
* Everything. | https://gitlab.lrde.epita.fr/spot/spot/-/blame/b09ef5bea9324e08e63e5dc0fd910a3e619ace0e/NEWS | CC-MAIN-2022-21 | refinedweb | 8,003 | 50.02 |
Subject: [boost] Boost Modularization
From: Robert Ramey (ramey_at_[hidden])
Date: 2011-01-31 12:28:46
I believe that eventually we will want have feel the need to
address some issues in directory structure, namespaces etc.
However, I'm thinking that it's premature to do anything but
set a few policies for new libraries and submissions. The
only specific one that comes to mind is:
a) libraries should not place any header files directly in the boost
directory or in the boost namespace. The only exception
would be a "convience header" in the boost directory which
would include the headers in the library directory.
b) The library name should be the same as the namespace
name and the same as the subdirectory name in the boost
directory.
....
Robert Ramey
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2011/01/176353.php | CC-MAIN-2019-22 | refinedweb | 152 | 64.3 |
ASP.NET Tip: Sending Mail with ASP.NET 2.0
In ASP.NET 2.0, Microsoft deprecated the System.Web.Mail namespace and replaced it with System.Net.Mail. The new library introduces some new features, but it also includes some bugs in how mail is sent. Before discussing some of these in detail, let's go through some code sample (which assumes you've added a using System.Net.Mail at the top of the file):
MailMessage msg = new MailMessage(); msg.From = new MailAddress("[email protected]", "Person's Name"); msg.To.Add(new MailAddress("[email protected]", "Addressee's Name"); msg.To.Add(new MailAddress("[email protected]", "Addressee 2's Name"); msg.Subject = "Message Subject"; msg.Body = "Mail body content"; msg.IsBodyHtml = true; msg.Priority = MailPriority.High; SmtpClient c = new SmtpClient("mailserver.domain.com"); c.Send(msg);
The code is similar with some minor changes to how you address the message. Instead of constructing an address, you can let the system do that for you. If you specify an e-mail address and a name, it will automatically display in the message as this:
"Person's Name" <[email protected]>
This is the "proper" form for an e-mail address. You can add multiple addresses to the To, CC, and BCC collections in the same way as shown above. This is programmatically easier to do than sending lots of messages—just add multiple addresses to the BCC property in order to send a mass mailing.
Now, About Those Bugs...
As previously mentioned, this new namespace has a couple of bugs. The first is when you send a message the headers are all added in lowercase letters. While the RFC for SMTP mail doesn't specify how the headers should be capitalized, many spam filters restrict messages where the headers are not properly capitalized.
The other bug deals with the Priority setting, which should mark a message as important within the mail client. Because of the way the header is formatted, my mail program (Eudora) doesn't recognize it as the priority flag and doesn't mark the message as important. While this seems trivial, it's a change from the System.Web.Mail version for no apparent reason. I'm continuing to research this and if I can't find a good fix, I may switch back to System.Web.Mail and deal with the warning messages that Visual Studio displays about System.Web.Mail being deprecated.. | https://www.codeguru.com/csharp/.net/net_asp/email/article.php/c12755/ASPNET-Tip-Sending-Mail-with-ASPNET-20.htm | CC-MAIN-2019-35 | refinedweb | 410 | 60.41 |
Github :
Playlist :
SvelteKit is a framework for building extremely high-performance web apps..
The easiest way to start building a SvelteKit app is to run npm init:
To get started, open up a new terminal window and initiate your svelte application using the command below. Note if you don't have npm installed, you'll need to get it. You can install npm by installing Node.JS, via the link here.
Once you have Node.JS and NPM installed, run the command below. Before you do that though, make sure you use cd to move into the folder you want to create your new Svelte application in.
npm init svelte@next my-svelte-app
➜ lets-play-with-sveltejs git:(master) npm init svelte@next
npx: installed 5 in 0.874s
create-svelte version 2.0.0-next.136
Welcome to SvelteKit!
This is beta software; expect bugs and missing features.
Problems? Open an issue on if none exists already.
✔ Where should we create your project?
(leave blank to use current directory) …
✔ Directory not empty. Continue? … yes
✔ Which Svelte app template? › SvelteKit demo app
✔ Add type checking? › TypeScript
✔ Add ESLint for code linting? … No / Yes
✔ Add Prettier for code formatting? … No / Yes
✔ Add Playwright for browser testing? … No / Yes
Your project is ready!
✔ Typescript
Inside Svelte components, use <script lang="ts">
✔ ESLint
✔ Prettier
✔ Playwright
Install community-maintained integrations:
Next steps:
1: npm install (or pnpm install, etc)
2: git init && git add -A && git commit -m "Initial commit" (optional)
3: npm run dev -- --open
To close the dev server, hit Ctrl-C
Stuck? Visit us at
When you run this command, you'll auto generate a Svelte template in a folder called my-svelte-app. Svelte will guide you through a number of options. Select your preferences. The image below shows the one's I have selected. For the purposes of this guide, I will be using the Skeleton project.
Options for Selecting in SvelteKit
Finally, run the following command to cd into your svelte directory:
cd my-svelte-app
And then install all of your dependencies using the following line:
npm i
If you are familiar with other frameworks, then Svelte will feel familiar. Here is an overview of the file structure in Svelte, for the project we have just made:
static <-- where we store all of our public assets like favicons, images, and fonts
|- favicon.png <-- our favicon
tests <-- a folder to store our tests
|- test.js <-- an example test using @playwright
src <-- our main Svelte app files
|- routes <-- a folder to store all of our routes in
|-- index.svelte <-- our index route file. This will be the file displayed at the route of the site
|- app.d.ts <-- our core Svelte app file
|- app.html <-- our main index file where the app will appear
.gitignore <-- files we wish to ignore for git
.npmrc <-- config file for npm
.prettierrc <-- config file for prettier
.eslintrc.cjs <-- config file for eslint
package.json <-- our NPM installed packages
playwright.config.js <-- config file for playwright
svelte.config.js <-- config file for svelte itself
tsconfig.json <-- config file for typescript
Our basic Svelte application is ready to go. If you want to see how it looks, you can serve it on your local computer on the URL by running the following command in your Svelte application folder:
npm run dev
If you visit in your browser, you should see something like this:
Our first Svelte Application
To make a new route in Sveltekit, simply make a new file within the routes folder. For example, if you make a file called about.svelte, then it will show up at. Another way you can do this is to make a new folder called about, and put index.svelte in that folder, will work.
Try it yourself
Create a new page within your /src/routes folder, called about.svelte. Now when you go to, you will be able to access that page. Similarly, you can try making a folder called about with a file placed inside called index.svelte
To run your Svelte application on a server or locally on a Node.JS server, you need to use an adapter. If you want to run your Svelte application on a Node Server, install @sveltejs/adapter-node@next via the following line:
install @sveltejs/adapter-node@next
npm i @sveltejs/adapter-node@next
Now we have to change our svelte.config.js file. We need to use the new adapter, and change our kit.adapter object within the config file. You can replace the contents of your svelte.config.js with the code below, but we're only changing two lines - our adapter import, and then adding the build directory in your config:
// We have changed the adapter line to use adapter-node@next
import adapter from '@sveltejs/adapter-node@next';
import preprocess from 'svelte-preprocess';
svelte.config.js
/** @type {import('@sveltejs/kit').Config} */
const config = {
// Consult
// for more information about preprocessors
preprocess: preprocess(),
kit: {
// We have changed this to point to a build directory
adapter: adapter({ out: 'build' })
}
};
export default config;
SvelteKit then does all the heavy lifting of setting up an app with server-side rendering, routing, and more, just like Next.js. However, SvelteKit also uses an adapter that can export your app to a specific platform and adapts well to serverless architecture. Since serverless architecture is becoming more prominent, it’s a good reason to try SvelteKit out.
You can use the official SvelteKit adapters for platforms like Netlify and Vercel.
By also providing features including server-side rendering, code splitting, and more, SvelteKit is especially useful for beginnings.
SvelteKit sets up a routing system where files in your src/routes determine the routes in your app. This directory can be changed by editing svelte.config.cjs.
Note that src/routes/index.svelte is the homepage.
By inputting npm run dev, you start a development server. SvelteKit uses Vite behind the scenes making updates are blazing fast.
At this point, install the static adapter to build the pre-rendered version of the entire app by using the following:
We will add another route to the counter app that SvelteKit bootstrapped for us by inputting about.svelte to the src/routes/ directory.
<!-- about page -->
<svelte:head>
<title>About</title>
</svelte:head>
<h1>About Page</h1>
<p>This is the about page. Click <a href="/">here</a> to go to the index page.</p>
As you can probably guess, this will set up another route for us at /about. To navigate to this page, we will add a link to the index page as well.
The index page already has the following line:
Visit svelte.dev to learn how to build Svelte apps.
Visit the about page
SvelteKit allows you to disable this router by altering the Svelte configuration file svelte.config.cjs. Setting the router property to false disables the app-wide router. This will cause the app to send new requests for each page, meaning the navigation will be handled on the server side.
In this guide we've looked at how to use SvelteKit to create your first Svelte application with routes. Let's look at what we've learned:
How to set up SvelteKit and create the basic structure of your Svelte application.
How to use routes in SvelteKit, so you can have multiple pages on your application.
How to update your config file to use the right adapter, based on where you want to deploy your application.
How to build and run your application locally on a Node.JS server. | https://tkssharma.com/lets-learn-sveltekit-server-side-rendering/ | CC-MAIN-2022-40 | refinedweb | 1,260 | 66.33 |
Trying to set a cron job on my Skygear python cloud code, but not sure what I should enter in the decorator. I only know that it will work for units in second, but how to schedule a job to run every 12 hours? It is hard to calculate the seconds every time.
My code is like this, the function is to call a POST request:
@skygear.every('@every 43200s')
def post_req():
print ('scheduled to run every 12 hours')
url = myurl
ref = something
r = requests.post(myurl, data = {'token':some_token, 'ref':something})
It seems like
skygear.every also accepts crontab notation… so
0 */12 * * * could also do the trick.
Edit: Reading the robfig/cron docs, the best solution would actually be just
@every 12h | https://codedump.io/share/aFMB84v5PEPE/1/how-to-set-a-cron-job-on-skygear-to-run-every-12-hours | CC-MAIN-2018-26 | refinedweb | 124 | 73.68 |
I'm trying to evaluate javascript in Java by using the ScriptEngine class. Here is a short example of what I am trying to do:
import javax.script.ScriptEngineManager;
import javax.script.ScriptEngine;
public class Test {
public static void main(String[] args)
{
ScriptEngine engine = new ScriptEngineManager().getEngineByName("js"); //Creates a ScriptEngine
Object obj = engine.eval("var obj = { value: 1 }; return obj; "); // Evals the creation of a simple object
System.out.println(obj.value); // I get an invalid token error when trying to print a property of the object
}
}
Note: The following is for Java 8, using the Nashorn engine.
First, to make the code compile, remove the
.value from the
println() statement.
obj is declared to be type
Object, and
Object doesn't have a
value field.
Once you do that, you get the following exception when running the code:
Exception in thread "main" javax.script.ScriptException: <eval>:1:25 Invalid return statement var obj = { value: 1 }; return obj; ^ in <eval> at line number 1 at column number 25
That is because you don't have a function, so you cannot call
return. The return value of the script is the value of the last expression, so just say
obj.
Now it will run and print
[object Object]. To see what type of object you got back, change to
println(obj.getClass().getName()). That will print
jdk.nashorn.api.scripting.ScriptObjectMirror. I've linked to the javadoc for your convenience.
ScriptObjectMirror implements
Bindings which in turn implements
Map<String, Object>, so you can call
get("value").
Working code is:
import javax.script.*; public class Test { public static void main(String[] args) throws ScriptException { ScriptEngine engine = new ScriptEngineManager().getEngineByName("js"); Bindings obj = (Bindings)engine.eval("var obj = { value: 1 }; obj; "); Integer value = (Integer)obj.get("value"); System.out.println(value); // prints: 1 } } | https://codedump.io/share/gaFBOW2GfJFi/1/java-returning-an-object-from-scriptengine-javascript | CC-MAIN-2017-51 | refinedweb | 301 | 60.11 |
Created on 2014-06-29 18:27 by Saimadhav.Heblikar, last changed 2018-12-11 22:29 by terry.reedy.
(This issue is continuation of)
This issue is about a feature to execute any 3rd party code checker from within IDLE.
I am attaching an initial patch(so as to get reviews, is functional logic wise, but missing a lot UI/UX wise.)
It is implemented as an extension.
Read everything, looks plausible ;-). .run_checker assumes api:
<program name> pat_to_something.py <additional args>
I will download pyflakes tomorrow and see if everything works on Windows.
If so, some immediate issues:
1. Only use tempfile if editor is 'dirty'.
2. Allow full path in config for programs not on system path.
I want to broaden this to 'external' programs. Will discuss.
Both versions appear to be trying to access non-existent configuration parameters.
Warning: configHandler.py - IdleConf.GetOption -
problem retrieving configuration option 'enabled'
from section 'pyflakes'.
returning default value: None
Warning: configHandler.py - IdleConf.GetOption -
problem retrieving configuration option 'command'
from section 'pyflakes'.
returning default value: None
v1. Trial on file with no warnings - need end of report summary like find in files so actually know program ran. Trial on file with warnings hangs Idle.
v2. Trying to enable pyflakes gives this
Traceback (most recent call last):
File "F:\Python\dev\4\py34\lib\tkinter\__init__.py", line 1487, in __call__
return self.func(*args)
File "F:\Python\dev\4\py34\lib\idlelib\Checker.py", line 290, in ok
self.close()
File "F:\Python\dev\4\py34\lib\idlelib\Checker.py", line 296, in close
self._checker.update_listbox()
AttributeError: 'function' object has no attribute 'update_listbox'
In v3, there is no subprocess usage.
It imports the checker specific module,does its job and returns the result of processing.
The checker specific files are to be installed from TestPyPI(atleast for now). It has to be installed via pip.
It will be detected automatically in IDLE. There will be a feature to pass additional arguments onto the checker(though not yet implemented in this patch).
This patch also supports the feature to modify the editor buffer.
To test out this patch, kindly install two packages
pip install -i IDLEPyflakes IDLEWhitespaceRemover
(I used the reindent.py file in Tools/scripts in IDLEWhitespaceRemover)
Again, this is more a proof of concept patch. I we are to go ahead in this direction, I will be writing it from scratch again and also with tests.
Checker, is actually a misnomer if we do support the "modify buffer" feature.
This seem like a new feature for IDLE, so I'd imagine it would not be included in either 2.7 or 3.4. Correct me if I'm wrong.
>This seem like a new feature for IDLE, so I'd imagine it would not be >included in either 2.7 or 3.4. Correct me if I'm wrong.
Hi,
Yes, it is a new feature. I think it will be included in both 2.7 and 3.4(apart from the latest version 3.5), if my understanding of pep434 is correct.
From pep434
>The PEP would apply to changes in existing features and addition of >small features, such as would require a new menu entry, but not >necessarily to possible major re-writes such as switching to themed >widgets or tabbed windows
Though, I cant say for sure into what category this feature would fall into, i.e. whether it is a "small feature" or not.
Small feature requiring a new menu entry.
I was going to work on this 'today' (well, Saturday), but I injured my
eye a bit and cannot see well enough to work on code. I am hoping things
will be better after sleeping for a night.
Attached is a patch which adds capability to work with external programs which can modify the source file(Like whitespace remover tool).
It works with all 4 boolean combinations for {show result, reload source}.
The test coverage will be increased, depending on what feature we choose to keep. The GUI is tested thoroughly. The runner code is tested at a basic level.
Please let me know your comments on this.
It's unfortunate that this has gone dormant for so long. Is anyone interested in picking this up? I'd be happy to provide guidance and feedback.
This issue is specifically based on msg195711 of #18704. Anyone working on this should read it.
Saimadhav's work was part of his Google Summer of Code (GSOC) project, which ended soon after V4 was submitted. I recorded reviews of V1 and V2 above. I don't remember which tests and reviews, if any, I did with V3 and V4.
Some needed changes starting with v4:
Checker.py should be checker.py.
Implement it as a feature, not an extension.
Access the in-memory config object for .idlrc/checker.cfg directly rather than through idleConf. idleConf accesses the fixed defaults and mutable user overrides as if they are one config. I am a bit surprised that idleConf worked without an empty idlelib/config-checker.def.
The main blocker was and is keeping the GUI responsive while the 3rd party program is executing. V1 & V2 used subprocess through a pipe. V3 did not use subprocess. V4 uses subprocess without a pipe. It has this blocking polling loop:
while process.poll() is None:
continue
If a 3rd party program is expected to revise a file, the corresponding editor should be read-only for the duration.
I intended that any issue like this should start with a coherent specification separate from the code. A doc patch is needed and that might be enough.
Since this issue was opened, it has been more firmly stated that the stdlib should not have any hard-coded dependencies on 3rd party code. In April 2016, the proposal for a GSOC project to add a GUI front end for pip got no opposition and 2 overt approvals on pydev. In August 2016, the result was rejected by the release manager and one of the additional approvers because it necessarily used the (public) pip (command-line) interface.
Not withstanding that, there could be a separate idle-checker repository containing a checker.cfg with entries for multiple checkers. Such a file would be needed to do manual tests with multiple checkers. This could include a typing annotation checker, like mypy, which is a new type of code checker added since this issue was created.
Tal, what do you think is the easiest way to turn a .diff on the tracker into a git master branch?
I'll get this up on a git branch so that we can continue hacking on it.
Idlelib modules OutputWindow and configHandler are now outwin and config.
I discovered a non-checker .py file consumer. Pygame Zero enables use of pygame to create games without boilerplate.
One can run a mygame.py file containing, for instance,
WIDTH = 800
HEIGHT = 600
def draw():
screen.clear()
screen.draw.circle((400, 300), 30, 'white')
from a command line with 'pgzrun mygame.py'.
In this case, one can instead run from an IDE by adding boilerplate lines 'import pgzrun' and 'pgzrun.go()' at top and bottom. But there might be people (or instructors) who prefer, if possible, to enable pgzrun as a 3rd party .py file consumer with the 'checker' option.
One issue with configparser is that it does not read comments in a .cfg file and therefore overwrites them when overwriting a file. We might add a multiline 'comment' option to each application section.
Update: I've nearly got an updated version ready and working based on the current master branch. I hope to have a PR up by tomorrow.
See PR9802. | https://bugs.python.org/issue21880 | CC-MAIN-2019-09 | refinedweb | 1,285 | 68.67 |
WSE 3.0 Kerberos Security problem
Discussion in 'ASP .Net Web Services' started by iidelchik@yahoo:
- 616
- Shikari Shambu
- Dec 29, 2004
WSE 2.0 - role based security for Web ServicesHari Menon, Nov 28, 2003, in forum: ASP .Net Security
- Replies:
- 0
- Views:
- 174
- Hari Menon
- Nov 28, 2003
Getting error Security Authority cannot be contacted while using KerberosEngr, Feb 12, 2010, in forum: ASP .Net Security
- Replies:
- 0
- Views:
- 736
- Engr
- Feb 12, 2010
WSE 2.0 - The security token could not be authenticated or authoriDavid M. Young, Jun 11, 2004, in forum: ASP .Net Web Services
- Replies:
- 2
- Views:
- 804
Problem with parsing SOAP security element using older namespace in WSE 2kaush, Oct 18, 2005, in forum: ASP .Net Web Services
- Replies:
- 0
- Views:
- 147
- kaush
- Oct 18, 2005 | http://www.thecodingforums.com/threads/wse-3-0-kerberos-security-problem.786765/ | CC-MAIN-2015-11 | refinedweb | 131 | 65.62 |
Ferris uses the concept of authorization chains to control access to controllers and their actions. The concept is simple: A authorization chain consists of a series of functions (or callables). When trying to determine to allow or deny a request each function in the chain is called. If any of the functions return False then the request is rejected. Chains are specified using Controller.Meta.authorizations and the add_authorizations() decorator.
Here’s an example of requiring a user to be logged in to access a controller:
from ferris import Controller def require_user(controller): return True if controller.user else False class Protected(Controller): class Meta: authorizations = (require_user,) ...
You can also use add_authorizations() instead:
@route @add_authorizations(require_user) def protected_action(self): ...
Adds additional authorization chains to a particular action. These are executed after the chains set in Controller.Meta.
To add authorizations globally use the global event bus.
As shown above a simple authorization function is rather trivial:
def require_user(controller): return True if controller.user else False
Note that you can also include a message:
def require_user(controller): return True if controller.user else (False, "You must be logged in!")
Or if you’d like to redirect or present your own error page you can return any valid response:
def require_user(controller): if controller.user: return True url = users.create_login_url(dest_url=controller.request.url) return controller.redirect(url)
The module ferris.core.auth includes some built-in useful authorization functions and utilities.
Requires that a user is logged in
Requires that a user is logged in and that the user is and administrator on the App Engine Application
There are a few function generators that use predicates. These are useful shortcut authorization functions.
Generates an authorization function that requires that users are logged in for the given prefix.
Generates an authorization function that requires that the user is an App Engine admin for the given prefix.
You can also create your own generators using precidates.
Then use predicate_chain to combine them with your authorization function.
Returns the result of chain if predicate returns True, otherwise returns True. | http://ferris-framework.appspot.com/docs21/users_guide/authorization_chains.html | CC-MAIN-2017-13 | refinedweb | 346 | 50.94 |
Php windows application työt
Markitng sales
Windows Muuta tai ei tiedä Osaan myydä tavaroita
htgfhjgf gfhdddddddn ngfhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhn
I want r programming and windows expert
We are looking for C# Windows App preferrably with DevExpress experience. Philippine based. We will be paying monthly/hourly - to be negotiated...
Someone with good knowlegde of C++, C++ CLI, windows forms, Video decoding. freelancer to support me on writing a patch for microsoft windows 2008
any php developer who can help me for fcm webservices
In my Cpanel there is no section to install modules and extensions
I have an application that creates a XML file for Word using UTF8 encoding, here is the header rows: <?xml version="1.0" encoding="utf-8" standalone="yes" ?> <?mso-application progid="[kirjaudu nähdäksesi URL:n]" ?> It works fine for almost all my Windows 10 customers, but for some of them the created XML file does not contain the pla...
I need a small application writen with any one of the following language:php,html,vb,python the application contain Form Email,phone,Date of birth,Country Thats...
Windows 10, shinyr, r scripting,r programming,windows 10 app
I want a website. Where Account creation and payment gateway should be
We are looking for a developer that know webrtc and tokbox with php mysql. please show us demo that you have worked tokbox....
I want expert in r programming,software development,r-scripting,windows
Hi I need some one to set-up a root or a CentOS 7 that is connected to my domain , php mail send and more
Want to build a simple Simon Says App that has Red, Blue, Green and Yellow and Up Down Left Right. Want to be able to brand the app. Also want to be able to have a resting screen with a logo.
R-scripting,R-programming,software development,windows...
R programming, windows, software developer, r-scripting
hello, i have this API documentation and need a script/app that can scrape an excel and make AWB labels from each position. i am attaching API example and excel list [kirjaudu nähdäksesi URL:n] thank you
PHP Page is running fine with all functionalities, all need to do is to add logo & design it so that it looks nice. Both are login pages (Admin & Client).
Verificação da segurança de servidores e sua manutenção.
Only Spanish native speakers // No automatic responses Buenas tardes, Me interesa programar y automatizar las siguientes tareas bajo windows: 1.- Se recibe un email de confirmación, donde se indica: a. Duración en minutos del vídeo (de tres posibles) y b. Email del usuario. 2.- Se enciende la cámara de video y se graban durante los minutos elegidos, 3.- Se nombra el ...
## Automatic messages will not be answered ## This project is based in the remote control of a Sony camera model AX53: [kirjaudu nähdäksesi URL:n] The camera will be mounted on a Robot. The robot move the camera and zoom. The main porpoise of this project is to control the camera remotely. We will use a tablet running on windows 10 to command this system. It means that we will have...
You will create: New Windows Docker Image, published to dockerhub - Run docker container - use: 'docker exec -it c7 cmd' - From the cmd line call: [kirjaudu nähdäksesi URL:n] Flow -------- [kirjaudu nähdäksesi URL:n] -> msbuild [kirjaudu nähdäksesi URL:n] -> msbuild task (JavaScriptCompressorTask) -> yuicompressor -> javascriptcompress E...
I want professional and expert in R programming, software development and windows.
Searching for an expert who can give me training on windows server 2016. Thank you
I have a WinForms Application, it has a element Host control that contains a WPF program. The WPF has a Datagrid on it. When the user click's on the row in the data grid the Textbox on the Windows Form must contain the details of that row in the WPF form grid.
Responsibilities • Design, develop & maintain complex web-based applications using Java, Internet technologies, and databases. • Detailed design and delivery of software solutions for complex business requirements. • Developing and implementing it in the areas of integration, configuring ability, manageability, reliability, and performance. • Developing an application on th...
Make a small tweak in the script so that i can run the script without the answer folder. I will share the script on chat I can max pay $10 Thanks...
1. The application is developed in ASP.NET MVC 5.0, plus ajax javascript. It requires database. 2. Task is to install this app on my local laptop: Surface 4 Pro, Windows 10 Pro 3. Task is to be performed using AnyDesk or TeamViewer.
Looking for Graphic & photoshop designers for our Employment Application Pack design
I want expert in R programming and windows..
Error Message: The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. Minify xml script: ([kirjaudu nähdäksesi URL:n]) <?xml version="1.0" encoding="utf-8"?> <Proje...
DockerFile: # escape=` FROM [kirjaudu nähdäksesi URL:n]:4.8 AS build WORKDIR /app NuGet Package: [kirjaudu nähdäksesi URL:n] Error Message: The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build...
For sale Eat Zilla application for ordering food online ..
Make a small tweak in the script so that i can run the script without the answer folder. I will share the script on chat I can max pay $10 Thanks
I want a windows software of my design. I am ready all PSD designs and wireframe. It should develop with VC++(MFC) on Visual Studio 2007. I will share you wireframe via private chatting. thanks | https://www.fi.freelancer.com/job-search/php-windows-application/ | CC-MAIN-2019-39 | refinedweb | 968 | 55.95 |
std::remainder std:.
std::fmod, but not
std::remainder is useful for doing silent wrapping of floating-point types to unsigned integer types: (0.0 <= (y = std::fmod( std::rint(x), 65536.0 )) ? y : 65536.0 + y) is in the range
[-0.0 .. 65535.0], which corresponds to unsigned short, but std::remainder(std::rint(x), 65536.0 is in the range
[-32767.0, +32768.0], which is outside of the range of signed short.
[edit] Example
#include <iostream> #include <cmath> #include <cfenv> #pragma STDC FENV_ACCESS ON int main() { std::cout << "remainder(+5.1, +3.0) = " << std::remainder(5.1,3) << '\n' << "remainder(-5.1, +3.0) = " << std::remainder(-5.1,3) << '\n' << "remainder(+5.1, -3.0) = " << std::remainder(5.1,-3) << '\n' << "remainder(-5.1, -3.0) = " << std::remainder(-5.1,-3) << '\n'; // special values std::cout << "remainder(-0.0, 1.0) = " << std::remainder(-0.0, 1) << '\n' << "remainder(5.1, Inf) = " << std::remainder(5.1, INFINITY) << '\n'; // error handling std::feclearexcept(FE_ALL_EXCEPT); std::cout << "remainder(+5.1, 0) = " << std::remainder(5.1, 0) << '\n'; if(fetestexcept(FE_INVALID)) std::cout << " FE_INVALID raised\n"; }
Possible output:
remainder(+5.1, +3.0) = -0.9 remainder(-5.1, +3.0) = 0.9 remainder(+5.1, -3.0) = -0.9 remainder(-5.1, -3.0) = 0.9 remainder(-0.0, 1.0) = -0 remainder(5.1, Inf) = 5.1 remainder(+5.1, 0) = -nan FE_INVALID raised | http://en.cppreference.com/w/cpp/numeric/math/remainder | CC-MAIN-2016-36 | refinedweb | 234 | 55.61 |
Configuring failover clusters in Windows Server can help ensure near-constant availability. Here are several potential troubleshooting scenarios.
Last month, I looked at some of the more common issues with Windows Server 2008 R2 Failover Clustering, and examined how to accurately troubleshoot those problems.
Remember, the current support policy is that, for a Windows Server 2008 or Windows Server 2008 R2 Failover Clustering solution to be considered an officially supported solution by Microsoft Customer Support Services (CSS), it must meet the following criteria:

- All hardware and software components must be marked as "Certified for Windows Server 2008 R2" (or "Certified for Windows Server 2008," as appropriate).
- The fully configured solution (servers, network and storage) must pass all tests in the Validate a Configuration wizard.
Here are several scenarios that may help expedite or inform your next troubleshooting efforts. These represent some of the more common issues in supported Windows 2008 R2 Failover Clusters, as well as the steps you may need to take to resolve them.
The Cluster Name Object (CNO) is very important, because it’s the common identity of the Cluster.
It’s created automatically by the Create Cluster wizard and has the same name as the Cluster. Through this account, it creates other Cluster Virtual Computer Objects (VCOs) as you configure new services and applications on the Cluster. If you delete the CNO or take permissions away, it can’t create other objects as required by the Cluster until it’s restored or the correct permissions are reinstated.
As with all other objects in Active Directory, there’s an associated objectGUID. This is how Failover Cluster knows you’re dealing with the correct object. If you simply create a new object, a new objectGUID is created as well. What we need to do is restore the correct object so that Failover Cluster can continue with its normal operations.
When troubleshooting this, we need to find out two things from the Cluster resource. From Windows PowerShell, run the command:
Get-ClusterResource "Cluster Name" | Get-ClusterParameter CreatingDC,objectGUID
This will retrieve the needed values. The first parameter we want is the CreatingDC. When Failover Cluster creates the CNO, we note the domain controller (DC) upon which it was created. For any activity we need to do with the Cluster (create VCOs, bring names online and so on), we know to go to this DC to get the object and security. If it isn’t found on that DC or that DC isn’t available, we’ll search any others that respond, but we know to go here first.
The second parameter is the objectGUID to ensure we’re talking about the correct object. For our example, the Cluster Name is CLUSTER1, the Creating DC is DC1 and the objectGUID is 1a3cf049cf79614ebd94670560da6f04, like so:
Object        Name        Value
------        ----        -----
Cluster Name  CreatingDC  \\DC1.domain.com
Cluster Name  ObjectGUID  1a3cf049cf79614ebd94670560da6f04
We’d need to log on to the DC1 machine and run Active Directory Users and Computers. If there’s a current CLUSTER1 object, we can check to see if it has the proper attributes. One note about the display you'll see: the Active Directory attribute editor is not initially going to show you the GUID in the form shown here, because it doesn't display the value in hexadecimal byte format.

What you’re initially going to see is 49f03c1a-79cf-4e61-bd94-670560da6f04. The hexadecimal format does a switch, and it works in pairs, which is a little confusing. If you take the first four pairs and reverse their order, 49f03c1a becomes 1a3cf049. Swapping the next two pairs turns 79cf into cf79, and then 4e61 into 614e. The remaining pairs stay the same.
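If you’d rather not do the byte swapping by hand, a short Windows PowerShell snippet can produce the same hexadecimal string Failover Clustering reports. This is just an illustrative sketch using the example GUID from above:

$guid = [guid]'49f03c1a-79cf-4e61-bd94-670560da6f04'
# ToByteArray() returns the bytes in the mixed-endian order that the
# attribute editor's hexadecimal view uses, so joining the bytes as
# two-digit hex values reproduces the value Failover Clustering sees.
($guid.ToByteArray() | ForEach-Object { $_.ToString('x2') }) -join ''
# Output: 1a3cf049cf79614ebd94670560da6f04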
You must bring up the properties of the objectGUID in the attribute editor to see it in the hexadecimal format that Failover Clustering sees. Because the object we found isn't the proper one, we must first delete it to take it out of the picture before restoring the proper one.
There are several ways of restoring the object. We could use an Active Directory Restore, a utility such as ADRESTORE or the new Active Directory Recycle Bin (if running a Windows 2008 R2 DC with an updated schema). Using the new Active Directory Recycle Bin makes things much easier and is the most seamless process for restoring deleted Active Directory objects.
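Note that the Active Directory Recycle Bin is off by default and must have been enabled before the deletion occurred. As a hedged sketch (assuming a forest named domain.com, matching the example DC above, at the Windows Server 2008 R2 forest functional level), the one-time, irreversible enablement looks like this:

Import-Module ActiveDirectory
Enable-ADOptionalFeature 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'domain.com'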
With the Active Directory Recycle bin, we can search to find the object to restore with the Windows PowerShell command:
Get-ADObject -Filter 'isDeleted -eq $true -and samAccountName -eq "CLUSTER1$"' -IncludeDeletedObjects -Properties * | Format-List samAccountName,objectGUID
That command is going to search for any deleted object with the name CLUSTER1 in the Active Directory Recycle Bin. It will give us the account name and objectGUID. If there are multiple items, it will show them all. When we see the one we want, we’d display it as this:
samAccountName : CLUSTER1$
objectGUID     : 49f03c1a-79cf-4e61-bd94-670560da6f04
Now we need to restore it. After we delete the incorrect one, the Windows PowerShell command to restore it would be:
Restore-ADObject -Identity 49f03c1a-79cf-4e61-bd94-670560da6f04
This will restore the object in the same location (organizational unit, or OU) and keep the same permissions and computer account password known by Active Directory.
This is one of the benefits of the Active Directory Recycle Bin when compared to something like a utility such as ADRESTORE. Using ADRESTORE, you’d have to reset the password, move it to the proper OU, repair the object in Failover Clustering and so on.
With the Active Directory Recycle Bin, we simply bring the Cluster Name resource online. This is also a better option than doing a restore of Active Directory, especially if there have been new computer/user objects created, if there are old ones that no longer exist and would have to be deleted again, and so on.
First, let’s quickly recap the definition of Cluster Shared Volumes (CSVs). CSVs simplify the configuration and management of Hyper-V virtual machines (VMs) in Failover Clusters. With CSV on a Failover Cluster that runs Hyper-V, multiple VMs can use the same LUN (disk), yet fail over (or move from node to node) independently of one another. The CSV provides increased flexibility for volumes in clustered storage. For example, you can keep system files separate from data to optimize disk performance, even if the system files and the data are contained within virtual hard disk (VHD) files.
In the properties for all network adapters that carry cluster communication, make sure “Client for Microsoft Networks” and “File and Printer Sharing for Microsoft Networks” are enabled to support Server Message Block (SMB). This is required for CSV. The server is running Windows Server 2008 R2, so it automatically provides the version of SMB that’s required by CSV, which is SMB2. There will be only one preferred CSV communication network, but enabling these settings on multiple networks helps the Cluster have resiliency to respond to failures.
Redirected Access means all I/O operations are going to be "redirected" over the network to another node that has access to the drive. There are basically three reasons why a disk is in Redirected Access mode: 1) someone manually placed the volume in Redirected Access mode; 2) a backup of the volume is in progress; or 3) the node has lost direct connectivity to the storage.
In our scenario, we’ve ruled out Option 1 and Option 2. This leaves us with Option 3. If we look in the System Event Log, we’d see the event “Event ID: 5121” from Failover Clustering.
Here’s the definition of that log entry: Cluster Shared VolumeCSV ‘Cluster Disk x’.
Taking that stance, we’d also look right before this event for any hardware-related event. So we’d look for events like 9, 11 or 15 that point to a hardware or communication issue. We’d look in Disk Management to see if we could physically see the disk. In most cases, we’ll see some other errors. Once we correct the problem with the back end, we can bring the disk out of this mode.
Keep in mind that the CSV will remain running as long as at least one node can communicate with the storage-attached network. This is why it would be in a “redirected” mode. All writes to the drive are sent to the node that can communicate and the Hyper-V VMs will continue running. There may be a performance hit on those VMs, but they’ll continue to run. So we’ll never really be out of production, which is a good thing.
There’s only one “true” owner of the drive and it’s called the Coordinator Node. Any type of metadata writes to the drive would be done by this node only.
When you open Explorer or Disk Management, it’s going to want to open the drive so it can do any metadata writes (if that’s the intention). Because of this, any drive it doesn’t own will get redirected over the network to the Coordinator node. This is different than the drive being in “redirected access.”
When troubleshooting this, Failover Cluster Management will show the drive as online. So first you should look to see what events are logged. In the System Event Log, you could see these events from Failover Clustering:
Event ID: 5120
Cluster Shared Volume ‘Cluster Disk x’ is no longer available on this node because of ‘STATUS_BAD_NETWORK_PATH(c00000be).’ All I/O will temporarily be queued until a path to the volume is reestablished.
Event ID: 5142
Cluster Shared Volume ‘Cluster Disk x’ is no longer accessible from this cluster node because of error ‘ERROR_TIMEOUT(1460).’ Please troubleshoot this node’s connectivity to the storage device and network connectivity.
Those event logs are timing out trying to get over the network to the Coordinator Node. So then you’d look to see if there are any other errors in the System Event Log that would point to network connectivity between the nodes. If there are, you need to resolve them. Things such as a malfunctioning or disabled network card can cause this.
Next, you’d want to check basic network connectivity between the nodes. What you first need to verify is the network over which your CSV traffic is traveling. The way Failover Clustering chooses the network to use for CSV is by highest metric value. This is different from how Windows identifies the network.
The Failover Cluster Network Fault Tolerance adapter (NETFT) has its own metric system that it uses internally. All networks it detects that have a default gateway will be given metrics of 10000, 10100 and so on. All networks that don't have a default gateway start at 1000, 1100 and so on. Using Windows PowerShell, you can use the command Get-ClusterNetwork | FT Name, Metric, Role to see how the NETFT adapter has defined them. You'd see something similar to:
Name         Metric
----         ------
Management    10100
CSV Traffic    1000
LAN-WAN       10000
Private        1100
With these four networks, the network I've identified for CSV is called CSV Traffic. The IP address I'm using for it is 1.1.1.1 for Node1 and 1.1.1.2 for Node2, so I would first test basic network connectivity with PING between those IP addresses.
The next step is to attempt an SMB connection using the IP Addresses. This is just what Failover Clustering is going to do. A simple NET VIEW \\1.1.1.1 will suffice to see if there’s a response. What you should get back is either a list of shares or a message: “There are no entries in the list.”
This indicates that you could make a connection to that share. However, if you get the message “System error 53 has occurred. The network path was not found,” this indicates a TCP/IP configuration problem with the network card.
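If NET VIEW is inconclusive, a bare TCP probe of the SMB port on the CSV network can separate name-resolution problems from transport problems. A minimal sketch in Python, reusing the example address from above (success only proves the port answers, not that SMB itself is healthy):

import socket

try:
    # SMB2 listens on TCP 445; probe the CSV address of the other node
    with socket.create_connection(("1.1.1.1", 445), timeout=5):
        print("SMB port reachable")
except OSError as exc:
    print("connection failed:", exc)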
Having “Client for Microsoft Networks” and “File and Printer Sharing for Microsoft Networks” enabled on the network card are required to use CSV. If they aren’t, you’ll get this problem of hanging Explorer. Select these and you’re good to go.
In Windows 2003 Server Cluster and below, unchecking these options was the recommended procedure. This is no longer the case moving forward and you can see how it can break.
There are a few other factors you’ll need to consider. If your Cluster Nodes are experiencing Resource Host Subsystem (RHS) failures, you must first think about the nature of RHS and what it’s doing. RHS is the Failover Cluster component that does a lot of resource health checking to ensure everything is functioning. For an IP Address, it will ensure it’s on the network stack and that it responds. For disks, it will try to connect to the drive and do a DIR command.
If you experience an RHS crash, you’ll see System Event Log IDs 1230 and 1146. In the event 1230, it will actually identify the resource and the resource DLL it uses. If it crashes, it means the resource is not responding as it should and may be deadlocked. If this were to crash on a disk resource, you’d want to look for disk-related errors or disk latencies. Running a Performance Monitor would be a good place to start. Updating drivers/firmware of the cards or the back end may be something to consider as well.
You’re also going to be doing some user mode detections. Failover Clustering conducts health monitoring from kernel mode to a user mode process to detect when user mode becomes unresponsive or hung. To recover from this condition, clustering will bug-check the box. If it does, you’d see a Stop 0x0000009E. Troubleshooting this would entail reviewing the dump file it creates to look for hangs. You’d also want to have Performance Monitor running and look for anything appearing as hanging, memory leaks and so on.
Failover Clustering is dependent on Windows Management Instrumentation (WMI). If you’re having problems with WMI, you’re going to have problems with Failover Clustering (creating and adding nodes, migrating and so on). Run checks against WMI, such as WBEMTEST.EXE, or even remote WMI scripts.
One script you can attempt from Windows PowerShell is (where NODE1 is the name of the actual node):
get-wmiobject mscluster_resourcegroup -computer NODE1 -namespace "ROOT\MSCluster"
This will make a WMI connection to the Cluster and give you information about the groups.
If that fails, you have some WMI issues. The WMI Services may be stopped, so you may need to restart them. The WMI repository may also be corrupt (use the Windows PowerShell command winmgmt /salvagerepository to see if it’s consistent), and so on.
Here are some troubleshooting points to remember:
Failover Clustering is designed to detect, recover from and report problems. The fact that the Cluster is telling you there is or was a problem does not mean the Cluster caused it. As some people say: “Don’t shoot the messenger.”
John Marlin is a senior support escalation engineer in the Commercial Technical Support Group. He has been with Microsoft for more than 19 years, with the last 14 years focusing on Cluster Servers. | http://technet.microsoft.com/en-us/magazine/hh289314.aspx | CC-MAIN-2014-10 | refinedweb | 2,475 | 62.48 |
class FooController < ApplicationController
  before_filter :foo_filter

  def action
    respond_to :html
  end

  private

  def foo_filter
    head :gone unless some_conditions
  end
end
How about this (from the Docs):
var casper = require("casper").create({}),
    utils = require('utils'),
    http = require('http'),
    fs = require('fs');

casper.start();

casper.thenOpen('', function(response) {
    casper.capture('test.png');
    utils.dump(response.status);
    if (response == undefined || response.status >= 400)
        this.echo("failed");
});

casper.on('http.status.404', function(resource) {
    this.echo('wait, this url is 404: ' + resource.url);
});

casper.run(function() {
    casper.exit();
});
Not without an external tool, no.
You see, this has been brought up a number of times in the past and is one
of the largest "issues" within Selenium's official issue tracker. The
particular issue has been bounced around, and it was essentially decided that it's outside the scope of Selenium.
This however, does not mean that it is not possible. Thankfully, you are
using C#, so it's a little easier than you may think.
Recently, one of the Selenium developers wrote a blog post outlining
exactly how to do this in C#. It is a three part blog post to help explain
each step and uses an external tool, called Fiddler (which, by the way, is
one awesome tool).
Fiddler is a proxy and has a C# API that allows you to intercept requests.
It therefore means you can simply "point" Selenium at that proxy and read the status code of each response as it passes through.
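For readers not using C#, the shape of the workaround is the same in any language: either route the browser through a proxy you control, or issue a separate request for the same URL and read its status. A hedged sketch in Python (the URL is a placeholder, and a separate request can of course see a different status than the browser did):

import requests
from selenium import webdriver

url = "https://example.com/page"   # placeholder

# Selenium will not report the status code, so fetch it out-of-band.
status = requests.head(url, allow_redirects=True, timeout=10).status_code

driver = webdriver.Firefox()
driver.get(url)
print(status)
driver.quit()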
If it's a server error then it should be 500. If it's a client error, use 400.
It's hard to be more precise than that without seeing the URI and what you
do with it. For example, if "Product no longer available" is a result of
a GET request, then it should be 404 (Not Found). But if it was a POST
request, then it should be 200 or 202.
For the other two, they might not be errors. It could be that the client sent a correct request but the stock was consumed by someone else; in this case the server should return 409 (Conflict). If the request was for too much stock from the start, then it should just be 200/202.
If you had to have only one code, just use 400 and 200 (see above).
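To make that mapping concrete, here is a minimal sketch (Python/Flask is an assumption, since the thread names no framework, and reserve_stock is an invented helper standing in for your inventory logic):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/orders", methods=["POST"])
def create_order():
    payload = request.get_json(silent=True)
    if not payload or "quantity" not in payload:
        return jsonify(error="malformed request"), 400       # client error
    if not reserve_stock(payload["quantity"]):               # hypothetical helper
        return jsonify(error="stock already consumed"), 409  # conflict
    return jsonify(status="ordered"), 200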
You can use $.ajax with jQuery.
The documentation has everything you need: if you look at the "statusCode" explanation, you will see you can attach a handler for every status code.
On the one hand, response is built with methods like:
success?
redirect?
unprocessable?
For the full list, do: response.methods.grep(/\?/)
On the other hand, RSpec predicate matchers transform every foo? method into a be_foo matcher.
Not sure you can have the 201 this way unfortunately, but creating a custom
matcher is quite easy.
Note that Rails tests only rely on a few statuses.
HTTP status codes are maintained by the Internet Assigned Numbers Authority (IANA), whereas readyState is specific to XmlHttpRequest. Therefore, just go to a reputable source. The Wikipedia article should suffice, as this is not really a contentious topic; or, as commented, use the official IANA list.
You could also wrap those you are interested in into a JavaScript object:

var HttpCodes = {
    success: 200,
    notFound: 404
    // etc
};

Usage could then be if (response == HttpCodes.success) {...}
I faced a similar problem. I looked into the Angular lib and added a few lines to have the status returned in the response itself. In angular-resource.js, find where the promise is being returned.
Replace code block starting with
var promise = $http(httpConfig).then(function(response)
with the following
var promise = $http(httpConfig).then(function(response) {
    var data = response.data,
        promise = value.$promise;

    if (data) {
        // Need to convert action.isArray to boolean in case it is undefined
        // jshint -W018
        if (angular.isArray(data) !== (!!action.isArray)) {
            throw $resourceMinErr('badcfg', 'Error in resource configuration. Expected ' +
                'response to contain an {0} but got an {1}',
The reason it's not working is that the webapp has implemented an ExceptionMapper that catches the exception instead of letting the custom error page be resolved from web.xml.
The resolution would be to remove the ExceptionMapper impl class.
I have to send a SOAP message to the server, it has to be a message with code 500
This doesn't make sense. The status code is sent by the server in response
to the client's request. If the server sends a 200 OK, your request was
correctly formatted and processed.
I posted an answer here that used a simple HTTP request to check the
status. If status = 200 GOOD! Load remote URL to webview, else load local
HTML page from assets.
It sounds like the 4XX series of responses are appropriate here. From the
RFC:
The 4xx class of status code is intended for cases in which the
client seems to have erred. Except when responding to a HEAD request,
the server SHOULD include an entity containing an explanation of the
error situation, and whether it is a temporary or permanent
condition.
With this in mind, I think 403 Forbidden is the most appropriate: "The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated."
The 409 Conflict implies a conflict with the current state of the resource.
There is no such conflict, so that's not right.
422 Unprocessable Entity looks more correct. I would also argue that 400
Bad Request would not be unreasonable.
403 Forbidden seems like the most descriptive status code for this.
However, RFC 2616 suggests that you could also use 404 Not Found if you
don't want to let the user know that the reason for the failure is an
access control restriction. It says:
If ... the server wishes to make public why the request has not been
fulfilled, it SHOULD describe the reason for the refusal in the entity. If
the server does not wish to make this information available to the client,
the status code 404 (Not Found) can be used instead.
If the person attempting the access believes the error code, it might
discourage them from trying to find a way around the restriction (e.g. by
searching for a proxy). But it probably won't stop a determined hacker.
Code failure is not linked to a single HTTP status code, as there are
multiple possible status codes that lead to errors. I could be wrong on
this part, but I think that, if the readyState never changes to 4, an abort
gets thrown and calls error without returning an HTTP status code at all.
I found a useful resource on HTTP status codes and AJAX here.
You'll want to define the source option and perform the submission yourself
using $.ajax. Then handle the status codes you care about using the
statusCode option:
$("#site_search").autocomplete({
    source: function(request, response) {
        $.ajax({
            url: '/members/search/suggest',
            dataType: "json",
            data: { term: request.term },
            success: function(data, textStatus, jqXHR) {
                response(data);
            },
            error: function(jqXHR, textStatus, errorThrown) {
                // Handle errors here
            },
            statusCode: {
                401: function () {
                    // Disable autocomplete here
                }
            }
        });
    }
});
Note: You will need jQuery 1.5+ for this to work.
Use the following cURL options to make a HEAD request:
CURLOPT_HEADER => true,
CURLOPT_NOBODY => true,
CURLOPT_CUSTOMREQUEST => 'HEAD',
You could also use:
CURLOPT_FAILONERROR => true,
To have curl_exec return false whenever an erroneous HTTP code is returned.
If you have 10,000 URLs and this is a recurring task, you will probably
want to look at curl_multi_* functions and process the URLs in batches of
4, 8 or something close to it. The speed-up is significant.
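The same batching idea sketched in Python rather than PHP's curl_multi (the urls list is a placeholder for wherever your 10,000 URLs live):

import concurrent.futures
import requests

def check(url):
    # HEAD keeps the transfer small, like CURLOPT_NOBODY in the PHP version
    try:
        return url, requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        return url, None   # transport failure, like curl_exec returning false

urls = ["http://example.com/"]   # replace with your list of URLs

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    for url, code in pool.map(check, urls):
        print(code, url)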
Maybe you can do it by configuring a context.
It is discouraged, but you can try adding the following to your conf/server.xml just to give it a try:
<Context path="Beer-v1" docBase="webapps/Beer-v1" crossContext="true"
debug="0" reloadable="false">
</Context>
EDIT
As David Levesque wrote in his answer, there are some incorrect characters in your web.xml (the ” issue seems to be fixed there) and your HTML file. About this point, I am not sure servlet names with blanks, such as Ch3 Beer, are legal (but once again, it shouldn't be a problem for your HTML page).
Please note also that your servlet-name values are inconsistent. They
should be the same in order to link the class and the url mapping.
I have made a test with your new web.xml file, without modifying the context configuration.
Very interesting question :-) This is why I love REST; sometimes it might drive you crazy. Reading the W3 HTTP status code definitions, I would choose (this of course is just my humble opinion) one of those:
202 Accepted - since this means "well yes, I got your request, I will process it, but come back later and see what happens" - and when the user comes back later she'll get a 403 (which should be expected behavior)
205 Reset Content - "Yep, I understood you want to remove yourself; please make a new request. When you come back, you'll get 403"
On the other hand (this just popped up in my mind), why should you introduce separate logic and differentiate that case instead of using 200? Is this REST API going to be used from some client application that has a UI? Should the consumer of the REST API show a pop-up to the user?
Generally, you will use a 4xx HTTP status code when you can manage an exception. Otherwise, you will generate a 5xx HTTP status code.
For your example, you can use the 400 Bad Request HTTP Status code.
10.4.1 400 Bad Request
The request could not be understood by the server due to malformed syntax.
The client SHOULD NOT repeat the request without modifications.
From W3C
The file in webapps/doc/ will not be in the same context as trail.jsp. You cannot include a JSP from another context. Although, according to the code, you are trying to include a static file. If this is right, please update your description.
As stated at the mod_alias doc
If the client requests http://example.com/service/foo.txt, it will be told to access http://foo2.example.com/service/foo.txt instead. This includes requests with GET parameters, such as http://example.com/service/foo.pl?q=23&a=42, which will be redirected to http://foo2.example.com/service/foo.pl?q=23&a=42.
You can change your Redirect 301 directives to Rewrite rules:
Instead of:
Redirect 301 /old-site/tees~1-c/blue~123-p.html http://test.mydomain.com/tees~1-c/blue~123-p.html
Redirect 301 /old-site/tees~1-c.html http://test.mydomain.com/tees~1-c.html
Redirect 301 /old-site/content/about.html http://test.mydomain.com/content/about.html
Use:
RewriteRule ^old-site/tees~1-c/blue~123-p\.html$ http://test.mydomain.com/tees~1-c/blue~123-p.html [L,R=301]
RewriteRule ^old-site/tees~1-c\.html$ http://test.mydomain.com/tees~1-c.html [L,R=301]
RewriteRule ^old-site/content/about\.html$ http://test.mydomain.com/content/about.html [L,R=301]
Twilio evangelist here.
So Web API determines how to serialize the data returned from an endpoint
(JSON, XML, etc) by looking at the Accept header in the incoming request.
It will also set the Content-Type header based on the format of data it
returns. The problem is that Twilio's web hook requests don't include an
Accept header, and if there is no Accept header Web API defaults to
returning JSON.
Twilio expects TwiML (our XML-based language) in response to its request, so if your Web API endpoint is returning JSON and setting the Content-Type header in the response to application/json, Twilio says that's bad.
There are a couple of different ways you can tell Web API to format the response as XML. The first is to remove the JSON formatter as an option available to Web API, leaving the XML formatter to handle the response.
First thing to notice:
java.lang.IllegalStateException: Neither BindingResult nor plain target
object for bean name 'command' available as request attribute
This exception means that you haven't specified a model attribute, or specified an incorrect one. I've already commented that modelattribute="Candidate" should be modelAttribute="Candidate". The exception says that it cannot find the command object, because that is the default name; so you haven't declared your model attribute at all.
Also, you have to have a class for it, e.g. Candidate, which you probably do.
public class Candidate {
    private String name;
    // getters and setters
}
Then in your controller create a method:
@Controller
public class CandidateController {

    @ModelAttribute("Candidate")
    public Candidate getCandidate() {
        return new Candidate();
    }
}
I came across the same situation, and this happens when your parameter is
present in the request with an empty value.
That is, if your POST body contains "number=" (with empty value), then
Spring throws this exception. However, if the parameter is not present at
all in the request, it should work without any errors.
I think there is no straightforward solution, but there is a workaround. On HTTP status code 0 you can check:

if (navigator.onLine)

which will return whether there is some network connected or not.
It's an Internet Proxy issue. The suspect machine had be configured to
manually use a proxy server. Turning this off: Control Panel -> Internet
Options -> Connections Tab -> Lan Settings Button, and enabling
"Automatically detect settings" has fixed the problem.
The customErrors entry in your web.config under system.web works for URLs with file extensions, as we have established. But to make it work for extension-less URLs, you need to override IIS by adding an entry to the system.webServer node in your web.config:
<system.webServer>
<httpErrors existingResponse="Replace" errorMode="Custom">
<remove statusCode="404" subStatusCode="-1" />
<error statusCode="404" prefixLanguageFilePath=""
path="/Error404.cshtml" responseMode="ExecuteURL" />
</httpErrors>
</system.webServer>
Having done this, you also need to move any other custom error handling to
the system.webServer section and remove the customErrors entry under
system.web.
Try the below
PropertyInfo req = new PropertyInfo();
req.name = "silent";
req.type = String.class;
req.setValue("<silent>"
    + "<action>" + logon + "</action>"
    + "<gameid>" + mygameid + "</gameid>"
    + "<gpassword>" + mypwd + "</gpassword>"
    + "<data>"
    + "<username>" + "[EMAIL PROTECTED]" + "</username>"
    + "<password>" + test + "</password>"
    + "</data>"
    + "</silent>");
request.addProperty(req);
I don't think so; the entity could theoretically be the data that would be sent to an authorized request.
From RFC 2616:
If the 401 response contains the same challenge as the prior response,
and the user agent has already attempted authentication at least once, then
the user SHOULD be presented the entity that was given in the response,
since that entity might include relevant diagnostic information.
So it's legal to present the entity to an unauthenticated user.
As you said, authorized data should not be returned to a client... but in your case, the entities are simply the same for authenticated and unauthenticated users.
On the other hand, Google's login page uses 200 as the response code for
failing to authenticate.
Note that:
the response MUST include a WWW-Authenticate header field containing a challenge applicable to the requested resource.
Quoting one of my previous answers:
HTTP Upgrade is used to indicate a preference or requirement to
switch to a different version of HTTP or to another protocol, if
possible.
According to the IANA registry, registered Upgrade tokens include TLS/1.0, WebSocket and h2c.
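To see the mechanism on the wire, here is a small sketch using Python's standard http.client (example.com is a placeholder; most servers will answer with an ordinary status rather than 101):

import http.client

conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/", headers={
    "Connection": "Upgrade",
    "Upgrade": "websocket",   # one of the registered upgrade tokens
})
resp = conn.getresponse()
print(resp.status)                 # 101 means the server agreed to switch
print(resp.getheader("Upgrade"))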
Your version of jettison.jar is wrong. Try using the 1.1 version. See this bug for more information.
You should see the actual response from your remote side. Most probably
you'll see some boilerplate HTML saying your resource wasn't found, you are
not authorized, or something similar instead of the actual WS response.
This can happen when the WS server is hidden behind an Apache front end,
which is misconfigured to assume you are a browser.
You could do the following
HttpGet httpRequest = new HttpGet(myUri);
HttpEntity httpEntity = null;
HttpClient httpclient = new DefaultHttpClient();
HttpResponse response = httpclient.execute(httpRequest);
int statusCode = response.getStatusLine().getStatusCode();
I see a few points here which need to be fixed:
Once you deploy your application in the Tomcat webapps directory, the URL should be something like http://localhost:8080/<web-app-name>/<your-servlet>.
Alternatively you could add a welcome page and check if it is working. For
that you need to add welcome page setting in web-xml. See This post for a
clue.
If this is not working, please share your tomcat directory tree with your
war deployed; and the war file directory structure
Note: you should be able to expand your war using a zip utility.
Do you have Jackson/Jackson2 libs on your classpath? If not, Spring will
create no message converters. From the docs:
MappingJackson2HttpMessageConverter (or MappingJacksonHttpMessageConverter)
converts to/from JSON — added if Jackson 2 (or Jackson) is present on the
classpath.
I think your API could get away with returning the exact same status code /
message as it would if it were a successful first login.
Here's why...
The way I see it, you have two different scenarios from the perspective of
the API: new login and re-login. programmatically there is a difference.
But, from the perspective of the API consumer, all the consumer wants to
know is if login was successful, which it was.
Acdcjunior is right in his comment:
Try making, in web.xml:
<servlet-mapping>
<servlet-name>appServlet</servlet-name>
<url-pattern>/*</url-pattern>
</servlet-mapping>
instead of *.do.
You asked:
Besides, why the url is: localhost:8080/hxj/ instead of:
localhost:8080/hxj/home.jsp (or /home.do)?
I expect that the right URL is: localhost:8080/hxj/home
(localhost:8080/hxj/ works only because of the welcome URL mapping in the web.xml).
That is because that is the request mapping you specified for the controller method. It is not *.jsp, because in Spring you use the URL to specify which controller method should be invoked, not the JSP directly. (And it is not *.do, because this is not Struts.)
No, what you are doing is wrong. I guess you want to submit this form to a servlet (test.java).
First you have to make sure that test.java (btw, this is not a proper convention in Java for a class name; it should start with an uppercase letter) is actually a servlet, by extending the HttpServlet class and implementing the required methods (doGet() and/or doPost()...). More info here.
Then you have to map this Servlet in the web.xml.
<servlet>
    <servlet-name>Test</servlet-name>
    <servlet-class>test</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>Test</servlet-name>
    <url-pattern>/Test</url-pattern>
</servlet-mapping>
And then submit the form to the url-pattern of the servlet assigned above.
In your code, the status assignment only occurs when an error happens. You should be able to get the status when the call succeeds, like this:
success(function(data, status, headers, config) {
    $scope.objects = data;
    $scope.status1 = status;
}).error(function(data, status) {
    $scope.status1 = status;
});
Bug #3320 (Closed)
Feature #5142: Remove ruby-mode.el from ruby's repo
emacs ruby-mode.el font-lock fails on symboled string ending with ?
Description
=begin
Fontification breaks when emacs sees a symbol like
:'this is a symbol?'
example code:
class EmacsExample
  :symbol
  'this is a test'
  'is this a test?'
  "Can this be a test"
  :'this is an error?'
  def bar
  end
end
I have a very hacked fix in ruby-font-lock-syntactic-keywords:

;; the last $', $", $` in the respective string is not variable
;; the last ?', ?", ?` in the respective string is not ascii code
("\\(^\\|[[ \t\n<+(,=:]\\)\\(['\"`]\\)\\(\\\\.\\|\\2\\|[^'\"`\n\\\\]\\)*?\\\\?[?$]\\(\\2\\)"
 (2 (7 . nil))
 (4 (7 . nil)))
by adding : to the character class above, alongside the space, tab, \n, etc.
See the attached patch
I am not sure this is the proper fix, but it fixes the above example.
Thanks,
Zev
=end
Files
Updated by zev (Zev Blut) about 12 years ago
- File ruby-mode.el.patch added
=begin
=end
Updated by zev (Zev Blut) about 12 years ago
=begin
=end
Updated by zenspider (Ryan Davis) about 12 years ago
=begin
On May 20, 2010, at 06:18 , Zev Blut wrote:
Issue #3320 has been updated by Zev Blut.
In this case it is because it sees ?' or ?" and interprets that as the character notation (I have no idea what this is called). Putting a backslash in front of the ? fixes the problem locally.
There are a lot of different things that can trip up ruby-mode when inside a string. I tripped on one today with a multiline string with "def" in it:
eval "
def xxx
yyy
end
"
Notice that tab will indent up to the yyy as if it is actual code, not string content.
BTW, I'm using version 1.0 of ruby-mode.el as supplied in emacs 24.0.50. It may be more up to date in the ruby distro, but afaik, it should be shifting to emacs for distribution.
=end
Updated by akr (Akira Tanaka) almost 11 years ago
- Project changed from Ruby to Ruby master
- Category deleted (misc)
- Assignee set to nobu (Nobuyoshi Nakada)
Updated by nahi (Hiroshi Nakamura) almost 11 years ago
- Target version set to 1.9.3
Updated by kosaki (Motohiro KOSAKI) almost 11 years ago
- Status changed from Open to Assigned
Updated by kosaki (Motohiro KOSAKI) almost 11 years ago
- Target version changed from 1.9.3 to 1.9.4
ETIMEOUT. ruby-mode.el will never be a release stopper.
Updated by naruse (Yui NARUSE) almost 11 years ago
- Parent task set to #5142
Updated by dgutov (Dmitry Gutov) over 9 years ago
All examples in this bug work fine for me with ruby-mode from the Emacs tree.
Not sure when they were fixed.
Updated by nobu (Nobuyoshi Nakada) over 9 years ago
- Status changed from Assigned to Third Party's Issue
import serial
import threading
import Queue
import Tkinter as tk

class SerialThread(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue
    def run(self):
        s = serial.Serial('COM10', 9600, timeout=10)
        s.bytesize = serial.EIGHTBITS    # number of bits per byte
        s.parity = serial.PARITY_NONE    # set parity check: no parity
        s.stopbits = serial.STOPBITS_ONE # number of stop bits
        while True:
            if s.inWaiting():
                self.queue.put(s.readline(s.inWaiting()))

class App(tk.Tk):
    def __init__(self):
        tk.Tk.__init__(self)
        self.text = tk.Text(self, wrap='word')
        self.text.pack()
        self.queue = Queue.Queue()
        SerialThread(self.queue).start()
        self.process_serial()
    def process_serial(self):
        while self.queue.qsize():
            self.text.delete(1.0, 'end')
            self.text.insert('end', self.queue.get())
        self.after(1500, self.process_serial)

app = App()
app.mainloop()
1. [self.root.after(100, self.process_serial)] I don't want this unnecessary time period. Here the first text is displayed and waits 100 ms, then the second text is displayed and waits 100 ms, and it keeps doing this.
But in my code I don't know when I will receive serial data. If I receive serial data, it should stay displayed on the Tkinter window until I receive other serial data to print; the time period between receptions varies all the time.
If I use 100 ms, there is a 100 ms delay and the data is displayed for only 100 ms; the same goes for any interval. If I use 1000 ms, there is a 1000 ms delay and the data is displayed for 1000 ms, after which it vanishes. With a period of 100 ms or 1000 ms, it's not easy to read the data in such a small span of time. I don't want any display time limit here; I need the 100 ms period only to take the data from the Queue, not to display it for just 100 ms.
2. If I receive just a single line of data, it should be displayed by itself; with this font size, nine lines of data can be printed on the Tkinter window.
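A sketch of how this is usually resolved: keep a short polling interval but redraw only when the queue actually delivers data, so the previous line stays on screen until the next one arrives. This only sketches a process_serial variant against the queue used above; the interval controls how quickly new data is noticed, not how long it stays visible.

def process_serial(self):
    data = None
    while self.queue.qsize():
        data = self.queue.get_nowait()  # drain; keep only the newest line
    if data is not None:
        # redraw only on new data; otherwise the old text stays put
        self.text.delete(1.0, 'end')
        self.text.insert('end', data)
    self.after(100, self.process_serial)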
> If remove-if is renamed cl-remove-if and then 'remove-if defaliased to
> 'cl-remove-if, is it now ok to (require 'cl) at runtime? Won't the
> alias now conflict with whatever definition of remove-if the user had?
> And if you don't (require 'cl) at runtime, then you can't call
> cl-remove-if at runtime, since it might not be available.

We need a (require 'cl-<something>) which brings up CL but only within the
"cl-" namespace. I don't have a good idea for naming. `cl-defs' might be OK,
but I'm open to other suggestions. Maybe `cl-layer', or `cl-emu', or
`cl-compat'?

> One could move cl-remove-if etc into a separate module called cl-funcs,
> so if you want to call cl-remove-if at runtime you (require 'cl-funcs).

`cl-funs' is another option, indeed.

> This would somewhat break up the logical structure of the cl package,
> since things will get grouped by whether they're functions or macros
> rather than by topic.

No: The way to do it is to rename cl.el to cl-<something>.el, change all its
definitions to use the "cl-" prefix and then create a new cl.el which only
contains defaliases.

        Stefan
Re: [Fink-devel] xml-simple-pm
here is what I get after installing storable-pm/1_XMLin...ok
[Fink-devel] GNU getopt
I ported GNU getopt (gengetopt) and I used the getopt.c, getopt.h and getopt1.c to port lftp and fix a few other ports that use GNU getopt. GNU getopt provides getopt_long, which libSystem.dylib doesn't have. I propose making gengetopt essential so that we can add an UpdateGNUgetopt:. Like the
[Fink-devel] QT?
Can I fix this? configure:8472: checking if Qt compiles without flags configure:8532: c++ -o conftest -g -O2 -I -I/sw/include -L/usr/X11R6/lib conftest.C -lqt-mt -lXext -lX11 1>&5 conftest.C:2: qglobal.h: No such file or directory conftest.C:3: qapplication.h: No such file or directory
[Fink-devel] pthread_kill
Anyone know a workaround for the undefined symbol _pthread_kill?? I have -lpthread set. I think I remember Finlay telling me that Darwin couldn't do pthread_kill, IIRC. So is there a workaround for this?
Re: [Fink-devel] pthread_kill
Hmm, I might do that... I guess I could just add it to just about any .h in the pkg, or should I make a new file and patch what needs it? [EMAIL PROTECTED] writes: pthread_kill isn't implemented in Darwin, although support went into CVS HEAD a few days back!!! You could do what MySQL does, which
[Fink-devel] mplayer
Where would you start?? :) cc -no-cpp-precomp -Iloader -Ilibvo -I/sw/include -I/sw/include/gtk-1.2 -I/sw/include/glib-1.2 -I/sw/lib/glib/include -I/usr/X11R6/include -o mplayer mplayer.o mp_msg.o open.o parse_es.o ac3-iec958.o find_sub.o aviprint.o dec_audio.o dec_video.o aviwrite.o
Re: [Fink-devel] mplayer
I believe aalib. [EMAIL PROTECTED] writes: Where does that libaa.dylib come from, I wonder? I.e. which package?
Re: [Fink-devel] package validator
I'd like to request that %n is used in the Source and SourceDirectory fields, like it checks for %v. [EMAIL PROTECTED] writes: No worries. I thought about this, too, and plan to add it. Another thing: it should verify the Maintainer field is valid: Full Name [EMAIL PROTECTED]
[Fink-devel] I think i broke something
What does this mean and how can I fix it? I changed the order in the configure script to try .dylib before .so and this happened. make cd . aclocal cd . automake --gnu --include-deps Makefile cd . autoconf ./aclocal.m4:448: error: m4_defn: undefined macro: _m4_divert_diversion aclang.m4:173:
Re: [Fink-devel] Dpkg deleted my /etc directory
Did you do a dpkg --purge of a pkg that had something in /etc? [EMAIL PROTECTED] writes: I know this shouldn't have happened, but dpkg deleted my /etc directory, didn't even warn me that it was full. It told me about /sbin, but not /etc
[Fink-devel] gnome panel fish applet
** WARNING **: CORBA plugin load failed: not a Mach-O MH_BUNDLE file type ** CRITICAL **: file goad.c: line 840 (goad_server_activate_shlib): assertion `gmod' failed. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett CAISnet Inc.
[Fink-devel] QT3
When I try and run qmake, this is what I get. Should we be setting the value for this in the Qt3 install? Or is it a temp value I should be setting in the CompileScript? QMAKESPEC has not been set, so configuration cannot be deduced.
Re: [Fink-devel] ideas on pseudo packages
Okay, so we need to determine the system versions; so make a configure file that will fail if not equal to a certain version. Then, of course, the sucky part: we need a Darwin 1.3, 1.4 and 5.2 etc. etc., and the user has to install the pkg... That is off the top of my head; I know it sucks, but it's a starting point, so
Re: [Fink-devel] ideas on pseudo packages
Unless we add this in fink itself with a fink base-upgrade, which will run the checks and find the closest pkgs. That could be done, but then we have the dpkg problem. [EMAIL PROTECTED] writes: package darwin52 checks in post-install (by running a script) if it is darwin 5.2, which is on the
Re: [Fink-devel] db/db3/db4 and shlibs -- request for action
I heard the shlibs is your little project. I'd like to know why we decided on -bin; I missed lots of emails when this was discussed, and sorry if I'm rehashing old topics, but to me: pkg (current -bin + the base dylib and .a and .la), pkg-shlibs (current -shlibs, versioned .dylibs), pkg-dev (includes
Re: [Fink-devel] db/db3/db4 and shlibs -- request for action
Sure, I totally understand the -shlibs and agree; it's the -bin I have a problem with. I think -bin should be the main pkg and that, if need be, a -dev package with the headers and stuff (which will be an optional install, of course); that would help clean up the huge include dir, since we are
[Fink-devel] clanlib or c++?
Why won't this work? I have to ^C from it, or in about one hour it kills Finder and crashes the system... It just sits there. justin@localhost [~/tmp-clan]$ cat conftest.C #include <unistd.h> justin@localhost [~/tmp-clan]$ justin@localhost [~/tmp-clan]$ cc -E -v -I/sw/include
Re: [Fink-devel] Re: possible readline/ncurses problem
I'm running the latest of each and my bash is fine. [EMAIL PROTECTED] writes: I just installed bash to verify and it works for me. So, do you update readline after having built bash? Ah, and which version-revision of readline are you using?
Re: [Fink-devel] Re: possible readline/ncurses problem
Is there a bug open for this? [EMAIL PROTECTED] writes: I suppose Dave has libxpg4 installed. It has this effect. libxpg4 sets the environment variable DYLD_FORCE_FLAT_NAMESPACE, which will break any binary that is compiled with twolevel_namespace (the default) and has multiply defined symbols.
Re: [Fink-devel] fink install audiofile-shlib has odd results
It's not you, it's probably me, and now that I'm running the new splitoff code this is gonna be hard to fix... Any idea as to when the splitoff code will make a release or CVS? Anyhow, I'll look into this. [EMAIL PROTECTED] writes: So, first I notice that my audiofile needs updating: $ fink
Re: [Fink-devel] fink install audiofile-shlib has odd results
Found it; turns out it was you :P. It's shlibs, not shlib: fink install audiofile-shlibs. The pkg name is too long and gets cut off. [EMAIL PROTECTED] writes: $ fink install audiofile-shlib
Re: [Fink-devel] glib 2.0
Me neither, and I just made a symlink from my cvsroot splitoff dir into fink so I can install all the splitoff pkgs, and they're not there. [EMAIL PROTECTED] writes: I put their packages into shared-libraries/splitoff in CVS. Hm, don't see them there.
Re: [Fink-devel] glib 2.0
Okay, just testing it right now. [EMAIL PROTECTED] writes: Oh sorry, I forgot to commit them. Now you can.
Re: [Fink-devel] glib 2.0
Also, what is in the -common, and are we gonna use -common and -base? This is a general question for splitoff. [EMAIL PROTECTED] writes: Nice. Now one more question: why are the packages named like glib2-0 and not just glib2?
Re: [Fink-devel] glib 2.0
Oh, we've been using -conf. We need a standard, I think. I like -common and -base, but those are Debian's; anyhow, that is just babbling, we need a standard. [EMAIL PROTECTED] writes: The -common package is the common files for -shlibs packages. fooN-shlibs packages should be installable at the same time,
Re: [Fink-devel] glib 2.0
Still waiting for Max to comment. All I'm saying is that we all need to use the same convention. Or so I think. [EMAIL PROTECTED] writes: -conf is a good naming, if it contains only config files. But shlibs packages may share these files: - config files - locale files - modules - program
Re: [Fink-devel] question about splitoffs
Okay once again, I have the arts pkg done... [EMAIL PROTECTED] writes: I'm working on the arts package for the kde3 stuff, and I have a small question about whether I'm doing this right for the splitoff.
Re: [Fink-devel] BuildDependsOnly
Hmmm, I can't see why not, but instead of adding more to the build-time run, add it to the fink check command, which I hope all authors are using, right? :P [EMAIL PROTECTED] writes: I have another small proposal to make related to the long-term shared libraries project: I suggest that we add a
Re: [Fink-devel] BuildDependsOnly
The sub heredoc is in the works by Max ATM. For now I think it has to be one long line, as far as I know. [EMAIL PROTECTED] writes: Another thing that occurred to me while packaging is, is there a way to do multilines inside a splitoff? While making the kdelibs package, I have a huge line
Re: [Fink-devel] BuildDependsOnly
Okay, point taken, and I'm game. If approved, could we add documentation for it on the website? My docs are far behind now with all the new changes :P I'm a paper guy, still need to print 'em :P And BTW, it was fink validate that I was referring to with fink check. [EMAIL PROTECTED] writes:
Re: [Fink-devel] Planning this mail - comment please
Looks good to me, though qt and bind9 should be added to the list IMHO. But I think it's a good email to send like once a week :P [EMAIL PROTECTED] writes: I plan to send out the following mail to fink-beginners, fink-user, and fink-announce. Please tell me what you think of it, if should
Re: [Fink-devel] shlibs
I fully agree with this; it hurts no one to have them but helps us developers in the meantime. [EMAIL PROTECTED] writes: Someday, later, we will want to introduce Shlibs and start to use it. If we are sure that this will be the name of the field, it would be nice to have fink validate not
Re: [Fink-devel] qt-3.0.1-2 build failure (fwd)
and it will be released tomorrow since I just got 3.0.2 done :P [EMAIL PROTECTED] writes: Yeah, it's actually on but is not world-readable yet. They're taunting us. =) ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Systems Analyst
Re: [Fink-devel] qt-3.0.1-2 build failure (fwd)
[EMAIL PROTECTED] writes: libqt-mt.dylib.3.0.1 is not prebound. This tells me it's an old revision... Update from CVS; I believe Jeff has added my changes... It'll make libqt-mt.3.0.1.dylib now and the links are made properly.
Re: [Fink-devel] ANN: Fink Package Manager 0.9.9 released
I think we are going to add a fink.conf switch for this in the near future. [EMAIL PROTECTED] writes: Hmm, I like the Fink list width fix, but shouldn't it default to auto? Having to add -w=auto to every list command is kinda silly. -B
Re: [Fink-devel] fink install failing....
Yes, I had just got it... I had to dpkg --purge the pkg, remove the info and patch files, run fink selfupdate and then rebuild the pkg... I don't know what is causing it though. [EMAIL PROTECTED] writes: Pick one: [1] Failed: Internal error: node for automake already exists
Re: [Fink-devel] porting minicom
Two things: run ./configure --help and see if you can force a dir (I'd use /sw/var/lock), or read the config.log in the build dir to see what it's looking for. [EMAIL PROTECTED] writes: I am interested in making a fink package for minicom, so we can use serial consoles, but ./configure is failing
Re: [Fink-devel] porting minicom
Find where #include <dirent.h> is and put #include <sys/types.h> right before it. [EMAIL PROTECTED] writes: Where are these types defined? I can't find them anywhere. I used ./configure --enable-lock-dir=/sw/var/lock --enable-dfl-port=/dev/tty.modem --includedir=/sw/include/ cc -DHAVE_CONFIG_H -I. -I. -I.. -g -O2 -I../intl -c getopt_long.c
Re: [Fink-devel] porting minicom...almost done....i hope
_getopt_long is defined by GNU getopt. Install my gengetopt package and hopefully it will use the getopt.h in the /sw/include dir; if not, it's a long change. If that doesn't work, here are your options: #1, if getopt.c, getopt1.c and getopt.h are present in the pkg, which I doubt if you're getting this error,
Re: [Fink-devel] porting minicom...almost done....i hope
Check to see if this has #include <getopt.h> or #include "getopt.h". If it uses the former, then installing gengetopt will fix this; if not... then I'd have to see more. [EMAIL PROTECTED] writes: cc -DHAVE_CONFIG_H -I. -I. -I.. -g -O2 -I../intl -c getopt_long.c
Re: [Fink-devel] qt-3.0.1-2 build failure (fwd)
3.0.1-2 was my first version with a patch to fix a dylib creation error. You'll need to force remove it and then install the new one... The next upgrade should hopefully work seamlessly. Sorry about this. [EMAIL PROTECTED] writes: I am getting the exact same error now, and this for fink install
Re: [Fink-devel] porting minicom...almost done....i hope
You'll probably need to keep ncurses, but you'll have to add getopt_long, either by adding it the way I mentioned before or by patching the getopt file currently in it... I'll look at it right now and send it to you... Send me the info and patch you have right now. [EMAIL PROTECTED] writes: -BEGIN PGP
Re: [Fink-devel] diff/patch formats
I use diff -ruN, but it's up to the user, I think. [EMAIL PROTECTED] writes: Is there a 'proper' way to make a diff for fink? I mean the options used: -cr, -ur, whatever. I was wondering if it mattered, or if it was fine as long as patch can understand it.
Re: [Fink-devel] minicom...and sf.net will not accept ssl connections
I use IE with SF all the time. [EMAIL PROTECTED] writes: You need to use OmniWeb or Mozilla, IE does not work with sourceforge. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Systems Analyst Phone: (780)-408-3094 Fax: (780)-454-3200
Re: [Fink-devel] Updates for the nessus package
Sorry, been updating sooo many pkgs lately, I must have overlooked this one... Thanks, I'll get it up to date ASAP. [EMAIL PROTECTED] writes: However, Nessus is now at version 1.1.14, with many new features and vulnerability detection scripts that are not portable to the old 1.0.9 version.
Re: [Fink-devel] qt probs
Add UpdateLibTool: true and add --with-pic --enable-static --enable-shared to the ConfigureParams. [EMAIL PROTECTED] writes: Nevermind, I got it. But I do get this tidbit: checking if libtool supports shared libraries... no checking whether to build shared libraries... no checking whether to
Re: [Fink-devel] qt probs
First off, make sure your QTDIR env var is set; if not, then your Qt install isn't complete. Then read the config.log from licq and see why it's failing; it might be another reason. [EMAIL PROTECTED] writes: Install the QT libraries, or if you have them installed, override this check with the
Re: [Fink-devel] Package nessus and gtk+ friends
gtk+ is a splitoff package, which means it has the info for shlibs in the main info file, which should be in the gnome dir. If you're not running full unstable, you'll need to copy the gtk+ pkg from the unstable gnome dir to local. [EMAIL PROTECTED] writes: fink selfupdate-cvs; fink update gtk+
Re: [Fink-devel] pilot-link-shlibs does not compile
This is very odd; it makes it for me every time... Can you scroll up to the part where it attempts to make the lib (I think it's the second phase) and paste that to me? [EMAIL PROTECTED] writes: hi, pilot-link-shlibs-0.10.99-1 does not compile. it breaks as follows: mkdir -p
Re: [Fink-devel] pilot-link-shlibs does not compile
Okay, thanks, I can fix it. [EMAIL PROTECTED] writes: After uninstalling pilot-link, the error was reproducible. I attached the complete compile log.
Re: [Fink-devel] What is the current state of the OpenOSX Installer SW?
I wouldn't mind trying to make it, but we would once again be back to needing a logo :P Sorry, trying to look at the bright side :P Though for a Fink CD we would just need the binary installer, since it's web-updatable, and since there is no software on the CD except the installer there is no need
Re: [Fink-devel] What is the current state of the OpenOSX Installer SW?
Or we put a CD ISO online and email the link to Macworld; no cost, except time and bandwidth. [EMAIL PROTECTED] writes: Not a bad collection for sixty bucks, but then on the other hand you can get it all for free. But on the gripping hand, they admit this. Sorta:
Re: [Fink-devel] What is the current state of the OpenOSX Installer SW?
But that costs money for an open source project, and we can get free bandwidth; hell, maybe even off of Apple's site. I'd be willing to make the dmg or iso. [EMAIL PROTECTED] writes: Maybe we can co-host with linuxiso.org? I was thinking more of mailing them a burned CD.
Re: [Fink-devel] Fink CD
But you'd be using disk space that doesn't need to be used, at least not for the bin dist, since apt can fetch from CD. [EMAIL PROTECTED] writes: Make a package which uses the Apple installer to install a bunch of .deb files into /sw/fink/debs and a bunch of source files into /sw/src, after
Re: [Fink-devel] Fink CD
Agreed :P [EMAIL PROTECTED] writes: 3) Yeah, having a logo would be nice for a CD, and for other stuff, too, but I don't see it as a strict requirement... OK, Justin? 8-)
Re: [Fink-devel] Fink CD
If FinkCommander or a Fink GUI of some sort would include configuration GUIs for fink.conf and source.list, and was included as a Fink pkg so they could get updates without having to buy another CD or reinstall the software, I'd be game to make the CD, which would install fink/FinkCommander as a
Re: [Fink-devel] libtool hacker needed...
I'm sorry, I don't remember there being an issue; as a matter of fact, I thought GiFTcurs worked fine?? [EMAIL PROTECTED] writes: I have a problem with giFT. I need to disable dlopen, and manually add -ldl to the LDFLAGS, or else it compiles fine, but says it cannot find symbols at runtime. I
Re: [Fink-devel] Fink CD
IE is fine. [EMAIL PROTECTED] writes: I take it I first need a Sourceforge account, and that IE can't be used to do this? I'll go sign up for one with Mozilla... ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Systems Analyst Phone: (780)-408-3094
Re: [Fink-devel] dealing with libtool 1.4a
Install libtool14 from unstable, copy /sw/share/libtool/ltmain.sh to the build dir, and edit the VERSION tag in ltmain.sh to be 1.4a, or do a diff on them... [EMAIL PROTECTED] writes: Anyway, pspell comes with libtool 1.4a (VERSION=1.4a in ltmain.sh). The source package has both ltmain.sh and
Re: [Fink-devel] libtool hacker needed...
Well, I got giFT-skt to work and sent it to beren; he informed me that he had just got it to work as well... But I do agree, I'd use the curs version first. [EMAIL PROTECTED] writes: GiFTcurs works fine, and so does the giFT daemon, but the GTK+ front-end doesn't (it is obsolete anyway, I believe...
Re: [Fink-devel] mozilla help
Thanks, that fixed it; you were right. I wonder why it was owned by root. 0.9.9 works perfectly now, other than the rough fonts; not Mozilla's fault, I don't think. [EMAIL PROTECTED] writes: The problem was that root owns ~/.mozilla
[Fink-devel] mozilla help
I don't even get anything on my display, just a pause, the Gdk-WARNING, and then my prompt again. Here is the gdb output, though there is no bt because the program just exits and doesn't give me any msgs. --- Reading symbols for shared libraries .. done (gdb) run Starting program:
[Fink-devel] Xfree libs
I'm not sure, but I think it needs the new dlcompat. I need these libs for a few ports and they are always unusable. And for some reason these two libs are the only two that don't have dynamic libs, only the a.out. Anyhow, keep me posted. configure:8754: checking for XvShmCreateImage in -lXv
[Fink-devel] cc errors
Why am I getting lib errors when it's not linking?? [snip terminal.app] Missing C++ runtime support for CC (/usr/bin/CC). Compilation of the following test program failed: -- #include <iostream.h> int main(){ cout << "Hello World!" << endl; return
Re: [Fink-devel] bug in fink? or bug in me? =)
-shlibs probably depends on %N (= %v-%r). I'd remove rrdtool and rrdtool-shlibs, then try to install the specific version. [EMAIL PROTECTED] writes: [g4:main/finkinfo/libs] ranger% fink --version Package manager version: 0.9.10 Distribution version: 0.3.2a.cvs [g4:main/finkinfo/libs] ranger% fink install
Re: [Fink-devel] cc errors
Good point, thanks; I just reinstalled libxpg4, not used to having it installed yet... Thanks, I'll try that. [EMAIL PROTECTED] writes: Isn't this the true error message? This one is the well-known DYLD_FORCE_FLAT_NAMESPACE/libxpg4 bug.
Re: [Fink-devel] elf's dl*() ported...
It would be nice if we didn't have to do this. I know in my xine port I need to be able to use both, which is a pain, though I'm not sure why I need both yet. [EMAIL PROTECTED] writes: So does ours, if you change the cc line to be cc -Ddlsym=dlsym_prepend_underscore
Re: [Fink-devel] elf's dl*() ported...
Well, if you have Fink, then 10.1.3 does have dlcompat. [EMAIL PROTECTED] writes: Well, yes; however, Mac OS X 10.1.3 does not have dlcompat, and I have noticed many posters seem to be using a Mac OS X system for 'darwin' development (myself included...). Also, it was a less than easy install
[Fink-devel] Re: Nessus 1.2.0 released (was Re: [Fink-users] nessus 1.0.10)
It is updated. I also added libnessus-ssl, which needs to be compiled first, so if you move over to SSL support, rebuild libnasl, nessus-common (or nessus-common-nox) and nessus-plugins; also, the plugins now provide the needed .nes files. Let me know how it all works; I've had one compile bug
Re: [Fink-devel] Move aggressive
I've also been using both, especially mozilla 0.9.9, and I give them the thumbs up as well. [EMAIL PROTECTED] writes: Also, my Galeon package has a few users (at least I've gotten feedback from around 6 or 7), but it depends on gnome-vfs (>= 1.0.3-4) and mozilla (>= 0.9.9). I've been using both
Re: [Fink-devel] Move aggressive
once Max, uses the tree() that I co wrote in fink to add it's functionalty to fink list and fink info it will help :P [EMAIL PROTECTED] writes: Also many others, is there a quick and easy way to get a list of packages installed and not yet in stable? Use FinkCommander :-) (The smiley does
Re: [Fink-devel] mozilla to stable (was Re: Move agressive)
I know there was also a fix for mozilla 0.9.9, and 0.4 is released I think 0.9.9 should be re looked at now. [EMAIL PROTECTED] writes: I'm one of the people who has not been able to get mozilla-0.9.9 running (on either of my machines). I can use mozilla-0.9.8 just fine. There were other
Re: [Fink-devel] mozilla to stable (was Re: Move agressive)
simple check ~/.mozilla and make sure it's owned by user not root. I had the same problem and Feanor helped me fix it on the devel list. [EMAIL PROTECTED] writes: OK, we can look at it. I still have it installed. (It compiled and installed just fine.) I unlimit the stacksize (just in case),
Re: [Fink-devel] mozilla to stable (was Re: Move agressive)
I think it's a problem with the install script...since it's run in sudo mode it's prolly making the ~/.mozilla directly from the install script and not to %i. That would be my guess. Maybe run a check on ~/.mozilla if owned by root nuke it?? I don't know haven't thought of that part or hack
Re: [Fink-devel] mozilla to stable (was Re: Move agressive)
hmm I was just a suggestion...I don't know very much about hte mozilla port haven't looked at it. Maybe add a check to the mozilla script? I dont' know how it's being made or why it's wrong when being made. But i don't know that is the problem. I'm glad that you figured out the issue with rm
[Fink-devel] pingus port
since I'm very new to gdb, where would I start my search to fix this. I've checked the fonts.src (datafile) and it's okay and the refering font file is present and where it says it should be. any ideas?? Program received signal SIGABRT, Aborted. 0x7001a70c in kill () (gdb) bt #0 0x7001a70c in
Re: [Fink-devel] mozilla to stable (was Re: Move agressive)
sounds good to me though i believe it was 0.9.9-1 that introduced this bug. and i know it was fixed in -4 the inbetween version s I'm not certain of so only users that install 0.9.9-1 need worry about this issue. [EMAIL PROTECTED] writes: OK, I can confirm that simply upgrading to
Re: [Fink-devel] ld option to supress multiple definitions (from apple's list)
as far as I'm concerned if you can compile it, and run it why worry about the warnings. I think it would be better to try and keep as close as the author intended it to be. [EMAIL PROTECTED] writes: Would it be better to use the two-level namespace support of ld instead of fighting against it?
Re: [Fink-devel] Problems getting a configure script to recognise libiconv
now there has to be a reason why, but it's missing -liconv [EMAIL PROTECTED] writes: configure:7877: cc -o conftest -g -O2 -Wall -Wno-unknown-pragmas -I/sw/include - L/sw/lib conftest.c -lm 15 /usr/bin/ld: Undefined symbols: _iconv ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett
Re: [Fink-devel] Problems getting a configure script to recognise libiconv
I think the configure script should be fixed, since it might not need to link libiconv to every link. [EMAIL PROTECTED] writes: You're right... Hrmmm... Weird indeed! I'll see if adding '-liconv' to the LDFLAGS helps... Thanks! ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett -
[Fink-devel] link problems
how can a build link the same two libs in two different link cmds and one fail but the other doesn't?? cc -DHAVE_CONFIG_H -I. -I. -I.. -I.. -I../build -I../lib -I../rpmdb -I../rpmio -I../popt -I../misc -I/sw/include -I../misc -I/sw/include -no-cpp-precomp -D_GNU_SOURCE -D_REENTRANT -Wall
Re: [Fink-devel] fink info format docs
yes they are in the porting notes. [EMAIL PROTECTED] writes: Are the % variables/hashes/whatever documented anywhere? It seems every once in a while I stumble on another. I tried searching the fink sources, but I could not find them anywhere. I was thinking of working on a subsection of the
Re: [Fink-devel] FYI: qt 3.0.4 upgrade will break old libraries
updating all the pkgs will also force a rebuild on all of them to avoid any issues as RangerRick (Ben) had previously mentioned. [EMAIL PROTECTED] writes: Just a note, I've just committed a new version of the QT info file that updates it to 3.0.4, and also fixes some bugs. The most important
Re: [Fink-devel] QT2 package
qt2-shlibs should in theory be able to co exist if someone makes the split. [EMAIL PROTECTED] writes: Hi everyone! I was wondering if it would be possible for a qt2 package to be created, that would conflict with and replace the qt3 package. I currently have some packages (qcad, bbkeysconf)
Re: [Fink-devel] Re: [Fink-users] qt3-shlibs linking error
shoot your right the only one that I didn't think of and the only one that couldn't be avoided. See the problem is that when qt pkg was made it should have started with qt2 and then when the first qt3 pkg was made it should not have followed the same error as qt2 and been called qt3. So we had
Re: [Fink-devel] new QT packages
the problem is that qt was made to live in seperate directories, Not that I want to say it but I think we should see how debian handled this to avoid more problems. The problem with this I think will be with the bin portion. I think that the qt2-shlibs will need qt2-bin I'm sure the qt2-bin and
Re: [Fink-devel] FYI: Porting fakeroot
it doesn't appear to, do you know what headers normally provide this on linux? I appears that kde and mozilla both define there own, it might be possible to copy theirs? [EMAIL PROTECTED] writes: PS: Anyone know if Darwin has stat64 and friends? ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·.,
Re: [Fink-devel] FYI: Porting fakeroot
you could use -Dstat64=__USE_FILE_OFFSET64 is they are equal, but I didn't really understand your question. [EMAIL PROTECTED] writes: sys/stat.h There is some magic in there to make stat64 be called stat if __USE_FILE_OFFSET64 is defined, too. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin
Re: [Fink-devel] LINGUAS ?
that or maybe fink should be unsetting before the make process? [EMAIL PROTECTED] writes: I do get a number of build failures, where to correct things I have first to unsetenv LINGUAS _ twice this morning (I sent a note to the maintainers), and again this evening (gal19). Question: is it
Re: Solved: [Fink-devel] perl warning in fink 0.9.12-1
right but in order to do a apt-get update and not have error for the first time since the local dir won't have the packages.gz file in them you need to run fink scanpackages so it should be run before apt-get update to update the local apt packages.gz [EMAIL PROTECTED] writes: FWIW,
Re: [Fink-devel] system-passwd?
since we are on the topic, can i get a user list and group list added to the passwd file for my up comming mailman port? [EMAIL PROTECTED] writes: What remains to mention is that the user *IDs* must be fixed if I am mistaken. This makes a system-passwd package basically useless. It doesn't
[Fink-devel] lftp or port?
Since I suck at gdb still, maybe someone could make sense of this for me. Since 2.5.1 I can no longer use local tab completion in lftp I get a bus error, I want to make sure it's an lftp problem and not in my port. Remote tab completion works fine, it's only local, like put, lcd, etc... lftp
Re: [KDE-Darwin] Re: [Fink-devel] KDE-Fink library problem
Xinerama and Xv, in the shared lib format are *NOT* needed. They are only used if present. Since Xinerama and Xv are built statics in the Xfree build by default, and we turned them on, since I needed them for an other port, and Ben build the KDE binaries from fink and the fink verion of Xfree,
Re: [KDE-Darwin] Re: [Fink-devel] KDE-Fink library problem
no there isn't. Plus why would you want to revert the change? It doesn't hurt anything unless you try and mix to systems. hmmm...I think this might end up being a problem with other pkgs for the bin dist as well. shared libs and system pkgs are not gonna mix well in the bin dist. [EMAIL
Re: [KDE-Darwin] Re: [Fink-devel] KDE-Fink library problem
This is true, how ever the system-xfree86 pkg is flawed in other ways as well. Since some pkgs may depend on a certain version of xfree, and since there is no way of knowing which is install with system-xfree86. The make a pkg management system as fink all pkgs almost need to be controlled by
Re: [KDE-Darwin] Re: [Fink-devel] KDE-Fink library problem
Firstly i asked about the shared libs to the Xfree team, and there was no good reason for them being disabled, according to my replies from the list. Secondly this would be a concern if we were making .pkg OS X style installed that aren't managed. (i.e. kinda like rpm thought I'll give rpms
Re: [KDE-Darwin] Re: [Fink-devel] KDE-Fink library problem
of course i can't remember which pkg it was now, though it might be mplayer, and it's cause Xinerama and Xv statics are flawed on darwin that this happens mostly, and there are instructions on how to get xfree to make the shared versions in the install notes (accutally I think it was xine now
Re: [Fink-devel] - verry thanks ! -
yes and no. precompiled version may or may not work, and compiling these programs for source if available might need some patches. Since IIRC most of these are commercial programs and will never be in fink and are probably distributed via binaries only form. You can try them but since they are
[Fink-devel] lesstif and kdebase3-ssl
is it just me me or are these two choices odd, first lesstif, and with out doing a fink list to see that the revision on -dev is -4 and the revision on lesstif is -6 I'd never known to use lesstif. Then I told it to install kde-ssl why ask me if i want kdebase3 to have ssl?? Anyhow i don't know
Re: [Fink-devel] lesstif and kdebase3-ssl
thanks for the quick kleen explination Dave...I understand now... [EMAIL PROTECTED] writes: Hi Justin. The lesstif choice is a temporary thing... lesstif-dev is going away, and once the new lesstif has been tested and I can move it to stable, lesstif-dev will go away completely. The other one, | https://www.mail-archive.com/[email protected]&q=from:%22Justin+Hallett%22 | CC-MAIN-2019-43 | refinedweb | 5,958 | 73.27 |
libevent for Lisp: A Signal Example
- CMUCL/SBCL's SERVE-EVENT
- IOlib
- cl-event (libevent for Lisp, using cffi; three years old)
- cl-async (also using cffi libevent wrapper, actively developed)
I was ready to dive in and get things current, when one last Google search turned up cl-async. This little bugger was hard to find, as at that point it had not been listed on CLiki. (But it is now :-)). Andrew Lyon has done a tremendous amount of work on cl-async, with a very complete set of bindings for libevent. This is just what I had been looking for, so I jumped in immediately.
As one might imagine from the topic of this post, there's a lot to be explored, uncovered, and developed further around async programming in Lisp. I'll start off slowly with a small example, and add more over the course of time.
I also hope to cover IOlib and SBCL's SERVE-EVENT in some future posts. Time will tell... For now, let's get started with cl-async in SBCL :-)
Dependencies
In a previous post, I discussed getting an environment set up with SBCL, I the rest of this post assumes that has been read and done :-)
Getting cl-async and Setting Up an SBCL Environment for Hacking
Now let's download cl-async and install the Libevent bindings :-)
With the Lisp Libevent bindings installed, we're now ready to create a Lisp image to assist us when exploring cl-async. A Lisp image saves the current state of the REPL, with all the loaded libaries, etc., allowing for rapid start-ups and script executions. Just the thing, when you're iterating on something :-)
Example: Adding a Signal Handler
Let's dive into some signal handling now! Here is some code I put together as part of an effort to beef up the examples in cl-async:
Note that the as: is a nickname for the package namespace cl-async: .
As one might expect, there is a function to start the event loop. However, what is a little different is that one doesn't initialize the event loop directly, but with a callback. As such, one cannot set up handlers, etc., except within the scope of this callback.
We've got the setup-handler function for that, which adds a callback for a SIGINT event. Let's try it out :-)
Once your script has finished loading the core, you should see output like the above, with no return to the shell prompt.
When we send a SIGINT with ^C , we can watch our callback get fired:
Next up, we'll take a look at other types of handlers in cl-async.
Syndicated 2012-10-19 17:14:00 (Updated 2013-03-14 03:42:51) from Duncan McGreggor | http://www.advogato.org/person/oubiwann/diary/284.html | CC-MAIN-2015-48 | refinedweb | 467 | 77.06 |
#include <wx/strconv.h>
This class converts between the UTF-7 encoding and Unicode.
It has one predefined instance, wxConvUTF7.
Notice that, unlike all the other conversion objects, this converter is stateful, i.e. it remembers its state from the last call to its ToWChar() or FromWChar() and assumes it is called on the continuation of the same string when the same method is called again. This assumption is only made if an explicit length is specified as parameter to these functions as if an entire
NUL terminated string is processed the state doesn't need to be remembered.
This also means that, unlike the other predefined conversion objects, wxConvUTF7 is not thread-safe. | https://docs.wxwidgets.org/3.0/classwx_m_b_conv_u_t_f7.html | CC-MAIN-2018-51 | refinedweb | 114 | 53.21 |
Adding Ordinal Indicators with .NET
Introduction
If you know me or have read some of my articles, you would know that I am very inquisitive. I also love languages. I have always wanted to study further, something like drama or writing; sadly, I never had the chance. The ultimate dream would be to have a book or two published. I have written two novels, which aren't quite good, frankly. The third one, with which I am busy with currently, is going much better, because I spend a lot more time and effort on it.
Anyway, you are not here to read about my sad life story, you are here to learn. In this article, you will learn how to add ordinal indicators using .NET.
Ordinal Numbers
Ordinal Indicators
An ordinal indicator is a character, or group of characters, following a numeral indicating that it is an ordinal number and not a cardinal number. In the English language, this corresponds to the suffixes -st, -nd, -rd, and -th in written ordinals (1st, 2nd, 3rd, 4th).
Practical
As you can gather, you will be creating a project that is able to add the necessary suffixes to the ordinal numbers to denote their proper positions. Start Visual Studio and create either a C# or Visual Basic.NET Windows Forms Application. After the project has been created and the default form displayed, add one ListBox and one Button onto the form. Your design should resemble Figure 1.
Figure 1: Design
Code
Add the following Namespaces:
C#
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Windows.Forms; using Microsoft.VisualBasic;
Add the AddOrdinalIndicator function:
C#
private string AddOrdinalIndicator(int intNumber) { string strIndicator = ""; if (intNumber < 20) { switch (intNumber) { case 1: { strIndicator = "st"; break; } case 2: { strIndicator = "nd"; break; } case 3: { strIndicator = "rd"; break; } case 4: case 5: case 6: case 7: case 8: case 9: case 10: case 11: case 12: case 13: case 14: case 15: case 16: case 17: case 18: case 19: { strIndicator = "th"; break; } } } else { string strNumber = ""; strNumber = Convert.ToString(intNumber); char chrLast = strNumber[strNumber.Length - 1]; switch (Convert.ToString(chrLast)) { case "1": { strIndicator = "st"; break; } case "2": { strIndicator = "nd"; break; } case "3": { strIndicator = "rd"; break; } default: { strIndicator = "th"; break; } } } return Convert.ToString(intNumber) + strIndicator; }
VB.NET
Private Function AddOrdinalIndicator(ByVal intNumber _ As Integer) As String Dim strIndicator As String = "" If intNumber < 20 Then Select Case intNumber Case 1 strIndicator = "st" Case 2 strIndicator = "nd" Case 3 : strIndicator = "rd" Case 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, _ 17, 18, 19 strIndicator = "th" End Select Else Select Case Convert.ToString(intNumber).Chars(Convert _ .ToString(intNumber).Length - 1) Case "1" strIndicator = "st" Case "2" strIndicator = "nd" Case "3" strIndicator = "rd" Case Else strIndicator = "th" End Select End If AddOrdinalIndicator = Convert.ToString(intNumber) + _ strIndicator End Function
It is quite interesting, actually, when you think about it: The first three numbers work differently than the rest. For example: one becomes first, two becomes second, and three becomes third. So, now you have to compensate for that throughout your numbered list. Yes, there are better ways, I suppose, but that is basically what this function does. Add the call to the function:
C#
private void btnDisplay_Click(object sender, EventArgs e) { int i; for (i = 1; i <= 500; i++) lstNumbers.Items.Add(AddOrdinalIndicator(i)); }
VB.NET
Private Sub btnDisplay_Click(sender As Object, e As EventArgs) _ Handles btnDisplay.Click Dim i As Integer For i = 1 To 500 lstNumbers.Items.Add(AddOrdinalIndicator(i)) Next End Sub
When you click the Display button, a numbered list will be added to the ListBox, along with the numbers' respective ordinal indicators. When run, it should look like Figure 2.
Figure 2: Running
Conclusion
Today, you have learned about Ordinal Indicators and how to add them to your strings. I hope you have enjoyed this article. | https://www.codeguru.com/csharp/.net/net_general/adding-ordinal-indicators-with-.net.html | CC-MAIN-2019-47 | refinedweb | 664 | 56.86 |
'Timer' has no attribute 'PERIODIC'
Hi everyone,
I'm just getting started with timers and irq and I was trying the example code.
However I have an issue with the initialisation of the timer.
Here is my code:
tim1 = Timer(1, mode=Timer.PERIODIC) tim1_ch = tim1.channel(Timer.A, freq=100) tim1_ch.irq(priority=7, trigger=Timer.TIMEOUT, handler=acquireData_cb)
And the return
PYB: soft reboot Traceback (most recent call last): File "main.py", line 42, in <module> AttributeError: type object 'Timer' has no attribute 'PERIODIC' MicroPython v1.8.6-607-g9c8a0e9e on 2017-05-01; WiPy with ESP32 Type "help()" for more information.
I was basically running with a custom firmware so I reflashed the last one but still gets the error
>>> import os >>> os.uname() (sysname='WiPy', nodename='WiPy', release='1.6.13.b1', version='v1.8.6-607-g9c8a0e9e on 2017-05-01', machine='WiPy with ESP32')
I already had a similar issue on pyBoard with the time module where ticks_ms was not available.
Did someone already had the same problem?
Hi @bucknall, I finally found this library for using queues on micropython.
However, this library need collections.deque and uasynio.core.
1 from collections.deque import deque 2 from uasyncio.core import sleep
I went to the github repository, I downloaded collection.deque but I couldn't find uasynio.core anywhere.
I think I'm not importing libraries correctly because I need to modify the line
from collections.deque import dequeto
from collections import deque
So now I would like to find the uasyncio.core module to be able to use the queue library ^^
I hope I do not need to open a new thread for this question.
If so, I'm sorry ;)
As there is not so much documentation about multithreading and interrupts, how could two threads communicate?
I'm looking for a queue-like mecanism but I can't find anything about it neither in Threading or Timers.
Thanks
I was using this documentation :
I had not realized yet that wipy documentation and wipy 2.0 was not the same...
Thank you very much, everything is working fine now.
Where are you getting your example code from? Please see for our example code within our documentation.
It's important to note that our timers do not work the same as on the Pyboard.
Thanks!
Alex | https://forum.pycom.io/topic/1193/timer-has-no-attribute-periodic/1?lang=en-US | CC-MAIN-2020-24 | refinedweb | 391 | 60.01 |
- Kevin Ndung'u Gathuku
- Jan 25, 2017
- Updated on Jul 20, 2017
- Tagged as Python Testing
Testing Python Applications with Pytest
Pytest stands out among Python testing tools due to its ease of use. This tutorial will get you started with using pytest to test your next Python project.
SemaphoreLearn More
Introduction.
Prerequisites
This tutorial uses Python 3, and we will be working inside a
virtualenv.
Fortunately for us, Python 3 has inbuilt support for creating virtual environments.
To create and activate a virtual environment for this project, let's run the following commands:
mkdir pytest_project cd pytest_project python3 -m venv pytest-env
This creates a virtual environment called
pytest-env in our working directory.
To begin using the
virtualenv, we need to activate it as follows:
source pytest-env/bin/activate
As long as the
virtualenv is active, any packages we install will be installed
in our virtual environment, rather than in the global Python installation.
To get started, let's install pytest in our
virtualenv.
pip install pytest
Basic Pytest Usage
We will start with a simple test. Pytest expects our tests to be located in
files whose names begin with
test_ or end with
_test.py.
Let's create a file called
test_capitalize.py, and inside it we will write a
function called
capital_case which should take a string as its argument,
and should return a capitalized version of the string.
We will also write a test,
test_capital_case to ensure that the function does what it says.
We prefix our test function names with
test_, since this is what pytest expects
our test functions to be named.
# test_capitalize.py def capital_case(x): return x.capitalize() def test_capital_case(): assert capital_case('semaphore') == 'Semaphore'
The immediately noticeable thing is that pytest uses a plain assert statement,
which is much easier to remember and use compared to the numerous
assertSomething
functions found in
unittest.
To run the test, execute the
pytest command:
pytest
We should see that our first test passes.
A keen reader will notice that our function could lead to a bug. It does not check the type of the argument to ensure that it is a string. Therefore, if we passed in a number as the argument to the function, it would raise an exception.
We would like to handle this case in our function by raising a custom exception with a friendly error message to the user.
Let's try to capture this in our test:
# test_capitalize.py import pytest def test_capital_case(): assert capital_case('semaphore') == 'Semaphore' def test_raises_exception_on_non_string_arguments(): with pytest.raises(TypeError): capital_case(9)
The major addition here is the
pytest.raises helper, which asserts that our function
should raise a
TypeError in case the argument passed is not a string.
Running the tests at this point should fail with the following error:
def capital_case(x): > return x.capitalize() E AttributeError: 'int' object has no attribute 'capitalize'
Since we've verified that we have not handled such a case, we can go ahead and fix it.
In our
capital_case function, we should check that the argument passed is a
string or a string subclass before calling the
capitalize function. If it is not,
we should raise a
TypeError with a custom error message.
# test_capitalize.py def capital_case(x): if not isinstance(x, str): raise TypeError('Please provide a string argument') return x.capitalize()
When we rerun our tests, they should be passing once again.
Using Pytest Fixtures
In the following sections, we will explore some more advanced pytest features. To do this, we will need a small project to work with.
We will be writing a
wallet application that enables its users to add or spend
money in the wallet. It will be modeled as a class with two instance methods:
spend_cash and
add_cash.
We'll get started by writing our tests first. Create a file called
test_wallet.py
in the working directory, and add the following contents:
# test_wallet.py import pytest from wallet import Wallet, InsufficientAmount def test_default_initial_amount(): wallet = Wallet() assert wallet.balance == 0 def test_setting_initial_amount(): wallet = Wallet(100) assert wallet.balance == 100 def test_wallet_add_cash(): wallet = Wallet(10) wallet.add_cash(90) assert wallet.balance == 100 def test_wallet_spend_cash(): wallet = Wallet(20) wallet.spend_cash(10) assert wallet.balance == 10 def test_wallet_spend_cash_raises_exception_on_insufficient_amount(): wallet = Wallet() with pytest.raises(InsufficientAmount): wallet.spend_cash(100)
First things first, we import the
Wallet class and the
InsufficientAmount
exception that we expect to raise when the user tries to spend more cash than
they have in their wallet.
When we initialize the
Wallet class, we expect it to have a default balance of
0.
However, when we initialize the class with a value, that value should
be set as the wallet's initial balance.
Moving on to the methods we plan to implement, we test that the
add_cash method
correctly increments the balance with the added amount. On the other hand, we are
also ensuring that the
spend_cash method reduces the balance by the spent amount,
and that we can't spend more cash than we have in the wallet.
If we try to do so, an
InsufficientAmount exception should be raised.
Running the tests at this point should fail, since we have not created our
Wallet class yet.
We'll proceed with creating it. Create a file called
wallet.py, and we will add our
Wallet implementation in it. The file should look as follows:
# wallet.py class InsufficientAmount(Exception): pass class Wallet(object): def __init__(self, initial_amount=0): self.balance = initial_amount def spend_cash(self, amount): if self.balance < amount: raise InsufficientAmount('Not enough available to spend {}'.format(amount)) self.balance -= amount def add_cash(self, amount): self.balance += amount
First of all, we define our custom exception,
InsufficientAmount, which will be
raised when we try to spend more money than we have in the wallet.
The
Wallet class then follows. The constructor accepts an initial amount, which
defaults to
0 if not provided. The initial amount is then set as the balance.
In the
spend_cash method, we first check that we have a sufficient balance.
If the balance is lower than the amount we intend to spend, we raise the
InsufficientAmount exception with a friendly error message.
The implementation of
add_cash then follows, which simply adds the provided amount to
the current wallet balance.
Once we have this in place, we can rerun our tests, and they should be passing.
pytest -q test_wallet.py ..... 5 passed in 0.01 seconds
Refactoring our Tests with Fixtures
You may have noticed some repetition in the way we initialized the class in each test. This is where pytest fixtures come in. They help us set up some helper code that should run before any tests are executed, and are perfect for setting up resources that are needed by the tests.
Fixture functions are created by marking them with the
@pytest.fixture decorator.
Test functions that require fixtures should accept them as arguments. For example,
for a test to receive a fixture called
wallet, it should have an argument with
the fixture name, i.e.
wallet.
Let's see how this works in practice. We will refactor our previous tests to use test fixtures where appropriate.
# test_wallet.py import pytest from wallet import Wallet, InsufficientAmount @pytest.fixture def empty_wallet(): '''Returns a Wallet instance with a zero balance''' return Wallet() @pytest.fixture def wallet(): '''Returns a Wallet instance with a balance of 20''' return Wallet(20) def test_default_initial_amount(empty_wallet): assert empty_wallet.balance == 0 def test_setting_initial_amount(wallet): assert wallet.balance == 20 def test_wallet_add_cash(wallet): wallet.add_cash(80) assert wallet.balance == 100 def test_wallet_spend_cash(wallet): wallet.spend_cash(10) assert wallet.balance == 10 def test_wallet_spend_cash_raises_exception_on_insufficient_amount(empty_wallet): with pytest.raises(InsufficientAmount): empty_wallet.spend_cash(100)
In our refactored tests, we can see that we have reduced the amount of boilerplate code by making use of fixtures.
We define two fixture functions,
wallet and
empty_wallet, which will be
responsible for initializing the
Wallet class in tests where it is needed,
with different values.
For the first test function, we make use of the
empty_wallet fixture,
which provided a wallet instance with a balance of
0 to the test.
The next three tests receive a wallet instance initialized with a balance of
20.
Finally, the last test receives the
empty_wallet fixture.
The tests can then make use of the fixture as if it was created inside the test
function, as in the tests we had before.
Rerun the tests to confirm that everything works.
Utilizing fixtures helps us de-duplicate our code. If you notice a case where a piece of code is used repeatedly in a number of tests, that might be a good candidate to use as a fixture.
Some Pointers on Test Fixtures
Here are some pointers on using test fixtures:
Each test is provided with a newly-initialized
Walletinstance, and not one that has been used in another test.
It is a good practice to add docstrings for your fixtures. To see all the available fixtures, run the following command:
pytest --fixtures
This lists out some inbuilt pytest fixtures, as well as our custom fixtures. The docstrings will appear as the descriptions of the fixtures.
wallet Returns a Wallet instance with a balance of 20 empty_wallet Returns a Wallet instance with a zero balance
Parametrized Test Functions
Having tested the individual methods in the
Wallet class, the next step we should
take is to test various combinations of these methods.
This is to answer questions such as "If I have an initial balance of
30, and spend
20, then add
100, and later on spend
50, how much should the balance be?"
As you can imagine, writing out those steps in the tests would be tedious, and pytest provides quite a delightful solution: Parametrized test functions
To capture a scenario like the one above, we can write a test:
# test_wallet.py @pytest.mark.parametrize("earned,spent,expected", [ (30, 10, 20), (20, 2, 18), ]) def test_transactions(earned, spent, expected): my_wallet = Wallet() my_wallet.add_cash(earned) my_wallet.spend_cash(spent) assert my_wallet.balance == expected
This enables us to test different scenarios, all in one function.
We make use of the
@pytest.mark.parametrize decorator, where we can specify the names
of the arguments that will be passed to the test function, and a list of arguments
corresponding to the names.
The test function marked with the decorator will then be run once for each set of parameters.
For example, the test will be run the first time with the
earned parameter set to
30,
spent set to
10, and
expected set to
20.
The second time the test is run, the parameters will take the second set of arguments.
We can then use these parameters in our test function.
This elegantly helps us capture the scenario:
- My wallet initially has
0,
- I add
30units of cash to the wallet,
- I spend
10units of cash, and
- I should have
20units of cash remaining after the two transactions.
This is quite a succinct way to test different combinations of values without writing a lot of repeated code.
Combining Test Fixtures and Parametrized Test Functions
To make our tests less repetitive, we can go further and combine test fixtures and parametrize test functions. To demonstrate this, let's replace the wallet initialization code with a test fixture as we did before. The end result will be:
# test_wallet.py @pytest.fixture def my_wallet(): '''Returns a Wallet instance with a zero balance''' return Wallet() @pytest.mark.parametrize("earned,spent,expected", [ (30, 10, 20), (20, 2, 18), ]) def test_transactions(my_wallet, earned, spent, expected): my_wallet.add_cash(earned) my_wallet.spend_cash(spent) assert my_wallet.balance == expected
We will create a new fixture called
my_wallet that is exactly the same as the
empty_wallet fixture we used before. It returns a wallet instance with a balance of
0.
To use both the fixture and the parametrized functions in the test, we include the
fixture as the first argument, and the parameters as the rest of the arguments.
The transactions will then be performed on the wallet instance provided by the fixture.
You can try out this pattern further, e.g. with the wallet instance
with a non-empty balance and with other different combinations of the
earned and
spent amounts.
Continuous Testing on Semaphore CI
Next, let's add continuous testing to our application using SemaphoreCI to ensure that we don't break our code when we make new changes.
Make sure you've committed everything on Git, and push your repository to GitHub or Bitbucket, which will enable Semaphore to fetch your code. Next, sign up for a free Semaphore account, if you don't have one already. Once you've confirmed your email, it's time to create a new project.
Follow these steps to add the project to Semaphore:
Once you're logged into Semaphore, navigate to your list of projects and click the "Add New Project" button:
Next, select the account where you wish to add the new project.
Select the repository that holds the code you'd like to build:
Select the branch you would like to build. The
masterbranch is the default.
Configure your project as shown below:
Once your build has run, you should see a successful build that should look something like this:
In a few simple steps, we've set up continuous testing.
Summary
We hope that this article has given you a solid introduction to pytest, which is one of the most popular testing tools in the Python ecosystem. It's extremely easy to get started with using it, and it can handle most of what you need from a testing tool.
You can check out the complete code on GitHub.
Please reach out with any questions or feedback you may have in the comments section below.
Edited on {{comment.updatedAt}} | http://semaphoreci.com/community/tutorials/testing-python-applications-with-pytest | CC-MAIN-2018-13 | refinedweb | 2,284 | 64.2 |
Importing Keras Models into TensorFlow.js
Hello Readers! In this blog, we will see how to import the Keras models into TensorFlow.js.
You all must have heard the name of TensorFlow.js. TensorFlow.js is a Javascript library, and it is used to integrate two great technology stacks which are – Machine Learning and Web development. This library helps to train and deploy models, and also to use pre-trained models of machine learning into our web applications.
Different Ways to Save the Model
Keras models can be saved in various forms which are –
- Saving the complete model i.e both the weights and training configurations. This model file contains the architecture of the model, loss, optimizer, and state of the optimizer.
model.save(your_file_path)
- Saving only the weights of a model which is done in HDF5 format. The below-mentioned code is used for the same. Keras uses the h5py Python package to save in HDF5 format.
model.save_weights('model_weights.h5')
- Saving only the architecture of the model, and not its weights or its training configuration. The below lines of code explains the implementation of it,
json_str = model.to_json() // converting the model to the form of json strings from tensorflow.keras.models import model_from_json //then using this data to build a fresh model model = model_from_json(json_str)
The complete model is saved in a file format which can be converted into the Layers format of TensorFlow.js. This type of file can be directly used by TensorFlow.js to train and use the model in applications.
TensorFlow.js Layer Format
The Layer Format of TensorFlow.js contains a folder that contains a model.js file. This file consists of a vivid description of layers and the architecture of the model. The folder also contains the weight files which are in binary format.
Importing models – From Keras to TensorFlow.js
Note: The tensorflowjs module is necessary to convert the model in TF.js format. It can be installed by running the command – pip install tensorflowjs in your command prompt/bash.
First, we need to convert the Keras models into the Layers format of TensorFlow.js. To do this there are two ways:-
- The complete model is saved in HDF5 format when Keras models are saved by running the – model.save(file_path) command. It contains the file with both weights and model architecture. On running the below command this file is converted to Layers format.
tensorflowjs_converter --input_format keras "/path/souce_directory/model.h5" "/path/target_directory"
In the above code, first, the path of the Keras model file is provided (where it was saved ) after which the path of the target folder is provided. This is the directory where the tensorflow.js files will be saved.
- Using Python API can be another way to export the saved model in “.h5” to TF.Layers format. If the Keras model is made in Python language it can directly be exported using a one-line code.
import tensorflowjs as tfjs tfjs.converters.save_keras_model(model, target_directory)
The final task is to load the model into TensorFlow.js. Consider the below code to load the model in your JavaScript code.
import * as tf from '@tensorflow/tfjs'; const model = await tf.loadLayersModel('');
The link mentioned above has to be the URL to the model.json file to load the model in TensorFlow.js. After writing these lines of code you can use your model in your web applications. We can retrain and evaluate the model. We can also make predictions using this imported Keras model.
The below line of code is used for predicting the output for a given example by the model.
const prediction = model.predict(example);
Now you are all set to try importing various models into TensorFlow.js yourself! | https://valueml.com/importing-keras-models-into-tensorflow-js/ | CC-MAIN-2021-25 | refinedweb | 622 | 59.3 |
A method, also known as a function, is a module of code that a programmer can create and then call on later on in the program. Many methods already exist in programming languages such as C# but the programmer also has the ability to make their own. A method will usually perform a single task. Many methods can work together to achieve a goal.
Methods should have descriptive names (they should represent an action and are usually in the form of a verb). Spaces cannot be used in method names and you should always avoid use of special characters eg. $%^!@. The method name itself should also not contain brackets because brackets are used for parameters. When you create a new method, the method name should be unique and not already exist in the language (it should not be a reserved word that is used for a statement or method).
Method names should follow a consistent naming convention throughout your code eg. using camel case or mixed case. Examples of suitable method names include CalculateScore, AddNumbers, MultiplyNumbers, GetUserDetails, etc.
Watch the video below and then scroll down for examples and sample code.
Creating methods
Lets break up the method into its different components and look at each component…
What is an access modifier?
Access modifiers include public and private (or just left empty). Public means other parts of the program can see and use this method. If we don’t want that we use private instead or no access modifier (leave it out).
What is a return type?
Methods are able to return a variable back to the code that called it known as the return type. If a method returns an integer value then the return type is an int and if a method returns a true or false value then the return type is a bool. Even if the method doesn’t return any value, it still has a return type. If the method doesn’t return a value, then its return type is void (which means nothing). You might notice that many functions have a return type of void.
What are parameters?
In the same way that methods can pass a variable back to the code that called it, the calling code can pass variables into the method. These variables are known as parameters. The variables that are passed into the method are identified in the parameter list part of the method (inside the brackets). When you specify a parameter you must specify the variable type and the name. If there are no parameters, then the brackets are left empty.
Below is an example of a method in C# for a calculator that is used to add two numbers together. There are two parameters in this method (separated by commas). The parameters in this method are num1 and num2. These are the two numbers that will be added together (they are of the float data type). Notice that the return type is also float meaning that the result of this method (the sum of num1 and num2) will be returned as a float value.
public static float AddNumbers(float num1, float num2) { float total = num1 + num2; return total; }
The method above will add two numbers together (the two parameters num1 and num2) and then return the answer as a float back to the part of the program that called the method.
The word static means that this particular method is associated with the class, not a specific instance (object) of that class. What this means is that you are able to call a static method without actually creating an object of the class.
Many methods have the word void in their declaration. The word void basically means that the method will not return any value to the part of the program that called it.
Using methods
Once you have created a method the next thing to do is use it. Using a method is known as calling or invoking a method. To call a method that was named AddNumbers, you would write:
AddNumbers();
If the method contained a parameter or multiple parameters (eg. the values 5 and 10), then they would be included inside the brackets like this:
AddNumbers(5,10);
The below example shows how to call a method and pass variables into the method. You do not need to write int inside the brackets where the function is called.
int number1 = 10; int number2 = 30 AddNumbers(number1,number2);
When you call a method you do not need to provide the variable type with the variable that is being passed into the method. If the method AddNumbers() in the example code above returns a value, then it should be stored in a variable or used in a statement, for example:
int result = AddNumbers(5,10);
Sample code
Here is an example using parameters and a return type of void. The AddNumbers method is called from inside the Main method.
using System; namespace MyCSharpProject { class Program { static void Main(string[] args) { AddNumbers(5, 10); } public static void AddNumbers(int num1, int num2) { int total = num1 + num2; Console.WriteLine("The sum of the two numbers is " + total); } } }
Here is an example using parameters and a return type of int. The AddNumbers method is called from inside the Main method.
using System; namespace MyCSharpProject { class Program { static void Main(string[] args) { int answer = AddNumbers(5, 10); Console.WriteLine(answer); } public static int AddNumbers(int num1, int num2) { int total = num1 + num2; return total; } } }
Here is an example using no parameters. The AddNumbers method is called from inside the Main method.
using System; namespace MyCSharpProject { class Program { static void Main(string[] args) { int answer = AddNumbers(); Console.WriteLine(answer); } public static int AddNumbers() { int total = 5 + 10; return total; } } } | https://www.codemahal.com/video/methods-in-c-sharp/ | CC-MAIN-2018-47 | refinedweb | 957 | 70.84 |
In a previous post, I introduced the MauiKit project, the idea with this post is to follow up the previous one and demonstrate some of the controls from the MauiKit and show you how to get started. The MauiKit project is a helpful set of templated controls and custom widgets based on QCC2 and Kirigami that follows the Maui HIG and makes it easier to create concurrent applications that work well on GNU Linux distros, Plasma Mobile and Android.
How to use MauiKit?
The MauiKit works as a submodule, and it relies on the KDE framework Kirigami. It makes use of qmake to be included in your project and then be built. But you can also now make use of CMake to add it as a subdirectory and linked as a static library. So let’s say your new project name is Nota, a simple and convergent text editor. This is how the project folder would look like when we first create the project using Qt Creator:
alt="" width="1048" height="631" data->
Now, you will include the MauiKit submodule to your project, but first, you need to clone the sources.
git clone — recursive
Don’t forget to add the “— recursive” parameter when cloning it, because Maui Kit includes another submodule named “tagging” that needs to be present. It is a shared tagging system that allows all the Maui apps to share tags and metadata information between each other, but I will cover that one on another occasion.
Now your project folder should look more like this:
alt="" width="1048" height="631" data->
And now it is time to add it to your project sources, so open up your nota.pro file and add the following lines:
linux:unix:!android { message(Building for Linux ) } else:android { message(Building for Android) include($$PWD/3rdparty/kirigami/kirigami.pri) } else { message(“Unknown configuration”) } include($$PWD/mauikit/mauikit.pri)
As I mentioned earlier, MauiKit works on top of Kirigami, and so it depends on it. If you plan to deploy your app on a device running Android, you will also need to add the Kirigami submodule. That’s what is going on in the first previous lines; it is including the Kirigami module and sources if the platform where your project is being deployed is Android.
And in the final line, we include the Maui Kit module and sources, feel free to check out its Content and see what’s in there.
Now we need to expose the MauiKit controls to the QML engine.
Open up your main.cpp file and add the following lines:
First, include the Maui Kit header.
#include “mauikit.h”
And then register the MauiKit components.
MauiKit::getInstance().registerTypes();
Your project source files would look now a little more like this:
alt="" width="1636" height="937" data->
And now, you’re ready to start creating your next convergent application. On your QML files you now can import the Maui Kit controls like this:
import org.kde.maui 1.0 as Maui
Now, let’s take a first look at the controls included on the Maui Kit. The most important control is the ApplicationWindow, it is what will make your app a Maui app, so it is the one I will be covering on this post.
Components
alt="" width="3446" height="978" data->
ApplicationWindow
Properties:
bool altToolBars
If set to True, the HeadBar will be moved to the bottom of the application window. By default is set to the isMobile property, so if the target device is a mobile device, the HeadBar will move to the bottom.
If set to False, then the HeadBar will stay at the top of the application window, even on mobile devices.
bool floatingBar
If set to True, the FootBar will act as a floating bar, it will not occupy the full width of the application window, but instead, use the minimum width required to place its children Content. Also, it will be colored by making use of the theming property accentColor and cast a shadow.
By default the floating bar does not overlap the Content, to make it overlap the Content the footBarOverlap property will have to be set to True
By default, this property is set to the altToolBars property.
int footBarAligment
By default, it is set to Qt.AlignCenter, but can be set to any other alignment value. Changes will be only visible if the floatingBar property is set to True.
bool footBarOverlap
If set to True, and also only if the floatingBar property is set to True too, the FootBar will act as a floating bar overlapping the Content of the window.
When this property is set to True, the Content will flick when scrolled down to show a reserved space for the FootBar in order to not overlap important Content positioned at the bottom.
By default, this property is set to False
int footBarMargins
This property adds margins to the FootBar when it is set as a floating bar using the floatingBar property. It is useful when the FootBar needs to stand out.
Theming Properties:
These properties can be changed to alter the look of the application from the Maui Kit controls and to the application’s controls. By default, the colors are picked up from the system color palette.
int iconSize
color borderColor
color backgroundColor
color textColor
color highlightColor
color highlightedTextColor
color buttonBackgroundColor
color viewBackgroundColor
color altColor
color altColorText
color accentColor
color bgColor
Read-only properties:
bool isMobile
Defines if the target device is a mobile device or not. By default, it uses the property Kirigami.Settings.isMobile
bool isAndroid
Defines if the operating system of the target device is Android or not.
int unit
Defines a unit making use of point size rather than pixel size. It is useful when the target device does not support pixel units right, like Plasma Mobile, right now. By default, it uses the property Kirigami.Units.devicePixelRatio
int rowHeight
It defines a standardized height for rows, such as in lists and menus. It is used on the templated controls inside the Maui Kit, and it is recommended to be used in the application.
int rowHeightAlt
Defines a standardized height for alternative rows, such is in sub-items in lists and menus. It is used on the templated controls inside the Maui Kit, and it is recommended to be used in the application. It has a smaller height size than the rowHeight property
int toolBarHeight
It defines a standardized height for the toolbars. It is used on the templated controls inside the Maui Kit and for the Maui.ToolBar control, and it is recommended to be used in the application where fitted.
int toolBarHeightAlt
It defines a standardized height for the alternative toolbars. It is used on the templated controls inside the Maui Kit and for toolbars inside the Maui.Page control, and it is recommended to be used in the application where fitted.
int contentMargins
It defines a standardized size for content margins in places like Page, Panes, and other controls.
var fontSizes
It defines a standardized set of sizes to be used with fonts.
The possible values are: tiny, small, medium, default, big, large
var space
Defines a standardized set of sizes to be used for spacing items and layouts.
The possible values are: tiny, small, medium, big, large, huge, enormous
var iconSizes
Defines a standardized set of sizes to be used for icons and buttons.
The possible values are: tiny, small, medium, big, large, huge, enormous
The Application window is based on Kirigami.ApplicationWindow.
The Maui implementation follows the Maui HIG and suggests the use of toolbars. It is layout vertically and contains a HeadBar, the Content, a FootBar, and a GlobalDrawer.
The HeadBar and FootBar, are both based on another Maui component: Maui.ToolBar, so they inhere its properties.
The Content can be any other QCC2 control, such a SwipeView, StackView, or Maui.Page, etc., but for default, there is a PageRow from Kirigami, given that Maui.ApplicationWindow is based on Kirigami.ApplicationWindow.
The Content area, as mentioned earlier, can be any other control, although the Maui HIG suggests the following horizontal layouts that can be achieved by making use of a SwipeView or Kirigami.PageRow.
Here’s an example from VVAVE:
alt="" width="1895" height="629" data->
alt="" width="3825" height="4388" data->
By default and for reach-ability, the HeadBar is moved to the bottom when it is being deployed on a mobile device, but this can be changed by setting the altToolBars property.
Also, the FootBar is styled differently on mobile devices, and it is drawn as a floating bar that does not occupy the full width of the window and uses a background color set by the property accentColor. It can be set to overlap the Content or not with the property footBarOverlap, and the floating bar can be set with the property floatingBar.
alt="" width="3385" height="1705" data->
Here’s a preview of the floating bar overlapping the Content. The following app sets the properties mentioned above as:
accentColor: altColor highlightColor: “#8682dd” altColor: “#43455a” altColorText: “#ffffff” altToolBars: false floatingBar: isMobile footBarOverlap: true
This results in the following UI:
alt="" width="606" height="1280" data->
Here are other Maui Kit controls I will be covering next:
- ToolBar
- ToolButton
- Page
- FileDialog
- ShareDialog
- PieButton
- Holder
- SelectionBar
- GlobalDrawer
- IconDelegate
- Style
- SideBar
- TagsBar
- NewDialog | https://nxos.org/articles/mauikit-controls/ | CC-MAIN-2020-50 | refinedweb | 1,555 | 59.74 |
cc[ flag... ] file... -lldap[ library... ]
#include <lber.h>
#include <ldap.h>
#define LDAP_DISP_OPT_AUTOLABELWIDTH 0x00000001
#define LDAP_DISP_OPT_HTMLBODYONLY 0x00000002
#define LDAP_DTMPL_BUFSIZ 2048
These functions use the LDAP display template functions (see ldap_disptmpl(3LDAP) and ldap_templates.conf(4)) to produce a plain text or an HyperText Markup Language (HTML) display of an entry or a set of values. Typical plain text output produced for an entry might look
like:
"Barbara J Jensen, Information Technology Division"
Also Known As:
Babs Jensen
Barbara Jensen
Barbara J Jensen
[email protected]
Work Address:
535 W. William
Ann Arbor, MI 48103
Title:
Mythical Manager, Research Systems
...
ldap_entry2text() produces a text representation of entry and writes the text by calling the writeproc function. All of the attributes values to be displayed must be present in entry; no interaction
with the LDAP server will be performed within ldap_entry2text. ld is the LDAP pointer obtained by a previous call to ldap_open. writeproc should be declared as:
int writeproc( writeparm, p, len )
void *writeparm;
char *p;
int len;
where p is a pointer to text to be written and len is the length of the text. p is guaranteed to be zero-terminated. Lines of text are terminated with the string eol. buf
is a pointer to a buffer of size LDAP_DTMPL_BUFSIZ or larger. If buf is NULL then a buffer is allocated and freed internally. tmpl is a pointer to the display template to
be used (usually obtained by calling ldap_oc2template). If tmpl is NULL, no template is used and a generic display is produced. defattrs is a NULL-terminated array of LDAP attribute names which you
wish to provide default values for (only used if entry contains no values for the attribute). An array of NULL-terminated arrays of default values corresponding to the attributes should be passed in defvals. The rdncount parameter is used to limit the number of Distinguished Name (DN) components that are actually displayed for DN attributes. If rdncount is zero, all components are shown. opts is used to specify output options. The only values
currently allowed are zero (default output), LDAP_DISP_OPT_AUTOLABELWIDTH which causes the width for labels to be determined based on the longest label in tmpl, and LDAP_DISP_OPT_HTMLBODYONLY. The LDAP_DISP_OPT_HTMLBODYONLY option instructs the library not to include <HTML>, <HEAD>, <TITLE>, and <BODY> tags. In other words, an HTML fragment is generated, and the caller is responsible for prepending and appending the appropriate HTML tags to construct a correct HTML document.
ldap_entry2text_search() is similar to ldap_entry2text, and all of the like-named parameters have the same meaning except as noted below. If base is not NULL, it is the search base to use when executing search actions. If it is NULL, search action template items are ignored. If entry is not NULL, it should contain the objectClass attribute values for the entry to be displayed. If entry is NULL, dn must not be NULL, and ldap_entry2text_search will retrieve the objectClass values itself by calling ldap_search_s. ldap_entry2text_search will determine the appropriate display template to use by calling ldap_oc2template, and will call ldap_search_s to retrieve any attribute values to be displayed. The tmpllist parameter is a pointer to the entire list of templates available (usually obtained by calling ldap_init_templates or ldap_init_templates_buf). If tmpllist is NULL, ldap_entry2text_search will attempt to read and load templates from the default template configuration file ETCDIR/ldaptemplates.conf.
ldap_vals2text produces a text representation of a single set of LDAP attribute values. The ld, buf, writeproc, writeparm, eol, and rdncount parameters are the same as the like-named parameters for ldap_entry2text. vals is a NULL-terminated list of values, usually obtained by a call to ldap_get_values. label is a string shown next to the values (usually a friendly form of an LDAP attribute name). labelwidth specifies the label margin, which is the number of blank spaces displayed to the left of the values. If zero is passed, a default label width is used. syntaxid is a display template attribute syntax identifier (see ldap_disptmpl(3LDAP) for a list of the pre-defined LDAP_SYN_... values).
ldap_entry2html produces an HTML representation of entry. It behaves exactly like ldap_entry2text(3LDAP), except for the formatted output and the addition of two parameters. urlprefix is the starting text to use when constructing an LDAP URL; the default is the string ldap:///. The second additional parameter, base, is the search base to use when executing search actions. If it is NULL, search action template items are ignored.
ldap_entry2html_search behaves exactly like ldap_entry2text_search(3LDAP), except HTML output is produced and one additional parameter is required. urlprefix is the starting text to use when constructing an LDAP URL. The default is the string ldap:///
ldap_vals2html behaves exactly like ldap_vals2text, except HTML output is produced and one additional parameter is required. urlprefix is the starting text to use when constructing an LDAP URL. The default is the string ldap:///.
These functions all return an LDAP error code. LDAP_SUCCESS is returned if no error occurs. See ldap_error(3LDAP) for details. The ld_errno field of the ld parameter is also set to indicate the error.
See attributes(5) for a description of the following attributes:
ldap(3LDAP), ldap_disptmpl(3LDAP), ldaptemplates.conf(4), attributes(5)
Why a lambda and not a block?
June 25, 2009 at 4:14 am
Just because I think passing multiple variables and a block looks weird:
test_named_scope(Tag.all, Tag.whitelisted){|tag| tag.status == Tag::WHITELISTED }
It’d be pretty easy to rewrite it to take a block if you prefer that syntax.
June 25, 2009 at 6:13 am
I’ve needed to do the exact same thing, but ended up with a simpler approach. [Check out some code I ripped from that project.]() I’m sure it’s obvious, but to be clear I’m using shoulda, factory_girl, and matchy.
The general idea is to verify the records returned from my named scope `Video.processed` are identical to those returned from `Video.all(:conditions => { :processed => true })`. Because I know Rails will return the data I want using that query, I should simply be able to compare the two result sets and assert they're equal. If they're not, something's broken. If, for some reason, Rails gives me unprocessed videos with that finder call, then I have much bigger problems.
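Roughly, the assertion boils down to something like this (the class and attribute names here are approximate):
class VideoTest < ActiveSupport::TestCase
  should "return the same records from the scope as from the plain finder" do
    expected = Video.all(:conditions => { :processed => true })
    assert_equal expected, Video.processed
  end
end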
June 25, 2009 at 3:33 pm
Makes sense Larry, but a lot of our named_scopes are pretty complicated. Here for example is one we’re using to check if users have left feedback on a project
named_scope :with_no_evaluation_by, lambda{|user|
{:conditions => <<-SQL
not exists (select * from evaluations
where evaluations.project_id = projects.id
and creator_id = #{user.id})
SQL
}}
The spec for the named scope looks like this
describe ".with_no_evaluation_by(user)" do
  it "returns all etasks without evaluations by the given user" do
    jane = users(:jane)
    test_named_scope(Project.all, Project.with_no_evaluation_by(jane), lambda{|project| !project.evaluations.map(&:creator).include?(jane) })
  end
end
June 25, 2009 at 4:45 pm
Ah! I knew there was a reason you chose your approach instead of the more obvious. That’s quite a hairy scope. I totally understand why you need to explicitly validate the results to guarantee you’re receiving the correct results. Nicely done.
June 26, 2009 at 6:15 pm
I really like Josh’s refactoring, but it doesn’t include your tests that ensure that your test data actually includes items in and out of the conditions defined by the scope. You may want to add:
scoped_objects.should_not be_empty
other_objects.should_not be_empty
and all together:
def test_named_scope(all_objects, subset, condition)
  scoped_objects, other_objects = all_objects.partition(&condition)
  scoped_objects.should_not be_empty
  other_objects.should_not be_empty
  scoped_objects.should == subset
  other_objects.should == all_objects - subset
end
At first I thought making sure that you had items in both the scoped and non-scoped categories was a little pedantic. On second thought, the point is that you are testing scoping and if your test data doesn’t actually contain objects of both types then your test may be faulty.
Cool stuff. Thanks.
June 27, 2009 at 10:02 am
Ooh, good catch Kelly, I’ll fix the post. Yeah, the tests to make sure both sets actually have objects in them has caught a few bugs in code and more than a few fixture issues.
July 6, 2009 at 11:57 pm
I’m liking this, but it needs a little work to deal with named_scopes that have limits on them.
I modified like this for a quick fix:
def test_named_scope(all_objects, subset, condition, limit = 0)
  scoped_objects, other_objects = all_objects.partition(&condition)
  other_objects += scoped_objects.slice!(limit..scoped_objects.size) if limit > 1
  subset.should_not be_empty
  scoped_objects.should == subset
  other_objects.should == all_objects - subset
end
July 12, 2009 at 7:09 am
Here’s a more robust version, but it is still trading the ability to use limits for the ability to use ordering:
def test_named_scope(all_objects, subset, condition, limit = 0)
  scoped_objects, other_objects = all_objects.partition(&condition)
  other_objects += scoped_objects.slice!(limit..scoped_objects.size) if limit > 0
  subset.should_not be_empty
  scoped_objects.should == subset
  other_objects.sort{|a,b| a.id <=> b.id}.should == (all_objects - subset).sort{|a,b| a.id <=> b.id}
end
July 12, 2009 at 7:41 am | http://pivotallabs.com/an-easy-way-to-write-named-scope-tests/?tag=cloud | CC-MAIN-2014-42 | refinedweb | 659 | 58.69 |
Subject: Re: [boost] [review][Fit] Review of Fit starts today : September 8 - September 17
From: paul (pfultz2_at_[hidden])
Date: 2017-09-08 15:40:19
On Fri, 2017-09-08 at 15:14 +0000, Fletcher, John P wrote:
> ________________________________________
> From: P F [pfultz2_at_[hidden]]
> Sent: 08 September 2017 14:06
> To: boost_at_[hidden]
> Cc: Fletcher, John P
> Subject: Re: [boost] [review][Fit] Review of Fit starts today : September 8
> - September 17
>
>
> >
> > >
> > > Is that detailed report available somewhere please?
> >
> > No, a detailed report was never published.
> Oh well.  I have just started to look at this in comparison with some
> examples I made with a version downloaded in February 2017.  I have not
> found any list of the changes.  I have noted the following.
>
> 1.  Change namespace to boost::fit
> 2.  Relocation of header files
> 3.  Add BOOST_ to macro names.
>
> All of those I expected.
>
> Also two of the examples I had done no longer work as "compress" and
> "reverse_compress" have disappeared.
>
> Have they been renamed to something else or removed completely?
These have been renamed to `fold` and `reverse_fold`.
>
> Are there any other changes please?
There are no other breaking changes. Other changes have been improvements to
take advantage of C++17 features where possible, support for msvc 2017, more
testing was added, some documentation improvements, and some minor fixes.
“export”.
Note the use of a delegate in the code as shown below:
public delegate bool IECallBack(int hwnd, int lParam);
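The Win32 functions referenced below (EnumWindows, GetWindowText, GetClassName, and PostMessage) reach managed code through P/Invoke declarations along these lines (a sketch; the namespaces System.Runtime.InteropServices and System.Text are assumed):
[DllImport("user32.dll")]
private static extern int EnumWindows(IECallBack callback, int lParam);

[DllImport("user32.dll")]
private static extern int GetWindowText(int hWnd, StringBuilder text, int count);

[DllImport("user32.dll")]
private static extern int GetClassName(int hWnd, StringBuilder name, int count);

[DllImport("user32.dll")]
private static extern bool PostMessage(IntPtr hWnd, uint msg, int wParam, int lParam);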
Let us now study how the callback is actually implemented. When you click on the button GetIE, it calls the event handler GetIE_Click. The code is shown below:
private void GetIE_Click(object sender, System.EventArgs e)
{
    listBoxHandle = listBox1.Handle; // store a listbox handle here.
    EnumWindows(new IECallBack(IEInstance.EnumWindowCallBack), (int)listBoxHandle);
    label1.Text = "Total Instances of Internet Explorer : " + i;
}

private static bool EnumWindowCallBack(int hwnd, int lParam)
{
    windowHandle = (IntPtr)hwnd;
    listBoxHandle = (IntPtr)lParam;
    // getting an instance of listbox from its handle.
    ListBox lb = (ListBox)ListBox.FromHandle(listBoxHandle);
    StringBuilder sb = new StringBuilder(1024);
    StringBuilder sbc = new StringBuilder(256);
    GetClassName(hwnd, sbc, sbc.Capacity);
    GetWindowText((int)windowHandle, sb, sb.Capacity);
    String xMsg = sb + " " + sbc + " " + windowHandle;
    if (sbc.Length > 0)
    {
        if (sbc.ToString().Equals("IEFrame"))
        {
            myAl.Add(windowHandle);
            i++;
            lb.Items.Add(xMsg);
        }
    }
    return true;
}
The delegate in our example, called IECallBack, takes two arguments: hwnd (a handle to the window) and lParam (an application-defined parameter for any additional data that needs to be passed; in the example we pass the handle to the ListBox control). Both of the arguments are integers. We define a function called EnumWindowCallBack(..) that matches the signature of the delegate, and it is then called by the API function EnumWindows(..) to execute a task.
EnumWindows(..) enumerates through all existing windows (visible or not) and provides a handle to each of the currently open top-level windows. Each time a window is located, the function calls the delegate (named IECallBack), which in turn calls the EnumWindowCallBack(..) function and passes the window handle to it. This is because the API function EnumWindows(..) can only locate the handle but does not know what to do with it. It is up to the callback function (EnumWindowCallBack(..)) to decide what to do with the handle. It calls the API functions GetWindowText(..) and GetClassName(..) to obtain the title and class name of each window. The EnumWindows(..) function continues passing the window handles, one at a time, until all windows have been enumerated, or until the process has been aborted. If an error occurs, the function returns 0; otherwise it returns a non-zero value.
Enumeration is just one scenario where callback functions are used. Another common practice is to create message handler routines for objects like windows. Hence, an API function will require an associated callback function whenever it wants the program calling the API to do some necessary tasks. Callback functions return zero to indicate failure and non-zero values to indicate success. You would notice that we have set the return value of EnumWindowCallBack(..) to true to continue enumeration.
The function GetWindowText retrieves the text that appears in the title bar of a regular window. It takes three parameters: a handle to the window whose title it wants to read; a string variable that receives the window's text (we have used a StringBuilder object so that it has enough room to hold variable-length strings); and the size in bytes of the string.
The function GetClassName retrieves the name of the window class to which the window belongs. The window class determines a number of properties common to a group of windows that are inherited from a main window. Its parameters follow the same pattern as those of GetWindowText. You would see that in the code above we are only concerned with the "IEFrame" class name, because we are interested in dealing with Internet Explorer windows only.
You must have observed the use of the type IntPtr. It is an integer whose size varies with the platform (32 bits on 32-bit platforms and 64 bits on 64-bit platforms). It is used to represent a pointer or a handle.
On clicking the CloseIE button, we are able to close the window containing an instance of Internet Explorer. Consider the code shown below:
private void RemoveIE_Click(object sender, System.EventArgs e)
{
    int index = listBox1.SelectedIndex;
    listBox1.Items.RemoveAt(index);
    int count = 0;
    IEnumerator myEnumerator = myAl.GetEnumerator();
    while (myEnumerator.MoveNext())
    {
        if (count == index)
        {
            listBoxHandle = (IntPtr)myEnumerator.Current;
            break;
        }
        count++;
    }
    PostMessage(listBoxHandle, 0x0010 /*WM_CLOSE*/, 0, 0);
    myAl.RemoveAt(count);
    label1.Text = "Total Instances of Internet Explorer :" + myAl.Count;
}
Note that we have used an ArrayList (part of the System.Collections namespace) to hold all the window handles. The moment an item is selected in the listbox to be removed, we extract the handle of the window from the ArrayList and call the PostMessage function with that window handle and a WM_CLOSE message. This closes the open window. We also remove the instance from the ArrayList and the selected item from the ListBox.
Hope the above article was useful in explaining the concepts. This technique has a lot of benefits, such as being able to control an instance of IE from your Windows application at run time.
Anyone who uses LINQ (or lambdas in general) and the debugger will quickly discover the dreaded message “Expression cannot contain lambda expressions”. Lack of lambda support has been a limitation of the Visual Studio Debugger ever since Lambdas were added to C# and Visual Basic. We’ve heard your feedback and we are pleased to announce that the debugger now supports evaluation of lambda expressions!
Let’s first look at an example, and then I’ll walk you through current limitations.
Example
To try this yourself, create a new C# Console app with this code:
using System.Diagnostics;
using System.Linq;

class Program
{
    static void Main()
    {
        float[] values = Enumerable.Range(0, 100).Select(i => (float)i / 10).ToArray();
        Debugger.Break();
    }
}
Then compile, start debugging, and add “values.Where(v => (int)v == 3).ToArray()” in the Watch window. You’ll be happy to see the same result the screenshot above shows you.
NOTE: Lambda expressions that require native functions to run (e.g. LINQ-to-SQL) are not supported.
Summary
Please let us know how it works for you and what we can do to improve the experience in the comments below, through the Send a Smile feature in Visual Studio, or via Twitter.
Awesome work guys! Really excited to have this coming in VS2015. This will save loads of time stopping the debugger and recompiling after adding LINQ into the source to dig through data.
A few questions:
I notice in the screenshot there is still no syntax highlighting in the Watch window. Is this planned?
Is statement completion available or planned for the Watch/Immediate Windows?
Any chance the Immediate Window will merge will with the C# REPL previewed in Roslyn a few years ago? I found the REPL far more full featured and useful than the Immediate Window has ever been.
@MgSm88
-We are not going to be able to add syntax highlighting to the Watch windows in Visual Studio 2015. However please create a suggestion on User Voice (visualstudio.uservoice.com/forums)
-Statement completion should work in the Watch/Immediate windows. There is a bug in Preview that this does not work on the first line of the Immediate window, but if you press Enter so you are typing on the second line (or below) you should see it.
-It is our long term goal that you will not have to choose between the C# REPL and Immediate window, however at this time we do not have definitive plans we can share in the context of Visual Studio 2015
Very exciting news! At every version of Visual Studio since the introduction of lambda expressions we have been asking for this. I'm glad that advances in your compiler technology finally make this possible.
@MgSm88 – for the "add syntax highlighting to the Watch windows" suggestion, can you please vote here: visualstudio.uservoice.com/…/6707687-add-syntax-highlighting-to-the-watch-windows
Does this include edit and continue?
Fantastic! Thank you!
@fedak: 2015 Preview does not include Edit and Continue support for Lambdas, but stay tuned for future updates…
Debugging Lamda Expressions with Visual Studio 2015 | AlexDresko.com…/debugging-lamda-expressions-with-visual-studio-2015
In the console app it indeed works, but I tried this in an ASP.NET 5 project and still got the dreaded "Expression cannot contain lambda expressions".
Can you explain?
This is a real huge debug improvement. Also, for those who do not know, this is not in the current Visual Studio Preview but will be in the 2015 release version. Keep up the good work.
@Adin, Unless you are debugging a local 32-bit mode IIS worker process, you're hitting the second "known limitation" above.
Thanks for the clarification @zonets
@MgSm88,
Syntax colorization isn't planned for Visual Studio 2015. We would of course love to create a consistent experience of writing and reading code wherever it appears. It's still very early to discuss plans for vNext but that's definitely a scenario we'll consider. You can help us prioritize by creating a user voice suggestion on which the community can then vote at visualstudio.uservoice.com
Completion works in both the Watch and Immediate Windows. However it only works in the Immediate Window at debug time (design-time expression evaluation doesn't have completion).
As for the REPL – the answer is "it's complicated". The Interactive Window (REPL) absolutely represents the baseline for the kind of experience we'd ultimately like to have in the Immediate Window – with all of the IDE experiences we've all come to depend on like colorization, completion, refactorings, and quick fixes. What the prototype we shipped in earlier "Roslyn" CTPs lacked was deep integration into the rest of VS – specifically the debugger. That window was built in an almost entirely Visual Studio independent way on top of our early scripting APIs. Ultimately we would want something that takes full advantage of all of the great experiences that exist in VS already. Designing something like that takes time. We have a design team working hard on designing what the blended experience between the REPL and the Immediate Window would look like, as well as enabling some new scenarios that don't work today, but, sadly, those experiences won't be ready in time for Visual Studio 2015 RTM.
Regards,
Anthony D. Green, Program Manager, Visual Basic and C# Languages Team
Hi all,
This is really a great work, and I hope the VS 2015 RTM will arrive soon!!!
I do understand that in the current Preview there are limitations, but I would like to understand why I cannot get the feature working with the following lines of VB.NET code:
Dim qry As IQueryable(Of DeviceReading) = (From dr As DeviceReading In _HGData.DeviceReadings
Where dr.DeviceID.CompareTo(searchID) = 0)
If DateFrom.HasValue Then qry = qry.Where(Function(item As DeviceReading) item.ReadingTMS >= DateFrom)
If DateTo.HasValue Then qry = qry.Where(Function(item As DeviceReading) item.ReadingTMS <= DateTo)
if I add "qry" to the watch window I get an "Internal error in the expression evaluator", while when I add the whole
"qry.Where(Function(item As DeviceReading) item.ReadingTMS >= DateFrom)" I get the message "The debugger is unable to evaluate expression".
Please let me know if I am hitting a limitation or making some other mistake.
thanx
@Piggy: When you see the message "internal error in the expression evaluator" it does not mean you did anything wrong, but rather you hit a bug. We have fixed a significant number of bugs since Preview in the CTP5 that released today, so please pick that up and lets us know if your scenario is now working (blogs.msdn.com/…/visual-studio-2015-cpt-5-now-available.aspx)
I tried the same simple code in a C# console app and in a Silverlight app, but in the latter it seems that lambda expressions in the Watch window do not work, reporting the usual "Expression cannot contain lambda expressions".
Is there a plan to support this feature in Silverlight apps too?
How could this killer feature be left out of VS for so long? Looking forward 😀
In a Silverlight application with Visual Studio 2015 I get "Expression cannot contain lambda expressions".
Please fix this!
I have the same problem with a silverlight app
Wow this is so Usefull in debugging , Tanx alllllllllllllllllllllloT
Just tried the exact same sample, but I get an "expression cannot contain lambda expressions" message in the watch window. Switching to framework 4.6 and C# 6 did not solve it.
Tried a few settings like enabling Just My Code in debugging; that did not work either.
cube
Cubic DSL for 3D printing
This package is not currently in any snapshots. If you're interested in using it, we recommend adding it to Stackage Nightly. Doing so will make builds more reliable, and allow stackage.org to host generated Haddocks.
Cube: Cubic DSL for 3D printing
Cube is DSL for 3D printing.
It is intended for making original blocks and prototypes as a hobby.
This DSL is based on mathematical algebra. A Cube is the same as a quaternion, and a Block is a set of Cubes. It allows boolean operations (intersection and subtraction) and convolution.
Getting started
Install this from Hackage.
cabal update && cabal install cube
Example
A Block can be converted to STL; a Block is simply a set of Cubes. The following example shows the DSL generating the shape of a house.
import Language.Cube

house :: Block Cube
house = let house'' = (house' + square)*line
        in rfmap (12 * dz +) house''
  where
    house' :: Block Cube
    house' = block $ [cube 1 x y | x <- [-10..10], y <- [-10..10], y < x, y < (-x)]
    square :: Block Cube
    square = rfmap ((+) (-12 * dz)) $ block $ [cube 1 x y | x <- [-5..5], y <- [-5..5]]
    line :: Block Cube
    line = block $ [cube x 0 0 | x <- [-5..5]]

main :: IO ()
main = do
  writeFileStl "house.stl" $ house
Changes
0.2.0
- Fix bug of rotation
- Add RFunctor to define Functor for Data.Set and other data.
- Add sample in cabal
0.1.0
- First Release | https://www.stackage.org/package/cube | CC-MAIN-2017-17 | refinedweb | 229 | 79.26 |
Extraction rules are similar to validation rules, but instead of verifying data, they will extract the data and store it in the Web test context. For more information, see About Extraction Rules.
Microsoft Visual Studio 2005 Team Edition for Software Testers includes the following predefined validation rules:
Form Field
Verifies the existence of a form field that has a specified name and value.
Find Text
Verifies the existence of specified text in the response.
Maximum Request Time
Verifies that the request finishes within a specified amount of time.
Required Attribute Value
Verifies the existence of a specified HTML tag that contains an attribute with a specified value.
Required Tag
Verifies the existence of a specified HTML tag in the response.
Visual Studio Team Edition for Testers provides predefined validation rules in the form of classes in the Microsoft.VisualStudio.TestTools.WebTesting.Rules namespace. However, you can create your own custom validation rules by deriving from the ValidationRule class. For more information, see How to: Create a Custom Validation Rule.
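For instance, a minimal case-insensitive text rule could be sketched as follows. The rule name and property here are invented for illustration; the Validate override, e.IsValid, e.Message, and e.Response.BodyString come from the WebTesting API, and depending on the product version you may also need to supply RuleName/RuleDescription overrides.
using System;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class FindTextCaseInsensitive : ValidationRule
{
    private string expectedText;

    // Text that must appear somewhere in the response body.
    public string ExpectedText
    {
        get { return expectedText; }
        set { expectedText = value; }
    }

    public override void Validate(object sender, ValidationEventArgs e)
    {
        bool found = e.Response.BodyString.IndexOf(
            ExpectedText, StringComparison.OrdinalIgnoreCase) >= 0;
        e.IsValid = found;
        if (!found)
        {
            e.Message = String.Format("The text '{0}' was not found.", ExpectedText);
        }
    }
}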
Execution of validation rules has an impact on performance in load testing. To reduce the performance impact, use the Validation Level of a request to control which validation rules are used in a specific load test. You can set the validation level of each rule to Low, Medium, or High. Typically, the higher you set the validation level, the slower your test will run.
Setting the Validation Level of a rule in a request determines when that validation rule is used in a load test. For example, setting it to High means that the rule is executed only when the load test validation level is set to high.
Low
Invoke only rules with a setting of Low
Medium
Invoke rules with a setting of Low and Medium
High
Invoke all rules - Low, Medium, and High
The ability to set the rule levels in both the Web test request and the load test setting gives you flexibility in your testing. Setting a load test setting to Low executes the fewest rules and can be used for heavy load test and stress runs. Setting a load test setting to High executes the most rules and should be used when validation is more important than maximum throughput. | http://msdn.microsoft.com/en-us/library/ms404670(VS.80).aspx | crawl-002 | refinedweb | 373 | 51.78 |
06 April 2012 04:56 [Source: ICIS news]
SINGAPORE (ICIS)--China's BASF-YPC plans to shut its 300,000 tonne/year monoethylene glycol (MEG) plant at Nanjing in Jiangsu province on 16 April for routine maintenance, a company source said on Friday.
The plant also produces 30,000 tonnes of diethylene glycol (DEG) annually, the source added. DEG is a co-product of MEG.
The company is planning to restart the plant in early May, and the DEG output will fall by 1,500 tonnes during the shutdown period, the source said.
The shutdown will have a limited impact on the market.
The company had planned a shutdown in March-April and has only now finalised the date for the turnaround.
BASF-YPC is a joint venture between German chemical major BASF and Sinopec, with each company holding a 50% stake. | http://www.icis.com/Articles/2012/04/06/9548356/chinas-basf-ypc-to-shut-meg-plant-on-16-april.html | CC-MAIN-2014-41 | refinedweb | 141 | 61.06 |
AIR can render PDFs using the HTMLControl as long as the host computer has Acrobat 8 or later installed.
I have seen many examples of doing this, just without the ability to scale the PDF larger and smaller when a resize of the AIR application occurs. So I worked that issue out and wanted to share it with the Flex community.
To load a PDF into an AIR application, use the HTMLLoader class to load the PDF into your AIR window. Since you cannot add the HTMLLoader as a child, you will need to create a UIComponent() to add the PDF to a container.
If you look at the addFile() function, you will see that I first checked the PDF capability had a STATUS_OK before proceeding to the PDF loading steps with the following line of code:
if(HTMLLoader.pdfCapability == HTMLPDFCapability.STATUS_OK) {}
Once successful, I created a URLRequest to locate the PDF on my Tomcat server. You could easily add a parameter to set the PDF based on your content management needs.
I then set the PDF’s initial height and width to those of the VBox with the id container. After that is done, I loaded the PDF into the HTMLLoader from the URLRequest.
Now that you have a handle on the PDF and it is loaded into the AIR application, you can use the UIComponent.addChild() function to add the PDF to the UIComponent. After that, you need to add that child to the VBox, which is done by calling the addChild() function on container.
If you look at the example from the Adobe LiveDocs, you will notice they give the PDF’s height and width static numbers. What we want to do is allow the PDF to scale when a resize of the AIR application occurs. To accomplish this, create a new function that is executed every time the application is resized. In this example, I created the scalePDF() function that is called by the resize event on the VBox, as follows:
Since the width and height are set to 100% on the VBox, and I set the PDF’s height and width to the VBox’s height and width, the PDF will resize to fit the width and height of the VBox every time it is resized. This will always give you the maximum reading space available for the PDF.
Here is the full source for this example. You can take this code and create a component that can be reused throughout your AIR applications. Enjoy.
<?xml version="1.0" encoding="utf-8"?>
<mx:WindowedApplication
xmlns:mx="http://www.adobe.com/2006/mxml">
<mx:Script>
<![CDATA[
import mx.core.UIComponent;
private var pdf:HTMLLoader = new HTMLLoader();
private function addFile():void
{
if(HTMLLoader.pdfCapability == HTMLPDFCapability.STATUS_OK)
{
var request:URLRequest =
new URLRequest("");
pdf.height = container.height;
pdf.width = container.width;
pdf.load(request);
var ui:UIComponent = new UIComponent();
ui.addChild(pdf)
container.addChild(ui);
}
else
{
trace("PDF cannot be displayed. Error code:",
HTMLLoader.pdfCapability);
}
}
private function scalePDF():void
{
pdf.height = container.height;
pdf.width = container.width;
}
]]>
</mx:Script>
<mx:Button click="addFile()"/>
<mx:VBox id="container" width="100%" height="100%" resize="scalePDF()"/>
</mx:WindowedApplication>
Reverse Mode Differentiation is Kind of Like a Lens II
Reverse mode automatic differentiation is kind of like a lens. Here is the type for a non-fancy lens
type Lens s t a b = s -> (a, b -> t)
When you compose two lenses, you compose the getters (s -> a) and you compose the partially applied setter (b -> t) in the reverse direction.
We can define a type for a reverse mode differentiable function
type AD x dx y dy = x -> (y, dy -> dx)
When you compose two differentiable functions you compose the functions and you flip compose the Jacobian transpose (dy -> dx). It is this flip composition which gives reverse mode it’s name. The dependence of the Jacobian on the base point x corresponds to the dependence of the setter on the original object
The implementation of composition for Lens and AD is identical.
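To see this concretely, here is a sketch of the shared composition (the name comp is mine; it is not from either library):

comp :: (a -> (b, db -> da)) -> (b -> (c, dc -> db)) -> (a -> (c, dc -> da))
comp f g = \a -> let (b, f') = f a
                     (c, g') = g b
                 in (c, f' . g')

The forward parts compose left to right, while the second components compose in the reverse direction, exactly as described above.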
Both of these things are described by the same box diagram (cribbed from the profunctor optics paper).
This is a very simple way of implementing reverse mode automatic differentiation using only non-exotic features of a functional programming language. Since it is so bare bones and functional, is this a good way to achieve the vision of the gorgeous post by Christopher Olah? I do not know.
Now, to be clear, these ARE NOT lenses. Please, I don’t want to cloud the water, do not call these lenses. They’re pseudolenses or something. A very important part of what makes a lens a lens is that it obeys the lens laws, in which the getter and setter behave as one would expect. Our “setter” is a functional representation of the Jacobian transpose and our getter is the function itself. These do not obey lens laws in general.
Chain Rule and Jacobians
What is reverse mode differentiation? One’s thinking is muddled by defaulting to the Calc I perspective of one dimensional functions. Thinking is also muddled by the general conception that the gradient is a vector. This is slightly sloppy talk and can lead to confusion. It definitely has confused me.
The right setting for intuition is $ R^n \rightarrow R^m$ functions
If one looks at a multidimensional to multidimensional function like this, you can form a matrix of partial derivatives known as the Jacobian. In the scalar to scalar case this is a $ 1\times 1$ matrix, which we can think of as just a number. In the multi to scalar case this is a $ 1\times n$ matrix which we somewhat fuzzily can think of as a vector.
The chain rule is a beautiful thing. It is what makes differentiation so elegant and tractable.
For many-to-many functions, if you compose them you matrix multiply their Jacobians.
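In symbols, for differentiable $ f : R^n \rightarrow R^m$ and $ g : R^m \rightarrow R^k$:

$ D[g \circ f](x) = Dg(f(x)) \, Df(x)$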
Just to throw in some category theory spice (who can resist), the chain rule is a functor between the category of differentiable functions and the category of vector spaces where composition is given by Jacobian multiplication. This is probably wholly unhelpful.
The chain rule says that the derivative operator is a functor.
— Functor Fact (@FunctorFact) September 13, 2018
The cost of multiplication for an $ a \times b$ matrix A and a $ b \times c$ matrix B is $ O(abc) $. If we have 3 matrices ABC, with C of size $ c \times d$, we can associate to the left or right, (AB)C vs A(BC), choosing which product to form first. These two associations have different costs: abc + acd for left association versus abd + bcd for right association. We want to use the smallest dimension over and over. For functions that are ultimately many-to-scalar functions, that means we want to multiply starting at the right.
For a clearer explanation of the importance of the association, maybe this will help
Functional representations of matrices
A Matrix data type typically gives you full inspection of the elements. If you partially apply the matrix-vector product function (!*) :: Matrix -> Vector -> Vector to a matrix m, you get a vector-to-vector function (m !*) :: Vector -> Vector. In the sense that a matrix is data representing a linear map, this type looks gorgeous. It is so evocative of purpose.
If all you want to do is multiply matrices or perform matrix vector products this is not a bad way to go. A function in Haskell is a thing that exposes only a single interface, the ability to be applied. Very often, the loss of Gaussian elimination or eigenvalue decompositions is quite painfully felt. For simple automatic differentiation, it isn’t so bad though.
You can inefficiently reconstitute a matrix from its functional form by applying it to a basis of vectors.
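A sketch of that reconstitution using hmatrix (the helper names here are mine):

import Numeric.LinearAlgebra

-- Apply the function to each standard basis vector to recover the columns.
toMatrix :: Int -> (Vector Double -> Vector Double) -> Matrix Double
toMatrix n f = fromColumns [ f (basis i) | i <- [0 .. n-1] ]
  where basis i = fromList [ if j == i then 1 else 0 | j <- [0 .. n-1] ]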
One weakness of the functional form is that the type does not constrain the function to actually act linearly on the vectors.
One big advantage of the functional form is that you can intermix different matrix types (sparse, low-rank, dense) with no friction, just so long as they all have some way of being applied to the same kind of vector. You can also use functions like (id :: a -> a) as the identity matrix, which are not built from any underlying matrix type at all.
To match the lens, we need to represent the Jacobian transpose as the function (dy -> dx) mapping differentials in the output space to differentials in the input space.
The Lens Trick
A lens is the combination of a getter (a function that grabs a piece out of a larger object) and a setter (a function that takes the object and a new piece and returns the object with that piece replaced).
The common form of lens used in Haskell doesn’t look like the above. It looks like this.
type Lens s t a b = forall f. Functor f => (a -> f b) -> (s -> f t)
This form has exactly the same content as the previous form (A non obvious fact. See the Profunctor Optics paper above. Magic neato polymorphism stuff), with the added functionality of being able to compose using the regular Haskell (.) operator.
I think a good case can be made to NOT use the lens trick (do as I say, not as I do). It obfuscates sharing and obfuscates your code to the compiler (I assume the compiler optimizations have less understanding of polymorphic functor types than they do of tuples and functions), meaning the compiler has less opportunity to help you out. But it is also pretty cool. So… I dunno. Edit:
/u/mstksg points out that compilers actually LOVE the van Laarhoven representation (the lens trick) because when f is finally specialized it is a newtype wrapper, which has no runtime cost. Then the compiler can just chew the thing apart.
One thing that is extra scary about the fancy form is that it makes it less clear how much data is likely to be shared between the forward and backward pass. Another alternative to the lens that shows this is the following.
type AD x dx y dy = (x -> y, x -> dy -> dx)
This form is again the same in end result. However it cannot share computation and therefore isn’t the same performance wise. One nontrivial function that took me some head scratching is how to convert from the fancy lens directly to the regular lens without destroying sharing. I think this does it
unfancy :: Lens' a b -> (a -> (b, b -> a))
unfancy l = getCompose . l (\b -> Compose (b, id))
Some code
I have some small exploration of the concept in this git
Again, really check out Conal Elliott’s AD paper and enjoy the many, many apostrophes to follow.
Some basic definitions and transformations between fancy and non fancy lenses. Extracting the gradient is similar to the set function. Gradient assumes a many to one function and it applies it to 1.
import Data.Functor.Identity
import Data.Functor.Const
import Data.Functor.Compose

type Lens' a b = forall f. Functor f => (b -> f b) -> a -> f a

lens'' :: (a -> (b, b -> a)) -> Lens' a b
lens'' h g x = fmap j fb where
    (b, j) = h x
    fb = g b

over :: Lens' a b -> ((b -> b) -> a -> a)
over l f = runIdentity . l (Identity . f)

set :: Lens' a b -> a -> b -> a
set l = flip (\x -> (over l (const x)))

view :: Lens' a b -> a -> b
view l = getConst . l Const

unlens'' :: Lens' a b -> (a -> (b, b -> a))
unlens'' l = getCompose . l (\b -> Compose (b, id))

constlens :: Lens' (a,b) c -> b -> Lens' a c
constlens l b = lens'' $ \a -> let (c, df) = f (a,b) in (c, fst . df)
    where f = unlens'' l

grad :: Num b => Lens' a b -> a -> a
grad l = (flip (set l)) 1
Basic 1D functions and arrow/categorical combinators
import Control.Arrow ((***))
import Numeric.ADLens.Lens

-- add and dup are dual!
add' :: Num a => Lens' (a,a) a
add' = lens'' $ \(x,y) -> (x + y, \ds -> (ds, ds))

dup' :: Num a => Lens' a (a,a)
dup' = lens'' $ \x -> ((x,x), \(dx,dy) -> dx + dy)

sin' :: Floating a => Lens' a a
sin' = lens'' $ \x -> (sin x, \dx -> dx * (cos x))

cos' :: Floating a => Lens' a a
cos' = lens'' $ \x -> (cos x, \dx -> -dx * (sin x))

pow' :: Num a => Integer -> Lens' a a
pow' n = lens'' $ \x -> (x ^ n, \dx -> (fromInteger n) * dx * x ^ (n-1))

--cmul :: Num a => a -> Lens' a a
--cmul c = lens (* c) (\x -> \dx -> c * dx)

exp' :: Floating a => Lens' a a
exp' = lens'' $ \x -> let ex = exp x in (ex, \dx -> dx * ex)

fst' :: Num b => Lens' (a,b) a
fst' = lens'' (\(a,b) -> (a, \ds -> (ds, 0)))

snd' :: Num a => Lens' (a,b) b
snd' = lens'' (\(a,b) -> (b, \ds -> (0, ds)))

swap' :: Lens' (a,b) (b,a)
swap' = lens'' (\(a,b) -> ((b,a), \(db,da) -> (da, db)))

assoc' :: Lens' ((a,b),c) (a,(b,c))
assoc' = lens'' $ \((a,b),c) -> ((a,(b,c)), \(da,(db,dc)) -> ((da,db),dc))

par' :: Lens' a b -> Lens' c d -> Lens' (a,c) (b,d)
par' l1 l2 = lens'' f3 where
    f1 = unlens'' l1
    f2 = unlens'' l2
    f3 (a,c) = ((b,d), df1 *** df2) where
        (b,df1) = f1 a
        (d,df2) = f2 c

fan' :: Num a => Lens' a b -> Lens' a c -> Lens' a (b,c)
fan' l1 l2 = lens'' f3 where
    f1 = unlens'' l1
    f2 = unlens'' l2
    f3 a = ((b,c), \(db,dc) -> df1 db + df2 dc) where
        (b,df1) = f1 a
        (c,df2) = f2 a

first' :: Lens' a b -> Lens' (a, c) (b, c)
first' l = par' l id

second' :: Lens' a b -> Lens' (c, a) (c, b)
second' l = par' id l

relu' :: (Ord a, Num a) => Lens' a a
relu' = lens'' $ \x -> (frelu x, brelu x) where
    frelu x | x > 0 = x
            | otherwise = 0
    brelu x dy | x > 0 = dy
               | otherwise = 0
Some List based stuff.
import Data.List (sort)
import Control.Applicative (ZipList (..))
import Numeric.ADLens.Lens

-- replicate and sum are dual!
sum' :: Num a => Lens' [a] a
sum' = lens'' $ \xs -> (sum xs, \dy -> replicate (length xs) dy)

replicate' :: Num a => Int -> Lens' a [a]
replicate' n = lens'' $ \x -> (replicate n x, sum)

repeat' :: Num a => Lens' a [a]
repeat' = lens'' $ \x -> (repeat x, sum)

map' :: Lens' a b -> Lens' [a] [b]
map' l = lens'' $ \xs ->
    let (bs, fs) = unzip . map (unlens'' l) $ xs
    in (bs, getZipList . ((ZipList fs) <*>) . ZipList)

zip' :: Lens' ([a], [b]) [(a,b)]
zip' = lens'' $ \(as,bs) -> (zip as bs, unzip)

unzip' :: Lens' [(a,b)] ([a], [b])
unzip' = lens'' $ \xs -> (unzip xs, uncurry zip)

maximum' :: (Num a, Ord a) => Lens' [a] a
maximum' = lens'' $ \(x:xs) ->
    let (best, bestind, lenxs) = argmaxixum x 0 1 xs
    in (best, \dy -> onehot bestind lenxs dy)
    where
    argmaxixum best bestind len [] = (best, bestind, len)
    argmaxixum best bestind curind (x:xs) =
        if x > best then argmaxixum x curind (curind + 1) xs
                    else argmaxixum best bestind (curind + 1) xs
    onehot n m x | m == 0 = []
                 | n == m = x : (onehot n (m-1) x)
                 | otherwise = 0 : (onehot n (m-1) x)

sort' :: Ord a => Lens' [a] [a]
sort' = lens'' $ \xs ->
    let (sxs, indices) = unzip . sort $ zip xs [0 ..]
    in (sxs, desort indices)
    where desort indices = snd . unzip . sort . zip indices
And some functionality from HMatrix
import Numeric.LinearAlgebra
import Numeric.LinearAlgebra.Devel (zipVectorWith)
import Numeric.ADLens.Lens
-- import Data.Vector as V

dot' :: (Container Vector t, Numeric t) => Lens' (Vector t, Vector t) t
dot' = lens'' $ \(v1,v2) -> (v1 <.> v2, \ds -> (scale ds v2, scale ds v1))

mdot' :: (Product t, Numeric t) => Lens' (Matrix t, Vector t) (Vector t)
mdot' = lens'' $ \(a,v) -> (a #> v, \dv -> (outer dv v, dv <# a))

add' :: Additive c => Lens' (c, c) c
add' = lens'' $ \(v1,v2) -> (add v1 v2, \dv -> (dv, dv))

-- I need konst I think?
sumElements' :: (Container Vector t, Numeric t) => Lens' (Vector t) t
sumElements' = lens'' $ \v -> (sumElements v, \ds -> scalar ds)

reshape' :: Container Vector t => Int -> Lens' (Vector t) (Matrix t)
reshape' n = lens'' $ \v -> (reshape n v, \dm -> flatten dm)

-- conjugate transpose not trace
tr'' :: (Transposable m mt, Transposable mt m) => Lens' m mt
tr'' = lens'' $ \x -> (tr x, \dt -> tr dt)

flatten' :: (Num t, Container Vector t) => Lens' (Matrix t) (Vector t)
flatten' = lens'' $ \m ->
    let s = snd $ size m   -- number of columns, which reshape expects
    in (flatten m, \dm -> reshape s dm)

norm_2' :: (Container c R, Normed (c R), Linear R c) => Lens' (c R) R
norm_2' = lens'' $ \v ->
    let nv = norm_2 v
    in (nv, \dnv -> scale (2 * dnv / nv) v)

cmap' :: (Element b, Container Vector e) => (Lens' e b) -> Lens' (Vector e) (Vector b)
cmap' l = lens'' $ \c -> (cmap f c, \dc -> zipVectorWith f' c dc)
    where
    f = view l
    f' = set l

{-
maxElement' :: Container c e => Lens' (c e) e
maxElement' = lens'' $ \v ->
    let i = maxIndex v
    in (v ! i, \dv -> scalar 0)
-}

det' :: Field t => Lens' (Matrix t) t
det' = lens'' $ \m ->
    let (minv, (lndet, phase)) = invlndet m
        detm = phase * exp lndet
    in (detm, \ds -> (scale (ds * detm) minv))

diag' :: (Num a, Element a) => Lens' (Vector a) (Matrix a)
diag' = lens'' $ \v -> (diag v, takeDiag)

takeDiag' :: (Num a, Element a) => Lens' (Matrix a) (Vector a)
takeDiag' = lens'' $ \m -> (takeDiag m, diag)
In practice, I don’t think this is a very ergonomic approach without something like Conal Elliott’s Compiling to Categories plugin. You have to program in a point-free arrow style (inspired very directly by Conal’s above AD paper) which is pretty nasty IMO. The neural network code here is inscrutable. It is only a three layer neural network.
import Numeric.ADLens.Lens
import Numeric.ADLens.Basic
import Numeric.ADLens.List
import Numeric.ADLens.HMatrix

import Numeric.LinearAlgebra

type L1 = Matrix Double
type L2 = Matrix Double
type L3 = Matrix Double

type Input = Vector Double
type Output = Vector Double
type Weights = (L1,(L2,(L3,())))

class TupleSum a where
    tupsum :: a -> a -> a
instance TupleSum () where
    tupsum _ _ = ()
instance (Num a, TupleSum b) => TupleSum (a,b) where
    tupsum (a,x) (b,y) = (a + b, tupsum x y)

-- A dense relu neural network example
swaplayer :: Lens' ((Matrix t, b), Vector t) (b, (Matrix t, Vector t))
swaplayer = first' swap' . assoc'

mmultlayer :: Numeric t => Lens' (b, (Matrix t, Vector t)) (b, Vector t)
mmultlayer = second' mdot'

relulayer :: Lens' (b, Vector Double) (b, Vector Double)
relulayer = second' $ cmap' relu'

uselayer :: Lens' ((Matrix Double, b), Vector Double) (b, Vector Double)
uselayer = swaplayer . mmultlayer . relulayer

runNetwork :: Lens' (Weights, Input) ((), Output)
runNetwork = uselayer . uselayer . uselayer

main :: IO ()
main = do
    putStrLn "Starting Tests"
    print $ grad (pow' 2) 1
    print $ grad (pow' 4) 1
    print $ grad (map' (pow' 2) . sum') $ [1 .. 5]
    print $ grad (map' (pow' 4) . sum') $ [1 .. 5]
    print $ map (\x -> 4 * x ^ 3) [1 .. 5]
    l1 <- randn 3 4
    l2 <- randn 2 3
    l3 <- randn 1 2
    let weights = (l1,(l2,(l3,())))
    print $ view runNetwork (weights, vector [1,2,3,4])
    putStrLn "The neural network gradients"
    print $ set runNetwork (weights, vector [1,2,3,4]) ((), vector [1])
For those looking for more on automatic differentiation in Haskell:
Ed Kmett’s ad package
Conal Elliott is making the rounds with a new take on AD (GOOD STUFF).
Justin Le has been making excellent posts and has another library he’s working on. | https://www.philipzucker.com/reverse-mode-differentiation-is-kind-of-like-a-lens-ii/ | CC-MAIN-2021-39 | refinedweb | 2,660 | 65.35 |
Added a help facility to "StandardSetup".
# ... ScriptSetup ScriptFinish);
@EXPORT_OK = qw(GetFile GetOptions Merge MergeOptions ParseCommand ParseRecord UnEscape Escape);
use strict;
use Carp qw(longmess croak);
use CGI;
use FIG_Config;
use PageBuilder;
use Digest::MD5;
use File::Basename;

=head3 StandardSetup

C<< ... >>

There are three special tracing categories that are automatically handled by this
method. In other words, if you used L</TSetup> you would need to include these
categories manually, but if you use this method they are turned on automatically.

=over 4

=item FIG

Turns on trace messages inside the B<FIG> package.

[...]

=back

[...] C<.log> in the FIG temporary directory. The default trace level is 2. To get
all messages, specify a trace level of 4. For a genome-by-genome update, use 3.
[...] C<FIG>, C<Tracer>, and C<DocUtils>. [...] C<FIG> and C<Tracer>.

Finally, if the special option C<-h> is specified, the option names will be
traced at level 0 and the program will exit without processing. This provides a
limited help capability. For example, if the user enters

    TransactFeatures -h

he would see the following output.

    TransactFeatures [options] command transactionDirectory IDfile
        -trace    tracing level (default 2)
        -sql      trace SQL commands
        -safe     use database transactions
        -noAlias  do not expect aliases in CHANGE transactions
        -start    start with this genome
        -tblFiles output TBL files containing the corrected IDs

=cut

sub StandardSetup {
    # Get the parameters.
    my ($categories, $options, $parmHelp, @argv) = @_;
    # Add the tracing options.
    $options->{trace} = [2, "tracing level"];
    $options->{sql} = [0, "turn on SQL tracing"];
    $options->{h} = [0, "display command-line options"];
    # ...
    # Now we want to set up tracing. First, we need to know if SQL is to
    # be traced.
    my @cats = @{$categories};
    if ($retOptions->{sql}) {
        push @cats, "SQL";
    }
    # Add the default categories.
    push @cats, "Tracer", "FIG";
    # Next, we create the category string by prefixing the trace level
    # and joining the categories.
    my $cats = join(" ", $parseOptions{trace}, @cats);
    # Now set up the tracing.
    TSetup($cats, "+>$FIG_Config::temp/trace.log");
    # Check for the "h" option. If it is specified, dump the command-line
    # options and exit the program.
    if ($retOptions->{h}) {
        $0 =~ m#[/\\](\w+)(\.pl)?$#i;
        Trace("$1 [options] $parmHelp") if T(0);
        for my $key (sort keys %{$options}) {
            my $name = Pad($key, $longestName, 0, ' ');
            my $desc = $options->{$key}->[1];
            if ($options->{$key}->[0]) {
                $desc .= " (default " . $options->{$key}->[0] . ")";
            }
            Trace("  $name $desc") if T(0);
        }
        exit(0);
    }
    # Return the parsed parameters.
    return ($retOptions, @retParameters);
}

# ...
    if (ref $traceLevel) {
        Confess("Bad trace level.");
    } elsif (ref $TraceLevel) {
        Confess("Bad trace config.");
    }
# ...

=head3 ScriptSetup

C<< my ($query, $varHash) = ScriptSetup(); >>

Perform standard tracing and debugging setup for scripts. The value returned is
the CGI object followed by a pre-built variable hash. The C<Trace> query
parameter is used to determine whether or not tracing is active and which trace
modules (other than C<Tracer> and C<FIG>) should be turned on. Specifying the
C<CGI> trace module will trace parameter and environment information. Parameters
are traced at level 3 and environment variables at level 4. At the end of the
script, the client should call L</ScriptFinish> to output the web page.

=cut

sub ScriptSetup {
    # Get the CGI query object.
    my $query = CGI->new();
    # Check for tracing. Set it up if the user asked for it.
    if ($query->param('Trace')) {
        # Set up tracing ...
    }
    # ...
}

=head3 ScriptFinish

C<< ... >>

    ($query, $varHash) = ScriptSetup();
    eval {
        # ... get data from ...
    };
    # ...

=cut

sub ScriptFinish {
    # Get the parameters.
    my ($webData, $varHash) = @_;
    # Check for a template file situation.
    my $outputString;
    if (defined $varHash) {
        # Here we have a template file. We need to apply the variables to the template.
        $outputString = PageBuilder::Build("<$webData", $varHash, "Html");
    } else {
        # Here the user gave us a raw string.
        $outputString = $webData;
    }
    # Check for trace messages.
    if ($Destination eq "QUEUE") {
        # ...
        substr $outputString, $pos, 0, QTrace('Html');
    }
    # Write the output string.
    print $outputString;
}

1;