text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringlengths 9
15
⌀ | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
---|---|---|---|---|---|
All we have learned so far is printf, scanf_s, if, do, while, int, and all the math functions.
We're suppose to do this WITHOUT the use of arrays, and WITH a loop.
I have no clue how to do this without arrays.. I know how to convert from binary to decimal and back on paper, but I don't know how to make the computer do this..
Any help would be greatly appreciated!
This is the format the teacher is expecting:
#include "stdafx.h" int _tmain(int argc, _TCHAR* argv[]) { int binary, number; printf("Enter binary number: "); scanf_s("%d", &binary); while () { ???????????? } printf("Decimal equivalent is: %d", decimal); return(0); } | http://www.dreamincode.net/forums/topic/265063-very-simple-c-question-binary-decimal/page__p__1542829 | CC-MAIN-2016-07 | refinedweb | 110 | 75.4 |
Hello everyone! This is the second part of practice questions for the upcoming Quiz 02. The first set covered Dicts, and this one will be focused on memory diagrams. The solutions for these problems can be found here.
NOTE (10/20/2020): On the video, none of the counter variables for the for in loops were defined on the frames, even though they should be..
For these problems, please create a memory diagram for each documenting the entire execution of the program like you have practiced in class.
def main() -> None: """main function.""" grades: List[int] = [30, 62, 88, 100] passfail: List[bool] = [] pass_grade: int = 60 for grade in grades: if grade >= pass_grade: passfail.append(True) else: passfail.append(False) print(passfail) if __name__ == "__main__": main()
def main() -> None: """main function.""" weather: List[int] = [73, 55, 21, 101] temps: List[int] = weather warm: int = 70 cold: int = 32 normalize_temps(temps, warm, cold) print(weather) print(temps) def normalize_temps(temps: List[int], warm: int, cold: int) -> None: i: int = 0 while i < len(temps): if temps[i] >= warm: temps[i] = temps[i] - 10 elif temps[i] <= cold: temps[i] = temps[i] + 20 if __name__ == "__main__": main()
def main() -> None: """main function.""" phrases: List[str] = ["hi", "hiya", "wassup", "howdy"] people: List[str] = ["anna", "kush", "kris", "kaki"] talk(phrases, people) print(phrases) print(people) def talk(greetings: List[str], people: List[str]) -> None: i: int = 0 while i < len(greetings): greetings[i] = greetings[i] + " " + people[i] i = i + 1 happy(greetings) def happy(xs: List[str]) -> None: this_happy: int = 1 for ind in range(len(xs)): i: int = 0 while i < this_happy: xs[ind] = xs[ind] + "!" i = i + 1 this_happy = this_happy + 1 if __name__ == "__main__": main() | https://20f.comp110.com/students/resources/quiz2-practice.html | CC-MAIN-2021-10 | refinedweb | 283 | 67.99 |
convert json schemas to voluptuous schemas
Project description
voluptuary
This is a tool and library to convert a JSON Schema to a Voluptuous schema.
Usage
from voluptuary import to_voluptuous # some JSON Schema json_schema = { 'type': 'object', 'properties': { 'value': {'type': 'integer'}, }, } # convert to a voluptuous schema schema = to_voluptuous(json_schema) # validate something schema({'value': 1})
Why?
This library is for the people who aren’t satisfied with JSON Schema but can’t justify rewriting schemas (“tedious”, “time-consuming”, “little benefit”). But with a library to rewrite the schemas for you, there are no more excuses!
If you are wondering “why voluptuous over JSON Schema”, there are some good reasons listed here. I find voluptuous models to be nicer: more Pythonic, more expressive, easier to read and maintain, easier to customize, better error messages, and so on.
How do I know my converted schema will validate correctly?
First of all, you should always test the validation behavior of your schemas. No library is going to write correct schemas for you.
Aside from that, this library follows the following principles:
- Familiar behavior: This library strives to match the validation behavior of the (Draft 4) validator from jsonschema.
- Proactive testing: This library includes and actively runs a comprehensive suite of tests to ensure behavior in the first point.
- Documentation: This library includes detailed documentation of support for JSON Schema validation features, and strive to keep this up to date.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/voluptuary/ | CC-MAIN-2022-21 | refinedweb | 262 | 52.9 |
telnet [-468ELadr] [-S tos] [-b address] [-e escapechar] [-l user]
[-n tracefile] ``no character''.
-L Specifies an 8-bit data path on output. This causes the TELNET
BINARY option to be negotiated on just output.
connec-
tion to the value tos.
-e escapechar
Sets the escape character to escapechar. If no character is sup-
plied, no escape character will be used. Entering the escape
character while connected causes telnet to drop to command mode.
-l user
Specify user as the user to log in as on the remote system. This
fied, sys-
tem, under the control of the remote system. When input editing or char-
acter echoing is to be disabled, the remote system will relay that infor-
mation. The remote system will also relay changes to any special charac-
ters char-
acter'' acknowl-
edges the TELNET sequence) and flush previous terminal input (in the case
of quit and intr).
Commands:
The following telnet commands are available. Unique prefixes are under-
stood.
below). avail-
able.
Note that the current version of telnet does not support
encryption. explic-
itly for the rest.
? Prints out help information for the environ com-
mand..
name or IP address. The -l option may be used to specify a
user name to be passed to the remote system, like the -l com-
mand begin-
ning with a #, and blank lines, are ignored. The rest of the
file should consist of hostnames and sequences of telnet com-
mands to use with that host. Commands should be one per line,
indented by whitespace; lines beginning without whitespace are
interpreted as hostnames. Lines beginning with the special
hostname 'DEFAULT' will apply to all hosts. Hostnames includ-
ing .
quit Close any open session and exit telnet. An end of file condi-
tion on input, when in command mode, will trigger this opera-
tion char-
acter entered.
getstatus
If the remote side supports the TELNET STATUS command,
getstatus will send the subnegotiation to request that
the server send its current option status.
ip Sends the TELNET IP (Interrupt Process) sequence,
which should cause the remote system to abort the cur-
rently vari-
ables com-
mand. The variables which may be set or unset, but not tog-
g termi-
nal's status character.
acter. char-
acter is taken to be the terminal's erase character.
escape This is the telnet escape character (initially ``^['')
which causes entry into telnet command mode (when con-
nected charac-
ter termi-
nal charac-
ter.
enabled, then this character is taken to be the termi-
nal's start character. The initial value for the kill
character is taken to be the terminal's start charac-
ter.
stop If the TELNET TOGGLE-FLOW-CONTROL option has been
enabled, then this character is taken to be the termi-
nal's stop character. The initial value for the kill
character is taken to be the terminal's stop charac-
ter.
susp If telnet is in localchars mode, or LINEMODE is
enabled, and the suspend character is typed, a TELNET
SUSP sequence (see send susp above) is sent to the
remote host. The initial value for the suspend char-
acter spe-
cial characters. The remote side is requested to
send all the current special character settings,
and if there are any discrepancies with the local
side, the local side will switch to the remote
value.
export Switch to the local defaults for the special char-
acters. The local default characters are those of
the local terminal at the time when telnet was
started.
import Switch to the remote defaults for the special
characters. The remote default characters are
those of the remote system at the time when the.
Note that this flag exists only if encryption
support is enabled. com-
mand.
encdebug Turns on debugging information for the encryp-
tion code. Note that this flag only exists if
encryption support is available. are sent as eof and
susp, see send above).
netdata Toggles the display of all network data (in
hexadecimal format). The initial value for this
toggle is FALSE.
options Toggles the display of some internal telnet pro-
tocol.
verbose_encrypt
Execute a single command in a subshell on the local system.
If command is omitted, then an interactive subshell is
invoked.
? [command]
Get help. With no arguments, telnet prints a help summary.
If a command is specified, telnet will print the help informa-
tion for just that command.
ENVIRONMENT
Telnet uses at least the HOME, SHELL, DISPLAY, and TERM environment vari-
ables.. | http://www.linux-directory.com/man1/telnet.shtml | crawl-003 | refinedweb | 741 | 65.93 |
#1 Members - Reputation: 135
Posted 14 March 2014 - 07:35 AM
#2 Crossbones+ - Reputation: 13624
Posted 14 March 2014 - 07:59 AM
There shouldn't be much of a delay from the key being pressed to the message being sent to your program. I don't have any experience with Windows programming, but I have used SDL and SFML on Linux and Mac OS X, and this has never been a problem.
Looking at your code a bit, it looks like you simply register a callback function to process events, so you don't have a whole lot of control of when they get processed. Perhaps you could write your game loop more explicitly, and then you can make sure you process all the events in the queue before you proceed to rendering? [EDIT: Never mind. I see you are dispatching all the events in a loop. I missed that in my first reading.]
Edited by Álvaro, 14 March 2014 - 08:02 AM.
#3 Crossbones+ - Reputation: 4910
Posted 14 March 2014 - 08:11 AM
I think nothing is inherently wrong with your code (even though the behavior that you get is obviously undesired, I think it is somehow nevertheless exactly doing what you're asking for).
What you are doing is peek and drain the message queue until it is empty, then you draw some stuff and flip buffers, which presumably blocks for 16ms.
16ms is a very long time for a computer, and everything else that you do happens more or less "instantly" in comparison to that. Therefore most of the time, this will just work fine, as you spend 99% of your time inside SwapBuffers, and so your different key presses all arrive while your application is blocked. When you next drain the message queue, you get all events that are in the queue, and it's correct.
However, sometimes, it may just happen that one key event arrives in the message queue while you're draining it. After that, there are none left, so your loop continues (of course, what else!). The next event arrives, but that is now irrelevant since you're already somewhere in your OpenGL calls, and after that you block for 16ms. So you get a huge delay between two keypresses that actually happen simultaneously (or nearly so).
Now, why you get figures of 100+ milliseconds, I can't answer. That's truly odd.
(I once upon a time, years ago, tried stamping messages with values obtained from timeGetTime by the way, and Windows message queues push events through much faster than the minimum resolution (which is 0.5ms here), so I doubt it's a Windows message problem as such)
Edited by samoth, 14 March 2014 - 08:17 AM.
#4 Members - Reputation: 135
Posted 14 March 2014 - 08:16 AM
Actually cause of problem is that there no such thing as "simultaneous" in programming. For instance, I press and hold two keys (down & right) and move diagonally, then I "simultaneous" (as quickly as possible) release these both keys, but window message loop (pretty same with raw input) receives WM_KEYUP Down and then only after slightly delay (it can reach 50-100 ms according to tests!) it receives WM_KEYUP Up. Because game logic and rendering processes much faster (60 fps, vsync, 16 ms) it assumes Up key is still being pressed (changes direction and animation frame to Up instead of keeping Diagonal). You can compile given code and test yourself, it has no dependencies and fully working.
Edited by Demion, 14 March 2014 - 08:25 AM.
#5 Crossbones+ - Reputation: 4910
Posted 14 March 2014 - 08:23 AM
You got that right, there is no such thing as "simultaneous", since only one message can enter the message queue at the same time (even if the keys are pressed at exactly the same time).
That's what I mean... usually this will work just fine, and the simultaneous events will be quasi-simultaneous in your queue (following one after the other). But of course it may happen that you process one, see that the queue is empty, and move on. And then, the event that is being posted immediately after ("simultaneously") will have to wait for a full frame.
Except 100ms is more like 8 frames... which I don't understand. That smells like you're lost in a call to Sleep somewhere, but I don't see one anywhere in that code.
Edited by samoth, 14 March 2014 - 08:23 AM.
#6 Members - Reputation: 135
Posted 14 March 2014 - 08:42 AM
Log from first program - pastebin
It is rarely 100 milliseconds but often within 25-50 milliseconds, which is 1 or 2 frames (using VSync 60 fps, ~ 16 ms per frame) which already leads to unwanted result (game logics changes direction, animation frame (angle) etc).
So only solution I see so far is using raw input (which is subjectively a little bit faster) instead of wm_keydown / up and process input in game with periodic delay (once per 2-5 frames instead of every frame) to avoid unwanted results.
According to stackoverflow delays of the order of 50 ms or so are common in processing key presses through the normal Windows message queue.
Thanks for effort anyway. I am still looking for any better solutions.
Edited by Demion, 14 March 2014 - 08:43 AM.
#7 Members - Reputation: 1379
Posted 14 March 2014 - 10:18 AM
I tried your code, and nearly always get ~20ms delay between simultaneous key presses. I wasn't aware that the delay was so significant. Incidentally, WM_KEYxxx vs WM_INPUT was the same - I saw no improvement whatsoever.
However, my wireless keyboard receiver is attached to my monitor USB connector, which is cabled to the PC, which can't help. I wonder if a wired keyboard fares any better?
#8 Members - Reputation: 1790
Posted 15 March 2014 - 03:17 PM
I saw that you directly add 1 to the velocity each time you get a message, without any checks, and then directly add that to the position. I would not do that, because its dependent on how many messages you get, which can be dependent on hardware or windows.
You also never reset those values, so you have to first accumulate many messages before you have enough to compensate after changing direction.
I would instead have acceleration values dependent on keys pressed on last frame and clamp those to some maximum at each change.
Then calculate the velocity from that and the frame time and clamp it to some maximum and possibly dampen it so that it goes back to zero slowly. Only then calculate the position.
Reset the acceleration to zero before next frame.
#9 Members - Reputation: 135
Posted 16 March 2014 - 02:39 AM
I saw that you directly add 1 to the velocity each time you get a message, without any checks, and then directly add that to the position.
Sorry, you are very wrong. I add velocity only once on key press and this has nothing to do with my problem.
if ((lParam & (1 << 30)) == 0) ProcessInput(wParam, 4);
if (!Keys[raw->data.keyboard.MakeCode]) { Keys[raw->data.keyboard.MakeCode] = true; ProcessInput(MapVirtualKey(raw->data.keyboard.MakeCode, MAPVK_VSC_TO_VK), 4); }
#10 Crossbones+ - Reputation: 2560
Posted 16 March 2014 - 05:37 AM
Your GetTime() method looks suspicious to me. You could be throwing away lots of precision because of the integer divide.
The simplest way to solve that is to make it return a double, and make the program start time come out as zero. Note that float is no good for this - after the program has been running for a while the accuracy will be too low.
// Get current time in seconds double GetTime() { static LARGE_INTEGER frequency = GetFrequency(); LARGE_INTEGER counter; QueryPerformanceCounter(&counter); // We want time zero to be when the program starts. I've hacked that in with a static variable here. static LARGE_INTEGER programStartTime = counter; return (double)(counter.QuadPart - programStartTime.QuadPart) / (double)frequency.QuadPart; }
#11 Members - Reputation: 1790
Posted 16 March 2014 - 05:39 AM
Yeah sorry, I overlooked that if. Yet the other points still stand, and that you only accept 1 message and ignore the rest will make it still look wrong, for example, if you do push-up, release-up, push-up, push-right, you have added to up-velocity twice and right-velocity once and that will have the effect of not looking diagonal until you press down once.
And you are using an angle there that is not correctly representing the movement you apply:
x += xvel; y += yvel; if ((xvel != 0) || (yvel != 0)) { if ((xvel != 0) && (yvel != 0)) degrees = 45; else degrees = 0; } glClear(GL_COLOR_BUFFER_BIT); glLoadIdentity(); glTranslatef(x + quadWidth / 2.f, y + quadHeight / 2.f, 0); glRotatef(degrees, 0, 0, 1);
And that velocity depends too much on old keypresses so if press the keys a few times and dont count and press the opposite keys exactly the same number of times it will pretty much always show 45 degrees.
#12 Members - Reputation: 135
Posted 16 March 2014 - 06:05 AM
Your GetTime() method looks suspicious to me.
GetTime() multiplies counter x 1 000 000 before division and that gives perfect microsecond accuracy ( frequency = 1 second = 1 000 000 microseconds). Anyway please also take a look at second source code which actually makes drawing without any time involved. Its not time releated problem at all.
for example, if you do push-up, release-up, push-up, push-right, you have added to up-velocity twice and right-velocity once and that will have the effect of not looking diagonal until you press down once.
Edited by Demion, 16 March 2014 - 06:10 AM.
#13 Crossbones+ - Reputation: 13904
Posted 16 March 2014 - 06:20 AM
According to stackoverflow delays of the order of 50 ms or so are common in processing key presses through the normal Windows message queue.
That has nothing to do with 2 keys being pressed at once, which should be consistently as far apart in microseconds as you are actually pressing them.
The problem is your method of timing. You should be timing only the point at which WM_INPUT is caught, not all the other code that is run inside WM_INPUT.
In other words, good:
case WM_INPUT: { unsigned long long MsgTime = GetTime(); // Do stuff and log MsgTime.
Bad:
case WM_INPUT: { // Do a bunch of stuff, allocate some memory (this is likely why your timings vary so much). … … Records[Count].Time = GetTime();
You’re aren’t currently timing when the key was pressed, you are timing when the key was pressed, plus some logic plus a call to new.
V-sync isn’t helping either. It will lag your input thread unless you are rendering from another thread. This can easily add 16 milliseconds to your timings, but your first test case does not have this problem.
L. Spiro
Edited by L. Spiro, 16 March 2014 - 06<<
#14 Members - Reputation: 135
Posted 16 March 2014 - 06:25 AM
You’re aren’t currently timing when the key was pressed, you are timing when the key was pressed, plus some logic plus a call to new.
Good point. Although WM_KEYDOWN / WM_KEYUP approach (commented out) has no allocations and same results. Anyway in real game I need to determine which event happend (press or release) and then determine key, so I need to do this allocations.
Especially for you reupload and fixed - pastebin
Last test up to 65 ms delays, Vsync is only 16 ms.
Edited by Demion, 16 March 2014 - 06:31 AM.
#15 Members - Reputation: 1344
Posted 16 March 2014 - 12:22 PM
Unlikely to be of help, but i note anyway:
You are using QueryPerformanceCounter - what is the frequency you are given for it (btw, there is no need to re-query the frequency as it is strictly not allowed to change at runtime)?
That 65 sounds quite close to the 55 resolution limit (8253/8254 timer?). I have never seen GPC using that timer, but it is possible if your hardware can not give anything better (for example, my CPU has dynamic clock frequency and hence can not be used - instead some unknown much-much lower frequency timer is used [~3.5 mil, but high resolution]). I highly doubt you are suffering from bad GPC timer, but for sanity check you could do QPC in a loop and output the times (without frequency division of course) - to see what the resolution for your given frequency actually is.
edit: On second though - ignore that. Too bloody unlikely.
Edited by tanzanite7, 16 March 2014 - 12:36 PM.
#16 Crossbones+ - Reputation: 13904
Posted 16 March 2014 - 01:37 PM
He only calls it once; it is a static.He only calls it once; it is a static.
there is no need to re-query the frequency as it is strictly not allowed to change at runtime
V-sync and other factors are why input actual in games are not handled in the way shown in your test application, so I disregard those results entirely and would only accept your data for your basic non-OpenGL test.V-sync and other factors are why input actual in games are not handled in the way shown in your test application, so I disregard those results entirely and would only accept your data for your basic non-OpenGL test.
Last test up to 65 ms delays, Vsync is only 16 ms.
You are still timing the call to new; it just spans across multiple calls to WindowProc(). That is, you store the current time after WM_INPUT, then do stuff, but all that stuff you do is delaying the next WM_INPUT which could already be in the buffer and waiting.
Eliminate the call to new entirely.
Make a static buffer of an array of 3 RAWINPUT structures and if the first call to GetRawInputData() returns a size greater than (sizeof( RAWINPUT ) * 3) then print an error, dump the message, and increase the size of the static buffer if you want.
While this may not be what you would do in an actual game, the important point now is to find the bottleneck.
If it improves the timings then you know at least one of the main culprits. If not, it won’t be a problem in a real game to do it properly but you need to keep searching for the answer before you add back the allocations (which are leaking, just so you know).
You should also be prepared to accept that your timings are accurate. Maybe you aren’t hitting 2 keys as closely together as you think.
L. Spiro
Edited by L. Spiro, 16 March 2014 - 01:43<<
#17 Members - Reputation: 1379
Posted 16 March 2014 - 03:07 PM
Using the <time> variable from the MSG structure gives similar results - 15,16,31 or 32 ms between calls. As L. Spiro said, it's quite possible that no one can actually hit two keys at precisely the same time.
Incidentally, I'm testing on a Win8.1 machine. Maybe an XP or Win7 system will give different results. Unlikely, but...
Edit - here are some actual numbers. The first is the time according to QueryPerformanceCounter, the second as reported by the time variable in the MSG structure. Both are in milliseconds.
40.37 - 47
19.97 - 15
19.98 - 16
21.76 - 32
19.87 - 31
19.92 - 16
20.03 - 31
20.74 - 16
18.82 - 15
19.99 - 16
20.21 - 16
I've altered your program quite considerably, moving the timing into the GetMessage loop.
Edited by mark ds, 16 March 2014 - 03:29 PM. | http://www.gamedev.net/topic/654442-simultaneous-multiple-key-press-release-delay/?k=880ea6a14ea49e853634fbdc5015a024&setlanguage=1&langurlbits=topic/654442-simultaneous-multiple-key-press-release-delay/&langid=2 | CC-MAIN-2014-49 | refinedweb | 2,605 | 70.02 |
How to Build a Robot - Lesson 6: Build an Arduino Robot That Can Monitor CO2 Density
This.
CO2 Gas Sensor for Arduino× 1
Step 1:
Step 2:
Step 3:
M3*6MM Nylon columns and tie-wraps
Step 4:
ASSEMBLY INSTRUCTION:
STEP 1: Add the touch sensor
There are two holes on the touch sensor for fixing the Nylon columns.
Fix the Nylon columns. Please do not over-twist those columns.
Step 5:
Then attach the touch sensor on the plate.
Step 6:
STEP2: Add the LCD Screen
Slide the shores into the four holes of the LCD screen and fix them. Cut the remaining part of the tie-wraps.
Step 7:
STEP3: Add the CO2 Sensor
Fix the Nylon columns on the CO2 Sensor. Attach the CO2 Sensor on the sensor plate.
Step 8:
You almost complete the assembling. Please do not fix the upper plate onto the platform yet as we need to work on the circuit connection later.
Step 9:
CONNECT THE HARDWARE:
Please keeping cables in order.
The interface is colored as follows:
Red indicates power
Black indicates ground
Blue indicates Analog Input Pin
Green indicates Digital I/O Pin
The LCD monitor should be connected to VCC, GND, SCL and SDA in that particular order.
Step 10:
CODING
Find the code named DHT11_Display.ino and download it. Don’t forget the library for LiquidCrystal_I2C and CO2.
Step 11:.
CODE SYNOPSIS Library is important. It is hard to understand the library without library.
#include #include
LiquidCrystal_I2C lcd(0x20,16,2);
#include "CO2.h"
CO2Sensor CO2ppm;
Here you need to know about CO2Pin, a variable that is used to declare the sensor’s pins.
int CO2Pin = A1;
Namely, DHT11Pin represents Analog Pin1. That is to say, our CO2 sensor is connected to Analog Pin1.
The followings are some declarations for the time variables. TouchPin represents touch sensor while 13 stands for digital pin.
long currentMillis=0;
long previousMillis;
long Interval=4000;
int count=0; //counting numbers
int touchPin = 13;
Bring in the function of setup(), which is a setting-up for the initation.
pinMode(touchPin,INPUT);
Then keep the touch sensor with a type-in mode. For specific information, you may check the Arduino Reference in Arduino Website(), which has introductof the function of pinMode().
Next, you need to initialize the LCD screen and turn on the LCD light, which show that LCD screen is ready.
lcd.init();
lcd.backlight();
delay(100);
lcd.setBacklight(0);
Now it’s turn for the function of loop(). First we need to read the value from the touch sensor and then store those data with one variable touchState.
int touchState = digitalRead(touchPin);
Then check if the controller will receive a signal of HIGH once you touch the touch sensor with you fingers, 1 shall be added to the count.
if (touchState == HIGH){ count++; previousMillis= millis(); }
Hereby count means how many times you have touched the screen. But if you only touch the sensor one time, then the amount of time for each touch will be included in the function of millis().
We change the length of touch time with a sub sentence initiating with if. Interval here means the period for touch we set up. Thus, we know what action shall be taken within four-second of touch and more than four seconds’ touch respectively.
if(currentMillis - previousMillis < Interval) { //do something in 4 second
else{ //do something more than 4 seconds }
lcd.setBacklight(0);
The function of setBacklight() is used to turn off the LCD backlight lamp.
What action shall be taken when we touch the sensor for more than four seconds
If we touch the sensor for more than four seconds, we know that the LCD backlight lamp can be turned off.
what action shall be taken within four-second of touch.
if (count==1){ // One touch, the LCD screen won’t show any difference }
else if (count==2){ // Touch twice, value will be shown on the LCD screen }
Press the touch sensor one more time within four seconds; the screen would still be off. Only if you touches it twice at the same time, will the LCD backlight be on and figures of CO2 density be shown.
Please remember to keep the count as zero after you touch the sensor for the last time.
count=0;
Thus the complete code shall be:
if (count==1){ lcd.setBacklight(0); }
else if (count==2){ lcd.backlight(); DustShow(); count=0; }
Then we need to keep a track of the current time as we can compare it with previousMillis. This point is very important.
currentMillis = millis();
The function of CO2ppm.Read() is used to read data. And the variable CO2Value will be used to store the data from the CO2 sensor.
int CO2Value=CO2ppm.Read(CO2Pin);
Here is how we would used the function related to LCD screen.
lcd.setCursor(0,0);
lcd.setCursor(0,1);
The function of setCursor(column,row) is used to demonstrate which column and row the cursor is shown, starting from zero within the brackets.
lcd.print(CO2Value);
print() means this figure can be shown on the screen directly.
lcd.print(" ");
lcd.print(" ") means blank space shown on the screen. It’s used to clear the screen.
A Combination of Multiple Sensors How can you combine multiple environmental sensors once you have bought some kind of sensors?
Don’t worry. We will offer you guys a coding template for the testing of multiple sensors. You can make adjustments of the combination by referring to the mentioned template. In fact, the theory is the same as single sensor except that there are for steps for the changes of LCD screen.
The coding in red below needs to be modified. We mentioned before that count refers to how many times fingers touch the sensor. Thus, count=2 means that we have pressed twice and it shows the figures for the first sensor. Keep going! Please bear in your mind that you shall keep the count zero again.
Sample Code:
if(currentMillis - previousMillis < Interval) {
if (count==1){ lcd.setBacklight(0); }
else if (count==2){ //No.1 Sensor Sensor1Show(); lcd.backlight(); }
else if(count==3){ //No.2 Sensor Sensor2Show(); lcd.backlight(); count = 0; }
Of course, initiation set-up, declaration of variables at the beginning, for the sensor is important.
You can check the sample code named WeatherStation.ino for reference if you still have no idea how to modify your codes. | http://www.instructables.com/id/How-to-Build-a-Robot-Lesson-6-Build-an-Arduino-Rob/ | CC-MAIN-2017-17 | refinedweb | 1,068 | 74.39 |
I'm trying to create slot game with scala programming, I thought scala is pretty much like javascript which could create new Array or new something else with function.
Please suggest me what to do with this code:
import scala.util.Random
import scala.math
val randSymbol = List(1,2,2,3,3,3,4,4,4,4,5,5,5,5,6,6,6,6,6,6,7,7,7,7,7,7,8)
val _finalSymbol = List(List(0,0,0,0,0),List(0,0,0,0,0),List(0,0,0,0,0))
var i:Int = 0
var a:Int = 0
for (i <- 0 to 2){
_finalSymbol(i) = new List
for (a <- 0 to 4){
var iRandIndex = floor(Random.nextInt() * randSymbol.length).toInt
var iRandSymbol = randSymbol(iRandIndex)
_finalSymbol(i)(a) = iRandSymbol
}
}
scala
return
_finalSymbol
var result
Try something like this:
val _finalSymbol = (0 to 2) map { _ => Random.shuffle(randSymbol).take(5) }
And do yourself a favor: buy a scala book. It's not javascript. At all. | https://codedump.io/share/e6MG4ZEN6ADs/1/update-list-with-loop-arguments-scala | CC-MAIN-2017-34 | refinedweb | 170 | 64.61 |
I have an XML like this:
<Envelope>
<Node> <Status>1</Status>
<Name1>John</Name1>
<Name2>Smith</Name2> </Node> <Node> <Status>2</Status>
<Name1>Jane</Name1> <Name2>Doe</Name2> </Node></Envelope
I am attempting to learn the best practices for the following
scenario.
I have a defined set of data objects that are subject
to change with updates. And some of these objects have arrays of other
objects within them.
Using sqlite I have the
database set up with the following pattern: Each object is its on table.
And if an object has a child object within it
sqlite
Currently, I have a dictionary that has a number as the key and a Class
as a value. I can access the attributes of that Class like so:
dictionary[str(instantiated_class_id_number)].attribute1
Due to memory issues, I want to use the
shelve module. I am wondering if doing so is plausible. Does a
shelve dictionary act the exact same as a standard di
shelve
I want to search for a string in 10 files and write the matching lines
to a single file. I wrote the matching lines from each file to 10 output
files(o/p file1,o/p file2...) and then copied those to a single file using
10 threads.
But the output single file has mixed output(one
line from o/p file1,another line from o/p file 2 etc...) because its
accessed simultaneously by many threa
I'm kind of curious about what the best practice is when referencing the
'global' namespace in javascript, which is merely a shortcut to the
window object (or vice versia depending on how you look at
it).
window
I want to know if:
var answer =
Math.floor(value);
is better or worse than:
var answer = window.Math.floor(value);
javascript
accessing
window
Math
slower
faster
Is there a way to access (read or free) memory chunks that are outside
the memory that is allocated for the program without getting access
violation exceptions.Well what I actually would like to understand
apart from this, is how a memory cleaner (system garbage collector) works.
I've always wanted to write such a program. (The language isn't an
issue)
Thanks in advance :)
I'm trying to access a <li> tag in my first master
page file. I tried FindControl(..) but it allways returns null.
<li>
Structure:
<li
id="element" runat="server"
What do I need to do to access
the li element?
When I examine a particular user (friend) via the Graph API, I can see
their bio. How do I access the BIO with FQL?
I seem to be able
to access the few items that the Graph API does return with FQL, but not
this one.
I am running a subversion service on my localhost, I want users on the
LAN to be able to access this repository without being prompted for
username and password. Is there any way to do this.
Hope you all having a good day!
I need your recommendations
for something I've been thinking about these last three days. I have a COM
component written in an unmanaged platform. The component has a method that
returns a sort of some sensitive data and I need to store the value as soon
as I get it.
What I need is to call a UDF to access the COM
object and get the value. | http://bighow.org/tags/accessing/1 | CC-MAIN-2017-47 | refinedweb | 566 | 71.65 |
.
$ vim valgring_test.c
#include <stdio.h> #include <stdlib.h> int main() { char *ptr = (char *) malloc(1024); char ch; /* Uninitialized read */ ch = ptr[1024]; /* Write beyond the block */ ptr[1024] = 0; /* Orphan the block */ ptr = 0; exit(0); }
$ gcc -Wall -pedantic valgrind_test.c
this will print the unused variables, warning etc
$ sudo apt-get install splint
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following extra packages will be installed:
splint-data
Suggested packages:
splint-doc-html
The following NEW packages will be installed:
splint splint-data
0 upgraded, 2 newly installed, 0 to remove and 261 not upgraded.
Need to get 928 kB of archives.
After this operation, 2,998 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 trusty/universe splint-data all 3.1.2.dfsg1-2 [182 kB]
Get:2 trusty/universe splint i386 3.1.2.dfsg1-2 [746 kB]
Fetched 928 kB in 11s (82.7 kB/s)
Selecting previously unselected package splint-data.
(Reading database … 210095 files and directories currently installed.)
Preparing to unpack …/splint-data_3.1.2.dfsg1-2_all.deb …
Unpacking splint-data (3.1.2.dfsg1-2) …
Selecting previously unselected package splint.
Preparing to unpack …/splint_3.1.2.dfsg1-2_i386.deb …
Unpacking splint (3.1.2.dfsg1-2) …
Processing triggers for man-db (2.6.7.1-1ubuntu1) …
Setting up splint-data (3.1.2.dfsg1-2) …
Setting up splint (3.1.2.dfsg1-2) …
$ splint -strict valgrind_test.c
Splint 3.1.2 --- 03 May 2009 valgrind_test.c:4:5: Function main declared without parameter list A function declaration does not have a parameter list. (Use -noparams to inhibit warning) valgrind_test.c: (in function main) valgrind_test.c:6:29: Function malloc expects arg 1 to be size_t gets int: 1024 To allow arbitrary integral types to match any integral type, use +matchanyintegral. valgrind_test.c:9:6: Index of possibly null pointer ptr: ptr) valgrind_test.c:6:13: Storage ptr may become null valgrind_test.c:9:6: Array element ptr[1024] used before definition An rvalue is used that may not be initialized to a value on some execution path. (Use -usedef to inhibit warning) valgrind_test.c:11:1: Assignment of int to char: ptr[1024] = 0 Types are incompatible. (Use -type to inhibit warning) valgrind_test.c:13:1: Fresh storage ptr (type char *) not released before assignment: ptr = 0 A memory leak has been detected. Storage allocated locally is not released before the last reference to it is lost. (Use -mustfreefresh to inhibit warning) valgrind_test.c:6:35: Fresh storage ptr created valgrind_test.c:9:6: Possible out-of-bounds read: ptr[1024] Unable to resolve constraint: requires maxRead(malloc(1024) @ valgrind_test.c:6:22) >= 1024 needed to satisfy precondition: requires maxRead(ptr @ valgrind_test.c:9:6) >= 1024 A memory read references memory beyond the allocated storage. (Use -boundsread to inhibit warning) valgrind_test.c:11:1: Likely out-of-bounds store: ptr[1024] Unable to resolve constraint: requires 1023 >= 1024 needed to satisfy precondition: requires maxSet(ptr @ valgrind_test.c:11:1) >= 1024 A memory write may write to an address beyond the allocated buffer. (Use -likelyboundswrite to inhibit warning) Finished checking --- 8 code warnings | https://www.lynxbee.com/identifying-security-vulnerabilities-and-coding-mistakes-code-review-using-splint/ | CC-MAIN-2020-24 | refinedweb | 535 | 53.47 |
Hello,
I'm having some problems in using DataGrid with LINQ.
The following code makes crashing the IE7.
XDocument xmlProducts = XDocument.Parse(xmlContent); var products = from product in xmlProducts.Descendants("Product") select new { ProductID = Convert.ToInt32(product.Element("ProductId").Value), ProductName = (string)product.Element("ProductName").Value }; datagrid1.ItemsSource = products;
Env:
1) Vista Home Premium (32) 2) VS 2008 + SL2 (beta1)3) IE7
Let me know if you need more details.
Note:
If you are using class and List<T> like the example from doc, it's working fine. but why not work with LINQ and implicitly-typed variable?
(If this has answered your question, please click on "Mark as Answer" on this post. Thank you!)Best Regards,Michael SyncMicrosoft WPF & Silverlight InsiderBlog :
Hello, actually the problem is: Anonymous Types are internal. Currently data binding doesn't support binding to non-public classes. Try to use a ListBox to bind to an internal class, you'll find a similar problem.
The root cause seems to be: Data binding uses reflection, and reflection needs high security permissions. Silverlight runs in a sand box, where such security permissions are not granted.
shanaolanxing - I'll transfer to the Windows Azure team, and will have limited time to participate in the Silverlight forum. Apologize if I don't answer your questions in time.
Thanks a lot for your answer. Yi-Lun.
Yi-Lun Luo - MSFT:Try to use a ListBox to bind to an internal class, you'll find a similar problem.
Hello Yi-lun,
I just tired to use ListBox instead of Datagrid. it's working fine. Could you please review this issue?
Hi Michael, sorry for the late response. But I really don't see how you get it to work with ListBox. This is a known bug. swildermuth (The ADO Guy) already reported this issue on a ListBox from the Jolt private connect. Any ItemsControl bound to internal types will cause this problem. While data binding to internal types such as anonymous types are not supported, we understand we should not hang the browser. Our developers are investigating this issue. | http://silverlight.net/forums/p/11147/36232.aspx | crawl-002 | refinedweb | 348 | 60.41 |
This article will teach you the bare minimum you need to know in order to start creating apps on top of the Dropbox API.
Once you’ve read it, you can also check out our free course on the Dropbox API if you’re interested in learning more. In that course, you’ll learn how to build an expense organizer app using modern JavaScript.
This article uses JavaScript for its examples, however, the SDKs are very similar across languages, so even if you’re for example a Python developer, it should still be relevant.
The setup
In order to build on top of Dropbox, you first need a Dropbox account. After you’ve registered, head over to the developer section. Choose My apps on the lefthand side of the dashboard and click Create app.
Choose the following settings, and give your app a unique name.
Preferred settings for this tutorial
In the dashboard, go to OAuth 2 section under Generated access token and click the
Generate button to get an API
accessToken, which we will save for later.
Now, let’s install the Dropbox desktop app. Log in to the app with your new developer credentials and you should be able to see a folder with the same name as your newly created app. In my case, it’s
LearnDbxIn5Minutes.
Drop some files and images into the folder, so we can access them via our API.
Installation and initial Dropbox class
Now let’s install Dropbox library to our project.
npm install dropbox
or
yarn add dropbox
Import Dropbox and create
dbx with our token and fetching library passed into our class instantiation. If you prefer
axios or any other fetching library, feel free to pass it instead.
import { Dropbox } from 'dropbox'; const accessToken = '<your-token-from-dashboard>'; const dbx = new Dropbox({ accessToken, fetch });
Note that Dropbox is a named import. The reason is that there are other sub-libraries within
'dropbox', for example,
DropboxTeam, but we will focus only on
Dropbox in this tutorial.
Getting files
The first method we’re going to look at is for getting files.
dbx.filesListFolder({ path: '' }).then(response => console.log(response))
filesListFolder() takes a path to the target folder and lists all the files inside. This method returns a promise.
Also, it’s worth keeping in mind that you’ll provide an empty string
'' and not a slash
'/' in order to get to the root of our app. Now the root is the root of our application folder and not that of the Dropbox account. We can always change that option in the settings of our app.
When we run our code, the console should log the entries of our Dropbox folder:
Getting more files
In this part, we’re going to look at loading further files, with potential for implementing pagination or an infinite scroll feature.
For this purpose, Dropbox has got a concept of a
cursor, which indicates our current position between the files that we’ve received and the ones that need to be sent.
For example, we have a folder with 10 files, and we requested 5. The cursor will let us know that there are more files to download via
has-more: true property on the
response. We can continue requesting files using
filesListFolderContinue() passing in
cursor until there are no more files left and we get
has_more: false.
const getFiles = async () => { const response = await dbx.filesListFolder({ path: '', limit: 5 }) console.log(response) } getFiles()
When we examine the response we got in the console we can see
has_more: true.
Let’s update our code to handle cases when we’ve got more files to receive.
const getFiles = async () => { const response = await dbx.filesListFolder({ path: '', limit: 5 }) // We can perform a custom action with received files processFiles(response.entries) if (response.has_more) { // provide a callback for the newly received entries // to be processed getMoreFiles(response.cursor, more => processFiles(more.entries)) } } getFiles()
We provide the cursor to let the API know the entries that we’ve received, so we won’t receive the same files again.
const getMoreFiles = async (cursor, callback) => { // request further files from where the previous call finished const response = await dbx.filesListFolderContinue({ cursor }) // if a callback is provided we call it if (callback) callback(response) if (response.has_more) { // if there are more files, call getMoreFiles recursively, // providing the same callback. await getMoreFiles(response.cursor, callback) } }
Note the callback we are providing to
getMoreFiles() function. It’s a really neat trick to make sure that our newly received files get the same treatment as their predecessors.
In the end, when there are no more files to get, we receive
has_more: false
It’s also worth mentioning that the recursive call is implemented here for simplicity of the tutorial, rather than for the performance of the function. If you have large amounts of data to load, please refactor this out into a more performant function.
Getting thumbnails
The third method we’re going to study is for getting thumbnails for our files.
In order to request thumbnails for the uploaded files, we can call
filesGetThumbnailBatch().
dbx.filesGetThumbnailBatch({ entries: [{ path: '', size: 'w32h32', format: 'png', }] });
This endpoint is optimized for getting multiple thumbnails and it accepts an array of objects, where each object can have multiple properties specified.
The essential property is
path, which holds the same caveats as in
filesListFolder().
In our response, we can access our images via the
thumbnail properties.
You can see that the thumbnails are not returned as links, but as really really long strings — this is a base64 image. You could use the string in your HTML to set
src of
<img> to
".
And if I render my response, I would get these amazing cats!
Image credits: Max Pixel (1, 2, 3)
Moving files
Lastly, we’re going to cover moving our files from one folder to another.
We can use
filesMoveBatchV2() for moving our files in batches from one folder to another. This method works best when implemented as a part of an
async function.
The method accepts
entries array of objects, that consist of
from_path and
to_path properties.
filesMoveBatchV2() returns either
success if the call was immediately successful, in case there are only a few files to process. However, for bigger workloads, it’s going to return an object with a property
async_job_id, and that means that your call is being executed and we will need to check up on it at a later stage.
We can use
filesMoveBatchCheckV2() to keep checking for completion of our job until it’s complete and is not
in_progress any more.
const entries = { from_path: 'origin_folder', to_path: 'destination_folder } const moveFiles = async () => { let response = await dbx.filesMoveBatchV2({ entries }) const { async_job_id } = response if (async_job_id) { do { response = await dbx.filesMoveBatchCheckV2({ async_job_id }) // This where we perform state update or any other action. console.log(res) } while (response['.tag'] === 'in_progress') } }
Wrap up
Congratulations! You now have a very basic understanding of Dropbox API and its JavaScript SDK.
If you want to learn more about the Dropbox API and build an app on top of it with Vanilla JavaScript, be sure to check out our free course on Scrimba. It has, along with this post, been sponsored and paid for by Dropbox. This sponsorship helps Scrimba keep the lights on and it enables us to continue creating free content for our community throughout 2019. So a big thanks to Dropbox for that!
Happy coding :) | https://www.freecodecamp.org/news/learn-the-dropbox-api-in-5-minutes-fd4626a0df18/ | CC-MAIN-2019-43 | refinedweb | 1,230 | 63.09 |
Here my folder structure
Lapin wrote:Its a trap !
HolyShitGoodGodItsJesus
Why is there a FTP function into the code
[03:55:41] <~Bag> Yes, I can put things inside me when I need to
Xplodin wrote:Error 1 The type or namespace name 'Nexus' could not be found (are you missing a using directive or an assembly reference?)
Error 2 The type or namespace name 'RomeExt' could not be found (are you missing a using directive or an assembly reference?)
I tried adding Dinput8.dll as a reference but it refuses it!
Notice: This pre-release is no longer supported and has been discontinued.
Users browsing this forum: No registered users and 1 guest | https://forums.veniceunleashed.net/viewtopic.php?f=6&t=114&start=40 | CC-MAIN-2019-35 | refinedweb | 114 | 72.76 |
hi
I'm trying to write a game where I play against the computer...and as u see below..I really suck at it..
can someone pleeease help me..I want it to be as simple as possible..
thanks
input random
game = raw_input("Your turn. Input the coordinate for your move on the form x,y: ")
for turn in range(9):
def print_board(board):
board = board.replace("_"," ")
print "." + "---." * 3
for bound in [0,3,6]:
print "|",
for sym in board[bound:bound+3]:
print sym, "|",
if bound < 6:
print "\n|" + "---|" * 3
print "\n'" + "---'" * 3
Empty = ' '
Player_X = 'x'
Computer_O = 'o'
a = 0,0
b = 0,1
c = 0,2
d = 1,0
e = 1,1
f = 1,2
g = 2,0
h = 2,1
i = 2,2
winlist = [(a,b,c), (d,e,f), (g,h,i), (a,d,g,), (b,e,h), (c,f,i), (a,e,i), (g,e,c)]
opponent = { Player_X : Computer_O, Computer_O : Player_X }
if winner == player:
print: ("I win!")
if winner == computer:
print: ("You win!")
if winner == none:
print: ("None")
board ()
raw_input ( 'Remi' ) | https://www.daniweb.com/programming/software-development/threads/228908/need-help-with-tic-tac-toe | CC-MAIN-2018-43 | refinedweb | 177 | 82.95 |
#include <deal.II/lac/slepc_solver.h>
An implementation of the solver interface using the SLEPc Krylov-Schur solver. Usage: All spectrum, all problem types, complex.
For examples of how this and its sibling classes can be used, including how to provide preconditioners to the matrix of which eigenvalues are to be computed, see the documentation of the SolverBase class as well as the extensive discussions in the documentation of the SLEPcWrappers namespace.
Definition at line 392 of file slepc_solver.h.
SLEPc solvers will want to have an MPI communicator context over which computations are parallelized. By default, this carries the same behavior as the PETScWrappers, but you can change that.
Definition at line 362 of file slepc_solver.cc.
Composite method that solves the eigensystem \(Ax=\lambda x\). The eigenvector sent in has to have at least one element that we can use as a template when resizing, since we do not know the parameters of the specific vector class used (i.e. local_dofs for MPI vectors). However, while copying eigenvectors, at least twice the memory size of
eigenvectors is being used (and can be more). To avoid doing this, the fairly standard calling sequence executed here is used: Set up matrices for solving; Actually solve the system; Gather the solution(s).
This is declared here to make it possible to take a std::vector of different PETScWrappers vector types
Definition at line 704 of file slepc_solver.h.
Same as above, but here a composite method for solving the system \(A x=\lambda B x\), for real matrices, vectors, and values \(A, B, x, \lambda\).
Definition at line 736 of file slepc_solver.h.
Same as above, but here a composite method for solving the system \(A x=\lambda B x\) with real matrices \(A, B\) and imaginary eigenpairs \(x, \lambda\).
Definition at line 774 of file slepc_solver.h.
Solve the linear system for
n_eigenpairs eigenstates. Parameter
n_converged contains the actual number of eigenstates that have converged; this can be both fewer or more than n_eigenpairs, depending on the SLEPc eigensolver used.
Definition at line 157 of file slepc_solver.cc.
Set the initial vector space for the solver.
By default, SLEPc initializes the starting vector or the initial subspace randomly.
Definition at line 830 of file slepc_solver.h.
Set the spectral transformation to be used.
Definition at line 98 of file slepc_solver.cc.
Set target eigenvalues in the spectrum to be computed. By default, no target is set.
Definition at line 126 of file slepc_solver.cc.
Indicate which part of the spectrum is to be computed. By default largest magnitude eigenvalues are computed.
Definition at line 138 of file slepc_solver.cc.
Specify the type of the eigenspectrum problem. This can be used to exploit known symmetries of the matrices that make up the standard/generalized eigenspectrum problem. By default a non-Hermitian problem is assumed.
Definition at line 148 of file slepc_solver.cc.
Take the information provided from SLEPc and checks it against deal.II's own SolverControl objects to see if convergence has been reached.
Definition at line 307 of file slepc_solver.cc.
Access to the object that controls convergence.
Definition at line 335 of file slepc_solver.cc.
Access the real parts of solutions for a solved eigenvector problem, pair index solutions, \(\text{index}\,\in\,0\dots \mathrm{n\_converged}-1\).
Definition at line 260 of file slepc_solver.cc.
Access the real and imaginary parts of solutions for a solved eigenvector problem, pair index solutions, \(\text{index}\,\in\,0\dots \mathrm{n\_converged}-1\).
Definition at line 273 of file slepc_solver.cc.
Initialize solver for the linear system \(Ax=\lambda x\). (Note: this is required before calling solve ())
Definition at line 77 of file slepc_solver.cc.
Same as above, but here initialize solver for the linear system \(A x=\lambda B x\).
Definition at line 87 of file slepc_solver.cc.
Store a copy of the flags for this particular solver.
Definition at line 415 of file slepc_solver.h.
Reference to the object that controls convergence of the iterative solver.
Definition at line 297 of file slepc_solver.h.
Copy of the MPI communicator object to be used for the solver.
Definition at line 302 of file slepc_solver.h.
Objects for Eigenvalue Problem Solver.
Definition at line 354 of file slepc_solver.h. | https://dealii.org/developer/doxygen/deal.II/classSLEPcWrappers_1_1SolverKrylovSchur.html | CC-MAIN-2021-10 | refinedweb | 709 | 59.6 |
Ferdinand Soethe wrote:
>
> I need to install a new grammar for the plugin source files, but I'm
> unclear where it should go. Also, it has to be relax-ng
> (preferred) or schema to support all the features, is that a problem?
Well RNG is what we would all prefer isn't it.
Without seeing the rest of it, i will make a guess
at what we need.
We have established ways for source documents to declare
DTDs and we can detect and handle such documents and
associated resources. The DTDs can be either in the core,
or in plugins, or supplied by each project.
There are other methods using the Content Aware Pipelines
SourceTypeAction [1]. Rather than depending on the case of
which DOCTYPE, we use the document element and namespace,
or other methods.
With DTDs, the actual DTDs are required to be be present
for the actual parsing of the source. With RELAX NG, the
grammars are separate.
We can have validating components in our sitemap pipelines
if we so choose, or have them as separate pipelines. [2]
So where to store the grammars.
We have core grammars at
main/webapp/resources/schema/relaxng/
For plugins we have resources/schema/
Add more structure if needed.
If you use validation transformers in the sitemaps
then refer to these resources via the plugin's
locationmap.
Would that be sufficient?
Later we might find ways to obtain remote grammars
so that we don't need to redistribute. Not yet.
Keen to see this plugin, and the use of RELAX NG.
Please commit it so that we can all help.
Don't try to get it all perfect beforehand. :-)
[1]
[2]
-David | http://mail-archives.apache.org/mod_mbox/forrest-dev/200511.mbox/%[email protected]%3E | CC-MAIN-2014-23 | refinedweb | 280 | 74.69 |
Opened 13 years ago
Closed 13 years ago
#671 closed Bug (Fixed)
Hard crash when using $WS_EX_MDICHILD
Description
This simple example bellow should reproduce a hard crash of the script:
#include <WindowsConstants.au3> ;$hParent = GUICreate("") GUICreate("", 200, 100, -1, -1, BitOR($WS_POPUP, $WS_CHILD), $WS_EX_MDICHILD);, $hParent) GUICreate("");, -1, $WS_EX_MDICHILD) GUISetState(@SW_SHOW)
If i uncomment the $hParent part, then scrip will not crash.
Sometimes there is message box saing (title is AutoIt Error): Error allocating memory.
Perhaps it's wrong usage of the style, but i don't think that script should crash like this.
Attachments (0)
Change History (1)
comment:1 Changed 13 years ago by Valik
- Milestone set to 3.2.13.11
-.11 | https://www.autoitscript.com/trac/autoit/ticket/671 | CC-MAIN-2022-05 | refinedweb | 114 | 67.08 |
Overview
Atlassian Sourcetree is a free Git and Mercurial client for Windows.
Atlassian Sourcetree is a free Git and Mercurial client for Mac.
BackMongo
It's a simple REST interface for MongoDB written in Python that can be used straight away from a Backbone application.
It's an alpha version. We are using backmongo to develop backbone apps without having to worry about the server side. We've coded just what we've needed so far. In the future we plan to add more features, such as, authentication, user privileges, etc.
At the moment backmongo can only be used as a flask extension, but we plan to add extensions for other frameworks.
Requirements
- flask
- pymongo
You can install them all with:
$ [sudo] pip install -r requirements.txt
Use
As a Flask extension:
from flask import Flask from flask.ext import backmongo app = Flask(__name__) backmongo.init_app(app, url_prefix='API') if __name__ == "__main__": app.run(debug=True)
From the command line:
$ python flask_backmongo.py path/to/project/dir
Examples
There's an example in examples/todos/ (it's a slightly modified version of the backbone's todo example). To run the tests, first install Mocha globally
$ npm install -g mocha
and then install locally the other required modules.
$ npm install should $ npm install jquery $ npm install backbone $ npm install xmlhttprequest
Then you have to type in the console:
$ make
which starts a flask app using backmongo, executes the javascript tests with mocha and finally stops the flask app.
By default the tests work against a database called backmongo. If you need to change it, just create a file named backmongo_conf.py in the root folder, or in some other place in your PYTHONPATH, and set the database name in the variable DATABASE. There is an example of a configuration file in examples/todo/static/backmongo_conf.py
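For reference, a minimal configuration file might look like the sketch below (the database name here is just an illustrative choice):

# backmongo_conf.py -- place it in the root folder or anywhere on your PYTHONPATH
DATABASE = 'backmongo_test'  # name of the MongoDB database the tests will use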
Todo
- Prepare and submit BackMongo to the Python Package Index. | https://bitbucket.org/remosu/backmongo | CC-MAIN-2017-47 | refinedweb | 311 | 56.96 |
1. Install ubuntu server 10.04 or similar Debian based distro.
1a. sudo apt-get update ; sudo apt-get upgrade
2. Install catalyst
3. catalyst.pl testFileModel
4. Check to see that the development server works. Start with -r option.
5. Go to this site, which has a neat little Text File model.
6. Start adding content as per the article.
7. Once you add lib/testFileModel/Model/File.pm, check the debug output on the dev server. You will see this:
require testFileModel::Model::File was successful but the package is not defined. at /usr/share/perl5/Catalyst/Utils.pm line 287.
[debug] Debug messages enabled
[debug] Statistics enabled
[debug] Loaded plugins:
And at this point, as a beginner, I'm stuck. I wrote the author of the article, who has been pleasant and responsive, but his first suggestion was that the code was not in place. And not to pick on this particular model, but I've had this problem before, when I tried to use Catalyst::Model::Adaptor and add a non Catalyst Moose Class to a Catalyst project, getting the exact same error as above.
So I'm missing something, something simple, and I'm worried that #catalyst on irc.perl.org is as hostile to newb questions as #perl is reputed to be (I did see dhoss there, but he's asleep presently).
Questions that occur to me:
1. What can lead to that kind of error?
2. This config: how do I know that it works? If I built a Moose Class with one scalar, one array ref and one hash ref (that's it), applied Catalyst::Model::Adaptor and tried to populate the model with the configuration file, how do I know that it worked? Any easy way to confirm or test that?
Noob questions to be sure. But the tutorials leave out these kinds of issues.
I'd like to know how to get a model to "attach" to the Catalyst framework. And I'd like to know if it has been configured correctly, once attached.
Thanks for any help provided.
You haven't shown any code, which makes it quite hard for us to guess what might have gone wrong in your elaborate and complicated setup.
My wild guess is that you did not edit the "Test File Model" file testFileModel/Model/File.pm, so its first line still reads:
package MyApp::Model::File;
... which will install all code into the MyApp::Model::File namespace and not into the testFileModel::Model::File namespace, which then seems to confuse Catalyst when it loads the component.
#include <stdlib.h>
int atoi (
char *string); /* string to convert */
The atoi function converts string into
an integer value. The input string must be a
sequence of characters that can be interpreted as an integer. This
function stops processing characters from string
at the first one it cannot recognize as part of the number.
The string passed to atoi must have the
following format:
[whitespace] [{+ | -}] digits
Where
whitespace consists of space and/or tab characters, which are ignored.
digits is one or more decimal digits.
The atoi function returns the integer value that is
produced by interpreting the characters in string
as a number.
If no conversion could be performed, 0 is returned.
See Also: atof, atol
#include <stdlib.h>
#include <stdio.h> /* for printf */
void tst_atoi (void) {
int i;
char s [] = "12345";
i = atoi (s);
printf ("ATOI(%s) = %d\n", s,. | http://www.keil.com/support/man/docs/c251/c251_atoi.htm | CC-MAIN-2020-05 | refinedweb | 133 | 65.73 |
Attachment points
Last December I wrote an essay describing how to generate a combinatorial library with SMILES. Last month someone asked me how to make the fragment libraries in the first place. There are two ways: old-skool and next-gen. The one I outlined is old-skool. It uses the Daylight SMILES syntax that's been around since the 1980s. The next-gen style uses an OpenEye SMILES extension for external attachment points.
To demonstrate them I'll need a set of compounds for the core and the side group. I am not a chemist. My background is physics, math and computer science and my graduate work was in computational biophysics of protein structures. The reaction I'm most familiar with is amino acid synthesis. For this example I'll assume any primary amide (C-[NH3+]) can be made to react with any carboxyl (C(=O)[O-]) and ignore the real chemistry.
I'll use a couple of the PubChem data sets as my source library. Specifically, compounds 500001-510000 because I already had it for previous examples and 1-10000 because it contains some amides. From these I'll extract the compounds that have an one and only one amide group and those with one and only one carboxyl. I'll also skip any compound which has more than one of each group (don't want to make a polymer), compounds which have more than one component (most likely salts), or compounds with a net charge.
A proper system should do a better job than this. For example, if there's an amide then the structure file may have a negatively charged salt to balance the overall charge. That should be okay if matched with a similarly charged carboxyl structure. I don't want to handle all those details because that's not the point of this essay.
The following program reads through the data files and saves the SMILES and compound id of the matching amide and carboxyl groups to the SMILES files amide.smi and carboxyl.smi. Because I was curious I also printed some information about which compounds did not pass some of the filters. I made one helper function named read_multiple_files() to make it easier to read all of the compounds from a list of structure filenames.
from openeye.oechem import * def read_multiple_files(*filenames): """iterate over each of the OEGraphMols in the named files""" istrm = oemolistream() for filename in filenames: # I should report problems on failure if istrm.open(filename): for cmpd in istrm.GetOEGraphMols(): yield cmpd istrm.close() def main(): # Recognize the two sides of the reaction amide = OESubSearch("C[NH3+]") carboxyl = OESubSearch("[C](=O)[O-]") # Save the SMILES and compound id for each matching compound amide_out = open("amide.smi", "w") carboxyl_out = open("carboxyl.smi", "w") for cmpd in read_multiple_files( "/Users/dalke/databases/compounds_000001_010000.sdf.gz", "/Users/dalke/databases/compounds_500001_510000.sdf.gz", ): # Skip anything with a net charge charge = OENetCharge(cmpd) if charge: continue # Get the match counts and compound id n_amide = len(list(amide.Match(cmpd))) n_carboxyl = len(list(carboxyl.Match(cmpd))) # Skip compounds that don't have either group if not (n_amide or n_carboxyl): continue id = OEGetSDData(cmpd, "PUBCHEM_COMPOUND_CID") smi = OECreateCanSmiString(cmpd) # Skip compounds with salts or other components if "." in smi: print id, "has multiple components:", smi continue if n_amide: if n_carboxyl: print id, "has both groups:", smi elif n_amide > 1: print id, "has", n_amide, "amide groups:", smi else: # n_amide == 1 print id, "has one amide group:", smi amide_out.write("%s %s\n" % (smi, id)) else: # n_amide == 0 if n_carboxyl > 1: print id, "has", n_carboxyl, "carboxyl groups:", smi else: print id, "has one carboxyl group:", smi carboxyl_out.write("%s %s\n" % (smi, id)) if __name__ == "__main__": # Suppress warning messages OEThrow.SetLevel(OEErrorLevel_Error) main()
Running the program produced two short files:
% wc amide.smi carboxyl.smi 2 4 65 amide.smi 15 30 675 carboxyl.smi 17 34 740 total % cat amide.smi C(CN(CCN)N(N=O)[O-])[NH3+] 4518 CC(C)N(CCC[NH3+])N(N=O)[O-] 4520 % cat carboxyl.smi CC(=O)OC(CC(=O)[O-])C[N+](C)(C)C 1 C[N+](C)(C)CC(=O)[O-] 247 C[N+](C)(C)CC(CC(=O)[O-])O 288 CCCCCCCCCCCCCCCC(=O)OC(CC(=O)[O-])C[N+](C)(C)C 461 C[N+]1(CCCC1C(=O)[O-])C 554 C[N+](C)(C)CC=CC(=O)[O-] 589 C[N+](C)(C)CCCC(=O)[O-] 725 C[N+]1(CCCC1)CC2=C(N3C(C(C3=O)NC(=O)C(=NOC)c4csc(n4)N)SC2)C(=O)[O-] 2622 CON=C(c1nc(sn1)N)C(=O)NC2C3N(C2=O)C(=C(CS3)C[n+]4ccn5c4cccn5)C(=O)[O-] 2639 CC(C)(C(=O)O)ON=C(c1csc(n1)N)C(=O)NC2C3N(C2=O)C(=C(CS3)C[n+]4ccccc4)C(=O)[O-] 2650 C[n+]1ccccc1C(=O)[O-] 3620 C[S+](CCC(C(=O)[O-])N)CC1C(C(C(O1)n2cnc3c2ncnc3N)O)O 5136 C[n+]1cccc(c1)C(=O)[O-] 5570 c1cc[n+](cc1)CC2=C(N3C(C(C3=O)NC(=O)Cc4cccs4)SC2)C(=O)[O-] 5773 c1cc(c(cc1N[N+]#N)O)C(=O)[O-] 504537 %
I want to transform the amides so the [NH3+] group looses a hydrogen and the positive charge and gains an attachment point. In old-skool style I'll label the attachment point with the incomplete ring closure %90, so the two amide SMILES strings will become
C(CN(CCN)N(N=O)[O-])[NH2]%90 4518
CC(C)N(CCC[NH2]%90)N(N=O)[O-] 4520
There's no way to generate that in traditional SMILES. The trick is to make a fake atom with atomic weight 0 (atomic symbol "*") and post-process the SMILES string to convert the "*" into a "%90". That works because that's the only way an asterisk can be in a SMILES string so it can be manipulated without needing a full SMILES parser.
If there are several attachment points or a need to tag multiple atoms then the trick is to encode that via atomic weights and generate the isomeric SMILES. The SMILES pattern "[\d+" can only occur because of the atomic weight. So if there's a * atom with atomic weight 90 the resulting isomeric SMILES will have "[90*]" which could be post-processed to become "%90". I don't need that nuance for this essay.
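Here is a small sketch of that isotope trick, only to make the idea concrete; it assumes OEChem's SetIsotope() call and the OESMILESFlag_Isotopes output flag, and it is not needed for the rest of this essay:

# Label the dummy atom with a fake atomic weight of 90 so the
# SMILES output contains "[90*]", which is easy to post-process.
dummy = mol.NewAtom(0)
dummy.SetIsotope(90)
mol.NewBond(match_atom.target, dummy, 1)

# Non-canonical output keeps the new atom from coming first (see below)
smi = OECreateSmiString(mol, OESMILESFlag_Isotopes)
smi_with_attachment = smi.replace("[90*]", "%90")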
I'll show the steps interactively before writing the program.
make the molecule and the substructure search pattern >>> mol = OEMol() >>> OEParseSmiles(mol, "CC(C)N(CCC[NH3+])N(N=O)[O-]") True >>> pat = OESubSearch("C[NH3+]") Do the substructure search; by construction there's only one match >>> matches = pat.Match(mol) >>> match = list(matches)[0] >>> match <C OEChem::OEMatchBase instance at _00344950_p_OEChem__OEMatchBase> Get the match atom for the nitrogen. The order is that of the SMARTS pattern and [NH3+] is the 2nd pattern >>> match_atom = list(match.GetAtoms())[1] >>> match_atom <C OEChem::OEMatchPair<(OEChem::OEAtomBase)> instance at _0186b418_p_OEChem__OEMatchPairTOEChem__OEAtomBase_t> The match atom is a pair of the query atom and the target atom >>> match_atom.target.GetAtomicNum() 7 >>> match_atom.target.GetImplicitHCount() 3 >>> match_atom.target.GetFormalCharge() 1 >>> Change the implicit H-count and formal charge >>> match_atom.target.SetImplicitHCount(2) True >>> match_atom.target.SetFormalCharge(0) True Add a new "dummy" atom to the amide, with atomic number 0 >>> dummy_atom = mol.NewAtom(0) >>> mol.NewBond(match_atom.target, dummy_atom, 1) <C OEChem::OEBondBase instance at _0186fcf0_p_OEChem__OEBondBase> I can't use the canonical SMILES string because the * can become the first atom in the canonical ordering >>> OECreateCanSmiString(mol) '*[NH2]CCCN(C(C)C)N(N=O)[O-]' Instead I'll use the lower-level function and without the OESMILESFlag::Canonical (or any other) flag. This uses atom creation order, so the newly created atom will never be first >>> OECreateSmiString(mol, 0) 'CC(C)N(CCC[NH2]*)N(N=O)[O-]' >>> smi = OECreateSmiString(mol, 0) >>> smi_with_attachment = smi.replace("*", "%90") >>> smi_with_attachment 'CC(C)N(CCC[NH2]%90)N(N=O)[O-]'
The last part about the atom ordering requires some knowledge of SMILES and how SMILES strings are generated. The "*" is an atom symbol. When I replace it with a "%90" then it becomes a bond symbol. A atom symbol in SMILES can go anywhere a bond symbol can go, but not vice versa. A bond cannot be the first symbol in the SMILES string or be after a dot disconnect or another bond. This new dummy atom is singly bonded to only one other atom so the only worry is if the dummy atom becomes the first atom in the output SMILES. In that case the substitution would create an illegal SMILES string.
The algorithm to create a SMILES string starts by assigning a number to each atom. The first atom symbol in the string comes from the atom with the lowest number. The rest of the string is created by building a spanning tree rooted at that first atom. Ambiguities in the spanning are broken by looking at the atom numbering. The result is the SMILES string for that ordering.
If the number assignment algorithm is invariant up to molecular symmetry then it will always generate the same SMILES string for a given structure and the corresponding string is a unique, or canonical SMILES string.
To ensure that the "*" is not the first character in the output I need to make sure some other atom has priority. (Remember, this is old-skool style; I'll cover a better but OpenEye-specific approach in a bit). To control if canonical atom numbering is done in OEChem, use the OECreateSmiString() function. If the flag OESMILESFlag::Canonical (or OESMILESFlag_Canonical in PyOEChem) is set, it uses a canonical ordering. If not set, it uses the atom's index. That is, the first atom created for a molecule (more technically, each connected component) will always be the first atom in its non-canonical SMILES. The dummy atom is the last atom created and since the SMARTS matched I know there's at least one other carbon in the structure which means the non-canonical ordering will never have the dummy atom first.
In the Daylight toolkit you can go one step further and manually set the arborder of each atom. The dt_arbsmiles() function computes a compounds SMILES using this ordering instead of the standard canonical one computed by dt_cansmiles().
By the way, canonical atom number takes non-linear time. It may be exponential for the general case but molecules are degree limited so I think in practice it's polynomial time. If you need a SMILES string for a large protein and don't need it to be canonical then you should generate the non-canonical SMILES instead. It might save you a few seconds.
Here's a program that converts the existing amide.smi and prints a new SMILES file with "%90" attachment points on the amides.
# Make an attachment point on the primary amide of each
# compound in the file "amide.smi"
from openeye.oechem import *

amide_pattern = OESubSearch("C[NH3+]")

def make_amide_attachment(mol, attachment = "%90"):
    matches = amide_pattern.Match(mol)
    # Assume there is one match and that I only need
    # to change the first.
    match = list(matches)[0]
    # The nitrogen is the 2nd atom of the SMARTS pattern
    target = list(match.GetAtoms())[1].target
    # Drop one hydrogen and the positive charge
    target.SetImplicitHCount(2)
    target.SetFormalCharge(0)
    # Make a new "dummy" atom and bond it to the nitrogen
    dummy = mol.NewAtom(0)
    mol.NewBond(target, dummy, 1)
    # Make the non-canonical SMILES string so the "*" won't be
    # first, then replace the "*" with the given attachment text.
    smi = OECreateSmiString(mol, 0)
    return smi.replace("*", attachment)

def main():
    filename = "amide.smi"
    mol = OEMol()
    for line in open(filename):
        smi, id = line.split()
        if not OEParseSmiles(mol, smi):
            raise AssertionError("Cannot parse %r" % (smi,))
        new_smi = make_amide_attachment(mol)
        print new_smi, id
        # reset for the next compound
        mol.Clear()

if __name__ == "__main__":
    main()
C(CN(CCN)N(N=O)[O-])[NH2]%90 4518 CC(C)N(CCC[NH2]%90)N(N=O)[O-] 4520
It works, but it's only half the solution. I need to convert the carboxyls as well. Here's another program which does that. The only real difference is I remove the [O-] atom and attach the dummy atom to the carbon of the carboxyl. It was easier to delete the oxygen and create the new atom than it was to transmute it.
from openeye.oechem import *

carboxyl_pattern = OESubSearch("[C](=O)[O-]")

def make_carboxyl_attachment(mol, attachment = "%90"):
    matches = carboxyl_pattern.Match(mol)
    # Assume there is one match and that I only need
    # to change the first.
    match = list(matches)[0]
    match_atoms = list(match.GetAtoms())
    C_atom = match_atoms[0].target
    O_atom = match_atoms[2].target
    # Replace the oxygen with a dummy atom
    mol.DeleteAtom(O_atom)
    dummy_atom = mol.NewAtom(0)
    mol.NewBond(C_atom, dummy_atom, 1)
    # Make the non-canonical SMILES string so the "*" won't be
    # first, then replace the "*" with the given attachment text.
    smi = OECreateSmiString(mol, 0)
    return smi.replace("*", attachment)

def main():
    filename = "carboxyl.smi"
    mol = OEMol()
    for line in open(filename):
        smi, id = line.split()
        if not OEParseSmiles(mol, smi):
            print "Cannot parse %r" % (smi,)
            mol.Clear()
            continue
        new_smi = make_carboxyl_attachment(mol)
        print new_smi, id
        mol.Clear()

if __name__ == "__main__":
    main()

and the output looks right, with a %90 attachment point on each carboxyl carbon.
There's one fundamental problem with this solution. It's tedious. That increases the chances of writing buggy code and makes it harder to review and maintain the code.
A cleaner solution, somewhere between old-skool and next-gen, is to describe the transformation as a reaction. SMILES is a notation for chemical compounds. SMIRKS is a notation for chemical reactions. I can describe the two reactions implemented above as
amide_smirks = "[C:1][NH3+]>>[C:1][NH2]*" carboxyl_smirks = "[C:1](=[O:2])[O-]>>[C:1](=[O:2])*"
The first SMIRKS says to map the carbon to itself (that's what the ":1" atom mapping means) and to remove the "[NH3+]" and replace it with the two atoms "[NH2]*". I tried using the atom map ":2" but it looks like OEChem's SMIRKS implementation, which is in beta now, doesn't yet support that. Daylight's should be fine but I don't have ready access to it.
The second SMIRKS says to keep the carbon and the double bonded oxygen of the carboxyl but to remove the negatively charged oxygen and attach a dummy atom instead. I didn't need the ":2"; the result would have been the same for this case. An advantage of keeping the atom is that this preserves other atomic information, like the atomic weight ("isotope" in OEChem).
To use a SMIRKS in OEChem, start by making an OEUniMolecularRxn() and Init() it with the SMIRKS pattern. This is exactly parallel to the normal way to create OESubSearch() objects for a SMARTS search. You can pass the pattern string into the constructor but I tend to do so only when I know the pattern is valid. I was experimenting with different SMIRKS so wasn't sure.
The reaction object is a callable object (meaning it's called like a function). It takes the molecule to transform and manipulates it in-place. Here's an example which converts any non-aromatic oxygen connected to any carbon into a sulpher.
>>> rxn = OEUniMolecularRxn() >>> rxn.Init("[#6:1][O:2]>>[#6:1][S:2]") True >>> mol = OEMol() >>> OEParseSmiles(mol, "OCc1ccccc1O") True >>> OECreateSmiString(mol, 0) 'OCc1ccccc1O' >>> rxn(mol) True >>> OECreateSmiString(mol, 0) 'SCc1ccccc1S' >>>What's neat is if I don't use the ":2" atom map for the O->S transition then the order of the atoms in the resulting non-canonical SMILES changes. In this form the the first atom (an oxygen) was deleted and a new atom, the sulpher, appended to the atom list. This makes the non-aromatic carbon be the first atom in the atom list, and hence the first atom in the non-canonical SMILES output.
>>> rxn = OEUniMolecularRxn() >>> rxn.Init("[#6:1]O>>[#6:1]S") True >>> mol = OEMol() >>> OEParseSmiles(mol, "OCc1ccccc1O") True >>> OECreateSmiString(mol, 0) 'OCc1ccccc1O' >>> rxn(mol) True >>> OECreateSmiString(mol, 0) 'C(c1ccccc1S)S' >>>(Well, I thought it was neat.)
Here's code that uses a reaction to put a dummy atom at the right spot of the the two data sets. As you can see it's smaller, more readable, and just generally nicer.
from openeye.oechem import *

amide_smirks = "[C:1][NH3+]>>[C:1][NH2]*"
carboxyl_smirks = "[C:1](=[O:2])[O-]>>[C:1](=[O:2])*"

for filename, smirks in (("amide.smi", amide_smirks),
                         ("carboxyl.smi", carboxyl_smirks)):
    print "==== processing", filename, "===="
    # Build the transform for this data set
    rxn = OEUniMolecularRxn()
    rxn.Init(smirks)
    mol = OEMol()
    for line in open(filename):
        smi, id = line.split()
        OEParseSmiles(mol, smi)
        # Apply the transform in-place, then label the attachment point
        rxn(mol)
        new_smi = OECreateSmiString(mol, 0).replace("*", "%90")
        print new_smi, id
        # Reset for the next time through the loop
        mol.Clear()
==== processing amide.smi ==== C(CN(CCN)N(N=O)[O-])N%90 4518 CC(C)N(CCCN%90)N(N=O)[O-] 4520 ==== processing carboxyl.smi ====and to double-check that I can mix fragments from the two libraries
>>> mol = OEMol() >>> OEParseSmiles(mol, "CC(C)N(CCCN%90)N(N=O)[O-].C[N+]1(CCCC1C(=O)%90)C") True >>> OECreateCanSmiString(mol) 'CC(C)N(CCCNC(=O)C1CCC[N+]1(C)C)N(N=O)[O-]' >>>
That was old-skool style. It used knowledge of how SMILES generation works to create a SMILES that could be easily converted to a fragment. The approach works with both Daylight's and OpenEye's toolkits.
OEChem extended SMILES to support external bond attachment points. Quoting from the documentation: using the same notation, C&1CCC&2.F&1.Br&2 is interpreted as the reaction product, i.e. FCCCCBr. It's trivial to change the above code to generate "&1" instead of "%90". One of the advantages of the & notation over hijacking ring closures is that the two values are in different numbering spaces. "%12" and "&12" do not refer to the same bond connection so I don't need to use an absurdly high ring closure number to avoid possible collisions.
The numbering space used is the same space as the RGroup attachment points and the atom maps. That is, "&1" is identical to "R1" and to "[*:1]". (Does that last one look familiar? :)
The new notation has exactly the same limitations as ring closures (ie, it can't be the first token in the SMILES string). There are no real advantages to using it in the post-processing system I described earlier in this essay. The advantage is that OEChem knows how to generate external bond attachment points directly from the molecule object, without having to resort to post-processing.
The OECreateSmiString() function takes a flag parameter. This is a bit-wise OR of many available flags. One I mentioned earlier is OESMILESFlag::Canonical which tells the function to generate a canonical atom numbering. Another is OESMILESFlag::ExtBonds. Quoting from the documentation:
The OESMILESFlag::ExtBonds flag controls whether atoms with atomic number zero (as determined by the OEAtomBase::GetAtomicNum method), and a non-zero map index (as determined by the OEAtomBase::GetMapIdx method) should be generated the external bond `&1' notation. In this notation, the integer value following the `&' corresponds to the atom's map index. When this flag isn't set, such atoms are written in the Daylight convention `[*:1]'.The default flags (without the prefix) are RGroups|AtomMaps|Canonical. The first says to write atoms with an atom map and atomic number of 0 using R-group notation. The second says that any remaining atoms with an atom map are written the "[C:1]" notation. The last generates a canonical SMILES. Here's a nice example showing how the different combinations contribute:
>>> mol = OEMol() >>> OEParseSmiles(mol, "C[*:1]") True >>> OECreateSmiString(mol, OESMILESFlag_DEFAULT) '[R1]C' Expanding the DEFAULT flags into its constituent parts >>> OECreateSmiString(mol, OESMILESFlag_RGroups | OESMILESFlag_AtomMaps | OESMILESFlag_Canonical) '[R1]C' >>> OECreateSmiString(mol, OESMILESFlag_AtomMaps | OESMILESFlag_Canonical)' [*:1]C' >>> OECreateSmiString(mol, OESMILESFlag_AtomMaps) 'C[*:1]' >>> OECreateSmiString(mol, OESMILESFlag_ExtBonds) 'C&1' >>>For this essay all I need is the ExtBonds flag. By the way, the SMILES generation knows that the external bond cannot be the first term of the output SMILES so there's no need to worry about that possibility. Feel free to use Canonical if you need it.
If you make the attachment structures "by hand" with something like the make_amide_attachment() and make_carboxyl_attachment() functions then there are two changes to make: give the newly created dummy atom an atom map index and add the ExtBonds flag to the OECreateSmiString() call. Here's how to modify one of the functions defined earlier:

    # Make a new "dummy" atom, which will have an external bond
    # attachment point label of 1, then bond it to the nitrogen
    dummy = mol.NewAtom(0)
    dummy.SetMapIdx(1)
    mol.NewBond(target, dummy, 1)
    # use the external bond attachment point labeling
    return OECreateSmiString(mol, OESMILESFlag_ExtBonds)
Unfortunately the external bond attachment point notation doesn't interact well with the reaction transforms because both use atom maps. The atom maps in a SMIRKS must be balanced. As far as I can tell there's no way to specify that the newly created dummy atom has an unmatched atom map number or R-Group. Instead I'll need to set the atom map index of any dummy atoms after doing the transform. It's still a post-processing step but one that works directly on the data structure instead of on the syntax, so it's less likely to fail in strange ways.
The changes are minor so here's the updated version of the SMIRKS-based fragment generator:
from openeye.oechem import *

amide_smirks = "[C:1][NH3+]>>[C:1][NH2]*"
carboxyl_smirks = "[C:1](=[O:2])[O-]>>[C:1](=[O:2])*"

# Give each dummy atom a unique map index
def set_dummy_atom_map_indicies(mol):
    idx = 1
    for atom in mol.GetAtoms():
        if atom.GetAtomicNum() == 0:
            atom.SetMapIdx(idx)
            idx = idx + 1
    # I could check that one and only one dummy atom was
    # created but I won't worry about that for now.

for filename, smirks in (("amide.smi", amide_smirks),
                         ("carboxyl.smi", carboxyl_smirks)):
    print "==== processing", filename, "===="
    rxn = OEUniMolecularRxn()
    rxn.Init(smirks)
    mol = OEMol()
    for line in open(filename):
        smi, id = line.split()
        OEParseSmiles(mol, smi)
        rxn(mol)
        set_dummy_atom_map_indicies(mol)
        new_smi = OECreateSmiString(mol, OESMILESFlag_ExtBonds)
        print new_smi, id
        # Reset for the next time through the loop
        mol.Clear()
==== processing amide.smi ==== C(CN(CCN)N(N=O)[O-])N&1 4518 CC(C)N(CCCN&1)N(N=O)[O-] 4520 ==== processing carboxyl.smi ==== CC(=O)OC(CC&1=O)C[N+](C)(C)C 1 C[N+](C)(C)CC&1=O 247 C[N+](C)(C)CC(CC&1=O)O 288 CCCCCCCCCCCCCCCC(=O)OC(CC&1=O)C[N+](C)(C)C 461 C[N+]1(CCCC1C&1=O)C 554 C[N+](C)(C)CC=CC&1=O 589 C[N+](C)(C)CCCC&1=O 725 C[N+]1(CCCC1)CC2=C(N3C(C(C3=O)NC(=O)C(=NOC)c4csc(n4)N)SC2)C&1=O 2622 CON=C(c1nc(sn1)N)C(=O)NC2C3N(C2=O)C(=C(CS3)C[n+]4ccn5c4cccn5)C&1=O 2639 CC(C)(C(=O)O)ON=C(c1csc(n1)N)C(=O)NC2C3N(C2=O)C(=C(CS3)C[n+]4ccccc4)C&1=O 2650 C[n+]1ccccc1C&1=O 3620 C[S+](CCC(C&1=O)N)CC1C(C(C(O1)n2cnc3c2ncnc3N)O)O 5136 C[n+]1cccc(c1)C&1=O 5570 c1cc[n+](cc1)CC2=C(N3C(C(C3=O)NC(=O)Cc4cccs4)SC2)C&1=O 5773 c1cc(c(cc1N[N+]#N)O)C&1=O 504537
In the SMIRKS approach the details of how to transform the molecule into a fragment are completely described by the SMIRKS. There's no need to write Python code for it. With very little work you can change the last code sample into a program which takes the SMIRKS on the command-line and a list of filenames (or use stdin) and generates a fragment for each of the compounds in those files.
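A minimal sketch of such a program might look like this (the argument handling and the %90 label are illustrative choices, not something fixed by the toolkit):

# fragmentize.py -- apply a SMIRKS transform to every compound in the
# given SMILES files (or stdin) and print fragments with a %90 attachment point.
# Usage: python fragmentize.py "<smirks>" [file1.smi file2.smi ...]
import sys
from openeye.oechem import *

def process(infile, rxn):
    mol = OEMol()
    for line in infile:
        smi, id = line.split()
        if not OEParseSmiles(mol, smi):
            print "Cannot parse %r" % (smi,)
        else:
            rxn(mol)                     # apply the transform in-place
            print OECreateSmiString(mol, 0).replace("*", "%90"), id
        mol.Clear()

def main():
    smirks = sys.argv[1]
    filenames = sys.argv[2:]
    rxn = OEUniMolecularRxn()
    if not rxn.Init(smirks):
        raise SystemExit("Cannot parse SMIRKS: %r" % (smirks,))
    if filenames:
        for filename in filenames:
            process(open(filename), rxn)
    else:
        process(sys.stdin, rxn)

if __name__ == "__main__":
    main()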
Andrew Dalke is an independent consultant focusing on software development for computational chemistry and biology. Need contract programming, help, or training? Contact me
| http://www.dalkescientific.com/writings/diary/archive/2005/05/07/attachment_points.html | CC-MAIN-2016-18 | refinedweb | 3,902 | 55.34 |
PRObooks
Start-to-Finish Visual Basic 2005
Traditional Visual Basic programming books only teach us such concepts as how to use various Toolbox controls and other related topics. Even though it helps, in the real world you ll be required to develop an application by using all the features and techniques associated with Visual Basic 2005. What about having a book that completely examines the language with the help of a project? It is very hard to find a book on the market that examines Visual Basic 2005 with the help of a mini project. But Tim Patrick s Start-to-Finish Visual Basic 2005 thoroughly examines all aspects of the language, from beginning to end, with the help of a project. When you finish reading the book you ll be in a position either to develop a real project or customize the project discussed in the book (you can download the software from).
The first two chapters introduce you to the world of .NET and Visual Basic 2005 with a comprehensive explanation of all the important concepts, accompanied by relevant source code. Each chapter initially discusses the relevant concepts and then demonstrates how to implement them into the project. For instance, in chapter 13 the author first discusses XML and the various namespaces associated with it. At the end, the author demonstrates the application of XML into the project. I must say that you can not only learn the concepts examined in each chapter in this way but also create side-by-side an outline of the project under discussion. Indeed, the author presents Visual Basic 2005 in a practical manner.
The author then examines in detail the layout of the discussed project. Chapters 4 and 5 discuss the database design and the implementation of .NET assemblies into the project. Others chapters provide detailed coverage about data types, Windows Forms, object-oriented programming concepts, error handling, and ADO.NET. Chapter 11 includes a concise explanation about security-related aspects, such as cryptography and encryption and their implementation into the project. Further chapters demonstrate the application of some of the important concepts, such as operator overloading, XML, application settings, files, and directories, all with the help of comprehensive explanations and with supporting source code.
An interesting point to note is that the author devotes one complete chapter to discussing generics, which is one of the new features of .NET 2.0. However, I think the author missed an opportunity to provide coverage about Anonymous types and Partial types. The author nicely integrates some features of the Graphics class into the project, and Chapter 17 discusses the various concepts involved. While chapter 18 discusses internalization of the project, chapters 19 and 20 demonstrate printing and reporting related concepts, and their application into the project. These chapters are of high importance because developers need to implement printing functionality into their Visual Basic 2005 project, be it a billing system or library management application.
Whatever software you develop, you must create some sort of license files so you can sell the software under various editions. Chapter 21 examines all the techniques required for licensing Visual Basic 2005 applications (and their implementation into the project). This chapter is an invaluable resource for developers because most books don t cover this.
The author examines some of the important concepts related to ASP.NET and its application into the sample project in Chapter 22. The final three chapters provide concise coverage about the creation of online help and various methods of deploying Visual Basic 2005 applications. The book wraps up by providing a complete overview about the project and also an appendix, which provides a good overview about the various steps required for the installation of the project discussed in the book. Chapters 21 and 23 should be combined into one, as the licensing aspects should be discussed along with deployment topics.
I would suggest the publisher include a CD with complete source code of the project, including additional tools like Visual Basic 2005 and Visual Web Developer Express Edition. However, you can download the complete source code from.
This book is code intensive and will be a good reference material for all levels of developers.
Anand Narayanaswamy
Rating:
Title: Start-to-Finish Visual Basic 2005
Author: Tim Patrick
Publisher: Addison Wesley Professional
ISBN: 978-0-321-39800-0
Web Site:
Price: US$49.99
Page Count: 888 | https://www.itprotoday.com/development-techniques-and-management/start-finish-visual-basic-2005 | CC-MAIN-2019-04 | refinedweb | 732 | 51.89 |
Models are responsible for persisting data and containing business logic. Models are direct interfaces to Google App Engine’s Datastore. It’s recommended that you encapsulate as much of your business and workflow logic within your models (or a separate service layer) and not your controllers. This enables you to use your models from multiple controllers, task queues, etc without the worry of doing something wrong with your data or repeating yourself. Models can be heavily extended via Behaviors which allow you to encapsulate common model functionality to be re-used across models.
An example model to manage posts might look like this:
from ferris import BasicModel, ndb from ferris.behaviors import searchable class Post(BasicModel): class Meta: behaviors = (searchable.Searchable,) search_index = ('global',) title = ndb.StringProperty() idk = ndb.BlobProperty() content = ndb.TextProperty()
Models are named with singular nouns (for example: Page, User, Image, Bear, etc.). Breaking convention is okay but scaffolding will not work properly without some help.
Each model class should be in its own file under /app/models and the name of the file should be the underscored class name. For example, to create a model to represent furry bears, name the file /app/models/furry_bear.py. Inside the file, define a class named FurryBear.
The Model class is built directly on top of App Engine's google.appengine.ext.ndb module. You may use any regular ndb.Model class with Ferris. Documentation on properties, querying, etc. can be found here.
Base class that augments ndb Models by adding easier find methods and callbacks.
Ferris provides two simple query shortcuts but it’s unlikely you’ll ever use these directly. Instead, you’ll use the automatic methods described in the next section.
find_all_by_properties(**kwargs)
Generates an ndb.Query with filters generated from the keyword arguments.
Example:
User.find_all_by_properties(first_name='Jon',role='Admin')
is the same as:
User.query().filter(User.first_name == 'Jon', User.role == 'Admin')
find_by_properties(**kwargs)
Similar to find_all_by_properties, but returns either None or a single ndb.Model instance.
Example:
User.find_by_properties(first_name='Jon',role='Admin')
The Model class automatically generates a find_by_[property] and a find_all_by_[property] classmethod for each property in your model. These are shortcuts to the above methods.
For example:
class Show(Model): title = ndb.StringProperty() author = ndb.StringProperty() Show.find_all_by_title("The End of Time") Show.find_all_by_author("Russell T Davies")
The Model class also provides aliases for the callback methods. You can override these methods in your Model and they will automatically be called after their respective action.
before_put()
Called before an item is saved.
after_put()
Called after an item has been saved.
before_get()
Called before an item is retrieved. Note that this does not occur for queries.
after_get()
Called after an item has been retrieved. Note that this does not occur for queries.
before_delete()
Called before an item is deleted.
after_delete()
Called after an item is deleted.
These methods are useful for replicating database triggers, enforcing application logic, validation, search indexing, and more.
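For example, a model might override before_put to enforce a simple validation rule every time an entity is saved. This is only a sketch, and the exact callback signatures are best checked against the Model source:

from ferris import BasicModel, ndb

class Post(BasicModel):
    title = ndb.StringProperty()
    content = ndb.TextProperty()

    def before_put(self):
        # Runs for every save, no matter which controller or task queue
        # triggered it, so the rule can't be bypassed.
        if not self.title:
            raise ValueError("Posts must have a title")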
The BasicModel adds automatic access fields. These are useful for a variety of situations where you need to track who created and updated an entity and when.
Adds the common properties created, created_by, modified, and modified_by to Model
created
Stores the created time of an item as a datetime (UTC)
modified
Stores the modified time of an item as a datetime (UTC)
created_by
Stores the user (a google.appengine.api.users.User) who created an item.
modified_by
Stores the user (a google.appengine.api.users.User) who modified an item.
Simple script to download .whl packages from.
Project description
Gohlke Grabber
Simple script to download .whl packages from the pre-built Python packages at.
Christoph Gohlke maintains 32-bit and 64-bit binaries for many popular scientific Python packages. These can save you some trouble in cases where getting the package from PyPI (using
pip install package_name) causes pip to try and build underlying C or C++ code. This can of course be made to work on Windows, but requires the installation and configuration of a C/++ compiler and libraries - both of which come standard with a Linux installation, but not with Windows.
So, if you have issues installing a package, you trust Gohlke's build, and you want something easy that helps automate the download, grab a copy of gohlkegrabber.py and call it like shown below or in download.py.
Of course, once you have a wheel (a file with the
.whl extension), you can install it using:
pip install path\to\saved\location\name.whl
Please don't bother Christoph Gohlke if there are issues with this tool. If it breaks, that's my fault and you should bother me with it, or ideally propose how to fix it. He just provides a valuable service at no cost and merely deserves credit.
Installing
pip install gohlkegrabber
Dependencies
Dependencies that will be installed:
lxml>=4.4.2
Getting Started
Quick
After installing, to get a recent copy of
gdal:
from gohlkegrabber import GohlkeGrabber gg = GohlkeGrabber() gg.retrieve('c:/temp', 'gdal')
Or, directly from the command line:
ggrab c:\temp gdal
Note that
ggrab takes the same arguments as the
.retrieve() method, except that positional arguments come after named arguments, as this is the convention on OS CLIs. For example:
ggrab -v 1.18 --platform win32 .\bin numpy
The CLI command
ggrab also takes an additional argument
--cache if you want to specify a cached index file to use, for example:
ggrab --cache c:\temp\cache.html . numpy
If you run ggrab from the command line, you can also pass --bare (or -x) to make it print just the path of the wheel it wrote, so the result can be captured in a variable and fed to pip:
pip install gohlkegrabber
for /f "tokens=*" %i in ('ggrab --bare c:\temp numpy') do set ggrab_last_package=%i
pip install %ggrab_last_package%
Or in a batch file:
@echo off
pip install gohlkegrabber
for /f "tokens=*" %%i in ('ggrab --bare c:\temp numpy') do set ggrab_last_package=%%i
pip install %ggrab_last_package%
In greater detail
When you create a
GohlkeGrabber, it automatically downloads the index from the website (or reads a cached copy) and figures out all the packages on offer. Of course, this requires an active connection to the web.
You can list the available packages:
print(list(gg.packages))
Note that
.packages is a
dict - of course you can just use the dictionary directly and the data therein yourself as well. For example, this is what the start of the
numpy entry looks like:
{ 'numpy-1.16.5+mkl-cp27-cp27m-win32.whl': { 'link': '', 'version': '1.16.5+mkl', 'build': None, 'python': '2.7', 'abi': 'cp27m', 'platform': 'win32' }, 'numpy-1.16.5+mkl-cp27-cp27m-win_amd64.whl': ... }
To download the latest version (default) of
numpy, for Windows 64-bit (default), and for the most recent version of Python (default) for which it is available, you would call:
fn, metadata = gg.retrieve(output_folder, 'numpy')
fn will be the filename of the wheel that was downloaded.
metadata will be a dictionary with the metadata for the downloaded wheel. Both will be
None if no package could be downloaded that matched the request.
An example of what the metadata would look like:
{ 'link': '', 'version': '1.17.4+mkl', 'build': None, 'python': '3.8', 'abi': 'cp38', 'platform': 'win_amd64' }
Note that this is just the appropriate entry from the
.packages
dict.
To get a copy for a specific Python version (e.g. 2.7), Windows version (e.g. 32-bit) and package version (e.g. '<1.17'), you can provide extra parameters to the call in no particular order:
fn, metadata = gg.retrieve(output_folder, 'numpy', python='2.7', platform='win32', version='<1.17')
Any file downloaded will be stored in the output_folder. If the file already exists, it won't be downloaded again, unless you pass overwrite=True to the .retrieve() call.
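For example, to force a fresh download even if the wheel is already present:

# re-download the wheel even if it is already in output_folder
fn, metadata = gg.retrieve(output_folder, 'numpy', overwrite=True)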
If you create the GohlkeGrabber with a
cached parameter, it will save the downloaded web page to that location, or load that file instead of downloading it again, if it already exists.
gg = GohlkeGrabber(cached='work/cache.html')
License
This project is licensed under the MIT license. See LICENSE.txt.
Change log
0.3.3
- 'Bare' mode added to capture written wheel
- Project structure cleanup (script folder, version location)
0.3.2
- Versioning issues resolved
- Documentation fix
- Short command line switches
0.3.1
- Added command line tool. Added 'User-Agent' to file retrieve as well as index.
0.3.0
- Flipped default for the python parameter, favouring the current Python over the most recent
0.2.9
- added a user agent header field, as the site no longer serves a basic Python client
0.2.8
- open release, version conflict
“Is that it?”
“No. That’s a wall.”
“It could be disguised.”
“You’re not very good at looking for things, are you?”
“I’m good at looking for walls. Look, I found another one.”
― Derek Landy, Kingdom of the Wicked
When a system was monolithic we had access to the full execution stack trace. However, in Microservices architecture, any single operation in any service can trigger a chain of downstream microservice calls, as all are isolated among themselves which leads to a challenging task to debug an actual flow.
And let’s be honest, we all hate those “something went wrong” or “unknown” system errors.
Well, this situation can be smoothly handled if we externalize and centralize the storage of our logs. Which I believe will increase our chances of tracking down and fixing issues.
The complete project can be found here:
E-L-K Stack
Logs as Streams of events
Example
ELK stack is basically a combination of four open-source tools for processing log files and storing them in a centralized place. It helps to identify issues spanning multiple servers by correlating their logs within a specific time frame.
Let’s have a quick look at each component of Elastic Stack.
Elasticsearch is a distributed analytics and search engine built over Apache Lucene, a Java-based search and indexing library. Elasticsearch provides distributed cluster features and sharding for Lucene indexes, advanced search and aggregation functionality, as well as high availability, security, a snapshot/restore module, and other data management features. The platform also provides such features as thread-pooling, node/cluster monitoring API, queues, data monitoring API, cluster management, etc..
The following illustration shows how the components of Elastic Stack interact with each other
In a few words: logs are continuous streams of events. By following this methodology, logs can be continuously routed to a file or watched in real time on a terminal.
When playing with Spring Boot applications, the Logback dependency is pulled in by default and used as the default logging mechanism. Logback is preferably used with SLF4J, just like the following:
import java.util.List;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class Vrush {

    public void demoLogging(List<String> list) {
        log.trace("Logging at TRACE level");
        log.debug("Logging at DEBUG level");
        log.info("Logging at INFO level");
        log.warn("Logging at WARN level");
        log.error("Logging at ERROR level");
        // or any parametrized logs
        log.debug("Found {} results", list.size());
    }
}
Logback can be configured in the logback-spring.xml file, located under the resources folder. In this configuration file, we can take advantage of Spring profiles and the templating features provided by Spring Boot.
One solution to this is at the beginning of the call chain we can create a CORRELATION_ID and add it to all log statements. Along with it, send CORRELATION_ID as a header to all the downstream services as well so that those downstream services also use CORRELATION_ID in logs. This way we can identify all the log statements related to a particular action across services.
We can implement this solution using the MDC feature of logging frameworks. Typically we will have a WebRequest interceptor where you can check whether there is a CORRELATION_ID header. If there is no CORRELATION_ID in the header then create a new one and set it in MDC. The logging frameworks include the information set in MDC with all log statements.
But, instead of doing all this work, we can use Spring Cloud Sleuth which will do all this and much more for us.
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-sleuth</artifactId> </dependency>
By adding Spring Cloud Sleuth dependency in pom.xml file, interactions with the downstream services will be instrumented automatically and the trace and span ids will be added to the SLF4J’s Mapped Diagnostic Context (MDC), which will be included in the logs.
Sleuth includes the pattern [appname,traceId,spanId,exportable] in logs from the MDC.
In booking-service logs you can find log statements something like:
2021–01–01 15:31:29.189 WARN [booking-service,903c472a08e5cda0] returning BAD REQUEST response for [email protected] request due Room 7513 is not available
In consumer-service logs you can find log statements something like:
2021–01–01 15:31:29.189 WARN [consumer-service,903c472a08e5cda0,af68249ac3a6902] 64712 [io-8181-exec-12]
Observe that the TraceID 903c472a08e5cda0 is the same in both booking-service and consumer-service for the same REST API call. This way we can easily correlate the logs across services.
By default, logback stores logs in plain text format. And we are going to store logs in Elasticsearch which operates on JSON index. To accomplish it, we can use the Logstash Logback Encoder or LoggingEventCompositeJsonEncoder.
In general, the microservices will run in Docker containers so that we can leave the responsibility of writing the log files to Docker.
For a simple and quick configuration, we could use LogstashEncoder, which comes with a pre-defined set of providers:
<configuration> <springProperty scope="context" name="application_name" source="spring.application.name"/> <appender name="jsonConsoleAppender" class="ch.qos.logback.core.ConsoleAppender"> <encoder class="net.logstash.logback.encoder.LogstashEncoder"/> </appender> <root level="INFO"> <appender-ref </root> </configuration>
The above configuration will produce the following log output (just bear in mind that the actual output is a single line, but it’s been formatted below for better visualization):
{ "@timestamp": "2021-01-01T05:01:38.967+01:00", "@version": "1", "message": "Finding details of room with id 7052", "logger_name": "com.vrush.microservices.booking.service", "thread_name": "http-nio-8001-exec-3", "level": "WARN", "level_value": 20000, "application_name": "booking-service", "traceId": "c52d9ff782fa8f6e", "spanId": "c52d9ff782fa8f6e", "spanExportable": "false", "X-Span-Export": "false", "X-B3-SpanId": "c52d9ff782fa8f6e", "X-B3-TraceId": "c52d9ff782fa8f6e" }
The composite encoder has no providers configured by default, so we must add the providers we want to customize the output. Which you can find on my github page.
Running on Docker
As we will have multiple containers, we will use Docker Compose to manage them. All of the application's services are configured in a YAML file. Then, by running a shell script, we create and start all the services from our configuration. All you need to do is run build.sh, and take a coffee or two!!
In the given example, have a look at how the services are defined and configured in docker-compose-backend-services.yml. What's important to highlight is the fact that labels have been added to some services. Labels are simply metadata that only have meaning for whoever is using them. Let's have a quick look at the labels that have been defined for the services:
collect_logs_with_filebeat=true: Filebeat will autodiscover the Docker containers that have this property and collect their logs.
decode_log_event_to_json_object=true: the log events collected from such containers will be decoded as JSON objects.
The collected events are then forwarded to Logstash, which listens on port 5044.
When applications run on containers, they become moving targets to the monitoring system. So we’ll use the autodiscover feature from Filebeat, which allows it to track the containers and adapt settings as changes happen. By defining configuration templates, the autodiscover subsystem can monitor services as they start running.
Booking, Financial, Searching and Consumer services will produce logs to the standard output (stdout). By default, Docker captures the standard output (and standard error) of all your containers, and writes them to files in JSON format, using the json-file driver. The log files are stored in the /var/lib/docker/containers directory and each log file contains information about only one container. In the filebeat.docker.yml file, Filebeat is configured as follows:
filebeat.autodiscover: providers: - type: docker labels.dedot: true templates: - condition: contains: container.labels.collect_logs_with_filebeat: "true" config: - type: container format: docker paths: - "/var/lib/docker/containers/${data.docker.container.id}/*.log" processors: - decode_json_fields: when.equals: docker.container.labels.decode_log_event_to_json_object: "true" fields: ["message"] target: "" overwrite_keys: true output.logstash: hosts: "logstash:5044"
The above configuration uses a single processor. If we need, we could add more processors, which will be chained and executed in the order they are defined in the configuration file. Each processor receives an event, applies a defined action to the event, and the processed event is the input of the next processor until the end of the chain.
Once the log event is collected and processed by Filebeat, it is sent to Logstash, which provides a rich set of plugins for further processing the events.
The Logstash pipeline has two required elements, input and output, and one optional element, filter. The input plugins consume data from a source, the filter plugins modify the data as we specify, and the output plugins write the data to a destination.
In the logstash.conf file, Logstash is configured to:
- receive events coming from Beats on port 5044
- process the events by adding the tag logstash_filter_applied
- send the processed events to Elasticsearch, which runs on port 9200
input { beats { port => 5044 } } filter { mutate { add_tag => [ "logstash_filter_applied" ] } } output { elasticsearch { hosts => "elasticsearch:9200" } }
Elasticsearch will store and index the log events and, finally, we will be able to visualize the logs in Kibana, which exposes a UI on port 5601.
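If you want to verify what has been indexed without opening Kibana, you can also hit Elasticsearch's search API directly. The short sketch below assumes the default logstash-* index pattern and the traceId field from the JSON log format shown earlier; adjust both if your setup differs:

import requests

# Find every log event that belongs to one request, using the trace id
# that Spring Cloud Sleuth stamped on each log statement.
query = {
    "query": {"match": {"traceId": "903c472a08e5cda0"}},
    "sort": [{"@timestamp": {"order": "asc"}}],
}

resp = requests.get("http://localhost:9200/logstash-*/_search", json=query)
for hit in resp.json()["hits"]["hits"]:
    src = hit["_source"]
    print(src["@timestamp"], src.get("application_name"), src.get("message"))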
Before starting, ensure you have at least Java 11, Maven 3.x and Docker set up. Then clone the repository from GitHub:
git clone
In the example above there are multiple services: booking, financial, searching and consumer. Each produces its log files at a specified location inside its Docker container. There are multiple .yml files created for building the Docker images, and all these YMLs are wired together with Docker Compose from the build.sh file.
Visualizing logs in Kibana
Open Kibana in your favourite browser (it exposes its UI on port 5601). Kibana comes with sample data in case we want to play with it. To explore the data generated by our applications, click the Explore on my own link. On the left-hand side, click the Discover icon. Then pick a field for filtering the data by time. Choose @timestamp and click the Create index pattern button.
@timestamp
The index pattern will be created. Click again in the Discover icon and the log events of booking service start up will be shown:
To filter log events from the booking service, for example, enter application_name : "booking-service" in the search box. Click the Update button and now you’ll see log events from the booking service only.
In the left-hand side, there’s a list of fields available. Hover over the list of fields and an Add button will be shown for each field. Add a few fields such as
and message. Now let’s see how to trace a request. Pick a trace id from the logs and, in the filter box, inputand message. Now let’s see how to trace a request. Pick a trace id from the logs and, in the filter box, input
application_name, trace.trace_id, trace.span_id
wherewhere
trace.trace_id: "<value>"
is the trace id you want to use as filter criteria. Then click the Update button and you will able to see logs that match that trace id.is the trace id you want to use as filter criteria. Then click the Update button and you will able to see logs that match that trace id.
<value>
Configurations! and hopefully been able to understand and get your basics clear on all the terms and technologies that shall be used to implement the logs tracing application.
If you found this story helpful, please share to help others find it!
Come back here and tell me about the before-and-after. I bet you’ll have something to say!
Create your free account to unlock your custom reading experience. | https://hackernoon.com/m-logs-with-elasticsearch-kibana-logstash-and-docker-lw3334wb | CC-MAIN-2021-17 | refinedweb | 2,194 | 56.25 |
Walkthrough: Creating and Using an ASP.NET Web Service in Visual Web Developer.
Tasks illustrated in this walkthrough include:
Creating an ASP.NET Web service.
Creating methods in the Web service.
Creating a Web site that references and uses the Web service.
Debugging the Web page and the Web service.
In the Language list, click the programming language that you prefer to work in.
The programming language that you choose will be the default for the Web site. However, you can use more than one language in the same Web application by creating pages and components in different programming languages. For more information about how to create components using different languages, see Shared Code Folders in ASP.NET Web.
Visual Web Developer creates a new Web service that is made up of two files. The Convert.asmx file is the file that can be invoked to call Web service methods, and it points to the code for the Web service. The code itself is in a class file (Convert.vb, Convert.cs, or Convert.jsl, depending on the programming language) in the App_Code folder. The code file contains a template for a Web service. The code file includes some code for a Web service method.
You will create two methods in the Web service. The first method converts Fahrenheit temperatures to Celsius, and the second method converts Celsius temperatures to Fahrenheit.
To create the conversion methods
Add the code for the two conversion methods (FahrenheitToCelsius and CelsiusToFahrenheit) inside the class, after the HelloWorld method.
Now that you have a Web service, you will create a Web site where you will reference and use the Web service that you created. For the walkthrough, you will create a separate Web site that has a page where you start the Web service methods that you just created.
To create a Web site to use the Web service
In Solution Explorer, right-click the name of the Web site, and then click Add Web Reference.
Click Add Reference.
Visual Web Developer creates an App_WebReferences folder and adds a folder to it for the new Web reference. By default, Web references are assigned a namespace corresponding to their server name (in this case, localhost). Make a note of the name for the Web reference namespace. In the folder, Visual Web Developer adds a .wsdl file that references the Web service. It also adds supporting files, such as discovery (.disco and .discomap) files, that include information about where the Web service is located.
You can now use the Web service. In this walkthrough, you will add controls to Default.aspx, and then program the controls to convert a specified temperature to both Fahrenheit and Celsius. When the page is running, it will look something like the following illustration..
protected void ConvertButton_Click(object sender, EventArgs e) { localhost.Convert wsConvert = new localhost.Convert(); double temperature = System.Convert.ToDouble(TemperatureTextbox.Text); FahrenheitLabel.Text = "Fahrenheit To Celsius = " + wsConvert.FahrenheitToCelsius(temperature).ToString(); CelsiusLabel.Text = "Celsius To Fahrenheit = " + wsConvert.CelsiusToFahrenheit(temperature).ToString(); }.
Both the Web site and the Web service are configured for debugging, so that you can now try debugging. You will start in the Default.aspx page and step through the code until the code invokes the Web service. The debugger will switch to the Web service and continue stepping through the code.
To debug the page and Web service XML Web Services in Managed Code, Web Services (How Do I in Visual Basic), XML Web Services Created with ATL Server, and Creating and Accessing XML Web Services Walkthroughs.
TasksWalkthrough: Debugging Web Pages in Visual Web Developer
Reference@ WebService Directive in XML Web Services
<webServices> Element
ConceptsDesign Guidelines for XML Web Services Created Using ASP.NET
Securing XML Web Services Created Using ASP.NET
Other ResourcesGuided Tour of Creating Web Sites in Visual Web Developer
XML Web Services Using ASP.NET | http://msdn.microsoft.com/en-US/library/8wbhsy70(v=vs.80).aspx | CC-MAIN-2014-15 | refinedweb | 600 | 58.58 |
We are having a little debate
internally on an issue around naming conventions for moving APIs to the 64bit
world.We made a few design mistakes in V1
and exposed some properties that are really word sized as Int32’s rather than
Int64s. I don’t think there are
very many of these, but it seems we need a common pattern for any we do dig
up…Here is an example. On the Process class today we
have:public
int
VirtualMemorySize { get;
}public
int WorkingSet
{ get; }As you know we can’t just change
these to return longs as that would break apps complied against V1 or V1.1 when
run on Whidbey… We also can not add
overloads that return longs as the properties must differ by more than return
type (btw, this is a CLS rule not a runtime rule, the runtime is just fine with
overloading on return type.. now the only problem is finding a language where
that is valid ;-)). So we are left
we having to make a name change…We feel strongly we want a postfix
so the properties sort correctly in initellisense. The two front runners
are:XxxLongXxx64So that would
be:public
long
VirtualMemorySizeLong { get;
}public
long
WorkingSetLong { get; }Orpublic
long
VirtualMemorySize64 { get;
}public
long
WorkingSet64 { get; }Thoughts, other
suggestions?
Returning a long is exactly the same problem as returning an int. When the CLR goes 128bit you will have the same problem. Now, of course we can never envisage a situation where computers have outgrown 64bits – but it has happened before.
Surely it would be better to use IntPtr or some other variable sized integer type?
ps I prefer the ’64’ suffix.
I agree with RichB that some kind of ‘native type’ struct is needed for something like this. I also think it would be forgiveable to break existing code in order to fix the problem you mentioned.
On another note, I would very much like to see co-variant return types in a future version of c#.
Since it was already decided to use long rather than 64, why do we need to create a confusion over another confusion.
What I mean:-) V1.1: Array.LongLength ……………………….
I would prefer a ’64’ suffix.
Correct me if I’m wrong, but aren’t int and long just synonyms for Int32 and Int64 (which are the official names)? In which case it seems to me that WorkingSet64 fits in a lot better. It’s also a lot more descriptive than WorkingSetLong.
On a side note I would really like to see property overloading in a future C# version. Not necessarily overloading on returns types everywhere — just properties. Otherwise it’s one more reason to ignore them in the face of the superiority of methods (for this type of versioning).
In fact now that I think about it, isn’t it also true that .NET languages must have Int64, but not long? It would be weird calling SomeFuncLong from a language that has no actual "long" concept.
+1 on Xxx64. Given the address space that 64 bits gives us, the CLR will be history long before we need to move to 128 bits.
Alternatively, how about ‘VirtualMemorySizeEx2’? 😉
if the return type has to be long then the function name must be VirtualMemorySizeLong. But long it’s 64 bit for 64 bit platforms and 32 bits for others, so the developers could always use the VirtualMemorySizeLong method and you internally can call either the 32bit or the 64bit one.
And speaking of future compatibilty when we will have a 128bit processor, long will be 128bit long and so we will not have the issue of adding another method.
If the new function must only be called within the 64bits environment, then the return type should be int64 and the postfix in the xxx64 notation.
That’s my 2 cents.
regards,
– mn
+1 on Xxxx64
+1 on Xxxx64. As a C++ programmer I’ve been bitten before by the long==64 bits in C#. Having it spelt out always helps.
I too agree with Rich (and Len)
Break existing code. Create a new struct type – maybe in the System.Environment namespace called Word.
And,
public Word VirtualMemorySize { get; }
public Word WorkingSet { get; }
I am confused by the benefit of side by side execution if all older apps must be able to run on newer frameworks.. Wasn’t one of the big design goals of side by side execution to allow breaking changes in the framework to not affect older apps? It seems like the .NET framework would evolve much better if it is assumed that apps written for 1.x will run on a 1.x CLR and apps written for 2.x will run on a 2.x CLR. Maintaining compatibility throughout all possible versions of the .NET framework will still not eliminate the compatibility issues and we’ll also end up with the VirtualMemorySizeEx2 problem mentioned above.. Why not ship the 1.1 framework with Longhorn in addition to the Whidbey framework?
My vote is for the "64" suffix. 64 explicitly states the size is less to type.
+1 for Xxx64. 🙂
"And speaking of future compatibilty when we will have a 128bit processor, long will be 128bit long and so we will not have the issue of adding another method."
That’s not true. The sizes of the types are fixed. If, for god only knows why, we have a 128-bit machine, then new types will have to be invented to cover the need for 128-bit integers.
Another issue is that FxCop doesn’t like the type to be embedded in public property names. So, VirtualMemorySizeLong will report a violation. But, VirtualMemorySize64 won’t.
Guess which one I’m voting for.
64 is a beter choice, esp since in C++, ‘long’ is 32 bits.
marco: "if the return type has to be long then the function name must be VirtualMemorySizeLong".
But the return type isn’t long, it just looks like it. It’s actually Int64.. so surely VirtualMemorySize64 makes more sense.
Zhwgnon: "Create a new struct type – maybe in the System.Environment namespace called Word."
But wasn’t the whole point of the common type thingy that each separate type has a specific size?
The concept of supporting 128 bit numbers might also be a valid issue. After all, we support 64-bit variables on 32-bit machines. What would it be then, VirtualMemorySizeLongLong? Or VirtualMemorySizeReallyLong? 🙂
Another vote for 64 based on there being no "Long" in the class library.
Func64() looks much better. No confusion (which might happen with FuncLong()). – I wish I could stop calling them funcs.
Whoever suggested the following …
public Word VirtualMemorySize { get; }
public Word WorkingSet { get; }
No, no, no, no … please no.
Why, why, why, why … please why?
break code. let side by side execution take care of it.
the other implicit problem with WorkingSet/WorkingSet64 is that it introduces a subtle error… for example
lets saytheres a property called Size, which is int32. Lets say i do something ridiculous, like say that the maximum size is int32.MaximumValue. lets say i then say sizePercent = Size/int32.MaximumValue.
Now, theres this new value, size64. for all values of size <int32.MaximumValue, size==size64. but, when size64 exceeds int32.maximum value, the value of "size" (the int32 version of the property) is wrong. so, my percentsize calculation will also be absolutely wrong.
essentially, your supporting the use of two different properties to represent the same logical information, and ensuring that one of those properties will return incorrect information, and forcing developers to change code to get correct information in those cases.
My vote goes to 64,
1st- It’s shorter
2nd-It clearly indicates the type size
After many years using VB I still have a little brain lock remembering how many bytes Integer or Long is, that’s why I always use Int32 or Int16 instead of integer or short.
+1 for neither.
Posted rationale for 64 suffix is silly… "I can’t remember the size of the type?". Well, you not supposed to have to.
Suffix of Long is only marginally better (at least it’s not bit size). But it’s ugly… and it breaks current coding convention.
This is a horrible design pattern that should not be propagated. Fix the problem, force load of v1[.x] for older apps.
+1 for Neither,
However if you are convienced that you have to suffix them than I vote for "64" as it is consistant with Int32 and Int64
One more vote for xxxx64. Int32 and Int64 shows the way.
+(10^666)! for Word.
Modify the .NET platform to provide the overloads transparently.
A 1.x app will see only the 32 bit version
A 2.x app will see only the 64 bit version
That way you can maintain backwards compatability and not break any CLR rules and not have any ugly names.
Plus it would be a generally useful feature to have – the ability to mark a method as only being callable from code of certain versions. That way you could deprecate methods to new source, but maintain the methods for old binaries. All without the mess of worrying about forcing the load of older versions.
My favorite solutions in that order:
1) Add support in CLS for overloading on return type. C++ has it already.
2) Break compatibility and let SxS execution deal with it
3) use 64 suffix. Languages as MC++ use long for Int32, so using Long as sufix would lead to a lot of confusion.
public double VirtualMemorySize { get; }
public double WorkingSet { get; }
Why on earth do you need more than 15 significant digits when dealing with memory sizes? They are approximations anyway.
+1 for: Let side by side runtime do it’s work. Stop trying to get everything in the same namespace, it’s going to be a REAL mess if you do that.
And provide a compatibility layer if you really want applications to run on 2.0 when compiled against 1.0. Aliasing works just fine for that kind of situations, you get a cleaner lib. We all loved the clean 1.0. If you start (as it’s being done in other teams) to do]
XmlDocument
XmlDocument2
and
VirtualMemorySize
VirtualMemorySize64
I’m pretty sure the runtime is going to loose a LOT to that. A huge lot.
It’s better to rely on SxS execution to avoid polluting the system with more and more legacy names.
BTW, such platform-dependent things can be explicitly specified as being platform-dependent: use UIntPtr and mark them as not CLS-compliant. Then the clients will know that they are touching dark areas of the API.
+1 for side by side.
Keep the API clean!
Err.. doesn’t side by side execution mean we have to recompile our apps separately for 64-bit machines and 32-bit machines? Since the data types we’ll be using are set to 32 or 64 bits in the compiled code?
What happens if I have an Int64 which I want to assign to WorkingSet (or whatever, assuming the property is setable) and it turns out that the runtime I’m using is the 32 bit version? If I’d set it to WorkingSet64 it would have been fine (I assume there will be a version of WorkingSet64 for 32 bit machines? Or the framework would be equally broken).
R Said – "Whoever suggested the following …
public Word VirtualMemorySize { get; }
public Word WorkingSet { get; }
No, no, no, no … please no."
Why not? What’s a word? Whaddaya mean it’s not obvious?
Case closed.
+1 for letting side by side handling this.
I am all for binary backwards compatibility and not breaking existing programs. However source compatibility is a different story. At the point I recompile my program, I would actually like to know that any call to VirtualMemorySize no longer work because the return value doesn’t fit into an Int32. To me, that is a feature. I can fix my code to handle the return value as an Int64.
Personally I really despise the ClassName2, ClassNameEx, ClassNameEx2 approach to naming updated types. That just really screams to me "hey, we screwed it up and now we are fixing it" or "we screwed, fixed it once but screwed up doing that so we have to fix it again (Ex2)". That doesn’t make me feel confident about the Framework.
I think a couple other people suggested the xxxInt64 version as it is the suggestion standard for class library design provided by Microsoft. It basically tells class library desiners to use the native .NET names for data types so that you do not get cross-language confusion.
See:
Xxx64 ++;
definately should include postfix/suffix annotation to keep the CLS/FCL clear — especially for those who use .Net languages that consider ‘long’ to be a 32-bit type. possibly allow it to be aliased to ‘XxxLong’ when targeting 64bit machines at either the source-level (and let the compiler inject the much more descriptive CLS/FCL ‘Xxx64’ type) or IL-level.
If there is absolutely no way to avoid adding methods with ‘we screwed up’-names, then you should definately use the CLR type names and not the C# ones.
And how about using namespaces for new types like XmlDocument2. System.Xml.Whatnot.XmlDocument sucks less than System.Xml.XmlDocument2. And face it – type names with version number postfixes suck bigtime.
I wouldn’t worry about naming – I would break backwards-compatibility for assemblies compiled on the new framework.
Yes, it’s a minor annoyance for people to need to fix their code, but in my opinion, providing two sets of functions is making the problem even worse!
If I *had* to choose, I would pick 64. "long" is meaningless.
PingBack from | https://blogs.msdn.microsoft.com/brada/2003/10/16/naming-convention-for-apis-in-the-64-bit-world/?replytocom=6043 | CC-MAIN-2017-43 | refinedweb | 2,305 | 65.73 |
In this post, I’ll show you how to set up end-to-end Capistrano testing using Cucumber. I’ve extracted this from the cucumber features I wrote for a gem I’m building named auto_tagger. To fully test capistrano recipes, your tests will have to:
- Create a local git repository
- Create a local app with a config/deploy.rb file
- Push the app to the local repository
- Run
cap deploy:setupfrom the app (which will setup a directory inside your local test directory)
- Run a
cap deployfrom the app (which will deploy to your test directory)
- Assert against the content of the deployed app in the test directory
Background – Capistrano recipes are almost never tested
Looking around online, I couldn’t find a single list of capistrano packages that has an automated test suite, even ones from some big hosts. It’s no surprise that Capistrano tasks are seldom tested – testing capistrano recipes is hard, and even when you do test them, there are still so many variables in real-life deploys that you can’t account for everything.
It’s like Rummy.
However, there are some things you can do to stave off the “known unknowns”. For example, you know that someone might forget to set an important variable in their cap task and you know they might be using cap-ext-multistage. For these kinds of examples, Capistrano testing can give you much more assurance that a bug in your recipe is less likely to
rm -rf /* on your remote machine.
Getting started: setup your keys
To make life easy, you’ll want to be able to ssh to your own machine. To do this, you’ll need to create a key, then add that key to your authorized keys. If you don’t already have a key setup locally, check out the excellent RailsMachine guide. Once you have a key, you can copy it to authorized keys like so:
cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
Now you should be able to ssh to your own box without entering a password. To log into your own box, you can use the IP address or the computer’s name. Depending on your
/etc/hosts file entries, you may also be able to log in using
localhost.
If you are a Mac user you’ll have to enable “Remote Access” from System Preferences to be able to ssh in to your own box. For security, only allow yourself to log in via ssh. The system preferences pane will show you the IP address you can use to ssh into.
NOTE: this only will not work on Windows
Setup your cucumber file system
The file system I’ll use for this demo will look like this:
|-- features | |-- capistrano.feature | |-- step_definitions | | `-- capistrano_steps.rb | |-- support | | `-- env.rb | `-- templates | `-- deploy.erb |-- recipes | `-- my_recipe.rb `-- test_files
Add the feature
Let’s say you have a simple cap task that writes a file to shared after you deploy. The feature file might look something like this:
features/capistrano.feature
Feature: Deployment In order to know feel better about myself As a person who needs lots of reinforcement I want leave files named PEOPLE_LIKE_YOU all around my remote machine Scenario: User deploys Given a an app When I deploy Then the PEOPLE_LIKE_YOU file should be written to shared
Now you can run
cucumber features/ and you’ll see that you have several pending steps.
Get your setup correct
For these features to work, we’ll need a test directory (that’s outside of the features directory), and we’ll need to delete everything from it before running every scenario:
features/support/env.rb
require 'spec' require 'erb' require 'etc' Before do @test_files_dir = File.join(Dir.pwd, "test_files") @app_dir = File.join(@test_files_dir, "app") @repo_dir = File.join(@test_files_dir, "repo") FileUtils.rm_r(@test_files_dir) if File.exists?(@test_files_dir) FileUtils.mkdir_p(@test_files_dir) end
Fill in the steps
features/step_definitions/capistrano_steps.rb
Given /^a an app$/ do # Create the git repo FileUtils.mkdir_p @repo_dir Dir.chdir(@repo_dir) do system "git --bare init" end # Create and capify the dummy app, and push it to the local repo FileUtils.mkdir_p @app_dir Dir.chdir(@app_dir) do [ %Q{git init}, %Q{mkdir config}, %Q{capify .}, %Q{git add .}, %Q{git commit -m "first commit"}, %Q{git remote add origin{@repo_dir}}, %Q{git push origin master} ].each do |command| system command end end # Write a custom deploy file to the app, using an ERB template deploy_variables = { :deploy_to => File.join(@test_files_dir, "deployed"), :repository => @repo_dir, :git_executable => `which git`.strip, :logged_in_user => Etc.getlogin } template_path = File.expand_path(File.join(__FILE__, "..", "..", "templates", "deploy.erb")) compiled_template = ERB.new(File.read(template_path)).result(binding) File.open(File.join(@app_dir, "config", "deploy.rb"), 'w') {|f| f.write compiled_template } end When /^I deploy$/ do Dir.chdir(@app_dir) do system "cap deploy:setup" system "cap deploy" end end Then /^the PEOPLE_LIKE_YOU file should be written to shared$/ do File.exists?(File.join(@test_files_dir, "deployed", "shared", "PEOPLE_LIKE_YOU")).should be_true end
Now when you run
cucumber features/ and you’ll see that you a failure because you don’t have the correct cap file.
Make it pass
To make this pass, add a recipe like this:
recipes/my_recipe.rb
Capistrano::Configuration.instance(:must_exist).load do task "my_task" do run "echo PEOPLE_LIKE_YOU > #{shared_path}/PEOPLE_LIKE_YOU" end end
Debugging
You’ll notice that when you run
cucumber features you get all of the output from capistrano. This makes your output messy, but provides a lot of valuable debug information. If you want to silence it, you can use any number of tools, including piping the output to logs, or using methods like
silence_stream.
You’ll also notice that the setup described above leaves the files in the
test_files directory intact after each feature (it wipes it clean before each feature). This makes it easy to inspect the file system manually after each run. While developing, you can even
cd into the
test_files/app directory and re-run deployments, or tweak the
config/deploy.rb file and re-deploy and then move your changes back to
templates/deploy.erb.
Next Steps
This is just a quick sample to show you what you can do. It is not meant to be a good example of how to use cucumber (it has lots of instance variables, hard to read steps that are not reusable etc…), but rather a quick example of how to use cucumber to test cap recipes.
You’ll probably want to create a helper class of some sort to wrap up the file system calls, so your steps would look more like:
Given /^a an app$/ do MyFileHelper.create_repo MyFileHelper.create_app MyFileHelper.capify_app end
Testing against non-local environments
You could in theory use this to test against any environment you have access to – just change the host in
templates/deploy.erb. If you choose to test against a true remote machine, you’ll have to figure out how to shell out commands to it.
If you are on a mac, one thing that might help is to mount a remote machine over ssh.
Grab the source
The full source code for this app can be found at:
Nice — glad to see someone finally tackling this. Good work!
April 15, 2009 at 9:53 pm
the ‘black box’ aspect of capistrano is a little scary..
very cool.. this post is inspirational and informative.
August 4, 2009 at 3:34 pm
That’s a clever idea. I decided to give it a try, and have taken a slightly different approach.
I use my existing deploy.rb and override the variables with -s params.
def cap_cmd
cmd = Array.new
cmd << “cap”
cmd << “-s user=#{Etc.getlogin}”
cmd << “-s repository=#{@repository}”
cmd << “-s scm=git”
cmd << “-s scm_command=#{@scm_command}”
cmd << “-s deploy_to=#{@deploy_to}”
cmd << “localhost”
cmd
end
I also matched the capistrano variable names, so mentally I can keep things
straight:
Before do
@scm_command = `which git`.chomp
@tmpdir = tempdir ### method which returns a tempdir
@deploy_to = File.join(@tmpdir, ‘app’)
@repository = File.join(@tmpdir, ‘repo’)
@shared_path = File.join(@deploy_to, ‘shared’)
@current_path = File.join(@deploy_to, ‘current’)
raise “ERROR: ‘git’ command not found” if @scm_command.empty?
end
Currently I am only testing deploy:setup, but on track to get my custom
recipes tested. Cool idea Jeff.
August 26, 2009 at 11:28 am
I ended up ditching cucumber and using rsepec.
It annoyed me I didn’t have a BeforeAll callback. I also didn’t like the fact that I had to write a story for each item I would assert. Rspec worked out quite well, altho it’s probably not as sexy, but much easier for me to maintain.
August 27, 2009 at 4:25 pm | http://pivotallabs.com/testing-capistrano-recipes/?tag=textmate | CC-MAIN-2015-22 | refinedweb | 1,439 | 64.91 |
How to Normalize a Pandas DataFrame Column
In this tutorial, you will learn how to Normalize a Pandas DataFrame column with Python code. Normalizing means, that you will be able to represent the data of the column in a range between 0 to 1.
At first, you have to import the required modules which can be done by writing the code as:
import pandas as pd from sklearn import preprocessing
Along with the above line of code, you will write one more line as:
%matplotlib inline
What this does is, basically it just represents graphs that you create with your project will be projected in the same window and not in a different window.
Now let’s create data that you will be working on:
data = {'data_range': [100,55,33,29,-57,56,93,-8,79,120]} data_frame = pd.DataFrame(data) data_frame
This will just show our unnormalized data as:
We can also plot this above-unnormalized data as a bar graph by using the command as:
data_frame['data_range'].plot(kind='bar')
The graph of our unnormalized data is:
It can be seen clearly from the graph that our data is unnormalized, and now you will be using various preprocessing tools to convert it into a normalized data.
A = data_frame.values #returns an array min_max_scaler = preprocessing.MinMaxScaler() x_scaled = min_max_scaler.fit_transform(A)
Where A is nothing but just a Numpy array and MinMaxScaler() converts the value of unnormalized data to float and x_scaled contains our normalized data.
We can also see our normalized data that x_scaled contains as:
normalized_dataframe = pd.DataFrame(x_scaled) normalized_dataframe
The results of the above command will be:
Now you can plot and show normalized data on a graph by using the following line of code:
normalized_dataframe.plot(kind='bar')
So we are able to Normalize a Pandas DataFrame Column successfully in Python. I hope, you enjoyed doing the task.
Also, read: Drop Rows and Columns in Pandas with Python Programming | https://www.codespeedy.com/normalize-a-pandas-dataframe-column/ | CC-MAIN-2020-29 | refinedweb | 323 | 58.42 |
Pointer Arithmetic
Last updated on
Some consider this a no-no in C++ and generally I’m with them, however its not that hard of a concept and in general not that different from iterators anyway.
#include <stdio.h> struct foo { int a,b,c; }; struct foo data[10]; int main() { /* using ptr arth to find the length */ struct foo *begin = &data[0]; struct foo *end = &data[10]; int count = end - begin; printf("array count = %d\n", count); /* using ptr arth as an iterator */ struct foo *it = begin; while(it != end) { static int i = 1; printf("%d, ", i); i++; it++; } printf("\n"); return 0; };
Compile and Run in C
cc ptr_arth.c && ./a.out
Outputs
array count = 10 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, | https://blog.cooperking.net/posts/2019-03-07-ptr_arith/ | CC-MAIN-2021-39 | refinedweb | 128 | 68.13 |
Refactoring Android in Eclipse: Accelerate Your Android App Development
After you get a handle on the essentials of developing for a new platform, the next issue you should tackle is making the development process faster and easier -- and the Android platform is no exception.
Eclipse and the ADT plugin provide the Android developer with a powerful IDE that includes plenty of functionality for accelerating Android app development. In this article, I'll introduce you to refactoring Android in Eclipse, and demonstrate how leveraging it can give your Android app development a boost.
Intro to Refactoring in Android App Development
Refactoring is an essential technique for taking the stress out of building an application, allowing you to alter the structure of a program without impacting its functionality. If you're new to refactoring, this may sound like a strange concept (why fix what isn't broken?), but refactoring is handy for implementing design-focused changes. It automates potentially complicated and labor-intensive design tweaks. For example, instead of picking through your code and manually changing every layout to a new type, you can use Eclipse's intuitive 'Change Layout' option to automate this change across your project. Refactoring is also commonly used to make code easier to read and understand.
Eclipse has a range of refactoring options that can be accessed through the dedicated 'Refactor' drop-down menu, which I'll cover in detail in this post.
1) Change Widget Type
'Change Widget Type' replaces the type of the selected view with a new type. To support this change, Eclipse automatically removes any attributes that are unsupported by the new widget type, adds any attributes that are required by the new type, and changes the view ID to reflect the change.
To use this option, select 'Change Widget Type…' from Eclipse's 'Refactor' drop-down menu. In the subsequent dialog, select the desired widget from the 'New Widget Type' list.
If you wish to preview this change by comparing a 'before' and 'after' version of your code, select 'Preview.' Alternatively, press 'Ok' to go ahead and apply the change.
2) Wrap In Container
'Wrap In Container' allows you to select one or more elements and (as the name suggests) wrap them in a new container. To support the change, namespace and layout parameters are automatically transferred to the new parents. If you wrap root elements in a new container, the namespace declaration attributes and layout_attribute references will be transferred to the new root.
To take advantage of this refactoring option:
- Select 'Wrap in Container' from Eclipse's 'Refactor' menu.
- Select a new container type.
- Enter a new layout ID.
- Select 'Preview' to see how this change will affect your code, or 'Ok' to apply the change.
3) Change Layout
'Change Layout' can be applied to any layout view and converts the selected layout to a new type. For some supported items, 'Change Layout' does make the effort to preserve the pre-existing layout, but note that this option often simply changes the opening and closing XML tags, along with the supporting functions of updating IDs, removing unsupported attributes and adding missing attributes.
To implement this change, select 'Change Layout' from the 'Refactor' menu and choose your new layout type.
Tip. When the target type is RelativeLayout, the 'Flatten Hierarchy' option can convert the entire layout and flatten it into a single target layout.
Originally published on.
Page 1 of 2
| http://www.developer.com/ws/android/programming/refactoring-android-in-eclipse-accelerate-your-android-app-development.html | CC-MAIN-2017-13 | refinedweb | 569 | 51.99 |
In this article, I explain how to setup your ASP.NET 5 project for a single page application (SPA) in Angular JS. I have used MVC 6 Web API for some static data to display, eventually I will use database to store and display the data using Web API. I will also show how to copy the front-end JavaScript files automatically using gulp, bower and npm. In later modules, I will describe how to combine and minify the front-end JavaScripts files.
In my demo, I have Visual Studio 2015 RC, the latest pre-release version of next generation Visual Studio for cross platform development. I will also use Entity Framework 7 and DNX (.NET Execution Environment) to migrate the database and run the web site in different environments.
Single Page Application (SPA) now-a-days is well-known term due to the advancement of front-end tools and technologies that have been evolved in recent years. Among them, Angular JS is the most popular in building SPAs. Recent release of Microsoft's Visual Studio 2015 RC is a fantastic platform for developing SPAs in which the front-end tools can be easily managed and maintained for the application. Also the Web API 2 in MVC 6 is being used as back end service that are very efficient support for calling it from the front-end.
I will be using Visual Studio 2015 RC, MVC 6 Web API2, gulp, bower and npm throughout the journey to work with front-end and back-end development.
To start with, I have opened Visual Studio 2015 RC, and from the Start page, I have selected New Project and from the available template, I chose ASP.NET Web Application. I have named the project as BooksAngularApp.
BooksAngularApp
After clicking Ok, the template select wizard appears where I will select Web Site from the available ASP.NET 5 Preview Templates. The reason I have chosen the template is that the template provides us a framework that is helpful rather than starting the empty template.
To note here, the Web Site has gulp task runner as the default which I will be using to combine and minify the front-end JavaScripts and CSS files.
style="width: 640px; height: 465px" data-src="/KB/aspnet/992208/extracted-png-image2.png" class="lazyload" data-sizes="auto" data->
After clicking OK, I have got the following project created with the project structure and the preview page as below:
width="626" data-src="/KB/aspnet/992208/extracted-png-image3.png" class="lazyload" data-sizes="auto" data->
If we look at the project structure on the Solution Explorer, we can see the pre-installed packages such as bootstrap, jquery, hammer, etc. in the bower folder. Also, we can see the npm folder where gulp and rimraf have been installed as front-end tools that are being used to manage the front-end scripts, especially JavaScripts and CSS files.
Now let us have a look at the gulpfile.js which will be used to manage our front-end code or scripts.
var gulp = require("gulp"),
rimraf = require("rimraf"),
fs = require("fs");
eval("var project = " + fs.readFileSync("./project.json"));
var paths = {
bower: "./bower_components/",
lib: "./" + project.webroot + "/lib/"
};
gulp.task("clean", function (cb) {
rimraf(paths.lib, cb);
});));
}
});
We can see the scripts runs some tasks to copy and clean the folders for the JavaScripts and CSS files in the wwwroot folders whenever any change is made to the files during development.
Let us see how these happen. I have opened the Task Runner from View->Other Windows->Task Runner.
style="width: 624px; height: 461px" data-src="/KB/aspnet/992208/extracted-png-image5.png" class="lazyload" data-sizes="auto" data->
We can the clean and copy tasks as defined in the gulpfile.js. Before we running any of them, we can see a lib folder has a copy of all the JavaScripts and CSS files by default.
I have run the clean task and it cleans the lib folder in wwwroot directory, that means gulp is being used to run the clean task to remove the static files.
gulp
style="width: 640px; height: 414px" data-src="/KB/aspnet/992208/extracted-png-image6.png" class="lazyload" data-sizes="auto" data->
I have run the copy task and the lib folder is back with the copied files again.
copy
style="width: 640px; height: 412px" data-src="/KB/aspnet/992208/extracted-png-image7.png" class="lazyload" data-sizes="auto" data->
For our SPA, I will be using three angular packages: angular, angular-route and angular-resource. Currently, the default bower.json file looks like below:
angular
angular-route
angular-resource
>
{
"name": "ASP.NET",
"private": true,
"dependencies": {
"bootstrap": "3.0.0",
"jquery": "1.10.2",
"jquery-validation": "1.11.1",
"jquery-validation-unobtrusive": "3.2.2",
"hammer.js": "2.0.4",
"bootstrap-touch-carousel": "0.8.0"
}
}
I have added the three angular packages as below:
{
"name": "ASP.NET",
"private": true,
"dependencies": {
"bootstrap": "3.0.0",
"jquery": "1.10.2",
"jquery-validation": "1.11.1",
"jquery-validation-unobtrusive": "3.2.2",
"hammer.js": "2.0.4",
"bootstrap-touch-carousel": "0.8.0",
"angular": "*",
"angular-route": "*",
"angular-resource": "*"
}
}
After adding the three lines for angular, I haven't saved the bower.json file yet, and in the Solution Explorer in Bower folder, there are no angular modules.
style="width: 624px; height: 404px" data-src="/KB/aspnet/992208/extracted-png-image8.png" class="lazyload" data-sizes="auto" data->
After saving the file, we can see the angular components on the bower folder in the dependencies list after few seconds.
style="width: 430px; height: 621px" data-src="/KB/aspnet/992208/extracted-png-image9.png" class="lazyload" data-sizes="auto" data->
You might see the bower folder as below where the angular modules are not installed:
If they are not installed, right click on the Bower folder and click on Restore Packages and it will restore all the uninstalled packages that are added to the bower.json file.
width="624" data-src="/KB/aspnet/992208/extracted-png-image11.png" class="lazyload" data-sizes="auto" data->
So I have got angular packages installed now and we need to update the task runner so that the angular files are copied over the lib folder with the other bower components.
I have updated the copy task as below:",
"angular": "angular/angular*.{js,map}",
"angular-route": "angular-route/angular-route*.{js,map}",
"angular-resource": "angular-resource/angular-resource*.{js,map}"
}
Now I have run the copy task again and I can see the angular files are copied over to the lib folder of wwwroot.
I have configured the copy and copyapp tasks to be run after every build of the application to make sure the changes are copied over.
copyapp
width="588" data-src="/KB/aspnet/992208/extracted-png-image13.png" class="lazyload" data-sizes="auto" data->
Now we are ready to start working with the SPA BooksAngularApp.
For the SPA, I will be using a fictitious book store where we will be able to view the list of books and their details. In this module, I will be using some static data for the book items. For this purpose, I need a data model that will be used for accessing and displaying the books.
I have added a new class called Book in the Models folder of my project and the model looks like below:
Book
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.ComponentModel.DataAnnotations;
namespace BooksAngularApp.Models
{
/// <summary>
/// The entity class with Book properties
/// </summary>
public class Book
{
[Key]
public int Id { get; set; }
public string Title { get; set; }
public string Author { get; set; }
public string Type { get; set; }
public DateTime DatePublished { get; set; }
public decimal Price { get; set; }
}
}
After this, I have created an MVC Controller Class named as BooksController which will be the API Controller for our data access back and forth.
BooksController
style="width: 624px; height: 429px" data-src="/KB/aspnet/992208/extracted-png-image14.png" class="lazyload" data-sizes="auto" data->
As default, the Controller class looks like below:
Controller
style="width: 624px; height: 396px" data-src="/KB/aspnet/992208/extracted-png-image15.png" class="lazyload" data-sizes="auto" data->
I have made changes to the class to be as an API Controller by adding the...
[Route("api/[controller]")]
...before the class definition so that by the Route attribute, it will act as an API Controller.
Route
I have added the HttpGet method to display the static list of books as below:
HttpGet
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNet.Mvc;
using BooksAngularApp.Models;
namespace BooksAngularApp.Controllers
{
[Route("api/[controller]")]
public class BooksController : Controller
{
[HttpGet]
public IEnumerable<Book> Get()
{
return new List<Book>
{
new Book {Id=1, Title="Wonders of the Sky",
Author="Martin James", DatePublished=Convert.ToDateTime("1/1/2013"),
Type="Science",Price=23.23m },
new Book {Id=2, Title="Secrets of the Mind ",
Author="Allan Sue ", DatePublished=Convert.ToDateTime("2/1/2011"),
Type="Psychology", Price=12.50m },
new Book {Id=3, Title="We are Alive",
Author="Dick Smith", DatePublished=Convert.ToDateTime("2/11/2010"),
Type="Science Fiction", Price=21.25m } ,
new Book {Id=4, Title="Last day of the world",
Author="Martin James", DatePublished=Convert.ToDateTime("1/1/2013"),
Type="History", Price=10.40m },
};
}
}
}
style="width: 640px; height: 199px" data-src="/KB/aspnet/992208/extracted-png-image16.png" class="lazyload" data-sizes="auto" data->
To see whether the API Controller that I created is working, let us run the API from the browser:
style="width: 640px; height: 129px" data-src="/KB/aspnet/992208/extracted-png-image17.png" class="lazyload" data-sizes="auto" data->
We can see the data is being displayed as JSON from the API Controller, that means, we are ready to use the API to display the data.
I have created an app folder in my application root where I will be placing all angular related modules such as controllers, services, etc.
I have added app.js as the first module as below:
(function () {
'use strict';
angular.module('booksApp', ['booksServices']);
})();
The booksApp Angular module will depend on the service called booksServices which I will define now.
booksApp
booksServices
I have created a new folder called services within app folder and I have added booksServices.js as below:
(function () {
'use strict';
var booksServices = angular.module('booksServices', ['ngResource']);
booksServices.factory('Books', ['$resource',
function ($resource) {
return $resource('/api/books/', {}, {
query: { method: 'GET', params: {}, isArray: true }
});
}]);
})();
The service will inject ngResource as dependency which will eventually call the API Controller to get the data from the API.
ngResource
Now we need another Angular module which we will call booksController which I have created in controllers folder within app folder named as booksController.js.
booksController
(function () {
'use strict';
angular
.module('booksApp')
.controller('booksController',booksController)
booksController.$inject = ['$scope', 'Books'];
function booksController($scope, Books)
{
$scope.books = Books.query();
}
})();
Here we can see, the controller injects the scope object which returns the query results from the API.
scope
We need to write the task to copy the files over to the wwwroot folder as with other JavaSript files we did before.
var paths = {
bower: "./bower_components/",
lib: "./" + project.webroot + "/lib/",
app: "./" + project.webroot + "/app/",
srcapp: "./app/",
};
gulp.task("cleanappp", function (cb) {
rimraf(paths.app, cb);
});
gulp.task("copyapp",["cleanappp"], function () {
var app = {
"controllers": "controllers/booksController.js",
"services": "services/booksServices*.js",
"/": "app.js"
}
for (var destinationDir in app) {
gulp.src(paths.srcapp + app[destinationDir])
.pipe(gulp.dest(paths.app + destinationDir));
}
});
I have added the app and srcapp in the paths variable to define the source and destination for the app scripts. I have added a new task called cleanapp which will be used to clean the app folder in wwwroot. Finally, I have added another task called copyapp which has a inject of cleanapp task that means before copying the app scripts, it will clean up the folder first and then copy all script files.
<!DOCTYPE html>
<html ng-
<head>
<meta charset="utf-8" />
<title>List of Books</title>
<link href="lib/bootstrap/css/bootstrap.css" rel="stylesheet" />
<link href="lib/bootstrap/css/bootstrap-theme.css" rel="stylesheet" />
<script src="lib/jquery/jquery.min.js"></script>
<script src="lib/angular/angular.js"></script>
<script src="lib/bootstrap/js/bootstrap.js"></script>
<script src="lib/angular-resource/angular-resource.js"></script>
<script src="app/app.js"></script>
<script src="app/controllers/booksController.js"></script>
<script src="app/services/booksServices.js"></script>
</head>
<body ng-cloak>
<div ng-
<table class="table table-bordered table-striped">
<thead>
<tr>
<th>Title</th>
<th>Author</th>
<th>Type</th>
<th>Date Published</th>
<th>Price</th>
</tr>
</thead>
<tbody>
<tr ng-
<td>{{book.Title}}</td>
<td>{{book.Author}}</td>
<td>{{book.Type}}</td>
<td>{{book.DatePublished|date:"dd/MM/yyyy"}}</td>
<td>${{book.Price}}</td>
</tr>
</tbody>
</table>
</div>
</body>
</html>
style="width: 624px; height: 193px" data-src="/KB/aspnet/992208/extracted-png-image18.png" class="lazyload" data-sizes="auto" data->
width="624" data-src="/KB/aspnet/992208/extracted-png-image19.png" class="lazyload" data-sizes="auto" data->
Here, the web site is running both IIS Express and WebListener - side-by-side.
style="width: 640px; height: 403px" data-src="/KB/aspnet/992208/extracted-png-image20.png" class="lazyload" data-sizes="auto" data->
The source code for this article is available in the GitHub at:
In the next module, I will explore searching and filtering data and migration of the database using Entity Framework 7 and DN. | https://www.codeproject.com/Articles/992208/Angular-JS-Application-with-MVC-Web-API-ASPNET-and?msg=5082356#xx5082356xx | CC-MAIN-2022-27 | refinedweb | 2,241 | 50.02 |
AT FIRST SIGHT, a connection between learning C++ programming and poultry would seem to be unlikely, but there is—it’s the chicken-and-egg problem. Particularly in the early stages of understanding C++, you’ll often have to make use of things in examples before you properly understand them. This chapter is intended to solve the chicken-and-egg problem by giving you an overview of the C++ language and how it hangs together, and by introducing a few of the working concepts for things that you’ll be using before you have a chance to understand them in detail.
All the concepts that you’ll read about here are covered in more detail in later chapters. Most of this information is just to set the scene before you get into the specifics of writing C++ programs. You’ll see what a simple C++ program looks like, and then you’ll pull it to pieces to get a rough idea of what the various bits do. You’ll also look at the broad concepts of programming in C++ and how you create an executable program from the source code files you’ll be writing.
Don’t try to memorize all the information in this chapter. Concentrate on getting a feel for the ideas presented here. Everything mentioned in this chapter will come up again in later chapters. Here’s an overview of what this chapter covers:
- What the features of C++ are that make it so popular
- What the elements of a basic C++ program are
- How to document your program source code
- How your source code becomes an executable program
- How object-oriented programming differs from procedural programming
You’re probably familiar with the basic ideas of programming and programming languages, but to make sure we’re on common ground, let’s do a quick survey of some of the terms you’ll encounter as you progress through the book. You can also put C++ into perspective in relation to some of the other programming languages you’ll have heard of.
There are lots of programming languages, each with its advantages and disadvantages, and its protagonists and detractors. Along with C++, other languages that you're likely to have come across include Java, BASIC (an acronym for Beginner's All-purpose Symbolic Instruction Code), COBOL (an acronym for Common Business-Oriented Language), FORTRAN (an acronym for formula translator), Pascal (after Blaise Pascal, a French mathematician), and C (simply because it was a successor to a language called B). All of these are referred to collectively as high-level languages, because they're designed to make it easy for you to express what the computer is to do, and they aren't tied to a particular computer. Each source statement in a high-level language will typically map to several native machine instructions. A low-level language is one that is close to the native machine instructions and is usually referred to as an assembler language. A given assembler language will be specific to a particular hardware design, and typically one assembler instruction will map to one native machine instruction.

A Potted History
FORTRAN was the first high-level language to be developed, and the first FORTRAN compiler was written in the late 1950s. Even though FORTRAN has been around for over 40 years, it’s still used today for scientific and engineering calculations although C++ and other languages have eroded much of its usage.
COBOL is a language exclusively for business data processing applications, and it’s almost as old as FORTRAN. Although relatively little new code is written in COBOL, there is an immense amount of code that was written years ago that’s still in use and still has to be maintained. Again, C++ has become the language of choice for many business data processing programs.
BASIC emerged in the 1970s when the idea of a personal computer was being conceived. Interestingly, the first product sold by Microsoft was a BASIC interpreter. The ease of use inherent in the language resulted in a rapid growth in its popularity that continues to this day.
Java was developed in the 1990s. Its original incarnation as a language called Oak was really intended for programming small consumer electronics devices. In 1995 Oak evolved into the Java language for embedding code in web pages and from there into what it is today. The primary reason for the success of Java is its portability. A Java program can run unchanged on any hardware platform that supports it. The syntax of the Java language has many characteristics that make it look similar to C++, but there are significant differences. Although Java gains over C++ on portability, it can’t match C++ in execution performance.
C was developed in the early 1970s as a high-level language that could be used for low-level programming, such as implementing operating systems. Most of the Unix operating system is written in C.
C++ was developed by Bjarne Stroustrup in the early 1980s as an object-oriented language based on C. Hence the name, C++, which in C++ means C incremented. Because C++ is based on C, the two languages share a common subset of syntax and functionality, and all of the capabilities in C for low-level programming are retained in C++. However, C++ is a much richer and more versatile language than its ancestor. The vastly improved memory management features and the object-oriented capabilities of C++ means that C functionality represents a very small subset of C++. C++ is still unrivaled in scope, performance, and power. For this reason, the majority of high-performance applications and systems are still written in C++ today.
Interpreted vs. Compiled Program Execution
Whatever the programming language, the programs that you write are made up of separate instructions or source statements that describe the actions that you want the computer to carry out. These are referred to collectively as source code and are stored on disk in a source file. A single C++ program of any size will consist of several source files.
Programming languages are designed to make it relatively easy for you to describe the actions you want a computer to carry out compared with the form of program that a computer can actually execute. Your computer can only execute programs that consist of machine instructions (also called machine code), so it can’t execute your program’s source code directly. There are basically two ways in which a program written in one of the languages I mentioned previously can get executed and, for the most part, a particular language will use one or the other. Programs written in BASIC, for example, are often interpreted—that is, another program called an interpreter inspects the BASIC source code, figures out what it’s supposed to do, and then causes that to be done. This is illustrated on the left side of Figure 1-1.
Figure 1-1. Interpreted and compiled program execution
C++, on the other hand, is usually a compiled language. Before you can execute your C++ program, it must be converted to machine language by another program called a compiler. The compiler inspects and analyzes the C++ program and generates the machine instructions that will produce the actions specified by the source code. Of course, in reality neither interpreting nor compiling is quite as simple as I’ve described them here, but in principle that’s how they work.
With an interpreted language, execution is “indirect,” by which I mean the intent of the source code needs to be determined each time a program is executed. For this reason, execution is much slower—sometimes of the order of 100 times slower—than the equivalent program in a compiled language. The upside is that you don’t have to wait and compile a program before you run it. With an interpreted language, as soon as you’ve entered the code, you can execute the program immediately. A given language is usually either compiled or interpreted, and it’s typically the design and intended use of the language that determines which. Having described BASIC as an interpreted language, I should point out that it isn’t exclusively so; there are compilers for the BASIC language.
The question of which language is the “best” language sometimes comes up. The short answer is that there is no such thing as the “best” language—it depends on the context. Writing a program in BASIC is typically very rapid compared with most other languages, for instance, so if speed of development is important to you and obtaining the maximum execution performance isn’t, then BASIC is an excellent choice. On the other hand, if your program requires the execution performance that C++ provides, or you need the capabilities in your application that are available in C++ but not in BASIC, then C++ is obviously what you would use. If your application really must execute on a wide range of different computers and you aren’t concerned about achieving the ultimate in execution performance, then Java may be the best option.
Of course, the length and steepness of the learning curve will vary among different languages. In terms of the amount of time needed to learn a language, C++ is probably at the higher end of the scale, but this shouldn’t put you off. It doesn’t mean that C++ is necessarily more difficult. It does mean that there’s a great deal more to C++ than most other languages, so it takes a little longer to learn.
As a final thought on which language you should learn, any professional programmer worth his or her salt needs to be comfortable in several programming languages. If you're a beginner, this may sound a little daunting, but once you've grappled with and conquered your first two programming languages, you'll find it gets a lot easier to pick up others. Your first programming language is almost always the hardest.

Libraries
If you had to create everything from scratch every time you wrote a program, it would be tedious indeed. The same kind of functionality is often required in many programs— for example, reading data from the keyboard, or displaying information on the screen, or sorting data records into a particular sequence. To address this, programming languages usually come supplied with considerable quantities of prewritten code that provides standard facilities such as these, so you don’t have to write the code for them yourself every time.
Standard code that’s available for you to use in any of your programs is kept in a library. The library that comes with a particular programming language is as important as the language itself, as the quality and scope of the library can have a significant effect on how long it takes you to complete a given programming task.
Why Is C++ Such a Great Language?
C++ enjoys extraordinary popularity across virtually all computing environments: personal computers, Unix workstations, and mainframe computers. This is all the more remarkable when you consider the degree to which history weighs against a new programming language, no matter how good it is. The inertia implicit in the number of programs written in previous languages inevitably slows the acceptance of a new language. Added to this, there’s always a tendency among most professional programmers to stick with what they know and are expert and productive in, rather than jump in at the deep end with something new and unfamiliar, in which it will take time to develop fluency. Of course, the fact that C++ was built on C (which itself was the language of choice in many environments before the advent of C++) helped tremendously, but there’s a great deal more to it than that. C++ provides you with a unique combination of advantages:
- C++ is effective across an incredible range of applications. You can apply C++ to just about anything, from word processing to scientific applications, and from operating system components to computer games.
- C++ can be used for programming down at the hardware level—for implementing device drivers, for instance.
- C++ combines the facility for efficient procedural programming that it inherits from C with a powerful object-oriented programming capability.
- C++ provides extensive facilities in its standard library.
- There are many commercial libraries supporting a wide range of operating system environments and specialized applications for C++.
You’ll also find that just about any computer can be programmed in C++, so the language is pervasive across almost all computer platforms. This means that it is possible to transfer a program written in C++ from one machine to another with relatively limited effort. Of course, if this is truly going to be a straightforward process, you need to have in mind when you write the program that you intend to run it on a different machine.
The ANSI/ISO Standard for C++
The international standard for C++ is defined by the document ISO/IEC 14882, which is published by the American National Standards Institute (ANSI). You can get a copy of this standard if you wish, but remember, the standard is intended for use by compiler writers, not by students of the language. If that hasn’t discouraged you, you can download a copy for a relatively reasonable fee from.
Standardization of a language is fundamental when you want to transfer a program written for one type of computer to another. The establishment of a standard makes possible a consistent implementation of the language across a variety of machines. A full set of standard facilities across all conforming programming systems means that you’ll always know exactly what you’re going to get. The ANSI standard for C++ defines not only the language, but also the standard library. Using the ANSI standard for C++ makes the migration of applications between different machines easier and eases the problems of maintaining applications that run in more than one environment.
Another benefit of the ANSI standard for C++ is that it standardizes what you need to learn in order to program in C++ in any environment. The existence of the standard itself forces conformance over time, because it provides the only definitive reference for what a C++ compiler and library should provide. It removes the license to be “flexible” that compiler writers have had in the absence of an agreed standard, so when you buy a C++ compiler that conforms to the ANSI standard, you know what language and standard library capabilities you’re going to get.
A Simple C++ Program
Let’s take a look at a very simple C++ program and find out what its constituents are. You don’t need to enter this code right now; it’s just here so that you can get a feel for what goes into making up a program. I don’t go into all the details at the moment either, as everything that appears here will be explored at length in later chapters. Figure 1-2 illustrates a simple C++ program.
Figure 1-2. A simple C++ program
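The figure itself isn't reproduced in this version of the text, but judging by the description that follows (and by Program 1.1 later in the chapter), the program it shows is along these lines:

// A simple C++ program (a sketch of the listing in Figure 1-2)
#include <iostream>                                          // Makes the standard input/output facilities available
using namespace std;                                         // Allows names from the std namespace to be used unqualified

int main() {                                                 // Execution always starts with main()
  cout << "The best place to start is at the beginning";    // Send the message to the standard output stream
  return 0;                                                  // Return zero to the operating system: normal end
}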
The program shown in Figure 1-2 displays the following message:
============================================================
The best place to start is at the beginning
============================================================
This isn’t a very useful program, but it serves to demonstrate a few points. The program consists of a single function, main(). A function is a self-contained block of code that’s referenced by a name, main in this case. There may be a lot of other code in a program, but every C++ application consists of at least the function main(). There can be only one function called main() within a program, and execution of a C++ program always starts with the first statement in main().
The first line of the function is
int main()
which identifies that this is the start of a function with the name main. The int at the beginning indicates that this function will return an integer value when it finishes executing. Because it’s the function main(), the value will be received by the operating system that calls it in the first place.
This function main() contains two executable statements, each on a separate line:
cout << "The best place to start is at the beginning";
return 0;
These two statements are executed in sequence. In general, the statements in a function are always executed sequentially, unless there’s a statement that specifically alters the sequence of execution. You’ll see what sorts of statements can do that in Chapter 4.
In C++, input and output are preferably performed using streams. If you want to output something from a program, you put it into an output stream, and when you want something to be input, you get it from an input stream. A stream is thus an abstract representation of a source of data, or a data sink. When your program executes, each stream is tied to a specific device that is the source of data in the case of an input stream and the destination for data in the case of an output stream. The advantage of having an abstract representation of a source or sink for data is that the programming is the same regardless of what the stream actually represents. You can read data from a disk file in essentially the same way as you read from the keyboard, for instance. The standard output and input streams in C++ are called cout and cin, and by default they correspond to your computer’s screen and keyboard, respectively.
The first line of code in main() outputs the character string “The best place to start is at the beginning” to your screen by placing it in the output stream, cout, using the insertion operator, <<. When we come to write programs that involve input, you’ll see its partner, the extraction operator, >>.
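As a small sketch of the two operators in use (this isn't one of the chapter's listings, and the variable name is invented), you might read a whole number from the keyboard and write it back out like this:

#include <iostream>
using namespace std;

int main() {
  int age = 0;                              // A variable to receive the input
  cout << "Enter your age in years: ";      // The insertion operator sends data to the output stream
  cin >> age;                               // The extraction operator gets data from the input stream
  cout << "You gave your age as " << age;   // Insertions can be chained to output several items
  return 0;
}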
A header contains code defining a set of standard facilities that you can include in a program source file when required. The facilities provided by the C++ standard library are stored in headers, but headers aren’t exclusively for that. You’ll create your own header files containing your own code. The name cout referred to in this program is defined in the header iostream. This is a standard header that provides the definitions necessary for you to use the standard input and output facilities in C++. If your program didn’t include the following line:
#include <iostream>
then it wouldn’t compile, because the <iostream> header provides the definitions that the compiler needs in order to recognize and use cout.
TIP Note that there are no spaces between the angled brackets and the standard header name. With many compilers, spaces are significant between the two angled brackets, < and >; if you insert any spaces here, the program may not compile.
The second and final statement in the body of the function is
return 0;
This ends the program and returns control to your operating system. It also returns the value zero to the operating system. Other values can be returned to indicate different end conditions for the program and can be used by the operating system to determine if the program executed successfully. Typically, zero indicates a normal end to a program, and any nonzero value indicates an abnormal end. However, whether or not a nonzero return value can be acted upon will depend on the operating system concerned.

Names
Lots of things in a C++ program have names that are used to refer to them. Such names are also referred to as identifiers. There are five kinds of things that you’ll give names to in your C++ programs:
- Functions are self-contained, named blocks of executable code. Chapter 8 goes into detail on how to define these.
- Variables are named areas in memory that you use to store items of data. You’ll start with these in Chapter 2.
- Types are names for the kinds of data that you can store. The type int, for example, is used for integers (whole numbers). You’ll see something on these in Chapter 2 and more in subsequent chapters, particularly Chapter 11.
- Labels provide a means of referring to a particular statement. These are rarely used, but you’ll look at them in action in Chapter 4.
- Namespaces are a way of gathering a set of named items in your program under a single name. If that sounds confusing, don’t worry—I’ll say more about them shortly, and you’ll look at them again in Chapter 10.
In C++, you can construct a name using the upper- and lowercase Latin letters a to z and A to Z, the underscore character (_), and the digits 0 to 9. The ANSI standard for C++ also permits Universal Character Set (UCS) characters to be included in a name for reasons I cover in a moment.
The ANSI standard allows names to be of any length, but typically a particular compiler will impose some sort of length limit. However, this is normally sufficiently large (several thousand characters) that it doesn’t represent a serious constraint.
Whitespace is the term used in C++ to refer to spaces, vertical and horizontal tabs, and newline and form-feed characters. You must not put whitespace characters in the middle of a name. If you do, the single name won’t be seen by the compiler as such; it will be seen as two or more names, and therefore it won’t be processed correctly. Another restriction is that names may not begin with a digit.
Here are some examples of legal names:
value2 Mephistopheles BettyMay Earth_Weight PI
Here are some names that aren’t legal:
8Ball Mary-Ann Betty+May Earth-Weight 2PI
CAUTION Note that names that contain a double underscore (__) or start with an underscore followed by an uppercase letter are reserved for use by the C++ standard library, so you shouldn’t choose such names for use in your programs. Your compiler probably won’t check for this, so you’ll only find out that you have a conflicting name when things go wrong!
{mospagebreak title=Names Using Extended Character Sets}
As mentioned in the previous section, the C++ standard permits UCS characters to be included in a name. You can write them in the form \Udddddddd or the form \udddd, where d is a hexadecimal digit in the UCS code for the character.
No one really expects anyone to include characters in names like this, though. Embedding U followed by a bunch of hexadecimal digits in a name would hardly improve the readability of the code. The purpose of allowing UCS characters in a name is to allow compiler writers to accommodate names written in characters for national languages other than English, such as Greek, or Korean, or Russian, for instance.
The C++ standard allows a compiler to be implemented so that any characters can be used to specify a name. Any compiler that takes advantage of this must then translate the characters that aren’t in the basic set to the standard representation for UCS characters noted previously, before the compilation of the code begins. In the source code someone may write the name Книга, which will be meaningful to a Russian programmer. Internally the compiler will convert this name to one of the two standardized representations of UCS characters before it compiles the code, perhaps as \u041A\u043D\u0438\u0433\u0430. Indeed, regardless of the character set used to write names in the source, it will always end up as characters from the basic set that you saw initially, plus possibly UCS characters as \Udddddddd or as \udddd.
You must always use the basic set of characters a to z, A to Z, 0 to 9, and the underscore in a name as explicit characters. Using the UCS codes for these characters in a name is illegal. The reason for this is that the standard doesn’t specify the encoding to be used for the basic set of characters, so this is left to the compiler. Consequently, if you were to specify any of the basic characters by its UCS code, it’s possible it will be different from the encoding used by the compiler when the character is specified explicitly with the obvious chaotic result.
Note that it isn’t a requirement that any given compiler must support the use of explicit national language characters in specifying names. If it does, they must be mapped into the UCS form before processing. A compiler that conforms to the standard must in any event support names in the basic character set plus the use of UCS characters in the somewhat unfriendly forms noted at the beginning of this section.

Namespaces
I’m sure that you noticed there was a line in the simple C++ program that I didn’t explain in the preceding discussion. To understand it, you need to know what name-spaces are, and for those to make any sense, I had to first to tell you about names. As a reminder, the line in question was
using namespace std;
Within the rules for identifiers that I discussed in the previous section, you can choose any names that you like for things in your programs. This obviously means that you might choose a name for something that’s already used for some other purpose within the standard library. Equally, if two or more programmers are working concurrently on parts of a larger project, there is potential for name collisions. Clearly, using the same name for two or more different things is bound to cause confusion, and namespaces are there to alleviate this problem.
A namespace name is a bit like a family name or a surname. Each individual within a family has his or her own name, and within most families each family member has a unique name. In the Smith family, for instance, there may be Jack, Jill, Jean, and Jonah, and among family members they’ll refer to each other using these names. However, members of other families may have the same names as members of the Smith family. Within the Jones family, for instance, there might be John, Jean, Jeremiah, and Jonah. When Jeremiah Jones refers to Jean, it’s clear that he means Jean Jones. If he wants to refer to Jean in the Smith family, he’ll use the fully qualified name: Jean Smith. If you’re not a member of either family, you can only be sure that people know whom you’re talking about if you use the full names of individuals, such as Jack Smith or Jonah Jones.
This is pretty much how namespaces work—a namespace name is analogous to a surname. Inside a namespace, you can use the individual names of things within the namespace. From outside the namespace, you can only refer to something within the namespace by a combination of the name of the particular entity and the namespace name. The purpose of a namespace is to provide a mechanism that minimizes the possibility of accidentally duplicating names in various parts of a large program and thereby creating confusion. In general, there may be several different namespaces within a program.
The entities in the C++ standard library are all defined within a namespace called std, so the names of all the entities within the standard libraries are qualified with std. The full name of cout, therefore, is actually std::cout. Those two colons together have a very fancy title: the scope resolution operator. I’ll have more to say about it later on. In this example, it serves to separate the namespace name, std, from the name of the stream, cout.
The using directive at the beginning of the simple C++ program indicates that you want to refer to any of the things defined within the namespace called std without specifying the namespace name each time. Continuing this analogy, it makes your program file a sort of honorary member of the std family, so you can refer to everyone by his or her first name alone. One effect of this is to obviate the need to refer to cout as std::cout, making the program code a little simpler. If you were to omit the using directive, you would have to write the output statement as
std::cout << “The best place to start is at the
beginning”;
Of course, although this does make the code look a little more complicated, it’s really a much safer and therefore better way to write the code. The effect of the using directive is to allow you to refer to any name in the namespace without qualifying it with the namespace name. This implies that you could do so accidentally. By explicitly qualifying cout with its namespace name, you avoid the need to make all the names in the namespace accessible in your program. This means that there’s no possibility of clashes between names that you choose for things you might define in your program and names that are defined in the namespace.
The program code for this example is therefore somewhat better if you write it as follows:
// Program 1.1 A simple C++ program
#include <iostream>
int main() {
  std::cout << "The best place to start is at the beginning";
  return 0;
}
However, although this is much safer code, if you had a lot more references to std::cout in the code it might begin to look very cluttered. You also have the irritation of repeatedly typing std:: in many places throughout the program. In this case, you can use a form of the using directive that just introduces a single name from a namespace into your program source file. For instance, you can introduce the name cout from the std namespace into your program file with the following directive:
using std::cout;
With this directive, you can get the best of both worlds. You can use the name cout from the std namespace in its unqualified form, and you protect yourself from accidental conflicts in your code with other names in the std namespace because they’re simply not accessible without using the std qualifier. The program now looks like this:
// Program 1.1A A simple C++ program
#include <iostream>
using std::cout;

int main() {
  cout << "The best place to start is at the beginning";
  return 0;
}
Of course, you can introduce just some names from the std namespace into your program file by means of a using directive for each name. You might do this for names that you refer to frequently in your code. You can then access other names from std that you refer to relatively rarely by their fully qualified names.
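Putting those ideas together, a sketch (not a listing from the book) might introduce only the names it refers to frequently and leave an occasional name fully qualified. Here std::cerr, the standard error stream, is an extra name used purely for illustration:

#include <iostream>
using std::cout;                            // Used repeatedly, so introduce the name
using std::endl;                            // Likewise

int main() {
  cout << "Frequently used names can appear unqualified" << endl;
  cout << "while the rest of the std namespace stays out of reach" << endl;
  std::cerr << "A name used only once can keep its std:: prefix" << endl;
  return 0;
}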
There’s much more to namespaces and using directives than you’ve seen here. You’ll explore them in depth in Chapter 10.

Keywords
There are reserved words in C++, called keywords, that have special significance within the language. The words return and namespace that you saw earlier are examples of keywords.
You’ll see many more keywords as you progress through the book. You must ensure that the names that you choose for entities in your program aren’t the same as any of the keywords in C++. You’ll find a list of all the keywords that are used in C++ in Appendix B.
NOTE Keywords are case sensitive, as are the identifiers that you choose in your program.
C++ Statements and Statement Blocks
Statements are the basic units for specifying what your program is to do and the data elements it acts upon. Most C++ statements end with a semicolon (;). There are quite a few different sorts of statements, but perhaps the most fundamental is a statement that introduces a name into your program source file.
A statement that introduces a name into a source file is called a declaration. A declaration just introduces the name and specifies what kind of thing the name refers to, as opposed to a definition, which results in allocation of some memory to accommodate whatever the name refers to. As it happens, most declarations are also definitions.
A variable is a place in memory in which you can store an item of data. Here’s an example of a statement that declares a name for a variable, and defines and initializes the variable itself:
double result = 0.0;
This statement declares that the name result will be used to refer to a variable of type double (declaration), causes memory to be allocated to accommodate the variable (definition), and sets its initial value to 0.0 (initialization).
Here’s an example of another kind of statement called a selection statement:
if (length > 25)
boxLength = size + 2;
This statement tests the condition “Is the value of length greater than 25?” and then executes the statement on the second line if that condition is true. The statement on the second line adds 2 to the value stored in the variable size and stores the result in the variable boxLength. If the condition tested isn’t true, then the second line won’t be executed, and the program will continue on its merry way by executing whatever comes next in the program.
You can enclose several statements between a pair of curly braces, { }, in which case they’re referred to as a statement block. The body of a function is an example of a block, as you saw in the first example program where the statements in the body of the main() function appear between curly braces. A statement block is also referred to as a compound statement, because in many circumstances it can be considered as a single statement, as you’ll see when we look at C++’s decision-making capabilities in Chapter 4. In fact, wherever you can put a single statement in C++, you can equally well put a block of statements between braces. As a consequence, blocks can be placed inside other blocks—this concept is called nesting. Blocks can be nested, one within another, to any depth you need.
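As a quick sketch of nesting (reusing the variables from the earlier selection statement rather than one of the book's listings):

if (length > 25) {                  // Outer block: executed only when the condition is true
  boxLength = size + 2;
  if (boxLength > 100) {            // Inner block, nested inside the outer one
    boxLength = 100;                // Keep the value within an assumed limit
  }
}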
A statement block also has important effects on the variables that you use to store data items, but I defer discussion of this until Chapter 3, where I cover variable scope.

Code Presentation Style
The way in which you arrange your code visually can have a significant effect on how easy it is to understand. There are two basic aspects to this. First, you can use tabs and/or spaces to indent program statements in a manner that provides visual cues to their logic, and you can arrange matching braces that define program blocks in a consistent way so that the relationships between the blocks are apparent. Second, you can spread a single statement over two or more lines when that will improve the readability of your program. A particular convention for arranging matching braces and indenting statements is a presentation style.
There are a number of different presentation styles in use. The following code shows three examples of how the same code might look in three commonly used styles:
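The side-by-side listings don't survive in this version of the text, but the commonly used styles being compared are along these lines, shown here one after another with the same selection statement; the third matches the brace placement used in the listings as they appear in this chapter:

// Style 1: braces on their own lines, aligned with the controlling statement
if (length > 25)
{
  boxLength = size + 2;
}

// Style 2: braces on their own lines, indented along with the statements they enclose
if (length > 25)
  {
  boxLength = size + 2;
  }

// Style 3: opening brace at the end of the line that begins the block
if (length > 25) {
  boxLength = size + 2;
}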
In this book I have used the last of these styles for all the examples. I chose this because I think it is clear without being too extravagant on space. It doesn’t matter much which style you use as long as you are consistent.

Program Structure
Each of your C++ programs will consist of one or more files. By convention, there are two kinds of file that you can use to hold your source code: header files and source files. You use header files to contain code that describes the data types that your program needs, as well as some other sorts of declarations. These files are referred to as header files because you usually include them at the beginning (the “head”) of your other source files. Your header files are usually distinguished by having the filename extension .h, although this is not mandatory, and other extensions, such as .hxx, are used to identify header files in some systems.
Your source files, which have the filename extension .cpp, contain function definitions—the executable code for your program. These will usually refer to declarations or definitions for data types that you have defined in your own header files. The compiler will need to know about these when it compiles your code so you specify the .h files that are needed in a .cpp file by means of #include directives at the beginning of the file. An #include directive is an instruction to the compiler to insert the contents of a particular header file into your code. You’ll also need to add #include directives for any standard library header files that your code requires.
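As a sketch of the arrangement (the file names, the type name, and its members are invented for illustration), a header might describe a data type and a source file might include it alongside a standard header:

// File Box.h - describes the Box data type
class Box {
public:
  double length;
  double width;
  double height;
};

// File MyProgram.cpp - executable code that uses the Box type
#include <iostream>   // Standard header, needed for output
#include "Box.h"      // Your own header; the preprocessor inserts its contents here

int main() {
  Box crate;                                                  // Create a Box object described in Box.h
  crate.length = 2.0;
  crate.width = 1.5;
  crate.height = 1.0;
  std::cout << "The crate is " << crate.height << " units tall";
  return 0;
}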
Figure 1-3 shows a program in which the source code is contained in two .cpp files and three header files. The first .cpp file uses the information from the first two header files, and the second .cpp file requires the contents of the last two header files. You’ll learn more about the #include directives that do this in Chapter 10.
Figure 1-3. Source files in a C++ program
A number of standard headers are supplied with your compiler and contain declarations that you need in order to use the standard library facilities. They include, for example, declarations for the available standard library functions. The first .cpp file in Figure 1-3 includes the standard header <iostream>.
NOTE Appendix C provides details on the ANSI/ISO standard library headers.
Your compiler system may have a whole range of other header files, providing the definitions necessary to use operating system functions, or other goodies to save you programming effort. This example shows just a few header files in use, but in most serious C++ applications many more will be involved.

Program Functions and Execution
As already noted, a C++ program consists of at least one function that will be called main(), but typically a program consists of many other functions—some that you will have written and others from the standard library. Your program functions will be stored in a number of source files that will typically have filenames with the extension .cpp, although other extensions such as .cxx and .cc are common.
Figure 1-4 shows an example of the sequence of execution in a program that consists of several functions. Execution of main() starts when it’s invoked by the operating system. All the other functions in your program are invoked by main() or by some other function in the set. You invoke a function by calling it. When you call a function, you can pass items of data to it for use while it’s executing. The data items that you want to pass to a function are placed between parentheses following the function name in the call operation. When a function finishes executing, execution control returns to the point at which it was called in the calling function.
Figure 1-4. How program functions execute
A function can also return a value to the calling point when it finishes executing. The value returned can be stored for use later or it can participate in a calculation of some kind—in an arithmetic expression, for example. You’ll have to wait until Chapter 8 to learn how you can create your own functions, but you’ll use functions from the standard library early in the next chapter.
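As a sketch of that flow (not one of the book's examples), here is main() calling another function, passing it two values between the parentheses, and storing the value it returns:

#include <iostream>

// A self-contained, named block of code: it returns the larger of the two values passed to it
int larger(int first, int second) {
  if (first > second)
    return first;                    // Control returns to the calling point, carrying this value
  return second;
}

int main() {
  int result = larger(42, 17);       // Call the function; 42 and 17 are the data items passed to it
  std::cout << "The larger value is " << result;
  return 0;                          // Execution of the program ends when main() returns
}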
Creating an Executable from Your Source Files
Creating a program module that you can execute from your C++ source code is essentially a two-step process. In the first step, your compiler converts each .cpp file to an object file that contains the machine code equivalent of the source file contents. In the second step, the linker combines the object files produced by the compiler into a file containing the complete executable program.
Figure 1-5 shows three source files being compiled to produce three corresponding object files. The filename extension that’s used to identify object files varies between different machine environments, so it isn’t shown here. The source files that make up your program may be compiled independently in separate compiler runs, or most compilers will allow you to compile them in a single run. Either way, the compiler treats each source file as a separate entity and produces one object file for each .cpp file. The link step then combines the object files for a program, along with any library functions that are necessary, into a single executable file.
Figure 1-5. The compile and link process
In practice, compilation is an iterative process, as you’re almost certain to have made typographical and other errors in the source code. Once you’ve eliminated these from each source file, you can progress to the link step, where you may find that yet more errors surface! Even when the link step produces an executable module, your program may still contain logical errors; that is, it doesn’t produce the results you expect. To fix these, you must go back and modify the source code and start trying to get it to compile once more. You continue this process until your program works as you think it should. As soon as you declare to the world at large that your program works, someone will discover a number of obvious errors that you should have found. It hasn’t been proven beyond doubt so far as I know, but it’s widely believed that if a program is sufficiently large, it will always contain errors. It’s best not to dwell on this thought when flying.
Let’s take a closer look at what happens during the two basic steps, compiling and linking, because there are some interesting things going on under the covers.

Compiling
The compilation process for a source file has two main stages, as illustrated in Figure 1-6, but the transition between them is automatic. The first stage is the preprocessing phase, which is carried out before the compilation phase proper. The preprocessing phase modifies the contents of the source file according to the preprocessing directives that you have placed in the file. The #include directive, which adds the contents of a header file to a .cpp file, is an example of a preprocessing directive, but there are many others (as you’ll see in Chapter 10).
Figure 1-6. Details of the compilation process
This facility for modifying the source file before it is compiled provides you with a lot of flexibility in accommodating the constraints of different computers and operating system environments. The code you need in one environment may be different from that required for another because of variations in the available hardware or the operating system. In many situations, you can put the code for several environments in the same file and arrange for the code to be tailored to the current environment during the preprocessor phase.
Although preprocessing is shown in Figure 1-6 as a distinct operation, you don’t execute it independently of the compiler. Invoking the compiler will perform preprocessing automatically, before compiling your code.

Linking
Although the output from the compiler for a given source file is machine code, it’s quite a long way from being executable. For one thing, there will be no connection established between one object file and another. The object file corresponding to a particular source file will contain references to functions or other named items that are defined in other source files, and these will still be unresolved. Similarly, links to library functions will not yet be established; indeed, the code for these functions will not yet be part of the file. Dealing with all these things is the job of the linker (sometimes called the linkage editor).
As Figure 1-7 illustrates, the linker combines the machine code from all of the object files and resolves any cross-references between them. It also integrates the code for any library functions that the object modules use. This is actually a simplified representation of what the linker does, as we’re assuming that all the links between modules are established statically within the executable module. It’s also possible for some links to be dynamic; that is, they’re established only when the program executes.
Figure 1-7. Details of the linking process
As I said, the linker establishes links to functions statically—that is, before the program executes. This is done for all the functions contained in the source files that make up the program. Functions linked to dynamically—that is, during program execution—are compiled and linked separately to create a different kind of executable module called a dynamic link library (DLL). A link to a function in a DLL is established when the executable module for your program calls the function, and not before.
DLLs present several important advantages. A primary one is that the functions in the DLL can be shared between several programs that are executing concurrently. This saves using up memory with duplicates of the same function when more than one program is executing that requires the services provided by the functions in the DLL. Another advantage is that the DLL won’t be loaded into memory until one of the functions it contains is called. This implies that if you don’t use a function from a given DLL, it won’t occupy space in memory. DLLs are a system capability that ties in closely with your operating system, so I don’t discuss them further in this book.
C++ Source Characters
You write C++ statements using a basic source character set. This is simply the set of characters that you’re allowed to use explicitly in a C++ source file. Obviously, the character set that you can use to define a name is going to be a subset of this. Of course, the basic source character set in no way constrains the character data that you work with in your code. Your program can create strings consisting of characters outside this set in various ways, as you’ll see. The basic source character set consists of the following characters:
- The letters a to z and A to Z
- The digits 0 to 9
- The control characters representing horizontal tab, vertical tab, form-feed, and newline
- The characters _ { } [ ] # ( ) < > % : ; . ? * + - / ^ & | ~ ! = , \ " '
This is easy and straightforward. You have 96 characters that you can use, and it’s likely that these will accommodate your needs most of the time.
This definition of the characters that you can use in C++ does not say how the characters are encoded. Your particular compiler will determine how the characters that you use to write your C++ source code are represented in the computer. On a PC, these characters will typically be represented in the machine by an American Standard Code for Information Interchange (ASCII) code such as ISO Latin-1, but other ways of encoding characters may be used.
Most of the time the basic source character set will be adequate, but occasionally you’ll need characters that aren’t included in the basic set. You saw earlier that you can include UCS characters in a name. You can also include UCS characters in other parts of your program, such as when you specify character data. In the next section I elaborate a little on what UCS is all about.
The Universal Character Set
UCS is specified by the standard ISO/IEC 10646, and it defines codes for characters used in all the national languages that are current and many more besides. The ISO/IEC 10646 standard defines several character-encoding forms. The simplest is UCS-2, which represents characters as 16-bit codes, so it can accommodate 65,536 different character codes that can be written as four hexadecimal digits, dddd. This encoding is described as the basic multilingual plane because it accommodates all of the languages in current use, and the likelihood of you ever wanting more than this is remote. UCS-4 is another encoding within the ISO/IEC 10646 standard that represents characters as 32-bit codes that you can express as eight hexadecimal digits, dddddddd. With more than 4 billion different codes, UCS-4 provides the capacity for accommodating all the character sets that you might ever need.
This isn’t all there is to UCS, though. For example, there’s another 16-bit encoding called UTF-16 (UTF stands for Unicode Transformation Format) that is different from UCS-2 in that it accommodates more than 65,535 characters by encoding characters outside of the first 65,536 by what are referred to as surrogate pairs of 16-bit code values. There are other character encodings within UCS too. Generally, a given character will have a code with the same value in any UCS encoding that you choose. The values of codes in US_ASCII are the same as those in UCS character encodings.
Regardless of whether a compiler supports an extended character set for writing source statements, you can include characters from the UCS in your source code by specifying them in the form of a hexadecimal representation of their codes, either as \udddd or \Udddddddd, where d is a hexadecimal digit. Note the lowercase u in the first case and the uppercase U in the second. However, you must not specify any of the characters in the basic source character set in this way. This is because the codes for these characters will be determined by the compiler, and they may not be consistent with the UCS codes.
If your compiler supports an extended character set with characters outside the base source character set, you’ll be able to use these characters in your source code and the compiler will translate the characters to the internal representation before compilation begins.
NOTE The character codes defined by the UCS standard are identical to codes defined by Unicode, so Unicode is essentially UCS by another name. If you are keen to explore the delights of UCS and Unicode in detail, is a good place to start.

Trigraph Sequences
You’re unlikely to see this in use very often—if ever—but the C++ standard allows you to specify certain characters as trigraph sequences. A trigraph sequence is a sequence of three characters that’s used to identify another character. This was necessary way back in the dark ages of computing to accommodate characters that were missing from some keyboards. Table 1-1 shows the characters that may be specified in this way in C++.
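Table 1-1 isn't reproduced in this version of the text, but the nine trigraph sequences defined by the standard are ??= for #, ??/ for \, ??' for ^, ??( for [, ??) for ], ??! for |, ??< for {, ??> for }, and ??- for ~. With a compiler that still performs trigraph replacement, a sketch such as this would compile:

??=include <iostream>                // ??= is replaced by #, so this line is #include <iostream>

int main() ??<                       // ??< and ??> are replaced by { and }
  std::cout << "Trigraphs in action";
  return 0;
??>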
The compiler will replace all trigraph sequences with their equivalent characters before any other processing of the source code.

Escape Sequences
When you want to use character constants in a program, certain characters can be problematic. A character constant is a data item that your program will use in some way, and it can be either a single character or a character string such as the one in the earlier simple example. Obviously, you can’t enter characters such as newline or tab directly as character constants, as they’ll just do what they’re supposed to do: go to a new line or tab to the next tab position in your source code file. What you want in a character constant is the appropriate code for the character.
You can enter control characters as constants by means of an escape sequence. An escape sequence is an indirect way of specifying a character, and it always begins with a backslash (\). Table 1-2 shows the escape sequences that represent control characters.
There are some other characters that are a problem to represent directly. Clearly, the backslash character itself is difficult, because it signals the start of an escape sequence, and there are others with special significance too. Table 1-3 shows the “problem” characters you can specify with an escape sequence.
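Tables 1-2 and 1-3 aren't reproduced in this version of the text. The control-character escape sequences are \n (newline), \t (horizontal tab), \v (vertical tab), \b (backspace), \r (carriage return), \f (form feed), and \a (alert, which usually sounds a beep); the "problem" characters are written as \\ (backslash), \" (double quote), \' (single quote), and \? (question mark). A one-line sketch:

std::cout << "She said \"hello\",\tthen left.\n";   // Embedded double quotes, a tab, and a newline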
Because the backslash signals the start of an escape sequence, the only way to enter a backslash as a character constant is by using two successive backslashes (\\).
Escape sequences also provide a general way of representing characters such as those in languages other than the one your keyboard supports, because you can use a hexadecimal (base 16) or octal (base 8) number after the backslash to specify the code for a character. Because you’re using a numeric code, you can specify any character in this way. In C++, hexadecimal numbers start with x or X, so \x99A and \XE3 are examples of escape sequences in this format.
You can also specify a character by using up to three octal digits after the backslash—\165, for example. The absence of x or X determines that the code will be interpreted as an octal number.
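For instance (a sketch rather than one of the chapter's listings, and assuming the ASCII encoding, in which capital A has code 0x41, or 101 in octal):

std::cout << "\x41";    // Hexadecimal escape: the character whose code is 0x41 ('A' in ASCII)
std::cout << "\101";    // Octal escape: the character whose code is octal 101 (also 'A' in ASCII)
std::cout << "\n";      // Control-character escape: newline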
Try It Out: Using Escape Sequences
You can produce an example of a program that uses escape sequences to specify a message to be displayed on the screen. To see the results, you’ll need to enter, compile, link, and execute the following program.
As I explained in the Introduction, exactly how you perform these steps will depend on your compiler, and you’ll need to consult your compiler’s documentation for more information. If you look up “edit”, “compile”, and “link” (and, with some compilers, “build”), you should be able to find out what you need to do.
// Program 1.2 Using escape sequences
#include <iostream>
using std::cout;

int main() {
  cout << "\n\"Least said\n\t\tsoonest mended.\"\n\a";
  return 0;
}
When you do manage to compile, link, and run this program, you should see the following output displayed:
============================================================
"Least said
                soonest mended."
============================================================
You should also hear a beep or some equivalent noise from whatever sound output facility your computer has.
HOW IT WORKS
The output you get is determined by what’s between the outermost double quotes in the statement
cout << "\n\"Least said\n\t\tsoonest mended.\"\n\a";

In principle, everything between the outer double quotes in the preceding statement gets sent to cout. A string of characters between a pair of double quotes is called a string literal. The double quote characters just identify the beginning and end of the string literal; they aren’t part of the string. I said “in principle” because any escape sequence in the string literal would have been converted by the compiler to the character it represents, so the character will be sent to cout, not the escape sequence itself. A backslash in a string literal always indicates the start of an escape sequence, so the first character that’s sent to cout is a newline character. This positions the screen cursor at the beginning of the next line.

The next character in the string is specified by another escape sequence, \", so a double quote will be sent to cout and displayed on the screen, followed by the characters Least said. Next is another newline character corresponding to \n, so the cursor will move to the beginning of the next line. You then send two tab characters to cout with \t\t, so the cursor will be moved two tab positions to the right. The characters soonest mended. will then be displayed from that point on, followed by another double quote from the escape sequence \". Lastly, you have another newline character, which will move the cursor to the start of the next line, followed by the character equivalent of the \a escape sequence that will cause the beep to sound.

The double quote characters that are interior to the string aren’t interpreted as marking the end of the string literal because each of them is preceded by a backslash and is therefore recognized as an escape sequence. If you didn’t have the escape sequence \" available, you would have no way of outputting a double quote because it would otherwise be interpreted as indicating the end of the string.
The name endl is defined in the <iostream> header. Sending endl to an output stream writes a newline character to the stream and then flushes the output buffer, so you can often use it in place of the \n escape sequence.
CAUTION Be aware that the final character of endl is the letter l, not the number 1. It can sometimes be difficult to tell the two apart.
Using endl, the statement in the preceding code to output the string could be written as follows:
cout << endl
     << "\"Least said"
     << endl
     << "\t\tsoonest mended.\"\a"
     << endl;
This statement sends five separate things in sequence to cout: endl, "\"Least said", endl, "\t\tsoonest mended.\"\a", and endl. This will produce exactly the same output as the original statement. Of course, for this statement to compile as written, you would need to add another using directive at the beginning of the program:
using std::endl;
You don’t have to choose between using either endl or the escape sequence for newline. They aren’t mutually exclusive, so you can mix them to suit yourself. For example, you could produce the same result as the original again with this statement:
cout << endl
     << "\"Least said\n\t\tsoonest mended.\"\a"
     << endl;
Here you’ve just used endl for the first and last newline characters. The one in the middle is still produced by an escape sequence. Of course, each instance of endl in the output will result in the output buffer being flushed after writing a newline character to the stream.
Whitespace in Statements
As you learned earlier, whitespace is the term used in C++ to describe spaces, horizontal and vertical tabs, newline, and form-feed characters. In many instances, whitespace separates one part of a statement from another and enables the compiler to identify where one element in a statement ends and the next element begins. For example, look at the following line of code:
int fruit;
This statement involves int, which is a type name, and fruit, which is the name of a variable. There must be at least one whitespace character (usually a space) between int and fruit for the compiler to be able to distinguish them. This is because intfruit would be a perfectly acceptable name for a variable or indeed anything else, and the compiler would interpret it as such.
On the other hand, consider this statement:
fruit = apples + oranges;
No whitespace characters are necessary between fruit and =, or between = and apples, although you’re free to include some if you wish. This is because the equals sign (=) isn’t alphabetic or numeric, so the compiler can separate it from its surroundings. Similarly, no whitespace characters are necessary on either side of the plus sign (+). In fact, you’re free to include as little or as much whitespace as you like, so you could write the previous statement as follows:
fruit
=
apples
+
oranges;
If you do this, it’s unlikely you’ll be congratulated for good programming style, but the compiler won’t mind.
Apart from its use as a separator between elements in a statement, or when it appears in a string between quotes, the compiler ignores whitespace. You can, therefore, include as much whitespace as you like to make your program more readable. In some programming languages, the end of a statement is at the end of the line, but in C++ the end of a statement is wherever the semicolon occurs. This enables you to spread a statement over several lines if you wish, so you can write a statement like this:
std::cout << std::endl << "\"Least said" << std::endl
          << "\t\tsoonest mended.\"\a" << std::endl;
or like this:
std::cout << std::endl
          << "\"Least said"
          << std::endl
          << "\t\tsoonest mended.\"\a"
          << std::endl;
Documenting Your Programs

Documenting your program code is extremely important. Code that seems crystal clear when you write it can look extraordinarily obscure when you’ve been away from it for a month. You can document your code using comments, of which there are two sorts in C++: single-line comments and multiline comments (that is, comments that can span several lines).
You begin a single-line comment with a double slash (//), for example
// Program to forecast stock market prices
The compiler will ignore everything on the line following the double slash, but that doesn’t mean the comment has to fill the whole line. You can use this style of comment to explain a statement:
length = shrink(length, temperature); // Compensate for wash shrinkage
You can also temporarily remove a line of code from your program just by adding a double slash to the beginning of the line:
// length = shrink(length, temperature); // Compensate for wash shrinkage
This converts the statement to a comment, which is something you might want to do during the testing of a program, for example. Everything from the first // in a line to the end of the line is ignored, including any further occurrences of //.
The multiline comment is sometimes used for writing more verbose, general descriptive material—explaining the algorithm used within a function, for example. Such a comment begins with /*, ends with */, and everything between these two is ignored. This enables you to embellish multiline comments to highlight them, for example
/**************************************************
* This function predicts future stock prices      *
* using advanced tea leaf simulation techniques.  *
**************************************************/
You can also use this comment style for temporarily disabling a block of code. Just put /* at the beginning of the block and */ at the end. However, you must take particular care not to nest /* … */ comments; you’ll cause error messages from your compiler if you do. This is because the closing */ of the inner nested comment will match the opening /* of the outer comment:
// You must not nest multiline comments
/* This starts an outer comment
/* This is an inner comment, but the start will not be recognized
because of the outer comment.
Instead, the end of the inner comment will be interpreted as the end
of the outer comment. */
This will cause the compiler to try to compile this part of the
outer comment as C++ code. */
The last part of the outer comment is left “dangling,” and the compiler will try to compile it, which will inevitably result in failure. For this reason, the // form of comment is the most widely used in C++ programs.
NOTE You may also hear multiline comments being described as “C-style” comments. This is because the /* … */ syntax is the only one available for creating comments in the C language.

The Standard Library
The standard library contains a substantial number of functions and other things that support, augment, and extend the basic language capabilities of C++. The contents of the standard library are just as much a part of C++ as the syntax and semantics of the language itself. The standard for C++ defines both, and so every compiler that conforms to the standard will supply the complete standard library.
Bearing this in mind, the scope of the standard library is extraordinary. You get a vast range of capability, from essential elements such as basic language support, input and output functions, and exception handling (an exception is an unusual occurrence during program execution—often an error of some kind) to utility functions, mathematical routines, and a wide range of prewritten and tested facilities that you can use to store and manage data during execution of your program.
To use C++ most effectively, you should make sure that you have a good familiarity with the contents of the standard library. You’ll be introduced to many of the capabilities of the standard library as you learn the C++ language in this book, but the degree of coverage within the book will inevitably be incomplete. It would take another book comparable with the size of this one to cover the capability and use of the standard library comprehensively.
The definitions and declarations necessary to use standard library facilities appear in the standard headers touched upon earlier. There are a few cases in which the standard headers will be included in your program files by default, but in most instances you must add an #include directive for the appropriate header for the library facilities that you want to use. You’ll find a comprehensive list of the standard headers in Appendix C, with a brief description of what sort of functionality each one supports.
Almost everything in the C++ standard library is defined within the namespace std. This means that all the names that you’ll use from the library are prefixed with std. As you saw at the beginning of the chapter, when you reference something from the standard library, you can prefix the name with std, as in the following statement:
std::cout << "The best place to start is at the beginning";
Alternatively, you can put a using directive at the beginning of your source file:
using std::cout;
This allows you to use the name cout without its std prefix so you can write that statement as follows:
cout << "The best place to start is at the beginning";
You also saw earlier that you have a blanket capability for introducing names from the std namespace into a program file:
using namespace std;
This allows you to omit the std prefix for any standard library names that are defined in the headers you’ve included in your program. However, it has the serious disadvantage that it allows potential clashes between names you’ve defined and identical names in the standard library headers that you’ve included.
In this book I always include the std namespace prefix where necessary in code fragments. In complete working programs, you’ll generally add using statements for standard library names that you use repeatedly in code. Names that you use once or twice you’ll just qualify with the namespace name.

Programming in C++
Because C++ inherits and enhances the power and flexibility of the original C language, you have a comprehensive capability for handling time-critical, low-level programming tasks and for dealing with problems for which a traditional procedural approach may be preferable. The major strengths of C++, though, are its powerful and extensive object-oriented features. These provide the potential for writing programs that are less error-prone, less time-consuming to maintain, simpler to extend, and easier to understand than their equivalent procedural solutions.
There are fundamental differences between these two programming methodologies, so let’s contrast them to highlight just how they’re different and see some of the reasons why an object-oriented approach can be so attractive.
Procedural and Object-Oriented Programming
Historically, procedural programming is the way almost all programs have been written. To create a procedural programming solution to a problem, you focus on the process that your program must implement to solve the problem. A rough outline of what you do, once the requirements have been defined precisely, is as follows:
- You create a clear, high-level definition of the overall process that your program will implement.
- You segment the overall process into workable units of computation that are, as much as possible, self-contained. These will usually correspond to functions.
- You break down the logic and the work that each unit of computation is to do into a detailed sequence of actions. This is likely to be down to a level corresponding to programming language statements.
- You code the functions in terms of processing basic types of data: numerical data, single characters, and character strings.
Apart from the common requirement of starting out with a clear specification of what the problem is, the object-oriented approach to solving the same problem is quite different:
- From the problem specification, you determine what types of objects the problem is concerned with. For example, if your program deals with baseball players, you’re likely to identify BaseballPlayer as one of the types of data your program will work with. If your program is an accounting package, you may well want to define objects of type Account and type Transaction. You also identify the set of operations that the program will need to carry out on each type of object. This will result in a set of application-specific data types that you will use in writing your program.
- You produce a detailed design for each of the new data types that your problem requires, including the operations that can be carried out with each object type.
- You express the logic of the program in terms of the new data types you’ve defined and the kinds of operations they allow.
The program code for an object-oriented solution to a problem will be completely unlike that for a procedural solution and almost certainly easier to understand. It will certainly be a lot easier to maintain. The amount of design time required for an object-oriented solution tends to be greater than for a procedural solution. However, the coding and testing phase of an object-oriented program tends to be shorter and less troublesome, so the overall development time is likely to be roughly the same in either case.
Let’s try to get an inkling of what an objected-oriented approach implies. Suppose that you’re implementing a program that deals with boxes of various kinds. A feasible requirement of such a program would be to package several smaller boxes inside another, larger box. In a procedural program, you would need to store the length, width, and height of each box in a separate group of variables. The dimensions of a new box that could contain several other boxes would need to be calculated explicitly in terms of the dimensions of each of the contained boxes, according to whatever rules you had defined for packaging a set of boxes.
An object-oriented solution might involve first defining a Box data type. This would enable you to create variables that can reference objects of type Box and, of course, create Box objects. You could then define an operation that would add two Box objects together and produce a new Box object that could contain the first two. Using this operation, you could write statements like this:
bigBox = box1 + box2 + box3;
In this context the + operation means much more than simple addition. The + operator applied to numerical values will work exactly as before, but for Box objects it has a special meaning. Each of the variables in this statement is of type Box. The preceding statement would create a new Box object big enough to contain box1, as well as box2 and box3.
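As a sketch of how such a Box type might eventually look (class definitions and operator overloading are covered much later in the book, and the packaging rule and member names here are invented for illustration):

class Box {
public:
  double length;
  double width;
  double height;

  // One possible packaging rule: the result is big enough to hold both boxes,
  // here by stacking one on top of the other
  Box operator+(const Box& other) const {
    Box combined;
    combined.length = (length > other.length) ? length : other.length;
    combined.width  = (width > other.width) ? width : other.width;
    combined.height = height + other.height;
    return combined;
  }
};

// With a definition along these lines, the statement bigBox = box1 + box2 + box3;
// creates a new Box big enough to contain the other three.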
Being able to write statements like this is clearly much easier than having to deal with all the box dimensions separately, and the more complex the operations on boxes you take on, the greater the advantage is going to be. This is a trivial illustration, though, and there’s a great deal more to the power of objects than you can see here. The purpose of this discussion is just to give you an idea of how readily problems solved using an object-oriented approach can be understood. Object-oriented programming is essentially about solving problems in terms of the entities to which the problem relates rather than in terms of the entities that computers are happy with—numbers and characters. You’ll explore object-oriented programming in C++ fully starting in Chapter 11.

Summary
This chapter’s content has been broad-brush to give you a feel for some of the general concepts of C++. You’ll encounter everything discussed in this chapter again, and in much more detail, in subsequent chapters. However, some of the basics that this chapter covered are as follows:
- A program in C++ consists of at least one function, which is called main().
- The executable part of a function is made up of statements contained between a pair of braces.
- A pair of curly braces is used to enclose a statement block.
- In C++, a statement is terminated by a semicolon.
- Keywords are a set of reserved words that have specific meanings in C++. No entity in your program can have a name that coincides with any of the keywords in the language.
- A C++ program will be contained in one or more files.
- The code defining functions is usually stored in files with the extension .cpp.
- The code that defines your own data types is usually kept in header files with the extension .h.
- The C++ standard library provides an extensive range of capabilities that supports and extends the C++ language.
- Input and output in C++ are performed using streams and involve the use of the insertion and extraction operators, << and >>.
- Object-oriented programming involves defining new data types specific to your problem. Once you’ve defined the data types that you need, a program can be written in terms of the new data types.
The following exercises enable you to try out what you’ve learned in this chapter. If you get stuck, look back over the chapter for help. If you’re still stuck after that, you can download the solutions from the Apress website ( download.html), but that really should be a last resort.
Exercise 1-1. Create a program that will display the text “Hello World” on your screen.
Exercise 1-2. Change your program so that it uses the hexadecimal values of the characters to spell out the phrase. If you’re working on a computer that uses ASCII to encode its characters, you’ll find a table of the values you need in Appendix A. (Hint: When you’re using hexadecimal ASCII values, “He” can be displayed by the statement std::cout << “x48x65”;.)
Exercise 1-3. The following program produces several compiler errors. Find these errors and correct them so the program can compile cleanly and run.
#include <iostream>
using namespace std;
int main() {
<< “Hello World”
<< endl
return0;
)
Exercise 1-4. What will happen if you remove the using directive from the program in Exercise 1-3? Apart from restoring the using directive, how else could you fix the problem that occurs? Why is your solution better that restoring the original using directive?
NOTE You’ll find model answers to all exercises in this book in the Downloads section of the Apress website at. | http://www.devshed.com/c/a/Practices/Basic-Ideas/ | CC-MAIN-2017-34 | refinedweb | 12,350 | 58.01 |
Controllers fail during execution when using octomap with Moveit
I'm using MoveIt with the default
RRTConnectkConfigDefault motion planning library. I have a 6 DoF arm to which I pass target poses using roscpp's MoveGroupInterface. I'm using ros_control and have created my own Fake Controllers of the type
FollowJointTrajectory. The target poses are acquired from the readings of a depth camera in Gazebo.
By default, I do not use the octomap using the depth cam. In this cases, MoveIt is able to generate the plans and also execute them successfully. I can see the arm moving in Rviz. I do not have an arm in Gazebo, it's only loaded in Rviz.
When I use the octomap, MoveIt can generate the plans, but fails during execution. All the joint states are being published on topic
/joint_states. The same code works when I remove the sensors.yaml file and don't use the octomap. The controllers are up. I don't see any other errors in the terminal. Please help me identify the cause.
EDIT: When I increase the resolution of the octomap from 0.01m to 0.05m, then suddenly things start to work? I changed the value in moveit_config/sensor_manager.launch file as so:
<param name="octomap_resolution" type="double" value="0.05" />
I'd like to make things work at 0.01m resolution, it looks granular enough.
Here are the error messages: Error from MoveGroupInterface terminal where I pass in commands:
[ INFO] [1524323290.317619120, 261.964000000]: Ready to take commands for planning group arm. [ INFO] [1524323293.819272021, 265.299000000]: 3D Co-ords of next target: X: 0.487010, Y: -0.132180, Z:-0.215447 [ INFO] [1524323303.789838914, 275.190000000]: ABORTED: Solution found but controller failed during execution
Error from the MoveIt terminal:
(more)(more)
[ INFO] [1524323029.558940862, 3.456000000]: Starting scene monitor [ INFO] [1524323029.562571287, 3.460000000]: Listening to '/move_group/monitored_planning_scene' [ INFO] [1524323033.952942898, 7.842000000]: Constructing new MoveGroup connection for group 'arm' in namespace '' [ INFO] [1524323035.143276258, 9.027000000]: Ready to take commands for planning group arm. [ INFO] [1524323035.143337694, 9.027000000]: Looking around: no [ INFO] [1524323035.143357118, 9.027000000]: Replanning: no [New Thread 0x7fffbef7e700 (LWP 20461)] [Wrn] [Publisher.cc:141] Queue limit reached for topic /gazebo/empty/pose/local/info, deleting message. This warning is printed only once. [ WARN] [1524323072.113526614, 45.706000000]: Failed to fetch current robot state. [ INFO] [1524323072.113676595, 45.706000000]: Planning request received for MoveGroup action. Forwarding to planning pipeline. Debug: Starting goal sampling thread [New Thread 0x7fffb3fff700 (LWP 21008)] Debug: Waiting for space information to be set up before the sampling thread can begin computation... [ INFO] [1524323072.116232595, 45.709000000]: Planner configuration 'arm[RRTConnectkConfigDefault]' will use planner 'geometric::RRTConnect'. Additional configuration parameters will be set when the planner is constructed. Debug: The value of parameter 'longest_valid_segment_fraction' is now: '0.0050000000000000001' Debug: The value of parameter 'range' is now: '0' Debug: arm[RRTConnectkConfigDefault]: Planner range detected to be 4.017020 Info: arm[RRTConnectkConfigDefault]: Starting planning with 1 states already in datastructure Debug: arm[RRTConnectkConfigDefault]: Waiting for goal region samples ... Debug: Beginning sampling thread ... | https://answers.ros.org/question/289286/controllers-fail-during-execution-when-using-octomap-with-moveit/ | CC-MAIN-2020-16 | refinedweb | 512 | 53.07 |
Installation QtSerialport
Hello,
I try to install QtSerialport following this:
First, the command
git clone git: //gitorious.org/qt/qtserialport.git
gives me the following result:
fatal: Could not read from remote repository.
Please make sour you-have the proper access rights
and the repository exists.
So I download the archive:
After extraction and the creation of the serialport-build directory I try this:
qmake ../serialport-src/qtserialport.pro, no message ok
Then:
make: an error message:
In file included from /.../qt-qtserialport/src/serialport/qserialport.cpp:45:0:
/home/francois/qt-qtserialport/src/serialport/qserialport.h:48:44: fatal error: QtSerialPort / qserialportglobal.h: No such file or folder of this type
#include <QtSerialPort / qserialportglobal.h>
I use: QT5 / ubuntu 14.04.1
qmake --version
QMake version 3.0
Using Qt 5.2.1 in / usr / lib / i386-linux-gnu
Thanks
Hi and welcome to devnet,
Since you are using Qt 5.2.1 why don't you use the QtSerialPort module that comes with it ? It's been in Qt since 5.1
Thank you for your reply.
I installed QT5 using QtCreator package on ubuntu and I have no trace of QtSerialPort include files in the /usr/include/qt5 !
The version of QtCreator package: 3.0.1-0ubuntu4
Qt Creator 3.0.1
Based on Qt 5.2.1 (GCC 4.8.2, 32 bit)
What is the problem?
thank you
- JKSH Moderators
I installed QT5 using QtCreator package on ubuntu
Qt Creator is only the IDE. You need to install Qt development libraries.
The recommended way to get the libraries is from the official website:
However, you can get packages from the Ubuntu Software Center too if you want. Be aware that there are many separate packages, such as libqt5serialport5-dev.
Hi,
I'm want to use the QSerialPort Library for QT 5.7. But I cant find any downloads and it doesn't seem to be preinstalled. I'm working on a win pc.
Can you tell me where I can download the QSerialPort library?
Thanks!
Hi,
It comes with your installation of Qt
Hi,
but when I try #include <QSerialPort> then I get an error "No such file or directory." I addes QT += serialport.
On the web I found something like #include <QtSerialPort/QtSerialPort>. But is it the same library? And does it have an info library too?
Sorry for beeing such a newbie :D
- jsulm Moderators
Thanks for your response!
I did the installation via an online installer. Or what do you mean?
The "info library" is the QSerialPortInfo library. Sorry for beeing unexact!
- jsulm Moderators
@Smileede Can you check whether you have the QtSerial header files? See in Qt/5.7/mingw53_32/include/QtSerialPort. Replace mingw53_32 with what ever Qt version you installed. Maybe something went wrong during installation.
Yes, it is there! Ahh, now I got it! If I type #include <QTSerialPort/qseriaport.h> it does work :)
But I can also include QTSerialPort/QtSerialPort. Is there any difference between QtSerialPort and QSerialPort?
Thank you for your help!
There's always one basic that should be done: did you re-run qmake after you added
QT += serialportto your .pro file ? | https://forum.qt.io/topic/52059/installation-qtserialport | CC-MAIN-2018-26 | refinedweb | 527 | 53.47 |
Hey I've been working on an SMS sending manager for gnokii in Linux but I feel I am a little bit out of my depth in relation to the memory management side of things... I'm not sure if I'm having any memory leak issues but some things make me a bit suspicious. Could anyone take a glance of my code and see if they can find any?
I think mainly around the passing of variables between functions, some of them are local such as the vector, so if I return it is the memory marked for deletion so that when I try to access it from outside the function it'll stuff up?I think mainly around the passing of variables between functions, some of them are local such as the vector, so if I return it is the memory marked for deletion so that when I try to access it from outside the function it'll stuff up?Code:#include <iostream> #include <fstream> #include <string> #include <vector> #include <dirent.h> using namespace std; char* remove(char* const src, char character); vector<string> getListOfFiles(const char* directory); int main() { string DirIn = "/var/gsm/in/", DirOut = "/var/gsm/out/", DirSent = "/var/gsm/sent/", DirFail = "/var/gsm/fail/"; vector<string> Filenames = getListOfFiles(DirOut.c_str()); for(unsigned int i = 0; i < Filenames.size(); i++) { fstream SMSFile((DirOut + Filenames.at(i)).c_str()); if(!SMSFile.fail()) { char NumberLine[64]; SMSFile.getline(NumberLine, 64); string Number = NumberLine; Number = Number.substr(8); if(Number.length() == 10) { cout << "Number is OK" << endl; } else { cout << "Number is Wrong" << endl; } cout << Number << endl; char MsgLine[160]; SMSFile.getline(MsgLine, 160); // Remove ", ', carriage returns and new lines strcpy(MsgLine, remove(MsgLine, '\n')); strcpy(MsgLine, remove(MsgLine, '\r')); strcpy(MsgLine, remove(MsgLine, '\'')); strcpy(MsgLine, remove(MsgLine, '"')); string Msg = MsgLine; if(!system(("echo \"" + Msg + "\" | gnokii --sendsms " + Number).c_str())) { cout << "SMS Sent to " << Number << "!" << endl; system(("mv " + DirOut + Filenames.at(i) + " " + DirSent).c_str()); } else { cout << "SMS Failed to " << Number << "!" << endl; system(("mv " + DirOut + Filenames.at(i) + " " + DirFail).c_str()); } } } else { cout << "Unable to open the file" << endl; } } return 0; }; vector<string> getListOfFiles(const char* directory) { vector<string> Filenames; DIR* dirHandle; struct dirent* dirEntry; dirHandle = opendir(directory); if(dirHandle != NULL) { while((dirEntry = readdir(dirHandle))) { if(dirEntry->d_type != DT_DIR) { Filenames.push_back(dirEntry->d_name); } } } return Filenames; } char* remove(char* const src, char character) { char* s = src; int occurences = 0, length = 0; while(*s) { if(*s == character) occurences++; length++; s++; } char* result = new char[length - occurences + 1]; char* resultptr = result; s = src; while(*s) { if(*s != character) { *resultptr = *s; resultptr++; } s++; } *resultptr = '\0'; return result; }
Thanks heaps | https://cboard.cprogramming.com/cplusplus-programming/96118-memory-leaks.html | CC-MAIN-2017-43 | refinedweb | 431 | 55.24 |
or Join Now!
« back to Power Tools, Hardware and Accessories forum
Bill White
home | projects | blog
3579 posts in 2704 days
03-05-2013 10:51 PM
I’ve been using a light weight general purpose oil for the bearings (mostly shielded), but was wonderin’ what you guys use. Seems like the high speed might require a different viscosity or type. Am I just worryin’ too much?Bill
-- [email protected]
Handtooler
1124 posts in 876 days
#1 posted 03-05-2013 10:57 PM
As an advanced novice craftsman, and not knowing any better, I use 3-n-one sparingly and seldom. Not after every use.
-- Russell Pitner Hixson, TN 37343 [email protected]
a1Jim
112806 posts in 2321 days
#2 posted 03-05-2013 10:59 PM
router bearing lube
-- Custom furniture
Kazooman
60 posts in 696 days
#3 posted 03-05-2013 11:58 PM
I watched the first video and I have to say I was not impressed. I always get my hackles up when someone says “what you don’t understand” or similar words. I do understand router bits, and the guy in the video has some issues. He starts out OK, but then goes on to equate a round-over bit with a beading bit, claiming that you can just swap out the bearings and make a round-over bit into a beading bit The beading bit has the added duty of cutting s nice clean edge above the curve. Round-over bits are not designed to do this and they won’t do this. Slap a smaller bearing on a round-over bit and you will get a mess. Put the correct bearing on a beading bit and it is a round-over bit.
Didn’t notice anything about lube in the video. I have not watched the other one yet.
I would recommend a light weight oil and be certain that you are not using anything with silicone in it that might be a problem with finishing down the road. Oil the bearing and then wipe off any excess that does not seep in.
#4 posted 03-06-2013 12:17 AM
KazoomanHere’s is “the guy’s” web site. “Charels Neil” Note the furniture he has made!He has been a professional furniture maker for 30+ years,He teaches many classes all over the US ,He makes 40 major pieces of furniture a year, teaches classes on line and has dozens of Videos on woodworking an finishing,an has written a book on finishing.
There is a good chance that he might just know a little something about woodworking and how router bits work !
#5 posted 03-06-2013 12:28 AM
+1 a1Jim Boy, is Charles ever helpful. He spent four extremely helpful and trying attempts for me to correct white cup and glass rings from a table that was recovered from a cafe/bar that had closed. His last piece of advice did the trick. I didn’t have to strip the table as he advised not to, since it was an heirloom sample of furniture well worth saving. Man, do I ever admire that person. He’s an accomplished woodworker of the highest level.
#6 posted 03-06-2013 12:34 AM
JamesI confess I just read the google description that included router bit lube in it. But Charles did mention there was router lube in the kit. I guess I owe you 3 minutes :))
#7 posted 03-06-2013 12:45 AM
a!jim and handtooler: I didn’t go to the website, since I am certain that he has made a ton of impressive stuff, otherwise Woodcraft wouldn’t have made the video. However, that does not excuse him from providing erroneous information to novice woodworkers in the form of a Woodcraft video. What he said is BS. The guy comes across as a “know it all” A-hole. Perhaps he does “know it all” but if he continues to record videos with misinformation then he is not doing the community any favors.
I don’t “know it all” but I do know a few things and I know that this “A-hole” is wrong. Such a condescending attitude!
GaryL
1080 posts in 1574 days
#8 posted 03-06-2013 01:07 AM
Easy with the name calling… If you don’t agree with Charles, fine. You don’t have to jump off a cliff about it.Oh and by the way….I’ve changed bearing sizes on “round over bits” many times to get a shoulder, i.e. bead, with no problem at all and vice versa with beading bits. I’m sure every manufacturer does not make their bits identical to each others, so maybe yours won’t do this, but many will.
This is a typical roundover bit. If you look at it there is no reason that a smaller bearing could be installed and achieve a bead. Most carbides are sharpened far enough in. Just the opposite to get a roundover from a bead.
Bill….sorry about that, we’re a little off topic. Myself I have never lubed a router bearing and have never burned one up or had one go bad. I have used many bits and probably have 70 to 80 on hand right now (some are wore out and should be pitched or sharpened…tool hoarder confession). I’m sure A1Jim has his fair share too as well as Charles. Even though I don’t treat my bearings properly, I’m sure lubing won’t hurt.
A1Jim…I do enjoy watching Charles videos but I have to agree with James. He does tend to drag things out a bit.
-- Gary; Marysville, MI...Involve your children in your projects as much as possible, the return is priceless.
juniorjock
1930 posts in 2509 days
#9 posted 03-06-2013 01:22 AM
Oh hell no…........... you don’t know what you’re dealing with . . . . . . . . .
Tony_S
442 posts in 1827 days
#10 posted 03-06-2013 01:38 AM
Sorry for continuing the derail…
KazoomanThe info that Charles Neil gave on the round over/beading bit is 100% correct.Ive used numerous different brand names over the years, Velepec, Dimar, Whiteside, Freud and probably 2 or 3 more that I don’t recall the brand names of. No issues with putting smaller bearings on any of them to cut a bead.
Freud even suggests/recommends it on their website.
Like Charles Neil or not…you owe him an apology.
-- Wisdom begins in wonder. Socrates
#11 posted 03-06-2013 01:41 AM
KazoomanWhat he said was correct ,I’ve done it myself.Perhaps if you watch it again you might understand it better.I guess how people come across to us might be a reflection of how we view people in general . Your entitled to your opinion and I mine. Charles is very knowledgeable and kind and giving person,He helps anyone that emails him with questions with out asking for anything in return (like Handtooler),he has built the things of beauty an auctioned them off to send the proceeds to people in need.There are many other things I could list that would only embarrass him. So I guess it’s up them who know him and others to evaluate who the A-hole is.
James Charles is from the south and is what I call down home folks (real people) His delivery is just who he is. I have no problem with it. This might be a generational thing or a regional thing. One thing I know is he is more knowledgeable than any of the faster talking TV or on line woodworking personalities .
Bill
I’m sorry for high jacking your thread. I will not post anymore on this thread.
Greg..the Cajun Wood Artist
5252 posts in 2052 days
#12 posted 03-06-2013 01:58 AM
I have also changed bearings on a roundover bit numerous times to achieve beading..I use a lightweight oil on bearings to lube them…everything from router bearings to thrust and guide bearings on my bandsaw. whenever I clean bearings to remove pitch or crud and then lube them after soaking with acetone.
-- Each step of every Wood Art project I design and build is considered my masterpiece… because I want the finished product to reflect the quality and creativeness of my work
ward63
327 posts in 1831 days
#13 posted 03-06-2013 02:15 AM
I’ve used graphite in the past and it has worked great.
Dallas
3167 posts in 1231 days
#14 posted 03-06-2013 02:46 AM
Bear fat.
-- Improvise.... Adapt...... Overcome!
gfadvm
11478 posts in 1434 days
#15 posted 03-06-2013 02:48 AM
I soak all my router bits in “Bug and Tar Remover” from the auto parts store to keep them clean. This also seems to keep the bearings well lubricated.
-- " I'll try to be nicer, if you'll try to be smarter" gfadvm | http://lumberjocks.com/topics/47498 | CC-MAIN-2014-52 | refinedweb | 1,506 | 80.41 |
RDF::Trine::Exporter::GraphViz - Serialize RDF graphs as dot graph diagrams
version 0.141
use RDF::Trine::Exporter::GraphViz; my $ser = RDF::Trine::Exporter::GraphViz->new( as => 'dot' ); my $dot = $ser->to_string( $rdf ); $ser->to_file( 'graph.svg', $rdf ); # highly configurable my $g = RDF::Trine::Exporter::GraphViz->new( namespaces => { foaf => '' }, alias => { '' => '=', }, prevar => '$', # variables as '$x' instead of '?x' url => 1, # hyperlink all URIs # see below for more configuration options ); $g->to_file( 'test.svg', $model );::Serializer as long as RDF::Trine has no common class RDF::Trine::Exporter). This module also includes a command line script rdfdot to create graph diagrams from RDF data.
This modules derives from RDF::Trine::Serializer with all of its methods (a future version may be derived from RDF::Trine::Exporter). The following methods are of interest in particular:
Creates a new serializer with configuration options as described below.
Serialize RDF data, provided as RDF::Trine::Iterator or as RDF::Trine::Model to a file.
$file can be a filehandle or file name. The serialization format is automatically derived from known file extensions.
Serialize RDF data, provided as RDF::Trine::Iterator or as RDF::Trine::Model to a string.
Creates and returns a GraphViz object for further processing. You must provide RDF data as RDF::Trine::Iterator or as RDF::Trine::Model.
Returns the exporter's mime type. For instance if you create an exporter with
as => 'svg', this method returns
('image/svg+xml').
Provided as alias for
to_file for compatibility with other
RDF::Trine::Exporter classes.
Provided as alias for
to_string for compatibility with other
RDF::Trine::Exporter classes.
Serialize a RDF::Trine::Iterator as graph diagram to a string.
Internal core method, used by
to_string and
to_file, which one should better call instead.
The following configuration options can be set when creating a new object.
Specific serialization format with
dot as default. Supported formats include canonical DOT format (
dot), Graphics Interchange Format (
gif), JPEG File Interchange Format (
jpeg), Portable Network Graphics (
png), Scalable Vector Graphics (
svg and
svgz), server side HTML imape map (
imap or
map), client side HTML image map (
cmapx), PostScript (
ps), Hewlett Packard Graphic Language (
hpgl), Printer Command Language (
pcl), FIG format (
fig), Maker Interchange Format (
mif), Wireless BitMap format (
wbmp), and Virtual Reality Modeling Language (
vrml).
Mime type. By default automatically set based on
as.
General graph style options as hash reference. Defaults to
{ rankdir => 'TB', concentrate => 1 }.
Hash reference with general options to style nodes. Defaults to
{ shape => 'plaintext', color => 'gray' }.
Hash reference with options to style resource nodes. Defaults to
{ shape => 'box', style => 'rounded', fontcolor => 'blue' }.
Hash reference with options to style literal nodes. Defaults to
{ shape => 'box' }.
Hash reference with options to style blank nodes. Defaults to
{ label => '', shape => 'point', fillcolor => 'white', color => 'gray', width => '0.3' }.
Code referece with a function that get passed a predicate and variable
$_ set to the predicate's URI. The function must return undef to skip the RDF statement or a hash reference with options to style the edge.
Add clickable URLs to nodes You can either provide a boolean value or a code reference that returns an URL when given a RDF::Trine::Node::Resource.
Hash reference with URL aliases to show as resource and predicate labels.
Hash reference with options to style variable nodes. Defaults to
{ fontcolor => 'darkslategray' }.
Which character to prepend to variable names. Defaults to '?'. You can also set it to '$'. By now the setting does not affect variables in Notation3 formulas.
An URI that is marked as 'root' node.
Add a title to the graph.
Hash reference with mapping from prefixes to URI namespaces to abbreviate URIs. By default the prefix mapping from RDF::NS is used.
This serializer does not support
negotiate on purpose. It may optionally be enabled in a future version. GraphViz may fail on large graphs, its error message is not catched yet. Configuration in general is not fully covered by unit tests. Identifiers of blank nodes are not included.
Jakob Voß <[email protected]>
This software is copyright (c) 2012 by Jakob Voß.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~voj/RDF-Trine-Exporter-GraphViz-0.141/lib/RDF/Trine/Exporter/GraphViz.pm | CC-MAIN-2013-48 | refinedweb | 696 | 59.7 |
Signal / Socket connection problem...
I have written a class which is derived from QPushButton, in my derived class I want to connect the "clicked" signal to a slot in the same class also called "clicked". In my derived class:
private slots: void clicked(bool blnChecked);
In the class constructor:
Object::connect(this, SIGNAL(clicked(bool)), this, SLOT(clicked(bool)));
Everything builds without warnings or errors, when I execute the code I get the following in the Application Output pane:
QObject::connect: No such slot QPushButton::clicked(bool)
- J.Hilk Moderators
@SPlatten
naming 2 functions the same, with the same arguments. That's bound to cause problems.
The only way you got away with it in the first place is, because you derived the class containing one function.
But, I'm pretty sure the
mommoc-compiler is running into issues here because of it.
That isn't the problem, I just renamed the slot to "clickedHandler" exactly the same problem, no change.
@SPlatten said in Signal / Socket connection problem...:
I have written a class which is derived from QPushButton, in my derived class I want to connect the "clicked" signal to a slot in the same class also called "clicked". In my derived class:
private slots:
void clicked(bool blnChecked);
I don't think it is a good idea to have a signal and a slot have same name!
I would remane slot to
private slots: void clicked(bool blnChecked);
And use new connection syntax to got connection failures at compilation time and not runtime:
connect(this, &MyButton:clicked, this, &MyButton:onClicked);
And don't forget to add "Q_OBJECT":
class MyButton : QPushButton { Q_OBJECT ... private slots: void clicked(bool blnChecked); }
And perhaps also rerun qmake
Regards
- jsulm Qt Champions 2018
@SPlatten said in Signal / Socket connection problem...:
QObject::connect: No such slot QPushButton::clicked(bool)
Are you sure this warning comes from the connect you pasted here?
Because the warning should be
QObject::connect: No such slot YOURCLASSNAME::clicked(bool)
Are you trying to connect somewhere else also?
And is the warning now exact the same or does it contain the new slot name?
@KroMignon, thank you, adding Q_OBJECT to my derived class has stopped the warning from being displayed, however when I click the button I still don't see anything in the slot which I've now renamed to "clickedHandler"
@jsulm, yes, positive, its the only connect I have.
- Pradeep P N
Even using
private slots: void clicked(bool);
Works fine for me.
Below is my code
#include <QPushButton> class Widget : public QPushButton { Q_OBJECT public: Widget(QPushButton *parent = nullptr); ~Widget(); private slots: void clicked(bool); };
Widget::Widget(QPushButton *parent) : QPushButton(parent) { // New syntax // connect(this, &Widget::clicked, this, &Widget::clicked); // Old syntax connect(this, SIGNAL(clicked(bool)), this, SLOT(clicked(bool))); } void Widget::clicked(bool) { qDebug() << Q_FUNC_INFO << endl; }
- jsulm Qt Champions 2018
@SPlatten said in Signal / Socket connection problem...:
I still don't see anything in the slot
What do you mean by that? What are you doing in the slot?
- Pradeep P N
@SPlatten Please check with
connect(this, SIGNAL(clicked(bool)), this, SLOT(clicked(bool)));
It works.
There is no issue in Build but still the Slot is not called when using new connect syntax.
@SPlatten said in Signal / Socket connection problem...:
still don't see anything in the slot which I've now renamed to "clickedHandler"
What do you mean? Is the slot not called? Are you sure you are watching/click on the right button?
Ok, I've now got it working and just for the record and anyone else needing do so the same, this is what I needed to do. I thought 'wrongly' that because my class was derived from QPushButton which already has Q_OBJECT in it, that I didn't need to do the same, wrong!
Adding Q_OBJECT to the class solved the problem. Also having a signal and slot with the same name does not cause a problem, it's quite possible to have both with the exact same name.
My class, still a work in progress but working:
class clsMenuBtn; typedef List<clsMenuBtn*> lstOfMenus; class clsMenuBtn : public QPushButton { Q_OBJECT private: bool mblnSecure; clsMenuBtn* mpChildren, *mpNext, *mpParent, *mpPrev; public slots: void clicked(bool blnChecked); public: clsMenuBtn(QString strText = "", QWidget* pobjParent = NULL ,bool blnSecure = false); lstOfMenus lstGetMenus(); clsMenuBtn* pAddOption(QString strText = "", bool blnSecure = false); clsMenuBtn* pGetChildren() { return mpChildren; } clsMenuBtn* pGetNext() { return mpNext; } clsMenuBtn* pGetParent() { return mpParent; } clsMenuBtn* pGetPrev() { return mpPrev; } void setText(const QString& strText); };
@SPlatten said in Signal / Socket connection problem...:
Adding Q_OBJECT to the class solved the problem
When you are creating a class which is based on QObject and you want to add new signals and/or slots, you have to add Q_OBJECT macro to enable "MOC magic". That is mandatory!
As I said, I thought wrongly that since my class was derived from one with that functionality already present, that my new class would inherit the same, live and learn :)
- Pradeep P N
@SPlatten
Also here is the reason why new connect syntax donot work in this case.
- J.Hilk Moderators
@SPlatten
also in case you didn't do it. Initializing the "parent QObject-class" in the constructor is a good practice you should always do.
I do, my code for the implementation of the constructor:
/** * @brief clsMenuBtn::clsMenuBtn * @param strText Optional, text for menu * @param pobjParent Optional, pointer to parent object * @param blnSecure Optional, secure flag default is false */ clsMenuBtn::clsMenuBtn(QString strText, QWidget* pobjParent, bool blnSecure) : QPushButton(pobjParent) { mblnSecure = blnSecure; mpChildren = mpNext = mpParent = mpPrev = NULL; setText(strText); if ( strText.isEmpty() != true ) { QObject::connect(this, SIGNAL(clicked(bool)), this, SLOT(clicked(bool))); } } | https://forum.qt.io/topic/103279/signal-socket-connection-problem/16 | CC-MAIN-2019-30 | refinedweb | 934 | 57.71 |
User talk:Flameviper
From Uncyclopedia, the content-free encyclopedia
edit Forum:Deletion policy, again
Read your recent comments in the above forum, and obviously I agree with you 100%. While I doubt we will change the deletionists' minds on this issue, and they do have a point about the problem of high volume, low content/quality submissions, I think there is a middle path, which is the howto page I developed that tells n00bs how they can edit and create in their userpage subpages without being poked and prodded by deletionist admins, which is obviously not necessarily the greatest creative environment for some people.
Anyway, thanks for the comments, I think it's important to keep a debate alive on this issue, even if it doesn't go anywhere. A lot of people think that their views are the only ones that matter here. It's good to have at least some small space for alternatives, even if they are not accepted by everyone, or even the majority of the users. That's probably the essence of what a wiki is, after all, anybody can do anything within reasonable limits, and that means different people are entitled to have different, even antithetical, ideas, and still somehow find a way to work together anyhow.
--Hrodulf 21:26, 19 May 2006 (UTC)
edit Aaaaargh
Flameviper, those Flammable libel pages were ALREADY DELETED. They show up in a search becasue Mediawiki detects deleted revisions when searching for a query string. All you've done is make extra work for me (as I'm now the one who has to go through and delete them all over again). Please man, ASK someone when you come across something like this. I saw the original note you left for Flammable on his talk page; if you feel that you need a response to a query immediately without waiting for the relevant admin to come online, look at the Recent Changes page and leave a note for an admin who's on duty. Had you done that, someone would have explained the above to you already, and we wouldn't have had to go through all this. Bottom line: If you're not sure what the appropriate action should be, find an active admin before instigating a change. Cheers. -- Sir Codeine K·H·P·B·M·N·C·U·Bu. · (Harangue) 15:24, 17 May 2006 (UTC)
- I know. Before, they were obscene. But now they say nice things about Flammable :D
- It's more that if those pages must be there, at least make people a little more comfortable by making disclaimers as the content. Flameviper12 15:27, 17 May 2006 (UTC)
- ALL YOU ARE DOING IS MAKING MORE WORK FOR ME! Stop making them, I've banned you for one minute to get your attention. We can find a way of removing them from the search results by editing the scripts. And offensive language is prevalent throughout this wiki. I hope our users generally are above being offended by the odd swearword in their search results. -- Sir Codeine K·H·P·B·M·N·C·U·Bu. · (Harangue) 15:37, 17 May 2006 (UTC)
- Wow. Sorry dude. Erm...wow, you can ban someone for just one minute? I want a screenshot of the ban screen!
edit Decorum
Of course it's "crap" you moron, it's not even started yet
Talk pages are the way to ask admins—nicely—to undelete your article. Calling them a moron will just get you sent to the corner. If there is a "moron" here, blame the guy who neglected to mark his work in progress with a {{construction}} tag. But even by forgetting that, your template could have been restored had you asked politely and provided an explanation. ~ T. (talk) 14:56, 15 May 2006 (UTC)
edit Flameviper12
Please do not use the wiki space to create redirects to your userpage, either here or on other wikis. Especially trans-wiki double redirects. Ugh. Thanks. -- Sir Codeine K·H·P·B·M·N·C·U·Bu. · (Harangue) 20:19, 14 February 2006 (UTC)
- Why, wot wot??? Not right for you? Why, I only use one user page, wot wot???
CHEESE, WOT WOT!!! -- Flameviper
OK. Firstly, if I leave a message on your talk page, please reply there rather than on my talk page, otherwise the conversation gets fragmented (hence the reason I pasted your text above). To asnwer your question, we revert vandalism very easily, using the rollback button, which is how I removed the Harry Potter junk you just dumped on my talk page. Please don't do that again. -- Sir Codeine K·H·P·B·M·N·C·U·Bu. · (Harangue) 20:33, 14 February 2006 (UTC)
But how, Codeine, do you know if I reply? Surely thou dost not sittetm at thy keyboarde and waiteth for thee to respondeth to thine query? Or dost thy putteth it on thy Watchlist? Ande besides, thou hast put a redirect to thine Uncyclo talk page on thine talk page, thus to the transwiki. Why hast thou reverted it?
- Umm, first off, stop being an ass. Beyond that, it'd be nice if you didn't consistantly create crap, and if you wouldn't create completely irrelevant double-redirects to your own work. Also, Don't vandalize pages. That's definately gonna get you:45, 12 March 2006 (UTC)
Hello! I have given you a timeout. Please don't vandalize, and create irrelevant redirects. --Chronarion 20:12, 12 March 2006 (UTC)
edit Haska
I had a look at the deleted article, and I'm pretty sure I can guess why Splarka deleted it. IT'S CRAP AND A HALF!! Now then, please don't recreate it or measures will be taken.
Take care! --⇔ Sir Mon€¥$ignSTFU F@H|CUNT|+S 00:17, 24 March 2006 (UTC)
- NO, Haska was (whoops didn't intend to put 'no' in ALL CAPS) not crap and a half, what you saw was not what it was, argh...it wa a botched attempt to retrieve what was lost. Arrgh. Flameviper12 00:22, 24 March 2006 (UTC)
Ugh... Please reply on your own talkpage if I left you a message there. I'll be watching it, and the conversation won't become fragmented. As for your article: you're right... It wasn't crap and a half. It was 2 full craps!!! EWWW! But, I'll tell you what I'll do... I'll restore it, and move it to User:Flameviper/Haska. That was it stays away from the main space, you can stay in the fantasy that it was great enough to be kept, and we needn't delete it again. Deal? Deal! --Sir Mon€¥$ignSTFU F@H|CUN 00:27, 24 March 2006 (UTC)
- Deal! Keeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeewwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwl. Flameviper12 00:29, 24 March 2006 (UTC)
edit Template:Categories
Further to your request.....could you try to keep post relevant to the page they are on though. :) -- Sir Mhaille
(talk to me)
edit May I Ask...
What in the name of all that is good and holy are you doing? --KATIE!! 21:49, 27 March 2006 (UTC)
- Umm.. yeah.. just exactly what do you plan to do with the invention templates... and how many pages to you plan using them:58, 27 March 2006 (UTC)
- I was making them so that...well, let's start. When someone creates an article, they just type in random things, like "invented by Vin Diesel in 1149", or some nonsense. If people are going to do that, you might as well give them a plain template to do it on, there's no sense in just having them all be the same, because it takes up way more space. I plan to have about 10 or 25 (yeah, big difference aint it) Invent templates and the same number of Person templates.
- The cabal has discussed the matter, and we think it might be a good idea to, at least temporarily, stop the production of these templates. You see, the porblem is, there appears to be no pages currently using these templates, and until there are... it's pointless to keep creating more. So, it'd be a good idea to put the templates on pause, and actually start writing full articles that use the template(s) in some way.:42, 27 March 2006 (UTC)
- Erm, all righty then, ne'er mind. I was getting tired of these increasingly cumbersome templates anyway. But what exactly is a cabal? Ah well, at least I didn't get the business end of the baninating stick!
- Believe me, that would be the next step. On another note, There Is No Cabal. The cabal has nothing more to say to you, because there is no cabal. Do not attempt to reply to this fictional cabal, for the sole reason that there is no cabal. Now if you on't mind.. I must be on my way.. I have a meeting to attend:51, 27 March 2006 (UTC)
edit List of pictures that exist, and should
Those things are all degrees of amusing, even if most of them are rather simple. Is it possible for lesser users such as myself to award you cookies based on this? --Ж Kalir yes, I play Pokémon 15:26, 3 April 2006 (UTC)
- I don't know...but wow! I've never been commended on this before. Haha, my pixel art is good for something at least. You should ask an admin. Thanks.
- Yes,
peasantsusers are allowed to hand out certain awards to other users. And yes, a cookie is one of them... Handing out awards to yourself, however, is considered embarrasing. --⇔ Sir Mon€¥$ignSTFU F@H|CUNT|+S 18:16, 3 April 2006 (UTC)
- Um, MonkeySign, I shouldn't have to bowdlerise your sig. Erm, rather odd to say this but at least sign something clean on my user talk.
Well then, sequentially...
Enjoy. --Ж Kalir yes, I play Pokémon 15:28, 5 April 2006 (UTC)
edit Your edit-summaries
What do they mean? Have you seen the Oracle? What did she say? And don't give me that "She told me only what I needed to know" crap, neither. --⇔ Sir Mon€¥$ignSTFU F@H|CUNT|+S 16:46, 20 April 2006 (UTC)
- The Oracle told me that on July 6 2006, Wikipedia will be engulfed by a cloud of grey poo, then Chuck Norris would round-house kick it into oblivion. By "its maker", I think she meant that it was going to be formally introduced and shake hands with Jimbo. Although Wiki's really a bunch of servers strung together running on MediaWiki software, I don't see how that's possible. Unless it uses a cord or something as a hand, and has the psychological capacity (being a bunch of servers) to even comprehend what the hell's going on...I prefer the Chuck Norris version myself. Flameviper12
edit "No userboxes?" Bullshit!
I notice you have an unhealthy obsession with userboxes, but you say that they don't work here. There are userboxes here, just not the same ones at WP. Try here to look for userboxes. And here, too. —Hinoa KUN (talk) 17:34, 20 April 2006 (UTC)
- I know what you're getting at, I don't mean that. I mean that I {{subst:}}'ed them. This would mean that they could be universally transwiki-ed, but when I copy-pasted them from my Wiki account, it didn't work. I'll try again here...
edit Thy Olde Anglishe
Thine olde Anglishe is wrong, for thou art ascribing in seconde persone. And it is alsoe middle Anglishe, for olde Anglishe is muche more incomprehensible. ~ <span class="buttonlink" title="talke in
olde middle Anglishe" style="border-width: 2px; border-color: #FFFFFF;white-space:nowrap;background-color:#C0C0C0;padding:1px 5px 1px 5px;color:black;">talke in olde middle Anglishe</span> (ye olde 17:24, 27 April 2006 (UTC)) P.S - thine edite to Template:Numbere hast been reverted.
- Teacheth thou to speacketh in thy aenglish! Alsoe, whither art thy first persone for middle Aenglish? I dost needeth to know! Flameviper12 14:49, 1 May 2006 (UTC)
edit Table of Olde, Middle and Moderne Anglishes
Here you go. So you know first and second pronouns in Old, Middle and Modern English. ~ 17:03, 2 May 2006 (UTC)
edit TEST
Flameviper12
edit Ban from Template: namespace
This message and attached complimentary 1 day ban is to serve notice you are hereby banned from editing or creating files in the Template: namespace for a period of no less than 2 months, and up to infinity.
You and your IP may make no edits or creations to that namespace until we the CABAL feel you are fit to do so. You are free to experiment with inclusions in your user space once you learn how to do so.
Fondest regards,
- --Splaka 04:02, 20 May 2006 (UTC)
- Cosigned —rc (t) 04:08, 20 May 2006 (UTC)
- Cocosigned --⇔ Sir Mon€¥$ignSTFU F@H|CUNT|+S 05:04, 20 May 2006 (UTC)
- Coocoosigned. --KATIE!! 05:14, 20 May 2006 (UTC)
-- Sir Codeine K·H·P·B·M·N·C·U·Bu. · (Harangue) 09:33, 20 May 2006 (UTC)
- I concur. ~ T. (talk) 11:00, 20 May 2006 (UTC)
- CABAL!!!:23, 21 May 2006 (UTC)
- I could be missing something, but didn't flameviper agree to stop creating templates above?
- "Erm, all righty then, ne'er mind. I was getting tired of these increasingly cumbersome templates ::::anyway. But what exactly is a cabal? Ah well, at least I didn't get the business end of the ::::baninating stick!
- 1. Flameviper12 22:45, 27 March 2006 (UTC)"
- Now, something else could have happened I don't know about, but right now I don't understand what happened here.
- Yes, he was warned. Did he heed? Well.. lesse, I've huffed (since the warning) a handful of re-created crap templates, and in addition, he created Template:Person twice, and even attempted to overwrite Template:Random/person with "ALREADY A TEMPLATE FOR TISH!" despite the latter template having existed 3 to 5 months prior to his. See all his non-deleted "contribs" to Template:Namespace. Deleted contribs (to be fair, many of these were before the warning) include: Template:Body part, Template:Country, Template:LiteInsult, Template:Randvar, Template:Person, Template:Begin, Template:Anoun, Template:Years, Template:Invent, Template:Invent1, Template:Invent2, Template:Invent3, Template:Invent4, Template:Invent5, Template:Averb, Template:Aadv, Template:Bnoun, Template:Body part, Template:Person1, Template:2number, Template:3number, Template:3Number, Template:Animal, Template:3letter, Template:2letter, Template:Country, Template:LiteInsult, Template:Alien4, Template:Alien3, Template:Alien2, Template:Alien1, Template:Vowel, Template:Consonant. All in all, he has not done anything significant to improve that namespace, hence the ban, cosigned by 6 admin you might notice. --Splaka 00:18, 21 May 2006 (UTC)
- I thought it was something like that, but I was just curious as to what had happened, since it was a little bit cryptic. I didn't mean to imply anything, I just didn't understand. Like I said earlier "something else could have happened I don't know about."
Btw, how does the ban from one namespace work? Is that automatic, or is it a situation where he can do stuff, but if it's detected he will be banned? I'm asking because there was a discussion at [] and the question of whether it was possible to ban someone from creating images came up. Based on this talk page, I said it was, but I wanted to make sure the information was accurate.
--Hrodulf 01:54, 21 May 2006 (UTC)
- It is manual, which is the point of having a list of admin co-signing it. The undersigned admin will patrol and ban Flameviper or his ip (or any sockpuppet accounts) if they see him edit or create a template during his probation. We've done this before a few times, although rarely, where a contributor has some merit except in certain places (Kakun with VFP, Clorox/Nerd42 with Template, etc). --Splaka 02:03, 21 May 2006 (UTC)
- That's what I assumed, but I wasn't sure. Thanks for clearing that up.
--Hrodulf 02:24, 21 May 2006 (UTC)
edit Beat What?
I have absolutely no idea what the hell you were talking about. Whatever, I enjoyed the:52, 20 May 2006 (UTC)
- Not entirely sure myself. But it sounded cool when I wrote it anyways. Flameviper12 05:06, 21 May 2006 (UTC)
edit QVFD
The {{QVFD}} template is not for putting on pages to be deleted, but on the talk page of the creator of the article (reason). Read the template's instructions. --Splaka 10:03, 22 May 2006 (UTC)
- I was trying for a Burninator award by rampantly and wontonly destoying all new pages that were crap. But there doesn't seem to be a widely accepted template for it. I tried a combination of QVFD and NRV, hoping that would get someone to delete it. So what do I do for, say, these...?
- Crappy one-liners?
- Short pages?
- Short pages w/pictures?
- Long pages that are still crap?
- Random cruft?
- Because I don't want to be putting random templates on pages in the hope it will work (or EVERY deletion template, for that matter). Thanks. Flameviper12 11:36, 22 May 2006 (UTC)
- Read this. Not every problem should be solved by putting a template on it. Pages like that go on Uncyclopedia:QuickVFD. --Splaka 11:44, 22 May 2006 (UTC)
- QVFD is not useless, it is just NOT for the articles, it is to serve notification on user talk pages of a page placed on Uncyclopedia:QuickVFD (or given some other delete notice). --Splaka 11:47, 22 May 2006 (UTC)
- You'd have to perform some service first. As is, I'm having to revert most of your NRVing. Lets clarify a few of the NRV
rulesguidelines:)
edit Juste to cleare thinges up...
- Art thou goinge to change thine userpage so it makes sense in Middel Anglish?
- In Middel and Olde Anglishes, -est was fore thou, -eth for he and she, and otheres are no different this morrow in verb suffixes (do is I do, We do, Thou dost, Ye do, He/She doth, They do).
- 'Ye' in the sense 'Ye Olde Anglishe' actually means 'The', and is only found in Olde Anglishe, which neither thou nor I comprehend.
- Wither doth not mean What, nor Thither meaning That. What and That are to be used for them.
This is juste some advice aboute thine 'Olde Anglish'. And also, per-haps thou should change your Wikipedia user page, whick I imagine is thine. ~ 15:50, 23 May 2006 (UTC)
- Ah, thanks a lot. For the Aenglishes help. Also, thanks for telling me about that vandalism on my user page, I reverted it. Honestly, what does an IP have against me? Flameviper12 15:55, 23 May 2006 (UTC)
edit 1 Month ban
Per Template_talk:Zorkheader, you very obviously and premeditatedly violated your parole and edited in the Template: namespace, hence the harshness of the ban. Also, {{title-left|Zork}} was already there. On the very next line. --Splaka 21:34, 30 May 2006 (UTC)
- Looks like he forgot about the ban. Whoops. At least it's only a month. --Hrodulf 22:46, 30 May 2006 (UTC)
edit Edited your userpage
Seeming as you didn't change to correct Middle/Old English. I based it on the Mercian dialect of Old English, so your userpage is now in a proper olden english language. ~ 18:41, 13 June 2006 (UTC)
- Ælsæg, sieþ Eald Ænglisc for an longer ærticel abuet Eald ænglisc. ~ 18:46, 13 June 2006 (UTC)
edit Just... Hi
Hi 69.205.178.150 19:11, 19 June 2006 (UTC)
edit Indefinite block
It has been brought to our attention that while you've been banned here, you've gone and got yourself a third indefinite block on Wikipedia [2] (soon after begging to be unblocked it seems). Due to your high count of blocks here as well as on Wikipedia, and also taking in to account all your contributions:
- Wikipedia:Special:Contributions/Flameviper12
- Special:Contributions/Flameviper
- Wikipedia:Special:Contributions/216.11.222.21
- Special:Contributions/216.11.222.21
...which show a distinct lack of civility and common sense, as well as alternating hostility and faked sincerity to the administrators, your block here has been upgraded to indefinite on Uncyclopedia as well. --Splaka 08:19, 25 June 2006 (UTC)
- He's been unblocked from Wikipedia, you dink. Deal with it. --Micoolio101 (whine • vandalism) 05:59, 7 November 2006 (UTC)
- It's completely pointless, but I thought his weapons images were sort of cool for their dorkiness value. Whether they belonged in the Template: namespace, however is a totally different issue. I have tons of custom templates on my userpage; I just keep the code there; I don't upload them to the template namespace. If I had I guess I'd be inifinibanned also. --Hrodulf 16:58, 27 November 2006 (UTC)
edit UN:ARTSTAT
Dunno what you were trying to do there, but the article you were trying to redirect to (Uncyclopedia:Article Status) doesn't exist. Please don't make broken redirects. Thanks. ^_^ —Hinoa talk.kun 17:33, 22 January 2007 (UTC)
- I made the article Uncyclopedia:Article status shortly after. Sorry. ~ Unflameviper Who's a Peach? 20:51, 22 January 2007 (UTC)
edit Category:Flameviper Images
This category has been deleted. Ensure it is not recreated, and have a look through our vanity policies... And actually... why not re-read
If you need help, ask me on my talk page, ask at the Dump, or add the following:
{{help}} to this page along with a message and someone will come along and help you if they can. Again, welcome! --Sir Todd GUN WotM MI UotM NotM MDA VFH AotM Bur. AlBur. CM NS PC (talk) 15:21, 24 January 2007 (UTC)
edit Your images...
Must you add your username to all the images you upload? Signing articles isn't allowed, and I don't see why there should be an exception when the name is included via an image. It's bordering on vanity. In fact, it is. If people want to know who the image is by, they can look at the image's page. I could go through all your images and remove the name from them all, but I'll just recommend keeping your images free of it from now on. • Spang • ☃ • talk • 03:40, 26 Jan 2007
- Well, they are my pictures. And I try to make my photosig unobtrusive and not obstruct the image (unlike CERTAIN people). If I made it any smaller, you couldn't read the name. I suppose I could abbreviate it to FV. My main concern isn't the Uncyclopedia non-attribution, it's the fact that it's public domain and that somebody will inevitable use it without attribution. I just want to take credit for my images. I guess I could remove it from the Uncyclohexane pictures, as they did not rewuire that much effort. Thanks for telling me about that. ~ Unflameviper Who's a Peach? 17:57, 26 January 2007 (UTC)
edit Hello from Wikipedia
Hello Flameviper. I do realize that you don't really know me, but I watched your edits after browsing Wikipedia's BJAODN, and I'm really sorry that you were blocked on Wikipedia. I realize that you tried (and did pretty well) at keeping your edits civilized and humorless (which is hard), and Adopt-a-User was an awesome idea. I don't know what you did wrong, but you had a good run on the Wiki. Sorry it had to end. - 65.185.223.234 03:58, 10 March 2007 (UTC) (AMP'd on en.wiki)
edit Congratulations!
--For your images, especially the ones in the Weapons that don't exist, but should articles. Message Board Sig
Here's what I came up with for ya. [3] The Last Porchesian (Holla!) 21:07, 29 March 2007 (UTC)
edit Um...
Hi thar, Flameviper!!!!!!!1!111!!1!111ONE!1!!!11ONE!ONEONEONE11!1ONE!!!ONE1 --Dylan620 (I IZ n00b · 1337!) 20:09, 14 February 2009 (UTC)
edit Award from UN:REQ | http://uncyclopedia.wikia.com/wiki/User_talk:Flameviper | CC-MAIN-2014-52 | refinedweb | 4,055 | 73.47 |
First steps to coding the new game
Level: Intermediate
Sam Lantinga, Lead programmer, Loki Entertainment Software
Lauren MacDonell, Technical writer, Loki Entertainment Software
01 Mar 2000
Over the last month Sam Lantinga and Lauren MacDonell began the initial coding and graphic design of "Pirates Ho!". In this installment of their diary of the creation of their swashbuckling, role-playing game, the authors demonstrate the first steps in coding the game, using C++ and a variety of open source tools. Sam also addresses object caching, error handling, function logging, and more.
In this installment of the series on the new game "Pirates Ho!", we'll get down to the business of the initial coding of the game, working in C++ and introducing several GNU tools to help create some of the files we'll need. We'll then move on to look at error handling, object caching, logging functions, and finish up with a few notes on where to go from there.
Why C++
At Loki, the majority of games use C or C++ with a smattering of assembly for speed-critical routines, but games can and have been written in any language. I decided to use C++ because it is natural to think of the game logic in an object-oriented manner during the design process, and because I am comfortable with C++ (see Sidebar).
Using static C++ objects within dynamically loaded shared objects is even more perilous since the current implementation of gcc (2.95.2) does not call the destructors when you unload the shared object. Instead, the destructor is called at exit, causing a crash because the code for the destructor is no longer available.
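To make the pitfall concrete, here is a minimal sketch of the pattern being warned about. The Tracker class, the plugin_entry() function, and the library name are invented for illustration; they are not part of the game's code.

/* plugin.cpp -- compiled into a shared object such as libplugin.so */
#include <stdio.h>

class Tracker
{
public:
    Tracker()  { printf("plugin: constructor runs when the library is loaded\n"); }
    ~Tracker() { printf("plugin: destructor is deferred until exit()\n"); }
};

/* A static C++ object living inside the shared object */
static Tracker tracker;

extern "C" void plugin_entry(void)
{
    printf("plugin: entry point called\n");
}

/* main.cpp -- loads and unloads the plugin at runtime (link with -ldl) */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("./libplugin.so", RTLD_NOW);
    if ( handle == NULL ) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    void (*entry)(void) = (void (*)(void)) dlsym(handle, "plugin_entry");
    if ( entry != NULL ) {
        entry();
    }
    /* With gcc 2.95.2, ~Tracker() is not run here; it was registered to run
       at exit(), after dlclose() has unmapped the code that implements it. */
    dlclose(handle);
    return 0;
}

This is one reason to avoid static C++ objects in any code that might be loaded with dlopen().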
C++ is also widely supported on different platforms, although you need to be careful of what C++ language features you use, because the different compilers may or may not implement or enforce all of the ANSI C++ specification. In the course of developing Pirates Ho!, we will be compiling with g++ on Linux and Visual C++ on Windows, and using the free MPW tools on MacOS.
Automake and autoconf, the Linux way
Automake and autoconf are a set of GNU tools that allow automatic configuration of source code for different compilation environments. SDL supports applications that are autoconf'd by providing an m4 macro. This macro allows the configuration script to detect whether the appropriate version of SDL is available on the system.
To create a new application that uses automake and autoconf, we followed these 6 steps:

Create configure.in. This file tells autoconf which tools and libraries the build requires and which files the generated configure script should produce; ours looks like this:
# This first line initializes autoconf and gives it a file that it can
# look for to make sure the source distribution is complete.
AC_INIT(README)
# The AM_INIT_AUTOMAKE macro tells automake the name and version number
# of the software package so it can generate rules for building a source
# archive.
AM_INIT_AUTOMAKE(pirates, 0.0.1)
# We now have a list of macros which tell autoconf what tools we need to
# build our software, in this case "make", a C++ compiler, and "install".
# If we were creating a C program, we would use AC_PROC_CC instead of CXX.
AC_PROG_MAKE_SET
AC_PROG_CXX
AC_PROG_INSTALL
# This is a trick I learned at Loki - the current compiler for the alpha
# architecture doesn't produce code that works on all versions of the
# alpha processor. This bit detects the current compile architecture
# and sets the compiler flags to produce portable binary code.
AC_CANONICAL_HOST
AC_CANONICAL_TARGET
case "$target" in
alpha*-*-linux*)
CXXFLAGS="$CXXFLAGS -mcpu=ev4 -Wa,-mall"
;;
esac
# Use the macro SDL provides to check the installed version of the SDL
# development environment. Abort the configuration process if the
# minimum version we require isn't available.
SDL_VERSION=1.0.8
AM_PATH_SDL($SDL_VERSION,
:,
AC_MSG_ERROR([*** SDL version $SDL_VERSION not found!])
)
# Add the SDL preprocessor flags and libraries to the build process
CXXFLAGS="$CXXFLAGS $SDL_CFLAGS"
LIBS="$LIBS $SDL_LIBS"
# Finally create all the generated files
# The configure script takes "file.in" and substitutes variables to produce
# "file". In this case we are just generating the Makefiles, but this could
# be used to generate any number of automatically generated files.
AC_OUTPUT([
Makefile
src/Makefile
])
Create acinclude.m4. The GNU aclocal tool reads a list of macros that it uses from the file acinclude.m4, and merges these into the file aclocal.m4, which is used by autoconf to generate the "configure" script.
In our case, we just wanted to add support for SDL, so we copied the file "sdl.m4" that comes with the SDL distribution into acinclude.m4. The file "sdl.m4" is usually found in the /usr/share/aclocal directory, if you have the SDL-devel rpm installed.
Create Makefile.am. The GNU automake tool uses the Makefile.am file as a description of the source and libraries used to create a program. It uses this description to generate the Makefile.in templates that are turned into makefiles when the user runs the configure script.
In our case, we have a top-level directory containing the README, some documentation and scripts, and a subdirectory "src" that contains the actual source code for the game.
The top level Makefile.am file is very simple. It just tells automake that there is a subdirectory that has a Makefile.am file that needs to be read, and gives a list of extra files that need to be added to the distribution when we build a source archive:
SUBDIRS = src
EXTRA_DIST = NOTES autogen.sh
The source Makefile.am contains the real meat:
bin_PROGRAMS = pirates
pirates_SOURCES = \
cacheable.h game.cpp game.h image.cpp image.h logging.cpp logging.h \
main.cpp manager.h music.cpp music.h nautical_coord.h paths.cpp \
paths.h screen.h screen.cpp ship.h splash.cpp splash.h sprite.cpp \
sprite.h status.cpp status.h text_string.h textfile.h widget.h \
widget.cpp wind.h
Here we tell automake that the program we are building is called "pirates", and it consists of a large number of source files. Automake has built-in rules to build C++ source files, and will put them into the generated makefile so we don't have to worry about it and can concentrate on writing good code.
#!/bin/sh
#
aclocal
automake --foreign
autoconf
./configure $*
Error handling
The first thing I wrote when I started was a base class for handling the status of an object:
/* Basic status reporting class */
#ifndef _status_h
#define _status_h
#include "textstring.h"
typedef enum {
STATUS_ERROR = -1,
STATUS_OK = 0
} status_code;
class Status
{
public:
Status();
Status(status_code code, const char *message = 0);
virtual ~Status() { }
void set_status(status_code code, const char *fmt, ...);
void set_status(status_code code) {
m_code = code;
m_message = 0;
}
void set_status_from(Status object) {
m_code = object.status();
m_message = object.status_message();
}
void set_status_from(Status *object) {
set_status_from(*object);
}
status_code status(void) {
return(m_code);
}
const char *status_message(void) {
return(m_message);
}
protected:
status_code m_code;
text_string m_message;
};
#endif /* _status_h */
This class provides a way of storing an object's status and propagating that status up to the level where messages are printed to the screen. For example, when loading a sprite object definition from a file, the load function may call an image constructor with the name of an image file. If the image object cannot load the file, then it sets the status to STATUS_ERROR, and sets the error message, which is propagated up to the top level:
STATUS_ERROR
bool Image::Load(const char *image)
{
const char *path;
/* Load the image from disk */
path = get_path(PATH_DATA, image);
m_image = IMG_Load(path);
free_path(path);
if ( ! m_image ) {
set_status(STATUS_ERROR, IMG_GetError());
}
return(status() == STATUS_OK);
}
bool Sprite::Load(const char *descfile)
{
...
m_frames = new Image *[m_numframes+1];
for ( int i=0; i<m_numframes; ++i ) {
m_frames[i] = new Image(imagefiles[i]);
// This function is in the Status base class, and copies
// the status any error message from the image object
if ( m_frames[i]->status() != STATUS_OK ) {
set_status_from(m_frames[i]);
}
}
return(status() == STATUS_OK);
}
if ( ! sprite->Load(spritefile) ) {
printf("Couldn't load sprite: %s\n", sprite->status_message());
}
This style of error handling code is used heavily throughout our game.
Object caching
Early on I realized that I would need some sort of caching algorithm for images that may be shared, sound samples that may play simultaneously, etc. I decided to write a generic resource manager that would cache accesses to all objects in the game that could be used more than once simultaneously.
The first step was to create a general base class for all cacheable objects so they could be manipulated inside the cache without knowing exactly what type of objects they are:
/* This object can be cached in the resource manager */
#ifndef _cacheable_h
#define _cacheable_h
class Cacheable
{
public:
Cacheable() {
ref_cnt = 1;
}
virtual ~Cacheable() {}
void AddRef(void) {
++ref_cnt;
}
void DelRef(void) {
/* Free this object when it has a count of 0 */
if ( --ref_cnt == 0 ) {
delete this;
}
}
int RefCnt(void) {
return(ref_cnt);
}
private:
int ref_cnt;
};
#endif /* _cacheable_h */
All cacheable objects have a reference count associated with them so that each time they are used, the count is incremented, and each time they are released, the count is decremented. In this implementation, the object frees itself when the reference count reaches zero. This allows the object to be created, passed to a function that may keep a reference to the object, and then released by the creator, which will only free the object if it is no longer in use.
This type of implementation is prone to bugs where an object is accidentally freed multiple times. I will be adding code to catch this case later in the project. Essentially I will keep a separate pool of cacheable objects and will record the stack trace information for the last object release and raise a trap signal when I detect the object being released again. This can be done on Linux using a set of functions included in glibc 2.0 and newer that allow you to record and print stack trace information from within your application. See /usr/include/execinfo.h for more information.
Note that the destructor for the cacheable object is virtual. This is necessary so that when the base class is freed, the proper derived class destructors are called. Otherwise, just the base class destructor will be called and objects left around in the derived classes will be leaked.
Once the cacheable objects are created, I need a cache to hold them:
/* This is a data cache template that can load and unload data at will.
Items cached in this template must be derived from the Status class
and have a constructor that takes a filename as a parameter.
*/
template<class T> class ResourceCache : public Status
{
public:
ResourceCache() {
m_cache.next = 0;
}
~ResourceCache() {
Flush();
}
T *Request(const char *name) {
T *data;
data = 0;
if ( name ) {
data = Find(name);
if ( ! data ) {
data = Load(name);
}
if ( data ) {
data->AddRef();
}
}
return(data);
}
void Release(T *data) {
if ( data ) {
if ( data->RefCnt() == 1 ) {
log_warning("Tried to release cached object");
} else {
data->DelRef();
}
}
}
/* Clear all objects from the cache */
void Flush(void) {
while ( m_cache.next ) {
log_debug("Unloading object %s from cache",
m_cache.next->name);
Unload(m_cache.next->data);
}
}
/* Clear all unused objects from the cache
This could be faster if the link pointer wasn't trashed by
the unload operation...
*/
void GarbageCollect(void) {
struct cache_link *link;
int n_collected;
do {
for ( link=m_cache.next; link; link=link->next ) {
if ( link->data->RefCnt() == 1 ) {
Unload(link->data);
break;
}
}
} while ( link );
log_debug("Cache: %d objects garbage collected", n_collected);
}
protected:
struct cache_link {
char *name;
T *data;
struct cache_link *next;
} m_cache;
T *Find(const char *name) {
T *data;
struct cache_link *link;
data = 0;
for ( link=m_cache.next; link; link=link->next ) {
if ( strcmp(name, link->name) == 0 ) {
data = link->data;
break;
}
}
return(data);
}
T *Load(const char *file) {
struct cache_link *link;
T *data;
data = new T(file);
if ( data->status() == STATUS_OK ) {
link = new struct cache_link;
link->next = m_cache.next;
link->name = strdup(file);
link->data = data;
m_cache.next = link;
} else {
set_status_from(data);
delete data;
data = 0;
}
return(data);
}
void Unload(T *data) {
struct cache_link *prev, *link;
prev = &m_cache;
for ( link=m_cache.next; link; link=link->next ) {
if ( data == link->data ) {
/* Free the object, if it's not in use */
if ( data->RefCnt() != 1 ) {
log_warning("Unloading cached object in use");
}
data->DelRef();
/* Remove the link */
prev->next = link->next;
delete link;
/* We found it, stop looking */
break;
}
}
if ( ! link ) {
log_warning("Couldn't find object in cache");
}
}
};
This implementation is relatively straightforward. I use a template because this cache object will be used with several different types of data: images, music, sounds, etc. Cacheable data is kept in a singly linked list, and if a requested object is not in the cache, it will be loaded dynamically. When data is no longer needed, it is not immediately freed, but is released to the general pool in case it will be needed in the near future.
When tracking down a subtle problem, I often find that the code goes through many complicated steps before I find the source of the problem. I tend to sprinkle printf() statements throughout the suspect areas of code to get a feeling for what is being executed and when. Often this is the only way to debug a problem when cross-compiling to Win32.
The cache has a garbage collection function that frees all unused objects. I will use this during development to detect object leaks in the code. When I perform garbage collection, I can traverse the list of remaining objects to make sure that nothing is left that should have been freed.
Logging functions
It is useful to have various logging functions to print debug messages, show warnings to the user, etc. I whipped up a set of functions that we use extensively in the game. At a later point, we will probably need better control over what is being printed and when. For example, we may want to print object cache information, but not widget constructor/destructor logs, etc.
Here is what we use now:
/* Several logging routines */
#include <stdio.h>
#include <stdarg.h>
void log_warning(const char *fmt, ...)
{
va_list ap;
fprintf(stderr, "WARNING: ");
va_start(ap, fmt);
vfprintf(stderr, fmt, ap);
va_end(ap);
fprintf(stderr, "\n");
}
void log_debug(const char *fmt, ...)
{
#ifdef DEBUG
va_list ap;
fprintf(stderr, "DEBUG: ");
va_start(ap, fmt);
vfprintf(stderr, fmt, ap);
va_end(ap);
fprintf(stderr, "\n");
#endif
}
Note the use of varargs: va_start(), vsprintf(), va_end(). This allows us to have printf-like functionality in our logging, allowing statements such as:
log_warning("Only %d objects left in cache", numleft);
These functions could be extended to pop up dialog boxes, or log to a file as well as print to standard error. We'll see what we'll need for our game in the future.
Reinventing the wheel
In many cases, if you need to do something in your code, chances are good that it has been done before. Whenever you embark on a coding project, the first thing to do is search the net for other projects that are similar. You may find that you don't need to do any work at all -- someone may have already written the code you need.
A good place to look on the net is Freshmeat (see Resources). Many open source projects have their projects listed on Freshmeat.
In our case, we needed code to load various image formats, and code to load and play sounds and music. Since this article focuses on designing a game with SDL, we will look for code already designed to work with our library of choice.
To load images, we will use the SDL_image library; to load and play audio clips and music, we will use the SDL_mixer library (see Resources).
We also needed a simple singly linked list implementation, and for that we turned to the Standard Template Library (or STL for short) released by SGI (see Resources).
The Standard Template Library has varying implementations on different platforms, so we may end up using something different in the final game, but for now it provides a nice set of basic utility objects.
Putting it all together
We now have all the basic pieces to begin putting something on the screen. We have code to load images and music, code to display to the screen, and glue to put it all together. We added a sample splash screen, and Nicholas Vining contributed sample title music. This is the result:
You will need the libraries mentioned above already installed to run the binary sample available in Resources.
Conclusion
We have begun the initial coding for our game, and the results are promising. So far we can play the initial splash sequence and we have some of the basic infrastructure for the game completed. Next month we hope to have the first pass of the ship combat ready for testing, and lots of pictures of ships and landscapes. and amateur artist. She and Sam are co-developing a pirate role-playing game ? | http://www.ibm.com/developerworks/library/l-pirates2/index.html | crawl-002 | refinedweb | 2,833 | 59.33 |
Creating an Outlook Message File with C#
I've been working with a government agency lately, and came to notice that the software system they're using dates back to the stone ages. Many a time, the head of department is required to send an email to other head of departments within the same organization, each with a similar content, yet with attachment of nominal roll of folks under each of the departments (PS: HR Stuff).
On every occasion that I observe, this poor balding guy has to create a template in outlook and copy it 20 times, adding the list of recipients to the individual MSG file and attaching the relevant set of excel/word document file for each of these departments.
Of course, as a software engineer, the first thing on the mind is definitely, "AUTOMATION"!
Microsoft Office Outlook does have an interop dll which specifically allows for this to be done, easily! Another altanative would definitely be making use of Visual Studio Tools for Office (VSTO).
To save you the trouble of looking for this DLL, i've attached a copy to this blogpost.
1. To begin, let's first create a C# Winforms Project in Visual Studio.
2. Next, let's add the reference to the interop DLL (attached on this blog post). You should see something like the following.
3. For the purpose of this tutorial, we'd create a simple windows form which takes in the mail receipient, subject, message and attachment fields. Once you're familiar with how the code (which we'd discuss later), you can work more magic around this (e.g. creating automation processes)
Note: The Importance ComboBox should have the following values "High","Normal" and 'Low".
4. Next step, is to wire up the code-behind for the Save Button. It's optional whether you want to register the Outlook Interop in the namespace or not. In this example, I do not so so, thus qualifying the full path in the code (below).// Creates a new Outlook Application Instance
Outlook.Application objOutlook = new Outlook.Application();
// Creating a new Outlook Message from the Outlook Application Instance
Outlook.MailItem mic = (Outlook.MailItem)(objOutlook.CreateItem(Outlook.OlItemType.olMailItem));
mic.To = toTextBox.Text;
mic.CC = ccTextBox.Text;
mic.BCC = bccTextBox.Text;
// Assigns the Subject Field
switch (importanceComboBox.SelectedItem.ToString())
{
case "High":
mic.Importance = Outlook.OlImportance.olImportanceHigh;
break; case "Normal":
mic.Importance = Outlook.OlImportance.olImportanceNormal;
break; case "Low":
mic.Importance = Outlook.OlImportance.olImportanceLow;
break;
}
// Define the Mail Message Body. In this example, you can add in HTML content to the mail message body
// Adds Attachment to the Mail Message.
// Note: You could add more than one attachment to the mail message.
// All you need to do is to declare this relative to the number of attachments you have.
// Save the message to C:\demo.msg. Alternatively you can create a SaveFileDialog to
// allow users to choose where to save the file
Easy isn't it? Besides creating mail messages, you can also create other outlook items such as task, calendar objects and more. More to come in the future posts. Do let me know your comments/views on this post.
Attachments:
Tutorial in PDF Format ()
Interop.Outlook () | http://weblogs.asp.net/darrensim/creating-an-outlook-message-file-with-c | CC-MAIN-2015-14 | refinedweb | 533 | 56.86 |
Feb 13, 2020 08:20 AM|gugatodua|LINK
I did everything according to official documentation.
(all the signalr related files are located in myproject/market/alpha)
this is my Hub class
public class AlphaHub : Hub { public void Send(string name, string message) { Clients.All.broadcastMessage(name, message); } }
startup
namespace Coinmania.market.alpha { public class SIgnalRStartup { public void Configuration(IAppBuilder app) { app.MapSignalR(); } } }
script(located in Default.aspx)
<!--Script references. --> <!--Reference the jQuery library. --> <script src='<%: ResolveClientUrl("/Scripts/jquery-3.3.1.min.js") %>'></script> <script src='<%: ResolveClientUrl("/Scripts/jquery.signalR-2.2.2.min.js") %>'></script> <script src="/signalr/hubs"></script> <!--Add script to update the page and send messages.--> <script type="text/javascript"> $(function () { // Declare a proxy to reference the hub. var alpha = $.connection.alphaHub; // Create a function that the hub can call to broadcast messages. alpha.client.broadcastMessage = function (message) { alert(message); }; // Start the connection. $.connection.hub.start(); }); </script>
I am really curious. What is the problem? When I created a separate test project, it worked fine.
Feb 13, 2020 04:28 PM|smtaz|LINK
We don't know what the problem is because you haven't stated what you are expecting to happen and what is actually happening. I would probably start by using your browser's dev tools though. You may have some errors that are blocking it from working or the paths to your javascript libraries may be incorrect in this project.
Feb 13, 2020 04:42 PM|gugatodua|LINK
signalr/hubs is an auto-generated file.
The problem is, it does not get generated. Here is what error says
Failed to load resource: the server responded with a status of 404 (Not Found)
When I see the path to the hubs file, it says localhost:8888/signalr/hubs
Star
9821 Points
Feb 14, 2020 09:54 AM|Brando ZWZ|LINK
Hi gugatodua,
Could you please tell me your folder build-up for your whole application?
I guess the 404 error is you used the wrong signlar hubs path.
Best Regards,
Brando
3 replies
Last post Feb 14, 2020 09:54 AM by Brando ZWZ | https://forums.asp.net/p/2164133/6294502.aspx?Re+signalr+hubs+file+does+not+get+generated | CC-MAIN-2020-40 | refinedweb | 350 | 59.19 |
Red Hat Bugzilla – Bug 65687
HPLaserJet4L print errors
Last modified: 2008-05-01 11:38:02 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.79 [en] (X11; U; Linux 2.4.18-3 i586)
Description of problem:
HPLaserJet4L still works fine under RH-6.0 with LaserJet-laserjet filter. Fails
under RH-7.2 and 7.3. In latter puts characters periodically in left margin and
in those lines smears characters across page. Originally had 1Mb memory on
printer. Now upgraded to 2Mb. Has no visible effect in 6.0,7.2 or 7.3.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Any printing.
2.
3.
Additional info:
I have had no response in 5 days.
Is there a simple solution of replacing components (e.g. ghostview/ghostscript)
from RedHat-6.0 in RedHat-7.3? I purchased RedHat-6.0,6.1,7.0,7.1,7.2 disks
from you so have source as well as binaries.
Please attach the /etc/alchemist/namespace/printconf/local.adl file from an
installed 7.3 machine with a print queue set up for this printer. Does the
test page have the same artifacts on the page?
(Changing version to 7.3, the latest one mentioned in your report.)
local.adl using "laserjet"
^_<8B^@^@^@^@^@^@^CmVAnb0^P=s^UQ> ^AJ[U^B*,
U$@*$TjI2^R^C^9q6v*n_oD<90>B<88><81>e<8B>TK^"g^{cqLS^L<9F>>Rf<<93>BP<9E><8D>L^MW|^Zw<86>8IQL3I>$qj^D!kOGfW^\w^LcH^ScnO<9C><91>Ix<8C><99>i<84>N`Z^^`;}Acm]C^? b
2+^Y34_^VM^TS<82>%<96>^E![c< <99>^DU%^Q==^@?g<86>Q<96>*~<8B>~<95>$$B^W^@!k\0#^?fNFf^T,^\S^_^F<81><98>Q,^P#B^Zv\<9F>?M|Ex^FS%=<83>5XTNH~NI^V^\F<81>;^?6<8D>W[[T\~DvLK<83>)
pBP^U^X)<8B><93>^B%d<9D>FG$,xi1<[T)r^ 5t<97><94>)NS
lj>&8<81>X^\/ju^_>o}jOm/tZBK<86>Wb85<92>%h:}kJj]<84>(SMm^P^UH5g^Ep<82>^T
WyaB<8A>8 yEphM^X^T)d
<91>dgXT7Yz[;~t<89>0%<8A>yP`ty"efS^?fGn$},<9B>wW<9D>u^D[^FU^<93>/Et{<8F>^Ow-TJ $ P1=<88>aA<8A><9F>D6Sd<E<92>FPpK\2).<98>Q<80>s\<82><95>U`S!<95>+atH^U^B"8++*v
<83>^@pxiIV&ZH3#^CTo.+^EYMkhZc0Z<87>^F07i:^G8QCv3;<99>:^d^D^Gfe-s=W%9`Y^?c=*q~cp<9F>u\+XX^Wmt<8B>&.5Cs^Nu_;^]^L.u(^AcE5^Mj{FVMi^TZ{{Tg<92>5Mhs,WXg;j-K53U+]~N^G
<83>;[#G<9D>?c`<82>~l
^@^@
local.adl using "ljet4"
^_<8B^@^@^@^@^@^@^CmVAnb0^P=s^UQ>`^C^EZ-^DTY^T*H<81>TI(T<93>e^Q^C^9q6v*n_oD@!D
<81>2Ej%7H<99>w^x<s4<83><87>7<84>^Y/$^W<94>'C3s#m><8C>Z^C^\'hASI^$ql^D!kO<86>f[
^\5^Lc@ccfO<9D>!Ix^B3<9D>@5=@6ozw]~wf'<8A><83>H4`LR|[4VL1<96>Xf<84>l<8F>3<9C>&^RT<97>Ftr^D|<9E>ESCF[*]_t'
^E^Q:^Yg<86>^]ySw<4#`n<98>zD<8C>b<81>^X^UR0g~lejOC3
<98>2m=,B&rBroF6`0
\Y#i<[^<drG6gZ^ZLY<80>^S<82>*@H]<9C>d(&/tQ
aAO<8B>em*J<99>s^("?$Lq<9E>N`Ss5A1Dfx5Se{^;vDvB'&<dx%<9A>)!B<90>4F<93>I<87>)Kw^S"H67AT
U<9E>^W@s<92><83>&\e<89>^K)^V9M.<82>CkB H%<8B>$?C":IR_Zq#K<84>-UL#^B'K^S-7<9D>xS;rGugY<;j,^Fl^]4k@<98>|-"^?^?ww[C-^D<8A>s
^US<83>Xo"{u|8O0$^Khv%.<98>^T^W^L(@y&AGv`S!%%a$!^D^A^Q<9C>^U%U}5^@X\w<92>^V<89>
^VRMUi75
+z^]M_^\Wkxz^OF\7~'^ZH~tG^SW<8B><9C>`H9,uv`9$^S<}v]+:n^?NvYC=<8A>^?}RC?i`Z'<gO7<9D>n/w)=
XD^XO/mP_32jJ^O&V:\&^7,mF<9B>c=F:[SoYJ<85>m7W^].t08{%rTz^G,s&Oi
^@^@
Please use the 'create a new attachment' link so that it doesn't get garbled.
Created attachment 62425 [details]
Requested file
Created attachment 62865 [details]
Requested print file
Try using the "Laserjet 4L"->ljet4, gimp-print, or omni drivers.
I have tried them all. ljet4 and laserjet give best output but not the quality of
laserjet under RedHat-6.0. It appears that random lines get garbled. As I
wrote originally, in those random lines pixels get displaced (apparently to the
right) so the worst lines are difficult to decipher. I never had a problem
until the RedHat-7x series. Early Slackware and RedHat versions ran fine (I'm
currently transferring my print jobs to an older machine running RH-6.0) using
the same printer. I have not tried more recent versions of other diatributions.
Has anyone found a solution to this problem?
I am havign the same problem. Everything worked well under RedHat 6.9.5
(Pinstripe), but fails with 7.3.
I have a LaserJet 1100. For me, any text (of any standard font) always prints
fine. However, when there are any graphics on the page, parts of the graphics
are heavily pixelated (scan-lines are "smeared") and random characters appear
along the left margin on each affected line.
I have tried the same things that the person reported tried, with no
solution... Any thoughts?
[email protected]: Please attach your
/etc/alchemist/namespace/printconf/local.adl file too. Also, are you using
the 'ljet4' driver (which is the recommended one for that printer model)?
I am using the Laserjet/laserjet driver. The ljet4 driver does not work for
me. It prints about 10% of the first page and then prints junk characters and
leaves junk in the printer buffer.
I will attach the local.adl files for both using the laserjet and ljet4 drivers.
Also, I printed and then scanned in four pages.
1. page-RH6-lj4dith.gif
A mapquest page using RedHat 6.9.5 lj4dith driver
2. page-RH7-laserjet.gif
The same page using RedHat 7.3 laserjet driver
3. page-RH7-ljet4.gif
The same page using RedHat 7.3 ljet4 driver
4. RH7-ps-test-ljet4.gif
The RedHat 7.3 Postscript test page using ljet4 driver
(The Test page prints fine using the laserjet driver)
I will attach the gif images of the scanned-in pages. Notice in #2 the smearing
of the text in Line 10 of the directions
Thanks.
Created attachment 67244 [details]
/etc/alchemist/namespace/printconf/local.adl using laserjet driver
Created attachment 67245 [details]
/etc/alchemist/namespace/printconf/local.adl using ljet4 driver
Created attachment 67246 [details]
A mapquest page using RedHat 6.9.5 lj4dith driver (a good result)
Created attachment 67247 [details]
The same page using RedHat 7.3 laserjet driver (smearing problems)
Created attachment 67248 [details]
The same page using RedHat 7.3 ljet4 driver (corrupt)
Created attachment 67249 [details]
The RedHat 7.3 Postscript test page using ljet4 driver (corrupt)
Please try this:
gs -q -dBATCH -dSAFER -dNOPAUSE -sDEVICE=ljet4 -sOutputFile=output.ljet4 \
postscript-file-that-normally-breaks.ps
and then set up a RAW print queue for the HPLJ4L and print output.ljet4 to it.
(Or for someone who wants to try this with a locally-attached printer, do:
'cat output.ljet4 > /dev/lp0').
Let me know if _that_ works.
Also, if you save a PostScript file of a web page that you want to print
(print to file), and print that afterwards, are the smears always in the same
places? Could you attach that file please?
Problem Solved! Thank you for your help.
I was using the printer via SMB share connection, and the "Translate \n => \r\n"
option was selected. When I de-selected that option, everything works fine.
The "Laserjet/laserjet" driver no longer prints the random characters along the
left margin and no longer smears the lines containing graphics. (Presumably
this was due to the extra \r's).
The "ljet4" driver now prints everything correctly (and at a higher resolution
than the Laserjet/laserjet driver).
Based on your suggestion, I connected the printer directly to the parallel port,
and everything worked fine, so I began debugging the SMB connection. Strange
that the default SMB connection has CR/LF translation on. I had just kept the
default and was only changing the Driver and Driver Options...
For me, the problem is resolved.
(This option defaults to 'off' in the current beta.) | https://bugzilla.redhat.com/show_bug.cgi?id=65687 | CC-MAIN-2017-30 | refinedweb | 1,436 | 61.53 |
Subject: [Boost-announce] [TypeIndex 3.0] Review Manager's Report
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2014-05-04 12:00:21
Dear Boost,
The second round of peer review of proposed Boost.TypeIndex finished
on Wed 30th, and here is the review manager's report. My thanks to
the five reviewers who took the time and trouble to submit full
reviews, plus to everyone else who commented.
Votes regarding acceptance:
4 voted to accept TypeIndex into Boost immediately.
1 voted to accept into Boost with conditions.
Overall: I recommend immediate acceptance into Boost.
Conditions:
1. No one liked the present boost::typeind namespace, almost anything
else would be better. I see the namespace has already changed in the
develop branch to boost::typeindex, so I think this is already
solved.
Other common comments:
* More than one reviewer mentioned how they liked the docs.
* More than one reviewer expressed a desire for an ability to
programmatically parse through type specifier strings. I agree with
Antony that that is another library which isn't TypeIndex (something
Boost.Spirit based which works directly with mangled symbol string
representations, representing them as a partial AST - preferably also
with a libclang compatible backend - would be great, but that is a
totally different library and TypeIndex 3.x provides extensibility
for such an additional function).
* More than one reviewer sought some method of preventing link if you
mix up incompatible uses of TypeIndex and/or with RTTI. | http://lists.boost.org/boost-announce/2014/05/0402.php | CC-MAIN-2016-30 | refinedweb | 242 | 56.15 |
Is Camel IoC friendly?
The quick answer is, yes absolutely!
. Now for the longer answer...
Spring
In particular we've gone out of our way to make Camel work great with Spring and to reuse all of Spring 2.x's power. For example
- the CamelContext, Components and Endpoints and their dependent beans can be configured in Spring using Spring custom XML namespaces or traditional bean/property elements
- we implement a spring BeanPostProcessor to allow POJOs to be injected with Camel resources along with powerful Bean Integration which allows any spring-injected POJO to be used inside Camel along with full support for Spring Remoting.
What we've tried to do is implement the Inversion Of Control With Smart Defaults pattern; namely that you can configure Camel in a single XML element (or very small amont of XML) to get going, then you can overload default configurations to add more explicit configuration as and when you need it.
Other IoC containers
Spring is clearly the leading IoC container; though there are some others such as Guice, OSGi, Pico, HiveMind & Plexus so we have made the IoC pluggable in Camel.
For example camel-core has no dependencies on anything other than commons-logging; camel-spring contains all the Spring integration etc.
We hope to have closer integration to other IoC containers, particularly with Guice. The current mechanism for hooking into other IoC containers are
- Injector is used to perform dependency injection on a type when sing the Inversion Of Control With Smart Defaults pattern
- Registry this strategy is used to abstract away the ApplicationContext such as to use JNDI or OSGi to lookup services on demand
Using no IoC container
Some folks don't even use an IoC container and thats fine too
. For example you can just use camel-core with pure Java and then wire things together using just Java code (or some scripting language etc).
By default when referring to components, endpoints or beans by name, it'll try look them up in the JNDI context and we've got a POJO based JNDI provier if you need one of those too. | https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=64009&showComments=true&showCommentArea=true | CC-MAIN-2016-30 | refinedweb | 354 | 51.11 |
BTT - Block Translation Table¶
1. Introduction¶
Persistent memory based storage is able to perform IO at byte (or more accurately, cache line) granularity. However, we often want to expose such storage as traditional block devices. The block drivers for persistent memory will do exactly this. However, they do not provide any atomicity guarantees. Traditional SSDs typically provide protection against torn sectors in hardware, using stored energy in capacitors to complete in-flight block writes, or perhaps in firmware. We don’t have this luxury with persistent memory - if a write is in progress, and we experience a power failure, the block will contain a mix of old and new data. Applications may not be prepared to handle such a scenario.
The Block Translation Table (BTT) provides atomic sector update semantics for persistent memory devices, so that applications that rely on sector writes not being torn can continue to do so. The BTT manifests itself as a stacked block device, and reserves a portion of the underlying storage for its metadata. At the heart of it, is an indirection table that re-maps all the blocks on the volume. It can be thought of as an extremely simple file system that only provides atomic sector updates.
2. Static Layout¶
The underlying storage on which a BTT can be laid out is not limited in any way. The BTT, however, splits the available space into chunks of up to 512 GiB, called “Arenas”.
Each arena follows the same layout for its metadata, and all references in an arena are internal to it (with the exception of one field that points to the next arena). The following depicts the “On-disk” metadata layout:
Backing Store +-------> Arena +---------------+ | +------------------+ | | | | Arena info block | | Arena 0 +---+ | 4K | | 512G | +------------------+ | | | | +---------------+ | | | | | | | Arena 1 | | Data Blocks | | 512G | | | | | | | +---------------+ | | | . | | | | . | | | | . | | | | | | | | | | | +---------------+ +------------------+ | | | BTT Map | | | | | +------------------+ | | | BTT Flog | | | +------------------+ | Info block copy | | 4K | +------------------+
3. Theory of Operation¶
a. The BTT Map¶
The map is a simple lookup/indirection table that maps an LBA to an internal block. Each map entry is 32 bits. The two most significant bits are special flags, and the remaining form the internal block number.
Some of the terminology that will be subsequently used:
For example, after adding a BTT, we surface a disk of 1024G. We get a read for the external LBA at 768G. This falls into the second arena, and of the 512G worth of blocks that this arena contributes, this block is at 256G. Thus, the premap ABA is 256G. We now refer to the map, and find out the mapping for block ‘X’ (256G) points to block ‘Y’, say ‘64’. Thus the postmap ABA is 64.
b. The BTT Flog¶ 32 bytes. Entries are also padded to 64 bytes to avoid cache line sharing or aliasing. Flog updates are done such that for any entry being written, it: a. overwrites the ‘old’ section in the entry based on sequence numbers b. writes the ‘new’ section such that the sequence number is written last.
c. The concept of lanes¶
While ‘nfree’ describes the number of concurrent IOs an arena can process concurrently, ‘nlanes’ is the number of IOs the BTT device as a whole can process:
nlanes = min(nfree, num_cpus)
A lane number is obtained at the start of any IO, and is used for indexing into all the on-disk and in-memory data structures for the duration of the IO. If there are more CPUs than the max number of available lanes, than lanes are protected by spinlocks.
d. In-memory data structure: Read Tracking Table (RTT)¶
Consider a case where we have two threads, one doing reads and the other, writes. We can hit a condition where the writer thread grabs a free block to do a new IO, but the (slow) reader thread is still reading from it. In other words, the reader consulted a map entry, and started reading the corresponding block. A writer started writing to the same external LBA, and finished the write updating the map for that external LBA to point to its new postmap ABA. At this point the internal, postmap block that the reader is (still) reading has been inserted into the list of free blocks. If another write comes in for the same LBA, it can grab this free block, and start writing to it, causing the reader to read incorrect data. To prevent this, we introduce the RTT.
The RTT is a simple, per arena table with ‘nfree’ entries. Every reader inserts into rtt[lane_number], the postmap ABA it is reading, and clears it after the read is complete. Every writer thread, after grabbing a free block, checks the RTT for its presence. If the postmap free block is in the RTT, it waits till the reader clears the RTT entry, and only then starts writing to it.
e. In-memory data structure: map locks¶
Consider a case where two writer threads are writing to the same LBA. There can be a race in the following sequence of steps:
free[lane] = map[premap_aba] map[premap_aba] = postmap_aba
Both threads can update their respective free[lane] with the same old, freed postmap_aba. This has made the layout inconsistent by losing a free entry, and at the same time, duplicating another free entry for two lanes.
To solve this, we could have a single map lock (per arena) that has to be taken before performing the above sequence, but we feel that could be too contentious. Instead we use an array of (nfree) map_locks that is indexed by (premap_aba modulo nfree).
f. Reconstruction from the Flog¶¶
Read:
- Convert external LBA to arena number + pre-map ABA
- Get a lane (and take lane_lock)
- Read map to get the entry for this pre-map ABA
- Enter post-map ABA into RTT[lane]
- If TRIM flag set in map, return zeroes, and end IO (go to step 8)
- If ERROR flag set in map, end IO with EIO (go to step 8)
- Read data from this block
- Remove post-map ABA entry from RTT[lane]
- Release lane (and lane_lock)
Write:
- Convert external LBA to Arena number + pre-map ABA
- Get a lane (and take lane_lock)
- Use lane to index into in-memory free list and obtain a new block, next flog index, next sequence number
- Scan the RTT to check if free block is present, and spin/wait if it is.
- Write data to this free block
- Read map to get the existing post-map ABA entry for this pre-map ABA
- Write flog entry: [premap_aba / old postmap_aba / new postmap_aba / seq_num]
- Write new post-map ABA into map.
- Write old post-map entry into the free list
- Calculate next sequence number and write into the free list entry
- Release lane (and lane_lock)
4. Error Handling¶¶
The BTT can be set up on any disk (namespace) exposed by the libnvdimm subsystem (pmem, or blk mode). The easiest way to set up such a namespace is using the ‘nd]: | https://www.kernel.org/doc/html/v5.8/driver-api/nvdimm/btt.html | CC-MAIN-2022-33 | refinedweb | 1,151 | 67.69 |
A view class that displays a model as a tree or tree table. More...
#include <Wt/WTreeView>
A view class that displays a model as a tree or tree table.
The view displays data from a WAbstractItemModel in a tree or tree table. It provides incremental rendering, allowing the display of data models of any size efficiently, but the first columns are given a width of 150px, and the first column takes the remaining size. Note that this may have as consequence that the first column's size is reduced to 0. Column widths of all columns, including the first column, can be set through the API method setColumnWidth(), and also by the user using handles provided in the header.
Optionally, the treeview may be configured so that the first column is always visible while scrolling through the other columns, which may be convenient if you wish to display a model with many columns. Use setColumn1Fixed() to enable this behaviour. treeview may receive a drop event on a particular item, at least if the item indicates support for drops (controlled by the ItemIsDropEnabled flag).
You may also react to mouse click events on any item, by connecting to one of the clicked() or doubleClicked() signals.
Usage example:
The view provides a virtual scrolling behavior which relies on Ajax availability. When Ajax is not available, a page navigation bar is used instead, see createPageNavigationBar(). In that case, the widget needs to be given an explicit height using resize() which determines the number of rows that are displayed at a time.
A snapshot of the WTreeView:
Collapses a node.
Signal emitted when a node is collapsed.
Returns the column format string (deprecated)..
Expands a node.
Signal emitted when a node is expanded.
Returns whether a node is expanded..
Returns whether toplevel items are decorated.
Signal emitted when scrolling.
Implements Wt::WAbstractItemView.
Scrolls the view to an item.
Scrolls the view to ensure that the item which represents the provided
index is visible. A
hint may indicate how the item should appear in the viewport (if possible).
Implements Wt::WAbstractItemView.
Sets if alternating row colors are to be used.
Configure whether rows get alternating background colors, defined by the current CSS theme.
The default value is
false.
Reimplemented from Wt::WAbstractItemView.
Sets the column format string (deprecated).
The DisplayRole data for that column is converted to a string using asString(), with the given format.
The default value is "".
Changes the visibility of a column.
Reimplemented from Wt::WAbstractItemView.
Sets the column width.
For a model with columnCount() ==
N, the initial width of columns 1..
N is set to 150 pixels, and column 0 will take all remaining space..
Expands or collapses a node.
Sets the header height.
The default value is 20 pixels.
Reimplemented from Wt::WAbstractItemView.
Sets the CSS an object name.
The object name can be used to easily identify a type of object in the DOM, and does not need to be unique. It will usually reflect the widget type or role. The object name is prepended to the auto-generated object id().
The default object name is empty.
Reimplemented from Wt::WAbstractItemView.
Sets whether toplevel items are decorated.
By default, top level nodes have expand/collapse and other lines to display their linkage and offspring, like any node.
By setting
show to
false, you can hide these decorations for root nodes, and in this way mimic a plain list. You could also consider using a WTableView instead.. | https://webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1WTreeView.html | CC-MAIN-2021-31 | refinedweb | 578 | 68.36 |
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
On 05/11/2013 at 10:03, xxxxxxxx wrote:
(My apologies in advance, I could not get the formatting for code blocks working properly.)
Is there a way to make SaveDocument() name the current document the name that the file was saved as?
When I use
c4d.documents.SaveDocument(doc, path, c4d.SAVEDOCUMENTFLAGS_DIALOGSALLOWED, c4d.FORMAT_C4DEXPORT)
the file is saved, however, the previous document name remains.
In order to deal with this, I have had to write the following, but I refuse to believe this is the correct way. I have to be missing something.
new_doc = c4d.documents.SaveDocument(doc, path, c4d.SAVEDOCUMENTFLAGS_DIALOGSALLOWED, c4d.FORMAT_C4DEXPORT)
c4d.documents.CloseAllDocuments()
c4d.documents.LoadFile(path)
TIA!
On 05/11/2013 at 17:00, xxxxxxxx wrote:
@TheAdmiral,
If I understand your question:
- you previously saved your current file as file1.c4d,
-or you opened a file named file1.c4d;
-and now, you want to save it as file2.c4d .
On 06/11/2013 at 09:42, xxxxxxxx wrote:
BaseDocuments are derived from BaseList2d, so SetName() and GetName() will apply (or the respective identifier).
On 06/11/2013 at 10:33, xxxxxxxx wrote:
@Focus3D
I'm really making a script to save incrementally and update all render settings paths to point to the new version. I would use the "Save Incremental" that Maxon is using, but it does not give an option to save to the next available version if the version number you are trying to save as is already taken or update your output paths.
When I run c4d.documents.SaveDocumet(), it saves the file properly, bu the current document keeps the original name.
So if I am in FILE_v001.c4d and I run my script, FILE_v002.c4d exists, my paths have been updated to reflect version v002, but the document name is still FILE_v001
@littledevil
Much appreciated!!
On 06/11/2013 at 21:24, xxxxxxxx wrote:
@TheAdmiral,
import os
import c4d
from c4d import documents, gui
def main() :
doc = documents.GetActiveDocument()
path = "D:\\TestPY\\file5.c4d"
newc4dName = os.path.split(path)[1]
if newc4dName.endswith('.c4d') :
newc4dName = newc4dName[:-4]
if os.path.isfile(path) == True:
gui.MessageDialog('File Exist!')
return
if documents.SaveDocument(doc, path, c4d.SAVEDOCUMENTFLAGS_DIALOGSALLOWED, c4d.FORMAT_C4DEXPORT)==True:
doc.SetDocumentName(newc4dName)
if __name__=='__main__':
main()
On 07/11/2013 at 09:32, xxxxxxxx wrote:
Thanks Focus.
How about something like this for the 'isfile'.
(Please excuse the use of a 3 button dialog when i should be making one with proper buttons. This is still in testing. Ive also formatted the message string to be semi legible. )
if os.path.isfile(o['new_scn_path']) :
glob_pat = o['new_scn_path'].replace(o['new_ver'], '*')
up_ver = str(int(re.search('_v(\d+)', max(glob.glob(glob_pat))).group(1))+1).zfill(len(o['new_ver']))
w = c4d.gui.MessageDialog('The version you are tryng to save as already exists.\n\n
Would you like to SAVE UP to ver '+up_ver+',
OVERWRITE ver'+o['new_ver']+',
or CANCEL this menu?\n\n
YES WILL SAVE UP to ver '+up_ver+',
NO WILL OVERWRITE ver '+o['new_ver']+',
and CANCEL WILL CLOSE this dialog, and kill the script.\n\n
Choose now, go!', 3)
if w == 6:
o['new_scn_name'] = o['new_scn_name'].replace('_v'+o['new_ver'], '_v'+up_ver)
o['new_scn_path'] = o['new_scn_path'].replace('_v'+o['new_ver'], '_v'+up_ver)
o['new_ver'] = up_ver
saveIncremental(o)
elif w == 2:
return False
I also have a function that then walks through the render settings to see which are set to render to paths that have the file's name and/or version number in them, and updates them values accordingly.
On 07/11/2013 at 09:33, xxxxxxxx wrote:
Also, am I missing the button for code blocking? It doesn't seem to appear here and using <code> didn't seem to work either.
TIA
On 07/11/2013 at 10:13, xxxxxxxx wrote:
@TheAdmiral,
for code use:
"["CODE]put your code here[/CODE"]" (remove all ")
also check for BBcode list.
Enable BBcodes to format post must be checked.
I will check your code as soon as finish with my new plugin.
Glad, I was helpfull.
On 07/11/2013 at 10:14, xxxxxxxx wrote:
Awesome! Thanks! | https://plugincafe.maxon.net/topic/7523/9416_savedocument | CC-MAIN-2022-05 | refinedweb | 731 | 59.8 |
26 January 2009 16:10 [Source: ICIS news]
WASHINGTON (ICIS news)--A closely watched business bellwether showed slight improvement in December, but the Conference Board said on Monday that the modest 0.3% gain in its index of leading economic indicators does not signal a recovery.
The New York City-based business analysis group said the December increase in its leading economic index (LEI) was significant in that it was the first upward movement in that measure for about 18 months.
However, Ken Goldstein, economist at the 92-year-old organisation, said the December gain was very modest and showed just how moribund the ?xml:namespace>
“It is good that December shows a positive 0.3% increase instead of a negative 0.3%,” Goldstein said.
“But it also is a measure of how weak the economy is, even with all of these effort to stimulate consumers and business,” he added.
The leading economic index is made up of ten economic measures. Four of those showed increases in December, including improvements in the real money supply, low interest rates, new orders for consumer goods and manufacturers’ orders for non-defence capital goods.
Goldstein noted, however, that much of December’s 0.3% gain was credited to efforts by the Federal Reserve Board - the
“The gains in consumer goods and manufacturers’ orders were very, very slim in December,” Goldstein said, “and the LEI improvement is due chiefly to the near zero interest rate the Fed has set and the fact that they’ve got the mint printing presses running double shifts.”
He said that the fact that the LEI had not responded more positively to the increased money supply, record low interest rates and the federal stimulus package “tells us clearly that the recession has an awful lot of momentum behind it, and we’re not likely to see any real improvement before the second half this year”.
Despite those modest gains in consumer goods and capital equipment orders, he said, “if you look almost anywhere else - labour, general industry, stock prices, housing, automotives - you see bad | http://www.icis.com/Articles/2009/01/26/9187737/us-leading-indicator-gained-in-dec-but-not-significantly.html | CC-MAIN-2014-10 | refinedweb | 345 | 55.98 |
Detecting Theft by Hyperobject Abuse
By Arch D. Robison (Intel), Updated
Intel® Cilk™ Plus employs work stealing, where threads steal work from other threads. Though a good Intel Cilk Plus program should not depend on whether work is stolen or not, you might be curious about when it occurs in a program. This blog shows how to satisfy that curiousity with a holder hyperobject, a generally useful abstraction that I'll abuse somewhat to detect stealing.
Hyperobjects are Cilk's way of doing parallel reductions. The best reference on them is the award winning paper "Reducers and Other Cilk++ Hyperobjects". I'll summarize their proper use as background for their abuse.
Intel® Cilk™ Plus allows control flow to fork into multiple strands of execution. A hyperobject is a special kind of object for which there are multiple views. Any two concurrently executing strands get separate views that they can safely update without locking. Here is a trivial example:
#include <cilk/cilk.h> #include <cilk/reducer_opadd.h> cilk::reducer_opadd<int> X; void f() { X += 1; } int g() { cilk_spawn f(); X += 2; cilk_sync; return X.get_value(); }
Variable X is declared as a hyperobject for doing addition reduction over type int. Here is what happens when function g() is called and there is an idle thread that successfully steals.
- The cilk_spawn causes control flow to fork into two strands of execution. One strand calls f(), which executes X+=1.
- The other strand is a continuation of the caller's execution. The idle thread may steal the continuation and executes "X+=2". If stealing does not occur, the strand executes after f() returns.
- Execution waits at the cilk_sync until both strands complete.
- The value of X is returned.
As a practical matter, stealing is unlikely in this example because f() executes so quickly that the original thread will get to execution of the continuation before a thief can grab it. But for exposition's sake, assume that += is slow.
If X were an ordinary int, having two strands concurrently update X would be unsafe, because one of the updates might stomp on the other. Declaring it as a hyperobject avoids the problem. When the thief operates on X, it gets a gets a fresh view, initialized to 0, the identity element for addition. The cilk_sync causes the views to be merged into a single view. The declaration of X implies merging by addition, so the net effect of calling g() is X+=3.
If the continuation is not stolen, a fresh view is not created. The X+=1 happens first, followed by X+=2, both on the same view. Thus the next effect of g() is still X+=3.
The reduction operation should be associative, or as in the case of floating-point addition, practically associative for given circumstances. But it need not be commutative. When two views merge, the reduction operation is always applied such that the left operand is the view for the spawned routine and the right operand is the view for the stolen continuation.
Now about detecting steals. The idea is to detect when a view is fresh. I'll use a global boolean hyperobject "Seen" for this purpose. I'll use "Seen==false" to indicate that a view is fresh. This convention is simplifies the code by exploiting default initialization of new views, so I do not have to explicitly specify their value.
The domain of the "reduction" is boolean values. The reduction operation is "x op y → x". It's associative but not commutative. A hyperobject with this reduction operation is called a holder, because it holds the left value. The right value is irrelevant because it corresponds to the view created by a thief, and inspected by that thief. (Exercise for the mathematically inclined: do left and right identity values exist for this operation? What are they?)
Here is a complete example of using a holder to detect steals. You can compile and run with Intel® Cilk Plus compiler.
#include <cilk/cilk.h> #include <cilk/reducer.h> template<typename U> class Holder { struct Monoid : cilk::monoid_base<U> { static void reduce(U *left, U *right) {} }; cilk::reducer<Monoid> impl; public: inline U& get_view() { return impl.view(); } }; Holder<bool> Seen; #include <cstdio> int main() { Seen.get_view() = true; cilk_for( int i=0; i<100000000; ++i ) { bool& x = Seen.get_view(); if( !x ) { std::printf("Iteration %d was stolenn",i); x = true; // Must not forget this part. } } return 0; }
Here is an explanation of the program's parts:
- Template Holder<U> defines a holder for views of type U, using the template cilk::monoid_base defined in <cilk/reducer.h>.
- Template class cilk::reducer<Monoid> requires that signature Monoid::reduce compute "*left = *left op *right". The implementation is trivial for the holder reduction operation "x op y → x".
- The views conceptually live in Holder::impl. A strand invokes impl.view() to get a reference to its view.
- Seen is declared as a Holder<bool>. The views of the bool live in Seen.impl. The initial view is default initialized to false.
- Function main marks the initial view as seen.
- Function main executes a cilk_for loop, which parcels out chunks of iterations as work.
- Each iteration inspects its view of Seen. If the view is false, then it is a freshly created view, which indicates that the chunk was stolen. The view is marked as seen after the theft is reported.
The code abuses hyperobjects in the sense that its visible behavior depends on whether steals happen or not. Not all uses of holders are abusive. Consider a scratchpad variable that is used for temporary storage, but its final value does not matter. Changing the variable to a holder enables a Cilk program to safely operate on it, without any locks, because each thread will get its own view as necessary. Furthermore, each view is operated on in the left-to-right order of the original program. For some applications, that's a valuable property that enables maintaining complex state in the scratchpad, which is not necessarily a practical thing to do with thread-local storage.
Footnote: Intel and Cilk are trademarks of Intel Corporation in the U.S. and/or other countries.
1 commentTop
Anonymous said on Nov 23,2010
This is very good stuff, we appreciate this very much. Keep on this, its critical for complete understanding of what is happening in real time. We hardware/software engineers love this, proves where problems lie.
Add a CommentSign in
Have a technical question? Visit our forums. Have site or software product issues? Contact support. | https://software.intel.com/en-us/blogs/2010/11/22/detecting-theft-by-hyperobject-abuse | CC-MAIN-2017-26 | refinedweb | 1,091 | 58.79 |
Overview
TODO: Write introduction. Goal is to build a cross compiler targeting pdp11-aout.
TODO: What kind of joint header do I want across all the articles in a set, linking them together?
This document guides you through building a cross compiler using GCC on FreeBSD. This cross compiler will run on a modern AMD64 machine but emit code which runs on a DEC PDP-11. In addition to the compiler, these instructions also build associated tooling like an assembler, linker, etc.
In this manner, modern programming tools like make, git, vi, and more can be used to write modern C in your usual style while targeting the PDP-11.
Installation
These instructions were tested on FreeBSD 12 with GCC 7.3.0 from ports as the host compiler. The cross compiler was built from the GCC 10.2.0 and Binutils 2.35.1 source code.
Building GCC requires GNU Make. On FreeBSD either install via pkg install gmake or build from ports under devel/gmake. On Linux your make command is probably gmake in disguise. Run make --version and see if the first line is something like GNU Make 4.2.1.
In addition to GCC, we will also need to compile GNU Binutils since it contains the assembler, linker, and other necessary tools.
Obtain suitable source code tarballs from these links.
I like to build all my cross compilers under one folder in my home directory, each with a version-specific sub-folder.
setenv PREFIX "$HOME/cross-compiler/pdp11-gcc10.2.0"
Remember to make any $PATH changes permanent. For tcsh on FreeBSD, this means editing ~/.cshrc. To set the $PATH for this session, execute the following.
setenv PATH "$PREFIX/bin:$PATH"
The $TARGET environment variable is critical as it tells GCC what kind of cross compiler we desire. In our case, this target triplet is requesting code for the PDP-11 architecture, wrapped in an a.out container, with no hosted environment. That means this is a bare-metal target. There will be no C standard library, only the C language itself.
setenv TARGET pdp11-aout
Both GCC and binutils are best built from outside the source tree. Make two directories to hold the build detritus. Use a clean build directory each time you reconfigure or rebuild.
cd $HOME/cross-compiler/pdp11-gcc10.2.0
mkdir workdir-binutils
mkdir workdir-gcc
Build binutils first. Assuming you saved the source code in ~/cross-compiler/pdp11-gcc10.2.0/, simply do the following.
cd $HOME/cross-compiler/pdp11-gcc10.2.0
tar xzf binutils-2.35.1.tar.gz
cd workdir-binutils
Now configure, build and install binutils.
../binutils-2.35.1/configure --target=$TARGET --prefix="$PREFIX" \
    --with-sysroot --disable-nls --disable-werror
gmake
gmake install
Verify that you can access a series of files in your $PATH named pdp11-aout-* (e.g. pdp11-aout-as), and that checking their version with pdp11-aout-as --version results in something like GNU Binutils 2.35.1.
With binutils built and installed, now it’s time to build GCC.
Follow a similar process to unpack the source code, but note the new
requirement to download dependencies. In older versions of GCC this command was
./contrib/download-dependencies instead of
./contrib/download-prerequisites.
cd $HOME/cross-compiler/pdp11-gcc10.2.0 tar xzf gcc-10.2.0.tar.gz cd gcc-10.2.0 ./contrib/download-prerequisites cd ../workdir-gcc
Configuring GCC proceeds similarly to binutils. Both GNU
as and GNU
ld are
part of binutils, hence the directive informing GCC to use them.
../gcc-10.2.0/configure --target=$TARGET --prefix="$PREFIX" \ --disable-nls --enable-languages=c --without-headers \ --with-gnu-as --with-gnu-ld --disable-libssp gmake all-gcc gmake install-gcc
Verify that
pdp11-aout-gcc --version from your
$PATH reports something like
pdp11-aout-gcc 10.2.0.
That’s it, you’re done. You now have a cross compiler that will run on your
workstation and output PDP-11 compatible binaries in
a.out format.
At this point you can skip ahead to the next section or continue reading about some potential pitfalls of the cross compiler we’ve just built.
Potential Pitfalls
Below are a few problems I ran into while using my cross compiler, some of which may apply when compiling your own code for the PDP-11. I hope that by mentioning the problems here, along with symptoms and workarounds, you might be saved some time when encountering them.
Compiling libgcc
Our newly built cross compiler expects
libgcc to exist at link time, but we
didn’t build it. So what is
libgcc anyway? Quoting from the GCC
manual:.
Why didn’t we build
libgcc? Because we encountered this error
Problem
Consider the following C code which performs division and modulus operations on 16-bit unsigned integers.
#include "pdp11.h" #include <stdint.h> uint16_t a=8, b=64; printf("b \% a = %o\n", b % a); printf("b / a = %o\n", b / a);
If we try to compile this code, we receive two errors from the linker.
pdp11-aout-ld: example.o:example.o:(.text+0x8e): undefined reference to `__umodhi3' pdp11-aout-ld: example.o:example.o:(.text+0xac): undefined reference to `__udivhi3'
The two functions referenced,
__umodhi3 and
__udivhi3 are part of
libgcc.
The names reference the unsigned modulo or division on
half-integer types. Per the GCC
manual,
the half-integer mode uses a two-byte integer.
Solution
There are two ways around this problem.
The first (and superior) option is figuring out how to build
libgcc. The
command to initiate the build is
gmake all-target-libgcc, executed under the
same environment in which
gmake all-gcc was executed earlier in this guide.
If you figure out what I’m doing wrong, let me know.
The second option is to implement your own functions for
__umodhi3(),
__udivhi3(), and whatever else might come up. It’s not hard to make something
functional, though catching all the edge cases could be challenging.
Using uint32
Although the PDP-11 utilizes a 16-bit word, GCC is clever enough to allow operations on 32-bit words by breaking them up into smaller operations. For example, in the following assembly code generated by GCC, note how the 32-bit word is pushed onto the stack as two separate words.
uint32_t a=0710004010 uint16_t a=010; add $-4, sp add $-2, sp mov $3440, (sp) mov $10, (sp) mov $4010, 2(sp)
Problem
Whenever I try to make real use of code with
uint32_t, I encounter internal
compiler errors like the following.
memtest.c:119:1: error: insn does not satisfy its constraints: } ^ (insn 95 44 45 (set (reg:HI 1 r1) (reg/f:HI 16 virtual-incoming-args)) "memtest.c":114 14 {movhi} (nil)) memtest.c:119:1: internal compiler error: in extract_constrain_insn_cached, at recog.c:2225 no stack trace because unwind library not available Please submit a full bug report, with preprocessed source if appropriate. See <> for instructions. *** Error code 1
In each case, adding a single
uint32_t operation in one spot in the code
resulted in a compiler error in a completely different part of the code.
Removing the offending
uint32_t line caused the program to again compile and
execute normally. In each case, I already had
uint32_t related code working
elsewhere in the program.
Solution
Until I track down the bug causing these errors, I’ve been using structs
containing pairs of
uint16_t words and writing helper functions to perform
operations on them.
GNU Assembler Bug
If you’re stuck using an older version of GNU binutils, as I was while cross
compiling from a SPARCstation 20, there is a bug in the GNU assembler that
crops up whenever double-indirection is used in GCC. It was present until at
least GNU Binutil 2.28 but appears to be fixed no later than 2.32 per the
following code snippet in
binutils-2.32/gas/config/tc-pdp11.c.
if (*str == '@' || *str == '*') { /* @(Rn) == @0(Rn): Mode 7, Indexed deferred. Check for auto-increment deferred. */ if ( ...
Problem compiles this to assembly it generates code of the form
@(Rn) when
assigning a value to
**csp thus causing the value
0 to overwrite the value
060000 at
*csp if GNU
as is used to assemble the code.
Solution
The following patch, tested on GNU binutils 2.28, fixes the bug. It’s a little
hacky since it overloads the
operand->code variable to pass unrelated state
information to
parse_reg().
---; | https://www.subgeniuskitty.com/development/pdp-11/modern_c_software_development/pdp11-cross-compiler | CC-MAIN-2022-40 | refinedweb | 1,413 | 57.87 |
Groovy version of code to get an Amazon S3 Download URL
Recently I had need of some Groovy code to integrate with Amazon S3. I searched around, but it seemed as if they only code out there was set up for Grails. I couldn't find a simple class that I could drop into our code at work and run with that. Now maybe I didn't search long enough, and maybe the Grails-related stuff would have worked, but it just didn't feel right.
I then ran across this UDF written by ColdFusion developer Barney Boisvert: Amazon S3 URL Builder for ColdFusion
This was perfect. While I had been looking for an S3 "library", all I really needed was a way to generate the URL. I took his CFML and converted it into the following Groovy code. Groovy people - feel free to laugh/comment on how I could improve this:
private String getS3URL(key,secret,bucket,objectkey,expires=900) { def algo = 'HmacSHA1' def expireValue = ((new Date().getTime())/1000+expires).intValue() def stringToSign = 'GET\n\n\n'+expireValue+'\n/'+bucket+'/'+objectkey def signingKey = new javax.crypto.spec.SecretKeySpec(secret.getBytes(),algo) def mymac = Mac.getInstance(algo) mymac.init(signingKey)
def rawSig = mymac.doFinal(stringToSign.getBytes()) def sig = new sun.misc.BASE64Encoder().encode(rawSig); sig = java.net.URLEncoder.encode(sig) def destURL = ""+key+"&Signature=$sig&Expires=$expireValue" return destURL
}
I think it's interesting to compare both versions. My version got rid of the requestType parameter since we didn't need to worry about that for our code.
You know it's funny - I never used to understand why people didn't use semicolons at the end of their code when it was allowed - but now that I'm getting used to it, it really bugs me when I have to use them. | https://www.raymondcamden.com/2009/03/03/Groovy-version-of-code-to-get-an-Amazon-S3-Download-URL | CC-MAIN-2020-40 | refinedweb | 303 | 55.54 |
We.
Goood work.
I love flat design
Congrats !
Autotest Integration and the flat theme are my favorites !
Thank you very much – qtcreator is already very good and getting better and better with each release!
Very nice to see two of the main highlights have nothing specific to Qt:
– Clang Static Analyzer integration
– Automatic CMake triggering
Props to the team to improve Qt Creator as a friendly general C++ IDE, not just a friendly Qt IDE 🙂
Looking good. I love the integrated test runner too.
Has there been any progress on the diagraming (CASE) features? It would be great to have a way to reference diagrams from Doxygen or QDoc comments and create images from them in the final generated doc.
I have written a command line tool which allows to export diagrams from a qmodel as pdf, svg, png, … I will release this tool soon. One can integrate this tool into documentation generation rules (first export diagrams then generate doxygen) and reference the diagrams by filename.
Does someone know if the code model (either the built-in one or the clang one) now supports auto-completion when dereferencing smart pointers? (std::shared_ptr, std::unique_ptr)
I just tried it myself, and the answer is yes, it works 🙂
I have really missed this functionality!
Qt Creator 4.0 beta1 crashes on OSX 10.10.5 when launching:
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x00007fff90e8a286 __pthread_kill + 10
1 libsystem_c.dylib 0x00007fff878ef9b3 abort + 129
2 org.qt-project.QtCore 0x000000010ecb81e9 0x10eca2000 + 90601
3 org.qt-project.QtCore 0x000000010ecb9bb7 QMessageLogger::fatal(char const*, …) const + 231
4 org.qt-project.QtGui 0x000000010e595b49 QGuiApplicationPrivate::createPlatformIntegration() + 6521
5 org.qt-project.QtGui 0x000000010e595b6b QGuiApplicationPrivate::createEventDispatcher() + 27
Please create a bug report and also run Qt Creator from Terminal (“/Qt Creator.app/Contents/MacOS/Qt Creator”) and paste the output you see there at the time of crash. Thanks.
Qt Creator is getting better indeed, but am afraid if GPL versioning will discourage its use or not !
@Aamer: Why do you think GPL would discourage someone in using Qt Creator? GPL is not contagious to the applications written with Qt Creator.
@Aamer The GCC compiler collection is also GPL licensed, did that stop its adoption?
Actually it did. My memory is hazy but I’m fairly certain that Apple developed Clang directly as a response to the stricter licensing introduced after GCC 4.2.
Your work is not a derivative work of QtCreator nor of the compiler you used
Post says GLPv3, I think you mean GPLV3.
Yes, fixed.
For those interested in speeding up clang model parser, set QTC_CLANG_DELAYED_REPARSE_TIMEOUT environment variable to 50 (value in miliseconds), and your code will be parsed faster. I think that the default timeout is 1500 miliseconds. Use with care if your classes are too big.
Btw, I know this isn’t the best place for feature requests but are you going to improve session management? One feature I’d very like is automatic creation/saving of sessions per opened project (because sometimes I have to work in multiple Qt Creator windows and if I close them, the last one will overwrite default sessions which may be annoying).
Also, I recently used std::valarray and was disappointed by lack of “pretty printing” support in QtC debugging : it only showed size and first element of the array.
Put something like
def qdump__std__valarray(d, value):
d.putItemCount(value["_M_size"])
d.putPlotData(value["_M_data"], value["_M_size"], d.templateArgument(value.type, 0))
into your share/qtcreator/debugger/stdtypes.py or put it into a separate file and point your Tools->Options->Debugger->GDB, Extra Debugging Helpers setting there.
If you want in addition the graphing display option (“Plot in separate window”) to show up under “Change Display Format” in the context menu, you would also have to add
def qform__std__valarray():
return arrayForms()
Please note that in general, comments in blog posts are not a good place for bugreports and feature requests. It’s less effort to handle them at bugreports.qt-project.org.
I’m getting a lot of “use of undeclared identifier ‘char16_t'” errors in MSVC’s header files from the static analyzer, making clang give up because of too many errors.
I don’t think that this is how it’s supposed to be, but is it a known issue already? Should I file a bug report?
Yes, please report this. Provide MSVC version and preferably a minimal example to reproduce.
Thanks, reported at.
Looking good. I like the new flat them. Question: is it possible to make use of multiple cores when running the clang static analyzer? For compiling my project I edit the project settings by adding “-j8” to the make arguments. I wish I the analyzer could similarly be sped up.
very expected feature, clang analyzer.
but is disabled in main menu in my installation (
how to enable this?
as I remember there was Analyze tab in mode selector, but this build doesn’t contain this…
The blog post mentions the Analyze mode.
How to enable analyze mode?
Last build of qtc doesn’t contain this mode on selector panel…
Debug and Analyze modes have been merged, the result is called Debug mode. All the analyzing tools formerly available in the tool selector combobox in Analyze mode are now in the same location in Debug mode. All Analyzer related menu entries are in the same location as before. There has been no functionality removed.
THank you for you patient.
Checked your recommendations. yes, I see clang static analyzer mode in combo in Debug mode selector, as there are also existing vagrind checks .
however when I’m switching to clang in a combo nothing happens and Clang Static Analyzer in Analyze main menu is also disabled. looks like this feature disabled in my configuration and I don’t know how to eneable. I’ve only checked that clang binaries are existing in the folder with qtcreator exe
I believe that found the dependency between this problem and configuration. looks like a bug. raised the request here:
I’m confused by the ending of Exception 1 to the GPLv3. The part that reads: “you may create a larger work which contains the output of this application and distribute that work under terms of your choice, so long as […] the work does not in itself generate output that contains the output from this application in its original or modified form”.
Does that mean this example is not allowed?
in file /path/to/main.cpp :
int main()
{
QFile file(“/path/to/main.cpp”); file.open();
QByteArray outputFromQtCreatorInOriginalForm = file.readAll();
QFile file2(“/whatever.cpp”); file.open();
file2.write(outputFromQtCreatorInOriginalForm);
}
So since I wrote /path/to/main.cpp in Qt Creator, it’s “output from this application”, and since it generates itself, it “generates output from this application in it’s original […] form”.
Am I incorrect in my interpretation? Does that file need to fulfill the requirements of the GPLv3? I hope I’m wrong.
Is it possible to send some UI feedback somewhere to someone who takes care of it for the final release?
@Tom: Yes, for example via mailing list.
Any news about option to create new classes for CMake projects using GUI, just like for classic project files?
This is the only missing thing (maybe except poor translations support) that makes me keep old .pro file around.
What does not work with CMake is to add the files to the build system. There is just no reliable way to do that without a lot of help from CMake or maintaining a parser for the CMake language (which unfortunately is a *lot* of work as that language keeps evolving all the time). Since adding files to CMake projects is not possible, you can not trigger the wizard via the context menu in the Projects view in the sidebar.
You should be able to use the class wizard for CMake based projects via File>New File or Project. You should also be able to assign a keyboard shortcut to trigger the wizard directly via Tools>Options>General>Keyboard. You will then need to register the new file with the build system manually though by editing the CMakeLists.txt file yourself.
Yeah… It’s a good point.
What about enabling that action anyway, but showing message box with warning (which could be disabled using check box) saying that output files have to be added manually?
Why Qt Creator is still using undocable panels. Imagine … how to work in qt creator for multi-monitor configuration? Nobody suffering this?
Is there plan to update panels widgets so that they can be detached from main window?
I use QtCreator on a multi-monitor system every day. For me the “Window -> Open in New Window” command does the job. I can have two maximized windows, or split either or both.
Looks like there have been some changes to qbs as well. My project no longer links with the qbs this beta carries. Are there any docs on how to update my qbs files to be compatible with the new version?
I see that the syntax of getEnv has changed, but fixing that didn’t help my linking issue.
Your question is impossible to answer with the information you have given here. Please file a proper bug report at bugreports.qt.io, detailing what is going wring exactly.
I’m really happy to see static analyser come to the free version of Creator. This is long over due, in fact I think it was a mistake to lock it to the commercial license as that would only lead to developers creating poorly performing apps with Qt.
I’m not sure about the new theme though as it has been applied inconsistently and un themed elements (e.g. the Projects screen, the project/target icon, the designer etc) really stick out.
Also I’m sure if it’s intentional but the help viewer now looks AWFUL. I assume this is a bug?
Please see for details.
Love continued development on this program. Thank you!
Just want to mention that new clang code model doesn’t parse my huge project well, while the qt code model figures things out well. Any other configuration instructions to get this to work? I’ve had to disable it…
I was wondering how I could add custom CMake flags to the CMake process. For one of the projects I need to specify -Wno-dev for the CMake step otherwise my console gets swamped by warnings. Before I’ve added this in the configure cmake dialog but now I can not seem to find the right place.
Looks like you can do that in the Projects page under Build Steps, click Details and then add the desired options on the Tool arguments: line.
The Flat theme looks good indeed.
The auto-completion will invalid when typing parameter in a calling function.
This problem only would occurred when the function was typed first time,if you move the cursor out of the function position and then go back and retyping, the auto-completion will work again
Am I the only victim?
How can one create CMake cache entries?
I tried passing -Dfoo=bar in the tool arguments for CMake, but it doesn’t work, I had to edit the cache manually…
The Cmake setting editor doesn’t allow adding things either, just editing.
That is currently not possible at a project level, as I did not see a use-case for that: CMake should report all the settings it will evaluate to the Cache, so anything not there should not get evaluated anyway.
You can add new settings at the Kit level though (which will effect all CMake builds using that kit).
Thanks Tobias. I see that there are some pre-populated cmake config variables in the kit which are always passed to cmake, even if I remove them manually from the config.
Unfortunately this is breaking cross-compilation for me. In particular, passing CMAKE_CXX_COMPILER results in errors when running cmake for one of my projects. I don’t have any control over the toolchain file (CMAKE_TOOLCHAIN_FILE), so I can’t fix it.
Tested with QtC 4.0.82 built from the 4 branch, cmake 2 and 3. | http://blog.qt.io/blog/2016/03/23/qt-creator-4-0-beta-released/ | CC-MAIN-2018-30 | refinedweb | 2,043 | 65.32 |
.]
Following are some "usage" oriented remarks about value types.
Value types contain the values they are assigned:
int a = 1; // the variable "a" contains "1" of value type int
Value types can also be created by using the new keyword. Using the new keyword initializes the variable with the default value obtained from the type's default constructor:
int a = new int(); // using the default constructor via the new keyword return a; // returns "0" in the case of type Int.
Value types can be declared without being initialized, but they must be initialized to some value before being used:
int a; // This is perfectly acceptable return a; // NOT acceptable! You can't use "a" because "a" doesn't have a value!
Value types cannot equal null. .NET 2.0 provides a Nullable type to get around this limitation, which is discussed in the next section, but null is not a valid value for value types:
int a = null; // Won't compile - throws an error.
If you copy a Value type to another Value type, the value is copied. Changing the value of the copy has no effect on the value of the original. The second is merely a copy of the first - they are in no way connected after assignment. This is fairly intuitive:
int var1 = 1; int var2 = var1; //the value of var1 (a "1" of type int) is copied to var2 var2 = 25; // The "1" value in var2 is overwritten with "25" Console.WriteLine("The value of var1 is {0}, the value of var2 is {1}", var1, var2);
Which would result in the output:
The value of var1 is 1, the value of var2 is 25
Changing the value of the copy (var2 in this instance) had no effect on the value of the original (var1). This is different from reference types which copy a reference to the value, not the value itself.
Value types cannot be derived from.
Value types as method parameters are passed by value by default. A copy of the value-type is made and the copy is passed to the method as a parameter. If the parameter is changed inside the method it will not affect the value of the original value type.
Nullable type[edit]
A nullable type...
- Is a generic type
- Is an instance of System.Nullable struct.
- Can only be declared on value types.
- Is declared with System.Nullable<type> or the shorthand type? - the two are interchangeable.
System.Nullable<int> MyNullableInt; // the long version int? MyNullableInt; // the short version
- Accepts the normal range of values of the underlying type, as well as null.
bool? MyBoolNullable; // valid values: true || false || null
Be careful with nullable booleans! In if, for, while or logical evaluation statements a nullable boolean will equate a null value with false -- it will not throw an error.
Methods: T GetValueOrDefault() & T GetValueOrDefault(T defaultValue)
Returns the stored value or the default value if the stored value is set to null.
Properties: HasValue & Value
Nullable types have two read only properties: HasValue and Value.
HasValue is a boolean property that returns true if Value != null. It provides a means to check your type for a non-null value before using it where you might throw an error:
int? MyInt = null; int MyOtherInt; MyOtherInt = MyInt.Value + 1; // Error! You can't add null + 1!! if (MyInt.HasValue) MyOtherInt = MyInt.Value + 1; // This is a better way.
Value returns the value of your type, null or otherwise.
int? MyInt = 27; if (MyInt.HasValue) return MyInt.Value; // returns 27. MyInt = null; return MyInt; // returns null.
Wrapping / Unwrapping
Wrapping is the process of packaging a value m from a non-nullable type N to a nullable type N? via the expression new N?(m)
Unwrapping is the process of evaluating a nullable type N? for instance m as type N or NULL and is performed via the 'Value' property (e.g. m.Value).
Note: Unwrapping a null instance generates the exception System.InvalidOperationException
The ?? Operator (aka the Null Coalescing Operator)
While not for use solely with Nullable types, the ?? operator proves very useful when you want to use a default value instead of a null value. The ?? operator returns the left operand of a statement if not null, otherwise it returns the right operand.
int? MyInt = null; return MyInt ?? 27; // returns 27, since MyInt is null
For more information see the blog entry by R. Aaron Zupancic on the ?? Operator
Building a value type[edit]
Building a value type must be very simple. The following example defines a custom "point" structure with only 2 double members. See boxing and unboxing for a discussion of implicit conversion of value types to reference types.
Building and using a custom value type (struct)
using System; using System.Collections.Generic; using System.Text; // namespace ValueTypeLab01 { class Program { static void Main(string[] args) { MyPoint p; p.x = 3.2; p.y = 14.1; Console.WriteLine("Distance from origin: " + Program.Distance(p)); // Wait for finish Console.WriteLine("Press ENTER to finish"); Console.ReadLine(); } // method where MyPoint is passed by value public static double Distance(MyPoint p) { return Math.Sqrt(p.x * p.x + p.y * p.y); } } // MyPoint is a struct (custom value type) representing a point public struct MyPoint { public double x; public double y; } }
Using a user-defined value type[edit]
The above example can be used here. Note that the p variable does not have to be initialized with the new operator.
Using enumerations[edit]
The following sample shows simple uses of the System enumeration DayOfWeek. The code is much simpler to read than testing for an integer value representing a day. Note that using ToString() on an enum variable will give the string representation of the value (ex. “Monday” instead of “1”).
The possible values can be listed using Reflection. See that section for details.
For a discussion of the Enum class see MSDN
There is a special type of enumeration called a flags enumeration. The exam objectives do not mention it specifically. See MSDN if you are interested.
Simple use of enumerations
using System; using System.Collections.Generic; using System.Text; // namespace EnumLab01 { class Program { static void Main(string[] args) { DayOfWeek day = DayOfWeek.Friday; if (day == DayOfWeek.Friday) { Console.WriteLine("Day: {0}", day); } DayOfWeek day2 = DayOfWeek.Monday; if (day2 < day) { Console.WriteLine("Smaller than Friday"); } switch (day) { case DayOfWeek.Monday: Console.WriteLine("Monday processing"); break; default: Console.WriteLine("Default processing"); break; } int i = (int)DayOfWeek.Sunday; Console.WriteLine("Int value of day: {0}", i); // Finishing Console.WriteLine("Press ENTER to finish"); Console.ReadLine(); } } }
Building an enumeration[edit]
Building a custom enumeration is pretty straightforward as shown by the following example.
Declaring a simple enumeration
using System; using System.Collections.Generic; using System.Text; // namespace EnumLab02 { class Program { public enum MyColor { None = 0, Red, Green, Blue } static void Main(string[] args) { MyColor col = MyColor.Green; Console.WriteLine("Color: {0}", col); // Finishing Console.WriteLine("Press ENTER to finish"); Console.ReadLine(); } } }
Using reference types[edit]
Reference types are more commonly referred to as objects. Classes, Interfaces and Delegates are all reference types, as well as the built-in reference types System.Object and System.String. Reference types are stored in managed Heap memory.
Unlike Value types, reference types can be assigned the value null.
Copying a reference type copies a reference that points to the object, not a copy of the object itself. This can seem counter-intuitive at times, since changing a copy of a reference will also change the original.
A Value type stores the value it is assigned, plain and simple - but a Reference type stores a pointer to a location in memory (on the heap). Think of the heap as a bunch of lockers and the Reference type holds the locker number (there are no locks in this metaphor). Copying a Reference type is like giving someone a copy of your locker number, rather than a copy of its contents. Two Reference types that point to the same memory is like two people sharing the same locker - both can modify its content:
Example of using Reference types
public class Dog { private string breed; public string Breed { get {return breed;} set {breed = value;} } private int age; public int Age { get {return age;} set {age = value;} } public override string ToString() { return String.Format("is a {0} that is {1} years old.", Breed, Age); } public Dog(string dogBreed, int dogAge) { this.breed = dogBreed; this.age = dogAge; } } public class Example() { public static void Main() { Dog myDog = new Dog("Labrador", 1); // myDog points to a position in memory. Dog yourDog = new Dog("Doberman", 3); // yourDog points to a different position in memory. yourDog = myDog; // both now point to the same position in memory, // where a Dog type has values of "Labrador" and 1 yourDog.Breed = "Mutt"; myDog.Age = 13; Console.WriteLine("Your dog {0}\nMy dog {1}", yourDog.ToString(), myDog.ToString()); } }
Since the yourDog variable and the the myDog variable both point to the same memory store, the ouput of which would be:
Your dog is a Mutt that is 13 years old. My dog is a Mutt that is 13 years old.
As a practice for manipulating reference types you may want to work with the String and StringBuilder classes. We have put these with the text manipulation section but manipulating strings is a basic operation of almost all programs.
Using and building arrays[edit]
See MSDN for reference information.
Using classes[edit]
Building a custom class[edit]
Using interfaces[edit]
Building a custom interface[edit]
Using attributes[edit]
Using generic types[edit]
The use of the four major categories of System Generic Types will mainly be demonstrated elsewhere in this book:
- The nullable type was discussed above
- A whole section follows on Generic collections
- The generic event handler will be discussed in the Event / Delegate section.
- The generic delegates will also be discussed in the Event / Delegate section as well as in the Generic collections section (Comparer class).
If you copy the next very simple example in Visual Studio and try to add something other than an int to the list the program will not compile. This demonstrates the strong typing capability of generics.
Very simple use of generic
using System; using System.Collections.Generic; namespace GenericsLab01 { class Program { static void Main(string[] args) { List<int> myIntList = new List<int>(); myIntList.Add(32); myIntList.Add(10); // Try to add something other than an int // ex. myIntList.Add(12.5); foreach (int i in myIntList) { Console.WriteLine("Item: " + i.ToString()); } Console.WriteLine("Press ENTER to finish"); Console.ReadLine(); } } }
You can use List<string> instead of List<int> and you will get a list of strings for the same price (you are using the same List(T) class).
Building generics[edit]
The programming of a custom generic collection was shown in the article mentioned in the topics discussion.
Here we have an example of a Generic Function. We use the trivial problem of swapping two references. Although very simple we still see the basic benefits of Generics:
- We don't have to recode a swap function for every type
- The generalization does not cost us the strong typing (try swapping an int and a string, it wont compile)
Simple custom generic function
using System; using System.Collections.Generic; using System.Text; namespace GenericsLab03 { class Program { static void Main(string[] args) { Program pgm = new Program(); // Swap strings string str1 = "First string"; string str2 = "Second string"; pgm.swap<string>(ref str1, ref str2); Console.WriteLine(str1); Console.WriteLine(str2); // Swap integers int int1 = 1; int int2 = 2; pgm.swap<int>(ref int1, ref int2); Console.WriteLine(int1); Console.WriteLine(int2); // Finish with wait Console.WriteLine("Press ENTER to finish"); Console.ReadLine(); } // Swapping references void swap<T>(ref T r1,ref T r2) { T r3 = r1; r1 = r2; r2 = r3; } } }
Next step is to present an example including a generic interface, a generic class that implements that generic interface and a class derived from that generic class. The sample also uses interface and derivation constraints.
This is another simple problem involving employees and suppliers which have nothing in common except that they can request payment to a "payment handler" (see visitor pattern).
The problem is to know where to put the logic if you have specific processing to do for a certain kind of payment just for employees. There are myriads of ways to solve that problem but the use of generics make the following sample clean, explicit and strongly typed.
The other nice thing is that it has nothing to do with containers or collections where you will find almost all of generic samples.
Please note that the EmployeeCheckPayment<T> class derives from CheckPayment<T> giving a stronger constraint on the type parameter T (must be employee not just implement IPaymentInfo). That gives us the to opportunity to have access (in its RequestPayment method) to all payment logic (from the base class) at the same time as all employee public interface (thru the sender method parameter) and that without having to do any cast.
Custom generic interface and class
using System; using System.Collections.Generic; using System.Text; namespace GennericLab04 { class Program { static void Main(string[] args) { // Pay supplier invoice CheckPayment<Supplier> checkS = new CheckPayment<Supplier>(); Supplier sup = new Supplier("Micro", "Paris", checkS); sup.InvoicePayment(); // Produce employee paycheck CheckPayment<Employee> checkE = new EmployeeCheckPayment<Employee>(); Employee emp = new Employee("Jacques", "Montreal", "bigboss", checkE); emp.PayTime(); // Wait to finish Console.WriteLine("Press ENTER to finish"); Console.ReadLine(); } } // Anything that can receive a payment must implement IPaymentInfo public interface IPaymentInfo { string Name { get;} string Address { get;} } // All payment handlers must implement IPaymentHandler public interface IPaymentHandler<T> where T:IPaymentInfo { void RequestPayment(T sender, double amount); } // Suppliers can receive payments thru their payment handler (which is given by an object factory) public class Supplier : IPaymentInfo { string _name; string _address; IPaymentHandler<Supplier> _handler; public Supplier(string name, string address, IPaymentHandler<Supplier> handler) { _name = name; _address = address; _handler = handler; } public string Name { get { return _name; } } public string Address { get { return _address; } } public void InvoicePayment() { _handler.RequestPayment(this, 4321.45); } } // Employees can also receive payments thru their payment handler (which is given by an object factory) // even if they are totally distinct from Suppliers public class Employee : IPaymentInfo { string _name; string _address; string _boss; IPaymentHandler<Employee> _handler; public Employee(string name, string address, string boss, IPaymentHandler<Employee> handler) { _name = name; _address = address; _boss = boss; _handler = handler; } public string Name { get { return _name; } } public string Address { get { return _address; } } public string Boss { get { return _boss; } } public void PayTime() { _handler.RequestPayment(this, 1234.50); } } // Basic payment handler public class CheckPayment<T> : IPaymentHandler<T> where T:IPaymentInfo { public virtual void RequestPayment (T sender, double amount) { Console.WriteLine(sender.Name); } } // Payment Handler for employees with supplementary logic public class EmployeeCheckPayment<T> : CheckPayment<T> where T:Employee { public override void RequestPayment(T sender, double amount) { Console.WriteLine("Get authorization from boss before paying, boss is: " + sender.Boss); base.RequestPayment(sender, amount); } } }
Exception classes[edit]
Some links to MSDN:
- Exceptions and exception handling - MSDN
- Handling and throwing exceptions - MSDN
- Exception Hierarchy - MSDN
- Exception Class and Properties - MSDN
Boxing and unboxing[edit]
All types derive directly or indirectly from System.Object (including value types by the way of System.ValueType). This allows the very convenient concept of a reference to "any" object but poses some technical concerns because value types are not "referenced". Comes boxing and unboxing.
Boxing and unboxing enable value types to be treated as objects. Boxing a value type packages it inside an instance of the Object reference type. This allows the value type to be stored on the garbage collected heap. Unboxing extracts the value type from the object. In this example, the integer variable i is boxed and assigned to object o:
int i = 123; object o = (object) i; // boxing
Please also note that it is not necessary to explicitly cast an integer to an object (as shown in the example above) to cause the integer to be boxed. Invoking any of its methods would also cause it to be boxed on the heap (because only the boxed form of the object has a pointer to a virtual method table):
int i=123; String s=i.toString(); //This call will cause boxing
There is also a third way in which a value type can be boxed. That happens when you pass a value type as a parameter to a function that expects an object. Let's say there is a function prototyped as:
void aFunction(object value)
Now let's say from some other part of your program you call this function like this:
int i=123; aFunction(i); //i is automatically boxed
This call would automatically cast the integer to an object, thus resulting in boxing.
The object o can then be unboxed and assigned to integer variable i:
o = 123; i = (int) o; // unboxing
Performance of boxing and unboxing
In relation to simple assignments, boxing and unboxing are computationally expensive processes. When a value type is boxed, an entirely new object must be allocated and constructed. To a lesser degree, the cast required for unboxing is also expensive computationally.
TypeForwardedToAttribute Class[edit]
- Other possible links: Marcus' Blog, NotGartner | http://en.wikibooks.org/wiki/.NET_Development_Foundation/Using_System_Types | CC-MAIN-2014-52 | refinedweb | 2,852 | 55.95 |
I'm not getting the message
I wonder what I was supposed to see.
I wonder what I was supposed to see.
I have been reading up on naming standards today. It has been quite a while since I challenged by naming convention habits.
This has all come about because I was using GhostDoc today to quickly put in the bulk of comments in an assembly for me to then go through and tweak.What I found was that GhostDoc didn’t like some of my parameter naming. This was the catalyst for my reading about naming standards after thinking about naming standards for quite a while.
I read Microsoft’s naming guidelines, but it seemed to not address the naming standards of member level variables. The closest discussion it seemed to get on the issue was guidelines for static fields. What standards are people using for member level variables? I read a recent post where it seems that everyone has a different opinion.
I have recently fixed up some code from another developer. There were a few things in the code that I got a little chuckle about. The weird thing is that I have seen these coding behaviors in a few jobs that I have had.
First one that I have to laugh about (to avoid crying) is when developers comment out code blocks, then put a modified version of that code block below the commented version. Maybe this isn’t a bad idea, but seriously, if you are using a source control system then what is the point? Isn’t that what version control is all about? You can look at older versions of the same file, and usually, you can even compare the files to see what was changed.
The second one is when I come across code that basically says if true, do something, if false, then do something, else do something else. How is the else statement ever going to get hit? I’m all for defensive coding, but is this going too far?
I came across a third one last week, but I think I’ll keep that gem to myself.
I just saw Dave saying his bit about representation of geeks in movies and TV. Overall, I think geeks get displayed in a way that is very unlike the majority of the geek population. While I agree with Dave, I think that some of the ideas in movies can also be the kind of things that geeks will want to use or build, no matter how impractical they are or difficult they would be to get.
Two things spring immediately to mind. The cool data searching tool used in Minority Report and the GUI compiler in Swordfish. While the GUI compiler offers no benefit at all, I just can’t go past a great UI no matter what it does or doesn’t do.
What’s your favorite piece of fictional movie/TV software?
Scott Guthrie posted some ASP.Net tips several weeks ago. There were heaps of great ideas that he put into his presentation, one of which was about registering controls for aspx pages.
User controls and custom controls that are used on a page need to be registered. The control registration allows a tag prefix to be defined and identifies the location where the control can be found. These control registrations are normally placed at the top of the aspx markup along with the page directive. If you drag and drop an unregistered control onto the page, the registration will be added for you (with the exception of dragging user controls onto the markup view).
If you happen to come across a situation where controls change location, assembly or namespace, then every registration for the controls affected will need to be changed. This means that, when using the typical control registration method, each aspx page that uses those controls needs to be changed. This maintenance problem is mostly solved by Scott’s tip of putting the control registrations into the web.config file by adding add elements under system.web/pages/controls.
Edit - Removed comment about this not working for master pages. Something must have gone wrong with my build as it was throwing compile errors with registrations missing from the master page. After emailing Scott and retesting this, I have found that it is fine and works with master pages as expected..
Just a couple of days ago, I went to the Perisher Blue site to see how the snow was going. There was nothing but grass apart from a light dusting at the top of the high mountains.
It’s a little different now.
Sweet!
Yesterday I installed the new Consolas font for Visual Studio 2005. It is a very nice font to code with. Grab it here... | http://www.neovolve.com/page39/ | CC-MAIN-2017-17 | refinedweb | 801 | 72.16 |
In this video, you’ll learn how to deserialize a non-serializable type given in a JSON file.
We can represent a complex object in JSON like this
{ "__complex__": true, "real": 42, "imaginary": 36 }
If we let the
load() method deserialize this, we’ll get a Python
dict instead of our desired
complex object. That’s because JSON objects deserialize to Python
dict. We can write a custom decoder function that will read this dictionary and return our desired
complex object.
def decode_complex(dct): if "__complex__" in dct: return complex(dct["real"], dct["imaginary"]) else: return dct
Now, we need to read our JSON file and deserialize it. We can use the optional
object_hook argument to specify our decoding function.
with open("complex_data.json") as complex_data: z = json.load(complex_data, object_hook=decode_complex)
Now, if we print the type of
z, we’ll see
<class 'complex'>
We have now deserialized a
complex object from a JSON file!
Congratulations, you made it to the end of the course! What’s your #1 takeaway or favorite thing you learned? How are you going to put your newfound skills to use? Leave a comment in the discussion section and let us know.
Anonymous on March 30, 2019
This was excellent!! However, it will take me a while to absorb it all. I may have to watch it again. | https://realpython.com/lessons/decoding-custom-types-json/ | CC-MAIN-2021-17 | refinedweb | 224 | 57.37 |
Hi there, does anybody knows how to apply TransmittedModelOptions to a transmitted revit model when opening it? i had a bunch of transmitted model that requires to be opened and saved as central using that option. Any help is much appreciated
This is just a guess, but would you be able to open the document using the OpenDocumentFile method, detach it from central, and then save as central? I don’t see any parts of the API using this enumeration apart from the TransmittedModelExceptions class.
@stillgotme Have a look below.
Hi @salvatoredragotta, sorry to say but the link you sent me is not of any use for this topic
To give a better understanding, when i open a workshared transmitted model, a dialogue will appear as shown below:
Obviously i can do it manually to all my workshared transmitted model, but i was hoping that there is an API to do it and hence i will be able to batch do it. So i found the API as stated above
But i do not know where to use that enumeration in anywhere of the script.
I tried using opendocument file and save as central but its still give me the same result.
Since you are here, do you have any idea how to use that particular API??
@stillgotme can you share your dyn? Have you got a sample transmitted Revit file for testing?
Hold on…
here is the sample transmitted file: Sample transmitted files.rvt (352 KB)
and i do not have a specific dyn for transmitted model cause i honestly have no idea where to start but i can share with you my open and resave as central dyn:
Open and resave as central.dyn (8.1 KB)
I don’t often do this, but @erfajo (sorry Erik) or @john_pierson (sorry John) are probably the guys to help… Orchid package has nodes for background opening and saving, and I think John was saying he was expanding the Rhythm functionality, but I don’t think either have what you’re after. I’ve dug around in the API and can’t see anything you’ve missed.
Sorry,
Mark
No apologies needed!
My nodes do not include functionality for resaving as central, so I don’t think Rhythm solves this issue at the moment. If there is any questions regarding how my background document nodes work, Rhythm is open source and the code for Application is here, .
This dialogue is not showing up for me when a document is opened in the background.
Unfortunately, I think that is all of the help I have to offer right now. I do have some updates for Rhythm background document nodes coming, but they are more of a “Stability upgrade” as referenced in the thread here.
Thanks John, you won’t get the message until you open the file…
I think the OP needs the code to ‘save as new central file in same location’ but the API doesn’t seem to give guidance on how that might be achieved?
It only defines the enumeration you would use in that method, not the method itself? Would you know where (if) that is squirreled away?
Cheers,
Mark
Ah I see, I missed that part. The new file would still have the property.
This should do it.
Setting
IsTransmitted to false on the newly saved as file via TransmissionData.
You are a gent
@john_pierson Does that mean that if i were to edit my another dyn (the one that i used transmission data to unload links without opening it), and set
IsTransmitted(false), i will still get my central model with the rvt link that i had unloaded or changed path without the dialogue of saving as new central file?
And thank you @Mark.Ackerley for directing him to this topic
I think so… I have not had a chance to test myself, but that seems to be the way. I would say to give it a shot and let us know what happens.
I find some of the API really hard to work out from first principles…
When you read Transmission Data Class ‘A class representing information on all external file references in a document.’
It really puts you off from thinking that it could be applicable to transmitted project files themselves.
right??? totally agree with you man
Thanks for the files. Below my solution just for one file (I’ll leave to you to loop for multiple files)
Hi @salvatoredragotta, so sorry to say. It does not transmit the data required when
IsTransmitted(false) is set… it is required to be set to true
okay i got it to work from your concept. However the
IsTransmitted=False has to be put after i save as central. below is the attached code if anyone is dealing with transmitted model:
import clr clr.AddReference('ProtoGeometry') from Autodesk.DesignScript.Geometry import * # Import RevitAPI clr.AddReference("RevitAPI") import Autodesk from Autodesk.Revit.DB import * from Autodesk.Revit.Attributes import* import clr clr.AddReference("RevitAPIUI") from Autodesk.Revit.UI import * # Import DocumentManager and TransactionManager clr.AddReference("RevitServices") import RevitServices from RevitServices.Persistence import DocumentManager from RevitServices.Transactions import TransactionManager from System.Collections.Generic import * # Import Revit Nodes clr.AddReference("RevitNodes") import Revit clr.ImportExtensions(Revit.Elements) clr.ImportExtensions(Revit.GeometryConversion) # Import python library import sys pyt_path = r'C:\Program Files (x86)\IronPython 2.7\Lib' sys.path.append(pyt_path) import os from System.IO import * doc = DocumentManager.Instance.CurrentDBDocument uiapp = DocumentManager.Instance.CurrentUIApplication app = uiapp.Application uidoc = DocumentManager.Instance.CurrentUIApplication.ActiveUIDocument worksharingOptions = WorksharingSaveAsOptions() worksharingOptions.SaveAsCentral = True SaveOptions = SaveAsOptions() SaveOptions.MaximumBackups = 50 SaveOptions.SetWorksharingOptions(worksharingOptions) SaveOptions.OverwriteExistingFile = True SaveOptions.Compact = True rOptions = RelinquishOptions(False) rOptions.StandardWorksets = True rOptions.ViewWorksets = True rOptions.FamilyWorksets = True rOptions.UserWorksets = True rOptions.CheckedOutElements = True sOptions = SynchronizeWithCentralOptions() sOptions.SetRelinquishOptions(rOptions) sOptions.Compact = True sOptions.SaveLocalBefore = True sOptions.SaveLocalAfter = True tOptions = TransactWithCentralOptions() TransactionManager.Instance.ForceCloseTransaction() filepaths = IN[0] RVer = "R" + app.VersionNumber[-2:] docpath = [] if IN[1] == True: try: for filepath in filepaths: file = FileInfo(filepath) filein = file.FullName modelpath = FilePath(filein) transData = TransmissionData.ReadTransmissionData(modelpath) if transData.IsDocumentTransmitted(modelpath): z = TransmittedModelOptions.SaveAsNewCentral) transData.IsTransmitted = False TransmissionData.WriteTransmissionData(modelpath,transData) else:) OUT = [filepaths,docpath] except Exception,e: OUT = str(e) else: OUT = "Please set it to true"
Huge thanks to @salvatoredragotta & @john_pierson & @Mark.Ackerley for your help | https://forum.dynamobim.com/t/transmittedmodeloptions-enumeration/33687 | CC-MAIN-2022-21 | refinedweb | 1,050 | 50.63 |
In this C++ tutorial, you will get a new practice of linking web with your C++ programs. Let us see about web programming in C++ with suitable examples.
Introduction of CGI
Common Gateway Interface(CGI) is a set of standards defining how the data is exchanged from the web server, how it is passing the web user’s request to an application program and to receive data back to the user. When any user requests for a web page, then the server returns the requested page.
This method or protocol for passing data back and forth between the server and the application is called the Common Gateway Interface (CGI) that is also a part of the Web’s Hypertext Transfer Protocol (HTTP).
Web Browsing
To understand the concept of CGI, let us see what happens when we click a hyperlink to browse a website through the internet.
- Your browser contacts the HTTP web server and demand for the URL ie. filename.
- Web Server will parse the URL and will look for the filename. If it finds requested file, then web server returns that file to the browser. Otherwise, it sends an error message.
- Web browser takes the response from a web server and displays either the received file or error message based on the received response.
Server Configuration
Before using CGI programming, the coders should make ensure the Web server supports CGI and is well configured for handling CGI programs. By custom, CGI files will have an extension as .cgi, though they are C++ executable. By default, Apache Web Server is configured to run CGI programs in /var/www/cgi-bin.
Program
#include <iostream> void main () { cout << "Content-type:text/html\r\n\r\n"; cout << "<html>\n"; cout << "<head>\n"; cout << "<title>Hello TutorialsCloud </title>\n"; cout << "</head>\n"; cout << "<body>\n"; cout << "<h3> <b> First CGI program </b> </h2>\n"; cout << "</body>\n"; cout << "</html>\n"; }
Output
Content-type:text/html <html> <head> <title>Hello TutorialsCloud </title> </head> <body> <h3> <b> First CGI program </b> </h2> </body> </html> | https://www.codeatglance.com/cplusplus/cpp-webprogramming/ | CC-MAIN-2020-10 | refinedweb | 340 | 60.04 |
3 Feb 08:15
How to generate the document for interface only from .net project?
Thanh Vo <vtthanh9999 <at> yahoo.com>
2012-02-03 07:15:02 GMT
2012-02-03 07:15:02 GMT
Hi Experts, I am investigate the DOXYGEN tool to generate the document for C# project, I have some questions: 1. How can I onfigure to generate the document for interface only (use patterns, filter, ...?) 2. How can I remove the main page and replaced by the classes page as the main page? 3. When I generate the classes, I want to split items in the list to 2 columns, namespace information as 1 column and class name as another column instead of namespace and class name in 1 column. Additional, how can I add the header column (caption) into the list (table) of the classes page. For example, I want to show: Namespace (title of column 1) Class/Interface name (title of column 2) MySample.Classes MyClass1 MySample.Classes MyClass2 MySample.Interfaces IMyInterface Thanks in advance for any attention on this topic. Best regards -- -- View this message in context: Sent from the Doxygen - Users mailing list archive at Nabble.com. ------------------------------------------------------------------------------(Continue reading) | http://blog.gmane.org/gmane.text.doxygen.general/month=20120201 | crawl-003 | refinedweb | 197 | 64.51 |
im just having so much confusion trying to figure out the right math to compute the interest. I know Im trying to calculate interest of each month...but the math statements throw me off. all i fized was the for loop. Other than that....im confused as to how i do the math calculations
this is what i fixed:
Code:#include <iostream> using namespace std; int main ( ) { int months, count = 1; double init_Balance, rate, interest = 0, new_balance, total_Interest = 0, int_Accumulated; char repeats; do { total_Interest = 0; { cout << " Credit card interest\n "; cout << "Enter: Initial balance, monthly interest rate as a decimal fraction, e.g. for 1.5% per month write 0.015, and the number of months the bill has run.\n "; cout << "I will give you the interest that has accumulated.\n "; cin >> init_Balance >> rate >> months; } for ( int count = 0; count < months; count++) { interest = ( rate * init_Balance ); new_balance = ( init_Balance + interest ); total_Interest = ( interest + ); cout << count ++; } cout.setf(ios::fixed); cout.setf(ios::showpoint); cout.precision(2); { cout << "Interest accumulated = $\n"; cin >> int_Accumulated; cout << "Y or y repeats, any other character quits. "; } }while ( repeats != 'Y' && repeats != 'y' ); return 0; } | http://cboard.cprogramming.com/cplusplus-programming/107885-programming-project-help-2.html | CC-MAIN-2015-35 | refinedweb | 186 | 57.16 |
Difference between revisions of "Draft ShapeString"
Revision as of 19:02, 9 April 2013
Description
The ShapeString tool inserts a compound shape representing a text string at a given point in the current document. Text height, tracking and font can be specified.
How to use
- Press the
Draft ShapeString button, or press S then S keys
- Click a point on the 3D view, or type a coordinate
- Enter the desired text, press ENTER
- Enter the desired size, press ENTER
- Enter the desired tracking, press ENTER
- Press ENTER to accept the displayed font, or,
- Press ... to select a font file.
Options
- To enter coordinates manually, simply enter the numbers, then press ENTER between each X, Y and Z component.
- Pressing ESC will cancel the operation.:
makeShapeString(String,FontFile,[Size],[Tracking]) : Turns a text string into a
Compound Shape using a specified font.
Example:
import FreeCAD,Draft Draft.makeShapeString("This is a sample text", "/usr/share/fonts/truetype/msttcorefonts/Arial.ttf', 200.0,10)
Limitations
- This tool is not yet available. It will be included in a future version. (as of v0.13)
- This tool currently only handles ASCII characters in the text string and font file path. Non-ASCII characters give unpredictable results. (as of v0.13)
Available translations of this page: {{|}} | https://wiki.freecadweb.org/index.php?title=Draft_ShapeString&diff=26221&oldid=26191 | CC-MAIN-2020-16 | refinedweb | 211 | 59.3 |
Cite as Gilbert Equipment Co., Inc. v. Higgins, 709 F.Supp. 1071
(S.D.Ala. 1989). This case was affirmed without any opinion, at
894 F.2d 412 (11th Cir. 1990).
GILBERT EQUIPMENT COMPANY,
INC., Plaintiff,
v.
Stephen E. HIGGINS, Director, Bureau of Alcohol, Tobacco, and
Firearms, U.S. Department of the Treasury, Defendant.
Civ. A. No. 88-0242-P.
United States District Court,
S.D. Alabama, S.D.
March 7, 1989.
Stephen Halbrook, Fairfax, Va., Alex F. Lankford, III, Hand,
Arendall, Bedsole, Greaves & Johnston, Blane Crutchfield, Mobile,
Ala., for plaintiff.
Andrea Newmark, Dept. of Justice, Sandra Schraibman, Washington,
D.C., Eugene Seidel, Asst. U.S. Atty., Mobile, Ala., for
defendant.
ORDER ADOPTING THE RECOMMENDATION OF THE MAGISTRATE
PITTMAN, Senior District Judge.
After due and proper consideration of all portions of this
file deemed relevant to the issues raised, and a de novo
determination of those portions of the recommendation to which
objection is made, the recommendation of the magistrate made under
28 U.S.C. section 636(b)(1)(B) is ADOPTED as the opinion of this
court.
An analysis considering the plaintiff's objections and the
reasons for this court adopting the magistrate's recommendation are
set forth herein.
ANALYSIS:
1. Gilbert argues that the magistrate improperly
bootstraps standards under "arbitrary-capricious" review onto the
mandamus count. Gilbert bases this argument on the magistrate's
statement that ." According to plaintiff, a mandamus claim
is irrelevant to whether an APA claim survives a deferential
"rational-relation" test, and the magistrate erred in equating the
two. Gilbert's contention is without merit. Mandamus is an
extraordinary writ which may not properly issue unless three
elements co-exist: (1) a clear right to the relief sought; (2) a
clear duty on the part of the defendant to do the act in question,
and (3) no other adequate remedy available. District Lodge No.
166, International Association of Machinist and Aerospace Workers
v. TWA Services, Inc., 731 F.2d 711, 717 (11th Cir.1984).
18 U.S.C. section 925(d)(3) does not grant Gilbert a
clear right to import arms into this country. In fact, section
925(d)(3) allows for the importation of firearms only after it has
first been determined that the weapon is particularly suitable or
readily adaptable to a sporting purpose. In the case sub judice,
the bureau concluded that due to the weight, size, bulk, designed
magazine capacity, configuration, and other factors, the USAS-12 is
not particularly suitable for or readily adaptable to a sporting
purpose. This decision was reviewed by the magistrate under the
arbitrary and capricious standard, and was affirmed. Although the
magistrate did not specifically so state, the decision and
affirmation in fact establishes that the plaintiff had no clear
right to import firearms, and that the bureau had no duty to issue the
permit. With these two elements lacking, a writ of mandamus is not
proper.
2. Plaintiff argues that the magistrate applied the
"rational basis" test to the contrary to law portion of Count Two
when the "rational basis" test is only appropriate for a claim of
arbitrariness and capriciousness. While the magistrate's rec-
ommendation is devoid of any discussion of the contrary to law
standard, a review of 18 U.S.C. section 925(d)(3) and its
legislative history, reveals that the bureau's action is in
accordance with it. Section 925, as initially enacted, was designed
to keep firearms out of the hands of those not legally entitled to
possess them (Magistrate's Recommendation (hereafter MR) p. 6). An
amendment in 1986 sought to liberalize importation by providing
that the Secretary [of the Treasury] shall, as opposed to may,
authorize the importation of firearms generally recognized as
particularly suitable for or readily adaptable to sporting purposes
(MR p. 10). In addition, the importer's burden of establishing
this fact to the Secretary was eliminated. As the magistrate notes,
however, the Secretary retains the obligation to determine whether
specific firearms satisfy this test (MR p. 10). The bureau denied
Gilbert's permit request due to the firearm's weight, size, bulk,
designed magazine capacity, configuration, and other factors. In
light of the fact that section 925(d)(3) provides the Secretary
with little guidance in making this determination, there are no
facts to indicate that these were not proper factors for the bureau
to consider in reaching its decision. Accordingly, it cannot be
said that the bureau's decision was contrary to law.
3. Gilbert argues that by disregarding the statutory
"generally recognized" component, the agency applied the wrong
legal standard in making its decision, and this cannot be corrected
by the court. Gilbert bases its argument on that portion of the
bureau's decision that reads "the USAS-12 semiautomatic shotgun is
not particularly suitable for or readily adaptable for sporting
purposes." (Admin. Rec. p. 22). Gilbert also notes that the
court may not supply a reasoned basis for any agency's action which
the agency has not given. While this is true, the Supreme Court in
Camp v. Pitts, 411 U.S. 138, 142, 93 S.Ct. 1241, 1244, 36 L.Ed.2d
106 (1973), held that if the agency fails to explain its actions so
that effective judicial review is frustrated, the reviewing court
must either (1) obtain from the agency, through affidavits or
testimony, such additional explanation of the reasons for the
agency decision as may prove necessary, or (2) remand to the agency
for further amplification. Here, the agency provided the
magistrate with additional explanation of the reasons for its
decision through the declarations of Edward Owen, Jr. and William
Drake (MR p. 15, n. 13). The declaration of Mr. Owen included an
in-depth discussion of the agency's position on the "generally
recognized" component. According to Mr. Drake, the bureau takes
the position that the "generally recognized" component requires
both that the firearm itself or the "type" of firearm to which the
subject firearm is being compared, has attained general recognition
as being particularly suitable for or readily adaptable to sporting
purpose, and that a particular use of a firearm has attained
general recognition as having a sporting purpose," or that an event
has attained general recognition as being a sport" before those
uses and/or events can be "sporting purposes" or "sports" under
section 925(d)(3) (Drake declar. p. 3). Thus, contrary to
Gilbert's assertion, the "generally recognized" component was
indeed utilized by the bureau in reaching its decision. The
magistrate's recommendation also includes a discussion on the
bureau's position regarding the "generally recognized" component.
4. Gilbert argues that the magistrate gave deference to
the agency's opinion of contested questions of law, whereas the
deference rule only applies to contested questions of fact within
the special expertise of the agency. Gilbert asserts that the
issues of whether the USAS-12 is sporting and whether formal target
competitions are sports, are legal questions, thus the agency's
opinion of these issues was not entitled to deference. Whether
these questions are deemed legal, factual or mixed questions of
law, the determination of what is a sporting gun and what
constitutes a sport clearly involves construction of section
925(d)(3). Generally, the construction of a statute by those
charged with its execution should be followed unless there are
compelling indications that it is wrong. Florida Gas Transmission
Co. v. FERC, 741 F.2d 1307, 1309 (11th Cir.1984). If the rule was
otherwise, target shooting could be deemed a sport by some courts,
yet not recognized as such by others. As there is nothing in the
record to indicate that the bureau's construction is wrong, it is
entitled to deference from this court.
5. Gilbert contends that as a matter of law, no rational
basis exists in the administrative record for the agency's deci-
sion. The bureau's two denial letters were indeed short and curt
as noted by the magistrate; however, the bureau provided the court
with further elucidation of its reasons for denying Gilbert's
application for a permit. The magistrate correctly determined that
those reasons provide a rational basis for the agency's decision.
The bureau determined that the USAS-12 weighed 12.4 pounds
unloaded, and this weight makes the gun extremely awkward to carry
for extended periods, as used in hunting, and cumbersome to lift
repeatedly to fire at multiple small moving targets, as used in
skeet and trap shooting (Owen declar. p. 13). The bureau also
determined that the USAS-12 contains detachable magazines which
permit more rapid reloading. A large magazine capacity and rapid
reloading are military features, according to the bureau. The
bureau also opined that the overall appearance of the weapon was
radically different from traditional sporting shotguns, and
strikingly similar to shotguns designed specifically for or
modified for combat/law enforcement/anti-personnel use (Owen
declar. p. 14). Further, the bureau determined that the activities
that the USAS-12 was designed for, various police combat
competitions, have not attained "general recognition" as shotgun
sports. These reasons provide a rational basis for the bureau's
decision. The magistrate correctly noted that it is of no moment
that the administrative record might also support the opposite
conclusion, as the court needs only determine that a rational basis
exists for the agency's decision.
6. Gilbert argues that the magistrate's decision is based on
the bureau's post hoc litigation rationalizations, and has no basis
in the administrative record.
7. Gilbert also contends that the post hoc litigation
affidavits relied on by the magistrate should not have been con-
sidered according to the rules set forth in Citizens to Preserve
Overton Park v. Volpe, 401 U.S. 402, 91 S.Ct. 814, 28 L.Ed. 2d 136
(1971) and Camp v. Pitts, 411 U.S. 138, 93 S.Ct. 1241, 36 L.Ed.2d
106 (1973). The Supreme Court in Overton Park and Camp stated that
the focal point of an administrative review is the administrative
record already in existence, not some new evidence made initially
in the reviewing court. Overton Park 401 U.S. at 420, 91 S.Ct. at
825; Camp 411 U.S. at 142, 93 S.Ct. at 1244. Where the court is
faced with a bare record that does not disclose the factors that were
considered or the agency's construction of the evidence, it may be
necessary for the reviewing court to require some explanation of the
agency's action. Overton Park 401 U.S. at 420, 91 S.Ct. at
825. It is clear from the magistrate's recommendation that he
relied heavily on materials which were not a part of the
administrative record at the time the bureau rendered its decision.
A review of the record shows that it was necessary for the
magistrate to view additional materials in order to conduct a
meaningful review of the agency's action. The two denial letters
simply failed to discuss in detail the reasons advanced as the
basis for the bureau's decisions. The declarations of Mr. Owen and
Mr. Drake [Chief of the Firearms Technology Branch and Deputy
Director, Bureau of Alcohol, Tobacco and Firearms, respectively] do
not advance new and different reasons for the bureau's actions but
merely provide the court with a more detailed explanation of the
bureau's action. The magistrate was thus correct in considering
the declarations of Mr. Owen and Mr. Drake.
8. Gilbert argues that no rational basis exists for denying
sporting uses on the basis of a pistol grip, box magazine, and
marketing, the only reasons cited by the magistrate. Plaintiff is
totally incorrect in its allegations that the pistol grip, box
magazine and marketing were the only reasons cited by the
magistrate for the denial of the sporting use of USAS-12. While
these were no doubt pivotal reasons for the denial, the
magistrate's recommendation also included a discussion of the gun's
weight and drum magazine, and the fact that the bureau was wholly
unimpressed with the evidence that Gilbert submitted with its
initial application and reconsideration letter. The magistrate
found that a rational relationship existed between these facts and
the decision made by the bureau. As discussed in objection # 5,
the magistrate's finding is supported by the record.
9. Gilbert argues that no rational basis exists for
holding that organized competitive target competitions, using
bulls-eye or animal-like targets and shooting ballshot or slugs, is
some kind of "police combat" game and is not a "sport." Section
925(d)(3) provides absolutely no guidance in determining which
activities constitute a "sport." The determination of a weapon's
suitability for sporting "rest[s] directly with the Secretary of
the Treasury." 114 Cong.Rec. 27465, col. 2 (Sept. 18, 1968)
(statement of Sen. Murphy). The Secretary has delegated his
authority to make determinations concerning sporting purposes to
the Bureau of Alcohol, Tobacco and Firearms. 27 C.F.R. Part 178.
Great deference is to be accorded the interpretation of section
925(d)(3) by the agency charged with its administration. Udall v.
Tallman, 380 U.S. 1, 16, 85 S.Ct. 792, 801, 13 L.Ed.2d 616 (1964);
American Mutual Liability Ins. Co. v. Smith, 766 F.2d 1513, 1519
(11th Cir.1985).
The bureau determined that bullseye or animal-like
targets and shooting ball-shot or slugs are of a kind of "police
combat" game and is not a "sport." In 1982, the bureau took the
position that "police combat" games did in fact constitute a sport;
however, in 1984, the bureau changed its position on this issue.
According to the bureau, "police combat" competitions have only
recently generated interest outside the military/law enforcement
area, and had not by 1984-and still have not gained general
recognition as sports. The bureau states that it simply misapplied
the "sporting test" in 1982 [see Owen and Drake declar.]. The court
cannot say that this was not a rational basis for the bureau's
decision that "police combat games" were not a sport.
10. Gilbert argues that the magistrate relied on
allegations of which there are material questions of fact in con-
troversy, which is prohibited in a summary judgment motion.
Specifically, Gilbert contends that the questions of whether a 12-
pound shotgun is too heavy for hunting and competition, or whether
the game in question has sufficient numbers of participants to be
"sports," etc. are material issues of fact which are in dispute.
In Bank of Commerce of Laredo v. City National Bank of Laredo, 484
F.2d 284, 289 (5th Cir.1973), cert. denied, 416 U.S. 905, 94 S.Ct.
1609, 40 L.Ed.2d 109 (1974), the Fifth Circuit stated that summary
judgment is an appropriate procedure for resolving a challenge to a
federal agency's administrative decision when review is based upon
the administrative record. "The appropriate legal standard for conducting such
review is that established by the legislation authorizing the
agency action and the appurtenant provisions of the Administrative
Procedure Act." Thus, contrary to the plaintiff's contention, the
court's role here is not to resolve contested fact questions which
may exist in the underlying administrative record, but rather the
court must determine the legal question of whether the agency's
action was arbitrary and capricious.
11. Gilbert argues that a deference rule is inconsistent
with the intent of the Firearms Owners' Protection Act (FOPA) of
1986. There is nothing in the Act or its legislative history to
support plaintiff's allegation. In fact, the plain language of the
statute places the task of administering the statute on the
Secretary of the Treasury [who has delegated this authority to the
bureau] not the courts. This makes sense in view of the fact that
the bureau is in a far better position to determine whether an
activity is a sport, and whether a firearm is a sporting firearm.
While the courts generally accord great deference to an
interpretation of a statute by the agency charged with
administering, it, the agency's decisions are not merely rubber
stamped; instead they are subjected to a searching and careful
review by the courts. Udall v. Tallman, 380 U.S. 1, 16, 85 S.Ct.
792, 801, 13 L.Ed.2d 616 (1964), reh'g denied, 380 U.S. 989, 85
S.Ct. 1325, 14 L.Ed.2d 283 (1965); Citizens to Preserve Overton
Park, 401 U.S. at 416, 91 S.Ct. at 823. In the end though, the
court's review is limited to whether a rational basis exists for
the agency's action.
12. Gilbert argues that the magistrate misapplied
the deference standard by rejecting the administrative inter-
pretation of 1968-1986, and applying an admittedly "subjective" and
vague standard concocted for this litigation. It should first be
noted that in one of the cases relied upon by plaintiff, National
Distributing Co., Inc. v. U.S. Treasury Dept., 626 F.2d 997, 1014
(D.C.Cir.1980), the court criticized the agency's change in policy
not because the agency saw fit to change its interpretation of a
statute, but because the agency had denied any shift in its policy,
and had refused to issue an explanation for the change. In this
case, the bureau unequivocally set forth the reasons for its change
in position. The bureau acknowledges that prior to 1986, the
agency relied upon caliber, gauge and safety features as being
indicative of sporting use. The bureau maintains that even then
the firearm still had to be evaluated as a whole to determine
whether it was particu-
larly suitable for a sporting purpose. The bureau contends that
from 1968 to approximately 1980, the vast majority of new shotguns
have been traditional sporting shotguns, and that not until this
decade, in response to a recently growing interest in paramilitary
equipment, has shotguns developed for law enforcement been sought
to be imported as sporting shotguns. Thus, prior to 1980, the
bureau contends that it was not necessary for them to establish a
list of factors for the importation of these allegedly "sporting"
shotguns (Owen declar. p. 21). As noted by the magistrate, these
factors (weight, size, bulk, designed magazine capacity,
configuration, etc.) are characteristic of all firearms thus are
logical characteristics for the bureau to consider in determining
whether a particular firearm is particularly suitable for or
readily adaptable to sporting purposes.
13. Gilbert argues that even if deference to the
agency's administrative practice as the proper interpretation of
law is proper, the magistrate ignored the administrative practice
followed in 1968-1986 and instead deferred to litigation arguments.
The reason for the agency's policy shift has already been
discussed. It has also already been determined that the dec-
larations of Mr. Drake and Mr. Owen were properly considered by the
magistrate. The bureau did not invent new rationales for its denial
of importation of the USAS-12 but simply expounded on the reasons
originally given. The declarations of Drake and Owen speak to the
weight, size, bulk, configuration of the USAS-12. These were all
reasons given for the initial denial of importation of the USAS-12.
It was not the magistrate's role, and it is not this court's role,
to determine that the bureau's prior practice was the better posi-
tion. The court need only be satisfied that the bureau's policy
change, and denial, were not the result of arbitrary and capricious
action.
14. Gilbert argues that statutory interpretation is for the
judiciary, and the magistrate erred in deferring to the agency on
questions of law. While it is true that the magistrate deferred to
the agency's interpretation of section 925(d)(3), the plaintiff
fails to cite any instance in which the magistrate deferred to the
agency on purely questions of law. There is no doubt that the
courts are the final authorities on issues of statutory
construction, yet it is a long established principle that the court
will adhere to the construction of a statute by those charged with
its execution, unless there are compelling indications that it is
wrong. Udall v. Tallman, 380 U.S. 1, 16, 85 S.Ct. 792, 801, 13
L.Ed.2d 616 (1964). There are no indications that the bureau's
construction is erroneous.
15. Gilbert contends that the magistrate ignores accuracy,
safety, and other sporting factors cited in the statute by ATF in
the years 1968-1986, and by sportsmen, and instead relies on
meaningless factors invented for this litigation. The magistrate
found that the factors relied upon by the bureau, namely, weight,
size, bulk, designed magazine capacity and configuration, for the
denial of importation of the USAS-12 had not previously been cited
as factors determinative of the sporting test. The magistrate
concluded, however, that because these factors are characteristic
of all firearms, they are logical characteristics for the bureau to
consider. The bureau has indicated that this switch in factors has
been necessitated by the growing number of non-conventional
shotguns sought to be imported as "sporting" firearms. The court
cannot say that this was not a rational basis for the bureau's
decision.
16. Gilbert argues that the magistrate defers to the
opinion of one bureaucrat, Mr. Owen, without any examination of his
credentials or those of Gilbert's experts. From a review of the
record, it is clear that the magistrate relied heavily on Mr.
Owen's declaration in an effort to discern further elucidation for
the agency's action. The plaintiff points the court to no facts
which would tend to show that Mr. Owen is unqualified to serve the
bureau, and his opinion is entitled to no weight. Mr. Owen's
declaration shows that he has extensive experience in the area of
firearms. The magistrate was thus correct in relying on the
declaration of Mr. Owen to discern further elucidation for the
agency's action.
17. Gilbert argues that numerous post hoc litigation
rationalizations, which the magistrate relies on as if they are
parts of the administrative record, are clearly refuted by the
administrative record. Gilbert contends that the videotape,
submitted with its application, shows that the USAS-12 has less
muzzle rise than standard sporting shotguns, although the bureau
(and later the magistrate) state that the tape did not compare
USAS-12 with conventional sporting shotguns. The bureau did not
deny that the USAS-12 has less muzzle rise, but criticizes the tape
because it failed to compare the firearm's weight, bulk, size,
designed magazine capacity, and configuration with conventional
firearms.
The plaintiff also argues that the bureau incorrectly stated
that the survey of state game commissions was directed to the le-
gality of the use of the USAS-12 for hunting rather than to its
suitability for sporting purposes. The question posed to state
game commissions was "would the USAS-12 be particularly suitable
for or readily adaptable to hunting under the game regulations of
your state?" Some of the comments received in response to this
question clearly indicate that at least some of those answering the
question thought it was directed to the legality as opposed to the
sporting purposes of the firearm. One respondent in particular
stated that the USAS-12 would be legal for small and large game, but
not particularly suitable (A.R. 113, 123, 133).
Plaintiff argues further that the bureau's statement that
Gilbert's experts, Crossman and Sears, did not address the salient
physical features of the firearm which served the basis for the
agency's decision, is incorrect. According to Gilbert, these
experts explained the clear sporting advantages of the box magazine
and the reduced kick and muzzle rise due to the weight, pistol
grip, and straight line stock. The bureau states that the physical
features that Gilbert's experts addressed were not features that
render a shotgun particularly suitable for sporting purposes. The
bureau opined that the low recoil effect and muzzle rise, which
Gilbert's experts emphasized, was of little value to the "sporting"
determination, since it is offset by the weight and bulk of the
USAS-12, which is more important to sportsmen. The bureau also
noted that Gilbert's expert, Mr. Crossman, never stated that the
USAS-12 is "of a type" of shotgun "generally recognized" as
sporting, or that the "sports" for which it may be suited are
"generally recognized" sports. The bureau's findings are not
clearly refuted by the administrative record.
18. Gilbert argues that the magistrate erred in deciding that
the agency's decision was "warranted by the facts" as "borne out by
the administrative record." Although the magistrate states that
the agency's decision was warranted by the facts as borne out by
the administrative record, it is clear that the declarations of Mr.
Owen and Mr. Drake were also relied upon by the magistrate to
substantiate the agency's decision. Contrary to the plaintiff's
contention, those declarations were not mere post hoc litigation
rationalizations, but constituted more detailed explanations of
those reasons originally advanced by the agency for its denial. It
has already been determined that based on the administrative
record, and the declarations of Mr. Owen and Mr. Drake, the
agency's decision was rational, and borne out by the complete
record.
19. Gilbert contends that the magistrate's decision
would delete the "readily adaptable" standard from section
925(d)(3). Gilbert argues that the magistrate dwelled on the
particularly suitable component of section 925(d)(3), and
essentially ignored the alternative "readily adaptable" standard.
Although the magistrate's recommendation lacks an in-depth
discussion of the "readily adaptable" standard, a review of the
record shows that the bureau did in fact consider the "readily
adaptable" standard in reaching its decision to deny importation.
The bureau has consistently stated that the USAS-12 is a semi-
automatic version of a military type assault shotgun. Mr. Owen
stated that the Benelli Super 90 and the Benelli VM, which Gilbert
compares to the USAS-12, are traditional sporting shotguns adapted
for military/law enforcement use by adding over-sized magazines,
non-glare finished and synthetic stocks and forearms. According to
Mr. Owen, the USAS-12 was designed as a military assault weapon and
has never had the basic features of a traditional sporting shotgun
(Owen declar. p. 19). The bureau cites the separate combat style
pistol grip located on the bottom of the receiver forward of the
buttstock, the barrel to buttstock configuration, and the general
shape and overall appearance of the firearm that makes it radically
different from traditional sporting shotguns and not readily
adaptable to sporting purposes (Owen declar. p. 15). Contrary to
Gilbert's contention, the "readily adaptable" standard was indeed
utilized by the bureau in reaching its decision.
20. Gilbert argues that in recommending dismissal of the
due process claim, the magistrate ignored the fact that competitors
are allowed to import shotguns with features similar to the USAS-
12, but Gilbert is held to a different standard. Gilbert is in
essence arguing that the bureau has applied a facially neutral
statute in an unequal manner. As was noted by the magistrate, in
order to prevail on this claim, Gilbert must prove that the bureau
intentionally discriminated against them. E & T Realty v.
Strickland, 830 F.2d 1107, 1112 (11th Cir.1987), cert. denied,
-U.S.-, 108 S.Ct. 1225, 99 L.Ed.2d 425 (1988). There are no facts
in this case to support a claim of purposeful discrimination. The
bureau acknowledges that SPAS-12 and Benelli Super 90 shotguns are
allowed to be imported, although they have features similar to the
USAS-12 and are marketed to both sportsmen and law enforcement.
The bureau notes that while these guns have some military features,
they retain the basic features of traditional sporting shotguns.
The bureau also states that the USAS-12 is not the only shotgun
denied an importation permit. The Striker-12 was denied
classification as a sporting shotgun, and the SPAS-12, which the
bureau allowed to be imported on the basis of its suitability for
use in police combat competitions back in 1982, will be subjected
to the "revised" sporting test.
Gilbert argues that liberty and property interests exist
in the freedom of a licensed importer to import and sell com-
modities, and be subject to the same standards as competitors.
As noted above, there is no indication that Gilbert is being
subjected to standards which are different from those other
importers are subjected to. Further, in order to have a protectable
property interest, Gilbert must demonstrate that it has a
legitimate claim of entitlement to it. The magistrate was correct
in concluding that any right to import firearms is activated only
after the firearm sought to be imported is shown to be particularly
suitable or readily adaptable to sporting purposes. It has been
determined that the USAS-12 does not meet these criteria.
21. Gilbert's final argument is that the magistrate
ignores the fact that before arms may be "kept" under the second
amendment, arms must be produced and acquired. The magistrate
found that the second amendment guarantees the right to keep and
bear arms but does not give Gilbert a right to import arms. Gil-
bert argues that the magistrate is incorrect, yet fails to cite any
authority in support of its position. The magistrate relied on
United States v. Swinton, 521 F.2d 1255 (10th Cir.1975), cert.
denied, 424 U.S. 918, 96 S.Ct. 1121, 47 L.Ed.2d 324 (1976) in
reaching his decision. Swinton is a criminal case wherein the
defendant was convicted of engaging, without a license, in the
business of dealing in firearms. The Court held, in part, that
there is no absolute constitutional right of an individual to pos-
sess a firearm. In United States v. King, 532 F.2d 505 (5th
Cir.1976), cert. denied, 429 U.S. 960, 97 S.Ct. 384, 50 L.Ed.2d 327
(1976), the defendant was also convicted of dealing in firearms
without a license. In upholding his conviction, the Fifth Circuit
noted that the defendant was convicted not of bearing arms, but of
selling them without a license. Id. at 510. Similarly, in the
case sub judice, Gilbert is not being denied its right to bear
arms, but is simply being prevented from importing into this
country arms that are not particularly suitable or readily
adaptable to a sporting purpose.
Accordingly, the magistrate's recommendation is ADOPTED as the
opinion of this court.
JUDGMENT
It is ORDERED, ADJUDGED and DECREED that defendants' motion
for summary judgment is GRANTED. Plaintiff's cross-motion for
summary judgment is DENIED, costs to be taxed to plaintiff.
RECOMMENDATION OF MAGISTRATE
WILLIAM E. CASSADY, United States Magistrate.
This case is before the Magistrate for report and
recommendation pursuant to 28 U.S.C. section 636(b)(1)(B) on cross-
motions for summary judgment. Upon consideration of the
administrative record and all pertinent materials contained in this
file, the Magistrate makes the following recommendation.
FACTS
In 1986, Gilbert Equipment Company, a licensed importer of
firearms, applied to the Bureau of Alcohol, Tobacco and Firearms
(hereinafter "ATF") for a permit to import the USAS-12 shotgun.
(See A.R. 2). The USAS-12 is a highly advanced magazine-fed
semiautomatic 12-gauge shotgun manufactured by Daewoo Precision
Industries in Korea. (para. 5 of Complaint). Soon after applying
for the permit, Gilbert submitted information to the Bureau in an
attempt to demonstrate that the USAS-12 "is generally recognized as
particularly suitable for or readily adaptable to sporting
purposes" and is thus importable under 18 U.S.C. Section 925(d)(3).
(para. 6 of Complaint; see A.R. 2-19). After several meetings between
the parties and testing and evaluation by the Bureau, ATF, by
letter dated December 16, 1986 from the office of William T.
Drake, ATF Deputy Director, denied permission to Gilbert to import
the USAS-12 inasmuch as "due to the weight, size, bulk, designed
magazine capacity, configuration and other factors, the USAS-12
semiautomatic shotgun is not particularly suitable for or readily
adaptable to sporting purposes." (A.R. 22). On February 19, 1988,
Gilbert sought ATF's permission to import five hundred (500) USAS-
12 shotguns and accompanied its application with extensive
memoranda, exhibits, and a videotape in support of a sporting use
determination (para. 14 of Complaint). By letter dated March 1,
1988, William E. Earle, Chief of the Firearms and Explosives
Division, denied the application stating that ATF's position
remained unchanged (A.R. 197-98). On March 24, 1988, plaintiff
filed in this Court a complaint seeking: (1) mandamus relief; (2)
a determination by this Court that defendant's actions were
arbitrary, capricious, and an abuse of discretion; (3) a
determination by the Court that defendant's conclusions were
unwarranted by the facts and were not based on any facts; (4) a
determination by this Court that the defendant violated its rights
to due process of law and equal protection of the laws, rights
which are guaranteed by the Fifth Amendment; and (5) a
determination by this Court that defendant violated plaintiff's
Second Amendment right to keep and bear arms. [footnote 1] The
parties' summary judgment memoranda have addressed all of the forms
of relief requested and thus, a determination by this Court on the
cross motions will make unnecessary a trial of this cause.
STATUTORY HISTORY
In 1968, Congress enacted the Gun Control Act which was
designed to keep firearms out of the hands of those not legally
entitled to possess them because of age, criminal background, or
incompetency, and to assist law enforcement authorities in the
states and their subdivisions in combating the ever increasing
prevalence of crime in
the United States. 1968 U.S.Code Cong. & Admin.News 2112, 2113-
2114. [footnote 2] In 1968, Section 925(d) of the Act provided in
pertinent part as follows:
The Secretary may authorize a firearm or ammunition to be
imported or brought into the United States or any possession
thereof if the person importing or bringing in the firearm or
ammunition establishes to the satisfaction of the Secretary
that the firearm or ammunition ... is generally recognized as
particularly suitable for or readily adaptable to sporting purposes
....
18 U.S.C. section 925(d)(3) (emphasis added). [footnote 3]
The clear intent of Section 925(d)(3) was to "curb the flow of
surplus military and other firearms being brought into the United
States which are not particularly suitable for target shooting or
hunting." 1968 U.S.Code Cong. & Admin.News 2112, 2167. [footnote 4]
As Senator William Dodd, sponsor of the legislation, emphasized,
Title IV prohibits importation of arms which the Secretary
determines are not suitable for research, sport, or as museum
pieces ...
The entire intent of the importation section is to get at
those kinds of weapons that are used by criminals and that
have no sporting purpose.
114 Cong.Rec. S 5556 Col. 3, S 5582 Col. 1, S 5585 Col. 2 (May 14,
1968) (statement of Senator Dodd). [footnote 5] The determination
of a weapon's suitability for sporting purposes was entrusted to
the Secretary of the Treasury. 114 Cong.Rec. 27465, Col. 2 (Sept.
18, 1968) (statement of Sen. Murphy). As noted in one of the
Senate Reports,
The difficulty of defining weapons characteristics to meet
this target [of eliminating ...] ... S. Rep. No. 1501, 90th Cong., 2d
Sess. 38 (Sept. 6, 1968). [footnote 6]
To assist the Secretary in exercising his discretion, Congress
"recommended that the Secretary establish a council that would y
provide guidance and assistance to him in determining those
firearms which meet the criteria for importation into the United
States...." S. Rep. No. 1501, 90th Cong. 2d Sess. 38 (Sept. 6,
1968). Immediately following enactment of the Gun Control Act, the
Secretary of the Treasury appointed a Firearms Evaluation Panel to
establish guidelines for implementation of the "sporting purposes"
test of Section 925(d)(3), said panel being composed of rep-
resentatives from the military, law enforcement, and firearms
industries. [footnote 7] While the panel did not propose specific
criteria for evaluating shotguns [footnote 8] the apparent general
criteria relied upon by the advisory panel and ATF from 1968
through 1986 for determining what is "generally recognized" as a
sporting firearm is as follows:
The Director may compile an Importation List of firearms
and ammunition which he determines to be generally recognized
as particularly suitable for or readily adaptable to sporting
purposes.... No firearm shall be placed on the Importation
List unless it is found that:
(1) the caliber or gauge of the firearm is suitable for
use in a recognized shooting sport,
(2) the type of firearm is generally recognized as
particularly suitable or readily adaptable to such use, and
(3) the use of the firearm in a recognized shooting
sport will not endanger the person using it due to
deterioration through such use or because of workmanship,
materials or design.
Specifically, with regard to shotguns, the two factors panel
members were concerned with were the lack of easy convertibility to
full automatic and the barrel and overall length of the weapon (18
inch barrel length and 26 inch overall length for shotguns).
In 1986, Section 925(d) of the Gun Control Act was amended by
the Firearms Owner's Protection Act to read in pertinent part as
follows:
The Secretary shall authorize a firearm ... to be imported or brought
into the United States ... if the firearm ... is generally recognized
as particularly suitable for or readily adaptable to sporting
purposes .... (emphasis added).
The amendments to the Statute provide that the Secretary shall
(instead of "may") authorize the importation of firearms generally
recognized as particularly suitable for or readily adaptable to
sporting purposes. Additionally, the amendments whittled away at
the Secretary's discretion by eliminating the requirement that the
importer of firearms establish to the satisfaction of the Secretary
that the particular firearm sought to be imported is generally
recognized as particularly suitable for or readily adaptable to
sporting purposes. Regardless of the changes made, the firearm
must meet the sporting purposes test and it remains the Secretary's
obligation to determine whether specific firearms satisfy this
test. The Senate Report on the 1986 amendments, S.Rep. No. 583,
98th Cong., 1st Sess., August 8, 1984, stated that "[i]t is
anticipated that in the vast majority of cases, [the substitution
of "shall" for "may" in the authorization section] will not result
in any change in current practice." However, opponents of the
amendments viewed the changes as liberalizing and opening up of the
importation of firearms into the United States "by mandating the
Secretary to authorize importation of a firearm if there is a
sporting purpose and eliminating the requirement that the importer
has the, burden of satisfying the Secretary of the sporting
purpose." Firearms Owners' Protection Act, 100 Stat. 1340 (1986)
(amending section 925 of the Gun Control Act of 1968, 18 U.S.C.
sections 921-929 (1986)).
DISCUSSION
I. WAS ATF'S DETERMINATION THAT THE USAS-12 SHOTGUN IS NOT A
"SPORTING" WEAPON ARBITRARY OR CAPRICIOUS?
A. The Scope and Standard of Review. As the Magistrate has
noted previously, plaintiff has alleged that ATF's determination
that the USAS-12 is not a "sporting" weapon was "arbitrary,
capricious, an abuse of discretion, in excess of statutory
authority, and otherwise not in accordance with law." (Complaint,
para. 20). Section 706 of the Administrative Procedure Act pro-
vides six separate standards of judicial review of agency actions.
Specifically for this Court's purposes, Section 706(2)(A) provides
that a reviewing court shall "hold unlawful and set aside agency
action, findings, and conclusions found to be ... arbitrary,
capricious, an abuse of discretion or otherwise not in accordance
with law." 5 U.S.C. section 706(2)(A). [footnote 9] This is the
standard of review this Court will employ to determine whether ATF
properly concluded that the USAS-12 is not a sporting shotgun; a
closer examination of this standard is, therefore, warranted.
Before a court can reach the determination that an agency's
actions were arbitrary or capricious, said court must consider
whether the decision was based on a consideration of the relevant
factors and whether there has been a clear error of judgment.
Citizens to Preserve Overton Park, Inc. v. Volpe, 401 U.S. 402, 416,
91 S.Ct. 814, 823-24, 28 L.Ed.2d 136 (1971); Bowman
Transport, Inc. v. Arkansas-Best Freight System, Inc., 419 U.S.
281, 285, 95 S.Ct. 438, 442, 42 L.Ed.2d 447 (1974). [footnote 10]
That is, the particular agency must articulate a "rational
connection between the facts found and the choice made." Bowman,
supra, 419 U.S. at 285, 95 S.Ct. at 442 (quoting Burlington Truck
Lines, Inc. v. United States, 371 U.S. 156, 168, 83 S.Ct. 239, 246,
9 L.Ed.2d 207 (1962)). [footnote 11] While the Supreme Court has
stated that it will "not supply a reasoned basis for the agency's
action that the agency itself has not given, SEC v. Chenery Corp.,
332 U.S. 194, 196 [67 S.Ct. 1575, 1577, 91 L.Ed. 1995] (1947)," the
High Court has also indicated that it will "uphold a decision of
less than ideal clarity if the agency's path may reasonably be dis-
cerned. Colorado Interstate Gas Co. v. FPC, 324 U.S. 581, 595 [65
S.Ct. 829, 836, 89 L.Ed. 1206] (1945)." Bowman, supra, 419 U.S. at
285-86, 95 S.Ct. at 442. [footnote 12]
In applying the arbitrary and capricious standard, the focal
point of review is the administrative record "already in existence,
not some new record made initially in the reviewing court." Camp v.
Pitts, 411 U.S. 138, 142, 93 S.Ct. 1241, 1244, 36 L.Ed.2d 106
(1973). If the agency fails to explain its actions so that
effective judicial review is frustrated, the reviewing court must
either (1) obtain from the agency, through affidavits or testimony,
such additional explanation of the reasons for the agency decision
as may prove necessary, [footnote 13] or (2) remand to the agency
for further amplification. Pitts, supra, 411 U.S. at 142-43, 93
S.Ct. at 1244.
B. "Sporting Purposes" Test. The statute establishes that a
"sporting" firearm is a weapon which is generally recognized as
particularly suitable or readily adaptable to sporting purposes. 18
U.S.C. section 925(d)(3). The Bureau claims that in making a
sporting determination it attempts to determine whether the firearm
is of a type traditionally used in recognized sporting activities
or is as suitable for recognized sporting activities as firearms
traditionally used for such activities. (Drake Dec. para. 7). ATF
views the "generally recognized" qualification to require both that
the firearm itself or the "type" of firearm to which the subject
firearm is being compared have attained general recognition as
being particularly suitable for or readily adaptable to sporting
purposes, [footnote 14] and that a particular use of a firearm have
attained general recognition as being a "sporting purpose," or that
an event have attained general recognition as being a "sport," be-
fore those uses and/or events can be sporting purposes" or "sports"
under Section 925(d)(3). [footnote 15]
The interpretation of a statute by the agency charged with
administering it is generally entitled to great deference, Udall v.
Tallman, 380 U.S. 1, 16, 85 S.Ct. 792, 801, 13 L.Ed.2d 616, reh'g
denied, 380 U.S. 989, 85 S.Ct. 1325, 14 L.Ed.2d 283 (1965); see
Blue Cross & Blue Shield v. Department of Banking & Finance, 791
F.2d 1501, 1506 (11th Cir.1986) ("We need not find that its [the
agency's] construction is the only reasonable one, or even that it
is the result we would have reached had the question arisen in the
first instance in judicial proceedings."), unless there are com-
pelling indications that said interpretation is wrong. American
Mut. Liab. Ins. Co. v. Smith, 766 F.2d 1513, 1519 (11th Cir. 1985)
("We will adhere to the 'principle that the construction of a
statute by those charged with its execution should be followed
unless there are compelling indications that it is wrong.'"); see
Florida Gas Transmission Co. v. FERC, 741 F.2d 1307, 1309-10 (11th
Cir.1984) ("The agency's view must be upheld unless it is so
plainly erroneous or so inconsistent with either the regulation or
the statute authorizing the regulation that its decision is
arbitrary, capricious, an abuse of direction or otherwise not in
accordance with law.").
Furthermore, "[t]he words of statutes ... should be
interpreted where possible in their ordinary, everyday senses,"
Malat v. Riddell, 383 U.S. 569, 571, 86 S.Ct. 1030, 1032, 16
L.Ed.2d 102 (1966); Lane v. United States, 727 F.2d 18, 20 (1st
Cir.) (in a suit involving the Equal Access to Justice Act, the
appellate court stated that "the plain language of the statute
itself must be regarded as conclusive."), cert. denied, 469 U.S.
829, 105 S.Ct. 113, 83 L.Ed.2d 57 (1984). However, the plain language
of a statute does not control where a literal reading would defeat
the evident intent of Congress. See Griffin v. Oceanic Contractors, Inc., 458 U.S. 564, 571,
102 S.Ct. 3245, 3250, 73 L.Ed.2d 973 (1982) ("[I]n rare cases the
literal application of a statute will produce a result demonstrably
at odds with the intention of its drafters, and those intentions
must be controlling."). Thus, ATF argues that the "generally
recognized" qualification must be read to limit the types of
firearms and sports which would classify a firearm as "sporting"
under Section 925(d)(3). Additionally, ATF contends that given the
fact that the word "particularly" modifies the word "suitable" a
firearm which might be recognized as "suitable" for use in
traditional sports would not meet the statutory criteria unless it
were recognized as particularly suitable for such use. [footnote
16] Finally, ATF argues that the drafters of the legislation did
not intend for "sports" to include every available type of activity
or competition which might employ a firearm inasmuch as a "sporting
purpose" could be advanced for every firearm sought to be imported.
[footnote 17]
C. Application of the "Sporting Purposes" Test By ATF to the
USAS-12. As this Court has heretofore stated, ATF, after testing
and examining the USAS-12, concluded that due to the weight, size,
bulk, designed magazine capacity, configuration, and other factors,
Gilbert's semiautomatic shotgun is not particularly suitable for or
readily adaptable to sporting purposes; rather, ATF is of the
opinion that the weapon is a semiautomatic version of a selective
fire military type assault shotgun (A.R. 22-23). The Bureau argues
that the aforementioned factors or characteristics provide a
rational basis for its determination denying Gilbert permission to
import the USAS-12 into the United States. Since the curt
administrative denial of Gilbert's application to import the
USAS-12 and the mere reaffirmation of that decision in late March,
1988, (A.R. 199), ATF has provided this Court with further
elucidation of its reasons for denying Gilbert's application for a
permit to import the firearm, as follows:
1. The weight of the weapon, 12.4 pounds, makes it much
heavier than traditional 12-gauge sporting shotguns [footnote 18]
and thus makes it awkward to carry for extended periods, as is
required in hunting, and cumbersome to lift repeatedly to fire at
multiple small moving targets as used in skeet and trap shooting.
2. The width of the USAS-12 with drum magazine (approximately
6 inches) and the depth with box magazine (in excess of 11 inches)
far exceed that of traditional sporting shotguns which do not
exceed three inches in width or four inches in depth. [footnote 19]
The Bureau argues that because of the large size and bulk of the
USAS-12, the firearm is extremely difficult to maneuver quickly
enough to engage moving targets as is necessary in most types of
hunting, and in skeet and trap shooting.
3. The detachable box (12-cartridge capacity) and the drum
magazine (28-cartridge capacity) have larger capacities than those
of traditional repeating sporting shotguns which contain tubular
magazines with a capacity of three to five cartridges. Ad-
ditionally, detachable magazines permit more rapid reloading than
do tubular magazines. Finally, the few manually operated 12-gauge
shotguns which incorporate detachable box magazines are supplied
with two (2) cartridge capacity magazines; those 12-gauge
semiautomatic and fully automatic shotguns which employ larger
capacity detachable magazines are specially designed combat weapons
or conventional shotguns modified for law enforcement and military
use. [footnote 20]
4. The combat style pistol grip (located on the bottom of the
receiver forward of the buttstock), the barrel-to-buttstock con-
figuration, the bayonet lug, and the overall appearance and general
shape of the gun are radically different from traditional sporting
shotguns and strikingly similar to shotguns designed specifically
for or modified for combat and law enforcement use. Specifically,
the pistol grip facilitates the handling of the weapon when fired
from positions other than the shoulder and also facilitates control
of the weapon with one hand while traditional shotgun sports
generally involve firing from the shoulder. Additionally, the
bayonet is a distinct military feature which has no sporting
application. [footnote 21]
In addition to giving further explanation of the specific
reasons for denying to Gilbert a permit to import the USAS-12, ATF
spokesman Edward M. Owen has stated that ATF relied in part on
Gilbert's own marketing and advertising literature, which listed
the various combat uses of the weapon (but listed no recognizable
sporting uses), in determining that the USAS-12 was not a sporting
shotgun. The Bureau
argues that the representations, concerning the weapon, made by
Gilbert in its advertising literature, together with the physical
characteristics of the firearm, indicate that its determination to
deny the application to import the weapon was rational, based on
relevant factors, and was not a clear error in judgment. [footnote
22]
Finally, ATF was wholly unimpressed with the evidence Gilbert
submitted with its February 17, 1988 letter requesting re-
consideration of the agency's decision specifically finding: (1)
that the tape demonstrating the firearm's uses did not provide
comparisons of the USAS-12 with conventional sporting shotguns to
demonstrate that it was of a type generally recognized as
particularly suitable for or readily adaptable to the traditional
shotgun sports of hunting and trap or skeet shooting; (2) that
comparisons cannot be made between the USAS-12 and rifles or
handguns because they are distinctly different weapons and thus it
is immaterial that some of the handguns, rifles and "combination"
rifle/shotguns the agency has allowed importation of share one or
more features with the USAS-12 which the agency now finds
objectionable; (3) although the agency has allowed importation of
several military-style 12-gauge shotguns (e.g., the Benelli Super
90, the Benelli VM, and the Benelli 212-M1), these shotguns
maintain the basic features of traditional sporting shotguns; (4)
the SPAS-12 is a traditional sporting shotgun adapted for military
and law enforcement use and was approved for importation in 1982
based upon an agency policy (a policy which recognized police
combat competition as a sport) which has been subsequently
reversed; [footnote 23] (5) the survey of state game commissions
was directed to the legality of the use of the USAS-12 for hunting
rather than to its suitability for sporting purposes; and (6) the
evaluations made by Edward B. Crossman and Robert Sears did not
address the salient physical features of the firearm which served
as the basis of ATF's December 11, 1986 determination and neither
stated that the USAS-12 is a shotgun "of a type" generally
recognized as sporting or that the "sports" for which it is
suitable are "generally recognized" sports.
D. Decision. It is clear to this Magistrate that the 1986
amendments to Section 925(d)(3) of the Gun Control Act were meant
not only to liberalize importation of firearms but also to ease the
burden on importers by eliminating the requirement that the
importer satisfy the Secretary that the firearm sought to be
imported is particularly suitable or readily adaptable to sporting
purposes. Nevertheless, there is left the basic and undeniable
requirement that the particular firearm sought to be imported must
be particularly suitable or readily adaptable to sporting purposes.
[footnote 24]
As this Court has heretofore noted, the Bureau of Alcohol,
Tobacco and Firearms determined that due to the weight, size,
bulk, designed magazine capacity, configuration and other factors,
the USAS-12 semiautomatic shotgun is not particularly suitable for
or readily adaptable to sporting purposes. Although the factors
relied upon by ATF, to the Court's knowledge, have never been
previously cited by ATF as factors determinative of the sporting
purposes test, they are characteristic of all firearms and thus,
the Magistrate opines that they are logical characteristics for ATF
to consider in determining whether a particular firearm is
particularly suitable for or readily adaptable to sporting pur-
poses. Given the narrow standard by which this Court must judge
agency decisions, see, e.g., Volpe, supra, 401 U.S. at 416, 91
S.Ct. at 823-24, the undersigned Magistrate cannot determine that
the Bureau's decision to deny permission to Gilbert to import the
USAS-12 is arbitrary and capricious. The administrative record
supports the agency's determination that the overall appearance and
design of the weapon (especially the detachable box magazine and
the pistol grip) is that of a combat weapon and not a sporting
weapon. In fact, the USAS-12 was specifically marketed by Gilbert
as a military and law enforcement weapon. Accordingly, the
Magistrate finds that there was a rational relationship between the
facts and the decision made by the Bureau not to allow the
importation of the USAS-12, and therefore, the Court does not find
the decision to be arbitrary or capricious. It is of no moment
that the administrative record might also support the opposite
conclusion that the USAS-12 is suitable for use as a sporting
weapon. [footnote 25] This Court need only decide that a rational
basis exists for the agency's decision and having done so, the
Magistrate turns to Gilbert's remaining arguments. [footnote 26]
II. MANDAMUS RELIEF.
In its complaint, plaintiff has also asserted that it is
entitled to mandamus relief. Mandamus is an extraordinary remedy
which requires the coexistence of the following three elements
before the writ may properly issue: "(1) a clear right in the
plaintiff to the relief sought; (2) a clear duty on the part of the
defendant to do the act in question; and (3) no other adequate
remedy available." Carter v. Seamans, 411 F.2d 767, 773 (5th
Cir.1969), cert. denied, 397 U.S. 941, 90 S.Ct. 953, 25 L.Ed.2d 121
(1970); District Lodge No. 166, International Assn of Machinists &
Aerospace Workers v. TWA Services, Inc., 731 F.2d 711, 717 (11th
Cir.1984), cert. denied, 469 U.S. 1209, 105 S.Ct. 1175, 84 L.Ed.2d
324 (1985). That is, the writ will issue "only if the act to be
compelled is ministerial and so plainly prescribed as to be free
from doubt." Bass Angler Sportman Soc'y v. United States Steel
Corp., 324 F.Supp. 412, 416 (S.D.Ala.1971), aff'd sub nom. Bass
Anglers Sportsman Soc'y v. Koppers Co., 447 F.2d 1304 (5th
Cir.1971).
In the instant case, a reading of Section 925(d)(3) clearly
indicates that before ATF must allow the importation of a firearm,
said weapon must be shown to be particularly suitable or readily
adaptable to sporting purposes.
III. CONSTITUTIONAL CLAIMS.
A. Fifth Amendment. Gilbert alleges in its complaint that
ATF has applied shifting, unequal standards, or no standards, and
has continued to approve permits to import firearms less suitable
for sporting purposes than the USAS-12 and by doing so, has
violated its right to due process of
law and equal protection of the laws, rights guaranteed by the
Fifth Amendment to the United States Constitution. To prevail on
this claim, which essentially asserts that ATF unequally applied a
facially neutral statute, Gilbert must prove intentional dis-
crimination. E & T Realty v. Strickland, 830 F.2d 1107, 1112 (11th
Cir.1987), cert. denied, - U.S. -, 108 S.Ct. 1225, 99 L.Ed.2d 425
(1988). "Unequal administration of facially neutral legislation
can result from either misapplication (i.e., departure from or
distortion of the law) or selective enforcement (i.e., correct
enforcement in only a fraction of cases). In either case, a
showing of intentional discrimination is required." Id. at 1113.
"Even arbitrary administration of a statute, without purposeful
discrimination, does not violate the equal protection clause." Id.
at 1114, citing Jurek v. Estelle, 593 F.2d 672, 685 n. 26 (5th
Cir.1979) (where plaintiff alleges "arbitrary and capricious"
administration of statute, plaintiff still must prove intentional
discrimination), issue vacated without being addressed, 623 F.2d
929, 931 (5th Cir.1980) (en banc), cert. denied, 450 U.S. 1014, 101
S.Ct. 1724, 68 L.Ed.2d 214 (1981). In the instant case, plaintiff
has simply not proved that ATF intentionally discriminated against
it in the agency's application of Section 925(d). It would simply
be anomalous for this Court to find on the one hand that ATF's
administration of Section 925(d) of the Gun Control Act was not
arbitrary and capricious only then to find that ATF, in
administering the statute, intentionally discriminated against
Gilbert in violation of the equal protection clause of
the Fifth Amendment.
Additionally, the Magistrate finds that Gilbert has neither
sufficiently alleged nor proven that a liberty or property interest
was deprived by ATF's actions for the purpose of establishing a
procedural due process violation. The Magistrate has some
difficulty in surmising the exact property or liberty interest
alleged here. In its complaint, Gilbert argues that ATF deprived
it of its right to import the USAS-12 as a sporting weapon under
Section 925(d)(3) of the Gun Control Act, as amended. Even if that
Section arguably creates a right in a person to import arms, said
right is activated only after the firearm sought to be imported is
shown to be particularly suitable or readily adaptable to sporting
purposes. Given this, the Magistrate cannot find that the statute
creates an absolute right to relief sufficient to constitute a
property or liberty interest. See Thompson v. Dereta, 549 F.Supp.
297, 299 (D. Utah 1982), appeal dismissed, 709 F.2d 1343 (10th
Cir.1983). [footnote 27]
B. Second Amendment. The Second Amendment to the United
States Constitution guarantees to all Americans the right "to keep
and bear arms" and further provides that this right "shall not be
infringed." U.S. Const. Amend. II. Plaintiff alleges that the
right to keep and bear arms includes the right to manufacture,
import, sell and purchase firearms and to the extent that 18 U.S.C.
section 925(d)(3) allows ATF not to authorize importation of the
USAS-12 on the ground that it is not a sporting shotgun, said code
section infringes upon the right of the people to keep arms and is
thus unconstitutionally void. In the context of this case, the
Magistrate is concerned with whether the Second Amendment's right
to keep arms necessarily involves the right to import firearms.
The plaintiff, of course, desires to bootstrap the right to import
firearms to the right to keep and bear arms. However, Gilbert has
cited this Court to no authority, and the Court finds none, where
it has been found that the right to keep and bear arms necessarily
involves, or extends to, the right to import arms. Clearly, if
this Magistrate was to declare 18 U.S.C. section 925(d)(3) consti-
tutionally void, the Court would with one sweep of the pen destroy
over twenty years of effort to keep undesirable firearms from
flooding into the United States. This, the Magistrate will not do.
Accordingly, this Court finds that the right to keep and bear arms
does not extend to and include the right to import arms and further
finds that 18 U.S.C. section 925(d)(3) does not unconstitutionally
impinge on the right to keep and bear arms. [footnote 28]
CONCLUSION
Considering the foregoing discussion, the Magistrate
recommends that the defendant's motion for summary judgment be
granted and that plaintiff's cross-motion for summary judgment be
denied.
The attached sheet contains important information regarding
objections to this recommendation.
DONE this 12th day of January, 1989.
MAGISTRATE'S EXPLANATION OF PROCEDURAL RIGHTS AND RESPONSIBILITIES
FOLLOWING RECOMMENDATION, AND FINDINGS CONCERNING NEED FOR TRAN-
SCRIPT
1. Objection. Any party who objects to this recommendation or
anything in it must, within ten days of the date of service of this
document, file specific written objections with the Clerk of this
court. Failure to do so will bar later attack or review of
anything in the recommendation. See 28 U.S.C. section
636(b)(1)(C); Thomas v. Arn, 474 U.S. 140, 106 S.Ct. 466, 88
L.Ed.2d 435 (1985); Nettles v. Wainwright, 677 F.2d 404 (5th
Cir.1982) (en banc). The procedure for challenging the findings
and recommendations of the Magistrate is set out in more detail in
Local Rule 26(4)(b), which provides that:
Any party may object to a magistrate's proposed findings,
recommendations or report made under 28 U.S.C. section
636(b)(1)(B) within ten (10) days after being served with a
copy thereof. The appellant shall file with the Clerk, and
serve on the magistrate and all parties, written objections
which shall specifically identify the portions of the proposed
findings, recommendations or report to which objection is made
and the basis for such objections. A judge shall make a de
novo determination of those portions of the report or
specified proposed findings or recommendation to which objec-
tion is made and may accept, reject, or modify in whole or in
part, the findings or recommendations made by the magistrate.
The judge, however, need conduct a new hearing only in his
discretion or where required by law, and may consider the
record developed before the magistrate, making his own
determination on the basis of that record. The judge may also
receive further evidence, recall witnesses or recommit the
matter to the magistrate with instructions.
A Magistrate's recommendation cannot be appealed to a Court of
Appeals; only the District Judge's order or judgment can be
appealed.
2. Transcript (applicable Where Proceedings Tape Recorded).
Pursuant to 28 U.S.C. section 1915 and FED.R.CIV.P. 72(b), the
Magistrate finds that the tapes and original records in this case
are adequate for purposes of review. Any party planning to object
to this recommendation, but unable to pay the fee for a transcript,
is advised that a judicial determination that transcription is
necessary is required before the United States will pay the cost of
the transcript.
FOOTNOTES
1. Plaintiff opines that the right to keep and bear arms includes
the right to manufacture, import, sell and purchase arms.
2. The Gun Control Act of 1968, as amended, 18 U.S.C. sections 921-
29, along with the National Firearms Act of 1934, as amended, 26
U.S.C. Chapter 53, regulate the importation of firearms.
3. The underlined words were deleted in 1986.
4. However, Section 925(d) of the Act was not meant to interfere
with the bringing in of "currently produced firearms, such as
rifles, shotguns, pistols or revolvers of recognized quality which
are used for hunting and for recreational purposes, or for personal
protection." Id. (emphasis added).
5. Apparently at that time, the thrust of the legislation was to
keep Saturday Night Specials and other cheaply made revolvers out
of the United States.
6. In fact, opponents of the bill contended that USSCAN at 2306 ("Individual Views of Messrs. Dirksen,
Hruska, Thurmond and Burdict on Title IV.").
7. At the initial meeting of the Firearms Advisory Panel it was
clearly understood that the role of the panel would be advisory
only and that it was the responsibility of ATF to make final
"sporting purposes" determinations. (A.R. 103).
8. The panel did, however, recommend the adoption of factoring
criteria to evaluate the various types of handguns based upon such
considerations as overall length of the firearm, caliber, safety
features, et cetera, and an evaluation sheet (ATF Form 4590) was
thereafter developed and used for the purpose of evaluating
handguns pursuant to Section 925(d)(3). The development of a
specific evaluation sheet for handguns emphasizes the concern of
many in the late 1960's of the proliferation of the cheaply
manufactured Saturday Night Specials.
9. Section 706 of the Act also provides for de novo review. 5
U.S.C. section 706(2)(F). However, de novo review is authorized
only in the following circumstances: (1) when the action is adju-
dicatory in nature and the agency factfinding procedures are
inadequate; and (2) when issues that were not before the agency are
raised in a proceeding to enforce nonadjudicatory agency action.
Citizens to Preserve Overton Park, Inc. v. Volpe, 401 U.S. 402,
413, 91 S.Ct. 814, 823, 28 L.Ed.2d 136 (1971). The facts of the
instant case do not fit within one of these two situations and
therefore de novo review is not warranted.
10. This deferential standard of review presumes the validity of
the agency action. Manasota-88, Inc. v. Thomas, 799 F.2d 687, 691
(11th Cir. 1986).
11. Put still another way, for a court to affirm an agency's
actions, said court need only determine that the agency had a
rational basis for its decision. Manasota-88, Inc. v. Thomas, 799
F.2d 687, 691 (11th Cir.1986).
12. The Circuit Court for the District of Columbia has stated that
"[i]t is only where there is no rational nexus between the facts
found and the choice made that a court is authorized to set aside
the agency determination." Certified Color Manufacturers Assn v.
Mathews, 543 F.2d 284, 294 (D.C.Cir.1976). Additionally, the Su-
preme Court has noted that when a purely factual question within
the area of competence of an administrative agency created by
Congress is considered and when "resolution of that question
depends on 'engineering and scientific' consideration," the
relevant agency's technical expertise and experience, is recognized
and its analysis is deferred to "unless it is without substantial
basis in fact." FPC v. Florida Power & Light Co., 404 U.S. 453,
463, 92 S.Ct. 637, 644, 30 L.Ed.2d 600 (1972).
13. The Bureau has provided this Court with additional explanation
of the reasons for its decision through the declarations of Edward
M. Owen, Jr. and William T. Drake.
14. Specifically, where classification of a 12 gauge shotgun under
Section 925(d)(3) is involved, ATF looks to see whether the firearm
is the "type" of 12-gauge shotgun which is generally recognized as
suitable for traditional shotgun sporting purposes such as hunting,
and trap and skeet shooting, or is as suitable for recognized
sporting activities as the type which is generally recognized.
(Drake Dec. para. 8); (Owen Dec. para. 7).
15. Thus, ATF argues that while hunting, and trap and skeet
shooting have been recognized shotgun "sports" for centuries, and
target shooting a recognized handgun and rifle "sport," events such
as police combat competitions only recently have generated interest
outside the military and law enforcement arena and may or may not
attain general recognition as sports." (Drake Dec. para. 8); (Owen
Dec. para. 33).
16. Senator Dodd pointed out that the intent of the legislation
was to "[regulate] the importation of firearms by excluding surplus
military handguns, and rifles, and shotguns that are not truly
suitable for sporting purposes." 114 Cong. Rec. S 5586, Co. 2 (May
15, 1968) (Statement of Sen. Dodd).
17. During the Congressional discussions leading to enactment of
the Gun Control Act, Senator Dodd and Senator Hansen engaged in the
following colloquy concerning the meaning of "sporting purposes":
MR. HANSEN: Would the Olympic shooting competition be a
"sporting purpose?"
MR. DODD: I would think so.
MR. HANSEN: What about trap and skeet shooting?
MR. DODD: I would think so. I would think that trap and skeet
shooting would certainly be a sporting activity.
MR. HANSEN: Would the Camp Perry national matches be
considered a "sporting purpose?"
MR. DODD: Yes, that would not [sic] fall in that arena. It
should be described as a sporting purpose.
MR. HANSEN: I understand the only difference is in the type of
firearms used at Camp Perry which includes a wide variety of
Military types as well as commercial. Would all of these
firearms be classified as weapons constituting a "sporting
purpose?"
MR. DODD: No, I would not say so. I think when we get into
that, we definitely get into a military type of weapon for use
in matches like those at Camp Perry; but I do not think it is
generally described as a sporting weapon. It is a military
weapon. I assume they have certain types of competition in
which they use these military weapons as they would in an
otherwise completely sporting event. I do not think that fact
would change the nature of the weapon from a military to a
sporting one.
MR. HANSEN: Is it not true that military weapons are used in
Olympic Competition also?
MR. DODD: I do not know. Perhaps the Senator can tell me. I
am not well informed on that.
MR. HANSEN: It is my understanding that they are. Would the
Senator be inclined to modify this response if I say that is
true? (27461)
MR. DODD: It is not that I doubt the Senator's word. Here
again I would have to say that if a military weapon is used in
a special sporting event, it does not become a sporting
weapon. It is a military weapon used in a special sporting
event. I think the Senator would agree with that. I do not
know how else we could describe it ...
MR. HANSEN: If I understand the Senator correctly, he said
that despite the fact that a military weapon may be used in a
sporting event, it did not by that action, become a sporting
rifle. It [sic] that correct?
MR. DODD: That would seem right to me.... As I said previously
the language says no firearms will be admitted into this
country unless they are genuine sporting weapons.... I think
the Senator and I know what a genuine sporting gun is.
114 Cong.Rec., 90th Cong., 2d Sess. Pt. 21, 27461-62 (September
18, 1968).
The Firearms Advisory Panel also made it clear that not every
activity in which ammunition is expended and persons participate
would be considered a sport for purposes of importation (eg.,
"'plinking' was described as a pastime by the panel since any
firearm that could expel a projectile could be used for this
purpose without having any characteristic generally associated with
target guns."). (A.R. 103).
18. The Bureau spokesmen note that traditional 12-gauge sporting
shotguns on average weigh 7-8 pounds and rarely, if ever, exceed
ten pounds.
19. In fact, ATF claims that the width of the drum magazine is
similar to the drum-fed machine guns and other specialized weapons
used by the military and law enforcement. Gilbert notes that the
agency's findings are somewhat misleading inasmuch as the width of
the USAS with box magazine is only 1 1/4 inches.
20. The Bureau argues that a large magazine capacity and rapid
reloading are military features. Gilbert has indicated that ATF
has falsely stated that the USAS-12 box magazine holds twelve
shells. Instead, Gilbert states that a seven round magazine was
produced for ATFs inspection. Additionally, Gilbert argues that
box magazines can be lengthened or shortened depending on desired
shell capacity.
21. Gilbert has denied that the model it sent to ATF, for
examination and testing to determine if the weapon is a sporting
shotgun, was fitted with a bayonet lug.
22. The Bureau claims that despite the plaintiff's evidence on the
initial application from a number of people who wanted to use the
weapon for deer hunting, target shooting, etcetera, it was
reasonable for the agency to conclude that the firearm's
predominant physical features (i.e., weight, size, etcetera) are
not those of a sporting firearm. The agency argues that almost all
firearms currently being manufactured have a conceivable sporting
purpose but that this particular firearm is simply not of "a type
[of firearm] ... generally recognized as particularly suitable for
or readily adaptable to sporting purposes."
23. In 1982, ATF determined that police combat competitions could
be considered a "sport" under Section 925(d)(3) and found the
Franchi SPAS-12 shotgun to be particularly suitable for that sport
(A.R. 169). However, in 1984, ATF reversed that position and
refused to classify the Striker-12 as a "sporting" weapon (A.R.
175). The Bureau maintains that the policy change was made in 1984
because police combat competitions had not by 1984, and still have
not, gained general recognition as "sports." ATF claims that due to
administrative oversight, the importers of the Franchi SPAS-12 have
continued to receive agency permission to import that firearm.
However, on December 8, 1988, just one day prior to oral argument
in this cause on the cross-motions for summary judgment, ATF sent
a letter to the importers of the SPAS-12 informing them that future
applications for importation of that firearm will be considered on
a case-by-case basis.
24. This Court gives due deference to the agency's interpretation
of Section 925(d)(3), inasmuch as there are no compelling indications
that said interpretation is wrong. See, eg., Smith, supra, 766 F.2d
at 1519.
25. If ATF was not allowed to make distinctions between firearms
and exclude those that are more clearly military than sporting,
that agency would be reduced to a nonentity so far as importation
under Section 925(d)(3) is concerned.
26. In deciding that ATF's decision was not arbitrary and
capricious, this Court means to suggest that defendant's conclusion
was warranted by the facts and was based on the facts borne out by
the administrative record. (See Count Three of the Complaint).
27. For a person to obtain a protectable property interest, he must
have a legitimate claim of entitlement to it.
Gilbert has not proven that it has a legitimate claim of
entitlement to the importation of the USAS-12 sufficient to create
a property or liberty interest.
28. That is, this court finds no absolute constitutional right of
an individual to import firearms. Thompson v. Dereta, 549 F.Supp.
297, 299 (D. Utah 1982) Cf. ("There is no absolute constitutional
right of an individual to possess a firearm."), appeal dismissed,
709 F.2d 1343 (10th Cir.1983); United States v. Swinton, 521 F.2d
1255, 1259 (10th Cir.1975) (in a criminal case, the defendant was
convicted of engaging, without a license, in the business of
dealing in firearms and the court in part held that there is no
absolute constitutional right of an individual to possess a
firearm), cert. denied, 424 U.S. 918, 96 S.Ct. 1121, 47 L.Ed.2d 324
(1976). | http://thegunwiki.com/PoliticalTimeline/RefCaseGilbert1989 | CC-MAIN-2020-05 | refinedweb | 12,447 | 53.1 |
Write a string to stdout
#include <stdio.h>

int puts( const char *buf );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The puts() function writes the character string pointed to by buf to the stdout stream, and appends a newline character to the output. The terminating NUL character of buf isn't written.
A nonnegative value for success, or EOF if an error occurs (errno is set).
#include <stdio.h>
#include <stdlib.h>

int main( void )
{
    FILE *fp;
    char buffer[80];

    fp = freopen( "file", "r", stdin );
    while( gets( buffer ) != NULL ) {
        puts( buffer );
    }
    fclose( fp );

    return EXIT_SUCCESS;
}
ANSI, POSIX 1003.1
errno, fputs(), gets(), putc() | https://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/p/puts.html | CC-MAIN-2019-22 | refinedweb | 115 | 68.87 |
The C++ compiler also supports the STLport and Apache stdcxx implementations of the standard library. See 12.2 STLport and 12.3 Apache stdcxx Standard Library for more information.
You can replace most of the standard library and its associated headers. The replaced library is libCstd, and the associated headers are the following:
<algorithm> <bitset> <complex> <deque> <fstream> <functional> <iomanip> <ios> <iosfwd> <iostream> <istream> <iterator> <limits> <list> <locale> <map> <memory> <numeric> <ostream> <queue> <set> <sstream> <stack> <stdexcept> <streambuf> <string> <strstream> <utility> <valarray> <vector>
Give the replacement library a name that differs from libCstd. (Usually library names start with “lib”.)
On each compilation, use the -I option to point to the location where the headers are installed. In addition, use the -library=no%Cstd option to prevent finding the compiler’s own versions of the libCstd headers. For example:
example% CC -I/opt/mycstd/include -library=no%Cstd... (compile)
When linking, use the -library=no%Cstd option together with the -L and -l options to point to the replacement library. For example:
example% CC -library=no%Cstd -L/opt/mycstd/lib -lmyCstd... (link)
Alternatively, you can use the full path name of the library directly, and omit using the -L and -l options. For example:
example% CC -library=no%Cstd /opt/mycstd/lib/libmyCstd.a... (link)
During linking, the -library=no%Cstd option prevents linking the compiler’s own version of libCstd.
C has 17 standard headers (<stdio.h>, <string.h>, <stdlib.h>, and others). These headers are delivered as part of the Oracle Solaris operating system in the directory /usr/include. C++ has those same headers, with the added requirement that the various declared names appear in both the global namespace and in namespace std. As described in 11.7.5 Standard Header Implementation, you do not need to supply SUNWCCh versions of the replacement headers described in 11.7.3 Installing the Replacement Library. However, if you run into some of the described problems, the recommended solution is to add symbolic links having the suffix .SUNWCCh for each of the unsuffixed headers. That is, for file utility, you would run the following command:
example% ln -s utility utility.SUNWCCh
When the compiler looks first for utility.SUNWCCh, it will find it, and not be confused by any other file or directory called utility.
Replacing the standard C headers is not supported. If you nevertheless want to supply your own versions of the C headers, create .SUNWCCh versions of them with commands like the following:
example% ln -s stdio.h stdio.h.SUNWCCh example% ln -s cstdio cstdio.SUNWCCh. | http://docs.oracle.com/cd/E24457_01/html/E21991/bkajr.html | CC-MAIN-2013-20 | refinedweb | 374 | 50.43 |
This section compares the POSIX threads and Solaris threads interfaces. When the comparable interface is not available either in POSIX threads or in Solaris threads, the `—' character appears in the table column.
Table 5–1 Comparing POSIX and Solaris fork() Handling
The scope of setjmp() and longjmp() is limited to one thread, which is acceptable most of the time. However, the limited scope does mean that a thread that handles a signal can execute a longjmp() only when a setjmp() is performed in the same thread.
The fair share scheduler (FSS) scheduling class allows allocation of CPU time based on shares.
The normal range of fair share scheduler class priorities is -60 to 60, which get mapped by the scheduler into dispatch priorities in the same range (0 to 59) as the TS and IA scheduling classes. All LWPs in a process must run in the same scheduling class. The FSS class schedules individual LWPs, not whole processes. Thus, a mix of processes in the FSS and TS/IA classes could result in unexpected scheduling behavior in both cases.
The TS/IA or the FSS scheduling class processes do not compete for the same CPUs. Processor sets enable mixing TS/IA with FSS in a system. However, all processes in each processor set must be in either the TS/IA or the FSS scheduling class.
The FX, fixed priority, scheduling class assigns fixed priorities and time quantum not adjusted to accommodate resource consumption. Process priority can be changed only by the process that assigned the priority or an appropriately privileged process. For more information about FX, see the priocntl(1) and dispadmin(1M) man pages.
The normal range of fixed priority scheduler class priorities is 0 to 60, which get mapped by the scheduler into dispatch priorities in the same range (0 to 59) as the TS and IA scheduling classes.
This section describes the operations on signals.
pthread_sigmask(3C) does for a thread what sigprocmask(2) does for a process: it sets the calling thread's signal mask. With sigwait(2), you can create one thread that listens for asynchronous signals while the other threads you create block any asynchronous signals sent to this process.
The following example shows the syntax of sigwait() .
#include <signal.h>

int sigwait(const sigset_t *set, int *sig);
When the signal is delivered, sigwait() clears the pending signal and returns its number in the location pointed to by sig.
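To make the pattern concrete, here is a small sketch (not part of the original guide): the main thread blocks SIGINT before creating any other threads, and a dedicated thread retrieves the signal synchronously with sigwait(). The choice of SIGINT and the overall thread structure are assumptions for illustration; compile with -lpthread (or the -mt option on Solaris).

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

/* Dedicated signal-handling thread: waits synchronously for signals. */
static void *signal_waiter(void *arg)
{
    sigset_t *set = (sigset_t *)arg;
    int sig;

    for (;;) {
        if (sigwait(set, &sig) == 0)
            printf("received signal %d\n", sig);
    }
    return NULL;
}

int main(void)
{
    sigset_t set;
    pthread_t tid;

    sigemptyset(&set);
    sigaddset(&set, SIGINT);

    /* Block SIGINT in the main thread; threads created afterwards
       inherit this mask, so only the waiter thread sees the signal. */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_create(&tid, NULL, signal_waiter, &set);

    /* ... create worker threads and do the real work here ... */

    pthread_join(tid, NULL);
    return 0;
}

Because every thread created after the pthread_sigmask() call inherits the mask, no worker thread is interrupted asynchronously; only the dedicated thread ever sees SIGINT, and it can handle it with ordinary (async-signal-unsafe) code such as printf().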
05 June 2012 14:37 [Source: ICIS news]
TOKYO (ICIS)--Global demand for ethylene derivative products is expected to total 157.5m tonnes in 2016, a 30.4% increase from 120.9m tonnes in 2010, according to data from Japan's Ministry of Economy, Trade and Industry (METI).
The average annual demand growth rate for ethylene derivatives over the period is expected to be 4.5%, METI said.
The derivative products include low-density polyethylene (LDPE), high-density polyethylene (HDPE), styrene monomer (SM), ethylene glycol (EG) and polyvinyl chloride (PVC).
This represented an average growth rate of 5.8% per year between 2010 and 2016, led by growth in
Demand in
In
The predictions are based on each country's outlook for chemicals production.
A total of 154.8m tonnes of ethylene derivatives are estimated to be produced globally in 2016, an increase from 122.7m tonnes in 2010, the data showed.
Among Asia’s total,
Meanwhile, the global capacity of ethylene derivatives is expected to increase to 169.1m tonnes in 2016 from 146.1m tonnes as of December 2010, growing by 2.5% per annum on average, the METI said.
The prediction was based on new projects for ethylene derivative plants globally, which are expected to come on stream by 2016, the ministry added.
Among the global total, the capacity of ethylene derivatives in
The ministry did not state the reason for
The global capacity for ethylene was expected to rise to 173.2m tonnes in 2016 from 144.9m tonnes in 2010, an average growth of 3.0% per annum, based on the ministry data. | http://www.icis.com/Articles/2012/06/05/9566596/global-ethylene-derivatives-demand-to-rise-30-from-2010-to-2016.html | CC-MAIN-2014-42 | refinedweb | 256 | 60.11 |
I wanted to find it. I ask people, what are you reading on Saturday morning when everyone else is still sleeping? That’s probably your passion. It made me think that I can probably data mine my passion in browser history and email text. I know qualitatively what I love to do (aside from family time & running), but I always want some quantitative confirmation. This is how I attacked my browser history and then my giant box of gmails.
Chrome History
I start here –. That post, using the terminal (or command prompt) and SQLite, helps me get my Chrome browser history into a CSV file for the time period after January 1, 2016. Then, a few simple lines in python with pandas and the built-in urlparse python package let me discover my most visited URL’s this year…
import pandas as pd
import numpy as np
from urlparse import urlparse

data = pd.read_csv('history.csv')

# define function to get base url and add column to dataframe
def getUrl(url):
    parse_object = urlparse(url)
    return parse_object.netloc

data['cleanurl'] = np.vectorize(getUrl)(data['url'])

# count visits to each url
data['cleanurl'].value_counts()
Result from top 20 (cleaned up with Google Sheets)…
Ok. The first five links don’t count to me… I check those sites every five minutes (which is a problem in its own right). After that, I see two rather distinct categories: data science / computing / programming and learning (all highlighted in yellow). It’s telling, but is my passion one or the other… or is it a combination? Learning data science? Data science learning? Learning with computers? This is a good start… maybe my emails can tell me more.
Gmail Mining
For this one I start here –. This is a bit more challenging because I am working with a previously unfamiliar file format (mbox) and the file is 5.6GB… not huge but no longer small. Fortunately, Fletcher Heisler’s post does a fantastic job helping me get from gmails to mbox to pandas dataframe.
The resulting dataframe has 113,776 rows and 8 columns which are
u'subject', u'body', u'from', u'to', u'date', u'labels', u'epilogue'
and the date range is between January 2007 and September 2016. In order to get some telling information from the data I look at A) who are the senders of emails I am actually reading and B) a high level NLP analysis of the subject and body text of emails I’m reading.
To get started, I immediately do four things to the dataframe:
- Filter only read emails. The column named ‘label’ tells me if the message is read, unread, sent, chat, etc. Obviously, I’m most interested in the emails that I actually open.
- Parse the date as datetime objects so I can analyze changes over time.
- Create a new column that transforms the ‘from’ column to only the domain of that sender (i.e. ‘[email protected]’ becomes ‘gmail.com’).
- I filter out the most common domains such as
['gmail.com', 'yahoo.com', 'aol.com', 'hotmail.com', 'us.af.mil']
That last one is because my wife is in the US Air Force and if I don’t remove it the analysis is all about our email communication : )
Then I look at the domains of the senders that I’m opening (reading) most frequently by year. I highlight in yellow the domains that are definitely data science and/or learning related.
Over the past five years, I’ve gotten more and more into data everything and this time-lapse analysis confirms it. In 2012/2013, I was much more interested in financial markets (Thomson Reuters daily briefing and email briefings from Financial Times) and running (stanfordalumni.org was my running coach at NBSV, NYAC.com was my running coach at New York Athletic Club, and optonline.com is the current running coach at Iona College). But then, in 2013, I discovered Coursera and r programming. I rarely receive an email from a data-related entity that I don’t open and at least skim through. The most common word in the subject line of my emails over the past few years is ‘data.’
In the table above I again highlight in yellow the words that are definitely data science and/or learning related. In 2013, I took an online course called ‘Maps and the Geospatial Revolution.’ In 2012, every morning I was reading the European version of a financial markets newsletter called ‘The Morning Benchmark’ that arrived in my inbox at 6AM. A more clear picture starts to form when I explore frequent bi-grams (words that appear together most frequently) in the body text of my emails over the past three years. Here are the top 20…
[(('data', 'science'), 0.0017113140105257248), (('bg', 'bg'), 0.0013495052160767866), (('mountain', 'view,'), 0.0008504586030437685), (('view,', 'ca'), 0.0008296649941673928), (('read', 'more:'), 0.0007963952199651915), (('big', 'data'), 0.0007340143933360642), (('rights', 'reserved.'), 0.0007236175888978764), (('data', 'analyst'), 0.0006113321009654473), (('new', 'york'), 0.0005739036049879709), (("o'reilly", 'media,'), 0.0005739036049879709), (('copyright', '(c)'), 0.0005572687178868703), (('machine', 'learning'), 0.00054895127433632), (('reply', 'directly'), 0.0005468719134486825), (('new', 'york,'), 0.0005052846956959309), (('data', 'scientist'), 0.0004782530041566424), (('data', 'visualization'), 0.00047617364326900483), (('visit', 'support'), 0.00047409428238136725), (('4', 'et'), 0.00047201492149372967), (('@', '4'), 0.00047201492149372967), (('briefing', 'room'), 0.0004636974779431794)]
Some of the above bigrams make sense and others don’t, primarily because the bigrams occur in the header or footer of a commonly read newsletter. The ones that stand out to me are data science, big data, o’reilly media, machine learning, data scientist, and data visualization.
At this point, I’ve convinced myself (quantitatively) that I’m passionate about data-driven problem-solving and I’m passionate
about sharing methodologies in which to do that (see Update below). In fact, I recently started as an Udacity Forum Mentor to, hopefully, help others improve their technical skills. Now, James and Ramit, what experiments can I run to see how I can monetize this passion?
Update 9-10-2016: I was listening to Derek Sivers (and James Altucher) earlier this week and something he said lit up a bright light. He was talking about his core value(s). He said his core value is “learning things for the sake of creating things which is for the sake of learning things which then is for the sake of creating things. That loop is a thing to me that should be one word.” And I thought about this and listened to it again and again. For the first time in years I thought that I may not love teaching for the sake of teaching… but I love teaching because it allows me to constantly learn.
It makes sense to me that my passion is learning things for the sake of producing creative data-driven solutions which is for the sake of learning things which then is for the sake of producing creative data-driven solutions. Today, I’m extremely happy with that.
For those wanting to help me improve the aesthetics or efficiency of my code, here is the ipython notebook for the analysis of the gmail dataframe (after following Fletcher Heisler’s post and getting mbox data into pandas dataframe)… | http://www.frank-corrigan.com/2016/09/07/finding-my-passion-mining-browser-history-and-emails/ | CC-MAIN-2019-04 | refinedweb | 1,191 | 74.59 |
umad_send man page
umad_send — send umad
Synopsis
#include <infiniband/umad.h>

int umad_send(int portid, int agentid, void *umad, int length,
              int timeout_ms, int retries);
Description
umad_send() sends length bytes from the specified umad buffer from the port specified by portid, and using the agent specified by agentid.
The buffer can contain a RMPP transmission which is larger than a single MAD packet when the agentid specifies a class which utilizes RMPP and the header flags indicate RMPP is active. NOTE currently only RMPPFlags.Active is meaningful in the header in user space. All other RMPP fields are ignored. The data section of the buffer will be sent in multiple RMPP MAD packets with headers built for the user.
timeout_ms controls the solicited MADs behavior as follows: zero value means not solicited. Positive value makes kernel indicate timeout in milliseconds. If a reply is not received within the specified time, the send is retried up to retries times; after the retries are exhausted, the original request is returned on the receive path with its status field indicating the timeout.
Return Value
umad_send() returns 0 on success; on error, errno is set and a negative value is returned as follows:
-EINVAL invalid port handle or agentid
-EIO send operation failed
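Example
This page does not include an example, so the sketch below shows roughly where umad_send() sits in a typical libibumad flow. The surrounding calls (umad_open_port(), umad_register(), umad_set_addr(), umad_get_mad()) and every constant used (management class 0x04, LID 1, QP 1, the GSI qkey, the timeout and retry counts) are illustrative assumptions rather than values taken from this page; check the corresponding man pages before reuse.

#include <infiniband/umad.h>
#include <stdio.h>
#include <stdlib.h>

#define MGMT_CLASS   0x04      /* assumed: IB performance management class */
#define MGMT_VERSION 1
#define MAD_LEN      256       /* length of one MAD, umad header excluded */

int main(void)
{
    umad_init();

    /* NULL / 0 selects the first available CA and port. */
    int portid = umad_open_port(NULL, 0);
    if (portid < 0) {
        fprintf(stderr, "umad_open_port failed: %d\n", portid);
        return EXIT_FAILURE;
    }

    /* Register a management class; RMPP version 0, no method mask. */
    int agentid = umad_register(portid, MGMT_CLASS, MGMT_VERSION, 0, NULL);
    if (agentid < 0) {
        fprintf(stderr, "umad_register failed: %d\n", agentid);
        return EXIT_FAILURE;
    }

    void *umad = umad_alloc(1, umad_size() + MAD_LEN);
    /* ... build the MAD header and payload at umad_get_mad(umad) ... */

    /* Destination LID 1, QP 1, SL 0, default GSI qkey (assumptions). */
    umad_set_addr(umad, 1, 1, 0, 0x80010000);

    /* Solicited send: wait up to 100 ms for a reply, retry 3 times. */
    int rc = umad_send(portid, agentid, umad, MAD_LEN, 100, 3);
    if (rc < 0)
        fprintf(stderr, "umad_send failed: %d\n", rc);

    umad_free(umad);
    umad_unregister(portid, agentid);
    umad_close_port(portid);
    return rc < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}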
Author
Hal Rosenstock <[email protected]> | https://www.mankier.com/3/umad_send | CC-MAIN-2018-09 | refinedweb | 180 | 53.81 |
[Solved] display number for device type LinuxFb
Hi Guys,
I am running my embedded Qt application with the options specified below, and the application is not displayed on the Linux based hardware.
./myapplication -qws -display "LinuxFb"
In the "display management":, there is no description for <device> type LinuxFb.
Please suggest, what should I specify for the display number in the options above ?
Which display driver will be loaded and linked with the application in the Linux machine ?
Thanks in advance,
Sachin
Hi,
IIRC, device should be something like /dev/ttyX where X is the number of the console where your application should start.
Hello SGaist,
The below parameters seems to be working on my device.
./myapplication -qws -display “LinuxFb:/dev/fb0”
But, I could only see a small arrow on the device screen.
Do I need to set any width and height for the screen ? If yes, How do I set these values ?
Thanks,
Sachin
It depends on the screen support you have for your system. You can use the variables described "here" to set your application environment.
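For reference, the same variables can also be set from inside the program before QApplication is constructed; the values below (LinuxFb:/dev/fb0 and 480x272) are only examples taken from this thread, and QApplication::GuiServer plays the role of the -qws switch:

@#include <QApplication>
#include <QPushButton>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    // Same effect as -display "LinuxFb:/dev/fb0" on the command line;
    // must be set before QApplication is constructed.
    setenv("QWS_DISPLAY", "LinuxFb:/dev/fb0", 1);
    // Force a screen size if the framebuffer driver does not report one.
    setenv("QWS_SIZE", "480x272", 1);

    QApplication a(argc, argv, QApplication::GuiServer); // same as -qws
    QPushButton button("Hello Emb World");
    button.show();
    return a.exec();
}@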
Hello SGaist,
I tried setting the QWS_SIZE, still the result is same.
This is the simple test code.
@#include <QApplication>
#include <QPushButton>
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QPushButton button("Hello Emb World");
button.show();
return a.exec();
}@
Thanks,
Sachin
What is you screen resolution ?
If you have doubts that Qt uses the right values, you could write a little application that shows the size of a widget when maximized
Hello SGaist,
screen resolution is 480x272.
The application is hanging or something when "button.show(); " is called.
So, the debug log which is printed after the show() call, is not coming.
There seems to be some problem with window manager, but I could not find it out.
Thanks,
Sachin
You could try to start your application remotely through ssh, that should show you the log while the application is running
Hello SGaist,
I am able to see the logs, but the actual problem is in the show() function call.
The application is hanging when the show() is called on the widget.
The same application works fine on the desktop, but when I run the application on the target, it hangs at show() call and in the display, I could see only the image of an arrow.
Thanks,
Sachin
Is it hanging or not painting properly ?
Hello SGaist,
It was the problem with font I guess.
When I set the font directory path using QT_QWS_FONTDIR and pass "-font unifont" while launching the application, it is launching and working properly.
Thanks for your support.
Sachin
You welcome !
Glad you found out :)
Since it's all working now, don't forget to update the thread's title to solved so other forum users may know that a solution has been found :) | https://forum.qt.io/topic/30972/solved-display-number-for-device-type-linuxfb/5 | CC-MAIN-2019-04 | refinedweb | 470 | 72.36 |
Introduction
You have just gone through the process of sequencing a package, you go to deploy it in your test environment and you get an error message that says “The operation could not be completed successfully because a WMI class or provider in the package is already registered on the machine.” What do we do now? In this post, we will see how to identify WMI conflicts and methods for overcoming these conflicts without re-sequencing.
Identifying the problem
At the time of deployment, Server App-V performs a series of conflict detection checks to ensure that the package’s components and the machine’s components can coexist. If one of these checks fails, the package cannot be deployed to the target machine. For WMI components, this means that one or more of the classes or providers detected during sequencing is already present on the deployment machine. This often occurs when a non-essential process is accidentally left running during sequencing.
So how can we know that the add package failure is due to a WMI conflict? The error code for a WMI conflict is 1D-00003007 as shown in the Powershell screenshot below. A similar error is produced when deploying via Virtual Machine Manager.
Now that we know there was a WMI conflict, we need to determine which items in the package are actually conflicting with the native system. To do this, we open the Application event log and locate events with an Event ID of 13318 from the “App-V Services” event source. If more than one conflict was found, there will be an event log entry for each conflict.
From this event log message, we can determine which class or provider conflicted and in which namespace it resides. With this information we can begin the process of fixing the package.
Addressing the Conflict
The first thing that we need to determine is if the conflicting WMI classes and providers are part of the application itself. This is the most difficult part of this process and requires some knowledge of the application itself. Many applications publish information about their WMI classes and providers as part of their documentation. In other cases, it may be obvious that the class or provider captured belongs to another application. For the rest of this post, we will assume that the conflicting components are not part of the application (i.e. they shouldn’t be in the package).
There are two techniques for removing these components from the package, both of which leverage the package update functionality on the sequencer. The first technique is called “namespace exclusion” and provides the capability to remove entire WMI namespaces from the package. The second technique is a more targeted approach for removing individual entities within a namespace you need to keep.
NOTE Both of these techniques change the content of the package in a non-recoverable way. Backup your package before starting this process.
Namespace Exclusion
Namespace exclusion removes entire WMI namespaces from the package. This feature is not exposed in the user interface and must be configured through the registry.
From the previous section, we identified a conflict in the root\cimv2 namespace for the serverappv_collision_sample. After looking at the application’s documentation, we confirmed that the application does not put any WMI content in this namespace, so we can simply exclude it.
To do this, we open regedit on the sequencing machine and navigate to the HKEY_LOCAL_MACHINE\Software[\Wow6432Node]\Microsoft\Softgrid\4.5\ServerAppV key and edit the WmiNamespaceExclusions multistring value. This value consists of one namespace per line and is blank by default. We add “root\cimv2” to the value and close regedit.
Now, we start the Sequencer and open the existing package for update. We follow the Sequencing Wizard, selecting “Perform a custom installation” on the Select Installer step, and simply complete the wizard without installing anything. Once the wizard is completed, we are back in the main Sequencer window and can save the package. This package contains all of the previous content except the WMI components in the root\cimv2 namespace. When we deploy this package on the Agent machine, we no longer receive a WMI conflict (notice the ssrs2008_2.sft in the second Add-ServerAppVPackage call).
Targeted Removal
In some cases, an application may place classes or providers in the same namespace as other applications. If this occurs and we need to remove a specific set of WMI entities, we can use the package upgrade process as described above (without the registry change) and simply remove the problematic items from the WMI repository during the Installation phase. These components can be removed through Powershell. Below is an example of removing the serverappv_collision_sample class:
# WMI Connection Parameters
$savHost = "localhost"
$namespace = "root\cimv2"
$authLevel = "PacketPrivacy"
Remove-WmiObject -Authentication $authLevel -Namespace $namespace -Class "serverappv_collision_sample" -ComputerName $savHost
We can then complete the wizard, save the package and deploy the new version in the test environment.
Secondary Problems
During package upgrade, the same registration process that occurs on the Agent occurs on the Sequencer. This could result in a WMI class or provider that was captured in the package preventing the upgrade from succeeding. When this occurs, an error is received during the Installation phase of the Update wizard.
In this case, the sequencer log will contain an entry with the same message that was seen in the event log on the Agent. At this point, you can either find a machine that does not have the WMI entity causing the conflict or remove the conflicting component from the native system.
NOTE Removing the native component may result in a negative impact on the system’s function. This should only be done within a non-production virtual machine that you are willing to lose.
To remove a native component, follow the same process described in the Targeted Removal section above.
Conclusion
WMI conflicts can cause an otherwise usable package to fail deployment. By leveraging the package update process on the Sequencer, we can correct some of these issues without needing to completely re-sequence the application. The techniques described here can save a significant amount of time when trying to correct a minor sequencing issue.
Jeremy Dunker | Senior Software Design Engineer | Server App-V: | https://blogs.technet.microsoft.com/serverappv/2011/12/01/overcoming-wmi-deployment-conflicts-in-microsoft-server-app-v/ | CC-MAIN-2016-30 | refinedweb | 1,039 | 53.21 |
You'll need to install a library to make Arduino IDE support the module. This library includes drivers for the onboard MMA8491 accelerometer. The Turta_AccelTilt_Module library is responsible for reading the accelerometer and tilt data.
To use the library on Arduino IDE, add the following #include statement to the top of your sketch.
#include <Turta_AccelTilt_Module.h>
Then, create an instance of the Turta_AccelTilt_Module class.
Turta_AccelTilt_Module accel
Now you're ready to access the library by calling the accel instance.
To initialize the module, call the begin method.
begin()
This method configures the I2C bus and GPIO pins to read sensor data.
Returns the G value of the X axis.
double readXAxis()
Parameters
None.
Returns
Double: G Value of the X axis.
Returns the G value of the Y axis.
double readYAxis()
Parameters
None.
Returns
Double: G Value of the Y axis.
Returns the G value of the Z axis.
double readZAxis()
Parameters
None.
Returns
Double: G Value of the Z axis.
Returns the values of all axes in a single shot.
void readXYZAxis(double x, double y, double z)
Parameters
Double: x out
Double: y out
Double: z out
Returns
None
Returns the tilt state of all axes.
void readTiltState(bool xTilt, bool yTilt, bool zTilt)
Parameters
Bool: xTilt out
Bool: yTilt out
Bool: zTilt out
Returns
None
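Putting the calls above together, a minimal sketch (not one of the library's bundled examples) might look like the following. It assumes readTiltState() returns its values through reference parameters, as the out parameters listed above suggest, and it uses the per-axis read functions recommended in the troubleshooting note below.

#include <Turta_AccelTilt_Module.h>

// Driver instance for the accelerometer & tilt module.
Turta_AccelTilt_Module accel;

void setup() {
  Serial.begin(115200);
  // Configure the I2C bus and GPIO pins.
  accel.begin();
}

void loop() {
  // Read each axis individually (in G).
  double x = accel.readXAxis();
  double y = accel.readYAxis();
  double z = accel.readZAxis();

  // Read the tilt state of all axes.
  bool xTilt, yTilt, zTilt;
  accel.readTiltState(xTilt, yTilt, zTilt);

  Serial.print("X: "); Serial.print(x);
  Serial.print(" Y: "); Serial.print(y);
  Serial.print(" Z: "); Serial.print(z);
  Serial.print(" Tilt: ");
  Serial.print(xTilt); Serial.print(yTilt); Serial.println(zTilt);

  delay(100);
}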
You can open the examples from Arduino IDE > File > Examples > Examples from Custom Libraries > Turta Accel & Tilt Module. There are two examples for this sensor.
If you're experiencing difficulties while working with your device, please try the following steps.
Problem: When using the single shot reading function, the Y-axis returns 0 G.
Cause: There is a software communication error on the I2C bus.
Solution: It's a known issue, and we're working on fixing this bug. Until then, please use the single reading functions. It's not a malfunction.
07 January 2005 17:00 [Source: ICIS news]
The analysts predict a bright investment landscape with gaining pricing power, volume strength, and moderating energy prices.
CS First Boston analyst Bill Young is highlighting Dow Chemical among his six outperform-rated stocks as the best way to play the cycle. Last year, shares of Dow rose from $41.50 to $49.50.
"In the last three chemical cycles covering 30 years, the earliest the commodity firms have discounted the peak has been one year ahead of the earnings zenith," says the analyst. "Pricing and margin upside will remain primarily in the hands of the commodity players. Earnings are not likely to peak for such companies until year-end 2006 at the earliest, implying another year of outperformance."
Young estimates Dow's earnings/share (eps) will rise from $2.45 in 2004 to $3.90 in 2005, and has a price target of $62 on the stock.
Deutsche Bank analyst David Begleiter is also selecting Dow Chemical as his top pick.
"We're looking for a very strong commodity chemical cycle, particularly in the first half of 2005," he says. "The stock has done quite well in 2004, but the cycle is just now really gaining momentum. Supply-demand fundamentals are tight, price increases have been pushed through, and feedstock costs are moderating."
Begleiter expects Dow's profits to ramp up from $2.35 in 2004 to $4.35 in 2005, and has a price target of $60 on the stock.
Prudential Financial analyst Andrew Rosenfeld also highlights Dow Chemical as his top pick, after posting a 76% gain with his 2004 pick Nova Chemicals.
"Expanding margins should continue to drive significant earnings growth as we head into a commodity chemical cycle peak," says the analyst. "Dow's realigned cost structure should give the company significant operating leverage to the upcoming peak."
Rosenfeld expects Dow's earnings to jump from $2.65 in 2004 to $4.25 in 2005, and has a price target of $61 on the stock.
Goldman Sachs analyst Robert Koort's top pick for 2005 is DuPont, a company that lagged in stock performance in 2004, rising from around $46 to $49.
"We believe DuPont is on course for much improved financial performance following the conclusion of distracting M&A activity in the past several years and the emergence from a challenging industrial environment that together conspired to erode investor confidence and leave DuPont shareholders in the wake of better performance achieved by other chemical stocks," says Koort.
Koort sees an earnings recovery, driven by increased demand, better pricing and cost cutting. The analyst expects DuPont's eps to rise from $2.35 in 2004 to $2.80 in 2005.
Banc of America Securities analyst Kevin McCarthy, who had the best pick in 2004 with Monsanto (up 93%), is selecting Lyondell Chemical as his top pick.
"Lyondell is our favourite way to play the cycle, which we expect to strengthen through 2006 to 2007," he says. "We like the leverage to ethylene as well as the capability to crack heavier feedstocks, which will give Lyondell an advantaged cost position."
McCarthy expects Lyondell's eps to rise from 50 cents in 2004 to $2.20 in 2005, and has a price target of $33 on the stock. Lyondell trades around $28.
Fulcrum Global Partners analyst Frank Mitsch is touting Nova Chemicals as his top pick.
"Nova has the highest leverage to commodity petrochemicals, which will continue to fly after emerging in 2004 from a deep slumber," says Mitsch. "Our estimate for 2005 is looking more conservative, and consensus estimates will likely move upward as well."
The analyst expects Nova's eps to jump from $1.70 in 2004 to $4.50 in 2005, and has a price target of $56 on the stock. Nova trades at around $47.
Jay Harris, an analyst at Goldsmith & Harris, also taps Nova Chemicals as his top pick.
"Through most of 2004, the company has had pricing power in commodity chemicals, and I think that will remain so in 2005 and 2006," says Harris. "The earnings they generate will be surprisingly high. You may be looking at eps of $10-15 - possibly in 2006."
The analyst expects Nova's eps to rise from $1.82 in 2004 to $5.50 in 2005, and has a price target in the mid $60s.
Deutsche Bank analyst Laurence Alexander highlights Cytec Industries as his top pick.
"Once the UCB deal closes and the company has reassured investors on raw material exposure and that the UCB business remains on track in terms of expected EBITDA generation in 2005, the stock could move into the mid to high 50s," he says. "Clearly the deal is very accretive."
Alexander expects Cytec's eps to rise from $2.86 in 2004 to $3.50 in 2005, and has a price target of $58 on the stock.
CS First Boston analyst John McNulty also taps Cytec Industries as his top pick, expecting eps to rise from $2.87 to $3.71.
"The Street may be overly conservative on Cytec's earnings power, and so there is risk on the upside in 2005," says McNulty. "Given this potential upside and Cytec's cheap valuation, we believe there is upside to our $52 price target."
Lehman Brothers analyst Sergey Vasnetsov's top pick is Air Products, which provides "both earnings stability from the industrial gases side and sensitivity to the cycle on the chemicals side."
The analyst sees Air Products' eps rising from $2.64 in fiscal 2004 (ended September) to $3.13 in fiscal 2005.
Buckingham Research analyst John Roberts' top pick is Cabot Microelectronics.
"We believe the electronic materials area will have unit growth higher than most chemical sectors, and prices for electronic materials are more stable than end market prices," says the analyst. "Cabot Micro, which underperformed the strong gains in the more economically sensitive chemical stocks last year, is poised to more than catch up in 2005."
Roberts sees Cabot Micro earning $2.25/share in fiscal 2005 (ended September) after earning $1.87 in fiscal 2004. His price target on the stock is $47 versus today's price of around $40.
Michael Sison, analyst at KeyBanc Capital Markets, a division of McDonald Investments, is offering up Airgas as his top pick.
"Airgas has a unique business model in the specialty chemical sector capable of generating attractive earnings growth of 10-15% over the next few years," says the analyst. He says Airgas can reach $3bn in sales by fiscal 2008 with earnings power of $2/share.
Sison expects Airgas to earn $1.25/share in fiscal 2005 (ended March) and $1.50 in fiscal 2006, and has a price target of $35 on the stock. Airgas trades around $27.
Chris Kapsch, analyst at Black Diamond Research, taps OM Group as his favourite stock.
"OM Group's valuation remains overly penalized, in our view, from legacy management missteps, as well as a tedious ongoing financial restatement process," says the analyst. "Restatement concerns should be diminished by the end of January. Meanwhile, we are particularly enamoured with favourable cobalt industry fundamentals. With Cobalt metal prices the single most important leverage variable for OM Group, we think the stock will benefit as 2005 progresses."
Kapsch expects OM Group's eps to rise from $3.55 in 2004 to $3.70 in 2005, and has a price target of $45 to $48 on the stock. OM Group trades around . | http://www.icis.com/Articles/2005/01/07/642071/analysis+wall+streets+top+chem+firm+picks+for+2005.html | CC-MAIN-2013-20 | refinedweb | 1,250 | 66.03 |
Open twice a file if via pop up window of long press in apps dock
@omz In the misc part of the V3.2 news, you wrote "Opening a file that is already open in a different tab switches to that tab instead of opening the file twice".
But if I tap a file name in the popup window displayed when I long press Pythonista icon in the IOS 11 apps dock, this file is open in a new tab, even if it is already open.
@cvp you sure? Just tried it and can't reproduce. Works as expected when the Pythonista is in the foreground, works as expected when the Pythonista is in the background. You sure you have 3.2?
@zrzka I'm in V3.2, but result is different if the file you tap is the selected or not tab.
Assume you have 4 tabs open in Pythonista. If the displayed tab is the file you tap, it's ok, but if the file you tap is not the displays one, a new tab is opened with this file...
@cvp thanks. Trying to reproduce it, but no luck. Please, can you do one more thing?
- Create script,
ep.pyfor example with following content:
import editor print(editor.get_path())
- Add this script to the wrench icon.
- Simulate your issue.
- Select tab with first occurrence of your file.
- Run
ep.pyfrom the wrench menu.
- Select tab with second occurrence of your file.
- Run
ep.pyfrom the wrench menu.
When you compare these two paths, do they equal?
@cvp Does this also happen with files that have only ASCII chars in the path? Maybe the "à" is causing problems for some reason.
@dgelessus tried à here as well and no problem at all. I asked for the path, because I suspect there's a bug (or it's by design, dunno yet) in Pythonista or iOS. Check the file navigator, the a faire.txt is here two times - screenshot. How it did happen and why I say two times when you see just one occurrence:
- I did create a faire.txt in the blackmamba root folder,
- a faire.txt file appeared in dock long press recent files,
- tapping on it reveals Pythonista with this file, but this file magically appears in external files as well as a separate entry.
One is inside blackmamba folder (external folders), another one is directly in external folders (virtually two times, path & content equal). Anyway, no matter what I do, opening this file in any way leads to the same path. So, maybe there's something different on @cvp iPad. But the path should equal, we'll see when @cvp will check it.
@zrzka Paths are not equal
/private/var/mobile/Containers/Shared/AppGroup/D2BEB841-85E0-47B7-B01F-2E3A78C6CD21/File Provider Storage/77805465/0B-8WTUOPVyKeVWhPSzhkb3JxZWM/patience.txt /private/var/mobile/Containers/Shared/AppGroup/BC53E549-355D-4E77-BC46-64C3D3E0BDAF/Pythonista3/Documents/patience.txt
No ascii in this file name
Don't forget I do a long press on Pythonista icon in dock...
@cvp thanks. These are clearly two different files. You have two files with the same name, but in different locations. What happened IMO?
- Pythonista is launched
- You did open SCRIPT LIBRARY/This iPad/a faire.txt file
- You did long press gesture on dock, did select a faire.txt file and this file was opened from external location in a new tab, because name equals, but path differs.
The first path points to a file opened via Files.app (EXTERNAL FILES). The second path is a file in SCRIPT LIBRARY/This iPad.
This patience.txt file, new file you just did create? If so, please, can you be more specific how? Tap here, reveal this, wrote that, tap on that, ... Or you just renamed a faire.txt? If so, you did rename one occurrence and both of them were renamed? Do you remember if this a faire.txt was initially stored in another app? Did you copy a faire.txt to the Pythonista?
@zrzka no to all questions. patience.txt is an old file in my Pythonista, just used this one to be sure the accent on "à faire" was not the origin of the problem.
And you can see, that even old txt files, no more open in a Pythonista tab, stay in the recent files which Files App présent when I do a long press in the dock.
I'm sure the problem comes from Files App not from Pythonista
@cvp thanks. Not sure, but I feel that you think that the long press shows files currently opened in tabs. I assume this based on:
And you can see, that even old txt files, no more open in a Pythonista tab, stay in the recent files which Files App présent when I do a long press in the dock.
Long press on dock icon behaves like history in Safari. It shows files you did open and then even closed ones. That was the reason for so many questions I had, because you can open file a.txt via Files.app, then close it, file is in the history, you can open a.txt from This iPad for example (not Files.app), but the file a.txt from Files.app is still in the dock icon long press history. When you tap on it, another tab is opened, because it's different file, it's the file you did open via Files.app, even a long time ago.
@zrzka Could you try the same sequence
- launch Pythonista
- open a .txt file
- close Pythonista, no remove from memory
- assuming Pythonista is in your dock, long press on it
- do you see the txt file in the presented ones?
- if yes, tap on it
- is it opened in a 2nd tab?
@zrzka sorry, previous post typed before I read your answer, which seems coherent.
Thus not a real problem but not very professional from Apple...
@cvp tried, no problem. There's one thing which I find confusing. Recent files (long press gesture) contains files opened from iCloud & from Files.app (external files). But if you open a file from This iPad the file is not listed there. It can lead to a confusion, because when you open file with same name from This iPad, you do assume that the file appears there too and that it's your last file. But it's not truth.
@zrzka Agree. I remember I had created a directory in Pythonista/iCloud for testing but I don't remember if I placed these files there but I'm sure I just opened patience.txt file only yesterday for testing this problem and that this file has never been in iCloud...
But I agree to stop spending time for this.
Thanks a lot for your time and your explanations
@zrzka I just see now that patience.txt file appears (just from now) in "external files" in Pythonista file browser...
If I delete them, they are not in Pythonista trash but stay in Pythonista Files App..
@cvp that's correct behaviour. What you see in external files is kind of link to folders / files from another applications. When you delete it in Pythonista (items in external files), you're just removing links - basically saying - okay Pythonista, forget about this file. The file is not actually deleted. If you'd like to delete, you have to open another application which provides this file and delete it there. Also it shouldn't change recent file items listing, because the file was not deleted. Example:
- I have a file in Dropbox
- I open this file in Pythonista via external files - Open...
- File appears in external files and in the recent files
- When I swipe to delete in Pythonista, the file is only unlinked from Pythonista, but it still stays in the Dropbox application
- If I'd like to delete it for real, I have to open Dropbox application and delete the file here
That's the reason why the button has label REMOVE and not TRASH. This applies to the top level items in the external files section only.
Another example:
- I have a folder in Working Copy
- I open this folder in Pythonista via external files - Open...
- This folder is
blackmambafor example
- Now I tap on the
blackmambafolder in external files , swipe to left on any file inside this folder and destructive button appears
- This destructive button has title Trash, because it's not the top level item in external files and when you tap on it, file is actually deleted, even in the Working Copy
Behavior depends on the level in external files (nested or top level) and one should read button titles:
- Remove - Kind of unlink from Pythonista
- Trash - Real trash
@zrzka Thanks for this (very clear) explanation.
But, sincerely, I really don't understand in which other app resides my .txt files I see in this pop up...
@cvp me neither. It can be a bug in iOS, in Pythonista or just some experimentation throw back. The only think I know (based on paths) is that one file is in This iPad and another one comes from any other application (not an iCloud file, path would be different).
@zrzka I've found from where these files come...
In Files App, you have (bottom left) a bouton "recents" and you get a list of recents files with their app.
These Pythonista files are recents for my Google Drive app because I've a kind of synchronisation between Pythonista and Google Drive.
I've written a script which scans my Pythonista files and, if modified since their last backup, sends them to my Google Drive account. | https://forum.omz-software.com/topic/4617/open-twice-a-file-if-via-pop-up-window-of-long-press-in-apps-dock | CC-MAIN-2022-40 | refinedweb | 1,611 | 82.85 |
Deploying Microsoft .NET Framework Version 3.0
Annie Wang
Microsoft Corporation
June 2006
Applies to
Microsoft .NET Framework version 3.0 (formerly known as WinFX)
Microsoft .NET Framework 2.0
Microsoft Windows Vista
Summary: The. (18 printed pages)
Contents
Introduction
About Microsoft .NET Framework 3.0
How .NET Framework 3.0 Relates to .NET Framework 2.0 and Earlier
Servicing Policy for the .NET Framework 3.0
Roadmap for Future .NET Framework Releases
Installing the .NET Framework 3.0
Version Numbers for .NET Framework Assemblies
Deploying .NET Framework 3.0
Software Requirements
Hardware Requirements
Redistribution Rights for the .NET Framework
IT Administrator Tools for Deploying the .NET Framework 3.0
Redistributing the .NET Framework with Your Application
Detecting .NET Framework 3.0 and Earlier Releases
Reading a Registry Key
Reading the User-Agent String in Internet Explorer
Command Line Options for the .NET Framework 3.0 Redistributable
Error Codes for the .NET Framework 3.0 Redistributable
Appendix A: Detecting .NET Framework Language Packs
Appendix B: Sample Script for Detecting the .NET Framework 3.0 Using Internet Explorer
Introduction
This section provides an overview of the .NET Framework 3.0.
About Microsoft .NET Framework 3.0
The Microsoft .NET Framework version 3.0 (formerly known as WinFX) is the new managed-code programming model for Windows. It combines the power of .NET Framework 2.0 with new technologies for building applications that have a visually compelling user experience, seamless communication across technology boundaries, and support for a wide range of business processes. Microsoft plans to ship .NET Framework 3.0 as part of Windows Vista. At the same time, Microsoft will make .NET Framework available for Windows XP Service Pack 2 and Windows Server 2003 Service Pack 1.
The following table lists some of the technologies included with .NET Framework 3.0.
All of the classes that represent the new components (WPF, WF, WCF, and CardSpace) are part of the System namespace. The core classes of the .NET platform, such as the common language runtime (CLR) and base class libraries (BCL) remain as they are in .NET Framework 2.0.
The following diagram illustrates the structure of .NET Framework 3.0.
Figure 1. .NET Framework 3.0
How .NET Framework 3.0 Relates to .NET Framework 2.0 and Earlier.).
If you are moving to .NET Framework 3.0 from .NET Framework 1.1 or 1.0, you should perform impact analysis and run compatibility testing prior to deployment. While we have worked to make .NET Framework releases compatible, there are a small number of known incompatibles due to security and significant functionality additions. For more information, see the page Breaking Changes in .NET Framework 2.0 on the Microsoft .NET Developer Center Web site.
Servicing Policy for the .NET Framework 3.0
Microsoft will continue to service .NET Framework 2.0 release in accordance with the standard support policy for the platforms it is supported on. Users who currently rely on .NET Framework 2.0 have the option of remaining on that version of the .NET Framework and receiving software updates as they become available.
Any component that ships as part of .NET Framework 3.0 will be serviced on the platforms it is supported on. For more information, see the Software Update Technology page on the Microsoft Visual Studio Developer Center Web site.
Roadmap for Future .NET Framework Releases
In general, any new version of the .NET Framework is designed to provide backward compatibility with the previous version. If a new release introduces breaking changes due to security issues or other reasons, Microsoft will enable you to install the new release side by side with the existing version.
For more information about future releases of the .NET Framework, see the Microsoft .NET Framework Developer Center Web site.
Installing the .NET Framework 3.0
The .NET Framework 3.0 is installed by default on Microsoft Windows Vista. On Microsoft Windows Server code-named "Longhorn", you can install the .NET Framework as a Windows Feature using Roles Management tools.
On Windows XP and Windows Server 2003, installing .NET Framework 3.0 also adds any .NET Framework 2.0 components that are not already installed. If .NET Framework 2.0 is already installed, the .NET Framework 3.0 installer adds only the files for Windows Presentation Foundation (WPF), Windows Workflow Foundation (WF), Windows Communication Foundation (WCF), and Windows CardSpace.
Components shared with .NET Framework 2.0 are installed in the following location:
Components that are new to .NET Framework 3.0 are installed in the following location:
All components of the .NET Framework 3.0 reference assemblies are installed in the following location:
Uninstalling .NET Framework 3.0 will not remove the components shared with .NET Framework 2.0. To remove those components, you must first uninstall .NET Framework 3.0 and then separately uninstall .NET Framework 2.0. (You can uninstall the .NET Framework using the Add or Remove Programs item in Windows Control Panel.)
Version Numbers for .NET Framework Assemblies
The .NET Framework 3.0 shares many components with .NET Framework 2.0, and the common language runtime (CLR) and base class libraries are the same as those in .NET Framework 2.0. Therefore, these shared components stay at version 2.0. The version number 3.0 applies to all runtime and reference assemblies for Windows Communication Foundation (WCF), Windows Presentation Foundation (WPF), Windows Workflow Foundation (WF), and Windows CardSpace.
Deploying .NET Framework 3.0
This section provides information about deploying the .NET Framework 3.0 for use with your applications.
Software Requirements
To install .NET Framework 3.0, you must have one of the following operating systems installed on the target computer:
- Microsoft Windows XP Home or Microsoft Windows XP Home Professional, with Service Pack 2 or later.
- Microsoft Windows Server 2003 family with Service Pack 1 or later.
Note .NET Framework 2.0 continues to be supported on its target platforms. For more information, see the .NET Framework 2.0 Redistributable Prerequisites page on the MSDN Web site.
.NET Framework 3.0 is installed by default with Microsoft Windows Vista. On Microsoft Windows Server "Longhorn", the .NET Framework 3.0 is a Windows feature that can be installed using Roles Management tools.
Note Microsoft Windows Server "Longhorn" IA64 Edition is the only IA64 platform that the .NET Framework 3.0 supports.
Hardware Requirements
The following table lists the hardware requirements for running .NET Framework 3.0.
Redistribution Rights for the .NET Framework
Microsoft strongly supports customers in deploying the .NET Framework within their organizations and as part of their software solutions. Distributing the .NET Framework 3.0 runtime requires you to accept license terms. For information about redistributing the .NET Framework 3.0 with your application or to a third party, review the page The ISV Guide for Redistributing the .NET Framework and Other Runtime Components page on the MSDN Web site.
Note The redistributable right is reserved only for the official released version of the Microsoft .NET Framework 3.0. You may not redistribute the pre-released version of Microsoft .NET Framework 3.0 with your application.
IT Administrator Tools for Deploying the .NET Framework 3.0
The .NET Framework 3.0 offers two ways for IT administrators to deploy to field clients: administrator-mode setup and Active Directory deployment.
Administrator-mode Setup
Administrator-mode setup enables IT administrators to deploy the .NET Framework through Microsoft Systems Management Server (SMS) or other software distribution tools. The IT administrator runs the Framework setup in silent mode. If errors occur, setup quits silently and logs an error code.
Active Directory Deployment
In Active Directory deployment, the administrator must add individual .msi files from the .NET Framework 3.0 installation package into the group policy in the order in which the .msi files should be deployed. After the group policy is enabled, any clients that are part of this group policy will automatically install the components when they boot and reconnect to the network. If errors occur, setup quits silently and logs an error code.
For more information about administrative deployment instructions, see the Administrators Deployment Guide Web page.
Redistributing the .NET Framework with Your Application
The .NET Framework 3.0 redistributable package is available as a stand-alone executable file. The name of the file depends on the type.
When you distribute the .NET Framework 3.0 redistributable package with your application, you must agree to the license terms, which give you specific distribution rights.
You can manually launch and install the redistributable on a computer, or it can be launched and installed as part of the setup program for a .NET Framework 3.0 application.
Note Administrator privileges are required to install the .NET Framework 3.0.
For more information, see the Microsoft .NET Framework 3.0 Deployment Guide Web page.
Detecting .NET Framework 3.0 and Earlier Releases
You can detect if the .NET Framework 3.0 is installed by reading a registry key and by querying the user-agent string in Internet Explorer.
Reading a Registry Key
You can look for a specified registry key value to detect whether the .NET Framework is installed on a computer. The following table lists the registry keys and values that you can test to determine whether specific versions of the .NET Framework are installed.
Note For more information about detecting previously released service packs for .NET Framework 1.0 and 1.1 , see article 318785, "How to determine which versions of the .NET Framework are installed and whether service packs have been applied" in the Microsoft Knowledge Base.
Reading the User-Agent String in Internet Explorer
For browser-based applications, you can detect whether the .NET Framework 3.0 is installed on a computer by examining the user-agent string using Internet Explorer running on that computer. This will contain the substring "NET CLR" followed by the major and minor version numbers. A sample user-agent string looks like the following:
Appendix B: Sample Script for Detecting the .NET Framework 3.0 Using Internet Explorer lists a sample JavaScript program that runs in a browser and displays information about the current .NET Framework version number.
The user-agent string that is sent in browser headers is stored in the registry of the server computer, as listed in the following table.
Command Line Options for the .NET Framework 3.0 Redistributable
The following table lists options that you can include when you run the .NET Framework 3.0 Redistributable installation program (Dotnetfx3.exe, Dotnetfx3_x64.exe, or Dotnet3setup.exe) from the command line.
Error Codes for the .NET Framework 3.0 Redistributable.
Appendix A: Detecting .NET Framework Language Packs
The following table lists the registry values you can read to detect whether a .NET Framework language pack is installed on a computer. For more information on how to detect localized version of the .NET Framework 1.0, see the page .NET Framework Redistributable Package Technical Reference on the MSDN Web site.
Appendix B: Sample Script for Detecting the .NET Framework 3.0 Using Internet Explorer
The following example shows a JavaScript program that runs in a browser detects whether .NET Framework 3.0 is running. The script searches the user-agent string and displays a status message based on the results of the search.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head> <title>Test for NET Framework 3.0</title> <meta http- <script type="text/javascript" language="JavaScript"> <!-- var </body> </html>
If the search for the string ".NET Framework 3.0" version is successful, the following message appears:
This computer has the correct version of the .NET Framework: 3.0.04131.06.
This computer's userAgent string is: Mozilla/4.0 (compatible; MSIE 6.0;
Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04131.06).
Otherwise, the following message appears:
This computer does not have the correct version of the .NET Framework.
to get .NET Framework 3.0 now.
This computer's userAgent string is: Mozilla/4.0 (compatible; MSIE 6.0;
Windows NT 5.1; SV1; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727). | http://msdn.microsoft.com/library/aa480198 | crawl-003 | refinedweb | 2,024 | 64.07 |
On 10/12/16 00:19, Christian Hesse wrote: > From: Christian Hesse <[email protected]> > > sd_notify() uses a socket to communicate with systemd. Communication > fails if the socket is not available within the chroot. So bind mount > the socket into the chroot when startet from systemd. > > Unsharing namespace and mounting requires extra capability CAP_SYS_ADMIN.
I will pick up this one after 2.4.0 has been released. This is a very promising approach. However, I'm not too happy about CAP_SYS_ADMIN though, that grants quite some privileges. Can we look at dropping this capability once we know we won't need it any more? Perhaps when we send READY=1? > + char * chroot_notify = NULL; > + > + if (sd_notify(0, "READY=0") > 0) > + { > + asprintf(&chroot_notify, "%s/notify", > c->options.chroot_dir); Here we should use the buffer/string functions, based on the gc_arena implementation. Unfortunately we do not have a direct equivalent to asprintf(). A starting point would be to for example look at the string handling in print_sockaddr_ex() [socket.c:2386] or x_msg_va() [error.c:251] ... there might be better examples too, I'm just not able to remember them now :) .... buffer.[ch] keeps most of these functions. The reason for this is basically to use the same well tested infrastructure. And with gc_arena, only a single gc_free() is required, regardless of how many buffers you allocate to that arena. -- kind regards, David Sommerseth OpenVPN Technologies, Inc
signature.asc
Description: OpenPGP digital signature
------------------------------------------------------------------------------ | https://www.mail-archive.com/[email protected]/msg13484.html | CC-MAIN-2021-10 | refinedweb | 239 | 61.53 |
ASF Bugzilla – Bug 57099
loose parsing of import attribute in page directive screws up SMAP output
Last modified: 2014-10-16 13:48:36 UTC
Came across some customer code that had the following in their jsp files:
<%@
page session="false"
buffer="8kb"
import="java.io.*;
import java.util.*;
import java.text.*;
import java.util.Date.*;
import java.text.DecimalFormat;
import com.xyz.debug.Debug;
import com.xyz.failure.*;
import com.xyz.messaging.*;
import com.xyz.utils.*;
import com.xyz.xml_messaging.*;
import com.xyz.environment.*;
import generated.screening_engine.*;
import generated.xml_utils.*;"
contentType="text/html"
%>
Even though the JSP spec says that the import statement should be "The value is as in an import declaration in the Java programming language, a (comma separated) list of either a fully qualified Java programming language type name denoting that type, or of a package name followed by the .* string, denoting all the public types declared in that package." this is parsed/compiled by the JSP parser." these folks seem to have stumbled on to a, well, different way of specifying a list of imports.
Since the JSP parser only sees a "single" import and believes it has merely written a single import line, the SMAP numbering ends up being off by, in this case, 12, causing all sorts of fun down the line when trying to map back to original jsp code by way of the SMAP file.
In short, it appears that one could, in an import statement, append a semicolon and then put whatever java code they want, and it would go in and get compiled in as long as there weren't any commas in it.
Thanks for the report. This is fixed in 8.0.x for 8.0.15 onwards.
And also fixed in 7.0.x for 7.0.57 onwards. | https://bz.apache.org/bugzilla/show_bug.cgi?id=57099 | CC-MAIN-2016-30 | refinedweb | 303 | 60.95 |
CocoaPods Tutorial for Swift: Getting Started
Use this CocoaPods Tutorial for Swift to learn how to install and manage third-party library dependencies in your Swift projects.
Version
- Swift 5, iOS 13, Xcode 11
CocoaPods is a popular dependency manager for Swift and Objective-C Cocoa projects. Thousands of libraries and millions of apps use it, according to the CocoaPods website. But what is a dependency manager and why do you need one?
A dependency manager makes it easy to add, remove, update and manage the third-party dependencies your app uses.
For example, instead of reinventing your own networking library, you can easily pull in Alamofire using a dependency manager. You can specify either the exact version to use or a range of acceptable versions.
This means that even if Alamofire gets an update with changes that aren’t backward-compatible, your app can continue using the older version until you’re ready to update it.
In this tutorial, you’ll learn how to use CocoaPods with Swift. Specifically, you’ll:
- Install CocoaPods.
- Work with a functional demo app that gets you thinking about ice cream.
- Use CocoaPods to add networking.
- Add another library using a flexible version.
This tutorial also includes classes that use Core Graphics. While knowledge of Core Graphics is beneficial, it’s not required. If you’d like to learn more, read our Modern Core Graphics With Swift series.
Getting Started
Download the starter project by clicking the Download Materials button at the top or bottom of the tutorial.
Throughout this tutorial, you’ll work with an app called Ice Cream Shop, Inc. You’ll use CocoaPods to add dependencies to the app the easy way, instead of writing your own.
Before you can proceed with this tutorial, you need to install CocoaPods. Fortunately, CocoaPods uses Ruby, which ships with all versions of macOS X since version 10.7.
Open Terminal and enter the following command:
sudo gem install cocoapods
Enter your password when requested. The Terminal output will show various fetching, installing and documentation-related outputs, concluding with “XX gems installed”.
sudoto install CocoaPods, but once it’s installed, you won’t need to use it again in this tutorial.
Finally, enter this command in Terminal to complete the setup:
pod setup --verbose
This process takes a few minutes because it clones the CocoaPods Master Specs repository into ~/.cocoapods/ on your computer.
The
verbose option logs progress as the process runs, allowing you to watch the process instead of seeing a seemingly “frozen” screen.
Awesome, you’re now set up to use CocoaPods!
Ice Cream Shop, Inc.
Your top client is Ice Cream Shop, Inc. Their ice cream is so popular they can’t keep up with customer orders at the counter. They’ve recruited you to create a sleek iOS app that allows customers to order ice cream right from their iPhones.
You’ve started developing the app and it’s coming along well. Take a look at your progress by opening IceCreamShop.xcodeproj, then building and running. You’ll see a mouth-watering vanilla ice cream cone:
The user should be able to choose an ice cream flavor from this screen, but that’s not possible yet. Your first step is to finish implementing this functionality.
Open Main.storyboard from the Views/Storyboards & Nibs group to see the app’s layout. Here’s a quick overview of the heart of the app, the Choose Your Flavor scene:
PickFlavorViewControlleris the view controller for this scene. It handles user interaction and provides the data for the collection view that displays the different ice cream flavors.
IceCreamViewis a custom view that displays an ice cream cone based on the backing mode,
Flavor.
ScoopCellis a custom collection view cell that contains a
ScoopView, which gets colors from a
Flavormodel.
While every Ice Cream Shop, Inc. location has signature flavors in common, each carries its own local flavors, too. For this reason, a web service needs to provide the data for the
Flavors.
However, this still doesn’t explain why users can’t select their ice cream flavors.
Open PickFlavorViewController.swift, found under the Controllers group, and you’ll see a stubbed method:
private func loadFlavors() { // TO-DO: Implement this }
Aha, there are no flavors! You need to implement the function!
While you could use
URLSession and write your own networking classes, there’s an easier way: Use Alamofire!
You might be tempted to download this library and drag the source files right into your project. However, that’d be doing it the hard way. CocoaPods provides a much more elegant and nimble solution.
Installing Your First Dependency
Your first step is to close Xcode. Yeah, you read that right.
It’s time to create the Podfile, where you’ll define your project’s dependencies.
Open Terminal and navigate to the directory that contains your IceCreamShop project by using the cd command:
cd ~/Path/To/Folder/Containing/IceCreamShop
Next, enter the following command:
pod init
This creates a Podfile for your project.
Finally, type the following command to open the Podfile using Xcode for editing:
open -a Xcode Podfile
The default Podfile looks like this:
# Uncomment the next line to define a global platform for your project # platform :ios, '9.0' target 'IceCreamShop' do # Comment the next line if you're not using Swift and don't want to use dynamic frameworks use_frameworks! # Pods for IceCreamShop end
Delete the
# and space before
platform, then delete the other lines starting with
#.
Your Podfile should now look like this:
platform :ios, '9.0' target 'IceCreamShop' do use_frameworks! end
This tells CocoaPods your project targets iOS 9.0 and will use frameworks instead of static libraries. While Swift and CocoaPods both support static linking, not all libraries you include do. One of them that you’ll use in this project does not.
If you’ve only programmed in Swift, this may look a bit strange. That’s because the Podfile is actually written in Ruby. You don’t need to know Ruby to use CocoaPods, but you should be aware that even minor text errors will cause CocoaPods to throw errors.
A Word About Libraries
You’ll see the term library often used as a general term that actually means a library or framework. This tutorial is guilty of casually intermixing these words, too.
You may be wondering about the differences between a library, a framework and a CocoaPod. It’s OK if you find the terminology a bit confusing!
A CocoaPod, or pod for short, is a general term for either a library or framework that’s added to your project using CocoaPods.
iOS 8 introduced dynamic frameworks, which allow you to bundle code, images and other assets together. Prior to iOS 8, you created CocoaPods as “fat” static libraries. “Fat” means they contained several code instruction sets, like i386 for the simulator, armv7 for devices, etc. However, Swift doesn’t allow static libraries to contain resources such as images or assets.
Back to Installing Your First Dependency
It’s finally time to add your first dependency using CocoaPods. Add the following to your Podfile, right after
use_frameworks!:
pod 'Alamofire', '4.9.1'
This tells CocoaPods you want to include Alamofire version 4.9.1 as a dependency for your project.
Save and close the Podfile.
You now need to tell CocoaPods to install the dependencies for your project.
Enter the following command in Terminal, after ensuring you’re still in the directory containing the IceCreamShop project and Podfile:
pod install
You should see output like this:
Analyzing dependencies Adding spec repo `trunk` with CDN `` Downloading dependencies Installing Alamofire (4.9.1) Generating Pods project Integrating client project [!] Please close any current Xcode sessions and use `IceCreamShop.xcworkspace` for this project from now on. Pod installation complete! There is 1 dependency from the Podfile and 1 total pod installed.
Open the project folder using Finder and you’ll see CocoaPods created a new IceCreamShop.xcworkspace file and a Pods folder to store all the project’s dependencies.
Excellent! You’ve just added your first dependency using CocoaPods!
Using Installed Pods
Now, you’ll use your brand new dependency, Alamofire.
If the Xcode project is open, close it now and open IceCreamShop.xcworkspace.
Open PickFlavorViewController.swift and add the following just below the existing import:
import Alamofire
Build and run. You’ll see no change yet but rest assured that Alamofire is now available.
Next, replace
loadFlavors() with the following:
private func loadFlavors() { // 1 Alamofire.request( "", method: .get, encoding: PropertyListEncoding(format: .xml, options: 0)) .responsePropertyList { [weak self] response in // 2 guard let self = self else { return } // 3 guard response.result.isSuccess, let dictionaryArray = response.result.value as? [[String: String]] else { return } // 4 self.flavors = self.flavorFactory.flavors(from: dictionaryArray) // 5 self.collectionView.reloadData() self.selectFirstFlavor() } }
Here’s the play-by-play of what’s happening in this code:
- You use Alamofire to create a GET request and download a plist containing ice cream flavors.
- To break a strong reference cycle, you use a weak reference to
selfin the response completion block. Once the block executes, you immediately get a strong reference to
selfso you can set properties on it later.
- Next, you verify the
response.resultshows success and the
response.result.valueis an array of dictionaries.
- Now, you set
self.flavorsto an array of
Flavorobjects that
FlavorFactorycreates. This is a class a “colleague” wrote for you (you’re welcome!), which takes an array of dictionaries and uses them to create instances of
Flavor.
- Finally, you reload the collection view and select the first flavor.
Build and run. You can now choose an ice cream flavor!
Now for a Tasty Topping
The app looks good, but you can still improve it.
Did you notice the app takes a second to download the flavors file? If you’re on a fast Internet connection, you might not notice the delay, but your customers won’t always be so lucky.
Your next step is to show a loading indicator in your app, to help customers understand it’s loading data and not just twiddling its libraries. MBProgressHUD is a really nice indicator that will work well here. And it supports CocoaPods; what a coincidence! :]
To use this pod, you need to add it to your Podfile. Rather than opening the Podfile from the command line, you can now find it in the Pods target in the workspace:
Open Podfile and add the following, right after the Alamofire line:
pod 'MBProgressHUD', '~> 1.0'
Save the file and install the dependencies via pod install in Terminal, just as you did before.
Notice anything different this time? Yep, you specified the version number as ~> 1.0. But why?
CocoaPods recommends that all pods use Semantic Versioning. Take a moment to understand what that is.
Semantic Versioning
Many times, you’ll see a version written like this: 1.0.0. Those three numbers are major, minor and patch version numbers.
For example, for the version number 1.0.0, 1 is the major number, the first 0 is the minor number, and the second 0 is the patch number.
If the major number increases, it indicates that the version contains non-backward-compatible changes. When you upgrade a pod to the next major version, you may need to fix build errors or the pod may behave differently than before.
If the minor number increases, it indicates that the version contains new functionality that is backward-compatible. When you decide to upgrade, you may or may not need the new functionality, but it shouldn’t cause any build errors or change existing behavior.
If the patch number increases, it means the new version contains bug fixes but no new functionality or behavior changes. In general, you always want to upgrade patch versions as soon as possible to have the latest, stable version of the pod.
Finally, when you increase the highest-order number — major, then minor then patch — per the above rules, you must reset any lower-order numbers to zero.
Here’s an example:
Consider a pod that has a current version number of 1.2.3.
If you make changes that are not backward-compatible, don’t have new functionality, but fix existing bugs, you’d give it version 2.0.0.
Challenge Time
If a pod has a current version of 2.4.6 and you make changes that fix bugs and add backward-compatible functionality, what should the new version number be?
[spoiler]
Answer: 2.5.0
Explanation: If you make changes that include new functionality that’s backward-compatible, you increase the minor number and reset the patch to zero.
[/spoiler]
If a pod has a current version of 3.5.8 and you make changes to existing functionality which aren’t backward-compatible, what should the new version number be?
[spoiler]
Answer: 4.0.0
Explanation: If changes modify existing behavior and are not backward-compatible, you must increase the major number and reset the minor and patch numbers to zero.
[/spoiler]
If a pod has a current version of 10.20.30 and you only fix bugs, what should the new version number be?
[spoiler]
Answer: 10.20.31
Explanation: If you only fix bugs, you only increase the patch number.
[/spoiler]
Having said all this, there’s one exception to these rules:
If a pod’s version number is less than 1.0.0, it’s considered a beta version. Minor number increases may include changes that aren’t backward-compatible.
So back to MBProgressHUB: Using
~> 1.0 means you should install the latest version that’s greater than or equal to
1.0 but less than
2.0.
This ensures you get the latest bug fixes and features when you install this pod, but you won’t accidentally pull in backward-incompatible changes.
There are several other operators you can use as well. For a complete list, see the Podfile Syntax Reference.
Now that you’ve learned how operators work with your CocoaPods, it’s time to finish your app.
Showing Progress
If you recall, you were building a progress indicator to show your users when flavors are loading in the app.
To finish this feature, go back to PickFlavorViewController.swift and add the following right after the other imports:
import MBProgressHUD
Next, add the following helper methods after
loadFlavors():
private func showLoadingHUD() { let hud = MBProgressHUD.showAdded(to: contentView, animated: true) hud.label.text = "Loading..." } private func hideLoadingHUD() { MBProgressHUD.hide(for: contentView, animated: true) }
Now, in
loadFlavors(), add the following two lines (as indicated):
private func loadFlavors() { showLoadingHUD() // <-- Add this line Alamofire.request( "", method: .get, encoding: PropertyListEncoding(format: .xml, options: 0)) .responsePropertyList { [weak self] response in guard let self = self else { return } self.hideLoadingHUD() // <-- Add this line // ...
As the method names imply,
showLoadingHUD() shows an instance of
MBProgressHUD while the GET request downloads.
hideLoadingHUD() hides the HUD when the request finishes. Since
showLoadingHUD() is outside the closure, it doesn't need the
self prefix.
Build and run. You'll now see a loading indicator while the flavors are loading. If your internet connection is too fast for this, you can add a
sleep(_:) statement just before
hideLoadingHUD() so that you can experience the goodness that is MBProgressHUD. :]
Great work! Customers can now select their favorite ice cream flavor and they see a loading indicator while flavors are downloading.
Where to Go From Here?
You can download the completed project using the Download Materials button at the top or bottom of this page.
Congratulations! You now know the basics of using CocoaPods, including creating and modifying dependencies and understanding semantic versioning. You're now ready to start using them in your own projects!
There's lots more that you can do with CocoaPods. You can search for existing pods on the official CocoaPods website. Also, refer to the CocoaPods Guides to learn the finer details of this excellent tool. But be warned, once you begin using it, you'll wonder how you ever managed without it! :]
I hope you enjoyed reading this CocoaPods tutorial as much I did writing it. What are some of your favorite CocoaPods? Which ones do you rely on the most for everyday projects? Feel free to share, or to ask any questions, in the comments below! | https://www.raywenderlich.com/7076593-cocoapods-tutorial-for-swift-getting-started | CC-MAIN-2021-17 | refinedweb | 2,711 | 66.64 |
Updating RobotPy source code to match WPILib¶
Every year, the WPILib team makes improvements to WPILib, so RobotPy needs to be updated to maintain compatibility. While this is largely a manual process, we now use a tool called git-source-track to assist with this process.
Note
git-source-track only works on Linux/macOS at this time. If you’re interested in helping with the porting process and you use Windows, file a github issue and we’ll try to help you out.
Using git-source-track¶
First, you need to checkout the git repo for allwpilib and the RobotPy WPILib next to each other in the same directory like so:
allwpilib/ robotpy-wpilib/
The way git-source-track works is it looks for a comment in the header of each tracked file that looks like this:
# validated: 2015-12-24 DS 6d854af athena/java/edu/wpi/first/wpilibj/Compressor.java
This stores when the file was validated to match the original source, initials of the person that did the validation, what commit it was validated against, and the path to the original source file.
Finding differences¶
From the robotpy-wpilib directory, you can run
git source-track and it
will output all of the configured files and their status. The status codes
include:
OK: File is up to date, no changes required
OLD: The tracked file has been updated,
`git source-track diff FILENAMEcan be used to show all of the git log messages and associated diffs.
??: The tracked file has moved or has been deleted
IGN: The file has explicitly been marked as do not track
--: The file is not currently being tracked
Sometimes, commits are added to WPILib which only change comments, formatting,
or mass file renames – these don’t change the semantic content of the file,
so we can ignore those commits. When identified, those commits should be added
to
devtools/exclude_commits.
Looking at differences¶
Once you’ve identified a file that needs to be updated, then you can run:
git source-track diff FILENAME
This will output a verbose git log command that will show associated commit
messages and the diff output associated with that commit for that specific file.
Note that it will only show the change for that specific file, it will
not show changes for other files (use
git log -p COMMITHASH in the
original source directory if you want to see other changes).
After running
git source-track diff it will ask you if you want to validate
the file. If no Python-significant changes have been made, then you can answer
‘y’ and the validation header will be updated.
Adding new files¶
Unfortunately, git-source-track doesn’t currently have a mechanism that allows it to identify new files that need to be ported. We need to do that manually.
Dealing with RobotPy-specific files¶
We don’t need to track those files;
git source-track set-notrack FILENAME
takes care of it.
After you finish porting the changes¶
Once you’ve finished making the appropriate changes to the Python code, then you should update the validation header in the source file. Thankfully, there’s a command to do this:
git source-track set-valid FILENAME
It will store the current date and the tracked git commit.
Additionally, if you answer ‘y’ after running
git source-track diff FILENAME,
then it will update the validation header in the file.
HAL Changes¶
RobotPy uses the WPILib HAL API to talk to the hardware. This API is not guaranteed to be stable, so every year we have to update it. There are several pieces to this that need to be updated.
Each WPILib build publishes new header files and library files to their website,
and in
hal-roborio/hal_impl/distutils.py there is code to download and
extract the package. The version number of the HAL release we want to use
needs to be updated there.
Once that’s updated, you can run the unit tests to see if there are any breaking HAL changes. If they fail and there are changes to the HAL, there are two places that HAL code updates need to be made aside from the code in WPILib that uses the HAL:
hal-base/hal/functions.py- contains ctypes signatures for each HAL function, which must match the functions in the HAL headers. Running the robotpy-wpilib unit tests will fail if the functions do not match.
hal-sim/hal_impl/functions.py- contains simulated HAL definitions, which need to have the same parameter names as defined in hal-base
Additionally,
devtools/hal_fix.sh is a script that can be used to detect
errors in the HAL and print out the correct HAL definitions for a given function
or generate empty python stubs (via the
--stubs argument). Use
for more information on the capabilities of this tool.
Syntax/Style Guide¶
As of the 2019 season, RobotPy projects will use the black code autoformatter. Black generates pretty good looking code, and it makes it easier to review incoming pull requests. Before making pull requests, please install black and run it on the repo you’re changing.
Except where it makes sense, developers should try to retain the structure and naming conventions that the Java implementation of WPILib follows. There are a few guidelines that can be helpful when translating Java to Python:
- Member variables such as
m_fooshould be converted to
self.foo
- Private/protected functions (but NOT variables) should start with an underscore
- Always retain original Javadoc documentation, and convert it to the appropriate standard Python docstring (see below)
Converting javadocs to docstrings¶
There is an HTML page in devtools called
convert_javadoc.html that you can
use. The way it works is you copy a Java docstring in the top box (you can also
paste in a function prototype too) and it will output a Python docstring in
the bottom box. When adding new APIs that have documentation, this tool is
invaluable and will save you a ton of time – but feel free to improve it!
This tool has also been converted to a command line application called sphinxify, which you can install by running:
pip install sphinxify
Enums¶
Python 3.4 and up have an enum module. In the past, we did not use it to implement the enums found in the Java WPILib, however, we are slowly moving towards its use, starting with moving existing enums to IntEnum. New enums should preferably use a plain Enum (although this may be up for discussion). See robotpy-wpilib issue #78 for details. For example:
class SomeObject: class MyEnum(enum.IntEnum): VALUE1 = 1 VALUE2 = 2
Many WPILib classes define various enums, see existing code for example translations.
Synchronized¶
The Python language has no equivalent to the Java
synchronized keyword.
Instead, create a
threading.RLock instance object called
self.lock, and
surround the internal function body with a
with self.lock: block:
def someSynchronizedFunction(self): with self.lock: # do something here...
Interfaces¶
While we define the various interfaces for documentation’s sake, the Python WPILib does not actually utilize most of the interfaces.
Final thoughts¶
Before translating WPILib Java code to RobotPy’s WPILib, first take some time and read through the existing RobotPy code to get a feel for the style of the code. Try to keep it Pythonic and yet true to the original spirit of the code. Style does matter, as students will be reading through this code and it will potentially influence their decisions in the future.
Remember, all contributions are welcome, no matter how big or small! | https://robotpy.readthedocs.io/en/2020.1.2/dev/porting.html | CC-MAIN-2020-24 | refinedweb | 1,262 | 59.84 |
RESTful APIs With the Play Framework — Part 2
RESTful APIs With the Play Framework — Part 2
We continue our look at creating RESTful APIs with this helpful framework by developing web services and exploring how to handle JSON in our code.
Join the DZone community and get the full member experience.Join For Free
In the first part of this series of articles, we talked about the main features of the Play Framework, its basic structure, and how to deploy our applications. In this part, we will talk about how to develop RESTful Services. If you remember (if not, we invite you to read the first part of this series of articles) we mentioned that Play Framework is Reactive, which means that we can develop Asynchronous and Synchronous Services. For the purposes of this second article, we will focus on Synchronous Services.
Developing Web Services
Let's look in more detail at the index action in HomeController.
public Result index(){ return ok(views.html.index.render()); }
Remember that RESTful is a first-class citizen in the Play Framework, which means that all actions are RESTful Services. The structure of an action is simple, it always returns an object of type Result (play.mvc.Result). A Result is the representation of an HTTP result with a state code, headers, and a body that is sent to the client.
For example, in the index action, we use the
ok method to return a Result that renders HTML in the body. Instead of rendering HTML in the body, we can send plain text. For this, we will add a new action in our HomeController call it
plainText.
public Result plainText(){ return ok("This is just a text message."); }
Once we have added the action, we must add this to the routes file in order to execute it.
GET /plainText controllers.HomeController.plainText
Now we can compare the differences between call the index and plain actions from Insomnia.
Index Response
Index Headers
plainText Response
plainText Headers
The first difference, as you can imagine, between both is the body, since for the index we render HTML, while for
plainText we just show plain text. The following difference lies in the Header Content-Type, for index return "text/html" and for plainText "text/plain."
Handling JSON
When we develop Web Services we seek to use an information exchange protocol, such as JSON. The Play Framework is based on the Jackson library for the handling JSON, using the JsonNode object \, and leaning on the play.libs.Json API. To start our journey developing RESTful Services that return JSON, we will start by importing play.libs.Json in our HomeController.
import play.libs.Json;
The first thing we will do is render some JSON using the
HashMap object, adding an action called jsonMap to ourHomeController, which will look like this:
public Result jsonMap(){ HashMap<String, Object> result = new HashMap<String, Object>(){ { put("str", "String"); put("int", 123); } }; return ok(Json.toJson(result)); }
As you can see, the only thing we did here was create a HashMap, to which we added two elements "str" and "int" with their respective values. Then, using the
toJson method of the JSON API, we indicated that we want to return the HashMap in JSON format. To see how the consumption of this new service looks from Insomnia, we will define the call to this action in our routes file.
GET /json/map controllers.HomeController.jsonMap
jsonMap Response
jsonMap Headers
This is an example to render a HashMap as JSON. To continue, we will do the same thing with an object. For this, we will define a class called Invoice, which will be inside the package com.auth0.beans that we will create at the same level as the controllers directory, which can be seen in the image below.
Note: for readability of the article we do not include the get and set methods of the Invoice class.
package com.auth0.beans; import java.math.BigDecimal; import java.time.LocalDate; public class Invoice { private String name; private String address; private String idNumber; private String code; private LocalDate date; private BigDecimal amount; public Invoice(){} public Invoice(String name, String address, String idNumber, String code, LocalDate date, BigDecimal amount){ this.name = name; this.address = address; this.idNumber = idNumber; this.code = code; this.date = date; this.amount = amount; } }
Then we will add the action
jsonObject in our HomeController, in which we will create an object
Invoice and we will use the same method
toJson of the JSON API to return it as a response to the client. The action would be like the following code:
public Result jsonObject(){ Invoice invoice = new Invoice("Perico de los Palotes", "City", "123456-7" "002245", LocalDate.now(), new BigDecimal(1293)); return ok(Json.toJson(invoice)); }
We will define the call to this action in our routes file
GET/json/objectcontrollers.HomeController.jsonObject
jsonObject Response
jsonObject Headers
To finish with the JSONs Handling, we will do it by obtaining a JSON in the request of our RESTful service. For them we will add the jsonCatch action in our HomeController. In it we will obtain a JsonNode from the body of our request, then we will use the fromJson method of the Json API to convert this JsonNode into the object we want, in this case Invoice. To finish we will return a String with some of the data of our Invoice object to guarantee that we have obtained the information that we sent in the test. The final result would look like the following code:
public Result jsonCatch(){ JsonNode jsonNode = request().body().asJson(); Invoice invoice = Json.fromJson(jsonNode, Invoice.class); DateTimeFormatter f = DateTimeFormatter.ofPattern("dd/MM/yyyy"); return ok(invoice.getCode() + " | " + invoice.getIdNumber() + " | " + f.format(invoice.getDate())); }
We will define the call to this action in our routes file
GET/json/catchcontrollers.HomeController.jsonCatch
jsonCatch Request and Response
jsonCatch Headers
Results in More Detail
We have seen, throughout our examples, that actions always return the Result object, which we have previously indicated is the representation of an HTTP response with a status code, headers, and a body that is sent to the client. The Controller object already has a series of methods that allows us to create
Result, those methods inherit them from the
Results class (play.mvc.Results). In the case of the examples that we have listed in this article, only the
ok method has been used, which returns the status code 200. Addiontionally, in the body we have rendered HTML, text, and JSON.
But there are other methods in
Results that can help us. For example, in circumstances where we need to return a status code different than 200, some of the most common are:
notFound(); //Return state code 404 badRequest(); //Return state code 400 internalServerError(); //return state code 500 status(203); //return a custom state code
Imagine that, if in our jsonCatch action we don't obtain the JSON that we expect in the body, we should indicate a badRequest. To exemplify this, we will add a new action called
jsonBadRequest to our HomeController
public Result jsonBadRequest(){ JsonNode jsonNode = request().body().asJson(); if(jsonNode == null){ return badRequest("Expecting Json data"); }else{ Invoice invoice = Json.fromJson(jsonNode, Invoice.class); DateTimeFormatter f = DateTimeFormatter.ofPattern("dd/MM/yyyy"); return ok(invoice.getCode() + " | " + invoice.getIdNumber() + " | " + f.format(invoice.getDate())); } }
In this example, we are returning a bit of plain text in the badRequest, just like in
ok. Here we can also render HTML, text, or send JSON. In order to try this new action in Insomnia, we must add it to our routes file.
GET/json/badRequestcontrollers.HomeController.jsonBadRequest
jsonBadRequest Request and Response
jsonBadRequest Headers
To further delve into the methods inherited from Results you can see the official documentation of play.mvc.Results. As we mentioned previously, in the index example, an HTML template is rendered. To adapt it to what we want, we have to make it no longer render the HTML, but, instead, render what we want.
In this article, we have talked about how to develop Synchronous RESTful Services in the Play Framework by returning and obtaining JSON. You can access the source code by visiting the repository on. In the next series of these articles, we will be talking about the use of JWT for a more secure information exchange.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/restful-apis-with-play-frameworkpartnbsp2 | CC-MAIN-2019-47 | refinedweb | 1,402 | 54.02 |
Newbie - Why doesn't this read or write me a file?990724 Feb 13, 2013 4:19 AM
I am a complete newbie and am going through the java tutorials. I use Netbeans as my IDE which makes things easier, but why doesn't this code do anything?
Edited by: EJP on 13/02/2013 15:19: added {noformat}
Can anyone explain? i have spent hours trying different options.Can anyone explain? i have spent hours trying different options.
import java.io.*; import java.util.Vector; @SuppressWarnings("empty-statement") public class ListDictionary { /** * @param args the command line arguments */ private Vector<String> list; private static final int INITIAL_SIZE = 200000; private static final int INCREMENT = 10000; public void ListDictionary() { list = new Vector<>(INITIAL_SIZE,INCREMENT); this.readFile("english-words-lowercase.txt"); this.readFile("engish-upper.txt"); this.TrimList(); this.writeFile(); } public void readFile(String fileName) { String line; try { RandomAccessFile raf = new RandomAccessFile(fileName,"r"); while ((line = raf.readLine())!= null) { list.add(line); } } catch (IOException e){ System.out.println("dictionary not found" + e); }; int listSize = list.size(); System.out.println(listSize + "words added"); } public void writeFile() { PrintWriter out = null; try { out = new PrintWriter(new FileWriter("dictionary1.txt")); for (int i=0; i<list.size();i++){ out.println(list.get(i)); } } catch (IOException e) { System.out.println(e.getMessage()); } finally { if (out != null) { System.out.println("Seems to have worked!"); } else { System.out.println("Not this time"); } } } public void TrimList() { list.trimToSize(); } public static void main(String[] args) { ListDictionary listDictionary = new ListDictionary(); } }
Edited by: EJP on 13/02/2013 15:19: added {noformat}
{noformat} tags: please use them.
This content has been marked as final. Show 6 replies
1. Re: Newbie - Why doesn't this read or write me a file?EJP Feb 13, 2013 4:22 AM (in response to 990724)What exception is thrown?
2. Re: Newbie - Why doesn't this read or write me a file?Kayaman Feb 13, 2013 9:45 AM (in response to 990724)For one, you're not closing your streams.1 person found this helpful
3. Re: Newbie - Why doesn't this read or write me a file?r035198x Feb 13, 2013 12:11 PM (in response to 990724)If you intend to run your code by calling a constructor using1 person found this helpful
then you need to have the logic in the constructor.
new ListDictionary();
You have a method called
which is not a constructor because of the void return type.
public void ListDictionary() {
Remove the void to make it a constructor. Currently the default constructor is being called which does nothing that you can see.
4. Re: Newbie - Why doesn't this read or write me a file?939520 Feb 13, 2013 3:22 PM (in response to 990724)When you get the constructor working, you may next need to include the path to where your files are, else your program may not find them.1 person found this helpful
Example:
from:
new File("myFile.txt");
to:
new File("C:/workspace/myDirectory/myFile.txt");
You can next read up on absolute (ie, the above) vs relative paths.
5. Re: Newbie - Why doesn't this read or write me a file?990724 Feb 15, 2013 5:51 PM (in response to 939520)Thanks to all that have replied. all your explanations have helped and i have been able to make the program work. Like i said, i am a newbie, so it has taken me a while to figure out relative and absolute paths so i apologies for the delay in responding.
6. Re: Newbie - Why doesn't this read or write me a file?939520 Feb 15, 2013 9:11 PM (in response to 990724)You also might consider changing this:
public void readFile(String fileName)
to this:
public List<String> readFile(String fileName)
this way, the function returns a list of lines from the file that another part of your program can use. | https://community.oracle.com/message/10856781 | CC-MAIN-2016-40 | refinedweb | 650 | 66.44 |
table of contents
NAME¶
qecvt, qfcvt, qgcvt - convert a floating-point number to a string
SYNOPSIS¶
#include <stdlib.h>
char *qecvt(long double number, int ndigits, int *decpt, int *sign);
char *qfcvt(long double number, int ndigits, int *decpt, int *sign);
char *qgcvt(long double number, int ndigit, char *buf);
qecvt(), qfcvt(), qgcvt(): _SVID_SOURCE
DESCRIPTION¶
The functions qecvt(), qfcvt(), and qgcvt() are identical to ecvt(3), fcvt(3), and gcvt(3) respectively, except that they use a long double argument number. See ecvt(3) and gcvt(3).
ATTRIBUTES¶
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶
SVr4. Not seen in most common UNIX implementations, but occurs in SunOS. Supported by glibc.
NOTES¶
These functions are obsolete. Instead, snprintf(3) is recommended.
SEE ALSO¶
ecvt(3), ecvt_r(3), gcvt(3), sprintf(3)
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://manpages.debian.org/testing/manpages-dev/qgcvt.3.en.html | CC-MAIN-2021-49 | refinedweb | 173 | 66.84 |
Pandas dataframe fillna() only some columns in place
I am trying to fill none values in a Pandas dataframe with 0's for only some subset of columns.
When I do:
import pandas as pd df = pd.DataFrame(data={'a':[1,2,3,None],'b':[4,5,None,6],'c':[None,None,7,8]}) print df df.fillna(value=0, inplace=True) print df
The output:
a b c 0 1.0 4.0 NaN 1 2.0 5.0 NaN 2 3.0 NaN 7.0 3 NaN 6.0 8.0 a b c 0 1.0 4.0 0.0 1 2.0 5.0 0.0 2 3.0 0.0 7.0 3 0.0 6.0 8.0
It replaces every
None with
0's. What I want to do is, only replace
Nones in columns
a and
b, but not
c.
What is the best way of doing this?
You can select your desired columns and do it by assignment:
df[['a', 'b']] = df[['a','b']].fillna(value=0)
The resulting output is as expected:
a b c 0 1.0 4.0 NaN 1 2.0 5.0 NaN 2 3.0 0.0 7.0 3 0.0 6.0 8.0
From: stackoverflow.com/q/38134012 | https://python-decompiler.com/article/2016-07/pandas-dataframe-fillna-only-some-columns-in-place | CC-MAIN-2020-10 | refinedweb | 216 | 89.45 |
Scan input from a file (varargs)
#include <wchar.h> #include <stdarg.h> int vfwscanf( FILE * fp, const wchar_t *format, va_list arg );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The vfwscanf() function scans input from the file designated by fp, under control of the argument format.
The vfwscanf() function is the wide-character version of vfscanf(), and is a "varargs" version of fwscanf().
The number of input arguments for which values were successfully scanned and stored, or EOF if the scanning reached the end of the input stream before storing any values. | http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/v/vfwscanf.html | CC-MAIN-2018-13 | refinedweb | 103 | 65.42 |
Serial Port Programming With .NET
Introduction: Serial Port Programming With .NET
Serial ports provide an easy way to communicate between many types of hardware and your computer. They are relatively simple to use and are very common among peripherals and especially DIY projects. Many platforms such as Arduino have built in serial communication so they are really easy to set up and use. Many times you may want your project to communicate with your computer in order to have a cool interactive output, a neat sensor that passes data to your computer, or anything else you could possibly dream up. In this tutorial, I will walk you through how to interface to a serial port on the computer side of things, using Microsoft's . net framework. The code examples in this tutorial are in C#, but can be easily transferred to Visual Basic, or Visual C++. This tutorial assumes that you have a very basic understanding of object oriented programing, and whatever language you choose to program in.
Since we are mainly going to be using the System.IO.Ports.SerialPort class, HERE is a link to the full documentation by MSDN if you want to check out the rest of the class.
I also found a great article explaining how to fix several common bugs relating to serial ports. Check it out if you get stuck with any odd errors.
Feel free to post questions or feedback! I am always happy to hear constructive comments so I can make improvements.
Step 1: Set-up and Open the Serial Port
We need to include two namespaces in order to use the SerialPort class:
using System.IO.Ports;
using System.IO;
We now need to instantiate a SerialPort object. There are several constructors to choose from to specify different frame formats but in general the easiest to use is the following:
SerialPort(string portName, int baudRate)
An example of this would be:
SerialPort mySerialPort = new SerialPort( “COM3”, 9600);
Here I am using COM3 at 9600 baud. You can find the full list of constructors in the link in the intro. Now that we have created our SerialPort object, we need to open the port using the Open() method. When we are done we will want to close it using the Close() method:
mySerialPort.Open();
mySerialPort.Close();
Several notes: when you use any operation that uses a serial port there is a good chance that an error will occur. For this reason we want to write our code for the serial port within a try – catch block. This will prevent our program from crashing if, for example we tried to open a port that didn’t exist. It is not necessary to instantiate our object within the try – catch block, but we want to open, close, read, and write within it.
//this simply creates a SerialPort object then opens and closes the port
SerialPort mySerialPort = new SerialPort( “COM3”, 9600);
try
{
mySerialPort.Open();
mySerialPort.Close();
}
catch (IOException ex)
{
Console.WriteLine(ex);
}
That’s really all there is for port setup! In the next step we will introduce how to read from a serial port.
Step 2: Reading From the Port
Now that we have created our serial port object and opened the port, we now want to read from the serial port. Here are the basic read functions: (there are several other, but these are the simplest and will work for most applications)
int readChar() - returns the next char from the input buffer
int readByte() – returns the next byte from the input buffer
string readLine() – returns everything up to the newline character (‘\n’) in the input buffer
string readExisting() – returns everything in the input buffer
It should be noted that readChar() and readByte() both return integers not chars and bytes respectively. In order to convert them to their corresponding types, you will need to typecast them into their respective types:
char nextChar = (char)mySerialPort.readChar();
byte nextByte = (byte)mySerialPort.readByte();
The other two methods are pretty self-explanatory. In the next step I'm going to go a little more in depth about how we would go about reading from a port.
*Technical note*
It is worth noting that both ReadLine(), ReadExisting() return a string based off of decoded bytes from the input buffer. What does that mean? It means that for example if we received the bytes 0x48, 0x69, and 0x0A those would be decoded based off of the ASCII encoding to ‘H’ , ‘I’ , and ‘\n’. This is significant because if we wanted our hardware to send the numeric value of 65 (0x41), and we used ReadExisting() and printed the return value to a console window we would get an output of “A” not “65” because it decoded 0x41 and changed it to ‘A’. If you wanted to read the actual numeric value you should use readByte() or readChar() since they return integer values which are not decoded. The SerialPort class supports multiple encodings other than the default ASCII through the SerialPort.Encoding property; there is plenty of information about that in the link in the intro.
Step 3: Ways to Read From the Port
If we want to be continuously read from a serial port and, for the sake of example, display everything we read in a console window the simplest way to do this would be to do this would be to create a loop and repeatedly call one of our read methods. While this method gets the job done, there are some significant disadvantages to it. First, it is very limiting since you have to be constantly calling the same method over and over again and you’re stuck within the loop.
Another problem that arises with the read methods is that if there is no data in the input buffer when you call them, they will stall the execution of your program until there is valid data to read (this is similar behavior to the Console.ReadLine() method; the program doesn’t continue until the user hits enter). There are properties that you can set to force the method to return after a specific delay, but in general you don’t want your program to run slower than it has to.
A better way to be continuously reading is to check if there is data to be read in the input buffer using the SerialPort.BytesToRead property. This property returns the number of bytes in the input buffer that need to be read. With this we could set up a loop that will skip over the read code if there is nothing in the input buffer. Here’s an example:
while (true)
{
try
{
if (mySerialPort.BytesToRead > 0) //if there is data in the buffer
{
mySerialPort.ReadByte(); //read a byte
}
//other code that can execute without being held up by read method.
}
catch (IOException ex)
{
//error handling logic
}
}
This procedure certainly is more efficient that the previous method and will work for many simple situations where all you are really doing is continuously reading from the port. Let’s take a look at a different scenario. What if you were creating a large complex program that was handling many tasks and would not be able to work within the confines of an infinite loop? Luckily for you, the SerialPort class has created an event that is raised whenever there is new data in the input buffer. For anyone who doesn’t know what an event is, an event is something that interrupts your program when something important happens, calls a method to deal with the event, and then returns to where the program left off. In our case when data is received by the input buffer, the event will stop the program, call a method where we would most likely handle the data, and then go back to where our program left off. We will delve into this in the next step.
Step 4: Reading Using Events
The first thing we need to do is to tell our serial port what method to call when it receives data. This is done with the following line:
mySerialPort.DataReceived += new SerialDataEventHandler(mySerialPort_DataRecieved);
mySerialPort.DataReceived represents the method that is called to handle the event. We specify that method by the following part += new SerialDataEventHandler(mySerialPort_DataRecieved) which says use the method mySerialPort_DataRecieved when the event is raised. Next we have to actually create the method:
public static void mySerialPort_DataRecieved(object sender, SerialDataReceivedEventArgs e)
{
//whatever logic and read procedure we want
}
That’s really all there is to it. Just one more note: you will want to make sure when using events that you declare your SerialPort object as a class level field so that you can use it in multiple methods including the event handler. Different methods and procedures fit different situations, so you will have to find one that works and that you like to use. I personally like using the events whenever possible since they are the most efficient and leave the program free to do other things, but everyone has their preferences. In the next step I am going to talk about how to write to a serial port.
Step 5: Writing to the Port
Good news! Writing to a port is incredibly easy! Here are the write methods that you can use:
Write(String data)
WriteLine(String data)
Write(byte[] data, int offset, int length)
Write(char[] data, int offset, int length)
The first two methods are almost identical, except that WriteLine() writes a newline character(‘\n’) after writing the data. The other two write methods are also similar; the only difference is the data type of the data to send. To use them, you provide an array of bytes or characters which will be written to the serial port. The offset parameter just specifies what element of the array to start at i.e. If you pass it 0, it will start at the very beginning of the array; if you pass 1 it will start at the second element. The length parameter is simply the length of the array. Remember to perform these write operations within a try-catch block because they readily throw errors.
*Technical Note*
Remember encoding from step 2? The same concept is applied to Write() and WriteLine(). The call WriteLine(“Hi”) writes 0x41, 0x61, 0x0A (0x0A is the added ‘\n’ since we used WriteLine()). If we want to recognize them as characters on the hardware side you must have your own decoding logic present there.
Step 6: Conclusion
I hope this is helpful to you and your next project. This was something that I personally struggled to figure out so I hope that this will help you have an easier time learning how to interface over a serial port. Feel free to post comments, questions and feedback below.
amazing article, many thanks
Very nice,thank you .
Very nice step by step tutorial.
+1 For going through event handlers :)
could you please, tell me how to read the integer data from arduino uno in visual basic ?
Thanks very much for responding, I wasn't sure how old this post was. But yes, the meat & potatoes of the program relies on listening to these com ports but it has to listen to all of them at the same time which I guess would be using the event handler. Since the ports most likely would be different on other systems using this program, I have a separate program that reads and writes configuration data to an SQL database. Once the main program is launched, it reads in the configuration info of the com ports from the sql database. These com ports are connected to several 3rd party databases, so I need to send heartbeats to each one at time intervals and expect to receive an acknowledgement back to make sure the far end is listening. I have a server connected to the port that would send a telephone number to this program, the program would look at the number, compare it against a list to determine which 3rd party database it belongs to and then send it out that comm port. The 3rd party database at the far end would then send that customers address information to the program, the data would be rearranged, and then sent back to the server requesting it. (in a nutshell) We've used software like this in the past but now with the XP & server 2003 going away we need to write our own, I've designed stuff in VB6 before but never used the MSComm so I'm sort of a newbie to all of this. I'll be doing it in .net though so I've got my hands full learning the intricacies. Sorry to talk your ear off, have a nice night -Steve (I tried to upload a diagram don't know if it took)
Thanks for an informative guide, I am looking to read port configurations from an SQL database, assign them to multiple com ports, and then listen on each port for data, then act upon that data. Is it possible to dimension the serial port object into an array of objects? I haven't had a chance to play around with it yet but thought I would ask first. Thanks in advance -Steve
If I understand your question correctly, what you can do is create and array or collection (List, ArrayList, etc.) to hold and control all of the SerialPort objects. From there polling would be easy (yet time consuming for your program) with a simple for loop. For event driven reading, the process is a little different and would involve the use of delegates in order to accommodate the multiple ports. | http://www.instructables.com/id/Serial-Port-Programming-With-NET/ | CC-MAIN-2017-39 | refinedweb | 2,279 | 59.03 |
Create unit test method stubs with the Create Unit Tests command
The Visual Studio Create Unit Tests command provides the ability to create unit test method stubs. This feature allows easy configuration of a test project, the test class, and the test method stub within it.
Availability and extensions
The Create Unit Tests menu command:
Is available in the Community, Professional, and Enterprise Editions of Visual Studio 2015 and later.
Supports only C# code that targets the .NET Framework.
Is extensible, and supports emitting tests in MSTest, MSTest V2, NUnit, xUnit format.
Is not yet available in .NET Core projects.
To get started, select a method, a type, or a namespace in the code editor in the project you want to test, open the shortcut menu, and choose Create Unit Tests. The Create Unit Tests dialog opens, where the create options for the new unit tests can be selected.
Setting unit test traits
If you plan to run these tests as part of the test automation process, you might consider having the test created in another test project (the second option in the dialog above) and setting unit test traits for the unit test. This enables you to more easily include or exclude these specific tests as part of a continuous integration or continuous deployment pipeline. The traits are set by adding metadata to the unit test directly, as shown below.
Using third-party unit test frameworks
With Visual Studio, you can easily have unit tests created for you using any test framework. To install other test frameworks:
- Choose Tools > Extensions and Updates.
- Expand Online > Visual Studio Marketplace > Tools, and then choose Testing.
Test framework extensions are available in Visual Studio Marketplace:
When should I use this feature?
Use this feature whenever you need to create unit tests, but specifically when you are testing existing code that has little or no test coverage, and no documentation. In other words, where there is limited or non-existent code specification. It effectively implements an approach similar to Smart unit tests that characterize the observed behavior of the code.
However, this feature is equally applicable to the situation where the developer starts by writing some code, and uses that to bootstrap the unit testing discipline. Within the flow of coding, the developer might want to quickly create a unit test method stub (with a suitable test class, and a suitable test project) for a particular piece of code. | https://docs.microsoft.com/en-us/visualstudio/test/create-unit-tests-menu | CC-MAIN-2018-34 | refinedweb | 405 | 60.45 |
Greetings to all!
The js-code for Adobe applications (Photoshop, Illustrator, Indisign...) include preprocessor directive., ie "C-style statement starting with the # character":
#include "file.jsxinc"
#includepath "include;../include"
#script "testScript"
#strict on
#target photoshop
#targetengine illustrator
Syntax checking module swears. Automatic code formatting divides these directives into three rows.
I tried to use eval("# include test.jsx"). This works for some directives, but not for all.
How do I make WebStorm not swear?
Thanks!
For example, to make it clear what it was about, I attached screenshots.
Agree, it looks ugly;)
So how do I make peace webstorm with these specific directives?
Attachment(s):
ws_troubles_01.png
ws_troubles_02.png
Hi,
Could you submit your request to the tracker?. This would be the best way, as you will be notified on our progress and other's comments.
Kirill
Kirill, thank you! I added a request to the tracker. I hope that is correct:
WEB-6095 preprocessor directives in JavaScript for Adobe applications
Now what?
Thanks for submitting!
Honestly I can't say how soon we will fix this, it very much depends on the feedback / votes this request gets. It's very important that now other users can see it and vote for it.
Kirill
How can they vote for it?
This means that there is no standard means of solving my problem (using settings and parameters WebStorm)?
It is clear that WebStorm is intended primarily for web programming. And not for scripting in Adobe design/prepress software. But it is super (cool, great ets) IDE and I can not give it up because of such trifles. Adobe standard tool (ESTK) awful.
There's a vote button on the ticket page (below issue title).
Unfortunately, there's no standard way, as these directives are not the part of the JavaScript language.
Kirill
Saw, thanks. There is a one person voted.
JS-code for Adobe professional design applications, write pre-press professionals, not professional programmers. So few of them use professional tools such as WebStorm. It is unlikely that the case will move forward.
In IntellijIDEA Ultimate same problem.
Problem solved by write directives in format of comments:
//@targetengine "session_01"
//@target illustrator
//@include "mylib.jsx"
Preprocessor sees the directive and at the same time is not broken syntax javascript.
Problem solved.
I think the topic is closed.
Thank you for your help. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/207057525-Adobe-scripting-preprocessor-directives-in-WebStorm-5-0-3 | CC-MAIN-2019-13 | refinedweb | 390 | 69.99 |
Interrupts¶
- Polling to check a signal works, but the processor still needs to check things
- This puts a load on the chip we really do not need
- What we need is for an alarm to go off that we react to
- Welcome to interrupts
What is an interrupt?¶
Most computer systems support the idea of an interrupt?
- An interrupt is a signal generated by some device and sent to the processor
- These events happen at unpredictable times
- The source of the interrupt can be external or internal.
- The AVR can sense signals though the I/O pins on the chip
- Internal devices can generate them as well
- we will set up
Timer0so it generates an interrupt when it rolls over.
Asynchronous events¶
- Interrupts are asynchronous events
- we do not know exactly what we will be doing when they happen
- The chip handles interrupts using something like a procedure call
- This call might happen in the middle of your code
- We need to preserve the state of the chip before dealing with the event
- when we return, the original code will not know this happened!
Recognizing the interrupt¶
- We can turn all interrupts on or off with code
- The AVR has a
Global Interrupt Enableflag
- every internal device that can generate an interrupt has an enable flag
- We need to set all these bits correctly for the interrupt system to work
Controlling the global interrupt system¶
- If allowing interrupts might cause problems, we can do this:
CLI- disable interrupts
SEI- enable interrupts
The processor is initialized on power-up with interrupts disabled.
Handling the interrupt¶
- Basically, the interrupt is handled by a special procedure call.
- It happens between two instructions right after the event
- We need to set up code for the procedure at specific addresses
- Each interrupt source will call a procedure at the assigned address
- We need as many
handlersas we have interrupt sources
AVR Interrupt table¶
- the AVR sets up a
jump table, also called an
interrupt vector table
- This table starts at address 0x00
- each entry is just a jump to the actual procedure code
- we only need entries at places where we want to handle specific interrupts
New style AVR code¶
- To get things working with
avr-gcc, we need to change code a bit
- The linker will set up the
interrupt vector table
- Unfortunately, some simple code becomes not so simple
- We will use
macrosto make things work correctly!
avr-gccwill set up the chip!
Interrupt handler code¶
- The actual handler code looks like other procedures
- except this one ends with a new instruction:
InterruptHandler: ... reti
- The last instruction is vital.
- It resets the interrupt system after each interrupt is recognized.
The Reset Vector¶
- one special signal related to the interrupts, but is a bit different.
- This one happens when powering up of the processor
- Some systems have a
resetbutton
- The
resethandler is at location 0x00
- We normally place a jump to the actual program start point
avr-gccwill set this up as part of the
mainentry system
Saving Processor State¶
- We need to save the processor state in our handlers
- We know how to do that! (use the stack)
- Save any registers you intend to use
- Save the system flag register
SREGas well
- The interrupted code will thank you!
Using interrupts with Timer0¶
Let’s put this all together with a simple example.
- Our polling code checked the
Timer0 Overflow (TOV0)interrupt flag.
- This flag was being set by the timer, but did not generate an interrupt
- We were running with interrupts disabled!
- To generate an interrupt, we need to reprogram the timer (and chip)
- We will use the blinking light for this example
- We want the LED to blink once per second
Sample program¶
This program will consist of a main routine and timer code in separate files
#include "config.inc" .extern Timer0Setup .global main
- The
entry pointmust be named
mainfor this example
- Set project configurarion details in
config.inc
- The name
mainis the entry point in this example
The interrupt jump table¶
- This table will be set up by
avr-gcc
- We need to declare labels defined in the include files for this chip
- This is what
Timer.Sincludes:
.global TIMER0_OVF_vect TIMER0_OVF_vect:
- The linker will place a jump to this routine in the table
Setting up the clock¶
For this example, we will run the chip at full speed
; set up the system clock ldi r24, 0x80 ; set up prescaler sts CLKPR, r24 sts CLKPR, r1 ; set to full speed
Setting up the LED¶
The LED on the Arduino is on pin 5 of PORTB
; set up LED port sbi _(DDRB), 5 ; set up the output port (bit 6) cbi _(PORTB), 5 ; start off with the LED off
Finishing up¶
call Timer0Setup ; initialze the timer 1: rjmp 1b
Huh? Where is the work going to happen? In the handler!
In this simple example, we really have no work for the program to do, other than what will happen when interrupts occur. For that reason, we simply put the main code in an infinite lop. The interrupts will happen, and the processor will take care of those events with the code we provide.
Timer code¶
timer.S starts up with this code
; Timer.S - Timer0 code for blink #include "config.inc" .global Timer0Setup .section .data ISRcount: .byte 0 .section .text
We will discuss the
ISRcount data item later.
Timer setup¶
Set up the timer prescaler value here
;---------------------------------------------------------------------- ; Initialize Timer 0 for interrupts ; Timer0Setup: in r16, _(TCCR0B) ori r16, (1 << CS02) | (1 << CS00) ; divide by 1024 out _(TCCR0B), r16 ; set timer clock sbi _(TIFR0), (1<< TOV0); clear interrupt flag ;
Enabling interrupts¶
; lds r16, TIMSK0 ; get interrupt mask reg ori r16, (1 << TOIE0) ; enable interrupts sts TIMSK0, r16 out _(TCNT0), r1 ; zero the timer counter sts ISRcount, r1 ; and our counter variable ; sei ; let the fun begin ret
Once we get here, interrupts are at work!
The handler code¶
Finally, we need our handler code:
;---------------------------------------------------------------------- ; Timer0 overflow ISR ; .global TIMER0_OVF_vect TIMER0_OVF_vect: ; save callers registers push r1 push r0 in r0, _(SREG) push r0 eor r1, r1 push r24 push r25 ;
This code protects the important registers in the chip, and any registers we plan on using in our code.
Do the work¶
We let the handler toggle the LED on/off
; toggle LED port in r24, _(PORTB) ; get current PORTD ldi r25, (1 << LED_PIN) ; LED bit position eor r24, r25 ; toggle bit out _(PORTB), r24 ; set back in place sts ISRcount, r1 ; sero the counter 1:
Finishing up¶
Finally, we restore the system state
; recover user's registers pop r25 pop r24 pop r0 out _(SREG), r0 pop r0 pop r1 reti
Wow - we are blinking fast¶
- The above code blinks about 61 times per second.
- Let’s try a simple trick.
- create a simple counter variable
- have the interrupt handler increment the counter each time it is called
- count up to 61, then trigger out LED toggle code.
- reset the counter as we toggle the LED and start over
- With any luck, we will end up with a blink every second.
counter setup¶
We need a counter variable
.section .data ISRcount: .byte 0
THis was shown earlier.
We need to set the counter in the setup code
sei ; let the fun begin ret
Adding the count logic¶
In the handler, add this code to increment the counter
; bump the ISR counter lds r24, ISRcount ; get current count inc r24 ; add one sts ISRcount, r24 ; put it back ;
Blinking only when the count is reached¶
; test the counter to see if we toggle the LED lds r24, ISRcount ; needed? cpi r24, 61 ; one second is 61 interrupts brcs 1f ; skip if not
The label is after the blink logic, just before we restore all the registers and end the handler.
Resetting the count on toggle¶
- The last thing we do is reset the counter after toggling the LED
eor r24, r25 ; toggle bit out _(PORTB), r24 ; set back in place sts ISRcount, r1 ; sero the counter
That last line resets the counter for the next pass.
This now blinks (toggles) once per second!
This simple scheme to delay actions until some number of interrupts is seen is a simple mechanism to adjust when events are handled. We will use it later, when we explore a simple multi-tasking kernel for AVR projects. | http://www.co-pylit.org/courses/cosc2325/avr-interrupts/01-interrupts.html | CC-MAIN-2018-17 | refinedweb | 1,398 | 58.15 |
Split Arraylist of 2 types when printing it
split arraylist java
java split list based on condition
split string into arraylist java
split arraylist into chunks
split java
java how can i split an arraylist in multiple small arraylists
split a list into two halves in java
I have a list composed of Objects Every object is comprised of 2 different values
int number; String name; ArrayList<Object> objectList= new ArrayList<Object>();
I have another list composed of Cars Every Car has one value
String type; ArrayList<Cars> carList= new ArrayList<Cars>();
I want to print half the
objectList first then the
carList then the other half of the objectList
Like so.. for example:
Output= "number + type + name"
Basically i want the second list to be printed in between the first list.I searched the web but i am not really sure what i should be looking for and how exactly to code this.
Again i am sorry for the bad question.. i am trying to learn coding by myself because i can't afford it.
Thanks in advance.
Just iterate through the first list, and use the index to access the corresponding Object in the second list:
for(int i = 0; i < objectList.size(); i++){ Object obj = objectList.get(i); Car car = carList.get(i); //Print Here }
What is the best way to split an array list in Java?, So to split an ArrayList, lets say ogList in to two equal part we will use as below. [code]List l If you want List type as ArrayList, then you have to cast as this method returns List type. How do I print an attribute from object array in Java? 3) Print ArrayList using Arrays class. You can convert ArrayList to array and use toString method of Arrays class to print elements of an ArrayList. 1. 2. System.out.println("Print Arraylist using Arrays.toString method"); System.out.println( Arrays.toString(aListDays.toArray()) );
You say you want to print half of your list first, then you are able to split the original list by half:
Streams.of(objectList.sublist(0, length/2), carList, objectList.sublist(length/2, length)) .flatMap(Collection::stream) .forEach(o -> { if (o instanceof YourObject) { YourObject o1 = (YourObject)o; // Print with YourObject } else if (o instanceof Cars) { // Print car objects } })
Java Split String Into ArrayList Examples, Double To String 2 Decimal Places Examples · String Split Tutorial of the Java String Split, which is a String array, to an ArrayList is to use the.
You can very easily achieve this by a for loop. But as per your output format
The size of the
objectList and
carList must be same for this to work.
Try something like this :
for(int i=0;i<objectList.size();i++){ String name; int number; number = objectList.get(i).number; name = objectList.get(i).name; String type = carList.get(i).type; System.out.Println(name+type+number); }
As per your comment, if carList is in another class and is private then, you need to set a getter method that takes an index and returns the
type
Like this :
public class A{ private ArrayList<Car> carList = new ArrayList(); ........ //some codes public String getTypeAtIndex(int i){ return carList.get(i).type; } }
Then anywhere you can call
A a; String type = a.getTypeAtIndex(i);
Java Array of ArrayList, ArrayList of Array, Java Array of ArrayList example, ArrayList of Array in java, java array of list, java l2.add("3"); l2.add("4"); l2.add("5"); List<String>[] arrayOfList = new List[2]; list holds different types of Object arrays, let's print them for (Object[] objArr : list).
Split a list into two halves in Java, function to split a list into two sublists in Java. public static List[] split(List<String> list). {. // create two empty lists. List<String> first = new ArrayList<String>();. Other Types. Elements in an ArrayList are actually objects. In the examples above, we created elements (objects) of type "String". Remember that a String in Java is an object (not a primitive type). To use other types, such as int, you must specify an equivalent wrapper class: Integer. For other primitive types, use: Boolean for boolean,
ArrayList to Array Conversion in Java : toArray() Methods , Mathematical · Randomized Algorithms · Greedy Algorithms · Dynamic Programming · Divide and Conquer · Backtracking Following methods can be used for converting ArrayList to Array: Printing array of objects Note: toArray() method returns an array of type Object(Object[]). Method 2: Using T[] toArray(T[] a) Java ArrayList of Object Array..
How to get sublist of an ArrayList with example, The subList method returns a list therefore to store the sublist in another ArrayList we must need to type cast the returned value in same way as I did in the below Let us see how to store multiple data types in an java List/ ArrayList, store objects of different data types in an List/ArrayList is pretty simple Please consider disabling your ad blocker for Java4s.com, we won't encourage audio ads, popups or any other annoyances at any point, hope you support us :-) Thank you.
- does the two list have the same length?
- For clarifcation. Do you want something like for example
objectList=[obj1,obj2,obj3,...]and
carList=[car1,car2,car3,...]and to print
"obj1.number car1.type obj1.name"for each?
objectList.subList(0, objectList.size() / 2).forEach(System.out::println); carList.forEach(System.out::println); objectList.subList(objectList.size() / 2, objectList.size()).forEach(System.out::println);
- @Francisco Hey thanks for taking the time to answer. Yes that is the kind of output i am looking for. Also the 2 lists don't have the same length necessarily,
- @shmosel Can you clarify your code here and how it works? I would really appreciate it. I am not sure how to use it as an example.Thanks aswell!
- You'll want a flatMap in there.
- Hey thanks for answering. Is t = i? or is it different?
- Edited it was a mistake
- Hey i think that has to be the answer to my problem but what if the carList arraylist is in another class and private? I am stuck at the "String type = carList.get(i).type;" part. I get a cannot find symbol error on carList.
- If you mark this class private then you can not access it outside by any means. Except as I am about to mention in answer. See edit | http://thetopsites.net/article/50376518.shtml | CC-MAIN-2021-04 | refinedweb | 1,060 | 64.2 |
Interface for Sonoff devices running v3+ Itead firmware.
Project description
Control Sonoff devices running original firmware, in LAN mode.
To control Sonoff switches running the V3+ Itead firmware (tested on 3.0, 3.0.1, 3.1.0, 3.3.0), locally (LAN mode).
This will only work for Sonoff devices running V3+ of the stock (Itead / eWeLink) firmware. For users of V1.8.0 - V2.6.1, please use PySonoffLAN
This module provides a way to interface with Sonoff smart home devices, such as smart switches (e.g. Sonoff Basic), plugs (e.g. Sonoff S20), and wall switches (e.g. Sonoff Touch), when these devices are in LAN Mode.
LAN Mode is a feature introduced by manufacturer Itead, to allow operation locally when their servers are unavailable. Further details can be found in the eWeLink LAN Mode guide.
Since mid 2018, the firmware Itead have shipped with most Sonoff devices has provided this feature, allowing devices to be controlled directly on the local network using a WebSocket connection on port 8081.
Features
- Discover all devices on local network
- Read device state
- Switch device ON/OFF
- Listen for state changes announced by the device (e.g. by physical switch)
- Activate inching/momentary device, with variable ON time (e.g. 1s)
Documentation
- Documentation:.
Install
$ pip install pysonofflanr3
Command-Line Usage
Usage: pysonofflanr3 [OPTIONS] COMMAND [ARGS]... A cli tool for controlling Sonoff Smart Switches/Plugs in LAN Mode. Options: --host TEXT IP address or hostname of the device to connect to. --device_id TEXT Device ID of the device to connect to. --inching TEXT Number of seconds of "on" time if this is an Inching/Momentary switch. -l, --level LVL Either CRITICAL, ERROR, WARNING, INFO or DEBUG --help Show this message and exit. --api_key KEY Needed for devices not in DIY mode. See Commands: discover Discover devices in the network listen Connect to device, print state and repeat off Turn the device off. on Turn the device on. state Connect to device and print current state.
Usage Example
$ pysonofflan discover 2019-01-31 00:45:32,074 - info: Attempting to discover Sonoff LAN Mode devices on the local network, please wait... 2019-01-31 00:46:24,007 - info: Found Sonoff LAN Mode device at IP 192.168.0.77 $ pysonofflan --host 192.168.0.77 state 2019-01-31 00:41:34,931 - info: Initialising SonoffSwitch with host 192.168.0.77 2019-01-31 00:41:35,016 - info: == Device: 10006866e9 (192.168.0.77) == 2019-01-31 00:41:35,016 - info: State: OFF $ pysonofflan --host 192.168.0.77 on 2019-01-31 00:49:40,334 - info: Initialising SonoffSwitch with host 192.168.0.77 2019-01-31 00:49:40,508 - info: 2019-01-31 00:49:40,508 - info: Initial state: 2019-01-31 00:49:40,508 - info: == Device: 10006866e9 (192.168.0.77) == 2019-01-31 00:49:40,508 - info: State: OFF 2019-01-31 00:49:40,508 - info: 2019-01-31 00:49:40,508 - info: New state: 2019-01-31 00:49:40,508 - info: == Device: 10006866e9 (192.168.0.77) == 2019-01-31 00:49:40,508 - info: State: ON
Library Usage
All common, shared functionality is available through
SonoffSwitch class:
x = SonoffSwitch("192.168.1.50")
Upon instantiating the SonoffSwitch class, a connection is initiated and device state is populated, but no further action is taken.
For most use cases, you’ll want to make use of the
callback_after_update
parameter to do something with the device after a connection has been
initialised, for example:
async def print_state_callback(device): if device.basic_info is not None: print("ON" if device.is_on else "OFF") device.shutdown_event_loop() SonoffSwitch( host="192.168.1.50", callback_after_update=print_state_callback )
This example simply connects to the device, prints whether it is currently “ON” or “OFF”, then closes the connection. Note, the callback must be asynchronous.
Module-specific errors are raised as Exceptions, and are expected to be handled by the user of the library.
License
- Free software: MIT license
Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
History
1.1.4 (2020-03-29)
- Enabled code to work if device IP address changes
- Fixed faulty CLI introduced with 1.1.3 ()
- Removed previous workaround code for earlier version of zeroconf (<=24.4)
1.1.3 (2020-02-16)
- Fixed issue of reconnection that device remains unavailable until state changes
- Fixed retry code for strip type devices
1.1.2 (deleted release)
1.1.1 (2020-02-01)
- Optimisations to deal with later zeroconf versions which have some different behaviour
- Improved error handling of unexpected errors
1.1.0 (2020-01-10)
- First release on PyPI.
- Forked from PySonoffLAN package (courtesy of Andrew Beveridge)
- Works on V3 Itead firmware using mDNS for service discovery and REST for service invocation
- Supports DIY mode as well as ‘standard’ mode (for standard mode API key is needed to be obtained, e.g. by sniffing LAN)
- Supports all known devices for switching, although no sensors added at this point
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pysonofflanr3/ | CC-MAIN-2021-21 | refinedweb | 866 | 66.64 |
Creating and Extending Abstractions with Multimethods, Protocols, and Records
Take a minute to contemplate how great it is to be one of Mother Nature’s top-of-the-line products: a human. As a human, you get to gossip on social media, play Dungeons and Dragons, and wear hats. Perhaps more important, you get to think and communicate in terms of abstractions.
The ability to think in terms of abstractions is truly one of the best human features. It lets you circumvent your cognitive limits by tying together disparate details into a neat conceptual package that you can hold in your working memory. Instead of having to think the clunky thought “squeezable honking red ball nose adornment,” you only need the concept “clown nose.”
In Clojure, an abstraction is a collection of operations, and data types implement abstractions. For example, the seq abstraction consists of operations like
first and
rest, and the vector data type is an implementation of that abstraction; it responds to all of the seq operations. A specific vector like
[:seltzer :water] is an instance of that data type.
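For instance, because vectors participate in the seq abstraction, the seq operations just work on them:

(first [:seltzer :water])
; => :seltzer

(rest [:seltzer :water])
; => (:water)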
The more a programming language lets you think and write in terms of abstractions, the more productive you will be. For example, if you learn that a data structure is an instance of the seq abstraction, you can instantly call forth a large web of knowledge about what functions will work with the data structure. As a result, you spend time actually using the data structure instead of constantly looking up documentation on how it works. By the same token, if you extend a data structure to work with the seq abstraction, you can use the extensive library of seq functions on it.
In Chapter 4, you learned that Clojure is written in terms of abstractions. This is powerful because in Clojure you can focus on what you can actually do with data structures and not worry about the nitty-gritty of implementation. This chapter introduces you to the world of creating and implementing your own abstractions. You’ll learn the basics of multimethods, protocols, and records.
Polymorphism
The main way we achieve abstraction in Clojure is by associating an operation name with more than one algorithm. This technique is called polymorphism. For example, the algorithm for performing
conj on a list is different from the one for vectors, but we unify them under the same name to indicate that they implement the same concept, namely, add an element to this data structure.
Because Clojure relies on Java’s standard library for many of its data types, a little Java is used in this chapter. For example, Clojure strings are just Java strings, instances of the Java class
java.lang.String. To define your own data types in Java, you use classes. Clojure provides additional type constructs: records and types. This book only covers records.
Before we learn about records, though, let’s look at multimethods, our first tool for defining polymorphic behavior.
Multimethods
Multimethods give you a direct, flexible way to introduce polymorphism into your code. Using multimethods, you associate a name with multiple implementations by defining a dispatching function, which produces dispatching values that are used to determine which method to use. The dispatching function is like the host at a restaurant. The host will ask you questions like “Do you have a reservation?” and “Party size?” and then seat you accordingly. Similarly, when you call a multimethod, the dispatching function will interrogate the arguments and send them to the right method, as this example shows:

(ns were-creatures)

➊ (defmulti full-moon-behavior (fn [were-creature] (:were-type were-creature)))

➋ (defmethod full-moon-behavior :wolf
    [were-creature]
    (str (:name were-creature) " will howl and murder"))

➌ (defmethod full-moon-behavior :simmons
    [were-creature]
    (str (:name were-creature) " will encourage people and sweat to the oldies"))

➍ (full-moon-behavior {:were-type :wolf
                       :name "Rachel from next door"})
; => "Rachel from next door will howl and murder"

➎ (full-moon-behavior {:name "Andy the baker"
                       :were-type :simmons})
; => "Andy the baker will encourage people and sweat to the oldies"
This multimethod shows how you might define the full moon behavior of different kinds of were-creatures. Everyone knows that a werewolf turns into a wolf and runs around howling and murdering people. A lesser-known species of were-creature, the were-Simmons, turns into Richard Simmons, power perm and all, and runs around encouraging people to be their best and sweat to the oldies. You do not want to get bitten by either, lest you turn into one.
We create the multimethod at ➊. This tells Clojure, “Hey, create a new multimethod named
full-moon-behavior. Whenever someone calls
full-moon-behavior, run the dispatching function
(fn [were-creature] (:were-type were-creature)) on the arguments. Use the result of that function, aka the dispatching value, to decide which specific method to use!”
Next, we define two methods, one for when the value returned by the dispatching function is
:wolf at ➋, and one for when it’s
:simmons at ➌. Method definitions look a lot like function definitions, but the major difference is that the method name is immediately followed by the dispatch value.
:wolf and
:simmons are both dispatch values. This is different from a dispatching value, which is what the dispatching function returns. The full dispatch sequence goes like this:
- The form (full-moon-behavior {:were-type :wolf :name "Rachel from next door"}) is evaluated.
- full-moon-behavior’s dispatching function runs, returning :wolf as the dispatching value.
- Clojure compares the dispatching value :wolf to the dispatch values of all the methods defined for full-moon-behavior. The dispatch values are :wolf and :simmons.
- Because the dispatching value :wolf is equal to the dispatch value :wolf, the algorithm for :wolf runs.
Don’t let the terminology trip you up! The main idea is that the dispatching function returns some value, and this value is used to determine which method definition to use.
Back to our example! Next we call the method twice. At ➍, the dispatching function returns the value
:wolf and the corresponding method is used, informing you that
"Rachel from next door will howl and murder". At ➏, the function behaves similarly, except
:simmons is the dispatching value.
You can define a method with
nil as the dispatch value:
(defmethod full-moon-behavior nil
  [were-creature]
  (str (:name were-creature) " will stay at home and eat ice cream"))

(full-moon-behavior {:were-type nil
                     :name "Martin the nurse"})
; => "Martin the nurse will stay at home and eat ice cream"
When you call
full-moon-behavior this time, the argument you give it has
nil for its
:were-type, so the method corresponding to
nil gets evaluated and you’re informed that
"Martin the nurse will stay at home and eat ice cream".
You can also define a default method to use if no other methods match by specifying
:default as the dispatch value. In this example, the
:were-type of the argument given doesn’t match any of the previously defined methods, so the default method is used:
(defmethod full-moon-behavior :default
  [were-creature]
  (str (:name were-creature) " will stay up all night fantasy footballing"))

(full-moon-behavior {:were-type :office-worker
                     :name "Jimmy from sales"})
; => "Jimmy from sales will stay up all night fantasy footballing"
One cool thing about multimethods is that you can always add new methods. If you publish a library that includes the
were-creatures namespace, other people can continue extending the multimethod to handle new dispatch values. This example shows that you’re creating your own random namespace and including the
were-creatures namespace, and then defining another method for the
full-moon-behavior multimethod:
(ns random-namespace
  (:require [were-creatures]))

(defmethod were-creatures/full-moon-behavior :bill-murray
  [were-creature]
  (str (:name were-creature) " will be the most likeable celebrity"))

(were-creatures/full-moon-behavior {:name "Laura the intern"
                                    :were-type :bill-murray})
; => "Laura the intern will be the most likeable celebrity"
Your dispatching function can return arbitrary values using any or all of its arguments. The next example defines a multimethod that takes two arguments and returns a vector containing the type of each argument. It also defines an implementation of that method, which will be called when each argument is a string:
(ns user)

(defmulti types (fn [x y] [(class x) (class y)]))

(defmethod types [java.lang.String java.lang.String]
  [x y]
  "Two strings!")

(types "String 1" "String 2")
; => "Two strings!"
Incidentally, this is why they’re called multimethods: they allow dispatch on multiple arguments. I haven’t used this feature very often, but I could see it being used in a role-playing game to write methods that are dispatched according to, say, a mage’s major school of magic and his magic specialization. Either way, it’s better to have it and not need it than need it and not have it.
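As a rough sketch of what that might look like (the :school and :element keys, the mage and spell maps, and the method names here are all invented for illustration):

(defmulti cast-spell (fn [mage spell] [(:school mage) (:element spell)]))

(defmethod cast-spell [:evocation :fire]
  [mage spell]
  (str (:name mage) " hurls a fireball"))

(defmethod cast-spell [:illusion :sound]
  [mage spell]
  (str (:name mage) " conjures a phantom orchestra"))

(cast-spell {:name "Tamora" :school :evocation} {:element :fire})
; => "Tamora hurls a fireball"

The dispatching function builds a vector from both arguments, so each method's dispatch value is a two-element vector as well.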
Note: Multimethods also allow hierarchical dispatching. Clojure lets you build custom hierarchies, which I won’t cover, but you can learn about them by reading the documentation at http://clojure.org/multimethods/.
Protocols
Approximately 93.58 percent of the time, you’ll want to dispatch to methods according to an argument’s type. For example,
count needs to use a different method for vectors than it does for maps or for lists. Although it’s possible to perform type dispatch with multimethods, protocols are optimized for type dispatch. They’re more efficient than multimethods, and Clojure makes it easy for you to succinctly specify protocol implementations.
A multimethod is just one polymorphic operation, whereas a protocol is a collection of one or more polymorphic operations. Protocol operations are called methods, just like multimethod operations. Unlike multimethods, which perform dispatch on arbitrary values returned by a dispatching function, protocol methods are dispatched based on the type of the first argument, as shown in this example:
(ns data-psychology)

➊ (defprotocol ➋ Psychodynamics
    ➌ "Plumb the inner depths of your data types"
    ➍ (thoughts [x] "The data type's innermost thoughts")
    ➎ (feelings-about [x] [x y] "Feelings about self or other"))
First, there’s
defprotocol at ➊. This takes a name,
Psychodynamics ➋, and an optional docstring,
"Plumb the inner depths of your data types" ➌. Next are the method signatures. A method signature consists of a name, an argument specification, and an optional docstring. The first method signature is named
thoughts ➍ and can take only one argument. The second is named
feelings-about ➎ and can take one or two arguments. Protocols do have one limitation: the methods can’t have rest arguments. So a line like the following isn’t allowed:
(feelings-about [x] [x & others])
By defining a protocol, you’re defining an abstraction, but you haven’t yet defined how that abstraction is implemented. It’s like you’re reserving names for behavior (in this example, you’re reserving
thoughts and
feelings-about), but you haven’t defined what exactly the behavior should be. If you were to evaluate
(thoughts "blorb"), you would get an exception that reads, “No implementation of method: thoughts of protocol: data-psychology/Psychodynamics found for class: java.lang.String.” Protocols dispatch on the first argument’s type, so when you call
(thoughts "blorb"), Clojure tries to look up the implementation of the
thoughts method for strings, and fails.
You can fix this sorry state of affairs by extending the string data type to implement the
Psychodynamics protocol:
➊ (extend-type java.lang.String
  ➋ Psychodynamics
  ➌ (thoughts [x] (str x " thinks, 'Truly, the character defines the data type'"))
  ➍ (feelings-about
      ([x] (str x " is longing for a simpler way of life"))
      ([x y] (str x " is envious of " y "'s simpler way of life"))))

(thoughts "blorb")
➎ ; => "blorb thinks, 'Truly, the character defines the data type'"

(feelings-about "schmorb")
; => "schmorb is longing for a simpler way of life"

(feelings-about "schmorb" 2)
; => "schmorb is envious of 2's simpler way of life"
extend-type is followed by the name of the class or type you want to extend and the protocol you want it to support—in this case, you specify the class
java.lang.String at ➊ and the protocol you want it to support,
Psychodynamics, at ➋. After that, you provide an implementation for both the
thoughts method at ➌ and the
feelings-about method at ➍. If you’re extending a type to implement a protocol, you have to implement every method in the protocol or Clojure will throw an exception. In this case, you can’t implement just
thoughts or just
feelings; you have to implement both.
Notice that these method implementations don’t begin with
defmethod like multimethods do. In fact, they look similar to function definitions, except without
defn. To define a method implementation, you write a form that starts with the method’s name, like
thoughts, then supply a vector of parameters and the method’s body. These methods also allow arity overloading, just like functions, and you define multiple-arity method implementations similarly to multiple-arity functions. You can see this in the
feelings-about implementation at ➍.
After you’ve extended the
java.lang.String type to implement the
Psychodynamics protocol, Clojure knows how to dispatch the call
(thoughts "blorb"), and you get the string
"blorb thinks, 'Truly, the character defines the data type'" at ➎.
What if you want to provide a default implementation, like you did with multimethods? To do that, you can extend
java.lang.Object. This works because every type in Java (and hence, Clojure) is a descendant of
java.lang.Object. If that doesn’t quite make sense (perhaps because you’re not familiar with object-oriented programming), don’t worry about it—just know that it works. Here’s how you would use this technique to provide a default implementation for the
Psychodynamics protocol:
(extend-type java.lang.Object
  Psychodynamics
  (thoughts [x] "Maybe the Internet is just a vector for toxoplasmosis")
  (feelings-about
    ([x] "meh")
    ([x y] (str "meh about " y))))

(thoughts 3)
; => "Maybe the Internet is just a vector for toxoplasmosis"

(feelings-about 3)
; => "meh"

(feelings-about 3 "blorb")
; => "meh about blorb"
Because we haven’t defined a
Psychodynamics implementation for numbers, Clojure dispatches calls to
thoughts and
feelings-about to the implementation defined for
java.lang.Object.
Instead of making multiple calls to
extend-type to extend multiple types, you can use
extend-protocol, which lets you define protocol implementations for multiple types at once. Here’s how you’d define the preceding protocol implementations:
(extend-protocol Psychodynamics
  java.lang.String
  (thoughts [x] "Truly, the character defines the data type")
  (feelings-about
    ([x] "longing for a simpler way of life")
    ([x y] (str "envious of " y "'s simpler way of life")))

  java.lang.Object
  (thoughts [x] "Maybe the Internet is just a vector for toxoplasmosis")
  (feelings-about
    ([x] "meh")
    ([x y] (str "meh about " y))))
You might find this technique more convenient than using
extend-type. Then again, you might not. How does
extend-type make you feel? How about
extend-protocol? Come sit down on this couch and tell me all about it.
It’s important to note that a protocol’s methods “belong” to the namespace that they’re defined in. In these examples, the fully qualified names of the
Psychodynamics methods are
data-psychology/thoughts and
data-psychology/feelings-about. If you have an object-oriented background, this might seem weird because methods belong to data types in OOP. But don’t freak out! It’s just another way that Clojure gives primacy to abstractions. One consequence of this fact is that, if you want two different protocols to include methods with the same name, you’ll need to put the protocols in different namespaces.
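For example, once the data-psychology namespace above is loaded, you call the methods through that namespace like any other var (the namespace name other-namespace below is made up for illustration, and the result shown assumes the extend-protocol implementation above is the one in effect):

(ns other-namespace
  (:require [data-psychology]))

(data-psychology/thoughts "blorb")
; => "Truly, the character defines the data type"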
Records
Clojure allows you to create records, which are custom, maplike data types. They’re maplike in that they associate keys with values, you can look up their values the same way you can with maps, and they’re immutable like maps. They’re different in that you specify fields for records. Fields are slots for data; using them is like specifying which keys a data structure should have. Records are also different from maps in that you can extend them to implement protocols.
To create a record, you use
defrecord to specify its name and fields:
(ns were-records)

(defrecord WereWolf [name title])
This record’s name is
WereWolf, and its two fields are
name and
title. You can create an instance of this record in three ways:
➊ (WereWolf. "David" "London Tourist")
; => #were_records.WereWolf{:name "David", :title "London Tourist"}

➋ (->WereWolf "Jacob" "Lead Shirt Discarder")
; => #were_records.WereWolf{:name "Jacob", :title "Lead Shirt Discarder"}

➌ (map->WereWolf {:name "Lucian" :title "CEO of Melodrama"})
; => #were_records.WereWolf{:name "Lucian", :title "CEO of Melodrama"}
At ➊, we create an instance the same way we’d create a Java object, using the class instantiation interop call. (Interop refers to the ability to interact with native Java constructs within Clojure.) Notice that the arguments must follow the same order as the field definition. This works because records are actually Java classes under the covers.
The instance at ➋ looks nearly identical to the one at ➊, but the key difference is that
->WereWolf is a function. When you create a record, the factory functions
->RecordName and
map->RecordName are created automatically. At ➌,
map->WereWolf takes a map as an argument with keywords that correspond to the record type’s fields and returns a record.
If you want to use a record type in another namespace, you’ll have to import it, just like you did with the Java classes in Chapter 12. Be careful to replace all dashes in the namespace with underscores. This brief example shows how you’d import the
WereWolf record type in another namespace:
(ns monster-mash (:import [were_records WereWolf])) (WereWolf. "David" "London Tourist") ; => #were_records.WereWolf{:name "David", :title "London Tourist"}
Notice that
were_records has an underscore, not a dash.
You can look up record values in the same way you look up map values, and you can also use Java field access interop:
(def jacob (->WereWolf "Jacob" "Lead Shirt Discarder"))

➊ (.name jacob)
; => "Jacob"

➋ (:name jacob)
; => "Jacob"

➌ (get jacob :name)
; => "Jacob"
The first example,
(.name jacob) at ➊, uses Java interop, and the examples at ➋ and ➌ access
:name the same way you would with a map.
When testing for equality, Clojure will check that all fields are equal and that the two comparands have the same type:
➊ (= jacob (->WereWolf "Jacob" "Lead Shirt Discarder"))
; => true

➋ (= jacob (WereWolf. "David" "London Tourist"))
; => false

➌ (= jacob {:name "Jacob" :title "Lead Shirt Discarder"})
; => false
The test at ➊ returns
true because
jacob and the newly created record are of the same type and their fields are equal. The test at ➋ returns
false because the fields aren’t equal. The final test at ➌ returns
false because the two comparands don’t have the same type:
jacob is a
WereWolf record, and the other argument is a map.
Any function you can use on a map, you can also use on a record:
(assoc jacob :title "Lead Third Wheel") ; => #were_records.WereWolf{:name "Jacob", :title "Lead Third Wheel"}
However, if you
dissoc a field, the result’s type will be a plain ol’ Clojure map; it won’t have the same data type as the original record:
(dissoc jacob :title) ; => {:name "Jacob"} <- that's not a were_records.WereWolf
This matters for at least two reasons: first, accessing map values is slower than accessing record values, so watch out if you’re building a high-performance program. Second, when you create a new record type, you can extend it to implement a protocol, similar to how you extended a type using
extend-type earlier. If you
dissoc a record and then try to call a protocol method on the result, the record’s protocol method won’t be called.
Here’s how you would extend a protocol when defining a record:
➊ (defprotocol WereCreature
➋   (full-moon-behavior [x]))

➌ (defrecord WereWolf [name title]
    WereCreature
    (full-moon-behavior [x]
      (str name " will howl and murder")))

(full-moon-behavior (map->WereWolf {:name "Lucian" :title "CEO of Melodrama"}))
; => "Lucian will howl and murder"
We’ve created a new protocol,
WereCreature ➊, with one method,
full-moon-behavior ➋. At ➌,
defrecord implements
WereCreature for
WereWolf. The most interesting part of the
full-moon-behavior implementation is that you have access to
name. You also have access to
title and any other fields that might be defined for your record. You can also extend records using
extend-type and
extend-protocol.
When should you use records, and when should you use maps? In general, you should consider using records if you find yourself creating maps with the same fields over and over. This tells you that that set of data represents information in your application’s domain, and your code will communicate its purpose better if you provide a name based on the concept you’re trying to model. Not only that, but record access is more performant than map access, so your program will become a bit more efficient. Finally, if you want to use protocols, you’ll need to create a record.
Further Study
Clojure offers other tools for working with abstractions and data types. These tools, which I consider advanced, include
deftype,
reify, and
proxy. If you're interested in learning more, check out the official Clojure documentation on data types.
Summary
One of Clojure’s design principles is to write to abstractions. In this chapter, you learned how to define your own abstractions using multimethods and prototypes. These constructs provide polymorphism, allowing the same operation to behave differently based on the arguments it’s given. You also learned how to create and use your own associative data types with
defrecord and how to extend records to implement protocols.
When I first started learning Clojure, I was pretty shy about using multimethods, protocols, and records. However, they are used often in Clojure libraries, so it’s good to know how they work. Once you get the hang of them, they’ll help you write cleaner code.
Exercises
- Extend the full-moon-behavior multimethod to add behavior for your own kind of were-creature.
- Create a WereSimmons record type, and then extend the WereCreature protocol.
- Create your own protocol, and then extend it using extend-type and extend-protocol.
- Create a role-playing game that implements behavior using multiple dispatch.
triggered because of compilation problems of the DST and triggering the
following errors :]
as explained here :
the messages as bogus and had been fixed later as can be seen by the following
patch :;a=commitdiff_plain;h=52fc0b026e99b5d5d585095148d997d5634bbc25;hp=46358614ed5b031797522f1020e989c959a8d8a6
Adding this patch to the Gentoo-provided patches, until it gets applied
upstream (it should have been part of this pull:
but it is still not applied as of 2.6.16-rc3), would prevent this issue from being
raised and investigated further for all users running a 2.6.16 kernel, at the
least.
Created an attachment (id=85710)
acpi-handle-null-flood.patch
That patch is too large to include. Please confirm that this smaller one
resolves your problem.
proposed patch confirmed to work.
I agree that a proposed fix shouldn't apply the full patch, but only the
following snippet (composed mostly of formatting and other null changes) from
that patch :
file:a95f636dc35d454e16865732cff8e4c57ab3731f ->
file:71e7769d7dafcf66653cbb0d9ebb0f9e34c9b9c7
and obtained by doing (attached, even if i keep thinking there should be an
easier way to get it)
mkdir -p ~/a/drivers/acpi/namespace
git-cat-file blob a95f636dc35d454e16865732cff8e4c57ab3731f >
~/a/drivers/acpi/namespace/nsxfeval.c
mkdir -p ~/b/drivers/acpi/namespace
git-cat-file blob 71e7769d7dafcf66653cbb0d9ebb0f9e34c9b9c7 >
~/b/drivers/acpi/namespace/nsxfeval.c
cd
diff -urp a b > acpica-20060210-nsxfeval.patch
rm -rf a b
or the corresponding snippet from that patch (attached as
acpica-20060210-nsxfeval-debug.patch), and this way preventing future merge
conflicts or divergent code from upstream.
Sadly, along those lines, I confirmed with Len (the Linux ACPI maintainer) that this
patchset won't be merged upstream until 2.6.18 at the earliest.
Created an attachment (id=85733)
nsxfeval.c changes as part of acpica-20060210
Created an attachment (id=85735)
snippet from nsxfeval.c changes part of acpica-20060210
We'll only take what's necessary; the larger patch contains other code changes
(and actually would increase the chance of conflict).
Thanks for testing.
Fixed in gentoo-sources-2.6.16-r5 (genpatches-2.6.16-7) | http://bugs.gentoo.org/131534 | crawl-002 | refinedweb | 369 | 56.76 |
Hooks
Qtile provides a mechanism for subscribing to certain events in
libqtile.hook.
To subscribe to a hook in your configuration, simply decorate a function with
the hook you wish to subscribe to.
See Built-in Hooks for a listing of available hooks.
Examples
Automatic floating dialogs
Let’s say we wanted to automatically float all dialog windows (this code is not
actually necessary; Qtile floats all dialogs by default). We would subscribe to
the
client_new hook to tell us when a new window has opened and, if the
type is "dialog", we can set the window to float. In our configuration file it
would look something like this:
from libqtile import hook

@hook.subscribe.client_new
def floating_dialogs(window):
    dialog = window.window.get_wm_type() == 'dialog'
    transient = window.window.get_wm_transient_for()
    if dialog or transient:
        window.floating = True
A list of available hooks can be found in the Built-in Hooks reference.
Autostart
If you want to run commands or spawn some applications when Qtile starts, you’ll
want to look at the
startup and
startup_once hooks.
startup is
emitted every time Qtile starts (including restarts), whereas
startup_once
is only emitted on the very first startup.
Let’s create an executable file
~/.config/qtile/autostart.sh that will
start a few programs when Qtile first runs. Remember to chmod +x this file so
that it can be executed.
#!/bin/sh
pidgin &
dropbox start &
We can then subscribe to
startup_once to run this script:
import os
import subprocess

@hook.subscribe.startup_once
def autostart():
    home = os.path.expanduser('~/.config/qtile/autostart.sh')
    subprocess.call([home])
Accessing the qtile object
If you want to do something with the
Qtile manager instance inside a hook,
it can be imported into your config:
from libqtile import qtile | http://docs.qtile.org/en/latest/manual/config/hooks.html | CC-MAIN-2021-39 | refinedweb | 289 | 56.76 |
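A short sketch of how the imported object might be used follows; the exact attributes available on the qtile object differ between releases, so treat current_group here as an assumption to check against your installed version:

from libqtile import hook, qtile

@hook.subscribe.client_new
def announce_new_client(window):
    # The imported qtile object may be None when the config is parsed
    # outside a running session, so guard before touching it.
    if qtile is not None:
        print("new window opened while group %s is active" % qtile.current_group.name)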
How to unit test React applications with Jest and Enzyme
You will need Node 6+ and Yarn installed on your machine.
Writing automated tests is very important in any real world project, but it can be notoriously difficult to figure out, especially in the frontend world.
Jest is a testing tool from Facebook that makes it easy to perform unit testing in JavaScript. Enzyme on the other hand, is React specific. It provides a bunch of helpful methods that enhance how we test React components.
Let’s take a look at how Jest and Enzyme can be leveraged to create more robust React applications
Prerequisites
To follow through with this tutorial, you need Node.js (v6 and above) and npm installed on your machine. You also need to install
yarn since that’s what
create-react-app uses.
npm install -g yarn
Set up a React application
Before we can write any tests, we need to create an application we can test. We’ll create a simple counter app that increments a count once a button is clicked. Let’s bootstrap the project with create-react-app so we can get up and running with minimal fuss.
Install
create-react-app by running the following command in your terminal:
npm install -g create-react-app
Then create your React app with the following command:
create-react-app counter-app
Once the application has been created,
cd into the
counter-app directory and run
yarn start to launch the development server. You should see a message confirming successful compilation and the ports where you can access the app.
Now, open the
counter-app folder in your favorite text editor and locate
src/App.js. Change its contents to look like this:
// src/App.js import React, { Component } from 'react'; class App extends Component { constructor() { super(); this.state = { count: 0, } } makeIncrementer = amount => () => this.setState(prevState => ({ count: prevState.count + amount, })); increment = this.makeIncrementer(1); render() { return ( <div> <p>Count: {this.state.count}</p> <button className="increment" onClick={this.increment}>Increment count</button> </div> ) } } export default App;
Our React app has some initial state
count which is set to zero, and a button that, once clicked, increments this
count state through the
increment function which simply adds 1 to the value of
count and updates the application state.
Jest basics
Normally, we’d need to install and configure Jest before writing any tests, but since
create-react-app ships with Jest already installed, we don’t have to do any of that. We can jump straight into writing our first test.
If you look at the
src/App.test.js, you will see that a test has already been written for us. It tests that the App component can render without crashing.
// src/App.test.js import React from 'react'; import ReactDOM from 'react-dom'; import App from './App'; it('renders without crashing', () => { const div = document.createElement('div'); ReactDOM.render(<App />, div); ReactDOM.unmountComponentAtNode(div); });
Let’s add a dummy test below this one in
App.test.js:
// src/App.test.js ... describe('Addition', () => { it('knows that 2 and 2 make 4', () => { expect(2 + 2).toBe(4); }); });
We can go ahead and run both tests using the
yarn test command which runs
jest under the hood. A success message should be printed out on the screen:
Now, let’s change one of the tests so that it fails. Within
src/App.test.js, change the
Addition test to look like this:
// src/App.test.js describe('Addition', () => { it('knows that 2 and 2 make 4', () => { expect(2 + 2).toBe(5); }); });
Check your terminal output. You can see that the first test passes as before while the second one fails, and the reason for the failure is also printed.
A
describe() function groups related tests together inside one test suite. It takes a
name parameter, which should describe the component you’re testing, and a callback function where individual tests are defined with
it.
You might see individual tests with
test in some projects. To be sure,
it and
test are one and the same thing.
it is only an alias for
test.
// src/App.test.js describe('Addition', () => { it('knows that 2 and 2 make 4', () => { expect(2 + 2).toBe(4); }); // is equaivalent to test('knows that 2 and 2 make 4', () => { expect(2 + 2).toBe(4); }); });
What you want to test is wrapped in a call to the
expect() function, before calling what is termed a “matcher” function on it. In the above example,
toBe() is the matcher function used. It checks that the value provided equals the value that the code within the
expect() function produces.
Writing your first test
Before we begin writing our own tests, we need to add a few packages to our application for it to be able to test via Enzyme’s shallow renderer:
yarn add enzyme enzyme-adapter-react-16 --dev
Enzyme is built to support different versions of React. In this tutorial, I’m using the latest stable version of React which is 16.4.2, but you might be working with an older version of React, say React 15.x. So you also have to install an Adapter that corresponds to the version of React that you are using.
You also need to create a
setupTests.js file within your
src folder that tells Jest and Enzyme what Adapters you will be making use of.
create-react-app has been configured to run this file automatically before any of our tests, so that Enzyme is set up correctly.
// src/setupTests.js
import { configure } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';

configure({ adapter: new Adapter() });
Now, can can begin writing tests for our application. Jump to
src/App.test.js and change its contents to look like this:
// src/App.test.js
import React from 'react';
import { shallow } from 'enzyme';
import App from './App';

describe('App component', () => {
  it('starts with a count of 0', () => {
    const wrapper = shallow(<App />);
    const text = wrapper.find('p').text();
    expect(text).toEqual('Count: 0');
  });
});
We’re taking advantage of Enzyme’s shallow rendering to test our app’s initial state. A shallow render is a simulated render of a component tree that does not require a DOM. It renders only one level of components deep, and enables the inspection of the component’s contents as well as the simulation of user interaction.
In the above snippet, the shallow render of our
App component is stored in the
wrapper variable. We then grab the text inside the
p tag within the component’s output and check if the text is the same was what we passed into the
toEqual matcher function.
Testing user interaction
Let’s go ahead and write a new test that simulates a click on the button and confirms that the count output is incremented by 1.
Add this below the first test:
// src/App.test.js
describe('App component', () => {
  ...
  it('increments count by 1 when the increment button is clicked', () => {
    const wrapper = shallow(<App />);
    const incrementBtn = wrapper.find('button.increment');
    incrementBtn.simulate('click');
    const text = wrapper.find('p').text();
    expect(text).toEqual('Count: 1');
  });
});
The
simulate() function on the
button variable can simulate a number of DOM events on an element. Here we are simulating the
click event on the button. We’ve also set up an expectation that the
count should be equal to 1 now.
If you check the terminal output, you should observe that the test passes as expected.
Let’s go ahead and do something a bit different. We’ll add a test for some functionality that doesn’t exist yet, then go ahead and write the code to make the test pass. This methodology of writing tests before the code is known as Test Driven Development (TDD).
Create another test within the
describe() function that looks like this:
// src/App.test.js
describe('App component', () => {
  ...
  it('decrements count by 1 when the decrement button is clicked', () => {
    const wrapper = shallow(<App />);
    const decrementBtn = wrapper.find('button.decrement');
    decrementBtn.simulate('click');
    const text = wrapper.find('p').text();
    expect(text).toEqual('Count: -1');
  });
});
Now you should have a failing test in the terminal:
Method "simulate" is only meant to be run on a single node. 0 found instead. If you're not used to Enzyme, you might find the error message rather cryptic. What it means is that the
simulate() method was called on an element that doesn’t exist yet.
Let’s now go into the React component and write the code that will fix this test:
// src/App.js import React, { Component } from 'react'; class App extends Component { constructor() { super(); this.state = { count: 0, } } makeIncrementer = amount => () => this.setState(prevState => ({ count: prevState.count + amount, })); increment = this.makeIncrementer(1); decrement = this.makeIncrementer(-1); render() { return ( <div> <p>Count: {this.state.count}</p> <button className="increment" onClick={this.increment}>Increment count</button> <button className="decrement" onClick={this.decrement}>Decrement count</button> </div> ) } } export default App;
We added a decrement button after the increment button in the
render() function and a
decrement() function that decreases the value of
count by 1 and updates the application state. At this point, all three tests should pass successfully.
Testing React components with snapshots
Snapshot testing helps you check that the rendered output of a component is correct at all times. When you run a snapshot test, Jest renders the React component being tested and stores the output in a JSON file.
On further test runs, Jest will check that the output of the component has not deviated from what it saved previously. If you change the a component’s output, Jest will notify you and you can either update the snapshot to the latest version or fix the component so that it matches the snapshot again. This method of testing components helps you avoid accidental changes to your components because Jest will always notify you when a difference is detected.
To use Jest’s snapshot feature, we need an additional package, react-test-renderer, which can be installed through yarn:
yarn add react-test-renderer --dev
Then import it at the top of
App.test.js:
// src/App.test.js import renderer from 'react-test-renderer'; ...
Next, create a test below all the previously created ones:
// src/App.test.js
it('matches the snapshot', () => {
  const tree = renderer.create(<App />).toJSON();
  expect(tree).toMatchSnapshot();
});
The first time this test is run, there is no snapshot for this component so Jest creates it. You can inspect the contents of the snapshots inside the
src/__snapshots__ directory.
Open up
App.test.js.snap:
// src/__snapshots__/App.test.js.snap // Jest Snapshot v1, exports[`App component matches the snapshot 1`] = ` <div> <p> Count: 0 </p> <button className="increment" onClick={[Function]} > Increment count </button> <button className="decrement" onClick={[Function]} > Decrement count </button> </div> `;
You can see that the rendered output of the
App component is saved in this file. The next time this test is run, Jest will confirm that the outputs are the same. We can demonstrate this concept by changing the rendered output of
App slightly.
Let’s assume that we accidentally changed the text of the increment button in
App.js to Increment count2. The test should fail.
Since we didn’t intend to make this change, we can simply undo the change and the test will pass again.
Let’s make an actual change to the
App component. Change the text of the increment button to Increment and the decrement button to Decrement. Your render function should look like this:
// src/App.js render() { return ( <div> <p>Count: {this.state.count}</p> <button className="increment" onClick={this.increment}>Increment</button> <button className="decrement" onClick={this.decrement}>Decrement</button> </div> ) }
The tests should fail again. But since we actually intended to make this change, we need to update the snapshot to this latest version. We can do so by pressing
u in the terminal window where the tests are being run. Everything should be green again!
Conclusion
We’ve looked at how Jest makes testing React components much easier and how you can use it in conjunction with Enzyme for unit testing and snapshot testing. You can grab all the code written in this tutorial on GitHub for you to check out and run locally.
October 11, 2018
by Ayooluwa Isaiah | https://pusher.com/tutorials/react-jest-enzyme/ | CC-MAIN-2022-21 | refinedweb | 2,050 | 57.16 |
On 8/20/2012 3:36 PM, Gregg Smith wrote:
> Hi Joe,
>
> There seems to be a problem with this commit.
> mod_ssl.c
> .\mod_ssl.c(288) : error C2491: 'modssl_run_npn_advertise_protos_hook' : definition of
> dllimport function not allowed
> .\mod_ssl.c(294) : error C2491: 'modssl_run_npn_proto_negotiated_hook' : definition of
> dllimport function not allowed
That's because the API design is invalid. As I noted in the 2.2 backport
status file;
* mod_ssl: Add support for Next Protocol Negotiation.
Trunk patch:
2.2.x patch:
+1: benl
sf notes: needs the buffer overflow fix from r1345599, too
wrowe notes: also needs correction to
ssl_engine_kernel.c: In function 'ssl_callback_AdvertiseNextProtos':
ssl_engine_kernel.c:2140:5: warning: implicit declaration of function
'modssl_run_npn_advertise_protos_hook'
Including mod_ssl.h after ssl_private.h seems to suffice.
The change introduces hard linkages from modules into
mod_ssl.so (distinct from httpd), AP is the incorrect
namespace, see mod_dav main hooks as an example.
Prior to this patch all calls to mod_ssl were by way of
registered functions through apr bindings. Seems there
aught to be a way to add an npn cooperating module when
mod_ssl is not loaded, but right now it would fail.
An mmn minor bump would also be required for API addition. | http://mail-archives.apache.org/mod_mbox/httpd-dev/201208.mbox/%[email protected]%3E | CC-MAIN-2017-17 | refinedweb | 197 | 60.21 |
I don’t love this pattern either, but I disagree with some of your arguments against it.
The part I do agree with is that the service locator is a loosely-typed repository, and this makes it harder to write reliable software. Services are referred to by arbitrary string keys, so the service dependencies in an application are not known by the compiler and can never be validated, even in a strongly-typed, compiled language like Java. Failing to find an expected service object, or finding the wrong type of object in the repository, will always be a runtime error. That’s the biggest issue with the service locator pattern. Unless you are highly disciplined with naming and namespacing your service keys, and are building a very extensible system with a lot of optional plug-in modules, this pattern probably does more harm than good.
I disagree, though, that the pattern forces you to have a passive model layer. You can manage your models however you like. You wouldn’t provide the service locator to your models, but that doesn’t mean you can’t provide them with other objects they need, such as a reference to the datastore service or database connection they came from.
It’s also wrong to say the pattern is bad just because it creates a global namespace where certain objects live. No matter how you choose to build up your application’s object graph, the root-level objects will always need to be stored somewhere. It could be an ‘application’ class of some kind, a service manager / registry as discussed here, or the singleton / static method pattern. These alternatives all provide some kind of ‘global’ or app-wide mechanism for obtaining access to core services.
myApp->getLog()
services->get('Log')
Logger::getInstance()
Again, the only real disadvantage of the service registry compared to the other options, is that it’s loosely typed. You can’t be sure that the ‘Log’ service will exist, and you will have to cast the result to the appropriate class at runtime. That’s what sucks about a service locator. | http://bx.com.au/blog/2015/09/the-service-locator-software-design-pattern-is-an-object-unoriented-fad/ | CC-MAIN-2018-30 | refinedweb | 351 | 58.82 |
This means that it gets past the try-except in
> backend_wx.py, but can't find wxPySimpleApp.
Are you using the standard python shell or an IDE like pycrust? If
the latter, it may be overriding the sys.exit which is why you aren't
seeing it. From your previous post, it looks like your wx
installation is in bad shape. Until you can do 'from wxPython.wx
import *' from the standard python shell, I wouldn't bother with
trying to tweak matplotlib.
I didn't see any error message like the one you were getting in the
wxpython archives - you might try the wx mailing list.
JDH | https://discourse.matplotlib.org/t/cant-use-wxagg/1102 | CC-MAIN-2019-51 | refinedweb | 108 | 75.71 |
Abstract: The book "Wicked Cool Java" contains a myriad of interesting libraries, both from the JDK and various open source projects. In this review, we look at two of these, the java.util.Scanner and javax.sql.WebRowSet classes.
Welcome to the 119th edition of The Java(tm) Specialists' Newsletter. In March I will be speaking at TheServerSide Java Symposium in Las Vegas. My topics are: Java Specialists in Action and Productive Coder.
I must admit that the title of the book put me off a bit. In South Africa the adjective "wicked" has a dark meaning. Not so in the USA, where it nowadays means: strikingly good, effective, or skillful. Add "cool" to that, and it becomes: "strikingly super cool Java". And that, dear newsletter reader, pretty much sums up the contents of this book!
This book is more than just a rehash of the JavaDoc changes. It points you to all sorts of cool utilities and open source projects that let you do things from playing music to writing neural networks. "Wicked Cool". At the same time, it does not bore you with mountains of detail. Perfect for The Java(tm) Specialists' Newsletter readers.
As I usually do with book reviews, I will highlight a few nice gems from the book, and you can then buy "Wicked Cool Java" on Amazon or your local bookshop.
The first thing in the book to catch my eye was the new
java.util.Scanner class which Sun added in Java 5.
It helps us parse text on an input stream, which previously we
had to do with BufferedReader, StringTokenizer and various
text parsers.
Given the following text file, it would read the columns and parse them at the same time:
file logfile.txt:
-----------------
entry 2006 01 11 1043 meeting Smith, John
exit 2006 01 11 1204 Smith, John
entry 2006 01 11 1300 work Eubanks, Brian
exit 2006 01 11 2120 Eubanks, Brian
alarm 2006 01 11 2301 fire This was a drill
Here is how you could use the
java.util.Scanner class:
import java.util.Scanner;
import java.io.*;

public class ScannerTest {
  public static void main(String[] args) throws IOException {
    Scanner scanner = new Scanner(new FileReader("logfile.txt"));
    while(scanner.hasNext()) {
      String type = scanner.next();
      int year = scanner.nextInt();
      int month = scanner.nextInt();
      int day = scanner.nextInt();
      int time = scanner.nextInt();
      System.out.printf("%d/%d/%d@%d", year, month, day, time);
      if (type.equals("entry")) {
        String purpose = scanner.next();
        String restOfLine = scanner.nextLine().trim();
        System.out.printf(" entry %s: %s%n", purpose, restOfLine);
      } else if (type.equals("exit")) {
        String exitName = scanner.nextLine().trim();
        System.out.printf(" exit %s%n", exitName);
      } else if (type.equals("alarm")) {
        String alarmType = scanner.next();
        String comment = scanner.nextLine().trim();
        System.out.printf(" alarm %s: %s%n", alarmType, comment);
      } else {
        throw new IllegalArgumentException();
      }
    }
    scanner.close();
  }
}
The output of the program is:
2006/1/11@1043 entry meeting: Smith, John
2006/1/11@1204 exit Smith, John
2006/1/11@1300 entry work: Eubanks, Brian
2006/1/11@2120 exit Eubanks, Brian
2006/1/11@2301 alarm fire: This was a drill
Scanner can parse primitive
types, BigDecimal and BigInteger. Thus, you could write:
scanner.nextBigDecimal() or
scanner.nextBoolean().
Another interesting addition to Java 5 is
javax.sql.WebRowSet, implemented by
com.sun.rowset.WebRowSetImpl. You should not use
classes from
com.sun.* packages directly in your
code. Rather store the implementation class name in a
configuration file and create the WebRowSet implementation using
a factory class.
WebRowSet can generate and read XML based on a well-defined
schema defined by Sun.
You can use them by either giving them a ResultSet instance, or
by providing the properties of what should be executed. In
Wicked Cool Java, Brian Eubanks
executes a query and passes the ResultSet to the WebRowSet
instance using the
populate(ResultSet) method.
In my example, I will show an alternative approach:
import com.sun.rowset.WebRowSetImpl;

import javax.sql.rowset.WebRowSet;

public class WebRowSetTest {
  public static void main(String[] args) throws Exception {
    if (args.length != 5) {
      System.err.println("Usage: java WebRowSetTest " +
          "driver url user password table");
      System.exit(1);
    }
    int column = 0;
    String driver = args[column++];
    String url = args[column++];
    String user = args[column++];
    String password = args[column++];
    String table = args[column++];

    Class.forName(driver);
    WebRowSet data = new WebRowSetImpl(); // rather use a factory
    data.setCommand("SELECT * FROM " + table);
    data.setUsername(user);
    data.setPassword(password);
    data.setUrl(url);
    data.execute(); // executes command and populates webset
    data.writeXml(System.out);
    data.close();
  }
}
In preparation for our move to Crete, Greece, I have started playing tavli, similar to backgammon. You can find me frequenting Tavli-Mania, where Greeks from all around the world come to pass their time. I prefer Tavli to Chess, since even the worst player can beat a pro. Strategy is important, but the dice can turn a game. Yesterday I had a ten game losing streak, but this morning I beat the #1 player. Imagine beating the #1 chess player!
An interesting bit of information that I discovered on Wikipedia: Backgammon computer games work better with neural networks than with brute force, like chess. My brain has never been brute force, which might explain my ineptitude at chess :)
In the book, Brian Eubanks mentions an open source neural network engine called Joone (Java Object Oriented Neural Engine), which even includes a graphical editor. I wish I knew more about neural networks to really appreciate this tool.
Oh, the book also mentions tools for genetic algorithms, intelligent agents and computational linguistics. Then we have scalable vector graphics, XML based Swing layouts and to top it off, code on how to record sound from within Java and a library for synthesizing speech.
Fun book to read, with just enough detail to get you searching in different directions. Not for Java beginners. But rather, Wicked Cool Java is for the wicked Java programmer ;-)... | https://www.javaspecialists.eu/archive/Issue119.html | CC-MAIN-2018-13 | refinedweb | 999 | 58.99 |
Sit.
Starting with the User
Even though this article is about the technology used to implement SitePen’s Support service, it doesn’t make any sense to talk about technology without talking about the user experience. The technology is there to do something. But, what?
With the SitePen Support service we provide support for the Dojo Toolkit, DWR and Cometd open source projects. We’re out to provide customers with the help they need when they need it. At the highest level, we needed to:
- Collect support requests from our customers
- Act on them
- Keep the customer informed
- Follow along with the terms of our support contracts
Without those things, we wouldn’t have a service at all. In addition, there are other requirements for making it the kind of service we’d be proud of:
- The customer user interface should be very responsive
- The user interface should be less like a content-oriented web site and more like an application
- People in a company should be able to work together easily (data should be shared)
- Customers should be able to bring themselves up-to-date on what’s happening in their account at any time
Note that our goals were all about providing a great support service, and not about creating software. If there was off-the-shelf software that would do all of the above for us, at the level of quality we expected, we would certainly have used it. But, there wasn’t. Which brings us to…
The Tech of SitePen Support
To realize those goals for the service, we employed a bunch of different tools and techniques:
- Dojo runs the client-side
- The client drives the whole interaction
- The browser speaks JSON-RPC with the server for most operations
- The server is built on Python’s WSGI standard
- Client-driven apps have very little “obscurity” to try to hide behind, so we needed to be sure we followed best practices
- Off-the-shelf help desk software (HelpSpot) handles part of the work for us
The Client-side Runs the App
A typical web app today looks something like this:
Typical modern web app model
Most interactions are decided by the server. The client makes a request, the server gathers data and uses some sort of template engine to format that data and present it. That cycle is repeated over and over again, with the server always deciding what comes next and how the next bit will be displayed. Many apps today add some Ajax to that (that little JavaScript box at the top of the diagram), but very often the formatting of data is handled by the server and the client just uses .innerHTML to drop the fresh content in place.
For our support application, the model looks more like:
One model for rich client web apps
If plain HTML is like the modern day equivalent of a green screen terminal, this design approach helps make your browser usable as a real thick client.
Dojo Runs the Client
The entire user interaction in the Support application is driven by the JavaScript client-side code. Dojo is a natural fit for this style of working, with its built in module system, RPC support, dojo.data interfaces and powerful Dijits.
We set up a very simple “PageModule” system where we use Dojo’s dynamic loading to load a new JavaScript module from the server and then call “initPage” on the code in that module. That will load up any HTML it needs to display and initialize everything for the user. A simple call to spsupport.core.loadContent is all we need to do to move the user onto the next piece of functionality:
loadContent: function(moduleName, params) { // Loads a new content section (if necessary). // A content section is defined by a Dojo module that // has an "initPage" function in it. The module is loaded // if need be and then initPage is called. if(spsupport.core.currentPage && spsupport.core.currentPage.cleanupPage){ spsupport.core.currentPage.cleanupPage(); } var module = dojo.getObject(moduleName); spsupport.core.currentPage = module; if(module && module.initPage){ // initPage is like _Widget startup(), though you can safely // call it often. each "content" resource should implement it's // own _started mechanism in initPage, and treat initPage() as // if it were a "selectChild()" call module.initPage(params) }else{ var thedot = moduleName.lastIndexOf("."); var packageName = moduleName.substring(0, thedot); var moduleRemainder = moduleName.substring(thedot + 1) + ".js"; dojo.xhrGet({ url: dojo.moduleUrl(packageName, moduleRemainder).uri, preventCache: true, handleAs:"javascript", load: function(js) { if(js && js.initPage){ js.initPage(params); } } }) } },
With this kind of modular architecture, we could have a giant application in which everything gets loaded on demand. Combining this setup with Dojo’s build system gives us quite a bit of control over exactly when things load, allowing us to balance initial load time with interactive responsiveness. Any significant “single page” application will need this. The Support application is by no measure a “giant” application, but using good application design techniques like this allows us to add whatever features we need to the application without impacting its load time or responsiveness.
URL Dispatch: Not Just for Servers Anymore
The page content is decided by the JavaScript in the client, not by the server.
URL dispatch is one of the core features of a server-side web framework. It turns out that putting the client in charge moved some of the burden of URL dispatch to the client! When you hit the front page of the Support site, the JavaScript figures out where you really want to go:
- /: send the user over to the support page on SitePen’s main site
- /?login: give the user a chance to login
- /?signup-(someplan): give the user a signup page with a plan selected
When working with server-side frameworks, you get used to URLs being divided up by slashes and everything after the ? denoting extra query parameters. With the client in control, the slashes tell the server what static file to serve up, and everything after the ? tells the client what to display. Given that we only have three different possibilities there, we didn’t have to get fancy with our URL dispatch. You certainly could write a client-side framework that has many of the same features you get from a server-side framework, if that’s what your application needs. The support application only needed to interpret a very small number of URLs.
Full-stack Framework? Not Anymore!
Our support application doesn’t use server-side templates for the user interface and doesn’t really do URL dispatch. The “full-stack” web frameworks in use today (Rails, Django, TurboGears, CakePHP, Grails, to name a few) are basically defined by their URL dispatch, templates and database support. Given that we didn’t need two of the three of those, we could go a lot simpler on the server than a full-stack framework.
Our server-side code is written in Python. Many components and libraries are now built around the Web Server Gateway Interface (WSGI) specification, making it easier than ever to put a collection of webapp components together. WSGI works well enough to have spawned a version in Erlang.
In our main web stack, we gathered up the following components:
- CherryPy’s WSGI server, used both in development on its own and in production behind Apache
- Luke Arno’s Static WSGI app, which serves up static files. This is used primarily in development, and Apache serves up our static files in production
- Mikeal Rogers’ wsgi_jsonrpc for responding to the RPC requests from the client
- Ian Bicking’s WebOb for the small number of dynamic operations that couldn’t be JSON-RPC
- Mike Bayer’s SQLAlchemy for database mapping
JSON-RPC
In the past, I’ve often used plain old HTTP requests returning JSON results as a convenient and simple mechanism for requesting data and actions on the server. In fact, that was the approach I took in my PyCon talk. For the Support project, we decided to use JSON-RPC instead because it’s a little bit cleaner. In Dojo, making a JSON-RPC request is just like making a function call that returns a Deferred. So, the syntax for using our server-side API was very straightforward. Even better, parameters and return values automatically came across as the correct types (strings, numbers, arrays, etc). The server-side is also simplified, because it does not need to do much in the way of URL dispatch (JSON-RPC POSTs to a single URL) and the server doesn’t need to worry about converting incoming values from strings.
wsgi_jsonrpc is quite easy to work with. We subclassed it to handle our authentication easily and added a neat bit where making a GET request to the JSON-RPC URL would return the service description. That little change made it easy to wire up Dojo on the client:
dojo.xhrGet({ url: "/jsonrpc", handleAs: 'json', load: function(response){ spsupport.service = new dojo.rpc.JsonService({ serviceType: 'JSON-RPC', serviceURL: "/jsonrpc", timeout: 6000, methods : response['procs'] }); }, sync:true });
This small snippet will synchronously load the RPC service description. It works synchronously because the UI can’t do much until it can make RPC calls for data. Then, it pulls the function names out of the response to create the JsonService. From that point onward, we just make calls to spsupport.service.function_name whenever we need to call the server.
Each available RPC call is simply a Python function in a module that has some extra metadata attached to it. For example, the request_details function is used to look up a support request by ID and return the detailed information about the request:
@auth @params(dict(name='id', type='str')) @returns('obj') def request_details(user, id): """Returns the detailed information for a request. Params: * id: the request ID Returns: * object with the detailed information """
The @returns decorator is used to mark the function as one that should be available via JSON-RPC, and to also make note of the return type. The @params decorator, combined with the return type listed in @returns, are used when generating the service description for the client. @auth tells our JSONRPCDispatcher subclass that this function requires authentication. Whenever the @auth decorator is present, the first parameter passed to the function is always the user object. The function itself can then perform additional checks. For example, request_details makes sure that the request is from the same organization as the user.
This design makes it very easy for us to write automated tests for the server side code. I’m personally a fan of test driven development in general, and in the next section we’ll see why automated tests are particularly important for this kind of application.
All Out in the Open
When you provide a rich user interface, particularly using Open Web technologies, you cannot count on security through obscurity. You have to assume that people will study your code and learn about all of your “hidden” URLs that make up sensitive APIs. You would never guess that Gmail doesn’t have a public API, given the number of add-ons people have made for it.
Never trust the client code and requests coming from the client. When creating new, authenticated APIs on the server, the first unit test I write is one that ensures that unauthorized users are given the boot. When working with WSGI and WebOb, writing tests that can run without a server is quite easy:
def test_bad_credentials(): req = Request.blank('/lookup') req.headers['Authorization'] = 'Basic HiThere==' res = req.get_response(requests.lookup_customer) print res assert res.status == '401 Unauthorized'
webob.Request.blank gives you a new Request object that is properly populated to look like a real request. You can then make changes from there to set up your test conditions. In the example above, I’m passing along bad authentication information. At the end, I assert that the result of sending bad authentication information is the expected 401 response.
As easy as testing is using WebOb, unit testing the JSON-RPC calls is even more straightforward. We just call the function directly as we would any Python function that we are unit testing. This kind of application setup makes server-side testing a breeze.
Of course, there’s a lot more work required to ensure that you’re handling data securely than just authenticating the users. We confirm that the user is authorized to access the data they are trying to access (with unit tests, of course). Using SQLAlchemy, we are not vulnerable to SQL injection attacks. We also run all of the requests over SSL to make sure that customer data is not grabbed off of possibly insecure networks.
HelpSpot: the Extra Layer in our Stack
Once a user is logged in, they’re taken to the /dashboard/ page where they can review their requests and account information and create new requests. The support dashboard is built around everything I’ve discussed so far. Each “page” in the dashboard is a separate module loaded via the loadContent call, and those modules make JSON-RPC requests to retrieve and update data on the server.
When you first log in, you can see the recent support request activity.
All of the details of your current support plan are available on one screen.
The server-side software that we wrote is responsible for keeping track of support plans and gathering up support requests for a given organization so that they can be displayed together. The support requests themselves with their complete histories are all tracked by HelpSpot. Within SitePen, we use HelpSpot’s user interface to update requests, and HelpSpot manages all email interaction. This saved us a good deal of implementation work.
HelpSpot is written in PHP, so we can’t directly call its functions from our Python-based server. One reason we chose HelpSpot is that it offers a solid web API of its own. We make simple HTTP requests to HelpSpot and it returns JSON formatted data. All of those requests are between our support application and HelpSpot. There are some instances where we needed to look up or update data in bulk, and HelpSpot did not have APIs specifically for that. Luckily, HelpSpot’s database schema is nicely designed and easy to understand, so there are instances where we also collect data directly from HelpSpot’s database.
Putting it all together
Bringing new developers up to speed on our support project is simple, because we use zc.buildout. zc.buildout creates a sandbox on the developer’s system with all of the pieces they need to work on the project.
Once we’re ready to deploy, we use a Paver pavement file to describe how to package up the software. Our pavement runs the Dojo build system to combine and shrink the JavaScript files and then bundles everything up into an egg file. The pavement also includes a task that will upload the egg to the server. Using zc.buildout also works great at deployment time, because we just have to run “bin/buildout” in the server’s deployment directory.
Creating the Desired User Experience
I think that the tools and techniques we used in building our support application were nifty and different from how most people are building webapps today. Our approach was all driven by a desire to provide a great user experience that meets our four original goals:
- Collect up support requests from our customers
- Act on them
- Keep the customer informed
- Follow along with the terms of our support contracts
The right tools helped us to reach these goals without a giant development budget.
Next month, I’ll be writing about the processes and tools we use to manage the support service within SitePen and ensure that we’re always on top of our customers’ needs.
This page documents features available after Volatility 1.4 beta.
A useful analysis of memory is to try to find objects that remain in memory, but are currently unlinked or unreachable through list traversal techniques. For example, we might want to find residues of processes which have terminated, and therefore are removed from the list of running processes. Once the process is terminated, the EPROCESS structure is removed from process lists, but might still remain in unallocated memory for quite some time after being terminated by the system. Similarly a rootkit might be able to unlink the process from the EPROCESS structure, yet the process might continue running - this is a common way of hiding processes.
Scanning for various memory structures is a technique which is effective against such hiding methods. The idea is that we test each byte of memory as a candidate in representing the structure we want and run a number of sanity tests on it to make sure it actually is such a structure. Therefore, we do not traverse any lists, and even if the process is terminated or unlinked we still find it.
This section describes how one would implement a memory scanner for EPROCESS as an example. The next section describes the specific implementation in volatility.
For an EPROCESS to be considered valid, we might require conditions such as the following (they correspond to the checks used by the PSScan scanner shown below):
- The _EPROCESS.Pcb.Header Type and Size fields must contain their expected constant values (0x03 and 0x1b respectively on this profile).
- The DirectoryTableBase (DTB) must be properly aligned.
- The thread list must point to valid memory.
- The synchronization members of the structure must be consistent.
We start off at the beginning of the virtual space and check each byte against these conditions. If any of these conditions don't match we continue on with the next byte. If all conditions match for a particular offset, this is a potential candidate for an _EPROCESS.
The first thing that's obvious is that since all tests have to match, failing any test will allow us to not consider this current byte offset. Therefore we can order the tests such that simpler tests can be made first, while more complex tests happen later, provided the simpler tests passed. This allows us to shortcut performing complex tests in cases where it's immediately obvious that the structure cannot possibly match since the simple test has failed.
In our case the first test checks that _EPROCESS.Pcb.Header.Type is exactly 0x03. It should be immediately obvious that since this test will always fail when the particular offset is not 0x03, we can simply search forward for the next 0x03 at that offset - completely ignoring all bytes in between. So a further optimization is that the first test can skip a bunch of data for us which is obviously not going to match, and save us testing each byte in between.
So these are the most crucial optimizations:
- Order the checks so that the cheapest tests run first; any failure short-circuits the more expensive tests.
- Let the first check skip forward over data which obviously cannot match, instead of re-testing every intermediate byte.
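To make the shape of such a scanner concrete, here is a generic sketch of the main loop (not Volatility's actual code): the checks run cheapest first, and the first check is asked how far it can safely jump ahead:

def scan(address_space, checks, start=0, end=0x7fffffff, chunk=1024 * 1024):
    """Yield offsets where every check passes; checks are ordered cheapest first."""
    offset = start
    while offset < end:
        data = address_space.read(offset, chunk)
        i = 0
        while i < len(data):
            # all() evaluates in order, so a failed cheap check short-circuits
            if all(check.check(offset + i) for check in checks):
                yield offset + i
                i += 1
            else:
                # ask the first (cheapest) check how far ahead it can safely jump
                i += max(1, checks[0].skip(data, i))
        offset += chunk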
Volatility provides two classes for implementing Scanners, both are automatically registered through the registry system. All you have do is extend the right classes in the plugin and they will be made available.
Specific checks are implemented using the volatility.scan.ScannerCheck base class. For example:
class DispatchHeaderCheck(scan.ScannerCheck):
""" A very fast check for an _EPROCESS.Pcb.Header.
This check assumes that the type and size of
_EPROCESS.Pcb.Header are unsigned chars, but allows their
offsets to be determined from vtypes (so they could change
between OS versions).
"""
order = 10
def __init__(self, address_space, **kwargs):
## Because this checks needs to be super fast we first
## instantiate the _EPROCESS and work out the offsets of the
## type and size members. Then in the check we just read those
## offsets directly.
eprocess = obj.Object("_EPROCESS", vm=address_space, offset=0)
self.type = eprocess.Pcb.Header.Type
self.size = eprocess.Pcb.Header.Size
self.buffer_size = max(self.size.offset, self.type.offset) + 2
scan.ScannerCheck.__init__(self, address_space)
def check(self, offset):
data = self.address_space.read(offset + self.type.offset, self.buffer_size)
return data[self.type.offset] == "\x03" and data[self.size.offset] == "\x1b"
def skip(self, data, offset):
try:
nextval = data.index("\x03", offset+1)
return nextval - self.type.offset - offset
except ValueError:
## Substring is not found - skip to the end of this data buffer
return len(data) - offset
The check does some initialization work in its constructor (in this case, pre-calculates some offsets). The two interesting methods are:
A scanner is just a set of such checks specified in order:
class PSScan(scan.BaseScanner):
""" This scanner carves things that look like _EPROCESS structures.
Since the _EPROCESS does not need to be linked to the process
list, this scanner is useful to recover terminated or cloaked
processes.
"""
checks = [ ("DispatchHeaderCheck", {}),
("CheckDTBAligned", {}),
("CheckThreadList", {}),
("CheckSynchronization", {})
]
The checks are a list of tuples containing (name of test, argv). The argv is a dictionary which will be used to instantiate the check with (in case it takes parameters in its constructors). Note that the check is specified as a named string since the actual class implementation is retrieved from the registry system. This allows us to define a check in one plugin and use it in many other plugins without regard to the exact place its defined from.
To actually use the scanner we instantiate the scanner and then call its scan() method - causing it to iterate over all matches in the address space. The scanner will generate all offsets which are deemed to have matched. For example:
def calculate(self):
address_space = utils.load_as(astype = 'physical')
for offset in PSScan().scan(address_space):
yield obj.Object('_EPROCESS', vm=address_space, offset=offset)
You can do anything with the offsets returned - for example display them, save them to a file or even perform further checks on them.
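For example, a plugin might print a few fields from each carved structure. A sketch of a render method, assuming the usual render_text convention and the standard _EPROCESS members:

def render_text(self, outfd, data):
    # print one line per carved process: offset, PID and image name
    for eprocess in data:
        outfd.write("0x{0:08x} {1:6} {2}\n".format(
            eprocess.obj_offset,
            eprocess.UniqueProcessId,
            eprocess.ImageFileName))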
A very useful technique in windows memory analysis is the use of pool scanners. When a piece of memory is allocated in windows, its often allocated with a special tag which corresponds to the driver or subsystem to allocate the memory. This tag is used for debugging and is not really essential for use by the system (which is why many rootkits overwrite the tag or change it). Never the less, the tag is very useful for locating objects quickly. In volatility use use the PoolTagCheck to test for pool tags. For example:
class PoolScanSockFast(scan.PoolScanner):
checks = [ ('PoolTagCheck', dict(tag = "TCPA")),
('CheckPoolSize', dict(condition = lambda x: x == 0x170)),
('CheckPoolType', dict(non_paged = True, free = True)),
## Valid sockets have time > 0
('CheckSocketCreateTime', dict(condition = lambda x: x > 0)),
('CheckPoolIndex', dict(value = 0))
]
In the above example, we see that the PoolTagCheck check takes a single argument of tag, which can be passed in the second member of the check tuple. Note that the PoolTagCheck implements a skip method, which as described above, allows us to skip all the data which does not contain the pool tag - this makes this scanner extremely fast.
Further checks include CheckSocketCreateTime which allows us to pass a callable to its constructor for checking the sanity of the creation time field - in this case we check that its greater than 0.
Note that this scanner extends scan.PoolScanner. That class simply allows us to specify a bunch of structures which follow the pool tag and appear before the object of interest such that the scan method yields the object of interest. For example, the default:
preamble = [ '_POOL_HEADER', ]
Means we skip a pool header and yield the next object after that.
In this way pool tag scanning is actually the same as regular scanning - just employing a much faster condition. Since pool scanning is less reliable than more thorough scanning we can produce fast and slow versions of the same scanner by including the pool tag check for the fast check, and relying on more complex checks for slow scanner.
The above examples were taken from psscan.py. | http://code.google.com/p/volatility/wiki/Scanners | crawl-003 | refinedweb | 1,260 | 53.21 |
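As an illustration of that last point (this is a sketch, not code from psscan.py, and the pool tag value is a placeholder), a fast variant and a thorough variant can share the same check classes and differ only in their check lists:

class PSScanFast(scan.PoolScanner):
    """Fast variant: rely on the pool tag to do most of the filtering."""
    checks = [ ('PoolTagCheck', dict(tag = "Proc")),   # hypothetical tag value
               ('CheckDTBAligned', {}) ]

class PSScanThorough(scan.BaseScanner):
    """Slow variant: no pool tag shortcut, so apply every structural check."""
    checks = [ ('DispatchHeaderCheck', {}),
               ('CheckDTBAligned', {}),
               ('CheckThreadList', {}),
               ('CheckSynchronization', {}) ]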
There was a post on the DD Forum a little while ago (last week, when I wrote this article) about the issue of menus and site navigation in Dynamic Data. I've had several goes at this on several projects, and although I have had workable solutions, I felt I was reinventing the wheel each time. So, after the post politely asking me to go into it a little deeper, I made some suggestions on the forum that I afterwards felt were still going about this in the wrong way. What I felt we needed was our own SiteMapDataSource
and of course I was wrong, I discovered after a little research was I needed a SiteMapProvider which would plug into the SiteMapDataSource via the web.config allowing me to provide my own solution to navigation and menus. This is very similar the MembersipProvider and RolesProvider which makes this really cool
So now we will have a solution that is easy to deploy and maintain; in good Blue Peter style, here are the things we are going to need:
The Menu attribute
The SiteMapProvider
Updates to the Web.Config
Metadata
And a Menu and SiteMapPath (bread crumb) controls
web.sitemap file for custom paths (non Model pages)
The MenuAttribute
At first I had some code like Listing 1: I generated the menu in the page (site.master) and bound it to an XmlDataSource. This worked after a fashion and produced the menus I wanted, but they were organised by the DB structure - not really what I wanted.

protected void Page_Load(object sender, EventArgs e)
{
    //System.Collections.IList visibleTables = MetaModel.Default.VisibleTables;
    System.Collections.IList visibleTables = MetaModel.Default.VisibleTables;
    if (visibleTables.Count == 0)
    {
        throw new InvalidOperationException("There are no accessible tables. Make..");
    }

    var root = new XElement("home");
    foreach (var table in MetaModel.Default.VisibleTables)
    {
        root.Add(GetChildren(table));
    }

    XmlDataSource1.Data = root.ToString();
    Menu1.DataSource = XmlDataSource1;
    Menu1.Orientation = Orientation.Horizontal;
    Menu1.DataBind();
}

private XElement GetChildren(MetaTable parent)
{
    XElement children = new XElement("Menu",
        new XAttribute("title", parent.DisplayName),
        new XAttribute("url", parent.ListActionPath),
        from c in parent.Columns
        where c.GetType() == typeof(MetaChildrenColumn)
            && ((MetaChildrenColumn)c).ChildTable.Name != parent.Name
        orderby c.DisplayName
        select GetChildren(((MetaChildrenColumn)c).ChildTable));

    return children;
}
Listing 1 – old Menu generating code
I suppose I could have used a sitemap file ("Web.sitemap" by default), but I decided after a few stunted attempts with client sites that although this gave you the control, it was tedious and boring, especially with a constantly changing site.
So my final design was to have an attribute to apply to the metadata that would allow me to impose the site structure I wanted, in the same place I was applying all my other attributes (keep it all in one place is what I say - then when you go looking it's easy to find).
/// <summary>
/// Attribute to identify which column to use as a
/// parent column for the child column to depend upon
/// </summary>
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class MenuAttribute : Attribute, IComparable
{
    public static MenuAttribute Default = new MenuAttribute();

    /// <summary>
    /// Gets or sets the name of the menu.
    /// </summary>
    /// <value>The name of the menu.</value>
    public String Name { get; private set; }

    /// <summary>
    /// Gets or sets the parent.
    /// </summary>
    /// <value>The parent.</value>
    public String Parent { get; set; }

    /// <summary>
    /// Gets or sets a value indicating whether this <see cref="MenuAttribute"/> is show.
    /// </summary>
    /// <value><c>true</c> if show; otherwise, <c>false</c>.</value>
    public Boolean Show { get; set; }

    /// <summary>
    /// Gets or sets the order.
    /// </summary>
    /// <value>The order.</value>
    public int Order { get; set; }

    /// <summary>
    /// Gets or sets the image path.
    /// </summary>
    /// <value>The image path.</value>
    public String ImagePath { get; set; }

    /// <summary>
    /// Initializes a new instance of the <see cref="MenuAttribute"/> class.
    /// </summary>
    public MenuAttribute()
    {
        Name = String.Empty;
        Parent = String.Empty;
        ImagePath = String.Empty;
        Show = false;
        Order = 0;
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="MenuAttribute"/> class.
    /// </summary>
    /// <param name="menuName">Name of the menu.</param>
    public MenuAttribute(String menuName)
    {
        Name = menuName;
    }

    #region IComparable Members

    public int CompareTo(object obj)
    {
        return Order - ((MenuAttribute)obj).Order;
    }

    #endregion
}
Listing 2 – the Menu attribute
I decided I wanted to be able to specify which tables are in the root of the menu, which are children and of which table, and also whether a table is shown or not; see Listing 2. To make a table appear in the root of the menu structure, Parent must be empty and Show must be true; children must have Parent set to the parent's menu Name (if specified, or its table name otherwise) and be set to show. To read the attribute - or fall back to a default instance when a table has no MenuAttribute applied - I use a small extension method, shown in Listing 3.
/// <summary>
/// Get the attribute or a default instance of the attribute
/// if the Table attribute do not contain the attribute
/// </summary>
/// <typeparam name="T">Attribute type</typeparam>
/// <param name="table">
/// Table to search for the attribute on.
/// </param>
/// <returns>
/// The found attribute or a default
/// instance of the attribute of type T
/// </returns>
public static T GetAttributeOrDefault<T>(this MetaTable table) where T : Attribute, new()
{
    return table.Attributes.OfType<T>().DefaultIfEmpty(new T()).FirstOrDefault();
}
Listing 3 – GetAttributeOrDefault extension method.
I also threw in an Order property so the menus can be sorted in both the root and sub menus, and an ImagePath property so that you can specify an image. Name is optional and if not specified it will use the DisplayName of the table.
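To make the root/child rules concrete, here is a minimal pair of entries lifted from the Northwind metadata in Listing 8 below - Orders sits in the root of the menu, and Order Details hangs off it:

// root entry: Parent is empty and Show is true
[Menu("Orders", Show = true, Order = 1)]
partial class Order { }

// child entry: Parent matches the parent's menu Name (or DisplayName)
[Menu("Order Details", Show = true, Parent = "Orders", Order = 0)]
partial class Order_Detail { }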
The SiteMapProvider
This will go away and read the MetaModel and produce the site map from that instead of a web.sitemap file.
#region Constants
private const String ...
#endregion

/// ...
/// of the site map navigation structure.
/// </returns>
public override SiteMapNode BuildSiteMap()
{
    // need to make sure we start with a clean slate
    if (rootNode != null)
        return rootNode;

    lock (objLock)
    {
        rootNode = new SiteMapNode(this, "root");

        // get tables for the root of the menu
        var tables = from t in Model.Tables
                     where String.IsNullOrEmpty(t.GetAttributeOrDefault<MenuAttribute>().Parent)
                        && t.GetAttributeOrDefault<MenuAttribute>().Show
                     // sort first by menu order then by name.
                     orderby t.GetAttributeOrDefault<MenuAttribute>(), t.DisplayName
                     select t;

        // get external content.
        var sitemapFile = HttpContext.Current.Server.MapPath("~/" + _siteMapFile);
        if (File.Exists(sitemapFile))
        {
            var sitemap = XDocument.Load(sitemapFile);
            var elements = sitemap.Descendants().Descendants().Descendants();
            foreach (var element in elements)
            {
                var provider = element.Attributes().FirstOrDefault(a => a.Name == "provider");
                if (provider != null && !String.IsNullOrEmpty(provider.Value) && provider.Value == PROVIDER_NAME)
                {
                    foreach (var table in tables)
                    {
                        SetChildren(table, rootNode);
                    }
                }
                else
                {
                    SetXmlChildren(element, rootNode);
                }
            }
        }
        else
        {
            foreach (var table in tables)
            {
                SetChildren(table, rootNode);
            }
        }
    }

    // not sure if this is needed no real explanation
    // was given in the samples I've seen.
    HttpRuntime.Cache.Insert(CACHE_DEPENDENCY_NAME, new object());

    return rootNode;
}

private void SetXmlChildren(XElement element, SiteMapNode parentNode)
{
    var url = element.Attributes().First(a => a.Name == "url");
    var node = new SiteMapNode(this, url.Value, url.Value);
    foreach (var attribute in element.Attributes())
    {
        switch (attribute.Name.ToString())
        {
            case "description":
                node.Description = attribute.Value;
                break;
            case "resourceKey":
                node.ResourceKey = attribute.Value;
                break;
            case "roles":
                node.Roles = attribute.Value.Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
                break;
            case "title":
                node.Title = attribute.Value;
                break;
            case "siteMapFile":
            case "securityTrimmingEnabled":
            case "provider":
            case "url":
            default:
                break;
        }
    }
    AddNode(node, parentNode);

    if (element.HasElements)
    {
        foreach (var childElement in element.Descendants())
        {
            SetXmlChildren(childElement, node);
        }
    }
}

/// <summary>
/// Sets the children nodes of the current node.
/// </summary>
/// <param name="parentTable">The parent table.</param>
/// <param name="parentNode">The parent node.</param>
private void SetChildren(MetaTable parentTable, SiteMapNode parentNode)
{
    String imageUrl = String.Empty;
    var description = parentTable.GetAttribute<DescriptionAttribute>();
    var menuAttribute = parentTable.GetAttribute<MenuAttribute>();
    if (menuAttribute != null)
        imageUrl = menuAttribute.ImagePath;

    // get extra attributes, I'm just going to
    // use the ImageUrl for use in menus etc.
    NameValueCollection attributes = null;
    if (String.IsNullOrEmpty(imageUrl))
    {
        attributes = new NameValueCollection();
        attributes.Add("ImageUrl", imageUrl);
    }

    // get the title
    var menuTitle = !String.IsNullOrEmpty(menuAttribute.Name)
        ? menuAttribute.Name
        : parentTable.DisplayName;

    // get the description if attribute has
    // been applied or DisplayName if not.
    var menuDescription = description != null
        ? description.Description
        : parentTable.DisplayName;

    // note resource keys are not used I'm
    // not bothering with localization.
    var node = new SiteMapNode(
        this,
        // note Key and Url must match
        // for selected menus to show up
        parentTable.ListActionPath,
        parentTable.ListActionPath,
        menuTitle,
        menuDescription,
        null,
        attributes,
        null,
        "");

    // we can't add two nodes with same URL an
    // InvalidOperationException will be thrown if we do
    AddNode(node, parentNode);

    // get the children nodes of this node.
    var tables = from t in Model.Tables
                 where !String.IsNullOrEmpty(t.GetAttributeOrDefault<MenuAttribute>().Parent)
                    && t.GetAttributeOrDefault<MenuAttribute>().Parent == menuTitle
                    && t.GetAttributeOrDefault<MenuAttribute>().Show
                 orderby t.GetAttributeOrDefault<MenuAttribute>(), t.DisplayName
                 select t;

    // add children of current node
    foreach (var t in tables)
    {
        // call this method recursively.
        SetChildren(t, node);
    }
}

/// <summary>
/// Retrieves the root node of all the nodes that
/// are currently managed by the current provider.
/// </summary>
/// <returns>
/// A <see cref="T:System.Web.SiteMapNode"/> that
/// represents the root node of the set of nodes
/// that the current provider manages.
/// </returns>
protected override SiteMapNode GetRootNodeCore()
{
    BuildSiteMap();
    return rootNode;
}

/// <summary>
/// Removes all elements in the collections of
/// child and parent site map nodes that the
/// <see cref="T:System.Web.StaticSiteMapProvider"/>
/// tracks as part of its state.
/// </summary>
protected override void Clear()
{
    lock (objLock)
    {
        this.rootNode = null;
        base.Clear();
    }
}
Listing 4 – the SiteMapProvider
With this added, and a node in the sitemap file with its provider attribute set to "MetaDataSiteMapProvider", the standard nodes are added as usual, and when that provider node is reached (and "there can be only one") the Metadata nodes are added in its place. See Figure 1.
In the SiteMapProvider (Listing 4) the two main methods are BuildSiteMap and SetChildren; BuildSiteMap starts by building the root menu and then calls SetChildren for each menu entry in the root to create the sub menu entries (and their sub menus, as it calls itself recursively).
<?xml version="1.0" encoding="utf-8" ?>
<siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0" >
  <siteMapNode url="" title="" description="">
    <siteMapNode url="~/Default.aspx" title="Home" description="Home page" roles="*" />
    <siteMapNode provider="MetaDataSiteMapProvider" />
    <siteMapNode url="~/Admin.aspx" title="Admin" description="Site Administration" roles="*" />
  </siteMapNode>
</siteMap>
Listing 5 – the DynamicData.sitemap file
To make this work we need an entry in the web.config file, see Listing 6; this entry lives inside the <system.web> tag.
<siteMap defaultProvider="MetaDataSiteMapProvider">
  <providers>
    <add name="MetaDataSiteMapProvider"
         Description="MetaData Site Map Provider for Dynamic Data"
         type="NotAClue.Web.DynamicData.MetaDataSiteMapProvider, NotAClue.Web.DynamicData"/>
  </providers>
</siteMap>
Listing 6 – SiteMap entry in web.config.
Adding Menu and SiteMapPath controls to Site.Master
Now we need to add a Menu or TreeView control to the site.master page (I am also going to add a SiteMapPath control as a bread crumb)
<div>
    <asp:Menu ...>
        <StaticSelectedStyle BackColor="#1C5E55" />
        <StaticMenuItemStyle HorizontalPadding="5px" VerticalPadding="2px" />
        <DynamicHoverStyle BackColor="#666666" ForeColor="White" />
        <DynamicMenuStyle BackColor="#E3EAEB" />
        <DynamicSelectedStyle BackColor="#1C5E55" />
        <DynamicMenuItemStyle HorizontalPadding="5px" VerticalPadding="2px" />
        <StaticHoverStyle BackColor="#666666" ForeColor="White" />
    </asp:Menu>
    <asp:SiteMapDataSource ... />
</div>
<div>
    <asp:SiteMapPath ...>
        <PathSeparatorStyle Font-... />
        <CurrentNodeStyle ForeColor="#333333" />
        <NodeStyle Font-... />
        <RootNodeStyle Font-... />
    </asp:SiteMapPath>
</div>
Listing 7 – adding Menu and SiteMapPath controls to site.master.
In Listing 7 I've just dragged and dropped ASP.Net Menu and SiteMapPath web controls onto the page and configured the Menu by adding a new SiteMapDataSource control.
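The essential wiring is simply the Menu's DataSourceID pointing at the SiteMapDataSource (the SiteMapPath needs no data source, it picks up the default provider automatically). The control IDs and attributes below are my own illustration, not necessarily those used in the download:

<asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" ShowStartingNode="false" />
<asp:Menu ID="Menu1" runat="server" DataSourceID="SiteMapDataSource1" Orientation="Horizontal" />
<asp:SiteMapPath ID="SiteMapPath1" runat="server" />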
Then I have used the Auto Format… option to give both the Menu and SiteMapPath some styling.
Sample Metadata using Northwind
[Menu(Show = true, Parent = "Products", Order = 0)]
partial class Category { }

[Menu(Show = true, Parent = "Employees", Order = 0)]
partial class Territory { }

partial class CustomerCustomerDemo { }

partial class CustomerDemographic { }

[Menu("Customers", Show = true, Order = 0)]
partial class Customer { }

[Menu("Employees", Show = true, Order = 2)]
partial class Employee { }

partial class EmployeeTerritory { }

[Menu("Order Details", Show = true, Parent = "Orders", Order = 0)]
partial class Order_Detail { }

[Menu("Orders", Show = true, Order = 1)]
partial class Order { }

[Menu("Products", Show = true, Order = 3)]
partial class Product { }

[Menu("Regions", Show = true, Parent = "Territories", Order = 0)]
partial class Region { }

[Menu("Shippers", Show = true, Parent = "Orders", Order = 0)]
partial class Shipper { }

[Menu("Suppliers", Show = true, Parent = "Products", Order = 1)]
partial class Supplier { }
Listing 8 – sample metadata.
The Result
Figure 1 – Dynamic Data Menus and bread crumb (SiteMapPath)
Downloads
Left is the ASP.Net 3.5 SP1 and right is the .Net 4.0 version & now with the new Tabbed Menu from the new ASP.Net project template.
23 comments:
Nice article, is it possible to have this with secure roles? That means when you first go to the dynamic data website you see only one menu (the "Home" by default) and after a login you will see the different tables attached to the role. Is it possible?
Hi Michel, I'm sure it's possible, and will hopefully cover this in an article in the future
Steve :)
I was able to bring up the tabbed version, but when I clicked one of the drop-down menu items, the application couldn't find DropDownList1 and one or more other controls. I was connected to Northwind on the network.
I would like to see a version of this with rounded corners tabbed menu with the ability to style and possibly use images.
The question is does the sample work for you with Northwind local?
Steve
Hi, Steve. Do I need to create an App_Data folder and get a copy of Northwind.mdf into it?
What should my top-level directory be?
The solution appears to have three top-level directories.
I usually just mount them on a local install of SQL server, that way I don't end up with hundreds of copies of Northwind on my drive :)
the connection string server name is usually set to . which is the local server.
Steve
How Can we use this with jQuery menus?
Regards,
Nikunj Dhawan
I suppose you could, but I don't know of any jQuery menu that would work with a SiteMap data source. You could certainly use the main logic to build some xml for the jQuery menu.
Steve :)
I have a problem with the sample code DDMenu and the error was: Invalid column name 'FullName'.
Regards,
Adrian
Hi Adrian, is that with a sample as is or added into your own project, can you give me steps to repro? you can e-mail at the e-mail address at the top of the page.
Steve :D
I ended up having the same issue regarding FullName. It looks like your model is based off of a Northwind DB instance that has a computed column on Employees called FullName but I was not able to find any Northwind install script that included this field.
I ended up defining it myself and the error went away. Interestingly enough though the column doesn't appear on the List or Edit pages. I'm guessing the default behavior is not to display computed columns?
Thanks,
Matt
Hi Guys, sorry about that, it was a sample I was doing for someone else and the DB got mixed into that sample sorry.
Just refresh the model from standard Northwind and I'll fix the sample and reupload in the morning.
:(
Steve
This doesn't work. I downloaded the .net 4 tabbed version and changed the datasource and metadata to fit with my db and that's it, then it stopped working.
Hi There, you may have some other issue going on, I have added this to several other sites and it's working fine. You have probably not followed the article above and have missed something out e.g. metadata.
Steve :)
Following sitemap file is throwing an exception
can you please check.
Hi there, I don't see your sitemap but I suspect you have some elements below the first level, this version does not support that sorry.
Steve
P.S. I have a newer version in the works that does but I haven't had the time to publish it yet.
Steve could you possibly email me the newer version of this?
if you drop me an e-mail (my e-mail is in the top right corner of the site) I will try to send you a copy over Christmas.
Steve
Hi Steve,
I am a big fan of your work and all the great thing you did for DD. I'm also a big fan of DD as I think it solves a lot of problems and saves a ton of work!
Anyway I noticed in this sitemap provider that if you add a displayname for a table the meta sitemap provider will break.
like:
[IncludeInDomainService]
[Menu(Show = true, Order = 0)]
[TableName("My user")] // >> if you add a display name like this then the sitemap provider will fail
public class Users : Base
{
public string UserName { get; set; }
}
Any ideas?
Hi there, "TableName" is not the same as display name, it actually changes the table's name; you should use DisplayName. I will look into this with the next iteration of SiteMapProvider but I suspect the issue is the changed table name :)
Steve
Hi Mr Naughton,
I have an interesting situation and I think you can help me. I need to build a breadcrumb based on the data coming from a Sql stored proc. Is it possible, and if yes what's the best approach? I have .net 2.0 and VS 2005.
Thanks
The bread crumb control uses a tree structure for its data, my sitemap provider would work with it fine but it sounds like you are not using Dynamic Data?
Of course it's possible to create a bread crumb, I would suggest using a user control.
I currently use VS2010 and .Net 4.
Steve
The control is working really fine but I've manually added some extra nodes in the sitemap file (DynamicData.sitemap) and they don't appear in the main menu. Any idea why?