Dataset columns (each record below lists text, then url, dump, source, word_count, flesch_reading_ease):
text: string (454 to 608k characters)
url: string (17 to 896 characters)
dump: string (9 to 15 characters)
source: string (1 distinct value)
word_count: int64 (101 to 114k)
flesch_reading_ease: float64 (50 to 104)
We'll create a simple Blinky app and connect an LED to your Windows IoT Core device (Raspberry Pi 2 or 3, MinnowBoard Max or DragonBoard). Be aware that the GPIO APIs are only available on Windows IoT Core, so this sample cannot run on your desktop. This application is designed for a headless device. To better understand what headless mode is and how to configure your device to be headless, follow the instructions here. You can find the source code for this sample by downloading a zip of all of our samples here and navigating to the samples-develop\HelloBlinkyBackground folder. The sample code is available in either C++ or C#; however, the documentation here only details the C# variant. Make a copy of the folder on your disk and open the project from Visual Studio.

Make sure you connect the LED to your board. Go back to the basic 'Blinky' sample if you need guidance. Note that the app will not run successfully if it cannot find any available GPIO ports.

In the Visual Studio toolbar, select the architecture that matches your device: if you're building for MinnowBoard Max, select x86; if you're building for Raspberry Pi 2, Raspberry Pi 3 or the DragonBoard, select ARM. Next, in the Visual Studio toolbar, click on the Local Machine dropdown and select Remote Machine. Enter the address of your device, choose Universal (Unencrypted Protocol) authentication mode, then click Select. You can verify or modify these values by navigating to the project properties (select Properties in the Solution Explorer) and choosing the Debug tab on the left.

When everything is set up, you should be able to press F5 from Visual Studio. The Blinky app will deploy and start on the Windows IoT device, and you should see the attached LED blink.

The code for this sample is pretty simple. We use a timer, and each time the 'Tick' event is called, we flip the state of the LED. Here is how you set up the timer in C#:

using Windows.System.Threading;

BackgroundTaskDeferral _deferral;

public void Run(IBackgroundTaskInstance taskInstance)
{
    _deferral = taskInstance.GetDeferral();
    this.timer = ThreadPoolTimer.CreatePeriodicTimer(Timer_Tick, TimeSpan.FromMilliseconds(500));
    . . .
}

private void Timer_Tick(ThreadPoolTimer timer)
{
    . . .
}

And here is the GPIO initialization, which opens the pin and drives it high so the LED starts out off:

private void InitGPIO()
{
    var gpio = GpioController.GetDefault();

    if (gpio == null)
    {
        pin = null;
        return;
    }

    pin = gpio.OpenPin(LED_PIN);

    if (pin == null)
    {
        return;
    }

    pin.Write(GpioPinValue.High);
    pin.SetDriveMode(GpioPinDriveMode.Output);
}

You can modify 'Timer_Tick' to flip the LED state. To turn the LED on, simply write the value GpioPinValue.Low to the pin:

this.pin.Write(GpioPinValue.Low);

and of course, write GpioPinValue.High to turn the LED off:

this.pin.Write(GpioPinValue.High);

Remember that we connected the other end of the LED to the 3.3 Volts power supply, so we need to drive the pin low to have current flow through the LED.
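The sample text above leaves the body of 'Timer_Tick' as an exercise. For completeness, here is a hedged sketch of one way to fill it in, following the two Write calls shown above; the value field and the null check are illustrative additions and not part of the original sample:

// Assumes the 'pin' field set up by InitGPIO() above.
private GpioPinValue value = GpioPinValue.High; // High = LED off with this wiring

private void Timer_Tick(ThreadPoolTimer timer)
{
    if (pin == null)
    {
        return; // no GPIO controller was found during InitGPIO()
    }

    // Flip between High and Low on every tick; Low turns the LED on because
    // the other leg of the LED is tied to the 3.3V supply.
    value = (value == GpioPinValue.High) ? GpioPinValue.Low : GpioPinValue.High;
    pin.Write(value);
}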
https://developer.microsoft.com/en-us/windows/iot/Samples/HelloBlinkyBackground
CC-MAIN-2018-05
refinedweb
436
56.86
Up to [cvs.NetBSD.org] / src / lib / libc / gen Request diff between arbitrary revisions Default branch: MAIN Revision 1.21.6.1 / (download) - annotate - [select for diffs], Tue Oct 30 18:58:45 2012 UTC (3 years, 10 months ago) by yamt Branch: yamt-pagecache CVS Tags: yamt-pagecache-tag8 Changes since 1.21: +121 -152 lines Diff to previous 1.21 (colored) next main 1.22 (colored) sync with head Revision 1.21.8.1 / (download) - annotate - [select for diffs], Sat Jun 23 22:54:54 2012 UTC .21: +121 -152 lines Diff to previous 1.21 (colored) next main 1.22 .22 / (download) - annotate - [select for diffs], Sun Jun 3 21:42:46 2012 UTC (4 years, 3 months ago) by joerg.21: +121 -152 lines Diff to previous 1.21 (colored). Revision 1.21 / (download) - annotate - [select for diffs], Tue Mar 23 20:28:59 2010 UTC (6 years, 6 months ago) by drochner Branch: MAIN CVS Tags: yamt-pagecache-base5, yamt-pagecache-base4,.20: +2 -6 lines Diff to previous 1.20 (colored) remove some stray __weak_aliases, where the target functions were __RENAMEd due to the time_t/dev_t type changes, which caused bogus references to compat functions now a libc built with BUILDCOLD is usable Revision 1.19.2.1 / (download) - annotate - [select for diffs], Wed May 13 19:18:23 2009 UTC (7 years, 4 months ago) by jym Branch: jym-xensuspend Changes since 1.19: +16 -2 lines Diff to previous 1.19 (colored) next main 1.20 (colored) Sync with HEAD. Third (and last) commit. See Revision 1.20 / (download) - annotate - [select for diffs], Sun Apr 19 10:19:26 2009 UTC (7 years, 5 months ago) by mrg Branch: MAIN CVS Tags: matt-premerge-20091211, jym-xensuspend-nbase, jym-xensuspend-base Changes since 1.19: +16 -2 lines Diff to previous 1.19 (colored) add some code to cope with dev.db's that have 32 bit time_t's in them. shouldn't be relevant very much as dev.db should be re-created at boot, but it helped me at least twice so far... Revision 1.19 / (download) - annotate - [select for diffs], Tue Jan 20 18:20:48 2009 UTC (7 years, 8 months ago) by drochner Branch: MAIN Branch point for: jym-xensuspend Changes since 1.18: +10 -7 lines Diff to previous 1.18 (colored) Change major()/minor() to return 32-bit types again, called devmajor_t/devminor_t, as proposed on tech-kern. This avoids 64-bit arithmetics and 64-bit printf formats in parts of the kernel where it is not really useful, and helps clarity. Revision 1.18 / (download) - annotate - [select for diffs], Sun Jan 11 02:46:27 2009 UTC (7 years, 8 months ago) by christos Branch: MAIN Changes since 1.17: +9 -8 lines Diff to previous 1.17 (colored) merge christos-time_t Revision 1.17.8.2 / (download) - annotate - [select for diffs], Sat Nov 8 21:45:38 2008 UTC (7 years, 10 months ago) by christos Branch: christos-time_t Changes since 1.17.8.1: +185 -0 lines Diff to previous 1.17.8.1 (colored) to branchpoint 1.17 (colored) next main 1.18 (colored) time_t changes Revision 1.16.24.1 / (download) - annotate - [select for diffs], Sun May 18 12:30:15 2008 UTC (8 years, 4 months ago) by yamt Branch: yamt-pf42 Changes since 1.16: +2 -9 lines Diff to previous 1.16 (colored) next main 1.17 (colored) sync with head. 
Revision 1.17.8.1, Mon Apr 28 20:22:59 2008 UTC (8 years, 4 months ago) by christos Branch: christos-time_t Changes since 1.17: +0 -184 lines FILE REMOVED file devname.c was added on branch christos-time_t on 2008-11-08 21:45:38 +0000 Revision 1.17 / (download) - annotate - [select for diffs], Mon Apr 28 20:22:59.16: +2 -9 lines Diff to previous 1.16 (colored) Remove clause 3 and 4 from TNF licenses Revision 1.16 / (download) - annotate - [select for diffs], Thu Dec 16 04:33:03 2004 UTC (11 years, 9 months ago) by atat.15: +5 -5 lines Diff to previous 1.15 (colored) Put caching back on the pts major number. It's worth the code overhead not to go look it up a zillion times when running fstat or ps on a machine with a billion people logged in. fstat mostly. Revision 1.15 / (download) - annotate - [select for diffs], Thu Dec 16 04:15:19 2004 UTC (11 years, 9 months ago) by atatat Branch: MAIN Changes since 1.14: +6 -36 lines Diff to previous 1.14 (colored) Get rid of the private getptsname() function and use getdevmajor() instead. It's really much better that way, you'll see. Revision 1.14 / (download) - annotate - [select for diffs], Tue Dec 14 03:08:01 2004 UTC (11 years, 9 months ago) by atatat Branch: MAIN Changes since 1.13: +6 -4 lines Diff to previous 1.13 (colored) Properly return the constructed name for ptyfs nodes. Otherwise we accidentally return NULL on the first call and find it in the cache on all subsequent calls. Revision 1.13 / (download) - annotate - [select for diffs], Thu Nov 11 04:03:23 2004 UTC (11 years, 10 months ago) by christos Branch: MAIN Changes since 1.12: +6 -3 lines Diff to previous 1.12 (colored) More error checking. Revision 1.12 / (download) - annotate - [select for diffs], Thu Nov 11 03:22:30 2004 UTC (11 years, 10 months ago) by christos Branch: MAIN Changes since 1.11: +43 -3 lines Diff to previous 1.11 (colored) Recognize ptyfs ptys. Revision 1.11 / (download) - annotate - [select for diffs], Mon Oct 13 07:41:22 2003 UTC (12 years, 11 -37 lines Diff to previous 1.10 (colored) Move Keith Muller's code from a 4-clause to a 3-clause licence by removing the advertising clause. Diffs provided in PR 22397 by Joel Baker, confirmed to the board by Keith Muller. Revision 1.10 / (download) - annotate - [select for diffs], Thu Aug 7 16:42:47 2003 UTC (13 years, 1 month ago) by agc Branch: MAIN Changes since 1.9: +33 -3 lines Diff to previous 1.9 (colored) Move UCB-licensed code from 4-clause to 3-clause licence. Patches provided by Joel Baker in PR 22280, verified by myself. Revision 1.8.2.1 / (download) - annotate - [select for diffs], Fri Jun 23 16:17:24 2000 UTC (16 years, 3 months ago) by minoura Branch: minoura-xpg4dl Changes since 1.8: +92 -5 lines Diff to previous 1.8 (colored) next main 1.9 (colored) Sync w/ netbsd-1-5-base. Revision 1.9 / (download) - annotate - [select for diffs], Mon Jun 5 06:12:49 2000 UTC (16 years, 3.8: +92 -5 lines Diff to previous 1.8 (colored) Add a cache ala pwcache(3). Gives a small but measurable performance improvement for callers to devname(3) (eg ps(1)) under most circumstances. Revision 1.8 / (download) - annotate - [select for diffs], Sat Jan 22 22:19:09 2000 UTC (16 years, 8 months ago) by mycroft Branch: MAIN CVS Tags: minoura-xpg4dl-base Branch point for: minoura-xpg4dl Changes since 1.7: +3 -3 lines Diff to previous 1.7 (colored) Delint. Remove trailing ; from uses of __weak_alias(). The macro inserts this if needed. 
Revision 1.7 / (download) - annotate - [select for diffs], Mon Feb 2 02:41:19 1998 UTC (18 years, 7 months.6: +5 -5 lines Diff to previous 1.6 (colored) merge/update to lite-2 Revision 1.1.1.2 / (download) - annotate - [select for diffs] (vendor branch), Mon Feb 2 00:11:49 1998 UTC (18 years, 7 months ago) by perry Branch: CSRG CVS Tags: lite-2 Changes since 1.1.1.1: +3 -3 lines Diff to previous 1.1.1.1 (colored) import lite-2 Revision 1.6 / (download) - annotate - [select for diffs], Mon Jul 21 14:06:52 1997 UTC (19 18:54:19 1997 UTC (19 years, 2 months ago) by christos Branch: MAIN Changes since 1.4: +7 -4 lines Diff to previous 1.4 (colored) Use "namespace.h" Add missing stdlib.h Revision 1.4.4.2 / (download) - annotate - [select for diffs], Thu Sep 19 20:02:23 1996 UTC (20 years ago) by jtc Branch: ivory_soap2 Changes since 1.4.4.1: +6 -2 lines Diff to previous 1.4.4.1 (colored) to branchpoint 1.4 (colored) next main 1.5 (colored) snapshot namespace cleanup: gen Revision 1.4.4.1 / (download) - annotate - [select for diffs], Mon Sep 16 18:40:13 1996 UTC (20 years ago) by jtc Branch: ivory_soap2 Changes since 1.4: +3 -2 lines Diff to previous 1.4 (colored) snapshot namespace cleanup Revision 1.3.2.1 / (download) - annotate - [select for diffs], Wed Apr 26 00:17:55 1995 UTC (21 years, 5 months ago) by jtc Branch: ivory_soap Changes since 1.3: +1 -0 lines Diff to previous 1.3 (colored) next main 1.4 (colored) #include "namespace.h" where appropriate. Revision 1.4 / (download) - annotate - [select for diffs], Sat Feb 25 08:51:08 1995 UTC : +6 -0 lines Diff to previous 1.3 (colored) clean up Id's on files previously imported... Revision 1.3 / (download) - annotate - [select for diffs], Mon Dec 12 22:42:05 1994 UTC (21 years, 9 months ago) by jtc Branch: MAIN Branch point for: ivory_soap Changes since 1.2: +1 -1 lines Diff to previous 1.2 (colored) Rework indirect reference support as outlined by my recent message to the tech-userlevel mailing list. Revision 1.2 / (download) - annotate - [select for diffs], Sun Dec 11 20:43:54 1994 UTC (21 years, 9 months ago) by christos Branch: MAIN Changes since 1.1: +1 -1 lines Diff to previous 1.1 (colored) -. Revision 1.1.1.1 / (download) - annotate - [select for diffs] (vendor branch), Fri May 6 22:48:31 1994 UTC (22 years, 4-1 Changes since 1.1: +0 -0 lines Diff to previous 1.1 (colored) devname() routine Revision 1.1 / (download) - annotate - [select for diffs], Fri May 6 22:48:30 1994 UTC (22 years,.
http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/gen/devname.c
CC-MAIN-2016-40
refinedweb
1,702
76.42
Go for Rubyists

Note: Here's another guest post from our friends at the Hybrid Group.

So you're a Ruby developer and you've heard about this cool new language called Go, but you don't know where to get started. You've read the bullet points and got a little scared. Static typing? Compiling? Is this the 80's all over again?! Well, sorta! Go gives us the power of concurrent operations with a built-in GC, so we get the power of a compiled language, but the laziness of a dynamic language. Interested yet? Let's get into the basics of the language, cut out all the unnecessary exposition, and look at some code.

Hello world

Here's the canonical "Hello world" in Go.

package main

import "fmt"

func main() {
	fmt.Println("Hello world!")
}

And its ruby equivalent

puts "Hello world!"

At first glance you may think "Wow that's verbose!", but if you have a c/c++/java background, this doesn't look too out of the ordinary. Now let's dissect this program and figure out what's going on. The package keyword identifies the scope of the code you're writing to the Go environment. Since we're not writing a library, our package has to be main, so that Go can execute this code when you run your generated executable. The import statement pulls in external packages to be used in your program. The fmt package is the standard package used for formatted IO operations. Now we get to the main() function. If you've written a C program before, you know you'll need a main() function as the initial entry point for your program's execution. The main() function in Go serves the same purpose. If you're not writing a library for later use, you need a main() function in order for your Go program to actually run and do something. Finally we get to the point where our program actually does something! The line fmt.Println("Hello world!") calls the fmt package's Println function and prints Hello world! to the console. Calling an external package's functions follows the same format, package_name.function_name. That's all there is to a Go program!

Variables

Variable definitions in Go are pretty much like every other programming language ever, with the exception that they come in two flavors. There's the implicit type instantiation

s := "Hello World!"

This creates a variable s as a type string and gives it the value Hello World!. This is straightforward, but what if you want to create a variable for later use? Go has another way of defining variables.

var arr []int
for n := 0; n < 5; n++ {
	arr = append(arr, n)
}
fmt.Println(arr)

This will create an int array which is scoped outside of the for loop, so any changes to arr within the for loop persist in the scope of the function in which the for loop resides. The line arr = append(arr, n) is calling the builtin append function, which takes an array, arr, and appends n to the provided array. It then returns a new array with the contents of arr with n pushed on at the end. Easy stuff, right?

Control Structures

Go only has 3 control structures: for, if and switch. Wait a minute, only 3? Yes! The if statement is your standard fare

if true {
	fmt.Println("I'm true")
} else {
	fmt.Println("I'm false :(")
}

The for statement on the other hand is much more interesting. Instead of having all sorts of while, do, loop and each loops, Go has one for loop which allows a few different forms.
The first form is your standard "iterate until false"

for i := 0; i < 5; i++ {
	fmt.Println(i)
}

Another form is sorta like a while loop

b := true
i := 0
for b == true {
	fmt.Println("doin some stuff")
	i++
	if i > 1 {
		b = false
	}
}

And the last form is like an each_with_index do sorta thing

arr := []string{"a", "b", "c", "d", "e"}
for index, value := range arr {
	fmt.Println(index, value)
}

which in ruby would be

["a", "b", "c", "d", "e"].each_with_index do |value, index|
	puts "#{index} #{value}"
end

And last, there's the switch statement, which is pretty much exactly how you would imagine it to be

i := 1
switch i {
case 1:
	fmt.Println("its a 1!")
case 2:
	fmt.Println("its a 2!")
}

There are more variations and nuances to these control structures, so be sure to consult the Go documentation on their specifics.

Functions

Functions in Go are first class. A first class function is a function which you can define and pass to another function for execution at a later time. Let's look at a function definition.

func MyFunc(a string, b int) bool

If you come from the c/c++/java world, this function definition may look backwards to you. The variables' types are defined after the variable names, with the return type at the end of the function signature. So this function accepts two parameters, one string and one int, and returns a bool. Now what about closures? Go has got you covered there!

func() {
	fmt.Println("some function stuff")
}()

Notice the extra () at the end of the function definition; that means that the function should execute immediately. Now what if you want to pass a function and execute it at a later time? Won't that be interesting!

func ExecuteMyFunction(f func()) {
	f()
}

func main() {
	f := func() {
		fmt.Println("some function stuff")
	}
	ExecuteMyFunction(f)
}

In the above example I define a function and assign it to the variable f. I then pass it to the function ExecuteMyFunction, which expects a variable of type func(), and then, using the () on my variable, it is executed.

Structs

Go isn't an object oriented language. Instead it has the concept of a struct, and structs have functions attached to them. You then create an instance of that struct and have access to that struct's functions and variables. Here's a simple Go program with an equivalent ruby example.

package main

import "fmt"

type Beer struct {
	Brewery string
	Name    string
	Style   string
}

func (b Beer) IsTasty() bool {
	if b.Style == "Light" {
		return false
	} else {
		return true
	}
}

func (b Beer) ToString() string {
	return fmt.Sprintf("%v %v", b.Brewery, b.Name)
}

func main() {
	b := Beer{"Spaten", "Optimator", "Doppelbock"}
	if b.IsTasty() {
		fmt.Println("I'll take a", b.ToString())
	} else {
		fmt.Println("I guess I'll have water....")
	}
}

class Beer
	def initialize(brewery, name, style)
		@brewery = brewery
		@name = name
		@style = style
	end

	def tasty?
		if @style == "Light"
			false
		else
			true
		end
	end

	def to_s
		"#{@brewery} #{@name}"
	end
end

b = Beer.new("Spaten", "Optimator", "Doppelbock")
if b.tasty?
	puts "I'll take a #{b.to_s}"
else
	puts "I guess I'll have water...."
end

In the example above I define a struct with the syntax type Beer struct. This creates a struct of type Beer. Congratulations, you've just created a new type in Go! A struct definition has a list of variable names with their types, and to assign a function to a struct you simply add a (b Beer) in between the func keyword and the name of the function. Think of the b in (b Beer) as the self keyword. Any function with the (b Beer) is only accessible to an instance of the Beer struct.
You may be thinking "What about private methods and variables?" You're absolutely right! The way Go defines private functions and variables is quite easy. All you need is a lower case letter at the beginning of the variable/function name and it's now private!

type Person struct {
	Name string
	DOB  string
	ssn  string
}

func (p Person) GetSSN() string {
	return p.superSecretFunction()
}

func (p Person) superSecretFunction() string {
	return p.ssn
}

As you can see in the example above, the Person struct has the private variable ssn as well as the private function superSecretFunction. Both ssn and superSecretFunction are only available to the Person struct itself, so the user has no direct access to that information. Neat!

Enter the goroutine

Now we're getting into the real power of Go. If you've tried to program concurrently in ruby, you have undoubtedly lost a few hairs. There are good libraries such as Celluloid for concurrent programming, but wouldn't it be awesome if there were language primitives which did it all for you? Luckily Go has got you covered! Go makes it so easy to do a concurrent operation; all it requires is the go keyword in front of your function call, and you're done. Super easy.

go ReallyLongAndExpensiveOperation()

Sure that looks easy enough, but how do you share data between goroutines in a safe way? Channels, my friend; it's all about goroutines and channels. Channels come in two flavors, unbuffered and buffered. Unbuffered channels will block on writing to the channel, until the channel has been cleared. Whereas a buffered channel will queue up channel messages until you hit your defined buffer size, then it will block (there's a short buffered-channel sketch at the end of this article). In this example I will call a function which randomly sleeps for a duration, but I want to know how long my function slept for. Using an unbuffered channel, I can retrieve those sleep times just after they happen.

package main

import (
	"fmt"
	"time"
	"math/rand"
)

func ReallyLongAndExpensiveOperation(channel chan int) {
	for {
		rand.Seed(time.Now().UTC().UnixNano())
		i := rand.Intn(1000)
		time.Sleep(time.Duration(i) * time.Millisecond)
		channel <- i
	}
}

func main() {
	channel := make(chan int)
	go ReallyLongAndExpensiveOperation(channel)
	for {
		fmt.Println("I waited", <-channel, "milliseconds")
	}
}

Wrap up

As you can see, Go is a powerful language, especially for concurrent operations. If this quick overview has whetted your appetite, you should head over to the tour of go, which will go into more detail about the concepts I've outlined here. Oh, and do yourself a favor and run go fmt on your project before you commit it to github. Go has fantastic tools to format your code with the Go standard indentation and white space practices.

Share your thoughts with @engineyard on Twitter
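As referenced in the goroutine section above, here is a hedged sketch of the buffered-channel variant the article describes but does not show; the buffer size of 3 and the produce function are made up purely for illustration:

package main

import (
	"fmt"
	"time"
)

func produce(channel chan int) {
	for i := 1; i <= 5; i++ {
		channel <- i // only blocks once 3 values are sitting unread in the buffer
		fmt.Println("queued", i)
	}
	close(channel)
}

func main() {
	// A buffered channel: sends succeed immediately until the buffer holds 3 values.
	channel := make(chan int, 3)

	go produce(channel)

	time.Sleep(500 * time.Millisecond) // give the producer time to fill the buffer
	for value := range channel {
		fmt.Println("received", value)
	}
}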
https://blog.engineyard.com/2014/intro-to-go-rubyists
CC-MAIN-2015-22
refinedweb
1,690
74.49
CNNs, Part 1: An Introduction to Convolutional Neural Networks

A simple guide to what CNNs are, how they work, and how to build one from scratch in Python. This post assumes only a basic knowledge of neural networks. My introduction to Neural Networks covers everything you'll need to know, so you might want to read that first. Ready? Let's jump in.

1. Motivation

A classic use case of CNNs is to perform image classification, e.g. looking at an image of a pet and deciding whether it's a cat or a dog. It's a seemingly simple task - why not just use a normal Neural Network? Good question.

Reason 1: Images are Big

Images used for Computer Vision problems nowadays are often 224x224 or larger. Imagine building a neural network to process 224x224 color images: including the 3 color channels (RGB) in the image, that comes out to 224 x 224 x 3 = 150,528 input features! A typical hidden layer in such a network might have 1024 nodes, so we'd have to train 150,528 x 1024 = 150+ million weights for the first layer alone. Our network would be huge and nearly impossible to train. It's not like we need that many weights, either. The nice thing about images is that we know pixels are most useful in the context of their neighbors. Objects in images are made up of small, localized features, like the circular iris of an eye or the square corner of a piece of paper. Doesn't it seem wasteful for every node in the first hidden layer to look at every pixel?

Reason 2: Positions can change

If you trained a network to detect dogs, you'd want it to be able to detect a dog regardless of where it appears in the image. Imagine training a network that works well on a certain dog image, but then feeding it a slightly shifted version of the same image. The dog would not activate the same neurons, so the network would react completely differently! We'll see soon how a CNN can help us mitigate these problems.

2. Dataset

In this post, we'll tackle the "Hello, World!" of Computer Vision: the MNIST handwritten digit classification problem. It's simple: given an image, classify it as a digit. Each image in the MNIST dataset is 28x28 and contains a centered, grayscale digit. Truth be told, a normal neural network would actually work just fine for this problem. You could treat each image as a 28 x 28 = 784-dimensional vector, feed that to a 784-dim input layer, stack a few hidden layers, and finish with an output layer of 10 nodes, 1 for each digit. This would only work because the MNIST dataset contains small images that are centered, so we wouldn't run into the aforementioned issues of size or shifting. Keep in mind throughout the course of this post, however, that most real-world image classification problems aren't this easy. Enough buildup. Let's get into CNNs!

3. Convolutions

What are Convolutional Neural Networks? They're basically just neural networks that use Convolutional layers, a.k.a. Conv layers, which are based on the mathematical operation of convolution. Conv layers consist of a set of filters, which you can think of as just 2d matrices of numbers. Here's an example 3x3 filter: We can use an input image and a filter to produce an output image by convolving the filter with the input image. This consists of
- Overlaying the filter on top of the image at some location.
- Performing element-wise multiplication between the values in the filter and their corresponding values in the image.
- Summing up all the element-wise products. This sum is the output value for the destination pixel in the output image.
- Repeating for all locations.

Side Note: We (along with many CNN implementations) are technically actually using cross-correlation instead of convolution here, but they do almost the same thing. I won't go into the difference in this post because it's not that important, but feel free to look this up if you're curious.

That 4-step description was a little abstract, so let's do an example. Consider this tiny 4x4 grayscale image and this 3x3 filter: The numbers in the image represent pixel intensities, where 0 is black and 255 is white. We'll convolve the input image and the filter to produce a 2x2 output image: To start, let's overlay our filter in the top left corner of the image: Next, we perform element-wise multiplication between the overlapping image values and filter values. Here are the results, starting from the top left corner and going right, then down: Next, we sum up all the results. That's easy enough: Finally, we place our result in the destination pixel of our output image. Since our filter is overlayed in the top left corner of the input image, our destination pixel is the top left pixel of the output image: We do the same thing to generate the rest of the output image:

3.1 How is this useful?

Let's zoom out for a second and see this at a higher level. What does convolving an image with a filter do? We can start by using the example 3x3 filter we've been using, which is commonly known as the vertical Sobel filter: Here's an example of what the vertical Sobel filter does: Similarly, there's also a horizontal Sobel filter: See what's happening? Sobel filters are edge-detectors. The vertical Sobel filter detects vertical edges, and the horizontal Sobel filter detects horizontal edges. The output images are now easily interpreted: a bright pixel (one that has a high value) in the output image indicates that there's a strong edge around there in the original image. Can you see why an edge-detected image might be more useful than the raw image? Think back to our MNIST handwritten digit classification problem for a second. A CNN trained on MNIST might look for the digit 1, for example, by using an edge-detection filter and checking for two prominent vertical edges near the center of the image. In general, convolution helps us look for specific localized image features (like edges) that we can use later in the network.

3.2 Padding

Remember convolving a 4x4 input image with a 3x3 filter earlier to produce a 2x2 output image? Oftentimes, we'd prefer to have the output image be the same size as the input image. To do this, we add zeros around the image so we can overlay the filter in more places. A 3x3 filter requires 1 pixel of padding: This is called "same" padding, since the input and output have the same dimensions. Not using any padding, which is what we've been doing and will continue to do for this post, is sometimes referred to as "valid" padding.

3.3 Conv Layers

Now that we know how image convolution works and why it's useful, let's see how it's actually used in CNNs. As mentioned before, CNNs include conv layers that use a set of filters to turn input images into output images. A conv layer's primary parameter is the number of filters it has. For our MNIST CNN, we'll use a small conv layer with 8 filters as the initial layer in our network. This means it'll turn the 28x28 input image into a 26x26x8 output volume:

Reminder: The output is 26x26x8 and not 28x28x8 because we're using valid padding, which decreases the input's width and height by 2.
Each of the 8 filters in the conv layer produces a 26x26 output, so stacked together they make up a 26x26x8 volume. All of this happens because of 3 x 3 (filter size) x 8 (number of filters) = only 72 weights!

3.4 Implementing Convolution

Time to put what we've learned into code! We'll implement a conv layer's feedforward portion, which takes care of convolving filters with an input image to produce an output volume. For simplicity, we'll assume filters are always 3x3 (which is not true - 5x5 and 7x7 filters are also very common). Let's start implementing a conv layer class:

conv.py

import numpy as np

class Conv3x3:
  # A Convolution layer using 3x3 filters.

  def __init__(self, num_filters):
    self.num_filters = num_filters

    # filters is a 3d array with dimensions (num_filters, 3, 3)
    # We divide by 9 to reduce the variance of our initial values
    self.filters = np.random.randn(num_filters, 3, 3) / 9

The Conv3x3 class takes only one argument: the number of filters. In the constructor, we store the number of filters and initialize a random filters array using NumPy's randn() method.

Note: Dividing by 9 during the initialization is more important than you may think. If the initial values are too large or too small, training the network will be ineffective. To learn more, read about Xavier Initialization.

Next, the actual convolution:

conv.py

class Conv3x3:
  # ...

  def iterate_regions(self, image):
    '''
    Generates all possible 3x3 image regions using valid padding.
    - image is a 2d numpy array
    '''
    h, w = image.shape

    for i in range(h - 2):
      for j in range(w - 2):
        im_region = image[i:(i + 3), j:(j + 3)]
        yield im_region, i, j

  def forward(self, input):
    '''
    Performs a forward pass of the conv layer using the given input.
    Returns a 3d numpy array with dimensions (h, w, num_filters).
    - input is a 2d numpy array
    '''
    h, w = input.shape
    output = np.zeros((h - 2, w - 2, self.num_filters))

    for im_region, i, j in self.iterate_regions(input):
      output[i, j] = np.sum(im_region * self.filters, axis=(1, 2))

    return output

iterate_regions() is a helper generator method that yields all valid 3x3 image regions for us. This will be useful for implementing the backwards portion of this class later on. The line of code that actually performs the convolutions is the one assigning to output[i, j]. Let's break it down:
- We have im_region, a 3x3 array containing the relevant image region.
- We have self.filters, a 3d array.
- We do im_region * self.filters, which uses numpy's broadcasting feature to element-wise multiply the two arrays. The result is a 3d array with the same dimension as self.filters.
- We np.sum() the result of the previous step using axis=(1, 2), which produces a 1d array of length num_filters where each element contains the convolution result for the corresponding filter.
- We assign the result to output[i, j], which contains convolution results for pixel (i, j) in the output.

The sequence above is performed for each pixel in the output until we obtain our final output volume! Let's give our code a test run:

cnn.py

import mnist
from conv import Conv3x3

# The mnist package handles the MNIST dataset for us!
# Learn more at
train_images = mnist.train_images()
train_labels = mnist.train_labels()

conv = Conv3x3(8)
output = conv.forward(train_images[0])
print(output.shape) # (26, 26, 8)

Looks good so far.

Note: in our Conv3x3 implementation, we assume the input is a 2d numpy array for simplicity, because that's how our MNIST images are stored.
This works for us because we use it as the first layer in our network, but most CNNs have many more Conv layers. If we were building a bigger network that needed to use Conv3x3 multiple times, we'd have to make the input be a 3d numpy array.

4. Pooling

Neighboring pixels in images tend to have similar values, so conv layers will typically also produce similar values for neighboring pixels in outputs. As a result, much of the information contained in a conv layer's output is redundant. For example, if we use an edge-detecting filter and find a strong edge at a certain location, chances are that we'll also find relatively strong edges at locations 1 pixel shifted from the original one. However, these are all the same edge! We're not finding anything new. Pooling layers solve this problem. All they do is reduce the size of the input they're given by (you guessed it) pooling values together in the input. The pooling is usually done by a simple operation like max, min, or average. Here's an example of a Max Pooling layer with a pooling size of 2: To perform max pooling, we traverse the input image in 2x2 blocks (because pool size = 2) and put the max value into the output image at the corresponding pixel. That's it! Pooling divides the input's width and height by the pool size. For our MNIST CNN, we'll place a Max Pooling layer with a pool size of 2 right after our initial conv layer. The pooling layer will transform a 26x26x8 input into a 13x13x8 output:

4.1 Implementing Pooling

We'll implement a MaxPool2 class with the same methods as our conv class from the previous section:

maxpool.py

import numpy as np

class MaxPool2:
  # A Max Pooling layer using a pool size of 2.

  def iterate_regions(self, image):
    '''
    Generates non-overlapping 2x2 image regions to pool over.
    - image is a 2d numpy array
    '''
    h, w, _ = image.shape
    new_h = h // 2
    new_w = w // 2

    for i in range(new_h):
      for j in range(new_w):
        im_region = image[(i * 2):(i * 2 + 2), (j * 2):(j * 2 + 2)]
        yield im_region, i, j

  def forward(self, input):
    '''
    Performs a forward pass of the maxpool layer using the given input.
    Returns a 3d numpy array with dimensions (h / 2, w / 2, num_filters).
    - input is a 3d numpy array with dimensions (h, w, num_filters)
    '''
    h, w, num_filters = input.shape
    output = np.zeros((h // 2, w // 2, num_filters))

    for im_region, i, j in self.iterate_regions(input):
      output[i, j] = np.amax(im_region, axis=(0, 1))

    return output

This class works similarly to the Conv3x3 class we implemented previously. The critical line is again the one assigning to output[i, j]: to find the max from a given image region, we use np.amax(), numpy's array max method. We set axis=(0, 1) because we only want to maximize over the first two dimensions, height and width, and not the third, num_filters. Let's test it!

cnn.py

import mnist
from conv import Conv3x3
from maxpool import MaxPool2

# The mnist package handles the MNIST dataset for us!
# Learn more at
train_images = mnist.train_images()
train_labels = mnist.train_labels()

conv = Conv3x3(8)
pool = MaxPool2()

output = conv.forward(train_images[0])
output = pool.forward(output)
print(output.shape) # (13, 13, 8)

Our MNIST CNN is starting to come together!

5. Softmax

To complete our CNN, we need to give it the ability to actually make predictions. We'll do that by using the standard final layer for a multiclass classification problem: the Softmax layer, a fully-connected (dense) layer that uses the Softmax function as its activation.
Reminder: fully-connected layers have every node connected to every output from the previous layer. We used fully-connected layers in my intro to Neural Networks if you need a refresher.

If you haven't heard of Softmax before, read my quick introduction to Softmax before continuing.

5.1 Usage

We'll use a softmax layer with 10 nodes, one representing each digit, as the final layer in our CNN. Each node in the layer will be connected to every input. After the softmax transformation is applied, the digit represented by the node with the highest probability will be the output of the CNN!

5.2 Cross-Entropy Loss

You might have just thought to yourself, why bother transforming the outputs into probabilities? Won't the highest output value always have the highest probability? If you did, you're absolutely right. We don't actually need to use softmax to predict a digit - we could just pick the digit with the highest output from the network! What softmax really does is help us quantify how sure we are of our prediction, which is useful when training and evaluating our CNN. More specifically, using softmax lets us use cross-entropy loss, which takes into account how sure we are of each prediction. Here's how we calculate cross-entropy loss:

L = -ln(p_c)

where c is the correct class (in our case, the correct digit), p_c is the predicted probability for class c, and ln is the natural log. As always, a lower loss is better. For example, in the best case we'd have p_c = 1 and L = -ln(1) = 0. In a more realistic case, we might have a p_c somewhat below 1, giving a small positive loss. We'll be seeing cross-entropy loss again later on in this post, so keep it in mind!

5.3 Implementing Softmax

You know the drill by now - let's implement a Softmax layer class:

softmax.py

import numpy as np

class Softmax:
  # A standard fully-connected layer with softmax activation.

  def __init__(self, input_len, nodes):
    # We divide by input_len to reduce the variance of our initial values
    self.weights = np.random.randn(input_len, nodes) / input_len
    self.biases = np.zeros(nodes)

  def forward(self, input):
    '''
    Performs a forward pass of the softmax layer using the given input.
    Returns a 1d numpy array containing the respective probability values.
    - input can be any array with any dimensions.
    '''
    input = input.flatten()

    input_len, nodes = self.weights.shape

    totals = np.dot(input, self.weights) + self.biases
    exp = np.exp(totals)
    return exp / np.sum(exp, axis=0)

There's nothing too complicated here. A few highlights:
- We flatten() the input to make it easier to work with, since we no longer need its shape.
- np.dot() multiplies input and self.weights element-wise and then sums the results.
- np.exp() calculates the exponentials used for Softmax.

We've now completed the entire forward pass of our CNN! Putting it together:

cnn.py

import mnist
import numpy as np
from conv import Conv3x3
from maxpool import MaxPool2
from softmax import Softmax

# We only use the first 1k testing examples (out of 10k total)
# in the interest of time. Feel free to change this if you want.
test_images = mnist.test_images()[:1000]
test_labels = mnist.test_labels()[:1000]

conv = Conv3x3(8)                  # 28x28x1 -> 26x26x8
pool = MaxPool2()                  # 26x26x8 -> 13x13x8
softmax = Softmax(13 * 13 * 8, 10) # 13x13x8 -> 10

def forward(image, label):
  '''
  Completes a forward pass of the CNN and calculates the accuracy and
  cross-entropy loss.
  - image is a 2d numpy array
  - label is a digit
  '''
  # We transform the image from [0, 255] to [-0.5, 0.5] to make it easier
  # to work with. This is standard practice.
  out = conv.forward((image / 255) - 0.5)
  out = pool.forward(out)
  out = softmax.forward(out)

  # Calculate cross-entropy loss and accuracy. np.log() is the natural log.
  loss = -np.log(out[label])
  acc = 1 if np.argmax(out) == label else 0

  return out, loss, acc

print('MNIST CNN initialized!')

loss = 0
num_correct = 0
for i, (im, label) in enumerate(zip(test_images, test_labels)):
  # Do a forward pass.
  _, l, acc = forward(im, label)
  loss += l
  num_correct += acc

  # Print stats every 100 steps.
  if i % 100 == 99:
    print(
      '[Step %d] Past 100 steps: Average Loss %.3f | Accuracy: %d%%' %
      (i + 1, loss / 100, num_correct)
    )
    loss = 0
    num_correct = 0

Running cnn.py gives us output similar to this:

MNIST CNN initialized!
[Step 100] Past 100 steps: Average Loss 2.302 | Accuracy: 11%
[Step 200] Past 100 steps: Average Loss 2.302 | Accuracy: 8%
[Step 300] Past 100 steps: Average Loss 2.302 | Accuracy: 3%
[Step 400] Past 100 steps: Average Loss 2.302 | Accuracy: 12%

This makes sense: with random weight initialization, you'd expect the CNN to be only as good as random guessing. Random guessing would yield 10% accuracy (since there are 10 classes) and a cross-entropy loss of -ln(0.1) ≈ 2.302, which is what we get!

Want to try or tinker with this code yourself? Run this CNN in your browser. It's also available on Github.

6. Conclusion

That's the end of this introduction to CNNs! In this post, we
- Motivated why CNNs might be more useful for certain problems, like image classification.
- Introduced the MNIST handwritten digit dataset.
- Learned about Conv layers, which convolve filters with images to produce more useful outputs.
- Talked about Pooling layers, which can help prune everything but the most useful features.
- Implemented a Softmax layer so we could use cross-entropy loss.

There's still much more that we haven't covered yet, such as how to actually train a CNN. Part 2 of this CNN series does a deep-dive on training a CNN, including deriving gradients and implementing backprop. Alternatively, you can also learn to implement your own CNN with Keras, a deep learning library for Python, or read the rest of my Neural Networks from Scratch series. If you're eager to see a trained CNN in action: this example Keras CNN trained on MNIST achieves 99.25% accuracy. CNNs are powerful!
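As a quick numerical check of the cross-entropy discussion in section 5.2, here is a short, self-contained NumPy sketch (independent of the classes above; the label 3 is arbitrary) that reproduces the roughly 2.302 loss we expect from an untrained, randomly guessing 10-class classifier:

import numpy as np

# An untrained network produces roughly uniform probabilities over the 10 digits.
uniform_probs = np.full(10, 0.1)

# Cross-entropy loss only looks at the probability assigned to the correct class.
correct_class = 3  # arbitrary label, chosen for illustration
loss = -np.log(uniform_probs[correct_class])
print(loss)  # ~2.3026, matching the "Average Loss 2.302" lines above

# A softmax over equal raw scores gives exactly this uniform distribution.
totals = np.zeros(10)
probs = np.exp(totals) / np.sum(np.exp(totals))
print(probs)                           # [0.1, 0.1, ..., 0.1]
print(-np.log(probs[correct_class]))   # same ~2.3026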
https://victorzhou.com/blog/intro-to-cnns-part-1/
CC-MAIN-2020-24
refinedweb
3,465
65.12
On 01/08/2014 12:39 PM, Joseph S. Myers wrote:
> On Wed, 8 Jan 2014, Richard Henderson wrote:
>
>> diff --git a/libgcc/soft-fp/soft-fp.h b/libgcc/soft-fp/soft-fp.h
>> index 696fc86..b54b1ed 100644
>> --- a/libgcc/soft-fp/soft-fp.h
>> +++ b/libgcc/soft-fp/soft-fp.h
>> @@ -237,6 +237,11 @@ typedef int DItype __attribute__ ((mode (DI)));
>>  typedef unsigned int UQItype __attribute__ ((mode (QI)));
>>  typedef unsigned int USItype __attribute__ ((mode (SI)));
>>  typedef unsigned int UDItype __attribute__ ((mode (DI)));
>> +#if _FP_W_TYPE_SIZE == 64
>> +typedef int TItype __attribute__ ((mode (TI)));
>> +typedef unsigned int UTItype __attribute__ ((mode (TI)));
>> +#endif
>
> This isn't the right conditional. _FP_W_TYPE_SIZE is ultimately an
> optimization choice and need not be related to whether any TImode
> functions are being defined using soft-fp, or whether TImode is supported
> at all. I think the most you can do is have sfp-machine.h define a macro
> to say that TImode should be supported in soft-fp, rather than actually
> defining the types itself.

The documentation for longlong.h says we must have a double-word type defined. Given how easy it is to support a double-word type...

> (If someone were to use soft-fp on hppa64, then they might well use
> _FP_W_TYPE_SIZE == 64, but hppa64 doesn't support TImode.)

... I can't imagine that this is anything but a bug. Not that anyone seems to be doing any hppa work at all these past years.

r~
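To make the alternative Joseph describes concrete, a hedged sketch of that arrangement could look like the following; the macro name _FP_TI_SUPPORT is hypothetical and appears in neither message:

/* In a target's sfp-machine.h (hypothetical feature macro name): */
#define _FP_TI_SUPPORT 1

/* In soft-fp.h, keyed off the feature macro rather than _FP_W_TYPE_SIZE: */
#ifdef _FP_TI_SUPPORT
typedef int TItype __attribute__ ((mode (TI)));
typedef unsigned int UTItype __attribute__ ((mode (TI)));
#endif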
https://gcc.gnu.org/pipermail/gcc-patches/2014-January/380238.html
CC-MAIN-2021-43
refinedweb
242
56.86
RISC OS Conversion log
======================

mkversion.c
~~~~~~~~~~~
The RISC OS command-line does not allow the direct creation of the version.h file in the proper manner. To remedy this in such a way that the version header is made at compile time, I wrote this small program. It is fully portable, so should work quite happily for any other platform that might need it.

msg3states.c
~~~~~~~~~~~~
Needed getopt.c from the port folder, then compiled and worked fine.

tiff.h
~~~~~~
====1====
The symbol _MIPS_SZLONG, if not defined, causes a compiler error. Fixed by ensuring it does exist. This looks to me like this wouldn't be an Acorn-specific problem. The new code fragment is as follows:

#ifndef _MIPS_SZLONG
#define _MIPS_SZLONG 32
#endif
#if defined(__alpha) || _MIPS_SZLONG == 64

tiffcomp.h
~~~~~~~~~~
====1====

#if !defined(__MWERKS__) && !defined(THINK_C)
#include
#endif

Acorn also doesn't have this header so:

#if !defined(__MWERKS__) && !defined(THINK_C) && !defined(__acorn)
#include
#endif

====2====

#ifdef VMS
#include
#include
#else
#include <fcntl.h>
#endif

This seems to indicate that fcntl.h is included on all systems except VMS. Odd, because I've never heard of it before. Sure it's in the ANSI definition? Anyway, following change:

#ifdef VMS
#include
#include
#else
#ifndef __acorn
#include <fcntl.h>
#endif
#endif

This will probably change when I find out what it wants from fcntl.h!

====3====

#if defined(__MWERKS__) || defined(THINK_C) || defined(applec)
#include
#define BSDTYPES
#endif

Added RISC OS to above thus:

#if defined(__MWERKS__) || defined(THINK_C) || defined(applec) || defined(__acorn)
#include
#define BSDTYPES
#endif

====4====

/*
 * The library uses the ANSI C/POSIX SEEK_*
 * definitions that should be defined in unistd.h
 * (except on VMS where they are in stdio.h and
 * there is no unistd.h).
 */
#ifndef SEEK_SET
#if !defined(VMS) && !defined (applec) && !defined(THINK_C) && !defined(__MWERKS__)
#include <unistd.h>
#endif

RISC OS is like VMS and the Mac in this regard. So changed to:

/*
 * The library uses the ANSI C/POSIX SEEK_*
 * definitions that should be defined in unistd.h
 * (except on VMS or the Mac or RISC OS, where they are in stdio.h and
 * there is no unistd.h).
 */
#ifndef SEEK_SET
#if !defined(VMS) && !defined (applec) && !defined(THINK_C) && !defined(__MWERKS__) && !defined(__acorn)
#include <unistd.h>
#endif
#endif

====5====
NB: HAVE_IEEEFP is defined in tiffconf.h, not tiffcomp.h as mentioned in libtiff.README. (Note written on original port from 3.4beta004.) Acorn C/C++ claims to accord with IEEE 754, so no change (yet) to tiffconf.h.

====6====
Unsure about whether this compiler supports inline functions. Will leave it on for the time being and see if it works! (Likely if everything else does.) ... Seems to be OK ...

====7====
Added to the end:

/*
 * osfcn.h is part of C++Lib on Acorn C/C++, and as such can't be used
 * on C alone. For that reason, the relevant functions have been
 * implemented by myself in tif_acorn.c, and the elements from the header
 * included here.
 */
#ifdef __acorn
#ifdef __cplusplus
#include <osfcn.h>
#else
#include "kernel.h"
#define O_RDONLY 0
#define O_WRONLY 1
#define O_RDWR   2
#define O_APPEND 8
#define O_CREAT  0x200
#define O_TRUNC  0x400
typedef long off_t;
extern int open(const char *name, int flags, int mode);
extern int close(int fd);
extern int write(int fd, const char *buf, int nbytes);
extern int read(int fd, char *buf, int nbytes);
extern off_t lseek(int fd, off_t offset, int whence);
#endif
#endif

===============================================================================

tif_acorn.c
~~~~~~~~~~~
Created file tif_acorn.c, copied initially from tif_unix.c. Documented internally where necessary. Note that I have implemented the low-level file-handling functions normally found in osfcn.h in here, and put the header info at the bottom of tiffcomp.h. This is further documented from a RISC OS perspective inside the file.

===============================================================================
http://read.pudn.com/downloads19/sourcecode/comm/fax/65056/tiff-3.7.1/contrib/acorn/convert__.htm
crawl-002
refinedweb
617
68.36
Getting Started With ReGraph

ReGraph is a React toolkit for developers to add powerful, interactive graph data visualizations to their applications quickly and easily. To help you get started with ReGraph, this step-by-step tutorial covers everything you need to know. Once we've created our visualization in a React app, we'll load an example network of suspected terrorists and show how easy it is to apply the key analysis techniques your users need to uncover threats. Before the tutorial, let's learn a little more about ReGraph.

ReGraph: How it works

ReGraph contains two React components, a chart and a time bar. Both components are designed from the ground up to fit into modern React environments. The fully data-driven approach allows for modern, responsive, and declarative visualization of your data. Powered by WebGL, ReGraph offers fast, reliable performance even when visualizing large, complex networks. And like React, it works with all major browsers and devices. ReGraph comes loaded with high-performance graph analysis functions and other features to help analysts discover critical insights more quickly. They can take advantage of social network analysis measures, advanced map-based analysis, automatic layouts, grouping and combining, and much more. We'll look at a couple of these in more detail, but first, let's get ReGraph working in an app.

If you don't have a React project set up, you can bootstrap one in seconds with create-react-app:

npx create-react-app my-regraph-app

Next, download ReGraph. If you haven't already, join the EAP. You'll get full access to the ReGraph SDK site containing the latest ReGraph package, detailed developer tutorials, interactive demos, and a fully-documented API. Add ReGraph to your project by installing it with a package manager. We'll use npm:

npm install ../path/to/regraph-0.5.0.tgz

To access ReGraph from your app, simply import it alongside React:

import React from 'react';
import { Chart } from 'regraph';

You can then render the Chart in JSX. To create a chart and load a dummy item, use:

<Chart items={{ node1: { color: '#bbdefb', label: { text: 'Welcome to ReGraph!' } } }}/>

And that's it: ReGraph is running in your application! ReGraph works with any data repository - databases, web services, CSV files, etc. All you have to do is convert the data into the simple JavaScript format ReGraph expects. Here's what our converted network of suspected terrorists looks like:

// A node with id N8
N8: {
  color: '#ff867c',
  label: {
    text: 'Mohammed Ibrahim Makkawi',
  },
  data: {
    country: 'Afghanistan',
  }
},

// A link between N8 and N99
'N8/p/N99': {
  id1: 'N8',
  id2: 'N99',
  width: 5,
  color: '#f3e5f5'
}

You'll find descriptions of every supported prop, event, and style option on the ReGraph SDK site. Next, we pass this object into the items prop of our chart and ReGraph automatically draws it for us. In a React app, we usually load our items from the app state or props. We'll read from state in this example, which means we can re-render whenever the items change:

async componentDidMount() {
  const data = await loadData();
  const items = convertData(data);
  this.setState({ items });
}

render() {
  const { items } = this.state;
  return <Chart items={items} />;
}

The items prop is fully reactive. When you pass a new object into it, ReGraph looks for changes since its last render and updates the chart if necessary. New items slide into view, removed items fade out, and color and position changes transition with smooth animation.

That's it! We've got our chart working and our data loaded.
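The convertData() call above is left undefined in the tutorial. Here is a hedged sketch of what such a helper might look like for records with id, name, country, and links fields; that input record shape is an assumption, and only the output items format comes from the example shown above:

// Hypothetical input: [{ id: 'N8', name: '...', country: '...', links: ['N99'] }, ...]
function convertData(records) {
  const items = {};
  records.forEach((record) => {
    // One node per record, keyed by its id.
    items[record.id] = {
      color: '#bbdefb',
      label: { text: record.name },
      data: { country: record.country },
    };
    // One link per connection, keyed the same way as the example above.
    (record.links || []).forEach((otherId) => {
      items[`${record.id}/p/${otherId}`] = {
        id1: record.id,
        id2: otherId,
        width: 1,
        color: '#f3e5f5',
      };
    });
  });
  return items;
}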
Now let's see how we can make sense of our chart.

Make sense of your graph visualizations

Right now, it's hard to get useful insight from our complex network of connections, but ReGraph comes with a range of features to help you understand the data. Let's focus on two of them: combos and SNA measures.

Reducing clutter with combos

In very busy charts with lots of nodes and links, it's hard to differentiate between what's important and what's not. The smart way to reduce the number of items is to group together nodes with similar properties. We call these combined nodes combos. For example, our dataset contains the country of origin for each terrorist suspect. Grouping suspects by country gives us a beautifully clean high-level view of the data: Notice how ReGraph automatically combines multiple links between nodes into a single link between combos. Creating combos in ReGraph is really straightforward. You just set the combine prop to tell ReGraph which data properties to use for grouping. Setting the level property will enable or disable combos:

<Chart items={items} combine={{ properties: ['country'], level: 1 }} />

We can still drill down into the detail when we need it by "opening" each combo to show what's inside. If there's a specific node worth investigating further, we can easily highlight the nodes and combos it's connected to.

Sizing nodes by SNA centrality

Graph theory provides a number of ways to assess the importance of each node, which ReGraph exposes as a number of social network analysis (SNA) functions. Analyzing social connections in any network can reveal important information about information flow, hidden subnetworks, dependencies, and influential nodes. An effective way to make highly-connected nodes stand out is by making them larger. Here, we've counted how many connections each terrorist suspect has and sized them accordingly: ReGraph supports other powerful SNA centrality measures too, including betweenness to find the 'gatekeepers' in a network, and closeness to identify how easily a node can reach the rest of the network. All these measures are available through function calls with simple APIs - very similar to the example above - so experimenting with them is easy. The ReGraph SDK also features interactive examples of centrality measures to show how they reveal different insights from your data.

Thanks for reading. If you liked this post, share it with all of your programming buddies!

Further reading
☞ How to Build First React Website
☞ The Complete React Native and Redux Course
☞ React Native - The Practical Guide
☞ React Native: Advanced Concepts
☞ Understanding TypeScript
☞ Typescript Masterclass & FREE E-Book
☞ React - The Complete Guide (incl Hooks, React Router, Redux)

This post was originally published here.
https://morioh.com/p/2b664e629311
CC-MAIN-2020-40
refinedweb
1,044
52.6
A customer wanted some information about process IDs:

I'm writing some code that depends on process IDs and I'd like to better understand the problem of process ID reuse. When can PIDs be reused? Does it happen when the process handle becomes signaled (but before the zombie object is removed from the system) or does it happen only after the last handle to the process is released (and the process object is removed from the system)? If it's the former, will OpenProcess() succeed for a zombie process (i.e. one that has been terminated, but not yet removed from the system)?

That doesn't explain when the id becomes available for reuse. How long does the process have to be terminated before a new process can reuse the same id? If an id is reused immediately, then other processes don't know the process has terminated. Maybe it (the id) has terminated twice.

Like he said, the pid can be reused when all handles to that process are closed. That's the point. If you are tracking it, it won't go away.

Or we could just use the "What if this was true" approach. The statement to be proven is "process ids are reused for zombie processes". Let's assume that it is true. Now imagine the scenario that another process wants to know the exit code by looking up the process id. Oops, not possible, you'd get the wrong process.

@missing: When the THREADS go away, there can still be handles. When the handles go away, the process goes away and then the process ID can be reused. At least that's my understanding.

@Henke37: Yes, you'd get the wrong process. That's a risk you run by using PIDs instead of handles. If you don't already own a handle to the process, the last existing handle could be closed, the zombie could be reaped, and the PID could be reused. Preventing PIDs of zombies from being reused doesn't actually help much.

I think it's clear that a pid becomes eligible for re-use when the process it previously identified ceases to exist. But what's still not clear is precisely when a process ceases to exist. We know it's not when the process terminates — the process can continue to exist afterward if there are still handles open on that process. During that time, it's a "zombie process." But once all the handles are closed, you only say the kernel knows the process object CAN be destroyed. Does that mean the kernel destroys it immediately, or can the process object continue to exist even after the last handle is closed? And if it can continue to exist, can somebody call OpenProcess on that pid and get a handle to that not-yet-destroyed process, even though there were no other handles open at the time? However, for an application developer, I don't see how it really matters. Suppose you're writing application A and it has a handle to process B. All that's important is to know that as long as A has an open handle, the pid will continue to identify B. Once A closes its last handle to B, it should assume that the pid is invalid and stop using it. It should not assume that B continued running afterward, nor that any other program C still has a handle to B.

Just to add my 2 cents worth – this seems to be a case of "just like everything else". While you've got an open handle to file FOO.BAR, it will continue to be the same FOO.BAR. If you close your open handle and open a new handle based on the file name, it may not be the same file. If that matters to you, don't do it :-)

The question is from the category "if you're asking this, you're doing it wrong".
It's the same as asking "when does freed memory become reused" or "how to check if a memory address is valid". I'm wondering – was this really necessary? Why the duality – PID and handle, which identify the same thing? Couldn't we have just one unique identifier, like a GUID? This is one of the few places where Windows actually did it better than UNIX. Under UNIX there's no way to get a handle to a process, so there is no way, other than being the parent, to prevent the pid from being recycled on you. Vilx: Handles are not unique identifiers, while PIDs are. Think of it this way: PID:process::filename:file. Processes need both PIDs and handles just as much as files need names and handles. Since a process doesn't otherwise have a unique name, it gets assigned a number, essentially allowing you to talk about a process that you don't have a handle to. Perhaps you are wondering why you even need handles to processes at all (UNIX doesn't). UNIX has only two permissions on processes (if you own the process or you're root, you can do anything to the process, otherwise you can do nothing to it), meaning that for anything you want to do to a process (debug, signal, renice, etc.) it is sufficient to just pass the PID. Windows has a dozen different permissions (msdn.microsoft.com/…/ms684880.aspx) and handles are used to manage those. In this respect processes are just like any other object on the system (threads, mutexes, files, registry keys, etc.). The big difference is their namespace (PIDs). Just think about a process as any other kernel object (like a mutex, event, etc.), and the PID in this analogy is the process object's "name". While there are any open handles to the existing object, you may call Open*** (OpenMutex, OpenEvent, OpenProcess) to get another handle to it. Once all handles are closed, Open*** will fail. But there are two differences in name lifetime between processes and other objects: 1) A process holds a reference to itself while it is not terminated, so it is possible to open a 'running' process even when there are no external handles on it yet. 2) If, after the process has terminated, there still remain any references (handles) to any of its (already dead) threads, it is possible to 'open' the process even if there are no other open handles to it. Exactly when the process id will be released is a complicated question, because thread dereferencing is performed asynchronously by a background 'reaper' thread. So on one side, when a thread terminates it pushes itself onto a 'reap' queue that is processed asynchronously. On the other side, when you call the 'final' CloseHandle related to an already-exited process, it will dereference and destroy the process object synchronously if all its threads have already been reaped by that moment. So the practical conclusion: you may rely on the process ID not being reused while the process is alive and/or while there are any open handles (in other processes) referencing this process or any of its threads. But you cannot be sure when the PID will be reused after you have closed everything related to the process. 
PS: BTW, the story about handle counts is even more complicated, because every kernel object actually has two reference counters, a 'handle' count and a 'pointer' count; the second is always the same as or greater than the first. A kernel object is 'deleted' when its pointer count reaches zero, while the object's name is removed from the namespace when its handle count reaches zero. Actually I don't know if this same thinking (PID == "name") applies to process/thread handles; but since an object's reference counter may be modified directly only from kernel mode, that is out of this article's scope, though interesting for me too. FAIL Hmmm… this leads to an interesting question: Is there any way to open a process and know that it's the process you were trying to open (other than to arrange to have created the process yourself, I suppose)? It would seem there's an inherent race condition in finding the process ID you want to open and then calling OpenProcess. Ray Trent: Yes, there's an inherent race condition, and it happens everywhere. For example, let's say a user tells a program to open file FOO.TXT; you have no way of knowing if FOO.TXT has been deleted or renamed and had a new file of the same name placed in its directory between the time the user chooses it and you actually open the file. It is only once you open the file that you can control what happens to it. Of course, this race condition isn't limited to computers. I could give you my phone number, and by the time you call me I could have closed my account with the phone company and they could have assigned my number to somebody else.
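To make the thread's practical advice concrete (keep a handle open if you care about the PID staying valid), here is a minimal Win32 sketch in C. It is an illustration with minimal error handling, not code from the original post.

#include <windows.h>
#include <stdio.h>

/* Open the process up front and keep the handle. While this handle stays
   open, the kernel keeps the process object (and therefore its PID) alive,
   even after the process itself has exited. */
int watch_pid(DWORD pid)
{
    HANDLE h = OpenProcess(SYNCHRONIZE | PROCESS_QUERY_INFORMATION, FALSE, pid);
    if (h == NULL) {
        printf("OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    WaitForSingleObject(h, INFINITE);    /* returns when the process exits */

    DWORD code = 0;
    if (GetExitCodeProcess(h, &code))    /* still valid: we hold a handle */
        printf("process %lu exited with code %lu\n", pid, code);

    CloseHandle(h);                      /* only after this can the PID be recycled */
    return 0;
}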
https://blogs.msdn.microsoft.com/oldnewthing/20110107-00/?p=11803/
CC-MAIN-2017-43
refinedweb
1,463
70.63
Swing Application Framework is back again. Existing code The "singleton problem". Design of the View class class. Lack of flexible menu support. The ideal framework It is small but very flexible, any part of its functionality can be easily overridden. For example, if you don't like the implementation of LocalStorage, it is easy to plug in your own implementation. It is free from all mentioned problem and knows how an application should look like on a particular OS. The question of the day I mentioned only a few problems with the current SAF to start a discussion. What do you think about the problems I mentioned and what is your list of features that SAF must have? note: don't forget that SAF is supposed to be a small framework. I am looking forward for your comments It won't take a half a year for the next blog, I promise. alexp - Login or register to post comments - Printer-friendly version - alexfromsun's blog - 10354 reads by winnall - 2009-07-21 03:53I'm glad to see that platform dependencies are now on the radar for SAF, even if it is only the Mac OS X menu problem at the moment. In my view the general solution to platform dependency is to isolate it behind a series of factory interfaces and make the factories extensible (by plugging in platform-dependent JARs as needed). I don't think you can expect developers to do things like "if (isMac) {} else {}" because non-Mac developers won't be aware that they have to do this. The factory should do it for you. The same applies to other platforms too (Gnome and KDE have slightly differing menu conventions and both are different from Windows and Mac). Most developers are not fully aware of all the conventions on their own platforms, let alone all the other platforms out there. And who wants programs littered with "if (isMac) {} else if (isLinux) {} else ... {}" statements anyway? It's a debugger's or maintainer's nightmare :-) A further advantage of of abstracting the platform dependencies into factory interfaces is that the implementation for each platform can be devolved into separate projects, which will avoid further delay to SAF and allow specialist communities to concentrate on the appropriate implementation for their platform. This would allow SAF to concentrate on its "core business". I use a nascent framework for my own development which works along these lines. If anyone is interested, I'll share the details. by fritzthecat - 2009-04-08 12:37I am using the current state of AppFramework in a Swing project and I like it, nevertheless it makes some problems. (1) I would like to be able to create my own type of Action, ApplicationAction is good but needs more in some cases; in other words, an overridable factory method (generally more framework quality) is needed (2) I have severe problems with test-coverage, because class Application seems to be not mockable, and instantiating it brings up a GraphicsEnvironment and I get a HeadlessException Best would be to provide all parts of AppFramework in separate packages, so that you can use it piece by piece, independently of each other. Mind that the "Commands" project could also be a part of this. by fan_42 - 2009-03-27 03:56One thing I noticed by using SAF: How do you change on the fly the language of an application? As I live in Switzerland, a small country with 4 national languages, it is a common needed feature. I always misses the possibility to change the application language from German to French and back. It would be nice to have this too. 
by reusr1 - 2009-03-22 09:35I would really like to see some sort of binding to model properties and actions in the framework. I have been toying around with such an approach by naming all UI elements (also great for testing) and then binding the UI elements through an EL to the model. The naming of the UI elements also helps with automated testing. by fan_42 - 2009-03-16 01:40I agree with janaudy I can leave without a docking framework, there some that you can use as is, but a plugin mechanisem would be nice. by haraldk - 2009-03-12 06:01Alex, I think it would really help if you guys joined in on the discussion on the appframework mailing list on a semi-daily basis. Your absence there is a big concern... Apart from that focus on the basics, don't try to solve everything at once. The current API is a very good start. -- Harald K by sdoeweling - 2009-03-10 15:52Well, I have not delved too deep into SAF, but I tend to agree with karsten, especially on 1 and 3. Best, S. by arittner - 2009-03-10 01:39Is it possible to refactor the view classes to introduce interfaces? I need some different Views for the NetBeans platform (like DocumentView, DockingView, OptionPanelView, PaletteView). But the implementation of View hurts me (e.g. final methods). The SAF should be divided into SPI and implementation. br, josh. NBDT by karsten - 2009-03-05 12:461) Focus on the topics we've already agreed to be the core of the JSR 296: life-cycle, Actions, resources, background tasks. That's difficult enough 2) Work with the expert group. After this long pause, you may need to re-setup the expert group. 3) Fix the ActionMap bug, because the fix requires an API change that will affect almost every framework user. 4) Work towards an early draft according to the JSR process. 5) Exclude views from the framework, as explained in the appframework mailing list. by dsargrad - 2009-03-04 13:11Hello, I've started using the Swing Application Framework within a netbeans environment. I've had a lot of success with it. Currently I use the SingleFrameApplication and launch my application either through Java Web Start, or as a desktop application. I need to refactor my application to support a third entry point using an Applet. My current application structure is typical: class MainApp extends SingleFrameApplication { public static void main(String[] args) { launch(MainApp.class, args); } } Within my startup method I instantiate a FrameView of the form: public class MainView extends FrameView { } My simple thought is that I create a new class, perhaps of the following form: MainApplet extends JApplet { } and that this would somehow invoke MainApp, and everything else would "stay the same". I'm looking for a simple example to begin to understand how to launch my application either through the standard "main" entry point, or alternatively, as an Applet. So far I have not found an example. Can someone please point me to such an example? I apologize if this is the wrong blog to post this request to. Please let me know if there is a better forum for this question. Thanks in advance. by janaudy - 2009-03-04 10:57- Do not include in core jdk - Have a nice docking framework - Have the notion of plugins by agoubard - 2009-03-04 09:06Hi Alex (and the other Swing developers), Welcome back to SAF! Here is my wish list for the SAF: * I think it would be nice to have it or part of it included in Java 7. 
So if there are still a lot of discussions, the JCP should agree on the part of the SAF to be included in Java 7 and have this part easily extendable to be able to have libraries for the framework or extra features in upcoming release. * Here is my old wishing list for the SAF: (maybe point 2 is not relevant as the View class handles it) * SessionStorage an LocalStorage Here I don't understand why the Preference API (java.util.prefs) is not used instead. If the default implementation doesn't store at the desired location, provide a PreferencesFactory that does. It would also allow developer to provide their own PreferencesFactory in case they want something else (e.g. AppletPreferencesFactory). Anthony by jfpoilpret - 2009-03-04 08:41@Thomas Kuenneth I second that. JSR-296 fixes the scope of SAF. Several comments to this post are going far beyond that initial scope. People should refrain from adding new points to the official scope, or that will definitely bury this JSR (maybe this is what Sun wants?) For the time being, it would be good if Alex could work on the numerous bugs reported in SAF issues list, rather than asking everyone what they would like to see in SAF. When most critical bugs/RFE get fixed then we can discuss and see if other features would be useful (provided they are still in scope of course). Just my 2 cents Jean-Francois by tommi_kuenneth - 2009-03-04 07:35I always felt that the scope of SAF was well-choosen, as it covers a bunch of key issues all Swing developers face and concentrates on them. My humble wish is to further follow this path. Though it is true that Swing lacks some GUI components, we should not bloat the idea of a basic application framework. Therefor, when asking for improvements we have to consider if such improvements really touch SAF or if they are improvements to other areas of Swing. Still, several companions to SAF are desperately missed, for example JSR-295 oder language-level support for properties, but again, these are not part of an application framework. So, if a tweaked, finetuned and bugfixed version of SAF is part of Java 7, that would be a big step forward for client Java. Regards Thomas Kuenneth by surikov - 2009-03-04 03:31 aekold - 2009-03-04 02:05Hi Alexander! It's a good news!! After some experiments with Qt Jambi I have some suggestions: - There are some params in @Action, may be it would be better to put all possible params inside? So you will use same action from menus and buttons in different places with same icon and text. Here are some of my experiments (btw inspired by appframework):...... - Initialize some resource bundle and create method like tr(String) that will return localised string. - Qt has a status bar and possibility to show LONG_DESCRIPTION in status bar, and it helps. - May be add some icon conventions? Like add String icon to @Action, and it will load LARGE_ICON from resources/large folder and SMALL_ICON from resources/small folder. Also add possibility to specify icons for platform, like resources/kde/small, resources/win/small and so on, so application will have different icons on different platforms? by grandinj - 2009-03-04 00:36On the View problem, given that you have only have 2 sub-cases, why not simply split it into MDI_View and SDI_View? On the Mac problem, I don't think it unreasonable to provide a set of simple support classes and make developers do something like : if (MacSupport.isMac()) { // do mac specific stuff } else { // do normal stuff } This is pretty much how SWT operates. 
by geekycoder - 2009-03-03 19:31I certainly hope that SAF will not be over-engineered to the point of inflexibility and complexity. Been a a long-time Swing developer, I have my wish list in SAF. These are the following features that I have frequently used in desktop swing application that will hope to see in SAF -- Single application instance Use Case: This is important as some applications are best designed with single instance in mind. For example, it will make sense to have a single instance in Media player so that sound will be overlapped. When a user double-click on a audio file, it should playback in existing running MediaPlayer. Currently, widespread implementation of Single instance is to use a thread to read in file for parameter periodically. If there is content in the file, it will launch another application instance which in the main method write the parameter into the file and then exit the instance, which then pick by the first instance to do whatever with the parameter. Launching a duplicated instance takes time, and it will be better handled if the OS's native call is used instead. Hopefully, SAF will have this feature. -- OS-dependent User profile Use Case: OS has a active user profile. If SAF could implement user profile management that work in tandem with the OS then SAF application will not need to handle "hanging" user profile settings. -- Add installation and deployment modules Like application server which handles installation, deployment and execution of web application, SAF application could benefit from reference implementation ( Sun reference implementation) too of open-source installation and deployment tools that work easily and integrate with SAF eg JSmooth - Java Executable Wrapper () and Lzpack (). by cowwoc - 2009-03-03 18:50I want Sun to work on Swing. I'm just not sure that the Swing Application Framework is the way forward. I don't need a framework to help me build a specific kind of desktop application, but rather across-the-board improvements in Swing to make it easier to customize widgets and bundle common widgets like Date Picker. Frankly I'd like to see Swing 2.0 (not the one currently being proposed). I'm looking for design-level changes to make Swing easier to customize without getting lost in tons of the spaghetti code that is lurking underneath the hood. Swing components shouldn't be extensible only by those in Sun. In short, I am saying Swing's existing design has too much internal coupling and doesn't lend itself well to extending. by will69 - 2009-03-03 15:24Hi Alex, welcome back! Swing is probably the most successful cross-platform GUI toolkit that we have. Now that we all learned how to use and not use the EDT, let's make it even easier to use! And please don't forget the user's migrating from Windows to Linux. Java+Swing+OpenGL is a great way to make the transition less painful! by stolsvik - 2009-03-03 15:13btw, what was the "urgent temporary tasks"?! They must have been .. heavy. by stolsvik - 2009-03-03 15:12Quick comment: The framework should be flexible enough to fully support IoCing everything, it should be possible to have several instances of the same application run in the same JVM (pretty much "just because"), and it should be possible to fully "reboot" an appliction live without any lost memory (My app has a button that does a reboot (drop current "app context", spawn a new), and it makes a whole slew of development aspects have much shorter "roundtrip times"), and it should cleanly exit without invoking System.exit()..! 
Basically, all resources should be accounted and contained. by jede - 2009-03-03 13:04That's good news! Alex, there was allready a discussion about needed changes and wishes some months ago on the appframework mailing list (started by you). It would be nice when you and your team could take care of those mails too. It wouldn't make sense to copy all of them to this comments block. Once again: At the moment it's not very easy to use dependency injection frameworks. The use of mock objects for unit testing is also sometimes too tricky. It would be much simpler without singletons and when all componts of the SAF would use interfaces. Bye, Stefan by anilp1 - 2009-03-03 12:08Alex, I wanted to forward this bug to you that was discussed a while back. "Actions can lose their state (enabled, text, icon, etc.) when used with the current public appframework code." It was ruled a serious bug that has not been fixed. thanks, Anil _________________ Are you planning to kill yourself? Stop! listen to this song broadband version: dial-up version: --- On Thu, 2/12/09, Anil Philip wrote: From: Anil Philip Subject: Re: What is the status of SAF ? To: [email protected] Date: Thursday, February 12, 2009, 1:17 PM I am using Netbeans 6.5. The GUI designer uses SAF version 1.03. In the generated code which cannot be edited, I see javax.swing.ActionMap actionMap = org.jdesktop.application.Application.getInstance(com.juwo.nodepad.NodePad.class).getContext().getActionMap(NodePadView.class, this); what should one do instead? thanks, Anil _________________ for good news go to --- On Thu, 2/12/09, Karsten Lentzsch wrote: From: Karsten Lentzsch Subject: Re: What is the status of SAF ? To: [email protected] Date: Thursday, February 12, 2009, 7:31 AM Chiovari Cristian Sergiu wrote: > Well if there is a show stopper it must be something serious ! Actions can loose their state (enabled, text, icon, etc.) when used with the current public appframework code. > So then I wonder how it can be used in a production environment ? I've fixed that in my 296 implementations and have described how to fix it: a) have hard references to ActionMaps and let developers clear ActionMaps when they are no longer used (the latter to avoid memory leaks), or b) turn #getActionMap into #createActionMap that doesn't hold the ActionMap; the developer then holds the ActionMap where appropriate, for example in an Presentation Model instance. -Karsten by peyrona - 2009-03-03 11:52In my eyes, an important issue is to create it extensible: lets say we create a very generic framework, it should be easy (not a nightmare) by extending (inheriting), to create an MDI framework, or others. by rcasha - 2009-03-03 07:29One current problem I encountered is that it is impossible to override certain "factory" methods or classes. I needed to create my own implementation of ApplicationAction (which adds security via jaas) but I had to fork my own version to do that, since I had to replace the default ApplicationContext (which is final and created in the constructor of Application). by eutrilla - 2009-03-03 07:29I usually encapsulate user actions in events that are sent to a central EventManager. In this manager diferent EventListeners can be registered during startup depending on the type of event , so the same event can be routed to one or more listeners, or to none. In this way, it is easy to change the behaviour of a certain action and to reuse it between different controllers. 
Following what cdandoy said, each view could have a different EventManager (controller) with a different set of listeners, which can be reused or not. Apart of listeners that manage the toolbar buttons, I'd include in the list of desirable listeners an undo/redo manager, for instance. by cdandoy - 2009-03-03 05:59May I suggest a different approach to actions? The JDeveloper platform has the notion of Controller associated with Views and the controller is responsible for handling the actions. A View can contain a View and the platform respects the hierarchy of controllers. The platform not only calls the controller when an action is performed but it also asks the controller to update visible actions. The advantages are that * the actions are easily shared between views (Edit>Delete may do something different in each view) * it drastically reduces the number of listeners required to maintain the state of each action (enabled/checked) * Only visible actions are updated (for example when the user opens down a menu) In my experience the controller approach has huge benefits over the ActionListener approach. There is more details here:... by osbald - 2009-03-03 03:38That's only partly true. Actions appear to be stored & retrieve via their Class object in ActionManager. Like a lot of the AppFramework classes you cant create your own ActionManager, it has a 1:1 relationship with AppContext. Using Class object as keys essentially makes them all Singletons (cant distinsh between two instances of same Class but with differeing state) for each classloader and your classloader options are more limited in web start. Or at least going down that route (replacing JWS security manager & installing custom classloaders) makes life super-complicated for the application developer which is the opposite of what the framework is supposed to offer. There's a lot of core classes with private/protected constructors and/or static factories that prevent you from most attempts to deviate from the classic SingleFrame model. Very quickly I found various things start breaking down, ActionManager, parent chaining lookups, resource injection (because of parent issue).. Actually this should really be moved to the AppFramework mailing lists, doubt there's that many developers looking at Alexs blog after 6 months+ of inactivity. What's the remit here a long term commitment to get AppFramework right? or a quick fix to get something into JDK7 (abandoning the 1.5 compatible codebase) before moving on? by eutrilla - 2009-03-03 03:23Yes, that's more or less what I meant, but it should be possible to have multiple instances of AppContext (or whatever class your getAppContext() method is returning), not just the one returned by the static method. By the way, it could be nice to have an AppContext.setInstance() method so that we are able to switch between global contexts. I'm sure that I've come across one or two cases where this has been handy. Another option could be to store statically different AppContexts for different views, such as in AppContext.forView(VIEW_NAME).get(PRIVATE_KEY) by malenkov - 2009-03-03 02:54Actually, there no the "singleton problem". You can replace static field with the following construction: AppContext.getAppContext().gett(PRIVATE_KEY); AppContext.getAppContext().put(PRIVATE_KEY, value); by eutrilla - 2009-03-03 02:20RE: Singletons: I don't know if I misunderstood the problem, but what about using a non-static AppContext class, and a static GlobalAppContext that holds an AppContext instance? 
It wouldn't be a pure singleton, but will allow the best of both worlds: for simple applications it would be possible to just use the Global, and if you have an MDI, each View would have its own AppContext, which may or may not be the same as the one stored in the GlobalAppContext. And as for features of the framework, I've always felt that it's a shame that Swing doesn't have a docking system by itself, or even an abstraction layer that can be used to plug-in third party docking libraries. by carcour - 2009-03-02 21:11Good to know that the project is still Alive Alexander! I thought SAF was dead. What were the urgent tasks you've worked on. by rogerjose81 - 2009-03-02 19:21Very good to heard this ;) I would like that SAF would be able to manage the skins on an (semi)automatic way. Something like identifying each look-and-fell and its themes, and providing a menu with sub-items for each laf. I hope it is not much to ask. Best regards, Roger by osbald - 2009-03-02 14:26Hans, Re: Singletons: What about multiple document interfaces? Multiple instances of the same classes, in the same JVM, but with differing state. The Singleton ApplicationContext and the Action caching by Class objects gave me a lot of problems last time I tried. MDI interfaces as in InternalFrame are generally considered old hat these days. Most go for multiple windows. Also the ApplicationContext per classloader wasn't a practical solution for anybody wanting to use Java Web Start. by mbien - 2009-03-02 14:06great to see you blogging again ;) are there efforts to talk to the netbeans team about interoperability with the nb platform? The NetBeans platform supports out of the box persistence, tasks, actions defined in xml layers, module life cycle (but i think most of ModuleInstall methods were deprecated :P) etc. which matches the features i found by browsing through the SAF javadoc it would be a pity if it wouldn't be possible to map some of the features nb already provides to SAF because of implementation detail X. It would see it as killer feature if we would be able to drop a app written with SAF into the netbeans platform with minimal code changes and see it nicely integrating in some areas. (aka scalability for desktop apps) by shemnon - 2009-03-02 13:52What I would like to see is a framework that is usable for writing Swing Applications in languages other than The Java(tm) Programming Language. Think JavaFX Script, Groovy, Ruby, Python, Clojure, Fan, et al. There needs to be some way to access the core of the framework that doesn't depend on features that don't map directly to other programming models. Particular to that is the reliance on some annotations to do some of the magic. For example, requiring annotated methods to be the only way to add Actions to the ApplicaitonActionMap is a particular problem. It's fine to do some particularly snazzy stuff for The Java(tm) Programming Language, just don't make it the single gateway to access the APIs, or else the growing JVM languages community may have to bypass it when they write their own desktop applications. You don't need to ship the other language bindings with SAF but an equal degree of access would speed/enable adoption. by hansmuller - 2009-03-02 12:35Per the "singleton problem": the Application singleton does not prevent multiple applications per JVM nor does it prevent running multiple Application applets in the browser. Statics are per class-loader not per JVM. This topic was discussed at some length about a year and a half ago:... 
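As a side note, the "non-static context plus a static global holder" idea from the two comments above is easy to sketch in Java. The classes below are hypothetical illustrations only; they are not part of the Swing Application Framework, and the AppContext/gett calls quoted earlier refer to an internal JDK class (gett is presumably a typo for get).

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch (not SAF API): a non-static context object plus a
// static "global" holder, as suggested in the comments. Simple apps use the
// global instance; an MDI app can create one context per view.
final class AppContextSketch {
    private final Map<Object, Object> values = new ConcurrentHashMap<>();

    public Object get(Object key)               { return values.get(key); }
    public void   put(Object key, Object value) { values.put(key, value); }
}

final class GlobalAppContext {
    private static volatile AppContextSketch instance = new AppContextSketch();

    public static AppContextSketch get()         { return instance; }
    public static void set(AppContextSketch ctx) { instance = ctx; }
}

// Usage, mirroring the pseudo-code in the comments:
//   GlobalAppContext.get().put(PRIVATE_KEY, value);
//   Object v = GlobalAppContext.get().get(PRIVATE_KEY);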
Having multiple Applications share one ApplicationContext was never a goal. Applications weren't intended to be modules. by masn - 2009-06-20 08:46I think what's missing is a good, relatively complete UI framework for Swing, something similar to the eclipse rich client platform or netbeans, but much lighter weight.
https://weblogs.java.net/node/241612/atom/feed
CC-MAIN-2014-10
refinedweb
4,295
60.45
Red Hat Bugzilla – Bug 205335 CVE-2006-4538 Local DoS with corrupted ELF Last modified: 2007-11-30 17:07:27 EST From Kirill Korotaev: When running on IA64 or SPARC platforms, local users can cause a denial of service via a malformed ELF file, triggered by cross-region mappings. Where's the reproducer? I don't see any reference to it anywhere in the thread. I don't have a reproducer for this one. Jason, what do you suggest in this case? Cobble up a patch, compile it, and post it saying, "we don't have a reproducer"? Cancel the needinfo -- I contacted Kirill Korotaev, and he forwarded me an ia64 reproducer (actually 2). His response follows, and I will attach both the mangle.c program as well as his pre-compiled ELF file: Subject: Re: [PATCH] IA64,sparc: local DoS with corrupted ELFs Date: Tue, 26 Sep 2006 10:24:21 +0400 From: Kirill Korotaev <[email protected]> To: Dave Anderson <[email protected]> Dave, > I just got assigned this CVE for a RHEL4 back-port. > By any chance do you have a pointer to a quick-and-dirty > reproducer for ia64? Or instructions on how I could tinker > with an ia64 ELF header to reproduce the DOS? ELF file is attached. I'd recommend Red Hat incorporate the original test with a program which hacks ELF files randomly and run it on a regular basis. This is what we do with OpenVZ. The original test was from [email protected], from Dave Jones' email:
----- cut -----
1. grab
2. create a test.c which is..
#include <stdio.h>
int main(void) { printf ("Worked\n"); return 0; }
3. run this script..
#!/bin/bash
while [ 1 ]; do
  gcc test.c
  ~/fuzz/mangle a.out $RANDOM
  ./a.out
done
4. wait until this happens..
----- cut -----
Thanks, Kirill
Created attachment 137132 [details] pre-compiled ia64 ELF file with corrupted PT_LOAD segments. Created attachment 137133 [details] generic binary file corrupter. Note that this program does not specifically attack the issue addressed by this particular bug, but in a random attempt, may create a corrupt ELF header that does bump into it. Proposed patch posted: committed in stream U5 build 42.17. A test kernel with this patch is available from committed in stream E5 build 42.0.4 Verified with the attached reproducer; I also wasn't able to see any other problems with a few runs of fuzzing ELF binaries.
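The mangle program referenced in step 1 is not reproduced in the bug report, so the following C sketch is a rough, hypothetical stand-in showing the idea: flip a few random bytes of the freshly built binary (keeping the ELF magic intact) so that repeated runs eventually produce headers the loader mishandles. It is an illustration, not the original fuzzer.

#include <stdio.h>
#include <stdlib.h>

/* Usage: ./mangle <file> <seed>  -- flips a handful of random bytes. */
int main(int argc, char **argv)
{
    if (argc < 3) { fprintf(stderr, "usage: %s file seed\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "r+b");
    if (!f) { perror("fopen"); return 1; }

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    srand((unsigned)atoi(argv[2]));

    for (int i = 0; i < 8; i++) {              /* corrupt 8 random bytes */
        long off = 4 + rand() % (size - 4);    /* skip the "\177ELF" magic */
        unsigned char b = (unsigned char)rand();
        fseek(f, off, SEEK_SET);
        fwrite(&b, 1, 1, f);
    }
    fclose(f);
    return 0;
}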
https://bugzilla.redhat.com/show_bug.cgi?id=205335
CC-MAIN-2017-04
refinedweb
418
67.65
Talk:Tips especially for Aussies Discuss Tips especially for Aussies: Mess These lists of tips do not appear to be 'Aussie' related at all. They're sprinkled all over the wiki, but only linked from here. Many of them are not entirely correct also. Essentially this is mess -- Harry Wood 15:41, 2 October 2009 (UTC) - Well if that is the case, it is because the original Australian pages have been removed. Some stuff may be outdated, but this was originally a nice tidy section of wiki intended for Australian readers. User:Drlizau 06:28, 27 April 2010 - Sorry Liz. I know it's kind of rude to say that this is a mess without explaining in more detail, or taking actions to fix it. The problem is I don't always have time to tidy up after people on the wiki. - I have done a little tidying now, including rescuing some of your old content, to help you with your aim of creating a set of pages especially for Aussies. The main problem is that you created new pages with titles such as "Using JOSM", and without any links back to this "Tips especially for Aussies" page. So there was no context for somebody who may for example, arrive at that "Using JOSM" page through a search on the wiki. - If you want to create a highly specialised collection of information about using JOSM targeted at Aussies, you have to name the page accordingly and make sure it is described and linked accordingly at the top of the page, otherwise it creates confusion for people searching. So that is what I've done for you now. - I think there's a lot of work to do on the content of your pages, and I guess I'll leave you to it, now that these pages are no longer confusingly cluttering the wiki namespace and causing confusion for people searching quite as much. - -- Harry Wood 13:30, 10 May 2010 (UTC)
https://wiki.openstreetmap.org/wiki/Talk:Tips_especially_for_Aussies
CC-MAIN-2022-05
refinedweb
331
66.78
According to a description from the TRM, add all the power domains. At the moment, we can support some domains on RK3288. We can add more types on RK3288 in the future; that still needs to be done.
Signed-off-by: Caesar Wang <[email protected]>
---
Changes in v17:
- delete the ugly chart in the commit.
Changes in v16:
- Add more domain description.
Changes in v15:
- change the comment.
Changes in v14: None
Changes in v13: None
Changes in v12: None

 include/dt-bindings/power-domain/rk3288.h | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)
 create mode 100644 include/dt-bindings/power-domain/rk3288.h

diff --git a/include/dt-bindings/power-domain/rk3288.h b/include/dt-bindings/power-domain/rk3288.h
new file mode 100644
index 0000000..db5e810
--- /dev/null
+++ b/include/dt-bindings/power-domain/rk3288.h
@@ -0,0 +1,31 @@
+#ifndef __DT_BINDINGS_POWER_DOMAIN_RK3288_H__
+#define __DT_BINDINGS_POWER_DOMAIN_RK3288_H__
+
+/**
+ * RK3288 Power Domain and Voltage Domain Summary.
+ */
+
+/* VD_CORE */
+#define RK3288_PD_A17_0 0
+#define RK3288_PD_A17_1 1
+#define RK3288_PD_A17_2 2
+#define RK3288_PD_A17_3 3
+#define RK3288_PD_SCU 4
+#define RK3288_PD_DEBUG 5
+#define RK3288_PD_MEM 6
+
+/* VD_LOGIC */
+#define RK3288_PD_BUS 7
+#define RK3288_PD_PERI 8
+#define RK3288_PD_VIO 9
+#define RK3288_PD_ALIVE 10
+#define RK3288_PD_HEVC 11
+#define RK3288_PD_VIDEO 12
+
+/* VD_GPU */
+#define RK3288_PD_GPU 13
+
+/* VD_PMU */
+#define RK3288_PD_PMU 14
+
+#endif
--
1.9.1
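For context, a consumer of these constants would reference them from a .dts file roughly as sketched below. The node names, compatible strings and the #power-domain-cells value are illustrative assumptions based on the usual Rockchip power-domain binding; they are not part of this patch.

/* Hypothetical consumer snippet -- not taken from the patch above. */
#include <dt-bindings/power-domain/rk3288.h>

power: power-controller {
        compatible = "rockchip,rk3288-power-controller";
        #power-domain-cells = <1>;
};

gpu@ffa30000 {
        compatible = "arm,mali-t760";
        power-domains = <&power RK3288_PD_GPU>;
};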
https://www.mail-archive.com/[email protected]/msg966064.html
CC-MAIN-2019-30
refinedweb
237
51.04
Welcome to the fourth iteration of my Glacial ListView Control v1.3. I initially began this project back in December of 2002 when I went to write a ListView for a project I was contracting on, that needed to display scores from team based games (football, basketball, etc.). I started with the stock list control provided with VS.NET. After fumbling through way too many wndprocs, hacks, and dead ends, I decided that it would be a good idea to have an extremely customizable ListView for this and many future projects. So I started this project which is a ListView written entirely in C#. ListView ListView On the first iteration v1.0, I tried to model everything based on a similar control I had written in MFC (which didn't work out very well). My second pass at the control v1.1 focused on optimization and .NET-centric changes to my code base with a few important feature upgrades. The third pass was all about features and specifically control embedding. This fourth update mainly focuses on tightening everything up, fixing bugs, and a lot more embedding work. One thing many of you have found out is that I make frequent bug fixes and updates to this control. In order to keep from driving the good people at The Code Project crazy with lots of updates though, I post the minor updates on my website. You can get these sub-updates right here at Glacial Listview Control. You can also get more detailed change logs there. At this point, this control has simply too many features to document here. It's been a real challenge to keep things clean enough in the source to be able to continually add features. The more things I add, the better my documentation and coding practices have to be or things will become unmanageable in a hurry. I am trying to create documentation to go along with this control in the form of a user guide to make using everything much easier. Items collection editor, Activated embedding, Checkboxes (not embedded control check boxes), Help Files, Hover events. Row borders, Control Embedding, New control modes (XP, Super Flat, Normal), XP Styles, Improved Sorting, Alternate Row Colors, Improved Image/Icon support, SubItem Cell word wrap, Optimization, Hot tracking for columns and items, Up/Down sorting, Focus Rectangle, Semi-transparent selections, User object variables at item and subitem levels, Multi-select, resizable columns, grid options, multi-line items, multi-line truncation, sorting, auto-sizing item height, Text + Alignment + Color, background color overrides, basic ListView functionality. Background/Foreground colors full control: You now have the ability to control the background or foreground color of the control at almost every level. I use a hierarchical system to control which color shows up. I take the lowest common override to determine the background color of a given cell. For instance, you can have a background color set for the control, the item and the subitem. A given cell will show up the subitem override. If you have only overridden the row background then that color will show up for the entire row. Same for foreground color (text color). Items Collection Editor: You asked for it, so I did it. You can now add items in the collection editor as well as edit sub items visually. I've tried to make the collection editor as simple to use as possible. Sorting: 3 methods of sorting are available. Insertion sort, which is great for sorts < 1000 in length. Quicksort and Merge sort for lists that are much greater in length. 
I also included the ability to sort by number, which you can set in the column property. Checkboxes: Adding a checkbox to your item/subitem is as simple as setting the property in the column. Checkboxes are drawn onto the control so you don't have to worry about dealing with embedded secondary controls. Images: You can add an image or icon to column headers or items/subitems at will. Hover Events: If you need to create a tooltip for a given column, you can subscribe to the hover event. Remember though, you must turn the hover events on for the hover event to fire. Hot tracking: If you turn on hot tracking for the vertical and/or horizontal then you will see a highlight on the column/row the mouse is over with the color you have chosen. UserObjects/Tag: In order to facilitate the ability to store your user data into the items, I have included Tag properties at both the Item and SubItem levels. NOTE: THE ITEM TAG IS NOT THE SAME AS SUBITEM[0] TAG. So don't get these two confused. They are intended to be different. Alternate row colors: This ended up being a highly sought after feature so I implemented it recently. To get that 'checkbook' look and feel, simply turn on alternate row colors and set the color you desire. Control Embedding: You can embed raw controls or embed activated controls easily. Give your listview that professional look by adding progress bars, DateTime controls or your own custom control. DateTime The interface is modeled after the stock ListView built into the .NET framework, so many of the methods used to operate a ListView will hold up in this implementation. Someone mentioned a desire to have more examples of how to use the various features so I have included more information. Most of the primary features of this control can be found in the design editor when you place the control onto the page. Everything from hot tracking, to grid lines, to selection color and others can be done visually through the MS design time environment. using GlacialComponents.Controls GlacialList mylist = new GlacialList(); mylist.Columns.Add( "Column1", 100 ); // this can also be added // through the design time support mylist.Columns.Add( "Column2", 100 ); mylist.Columns.Add( "Column3", 100 ); mylist.Columns.Add( "Column4", 100 ); GLItem item; item = this.glacialList1.Items.Add( "Atlanta Braves" ); item.SubItems[1].Text = "8v"; item.SubItems[2].Text = "Live"; item.SubItems[2].BackColor = Color.Bisque; item.SubItems[3].Text = "MLB.TV"; item = this.glacialList1.Items.Add( "Florida Marlins" ); item.SubItems[1].Text = ""; item.SubItems[2].Text = "Delayed"; item.SubItems[2].BackColor = Color.LightCoral; item.SubItems[3].Text = "Audio"; item.SubItems[1].BackColor = Color.Aqua; // set the background // of this particular subitem ONLY item.UserObject = myownuserobjecttype; // set a private user object item.Selected = true; // set this item to selected state item.SubItems[1].Span = 2; // set this sub item to span 2 spaces ArrayList selectedItems = mylist.SelectedItems; // get list of selected items There are two types of embedding in GlacialList. The first is Standard Control Embedding and is basically a one to one relationship between cells and the embedded control. Controls that are not visible in the ListView are 'hidden' but not destructed. The second is Activated Embedding, this type of embedding only shows up when you double click on a cell and is the same for every cell in a column. 
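Pulling together the calls shown in the snippet above, here is a slightly fuller C# sketch of a list that mixes plain text cells with an embedded ProgressBar (standard control embedding, described in more detail below). It only uses members that appear elsewhere in this article; anything beyond that should be checked against the control's actual API.

using System.Drawing;
using System.Windows.Forms;
using GlacialComponents.Controls;

// Build a small download list: file name, status and an embedded progress bar.
GlacialList list = new GlacialList();
list.Columns.Add("File", 160);
list.Columns.Add("Status", 90);
list.Columns.Add("Progress", 120);

GLItem item = list.Items.Add("report.pdf");
item.SubItems[1].Text = "Downloading";
item.SubItems[1].BackColor = Color.LightYellow;   // per-cell background override

ProgressBar bar = new ProgressBar();
bar.Value = 35;
item.SubItems[2].Control = bar;                   // standard control embedding

item.UserObject = "download-42";                  // hang your own data off the row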
One of the big challenges I faced in adding embedded controls is a problem none of the other list controls have dealt with. How to make the embedded controls show up BEHIND borders. Since the borders are almost always drawn onto the control, it was not possible to simply draw over the embedded controls. I got around this by adding 5 BorderStrip controls that I created to give the effect of the border on top of embedded controls. Other controls either don't have this feature or get rid of borders altogether to fix the problem. BorderStrip The above graphic shows an embedded progress bar control. One note about embedding right up front. I am giving you the tools to do whatever you want with a ListView. However, if you load 100,000 controls onto the surface then you have only yourself to blame if you scroll at a blazing 1 frame per hour. You must make intelligent decisions about what tradeoffs are best for whatever end result you are trying to achieve. When I first tackled the problem of control embedding, I knew it wasn't going to be easy. To get solid, professional results I would have to make many tradeoffs for functionality and performance. Before I began coding into the control, I wrote several test projects first to check out various theories I had on how large numbers of visible controls would perform on an active surface. These tests allowed me to optimize the feature set without killing performance. One of the tradeoffs I had to deal with was whether to destruct or Hide a control when it goes out of view. If you destruct and reconstruct the control each time, you save on memory and handles but you sacrifice speed. If you Hide the controls that go out of view then the ListView starts bogging the system when you get large numbers of items. I decided that if someone was using embedded controls it was unlikely that they would have 100k+ items, so I 'Hide' controls that are not in current view. In order to make use of the control embedding, you simply need to add a control to the SubItem.Control property. You can also override that control at any time by setting the ForceText property of the subitem which overrides everything and displays whatever is in the Text attribute. SubItem.Control ForceText Text // add a progress bar control to a sub item // setting item 0 and subitem 0 ProgressBar pb = new ProgressBar(); pb.Value = 50; // set it to some arbitrary value item[0].SubItems[0].Control = pb; The above graphic shows an activated embedded textbox making this cell editable. Activated embedding is by far the more useful of the two embedding types and also a bit more complicated. Activated embedding allows you to embed a control without really embedding the control. An activated embedded control will only show up when someone double clicks on a cell. You set the activated embedded control in the column definition. I struggled with many different ways to do activated embedding until I came up with the system of requiring the GLActivatedEmbedded interface to be implemented. In order to use the activated embedded type, you need to go into the column area and make the appropriate setting. Activated embedded types are valid for entire columns. You can't have more than one type per column nor can you mix activated embedding with standard control embedding. 
GLActivatedEmbedded // Add a column, then set its embedded type GLColumn column = this.glacialList2.Columns.Add( "First column", 100 ); column.ActivatedEmbeddedType = GLActivatedEmbeddedTypes.TextBox; Or set it through the type in the Column properties of the column collection editor. If you want to use your own type as an activated embedded control then you need to set the Activated Embedded type to UserType. Then you need to add the GLActivatedEmbedded interface to your control and implement its members. Here, for example, is the implementation for the text box control. UserType // snipet from my textbox built in implementation // of the activated embedded control protected GLItem m_item = null; protected GLSubItem m_subItem = null; protected GlacialList m_Parent = null; // called when control is activated public bool GLLoad( GLItem item, GLSubItem subItem, GlacialList listctrl ) { this.BorderStyle = BorderStyle.None; this.AutoSize = false; m_item = item; m_subItem = subItem; m_Parent = listctrl; this.Text = subItem.Text; return true; } // called when control is to be destroyed public void GLUnload() { m_subItem.Text = this.Text; } // form1.cs column.ActivatedEmbeddedControlType = new GLTextBox(); At this point, every cell in the column can now use the activated embedded control! In order to achieve the XP look for the control, you need to do two things. First you need to set the ControlStyle to XP, second you need to put Application.EnableVisualStyles(); at the beginning of your application. ControlStyle Application.EnableVisualStyles(); This is a style I came up with to satisfy a need I had to have some very lightweight reports. As you can see, you can set the alternating colors field as well to make things more lively. One of the really nice things I like about VS.NET is the design time support. A nice part of that framework is the CollectionEditor. The collection editor allows you to add/remove/edit collection items as well as edit their properties. CollectionEditor To add collection editor support to your collection at design time, you need to take several steps. public class GLColumn GLColumn) { GLColumn column = (GLColumn)value; ConstructorInfo ci = typeof(GLColumn).GetConstructor(new Type[] {}); if (ci != null) return new InstanceDescriptor(ci, null, false); } return base.ConvertTo(context, culture, value, destinationType); } } [TypeConverter("YourNameSpace.YourTypeConverter")] [TypeConverter("GlacialComponents.Controls.GLColumnConverter")] public class GLColumn { ... public class CustomCollectionEditor : CollectionEditor { public CustomCollectionEditor(Type type) : base(type) {} public override object EditValue(ITypeDescriptorContext context, IServiceProvider isp, object value) { GlacialList originalControl = (GlacialList)context.Instance; object returnObject = base.EditValue( context, isp, value ); originalControl.Refresh();//.Invalidate( true ); return returnObject; } } [ Category("Behavior"), Description("Column Collection"), DesignerSerializationVisibility( DesignerSerializationVisibility.Content), Editor(typeof(CustomCollectionEditor), typeof(UITypeEditor)), Browsable(true) ] public GLColumnCollection Columns { get { return m_Columns; } } This is all you need to bring design time support to your collection! Q. XP Styles aren't showing up for me even though I have everything set correctly. A. You need to make a call to Application.EnableVisualStyles() before the Application.Run(...) of your application. Application.EnableVisualStyles() Application.Run(...) Q. 
I don't see the fake button to the right of visible columns in the header. A. The fake button on the stock ListView never made sense to me. This is not a bug, this is exactly the way I want it. Q. How do I make a cell editable? A. Activated embedding. Go to the column definition and set the activated embedded type to TextBox. TextBox Q. Why does your vertical scrollbar stop at the base of the header? A. Why shouldn't it? The scrollbar controls the client region not the header. The stock ListView is wrong in its implementation of that. Q. It won't let me 'Add' subitems, what am I doing wrong? A. One of my favorite features of this control is that subitems are added and removed automatically and behind the scenes. If you add an item or column, rest assured, the subitem exists already. This project will fork in two directions in the future. First, I intend to make a 'Pro' version of this control that I can market commercially. I intend to rewrite most of this control based on what I've learned and call it version 2.0. However, I also intend to keep this codebase and create a version 1.4 and above that will remain free here on The Code Project. I will do this by backporting major features (like TreeView) that I intend to add to 2.0. I hope you can all support me as I try to move into the controls business while continuing to provide free controls to people here. The versions I post on The Code Project will always be free and clear for all those who need it. TreeView For v1.4, I am looking to bring a tree view to the control. I'm not sure if I am going to integrate it or if I am going to make it a separate control that subclasses the ListView (most likely). But that's the next big thing on my plate. I also would like to really tighten up the code. There are now so many features in this list control that I can't test them all any time I make a given change. Please be diligent in sending me bug reports as I need them to improve this control. My fourth pass at this control really brings the control into what I always wanted in a list control. It's very flexible, fast, and has a large number of features. I recently integrated it into my full Glacial Source Control version control system which is one of the main reasons I started this project which went very smoothly. I am particularly interested in bugs and feature requests. I would request that you make feature requests on the message board here though so others can comment. I hope you enjoy the control. You are free to use this version of the ListView control in both personal and commercial applications. However, you may only redistribute this in its compiled form (modified or unmodified), you may not redistribute the source or modified source. When using this control, please include the line "Glacial ListView - Copyright Glacial Components Software 2004 -" reference in either the about or in the documentation. This will help me out and allow me to continue to provide you with free controls. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) GlacialList.cs The designer cannot process the code at line 265: if (this.ControlStyle == GLControlStyles.XP) Application.EnableVisualStyles(); The code within the method 'InitializeComponent' is generated by the designer and should not be manually modified. Please remove any changes and try opening the designer again. 
if (this.ControlStyle == GLControlStyles.XP) Application.EnableVisualStyles(); XP namespace EXControls { public class EXListView : ListView { ... protected override void WndProc(ref Message m) { if (m.Msg == WM_PAINT) { foreach (EmbeddedControl c in _controls) { Rectangle r = c.MySubItem.Bounds; if (r.Y >= 0 && r.Y < this.ClientRectangle.Height) { // ">=" instead of ">" ... gListSecondList.Items.Add(gList); foreach (GLItem gList in arrListItems) { switch (respRec.nHandleCaller) { case (int)enumHandlerCallers.HNDCALL_LIST_MYFIRSTLIST: { gListFirstList.Items.Add(gList); } break; case (int)enumHandlerCallers.HNDCALL_LIST_MYSECONDLIST: { gListSecondList.Items.Add(gList); } break; case (int)enumHandlerCallers.HNDCALL_LIST_MYTHIRDLIST: { gListThirdList.Items.Add(gList); } break; } } tabControl_Main_SelectedIndexChanged(object sender, EventArgs e) gListSecondList.Items.Add gListFirstList.Items.Add if (rdr.HasRows) { rdr.Read(); lvi = new ListViewItem(); lvi.Text = rdr.GetString(0); lvsi = new ListViewItem.ListViewSubItem(); lvsi.Text = rdr.GetString(1); lvi.SubItems.Add(lvsi); lvsi = new ListViewItem.ListViewSubItem(); lvsi.Text = rdr.GetString(2); lvi.SubItems.Add(lvsi); lvsi = new ListViewItem.ListViewSubItem(); lvsi.Text = rdr.GetString(3); lvi.SubItems.Add(lvsi); lvsi = new ListViewItem.ListViewSubItem(); lvsi.Text = rdr.GetString(4); lvi.SubItems.Add(lvsi); this.lvCircuitInfo.Items.Add(lvi); lvi.SubItems[i] = lvsi.Text; listView1.AutoResizeColumns(ColumnHeaderAutoResizeStyle.ColumnContent); private const int WHEEL_DELTA = 120; private const int WM_MOUSEWHEEL = 0x020A; protected override void WndProc(ref Message m) { if (m.Msg == WM_MOUSEWHEEL) { short MSB = (short)((m.WParam.ToInt32() & 0xFFFF0000) >> 16); int mh = (MSB / WHEEL_DELTA); int vertScroll = vPanelScrollBar.Value; vertScroll -= ((ItemHeight/7)*mh); if (vertScroll < 0) vertScroll = 0; if (vertScroll > vPanelScrollBar.Maximum) vertScroll = vPanelScrollBar.Maximum; this.vPanelScrollBar.Value = vertScroll; vPanelScrollBar_Scroll(null, null); } base.WndProc(ref m); } [email protected] wrote:but if coders are looking for a freely available option with more features then this seems to be the way to go: A Much Easier to Use ListView[^] public List<string> ComboBoxItems { get { return m_comboBoxItems; } set { m_comboBoxItems = value; } } icontrol.GLLoad( item, subItem, this ); if (type == typeof(GLComboBox)) { ((GLComboBox)icontrol).Items.Clear(); ((GLComboBox)icontrol).Items.AddRange(this.Columns[nColumn].ComboBoxItems.ToArray()); } //control.LostFocus += new EventHandler( ActivatedEmbbed_LostFocus ); control.KeyPress += new KeyPressEventHandler(tb_KeyPress); this.theGlacialList.Columns[123].ComboBoxItems = new List<string> { "foo", "bar" };
http://www.codeproject.com/Articles/4012/C-List-View-v?msg=4214815
CC-MAIN-2015-06
refinedweb
3,190
57.37
EPiServer Commerce includes an extensive system wide search engine. Any information can be made available to the search engine, even if that information is not in the EPiServer Commerce database. Some systems such as Catalog and Orders, also have specific additional search features. EPiServer Commerce has its own plugable API for search providers. The search engine itself is based on the ASP.NET provider modelmeaning you can write your own providers to search servers. EPiServer Commerce comes with providers for Lucene and SOLR. EPiServer Commerce can also be integrated with EPiServer Find, to create more advanced search-based navigation and filtering for websites. Classes referred to here are available in the following namespaces: Architecture EPiServer Commerce has a layer built on top of Lucene for easy integration. There is a MediachaseSearch layer to provide a base layer to simplify the interface. This base can be used to create an index for any data source, while you still have access to the Lucene classes. Searchable data is written to a file index. SearchExtensions provides a catalog indexing and searching implementation using the MediachaseSearch layer. This is built into Commerce Manager and includes several controls making use of this implementation. You can create you own search implementations on top of MediachaseSearch. Indexing In order for the data to be available it will first be indexed. The indexing is done by the call to the Mediachase.Search.SearchManager.BuildIndex(bool rebuild) method. This method is called by either clicking "Build" or "Rebuild" buttons in the Commerce Manager or through the Quartz service that calls BuildIndex method on the predefined schedule.. The method must be implemented by the provider that is currently configured. The provider will be passed an ISearchDocument containing the properties that need to be indexed. You can either replace the indexer completely or extend the existing indexer by inheriting CatalogIndexBuilder class and overriding OnCatalogEntryIndex method. New indexers can also be added. By default the indexer only populates fields that are marked searchable in the meta configuration as well as some of the system fields like price, name and code. Depending on the provider, additional configuration changes need to be made for those fields to make it to the index. Calling BuildIndex with rebuild = false will only add indexes that has changed since the last index was created. The system keeps track of when the last index was performed using the ".build" file. Location of the ".build" file is configured inside Mediachase.Search.config file for each indexer defined. Example: build index command 1 SearchManager searchManager = new SearchManager(applicationName); 2 searchManager.BuildIndex(false); Catalog meta data fields have options that allow you to specify: - Whether a field will be added to the index. This alone will not make the field searchable. - Whether the field value will be stored or not. Stored = stored in an uncompressed format. Not stored = putting the value into the index in a compressed format. You only store a value if you are going to use the value as part of the displayed text in the results to the user. - Whether the field is tokenized. A field must be tokenized to be searchable. When a field is tokenized, the text is broken down into individual searchable words and common words are omitted. Searching When the data has been indexed, this index will be searched. 
Searching is done by calling the Search(ISearchCriteria criteria) method, which returns the ISearchResults interface. The method call is handled by the configured search provider.
Example: simple catalog search
CatalogEntrySearchCriteria criteria = new CatalogEntrySearchCriteria();
criteria.SearchPhrase = "canon";
SearchManager manager = new SearchManager(AppContext.Current.ApplicationName);
SearchResults results = manager.Search(criteria);
The search phrase can contain complex search syntax that is specific to the provider used.
Facets and filters
Another capability extending the Lucene.NET functionality is the ability to create facets, a type of filtering that can be used to further narrow a search. In the configuration file, facets such as the SimpleValue, PriceRangeValue, and RangeValue types can be found. Facets are organized into facet groups. A facet group is referred to as a Filter in the config file. For instance, a facet group would be Color, and a facet would then be Red. In the configuration, Color would be the filter and Red would be a SimpleValue. A facet group is linked to a particular field or meta field. Facets can be specified as part of the ISearchCriteria interface. The front end includes controls that read a special configuration file to automatically populate the facet property of the ISearchCriteria interface. These filters are stored in Mediachase.Search.Filters.config. To add a new filter, simply add a new field into the index and add a record to the config file. The filter will appear as soon as the data is available.
https://world.episerver.com/documentation/Items/Developers-Guide/Episerver-Commerce/8/Search/Search/
CC-MAIN-2018-51
refinedweb
800
58.58
Update – for folks who learn best visually, I've posted a follow-up screencast of the demo steps discussed below, as a DevNuggets video. You can view the video here.
What's an Action Filter?
An action filter is an attribute that adds pre- or post-processing behavior to a controller or to an individual action method. Because action filters are subclasses of the System.Attribute class (via either FilterAttribute or one of its subclasses), they can be applied to your controllers and action methods using the standard .NET metadata attribute syntax:
C#:
[MyNamedAttribute(MyParam = MyValue)]
public ActionResult MyActionMethod()
{
// do stuff
}
VB:
<MyNamedAttribute(MyParam:=MyValue)> _
Public Function MyActionMethod() As ActionResult
' do stuff
End Function
This makes action filters an easy way to add frequently-used functionality to your controllers and action methods, without intruding into the controller code, and without unnecessary repetition. To be clear, action filters aren't new to MVC 3, but there's a new way to apply them in MVC 3 that I'll discuss later on in this post.
What's in the Box?
ASP.NET MVC provides several action filters out of the box:
- Authorize – checks to see whether the current user is logged in, and matches a provided username or role name (or names), and if not it returns a 401 status, which in turn invokes the configured authentication provider.
- ChildActionOnly – used to indicate that the action method may only be called as part of a parent request, to render inline markup, rather than returning a full view template.
- OutputCache – tells ASP.NET to cache the output of the requested action, and to serve the cached output based on the parameters provided.
- HandleError – provides a mechanism for mapping exceptions to specific View templates, so that you can easily provide custom error pages to your users for specific exceptions (or simply have a generic error view template that handles all exceptions).
- RequireHttps – forces a switch from http to https by redirecting GET requests to the https version of the requested URL, and rejects non-https POST requests.
- ValidateAntiForgeryToken – checks to see whether the server request has been tampered with. Used in conjunction with the AntiForgeryToken HTML Helper, which injects a hidden input field and cookie for later verification (Here's a post showing one way you can enable this across the board).
- ValidateInput – when set to false, tells ASP.NET MVC to set ValidateRequest to false, allowing input with potentially dangerous values (i.e. markup and script). You should properly encode any values received before storing or displaying them, when request validation is disabled. Note that in ASP.NET 4.0, request validation occurs earlier in the processing pipeline, so in order to use this attribute, you must set the following value in your web.config file:
<httpRuntime requestValidationMode="2.0"/>
Note also that any actions invoked during the request, including child actions/partials, must have this attribute set, or you may still get request validation exceptions, as noted in this Stack Overflow thread.
Learning From Our Errors
I often find that the best way for me to learn something new is by actually implementing it, so to that end, I'm going to walk through the process of handling a specific error using the HandleError action filter, and in the process explain a few more things about how these filters work.
First, let’s create a new ASP.NET MVC 3 Web Application (if you don’t have ASP.NET MVC 3 installed, you can grab it quickly and painlessly using the Web Platform Installer): We’ll go with the Internet Application template, using the Razor View engine. Visual Studio helpfully opens up our HomeController for us when the project is loaded: 1: namespace HandleErrorTut.Controllers 2: { 3: public class HomeController : Controller 4: { 5: public ActionResult Index() 6: { 7: ViewModel.Message = "Welcome to ASP.NET MVC!"; 8: 9: return View(); 10: } 11: 12: public ActionResult About() 13: { 14: return View(); 15: } 16: } 17: } Nothing in there about handling errors, though, right? Yes, and no. Thanks to a new feature of ASP.NET MVC 3 called Global Filters, our application is already wired up to use the HandleErrorAttribute. How? Simple. In global.asax.cs, you’ll find the following code: 1: public static void RegisterGlobalFilters(GlobalFilterCollection filters) 2: { 3: filters.Add(new HandleErrorAttribute()); 4: } 5: 6: protected void Application_Start() 7: { 8: RegisterGlobalFilters(GlobalFilters.Filters); 9: RegisterRoutes(RouteTable.Routes); 10: } By adding the HandleErrorAttribute to the GlobalFilters.Filters collection, it will be applied to every action in our application, and any exception not handled by our code will cause it to be invoked. By default, it will simply return a View template by the name of Error, which conveniently has already been placed in Views > Shared for us: So what does this look like? Let’s find out. Add code to the About action to cause an exception, as shown below: 1: public ActionResult About() 2: { 3: throw new DivideByZeroException(); 4: return View(); 5: } Then run the application using Ctrl+F5, and click the About link in the upper-right corner of the page. Whoops! That doesn’t look much like a View template, does it? Why are we getting the yellow screen of death? Because by default, CustomErrors is set to RemoteOnly, meaning that we get to see the full details of any errors when running locally. The HandleErrors filter only gets invoked when CustomErrors is enabled. To see the HandleErrors filter in action when running locally, we need to add the following line to web.config, in the system.web section: 1: <customErrors mode="On"/> Now run the application again, and click About. You should see the following screen: Now HandleError has been invoked and is returning the default Error View template. In the process, HandleError also marks the exception as being handled, thus avoiding the dreaded yellow screen of death. Taking Control Now let’s assume that the generic error view is fine for most scenarios, but we want a more specific error view when we try to divide by zero. We can do this by adding the HandleError attribute to our controller or action, which will override the global version of the attribute. First, though, let’s create a new View template. 
Right-click the Views > Shared folder and select Add > View, specifying the name as DivByZero, and strongly typing the view to System.Web.Mvc.HandleErrorInfo, which is passed by the HandleError filter to the view being invoked: Using a strongly-typed view simplifies our view code significantly by binding the view to the HandleErrorInfo instance passed to the view, which can then be accessed via the @Model.propertyname syntax, as shown below: 1: @model System.Web.Mvc.HandleErrorInfo 2: 3: @{ 4: View.Title = "DivByZero"; 5: Layout = "~/Views/Shared/_Layout.cshtml"; 6: } 7: 8: <h2>DivByZero</h2> 9: 10: <p> 11: Controller: @Model.ControllerName 12: </p> 13: <p> 14: Action: @Model.ActionName 15: </p> 16: <p> 17: Message: @Model.Exception.Message 18: </p> 19: <p> 20: Stack Trace: @Model.Exception.StackTrace 21: </p> Next, go back to HomeController, and add the HandleError attribute to the About action method, specifying that it should return the DivByZero View template: 1: [HandleError(View="DivByZero")] 2: public ActionResult About() 3: // remaining code omitted Now, if you run the application and click the About link, you’ll see the following: But we now have a problem…the filter attached to the About action method will return the DivByZero View template regardless of which exception occurs. Thankfully, this is easy to fix by adding the ExceptionType parameter to the HandleError attribute: 1: [HandleError(View="DivByZero", ExceptionType = typeof(DivideByZeroException))] Now the HandleError attribute attached to our action method will only be invoked for a DivideByZeroException, while the global version of the HandleError attribute will be invoked for any other exceptions in our controllers or actions. You can get even more control over how and when your filters are invoked by passing in the Order parameter to the attribute, or even creating your own custom filters. For example, you could create a custom filter that inherits from FilterAttribute, and implements IExceptionFilter, then implement IExceptionFilter’s OnException method to provide your own custom error handling, such as logging the error information, or performing a redirect. This is left as an exercise for the reader (or perhaps, for a future blog post!). Conclusion In this post, I’ve explained what action filters are, and some of the things they can do for you, and demonstrated, through the use of the HandleError filter, how you can customize their behavior. I’ve also shown how you can apply these filters across all of your controllers and actions through the use of the new global action filters feature of ASP.NET MVC 3. I also encourage you to read through the MSDN docs on Filtering in ASP.NET MVC for additional information. I hope you’ve found this information useful, and welcome your feedback via the comments or email. Nice post – specifically, I like the strongly-typed model binding to the System.Web.Mvc.HandleErrorInfo in the view. Never even thought about doing that; however, I do see that the default Error view does the same. Oh how we can learn just by discovering and analyzing the obvious (what's already there). Thanks… Dan, Glad you like the post…and yes, there's lots of stuff that can be gleaned simply by example from what's already there. Although I'm a longtime Web Forms developer, I have to say that learning by example is an area that I find MVC to be superior. Thanks for your comment! 
I've been searching for a solution online for a while now, on how to properly return a 404 error, but without the ugly default looking 404 page. I could always throw a new HttpException(404, "Resource not found"), but that oddly returns a 500 error. Can anyone point me in the right direction?
Martin, There are several approaches you could take, from the simple, using CustomErrors:
<customErrors mode="On">
<error statusCode="404" redirect="~/Error/NotFound"/>
</customErrors>
You could also use a combination of customErrors and HandleError, as described here: devstuffs.wordpress.com/…/how-to-use-customerrors-in-asp-net-mvc-2 If you're seeing a 500 HTTP status code and you're using HandleError, that's to be expected, since HandleError by default sets the status code to 500. If you want the status code to be 404, you may want to look at using HttpNotFoundResult, as described here: weblogs.asp.net/…/asp-net-mvc-3-using-httpnotfoundresult-action-result.aspx That gives you a very clean way to return a 404. Keep in mind that depending on how your application is configured, IIS may be responding to the exception as well, so be sure you understand which part of the platform (IIS or ASP.NET) is responding to the exception you're throwing. Hope that helps!
Hey, great post – but for some reason, this one particular project I have doesn't work. I've tried following the steps as you've outlined, but when customErrors is on, it can't find the view. Frustrating, since it works for all other projects, just not this particular one. Any ideas?
Hi Kori, When you say "it can't find the view," what are the symptoms you're seeing? Are you getting a yellow screen of death indicating that the view is missing?
https://blogs.msdn.microsoft.com/gduthie/2011/03/17/get-to-know-action-filters-in-asp-net-mvc-3-using-handleerror/
CC-MAIN-2016-40
refinedweb
1,953
53.1
>> 2013-02-13 00:00 GMT Bolivian prosecutors brought charges Dec. 19 against 39 people in an alleged plot to assassinate President Evo Morales and launch an armed rebellion last year. For the general manager of the Bolivian Institute of Foreign Trade (IBCE) Gary Rodriguez, "there is concern about the imports for fuel and food, because it shows the lack of production. We are concerned in the composition of fuel exports have exceeded 500 million dollars. By mid-2013, Paraguay would receive natural gas from Bolivia. CHILE Fruit shipments from a port at ValparaAso, Chile, rose by 170% in the first two weeks of December, compared to the same period in 2009, according to a news release from port operator Terminal PacAfico Sur. Chile's Collahuasi mine, the world's No. 3 copper deposit, is looking for alternatives to export copper concentrate after its main port was shut down following an accident, the company said on Sunday.. SAMEX (TSX VENTURE:SXG)(OTCBB:SMXMF) is conducting a multi-project, multi-faceted exploration program at its large, wholly owned Los Zorros property holdings in Chile. Bolivia charges dozens in destabilization complot Submitted by WW4 Report on Mon, 12/20/2010 - 01:14. Bolivian prosecutors brought charges Dec. 19 against 39 people in an alleged plot to assassinate President Evo Morales and launch an armed rebellion last year. The accused include leading opposition politicians and Gary Prado, the ex-general who captured legendary guerilla leader Che Guevara in 1967. The supposed plot was uncovered in April 2009, when national police killed three suspected European mercenaries in the eastern lowland city of Santa Cruz. The accused deny the charges, calling them politically motivated. Most of those charged are already in custody, but 17 are now living outside Bolivia. The most prominent figure among the accused is Branco Marinkovic, a business leader and former head of the opposition Civic Committee of Santa Cruz, who is exiled in the US. Prosecutors say they have e-mail evidence linking the accused to three European mercenaries killed by police in last April's raid. Two other Europeans were arrested in the raid, and arms and ammunition seized. The killed included Irish national Michael Dwyer and Eduardo Rozsa-Flores, a veteran of the 1990s Balkan Wars with joint Bolivian, Hungarian and Croatian nationality. Rozsa-Flores, alleged to have been the ringleader, said in a video interview that emerged in Hungary after his death that he had been called to Bolivia to form a separatist militia in Santa Cruz. Branco Marinkovic and other opposition figures have denied any link to Rozsa-Flores. "I am persecuted by the Bolivian government and forced to live outside my beloved Bolivia because in my country my life is in danger. There are no guarantees I would get a fair trial," Marinkovic said from exile in the US. Bolivian Vice President Alvaro Garcia Linera retorted that Marinkovic should come back to Bolivia to "defend his truth," and suggested his leaving the country amounted to a confession of guilt. He called the plot was "the most serious act of conspiracy against the unity of the country." Gen. Prado likewise denied any involvement when he was called to testify before prosecutors earlier this year. "It seems laughable that a general with my career history would put himself under the orders of a mercenary," he said. (BBC News, Dec. 
18) Opposition prefect removed in Tarija Meanwhile in Tarija, another opposition stronghold in the country's east, the Departmental Legislative Assembly voted Dec. 16 to remove the prefect (governor), Mario CossAo, after he was charged with dereliction of duty. Cossio's unseating by a legislature dominated by Morales supporters leaves opposition prefects in control of just two of Bolivia's nine departments. "This is a putschist plan by Morales in complicity with prosecutors and judges controlled by a government that wants to demolish everything that opposes it in order to have total power," CossAo thundered during heated assembly debate shortly before being ousted. CossAo backers called for a general strike "in defense of democracy." Prefect Ruben Costas of Santa Cruz department called CossAo's removal "a coup d'etat." He charged that the national government is only permitting a "pseudo-autonomy" for the departments, while pursuing "the path of totalitarianism." Prosecutors brought corruption charges last week against CossAo, who had been re-elected in April. If convicted in the case, concerning fraudulent asphalt sales in road construction, he faces up to eight years in prison. Three opposition leaders have now been removed from office after being charged with or convicted of crimes. Mayor Jaime Barron of Sucre is accused of "instigating racist actions." PotosA Mayor RenA(c) Joaquino was removed after being convicted on corruption charges related to a stolen car ring and sentenced to three years in prison. His case is on appeal. Luis Revilla, mayor of La Paz, could be next, with embezzlement charges filed against him last week. Under a law passed by the national legislature, public officials can be unseated based only on the filing of charges by a prosecutor. Lino Condori, a supporter of Morales in the Tarjia assembly, has been named as the department's new prefect. CossAo, for his part, refuses to recognize his removal and insists he is still the legitimate prefect. He also refused to show up for a judicial hearing Dec. 19, at which he could have been ordered to prison. (La Prensa, La Paz, Los Tiempos, Cochabamba, Dec. 19; La Prensa, Dec. 18; Erbol, Bolivia, Dec. 17; Los Tiempos, WP, Dec. 16; Los Tiempos, Dec. 14; Erbol, Dec. 13) Paulo Gregoire STRATFOR Suben importaciones de combustibles y alimentos Bolivia, 20 de diciembre de 2010 El informe de enero a octubre del Instituto Nacional de EstadAstica (INE) sobre comercio exterior, da cuenta que las importaciones subieron respecto a similar perAodo en 2009 en 4.291,61 millones de dA^3lares, siendo el saldo comercial de 1.430,45 millones de dA^3lares; de este monto la mayor parte obedece a compras externas por combustibles y alimentos, ademA!s de los suministros. En 2009, la balanza comercial, registrA^3 un superA!vit de 985,8 millones de dA^3lares, las exportaciones alcanzaron 5.452,6 millones de dA^3lares y las importaciones 4.466,9 millones de dA^3lares. Este aA+-o, el superA!vit comercial registrarA! mejores resultados, inclusive que el 2008, cuando las exportaciones alcanzaron 7.058,0 millones de dA^3lares y en importaciones 5.100,2 millones de dA^3lares. No obstante, de esta capacidad de compras extranjeras que tiene el Estado, se hace notorio la dependencia constante de alimentos y combustibles, porque desde el 2006 las importaciones en cifras anuales aumentaron de 2.925,8 a 5.100,2 millones de dA^3lares en 2008. 
Aunque el 2009 se registrA^3 un dato menor de 4.466,9 millones de dA^3lares, este aA+-o podrAa igualar al registrado hace dos aA+-os, dado que el valor de importaciones de enero a octubre es de 4.291,61 millones de dA^3lares. IMPORTACIONES De acuerdo a los datos del INE, se establece un crecimiento en las compras externas de alimentos y bebidas ( 2009 de enero a octubre) sobre las importaciones que oscilan en 311,19 millones de dA^3lares. En similar perAodo 2010 las importaciones registran 311,48 millones de dA^3lares. En el caso de suministros industriales en 2009 de enero a octubre, registra 1.276.05 millones de dA^3lares, para el 2010 en los mismos meses se tiene 1.504,96 millones de dA^3lares. Respecto al combustible y lubricantes, las importaciones alcanzan 396,65 millones de dA^3lares en 2009, y para 2010 sube en 502,59 millones de dA^3lares. FALTA DE PRODUCCIA*N Para el gerente general del Instituto Boliviano de Comercio Exterior (IBCE) Gary RodrAguez, las importaciones marcan la pauta del desarrollo de un paAs, ademA!s que significa la capacidad de compras externas. a**La clave de todo esto es quA(c), para quA(c) importa y porquA(c) importa unoa**, remarcA^3. Sin embargo, RodrAguez, enfatizA^3 que a**existe una preocupaciA^3n acerca de las importaciones en el caso de los combustibles y alimentos, porque muestra la falta de producciA^3na**. a**Nos preocupa dentro de la composiciA^3n de las exportaciones que los combustibles hayan superado los 500 millones de dA^3lares, considerando que se compra caro y se subsidia para el mercado interno, tambiA(c)n nos preocupa que se vayan incorporando los alimentos en el rubro de la importaciA^3n, cuando nosotros podrAamos producirlos, es mA!s, cuatro aA+-os de exportaciones deberAa ser la clave para realizar polAticas pA-oblicas de sustituciA^3n de importacionesa**, sostuvo. DESEMPLEO A esto se suma que las importaciones generan desempleo, sobre todo en la cadena alimenticia, porque desincentiva a la producciA^3n agrAcola. El Gobierno en los A-oltimos aA+-os dentro de su polAtica de seguridad alimentaria viene importando alimentos que han escaseado en el mercado interno como el azA-ocar. RodrAguez explicA^3 que la producciA^3n de un bien producido en un paAs genera mA!s empleo que cuando se importa. a**Cada dA^3lar que nosotros gastamos para el exterior beneficia sueldos externos, especialmente si se trata de una cadena alimenticia. Bolivia no puede descuidar auto- abastecimiento por razones estratA(c)gicas y por eso recomendamos que las polAticas pA-oblicas no deberAan estar orientadas a la seguridad alimentaria, sino a un concepto de soberanAaa**, manifestA^3. Rising fuel and food imports Bolivia, December 20, 2010 The report from January to October of the National Statistics Institute (INE) on foreign trade, given the fact that imports rose over the same period in 2009 at U.S. $ 4291.61 million, with the trade balance of U.S. $ 1430.45 million; Of this amount, the majority due to foreign purchases of fuel and food, plus supplies. In 2009, the trade balance recorded a surplus of 985.8 million dollars, exports reached U.S. $ 5452.6 million and imports U.S. $ 4466.9 million. This year, the trade surplus recorded better results, including the 2008, when exports reached U.S. $ 7058.0 million and imports U.S. $ 5100.2 million. However, this ability of foreign purchases of the State, it became obvious the continued dependence on food and fuel, because since 2006 the annual import figures increased from 2925.8 to 5100.2 million in 2008. 
Although 2009 registered a lower figure of 4,466.9 million dollars, this year could equal the level recorded two years ago, given that the value of imports from January to October is 4,291.61 million dollars. IMPORTS According to INE data, foreign purchases of food and beverages grew: from January to October 2009 those imports were around 311.19 million dollars, while in the same period of 2010 they reached 311.48 million dollars. In the case of industrial supplies, 1,276.05 million dollars were recorded from January to October 2009, against 1,504.96 million dollars in the same months of 2010. Imports of fuel and lubricants reached 396.65 million dollars in 2009 and rose to 502.59 million dollars in 2010. LACK OF PRODUCTION For the general manager of the Bolivian Institute of Foreign Trade (IBCE), Gary Rodriguez, imports set the pace of a country's development and also reflect its capacity for foreign purchases. "The key to all of this is what you import, what you import it for, and why," he said. However, Rodriguez emphasized that "there is concern about the imports of fuel and food, because it shows the lack of production." "We are concerned that, within the composition of those purchases, fuel has exceeded 500 million dollars, considering that it is bought at a high price and subsidized for the domestic market. We are also concerned that foods are being incorporated into the import column when we could produce them ourselves; indeed, four years of exports should be the key to public policies of import substitution," he said. UNEMPLOYMENT Added to this is that imports generate unemployment, especially in the food chain, because they discourage agricultural production. In recent years, as part of its food security policy, the government has been importing foods that have become scarce in the domestic market, such as sugar. Rodriguez explained that producing a good domestically generates more jobs than importing it. "Every dollar we spend abroad benefits foreign wages, especially when it involves the food chain. Bolivia cannot neglect self-sufficiency for strategic reasons, and that is why we recommend that public policies be aimed not at food security alone but at a concept of sovereignty," he said.
Paulo Gregoire STRATFOR
Paraguay would receive gas from Bolivia in 2013 Bolivia, December 20, 2010 By mid-2013, Paraguay would receive natural gas from Bolivia, if the feasibility study recommends that transport natural resource through the waterway by Puerto CA!ceres (Brazil) said yesterday President Fernando Lugo, after signing a joint declaration with their colleagues in Bolivia and Uruguay, Evo Morales and JosA(c) Mujica, respectively, at the meeting of Urupabol. Lugo said, as published in ABC Color, conducted a feasibility study, where he managed several options for bringing natural gas from Bolivia to markets in Paraguay and Uruguay. Reported that the first option was discarded because it had a "high cost" in building a gas pipeline from Bolivia and Paraguay covered path, part of Argentina and Uruguay. He said the second option is more viable and is to bring natural gas to Puerto CA!ceres (Brazil) and through the waterway, by the Paraguay and Parana rivers reach Uruguay. If feasibility studies are favorable, by mid 2013 and will be buying natural gas from Bolivia, Lugo said at a news conference, held just meters from the Itaipu Lake in extreme heat. Paulo Gregoire STRATFOR Chile fruit shipments from ValparaAso port jump 170% in December December 20th, 2010 Fruit shipments from a port at ValparaAso, Chile, rose by 170% in the first two weeks of December, compared to the same period in 2009, according to a news release from port operator Terminal PacAfico Sur. From Dec. 1 to 15, 8,400 metric tons left the ValparaAso, the statement said. The number of pallets shipped increased by 6%. Seasonal fruits such as blueberries, cherries, avocado and stone fruit were shipped to the ports at Philadelphia and Los Angeles in the United States. About half of fruit and vegetable shipments from Chile leave from ValparaAso. The other main port for fruit shipments is further south in San Antonio. It is expected that shipments will continue to rise in January and February, when the main export product will be table grapes. Fruit exports hit their peak between December and March. Photo: Terminal PacAfico Sur Paulo Gregoire STRATFOR UPDATE 2-Chile mine seeks alternative to ship copper-official Sun Dec 19, 2010 7:09pm GMT SANTIAGO, Dec 19 (Reuters) - Chile's Collahuasi mine, the world's No. 3 copper deposit, is looking for alternatives to export copper concentrate after its main port was shut down following an accident, the company said on Sunday. Collahuasi, owned by Xstrata and Anglo American, extracts some 535,000 tonnes a year, or 3.3 percent of global mined copper. A company spokeswoman said Collahuasi is evaluating other options to export copper by sea after the accident at the Patache port on Saturday in which three workers were killed when part of a shiploader collapsed. "We're evaluating different alternatives to export copper concentrate while the shiploader is repaired," Bernardita Fernandez told Reuters, adding that she could not say whether Patache was the only port used by Collahuasi. An official at Patache said repairing the shiploader could take a while. "We spoke with the people there and they said that it can take at least a month to repair the structure that collapsed," said Port Captain Domingo Hormazabal. Collahuasi workers returned to their jobs earlier this month following a 32-day strike that is viewed as one of the worst faced by any foreign miner in Chile, the world's top copper producer. 
(Reporting by Maria Jose Latorre; Writing by Eduardo Garcia; Editing by Vicki Allen) Paulo Gregoire STRATFOR Chile blueberry volume high despite rain and hail December 20th, 2010. Even after the unstable weekend climate, the export yield of week 49 reached 5,648 tons, 32.7% than the projected yield for the week. Accumulated the season has produced 11,527 tons of the berry, thus nearly 17%a**Neal and Duke as they are the ripest. Next weekend will sill the end of the OA% in the worst of cases. The final amount of the damage still awaits to be seen, and according to the report a lower yield in packing houses is expected Paulo Gregoire STRATFOR SAMEX Mining Corp.: Exploration Update-Los Zorros District, Chile PDF Print E-mail Monday, 20 December 2010 SAMEX (TSX VENTURE:SXG)(OTCBB:SMXMF) is conducting a multi-project, multi-faceted exploration program at its large, wholly owned Los Zorros property holdings in Chile. Thus far, three core drill holes have been drilled and a fourth hole is in progress. SAMEX recently secured major funding which has facilitated a significant expansion and extension of the exploration programs on its various projects. While core drilling is ongoing, the Company is also preparing for a substantial geophysical survey over large portions of the Los Zorros district utilizing the Titan 24 proprietary deep-earth-imaging technology from Quantec Geoscience. Following is an overview of the exploration progress and planned activities for various projects at Los Zorros: Cinchado Gold Project a** Three core holes have been drilled thus far. Data evaluation is ongoing and a Titan 24 survey line has been prepared to cross over the project area and results will be incorporated together with all mapping, sampling, drilling and historic data to guide additional drilling in the Cinchado Project area. Milagro Pampa Project a** A deep core drill hole (greater than 600 meters) is currently in progress and is giving the exploration team a deeper look at an area drilled in 2004 where significant alteration and zones of anomalous copper-gold-silver were previously encountered. Thus far the hole has encountered interesting alteration and mineralization which will be sampled and assayed. Milagro Gold Project a** Multiple drill locations are being prepared to follow up important gold-bearing mantos intersections where previous drilling encountered 97.3 meters averaging 0.302 g/t gold, including 2.579 g/t gold over 4.7 meters (see news release No. 1-05 dated January 21, 2005). Throughout the Milagro project area, grab-samples of jasperoid outcrops and altered structural zones, that possibly represent leakage from deeper mineralization, have continued to give very encouraging geochemical results, including multiple locations of anomalous to as high as 20 g/t gold together with other important pathfinder metals also being present. Nora Gold Project a** The Nora project area had been previously sub-divided into four target zones (A, B, C, and D) based on strongly anomalous gold values in surface outcrop and trench samples. Target zones C-D include leakage up along a fault zone which was targeted in previous drilling and led to the discovery of a breccia-hosted, high-grade intersection of 15.96 g/t gold over 7.66 meters (see news releases No. 8-05 and 9-05 dated September 9 and 14, 2005). Access roads and multiple drill pads have been prepared along a 600-meter strike length of Zones C-D and drilling is expected to begin early in the first quarter of 2011. 
Six new bulldozer trenches have been completed and sampled in other parts of the Nora Project to help extend the target zones further northward. An additional new barite-jasperoid in-filled fault zone (Zone E) in the eastern part of the project area is in the early stages of evaluation. Titan 24 Geophysics - The Titan 24 Magnetotellurics and IP/Resistivity is a deep-earth-imaging technology system that images conductive mineralization, disseminated mineralization, alteration, structure and geology for targeting of drill holes to depth. Grid-lines are currently being surveyed-in for the geophysical program scheduled to begin in January 2011. Data from the Titan 24 survey will be correlated with data from drilling to help guide exploration throughout the extensive Los Zorros district. About Los Zorros -: o the property covers the breadth of a regional anticlinorium with bedrock of calcareous sediments and diorite sills. o the property is diagonally crossed by an 8-kilometer-long trend of barite veins which appears to comprise an extensive sigmoidal (S-shaped) fracture system. o the property is the locus of younger porphyry intrusions. o clay-sericite-pyrite alteration is superposed on the porphyry intrusions in four areas that have been identified so far - much of these altered areas and parts of the barite vein swarm, are largely concealed beneath a thin veneer of gravel and wind-blown silt. Trenching and a gravity survey have helped better outline the extent of the altered intrusions. o the style of mineralization at Los Zorros varies from steep crosscutting veins and breccia to bedded mantos-like occurrences a** hosted within sedimentary rocks outboard to the altered porphyritic intrusions. There is widespread occurrence of gold-bearing barite veins and altered fault zones; common, widespread occurrence of jasperoid silica; large areas/intersections of anomalous gold and the presence of important pathfinder metals (Hg, As, Sb) often found in association with gold mineralization. Metal-laden hydrothermal fluids thought to be derived from the younger porphyritic intrusions, likely expelled out along the fault structure pathways and into favorable sedimentary intervals to form the significant gold and copper-silver mineralized areas of Cinchado, Nora, Milagro, and Milagro Pampa. There are also many outlying). What has been revealed geologically at Los Zorros is very intriguing and has provided SAMEX with strong impetus to explore for multiple precious metal deposits that may be clustered beneath the widespread precious metal occurrences in this little-explored district.. Paulo Gregoire STRATFOR Paulo Gregoire STRATFOR
http://www.wikileaks.org/gifiles/docs/2032413_-latam-bolivia-chile-country-brief-am-.html
CC-MAIN-2013-20
refinedweb
3,899
52.6
If you are a developer who uses Visual Studio Professional Edition, you can create and run two types of tests: unit and ordered. You use a unit test to validate that a specific method of production code works correctly, to test for regressions, or to perform buddy testing or smoke testing. You use an ordered test to run other tests in a specified order. Testers on your team can use the Team System testing tools to create and run tests. If they run a unit test that fails, they file a bug and assign it to you. You can then use Visual Studio to reproduce the bug by running the failed unit test.
The following sections provide links to topics that describe the testing capabilities now available in Visual Studio Professional Edition:
Visual Studio Professional Edition. The features listed in this section are available to all users of Visual Studio Professional Edition.
Professional Edition Plus Team Explorer License. The features listed in this section are available to every user of Visual Studio Professional Edition who also has a license to use Team Explorer.
Not Available in Visual Studio Professional Edition. The features listed in this section are available in Visual Studio Team System Test Edition but not in Visual Studio Professional Edition.
If you have Visual Studio Professional Edition, the capabilities shown in the following table are available to you (capability - for more information):
Generate unit tests from code - How to: Create and Run a Unit Test
Create unit tests - Creating Unit Tests
Create and run ASP.NET unit tests - Unit Tests for ASP.NET Web Services
Create and run data-driven unit tests - How to: Create a Data-Driven Unit Test
Run unit tests and ordered tests - How to: Run Selected Tests
Create test projects - How to: Create a Test Project
Disable and enable tests by using the Visual Studio Properties window - How to: Disable and Enable Tests
Run tests from a command line - Command-Line Test Execution
Edit test run configurations - Configuring Test Execution
View details of test results - Test Results Reported
Create ordered tests - How to: Create an Ordered Test
Run ordered tests - Working with Ordered Tests
Organize tests into test lists - How to: Organize Tests into Test Lists
Disable and enable tests by using the Test List Editor - How to: Disable and Enable Tests
Import, export, or load test metadata files - Reusing Tests
If your team uses Visual Studio Team Foundation Server, you might be licensed to use Team Explorer. In this case, you have the capabilities shown in the following table (capability - for more information):
Use tests as part of a check-in policy - Working with Check-In Policies and Notes; How to: Add Check-In Policies
Use tests in Team Foundation Build, such as for build verification tests - How to: Configure and Run Build Verification Tests (BVTs)
Download test run results and view them in the Test Results window - How to: View Test Results Through a Build Report
Open a linked test result - How to: Open Test Results from Work Items
Add tests to source control - How to: Add a Project or Solution to Version Control
The following capabilities are available in Test Edition but are not available in Visual Studio Professional Edition:
Create Web, load, manual, generic, or database unit tests.
Gather code-coverage data.
Run tests remotely.
Create a bug or other work item from a test result.
Link a test result to a work item.
Associate a work item with a test.
Publish test results.
Describes the UnitTesting namespace, which provides attributes, exceptions, asserts, and other classes that support unit testing.
Describes the UnitTesting.Web namespace, which extends the UnitTesting namespace by providing support for ASP.NET and Web service unit tests. Discusses how to develop new test types that integrate with Visual Studio Team System. Describes how to create and install a host adapter, which is a software component that lets you run tests in a specific environment. Also describes how to specify a host adapter for running tests.
http://msdn.microsoft.com/en-us/library/bb385902.aspx
crawl-002
refinedweb
665
58.52
Hi Troy, Thanks for contacting Syncfusion support. Query: How to resolve the assembly references? If we include both the Chart and SfChart DLLs, we get a compilation error due to ambiguity between the references. We can resolve this error by explicitly defining the chart namespace when declaring the control, so that the compiler knows which assembly the type comes from. Note: If the issue is still reproduced, please revert to us by providing the sample, which will be helpful for providing you a better solution. Please let us know if you have any queries. Thanks, Rachel.
https://www.syncfusion.com/forums/119971/ambiguous-type-references-on-upgrade-from-v11-to-v13
CC-MAIN-2018-39
refinedweb
110
68.77
Unit 2: C Programming
Table of Contents
- 1. C Programming and Unix
- 2. C Programming Preliminaries
- 3. Format Input and Output
- 4. Basic Data Types
- 5. Pointers and Arrays
- 6. C Strings
- 7. String Library Functions
- 8. Pointer Arithmetic and Strings
- 9. Double Arrays
- 10. Command Line Arguments
1 C Programming and Unix
In this course, all of our programming will be in C, and that's because C is a low level language, much closer to the hardware and operating system than, say, C++ or Java. In fact, almost all modern operating systems are written in C, including Unix, Linux, Mac OS X, and even Windows. The C programming language itself was developed for the purpose of writing the original Unix operating system. It shouldn't be that surprising, then, that if you want to learn how to write programs that interact with the Unix system directly, those programs must be written in C. And, in so doing, the act of learning to program in C will illuminate key parts of the Unix system. In many ways, you can view C as the lingua franca of programming; it's the language that every competent programmer should be able to program in, at least a little bit. The syntax and programming constructs of C are present in all modern programming languages, and the concepts behind managing memory, data structures, and the like underpin modern programming. Humorously, while nearly all programmers know how to program in C, most try to avoid doing so because the same power that C provides as a "low level" language is what makes it finicky and difficult to deal with.
2 C Programming Preliminaries
First: YOU ALREADY KNOW C! That's because you've been programming C in your previous class, since C is a subset of the C++ language. Not all of the programs you wrote are valid C programs, but the structure and syntax are the same. If you were to look at a C program, you'd probably understand it to some extent; there are a few things that C++ has that C does not, but most constructs are the same. For example:
- Conditionals: Use if and else with the same syntax
- Loops: Use while and for loops with the same syntax
- Basic Types: Use int, float, double, char
- Variable Declaration: Still must declare your variables and types
- Functions: function declaration is the same
- Arrays and Pointers: Memory aligned sequences of data and references to that data
The big differences between C and C++ are:
- No namespaces: C doesn't have a notion of namespace; everything is loaded into the same namespace.
- No objects or advanced types: C does not have advanced types built in, and this includes string. Instead, strings are null terminated arrays of char's. You can create more advanced data types like structs, but they also have slightly different properties.
- No function overloading: Functions with different type declarations, that is, functions that take different types of input and return different types, cannot share the same name. Only the last declaration will be used.
- All functions are pass-by-value: You cannot declare a function to take a reference, e.g., void foo(int &a). Instead, you must pass a pointer value.
- Different Structures: Structures in C use a different syntax and are interpreted differently.
- Variable Scoping: Variable declaration is tightly scoped to code blocks, and you must declare variables prior to the block to use them. For example, for(int i = 0; ...) is not allowed in older C standards. Instead, you must declare i prior to the start of the for loop.
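To make that last difference concrete, here is a small sketch in the traditional style (the file name and variable are just for illustration); the counter is declared at the top of the block and remains visible after the loop:
/* scope-example.c : declaring the loop counter before the loop, old C style */
#include <stdio.h>
int main(int argc, char * argv[]){
    int i;                            //declare the counter up front
    for(i = 0; i < 5; i++){
        printf("%d\n", i);            //prints 0 through 4
    }
    printf("final value: %d\n", i);   //i is still in scope here, prints 5
    return 0;
}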
While clearly the two programming languages, C++ and C, are different, they are actually more alike than different. In fact, C is a subset of C++, which means that any program you write in C is also a C++ program. There are often situations when programming in C++ is not your best choice for completing the task, while using C and its libraries is. This is particularly relevant whenever you need to accomplish system related tasks, such as manipulating the file system or creating new processes. However, most programs you write in C++ are not C programs.
2.1 Hello World
When learning any programming language, you always start with the "Hello World" program. The way in which your program "speaks" says a lot about the syntax and structure of programming in that language. Below is the "Hello World" program for C++ and C, for comparison.
/*helloworld.cpp*/
#include <iostream>
using namespace std;
// Hello World in C++
int main(int argc, char * argv[]){
    cout << "Hello World" << endl;
}
/*helloworld.c*/
#include <stdio.h>
// Hello World in C
int main(int argc, char * argv[]){
    printf("Hello World\n");
}
To begin, each of the programs has a #include, which is a compiler directive to include a library with the program. Both of the include statements ask the compiler to include the I/O library. While in C++ this was the iostream library, in C, the standard I/O library is stdio.h. The .h refers to a header file, which is also C source code in which library or auxiliary information is generally stored.
2.2 Compiling a C program
The compilation process for C is very similar to that of C++, but we use a C compiler. The standard C compiler on Unix systems is gcc, the GNU C compiler. For example, to compile helloworld.c, we do the following.
#> gcc helloworld.c
This will produce an executable file a.out, which we can run:
#> ./a.out
Hello World
If we want to specify the name of the output file, we use the -o option.
#> gcc helloworld.c -o helloworld
#> ./helloworld
Hello World
There are more advanced compilation techniques that we will cover in lab, such as including multiple files, compiling to object files, and using pre-compiler directives.
2.3 Includes
The process of including libraries in your program looks very similar to that of C++, and uses the include statement. Note that all C libraries end in .h, unlike C++. Here are some common libraries you will probably want to include in your C program:
stdlib.h: The C standard library, contains many useful utilities, and is generally included in all programs.
stdio.h: The standard I/O library, contains utilities for reading and writing from files and file streams, and is generally included in all programs.
unistd.h: The Unix standard library, contains utilities for interacting with the Unix system, such as system calls.
sys/types.h: System types library, contains the definitions for the base types and structures of the Unix system.
string.h: String library, contains utilities for handling C strings.
ctype.h: Character library, contains utilities for handling char conversions.
math.h: Math library, contains basic math utility functions.
When you put a #include <header.h> in your program, the compiler will search for that header in its header search path. The most common location is /usr/include. However, if you place your filename to include in quotes:
#include "header.h"
the compiler will look in the local directory for the file and not the search path. This will become important when we start to develop larger programs.
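As a quick sketch of a local include (both file names here are made up for illustration), a declaration can live in a project header while the definition and use live in the .c file:
/* mymath.h : a small project-local header */
int triple(int x);    //declaration only
/* main.c */
#include <stdio.h>
#include "mymath.h"   //quotes: the compiler checks the local directory first
int triple(int x){
    return 3 * x;     //definition kept here to stay self-contained
}
int main(int argc, char * argv[]){
    printf("%d\n", triple(7));   //prints 21
    return 0;
}
Compiling with gcc main.c from the same directory is enough for the compiler to find mymath.h.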
2.4 Control Flow
The same control flow you find in C++ is present in C. This includes if/else statements:
if( condition1 ){
    //do something if condition1 is true
}else if (condition2){
    //do something if condition1 is false and condition2 is true
}else{
    //do this if both condition1 and condition2 are false
}
While loops:
while( condition ){
    //run this until the condition is not true
}
And, for loops:
//run init at the start
for ( init; condition; iteration){
    //run until condition is false, performing iteration on each loop
}
In previous versions of C, you were not able to declare new variables within the for loop. However, as of last year, with the latest standard, rejoice! You can now do the following without error:
for(int i=0; i < 10; i++){
    printf("%d\n",i); //i scoped within this loop
}
The declaration of the variable i exists within the scoping of the loop. Referring to i outside of the loop is an error, and if you declared a different i outside the loop, actions within the loop would not affect the outer declaration. For example:
int i=3;
for(int i=0;i<100;i++){
    printf("%d\n",i); //prints 0 -> 99
}
printf("%d\n",i); //prints 3 --- different i, different scope!
Essentially, it works the same as C++. However, as old habits die hard, throughout these notes, the old C style may be used, but know that the new standard applies here.
2.5 True and False
C does not have a boolean type, that is, a basic type that explicitly defines true and false. Instead, true and false are defined for each type, where 0 or NULL is always false and everything else is true. Any basic type can be used as a condition on its own. For example, this is a common form of writing an infinite loop:
while(1){
    //loop forever!
}
3 Format Input and Output
3.1 printf() and scanf()
The way output is performed in C++ is also quite different from that of C. In C++ you use the << and >> operators to direct items from cin or towards cout using iostreams. This convention is not possible in C; instead, format printing and reading is used. Let's look at another example to further the comparison.
/*enternumber.cpp*/
#include <iostream>
using namespace std;
int main(int argc, char * argv[]){
    int num;
    cout << "Enter a number" << endl;
    cin >> num;
    cout << "You entered " << num << endl;
}
/*enternumber.c*/
#include <stdio.h>
int main(int argc, char * argv[]){
    int num;
    printf("Enter a number\n");
    scanf("%d", &num); //use &num to store
                       //at the address of num
    printf("You entered %d\n", num);
}
The two programs above both ask the user to provide a number, and then print out that number. In C++, this should be fairly familiar. You use iostreams and direct the prompts to cout and direct input from cin to the integer num. C++ is smart enough to understand that if you are directing an integer to output or from input, then clearly, you are expecting a number. C is not capable of making those assumptions. In C, we use a concept of format printing and format scanning to do basic I/O. The format tells C what kind of input or output to expect. In the above program enternumber.c, the scanf asks for a %d, which is the special format for a number, and similarly, the printf has a %d format to indicate that num should be printed as a number. There are other format options. For example, you can use %f to request a float.
/* getpi.c */
#include <stdio.h>
int main(int argc, char * argv[]){
    float pi;
    printf("Enter pi:\n");
    scanf("%f", &pi);
    printf("Mmmm, pi: %f\n", pi);
}
And you can use the format to change the number of decimals to print.
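For example, consider this small sketch (the file name is just for illustration):
/* decimals.c : controlling how many decimals are printed */
#include <stdio.h>
int main(int argc, char * argv[]){
    float pi = 3.14159;
    printf("%f\n", pi);    //prints 3.141590, six decimals by default
    printf("%0.2f\n", pi); //prints 3.14
    return 0;
}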
%0.2f says print a float with only 2 trailing decimals. You can also include multiple formats, and the order of the formats matches the order of the additional arguments:
int a=10,b=12;
float f=3.14;
printf("An int:%d a float:%f and another int:%d", a, f, b);
//the first %d formats a, the %f formats f, and the second %d formats b
There are a number of different formats available, and you can read the manual pages for printf and scanf to get more detail.
man 3 printf
man 3 scanf
You have to use the 3 in the manual command because there exist other forms of these functions, namely for Bash programming, and you need to look in section 3 of the manual for the C standard library manuals. For this class, we will use the following format characters frequently:
%d: format integer
%u: format an unsigned integer
%f: format float/double
%x: format hexadecimal
%s: format string
%c: format a char
%l: format a long
%lu: format an unsigned long
%%: print a % symbol
3.5 Format input/output from File Streams
Just as you worked with the standard file streams, stdin, stdout, and stderr, we can do format input and output with file streams that you open with fopen(). The mode argument to fopen() is a string beginning with one of the following: r to open a file for reading, w to create or truncate a file for writing, or a to open a file for appending.
3.5.6 Printing to stdout and stderr
By default, printf() prints to stdout, but you can alternatively write to any file stream. To do so, you use the fprintf() function, which acts just like printf(), except you explicitly state to which file stream you wish to print. Similarly, there is a fscanf() function for format reading from files other than stdin.
printf("Hello World\n"); //prints implicitly to standard out
fprintf(stdout, "Hello World\n"); //print explicitly to standard out
fprintf(stderr, "ERROR: World coming to an endline!\n"); //print to standard error
The standard file descriptors are available in C via their shorthand, and you can refer to their file descriptor numbers where appropriate:
stdin: 0 : standard input
stdout: 1 : standard output
stderr: 2 : standard error
4 Basic Data Types
In the last lesson we reviewed the basic types of C. For reference, they appear below:
int: integer number : 4-bytes
short: integer number : 2-bytes
long: integer number : 8-bytes
char: character : 1-byte
float: floating point number : 4-bytes
double: floating point number : 8-bytes
void *: pointer : 8-bytes (on 64 bit machines)
These types and the operations over them are sufficient for most programming; however, we will need more to accomplish the needed tasks. In particular, there are three aspects of these types that require further exploration:
- Advanced Structured Types: Create new types formed by combining basic types.
- Pointers: Working with references to data
- Arrays: Organizing data into linear structures.
4.1 Booleans
C does not have a built in boolean type, like C++ does. Instead, anything that is equivalent to numeric 0 is false and everything that is not 0 is true. For example, this is an infinite loop, as 1 will always be true:
while(1){
    //loop forever
}
In C many different elements you may not expect can have a zero value. For example, NULL is equivalent to zero; it's the pointer value that is 0. The character \0 is also 0. Be on the lookout for these implicit 0's.
4.2 Advanced Types: struct
An incredibly useful tool in programming is to be able to create advanced types built upon basic types. Consider managing a pair of integers.
In practice, you could declare two integer variables and manage each separately, like so: int left; int right; left = 1; right = 2; But that is cumbersome and you always have to remember that the variable left is paired with the variable right, and what happens when you need to have two pairs or three. It just is not manageable. Instead, what we can do is declare a new type that is a structure containing two integers. struct pair{ //declaring a new pair type int left; //that containing two integers int right; }; struct pair p1; //declare two variables of that type struct pair p2; p1.left = 10; //assign values to the pair types p1.right = 20; p2.left = 0; p2.right = 5; The first part is to declare the new structure type by using the keyword struct and specify the basic types that are members of the structure. Next, we can declare variables of that type using the type name, struct pair. With those variables, we can then refer to the member values, left and right, using the . operator. One question to consider: How is the data for the structure laid out in memory? Another way to ask is: How many bytes does it take to store the structure? In this example, the structure contains two integers, so it is 8 bytes in size. In memory, it would be represented by two integers that are adjacent in memory space. struct pair .--------------------. |.--------..--------.| ||<- 4B ->||<- 4B ->|| || left || right || |'________''________'| '--------------------' <----- 8 bytes -----> Using the . and the correct name either refers to the first or second four bytes, or the left or right integer within the pair. When we print its size, that is exactly what we get. printf("%lu\n", sizeof(struct pair)); While the pair struct is a simple example, throughout the semester we will see many advanced structure types that combine a large amount of information. These structures are used to represent various states of the computer and convey a lot of information in a compact form. 4.3 Defining new types with typedef While structure data is ever present in the system, it is often hidden by declare new type names. The way to introduce a new type name or type definition is using the typedef macro. Here is an example for the pair structure type we declared above. typedef struct{ //declaring a new structure int left; //that containing two integers int right; } pair_t; //the type name for the structure is pair_t pair_t p1; //declare two variables of that type pair_t p2; p1.left = 10; //assign values to the pair types p1.right = 20; p2.left = 0; p2.right = 5; This time we declare the same type, a pair of two integers, but we gave that structure type a distinct name, a pair_t. When declaring something of this type, we do not need to specify that it is a structure, instead, we call it what it is, a pair_t. The compiler is going to recognize the new type and ensure that it has the properties of the structure. The suffix _t is typically used to specify that this type is not a basic type and defined. This is a convention of C, not a rule, but it can help guide you through the moray of types you will see in this class. 5 Pointers and Arrays 5.1 Pointers In C, pointers play a larger role than in C++. Recall that a pointer is a data type whose value is a memory address. A pointer must be declared based on what type it references; for example, int * are pointers to integers and char * are pointers to chars. Here are some basic operations associated with pointers. 
int * p : pointer declaration
*p      : pointer dereference, follow the pointer to the value
&a      : address of the variable a
p = &a  : pointer assignment, p now references a
*p = 20 : assignment via a dereference, follow the pointer and assign the value to what it references

Individually, each of these operations can be difficult to understand. It helps to follow a stack diagram, where variables and values are modeled. For the purposes of this class, we will draw stack diagrams like this:

+----------+-------+
| variable | value |
+----------+-------+

If we have a pointer variable, then we'll do this:

+----------+-------+
| pointer  |  .----+---->
+----------+-------+

This will indicate that the value of the pointer is a memory address that references some other memory. To codify this concept further, let's follow a running example of the following program:

int a = 10, b;
int *p = &a;

a = 20;
b = *p;

*p = 30;
p = &b;

Let us walk through it step by step:

(1) Initially, a has the value 10, b has not been assigned to, and p references the value of a.

int a = 10, b;
int *p = &a;   // <-- (1)

a = 20;
b = *p;

*p = 30;
p = &b;

+---+----+
| a | 10 |<-.
+---+----+  |
| b |    |  |    arrow for pointer indicates
+---+----+  |    a reference
| p |  .----+
+---+----+

(2) Assigning to a changes a's value, and now p also references that value.

int a = 10, b;
int *p = &a;

a = 20;        // <-- (2)
b = *p;

*p = 30;
p = &b;

+---+----+
| a | 20 |<-.
+---+----+  |
| b |    |  |
+---+----+  |
| p |  .----+
+---+----+

(3) p is dereferenced with *, and the value that p referenced is assigned to b.

int a = 10, b;
int *p = &a;

a = 20;
b = *p;        // <-- (3)

*p = 30;
p = &b;

+---+----+
| a | 20 |<-.
+---+----+  |
| b | 20 |  |    *p means to follow pointer
+---+----+  |    to get value
| p |  .----+
+---+----+

(4) Assigning to *p stores the value in the memory that p references, changing a's value.

int a = 10, b;
int *p = &a;

a = 20;
b = *p;

*p = 30;       // <-- (4)
p = &b;

+---+----+
| a | 30 |<-.
+---+----+  |
| b | 20 |  |    assigning *p follows pointer
+---+----+  |    to store value
| p |  .----+
+---+----+

(5) Assigning to p requires an address; now p references the memory address of b.

int a = 10, b;
int *p = &a;

a = 20;
b = *p;

*p = 30;
p = &b;        // <-- (5)

+---+----+
| a | 30 |
+---+----+
| b | 20 |<-.
+---+----+  |
| p |  .----+
+---+----+

5.2 Pointers to structures

Just like for other types, we can create pointers to structured memory. Consider for example:

typedef struct{
  int left;
  int right;
} pair_t;

pair_t pair;

pair.left = 1;
pair.right = 2;

pair_t * p = &pair;

This should be familiar to you, as we can treat pair_t just like other data types, except we know that it is actually composed of two integers. However, now that p references a pair_t, how do we dereference it such that we get to the member data? Here is one way.

printf("pair: (%d,%d)\n", (*p).left, (*p).right);

Looking closely, you see we first use the * operator to dereference the pointer, and then the . operator to refer to a member of the structure. That is a lot of work, and C requires us to frequently access members of structures via a pointer reference. To alleviate that, C has a shortcut operation, the arrow or ->, which dereferences and then does the member reference for pointers to structures. Here is how that looks:

printf("pair: (%d,%d)\n", p->left, p->right);

p->right = 2017;
p->left = 1845;

5.3 Array Types

The last type is the array type, which provides a way for the program to declare an arbitrary amount of the same type in contiguous memory.
Here is a simple example with an array of integers:

int array[10]; //declare an array of 10 integers
int i;

//assign to the array
for(i=0;i<10;i++){
  array[i] = 2*i; //index times 2
}

//reference the array
for(i=0;i<10;i++){
  printf("%d:%d\n", i,array[i]);
}

aviv@saddleback: demo $ ./array-example
0:0
1:2
2:4
3:6
4:8
5:10
6:12
7:14
8:16
9:18

We declare an array using the [ ] following the variable name. We use the term index to refer to an element of an array. Above, the array array is of size 10, which means that we can use indexes 0 through 9 (computer scientists start counting at 0). To index the array, for both retrieval and assignment, we use the [ ] operators as well.

5.4 Arrays and Pointers

Now, it is time to blow your mind. It turns out that in C arrays and pointers are the same thing. Seriously. Well, not exactly the same, but basically the same. Let me demonstrate. First consider what happens when you assign a pointer to an array.

/*pointer-array.c*/
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

  int array[10];
  int i;

  int * p = array; //p points to array

  //assign to the array
  for(i=0;i<10;i++){
    array[i] = 2*i; //index times 2
  }

  //dereference p and assign 2017
  *p = 2017;

  //print the array
  for(i=0;i<10;i++){
    printf("%d:%d\n", i,array[i]);
  }

}

aviv@saddleback: demo $ ./pointer-array
0:2017
1:2
2:4
3:6
4:8
5:10
6:12
7:14
8:16
9:18

Notice that at index 0 the value is now 2017. Also notice that when we assigned the pointer value, we did not take the address of the array. That means p is really referencing the address of the first item in the array, and for that matter, so is array! It gets crazier because we can also use the [ ] operators with pointers. Consider this small change to the program:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

  int array[10];
  int i;

  int * p = array; //p points to array

  //assign to the array
  for(i=0;i<10;i++){
    array[i] = 2*i; //index times 2
  }

  //index p at 5 and assign 2017
  p[5] = 2017; //<---------------!!

  //print the array
  for(i=0;i<10;i++){
    printf("%d:%d\n", i,array[i]);
  }

}

aviv@saddleback: demo $ ./pointer-array-index
0:0
1:2
2:4
3:6
4:8
5:2017   //<---------!!!
6:12
7:14
8:16
9:18

In this case we indexed the pointer at 5 and assigned to it the value 2017, which resulted in that value appearing in the output. What is the implication of this? We know that p is a pointer, and we know that assigning to the value referenced by a pointer requires a dereference, so the [ ] must be a dereference operation. And it is. In fact we can translate the [ ] operation like so:

p[5] = *(p+5)

What the [ ] operation does is increment the pointer by the index and then dereference. As a stack diagram, we can visualize this like so:

.-------+------.
| array |  .---+--.
|-------+------|  |
|       | 0    |<-'<-.
|-------+------|     |
|       | 2    |     |
|-------+------|     |
|       | 4    |     |
|-------+------|     |
|       | 6    |     |
|-------+------|     |
|       | 8    |     |
|-------+------|     |
|       | 2017 |<----+----- p+5, array+5
|-------+------|     |
|       | 12   |     |
|-------+------|     |
|       | 14   |     |
|-------+------|     |
|       | 16   |     |
|-------+------|     |
|       | 18   |     |
|-------+------|     |
| p     |  .---+-----'
'-------+------'

This is called pointer arithmetic, which is a bit complicated, but we'll return to it later when discussing strings. The important takeaway is that there is a close relationship between pointers and arrays. And now you also know why arrays are indexed starting at 0 — it is because of pointer arithmetic.
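To convince yourself of this equivalence, here is a small self-contained check (an illustrative sketch, not a program from the original notes); it verifies that array[i] and *(array+i) read the same element, and that &array[i] and array+i are the same address:

/* index_check.c -- illustrative sketch, not one of the course demo programs */
#include <stdio.h>

int main(void){

  int array[4] = {10, 20, 30, 40};
  int i;

  for(i=0;i<4;i++){
    /* array[i] and *(array+i) read the same element,
       and &array[i] and array+i are the same address */
    printf("array[%d]=%d  *(array+%d)=%d  same address? %s\n",
           i, array[i], i, *(array+i),
           (&array[i] == array+i) ? "yes" : "no");
  }

  return 0;
}

Every line of output should report the same value twice and answer yes.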
The first item in the array is the same as just dereferencing the pointer to the array, thus occurring at index 0.

Earlier I described that relationship as the same, but they are not exactly the same. Where they differ is that pointers can be reassigned like any variable, but arrays cannot. They are constants. For example, this is not allowed:

int a[10];
int b[10];

int *p;

p = a; // ok
b = p; // not ok!

Array pointers are constant; we cannot reassign to them. The reason is obvious when you think about it: if we could reassign the array pointer, then how would we reclaim that memory? The answer is you could not. It would be lost.

In the next lessons, we will continue to look at arrays and pointers, but in the context of strings, which are simply arrays of characters with the property that they are null terminated.

5.5 Close connection between pointers and arrays

Recall that an array is a contiguous region of memory that stores a sequence of the same data items. We declare arrays statically using the [ ] symbols and a size, and you can also reference and assign to an array using the [ ] symbol.

int array[10];
int i;

for(i=0;i < 10; i++){
  array[i] = i*2;
}

Additionally, arrays and pointers are closely linked, and, in fact, an array variable is a special type of pointer whose value cannot change. When you declare an array:

int array[10];

You are asking C to do two things. First, this is a request to allocate 10 integers of memory, contiguously, or 40 bytes. The second part is to assign the address of that memory allocation to the variable array and make it constant so that the value that array references cannot change. Essentially, array is a pointer to the contiguous memory. We can then access the individual integers in that memory region using the [ ] operator. But we also know that this operation is equivalent to a dereference.

          .---.
array --> |   |   array[0] == *(array+0)
          +---+
          |   |   array[1] == *(array+1)
          +---+
          |   |   array[2] == *(array+2)
          +---+
          :   :   etc.
          '   '

When you index into an array, you are effectively following the pointer plus the index. That is, the operation array[i] says to follow the pointer referenced by the variable array, move i steps further, and then return the value found at that memory location. The concept of pairing arrays and pointers in this style is called pointer arithmetic; it is an incredibly powerful tool of C programming and is used a lot with C strings.

6 C Strings

A string in C is simply an array of char objects that is null terminated. Here's a typical C string declaration:

char str[] = "Hello!";

A couple things to note about the declaration:

- First, we declare str like an array, but we do not provide it a size.
- Second, we assign to str a quoted string.
- Finally, while we know that strings are NULL terminated, there is no explicit NULL termination.

We will tackle each of these in turn below.

6.1 Advanced Array Declarations

While the declaration looks awkward at first without the array size, this actually means that the size will be determined automatically by the assignment. All arrays can be declared in this static way; here is an example for an integer array:

int array[] = {1, 2, 3};

In that example, the array values are denoted using the { } and comma separated within. The length of the array is clearly 3, and the compiler can determine that by inspecting the static declaration, so the size is often omitted. However, that does not mean you cannot provide a size, for example

int array[10] = {1, 2, 3};

is also perfectly fine but has a different semantic meaning.
The first declaration (without a size) says to allocate only enough memory to store the statically declared array. The second declaration (with the size) says to allocate enough memory to store size items of the data type and initialize as many of them as the static declaration provides. You can see this actually happening in this simple program:

/*array_deceleration.c*/
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char * argv[]){

  int a[] = {1,2,3};
  int b[10] = {1,2,3};

  int i;

  printf("sizeof(a):%d sizeof(b):%d\n",
         (int) sizeof(a),
         (int) sizeof(b) );

  printf("\n");

  for(i=0;i<3;i++){
    printf("a[%d]: %d\n", i,a[i]);
  }

  printf("\n");

  for(i=0;i<10;i++){
    printf("b[%d]: %d\n", i,b[i]);
  }

}

aviv@saddleback: demo $ ./array_decleration
sizeof(a):12 sizeof(b):40

a[0]: 1
a[1]: 2
a[2]: 3

b[0]: 1
b[1]: 2
b[2]: 3
b[3]: 0
b[4]: 0
b[5]: 0
b[6]: 0
b[7]: 0
b[8]: 0
b[9]: 0

As you can see, both declarations work, but the allocation sizes are different. Array b is allocated to store 10 integers with a size of 40 bytes, while array a only allocated enough to store the static declaration. Also note that the allocation implicitly filled in 0 for the non statically declared array elements in b, which is behavior you'd expect.

6.2 The quoted string declaration

Now that you have a broader sense of how arrays are declared, let's adapt this to strings. The first thing we can try to declare is a string, that is, an array of char's, using a declaration like we had above.

char a[] = {'G','o',' ','N','a','v','y','!'};
char b[10] = {'G','o',' ','N','a','v','y','!'};

Just as before, we are declaring an array of the given type, which is char. We also use the static declaration for arrays. At this point we should feel pretty good — we have a string, but not really. Let's look at an example using this declaration:

/*string_declerations.c*/
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char * argv[]){

  char a[] = {'G','o',' ','N','a','v','y','!'};
  char b[10] = {'G','o',' ','N','a','v','y','!'};

  int i;

  printf("sizeof(a):%d sizeof(b):%d\n",
         (int) sizeof(a),
         (int) sizeof(b) );

  printf("\n");

  for(i=0;i<8;i++){
    printf("a[%d]: %c (%d)\n", i, a[i], a[i]);
  }

  printf("\n");

  for(i=0;i<10;i++){
    printf("b[%d]: %c (%d)\n", i, b[i], b[i]);
  }

  printf("\n");

  printf("a: %s\n", a);
  printf("b: %s\n", b);

}

aviv@saddleback: demo $ ./string_declerations
sizeof(a):8 sizeof(b):10

a[0]: G (71)
a[1]: o (111)
a[2]:   (32)
a[3]: N (78)
a[4]: a (97)
a[5]: v (118)
a[6]: y (121)
a[7]: ! (33)

b[0]: G (71)
b[1]: o (111)
b[2]:   (32)
b[3]: N (78)
b[4]: a (97)
b[5]: v (118)
b[6]: y (121)
b[7]: ! (33)
b[8]:   (0)
b[9]:   (0)

a: Go Navy!?@
b: Go Navy!

First observation: the sizeof the arrays matches our expectations. A char is 1 byte in size, and the arrays are allocated to match either the implicit size (8) or the explicit size (10). We can also print the arrays iteratively, and the ASCII values are inset to provide a reference. However, when we try to format print the string using the %s format, something strange happens for a that does not happen for b.

The problem is that a is not NULL terminated, that is, the last char numeric value in the string is not 0. NULL termination is very important for determining the length of the string. Without this special marker, the printf() function is unable to determine when the string ends, so it prints extra characters that are not really part of the string. We can change the declaration of a to explicitly NULL terminate like so:

char a[] = {'G','o',' ','N','a','v','y','!', '\0'};

The escape sequence '\0' is the null character, with numeric value 0, and now we have a legal string. But I think we can all agree this is a really annoying way to do string declarations using array formats, because all strings should be NULL terminated anyway.
Thus, the double quoted string shorthand is used.

char a[] = "Go Navy!";

The quoted string is the same as statically declaring an array with an implicit NULL termination, and it is ever so much more convenient to use. You can also explicitly declare the size, as in the example below, which declares the array with a size but also NULL terminates.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char * argv[]){

  char a[] = "Go Navy!";
  char b[10] = "Go Navy!";

  int i;

  printf("sizeof(a):%d sizeof(b):%d\n",
         (int) sizeof(a),
         (int) sizeof(b) );

  printf("\n");

  for(i=0;i<9;i++){
    printf("a[%d]: %c (%d)\n", i, a[i], a[i]);
  }

  printf("\n");

  for(i=0;i<10;i++){
    printf("b[%d]: %c (%d)\n", i, b[i], b[i]);
  }

  printf("\n");

  printf("a: %s\n", a);
  printf("b: %s\n", b);

}

aviv@saddleback: demo $ ./string_quoted
sizeof(a):9 sizeof(b):10

a[0]: G (71)
a[1]: o (111)
a[2]:   (32)
a[3]: N (78)
a[4]: a (97)
a[5]: v (118)
a[6]: y (121)
a[7]: ! (33)
a[8]:   (0)

b[0]: G (71)
b[1]: o (111)
b[2]:   (32)
b[3]: N (78)
b[4]: a (97)
b[5]: v (118)
b[6]: y (121)
b[7]: ! (33)
b[8]:   (0)
b[9]:   (0)

a: Go Navy!
b: Go Navy!

You may now be wondering what happens if you do something silly like this,

char a[3] = "Go Navy!";

where you declare the string to be of size 3 but assign a string requiring much more memory? Well … why don't you try writing a small program to find out what happens, which you will do in homework.

6.3 String format input, output, overflows, and NULL dereferences

While strings are not basic types, like numbers, they do have a special place in a lot of operations because we use them so commonly. One such place is in formats. You already saw above that %s is the format character to print a string, and it is also the format character used to scan a string. We can see how this all works using this simple example:

/*format_string*/
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

  char name[20];

  printf("What is your name?\n");

  scanf("%s",name);

  printf("\n");
  printf("Hello %s!\n",name);

}

There are two formats. The first will ask the user for their name, and the program reads the response using a scanf(). Looking more closely, when you provide name as the second argument to scanf(), you are saying: "Read in a string and write it to the memory referenced by name." Later, we can then print name using a %s in a printf(). Here is a sample execution:

aviv@saddleback: demo $ ./format_string
What is your name?
Adam

Hello Adam!

That works great. Let's try some other input:

aviv@saddleback: demo $ ./format_string
What is your name?
Adam Aviv

Hello Adam!

Hmm. That didn't work as expected. Instead of reading in the whole input "Adam Aviv" it only read a single word, "Adam". This has to do with the functionality of scanf(): "%s" does not refer to an entire line but just an individual whitespace separated string.

The other thing to notice is that the string name is of a fixed size, 20 bytes. What happens if I provide input that is longer … much longer.

aviv@saddleback: demo $ ./format_string
What is your name?
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdam

Hello AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdam!
*** stack smashing detected ***: ./format_string terminated
Aborted (core dumped)

That was interesting. The execution identified that you overflowed the string, that is, you tried to write more than 20 bytes. This caused a check to go off, and the program to crash. Generally, a segmentation fault occurs when you try to read or write invalid memory, i.e., outside the allowable memory segments. We can go even further with this example and come up with a name sooooooo long that the program crashes in a different way:

What is your Hello!
Segmentation fault (core dumped)

In this case, we got a segmentation fault. The scanf() wrote so far out of bounds of the array that it touched memory it was not allowed to access. This caused the segmentation fault.

Another way you can get a segmentation fault is by dereferencing NULL, that is, you have a pointer value that equals NULL and you try to follow the pointer to memory that does not exist.

/*null_print.c*/
#include <stdio.h>
#include <stdlib.h>

int main(int argc,char*argv[]){

  printf("This is a bad idea ...\n");

  printf("%s\n",(char *) NULL);

}

aviv@saddleback: demo $ ./null_print
This is a bad idea ...
Segmentation fault (core dumped)

This example is relatively silly, as I purposely dereference NULL by trying to treat it as a string. While you might not do it so blatantly, you will do something like this at some point. It is a mistake we all make as programmers, and it is a particularly annoying mistake that is inevitable when you program with pointers and strings. It can be frustrating, but we will also go over many ways to debug such errors throughout the semester.

7 String Library Functions

Working with strings is not as straightforward as it is in C++ because they are not basic types, but rather arrays of characters. Truth be told, in C++ they are also arrays of characters; however, C++ provides a special library that overloads the basic operations so you can treat C++ strings like basic types. Unfortunately, such conveniences are not possible in C. As a result, certain programming paradigms that would seem obvious to use in C do not do what you would expect them to do. Here's an example:

/*string_badcmp.c*/
#include <stdio.h>
#include <stdlib.h>

int main(int argc,char *argv[]){

  char str[20];

  printf("Enter 'Navy' for a secret message:\n");

  scanf("%s",str);

  if( str == "Navy"){
    printf("Go Navy! Beat Army!\n");
  }else{
    printf("No secret for you.\n");
  }

}

And if we run this program and enter in the appropriate string, we do not get the result we expect.

aviv@saddleback: demo $ ./string_badcmp
Enter 'Navy' for a secret message:
Navy
No secret for you.

What happened? If we look at the if statement expression:

if( str == "Navy" )

Our intuition is that this will compare the string str and "Navy" based on the values in the string, that is, is str "Navy"? But that is not what this is doing, because remember a string is an array of characters and an array is a pointer to memory, so the equality check tests whether str and "Navy" are stored in the same place in memory; it has nothing to do with the actual string contents. To see that this is the case, consider this small program, which also does not do what is expected:

/*string_badequals.c*/
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

  char s1[]="Navy";
  char s2[]="Navy";

  if(s1 == s2){
    printf("Go Navy!\n");
  }else{
    printf("Beat Army!\n");
  }

  printf("\n");

  printf("s1: %p \n", s1);
  printf("s2: %p \n",s2);

}

aviv@saddleback: demo $ ./string_badequals
Beat Army!

s1: 0x7fffe43994f0
s2: 0x7fffe4399500

Looking closely, although both s1 and s2 reference the same string values, they are not the same string in memory and have two different addresses. (The %p formats a memory address in hexadecimal.)

The right way to compare two strings is to compare each character, but that is a lot of extra code and something we don't want to write every time. Fortunately, it's been implemented for us along with a number of other useful functions in the string library.
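To appreciate what the library saves us from writing, here is a rough sketch of what a character-by-character comparison could look like; the function name my_strcmp() is made up for illustration and is not part of the standard library:

/* my_strcmp.c -- illustrative sketch; my_strcmp() is a made-up name, not a library function */
#include <stdio.h>

int my_strcmp(const char *s1, const char *s2){

  /* walk both strings while the characters match and are not the null terminator */
  while(*s1 && (*s1 == *s2)){
    s1++;
    s2++;
  }

  /* difference of the first mismatched characters:
     0 if the strings are equal, negative if s1 sorts first, positive if s2 sorts first */
  return (unsigned char)*s1 - (unsigned char)*s2;
}

int main(void){
  printf("%d\n", my_strcmp("Navy", "Navy"));  /* 0 */
  printf("%d\n", my_strcmp("Army", "Navy"));  /* negative */
  printf("%d\n", my_strcmp("Navy", "Army"));  /* positive */
  return 0;
}

A real library version is more heavily optimized, but the idea is the same, which is why we reach for the library instead of rolling our own.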
7.1 The string library string.h

To see all the goodness in the string library, start by typing man string in your linux terminal. Up will come the manual page for all the functions in the string library:

STRING(3)                Linux Programmer's Manual                STRING(3)

NAME
       stpcpy, strcasecmp, strcat, strchr, strcmp, strcoll, strcpy, strcspn,
       strdup, strfry, strlen, strncat, strncmp, strncpy, strncasecmp,
       strpbrk, strrchr, strsep, strspn, strstr, strtok, strxfrm, index,
       rindex - string operations

SYNOPSIS
       #include <strings.h>

       int strcasecmp(const char *s1, const char *s2);

       int strncasecmp(const char *s1, const char *s2, size_t n);

       char *index(const char *s, int c);

       char *rindex(const char *s, int c);

       #include <string.h>

       char *stpcpy(char *dest, const char *src);

       char *strcat(char *dest, const char *src);

       char *strchr(const char *s, int c);

       int strcmp(const char *s1, const char *s2);

       int strcoll(const char *s1, const char *s2);

       char *strcpy(char *dest, const char *src);

       size_t strcspn(const char *s, const char *reject);

       char *strdup(const char *s);

       char *strfry(char *string);

       size_t strlen(const char *s);

       ...

To use the string library, the only thing you need to do is include string.h in the header declarations. You can further explore the different functions of the string library within their own manual pages. The two most relevant to our discussion will be strcmp() and strlen(). However, I encourage you to explore some of the others; for example, strfry() will randomize the string to create an anagram – how useful!

7.2 String Comparison

To solve our string comparison dilemma, we will use the strcmp() function from the string library. Here is the relevant part of the man page:

STRCMP(3)                Linux Programmer's Manual                STRCMP(3)

NAME
       strcmp, strncmp - compare two strings

SYNOPSIS
       #include <string.h>

       int strcmp(const char *s1, const char *s2);
       int strncmp(const char *s1, const char *s2, size_t n);

DESCRIPTION
       The strcmp() function compares the two strings s1 and s2.  The
       strncmp() function is similar, except it compares only the first
       (at most) n bytes of s1 and s2.

The function comes in two varieties: one with a maximum length specified and one that relies on null termination. Both return the same kinds of values. If the two strings are equal, the return value is 0; if the first string is greater (later alphabetically), it returns a positive value; and if the first string is less than the second (earlier alphabetically), it returns a negative value. Plugging strcmp() into our secret message program, we get the desired results.

/*string_strncp.c*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc,char *argv[]){

  char str[20];

  printf("Enter 'Navy' for a secret message:\n");

  scanf("%s",str);

  if( strcmp(str,"Navy") == 0 ) {
    printf("Go Navy! Beat Army!\n");
  }else{
    printf("No secret for you.\n");
  }

}

aviv@saddleback: demo $ ./string_strcmp
Enter 'Navy' for a secret message:
Navy
Go Navy! Beat Army!

7.3 String Length vs String Size

Another really important string library function is strlen(), which returns the length of the string. It is important to differentiate the length of the string from the size of the string.

- string length: how many characters, not including the null character, are in the string
- sizeof: how many bytes are required to store the string

One of the most common mistakes when working with C strings is to consider the sizeof the string and not the length of the string, which are clearly two different values.
Here is a small program that demonstrates how this can go wrong quickly:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[]){

  char str[]="Hello!";
  char * s = str;

  printf("strlen(str):%d sizeof(str):%d sizeof(s):%d\n",
         (int) strlen(str), //the length of the str
         (int) sizeof(str), //the memory size of the str
         (int) sizeof(s)    //the memory size of a pointer
         );

}

aviv@saddleback: demo $ ./string_length
strlen(str):6 sizeof(str):7 sizeof(s):8

Note that when using strlen() we get the length of the string "Hello!", which has 6 letters. The size of the string str is how much memory is used to store it, which is 7 if you include the null terminator. However, things get bad when you have a pointer to that string, s. Calling sizeof() on s returns how much memory is needed to store s, which is a pointer and thus 8 bytes in size. That has nothing to do with the length of the string or the size of the string. This is why, when working with strings, you should always make sure to use the length, not the size.

8 Pointer Arithmetic and Strings

As noted many times now, strings are arrays, and as such, you can work with them as arrays using indexing with [ ]; however, often when programmers work with strings, they use pointer arithmetic. For example, here is a routine to print a string to stdout:

void my_puts(char * str){

  while(*str){
    putchar(*str);
    str++;
  }

}

This function my_puts() takes a string and will write the string, char by char, to stdout using the putchar() function. What might seem a little odd here is the use of the while loop, so let's unpack that:

while(*str)

What does this mean? First notice that str is declared as a char *, which is a pointer to a character. We also know that pointers and arrays are the same, so we can say that str is a string that references the first character in the string's array. Next, the *str operation is a dereference, which says to follow the pointer and retrieve the value that it references. In this case that would be a character value. Finally, the fact that this operation occurs in the expression part means that we are testing whether the value that the pointer references is not false, which is the same as asking if it is not zero, i.e., not the null character.

So while(*str) says to continue looping as long as the character that str references is not the null terminator. The pointer value of str does change in the loop: it is incremented, str++, on each iteration after the call to putchar(). Now, putting it all together, you can see that this routine will iterate through a string using a pointer until the NULL terminator is reached. Phew.

While this might seem like a backwards way of doing this, it is actually a rather common and straightforward programming practice with strings and pointers in general.

8.1 Pointer Arithmetic and Types

Something that you might have noticed is that we have been using pointer arithmetic for different types in the same way. That is, consider the two arrays below, one an array of integers and one a string:

int a[] = {0,1,2,3,4,5,6,7};
char str[] = "Hello!";

The integer array has 8 elements and the string has 7 bytes (6 characters plus the null terminator), but more importantly the elements are different sizes. Integers are 4 bytes, so storing 8 integers requires 4*8=32 bytes. Characters are 1 byte in size, so storing the 7-byte string requires just 7 bytes. In memory the two arrays may look something like this:

     <------------------------ 32 bytes ---------------------------->
     .---------------.----------------.--- - - - ---.----------------.
a -> |       0       |       1        |             |       7        |
     '---------------'----------------'--- - - - ---'----------------'

       .---.---.---.---.---.---.---.
str -> | H | e | l | l | o | ! | \0|
       '---'---'---'---'---'---'---'
       <-------- 7 bytes --------->

Now consider what happens when we use pointer arithmetic on these arrays to dereference the third index:

/*pointer_math.c*/
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

  int a[] = {0,1,2,3,4,5,6,7};
  char str[] = "Hello!";

  printf("a[3]:%d str[3]:%c\n", *(a+3),*(str+3));

}

aviv@saddleback: demo $ ./pointer_math
a[3]:3 str[3]:l

Knowing what you know about the sizes involved, something interesting is going on: the two additions do not move the same number of bytes. When you add 3 to the array of integers a, you adjust the pointer by 12 bytes so that you now reference the value 3. However, when you add 3 to the string pointer, you adjust the pointer by only 3 bytes to reference the value 'l'.

The reason for this is that pointer arithmetic takes typing into consideration. When you declare a pointer to reference a particular type, C is aware that adding to the pointer value should account for the size of the type being referenced. So when you add 1 to an integer pointer, you are moving the reference forward 4 bytes, since that is the size of an integer. If we were to print the pointer values (in hex) and do numerical arithmetic, we would see this to be true:

printf("a=%p a+3=%p (a+3-a)=%d\n",a,a+3, ((long) (a+3)) - (long) a);
printf("str=%p str+3=%p (str+3-str)=%d\n",str,str+3, ((long) (str+3)) - (long) str);

aviv@saddleback: demo $ ./pointer_math
a[3]:3 str[3]:l
a=0x7fffa5c4d260 a+3=0x7fffa5c4d26c (a+3-a)=12
str=0x7fffa5c4d280 str+3=0x7fffa5c4d283 (str+3-str)=3

In the first part, a+3 changes the pointer value by 0xc in hex, which is 12, while str+3 only changes the pointer value by 0x3, or 3 bytes. Treating the pointer values as longs and doing numerical arithmetic after the pointer arithmetic shows this even more starkly.

8.2 Character Arrays as Arbitrary Data Buffers

Now you may be wondering, how do I access the individual bytes of larger data types? The answer to this is the final peculiarity of character arrays in C. Consider that a char data type is 1 byte in size, which is the smallest data element we work with as programmers. Now consider that an array of n chars is exactly n bytes. So when we write something like:

char s[4];

What we are really saying is: "allocate 4 bytes of data." We like to think about storing a string of length 3 in that character array, with one byte for the null terminator, but we do not have to. In fact, any kind of data can be stored there as long as it is only 4 bytes in size. An integer is four bytes in size. Let's store an integer in s.
And when we dereference i to print those bytes as a number, we get:

aviv@saddleback: demo $ ./pointer_casting
*i = -1

Which is the signed value for all 1's (remember two's complement?). What we've just done is use characters as a generic container for data and then used pointer casting to determine how to interpret that data. This may seem crazy — it is — but it is what makes C so low level and useful. We often refer to character arrays as buffers because of this property of being arbitrary containers. A buffer of data is just a bunch of bytes, and a character array is the most direct way to access that data.

9 Double Arrays

We continue our discussion of data types in C by looking at double arrays, which are arrays of arrays. This will lead directly to command line arguments, as these are processed as an array of strings, which are arrays themselves, thus double arrays.

9.1 Declaring Double Arrays

Like single arrays, we can declare double arrays using the [ ], but with two. We can also do static declarations of values with { }. Here's an example:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

  int darray[][4] = { {0, 0, 0, 0},
                      {1, 1, 1, 1},
                      {2, 2, 2, 2},
                      {3, 3, 3, 3}};

  int i,j;

  for(i=0;i<4;i++){

    printf("darray[%d] = { ",i);

    for(j=0;j<4;j++){
      printf("%d ",darray[i][j]); //<---
    }

    printf("}\n");
  }

}

aviv@saddleback: demo $ ./int_doublearray
darray[0] = { 0 0 0 0 }
darray[1] = { 1 1 1 1 }
darray[2] = { 2 2 2 2 }
darray[3] = { 3 3 3 3 }

Each index in the array references another array. Like before, we allow C to determine the size of the outer array when declaring statically. However, you must define the size of the inner arrays. This is because of the way the memory is allocated. While the array example above is square in size, a double array can be asymmetric.

9.2 The type of a double array

Let's think a bit more about what a double array really is, given our understanding of the relationship between pointers and arrays. For a single array, the array variable is a pointer that references the first item in the array. For a double array, the array variable references a reference that references the first item in the first array. Here's a visual of the stack diagram:

                           .---.---.---.---.
               .---.  _.-> | 0 | 0 | 0 | 0 |  <-- darray[0]
darray ---->   | --+-'     '---'---'---'---'
               |---|       .---.---.---.---.
               | --+-----> | 1 | 1 | 1 | 1 |  <-- darray[1]
               |---|       '---'---'---'---'
               | --+-._    .---.---.---.---.
               |---|   '-> | 2 | 2 | 2 | 2 |  <-- darray[2]
               | --+-._    '---'---'---'---'
               '---'   '._ .---.---.---.---.
                        '> | 3 | 3 | 3 | 3 |  <-- darray[3]
                           '---'---'---'---'

If we follow the arrows, we see that the variable darray behaves like an int **. That means it is a pointer that references a memory address that stores another pointer that references the memory address of an integer. So when we say double array, we are also referring to double pointers. To demonstrate this further, we can even show the dereferences directly.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

  int darray[][4] = { {0, 0, 0, 0},
                      {1, 1, 1, 1},
                      {2, 2, 2017, 2},
                      {3, 3, 3, 3}};

  printf("*(*(darray+2)+2) = %d\n", *(*(darray+2)+2));
  printf(" daray[2][2] = %d\n", darray[2][2]);

}

aviv@saddleback: demo $ ./derefernce_doublearray
*(*(darray+2)+2) = 2017
 daray[2][2] = 2017

As you can see, it takes two dereferences to get to the integer value.

9.3 Array of Strings as Double Arrays

Now let us consider another kind of double array, an array of strings.
Recall that a C string is just an array of characters, so an array of strings is a double array of characters. One of the tricky parts of string arrays is the type declaration. Before, we declared arrays using the following notation:

char str1[] = "This is a string";

But now we know that the types of arrays and pointers are really the same, so we can also declare a string this way:

char * str2 = "This is also a string";

Note that there is a difference between these two declarations in how and where C actually stores the string in memory. Consider the output of this program:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

  char str1[] = "This is a string";
  char * str2 = "This is also a string";

  printf("str1: %s \t\t &str1:%p\n",str1,str1);
  printf("str2: %s \t &str2:%p\n",str2,str2);

}

aviv@saddleback: demo $ ./string_declare
str1: This is a string          &str1:0x7fff4344d090
str2: This is also a string     &str2:0x4006b4

While both strings print fine as strings, the memory addresses of the two strings are very different. One is located in the stack memory region and the other is in the data segment. In later lessons we will explore this further, but for the moment, the important takeaway is that we can now refer to strings as char * types.

Given we know that char * is the type of a string, then an array of char *'s is an array of strings.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

  char * strings[]={"Go Navy!",
                    "Beat Army!",
                    "Crash Airforce!",
                    "Destroy the Irish!"};

  int i;

  printf("strings: %p\n",strings);

  for(i=0;i<4;i++){
    printf("strings[%d]: '%s' %p\n",i,strings[i],strings[i]);
  }

}

aviv@saddleback: demo $ ./string_array
strings: 0x7fff7af68080
strings[0]: 'Go Navy!' 0x400634
strings[1]: 'Beat Army!' 0x40063d
strings[2]: 'Crash Airforce!' 0x400648
strings[3]: 'Destroy the Irish!' 0x400658

Like before, we can see that strings is a pointer that references a pointer to a char, and that's just an array of strings, or a double array. Another thing you may notice is that the lengths of the strings differ. This is possible because the array is declared with char * as the element type rather than char [], which changes how the array is stored.

10 Command Line Arguments

Now that you have seen an array of strings, where else does that type appear? In the arguments to the main() function. This is part of the command line arguments and is a very important part of systems programming. In your previous classes you have only accepted input from the user by reading from standard in using cin. While we will also use standard input, this class will require reading in more input from the user in the form of command line arguments. These will be used as basic settings for the program and are much more efficient than always reading in these settings from standard input.

You may recall that we already did some work with command line arguments. First we discussed the varied command line arguments for the UNIX command line utilities, like ls and cut and find. We also processed some command line arguments from

10.1 Understanding main() arguments

You may have noticed that I have been writing main functions slightly differently than you have seen them before.

//
//argument ____.    ._____ argument
//  count      |    |      variables
//             v    v
int main(int argc, char * argv[]);

The arguments to main correspond to the command line input. The first is the number of such arguments, and the second is a string array of the argument values.
Here's an example that illuminates the arguments:

/*print_args.c*/
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

  int i;

  for(i=0;i<argc;i++){
    printf("argv[%d] = %s\n",i,argv[i]);
  }

}

aviv@saddleback: demo $ ./print_args arg1 arg2 arg3 x y z adam aviv
argv[0] = ./print_args
argv[1] = arg1
argv[2] = arg2
argv[3] = arg3
argv[4] = x
argv[5] = y
argv[6] = z
argv[7] = adam
argv[8] = aviv

Looking at the program and its output, you can see that there is a correspondence between the arguments provided and the indexes in the array. It's important to note that the name of the program being run is argv[0], which means that all programs have at least one argument, the name of the program. For example:

aviv@saddleback: demo $ ./print_args
argv[0] = ./print_args

The name of the program is not compiled into the executable. It is instead passed as a true command line argument by the shell, which forks and executes the program. The mechanics of this will become clear later in the semester when we implement our own simplified version of the shell. To demonstrate this now, consider how argv[0] changes when I change the name of the executable:

aviv@saddleback: demo $ cp print_args newnameofprintargs
aviv@saddleback: demo $ ./newnameofprintargs
argv[0] = ./newnameofprintargs

aviv@saddleback: demo $ ./newnameofprintargs a b c d e f
argv[0] = ./newnameofprintargs
argv[1] = a
argv[2] = b
argv[3] = c
argv[4] = d
argv[5] = e
argv[6] = f

10.2 NULL Termination in args arrays

Another interesting construction of the argv array is that the array is NULL terminated, much like a string is null terminated. The reason for this is so the OS can determine how many arguments are present. Without null termination there would be no way to know the end of the array. You can use this fact when parsing the array by using pointer arithmetic and checking for a NULL reference:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

  char ** curarg;
  int i;

  for( curarg=argv , i=0 ;  //initialize curarg to argv array and i to 0
       *curarg != NULL;     //stop when curarg references NULL
       curarg++, i++){      //increment curarg and i

    printf("argv[%d] = %s\n", i, *curarg);

  }

}

aviv@saddleback: demo $ ./print_args_pointer a b c d e
argv[0] = ./print_args_pointer
argv[1] = a
argv[2] = b
argv[3] = c
argv[4] = d
argv[5] = e

Notice that the pointer incrementing over the argv array is of type char **. It's a pointer to a string, which is itself an array of chars, so it's a pointer to a pointer. (POINTERS ARE MADNESS!)

10.3 Basic Parsing of Command Line Arguments: atoi() and sscanf()

Something that you will often have to do when writing programs is parse command line arguments. The required error checking can be time consuming, but it is also incredibly important for the overall user experience of your program. Let's consider a simple program that will print a string a user specified number of times. We would like to execute it this way:

run_n_times 5 string

Where string is printed n times. Now, what we know about command line arguments is that they are processed as strings, so both string and 5 are strings. We need to convert "5" into an integer 5. There are two ways to do this. The first is atoi(), which converts a string to a number, but looking at the manual page for atoi() we find that atoi() does not detect errors.
For example, this bad command line argument will not be detected:

run_n_times notanumber string

Executing atoi("notanumber") will return 0, so a simple routine like:

int main(int argc, char * argv[]){

  int i;
  int n = atoi(argv[1]); //does not detect errors

  for(i=0;i<n;i++){
    printf("%s\n",argv[2]);
  }

}

will just print nothing and not report the error. While this might be reasonable in some settings, we might want to detect this error and let the user know.

Instead, we can convert argv[1] to an integer using scanf(), but we have another problem. We have only seen scanf() in the context of reading from standard input, but we can also have it read from an arbitrary string. That version of scanf() is called sscanf() and works like so:

int main(int argc, char * argv[]){

  int i;
  int n;

  if( sscanf(argv[1],"%d", &n) == 0 ){
    fprintf(stderr, "ERROR: require a number\n");
    exit(1); //exit the program
  }

  for(i=0;i<n;i++){
    printf("%s\n",argv[2]);
  }

}

Recall that scanf() returns the number of items that successfully matched the format string. So if no items match, then the user did not provide a number to match the %d format. So this program successfully error checks the first argument. But what about the second argument? What happens when we run with these arguments?

./run_n_times 5

There is no argv[2] provided, and worse, because the argv array is NULL terminated, argv[2] references NULL. When the printf() dereferences argv[2] it will cause a segmentation fault. How do we fix this? We also have to error check the number of arguments.

int main(int argc, char * argv[]){

  int i;
  int n;

  if(argc < 3){
    fprintf(stderr, "ERROR: invalid number of arguments\n");
    exit(1); //exit the program
  }

  if( sscanf(argv[1],"%d", &n) == 0 ){
    fprintf(stderr, "ERROR: require a number\n");
    exit(1); //exit the program
  }

  for(i=0;i<n;i++){
    printf("%s\n",argv[2]);
  }

}

And now we have properly error checked the user arguments to this simple program. As you can see, error checking is tedious but incredibly important.
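As an aside, a common alternative to sscanf() for this kind of parsing is strtol() from stdlib.h, which can also catch trailing junk such as 5abc. The sketch below is only an illustration and is not part of the original lab code:

/* parse_count.c -- illustrative sketch using strtol(); not from the course materials */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

  char *end;
  long n;

  if(argc < 3){
    fprintf(stderr, "usage: %s count string\n", argv[0]);
    return 1;
  }

  n = strtol(argv[1], &end, 10);       /* convert in base 10; end marks where parsing stopped */

  if(end == argv[1] || *end != '\0'){  /* no digits at all, or extra characters after the number */
    fprintf(stderr, "ERROR: '%s' is not a number\n", argv[1]);
    return 1;
  }

  while(n-- > 0){
    printf("%s\n", argv[2]);
  }

  return 0;
}

The same argument-count check still applies; strtol() simply reports a little more about how much of the string was actually numeric.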
https://www.usna.edu/Users/cs/aviv/classes/ic221/s18/units/02/unit.html
CC-MAIN-2018-22
refinedweb
10,705
71.34
An Introduction to Nokia Asha web apps

Introduction

This article provides a comprehensive introduction to Nokia Asha Web Apps. According to the W3C widget specification, widgets are client-side applications that are authored using Web standards, but whose content can also be embedded into Web documents. Series 40 web apps are based on the W3C Widget specification. Nokia has a set of tools for Series 40 web app development, and the apps are targeted to run on Series 40 mobile phones.

Tools Required

- For development and simulation of Nokia Asha Web Apps, download and install Nokia Web Tools.
- To deploy Nokia Asha Web Apps on a device, download Bluetooth Launcher and install it on the device. Follow this article to run Series 40 web apps on a device.
- A device that supports Series 40 web apps.

Nokia Asha Web Apps and Web Pages

Though Nokia Asha Web Apps are developed following web standards, they are not the same as web pages. A web app behaves like a standalone application, and its look and feel is customized according to the mobile device. More specifically, Nokia Asha Web Apps follow the Symbian WRT Widget standards with some major differences. Although Symbian WRT Widgets and Nokia Asha Web Apps both support HTML, JavaScript and CSS, the main difference is that Nokia Asha Web Apps have their own JavaScript library (MWL) and need Nokia Browser for Series 40 to run on the device.

- Nokia Browser for Series 40: Most Nokia Asha devices don't have enough CPU/RAM to run a full web browser, so Nokia introduced a lighter-weight client-side browser called the Nokia Browser for Series 40 Client. Users can browse and run the web app from the Web App Server through the Nokia Browser for Series 40 Client. The Nokia Browser for Series 40 proxy hosts the web app client and acts as a proxy between the Nokia Browser for Series 40 Client and the Web App Server. For more information see the Nokia Asha Web Apps Best Practices Guide.
- Mobile Web Library (MWL): Regular JavaScript is executed by the Nokia Browser for Series 40 proxy server. But there is a special kind of library called the Mobile Web Library which executes fully on the Nokia Browser for Series 40 Client and is used via a namespace called mwl. MWL supports simple CSS transitions/animations and handling gestures (swipe, long press). For more information see the Nokia Asha Web Apps Developer's Guide and API Reference.

Nokia Asha Web Apps Project Description

In this section we will create a basic project and explain each file in the project.

Step 1: Creating a New Project

- Open -> File -> New -> Series 40 Web App (wgt)
- Select Basic web app project from the Series 40 web apps template and then click Next.
- Give a name to the project in the Project Name section and then click Next. Here the project name used is MyWebApp.
- In the Empty project settings dialog change the information as you like and then click Finish. In this case we leave it as it is. You will see the MyWebApp project is created in the Project Explorer.
- Click on the + sign to the left of the project to expand it.
- MyWebApp is the root folder of the project.
- basic.css is the CSS file for style information; we can add one or more CSS files to the project.
- basic.js is the JavaScript file for application logic; we can add one or more JS files to the project.
- config.xml is the configuration document file. The main information that it contains is the starting point of the application.
- icon.png is the default icon of the application; we can also change it to one of our choice.
- index.html is the default entry point file according to the W3C.

Step 2: Run the app on the simulator

- Right click on the project (MyWebApp) in the project explorer and then click on Preview Web app. This will launch the application in the simulator.

Step 3: Packaging the application

- Right click on the project (MyWebApp) in the project explorer and then click on Package Web app. This will create the wgt file (MyWebApp.wgt in this case).

Step 4: Deploy the application on an S40 device

- Follow this link Series_40_Web_Apps_-_Run_your_app_on_device

Editing the Default Project

Once the project is created, double click the index.html file in the project explorer and open it. Under the <body> tag let's add Hello Series 40 Web App, save it and run the preview. This will display Hello Series 40 Web App on the simulator in the application screen. The <body onload="javascript:init();"> tag calls the init() function of the basic.js file when the <body> tag of the index.html file loads. We can add application logic in the init() function of basic.js (a minimal sketch of such a function is shown at the end of this article).

Tips and Tricks

Now that we are able to create an application, let's note down some points that we need to keep in mind while developing Series 40 web apps:

- Handling large amounts of data.
- Working with animation and its performance.
- The difference between a regular web app and a web app designed for Series 40 devices using Nokia Browser for Series 40.

Summary

Series 40 web apps are standalone web apps for Nokia Asha devices and run using Nokia Browser for Series 40. Nokia Browser for Series 40 is a light-weight, cloud-assisted web runtime and client-side browser, provided for a better browsing experience on Nokia Asha devices. Regular JavaScript is executed at the Nokia Browser for Series 40 proxy server. MWL is available for certain client-side operations such as UI implementation and effects.

Tutorials

- W3C Widget specification ()
- Nokia Asha Web Apps: Getting Started
- Nokia Asha Web Apps: Developer's Guide and API Reference
- Nokia Asha Web Apps: Best Practices
- Nokia Asha Web Apps: Platform Overview
- Nokia Asha Web Apps: Publishing Guide
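As promised above, here is a minimal illustrative sketch of a basic.js init() function; it uses only standard DOM calls, and the element id greeting is hypothetical, so it would have to exist in your index.html for this to do anything:

// basic.js -- minimal illustrative sketch; the element id "greeting" is hypothetical
function init() {
    // called by <body onload="javascript:init();"> when index.html finishes loading
    var el = document.getElementById("greeting");
    if (el) {
        el.innerHTML = "Hello Series 40 Web App";
    }
}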
http://developer.nokia.com/community/wiki/An_Introduction_to_Nokia_Asha_web_apps
CC-MAIN-2015-06
refinedweb
970
70.53
01 July 2011 11:22 [Source: ICIS news] SINGAPORE (ICIS)--Crude futures fell by more than $1/bbl on Friday, undermined by demand worries, following the release of weaker manufacturing data from China. At 09:45 GMT, August NYMEX light sweet crude futures were trading at $94.42/bbl, down by $1.00/bbl from the previous close. Earlier, the contract fell to an intra-day low of $94.18/bbl, down by $1.24/bbl. Meanwhile, concerns persisted over the ongoing disruption of oil supplies. OPEC secretary general Abdulla al-Badri demanded that the IEA action be halted earlier this week and suggested that it was possible OPEC may cut output in response. Despite weakness in Friday's trade, crude prices were still up substantially week on week, with ICE Brent futures around 6% higher and NYMEX WTI up by around 4%. Prices have been buoyed in the last few days by a larger-than-expected fall in US crude stocks and news
http://www.icis.com/Articles/2011/07/01/9474257/crude-falls-1bbl-on-weak-china-manufacturing-data.html
CC-MAIN-2015-18
refinedweb
167
63.29
Chapter 11. Transactions , javax:comp/UserTransaction is a standard JNDI name used to look up a user Chapter 2. Design, build and test web components the basic request processing facilities of servlets and JSP pages... and servlets that make up the Web application. A filter is declared using... to a group of servlets and static content resources by mapping a filter to a URL anyone willing to look over my code? (java) anyone willing to look over my code? (java) package inorder.without.cheats; import java.util.Arrays; import javax.swing.JOptionPane; public class... myself, it would be nice if it went through the array * and re arranged Free JSP Books of the nice features of servlets is that all of this form parsing is handled... This chapter discusses using HTML forms as front ends to servlets or other server-side... to collect data from the user and transmit it to the servlet. The following chapter Chapter 2. Design, build and test web components Design, develop and test Java Servlets, filters and listeners... projects enable you to create resources such as JavaServer Pages and servlets.... Any new servlets and JSP files that you expect to create should adhere servlets - Servlet Interview Questions will not be called again. have a nice time brother servlets servlets what is the duties of response object in servlets servlets servlets why we are using servlets Java Beans Books opens with a high-level look at component software, introducing the reader... of their benefits. The first chapter, "Introducing JavaBeans," also covers... computing. This chapter provides an explanation of other component models vs JSP - JSP-Servlet Servlets vs JSP What is the main difference between Servlets and JSP...)Java Server Pages is that they are document-centric. Servlets, on the other hand, look and act like programs. 4)JSP is basically used for presentation pdf chapter pdf chapter As we all have seen a pdf file, in the pdf file there are too many chapter. Do you... the constructor of the Paragraph pass a String. Now make a Chapter and inside Chapter 7. Validate, tune and troubleshoot an application within an IBM WebSphere Application Server environment examining Execution Flow-type views, look for threads or method calls that have Look and Feel - Swing AWT Look and Feel i m using netbeans for creating a swing GUI...... and i want to know how to give my gui the "Windows look" as by default it is set to "metal Look". Hi Friend, Please visit the following links Chapter 12. Exceptions Servlets Books Servlets Books  ... Courses Looking for short hands-on training classes on servlets..., conference speaker on servlets and JSP (JavaOne, International Conference for Java Chapter 5. EJB transactions Specification (see Chapter 17) includes six defined transaction attributes Chapter 14. Security Management Chapter 1. EJB Overview Chapter 9. EJB-QL Chapter 8. Entity Beans look up values help needed look up values help needed Hi, I have a table that returns rows with a total weight (say 212.5 for example). I have another look up table that has range of weights with a corresponding dollar value The look up table looks like Chapter 5. Client View of an Entity Chapter 13. Enterprise Bean Environment Changing Look and Feel of Swing Application Changing Look and Feel of Swing Application This section shows how to set different look and feel for your Swing Application. The look and feel feature of Java Swing Showing the Frame with Default Look and Feel Showing the Frame with Default Look and Feel  ... 
the frame in the default look and feel. Displayed frame has a combo box and a text... shot of frame with default look and feel: Here is the code of the program jsp -servlets jsp -servlets i have servlets s1 in this servlets i have created emplooyee object, other servlets is s2, then how can we find employee information in s2 servlets Servlets with Extjs Servlets with Extjs how to integrate servlets and extjs and also show database records in extjs grid using servlet how to get servlets json response. Can any one please help me Authentication in Servlets Authentication in Servlets What are different Authentication options available in Servlets JSP PDF books ; The First Servlets The previous chapter showed you how... want to really write a few servlets. Good. This chapter shows you how.... You can download these books and study it offline. The Servlets Servlets Programming Servlets Programming Hi this is tanu, This is a code for knowing... visit the following links: http Chapter 3. Develop clients that access the enterprise components Chapter 4. Session Bean Life Cycle Chapter 7. CMP Entity Bean Life Cycle Chapter 10. Message-Driven Bean Component Contract Chapter 2. Client View of a Session Bean Sessions in servlets Sessions in servlets What is the use of sessions in servlets? The servlet HttpSession interface is used to simulate the concept that a person's visit to a Web site is one continuous series of interactions Servlet Tutorials Links ; Java Servlet Technology: Servlets are the Java platform technology of choice for extending and enhancing Web servers. Servlets provide...), servlets are server- and platform-independent. This leaves you free to select
http://roseindia.net/tutorialhelp/comment/96767
CC-MAIN-2016-18
refinedweb
881
65.42
13 February 2009 14:48 [Source: ICIS news] By John Richardson Demand has also recovered, though on modest restocking. Converters were holding several weeks of inventory in the first half of 2008, says a Hong Kong-based trader. "This fell to as low as one day in Q4 but is now back up at one week. From naphtha through to resin, prices fell too far – an overreaction as everyone panicked. A recovery was inevitable but the question now is whether the recovery has been overcooked." Increased liquidity in Approval processes for loans have been simplified and there are reports that recently beefed-up labour and environmental rules can be ignored. "Some of the extra cash is flowing into fixed-asset investments, but a lot of it is also ending up in trade finance and with the traders who have bought petrochemical cargoes in the hope of making a killing," said a Singapore-based trader. "If it all goes pear-shaped and they have to liquidate their inventories at a loss, will they have to repay their loans? This is an interesting question, as this would counteract the boost that the extra lending has given to the economy. I suspect that the answer could, in some cases, be no." So there you have it: if any of the traders are aware that their lending has no strings attached, they can afford to take a risk-reduced punt. But for producers of petrochemicals, the danger is, of course, that the not-always-entirely-visible quantities of traders' inventory could suddenly flood the market and cause a downward price correction. Producer sentiment has definitely improved relative to the fourth quarter of 2008, and there is no disputing that there has been a real and tangible improvement in demand from some end-users. However, it doesn't take a rocket scientist to note the vulnerability of some downstream sectors compared with others. A lot of PE goes into packaging. Even in the midst of what is perhaps the worst economic crisis since the Great Depression people are not suddenly going to switch back to paper to wrap food. But while PE's price rally continued after the Lunar New Year, the phenol chain continued its decline as most of the consumer spending at the end of this chain is non-essential or discretionary. For instance, why rush out and buy new CDs and DVDs if you might be about to lose your job? Some polycarbonate (PC) also goes into construction as does phenolic resins. Upstream aromatics values have improved as a whole. Yet benzene is awful. It has traded at or below naphtha since November 2008, reflecting the need to run reformers for other reasons – and weakness in some benzene derivatives. Five weeks ago, toluene was at $470-480/tonne FOB (free on board) The rise in isomer-grade xylenes began earlier and has led to even healthier spreads over naphtha. Prices were at $690-710/tonne FOB "We've seen, as a result, several toluene disproportionation (TDP) units (which take toluene and convert it into xylenes) restart in But there are reports of Japanese TDP operators being a little more cautious. The surge in isomer-grade values is obviously being driven by the polyester chain. Paraxylene (PX) had risen to $890-910 FOB But again, there you have it: Some polyester plants are reported to be running at only 50% of capacity in The huge Chinese economic stimulus packages are going to help, but some commentators say that domestic polyester demand grew too quickly and was going to level off anyway – even without the economic crisis.
The blunt fact is that no matter what the government does to boost the economy, in the short term it cannot replace all the lost demand in the West for finished goods, from shirts to plastic toys to shoes and electronics. With the consumer-demand outlook at the very least uncertain – and probably set to get even weaker – are we therefore in the midst of a mini petrochemical-pricing bubble? Some support for aromatics will be offered by improving gasoline demand. There are already reports of a greater requirement for toluene and xylenes as octane boosters. The Asian gasoline demand is also set to continue growing, even if at lower rates. The fate of poor old benzene, therefore (which you obviously have to always take out of reformate) might be grim. How far could it fall below naphtha prices as reformers keep operating for gasoline, xylenes and toluene values – and to produce hydrogen for hydrocrackers? And what of the support from naphtha, which has, as was previously mentioned, been a big driver behind recent petrochemical price gains? “I am bearish on naphtha in the second half of this year,” said N Ravivenkatesh, a Singapore-based consultant with international energy consultancy Purvin & Gertz. “A reason is weaker petrochemical demand. For example, Northeast Asian naphtha demand (excluding He sees the overall Asian deficit slipping to well below its 2008 level of 3m tonnes. Big capacity build-ups in the More than 5m tonnes/year of C2 capacity alone is due on stream in the Naphtha, as with petrochemicals, had to rebound as prices had fallen too far. Refiners were caught unawares by the fourth-quarter crisis and made big operating cutbacks, along with their petrochemical customers. Crack spreads for gasoline, naphtha and other refinery products fell into deep negative territory in October-December, but have since rebounded very strongly, said Ravivenkatesh. The improvement in naphtha is the result of the refinery rate cuts, better demand from petrochemicals and reduced exports from Term naphtha requirements have been reduced by northeast Asian cracker operators because of the uncertain demand outlook. Spot premiums have, as a result, increased as the spot market has become more active. And because of deep operating rate cuts at Chinese refineries, “Now, though, the refinery and petrochemical players are trying to communicate better in order to better coordinate operating rates,” Ravivenkatesh said. What of crude prices? They might firm a little on likely further OPEC cut backs next month, but you cannot see a big upsurge in pricing without obvious signs of a sustained global economic recovery. Second-half March naphtha cargoes were at $464.50-$465.50/tonne CFR Japan, first-half April shipments were at $454-$455.50/tonne CFR Japan, with second-half April at $443.50-$445.50/tonne CFR Japan. Take away the argument of rising or firm feedstock costs and petrochemical producers will have one less reason for maintaining price increases. It’s all about the economy. But even when the world is eventually sure that the worst is over, the petrochemical industry will still have to confront its big supply overhang. We could see several more mini-pricing bubbles before the industry is out of the woods. At least, though, there will be some people making money out of entering and exiting these bubbles at the right
http://www.icis.com/Articles/2009/02/13/9192489/insight-making-the-most-of-a-mini-price-bubble.html
CC-MAIN-2014-52
refinedweb
1,159
59.43
Red Hat Bugzilla – Bug 481756 traceback when starting like a nonroot Last modified: 2009-12-14 16:10:39 EST Description of problem: traceback when starting like a nonroot Version-Release number of selected component (if applicable): virt-manager-0.5.3-8.el5 How reproducible: alwayd Steps to Reproduce: 1. su nonroot 2. virt-manager Actual results: traceback Expected results: It should ask for password up to pam, then star virt-manager Additional info: [testuser@dhcp-lab-159 root]$ virt-manager Xlib: connection to ":3.0" refused by server Xlib: No protocol specified Traceback (most recent call last): File "/usr/share/virt-manager/virt-manager.py", line 304, in ? main() File "/usr/share/virt-manager/virt-manager.py", line 224, in main import gtk File "/usr/lib64/python2.4/site-packages/gtk-2.0/gtk/__init__.py", line 76, in ? _init() File "/usr/lib64/python2.4/site-packages/gtk-2.0/gtk/__init__.py", line 64, in _init _gtk.init_check() RuntimeError: could not open displa i made mistake in first comment, version is latest: Version-Release number of selected component (if applicable): virt-manager-0.5.3-10.el5 I have problem with connection even with connection vie ssh -X ?: .qa.[root@i386-5s-m1 ~]# virt-manager Traceback (most recent call last): File "/usr/share/virt-manager/virt-manager.py", line 321, in ? main() File "/usr/share/virt-manager/virt-manager.py", line 241, in main import gtk File "/usr/lib/python2.4/site-packages/gtk-2.0/gtk/__init__.py", line 76, in ? _init() File "/usr/lib/python2.4/site-packages/gtk-2.0/gtk/__init__.py", line 64, in _init _gtk.init_check() RuntimeError: could not open display -------- only when I am on machine like a root, then virt-manager works Do you enter the root password or select 'Run Unprivileged' when the initial dialog pops up? Is anything in your set up out of the norm? (NFS home directory, running over vnc, ssh, etc.) Ping Petr, can you provide the info in Comment #2? Hi, i am sorry for late response There is everything in normal what I can say. I have no chance to choose between 'Run Unprivileged' or "something else". I log to whole system like a root then in terminal `su nonroot` and traceback with "opening window" is there no I found thet when I su with "-", like `su - nonroot` then it works as expected You can try it on my machine : dhcp-lab-159.englab.brq.redhat.com If you log into a machine as root, then 'su' to a regular user, graphical apps won't be able to access roots DISPLAY (this isn't specific to virt-manager, try launching gnome-display-properties or similar). You'll need to use 'su -'. So, closing as NOTABUG. Please reopen if I'm misunderstanding.
https://bugzilla.redhat.com/show_bug.cgi?id=481756
CC-MAIN-2017-34
refinedweb
469
59.8
SYNOPSIS#include <bobcat/onekey> Linking option: -lbobcat DESCRIPTIONOneKey objects may be used to realize `direct keyboard input': a pressed key becomes available without the need for pressing Enter. The characters are obtained from the standard input stream. Direct key entry remains in effect for as long as the OneKey object exists. Once the object is destroyed the standard input stream will return to its default mode of operation, in which input is `confirmed' by a newline character. NAMESPACEFBB All constructors, members, operators and manipulators, mentioned in this man-page, are defined in the namespace FBB. INHERITS FROM- ENUMERATIONThe OneKey::Mode enumeration is used to control echoing of returned characters. It has two values: - OFF: returned characters are not echoed to the standard output stream; - ON: returned characters are echoed to the standard output stream. CONSTRUCTORS - OneKey(OneKey::Mode state = OneKey::OFF): This constructor initializes the OneKey input object. By default, entered characters are not echoed. By constructing the object with the OneKey::ON argument, entered characters are echoed to the standard output stream. - This construct throws an Exception exception if it not properly complete. The constructor may fail for the following reasons: - the standard input stream is not a tty (e.g., when the standard input stream is redirected to a file); - the current state of the standard input stream can't be determined; - the standard input stream's state can't be changed to the `direct keyboard input' mode. The copy constructor (and the overloaded assignement operator) are not available. MEMBER FUNCTIONS - int get() const: Returns the next character from the standard input stream, without the need for pressing Enter. - void setEcho(OneKey::Mode state): Changes the echo-state of the OneKey object. The argument may be either OneKey::ON or OneKey::OFF. - void verify() const: Obsoleted, will be removed in a future Bobcat release. EXAMPLE /* driver.cc */ #include <iostream> #include <string> #include <bobcat/onekey> using namespace std; using namespace FBB; int main() { try { OneKey onekey; cout << "Usage: 1: next chars are echoed, 0: no echo, q: quits\n"; while (true) { char c; cout << "ready...\n"; cout << "Got character '" << (c = onekey.get()) << "'\n"; switch (c) { case '1': onekey.setEcho(OneKey::ON); break; case '0': onekey.setEcho(OneKey::OFF); break; case 'q': return 0; } } } catch (exception const &e) { cout << e.what() << endl; return 1; } } FILESbobcat/onekey - defines the class interface BUGSNone Reported. DISTRIBUTION FILES - bobcat_4.02.00-x.dsc: detached signature; - bobcat_4.02.00-x.tar.gz: source archive; - bobcat_4.02.00-x_i386.changes: change log; - libbobcat1_4.02.00-x_*.deb: debian package holding the libraries; - libbobcat1-dev_4.02.00-x_*.deb: debian package holding the libraries, headers and manual pages; - public archive location; BOBCATBobcat is an acronym of `Brokken's Own Base Classes And Templates'. COPYRIGHTThis is free software, distributed under the terms of the GNU General Public License (GPL). AUTHORFrank B. Brokken ([email protected]).
https://manpages.org/fbbonekey/3
CC-MAIN-2022-21
refinedweb
476
58.08
You are integrating with a 3rd party application that contains statistics on the most popular baby names for a given year. You have both high-level stats and per-name information you’d like to display. It’d be nice if you could write the code like this: class NamesController < ApplicationController def index @names = Names::Client.all_names end def show @name = Names::Client.find_name(params[:name]) end end The Client You could write a simple client like this: module Names class Client Name = Struct.new(:id, :name, :births) BASE_URL = "" def all_names fetch_data("/users"). map { |data| convert_to_name(data) } end private def fetch_data(path) HTTParty.get(BASE_URL + path). end def convert_to_name(data) Name.new(data["id"], data["name"], data["births"]) end end end Seems straightforward enough. Almost too easy. You’re about to hit your first roadblock. Pagination As you start using the API, you notice that some results seem to be missing. You take a closer look and notice that you’re always getting exactly 10 results from the API. The same 10 results. Aha! Looks like pagination! Like many APIs, this one paginates its data for performance since it’s a really large set. The items per page seems to be hard-coded to 10. You could write a method that fetches the 10 results for a given page number but that’s not how your application uses the data. You would like to be able to deal with the data as a single list. Breaking the data up into pages is an implementation detail of the API. It would be nice to model the data as a stream of data instead. Specifically, a lazy stream so that we only make the minimum number of HTTP requests. Enter the Enumerator. Enumerator You add a new method to the client to work with paginated results. This fetches a page and then yields the results one at a time until it runs out of local results. Then it makes a request for the next page and starts the process over again. The enumeration ends once an HTTP request responds with a non-200 response. def fetch_paginated_data(path) Enumerator.new do |yielder| page = 1 loop do results = fetch_data("#{path}?page=#{page}") if results.success? results.map { |item| yielder << item } page += 1 else raise StopIteration end end end.lazy end Note that appending ?page=#{page} to the end of the path is a bit naive and will only work with URLs that don’t have any other query parameters. For more complex URLs, you will want to use Ruby’s URI library. The client’s public all_names method doesn’t change much. The only difference is that it calls fetch_paginated_data instead of fetch_data. The API you’re integrating against returns an HTTP 404 response code for pages with no results so the Enumerator stops iterating when it gets a non-successful status code. For other API implementations, it may make sense to check on empty results instead. Some APIs provide links to the “next” page so you would check on that. The Bootic client has an example of this approach. module Names class Client Name = Struct.new(:id, :name, :births) BASE_URL = "" def all_names fetch_paginated_data("/users"). map { |data| convert_to_name(data) } end end end The show page Going back to our controller implementation: class NamesController < ApplicationController def index @names = Names::Client.all_names end def show @name = Names::Client.find_name(params[:name]) end end Getting all names now works the way you’d expect. But what about that show action? The API doesn’t provide a way to search. 
You could get all the results and then filter them in Ruby, but that would cause a lot of useless HTTP requests. How can you make the minimum number of requests to get the name you want? This is where the lazy Enumerator really pays off. This code does the minimum work needed to get us a result.

def find_name(name)
  all_names.detect { |n| n.name == name }
end

Too simple? Time to try it out! Sofia is the 28th name on the list (and therefore should be on page 3). If all works the way you expect, the client should only make requests for pages 1, 2, and 3 and stop once it finds Sofia. Success!

Extra

Want to play around with this concept? The code for the client as well as a sample server can be found on GitHub. The list of names used came from the US Social Security Administration's list of most popular names of 2015. Check out this article on lazy refactoring for a different use case of lazy enumerators.
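As a closing aside that is not part of the original post: the same lazy-pagination pattern translates naturally to Python generators. The sketch below is illustrative only — the base URL, the /users path and the JSON field names are assumptions mirroring the Ruby client above, and it relies on the requests library:

import requests

BASE_URL = "http://localhost:4567"  # hypothetical stand-in for the sample server

def paginated(path):
    # Yield one record at a time, fetching the next page only when needed.
    page = 1
    while True:
        response = requests.get(BASE_URL + path, params={"page": page})
        if not response.ok:
            return                  # stop the generator on a non-200 response
        for item in response.json():
            yield item
        page += 1

def find_name(name):
    # Stops issuing HTTP requests as soon as a match is found.
    return next((n for n in paginated("/users") if n["name"] == name), None)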
https://thoughtbot.com/blog/modeling-a-paginated-api-as-a-lazy-stream
CC-MAIN-2020-34
refinedweb
768
67.96
What you have is almost correct. var favoriteSiteName = '', favoriteSiteURL = ''; favoriteSiteName = prompt('What is your favorite web site?'); favoriteSiteURL = prompt('What is the URL of that site?'); document.write('<a href="' + favoriteSiteURL + '">' + favoriteSiteName + '</a>'); You were missing the semi-colons to indicate the end of the lines and adding the site name to the link. Also, the h1 and h2 elements don't belong inside the script element. JSFiddle. In your html content, you need to either change your relative links (ones that don't start with a /) to absolute links (ones that start with a /) or create a relative URI base in the <head> portion of your page: <base href="/"> <asp:TextBox</asp:TextBox> <br /> <asp:Button Page1.aspx.cs protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { if (Session["Name"] != null) txtName.Text = Session["Name"].ToString(); } } protected void Button1_Click(object sender, EventArgs e) { Session["Name"] = txtName.Text; Response.Redirect("Page2.aspx"); } Page2.aspx <asp:Button Page2.aspx.cs protected void Button1_Click(object sender, EventArgs e) { Response.Redirect("Page1.aspx"); } Set visible to false in markup. Add runat="server" to the element. Set visible to true in code behind once authenticated. <div class="footer" id="divAdmin" Visible="False" runat="server"> <ul> <li><a href="~/admin.aspx">Administration Page</a></li> </ul> </div> If Session("UserRole") = 1 Then divAdmin.Visible = True End If Create a form with all the fields you require. Submit that form. set the action to the target jsp. In the request object of target jsp you will get all the data you entered. You can specify the action like this form id="login-form" action="Your action jsp url" I have solved my problem by sending authentication with api. In api version 1.1 we need to send authentication with request otherwise we get 400 bad request(invalid request). This is the coding example for sending authentication. private OAuthConsumer consumer; consumer = new CommonsHttpOAuthConsumer( Constants.CONSUMER_KEY, Constants.CONSUMER_SECRET); consumer.setTokenWithSecret(token,secret_token); .......... HttpGet httpGet = new HttpGet(params[0]); consumer.sign(hpost); we need sign-core and sign common post jars. Do it using focus and hover a:hover {background:yellow; color:navy;} a:focus {background:yellow; color:navy;} Here is a demo I would recommend using a dummy value in either the date_joined or last_login fields for the User model, if you wanted to implement a solution without adding a new field just to serve as an indicator flag. The potential problem with using the is_active flag is that you may end up wanting to use this flag to "blacklist" or effectively delete an account without actually removing their record from your database (to prevent re-creation of the account with same credentials, etc.) If you start relying on the fact that the password is not set, then if the day ever comes where you want to implement alternative login methods (OAuth) then your password field will not be set in these cases as well (huge conflict). Very simple. When the user creates the link, capture the datetime and save it in a database. When ever the link is clicked, check if the age of the link is older than 3 days. If it is, deactivate the link and/or remove it. This is the easiest way. Another way is to have a process running which will check and automatically disable the link after 3 days. This is usually more difficult to achieve though. 
Write an HttpFilter to handle this and map it in web.xml. if(request.getParameter("language")==null) { String userLocale = request.getHeader("Accept-Language"); Locale locale = request.getLocale(); if(req.getRequestUrl().contains("?")) { response.sendRedirect(req.getRequestUrl()+"&language="_locale.getLanguage()); } else { response.sendRedirect(req.getRequestUrl()+"?language="_locale.getLanguage()); } } I would write it like this: $("a").click(function () { '', { id: $(this).data("linkid"), htmltitle: (document.title + " <br/> " + '(' + document.location.href + ')'), clicktitle: $(this).attr("title") + " <br/> (" + $this.attr("href") + ")"; }); }); And then do my markup like this: <a href="" data-google 1</a> <a href="" data-fb 2</a> I tested the code below and it worked without problems <!doctype html> <html> <head> <script src=""></script> <script type="text/javascript"> $(document).ready(function () { // hide #back-top first $("#back-top").hide(); // fade in #back-top $(function () { $(window).scroll(function () { if ($(this).scrollTop() > 500) { $('#back-top').fadeIn(); } else { $('#back-top').fadeOut(); } }); // scroll body try the following: $("#pr1").click(function(e){ e.preventDefault(); $("#content").html('<iframe id="idIframe" src="research/index.html#pr1" frameborder="0" height="100%" width="98%"></iframe>'); If you have no www. prefix, you can do it with JavaScript this way: href="javascript:window.location.href = window.location.href.replace('//', '//m.');void(0)". Can someone have a look at the above code and let me know if I've got something wrong or if this cannot be done? Yes, your Redirect directive is wrong. It isn't a regular expression and thus you can't have stuff like * in it, unless you're actually trying to match against a "*". You're probably not going to be able to do the redirect from the htaccess file in your root directory as Mediawiki uses its own htaccess. So in the htaccess file that's in the /wiki/ folder, try adding this above any rewrite rules that are already there: RewriteCond %{QUERY_STRING} ^title=User:([^&]+) [NC] RewriteRule ^ /memberlist.php?mode=viewprofile&un=%1 [L,R=301] That should take care of the URLs that look like: /index.php?title=User:XYZ RewriteRule User:(.*)$ /memberlist.php?mode=viewprofile Try writing a dynamic web page where the link in the HTML is constructed on the server-side based on the information provided in your form on the previous page. Some popular dynamic web page languages are JSP, ASP, and PHP. Note that dynamic web pages have to be hosted on a web server. You cannot do this with pure HTML. You will need either Javascript or server-side code, in which you retrieve the source of Reddit's homepage, parse it to pick out the link you want, and use its href attribute's value as the href value for your own link. A simple way would be Save a random string(token) on the Server and deliver it to every web page Inside the web page run a js-script(Ajax), which ask the Server for the token - if a new token is returned, refresh the page if you want to refresh all the pages, just set a new token Ok, your question is bit confusing basically, the link to me only produces XML, it's pretty standard stuff, although bizarrely If i request say I only get JSON data, this doesn't matter even if I make the request with a content type of 'application/xml' and vice versa with the provided url and a content type of 'application/json'. 
What you need to do is in your responses from the HTTP requests look at the content type of the body being returned, if it's 'application/xml' use your XML parser, if it's 'application/json' then use a JSON parser. I can't say much more than this with out documentation of the site about what it's meant to provide. Ideally if you can use all JSON then you should it's better for mobiles to consume over XML. An example of how to parse JSON I did not understand very well if what you're trying to get is to show some info inside the a tag or in other part of the page. But in any of the cases that will also depend on what does the showInfo() function return. If the first case is what you need and showInfo() returns the text with the info, i believe this is what you're looking for: <a id="showInfoBtn" rel="nofollow" title="SomeTitle" href="#" onclick="this.innerHTML=showInfo(event)"> Info showed here after click </a> Here's a demo with the example ;) Replace echo $row2['first_name']; with return $row2['first_name'];. If you want to get some value from the function, you should use return operator to pass the value back. It has nothing to do with printing the value with print or echo. When creating a Fragment, create public methods to set data: public class MyFragment extends Fragment { private TextView text1; private TextView text2; @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View layout = LayoutInflater.from(getActivity()).inflate(R.layout.simple_list_item_2,container,false); text1 = (TextView) layout.findViewById(R.id.text1); text2 = (TextView) layout.findViewById(R.id.text2); return super.onCreateView(inflater, container, savedInstanceState); } public void setData(String t1, String t2){ text1.setText(t1); text2.setText(t2); } } When adding a fragment in parent activity, give it a unique tag: MyFragment f = new MyFragm *Don't do it* But it is possible with the user's permission; you can achieve something like this (took me a while to find a website that was happy in a frame) window.onbeforeunload = function () { window.setTimeout(function () { // escape function context window.location = ''; }, 0); window.onbeforeunload = null; // necessary to prevent infinite loop // that kills your browser return 'Press "Stay On Page" to go to BBC website!'; // pressing leave will still leave, but the GET may be fired first anyway } Demo Since all you are doing is adding some text to the page, you can probably speed up the process by just editing the pages' content streams directly. Merging has to deal with fonts, other resources, crop boxes, etc. that slow down the process significantly. If you actually need to modify some of these things, the solution becomes more complex. Some example code: TEXT_STREAM = [] # The PS operations describing the creation of your text def add_text(page): "Add the required text to the page." contents = page.getContents() if contents is None: stream = ContentStream(TEXT_STREAM, page.pdf) else: contents.operations.extend(TEXT_STREAM) You can use the html5 data attribute to add arbitrary data to DOM elements. btn.setAttribute('data-array','one,two,three'); Get the data back by using split; Here's an example: jsFiddle It's not working because you're not actually targeting the anchor tag in your second style declaration. Try: #page #list a { color: #fff; /* white */ } This won't change the hover state, but it will change the color. 
To change the hover state you'll have to target that specifically to override the generic a:hover you've defined above. You'll also want to define the ':visited' styles, otherwise you might not see these changes reflected (because the link has already been visited). #page #list a:visited { color: #fff; /* white */ } Or you could define all of the states in one block, assuming they're supposed to have identical styles. #page #list a, #page #list a:visited, #page #link a:hover, #page #list a:active { color: #fff; /* white */ } In case it's not clear, the you'd have to add an iframe to like this: <iframe src=""></iframe> Try: has_many :pins, through: :pin_boards, source: :players The key is to make the source be the exact same as the relationship in the join model. So, since your join model (PinBoard) almost certainly says has_many :players then you have to match that word players exactly. So you were just missing the 's'. I would suggest you to replace the Multidimensional Array with a Dictionary (see this) Using Dicitonary<K, V> you will be able to add "new rows" (KeyValuePairs<K, V> (see this) to be precise), find, remove and modify. They are represented with a unique Key that will give you a value. For example: public class Program { public void Main() { Dictionary<int, Meeting> meetingDictionary = new Dictionary<int, Meeting>(); //`int` on the left will be the key, and `Meeting` on the right is the value //int represents a unique Id of the meet event. //To add a new meeting: var date = new DateTime(2013, 7, 21); //date representor of the meet var meetingA = new Meeting("Obamba Blackinson", date); //object. You can add the additional information to the Exception.Data property. These values will be added to the LogEntry's ExtendedProperties. You can then configure the formatter to output the specific Extended Property keys in the formattedMessage string. See: Exception.Data info is missing in EntLib log That's probably the easiest way but then the information is buried within a string in the database table. Another approach would be to add the data directly to the database. This will involve changes to the out of the box database schema and stored procedures as well as creation of a custom trace listener. You can find two different designs at the Enterprise Library Sample Projects page: The Extended Properties Trace Listener with Custom Exception Handler contains a custom trace listen Run this code when being_at_mall.html file loads $(function () { $(document.body).animate({ scrollTop: $(location.hash).offset().top }, { duration: 'slow', easing: 'swing' }); }); Demo You can see in the demo that it scrolls when the page loads. The above code uses location.hash which will give you #article i.e. the id to the element in being_at_mall.html. Also this page* lists more options on how to scroll smoothly along with demo. * I'm the author of that page. I created that when I was learning jQuery. You need to use a Page access token if you want the post to appear 'as the page' - if you use your user access token the behaviour depends on some settings in the page management interface of Facebook and may or may not As far as I know, the only way to achieve this is to use AJAX to load your content. That is, instead of reloading the entire page for every link, you just reload portions of content depending on what link was clicked. 
Let's say your page has a container like this: <div id="content">...</div> Then, for every link (matching a certain condition if you don't want it to happen for all the links), you execute an AJAX request similar to $("a").click(function() { $this = $(this); var data = {}; // Put additional parameters here $.get($this.attr("href"), data , function(response) { $("#content").html(response); } } As @rofavadeka has mentioned, you can also modify the URL so that it reflects the change of location, using something like: window.history. You can use this link FHSTwitterEngine and check the code given in answer.. FHSTwitterEngine is latest library with compatible with v1.1.. its should solve your problem @seshuk had given you the correct answer, from the MSDN documentation Only domain users have a principal name. Access to the principal name can be blocked by privacy settings (for example, if the UserInformation::NameAccessAllowed property is false). If access is blocked, this method returns an empty string. This method requires the enterpriseAuthentication capability. A Hotmail id, by definition, would not be a domain user. Additionally, you'd need a company account to publish such an app, since it requires the enterprise authentication capability. Use a string to hold selected number,use the stored value in the string to search for the required row.Use another string to pull the information from the database and use # to split the columns.Save each column to a specific string which you will assign to where the value should be auto filled. If you are using the WCF in web application you can store the user details in cookie as the CodeProject article does or you can follow WCF Authentication as here: msdn.microsoft.com/en-us/library/ff405740.aspx Use below code to get user: var currentUser = new WindowsPrincipal((WindowsIdentity) System.Threading.Thread.CurrentPrincipal.Identity);
http://www.w3hello.com/questions/Adding-IM-link-to-an-ASP-page-from-user-information
CC-MAIN-2018-17
refinedweb
2,580
55.54
I was reading the article “PHP Sucks, But It Doesn’t Matter” by Jeff Atwood. In the comments he writes:

That said, I absolutely think it’s important for PHP devs to be aware of the architectural limitations of PHP, and understand the alternatives.

What are those limitations and how do they compare with other scripting / weakly typed languages? Also, what are the alternatives in those conditions where the limitations need to be avoided?

Answer: There are basically two real limitations I see:

PHP is a fully synchronous language. This has an impact on which things you can easily implement in PHP and which you cannot. For example, implementing a long-polling driven chat application isn’t trivial, because PHP would need to block one process per chatter. I’m not saying it’s impossible; you can hack around this limitation using some PHP daemon library. I’m just saying that this is one of the cases where other languages, like JavaScript, are more appropriate (NodeJS).

PHP is slow. Please don’t take this as an offense. It’s a fact that PHP – as implemented by Zend – is slow compared to other scripting languages. This is typically no problem when building websites, but you obviously can’t do certain things: implementing a ray tracer in PHP is definitely a bad idea – whereas in JavaScript you could do this.

But apart from that, I think that PHP is pretty multi-purpose. You can use it for nearly anything – and I do 😉

Answer: Take a look at the date. The article was written in 2008. That means that if you look at the PHP 5.3 advantages, you’ll find many things, like closures and namespaces, which were in other languages before. Some of them have already affected the architecture of famous frameworks, like Symfony. And that list will never be complete. Meanwhile, I meet a lot of people who think that a “weakly typed” language is an architectural problem in itself. Then, some people think that inline regex syntax is a good thing in, for example, JavaScript, but others think that this “different language” must be written down in string constants, as in PHP. Etc.

Answer: I’ll take a stab at this without getting too into the nitty gritty:

- The initial design of PHP as a collection of functions still shows through.
- Object-oriented patterns that have been implemented in the latest PHP 5 releases are still half-baked, lack multiple inheritance (or “mixins”) and proper module support, and are designed to be backwards compatible with the CoF (collection of functions) design.
- Method overriding and callbacks are not supported natively.
- Closures. They are there, but they are very weak.
- Errors vs Exceptions — methods are inconsistent in which they use (thanks again to the CoF design), and error handling is half-baked.

I’m sure I’m stepping on someone’s toes here and I’ll get an angry mob after me, but I’m also sure that I still didn’t hit everything. It’s largely subjective, but it’s easy to see what there is to dislike when you stack PHP up next to Ruby or Python.
Also, people are quick to grasp examples like hello world or connecting to MySQL, doing a query and looping over the result – but that’s it, that’s where ALL tutorials stop. I still haven’t found a tutorial that covers the following: - what is a framework and what it helps with - what are data structures and data types (explained in a way a normal human can understand) - what is an array, what are array dimensions, how do arrays work, what are arrays useful for - what is object oriented code, why object oriented code, how does PHP do it, what is considered good, why are there patterns out there and so on As you can see, a beginner programmer won’t be bothered to learn all of those points outlined above, I know that because I was a beginner too and did all the mistakes beginners do. However, even if someone doesn’t know how to program, they can still create useful applications. Many popular scripts were written by people who knew WHAT they want to achieve, however they did not know HOW to properly design the environment (framework) in which they’ll deploy their php code. That’s why we see scripts that become incredibly popular due to the ease of their use as a regular user which are hard to extend looking at it as a developer, using weird function names, odd coding conventions and no commenting. Also, what’s ridiculous is saying PHP is slow which is absolute nonsense. When I come across such statement, I want to shoot myself in the head for reading such a blog entry. One has to know several things before making such a statement: PHP is a scripting language, that means the interpreter is invoked every time someone requests a PHP page which takes A LOT of CPU power. That has been addressed by using bytecode caching mechanisms such as APC which stores the copy of pre-interpreted piece of the script in memory. The results are impressive, and I kid you not – execution for some of my scripts goes from 20 milliseconds to 1 microsecond, where some benefit “only” 5 times. That’s on a system that serves 1 thousand concurrent users. Now, if someone wants to tell me that 1 microsecond is slow (or 5 milliseconds) – I’ll take that as bullshit. PHP is not the only thing involved in serving a web page. There’s also underlying server (Apache) which has its own issues, there’s MySQL which runs queries – and who says all queries are optimal? There’s the network, there’s the hard disk, there’s the CPU, there are tons of other processes. Configure Apache with PHP-FPM, optimize MySQL to perform good on 8 core machine with 16 gigs of ram, use APC, use Memcache – and voila, you’re getting an incredibly fast, scalable system capable of serving an incredible amount of traffic. Languages that PHP is being compared to are often “compiled” into the bytecode and then executed by You can extend PHP yourself. Assuming a PHP function is slow, NOTHING prevents anyone from creating a .so in C that is able to do the job faster and then hooking everything up trough extension in PHP. Not that I know what would such job be that would require that, but such a thing IS possible. Sadly, and I say sadly because I respect certain programmers and admire their work (and I’m by no means a PHP fanboy) but it hurts me when I see uneducated, inexperienced and subjective comments about a tool which spreads misinformation. As for why big websites use PHP – because it’s fast. Because they laid proper foundations before starting the projects. Because it’s free, extensible and scalable. Because it follows C syntax. 
Because you can extend it when you need it to be faster. Because it runs on a free operating system. Because it’s easy to use. Answer: PHP is improving everyday. It is open source and used all around the world. That said, when you have a problem, it is most probable that you will find your solution or get help faster than any other language. The very reason of this article, I believe it is simple. If you (or in that matter any other programmer) used to code in C++, Java etc.. they had a lot of possibilities such as OOP coding and PHP was limited in the beginning. It is a good thing that PHP has many built-in functions / methods / classes so you don’t have to spend hours to code some function / class / method which PHP already has. You don’t have to (and you shouldn’t) try to memorize all these functions. It is useless to memorize all of them (which one is doing what, how to use it etc). Imagine you are working on some project which took you 4-5 months to finish (yeah big one (: ) You are not going to use all these functions in all the projects and eventually you will forget what they were doing since you don’t use them often. The point is, you should know the syntax of PHP. When you need to do something, check first if PHP already has what you want to do in its library. Check the manual to see how to use it. This way, you will also LEARN (NOT MEMORIEZE) the ones you use often and this information will be hard to forget. PHP or any other programming language is just like a normal language which we humans use daily to communicate with each other. If you don’t use it, you will forget. PHP 5.3 and above brought many features. Static feature is one of the biggest feature for me. It made my life so much easier that I can’t even begin to describe. Since PHP is that famous and open source web scripting language, Facebook developer team created HipHop. What HipHop does is, takes the data from PHP and sends it to C++. C++ does all the process and sends back results to PHP for outputting. The whole idea of HipHop was to make Facebook use less servers & improve the page display times. Now you tell me if this seems limited and / or slow to you? Answer: i dont think there is anything like ‘architectural limitation’ for php. developer knowledge limitation might be the reason. read this . most of the time, non-world-class developer does not know how they could use php to its full capabilities. Answer: I would assume he is referring to the fact the the OOP portions of PHP are not the greatest compared to languages that are purely object oriented. Answer: Architecture Limitations in Addition to nikic’s answer Writing extensions for PHP is a PITA. Not as bad as with Perl or Java, but not as easy as it could be. The ease of extensibility champion is still TCL which hails from the early 90’s. Nearly any C function taking char* can be made into a TCL extension. Embedding PHP in other systems. mod_php, gtk.php.net shows it can be done, but Guile and TCL are much easier to embed.
https://exceptionshub.com/architecture-what-are-the-architectural-limitations-of-php.html
CC-MAIN-2021-39
refinedweb
1,817
69.72
Object-oriented programming fits extremely well with GUI programming. Using OOP, we can easily make reusable GUI components. This post shows off a quit button that confirms if the user really wants to exit the application. I got the idea from Programming Python: Powerful Object-Oriented Programming. Here is my implementation of the idea, followed by the explanation.

from tkinter import *
from tkinter.messagebox import *


class TkQuitButton(Frame):
    def __init__(self, master=None,
                 auto_pack=True,  # Pack the widget automatically?
                 dialog_title='Confirm',  # Title text for the askyesno dialog
                 dialog_message='Are you sure you want to quit?',  # Message for the askyesno dialog
                 button_text='Quit',  # The quit button's text
                 quit_command=Frame.quit,  # Callback command for when the user wants to quit
                 cnf={}, **kw):
        super().__init__(master, cnf, **kw)

        # Store our fields for later use
        self.quit_command = quit_command
        self.dialog_message = dialog_message
        self.dialog_title = dialog_title

        self.quit_button = Button(self, text=button_text, command=self.quit)
        # Notice that self.quit_button is exposed. This can be useful for when
        # the client code needs to configure this frame on its own

        if auto_pack:
            self.pack_widget()

    # This lets us override the packing
    def pack_widget(self):
        self.pack()
        self.quit_button.pack(side=LEFT, expand=YES, fill=BOTH)

    def quit(self):
        # Call the askyesno dialog
        result = askyesno(self.dialog_title, self.dialog_message)
        if result:
            # If they confirm, then execute the stored callback command
            self.quit_command(self)


if __name__ == '__main__':
    TkQuitButton().mainloop()

This class extends the Frame class and packs a button into the frame. There are a few configuration properties that can be passed into the constructor. For example, we can auto_pack the widget so that it uses a default packing scheme. We can specify a custom title for the askyesno dialog as well as a custom message. The code even lets us customize the text of the button. We can also use a custom quit handler function should we choose to do so.

We can customize how the widget is packed in two different ways. The first way is to access the quit_button property and call pack on it directly. This allows client code to change how this widget is packed into their GUIs. Alternatively, we can subclass this class and just override the pack_widget method.

The default quit implementation uses Tk's askyesno dialog function to display a confirmation dialog to the user. Its title and message are set to the self.dialog_title and self.dialog_message properties. This allows us to customize what the user sees when the dialog is displayed. If the user presses yes, then we call the self.quit_command function, which defaults to Frame.quit. Note that since self.quit is a method, we can customize this behavior by overriding it. Since we use a callback handler to exit the application, we can also customize how the application exits as well.
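As a quick usage sketch (my own illustration, not from the original post — the shutdown function and the label strings are hypothetical), here is how the configuration options and a custom quit handler might be wired up. The handler receives the TkQuitButton instance and destroys the whole window instead of just stopping the main loop:

from tkinter import Tk

# Assumes the TkQuitButton class defined above is in the same module.
def shutdown(widget):
    # Custom quit handler: tear down the entire window once the user confirms.
    widget.master.destroy()

if __name__ == '__main__':
    root = Tk()
    TkQuitButton(root,
                 dialog_title='Leave?',
                 dialog_message='Close the window without saving?',
                 button_text='Exit',
                 quit_command=shutdown)
    root.mainloop()

Subclassing and overriding pack_widget works the same way when a different layout manager, such as grid, is needed.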
https://stonesoupprogramming.com/2018/01/30/python-advanced-quit-button/?shared=email&msg=fail
CC-MAIN-2021-39
refinedweb
464
52.66
I would like to read only the last line of a text file (I'm on UNIX, can use Boost). All the methods I know require scanning through the entire file to get the last line, which is not efficient at all. Is there an efficient way to get only the last line? Also, I need this to be robust enough that it works even if the text file in question is constantly being appended to by another process.

Use seekg to jump to the end of the file, then read back until you find the first newline. Below is some sample code off the top of my head using MSVC.

#include <iostream>
#include <fstream>
#include <sstream>

using namespace std;

int main()
{
    string filename = "test.txt";
    ifstream fin;
    fin.open(filename);
    if(fin.is_open()) {
        fin.seekg(-1, ios_base::end);            // go to one spot before the EOF

        bool keepLooping = true;
        while(keepLooping) {
            char ch;
            fin.get(ch);                         // Get current byte's data

            if((int)fin.tellg() <= 1) {          // If the data was at or before the 0th byte
                fin.seekg(0);                    // The first line is the last line
                keepLooping = false;             // So stop there
            }
            else if(ch == '\n') {                // If the data was a newline
                keepLooping = false;             // Stop at the current position.
            }
            else {                               // If the data was neither a newline nor at the 0 byte
                fin.seekg(-2, ios_base::cur);    // Move to the front of that data, then to the front of the data before it
            }
        }

        string lastLine;
        getline(fin, lastLine);                  // Read the current line
        cout << "Result: " << lastLine << '\n';  // Display it

        fin.close();
    }

    return 0;
}
https://codedump.io/share/Nb8PORq73iK4/1/c-fastest-way-to-read-only-last-line-of-text-file
CC-MAIN-2018-13
refinedweb
297
81.73
Receiver Operating Characteristic Curves Demystified (in Python)

In this blog, I will reveal, step by step, how to plot an ROC curve using Python. After that, I will explain the characteristics of a basic ROC curve.

By Syed Sadat Nazrul, Analytic Scientist

In Data Science, evaluating model performance is very important and the most commonly used performance metric is the classification score. However, when dealing with fraud datasets with heavy class imbalance, a classification score does not make much sense. Instead, Receiver Operating Characteristic or ROC curves offer a better alternative. ROC is a plot of signal (True Positive Rate) against noise (False Positive Rate). The model performance is determined by looking at the area under the ROC curve (or AUC). The best possible AUC is 1, while the worst is 0.5 (the 45-degree random line). Any value less than 0.5 means we can simply do the exact opposite of what the model recommends to get the value back above 0.5.

While ROC curves are common, there aren't that many pedagogical resources out there explaining how they are calculated or derived. In this blog, I will reveal, step by step, how to plot an ROC curve using Python. After that, I will explain the characteristics of a basic ROC curve.

Probability Distribution of Classes

First off, let us assume that our hypothetical model produced some probabilities for predicting the class of each record. As with most binary fraud models, let's assume our classes are 'good' and 'bad' and the model produced probabilities of P(X='bad'). To create this probability distribution, we plot a Gaussian distribution with different mean values for each class. For more information on Gaussian distribution, read this blog.

import numpy as np
import matplotlib.pyplot as plt

def pdf(x, std, mean):
    cons = 1.0 / np.sqrt(2*np.pi*(std**2))
    pdf_normal_dist = cons*np.exp(-((x-mean)**2)/(2.0*(std**2)))
    return pdf_normal_dist

x = np.linspace(0, 1, num=100)
good_pdf = pdf(x, 0.1, 0.4)
bad_pdf = pdf(x, 0.1, 0.6)

Now that we have the distribution, let's create a function to plot the distributions.

def plot_pdf(good_pdf, bad_pdf, ax):
    ax.fill(x, good_pdf, "g", alpha=0.5)
    ax.fill(x, bad_pdf, "r", alpha=0.5)
    ax.set_xlim([0,1])
    ax.set_ylim([0,5])
    ax.set_title("Probability Distribution", fontsize=14)
    ax.set_ylabel('Counts', fontsize=12)
    ax.set_xlabel('P(X="bad")', fontsize=12)
    ax.legend(["good","bad"])

Now let's use this plot_pdf function to generate the plot:

fig, ax = plt.subplots(1, 1, figsize=(10,5))
plot_pdf(good_pdf, bad_pdf, ax)

Now that we have the probability distribution of the binary classes, we can use this distribution to derive the ROC curve.

Deriving ROC Curve

To derive the ROC curve from the probability distribution, we need to calculate the True Positive Rate (TPR) and False Positive Rate (FPR). For a simple example, let's assume the threshold is at P(X='bad')=0.6. True positive is the area designated as "bad" on the right side of the threshold. False positive denotes the area designated as "good" on the right of the threshold. Total positive is the total area under the "bad" curve while total negative is the total area under the "good" curve. We divide the value as shown in the diagram to derive TPR and FPR. We derive the TPR and FPR at different threshold values to get the ROC curve.
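Before sweeping over every threshold, it can help to compute the two rates once at the single example threshold of 0.6 described above. The short sketch below is my own illustration (not part of the original article; the names threshold, mask, TP and FP are arbitrary) and reuses the x, good_pdf and bad_pdf arrays already defined:

threshold = 0.6
mask = x >= threshold              # region to the right of the threshold

TP = np.sum(bad_pdf[mask])         # "bad" area to the right of the threshold
FP = np.sum(good_pdf[mask])        # "good" area to the right of the threshold
total_positive = np.sum(bad_pdf)   # total area under the "bad" curve
total_negative = np.sum(good_pdf)  # total area under the "good" curve

TPR = TP / total_positive          # True Positive Rate at this threshold
FPR = FP / total_negative          # False Positive Rate at this threshold
print("TPR = %.3f, FPR = %.3f" % (TPR, FPR))

Repeating this calculation while sliding the threshold from one end of the axis to the other produces the list of (FPR, TPR) points that make up the curve.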
Using this knowledge, we create the ROC plot function:

def plot_roc(good_pdf, bad_pdf, ax):
    #Total
    total_bad = np.sum(bad_pdf)
    total_good = np.sum(good_pdf)
    #Cumulative sum
    cum_TP = 0
    cum_FP = 0
    #TPR and FPR list initialization
    TPR_list=[]
    FPR_list=[]
    #Iterate through all values of x
    for i in range(len(x)):
        #We are only interested in non-zero values of bad
        if bad_pdf[i]>0:
            cum_TP+=bad_pdf[len(x)-1-i]
            cum_FP+=good_pdf[len(x)-1-i]
        FPR=cum_FP/total_good
        TPR=cum_TP/total_bad
        TPR_list.append(TPR)
        FPR_list.append(FPR)
    #Calculating AUC, taking the 100 timesteps into account
    auc=np.sum(TPR_list)/100
    #Plotting final ROC curve
    ax.plot(FPR_list, TPR_list)
    ax.plot(x,x, "--")
    ax.set_xlim([0,1])
    ax.set_ylim([0,1])
    ax.set_title("ROC Curve", fontsize=14)
    ax.set_ylabel('TPR', fontsize=12)
    ax.set_xlabel('FPR', fontsize=12)
    ax.grid()
    ax.legend(["AUC=%.3f"%auc])

Now let's use this plot_roc function to generate the plot:

fig, ax = plt.subplots(1, 1, figsize=(10,5))
plot_roc(good_pdf, bad_pdf, ax)

Now plotting the probability distribution and the ROC next to each other for visual comparison:

fig, ax = plt.subplots(1, 2, figsize=(10,5))
plot_pdf(good_pdf, bad_pdf, ax[0])
plot_roc(good_pdf, bad_pdf, ax[1])
plt.tight_layout()

Effect of Class Separation

Now that we can derive both plots, let's see how the ROC curve changes as the class separation (i.e. the model performance) improves. We do this by altering the mean value of the Gaussian in the probability distributions.

x = np.linspace(0, 1, num=100)
fig, ax = plt.subplots(3, 2, figsize=(10,12))
means_tuples = [(0.5,0.5),(0.4,0.6),(0.3,0.7)]
i=0
for good_mean, bad_mean in means_tuples:
    good_pdf = pdf(x, 0.1, good_mean)
    bad_pdf = pdf(x, 0.1, bad_mean)
    plot_pdf(good_pdf, bad_pdf, ax[i,0])
    plot_roc(good_pdf, bad_pdf, ax[i,1])
    i+=1
plt.tight_layout()
https://www.kdnuggets.com/2018/07/receiver-operating-characteristic-curves-demystified-python.html
CC-MAIN-2020-29
refinedweb
1,105
58.99
This is a playground to test code. It runs a full Node.js environment and already has all of npm’s 400,000 packages pre-installed, including mysql with all npm packages installed. Try it out: require()any package directly from npm awaitany promise instead of using callbacks (example) This service is provided by RunKit and is not affiliated with npm, Inc or the package authors. This is a Node.js module available through the npm registry. Before installing, download and install Node.js. Node.js 0.6 or higher is required. Installation is done using the npm install command: $ npm install mysql For information about the previous 0.9.x releases, visit the v0.9 branch. Sometimes I may also ask you to install the latest version from Github to check if a bugfix is working. In this case, please do: $ npm install mysqljs/mysql This is a node.js driver for mysql. It is written in JavaScript, does not require compiling, and is 100% MIT licensed. Here is an example on how to use it: var mysql = require('mysql'); var connection = mysql.createConnection({ host : 'localhost', user : 'me', password : 'secret', database : 'my_db' }); connection.connect(); connection.query('SELECT 1 + 1 AS solution', function (error, results, fields) { if (error) throw error; console.log('The solution is: ', results[0].solution); }); connection.end();'d like to discuss this module, or ask questions about it, please use one of the following: mysql) The recommended way to establish a connection is this: var mysql = require('mysql'); var connection = mysql.createConnection({ host : 'example.org', user : 'bob', password : 'secret' }); connection.connect(function(err) { if (err) { console.error('error connecting: ' + err.stack); return; } console.log('connected as id ' + connection.threadId); }); However, a connection can also be implicitly established by invoking a query: var mysql = require('mysql'); var connection = mysql.createConnection(...); connection.query('SELECT 1', function (error, results, fields) { if (error) throw error; // connected! }); Depending on how you like to handle your errors, either method may be appropriate. Any type of connection error (handshake or network) is considered a fatal error, see the Error Handling section for more information.. This is called "collation" in the SQL-level of MySQL (like utf8_general_ci). If a SQL-level charset is specified (like utf8mb4) then the default collation for that charset is used. (Default: 'UTF8_GENERAL_CI') timezone: The timezone configured on the MySQL server. This is used to type cast server date/time values to JavaScript Dateobject and vice versa. This can be 'local', 'Z', or an offset in the form +HH:MMor -HH:MM. (Default: 'local') connectTimeout: The milliseconds before a timeout occurs during the initial connection to the MySQL server. (Default: 10000). Can be true/ falseor an array of type names to keep as strings. (Default: false) debug: Prints protocol details to stdout. Can be true/ falseor an array of packet type names that should be printed. (Default: false) trace: Generates stack traces on Errorto include call site of library entrance ("long stack traces"). Slight performance penalty for most calls. (Default: true) multipleStatements: Allow multiple mysql statements per query. Be careful with this, it could increase the scope of SQL injection attacks. (Default: false) flags: List of connection flags to use other than the default ones. It is also possible to blacklist default ones. For more information, check Connection Flags. 
ssl: object with ssl parameters or a string containing name of ssl profile. See SSL options.. The ssl option in the connection options takes a string or an object. When given a string, it uses one of the predefined SSL profiles included. The following profiles are included: "Amazon RDS": this profile is for connecting to an Amazon RDS server and contains the certificates from and When connecting to other servers, you will need to provide an object of options, in the same format as tls.createSecureContext. Please note the arguments expect a string of the certificate, not a file name to the certificate. Here is a simple example: var connection = mysql.createConnection({ host : 'localhost', ssl : { ca : fs.readFileSync(__dirname + '/mysql-ca.crt') } }); You can also connect to a MySQL server without properly providing the appropriate CA to trust. You should not do this. var connection = mysql.createConnection({ host : 'localhost', ssl : { // DO NOT DO THIS // set up your ca correctly to trust the connection rejectUnauthorized: false } });. Rather than creating and managing connections one-by-one, this module also provides built-in connection pooling using mysql.createPool(config). Read more about connection pooling. Create a pool and use it directly: var mysql = require('mysql'); var pool = mysql.createPool({ connectionLimit : 10, host : 'example.org', user : 'bob', password : 'secret', database : 'my_db' }); pool.query('SELECT 1 + 1 AS solution', function (error, results, fields) { if (error) throw error; console.log('The solution is: ', results[0].solution); }); This is a shortcut for the pool.getConnection() -> connection.query() -> connection.release() code flow. Using pool.getConnection() is useful to share connection state for subsequent queries. This is because two calls to pool.query() may use two different connections and run in parallel. This is the basic structure: var mysql = require('mysql'); var pool = mysql.createPool(...); pool.getConnection(function(err, connection) { if (err) throw err; // not connected! // Use the connection connection.query('SELECT something FROM sometable', function (error, results, fields) { // When done with the connection, release it. connection.release(); // Handle error after the release. if (error) throw error; //. When a previous connection is retrieved from the pool, a ping packet is sent to the server to check if the connection is still good. Pools accept all the same options as a connection. When creating a new connection, the options are simply passed to the connection constructor. In addition to those options pools accept a few extras: acquireTimeout: The milliseconds before a timeout occurs during the connection acquisition. This is slightly different from connectTimeout, because acquiring a pool connection does not always involve making a connection. (Default: 10000)) The pool will emit an acquire event when a connection is acquired from the pool. This is called after all acquiring activity has been performed on the connection, right before the connection is handed to the callback of the acquiring code. pool.on('acquire', function (connection) { console.log('Connection %d acquired', connection.threadId); }); The pool will emit a connection event when a new connection is made within the pool. If you need to set session variables on the connection before it gets used, you can listen to the connection event. 
pool.on('connection', function (connection) { connection.query('SET SESSION auto_increment_increment=1') }); The pool will emit an enqueue event when a callback has been queued to wait for an available connection. pool.on('enqueue', function () { console.log('Waiting for available connection slot'); }); The pool will emit a release event when a connection is released back to the pool. This is called after all release activity has been performed on the connection, so the connection will be listed as free at the time of the event. pool.on('release', function (connection) { console.log('Connection %d released', connection.threadId); }); When you are done using the pool, you have to end all the connections or the Node.js event loop will stay active until the connections are closed by the MySQL server. This is typically done if the pool is used in a script or when trying to gracefully shutdown a server. To end all the connections in the pool, use the end method on the pool: pool.end(function (err) { // all connections in the pool have ended }); The end method takes an optional callback that you can use to know when all the connections are ended. Once pool.end is called, pool.getConnection and other operations can no longer be performed. Wait until all connections in the pool are released before calling pool.end. If you use the shortcut method pool.query, in place of pool.getConnection → connection.query → connection.release, wait until it completes. pool.end calls connection.end on every active connection in the pool. This queues a QUIT packet on the connection and sets a flag to prevent pool.getConnection from creating new connections. All commands / queries already in progress will complete, but new commands won't execute. PoolCluster provides multiple hosts connection. (group & retry & selector) // create var poolCluster = mysql.createPoolCluster(); // add configurations (the config is a pool config object) poolCluster.add(config); // add configuration with automatic name poolCluster.add('MASTER', masterConfig); // add a named configuration poolCluster.add('SLAVE1', slave1Config); poolCluster.add('SLAVE2', slave2Config); // remove configurations poolCluster.remove('SLAVE2'); // By nodeId poolCluster.remove('SLAVE*'); // By target group : SLAVE1-2 // }); // A pattern can be passed with * as wildcard poolCluster.getConnection('SLAVE*', 'ORDER', function (err, connection) {}); // The pattern can also be a regular expression poolCluster.getConnection(/^SLAVE[12]$/, function (err, connection) {}); // of namespace : of(pattern, selector) poolCluster.of('*').getConnection(function (err, connection) {}); var pool = poolCluster.of('SLAVE*', 'RANDOM'); pool.getConnection(function (err, connection) {}); pool.getConnection(function (err, connection) {}); pool.query(function (error, results, fields) {}); // close all connections poolCluster.end(function (err) { // all connections in the pool cluster have ended });) restoreNodeTimeout: If connection fails, specifies the number of milliseconds before another connection attempt will be made. If set to 0, then node will be removed instead and never re-used. (Default:. Re-connecting a connection is done by establishing a new connection. Once terminated, an existing connection object cannot be re-connected by design. With Pool, disconnected connections will be removed from the pool freeing up space for a new connection to be created on the next getConnection call. 
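The pool behavior described above suggests a simple recovery pattern. The sketch below is our own illustration, not part of the library's API: it retries a query once when the server drops the connection mid-query, relying on the pool to hand out a fresh connection. The connection details are placeholders; PROTOCOL_CONNECTION_LOST is the error code this library reports for a lost connection.
var mysql = require('mysql');

var pool = mysql.createPool({
  connectionLimit : 10,
  host            : 'example.org', // placeholder credentials
  user            : 'bob',
  password        : 'secret',
  database        : 'my_db'
});

// Hypothetical helper: not part of the mysql API.
function queryWithRetry (sql, values, callback) {
  pool.query(sql, values, function (error, results, fields) {
    if (error && error.fatal && error.code === 'PROTOCOL_CONNECTION_LOST') {
      // The dead connection has already been removed from the pool, so a
      // single retry will be served by a brand new connection.
      return pool.query(sql, values, callback);
    }
    callback(error, results, fields);
  });
}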
The most basic way to perform a query is to call the .query() method on an object (like a Connection, Pool, or PoolNamespace instance). The simplest form of . query() is .query(sqlString, callback), where a SQL string is the first argument and the second is a callback: connection.query('SELECT * FROM `books` WHERE `author` = "David"', function (error, results, fields) { // error will be an Error if one occurred during the query // results will contain the results of the query // fields will contain information about the returned results fields (if any) }); The second form .query(sqlString, values, callback) comes when using placeholder values (see escaping query values): connection.query('SELECT * FROM `books` WHERE `author` = ?', ['David'], function (error, results, fields) { //.query({ sql: 'SELECT * FROM `books` WHERE `author` = ?', timeout: 40000, // 40s values: ['David'] }, function (error, results, fields) { // error will be an Error if one occurred during the query // results will contain the results of the query // fields will contain information about the returned results fields (if any) }); Note that a combination of the second and third forms can be used where the placeholder values are passed as an argument and not in the options object. The values argument will override the values in the option object. connection.query({ sql: 'SELECT * FROM `books` WHERE `author` = ?', timeout: 40000, // 40s }, ['David'], function (error, results, fields) { // error will be an Error if one occurred during the query // results will contain the results of the query // fields will contain information about the returned results fields (if any) } ); If the query only has a single replacement character ( ?), and the value is not null, undefined, or an array, it can be passed directly as the second argument to .query: connection.query( 'SELECT * FROM `books` WHERE `author` = ?', 'David', function (error, results, fields) { // error will be an Error if one occurred during the query // results will contain the results of the query // fields will contain information about the returned results fields (if any) } ); Caution These methods of escaping values only works when the NO_BACKSLASH_ESCAPES SQL mode is disabled (which is the default state for MySQL servers). In order to avoid SQL Injection attacks, you should always escape any user provided data before using it inside a SQL query. You can do so using the mysql.escape(), connection.escape() or pool.escape() methods: var userId = 'some user provided value'; var sql = 'SELECT * FROM users WHERE id = ' + connection.escape(userId); connection.query(sql, function (error, results, fields) { if (error) throw error; // ... }); Alternatively, you can use ? characters as placeholders for values you would like to have escaped like this: connection.query('SELECT * FROM users WHERE id = ?', [userId], function (error, results, fields) { if (error) throw error; // ... }); Multiple placeholders are mapped to values in the same order as passed. For example, in the following query foo equals a, bar equals b, baz equals c, and id will be userId: connection.query('UPDATE users SET foo = ?, bar = ?, baz = ? WHERE id = ?', ['a', 'b', 'c', userId], function (error, results, fields) { if (error) throw error; // ... }); This looks similar to prepared statements in MySQL, however it really just uses the same connection.escape() method internally. Caution This also differs from prepared statements in that all ? are replaced, even those contained in comments and strings. 
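Because the substitution is purely textual, you can see the effect of that caution by previewing the generated SQL with mysql.format() (covered in more detail below); the query, comment text, and values in this sketch are made up:
var mysql = require('mysql');

// Both "?" markers are replaced -- including the one inside the comment --
// because this is client-side text substitution, not server-side binding.
var sql = mysql.format(
  'SELECT * FROM users /* status = ? */ WHERE id = ?',
  ['active', 42]
);
console.log(sql);
// Expected output (roughly): SELECT * FROM users /* status = 'active' */ WHERE id = 42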
Different value types are escaped differently, here is how: true/ false 'YYYY-mm-dd HH:ii:ss'strings X'0fa5' ['a', 'b']turns into 'a', 'b' [['a', 'b'], ['c', 'd']]turns into ('a', 'b'), ('c', 'd') toSqlStringmethod will have .toSqlString()called and the returned value is used as the raw SQL. key = 'val'pairs for each enumerable property on the object. If the property's value is a function, it is skipped; if the property's value is an object, toString() is called on it and the returned value is used. undefined/ nullare converted to NULL NaN/ Infinityare left as-is. MySQL does not support these, and trying to insert them as values will trigger MySQL errors until they implement support. This escaping allows you to do neat things like this: var post = {id: 1, title: 'Hello MySQL'}; var query = connection.query('INSERT INTO posts SET ?', post, function (error, results, fields) { if (error) throw error; // Neat! }); console.log(query.sql); // INSERT INTO posts SET `id` = 1, `title` = 'Hello MySQL' And the toSqlString method allows you to form complex queries with functions: var CURRENT_TIMESTAMP = { toSqlString: function() { return 'CURRENT_TIMESTAMP()'; } }; var sql = mysql.format('UPDATE posts SET modified = ? WHERE id = ?', [CURRENT_TIMESTAMP, 42]); console.log(sql); // UPDATE posts SET modified = CURRENT_TIMESTAMP() WHERE id = 42 To generate objects with a toSqlString method, the mysql.raw() method can be used. This creates an object that will be left un-touched when using in a ? placeholder, useful for using functions as dynamic values: Caution The string provided to mysql.raw() will skip all escaping functions when used, so be careful when passing in unvalidated input. var CURRENT_TIMESTAMP = mysql.raw('CURRENT_TIMESTAMP()'); var sql = mysql.format('UPDATE posts SET modified = ? WHERE id = ?', [CURRENT_TIMESTAMP, 42]); console.log(sql); // UPDATE posts SET modified = CURRENT_TIMESTAMP() WHERE id = 42 If you feel the need to escape queries by yourself, you can also use the escaping function directly: var query = "SELECT * FROM posts WHERE title=" + mysql.escape("Hello MySQL"); console.log(query); // SELECT * FROM posts WHERE title='Hello MySQL' If you can't trust an SQL identifier (database / table / column name) because it is provided by a user, you should escape it with mysql.escapeId(identifier), connection.escapeId(identifier) or pool.escapeId(identifier) like this: var sorter = 'date'; var sql = 'SELECT * FROM posts ORDER BY ' + connection.escapeId(sorter); connection.query(sql, function (error, results, fields) { if (error) throw error; // ... }); It also supports adding qualified identifiers. It will escape both parts. var sorter = 'date'; var sql = 'SELECT * FROM posts ORDER BY ' + connection.escapeId('posts.' + sorter); // -> SELECT * FROM posts ORDER BY `posts`.`date` If you do not want to treat . as qualified identifiers, you can set the second argument to true in order to keep the string as a literal identifier: var sorter = 'date.2'; var sql = 'SELECT * FROM posts ORDER BY ' + connection.escapeId(sorter, true); // -> SELECT * FROM posts ORDER BY `date.2` Alternatively, you can use ?? characters as placeholders for identifiers you would like to have escaped like this: var userId = 1; var columns = ['username', 'email']; var query = connection.query('SELECT ?? FROM ?? WHERE id = ?', [columns, 'users', userId], function (error, results, fields) { if (error) throw error; // ... 
}); console.log(query.sql); // SELECT `username`, `email` FROM `users` WHERE id = 1 Please note that this last character sequence is experimental and syntax might change When you pass an Object to .escape() or .query(), .escapeId() is used to avoid SQL injection in object keys." }); If you are inserting a row into a table with an auto increment primary key, you can retrieve the insert id like this: connection.query('INSERT INTO posts SET ?', {title: 'test'}, function (error, results, fields) { if (error) throw error; console.log(results.insertId); });.query('DELETE FROM posts WHERE title = "wrong"', function (error, results, fields) { if (error) throw error; console.log('deleted ' + results.affectedRows + ' rows'); }) You can get the number of changed rows from an update statement. "changedRows" differs from "affectedRows" in that it does not count updated rows whose values were not changed. connection.query('UPDATE posts SET ...', function (error, results, fields) { if (error) throw error; console.log('changed ' + results.changedRows + ' rows'); }) You can get the MySQL connection ID ("thread ID") of a given connection using the threadId property. connection.connect(function(err) { if (err) throw err; console.log('connected as id ' + connection.threadId); }); The MySQL protocol is sequential, this means that you need multiple connections to execute queries in parallel. You can use a Pool to manage connections, one simple approach is to create one connection per incoming http request.: pause(). This number will depend on the amount and size of your rows. pause()/ resume()operate on the underlying socket and parser. You are guaranteed that no more 'result'events will fire after calling pause(). query()method when streaming rows. 'result'event will fire for both rows as well as OK packets confirming the success of a INSERT/UPDATE query. Error: Connection lost: The server closed the connection.The time limit for this is determined by the net_write_timeout setting on your MySQL server.. The query object provides a convenience method .stream([options]) that wraps query events into a Readable Stream object. This stream can easily be piped downstream and provides automatic pause/resume, based on downstream congestion and the optional highWaterMark. The objectMode parameter of the stream is set to true and cannot be changed (if you need a byte stream, you will need to use a transform stream, like objstream for example). For example, piping query results into another stream (with a max buffer of 5 objects) is simply: connection.query('SELECT * FROM posts') .stream({highWaterMark: 5}) .pipe(...); (error, results, fields) { if (error) throw error; // . You can call stored procedures from your queries as with any other mysql driver. If the stored procedure produces several result sets, they are exposed to you the same way as the results for multiple statement queries. (error, results, fields) { if (error) throw error; /* results will be an array like this now: [{ table1: { fieldA: '...', fieldB: '...', }, table2: { fieldA: '...', fieldB: '...', }, }, ...] */ }); Or use a string separator to have your results merged. var options = {sql: '...', nestTables: '_'}; connection.query(options, function (error, results, fields) { if (error) throw error; /* results will be an array like this now: [{ table1_fieldA: '...', table1_fieldB: '...', table2_fieldA: '...', table2_fieldB: '...', }, ...] 
*/ }); Simple transaction support is available at the connection level: connection.beginTransaction(function(err) { if (err) { throw err; } connection.query('INSERT INTO posts SET title=?', title, function (error, results, fields) { if (error) { return connection.rollback(function() { throw error; }); } var log = 'Post ' + results.insertId + ' added'; connection.query('INSERT INTO log SET data=?', log, function (error, results, fields) { if (error) { return connection.rollback(function() { throw error; }); } connection.commit(function(err) { if (err) { return A ping packet can be sent over a connection using the connection.ping method. This method will send a ping packet to the server and when the server responds, the callback will fire. If an error occurred, the callback will fire with an error argument. connection.ping(function (err) { if (err) throw err; console.log('Server responded to ping'); }) Every operation takes an optional inactivity timeout option. This allows you to specify appropriate timeouts for operations. It is important to note that these timeouts are not part of the MySQL protocol, and rather timeout operations through the client. This means that when a timeout is reached, the connection it occurred on will be destroyed and no further operations can be performed. // Kill query after 60s connection.query({sql: 'SELECT COUNT(*) AS count FROM big_table', timeout: 60000}, function (error, results, fields) { if (error && error.code === 'PROTOCOL_SEQUENCE_TIMEOUT') { throw new Error('too long to count table rows!'); } if (error) { throw error; } console.log(results[0].count + ' rows'); });. err.sql: String, contains the full SQL of the failed query. This can be useful when using a higher level interface like an ORM that is generating the queries. err.sqlState: String, contains the five-character SQLSTATE value. Only populated from MySQL server error. err.sqlMessage: String, contains the message string that provides a textual description of the error. Only populated from MySQL server error. (error, results, fields) { console.log(error.code); // 'ECONNREFUSED' console.log(error.fatal); // true }); Normal errors however are only delegated to the callback they belong to. So in the example below, only the first callback receives an error, the second query works as expected: connection.query('USE name_of_db_that_does_not_exist', function (error, results, fields) { console.log(error.code); // 'ER_BAD_DB_ERROR' }); connection.query('SELECT 1', function (error, results, fields) { console.log(error); // null console.log(results' events() {});: Note text in the binary character set is returned as Buffer, rather than a string. (error, results, fields) { if (error) throw error; // ... });, just write the flag name, or prefix it with a plus (case insensitive). Please note that some available flags that are not supported (e.g.: Compression), are still not allowed to be specified. The next example blacklists FOUND_ROWS flag from default connection flags. var connection = mysql.createConnection("mysql://localhost/test?flags=-FOUND_ROWS"); The following flags are sent by default on a new connection: CONNECT_WITH_DB- Ability to specify the database on connection. FOUND_ROWS- Send the found rows instead of the affected rows as affectedRows. IGNORE_SIGPIPE- Old; no effect. IGNORE_SPACE- Let the parser ignore spaces before the (in queries. LOCAL_FILES- Can use LOAD DATA LOCAL. LONG_FLAG LONG_PASSWORD- Use the improved version of Old Password Authentication. 
MULTI_RESULTS- Can handle multiple resultsets for COM_QUERY. ODBC- Old; no effect. PROTOCOL_41- Uses the 4.1 protocol. PS_MULTI_RESULTS- Can handle multiple resultsets for COM_STMT_EXECUTE. RESERVED- Old flag for the 4.1 protocol. SECURE_CONNECTION- Support native 4.1 authentication. TRANSACTIONS- Asks for the transaction status flags. In addition, the following flag will be sent if the option multipleStatements is set to true: MULTI_STATEMENTS- The client may send multiple statements per query or statement prepare. There are other flags available. They may or may not function, but are still available to specify. COMPRESS INTERACTIVE NO_SCHEMA PLUGIN_AUTH REMEMBER_OPTIONS SSL SSL_VERIFY_SERVER_CERT
Security issues should not be first reported through GitHub or another public forum, but kept private in order for the collaborators to assess the report and either (a) devise a fix and plan a release date or (b) assert that it is not a security issue (in which case it can be posted in a public forum, like a GitHub issue). The primary private forum is email, either by emailing the module's author or opening a GitHub issue simply asking to whom a security issue should be addressed, without disclosing the issue or type of issue. An ideal report includes a clear indication of what the security issue is and how it would be exploited, ideally with an accompanying proof of concept ("PoC") for collaborators to work from and validate potential fixes against.
This project welcomes contributions from the community. Contributions are accepted using GitHub pull requests. If you're not familiar with making GitHub pull requests, please refer to the GitHub documentation "Creating a pull request". For a good pull request, we ask that you provide the following:
- Run npm run test-cov, which will generate a coverage/ folder that contains HTML pages of the code coverage, to better understand if everything you're adding is being tested. Update the Readme.md file as well.
- Run npm run lint and fix any displayed issues.
The test suite is split into two parts: unit tests and integration tests. The unit tests run on any machine while the integration tests require a MySQL server instance to be set up. To run the unit tests:
$ FILTER=unit npm test
To run the integration tests, set the environment variables MYSQL_DATABASE, MYSQL_HOST, MYSQL_PORT, MYSQL_USER and MYSQL_PASSWORD. MYSQL_SOCKET can also be used in place of MYSQL_HOST and MYSQL_PORT to connect over a UNIX socket. Then run:
$ FILTER=integration npm test
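For example, the integration suite can be run with the variables set inline; every value below is a placeholder for your own test server:
$ MYSQL_HOST=localhost MYSQL_PORT=3306 MYSQL_USER=root \
  MYSQL_PASSWORD=secret MYSQL_DATABASE=test FILTER=integration npm test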
https://npm.runkit.com/mysql
CC-MAIN-2019-18
refinedweb
4,020
50.94
Watch Now This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Python Modules and Packages: An Introduction This article explores Python modules and Python packages, two mechanisms that facilitate modular programming. Modular programming refers to the process of breaking a large, unwieldy programming task into separate, smaller, more manageable subtasks or modules. Individual modules can then be cobbled together like building blocks to create a larger application. There are several advantages to modularizing code in a large application: Simplicity: Rather than focusing on the entire problem at hand, a module typically focuses on one relatively small portion of the problem. If you’re working on a single module, you’ll have a smaller problem domain to wrap your head around. This makes development easier and less error-prone. Maintainability: Modules are typically designed so that they enforce logical boundaries between different problem domains. If modules are written in a way that minimizes interdependency, there is decreased likelihood that modifications to a single module will have an impact on other parts of the program. (You may even be able to make changes to a module without having any knowledge of the application outside that module.) This makes it more viable for a team of many programmers to work collaboratively on a large application. Reusability: Functionality defined in a single module can be easily reused (through an appropriately defined interface) by other parts of the application. This eliminates the need to duplicate code. Scoping: Modules typically define a separate namespace, which helps avoid collisions between identifiers in different areas of a program. (One of the tenets in the Zen of Python is Namespaces are one honking great idea—let’s do more of those!) Functions, modules and packages are all constructs in Python that promote code modularization. Free PDF Download: Python 3 Cheat Sheetmodule. A module’s contents are accessed the same way in all three cases: with the import statement. Here, the focus will mostly be on modules that are written in Python. The cool thing about modules written in Python is that they are exceedingly straightforward to build. All you need to do is create a file that contains legitimate Python code and then give the file a name with a .py extension. That’s it! No special syntax or voodoo is necessary. For example, suppose you have created a file called mod.py containing the following: mod.py s = "If Comrade Napoleon says it, it must be right." a = [100, 200, 300] def foo(arg): print(f'arg = {arg}') class Foo: pass Several objects are defined in mod.py: s(a string) a(a list) foo()(a function) Foo(a class) Assuming mod.py is in an appropriate location, which you will learn more about shortly, these objects can be accessed by importing the module as follows: >>> import mod >>> print(mod.s) If Comrade Napoleon says it, it must be right. 
>>> mod.a [100, 200, 300] >>> mod.foo(['quux', 'corge', 'grault']) arg = ['quux', 'corge', 'grault'] >>> x = mod.Foo() >>> x <mod.Foo object at 0x03C181F0> The Module Search Path Continuing with the above example, let’s take a look at what happens when Python executes the statement: import mod ['', 'C:\\Users\\john\\Documents\\Python\\doc', 'C:\\Python36\\Lib\\idlelib', 'C:\\Python36\\python36.zip', 'C:\\Python36\\DLLs', 'C:\\Python36\\lib', 'C:\\Python36', 'C:\\Python36\\lib\\site-packages'] Note: The exact contents of sys.path are installation-dependent. The above will almost certainly look slightly different on your computer. Thus, to ensure actually one additional option: you can put the module file in any directory of your choice and then modify sys.path at run-time so that it contains that directory. For example, in this case, you could put mod.py in directory C:\Users\john and then issue the following statements: >>> sys.path.append(r'C:\Users\john') >>> sys.path ['', 'C:\\Users\\john\\Documents\\Python\\doc', 'C:\\Python36\\Lib\\idlelib', 'C:\\Python36\\python36.zip', 'C:\\Python36\\DLLs', 'C:\\Python36\\lib', 'C:\\Python36', 'C:\\Python36\\lib\\site-packages', 'C:\\Users\\john'] >>> import mod Once a module has been imported, you can determine the location where it was found with the module’s __file__ attribute: >>> import mod >>> mod.__file__ 'C:\\Users\\john\\mod.py' >>> import re >>> re.__file__ 'C:\\Python36\\lib\\re.py' The directory portion of __file__ should be one of the directories in sys.path. The import Statement Module contents are made available to the caller with the import statement. The import statement takes many different forms, shown below. import <module_name> The simplest form is the one already shown above: import <module_name> Note that this does not make the module contents directly accessible to the caller. Each module has its own private symbol table, which serves as the global symbol table for all objects defined in the module. Thus, a module creates a separate namespace, as already noted. illustrated below. After the following import statement, mod is placed into the local symbol table. Thus, mod has meaning in the caller’s local context: >>> import mod >>> mod <module 'mod' from 'C:\\Users\\john\\Documents\\Python\\doc\\mod.py'> But s and foo remain in the module’s private symbol table and are not meaningful in the local context: >>> s NameError: name 's' is not defined >>> foo('quux') NameError: name 'foo' is not defined To be accessed in the local context, names of objects defined in the module must be prefixed by mod: >>> mod.s 'If Comrade Napoleon says it, it must be right.' >>> mod.foo('quux') arg = quux, foo >>> s 'If Comrade Napoleon says it, it must be right.' >>> foo('quux') arg = quux >>> from mod import Foo >>> x = Foo() >>> x <mod.Foo object at 0x02E3AD50> Because this form of import places the object names directly into the caller’s symbol table, any objects that already exist with the same name will be overwritten: >>> a = ['foo', 'bar', 'baz'] >>> a ['foo', 'bar', 'baz'] >>> from mod import a >>> a [100, 200, 300] It is even possible to indiscriminately import everything from a module at one fell swoop: from <module_name> import * This will place the names of all objects from <module_name> into the local symbol table, with the exception of any that begin with the underscore ( _) character. For example: >>> from mod import * >>> s 'If Comrade Napoleon says it, it must be right.' 
>>> a [100, 200, 300] >>> foo <function foo at 0x03B449C0> >>> Foo <class 'mod.Foo'> This isn’t necessarily recommended in large-scale production code. It’s a bit dangerous because you are entering names into the local symbol table en masse. Unless you know them all well and can be confident there won’t be a conflict, you have a decent chance of overwriting an existing name inadvertently. However, this syntax is quite handy when you are just mucking around with the interactive interpreter, for testing or discovery purposes, because it quickly gives you access to everything a module has to offer without a lot of typing. from <module_name> import <name> as <alt_name> It is also possible to import individual objects but enter them into the local symbol table with alternate names: from <module_name> import <name> as <alt_name>[, <name> as <alt_name> …] This makes it possible to place names directly into the local symbol table but avoid conflicts with previously existing names: >>>>> a = ['foo', 'bar', 'baz'] >>> from mod import s as string, a as alist >>> s 'foo' >>> string 'If Comrade Napoleon says it, it must be right.' >>> a ['foo', 'bar', 'baz'] >>> alist [100, 200, 300] import <module_name> as <alt_name> You can also import an entire module under an alternate name: import <module_name> as <alt_name> >>> import mod as my_module >>> my_module.a [100, 200, 300] >>> my_module.foo('qux') arg = qux Module contents can be imported from within a function definition. In that case, the import does not occur until the function is called: >>> def bar(): ... from mod import foo ... foo('corge') ... >>> bar() arg = corge However, Python 3 does not allow the indiscriminate import * syntax from within a function: >>> def bar(): ... from mod import * ... SyntaxError: import * only allowed at module level Lastly, a try statement with an except ImportError clause can be used to guard against unsuccessful import attempts: >>> try: ... # Non-existent module ... import baz ... except ImportError: ... print('Module not found') ... Module not found >>> try: ... # Existing module, but non-existent object ... from mod import baz ... except ImportError: ... print('Object not found in module') ... Object not found in module The dir() Function The built-in function dir() returns a list of defined names in a namespace. Without arguments, it produces an alphabetically sorted list of names in the current local symbol table: >>> dir() ['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__'] >>> qux = [1, 2, 3, 4, 5] >>> dir() ['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'qux'] >>> class Bar(): ... pass ... >>> x = Bar() >>> dir() ['Bar', '__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'qux', 'x'] Note how the first call to dir() above lists several names that are automatically defined and already in the namespace when the interpreter starts. As new names are defined ( qux, Bar, x), they appear on subsequent invocations of dir(). This can be useful for identifying what exactly has been added to the namespace by an import statement: >>> dir() ['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__'] >>> import mod >>> dir() ['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'mod'] >>> mod.s 'If Comrade Napoleon says it, it must be right.' 
>>> mod.foo([1, 2, 3]) arg = [1, 2, 3] >>> from mod import a, Foo >>> dir() ['Foo', '__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'a', 'mod'] >>> a [100, 200, 300] >>> x = Foo() >>> x <mod.Foo object at 0x002EAD50> >>> from mod import s as string >>> dir() ['Foo', '__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'a', 'mod', 'string', 'x'] >>> string 'If Comrade Napoleon says it, it must be right.' When given an argument that is the name of a module, dir() lists the names defined in the module: >>> import mod >>> dir(mod) ['Foo', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'a', 'foo', 's'] >>> dir() ['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__'] >>> from mod import * >>> dir() ['Foo', '__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'a', 'foo', 's'] Executing a Module as a Script Any .py file that contains a module is essentially also a Python script, and there isn’t any reason it can’t be executed like one. Here again is mod.py as it was defined above: mod.py s = "If Comrade Napoleon says it, it must be right." a = [100, 200, 300] def foo(arg): print(f'arg = {arg}') class Foo: pass This can be run as a script: C:\Users\john\Documents>python mod.py C:\Users\john\Documents> There are no errors, so it apparently worked. Granted, it’s not very interesting. As it is written, it only defines objects. It doesn’t do anything with them, and it doesn’t generate any output. Let’s modify the above Python module so it does generate some output when run as a script: mod.py s = "If Comrade Napoleon says it, it must be right." a = [100, 200, 300] def foo(arg): print(f'arg = {arg}') class Foo: pass print(s) print(a) foo('quux') x = Foo() print(x) Now it should be a little more interesting: C:\Users\john\Documents>python mod.py If Comrade Napoleon says it, it must be right. [100, 200, 300] arg = quux <__main__.Foo object at 0x02F101D0> Unfortunately, now it also generates output when imported as a module: >>> import mod If Comrade Napoleon says it, it must be right. [100, 200, 300] arg = quux <mod.Foo object at 0x0169AD50> This is probably not what you want. It isn’t usual for a module to generate output when it is imported. Wouldn’t it be nice if you could distinguish between when the file is loaded as a module and when it is run as a standalone script? Ask and ye shall receive. When a .py file is imported as a module, Python sets the special dunder variable __name__ to the name of the module. However, if a file is run as a standalone script, __name__ is (creatively) set to the string '__main__'. Using this fact, you can discern which is the case at run-time and alter behavior accordingly: mod.py s = "If Comrade Napoleon says it, it must be right." a = [100, 200, 300] def foo(arg): print(f'arg = {arg}') class Foo: pass if (__name__ == '__main__'): print('Executing as standalone script') print(s) print(a) foo('quux') x = Foo() print(x) Now, if you run as a script, you get output: C:\Users\john\Documents>python mod.py Executing as standalone script If Comrade Napoleon says it, it must be right. 
[100, 200, 300] arg = quux <__main__.Foo object at 0x03450690> But if you import as a module, you don’t: >>> import mod >>> mod.foo('grault') arg = grault Modules are often designed with the capability to run as a standalone script for purposes of testing the functionality that is contained within the module. This is referred to as unit testing. For example, suppose you have created a module fact.py containing a factorial function, as follows: fact.py def fact(n): return 1 if n == 1 else n * fact(n-1) if (__name__ == '__main__'): import sys if len(sys.argv) > 1: print(fact(int(sys.argv[1]))) The file can be treated as a module, and the fact() function imported: >>> from fact import fact >>> fact(6) 720 But it can also be run as a standalone by passing an integer argument on the command-line for testing: C:\Users\john\Documents>python fact.py 6 720 Reloading a Module For reasons of efficiency, a module is only loaded once per interpreter session. That is fine for function and class definitions, which typically make up the bulk of a module’s contents. But a module can contain executable statements as well, usually for initialization. Be aware that these statements will only be executed the first time a module is imported. Consider the following file mod.py: mod.py a = [100, 200, 300] print('a =', a) >>> import mod a = [100, 200, 300] >>> import mod >>> import mod >>> mod.a [100, 200, 300] The print() statement is not executed on subsequent imports. (For that matter, neither is the assignment statement, but as the final display of the value of mod.a shows, that doesn’t matter. Once the assignment is made, it sticks.) If you make a change to a module and need to reload it, you need to either restart the interpreter or use a function called reload() from module importlib: >>> import mod a = [100, 200, 300] >>> import mod >>> import importlib >>> importlib.reload(mod) a = [100, 200, 300] <module 'mod' from 'C:\\Users\\john\\Documents\\Python\\doc\\mod.py'> Python Packages Suppose you have developed a very large application that includes many modules. As the number of modules grows, it becomes difficult to keep track of them all if they are dumped into one location. This is particularly so if they have similar names or functionality. You might wish for a means of grouping and organizing them. Packages allow for a hierarchical structuring of the module namespace using dot notation. In the same way that modules help avoid collisions between global variable names, packages help avoid collisions between module names. Creating a package is quite straightforward, since it makes use of the operating system’s inherent hierarchical file structure. Consider the following arrangement: Here, there is a directory named pkg that contains two modules, mod1.py and mod2.py. The contents of the modules are: mod1.py def foo(): print('[mod1] foo()') class Foo: pass mod2.py def bar(): print('[mod2] bar()') class Bar: pass Given this structure, if the pkg directory resides in a location where it can be found (in one of the directories contained in sys.path), you can refer to the two modules with dot notation ( pkg.mod1, pkg.mod2) and import them with the syntax you are already familiar with: import <module_name>[, <module_name> ...] 
>>> import pkg.mod1, pkg.mod2 >>> pkg.mod1.foo() [mod1] foo() >>> x = pkg.mod2.Bar() >>> x <pkg.mod2.Bar object at 0x033F7290> from <module_name> import <name(s)> >>> from pkg.mod1 import foo >>> foo() [mod1] foo() from <module_name> import <name> as <alt_name> >>> from pkg.mod2 import Bar as Qux >>> x = Qux() >>> x <pkg.mod2.Bar object at 0x036DFFD0> You can import modules with these statements as well: from <package_name> import <modules_name>[, <module_name> ...] from <package_name> import <module_name> as <alt_name> >>> from pkg import mod1 >>> mod1.foo() [mod1] foo() >>> from pkg import mod2 as quux >>> quux.bar() [mod2] bar() You can technically import the package as well: >>> import pkg >>> pkg <module 'pkg' (namespace)> But this is of little avail. Though this is, strictly speaking, a syntactically correct Python statement, it doesn’t do much of anything useful. In particular, it does not place any of the modules in pkg into the local namespace: >>> pkg.mod1 Traceback (most recent call last): File "<pyshell#34>", line 1, in <module> pkg.mod1 AttributeError: module 'pkg' has no attribute 'mod1' >>> pkg.mod1.foo() Traceback (most recent call last): File "<pyshell#35>", line 1, in <module> pkg.mod1.foo() AttributeError: module 'pkg' has no attribute 'mod1' >>> pkg.mod2.Bar() Traceback (most recent call last): File "<pyshell#36>", line 1, in <module> pkg.mod2.Bar() AttributeError: module 'pkg' has no attribute 'mod2' To actually import the modules or their contents, you need to use one of the forms shown above. Package Initialization If a file named __init__.py is present in a package directory, it is invoked when the package or a module in the package is imported. This can be used for execution of package initialization code, such as initialization of package-level data. For example, consider the following __init__.py file: __init__.py print(f'Invoking __init__.py for {__name__}') A = ['quux', 'corge', 'grault'] Let’s add this file to the pkg directory from the above example: Now when the package is imported, the global list A is initialized: >>> import pkg Invoking __init__.py for pkg >>> pkg.A ['quux', 'corge', 'grault'] A module in the package can access the global variable by importing it in turn: mod1.py def foo(): from pkg import A print('[mod1] foo() / A = ', A) class Foo: pass >>> from pkg import mod1 Invoking __init__.py for pkg >>> mod1.foo() [mod1] foo() / A = ['quux', 'corge', 'grault'] __init__.py can also be used to effect automatic importing of modules from a package. For example, earlier you saw that the statement import pkg only places the name pkg in the caller’s local symbol table and doesn’t import any modules. But if __init__.py in the pkg directory contains the following: __init__.py print(f'Invoking __init__.py for {__name__}') import pkg.mod1, pkg.mod2 then when you execute import pkg, modules mod1 and mod2 are imported automatically: >>> import pkg Invoking __init__.py for pkg >>> pkg.mod1.foo() [mod1] foo() >>> pkg.mod2.bar() [mod2] bar() Note: Much of the Python documentation states that an __init__.py file must be present in the package directory when creating a package. This was once true. It used to be that the very presence of __init__.py signified to Python that a package was being defined. The file could contain initialization code or even be empty, but it had to be present. Starting with Python 3.3, Implicit Namespace Packages were introduced. These allow for the creation of a package without any __init__.py file. 
Of course, it can still be present if package initialization is needed. But it is no longer required. Importing * From a Package For the purposes of the following discussion, the previously defined package is expanded to contain some additional modules: There are now four modules defined in the pkg directory. Their contents are as shown below: mod1.py def foo(): print('[mod1] foo()') class Foo: pass mod2.py def bar(): print('[mod2] bar()') class Bar: pass mod3.py def baz(): print('[mod3] baz()') class Baz: pass mod4.py def qux(): print('[mod4] qux()') class Qux: pass (Imaginative, aren’t they?) You have already seen that when import * is used for a module, all objects from the module are imported into the local symbol table, except those whose names begin with an underscore, as always: >>> dir() ['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__'] >>> from pkg.mod3 import * >>> dir() ['Baz', '__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'baz'] >>> baz() [mod3] baz() >>> Baz <class 'pkg.mod3.Baz'> The analogous statement for a package is this: from <package_name> import * What does that do? >>> dir() ['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__'] >>> from pkg import * >>> dir() ['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__'] Hmph. Not much. You might have expected (assuming you had any expectations at all) that Python would dive down into the package directory, find all the modules it could, and import them all. But as you can see, by default that is not what happens. Instead, Python follows this convention: if the __init__.py file in the package directory contains a list named __all__, it is taken to be a list of modules that should be imported when the statement from <package_name> import * is encountered. For the present example, suppose you create an __init__.py in the pkg directory like this: pkg/__init__.py __all__ = [ 'mod1', 'mod2', 'mod3', 'mod4' ] Now from pkg import * imports all four modules: >>> dir() ['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__'] >>> from pkg import * >>> dir() ['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'mod1', 'mod2', 'mod3', 'mod4'] >>> mod2.bar() [mod2] bar() >>> mod4.Qux <class 'pkg.mod4.Qux'> Using import * still isn’t considered terrific form, any more for packages than for modules. But this facility at least gives the creator of the package some control over what happens when import * is specified. (In fact, it provides the capability to disallow it entirely, simply by declining to define __all__ at all. As you have seen, the default behavior for packages is to import nothing.) By the way, __all__ can be defined in a module as well and serves the same purpose: to control what is imported with import *. 
For example, modify mod1.py as follows: pkg/mod1.py __all__ = ['foo'] def foo(): print('[mod1] foo()') class Foo: pass Now an import * statement from pkg.mod1 will only import what is contained in __all__: >>> dir() ['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__'] >>> from pkg.mod1 import * >>> dir() ['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'foo'] >>> foo() [mod1] foo() >>> Foo Traceback (most recent call last): File "<pyshell#37>", line 1, in <module> Foo NameError: name 'Foo' is not defined foo() (the function) is now defined in the local namespace, but Foo (the class) is not, because the latter is not in __all__. In summary, __all__ is used by both packages and modules to control what is imported when import * is specified. But the default behavior differs: - For a package, when __all__is not defined, import *does not import anything. - For a module, when __all__is not defined, import *imports everything (except—you guessed it—names starting with an underscore). Subpackages Packages can contain nested subpackages to arbitrary depth. For example, let’s make one more modification to the example package directory as follows: The four modules ( mod1.py, mod2.py, mod3.py and mod4.py) are defined as previously. But now, instead of being lumped together into the pkg directory, they are split out into two subpackage directories, sub_pkg1 and sub_pkg2. Importing still works the same as shown previously. Syntax is similar, but additional dot notation is used to separate package name from subpackage name: >>> import pkg.sub_pkg1.mod1 >>> pkg.sub_pkg1.mod1.foo() [mod1] foo() >>> from pkg.sub_pkg1 import mod2 >>> mod2.bar() [mod2] bar() >>> from pkg.sub_pkg2.mod3 import baz >>> baz() [mod3] baz() >>> from pkg.sub_pkg2.mod4 import qux as grault >>> grault() [mod4] qux() In addition, a module in one subpackage can reference objects in a sibling subpackage (in the event that the sibling contains some functionality that you need). For example, suppose you want to import and execute function foo() (defined in module mod1) from within module mod3. You can either use an absolute import: pkg/sub__pkg2/mod3.py def baz(): print('[mod3] baz()') class Baz: pass from pkg.sub_pkg1.mod1 import foo foo() >>> from pkg.sub_pkg2 import mod3 [mod1] foo() >>> mod3.foo() [mod1] foo() Or you can use a relative import, where .. refers to the package one level up. From within mod3.py, which is in subpackage sub_pkg2, ..evaluates to the parent package ( pkg), and ..sub_pkg1evaluates to subpackage sub_pkg1of the parent package. pkg/sub__pkg2/mod3.py def baz(): print('[mod3] baz()') class Baz: pass from .. import sub_pkg1 print(sub_pkg1) from ..sub_pkg1.mod1 import foo foo() >>> from pkg.sub_pkg2 import mod3 <module 'pkg.sub_pkg1' (namespace)> [mod1] foo() Conclusion In this tutorial, you covered the following topics: - Free PDF Download: Python 3 Cheat Sheet This will hopefully allow you to better understand how to gain access to the functionality available in the many third-party and built-in modules available in Python. Additionally, if you are developing your own application, creating your own modules and packages will help you organize and modularize your code, which makes coding, maintenance, and debugging easier. If you want to learn more, check out the following documentation at Python.org: Happy Pythoning! Watch Now This tutorial has a related video course created by the Real Python team. 
Watch it together with the written tutorial to deepen your understanding: Python Modules and Packages: An Introduction
https://realpython.com/python-modules-packages/
CC-MAIN-2020-34
refinedweb
4,238
56.35
Introduction to the Terminal The Terminal is a simple command-line interface for entering Caché commands and displaying current values. It is useful during learning, development, and debugging. This chapter discusses the following topics: User account that owns the Terminal process How to start the Terminal Background information about the Terminal An introduction to what you use the Terminal for An introduction to the ZWELCOME routine Information about the startup namespace that the Terminal uses Information about the Terminal prompt and the prompt versions you might see How to interrupt the Terminal If the Terminal displays a dialog box with the message Spy Mode On, this means that you have accidentally pressed Alt+Shift+S. To exit this mode, press Alt+Shift+S again. This mode is not for general use and is not documented. Also, if the Terminal appears to be unresponsive, you may have pressed Ctrl+S, which pauses the automatic scrolling. If so, press Ctrl+Q to resume. User Account That Owns the Terminal Process In Caché versions 2015.2 and later, the Caché process is owned by the user that is logged in to Windows and is running the Terminal program (cterm.exe). In Caché versions 2015.1 and earlier: Beginning with Windows Vista, and including Windows Vista, Windows Server 2008, and Windows 7, the Caché process that serves the Terminal connection is owned by the user account in which the Caché control service runs. Before Windows Vista, the Caché process is owned by the user that is logged in to Windows and is running the Terminal program (cterm.exe). In all cases, all environment variables and shared drive letter designations are those defined by the user that is running the Terminal. Starting the Terminal You can use the Terminal interactively or in batch mode. To use the Terminal interactively, do one of the following: To work with the Terminal using a local database, select the InterSystems Launcher and then select Terminal. To work with the Terminal using a database on a remote server, select the InterSystems Launcher and then select Remote System Access > Terminal. Then select a server name. Or select the InterSystems Launcher, select Remote System Access > Telnet. Then log on to the Caché system with your username and password. For more information and additional options, see the chapter “Connecting to Remote Servers” in the Caché System Administration Guide. In either case, you then see the Terminal window. The prompt displayed in this window indicates the namespace in which you are currently working. For example: USER> In batch mode, you invoke the Terminal from the operating system command line, passing to it the name of a script file to run. This mode is not available for all operating systems. Background The Terminal was designed to work with Caché applications. It uses two methods to communicate with Caché: local and network. The title bar indicates the communication mode currently in use. Local communication is used when the Terminal communicates with the Caché server with which it was installed. In this case, the title bar displays Cache TRM:pid(instancename) where: pid is the process ID of the Caché process with which the Terminal is communicating. instancename is the Caché instance in which the process is running. Network communication uses the TELNET protocol over TCP/IP to communicate with either a Windows Caché server or with a UNIX® host. In this case, the title bar displays (server NT — Caché Telnet) where server is the host name of the remote server. 
The communications stack for Caché is Winsock. Winsock is a network programming interface for Microsoft Windows which is based on the socket paradigm popularized in the Berkeley Software Distribution (BSD). The host entry can be either from the local hosts file, an IP address, or, if the Winsock implementation has access to a name server, a general host name. The host name may be followed by an optional #nnn to specify a nonstandard port number. Errors reported from this communications mode are the names of the Winsock error codes. For example, WSAECONNREFUSED means the connection was refused.
General Use
In the Terminal, you can enter ObjectScript commands of all kinds. For example:
d ^myroutine
set dirname = "c:\test"
set obj=##class(Test.MyClass).%New()
write obj.Prop1
The Terminal implicitly issues the Use 0 command after each line you enter. This means that if you issue a Use command to direct output to some other device, that command is essentially ignored. Also, large input buffers may defer the action of keys that attempt to stop the input flow, such as Ctrl-C or Ctrl-S. This is also dependent on processor and connection speed. A special effort was made to respond to keystrokes before host input.
You can also run Terminal scripts, which are files with the extension .scr existing in your file system. The Terminal provides a small set of commands you can use in these scripts, including a command that sends a Caché command to the Terminal, as if you had typed it manually.
The ZWELCOME Routine
When the Terminal begins execution, it runs the ZWELCOME routine in the %SYS namespace, if one is present. Installing ZWELCOME into the %SYS namespace requires an individual with administrator privileges and write access to the CACHESYS database. The ZWELCOME routine executes in the %SYS namespace with an empty $USERNAME and with $ROLES set to %ALL. Take care to ensure that the failure modes of ZWELCOME are benign. Also, this routine should not modify the $ROLES variable. Here is a simple example:
ZWELCOME() PUBLIC ;
    ; Example
    Write !
    Set ME = ##class(%SYS.ProcessQuery).%OpenId($JOB)
    Write "Now: ", $ZDATETIME($HOROLOG, 3, 1), !
    Write "Pid/JobNo: ", ME.Pid, "/", ME.JobNumber, !
    Write "Priority: ", ME.Priority, !
    Quit
The Startup Namespace
When you first start the Terminal, it opens in a particular namespace. This option is controlled by the Startup Namespace option of the user definition. See the chapter "Users" in the Caché Security Administration Guide. The command prompt displays the current namespace, such as:
USER>
Changing Namespaces
To change to a new namespace, use the ZNSPACE command (which has a short form of ZN). For example:
USER>ZN "SAMPLES"
SAMPLES>
The argument for the ZNSPACE command is a single string that is the name of the namespace to change to. If you enter an invalid namespace name, ZNSPACE throws a <NAMESPACE> error. See the ZNSPACE reference page in the Caché ObjectScript Reference for more information.
The Terminal Prompt
As noted previously, the Terminal prompt indicates the namespace in which you are currently working. The prompt may display additional information, to indicate the transaction level or the program stack level.
The Transaction Level
If you are within a transaction, a prefix is appended to the prompt to indicate the transaction level. The prefix is of the form TLn:, where n is the transaction level. For example, if you are in the User namespace and you enter the ObjectScript command TSTART, the prompt changes as follows:
USER>tstart
TL1:USER>
If you exit the Terminal, that rolls back the transaction.
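A short transcript sketch of a complete transaction at the Terminal prompt (the global name ^MyData is made up for illustration). TCOMMIT ends the transaction and removes the TL1: prefix; TROLLBACK would discard the change instead:
USER>TSTART
TL1:USER>SET ^MyData(1)="pending"
TL1:USER>TCOMMIT
USER>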
The Program Stack Level If an error occurs, a suffix is added to the prompt to indicate the program stack level. For example: USER 5d3> Enter the Quit command to exit the debug prompt. Or debug the error; see the chapter “Command-line Routine Debugging” in the book Using Caché ObjectScript. The TSQL Shell To access the TSQL shell, type DO $SYSTEM.SQL.TSQLShell() and press Enter. The prompt is then displayed with the string (:TSQL), as follows: USER>DO $SYSTEM.SQL.TSQLShell() Current settings :- No current settings Compiler is NEW USER:TSQL> To exit the TSQL shell, enter the ^ command. For information on the TSQL shell, see the Caché Transact-SQL (TSQL) Migration Guide. The MV Shell To access the MV shell, type MV and press Enter. The prompt is then displayed with a colon (:) at the end rather than a right angle bracket (>), as follows: USER>MV USER: If the MV shell has not yet been initialized, you may see messages before this prompt. To exit the MV shell, enter the Quit command. For information on the MV shell, see the chapter “Starting MultiValue” in the book Using the MultiValue Features of Caché. Do not open another MV shell from within an MV shell. Operating-System Shells In the Terminal, you can also open various operating-system shells. To do so, type ! and press Enter. The Terminal then opens your default operating-system shell, and the prompt shows the working directory. For example: USER>! c:\intersystems\cache\mgr\user\> On Macintosh, you cannot open the C-shell this way; you receive a permission denied error. You can, however, use other shells (Bash, Bourne, or Korn). To exit the shell, use the quit or exit command as appropriate for the shell. Interrupting Execution in the Terminal To interrupt the Terminal and stop any foreground execution, use one of the following key combinations: Ctrl+C — Use this if the Windows edit accelerators option is not enabled. Ctrl+Shift+C — Use this if the Windows edit accelerators option is enabled. For information on Windows edit accelerators option, see the section “User Settings,” in the chapter “Controlling the Appearance and Behavior of the Terminal.” Exiting the Terminal To exit the Terminal, do either of the following: Select File > Exit. Press Alt+F4. This causes this copy of the Terminal to exit, closing any open files and stopping any foreground execution. If this Terminal was connected to a server at startup, it exits on its own when the communications channel is closed. If you accessed this Terminal via Cache Telnet in the InterSystems Launcher, then it does not exit automatically when the communications channel is closed; instead it remains active so you that can connect again via the Connect menu.
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GTER_intro
CC-MAIN-2021-17
refinedweb
1,617
55.13
09-12-2012 06:02 AM
Hi, I recently moved my dev environment to a new Mac, so I had to reinstall everything. I installed the BB SDK 3.0 and pointed it to AIR SDK 3.4. I can compile an app, and I migrated all signing and Debug Tokens (all of which is alright, Cascades apps are OK), but whenever I try to launch my AIR app on the BB10, it opens and immediately quits before anything; even a simple HelloWorld fails. Is it because I compiled against AIR SDK 3.4? The dev site clearly states "Adobe AIR SDK 2.6 or later", so what am I missing? Thanks!
09-12-2012 06:49 AM - edited 09-12-2012 06:50 AM
There has been some confusion about this recently, and IMO RIM should clear it up so that your experience is not repeated. My understanding from other threads on this is that the installer asks you for a path to an AIR SDK that is version 2.6 or greater (it's been a long time since I last installed, so I'm going on the comments of others). However, the system requirements on the AIR BB10 SDK state you need AIR 3.1 or higher. And on top of that, my understanding is that AIR 3.4 support won't be in BB10 for a while as it's not top priority. So, based on what seems to be working for me and for others, I'd recommend you grab the 3.1 AIR SDK and reinstall your BB10 SDK from scratch, integrating it with your AIR 3.1 SDK, if that won't cause too many problems for you. RIM, please get your system requirements in sync across all SDK installers, system requirements web pages, etc. Basic issues like this one should not be tripping up one developer after another.
09-12-2012 07:42 AM
Thanks, this confirms what I was thinking. I can't seem to find a link to an older AIR SDK on the Adobe site; everything redirects to 3.4...
09-12-2012 07:58 AM
I found this link to legacy SDK downloads with a quick web search:
09-12-2012 08:10 AM
:-) my bad, I admit I didn't google past the first 3 results of "air sdk 3.3" on that one... thanks for the link, will try and report...
09-12-2012 03:54 PM
I uninstalled everything and re-installed again using AIR SDK 3.3, but still no luck... So I checked my -app.xml descriptor file and changed the namespace version in the first line from 3.3 to 3.1. This did the trick; my app now launches normally, though the AIR SDK is still 3.3. This is a bit weird... I thought BB10 was on AIR 3.3.
09-12-2012 04:52 PM
Right now we only support AIR version 3.1. As Tim mentioned here, we will be updating to either 3.3 or 3.4 in the future. I will test this, but your application should build if you have the latest SDK installed and target AIR 3.1. I will make sure that we clear up any confusion in our documentation and SDK installer. Regards, Dustin
09-12-2012 05:23 PM
It seems I'm wrong and you will need the 3.1 AIR SDK. I believe AIR 3.1 is installed by default in Flash Builder 4.6, though. I will talk to the docs team about making it more clear for developers. Cheers, Dustin
09-13-2012 08:29 AM - edited 09-13-2012 08:30 AM
@dmalik - thanks! Please advise the docs folks to make sure the developer guides and system requirements pages for the command-line tools and other non-Adobe tools are updated to match.
09-13-2012 01:26 PM
Thanks for the clarification. A few details: I'm using FDT, and I compiled with AIR 3.3 but with the 3.1 namespace, and successfully deployed my app to the BB10 Dev Alpha... Just to avoid any possible incompatibility problems, I'll reinstall the 3.1 SDK...
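For reference, the "namespace" the posters above are changing is the XML namespace on the root element of the AIR application descriptor (-app.xml). A minimal illustration, with everything else omitted (the element contents depend on your project):
<?xml version="1.0" encoding="utf-8"?>
<!-- Targeting the AIR 3.1 runtime, as recommended earlier in the thread -->
<application xmlns="http://ns.adobe.com/air/application/3.1">
    <!-- id, versionNumber, initialWindow, and other settings go here -->
</application>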
https://supportforums.blackberry.com/t5/Adobe-AIR-Development/Which-AIR-SDK-version-to-use-for-BB10/m-p/1904107
CC-MAIN-2016-40
refinedweb
710
82.24
Completing Tag Names

MPS automatically completes the names of tags, attributes and their values in XML files, based on the DTD or schema the file is associated with. If there is no schema association, MPS will use the file content (tag and attribute names and their values) to complete your input. Completion for taglibs and namespaces is also available. You can have MPS do one of the following:

- Insert a declaration of the taglib the tag in question belongs to.
- Import the desired taglib and insert all required import and reference statements.

To invoke tag name completion

- Press < and start typing the tag name. MPS displays the list of tag names appropriate in the current context. Use the Up and Down arrow keys to scroll through the list.
- Press Enter to accept a selection from the list, or continue typing the tag name manually. If the selected tag belongs to a taglib that is not yet declared, MPS adds the declaration of the selected taglib.

To import a taglib

- Start typing the taglib prefix and press Alt+Insert.
- Select the desired taglib from the list of suggestions and press Enter. MPS imports the selected taglib and adds the import statement automatically.
https://www.jetbrains.com/help/mps/3.3/completing-tag-names.html
CC-MAIN-2017-17
refinedweb
182
64.71
Comet Support for MLFlow

Comet has extensive support for users of MLFlow. In fact, Comet can support using MLFlow in two different ways:

- Comet built-in, core support for MLFlow
- Comet for MLFlow Extension

We'll explore these two methods in the sections below.

1. Comet built-in, core support for MLFlow

If you're already using MLFlow, then Comet will work out of the box with MLFlow. First install the Comet Python SDK and set your API key. Once you have comet_ml installed, you can simply run any MLFlow script from the console as follows:

comet python mlflow_script.py

Alternatively, you can add this one line of code to the top of your training MLFlow script:

import comet_ml

and run your MLFlow script as you normally would. With either method, you will get much more data about your run beyond MLFlow's regular functionality. We'll detail the additional benefits below.

How it Works

Comet's built-in, core support for MLFlow will attempt to create a live, online Experiment if a Comet API Key is configured. If a Comet API Key cannot be found, you will see the following log message:

No Comet API Key was found, creating an OfflineExperiment. Set up your API Key to get the full Comet experience:

In the case that no API key is found, the Comet SDK will still create an OfflineExperiment, so you will still get all the additional tracking data from Comet.ml. Just remember to upload the offline experiment archive later. In fact, at the end of the run, the script will provide you the exact command to run, similar to the following:

comet upload /path/to/archive.zip

Any future experiment runs created with this script will automatically include Comet's extended experiment tracking to MLFlow. When you run MLFlow by importing comet_ml or by using the command-line comet python script.py you will automatically log all of the following items to comet.ml:

- Metrics
- Hyperparameters
- Models
- Assets
- Source code
- git repo and patch info
- System Metrics
- CPU and GPU usage
- Python packages
- Command-line arguments
- Standard Output
- Installed OS Packages

Info

For more information on using comet in the console, see Comet Command-Line Utilities.

Now, we'll explore the other support method for MLFlow users.

2. Comet for MLFlow Extension

If you would like to see your previously run MLFlow experiments in Comet, try out the comet_for_mlflow extension. To do this, first download the open-source Python extension and command-line interface (CLI) command:

pip install comet-for-mlflow
comet_for_mlflow

The Comet for MLFlow Extension will find any existing MLFlow runs in your current folder and make those available for analysis in Comet. For more options, use comet_for_mlflow --help and see the following section.

The Comet for MLFlow Extension is an open-source project and can be found at: github.com/comet-ml/comet-for-mlflow/

We welcome any questions, bug fixes, and comments there.

Advanced Comet for MLFlow Extension CLI Usage

comet_for_mlflow has a variety of options to help you get the most out of previously run MLFlow runs with Comet.
You can use the following flags with comet_for_mlflow:

- --upload: automatically upload the prepared experiments to comet.ml
- --no-upload: do not upload the prepared experiments to comet.ml
- --api-key API_KEY: set the Comet API key
- --mlflow-store-uri MLFLOW_STORE_URI: set the MLFlow store uri
- --output-dir OUTPUT_DIR: set the directory to store prepared runs
- --force-reupload: force re-upload of prepared experiments
- -y, --yes: answer all yes/no questions automatically with 'yes'
- -n, --no: answer all yes/no questions automatically with 'no'
- --email EMAIL: set email address if needed for creating an account

For more information, use comet_for_mlflow --help or see github.com/comet-ml/comet-for-mlflow.

MLFlow Logging

As mentioned, Comet supports MLFlow users through two different approaches:

- Comet built-in, core support for MLFlow
- Comet for MLFlow Extension

The first is useful for running new experiments, and requires that you import comet_ml or use comet python script.py. The second is useful for previously run MLFlow experiments, and requires the comet-for-mlflow extension. There are some differences between the way these two methods operate. Specifically:

Comet for MLFlow Support Limitations

When running the MLFlow built-in, core support, there are two limitations:

- Does not support MLFlow nested runs.
- Does not support continuing a previous MLFlow run. The MLFlow extension will create a new Comet Experiment in that case.
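To make the built-in, core support described above concrete, here is a minimal sketch of what a training script run this way could look like. The file name and the logged parameter and metric values are placeholders; the only Comet-specific part is importing comet_ml at the top of the script, as described above.

# mlflow_script.py - minimal sketch; the logged values are placeholders
import comet_ml  # imported first so Comet can hook into the MLFlow run

import mlflow

with mlflow.start_run():
    # Log a hyperparameter and a metric exactly as in plain MLFlow.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.93)

Running this with comet python mlflow_script.py (or with plain python, since comet_ml is imported at the top) should produce the usual MLFlow run plus the corresponding Comet experiment.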
https://www.comet.ml/docs/python-sdk/mlflow/
CC-MAIN-2020-40
refinedweb
741
58.82
I have a Python application from which I am trying to connect to a couple of URLs. When I run the application locally, there is no error. When I build a Docker image for the application and run it, there is also no error. However, when I deploy the Docker image into a Service Fabric cluster, I get the error below. From a similar question, I understand that this may be related to DNS/proxy issues. Can you suggest some steps to fix this in the Service Fabric cluster?

Code:

import urllib.request as r
r.urlopen('').close()

Traceback:

socket.gaierror: [Errno 11002] getaddrinfo failed

During handling of the above exception, another exception occurred:

urllib.error.URLError: <urlopen error [Errno 11002] getaddrinfo failed>

This forum is for Azure Stack, a hybrid cloud platform that lets you use Azure services from your company's or service provider's datacenter. Azure Service Fabric is no longer supported on MSDN. Your best place to ask this question is at Microsoft's new forum - Microsoft Q&A under the azure-service-fabric tag.
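Since a getaddrinfo failure usually means the process cannot resolve the host name, a reasonable first step inside the cluster is to test name resolution directly and, if outbound traffic has to go through a proxy, to configure urllib for it explicitly. The sketch below is illustrative only: the host name and proxy address are placeholders, and whether a proxy is needed at all depends on how the cluster's network is set up.

# dns_proxy_check.py - hypothetical diagnostic sketch
import socket
import urllib.request

host = "example.com"  # placeholder for the real host being called

# 1. Check whether DNS resolution works at all on the cluster node/container.
try:
    print(socket.getaddrinfo(host, 443))
except socket.gaierror as exc:
    print("DNS resolution failed:", exc)

# 2. If outbound traffic must go through a proxy, tell urllib about it
#    (or set HTTP_PROXY/HTTPS_PROXY environment variables for the process).
proxy = urllib.request.ProxyHandler({
    "http": "http://your-proxy:3128",   # placeholder proxy address
    "https": "http://your-proxy:3128",
})
urllib.request.install_opener(urllib.request.build_opener(proxy))

urllib.request.urlopen("https://" + host).close()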
https://social.msdn.microsoft.com/Forums/azure/en-US/bc405663-8bb1-4f25-85d7-0fda2d81e0cc/how-to-allow-accessing-urls-from-service-fabric-cluster?forum=AzureStack
CC-MAIN-2020-10
refinedweb
188
58.99
The system tray is located in the Windows Taskbar, usually at the bottom right corner next to the clock. It contains miniature icons for easy access to system functions such as antivirus settings, printer, modem, sound volume, battery status, and more. Double-click or right-click an icon to view and access the details and controls. The system tray was first introduced with Microsoft Windows 95, and is now used in Windows 98, 98SE, Me, NT 4.0, 2000, and XP. Since Java 1.5 does not have an API that allows you to add icons to the system tray, you need to use a third-party library which uses JNI (Java Native Interface) to implement this functionality. JDIC (JDesktop Integration Components) allows creation of a tray icon on the desktop (in the System Tray Area for Windows platforms, or in the Notification Area for Unix platforms), with a caption (text), an animated icon, and an associated Swing menu containing icons. It can also display a tooltip when the mouse hovers over the tray icon. The following example shows how to implement such functionality. You may look at the JDIC documentation for further details.

import java.awt.event.*;
import javax.swing.*;
import org.jdesktop.jdic.tray.*;

public class TestTray {

    public TestTray() {
        // Build the Swing popup menu associated with the tray icon.
        JPopupMenu menu = new JPopupMenu("Tray Icon Menu");
        menu.add(new JMenuItem("Test Item"));
        menu.addSeparator();

        JMenuItem quitItem = new JMenuItem("Quit");
        quitItem.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent evt) {
                System.exit(0);
            }
        });
        menu.add(quitItem);

        // Resource file "duke.gif" must exist in the same directory
        // as this class file.
        ImageIcon icon = new ImageIcon("duke.gif");
        TrayIcon ti = new TrayIcon(icon, "JDIC Tray Icon API Test", menu);

        // Action listener for left click.
        ti.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                JOptionPane.showMessageDialog(null, "JDIC Tray Icon API Test!",
                        "About", JOptionPane.INFORMATION_MESSAGE);
            }
        });

        SystemTray tray = SystemTray.getDefaultSystemTray();
        tray.addTrayIcon(ti);
    }

    public static void main(String[] args) {
        new TestTray();
    }
}
http://www.java-tips.org/other-api-tips/jdic/adding-icons-to-system-tray-4.html
CC-MAIN-2014-15
refinedweb
343
50.02
Kisan Patel

The System.Text.StringBuilder class is very similar to the System.String class, but the difference is that the StringBuilder class is mutable (mutable means that the state of an object can be modified by operations on it). Unlike with the string class, you must first call the constructor of a StringBuilder to instantiate an object.

string str = "This is a string!";
StringBuilder sb = new StringBuilder("This is the StringBuilder!");

StringBuilder is somewhat similar to ArrayList and other collections in the way that it grows automatically as the size of the string it contains changes. Hence, the capacity of a StringBuilder may be different from its length. Some of the more common members of the StringBuilder class are the Length, Capacity and MaxCapacity properties and the Append(), Insert(), Remove(), Replace() and ToString() methods.

The following program demonstrates the use of the StringBuilder class in C#.

using System;
using System.Text;

namespace StringBuilderDemo
{
    class Program
    {
        public static void Main(string[] args)
        {
            StringBuilder sb = new StringBuilder("This is the StringBuilder class!");
            String str = " Great!!!";

            Console.WriteLine("Length of StringBuilder sb is: " + sb.Length);
            Console.WriteLine("Capacity of StringBuilder sb is: " + sb.Capacity);
            Console.WriteLine("StringBuilder sb after appending: " + sb.Append(str));
            Console.WriteLine("StringBuilder sb after removing: " + sb.Remove(4, 3));
            Console.ReadLine();
        }
    }
}

The output of the above C# program…
http://csharpcode.org/blog/stringbuilder-class/
CC-MAIN-2019-18
refinedweb
207
61.02
I uploaded my core mod to curse and I have no clue how to set it so that it will say that on my other mod anyone know what to do - - SoggyStache - Registered User - Member for 5 years and 26 days Last active Sun, Apr, 12 2020 13:33:35 - 24 Followers - 262 Total Posts - 0 Thanks - May 31, 2016SoggyStache posted a message on Adding dependencies to modsPosted in: General Discussion 0SoggyStache posted a message on nei conflictionsPosted in: General Discussion Download win-rar or 7-zip something like that right click the jar and open with... (Whatever program you chose) then open the assets folder. Whatever the folder is called in the assets folder is the mod-id. Keep in mind mods like galacticraft have more than one folder in there. 0SoggyStache posted a message on nei conflictionsPosted in: General Discussion If the mod-id for two mods is the same it wont work. There may be multiple of the same mod. Or you are using the incorrect version of a mod 0SoggyStache posted a message on Server Version?Posted in: General Discussion The best way is too setup the server then install forge too it once that is done then you should run it and get a mods folder just drag the mods from the modpack into that folder and then run the game using the forge server launcher not minecrafts. need more help tell me here so other can find this later 0SoggyStache posted a message on How can you tell what mod in a group of mods is causing a lag spike?Posted in: General Discussion You should just take out one mod at a time and go on a test world and see if that does anything(EX: Remove (Lets just say orespawn) play around for a bit if there is no lag take note that orespawn was the cause if there is lag take note that orespawn is ok) 0SoggyStache posted a message on I need help please ... every mod pack crashesPosted in: General Discussion Try reinstalling the game by backing up important game files (ex: worlds) and delete everything in your .minecraft folder reinstall and do everything again and then see if it works. I did that before when a mod pack I was using didnt work and now it does. If that doesnt do anything make sure your on the right version of forge 0SoggyStache posted a message on Forestry - ServerSide CrashPosted in: General Discussion A lot of mods dont have server support the mods you are adding most likely dont have support. All my mods work fine client side but non of them can be used with a server. 0SoggyStache posted a message on wheres a good places to host a modpack filePosted in: General Discussion Mediafire (will definatly work )or just make a wix or weebly site I think they have built in file hosts 0SoggyStache posted a message on Modded MinecraftPosted in: General Discussion Everyone here knows how to mod the game. And there are a ton of easy tutorials on making a modded server. 0SoggyStache posted a message on projectIDPosted in: General Discussion In game click the mods tab on the title screen and the mod-id is on there(most of the time). 0SoggyStache posted a message on I can't install mods anymore!Posted in: General Discussion Reinstall the curse client. 0SoggyStache posted a message on How can you tell what mod in a group of mods is causing a lag spike?Posted in: General Discussion You can either just remove mods to find the source or use optifine to maybe help the problem. 
0SoggyStache posted a message on 1.9 Crash helpPosted in: General Discussion What mods were you using 0SoggyStache posted a message on Mac - WHEN WILL IT COMEPosted in: General Discussion If you cant get certain things on mac I recommend you go to a store and purchase a windows disc so you can run it on your computer (my friend has a mac and runs windows on it so I know this works) or if you don't want to purchase it you could download it somewhere and use it with a virtual machine like oracle's virtual box. 0SoggyStache posted a message on Help me Please :3Posted in: Ideas The following is the exact code for the minecraft coal item maybe try that package net.minecraft.item; import cpw.mods.fml.relauncher.Side; import cpw.mods.fml.relauncher.SideOnly; import java.util.List; import net.minecraft.client.renderer.texture.IIconRegister; import net.minecraft.creativetab.CreativeTabs; import net.minecraft.util.IIcon; public class ItemCoal extends Item { @SideOnly(Side.CLIENT) private IIcon field_111220_a; private static final String __OBFID = "CL_00000002"; public ItemCoal() { this.setHasSubtypes(true); this.setMaxDamage(0); this.setCreativeTab(CreativeTabs.tabMaterials); } /** * Returns the unlocalized name of this item. This version accepts an ItemStack so different stacks can have * different names based on their damage or NBT. */ public String getUnlocalizedName(ItemStack p_77667_1_) { return p_77667_1_.getItemDamage() == 1 ? "item.charcoal" : "item.coal"; } /** * returns a list of items with the same ID, but different meta (eg: dye returns 16 items) */ @SideOnly(Side.CLIENT) public void getSubItems(Item p_150895_1_, CreativeTabs p_150895_2_, List p_150895_3_) { p_150895_3_.add(new ItemStack(p_150895_1_, 1, 0)); p_150895_3_.add(new ItemStack(p_150895_1_, 1, 1)); } /** * Gets an icon index based on an item's damage value */ @SideOnly(Side.CLIENT) public IIcon getIconFromDamage(int p_77617_1_) { return p_77617_1_ == 1 ? this.field_111220_a : super.getIconFromDamage(p_77617_1_); } @SideOnly(Side.CLIENT) public void registerIcons(IIconRegister p_94581_1_) { super.registerIcons(p_94581_1_); this.field_111220_a = p_94581_1_.registerIcon("charcoal"); } } - To post a comment, please login or register a new account. 0
https://minecraft.curseforge.com/members/soggystache
CC-MAIN-2020-24
refinedweb
934
59.74
This article describes how to automate the provisioning of cloud inventory and is based on how the author's own company went about doing so. The SaaS offering from Sastra Technologies, a firm that I co-founded, promises customers their very own database, which means that each customer has a separate database for its operational data. This puts a lot of pressure on our engineering team to ensure the database is provisioned and the SaaS widgets are up and running within minutes of the customers signing up. In the beginning, we were inclined to run a few shell scripts and have these set up by an engineer; however, we soon realised that our customers are based in the UK and could sign up while we were asleep. We had to enable this by automating the entire provisioning process. We initially looked at Puppet, Chef and FAI, but these solutions had a pricing plan and, being a start-up, our aim was to conserve funds. So we decided to roll out our own provisioning scripts using the Digital Ocean API.

The case for automation
We had several compelling reasons for automating our provisioning. The primary reason was to guard ourselves against our inability to scale and provide infrastructure in case there was a flood of sign-ups, especially in the middle of the night. Automation would also ensure that subsequent environments would be identical to those set up previously; this is important because we didn't want components to fail due to differences in the versions of the underlying infrastructure components.

The background
Digital Ocean (DO) is a cloud computing provider and is ranked 15th among hosting companies in terms of Web-facing computers, according to a news item in Netcraft (), and as of writing this article, has just announced a new region in London. As a company, we host on several of its servers. The rest of this article is about our experience in automatically provisioning the DO infrastructure.

An overview of the Digital Ocean API (DO API)
The Digital Ocean API is a RESTful API, which means that users can access the functions using HTTP methods. The API allows you to manage the resources in a programmatic way: you can create new droplets (instances), resize them, install additional packages and do a lot more.

The solution diagram
Figure 1 gives a view of the various components that were included in the technology stack. Those highlighted are the ones that need to horizontally scale out, and the rest of this article discusses how we accomplished this.

Rolling out the shell script using the DO API
To roll out your own scripts you will need to know UNIX shell programming, some Python and the Digital Ocean API reference. We chose to use Python because of its simple but powerful command set. You will also need to register and set up a Digital Ocean account. Though not an absolute necessity, prior experience in setting up the infrastructure would help. So let's get started by creating our first Droplet programmatically.

Spinning a new Droplet
The first step in provisioning is to instantiate a virtual server, which Digital Ocean calls a Droplet; so let's first spin a Droplet. Fire up your editor, key in the following Python code and save it as DON-Droplet.py:

import urllib
import urllib2

def main(DropletName):
    SizeID = GetSizeID("2GB")
    OSID = GetImageID("CentOS 6.4 x32")
    RegID = GetRegID("Singapore 1")
    SshID = GetSSH("[email protected]")
    if (SizeID == "ERROR" or OSID == "ERROR" or RegID == "ERROR" or SshID == "ERROR"):
        print "Size/OS/Region/Ssh ID Not Found. So Exiting..."
        return
    print "Size ID::[" + SizeID + "] OS ID::[" + OSID + "] Region ID::[" + RegID + "] SSH ID::[" + SshID + "]"
    print "Creation of Droplet...Start"
    print "DropLet Name::[" + DropletName + "]"
    CreateDroplet(DropletName, SizeID, OSID, RegID, SshID)
    print "Creation of Droplet...End"
    return

The main function allows us to specify the size of the Droplet (yes, for now we have hardcoded it!), the image ID of the OS that you want to install, the ID of the region in which you want to create your Droplet and the SSH keys that you want to install. Each of these values is passed to the respective functions to check if they are valid before we create the Droplet with those values. For example, to check if the size we have specified is valid and available, we use the following function:

def GetSizeID(SizeName):
    RespArr = GetDON("sizes")
    if RespArr == "ERROR":
        print "Problem in getting sizes from DON."
        return "ERROR"
    for RespRow in RespArr:
        RespRow = Clean(RespRow)
        #print "arr entries->", RespRow
        Flds = dict(Fld.split(":") for Fld in RespRow.split(","))
        if Flds["name"] == SizeName:
            print "Size::[" + SizeName + "] id::[" + Flds["id"] + "]. Found"
            return Flds["id"].strip()
    print "Size::[" + SizeName + "] Not Found."
    return "ERROR"

We query the API with GetDON("sizes") to get the list of the available sizes. The API returns an array with the list of available sizes and we parse the array to check if we have the size that's specified by the user in the main function. If we have the required size, the rest of the checks like Image ID and Region ID are performed by the respective functions: GetImageID("CentOS 6.4 x32"), GetRegID("Singapore 1"), and GetSSH("[email protected]"). If any of these checks fail, we abort Droplet creation. If the checks are successful, we proceed to create the Droplet using CreateDroplet(DropletName, SizeID, OSID, RegID, SshID). The Python function to create a Droplet takes the name, size, OS Image ID, Region ID and the SSH key as arguments, and uses the RESTful API to create the Droplet. A word of caution: the API keys provided here are dummy keys, just for illustrating the flow of the code. You will have to obtain your keys by registering with Digital Ocean.

def CreateDroplet(Name, Size, OS, Reg, Passkey):
    #Copying DON's Parameters..
    data = {}
    data["client_id"] = "xj53GXMazSf3NCCznoL"
    data["api_key"] = "941c3d1a0240e900ae450848c94"
    data["name"] = Name
    data["size_id"] = Size
    data["image_id"] = OS
    data["region_id"] = Reg
    data["ssh_key_ids"] = Passkey
    URL_Values = urllib.urlencode(data)
    #Connect to DON for values of APIKey...
    URL = "?"
    Full_URL = URL + URL_Values
    print "Droplet Creation URL->[" + Full_URL + "]."
    print "Connecting DON to create droplet"
    print "URL Execution Start..."
    data = urllib2.urlopen(Full_URL)
    DON_Result = data.read()
    print "Droplet Creation Response::[" + DON_Result + "]"
    print "URL Execution End."
    return

That's all it takes to create a Droplet. Since we used an SSH key, the root password will not be emailed to us. Log in to the new Droplet using SSH and you'll be prompted for the password, since we haven't yet disabled password authentication in the sshd_config configuration. So you'll have to go to the Web console and request your password, or you should not use the SSH keys while creating the Droplet! Let's now create the users we require and install our infrastructure components: MySQL, PHP, NGINX, Munin, APC, Memcached and Postfix.

Setting up a Droplet
Before installing the components, first set up the time zone, create users, add them to a group and set up the firewall rules.
In our case, we set up the time zone to IST, created users, added them to WHEEL (so that they have super cow powers), and then closed all ports except those we required. You can create this as a shell script called droplet-admin.bash or download it from. Run the script to make the above changes, or you can do it one by one.

Deploying the cloud stack
Let us now write a script to install PHP Fast CGI, MySQL, Nginx, APC, memcached and Munin. Let's start with the script for installing PHP-fCGI. Choose fCGI instead of the conventional PHP module as the former is known to have a lower memory footprint. Create a php-install.bash file with the following contents:

yum install php php-fpm -y
rpm -ivh
yum install php-mcrypt -y
yum install php-gd php-imap -y
echo cgi.fix_pathinfo=0 >> /etc/php.ini
echo date.timezone = America >> /etc/php.ini
service php-fpm start
service php-fpm status

This script installs php and php-fpm. It then downloads php-mcrypt, php-gd and php-imap from the epel repositories and installs them. Php-fpm requires cgi.fix_pathinfo=0 to be set in the php.ini file, which is done by the echo command. The script then automatically starts php-fpm. After PHP, the next thing to be installed is MySQL. Create mysql-install.bash by using the following commands:

yum install mysql mysql-server -y
chkconfig --levels 235 mysqld on
service mysqld start
service mysqld status

The script installs MySQL and configures it to start up automatically when the server starts up. The script currently doesn't remove the demo database. You might want to include that step. NGINX is not available from the official CentOS repositories and the official package has to be downloaded from the NGINX site. Create nginx-install.bash with the following lines. This will enable the appropriate repositories and install NGINX:

wget
rpm -ivh nginx-release-rhel-6-0.el6.ngx.noarch.rpm
yum install nginx -y
chkconfig nginx on
service nginx start
service nginx status

Our next step is to install APC or the Alternative PHP Cache, which is available in PECL. Create apc-install.bash with the following lines. This will install APC.

yum install php-pear php-devel httpd-devel pcre-devel gcc make -y
pecl install apc
echo extension=apc.so > /etc/php.d/apc.ini

Next, we need to install memcached. Just create memcached-install.bash with the following command:

yum install memcached -y

Any technology stack needs to be monitored, for which we use Munin. To install Munin, create munin-install.bash with the following commands:

yum --enablerepo=epel install munin munin-node -y
/etc/init.d/munin-node start
chkconfig munin-node on
service munin-node status

We now have the individual scripts to install the various components of our stack. We can create a master script, infra-install.py, to chain these individual shell scripts. You can download infra-install.py from. To provision your Droplet and have it ready, all you need to do is to run infra-install.py (ensure all your scripts have the requisite permissions for executing it).

Other methods
The other method of provisioning hosting infrastructure is to use one of the several products available, like Puppet (), Chef (), CFEngine (), Cobbler (), FAI (), Kickstart (), BCFG2 () or Vagrant ().

Scope for improvement
For the sake of brevity, we have included the essential commands to get you started on auto-scaling your infrastructure. But there are a few things that you should include to improve these scripts.
- Currently, we have to log in once before we execute the other commands because, though we have provided SSH keys for the root user at the time of creating the Droplet, we haven't disabled password authentication in the sshd_config file. Though we create users, the script doesn't automatically copy the public keys for the users. You can add a few commands to automatically copy the SSH keys to the respective HOME directories and disable the password authentication mechanism.
- After installing MySQL, it is a good practice to remove the test databases and anonymous users. The script currently doesn't do this.
- You can add AWStats to the list of infrastructure components.
- You might want to run this suite of scripts as a Jenkins job instead of manually running it.
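The master script itself is not listed above, so the following is only a rough sketch of how infra-install.py could chain the individual shell scripts from Python; the script names are the ones created earlier, and the ordering and error handling are assumptions.

# infra-install.py - illustrative sketch only
import subprocess
import sys

SCRIPTS = [
    "droplet-admin.bash",
    "php-install.bash",
    "mysql-install.bash",
    "nginx-install.bash",
    "apc-install.bash",
    "memcached-install.bash",
    "munin-install.bash",
]

for script in SCRIPTS:
    print("Running " + script + " ...")
    ret = subprocess.call(["bash", script])
    if ret != 0:
        print(script + " failed with exit code " + str(ret))
        sys.exit(ret)

print("All components installed.")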
http://opensourceforu.com/2015/12/automate-the-provisioning-process-for-cloud-inventory/
CC-MAIN-2018-09
refinedweb
1,893
63.09
Using Configuration Files in Visual C++

A Configuration File

A configuration file is simply XML. The elements are defined for you, and it's extendable so that you can add your own elements if the standard ones aren't meeting your needs. Here's an example:

<configuration>
  <appSettings>
    <add key="connstr"
         value="Provider=Microsoft.Jet.OLEDB.4.0; Data Source=C:\deptstore.mdb" />
  </appSettings>
</configuration>

The entire file is a <configuration> element. Within that is an <appSettings> element; other elements not shown here can be used to instruct the .NET Framework how to handle this assembly while it's running. For example, .NET Remoting is set up using configuration file elements. Within the <appSettings> element are as many <add> elements as you want, each adding a name-value pair. The name of the configuration setting in this example is connstr and the value is "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\deptstore.mdb", a reasonable connection string value. So, where do you put this information? If you're building an application called HelloWorld.exe, the configuration file is called HelloWorld.exe.config; it's always the name of the executable assembly with .config tacked on the end. It belongs in the same folder as the executable assembly. In a minute, I'll show you a neat trick related to that.

Getting the Value from the File

If your configuration file was just any old file of XML, with a name like KatesImportantSettings.xml, what would you have to do to extract the setting from it? Find the file, open it, parse the XML, find the <add> element with a key attribute of "connstr", and then grab the value attribute from that element. The classes in the System.Xml namespace will make that reasonably simple, but you don't even need to do that much work. Here's all you need to do:

ConnStr = Configuration::ConfigurationSettings::AppSettings->get_Item("connstr");

One line of code! The framework takes care of everything for you: reading the config file when the application starts running, and filling in the name-value pairs into an AppSettings property of the ConfigurationSettings class. All you need to do is get the item.

Adding a Configuration File to the Project

At this point, you can just start using a config file in your next application. But there's one small frustration. The config file must be in the same folder as the executable: if you're building a debug version, it's almost certainly in the Debug folder under your project folder, and if you're building a release version, it's almost certainly in a folder called Release. So, you would have to maintain two copies of the file, one in each folder. A better approach is to have a single config file in your project folder, and to copy that file to the Debug or Release folder whenever you build the application. You could call the file anything you want, but you might as well hook into a mechanism that takes care of this automatically in VB and C#. In Solution Explorer, right-click your project and choose Add, Add New Item. Find the Configuration File choice and notice that you can't choose the name; it will be called app.config for you, and created with the enclosing <configuration> element already in place. You then can put your <appSettings> element in and add <add> elements with name-value pairs. If you were working in VB or C#, you'd be done, but the C++ build process doesn't automatically copy the file for you. No problem; we C++ developers have been setting up our own build steps forever.
Right-click the project again and choose Properties. Under the Configuration Properties folder is a list of folders including the Build Events folder; select that one. Inside that folder, select the Post Build Event entry. What you want here is to enter a command in the Command Line setting. For a debug build of an application called HelloWorld.exe, it should read like this: copy app.config Debug\HelloWorld.exe.config Now, you could just type this string in, then flip the Configurations dropdown at the top left to Release and type a similar string, but I prefer to use some macros to make this generic—it won't even need to be edited if I change the name of the application. First, change the Configurations dropdown to AllConfigurations, then enter this command line: copy app.config "$(TargetPath).config" You can get a little help with this: First, type "copy app.config" followed by a space and a double quote into the CommandLine settings, then click the ... button. On the dialog that appears, click the Macros button. Look at all the file names and paths that you see there: These are very useful, and you can see that TargetPath corresponds to the full path to your executable. When you're building a debug version, it goes to the debug folder; release goes to release. Make sure the cursor is at the end of your partially-typed line, then double-click TargetPath in the list at the bottom of the dialog. Then type a closing double quote followed by ".config" at the end of the line and click OK. Click OK on the Properties dialog and your custom-built step is in place. (You need the double quotes if there are spaces in your full path, and there are plenty in a path like C:\Documents and Settings\yourname\My Documents\Visual Studio Projects, which is quite likely the folder where your projects are created.) Because it's a post-build step, it happens only after a successful build. If you have compile or link errors, the post-build steps don't happen. And, if you ever edit app.config without changing any code, you need to build the project to trigger your copy, even though the code doesn't need to be compiled. You might need to do a Rebuild if your code is up to date. Now, every time your project builds, your app.config file will be copied to the appropriately named config file in the right folder. What Kinds of Applications Can Have a Configuration File? The most obvious use of configuration files is in Windows applications, assemblies that end with .exe. You can also use them in console applications and Windows Services. The configuration file must be in the same folder as the executable. ASP.NET applications, which are hardly ever written in C++, have config files called web.config. (Using the same name across applications makes it simpler for IIS to refuse to serve out those files, and for ASP.NET to react to changes in the file and automatically refresh all the name-value pairs.) Class libraries (the assembly ends with .dll) can't use config files of their own—though they can use the config file of the application that is running when the assembly is loaded, which is fine if the class library serves a single application, or you can count on all the applications that use the assembly having the appropriate setting in the configuration file. So, going forward, whenever you have a connection string or some other kind of setting you want to expose to your users, something you want to be able to change without recompiling the application, use a configuration file. 
It's simple and easy: you need to add the file to your project, add a post-build event, type a little XML, and you need one whole line of code in your app for each setting you want to get from the file. That's all there is to it.

Error - posted by ChstrLou on 09/21/2007 06:20pm: Error 1 error C2039: 'get_Item' : is not a member of 'System::Collections::Specialized::NameValueCollection'. I get that when I try to build.
http://www.codeguru.com/cpp/cpp/cpp_managed/asp/article.php/c4873/Using-Configuration-Files-in-Visual-C.htm
CC-MAIN-2015-18
refinedweb
1,317
63.59
Altova SemanticWorks 2008
User and Reference Manual
© 2008 Altova GmbH

Table of Contents

1 Altova SemanticWorks 2008
2 About this Documentation
3 Introduction
3.1 Product Features
3.2 Interface
3.2.1 Main Window
3.2.2 Details Window
3.2.3 Overview Window
3.2.4 Errors Window
3.3 Overview of Usage
4 Tutorial
4.1 OWL Lite Ontology
4.1.1 Creating a New Ontology
4.1.2 Declaring Namespaces
4.1.3 Creating Classes
4.1.4 Creating the Class Hierarchy
4.1.5 Defining Properties
4.1.6 Declaring Instances
4.1.7 Declaring AllDifferent Instances
4.2 OWL DL Ontology
4.2.1 Setting Up an OWL DL Ontology
4.2.2 Creating the Classes
4.2.3 Instances as Class Enumerations
4.2.4 Defining the Properties
4.2.5 Describing Classes and Their Instances
4.3 RDF Documents
4.3.1 Instances for an OWL DL Ontology
    Creating a New RDF Document
    Referencing the Ontology
    Making the RDF statements
4.3.2 Creating a Dublin Core (DC) Document
    Referencing the DC Ontology
    Creating the DC Metadata
5 User Reference
5.1 Toolbar Icons
5.2 Icons in Detail View
5.3 File Menu
5.4 Edit Menu
5.5 View Menu
5.6 RDF/OWL Menu
5.7 Tools Menu
5.7.1 Customize
5.7.2 Options
5.7.3 Namespace Imports for RDF
5.7.4 Namespace Color Assignments
5.7.5 URIref Prefixes, Expand URIref Prefixes
5.7.6 Base URI
5.8 Window Menu
5.9 Help Menu
5.10 Usage Issues
6 Conformance
7 License Information
7.1 Electronic Software Distribution
7.2 Software Activation and License Metering
7.3 Intellectual Property Rights
7.4 Altova End User License Agreement
Index

Chapter 1 Altova SemanticWorks 2008

1 Altova SemanticWorks 2008

Altova® SemanticWorks® 2008 is an RDF document editor and ontology development IDE. It enables you to:

- Graphically create and edit RDF documents, RDF Schema documents, and OWL ontologies.
- Check the syntax and semantics of ontologies as you edit them, and the syntax of RDF documents.
- Convert graphically created ontologies into the RDF/XML and N-Triples formats.

With Altova® SemanticWorks® 2008, therefore, besides being able to edit RDF documents in a GUI and check their syntax, you can design RDF Schema and OWL ontologies using a graphical design view, check the syntax of any RDF Schema or OWL ontology and the semantics of OWL Lite and OWL DL ontologies, and export ontologies in the RDF/XML and N-Triples formats.

Chapter 2 About this Documentation

2 About this Documentation

This documentation is the user manual delivered with SemanticWorks. It is available as the built-in Help system of SemanticWorks, can be viewed online at the Altova website, and can also be downloaded as a PDF, which you can print. The user manual is organized into the following sections:

- Introduction
- Tutorial
- User Reference
- Conformance

We suggest you read the Introduction first in order to get an overview of SemanticWorks features and general usage. You should then go through the tutorial to get hands-on experience of creating OWL Lite and OWL DL ontologies, and of creating and editing RDF documents. For subsequent reference, use the user reference section, which provides a description of all toolbar icons and menu commands. Should you have any question or problem related to SemanticWorks, the following support options are available:

1. Check the Help file (this documentation). The Help file contains a full text-search feature, besides being fully indexed.
2. Check the FAQs and Discussion Forum at the Altova Website.
3. Contact Altova's Support Center.

Chapter 3 Introduction

3 Introduction

This Introduction is organized into the following sections:

Product Features: Lists the main product features of SemanticWorks 2008. Read through this section to get an overview of the capabilities of SemanticWorks.
Interface: Describes the SemanticWorks GUI. This description helps familiarize you with the various views and windows of SemanticWorks, and shows you how they are used.
Overview of Usage: Provides a methodological approach to using SemanticWorks. Lists the steps you would typically take when creating or editing an ontology and an RDF document in SemanticWorks.
System requirements Altova SemanticWorks runs on Windows 2000, Windows XP, Windows 2003 Server, and Windows Vista systems. Other major features SemanticWorks offers the following additional features. Imports can be reloaded at the click of a button whenever required. Ontologies can be saved as .rdf, .rdfs, or .owl files and can be exported in their RDF/XML and N-Triples formats. Multiple ontologies can be edited concurrently in multiple windows. A large range of customization options enables the application interface to be flexibly customized. Options range from GUI font selection to choosing document encoding. Altova SemanticWorks 2008 © 2008 Altova GmbH Introduction Product Features 11 The graphical Detail View of ontology items can be printed as well as saved as an image. © 2008 Altova GmbH Altova SemanticWorks 2008 12 Introduction 3.2 Interface Interface The SemanticWorks application interface consists of: (i) a top part consisting of a Menu Bar and Toolbars; (ii) the windows area; and (iii) a bottom part consisting of the Status Bar. See screenshot below. The windows areas consist of three windows: The Main Window, The Overview Window, and The Errors Window. The Menu Bar contains the various menus, and each menu, with its menu items, are described in separate sections in the User Reference. The toolbars are located below the Menu Bar, and all toolbar icons are described in the Toolbar Icons section in the User Reference. The Main Window, Details Window, Overview Window, and Errors Window are described in more detail in the subsections of this section. The Details Window, Overview Window and Errors Window can be docked within the application window or can float freely. For details about how to change the position of the Details Window, Overview Window and Errors Window, see under the next heading. Note: Multiple documents can be open at a time, and any one can be made the active document by clicking its tab label at the bottom of the Main Window. Text can be copied between the Text Views of documents, but objects in RDF/OWL Views cannot be copied. Moving, positioning, and hiding the Details Window, Overview Window and Errors Window The Details Window, Overview Window and Errors Window can each be docked within the application window or they can each float freely as independent windows. Altova SemanticWorks 2008 © 2008 Altova GmbH Introduction Interface 13 To make the Details Window, Overview Window or Errors Window float, do one of the following: Drag the window's title bar out of its docked position, so that the window floats, or Click the down-pointing arrowhead at the right-hand side of the window's title bar and select Floating. To reposition the Details Window, Overview Window or Errors Window relative to the Main Window or other windows (that is, to dock it), do the following: Drag the window's title bar into the application window till two sets of four blue arrows appear (an inner set and an outer set). Drop the window on one of the four inner arrows or one of the four outer arrows. If you drop it on an inner arrow, the window will dock within the application window and relative to the window over which it was dragged. If you drop it on an outer arrow, the window will be placed along one of the four inside edges of the application window. When the Details Window, Overview Window, or Errors Window is docked, clicking on the down-pointing arrowhead (see screenshot above) also provides the option of hiding the window (menu option Hide). 
Clicking the Close button (at right-hand side of the Details Window, Overview Window, or Errors Window title bar), closes the window. To reopen the Details Window, Overview Window, or Errors Window, select View | Detail View, View | Overview or View | Errors Window, respectively. © 2008 Altova GmbH Altova SemanticWorks 2008 14 Introduction 3.2.1 Main Window Interface SemanticWorks documents are opened in the Main Window, and are viewed and edited in the Main Window. The Main Window has two views, Text View and RDF/OWL View (screenshots below), between which you can switch by clicking the respective tabs. RDF/OWL View window, showing Overview (main pane, with subsidiary pane below). To switch from Overview to Detail View of an item, click the Detail View button to the left of that item. RDF/OWL View RDF/OWL View provides (i) an Overview of the document, and (ii) a Detail View of an item listed in the Overview. To switch from Overview to the Detail View of an item, click the Detail View icon to the left of that item's Overview listing. Notice that the entire Overview (main pane and subsidiary pane) is replaced by Detail View (screenshot below). To switch from the Detail View of an item back to Overview, click the Overview icon Detail View . Altova SemanticWorks 2008 located at the top left of © 2008 Altova GmbH Introduction Interface 15 The Overview of ontologies is structured into five categories of items (see first screenshot in this section): Classes, which lists all ontology classes. If the Show Blank Nodes option (View | Show Blank Nodes) is selected, then anonymous classes are also listed. When a class is selected in the main pane, then the subsidiary pane shows (i) the properties of the class, and (ii) the instances of the class. Properties, which lists all properties in the ontology. When a property is selected in the main pane, then the domain of that property is displayed in the subsidiary pane. Instances (aka individuals), which lists all the ontology's instances of classes. All-Different items, which lists the owl:AllDifferent items in the ontology. Ontologies, which lists all ontologies in the document, including imported and prior-version ontologies. Each category has a tab in the main pane of the Overview, with tab labels located at the top of the main pane. To view the items in a particular category, click that category's tab label. When either the Classes or Properties tab is selected, and an individual class or property, respectively, in that tab is selected, additional information related to the selected class or property is displayed in a subsidiary pane located below the main pane. When a class in the Classes tab is selected, its individuals and properties can be viewed in the subsidiary pane. When a property in the Properties tab is selected, its domain is displayed in the subsidiary pane. The Overview of RDF documents lists items in a single category: Resources. Note: If an ontology imports other ontologies, then the classes, properties, instances, and all-different items of the imported ontologies are also displayed. If an RDF document correctly © 2008 Altova GmbH Altova SemanticWorks 2008 16 Introduction Interface references an ontology, the items of the ontology are displayed as resources in the GUI. In Detail View, you add or edit the details of an ontology item. Items are added by right-clicking an item and selecting the new item to insert from a context menu. Items that can be inserted are listed and described in the section, Icons in Detail View. 
Text View In Text View, you can display and edit a document in its RDF/XML notation (screenshot below) as well as in N-Triples notation. Text View supports syntax coloring, line numbering, source folding of elements, and standard GUI editing features, such as cut-and-paste and drag-and-drop. Altova SemanticWorks 2008 © 2008 Altova GmbH Introduction 3.2.2 Interface 17 Details Window The Details Window (screenshot below) provides a compact and editable description of the item selected in the Main Window. In an ontology, the Details Window is especially useful for creating and editing instances of a class. The Details Window can be toggled between display and hidden modes by clicking the menu command View | Details. The screenshot above shows the details of the instance doc: XMLSpyEnterpriseUserManualENHTML in the Details Window. Compare it with the display in the Main Window, which is shown below. While in the Main Window all the details are displayed graphically, they are displayed in a compact table form in the Details Window and can be edited there. Labels and comments The screenshot above shows the Details Window of the XMLSpyEnterpriseUserManualENHTML instance of the EnglishWebPage class. You can add an rdfs:label or rdfs:comment element to the instance by editing the value field of rdfs:label or rdfs:comment, respectively, and then pressing Enter. The dropdown list for these two elements offers options for the value of the xml:lang attribute that can be defined for them. When a label or comment is added, it appears in a bold, dark gray font. If you wish to delete an rdfs:label or rdfs:comment element, go to the Detail View of the instance, select the instance, right-click the label or comment, and select Remove Label or Remove Comment, respectively. © 2008 Altova GmbH Altova SemanticWorks 2008 18 Introduction Interface Creating and editing properties The properties related to the instance's class are listed in the bottom half of the window, below the black rule. You can edit the values of these properties as well as create new or additional properties. The dropdown list for each property value offers the available options. In the case of literal objects, such as dc:date in the screenshot above, you can toggle between a datatype selection (by clicking DT in the bottom half of the icon) or an xml:lang selection (by clicking icon). When a property is added, it appears in a bold, dark gray Lang in the top half of the font. If you wish to delete a property, switch to Detail View, right-click the property, and select Delete. Note: Double-clicking the diagonal reference arrow of a class or an instance , causes the graph of that class or instance to be displayed in Detail View; details are also displayed in the Details Window. Defining a new instance To define a new instance using the Details Window, do the following: 1. 2. 3. 4. Select the Instances tab. Click the Add New button in the top left of the Instances tab. Name the newly created instance. Since the instance is selected, its details are displayed in the Details Window. From the dropdown list of the type field, select the class for this instance. All the classes in the ontology will be listed in the dropdown list. 5. After the class is selected, the properties related to the class appear in the Property pane of the window. Select the value of each property as required. 6. When you have completed the definition of the instance in the Details Window, save the document. 
Creating and editing properties
The properties related to the instance's class are listed in the bottom half of the window, below the black rule. You can edit the values of these properties as well as create new or additional properties. The dropdown list for each property value offers the available options. In the case of literal objects, such as dc:date in the screenshot above, you can toggle between a datatype selection (by clicking DT in the bottom half of the icon) and an xml:lang selection (by clicking Lang in the top half of the icon). When a property is added, it appears in a bold, dark gray font. If you wish to delete a property, switch to Detail View, right-click the property, and select Delete.

Note: Double-clicking the diagonal reference arrow of a class or an instance causes the graph of that class or instance to be displayed in Detail View; details are also displayed in the Details Window.

Defining a new instance
To define a new instance using the Details Window, do the following:
1. Select the Instances tab.
2. Click the Add New button in the top left of the Instances tab.
3. Name the newly created instance.
4. Since the instance is selected, its details are displayed in the Details Window. From the dropdown list of the type field, select the class for this instance. All the classes in the ontology are listed in the dropdown list.
5. After the class is selected, the properties related to the class appear in the Property pane of the window. Select the value of each property as required.
6. When you have completed the definition of the instance in the Details Window, save the document.
© 2008 Altova GmbH Altova SemanticWorks 2008 22 Introduction 3.3 Overview of Usage Overview of Usage This section broadly describes how SemanticWorks is to be used to: Create and edit ontologies Create and edit RDF documents Creating and editing ontologies When creating or editing ontologies with SemanticWorks, the broad usage procedure is as follows: 1. Create a new ontology document or load an existing ontology document into SemanticWorks. 2. Edit the document in RDF/OWL View. 3. Within SemanticWorks, check document syntax and/or semantics against RDF Schema, OWL Lite, OWL DL, or OWL Full specifications. 4. Save the document as a .rdf, .rdfs, or .owl file. 5. If required, export the document as an N-Triples (.nt) or XML (.xml) file. Steps 1, 4, and 5 in the above process are straightforward. In this section, we briefly discuss how ontologies can be edited in RDF/OWL View (Step 2 above) and checked for correct syntax and semantics (Step 3). Editing ontologies in RDF/OWL View Ontologies are best edited in RDF/OWL View. Text View should be used to check the serialization, in XML format, of the ontology graph that was created or edited in RDF/OWL View. Additionally, Text View can be used to make minor modifications to the XML serialization so that this suits user preferences. However, most editing should be done in RDF/OWL View since this provides a graphical, intuitive, and fast way to edit ontologies. Editing an ontology in RDF/OWL View involves the following processes. There is no strict sequence to be followed, and you will likely find yourself revisiting previous steps and revising various ontology items. Declaring namespaces and their prefixes. This is done at the document level (in the URIref Prefixes dialog (Tools | URIref Prefixes)). Declaring namespaces is important because they are used to identify ontology constructs, items from various vocabularies, and user-defined resources. The RDF, RDFS, and OWL namespaces are declared by default when a new ontology is created. Select the ontology level. You do this using the menu command RDF/OWL | RDF/OWL Level. Selecting the required level is important because: (i) the choice of constructs made available in the GUI, and (ii) the syntax and semantics checks done by SemanticWorks are based on this selection. Setting up the ontology header. The ontology header is created at the ontology level and is optional. It is useful when you wish to import one or more ontologies into the current ontology, or when you wish to record a prior version of the ontology. You create a new ontology in the Ontologies tab of Overview, then switch to Detail View and define the ontology using Detail View editing mechanisms. Creating new ontology items. Ontology items are classes, properties, instances, AllDifferent items, and ontologies. Each such item must first be created in the appropriate Overview tab. Only after the relevant items have been created, should you go to the Detail View of an item to either define attributes of the item (e.g. create restrictions for properties), or define relationships with other items (e.g. define an intersection of classes). Defining and editing items in Detail View. Definitions for ontology items are created and Altova SemanticWorks 2008 © 2008 Altova GmbH Introduction Overview of Usage 23 edited in Detail View by selecting the required properties or relationships from a context menu. SemanticWorks will make only those constructs available that are allowed according to the ontology level you have selected. 
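As a rough illustration of what the ontology-header step produces, an owl:Ontology element carrying owl:imports and, optionally, owl:priorVersion statements is written into the document's RDF/XML. The sketch below is hedged: the imported URIs are placeholders (not URIs used by this manual), and the exact form SemanticWorks writes, such as whether the header is a typed element or an rdf:Description, may differ.

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
           xmlns:owl="http://www.w3.org/2002/07/owl#">
    <!-- Ontology header: rdf:about="" refers to the ontology document itself -->
    <owl:Ontology rdf:about="">
      <rdfs:comment>Example ontology header</rdfs:comment>
      <owl:imports rdf:resource="http://www.example.com/ImportedOntology"/>
      <owl:priorVersion rdf:resource="http://www.example.com/ImportedOntology-1.0"/>
    </owl:Ontology>
  </rdf:RDF>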
Checking the syntax and semantics of ontologies When you edit an ontology in RDF/OWL View, what is created is a graph of the ontology. This graph is serialized in RDF/XML format—which is what is displayed in Text View. The syntax of this document serialization can be checked for conformance. Additionally, OWL Lite and OWL DL documents can be checked for correct semantics against the OWL Lite and OWL DL specifications. Note, however, that the semantics check in SemanticWorks is a partial consistency check. The significance of this is explained in the descriptions of the commands Show Possible Inconsistencies and Semantics Check. To check the syntax of ontology documents and the semantics of OWL Lite and OWL DL documents, you do the following: Select the required specification against which the ontology is to be checked ( RDF/OWL | RDF/OWL Level). Click the Syntax Check or Semantics Check command (RDF/OWL menu) or button (in the Toolbar). If errors are detected, these are reported in the Errors Window, with each error including a link (or links) to the relevant ontology item (screenshot below). Creating and editing RDF documents In RDF/OWL View, when the RDF/OWL level has been set to RDF, resources are listed in the Overview pane of RDF/OWL View. The resources listed here are of two types: (i) those that are made available from an ontology, and (ii) those that you create (or that are present in the RDF document you are editing). In order to make resources from an ontology available in the Resources Overview, you do the following: 1. Import the namespaces used in the ontology. 2. Declare the namespaces required in the RDF document. In the Resources Overview, you create resources as required, and name them. Next, in the Detail View of each resource, you insert predicates, either by entering them or by selecting them from a dropdown list of available resources (which include resources made available from an ontology). The objects of RDF statements can also be inserted either by entering the name of an RDF resource or selecting one from a list of available resources, or by entering a literal value for the object. You can check the syntax of the RDF document at any time. © 2008 Altova GmbH Altova SemanticWorks 2008 24 Introduction Overview of Usage See the RDF Document tutorial for a detailed description of the steps listed above. Alternatively, you can create and edit RDF documents directly in Text View, using RDF/XML or N-Triples notation. Altova SemanticWorks 2008 © 2008 Altova GmbH Chapter 4 Tutorial 26 Tutorial 4 Tutorial In this tutorial, you will do the following: Create an OWL Lite ontology from scratch; Create an OWL DL ontology from scratch; Create a set of RDF resources based on the OWL DL ontology; Create an RDF document that uses Dublin Core vocabulary to describe metadata associated with document-type resources. The OWL Lite and OWL DL ontologies you create are small ontologies that take you through the various features of SemanticWorks. After you have finished creating the ontologies, you will have learned how to use all the features of SemanticWorks you need to quickly construct any type of ontology. The RDF-document-creation part of the tutorial shows you how you can use the resources of an ontology to create and define RDF resources, and includes a tutorial that shows how to create an RDF document that uses the Dublin Core vocabulary. 
Doing the tutorial
The OWL Lite and OWL DL parts of the tutorial start from scratch, so you do not need any file to begin with and can start on these parts as soon as you have successfully installed SemanticWorks. The files created in these two parts of the tutorial are delivered in the folder: C:/Documents and Settings/<username>/My Documents/Altova/SemanticWorks2008/SemanticWorksExamples/Tutorial.

The OWL DL ontology used to create an OWL DL-based RDF document is the ontology you will build in the OWL DL part of the tutorial. It is therefore better to go through the OWL DL part before the part in which you create an RDF document based on this ontology. However, since the required ontology is also available in the folder named above, you can start with the RDF document creation part if you like. The tutorial part for creating an RDF document based on the Dublin Core starts from scratch, so, again, you can start with this part right away.

It is best to do the tutorial in sequence, starting with the creation of an OWL Lite ontology and ending with the creation of an RDF document that uses the Dublin Core vocabulary, because basic usage mechanisms and concepts are explained in detail and incrementally as you proceed. Before starting this tutorial, we also suggest that you read through the Interface section in the Introduction to familiarize yourself with the GUI and the terms used to describe it. Also note that the screenshots in this tutorial draw the graphs in Detail View horizontally, from left to right; this is a setting made in the Options dialog.

About the ontologies and naming conventions
The OWL Lite ontology you will create describes Altova products, while the OWL DL ontology describes Altova documents. These ontologies have deliberately been kept simple and small, so that you can concentrate on the usage mechanisms and become familiar with SemanticWorks instead of being overwhelmed by the complexity of the ontologies. In this tutorial we use the following naming conventions:
- Class names and instance names are capitalized (for example, Documents). If a class name consists of more than one word, the words are run together with each word capitalized (for example, ProductManual).
- Property names start with a lowercase letter (for example, source). If a property name consists of more than one word, the words are run together with the words subsequent to the first capitalized (for example, responsibilityOf).

Note about namespaces
The namespaces used for the AltovaProduct and AltovaDocument ontologies are fictitious; there are no ontologies or any other resources at the locations specified by the URIs in these namespaces.

4.1 OWL Lite Ontology
The OWL Lite ontology you will create describes Altova products. It consists of the following sections:
- Creating a New Ontology: Shows how to create a new ontology document, select an ontology level, and save the ontology document.
- Declaring Namespaces: Explores Text View and explains what namespaces are required in your ontology document. You will declare namespaces for the AltovaProducts vocabulary and for XML Schema datatypes.
- Creating Classes: Describes how classes are created in RDF/OWL View, and explores RDF/OWL View and Text View.
Explains how to delete and re-create classes, and shows how the syntax and semantics of the ontology can be checked in SemanticWorks. Creating the Class Hierarchy: Explains how classes are created as subclasses of another class, and thus how a hierarchy of classes can be built. Describes how the Detail View of RDF/OWL View works. Defining Properties: Shows how to create OWL properties—both object and datatype— and how to define the domain, range, and cardinality of properties. Also shows how relationships between classes and properties can be viewed in SemanticWorks. Declaring Instances: Describes how to create instances, how to define these as instances of a particular class, and how to assign a literal value to an instance. Concludes by showing how created instances can be viewed in the Classes Overview. Declaring AllDifferent Instances: Explains how to collect instances in a group and define each of them as being pairwise different from all other members of the group. The OWL Lite ontology that is the end result of this tutorial part is delivered in the SemanticWorks package as the file AltovaProducts.rdf; it is located in the application folder: C:/Documents and Settings/<username>/My Documents/Altova/SemanticWorks2008/SemanticWorksExamples/Tutorial. Altova SemanticWorks 2008 © 2008 Altova GmbH Tutorial 4.1.1 OWL Lite Ontology 29 Creating a New Ontology In this section, you will learn how to create a new ontology document in SemanticWorks. You will open a blank document in SemanticWorks, select an ontology compliance level (OWL Lite), and save the document, while briefly exploring the SemanticWorks interface. Creating a new ontology document in SemanticWorks To create a new ontology document in SemanticWorks, do the following: 1. Start SemanticWorks by clicking the SemanticWorks shortcut in the Quick Launch tray or via the Start | All Programs menu item. SemanticWorks starts. The application window looks something like this. Notice that there are three windows: (i) the Main Window; (ii) the Overview Window; and (iii) the Errors Window. 2. If the windows are not arranged as shown in the screenshot above, try arranging it in this way. Do this by dragging the title bar of the Overview Window and dropping it on the left-pointing arrow that appears in the Errors Window. The Errors Window itself should be located at the bottom of the application window (down-pointing arrow of outer circle arrows). 3. Click the New icon in the toolbar (or File | New or Ctrl+N) to open a blank document in SemanticWorks. The Main Window will now look like this. © 2008 Altova GmbH Altova SemanticWorks 2008 30 Tutorial OWL Lite Ontology Notice that there are five tabs at the top of the window and that the Classes tab is selected. These five tabs organize the ontology information in five categories, thus providing an overview of ontology information. We refer to this view as the Ontology Overview (also called Overview for short; it should not be confused with the Overview Window). Further, notice that a pane containing subsidiary categories for the selected Overview category (currently, the Classes category) is displayed below the main category. With the Classes tab selected, the subsidiary pane displays the instances and properties of the selected class. Selecting the language level of the ontology The ontology you will create in this part of the tutorial will use features of the OWL Lite sublanguage. 
SemanticWorks checks ontologies according to the language level selected by the user, and also makes available the constructors specific to the selected ontology language. It is therefore best to select the required language at the outset. To select the OWL Lite sublanguage, do the following:
1. In the RDF/OWL Level combo box in the toolbar, click the arrowhead to display the dropdown list of options (screenshot below). (Alternatively, select RDF/OWL | RDF/OWL Level to display a submenu of language levels.)
2. Select OWL Lite from the dropdown list. Notice that as soon as the specification level is selected, a syntax check is run on the document, and the message This ontology is well-formed is displayed in the Errors Window.

Note: You can change the RDF/OWL level at any time. The selected level is retained through changes of views (Text View and RDF/OWL View). When you reopen an ontology document, you should check that the desired language level is selected.

Saving the ontology
It is a good idea to save the ontology before continuing. Save the file using any name you like, with either the .rdf or .owl file extension. (The .rdf extension is allowed because OWL ontologies are themselves RDF documents; it is, in fact, more usual than the .owl extension.) We'll assume that the file you create in this part of the tutorial is called AltovaProducts.rdf. Having done all of the above, you are now ready to declare namespaces for your ontology.

4.1.2 Declaring Namespaces
In this part of the tutorial, you are creating an OWL Lite ontology for Altova products using a custom-made vocabulary that belongs to a unique namespace. Before starting to use this vocabulary in the ontology, you must, in the interests of good practice, declare the namespace for the vocabulary, as well as any other namespaces you require. Declaring the required namespaces is what you will do in this section. In the course of carrying out these steps, you will also see the RDF/XML format of your ontology in Text View.

Text View of SemanticWorks
To see how the RDF/XML representation of the newly created ontology looks, click the Text View button at the bottom of the Main Window. The Text View will look something like this. Notice that there is a single rdf:RDF element, which is the root element of every OWL ontology. It has three namespace declarations (for the RDF, RDFS, and OWL namespaces). SemanticWorks inserts the RDF, RDFS, and OWL namespaces by default; they are required in order to use RDF, RDFS, and OWL elements and attributes in the ontology document.

Note: You can configure Text View fonts in the Options dialog (Tools | Options).

Declaring namespaces for the ontology document
Your Altova product vocabulary uses its own (fictitious) namespace, and this needs to be declared. You will also need to declare the XML Schema namespace so you can use the XML Schema datatypes for OWL datatype properties. Declare the namespaces as follows:
1. Select RDF/OWL View.
2. Select the command Tools | URIref Prefixes. This pops up the URIref Prefixes dialog.
3. In this dialog, click the Add button to add a line for a new namespace declaration.
4. In the Prefix column, enter prod. In the URI column, enter the namespace URI of the Altova products vocabulary (a fictitious URI; see the note about namespaces above).
5. Next, add the XML Schema namespace, giving it a prefix of xsd. The URIref Prefixes dialog will look something like this:
6. Switch to Text View to check that the newly declared namespaces have been correctly added.

You have now declared all the namespaces for your ontology, and you can now start creating ontology items.

Note: If a namespace is not declared, then the URIref prefix associated with that namespace, when used in the name of an item, will not be expanded to that namespace. The expansion of a URIref prefix to a namespace will not take place even if: (i) the relevant namespace is declared subsequent to the creation of an item; or (ii) the ontology item is renamed after the namespace is declared but was created before the namespace was declared. Ontology items with unexpanded prefixes might not be correctly recognized.

4.1.3 Creating Classes
In this tutorial, we will build the ontology using a generally top-down approach. Whatever approach one chooses, it is usual to start by defining classes. In this section you will learn how to create three classes in RDF/OWL View — Product, XMLSpy, and Edition — and check the ontology for correct syntax and semantics.

Creating new classes
SemanticWorks enables you to name classes so that they are represented in the RDF/XML serialization using either (i) namespace prefixes to stand for the namespace URI, or (ii) the full URIref (that is, with the prefix expanded). You will create the classes Product and XMLSpy using a different serialization method for each. Do this as follows:
1. In the Classes tab of RDF/OWL View, click the Add New button and select owl:Class. This creates a line for the class in the Classes Overview.
2. With urn:Unnamed-1 in the newly created class entry line highlighted, type in prod:Product, and press Enter. This creates a class whose URIref is the prod namespace URI plus the local name Product. Switch to Text View to see how the class has been serialized in RDF/XML. You will see something like this (the bracketed placeholder stands for the expanded prod namespace URI):

  <rdf:Description rdf:about="[expanded prod namespace]Product">
    <rdf:type>
      <rdf:Description rdf:about="http://www.w3.org/2002/07/owl#Class"/>
    </rdf:type>
  </rdf:Description>

Notice that the class name has been serialized as an expanded URIref. This is because the Expand URIref Prefixes option (Tools | Expand URIref Prefixes in RDF/OWL View) has been toggled on. Notice also that, although the URIref has been expanded in its serialized form, it is shown in RDF/OWL View as prod:Product, that is, with a prefix for the namespace part of the URIref. In RDF/OWL View, URIrefs are shown in the form in which they are entered.
3. In RDF/OWL View, create two new classes and name them prod:XMLSpy and prod:Edition. In Text View, you will see that the two new classes have been serialized as expanded URIrefs. In RDF/OWL View, the URIrefs are shown in the form in which they were entered (that is, prod:XMLSpy and prod:Edition).

URIrefs with Prefixes, and How to Delete a Class
If you wish to serialize the RDF/XML using URIref prefixes instead of expanded URIrefs, click the Expand URIref Prefixes toolbar icon so as to deselect it. Identifiers created from then onwards (and until this option is toggled on again) will be serialized with names in the form prefix:localname. Furthermore, note that the choices offered by dropdown boxes are also affected by this option; so if the setting is that URIref prefixes are not expanded, dropdown boxes always show full URIs instead of abbreviated URIrefs. To see how unexpanded URIrefs are serialized, do the following:
1. Delete the class XMLSpy (by selecting it and clicking Edit | Delete).
2.
Deselect Expand URIref Prefixes so that it is toggled off.
3. Re-create the class XMLSpy, naming it prod:XMLSpy. Text View will show the class serialized as prod:XMLSpy, with the prefix unexpanded.
If you do try this out, be sure to once again delete the XMLSpy class and re-create it as it was originally created, that is, with URIref prefixes expanded.

Checking the ontology
You now have a simple ontology that declares three URIrefs to be three unrelated OWL classes. To check the syntax of the ontology, click the Syntax Check icon in the toolbar (or use the RDF/OWL menu). The message This ontology is well-formed appears in the Errors Window. Now check the semantics of the ontology by clicking the Semantics Check icon in the toolbar. The message This ontology is at least partially consistent appears in the Errors Window. This is a valid ontology (it has correct syntax and is partially consistent), but it does not provide much information about the three classes. (For more information on consistency evaluation, see Semantics Check.) In the next sections of the tutorial, you will create the class hierarchy, which links the classes semantically with each other.

4.1.4 Creating the Class Hierarchy
In this section, you will learn how the three classes (Product, XMLSpy, and Edition) created in the previous section can be linked in a simple hierarchy. What we wish to do is:
- Define XMLSpy as a subclass of Product, which essentially states that any instance of the XMLSpy class must also be an instance of the Product class.
- Use the Edition class to (i) define it as the range of a property called prod:hasEdition, and (ii) create instances of Edition.

Creating a class as a subclass of another
To create the class XMLSpy as a subclass of the class Product, do the following:
1. In the Classes tab of RDF/OWL View, click the Detail View button of the prod:XMLSpy class (screenshot below). This switches the view to the Detail View of the XMLSpy class (screenshot below). Notice the shape of the class box: all classes in Detail View are indicated by this arbitrary-hexagon-shaped box.
2. In Detail View, right-click the prod:XMLSpy class box. This pops up a context menu (screenshot below).
3. Select Add subClassOf from the context menu. This adds a subClassOf connector to the prod:XMLSpy box (screenshot below).
4. Right-click the subClassOf connector and, in the context menu that appears, select Add Class (screenshot below). A class box linked to the subClassOf connector is added (screenshot below), indicating that the XMLSpy class is a subclass of the class represented by this class box.
5. To select which class this class box represents, click the downward-pointing arrow at the right-hand side of the class box. This drops down a list of available classes (screenshot below). Notice that the set of available classes consists of the classes you have declared in this ontology plus the two general OWL classes Thing and Nothing.
6. Select prod:Product from the dropdown list to complete the definition (screenshot below).

The diagonal arrow at the bottom left of the Product class box indicates that the box is a reference to the class Product. Having carried out the steps above, you have defined that the class XMLSpy is a subclass of the class Product.
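For reference, here is a rough sketch of how such a subclass axiom is commonly serialized in RDF/XML. The prod: namespace URI below is a placeholder for the tutorial's fictitious namespace, SemanticWorks may write the same triples in the rdf:Description style shown earlier rather than the typed-node style used here, and the fragment assumes the usual rdf:, rdfs:, and owl: declarations on the rdf:RDF root element.

  <!-- XMLSpy is declared as a class and as a subclass of Product -->
  <owl:Class rdf:about="http://www.example.com/AltovaProducts#XMLSpy">
    <rdfs:subClassOf rdf:resource="http://www.example.com/AltovaProducts#Product"/>
  </owl:Class>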
Now do the following: Check the Text View to see the RDF/XML serialization of your new definition. Check the syntax and semantics of the modified ontology. Both syntax and semantics checks should return positive results. Relating a class to its properties and instances In the steps above, you have learned how to define relationships between two classes. In SemanticWorks, relationships between classes are defined in the Detail View of the appropriate class. To define a relationship between a class and its property (for example, the domain and range of a property), the definition is made on the property. Similarly, the definition that an instance is an instance of a particular class is made on the instance. In this tutorial, we wish to do the following: Define the class XMLSpy to be the domain of the property hasEdition, and the class Edition to be the range of the property hasEdition. This would mean that the property hasEdition applies to the class XMLSpy and takes values that are instance of the class Edition. Declare instances of the class Edition. These property definitions and instance declarations are made in the following sections. Altova SemanticWorks 2008 © 2008 Altova GmbH Tutorial 4.1.5 OWL Lite Ontology 39 Defining Properties Properties are created at a global level and then related to different classes. In our ontology, we require two properties: hasEdition to carry information about the edition of a product. The edition can be Enterprise, Professional, or Home. We will create this property as an object property. Doing this enables us to relate one resource to another. In this case we wish to relate instances of the XMLSpy class to instances of the Edition class via the hasEdition property. The class (or classes) that the property applies to is called the property's domain, while the set of values the property can take is called the property's range. version, which is a literal value indicating the year in which a product is released. We will create this property as a datatype property. It relates instances of the XMLSpy class to a positive integer (which is the year of release and gives the version of the product). Creating properties Properties are created in the same way that classes are created, by clicking the Add New button and then specifying the name of the property to be created. To create a property, do the following: 1. In the Properties tab of RDF/OWL View, click the Add New button, and select owl:ObjectProperty. 2. This creates a line for the newly created object property. In this line, enter prod:hasEdition as the name of the new object property. 3. Now add a datatype property by: (i) clicking the Add New button and selecting owl:DatatypeProperty, and (ii) entering prod:version as the name of the datatype property. You have now created two properties: (i) an object property called hasEdition, and (ii) a datatype property called version. The Properties tab should now look like this: You are now ready to define the properties in Detail View. © 2008 Altova GmbH Altova SemanticWorks 2008 40 Tutorial OWL Lite Ontology Defining properties The first thing you should note when working with OWL properties in SemanticWorks is that object properties and datatype properties are indicated with slightly different symbols in Detail View. Double-click the Detail View icon to see the Detail View symbol for each property: Object properties are indicated with an O icon in the top-left corner, datatype properties are indicated with a DT icon. 
The icons to the right of these two icons are toggle switches for specifying the cardinality constraints and characteristics of properties. From left to right, they set the property to be a functional property, an inverse-functional property, a transitive property, and a symmetric property. Note that a datatype property may only be declared functional.

Defining an object property
You will now define the relationships and characteristics of the two properties you have created. Do this as follows:
1. Double-click the Detail View icon of the hasEdition property. This brings up the Detail View of the property.
2. Right-click the prod:hasEdition box and, from the context menu that appears, select Add Domain (screenshot below).
3. The Domain connector box is inserted. Right-click the Domain connector box and select Add Class (screenshot below). A Class box is inserted.
4. Click the down arrow in the Class box to drop down a list of available classes, and select prod:XMLSpy (screenshot below). The class prod:XMLSpy is set as the domain of the property prod:hasEdition. This relationship states that the hasEdition property is applicable to the class XMLSpy.
5. In the same way that you set the domain of the property, now set its range: (i) right-click the hasEdition property box; (ii) select Add Range; (iii) right-click the Range connector box; (iv) select Add Class; (v) from the dropdown list in the Class box, select the class prod:Edition. The Detail View of the hasEdition property should look like this:

The range of the property is now set to be instances of the class Edition.

Defining the cardinality constraint of a property
We wish to specify that the property hasEdition can take only one instance of the Edition class as its value. This is done by specifying that the hasEdition property is functional. To make the property functional, click the f icon (functional property icon) in the hasEdition property box. The Detail View of the hasEdition property should look like this:

Notice that the f icon in the hasEdition property box is highlighted, indicating that it is toggled on.

Defining a datatype property
For the datatype property prod:version, define a domain in the same way as for the object property hasEdition, setting the domain to the class XMLSpy. For the range, we wish to define the XML Schema datatype xsd:positiveInteger. Do this as follows:
1. Add the Range connector box by right-clicking the version property box and selecting Add Range.
2. Right-click the Range connector box and select Add XML Schema Datatype. A Datatype box is added.
3. From the dropdown list in the Datatype box, select xsd:positiveInteger (screenshot below).
4. Set the cardinality of the version property by toggling on the functional property specification.

The version property is now a functional property. It is applicable to instances of the class XMLSpy (its domain) and takes values of the XML Schema datatype xsd:positiveInteger (its range). The Detail View of the version property should look like this:
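As a hedged sketch of what these two property definitions amount to in RDF/XML, the fragment below uses standard OWL vocabulary. The prod: namespace URI is a placeholder for the tutorial's fictitious namespace, the fragment assumes the usual rdf:, rdfs:, and owl: declarations on rdf:RDF, and the serialization SemanticWorks actually writes may order or group the elements differently.

  <!-- Object property: applies to XMLSpy, takes Edition instances as values, at most one value -->
  <owl:ObjectProperty rdf:about="http://www.example.com/AltovaProducts#hasEdition">
    <rdf:type rdf:resource="http://www.w3.org/2002/07/owl#FunctionalProperty"/>
    <rdfs:domain rdf:resource="http://www.example.com/AltovaProducts#XMLSpy"/>
    <rdfs:range rdf:resource="http://www.example.com/AltovaProducts#Edition"/>
  </owl:ObjectProperty>

  <!-- Datatype property: applies to XMLSpy, takes a positiveInteger literal, at most one value -->
  <owl:DatatypeProperty rdf:about="http://www.example.com/AltovaProducts#version">
    <rdf:type rdf:resource="http://www.w3.org/2002/07/owl#FunctionalProperty"/>
    <rdfs:domain rdf:resource="http://www.example.com/AltovaProducts#XMLSpy"/>
    <rdfs:range rdf:resource="http://www.w3.org/2001/XMLSchema#positiveInteger"/>
  </owl:DatatypeProperty>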
Domain listing in Properties Overview
When you switch to the Properties Overview (that is, the Properties tab in the Ontology Overview), notice that the domains of the selected property are displayed in the subsidiary window (screenshot below). Click the Detail View button of a domain class to go directly to the Detail View of that class.

Class properties in Classes Overview
You can see the properties of a selected class in the subsidiary Class Properties window of the Classes Overview (Classes tab in the Ontology Overview). See screenshot below. Click the Detail View button of a property to go directly to the Detail View of that property.

Checking the syntax and semantics of the ontology
At this stage, check the syntax (RDF/OWL | Syntax Check) and the semantics (RDF/OWL | Semantics Check) of your ontology, making sure that the OWL Lite level is selected. Your OWL Lite ontology up to this point should be both well-formed and at least partially consistent.

Checking the relationships between properties and classes
Both properties (hasEdition and version) were set to have the class XMLSpy as their domain, meaning that both properties are applicable to the XMLSpy class. Now check the effect of these definitions on the XMLSpy class. Do this as follows:
1. Go to the Document Overview (by clicking the Overview icon located at the top left of Detail View).
2. Select the Classes tab.
3. Go to the Detail View of the class XMLSpy (by clicking the Detail View icon next to the class entry). The Detail View of the XMLSpy class should look something like this:

Here we see that the class XMLSpy is a subclass of the class Product and has two properties: the object property hasEdition and the datatype property version.

4.1.6 Declaring Instances
So far you have created three classes — Product, XMLSpy, and Edition — and two properties, the object property hasEdition and the datatype property version. You have defined both properties to apply to the XMLSpy class (by making this class the domain of the properties). Further, you have defined (i) the range of the hasEdition property (that is, the values this property can take) to be instances of the Edition class, and (ii) the range of the version property to be a literal value of the XML Schema datatype positiveInteger. In this section, you will first create three simple instances of the Edition class, and then three more complex instances of the XMLSpy class.

Creating simple instances
To create an instance of the Edition class, do the following:
1. In the Instances tab of RDF/OWL View, click the Add New button and select Instance. This creates an entry for an instance.
2. Enter prod:Enterprise as the name of the instance (screenshot below).
3. Click the Detail View button of the prod:Enterprise entry to switch to Detail View, which will look something like this:
4. Expand the rdf:type connector by clicking the plus symbol on its right-hand side. Detail View will now look something like this:
5. Double-click the owl:Thing box, click the down arrow to drop down the list of available classes of which prod:Enterprise can be made an instance, and select prod:Edition (screenshot below). You have created prod:Enterprise as an instance of prod:Edition.
6. Check the syntax and semantics of your ontology. You should get messages saying the ontology is both well-formed and partially consistent.
7.
Create two more instances of the Edition class, as described above, and call them prod:Professional and prod:Home, respectively.
8. Check that your ontology is well-formed and partially consistent, which it should be if you have done everything as described above.

You now have three instances of the Edition class: Enterprise, Professional, and Home.

Note: An alternative way of creating instances of a class is to go to the Detail View of that class, then right-click the class and select Add New Instance. The new instance is created and displayed in the Instances tab of Detail View. Name the instance as required.

Creating instances that have predicates (properties)
You will now create three instances of the XMLSpy class. Each of these instances will additionally be defined with the two properties, hasEdition and version, that apply to the XMLSpy class. Create these instances as follows:
1. In the Instances tab of RDF/OWL View, click the Add New button and select Instance. Name the instance prod:XMLSpyEnterprise.
2. Click the Detail View button of the prod:XMLSpyEnterprise entry to switch to Detail View.
3. Expand the rdf:type connector and select prod:XMLSpy. This defines XMLSpyEnterprise as an instance of the class XMLSpy. The Detail View will now look something like this:
4. Right-click the XMLSpyEnterprise instance box and select Add Predicate (screenshot below).
5. In the property box, click the down arrow to drop down a list of properties defined for this class, and select prod:hasEdition (screenshot below).
6. Right-click the prod:hasEdition property box and select Add Resource Object (screenshot below).
7. Click the down arrow of the Resource Object box and click prod:Enterprise in the dropdown list of available instances (screenshot below). This defines that the object property hasEdition has the instance Enterprise (of the class Edition) as its object. Recall that you have set the range of the hasEdition property to be instances of the class Edition. If you check the semantics of the ontology, you will see that the ontology is valid (well-formed and partially consistent).
8. Double-click the Enterprise Resource Object box, select XMLSpyEnterprise, and run a semantics check. You will receive an error message saying that the resource object XMLSpyEnterprise is not within the range defined for the property hasEdition.
9. Double-click the Enterprise Resource Object box and select Enterprise again, which is what we want.
10. Right-click the XMLSpyEnterprise instance box and select Add Predicate to add a second property. Select the property prod:version from the dropdown list to add this property as a predicate.
11. Right-click the prod:version predicate box and select Add Literal Object. (Recall that the version property has a literal value of the XML Schema datatype positiveInteger defined as its range. SemanticWorks automatically provides the correct entry helper.)
12. Enter 2006 at the blinking cursor in the Literal Object box, and press Enter.

The Detail View should look something like this:

The instance XMLSpyEnterprise has therefore been defined to:
- Be an instance of the class XMLSpy,
- Have an object property hasEdition that takes the instance Enterprise as its object, and
- Have a datatype property version that takes the positiveInteger value 2006 as its literal value.
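For orientation, a hedged sketch of how such an instance might appear in the RDF/XML serialization is shown below. The prod: namespace URI is a placeholder for the tutorial's fictitious namespace, the fragment assumes that rdf: and prod: are declared on the rdf:RDF root element, and the actual output may use expanded URIrefs or a typed element instead of rdf:Description.

  <rdf:Description rdf:about="http://www.example.com/AltovaProducts#XMLSpyEnterprise">
    <!-- The instance is typed as a member of the XMLSpy class -->
    <rdf:type rdf:resource="http://www.example.com/AltovaProducts#XMLSpy"/>
    <!-- Object property: points to the Enterprise instance of the Edition class -->
    <prod:hasEdition rdf:resource="http://www.example.com/AltovaProducts#Enterprise"/>
    <!-- Datatype property: a typed literal of xsd:positiveInteger -->
    <prod:version rdf:datatype="http://www.w3.org/2001/XMLSchema#positiveInteger">2006</prod:version>
  </rdf:Description>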
If you check the semantics of the ontology, you will see a message saying that the ontology is both well-formed and partially consistent. Now complete the ontology by creating two more instances of the XMLSpy class. Call them XMLSpyProfessional and XMLSpyHome, respectively. Define them just as you defined the XMLSpyEnterprise instance, with the only difference being that they should have the Professional and Home instances, respectively, as the objects of the hasEdition property.

Class instances in Classes Overview
You can see the instances of a selected class in the subsidiary Instances for Class window of the Classes Overview (Classes tab in the Ontology Overview). In the screenshot below, the class XMLSpy is selected, and the Instances for Class subsidiary window shows the instances of the XMLSpy class. Click the Detail View button of an instance to go directly to the Detail View of that instance.

4.1.7 Declaring AllDifferent Instances
In the previous section, you created three instances of the XMLSpy class: XMLSpyEnterprise, XMLSpyProfessional, and XMLSpyHome. These are the three editions of the XMLSpy product. You specified the difference between the editions by using the property hasEdition and assigning it three different object values, namely the instances Enterprise, Professional, and Home, respectively. This, however, does not ensure that the three URIrefs prod:XMLSpyEnterprise, prod:XMLSpyProfessional, and prod:XMLSpyHome really identify different individuals; two of them, or even all three, could actually be referring to a single individual (or resource). That these are three different individuals must be explicitly stated, and the AllDifferent construct is used to state the pairwise difference of the instances in a collection. In this section of the tutorial, you create an AllDifferent collection object and collect within it all the instances that must be pairwise different. Do this as follows:
1. In the allDifferent tab of RDF/OWL View, click the Add New button and select allDifferent. Name the newly created allDifferent item prod:XMLSpyEditions.
2. Click the Detail View button of the prod:XMLSpyEditions entry to switch to Detail View (screenshot below).
3. Right-click the owl:DistinctMembers box and select Add Instance.
4. Click the down arrow in the newly created instance box and select prod:XMLSpyEnterprise (screenshot below). The XMLSpyEnterprise instance is now a distinct member of the XMLSpyEditions AllDifferent collection.
5. Add the XMLSpyProfessional and XMLSpyHome instances to the collection (by right-clicking the owl:DistinctMembers box and selecting Add Instance, then selecting the respective instances). When you're done, the Detail View will look something like this:
6. Run a semantics check to confirm the validity (correct syntax and partial consistency) of the ontology, and then save the file.

Note: Alternatively, you can specify that a set of instances are mutually different by selecting those instances in the Overview of RDF/OWL View, right-clicking, and selecting the Make Mutually Different command.
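A hedged sketch of how such an AllDifferent collection is typically serialized in RDF/XML follows. The prod: namespace URI is a placeholder, the member list uses the standard rdf:parseType="Collection" form, the fragment assumes the usual rdf: and owl: declarations, and SemanticWorks may additionally attach the name you gave the item (XMLSpyEditions) to the element.

  <owl:AllDifferent>
    <owl:distinctMembers rdf:parseType="Collection">
      <rdf:Description rdf:about="http://www.example.com/AltovaProducts#XMLSpyEnterprise"/>
      <rdf:Description rdf:about="http://www.example.com/AltovaProducts#XMLSpyProfessional"/>
      <rdf:Description rdf:about="http://www.example.com/AltovaProducts#XMLSpyHome"/>
    </owl:distinctMembers>
  </owl:AllDifferent>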
That's it! You have successfully completed the OWL Lite tutorial. In the course of this tutorial you have learned how to build an OWL Lite ontology using SemanticWorks and have become familiar with the interface, features, and mechanisms of SemanticWorks.

4.2 OWL DL Ontology
The OWL DL ontology you will create describes Altova documents. It shows you how to use SemanticWorks features that are not available when editing OWL Lite ontologies. It consists of the following sections:
- Setting Up an OWL DL Ontology: Shows how to create a new ontology document, select the ontology level, declare namespaces, and save the ontology document.
- Creating Classes and Hierarchies of Classes: Shows how to create the ontology classes and build a hierarchy of classes using the subclass connector.
- Instances as Class Enumerations: Explains how instances can be created as class enumerations, and why class enumerations are needed.
- Defining Properties: Shows how to create OWL object and datatype properties, and how to define their domain and range.
- Describing Classes and Their Instances: Focuses on describing complex OWL DL relationships between classes. You will learn how to describe a class as a union of two classes and to further restrict that union. You conclude by creating instances of the newly created restricted class and checking the syntax and semantics of the ontology.
- Defining Complementary Classes and Their Instances: Explains how to state that one class is a complement of another class. Class instances are then created and defined, and the ontology is checked for correct syntax and semantics.

The OWL DL ontology that is the end result of this tutorial part is delivered in the SemanticWorks package as the file AltovaDocuments.rdf, located in the application folder: C:/Documents and Settings/<username>/My Documents/Altova/SemanticWorks2008/SemanticWorksExamples/Tutorial. Note that the AltovaDocuments.rdf ontology is required for the RDF tutorial part, which uses its resources.

Note: In this part of the tutorial we assume that you have already completed the earlier OWL Lite part of the tutorial. Certain steps that were explained in detail there are therefore not described in detail here. If you have difficulty with any step, refer back to the corresponding section in the first (OWL Lite) part of the tutorial.

4.2.1 Setting Up an OWL DL Ontology
Setting up an OWL DL ontology in SemanticWorks involves the same steps as for an OWL Lite ontology. These are:
1. Creating a new ontology document.
2. Setting the language level (OWL DL in this case).
3. Declaring namespaces.
4. Saving the file with a .rdf or .owl extension.
In this section, you will go through these steps in sequence.

Creating a new ontology document
To create a new ontology document, click the New toolbar icon or select the command File | New.

Setting the ontology language level
In the RDF/OWL Level combo box in the toolbar, select OWL DL. Alternatively, in the RDF/OWL menu, select RDF/OWL Level, and then OWL DL.

Declaring namespaces
For the OWL DL ontology you are creating, you need to declare three namespaces: the namespace of the Altova documents vocabulary (prefix doc), the Dublin Core namespace (prefix dc), and the XML Schema namespace (prefix xsd). Declare them in the URIref Prefixes dialog (menu item Tools | URIref Prefixes) shown below. Add a line for each required namespace using the Add button, then enter the namespace and its prefix.

Note: If a namespace is not declared, then the URIref prefix associated with that namespace, when used in the name of an item, will not be expanded to that namespace.
The expansion of a URIref prefix to a namespace will not take place if: (i) the namespace is declared subsequent to the creation of an item using that URIref prefix; or (ii) the ontology item is renamed after the namespace is declared but was created before the namespace was declared. Ontology items with unexpanded prefixes might not be correctly recognized.

Saving the file
You can save the file with a .rdf or .owl extension using the File | Save (Ctrl+S) command. Save the file with the name AltovaDocuments.rdf. In the next section you will create the classes and define the class hierarchy.

4.2.2 Creating the Classes
In the document ontology, which makes use of the Altova Document vocabulary, we wish to create the following basic class hierarchy:

  Document
   |___ PrintManual
   |___ WebPage
         |___ EnglishWebPage
         |___ GermanWebPage
  Languages
  OutputFormats

Creating the classes
To start with, you will create the seven classes shown in the hierarchy above in the Classes Overview. The procedure for creating each class is as follows:
1. Click the Add New button in the Classes Overview to create an entry for the new class (screenshot below).
2. Type in the name of each class, using the doc: prefix for each, for example, doc:Document.
After you have finished creating these seven classes, the Classes Overview should look something like this:

Check the validity of the document (correct syntax and partial consistency) by clicking the Semantics Check toolbar icon (RDF/OWL | Semantics Check). You should get messages saying the document is both well-formed and partially consistent.

Defining the class hierarchy for documents
Start by defining the class PrintManual as a subclass of the class Document. Do this as follows:
1. In the Classes Overview, click the Detail View button of the PrintManual class.
2. In the Detail View of PrintManual, right-click the PrintManual box and select Add subClassOf.
3. Right-click the subClassOf connector box and select Add Class.
4. In the newly added class box, click the down arrow and, from the dropdown list, select doc:Document.
When you are done, the Detail View of the PrintManual class will look like this:

Now create the following classes as subclasses using the method described above:
- WebPage as a subclass of Document.
- EnglishWebPage as a subclass of WebPage.
- GermanWebPage as a subclass of WebPage.

You have now created a hierarchy involving five of the seven classes. The other two classes (Languages and OutputFormats) are not directly involved in this hierarchy, and their definition is described in the next section.

4.2.3 Instances as Class Enumerations
For the two classes Languages and OutputFormats, we wish to enumerate their instances: English (EN) and German (DE) for Languages, and HTML and PDF for OutputFormats. The way to do this is to first create instances for the classes (for example, EN and DE for Languages), and then to specify, for each class, that one of a list of enumerated instances is allowed as an instance of that class.

Creating instances
Start by creating instances for the Languages class, as follows:
1. In the Instances Overview, click the Add New button to create an entry for the new instance. Name the instance doc:EN.
2. Repeat this step twice to create two more instances: doc:DE and doc:FR.
The Instances Overview should now look like this:
3. Click the Detail View button of the EN instance.
4. In the Detail View of the EN instance, expand the rdf:type predicate box and double-click in the Resource Object box to display a down arrow on the right of the Resource Object box (screenshot below).
5. Click the down arrow to pop up a list of the available classes (screenshot below).
6. Select doc:Languages. This creates EN as an instance of the class Languages.
7. Define DE and FR as instances of the class Languages in exactly the same way you made EN an instance of Languages.

When you are done, you will have defined EN, DE, and FR as three instances of the class Languages.

Note: An alternative way of creating instances of a class is to go to the Detail View of that class, then right-click the class and select Add New Instance. The new instance is created and displayed in the Instances tab of Detail View. Name the instance as required.

Making instances the enumerations of a class
Now we wish to specify that the class Languages may have either the instance EN or DE as its value, but not FR. Do this as follows:
1. Click the Classes tab to go to the Classes Overview, and there click the Detail View button of the Languages entry.
2. In the Detail View of Languages, right-click the Languages box and select Add oneOf (an OWL DL feature that is not available in OWL Lite).
3. Right-click the oneOf connector box and select Add Instance.
4. In the newly added instance box, click the down arrow and, from the dropdown list, select doc:EN.
5. Right-click the oneOf connector box again, add another instance, and select doc:DE.

The Detail View of Languages should now look like this:

This indicates that the class Languages may be instantiated by one of the instances EN or DE. Since doc:FR may not be an instance of the Languages class, change its type to the general owl:Thing class (by double-clicking the Languages Resource Object box of doc:FR, then clicking the down arrow and selecting owl:Thing from the dropdown list that appears).

Creating more enumerated classes
Now, using the same method as described above, create the instances HTML and PDF as enumerations of the class OutputFormats. Do this as follows:
1. Create two new instances and name them doc:HTML and doc:PDF (in the Instances Overview).
2. Define doc:HTML and doc:PDF as instances of the OutputFormats class (in the Detail View of each instance separately).
3. Define the OutputFormats class to have HTML and PDF as its enumerated instances (in the Detail View of the OutputFormats class, using the oneOf connector).

Check the semantics of the ontology to ensure partial consistency.
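A hedged sketch of what such an enumeration can look like in RDF/XML is shown below. The doc: namespace URI is a placeholder for the tutorial's fictitious namespace, the member list uses the standard rdf:parseType="Collection" form, and the exact structure SemanticWorks writes may differ in detail.

  <!-- The Languages class is enumerated: its only allowed instances are EN and DE -->
  <owl:Class rdf:about="http://www.example.com/AltovaDocuments#Languages">
    <owl:oneOf rdf:parseType="Collection">
      <rdf:Description rdf:about="http://www.example.com/AltovaDocuments#EN"/>
      <rdf:Description rdf:about="http://www.example.com/AltovaDocuments#DE"/>
    </owl:oneOf>
  </owl:Class>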
4.2.4 Defining the Properties
In this section you will define three properties to hold metadata information about documents. These properties are taken from the Dublin Core vocabulary for document metadata: dc:date, dc:language, and dc:format. You will create the dc:date property as a datatype property with a range of the XML Schema datatype date, and the dc:language and dc:format properties as object properties whose ranges are the Languages and OutputFormats classes, respectively. Create these three properties as follows:
1. In the Properties Overview, click the Add New button to create an entry for a new datatype property. Name the property dc:date.
2. Repeat this step twice to create two more properties, but create both as object properties: dc:language and dc:format.
The Properties Overview should now look like this:
3. Click the Detail View button of the dc:date property.
4. In the Detail View of the dc:date property, right-click the property box, select Domain, and set the domain to the doc:Document class. Right-click the property box, select Range, then right-click the Range connector and select Add XML Schema Datatype (screenshot below). (The Add Data Range option, by contrast, creates an enumeration of data values, that is, literals.) Set the range to xsd:date. (If required, double-click in the Datatype box to get a list of available datatypes.) The Detail View should now look like this:
5. Set the domain and range for dc:language and dc:format similarly. Set the domains of both to the doc:Document class, the range of dc:language to the doc:Languages class, and the range of dc:format to the doc:OutputFormats class.

Note: When the domain of a property is set to the Document class, all subclasses of the Document class are also implicitly in the domain of that property. Consequently, the PrintManual, WebPage, EnglishWebPage, and GermanWebPage classes are also in the domain of properties that have their domain set to the Document class.

4.2.5 Describing Classes and Their Instances
In this section you will make class descriptions that specify restrictions and define detailed relationships between classes. Specifically, we wish to define that:
- The WebPage class is a union of the EnglishWebPage and GermanWebPage classes.
- The WebPage class has a restriction specifying that all its instances must have a dc:format property with an object that is the HTML instance.
- An instance of the EnglishWebPage class must be an instance of the WebPage class, with the restriction that its dc:language property have an object that is the EN instance.
- An instance of the GermanWebPage class must be an instance of the WebPage class, with the restriction that its dc:language property have an object that is the DE instance.

After creating these descriptions, you will create instances of these classes to test the validity of the axioms you have defined.

Defining a class as a restricted union of classes
To define the WebPage class as a union of the EnglishWebPage and GermanWebPage classes, with a restriction that instances of the class have a dc:format property whose value is the HTML instance, do the following:
1. In the Classes Overview, click the Detail View button of WebPage.
2. In the Detail View of WebPage, right-click the WebPage class box and select Add unionOf.
3. Right-click the unionOf connector, click Add Class from the context menu, and select doc:EnglishWebPage.
4. Right-click the unionOf connector, click Add Class from the context menu, and select doc:GermanWebPage.
5. Right-click the unionOf connector, click Add Restriction from the context menu, and select dc:format.
6. Right-click the Restriction box and select Add hasValue.
7. Right-click the hasValue box, click Add Resource Object, and select doc:HTML.
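As a rough, hedged sketch, a class description built this way can be serialized along the following lines. The doc: namespace URI is a placeholder, dc: stands for the Dublin Core namespace (http://purl.org/dc/elements/1.1/), and the exact structure SemanticWorks writes — in particular how it attaches the hasValue restriction to the union — may differ from this mirror of the steps above.

  <owl:Class rdf:about="http://www.example.com/AltovaDocuments#WebPage">
    <owl:unionOf rdf:parseType="Collection">
      <owl:Class rdf:about="http://www.example.com/AltovaDocuments#EnglishWebPage"/>
      <owl:Class rdf:about="http://www.example.com/AltovaDocuments#GermanWebPage"/>
      <!-- Restriction added via the unionOf connector: dc:format has the value HTML -->
      <owl:Restriction>
        <owl:onProperty rdf:resource="http://purl.org/dc/elements/1.1/format"/>
        <owl:hasValue rdf:resource="http://www.example.com/AltovaDocuments#HTML"/>
      </owl:Restriction>
    </owl:unionOf>
  </owl:Class>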
The Detail View should look like this:

Defining a class as a restriction of another class
To define the EnglishWebPage class as a subclass of the WebPage class, restricted so that its dc:language property has the EN instance as its object, do the following:
1. In the Classes Overview, click the Detail View button of EnglishWebPage.
2. In the Detail View of EnglishWebPage, right-click the subclassOf connector, click Add Restriction from the context menu, and select dc:language.
3. Right-click the Restriction box and select Add hasValue.
4. Right-click the hasValue box, click Add Resource Object, and select doc:EN.
5. Right-click the EnglishWebPage box, and select Add disjointWith.
6. Right-click the disjointWith box, click Add Class, and select doc:GermanWebPage.
The Detail View should look like this:
Define the GermanWebPage class similarly, with the difference that the dc:language restriction should be set to doc:DE and the class should be disjoint with the EnglishWebPage class.

Creating instances of restricted classes
Let us create an instance to denote the index page of the HTML version of the English user manual of the XMLSpy Enterprise edition. Create and define this instance as follows:
1. In the Instances Overview, click the Add New button to create an entry for a new instance. Name the instance doc:XMLSpyEnterpriseUserManualENHTML.
2. In the Detail View of the doc:XMLSpyEnterpriseUserManualENHTML instance, expand the rdf:type predicate box, double-click in the Resource Object box to pop up a down arrow on the right of the Resource Object box, and select doc:EnglishWebPage.
Run a semantics check. You will get a message saying that the ontology appears to be inconsistent. This is because an instance of the EnglishWebPage class must have the dc:format and dc:language properties set to HTML and EN, respectively. Make these settings, together with that for the dc:date property, as follows:
1. Right-click the Instance box, click Add Predicate from the context menu, and select dc:format.
2. Right-click the predicate box, click Add Resource Object from the context menu, and select doc:HTML.
3. Right-click the Instance box, click Add Predicate from the context menu, and select dc:language.
4. Right-click the predicate box, click Add Resource Object from the context menu, and select doc:EN.
5. Run a semantics check to confirm adequate consistency.
6. Right-click the Instance box, click Add Predicate from the context menu, and select dc:date.
7. Right-click the predicate box, click Add Literal Object from the context menu, enter a date value (for example, 2006-10-03), and select xsd:date as the datatype of the literal object.
8. Run a semantics check to confirm adequate consistency.
The Detail View should look like this:
Create and define an instance called doc:XMLSpyEnterpriseUserManualDEHTML similarly, with the difference that (i) this will be an instance of the GermanWebPage class, and (ii) the dc:language predicate should take doc:DE as its object.

4.3 RDF Documents
In SemanticWorks, you can create RDF statements using the resources of ontologies referenced through the SemanticWorks interface. In the graphical RDF/OWL View, you create a new resource and then add predicate–object pairs for that resource.
Predicates and objects can be selected in the SemanticWorks GUI from a list of resources made available via the referenced ontologies. The ontology resources are available in the SemanticWorks GUI as entry helpers, and are entered in the RDF document as URIrefs based on namespaces you declare for the RDF document. In this part of the tutorial, you will create two RDF documents: Instances for an OWL DL Ontology, which provides a detailed description of the mechanism for creating, in a separate document, RDF resources based on resources from an ontology. This enables you to use an ontology on the Internet or a local network, as the basis of a separate RDF document. Creating a Dublin Core (DC) Document, which shows how you can create Dublin Core metadata for a resource such as a Web page or a book. Note: If the resources of an ontology are not available, you can always directly type in URIrefs as required (in either Text View or RDF/OWL View). Altova SemanticWorks 2008 © 2008 Altova GmbH Tutorial 4.3.1 RDF Documents 67 Instances for an OWL DL Ontology This RDF document is based on the OWL DL ontology (AltovaDocuments.rdf) you created in the previous part of this tutorial. The objective is to create instances of this OWL DL ontology in a separate RDF file. In order to be able to create such instances using ontology properties as predicates and ontology instances as objects, ontology resources must be availabe to the user via the GUI. This part of the tutorial describes (i) how to set up the RDF document in SemanticWorks so that ontology resources are available via the GUI; and (ii) how to actually make RDF statements using these resources. This part of the tutorial is organized into three sections: Creating a New RDF Document; Referencing the Ontology; Making RDF statements Note: You will need to know the location of the file AltovaDocuments.rdf, which you created in the previous section, OWL DL Ontology. If you did not do that part of the tutorial, you will find the file AltovaDocuments.rdf in the application folder: C:/Documents and Settings/<username>/My Documents/Altova/SemanticWorks2008/SemanticWorksExamples/Tutorial. Creating a New RDF Document Create a new RDF document as follows: 1. Click the New toolbar icon or select the command File | New. 2. In the RDF/OWL Level combo box in the toolbar, select RDF. Alternatively, in the RDF/OWL menu, select RDF/OWL Level, and then RDF. 3. Save the document as AltovaDocumentInstances.rdf. In the next section you will set up your document to reference the AltovaDocuments.rdf ontology and to use resources from this ontology in your RDF document. Referencing the Ontology The AltovaDocuments.rdf ontology needs to be referenced by the RDF document so that its resources (classes, properties, and instances) become available for use in the RDF document. The ontology referencing mechanism in SemanticWorks is implemented via a two-step procedure: Import namespaces from the ontology. In this step, each ontology namespace to be imported is listed, together with the location of the file. This is done in the Namespace Imports dialog (Tools | Namespace Imports for RDF). Declare the namespaces that will be used in the RDF document. All namespaces that will be used in the RDF document, including the imported namespaces, are declared, and prefixes are assigned for namespaces. This enables you to use the prefixes as shorthand for the namespace part of URIrefs. Namespaces are declared in the URIref Prefixes dialog (Tools | URIref Prefixes). 
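To see what these two steps amount to in the underlying file: the prefixes you declare typically appear as xmlns attributes on the rdf:RDF document element of the serialized RDF/XML. The following is only a sketch — the doc namespace URI shown is a placeholder for whatever namespace you actually used in AltovaDocuments.rdf:

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
         xmlns:doc="http://www.example.com/documents#">
  <!-- RDF statements using doc: and dc: resources go here -->
</rdf:RDF>

Declaring the xsd prefix is what later allows you to attach datatypes such as xsd:date to literal values.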
For your RDF document, you should carry out these two steps as described below.

Importing namespaces from the ontology
The ontology you will be referencing is the OWL DL ontology AltovaDocuments.rdf, which you created in the previous part of this tutorial. This ontology uses three namespaces that you need to import into the RDF document. To import these namespaces, do the following:
1. Click Tools | Namespace Imports for RDF. This pops up the Namespace Imports dialog (screenshot below).
2. Add a line for a new entry by clicking the Add button, then enter the first namespace in the Namespace column and the location of the AltovaDocuments.rdf ontology in the Import File column. Note that you should give the absolute path for the ontology document.
3. Repeat Step 2 twice to add the next two namespaces and the ontology location. (The ontology location is the same for all three namespaces.)
4. When you are done, click OK.

Declaring namespaces for the RDF document
The RDF document you are creating uses three namespaces in addition to the RDF namespace. Declare these namespaces, with prefixes, in the URIref Prefixes dialog (Tools | URIref Prefixes) shown below. Add a line for each required namespace using the Add button. Then enter the namespace and its prefix.
You are now ready to start making RDF statements using resources from the AltovaDocuments.rdf ontology.

Troubleshooting
If the resources from the ontology are not available in the GUI, check that the namespaces entered in the Namespace Imports dialog exactly match those used in the ontology, that the Import File column points to the correct (absolute) location of the ontology file, and that the namespaces have been declared with prefixes in the URIref Prefixes dialog.

Making the RDF statements
The mechanism for creating RDF statements in SemanticWorks consists of:
Creating and naming the RDF resource (the subject) in the Overview of RDF/OWL View.
In the Detail View of the resource, defining the predicates and objects of the resource.
You will now create resources for the various formats of the user manual of XMLSpy Professional Edition.

Creating and naming the RDF resource
To create a new RDF resource, do the following:
1. In the Overview of RDF/OWL View, click the Add New button, and select Add Resource (screenshot below). An entry for the new resource is added to the list of resources.
2. Name the resource doc:XMLSpyProfessionalUserManualENHTML and press Enter. The resource is placed in the list according to its alphabetical order.

Defining the predicates and objects of the resource
The predicate and object of the newly created resource must be defined in the Detail View of the resource. Do this as follows:
1. Click the Detail View button of doc:XMLSpyProfessionalUserManualENHTML to go to its Detail View.
2. In Detail View, right-click the resource box and select Add Predicate.
3. Click the down arrow of the predicate box to display a list of all available resources (screenshot below). Select dc:language.
4. Right-click the dc:language box, and select Add Resource Object.
5. From the dropdown list of the Resource Object box, select doc:EN. (If the down arrow is not displayed at the right-hand side of the Resource Object box, double-click inside the box to display it.) The representation of the RDF statement should look like this:
You have now defined one property of your resource and its value.
6. To define the dc:format property, right-click the resource box, click Add Predicate, and select dc:format as the name of the predicate. Add a resource object to this property, and select doc:HTML (from the dropdown list) to be the resource object.
7. Create a dc:date property for the resource.
Add a literal object—not a resource object —to the dc:date property, enter 2006-10-03 and select xsd:date as the datatype of the literal object. 8. Add another predicate to the resource and select rdf:type as its name. Add a resource object to the predicate and enter doc:EnglishWebPage as its name ( screenshot below). The Detail View of the XMLSpyProfessionalUserManualENHTML resource will finally look something like this: Altova SemanticWorks 2008 © 2008 Altova GmbH Tutorial RDF Documents 71 To complete the RDF document, create the following resources: XMLSpyProfessionalUserManualDEHTML (language=DE, format=HTML, rdf:type=GermanWebPage). XMLSpyProfessionalUserManualENPDF (language=EN, format=PDF, rdf:type=PrintManual). XMLSpyProfessionalUserManualDEPDF (language=DE, format=PDF, rdf:type=PrintManual). Save the file and check the Text View of the document. Notice that only the newly created resources are defined in Text View. The list of resources in the Overview of RDF/OWL View, however, also includes the resources from the referenced ontology. This helps you to enter ontology resources quickly in RDF statements. Further, clicking the Detail View of an ontology resource causes the relationships of the resource to be displayed. That's it! You have learned how to quickly create RDF documents using the graphical interface of SemanticWorks. © 2008 Altova GmbH Altova SemanticWorks 2008 72 Tutorial RDF Documents 4.3.2 Creating a Dublin Core (DC) Document To create a Dublin Core (DC) document in SemanticWorks, you will need an ontology of the DC vocabulary. The application folder C:/Documents and Settings/<username>/My Documents/Altova/SemanticWorks2008/SemanticWorksExamples/Tutorial contains an OWL Lite ontology of the DC vocabulary, called DCOntology.rdf. Creating a DC document involves the following two steps, both of which are described in detail in the respective subsections: Referencing the DC Ontology; Creating the DC Metadata. Note on the Dublin Core ontology delivered with SemanticWorks The following points should be noted: The ontology is an OWL Lite ontology The DC vocabulary covered is the DC Simple set of 15 basic elements. The DC elements have been created as properties in the ontology. No datatypes have been defined for DC elements. If you wish to assign datatypes, then select the required datatype in the GUI when entering the object definition. (Note that defining the datatype in the ontology is not sufficient to create the metadata in the RDF document as that datatype.) Note on the Dublin Core template delivered with SemanticWorks The following points should be noted: The DC template is called DCTemplate.rdf and is located in the application folder: C:/Documents and Settings/<username>/My Documents/Altova/SemanticWorks2008/SemanticWorksExamples/Tutorial. The template is intended to be used as a starting point for building RDF documents providing metadata while using the DC vocabulary. The template references the DC ontology (DCOntology.rdf) using an absolute path. You will need to change this absolute path so that it correctly points to the file DCOntology.rdf in the folder: C:/Documents and Settings/<username>/My Documents/Altova/SemanticWorks2008/SemanticWorksExamples/Tutorial. Referencing the DC Ontology After opening a new document, select the RDF level (RDF/Owl | RDF/OWL Level) and save the document as DCMetadataSample.rdf. Referencing the DC ontology To reference the Dublin Core (DC) ontology, do the following: 1. Check that you are in RDF level. 2. 
Click Tools | Namespace Imports for RDF. This pops up the Namespace Imports dialog (screenshot below).
3. In the Namespace column, enter http://purl.org/dc/elements/1.1/, which is the DC namespace declared in the ontology file DCOntology.rdf.
4. In the Import File column, enter the location of the ontology file DCOntology.rdf. This file will have been delivered in the C:/Documents and Settings/<username>/My Documents/Altova/SemanticWorks2008/SemanticWorksExamples/Tutorial folder of the SemanticWorks application folder. Note that you should give the absolute path for the ontology document.
5. If you have edited the ontology file to define XML Schema datatypes, you will need to import the XML Schema namespace (http://www.w3.org/2001/XMLSchema#), too. Enter the namespace to be imported in the Namespace column and the location of the ontology file DCOntology.rdf in the Import File column.
6. Click OK to complete.
Note: Defining a DC element in the ontology as having a certain datatype does not automatically insert that datatype annotation when that DC element is inserted in the RDF document. The datatype information in the RDF document must be explicitly entered when entering the object definition for that metadata.

Declaring namespaces for the RDF document
Your RDF document will require the DC (http://purl.org/dc/elements/1.1/) and XML Schema (http://www.w3.org/2001/XMLSchema#) namespaces to be declared. Declare these namespaces, with prefixes, in the URIref Prefixes dialog (Tools | URIref Prefixes) shown below. Add a line for each namespace using the Add button. Then enter the namespace and its prefix. Click OK to complete.
Note: The following points about namespaces in the RDF document should be noted. If you do not wish to use datatyping in an RDF document, there is no need to declare the XML Schema namespace. If, in this case, the ontology contains datatype definitions, then these will appear in the GUI as expanded URIrefs. To collapse the namespace part of the URIref, you should then declare the XML Schema namespace with a prefix. If you wish to create resources in a specific namespace, you must declare this namespace.
After you have imported the DC ontology namespaces and declared namespaces for the RDF document, the RDF/OWL View Overview should look like this:
The 15 Simple DC elements are now available as resources and can be used as predicates of RDF statements. You are now ready to start creating the DC metadata.

Troubleshooting
If the DC resources are not available in the GUI, check that the namespace entered in the Namespace Imports dialog exactly matches the DC namespace declared in DCOntology.rdf, that the Import File column points to the correct (absolute) location of the ontology file, and that the namespaces have been declared with prefixes in the URIref Prefixes dialog.

Creating the DC Metadata
In this section, you will create Dublin Core (DC) metadata for a single resource. Specifically, you will create a resource called A Sample Page, and define dc:title, dc:description, and dc:date elements for it.

Creating a new resource
To create a new resource, click the Add New button (screenshot below), and name the newly created resource urn:SamplePage.

Adding DC metadata for a resource
Click the Detail View button of urn:SamplePage to go to the Detail View of urn:SamplePage (screenshot below). Now do the following:
1. Right-click the urn:SamplePage box, click Add Predicate, and from the dropdown list select dc:title. Right-click the dc:title box, click Add Literal Object, and type A Sample Page in the newly created Literal Object box. The dc:title metadata is created.
2. Right-click the urn:SamplePage box, click Add Predicate, and from the dropdown list select dc:description. Right-click the dc:description box, click Add Literal Object, and type A Sample Page for the DC metadata tutorial in the newly created Literal Object box. The dc:description metadata is created.
3. Right-click the urn:SamplePage box, click Add Predicate, and from the dropdown list select dc:date. Right-click the dc:date box, click Add Literal Object, enter a date value (in YYYY-MM-DD format), and select xsd:date as the datatype of the literal object. The dc:date metadata is created and has the XML Schema datatype xsd:date.
The Detail View of urn:SamplePage should look something like this:
Switch to Text View to see the RDF/XML serialization, which should look something like this:
Notice that the dc:date element has an rdf:datatype attribute with a value of http://www.w3.org/2001/XMLSchema#date, which indicates the date datatype of XML Schema.
You can create as many DC elements as you want as predicates of this resource. To create more resources, go back to the RDF/OWL View Overview. Save the file to complete this part of the tutorial.
That's it! You have learned how to create a Dublin Core RDF document using the graphical interface of SemanticWorks. In the Examples folder of the SemanticWorks application folder, you will find a DC ontology (DCOntology.rdf) and a DC template (DCTemplate.rdf) to help you create DC RDF documents.

5 User Reference
This User Reference section describes all the icons that appear in the toolbars and Detail View and the commands in SemanticWorks menus. It ends with a section on usage issues. The User Reference is organized into the following subsections: Toolbar Icons, Icons in Detail View, File menu commands, Edit menu commands, View menu commands, RDF/OWL menu commands, Tools menu commands, Window menu commands, Help menu commands, and Usage Issues.

5.1 Toolbar Icons
Icons in the toolbar are shortcuts for various commands, all of which are also available as menu commands. In this section, the icons are listed together with brief descriptions of the commands for which they are shortcuts. The commands are described in more detail in the corresponding menu section in the User Reference. The toolbar icons are arranged in the following groups: Main, View Options, Classes, Properties, and Miscellaneous.

Main
The Main group of icons contains shortcuts to basic file and editing commands.
New (File menu, Ctrl+N) Creates a new RDF document with a name of UntitledX, where X is an integer.
Open (File menu, Ctrl+O) Pops up the Open dialog, in which you can browse for the file to be opened.
Save (File menu, Ctrl+S) Saves the active document to file. Enabled only if the active document has been modified.
Print (File menu) Displays the Print dialog (for printing the Detail View of the selected item).
Undo (Edit menu, Ctrl+Z, Alt+Backspace) Undoes an editing change.
Redo (Edit menu, Ctrl+Y) Redoes an undo.
Cut (Edit menu, Ctrl+X, Shift+Delete) Cuts the selected text (in Text View) from the document and saves it to the clipboard.
Copy (Edit menu, Ctrl+C) Copies the selected text (in Text View) to the clipboard.
Paste (Edit menu, Ctrl+V) Pastes the clipboard text into the active document at the cursor position.
Find (Edit menu, Ctrl+F) Finds the submitted text string (in Text View).
Find Next (Edit menu) Finds the next occurrence of the submitted text string (in Text View). View Options The View Options group of icons are shortcuts to commands that affect the view and the semantics of the active document in the GUI. Show Blank Nodes (View menu) A toggle command that shows/hides blank nodes in the Overview categories. Show Comments (View menu) A toggle command that shows/hides comments in Detail View. Show Possible Inconsistencies (View menu) A toggle command to show/hide possible semantic inconsistencies in the ontology. Applies to the semantics check on OWL Lite and OWL DL ontologies. RDF/OWL Level (RDF/OWL menu) Selects the RDF/OWL level for the active document: RDF, RDF Schema, OWL Lite, OWL DL, or OWL Full. The selected level will apply till changed or till the document is closed. Syntax Check (RDF/OWL menu) Checks the syntax of the active document. Semantics check (RDF/OWL menu) Checks the semantics of the active OWL Lite or OWL DL ontology document. Reload All Imports (RDF/OWL menu) Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference Toolbar Icons 83 Reloads all ontologies that the active document imports. Expand URIref Prefixes (Tools menu) A toggle command to expand/use URIref prefixes in the serialized RDF/XML. Classes The Classes group of icons enables you to add a relationship to a class. These commands are also available in the context menu that appears when the box of an eligible class is right-clicked in Detail View. Adds subClassOf Adds a subClassOf relationship to a class. Adds intersectionOf Adds an intersectionOf relationship to a class. Adds unionOf Adds a unionOf relationship to a class. Adds complementOf Adds a complementOf relationship to a class. Adds oneOf Adds a oneOf relationship to a class. Adds disjointWith Adds a disjointWith relationship to a class. Add equivalentClass Adds an equivalentClass relationship to a class. Properties The Properties group of icons enables you to define certain attributes of properties. These commands are also available in the context menu that appears when a property box is right-clicked in Detail View. Adds subPropertyOf © 2008 Altova GmbH Altova SemanticWorks 2008 84 User Reference Toolbar Icons Adds a subPropertyOf relationship to a property. Adds domain Adds a domain for a property. Adds range Adds a range specifier for a property. Add equivalentProperty Adds an equivalentProperty relationship to a property. Add inverseOf Adds an inverseOf relationship to a property. Miscellaneous The Classes group of icons enables you to add a relationship to a class. These commands are also available in the context menu that appears when the box of an eligible class is right-clicked in Detail View. Adds resource object Adds a resource object to a predicate. Adds literal object Adds a literal object to a predicate. Adds predicate Adds a predicate to an RDF resource. Adds restriction Adds a restriction to a class or property relationship. Adds allValuesFrom Adds the allValuesFrom relationship to link a restriction to a class or data range. Adds someValuesFrom Adds the someValuesFrom relationship to link a restriction to a class or data range. Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference Toolbar Icons 85 Adds hasValue Adds the someValuesFrom relationship to link a restriction to a class instance or a data value. 
© 2008 Altova GmbH Altova SemanticWorks 2008 86 User Reference Icons in Detail View 5.2 Icons in Detail View The various icons used to display relationships between ontology items in Detail View are listed below, with a brief description. Icons are organized into the following groups: Ontology items RDF containers and collections Class descriptions Class axioms Property descriptions OWL individuals (instances) Ontology items Ontology items are classes, instances, properties, and literals. In the case of some items, variants are distinguished. The Class icon is used for both RDFS and OWL classes. Contrast the bevelled edges on the left of the Class icon with the rounded edges of the Instances icon. Instances of RDFS and OWL classes, and subjects of RDF Triples. (Instances are also known as Individuals in OWL terminology.) RDFS property. Distinguished from OWL properties by the lack of symbols in the top left-hand corner. OWL object property. Distinguished from OWL datatype properties by the O symbol at extreme top left. The other symbols, from left to right, are: functional property, inverse functional property, transitive property, and symmetric property. Clicking a symbol sets the property to that type. OWL datatype property. Distinguished from OWL object properties by the DT symbol at extreme top left. The property type can be set to functional by clicking the f symbol. OWL ontology. The ontology header is optional. It is useful for importing other ontologies and for declaring prior versions. ../contd. Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference Icons in Detail View 87 RDF containers and collections The following icons indicate class relationships, and can be inserted when a class is selected. rdf:Bag rdf:Seq rdf:Alt rdf:List Class descriptions The following icons indicate class relationships, and can be inserted when a class is selected. owl:allValuesFrom. Specifies that all values allowed on a restriction must come from the specified class or data range. owl:someValuesFrom. Specifies that at least one value allowed on a restriction must come from the specified class or data range. owl:hasValue. Specifies the value that a restriction must take. owl:unionOf. A class that is a union of two or more classes is equal to that union. owl:intersectionOf. When a class ABC containing A's, B's, and C's intersects with a class CDE containing C's, D's, and E's, the resulting class contains C's. owl:complementOf. When a class A is a complement of class B, then no instance of A can be an instance of B. owl:oneOf. Describes a class by enumerating its instances. ../contd. © 2008 Altova GmbH Altova SemanticWorks 2008 88 User Reference Icons in Detail View Class axioms The following icons indicate OWL class relationships, and can be inserted when a class is selected. rdfs:subClassOf. The selected class is a subclass of another class. owl:equivalentClass. Declares the equality of the selected class with another class. owl:disjointWith. Declares the inequality of the selected class with other classes. Property descriptions The following icons indicate OWL class relationships, and can be inserted when a class is selected. rdfs:subPropertyOf. Declares the selected property as a subproperty of another property. For example, the property #hasMother could be a subproperty of the property #hasParent. rdfs:domain. Specifies the domain of a property P, i.e. the class of resources that may be the subject in a triple with the predicate P. rdfs:range. Specifies the range of a property P, i.e. 
the class of resources (or datatypes) that may be the value, in a triple, of the predicate P. owl:DataRange. Defines an enumeration of data values. owl:equivalentProperty. Declares equivalence between properties. Equivalent properties have the same property extensions. owl:inverseOf. Declares one property to be the inverse of another. For example, the property #hasChild could be the inverse of the property #hasParent. OWL individuals (instances) The following icons indicate OWL class relationships, and can be inserted when a class is selected. owl:sameAs. Declares two individuals to be identical. owl:differentFrom. Declares the inequality of two individuals. owl:AllDifferent. Declares the pairwise inequality of all individuals in a group. Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference 5.3 File Menu 89 File Menu The commands in the File menu enable you to create, open, save, export, and print SemanticWorks files. (SemanticWorks files are files with the .nt, .rdf, .rdfs, and .owl file extensions.) New (Ctrl+N) Opens a new blank document in the GUI with a name of UntitledX (where X is an integer). This document can subsequently be saved as an RDF (.rdf), RDF Schema (.rdfs), or OWL (.owl) file, or in the XML (.xml) or text (.txt) formats. To save to the N-Triples (.nt) format, use the File | Export to .nt command. Note that since all OWL files are valid RDF files, they are typically saved as .rdf files. The new document is created with the following rudimentary content: <?xml version="1.0"?> <rdf:RDF xmlns: Notice that the RDF, RDF Schema, and OWL namespaces are automatically declared on the rdf:RDF element. Open (Ctrl+O) Opens .rdf, .rdfs, .owl , .nt, and .xml files in the RDF/OWL view. Save (Ctrl+S), Save As Saves the active document as an RDF (.rdf), RDF Schema (.rdfs), OWL (.owl) file, or in the XML (.xml) or text (.txt) formats. To save to the N-Triples (.nt) format, use the File | Export to .nt command. Note that once a file is saved in a particular format, it is available only in that format and can, therefore, only be viewed in that format in other editors. For example, a .nt file can be viewed in a standard text editor in N-Triples format only; the graphical view of this .nt file are features special to SemanticWorks. Save Diagram as Image This command is active in the Detail View of RDF/OWL View and saves the active Detail View document as an image in PNG or EMF format. Export to .nt and .xml Exports the active SemanticWorks document as an N-Triples or XML file to the desired location. The file is exported with the .nt or .xml file extension, respectively. Close, Close All Closes, respectively, the active document and all open documents. Encoding Pops up the Encoding dialog, in which you can set the encoding of the RDF/XML, RDF Schema, or OWL document. The encoding you select is entered as the value of the encoding attribute of the XML declaration of the document, as in <?xml version="1.0" encoding="UTF-8"?>. Note that default encoding is set in the Encoding tab of the Options dialog (Tools | Options). © 2008 Altova GmbH Altova SemanticWorks 2008 90 User Reference File Menu Print, Print Preview, Print Setup The Print and Print Preview commands are available for Text View and Detail View, and enable these two views to be printed. The Print Setup command enables you to configure a printer for the print job. Recently Used Files Displays the four most recently opened documents. Exit Closes all open documents and exits the application. 
If a document has unsaved changes, you are prompted about whether you wish to save the changes. Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference 5.4 Edit Menu 91 Edit Menu The commands in the Edit menu enable you to navigate and edit documents in the SemanticWorks interface quickly and efficiently. Undo (Ctrl+Z), Redo (Ctrl+Y) Respectively, undoes and redoes the previous command. Both commands can be used a multiple number of times in sequence in order to undo or redo multiple steps. The command is available in both Text View and RDF/OWL View. Find (Ctrl+F), Find Next In Text View or Detail View, finds the input text string if that string is present in the current view. You can select options to match the input string as a whole word and/or whether the search should be case-sensitive. In Text View (for which, screenshot of the Find dialog is shown below ), you can additionally search using regular expressions. Also, in Text View, when you click the Advanced button, you can select what parts of the XML document are searched. The Find Next command finds the next instance of the search string. Replace In Text View, pops up the Find and Replace dialog in which you can specify a text string A to find and a text string B with which to replace the text string A. The options for specifying the text string to find are as described for the Find command. Delete (Delete) In RDF/OWL View, deletes the selected object. Cut (Shift+Delete), Copy (Ctrl+C), Paste (Ctrl+V) In Text View, respectively, cuts or copies the selected text to the clipboard, and pastes from the clipboard to the cursor position in Text View. © 2008 Altova GmbH Altova SemanticWorks 2008 92 User Reference 5.5 View Menu View Menu The commands in the View menu enable you to configure the display of toolbars and the Status Bar, and enable you to toggle on or off the display of blank nodes (anonymous classes). Toolbars The Toolbars menu item pops out a submenu in which you can choose whether to display a toolbar or not. When one of these submenu items is checked, it is displayed in the GUI; otherwise, it is hidden. Details Window, Status Bar, Overview Window, Errors Window The Status Bar menu item toggles on and off the display of the Details Window, Status Bar, Overview Window, and Errors Window. The Status Bar is located at the bottom of the SemanticWorks application window. The Status Bar displays a short description of menu items and toolbar icons when the mouse is placed over such an item. Show Blank Nodes Toggles the display of blank nodes (anonymous classes) on and off. When selected, the toggle is on (blank nodes are displayed); when unselected, the toggle is off (blank nodes are not displayed). Show Comments Toggles the display of comments in Detail View on and off. When selected, the toggle is on (comments are displayed in Detail View); when unselected, the toggle is off (comments are not displayed). Note that comments in large ontologies are hidden in order to provide a better graphical overview of the document. Note that comments are to be edited on the original declaration; where they are displayed as references, they cannot be edited. Show Possible Inconsistencies A toggle command to show possible semantic inconsistencies within OWL Lite and OWL DL ontologies. The semantic check in SemanticWorks is a partial semantic check. It is based on knowledge explicitly stated in the ontology; implied knowledge is not deduced. 
This means that implied knowledge, such as that derived through entailment or inference, will not be evaluated when the ontology is checked for its semantics. The semantics check, therefore, checks for inconsistencies in the explicit knowledge. The relationship between inconsistencies and the semantics check of SemanticWorks is shown in the illustration below. (Also see the section Semantics Check in the User Reference.) Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference View Menu 93 When the Show Possible Inconsistencies toggle is switched on, inconsistencies that arise because implied knowledge is not used are displayed in the Errors Window (screenshot below). When the toggle is switched off, possible inconsistencies are not displayed (screenshot below). Note: The display of inconsistencies can also be switched on and off via the Inconsistency Warnings filter in the Filter Menu of the Errors Window. © 2008 Altova GmbH Altova SemanticWorks 2008 94 User Reference 5.6 RDF/OWL Menu RDF/OWL Menu The RDF/OWL menu contains commands that enable you to make settings for RDF/OWL document editing and checking and commands to carry out these checks. RDF/OWL Level Pops out a submenu from which you can select the RDF or ontology specification (RDF Schema, OWL Lite, OWL DL, or OWL Full) according to which the active document should be edited. The RDF/OWL Level can also be selected in the corresponding combo box in the View Options toolbar (screenshot below). This step is important because it sets up the GUI for appropriate editing interaction. For example, when RDF is selected, the Main Window contains only a Resources Overview as opposed to the Overview of five categories when one of the ontology levels is selected. Further, the insertion of items appropriate for the selected RDF/OWL level depends on the selection you make. When an existing document is opened, or when a new document is created, the OWL Full level is selected by default. After you change the level to the required level, this level is maintained as long as the document is open or till the level is changed. You can change levels as many times as you wish. The display of the document will change accordingly. Note: When the OWL Lite or OWL DL level is selected, you can additionally run syntax and semantics checks on the document for that level (i.e. against the specification corresponding to the selected level). Reload All Imports Reloads all imported ontologies in the active document. This command is useful if you have modified an imported ontology after opening the active document. Syntax Check The Syntax Check command checks the syntax of the active document according to syntax rules specified in the corresponding specification. The result of the check (positive = well-formed) is displayed in the Errors Window pane. Additionally, if errors are detected, these are listed in the Errors Window pane and include links to the Detail View of the offending items. Semantics Check The Semantics Check command is enabled when the active document has an RDF/OWL level that is OWL Lite or OWL DL. It checks the semantics of the active document according to semantics rules specified in the corresponding specification. The result of the check (positive = at least partially consistent) is displayed in the Errors Window pane. Additionally, if errors are detected, these are listed in the Errors Window pane and include links to the Detail View of the items that lead to the inconsistencies. 
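To make the idea of an explicitly stated inconsistency concrete, consider a fragment in which two classes are declared disjoint and the same individual is then typed with both of them. The sketch below is hypothetical (the class names echo the tutorial ontology, and #SomePage is made up for the example), but a document containing statements like these would be expected to fail the semantics check, with the conflicting statements listed in the Errors Window:

<owl:Class rdf:about="#EnglishWebPage">
  <owl:disjointWith rdf:resource="#GermanWebPage"/>
</owl:Class>
<owl:Class rdf:about="#GermanWebPage"/>
<rdf:Description rdf:about="#SomePage">
  <!-- the same individual is explicitly typed with two disjoint classes -->
  <rdf:type rdf:resource="#EnglishWebPage"/>
  <rdf:type rdf:resource="#GermanWebPage"/>
</rdf:Description>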
Note that the semantics check is a partial semantics check, which means that only knowledge stated explicitly in the ontology is evaluated, while knowledge implicit in certain ways (such as from entailment) is not evaluated. Inconsistencies arising from the non-consideration of possibly existent implicit information are displayed when the Show Possible Inconsistencies toggle command is switched on. If an inconsistency possibly exists and if the Show Possible Inconsistencies toggle is switched off, then the semantics check returns a result indicating the Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference RDF/OWL Menu 95 existence of apparent inconsistency. The possible inconsistencies can be viewed by switching on the Show Possible Inconsistencies toggle. Only if no possible inconsistency is detected does the semantics check return a positive result. Also see Errors Window. © 2008 Altova GmbH Altova SemanticWorks 2008 96 User Reference 5.7 Tools Menu Tools Menu The commands in the Tools menu enable you to customize the application (Options menu item) and set up the active document's namespaces, URIref prefixes, URIref serialization, and base URI. These commands are described in detail in the subsections of this section: Customize: describes the various ways you can customize your application. Options: describes the options that can be set in the various tabs of the Options dialog. Namespace Imports for RDF: describes how the Namespace Imports dialog can be used to import namespaces and thereby make available resources described in an ontology for insertion in an RDF document. URIref Prefixes, Expand URIref Prefixes: describes the URIref Prefixes dialog and the Expand URIref Prefixes feature. Base URI: describes how to define a base URI for a document using the Base URI dialog. Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference 5.7.1 Tools Menu 97 Customize The Customize command enables you to customize your SemanticWorks interface. Clicking the command pops up the Customize dialog (screenshot below), in which customization options are grouped in tabs. Commands tab The Commands tab displays all SemanticWorks commands, grouped by menu. Select a command to display a description (in the Description pane) of what the command does. You can also drag a selected command into a menu or toolbar in the GUI. When you do this, the selected command is not removed from the menu in the Customize dialog in which it was originally listed; neither does the command appear in the menu or toolbar list (in the Customize dialog) into which it was dropped. A command that is dropped into a menu or toolbar appears only in the GUI. Toolbars tab The Toolbars tab lists all SemanticWorks toolbars. Each toolbar contains icons that serve as shortcuts for menu commands.Toolbars can be activated (that is, displayed in the SemanticWorks GUI) by checking their corresponding check boxes. They are deactivated by unchecking the corresponding check box. The Menu Bar toolbar cannot be deactivated. Note that text labels can be enabled for individual toolbars. These text labels are the descriptive labels you see for each command in the Commands tab of the Customize dialog (see above). The Reset button resets toolbars to the original settings. The Reset All button resets all toolbars and menus to their original settings. The New button enables you to define a new toolbar. 
Note: You can also move individual toolbars to any location on the screen by dragging a toolbar by its handle and dropping it at the desired location. Keyboard tab © 2008 Altova GmbH Altova SemanticWorks 2008 98 User Reference Tools Menu The Keyboard tab enables you to customize shortcuts for various commands. To define a new shortcut key combination for a command (for which a key combination may or may not exist), do the following: 1. Select the command for which you wish to assign a shortcut key combination from the Commands pane. 2. Place the cursor in the Press New Shortcut Key text box, and press the shortcut key combination you wish to use for this command. 3. If the key combination has already been assigned to a command, an "Assigned" message appears below the text box. Otherwise, an "Unassigned" message appears. Click the Assign button to assign an unassigned key combination to the selected command. You can remove a key-combination assignment by selecting the key combination in the Current Keys pane and clicking the Remove button. Clicking the Reset All button resets the set of shortcut key combinations to the original settings. Menu tab The Menu tab enables you to select context menus and set appearance options such as menu shadows. Options tab The Options tab enables you to set the following Toolbar options: (i) whether Tool Tips are displayed when the cursor is placed over a toolbar icon; (ii) whether shortcut keys are displayed in Tool Tips; and (iii) whether toolbar icons are displayed as small or large icons. Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference 5.7.2 Tools Menu 99 Options Clicking the Options menu item pops up the Options dialog (screenshot below), in which you can set options for the application. The Detail View Settings tab enables you to make the following settings for Detail View. In the Draw Direction pane, set the direction in which the Detail View of an item is drawn: from left to right (Horizontal), or from top to bottom (Vertical). In the Widths pane, set the minimum and maximum width of object boxes. In the Distances pane, set the distance between parent and child objects, and between child objects. In the Show in Diagram pane, set (i) whether comments are shown, and set their widths; (ii) whether labels are shown instead of URIs. In the Show References pane, set whether references to classes, properties, and individuals are displayed. To revert to the original Altova-defined settings click the Predefined button. © 2008 Altova GmbH Altova SemanticWorks 2008 100 User Reference Tools Menu Further tabs in the Options dialog enable you to customize SemanticWorks as follows: The Color tab allows you to design a background color for the Detail View. You can set the background to be a solid color or a gradient. In the RDF/OWL View Fonts and Text View Fonts tabs you can set the font face, font size, font style, and font color for various items in RDF/OWL View and for text in Text View. In the Encoding tab, you can select the default encoding for XML and non-XML files. In the Application tab, you can select whether the SemanticWorks application logo should be displayed when the program starts and whether it should be printed when a document is printed from within SemanticWorks. You can also select whether imports should be resolved and, optionally, validated when a document is opened, or not. 
Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference 5.7.3 Tools Menu 101 Namespace Imports for RDF When creating an RDF or RDFS document, it is convenient to be able to insert resources in RDF statements by selecting the required resources from a list. This is especially useful if a single resource is to be inserted multiple times in a document. As an example of such use consider the creation of an RDF document that contains metadata for a large number of web pages. Each web page resource is described by the same set of property resources. If the property resources are described in an ontology, then SemanticWorks can access these resources so that they can be displayed in the SemanticWorks interface and be inserted in RDF statements. SemanticWorks does this using the Namespace Imports mechanism. The Namespace Imports mechanism works by importing, into the RDF or RDFS document, the namespace URIs of the resources to be referenced. The imported namespace URIs must be the same as those used to define the required resources in an ontology. Once a namespace URI is imported, ontology resources associated with this URI are made available to SemanticWorks for insertion in the active document. The Namespace Imports for RDF command is the SemanticWorks feature that enables you to import the required namespaces. The feature is to be used as follows: 1. Select the Namespace Imports for RDF command in order to display the Namespace Imports dialog (screenshot below). 2. In the Namespace column enter the first namespace to be imported, say, http//purl.org/dc/elements/1.1/, which is the DC namespace declared, say, in an ontology document DCOntology.rdf. 3. In the Import File column, enter the location of the ontology document DCOntology.rdf. Note that you should give the absolute path for the ontology document. 4. If the ontology file defines XML Schema datatypes, you will need to import the XML Schema namespace (http//), too. Enter the XML Schema namespace () in the Namespace column and the location of the ontology file in the Import File column. 5. Click OK to complete. In the procedure outlined above, you have imported two namespace URIs. After declaring these namespaces in your RDF document (see URIref Prefixes and Namespaces for a description of how to do this), resources from the ontology file from which the namespaces have been imported are listed as resources (in RDF documents) or instances (in RDFS documents); and (ii) are available in the Detail View of resources (or instances) , for insertion in the RDF or RDFS © 2008 Altova GmbH Altova SemanticWorks 2008 102 User Reference Tools Menu document. For a detailed description of usage in practice, see the tutorial section RDF Documents. Note: Resources that are displayed in the Resources or Instances Overview as a result of the namespace imports (and not as a result of being physically entered in the document) are available only for insertion. They should be regarded as abstract resources (available for instantiation) and distinct from the resources actually contained in the document. Namespace imports and owl:imports The Namespace Import feature can also be used to locate ontology resources indicated in the owl:imports statement of an ontology. The URI used in the owl:imports statement is entered as a Namespace in the Namespace Imports dialog (screenshot above). The actual (absolute path) location of the ontology to be imported is entered as the corresponding Import File. 
In order for the namespace import to work, the base URI of the ontology to be imported, which is specified using the xml:base attribute of its rdf:RDF document element, must be the same as the URI used in the owl:imports statement of the importing ontology. Note that it is the mapping in the Namespace Imports dialog of the importing ontology that provides the actual location of the ontology to be imported. See Usage Issues for an overview of how the owl:imports mechanism works. Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference 5.7.4 Tools Menu 103 Namespace Color Assignments Resources from different ontologies can be assigned different colors. These color assignments will be active in both the Overview and Detail View of RDF/OWL View. The assignments are done on the basis of namespaces. So each namespace can be assigned a different color, and the RDF/OWL View will display resources from these namespaces in their respective colors. W hen a box in the diagram is selected, it becomes a darker shade of the assigned color. To assign colors to namespaces, do the following: 1. Click Tools | Namespace Color Assignments. This pops up the Namespaces Color Assignments dialog (screenshot below). 2. 3. 4. 5. Click the Add button to add a color assignment line. In the new color assignment line, enter the required namespace in the URI column. In the Color column, click the color picker to select the color for that namespace. Add more color assignment lines or delete lines as required by using the Add and Delete buttons, respectively. 6. When you are done, click OK. The colors are assigned to the various resources as background colors, each resource according to the namespace in which it is. The screenshot below shows the Detail View of resources with color assignments. Note: The RDF, RDFS, OWL, and XML Schema namespaces can also be assigned colors. © 2008 Altova GmbH Altova SemanticWorks 2008 104 User Reference Altova SemanticWorks 2008 Tools Menu © 2008 Altova GmbH User Reference 5.7.5 Tools Menu 105 URIref Prefixes, Expand URIref Prefixes This section describes the menu items: URIref Prefixes and Expand URIref Prefixes. Also see the related topic, Namespace Imports for RDF. URIref Prefixes Clicking the URIref Prefixes menu item pops up the URIref Prefixes dialog (screenshot below), in which all the namespaces declared for the active ontology are displayed. (The RDF, RDF Schema, and OWL namespaces are declared by default for new documents.) Via the dialog, you can (i) add namespaces to the ontology and bind these namespaces to customized prefixes, (ii) edit existing namespaces and prefixes, and (iii) delete a selected namespace. To add a namespace or delete the selected namespace, use the Add and Delete buttons respectively. To edit a namespace or prefix, place the cursor in the appropriate field and edit using the keyboard. Expand URIref Prefixes After prefixes for namespaces have been assigned (see URIref Prefixes above), a resource can be defined or referenced in the RDF/XML serialization using either: (i) the prefix and local name, or (ii) the expanded URIref. To use the expanded URIref in the serialization, toggle the Expand URIref Prefixes command on. All URIrefs entered in RDF/OWL View after this, that have a prefix, will be serialized in the RDF/XML notation to the expanded URIref form if that prefix has been declared. If the prefix does not exist, then the URIref is serialized exactly as entered, that is, the prefix is read as the scheme part of a URI. 
The RDF/OWL View itself always displays the URIref as entered; URIrefs are not expanded in RDF/OWL View. In actuality, what happens when this command is toggled on, is. © 2008 Altova GmbH Altova SemanticWorks 2008 106 User Reference 5.7.6 Base URI Tools Menu The Base URI command pops up the Base URI dialog (screenshot below), in which you enter the base URI you wish the active document to have. The base URI is useful for resolving relative paths. The base URI you submit is entered as the value of the xml:base attribute of the rdf:RDF element of the document. For example: <rdf:RDF xml: Note: By default, SemanticWorks considers the base URI as the URL of the document. This default URI is not explicitly serialized in the form of a value for the xml:base attribute. Even when the URI of the document is entered in the base URI dialog, it is not serialized in the RDF/XML if it corresponds to the URL of the document. Only a URI that is not the URL of the document is serialized. For using the base URI with the Namespace Imports feature in order to import ontologies, see Usage Issues and Namespace Imports for RDF. Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference 5.8 Window Menu 107 Window Menu The Window menu has commands to specify how SemanticWorks. Warning: To exit the Windows dialog, click OK; do not click the Close Window(s) button. The Close Window(s) button closes the window/s currently selected in the Windows dialog. © 2008 Altova GmbH Altova SemanticWorks 2008 108 User Reference 5.9 Help Menu Help Menu The Help menu contains an onscreen version of this documentation, registration information, relevant Internet hyperlinks, and information about your version of SemanticWorks. Software Activation After you download your Altova product software, you can activate it using either a free evaluation key or a purchased permanent license key. Note: Free evaluation key. When you first start the software after downloading and installing it, the Software Activation dialog will pop up. In it is a button to request a free evaluation key-code. Enter your name, company, and e-mail address in the dialog that appears, and click Request Now! The evaluation key is sent to the e-mail address you entered and should reach you in a few minutes. Now enter the key in the key-code field of the Software Activation dialog box and click OK to start working with your Altova product. The software will be unlocked for a period of 30 days. Permanent license key. The Software Activation dialog contains a button to purchase a permanent license key. Clicking this button takes you to Altova's online shop, where you can purchase a permanent license key for your product. There are two types of permanent license: single-user and multi-user. Both will be sent to you by e-mail. A single-user license contains your license-data and includes your name, company, e-mail, and key-code.A multi-user license contains your license-data and includes your company name and key-code. Note that your license agreement does not allow you to install more than the licensed number of copies of your Altova software on the computers in your organization (per-seat license). Please make sure that you enter the data required in the registration dialog exactly as given in your license e-mail. When you enter your license information in the Software Activation dialog, ensure that you enter the data exactly as given in your license e-mail. For multi-user licenses, each user should enter his or her own name in the Name field. 
The Software Activation dialog can be accessed at any time by clicking the Help | Software Activation command. Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference Help Menu 109 Order Form When you are ready to order a licensed version of the software product, you can use either the Order license key button in the Software Activation dialog (see previous section) or the Help | Order Form command to proceed to the secure Altova Online Shop. Registration The first time you start your Altova software after having activated it, a dialog appears asking whether you would like to register your product. There are three buttons in this dialog: OK: Takes you to the Registration Form Remind Me Later: Pops up a dialog in which you can select when you wish to be next reminded. Cancel: Closes the dialog and suppresses it in future. If you wish to register at a later time, you can use the Help | Registration command. Check for Updates Checks with the Altova server whether a newer version than yours is currently available and displays a message accordingly. © 2008 Altova GmbH Altova SemanticWorks 2008 110 User Reference 5.10 Usage Issues Usage Issues The following usage issue should be noted: Cardinality of property restrictions When entering property restrictions in intersectionOf statements of an ontology, the mincardinality and maxcardinality cannot be entered on a single restriction; they must be entered on two separate restrictions for that property. Otherwise, cardinality is entered as follows: 1. Create a restriction on the subclass of a class, say, by right-clicking the subclass icon . 2. Select Add Restriction from the context menu. This inserts a restriction box (screenshot below). 3. Select or type in the name of the object property to be restricted (screenshot below). 4. Enter the mincardinality by double-clicking to the left of the two dots below the restriction box (see screenshot above). 5. Enter the maxcardinality by double-clicking to the right of the two dots below the restriction box (see screenshot above). OWL imports The owl:imports statement in an ontology header (see code fragment below) references a resource on the web or on a local system. <owl:Ontology rdf: <owl:versionInfo>v 1.17 2003/02/26 12:56:51 mdean</owl:versionInfo> <rdfs:comment>An example ontology</rdfs:comment> <owl:imports rdf: </owl:Ontology> If you are connected to the Internet and there is an ontology at the location indicated by the URI, then this ontology is imported. If you are not connected to the Internet or there is no ontology resource at the location indicated by the URI, SemanticWorks uses the mechanism explained below to locate and import ontology resources. 1. The URI in the owl:imports statement must be the same as the value of the xml:base attribute of the rdf:RDF element of the ontology to be imported. For example, the importing ontology could have the following statement: <owl:imports rdf:. The URI declared in the owl:imports statement must match the base URI of the ontology to be imported. This means that there must either be an ontology at the location specified by the URI, or the xml:base attribute of the ontology to be imported must be the same as Altova SemanticWorks 2008 © 2008 Altova GmbH User Reference Usage Issues 111 the owl:imports URI. The document element of the ontology to be imported would need to be: <rdf:RDF xml:base="" ...>. 2. 
Additionally, the importing ontology must map, using the Namespace Imports for RDF feature, the URI used in the owl:imports statement to the actual (absolute path) location of the imported ontology. Also see Base URI and Namespace Imports for RDF for related information. © 2008 Altova GmbH Altova SemanticWorks 2008 Chapter 6 Conformance 114 Conformance 6 Conformance SemanticWorks 2008 conforms to the W3C specifications listed in the W3C's RDF Overview and OWL Overview documents. The respective suites of specifications are as listed below. RDF specifications RDF Primer RDF Concepts and Abstract Syntax RDF/XML Syntax RDF Semantics RDF Vocabulary Description Language 1.0 (RDF Schema) OWL specifications OWL Web Ontology Language Guide OWL Semantics and Abstract Syntax OWL Web Ontology Language Reference Implementation-specific information The following implementation-specific information should be noted: The syntax check of the document Checks if the document is well-formed RDF. The Syntax Check command of SemanticWorks checks whether the document can be transformed into OWL Abstract Syntax following the rules given in the OWL Web Ontology Language Semantics and Abstract Syntax document, Section 4: Mapping to RDF Graphs. The document is said to be well-formed if it satisfies the definitions for an "OWL Lite ontology in RDF graph form" or an "OWL DL ontology in RDF graph form", respectively for OWL Lite and OWL DL documents, as described at the end of Section 4. The semantic check of the document The Semantic Check command of SemanticWorks checks whether the document follows the rules given in the OWL Web Ontology Language Semantics and Abstract Syntax document, Section 5: RDF-Compatible Model-Theoretic Semantics. The semantic engine in SemanticWorks executes the checks solely on the existing statements. It is a partial consistency checker. Altova SemanticWorks 2008 © 2008 Altova GmbH Chapter 7 License Information 116 License Information 7 License Information This section contains: Information about the distribution of this software product Information about the intellectual property rights related to this software product The End User License Agreement governing the use of this software product Please read this information carefully. It is binding upon you since you agreed to these terms when you installed this software product. Altova SemanticWorks 2008 © 2008 Altova GmbH License Information 7.1 Electronic Software Distribution 117 Electronic Software Distribution This product is available through electronic software distribution, a distribution method that provides the following unique benefits: You can evaluate the software free-of-charge before making a purchasing decision. Once you decide to buy the software, you can place your order online at the Altova website and immediately get a fully licensed product within minutes. When you place an online order, you always get the latest version of our software. The product package includes a comprehensive integrated onscreen help system. The latest version of the user manual is available at (i) in HTML format for online browsing, and (ii) in PDF format for download (and to print if you prefer to have the documentation on paper). 30-day evaluation period After downloading this product, you can evaluate it for a period of up to 30 days free of charge. About 20 days into this evaluation period, the software will start to remind you that it has not yet been licensed. The reminder message will be displayed once each time you start the application. 
If you would like to continue using the program after the 30-day evaluation period, you have to purchase an Altova Software License Agreement, which is delivered in the form of a key-code that you enter into the Software Activation dialog to unlock the product. You can purchase your license at the online shop at the Altova website. Helping Others within Your Organization to Evaluate the Software If you wish to distribute the evaluation version within your company network, or if you plan to use it on a PC that is not connected to the Internet, you may only distribute the Setup programs, provided that they are not modified in any way. Any person that accesses the software installer that you have provided, must request their own 30-day evaluation license key code and after expiration of their evaluation period, must also purchase a license in order to be able to continue using the product. For further details, please refer to the Altova Software License Agreement at the end of this section. © 2008 Altova GmbH Altova SemanticWorks 2008 118 License Information Software Activation and License Metering 7.2 Software Activation and License Metering. You will also notice that, if you are online, your Altova product contains many useful functions; these are unrelated to the license-metering technology. Altova SemanticWorks 2008 © 2008 Altova GmbH License Information 7.3 Intellectual Property Rights 119 Intellectual Property Rights The Altova. Notifications of claimed copyright infringement should be sent to Altova’s copyright agent as further provided on the Altova Web Site. Altova software contains certain Third Party Software that is also protected by intellectual property laws, including without limitation applicable copyright laws as described in detail at. All other names or trademarks are the property of their respective owners. © 2008 Altova GmbH Altova SemanticWorks 2008 120 License Information Altova End User License Agreement 7.4 Altova End User License Agreement THIS IS A LEGAL DOCUMENT -- RETAIN FOR YOUR RECORDS ALTOVA® END USER LICENSE AGREEMENT Licensor: Altova GmbH Rudolfsplatz 13a/9 A-1010 Wien Austria Important - Read Carefully. Notice to User: This End User License Agreement (“Software License Agreement”) is a legal document between you and Altova GmbH (“Altova”). It is important that you read this document before using the Altova Altova Privacy Policy (“Privacy Policy”) License Grant. Upon your acceptance of this Software License Agreement Altova (a) a suite of Altova software products (collectively, the “Suite”) and have not installed each product individually, then the Software License Agreement governs your use of all of the software included in the Suite. If you have licensed SchemaAgent, then the terms and conditions of this Software License Agreement apply to your use of the SchemaAgent server software (“SchemaAgent Server”) included therein, as applicable and you are licensed to use SchemaAgent Server solely in connection with your use of Altova Software and solely for the purposes described in the accompanying documentation. 
In addition, if you have licensed XMLSpy Enterprise Edition or MapForce Enterprise Edition, or UModel, your license to install and use a copy of the Software as provided herein permits you to generate source code based on (i) Altova Library modules that are included in the Software (such generated code hereinafter referred to as the “Restricted Source Code”) and (ii) schemas or mappings that you create or provide (such code as may be generated from your schema or mapping source materials hereinafter referred to as the “Unrestricted Source Code”). In addition to the rights granted herein, Altova grants you a non-exclusive, non-transferable, limited license to compile into executable form the complete generated code comprised of the combination of the Restricted Source Code and the Unrestricted Source Code, and to use, copy, distribute or license that executable. You may not distribute or redistribute, sublicense, sell, or transfer to a third party the Restricted Source Code, unless said third party already has a license to the Restricted Source Code through their separate license agreement with Altova or other agreement with Altova. Altova reserves all other rights in and to the Software. With respect to the feature(s) of Altova SemanticWorks 2008 © 2008 Altova GmbH License Information Altova End User License Agreement 121 UModel that permit reverse-engineering of your own source code or other source code that you have lawfully obtained, such use by you does not constitute a violation of this Agreement. Except as otherwise permitted in Section 1(h) reverse engineering of the Software is strictly prohibited as further detailed therein. Server Use. You may install one copy of the Software on your computer file server for (b) the purpose of downloading and installing the Software onto other computers within your internal network up to the Permitted Number of computers. If you have licensed SchemaAgent, then you may install SchemaAgent Server on any server computer or workstation and use it in connection with your Software. Altova. If you have purchased Concurrent User Licenses as defined in Section 1(c) you may install a copy of the Software on a terminal server within your internal network for the sole and exclusive purpose of permitting individual users within your organization to access and use the Software through a terminal server session from another computer on the network provided that the total number of user that access or use the Software on such network or terminal server does not exceed the Permitted Number. Altova makes no warranties or representations about the performance of Altova software in a terminal server environment and the foregoing are expressly excluded from the limited warranty in Section 5 hereof and technical support is not available with respect to issues arising from use in such an environment. Concurrent Use. If you have licensed a “Concurrent-User” version of the Software, (c) licenses. Backup and Archival Copies. You may make one backup and one archival copy of . Home Use. You, as the primary user of the computer on which the Software is (e) installed, may also install the Software on one of your home computers for your use. However, the Software may not be used on your home computer at the same time as the Software is being used on the primary computer. Key Codes, Upgrades and Updates. Prior to your purchase and as part of the (f) registration for the thirty (30) -day evaluation period, as applicable, you will receive an evaluation key code. 
You will receive a purchase key code when you elect to purchase the Software from either Altova GMBH or an authorized reseller. The purchase key code will enable you to activate the Software beyond the initial evaluation period. You may not re-license, reproduce or distribute any key code except with the express written permission of Altova. If the Software that you have licensed is an upgrade or an update, then the update replaces all or part of the Software previously licensed. The update or upgrade and the associated license keys does not constitute the granting of a second license to the Software in that you may not use the upgrade or update in addition to the Software that it is replacing. You agree that use of the upgrade of update terminates your license to use the Software or portion thereof replaced. Title. Title to the Software is not transferred to you. Ownership of all copies of the (g) Software and of copies made by you is vested in Altova, subject to the rights of use granted to you in this Software License Agreement. As between you and Altova, documents, files, stylesheets, generated program code (including the Unrestricted Source Code) and schemas that are authored or created by you via your utilization of the Software, in accordance with its Documentation and the terms of this Software License Agreement, are your property. Reverse Engineering. Except and to the limited extent as may be otherwise (h) specifically provided by applicable law in the European Union, you may not reverse engineer, decompile, disassemble or otherwise attempt to discover the source code, underlying ideas, © 2008 Altova GmbH Altova SemanticWorks 2008 122 License Information Altova End User License Agreement Altova to provide the information necessary to achieve such operability and Altova has not made such information available. Altova has the right to impose reasonable conditions and to request a reasonable fee before providing such information. Any information supplied by Altova Altova Customer Support Department. Other Restrictions. You may not loan, rent, lease, sublicense, distribute or otherwise (i) transfer all or any portion of the Software to third parties except to the limited extent set forth in Section 3 or otherwise expressly provided. Altova’s ALTOVA Altova's Rights. You acknowledge that the. You acknowledge that. You will take no actions which adversely affect Altova Altova are trademarks of Altova GmbH Altova SemanticWorks 2008 © 2008 Altova GmbH License Information Altova End User License Agreement 123 grant you any intellectual property rights in the Software. Notifications of claimed copyright infringement should be sent to Altova’s copyright agent as further provided on the Altova Web Site. Altova; (“Pre-release Software”), then this Section applies. In addition, this section applies to all evaluation and/or demonstration copies of Altova software (“Evaluation Software”) Altova, and may contain bugs, errors and other problems that could cause system or other failures and data loss. CONSEQUENTLY, THE PRE-RELEASE AND/OR EVALUATION SOFTWARE IS PROVIDED TO YOU “AS-IS” WITH NO WARRANTIES FOR USE OR PERFORMANCE, AND ALTOVA DISCLAIMS ANY WARRANTY OR LIABILITY OBLIGATIONS TO YOU OF ANY KIND, WHETHER EXPRESS OR IMPLIED. 
WHERE LEGALLY LIABILITY CANNOT BE EXCLUDED FOR PRE-RELEASE AND/OR EVALUATION SOFTWARE, BUT IT MAY BE LIMITED, ALTOVA’S LIABILITY AND THAT OF ITS SUPPLIERS SHALL BE LIMITED TO THE SUM OF FIFTY DOLLARS (USD Altova has not promised or guaranteed to you that Pre-release Software will be announced or made available to anyone in the future, that Altova has no express or implied obligation to you to announce or introduce the Pre-release Software, and that Altova Altova, you will provide feedback to Altova Altova of a publicly released commercial version of the Software, whether as a stand-alone product or as part of a larger product, you agree to return or destroy all earlier Pre-release Software received from Altova and to abide by the terms of the license agreement for any such later versions of the Pre-release Software. 5. LIMITED WARRANTY AND LIMITATION OF LIABILITY Limited Warranty and Customer Remedies. Altova warrants to the person or entity (a) © 2008 Altova GmbH Altova SemanticWorks 2008 124 License Information Altova End User License Agreement Altova. Altova’s and its suppliers’ entire liability and your exclusive remedy shall be, at Altova’s option, either (i) return of the price paid, if any, or (ii) repair or replacement of the Software that does not meet Altova’s Limited Warranty and which is returned to Altova (b) AND REMEDIES STATE THE SOLE AND EXCLUSIVE REMEDIES FOR ALTOVA OR ITS SUPPLIER’S BREACH OF WARRANTY. ALTOVA, ALTOVA AND ITS SUPPLIERS MAKE NO WARRANTIES, CONDITIONS, REPRESENTATIONS OR TERMS, EXPRESS OR IMPLIED, WHETHER BY STATUTE, COMMON LAW, CUSTOM, USAGE OR OTHERWISE AS TO ANY OTHER MATTERS. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, ALTOVA. Limitation Of Liability. TO THE MAXIMUM EXTENT PERMITTED BY (c) APPLICABLE LAW EVEN IF A REMEDY FAILS ITS ESSENTIAL PURPOSE, IN NO EVENT SHALL ALTOVA ALTOVA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. IN ANY CASE, ALTOVA, Altova Altova and you. Infringement Claims. Altova will indemnify and hold you harmless and will defend or (d) settle any claim, suit or proceeding brought against you by a third party that is based upon a claim that the content contained in the Software infringes a copyright or violates an intellectual Altova SemanticWorks 2008 © 2008 Altova GmbH License Information Altova End User License Agreement 125 or proprietary right protected by United States or European Union law (“Claim”), but only to the extent the Claim arises directly out of the use of the Software and subject to the limitations set forth in Section 5 of this Agreement except as otherwise expressly provided. You must notify Altova in writing of any Claim within ten (10) business days after you first receive notice of the Claim, and you shall provide to Altova at no cost with such assistance and cooperation as Altova may reasonably request from time to time in connection with the defense of the Claim. Altova shall have sole control over any Claim (including, without limitation, the selection of counsel and the right to settle on your behalf on any terms Altova deems desirable in the sole exercise of its discretion). You may, at your sole cost, retain separate counsel and participate in the defense or settlement negotiations. Altova Altova’s legal counsel the Software is likely to become the subject of a Claim, Altova shall attempt to resolve the Claim by using commercially reasonable efforts to modify the Software or obtain a license to continue using the Software. 
If in the opinion of Altova’s legal counsel the Claim, the injunction or potential Claim cannot be resolved through reasonable modification or licensing, Altova, at its own election, may terminate this Software License Agreement without penalty, and will refund to you on a pro rata basis any fees paid in advance by you to Altova. THE FOREGOING CONSTITUTES ALTOVA’S SOLE AND EXCLUSIVE LIABILITY FOR INTELLECTUAL PROPERTY INFRINGEMENT. This indemnity does not apply to infringements that would not be such, except for customer-supplied elements. 6. SUPPORT AND MAINTENANCE Altova offers multiple optional “Support & Maintenance Package(s)” (“SMP”)” for the purposes of this paragraph a), and Altova, (MO-FR, 8am UTC – 10pm UTC, Austrian and US holidays excluded) and to make reasonable efforts to provide work-arounds to errors reported in the Software. © 2008 Altova GmbH Altova SemanticWorks 2008 126 License Information Altova End User License Agreement Altova under this Software License Agreement. Altova’s obligations under this Section 6 are contingent upon your proper use of the Software and your compliance with the terms and conditions of this Software License Agreement at all times.. 7. SOFTWARE ACTIVATION, UPDATES AND LICENSE METERING License Metering. Altova has a built-in license metering module that helps you to (a) avoid any unintentional violation of this Software License Agreement. Altova may use your internal network for license metering between installed versions of the Software. Software Activation. Altova’s Software may use your internal network and between your computer and the Altova license server. You agree that Altova may use these measures and you agree to follow any applicable requirements. LiveUpdate. Altova provides a new LiveUpdate notification service to you, which is (c) free of charge. Altova may use your internal network and Internet connection for the purpose of transmitting license-related data to an Altova-operated LiveUpdate server to validate your license at appropriate intervals and determine if there is any update available for you. Use of Data. The terms and conditions of the Privacy Policy are set out in full at (d) and are incorporated by reference into this Software License Agreement. By your acceptance of the terms of this Software License Agreement or use of the Software, you authorize the collection, use and disclosure of information collected by Altova for the purposes provided for in this Software License Agreement and/or the Privacy Policy as Altova SemanticWorks 2008 © 2008 Altova GmbH License Information Altova End User License Agreement 127 revised from time to time. European users understand and consent to the processing of personal information in the United States for the purposes described herein. Altova has the right in its sole discretion to amend this provision of the Software License Agreement and/or Privacy Policy at any time. You are encouraged to review the terms of the Privacy Policy as posted on the Altova Web site from time to time. 8. TERM AND TERMINATION This Software License Agreement may be terminated (a) by your giving Altova written notice of termination; or (b) by Altova, at its option, giving you written notice of termination if you commit a breach of this Software License Agreement and fail to cure such breach within ten (10) days after notice from Altova or (c) at the request of an authorized Altova reseller in the event that you fail to make your license payment or other monies due and payable.. 
In addition the Software License Agreement governing your use of a previous version that you have upgraded or updated of the Software is terminated upon your acceptance of the terms and conditions of the Software License Agreement accompanying such upgrade or update. Upon any termination of the Software License Agreement, you must cease all use of the Software that it governs, destroy all copies then in your possession or control and take such other actions as Altova may reasonably request to ensure that no copies of the Software remain in your possession or control. The terms and conditions set forth in Sections 1(g), (h), (i), 2, 5(b), (c), 9, 10 and 11. §12.211 and 12.212) or DFARS 227. 7202 (48 C.F.R. § Altova GmbH, Rudolfsplatz, 13a/9, A-1010 Vienna, Austria/EU. You may not use or otherwise export or re-export the Software or Documentation except as authorized by United States law and the laws of the jurisdiction in which the Software was obtained. In particular, but without limitation, the Software or Documentation may not be exported or re-exported . THIRD PARTY SOFTWARE The Software may contain third party software which requires notices and/or additional terms and conditions. Such required third party software notices and/or additional terms and conditions are located Our Website at and are made a part of and incorporated by reference into this Agreement. By accepting this Agreement, you are also accepting the additional terms and conditions, if any, set forth therein. 11., © 2008 Altova GmbH Altova SemanticWorks 2008 128 License Information Altova End User License Agreement Altova Altova’s Web site for Altova and the address shown in Altova’s’s: 2006-09-05 Altova SemanticWorks 2008 © 2008 Altova GmbH Index Index 129 Consistency check, of ontologies, 92, 94 Copy, description of command, 91 Copyright information, 116 Creating ontology classes, 34, 55 Customizing SemanticWorks, 97 . 
.nt files, exporting to, 89 A AllDifferent, creating collections of, 50 Anonymous classes, toggling display on and off, 92 B Base URI, 101, 110 and owl:imports, 106 setting for document, 106 Blank nodes, toggling display on and off, 92 Cut, description of command, 91 D Datatype properties, defining, 39, 59 Datatypes, declaring XML Schema namespace for, 32 Delete, description of command, 91 Deleting, classes and other ontology items, 34 Detail View, displaying graph in viewport, 19 configuring appearance, 99 description of, 14 Details Window, 17 in interface, 12 Distinct members, in allDifferent collections, 50 C Cardinality, entering min and max on restrictions, 110 Cardinality constraints, of properties, 39, 59 Checking the ontology, syntax and semantics, 34, 55 Classes, creating and deleting, 34 defining hierarchy, 55 defining relationships for, 36 enumerating instances for, 57 intersectionOf, 61 restrictions of, 61 unionOf, 61 Comments, 92 © 2008 Altova GmbH Distribution, of Altova's software products, 116, 117, 119 Documentation, overview, 6 Domain, of properties, 39, 59 Dublin Core, creating metadata with, 75 namespaces in RDF document, 72 using in RDF document, 72 Dublin Core vocabulary, 53 E Edit menu, description of commands in, 91, 92 130 Index Encoding, of SemanticWorks documents, 89 setting defaults, 99 End User License Agreement, 116, 120 Enumerations, of classes, 57 Errors Window, description of, 20 in interface, 12 Evaluation period, of Altova's software products, 116, 117, 119 Exporting files, to Triples and XML formats, 89 I Images, saving Detail View graph as, 89 Importing namespaces, for RDF documents, 101 Imports, reloading, 94 Instances, creating, 45, 57 creating in Details Window, 17 defining predicates for, 45, 57 F Features, of SemanticWorks, 9 Interface, broad description of, 12 IntersectionOf, classes, 61 Introduction, 8 File extensions, for saving and exporting, 89 File menu, description of commands in, 89 Find, description of command, 91 Fonts, configuring for Text View and RDF/OWL View, 99 Functional properties, defining, 39 L Language level, selecting, 29 Legal information, 116 Level of ontology language, selecting, 29 License, 120 information about, 116 G GUI, broad description of, 12 H Help menu, description of, 108 License metering, in Altova products, 118 M Main Window, in interface, 12, 14 Menu bar, in interface, 12 N Namespace imports, © 2008 Altova GmbH Index Namespace imports, and owl:imports, 101 for RDF documents, 101 Namespaces, and color assignments, 103 how to declare, 32 importing from ontology into RDF document, 67, 72 O Object properties, defining, 39, 59 Objects, in RDF statements, 69, 75 Onscreen help, 108 Ontologies, for RDF document creation, 67 general creation and editing mechanism, 22 Ontology, creating new, 29 language level, 29 Ontology level, OWL DL tutorial, 53 Opening files in SemanticWorks, 89 Options dialog, description, 99 Overview, of categories in Main Window, 14 Overview Window, description of, 19 in interface, 12 OWL DL, checking syntax and semantics, 94 OWL DL ontology, tutorial, 52 OWL Lite, checking syntax and semantics, 94 OWL Lite ontology, tutorial, 28 owl:imports, 101, 106, 110 131 description of command, 91 Predicates, declaring in RDF statements, 69, 75 for instances, 45, 57 Prefixes, declaring for namespaces, 32 Printing, of documents, 89 Product features, of SemanticWorks, 9 Properties, cardinality constraints, 39, 59 declaring in RDF documents, 69, 75 defining, 39, 59 domain and range of, 39, 59 Property 
restrictions, entering cardinality requirements, 110 R Range, of properties, 39, 59 RDF document, and Dublin Core vocabulary, 72 RDF document creation, from OWL DL ontology, 67 tutorial, 66 RDF documents, creating new, 67 general creation and editing mechanism, 22 importing namespaces from ontologies, 101 referencing an ontology, 67, 72 RDF resources, creating, 69, 75 RDF statements, making, 69, 75 RDF/OWL level, selecting, 94 RDF/OWL menu, description of commands in, 94 RDF/OWL View, description of, 14 P Redo, description of command, 91 Paste, Replace, description of command, 91 © 2008 Altova GmbH 132 Index Resources, for RDF document from ontology, 67, 72 from different namespaces, 103 Restrictions, of classes, 61 on properties, 110 S Saving files in SemanticWorks, 89 Semantics check, and Errors Window, 20 of ontology, 34, 55 SemanticWorks, product features, 9 product information, 108 user manual, 3 Software product license, 120 U Undo, description of command, 91 UnionOf, classes, 61 URIref, declaring prefixes of, 32 URIref prefixes, declaring namespaces for, 105 expanding, 105 Usage, overview, 22 Usage issues, listing of, 110 User manual, overview, 6 Status Bar, display of, 92 Support resources, 108 Syntax check, and Errors Window, 20 of ontology, 34, 55 T Text View, description of, 14, 32 Toolbars, in interface, 12 setting display of, 92 setting options for, 92 Tools menu, description of commands in, 96 Triples files, exporting to, 89 Tutorial, OWL DL ontology, 52 OWL Lite ontology, 28 RDF document creation, 66 V Valid, result of semantics check, 94 W Well-formed, result of syntax check, 94 Window menu, description of commands in, 107 Windows, moving in interface, 12 Windows in GUI, configuring display of, 107 X XML files, exporting to, 89 XML Schema, declaring namespace for datatypes, 32 © 2008 Altova GmbH Index xml:base, use of, 32 © 2008 Altova GmbH 133
https://manualzz.com/doc/48873973/altova-semanticworks-2008
CC-MAIN-2018-51
refinedweb
27,225
52.29
In this tutorial, let's take a look at how raw sockets can be used to receive data packets and send those packets to specific user applications, bypassing the normal TCP/IP protocols. If you have no knowledge of the Linux kernel, yet are interested in the contents of network packets, raw sockets are the answer. A raw socket is used to receive raw packets. This means packets received at the Ethernet layer will pass directly to the raw socket. Stated precisely, a raw socket bypasses the normal TCP/IP processing and sends the packets to the specific user application (see Figure 1). A raw socket vs other sockets Other sockets, like stream sockets and datagram sockets, receive data from the transport layer that contains no headers but only the payload. This means that there is no information about the source IP address and MAC address. If applications running on the same machine or on different machines are communicating, they are only exchanging data. The purpose of a raw socket is absolutely different. A raw socket allows an application to directly access lower-level protocols, which means a raw socket receives un-extracted packets (see Figure 2). There is no need to provide the port and IP address to a raw socket, unlike in the case of stream and datagram sockets. Network packets and packet sniffers When an application sends data into the network, it is processed by various network layers. Before sending, the data is wrapped in the headers of the various network layers. The wrapped form of the data, which contains all the information like the source and destination addresses, is called a network packet (see Figure 3). According to Ethernet protocols, there are various types of network packets, like Internet Protocol packets, Xerox PUP packets, Ethernet Loopback packets, etc. In Linux, we can see all of these protocols in the if_ether.h header file (see Figure 4). When we connect to the Internet, we receive network packets, and our machine extracts all the network layer headers and sends the data to a particular application. For example, when we open Google in our browser, we receive packets sent from Google, and our machine extracts all the network layer headers and gives the data to our browser. By default, a machine receives only those packets whose destination address matches that of the machine, and this mode is called the non-promiscuous mode. But if we want to receive all packets, we have to switch into promiscuous mode. We can switch into promiscuous mode with the help of ioctls. If we are interested in the contents or the structure of the headers of the different network layers, we can access these with the help of a packet sniffer. There are various packet sniffers available for Linux, like Wireshark. There is also a command line sniffer called tcpdump, which is a very good packet sniffer. And if we want to make our own packet sniffer, it can easily be done if we know the basics of C and networking. A packet sniffer with a raw socket To develop a packet sniffer, you first have to open a raw socket. Only processes with an effective user ID of 0 or the CAP_NET_RAW capability are allowed to open raw sockets, so during the execution of the program, you have to be the root user. Opening a raw socket To open a socket, you have to know three things: the socket family, the socket type and the protocol. For a raw socket, the socket family is AF_PACKET, the socket type is SOCK_RAW and for the protocol, see the if_ether.h header file.
To receive all packets, the macro is ETH_P_ALL and to receive IP packets, the macro is ETH_P_IP for the protocol field. int sock_r; sock_r=socket(AF_PACKET,SOCK_RAW,htons(ETH_P_ALL)); if(sock_r<0) { printf(error in socket\n); return -1; } Reception of the network packet After successfully opening a raw socket, its time to receive network packets, for which you need to use the recvfrom api. We can also use the recv api. But recvfrom provides additional information. unsigned char *buffer = (unsigned char *) malloc(65536); //to receive data memset(buffer,0,65536); struct sockaddr saddr; int saddr_len = sizeof (saddr); //Receive a network packet and copy in to buffer buflen=recvfrom(sock_r,buffer,65536,0,&saddr,(socklen_t *)&saddr_len); if(buflen<0) { printf(error in reading recvfrom function\n); return -1; } In saddr, the underlying protocol provides the source address of the packet. Extracting the Ethernet header Now that we have the network packets in our buffer, we will get information about the Ethernet header. The Ethernet header contains the physical address of the source and destination, or the MAC address and protocol of the receiving packet. The if_ether.h header contains the structure of the Ethernet header (see Figure 5). Now, we can easily access these fields: struct ethhdr *eth = (struct ethhdr *)(buffer); printf(\nEthernet Header\n); printf(\t|-Source Address : %.2X-%.2X-%.2X-%.2X-%.2X-%.2X\n,eth->h_source[0],eth->h_source[1],eth->h_source[2],eth->h_source[3],eth->h_source[4],eth->h_source[5]); printf(\t|-Destination Address : %.2X-%.2X-%.2X-%.2X-%.2X-%.2X\n,eth->h_dest[0],eth->h_dest[1],eth->h_dest[2],eth->h_dest[3],eth->h_dest[4],eth->h_dest[5]); printf(\t|-Protocol : %d\n,eth->h_proto); h_proto gives information about the next layer. If you get 0x800 (ETH_P_IP), it means that the next header is the IP header. Later, we will consider the next header as the IP header. Note 1: The physical address is 6 bytes. Note 2: We can also direct the output to a file for better understanding. fprintf(log_txt,\t|-Source Address : %.2X-%.2X-%.2X-%.2X-%.2X-%.2X\n,eth->h_source[0],eth->h_source[1],eth->h_source[2],eth->h_source[3],eth->h_source[4],eth->h_source[5]); Use fflush to avoid the input-output buffer problem when writing into a file. Extracting the IP header The IP layer gives various pieces of information like the source and destination IP address, the transport layer protocol, etc. The structure of the IP header is defined in the ip.h header file (see Figure 6). 
Now, to get this information, you need to increment your buffer pointer by the size of the Ethernet header because the IP header comes after the Ethernet header: unsigned short iphdrlen; struct iphdr *ip = (struct iphdr*)(buffer + sizeof(struct ethhdr)); memset(&source, 0, sizeof(source)); source.sin_addr.s_addr = ip->saddr; memset(&dest, 0, sizeof(dest)); dest.sin_addr.s_addr = ip->daddr; fprintf(log_txt, \t|-Version : %d\n,(unsigned int)ip->version); fprintf(log_txt , \t|-Internet Header Length : %d DWORDS or %d Bytes\n,(unsigned int)ip->ihl,((unsigned int)(ip->ihl))*4); fprintf(log_txt , \t|-Type Of Service : %d\n,(unsigned int)ip->tos); fprintf(log_txt , \t|-Total Length : %d Bytes\n,ntohs(ip->tot_len)); fprintf(log_txt , \t|-Identification : %d\n,ntohs(ip->id)); fprintf(log_txt , \t|-Time To Live : %d\n,(unsigned int)ip->ttl); fprintf(log_txt , \t|-Protocol : %d\n,(unsigned int)ip->protocol); fprintf(log_txt , \t|-Header Checksum : %d\n,ntohs(ip->check)); fprintf(log_txt , \t|-Source IP : %s\n, inet_ntoa(source.sin_addr)); fprintf(log_txt , \t|-Destination IP : %s\n,inet_ntoa(dest.sin_addr)); The transport layer header There are various transport layer protocols. Since the underlying header was the IP header, we have various IP or Internet protocols. You can see these protocols in the /etc/protocls file. The TCP and UDP protocol structures are defined in tcp.h and udp.h respectively. These structures provide the port number of the source and destination. With the help of the port number, the system gives data to a particular application (see Figures 7 and 8). The size of the IP header varies from 20 bytes to 60 bytes. We can calculate this from the IP header field or IHL. IHL means Internet Header Length (IHL), which is the number of 32-bit words in the header. So we have to multiply the IHL by 4 to get the size of the header in bytes: struct iphdr *ip = (struct iphdr *)( buffer + sizeof(struct ethhdr) ); /* getting actual size of IP header*/ iphdrlen = ip->ihl*4; /* getting pointer to udp header*/ struct tcphdr *udp=(struct udphdr*)(buffer + iphdrlen + sizeof(struct ethhdr)); We now have the pointer to the UDP header. So lets check some of its fields. Note: If your machine is little endian, you have to use ntohs because the network uses the big endian scheme. fprintf(log_txt , \t|-Source Port : %d\n , ntohs(udp->source)); fprintf(log_txt , \t|-Destination Port : %d\n , ntohs(udp->dest)); fprintf(log_txt , \t|-UDP Length : %d\n , ntohs(udp->len)); fprintf(log_txt , \t|-UDP Checksum : %d\n , ntohs(udp->check)); Similarly, we can access the TCP header field. Extracting data After the transport layer header, there is data payload remaining. For this, we will move the pointer to the data, and then print. unsigned char * data = (buffer + iphdrlen + sizeof(struct ethhdr) + sizeof(struct udphdr)); Now, lets print data, and for better representation, let us print 16 bytes in a line. int remaining_data = buflen - (iphdrlen + sizeof(struct ethhdr) + sizeof(struct udphdr)); for(i=0;i<remaining_data;i++) { if(i!=0 && i%16==0) fprintf(log_txt,\n); fprintf(log_txt, %.2X ,data[i]); } When you receive a packet, it will look like whats shown is Figures 9 and 10. Sending packets with a raw socket To send a packet, we first have to know the source and destination IP addresses as well as the MAC address. Use your friends MAC & IP address as the destination IP and MAC address. There are two ways to find out your IP address and MAC address: 1. 
Enter ifconfig and get the IP and MAC for a particular interface. 2. Enter ioctl and get the IP and MAC. The second way is more efficient and will make your program machine-independent, which means you should not enter ifconfig in each machine. Opening a raw socket To open a raw socket, you have to know three fields of socket API — Family- AF_PACKET, Type- SOCK_RAW and for the protocol, lets use IPPROTO_RAW because we are trying to send an IP packet. IPPROTO_RAW macro is defined in the in.h header file: sock_raw=socket(AF_PACKET,SOCK_RAW,IPPROTO_RAW); if(sock_raw == -1) printf(error in socket); What is struct ifreq? Linux supports some standard ioctls to configure network devices. They can be used on any sockets file descriptor, regardless of the family or type. They pass an ifreq structure, which means that if you want to know some information about the network, like the interface index or interface name, you can use ioctl and it will fill the value of the ifreq structure passed as a third argument. In short, the ifreq structure is a way to get and set the network configuration. It is defined in the if.h header file or you can check the man page of netdevice (see Figure 11). Getting the index of the interface to send a packet There may be various interfaces in your machine like loopback, wired interface and wireless interface. So you have to decide the interface through which we can send our packet. After deciding on the interface, you have to get the index of that interface. For this, first give the name of the interface by setting the field ifr_name of ifreq structure, and then use ioctl. Then use the SIOCGIFINDEX macro defined in sockios.h and you will receive the index number in the ifreq structure: struct ifreq ifreq_i; memset(&ifreq_i,0,sizeof(ifreq_i)); strncpy(ifreq_i.ifr_name,wlan0,IFNAMSIZ-1); //giving name of Interface if((ioctl(sock_raw,SIOCGIFINDEX,&ifreq_i))<0) printf(error in index ioctl reading);//getting Index Name printf(index=%d\n,ifreq_i.ifr_ifindex); Getting the MAC address of the interface Similarly, you can get the MAC address of the interface, for which you need to use the SIOCGIFHWADDR macro to ioctl: struct ifreq ifreq_c; memset(&ifreq_c,0,sizeof(ifreq_c)); strncpy(ifreq_c.ifr_name,wlan0,IFNAMSIZ-1);//giving name of Interface if((ioctl(sock_raw,SIOCGIFHWADDR,&ifreq_c))<0) //getting MAC Address printf(error in SIOCGIFHWADDR ioctl reading); Getting the IP address of the interface For this, use the SIOCGIFADDR macro: struct ifreq ifreq_ip; memset(&ifreq_ip,0,sizeof(ifreq_ip)); strncpy(ifreq_ip.ifr_name,wlan0,IFNAMSIZ-1);//giving name of Interface if(ioctl(sock_raw,SIOCGIFADDR,&ifreq_ip)<0) //getting IP Address { printf(error in SIOCGIFADDR \n); } Constructing the Ethernet header After getting the index, as well as the MAC and IP addresses of an interface, its time to construct the Ethernet header. First, take a buffer in which you will place all information like the Ethernet header, IP header, UDP header and data. That buffer will be your packet. 
sendbuff=(unsigned char*)malloc(64); // increase in case of more data memset(sendbuff,0,64); To construct the Ethernet header, fill all the fields of the ethhdr structure: struct ethhdr *eth = (struct ethhdr *)(sendbuff); eth->h_source[0] = (unsigned char)(ifreq_c.ifr_hwaddr.sa_data[0]); eth->h_source[1] = (unsigned char)(ifreq_c.ifr_hwaddr.sa_data[1]); eth->h_source[2] = (unsigned char)(ifreq_c.ifr_hwaddr.sa_data[2]); eth->h_source[3] = (unsigned char)(ifreq_c.ifr_hwaddr.sa_data[3]); eth->h_source[4] = (unsigned char)(ifreq_c.ifr_hwaddr.sa_data[4]); eth->h_source[5] = (unsigned char)(ifreq_c.ifr_hwaddr.sa_data[5]); /* filling destination mac. DESTMAC0 to DESTMAC5 are macro having octets of mac address. */ eth->h_dest[0] = DESTMAC0; eth->h_dest[1] = DESTMAC1; eth->h_dest[2] = DESTMAC2; eth->h_dest[3] = DESTMAC3; eth->h_dest[4] = DESTMAC4; eth->h_dest[5] = DESTMAC5; eth->h_proto = htons(ETH_P_IP); //means next header will be IP header /* end of ethernet header */ total_len+=sizeof(struct ethhdr); Constructing the IP header To construct the IP header, increment sendbuff by the size of the Ethernet header and fill each field of the iphdr structure. Data after the IP header is called the payload for the IP header and, in the same way, data after the Ethernet header is called the payload for the Ethernet header. In the IP header, there is a field called Total Length, which contains the size of the IP header plus the payload. To know the size of the payload of the IP header, you must know the size of the UDP header and the UDP payload. So, some field of the iphdr structure will get the value after filling the UDP header field. struct iphdr *iph = (struct iphdr*)(sendbuff + sizeof(struct ethhdr)); iph->ihl = 5; iph->version = 4; iph->tos = 16; iph->id = htons(10201); iph->ttl = 64; iph->protocol = 17; iph->saddr = inet_addr(inet_ntoa((((struct sockaddr_in *)&(ifreq_ip.ifr_addr))->sin_addr))); iph->daddr = inet_addr(destination_ip); // put destination IP address total_len += sizeof(struct iphdr); Construct the UDP header Constructing the UDP header is very similar to constructing the IP header. Assign values to the fields of the udphdr structure. For this, increment the sendbuff pointer by the size of the Ethernet and the IP headers. struct udphdr *uh = (struct udphdr *)(sendbuff + sizeof(struct iphdr) + sizeof(struct ethhdr)); uh->source = htons(23451); uh->dest = htons(23452); uh->check = 0; total_len+= sizeof(struct udphdr); Like the IP header, the UDP also has the field len, which contains the size of the UDP header and its payload. So, first, you have to know the UDP payload, which is the actual data that will be sent. Adding data or the UDP payload We can send any data: sendbuff[total_len++] = 0xAA; sendbuff[total_len++] = 0xBB; sendbuff[total_len++] = 0xCC; sendbuff[total_len++] = 0xDD; sendbuff[total_len++] = 0xEE; Filling the remaining fields of the IP and UDP headers We now have the total_len pointer and with the help of this, we can fill the remaining fields of the IP and UDP headers: uh->len = htons((total_len - sizeof(struct iphdr) - sizeof(struct ethhdr))); //UDP length field iph->tot_len = htons(total_len - sizeof(struct ethhdr)); //IP length field The IP header checksum There is one more field remaining in the IP header check, which is used to have a checksum. A checksum is used for error checking of the header. 
When the packet arrives at the router, it calculates the checksum, and if the calculated checksum does not match with the checksum field of the header, the router will drop the packet; and if it matches, the router will decrement the time to the live field by one, and forward it. To calculate the checksum, sum up all the 16-bit words of the IP header and if there is any carry, add it again to get a 16-bit word. After this, find the complement of 1s and that is our checksum. To check whether our checksum is correct, use the above algorithm. unsigned short checksum(unsigned short* buff, int _16bitword) { unsigned long sum; for(sum=0;_16bitword>0;_16bitword--) sum+=htons(*(buff)++); sum = ((sum >> 16) + (sum & 0xFFFF)); sum += (sum>>16); return (unsigned short)(~sum); } iph->check = checksum((unsigned short*)(sendbuff + sizeof(struct ethhdr)), (sizeof(struct iphdr)/2)); Sending the packet Now we have our packet but before sending it, lets fill the sockaddr_ll structure with the destination MAC address: struct sockaddr_ll sadr_ll; sadr_ll.sll_ifindex = ifreq_i.ifr_ifindex; // index of interface sadr_ll.sll_halen = ETH_ALEN; // length of destination mac address sadr_ll.sll_addr[0] = DESTMAC0; sadr_ll.sll_addr[1] = DESTMAC1; sadr_ll.sll_addr[2] = DESTMAC2; sadr_ll.sll_addr[3] = DESTMAC3; sadr_ll.sll_addr[4] = DESTMAC4; sadr_ll.sll_addr[5] = DESTMAC5; And now its time to send it, for which lets use the sendto api: send_len = sendto(sock_raw,sendbuff,64,0,(const struct sockaddr*)&sadr_ll,sizeof(struct sockaddr_ll)); if(send_len<0) { printf(error in sending....sendlen=%d....errno=%d\n,send_len,errno); return -1; } How to run the program Go to root user, then compile and run your program in a machine. And in another machine, or in your destination machine, run the packet sniffer program as the root user and analyse the data that you are sending. What to do next We made a packet sniffer as well as a packet sender, but this is a user space task. Now lets try the same things in kernel space. For this, try to understand struct sk_buff and make a module that can perform the same things in kernel space. You can get the complete code used in this article from
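The promiscuous mode mentioned earlier can also be enabled with an ioctl, although that code is not shown above. The following is only a rough sketch of one common approach (the function name and the interface name passed to it are placeholders, not part of the article's program): read the current interface flags with SIOCGIFFLAGS, set the IFF_PROMISC bit and write the flags back with SIOCSIFFLAGS.
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <net/if.h>

/* Illustrative sketch: switch an interface into promiscuous mode.
   Call it with the raw socket descriptor and an interface name such as "wlan0". */
int enable_promiscuous(int sock_fd, const char *ifname)
{
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

    /* read the current flags of the interface */
    if (ioctl(sock_fd, SIOCGIFFLAGS, &ifr) < 0) {
        perror("SIOCGIFFLAGS");
        return -1;
    }

    ifr.ifr_flags |= IFF_PROMISC;    /* add the promiscuous flag */

    /* write the modified flags back */
    if (ioctl(sock_fd, SIOCSIFFLAGS, &ifr) < 0) {
        perror("SIOCSIFFLAGS");
        return -1;
    }
    return 0;
}
Remember to clear the IFF_PROMISC flag again (with the same pair of ioctls) before the sniffer exits, otherwise the interface stays in promiscuous mode.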
http://opensourceforu.com/2015/03/a-guide-to-using-raw-sockets/
CC-MAIN-2016-50
refinedweb
3,014
53.1
Your answer is one click away! well i cant find how do this, basically its a variable union with params, basic idea, (writed as function) Ex1 union Some (int le) { int i[le]; float f[le]; }; Ex2 union Some { int le; int i[le]; float f[le]; }; obs this don't works D: maybe a way to use an internal variable to set the lenght but don't works too. Thx. No, this is not possible: le would need to be known at compile-time. One solution would be to use a templated union: template <int N> union Some { int i[N]; float f[N]; }; N, of course, is compile-time evaluable. Another solution is the arguably more succinct typedef std::vector<std::pair<int, float>> Some; or a similar solution based on std::array. Depending on your use case you could try to simulate a union. struct Some { //Order is important private: char* pData; public: int* const i; float* const f; public: Some(size_t len) :pData(new char[sizeof(int) < sizeof(float) ? sizeof(float) : sizeof(int)]) ,i ((int*)pData) ,f ((float*)pData) { } ~Some() { delete[] pData; } Some(const Some&) = delete; Some& operator=(const Some&) = delete; }; Alternative solution using templates, unique_ptr and explicit casts: //max_size_of<>: a recursive template struct to evaluate the // maximum value of the sizeof function of all types passed as // parameter //The recursion is done by using the "value" of another // specialization of max_size_of<> with less parameter types template <typename T, typename...Args> struct max_size_of { static const std::size_t value = std::max(sizeof(T), max_size_of<Args...>::value); }; //Specialication for max_size_of<> as recursion stop template <typename T> struct max_size_of<T> { static const std::size_t value = sizeof(T); }; //dataptr_auto_cast<>: a recursive template struct that // introduces a virtual function "char* const data_ptr()" // and an additional explicit cast operator for a pointer // of the first type. Due to the recursion a cast operator // for every type passed to the struct is created. //Attention: types are not allowed to be duplicate //The recursion is done by inheriting from of another // specialization of dataptr_auto_cast<> with less parameter types template <typename T, typename...Args> struct dataptr_auto_cast : public dataptr_auto_cast<Args...> { virtual char* const data_ptr() const = 0; //This is needed by the cast operator explicit operator T* const() const { return (T*)data_ptr(); } //make it explicit to avoid unwanted side e C++ requires that the size of a type be known at compile time. The size of a block of data need not be known, but all types have known sizes. There are three ways around it. I'll ignore the union part for now. Imagine if you wanted: struct some (int how_many) { int data[how_many]; }; as the union part adds complexity which can be dealt with separately. First, instead of storing the data as part of the type, you can store pointers/references/etc to the data. struct some { std::vector<int> data; explicit some( size_t how_many ):data(how_many) {}; some( some&& ) = default; some& operator=( some&& ) = default; some( some const& ) = default; some& operator=( some const& ) = default; some() = default; ~some() = default; }; here we store the data in a std::vector -- a dynamic array. We default copy/move/construct/destruct operations (explicitly -- because it makes it clearer), and the right thing happens. 
Instead of a vector we can use a unique_ptr: struct some { std::unique_ptr<int[]> data; explicit some( size_t how_many ):data(new int[how_many]) {}; some( some&& ) = default; some& operator=( some&& ) = default; some() = default; ~some() = default; }; this blocks copying of the structure, but the structure goes from being size of 3 pointers to being size of 1 in a typical std implementation. We lose the ability to easily resize after the fact, and copy without writing the code ourselves. The next approach is to template it. template<std::size_t N> struct some { int data[N]; }; this, however, requires that the size of the structure be known at compile-time, and some<2> and some<3> are 'unrelated types' (barr I would like to suggest a different approach: Instead of tying the number of elements to the union, tie it outside: union Some { int i; float f; }; Some *get_Some(int le) { return new Some[le]; } Don't forget to delete[] the return value of get_Some... Or use smart pointers: std::unique_ptr<Some[]> get_Some(int le) { return std::make_unique<Some[]>(le); } You can even create a Some_Manager: struct Some_Manager { union Some { int i; float f; }; Some_Manager(int le) : m_le{le}, m_some{std::make_unique<Some[]>(le)} {} // ... getters and setters... int count() const { return m_le; } Some &operator[](int le) { return m_some[le]; } private: int m_le{}; std::unique_ptr<Some[]> m_some; }; Take a look at the Live example. It's not possible to declare a structure with dynamic sizes as you are trying to do, the size must be specified at run time or you will have to use higher-level abstractions to manage a dynamic pool of memory at run time. Also, in your second example, you include le in the union. If what you were trying to do were possible, it would cause le to overlap with the first value of i and f. As was mentioned before, you could do this with templating if the size is known at compile time: #include <cstdlib> template<size_t Sz> union U { int i[Sz]; float f[Sz]; }; int main() { U<30> u; u.i[0] = 0; u.f[1] = 1.0; } If you want dynamic size, you're beginning to reach the realm where it would be better to use something like std::vector. #include <vector> #include <iostream> union U { int i; float f; }; int main() { std::vector<U> vec; vec.resize(32); vec[0].i = 0; vec[1].f = 42.0; // But there is no way to tell whether a given element is // supposed to be an int or a float: // vec[1] was populated via the 'f' option of the union: std::cout << "vec[1].i = " << vec[1].i << '\n'; }
http://www.devsplanet.com/question/35271453
CC-MAIN-2017-39
refinedweb
976
56.08
Easier Immutable Objects in C# and VB A common pain point in .NET programming is the amount of boilerplate code necessary to implement immutable objects. Unlike a normal class, an immutable class requires that each property have an explicitly defined backing store. And of course a constructor is needed to tie everything together. Under a new draft specification, C# and VB will be adding what they are calling a “record class”. This is essentially an immutable class defined solely by its constructor. Here is an example from the specification:
public record class Cartesian(double x: X, double y: Y);
In addition to the constructor, the compiler will automatically create:
- A read-only property for each parameter
- An Equals function
- A GetHashCode override
- A ToString override
- An “is” operator, known as “Matches” in VB
The “is/Matches” operator is used in pattern matching, which we will cover in tomorrow's article. Aside from that, record classes are a lot like C# anonymous types. (VB anonymous types differ in that they are mutable by default.) Microsoft is looking into ways to reconcile the two concepts, especially given the current limitation about not exposing anonymous types beyond their current assembly. A common feature of immutable types is the ability to create copies of the object with one or more fields updated. Though not in the specification yet, here is one option they are considering for C#:
var x1 = new MyRecord(1, 2, 3);
var x2 = x1 with B: 16;
Console.WriteLine(x2); // prints something like "A = 1, B = 16, C = 3"
Extending Record Classes In the Cartesian example class, you may have noticed that it ended with a semi-colon. This is to indicate that the class has no body other than what the compiler provides. Instead of the semi-colon you can provide a set of braces like you would for a normal class. You would still get the same compiler-generated code, but have the ability to add additional properties and methods as you see fit. Other Limitations For the time being only record classes are supported. In theory, record structs could be added using the same basic syntax and concepts. Library Concerns A serious limitation of immutable types in .NET is the lack of library support. Imagine that you are a middle-tier developer. Your normal day-to-day tasks probably involve asking an ORM for some objects out of the database, which you then serialize as SOAP-XML or JSON for a one-way or round-trip to the client. Currently most ORMs and serializers don't have support for immutable types. Instead, they assume there will be a parameterless constructor and mutable properties. If this issue isn't resolved in the more popular frameworks, record classes will be of little use in most projects. For more information, see the draft specification Pattern Matching for C#. A prototype should be available in a few weeks. Correction: This report erroneously stated that this feature would be part of C# 6 and VB 12.
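To put the proposal in perspective, the hand-written equivalent of that one-line Cartesian record looks roughly like the class below. This is only an illustrative sketch of today's boilerplate, not code taken from the draft specification, and the Equals/GetHashCode bodies are deliberately simplistic.
// What the one-line record class is meant to replace (illustrative sketch).
public sealed class Cartesian
{
    public double X { get; private set; }
    public double Y { get; private set; }

    public Cartesian(double x, double y)
    {
        X = x;
        Y = y;
    }

    public override bool Equals(object obj)
    {
        var other = obj as Cartesian;
        return other != null && X == other.X && Y == other.Y;
    }

    public override int GetHashCode()
    {
        // simplistic hash combination, good enough for a sketch
        return X.GetHashCode() ^ Y.GetHashCode();
    }

    public override string ToString()
    {
        return string.Format("Cartesian(X = {0}, Y = {1})", X, Y);
    }
}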
Not sure I like the "with" syntax though - seems to me like it would be confusing to look at. If they can find a way to use more traditional code, it might look something like: // instead of var x2 = x1 with B: 16; // maybe one of these var x2 = x1.Copy(new { B = 16 }); var x2 = x1.Copy(o => { o.B = 16; }); var x2 = Record.CreateCopyFrom(x1, B: 16); Time to take a look at F# by Arturo Hernandez I can't really list all the benefits here but, if you are reading this article chances are you should take a look at F#. Library concerns by Roger Alsing Since this is immutable types, you will not use them for "fat" entities, since they are not mutable. So the only thing you could use them for in an ORM, is projections, and the libs do support selecting projections. e.g. .Select(e => new MyRecord(e.Name,e.Age)); So, it will still be of good use if you need projections with known types instead of anonymous types. Looks great! by Ian Yates I've been able to teach JSON.Net how to handle these objects without too much fuss as well. When my code assistance tool (at the moment Telerik's JustCode because I get it with the GUI components subscription I already have with them) catches up with these new C# features I'll happily jump ship for new code and gradually shift over my old code. Re: Time to take a look at F# by Isaac Abraham Looks like C# is slowly but surely morphing into F# with a C# syntax by Phylos Zero Explicitly defined backing store? by Craig Wagner Perhaps I'm just being dense here, but I don't understand why today each property requires an explicit backing store. For instance: public class Point { public int X { get; private set; } public int Y { get; private set; } public Point( int x, int y ) { X = x; Y = y; } } Granted it's still more code than the example shown in the article, but this code does not declare explicit backing store variables yet once the object is created X and Y cannot be changed (other than using reflection of course). What am I missing? Re: Explicitly defined backing store? by Jonathan Allen
https://www.infoq.com/news/2014/08/Record-Class
CC-MAIN-2016-30
refinedweb
976
61.36
Greetings, this is my first time posting here; I've been a long-time lurker. I have a lab assignment which is:. Your application should only use "FOR" loops. Do not use any String or Array functions. This is the code I have generated so far. It builds successfully but does not work correctly and I can't figure out why. Any help would be greatly appreciated. Thanks for any help or suggestions!
#include "stdafx.h"
#include <iostream>
using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    char letter;
    int count = 0;
    double ppl = 0;
    double finalCost = ppl * (count - 1);
    cout << "\tWelcome to My Price Per Letter Calculator!\n";
    cout << " What is your next character? Type '*' to end : ";
    cin >> letter;
    for ( ; letter != '*'; ++count);
    {
        cout << " What is your next character? Type '*' to end : ";
        cin >> letter;
    }
    cout << " What is the price per letter to pay? ";
    cin >> ppl;
    cout << " You have " << (count - 1) << " letters at $" << ppl << " per letter, and your total cost is $" << finalCost << ".";
    system("pause");
    return 0;
}
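For what it's worth, the flow the assignment seems to call for is: count characters until the '*' sentinel, then ask for the price, then compute the total from both values. The snippet above computes finalCost before either value is known, and the stray semicolon after the for header gives the loop an empty body, so letter is never re-read inside it. A rough sketch of the intended flow, in Python only to illustrate the logic (the real solution still has to be C++ with for loops, and the variable names here are mine):
# Count letters until the '*' sentinel, then price them.
count = 0
letter = input(" What is your next character? Type '*' to end : ")
while letter != '*':          # the C++ version must express this with a for loop
    count += 1
    letter = input(" What is your next character? Type '*' to end : ")
ppl = float(input(" What is the price per letter to pay? "))
final_cost = ppl * count      # compute only after both inputs are known
print(f" You have {count} letters at ${ppl} per letter, total ${final_cost}.")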
https://www.daniweb.com/programming/software-development/threads/459952/lab-assignment-please-help
CC-MAIN-2017-43
refinedweb
165
76.82
3.7 Conditional Operator
Java includes a special ternary (three-way) operator that can replace certain types of if-then-else statements. This operator is the ?. Here is an example of the way that the ? is used:
max = number1 > number2 ? number1 : number2;
When Java evaluates this assignment expression, it first looks at the expression to the left of the question mark. If number1 is greater than number2, then the expression between the question mark and the colon is evaluated and used as the value of the entire ? expression. If number2 is greater, then the expression after the colon is evaluated and used for the value of the entire ? expression. The result produced by the ? operator is then assigned to max. Here is a program that demonstrates the ? operator. It uses it to obtain the absolute value of a variable.
import java.util.Scanner; // Needed for the Scanner class

/**
 * This program demonstrates the ? operator.
 */
public class TernaryDemo
{
    public static void main(String[] args)
    {
        int num; // holds value of integer

        // Create a Scanner object for keyboard input.
        Scanner console = new Scanner(System.in);

        // Get an integer.
        System.out.print("Enter integer : ");
        num = console.nextInt();

        // Get absolute value of num
        num = (num < 0) ? -num : num;
        System.out.println("Absolute value is " + num);
    }
}
Output 1:
Enter integer : -6
Absolute value is 6
Output 2:
Enter integer : 14
Absolute value is 14
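For comparison (an aside, not part of the original Java tutorial), Python offers an equivalent conditional expression; the operands simply come in a different order. The same absolute-value trick looks like this:
num = int(input("Enter integer : "))
num = -num if num < 0 else num   # value_if_true if condition else value_if_false
print("Absolute value is", num)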
http://www.beginwithjava.com/java/decisions/conditional-operator.html
CC-MAIN-2018-30
refinedweb
227
51.75
How to generate random number in Python
Python has its own module for generating random numbers: the random module. Using this module you can easily generate a random number in Python, but it can do more than that: it can also pick a random element from a list or shuffle items. In short, the random module is useful for most tasks involving randomness. In this tutorial, we will learn how to generate a random number in Python using the random module.
Generate random number in Python using random module
To generate a random number in Python we will use the randint() function. The random module has plenty of other functions available, but in this tutorial we will only use randint() to generate a random number in a certain range.
random.randint(a,b)
As you can see, this function has two parameters and both are required. a is the lowest value of the range and b is the highest. To give you a better understanding: if you wish to create a random number in Python which must be in the range of 5 to 15, you do it like this:
# generate a random number in a range of 5 and 15
import random
print(random.randint(5,15))
Output:
Anything from 5 to 15
By using this function you can generate a random number for a given range in Python.
Generate a random number which is divisible by n
If you wish to generate a random number which must be divisible by a particular number, then you can use this:
import random
print(random.randint(1,10)*5)
Output:
Always prints a number which is divisible by 5
The maximum value of the random number will be 10*5=50
The minimum value of the random number will be 1*5=5
So the basic code will be like this:
import random
print(random.randint(a,b)*n)
Here the random number will be divisible by n, and its range will be from a*n to b*n.
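As a side note that goes beyond the original post: the standard library can also produce a "divisible by n" number directly with random.randrange(start, stop, step), and random.seed() makes a run reproducible. The concrete ranges below are arbitrary examples:
import random

random.seed(42)                      # optional: make the result reproducible
print(random.randrange(5, 51, 5))    # multiples of 5 between 5 and 50 (stop is exclusive)
print(random.choice([3, 7, 11]))     # pick a random element from a list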
https://www.codespeedy.com/generate-random-number-in-python/
CC-MAIN-2020-50
refinedweb
412
58.01
03 August 2010 17:24 [Source: ICIS news] LONDON (ICIS)--Europen nylon 6,6 (polyamide 6,6) virgin polymer contracted volumes for September have fallen by 40% due to structural shortages, a nylon 6,6 buyer said on Tuesday. “6,6 orders for September are being delayed, orders are being cancelled, we’re only getting 60% of our contracted volumes,” the buyer said. A producer said that it had also heard rumours of cancellations at 40%, adding that all producers had long lead times and were struggling to fulfil orders. “We’ve heard that 40% of orders are being cancelled. It is [nylon 6,6 supply] very tight. We’re not aware of any production stoppages, but demand is holding up well,” the producer said. According to sources, so far up to 20% of orders were cancelled during the second and third quarters, due to limited availability. The reported increase in cancellations was attributed to stronger than expected demand during the summer months from the major downstream automotive sector. Traditionally, during July and August there is a slowdown in consumption of nylon 6,6. However, in 2010 a growth in purchases of large vehicles in the downstream automotive market, due to latent demand from the economic recession of 2008 and 2009, and Asian consumption was keeping demand at a high level. Most north European players said that they would not shut down during August this year. Sources in southern ?xml:namespace> “At the moment we have many orders. There’s been an increase in 6,6 imports in Nevertheless, some sources said that nylon 6,6 buying interest from Asia in the later half of August was low, signalling that Asian players expected automotive demand to be weak during the fourth quarter. They added that the weakening of the US dollar against the euro was further reducing the attractiveness of European material in Nylon 6,6 has been structurally short since the fourth quarter of 2009 following market consolidation during the global economic downturn of 2008 and 2009, and a force majeure (FM) at Rhodia Polyamide on nylon 6,6 and intermediaries. Rhodia polyamide has been in FM since the fourth quarter of 2009. This was initially caused by low water levels on the River Rhine, which led to logistical difficulties, but was now due to the low availability of feedstock adiponitrile. A company source confirmed that the FM remained in place, and there was as yet no indication of when it would be lifted. The nylon 6,6 market was expected to remain tight throughout 2010. It remained too early for players to discuss fourth quarter nylon 6,6 prices, due to a lack of transparency over fourth quarter demand, and uncertainty over the development of feedstock costs. Third quarter nylon 6,6 virgin polymer contracts settled at €2.60-2.75/kg ($3.42-3.62/kg) FD (free delivered) NWE (northwest ($1 = €0.76) For more on nylon
http://www.icis.com/Articles/2010/08/03/9381875/european-september-nylon-66-orders-fall-by-40-buyer.html
CC-MAIN-2014-42
refinedweb
491
59.53
In python, to replace an old string with a new string, we can use the string.replace(old, new) function. However, this function is case sensitive. In this tutorial, we will introduce a way to replace a string case-insensitively.
string.replace() is case sensitive
s=''
s = s.replace("Https", 'http')
print(s)
The result is:
From the result, we can see that string.replace() is case sensitive.
How to replace a string case-insensitively?
We can use a Python regular expression to do it. Here is an example:
import re

def replace(old, new, str, caseinsentive = False):
    if caseinsentive:
        return str.replace(old, new)
    else:
        return re.sub(re.escape(old), new, str, flags=re.IGNORECASE)
In this function, if caseinsentive = False, the function replaces the old string with the new string case-insensitively (despite the parameter's name, passing True means "do a plain, case-sensitive str.replace instead").
How to use it?
s=''
s = replace("Https", 'http', s)
print(s)
The result is:
From the result, we can see that our function works.
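A quick sanity check of both modes (the sample strings below are mine, not from the original post). Note that re.escape is what keeps characters such as '.' in the old string from being interpreted as regex syntax:
print(replace("Hello", "bye", "HELLO world"))          # case-insensitive -> "bye world"
print(replace("Hello", "bye", "HELLO world", True))    # plain str.replace  -> "HELLO world"
print(replace("a.b", "x", "ha.bit and hajbit"))        # only the literal "a.b" matches -> "hxit and hajbit"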
https://www.tutorialexample.com/python-replace-string-with-case-insensitive-for-beginners-python-tutorial/
CC-MAIN-2021-31
refinedweb
153
70.9
just spoke to the DEV ...the adapter TCP, UDP and SERIAL ir ready! Will open e new thread when i get the Documentation! just spoke to the DEV ...the adapter TCP, UDP and SERIAL ir ready! Will open e new thread when i get the Documentation! ioBroker ...domesticate the Internet of Things. is ** THE ** automation Controler, which can manage almost everything. over 500 different Sensors, actuators and scripts) f.e: MQTT, REST Api, Node-red, Telegram, Text to Command, Mysensors, Yamaha, Homatic, Simatic S7, DWD, JavaScript, jQuery Mobile, cul, MAX!, EnOcean, Homematic, Philips Hue, KNX, Dreambox, onkyo, sonos, XBMC, pushbullet, pushover, B-Control Energy Manager, PiFace, FRITZ!Box, Geofency and many more... HERE Video HowTo install on Windows ioBroker controlling Roomba VISUALIZATION incl. Widgets can be installed on WINDOWS, Raspi, Banana Pi etc.. See ioBroker wiki for more information ioBroker is an integration platform for the Internet of Things, focused on Building Automation, Smart Metering, Ambient Assisted Living, Process Automation, Visualization and Data Logging. It aims to be a possible replacement for software like f.e. fhem, OpenHAB or the thing system by the end of 2014. ioBroker will be the successor of CCU.IO, a project quite popular in the german HomeMatic community. ** Concept**. ** architecture** ** Databases** ioBroker uses Redis and CouchDB. Redis is an in-memory key-value data store and also a message broker with publish/subscribe pattern. It's used to maintain and publish all states of connected systems. CouchDB is used to store rarely changing and larger data, like metadata of systems and things, configurations or any additional files. **Adapters ** Systems are attached to ioBrokers databases via so called adapters, technically processes running anywhere in the network and connecting all kinds of systems to ioBrokers databases. A connection to ioBrokers databases can be implemented in nearly any programming language on nearly any platform and an adapter can run on any host that is able to reach the databases via ip networking. couple screenshots from the WEB interface ioBroker.web Hello colleagues, i did not forget you ... just a new video showing how to get Let's encrypt certificate in "seconds" without coding etc.. just pressing couple buttons. Its possible with ioBroker, look here: ioBroker + Let's Encrypt use HTTPS for AUTOMATION OF EVERYTHING - Pine64, Arduino, FHEM, Raspi – 07:12 — Jewgeni R and screenshots from EDIT page of iobroker.flot: and picture of the Page when you click "OPEN": or if you choose AREA you'll see: couple screenshots from the WEB interface ioBroker.web and here a video how to use MySensors with ioBroker to CONTROL and get the Information from EVERYWHERE: ioBroker.WEB -ioBroker.MOBILE Interface for AUTOMATION OF EVERYTHING - Arduino, FHEM, Homatic, Raspi – 27:41 — Jewgeni R [UPDATE] It runns PRODUCTIVE incl Heatpump, Solar, Wood stove etc.. I devidesd INO apart, so it is easyer to read... Included SERIAL1 , SERIAL2, SERIAL3 ... My scope was: "a" Node which i can connect what i want to and with a simple and smalest modifications, can run EVERYTHING! :-) it should be cheap! it should be easy so even me as noob can understand that 4.it should be a standard protocol, which can be used with all of teh adapters it should have updates in the future 6 can be used with node-red and ioBroker, because everything is integrated there 7 etc.. 
do's and dont's for me: do not mess with child ID's and PIN's ->Use something easy show to the server what you have (present) and show what it is!!!! Not just Child ID = 13 -> Send a discription to Server do it universal, so everybody can use it !!! Give it to the comunity to improve it !!! My steps: HERE THE USER SHOULD PUT HIS SENSORS IN THE TABLE create a table with pins, so if you change configuration you do not have to mess around with PIN's: you can put all you attached sensors with S-Types and S_Variables USER does not have to do enything!!! present everything (all connected or IF YOU WANT unconnected pins too). whole void presentation! so just easy and short "i think" Setup (just look in samples of MYsensors and you will see if you need it or not void setup() { //Serial1.begin(2400); //Serial2.begin(2400); for (int i = 0; i<iomodus_count; i++) //search All Sensor system Ports and set them in the RIGHT way { while (iomodus[i].sensorVersion == "MY_SystemPin") { i++; } //do not waste time to check MY_SystemPin's MyMessage msg(i, iomodus[i].variableType); //************************************************************************************* //************************************************************************************* //********************************Setup RELAY*************************************** if (iomodus[i].sensorType == S_BINARY) { #ifdef RELAY if (iomodus[i].sensorVersion == "RELAY") { pinMode(i, OUTPUT); // Set RELAY to last known state (using eeprom storage) digitalWrite(i, loadState(i) ? RELAY_ON : RELAY_OFF); send(msg.setSensor(i).set(loadState(i)==HIGH ? 1 : 0),false); } #endif // RELAY //********************************Setup BUTTON*************************************** #ifdef BUTTON if (iomodus[i].sensorVersion == "BUTTON") { pinMode(i, INPUT_PULLUP); // Set RELAY to last known state (using eeprom storage) digitalWrite(i, loadState(i) ? 1 : 0); send(msg.setSensor(i).set(loadState(i)==HIGH ? 1 : 0),false); } #endif // BUTTON } //********************************Setup 0-10V Output*************************************** #ifdef PWM if (iomodus[i].sensorType == S_DIMMER) { pinMode(i, OUTPUT); // Set RELAY to last known state (using eeprom storage) } .... or next example void ds18b20_temp(int i) { #ifdef DS18B20 if ((iomodus[i].sensorType == S_TEMP) && (iomodus[i].sensorVersion == "DS18B20")) { if (millis() > next_Time[i]) { next_Time[i] = next_Time[i] + 10000; //onewire updates every 10s (10s is MINIMUM) sensor[i].requestTemperatures(); // query conversion time and sleep until conversion completed // int16_t conversionTime = sensor[i].millisToWaitForConversion(sensor[i].getResolution()); // sleep() call can be replaced by wait() call if node need to process incoming messages (or if node is repeater) wait(1000); for (int j = 1; j <= dsSensor_count; j++) { if (dsSensor[j].dsonPinNr == i) { unsigned int dsPin=(iomodus[i].Int_to_HLP + j); float tempC = sensor[i].getTempC(dsSensor[j].dsAdress); if (tempC == -127.00) { /* send(msg.setType(77).set("Error getting temperature"), false); Serial.println("Error getting temperature on Pin"); Serial.print(i); Serial.println("MySensors Pin Nr"); Serial.print(dsPin); Serial.println("Probe Nr."); Serial.print(j); */ } else { MyMessage msg(i, iomodus[i].variableType); send(msg.setSensor(dsPin).set(tempC, 2), false); last_dsTemp[j] = tempC; /* Serial.println("C: "); Serial.print(tempC); Serial.println(" F: "); Serial.print(DallasTemperature::toFahrenheit(tempC)); */ } // dsPin=0; } } } } #endif } ! Done! 
Libraries here(modified 16.06.2016): 0_1466331530661_MySensors.zip All sketches: 0_1466331582101_PRODUKTIV_V1.zip hi... sorry did not know this would be nice if you could do that... so people can get started with the new stuff ! hi ... sorry we were trying new things with mysensors and iobroker... now you can use TELEGRAMM, APP @ ANDRID and MANY new Icons @ VIS... When do you think u can put iobroker on your HP? or do u need something?
https://forum.mysensors.org/user/maxtox
CC-MAIN-2021-04
refinedweb
1,108
50.73
alias setprompt 'set prompt="\\ `whoami`@`uname -n`: `pwd`\\ [\\!] "'
if ( $?prompt ) then
    # shell is interactive.
    alias cd 'cd \!*; setprompt'
endif
cd
If you're using bash instead of tcsh as your shell, the following line, entered as a command, has the same effect:
export PS1='\u@\H: \w\n[\!] '
Add this line to the .bashrc file in your home directory to have the prompt stay this way.
This would work... as long as you had a non-login session when you opened a shell in Terminal. If you've ever noticed though, the shells are a login session by default. You have to either a) put that in .bash_profile or b) create a .bash_profile that contains the following:
. ~/.bashrc
I recommend (b) because you can work with just .bashrc for all shell sessions, without having to worry about which type of session you're changing.
A lot more information on setting interesting and useful prompts in BASH can be found in this HOWTO. In particular, you can follow the guides there on displaying information in the titlebar of an Xterm and it'll work for setting the 'custom' part of a Terminal.app titlebar.
http://hints.macworld.com/article.php?story=20021002062208512
CC-MAIN-2016-30
refinedweb
197
77.23
DFS Report Error 3 Replies Oct 5, 2013 at 20:24 UTC Can anyone please save my sanity? I have just upgraded our 2003 domain to 2012 and have configured and installed an DFS namespace which is working fine. The problem I have is I would like to run the diagnostic replication reports from my Windows 8 workstation. Whenever I try this I get a “the health report cannot be generated. The system cannot find the file specified (exception from HRESULT:0x80070002) The report runs fine on any Server 2012 machine but will not run on any Windows 8 machine. I am a full admin in domain so access rights are not an issue and I can run the report on the server but surely I should be able to run this on the workstation through the DFS snapin? The location on the C:\drive where the reports are written do contain the XML files to generate the report so am at a loss why the system reports it cannot find the file? any help you could offer would be very much appreciated. Thanks Simon Oct 6, 2013 at 04:04 UTC If the domain was properly upgraded prior to moving everything to WinSvr 2012 env, your permissions should not be the case ... unless the Win8 machines you are running reports against are NOT on the same domain (i.e. no authentication exists). On another hand, I would expect Svr 2012 to use specific ports for pulling reports (much like SCCM does), for which you need to create Windows Firewall rules in both the server and workstation env's (this may require or simply be made easier to handle via GPO ... which I personally hate, but that is beside the point). Finally, certain applications (e.g. File and Printer Sharing, WMA) may need to be allowed (Public / Private / Domain) within your network(s) to effect proper communication. Hope this helps ... Oct 6, 2013 at 11:09 UTC Thanks for that, it’s really weird. All other functionality in Server Manager work fine, all mmc’s load and run fine. I can run my Wsus reports fine which uses dot net as does the dfs reporting mechanism. Not the end of the world running the reports on the server but it is annoying that after a full domain upgrade something as fundamental as a report fails to function. Thanks for the advice
https://community.spiceworks.com/topic/391317-question-about-microsoft-windows-server-2012
CC-MAIN-2018-47
refinedweb
401
69.52
This is the mail archive of the [email protected] mailing list for the binutils project.
No. No more reading the minds of programmers. :)
Well, for GP registers we do even as_bad().
And why change the table to include default extensions for the cpu?
To handle them the same way as the ISA. This is for ASEs which are always implemented in that particular CPU.
/* End of GCC-shared inference code. */
You need to make sure that this shared code is the same logic in both places - preferably before committing this.
Yes. Do you think the logic is ok (modulo the FP ABI warning)?
+#if 0 /* XXX FIXME */
+  /* 32 bit code with 64 bit FP registers. */
+  if (!file_mips_fp32 && ABI_NEEDS_32BIT_REGS (mips_abi))
+    elf_elfheader (stdoutput)->e_flags |= ???;
+#endif
}
???
Same as for MIPS3D, we should tell the linker this object is (possibly) incompatible with other O32 objects with 32bit FP regs.
http://sourceware.org/ml/binutils/2006-05/msg00386.html
CC-MAIN-2019-35
refinedweb
146
77.33
So this is just a simple question. In python when importing modules, what is the difference between this:
from module import a, b, c, d
and this:
from module import a
from module import b
from module import c
from module import d
There is no difference at all. They both function exactly the same. However, from a stylistic perspective, one might be preferable to the other. And on that note, PEP-8's section on imports says that you should compress from module import name1, name2 onto a single line and leave import module1 on multiple lines:
Yes:
import os
import sys
No:
import sys, os
Ok:
from subprocess import Popen, PIPE
In response to @teewuane's comment (repeated here in case the comment gets deleted):
@inspectorG4dget What if you have to import several functions from one module and it ends up making that line longer than 80 char? I know that the 80 char thing is "when it makes the code more readable" but I am still wondering if there is a more tidy way to do this. And I don't want to do from foo import * even though I am basically importing everything.
The issue here is that doing something like the following could exceed the 80 char limit:
from module import func1, func2, func3, func4, func5
To this, I have two responses (I don't see PEP8 being overly clear about this):
Break it up into two imports:
from module import func1, func2, func3
from module import func4, func5
Doing this has the disadvantage that if module is removed from the codebase or otherwise refactored, then both import lines will need to be deleted. This could prove to be painful.
Split the line:
To mitigate the above concern, it may be wiser to do
from module import func1, func2, func3, \
    func4, func5
This would result in an error if the second line is not deleted along with the first, while still maintaining the singular import statement.
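A third option worth adding (not in the original answer, but standard since PEP 328): wrap the imported names in parentheses. The import can then span several lines without backslashes, and a trailing comma keeps later edits tidy. The module and function names below are just the thread's placeholders:
from module import (
    func1,
    func2,
    func3,
    func4,
    func5,
)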
https://codedump.io/share/0An8YQPre3Nq/1/python-module-import-single-line-vs-multi-line
CC-MAIN-2017-09
refinedweb
329
51.75
Scope is a region of a program. Variable Scope is a region in a program where a variable is declared and used. Variables are thus of two types depending on the region where these are declared and used. Local Variables Variables that are declared inside a function or a block are called local variables and are said to have local scope. These local variables can only be used within the function or block in which these are declared. For functions, local variable can either be a variable which is declared in the body of that function or can be defined as function parameters in the function definition Now let's see an example where a local variable is declared in the function definition. #include <iostream> using namespace std; int multiply(int a, int b){ return a * b; } int main() { int x = 3, y = 5; int z; z = multiply( x, y ); cout << z << endl; return 0; } In this example, variables a and b are declared in the definition of the function multiply and are used within the function. These have no meaning outside the function. 'a' and 'b' are the copies of the variables 'x' and 'y' respectively and store their respective values. Thus, any change in the values of 'a' and 'b' does not affect the values of 'x' and 'y'. Here, 'a' and 'b' are the local variables for the function 'multiply' and 'x' and 'y' are the local variables for the function main. Let's take one more example to see that local variables are assets of the function in which these are declared. #include <iostream> using namespace std; void func1(){ int x = 4; cout << x << endl; } void func2(){ int x = 5; cout << x << endl; } int main(){ func1(); func2(); return 0; } 5 In the function func1, we declared a variable x and initialized it with a value 4. This variable is in the body of the function func1 and thus gets destroyed when the body of the function func1 ends. So, when we called func1, 4 gets printed. We declared another variable x in the function func2 and gave it a value 5. This variable also gets destroyed as the body of func2 ends and has no relation with the variable x of func1. So, the two variables are independent of each other and are limited to only the function in which these are declared. A block is a group of statements enclosed within the curly braces { }. For example, the body of a function is a block. The body of loops or conditional statements like if..else is also a block. Global Variables Variables that are defined outside of all the functions and are accessible throughout the program are global variables and are said to have global scope. Once declared, these can be accessed by any function in the program. Let's see an example of a global variable. #include <iostream> using namespace std; int g; int main(){ int a = 4; g = a * 2; cout << g << endl; return 0; } Here, g is a global variable since it is declared outside of the main function. Thus unlike the local variable 'a' which can only be used in the function main, 'g' can be used throughout the program and can be used in all the functions in the program. In this example, we declared 'g' outside of all the functions and gave it a value in the function. #include <iostream> using namespace std; int g = 10; void func1(){ g = 20; cout << g << endl; } int main(){ func1(); g = 30; cout << g << endl; return 0; } 30 Same as we can change the value of any local variable in a function any number of times and the value last assigned overrides the previous value, we can also override the value of the global variable in a function. 
We assigned a value 10 to g at the time of its declaration. Firstly, we called the function func1 in the main function in which we assigned 20 to 'g'. Thus, the value of 'g' became 20 in the function func1 and thus 20 got printed. After that, we assigned 30 to 'g' in the main function thus making the value of 'g' 30 in the main function and printing 30. Programming is a skill best acquired by practice and example rather than from books. -Alan Turing
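As a cross-language aside that is not part of this C++ tutorial: Python draws the same local/global distinction, but assigning to a global name inside a function needs an explicit global declaration. A rough equivalent of the second example above:
g = 10                 # global variable

def func1():
    global g           # without this line, "g = 20" would create a new local variable
    g = 20
    print(g)

func1()                # prints 20
g = 30
print(g)               # prints 30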
https://www.codesdope.com/cpp-scope-of-variables/
CC-MAIN-2022-40
refinedweb
721
75.44
Our Process of Deploying, Running and Maintaining Druid FullContact is a people company and a data company. We believe that businesses can deliver higher quality products through better relationships and 1-1 customized service. Our Identity Resolution products (both batch and API) are what enables that to happen. We have several different APIs but they can be boiled down to two simple use cases: - Resolve - Who does this identifier connect to? - Enrich - What other information do you know about that person that can help the business better connect? Our API is embedded by customers when they want to make real-time decisions about their customers and users: what types of content might this person be interested in, have they talked about our business on social media, etc. These real-time queries and responses are made to our API, api.fullcontact.com. The client passes one or more identifiers (email, social handle, etc.) and a response is returned with several pieces of information that enriches and adds additional information about the person. These enrichment pieces can be things like age, interests, demographics and are grouped together into data packs. Batch enrichment works much the same way but is more asynchronous and capable of dealing with millions of records all at once. A client of ours will securely ship us a file of identifiers, we process it, match it to our graph and append additional columns of enrichment data as defined by the agreement with the customer and the Data Packs they want to pay for. In both of these cases, API and batch, we had similar problems that needed to be solved: - How many calls did a client make to us over a given time period? We needed to find out how many 200s (matches), and 404s they received. - Out of the data that was returned to them how many of each Data Pack did they receive? - Based on the Data Packs returned, how much data has the client consumed towards their committed plan? - Out of the Data Packs returned, how much do we owe our upstream data providers? While we could store all of the data needed to compute these things in S3 and run large aggregation jobs using something like Hive or Spark, we wanted to do better. Our new usage dashboard requires that we have fast access to this usage data and can aggregate across several dimensions: account ID, data pack, time range, and more. In order to meet all of the above requirements, we built a streaming architecture that unifies usage data from both API and batch onto a Kafka topic and eventually makes it into Druid, an OLAP high-performance system where we can slice and dice the data in nearly any way we want. Architecture Let's examine the life of a usage event and how it flows through our architecture. First, from the API side: - A client with an authenticated FullContact API token makes a call to our API to resolve email - Our internal API layer processes the request, figures out the result, and returns it to the client. - As part of this process, an Avro message is emitted to a Kafka usage topic. - This Avro message is registered in the schema registry. - Secor (an open source project from Pinterest) is running in our Kubernetes cluster as a stateful set, reads from the Kafka topic and archives the data to S3 in columnar parquet storage. - AWS Glue runs regular crawlers on this S3 data and registers tables on top of it, making it available for later querying if needed in Athena and other tools. 
- A Druid Kafka Indexer supervisor runs on Druid and launches the Peon workers on the data nodes that do the actual ingestion. These jobs read the data, apply basic transformations, and roll up aggregations and store it in the appropriate segments. From the batch side: - A client securely delivers their bulk file with IDs to be enriched. - A result file is created by the data pipeline system. - A usage summary report file is generated that has summarized statistics for each row in the results file describing how many Data Packs were returned on that row and the source of each Data Pack. This file is persisted in S3. - When the batch file is approved and delivered back to the customer, a process is run that reads the summary report file and creates standard Avro usage messages and streams them to the same Kafka topic used in the API side From a user accessing our developer dashboard to view their usage at dashboard.fullcontact.com: - User logins into the usage dashboard. - Our web service backend passes their account ID and requested time ranges to our usage middle tier service. - The usage middle tier service has several canned Druid JSON queries that are filled in and then submitted to the Druid Query Broker. - Results from the Druid Query Broker are passed back to the client dashboard and rendered in various charts. Deploying, Running and Maintaining Druid Like you would with any new technology we went through several different phases of trying out and experimenting with Druid to actually getting it to a state we felt comfortable going with to production. The Druid project offers a “quick start” configuration that is meant to be run on a small local instance. We found this to be useful to stand up Druid locally on our laptops to try out very simple configurations. Once we graduated from that and wanted to ingest from a larger Kafka topic we manually configured Druid on a i2.xlarge instance we had unused reservation capacity for. Since this instance is quite a bit larger (has a decent amount more ephemeral disk and more CPU and memory) we modified some of the quickstart JVM parameters to use this additional capacity. In this initial configuration we were only using local deep storage, a local zookeeper and local in memory database for the metastore; all configurations you probably shouldn’t rely on in production. Before going to production we wanted our infrastructure to have a few properties: - Be completely automated and immutable, if we want to change something we can spin up new instances. - Be able to replace or scale a single component without losing data. - Be monitored and have automated alerts in case of error conditions. We went with the common deployment approach of classifying the various Druid subprocesses into three different types of nodes: Master Node - Coordinator (port 8081) - Overlord (port 8090) Query Node - Broker (port 8082) - Router (port 8088) Data Node - Historical (port 8083) - Middle Manager (port 8091) The three different kinds of nodes are described using ansible playbooks and roles. We bake a single Druid AMI that contains the configuration and JVMs that can support any one of the above three nodes then supply user data variables that the systemd services use to decide which components to launch. We configured a RDS Mysql database for the metastore and a completely external and highly available 3 node zookeeper cluster to coordinate. 
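To make the query side concrete: the "canned Druid JSON queries" the usage middle tier submits are native Druid queries POSTed to the broker (port 8082 in the node layout above). The sketch below is illustrative only; the datasource, dimension and metric names are invented for the example, not FullContact's actual schema:
import json
import requests

# Hypothetical usage query: calls per day for one account over July 2019.
query = {
    "queryType": "timeseries",
    "dataSource": "usage",            # invented datasource name
    "granularity": "day",
    "intervals": ["2019-07-01/2019-08-01"],
    "filter": {"type": "selector", "dimension": "accountId", "value": "acct-123"},
    "aggregations": [{"type": "longSum", "name": "calls", "fieldName": "count"}],
}

# Native queries are POSTed to the broker's /druid/v2/ endpoint.
resp = requests.post("http://broker:8082/druid/v2/",
                     headers={"Content-Type": "application/json"},
                     data=json.dumps(query))
print(resp.json())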
Externalizing these components and adding S3 deep storage means we can completely replace any of the nodes above without losing any data. For monitoring our shiny new Druid instance we setup the following prometheus components: - Prometheus Node Exporter - provides common system metrics like CPU, Memory, Network, disk usage - Prometheus Druid Exporter - exports druid specific like number of queries, query durations, failed queries and missing segments We have a simple dashboard in grafana that will show us some of the important Druid metrics that are getting scraped to prometheus: For alerting, we are using prometheus-alert manager to alert us when the missing segments count (the number of segments that the historical process has been unable to register) climbs above some specified value: Once we have this Druid cluster setup we can submit our ingestion spec to Druid that defines how to ingest the stream from Kafka. In our initial setup of of storing segments per day with no roll ups we were storing about 2GB per day. Once we started rolling up similar events occurring during the same minute using HLLSketchBuild we were able to drop this down to around 150MB. Given this fairly low data footprint and the possibility for us to define more aggressive rollup windows once data ages we feel confident we will be able to store all the usage data we need at a relatively low cost. Why Avro, why Schema registry? FullContact historically uses Google Protobuf wrapped in a few extra layers of serialization for passing messages around on Kafka topics. Early on in the usage system design we decided to go with Avro. Why Avro? One of the out of the box tools you can get with the open source Confluent Kafka distribution is the Schema registry. The schema registry is a lightweight REST service that can be run on the Kafka brokers that will keep track of the schema definition that each topic should use. Several Kafka clients and other third party tools like Secor and Druid integrate nicely with the schema registry. Going this route let us avoid writing any custom Ser/De logic. Druid Migration v0.14 to v0.15 We initially started the 3 node Druid cluster with v14 which was soon outdated by v15. We tried to predict the steps for migration so that we could accomplish migration and keep our existing data with no downtime to the cluster. We decided to bring up a v15 cluster in parallel and migrate the existing data to the new cluster. One of the major changes to Druid v15 is in the way how the segment information is stored. Prior to v15 there are two places where the segment information is stored: - As descriptor.json in deep storage - As loadspec in Metadata store With v15, Druid now stores this information only in Metadata store. This lead to the deprecation of the tool `insert-segments-to-db-tool`. This brought a challenge with manual migration of metadata since we could no longer follow the guides we were able to find online. To work around this we created a new Druid Cluster with S3 deep storage and metadata store. Then we copied the old deep storage and metastore to the new instances. Now the metastore contains the loadspec for each segment that point to old deep storage. { "loadSpec": { "type":"s3_zip", "bucket":"druid-deepstorage-data", "key":"druid/segments/usage_v0_query/2019-07-18T00:00:00.000Z_2019-07-19T00:00:00.000Z/2019-07-18T00:00:00.143Z/6/e458d343-e666-4665-85aa-f4ba2276da1e/index.zip" } } After examining the metadata for the segments you can see it is all pointing to the old deep storage location. 
In order to update the segment metadata to point to the new deep storage location we created a simple python script to scan through the database and update the records:
#!/usr/bin/python
import MySQLdb
import json
import pathlib

db = MySQLdb.connect(host="druid-**.qpcxrads.us-east-1.rds.amazonaws.com",  # your host, usually localhost
                     user="druid",     # your username
                     passwd="**",      # your password
                     db="druid")       # name of the database

# you must create a Cursor object. It will let
# you execute all the queries you need
cur = db.cursor()

# Use all the SQL you like
cur.execute("SELECT * FROM druid_segments")

# print all the first cell of all the rows
for row in cur.fetchall():
    print row[0]
    loadSpec = json.loads(row[8])['loadSpec']
    if loadSpec['bucket'] == 'druid-deepstorage-data':        # Old deep storage
        loadSpec['bucket'] = 'druid-deepstorage-0-15-data'    # New Deep Storage
        payload = json.loads(row[8])
        payload['loadSpec'] = loadSpec
        sql = "UPDATE druid_segments SET payload = %s WHERE id = %s"
        val = (json.dumps(payload), row[0])
        cur.execute(sql, val)

db.commit()
db.close()
Once this change is made, starting the Druid Master, Data and Query servers will bring up the cluster running with old data. Since this migration of v14 to v15 we have experimented with a few more data migration scenarios with Druid during our most recent Hack Week:
- Standing up a secondary "sandbox cluster" that shares the same deep storage as prod for all historical data and its own deep storage for new data
- Ingesting batch data from Parquet (work in progress)
Summary and Next Steps
Learning Druid has been a fun and informative process. This new journey feels similar to back in the day when I was learning the basics of Hadoop. So much to learn and so much potential with the technology. As you have probably noticed from our writeup the scale of our Druid cluster is fairly small (there are some descriptions of larger organizations running 1000+ node clusters) and we are only using some of the most basic features. We are excited to explore all of the additional features of Druid that can help give us even more real-time streaming insight to our data.
FullContact is hiring! We have several engineering roles open on our careers page. One of those is on the Foundation Integrations team (the same one that is working on the systems described in this blog). So if you want to join a fast moving team working on Druid, JVM microservices (Clojure and Java) all deployed on AWS, apply now and shoot me a note at [email protected].
https://www.fullcontact.com/blog/our-process-of-deploying-running-and-maintaining-druid/
CC-MAIN-2020-05
refinedweb
2,208
58.11
How does the pandas series combine() method work?
The pandas series combine() method is used to combine two series objects according to the specified function. The series.combine() method takes two required positional arguments: the first argument is another series object, the second argument is a function. The method combines two elements, one from each series object, based on the specified function and returns the result as an element of the output series object.
This method has one optional parameter, fill_value. If an index label is missing from one of the series objects, we can fill that missing value with a specified value; otherwise the result for that label will be NaN by default.
Example 1
import pandas as pd

# create pandas Series1
series1 = pd.Series([1,2,3,4,5,6])
print("First series object:",series1)

# create pandas Series2
series2 = pd.Series([10,20,30,40,50,60])
print("Second series object:",series2)

# combine series with max function
print("combined series:",series1.combine(series2, max))
Explanation
In this example, we combine the two series element-wise with the "max" function. The 'max' function takes two elements, one from series1 and one from series2, compares them and returns the larger of the two.
Output
First series object:
0    1
1    2
2    3
3    4
4    5
5    6
dtype: int64
Second series object:
0    10
1    20
2    30
3    40
4    50
5    60
dtype: int64
combined series:
0    10
1    20
2    30
3    40
4    50
5    60
dtype: int64
The series1 and series2 objects are created from integer values, and we applied the combine() method to these two series objects. We can see the resultant series object in the above output block.
Example 2
import pandas as pd

# create pandas Series1
series1 = pd.Series({'A':13,'B':48,"C":98, "D":38})
print("First series object:",series1)

# create pandas Series2
series2 = pd.Series({'A':32,'B':18,"C":1, "D":85,'E':60 })
print("Second series object:",series2)

# combine series with max function
print("combined series:",series1.combine(series2, max))
Explanation
Initially, we created two pandas Series objects using python dictionaries, and then applied the combine() method with the "max" function.
Output
First series object:
A    13
B    48
C    98
D    38
dtype: int64
Second series object:
A    32
B    18
C     1
D    85
E    60
dtype: int64
combined series:
A    32.0
B    48.0
C    98.0
D    85.0
E     NaN
dtype: float64
Here, series1 and series2 are combined using the "max" function. In this example, both series objects have the same index labels, but series2 has one extra label, "E". While combining the two series objects, this extra label has no counterpart in the other series, so by default it is filled with a NaN value.
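Following on from the fill_value note at the top (the 0 below is an arbitrary fill chosen for the example): passing fill_value lets the unmatched 'E' label take part in the combination instead of becoming NaN:
import pandas as pd

series1 = pd.Series({'A': 13, 'B': 48, 'C': 98, 'D': 38})
series2 = pd.Series({'A': 32, 'B': 18, 'C': 1, 'D': 85, 'E': 60})

# 'E' is missing from series1, so it is treated as 0 instead of producing NaN
print(series1.combine(series2, max, fill_value=0))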
https://www.tutorialspoint.com/how-does-pandas-series-combine-method-work
CC-MAIN-2022-33
refinedweb
602
60.72
Hello, I have a main movie clip in which there are two animated movie clips. When the child movie clip stops there is a button on screen. On the button I have to insert the onRelease action redirecting to a particular url. The problem I am facing is: if everything is on one keyframe the onRelease function works, but when I animate the file the onRelease function does not work. I have made a demo file which is similar to the one I am working on. Can anyone please help me with the error. Awaiting reply. Thank you in advance.
What code are you using and where have you placed it? Are you testing using a browser?
Hi Ned, Thank you very much for your reply. The code in the .as file I am using is
MC4.Mc3.mcmov.btn.onRelease = function() {
    getURL("", "_blank");
}
MC4 is my main movie clip. In MC4, the Mc3 movie slides in and is set on a keyframe. mcmov is another movie clip nested in Mc3, and btn is the button nested in the mcmov movie clip. On that same button I need to insert the getURL script. If I put everything on a single frame the script works fine, but as soon as I animate the movie clips the script does not work. In the fla file I have included the script with #include "style1.as". Above is the structure; I hope you are getting my point. Is there any option to upload the fla and .as file here? I would have shared it.
If the btn object is not present when that code sets up/executes then it will not be assigned to the btn object... meaning if it lives down some timeline somewhere, it is not present. One way to conquer it is to place the code in the timeline where the btn exists. Another is to always have the btn present but only make it visible (btn._visible = true) when it needs to be.
https://forums.adobe.com/thread/1486128
CC-MAIN-2017-43
refinedweb
332
85.89
David Reid wrote:
>.
I'm not going to opine on possible repository layouts, as there are people who know far better than me how that works. However, if you sort it out so you can just run python with setuptools, you can keep the directory structure however you like on disc, and then just type "./setup.py develop" for each project you have checked out to do all the funky namespace and path mangling that setuptools does, and work with your working copy.
I suppose a utility script that runs the equivalent for the various setup files in the various subprojects would be useful if you always want to work with the bleeding edge, but other than that, it's fairly easygoing.
Moof
--
Giles Antonio Radford, alias Moof
"Too old to be a chicken and too young to be a dirty old man"
Serving up my ego over at <>
http://twistedmatrix.com/pipermail/twisted-python/2005-October/011681.html
CC-MAIN-2014-35
refinedweb
149
63.73
) Mahesh Chand(5) Sanjay Debnath(4) Jignesh Trivedi(3) Asma Khalid(3) Monica Rathbun(3) Ravi Shankar(3) Santhakumar Munuswamy(2) Mukesh Kumar(2) Shashangka Shekhar(2) Suthahar J(2) Mohammed Rameez Khan(2) Chris Love(2) Iqra Ali(2) Tahir Naushad(2) Rebai Hamida(2) Ibrahim Ersoy(2) Sumit Singh Sisodia(2) Dennis Thomas(2) Mangesh Kulkarni(2) Prashant Kumar(2) Ankit Sharma(2) Nirav Daraniya(2) Nithya Mathan(2) Ck Nitin(1) Amit Kumar Singh(1) N Vinodh(1) Arthur Le Meur(1) Piyush Agarwal(1) Raj Kumar(1) Gaurav Jain(1) Mushtaq M A(1) Shantha Kumar T(1) Shweta Lodha(1) Kamlesh Kumar(1) Mahesh Verma(1) Lou Troilo(1) Vijai Anand Ramalingam(1) Hemant Panchal(1) Padmalatha Dronamraju(1) Rahul Saraswat(1) Pritam Zope(1) Priyanka Mane(1) Shiv Sharma(1) Neel Bhatt(1) Lakpriya Ganidu(1) P K Yadav(1) Thivagar Segar(1) Jayanthi P(1) Kailash Chandra Behera(1) Kantesh Sinha(1) Gowtham K(1) Ajith Kumar(1) Vikas Srivastava(1) Munish A(1) Puja Kose(1) Dhruvin Shah(1) Ahsan Siddique(1) Najuma Mahamuth(1) Jinal Shah(1) Mohammad Irshad(1) Sibeesh Venu(1) Sarath Jayachandran .. Creating Stop Watch Android Application Tutorial Mar 08, 2018. Hello all! In this tutorial, we will learn how to create a Stop Watch android app, which will have basic features of a stop watch like, Start, Stop, Pause and Reset. Getting Started With SharePoint Framework (SPFX) Mar 08, 2018. In this article, I have explained how to set up SharePoint framework development environment, and how to build a SharePoint framework web part from scratch.. Getting Started With Angular 2 Using Angular CLI Mar 01, 2018. In this article, I will demonstrate how to install Angular CLI and how to set up an Angular project and run it. Kick Start With Azure Cosmos DB Mar 01, 2018. In this article, we will discuss Azure Cosmos DB. Basic Templating Using Node.js And Express Feb 19, 2018. Previously we learned about how to simply start up with nodejs & implement package manager. Below link you can have an overview on startup NodeJS. Getting Started With Microsoft Academic Knowledge Using Cognitive Services Feb 17, 2018. Microsoft Academic is a free public search engine for academic publications and literature developed by Microsoft Research. This library has 375 million titles ,170 million of which are academic papers. An Angular 5 Application Gets Started Or Loaded Feb 09, 2018. Now, we will try to understand how an Angular application is loaded and gets started... WPF - File Menu User Control Feb 08, 2018. This article is about the development of WPF File Menu User control.. Getting Started With Razor Pages In ASP.NET Core 2.0 Feb 01, 2018. Today, we will talk about more about Razor pages - what actually a Razor page is, how to create Razor page applications, and some of the fundamentals of Razor pages. Getting Started With "ng-bootstrap" In Angular 5 App Jan 29, 2018. In this article, we are going to cover “how to install and setup ng-bootstrap in our Angular 5 apps.”. Getting Started With OpenGL Win32 Jan 27, 2018. To get started with OpenGL using GLUT, read this article.. Xamarin.Forms - Pages Jan 23, 2018. In the previous chapter, I explained how you can prepare your environment for Android or iOS application development, in this chapter I will start presenting the structure of our page in Xamarin.Forms.. How To Start With Node.js Jan 22, 2018. In this post, we will learn about NodeJS. This is the startup post for those who are interested to work with NodeJS but are confused about how to start.. Configure Windows Authentication In ASP.NET Core Jan 11, 2018. 
Using Windows authentication, users are authenticated in ASP.NET Core application with help of operating system. Windows Authentication is a very useful in intranet application where users are in same domain. Create Your First Bot Using Visual Studio 2017 - Step By Step Guide Jan 11, 2018. Seeing how fast the companies are adopting the Bots, it is really the best time for you to start learning Bot framework and start adopting Bots for your business.. AI Series - Part One - Registering For Emotion API Jan 11, 2018. I will be showing how to start AI Development with Cognitive. Getting Started With Azure Service Bus Dec 26, 2017. From this article you will learn an overview of Azure service bus and ow to create an Azure service bus namespace using the Azure portal. Understand HTTP.sys Web Server In ASP.NET Core Dec 19, 2017. HTTP.sys is a Windows-based web server for ASP.NET Core. It is an alternative to Kestrel server and it has some features that are not supported by Kestrel.. .. Mounting Azure File Share With Windows Nov 30, 2017. This article shows how to create an Azure Storage account Building Windows 10 App Using UWP Nov 30, 2017. Build a Windows 10 application that runs anywhere using the Universal Windows Platform. About Windows-8-Start-Screen NA Hire a remote developer Looking to add more bandwidth to your software team? Web Developers, designers, testers are now available on demand. Flexible hours, very competitive rates, expert team and High Quality work.
https://www.c-sharpcorner.com/tags/Windows-8-Start-Screen
CC-MAIN-2018-13
refinedweb
874
65.22
Lankford1,940 Points Casting object as a String--I may not be fully understanding this question but are we to be creating a conditional statement to determine whether this is in fact a String? How would I utilize instanceof in that fashion? if (obj instanceof String){ result = (String)obj } return result; Not quite sure how to approach this problem, but the instructions gave me this impressionAuthor() { return mAuthor; } public String getTitle() { return mTitle; } public String getBody() { return mBody; } public String getCategory() { return mCategory; } public Date getCreationDate() { return mCreationDate; } } import com.example.BlogPost; public class TypeCastChecker { /*************** I have provided 2 hints for this challenge. Change `false` to `true` in one line below, then click the "Check work" button to see the hint. NOTE: You must set all the hints to false to complete the exercise. ****************/ public static boolean HINT_1_ENABLED = false; public static boolean HINT_2_ENABLED = false; public static String getTitleFromObject(Object obj) { // Fix this result variable to be the correct string. String result = ""; return result; } } 3 Answers Benjamin Barslev Nielsen18,958 Points Your understanding is correct and the suggestion is correct for task 1 except a semicolon is missing after (String) obj. Benjamin Barslev Nielsen18,958 Points Sorry for the late reply but here it comes. Because of Javas scope rules, if you declare a variable inside { ... } the variable is only accessible in that block of code. Therefore you cannot access result outside the if-statement, if you declare it in the branches of the if-statement. Therefore you still need to declare the variable before the if-statement: String result = ""; Remember that the return type is String, so we would always return a String, not a Object, so therefore you should not declare result with the type of Object. If obj is not a String, we know that it is a blogpost and therefore we can cast obj to BlogPost, and then assign that BlogPost's title to result, and afterwards returning it: if (obj instanceof String) { result = (String) obj; } else { result = ((BlogPost) obj).getTitle(); } return result; Hope this helps Kevin Lankford1,940 Points This definitely helped my figure it out, thank you! One more quick question, what in the rules of Java make ((BlogPost)obj).getTitle(); correct and not (BlogPost)obj.getTitle(); ? Benjamin Barslev Nielsen18,958 Points Let's first look at the incorrect code: (BlogPost) obj.getTitle(); In this code we cast obj.getTitle() to BlogPost, but this raises two problems. obj is still of type Object, so we do not know that it has an getTitle method, and obj.getTitle() returns a String, so casting this to BlogPost will be wrong. In the correct code: ((Blogpost) obj).getTitle we cast obj to BlogPost, and since obj is a BlogPost, we know it has a getTitle method, so this is correct. Kevin Lankford1,940 Points Kevin Lankford1,940 Points I've added this condition, but it the compiler error I'm getting looks like something may be missing before returning the result: if (obj instanceof String){ String result = (String)obj; } else { Object result = obj; } return result; ./TypeCastChecker.java:19: error: cannot find symbol return result; ^ symbol: variable result location: class TypeCastChecker 1 error
https://teamtreehouse.com/community/casting-object-as-a-stringi-may-not-be-fully-understanding-this-question
CC-MAIN-2022-27
refinedweb
524
60.65
Details Description Rather than distributing the hadoop configuration files to every data node, compute node, etc, Hadoop should be able to read configuration information (dynamically!) from LDAP, ZooKeeper, whatever. Issue Links - is related to HADOOP-3582 Improve how Hadoop gets configured - Open - relates to - HADOOP-6910 Replace or merge Configuration with Jakarta Commons Configuration - Resolved HADOOP-7001 Allow configuration changes without restarting configured nodes - Closed Activity - All - Work Log - History - Activity - Transitions One aproach would be to wrap LDAP, Zookeeper, etc via an implementation of org.apache.hadoop.FileSystem. I think it goes without saying that a pluggable system would be great here. LDAP and ZK are just two examples of many that could be used to provide this sort of functionality. Heck, no reason it couldn't be as simple as a HTTP GET. [I'll admit i'm slightly partial to LDAP here due to the requirement to provide a schema. It provides a rudimentary way to enforce some basic rules on the data prior to Hadoop even seeing it.] Do you think LDAP will scale if 2000 data-nodes will start reading their conf at once? Like Hadoop, ZK, webservices, etc, LDAP's scale is implementation dependent. I'd love to be able to specify a configuration URL, instead of distributing hadoop-site.xml all over. Presumably you would specify the URL as "" and Hadoop would add some parameters, like "". Wrapping filesystem around ldap/zk/etc seems a bit heavy. Another option would be to extend Configuration and create ZooKeeperConfiguration, LDAPConfiguration, HTTPConfiguration, etc. Each node would still require a small file to tell it where/which resource to locate. A builder could read this file and construct the appropriate configuration and hand it off to the node. This would be very pluggable and transparent to the nodes. Centralized configuration facility adds one more single point of failure. If LDAP server fails - everything fails. I am not arguing against centralized configurators in general. Just pointing out possible issues. Configuration may be just the first step. Then we may want to build centralize hadoop distribution repositories from which cluster nodes can pick required jars, etc. > Each node would still require a small file to tell it where/which resource to locate. Yes and that makes me think that something is not right here: in order obtain data-node configuration I need to specify the location of the configuration server and still distribute this configuration file to 2000 nodes. The amount of churn in a configuration file that only points to where to read the real configuration data is significantly lower than changes in Hadoop's configuration in my experience. For example, our system level ldap.conf has changed maybe three times in 1 year, and most of those were growing pains of a new infrastructure/data center. I have lost track of how many times we have needed to change and bounce the jobtracker ACLs in the past week. Additionally, HTTP, LDAP, etc, have real, proven HA solutions. They are much less likely to go down than the Hadoop grids they would be supporting.... Another nice thing about this is that you can use DNS for a super-simple HA/metaconfig system. Simply point all the hadoop nodes at, then set up CNAME or A records on the local domain network to the right machines. If one confmaster is down, it'll just pull from another. 
making a case for zookeeper , to write a configuration layer on top of zookeeper would be just 20-30 lines of code and we can handle around >20K writes per second (with 3 servers, which I dont think would be necessary but other apps also could use zookeeper) and >50K reads/sec. Also zookeeper has a pretty strong HA story. Also, this could sow the seeds of a true federation of hdfs clusters! Couple of suggestions (assuming zookeeper option) based on past experience with similar issue. 1. Splitting configuration into two parts - default and site-specific overrides (with further specialization down the line) - would simplify hadoop upgrade by minimizing upgrade impact on locally overridden options. 2. If zookeeper does not provide for node-local persistent caching, adding this option specifically for configuration data could support disconnected node operations if need be (say, for sanity checks on start-up). #1 could be done no matter what the source. It just depends upon how smart the plug in framework and the actual plug in is. For example, assuming an HTTP plug in: it would just fetch two files and do the merge just like Hadoop configures things today. #2 should be done no matter what the source. The question is whether it should be handled inside or outside the plug in. The follow up to this bug is really: how do you build a registry of configs and have the client smart enough to know which entry it needs to follow. So that might need to be part of the design here. [For example, if I have a client machine that needs to submit jobs to two different grids, how can it automagically pull the proper configuration information for those two grids? ] > For example, assuming an HTTP plug in: it would just fetch two files and do the merge > just like Hadoop configures things today. Yes, this is exactly what a simplistic solution would do. I'd rather not limit myself to two files, though: why not get ready for federated grids in advance. Further, one has to take some care with option semantics: for instance, override of some default options may turn undesirable or even should be prohibited. There some nits here, to summarize. > [For example, if I have a client machine that needs to submit jobs to two different grids, > how can it automagically pull the proper configuration information for those two grids? ] I didn't actually consider clients. If ZK supports client connections (meaning outside-world readers and may be even writers), not sure with this - just started reading docs/code last week, - this should be fairly straightforward. The only thing one then needs at the client to answer your concern would be a simple "AvailableGrids" config listing respective ZK URLs to connect to. HADOOP-4944 contains a patch to use "xinclude" in configuration files. That seems functionally similar to the goals of this ticket. Thoughts? XInclude from a remote filesystem? I'm worried that XML parsers won't handle errors well. Perhaps it could be made to work. There are two pieces to this ticket anyway. 1. read config files from an arbitrary filesystem. 2. get daemons to re-read config more dynamically - most of them just do so once at startup. I would have separate issues for these. I had some conversations with Doug on this in the summer, but I never got round to even opening a ticket . The idea was to expose ZK as a Hadoop FileSystem then change Configuration to load from an arbitrary FS. There is a bootstraping issue to get round too. 
Tom Given ZK's status as a subproject, where does ZK read its configuration data from? Do we have a chicken and egg problem? ZK uses Java properties files, so it has no dependencies on Hadoop Core configuration. The bootstrapping problem is that FileSystem relies on a Configuration object having loaded its resources. So to get a Configuration to load from a FileSystem won't work, the way things stand (see Configuration#loadResource). (It's possible that the reason for this code has gone away, or at least lessened, following the recent split of configurations into core, hdfs and mapred - it might be possible to bootstrap from the core config.) There's no inherent reason, though, why a Configuration couldn't load its resources from HDFS or a ZK-backed FileSystem. We just need to avoid a system loading resources from itself. There are two pieces to this ticket anyway. 1. read config files from an arbitrary filesystem. 2. get daemons to re-read config more dynamically - most of them just do so once at startup. I would have separate issues for these. +1 > The idea was to expose ZK as a Hadoop FileSystem Like I mentioned earlier in this JIRA, I really like this idea. I have an app that currently stores data on HDFS, but I would love to make it run on zookeeper without changing my application code. - Apache Directory implements an NFS filesystem front end to LDAP data, so you could today use it to provide all of the configuration data in a single point of failure/HA directory service - I do subclass Configuration for my work, with a ManagedConfiguration being read from SmartFrog, rather than the filesystem. I have to pass an instance of this down when any service gets created. - These Configurations get serialised and sent around with Job submissions, so they soon become static snapshots of configuration when jobs were queued, not when they were executed. You can't be dynamic without changing this behaviour. - There are 350+ places in the Hadoop codebase where something calls the {{new Configuration()} operation, to create a config purely from the registered files. This is bad; some kind of factory approach would be better, with the factory settable on a per-JVM or per-thread basis. - HADOOP-3582 collects other requirements for configuration. I am willing to get involved in this, as config is part of management, and I probably have the most dynamic configuration setup to date. But I'd rather wait until after the service lifecycle stuff is in, so change can be handled more gradually. If any configuration changes were to go into 0.21, I'd opt for a factory mechanism for creating new Configuration instances, so that people exploring configuration options can control things better. > until after the service lifecycle stuff is in Could you please post a link to discussion/ticket, I'd like to take a look. Thanks, Andrey Looking at my subclassed configuration code I am reminded of some things - It subclasses JobConf, as some things expect to be handed a JobConf - In the past, I did have it live, bound to the SmartFrog configuration graph, so (until serialized) every change in the SF configuration was reflected in the result of the get() method. - That was pretty complex, I had to subclass a lot of methods that assumed that all state was stored statically in two properties files. - It proved easier to comment the overrides out and move to grabbing all the values once, and no longer being live. Some people have mentioned that they'd like daemons to re-read their config more dynamically. 
That can get very confusing, as you may have some config values (ports, network cards, temp directories) whose changes only get picked up by a full restart. There are two tactics we've used in the past for this - Have your service support a "reload" operation which loads in all values and resets thing - Just restart the service Provided the cost of restarting is low, restarting services is way, way easier. @Andrey, I was referring to HADOOP-3628. the changes are orthogonal, its just that I want to get the service stuff done before I spend any of my time worrying about what we can do to make configuration easier. With the subclassing working, I am happy short term, though I think it could be done more cleanly. One important and related feature are the necessary access-control mechanisms for reloading configurations, should we track it separately? Maybe HADOOP-5772 ?. Today, many users put the configuration files on an NFS server (e.g., NetApp) and all daemons and clients read the configuration from there. I can see two goals for this JIRA: - Remove the external dependency. That is, allow people to deploy Hadoop without an NFS server. - Remove the single point of failure. The NFS server might not have HA, in which case there is a single point of failure. Are there other issues with the current NFS-based architecture? What are we trying to solve here? Small (I'd say tiny) grids that can get away with using NFS is not the target audience here. Besides being a SPOF, for large installations having your entire grid and your clients mounting the same fs is likely impossible due to firewall restrictions. Using a distributed system such as LDAP or ZK or whatever means you also have a higher chance of replicating configs across firewall boundaries in a secure fashion. Also, if Hadoop is deployed to optimize certain features, chances are very very high that a site is going to have multiple configs for the same grid. For example, removing certain non-client related options out of mapred-site.xml or specifically tuning the HDFS audit log. I know this is not relevant to the ticket to dynamically/remotely read configuration files but I wanted some people interested in hadoop configuration to take a look at: The concept is to give users a IOS like interface to configure hadoop. The configurations are generated locally and then pushed out over SSH/SCP. The concept I am using: In most cases large portions of the configuration are shared. Their are very few exceptions to this. (IE one node has 9 disks not 8). These exceptions are done with an override list per host. node6.hdfs-site.xml = shared.hdfs-site.xml + override.get(node6).hdfs-site.xml Getting the effective configuration for a host is as simple as taking the shared list, and replacing the overridden variables for that host. I am using a few simple abstractions, and Java XML beans serialization for the configuration. Thus, the entire configuration is easily dumped to a single XML file, so one File object represents the state of all of the clusters configurations. Howdy, This is really exciting as I feel currently that configs are a hard problem. Currently it can be difficult to get transparency and flexibility depending on the configs scope. Specifically, where do I audit if a user sets a particular job paramater, or an admin sets a paramater for a particular system. It seems that this jira is about setting up scope on a cluster wide basis. LDAP and Zookeeper could allow configs across clusters. 
This is something that (gasp) VMS logicals got right IMO, in that it is one interface to solve all of these problems. I think one giant key-value store is a good first step, but realistically you need different permissions based on whether you are making a change intra-cluster, inter-cluster, for one box, for one user or for one job. Being able to go back and see what the hierarchy of those values is, and having them centrally managed, would be really neat. Just my two cents, I am just spoiled from my slow big machines. Here's a really interesting paper from Akamai related to Quorum use for configuration: "ACMS: The Akamai Configuration Management System" ZooKeeper has a REST based interface that might work well if the client's (hadoop's) primary interest is to poll for configuration during bootstrap. Not that one has to use REST; if the ZK client interface were used directly then one could register for dynamic changes via ZK watches. Additionally the hadoop nodes could take advantage of other ZK features to implement group membership (nodes register themselves, and their state, with ZK, allowing simple monitoring to be implemented), leader election, etc... > nodes register themselves, and their state, with ZK, allowing simple monitoring to be implemented I would add (dynamic) membership-aware scheduling. It's typically far from straightforward for a scheduler to account for nodes (re)joining the group unless there is an explicit group registration mechanism, with join/leave events dispatched to the scheduler and other interested parties. Allen and I had a conversation about this a while ago and he made some very good points for storing certain data in a DS vs ZK. In particular, user data makes a lot of sense IMO to be stored in a DS. Keep in mind that ZK is all about coordination, not "data storage". We don't support search for example, which is a significant feature in most DSs. Also integration with legacy systems (your existing user database) is also a feature of most DSs that ZK does not have. While ZK could do these things, a typical DS will do them for you out of the box, and make your admins' lives easier in the sense that they already have experience with this. At the same time things like coordination are best served by ZK: keeping track of which nodes are allocated to which functions, the status of processes and coordinating operations between them, the load and activity of processes (nodes), leader election within a highly reliable/available service, distributed locks and work queues, etc... Take a look at LinkedIn's Norbert for an example of one instantiation of something like this: So I think there exist a couple of different conversations going on here. However there exist numerous other things which fall into the cluster configuration for which ldap may not be the best choice. In regards to operations, I think deployment complexity is an argument against ldap. Even if an organization were willing to store such configuration in their existing ldap infrastructure, which I am skeptical of in many cases... Allen, sorry to be a pain, but could you be a bit more pedantic about what you mean by configurations? Do you think a hybrid system may make sense here? If you s,LDAP,ZK,g above, you'll find it is a better fit. The cold, hard reality is that LDAP is everywhere, ZK is not. There are some key features in ZK that are/were missing in order for it to fit here (with the good news being that those gaps are slowly closing).
But the truth of the matter is that LDAP is a well understood technology by most IT departments and ZK is not. (Sidenote: it would be interesting to know how well used the security components of ZK are...) Also, I don't think you should think of ZK as part of the Hadoop infrastructure. It is a sub project (and therefore part of the ecosystem), but you can run Hadoop without using ZK and many many many people do, including Yahoo! for years. Allen, sorry to be a pain, but could you be a bit more pedantic about what you mean by configurations? Do you think a hybrid system may make sense here? I'll try and write/diagram something up with a concrete proposal as to how I think this should be done, based on conversations I've had with Owen and others over the years. You'll find I'm thinking way beyond just a simple 10 node grid. Allen, a more concrete proposal would be great, I am just trying to understand which part of the problem we are trying to solve. It's good to see people putting some real elbow grease into making hadoop more palatable to ops/syseng, as they are a very important stakeholder. I do think that requiring sysadmins to understand the details of zk, just to do configuration stuff, would be kind of a show stopper for acceptance. Also having cluster and host wide information in ldap definitely seems like a win in the portability department. The only real worry is that we end up increasing the deployment complexity or develop things in such a way that people have to deploy an array of ldap servers within each cluster. Considering I have never worked at any large enterprise which uses directory services besides Notes or Active Directory, I think that this could seriously be a burden if we do it wrong.
> would be interesting to know how well used the security components of ZK are
ZK has pluggable auth with ACLs on the znodes. Yahoo uses this internally for multi-tenant ZK clusters; the auth in this case is a Yahoo specific implementation (uses certificates). We have met with the Yahoo Hadoop Security team; we don't have a kerberos plugin for ZK auth yet but we have discussed it.
> you can run Hadoop without using ZK
That's true for MR/HDFS today, it may not be so in future. HBase requires ZK today, so if you're using HBase you're already using ZK.
> LDAP is a well understood technology by most IT departments and ZK is not
I agree with this, in general most IT departments are going to be much more familiar with technologies such as LDAP, MySQL, Exchange, etc... that have been around for a while. This is a big plus. ZK is still very new relative to these mature technologies. (Although the same could be said for Hadoop itself.) Just my thought - I think ZooKeeper might become a part of Hadoop deployment in the near term (for High Availability) or some other part of Hadoop. With HBase already deploying ZooKeeper as part of their stack, it might need a little more thinking before just outright rejecting the idea of ZooKeeper because of unfamiliarity to ops. I would say it this way:
- What's the right fit
- What role could ldap play
- What are the specific goals we are trying to solve
- How do we make ops/syseng guys comfortable with our decisions
- How do we reduce operational complexity
- How do we reduce deployment complexity
Allen, any updates here? I know y'all have released some of your stuff. I have been really interested in this ticket for a while. I think we can go in circles for a long time. No solution will fit everyone's needs. The best general solution in my opinion is LDAP. Why?
- if you have an argument against ldap but are making a case for zookeeper, every argument you make against ldap could be made against zookeeper
- LDAP is easy to scale: single master + thousands of slaves, or multi-master configurations
- LDAP is the de facto standard for the configuration of network servers
- Java provides built-in directory clients
- Current hadoop configuration with overrides per host can be easily accomplished with LDAP - yes, nodes will have to bootstrap with a hostname or look up a default host
- people that don't want to run LDAP for hadoop probably will not want to run alternative XYZ either. For those people there are configuration files.
If someone seconds the 'I like LDAP' idea I will start working on it and submit a patch for review. I think getting a patch and ldap support is a great idea, although I disagree with the sentiment "if you have an argument against ldap but are making a case for zookeeper every argument you make against ldap could be made against zookeeper". LDAP is a protocol, not a storage system. Which implementation did you have in mind? Jeff, the implementation I want to go for was described above. And it meets this requirement. Also I wanted to add that since the new hadoop security is kerberos, and Kerberos is normally backed by LDAP, a good portion of users will have an ldap server at their disposal. I think we are thinking along the same lines, and I understand why you did it that way, but I wanted to chat about it.
hadoopTaskTracker
  commonname: string
  hostname: multi-string
  mapred.job.tracker: string
  mapred.local.dir: multi-string
  mapred.tasktracker.map.tasks.maximum: integer
  mapred.tasktracker.reduce.tasks.maximum: integer
Doing this will cause you to have your LDAP schema generated from your hadoop configuration. This is great because we can ensure type checking of the parameters, but bad in a couple of ways:
- we can not deal with non-hadoop parameters (which should not be in the configuration but could be)
- it will be very intricate and large, and we have to track new changes
Have you considered an alternative like this?
  commonname: string
  hostname: multi-string
  HadoopPropertyName: string
  hadoopPropertyValue: multi-string
Yes, this overly generic way is bad for several reasons, but I can live with it. Mostly because right now the current configuration is an XML file. It has no XSD schema; hadoop does the validity checking. What do you think? I should give some live demo of what we're up to; the current slides are online
- ask the infrastructure for the VMs, get some whose name is unknown
- bring up the NN/JT with a datanode on the same master host, which ensures the JT doesn't block waiting for the filesystem to have >1 DN and so be live
- provided that master node is live, ask for more workers. If the master doesn't come up, release that machine instance and ask for a new one.
- I also serve up the JT's config by having the machine manager front end have a URL to the hadoop config, one that 302s over to the config of the JT. You get what it is really running, and as it has a static URL you can get it whenever the cluster is live (you get a 404 if there is no vm, connection-refused if the 302 fails). I can use this in a build with <get> and deploy client code against the cluster; no need for fanciness. I did try with the nodes picking ports dynamically; there's no easy way of getting that info or the actual live hostnames back into the configurations. Future work.
We need the services to tell the base class what (host, port) they are using for each action, and to dynamically generate a config file from that. As an aside, I am not a fan of XSD, so I don't miss its absence. XSD's type model is fundamentally different from that of programming languages, and is far too complex for people, be they authors of xml schema files or XSD-aware XML parsers. Go look at xsd:any and whether the default namespace is in the ##other namespace, and you too will conclude that it is wrong. Odd. I replied as a comment but jira didn't seem to pick it up... Agreed. It would still be necessary to support local configuration for single-node clusters and testing though. The configuration should be able to take its values either from a distributed system, or from local values. ZooKeeper would be a reasonable system for retrieving/setting the key-value pairs of the configuration file.
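To make the ZooKeeper idea concrete, here is a minimal sketch of the kind of "20-30 line" configuration layer discussed above. It is written in Python with the kazoo client purely for illustration; the znode layout (/hadoop/conf/<property>) and the server list are hypothetical, and a real Hadoop integration would of course live in Java behind the Configuration API.

import logging
from kazoo.client import KazooClient

logging.basicConfig()

def load_config(zk_hosts="zk1:2181,zk2:2181,zk3:2181", root="/hadoop/conf"):
    """Read every child znode under `root` and return a dict of
    property name -> value, e.g. {"mapred.job.tracker": "jt:9001"}."""
    zk = KazooClient(hosts=zk_hosts)
    zk.start()
    try:
        config = {}
        for key in zk.get_children(root):
            value, _stat = zk.get("%s/%s" % (root, key))
            config[key] = value.decode("utf-8")
        return config
    finally:
        zk.stop()

A daemon that wants dynamic reconfiguration could instead register a watch (kazoo's DataWatch) on the znodes it cares about and re-read a value whenever the watch fires, which is the "register for dynamic changes via ZK watches" point made earlier in the thread.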
https://issues.apache.org/jira/browse/HADOOP-5670
CC-MAIN-2017-39
refinedweb
4,410
61.06
Simple way to get all values from hash Andrey ・1 min read
Ruby out of the box can't give us all the values of a hash if it is nested. I suggest an easy way to find all the values using recursion. Example hash:

hash = { a: 2, b: { c: 3, d: 4, e: {f: 5}} }

> hash.values
=> [2, {:c=>3, :d=>4, :e=>{:f=>5}}]

That's not an option, we need all the values.

def deep_values(array = [], object:)
  object.each do |_key, value|
    if value.is_a?(Hash)
      deep_values(array, object: value)
    else
      array << value
    end
  end
  array
end

> deep_values(object: hash)
=> [2, 3, 4, 5]

If you run the benchmark with this data, we get the following results:

> puts Benchmark.measure { 100_000.times { hash.values } }
=> 0.028920 0.002643 0.031563 ( 0.032759)
> puts Benchmark.measure { 100_000.times { deep_values(object: hash ) } }
=> 0.140439 0.003318 0.143757 ( 0.146637)

JetRockets is a technology consulting firm that architects, designs, develops and supports enterprise-level web, mobile and software platforms, helping clients achieve their goals and grow their businesses. Give a try to github.com/evanphx/benchmark-ips for benchmarking. I believe it's more reliable and accurate.
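For readers coming from other languages, the same recursive walk is easy to express elsewhere; here is a rough Python equivalent, included only as a comparison and not part of the original post:

def deep_values(obj, acc=None):
    """Collect every leaf value from a nested dict, depth-first."""
    if acc is None:
        acc = []
    for value in obj.values():
        if isinstance(value, dict):
            deep_values(value, acc)
        else:
            acc.append(value)
    return acc

h = {"a": 2, "b": {"c": 3, "d": 4, "e": {"f": 5}}}
print(deep_values(h))  # [2, 3, 4, 5]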
https://practicaldev-herokuapp-com.global.ssl.fastly.net/jetrockets/simple-way-to-get-all-values-from-hash-5hd1
CC-MAIN-2019-43
refinedweb
223
66.23
Gathering information about a workstation Posted by Christian Amarie on June 8th, 1999
Sometimes you need information about a workstation, for example if you want to know whether a workstation is not running Windows NT before you attempt some NT-specific job that has no chance of succeeding. I think this pseudo-routine can be helpful.

#include <what_you_need.h>
/* Usually:
#include <lmcons.h>
#include <lmwksta.h>
#include <lmserver.h>
#include <lmerr.h>
*/

// Network API job - obtain network info about selected machine.
BOOL _GetWkstaInformation100()
{
    LPBYTE lpBuf;
    LPCSTR lpcstrWkstaName = (LPCSTR)m_strWkstaName;
    int iwLength = 2 * (MAX_COMPUTERNAME_LENGTH + 1);
    WCHAR lpwWkstaName[2 * (MAX_COMPUTERNAME_LENGTH + 1)];
    lpwWkstaName[0] = '\0';
    MultiByteToWideChar(CP_ACP, 0, lpcstrWkstaName, -1, lpwWkstaName, iwLength);

    typedef NET_API_STATUS (NET_API_FUNCTION *NETWKPROC)(LPWSTR, DWORD, LPBYTE *);
    NETWKPROC _procNetWkstaGetInfo = (NETWKPROC)
        (GetProcAddress(theApp.m_hNetDLL, _T("NetWkstaGetInfo")));
    if(_procNetWkstaGetInfo)
    {
        NET_API_STATUS nasRetVal = (*_procNetWkstaGetInfo)(lpwWkstaName, 100, (LPBYTE*)&lpBuf);
        if(nasRetVal == NERR_Success)
        {
            WKSTA_INFO_100 *pWkstaInfo = (WKSTA_INFO_100 *)lpBuf;
            DWORD dwPlatformId = pWkstaInfo->wki100_platform_id;
            if(dwPlatformId != PLATFORM_ID_NT)
            {
                //[ERROR] Not a Windows NT Workstation - if useful.
                return FALSE;
            }
            else
                return TRUE;
        }
        else
        {
            //[ERROR] System error. Call GetLastError, FormatMessage, etc.
            return FALSE;
        }
    }
    else
    {
        //[ERROR] Unable to find procedure NetWkstaGetInfo in netapi32.dll.
        return FALSE;
    }
}

History
Please help with Error 53
Posted by Legacy on 12/25/2002 12:00am, originally posted by: Maddi
Hi all, I always get Error 53 back from the functions NetWkstaGetInfo or NetServerGetInfo. Can somebody give me a hint how to fix that? I don't know what the error means (Network path not found? But the path should be correct). Do I have to initialise something before this? Thx, Maddi

help in VC project
Posted by Legacy on 10/13/2002 12:00am, originally posted by: bhushan
I have to make a project in VC++ which will allow a user access to all desktops on the LAN. I want to know how to go about it, especially which feature to use, and reference material if available.

How to find a file from a networked workstation or server?
Posted by Legacy on 06/13/2002 12:00am, originally posted by: Ken
Very interesting piece of code. I want to know "How to find a file from a networked workstation or server?"

windows nt administration
Posted by Legacy on 09/25/2000 12:00am, originally posted by: vikas kumar sharma
There are a lot of things that can be done with the network APIs. We can find out the hardware details of a particular workstation and the users that use them. Info about the users in the network can also be found out programmatically; check the samples in MSDN.

User has to be an Administrator or print or...
Posted by Legacy on 12/15/1999 12:00am, originally posted by: Igor Proskuriakov
Just one comment about this utility: you must have Print or Server operator privilege, or be a member of the Administrator or Account local groups, to successfully execute the NetWkstaGetInfo function. Regards, Igor
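The same lookup can also be scripted without C++. Below is a rough Python sketch using ctypes against the same NetWkstaGetInfo API, mainly to show what the info level 100 structure contains; the field names follow the WKSTA_INFO_100 documentation, and error 53 from this call still means the network path (the target machine) could not be found. Treat it as an illustration, not production code.

import ctypes
from ctypes import wintypes

netapi32 = ctypes.WinDLL("netapi32")

class WKSTA_INFO_100(ctypes.Structure):
    _fields_ = [
        ("wki100_platform_id", wintypes.DWORD),
        ("wki100_computername", wintypes.LPWSTR),
        ("wki100_langroup", wintypes.LPWSTR),
        ("wki100_ver_major", wintypes.DWORD),
        ("wki100_ver_minor", wintypes.DWORD),
    ]

def wksta_info(server=None):
    # server=None queries the local machine; otherwise pass a workstation name.
    buf = ctypes.POINTER(WKSTA_INFO_100)()
    status = netapi32.NetWkstaGetInfo(server, 100, ctypes.byref(buf))
    if status != 0:  # NERR_Success is 0; 53 is ERROR_BAD_NETPATH
        raise OSError("NetWkstaGetInfo failed with status %d" % status)
    try:
        info = buf.contents
        return {
            "platform_id": info.wki100_platform_id,
            "computer": info.wki100_computername,
            "domain": info.wki100_langroup,
            "version": (info.wki100_ver_major, info.wki100_ver_minor),
        }
    finally:
        netapi32.NetApiBufferFree(buf)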
http://www.codeguru.com/cpp/i-n/network/article.php/c2427/Gathering-information-about-a-workstation.htm
CC-MAIN-2016-22
refinedweb
1,637
50.57
Was this a file name format change in 6.18? Also, any idea on how to get
Code:
May 2, 2016, 8:34:00 AM Script 2016-05-02 Events.txt
to stop appearing in the log every minute?
May 2, 2016, 8:34:00 AM Script 2016-05-02 Events.txt
May 2, 2016, 8:34:00 AM Script 2016-05-02 Events.txt
log IndigoLogName
import sys
import collections
indigo.variable.updateValue(variable_ids[index], value=line)
jay (support) wrote: Well, the log is in Indigo Touch.
Different Computers wrote: Now I'm confronted with the same issue mentioned for the AppleScript about a page back: Any simple way to strip the timestamps before they become the variable values? Or after, for that matter.
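For the timestamp question, one simple approach (sketched here as a guess at what the script could do, since the full script isn't shown) is to strip a leading timestamp of the format shown above with a regular expression before the line is written into the Indigo variable:

import re

# Matches a leading "May 2, 2016, 8:34:00 AM " style prefix.
TIMESTAMP = re.compile(r"^[A-Z][a-z]+ \d{1,2}, \d{4}, \d{1,2}:\d{2}:\d{2} [AP]M\s+")

def strip_timestamp(line):
    return TIMESTAMP.sub("", line)

# strip_timestamp("May 2, 2016, 8:34:00 AM Script 2016-05-02 Events.txt")
# -> "Script 2016-05-02 Events.txt"

The cleaned value is what would then be passed to indigo.variable.updateValue(...).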
https://forums.indigodomo.com/viewtopic.php?t=6057&p=153817
CC-MAIN-2018-30
refinedweb
137
76.93
Create a new project in Visual Studio 2019. Choose "Class Library (.NET Standard)" then click Next. Set these values on the next dialog: Project name: SchoolLibrary, Solution name: School. Add a Student class to the library; the later code assumes it exposes StudentId, FirstName, LastName and School properties. You will need to resolve some of the unrecognized namespaces. Data context class: SchoolDbContext (SchoolAPI); Controller name: StudentsController. Once you click on Add, the StudentsController.cs file is created in the Controllers folder. To load that particular controller every time you run the SchoolAPI application, ...

httpClient.GetJsonAsync<Student>($"{baseUrl}api/students/{id}");
studentId = student.StudentId;
firstName = student.FirstName;
lastName = student.LastName;
school = student.School;
mode = MODE.EditDelete;

Let us run the application and test it out.
https://blog.medhat.ca/2019/05/
CC-MAIN-2021-21
refinedweb
108
54.9
EF 4.2 Code First, how to map with fluent API? (Since i have a database predefined i can't let EF recreate it). This the mapping i use now (works but i want to rewrite using fluent api): public class League { [DatabaseGenerated(DatabaseGeneratedOption.Identity)] public Guid LeagueId { get; set; } .... #region References public virtual ICollection<News> News { get; set; } #endregion } public class News { [DatabaseGenerated(DatabaseGeneratedOption.Identity)] public Guid NewsId { get; set; } public Guid LeagueId { get; set; } .... #region References [ForeignKey("LeagueId")] public virtual League League { get; set; } #endregion } Now, how can i map this using the fluent API? Update Wrote this and it works, but is there a simpler version? modelBuilder.Entity<League>().HasMany(x => x.News).WithRequired(y => y.League).HasForeignKey(c => c.LeagueId); Update 2 I added those missing from the classes. But the problem is that if i leave it at that and try it, it fails. I need to specify a key somewhere.I can't let EF create the database and it refuses to just operate with tables. Answers You should look at this article for example: The base part, is that you should remove default db initializer: Database.SetInitializer<YourContext>(null); And this should prevent EF from trying to change your db or throw errors. It will just try to work with what you give it.
http://unixresources.net/faq/8317634.shtml
CC-MAIN-2019-04
refinedweb
221
66.64
04 March 2017 0 comments Python, Mozilla, PostgreSQL

tl;dr: crontabber is an advanced Python program to run cron-like applications in a predictable way. Every app is a Python class with a run method. Example here. Until version 0.18 you had to do locking outside, but now the locking has been "internalized". Meaning, if you open two terminals and run python crontabber.py --admin.conf=myconfig.ini in both, you don't have to worry about it starting the same apps in parallel.

Every app has a state. It's stored in PostgreSQL. It looks like this:

# \d crontabber
            Table "public.crontabber"
    Column    |           Type           | Modifiers
--------------+--------------------------+-----------
 app_name     | text                     | not null
 next_run     | timestamp with time zone |
 first_run    | timestamp with time zone |
 last_run     | timestamp with time zone |
 last_success | timestamp with time zone |
 error_count  | integer                  | default 0
 depends_on   | text[]                   |
 last_error   | json                     |
 ongoing      | timestamp with time zone |
Indexes:
    "crontabber_unique_app_name_idx" UNIQUE, btree (app_name)

The last column, ongoing, used to be just for curiosity. For example, in Socorro we used that to display a flashing message about which jobs are ongoing right now. As of version 0.18, this ongoing column is actually used to NOT run apps again. Basically, when started, crontabber figures out which app to run next (assuming it's time to run it) and now the first thing it does is look up if it's ongoing already, and if it is, the whole crontabber application exits with an error code of 3.

What might happen is that two separate servers with almost perfectly synchronized clocks might have cron run crontabber at the "exact" same time. Or rather, only a few milliseconds apart. But the database is central, so what might happen is that two distinct PostgreSQL connections try to send a...

UPDATE crontabber SET ongoing=now() WHERE app_name='some-app-name'

at the very same time. So how is this solved? The answer is row-level locking. The magic sauce is here. You make a select, by app_name, with a suffix of FOR UPDATE NOWAIT. Imagine two distinct PostgreSQL connections sending this:

BEGIN;
SELECT ongoing FROM crontabber WHERE app_name = 'my-app-name'
FOR UPDATE NOWAIT;

-- do some other stuff in Python

UPDATE crontabber SET ongoing = now() WHERE app_name = 'my-app-name';
COMMIT;

One of them will succeed; the other will raise an error. Now all you need to do is catch that raised error, check that it's a row-level locking error and not some other general error. Instead of worrying about the raised error you just accept it and exit the program early.

This screenshot of a test.sql script demonstrates this: two terminals lined up, I start one and quickly switch and start the other one.

Another way to demonstrate this is to use psycopg2 in a little script:

import threading
import psycopg2

def updater():
    connection = psycopg2.connect('dbname=crontabber_exampleapp')
    cursor = connection.cursor()
    cursor.execute("""
    SELECT ongoing FROM crontabber WHERE app_name = 'bar'
    FOR UPDATE NOWAIT
    """)
    cursor.execute("""
    UPDATE crontabber SET ongoing = now() WHERE app_name = 'bar'
    """)
    print("JOB CAN START!")
    connection.commit()

# Use threads to simulate starting two connections virtually
# simultaneously.
threads = [
    threading.Thread(target=updater),
    threading.Thread(target=updater),
]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

The output of this is:

▶ python /tmp/test.py
JOB CAN START!
Exception in thread Thread-1:
Traceback (most recent call last):
...
OperationalError: could not obtain lock on row in relation "crontabber"

With threads, you never know exactly which one will work and which one will not. In this case it was Thread-1 that sent its SQL a couple of nanoseconds too late. As of version 0.18 of crontabber, all locking is now dealt with inside crontabber. You still kick off crontabber from cron or crontab, but if your cron does kick it off whilst it's still in the midst of running a job, it will simply exit with an error code of 2 or 3.
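The error handling the post describes (accept the failure and exit early) can be sketched like this; the pgcode check and the exit code follow the behaviour described above, but the exact internals of crontabber may differ:

import sys
import psycopg2
from psycopg2 import errorcodes

def claim_job(connection, app_name):
    cursor = connection.cursor()
    try:
        cursor.execute("""
        SELECT ongoing FROM crontabber WHERE app_name = %s
        FOR UPDATE NOWAIT
        """, (app_name,))
        cursor.execute("""
        UPDATE crontabber SET ongoing = now() WHERE app_name = %s
        """, (app_name,))
        connection.commit()
    except psycopg2.OperationalError as exception:
        if exception.pgcode == errorcodes.LOCK_NOT_AVAILABLE:
            # Another instance holds the row lock; bail out early.
            sys.exit(3)
        raise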
https://api.minimalcss.app/plog/crontabber-now-supports-locking,-both-high--and-low-level
CC-MAIN-2020-24
refinedweb
659
65.52
#1 — find-untranslated without sax
Last modified on Jan 08, 2009 by Matthew Wilkes
Actually, 'i18ndude find-untranslated' uses a sax xml parser that does a networking request to find the namespace. Now, I need to work with it behind firewalls, in a train, or with a high-ping connection. This feature raises an error with 'urlopen' and catches it to mask the traceback. The only way is to not parse the page template structure with sax, but with a non-validating xml parser. It's just a feature request, don't lose time on it. ;)
- Steps to reproduce:
- run 'i18ndude find-untranslated' without a network connection.
Added by Hanno Schlichting on Feb 27, 2006 05:41 AM: Accepted this. Using a different parser has been on the todo list for quite some time, actually. Maybe I'll get to it...
Issue state: unconfirmed → open
Target release: 2.1 → 2.0
Added by Hanno Schlichting on Apr 01, 2006 03:09 PM: This has been resolved in SVN by disabling external namespace validation, thanks to a patch from Chuck Bearden.
Issue state: open → resolved
Target release: 2.0 → 2.1
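The fix mentioned in the last comment (disabling external namespace/DTD fetching rather than abandoning sax entirely) can be illustrated in Python; this is only a sketch of the general technique, not the actual i18ndude patch:

import io
import xml.sax
from xml.sax.handler import feature_external_ges, feature_external_pes

class OfflineResolver(xml.sax.handler.EntityResolver):
    """Return an empty stream instead of fetching external entities."""
    def resolveEntity(self, publicId, systemId):
        return io.StringIO("")

parser = xml.sax.make_parser()
# Don't download external general/parameter entities (DTDs etc.).
parser.setFeature(feature_external_ges, False)
parser.setFeature(feature_external_pes, False)
parser.setEntityResolver(OfflineResolver())
# parser.parse("some_template.pt") now works without network access.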
http://plone.org/products/i18ndude/i18ndudetracker/1
crawl-002
refinedweb
182
66.74
Add TEST_MAPPING Bug: 235278695 Test: n/a Change-Id: I92be1cf02569df6e155b685280ec296e49079dae This is a wireless medium simulation tool for Linux, based on the netlink API implemented in the mac80211_hwsim kernel driver. Unlike the default in-kernel forwarding mode of mac80211_hwsim, wmediumd allows simulating frame loss and delay. This version is forked from an earlier version, hosted here: First, you need a recent Linux kernel with the mac80211_hwsim module available. If you do not have this module, you may be able to build it using the backports project. Wmediumd requires libnl3.0. cd wmediumd && make Starting wmediumd with an appropriate config file is enough to make frames pass through wmediumd: sudo modprobe mac80211_hwsim radios=2 sudo ./wmediumd/wmediumd -c tests/2node.cfg & # run some hwsim test However, please see the next section on some potential pitfalls. A complete example using network namespaces is given at the end of this document. Wmediumd supports multiple ways of configuring the wireless medium. With this configuration, all traffic flows between the configured interfaces, identified by their mac address: ifaces : { ids = [ "02:00:00:00:00:00", "02:00:00:00:01:00", "02:00:00:00:02:00", "02:00:00:00:03:00" ]; }; You can simulate a slightly more realistic channel by assigning fixed error probabilities to each link. ifaces : { ids = [ "02:00:00:00:00:00", "02:00:00:00:01:00", "02:00:00:00:02:00", "02:00:00:00:03:00" ]; }; model: { type = "prob"; default_prob = 1.0; links = ( (0, 2, 0.000000), (2, 3, 0.000000) ); }; The above configuration would assign 0% loss probability (perfect medium) to all frames flowing between nodes 0 and 2, and 100% loss probability to all other links. Unless both directions of a link are configured, the loss probability will be symmetric. This is a very simplistic model that does not take into account that losses depend on transmission rates and signal-to-noise ratio. For that, keep reading. You can model different signal-to-noise ratios for each link by including a list of link tuples in the form of (sta1, sta2, snr). ifaces : { ids = [ "02:00:00:00:00:00", "02:00:00:00:01:00", "02:00:00:00:02:00", "02:00:00:00:03:00" ]; links = ( (0, 1, 0), (0, 2, 0), (2, 0, 10), (0, 3, 0), (1, 2, 30), (1, 3, 10), (2, 3, 20) ); }; The snr will affect the maximum data rates that are successfully transmitted over the link. If only one direction of a link is configured, then the link will be symmetric. For asymmetric links, configure both directions, as in the above example where the path between 0 and 2 is usable in only one direction. The packet loss error probabilities are derived from this snr. See function get_error_prob_from_snr(). Or you can provide a packet-error-rate table like the one in tests/signal_table_ieee80211ax The path loss model derives signal-to-noise and probabilities from the coordinates of each node. This is an example configuration file for it. ifaces : {...}; model : { type = "path_loss"; positions = ( (-50.0, 0.0), ( 0.0, 40.0), ( 0.0, -70.0), ( 50.0, 0.0) ); directions = ( ( 0.0, 0.0), ( 0.0, 10.0), ( 0.0, 10.0), ( 0.0, 0.0) ); tx_powers = (15.0, 15.0, 15.0, 15.0); model_name = "log_distance"; path_loss_exp = 3.5; xg = 0.0; }; The kernel only allows wmediumd to work on the second available hardware address, which has bit 6 set in the most significant octet (i.e. 42:00:00:xx:xx:xx, not 02:00:00:xx:xx:xx). Set this appropriately using ‘ip link set address’. 
This issue was fixed in commit cd37a90b2a417e5882414e19954eeed174aa4d29 in Linux, released in kernel 4.1.0.

wmediumd's rate table is currently hardcoded to 802.11a OFDM rates. Therefore, either operate wmediumd networks in 5 GHz channels, or supply a rateset for the BSS with no CCK rates.

By default, traffic between local devices in Linux will not go over the wire / wireless medium. This is true of vanilla hwsim as well. In order to make this happen, you need to either run the hwsim interfaces in separate network namespaces, or you need to set up routing rules with the hwsim devices at a higher priority than local forwarding. tests/test-001.sh contains an example of the latter setup.

The following sequence of commands establishes a two-node mesh using network namespaces.

sudo modprobe -r mac80211_hwsim
sudo modprobe mac80211_hwsim
sudo ./wmediumd/wmediumd -c ./tests/2node.cfg

# in window 2
sudo lxc-unshare -s NETWORK bash
ps | grep bash   # note pid

# in window 1
sudo iw phy phy2 set netns $pid
sudo ip link set wlan1 down
sudo iw dev wlan1 set type mp
sudo ip link set addr 42:00:00:00:00:00 dev wlan1
sudo ip link set wlan1 up
sudo ip addr add 10.10.10.1/24 dev wlan1
sudo iw dev wlan1 set channel 149
sudo iw dev wlan1 mesh join meshabc

# in window 2
ip link set lo
sudo ip link set wlan2 down
sudo iw dev wlan2 set type mp
sudo ip link set addr 42:00:00:00:01:00 dev wlan2
sudo ip link set wlan2 up
sudo ip addr add 10.10.10.2/24 dev wlan2
sudo iw dev wlan2 set channel 149
sudo iw dev wlan2 mesh join meshabc
iperf -u -s -i 10 -B 10.10.10.2

# in window 1
iperf -u -c 10.10.10.2 -b 100M -i 10 -t 120
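As a small convenience (not part of wmediumd itself), the per-link SNR tuples for a fully connected topology can be generated with a few lines of scripting; the output is pasted into the links = ( ... ); section of the config file described above:

from itertools import combinations

def full_mesh_links(num_nodes, snr=20):
    """Emit (sta1, sta2, snr) tuples for every node pair, wmediumd-style."""
    lines = ["links = ("]
    pairs = list(combinations(range(num_nodes), 2))
    for i, (a, b) in enumerate(pairs):
        sep = "," if i < len(pairs) - 1 else ""
        lines.append("    (%d, %d, %d)%s" % (a, b, snr, sep))
    lines.append(");")
    return "\n".join(lines)

print(full_mesh_links(4, snr=20))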
https://android.googlesource.com/platform/external/wmediumd/+/09d135d6dfd24e5df23b2806a3ac1aae0c305573
CC-MAIN-2022-33
refinedweb
920
64
Your byte size news and commentary from Silicon Valley the land of startup vanities, coding, learn-to-code and unicorn billionaire stories. Monday, January 30, 2017 Sunday, January 29, 2017 12 Silicon Valley Tech Startup Job Search Tips - Indicate on Linkedin that you are an available candidate, a service offered by their premium plan - Consider having a portfolio, works and arts to show rather than just a resume - Previous startup experience is a huge plus. Startup founders and teams look for like minded people who can hustle and deal with the startup crunch - Check the job board for alumni offered by your university. - Google and read information about the company on the internet. Any info on the internet is fair game in the interview process. - Prepare for phone screening interviews with recruiters. Sometimes these calls are scheduled quickly after resume submission when a position needs to be fulfilled. - Be ready for coding and technical interviews over the phone. Silicon Valley is tech savy even recruiters known how to ask a technical question or two. - Brush up on software and web based technical skills. Silicon valley is very software heavy except for Apple and a few other places. - If you are applying for an data analyst job, be prepared to be interviewed like a junior data scientist. Even the business roles can be technical in nature. - Use new job sites such as White Truffle, Hired, Muse ... in addition to traditional sites like Linkedin, job boards etc. - Take an advanced online course to brush up your skills or learn cutting edge technology such as self driving car on Udacity. - Research the technical stack used by the startup Machine Learning Resources: Stanford Youtube Machine Learning by Andrew Ng All of Stanford's machine learning course by Andrew Ng (not the coursera version) is posted on Youtube. Here's the course material site: and here's the video playlist This lecture series is a full-version academic course on Machine Learning that has and is Stanford University rigor. Its machine learning slides can be found here Hugo Barra Googler Xiaomi VP Joins Facebook VR - Startup News Hugo Barra once a senior Googler, Xiaomi global VP will join Facebook VR and leads all Facebook VR efforts.. iOS prototyping design tool: prototyping notepad Saturday, January 28, 2017 Machine Learning Stanford on Youtube Lecture 02 Notes - Agenda: linear regression, gradient descent and normal equations - Machine learning notations and conventions Note this is not the coursera course. This is the long youtube version of the Stanford. Seth Godin Marketing Wisdom quote on story telling Seth Godin marketing class on skillshare bestselling author in business and marketing. Growth hacking marketing tips and quotes by Seth Godin. " Marketing and advertising were the same thing, but going forward that's not what marketing is. Marketing is the act of telling a story to people who want to hear it, making that story so vivid so true, that people who hear it tell other people. " Monetize WordPress How to add adsense to free WordPress site Premium $8.25 How to add google adsense ad to free wordpress site, meaning a website that is not advanced hosted wordpress or custom domain. It used to easy: simply generate an ad unit in adsense, copy and paste the code to WordPress layout sidebar widget, text widget. It no longer works. You will have to subscribe to the $8 dollar premium plan to use WordAd. To monetize your WordPress site, you will now have to pay for a premium plan. 
Keep in mind you want to reach a small critical mass of audience on your blog before you can monetize your blog and only then does it make monetary sense to monetize your site. Tuesday, January 24, 2017 It's real google will pay you to do what you love - monetize your blog and YouTube channel Monetize your blog and YouTube channelIt's true it's real google will actually pay you to do what you love the most. I have been blogging about learning to code, japan, and Pokémon GO and I just got paid by google - my first ad revenue is real and in the bank. How did I do it? - Write about what you love. Your niche may be small but the authenticity of your writing, opinions and details make a huge difference. My blog audience is small, like in the hundreds and thousands but because my posts are relevant users end up enjoying the ads they are served (udacity, amazon web services). Despite not being able to earn much from CPM (based on 1000s impression), my blog's click through rare is high. CPC is a valuable earning - Be detail oriented. I am no internet sensation, I am not viral. How do I deliver value to my readers? By being thorough and detailed in my presentation of observations and the follow up analyses. - Be compliant be 100% compliant. When Google Adsense first came out, I was young and had an account that I played with. I tested my site and clicked on the ads and was blocked forever. I probably can contact them to unblock because it has been 10 years but the reality is grim Google is hard to reach and bans are permanent to deter opportunists. They really mean it. If you are not compliant, displayed fraudulent info, violated copy right content on YouTube, click on your own ads Google's algorithm will find you and terminate your entire account and stop monetization on all channels. Google's algorithm has proven to be intelligent (machine learning), complex (deep learning) and capable ( big data). You will be caught. - Follow a schedule. Smartly timed intervals of updates is desirable. No one wants to be overwhelmed by spam or visit your site or channel and find nothing new 3 times in a row - Have a brand or a style. All successful viral influencers have a brand or a style that is extremely distinctive. Think Justin Bieber, he has a very specific hair cut and look, and he only sings certain cheesy songs. But that makes him extremely recognizable. Bread girl on Instagram has a repeatable viral machine : putting her face in different types of bread. While the safety and the usefulness of such an act is questionable, its success is very repeatable. It's a machine! We may not have that money making machine but we can think and design our brand message. For me, this blog is about Silicon Valley tech lifestyle so it has gadgets, Silicon Valley jobs, startup tax and logistics. I cannot post food recipes here unless it is about how to make a Star Wars cake. Clam chowder is irrelevant here and will make my readers question my blog. It's - Analyze what works. My Pokémon GO posts on blogger generate thousands of views but no one watch my YouTube Pokémon GO videos. So I have to post more Pokémon on the blog and focus more on learn to code tutorials on my YouTube channel. 
Find what works, optimize and then repeat success make it better Friday, January 20, 2017 Machine Learning code pattern 01 Code pattern 01 numpy use array().T to get matrix transpose Example: X = [1,2,3] XT = array(X).T Example: X = [1,2,3] XT = array(X).T Machine Learning Concepts 01 There are three main machine learning styles: - Supervised learning - Unsupervised learning - Reninforcement learning Wednesday, January 18, 2017 Udacity Machine learning Nanodegree Syllabus and Summary udacity nanodegree - becoming a machine learning engineer. This is my personal notes summarizing what I learned from the section, consider it my personal study notes. The part I labeled syllabus is the actual outline of the course (e.g. I try to use section title as the syllabus section title) - Think of inputs as signals with different strengths, weights as sensitivity to those strengths. Can be tuned and adjusted to computer variety of tasks. Hence a collection of perceptroncomputing units is powerful - The weights sum of all the inputs is called the activation - When activation is greater than the threshold theta the perceptron outputs 1 Friday, January 13, 2017 Data Science for Business by Provost Fawcett data science book review - Interesting business approach to understand and apply data science - Use verbose texts when explaining simple concepts - Introduce advanced concepts like Information Gain and Entropy early in the book pg51. Less accessible for beginners. Good for people preparing for interviews and people who already got an introduction to machine learning - Writes the full formula out, avoid using sigma notation to make it more accessible for business readers - Useful graph of entropy pg 52 - Helpful graphical illustration pg54-pg55 two trees with different information gain - The learning curve is not a slowly ascending one. The topics jump around in this book - Variance measures impurity pg56 Thursday, January 12, 2017 Udacity Machine Learning Nanodegree Linear Regression sample code from sklearn import linear_model reg = linear_model.LinearRegression() reg.coef_ reg.intercept_ reg = linear_model.LinearRegression() reg.coef_ reg.intercept_ Codecademy SQL Table Transformation Subqueries Walkthrough - 1. Table Transformation - SELECT * FROM flights WHERE (SELECT code FROM airports WHERE elevation < 2000); - Nested subqueries - First get all the codes from airports table if its elevation is smaller than 2000 - Use this as a filter to query all columns of data from the flights table - Don't forget the ; semicolon in the end Friday, January 6, 2017 Udacity Year in Review Udacity online course highlighted itssuccesses and milestones in 2016. - Udacity currently offers 159 courses - Udacity's site-wide busiest time for learning is the Month of October - Student watched the Developing Android Apps video by Google the most - Read the full article here Udacity 2016 Year in Review Udacity Machine Learning Engineer Nanodegree - skills you will learn - Sklearn python machine learning library - Jupyter Notebook - Panda You will not spend too much time on the following, so please review, study, before proceeding to the course: - Reviewing linear algebra - Reviewing probability - Reviewing statistics Udacity Intensive Connect Data Analyst Machine Learning Nanodegrees Udacity is making two nanodegrees available for Udacity Connect or Udacity intensive: Data Analyst and Machine Learning Engineers. 
Its in-person meetup lessons utilizes the online material but features a bootcamp-like part-time learning opportunity with industry / professional classmates. Here are some PROs and CONs of Udacity Intensive Connect - PROs - Affordable price tag - The intensive class unlocks the corresponding online material for months, it is a much better deal than Udacity expensive monthly nanodegree price tag well north of $100 - It's a bargain compared to coding bootcamps - Udacity markets its program as : bootcamp level intensity, in-person collaboration, accountability, part-time, no need to leave or quit your current job - Fast paced, stringent project timeline - Attend classes with diverse and experienced industry professionals - In-person lectures that are more targeted and easily adjusted for the needs of the class. High quality, in-person lectures, Q&A opportunities, one-on-one help. - CONs - It is still expensive the price tag will be near $1000 or more - The physical location may be an hour away from your current location. I had to commute from San Francisco to San Jose. Ultimately it didn't work out. - It still relies heavily on the online videos. If those videos didn't speak to you, the in-person interactions may not be able to shift your learning retention. - Fast paced, stringent project timeline - Attend classes with diverse and experienced industry professionals. Not always beginner friendly - It is a significant time commitment - A lot of studying on the side, additional studying, looking up additional materials is required. The online videos will not provide all the information needed to complete the projects. - Significant in-person time commitment. Not attending the sessions will cause you to lose a lot of materials as they are not available online. It will put you behind schedule. I had to miss sessions because of business trips and it was hard to catch up. Conclusion: for me personally, Udacity Connect or Udacity intensive helped me finish the Machine Learning Engineering Nanodegree. It has some rough spots as the program is in its early stages. But it is improving fast! Over the few months of learning, I could see the program changing. This program did get me start to think about Machine Learning in depth. I have discovered my interest. Once you know what machine learning is, it is easy to learn it online. A lot of time will be spent playing with datasets hands-on any way. Without those practices, it is not possible to take on a real job as a machine learning engineer. Your connect experience can vary with the instructor and classmates, as all inter-personal interactions do. The holy grail question: can you become a machine learning engineer after completing the nanodegree? No, your knowledge, experience will not be sufficient. People get Phd's in this field. But it will open you up for some pretty awesome career paths. More practice, knowledge acquisition is needed. You will need a strong portfolio of exemplifying projects. You will become a much better data analyst, putting you closer to a data scientist role than an analyst role. The Nanodegree will not be sufficient to get you a job at Google. However, if you are already experienced, already in the industry, just need some technical skills to climb over a hill, this course can really help you make the transition internally. 
Udacity Machine Learning Nanodegree Instructors Review - Georgia Tech Udacity online master degree instructors - This nanodegree utilizes video clips from the Georgia Tech Udacity computer science and engineering classes. The instructors are obviously highly qualified, technical and academic, but give sometimes nerdy and perhaps less engaging and relevant jokes and try to forcefully inject a sense of humor into the learning material. It didn't work out so well. Their explanation is professional and academic but less accessible to beginners. MINUS - sebastian thrun and katie malone - Sebastian Thrun was a professor, successful entrepreneur, founder of Udacity, and the lead for many important Google businesses such as the self driving car. He's a really good teacher and gives valuable information on how machine learning is directly used in the industry. He is a god-like teacher in machine learning. PLUS! - Katie Malone was a student and a researcher and now a creator of several Udacity Machine Learning courses. She is great at explaining difficult concepts to beginners and advanced learners. She uses real life research examples, data sets from Kaggle, and simplifies the problems into workable problem sets for students. PLUS! - In-person lead Udacity Connect Intensive - If you join the Udacity Connect Intensive, you may get an in-person lead. He is usually a very qualified tutor and instructor. He/ she may not have the experience that Sebastian has, but is perhaps more practical and accessible for beginners. My session lead was once a Caltech lecturer, so he could go beginner friendly and also expert friendly. - Conclusion: Udacity instructors are industry experts, academics, and highly experienced professionals in machine learning. However, despite each clip is high quality, the Udacity Machine Learning Nanodegree curriculum is patched together not in a cohesive manner. This curriculum will pose significant difficulty for people starting from scratch. Experienced professionals, professionals who had exposures to machine learning will have an easier time. Tuesday, January 3, 2017 Python crowned as the language of choice of data scientists kaggle, a popular dataset data science machine learning competition site revealed in its recent Year In Review 2016 newsletter that Python has surpassed R as the language of choice for data scientists in recent years.this trend has been continuing for a few years. Kaggle's kernel language is now overwhelming Python despite that R was still popular and fresh in 2015. Why do you think that's the case? no revelation on Kaggle yet. Could it be because of increasing popularity of machine learning and specifically deep learning? Python works so well with ML Monday, January 2, 2017 Codecademy Walkthrough SQL Table Transformation 01 How to be viral on Imgur and Reddit? It is harder to be viral on Reddit and Imgur, one of the most popular image sharing, story telling site on the internet. It is not the best place to sell product, but it surely it is the best place to spread ideas and causes. It is also a great place to stay on top of current news and learn lifehacks. Like all forums, especially producthunt and hackernews, Imgur has its own algorithm to test out if a post should stay on its FrontPage, which essentially features the post and makes it viral. Below is a screenshot that has made it to the FrontPage, it also happens to explain how Imgur evaluates and weighs posts. 
Be aware, this is likely NOT the actual algorithm, but the actual model will look very similar to this.<<
http://www.siliconvanity.com/2017/01/
CC-MAIN-2020-34
refinedweb
2,810
52.6
Benson, On Wednesday 27 February 2008, Benson Margulies wrote: > While I'm happy to try to serve as Aegis level 1 support, WS Security > is really new to me. Can anyone else wade in here (again)? From the stack trace, it doesn't look like it's even getting to the ws-security stuff. It's barfing in the SAAJInInterceptor while trying to build the SAAJ model that the security stuff would use. Thus, you probably can just configure in the SAAJInInterceptor. No security stuff needed. THAT said, I still expect you to not be able to reproduce it... The stack trace also points out that it's using the bea Stax parser and not woodstox which is what we pretty much always test with and have worked around any issues. I'm not exactly sure how buggy that parser is. That also said, you should get him to configure in the logging interceptors so you can see the exact soap message being sent. Thus, you could see if there really is a namespace issue or not. -- J. Daniel Kulp Principal Engineer, IONA [email protected]
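A sketch of what configuring in the SAAJInInterceptor plus the logging interceptors can look like on the server side (the interceptor classes are standard CXF; the address and the GreeterImpl service below are placeholders, not from this thread):

import org.apache.cxf.binding.soap.saaj.SAAJInInterceptor;
import org.apache.cxf.interceptor.LoggingInInterceptor;
import org.apache.cxf.interceptor.LoggingOutInterceptor;
import org.apache.cxf.jaxws.JaxWsServerFactoryBean;

public class ServerSetup {

    // Placeholder service implementation, standing in for whatever service is being published.
    public static class GreeterImpl {
        public String greet(String name) {
            return "Hello " + name;
        }
    }

    public static void main(String[] args) {
        JaxWsServerFactoryBean factory = new JaxWsServerFactoryBean();
        factory.setServiceClass(GreeterImpl.class);
        factory.setServiceBean(new GreeterImpl());
        factory.setAddress("http://localhost:9000/greeter");

        // Build the SAAJ model up front -- this is what WS-Security processing would rely on.
        factory.getInInterceptors().add(new SAAJInInterceptor());

        // Log the exact SOAP messages on the wire so a namespace problem would be visible.
        factory.getInInterceptors().add(new LoggingInInterceptor());
        factory.getOutInterceptors().add(new LoggingOutInterceptor());

        factory.create();
    }
}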
http://mail-archives.apache.org/mod_mbox/cxf-dev/200802.mbox/%[email protected]%3E
CC-MAIN-2018-30
refinedweb
185
73.47
Although Java programs can be written using Notepad or Notepad++ (in Windows), or using the vi/vim editor in UNIX/Linux, IDEs are another story. If you have time, let me tell you that story; it is really exciting.

What is IDE?
IDE stands for Integrated Development Environment. It is software that gives computer developers (in general, not only Java developers) great capabilities which make the development process easier and more efficient, hence increasing productivity. The essential component in all IDEs is the source code editor. Usually these editors have the necessary intelligence to provide developers with good help. Some of the facilities that IDE software editors provide are: - Code auto-completion. - Recognizing and coloring the keywords, the built-in methods and classes, and the comments. - Tracking the opening and closing of curly brackets, single quotes, and double quotes. - Error checking. - Code generation.

Besides the code editor, IDEs offer various facilities that differ from one IDE to another, like a compiler (for compiled languages), an interpreter (for interpreted languages), and a project builder for testing the program code. Examples of commonly used IDEs are Turbo and Borland C++ (for C and C++ development), Microsoft Visual C++ (for developing applications using C++ and Visual C++), IDLE (for developing in Python), and NetBeans and Eclipse (for Java). The good news is: most of these IDEs are completely free. So, cheer up. In the following sections, we are going to learn how to download and install NetBeans.

Installing NetBeans
NetBeans is a very rich tool for developing Java programs. Let's install it, and let the tool talk about itself. To download and install NetBeans, follow this simple procedure:
1. In your web browser, browse to
2. Click Download
3. The NetBeans IDE 8.0.2 Download page is opened. As you can see, there are several packages available for download with different options (and different programming languages), and all are free. The one with the smallest size (Java SE) will be sufficient, so you can download it. For me, I will download the complete version (headed with the word All). So, click Download to start downloading the package of your choice.
4. The following page is opened, and the download starts. The download will take a few minutes, depending on the size of the package you chose (and of course your internet speed).
5. When the download is complete, double-click the downloaded installation package file.
6. The following dialog is displayed.
7. The default options will install almost everything, so you can click Next.
8. Check the checkbox to accept the license agreement, and click Next.
9. When prompted, choose "I accept the terms in the license agreement. Install JUnit" to install JUnit, then click Next.
10. By default, NetBeans will be installed in C:\Program Files\NetBeans 8.0.2. Click Next to accept this default installation path.
11. Click Next one more time, then Install. Now, get back in your chair and relax, or have a cup of coffee until the installation is complete.
12. When done, click Finish to complete the installation.

Using NetBeans
In this section, we are going to see how to write Java programs using an IDE. We will re-write our first program (which was presented in the previous article).
1. Double-click the NetBeans shortcut on the desktop (or open it from the Start menu).
2. From the File menu, click New Project. The following dialog is displayed.
3. Make sure Java and Java Application are selected, and click Next.
4. The following dialog is opened.
5.
In the Project Name text box, type “Hello”, and click Finish.
6. The following Project window is displayed. A source code file named Hello.java is created for you, and opened for editing. Notice also that some code is generated. The NetBeans wizard has defined a hello package, a public class named Hello, and a main() method with an empty body that is ready to be filled in. My God!! That is GREAT!!
7. Type the program: actually, all we need to do is type a single print statement, for example System.out.println("Hello, World!");, inside the body of the main() method, so type it anyway.
8. In the toolbar, click the green arrow button to run the program.
9. A new pane is displayed on the lower right part of the NetBeans window. The Output pane displays the output of the program.
You have just written, compiled, and executed a Java program using NetBeans. Congratulations!!
* * * * * *
In this article, we have talked about IDEs. An IDE is a software application that provides the developer with useful tools to facilitate his work, and increase his productivity. Many IDEs exist, and most of them are free. Of the long list of available IDE tools, we chose to illustrate the installation and usage of two well-known Java IDEs: NetBeans (the subject of this article), which we have already learned how to download, install, and write programs in, and Eclipse, which will be the subject of the next article. So, don't miss it.
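For reference, the completed Hello.java should look roughly like this (the hello package and Hello class are the wizard defaults described above; the exact greeting text is up to you):

package hello;

/**
 * First program rewritten in the NetBeans IDE.
 */
public class Hello {

    public static void main(String[] args) {
        // This line appears in the Output pane when you run the project.
        System.out.println("Hello, World!");
    }
}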
https://blog.eduonix.com/java-programming-2/installing-and-using-netbeans-in-java/
CC-MAIN-2019-13
refinedweb
830
66.84
Things are really going well. Your website, which had very few hits until recently, has been covered by the tech media and become popular overnight. But for some reason your MVC web application doesn't seem to scale and is very slow to respond. You monitor your server's resources and everything seems fine: both CPU and memory utilization are low. What's going on? Frustrated, you turn to Twitter for quick advice. Twitter's website also seems slow. As you wait for it to load it suddenly hits you. Your website displays the Twitter followers and following counts of the current user on every page. Is it possible that your website is slow because Twitter is slow? Web applications typically have a limited number of threads that are standing by to handle requests. When all threads are busy, new requests are queued until a thread becomes available. If in the process of creating a response, your MVC controllers are waiting for lengthy I/O operations to complete, it is possible that the threads aren't doing much, but are still tied up and aren't free to handle new requests. This problem commonly occurs when controllers make requests to external web services. MVC supports asynchronous controllers. This means that when a controller performs a potentially lengthy I/O bound operation, the thread can be freed to handle other requests. When the operation completes, the thread (or a different thread) can continue handling the request that was previously placed on hold. The result is a highly scalable web application, which can make better use of server resources. Controllers are synchronous by default. To support asynchrony you will need to write your controllers a little differently. How "little" depends on whether you are using MVC 4 and C# 5 or not. In the following sections I show a synchronous controller, then rewrite it as an asynchronous controller for environments prior to MVC 4 / C# 5 and then again for MVC 4 / C# 5. Synchronous Controllers Here is a simple synchronous controller example: public class HomeController : Controller { public ActionResult Index() { ViewBag.Followers = TwitterApi.GetFollowers(); ViewBag.Following = TwitterApi.GetFollowing(); return View(); } } The controller has a single action, Index, which makes two Twitter API calls, one to get the number of followers and another to get the number of following for the current user. It places the results in the ViewBag and returns a view, which will extract those results and display them. For the sake of brevity and relevancy I will not show the TwitterApi class. Let's assume that it's available to us. Assuming that Twitter API is slow and takes three seconds to respond to each request, our page will take at least six seconds to load. During this time the thread that handles the request will be mostly idle, but nonetheless unavailable to handle any other requests. It seems beneficial to rewrite this controller as an asynchronous controller. This is done in different ways, depending on the versions of MVC and C# that you're using. Let's review both. Asynchronous Controllers Prior to MVC 4 / C# 5 Here is what we need to do: - The controller has to derive from AsyncControllerinstead of Controller - The action needs to be split into two parts: the first part begins all asynchronous calls and the second part uses the results of those calls to return the action result Let's examine the following revised controller and discuss it in detail. 
public class HomeController : AsyncController
{
    public void IndexAsync()
    {
        AsyncManager.OutstandingOperations.Increment();
        TwitterApi.BeginGetFollowers(ar =>
        {
            AsyncManager.Parameters["followers"] = TwitterApi.EndGetFollowers(ar);
            AsyncManager.OutstandingOperations.Decrement();
        }, null);

        AsyncManager.OutstandingOperations.Increment();
        TwitterApi.BeginGetFollowing(ar =>
        {
            AsyncManager.Parameters["following"] = TwitterApi.EndGetFollowing(ar);
            AsyncManager.OutstandingOperations.Decrement();
        }, null);
    }

    public ActionResult IndexCompleted(int followers, int following)
    {
        ViewBag.Followers = followers;
        ViewBag.Following = following;
        return View();
    }
}

As you can see, we split the action into two methods, IndexAsync and IndexCompleted. When a matching request arrives, MVC invokes the first part, IndexAsync. Note that this method returns void rather than the action result, because when it completes it doesn't yet have the results of the asynchronous calls, which are required prior to returning the action result. When this method returns, the thread is free to handle other requests. Later, when all asynchronous requests that began in IndexAsync complete, IndexCompleted is invoked and the action result is returned. Let's review both these methods in more detail. In IndexAsync we now call the TwitterApi BeginX and EndX methods in order to perform the requests asynchronously. The key to the IndexAsync method is to maintain a count of outstanding asynchronous operations. As we are making an asynchronous request we increment the counter by calling AsyncManager.OutstandingOperations.Increment. In the callback that gets invoked when the asynchronous call to each TwitterApi method completes, we decrement the counter by calling AsyncManager.OutstandingOperations.Decrement. The result is that when IndexAsync returns, the counter equals 2 and when both asynchronous Twitter calls complete, the counter resets to 0. When this happens, MVC can call IndexCompleted. Another point to note about IndexAsync is that the completion callbacks place the results of the asynchronous Twitter API calls in the AsyncManager.Parameters dictionary. Those are the parameters to be provided to the IndexCompleted method. The IndexCompleted method is called when the outstanding operations counter reaches 0 and is provided with the parameters placed in AsyncManager.Parameters. Now it is time to complete the action and return the action result. The method simply places the number of followers and following in the view bag and returns the view. With this modification the controller action can be performed asynchronously. If each Twitter call takes three seconds, for a total of six seconds, then the thread doesn't have to sit idle for six seconds and can handle other requests. But there is another benefit. Both Twitter API calls are made in parallel. Therefore the action result can be returned within a total of three seconds instead of six. So not only do we gain parallelism between requests, we also benefit from parallelism within a single request. If you think that this code is rather cumbersome, I'd have to agree. There is a lot going on and we must ensure we do everything right. Luckily things are way simpler when using MVC 4 and C# 5. Let's check it out.

Asynchronous Controllers in MVC 4 / C# 5

Before we begin, let's look at our synchronous controller again.
public class HomeController : Controller
{
    public ActionResult Index()
    {
        ViewBag.Followers = TwitterApi.GetFollowers();
        ViewBag.Following = TwitterApi.GetFollowing();
        return View();
    }
}

Now let's rewrite it as an asynchronous controller:

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        ViewBag.Followers = await TwitterApi.GetFollowersAsync();
        ViewBag.Following = await TwitterApi.GetFollowingAsync();
        return View();
    }
}

Are you impressed? How clean is that? The asynchronous controller is almost as simple as the synchronous controller and has the same number of lines of code. Let's see what's happening here. But before we continue, if you are unfamiliar with the async and await keywords, I recommend that you check out my tutorial, Asynchronous programming in C# using async and await. OK, let's review the changes we made while converting the synchronous controller method to an asynchronous one: - The action method is now marked as async and returns Task<ActionResult> instead of ActionResult - Instead of invoking GetFollowers and GetFollowing we invoke their asynchronous counterparts, GetFollowersAsync and GetFollowingAsync, which return Task<int> - We use the await keyword to suspend execution of the method until the asynchronous calls to Twitter API complete; while the method is suspended, the thread can handle other requests - Note that we didn't have to use a special base class; HomeController still derives from Controller rather than AsyncController

The XAsync methods are typically provided to us by the API, and are present in many of the .NET Framework Class Library types. But just so I can show you a little bonus trick, let's assume that the TwitterApi class didn't provide those methods. How can we use await on a synchronous method? Well, we can't do that directly, but luckily the Task class has a neat static method called Run which can execute a synchronous task asynchronously. We could write our controller as follows:

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        ViewBag.Followers = await Task.Run<int>(() => TwitterApi.GetFollowers());
        ViewBag.Following = await Task.Run<int>(() => TwitterApi.GetFollowing());
        return View();
    }
}

To be fair, we didn't yet achieve all the benefits of the older approach to asynchronous controllers shown in the previous section. Can you spot what's missing? While our new asynchronous controller achieves parallelism between requests, it performs the Twitter API calls in sequence. If each API call takes three seconds, this method will return the action result in six seconds. Let's see how we can parallelize the requests.

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        Task<int> followersTask = TwitterApi.GetFollowersAsync();
        Task<int> followingTask = TwitterApi.GetFollowingAsync();

        await Task.WhenAll(followersTask, followingTask);

        ViewBag.Followers = await followersTask;
        ViewBag.Following = await followingTask;

        return View();
    }
}

Here we first invoke both asynchronous Twitter API calls and only then await them using Task.WhenAll. Finally, we await each task to get the number of followers and following. If you want to learn more about Task.WhenAll and parallelism, check out my tutorial: Asynchronous Programming in C# - Advanced Topics.
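The tutorial deliberately leaves the TwitterApi class out. Purely as an illustration, one possible stand-in (with placeholder values and simulated three-second delays rather than real Twitter calls) could look like this:

using System;
using System.Threading.Tasks;

// Hypothetical stand-in for the TwitterApi class assumed throughout the tutorial.
// A real implementation would call the Twitter REST API (for example via HttpClient).
public static class TwitterApi
{
    public static async Task<int> GetFollowersAsync()
    {
        await Task.Delay(TimeSpan.FromSeconds(3)); // simulate network latency
        return 1234;                               // placeholder value
    }

    public static async Task<int> GetFollowingAsync()
    {
        await Task.Delay(TimeSpan.FromSeconds(3)); // simulate network latency
        return 321;                                // placeholder value
    }

    // Synchronous counterparts used by the earlier examples.
    public static int GetFollowers()
    {
        return GetFollowersAsync().GetAwaiter().GetResult();
    }

    public static int GetFollowing()
    {
        return GetFollowingAsync().GetAwaiter().GetResult();
    }
}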
Summary In this tutorial we learned how to write asynchronous MVC controllers to handle more concurrent requests, expedite single request handling using parallelism, and ultimately better utilize server resources. Asynchronous controllers work best when a potentially lengthy I/O operation needs to be performed as part of the controller's action. Writing asynchronous controllers is a powerful optimization that has become much easier to implement since MVC 4 and C# 5. I am so thankful for the new method of writing AsyncControllers. The old way was a nightmare and caused some issues for my OSS Restful Routing. Now that this is the best approach the problems resolved themselves. Great article. Perhaps in a follow-up, can you talk about performance implications? Async tasks aren't free and I'd be interested in what your findings are. For example, in my limited testing with 3 Virtual Users (using StresStimulus for Fiddler) I found a sync long-running action could serve about 250 req/s but an async equivalent (using just Task.Factory.StartNew) could serve 300 req/s. It's certainly more but I'd be more interested in knowing when and how to measure benefits for async tasks. For an action that was already fast, there was zero difference in performance. It's hard to find a good stress tester besides WCAT and that's too complicated for me; I just want something simple and easy to use. StresStimulus is OK but the trial version is useless with 3 VUs and the free version has no features besides 100 VUs. I'd love to know more about this as well. I listened to scott hunter give a talk on async / await and he seemed to indicate that the overhead that async brings is very minimal, and that bringing up a new thread and storing the state was fairly efficient and that there should be a lot of benefit for many-concurrent-user sites. All of this stuff very much deserves some proper benchmarks. The async await keywords should give asp.net a chance to compete with node.js in terms of efficiency by freeing up all blocked threads in the IIS thread pool. (and we get the benefits of no callback hell)
http://tech.pro/tutorial/1252/asynchronous-controllers-in-asp-net-mvc
CC-MAIN-2014-15
refinedweb
1,927
55.74
it is to add or change functionality without impacting existing core system functionality. Let's take a simple example. Suppose your company has a core product to track all the users in a sports club. Within your product architecture, you have a domain model represented by JPA POJOs. The domain model contains many POJOs, including – of course – a User POJO.

package com.alex.staveley.persistence;

/**
 * User entity. Represents Users in the Sports Club.
 *
 * Note: The SQL to generate a table for this in MySQL is:
 *
 * CREATE TABLE USER (ID INT NOT NULL auto_increment, NAME varchar(255) NOT NULL,
 * PRIMARY KEY (ID)) ENGINE=InnoDB;
 */
@Entity
public class User {

    /* Surrogate Key - automatically generated by DB. */
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @Id
    private int id;

    private String name;

    public int getId() {
        return id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}

Now, some customers like your product but they need some customisations done before they buy it. For example, one customer wants the attribute birthplace added to the User and wants this persisted. The logical place for this attribute is – of course – in the User POJO, but no other customer wants this attribute. So what do you do? Do you make a specific User class just for this customer and then swap it in just for them? What happens when you change your product User class then? What happens if another customer wants another customisation? Or changes their mind? Are you sensing things are going to get messy? Thankfully, one implementation of JPA, EclipseLink, helps out here. The 2.3 release (available since June 2011, the latest release being a 2.3.2 maintenance release published just recently, 9th December 2011) includes some very nice features which work a treat for this type of scenario. Let's elaborate. By simply adding the @VirtualAccessMethods EclipseLink annotation to a POJO we signal to EclipseLink that the POJO may have some extra (also known as virtual) attributes. You don't have to specify any of these extra attributes in code, otherwise they wouldn't be very virtual! You just have to specify a generic getter and setter to cater for their getting and setting. You also have to have somewhere to store them in memory, something like a good old hashmap – which of course should be transient because we don't persist the hashmap itself. Note: They don't have to be stored in a HashMap, it's just a popular choice! Let's take a look at our revamped User which is now extensible!

@Entity
@VirtualAccessMethods
public class User {

    /* Surrogate Key - automatically generated by DB. */
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @Id
    private int id;

    private String name;

    @Transient
    private Map<String, Object> extensions = new HashMap<String, Object>();

    public int getId() {
        return id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public <T> T get(String name) {
        return (T) extensions.get(name);
    }

    public Object set(String name, Object value) {
        return extensions.put(name, value);
    }
}

So, is that it? Well there's a little bit more magic. You have to tell EclipseLink about your additional attributes. More specifically: what their names and datatypes are. You do this by updating your eclipselink-orm.xml, which resides in the same META-INF folder that the persistence.xml is in.
<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/orm" version="2.3">
    <entity class="com.alex.staveley.persistence.User">
        <attributes>
            <basic name="thebirthplace" attribute-type="String" access="VIRTUAL">
                <column name="birthplace"/>
                <access-methods get-
            </basic>
        </attributes>
    </entity>
</entity-mappings>

Now this configuration simply states that the User entity has an additional attribute, which in Java is "thebirthplace", and that it is virtual. This means it is not explicitly defined in the POJO, but if we were to debug things, we'd see the attribute having the name 'thebirthplace' in memory. This configuration also states that the corresponding database column for the attribute is birthplace. And EclipseLink can get and set this attribute by using the generic get/set methods. You wanna test it? Well, add the column to your database table. In MySQL this would be:

alter table user add column birthplace varchar(64)

Then run this simple test:

@Test
public void testCreateUser() {
    User user = new User();
    user.setName("User1Name");
    user.set("thebirthplace", "donabate");
    entitymanager.getTransaction().begin();
    entitymanager.persist(user);
    entitymanager.getTransaction().commit();
    entitymanager.close();
}
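As a follow-up usage example, reading the virtual attribute back is the mirror image of the test above, using the generic get method (a sketch; it assumes it runs in the same test class, against the row written by testCreateUser()):

@Test
public void testReadUser() {
    // Query by the regular mapped attribute, then read the virtual one.
    User user = entitymanager
            .createQuery("SELECT u FROM User u WHERE u.name = :name", User.class)
            .setParameter("name", "User1Name")
            .getSingleResult();

    String birthplace = user.get("thebirthplace");
    assertEquals("donabate", birthplace);
}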
http://www.javacodegeeks.com/2012/01/extending-your-jpa-pojos.html
CC-MAIN-2014-41
refinedweb
919
56.45
Common Information Model Microsoft® Windows® 2000 Scripting Guide If you are going to build a house, you need to know how to read and interpret an architectural drawing. If you are going to build an electronic device, you need to know how to read and interpret a schematic diagram. And if you are going to write WMI scripts, you need to know how to interpret the WMI blueprint for management: the CIM repository. The CIM repository is the WMI schema that stores the class definitions that model WMI-managed resources. To emphasize the importance of the CIM and CIM classes, consider the scripts in Listing 6.6 and Listing 6.7. Listing 6.6, a slightly enhanced version of Listing 6.3, returns information about the services installed on a computer. Listing 6.6 Retrieving Service Information Using WMI and VBScript Listing 6.7, meanwhile, is another variation of the same basic script, this time using the Win32_OperatingSystem class. As you might expect, it returns information about the operating system currently in use on a computer. Listing 6.7 Retrieving Operating System Information Using WMI and VBScript There are only two differences between these scripts: the class name identifying the WMI-managed resource and the property values reported for each class. For example, the services script reports values for properties such as DisplayName, StartMode, and State; the operating system script reports values for properties such as LastBootUpTime, Version, and ServicePackMajorVersion. The fact that the same script template can be used to retrieve total physical memory, services, event log records, processes, and operating system information demonstrates the important role CIM classes play in WMI scripting. After you know how to write a script to manage one type of WMI-managed resource, you can use the same basic technique to manage other resources. Of course, knowing a managed resource class name and its corresponding properties is only part of the story. Before you can tap the full power of WMI scripting, you need to know a little bit more about the structure of the CIM repository and WMI classes for two important reasons: Understanding how to navigate the CIM repository will help you determine the computer and software resources exposed through WMI. Understanding how to interpret a managed resource blueprint (class definition) will help you understand the tasks that can be performed on the managed resource. Both points are true regardless of the WMI tool you use: Whether you use the WMI scripting library or an enterprise management application, you need to know how to navigate the CIM repository and interpret WMI classes. A less obvious yet equally important reason to learn about the CIM repository is that the CIM repository is an excellent source of documentation for WMI-managed resources. If you need detailed information about a WMI class, you can use the WMI SDK. But what if you do not need detailed information about a WMI class? Suppose you want to know only whether a specific class, method, or property is supported on the version of Windows you are managing. You can check the CIM repository of the target computer. 
For example, suppose you see this script in the Script Center on Microsoft TechNet:

Const JOIN_DOMAIN = 1
Const ACCT_CREATE = 2

Set objNetwork = CreateObject("Wscript.Network")
strComputer = objNetwork.ComputerName

Set objComputerSystem = GetObject _
    ("winmgmts:{impersonationLevel=Impersonate}!\\" & strComputer & _
    "\root\cimv2:Win32_ComputerSystem.Name='" & strComputer & "'")

ReturnValue = objComputerSystem.JoinDomainOrWorkGroup _
    ("FABRIKAM", "password", "FABRIKAM\shenalan", NULL, JOIN_DOMAIN+ACCT_CREATE)

You want to know whether the script will run on Windows 2000-based computers. As it turns out, it does not, because the Win32_ComputerSystem class does not support the JoinDomainOrWorkGroup method on Windows 2000. The JoinDomainOrWorkGroup method was added to the Win32_ComputerSystem class in the version of WMI included with Windows XP. But how would you find this out, other than trying the script and having it fail? One way is by using the collection of WMI tools described in "Exploring the CIM Repository" later in this chapter. A more powerful and flexible approach is to use the WMI scripting library. One useful property of WMI is the fact that you can use the WMI scripting library to learn about WMI itself. In the same way you write WMI scripts to retrieve information about managed resources, you can write WMI scripts to learn many interesting details about WMI itself. For instance, you can write WMI scripts that list all of the namespaces and classes in the CIM repository. You can write scripts to list all of the providers installed on a WMI-enabled computer. You can even write WMI scripts to retrieve managed resource class definitions. Whether you choose to use existing tools or create your own, you need a basic understanding of the structure of the CIM repository and its contents, as well as knowledge of how to interpret managed resource class definitions. The next section takes a closer look at the WMI blueprint for management - the CIM repository.
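Before moving on, a short script in the same spirit can ask the CIM repository of a target computer directly whether Win32_ComputerSystem exposes the JoinDomainOrWorkGroup method (a sketch; the comparison is done case-insensitively because WMI names are not case-sensitive):

strComputer = "."
blnFound = False

Set objWMIService = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & _
    strComputer & "\root\cimv2")

' Retrieve the class definition (not an instance) of Win32_ComputerSystem.
Set objClass = objWMIService.Get("Win32_ComputerSystem")

' Walk the class's methods as recorded in the CIM repository.
For Each objMethod In objClass.Methods_
    If StrComp(objMethod.Name, "JoinDomainOrWorkGroup", vbTextCompare) = 0 Then
        blnFound = True
    End If
Next

If blnFound Then
    Wscript.Echo "Win32_ComputerSystem supports JoinDomainOrWorkGroup on this computer."
Else
    Wscript.Echo "Win32_ComputerSystem does not support JoinDomainOrWorkGroup on this computer."
End If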
https://technet.microsoft.com/en-us/library/ee156559.aspx
CC-MAIN-2016-30
refinedweb
823
52.19
Note: These code examples do not apply to developing for Universal Windows (UWP) apps, because the Windows Runtime provides different stream types for reading and writing to files. For an example that shows how to read text from a file in a UWP app, see Quickstart: Reading and writing files. For examples that show how to convert between .NET Framework streams and Windows Runtime streams, see How to: Convert between .NET Framework streams and Windows Runtime streams.

Example: Synchronous read in a console app

The following example shows a synchronous read operation within a console app. This example opens the text file using a stream reader, copies the contents to a string, and outputs the string to the console.

Important: The example assumes that a file named TestFile.txt already exists in the same folder as the app.

using System;
using System.IO;

class Test
{
    public static void Main()
    {
        try
        {
            // Open the file using a stream reader.
            using (StreamReader sr = new StreamReader("TestFile.txt"))
            {
                // Read the stream to a string and write the string to the console.
                string line = sr.ReadToEnd();
                Console.WriteLine(line);
            }
        }
        catch (IOException e)
        {
            Console.WriteLine("The file could not be read:");
            Console.WriteLine(e.Message);
        }
    }
}

Imports System.IO

Class Test
    Public Shared Sub Main()
        Try
            ' Open the file using a stream reader.
            Using sr As New StreamReader("TestFile.txt")
                ' Read the stream to a string and write the string to the console.
                Dim line = sr.ReadToEnd()
                Console.WriteLine(line)
            End Using
        Catch e As IOException
            Console.WriteLine("The file could not be read:")
            Console.WriteLine(e.Message)
        End Try
    End Sub
End Class

Example: Asynchronous read in a WPF app

The following example shows an asynchronous read operation in a Windows Presentation Foundation (WPF) app.

Important: The example assumes that a file named TestFile.txt already exists in the same folder as the app.

using System;
using System.IO;
using System.Windows;

namespace TextFiles
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
        }

        private async void MainWindow_Loaded(object sender, RoutedEventArgs e)
        {
            try
            {
                using (StreamReader sr = new StreamReader("TestFile.txt"))
                {
                    string line = await sr.ReadToEndAsync();
                    ResultBlock.Text = line;
                }
            }
            catch (FileNotFoundException ex)
            {
                ResultBlock.Text = ex.Message;
            }
        }
    }
}

Imports System.IO
Imports System.Windows

''' <summary>
''' Interaction logic for MainWindow.xaml
''' </summary>
Partial Public Class MainWindow
    Inherits Window

    Public Sub New()
        InitializeComponent()
    End Sub

    Private Async Sub MainWindow_Loaded(sender As Object, e As RoutedEventArgs)
        Try
            Using sr As StreamReader = New StreamReader("TestFile.txt")
                Dim line = Await sr.ReadToEndAsync()
                ResultBlock.Text = line
            End Using
        Catch ex As FileNotFoundException
            ResultBlock.Text = ex.Message
        End Try
    End Sub
End Class

See also - StreamReader - File.OpenText - StreamReader.ReadLine - Asynchronous file I/O
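The same asynchronous read also works outside WPF. As a small additional sketch (assuming C# 7.1 or later, where Main itself can be async), a console version looks like this:

using System;
using System.IO;
using System.Threading.Tasks;

class TestAsync
{
    static async Task Main()
    {
        try
        {
            using (StreamReader sr = new StreamReader("TestFile.txt"))
            {
                // The calling thread is free while the read completes.
                string line = await sr.ReadToEndAsync();
                Console.WriteLine(line);
            }
        }
        catch (IOException e)
        {
            Console.WriteLine("The file could not be read:");
            Console.WriteLine(e.Message);
        }
    }
}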
https://docs.microsoft.com/en-us/dotnet/standard/io/how-to-read-text-from-a-file?view=netcore-2.2
CC-MAIN-2019-51
refinedweb
397
52.66
<html> <title>Debian GNU/Linux on a Thinkpad T42p</title> </html> <body> <h1><font color="#990101">Debian GNU/Linux on a Thinkpad T42p</font></h1> <h2><a href="/">William Stein</a></h2> <h3>Configuration</h3> I purchased a Thinkpad T42p for $2389 through Harvard's educational discount program: <br><br> <table align=center width=70% border=1 cellpadding=10 <tr><td> <ul> <li> Thinkpad T42p 2373 <li> 15 inch 1600x1200 flexview display <li> 1.8Ghz "Dothan" Pentium-M 745 processor <li> 1GB RAM internal (leaving one slot free) <li> 128MB ATI Mobility FireGL T2 <li> 802.11 a/b/g <li> Two USB 2.0 ports <li> Gigabit ethernet <li> 60GB 7200RPM drive <li> Nine-cell long-life battery </ul> </td> </tr> </table> <br> I've owned a 2Ghz Thinkpad T30, a 1.6Ghz T40, and now this T42p. The T42 weighs less than the T30, and in size is somewhere between the T40 and T30. It's thinner than the T30, but wider. The T42p is definitely noticeably larger than the T40, but not much heavier (I think it's just over a half pound heavier). <p>This page records some of my thoughts and experiences using the T42p with Linux. It is not supposed to be a systematic Debian installation guide. <p>My T42p is currently running Linux Kernel 2.6.6, which I downloaded from, and was the only kernel version in which I could get everything working that I need. Here is <a href="config">my configure file.</a> <h3>Other Pages</h3> There are several other pages on installing Linux on the T42p: <ul> <li><a href="">Lerdorf's page</a> is a short diary about installing <i>Debian</i> on his T42p. <li> <a href="">Heinen's page</a> is about installing and configuring <i>Debian</i> on the T42p. <li> <a href="">Gardner's page</a> is a fairly complete discussion of installing <i>Fedora Core 2</i> on a T42p. He emphasizes fixes for a major bug. <li> <a href="">This</a> was an article on installing <i>Gentoo</i> on the T42p, but it disappeared, so here is a <a href="gentoo.html">local copy of the google cache of that page</a>. <li> <a href="">Sanjiv's page</a> is a systematic discussion of installing <i>Fedora</i> on a T42 (not the T42p). </ul> <h3>Unresolved Problems</h3> If you have any idea how to solve one of these problems, or you do not have the same problem with whatever Linux distribution you are using, <a href="mailto:[email protected]">please send me an email</a>. <ul> <li> After swsusp2 hibernate the e1000 ethernet card doesn't work. Removing (by using "modprobe -r e1000") first sometimes fixes the problem, but <i>sometimes it doesn't</i>. <li> The built-in mouse doesn't work at all if I boot the computer up with the USB mouse plugged in. The problem vanished when I tried [other guy's] kernel config, so perhaps it's an ACPI vs apm issue. It might also have gone away now that I've switched to fglrx? <li> Printing graphics is very slow. (This means nothing to anyone but me, since you don't know what printer I have, etc.) </ul> <h3>Swapping the Hard Drive</h3> The first thing I did when I got the T42 was physically swap its 60GB 7200RPM drive with the 80GB 4800RPM drive in my old T40. I did this by removing one screw that holds each hard drive in place, flipping the laptops upright, sliding the drives out, swapping the mounting brackets, and reinserting. Note that the mounting brackets <i>look</i> like they are screwed on, but are not; you just pop them on and off. I swapped the drives because I had Debian Linux already installed on the 80GB drive, and I prefer 80GB to 60GB. 
The output of hdparm says that the 80GB drive is plenty fast <pre> sh-3.00# /sbin/hdparm -t /dev/hda1 /dev/hda1: Timing buffered disk reads: 92 MB in 3.06 seconds = 30.08 MB/sec </pre> <h3>Very Informal Benchmarks</h3> I ran some benchmarks that matter to <i>me</i> with the computer algebra system MAGMA doing the sort of calculations I do in my research. The 1.8Ghz T42p is currently the fastest single processor machine I have at this benchmark, just barely beating any single processor of <a href="/meccah">my Athlon 2800MP cluster</a>. The Athlon 2800MP's are clocked at just over 2.1Ghz, so that's not bad. <h3>Battery</h3> Using the 9-cell battery, with the video brightness set at minimum and "normal usage", gives me about 4 hours battery life (at least when I boot up on battery power, so the processor is at its slowest speed). This is about an hour less than my 1.6Ghz T40 would get with exactly the same battery. I get just over one hour from 10-month-old my old 6-cell battery, which suggests the 6-cell battery is "worn out". <h3>Speedstep</h3> Michael Clark created <a href="cupfreq-speedstep-dothan-1.patch">a patch</a> against kernel 2.6.7 that adds speedstep support for Pentium-M Dothan processors (it also depends on <a href="">bk-cpufreq.patch</a>. (Michael has also "ported it to 2.6.8-rc3 on top of the 2.6.8-rc3-mm1 cpufreq patch (these are probably the preferred ones to link too):" <a href="">bk-cpufreq.patch</a> and <a href="">cpufreq-speedstep-dothan-3.patch</a>. <p>Before Michael's patch came out, I read some of the code of <pre> /usr/src/linux-2.6.6/arch/i386/kernel/cpu/cpufreq/speedstep-centrino.c </pre> and rewrote it so that it would work with my processor. After Michael sent his patch (which is only to 2.6.7), I corrected the voltages in <a href="speedstep-centrino.c-version_2.6.6">my 2.6.6 version of speedstep-centrino.c</a>. My 2.6.6 version only adds support for 1.8GHz Dothan, and nothing else. <p>For whatever reason cpufreqd doesn't seem to work for me. However, directly echo'ing speeds into <tt>/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq</tt> and <tt>/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq</tt> <i>does</i> work for me, and that's enough because it means I have control (I can make a command or icon that speeds up or slows down my computer). For example, to set the computer at maximum: <pre> echo 1800000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq echo 1800000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq echo 1800000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed </pre> And, to set it at minimum speed: <pre> echo 600000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq echo 600000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq echo 600000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed </pre> <b>Here are some earlier remarks I had about issues with using speedstep.</b> <p>So far I have not been able to get speedstep_centrino working at all. If I start the T42p not plugged in, then it runs at 600Mhz, and will not change speed not matter what unless it is rebooted (or hibernated). If I start it up plugged in, then it runs at 1.8Ghz no matter what, even if I subsequently unplug it. This is the error I get when trying to use speedstep-centrino: <pre> </pre> <a href="">This web page</a> discusses a fix. Here's a verbatim copy of the relevant section of that web page: <table align=center width=70% kernel.org </a> but I had same results. 
Next I email to people that contribute to do this module, but they didn't give me any immediatly solution. The problem was that without this module is impossible to throttle the CPU frequency, an important feature on a mobile system. <p>: <p> <font face="Courier"> processor : 0<br> vendor_id : GenuineIntel<br> cpu family : 6<br> model : 13<br> model name : Intel(R) Pentium(R) M processor 1.70GHz<br> stepping : 6<br> cpu MHz : 1698.751<br> cache size : 64 sep mtrr pge mca cmov pat clflush<br> dts acpi mmx fxsr sse sse2 ss tm pbe tm2 est bogomips : 3375.10<br> </font> <p> Using 2.6.7 kernel sources I modified the "speedstep-centrino.c". This is the original code: <p> <font face="Courier"> static const struct cpu_id cpu_id_dothan_a1 = {<br> .x86_vendor = X86_VENDOR_INTEL,<br> .x86 = 6,<br> .x86_model = 13,<br> .x86_mask = 1,<br> };<br> </font> <p> And the modified code: <p> <font face="Courier"> static const struct cpu_id cpu_id_dothan_a1 = {<br> .x86_vendor = X86_VENDOR_INTEL,<br> .x86 = 6,<br> .x86_model = 13,<br> .x86_mask = 6,<br> };<br> </font> <p> Then I recompiled the linux kernel and the "speedstep-centrino" recognized my CPU and permits to throttle it's frequency, and therefore saves up battery and reduce temperature. </td></tr></table> <h3>Video</h3> I had a huge amount of trouble getting the fglrx modules working! This seems to work for me: <ol> <li> Download <a href="fglrx-4.3.0-3.9.0.i386.rpm">this RPM</a> directly from ATI for XFree 4.3.0. <li> Use alien to make <a href="fglrx_4.3.0-4.9_i386.deb">this deb package</a>. <li> Install the deb. <li> Build the tainted kernel modules: <pre> cd /lib/modules/fglrx/build_mod source make.sh cd .. source make_install.sh </pre> This works perfectly for me with kernel 2.6.x... <li> But here's where it was really complicated for me personally. I had install the xfree86 dri trunk, which makes copies of all the modules and puts them in /usr/X11R6/lib/modules-dri-trunk/... instead of /usr/X11R6/lib/modules, which is where the the ATI package puts them. Also, for some reason I had fglrx modules in my dri-trunk version, but those were too old and didn't support my laptop's very new display. Not knowing all this, I kept getting tons of errors when trying to start X (no device found). So I moved modules-dri-trunk to modules-dri-trunk.orig and made a symlink from modules to modules-dri-trunk. Really I should track down the config file that points to modules-dri-trunk <li> Run fglrxconfig to get an XF86Config file. <li> It works wonderfully. I have very very very fast 3d graphics, etc. <li><b>However,</b> swsusp or changing to a text console freeze the computer when 3d acceleration is enable. I commented out the following lines in XF86Config-4 and swsusp works again, but of course I have no 3d graphics. Until this is resolved, if I need 3d graphics, I'll restart X with these extensions enabled. <pre> # Load "dbe" # Double buffer extension # Load "glx" # libglx.a </pre> (Note: I don't know if turning off dbe was necessary.) </ol> <b>**BIG WARNING**:</b> The free open source radeon driver "works" with this laptop. However, while using it I had literally dozens of "random" complete freezes of my laptop. It cost me many hours to determine the source of the freezes, since there were almost no clues. <h3>Power Management</h3> I am using APM instead of ACPI. I would like to use ACPI since APM is quickly becoming legacy, but I've had serious issues I couldn't resolve with swsusp2 and ACPI. 
For example, removing the usb_uhci module during swsusp2 suspend crashes the system. Etc. Pressing Fn-F4, which is for apm suspend, causes the laptop to sort of suspend. However, the moon light blinks and the laptop sort of just sits there. I wish there were a way to disable suspend on Fn-F4, since I might accidently hit it. Pressing Fn-F3 to blank the screen works perfectly (as long as I don't accidently bump Fn-F4). <h3>Linux Suspend to Disk (Hibernate): swsusp2</h3> I have APM and swsusp2 working well, and am able to suspend and resume even under X11 while using the ATI drivers (but only with 3d acceleration support off, by removing the glx option from XF86Config). I have 1GB RAM and a smallish 1GB swap partition, so I'm happy this actually works, but it does work, and well. <ol> <li> Installed plain vanilla kernel 2.6.6, so I could patch it, since the swsusp2 patch didn't work for me with Debian kernel 2.6.6. <li> I installed swsusp2 for kernel 2.6.6 <li> I had to remove the CONFIG_REGPARM from kernel config file, so my <tt>.config</tt> now says <pre> # CONFIG_REGPARM is not set </pre> If I didn't do this, then I got errors about stuff not matching on resume from hibernate. I also had this problem: <pre> 5.16 I've suspended and resumed and now I can't open new X applications and my /tmp directory is empty (on Debian) A recent version of the initscripts package decided to blow away temporary directories when calling mountnfs.sh (See Bug #227112). The simple solution is to remove mountnfs.sh from your SWSUSP_STOP_SERVICES_BEFORE_SUSPEND in /etc/suspend.conf and add your extra NFS mounts into SWSUSP_UMOUNTS. </pre> </ol> <h3>Getting Software to suspend to work with kernel 2.6.7</h3> So far I haven't succeeded. Here are some notes about my attempts. <p>I patched kernel 2.6.7 with <pre> <a href="software-suspend-2.0.0.102-for-linux-2.6.7.tar.bz2">software-suspend-2.0.0.102-for-linux-2.6.7.tar.bz2</a> </pre> When I first tried to compile, I had configured the swsusp2 to not include debugging print support, since I thought I didn't want that. I could <i>not</i> get the kernel to compile; there were many small problems I fixed, but they all were related to avoiding the debug output. So I included debug support (by setting that option in "make xconfig"), and the kernel compiled. <p>All that said, I still had problems trying to compile. There was an error compiling <pre> /usr/src/linux/arch/i386/power/suspend2.c </pre> but I was able to fix it by putting <pre> #ifdef CONFIG_SMP </pre> at the beginning of <tt>smp_suspend2_lowlevel(void* info)</tt> and #endif at the end of that function. I don't understand why smp_suspend2_lowlevel has to be defined if CONFIG_SMP is not set. <p>Anyway I can compile as modules, but it cancels suspend. If I use an initrd (as instructed), then it fails on boot. If I compile swsusp2 into the kernel (not as a module), then the kernel crashes on boot. <h3>Wireless Networking</h3> I installed the madwifi drivers from CVS source: <pre> cvs -z3 -d:pserver:[email protected]:/cvsroot/madwifi co madwifi cd <a href="madwifi">madwifi</a> make (as root) make install </pre> These don't work for me, in sense that freezes at ifconfig ath0 up. However, I got it to work by using <a href="">this remark</a>. The ifplugd was causing a kernel fault! So I had to turn of ifplugd for the ath0 interface: <pre> form:~# more /etc/default/ifplugd (after I edited it!)from here</a>. I also changed vo to vo=xv in /etc/mplayer/mplayer.conf since xv rendering is *vastly* faster. 
<li>kismet: I installed it using <pre> apt-get install kismet </pre> I also installed <pre> apt-get install festival apt-get install ethereal </pre> in order to get nice things and libwiretap, which kismet needs. In /etc/kismet/kismet.conf I put <pre> #source=cisco,eth0,ciscosource #source=madwifi_b,ath0,athsource </pre> I created an application on the desktop that launches kismet as root. I also had problems with excessive loging by wireless drivers. Solution: I commented out line 4028 of <pre> /usr/src/modules/linux-wlan-ng-0.2.1pre21/src/prism2/driver/ /* WLAN_LOG_WARNING("Implement me.\n"); */ </pre> <li>Correct DPI with KDM. I changed /etc/kde3/kdm/Xservers as follows: <pre> ::0 [email protected] /usr/X11R6/bin/X -nolisten tcp vt7 -dpi 120 </pre> Note the added dpi option. <li>VMware. swap caps and ctrl in windows xp (under vmware, though not relevant):. </ul> <h3>Freezing ssh shells</h3> I have a comcast cable modem, and my ssh shells freeze regularly, without lots of network activity. This is extremely annoying. Anyway, the following is a workaround for an openssh2 client. Add to /etc/ssh/ssh_config <pre> ServerAliveInterval 20 ForwardX11 yes #also useful </pre> <h3>My Sprint USB Sanyo 4900 Phone</h3> <ol> <li> Compiled CONFIG_USB_ACM=m the acm.o module. (Selected this in the USB section, near the middle.) <li> modprobe acm <li> Plugin phone: <pre> May 29 16:40:01 localhost kernel: Product: SANYO USB Phone May 29 16:40:01 localhost kernel: SerialNumber: Serial Number May 29 16:40:01 localhost kernel: ttyACM0: USB ACM device May 29 16:40:01 localhost default.hotplug[3190]:190]: invoke /etc/hotplug/usb.agent () May 29 16:40:01 localhost default.hotplug[3191]:191]: invoke /etc/hotplug/usb.agent () May 29 16:40:05 localhost usb.agent[3190]: Setup acm for USB product 474/701/0 May 29 16:40:05 localhost usb.agent[3191]: Setup acm for USB product 474/701/0 May 29 16:40:05 localhost usb.agent[3190]: acm: already loaded May 29 16:40:05 localhost usb.agent[3191]: acm: already loaded </pre> The following configuration steps are copied <a href="">this web page</a>, and they work perfectly for me. <table align=center width=70% border=1 cellpadding=10 Now that you've got an ACM device, you just need to create a dialup connection. Note that the ACM device name may vary - just search through /dev for the proper device. On my (default) Debian install, it's /dev/ttyACM0. On RedHat 7.3, it's /dev/input/ttyACM0. Once you've found that, the number to dial to get a connection to the Vision network is '#777' (which is #PPP on the keypad). So, use whatever method you prefer to create a dialer that will dial #777. On my Debian box, I'm using the standard 'pon' scripts. Here are the config files I use: <pre> /etc/ppp/peers/sprint: # "" </pre> </td></tr></table> </ol> <h3>Canon USB 2.0 LIDE 20 CanoScan scanner</h3> It worked in about one minute. <ol> <li> apt-get install sane xsane <li> Run xsane as root. <li> It just works! </ol> <a href="">This page</a> was helpful. I added was to the scanner group (changed /etc/group). That should fix the permissions problems. <h3>PCMCIA</h3> <li> After swsusp2 hibernate the pcmcia system doesn't work. Restarting it with <pre> /etc/init.d/pcmcia restart </pre> as root fixes the problem. <h3>Writing CD's and DVD</h3> Writing CD-R's and DVD's works, but I can't erase DVD's from K3B. I have only tried writing data DVD's, not DVD's to play on a DVD player. It takes over 20 minutes to write a full 4.3GB data DVD. 
<li> When I started k3b with a DVD writer, it asked me to install a few other packages and I did (using apt-get). Unfortunately, I don't remember exactly what their names were. They were standard dvd authoring packages. </ol> <!-- <hr> <hr> <h2>The Stuff Below is a Mess</h2> <h3>No Speedstep</h3> * No speedstep_centrino working at all, and starting up on battery makes machine very slow, even after plugged in. Attempt: Enabled the "Relaxed speedstep capability check module option under Power management -> Cpu Freq . Effect: Nothing, still get this error: this means that speedstep is not detected because my processor is too new. <h3>Random hangs</h3> (I think switching to the proprietary ATI fglrx drivers fixed this problem.) <ul> <li> I had repeatable hangs when using vmware in full screen mode. Now I just have random hangs, that seem to coincide with a disk access, and have nothing to do with my screen. <li> The T42 just had a random crash after about 15 minutes: <lu> <li> *not* using madwifi at the time, but the madwifi modules were loaded... <li> straight boot, not resume from hibernate <li> I was listening to music. <li> no vmware involved anywhere. <li> The freeze occured when I was paging through a kernel documentation file in an emacs window. <li> I'm using radeon driver. </ul> <li> As a possible remedy I did this in the BIOS: <ul> <li> turned off PCI power management <li> turned off the other power management option at the bottom of the Power page (?) <li> turned off disk dma (which is in the Network section of BIOS). Note that the disk is still fricking fast, even with "DMA off", i.e., sh<li>3.00# /sbin/hdparm -t /dev/hda1 /dev/hda1: Timing buffered disk reads: 92 MB in 3.06 seconds = 30.08 MB/sec <li> I manually removed the madwifi drivers after reboot (they'll of course return after suspend or reboot). </ul> <li> After all the above "remedies" and booting into acpi mode, it still totally fails. <li> I recompiled the whole kernel with somebody else's T42p config and it still almost immediately froze. I'm thinking the latest freezes have something to do with my ext3 journaled filing system. Hopefully after one or two very clean reboots with </ul <h3>APM vs ACPI</h3> I should switch to ACPI, since apm is dead and laptop support for it is getting buggier and buggier. Try: compile the acpi modules: <ul> <li> Turn off the Sleep States experimental option. <li> Need to look at /proc/acpi stuff <li> It sort of works, but swsusp2 crashes hard on the usb stuff, or so it seems. Maybe not removing usb modules from kernel on suspend (option in /etc/suspend.conf) would help. </ul> This was a problem with acpi when I used the T40. Let's see what happens with the T42p. <pre> * suspend/resume sometimes fails, though it's generally OK. Maybe try using ACPI again? ACPI seems to seriously conflict with my internal madwifi wireless card. I get tons of hardware resets. Yuck! Also, resuming from ACPI messes up the middle mouse button on the Thinkpad T40 Synaptics touchpad. So, I'm sticking with APM. </pre> <h3>Mouse problems</h3> Built-in mouse doesn't work if boot up with USB mouse in. Problem vanishes when I use other guy's config, so I'm betting it's an ACPI vs apm issue. <h3>GL direct rendering</h3> It doesn't work at all yet, at least with the open source radeon drivers. For example, <pre> [email protected]:~$ glxinfo |more name of display: :0.0 display: :0 screen: 0 direct rendering: No </pre> <h3> Missing spell in emacs. 
Install.</h3> Fix: apt-get install spell <h3> Stupid font error messages</h3> whenever I start programs in KDE. They should be get-rid-able. Solution: These are caused by bad directories under /usr/share/icons. So I just tar-balled and deleted the ones causing problems. Since I only use noia_kde_100, which doesn't cause problems, this fixed the problem for me. <li> Mouse too fast under kernel 2.6.6: Solution: <p>"You probably have two "mouse" entries there, one pointing to /dev/psaux and the other to /dev/input/mice, so that you can get both your PS/2 and USB mouse working on 2.4. <p>2.6 uses the input subsystem for both PS2 and USB, and thus both devices will report events from both mice, resulting in doubled events. <p>Remove either the /dev/psaux or /dev/input/mice entry, depending what suits you better for 2.4 compatibility should you ever need go back to 2.4." <p>This fixed the problem for me! And, strangely enough, the middle mouse button works now too! <li> External Display for presentations: <ol> <li> works fine, as it is always on. Do *not* hit Fn-F7, as that kills the screen dead. <li> Use Control Center -> Peripherals -> Display to set the resolution to one that the external display can use. Resolution switching works fine "on the fly"! (Which is pretty cool, by the way.) <li> installed xhkeys: <pre> -- i don't think this works at all </pre> To start it, I use the following command, and it works!: <pre> tpb --osd=on --verbose --thinkpad="/usr/bin/X11/xterm -T ntpctl -e ntpctl" </pre> (The nvram module is automatically loaded.) Since I always want this going, I created a command called mytpb in bin, and added a link to it in the Autostart folder. <li> USB hard drives: work fine <li> USB keyboard, USB mouse -- I removed the ehci_hcd module and the uhci_hcd module then inserted ONLY the uhci_hcd module, and the USB mouse and keyboard both worked perfectly, and I could even just plug and unplug the mouse during an X-session without having to restart and it worked perfectly. WOW. <p>Subsequently inserting the ehci_hcd module does not break this. I might have to just make sure they are insmod'ed in the right order. PROBLEM: Here's how I got my USB mouse to work with X: <pre> Section "InputDevice" Identifier "Configured Mouse" Driver "mouse" Option "CorePointer" Option "SendCoreEvents" "true" Option "Device" "/dev/psaux" Option "Protocol" "PS/2" Option "Emulate3Buttons" "true" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" Identifier "Generic Mouse" Driver "mouse" Option "Device" "/dev/input/mice" Option "CorePointer" Option "SendCoreEvents" "true" Option "Protocol" "ImPS/2" Option "Emulate3Buttons" "false" Option "ZAxisMapping" "4 5" EndSection Section "ServerLayout" Identifier "Default Layout" Screen "Default Screen" InputDevice "Generic Keyboard" InputDevice "Configured Mouse" "CorePointer" InputDevice "Generic Mouse" # InputDevice "Generic Mouse" "AlwaysCore" EndSection </pre> When I had AlwaysCore there, X would hang on resume. I also used the APM script from <li> 3D GL screen savers -- had to apt-get install xscreensaver-gl <li> writing CD-R and DVD's works, but I can't erase DVD's from k3b, maybe. </ol> </ol> </ol> <h3>SOUND</h3> <pre> A trick with sound was that it was muted, as always with ALSA. I created this script and put it in Autostart: [email protected]:~$ more bin/set_volume #!/bin/bash amixer sset PCM 23 >/dev/null amixer sset Master 100 >/dev/null amixer sset Mic 0 >/dev/null # mute -- However, this really isn't very good sound support. 
Only one thing can use the soundcard at once, and if more than one does than it gets completely messed up!! -- I just installed ALSA, but it doesn't seem to work, after inserting the right module. I think I need a reboot. Will come back to this. I'm not going to use this ALSA install, since it just doesn't work. * The mouse didn't work after using kernel of and that was because he compiled the psmouse support as a module, which wasn't loaded by default. # "" ---------------------------------------------- * GPS crashes VMware Win xp when run under kernel 2.6.6 GPS mapping software for Linux: GPS driver for Earthmate GPS: -- also GPS crashes vmware whenever accessed w/ kernel 2.4.26 as well, even when just using serial interface. PLAN: * Buy USB -> Serial convert. * Config for linux * Try to use my Garmin GPS under vmware -- in ms streets or Street atlas * Or Try under Linux * PCMCIA doesn't work at all in 2.6. Need this for my memory cards. It works fine in kernel 2.4.26. I use cardinfo to correctly remove cards. I made a desktop shortcut to this that runs it as root. * DVD playing: Installed ogle, but it is insanely slow. Hmm. Rebooted and DMA worked (with correct kernel 2.6.6, and OGLE was also fast.) apt-get install ogle ogle-gui * VMware USB devices !? - gps XX - ipaq XXX * Nicely automated networking : These scripts hecke:/home/was/bin# more eth su -c '/sbin/ifconfig ath0 down; modprobe -r ath_pci;\ /sbin/ifup eth0; /sbin/route' hecke:/home/was/bin# more wlan su -c '/sbin/ifdown eth0; \ modprobe wlan; \ modprobe ath_hal; \ modprobe ath_pci; \ ifconfig ath0 up; \ iwlist ath0 scanning; \ dhclient ath0; \ /sbin/route' * All my files from old disk, and old debian linux install: - be very careful here!! * ACPI: 1. Manually modprobed: button battery ac fan thermal processor 2. installed acpi and acpid unstable package: apt-get install acpi apt-get install acpid These will autoload the above modules... ick ... no suspend-to-ram... * Shared files between Linux and Windows * Delete old debian from hard drive * Install Adobe suite in Windows and have it work. * Copy photos from USB hard drive (progress) Kernel-bound: * Laser printer: 1. Installed printtool, a redhat tool for setting up printers. 2. Using printtool, I deleted lp, and set up Name: lp0 Spool: /var/spool/lpd/lp0 Printer device: /dev/usb/lp0 Input Filter: latex Samsung one. 3. Install apsfilter, since it's "better". Note that apsfilter uninstalls printtool It sucked -- so I went back to printtool. 4. I also installed lprng: apt-get install lprng and this ... 5. lprng seems good, so I'm installing lprngtool (GUI) and some things it requests: apt-get install lprngtool ncpfs recode djtools ifhp ipx xpdf-utils 6. Waste of time. .. I just installed printtool again, and used my old python script to filter stuff. * Speedstep -- slow down, speed up, etc. as it should Seems like maybe unpluging slows it down and a reboot is needed to speed it up again, which is horrible! 1. I have something called tpctl installed "thinkpad control" Now using kernel 2.6.6 - note that /sys/... is there in particular, /sys/devices/system/cpu/cpu0/cpufreq/ - apt-get install cpufreqd - I then edited /etc/cpufeqd.conf This really works well. Edit this file and restart cpufreqd if I want hi-speed even when on batteries... * kismet: -- works only for built-in card. can't get it to work for the linksys pcmcia card. BUMMER. I think builtin card just doesn't support it! 
-- form:/usr/share/doc/kismet# zless README.gz * XD Memory cards don't work in kernel 2.6.6, microdrive? Fix -- I kept STUPIDLY missing the PCMCIA IDE support option under Device Drivers -> ATA/ATAPI... -> Enhanced ... support * 3D VIDEO: 1. Downloaded and installed debs from 2. cd /usr/src/ tar zxvf fglrx-4.3.0-3.9.0.tar.gz fakeroot make-kpkg --added-modules fglrx-4.3.0-3.9.0 modules_image dpkg -i fglrx-4.3.0-kernel-2.6.6_3.9.0-3+Custom.1_i386.deb 3. Quit X, unload radeon driver, insert fglrx driver: modprobe -r radeon modprobe fglrx 4. Modified XF86Config as suggested at above italian web site. It was crucial to set the AGPGART option to yes!! Option "UseInternalAGPGART" "yes" Please bear in mind, whenever you ?apt-get upgrade? your system and the X-libraries are touched even in the slightest, you will find that mplayer begins to winge like a tart along the lines of ?Unable to find XF86GetVidModeLine? or something. This is your cue to re-run the ?./install.sh? script in the DRI package folder; there is no need to restart X though fortunately. --- This was total crap, since suspend/resume broke. Instead I: 1. Followed directions at 2. Nothing worked until I finally compiled the following modules into my kernel: intel_mch_agp 10256 0 intel_agp 17308 1 agpgart 32808 3 intel_mch_agp,intel_agp --- * Excessive LOG of wifi card: Modified emacs src/prism2/driver/hfa384x.c * Suspend with juk playing screws up audio drivers. Bummer. Fortunately dcop juk Player gives list of dcop commands and dcop juk Player pause pauses playback. So just add dcop juk Player pause to my "off" script. Suspend/Resume freezes machine in text console. No solution. ** * win-Modem: Works easily, using drivers and README got here!!! * Good backup system -- mirror /home to pocketec using rsync + Need to exclude something (some media) and backup / as well. + For now just backup /home. * Setup Orinoco math department card: - make sure orinoco_cs.ko module compiled - try to use and fail. - Modify end of /etc/pcmcia/wlan-ng.conf card "I-Gate 11M PC Card" # version "INTERSIL", "I-GATE 11M PC Card / PC Card Plus", "Version 01.02" manfid 0x0156, 0x0002 bind "orinoco_cs" - /etc/init.d/pcmcia restart - iwconfig eth1 up - dhclient eth1 - unfortunately kernel driver doesn't support monitor mode so I downloaded latest driver from CVS and built it: export CVS_RSH="ssh" cvs -z3 -d:ext:[email protected]:/cvsroot/orinoco co orinoco * How to get tar file from dpkg: dpkg --fsys-tarfile ssh2_2.0.13-7_i386.deb > a.tar I had to do this to install both ssh and ssh2 at the same time. TODO: * Firewire and downloading of DV * GPS -- either get it to work, or buy a serial adapter and hope to get serial adapter to work! </pre> --> </body> </html>
https://share.cocalc.com/share/5d54f9d642cd3ef1affd88397ab0db616c17e5e0/www/t42/:?viewer=share
CC-MAIN-2019-18
refinedweb
5,491
65.73
How to Write Middleware for Redux

Writing middleware for Redux that gets executed on every action is easy. Here's a method.

Execute on Every Action

Redux middleware is great for a case where you'd like to do something every time an action is dispatched to Redux. The scenario we'll look at is alert dismissal. It's common to have alerts pop up in the UI as different things happen in an app. In this case, we're going to look at automatic alert dismissal. In this app, any action payload can contain an alerts array. For those alerts that have no real user call to action -- for instance, "this action succeeded" types of alerts -- we'll let the system auto dismiss those. We'll accomplish this via a Redux middleware.

Structure of Middleware

A Redux middleware is a function that returns a function that returns a function. The first function gives you access to the redux store, the 2nd gives you access to the next function which you'll need to call in order to continue the middleware chain, and finally the last function gives you access to the current action. This structure can be written like:

const alertDismissal = store => next => action => {
  // alert dismissal things here...
  return next(action)
}

Dismissing Alerts

Now to do the real work of dismissing the alerts. We won't cover the reducer implementation here; just know that alerts can be added to or deleted from a collection of alerts in the store. If any action has an alerts array, it'll be added to the alerts collection. If any of those alerts are of type SUCCESS, we'll queue up a new action (dismissAlert) that will eventually delete that specific alert. Because there may be multiple alerts, we'll also stagger their dismissal so they can be animated to fly off-screen in rapid sequence. That setup might look like:

import * as actions from './actions'

const delay = 1000

const alertDismissal = store => next => action => {
  if (Array.isArray(action.alerts)) {
    action.alerts
      .filter(alert => alert.level === 'SUCCESS')
      .forEach((alert, i) => {
        setTimeout(_ => {
          store.dispatch(actions.dismissAlert(alert.id))
        }, delay * (i + 1))
      })
  }
  return next(action)
}

export default alertDismissal

Then we make sure to export this function in preparation for the next step.

Add Middleware to Store

To add a Redux middleware to the Redux store, in the Redux store initialization we use the applyMiddleware function from redux. That might look like:

import { applyMiddleware, combineReducers, createStore } from 'redux'

import alertDismissal from '../../alerts/middleware/alert-dismissal'
import * as reducers from './reducers'

const createStoreWithMiddleware = applyMiddleware(
  alertDismissal
)(createStore)

const rootReducer = combineReducers(reducers)
const store = createStoreWithMiddleware(rootReducer)

export default store

There's a fair amount of tying it all together in there, but the applyMiddleware call is the most interesting to this exercise. Now the next time you dispatch any action payload that has an alerts array, you'll be on your way to auto dismissal via Redux middleware.
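The post skips the reducer and action-creator side on purpose, so here is a minimal sketch of what that other half could look like. The action type name, the alert shape, and the file layout are assumptions for illustration, not code from the original app.

// actions.js -- hypothetical action creator the middleware dispatches
export const dismissAlert = id => ({
  type: 'DISMISS_ALERT',
  id
})

// reducers.js -- minimal alerts reducer: collect alerts carried by any
// action, and drop a single alert when DISMISS_ALERT arrives
export const alerts = (state = [], action) => {
  if (action.type === 'DISMISS_ALERT') {
    return state.filter(alert => alert.id !== action.id)
  }
  if (Array.isArray(action.alerts)) {
    return [...state, ...action.alerts]
  }
  return state
}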
https://jaketrent.com/post/write-middleware-redux/
CC-MAIN-2022-40
refinedweb
489
53.71
Neil Mitchell wrote:
> Does it really have to change statically?
>
>> I use code like:
>> #ifdef __WIN32__
>>   (Windows code)
>> #else
>>   (Linux code)
>> #endif
>
> In Yhc, we use a runtime test to check between Windows and Linux. It
> has various advantages - we only have one code base, everything is
> type checked when we compile, and we've never run into any problems
> once despite developing on two different platforms.
>
> ;a=headblob;f=/src/compiler98/Util/FilePath.hs

There's a lot to be said for using runtime tests instead of conditional
compilation, I agree. However, it can't be used exclusively: you can't
choose between two foreign calls this way, for example, because one of
the calls won't link.

Cheers,
	Simon
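For a concrete picture of the runtime-test style (my own sketch, not code lifted from Yhc), the System.Info module in base is usually enough for a coarse platform switch:

import System.Info (os)

-- Choose a platform-specific value at runtime instead of via CPP.
-- The "mingw32" string is what GHC reports on Windows builds; treat the
-- exact set of strings as something to verify for your compiler.
nullDevice :: FilePath
nullDevice
  | os == "mingw32" = "NUL"
  | otherwise       = "/dev/null"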
http://www.haskell.org/pipermail/haskell-cafe/2006-March/014939.html
CC-MAIN-2014-42
refinedweb
121
57.81
Subject: Re: [boost] bcp update #2: namespace renaming
From: Daniel James (daniel_james_at_[hidden])
Date: 2010-01-01 06:26:33

2009/12/28 John Maddock <john_at_[hidden]>:
> OK, since folks keep asking for it, bcp will now do namespace renaming, the
> two new options are:
>
> --namespace=newname
>
> Renames namespace boost to "newname" in all the source files copied.

There are a few places where another namespace is used. A quick search
found boost_asio_handler_alloc_helpers, boost_concepts (in the iterator
library), boost_optional_detail, boost_132 (in serialization) and
boost_swap_impl. Will that be a problem?

Daniel
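For anyone trying the option out, the invocation is presumably something along these lines (the module name and destination directory here are made up for the example):

bcp --namespace=myboost regex /tmp/myboost-subset

The copied headers and sources should then use namespace myboost instead of boost -- modulo the extra namespaces listed above, which is exactly Daniel's question.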
https://lists.boost.org/Archives/boost/2010/01/160438.php
CC-MAIN-2021-49
refinedweb
112
59.19
Hello. Tell me how to set up the due date field to exclude the possibility of choosing a past date. Maybe you will advise plugins.

You can add a field mapping for the due date field in the behaviour section and check, every time the user enters a value, whether the date is in the past or not. If it is in the past, you can show an error message. See the Behaviours API; the scripts are a mix of Groovy and JS.

I found the script, but it does not work. Can you help?

import groovy.time.TimeCategory
import java.util.Date

def DateField = getFieldByName("due date")
Date DateVal = (Date) DateField.getValue()
Date now = new Date()

if(underlyingIssue.getDueDate().before(now)){
    DateField.setError("Error")
    DateField.setFormValue(null)
}else{
    DateField.clearError()
}

Can you share a screenshot of the configuration? Also, can you add some logging to your code,

log.info " .....text here ..."

so that we know for sure that the code is being executed. For getting the due date field object you can use the code

def dateField = getFieldById(getFieldChanged())

Also, issue.getDueDate() returns an object of "class java.sql.Timestamp" and not "Date". As your code does a date comparison, I would suggest adding the right type cast, and maybe just printing the value of the "due date" field in the logs before the comparison to make sure both objects are of the same type. Since Timestamp is a subclass of Date, you can assign the value to a Date.
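Putting the advice in this thread together, a behaviour script along these lines should be close. The exact runtime type of the field value can vary, so treat the cast and the field wiring as assumptions to verify in your own instance:

// Server-side script for a Behaviour mapped to the Due Date field
def dueDateField = getFieldById(getFieldChanged())
def value = dueDateField.getValue()

if (value) {
    // The stored value may be a java.sql.Timestamp; Timestamp extends
    // java.util.Date, so it can be compared against a plain Date.
    Date selected = (Date) value
    if (selected.before(new Date())) {
        dueDateField.setError("The due date cannot be in the past.")
    } else {
        dueDateField.clearError()
    }
} else {
    dueDateField.clearError()
}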
https://community.atlassian.com/t5/Jira-questions/Due-date-field-control/qaq-p/990666
CC-MAIN-2020-40
refinedweb
273
64.91
#ifdef TEST_ECHO_OUTPUT
#define TEST_ECHO_MACRO(x) my_debug_out(#x); // expands to call
#else
#define TEST_ECHO_MACRO(x) // expands to nothing
#endif

#ifdef TEST_ECHO_OUTPUT
#define TEST_ECHO_MACRO(x) my_debug_out(x) // expands to call
#else
#define TEST_ECHO_MACRO(x) // expands to nothing
#endif

void my_debug_out(int x) { /* ... */ }

int main(void)
{
    int a = 1;
    TEST_ECHO_MACRO(a);
    return 0;
}

#ifdef TEST_ECHO_OUTPUT
#define TEST_ECHO_MACRO(x) my_debug_out(x); // expands to call
#else
#define TEST_ECHO_MACRO(x) // expands to nothing
#endif

TEST_ECHO_MACRO("test output")

#ifdef TEST_ECHO_OUTPUT
#define TEST_ECHO_MACRO(x) my_debug_out(x) // Will expect a semi-colon
#else
#define TEST_ECHO_MACRO(x) do {} while(false) // Will expect a semi-colon also, but will be optimized away
#endif

#include <stdio.h>

#define DO_TRACE // Comment me to change behavior

#ifdef DO_TRACE
#define TRACE(x) printf("trace: %d\n", x);
#else
#define TRACE(x)
#endif

void foo() { printf ("Part2\n"); }

int main()
{
    int n = 0;
    if (n < 10)
        printf ("Part1\n");
    else
        TRACE(n) // When DO_TRACE is disabled the semi-colon goes with it so foo() becomes part of this if/else
    foo();
    return 0;
}

#ifdef DO_TRACE
#define TRACE1(x) printf("trace: %d\n", x)
#else
#define TRACE1(x)
#endif

#ifdef DO_TRACE
#define TRACE2(x) printf("trace: %d\n", x);
#else
#define TRACE2(x) ;
#endif

// Check the code below with trace both on and off.
if ( a ) TRACE1(x) else TRACE1(y)
if ( a ) TRACE1(x); else TRACE1(y);
if ( a ) TRACE2(x) else TRACE2(y)
if ( a ) TRACE2(x); else TRACE2(y);

The macros involved are a bit cumbersome. Perhaps better is to call the function and let it do an immediate return if debugging is disabled. Another solution is to encapsulate the function call:

#define MY_DEBUG_OUT(a) {if (TEST_ECHO_OUTPUT) my_debug_out(a);}

Good Luck, Kent

Sorry, what do you mean "expands to nothing" exactly? Literally, blank space? (Being sure because I will need to defend my decision to superiors.)

>>space?
Yes, exactly. The preprocessor will just remove the line if TEST_ECHO_OUTPUT is not set.

And for #define MY_DEBUG_OUT(a) {if (TEST_ECHO_OUTPUT) my_debug_out(a);}, I think that's a logical test that is left over if test mode is not #defined, isn't it? Don't want that either; my superiors would tell me to live with the three-line calls.

#define MY_DEBUG_OUT(a) {if (TEST_ECHO_OUTPUT) my_debug_out(a);} will cause a compiler error if TEST_ECHO_OUTPUT is not set, I am not sure that this is what you want...

jkr: true, although I was planning on setting TEST_ECHO_OUTPUT to either 1 or 0. But your comment makes me wonder... a more experienced coworker does it differently:

#define TEST_ECHO_OUTPUT // comments this out to disable
#ifdef TEST_ECHO_OUTPUT

whereas I do this:

#define TEST_ECHO_OUTPUT 1 // change to 0 to disable
#if TEST_ECHO_OUTPUT

Is his approach more traditional? Does it result in tighter code?
Is there a disadvantage to my approach, other than the danger of omitting a #define altogether and breaking the code (as you point out)? (Thanks. Sorry to piggy back this question, but I seem to have expert attention.) Simply this is more than enough (note that I removed the ; at the end of the macro too - to avoid confusion) : Open in new window His approach is how this is normally done, yes. The reason being, the semantics of exists or otherwise are clearer than does it exists and does it have a value and if so what is that value. Because it was used in the Q in my_debug_out(#x); and would be required for my_debug_out(a); I'd also rather not use it. Do you mean the "#"? My compiler documentation tells me that's how you pass parameters to a macro. If I have that wrong, I will figure that out momentarily... >>macro. No, not really: The number-sign or stringizing operator (#) converts macro parameters (after expansion) to string constants. It is used only with macros that take arguments. If it precedes a formal parameter in the macro definition, the actual argument passed by the macro invocation is enclosed in quotation marks and treated as a string literal. The string literal then replaces each occurrence of a combination of the stringizing operator and formal parameter within the macro definition. White space preceding the first token of the actual argument and following the last token of the actual argument is ignored. Any white space between the tokens in the actual argument is reduced to a single white space in the resulting string literal. Thus, if a comment occurs between two tokens in the actual argument, it is reduced to a single white space. The resulting string literal is automatically concatenated with any adjacent string literals from which it is separated only by white space Open in new window I've known some compilers to get ratty if you put a ; at the end of the usage of the macro , e.g TEST_ECHO_MACRO(a); // In release this is a noop but leaves a rogue semi-colon behind The release build is a no-op so you end up with a line that just contains a ; and whilst this shouldn't be an issue I've known some compilers to generate a warning. To get around this I normally use a do/while noop, like below. It'll get optimized away in release build but prevent the potential warning. Open in new window Not really since my_debug_out takes an int as parameter. >> BTW, then you'll be fine with Except for the fact that my_debug_out doesn't take a string as parameter, I still prefer not having a ; at the end of a macro ... It's so counter-intuitive this way : #define TEST_ECHO_MACRO(x) my_debug_out(x); int a = 0; TEST_ECHO_MACRO(a) /* <--- no ; at the end of this line */ some_more_code(); that I prefer "forcing" the user of my macro to add the ; #define TEST_ECHO_MACRO(x) my_debug_out(x) int a = 0; TEST_ECHO_MACRO(a); /* <--- now the ; HAS to be there at the end of this line */ some_more_code(); IMO, no. The way I suggest forces the semi-colon to be added at the end when you use it so it looks more naturally like a function call. >> I prefer "forcing" the user of my macro to add the ; Me too, hence my suggestion above :) Well, I saw it, but a semicolon is not 'rogut' at all. It would expand to void my_main_code() { int a; a =1; ; } which is legal C/C++-. Agreed, I never said it wasn't... it's just that some (mainly older) compilers will issue a warning if you build with high warning levels. It also forces the caller to provide it so it makes the code more consistent (IMO). 
I'm not saying do it, I'm just pointing it out as a consideration. Ah, never seen that ... The C standard explicitly allows empty statements : (6.8.3) expression-statement: expression(opt) ; (the opt meaning optional of course) What the C standard doesn't allow however, is a { } block followed by a ; To avoid that, you DO need macro's like these if they involve { } blocks : #define SOME_BLOCK_MACRO do { \ int i = 0; \ fun(i); \ } while(0) Interesting perspective on the need for consistency of semicolon use in main code, I see your point. But should be no technical problems with leaving the semicolon in the macro, right? Just requires extra care by the reviewer? I would have trouble convincing people of the evilrx loop approach, and I don't want a stray semicolon, so that might be my best route..... Technically, it's ok, yes. I just think not having it in the macro is more consistent. >> and I don't want a stray semicolon Don't worry about a stray semicolon (empty statement) ... It's no problem at all. It's perfectly legal. The loop approach evilrix showed was just to avoid warnings on certain old compilers. I have seen warnings with rogue semicolons too, so I'm prejudiced against them. Unless you actually use the macro like this : TEST_ECHO_MACRO(a); >> I have seen warnings with rogue semicolons too, so I'm prejudiced against them. Odd, I've never seen them for empty statements. Oh well ;) Open in new window >> Ah, never seen that ... The C standard explicitly allows empty statements : evil's right .. i'm currently working with a nintendo-ds compiler .. and we get warnings for those *lonesome* semicolons .. since we have set option "warning to errors" this became an issue .. however, you're right jkr .. its legal anyway .. ike One last follow-on, then I'll close this out. Given the defines #ifdef TEST_ENABLE_OUTPUT #define TEST_OUTPUT(x) print_test_message(x); // expands to call; note semicolon is in macro #else #define TEST_OUTPUT(x) // expands to nothing #endif and the calls int aa; // = 45; aa = 45; print_test_message("Hello1 TEST_OUTPUT("Hello2 %i,%i\r", aa, aa) why would the outputs be: ?Hello1 45, 45 Hello2 7387, 2552 crazy stuff. My function is below, just puts the characters out UART0. Don't want to make this another draw on your time, just wanted to know any initial thoughts, then I'll start a new question if it gets involved... char s_buf [100]; void print_test_message( flash char *format, ...) { va_list ap; va_start(ap,format); vsprintf(s_buf, format, ap); va_end(ap); putstr0(s_buf); } Okay got some help from This has some improvement, but still need to figure out that "?" #ifdef TEST_ENABLE_OUTPUT #define TEST_OUTPUT(...) print_test_message( __VA_ARGS__); #else #define TEST_OUTPUT(x) // expands to nothing #endif Can you show the complete code ? Didn't I give you all the working parts? Turns out the question mark follows the first call, so it seems like that's some left over crap in the buffer or something. I'll need to talk to the guys who wrote that function... Thanks a ton, guys, for all the help. This is the CodeVision compiler for Atmel microprocessors. Apparently it accepts the variable-list macros, do you see any danger with using it? >>using it? If it works, that's fine, yet it won't be portable - that's the downside... That's why I was asking to see the complete code ;) To find where the question mark comes from. Oh, I would never inflict all that on you. That;s a 5,000 point question. 
Will throw that question to a coworker, at least I have the macro working so I'm happy. I think the code below should demonstrate. Paul Open in new window You mean, like this http:#20844363 ? Paul Not sure I'm back yet but I do have a browse in 'C' most days. You guys hold the fort so well :) I've been working on a private project but that's coming to an end soon. My next is a technical one in C so I hope to be visiting often. Paul
https://www.experts-exchange.com/questions/23145299/How-to-make-my-debug-macros-very-concise.html
CC-MAIN-2018-34
refinedweb
1,869
62.17
Iterator::DBI - An iterator for returning DBI query results. This documentation describes version 0.02 of Iterator::DBI, August 23, 2005. use Iterator::DBI; # Iterate over a database SELECT query. # (returns one hash reference per row). $iter = idb_rows ($dbh, $sql); $iter = idb_rows ($dbh, $sql, @bind); This. ; The following symbol is exported to the caller's namespace: idb_rows. String: "idb_rows cannot prepare sql: message" The DBI prepare method returned an error. String: "idb_rows cannot execute sql: message" The DBI execute method returned an error. String: "idb_rows: fetch error: message" The DBI fetchrow_hashref method returned an error. Requires the following additional modules: Higher Order Perl, Mark Jason Dominus, Morgan Kauffman 2005..
http://search.cpan.org/~roode/Iterator-DBI-0.02/DBI.pm
CC-MAIN-2015-35
refinedweb
109
53.37
On Fri 2009-05-22 10:17:15, Christoph Lameter wrote:> > Subject: Warn if we run out of swap space> > Running out of swap space means that the evicton of anonymous pages may no longer> be possible which can lead to OOM conditions.> > Print a warning when swap space first becomes exhausted.> > Signed-off-by: Christoph Lameter <[email protected]>WARN_ONCE... will it mean a backtrace? That's quite an overkill forsomething that is not a kernel fault (and where backtrace is useless).But yes, I agree in principle. Pavel> @@ -412,6 +412,7 @@ swp_entry_t get_swap_page(void)> nr_swap_pages++;> noswap:> spin_unlock(&swap_lock);> + WARN_ONCE(1, "All of swap is in use. Some pages cannot be swapped out.");> return (swp_entry_t) {0};> }-- (english)(cesky, pictures)
http://lkml.org/lkml/2009/5/22/271
CC-MAIN-2017-09
refinedweb
122
76.62
Earlier we created our first Spring Data JPA repository that provides CRUD operations for todo entries. Although that is a good start, that doesn't help us to write real life applications because we have no idea how we can query information from the database by using custom search criteria. One way to find information from the database is to use query methods. However, before we can create custom database queries with query methods, we have to find the answers to the following questions: - What are query methods? - What kind of return values can we use? - How can we pass parameters to our query methods? This blog post answers to all of these questions. Let’s start by finding out the answer to the first question.. A Very Short Introduction to Query Methods. After we have done this, our repository interface looks as follows: import org.springframework.data.repository.Repository; interface TodoRepository extends Repository<Todo, Long> { //This is a query method. Todo findById(Long id); } Let’s move on and find out what kind of values we can return from our query methods. Returning Values From Query Methods A query method can return only one result or more than one result. Also, we can create a query method that is invoked asynchronously. This section addresses each of these situations and describes what kind of return values we can use in each situation. First, if we are writing a query that should return only one result, we can return the following types: - Basic type. Our query method will return the found basic type or null. - Entity. Our query method will return an entity object or null. - Guava / Java 8 Optional<T>. Our query method will return an Optional that contains the found object or an empty Optional. Here are some examples of query methods that return only one result: import java.util.Optional; import org.springframework.data.jpa.repository.Query; import org.springframework.data.repository.Repository; import org.springframework.data.repository.query.Param; interface TodoRepository extends Repository<Todo, Long> { @Query("SELECT t.title FROM Todo t where t.id = :id") String findTitleById(@Param("id") Long id); @Query("SELECT t.title FROM Todo t where t.id = :id") Optional<String> findTitleById(@Param("id") Long id); Todo findById(Long id); Optional<Todo> findById(Long id); } Second, if we are writing a query method that should return more than one result, we can return the following types: - List<T>. Our query method will return a list that contains the query results or an empty list. - Stream<T>. Our query method will return a Stream that can be used to access the query results or an empty Stream. Here are some examples of query methods that return more than one result: import java.util.stream.Stream; import org.springframework.data.repository.Repository; interface TodoRepository extends Repository<Todo, Long> { List<Todo> findByTitle(String title); Stream<Todo> findByTitle(String title); } Third, if we want that our query method is executed asynchronously, we have to annotate it with the @Async annotation and return a Future<T> object. 
Here are some examples of query methods that are executed asynchronously: import java.util.concurrent.Future; import java.util.stream.Stream; import org.springframework.data.jpa.repository.Query; import org.springframework.data.repository.Repository; import org.springframework.data.repository.query.Param; import org.springframework.scheduling.annotation.Async; interface TodoRepository extends Repository<Todo, Long> { @Async @Query("SELECT t.title FROM Todo t where t.id = :id") Future<String> findTitleById(@Param("id") Long id); @Async @Query("SELECT t.title FROM Todo t where t.id = :id") Future<Optional<String>> findTitleById(@Param("id") Long id); @Async Future<Todo> findById(Long id); @Async Future<Optional<Todo>> findById(Long id); @Async Future<List<Todo>> findByTitle(String title); @Async Future<Stream<Todo>> findByTitle(String title); } Let’s move on and find out how we can pass method parameters to our query methods. Passing Method Parameters to Query Methods We can pass parameters to our database queries by passing method parameters to our query methods. Spring Data JPA supports both position based parameter binding and named parameters. Both of these options are described in the following. The position based parameter binding means that the order of our method parameters decides which placeholders are replaced with them. In other words, the first placeholder is replaced with the first method parameter, the second placeholder is replaced with the second method parameter, and so on. Here are some query methods that use the position based parameter binding: import java.util.Optional import org.springframework.data.jpa.repository.Query; import org.springframework.data.repository.Repository; interface TodoRepository extends Repository<Todo, Long> { public Optional<Todo> findByTitleAndDescription(String title, String description); @Query("SELECT t FROM Todo t where t.title = ?1 AND t.description = ?2") public Optional<Todo> findByTitleAndDescription(String title, String description); @Query(value = "SELECT * FROM todos t where t.title = ?0 AND t.description = ?1", nativeQuery=true ) public Optional<Todo> findByTitleAndDescription(String title, String description); } Using position based parameter binding is a bit error prone because we cannot change the order of the method parameters or the order of the placeholders without breaking our database query. We can solve this problem by using named parameters. We can use named parameters by replacing the numeric placeholders found from our database queries with concrete parameter names, and annotating our method parameters with the @Param annotation. Here are some query methods that use named parameters: import java.util.Optional import org.springframework.data.jpa.repository.Query; import org.springframework.data.repository.Repository; import org.springframework.data.repository.query.Param; interface TodoRepository extends Repository<Todo, Long> { @Query("SELECT t FROM Todo t where t.title = :title AND t.description = :description") public Optional<Todo> findByTitleAndDescription(@Param("title") String title, @Param("description") String description); @Query( value = "SELECT * FROM todos t where t.title = :title AND t.description = :description", nativeQuery=true ) public Optional<Todo> findByTitleAndDescription(@Param("title") String title, @Param("description") String description); } Let’s move on and summarize what we learned from this blog post. 
Summary This blog post has taught us three things: - Query methods are methods that find information from the database and are declared on the repository interface. - Spring Data has pretty versatile support for different return values that we can leverage when we are adding query methods to our Spring Data JPA repositories. - We can pass parameters to our database queries by using either position based parameter binding or named parameters. The next part of my Spring Data JPA tutorial describes how we can create database queries from the method names of our query methods. P.S. You can get the example application of this blog post from Github. Nice to given sharing about java example it is very easy to understand Thank you. I am happy to hear that this blog post was useful to you. You're the man! Your articles are so well written and easily understandable. It's amazing! Keep up the good work Petri. Thank you for your kind words. I really appreciate them! +1 Thanks for your clean and simple explaination. Sharing is caring and it is your kindness to share and care :) You are welcome! When I executed my "String findTitleById(Long id)" I got ToDo object instead of String.Also I had to change "interface TodoRepository extends Repository { " to "interface TodoRepository extends JpaRepository { " as the former did not work. Can you please help me resolve this. Hi, This was my mistake. I thought that Spring Data JPA would support this, but it doesn't. You can still do this, but you have to specify your own query by using the @Queryannotation. By the way, you might want to check out this Jira ticket: Support Projections as Query results. Thank you for pointing this out! Did your code throw an exception or how did you figure out that it didn't work? Oh that was a mistake by me. I had 2 Repository classes imported and had to use one directly(fully qualified class name). One is for the annotation @Repository and the other one you mentioned.That is why i changed it to JpaRepository. Anyways thanks for letting me know that there is an open ticket for this feature. Thanks for the post! :) It was very useful to me but I had one problem with it: what JPA returns when findBy_ sentence? I supposed it would be null, but I had an exception Thanks for the help! Hi, It shouldn't throw an exception. What exception does it throw? Hi, thanks for your very clear tutorials. However, I have a question. Are you sure these examples will work? There are several interfaces here with attempts to overload methods with the same signature. Probably won't compile? You are right. They don't compile. The reason why I decided to put these query methods to the same interface is that this way it is easy to compare different query methods that do the same thing. Hello! Thank you very much for such a good tutorials, but I have one question. How to make some params optional (varying number of params), something like this: List findByOptionalLastnameAndOptionalFirstNameAnd...(String lastname, String firstname,...); ? Is there a way to do it? Best regards! Hi Luc, Thank you for your kind words. I really appreciate them. If you want to create dynamic queries with Spring Data JPA, you have to use either JPA Criteria API or Querydsl. Thank you very much! Both tutorials are very useful! I used JPA Criteria API and it works perfect for me! You are welcome! Hello! Thank you for that tutorial. But I have a question, is there a way how to return Stream of rows from database using org.springframework.data.jpa.domain.Specification interface. 
In other words I need to filter rows from database and I can't just do within query annotation. Thanks in advance! Hi, I took a look at the Javadoc of the JpaSpecificationExecutor<T>interface, and it seems that if you want to get a more than one result, you can return only List<T>and Page<T>objects. Nice introduction. one question about JPA. what difference between Spring JPA and Spring Hibernate? Hi, The Spring Hibernate module provides support for Hibernate. In other words, it ensures that you can use Hibernate in a Spring application. However, you still have to use either the Hibernate API or the "pure" JPA API. Spring Data JPA introduces an additional layer that helps you to remove the boilerplate code that is required when you write queries by using the "pure" JPA API. However, you still have to use a JPA provider such as Hibernate. how to find unique name using spring data jpa and also display all columns data Hi, I need a bit more input before I can answer to your question. For example, I need to see the source code of the entity that contains the queried name. Is valid in JPA distinct...? List findDistinctByName(String name); Yes. It should work. If you need more information, you should read this blog post. @Query("SELECT DISTINCT r.Id,r.Desc,r.isActive,r.createdBy FROM categories r where r.isActive=?1") That query doesn't work because you are not selecting an entity. If you want to select only a few fields of an entity, you need to return a DTO. thank u You are welcome. Hello Petri, excellent tutorials. I have some issue. I want the query method to generate query like "FROM ABC WHERE (NAME=? OR LASTNAME=?) AND IN(?,?,?)" I wrote method "findByNameOrLastnameAndCityIn()" but it generating query "FROM ABC WHERE NAME=? OR LASTNAME=? AND IN(?,?,?)". What can I do to get required result?? Hi, you need to use the @Queryannotation and specify the used JPQL query manually. Actually, the no. of parameters in IN operator are not fixed, it can be one or more. So I am passing List to it, but then it is giving me error; Query is like : @Query("FROM ABC WHERE (NAME=:name OR LASTNAME=:lastName) AND IN(:cities)"); 'cities' is of type List That is strange. Does your code throw an exception when you run it? If so, could you add the stack trace here? Also, just to clarify, does your repository method look like this: Hello there! Spring newbie here. When do i have to use the " nativeQuery=true"? Thanks. Hi, You need to use it when you want to create a query method that uses SQL instead of JPQL. Hey there, I am new to Spring (and databases and lots of other stuff I'm currently learning on the job). One of the problems I am currently dealing with is a database update via a web Application - you upload a csv, the database is deleted and the csv is read to the db. I used spring batch and it works well thanks to one of your tutorials. Unfortunately I found out that it only works well on my system (with H2), the next testing step uses mySQL. I've adapted everything and it runs, but REALLY slowly. I created a profile local where it runs fine and a profile !local, where i've tried different approaches, but nothing seems to improve the speed. Now i implemented a listener (the profile loads an empty Reader and Writer) that uses @Query with the sql statement LOAD DATA INFILE. I've already used some sql queries, so I know who this is working basically. My question: I only find the passing of method parameters in select statements. can i use them anywhere? 
I need to pass the url of the loaded csv-file to the sql statement (for i only have the file after the user loaded it via the html-mask). Any help or links or advise of any kind will be highly appreciated! I have seen that the last comment was written quite some time ago, but i am hoping, you're still keeping track of this. The snippet that currently troubles me: @Modifying @Transactional @Query(value = "LOAD DATA INFILE 'sqlCsv' INTO TABLE MYTABLE FIELDS TERMINATED BY ';' LINES TERMINATED BY '\r\n' IGNORE 1 LINES;", nativeQuery = true) public void loadData(String sqlCsv) throws InvalidDataAccessResourceUsageException; Hi, I am sorry that it took me a few days to answer to your question (I am currently on summer holiday). Did you implement your writer ItemWriterby using JPA? If so, this might be one reason why the batch job is so slow when you use a real database. I recommend that you should create an ItemWriterthat uses JDBC. I promise that it is a lot faster than the ItemWriterthat uses JPA. Yes. If you are implementing a Spring Batch ItemWriter, you shouldn't use this approach because this will break the API contract of the write()method. On the other hand, if you are writing custom code that simply invokes your repository method, you can pass the required information by using named parameters. Hi Petri, I am facing an issue when i am trying to fetch only few selected columns from a table using JPA.. @Query ("select h.abrHeaderId, h.attLegalEntityCode, h.billCycle, h.typeOfSegmentCodeId, h.countryCode, h.countryLegalName, h.hierCreationStatusCodeId, h.nodeId from AbrBillAccountNode h where h.nodeId = (?1)") public AbrBillAccountNode findOneByNodeId (@Param("nodeId") String nodeId); I want to fetch only these specific field values from DB but it is erroring out saying nested exception is org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [java.lang.Long] to type [com.att.ssdf.abr.model.domain.AbrBillAccountNode] Please help me resolving this... Hi, The problem is that Spring Data JPA doesn't know how it can convert your result set into an entity because the result set doesn't have all the required columns. If you want to select only a few columns, your query method should return a DTO instead of an entity. If you have any other questions, let me know! nice example. Thank you! Hi Petri, Firstly great article, I was having some issue which got resolved with the help of this . There was this 2.7.2. JPQL Constructor Expressions which I am using for the current project which has some aggregate function columns( 2 avg columns). I wanted to do Pageable sort on these aggregate function at runtime. But every time I am trying to do pageable on my DTO projection fields I am not getting the result. 
@Query( "SELECT new StudentSummary(student.id AS Student_ID," + " student.name AS NAME," + " student.email AS EMAIL," + " student.phone AS PHONE," + " student.currentCity AS CITY," + " student.country AS COUNTRY," + " studentEducation.education AS EDUCATION," + " AVG(CASE WHEN student_education_result_history.examType = :theory THEN student_education_result_history.totalScore ELSE 0 END) AS THEORY_SCORE, " + " AVG(CASE WHEN student_education_result_history.examType = :practical THEN student_education_result_history.totalScore ELSE 0 END) AS PRACTICAL_SCORE) " + " FROM Student student LEFT OUTER JOIN student.studentEducation studentEducation" + " LEFT OUTER JOIN studentEducation.studentResultHistory student_education_result_history + " WHERE studentEducation.result = :allOverresult") Page getStudentSummaryByAllOverResult(@Param("allOverresult") STUDENT_STATE state, @Param("theory") String theory, @Param("practical") String practical, Pageable pageable); Now model looks good but as per my client's request I might need to add sort by THEORY_SCORE or PRACTICAL_SCORE plus pagination. I thought of adding fields of DTO for this but pageable is binding to Student Entity fields how can I do order by StudentSummary.theoryScore or StudentSummary.practicalScore ? Let me know if its not clear. Thank You Hi, Unfortunately there is no "clean" solution to your problem. At first I thought that the only way you can do this is to add a custom method to your repository and implement the pagination logic yourself. However, I found this StackOverflow question that might help you to save some time because it seems that you don't need a custom method after all. Hi there, THank you for this interesting article. Is there a simple way to pass the WHERE clause as a parameter of the @Query ? Something like this (which does not even compile) : @Query("FROM Node n WHERE ?1") List findWithCustomConditions(String conditions); The condition expression is built on client side, hence the need to directly and conveniently pass the WHERE clause. Thank you, Elie Hi, As far as I know, it's not possible to pass the WHEREclause as a method parameter when you are using the @Queryannotation. If you want to build a REST query language, you could take a look at this tutorial. This is awesome blog Thank you for your kind words. I really appreciate them. It seems simple here. Thanks for showing such a clear picture. You are welcome. Thank you. You are welcome. Is it good or bad practice to have Optional for List findByTitle Hi, I wouldn't use Optionalas a return type (in this situation). If a query method returns a List, it will return an empty list if no results is found => it's kind of pointless to use an Optionalif the returned value can never be null. Hi, thank you for your wonderful series of the articles. I have a question regarding queries derived from method names. If the method returns a single value (entity instance), can such method return nul? So should I declare Oprional as return type? And if such method returns list of smth, will such method method return always return a list (which can be empty) or should use Optional<List> as return type? Thank you Hi, Thank you for your kind words. I really appreciate them. About your questions: If a query method returns a single value, it returns either the found entity object or null. If you are using Java 8 (or newer), you can use Optionalas a return type if you don't want to do nullchecks. 
I like to use this approach because if a method returns an Optional, the person who uses it understands immediately that this method doesn't necessarily return a meaningful value, and she can take this into account when she writes her code. If a query method returns a list, it will return an empty list if no results is found. That's why it doesn't really make sense to use Optional<List>as a return type. If you have any additional questions, don't hesitate to ask them. Petri, Thank you so much Hi - Is there a way to write where clause with condition using query annotation. For example : select employee based on department id and employee id. But employee id should exists in the condition only if employee id is not null. Appreciate your help. Hi, Unfortunately it isn't possible to write dynamic queries with the @Queryannotation. If you need to write dynamic queries with Spring Data JPA, you have to use the JPA Criteria API or Querydsl. Thanks for explaining Query in JPA in simple, understanding way You are welcome! Hi in the querymethods it is findAllByCusomerCompisiteKeyId. how do I write qualifier in JQL in Embeddable class like Customer.CompisiteKey.id ? thanks After wasting my time in so many websites finally i find this blog that gives A to Z info releted to spring data JPA . Thanks for writing such a helpful blog i will be more thankful to you if you will create blog for spring data rest also. thanks. Very best blog. You have big amt knowledge with minimum words but also keep it simple and easy to grasp. I saw one of the best Blog among many one Tutorials and Blogs. It is equal to best youtube tutorials. Thanks really nice article.
https://www.petrikainulainen.net/programming/spring-framework/spring-data-jpa-tutorial-introduction-to-query-methods/?replytocom=1505277
CC-MAIN-2021-43
refinedweb
3,569
58.58
Video:Torch Table Motion This is a short video of the torch table operating. I am using it to test the way videos are embedded on the wiki. Having the video on a separate page like this and then embedding it on other pages via the template substitution mechanism will allow us to put wiki-markup about these videos on these pages (letting us put them in categories, discuss them, etc) without cluttering up the pages the videos appear on (or entering the page in a category when really we wanted to categorize the video). Note that this video page itself is in the category 'Torch Table', however embedding this video on another page (such as the main page) would not automatically enter that page in 'Torch Table', which is the correct behavior. Also, this will automatically wrap the video in the universal subtitles code so that anywhere you embed it on the wiki it will be subtitled. To include this video on another page you add the following wikicode where you want the video to appear: - {{:Video:Torch Table Motion}} Notice that you need the leading colon before the Video namespace as well as after it. I think if the LocalSettings.php file is changed you can allow the Video namespace to be template substituted and then this leading colon could be ommitted and the code would just be {{Video:Torch Table Motion}} like any other template. Whether the change to localSettings.php is made or not, since it is using the template expansion mechanism of the wiki it will support additional options so you could also have ways of making the code produce just a link to the wiki page for the video, or a link to the site where it is hosted, or both (but not having the actual video load on the page where you embedded it, just the links). You can also link to the videos with the normal [[ ]] linking mechanism if you wanted to just do a wikilink only. I plan to use a secondary, templated page for making these video pages more user friendly, right now I am just making sure the system works as expected and doesn't break something.
https://wiki.opensourceecology.org/wiki/Video:Torch_Table_Motion
CC-MAIN-2021-10
refinedweb
368
57.84
icker2,987 Points What kind of datetimes are you asking me? Now and today? Here is the task: Write a function named minutes that takes two datetimes and, using timedelta.total_seconds() to get the number of seconds, returns the number of minutes, rounded, between them. The first will always be older and the second newer. You'll need to subtract the first from the second. And I don't get anything with the time. Sadly it is in the end of the course so I can't finish this. I am giving up officially. import datetime def minutes(datetime_one, datetime_two): datetime_one = datetime.datetime.now() datetime_two = datetime.datetime.now() return int(timedelta.total_seconds(datetime_one - datetime_two) / 60) 1 Answer Mark SebeckTreehouse Moderator 29,142 Points Hi Wicker. Good try. This was a difficult question. Few things going on here. First the datetimes are being passed in so no need to set them to now(). They will have datetime values already. So delete these two lines: datetime_one = datetime.datetime.now() datetime_two = datetime.datetime.now() Next instead of using the function int() you want round(). Because int() will do the same as floor() which is just remove the decimals instead of rounding up when it's greater than equal to .5. Also timedelta.total_seconds should be datetime.timedelta.total_seconds. Lastly the question states "The first will always be older and the second newer. You'll need to subtract the first from the second. " Little confusing but it should be datetime_two - datetime_one. If you still can't get it to pass post your new code back to the Community and someone will help you out. Happy coding! wicker2,987 Points wicker2,987 Points Thank you! I got this.
https://teamtreehouse.com/community/what-kind-of-datetimes-are-you-asking-me-now-and-today
CC-MAIN-2021-49
refinedweb
283
70.6
Jack’s guide to data acquisition in the modern laboratory

Part 2: IVI, VISA and scripting to communicate with lab instruments

IVI: Interchangeable Virtual Instruments

IVI is not so much a thing you'll encounter in your lab as a concept (and acronym) which crops up a lot when talking about instruments and automation. It is also the name of an organisation which looks after several standards that instruments may conform to, including VISA and SCPI, which are discussed below. The aim with IVI is to have generic instruments like oscilloscopes, DC power supplies, or function generators which are interchangeable, so that you can develop your experimental control software with one and then at a later date swap it with another one from a different manufacturer and everything keeps working with minimal changes. Unless that is something you plan to do, you don't need to pay much attention to IVI. They do make some very useful generic drivers, but they won't give them to you directly; you have to get them by installing a VISA implementation. Speaking of which…

VISA

VISA is Virtual Instrument Software Architecture. To understand why it is useful, think about a lab with a variety of instruments, using a variety of the busses described above. Now, to interface with them from your own software, you need to learn how to handle TCP sockets, to read and write from serial ports and so forth. Also, some instruments have more than one connector, and if you decide to change from the GPIB port to the serial port because someone wants to borrow the expensive GPIB converter, you have to change your software. VISA wraps all these different hardware busses and provides a simple, uniform interface. You can swap from serial to Ethernet by just changing the VISA address. Most VISA implementations also offer some handy extras, such as the ability to list all connected instruments and test sending and receiving messages with them, and the ability to log all messages sent and received by any instrument, which can be helpful for debugging.

Several manufacturers of instruments provide VISA implementations, but they are designed to be interchangeable (except for some small caveats around GPIB). You can use anyone's instruments with any VISA implementation, and you can pick one based on features and cost. Most are free if you have hardware from that manufacturer. Two in particular are worth mentioning:

- NI VISA from National Instruments is the most common. It has lots of nice features, but it is a large and slightly frustrating install, especially if you don't already have other NI software. It is also rather expensive if you don't have it included with other NI hardware or software.
- R&S VISA from Rohde & Schwarz is compact, simple, and appears to be available for free to anyone at the time of writing.

Other manufacturers, including Tektronix, Keysight, Anritsu, Bustec, and Kikusui, also offer VISA implementations.

Instrument commands and SCPI

Once you have the hardware set up, and VISA (if you want it), you can communicate with your instruments. But what do you actually send to them? The vast majority of instruments use short strings of ASCII characters and use newline characters to indicate the end of a command. There is a mix of Windows and Linux style newlines used, but CRLF is the most common. Hopefully the instrument manual will have a list of commands and their effects, but if not, check if it mentions SCPI or IVI: it may be that it supports exactly the commands defined in a standard somewhere.
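As a concrete, purely illustrative sketch of what that looks like from Python, the snippet below uses the third-party pyvisa package on top of whichever VISA implementation is installed. The resource address is a placeholder and *IDN? is the conventional identification query, so treat the details as assumptions rather than instructions for any particular instrument:

import pyvisa  # Python binding that talks to an installed VISA implementation

rm = pyvisa.ResourceManager()
print(rm.list_resources())  # every instrument VISA can currently see

# The address below is made up; yours will name a serial, GPIB, USB or TCPIP resource.
inst = rm.open_resource("TCPIP0::192.168.1.50::INSTR")
inst.write_termination = "\n"
inst.read_termination = "\n"

# Most instruments answer the identification query with make, model and firmware.
print(inst.query("*IDN?"))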
Standard Commands for Programmable Instruments (SCPI) is a useful standard which is worth discussing at this point. It contains three useful things:

- A particular style of writing commands and responses which a lot of instruments use. Even instruments which don't comply with the standard often use the same format. An example: SOUR:VOLT 2 to set a voltage, or SOUR:VOLT? to read it.
- A short list of commands every instrument should have, and a huge list of commands supported by Interchangeable Virtual Instruments.
- Several extensions which are not commands, such as status registers, which allow you to get the status of an instrument at a glance. Some of these can also be used to trigger actions on the PC outside of the usual command-response pattern, but only over GPIB or USBTMC with the USB488 subclass.

One of the useful commands many instruments support is *IDN?\n, which makes the instrument tell you what it is. Very useful if you have a lot of COM ports and don't know which is which. Not everything uses SCPI-like commands, though; we have several items in our lab where all commands are just a letter and a 3-digit number.

Programming and scripting

But how do you actually run your experiment? The final piece of the puzzle is writing a script. There are three main programming languages people use for this:

- NI LabVIEW. This is a graphical programming language, where you have functions as little blocks and join them with wires which carry values about. Some people find this intuitive, but I'm not a fan personally.
- MATLAB is a programming language designed to work nicely with maths and arrays of data.
- Python is a general-purpose programming language with huge numbers of modules for all sorts of tasks, and it is available for free. This one is my personal preference, so I'll use it for the examples below.

Both of the latter two can also be useful for analysing large data sets and drawing graphs, but that is a whole other topic. Here, then, is an example of how you might use Python to ramp the voltage in one of our power supplies:

import serial
import time

with serial.Serial('COM1') as rp100:
    rp100.write(b'OUTP1 1\n')
    rp100.write(b'SOUR1:VOLT 1\n')
    time.sleep(0.1)
    rp100.write(b'SOUR1:VOLT:SLEW 0.1\n')
    rp100.write(b'SOUR1:VOLT 20\n')
    time.sleep(200)

In this example, I'm using serial directly (not VISA) and I'm writing a series of text commands to the instrument, then waiting for the instrument to do its thing. The 200-second wait at the end is because that is how long the ramp will take. One could write an entire experiment like this, but that would be frustrating and error-prone. It would be better to use the programming language's features to automate and simplify repetitive tasks by writing functions and (depending on the language) creating objects to represent instruments. Fortunately, in the age of the internet, many other scientists have written helper code for a huge number of instruments and made it available. Some interesting frameworks for communicating with instruments are:

- Qcodes
- Pymeasure
- Pyinstrumentkit
- We have uploaded the one we wrote and use here at Razorbill. You can find links to the above and other customers' pre-written scripts here.

And many more almost certainly exist. There are also many little snippets of code out there for specific instruments, and we list some of the ones which have been written for our instruments on the page for those instruments.

Interprocess communication

There is one more oddity which is worth mentioning, though I won't go into detail here.
Sometimes, a computer program has already been written, and it does lots of useful things but not quite everything you need. And you can't edit it. A common case where this crops up for our customers is PPMS users. Quantum Design produce a piece of software called MultiVu, and it handles many things that the PPMS does, such as pumping out the sample chamber and reading and controlling the temperature. But it is difficult to make MultiVu handle other instruments. The solution here is to let MultiVu control the PPMS, and use your script to control MultiVu. If you need to control another piece of software, look to see if that software has an Application Programming Interface (API) or a scripting interface. It is often the case that it will not be in the programming language you are using, but the languages listed above are all able to wrap APIs designed for common languages like C++.

Final tips

Hopefully, if you are still reading at this point, this blog post has been useful. I'll close with some extra tips from my experience:

- Log everything, because hard drive space is cheap. If you are getting a parameter from an instrument, think if there are any more you could record. Even if they aren't useful now, they might be useful for debugging.
- You will probably end up with a selection of classes/functions/modules you use, and a separate script for each experiment. Save that script in case you need to work out exactly how you acquired certain data.
- Once you have a system that works, and you are confident it is safe to do so, you can script data collection in advance and walk away. We routinely gather data 24 hours a day in some of our experimental and quality control systems.
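To make the earlier advice about wrapping instruments in functions and objects a little more concrete, here is a minimal sketch of the kind of helper class that the frameworks listed above provide in far more polished form. It is an illustration added here, not code from the post: the class name is invented, and the command strings simply reuse the SCPI-style OUTP/SOUR:VOLT syntax from the ramp example.

import serial

class PowerSupply:
    """Thin wrapper so experiment scripts read as actions rather than raw command strings."""

    def __init__(self, port):
        self.conn = serial.Serial(port, timeout=1)

    def write(self, command):
        # The instruments here expect ASCII commands terminated by a newline.
        self.conn.write((command + "\n").encode("ascii"))

    def enable_output(self, channel=1):
        self.write(f"OUTP{channel} 1")

    def set_voltage(self, volts, channel=1):
        self.write(f"SOUR{channel}:VOLT {volts}")

    def close(self):
        self.conn.close()

# Usage in an experiment script might then be as simple as:
# psu = PowerSupply("COM1"); psu.enable_output(); psu.set_voltage(2); psu.close()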
https://razorbillinstruments.com/part-2-ivi-visa-and-scripting-to-communicate-with-lab-instruments/
CC-MAIN-2022-40
refinedweb
1,524
60.85
Apache POI – Print Area

This chapter explains how to set the print area on a spreadsheet. The usual print area runs from the top left to the bottom right of an Excel spreadsheet, but the print area can be customized according to your requirement. That means you can print a particular range of cells from the whole spreadsheet, customize the paper size, print the contents with the grid lines turned on, and so on. The following code is used to set up the print area on a spreadsheet.

import java.io.File;
import java.io.FileOutputStream;

import org.apache.poi.xssf.usermodel.XSSFPrintSetup;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class PrintArea {
   public static void main(String[] args) throws Exception {
      XSSFWorkbook workbook = new XSSFWorkbook();
      XSSFSheet spreadsheet = workbook.createSheet("Print Area");

      // set print area with indexes
      workbook.setPrintArea(
         0, // sheet index
         0, // start column
         5, // end column
         0, // start row
         5  // end row
      );

      // set paper size
      spreadsheet.getPrintSetup().setPaperSize(XSSFPrintSetup.A4_PAPERSIZE);

      // set whether to display grid lines
      spreadsheet.setDisplayGridlines(true);

      // set whether to print grid lines
      spreadsheet.setPrintGridlines(true);

      FileOutputStream out = new FileOutputStream(new File("printarea.xlsx"));
      workbook.write(out);
      out.close();

      System.out.println("printarea.xlsx written successfully");
   }
}

Let us save the above code as PrintArea.java. Compile and execute it from the command prompt as follows.

$javac PrintArea.java
$java PrintArea

It will generate a file named printarea.xlsx in your current directory and display the following output on the command prompt.

printarea.xlsx written successfully

In the above code, we have not added any cell values; hence printarea.xlsx is a blank file. But if you open the print preview, you can observe that the print area is set and the grid lines are shown.
https://www.tutorialspoint.com/apache_poi/apache_poi_print_area.htm
CC-MAIN-2018-26
refinedweb
285
52.05
Hi. A friend and I are in the middle of video installation tests. I'm using an Arduino PIR sensor and an Arduino Uno board. I need to stop a video when the PIR is "On" (detects motion) and to play that video if the PIR is "Off". Checking the Processing and Arduino websites, examples and references, I couldn't figure out what I'm doing wrong. Please let me know if something is missing or left out. Thanks in advance for any help!

This is the Arduino code (copied from Arduino and Processing communication tutorials found on the web):

/*; } } }

And here is the Processing code:

import processing.serial.*;
import processing.video.*;

Serial myPort;
String val;
Movie video;

void setup() {
  size(640,480);
  video = new Movie(this,"sujeto1.mp4");
  video.loop();
  String portName = Serial.list()[0];
  myPort = new Serial(this, portName, 9600);
  myPort.bufferUntil('\n');
}

void serialEvent (Serial myPort) {
  if (myPort.available() > 0) {
    val=myPort.readStringUntil('\n');
  }
  if (val=="Motion detected!") {
    video.stop();
  } else {
    video.loop();
  }
  println(val);
}

void draw() {
  background(0);
  image(video,0,0);
}

void movieEvent(Movie video) {
  video.read();
}

Answers

Thanks for the answer @GoToLoop! I visited both references but I still don't get it. With the equals() issue, I understood that if you want to compare two strings it is necessary to declare them before(?). But how could I make Processing read the String "Motion detected!" inside an "if" block? Or read the value stored in "myString"? I'm kinda new to Processing and newer to Arduino, so I don't know if this code is closer to the right one. Thanks!!!

Try this modified code for your Processing's serialEvent(). This is not the solution, but you can see how the serial port is behaving (or if you are receiving any data). Notice you were missing a line in your function... the one that was reading the data from the serial stream. Kf

Thanks @kfrajer for your answer. I could not integrate your code very well, but I'm receiving the data pretty well. The operation of the PIR is 10/10 and the communication between the serial port and Processing works well too. The thing that I don't know is how to go from the string that is received in Processing to making the video play from that string. Do I explain myself well? (English is not my native language, as you could notice :D)

Hi. Mixing the answers, I got to this code: The thing now is that the video is "playing" in a loop but not "showing" in a loop. When the PIR is "On" the video stops when the value "Motion detected!" appears (all OK with that), but when the other value ("Motion ended!") appears the video doesn't continue in a loop; it's still reading as a loop, so every time the value appears it seems like I advanced the video :-?? Could I be missing something in void draw()? Or in void movieEvent()? :-?

Problem is noLoop()! The original example code didn't need to have draw() called back at 60 FPS. However, in order to display each arrived video frame, draw() needs to happen. #-o You can move redraw = true; into movieEvent(). *-:) Or even delete noLoop() & redraw = true; from the sketch entirely. 8-X More about noLoop() & redraw(): :-B

Try these posts: Kf

Thank you @GoToLoop and @kfrajer for your comments, knowledge and advice. The code works fine now and I can continue with the other parts of my project. :D :D :D :D
https://forum.processing.org/two/discussion/25365/play-video-using-processing-arduino-and-pir-sensor
CC-MAIN-2019-18
refinedweb
604
76.52
12 May 2006 10:26 [Source: ICIS news]

SINGAPORE (ICIS News)--Sabic and Shell Chemicals, two of the world's largest producers of monoethylene glycol (MEG), have rolled over their May Asian contract price of $870/tonne CFR Asia to June.

Market participants greeted the news on a subdued note, as it was within expectations after a similar move by MEGlobal. "Although spot prices are firm, overall demand remained weaker than expected," said a trader.

Earlier on Thursday, market leader MEGlobal also rolled over its May price of $870/tonne CFR Asia to June. At that time, spot prices were at $805-815/tonne CFR China, up $5/tonne from the lower end of the range from Wednesday. However, very few cargoes were available at $805/tonne CFR China.
http://www.icis.com/Articles/2006/05/12/1062639/sabic+shell+roll+over+asia+june+meg+contract+price.html
CC-MAIN-2013-20
refinedweb
130
59.84
Post syndicated from Christian Weber.

The first session of the morning presented "Better together: AWS CDK and AWS SAM." This keynote was the announcement for the public preview of the AWS Serverless Application Model CLI (AWS SAM CLI). The AWS Serverless Application Model CLI includes support for local development and testing of AWS CDK projects. To learn more, the blog post announcing the AWS SAM CLI public preview has more detail about the capabilities of the AWS SAM CLI. If you missed CDK Day, fear not! CDK Day Track 1 and Track 2 are available to watch online. Great job and round of applause to the sign-language translators, the speakers, the organizers, and the hosts for making the second CDK Day a success! We can't wait for CDK Day number 3!

Updates to the CDK

AWS CDK v2 developer preview

It's here! The much-anticipated release of CDK v2's developer preview is now available! When using CDK previously, developers in JavaScript and TypeScript have faced challenges with the way that npm handles transitive dependencies: the dependencies that your dependencies rely on. For example, the aws-ec2 package.json file lists dependencies for other CDK construct libraries. If one of these transitive dependencies were updated, all of them would need to be updated, or you would run into dependency tree resolution errors, as seen in this StackOverflow thread. With v2, all construct modules are now provided in a single package: aws-cdk-lib. All of the dependencies are now pinned to a single version of aws-cdk-lib, making it easier to manage. This also gives you the flexibility of having all CDK construct library modules available without having to run npm install each time you want to use a new construct library.

Another change to AWS CDK v2 is the removal of experimental modules. To help promote API stability and comply with semantic versioning, CDK v2 ships only with modules marked as stable. Experimental modules aren't going away completely, though. In v1, experimental modules and constructs will be provided together with no change. In v2, experimental modules are distributed and versioned separately from the aws-cdk-lib package, in their own dedicated package and namespace. Once a v2 construct is deemed stable, it is then merged into the aws-cdk-lib package. The CDK team is still determining the best method of distributing experimental modules and constructs, so stay tuned for more information. Read more about the AWS CDK v2 developer preview in the What's new blog post.

AWS CDK for Go developer preview

On April 7, the AWS CDK team announced support for golang. From the Go tracking issue on GitHub, nearly 900 members of the CDK community have requested CDK support for golang, and we're happy to see it become available! We are looking forward to helping out all the golang gophers out there build amazing CDK applications! To learn more about Go and AWS CDK, read the AWS CDK for Go module API documentation on pkg.go.dev. You can also read the Go bindings for JSII RFC document on GitHub. Want to contribute to the success of Go and CDK? The project tracking board for Go's General Availability has tasks and items which could use your help.

Construct modules promoted to General Availability

Many new construct modules were promoted to General Availability recently. General Availability indicates a module's stability, giving confidence to run these modules in production workloads.
In April, a total of 15 modules were promoted stable:

- @aws-cdk/autoscaling-common in PR#13862
- @aws-cdk/chatbot in PR#13863
- @aws-cdk/cloudformation-diff in PR#13857
- @aws-cdk/docdb in PR#13875
- @aws-cdk/elasticloadbalancingv2 in PR#13861
- @aws-cdk/globalaccelerator in PR#13843
- @aws-cdk/lambda-nodejs in PR#13844
- @aws-cdk/ses-actions in PR#13864
- @aws-cdk/s3-deployment in PR#13906
- @aws-cdk/elasticsearch in PR#13900
- @aws-cdk/cx-api in PR#13859
- @aws-cdk/region-info in PR#14013
- @aws-cdk/aws-efs in PR#14033

Notable new L2 constructs

In the @aws-cdk/route-53 module, name server (NS) records were previously defined with the route53.RecordType enum. In PR#13895, user stijnbrouwers introduces the NS record as its own L2 construct, route53.NSRecord, bringing it into company with other record type L2s, such as route53.ARecord. This makes managing NS records consistent with the other record types represented as L2 constructs.

Improving the @aws-cdk/aws-events-targets module, CDK community user hedrall submitted PR#13823. This change brings support for Amazon API Gateway as a target for an Amazon EventBridge event.

@aws-cdk/aws-codepipeline-actions now includes an L2 construct for AWS CodeStar Connections supporting BitBucket and GitHub. This construct lets you create a CDK application that uses AWS CodeStar with a source connection from either provider, thanks to PR#13781 from the CDK Team.

Level ups to existing CDK constructs

Amazon Elastic Inference makes available low-cost GPU-acceleration for deep-learning workloads. PR#13950 now lets you use the service via @aws-cdk/aws-ecs in Amazon Elastic Container Service tasks, from CDK community user upparekh.

In PR#13473, from pgarbe, the @aws-cdk/aws-lambda-nodejs module will now bundle AWS Lambda functions with Docker images sourced from the Amazon Elastic Container Registry (Amazon ECR) Public Registry, instead of DockerHub. Prior to this change, CDK used your DockerHub credentials to pull a Docker image for the Lambda function. If your account was in DockerHub's free-tier account level, your account is throttled whenever it exceeds the API limit within a short time frame set by DockerHub. This can cause your AWS CDK deployment to be delayed until you are under DockerHub's API limit. By moving to the Amazon ECR Public Registry, this removes the risk of being affected by DockerHub's API rate limiting. You can read more in this blog post giving customers advice about DockerHub rate limits from last year.

With @aws-cdk/aws-codebuild, you can use concurrent build support to speed up your build process. Sometimes you'll want to limit the number of builds that run concurrently, whether for cost reduction or reducing the complexity of your build process. PR#14185, authored by gmokki, adds the ability to define a concurrent build limit for an AWS CodeBuild project Stage.

It is common for customers to have applications or resources spanning multiple AWS Regions. If you're using @aws-cdk/aws-secretsmanager, you can now replicate secrets to multiple Regions, with PR#14266 from the CDK team. Make sure you're not setting your secret as "test123" for your production databases in multiple Regions!

For users of @aws-cdk/aws-eks, PR#12659 from anguslees lets you pass arguments from bootstrap.sh to avoid the DescribeCluster API call. This will speed up the time it takes nodes to join an EKS cluster.

PR#14250 from the CDK team gives developers using @aws-cdk/aws-ec2 the ability to set fixed IPs when defining NAT gateways.
This change will now pre-create Elastic IP address allocations and assign them to the NAT gateway. This can be useful when managing links from an Amazon Virtual Private Cloud (VPC) to an on-premises data center that relies on fixed/static IP addresses.

@aws-cdk/aws-iam now lets you add AWS Identity and Access Management (AWS IAM) users to new or existing groups. For example, you might want to have a user in a specific group for the life of a deployed CDK application, and on stack deletion, revoke that membership. Thanks to PR#13698 from jogold, this is now possible.

Learning – Finds from across the internet

If you work with CDK parameters, you might be curious how parameters derive their names and values. Borislav Hadzhiev released a blog post about setting and using CDK parameters.

Ibrahim Cesar wrote an awesome blog post detailing the experience of discovering and working with CDK. It's an enjoyable read of inspiration and animated gifs.

Twitter user edwin4_ released a tool for CDK automation called RocketCDK. From the project's GitHub repository, this tool will initialize your CDK app, install your packages, and auto-import them into your stack. Neat! Anything that helps save time is a plus-one.

Community acknowledgments

And finally, congratulations and rounds of applause for these folks who had their first Pull Request merged to the CDK repository!

- a2ush
- vilikin
- 14kw
- rdjogo
- gmokki
- mfkuntz
- yiluncui
- pacharrin
- greg-aws
- timothy-farestad
- S-T-O-C-H-A-S-T-I-C
- saudkhanzada
- shumailxyz
- alex-vance
- timohirt

*These users' Pull Requests were merged in April.

Thank you for joining us on this update of the CDK corner. See you next time!
https://noise.getoto.net/tag/aws-sam/
CC-MAIN-2021-31
refinedweb
1,451
54.52
NAME

get_attrib - Returns needed attributes to execute a command successfully on any system.

SYNOPSIS

#include "get_attrib.h"

char *get_attrib (char *command_name, char *permits, char *active_categories,
                  char *authorized_categories, long flag);

DESCRIPTION

The get_attrib routine is designed to determine the system type currently running and return the runcmd(1) string needed to run the specified command successfully on any 7.0 and above system. On systems with TFM configured 'ON', some commands need special attributes that can't be determined easily; this routine then uses an internal table to return the needed attributes to run the command. On other system types the needed attributes are easily determined without use of this table.

The get_attrib arguments are as follows:

command_name
    Pointer to the command the attributes are to be returned about.

permits
    Pointer to either an octal or name string of permits to be added to the string returned.

active_categories
    Pointer to either an octal or name string of active categories to be added to the string returned.

authorized_categories
    Pointer to either an octal or name string of categories to be added to the string returned.

flag
    Long set to any combination of values defined in get_attrib.h. These values are used to specify that the string returned should be for the specified system type.

    GA_BOTH_OFF     PRIV_SU and PRIV_TFM off.
    GA_SU_ON        PRIV_SU on.
    GA_TFM_ON       PRIV_TFM on.
    GA_BOTH_ON      PRIV_SU and PRIV_TFM on.
    GA_CURRENT_SYS  Current system type.

EXAMPLES

The following example shows how get_attrib can be used to determine the needed attributes to run a command successfully:

#include <stdio.h>
#include "get_attrib.h"

main()
{
    char cmd[256];
    char *string;

    if ((string = get_attrib("mount", NULL, NULL, NULL, GA_CURRENT_SYS)) == (char *)NULL) {
        printf("get_attrib() failed\n");
        exit(1);
    } else {
        sprintf(cmd, "runcmd %s mount /dev/dsk/qtest3 /qtest3", string);
        printf("Command = %s\n", cmd);
    }

    if ((string = get_attrib("mount", NULL, NULL, NULL, GA_BOTH_OFF)) == (char *)NULL) {
        printf("get_attrib() failed\n");
        exit(1);
    } else {
        sprintf(cmd, "runcmd %s mount /dev/dsk/qtest3 /qtest3", string);
        printf("Command = %s\n", cmd);
    }
}

On an MLS system with PRIV_SU ON, the first sprintf would return:

runcmd -u root mount /dev/dsk/qtest3 /qtest3

On the same system the second sprintf would return:

runcmd -J secadm -j secadm mount /dev/dsk/qtest3 /qtest3

which is as if PRIV_TFM and PRIV_SU were OFF.

RETURN VALUES

If get_attrib() completes successfully, a pointer to a string containing the options of the runcmd string is returned; otherwise NULL is returned. If get_attrib() has problems, an error message will be put in GA_Err_Msg and NULL will be returned.

SEE ALSO

get_attrib(1), runcmd(1)
http://huge-man-linux.net/man3/get_attrib.html
CC-MAIN-2019-04
refinedweb
412
54.22
2005 Filter by week: 1 2 3 4 5 Versioning and Deployment Posted by MTsang987 at 1/31/2005 2:43:02 PM Hi, I have my dts package as a structured storage file. I plan to save it to my server for scheduling. Is there any way that I can remove the previous versions from the file to make it smaller before I save it on the server? Thanx... more >> SQL Server 2000 DTS Designer Error in the DLL Posted by Eric Manko at 1/31/2005 2:21:04 PM When trying to open the DTS Designer, I get an error stating Error in the DLL. It will then open, but when I save, I get the same error. Also, I cannot open any DTS tasks previously created from another machine. I have tried reregistering DLLs, and also removing and readding, SQL Server 2000... more >> Sql replication and archive database Posted by willatkins NO[at]SPAM gmail.com at 1/31/2005 10:48:29 AM Hi, Our software uses a SQL database and is designed to get rid of data after a specified amount of time to maintain performance. We would like to be able to keep that data in another database along with the up date information for reporting purposes, while leaving the main database to serv... more >> Add a header to a Text File ("Destination") Posted by LP at 1/31/2005 10:27:06 AM Hello All, I'm exporting a SQL table to a text file and want to add a little bit of header information. Is the best option ActiveX Script using FSO? Any thoughts. Thanks, LP... more >> Problem With Scheduled DTS Package Posted by DeltaBankUser at 1/31/2005 10:18:04 AM SQL Server Agent reports the following error when attempting to execute a scheduled DTS pacakge: --------------------------------------------------------------------------------------- Step Error Source: Microsoft Data Transformation Services (DTS) Package Step Error Description:Error Code: 0 ... more >> How to link table from another DB Posted by Mike at 1/31/2005 10:16:29 AM Hi all, I have a DB on SQL Server and I need to create a table that is a link to a table inside another MsAccess DB. How can I do this? I'd reply the same behaviour like I do in MsAccess when I run the command "Link table". Any help is appreciated. Regard, Mike.... more >> Adding to a DTS PAckage Posted by jaylou at 1/31/2005 7:35:02 AM Hi All, I have a DTS Packag that copies tables from one server to another. Is there a way to add a new table to this package? I have not been able to figure this out with the help of BOL. TIA, Joe... more >> Newbee DTS Question Posted by jj at 1/30/2005 3:53:44 PM Hello, I have 3 sql servers running. lets call them sql server 1,2 and 3 all 3 servers have a table with the following columns ID, Location, transaction id. server 1 and 2 collect data and populate thier tables....i would like server 3 to get the data from the tables on server 1 and... more >> Don't see what you're looking for? Search DevelopmentNow.com. Hyperion Essbase Posted by John Baker at 1/29/2005 7:27:38 PM Can DTS load data into Hyperion's Essbase? I know I can output a text file and then call a .bat file that uses an Essbase load command, but I was wondering if DTS supported a direct connection. -- To Email Me, ROT13 My Shown Email Address ... more >> SQL table only accepts 1 Excel data format Posted by ChrisR at 1/28/2005 4:15:14 PM I'll be the first to admit, the title for the post isn't the greatest but didn't know what else to write. Anyways... Im importing data from Excel into SQL. My StoreNbr coulmn can be in General, Text, Number, etc... take your pick, the problem still persists. Sheet1: StoreNbr 1234 96AB... 
more >> Copying the DTS Packages Posted by siaj at 1/28/2005 2:37:09 PM Hello, Can any one tell me how to copy the DTS Packages from One server to Other Server. Both of My server are at on different machines. Thanks in advance, siaj ... more >> Set global with count of rows transformed Posted by JRStern at 1/28/2005 2:16:36 PM Apparently DTS already keeps count of the rows it processes. I'll just bet there's some way of putting in a post-source or pump-complete handler that accesses that number, rather than keep my own count by incrementing a global on each row. I've only just been introduced to the mystery of the ... more >> Limitation to a field length when exporting via the DTS Wizard to Posted by DTSNovice at 1/28/2005 1:17:04 PM I am using the DTS wizard to export data from a table to a CSV. I have a varchar(7000) field that is getting truncated around 255 charachters. Is there anyway to work around this truncation, so I am not losing information from this field when exporting to CSV?... more >> Database Performance Posted by Marco Pais at 1/28/2005 12:27:08 PM Hello there, I am experiencing performance problems with an aplication running over a SQL Server 2000 Database. The server that holds SQL Server has 2.8 GB of RAM memory. When there are just on or two users working, the aplication performance is suportable, but when more users are using the... more >> Oracel Linked server Problem Posted by Nik at 1/28/2005 11:46:49 AM Hi Gurus We are trying to use linked server to oracle. A simple insert of 100 rows is taking 3 minutes, If i do bcp out and sql loader in to oracle it is just few seconds. Is there away to find out what is wrong. How can i use trace, or profiler to check what is happning on the oracle end. we... more >> import data from text file every 30 minutes Posted by pelican at 1/28/2005 11:31:08 AM I work for an environmental agency. I need to import data from a text file every 30 minutes. This text file contains weather info which is collected every minutes by an instrument in the Gulf. So every minutes a line of data is added to the text file. Then every 30 minutes, I need to read ... more >> Add a date to the export of a flat file with Dynamic porperties Posted by LP at 1/28/2005 11:19:02 AM Can I add the date to the file name when I export SQL data using a Dynamic Properties task. LP... more >> In your opinion: what's the best book on DTS? Posted by Darwin Fisk at 1/28/2005 10:07:29 AM I need to build an extensive DTS package and my experience is with simple DTS techniques only. I like books that start with simple but include advanced. What would you reccomend? Thanks, Darwin ... more >> Serial or Parallel? Posted by Joe at 1/28/2005 4:11:09 AM Hi Hopefully just a quickie. When just importing tables and their data, is it better to use seperate source & destination connections for each table, to run the process in parallel, or to use a "Copy SQL Objects Task" to include all tables? With the later, I'm thinking that the imports ... more >> dtsrunui is missing in MSDE2000a Posted by Alexander Baumgart via SQLMonster.com at 1/27/2005 4:53:20 PM Hello, i new to msde and ms sql. i need to migrate a access db to msde/sql. The access 2000 upsizing manager crash with a overflow error for me. M$ suggest to use dtsrunui , but msde2000a is missing dtsrunui, dtsrun and all dlls are there, but there gui is missing. Is that true ? Can i get the gu... 
more >> Export any SQL/SP to Excel Posted by Eric at 1/27/2005 4:41:04 PM Hi, I need the ability to export any sql statement or stored procedure (with parameters) to an excel file at will. Anyone know how to put that into a DTS package, or know of a premade one available somewhere?... more >> Posted by Edward Letendre at 1/27/2005 2:47:02 PM Okay, I have 4 tables that I need to load data into. I was thinking of using DTS to do the data load. The 4 tables already exist with old data. Some of the data in the text files is the same as the data in the tables. I have tried to disable to contraints on the tables and truncate the dat... more >> Edit DTS package crashes MMC Posted by Jim Brown at 1/27/2005 2:37:07 PM I used the export wizard to create a package to export a table to a CSV text file. When the wizard is done the package runs OK and creates my CSV file. However you can not edit the resulting package. When trying to edit the Transform Data Task the “Verifying Transformations†dialog pops u... more >> crazy question.... Posted by John316 at 1/27/2005 2:23:34 PM My ActiveX Script Task Properties dialog box has the "functions" window maximized and I can no longer get at my code. I can not find any way to resize it. Please .... If anyone has come across this and can help... it will be much appreciated. I've never had this happen before. Thanks ... more >> Retrive SQL Statement Posted by Ed at 1/27/2005 1:43:06 PM Hi, I am trying to retrive the info (SQL Statement) from a data pump task. But i got an error It said "Object doesn't support this property or method" Am i MISSING something Dim tsk Dim resul Set tsk = DTSGlobalVariables.parent Set resul = tsk.tasks("DTSTask_DTSDataPumpTask_1") msgbox ... more >> package won't stop on failure Posted by Dan D. at 1/27/2005 1:17:11 PM Using SS2000 SP3a. I have a step in a package where I check for the existence of a file. If the file is not there, I want the package to stop but it doesn't. I've set "fail package on first error in package properties. The code in the step is this: Function Main() Set fso=createobject("scri... more >> schedule new verion of DTS Package Posted by ChrisR at 1/27/2005 12:12:36 PM I have several DTS Packages that are scheduled to run every night. If I change the Packages, which version will be run the following night? If its the old one, how to I update the schedule to recognize the new version? -- SQL2K SP3 TIA, ChrisR ... more >> DTS step was executed, but no work done Posted by Subbaiahd at 1/27/2005 11:25:43 AM I am going crazy over a load job, I am loading few tables from one sql server to another sql server, for that i created a DTS package, it was running fine until recently, all of a sudden it started giving me problems. It was scheduled to run daily. The package logging shows every sub modu... more >> Export based on a sp Posted by Ed at 1/27/2005 11:05:01 AM Hi, I have a stored procedure that generates a table variable. How am I able to export the result of that table variable to an excel file. I don't see any function in Transformation Task that allows to run any stored procedure. All I can choose from Transformation Task is a table or a vi... more >> Write to msdb.sysjobhistory.messages Posted by Don Reith at 1/27/2005 10:56:40 AM When you execute a package from SQL Server Agent there is some run information that gets written to msdb.sysjobhistory.messages. Is there a way to use a script action to append some text to that information? 
This seems like an obvious thing to do to me but I'm banging my head against the wal... more >> Select Count(*) from DTS Posted by Ron at 1/27/2005 10:04:58 AM I need to run 5 different Select Count(*)......... Statement from the DTS package and get the result out to the text file. What is the best way to do this ? (It has to be a DTS package and not any other method Jobs, SP e.t.c.) Thanks.... more >> DTS from ASP Posted by Bish at 1/27/2005 8:41:04 AM I wish to execute a DTS package from an ASP. The package was written in SQL enterprise mangaer then exported to a VB file. The reason for this is that the source file and it's path can and will change. I therefore allow the user to specify the file name and path and dynamically change the v... more >> dts packge path execution,?? Posted by re database structure at 1/27/2005 4:39:06 AM Hello evryone, I want to load access files in sql 2000 server with a dts package. the acce4ss files are in different directory's for example d:\Models\wk1\data.mdb d:\Models\wk2\data.mdb d:\Models\wk3\data.mdb etc.. Is it possible to use a script that wil change the dts package so i can ... more >> Input Parameters in Tansform Data Tsk Posted by Bronz Fonz at 1/26/2005 11:01:02 PM Hi, I currently use Transform Data Task to load data into a tbl. The source data is a select stmt. I'd like to make the db names used in the select stmt dynamic/varaible. I've unsuccessully tried using input params in the sql stmt (the source tab). Also considered using an SP for the select s... more >> SQL Statement to retrive data from MS SQL and write into MS Access Posted by chuayl at 1/26/2005 10:43:01 PM im working on Server-client environment where i need to query data from MS SQL to be inserted into MS access database whenever user prints report. would like to know on the SQL Statement syntax on how to do that . insert into <ms Access tbl> select ...... from <MSSQLTbl> ... Thank you ... more >> Help in solving queries....... Posted by Patrick at 1/26/2005 4:27:10 PM Hi Group I want to divide one column by another column and stored that value into third column. Example : Column A, Column B, Column C I want to divide Column B by Column C and stored that value within Column A Basically Column A is % Column B < Column C ( always ) When I use divi... more >> Execute Process Task Properties Posted by Sam123 at 1/26/2005 3:51:02 PM I have a DTS package that contains a "Execute Process Task Properties". the task executes a Vb.Net exe file. I am trying to pass a value from the DTS package to the vb.net exe file/app. However I do not want the console to open and wait for my input. When I enter a value in the parameter sec... more >> How can i import a webpage into SQL Server? Posted by Erich at 1/26/2005 2:31:02 PM Hi, I need some help on creating a process where i need to save webpages as a text file and load it into sql server. Can this be done using DTS? Thanks!... more >> Execute Package From Stored Procedure Posted by Linda at 1/26/2005 1:33:06 PM Dear All, I've created some packages to transfer data from text files to SQL Server tables. I need to do this whenever I receive new data. Can I execute those packages from a stored procedure so that I don't have to run them one by one manually? Thank you. Linda... more >> ANSI to Unicode(MSSQL) convertions Posted by Aras Kucinskas at 1/26/2005 12:12:08 PM Hi, How to convert and transfer ANSI string data from FoxPro table to MS SQL Unicode table. Data in FoxPro are in 1251 codepage (Russian Windows). 
Now result is like "ÊÎËÜÖÀ ÊÎÌÏÐÅÑÑÎÐÀ 100ìì ".The ODBC driver does not performs the conversion from ANSI to Unicode. Help. ... more >> Foreign Key Conflict Posted by Emma at 1/26/2005 11:19:01 AM I am importing data from one database to another using DTS. During the importation process, I get the following error message: INSERT statement conflicted with COLUMN FOREIGH KEY constraint. “FK_foot_Genâ€. The conflict occurred in database ‘Sales’, table ‘Gen’, column ‘ID’ ... more >> convert dts saved as .bas file back to sql server package Posted by Rea Peleg at 1/26/2005 10:19:15 AM Hi all Is it possible to convert a dts package saved as .bas file back to a sql server package so it can be opened, designed,saved and used again in sql server? Thanks alot Rea ... more >> Space Posted by anonymous at 1/26/2005 8:12:29 AM I have a fields with the space char in data. How to find a space character in query? 'Smith David' ... more >> Audit DTS rows imported? Posted by Yong at 1/26/2005 4:17:01 AM I have a simple DTS package which imports rows from a text file into a sql database table. Connection 1 -> Connection 2 The DTS is just a tab delimited text file (Connection 1) of data going into a SQL table (Connection 2) with no conversions. How do I log the rows imported from the tex... more >> Downloading a file using HTTP Posted by Rudi Groenewald at 1/25/2005 4:35:01 PM Hi all, I am currently still looking for a solution to my problem, anyone have any ideas? This is how far I came: Hi Darren, Well, I am testing by just double clicking in windows, but when the script does what I want it to, it'll be running in a DTS task on sql server 2000. My sc... more >> Data transformation and deletion of the transfered Data Posted by Bart at 1/25/2005 2:42:12 PM I have 2 tables. At a timed interval I have "transform Data task" setup to copy the data from one table to the other. I want to delete the transfered data from the first one once the transfer is complete. I can create the SQL task to do the delate. However, I was think, what if there is data in... more >> DTS FTP & Scheduling Posted by Sal at 1/25/2005 1:38:06 PM Hello All, I created a simple DTS package which uses the FTP task in Enterprise Manager. I want to get a file off of the SQL Server D:\ Drive and copy it to a network server , the F:\ Drive. When I manually execute the job (right click execute package) it runs perfectly. When I try to sched... more >> Import truncating fields at 255 characters Posted by Justin Patch at 1/25/2005 1:23:06 PM I have a very simple DTS process that copies five columns from an Excel spreadsheet into a SQL table. One of the fields is product description that tends to be about 350-500 characters long. When I run the DTS, SQL Server truncates the description field at 255 characters? Does anyone know w... more >> Problem for DTS Import from OLAP DataSource Posted by Rodrigo at 1/25/2005 6:25:04 AM Hi everybody: I need help... I get the next error when i try to import data (using DTS Wizard) from a OLAP Cube, that have a "custom rollup" dimension: "CONTEX: ERROR CALLING OPENROWSET ON THE PROVIDER" I am using SQL Server 2000 and "Microsoft OLE DB Provider for OLAP Services Ser... more >> DTSFTPTask Posted by codputer at 1/24/2005 7:01:01 PM I have created a DTS Packaget that successfully transfers files from an FTP - sometimes. If I execute each step manually, the FTP transfer succeeeds. When I execute the package, sometimes the FTP package succeeds, and sometimes it does not. 
I checked all the parameters that I am dynamical... more >>
http://www.developmentnow.com/g/103_2005_1_0_0_0/sql-server-dts.htm
crawl-001
refinedweb
3,396
71.65
Getting started with functional programming in Python using the toolz library

The toolz library allows you to manipulate functions, making it easier to understand and test code.

2 Comments

Hi! I think

def add_one_word(words, word):
    return words.set(words.get(word, 0) + 1)

should be

def add_one_word(words, word):
    return words.set(word, words.get(word, 0) + 1)

Thanks

Interesting article, thank you! ... why not a next step approaching functional data flow? It would be great! You have introduced the building block, now some "thredo" (for example) and ... the flow can start :-)
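For readers who want a taste of the library the comments are discussing, here is a small self-contained sketch (added here, not code from the article) built around two toolz functions, compose and frequencies:

from toolz import compose, frequencies

def tokenize(text):
    # Plain function: lowercase the text and split it into words.
    return text.lower().split()

# compose applies right-to-left: tokenize first, then count word frequencies.
word_counts = compose(frequencies, tokenize)

print(word_counts("The quick brown fox jumps over the lazy dog and the cat"))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 1, ...}

Because each piece is an ordinary function, the pipeline is easy to test in isolation, which is exactly the point the article's subtitle makes.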
https://opensource.com/article/18/10/functional-programming-python-toolz
CC-MAIN-2020-29
refinedweb
114
60.92
<?php
function randomText($length) {
    $pattern = "1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
    for($i = 0; $i < $length; $i++) {
        $key .= $pattern{rand(0, 62)};
    }
    return $key;
}
$clave = randomText(5);
echo $clave;
?>

Hello, I have the following:

@<div>
    Commands:
    @Html.ActionLink("Edit", MVC.User.Edit(item.Id))
    @Html.ActionLink("Delete", MVC.User.Delete(item.Id))
</div>

It is working fine, but I want to remove the div from the code. The moment I remove it I get an error. I tried various options but I keep having errors.

I need the display="none" equivalent code in C#, because if I make Visible = false then I can't access the control from JavaScript. So, is there any other way to solve that problem?

Hello everyone, and thanks for your help in advance. I am trying to convert a code block into VB.Net from C# but can't get it right. Part of the problem is the use of static methods, which I can't seem to find the equivalent of. Here is the code:

using System;
using System.Collections.Generic;
using System.Text;
using System.Collections.Specialized;
using System.Web;

namespace RewriteModule
{
    public class RewriteContext
    {
        // returns actual RewriteContext instance for
        // current request
        public static RewriteContext Current
        {
            get
            {
                // Look for RewriteContext instance in
                // current HttpContext. If there is no RewriteContextInfo
                // item then this means t

The code in C# is as follows:

private void comboBox1_SelectedIndexChanged(object sender, EventArgs e)
{
    //filling items in combobox1
    if (comboBox1.SelectedIndex==0)
    {
        comboBox2.Items.Clear();
        for (char i ='A' ; i < 'D'; i++)
            comboBox2.Items.Add(i);
    }
    if (comboBox1.SelectedIndex == 1 || comboBox1.SelectedIndex == 2 || comboBox1.SelectedIndex == 3)
        comboBox2.Items.Clear();
        for (char i = 'A'; i < 'E';
http://www.dotnetspark.com/links/61835-what-is-equivalent-this-razor-php-code.aspx
CC-MAIN-2017-13
refinedweb
269
53.07